WorldWideScience

Sample records for source observation database

  1. Mobile Source Observation Database (MSOD)

    Science.gov (United States)

    The Mobile Source Observation Database (MSOD) is a relational database developed by the Assessment and Standards Division (ASD) of the U.S. EPA Office of Transportation and Air Quality (formerly the Office of Mobile Sources).

  2. Mobile Source Observation Database (MSOD)

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Mobile Source Observation Database (MSOD) is a relational database being developed by the Assessment and Standards Division (ASD) of the US Environmental...

  3. Source attribution using FLEXPART and carbon monoxide emission inventories for the IAGOS In-situ Observation database

    Science.gov (United States)

    Fontaine, Alain; Sauvage, Bastien; Pétetin, Hervé; Auby, Antoine; Boulanger, Damien; Thouret, Valerie

    2016-04-01

    Since 1994, the IAGOS program (In-Service Aircraft for a Global Observing System, http://www.iagos.org) and its predecessor MOZAIC have produced in-situ measurements of the atmospheric composition during more than 46,000 commercial aircraft flights. To help analyze these observations and further understand the processes driving their evolution, we developed a modelling tool, SOFT-IO, quantifying their source/receptor link. We improved the methodology used by Stohl et al. (2003), based on the FLEXPART plume dispersion model, to simulate the contributions of anthropogenic and biomass burning emissions from the ECCAD database (http://eccad.aeris-data.fr) to the measured carbon monoxide mixing ratio along each IAGOS flight. Thanks to automated processes, contributions are simulated for the last 20 days before each observation, separating the individual contributions of the different source regions. The main goal is to supply added-value products to the IAGOS database showing the geographical origin and emission type of pollutants. Using this information, it may be possible to link trends in the atmospheric composition to changes in the transport pathways and to the evolution of emissions. This tool could be used for statistical validation as well as for inter-comparisons of emission inventories using large amounts of data, as Lagrangian models are able to bring the global-scale emissions down to a smaller scale, where they can be directly compared to the in-situ observations from the IAGOS database.

  4. The NASA Goddard Group's Source Monitoring Database and Program

    Science.gov (United States)

    Gipson, John; Le Bail, Karine; Ma, Chopo

    2014-12-01

    Beginning in 2003, the Goddard VLBI group developed a program to purposefully monitor when sources were observed and to increase the observations of "under-observed" sources. The heart of the program consists of a MySQL database that keeps track of, on a session-by-session basis: the number of observations that are scheduled for a source, the number of observations that are successfully correlated, and the number of observations that are used in a session. In addition, there is a table that contains the target number of successful sessions over the last twelve months. Initially this table just contained two categories. Sources in the geodetic catalog had a target of 12 sessions/year; the remaining ICRF-1 defining sources had a target of two sessions/year. All other sources did not have a specific target. As the program evolved, different kinds of sources with different observing targets were added. During the scheduling process, the scheduler has the option of automatically selecting N sources which have not met their target. We discuss the history and present some results of this successful program.
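    The "select N sources which have not met their target" step described above can be sketched in a few lines of Python; the source names, targets, and session counts below are invented for illustration and are not taken from Goddard's actual MySQL tables.

```python
# Successful sessions per source over the last twelve months (illustrative).
sessions_last_year = {"0059+581": 14, "0552+398": 11, "1741-038": 5, "3C418": 1}

# Per-source yearly targets; sources absent from this table have no target.
targets = {"0059+581": 12, "0552+398": 12, "1741-038": 2, "3C418": 2}

def under_observed(n):
    """Return up to n sources that have not met their yearly target,
    most deficient first."""
    deficits = []
    for source, target in targets.items():
        observed = sessions_last_year.get(source, 0)
        if observed < target:
            deficits.append((target - observed, source))
    deficits.sort(reverse=True)
    return [source for _, source in deficits[:n]]
```

    A scheduler could call `under_observed(n)` once per session to obtain candidates for extra observations.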

  5. Free software and open source databases

    Directory of Open Access Journals (Sweden)

    Napoleon Alexandru SIRITEANU

    2006-01-01

    The emergence of free/open source software (FS/OSS) enterprises seeks to push software development out of the academic stream into the commercial mainstream, and as a result, end-user applications such as open source database management systems (PostgreSQL, MySQL, Firebird) are becoming more popular. Companies like Sybase, Oracle, Sun, and IBM are increasingly implementing open source strategies and porting programs/applications to the Linux environment. Open source software is redefining the software industry in general and database development in particular.

  6. Cross-Matching Source Observations from the Palomar Transient Factory (PTF)

    Science.gov (United States)

    Laher, Russ; Grillmair, C.; Surace, J.; Monkewitz, S.; Jackson, E.

    2009-01-01

    Over the four-year lifetime of the PTF project, approximately 40 billion instances of astronomical-source observations will be extracted from the image data. The instances will correspond to the same astronomical objects being observed at roughly 25-50 different times, and so a very large catalog containing important object-variability information will be the chief PTF product. Organizing astronomical-source catalogs is conventionally done by dividing the catalog into declination zones and sorting by right ascension within each zone (e.g., the USNO-A star catalog), in order to facilitate catalog searches. This method was reincarnated as the "zones" algorithm in a SQL-Server database implementation (Szalay et al., MSR-TR-2004-32), with corrections given by Gray et al. (MSR-TR-2006-52). The primary advantage of this implementation is that all of the work is done entirely on the database server and client/server communication is eliminated. We implemented the methods outlined in Gray et al. for a PostgreSQL database, programming them as database functions in the PL/pgSQL procedural language. The cross-matching is currently based on source positions, but we intend to extend it to use both positions and positional uncertainties to form a chi-square statistic for optimal thresholding. The database design includes three main tables, plus a handful of internal tables. The Sources table stores the SExtractor source extractions taken at various times; the MergedSources table stores statistics about the astronomical objects, which are the result of cross-matching records in the Sources table; and the Merges table associates cross-matched primary keys in the Sources table with primary keys in the MergedSources table. Besides judicious database indexing, we have also internally partitioned the Sources table by declination zone, in order to speed up the population of Sources records and make the database more manageable. The catalog will be accessible to the public.
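    The zones idea described above can also be sketched outside the database. The Python sketch below (with an illustrative zone height and match radius) groups sources into declination stripes and searches only the stripes a match circle can touch, mirroring in miniature what the PL/pgSQL functions do server-side.

```python
import math

ZONE_HEIGHT_DEG = 0.5  # each zone is a declination stripe of this height

def zone_of(dec_deg):
    """Zone index of a declination, counting stripes up from -90 degrees."""
    return int(math.floor((dec_deg + 90.0) / ZONE_HEIGHT_DEG))

def build_zone_index(sources):
    """sources: list of (id, ra_deg, dec_deg) tuples, grouped by zone."""
    index = {}
    for src in sources:
        index.setdefault(zone_of(src[2]), []).append(src)
    return index

def angular_sep_deg(ra1, dec1, ra2, dec2):
    # Spherical law of cosines; adequate for a sketch, not precision astrometry.
    r1, d1, r2, d2 = map(math.radians, (ra1, dec1, ra2, dec2))
    cos_sep = (math.sin(d1) * math.sin(d2)
               + math.cos(d1) * math.cos(d2) * math.cos(r1 - r2))
    return math.degrees(math.acos(min(1.0, max(-1.0, cos_sep))))

def cross_match(ra, dec, index, radius_deg=1.0 / 3600):
    """Scan only the zones a radius_deg circle around (ra, dec) can touch."""
    matches = []
    for z in range(zone_of(dec - radius_deg), zone_of(dec + radius_deg) + 1):
        for sid, sra, sdec in index.get(z, []):
            if angular_sep_deg(ra, dec, sra, sdec) <= radius_deg:
                matches.append(sid)
    return matches
```

    The point of the zone restriction is that each look-up touches a constant number of stripes instead of the whole catalog.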

  7. Power source roadmaps using bibliometrics and database tomography

    International Nuclear Information System (INIS)

    Kostoff, R.N.; Tshiteya, R.; Pfeil, K.M.; Humenik, J.A.; Karypis, G.

    2005-01-01

    Database Tomography (DT) is a textual database analysis system consisting of two major components: (1) algorithms for extracting multi-word phrase frequencies and phrase proximities (physical closeness of the multi-word technical phrases) from any type of large textual database, to augment (2) interpretative capabilities of the expert human analyst. DT was used to derive technical intelligence from a Power Sources database derived from the Science Citation Index. Phrase frequency analysis by the technical domain experts provided the pervasive technical themes of the Power Sources database, and the phrase proximity analysis provided the relationships among the pervasive technical themes. Bibliometric analysis of the Power Sources literature supplemented the DT results with author/journal/institution/country publication and citation data
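    The two DT components named above, multi-word phrase frequency and phrase proximity, can be illustrated with a toy sketch. Real DT operates on large abstract databases; the bigram-based, per-document co-occurrence measure here is a crude stand-in for the physical-closeness proximity analysis.

```python
from collections import Counter
from itertools import combinations

def bigrams(text):
    """Two-word phrases in a document, lowercased."""
    words = text.lower().split()
    return [" ".join(pair) for pair in zip(words, words[1:])]

def phrase_frequencies(corpus):
    """Count every two-word phrase across all documents."""
    counts = Counter()
    for doc in corpus:
        counts.update(bigrams(doc))
    return counts

def phrase_proximity(corpus, phrases):
    """Count how often two target phrases co-occur in the same document,
    a stand-in for relating the pervasive technical themes."""
    co = Counter()
    for doc in corpus:
        present = {p for p in phrases if p in bigrams(doc)}
        for a, b in combinations(sorted(present), 2):
            co[(a, b)] += 1
    return co
```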

  8. Moving Observer Support for Databases

    DEFF Research Database (Denmark)

    Bukauskas, Linas

    Interactive visual data explorations impose rigid requirements on database and visualization systems. Systems that visualize huge amounts of data tend to request large amounts of memory resources and heavily use the CPU to process and visualize data. Current systems employ a loosely coupled...... architecture to exchange data between database and visualization. Thus, the interaction of the visualizer and the database is kept to the minimum, which most often leads to superfluous data being passed from database to visualizer. This Ph.D. thesis presents a novel tight coupling of database and visualizer....... The thesis discusses the VR-tree, an extension of the R-tree that enables observer relative data extraction. To support incremental observer position relative data extraction the thesis proposes the Volatile Access Structure (VAST). VAST is a main memory structure that caches nodes of the VR-tree. VAST...

  9. DEIMOS – an Open Source Image Database

    Directory of Open Access Journals (Sweden)

    M. Blazek

    2011-12-01

    The DEIMOS (DatabasE of Images: Open Source) is created as an open-source database of images and videos for the testing, verification and comparison of various image and/or video processing techniques such as enhancement, compression and reconstruction. The main advantage of DEIMOS is its orientation to various application fields: multimedia, television, security, assistive technology, biomedicine, astronomy etc. DEIMOS is being created gradually, step by step, based upon the contributions of team members. The paper describes the basic parameters of the DEIMOS database, including application examples.

  10. Open Source Vulnerability Database Project

    Directory of Open Access Journals (Sweden)

    Jake Kouns

    2008-06-01

    This article introduces the Open Source Vulnerability Database (OSVDB) project, which manages a global collection of computer security vulnerabilities, available for free use by the information security community. This collection contains information on known security weaknesses in operating systems, software products, protocols, hardware devices, and other infrastructure elements of information technology. The OSVDB project is intended to be the centralized global open source vulnerability collection on the Internet.

  11. PostGIS-Based Heterogeneous Sensor Database Framework for the Sensor Observation Service

    Directory of Open Access Journals (Sweden)

    Ikechukwu Maduako

    2012-10-01

    Environmental monitoring and management systems in most cases deal with models and spatial analytics that involve the integration of in-situ and remote sensor observations. In-situ sensor observations and those gathered by remote sensors are usually provided by different databases and services in real-time dynamic services such as Geo-Web Services. Thus, data have to be pulled from different databases and transferred over the network before they are fused and processed on the service middleware. This places massive and unnecessary communication and processing load on the service: large rasters are downloaded from flat-file raster data sources each time a request is made, and heavy integration and geo-processing work is carried out on the service middleware that could be better leveraged at the database level. In this paper, we propose and present a heterogeneous sensor database framework for the integration, geo-processing and spatial analysis of remote and in-situ sensor observations at the database level, and show how it can be integrated into the Sensor Observation Service (SOS) to reduce communication and workload on geospatial web services, as well as to make query requests from the user end more flexible.

  12. A database of worldwide glacier thickness observations

    DEFF Research Database (Denmark)

    Gärtner-Roer, I.; Naegeli, K.; Huss, M.

    2014-01-01

    One of the grand challenges in glacier research is to assess the total ice volume and its global distribution. Over the past few decades the compilation of a world glacier inventory has been well-advanced both in institutional set-up and in spatial coverage. The inventory is restricted to glacier...... the different estimation approaches. This initial database of glacier and ice caps thickness will hopefully be further enlarged and intensively used for a better understanding of the global glacier ice volume and its distribution....... surface observations. However, although thickness has been observed on many glaciers and ice caps around the globe, it has not yet been published in the shape of a readily available database. Here, we present a standardized database of glacier thickness observations compiled by an extensive literature...... review and from airborne data extracted from NASA's Operation IceBridge. This database contains ice thickness observations from roughly 1100 glaciers and ice caps including 550 glacier-wide estimates and 750,000 point observations. A comparison of these observational ice thicknesses with results from...

  13. A Quality-Control-Oriented Database for a Mesoscale Meteorological Observation Network

    Science.gov (United States)

    Lussana, C.; Ranci, M.; Uboldi, F.

    2012-04-01

    In the operational context of a local weather service, data accessibility and quality related issues must be managed by taking into account a wide set of user needs. This work describes the structure and the operational choices made for the operational implementation of a database system storing data from highly automated observing stations, metadata and information on data quality. Lombardy's environmental protection agency, ARPA Lombardia, manages a highly automated mesoscale meteorological network. A Quality Assurance System (QAS) ensures that reliable observational information is collected and disseminated to the users. The weather unit in ARPA Lombardia, at the same time an important QAS component and an intensive data user, has developed a database specifically aimed at: 1) providing quick access to data for operational activities and 2) ensuring data quality for real-time applications, by means of an Automatic Data Quality Control (ADQC) procedure. Quantities stored in the archive include hourly aggregated observations of: precipitation amount, temperature, wind, relative humidity, pressure, global and net solar radiation. The ADQC performs several independent tests on raw data and compares their results in a decision-making procedure. An important ADQC component is the Spatial Consistency Test based on Optimal Interpolation. Interpolated and Cross-Validation analysis values are also stored in the database, providing further information to human operators and useful estimates in case of missing data. The technical solution adopted is based on a LAMP (Linux, Apache, MySQL and PHP) system, constituting an open source environment suitable for both development and operational practice. The ADQC procedure itself is performed by R scripts directly interacting with the MySQL database. Users and network managers can access the database by using a set of web-based PHP applications.
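    The ADQC described above combines several independent tests in a decision-making procedure. A minimal sketch of two such checks on an hourly temperature value follows; the thresholds are invented, and a plain neighbour mean stands in for the Optimal Interpolation analysis used by the real Spatial Consistency Test.

```python
def range_check(value, lo=-40.0, hi=50.0):
    """Gross-error check: flag values outside physically plausible bounds."""
    return lo <= value <= hi

def spatial_consistency(value, neighbor_values, max_dev=5.0):
    """Crude stand-in for the Optimal Interpolation based test: compare the
    observation against a simple mean of its neighbours."""
    if not neighbor_values:
        return True  # no neighbours available, test not applicable
    analysis = sum(neighbor_values) / len(neighbor_values)
    return abs(value - analysis) <= max_dev

def quality_flag(value, neighbor_values):
    """Combine the independent tests into a single decision."""
    if not range_check(value):
        return "fail_range"
    if not spatial_consistency(value, neighbor_values):
        return "fail_spatial"
    return "ok"
```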

  14. Observer and At Sea Monitor Database (OBDBS)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Northeast Fisheries Observer Database System (OBDBS) contains data collected on commercial fishing vessels by observers from 1989 - present and at-sea monitors...

  15. Implementation of a database for the management of radioactive sources

    International Nuclear Information System (INIS)

    MOHAMAD, M.

    2012-01-01

    In Madagascar, the application of nuclear technology continues to develop. In order to protect human health and the environment against the harmful effects of ionizing radiation, each user of radioactive sources has to implement a nuclear safety and security programme and to declare their sources to the Regulatory Authority. This Authority must have access to all the information relating to the sources and their uses. This work is based on the development of a software tool using Python as the programming language and SQLite as the database. It makes it possible to computerize radioactive source management. This application unifies the various existing databases and centralizes the activities of radioactive source management. The objective is to follow the movement of each source in Malagasy territory in order to avoid the risks related to the use of radioactive sources and illicit trafficking.
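    The abstract states that the tool was built with Python and SQLite; the sketch below shows what a minimal source registry along those lines might look like. The schema, table names, and fields are guesses for illustration and are not the actual Malagasy system.

```python
import sqlite3

def create_registry(conn):
    # Guessed minimal schema: one row per declared source, plus a movement log.
    conn.execute("""
        CREATE TABLE IF NOT EXISTS sources (
            id INTEGER PRIMARY KEY,
            isotope TEXT NOT NULL,          -- e.g. 'Co-60'
            activity_gbq REAL NOT NULL,     -- activity at reference date
            licensee TEXT NOT NULL,
            location TEXT NOT NULL          -- current declared location
        )""")
    conn.execute("""
        CREATE TABLE IF NOT EXISTS movements (
            source_id INTEGER REFERENCES sources(id),
            moved_on TEXT NOT NULL,         -- ISO date
            destination TEXT NOT NULL
        )""")

def move_source(conn, source_id, moved_on, destination):
    """Record a movement and keep the current location in sync, so the
    regulator can follow each source across the territory."""
    conn.execute("INSERT INTO movements VALUES (?, ?, ?)",
                 (source_id, moved_on, destination))
    conn.execute("UPDATE sources SET location = ? WHERE id = ?",
                 (destination, source_id))

conn = sqlite3.connect(":memory:")
create_registry(conn)
conn.execute("INSERT INTO sources VALUES (1, 'Co-60', 3.2, 'Clinic A', 'Antananarivo')")
move_source(conn, 1, "2012-05-01", "Toamasina")
```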

  16. Deep Sea Coral National Observation Database, Northeast Region

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The national database of deep sea coral observations. Northeast version 1.0. * This database was developed by the NOAA NOS NCCOS CCMA Biogeography office as part of...

  17. Archives of Astronomical Spectral Observations and Atomic/Molecular Databases for their Analysis

    Directory of Open Access Journals (Sweden)

    Ryabchikova T.

    2015-12-01

    We present a review of open-source data for stellar spectroscopy investigations. It includes lists of the main archives of medium-to-high resolution spectroscopic observations, with brief characteristics of the archive data (spectral range, resolving power, flux units). We also review atomic and molecular databases that contain parameters of spectral lines, cross-sections and reaction rates needed for a detailed analysis of high resolution, high signal-to-noise ratio stellar spectra.

  18. Oak Ridge Reservation Environmental Protection Rad Neshaps Radionuclide Inventory Web Database and Rad Neshaps Source and Dose Database.

    Science.gov (United States)

    Scofield, Patricia A; Smith, Linda L; Johnson, David N

    2017-07-01

    The U.S. Environmental Protection Agency promulgated national emission standards for emissions of radionuclides other than radon from US Department of Energy facilities in Chapter 40 of the Code of Federal Regulations (CFR) 61, Subpart H. This regulatory standard limits the annual effective dose that any member of the public can receive from Department of Energy facilities to 0.1 mSv. As defined in the preamble of the final rule, all of the facilities on the Oak Ridge Reservation, i.e., the Y-12 National Security Complex, Oak Ridge National Laboratory, East Tennessee Technology Park, and any other U.S. Department of Energy operations on Oak Ridge Reservation, combined, must meet the annual dose limit of 0.1 mSv. At Oak Ridge National Laboratory, there are monitored sources and numerous unmonitored sources. To maintain radiological source and inventory information for these unmonitored sources, e.g., laboratory hoods, equipment exhausts, and room exhausts not currently venting to monitored stacks on the Oak Ridge National Laboratory campus, the Environmental Protection Rad NESHAPs Inventory Web Database was developed. This database is updated annually and is used to compile emissions data for the annual Radionuclide National Emission Standards for Hazardous Air Pollutants (Rad NESHAPs) report required by 40 CFR 61.94. It also provides supporting documentation for facility compliance audits. In addition, a Rad NESHAPs source and dose database was developed to import the source and dose summary data from Clean Air Act Assessment Package-1988 computer model files. This database provides the Oak Ridge Reservation and facility-specific source inventory, the doses associated with each source and facility, and the total dose for the Oak Ridge Reservation.

  19. DABAM: an open-source database of X-ray mirrors metrology

    Energy Technology Data Exchange (ETDEWEB)

    Sanchez del Rio, Manuel, E-mail: srio@esrf.eu [ESRF - The European Synchrotron, 71 Avenue des Martyrs, 38000 Grenoble (France)]; Bianchi, Davide [AC2T Research GmbH, Viktor-Kaplan-Strasse 2-C, 2700 Wiener Neustadt (Austria)]; Cocco, Daniele [SLAC National Accelerator Laboratory, 2575 Sand Hill Road, Menlo Park, CA 94025 (United States)]; Glass, Mark [ESRF - The European Synchrotron, 71 Avenue des Martyrs, 38000 Grenoble (France)]; Idir, Mourad [NSLS II, Brookhaven National Laboratory, Upton, NY 11973-5000 (United States)]; Metz, Jim [InSync Inc., 2511C Broadbent Parkway, Albuquerque, NM 87107 (United States)]; Raimondi, Lorenzo; Rebuffi, Luca [Elettra-Sincrotrone Trieste SCpA, Basovizza (TS) (Italy)]; Reininger, Ruben; Shi, Xianbo [Advanced Photon Source, Argonne National Laboratory, Argonne, IL 60439 (United States)]; Siewert, Frank [BESSY II, Helmholtz Zentrum Berlin, Institute for Nanometre Optics and Technology, Albert-Einstein-Strasse 15, 12489 Berlin (Germany)]; Spielmann-Jaeggi, Sibylle [Swiss Light Source at Paul Scherrer Institut, CH-5232 Villigen PSI (Switzerland)]; Takacs, Peter [Instrumentation Division, Brookhaven National Laboratory, Upton, NY 11973-5000 (United States)]; Tomasset, Muriel [Synchrotron Soleil (France)]; Tonnessen, Tom [InSync Inc., 2511C Broadbent Parkway, Albuquerque, NM 87107 (United States)]; Vivo, Amparo [ESRF - The European Synchrotron, 71 Avenue des Martyrs, 38000 Grenoble (France)]; Yashchuk, Valeriy [Advanced Light Source, Lawrence Berkeley National Laboratory, MS 15-R0317, 1 Cyclotron Road, Berkeley, CA 94720-8199 (United States)]

    2016-04-20

    DABAM is an open-source database of X-ray mirror metrology to be used with ray-tracing and wave-propagation codes for simulating the effect of surface errors on the performance of a synchrotron radiation beamline. The database makes available metrology data (mirror height and slope profiles) that can be used with simulation tools for calculating the effects of optical surface errors on the performance of an optical instrument, such as a synchrotron beamline. A typical case is the degradation of the intensity profile at the focal position in a beamline due to mirror surface errors. This database for metrology (DABAM) aims to provide users of simulation tools with the data of real mirrors. The data included in the database are described in this paper, with details of how the mirror parameters are stored. Accompanying software is provided to allow simple access to and processing of these data, to calculate the most common statistical parameters, and to create input files for the most widely used simulation codes. Some optics simulations are presented and discussed to illustrate the real use of the profiles from the database.
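    Two of the usual statistical parameters such accompanying software computes from a stored profile are the RMS height error and the RMS slope error. The sketch below evaluates both on a synthetic sinusoidal profile rather than real metrology data; the helper names are illustrative, not DABAM's actual API.

```python
import math

def rms(values):
    """Root-mean-square of a list of values."""
    return math.sqrt(sum(v * v for v in values) / len(values))

def slopes(x, heights):
    """Finite-difference slope profile from a height profile."""
    return [(heights[i + 1] - heights[i]) / (x[i + 1] - x[i])
            for i in range(len(x) - 1)]

# Synthetic 100 mm long profile with a 5 nm amplitude, 20 mm period
# sinusoidal height error (positions and heights in metres).
n = 1001
x = [i * 1e-4 for i in range(n)]
h = [5e-9 * math.sin(2 * math.pi * xi / 0.02) for xi in x]

height_rms = rms(h)             # expected near 5e-9 / sqrt(2), i.e. ~3.5 nm
slope_rms = rms(slopes(x, h))   # RMS slope error in rad, ~1.1 microrad here
```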

  20. Comparison of open source database systems (characteristics, limits of usage)

    OpenAIRE

    Husárik, Braňko

    2008-01-01

    The goal of this work is to compare selected open source database systems (Ingres, PostgreSQL, Firebird, MySQL). The first part of the work focuses on the history and present situation of the companies developing these products. The second part compares a certain group of specific features and limits. A benchmark of selected operations forms its own part. Possibilities for using the mentioned database systems are summarized at the end of the work.

  1. Fast in-database cross-matching of high-cadence, high-density source lists with an up-to-date sky model

    Science.gov (United States)

    Scheers, B.; Bloemen, S.; Mühleisen, H.; Schellart, P.; van Elteren, A.; Kersten, M.; Groot, P. J.

    2018-04-01

    Coming high-cadence wide-field optical telescopes will image hundreds of thousands of sources per minute. Besides inspecting the near real-time data streams for transient and variability events, the accumulated data archive is a wealthy laboratory for making complementary scientific discoveries. The goal of this work is to optimise column-oriented database techniques to enable the construction of a full-source and light-curve database for large-scale surveys, that is accessible by the astronomical community. We adopted LOFAR's Transients Pipeline as the baseline and modified it to enable the processing of optical images that have much higher source densities. The pipeline adds new source lists to the archive database, while cross-matching them with the known catalogued sources in order to build a full light-curve archive. We investigated several techniques of indexing and partitioning the largest tables, allowing for faster positional source look-ups in the cross-matching algorithms. We monitored all query run times in long-term pipeline runs where we processed a subset of IPHAS data that have image source density peaks over 170,000 per field of view (500,000 deg⁻²). Our analysis demonstrates that horizontal table partitions with declination widths of one degree control the query run times. Usage of an index strategy where the partitions are densely sorted according to source declination yields another improvement. Most queries run in sublinear time and a few (< 20%) run in linear time, because of dependencies on input source-list and result-set size. We observed that for this logical database partitioning schema the limiting cadence achieved by the pipeline when processing IPHAS data is 25 s.
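    The partitioning strategy the analysis favours, one-degree declination stripes kept densely sorted by declination, can be sketched in plain Python; the class and data below are illustrative and are not the pipeline's actual column-store implementation.

```python
import bisect

PARTITION_DEG = 1.0  # one-degree horizontal declination stripes

class PartitionedCatalog:
    def __init__(self):
        self.partitions = {}  # stripe index -> list of (dec, ra, id), sorted

    def _stripe(self, dec):
        return int((dec + 90.0) // PARTITION_DEG)

    def add(self, source_id, ra, dec):
        # Keep each stripe densely sorted by declination, as in the paper's
        # favoured index strategy.
        part = self.partitions.setdefault(self._stripe(dec), [])
        bisect.insort(part, (dec, ra, source_id))

    def nearby_decs(self, dec, radius):
        """All sources whose declination lies within radius of dec: scan only
        the touched stripes and binary-search within each."""
        out = []
        for s in range(self._stripe(dec - radius), self._stripe(dec + radius) + 1):
            part = self.partitions.get(s, [])
            lo = bisect.bisect_left(part, (dec - radius,))
            hi = bisect.bisect_right(part, (dec + radius, float("inf")))
            out.extend(part[lo:hi])
        return out
```

    A full cross-match would then filter these declination candidates by right ascension and angular separation; the stripe plus binary search keeps that candidate set small.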

  2. Healthcare Databases in Thailand and Japan: Potential Sources for Health Technology Assessment Research.

    Directory of Open Access Journals (Sweden)

    Surasak Saokaew

    Health technology assessment (HTA) has been continuously used for value-based healthcare decisions over the last decade. Healthcare databases represent an important source of information for HTA, which has seen a surge in use in Western countries. Although HTA agencies have been established in the Asia-Pacific region, application and understanding of healthcare databases for HTA is rather limited. Thus, we reviewed existing databases to assess their potential for HTA in Thailand, where HTA has been used officially, and Japan, where HTA is going to be officially introduced. Existing healthcare databases in Thailand and Japan were compiled and reviewed. Database characteristics, e.g. name, host, scope/objective, time/sample size, design, data collection method, population/sample, and variables, were described. Databases were assessed for their potential HTA use in terms of safety/efficacy/effectiveness, social/ethical, organization/professional, economic, and epidemiological domains. The request route for each database was also provided. Forty databases (20 from Thailand and 20 from Japan) were included. These comprised national censuses, surveys, registries, administrative data, and claims databases. All databases could potentially be used for epidemiological studies. In addition, data on mortality, morbidity, disability, adverse events, quality of life, service/technology utilization, length of stay, and economics were also found in some databases. However, access to patient-level data was limited since information about the databases was not available from public sources. Our findings show that existing databases provide valuable information for HTA research, with limitations on accessibility. Mutual dialogue on healthcare database development and usage for HTA in the Asia-Pacific region is needed.

  3. Healthcare Databases in Thailand and Japan: Potential Sources for Health Technology Assessment Research.

    Science.gov (United States)

    Saokaew, Surasak; Sugimoto, Takashi; Kamae, Isao; Pratoomsoot, Chayanin; Chaiyakunapruk, Nathorn

    2015-01-01

    Health technology assessment (HTA) has been continuously used for value-based healthcare decisions over the last decade. Healthcare databases represent an important source of information for HTA, which has seen a surge in use in Western countries. Although HTA agencies have been established in the Asia-Pacific region, application and understanding of healthcare databases for HTA is rather limited. Thus, we reviewed existing databases to assess their potential for HTA in Thailand, where HTA has been used officially, and Japan, where HTA is going to be officially introduced. Existing healthcare databases in Thailand and Japan were compiled and reviewed. Database characteristics, e.g. name, host, scope/objective, time/sample size, design, data collection method, population/sample, and variables, were described. Databases were assessed for their potential HTA use in terms of safety/efficacy/effectiveness, social/ethical, organization/professional, economic, and epidemiological domains. The request route for each database was also provided. Forty databases (20 from Thailand and 20 from Japan) were included. These comprised national censuses, surveys, registries, administrative data, and claims databases. All databases could potentially be used for epidemiological studies. In addition, data on mortality, morbidity, disability, adverse events, quality of life, service/technology utilization, length of stay, and economics were also found in some databases. However, access to patient-level data was limited since information about the databases was not available from public sources. Our findings show that existing databases provide valuable information for HTA research, with limitations on accessibility. Mutual dialogue on healthcare database development and usage for HTA in the Asia-Pacific region is needed.

  4. Inferring pregnancy episodes and outcomes within a network of observational databases.

    Directory of Open Access Journals (Sweden)

    Amy Matcho

    Administrative claims and electronic health records are valuable resources for evaluating pharmaceutical effects during pregnancy. However, direct measures of gestational age are generally not available. Establishing a reliable approach to infer the duration and outcome of a pregnancy could improve pharmacovigilance activities. We developed and applied an algorithm to define pregnancy episodes in four observational databases: three US-based claims databases, Truven MarketScan® Commercial Claims and Encounters (CCAE), Truven MarketScan® Multi-state Medicaid (MDCD), and the Optum ClinFormatics® (Optum) database, and one non-US database, the United Kingdom (UK) based Clinical Practice Research Datalink (CPRD). Pregnancy outcomes were classified as live births, stillbirths, abortions and ectopic pregnancies. Start dates were estimated using a derived hierarchy of available pregnancy markers, including records such as last menstrual period and nuchal ultrasound dates. Validation included clinical adjudication of 700 electronic Optum and CPRD pregnancy episode profiles to assess the operating characteristics of the algorithm, and a comparison of the algorithm's Optum pregnancy start estimates to starts based on dates of assisted conception procedures. Distributions of pregnancy outcome types were similar across all four data sources, and pregnancy episode lengths found were as expected for all outcomes, excepting term lengths in episodes that used amenorrhea and urine pregnancy tests for start estimation. Validation survey results found the highest agreement between reviewer-chosen and algorithm operating characteristics for questions assessing pregnancy status and accuracy of outcome category, with 99-100% agreement for Optum and CPRD. Outcome date agreement within seven days in either direction ranged from 95-100%, while start date agreement within seven days in either direction ranged from 90-97%. In Optum validation sensitivity analysis, a total of 73% of
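    The "derived hierarchy of available pregnancy markers" can be illustrated with a small sketch: prefer the most reliable start marker present for an episode, and apply a gestational-age offset to walk back to the estimated start. The marker names, priorities, and offsets below are invented for illustration and are not the published algorithm's values.

```python
from datetime import date, timedelta

# Highest-priority marker first, with an offset from the marker's date back
# to the estimated pregnancy start (hypothetical values).
MARKER_HIERARCHY = [
    ("last_menstrual_period", timedelta(days=0)),
    ("nuchal_ultrasound", timedelta(days=-84)),     # scan at ~12 weeks
    ("urine_pregnancy_test", timedelta(days=-42)),  # test at ~6 weeks
]

def estimate_start(markers):
    """markers: dict mapping marker name -> record date.
    Returns (marker_used, estimated_start_date), or (None, None)."""
    for name, offset in MARKER_HIERARCHY:
        if name in markers:
            return name, markers[name] + offset
    return None, None
```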

  5. Inferring pregnancy episodes and outcomes within a network of observational databases

    Science.gov (United States)

    Ryan, Patrick; Fife, Daniel; Gifkins, Dina; Knoll, Chris; Friedman, Andrew

    2018-01-01

    Administrative claims and electronic health records are valuable resources for evaluating pharmaceutical effects during pregnancy. However, direct measures of gestational age are generally not available. Establishing a reliable approach to infer the duration and outcome of a pregnancy could improve pharmacovigilance activities. We developed and applied an algorithm to define pregnancy episodes in four observational databases: three US-based claims databases: Truven MarketScan® Commercial Claims and Encounters (CCAE), Truven MarketScan® Multi-state Medicaid (MDCD), and the Optum ClinFormatics® (Optum) database and one non-US database, the United Kingdom (UK) based Clinical Practice Research Datalink (CPRD). Pregnancy outcomes were classified as live births, stillbirths, abortions and ectopic pregnancies. Start dates were estimated using a derived hierarchy of available pregnancy markers, including records such as last menstrual period and nuchal ultrasound dates. Validation included clinical adjudication of 700 electronic Optum and CPRD pregnancy episode profiles to assess the operating characteristics of the algorithm, and a comparison of the algorithm’s Optum pregnancy start estimates to starts based on dates of assisted conception procedures. Distributions of pregnancy outcome types were similar across all four data sources and pregnancy episode lengths found were as expected for all outcomes, excepting term lengths in episodes that used amenorrhea and urine pregnancy tests for start estimation. Validation survey results found highest agreement between reviewer chosen and algorithm operating characteristics for questions assessing pregnancy status and accuracy of outcome category with 99–100% agreement for Optum and CPRD. Outcome date agreement within seven days in either direction ranged from 95–100%, while start date agreement within seven days in either direction ranged from 90–97%. In Optum validation sensitivity analysis, a total of 73% of

  6. Relative accuracy and availability of an Irish National Database of dispensed medication as a source of medication history information: observational study and retrospective record analysis.

    LENUS (Irish Health Repository)

    Grimes, T

    2013-01-27

    WHAT IS KNOWN AND OBJECTIVE: The medication reconciliation process begins by identifying which medicines a patient used before presentation to hospital. This is time-consuming, labour-intensive and may involve interrupting clinicians. We sought to identify the availability and accuracy of data held in a national dispensing database, relative to other sources of medication history information. METHODS: For patients admitted to two acute hospitals in Ireland, a Gold Standard Pre-Admission Medication List (GSPAML) was identified and corroborated with the patient or carer. The GSPAML was compared for accuracy and availability with PAMLs from other sources, including the Health Service Executive Primary Care Reimbursement Scheme (HSE-PCRS) dispensing database. RESULTS: Some 1111 medications were assessed for 97 patients, who had a median age of 74 years (range 18-92 years), a median of four co-morbidities (range 1-9) and used a median of 10 medications (range 3-25); half (52%) were male. The HSE-PCRS PAML was the most accurate source compared with lists provided by the general practitioner, community pharmacist or cited in previous hospital documentation: the list agreed for 74% of the medications the patients actually used, representing complete agreement for all medications in 17% of patients. It was as contemporaneous as the other sources, but was less reliable for male than female patients, for those using increasing numbers of medications and for those using one or more items not reimbursable by the HSE. WHAT IS NEW AND CONCLUSION: The HSE-PCRS database is a relatively accurate, available and contemporaneous source of medication history information and could support acute hospital medication reconciliation.

  7. The Protein Identifier Cross-Referencing (PICR) service: reconciling protein identifiers across multiple source databases

    Directory of Open Access Journals (Sweden)

    Leinonen Rasko

    2007-10-01

    Background Each major protein database uses its own conventions when assigning protein identifiers. Resolving the various, potentially unstable, identifiers that refer to identical proteins is a major challenge. This is a common problem when attempting to unify datasets that have been annotated with proteins from multiple data sources, or when querying data providers with one flavour of protein identifier while the source database uses another. Partial solutions for protein identifier mapping exist, but they are limited to specific species or techniques and to a very small number of databases. As a result, we have not found a solution that is generic enough and broad enough in mapping scope to suit our needs. Results We have created the Protein Identifier Cross-Reference (PICR) service, a web application that provides interactive and programmatic (SOAP and REST) access to a mapping algorithm that uses the UniProt Archive (UniParc) as a data warehouse to offer protein cross-references based on 100% sequence identity to proteins from over 70 distinct source databases loaded into UniParc. Mappings can be limited by source database, taxonomic ID and activity status in the source database. Users can copy/paste or upload files containing protein identifiers or sequences in FASTA format to obtain mappings using the interactive interface. Search results can be viewed in simple or detailed HTML tables, or downloaded as comma-separated values (CSV) or Microsoft Excel (XLS) files suitable for use in a local database or a spreadsheet. Alternatively, a SOAP interface is available to integrate PICR functionality in other applications, as is a lightweight REST interface. Conclusion We offer a publicly available service that can interactively map protein identifiers and protein sequences to the majority of commonly used protein databases. Programmatic access is available through a standards-compliant SOAP interface or a lightweight REST interface. The PICR

  8. COPEPOD: The Coastal & Oceanic Plankton Ecology, Production, & Observation Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Coastal & Oceanic Plankton Ecology, Production, & Observation Database (COPEPOD) provides NMFS scientists with quality-controlled, globally distributed...

  9. Summary of Adsorption/Desorption Experiments for the European Database on Indoor Air Pollution Sources in Buildings

    DEFF Research Database (Denmark)

    Kjær, Ulla Dorte; Tirkkonen, T.

    1996-01-01

    Experimental data for adsorption/desorption in building materials. Contribution to the European Database on Indoor Air Pollution Sources in Buildings.

  10. IPCC Fourth Assessment Report (AR4) Observed Climate Change Impacts Database

    Data.gov (United States)

    National Aeronautics and Space Administration — The Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report (AR4) Observed Climate Change Impacts Database contains observed responses to climate...

  11. ZeBase: an open-source relational database for zebrafish laboratories.

    Science.gov (United States)

    Hensley, Monica R; Hassenplug, Eric; McPhail, Rodney; Leung, Yuk Fai

    2012-03-01

    ZeBase is an open-source relational database for zebrafish inventory. It is designed for recording the genetic, breeding, and survival information of fish lines maintained in a single- or multi-laboratory environment. Users can easily access ZeBase through standard web browsers anywhere on a network. Convenient search and reporting functions are available to facilitate routine inventory work; such functions can also be automated by simple scripting. Optional barcode generation and scanning are also built in for easy access to the information related to any fish. Further information about the database and an example implementation can be found at http://zebase.bio.purdue.edu.

  12. Ibmdbpy-spatial : An Open-source implementation of in-database geospatial analytics in Python

    Science.gov (United States)

    Roy, Avipsa; Fouché, Edouard; Rodriguez Morales, Rafael; Moehler, Gregor

    2017-04-01

    As the amount of spatial data acquired from several geodetic sources has grown over the years and as data infrastructure has become more powerful, the need to adopt in-database analytic technology within the geosciences has grown rapidly. In-database analytics on spatial data stored in a traditional enterprise data warehouse enables much faster retrieval and analysis for making better predictions about risks and opportunities, identifying trends and spotting anomalies. Although a number of open-source spatial analysis libraries such as geopandas and shapely are available today, most of them are restricted to the manipulation and analysis of geometric objects, with a dependency on GEOS and similar libraries. We present an open-source software package, written in Python, to fill the gap between spatial analysis and in-database analytics. Ibmdbpy-spatial provides a geospatial extension to the ibmdbpy package, implemented in 2015. It provides an interface for spatial data manipulation and access to in-database algorithms in IBM dashDB, a data warehouse platform with a spatial extender that runs as a service on IBM's cloud platform, Bluemix. Working in-database reduces network overhead: the complete dataset need not be replicated to the user's local system, and only the required subset is fetched into memory at any time. Ibmdbpy-spatial accelerates Python analytics by seamlessly pushing operations written in Python into the underlying database for execution using the dashDB spatial extender, thereby benefiting from in-database performance-enhancing features such as columnar storage and parallel processing. The package is currently supported on Python versions from 2.7 up to 3.4.
The basic architecture of the package consists of three main components - 1) a connection to the dashDB represented by the instance IdaDataBase, which uses a middleware API namely - pypyodbc or jaydebeapi to establish the database connection via
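The pushdown approach this entry describes — translating Python-level operations into SQL that runs inside the database, so that only results travel over the network — can be illustrated with a minimal sketch. The class and method names below (`IdaLite`, `within_bbox`) are invented for illustration and are not the real ibmdbpy-spatial API; the stdlib sqlite3 module stands in for dashDB so the example is self-contained.

```python
import sqlite3

class IdaLite:
    """Toy stand-in for an in-database frame: filters are pushed down as SQL."""

    def __init__(self, conn, table):
        self.conn = conn
        self.table = table

    def within_bbox(self, xmin, ymin, xmax, ymax):
        # The spatial predicate is translated to SQL and evaluated inside
        # the database; only matching rows are fetched into local memory.
        sql = (f"SELECT name, x, y FROM {self.table} "
               "WHERE x BETWEEN ? AND ? AND y BETWEEN ? AND ?")
        return self.conn.execute(sql, (xmin, xmax, ymin, ymax)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sites (name TEXT, x REAL, y REAL)")
conn.executemany("INSERT INTO sites VALUES (?, ?, ?)",
                 [("a", 1.0, 1.0), ("b", 5.0, 5.0), ("c", 2.0, 2.5)])

frame = IdaLite(conn, "sites")
hits = frame.within_bbox(0.0, 0.0, 3.0, 3.0)
print(hits)  # rows "a" and "c" fall inside the bounding box
```

The point of the design is that the filter never materialises the full table on the client side; the database engine does the selection.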

  13. MyMolDB: a micromolecular database solution with open source and free components.

    Science.gov (United States)

    Xia, Bing; Tai, Zheng-Fu; Gu, Yu-Cheng; Li, Bang-Jing; Ding, Li-Sheng; Zhou, Yan

    2011-10-01

    Managing chemical structures is one of the important daily tasks in small laboratories. Few solutions are available on the internet, and most of them are closed-source applications; the open-source applications typically have limited capability and only basic cheminformatics functionality. In this article, we describe an open-source solution for managing chemicals in research groups, based on open-source and free components. It has a user-friendly interface with functions for chemical handling and intensive searching. MyMolDB is a micromolecular database solution that supports exact, substructure, similarity, and combined searching. It is implemented mainly in the scripting language Python, with a web-based interface for compound management and searching. Almost all searches are in essence done with pure SQL on the database, exploiting the high performance of the database engine. Impressive search speed has thus been achieved on large data sets, because no external CPU-consuming languages are involved in the key steps of the search procedure. MyMolDB is open-source software and can be modified and/or redistributed under the GNU General Public License version 3 published by the Free Software Foundation (Free Software Foundation Inc. The GNU General Public License, Version 3, 2007. Available at: http://www.gnu.org/licenses/gpl.html). The software itself can be found at http://code.google.com/p/mymoldb/. Copyright © 2011 Wiley Periodicals, Inc.
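The "pure SQL" search strategy the abstract credits for MyMolDB's speed can be sketched in miniature: exact-structure search reduces to an indexed equality lookup on a precomputed canonical key, so the database engine does all the work. This is a hypothetical toy, not MyMolDB's actual schema, and `canonical_key` here is a crude stand-in for real canonicalisation (e.g. canonical SMILES).

```python
import sqlite3

def canonical_key(smiles: str) -> str:
    # Hypothetical stand-in for real canonicalisation: normalise case and
    # sort the characters so equivalent spellings collide on the same key.
    return "".join(sorted(smiles.upper()))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE compounds (id INTEGER PRIMARY KEY, smiles TEXT, key TEXT)")
conn.execute("CREATE INDEX idx_key ON compounds(key)")  # exact search hits this index
for i, s in enumerate(["CCO", "OCC", "c1ccccc1"]):
    conn.execute("INSERT INTO compounds VALUES (?, ?, ?)", (i, s, canonical_key(s)))

# Exact-structure search is a pure-SQL equality query on the key column.
rows = conn.execute("SELECT id, smiles FROM compounds WHERE key = ?",
                    (canonical_key("CCO"),)).fetchall()
print(rows)  # "CCO" and "OCC" share the same canonical key
```

Substructure and similarity search need fingerprint columns and bitwise comparisons, but the same principle applies: keep the filtering inside SQL.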

  14. A MiniReview of the Use of Hospital-based Databases in Observational Inpatient Studies of Drugs

    DEFF Research Database (Denmark)

    Larsen, Michael Due; Cars, Thomas; Hallas, Jesper

    2013-01-01

    inpatient databases in Asia, the United States and Europe were found. Most databases were automatically collected from claims data or generated from electronic medical records. The contents of the databases varied, as did the potential for linkage with other data sources such as laboratory and outpatient...

  15. Scalable Earth-observation Analytics for Geoscientists: Spacetime Extensions to the Array Database SciDB

    Science.gov (United States)

    Appel, Marius; Lahn, Florian; Pebesma, Edzer; Buytaert, Wouter; Moulds, Simon

    2016-04-01

    Today's amount of freely available data requires scientists to spend large parts of their work on data management. This is especially true in the environmental sciences when working with large remote sensing datasets, such as those obtained from earth-observation satellites like the Sentinel fleet. Many frameworks like SpatialHadoop or Apache Spark address scalability but target programmers rather than data analysts, and are not dedicated to imagery or array data. In this work, we use the open-source data management and analytics system SciDB to bring large earth-observation datasets closer to analysts. Its underlying data representation as multidimensional arrays fits naturally to earth-observation datasets, distributes storage and computational load over multiple instances by multidimensional chunking, and also enables efficient time-series-based analyses, which are usually difficult using file- or tile-based approaches. Existing interfaces to R and Python furthermore allow for scalable analytics with relatively little learning effort. However, interfacing SciDB with file-based earth-observation datasets that come as tiled temporal snapshots requires a lot of manual bookkeeping during ingestion, and SciDB natively only supports loading data from CSV-like and custom binary-formatted files, which currently limits its practical use in earth-observation analytics. To make it easier to work with large multi-temporal datasets in SciDB, we developed software tools that enrich SciDB with earth-observation metadata and allow working with commonly used file formats: (i) the SciDB extension library scidb4geo simplifies working with spatiotemporal arrays by adding relevant metadata to the database, and (ii) the Geospatial Data Abstraction Library (GDAL) driver implementation scidb4gdal allows remote sensing imagery to be ingested from, and exported to, a large number of file formats.
Using added metadata on temporal resolution and coverage, the GDAL driver supports time-based ingestion of

  16. Conversion of National Health Insurance Service-National Sample Cohort (NHIS-NSC) Database into Observational Medical Outcomes Partnership-Common Data Model (OMOP-CDM).

    Science.gov (United States)

    You, Seng Chan; Lee, Seongwon; Cho, Soo-Yeon; Park, Hojun; Jung, Sungjae; Cho, Jaehyeong; Yoon, Dukyong; Park, Rae Woong

    2017-01-01

    It is increasingly necessary to generate medical evidence applicable to Asian people, as compared to those in Western countries. Observational Health Data Sciences and Informatics (OHDSI) is an international collaborative that aims to facilitate generating high-quality evidence by creating and applying open-source data analytic solutions to a large network of health databases across countries. We aimed to incorporate Korean nationwide cohort data into the OHDSI network by converting the national sample cohort into the Observational Medical Outcomes Partnership Common Data Model (OMOP-CDM). The data of 1.13 million subjects were converted to OMOP-CDM, with an average conversion rate of 99.1%. ACHILLES, an open-source OMOP-CDM-based data profiling tool, was run on the converted database to visualize data-driven characterization and assess the quality of the data. The OMOP-CDM version of the National Health Insurance Service-National Sample Cohort (NHIS-NSC) can be a valuable tool for multiple aspects of medical research through incorporation into the OHDSI research network.

  17. Reliability databases: State-of-the-art and perspectives

    DEFF Research Database (Denmark)

    Akhmedjanov, Farit

    2001-01-01

    The report gives a history of development and an overview of the existing reliability databases. This overview also describes some other sources (besides computer databases) of reliability and failure information, e.g. reliability handbooks, but the main attention is paid to standard models and software packages containing the data mentioned. The standards governing the collection and exchange of reliability data are reviewed too. Finally, promising directions for the development of such data sources are shown.

  18. A Study of the Unified Theory of Acceptance and Use of Technology in the Use of Open Source Database Management System Software

    Directory of Open Access Journals (Sweden)

    Michael Sonny

    2016-06-01

    The development of computer software today is extremely rapid, and not only for software under proprietary licenses: open-source software is developing as well. This development is very encouraging for computer users, particularly in education and among students, because users have several choices of application. Open-source software generally offers products that are free, come with their source code, and grant the freedom to modify and extend them. Open-source applications are diverse, covering, for example, programming (PHP, Gambas), database management systems (MySQL, SQLite) and browsing (Mozilla Firefox, Opera). This study examines the acceptance of DBMS (Database Management System) applications such as MySQL and SQLite using UTAUT (Unified Theory of Acceptance and Use of Technology), a model developed by Venkatesh (2003). Certain factors, called moderating factors, also influence the learning of these open-source applications and can affect effectiveness and efficiency. The results should thereby support smoother learning of these open-source applications. Keywords: open source, Database Management System (DBMS), moderating

  19. High-frequency maximum observable shaking map of Italy from fault sources

    KAUST Repository

    Zonno, Gaetano

    2012-03-17

    We present a strategy for obtaining fault-based maximum observable shaking (MOS) maps, which represent an innovative concept for assessing deterministic seismic ground motion at a regional scale. Our approach uses the fault sources supplied for Italy by the Database of Individual Seismogenic Sources, and particularly by its composite seismogenic sources (CSS), a spatially continuous simplified 3-D representation of a fault system. For each CSS, we consider the associated Typical Fault, i.e., the portion of the corresponding CSS that can generate the maximum credible earthquake. We then compute the high-frequency (1-50 Hz) ground shaking for a rupture model derived from its associated maximum credible earthquake. As the Typical Fault floats within its CSS to occupy all possible positions of the rupture, the high-frequency shaking is updated in the area surrounding the fault, and the maximum from that scenario is extracted and displayed on a map. The final high-frequency MOS map of Italy is then obtained by merging 8,859 individual scenario-simulations, from which the ground shaking parameters have been extracted. To explore the internal consistency of our calculations and validate the results of the procedure we compare our results (1) with predictions based on the Next Generation Attenuation ground-motion equations for an earthquake of Mw 7.1, (2) with the predictions of the official Italian seismic hazard map, and (3) with macroseismic intensities included in the DBMI04 Italian database. We then examine the uncertainties and analyse the variability of ground motion for different fault geometries and slip distributions. © 2012 Springer Science+Business Media B.V.
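The merging step described above — extracting, for every map cell, the maximum shaking produced across all 8,859 scenario simulations — amounts to a cell-wise maximum. A schematic sketch of that reduction (grid cells and ground-motion values are invented for illustration, not taken from the paper):

```python
def merge_mos(scenario_grids):
    """Merge per-scenario shaking grids (dicts mapping cell -> peak value)
    into a maximum-observable-shaking map by taking the cell-wise maximum."""
    mos = {}
    for grid in scenario_grids:
        for cell, value in grid.items():
            if value > mos.get(cell, float("-inf")):
                mos[cell] = value
    return mos

# Two hypothetical scenarios, each contributing peak values to some cells.
scenarios = [
    {(0, 0): 0.12, (0, 1): 0.30},
    {(0, 0): 0.25, (1, 1): 0.10},
]
print(merge_mos(scenarios))  # cell (0, 0) keeps the larger value, 0.25
```

Cells touched by many scenarios keep only their worst-case value, which is exactly what makes the result a "maximum observable" map rather than an average.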


  1. A New Global Open Source Marine Hydrocarbon Emission Site Database

    Science.gov (United States)

    Onyia, E., Jr.; Wood, W. T.; Barnard, A.; Dada, T.; Qazzaz, M.; Lee, T. R.; Herrera, E.; Sager, W.

    2017-12-01

    Hydrocarbon emission sites (e.g. seeps) discharge large volumes of fluids and gases into the oceans that are not only important for biogeochemical budgets, but also support abundant chemosynthetic communities. Documenting the locations of modern emissions is a first step towards understanding and monitoring how they affect the global state of the seafloor and oceans. Currently, no global open-source (i.e. non-proprietary) detailed maps of emission sites are available. As a solution, we have created a database that is housed within an Excel spreadsheet and uses the latest versions of Earthpoint and Google Earth for position coordinate conversions and data mapping, respectively. To date, approximately 1,000 data points have been collected from referenceable sources across the globe, and we are continually expanding the dataset. Because of the variety of spatial extents encountered, we identify each site in one of two ways: 1) point (x, y, z) locations for individual sites, and 2) delineated areas where sites are clustered. Certain well-known areas, such as the Gulf of Mexico and the Mediterranean Sea, have a greater abundance of information, whereas significantly less information is available in other regions due to the absence of emission sites, lack of data, or because the existing data are proprietary. Although the geographical extent of the data is currently restricted to regions where the most data are publicly available, as the database matures we expect to have more complete coverage of the world's oceans. This database is an information resource that consolidates and organizes the existing literature on hydrocarbons released into the marine environment, thereby providing a comprehensive reference for future work. We expect that the availability of seafloor hydrocarbon emission maps will benefit scientific understanding of hydrocarbon-rich areas as well as potentially aiding hydrocarbon exploration and environmental impact assessments.

  2. jSPyDB, an open source database-independent tool for data management

    CERN Document Server

    Pierro, Giuseppe Antonio

    2010-01-01

    Nowadays, the number of commercial tools for accessing databases, built on Java or .Net, is increasing. However, many of these applications have several drawbacks: they are usually not open-source, they provide interfaces only to a specific kind of database, and they are platform-dependent and very CPU- and memory-consuming. jSPyDB is a free web-based tool written in Python and Javascript. It relies on jQuery and Python libraries, and is intended to provide a simple handle on different database technologies inside a local web browser. Such a tool, exploiting fast access libraries such as SQLAlchemy, is easy to install and to configure. The design of this tool envisages three layers. The front-end client side in the local web browser communicates with a back-end server. Only the server is able to connect to the different databases, for the purposes of performing data definition and manipulation. The server makes the data available to the client, so that the user can display and handle them safely. ...
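The three-layer design described above — a browser client that only ever receives data the server has fetched, never a database handle — can be sketched in miniature. The function below is a hypothetical illustration, not jSPyDB's actual code, and it uses the stdlib sqlite3 module in place of SQLAlchemy so the sketch stays self-contained; the server-side step serialises rows to JSON for the client.

```python
import json
import sqlite3

def query_as_json(conn, sql, params=()):
    """Server-side layer: run the query and hand the client JSON, never a DB handle."""
    cur = conn.execute(sql, params)
    cols = [d[0] for d in cur.description]  # column names from the cursor
    return json.dumps([dict(zip(cols, row)) for row in cur.fetchall()])

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE runs (id INTEGER, status TEXT)")
conn.executemany("INSERT INTO runs VALUES (?, ?)", [(1, "done"), (2, "failed")])

payload = query_as_json(conn, "SELECT id, status FROM runs WHERE status = ?", ("done",))
print(payload)  # a JSON array of row objects, safe to ship to a browser
```

Keeping the connection on the server side is what makes the client safe: it can display and filter the JSON payload but cannot issue arbitrary statements against the database.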

  3. Performance of popular open source databases for HEP related computing problems

    International Nuclear Information System (INIS)

    Kovalskyi, D; Sfiligoi, I; Wuerthwein, F; Yagil, A

    2014-01-01

    Databases are used in many software components of HEP computing, from monitoring and job scheduling to data storage and processing. It is not always clear at the beginning of a project if a problem can be handled by a single server, or if one needs to plan for a multi-server solution. Before a scalable solution is adopted, it helps to know how well it performs in a single server case to avoid situations when a multi-server solution is adopted mostly due to sub-optimal performance per node. This paper presents comparison benchmarks of popular open source database management systems. As a test application we use a user job monitoring system based on the Glidein workflow management system used in the CMS Collaboration.

  4. Observational database for studies of nearby universe

    Science.gov (United States)

    Kaisina, E. I.; Makarov, D. I.; Karachentsev, I. D.; Kaisin, S. S.

    2012-01-01

    We present the description of a database of galaxies of the Local Volume (LVG), located within 10 Mpc around the Milky Way. It contains more than 800 objects. Based on an analysis of functional capabilities, we used the PostgreSQL DBMS as a management system for our LVG database. Applying semantic modelling methods, we developed a physical ER-model of the database. We describe the developed architecture of the database table structure, and the implemented web-access, available at http://www.sao.ru/lv/lvgdb.

  5. Opinions on Drug Interaction Sources in Anticancer Treatments and Parameters for an Oncology-Specific Database by Pharmacy Practitioners in Asia

    Directory of Open Access Journals (Sweden)

    2010-01-01

    Cancer patients undergoing chemotherapy are particularly susceptible to drug-drug interactions (DDIs). Practitioners should keep themselves updated with the most current DDI information, particularly involving new anticancer drugs (ACDs). Databases can be useful for obtaining up-to-date DDI information in a timely and efficient manner. Our objective was to investigate the DDI information sources of pharmacy practitioners in Asia and their views on the usefulness of an oncology-specific database for ACD interactions. A qualitative, cross-sectional survey was done to collect information on the respondents' practice characteristics, sources of DDI information and parameters useful in an ACD interaction database. The response rate was 49%. Electronic databases (70%), drug interaction textbooks (69%) and drug compendia (64%) were most commonly used. The majority (93%) indicated that a database catering to ACD interactions would be useful. Essential parameters that should be included in the database were the mechanism and severity of the detected interaction and the presence of a management plan (98% each). This study has improved our understanding of the usefulness of various DDI information sources for ACD interactions among pharmacy practitioners in Asia. An oncology-specific DDI database targeting ACD interactions is definitely attractive for clinical practice.
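The essential parameters the respondents ranked — mechanism, severity, and a management plan — suggest a minimal record shape for such a database. The sketch below is purely illustrative: the field names, drug names, and example interaction are invented placeholders, not clinical data or guidance.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DDIRecord:
    drug_a: str
    drug_b: str
    mechanism: str   # e.g. a metabolic-pathway interaction (hypothetical)
    severity: str    # e.g. "major", "moderate", "minor"
    management: str  # the management plan respondents rated as essential

def lookup(records, drug_a, drug_b):
    """Order- and case-insensitive lookup of a drug pair."""
    pair = frozenset((drug_a.lower(), drug_b.lower()))
    return [r for r in records
            if frozenset((r.drug_a.lower(), r.drug_b.lower())) == pair]

records = [
    DDIRecord("drugA", "drugB", "hypothetical enzyme inhibition",
              "major", "avoid combination; monitor levels"),
]
hits = lookup(records, "DrugB", "druga")
print(len(hits))  # the pair matches regardless of argument order or case
```

Making the lookup order-insensitive matters in practice: an interaction between A and B must be found whichever drug the practitioner queries first.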

  6. Microseism Source Distribution Observed from Ireland

    Science.gov (United States)

    Craig, David; Bean, Chris; Donne, Sarah; Le Pape, Florian; Möllhoff, Martin

    2017-04-01

    Ocean generated microseisms (OGM) are recorded globally with similar spectral features observed everywhere. The generation mechanism for OGM and their subsequent propagation to continental regions has led to their use as a proxy for sea-state characteristics. Also many modern seismological methods make use of OGM signals. For example, the Earth's crust and upper mantle can be imaged using "ambient noise tomography". For many of these methods an understanding of the source distribution is necessary to properly interpret the results. OGM recorded on near coastal seismometers are known to be related to the local ocean wavefield. However, contributions from more distant sources may also be present. This is significant for studies attempting to use OGM as a proxy for sea-state characteristics such as significant wave height. Ireland has a highly energetic ocean wave climate and is close to one of the major source regions for OGM. This provides an ideal location to study an OGM source region in detail. Here we present the source distribution observed from seismic arrays in Ireland. The region is shown to consist of several individual source areas. These source areas show some frequency dependence and generally occur at or near the continental shelf edge. We also show some preliminary results from an off-shore OBS network to the North-West of Ireland. The OBS network includes instruments on either side of the shelf and should help interpret the array observations.

  7. Performance of an open-source heart sound segmentation algorithm on eight independent databases.

    Science.gov (United States)

    Liu, Chengyu; Springer, David; Clifford, Gari D

    2017-08-01

    Heart sound segmentation is a prerequisite step for the automatic analysis of heart sound signals, facilitating the subsequent identification and classification of pathological events. Recently, hidden Markov model-based algorithms have received increased interest due to their robustness in processing noisy recordings. In this study we aim to evaluate the performance of the recently published logistic regression based hidden semi-Markov model (HSMM) heart sound segmentation method, by using a wider variety of independently acquired data of varying quality. Firstly, we constructed a systematic evaluation scheme based on a new collection of heart sound databases, which we assembled for the PhysioNet/CinC Challenge 2016. This collection includes a total of more than 120 000 s of heart sounds recorded from 1297 subjects (including both healthy subjects and cardiovascular patients) and comprises eight independent heart sound databases sourced from multiple independent research groups around the world. The HSMM-based segmentation method was then evaluated on the eight assembled databases using the common evaluation metrics of sensitivity, specificity, accuracy, and the F1 measure. In addition, the effect of varying the tolerance window for determining a correct segmentation was evaluated. The results confirm the high accuracy of the HSMM-based algorithm on a separate test dataset comprised of 102 306 heart sounds. An average F1 score of 98.5% for segmenting S1 and systole intervals and 97.2% for segmenting S2 and diastole intervals were observed. The F1 score was shown to increase with an increase in the tolerance window size, as expected. The high segmentation accuracy of the HSMM-based algorithm on a large database confirmed the algorithm's effectiveness. The described evaluation framework, combined with the largest collection of open access heart sound data, provides essential resources for
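The tolerance-window evaluation described above — a detected boundary counts as correct if it falls within ±w of a reference boundary, and the score grows as w widens — can be sketched with a small scoring function. This is a generic illustration of the idea, not the paper's actual evaluation code, and the boundary times are invented.

```python
def f1_with_tolerance(reference, detected, tol):
    """Greedily match detected boundaries (seconds) to reference boundaries
    within +/- tol, each reference used at most once, then return F1."""
    unmatched = sorted(reference)
    tp = 0
    for d in sorted(detected):
        for r in unmatched:
            if abs(d - r) <= tol:
                unmatched.remove(r)
                tp += 1
                break
    fp = len(detected) - tp
    fn = len(reference) - tp
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

ref = [0.10, 0.90, 1.70]   # hypothetical reference S1 onsets
det = [0.12, 0.95, 2.50]   # two close detections, one far off
print(f1_with_tolerance(ref, det, tol=0.06))  # widening tol admits more matches
```

With a 60 ms window only the first two detections match; widening the window eventually lets the third match as well, which is why the reported F1 rises with tolerance size.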

  8. VLBI observations of Infrared-Faint Radio Sources

    Science.gov (United States)

    Middelberg, Enno; Phillips, Chris; Norris, Ray; Tingay, Steven

    2006-10-01

    We propose to observe a small sample of radio sources from the ATLAS project (ATLAS = Australia Telescope Large Area Survey) with the LBA, to determine their compactness and map their structures. The sample consists of three radio sources with no counterpart in the co-located SWIRE survey (3.6 um to 160 um), carried out with the Spitzer Space Telescope. This rare class of sources, dubbed Infrared-Faint Radio Sources, or IFRS, is inconsistent with current galaxy evolution models. VLBI observations are an essential way to obtain further clues on what these objects are and why they are hidden from infrared observations: we will map their structure to test whether they resemble core-jet or double-lobed morphologies, and we will measure the flux densities on long baselines, to determine their compactness. Previous snapshot-style LBA observations of two other IFRS yielded no detections, hence we propose to use disk-based recording with 512 Mbps where possible, for highest sensitivity. With the observations proposed here, we will increase the number of VLBI-observed IFRS from two to five, soon allowing us to draw general conclusions about this intriguing new class of objects.

  9. The plant phenological online database (PPODB): an online database for long-term phenological data

    Science.gov (United States)

    Dierenbach, Jonas; Badeck, Franz-W.; Schaber, Jörg

    2013-09-01

    We present an online database that provides unrestricted and free access to over 16 million plant phenological observations from over 8,000 stations in Central Europe between the years 1880 and 2009. Unique features are (1) flexible and unrestricted access to a full-fledged database, allowing for a wide range of individual queries and data retrieval, (2) historical data for Germany before 1951, ranging back to 1880, and (3) more than 480 curated long-term time series covering more than 100 years for individual phenological phases and plants, combined over Natural Regions in Germany. Time series for single stations or Natural Regions can be accessed through a user-friendly, graphical, geo-referenced interface. The joint databases made available with the plant phenological database PPODB make accessible an important data source for further analyses of long-term changes in phenology. The database can be accessed via www.ppodb.de.

  10. Observation of extragalactic X-ray sources

    International Nuclear Information System (INIS)

    Bui-Van, Andre.

    1973-01-01

    A narrow angular resolution detection apparatus using a high performance collimator has proved particularly well suited to programs of observation of X-ray sources. The experimental set-up and its performance are described. One chapter deals with the particular problems involved in the observation of X-ray sources with the aid of sounding balloons. The absorption of extraterrestrial photons by the Earth's atmosphere is taken into account in the processing of the observation data using two methods of calculation: digital and simulation techniques. The results of three balloon flights are then presented, with the interpretation of the observations carried out using both thermal and non-thermal emission models. This analysis leads to some possible characteristics of the structure of the Perseus galaxy cluster [fr]

  11. jSPyDB, an open source database-independent tool for data management

    Science.gov (United States)

    Pierro, Giuseppe Antonio; Cavallari, Francesca; Di Guida, Salvatore; Innocente, Vincenzo

    2011-12-01

    Nowadays, the number of commercial tools available for accessing databases, built on Java or .Net, is increasing. However, many of these applications have several drawbacks: usually they are not open-source, they provide interfaces only to a specific kind of database, they are platform-dependent, and they consume considerable CPU and memory. jSPyDB is a free web-based tool written in Python and JavaScript. It relies on jQuery and Python libraries, and is intended to provide a simple handler to different database technologies inside a local web browser. Such a tool, exploiting fast access libraries such as SQLAlchemy, is easy to install and configure. The design of this tool envisages three layers. The front-end client side in the local web browser communicates with a back-end server. Only the server is able to connect to the different databases for the purposes of performing data definition and manipulation. The server makes the data available to the client, so that the user can display and handle them safely. Moreover, thanks to the jQuery libraries, this tool supports export of data in different formats, such as XML and JSON. Finally, by using a set of pre-defined functions, users can create customized views for better data visualization. In this way, we optimize the performance of the database servers by avoiding short connections and concurrent sessions. In addition, security is enforced, since users are not given the possibility to execute arbitrary SQL statements directly.
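
    The abstract names SQLAlchemy as jSPyDB's access layer; the sketch below illustrates the same server-side layering using only Python's standard DB-API, so the query code never mentions a specific backend. The function name and table are hypothetical, not jSPyDB's actual API:

```python
import json

def rows_as_json(connect, query, params=()):
    """Run a read-only query through any DB-API 2.0 connection
    factory and return the result as JSON with column names.

    `connect` is whatever driver the backend needs
    (sqlite3.connect, psycopg2.connect, cx_Oracle.connect, ...);
    nothing below this line depends on the backend, which is the
    kind of database independence the server layer provides.
    """
    conn = connect()
    try:
        cur = conn.cursor()
        cur.execute(query, params)
        cols = [d[0] for d in cur.description]
        rows = [dict(zip(cols, r)) for r in cur.fetchall()]
    finally:
        conn.close()
    return json.dumps(rows)
```

    Opening and closing the connection per request mirrors the design goal of avoiding long-lived concurrent sessions on the database servers.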

  12. jSPyDB, an open source database-independent tool for data management

    International Nuclear Information System (INIS)

    Pierro, Giuseppe Antonio; Cavallari, Francesca; Di Guida, Salvatore; Innocente, Vincenzo

    2011-01-01

    Nowadays, the number of commercial tools available for accessing databases, built on Java or .Net, is increasing. However, many of these applications have several drawbacks: usually they are not open-source, they provide interfaces only to a specific kind of database, they are platform-dependent, and they consume considerable CPU and memory. jSPyDB is a free web-based tool written in Python and JavaScript. It relies on jQuery and Python libraries, and is intended to provide a simple handler to different database technologies inside a local web browser. Such a tool, exploiting fast access libraries such as SQLAlchemy, is easy to install and configure. The design of this tool envisages three layers. The front-end client side in the local web browser communicates with a back-end server. Only the server is able to connect to the different databases for the purposes of performing data definition and manipulation. The server makes the data available to the client, so that the user can display and handle them safely. Moreover, thanks to the jQuery libraries, this tool supports export of data in different formats, such as XML and JSON. Finally, by using a set of pre-defined functions, users can create customized views for better data visualization. In this way, we optimize the performance of the database servers by avoiding short connections and concurrent sessions. In addition, security is enforced, since users are not given the possibility to execute arbitrary SQL statements directly.

  13. THE EXTRAGALACTIC DISTANCE DATABASE

    International Nuclear Information System (INIS)

    Tully, R. Brent; Courtois, Helene M.; Jacobs, Bradley A.; Rizzi, Luca; Shaya, Edward J.; Makarov, Dmitry I.

    2009-01-01

    A database can be accessed on the Web at http://edd.ifa.hawaii.edu that was developed to promote access to information related to galaxy distances. The database has three functional components. First, tables from many literature sources have been gathered and enhanced with links through a distinct galaxy naming convention. Second, comparisons of results both at the levels of parameters and of techniques have begun and are continuing, leading to increasing homogeneity and consistency of distance measurements. Third, new material is presented arising from ongoing observational programs at the University of Hawaii 2.2 m telescope, radio telescopes at Green Bank, Arecibo, and Parkes and with the Hubble Space Telescope. This new observational material is made available in tandem with related material drawn from archives and passed through common analysis pipelines.

  14. The Chandra Source Catalog : Automated Source Correlation

    Science.gov (United States)

    Hain, Roger; Evans, I. N.; Evans, J. D.; Glotfelty, K. J.; Anderson, C. S.; Bonaventura, N. R.; Chen, J. C.; Davis, J. E.; Doe, S. M.; Fabbiano, G.; Galle, E.; Gibbs, D. G.; Grier, J. D.; Hall, D. M.; Harbo, P. N.; He, X.; Houck, J. C.; Karovska, M.; Lauer, J.; McCollough, M. L.; McDowell, J. C.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Nowak, M. A.; Plummer, D. A.; Primini, F. A.; Refsdal, B. L.; Rots, A. H.; Siemiginowska, A. L.; Sundheim, B. A.; Tibbetts, M. S.; Van Stone, D. W.; Winkelman, S. L.; Zografou, P.

    2009-01-01

    Chandra Source Catalog (CSC) master source pipeline processing seeks to automatically detect sources and compute their properties. Since Chandra is a pointed mission and not a sky survey, different sky regions are observed different numbers of times at varying orientations, resolutions, and other heterogeneous conditions. While this provides an opportunity to collect data from a potentially large number of observing passes, it also creates challenges in determining the best way to combine different detection results for the most accurate characterization of the detected sources. The CSC master source pipeline correlates data from multiple observations by updating existing cataloged source information with new data from the same sky region as they become available. This process sometimes leads to relatively straightforward conclusions, such as when single sources from two observations are similar in size and position. Other observation results require more logic to combine, such as one observation finding a single, large source and another identifying multiple, smaller sources at the same position. We present examples of different overlapping source detections processed in the current version of the CSC master source pipeline. We explain how they are resolved into entries in the master source database, and examine the challenges of computing source properties for the same source detected multiple times. Future enhancements are also discussed. This work is supported by NASA contract NAS8-03060 (CXC).
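
    A minimal sketch of the positional cross-match at the heart of such a pipeline is given below (flat-sky approximation, greedy matching, single match radius; all names are hypothetical). The real master source pipeline applies considerably more logic, e.g. for the one-large-source versus several-small-sources case described above:

```python
import math

def match_detections(master, new, match_radius):
    """Greedy positional cross-match between cataloged master
    sources and detections from a new observation.

    Positions are (ra, dec) in degrees; separations use a small-field
    flat-sky approximation. Returns (pairs, unmatched_new): index
    pairs (master_i, new_j) to merge, plus new detections that become
    candidate new master sources.
    """
    pairs = []
    for j, (ra_n, dec_n) in enumerate(new):
        for i, (ra_m, dec_m) in enumerate(master):
            dra = (ra_n - ra_m) * math.cos(math.radians(dec_m))
            ddec = dec_n - dec_m
            if math.hypot(dra, ddec) <= match_radius:
                pairs.append((i, j))
                break
    matched_new = {j for _, j in pairs}
    unmatched = [j for j in range(len(new)) if j not in matched_new]
    return pairs, unmatched
```

    In practice the match radius would come from the combined position errors of the two detections rather than a single constant.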

  15. Optical observations of binary X-ray sources

    International Nuclear Information System (INIS)

    Boynton, P.E.

    1975-01-01

    The contribution to the recent progress in astronomy made by optical observations is pointed out. The optical properties of X-ray sources help to establish the physical nature of these objects. The current observational evidence on the binary X-ray sources HZ Her/Her X-1 and HDE 226868/Cyg X-1 is reported. (P.J.S.)

  16. The RUNE Experiment—A Database of Remote-Sensing Observations of Near-Shore Winds

    DEFF Research Database (Denmark)

    Floors, Rogier Ralph; Peña, Alfredo; Lea, Guillaume

    2016-01-01

    We present a comprehensive database of near-shore wind observations that were carried out during the experimental campaign of the RUNE project. RUNE aims at reducing the uncertainty of the near-shore wind resource estimates from model outputs by using lidar, ocean, and satellite observations. Here...

  17. The Einstein database of IPC x-ray observations of optically selected and radio-selected quasars, 1.

    Science.gov (United States)

    Wilkes, Belinda J.; Tananbaum, Harvey; Worrall, D. M.; Avni, Yoram; Oey, M. S.; Flanagan, Joan

    1994-01-01

    We present the first volume of the Einstein quasar database. The database includes estimates of the X-ray count rates, fluxes, and luminosities for 514 quasars and Seyfert 1 galaxies observed with the Imaging Proportional Counter (IPC) aboard the Einstein Observatory. All were previously known optically selected or radio-selected objects, and most were the targets of the X-ray observations. The X-ray properties of the Active Galactic Nuclei (AGNs) have been derived by reanalyzing the IPC data in a systematic manner to provide a uniform database for general use by the astronomical community. We use the database to extend earlier quasar luminosity studies which were made using only a subset of the currently available data. The database can be accessed on the Internet via the SAO Einstein on-line system ('Einline') and is available in ASCII format on magnetic tape and DOS diskette.

  18. Specialist Bibliographic Databases.

    Science.gov (United States)

    Gasparyan, Armen Yuri; Yessirkepov, Marlen; Voronov, Alexander A; Trukhachev, Vladimir I; Kostyukova, Elena I; Gerasimov, Alexey N; Kitas, George D

    2016-05-01

    Specialist bibliographic databases offer essential online tools for researchers and authors who work on specific subjects and perform comprehensive and systematic syntheses of evidence. This article presents examples of the established specialist databases, which may be of interest to those engaged in multidisciplinary science communication. Access to most specialist databases is through subscription schemes and membership in professional associations. Several aggregators of information and database vendors, such as EBSCOhost and ProQuest, facilitate advanced searches supported by specialist keyword thesauri. Searches of items through specialist databases are complementary to those through multidisciplinary research platforms, such as PubMed, Web of Science, and Google Scholar. Becoming familiar with the functional characteristics of biomedical and nonbiomedical bibliographic search tools is mandatory for researchers, authors, editors, and publishers. The database users are offered updates of the indexed journal lists, abstracts, author profiles, and links to other metadata. Editors and publishers may find the source selection criteria particularly useful when applying for coverage of their peer-reviewed journals and grey literature sources. These criteria are aimed at accepting relevant sources with established editorial policies and quality controls.

  19. Specialist Bibliographic Databases

    Science.gov (United States)

    2016-01-01

    Specialist bibliographic databases offer essential online tools for researchers and authors who work on specific subjects and perform comprehensive and systematic syntheses of evidence. This article presents examples of the established specialist databases, which may be of interest to those engaged in multidisciplinary science communication. Access to most specialist databases is through subscription schemes and membership in professional associations. Several aggregators of information and database vendors, such as EBSCOhost and ProQuest, facilitate advanced searches supported by specialist keyword thesauri. Searches of items through specialist databases are complementary to those through multidisciplinary research platforms, such as PubMed, Web of Science, and Google Scholar. Becoming familiar with the functional characteristics of biomedical and nonbiomedical bibliographic search tools is mandatory for researchers, authors, editors, and publishers. The database users are offered updates of the indexed journal lists, abstracts, author profiles, and links to other metadata. Editors and publishers may find the source selection criteria particularly useful when applying for coverage of their peer-reviewed journals and grey literature sources. These criteria are aimed at accepting relevant sources with established editorial policies and quality controls. PMID:27134485

  20. Data Sources for Trait Databases: Comparing the Phenomic Content of Monographs and Evolutionary Matrices.

    Science.gov (United States)

    Dececchi, T Alex; Mabee, Paula M; Blackburn, David C

    2016-01-01

    Databases of organismal traits that aggregate information from one or multiple sources can be leveraged for large-scale analyses in biology. Yet the differences among these data streams and how well they capture trait diversity have never been explored. We present the first analysis of the differences between phenotypes captured in free text of descriptive publications ('monographs') and those used in phylogenetic analyses ('matrices'). We focus our analysis on osteological phenotypes of the limbs of four extinct vertebrate taxa critical to our understanding of the fin-to-limb transition. We find that there is low overlap between the anatomical entities used in these two sources of phenotype data, indicating that phenotypes represented in matrices are not simply a subset of those found in monographic descriptions. Perhaps as expected, compared to characters found in matrices, phenotypes in monographs tend to emphasize descriptive and positional morphology, be somewhat more complex, and relate to fewer additional taxa. While based on a small set of focal taxa, these qualitative and quantitative data suggest that either source of phenotypes alone will result in incomplete knowledge of variation for a given taxon. As a broader community develops to use and expand databases characterizing organismal trait diversity, it is important to recognize the limitations of the data sources and develop strategies to more fully characterize variation both within species and across the tree of life.

  1. Refactoring databases evolutionary database design

    CERN Document Server

    Ambler, Scott W

    2006-01-01

    Refactoring has proven its value in a wide range of development projects–helping software professionals improve system designs, maintainability, extensibility, and performance. Now, for the first time, leading agile methodologist Scott Ambler and renowned consultant Pramodkumar Sadalage introduce powerful refactoring techniques specifically designed for database systems. Ambler and Sadalage demonstrate how small changes to table structures, data, stored procedures, and triggers can significantly enhance virtually any database design–without changing semantics. You’ll learn how to evolve database schemas in step with source code–and become far more effective in projects relying on iterative, agile methodologies. This comprehensive guide and reference helps you overcome the practical obstacles to refactoring real-world databases by covering every fundamental concept underlying database refactoring. Using start-to-finish examples, the authors walk you through refactoring simple standalone databas...

  2. Developing an Inhouse Database from Online Sources.

    Science.gov (United States)

    Smith-Cohen, Deborah

    1993-01-01

    Describes the development of an in-house bibliographic database on arctic wetlands research by the U.S. Army Corps of Engineers Cold Regions Research and Engineering Laboratory. Topics discussed include planning; identifying relevant search terms and commercial online databases; downloading citations; criteria for software selection; management…

  3. Data-mining analysis of the global distribution of soil carbon in observational databases and Earth system models

    Science.gov (United States)

    Hashimoto, Shoji; Nanko, Kazuki; Ťupek, Boris; Lehtonen, Aleksi

    2017-03-01

    Future climate change will dramatically change the carbon balance in the soil, and this change will affect the terrestrial carbon stock and the climate itself. Earth system models (ESMs) are used to understand the current climate and to project future climate conditions, but the soil organic carbon (SOC) stock simulated by ESMs and those of observational databases are not well correlated when the two are compared at fine grid scales. However, the specific key processes and factors, as well as the relationships among these factors that govern the SOC stock, remain unclear; the inclusion of such missing information would improve the agreement between modeled and observational data. In this study, we sought to identify the influential factors that govern global SOC distribution in observational databases, as well as those simulated by ESMs. We used a data-mining (machine-learning) scheme (boosted regression trees, BRT) to identify the factors affecting the SOC stock. We applied the BRT scheme to three observational databases and 15 ESM outputs from the fifth phase of the Coupled Model Intercomparison Project (CMIP5) and examined the effects of 13 variables/factors categorized into five groups (climate, soil property, topography, vegetation, and land-use history). Globally, the contributions of mean annual temperature, clay content, carbon-to-nitrogen (CN) ratio, wetland ratio, and land cover were high in observational databases, whereas the contributions of the mean annual temperature, land cover, and net primary productivity (NPP) were predominant in the SOC distribution in ESMs. A comparison of the influential factors at a global scale revealed that the most distinct differences between the SOCs from the observational databases and ESMs were the low clay content and CN ratio contributions, and the high NPP contribution in the ESMs. The results of this study will aid in identifying the causes of the current mismatches between observational SOC databases and ESM outputs.
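
    The study ranks factors by their contribution within a BRT model. The sketch below illustrates the same idea with the model-agnostic permutation-importance variant, a stand-in for BRT relative influence rather than the authors' method: shuffling an influential factor degrades predictive skill, while shuffling an irrelevant one does not.

```python
import random

def permutation_importance(predict, X, y, n_repeats=10, seed=0):
    """Model-agnostic factor importance: the average increase in
    mean squared error when one factor's column is shuffled.

    `predict` maps one row (list of factor values) to a prediction;
    `X` is a list of rows, `y` the list of targets.
    """
    rng = random.Random(seed)

    def mse(rows):
        return sum((predict(r) - t) ** 2 for r, t in zip(rows, y)) / len(y)

    base = mse(X)
    importances = []
    for f in range(len(X[0])):
        deltas = []
        for _ in range(n_repeats):
            col = [row[f] for row in X]   # fresh copy of factor f
            rng.shuffle(col)
            Xp = [row[:f] + [v] + row[f + 1:] for row, v in zip(X, col)]
            deltas.append(mse(Xp) - base)
        importances.append(sum(deltas) / n_repeats)
    return importances
```

    Read the output the way the contribution rankings above are read: larger values mean the factor carries more of the predictive signal.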

  4. Aurorasaurus Database of Real-Time, Soft-Sensor Sourced Aurora Data for Space Weather Research

    Science.gov (United States)

    Kosar, B.; MacDonald, E.; Heavner, M.

    2017-12-01

    Aurorasaurus is an innovative citizen science project with two fundamental objectives: collecting real-time, ground-based signals of auroral visibility from citizen scientists (soft sensors), and incorporating this new type of data into scientific investigations pertaining to the aurora. The project has been live since the fall of 2014, and as of summer 2017 the database comprised approximately 12,000 observations (5295 direct reports and 6413 verified tweets). In this presentation, we will focus on demonstrating the utility of this robust, science-quality dataset for space weather research needs. These data scale with the size of the event and are well suited to capturing the largest, rarest events. Emerging state-of-the-art computational methods based on statistical inference, such as machine learning frameworks and data-model integration methods, can offer new insights that could potentially lead to better real-time assessment and space weather prediction when citizen science data are combined with traditional sources.

  5. NREL: U.S. Life Cycle Inventory Database - About the LCI Database Project

    Science.gov (United States)

    The U.S. Life Cycle Inventory (LCI) Database is a publicly available database that allows users to objectively review and compare analysis results that are based on similar sources of critically reviewed LCI data. NREL maintains the database through its LCI Database Project.

  6. Database Independent Migration of Objects into an Object-Relational Database

    CERN Document Server

    Ali, A; Munir, K; Waseem-Hassan, M; Willers, I

    2002-01-01

    CERN's (European Organization for Nuclear Research) WISDOM project [1] deals with the replication of data between homogeneous sources in a Wide Area Network (WAN) using the Extensible Markup Language (XML). The last phase of the WISDOM (Wide-area, database Independent Serialization of Distributed Objects for data Migration) project [2] indicates that the future direction for this work is to incorporate heterogeneous sources, as compared to the homogeneous sources described by [3]. This work will become essential for the CERN community once the need arises to transfer their legacy data to some source other than Objectivity [4]. Oracle 9i, an object-relational database (including support for abstract data types, ADTs), appears to be a potential candidate for the physics event store in the CERN CMS experiment, as suggested by [4] & [5]. Consequently, this database has been selected for study. As a result of this work the HEP community will get a tool for migrating their data from Objectivity to Oracle 9i.
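
    The serialize-to-XML-then-load pattern described above can be sketched as follows. The class and element names are hypothetical illustrations; the real WISDOM tooling targets Objectivity as the source and Oracle 9i as the destination:

```python
import xml.etree.ElementTree as ET

def object_to_xml(obj, tag):
    """Serialize a flat object's attributes into an XML element,
    the database-independent intermediate form of the migration."""
    elem = ET.Element(tag)
    for name, value in vars(obj).items():
        child = ET.SubElement(elem, name)
        child.text = str(value)
    return elem

def xml_to_row(elem):
    """Flatten the XML element back into (columns, values), ready
    to build an INSERT for any relational target."""
    cols = [child.tag for child in elem]
    vals = [child.text for child in elem]
    return cols, vals
```

    Because the XML document carries both structure and values, the loading side needs no knowledge of the system that produced it.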

  7. Real-time Inversion of Tsunami Source from GNSS Ground Deformation Observations and Tide Gauges.

    Science.gov (United States)

    Arcas, D.; Wei, Y.

    2017-12-01

    Over the last decade, the NOAA Center for Tsunami Research (NCTR) has developed an inversion technique to constrain tsunami sources, based on the use of Green's functions in combination with data reported by NOAA's Deep-ocean Assessment and Reporting of Tsunamis (DART®) systems. The system has consistently proven effective in providing highly accurate tsunami forecasts of wave amplitude throughout an entire basin. However, improvement is necessary in two critical areas: reduction of data latency for near-field tsunami predictions and reduction of the maintenance cost of the network. Two types of sensors have been proposed to supplement the existing network of DART® systems: Global Navigation Satellite System (GNSS) stations and coastal tide gauges. The use of GNSS stations to provide autonomous geo-spatial positioning at specific sites during an earthquake has been proposed in recent years to supplement the DART® array in tsunami source inversion. GNSS technology has the potential to provide substantial contributions in the two critical areas of DART® technology where improvement is most necessary. The present study uses GNSS ground displacement observations of the 2011 Tohoku-Oki earthquake, in combination with the NCTR operational database of Green's functions, to produce a rapid estimate of the tsunami source based on GNSS observations alone. The solution is then compared with that obtained via DART® data inversion, and the difficulties in obtaining an accurate GNSS-based solution are underlined. The study also identifies the set of conditions required for source inversion from coastal tide gauges, using the degree of nonlinearity of the signal as the primary criterion. We then proceed to identify the conditions and scenarios under which a particular gauge could be used to invert a tsunami source.
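
    The Green's-function inversion reduces to a small linear least-squares problem: the observations are modeled as a weighted sum of precomputed unit-source responses, and the weights are the source estimate. A self-contained sketch (normal equations solved by Gaussian elimination; illustrative only, not NCTR's operational code):

```python
def invert_source(green, obs):
    """Least-squares weights alpha minimizing ||G alpha - obs||^2,
    where column k of G (row-major `green`) is the precomputed
    response of unit source k at the observation samples.

    Solves the normal equations G^T G alpha = G^T obs, which is
    adequate for the handful of unit sources used in practice.
    """
    m = len(obs)          # number of observation samples
    n = len(green[0])     # number of unit sources
    # Build the normal equations A alpha = b.
    A = [[sum(green[i][p] * green[i][q] for i in range(m)) for q in range(n)]
         for p in range(n)]
    b = [sum(green[i][p] * obs[i] for i in range(m)) for p in range(n)]
    # Gaussian elimination with partial pivoting.
    for p in range(n):
        piv = max(range(p, n), key=lambda r: abs(A[r][p]))
        A[p], A[piv] = A[piv], A[p]
        b[p], b[piv] = b[piv], b[p]
        for r in range(p + 1, n):
            f = A[r][p] / A[p][p]
            for c in range(p, n):
                A[r][c] -= f * A[p][c]
            b[r] -= f * b[p]
    # Back substitution.
    alpha = [0.0] * n
    for p in range(n - 1, -1, -1):
        alpha[p] = (b[p] - sum(A[p][c] * alpha[c]
                               for c in range(p + 1, n))) / A[p][p]
    return alpha
```

    Swapping DART® samples for GNSS displacements changes only `obs` and the Green's function database, not the inversion itself, which is what makes the sensor substitution above attractive.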

  8. Construction of a nasopharyngeal carcinoma 2D/MS repository with Open Source XML database--Xindice.

    Science.gov (United States)

    Li, Feng; Li, Maoyu; Xiao, Zhiqiang; Zhang, Pengfei; Li, Jianling; Chen, Zhuchu

    2006-01-11

    Many proteomics initiatives require integration of all information, with uniform criteria, from collection of samples and data display to publication of experimental results. The integration and exchange of these data of different formats and structures poses a great challenge. XML technology shows promise in handling this task due to its simplicity and flexibility. Nasopharyngeal carcinoma (NPC) is one of the most common cancers in southern China and Southeast Asia, with marked geographic and racial differences in incidence. Although some cancer proteome databases now exist, there is still no NPC proteome database. The raw NPC proteome experiment data were captured into one XML document with the Human Proteome Markup Language (HUP-ML) editor and imported into the native XML database Xindice. The 2D/MS repository of the NPC proteome was constructed with Apache, PHP and Xindice to provide access to the database via the Internet. On our website, two methods, keyword query and click query, are provided to access the entries of the NPC proteome database. Our 2D/MS repository can be used to share the raw NPC proteomics data that are generated from gel-based proteomics experiments. The database, as well as the PHP source code for constructing users' own proteome repositories, can be accessed at http://www.xyproteomics.org/.
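
    The keyword query described above amounts to an XPath-style search over the XML repository. A minimal sketch with a deliberately simplified, hypothetical record structure (not actual HUP-ML); Xindice serves full XPath, while ElementTree's subset is enough to illustrate the idea:

```python
import xml.etree.ElementTree as ET

SAMPLE = """<repository>
  <spot id="S1"><protein>Keratin 8</protein><mass>53700</mass></spot>
  <spot id="S2"><protein>Annexin A1</protein><mass>38700</mass></spot>
</repository>"""

def keyword_query(xml_text, keyword):
    """Return the ids of spots whose protein name contains the
    keyword, case-insensitively."""
    root = ET.fromstring(xml_text)
    return [spot.get("id")
            for spot in root.findall("spot")
            if keyword.lower() in spot.findtext("protein", "").lower()]
```

    Storing the records natively as XML means the query layer never needs a relational schema mapping, which is the appeal of Xindice here.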

  9. Analysing and Correcting the Differences between Multi-Source and Multi-Scale Spatial Remote Sensing Observations

    Science.gov (United States)

    Dong, Yingying; Luo, Ruisen; Feng, Haikuan; Wang, Jihua; Zhao, Jinling; Zhu, Yining; Yang, Guijun

    2014-01-01

    Differences exist among analysis results of agriculture monitoring and crop production based on remote sensing observations that are obtained at different spatial scales from multiple remote sensors in the same time period and processed by the same algorithms, models or methods. These differences can be quantitatively described mainly from three aspects, i.e. multiple remote sensing observations, crop parameter estimation models, and spatial scale effects of surface parameters. Our research proposes a new method to analyse and correct the differences between multi-source and multi-scale spatial remote sensing surface reflectance datasets, aiming to provide references for further studies in agricultural applications with multiple remotely sensed observations from different sources. The new method was constructed on the basis of the physical and mathematical properties of multi-source and multi-scale reflectance datasets. Theories of statistics were used to extract the statistical characteristics of multiple surface reflectance datasets and to quantitatively analyse the spatial variations of these characteristics at multiple spatial scales. Then, taking the surface reflectance at the small spatial scale as the baseline data, theories of the Gaussian distribution were applied to correct the multiple surface reflectance datasets on the basis of the physical characteristics, mathematical distribution properties, and spatial variations obtained above. The proposed method was verified with two sets of multiple satellite images, obtained over two experimental fields located in Inner Mongolia and Beijing, China, with different degrees of homogeneity of the underlying surfaces. Experimental results indicate that differences of surface reflectance datasets at multiple spatial scales could be effectively corrected over non-homogeneous underlying surfaces, which provides a database for further multi-source and multi-scale crop growth monitoring and yield prediction, and their corresponding
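
    The Gaussian-distribution correction step can be illustrated in its simplest form: match the first two moments of a dataset to those of the baseline. The paper's actual procedure is more elaborate; this sketch only conveys the moment-matching idea:

```python
import statistics

def match_to_baseline(values, baseline):
    """Correct one reflectance dataset toward a baseline dataset by
    matching the mean and spread of assumed-Gaussian distributions:
    standardize each value, then rescale to the baseline's moments.
    Assumes `values` is not constant (nonzero spread)."""
    mu_v, sd_v = statistics.mean(values), statistics.pstdev(values)
    mu_b, sd_b = statistics.mean(baseline), statistics.pstdev(baseline)
    return [mu_b + (x - mu_v) * (sd_b / sd_v) for x in values]
```

    After correction, the dataset shares the baseline's mean and standard deviation while preserving the relative ordering of its pixels.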

  10. Database Organisation in a Web-Enabled Free and Open-Source Software (foss) Environment for Spatio-Temporal Landslide Modelling

    Science.gov (United States)

    Das, I.; Oberai, K.; Sarathi Roy, P.

    2012-07-01

    Landslides exhibit themselves in different mass movement processes and are considered among the most complex natural hazards occurring on the earth's surface. Making the landslide database available online via the WWW (World Wide Web) promotes the spreading and reach of the landslide information to all stakeholders. The aim of this research is to present a comprehensive database for generating landslide hazard scenarios with the help of available historic records of landslides and geo-environmental factors, and to make them available over the Web using geospatial Free & Open Source Software (FOSS). FOSS reduces the cost of the project drastically, as proprietary software is very costly. Landslide data generated for the period 1982 to 2009 were compiled along the national highway road corridor in the Indian Himalayas. All the geo-environmental datasets along with the landslide susceptibility map were served through a WebGIS client interface. The open source University of Minnesota (UMN) MapServer was used as the GIS server software for developing the web-enabled landslide geospatial database. A PHP/MapScript server-side application serves as the front-end and PostgreSQL with the PostGIS extension serves as the back-end for the web-enabled landslide spatio-temporal databases. This dynamic virtual visualization process through a web platform brings an insight into the understanding of the landslides and the resulting damage closer to the affected people and the user community. The landslide susceptibility dataset is also made available as an Open Geospatial Consortium (OGC) Web Feature Service (WFS), which can be accessed through any OGC-compliant open source or proprietary GIS software.
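
    Any OGC-compliant client retrieves the susceptibility layer with a WFS GetFeature request; the sketch below builds such a request as a key-value-pair URL. The endpoint and layer name are hypothetical, not those of the actual service:

```python
from urllib.parse import urlencode

def wfs_getfeature_url(base, type_name, bbox, srs="EPSG:4326"):
    """Build an OGC WFS 1.1.0 GetFeature request URL for a feature
    type restricted to a bounding box (minx, miny, maxx, maxy)."""
    params = {
        "service": "WFS",
        "version": "1.1.0",
        "request": "GetFeature",
        "typename": type_name,
        "srsname": srs,
        # WFS 1.1.0 KVP allows the CRS to be appended to the BBOX.
        "bbox": ",".join(str(v) for v in bbox) + "," + srs,
    }
    return base + "?" + urlencode(params)
```

    Because the interface is standardized, the same URL pattern works whether the server is UMN MapServer, GeoServer, or a proprietary product.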

  11. Linear polarization observations of some X-ray sources

    International Nuclear Information System (INIS)

    Shakhovskoy, N.M.; Efimov, Yu.S.

    1975-01-01

    Multicolour linear polarization of optical radiation of the X-ray sources Sco X-1, Cyg X-2, Cyg X-1 and Her X-1 was measured at the Crimean Astrophysical Observatory in 1970-1973. These observations indicate that polarization of Sco X-1 in the ultraviolet, blue and red spectral regions appears to be variable. No statistically significant variations of polarization were found for the other three sources observed. (Auth.)

  12. DOE technology information management system database study report

    Energy Technology Data Exchange (ETDEWEB)

    Widing, M.A.; Blodgett, D.W.; Braun, M.D.; Jusko, M.J.; Keisler, J.M.; Love, R.J.; Robinson, G.L. [Argonne National Lab., IL (United States). Decision and Information Sciences Div.

    1994-11-01

    To support the missions of the US Department of Energy (DOE) Special Technologies Program, Argonne National Laboratory is defining the requirements for an automated software system that will search electronic databases on technology. This report examines the work done and results to date. Argonne studied existing commercial and government sources of technology databases in five general areas: on-line services, patent database sources, government sources, aerospace technology sources, and general technology sources. First, it conducted a preliminary investigation of these sources to obtain information on the content, cost, frequency of updates, and other aspects of their databases. The Laboratory then performed detailed examinations of at least one source in each area. On this basis, Argonne recommended which databases should be incorporated in DOE's Technology Information Management System.

  13. The new Cloud Dynamics and Radiation Database algorithms for AMSR2 and GMI: exploitation of the GPM observational database for operational applications

    Science.gov (United States)

    Cinzia Marra, Anna; Casella, Daniele; Martins Costa do Amaral, Lia; Sanò, Paolo; Dietrich, Stefano; Panegrossi, Giulia

    2017-04-01

    Two new precipitation retrieval algorithms for the Advanced Microwave Scanning Radiometer 2 (AMSR2) and for the GPM Microwave Imager (GMI) are presented. The algorithms are based on the Cloud Dynamics and Radiation Database (CDRD) Bayesian approach and represent an evolution of the previous version applied to Special Sensor Microwave Imager/Sounder (SSMIS) observations and used operationally within the EUMETSAT Satellite Application Facility on support to Operational Hydrology and Water Management (H-SAF). The main innovation of these new products is the use of an entirely empirical extended database, derived from coincident radar and radiometer observations from the NASA/JAXA Global Precipitation Measurement Core Observatory (GPM-CO) (Dual-frequency Precipitation Radar (DPR) and GMI). The other new aspects are: 1) a new rain/no-rain screening approach; 2) the use of Empirical Orthogonal Functions (EOF) and Canonical Correlation Analysis (CCA) both in the screening approach and in the Bayesian algorithm; 3) the use of new meteorological and environmental ancillary variables to categorize the database and mitigate the problem of non-uniqueness of the retrieval solution; 4) the development and implementation of specific modules to minimize computational time. The CDRD algorithms for AMSR2 and GMI are able to handle the extremely large observational database available from the GPM-CO and provide the rainfall estimate with minimum latency, making them suitable for near-real-time hydrological and operational applications. As for the CDRD algorithm for AMSR2, a verification study over Italy using ground-based radar data, and over the MSG full-disk area using coincident GPM-CO/AMSR2 observations, has been carried out. Results show remarkable AMSR2 capabilities for rainfall rate (RR) retrieval over ocean (for RR > 0.25 mm/h) and good capabilities over vegetated land (for RR > 1 mm/h), while for coastal areas the results are less certain. Comparisons with NASA GPM products, and with
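A minimal sketch of the Bayesian retrieval idea underlying CDRD follows. It is greatly simplified: the operational algorithm works in EOF/CCA space with a full observation-error covariance, whereas here a single assumed Gaussian width weights database entries by how closely their brightness temperatures match the observation:

```python
import math

def bayesian_rain_estimate(obs_tb, database, sigma=5.0):
    """Minimum-mean-square-error Bayesian estimate: the database-average
    rain rate, weighted by a Gaussian in brightness-temperature distance.
    `database` is a list of (tb_vector, rain_rate) pairs; `sigma` (K) is an
    illustrative observation-error scale, not the operational covariance."""
    num = den = 0.0
    for tb, rain in database:
        d2 = sum((o - t) ** 2 for o, t in zip(obs_tb, tb))
        w = math.exp(-0.5 * d2 / sigma ** 2)
        num += w * rain
        den += w
    return num / den

# Toy database: (two-channel Tb vector, rain rate in mm/h).
db = [([250.0, 230.0], 0.0), ([240.0, 210.0], 2.0), ([230.0, 190.0], 8.0)]
estimate = bayesian_rain_estimate([239.0, 209.0], db)
```

The observation is closest to the second entry, so the estimate lands near 2 mm/h.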

  14. Full Data of Yeast Interacting Proteins Database (Original Version) - Yeast Interacting Proteins Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us Yeast Interacting Proteins Database Full Data of Yeast Interacting Proteins Database (Original Version) Data detail Data name Full Data of Yeast Interacting Proteins Database (Original Version) DOI 10.18908/lsdba.nbdc00742-004 Description of data contents The entire data in the Yeast Interacting Proteins Database...eir interactions are required. Several sources including YPD (Yeast Proteome Database, Costanzo, M. C., Hoga...ematic name in the SGD (Saccharomyces Genome Database; http://www.yeastgenome.org/). Bait gene name The gen

  15. Free and Open Source Options for Creating Database-Driven Subject Guides

    Directory of Open Access Journals (Sweden)

    Edward M. Corrado

    2008-03-01

    Full Text Available This article reviews cost-effective options available to libraries for updating and maintaining pathfinders such as subject guides and course pages. The paper discusses many of the available options from the standpoint of a mid-sized academic library that is evaluating alternatives to static-HTML subject guides. Static HTML guides, while useful, have proven difficult and time-consuming to maintain. The article includes a discussion of open source database-driven solutions (such as SubjectsPlus, LibData, Research Guide, and Library Course Builder), wikis, and social tagging sites like del.icio.us. This article discusses both the functionality and the relative strengths and weaknesses of each of these options.

  16. Some observational aspects of compact galactic X-ray sources

    International Nuclear Information System (INIS)

    Heise, J.

    1982-01-01

    This thesis contains the following observations of compact galactic X-ray sources: i) the X-ray experiments onboard the Astronomical Netherlands Satellite ANS, ii) a rocket-borne ultra soft X-ray experiment and iii) the Objective Grating Spectrometer onboard the EINSTEIN observatory. In Chapter I the various types of compact galactic X-ray sources are reviewed and put into the perspective of earlier and following observations. In Chapter II the author presents some of the observations of high luminosity X-ray sources, made with ANS, including the detection of soft X-rays from the compact X-ray binary Hercules X-1 and the ''return to the high state'' of the black hole candidate Cygnus X-1. Chapter III deals with transient X-ray phenomena. Results on low luminosity galactic X-ray sources are collected in Chapter IV. (Auth.)

  17. Construction of a nasopharyngeal carcinoma 2D/MS repository with Open Source XML Database – Xindice

    Directory of Open Access Journals (Sweden)

    Li Jianling

    2006-01-01

    Full Text Available Abstract Background Many proteomics initiatives require integration of all information, with uniform criteria, from collection of samples and data display to publication of experimental results. Integrating and exchanging these data of different formats and structures poses a great challenge. XML technology shows promise for handling this task due to its simplicity and flexibility. Nasopharyngeal carcinoma (NPC) is one of the most common cancers in southern China and Southeast Asia, with marked geographic and racial differences in incidence. Although there are some cancer proteome databases now, there is still no NPC proteome database. Results The raw NPC proteome experiment data were captured into one XML document with the Human Proteome Markup Language (HUP-ML) editor and imported into the native XML database Xindice. The 2D/MS repository of the NPC proteome was constructed with Apache, PHP and Xindice to provide access to the database via the Internet. On our website, two methods, keyword query and click query, are provided to access the entries of the NPC proteome database. Conclusion Our 2D/MS repository can be used to share the raw NPC proteomics data generated from gel-based proteomics experiments. The database, as well as the PHP source code for constructing users' own proteome repositories, can be accessed at http://www.xyproteomics.org/.
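As a loose illustration of the "keyword query" access path against an XML store, the sketch below runs a keyword match over a tiny stand-in document using Python's standard library. The real repository queries HUP-ML documents in Xindice (typically via XPath), and the element names here are invented:

```python
import xml.etree.ElementTree as ET

# A tiny stand-in document; the real repository stores HUP-ML, whose
# element names differ -- the tags here are illustrative only.
doc = ET.fromstring("""
<experiments>
  <spot id="1"><protein>Keratin</protein><mass>52000</mass></spot>
  <spot id="2"><protein>Annexin A1</protein><mass>38700</mass></spot>
</experiments>
""")

def keyword_query(root, keyword):
    """Return spot ids whose protein name contains the keyword,
    mimicking the repository's keyword-query access method."""
    return [spot.get("id")
            for spot in root.findall("spot")
            if keyword.lower() in spot.findtext("protein", "").lower()]

hits = keyword_query(doc, "annexin")
```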

  18. A high-energy nuclear database proposal

    International Nuclear Information System (INIS)

    Brown, D.A.; Vogt, R.; UC Davis, CA

    2006-01-01

    We propose to develop a high-energy heavy-ion experimental database and make it accessible to the scientific community through an on-line interface. This database will be searchable and cross-indexed with relevant publications, including published detector descriptions. Since this database will be a community resource, it requires the high-energy nuclear physics community's financial and manpower support. This database should eventually contain all published data from the Bevalac, AGS and SPS to RHIC and LHC energies, proton-proton to nucleus-nucleus collisions as well as other relevant systems, and all measured observables. Such a database would have tremendous scientific payoff as it makes systematic studies easier and allows simpler benchmarking of theoretical models to a broad range of old and new experiments. Furthermore, there is a growing need for compilations of high-energy nuclear data for applications including stockpile stewardship, technology development for inertial confinement fusion and target and source development for upcoming facilities such as the Next Linear Collider. To enhance the utility of this database, we propose periodically performing evaluations of the data and summarizing the results in topical reviews. (author)

  19. Global search tool for the Advanced Photon Source Integrated Relational Model of Installed Systems (IRMIS) database

    International Nuclear Information System (INIS)

    Quock, D.E.R.; Cianciarulo, M.B.

    2007-01-01

    The Integrated Relational Model of Installed Systems (IRMIS) is a relational database tool that has been implemented at the Advanced Photon Source to maintain an updated account of approximately 600 control system software applications, 400,000 process variables, and 30,000 control system hardware components. To effectively display this large amount of control system information to operators and engineers, IRMIS was initially built with nine Web-based viewers: Applications Organizing Index, IOC, PLC, Component Type, Installed Components, Network, Controls Spares, Process Variables, and Cables. However, since each viewer is designed to provide details from only one major category of the control system, the necessity for a one-stop global search tool for the entire database became apparent. The user requirements for extremely fast database search time and ease of navigation through search results led to the choice of Asynchronous JavaScript and XML (AJAX) technology in the implementation of the IRMIS global search tool. Unique features of the global search tool include a two-tier level of displayed search results, and a database data integrity validation and reporting mechanism.

  20. A Unified Satellite-Observation Polar Stratospheric Cloud (PSC) Database for Long-Term Climate-Change Studies

    Science.gov (United States)

    Fromm, Michael; Pitts, Michael; Alfred, Jerome

    2000-01-01

    This report summarizes the project team's activity and accomplishments during the period 12 February 1999 - 12 February 2000. The primary objective of this project was to create and test a generic algorithm for detecting polar stratospheric clouds (PSCs), an algorithm that would permit creation of a unified, long-term PSC database from a variety of solar occultation instruments that measure aerosol extinction near 1000 nm. The second objective was to make a database of PSC observations and certain relevant related datasets. In this report we describe the algorithm, the data we are making available, and user access options. The remainder of this document provides the details of the algorithm and the database offering.
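The summary does not reproduce the detection algorithm itself. As a loose illustration of the generic idea (flagging enhanced 1000-nm aerosol extinction against a background level), a sketch might look like the following; the threshold factor is an assumption for illustration, not a value from the report:

```python
def detect_psc(extinction, background, threshold=2.0):
    """Illustrative sketch (not the report's actual algorithm): flag a
    polar stratospheric cloud wherever the measured 1000-nm aerosol
    extinction exceeds `threshold` times the background aerosol level."""
    return [ext > threshold * bkg for ext, bkg in zip(extinction, background)]

# Three altitude bins: only the middle one shows strong enhancement.
flags = detect_psc([1.0e-4, 9.0e-4, 1.2e-4], [1.0e-4, 1.1e-4, 1.0e-4])
```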

  1. Proposal for a High Energy Nuclear Database

    International Nuclear Information System (INIS)

    Brown, David A.; Vogt, Ramona

    2005-01-01

    We propose to develop a high-energy heavy-ion experimental database and make it accessible to the scientific community through an on-line interface. This database will be searchable and cross-indexed with relevant publications, including published detector descriptions. Since this database will be a community resource, it requires the high-energy nuclear physics community's financial and manpower support. This database should eventually contain all published data from Bevalac and AGS to RHIC to CERN-LHC energies, proton-proton to nucleus-nucleus collisions as well as other relevant systems, and all measured observables. Such a database would have tremendous scientific payoff as it makes systematic studies easier and allows simpler benchmarking of theoretical models to a broad range of old and new experiments. Furthermore, there is a growing need for compilations of high-energy nuclear data for applications including stockpile stewardship, technology development for inertial confinement fusion and target and source development for upcoming facilities such as the Next Linear Collider. To enhance the utility of this database, we propose periodically performing evaluations of the data and summarizing the results in topical reviews

  2. Numerical Procedure to Forecast the Tsunami Parameters from a Database of Pre-Simulated Seismic Unit Sources

    Science.gov (United States)

    Jiménez, César; Carbonel, Carlos; Rojas, Joel

    2018-04-01

    We have implemented a numerical procedure to forecast the parameters of a tsunami, such as the arrival time of the front of the first wave and the maximum wave height, at real and virtual tidal stations along the Peruvian coast. For this purpose, a database of pre-computed synthetic tsunami waveforms (or Green's functions) was obtained from numerical simulation of seismic unit sources (dimension: 50 × 50 km²) for subduction zones from southern Chile to northern Mexico. A bathymetry resolution of 30 arc-sec (approximately 927 m) was used. The resulting tsunami waveform is obtained from the superposition of the synthetic waveforms corresponding to the seismic unit sources contained within the tsunami source geometry. The numerical procedure was applied to the Chilean tsunami of April 1, 2014. The results show a very good correlation for stations with wave amplitudes greater than 1 m: for the Arica tide station an error (relative to the maximum height of the observed and simulated waveforms) of 3.5% was obtained, for the Callao station the error was 12%, and the largest error, 53.5%, was at Chimbote; however, given the low amplitude of the wave at Chimbote (<1 m), this overestimate is not important for evacuation purposes. The aim of the present research is tsunami early warning, where speed is required rather than accuracy, so the results should be taken as preliminary.
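The superposition step described above can be sketched as follows, assuming linearity of the shallow-water response; the waveforms and slip weights are illustrative numbers, not actual Green's functions:

```python
def superpose_waveforms(unit_waveforms, slips):
    """Forecasting step: the tsunami waveform at a tide station is the
    slip-weighted sum of pre-computed unit-source waveforms (Green's
    functions) covering the rupture area. Linearity is assumed."""
    n = len(unit_waveforms[0])
    composite = [0.0] * n
    for wf, slip in zip(unit_waveforms, slips):
        for i in range(n):
            composite[i] += slip * wf[i]
    return composite

# Two unit sources, three time samples each (metres; illustrative).
green = [[0.0, 0.10, 0.05], [0.0, 0.02, 0.08]]
forecast = superpose_waveforms(green, slips=[2.0, 1.0])
```

Because only the weights change per event, the composite waveform is available almost instantly once the source geometry is estimated, which is the point of the pre-simulated database.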

  3. Integrated Tsunami Database: simulation and identification of seismic tsunami sources, 3D visualization and post-disaster assessment on the shore

    Science.gov (United States)

    Krivorot'ko, Olga; Kabanikhin, Sergey; Marinin, Igor; Karas, Adel; Khidasheli, David

    2013-04-01

    One of the most important problems of tsunami investigation is the problem of seismic tsunami source reconstruction. The non-profit organization WAPMERR (http://wapmerr.org) has provided a historical database of presumed tsunami sources around the world, obtained with the help of information about seaquakes. WAPMERR also has a database of observations of tsunami waves in coastal areas. The main idea of this presentation consists in determining the tsunami source parameters using seismic data and observations of the tsunami waves on the shore, and in expanding and refining the database of presupposed tsunami sources for operative and accurate prediction of hazards and assessment of risks and consequences. We also present 3D visualization of real-time tsunami wave propagation and loss assessment, characterizing the nature of the building stock in cities at risk, and monitoring by satellite images using the modern GIS technology ITRIS (Integrated Tsunami Research and Information System) developed by WAPMERR and Informap Ltd. The special scientific plug-in components are embedded in a specially developed GIS-type graphic shell for easy data retrieval, visualization and processing. The most suitable physical models related to the simulation of tsunamis are based on shallow water equations. We consider the initial-boundary value problem in Ω := {(x,y) ∈ R² : x ∈ (0,Lx), y ∈ (0,Ly), Lx, Ly > 0} for the well-known linear shallow water equations in the Cartesian coordinate system, in terms of the liquid flow components in dimensional form. Here η(x,y,t) defines the free water surface vertical displacement, i.e. the amplitude of the tsunami wave, and q(x,y) is the initial amplitude of the tsunami wave. The lateral boundary is assumed to be a non-reflecting boundary of the domain, that is, it allows the free passage of the propagating waves. Assume that the free surface oscillation data at points (xm, ym) are given as measured output data from tsunami records: fm(t) := η(xm, ym, t), (xm
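The "well-known linear shallow water equations" referred to above did not survive extraction as a display equation; reconstructed here in their standard textbook form, with H the undisturbed water depth, g the gravitational acceleration, and (u, v) the flow components (the authors' exact formulation may differ in details such as friction or Coriolis terms):

```latex
\begin{aligned}
\frac{\partial \eta}{\partial t} + \frac{\partial (Hu)}{\partial x}
  + \frac{\partial (Hv)}{\partial y} &= 0, \\
\frac{\partial u}{\partial t} + g\,\frac{\partial \eta}{\partial x} &= 0, \qquad
\frac{\partial v}{\partial t} + g\,\frac{\partial \eta}{\partial y} = 0, \\
\eta\big|_{t=0} &= q(x,y), \quad (x,y) \in \Omega .
\end{aligned}
```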

  4. Software listing: CHEMTOX database

    International Nuclear Information System (INIS)

    Moskowitz, P.D.

    1993-01-01

    Initially launched in 1983, the CHEMTOX Database was among the first microcomputer databases containing hazardous chemical information. The database is used in many industries and government agencies in more than 17 countries. Updated quarterly, the CHEMTOX Database provides detailed environmental and safety information on 7500-plus hazardous substances covered by dozens of regulatory and advisory sources. This brief listing describes the method of accessing data and provides ordering information for those wishing to obtain the CHEMTOX Database

  5. Databases of Publications and Observations as a Part of the Crimean Astronomical Virtual Observatory

    Directory of Open Access Journals (Sweden)

    Shlyapnikov A.

    2015-12-01

    Full Text Available We describe the main principles of formation of databases (DBs with information about astronomical objects and their physical characteristics derived from observations obtained at the Crimean Astrophysical Observatory (CrAO and published in the “Izvestiya of the CrAO” and elsewhere. Emphasis is placed on the DBs missing from the most complete global library of catalogs and data tables, VizieR (supported by the Center of Astronomical Data, Strasbourg. We specially consider the problem of forming a digital archive of observational data obtained at the CrAO as an interactive DB related to database objects and publications. We present examples of all our DBs as elements integrated into the Crimean Astronomical Virtual Observatory. We illustrate the work with the CrAO DBs using tools of the International Virtual Observatory: Aladin, VOPlot, VOSpec, in conjunction with the VizieR and Simbad DBs.

  6. Estimation of Source and Attenuation Parameters from Ground Motion Observations for Induced Seismicity in Alberta

    Science.gov (United States)

    Novakovic, M.; Atkinson, G. M.

    2015-12-01

    We use a generalized inversion to solve for site response, regional source and attenuation parameters, in order to define a region-specific ground-motion prediction equation (GMPE) from ground motion observations in Alberta, following the method of Atkinson et al. (2015 BSSA). The database is compiled from over 200 small to moderate seismic events (M 1 to 4.2) recorded at ~50 regional stations (distances from 30 to 500 km), over the last few years; almost all of the events have been identified as being induced by oil and gas activity. We remove magnitude scaling and geometric spreading functions from observed ground motions and invert for stress parameter, regional attenuation and site amplification. Resolving these parameters allows for the derivation of a regionally-calibrated GMPE that can be used to accurately predict amplitudes across the region in real time, which is useful for ground-motion-based alerting systems and traffic light protocols. The derived GMPE has further applications for the evaluation of hazards from induced seismicity.
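As a rough sketch of one ingredient of such a generalized inversion (not the actual multi-parameter method of Atkinson et al.), the following separates an assumed 1/R geometric spreading from the remaining distance decay and recovers the anelastic attenuation coefficient by least squares:

```python
import math

def fit_attenuation(amps, dists):
    """After removing an assumed 1/R geometric spreading, fit log10
    residual amplitude vs. distance by least squares; the slope is a
    regional anelastic attenuation coefficient. A simplification of the
    full inversion, which also solves for source and site terms."""
    y = [math.log10(a) + math.log10(r) for a, r in zip(amps, dists)]
    n = len(dists)
    mx = sum(dists) / n
    my = sum(y) / n
    sxx = sum((r - mx) ** 2 for r in dists)
    sxy = sum((r - mx) * (yi - my) for r, yi in zip(dists, y))
    return sxy / sxx

# Synthetic amplitudes decaying as (1/R) * 10**(-0.002 R):
dists = [50.0, 100.0, 200.0, 400.0]
amps = [10 ** (-0.002 * r) / r for r in dists]
coeff = fit_attenuation(amps, dists)
```

On this synthetic input the fit recovers the attenuation coefficient of -0.002 per km exactly, illustrating how the spreading and attenuation terms are disentangled.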

  7. astroplan: An Open Source Observation Planning Package in Python

    Science.gov (United States)

    Morris, Brett M.; Tollerud, Erik; Sipőcz, Brigitta; Deil, Christoph; Douglas, Stephanie T.; Berlanga Medina, Jazmin; Vyhmeister, Karl; Smith, Toby R.; Littlefair, Stuart; Price-Whelan, Adrian M.; Gee, Wilfred T.; Jeschke, Eric

    2018-03-01

    We present astroplan—an open source, open development, Astropy affiliated package for ground-based observation planning and scheduling in Python. astroplan is designed to provide efficient access to common observational quantities such as celestial rise, set, and meridian transit times and simple transformations from sky coordinates to altitude-azimuth coordinates without requiring a detailed understanding of astropy’s implementation of coordinate systems. astroplan provides convenience functions to generate common observational plots such as airmass and parallactic angle as a function of time, along with basic sky (finder) charts. Users can determine whether or not a target is observable given a variety of observing constraints, such as airmass limits, time ranges, Moon illumination/separation ranges, and more. A selection of observation schedulers are included that divide observing time among a list of targets, given observing constraints on those targets. Contributions to the source code from the community are welcome.
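As an illustration of the kind of observational quantity astroplan exposes, the plane-parallel airmass approximation can be sketched in a few lines. Note that astroplan itself computes altitudes via full astropy coordinate transformations rather than this shortcut, and its `AirmassConstraint` is the real counterpart of the toy constraint below:

```python
import math

def airmass(altitude_deg):
    """Plane-parallel airmass, sec(z): the simplest version of one of the
    'common observational quantities' discussed above."""
    zenith = math.radians(90.0 - altitude_deg)
    return 1.0 / math.cos(zenith)

def observable(altitude_deg, max_airmass=2.0):
    """Toy airmass-limit observing constraint in the spirit of
    astroplan's AirmassConstraint."""
    return altitude_deg > 0 and airmass(altitude_deg) <= max_airmass

x = airmass(90.0)      # target at zenith
ok = observable(30.0)  # 30 deg altitude corresponds to airmass ~2
```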

  8. Can earthquake source inversion benefit from rotational ground motion observations?

    Science.gov (United States)

    Igel, H.; Donner, S.; Reinwald, M.; Bernauer, M.; Wassermann, J. M.; Fichtner, A.

    2015-12-01

    With the prospect of instruments that can observe rotational ground motions over a wide frequency and amplitude range in the near future, we engage with the question of how this type of ground motion observation can be used to solve seismic inverse problems. Here we focus on whether point- or finite-source inversions can benefit from additional observations of rotational motions. To make the comparison fair, we compare observations from a surface seismic network with N 3-component translational sensors (classic seismometers) with those obtained with N/2 6-component sensors (with additional colocated 3-component rotational motions), thus keeping the overall number of traces constant. Synthetic seismograms are calculated for known point- or finite-source properties. The corresponding inverse problem is posed in a probabilistic way, using the Shannon information content as a measure of how well the observations constrain the seismic source properties. The results show that with the 6-C subnetworks the source properties are not only equally well recovered (which alone would be beneficial because of the substantially reduced logistics of installing N/2 sensors), but some source properties are almost always better resolved, with statistical significance. We attribute this to the fact that the (in particular vertical) gradient information is contained in the additional rotational motion components. We compare these effects for strike-slip and normal-faulting sources. Thus the answer to the question raised is a definite "yes". The challenge now is to demonstrate these effects on real data.

  9. [Stroke mortality in Poland--role of observational studies based on computer databases].

    Science.gov (United States)

    Mazurek, Maciej

    2005-01-01

    Stroke is a leading cause of death worldwide and remains one of the major public health problems. Most European countries have experienced declines in stroke mortality, in contrast to central and eastern European countries, including Poland. The World Health Organization Data Bank is an invaluable source of information, especially for mortality trends. Stroke mortality in Poland and some problems with the accuracy of ICD coding for the identification of patients with acute stroke are discussed. Computerized databases are increasingly being used to identify patients with acute stroke for epidemiological, quality-of-care, and cost studies. More accurate methods of collecting and analyzing the data should be implemented to gain more information from these databases.

  10. MILAGRO OBSERVATIONS OF MULTI-TeV EMISSION FROM GALACTIC SOURCES IN THE FERMI BRIGHT SOURCE LIST

    International Nuclear Information System (INIS)

    Abdo, A. A.; Linnemann, J. T.; Allen, B. T.; Chen, C.; Aune, T.; Berley, D.; Goodman, J. A.; Christopher, G. E.; Kolterman, B. E.; Mincer, A. I.; Nemethy, P.; DeYoung, T.; Dingus, B. L.; Hoffman, C. M.; Ellsworth, R. W.; Gonzalez, M. M.; Hays, E.; McEnery, J. E.; Huentemeyer, P. H.; Morgan, T.

    2009-01-01

    We present the result of a search of the Milagro sky map for spatial correlations with sources from a subset of the recent Fermi Bright Source List (BSL). The BSL consists of the 205 most significant sources detected above 100 MeV by the Fermi Large Area Telescope. We select sources based on their categorization in the BSL, taking all confirmed or possible Galactic sources in the field of view of Milagro. Of the 34 Fermi sources selected, 14 are observed by Milagro at a significance of 3 standard deviations or more. We conduct this search with a new analysis which employs newly optimized gamma-hadron separation and utilizes the full eight-year Milagro data set. Milagro is sensitive to gamma rays with energy from 1 to 100 TeV with a peak sensitivity from 10 to 50 TeV depending on the source spectrum and declination. These results extend the observation of these sources far above the Fermi energy band. With the new analysis and additional data, multi-TeV emission is definitively observed associated with the Fermi pulsar, J2229.0+6114, in the Boomerang pulsar wind nebula (PWN). Furthermore, an extended region of multi-TeV emission is associated with the Fermi pulsar, J0634.0+1745, the Geminga pulsar.

  11. Locating industrial VOC sources with aircraft observations

    International Nuclear Information System (INIS)

    Toscano, P.; Gioli, B.; Dugheri, S.; Salvini, A.; Matese, A.; Bonacchi, A.; Zaldei, A.; Cupelli, V.; Miglietta, F.

    2011-01-01

    Observation and characterization of environmental pollution, focusing on Volatile Organic Compounds (VOCs), in a high-risk industrial area are particularly important in order to provide indications on safe levels of exposure, indicate possible priorities and advise on policy interventions. The aim of this study is to use the Solid Phase Micro Extraction (SPME) method to measure VOCs, directly coupled with atmospheric measurements taken on a small-aircraft environmental platform, to evaluate and locate VOC emission sources in the Marghera industrial area. Lab analysis of the collected SPME fibres and subsequent analysis of the mass spectra and chromatograms in Scan Mode allowed the detection of a wide range of VOCs. The combination of this information during the monitoring campaign allowed a model (Gaussian plume) to be implemented that estimates the location of emission sources on the ground. - Highlights: → Flight plan aimed at sampling the industrial area at various altitudes and locations. → SPME sampling strategy was based on plume detection by means of CO2. → Concentrations obtained were lower than the limit values or below the detection limit. → Scan Mode highlighted the presence of the γ-butyrolactone (GBL) compound. → Gaussian dispersion modelling was used to estimate GBL source location and strength. - An integrated strategy based on atmospheric aircraft observations and dispersion modelling was developed, aimed at estimating the spatial location and strength of VOC point-source emissions in industrial areas.
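A minimal sketch of the textbook Gaussian plume forward model of the kind used in such source-location work follows; the study's actual model configuration is not given in this summary, and the dispersion widths are supplied directly here instead of from stability-class formulas:

```python
import math

def plume_concentration(q, u, y, z, h, sigma_y, sigma_z):
    """Textbook Gaussian plume concentration with ground reflection:
    q is the emission rate, u the wind speed, h the effective release
    height, y the crosswind offset, z the receptor height."""
    lateral = math.exp(-y ** 2 / (2 * sigma_y ** 2))
    vertical = (math.exp(-(z - h) ** 2 / (2 * sigma_z ** 2)) +
                math.exp(-(z + h) ** 2 / (2 * sigma_z ** 2)))
    return q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Centerline, ground-level concentration from an elevated source
# (illustrative parameter values):
c = plume_concentration(q=1.0, u=5.0, y=0.0, z=0.0, h=20.0,
                        sigma_y=30.0, sigma_z=15.0)
```

Inverting such a forward model against the aircraft-measured concentrations is what allows the source location and strength to be estimated.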

  12. Inner Source Pickup Ions Observed by Ulysses

    Science.gov (United States)

    Gloeckler, G.

    2016-12-01

    The existence of an inner source of pickup ions close to the Sun was proposed in order to explain the unexpected discovery of C+ in the high-speed polar solar wind. Here I report on detailed analyses of the composition and the radial and latitudinal variations of inner source pickup ions measured with the Solar Wind Ion Composition Spectrometer on Ulysses from 1991 to 1998, approaching and during solar minimum. We find that the C+ intensity drops off with radial distance R as R^-1.53, peaks at mid latitudes and drops to its lowest value in the ecliptic. Not only was C+ observed, but also N+, O+, Ne+, Na+, Mg+, Ar+, S+, K+, CH+, NH+, OH+, H2O+, H3O+, MgH+, HCN+, C2H4+, SO+ and many other singly-charged heavy ions and molecular ions. The measured velocity distributions of inner source pickup C+ and O+ indicate that these inner source pickup ions are most likely produced by charge exchange, photoionization and electron impact ionization of neutrals close to the Sun (within 10 to 30 solar radii). Possible causes for the unexpected latitudinal variations and the neutral source(s) producing the inner source pickup ions as well as plausible production mechanisms for inner source pickup ions will be discussed.

  13. Proposal for a high-energy nuclear database

    International Nuclear Information System (INIS)

    Brown, D.A.; Vogt, R.

    2006-01-01

    We propose to develop a high-energy heavy-ion experimental database and make it accessible to the scientific community through an on-line interface. This database will be searchable and cross-indexed with relevant publications, including published detector descriptions. Since this database will be a community resource, it requires the high-energy nuclear physics community's financial and manpower support. This database should eventually contain all published data from Bevalac, AGS and SPS to RHIC and LHC energies, proton-proton to nucleus-nucleus collisions as well as other relevant systems, and all measured observables. Such a database would have tremendous scientific payoff as it makes systematic studies easier and allows simpler benchmarking of theoretical models to a broad range of old and new experiments. Furthermore, there is a growing need for compilations of high-energy nuclear data for applications including stockpile stewardship, technology development for inertial confinement fusion and target and source development for upcoming facilities such as the Next Linear Collider. To enhance the utility of this database, we propose periodically performing evaluations of the data and summarizing the results in topical reviews. (author)

  14. Proposal for a High Energy Nuclear Database

    International Nuclear Information System (INIS)

    Brown, D A; Vogt, R

    2005-01-01

    The authors propose to develop a high-energy heavy-ion experimental database and make it accessible to the scientific community through an on-line interface. This database will be searchable and cross-indexed with relevant publications, including published detector descriptions. Since this database will be a community resource, it requires the high-energy nuclear physics community's financial and manpower support. This database should eventually contain all published data from Bevalac, AGS and SPS to RHIC and CERN-LHC energies, proton-proton to nucleus-nucleus collisions as well as other relevant systems, and all measured observables. Such a database would have tremendous scientific payoff as it makes systematic studies easier and allows simpler benchmarking of theoretical models to a broad range of old and new experiments. Furthermore, there is a growing need for compilations of high-energy nuclear data for applications including stockpile stewardship, technology development for inertial confinement fusion and target and source development for upcoming facilities such as the Next Linear Collider. To enhance the utility of this database, they propose periodically performing evaluations of the data and summarizing the results in topical reviews

  15. Detailed observations of the source of terrestrial narrowband electromagnetic radiation

    Science.gov (United States)

    Kurth, W. S.

    1982-01-01

    Detailed observations are presented of a region near the terrestrial plasmapause where narrowband electromagnetic radiation (previously called escaping nonthermal continuum radiation) is being generated. These observations show a direct correspondence between the narrowband radio emissions and electron cyclotron harmonic waves near the upper hybrid resonance frequency. In addition, electromagnetic radiation propagating in the Z-mode is observed in the source region which provides an extremely accurate determination of the electron plasma frequency and, hence, density profile of the source region. The data strongly suggest that electrostatic waves and not Cerenkov radiation are the source of the banded radio emissions and define the coupling which must be described by any viable theory.

  16. Directory of IAEA databases

    International Nuclear Information System (INIS)

    1991-11-01

    The first edition of the Directory of IAEA Databases is intended to describe the computerized information sources available to IAEA staff members. It contains a listing of all databases produced at the IAEA, together with information on their availability

  17. Validating the extract, transform, load process used to populate a large clinical research database.

    Science.gov (United States)

    Denney, Michael J; Long, Dustin M; Armistead, Matthew G; Anderson, Jamie L; Conway, Baqiyyah N

    2016-10-01

    Informaticians at any institution developing clinical research support infrastructure are tasked with populating research databases with data extracted and transformed from their institution's operational databases, such as electronic health records (EHRs). These data must be properly extracted from the source systems, transformed into a standard data structure, and then loaded into the data warehouse while maintaining their integrity. We validated the correctness of the extract, transform, and load (ETL) process used to populate the West Virginia Clinical and Translational Science Institute's Integrated Data Repository (IDR), a clinical data warehouse that includes data extracted from two EHR systems. Four hundred ninety-eight observations were randomly selected from the IDR and compared with the two source EHR systems. Of the 498 observations, 479 were concordant and 19 discordant. The discordant observations fell into three general categories: a) design decision differences between the IDR and the source EHRs, b) timing differences, and c) user interface settings. After resolving apparent discordances, our IDR was found to be 100% accurate relative to its source EHR systems. Any institution that uses a clinical data warehouse populated by extraction processes from operational databases, such as EHRs, employs some form of ETL process. As secondary use of EHR data begins to transform the research landscape, the importance of basic validation of the extracted EHR data cannot be overstated, and it should start with validation of the extraction process itself. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
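
    The validation described above amounts to drawing a random sample of warehouse observations and comparing them field by field against the merged source records. A minimal sketch (the record layout and field names are hypothetical, not the IDR's actual schema):

```python
import random

def validate_sample(warehouse, sources, fields, n=498, seed=42):
    """Compare a random sample of warehouse observations with source records.

    warehouse: dict mapping observation id -> record (dict of field -> value)
    sources:   dict of the same shape, merged from the source systems
    Returns (concordant_ids, discordant_ids).
    """
    rng = random.Random(seed)
    sample = rng.sample(sorted(warehouse), min(n, len(warehouse)))
    concordant, discordant = [], []
    for obs_id in sample:
        src = sources.get(obs_id)
        ok = src is not None and all(
            warehouse[obs_id].get(f) == src.get(f) for f in fields
        )
        (concordant if ok else discordant).append(obs_id)
    return concordant, discordant
```

    Discordant identifiers would then be reviewed by hand, as in the study, to decide whether each mismatch is a design decision, a timing difference, or a display artifact.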

  18. Development of the Global Earthquake Model’s neotectonic fault database

    Science.gov (United States)

    Christophersen, Annemarie; Litchfield, Nicola; Berryman, Kelvin; Thomas, Richard; Basili, Roberto; Wallace, Laura; Ries, William; Hayes, Gavin P.; Haller, Kathleen M.; Yoshioka, Toshikazu; Koehler, Richard D.; Clark, Dan; Wolfson-Schwehr, Monica; Boettcher, Margaret S.; Villamor, Pilar; Horspool, Nick; Ornthammarath, Teraphan; Zuñiga, Ramon; Langridge, Robert M.; Stirling, Mark W.; Goded, Tatiana; Costa, Carlos; Yeats, Robert

    2015-01-01

    The Global Earthquake Model (GEM) aims to develop uniform, openly available standards, datasets, and tools for worldwide seismic risk assessment through global collaboration, transparent communication and adapting state-of-the-art science. GEM Faulted Earth (GFE) is one of GEM’s global hazard module projects. This paper describes GFE’s development of a modern neotectonic fault database and a unique graphical interface for the compilation of new fault data. A key design principle is that of an electronic field notebook for capturing the observations a geologist would make about a fault. The database is designed to accommodate abundant as well as sparse fault observations. It features two layers, one for capturing neotectonic fault and fold observations, and the other for calculating potential earthquake fault sources from the observations. In order to test the flexibility of the database structure and to start a global compilation, five preexisting databases have been uploaded to the first layer and two to the second. In addition, the GFE project has characterised the world’s approximately 55,000 km of subduction interfaces in a globally consistent manner as a basis for generating earthquake event sets for inclusion in earthquake hazard and risk modelling. Following the subduction interface fault schema and including the trace attributes of the GFE database schema, the 2500-km-long frontal thrust fault system of the Himalaya has also been characterised. We propose that the database structure be used widely, so that neotectonic fault data can make a more complete and beneficial contribution to seismic hazard and risk characterisation globally.
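
    The two-layer design (field observations in, derived fault sources out) can be sketched with hypothetical record types; the field names below are illustrative, not the actual GFE schema:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class FaultObservation:
    """Layer 1: a single observation of a neotectonic fault or fold.

    Fields are optional because the database must accommodate sparse
    as well as abundant observations.
    """
    fault_name: str
    slip_rate_mm_yr: Optional[float] = None
    dip_deg: Optional[float] = None
    trace_length_km: Optional[float] = None
    notes: str = ""

@dataclass
class EarthquakeFaultSource:
    """Layer 2: a fault source computed from one or more observations."""
    name: str
    observations: List[FaultObservation] = field(default_factory=list)

    def mean_slip_rate(self) -> Optional[float]:
        """Average the slip rates that were actually observed, if any."""
        rates = [o.slip_rate_mm_yr for o in self.observations
                 if o.slip_rate_mm_yr is not None]
        return sum(rates) / len(rates) if rates else None
```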

  19. MERLIN observations of steep-spectrum radio sources at 6 cm

    International Nuclear Information System (INIS)

    Akujor, C.E.; Zhang, F.J.; Fanti, C.

    1991-01-01

    We present high-resolution observations of steep-spectrum radio sources made with MERLIN at 5 GHz. Thirty-one objects, comprising 11 quasars and 20 galaxies, most of them being 'Compact Steep-Spectrum' sources (CSSs), have been mapped with resolutions from 80 to 150 mas. This completes the current series of observations of CSS sources made with MERLIN at 5 GHz. We find that the majority of the quasars have complex structures, while galaxies tend to have double or triple structures, consistent with other recent studies of CSSs. (author)

  20. The Bologna complete sample of nearby radio sources. II. Phase referenced observations of faint nuclear sources

    Science.gov (United States)

    Liuzzo, E.; Giovannini, G.; Giroletti, M.; Taylor, G. B.

    2009-10-01

    Aims: To study statistical properties of different classes of sources, it is necessary to observe a sample that is free of selection effects. To do this, we initiated a project to observe a complete sample of radio galaxies selected from the B2 Catalogue of Radio Sources and the Third Cambridge Revised Catalogue (3CR), with no selection constraint on the nuclear properties. We named this sample “the Bologna Complete Sample” (BCS). Methods: We present new VLBI observations at 5 and 1.6 GHz for 33 sources drawn from a sample not biased toward orientation. By combining these data with those in the literature, information on the parsec-scale morphology is available for a total of 76 of 94 radio sources with a range in radio power and kiloparsec-scale morphologies. Results: The fraction of two-sided sources at milliarcsecond resolution is high (30%), compared to the fraction found in VLBI surveys selected at centimeter wavelengths, as expected from the predictions of unified models. The parsec-scale jets are generally found to be straight and to line up with the kiloparsec-scale jets. A few peculiar sources are discussed in detail. Tables 1-4 are only available in electronic form at http://www.aanda.org

  1. A multidisciplinary database for geophysical time series management

    Science.gov (United States)

    Montalto, P.; Aliotta, M.; Cassisi, C.; Prestifilippo, M.; Cannata, A.

    2013-12-01

    The variables collected by a sensor network constitute a heterogeneous data source that must be properly organized before it can be used in research and geophysical monitoring. The term time series refers to a set of observations of a given phenomenon acquired sequentially in time; when the time intervals are equally spaced, one speaks of the sampling period or sampling frequency. Our work describes in detail a methodology for storing and managing time series using a specific data structure. We designed a framework, hereinafter called TSDSystem (Time Series Database System), to acquire time series from different data sources and standardize them within a relational database. This standardization makes it possible to perform operations, such as queries and visualization, across many measures, synchronizing them on a common time scale. The proposed architecture follows a multiple-layer paradigm (Loaders layer, Database layer and Business Logic layer). Each layer is specialized in particular operations for reorganizing and archiving data from different sources, such as ASCII, Excel, ODBC (Open DataBase Connectivity), and files accessible from the Internet (web pages, XML). In particular, the Loaders layer performs a check of the working status of each running software component through a heartbeat system, in order to automate the discovery of acquisition issues and other warning conditions. Although our system has to manage huge amounts of data, performance is guaranteed by a smart table-partitioning strategy that keeps the percentage of data stored in each database table balanced. TSDSystem also contains modules for the visualization of acquired data, which make it possible to query different time series over a specified time range, or to follow the real-time signal acquisition, subject to a per-user data access policy.
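
    One core operation the abstract describes is synchronizing heterogeneous series onto a common time scale. A minimal sketch of such a resampling step (nearest-sample mapping with a gap threshold; an illustration of the idea, not TSDSystem's actual implementation):

```python
from bisect import bisect_left

def resample_to_grid(times, values, grid, max_gap):
    """Map an irregularly sampled series onto a common time grid.

    times must be sorted. For each grid instant, take the nearest original
    sample if it lies within max_gap seconds; otherwise emit None (a gap).
    """
    out = []
    for t in grid:
        i = bisect_left(times, t)
        # The nearest sample is either just before or just after t.
        candidates = [j for j in (i - 1, i) if 0 <= j < len(times)]
        j = min(candidates, key=lambda k: abs(times[k] - t), default=None)
        if j is not None and abs(times[j] - t) <= max_gap:
            out.append(values[j])
        else:
            out.append(None)
    return out
```

    Once every series is expressed on the same grid, cross-channel queries and plots reduce to aligning rows by grid index.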

  2. International patent analysis of water source heat pump based on orbit database

    Science.gov (United States)

    Li, Na

    2018-02-01

    Using the Orbit database, this paper analyses international patents in the water source heat pump (WSHP) industry with patent analysis methods such as analysis of publication tendency, geographical distribution, technology leaders and top assignees. It is found that the beginning of the 21st century was a period of rapid growth in WSHP patent applications. Germany and the United States researched and developed WSHP technology early on, but Japan and China have now become important countries for patent applications. China has been developing ever faster in recent years, but its patents are concentrated in universities and urgently need to be transferred to industry. Through an objective analysis, this paper aims to provide appropriate decision references for the development of the domestic WSHP industry.

  3. THE SOURCE STRUCTURE OF 0642+449 DETECTED FROM THE CONT14 OBSERVATIONS

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Ming H.; Wang, Guang L. [Shanghai Astronomical Observatory, Chinese Academy of Sciences, No. 80 Nandan Road, 200030, Shanghai (China); Heinkelmann, Robert; Anderson, James M.; Mora-Diaz, Julian; Schuh, Harald, E-mail: mhxu@shao.ac.cn [Deutsches GeoForschungsZentrum (GFZ), Potsdam, Telegrafenberg, D-14473 Potsdam (Germany)

    2016-11-01

    The CONT14 campaign with state-of-the-art very long baseline interferometry (VLBI) data has observed the source 0642+449 with about 1000 observables each day during a continuous observing period of 15 days, providing tens of thousands of closure delays—the sum of the delays around a closed loop of baselines. The closure delay is independent of the instrumental and propagation delays and provides valuable additional information about the source structure. We demonstrate the use of this new “observable” for the determination of the structure in the radio source 0642+449. This source, as one of the defining sources in the second realization of the International Celestial Reference Frame, is found to have two point-like components with a relative position offset of −426 microarcseconds (μas) in R.A. and −66 μas in decl. The two components are almost equally bright, with a flux-density ratio of 0.92. The standard deviation of closure delays for source 0642+449 was reduced from 139 to 90 ps by using this two-component model. Closure delays larger than 1 ns are found to be related to the source structure, demonstrating that structure effects for a source with this simple structure could be up to tens of nanoseconds. The method described in this paper does not rely on a priori source structure information, such as knowledge of source structure determined from direct (Fourier) imaging of the same observations or observations at other epochs. We anticipate our study to be a starting point for more effective determination of the structure effect in VLBI observations.
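
    The closure delay defined above can be written as τ_ab + τ_bc − τ_ac for a baseline triangle (a, b, c); any delay term attached to a single station cancels in that combination, which is why the quantity isolates source structure. A small illustrative sketch:

```python
def closure_delay(tau_ab, tau_bc, tau_ac):
    """Closure delay around the baseline triangle (a, b, c).

    tau_xy is the observed group delay on baseline x-y. Instrumental and
    propagation terms attached to a single station cancel in the sum
    tau_ab + tau_bc - tau_ac, so a nonzero closure delay reflects
    source structure (plus measurement noise).
    """
    return tau_ab + tau_bc - tau_ac

# Station-based errors cancel: a clock offset d at station b enters
# tau_ab with a plus sign and tau_bc with a minus sign, and drops out.
```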

  4. THE SOURCE STRUCTURE OF 0642+449 DETECTED FROM THE CONT14 OBSERVATIONS

    International Nuclear Information System (INIS)

    Xu, Ming H.; Wang, Guang L.; Heinkelmann, Robert; Anderson, James M.; Mora-Diaz, Julian; Schuh, Harald

    2016-01-01

    The CONT14 campaign with state-of-the-art very long baseline interferometry (VLBI) data has observed the source 0642+449 with about 1000 observables each day during a continuous observing period of 15 days, providing tens of thousands of closure delays—the sum of the delays around a closed loop of baselines. The closure delay is independent of the instrumental and propagation delays and provides valuable additional information about the source structure. We demonstrate the use of this new “observable” for the determination of the structure in the radio source 0642+449. This source, as one of the defining sources in the second realization of the International Celestial Reference Frame, is found to have two point-like components with a relative position offset of −426 microarcseconds (μas) in R.A. and −66 μas in decl. The two components are almost equally bright, with a flux-density ratio of 0.92. The standard deviation of closure delays for source 0642+449 was reduced from 139 to 90 ps by using this two-component model. Closure delays larger than 1 ns are found to be related to the source structure, demonstrating that structure effects for a source with this simple structure could be up to tens of nanoseconds. The method described in this paper does not rely on a priori source structure information, such as knowledge of source structure determined from direct (Fourier) imaging of the same observations or observations at other epochs. We anticipate our study to be a starting point for more effective determination of the structure effect in VLBI observations.

  5. Bottomfish Observer Database - Legacy

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains data collected by at sea observers in the Bottomfish Observer Program in the Northwestern Hawaiian Islands from October 2003 - April 2006.

  6. Viking observations at the source region of auroral kilometric radiation

    International Nuclear Information System (INIS)

    Bahnsen, A.; Jespersen, M.; Ungstrup, E.; Pedersen, B.M.; Eliasson, L.; Murphree, J.S.; Elphinstone, R.D.; Blomberg, L.; Holmgren, G.; Zanetti, L.J.

    1989-01-01

    The orbit of the Swedish satellite Viking was optimized for in situ observations of auroral particle acceleration and related phenomena. In a large number of the orbits, auroral kilometric radiation (AKR) was observed, and in approximately 35 orbits the satellite passed through AKR source regions, as evidenced by very strong signals at the local electron cyclotron frequency f_ce. These sources were found at the poleward edge of the auroral oval at altitudes from 5,000 to 8,000 km, predominantly in the evening sector. The strong AKR signal has a sharp low-frequency cutoff at or very close to f_ce in the source. In addition to AKR, strong broadband electrostatic noise is measured during the source crossings. Energetic (1-15 keV) electrons are always present at and around the AKR sources. Upward directed ion beams of several keV are closely correlated with the source, as are strong and variable electric fields, indicating that a region of upward pointing electric field below the observation point is a necessary condition for AKR generation. The plasma density is measured by three independent experiments, and it is generally found that the density is low across the whole auroral oval. For some source crossings the three methods agree and show a density depletion (but not always confined to the source region itself), but in many cases the three measurements do not yield consistent results. The magnetic projection of the satellite passes through auroral forms during the source crossings, and the strongest AKR events seem to be connected with kinks in an arc or more complicated structures.
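
    The sharp low-frequency cutoff discussed above sits at the local electron cyclotron frequency, f_ce = eB/(2π m_e). A small sketch with an illustrative field value (not taken from the Viking data):

```python
import math

E_CHARGE = 1.602176634e-19   # elementary charge, C
E_MASS = 9.1093837015e-31    # electron mass, kg

def f_ce_hz(b_tesla: float) -> float:
    """Electron cyclotron frequency f_ce = e*B / (2*pi*m_e), in Hz."""
    return E_CHARGE * b_tesla / (2.0 * math.pi * E_MASS)

# Illustrative: a field of order 10 microtesla gives f_ce of a few
# hundred kHz, i.e. in the auroral kilometric band.
f_cutoff = f_ce_hz(1e-5)
```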

  7. Existing data sources for clinical epidemiology: the Danish Patient Compensation Association database.

    Science.gov (United States)

    Tilma, Jens; Nørgaard, Mette; Mikkelsen, Kim Lyngby; Johnsen, Søren Paaske

    2015-01-01

    Any patient in the Danish health care system who experiences a treatment injury can file a compensation claim with the Danish Patient Compensation Association (DPCA) free of charge. The aim of this paper is to describe the DPCA database as a source of data for epidemiological research. DPCA data are collected prospectively on all claims and include information on patient factors and health records, system factors, and administrative data. Approval of a claim requires an injury caused by treatment below the standard of an experienced specialist, or an injury of intolerable and unexpected extensiveness. The average processing time of a compensation claim is 6-8 months. Data collection is nationwide and started in 1992. The patient's central registration system number, a unique personal identifier, allows for data linkage to other registries such as the Danish National Patient Registry. The DPCA data are accessible for research once data usage permission has been granted, making it possible to analyze all claims or specific subgroups to identify predictors, outcomes, etc. DPCA data have so far been used in only a few studies but could be a useful data source in future studies of health care-related injuries.
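
    The registry linkage described above is, in essence, a join on the unique personal identifier. A minimal sketch (the field names are hypothetical, standing in for the identifier-based linkage between DPCA claims and another registry):

```python
def link_claims_to_registry(claims, registry):
    """Join compensation claims to another registry on a personal identifier.

    claims:   list of dicts, each with a 'person_id' key (a hypothetical
              field standing in for the unique personal identifier)
    registry: dict mapping person_id -> registry record
    Returns the claims enriched with the matching registry record (or None).
    """
    linked = []
    for claim in claims:
        record = registry.get(claim["person_id"])
        linked.append({**claim, "registry": record})
    return linked
```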

  8. DATABASE REPLICATION IN HETEROGENOUS PLATFORM

    OpenAIRE

    Hendro Nindito; Evaristus Didik Madyatmadja; Albert Verasius Dian Sano

    2014-01-01

    The application of diverse database technologies in enterprises is increasingly common practice today. To provide high availability and survivability of real-time information, a database replication technology with the capability to replicate databases across heterogeneous platforms is required. The purpose of this research is to find a technology with such capability. In this research, the data source is stored in an MSSQL database server running on Windows. The data will be replicated to MyS...

  9. Initial validation of the prekindergarten Classroom Observation Tool and goal setting system for data-based coaching.

    Science.gov (United States)

    Crawford, April D; Zucker, Tricia A; Williams, Jeffrey M; Bhavsar, Vibhuti; Landry, Susan H

    2013-12-01

    Although coaching is a popular approach for enhancing the quality of Tier 1 instruction, limited research has addressed observational measures specifically designed to focus coaching on evidence-based practices. This study explains the development of the prekindergarten (pre-k) Classroom Observation Tool (COT) designed for use in a data-based coaching model. We examined psychometric characteristics of the COT and explored how coaches and teachers used the COT goal-setting system. The study included 193 coaches working with 3,909 pre-k teachers in a statewide professional development program. Classrooms served 3- and 4-year-olds (n = 56,390) enrolled mostly in Title I, Head Start, and other need-based pre-k programs. Coaches used the COT during a 2-hr observation at the beginning of the academic year. Teachers collected progress-monitoring data on children's language, literacy, and math outcomes three times during the year. Results indicated a theoretically supported eight-factor structure of the COT across language, literacy, and math instructional domains. Overall interrater reliability among coaches was good (.75). Although correlations with an established teacher observation measure were small, significant positive relations between COT scores and children's literacy outcomes indicate promising predictive validity. Patterns of goal-setting behaviors indicate teachers and coaches set an average of 43.17 goals during the academic year, and coaches reported that 80.62% of goals were met. Both coaches and teachers reported the COT was a helpful measure for enhancing quality of Tier 1 instruction. Limitations of the current study and implications for research and data-based coaching efforts are discussed. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  10. The importance of source positions during radio fine structure observations

    International Nuclear Information System (INIS)

    Chernov, Guennadi P.; Yan Yi-Hua; Fu Qi-Jun

    2014-01-01

    The measurement of the positions and sizes of radio sources during observations of the fine structure of solar radio bursts is a determining factor in selecting the radio emission mechanism. The identical parameters describing the radio sources of zebra structures (ZSs) and fiber bursts confirm a common mechanism for both structures. It is very important to measure the size of the source in the corona to determine whether it is distributed along the height or point-like. In both models of ZSs (the double plasma resonance (DPR) model and the whistler model) the source must be distributed along the height, but in contrast to the stationary source of the DPR model, in the whistler model the source should be moving. Moreover, the direction of the spatial drift of the radio source must correlate with the frequency drift of the stripes in the dynamic spectrum. Some models of ZSs require a local source, for example the models based on Bernstein modes or on explosive instability. The selection of the radio emission mechanism for fast broadband pulsations with millisecond duration also depends on the parameters of their radio sources. (mini-volume: solar radiophysics — recent results on observations and theories)

  11. Reduction of EAO Positional Observations Database

    Science.gov (United States)

    Nefedyev, Yuri; Andreev, Alexey; Demina, Natalya; Churkin, Konstantin

    2016-07-01

    Engelhardt Astronomical Observatory (EAO) holds a large data bank of positional observations of Solar System bodies, including the major planets except Jupiter. Modern technologies are replacing classical methods of observation in astronomy, and in astrometry as well. At the same time, many positional observations have been accumulated at astronomical observatories. Observations of past epochs are of great value for astronomy, and their importance grows with time, so positional astrometry will not lose its practical importance; this was noted in resolution B3 of the XXIVth IAU General Assembly. The results of the reduction of Solar System body observations were published mainly in the Proceedings of EAO and the Transactions of the Kazan City Astronomical Observatory. About three thousand observations have been made at EAO and the Zelenchuk station with the Zeiss telescope (D=400mm, f=2000mm), the AFR-18 (photovisual, D=200mm, f=2000mm), a refractor (D=400mm, f=3450mm), a Meniscus camera (D=340mm, f=1200mm) and a Schmidt camera (D=350mm, f=2000mm). The major planets except Pluto and Neptune were observed with a special cassette chamber equipped with a rotating disk that had an open sector to reduce the brightness of the planets; the dimension of the sector was chosen according to the brightness of the planet, the disk was placed in the centre of the astrograph's field, and the stars' true brightness was preserved. A large number of catalogues were compiled by the end of the 20th century. We used the Tycho-2 catalogue to reduce our observations. As is known, the Tycho-2 catalogue (2000) includes 2,539,913 stars. The stars' proper motions given in the catalogue were obtained by comparing positions from Tycho-2 with positions from the Astrographic Catalogue, and are therefore considered highly accurate. The accuracy of stellar positions in Tycho-2 is about 60 mas and the accuracy of

  12. Phynx: an open source software solution supporting data management and web-based patient-level data review for drug safety studies in the general practice research database and other health care databases.

    Science.gov (United States)

    Egbring, Marco; Kullak-Ublick, Gerd A; Russmann, Stefan

    2010-01-01

    To develop a software solution that supports management and clinical review of patient data from electronic medical records databases or claims databases for pharmacoepidemiological drug safety studies. We used open source software to build a data management system and an Internet application with a Flex client on a Java application server with a MySQL database backend; the application is hosted on Amazon Elastic Compute Cloud. This solution, named Phynx, supports data management, Web-based display of electronic patient information, and interactive review of patient-level information in the individual clinical context. The system was applied to a dataset from the UK General Practice Research Database (GPRD). Our solution can be set up and customized with limited programming resources, and there is almost no extra cost for software. Access times are short, the displayed information is structured in chronological order and visually attractive, and selected information such as drug exposure can be blinded. External experts can review patient profiles and save evaluations and comments via a common Web browser. Phynx provides a flexible and economical solution for patient-level review of electronic medical information from databases, taking the individual clinical context into account. It can therefore make an important contribution to the efficient validation of outcome assessment in drug safety database studies.
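
    One feature noted above is that patient information is displayed in chronological order and that selected information, such as drug exposure, can be blinded for review. A minimal sketch of that idea (the event layout and field names are illustrative, not Phynx's actual data model):

```python
def blinded_profile(events, blind_fields=("drug",)):
    """Chronologically ordered patient events with selected fields masked.

    events: list of dicts, each with a sortable 'date' key plus arbitrary
    clinical fields. Fields named in blind_fields are replaced by '***'
    so reviewers see the timeline without, e.g., the drug exposure.
    """
    ordered = sorted(events, key=lambda e: e["date"])
    return [
        {k: ("***" if k in blind_fields else v) for k, v in e.items()}
        for e in ordered
    ]
```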

  13. JICST Factual Database(2)

    Science.gov (United States)

    Araki, Keisuke

    A computer program that builds atom-bond connection tables from chemical nomenclature has been developed. Chemical substances are entered with their nomenclature and a variety of trivial names or experimental code numbers. The chemical structures in the database are stored stereospecifically and can be searched and displayed according to stereochemistry. Source data come from the laws and regulations of Japan, the RTECS file of the US, and other sources. The database plays a central role within JICST's integrated fact database service and makes interrelational retrieval possible.

  14. Directory of IAEA databases

    International Nuclear Information System (INIS)

    1992-12-01

    This second edition of the Directory of IAEA Databases has been prepared within the Division of Scientific and Technical Information (NESI). Its main objective is to describe the computerized information sources available to staff members. The directory contains all databases produced at the IAEA, including databases stored on the mainframe, LANs and PCs. All IAEA Division Directors have been requested to register the existence of their databases with NESI. For the second edition, database owners were requested to review the existing entries for their databases and to answer four additional questions, concerning the type of database (e.g. bibliographic, text, statistical), the category of database (e.g. administrative, nuclear data), the available documentation, and the type of media used for distribution. In the individual entries on the following pages the answers to the first two questions (type and category) are always listed, but the answers to the other two questions (documentation and media) are listed only when the information has been made available.

  15. Zebrafish Database: Customizable, Free, and Open-Source Solution for Facility Management.

    Science.gov (United States)

    Yakulov, Toma Antonov; Walz, Gerd

    2015-12-01

    Zebrafish Database is a web-based customizable database solution, which can be easily adapted to serve both single laboratories and facilities housing thousands of zebrafish lines. The database allows the users to keep track of details regarding the various genomic features, zebrafish lines, zebrafish batches, and their respective locations. Advanced search and reporting options are available. Unique features are the ability to upload files and images that are associated with the respective records and an integrated calendar component that supports multiple calendars and categories. Built on the basis of the Joomla content management system, the Zebrafish Database is easily extendable without the need for advanced programming skills.

  16. A Review of Stellar Abundance Databases and the Hypatia Catalog Database

    Science.gov (United States)

    Hinkel, Natalie Rose

    2018-01-01

    The astronomical community is interested in elements from lithium to thorium, from solar twins to peculiarities of stellar evolution, because they give insight into different regimes of star formation and evolution. However, while some trends between elements and other stellar or planetary properties are well known, many other trends are not as obvious and remain a point of conflict. For example, stars that host giant planets are found to be consistently enriched in iron, but the same cannot be definitively said for any other element. Therefore, it is time to take advantage of large stellar abundance databases in order to better understand not only the large-scale patterns, but also the more subtle, small-scale trends within the data. In this overview to the special session, I will present a review of large stellar abundance databases that are currently available (e.g. RAVE, APOGEE) and those that will soon be online (e.g. Gaia-ESO, GALAH). Additionally, I will discuss the Hypatia Catalog Database (www.hypatiacatalog.com), which includes abundances from individual literature sources that observed stars within 150 pc. The Hypatia Catalog currently contains 72 elements as measured within ~6000 stars, with a total of ~240,000 unique abundance determinations. The online database offers a variety of solar normalizations, stellar properties, and planetary properties (where applicable) that can all be viewed through multiple interactive plotting interfaces as well as in a tabular format. By analyzing stellar abundances for large populations of stars and from a variety of different perspectives, a wealth of information can be revealed on both large and small scales.

  17. Numerical databases in marine biology

    Digital Repository Service at National Institute of Oceanography (India)

    Sarupria, J.S.; Bhargava, R.M.S.


  18. Freshwater Biological Traits Database (Traits)

    Science.gov (United States)

    The traits database was compiled for a project on climate change effects on river and stream ecosystems. The traits data, gathered from multiple sources, focused on information published or otherwise well-documented by trustworthy sources.

  19. Hydrometeorological Database (HMDB) for Practical Research in Ecology

    OpenAIRE

    Novakovskiy, A; Elsakov, V

    2014-01-01

    The regional HydroMeteorological DataBase (HMDB) was designed for easy access to climate data via the Internet. It contains data on various climatic parameters (temperature, precipitation, pressure, humidity, and wind strength and direction) from 190 meteorological stations in Russia and bordering countries, covering an instrumental observation period of over 100 years. Open sources were used to ingest data into HMDB. An analytical block was also developed to perform the most common statistical ...

  20. Consumer Product Category Database

    Science.gov (United States)

    The Chemical and Product Categories database (CPCat) catalogs the use of over 40,000 chemicals and their presence in different consumer products. The chemical use information is compiled from multiple sources while product information is gathered from publicly available Material Safety Data Sheets (MSDS). EPA researchers are evaluating the possibility of expanding the database with additional product and use information.

  1. THE CHANDRA SURVEY OF EXTRAGALACTIC SOURCES IN THE 3CR CATALOG: X-RAY EMISSION FROM NUCLEI, JETS, AND HOTSPOTS IN THE CHANDRA ARCHIVAL OBSERVATIONS

    Energy Technology Data Exchange (ETDEWEB)

    Massaro, F. [Dipartimento di Fisica, Università degli Studi di Torino, via Pietro Giuria 1, I-10125 Torino (Italy); Harris, D. E.; Paggi, A.; Wilkes, B. J.; Kuraszkiewicz, J. [Smithsonian Astrophysical Observatory, 60 Garden Street, Cambridge, MA 02138 (United States); Liuzzo, E.; Orienti, M.; Paladino, R. [Istituto di Radioastronomia, INAF, via Gobetti 101, I-40129, Bologna (Italy); Tremblay, G. R. [Yale Center for Astronomy and Astrophysics, Physics Department, Yale University, P.O. Box 208120, New Haven, CT 06520-8120 (United States); Baum, S. A.; O’Dea, C. P. [University of Manitoba, Dept of Physics and Astronomy, Winnipeg, MB R3T 2N2 (Canada)

    2015-09-15

As part of our program to build a complete radio and X-ray database of all Third Cambridge catalog extragalactic radio sources, we present an analysis of 93 sources for which Chandra archival data are available. Analyses of most of these sources have already been published. Here we provide a uniform re-analysis and present nuclear X-ray fluxes and X-ray emission associated with radio jet knots and hotspots using both publicly available radio images and new radio images that have been constructed from data available in the Very Large Array archive. For about one-third of the sources in the selected sample, a comparison between the Chandra and radio observations was not reported in the literature: we find X-ray detections of 2 new radio jet knots and 17 hotspots. We also report the X-ray detection of extended emission from the intergalactic medium for 15 galaxy clusters.

  2. VLA OH Zeeman Observations of the NGC 6334 Complex Source A

    Science.gov (United States)

    Mayo, E. A.; Sarma, A. P.; Troland, T. H.; Abel, N. P.

    2004-12-01

We present a detailed analysis of the NGC 6334 complex source A, a compact continuum source in the SW region of the complex. Our intent is to determine the significance of the magnetic field in the support of the surrounding molecular cloud against gravitational collapse. We have performed OH 1665 and 1667 MHz observations taken with the Very Large Array in the BnA configuration and combined these data with the lower resolution CnB data of Sarma et al. (2000). These observations reveal magnetic fields with values of the order of 350 μG toward source A, with maximum fields reaching 500 μG. We have also theoretically modeled the molecular cloud surrounding source A using Cloudy, with the constraints to the model based on observation. This model provides significant information on the density of H2 through the cloud and also the relative density of H2 to OH which is important to our analysis of the region. We will combine the knowledge gained through the Cloudy modeling with virial estimates to determine the significance of the magnetic field to the dynamics and evolution of source A.

  3. Digitizing Olin Eggen's Card Database

    Science.gov (United States)

    Crast, J.; Silvis, G.

    2017-06-01

    The goal of the Eggen Card Database Project is to recover as many of the photometric observations from Olin Eggen's Card Database as possible and preserve these observations, in digital forms that are accessible by anyone. Any observations of interest to the AAVSO will be added to the AAVSO International Database (AID). Given to the AAVSO on long-term loan by the Cerro Tololo Inter-American Observatory, the database is a collection of over 78,000 index cards holding all Eggen's observations made between 1960 and 1990. The cards were electronically scanned and the resulting 108,000 card images have been published as a series of 2,216 PDF files, which are available from the AAVSO web site. The same images are also stored in an AAVSO online database where they are indexed by star name and card content. These images can be viewed using the eggen card portal online tool. Eggen made observations using filter bands from five different photometric systems. He documented these observations using 15 different data recording formats. Each format represents a combination of filter magnitudes and color indexes. These observations are being transcribed onto spreadsheets, from which observations of value to the AAVSO are added to the AID. A total of 506 U, B, V, R, and I observations were added to the AID for the variable stars S Car and l Car. We would like the reader to search through the card database using the eggen card portal for stars of particular interest. If such stars are found and retrieval of the observations is desired, e-mail the authors, and we will be happy to help retrieve those data for the reader.

  4. E-SovTox: An online database of the main publicly-available sources of toxicity data concerning REACH-relevant chemicals published in the Russian language.

    Science.gov (United States)

    Sihtmäe, Mariliis; Blinova, Irina; Aruoja, Villem; Dubourguier, Henri-Charles; Legrand, Nicolas; Kahru, Anne

    2010-08-01

    A new open-access online database, E-SovTox, is presented. E-SovTox provides toxicological data for substances relevant to the EU Registration, Evaluation, Authorisation and Restriction of Chemicals (REACH) system, from publicly-available Russian language data sources. The database contains information selected mainly from scientific journals published during the Soviet Union era. The main information source for this database - the journal, Gigiena Truda i Professional'nye Zabolevania [Industrial Hygiene and Occupational Diseases], published between 1957 and 1992 - features acute, but also chronic, toxicity data for numerous industrial chemicals, e.g. for rats, mice, guinea-pigs and rabbits. The main goal of the abovementioned toxicity studies was to derive the maximum allowable concentration limits for industrial chemicals in the occupational health settings of the former Soviet Union. Thus, articles featured in the database include mostly data on LD50 values, skin and eye irritation, skin sensitisation and cumulative properties. Currently, the E-SovTox database contains toxicity data selected from more than 500 papers covering more than 600 chemicals. The user is provided with the main toxicity information, as well as abstracts of these papers in Russian and in English (given as provided in the original publication). The search engine allows cross-searching of the database by the name or CAS number of the compound, and the author of the paper. The E-SovTox database can be used as a decision-support tool by researchers and regulators for the hazard assessment of chemical substances. 2010 FRAME.

  5. Optical observations of binary X-ray sources

    International Nuclear Information System (INIS)

    Charles, P.

    1982-01-01

    Here I shall consider only those systems where the compact object is a neutron star (or in a few cases perhaps a black hole). Since van Paradijs (1982) has recently produced an excellent and comprehensive review of optical observations of compact galactic X-ray sources I shall summarise the basic properties of the optical counterparts and discuss a few representative systems in some detail. (orig./WL)

  6. Analysis of commercial and public bioactivity databases.

    Science.gov (United States)

    Tiikkainen, Pekka; Franke, Lutz

    2012-02-27

    Activity data for small molecules are invaluable in chemoinformatics. Various bioactivity databases exist containing detailed information of target proteins and quantitative binding data for small molecules extracted from journals and patents. In the current work, we have merged several public and commercial bioactivity databases into one bioactivity metabase. The molecular presentation, target information, and activity data of the vendor databases were standardized. The main motivation of the work was to create a single relational database which allows fast and simple data retrieval by in-house scientists. Second, we wanted to know the amount of overlap between databases by commercial and public vendors to see whether the former contain data complementing the latter. Third, we quantified the degree of inconsistency between data sources by comparing data points derived from the same scientific article cited by more than one vendor. We found that each data source contains unique data which is due to different scientific articles cited by the vendors. When comparing data derived from the same article we found that inconsistencies between the vendors are common. In conclusion, using databases of different vendors is still useful since the data overlap is not complete. It should be noted that this can be partially explained by the inconsistencies and errors in the source data.
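The vendor-comparison step described above (grouping data points that cite the same article and flagging disagreements between sources) can be sketched in a few lines. This is an illustrative toy rather than the authors' pipeline; the record fields, vendor names, identifiers, and tolerance are all invented for the example.

```python
from collections import defaultdict

# Hypothetical data points: (vendor, article_doi, compound, target, pIC50).
# All identifiers and values are invented for illustration.
records = [
    ("vendorA", "10.1000/jmc.1", "CPD-25", "COX-1", 4.2),
    ("vendorB", "10.1000/jmc.1", "CPD-25", "COX-1", 4.2),
    ("vendorA", "10.1000/jmc.2", "CPD-25", "COX-2", 5.1),
    ("vendorB", "10.1000/jmc.2", "CPD-25", "COX-2", 6.3),
    ("vendorA", "10.1000/jmc.3", "CPD-41", "5-HT2A", 7.0),
]

def find_inconsistencies(records, tolerance=0.1):
    """Group data points derived from the same article and flag groups
    where vendors disagree by more than `tolerance` log units."""
    groups = defaultdict(list)
    for vendor, doi, compound, target, value in records:
        groups[(doi, compound, target)].append(value)
    return [key for key, values in groups.items()
            if len(values) > 1 and max(values) - min(values) > tolerance]

inconsistent = find_inconsistencies(records)  # flags only the COX-2 entry
```

A real metabase would first standardize molecular representations and target identifiers, as the abstract notes, so that equivalent records actually group together.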

  7. Integrated database for rapid mass movements in Norway

    Directory of Open Access Journals (Sweden)

    C. Jaedicke

    2009-03-01

terrain of the Norwegian west coast, but major events are recorded all over the country. Snow avalanches account for most fatalities, while large rock slides causing flood waves and huge quick-clay slides are the most damaging individual events in terms of damage to infrastructure and property and in causing multiple fatalities. The quality of the data is strongly influenced by the personal engagement of local observers and varying observation routines. This database is a unique source for statistical analyses, including risk analysis and the relation between rapid mass movements and climate. The database of rapid mass movement events will also facilitate validation of national hazard and risk maps.

  8. A New Database Facilitates Characterization of Flavonoid Intake, Sources, and Positive Associations with Diet Quality among US Adults.

    Science.gov (United States)

    Sebastian, Rhonda S; Wilkinson Enns, Cecilia; Goldman, Joseph D; Martin, Carrie L; Steinfeldt, Lois C; Murayi, Theophile; Moshfegh, Alanna J

    2015-06-01

Epidemiologic studies demonstrate inverse associations between flavonoid intake and chronic disease risk. However, lack of comprehensive databases of the flavonoid content of foods has hindered efforts to fully characterize population intakes and determine associations with diet quality. Using a newly released database of flavonoid values, this study sought to describe intake and sources of total flavonoids and 6 flavonoid classes and identify associations between flavonoid intake and the Healthy Eating Index (HEI) 2010. One day of 24-h dietary recall data from adults aged ≥ 20 y (n = 5420) collected in What We Eat in America (WWEIA), NHANES 2007-2008, were analyzed. Flavonoid intakes were calculated using the USDA Flavonoid Values for Survey Foods and Beverages 2007-2008. Regression analyses were conducted to provide adjusted estimates of flavonoid intake, and linear trends in total and component HEI scores by flavonoid intake were assessed using orthogonal polynomial contrasts. All analyses were weighted to be nationally representative. Mean intake of flavonoids was 251 mg/d, with flavan-3-ols accounting for 81% of intake. Non-Hispanic whites had significantly higher intakes than other race/ethnic groups, and HEI component scores, including that for empty calories, increased (P < 0.001) across flavonoid intake quartiles. A new database that permits comprehensive estimation of flavonoid intakes in WWEIA, NHANES 2007-2008; identification of their major food/beverage sources; and determination of associations with dietary quality will lead to advances in research on relations between flavonoid intake and health. Findings suggest that diet quality, as measured by HEI, is positively associated with flavonoid intake. © 2015 American Society for Nutrition.

  9. Scale out databases for CERN use cases

    International Nuclear Information System (INIS)

    Baranowski, Zbigniew; Grzybek, Maciej; Canali, Luca; Garcia, Daniel Lanza; Surdy, Kacper

    2015-01-01

    Data generation rates are expected to grow very fast for some database workloads going into LHC run 2 and beyond. In particular this is expected for data coming from controls, logging and monitoring systems. Storing, administering and accessing big data sets in a relational database system can quickly become a very hard technical challenge, as the size of the active data set and the number of concurrent users increase. Scale-out database technologies are a rapidly developing set of solutions for deploying and managing very large data warehouses on commodity hardware and with open source software. In this paper we will describe the architecture and tests on database systems based on Hadoop and the Cloudera Impala engine. We will discuss the results of our tests, including tests of data loading and integration with existing data sources and in particular with relational databases. We will report on query performance tests done with various data sets of interest at CERN, notably data from the accelerator log database. (paper)

  10. Chemical Source Inversion using Assimilated Constituent Observations in an Idealized Two-dimensional System

    Science.gov (United States)

    Tangborn, Andrew; Cooper, Robert; Pawson, Steven; Sun, Zhibin

    2009-01-01

We present a source inversion technique for chemical constituents that uses assimilated constituent observations rather than directly using the observations. The method is tested with a simple model problem, which is a two-dimensional Fourier-Galerkin transport model combined with a Kalman filter for data assimilation. Inversion is carried out using a Green's function method, and observations are simulated from a true state with added Gaussian noise. The forecast state uses the same spectral model, but differs by an unbiased Gaussian model error and by emission models with constant errors. The numerical experiments employ both simulated in situ and satellite observation networks. Source inversion was carried out either by direct use of synthetically generated observations with added noise, or by first assimilating the observations and using the analyses to extract observations. We have conducted 20 identical twin experiments for each set of source and observation configurations, and find that in the limiting cases of very few localized observations, or an extremely large observation network, there is little advantage to carrying out assimilation first. However, at intermediate observation densities, the source inversion error standard deviation decreases by 50% to 95% when the Kalman filter algorithm is applied before the Green's function inversion.
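A minimal sketch of the Green's function inversion step, under simplifying assumptions (a generic linear forward model standing in for the paper's two-dimensional Fourier-Galerkin transport model, and no Kalman filter step): each column of the matrix G holds the model response at the observation sites to a unit emission from one source region, and the source strengths are recovered by least squares.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear setup: each column of G is the model response at the
# observation sites to a unit emission from one source region (its
# Green's function). Sizes and values are illustrative only.
n_obs, n_src = 50, 3
G = rng.random((n_obs, n_src))

true_sources = np.array([2.0, 0.5, 1.5])
observations = G @ true_sources + rng.normal(0.0, 0.01, n_obs)  # added Gaussian noise

# Green's function inversion: least-squares estimate of source strengths.
estimate, *_ = np.linalg.lstsq(G, observations, rcond=None)
```

In the paper's setup the "observations" fed to this step are either the raw synthetic observations or the Kalman filter analyses; the sketch above corresponds to the direct-use case.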

  11. TRAM (Transcriptome Mapper): database-driven creation and analysis of transcriptome maps from multiple sources

    Directory of Open Access Journals (Sweden)

    Danieli Gian

    2011-02-01

Background: Several tools have been developed to perform global gene expression profile data analysis, to search for specific chromosomal regions whose features meet defined criteria, and to study neighbouring gene expression. However, most of these tools are tailored for a specific use in a particular context (e.g., they are species-specific or limited to a particular data format), and they typically accept only gene lists as input. Results: TRAM (Transcriptome Mapper) is a new general tool that allows the simple generation and analysis of quantitative transcriptome maps, starting from any source listing gene expression values for a given gene set (e.g., expression microarrays), implemented as a relational database. It includes a parser able to assign univocal and updated gene symbols to gene identifiers from different data sources. Moreover, TRAM is able to perform intra-sample and inter-sample data normalization, including an original variant of quantile normalization (scaled quantile), useful to normalize data from platforms with highly different numbers of investigated genes. When in 'Map' mode, the software generates a quantitative representation of the transcriptome of a sample (or of a pool of samples) and identifies whether segments of defined lengths are over/under-expressed compared to the desired threshold. When in 'Cluster' mode, the software searches for a set of over/under-expressed consecutive genes. Statistical significance for all results is calculated with respect to genes localized on the same chromosome or to all genome genes. Transcriptome maps, showing differential expression between two sample groups relative to two different biological conditions, may be easily generated. We present the results of a biological model test, based on a meta-analysis comparison between a sample pool of human CD34+ hematopoietic progenitor cells and a sample pool of megakaryocytic cells. 
Biologically relevant chromosomal segments and gene
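Standard quantile normalization, of which TRAM's "scaled quantile" is a variant, can be sketched in a few lines. The sketch below assumes equal gene counts per sample and does not reproduce the scaled variant for platforms with different gene counts; the expression values are invented.

```python
import numpy as np

def quantile_normalize(matrix):
    """Standard quantile normalization of a genes-by-samples matrix: each
    sample's values are replaced by the mean, across samples, of the values
    sharing the same within-sample rank (ties broken by order)."""
    ranks = np.argsort(np.argsort(matrix, axis=0), axis=0)
    mean_sorted = np.sort(matrix, axis=0).mean(axis=1)
    return mean_sorted[ranks]

# Toy expression matrix: 4 genes x 3 samples (values invented).
expr = np.array([[5.0, 4.0, 3.0],
                 [2.0, 1.0, 4.0],
                 [3.0, 4.0, 6.0],
                 [4.0, 2.0, 8.0]])
normalized = quantile_normalize(expr)
# After normalization, every sample (column) has the same value distribution.
```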

  12. The Hanford Site generic component failure-rate database compared with other generic failure-rate databases

    International Nuclear Information System (INIS)

    Reardon, M.F.; Zentner, M.D.

    1992-11-01

The Risk Assessment Technology Group, Westinghouse Hanford Company (WHC), has compiled a component failure rate database to be used during risk and reliability analysis of nonreactor facilities. Because site-specific data for the Hanford Site are generally not kept or not compiled in a usable form, the database was assembled using information from a variety of other established sources. Generally, the most conservative failure rates were chosen from the databases reviewed. The Hanford Site database has since been used extensively in fault tree modeling of many Hanford Site facilities and systems. The purpose of this study was to evaluate the reasonableness of the data chosen for the Hanford Site database by comparing the values chosen with the values from the other databases.

  13. Far-infrared observations of Sagittarius B2 - reconsideration of source structure

    International Nuclear Information System (INIS)

    Thronson, H.A. Jr.; Harper, D.A.; Yerkes Observatory, Williams Bay, WI)

    1986-01-01

    New moderate-angular-resolution far-infrared observations of the Sagittarius B2 star-forming region are presented, discussed, and compared with recent radio molecular and continuum observations of this source. In contrast to previous analyses, its far-infrared spectrum is interpreted as the result of a massive frigid cloud overlying a more-or-less normal infrared source, a natural explanation for the object's previously-noted peculiarities. The characteristics derived for the obscuring cloud are similar to those found for the W51 MAIN object. Both sources have high sub-millimeter surface brightness, a high ratio of sub-millimeter to far-infrared flux, and numerous regions of molecular maser emission. 28 references

  14. Observational constraints on the cosmological evolution of extragalactic radio sources

    International Nuclear Information System (INIS)

    Perryman, M.A.C.

    1979-11-01

    The thesis discusses statistical studies of the remote radio sources, taking into account the various parameters for such sources, based on data from the various Cambridge Catalogues. Some of the sources have optical counterparts which yield distances from their redshifts. Combining optical and radio observations, an attempt is made to investigate whether large-scale evolution of galaxies occurs as one looks backwards in time to early epochs. Special attention is paid to ensuring that the optical identifications of the selected radio sources are sound and that the selection procedures do not distort the inferences obtained. (U.K.)

  15. The IAEA Illicit Trafficking Database Programme: Operations and Structure

    International Nuclear Information System (INIS)

    2010-01-01

The IAEA ITDB currently has 90 states participating voluntarily in the database. Information on about 827 incidents, of which 500 involved radioactive sources, has been reported. States provide information by submitting an Information Notification Form. The incident is assigned an identification number and entered into the database. Information from open sources is collected daily and reviewed. If the information warrants it, a new incident is created in the database.

  16. Database on wind characteristics - contents of database bank

    Energy Technology Data Exchange (ETDEWEB)

    Larsen, G.C.; Hansen, K.S.

    2004-06-01

The main objective of IEA R and D Wind Annex XVII - Database on Wind Characteristics - has been to provide wind energy planners, designers and researchers, as well as the international wind engineering community in general, with a source of actual wind field data (time series and resource data) observed in a wide range of different wind climates and terrain types. Connected to an extension of the initial Annex period, the scope of the continuation was widened to also include support to the international wind turbine standardisation efforts. The project partners are Sweden, Norway, the U.S.A., the Netherlands and Denmark, with Denmark as the Operating Agent. The reporting of the continuation of Annex XVII falls into two separate parts. Part one accounts in detail for the available data in the established database bank, and part two describes various data analyses performed with the overall purpose of improving the design load cases with relevance to wind turbine structures. The present report constitutes the second part of the Annex XVII reporting. Both fatigue and extreme load aspects are dealt with, however, with the main emphasis on the latter. The work has been supported by The Ministry of Environment and Energy, Danish Energy Agency, The Netherlands Agency for Energy and the Environment (NOVEM), The Norwegian Water Resources and Energy Administration (NVE), The Swedish National Energy Administration (STEM) and The Government of the United States of America. (au)

  17. ASM-Triggered ToO Observations of Kilohertz Oscillations in Three Atoll Sources

    Science.gov (United States)

    Kaaret, P.; Swank, Jean (Technical Monitor)

    2000-01-01

Three Rossi X-ray Timing Explorer (RXTE) observations were carried out for this proposal based on target-of-opportunity triggers derived from the All-Sky Monitor (ASM) on RXTE. We obtained short observations of 4U1636-536 (15 ks) and 4U1735-44 (23 ks) and a longer observation of 4U0614+091 (117 ks). Our analysis of our observations of the atoll neutron star X-ray binary 4U1735-44 led to the discovery of a second high-frequency quasiperiodic oscillation (QPO) in this source. These results were published in the Astrophysical Journal Letters. The data obtained on the source 4U0614+091 were used in a comprehensive study of this source, which will be published in the Astrophysical Journal. The data from this proposal were particularly critical for that study, as they led to the detection of the highest QPO frequency ever found in the X-ray emission from an X-ray binary, which will be important in placing limits on the equation of state of nuclear matter.

  18. Analysis of large databases in vascular surgery.

    Science.gov (United States)

    Nguyen, Louis L; Barshes, Neal R

    2010-09-01

Large databases can be a rich source of clinical and administrative information on broad populations. These datasets are characterized by demographic and clinical data for over 1000 patients from multiple institutions. Since they are often collected and funded for other purposes, their use for secondary analysis increases their utility at relatively low costs. Advantages of large databases as a source include the very large numbers of available patients and their related medical information. Disadvantages include lack of detailed clinical information and absence of causal descriptions. Researchers working with large databases should also be mindful of data structure design and inherent limitations to large databases, such as treatment bias and systemic sampling errors. Notwithstanding these limitations, several important studies have been published in vascular care using large databases. They represent timely, "real-world" analyses of questions that may be too difficult or costly to address using prospective randomized methods. Large databases will be an increasingly important analytical resource as we focus on improving national health care efficacy in the setting of limited resources.

  19. Scale out databases for CERN use cases

    CERN Document Server

    Baranowski, Zbigniew; Canali, Luca; Garcia, Daniel Lanza; Surdy, Kacper

    2015-01-01

Data generation rates are expected to grow very fast for some database workloads going into LHC run 2 and beyond. In particular this is expected for data coming from controls, logging and monitoring systems. Storing, administering and accessing big data sets in a relational database system can quickly become a very hard technical challenge, as the size of the active data set and the number of concurrent users increase. Scale-out database technologies are a rapidly developing set of solutions for deploying and managing very large data warehouses on commodity hardware and with open source software. In this paper we will describe the architecture and tests on database systems based on Hadoop and the Cloudera Impala engine. We will discuss the results of our tests, including tests of data loading and integration with existing data sources and in particular with relational databases. We will report on query performance tests done with various data sets of interest at CERN, notably data from the accelerator log database.

  20. Towards P2P XML Database Technology

    NARCIS (Netherlands)

    Y. Zhang (Ying)

    2007-01-01

    textabstractTo ease the development of data-intensive P2P applications, we envision a P2P XML Database Management System (P2P XDBMS) that acts as a database middle-ware, providing a uniform database abstraction on top of a dynamic set of distributed data sources. In this PhD work, we research which

  1. Online Sources of Competitive Intelligence.

    Science.gov (United States)

    Wagers, Robert

    1986-01-01

    Presents an approach to using online sources of information for competitor intelligence (i.e., monitoring industry and tracking activities of competitors); identifies principal sources; and suggests some ways of making use of online databases. Types and sources of information and sources and database charts are appended. Eight references are…

  2. An Integrated Enterprise Accelerator Database for the SLC Control System

    International Nuclear Information System (INIS)

    2002-01-01

Since its inception in the early 1980s, the SLC Control System has been driven by a highly structured memory-resident real-time database. While efficient, its rigid structure and file-based sources make it difficult to maintain and extract relevant information. The goal of transforming the sources for this database into a relational form is to enable it to be part of a Control System Enterprise Database that is an integrated central repository for SLC accelerator device and Control System data with links to other associated databases. We have taken the concepts developed for the NLC Enterprise Database and used them to create and load a relational model of the online SLC Control System database. This database contains data and structure to allow querying and reporting on beamline devices, their associations and parameters. In the future this will be extended to allow generation of EPICS and SLC database files, setup of applications, and links to other databases such as accelerator maintenance, archive data, financial and personnel records, cabling information, documentation, etc. The database is implemented using Oracle 8i. In the short term it will be updated daily in batch from the online SLC database. In the longer term, it will serve as the primary source for Control System static data, an R and D platform for the NLC, and contribute to SLC Control System operations.
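A toy relational model of the kind described, with beamline devices and their parameters in separate joined tables, might look like the following SQLite sketch. The schema, device names, and parameter names are invented for illustration; the actual system uses Oracle 8i.

```python
import sqlite3

# Toy relational schema standing in for the Control System Enterprise
# Database described above; all table, device, and parameter names are
# invented for the example.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE device (id INTEGER PRIMARY KEY, name TEXT, beamline TEXT);
    CREATE TABLE parameter (device_id INTEGER REFERENCES device(id),
                            name TEXT, value REAL);
""")
conn.executemany("INSERT INTO device VALUES (?, ?, ?)",
                 [(1, "QUAD:LI02:201", "LI02"), (2, "BPMS:LI02:301", "LI02")])
conn.executemany("INSERT INTO parameter VALUES (?, ?, ?)",
                 [(1, "BDES", 21.5), (2, "X_OFFSET", 0.3)])

# Report all devices and their parameters on one beamline via a join,
# the kind of query a file-based memory-resident database makes awkward.
rows = conn.execute("""
    SELECT d.name, p.name, p.value
    FROM device d JOIN parameter p ON p.device_id = d.id
    WHERE d.beamline = 'LI02' ORDER BY d.name
""").fetchall()
```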

  3. OSO-7 observations of high galactic latitude x-ray sources

    International Nuclear Information System (INIS)

    Markert, T.H.; Canizares, C.R.; Clark, G.W.; Li, F.K.; Northridge, P.L.; Sprott, G.F.; Wargo, G.F.

    1976-01-01

Six hundred days of observations by the MIT X-ray detectors aboard OSO-7 have been analyzed. All-sky maps of X-ray intensity have been constructed from these data. A sample map is displayed. Seven sources with galactic latitude |bII| > 10°, discovered during the mapping process, are reported, and upper limits are set on other high-latitude sources. The OSO-7 results are compared with those of Uhuru, and an implication of this comparison, that many of the high-latitude sources may be variable, is discussed.

  4. Building a multi-scaled geospatial temporal ecology database from disparate data sources: Fostering open science through data reuse

    Science.gov (United States)

Soranno, Patricia A.; Bissell, E.G.; Cheruvelil, Kendra S.; Christel, Samuel T.; Collins, Sarah M.; Fergus, C. Emi; Filstrup, Christopher T.; Lapierre, Jean-Francois; Lottig, Noah R.; Oliver, Samantha K.; Scott, Caren E.; Smith, Nicole J.; Stopyak, Scott; Yuan, Shuai; Bremigan, Mary Tate; Downing, John A.; Gries, Corinna; Henry, Emily N.; Skaff, Nick K.; Stanley, Emily H.; Stow, Craig A.; Tan, Pang-Ning; Wagner, Tyler; Webster, Katherine E.

    2015-01-01

    Although there are considerable site-based data for individual or groups of ecosystems, these datasets are widely scattered, have different data formats and conventions, and often have limited accessibility. At the broader scale, national datasets exist for a large number of geospatial features of land, water, and air that are needed to fully understand variation among these ecosystems. However, such datasets originate from different sources and have different spatial and temporal resolutions. By taking an open-science perspective and by combining site-based ecosystem datasets and national geospatial datasets, science gains the ability to ask important research questions related to grand environmental challenges that operate at broad scales. Documentation of such complicated database integration efforts, through peer-reviewed papers, is recommended to foster reproducibility and future use of the integrated database. Here, we describe the major steps, challenges, and considerations in building an integrated database of lake ecosystems, called LAGOS (LAke multi-scaled GeOSpatial and temporal database), that was developed at the sub-continental study extent of 17 US states (1,800,000 km2). LAGOS includes two modules: LAGOSGEO, with geospatial data on every lake with surface area larger than 4 ha in the study extent (~50,000 lakes), including climate, atmospheric deposition, land use/cover, hydrology, geology, and topography measured across a range of spatial and temporal extents; and LAGOSLIMNO, with lake water quality data compiled from ~100 individual datasets for a subset of lakes in the study extent (~10,000 lakes). Procedures for the integration of datasets included: creating a flexible database design; authoring and integrating metadata; documenting data provenance; quantifying spatial measures of geographic data; quality-controlling integrated and derived data; and extensively documenting the database. Our procedures make a large, complex, and integrated

  5. Building a multi-scaled geospatial temporal ecology database from disparate data sources: fostering open science and data reuse.

    Science.gov (United States)

    Soranno, Patricia A; Bissell, Edward G; Cheruvelil, Kendra S; Christel, Samuel T; Collins, Sarah M; Fergus, C Emi; Filstrup, Christopher T; Lapierre, Jean-Francois; Lottig, Noah R; Oliver, Samantha K; Scott, Caren E; Smith, Nicole J; Stopyak, Scott; Yuan, Shuai; Bremigan, Mary Tate; Downing, John A; Gries, Corinna; Henry, Emily N; Skaff, Nick K; Stanley, Emily H; Stow, Craig A; Tan, Pang-Ning; Wagner, Tyler; Webster, Katherine E

    2015-01-01

    Although there are considerable site-based data for individual or groups of ecosystems, these datasets are widely scattered, have different data formats and conventions, and often have limited accessibility. At the broader scale, national datasets exist for a large number of geospatial features of land, water, and air that are needed to fully understand variation among these ecosystems. However, such datasets originate from different sources and have different spatial and temporal resolutions. By taking an open-science perspective and by combining site-based ecosystem datasets and national geospatial datasets, science gains the ability to ask important research questions related to grand environmental challenges that operate at broad scales. Documentation of such complicated database integration efforts, through peer-reviewed papers, is recommended to foster reproducibility and future use of the integrated database. Here, we describe the major steps, challenges, and considerations in building an integrated database of lake ecosystems, called LAGOS (LAke multi-scaled GeOSpatial and temporal database), that was developed at the sub-continental study extent of 17 US states (1,800,000 km(2)). LAGOS includes two modules: LAGOSGEO, with geospatial data on every lake with surface area larger than 4 ha in the study extent (~50,000 lakes), including climate, atmospheric deposition, land use/cover, hydrology, geology, and topography measured across a range of spatial and temporal extents; and LAGOSLIMNO, with lake water quality data compiled from ~100 individual datasets for a subset of lakes in the study extent (~10,000 lakes). Procedures for the integration of datasets included: creating a flexible database design; authoring and integrating metadata; documenting data provenance; quantifying spatial measures of geographic data; quality-controlling integrated and derived data; and extensively documenting the database. Our procedures make a large, complex, and integrated
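One of the integration steps listed above, merging records from separate source datasets while documenting provenance and applying quality control, can be illustrated with a small sketch. The field names, provider labels, and QC range below are hypothetical, not LAGOS's actual schema.

```python
from datetime import date

# Hypothetical site-based water quality datasets from two providers.
dataset_a = [{"lake_id": "MI-001", "tp_ugL": 12.0, "date": date(2007, 7, 1)}]
dataset_b = [
    {"lake_id": "WI-042", "tp_ugL": 30.5, "date": date(2008, 6, 15)},
    {"lake_id": "WI-043", "tp_ugL": -5.0, "date": date(2008, 6, 16)},  # fails QC
]

def integrate(sources):
    """Merge records from named source datasets, attaching a provenance
    field and applying a simple range check (0-1000 ug/L total phosphorus,
    an invented QC rule for this sketch)."""
    merged = []
    for source_name, records in sources.items():
        for rec in records:
            if not 0 <= rec["tp_ugL"] <= 1000:
                continue  # quality control: drop physically implausible values
            merged.append({**rec, "source": source_name})  # record provenance
    return merged

lagos_limno = integrate({"provider_a": dataset_a, "provider_b": dataset_b})
```

Keeping the provenance field on every merged record is what allows derived data to be traced back to its original dataset, one of the documentation goals the abstract emphasizes.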

  6. LAMOST OBSERVATIONS IN THE KEPLER FIELD. I. DATABASE OF LOW-RESOLUTION SPECTRA

    Energy Technology Data Exchange (ETDEWEB)

    De Cat, P.; Ren, A. B.; Yang, X. H. [Royal Observatory of Belgium, Ringlaan 3, B-1180 Brussel (Belgium); Fu, J. N. [Department of Astronomy, Beijing Normal University, 19 Avenue Xinjiekouwai, Beijing 100875 (China); Shi, J. R.; Luo, A. L.; Yang, M.; Wang, J. L.; Zhang, H. T.; Shi, H. M.; Zhang, W. [Key Lab for Optical Astronomy, National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100012 (China); Dong, Subo [Kavli Institute for Astronomy and Astrophysics, Peking University, Yi He Yuan Road 5, Hai Dian District, Beijing, 100871 (China); Catanzaro, G.; Frasca, A. [INAF—Osservatorio Astrofisico di Catania, Via S. Sofia 78, I-95123 Catania (Italy); Corbally, C. J. [Vatican Observatory Research Group, Steward Observatory, Tucson, AZ 85721-0065 (United States); Gray, R. O. [Department of Physics and Astronomy, Appalachian State University, Boone, NC 28608 (United States); Molenda-Żakowicz, J. [Astronomical Institute of the University of Wrocław, ul. Kopernika 11, 51-622 Wrocław (Poland); Uytterhoeven, K. [Instituto de Astrofísica de Canarias (IAC), E-38200 La Laguna, Tenerife (Spain); Briquet, M. [Institut d’Astrophysique et de Géophysique, Université de Liège, Allée du 6 Août 19C, B-4000 Liège (Belgium); Bruntt, H., E-mail: Peter.DeCat@oma.be [Stellar Astrophysics Center, Department of Physics and Astronomy, Aarhus University, Ny Munkegade 120, DK-8000 Aarhus C (Denmark); and others

    2015-09-15

    The nearly continuous light curves with micromagnitude precision provided by the space mission Kepler are revolutionizing our view of pulsating stars. They have revealed a vast sea of low-amplitude pulsation modes that were undetectable from Earth. The long time base of Kepler light curves allows for the accurate determination of the frequencies and amplitudes of pulsation modes needed for in-depth asteroseismic modeling. However, for an asteroseismic study to be successful, first estimates of the stellar parameters need to be known, and they cannot be derived from the Kepler photometry itself. The Kepler Input Catalog provides values for the effective temperature, surface gravity, and metallicity, but not always with sufficient accuracy. Moreover, information on the chemical composition and rotation rate is lacking. We are collecting low-resolution spectra for objects in the Kepler field of view with the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST, Xinglong Observatory, China). All of the requested fields have now been observed at least once. In this paper, we describe those observations and provide a useful database for the whole astronomical community.

  7. Existing data sources for clinical epidemiology: the Danish Patient Compensation Association database

    Directory of Open Access Journals (Sweden)

    Tilma J

    2015-07-01

    Full Text Available Jens Tilma,1 Mette Nørgaard,1 Kim Lyngby Mikkelsen,2 Søren Paaske Johnsen1 1Department of Clinical Epidemiology, Aarhus University Hospital, Aarhus, 2Danish Patient Compensation Association, Copenhagen, Denmark Abstract: Any patient in the Danish health care system who experiences a treatment injury can make a compensation claim to the Danish Patient Compensation Association (DPCA) free of charge. The aim of this paper is to describe the DPCA database as a source of data for epidemiological research. Data for the DPCA are collected prospectively on all claims and include information on patient factors and health records, system factors, and administrative data. Claims are approved when the injury is due to treatment below the standard of an experienced specialist, or when the injury is intolerably and unexpectedly extensive. The average processing time of a compensation claim is 6–8 months. Data collection is nationwide and started in 1992. The patient's central registration system number, a unique personal identifier, allows for data linkage to other registries such as the Danish National Patient Registry. The DPCA data are accessible for research following data usage permission and make it possible to analyze all claims or specific subgroups to identify predictors, outcomes, etc. DPCA data have until now been used in only a few studies but could be a useful data source in future studies of health care-related injuries. Keywords: public health care, treatment injuries, no-fault compensation, registries, research, Denmark

  8. Technical Note: A new global database of trace gases and aerosols from multiple sources of high vertical resolution measurements

    Directory of Open Access Journals (Sweden)

    G. E. Bodeker

    2008-09-01

    Full Text Available A new database of trace gases and aerosols with global coverage, derived from high vertical resolution profile measurements, has been assembled as a collection of binary data files; hereafter referred to as the "Binary DataBase of Profiles" (BDBP). Version 1.0 of the BDBP, described here, includes measurements from different satellite-based (HALOE, POAM II and III, SAGE I and II) and ground-based measurement systems (ozonesondes). In addition to the primary product of ozone, secondary measurements of other trace gases, aerosol extinction, and temperature are included. All data are subjected to very strict quality control, and for every measurement a percentage error on the measurement is included. To facilitate analyses, each measurement is added to 3 different instances (3 different grids) of the database, where measurements are indexed by: (1) geographic latitude, longitude, altitude (in 1 km steps), and time; (2) geographic latitude, longitude, pressure (at levels ~1 km apart), and time; (3) equivalent latitude, potential temperature (8 levels from 300 K to 650 K), and time.

    In contrast to existing zonal mean databases, by including a wider range of measurement sources (both satellite and ozonesonde), the BDBP is sufficiently dense to permit calculation of changes in ozone by latitude, longitude and altitude. In addition, by including other trace gases such as water vapour, this database can be used for comprehensive radiative transfer calculations. By providing the original measurements rather than derived monthly means, the BDBP is applicable to a wider range of applications than databases containing only monthly mean data. Monthly mean zonal mean ozone concentrations calculated from the BDBP are compared with the database of Randel and Wu, which has been used in many earlier analyses. As opposed to that database, which is generated from regression model fits, the BDBP uses the original (quality-controlled) measurements with no smoothing applied.

  9. BioWarehouse: a bioinformatics database warehouse toolkit.

    Science.gov (United States)

    Lee, Thomas J; Pouliot, Yannick; Wagner, Valerie; Gupta, Priyanka; Stringer-Calvert, David W J; Tenenbaum, Jessica D; Karp, Peter D

    2006-03-23

    This article addresses the problem of interoperation of heterogeneous bioinformatics databases. We introduce BioWarehouse, an open source toolkit for constructing bioinformatics database warehouses using the MySQL and Oracle relational database managers. BioWarehouse integrates its component databases into a common representational framework within a single database management system, thus enabling multi-database queries using the Structured Query Language (SQL) but also facilitating a variety of database integration tasks such as comparative analysis and data mining. BioWarehouse currently supports the integration of a pathway-centric set of databases including ENZYME, KEGG, and BioCyc, and in addition the UniProt, GenBank, NCBI Taxonomy, and CMR databases, and the Gene Ontology. Loader tools, written in the C and JAVA languages, parse and load these databases into a relational database schema. The loaders also apply a degree of semantic normalization to their respective source data, decreasing semantic heterogeneity. The schema supports the following bioinformatics datatypes: chemical compounds, biochemical reactions, metabolic pathways, proteins, genes, nucleic acid sequences, features on protein and nucleic-acid sequences, organisms, organism taxonomies, and controlled vocabularies. As an application example, we applied BioWarehouse to determine the fraction of biochemically characterized enzyme activities for which no sequences exist in the public sequence databases. The answer is that no sequence exists for 36% of enzyme activities for which EC numbers have been assigned. These gaps in sequence data significantly limit the accuracy of genome annotation and metabolic pathway prediction, and are a barrier for metabolic engineering. Complex queries of this type provide examples of the value of the data warehousing approach to bioinformatics research. BioWarehouse embodies significant progress on the database integration problem for bioinformatics.
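
    The enzyme-coverage query mentioned above illustrates the warehouse idea: once the component databases share one relational schema, a single SQL statement can span them. The following sketch reproduces the flavor of that query using SQLite and a toy two-table schema (the table and column names are assumptions for illustration, not BioWarehouse's actual schema):

```python
import sqlite3

# Toy warehouse: two "component database" tables in one DBMS.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE enzyme_activity (ec_number TEXT PRIMARY KEY, name TEXT);
    CREATE TABLE protein_sequence (id INTEGER PRIMARY KEY, ec_number TEXT);
""")
con.executemany("INSERT INTO enzyme_activity VALUES (?, ?)",
                [("1.1.1.1", "alcohol dehydrogenase"),
                 ("2.7.1.1", "hexokinase"),
                 ("4.2.1.20", "tryptophan synthase")])
con.executemany("INSERT INTO protein_sequence (ec_number) VALUES (?)",
                [("1.1.1.1",), ("2.7.1.1",)])

# Multi-database-style query: which EC-numbered activities have no sequence?
rows = con.execute("""
    SELECT a.ec_number
    FROM enzyme_activity a
    LEFT JOIN protein_sequence s ON s.ec_number = a.ec_number
    WHERE s.id IS NULL
""").fetchall()
print(rows)  # [('4.2.1.20',)]
```

Because both datasets live in one schema, the coverage gap falls out of a single LEFT JOIN; without the warehouse, answering the same question would require stitching results across separately hosted databases.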

  10. The Significance of HIV ‘Blips’ in Resource-Limited Settings: Is It the Same? Analysis of the Treat Asia HIV Observational Database (TAHOD) and the Australian HIV Observational Database (AHOD)

    Science.gov (United States)

    Kanapathipillai, Rupa; McManus, Hamish; Kamarulzaman, Adeeba; Lim, Poh Lian; Templeton, David J.; Law, Matthew; Woolley, Ian

    2014-01-01

    Introduction The magnitude and frequency of HIV viral load blips in resource-limited settings have not previously been assessed. This study was undertaken in a cohort from a high-income country (Australia) known as AHOD (Australian HIV Observational Database) and a cohort from a mixture of Asian countries of varying national income per capita, TAHOD (TREAT Asia HIV Observational Database). Methods Blips were defined as a detectable VL (≥50 copies/mL) preceded and followed by an undetectable VL (<50 copies/mL). Virological failure (VF) was defined as two consecutive VL ≥50 copies/mL. Cox proportional hazards models of time to first VF after entry were developed. Results 5040 patients (AHOD n = 2597 and TAHOD n = 2521) were included; 910 (18%) of patients ever experienced blips: 744 (21%) and 166 (11%) of high- and middle/low-income participants, respectively. 711 (14%) experienced blips prior to virological failure: 559 (16%) and 152 (10%) of high- and middle/low-income participants, respectively. VL testing occurred at a median frequency of 175 and 91 days in middle/low- and high-income sites, respectively. Time to VF was longer in middle/low-income sites than in high-income sites (adjusted hazards ratio (AHR) 0.41; p < 0.001). Blips were not significantly associated with time to virological failure (p = 0.360 for blips 50–≤1000, p = 0.309 for blips 50–≤400, and p = 0.300 for blips 50–≤200 copies/mL). 209 of 866 (24%) patients were switched to an alternate regimen in the setting of a blip. Conclusion Despite a lower proportion of blips occurring in low/middle-income settings, no significant difference was found between settings. Nonetheless, a substantial number of participants were switched to alternative regimens in the setting of blips. PMID:24516527

  11. CORE-Hom: a powerful and exhaustive database of clinical trials in homeopathy.

    Science.gov (United States)

    Clausen, Jürgen; Moss, Sian; Tournier, Alexander; Lüdtke, Rainer; Albrecht, Henning

    2014-10-01

    The CORE-Hom database was created to answer the need for a reliable and publicly available source of information in the field of clinical research in homeopathy. As of May 2014 it held 1048 entries of clinical trials, observational studies and surveys in the field of homeopathy, including second publications and re-analyses. 352 of the trials referenced in the database were published in peer-reviewed journals, 198 of which were randomised controlled trials. The most often used remedies were Arnica montana (n = 103) and Traumeel® (n = 40). The most studied medical conditions were respiratory tract infections (n = 126) and traumatic injuries (n = 110). The aim of this article is to introduce the database to the public, describing and explaining the interface, features and content of the CORE-Hom database. Copyright © 2014 The Faculty of Homeopathy. Published by Elsevier Ltd. All rights reserved.

  12. Evaluation of Electronic Healthcare Databases for Post-Marketing Drug Safety Surveillance and Pharmacoepidemiology in China.

    Science.gov (United States)

    Yang, Yu; Zhou, Xiaofeng; Gao, Shuangqing; Lin, Hongbo; Xie, Yanming; Feng, Yuji; Huang, Kui; Zhan, Siyan

    2018-01-01

    Electronic healthcare databases (EHDs) are used increasingly for post-marketing drug safety surveillance and pharmacoepidemiology in Europe and North America. However, few studies have examined the potential of these data sources in China. Three major types of EHDs in China (i.e., a regional community-based database, a national claims database, and an electronic medical records [EMR] database) were selected for evaluation. Forty core variables were derived based on the US Mini-Sentinel (MS) Common Data Model (CDM) as well as the data features in China that would be desirable to support drug safety surveillance. An email survey of these core variables and eight general questions as well as follow-up inquiries on additional variables was conducted. These 40 core variables across the three EHDs and all variables in each EHD along with those in the US MS CDM and Observational Medical Outcomes Partnership (OMOP) CDM were compared for availability and labeled based on specific standards. All of the EHDs' custodians confirmed their willingness to share their databases with academic institutions after appropriate approval was obtained. The regional community-based database contained 1.19 million people in 2015 with 85% of core variables. Resampled annually nationwide, the national claims database included 5.4 million people in 2014 with 55% of core variables, and the EMR database included 3 million inpatients from 60 hospitals in 2015 with 80% of core variables. Compared with MS CDM or OMOP CDM, the proportion of variables across the three EHDs available or able to be transformed/derived from the original sources are 24-83% or 45-73%, respectively. These EHDs provide potential value to post-marketing drug safety surveillance and pharmacoepidemiology in China. Future research is warranted to assess the quality and completeness of these EHDs or additional data sources in China.

  13. Relationship between the Prediction Accuracy of Tsunami Inundation and Relative Distribution of Tsunami Source and Observation Arrays: A Case Study in Tokyo Bay

    Science.gov (United States)

    Takagawa, T.

    2017-12-01

    A rapid and precise tsunami forecast based on offshore monitoring is getting attention as a means to reduce human losses due to devastating tsunami inundation. We developed a forecast method based on the combination of hierarchical Bayesian inversion with a pre-computed database and rapid post-computation of tsunami inundation. The method was applied to Tokyo Bay to evaluate the efficiency of observation arrays against three tsunamigenic earthquakes: a scenario earthquake at the Nankai Trough and the two historic earthquakes of Genroku in 1703 and Enpo in 1677. In general, a rich observation array near the tsunami source has an advantage in both the accuracy and the rapidness of a tsunami forecast. To examine the effect of observation time length, we used four types of data with lengths of 5, 10, 20 and 45 minutes after the earthquake occurrence. Prediction accuracy was evaluated against the simulated tsunami inundation areas around Tokyo Bay due to the target earthquakes. The shortest time length for accurate prediction varied with the target earthquake; here, accurate prediction means that the simulated values fall within the 95% credible intervals of the prediction. In the Enpo case, 5 minutes of observation is enough for accurate prediction for Tokyo Bay, but 10 and 45 minutes are needed in the cases of the Nankai Trough and Genroku, respectively. The difference in the shortest time length for accurate prediction shows a strong relationship with the relative distance between the tsunami source and the observation arrays. In the Enpo case, offshore tsunami observation points are densely distributed even in the source region, so accurate prediction can be achieved within 5 minutes; such rapid prediction is useful for early warnings. Even in the worst case, Genroku, where fewer observation points are available near the source, accurate prediction can be obtained within 45 minutes. This information can be useful for figuring out the outline of the hazard at an early stage.

  14. The HISTMAG database: combining historical, archaeomagnetic and volcanic data

    Science.gov (United States)

    Arneitz, Patrick; Leonhardt, Roman; Schnepp, Elisabeth; Heilig, Balázs; Mayrhofer, Franziska; Kovacs, Peter; Hejda, Pavel; Valach, Fridrich; Vadasz, Gergely; Hammerl, Christa; Egli, Ramon; Fabian, Karl; Kompein, Niko

    2017-09-01

    Records of the past geomagnetic field can be divided into two main categories: instrumental historical observations on the one hand, and field estimates based on the magnetization acquired by rocks, sediments and archaeological artefacts on the other. In this paper, a new database combining historical, archaeomagnetic and volcanic records is presented. HISTMAG is a relational database, implemented in MySQL, and can be accessed via a web-based interface (http://www.conrad-observatory.at/zamg/index.php/data-en/histmag-database). It combines the available global historical data compilations covering the last ∼500 yr as well as archaeomagnetic and volcanic data collections from the last 50 000 yr. Furthermore, new historical and archaeomagnetic records, mainly from central Europe, have been acquired. In total, 190 427 records are currently available in the HISTMAG database, the majority of which are historical declination measurements (155 525). The original database structure was complemented by new fields that allow for a detailed description of the different data types, and a user-comment function provides the possibility of scientific discussion about individual records. The HISTMAG database therefore supports thorough reliability and uncertainty assessments of the widely different data sets, which are an essential basis for geomagnetic field reconstructions. A database analysis revealed a systematic offset, relative to other historical records, for declination records derived from compass roses on historical geographical maps, while maps created for mining activities represent a reliable source.

  15. Source location of chorus emissions observed by Cluster

    Directory of Open Access Journals (Sweden)

    M. Parrot

    Full Text Available One of the objectives of the Cluster mission is to study sources of various electromagnetic waves using the four satellites. This paper describes the methods we have applied to data recorded from the STAFF spectrum analyser. This instrument provides the cross spectral matrix of three magnetic and two electric field components. This spectral matrix is analysed to determine, for each satellite, the direction of the wave normal relative to the Earth’s magnetic field as a function of frequency and of time. Due to the Cluster orbit, chorus emissions are often observed close to perigee, and the data analysis determines the direction of these waves. Three events observed during different levels of magnetic activity are reported. It is shown that the component of the Poynting vector parallel to the magnetic field changes its sense when the satellites cross the magnetic equator, which indicates that the chorus waves propagate away from the equator. Detailed analysis indicates that the source is located in close vicinity of the plane of the geomagnetic equator.

    Key words. Magnetospheric physics (plasma waves and instabilities; storms and substorms); Space plasma physics (waves and instabilities)

  16. Food Habits Database (FHDBS)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The NEFSC Food Habits Database has two major sources of data. The first, and most extensive, is the standard NEFSC Bottom Trawl Surveys Program. During these...

  17. An event database for rotational seismology

    Science.gov (United States)

    Salvermoser, Johannes; Hadziioannou, Celine; Hable, Sarah; Chow, Bryant; Krischer, Lion; Wassermann, Joachim; Igel, Heiner

    2016-04-01

    The ring laser sensor (G-ring) located at Wettzell, Germany, has routinely observed earthquake-induced rotational ground motions around a vertical axis since its installation in 2003. Here we present results from a recently installed event database, the first to provide ring laser event data in an open-access format. Based on the GCMT event catalogue and some search criteria, seismograms from the ring laser and the collocated broadband seismometer are extracted and processed. The ObsPy-based processing scheme generates plots showing waveform fits between rotation rate and transverse acceleration, and extracts characteristic wavefield parameters such as peak ground motions, noise levels, Love wave phase velocities and waveform coherence. For each event, these parameters are stored in a text file (a JSON dictionary) which is easily readable and accessible on the website. The database contains >10000 events starting in 2007 (Mw>4.5). It is updated daily and therefore provides recent events at a time lag of at most 24 hours. The user interface allows filtering of events by epoch, magnitude, and source area, whereupon the events are displayed on a zoomable world map. We investigate how well the rotational motions are compatible with the expectations from the surface wave magnitude scale. In addition, the website offers some Python source code examples for downloading and processing the openly accessible waveforms.
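
    A per-event parameter file of the kind described above (a JSON dictionary of characteristic wavefield parameters) is straightforward to consume programmatically. The sketch below is a minimal illustration; the field names and values are assumptions for the example, not the database's actual schema:

```python
import json

# Hypothetical per-event record, mimicking the "text file (JSON dictionary)"
# the database stores for each event. All field names are illustrative.
record = json.loads("""{
    "event_id": "C201301010000A",
    "magnitude_mw": 5.6,
    "peak_rotation_rate_nrad_s": 12.4,
    "peak_transverse_acc_nm_s2": 870.0,
    "love_wave_phase_velocity_km_s": 4.1,
    "waveform_coherence": 0.82
}""")

# A filter step analogous to the website's magnitude/quality selection.
if record["magnitude_mw"] >= 4.5 and record["waveform_coherence"] > 0.5:
    print(f"{record['event_id']}: Love wave phase velocity "
          f"{record['love_wave_phase_velocity_km_s']} km/s")
```

Because the stored format is plain JSON, the same pattern extends directly to bulk downloads: load each event file, filter on magnitude or coherence, and aggregate the extracted parameters.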

  18. Diagnosing Tibetan pollutant sources via volatile organic compound observations

    Science.gov (United States)

    Li, Hongyan; He, Qiusheng; Song, Qi; Chen, Laiguo; Song, Yongjia; Wang, Yuhang; Lin, Kui; Xu, Zhencheng; Shao, Min

    2017-10-01

    Atmospheric transport of black carbon (BC) from surrounding areas has been shown to impact the Tibetan environment, and clarifying the geographical source and receptor regions is crucial for guiding mitigation actions. In this study, 10 trace volatile organic compounds (VOCs) sampled across Tibet are chosen as proxies to diagnose source regions and the related transport of pollutants to Tibet. The levels of these VOCs in Tibet are higher than those in the Arctic and Antarctic regions but much lower than those observed at many remote and background sites in Asia. The highest VOC level is observed in the eastern region, followed by the southern and then the northern region. A positive matrix factorization (PMF) model found that three factors (industry, biomass burning, and traffic) present different spatial distributions, indicating that different zones of Tibet are influenced by different VOC sources. The average age of the air masses in the northern and eastern regions is estimated, using the ratio of toluene to benzene, at 3.5 and 2.8 days, respectively, which indicates foreign transport of VOC species to those regions. Back-trajectory analyses show that the Afghanistan-Pakistan-Tajikistan region, the Indo-Gangetic Plain (IGP), and the Meghalaya-Myanmar region could transport industrial VOCs to different zones of Tibet from west to east. The agricultural bases in northern India could transport biomass burning-related VOCs to the middle-northern and eastern zones of Tibet. High traffic along the unique national roads in Tibet is associated with emissions from local sources and neighboring areas. Our study proposes international joint-control efforts and targeted actions to mitigate the climatic changes and effects associated with VOCs in Tibet, which is a climate-sensitive region and an important source of global water.

  19. Coastal Ocean Observing Network - Open Source Architecture for Data Management and Web-Based Data Services

    Science.gov (United States)

    Pattabhi Rama Rao, E.; Venkat Shesu, R.; Udaya Bhaskar, T. V. S.

    2012-07-01

    Observations from the oceans are the backbone of any operational service, viz. potential fishing zone advisory services, ocean state forecasts, and warnings for storm surges, cyclones, monsoon variability, tsunamis, etc. Though it is important to monitor the open ocean, it is equally important to acquire sufficient data in the coastal ocean through coastal ocean observing systems: for re-analysis, analysis and forecasting of the coastal ocean by assimilating different ocean variables, especially sub-surface information; for validation of remote sensing data and of ocean and atmosphere models/analyses; and for understanding the processes related to air-sea interaction and ocean physics. Accurate information on, and forecasts of, the state of the coastal ocean at different time scales are vital for the wellbeing of the coastal population as well as for the socio-economic development of the country through shipping, offshore oil and energy, etc. Considering the importance of ocean observations for understanding our ocean environment and for operational oceanography, a large number of platforms have been deployed in the Indian Ocean, including coastal observatories, to acquire data on ocean variables in and around the Indian Seas. The coastal observation network includes HF radars, wave rider buoys, sea level gauges, etc. The surface meteorological and oceanographic data generated by these observing networks are translated into ocean information services through analysis and modelling. A centralized data management system is a critical component in the timely delivery of ocean information and advisory services. In this paper, we describe the development of an open-source architecture for real-time data reception from the coastal observation network, processing, quality control, database generation and web-based data services, including online data visualization and data downloads.

  20. Directory of IAEA databases. 4. ed.

    International Nuclear Information System (INIS)

    1997-06-01

    This fourth edition of the Directory of IAEA Databases has been prepared within the Division of NESI. Its main objective is to describe the computerized information sources available to the public. The directory contains all publicly available databases produced at the IAEA, including databases stored on the mainframe, LAN servers and user PCs. All IAEA Division Directors have been requested to register the existence of their databases with NESI. At the date of printing, some of the information in the directory may already be obsolete; for the most up-to-date information please see the IAEA's World Wide Web site at URL: http:/www.iaea.or.at/databases/dbdir/. Refs, figs, tabs

  1. BioWarehouse: a bioinformatics database warehouse toolkit

    Directory of Open Access Journals (Sweden)

    Stringer-Calvert David WJ

    2006-03-01

    Full Text Available Abstract Background This article addresses the problem of interoperation of heterogeneous bioinformatics databases. Results We introduce BioWarehouse, an open source toolkit for constructing bioinformatics database warehouses using the MySQL and Oracle relational database managers. BioWarehouse integrates its component databases into a common representational framework within a single database management system, thus enabling multi-database queries using the Structured Query Language (SQL) but also facilitating a variety of database integration tasks such as comparative analysis and data mining. BioWarehouse currently supports the integration of a pathway-centric set of databases including ENZYME, KEGG, and BioCyc, and in addition the UniProt, GenBank, NCBI Taxonomy, and CMR databases, and the Gene Ontology. Loader tools, written in the C and JAVA languages, parse and load these databases into a relational database schema. The loaders also apply a degree of semantic normalization to their respective source data, decreasing semantic heterogeneity. The schema supports the following bioinformatics datatypes: chemical compounds, biochemical reactions, metabolic pathways, proteins, genes, nucleic acid sequences, features on protein and nucleic-acid sequences, organisms, organism taxonomies, and controlled vocabularies. As an application example, we applied BioWarehouse to determine the fraction of biochemically characterized enzyme activities for which no sequences exist in the public sequence databases. The answer is that no sequence exists for 36% of enzyme activities for which EC numbers have been assigned. These gaps in sequence data significantly limit the accuracy of genome annotation and metabolic pathway prediction, and are a barrier for metabolic engineering. Complex queries of this type provide examples of the value of the data warehousing approach to bioinformatics research. Conclusion BioWarehouse embodies significant progress on the database integration problem for bioinformatics.

  2. IAEA Illicit Trafficking Database (ITDB)

    International Nuclear Information System (INIS)

    2010-01-01

    The IAEA Illicit Trafficking Database (ITDB) was established in 1995 as a unique network of points of contact connecting 100 states and several international organizations. Information is collected from official sources and supplemented by open-source reports. A 1994 General Conference resolution (GC 38) called for intensifying the activities through which the Agency supports Member States in this field. Member States were notified of the completed database in 1995 and invited to participate. The purpose of the ITDB is to facilitate the exchange of authoritative information among States on incidents of illicit trafficking and other related unauthorized activities involving nuclear and other radioactive materials; to collect, maintain and analyse information on such incidents with a view to identifying common threats, trends, and patterns; to use this information for internal planning and prioritisation and provide it to Member States; and to provide a reliable source of basic information on such incidents to the media, when appropriate

  3. Database theory and SQL practice using Access

    International Nuclear Information System (INIS)

    Kim, Gyeong Min; Lee, Myeong Jin

    2001-01-01

    This book introduces database theory and SQL practice using Access. It comprises seven chapters, covering: an understanding of databases, with basic concepts and DBMSs; an understanding of relational databases, with examples; building database tables and entering data using Access 2000; the Structured Query Language, with an introduction to managing queries and building complex queries in SQL; advanced SQL commands, with the concepts of joins and virtual tables; the design of a database for an online bookstore in six steps; and the building of an application, with its functions, structure and components, including an understanding of the principles, operation and program source for the application menu.
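
    The join and virtual-table (view) concepts mentioned for the advanced-SQL chapter can be sketched in a few lines. The example below uses SQLite in place of Access, and the online-bookstore tables are invented for illustration:

```python
import sqlite3

# In-memory database standing in for an Access .mdb file.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE book (isbn TEXT PRIMARY KEY, title TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, isbn TEXT, qty INTEGER);
    INSERT INTO book VALUES ('111', 'Database Theory'), ('222', 'SQL Practice');
    INSERT INTO orders (isbn, qty) VALUES ('111', 2), ('111', 1), ('222', 5);
    -- A view is the "virtual table": a named join that is queried like a table.
    CREATE VIEW sales AS
        SELECT b.title AS title, SUM(o.qty) AS sold
        FROM book b JOIN orders o ON o.isbn = b.isbn
        GROUP BY b.title;
""")
sales_rows = con.execute("SELECT title, sold FROM sales ORDER BY title").fetchall()
print(sales_rows)  # [('Database Theory', 3), ('SQL Practice', 5)]
```

The view hides the join from its users: once `sales` is defined, later queries treat it exactly like a base table, which is the pedagogical point of the virtual-table concept.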

  4. Astronomy Education Research Observations from the iSTAR international Study of Astronomical Reasoning Database

    Science.gov (United States)

    Tatge, C. B.; Slater, S. J.; Slater, T. F.; Schleigh, S.; McKinnon, D.

    2016-12-01

    Historically, an important part of the scientific research cycle is to situate any research project within the landscape of the existing scientific literature. In the field of discipline-based astronomy education research, grappling with the existing literature base has proven difficult because of the difficulty of obtaining research reports from around the world, particularly early ones. In order to better survey and efficiently utilize the wide and fractured range of astronomy education research methods and results, the iSTAR (international Study of Astronomical Reasoning) database project was initiated. The project aims to host a living, online repository of dissertations, theses, journal articles, and grey-literature resources to serve the world's discipline-based astronomy education research community. The first domain of research artifacts ingested into the iSTAR database was doctoral dissertations. To the authors' great surprise, nearly 300 astronomy education research dissertations were found from the last 100 years. Few, if any, of the literature reviews in recent astronomy education dissertations come close to summarizing this many dissertations, most of which have not been published in traditional journals, as re-publishing one's dissertation research as a journal article was not a widespread custom in the education research community until recently. A survey of the iSTAR database dissertations reveals that the vast majority of the work was quantitative in nature until the last decade. We also observe that astronomy education research writing reaches as far back as 1923 and that the majority of dissertations come from the same eight institutions. Moreover, most of the astronomy education research has covered learners' grasp of broad astronomy knowledge rather than delving into specific learning targets, an approach that has been more in vogue during the last two decades.
The surprisingly wide breadth

  5. Hubble Source Catalog

    Science.gov (United States)

    Lubow, S.; Budavári, T.

    2013-10-01

    We have created an initial catalog of objects observed by the WFPC2 and ACS instruments on the Hubble Space Telescope (HST). The catalog is based on observations taken on more than 6000 visits (telescope pointings) of ACS/WFC and more than 25000 visits of WFPC2. The catalog is obtained by cross-matching by position on the sky all Hubble Legacy Archive (HLA) Source Extractor source lists for these instruments. The source lists describe properties of source detections within a visit. The calculations are performed on a SQL Server database system. First, we collect overlapping images into groups, e.g., Eta Car, and determine nearby (approximately matching) pairs of sources from different images within each group. We then apply a novel algorithm for improving the cross-matching of pairs of sources by adjusting the astrometry of the images. Next, we combine pairwise matches into maximal sets of possible multi-source matches. We apply a greedy Bayesian method to split the maximal matches into more reliable matches. We test the accuracy of the matches by comparing the fluxes of the matched sources. The result is a set of information that ties together multiple observations of the same object. A byproduct of the catalog is greatly improved relative astrometry for many of the HST images. We also provide information on nondetections that can be used to determine dropouts. With the catalog, for the first time, one can carry out time-domain, multi-wavelength studies across a large set of HST data. The catalog is publicly available. Much more can be done to expand the catalog's capabilities.
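
    The positional cross-matching step described above can be illustrated with a small sketch. This is not the HSC pipeline itself (which also adjusts astrometry and applies a greedy Bayesian split); it is only a minimal greedy nearest-neighbour match within an assumed tolerance, with illustrative function names.

```python
import math

def angular_sep(ra1, dec1, ra2, dec2):
    """Great-circle separation in degrees (haversine; inputs in degrees)."""
    r1, d1, r2, d2 = map(math.radians, (ra1, dec1, ra2, dec2))
    a = (math.sin((d2 - d1) / 2) ** 2
         + math.cos(d1) * math.cos(d2) * math.sin((r2 - r1) / 2) ** 2)
    return math.degrees(2 * math.asin(math.sqrt(a)))

def cross_match(list_a, list_b, tol_arcsec=0.3):
    """Greedy nearest-neighbour match of two source lists.

    Each source is an (ra, dec) tuple in degrees; returns index pairs."""
    tol = tol_arcsec / 3600.0
    matches, used = [], set()
    for i, (ra_a, dec_a) in enumerate(list_a):
        best, best_sep = None, tol
        for j, (ra_b, dec_b) in enumerate(list_b):
            if j in used:
                continue
            sep = angular_sep(ra_a, dec_a, ra_b, dec_b)
            if sep <= best_sep:
                best, best_sep = j, sep
        if best is not None:
            used.add(best)
            matches.append((i, best))
    return matches
```

    In a real pipeline the inner loop would be replaced by a spatial index (e.g. a kd-tree or HEALPix binning), since the all-pairs scan above is quadratic in the number of detections.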

  6. The development of software and formation of a database on the main sources of environmental contamination in areas around nuclear power plants

    International Nuclear Information System (INIS)

    Palitskaya, T.A.; Novikov, A.V.; Makeicheva, M.A.; Ivanov, E.A.

    2004-01-01

    Providing environmental safety control in the process of nuclear power plant (NPP) operation, environmental protection and rational use of natural resources is one of the most important tasks of the Rosenergoatom Concern. To ensure environmental safety, trustworthy, complete and timely information is needed on the availability and condition of natural resources, on the quality of the natural environment and on its contamination level. Industrial environmental monitoring allows obtaining, processing and evaluating data for making environmentally acceptable and economically efficient decisions. The industrial environmental monitoring system at NPPs is formed taking into account both radiation and non-radiation factors of impact. Data on non-radiation factors of NPP impact are provided by a complex of special observations carried out by the NPPs' environment protection services. The gained information is transmitted to the Rosenergoatom Concern and entered into a database of the Environment Protection Division of the Concern's Department of Radiation Safety, Environment Protection and Nuclear Materials Accounting. The database on the main sources of environmental contamination in the areas around NPPs will provide a high level of environmental control authenticity and maintenance of the set standards, as well as automation of the most labor-consuming and frequently repeated types of operations. The applied software is being developed by specialists from the All-Russia Research Institute of Nuclear Power Plants on the basis of the database management system Microsoft SQL Server, using VBA and Microsoft Access. The data will be transmitted through open communication channels. Geo-referenced digital mapping information, based on ArcGIS and MapInfo, will be the main form of output data presentation. The Federal authority bodies, their regional units and the Concern's sub-divisions involved in environmental protection activities will be the database

  7. SIMS: addressing the problem of heterogeneity in databases

    Science.gov (United States)

    Arens, Yigal

    1997-02-01

    The heterogeneity of remotely accessible databases -- with respect to contents, query language, semantics, organization, etc. -- presents serious obstacles to convenient querying. The SIMS (single interface to multiple sources) system addresses this global integration problem. It does so by defining a single language for describing the domain about which information is stored in the databases and using this language as the query language. Each database to which SIMS is to provide access is modeled using this language. The model describes a database's contents, organization, and other relevant features. SIMS uses these models, together with a planning system drawing on techniques from artificial intelligence, to decompose a given user's high-level query into a series of queries against the databases and other data manipulation steps. The retrieval plan is constructed so as to minimize data movement over the network and maximize parallelism to increase execution speed. SIMS can recover from network failures during plan execution by obtaining data from alternate sources, when possible. SIMS has been demonstrated in the domains of medical informatics and logistics, using real databases.
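
    The decomposition idea can be sketched in a few lines. This is not the actual SIMS planner (which applies AI planning over formal domain models and optimizes data movement); it is a hypothetical mediator in which each source declares the domain attributes it covers, the planner picks sources covering the requested attributes, runs one sub-query per source, and joins the results on a shared key. All names here are invented for illustration.

```python
def plan(query_attrs, sources):
    """Pick a set of sources that together cover query_attrs."""
    chosen, covered = [], set()
    for src in sources:
        gain = (set(src["attrs"]) - covered) & set(query_attrs)
        if gain:
            chosen.append(src)
            covered |= gain
    if not set(query_attrs) <= covered:
        raise ValueError("query not answerable from available sources")
    return chosen

def execute(query_attrs, key, sources):
    """Decompose a high-level query into per-source sub-queries and join."""
    chosen = plan(query_attrs, sources)
    result = {}
    for src in chosen:
        for row in src["query"]():  # sub-query against one database
            result.setdefault(row[key], {}).update(
                {a: row[a] for a in src["attrs"] if a in row})
    return [{key: k, **v} for k, v in result.items()
            if set(query_attrs) <= set(v) | {key}]
```

    A real mediator would additionally translate each sub-query into the source's native query language and reorder the sub-queries to minimize network traffic, as the abstract describes.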

  8. Domain Regeneration for Cross-Database Micro-Expression Recognition

    Science.gov (United States)

    Zong, Yuan; Zheng, Wenming; Huang, Xiaohua; Shi, Jingang; Cui, Zhen; Zhao, Guoying

    2018-05-01

    In this paper, we investigate the cross-database micro-expression recognition problem, in which the training and testing samples come from two different micro-expression databases. Under this setting, the training and testing samples have different feature distributions, and hence the performance of most existing micro-expression recognition methods may decrease greatly. To solve this problem, we propose a simple yet effective method called the Target Sample Re-Generator (TSRG). Using TSRG, we are able to re-generate the samples from the target micro-expression database such that the re-generated target samples share the same or similar feature distributions with the original source samples. We can then use the classifier learned on the labeled source samples to accurately predict the micro-expression categories of the unlabeled target samples. To evaluate the performance of the proposed TSRG method, extensive cross-database micro-expression recognition experiments based on the SMIC and CASME II databases are conducted. Compared with recent state-of-the-art cross-database emotion recognition methods, the proposed TSRG achieves more promising results.
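
    TSRG itself is a learned re-generator; as a minimal, hypothetical stand-in for the underlying idea (making target-domain features share the source distribution), one can match per-feature mean and standard deviation:

```python
def align_to_source(target, source):
    """Shift and scale each target feature so its mean and standard
    deviation match the source domain. A crude, illustrative stand-in
    for learned target-sample re-generation, not TSRG itself."""
    def stats(rows):
        n, dim = len(rows), len(rows[0])
        mean = [sum(r[d] for r in rows) / n for d in range(dim)]
        std = [(sum((r[d] - mean[d]) ** 2 for r in rows) / n) ** 0.5 or 1.0
               for d in range(dim)]  # guard against zero spread
        return mean, std
    s_mean, s_std = stats(source)
    t_mean, t_std = stats(target)
    return [[(x - t_mean[d]) / t_std[d] * s_std[d] + s_mean[d]
             for d, x in enumerate(row)] for row in target]
```

    After such an alignment, a classifier trained on the source samples can at least be applied to the transformed target samples without a gross distribution mismatch, which is the setting the abstract describes.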

  9. Database Search Engines: Paradigms, Challenges and Solutions.

    Science.gov (United States)

    Verheggen, Kenneth; Martens, Lennart; Berven, Frode S; Barsnes, Harald; Vaudel, Marc

    2016-01-01

    The first step in identifying proteins from mass spectrometry-based shotgun proteomics data is to infer peptides from tandem mass spectra, a task generally achieved using database search engines. In this chapter, the basic principles of database search engines are introduced with a focus on open-source software, and the use of database search engines is demonstrated using the freely available SearchGUI interface. This chapter also discusses how to tackle general issues related to sequence database searching and shows how to minimize their impact.
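
    The core lookup inside any such search engine can be sketched as a precursor-mass filter over a peptide database. The residue masses below are standard monoisotopic values for a few amino acids; the tolerance and function names are illustrative, not taken from any particular engine.

```python
# Monoisotopic residue masses in daltons for a handful of amino acids.
RESIDUE_MASS = {"G": 57.02146, "A": 71.03711, "S": 87.03203,
                "P": 97.05276, "V": 99.06841, "K": 128.09496}
WATER = 18.01056  # mass of H2O added to the residue sum

def peptide_mass(seq):
    """Theoretical monoisotopic mass of a peptide sequence."""
    return sum(RESIDUE_MASS[aa] for aa in seq) + WATER

def candidates(precursor_mass, peptides, tol_da=0.02):
    """Peptides whose theoretical mass lies within tol_da of the
    measured precursor mass; these go on to fragment-ion scoring."""
    return [p for p in peptides
            if abs(peptide_mass(p) - precursor_mass) <= tol_da]
```

    Real engines then score each candidate against the observed fragment-ion spectrum; the mass filter shown here only narrows the search space.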

  10. Directory of IAEA databases. 3. ed.

    International Nuclear Information System (INIS)

    1993-12-01

    This edition of the Directory of IAEA Databases has been prepared within the Division of Scientific and Technical Information. Its main objective is to describe the computerized information sources available to staff members. The directory contains all databases produced at the IAEA, including databases stored on the mainframe, LANs and PCs. All IAEA Division Directors have been requested to register the existence of their databases with NESI. For this edition, database owners were requested to review the existing entries for their databases and answer four additional questions. The four additional questions concerned the type of database (e.g. bibliographic, text, statistical, etc.), the category of database (e.g. administrative, nuclear data, etc.), the available documentation and the type of media used for distribution. In the individual entries on the following pages, the answers to the first two questions (type and category) are always listed, but the answers to the last two questions (documentation and media) are only listed when the information has been made available.

  11. Radio and x-ray observations of compact sources in or near supernova remnants

    International Nuclear Information System (INIS)

    Seaquist, E.R.; Gilmore, W.S.

    1982-01-01

    We present VLA multifrequency radio observations of six compact radio sources from the list of nine objects proposed by Ryle et al. [Nature 276, 571 (1978)] as a new class of radio star, possibly the stellar remnants of supernovae. We also present the results of a search for x-ray emission from four of these objects with the Einstein observatory. The radio observations provide information on spectra, polarization, time variability, angular structure, and positions for these sources. The bearing of these new data on the nature of the sources is discussed. One particularly interesting result is that the polarization and angular-size measurements are combined in an astrophysical argument to conclude that one of the sources (2013+370) is extragalactic. No x-ray emission was detected from any of the four objects observed, but an extended x-ray source was found coincident with the supernova remnant G 33.6+0.1 near 1849+005. Our measurements provide no compelling arguments to consider any of the six objects studied as radio stars

  12. Move Over, Word Processors--Here Come the Databases.

    Science.gov (United States)

    Olds, Henry F., Jr.; Dickenson, Anne

    1985-01-01

    Discusses the use of beginning, intermediate, and advanced databases for instructional purposes. A table listing seven databases with information on ease of use, smoothness of operation, data capacity, speed, source, and program features is included. (JN)

  13. The RHIC transfer line cable database

    International Nuclear Information System (INIS)

    Scholl, E.H.; Satogata, T.

    1995-01-01

    A cable database was created to facilitate and document installation of cables and wiring in the RHIC project, as well as to provide a data source to track possible wiring and signal problems. The eight tables of this relational database, currently implemented in Sybase, contain information ranging from cable routing to attenuation of individual wires. This database was created in a hierarchical scheme under the assumption that cables contain wires -- each instance of a cable has one to many wires associated with it. This scheme allows entry of information pertinent to individual wires while only requiring single entries for each cable. Relationships to other RHIC databases are also discussed
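
    The cable-to-wire hierarchy described above can be sketched in SQL. The original system was implemented in Sybase; the sqlite3 schema below, with invented table and column names, only illustrates the one-cable-to-many-wires scheme in which per-wire attributes (signal, attenuation) live in their own table.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE cable (
        cable_id TEXT PRIMARY KEY,
        route    TEXT
    );
    CREATE TABLE wire (
        wire_id     INTEGER PRIMARY KEY,
        cable_id    TEXT NOT NULL REFERENCES cable(cable_id),
        signal_name TEXT,
        atten_db    REAL
    );
""")
# One entry per cable; one row per wire it contains.
con.execute("INSERT INTO cable VALUES ('C-001', 'bldg 1004 to tunnel')")
con.executemany(
    "INSERT INTO wire (cable_id, signal_name, atten_db) VALUES (?, ?, ?)",
    [("C-001", "BPM-x", 1.2), ("C-001", "BPM-y", 1.3)])

# All wires carried by one cable, via a single join:
rows = con.execute("""
    SELECT w.signal_name, w.atten_db
    FROM wire w JOIN cable c ON w.cable_id = c.cable_id
    WHERE c.cable_id = 'C-001' ORDER BY w.signal_name
""").fetchall()
```

    The foreign key from wire to cable is what lets wire-level data (attenuation, signal problems) be entered individually while routing information is recorded once per cable.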

  14. Database of Interacting Proteins (DIP)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The DIP database catalogs experimentally determined interactions between proteins. It combines information from a variety of sources to create a single, consistent...

  15. Observations of Ultra-Luminous X-ray Sources, and Implications

    Science.gov (United States)

    Colbert, E. J. M.

    2004-05-01

    I will review observations of Ultra-Luminous X-ray Sources (ULXs; Lx > 1E39 erg/s), in particular those observations that have helped reveal the nature of these curious objects. Some recent observations suggest that ULXs are a heterogeneous class. Although ULX phenomenology is not fully understood, I will present some examples from the (possibly overlapping) sub-classes. Since ULXs are the most luminous objects in starburst galaxies, they, and "normal" luminous black-hole high-mass X-ray binaries, are intimately tied to the galaxy-wide connection between X-ray emission and star formation. Further work is needed to understand how ULXs form, and how they are associated with the putative population of intermediate-mass black holes.

  16. Robust iterative observer for source localization for Poisson equation

    KAUST Repository

    Majeed, Muhammad Usman

    2017-01-05

    The source localization problem for the Poisson equation with available noisy boundary data is well known to be highly sensitive to noise. The problem is ill-posed and fails to fulfill Hadamard's stability criterion for well-posedness. In this work, a robust iterative observer is first presented for the boundary estimation problem for the Laplace equation; this algorithm, together with the available noisy boundary data from the Poisson problem, is then used to localize point sources inside a rectangular domain. The algorithm is inspired by Kalman filter design, with one of the space variables treated as time-like. The numerical implementation, along with simulation results, is detailed towards the end.
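
    The forward problem underlying this setup can be sketched with a small finite-difference solve: a point source in a rectangular domain with homogeneous Dirichlet boundary conditions. The observer-based inversion itself is not reproduced here, and the grid size, source location, and iteration count are arbitrary.

```python
def solve_poisson(n=20, src=(10, 5), iters=2000):
    """Gauss-Seidel solve of u_xx + u_yy = f on the unit square with
    u = 0 on the boundary and a unit point source at grid node `src`.
    Illustrative forward problem only."""
    h = 1.0 / (n - 1)
    f = [[0.0] * n for _ in range(n)]
    f[src[0]][src[1]] = -1.0 / h**2  # discrete point source
    u = [[0.0] * n for _ in range(n)]
    for _ in range(iters):
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                u[i][j] = 0.25 * (u[i-1][j] + u[i+1][j] + u[i][j-1]
                                  + u[i][j+1] - h**2 * f[i][j])
    return u
```

    The resulting solution peaks at the source node; the inverse problem the abstract addresses is to recover that source location from boundary measurements of u and its normal derivative, which is the ill-posed step.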

  17. Robust iterative observer for source localization for Poisson equation

    KAUST Repository

    Majeed, Muhammad Usman; Laleg-Kirati, Taous-Meriem

    2017-01-01

    The source localization problem for the Poisson equation with available noisy boundary data is well known to be highly sensitive to noise. The problem is ill-posed and fails to fulfill Hadamard's stability criterion for well-posedness. In this work, a robust iterative observer is first presented for the boundary estimation problem for the Laplace equation; this algorithm, together with the available noisy boundary data from the Poisson problem, is then used to localize point sources inside a rectangular domain. The algorithm is inspired by Kalman filter design, with one of the space variables treated as time-like. The numerical implementation, along with simulation results, is detailed towards the end.

  18. ALFRED: An Allele Frequency Database for Microevolutionary Studies

    Directory of Open Access Journals (Sweden)

    Kenneth K Kidd

    2005-01-01

    Many kinds of microevolutionary studies require data on multiple polymorphisms in multiple populations. Increasingly, and especially for human populations, multiple research groups collect relevant data, and those data are dispersed widely in the literature. ALFRED has been designed to hold data from many sources and make them available over the web. Data are assembled from multiple sources, curated, and entered into the database. Multiple links to other resources are also established by the curators. A variety of search options are available, and additional geography-based interfaces are being developed. The database can serve the human anthropological genetics community by identifying which loci are already typed in many populations, thereby helping to focus efforts on a common set of markers. The database can also serve as a model for databases handling similar DNA polymorphism data for other species.

  19. Comment on the internal consistency of thermodynamic databases supporting repository safety assessments

    International Nuclear Information System (INIS)

    Arthur, R.C.

    2001-11-01

    This report addresses the concept of internal consistency and its relevance to the reliability of thermodynamic databases used in repository safety assessments. In addition to being internally consistent, a reliable database should be accurate over a range of relevant temperatures and pressures, complete in the sense that all important aqueous species, gases and solid phases are represented, and traceable to original experimental results. No single definition of internal consistency is universally accepted as the most appropriate under all conditions, however. As a result, two databases that are each internally consistent may be inconsistent with respect to each other, and a database derived from two or more such databases must itself be internally inconsistent. The consequences of alternative definitions reasonably attributable to the concept of internal consistency can be illustrated with reference to the thermodynamic database supporting SKB's recent SR 97 safety assessment. This database is internally inconsistent because it includes equilibrium constants calculated over a range of temperatures: using conflicting reference values for some solids, gases and aqueous species that are common to the two internally consistent source databases of the SR 97 TDB (the OECD/NEA database for radioelements and the SUPCRT database for non-radioactive elements); using the different definitions adopted in these source databases of standard states for condensed phases and aqueous species; based on the different mathematical expressions used in these source databases for the temperature dependence of the heat capacity; and based on the different chemical models adopted in these source databases for the aqueous phase. The importance of such inconsistencies must be considered in relation to the other database reliability criteria noted above, however. Thus, if a certain level of internal inconsistency in a database is accepted, it is probably preferable to use a
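
    One concrete face of such cross-database checks can be sketched as follows: recompute log10 K for the same reaction from each source's Gibbs energy via the standard relation dG = -RT ln K, and compare the values at shared temperatures. The code is illustrative and not any particular TDB's validation procedure.

```python
import math

R = 8.314462618  # gas constant, J/(mol K)

def log_k_from_gibbs(delta_g_joules, T):
    """log10 K from the standard Gibbs energy of reaction at T (kelvin)."""
    return -delta_g_joules / (R * T * math.log(10))

def max_discrepancy(table_a, table_b):
    """Largest |delta log K| between two source databases over shared
    temperatures. Each table maps temperature (K) to log10 K for the
    same reaction."""
    shared = set(table_a) & set(table_b)
    return max(abs(table_a[T] - table_b[T]) for T in shared)
```

    A discrepancy larger than the combined experimental uncertainties at any shared temperature would flag exactly the kind of reference-value or heat-capacity-model conflict the report describes.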

  20. Comment on the internal consistency of thermodynamic databases supporting repository safety assessments

    Energy Technology Data Exchange (ETDEWEB)

    Arthur, R.C. [Monitor Scientific, LLC, Denver, CO (United States)

    2001-11-01

    This report addresses the concept of internal consistency and its relevance to the reliability of thermodynamic databases used in repository safety assessments. In addition to being internally consistent, a reliable database should be accurate over a range of relevant temperatures and pressures, complete in the sense that all important aqueous species, gases and solid phases are represented, and traceable to original experimental results. No single definition of internal consistency is universally accepted as the most appropriate under all conditions, however. As a result, two databases that are each internally consistent may be inconsistent with respect to each other, and a database derived from two or more such databases must itself be internally inconsistent. The consequences of alternative definitions reasonably attributable to the concept of internal consistency can be illustrated with reference to the thermodynamic database supporting SKB's recent SR 97 safety assessment. This database is internally inconsistent because it includes equilibrium constants calculated over a range of temperatures: using conflicting reference values for some solids, gases and aqueous species that are common to the two internally consistent source databases of the SR 97 TDB (the OECD/NEA database for radioelements and the SUPCRT database for non-radioactive elements); using the different definitions adopted in these source databases of standard states for condensed phases and aqueous species; based on the different mathematical expressions used in these source databases for the temperature dependence of the heat capacity; and based on the different chemical models adopted in these source databases for the aqueous phase. The importance of such inconsistencies must be considered in relation to the other database reliability criteria noted above, however. Thus, if a certain level of internal inconsistency in a database is accepted, it is probably preferable to

  1. The ESID Online Database network.

    Science.gov (United States)

    Guzman, D; Veit, D; Knerr, V; Kindle, G; Gathmann, B; Eades-Perner, A M; Grimbacher, B

    2007-03-01

    Primary immunodeficiencies (PIDs) belong to the group of rare diseases. The European Society for Immunodeficiencies (ESID), is establishing an innovative European patient and research database network for continuous long-term documentation of patients, in order to improve the diagnosis, classification, prognosis and therapy of PIDs. The ESID Online Database is a web-based system aimed at data storage, data entry, reporting and the import of pre-existing data sources in an enterprise business-to-business integration (B2B). The online database is based on Java 2 Enterprise System (J2EE) with high-standard security features, which comply with data protection laws and the demands of a modern research platform. The ESID Online Database is accessible via the official website (http://www.esid.org/). Supplementary data are available at Bioinformatics online.

  2. Experiment Databases

    Science.gov (United States)

    Vanschoren, Joaquin; Blockeel, Hendrik

    Beyond running machine learning algorithms based on inductive queries, much can be learned by immediately querying the combined results of many prior studies. Indeed, all around the globe, thousands of machine learning experiments are being executed on a daily basis, generating a constant stream of empirical information on machine learning techniques. While the information contained in these experiments might have many uses beyond their original intent, results are typically described very concisely in papers and discarded afterwards. If we properly store and organize these results in central databases, they can be immediately reused for further analysis, thus boosting future research. In this chapter, we propose the use of experiment databases: databases designed to collect all the necessary details of these experiments, and to intelligently organize them in online repositories to enable fast and thorough analysis of a myriad of collected results. They constitute an additional, queryable source of empirical meta-data based on principled descriptions of algorithm executions, without reimplementing the algorithms in an inductive database. As such, they engender a very dynamic, collaborative approach to experimentation, in which experiments can be freely shared, linked together, and immediately reused by researchers all over the world. They can be set up for personal use, to share results within a lab or to create open, community-wide repositories. Here, we provide a high-level overview of their design, and use an existing experiment database to answer various interesting research questions about machine learning algorithms and to verify a number of recent studies.

  3. Brain Tumor Database, a free relational database for collection and analysis of brain tumor patient information.

    Science.gov (United States)

    Bergamino, Maurizio; Hamilton, David J; Castelletti, Lara; Barletta, Laura; Castellan, Lucio

    2015-03-01

    In this study, we describe the development and utilization of a relational database designed to manage the clinical and radiological data of patients with brain tumors. The Brain Tumor Database (BT Database) was implemented using MySQL v.5.0, while the graphical user interface was created using PHP and HTML, thus making it easily accessible through a web browser. This web-based approach potentially allows multiple institutions to access the database. The BT Database can record brain tumor patient information (e.g. clinical features, anatomical attributes, and radiological characteristics) and be used for clinical and research purposes. Analytic tools to automatically generate statistics and different plots are provided. The BT Database is a free and powerful user-friendly tool with a wide range of possible clinical and research applications in neurology and neurosurgery. The BT Database graphical user interface source code and manual are freely available at http://tumorsdatabase.altervista.org. © The Author(s) 2013.

  4. Databases and bookkeeping for HEP experiments

    International Nuclear Information System (INIS)

    Blobel, V.; Cnops, A.-M.; Fisher, S.M.

    1983-09-01

    The term database is explained, as well as the requirements for databases in High Energy Physics (HEP). Also covered are the packages used in HEP, a summary of user experience, database management systems, relational database management systems for HEP use, and observations. (U.K.)

  5. A scalable database model for multiparametric time series: a volcano observatory case study

    Science.gov (United States)

    Montalto, Placido; Aliotta, Marco; Cassisi, Carmelo; Prestifilippo, Michele; Cannata, Andrea

    2014-05-01

    The variables collected by a sensor network constitute a heterogeneous data source that needs to be properly organized in order to be used in research and geophysical monitoring. The term time series refers to a set of observations of a given phenomenon acquired sequentially in time; when the time intervals are equally spaced, one speaks of the sampling period or sampling frequency. Our work describes in detail a possible methodology for the storage and management of time series using a specific data structure. We designed a framework, hereinafter called TSDSystem (Time Series Database System), to acquire time series from different data sources and standardize them within a relational database. This standardization provides the ability to perform operations, such as querying and visualization, on many measures, synchronizing them on a common time scale. The proposed architecture follows a multiple-layer paradigm (Loaders layer, Database layer and Business Logic layer). Each layer is specialized in performing particular operations for the reorganization and archiving of data from different sources such as ASCII, Excel, ODBC (Open DataBase Connectivity), and files accessible from the Internet (web pages, XML). In particular, the Loaders layer performs a security check of the working status of each running software component through a heartbeat system, in order to automate the discovery of acquisition issues and other warning conditions. Although our system has to manage huge amounts of data, performance is guaranteed by a smart table-partitioning strategy that keeps the percentage of data stored in each database table balanced. TSDSystem also contains modules for the visualization of acquired data, which provide the possibility to query different time series over a specified time range, or to follow the real-time signal acquisition, according to a data access policy for the users.
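
    The standardization step (putting measures from different sources on a common time scale so they can be queried and plotted together) can be sketched with a simple nearest-earlier resampling. TSDSystem's actual scheme is not detailed in the abstract, so the function below is only an assumed illustration.

```python
def to_common_scale(series, t0, t1, step):
    """Resample irregular samples {time: value} onto an equally spaced
    time scale using the nearest earlier sample (None before the first).

    A simple stand-in for the standardization step described above."""
    times = sorted(series)
    out = {}
    t = t0
    while t <= t1:
        prior = [ts for ts in times if ts <= t]
        out[t] = series[prior[-1]] if prior else None
        t += step
    return out
```

    Once every source is resampled onto the same grid, a multi-parameter query over a time range reduces to aligning dictionaries on shared keys.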

  6. A source representation of microseisms constrained by HV spectral ratio observations

    Science.gov (United States)

    Dreger, D.; Rhie, J.

    2006-12-01

    Microseisms are generated by pressure variations on the sea floor caused by incident and reflected ocean waves, and are the dominant background noise at short periods. Observations of microseism wave fields in deep sedimentary basins (e.g., Santa Clara Valley) show that the period of the maximum of the horizontal-to-vertical (H/V) spectral ratio correlates with basin thickness. A similar correlation has been found in teleseismic arrival times and P-wave amplitudes, as well as in local-earthquake S-wave relative amplification [Dolenc et al., 2005]. This observation suggests that a study of the microseism wave field, combined with other seismic data sets, can probably be used to invert for the velocity structure of deep basins. To make this inversion possible, it is necessary to understand the excitation and propagation characteristics of microseisms. We will perform forward computations of microseism wave fields for source representations such as CLVDs and single forces with the USGS 3D velocity model. Various spatial extents as well as the frequency content of the source will be tested to match observed shifts in the dominant H/V spectral ratio. The optimal source representation of the microseisms will be the first step toward inversions for 3D seismic velocity structure in sedimentary basins using microseisms.
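
    The H/V spectral ratio itself is straightforward to compute from three-component records. The sketch below uses a direct DFT and the quadratic mean of the two horizontal spectra; conventions vary in practice (spectral smoothing, windowing, and tapering are omitted here), so treat it as a minimal illustration.

```python
import cmath

def dft_amplitude(x):
    """Amplitude spectrum of a real signal via a direct DFT (O(n^2))."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))) for k in range(n // 2)]

def hv_ratio(north, east, vertical):
    """H/V spectral ratio per frequency bin: quadratic mean of the two
    horizontal amplitude spectra divided by the vertical spectrum."""
    hn, he, hv = map(dft_amplitude, (north, east, vertical))
    return [((a * a + b * b) / 2) ** 0.5 / v if v else float("inf")
            for a, b, v in zip(hn, he, hv)]
```

    The period of the bin where this ratio peaks is the observable that, per the abstract, correlates with basin thickness.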

  7. Observations of Intermediate-mass Black Holes and Ultra-Luminous X-ray sources

    Science.gov (United States)

    Colbert, E. J. M.

    2003-12-01

    I will review various observations that suggest that intermediate-mass black holes (IMBHs) with masses ~10^2-10^4 M⊙ exist in our Universe. I will also discuss some of the limitations of these observations. HST observations of excess dark mass in globular cluster cores suggest IMBHs may be responsible, and some mass estimates from lensing experiments are nearly in the IMBH range. The intriguing Ultra-Luminous X-ray sources (ULXs, or IXOs) are off-nuclear X-ray point sources with X-ray luminosities L_X ≳ 10^39 erg s^-1. ULXs are typically rare (1 in every 5 galaxies), and the nature of their ultra-luminous emission is currently debated. I will discuss the evidence for IMBHs in some ULXs, and briefly outline some phenomenology. Finally, I will discuss future observations that can be made to search for IMBHs.

  8. DBGC: A Database of Human Gastric Cancer

    Science.gov (United States)

    Wang, Chao; Zhang, Jun; Cai, Mingdeng; Zhu, Zhenggang; Gu, Wenjie; Yu, Yingyan; Zhang, Xiaoyan

    2015-01-01

    The Database of Human Gastric Cancer (DBGC) is a comprehensive database that integrates various human gastric cancer-related data resources. Human gastric cancer-related transcriptomics projects, proteomics projects, mutations, biomarkers and drug-sensitive genes from different sources were collected and unified in this database. Moreover, epidemiological statistics of gastric cancer patients in China and clinicopathological information annotated with gastric cancer cases were also integrated into the DBGC. We believe that this database will greatly facilitate research regarding human gastric cancer in many fields. DBGC is freely available at http://bminfor.tongji.edu.cn/dbgc/index.do PMID:26566288

  9. APIS : an interactive database of HST-UV observations of the outer planets

    Science.gov (United States)

    Lamy, Laurent; Henry, Florence; Prangé, Renée; Le Sidaner, Pierre

    2014-05-01

    Remote UV measurements of the outer planets offer a wealth of information on rings, moons, planetary atmospheres and magnetospheres. Auroral emissions in particular provide highly valuable constraints on the auroral processes at work and the underlying coupling between the solar wind, the magnetosphere, the ionosphere and the moons. Key observables provided by high-resolution spectro-imaging include the spatial topology and dynamics of active magnetic field lines, the radiated and precipitated powers, and the energy of precipitating particles. The Hubble Space Telescope (HST) has acquired thousands of far-UV spectra and images of the aurorae of Jupiter, Saturn and Uranus since 1993, feeding numerous magnetospheric studies. But their use remains generally limited, owing to the difficulty of accessing and using raw and value-added data. APIS, the name of the Egyptian god of fertility, is also the acronym of a new database (Auroral Planetary Imaging and Spectroscopy) aimed at facilitating the use of HST planetary auroral observations. APIS is based at the Virtual Observatory (VO) of Paris and provides free and interactive access to a variety of high-level data through a simple search interface and standard VO tools (such as Aladin and Specview). We will present the capabilities of APIS and illustrate them with several examples.

  10. Localization of Point Sources for Poisson Equation using State Observers

    KAUST Repository

    Majeed, Muhammad Usman

    2016-08-09

    A method based on iterative observer design is presented to solve the point source localization problem for the Poisson equation with given boundary data. The procedure involves the solution of multiple boundary estimation sub-problems using the available Dirichlet and Neumann data from different parts of the boundary. A weighted sum of the solution profiles of these sub-problems localizes point sources inside the domain. A method to compute these weights is also provided. Numerical results are presented using finite differences in a rectangular domain. (C) 2016, IFAC (International Federation of Automatic Control) Hosting by Elsevier Ltd. All rights reserved.

  11. Localization of Point Sources for Poisson Equation using State Observers

    KAUST Repository

    Majeed, Muhammad Usman; Laleg-Kirati, Taous-Meriem

    2016-01-01

    A method based on iterative observer design is presented to solve the point source localization problem for the Poisson equation with given boundary data. The procedure involves the solution of multiple boundary estimation sub-problems using the available Dirichlet and Neumann data from different parts of the boundary. A weighted sum of the solution profiles of these sub-problems localizes the point sources inside the domain. A method to compute these weights is also provided. Numerical results are presented using finite differences in a rectangular domain. (C) 2016, IFAC (International Federation of Automatic Control) Hosting by Elsevier Ltd. All rights reserved.
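The setting of this record can be made concrete with a toy forward/inverse example. The sketch below is a hypothetical 1-D analogue, not the paper's iterative observer scheme: it solves the Poisson equation -u'' = f with a discrete point source by finite differences, then recovers the source node from the spike in the discrete Laplacian of the solution.

```python
import numpy as np

# Toy 1-D analogue of point-source localization for the Poisson equation.
# (The paper's method uses iterative observers on boundary data; this
# sketch only illustrates the forward problem and a naive recovery.)

n = 101                       # grid points including boundaries
h = 1.0 / (n - 1)

src_idx = 63                  # hypothetical source location (interior node)
f = np.zeros(n)
f[src_idx] = 1.0 / h          # discrete approximation of a delta source

# Standard second-order finite-difference Laplacian on interior nodes;
# solve -u'' = f with homogeneous Dirichlet boundary conditions.
main = 2.0 * np.ones(n - 2)
off = -1.0 * np.ones(n - 3)
A = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / h**2
u = np.zeros(n)
u[1:-1] = np.linalg.solve(A, f[1:-1])

# Naive localization: -u'' equals f, so its discrete version spikes
# exactly at the source node.
lap = (u[:-2] - 2.0 * u[1:-1] + u[2:]) / h**2
recovered = 1 + int(np.argmax(-lap))
print(recovered)  # 63, the assumed source node
```

The same idea extends to the rectangular domains used in the paper, where boundary-only data makes the recovery step genuinely nontrivial.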

  12. Enhanced Fire Events Database to Support Fire PRA

    International Nuclear Information System (INIS)

    Baranowsky, Patrick; Canavan, Ken; St. Germain, Shawn

    2010-01-01

    This paper provides a description of the updated and enhanced Fire Events Data Base (FEDB) developed by the Electric Power Research Institute (EPRI) in cooperation with the U.S. Nuclear Regulatory Commission (NRC). The FEDB is the principal source of fire incident operational data for use in fire PRAs. It provides a comprehensive and consolidated source of fire incident information for nuclear power plants operating in the U.S. The database classification scheme identifies important attributes of fire incidents to characterize their nature, causal factors, and severity consistent with available data. The database provides sufficient detail to delineate important plant-specific attributes of the incidents to the extent practical. A significant enhancement in the updated FEDB is the reorganization and refinement of the database structure and data fields, with fire characterization details added to capture more rigorously the nature and magnitude of the fire and the damage to the ignition source and nearby equipment and structures.

  13. The CATDAT damaging earthquakes database

    Science.gov (United States)

    Daniell, J. E.; Khazai, B.; Wenzel, F.; Vervaeck, A.

    2011-08-01

    The global CATDAT damaging earthquakes and secondary effects (tsunami, fire, landslides, liquefaction and fault rupture) database was developed to validate, remove discrepancies, and expand greatly upon existing global databases, and to better understand the trends in vulnerability, exposure, and possible future impacts of such historic earthquakes. Lack of consistency and errors in other frequently cited earthquake loss databases were, in the authors' view, a major shortcoming that needed to be improved upon. Over 17 000 sources of information have been utilised, primarily in the last few years, to present data from over 12 200 damaging earthquakes historically, with over 7000 earthquakes since 1900 examined and validated before insertion into the database. Each validated earthquake includes seismological information, building damage, ranges of social losses to account for varying sources (deaths, injuries, homeless, and affected), and economic losses (direct, indirect, aid, and insured). Globally, a slightly increasing trend in economic damage due to earthquakes is not consistent with the greatly increasing exposure. The 1923 Great Kanto earthquake (214 billion USD damage; 2011 HNDECI-adjusted dollars), compared with the 2011 Tohoku (>300 billion USD at time of writing), 2008 Sichuan and 1995 Kobe earthquakes, shows the increasing concern for economic loss in urban areas, a trend that should be expected to continue. Many economic and social loss values not reported in existing databases have been collected. Historical GDP (Gross Domestic Product), exchange rate, wage information, population, HDI (Human Development Index), and insurance information have been collected globally to form comparisons. This catalogue is the largest known cross-checked global historic damaging earthquake database and should have far-reaching consequences for earthquake loss estimation, socio-economic analysis, and the global reinsurance field.

  14. The CATDAT damaging earthquakes database

    Directory of Open Access Journals (Sweden)

    J. E. Daniell

    2011-08-01

    Full Text Available The global CATDAT damaging earthquakes and secondary effects (tsunami, fire, landslides, liquefaction and fault rupture) database was developed to validate, remove discrepancies, and expand greatly upon existing global databases; and to better understand the trends in vulnerability, exposure, and possible future impacts of such historic earthquakes.

    Lack of consistency and errors in other frequently cited earthquake loss databases were, in the authors' view, a major shortcoming that needed to be improved upon.

    Over 17 000 sources of information have been utilised, primarily in the last few years, to present data from over 12 200 damaging earthquakes historically, with over 7000 earthquakes since 1900 examined and validated before insertion into the database. Each validated earthquake includes seismological information, building damage, ranges of social losses to account for varying sources (deaths, injuries, homeless, and affected), and economic losses (direct, indirect, aid, and insured).

    Globally, a slightly increasing trend in economic damage due to earthquakes is not consistent with the greatly increasing exposure. The 1923 Great Kanto earthquake (214 billion USD damage; 2011 HNDECI-adjusted dollars), compared with the 2011 Tohoku (>300 billion USD at time of writing), 2008 Sichuan and 1995 Kobe earthquakes, shows the increasing concern for economic loss in urban areas, a trend that should be expected to continue. Many economic and social loss values not reported in existing databases have been collected. Historical GDP (Gross Domestic Product), exchange rate, wage information, population, HDI (Human Development Index), and insurance information have been collected globally to form comparisons.

    This catalogue is the largest known cross-checked global historic damaging earthquake database and should have far-reaching consequences for earthquake loss estimation, socio-economic analysis, and the global

  15. HATCHES - a thermodynamic database and management system

    International Nuclear Information System (INIS)

    Cross, J.E.; Ewart, F.T.

    1990-03-01

    The Nirex Safety Assessment Research Programme has been compiling the thermodynamic data necessary to allow simulations of the aqueous behaviour of the elements important to radioactive waste disposal to be made. These data have been obtained from the literature, when available, and validated for the conditions of interest by experiment. In order to maintain these data in an accessible form and to satisfy quality assurance on all data used for assessments, a database has been constructed which resides on a personal computer operating under MS-DOS using the Ashton-Tate dBase III program. This database contains all the input data fields required by the PHREEQE program and, in addition, a body of text which describes the source of the data and the derivation of the PHREEQE input parameters from the source data. The HATCHES system consists of this database, a suite of programs to facilitate the searching and listing of data and a further suite of programs to convert the dBase III files to PHREEQE database format. (Author)

  16. The CERN accelerator measurement database: on the road to federation

    International Nuclear Information System (INIS)

    Roderick, C.; Billen, R.; Gourber-Pace, M.; Hoibian, N.; Peryt, M.

    2012-01-01

    The Measurement database, acting as short-term central persistence and front-end of the CERN accelerator Logging Service, receives billions of time-series data points per day for more than 200 000 signals. A variety of data acquisition systems on hundreds of front-end computers publish source data that eventually end up being logged in the Measurement database. As part of a federated approach to data management, information about source devices is defined in a Configuration database, whilst the signals to be logged are defined in the Measurement database. A mapping, which is often complex and subject to change or extension, is required in order to subscribe to the source devices and write the published data to the corresponding named signals. From 2005, this mapping was done by means of dozens of XML files that were manually maintained by multiple persons, resulting in an error-prone configuration. In 2010 this configuration was fully centralized in the Measurement database itself, significantly reducing the complexity of the process and the number of actors involved. Furthermore, logging processes immediately pick up modified configurations via JMS-based notifications sent directly from the database. This paper describes the architecture and benefits of the current implementation, as well as the next steps on the road to a fully federated solution. (authors)
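As a toy illustration of the centralized configuration described above, the sketch below models the device-to-signal mapping as a single lookup table that fails loudly on unmapped publications. The device and signal names are invented for illustration; the real mapping stored in the Measurement database is far richer.

```python
# Hypothetical device/property -> logged-signal mapping, standing in for
# the centralized configuration that replaced hand-maintained XML files.
SUBSCRIPTIONS = {
    ("PR.BCT", "Acquisition#totalIntensity"): "PR.BCT.INTENSITY",
    ("PR.BCT", "Acquisition#bunchIntensity"): "PR.BCT.BUNCH_INTENSITY",
}

def signal_for(device: str, prop_field: str) -> str:
    """Resolve the logged signal name for a published device field,
    raising on unmapped publications so configuration errors surface
    immediately instead of silently dropping data."""
    try:
        return SUBSCRIPTIONS[(device, prop_field)]
    except KeyError:
        raise KeyError(f"no signal mapped for {device}/{prop_field}")

print(signal_for("PR.BCT", "Acquisition#totalIntensity"))  # PR.BCT.INTENSITY
```

Keeping this table in one database, rather than in dozens of files, is what allows configuration changes to be picked up and validated in a single place.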

  17. A KINETIC DATABASE FOR ASTROCHEMISTRY (KIDA)

    International Nuclear Information System (INIS)

    Wakelam, V.; Pavone, B.; Hébrard, E.; Hersant, F.; Herbst, E.; Loison, J.-C.; Chandrasekaran, V.; Bergeat, A.; Smith, I. W. M.; Adams, N. G.; Bacchus-Montabonel, M.-C.; Béroff, K.; Bierbaum, V. M.; Chabot, M.; Dalgarno, A.; Van Dishoeck, E. F.; Faure, A.; Geppert, W. D.; Gerlich, D.; Galli, D.

    2012-01-01

    We present a novel chemical database for gas-phase astrochemistry. Named the KInetic Database for Astrochemistry (KIDA), this database consists of gas-phase reactions with rate coefficients and uncertainties that will be vetted to the greatest extent possible. Submissions of measured and calculated rate coefficients are welcome, and will be studied by experts before inclusion into the database. Besides providing kinetic information for the interstellar medium, KIDA is planned to contain such data for planetary atmospheres and for circumstellar envelopes. Each year, a subset of the reactions in the database (kida.uva) will be provided as a network for the simulation of the chemistry of dense interstellar clouds with temperatures between 10 K and 300 K. We also provide a code, named Nahoon, to study the time-dependent gas-phase chemistry of zero-dimensional and one-dimensional interstellar sources.
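Gas-phase networks such as kida.uva commonly express two-body rate coefficients in the modified Arrhenius (Kooij) form k(T) = α(T/300)^β exp(−γ/T). The sketch below evaluates that form; the coefficient values are illustrative placeholders, not taken from KIDA.

```python
import math

def kooij_rate(alpha, beta, gamma, T):
    """Modified-Arrhenius (Kooij) rate coefficient, the functional form
    commonly used in gas-phase astrochemical networks:
        k(T) = alpha * (T/300)**beta * exp(-gamma/T)
    with alpha in cm^3 s^-1 for two-body reactions and gamma in K."""
    return alpha * (T / 300.0) ** beta * math.exp(-gamma / T)

# Illustrative (made-up) coefficients for a barrierless ion-neutral
# reaction: beta = gamma = 0 gives a temperature-independent rate.
k10 = kooij_rate(1.0e-9, 0.0, 0.0, 10.0)
k300 = kooij_rate(1.0e-9, 0.0, 0.0, 300.0)
print(k10 == k300)  # True
```

A negative β, as often fitted for ion-neutral reactions, makes the rate increase toward the 10 K end of the range quoted in the abstract.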

  18. XML databases and the semantic web

    CERN Document Server

    Thuraisingham, Bhavani

    2002-01-01

    Efficient access to data, sharing data, extracting information from data, and making use of the information have become urgent needs for today's corporations. With so much data on the Web, managing it with conventional tools is becoming almost impossible. New tools and techniques are necessary to provide interoperability as well as warehousing between multiple data sources and systems, and to extract information from the databases. XML Databases and the Semantic Web focuses on critical and new Web technologies needed for organizations to carry out transactions on the Web, to understand how to use the Web effectively, and to exchange complex documents on the Web. This reference for database administrators, database designers, and Web designers working in tandem with database technologists covers three emerging technologies of significant impact for electronic business: Extensible Markup Language (XML), semi-structured databases, and the semantic Web. The first two parts of the book explore these emerging techn...

  19. Spitzer Observations of the X-ray Sources of NGC 4485/90

    Science.gov (United States)

    Vazquez, Gerardo A.; Colbert, E.; Hornschemeier, A.; Malhotra, S.; Roberts, T.; Ward, M.

    2006-06-01

    The mechanism for forming (or igniting) so-called Ultra-Luminous X-ray sources (ULXs) is very poorly understood. In order to investigate the stellar and gaseous environment of ULXs, we have observed the nearby starburst galaxy system NGC 4485/90 with Spitzer's IRAC and IRS instruments. High-quality mid-infrared images and spectra are used to characterize the history of stars near the ULXs and the ionization state of the surrounding gas. NGC 4485/90 fortuitously hosts six ULXs, and we have analyzed IRAC images and IRS spectra of all six regions. We also observed two "comparison" regions with no X-ray sources. Here we present our preliminary findings on the similarities and differences between the stellar and gaseous components near the ULXs.

  20. Field validation of secondary data sources: a novel measure of representativity applied to a Canadian food outlet database.

    Science.gov (United States)

    Clary, Christelle M; Kestens, Yan

    2013-06-19

    Validation studies of secondary datasets used to characterize neighborhood food businesses generally evaluate how accurately the database represents the true situation on the ground. Depending on the research objectives, the characterization of the business environment may tolerate some inaccuracies (e.g. minor imprecisions in location or errors in business names). Furthermore, if the number of false negatives (FNs) and false positives (FPs) is balanced within a given area, one could argue that the database still provides a "fair" representation of existing resources in this area. Yet, traditional validation measures do not relax matching criteria, and treat FNs and FPs independently. Through the field validation of food businesses found in a Canadian database, this paper proposes alternative criteria for validity. Field validation of the 2010 Enhanced Points of Interest (EPOI) database (DMTI Spatial®) was performed in 2011 in 12 census tracts (CTs) in Montreal, Canada. Some 410 food outlets were extracted from the database and 484 were observed in the field. First, traditional measures of sensitivity and positive predictive value (PPV) accounting for every single mismatch between the field and the database were computed. Second, relaxed measures of sensitivity and PPV that tolerate mismatches in business names or slight imprecisions in location were assessed. A novel measure of representativity that further allows for compensation between FNs and FPs within the same business category and area was proposed. Representativity was computed at CT level as ((TPs +|FPs-FNs|)/(TPs+FNs)), with TPs meaning true positives, and |FPs-FNs| being the absolute value of the difference between the number of FNs and the number of FPs within each outlet category. The EPOI database had a "moderate" capacity to detect an outlet present in the field (sensitivity: 54.5%) or to list only the outlets that actually existed in the field (PPV: 64.4%). 
Relaxed measures of sensitivity and PPV
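The three measures can be written directly from the definitions in the abstract. The representativity function below transcribes the printed expression ((TPs + |FPs − FNs|)/(TPs + FNs)) verbatim, and the example counts are chosen to be consistent with the reported aggregate figures (410 outlets in the database, 484 in the field, sensitivity 54.5%, PPV 64.4%); they are reconstructions, not the study's published per-tract data.

```python
def sensitivity(tp, fn):
    """Share of outlets observed in the field that the database detects."""
    return tp / (tp + fn)

def ppv(tp, fp):
    """Positive predictive value: share of database outlets that exist
    in the field."""
    return tp / (tp + fp)

def representativity(tp, fp, fn):
    """Census-tract-level representativity as printed in the abstract:
    (TPs + |FPs - FNs|) / (TPs + FNs), intended to let false positives
    and false negatives within the same outlet category compensate."""
    return (tp + abs(fp - fn)) / (tp + fn)

# Counts consistent with the abstract's totals: TP + FP = 410 (database),
# TP + FN = 484 (field).
tp, fp, fn = 264, 146, 220
print(round(sensitivity(tp, fn), 3))  # 0.545, the reported "moderate" sensitivity
print(round(ppv(tp, fp), 3))          # 0.644, the reported "moderate" PPV
print(round(representativity(tp, fp, fn), 3))
```

Note that the transcribed formula equals plain sensitivity whenever FPs and FNs are exactly balanced; readers applying it should check the original article for the intended form.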

  1. Trends in Solar energy Driven Vertical Ground Source Heat Pump Systems in Sweden - An Analysis Based on the Swedish Well Database

    Science.gov (United States)

    Juhlin, K.; Gehlin, S.

    2016-12-01

    Sweden is a world leader in developing and using vertical ground source heat pump (GSHP) technology. GSHP systems extract passively stored solar energy in the ground and the Earth's natural geothermal energy. Geothermal energy has been an admitted renewable energy source in Sweden since 2007 and is the third largest renewable energy source in the country today. The Geological Survey of Sweden (SGU) is the authority in Sweden that provides open access geological data on rock, soil and groundwater for the public. All wells drilled must be registered in the SGU Well Database, and it is the well driller's duty to submit registration of drilled wells. Both active and passive geothermal energy systems are in use. Large GSHP systems, with at least 20 boreholes, are active geothermal energy systems. Energy is stored in the ground, which allows both comfort heating and cooling to be extracted. Active systems are therefore relevant for larger properties and industrial buildings. Since 1978 more than 600 000 wells (water wells, GSHP boreholes, etc.) have been registered in the Well Database, with around 20 000 new registrations per year. Of these wells, an estimated 320 000 are registered as GSHP boreholes. The vast majority of these boreholes are single boreholes for single-family houses. The number of properties with registered vertical borehole GSHP installations amounts to approximately 243 000. Of these sites, between 300 and 350 are large GSHP systems with at least 20 boreholes. While the increase in the number of new registrations for smaller homes and households has slowed down after the rapid development in the 1980s and 1990s, the larger installations for commercial and industrial buildings have increased in numbers over the last ten years. This poster uses data from the SGU Well Database to quantify and analyze the trends in vertical GSHP systems reported between 1978-2015 in Sweden, with special focus on large systems.
From the new aggregated data, conclusions can be drawn about

  2. Issues in Big-Data Database Systems

    Science.gov (United States)

    2014-06-01

    that big data will not be manageable using conventional relational database technology, and it is true that alternative paradigms, such as NoSQL systems and search engines, have much to offer...scale well, and because integration with external data sources is so difficult. NoSQL systems are more open to this integration, and provide excellent

  3. The Atacama Cosmology Telescope: Development and preliminary results of point source observations

    Science.gov (United States)

    Fisher, Ryan P.

    2009-06-01

    The Atacama Cosmology Telescope (ACT) is a six meter diameter telescope designed to measure the millimeter sky with arcminute angular resolution. The instrument is currently conducting its third season of observations from Cerro Toco in the Chilean Andes. The primary science goal of the experiment is to expand our understanding of cosmology by mapping the temperature fluctuations of the Cosmic Microwave Background (CMB) at angular scales corresponding to multipoles up to ℓ ~ 10000. The primary receiver for current ACT observations is the Millimeter Bolometer Array Camera (MBAC). The instrument is specially designed to observe simultaneously at 148 GHz, 218 GHz and 277 GHz. To accomplish this, the camera has three separate detector arrays, each containing approximately 1000 detectors. After discussing the ACT experiment in detail, a discussion of the development and testing of the cold readout electronics for the MBAC is presented. Currently, the ACT collaboration is in the process of generating maps of the microwave sky using our first and second season observations. The analysis used to generate these maps requires careful data calibration to produce maps of the arcminute scale CMB temperature fluctuations. Tests and applications of several elements of the ACT calibrations are presented in the context of the second season observations. Scientific exploration has already begun on preliminary maps made using these calibrations. The final portion of this thesis is dedicated to discussing the point sources observed by the ACT. A discussion of the techniques used for point source detection and photometry is followed by a presentation of our current measurements of point source spectral indices.

  4. Near-infrared observations of the far-infrared source V region in NGC 6334

    International Nuclear Information System (INIS)

    Fischer, J.; Joyce, R.R.; Simon, M.; Simon, T.

    1982-01-01

    We have observed a very red near-infrared source at the center of NGC 6334 FIRS V, a far-infrared source suspected of variability by McBreen et al. The near-infrared source has deep ice and silicate absorption bands, and its half-power size at 20 μm is approximately 15″ × 10″. Over the past 2 years we have observed no variability in the near-infrared flux. We have also detected an extended source of H₂ line emission in this region. The total luminosity in the H₂ v = 1–0 S(1) line, uncorrected for extinction along the line of sight, is 0.3 L☉. Detection of emission in high-velocity wings of the J = 1–0 ¹²CO line suggests that the H₂ emission is associated with a supersonic gas flow.

  5. JVLA observations of IC 348 SW: Compact radio sources and their nature

    Energy Technology Data Exchange (ETDEWEB)

    Rodríguez, Luis F.; Zapata, Luis A.; Palau, Aina, E-mail: l.rodriguez@crya.unam.mx, E-mail: l.zapata@crya.unam.mx, E-mail: a.palau@crya.unam.mx [Centro de Radioastronomía y Astrofísica, UNAM, Apdo. Postal 3-72 (Xangari), 58089 Morelia, Michoacán (Mexico)

    2014-07-20

    We present sensitive 2.1 and 3.3 cm Jansky Very Large Array radio continuum observations of the region IC 348 SW. We detect a total of 10 compact radio sources in the region, 7 of which are first reported here. One of the sources is associated with the remarkable periodic time-variable infrared source LRLL 54361, opening the possibility of monitoring this object at radio wavelengths. Four of the sources appear to be powering outflows in the region, including HH 211 and HH 797. In the case of the rotating outflow HH 797, we detect a double radio source at its center, separated by ∼3''. Two of the sources are associated with infrared stars that possibly have gyrosynchrotron emission produced in active magnetospheres. Finally, three of the sources are interpreted as background objects.

  6. Behavior observation of major noise sources in critical care wards.

    Science.gov (United States)

    Xie, Hui; Kang, Jian; Mills, Gary H

    2013-12-01

    This study aimed to investigate the behavior patterns of typical noise sources in critical care wards and to relate those patterns to the health care environment in which the sources occur. An effective observation approach was designed for noise behavior in the critical care environment. Five descriptors were identified for the behavior observations, namely interval, frequency, duration, perceived loudness, and location. Both the single-bed and the multiple-bed wards at the selected Critical Care Department were observed at random on three non-consecutive nights, from 11:30 pm to 7:00 am the following morning. The Matlab distribution fitting tool was applied afterward to plot several types of distributions and estimate the corresponding parameters. The lognormal distribution was considered the most appropriate statistical distribution for noise behaviors in terms of the interval and duration patterns. The turning of patients by staff was closely related to increased occurrences of noises. Among the observed noises, talking was identified with the highest frequency, shortest intervals, and longest durations, followed by monitor alarms. The perceived loudness of talking in the nighttime wards was classified into three levels (raised, normal, and low). In the single-bed wards, most verbal communication occurred around the Entrance Zone, whereas talking in the multiple-bed wards was more likely to be situated in the Staff Work Zone. As expected, more occurrences of noises, along with longer durations, were observed in multiple-bed wards than in single-bed wards. "Monitor plus ventilator alarms" was the most commonly observed combination of multiple noises. © 2013 Elsevier Inc. All rights reserved.

  7. An open source web interface for linking models to infrastructure system databases

    Science.gov (United States)

    Knox, S.; Mohamed, K.; Harou, J. J.; Rheinheimer, D. E.; Medellin-Azuara, J.; Meier, P.; Tilmant, A.; Rosenberg, D. E.

    2016-12-01

    Models of networked engineered resource systems such as water or energy systems are often built collaboratively by developers from different domains working at different locations. These models can be linked to large-scale real-world databases, and they are constantly being improved and extended. As the development and application of these models becomes more sophisticated, and the computing power required for simulations and/or optimisations increases, so has the need for online services and tools which enable the efficient development and deployment of these models. Hydra Platform is an open source, web-based data management system which allows modellers of network-based models to remotely store network topology and associated data in a generalised manner, allowing it to serve multiple disciplines. Hydra Platform exposes a JSON-based web API that allows external programs (referred to as 'Apps') to interact with its stored networks and perform actions such as importing data, running models, or exporting the networks to different formats. Hydra Platform supports multiple users accessing the same network and has a suite of functions for managing users and data. We present an ongoing development in Hydra Platform, the Hydra Web User Interface, through which users can collaboratively manage network data and models in a web browser. The web interface allows multiple users to graphically access, edit and share their networks, run apps and view results. Through apps, which are located on the server, the web interface can give users access to external data sources and models without the need to install or configure any software. This also ensures model results can be reproduced by removing platform or version dependence. Managing data and deploying models via the web interface provides a way for multiple modellers to collaboratively manage data, deploy and monitor model runs, and analyse results.
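As a sketch of what storing a network through a JSON web API might look like, the snippet below builds a minimal topology payload of nodes and links. The field names and endpoint are assumptions for illustration only, not Hydra Platform's actual schema.

```python
import json

# Hypothetical network payload an "App" might submit to a JSON web API.
# Field names (node_1_id, node_2_id, etc.) are invented for this sketch.
network = {
    "name": "demo water system",
    "nodes": [
        {"id": 1, "name": "reservoir"},
        {"id": 2, "name": "city demand"},
    ],
    "links": [
        {"id": 10, "name": "main canal", "node_1_id": 1, "node_2_id": 2},
    ],
}

payload = json.dumps({"network": network})

# An App would then POST this payload to the server, e.g. (hypothetical URL):
#   urllib.request.urlopen("https://example.org/hydra/add_network", data=...)
print(len(json.loads(payload)["network"]["nodes"]))  # 2
```

Because the topology travels as plain JSON, the same payload can be produced or consumed by Apps written in any language, which is what makes the platform discipline-agnostic.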

  8. The development of software and formation of a database on the main sources of environmental contamination in areas around nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Palitskaya, T.A.; Novikov, A.V. [Rosenergoatom Concern, Moscow (Russian Federation); Makeicheva, M.A.; Ivanov, E.A. [All-Russia Research Institute of Nuclear Power Plants, Moscow (Russian Federation)

    2004-07-01

    Providing environmental safety control in the process of nuclear power plant (NPP) operation, environmental protection and rational use of natural resources is one of the most important tasks of the Rosenergoatom Concern. To ensure environmental safety, trustworthy, complete and timely information is needed on the availability and condition of natural resources, and on the quality of the natural environment and its contamination level. Industrial environmental monitoring allows obtaining, processing and evaluating data for making environmentally acceptable and economically efficient decisions. The industrial environmental monitoring system at NPPs is formed taking into account both radiation and non-radiation factors of impact. Data on non-radiation factors of NPP impact are provided by a complex of special observations carried out by NPP environment protection services. The gained information is transmitted to the Rosenergoatom Concern and entered into a database of the Environment Protection Division of the Concern's Department of Radiation Safety, Environment Protection and Nuclear Materials Accounting. The database on the main sources of environmental contamination in the areas around NPPs will provide a high level of environmental control authenticity and maintenance of the set standards, as well as automation of the most labor-consuming and frequently repeated types of operations. The applied software is being developed by specialists from the All-Russia Research Institute of Nuclear Power Plants on the basis of the database management system Microsoft SQL Server, using VBA and Microsoft Access. The data will be transmitted through open communication channels. Geo-referenced digital mapping information, based on ArcGIS and MapInfo, will be the main form of output data presentation. The Federal authority bodies, their regional units and the Concern's sub-divisions involved in the environmental protection activities will be the

  9. Development of a Consumer Product Ingredient Database for ...

    Science.gov (United States)

    Consumer products are a primary source of chemical exposures, yet little structured information is available on the chemical ingredients of these products and the concentrations at which ingredients are present. To address this data gap, we created a database of chemicals in consumer products using product Material Safety Data Sheets (MSDSs) publicly provided by a large retailer. The resulting database represents 1797 unique chemicals mapped to 8921 consumer products and a hierarchy of 353 consumer product “use categories” within a total of 15 top-level categories. We examine the utility of this database and discuss ways in which it will support (i) exposure screening and prioritization, (ii) generic or framework formulations for several indoor/consumer product exposure modeling initiatives, (iii) candidate chemical selection for monitoring near field exposure from proximal sources, and (iv) as activity tracers or ubiquitous exposure sources using “chemical space” map analyses. Chemicals present at high concentrations and across multiple consumer products and use categories that hold high exposure potential are identified. Our database is publicly available to serve regulators, retailers, manufacturers, and the public for predictive screening of chemicals in new and existing consumer products on the basis of exposure and risk. The National Exposure Research Laboratory’s (NERL’s) Human Exposure and Atmospheric Sciences Division (HEASD) conducts resear

  10. An Adaptive Observer-Based Algorithm for Solving Inverse Source Problem for the Wave Equation

    KAUST Repository

    Asiri, Sharefa M.; Zayane, Chadia; Laleg-Kirati, Taous-Meriem

    2015-01-01

    Observers are well known in control theory. Originally designed to estimate the hidden states of dynamical systems given some measurements, the scope of observers has recently been extended to the estimation of other unknowns for systems governed by partial differential equations. In this paper, observers are used to solve an inverse source problem for a one-dimensional wave equation. An adaptive observer is designed to estimate both the state and the source components of a fully discretized system. The effectiveness of the algorithm is demonstrated in noise-free and noisy cases, and insight into the impact of the measurements' size and location is provided.

  11. An Adaptive Observer-Based Algorithm for Solving Inverse Source Problem for the Wave Equation

    KAUST Repository

    Asiri, Sharefa M.

    2015-08-31

    Observers are well known in control theory. Originally designed to estimate the hidden states of dynamical systems given some measurements, the scope of observers has recently been extended to the estimation of other unknowns for systems governed by partial differential equations. In this paper, observers are used to solve an inverse source problem for a one-dimensional wave equation. An adaptive observer is designed to estimate both the state and the source components of a fully discretized system. The effectiveness of the algorithm is demonstrated in noise-free and noisy cases, and insight into the impact of the measurements' size and location is provided.
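The observer in this record is adaptive and tailored to the discretized wave equation. As background on the underlying idea, the toy below implements the classical Luenberger correction xhat ← A·xhat + L·(y − C·xhat) on a small, assumed linear system, showing how output injection drives the state estimate toward the true state; it does not estimate a source term as the paper's observer does.

```python
import numpy as np

# Classical discrete-time Luenberger observer on an assumed 2-state
# linear system: only the first state is measured, yet both states
# are recovered through the output-error correction term.

A = np.array([[1.0, 0.1],
              [-0.1, 0.9]])      # assumed (stable) discretized dynamics
C = np.array([[1.0, 0.0]])       # measurement picks out the first state
L = np.array([[0.5], [0.3]])     # observer gain; A - L C has spectral
                                 # radius 0.7, so the error contracts

x = np.array([[1.0], [-1.0]])    # true initial state (unknown to observer)
xh = np.zeros((2, 1))            # observer starts from zero

for _ in range(200):
    y = C @ x                            # measurement of the true system
    xh = A @ xh + L @ (y - C @ xh)       # predict + correct by output error
    x = A @ x                            # true system evolves

err = float(np.linalg.norm(x - xh))
print(err < 1e-3)  # True: the estimate has converged to the true state
```

The estimation error obeys e[k+1] = (A − LC)e[k], so choosing L to place the eigenvalues of A − LC inside the unit circle guarantees convergence; adaptive observers extend this by updating unknown parameters (here, a source term) alongside the state.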

  12. A likely source of an observation report in Ptolemy's Almagest.

    Science.gov (United States)

    Jones, A.

    1999-09-01

    A recently published volume of Greek papyri from Oxyrhynchus (modern Bahnasa, Egypt) containing astronomical texts, tables, and horoscopes also includes a fragment of a theoretical work on planetary theory. This text, published under the number P.Oxy. LXI 4133, contains the report of an observation of Jupiter's position in AD 104-105 and also refers to another observation of Jupiter made 344 years earlier. The author of the present note has tentatively identified Menelaus of Alexandria as the author of the treatise on planetary theory. Here, he argues that the recovered treatise was very likely Ptolemy's immediate source for the Jupiter observations referred to in the Almagest.

  13. A Generative Approach for Building Database Federations

    Directory of Open Access Journals (Sweden)

    Uwe Hohenstein

    1999-11-01

    Full Text Available A comprehensive, specification-based approach for building database federations is introduced that supports integrated, ODMG 2.0-conforming access to heterogeneous data sources seamlessly from C++. The approach is centered around several generators. A first set of generators produces ODMG adapters that homogenize local sources. Each adapter represents an ODMG view and supports ODMG manipulation and querying. The adapters can be plugged into a federation framework. Another generator produces a homogeneous and uniform view by putting an ODMG-conforming federation layer on top of the adapters. Input to these generators are schema specifications, defined in corresponding specification languages. There are languages to homogenize relational and object-oriented databases, as well as ordinary file systems. Any specification defines an ODMG schema and relates it to an existing data source. An integration language is then used to integrate the schemata and to build system-spanning federated views upon them. The generative nature provides flexibility with respect to schema modification of component databases: any time a schema changes, only the specification has to be adapted, and new adapters are generated automatically.
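
    As an illustration of the adapter/federation pattern described above: the actual system generates C++ ODMG adapters from schema specifications, so the Python class names below are invented for the sketch, which only shows how per-source adapters expose one uniform record interface that a federation layer queries.

```python
import sqlite3

class Adapter:
    """Uniform read interface every adapter exposes (sketch)."""
    def records(self):
        raise NotImplementedError

class SqliteAdapter(Adapter):
    """Homogenizes a relational table into dict records."""
    def __init__(self, conn, table, columns):
        self.conn, self.table, self.columns = conn, table, columns
    def records(self):
        cur = self.conn.execute(
            "SELECT %s FROM %s" % (", ".join(self.columns), self.table))
        return [dict(zip(self.columns, row)) for row in cur]

class FileAdapter(Adapter):
    """Homogenizes an ordinary file source (rows pre-parsed to dicts)."""
    def __init__(self, rows):
        self.rows = rows
    def records(self):
        return list(self.rows)

class Federation:
    """Uniform federated view over all plugged-in adapters."""
    def __init__(self, adapters):
        self.adapters = adapters
    def query(self, predicate):
        return [r for a in self.adapters for r in a.records() if predicate(r)]

# Demo: one relational source, one file source, one federated query.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (name TEXT, age INTEGER)")
conn.executemany("INSERT INTO person VALUES (?, ?)", [("Ada", 36), ("Bob", 17)])
fed = Federation([SqliteAdapter(conn, "person", ["name", "age"]),
                  FileAdapter([{"name": "Cleo", "age": 54}])])
adults = fed.query(lambda r: r["age"] >= 18)
```

    In the generative approach, code like the two adapter classes would be emitted from the source's schema specification rather than written by hand.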

  14. Infrared observations of gravitational-wave sources in Advanced LIGO's second observing run

    Science.gov (United States)

    Singer, Leo; Kasliwal, Mansi; Lau, Ryan; Cenko, Bradley; Global Relay of Observatories Watching Transients Happen (GROWTH)

    2018-01-01

    Advanced LIGO observed gravitational waves (GWs) from a binary black hole merger in its first observing run (O1) in September 2015. It is anticipated that LIGO and Virgo will soon detect the first binary neutron star mergers. The most promising electromagnetic counterparts to such events are kilonovae: fast, faint transients powered by the radioactive decay of the r-process ejecta. Joint gravitational-wave and electromagnetic observations of such transients hold the key to many longstanding problems, from the nature of short GRBs to the cosmic production sites of the r-process elements to "standard siren" cosmology. Due to the large LIGO/Virgo error regions of 100 deg2, synoptic survey telescopes have dominated the search for LIGO counterparts. Due to the paucity of infrared instruments with multi-deg2 fields of view, infrared observations have been lacking. Near-infrared emission should not only be a more robust signature of kilonovae than optical emission (independent of viewing angle), but should also be several magnitudes brighter and be detectable for much longer, weeks after merger rather than days. In Advanced LIGO's second observing run, we used the FLAMINGOS-2 instrument on Gemini-South to hunt for the near-infrared emission from GW sources by targeted imaging of the most massive galaxies in the LIGO/Virgo localization volumes. We present the results of this campaign, rates, and interpretation of our near-infrared imaging and spectroscopy. We show that leveraging large-scale structure and targeted imaging of the most massive ~10 galaxies in a LIGO/Virgo localization volume may be a surprisingly effective strategy to find the electromagnetic counterpart.

  15. KaBOB: ontology-based semantic integration of biomedical databases.

    Science.gov (United States)

    Livingston, Kevin M; Bada, Michael; Baumgartner, William A; Hunter, Lawrence E

    2015-04-23

    The ability to query many independent biological databases using a common ontology-based semantic model would facilitate deeper integration and more effective utilization of these diverse and rapidly growing resources. Despite ongoing work moving toward shared data formats and linked identifiers, significant problems persist in establishing shared identity and shared meaning across heterogeneous biomedical data sources. We present five processes for semantic data integration that, when applied collectively, solve seven key problems. These processes include making explicit the differences between biomedical concepts and database records, aggregating sets of identifiers denoting the same biomedical concepts across data sources, and using declaratively represented forward-chaining rules to take information that is variably represented in source databases and integrate it into a consistent biomedical representation. We demonstrate these processes and solutions by presenting KaBOB (the Knowledge Base Of Biomedicine), a knowledge base of semantically integrated data from 18 prominent biomedical databases using common representations grounded in Open Biomedical Ontologies. An instance of KaBOB with data about humans and seven major model organisms can be built using on the order of 500 million RDF triples. All source code for building KaBOB is available under an open-source license. KaBOB is an integrated knowledge base of biomedical data representationally based in prominent, actively maintained Open Biomedical Ontologies, thus enabling queries of the underlying data in terms of biomedical concepts (e.g., genes and gene products, interactions and processes) rather than features of source-specific data schemas or file formats.
KaBOB resolves many of the issues that routinely plague biomedical researchers intending to work with data from multiple data sources and provides a platform for ongoing data integration and development and for
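
    The identifier-aggregation step described above, pooling identifiers that denote the same biomedical concept across sources, amounts to computing connected components over pairwise cross-references; a minimal union-find sketch with hypothetical identifiers:

```python
class IdentifierSets:
    """Union-find over cross-source identifiers: two identifiers linked
    directly or transitively are taken to denote the same concept."""
    def __init__(self):
        self.parent = {}
    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x
    def link(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb
    def same_concept(self, a, b):
        return self.find(a) == self.find(b)

# Hypothetical cross-references asserted by three sources:
ids = IdentifierSets()
ids.link("srcA:p53", "srcB:TP53_HUMAN")
ids.link("srcB:TP53_HUMAN", "srcC:0001234")
```

    In KaBOB the analogous aggregation is expressed over RDF assertions, but the transitive-closure behaviour is the same: linking A-B and B-C makes all three identifiers resolve to one concept.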

  16. Cooperation for Common Use of SEE Astronomical Database as a Regional Virtual Observatory in Different Scientific Projects

    Science.gov (United States)

    Pinigin, Gennady; Protsyuk, Yuri; Shulga, Alexander

    The activity of scientific collaborative and co-operative research among South-Eastern European (SEE) observatories has expanded recently, and the creation of a common database serving as a regional virtual observatory is very desirable. The creation of an astronomical information resource with a capability of interactive access to databases and telescopes, built on the general astronomical database of the SEE countries, is presented. This resource may be connected with the European network. A short description of the NAO database is presented. The total amount of NAO information is about 90 GB; that obtained from other sources is about 15 GB. The mean daily volume of new astronomical information produced with the NAO CCD instruments ranges from 300 MB up to 2 GB, depending on the purposes and conditions of observations. The majority of observational data are stored in FITS format. The possibility of using the VOTable format for displaying these data on the Internet is being studied. Activities on the development and further refinement of storage, exchange, and data-processing standards are under way.

  17. Infrared observations of extragalactic sources

    International Nuclear Information System (INIS)

    Kleinmann, D.E.

    1977-01-01

    The available balloon-borne and airborne infrared data on extragalactic sources, in particular M 82, NGC 1068, and NGC 253, are reviewed and discussed in the context of the extensive ground-based work. The data are examined for the clues they provide on the nature of the ultimate source of the energy radiated and on the mechanism(s) by which it is radiated. Since the discovery of unexpectedly powerful infrared radiation from extragalactic objects - a discovery now about 10 years old - the outstanding problems in this field have been to determine (1) the mechanism by which prodigious amounts of energy are released in the infrared, and (2) the nature of the underlying energy source. (Auth.)

  18. Developments in diffraction databases

    International Nuclear Information System (INIS)

    Jenkins, R.

    1999-01-01

    Full text: There are a number of databases available to the diffraction community. Two of the more important of these are the Powder Diffraction File (PDF) maintained by the International Centre for Diffraction Data (ICDD), and the Inorganic Crystal Structure Database (ICSD) maintained by the Fachinformationszentrum Karlsruhe (FIZ). In application, the PDF has been an indispensable tool for phase identification and identification of unknowns. The ICSD database has extensive and explicit reference to the structures of compounds: atomic coordinates, space group, and even thermal-vibration parameters. A similar database, but for organic compounds, is maintained by the Cambridge Crystallographic Data Centre. These databases are often used as independent sources of information. However, little thought has been given to how to exploit the combined properties of structural database tools. A recently completed agreement between the ICDD and FIZ, plus the ICDD and Cambridge, provides a first step in the complementary use of the PDF and the ICSD databases. The focus of this paper (as indicated below) is to examine ways of exploiting the combined properties of both databases. In 1996, there were approximately 76,000 entries in the PDF and approximately 43,000 entries in the ICSD database. The ICSD database has now been used to calculate entries in the PDF. To derive d-spacing and peak-intensity data requires the synthesis of full diffraction patterns; i.e., we use the structural data in the ICSD database and then add instrumental resolution information. The combined data from the PDF and ICSD can be used effectively in many ways. For example, we can calculate PDF data for an ideally random crystal distribution and also in the absence of preferred orientation. Again, we can use systematic studies of intermediate members in solid-solution series to help produce reliable quantitative phase analyses. In some cases, we can study how solid-solution properties vary with composition and

  19. Cancer, immunodeficiency and antiretroviral treatment: results from the Australian HIV Observational Database (AHOD).

    Science.gov (United States)

    Petoumenos, K; van Leuwen, M T; Vajdic, C M; Woolley, I; Chuah, J; Templeton, D J; Grulich, A E; Law, M G

    2013-02-01

    The objective of the study was to conduct a within-cohort assessment of risk factors for incident AIDS-defining cancers (ADCs) and non-ADCs (NADCs) within the Australian HIV Observational Database (AHOD). A total of 2181 AHOD registrants were linked to the National AIDS Registry/National HIV Database (NAR/NHD) and the Australian Cancer Registry to identify those with a notified cancer diagnosis. Included in the current analyses were cancers diagnosed after HIV infection. Risk factors for cancers were also assessed using logistic regression methods. One hundred and thirty-nine cancer cases were diagnosed after HIV infection among 129 patients. More than half the diagnoses (n = 68; 60%) were ADCs, of which 69% were Kaposi's sarcoma and 31% non-Hodgkin's lymphoma. Among the NADCs, the most common cancers were melanoma (n = 10), lung cancer (n = 6), Hodgkin's lymphoma (n = 5) and anal cancer (n = 5). Over a total of 21021 person-years (PY) of follow-up since HIV diagnosis, the overall crude cancer incidence rate for any cancer was 5.09/1000 PY. The overall rate of cancers decreased from 15.9/1000 PY [95% confidence interval (CI) 9.25-25.40/1000 PY] for CD4 counts 350 cells/μL. Lower CD4 cell count and prior AIDS diagnoses were significant predictors for both ADCs and NADCs. ADCs remain the predominant cancers in this population, although NADC rates have increased in the more recent time period. Immune deficiency is a risk factor for both ADCs and NADCs. © 2012 British HIV Association.
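
    The crude incidence rates quoted above (cases per 1000 person-years) follow the standard definition, with confidence intervals conventionally derived by treating the case count as Poisson; a sketch with illustrative numbers, not the AHOD data:

```python
from math import exp, sqrt

def crude_rate_per_1000_py(cases, person_years):
    """Crude incidence rate in events per 1000 person-years."""
    return 1000.0 * cases / person_years

def rate_ci_95(cases, person_years):
    """Approximate 95% CI, treating the case count as Poisson and
    working on the log-rate scale (a common large-sample approximation)."""
    rate = crude_rate_per_1000_py(cases, person_years)
    half_width = 1.96 / sqrt(cases)
    return rate * exp(-half_width), rate * exp(half_width)

# Illustrative numbers only: 50 incident cases over 10,000 person-years.
rate = crude_rate_per_1000_py(50, 10000.0)
lo95, hi95 = rate_ci_95(50, 10000.0)
```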

  20. Database Dictionary for Ethiopian National Ground-Water Database (ENGDA) Data Fields

    Science.gov (United States)

    Kuniansky, Eve L.; Litke, David W.; Tucci, Patrick

    2007-01-01

    Introduction This document describes the data fields that are used for both field forms and the Ethiopian National Ground-water Database (ENGDA) tables associated with information stored about production wells, springs, test holes, test wells, and water-level or water-quality observation wells. Several different words are used in this database dictionary and in the ENGDA database to describe a narrow shaft constructed in the ground. The most general term is borehole, which is applicable to any type of hole. A well is a borehole specifically constructed to extract water from the ground; however, for this data dictionary and for the ENGDA database, the words well and borehole are used interchangeably. A production well is defined as any well used for water supply and includes hand-dug wells, small-diameter bored wells equipped with hand pumps, or large-diameter bored wells equipped with large-capacity motorized pumps. Test holes are borings made to collect information about the subsurface with continuous core or non-continuous core and/or where geophysical logs are collected. Test holes are not converted into wells. A test well is a well constructed for hydraulic testing of an aquifer in order to plan a larger ground-water production system. A water-level or water-quality observation well is a well that is used to collect information about an aquifer and not used for water supply. A spring is any naturally flowing, local, ground-water discharge site. The database dictionary is designed to help define all fields on both the field data collection forms (provided in attachment 2 of this report) and the ENGDA software screen entry forms (described in Litke, 2007). The data entered into each screen entry field are stored in relational database tables within the computer database. The organization of the database dictionary is based on field data collection and the field forms, because these are what the majority of people will use. After each field, however, the
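
    The site terminology above maps naturally onto a small relational layout; the sketch below (using Python's built-in sqlite3) relies on hypothetical table and field names invented for illustration, not ENGDA's actual schema.

```python
import sqlite3

# Hypothetical relational sketch of the terminology above: one "site" row
# per ground-water site, typed by how the document classifies it.
# Springs are ground-water sites but not boreholes, so they are a type too.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE site (
    site_id   TEXT PRIMARY KEY,
    site_type TEXT NOT NULL CHECK (site_type IN
        ('production_well', 'test_hole', 'test_well',
         'observation_well', 'spring')),
    depth_m   REAL              -- NULL for springs
);
CREATE TABLE water_level (
    site_id     TEXT NOT NULL REFERENCES site(site_id),
    measured_on TEXT NOT NULL,  -- ISO date string
    level_m     REAL
);
""")
conn.execute("INSERT INTO site VALUES ('BH-001', 'observation_well', 60.0)")
conn.execute("INSERT INTO water_level VALUES ('BH-001', '2007-01-15', 12.3)")
row = conn.execute(
    "SELECT s.site_type, w.level_m FROM site s "
    "JOIN water_level w ON w.site_id = s.site_id").fetchone()
```

    Separating the site description from repeated measurements mirrors the report's split between field forms (one per site) and screen entry forms for observations.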

  1. ARTI Refrigerant Database

    Energy Technology Data Exchange (ETDEWEB)

    Cain, J.M. (Calm (James M.), Great Falls, VA (United States))

    1993-04-30

    The Refrigerant Database consolidates and facilitates access to information to assist industry in developing equipment using alternative refrigerants. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern. The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air-conditioning and refrigeration equipment. The complete documents are not included. The database identifies sources of specific information on R-32, R-123, R-124, R-125, R-134, R-134a, R-141b, R-142b, R-143a, R-152a, R-245ca, R-290 (propane), R-717 (ammonia), ethers, and others as well as azeotropic and zeotropic blends of these fluids. It addresses lubricants including alkylbenzene, polyalkylene glycol, ester, and other synthetics as well as mineral oils. It also references documents addressing compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits. Incomplete citations or abstracts are provided for some documents to accelerate availability of the information and will be completed or replaced in future updates.

  2. ARTI refrigerant database

    Energy Technology Data Exchange (ETDEWEB)

    Calm, J.M.

    1997-02-01

    The Refrigerant Database is an information system on alternative refrigerants, associated lubricants, and their use in air conditioning and refrigeration. It consolidates and facilitates access to property, compatibility, environmental, safety, application and other information. It provides corresponding information on older refrigerants to assist manufacturers and those using alternative refrigerants to make comparisons and determine differences. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern. The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air-conditioning and refrigeration equipment. The complete documents are not included, though some may be added at a later date. The database identifies sources of specific information on various refrigerants. It addresses lubricants including alkylbenzene, polyalkylene glycol, polyolester, and other synthetics as well as mineral oils. It also references documents addressing compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits. Incomplete citations or abstracts are provided for some documents. They are included to accelerate availability of the information and will be completed or replaced in future updates.

  3. Human health risk assessment database, "the NHSRC toxicity value database": supporting the risk assessment process at US EPA's National Homeland Security Research Center.

    Science.gov (United States)

    Moudgal, Chandrika J; Garrahan, Kevin; Brady-Roberts, Eletha; Gavrelis, Naida; Arbogast, Michelle; Dun, Sarah

    2008-11-15

    The toxicity value database of the United States Environmental Protection Agency's (EPA) National Homeland Security Research Center has been in development since 2004. The toxicity value database includes a compilation of agent property, toxicity, dose-response, and health effects data for 96 agents: 84 chemical and radiological agents and 12 biotoxins. The database is populated with multiple toxicity benchmark values and agent property information from secondary sources, with web links to the secondary sources, where available. A selected set of primary literature citations and associated dose-response data are also included. The toxicity value database offers a powerful means to quickly and efficiently gather pertinent toxicity and dose-response data for a number of agents that are of concern to the nation's security. This database, in conjunction with other tools, will play an important role in understanding human health risks, and will provide a means for risk assessors and managers to make quick and informed decisions on the potential health risks and determine appropriate responses (e.g., cleanup) to agent release. A final, stand-alone MS Access working version of the toxicity value database was completed in November 2007.

  4. Human health risk assessment database, 'the NHSRC toxicity value database': Supporting the risk assessment process at US EPA's National Homeland Security Research Center

    International Nuclear Information System (INIS)

    Moudgal, Chandrika J.; Garrahan, Kevin; Brady-Roberts, Eletha; Gavrelis, Naida; Arbogast, Michelle; Dun, Sarah

    2008-01-01

    The toxicity value database of the United States Environmental Protection Agency's (EPA) National Homeland Security Research Center has been in development since 2004. The toxicity value database includes a compilation of agent property, toxicity, dose-response, and health effects data for 96 agents: 84 chemical and radiological agents and 12 biotoxins. The database is populated with multiple toxicity benchmark values and agent property information from secondary sources, with web links to the secondary sources, where available. A selected set of primary literature citations and associated dose-response data are also included. The toxicity value database offers a powerful means to quickly and efficiently gather pertinent toxicity and dose-response data for a number of agents that are of concern to the nation's security. This database, in conjunction with other tools, will play an important role in understanding human health risks, and will provide a means for risk assessors and managers to make quick and informed decisions on the potential health risks and determine appropriate responses (e.g., cleanup) to agent release. A final, stand-alone MS Access working version of the toxicity value database was completed in November 2007.

  5. Database Vs Data Warehouse

    Directory of Open Access Journals (Sweden)

    2007-01-01

    Full Text Available Data warehouse technology includes a set of concepts and methods that offer users useful information for decision making. The necessity to build a data warehouse arises from the necessity to improve the quality of information in the organization. The data coming from different sources, having a variety of forms - both structured and unstructured - are filtered according to business rules and integrated into a single large data collection. Using informatics solutions, managers have understood that the data stored in operational systems - including databases - are an informational gold mine that must be exploited. Data warehouses have been developed to answer the increasing demands for complex analysis, which could not be properly achieved with operational databases. The present paper emphasizes some of the criteria that application developers can use in order to choose between a database solution and a data warehouse one.

  6. Linking source region and ocean wave parameters with the observed primary microseismic noise

    Science.gov (United States)

    Juretzek, C.; Hadziioannou, C.

    2017-12-01

    In previous studies, the contribution of Love waves to the primary microseismic noise field was found to be comparable to that of Rayleigh waves. However, so far only a few studies have analysed both wave types present in this microseismic noise band, which is known to be generated in shallow water, and the theoretical understanding has mainly evolved for Rayleigh waves only. Here, we study the relevance of different source-region parameters to the observed primary microseismic noise levels of Love and Rayleigh waves simultaneously. By means of beamforming and correlation of seismic noise amplitudes with ocean wave heights in the period band between 12 and 15 s, we analysed how source areas of both wave types compare with each other around Europe. The generation effectivity in different source regions was compared to ocean wave heights, peak ocean gravity-wave propagation direction, and bathymetry. Observed Love wave noise amplitudes correlate with near-coastal ocean wave parameters about as well as Rayleigh wave amplitudes do. Some coastal regions serve as especially effective sources for one or the other wave type. These coincide not only with locations of high wave heights but also with complex bathymetry. Further, Rayleigh and Love wave noise amplitudes seem to depend equally on the local ocean wave heights, which is an indication of a coupled variation with swell height during the generation of both wave types. However, the wave-type ratio varies directionally. This observation likely hints at a spatially varying importance of different source mechanisms or structural influences. Further, the wave-type ratio is modulated depending on peak ocean wave propagation directions, which could indicate a variation of different source-mechanism strengths but also hints at an imprint of an effective source radiation pattern. This emphasizes that the inclusion of both wave types may provide more constraints for the understanding of the acting generation mechanisms.
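
    The correlation of noise amplitudes with ocean wave heights rests on the standard Pearson coefficient; a minimal self-contained sketch (the sample values are arbitrary placeholders):

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

    In the study's setting, xs would be seismic noise amplitudes in the 12-15 s band and ys the ocean wave heights in a candidate source region; a coefficient near +1 marks an effective source region for that wave type.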

  7. Development of an updated phytoestrogen database for use with the SWAN food frequency questionnaire: intakes and food sources in a community-based, multiethnic cohort study.

    Science.gov (United States)

    Huang, Mei-Hua; Norris, Jean; Han, Weijuan; Block, Torin; Gold, Ellen; Crawford, Sybil; Greendale, Gail A

    2012-01-01

    Phytoestrogens, heterocyclic phenols found in plants, may benefit several health outcomes. However, epidemiologic studies of the health effects of dietary phytoestrogens have yielded mixed results, in part due to challenges inherent in estimating dietary intakes. The goal of this study was to improve the estimates of dietary phytoestrogen consumption using a modified Block Food Frequency Questionnaire (FFQ), a 137-item FFQ created for the Study of Women's Health Across the Nation (SWAN) in 1994. To expand the database of sources from which phytonutrient intakes were computed, we conducted a comprehensive PubMed/Medline search covering January 1994 through September 2008. The expanded database included 4 isoflavones, coumestrol, and 4 lignans. The new database estimated isoflavone content of 105 food items (76.6%) vs. 14 (10.2%) in the 1994 version and computed coumestrol content of 52 food items (38.0%), compared to 1 (0.7%) in the original version. Newly added were lignans; values for 104 FFQ food items (75.9%) were calculated. In addition, we report here the phytonutrient intakes for each racial and language group in the SWAN sample and present major food sources from which the phytonutrients came. This enhanced ascertainment of phytoestrogens will permit improved studies of their health effects.
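
    Estimating intake from an FFQ combined with a content database reduces to a frequency x portion x content sum over foods; a sketch with hypothetical foods and content values, not the SWAN database entries:

```python
def daily_intake_ug(responses, content_ug_per_g):
    """Daily phytonutrient intake in micrograms.

    responses: {food: (servings_per_day, grams_per_serving)} from an FFQ
    content_ug_per_g: {food: phytonutrient content} from the database
    Foods absent from the content database contribute zero, which is why
    expanding database coverage (14 -> 105 foods for isoflavones here)
    changes the intake estimates so much.
    """
    return sum(servings * grams * content_ug_per_g.get(food, 0.0)
               for food, (servings, grams) in responses.items())

# Hypothetical foods and content values, for illustration only:
intake = daily_intake_ug(
    {"tofu": (0.5, 100.0), "soymilk": (1.0, 240.0)},
    {"tofu": 250.0, "soymilk": 30.0})
```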

  8. Global Mammal Parasite Database version 2.0.

    Science.gov (United States)

    Stephens, Patrick R; Pappalardo, Paula; Huang, Shan; Byers, James E; Farrell, Maxwell J; Gehman, Alyssa; Ghai, Ria R; Haas, Sarah E; Han, Barbara; Park, Andrew W; Schmidt, John P; Altizer, Sonia; Ezenwa, Vanessa O; Nunn, Charles L

    2017-05-01

    Illuminating the ecological and evolutionary dynamics of parasites is one of the most pressing issues facing modern science, and is critical for basic science, the global economy, and human health. Extremely important to this effort are data on the disease-causing organisms of wild animal hosts (including viruses, bacteria, protozoa, helminths, arthropods, and fungi). Here we present an updated version of the Global Mammal Parasite Database, a database of the parasites of wild ungulates (artiodactyls and perissodactyls), carnivores, and primates, and make it available for download as complete flat files. The updated database has more than 24,000 entries in the main data file alone, representing data from over 2700 literature sources. We include data on sampling method and sample sizes when reported, as well as both "reported" and "corrected" (i.e., standardized) binomials for each host and parasite species. Also included are current higher taxonomies and data on transmission modes used by the majority of species of parasites in the database. In the associated metadata we describe the methods used to identify sources and extract data from the primary literature, how entries were checked for errors, methods used to georeference entries, and how host and parasite taxonomies were standardized across the database. We also provide definitions of the data fields in each of the four files that users can download. © 2017 by the Ecological Society of America.

  9. Local Group dSph radio survey with ATCA (I): observations and background sources

    Science.gov (United States)

    Regis, Marco; Richter, Laura; Colafrancesco, Sergio; Massardi, Marcella; de Blok, W. J. G.; Profumo, Stefano; Orford, Nicola

    2015-04-01

    Dwarf spheroidal (dSph) galaxies are key objects in near-field cosmology, especially in connection to the study of galaxy formation and evolution at small scales. In addition, dSphs are optimal targets to investigate the nature of dark matter. However, while we begin to have deep optical photometric observations of the stellar population in these objects, little is known so far about their diffuse emission at any observing frequency, and hence on thermal and non-thermal plasma possibly residing within dSphs. In this paper, we present deep radio observations of six local dSphs performed with the Australia Telescope Compact Array (ATCA) at 16 cm wavelength. We mosaicked a region of radius of about 1 deg around three `classical' dSphs, Carina, Fornax, and Sculptor, and of about half a degree around three `ultrafaint' dSphs, BootesII, Segue2, and Hercules. The rms noise level is below 0.05 mJy for all the maps. The restoring beams' full widths at half-maximum ranged from 4.2 arcsec × 2.5 arcsec to 30.0 arcsec × 2.1 arcsec in the most elongated case. A catalogue including the 1392 sources detected in the six dSph fields is reported. The main properties of the background sources are discussed, with positions and fluxes of the brightest objects compared with the FIRST, NVSS, and SUMSS observations of the same fields. The observed population of radio emitters in these fields is dominated by synchrotron sources. We compute the associated source number counts at 2 GHz down to fluxes of 0.25 mJy, which prove to be in agreement with AGN count models.
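
    Source number counts of the kind computed here can be sketched as cumulative counts N(>S) over the catalogued fluxes, normalized by survey area; the flux values below are illustrative placeholders, not the ATCA catalogue:

```python
def cumulative_counts(fluxes_mjy, thresholds_mjy, area_deg2=1.0):
    """N(>S): number of sources per square degree at or above each
    flux threshold."""
    return [sum(1 for f in fluxes_mjy if f >= t) / area_deg2
            for t in thresholds_mjy]

# Four hypothetical catalogued fluxes, counted above two thresholds.
counts = cumulative_counts([0.3, 0.5, 1.0, 2.0], [0.25, 1.0])
```

    Comparing such counts against AGN count models, as done in the paper, then amounts to evaluating the model's predicted N(>S) at the same thresholds.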

  10. Database on Aims and Visions in the COINCO Corridor

    DEFF Research Database (Denmark)

    2005-01-01

    This database contains aims and visions regarding overall regional development as well as more specific aims and visions related to transport and infrastructure in the Corridor Oslo-Göteborg-Copenhagen-Berlin. The sources used for this database are the most essential planning documents from Denmark...

  11. Observations of X-ray sources in the Large Magellanic cloud by the OSO-7 satellite

    International Nuclear Information System (INIS)

    Markert, T.H.; Clark, G.W.

    1975-01-01

    Observations of the Large Magellanic Cloud with the 1-40 keV X-ray detectors on the OSO-7 satellite are reported. Results include the discovery of a previously unreported source LMC X-5, measurements of the spectral characteristics of four sources, and observations of their variability on time scales of months

  12. Report on Approaches to Database Translation. Final Report.

    Science.gov (United States)

    Gallagher, Leonard; Salazar, Sandra

    This report describes approaches to database translation (i.e., transferring data and data definitions from a source, either a database management system (DBMS) or a batch file, to a target DBMS), and recommends a method for representing the data structures of newly-proposed network and relational data models in a form suitable for database…

  13. Implementing a Dynamic Database-Driven Course Using LAMP

    Science.gov (United States)

    Laverty, Joseph Packy; Wood, David; Turchek, John

    2011-01-01

    This paper documents the formulation of a database driven open source architecture web development course. The design of a web-based curriculum faces many challenges: a) relative emphasis of client and server-side technologies, b) choice of a server-side language, and c) the cost and efficient delivery of a dynamic web development, database-driven…

  14. Database Description - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Trypanosomes Database: general information, database description, and entry lists for the database, maintained by the National Institute of Genetics, Research Organization of Information and Systems (Yata 1111, Mishima, Shizuoka 411-8540, Japan). Covered taxa: Trypanosoma (Taxonomy ID: 5690) and Homo sapiens (Taxonomy ID: 9606). External links include PDB (Protein Data Bank), the KEGG PATHWAY Database, and DrugPort. An entry list and query search are available as web services.

  15. Multiband Diagnostics of Unidentified 1FGL Sources with Suzaku and Swift X-Ray Observations

    Science.gov (United States)

    Takeuchi, Y.; Kataoka, J.; Maeda, K.; Takahashi, Y.; Nakamori, T.; Tahara, M.

    2013-10-01

    We have analyzed all the archival X-ray data of 134 unidentified (unID) gamma-ray sources listed in the first Fermi/LAT (1FGL) catalog and subsequently followed up by the Swift/XRT. We constructed the spectral energy distributions (SEDs) from radio to gamma-rays for each X-ray source detected, and tried to pick up unique objects that display anomalous spectral signatures. In these analyses, we target all the 1FGL unID sources, using updated data from the second Fermi/LAT (2FGL) catalog on the Large Area Telescope (LAT) position and spectra. We found several potentially interesting objects, particularly three sources, 1FGL J0022.2-1850, 1FGL J0038.0+1236, and 1FGL J0157.0-5259, which were then more deeply observed with Suzaku as a part of an AO-7 program in 2012. We successfully detected an X-ray counterpart for each source whose X-ray spectra were well fitted by a single power-law function. The positional coincidence with a bright radio counterpart (currently identified as an active galactic nucleus, AGN) in the 2FGL error circles suggests these sources are definitely the X-ray emission from the same AGN, but their SEDs show a wide variety of behavior. In particular, the SED of 1FGL J0038.0+1236 is not easily explained by conventional emission models of blazars. The source 1FGL J0022.2-1850 may be in a transition state between a low-frequency peaked and a high-frequency peaked BL Lac object, and 1FGL J0157.0-5259 could be a rare kind of extreme blazar. We discuss the possible nature of these three sources observed with Suzaku, together with the X-ray identification results and SEDs of all 134 sources observed with the Swift/XRT.
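
    A single power-law fit of the kind applied to the Suzaku X-ray spectra can be sketched as linear regression in log-log space, since F(E) = K E^(-Gamma) becomes a straight line there; the data below are synthetic, not the observed spectra:

```python
from math import exp, log

def fit_power_law(energies, fluxes):
    """Least-squares fit of F(E) = K * E**(-gamma) in log-log space;
    returns (K, gamma). Real spectral fits also fold in the instrument
    response and absorption, omitted in this sketch."""
    xs = [log(e) for e in energies]
    ys = [log(f) for f in fluxes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return exp(my - slope * mx), -slope

# Synthetic check: data drawn exactly from K = 2.0, gamma = 1.7.
energies = [1.0, 2.0, 4.0, 8.0]
fluxes = [2.0 * e ** -1.7 for e in energies]
K, gamma = fit_power_law(energies, fluxes)
```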

  16. ARTI Refrigerant Database

    Energy Technology Data Exchange (ETDEWEB)

    Calm, J.M.

    1992-11-09

    The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air-conditioning and refrigeration equipment. The database identifies sources of specific information on R-32, R-123, R-124, R-125, R-134, R-134a, R-141b, R-142b, R-143a, R-152a, R-245ca, R-290 (propane), R-717 (ammonia), ethers, and others as well as azeotropic and zeotropic blends of these fluids. It addresses lubricants including alkylbenzene, polyalkylene glycol, ester, and other synthetics as well as mineral oils. It also references documents on compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits. A computerized version is available that includes retrieval software.

  17. The Net Enabled Waste Management Database as an international source of radioactive waste management information

    International Nuclear Information System (INIS)

    Csullog, G.W.; Friedrich, V.; Miaw, S.T.W.; Tonkay, D.; Petoe, A.

    2002-01-01

    The IAEA's Net Enabled Waste Management Database (NEWMDB) is an integral part of the IAEA's policies and strategy related to the collection and dissemination of information, both internal to the IAEA in support of its activities and external to the IAEA (publicly available). The paper highlights the NEWMDB's role in relation to the routine reporting of status and trends in radioactive waste management, in assessing the development and implementation of national systems for radioactive waste management, in support of a newly developed indicator of sustainable development for radioactive waste management, in support of reporting requirements for the Joint Convention on the Safety of Spent Fuel Management and on the Safety of Radioactive Waste Management, in support of IAEA activities related to the harmonization of waste management information at the national and international levels and in relation to the management of spent/disused sealed radioactive sources. (author)

  18. Tropospheric Ozone Assessment Report: Database and Metrics Data of Global Surface Ozone Observations

    Directory of Open Access Journals (Sweden)

    Martin G. Schultz

    2017-10-01

    Full Text Available In support of the first Tropospheric Ozone Assessment Report (TOAR), a relational database of global surface ozone observations has been developed and populated with hourly measurement data and enhanced metadata. A comprehensive suite of ozone data products, including standard statistics, health and vegetation impact metrics, and trend information, is made available through a common data portal and a web interface. These data form the basis of the TOAR analyses focusing on human health, vegetation, and climate relevant ozone issues, which are part of this special feature. Cooperation among many data centers and individual researchers worldwide made it possible to build the world's largest collection of 'in-situ' hourly surface ozone data, covering the period from 1970 to 2015. By combining the data from almost 10,000 measurement sites around the world with global metadata information, new analyses of surface ozone have become possible, such as the first globally consistent characterisations of measurement sites as either urban or rural/remote. Exploitation of these global metadata allows for new insights into the global distribution, and seasonal and long-term changes, of tropospheric ozone, and enables TOAR to perform the first globally consistent analysis of present-day ozone concentrations and recent ozone changes with relevance to health, agriculture, and climate. Considerable effort was made to harmonize and synthesize data formats and metadata information from various networks and individual data submissions. Extensive quality control was applied to identify questionable and erroneous data, including changes in apparent instrument offsets or calibrations. Such data were excluded from TOAR data products. Limitations of 'a posteriori' data quality assurance are discussed. As a result of the work presented here, global coverage of surface ozone data for scientific analysis has been significantly extended. Yet, large gaps remain in the surface
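Among the TOAR data products are health-relevant exposure metrics computed from the hourly records, such as the daily maximum 8-hour average (MDA8) ozone. A minimal sketch of that computation (simplified: no data-capture thresholds or timezone handling, and 8-hour windows are confined to a single calendar day, so this is illustrative rather than the exact TOAR algorithm):

```python
from statistics import mean

def mda8(hourly_ozone):
    """Daily maximum 8-hour running-mean ozone (MDA8) from 24 hourly values.

    A simplified sketch of one common TOAR-style health metric; real TOAR
    products apply data-capture thresholds and timezone handling omitted here.
    """
    if len(hourly_ozone) != 24:
        raise ValueError("expected 24 hourly values")
    # 8-hour windows starting at hours 0..16 (windows crossing midnight ignored)
    windows = [hourly_ozone[h:h + 8] for h in range(17)]
    return max(mean(w) for w in windows)

# Example: a day with ozone peaking in the mid-afternoon (values in ppb)
day = [20] * 8 + [30, 40, 50, 60, 70, 70, 60, 50] + [40] * 8
print(round(mda8(day), 2))
```

The maximum window here spans the afternoon peak; regulatory definitions differ in detail (e.g. which start hours count), which is why this is a sketch only.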

  19. CHANDRA OBSERVATION OF THE TeV SOURCE HESS J1834-087

    International Nuclear Information System (INIS)

    Misanovic, Zdenka; Kargaltsev, Oleg; Pavlov, George G.

    2011-01-01

    Chandra ACIS observed the field of the extended TeV source HESS J1834-087 for 47 ks. A previous XMM-Newton EPIC observation of the same field revealed a point-like source (XMMU J183435.3-084443) and an offset region of faint extended emission. In the low-resolution, binned EPIC images the two appear to be connected. However, the high-resolution Chandra ACIS images do not support the alleged connection. In these images, XMMU J183435.3-084443 is resolved into a point source, CXOU J183434.9-084443 (L_0.5-8 keV ≈ 2.3 × 10^33 erg s^-1 for a distance of 4 kpc; photon index Γ ≈ 1.1), and a compact nebula (L_0.5-8 keV ≈ 4.1 × 10^33 erg s^-1, Γ ≈ 2.7). The nature of the nebula is uncertain. We discuss a dust scattering halo and a pulsar-wind nebula as possible interpretations. Based on our analysis of the X-ray data, we re-evaluate the previously suggested interpretations of HESS J1834-087 and discuss a possible connection to the Fermi Large Area Telescope source 1FGL J1834.3-0842c. We also obtained an upper limit of 3 × 10^-14 erg cm^-2 s^-1 on the unabsorbed flux of SGR J1833-0832 (in quiescence), which happened to be in the ACIS field of view.
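The luminosities quoted above assume a distance of 4 kpc and the isotropic relation L = 4πd²F. A small sketch of that conversion (the round-trip flux computed below is for illustration, not a value quoted in the paper):

```python
import math

KPC_CM = 3.0857e21  # centimetres per kiloparsec

def luminosity(flux_cgs, distance_kpc):
    """Isotropic luminosity L = 4*pi*d^2 * F (erg/s) from a flux in erg/cm^2/s."""
    d = distance_kpc * KPC_CM
    return 4.0 * math.pi * d**2 * flux_cgs

# Flux implied by the CXOU J183434.9-084443 luminosity at the assumed 4 kpc
f = 2.3e33 / (4.0 * math.pi * (4 * KPC_CM)**2)
print(f"{f:.1e} erg/cm^2/s")
```

The same relation lets one turn the quoted flux upper limit for SGR J1833-0832 into a luminosity limit once a distance is assumed.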

  20. An Inverse Source Problem for a One-dimensional Wave Equation: An Observer-Based Approach

    KAUST Repository

    Asiri, Sharefa M.

    2013-05-25

    Observers are well known in the theory of dynamical systems. They are used to estimate the states of a system from some measurements. Recently, however, observers have also been developed to estimate unknowns in systems governed by partial differential equations. Our aim is to design an observer to solve an inverse source problem for a one-dimensional wave equation. First, the problem is discretized in both space and time; then an adaptive observer based on partial field measurements (i.e., measurements taken from the solution of the wave equation) is applied to estimate both the states and the source. We demonstrate the effectiveness of this observer in both noise-free and noisy cases. In each case, numerical simulations are provided to illustrate the effectiveness of the approach. Finally, we compare the performance of the observer approach with the Tikhonov regularization approach.
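The Tikhonov baseline the abstract compares against solves, for a discretized linear inverse problem, min ||Ax − b||² + λ||x||² via the normal equations (AᵀA + λI)x = Aᵀb. A self-contained toy sketch (the 2×2 nearly singular system here is illustrative, not the wave-equation discretisation from the thesis):

```python
def tikhonov_2x2(A, b, lam):
    """Solve min ||Ax - b||^2 + lam*||x||^2 for a 2-unknown system.

    Forms the normal equations (A^T A + lam*I) x = A^T b and solves the
    resulting 2x2 system by Cramer's rule.  A minimal sketch of the
    Tikhonov approach; real inverse source problems are much larger.
    """
    # A^T A + lam*I  (A is a list of rows)
    m = [[sum(r[i] * r[j] for r in A) + (lam if i == j else 0.0)
          for j in range(2)] for i in range(2)]
    rhs = [sum(r[i] * bi for r, bi in zip(A, b)) for i in range(2)]
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    x0 = (rhs[0] * m[1][1] - m[0][1] * rhs[1]) / det
    x1 = (m[0][0] * rhs[1] - rhs[0] * m[1][0]) / det
    return [x0, x1]

# Nearly singular system whose true solution is [1, 1]; a small lam
# stabilises the inversion without distorting the answer much.
A = [[1.0, 1.0], [1.0, 1.0001]]
b = [2.0, 2.0001]
x = tikhonov_2x2(A, b, lam=1e-6)
print(x)
```

The regularization parameter λ trades off data fit against solution norm; the observer approach avoids this explicit trade-off, which is one motivation for the comparison.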

  1. Database Description - SKIP Stemcell Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us SKIP Stemcell Database. Database Description — general information: Database name: SKIP Stemcell Database. Contact address: http://www.skip.med.keio.ac.jp/en/contact/. Database classification: Human Genes and Diseases; Stemcell. Organism Taxonomy Name: Homo sapiens, Taxonomy ID: 9606. Database maintenance site: Center for Medical Genetics, School of Medicine, Keio University. Web services: not available. Need for user registration: not available. About This Database...

  2. Emission & Generation Resource Integrated Database (eGRID)

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Emissions & Generation Resource Integrated Database (eGRID) is an integrated source of data on environmental characteristics of electric power generation....

  3. Database Description - Arabidopsis Phenome Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us Arabidopsis Phenome Database. Database Description — general information: Database name: Arabidopsis Phenome Database. Database maintenance site: RIKEN BioResource Center (Hiroshi Masuya). Database classification: Plant databases - Arabidopsis thaliana. Organism Taxonomy Name: Arabidopsis thaliana, Taxonomy ID: 3702. Database description: the Arabidopsis thaliana phenome ... their effective application. We developed the new Arabidopsis Phenome Database integrating two novel databases ... useful materials for their experimental research. The other, the "Database of Curated Plant Phenome", focusing...

  4. Database for waste glass composition and properties

    International Nuclear Information System (INIS)

    Peters, R.D.; Chapman, C.C.; Mendel, J.E.; Williams, C.G.

    1993-09-01

    A database of waste glass composition and properties, called PNL Waste Glass Database, has been developed. The source of data is published literature and files from projects funded by the US Department of Energy. The glass data have been organized into categories and corresponding data files have been prepared. These categories are glass chemical composition, thermal properties, leaching data, waste composition, glass radionuclide composition and crystallinity data. The data files are compatible with commercial database software. Glass compositions are linked to properties across the various files using a unique glass code. Programs have been written in database software language to permit searches and retrievals of data. The database provides easy access to the vast quantities of glass compositions and properties that have been studied. It will be a tool for researchers and others investigating vitrification and glass waste forms

  5. The Danish Bladder Cancer Database

    Directory of Open Access Journals (Sweden)

    Hansen E

    2016-10-01

    monitor treatment and mortality. In the future, DaBlaCa-data will be a valuable data source and expansive observational studies on BC will be available. Keywords: bladder cancer, cystectomy, neoadjuvant chemotherapy, curative-intended radiation therapy

  6. SSC lattice database and graphical interface

    International Nuclear Information System (INIS)

    Trahern, C.G.; Zhou, J.

    1991-11-01

    When completed, the Superconducting Super Collider will be the world's largest accelerator complex. In order to build this system on schedule, the use of database technologies will be essential. In this paper we discuss one of the database efforts underway at the SSC, the lattice database. The SSC lattice database provides a centralized source for the design of each major component of the accelerator complex. This includes the two collider rings, the High Energy Booster, Medium Energy Booster, Low Energy Booster, and the LINAC, as well as transfer and test beam lines. These designs have been created using a menagerie of programs such as SYNCH, DIMAD, MAD, TRANSPORT, MAGIC, TRACE3D, and TEAPOT. However, once a design has been completed, it is entered into a uniform database schema in the database system. In this paper we discuss the reasons for creating the lattice database and its implementation via the commercial database system SYBASE. Each lattice in the lattice database is composed of a set of tables whose data structure can describe any of the SSC accelerator lattices. In order to give the user community access to the databases, a programmatic interface known as dbsf (for database to several formats) has been written. Dbsf creates ascii input files appropriate to the above-mentioned accelerator design programs. In addition, it produces binary dataset output using the Self Describing Standard data discipline provided with the Integrated Scientific Tool Kit software tools. Finally, we discuss the graphical interfaces to the lattice database. The primary interface, known as OZ, is a simulation environment as well as a database browser.
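As an illustration of the idea of a uniform lattice schema queried programmatically, here is a hypothetical, much-simplified sketch using SQLite (the real system used Sybase and a far richer set of tables per lattice; the table, element names, and parameters below are invented):

```python
import sqlite3

# Hypothetical, much-simplified stand-in for a lattice table; the real SSC
# schema held many tables per lattice.  Names and values are illustrative.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE lattice_element (
    position INTEGER,   -- order of the element along the beamline
    name     TEXT,      -- element identifier
    keyword  TEXT,      -- e.g. DRIFT, QUADRUPOLE, SBEND
    length_m REAL)""")
con.executemany("INSERT INTO lattice_element VALUES (?, ?, ?, ?)", [
    (1, "QF1", "QUADRUPOLE", 1.2),
    (2, "D1",  "DRIFT",      5.0),
    (3, "QD1", "QUADRUPOLE", 1.2),
])
# A tool in the spirit of dbsf would walk such tables in beamline order
# to emit an input deck for MAD, DIMAD, etc.
total = con.execute("SELECT SUM(length_m) FROM lattice_element").fetchone()[0]
print(total)
```

The point of the single schema is exactly this: one query layer serves every design program's export format.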

  7. Data integration and knowledge discovery in biomedical databases. Reliable information from unreliable sources

    Directory of Open Access Journals (Sweden)

    A Mitnitski

    2003-01-01

    Full Text Available To better understand information about human health from databases we analyzed three datasets collected for different purposes in Canada: a biomedical database of older adults, a large population survey across all adult ages, and vital statistics. Redundancy in the variables was established, and this led us to derive a generalized (macroscopic) state variable, a fitness/frailty index that reflects both individual and group health status. Evaluation of the relationship between fitness/frailty and the mortality rate revealed that the latter could be expressed in terms of variables generally available from any cross-sectional database. In practical terms, this means that the risk of mortality might readily be assessed from standard biomedical appraisals collected for other purposes.
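Fitness/frailty indices in the tradition of Mitnitski and colleagues are commonly computed as the proportion of considered health deficits that an individual exhibits. A minimal sketch of that construction (the deficit variables below are illustrative, not the ones used in the study):

```python
def frailty_index(deficits):
    """Fitness/frailty index as the fraction of recorded deficits present.

    `deficits` maps a variable name to 0 (absent), 1 (present), or a graded
    value in [0, 1]; variables with missing data should simply be left out.
    This proportion-of-deficits construction follows the general approach of
    Mitnitski and colleagues; the variable names are hypothetical.
    """
    if not deficits:
        raise ValueError("no deficit variables supplied")
    return sum(deficits.values()) / len(deficits)

person = {"hypertension": 1, "impaired_vision": 0, "low_mood": 0.5,
          "mobility_problem": 1, "diabetes": 0}
print(frailty_index(person))  # 2.5 deficits out of 5 variables
```

Because the index is a simple proportion, it can be computed from whatever deficit variables a given cross-sectional database happens to record, which is what makes the approach portable across datasets.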

  8. Distributed Pseudo-Random Number Generation and Its Application to Cloud Database

    OpenAIRE

    Chen, Jiageng; Miyaji, Atsuko; Su, Chunhua

    2014-01-01

    Cloud databases are now a rapidly growing trend in the cloud computing market. They enable clients to run their computations on outsourced databases or to access distributed database services on the cloud. At the same time, security and privacy concerns are a major challenge for cloud databases to continue growing. To enhance the security and privacy of cloud database technology, pseudo-random number generation (PRNG) plays an important role in data encryption and privacy-pr...
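The paper's distributed PRNG scheme is not reproduced in this abstract; as a generic illustration of the role a keyed PRNG plays in such settings, here is a counter-mode HMAC-SHA256 stream (a standard construction, not the authors' scheme — any party holding the key can regenerate the same bytes, e.g. to mask outsourced records):

```python
import hmac
import hashlib

def prng_stream(key: bytes, nbytes: int, label: bytes = b"cloud-db") -> bytes:
    """Deterministic pseudo-random byte stream from HMAC-SHA256 in counter mode.

    A generic keyed-PRNG sketch in the spirit of the abstract (NOT the paper's
    distributed construction): each HMAC invocation of the label plus a
    64-bit counter yields one 32-byte block of output.
    """
    out = b""
    counter = 0
    while len(out) < nbytes:
        block = hmac.new(key, label + counter.to_bytes(8, "big"),
                         hashlib.sha256).digest()
        out += block
        counter += 1
    return out[:nbytes]

stream = prng_stream(b"shared secret", 48)
print(stream.hex())
```

Determinism given the key is the property that lets several distributed parties agree on the same pseudo-random values without transmitting them.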

  9. Overview of Historical Earthquake Document Database in Japan and Future Development

    Science.gov (United States)

    Nishiyama, A.; Satake, K.

    2014-12-01

    In Japan, damage and disasters from historical large earthquakes have been documented and preserved. Compilation of historical earthquake documents started in the early 20th century, and 33 volumes of historical document source books (about 27,000 pages) have been published. However, these source books are not used effectively by researchers, owing to contamination by low-reliability historical records and the difficulty of searching by keywords and dates. To overcome these problems and to promote historical earthquake studies in Japan, construction of a text database started in the 21st century. For historical earthquakes from the beginning of the 7th century to the early 17th century, the "Online Database of Historical Documents in Japanese Earthquakes and Eruptions in the Ancient and Medieval Ages" (Ishibashi, 2009) has already been constructed. Its authors investigated the source books or original texts of historical literature, emended the descriptions, and assigned a reliability to each historical document on the basis of its written age. Another database compiled the historical documents for seven damaging earthquakes that occurred along the Sea of Japan coast in Honshu, central Japan, in the Edo period (from the beginning of the 17th century to the middle of the 19th century) and constructed a text database and a seismic intensity database. These are now publicized on the web (written only in Japanese). However, only about 9% of the earthquake source books have been digitized so far. We therefore plan to digitize all of the remaining historical documents under a research program that started in 2014. The specification of the database will be similar to the previous ones. We also plan to combine this database with a liquefaction-traces database, to be constructed by another research program, by adding the location information described in historical documents. The constructed database would be used to estimate the distributions of seismic intensities and tsunami

  10. The Chandra Source Catalog 2.0: Interfaces

    Science.gov (United States)

    D'Abrusco, Raffaele; Zografou, Panagoula; Tibbetts, Michael; Allen, Christopher E.; Anderson, Craig S.; Budynkiewicz, Jamie A.; Burke, Douglas; Chen, Judy C.; Civano, Francesca Maria; Doe, Stephen M.; Evans, Ian N.; Evans, Janet D.; Fabbiano, Giuseppina; Gibbs, Danny G., II; Glotfelty, Kenny J.; Graessle, Dale E.; Grier, John D.; Hain, Roger; Hall, Diane M.; Harbo, Peter N.; Houck, John C.; Lauer, Jennifer L.; Laurino, Omar; Lee, Nicholas P.; Martínez-Galarza, Rafael; McCollough, Michael L.; McDowell, Jonathan C.; Miller, Joseph; McLaughlin, Warren; Morgan, Douglas L.; Mossman, Amy E.; Nguyen, Dan T.; Nichols, Joy S.; Nowak, Michael A.; Paxson, Charles; Plummer, David A.; Primini, Francis Anthony; Rots, Arnold H.; Siemiginowska, Aneta; Sundheim, Beth A.; Van Stone, David W.

    2018-01-01

    Easy-to-use, powerful public interfaces to access the wealth of information contained in any modern, complex astronomical catalog are fundamental to encourage its usage. In this poster, I present the public interfaces of the second Chandra Source Catalog (CSC2). CSC2 is the most comprehensive catalog of X-ray sources detected by Chandra, thanks to the inclusion of Chandra observations public through the end of 2014 and to methodological advancements. CSC2 provides measured properties for a large number of sources that sample the X-ray sky at fainter levels than the previous versions of the CSC, thanks to the stacking of single overlapping observations within 1' before source detection. Sources from stacks are then crossmatched, if multiple stacks cover the same area of the sky, to create a list of unique, optimal CSC2 sources. The properties of sources detected in each single stack and each single observation are also measured. The layered structure of the CSC2 catalog is mirrored in the organization of the CSC2 database, consisting of three tables containing all properties for the unique stacked sources (“Master Source”), single-stack sources (“Stack Source”) and sources in any single observation (“Observation Source”). These tables contain estimates of the position, flags, extent, significances, fluxes, spectral properties and variability (and associated errors) for all classes of sources. The CSC2 also includes source-region and full-field data products for all master sources, stack sources and observation sources: images, photon event lists, light curves and spectra. CSCview, the main interface to the CSC2 source properties and data products, is a GUI tool that allows users to build queries based on the values of all properties contained in CSC2 tables, query the catalog, inspect the returned table of source properties, and browse and download the associated data products. I will also introduce the suite of command-line interfaces to CSC2 that can be used in

  11. Multifrequency VLA observations of PKS 0745-191: the archetypal 'cooling flow' radio source?

    International Nuclear Information System (INIS)

    Baum, S.A.; O'Dea, C.P.

    1991-01-01

    We present 90-, 20-, 6- and 2-cm VLA observations of the high radio luminosity, cooling flow radio source PKS 0745-191. We find that the radio source has a core with a very steep spectrum and diffuse emission with an even steeper spectrum without clear indications of the jets, hotspots or double lobes found in other radio sources of comparable luminosity. The appearance of the source is highly dependent on frequency and resolution. This dependence reflects both the diffuse nature of the extended emission and the steep, but position-dependent, spectrum of the radio emission. (author)

  12. Completeness of metabolic disease recordings in Nordic national databases for dairy cows.

    Science.gov (United States)

    Espetvedt, M N; Wolff, C; Rintakoski, S; Lind, A; Østerås, O

    2012-06-01

    The four Nordic countries, Denmark (DK), Finland (FI), Norway (NO) and Sweden (SE), all have national databases where diagnostic events in dairy cows are recorded. Comparing differences in disease occurrence between countries may give information on factors that influence disease occurrence and on optimal disease control and treatment strategies. For such comparisons to be valid, the data in these databases should be standardised and of good quality. The objective of the study presented here was to assess the quality of metabolic disease recordings, primarily milk fever and ketosis, in four Nordic national databases. Completeness of recording of database registrations at two different levels was chosen as a measure of data quality. Firstly, completeness of recording of all disease events on a farm regardless of veterinary involvement, called 'Farmer observed completeness', was determined. Secondly, completeness of recording of veterinary treated disease events only, called 'Veterinary treated completeness', was determined. To collect data for calculating these completeness levels, a simple random sample of herds was obtained in each country. Farmers who were willing to participate recorded, for 4 months in 2008 on a purpose-made registration form, any observed illness in cows, regardless of veterinary involvement. The number of participating herds was 105, 167, 179 and 129 in DK, FI, NO and SE respectively. In total these herds registered 247, 248, 177 and 218 metabolic events for analysis in DK, FI, NO and SE, respectively. Data from the national databases were subsequently extracted, and the two sources of data were matched to find the proportion, or completeness, of diagnostic events registered by farmers that also existed in the national databases. Matching was done using a common diagnostic code system and allowed for a discrepancy of 7 days in the registered date of the event. For milk fever, the Farmer observed completeness was 77%, 67%, 79% and 79
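The completeness measure described above amounts to matching farmer-recorded events against a database extract with a 7-day date tolerance and taking the matched proportion. A hedged sketch of that matching (the tuple layout and diagnosis codes are invented for illustration):

```python
from datetime import date

def completeness(farmer_events, db_events, window_days=7):
    """Share of farmer-recorded events also found in the national database.

    Events are (cow_id, diagnosis_code, date) tuples; a farmer event counts
    as matched if the database holds the same cow and code within
    `window_days` of the recorded date, mirroring the 7-day discrepancy
    allowed in the study.  Codes here are illustrative, not the Nordic ones.
    """
    matched = 0
    for cow, code, day in farmer_events:
        if any(cow == c and code == k and abs((day - d).days) <= window_days
               for c, k, d in db_events):
            matched += 1
    return matched / len(farmer_events)

farmer = [(101, "KET", date(2008, 3, 1)),
          (102, "MF",  date(2008, 3, 5)),
          (103, "KET", date(2008, 4, 2))]
db = [(101, "KET", date(2008, 3, 4)),   # within 7 days -> match
      (102, "MF",  date(2008, 3, 20))]  # 15 days off   -> no match
print(completeness(farmer, db))
```

Run over the full samples per country and diagnosis, this yields figures like the 77%, 67%, ... reported for milk fever.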

  13. Quality Assurance Source Requirements Traceability Database

    International Nuclear Information System (INIS)

    MURTHY, R.; NAYDENOVA, A.; DEKLEVER, R.; BOONE, A.

    2006-01-01

    At the Yucca Mountain Project, the Project Requirements Processing System assists in the management of relationships between regulatory and national/industry standards source criteria and Quality Assurance Requirements and Description document (DOE/RW-0333P) requirements, to create compliance matrices representing the respective relationships. The matrices are submitted to the U.S. Nuclear Regulatory Commission to assist in the commission's review, interpretation, and concurrence with the Yucca Mountain Project QA program document. The tool is highly customized to meet the needs of the Office of Civilian Radioactive Waste Management Office of Quality Assurance.

  14. Sources of Free and Open Source Spatial Data for Natural Disasters and Principles for Use in Developing Country Contexts

    Science.gov (United States)

    Taylor, Faith E.; Malamud, Bruce D.; Millington, James D. A.

    2016-04-01

    Access to reliable spatial and quantitative datasets (e.g., infrastructure maps, historical observations, environmental variables) at regional and site specific scales can be a limiting factor for understanding hazards and risks in developing country settings. Here we present a 'living database' of >75 freely available data sources relevant to hazard and risk in Africa (and more globally). Data sources include national scientific foundations, non-governmental bodies, crowd-sourced efforts, academic projects, special interest groups and others. The database is available at http://tinyurl.com/africa-datasets and is continually being updated, particularly in the context of broader natural hazards research we are doing in the context of Malawi and Kenya. For each data source, we review the spatiotemporal resolution and extent and make our own assessments of reliability and usability of datasets. Although such freely available datasets are sometimes presented as a panacea to improving our understanding of hazards and risk in developing countries, there are both pitfalls and opportunities unique to using this type of freely available data. These include factors such as resolution, homogeneity, uncertainty, access to metadata and training for usage. Based on our experience, use in the field and grey/peer-review literature, we present a suggested set of guidelines for using these free and open source data in developing country contexts.

  15. An Open-source Toolbox for Analysing and Processing PhysioNet Databases in MATLAB and Octave.

    Science.gov (United States)

    Silva, Ikaro; Moody, George B

    The WaveForm DataBase (WFDB) Toolbox for MATLAB/Octave enables integrated access to PhysioNet's software and databases. Using the WFDB Toolbox for MATLAB/Octave, users have access to over 50 physiological databases in PhysioNet. The toolbox provides access to over 4 TB of biomedical signals, including ECG, EEG, EMG, and PLETH. Additionally, most signals are accompanied by metadata such as medical annotations of clinical events: arrhythmias, sleep stages, seizures, hypotensive episodes, etc. Users of this toolbox should easily be able to reproduce, validate, and compare results published based on PhysioNet's software and databases.

  16. The utility of satellite observations for constraining fine-scale and transient methane sources

    Science.gov (United States)

    Turner, A. J.; Jacob, D.; Benmergui, J. S.; Brandman, J.; White, L.; Randles, C. A.

    2017-12-01

    Resolving differences between top-down and bottom-up emissions of methane from the oil and gas industry is difficult due, in part, to their fine-scale and often transient nature. There is considerable interest in using atmospheric observations to detect these sources. Satellite-based instruments are an attractive tool for this purpose and, more generally, for quantifying methane emissions on fine scales. A number of instruments are planned for launch in the coming years from both low earth and geostationary orbit, but the extent to which they can provide fine-scale information on sources has yet to be explored. Here we present an observation system simulation experiment (OSSE) exploring the tradeoffs between pixel resolution, measurement frequency, and instrument precision on the fine-scale information content of a space-borne instrument measuring methane. We use the WRF-STILT Lagrangian transport model to generate more than 200,000 column footprints at 1.3×1.3 km2 spatial resolution and hourly temporal resolution over the Barnett Shale in Texas. We sub-sample these footprints to match the observing characteristics of the planned TROPOMI and GeoCARB instruments as well as different hypothetical observing configurations. The information content of the various observing systems is evaluated using the Fisher information matrix and its singular values. We draw conclusions on the capabilities of the planned satellite instruments and how these capabilities could be improved for fine-scale source detection.
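For a linear observing model y = Gx + ε with independent Gaussian noise of standard deviation σ, the Fisher information matrix is F = GᵀG/σ², and its small singular values flag emission patterns the instrument cannot constrain. A toy sketch comparing two observing configurations (the footprint matrices are invented, not WRF-STILT output):

```python
import math

def fisher_info(G, sigma):
    """Fisher information F = G^T G / sigma^2 for y = G x + noise,
    with independent Gaussian errors of std `sigma`.
    G is a list of observation rows (footprints)."""
    n = len(G[0])
    return [[sum(r[i] * r[j] for r in G) / sigma**2 for j in range(n)]
            for i in range(n)]

def singular_values_2x2(F):
    """Singular values of a symmetric positive-semidefinite 2x2 matrix
    (equal to its eigenvalues), via trace and determinant."""
    tr = F[0][0] + F[1][1]
    det = F[0][0] * F[1][1] - F[0][1] * F[1][0]
    disc = math.sqrt(max(tr * tr / 4 - det, 0.0))
    return (tr / 2 + disc, tr / 2 - disc)

# Toy comparison: a coarse-pixel instrument mixes two sources (rows nearly
# parallel), while a fine-pixel one separates them.  Values are made up.
coarse = [[0.5, 0.5], [0.5, 0.6]]
fine   = [[1.0, 0.1], [0.1, 1.0]]
for name, G in [("coarse", coarse), ("fine", fine)]:
    s = singular_values_2x2(fisher_info(G, sigma=1.0))
    print(name, [round(v, 3) for v in s])
```

The near-zero small singular value of the coarse configuration signals a source combination the measurements cannot distinguish, which is the kind of trade-off the OSSE quantifies.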

  17. SAADA: Astronomical Databases Made Easier

    Science.gov (United States)

    Michel, L.; Nguyen, H. N.; Motch, C.

    2005-12-01

    Many astronomers wish to share datasets with their community but lack the manpower to develop databases with the functionalities required for high-level scientific applications. The SAADA project aims to automate the creation and deployment of such databases. A generic but scientifically relevant data model has been designed which allows one to build databases by providing only a limited number of product mapping rules. Databases created by SAADA rely on a relational database supporting JDBC, covered by a Java layer that includes a large amount of generated code. Such databases can simultaneously host spectra, images, source lists and plots. Data are grouped in user-defined collections whose content can be seen as one unique set per data type even if their formats differ. Datasets can be correlated with one another using qualified links. These links help, for example, to handle the nature of a cross-identification (e.g., a distance or a likelihood) or to describe their scientific content (e.g., by associating a spectrum with a catalog entry). The SAADA query engine is based on a language well suited to the data model which can handle constraints on linked data, in addition to classical astronomical queries. These constraints can be applied on the linked objects (number, class and attributes) and/or on the link qualifier values. Databases created by SAADA are accessed through a rich web interface or a Java API. We are currently developing an interoperability module implementing VO protocols.

  18. The Freight Analysis Framework Verson 4 (FAF4) - Building the FAF4 Regional Database: Data Sources and Estimation Methodologies

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Ho-Ling [ORNL; Hargrove, Stephanie [ORNL; Chin, Shih-Miao [ORNL; Wilson, Daniel W [ORNL; Taylor, Rob D [ORNL; Davidson, Diane [ORNL

    2016-09-01

    The Freight Analysis Framework (FAF) integrates data from a variety of sources to create a comprehensive national picture of freight movements among states and major metropolitan areas by all modes of transportation. It provides a national picture of current freight flows to, from, and within the United States, assigns the flows to the transportation network, and projects freight flow patterns into the future. The FAF4 is the fourth database of its kind: FAF1 provided estimates for truck, rail, and water tonnage for calendar year 1998; FAF2 provided a more complete picture based on the 2002 Commodity Flow Survey (CFS); and FAF3 made further improvements building on the 2007 CFS. Since the first FAF effort, a number of changes in both data sources and products have taken place. The FAF4 flow matrix described in this report is used as the base-year data to forecast future freight activities, projecting shipment weights and values from year 2020 to 2045 in five-year intervals. It also provides the basis for annual estimates of the FAF4 flow matrix, aiming to provide users with the timeliest data. Furthermore, FAF4 truck freight is routed on the national highway network to produce the FAF4 network database and flow assignments for trucks. This report details the data sources and methodologies applied to develop the base year 2012 FAF4 database. An overview of the FAF4 components is briefly discussed in Section 2. Effects on FAF4 from the changes in the 2012 CFS are highlighted in Section 3. Section 4 provides a general discussion of the process used in filling data gaps within the domestic CFS matrix, specifically the estimation of CFS suppressed/unpublished cells. Over a dozen CFS out-of-scope (OOS) components of FAF4 are then addressed in Section 5 through Section 11 of this report. This includes discussions of farm-based agricultural shipments in Section 5 and shipments from the fishery and logging sectors in Section 6. Shipments of municipal solid wastes and debris from construction

  19. CardioTF, a database of deconstructing transcriptional circuits in the heart system.

    Science.gov (United States)

    Zhen, Yisong

    2016-01-01

    Information on cardiovascular gene transcription is fragmented and far behind the present requirements of the systems biology field. To create a comprehensive source of data for cardiovascular gene regulation and to facilitate a deeper understanding of genomic data, the CardioTF database was constructed. The purpose of this database is to collate information on cardiovascular transcription factors (TFs), position weight matrices (PWMs), and enhancer sequences discovered using the ChIP-seq method. The Naïve-Bayes algorithm was used to classify literature and identify all PubMed abstracts on cardiovascular development. The natural language learning tool GNAT was then used to identify corresponding gene names embedded within these abstracts. Local Perl scripts were used to integrate and dump data from public databases into the MariaDB management system (MySQL). In-house R scripts were written to analyze and visualize the results. Known cardiovascular TFs from humans and human homologs from fly, Ciona, zebrafish, frog, chicken, and mouse were identified and deposited in the database. PWMs from Jaspar, hPDI, and UniPROBE databases were deposited in the database and can be retrieved using their corresponding TF names. Gene enhancer regions from various sources of ChIP-seq data were deposited into the database and were able to be visualized by graphical output. Besides biocuration, mouse homologs of the 81 core cardiac TFs were selected using a Naïve-Bayes approach and then by intersecting four independent data sources: RNA profiling, expert annotation, PubMed abstracts and phenotype. The CardioTF database can be used as a portal to construct transcriptional network of cardiac development. Database URL: http://www.cardiosignal.org/database/cardiotf.html.
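The first step described above, Naïve-Bayes classification of PubMed abstracts, can be illustrated with a toy multinomial Naive Bayes classifier with Laplace smoothing (the corpus, labels, and vocabulary below are invented; CardioTF's actual training data and features differ):

```python
import math
from collections import Counter

def train_nb(docs):
    """Train a multinomial Naive Bayes model over word counts.

    `docs` is a list of (label, text) pairs.  A toy stand-in for the
    abstract-classification step described above.
    """
    labels = Counter(lbl for lbl, _ in docs)
    words = {lbl: Counter() for lbl in labels}
    vocab = set()
    for lbl, text in docs:
        for w in text.lower().split():
            words[lbl][w] += 1
            vocab.add(w)
    return labels, words, vocab

def classify(model, text):
    """Return the label maximising log P(label) + sum log P(word|label),
    with add-one (Laplace) smoothing over the vocabulary."""
    labels, words, vocab = model
    total = sum(labels.values())
    best, best_lp = None, -math.inf
    for lbl in labels:
        lp = math.log(labels[lbl] / total)
        n = sum(words[lbl].values())
        for w in text.lower().split():
            lp += math.log((words[lbl][w] + 1) / (n + len(vocab)))
        if lp > best_lp:
            best, best_lp = lbl, lp
    return best

model = train_nb([
    ("cardiac", "heart ventricle development gata4 expression"),
    ("cardiac", "cardiomyocyte transcription factor heart"),
    ("other",   "kidney nephron development expression"),
    ("other",   "liver hepatocyte regeneration"),
])
print(classify(model, "heart transcription factor"))
```

In the CardioTF pipeline, abstracts classified as cardiovascular-relevant were then passed to GNAT for gene-name recognition.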

  20. ALMA observations of lensed Herschel sources: testing the dark matter halo paradigm

    Science.gov (United States)

    Amvrosiadis, A.; Eales, S. A.; Negrello, M.; Marchetti, L.; Smith, M. W. L.; Bourne, N.; Clements, D. L.; De Zotti, G.; Dunne, L.; Dye, S.; Furlanetto, C.; Ivison, R. J.; Maddox, S. J.; Valiante, E.; Baes, M.; Baker, A. J.; Cooray, A.; Crawford, S. M.; Frayer, D.; Harris, A.; Michałowski, M. J.; Nayyeri, H.; Oliver, S.; Riechers, D. A.; Serjeant, S.; Vaccari, M.

    2018-04-01

    With the advent of wide-area submillimetre surveys, a large number of high-redshift gravitationally lensed dusty star-forming galaxies have been revealed. Because of the simplicity of the selection criteria for candidate lensed sources in such surveys, identified as those with S500 μm > 100 mJy, uncertainties associated with the modelling of the selection function are expunged. The combination of these attributes makes submillimetre surveys ideal for the study of strong lens statistics. We carried out a pilot study of the lensing statistics of submillimetre-selected sources by making observations with the Atacama Large Millimeter Array (ALMA) of a sample of strongly lensed sources selected from surveys carried out with the Herschel Space Observatory. We attempted to reproduce the distribution of image separations for the lensed sources using a halo mass function taken from a numerical simulation that contains both dark matter and baryons. We used three different density distributions, one based on analytical fits to the haloes formed in the EAGLE simulation and two density distributions [Singular Isothermal Sphere (SIS) and SISSA] that have been used before in lensing studies. We found that we could reproduce the observed distribution with all three density distributions, as long as we imposed an upper mass transition of ~10¹³ M⊙ for the SIS and SISSA models, above which we assumed that the density distribution could be represented by a Navarro-Frenk-White profile. We show that we would need a sample of ~500 lensed sources to distinguish between the density distributions, which is practical given the predicted number of lensed sources in the Herschel surveys.

  1. Heat sources for bright-rimmed molecular clouds: CO observations of NGC 7822

    International Nuclear Information System (INIS)

    Elmegreen, B.G.; Dickinson, D.F.; Lada, C.J.

    1978-01-01

    Observations of the 2.6 mm carbon monoxide line in the bright rim NGC 7822 reveal that the peak excitation and column density of the molecule lie in a ridge ahead of the ionization front. Several possibilities for the excitation of this ridge are discussed. Cosmic rays are shown to provide an excellent heat source for Bok globules, but they can account for only approximately 20% of the required heating in NGC 7822. Direct shock or compressional heating of the gas could be adequate only if the pressure inside the cloud is much larger than the thermal pressure. If, in fact, this internal pressure is determined by the source of line broadening (e.g., magnetic fields or turbulence), then shock or compressional heating could be important, and pressure equilibrium may exist between the neutral cloud and the bright rim. Heating by warm grains or by the photoelectric effect is also considered, but such mechanisms are probably not important if the only source of radiation is external to the cloud. This is primarily a result of the low cloud density (approximately 10³ cm⁻³) inferred from our observations. The extent to which unknown embedded stars may provide the required gaseous heating cannot be estimated from our observations of NGC 7822. An interesting and new heat source is suggested which may have important applications to bright-rimmed clouds or to any other predominantly neutral clouds that may have undergone some recent compression. We suggest that the heat input to neutral gas due to the relaxation of internal magnetic fields will be greatly enhanced during cloud compression (with or without a shock). We show that the power input to the gas will increase more with increasing density than will the cooling rate. As a result, cloud compression can lead to an increase in the gas temperature for a period lasting several million years, which is the decay time of the compressed field. The observed ridge in NGC 7822 may be due to stimulated release of internal magnetic energy

  2. An Open-source Toolbox for Analysing and Processing PhysioNet Databases in MATLAB and Octave

    Directory of Open Access Journals (Sweden)

    Ikaro Silva

    2014-09-01

    The WaveForm DataBase (WFDB) Toolbox for MATLAB/Octave enables integrated access to PhysioNet's software and databases. Using the WFDB Toolbox for MATLAB/Octave, users have access to over 50 physiological databases on PhysioNet. The toolbox allows direct loading into the MATLAB/Octave workspace of over 4 TB of biomedical signals, including ECG, EEG, EMG, and PLETH. Additionally, most signals are accompanied by metadata such as medical annotations of clinical events: arrhythmias, sleep stages, seizures, hypotensive episodes, etc. Users of this toolbox should easily be able to reproduce, validate, and compare results published based on PhysioNet's software and databases.

  3. A Web-based Tool for SDSS and 2MASS Database Searches

    Science.gov (United States)

    Hendrickson, M. A.; Uomoto, A.; Golimowski, D. A.

    We have developed a web site using HTML, PHP, Python, and MySQL that extracts, processes, and displays data from the Sloan Digital Sky Survey (SDSS) and the Two-Micron All-Sky Survey (2MASS). The goal is to locate brown dwarf candidates in the SDSS database by looking at color cuts; however, this site could also be useful for targeted searches of other databases. MySQL databases are created from broad searches of SDSS and 2MASS data. Broad queries on the SDSS and 2MASS database servers are run weekly so that observers have the most up-to-date information from which to select candidates for observation. Observers can look at detailed information about specific objects, including finding charts, images, and available spectra. In addition, updates from previous observations can be added by any collaborator; this format makes observational collaboration simple. Observers can also restrict the database search, just before or during an observing run, to select objects of special interest.
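
    A color-cut candidate search of the kind described above can be sketched with a small SQL query. The schema, column names, and cut thresholds below are hypothetical stand-ins, and SQLite is used in place of MySQL only so the sketch is self-contained:

```python
import sqlite3

# Hypothetical photometry table; real SDSS/2MASS schemas and cut values differ.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE photometry (objid TEXT, imag REAL, zmag REAL, jmag REAL)")
conn.executemany(
    "INSERT INTO photometry VALUES (?, ?, ?, ?)",
    [
        ("obj1", 20.1, 18.2, 15.9),  # red i-z and z-J colors: candidate
        ("obj2", 17.0, 16.8, 16.1),  # blue colors: ordinary star
    ],
)
# Select candidates with i-z > 1.5 and z-J > 2.0 (illustrative thresholds)
rows = conn.execute(
    "SELECT objid FROM photometry WHERE imag - zmag > 1.5 AND zmag - jmag > 2.0"
).fetchall()
print(rows)  # [('obj1',)]
```

In the real system the weekly broad queries would populate such tables, and observers would refine the cuts interactively.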

  4. MIDAS: a database-searching algorithm for metabolite identification in metabolomics.

    Science.gov (United States)

    Wang, Yingfeng; Kora, Guruprasad; Bowen, Benjamin P; Pan, Chongle

    2014-10-07

    A database searching approach can be used for metabolite identification in metabolomics by matching measured tandem mass spectra (MS/MS) against the predicted fragments of metabolites in a database. Here, we present the open-source MIDAS algorithm (Metabolite Identification via Database Searching). To evaluate a metabolite-spectrum match (MSM), MIDAS first enumerates possible fragments from a metabolite by systematic bond dissociation, then calculates the plausibility of the fragments based on their fragmentation pathways, and finally scores the MSM to assess how well the experimental MS/MS spectrum from collision-induced dissociation (CID) is explained by the metabolite's predicted CID MS/MS spectrum. MIDAS was designed to search high-resolution tandem mass spectra acquired on time-of-flight or Orbitrap mass spectrometers against a metabolite database in an automated and high-throughput manner. The accuracy of metabolite identification by MIDAS was benchmarked using four sets of standard tandem mass spectra from MassBank. On average, for 77% of original spectra and 84% of composite spectra, MIDAS correctly ranked the true compounds as the first MSMs out of all MetaCyc metabolites as decoys. MIDAS correctly identified 46% more original spectra and 59% more composite spectra at the first MSMs than an existing database-searching algorithm, MetFrag. MIDAS was showcased by searching a published real-world measurement of a metabolome from Synechococcus sp. PCC 7002 against the MetaCyc metabolite database. MIDAS identified many metabolites missed in the previous study. MIDAS identifications should be considered only as candidate metabolites, which need to be confirmed using standard compounds. To facilitate manual validation, MIDAS provides annotated spectra for MSMs and labels observed mass spectral peaks with predicted fragments. The database searching and manual validation can be performed online at http://midas.omicsbio.org.
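
    The core of such a database search, i.e. checking how well a metabolite's predicted fragments explain an observed spectrum, can be sketched as follows. This is a simplified, hypothetical score (fraction of peak intensity matched within a mass tolerance), not MIDAS's actual scoring function, and the peak and fragment masses are invented:

```python
def explained_intensity(peaks, predicted_mz, tol=0.01):
    """Fraction of total observed peak intensity matched by any predicted
    fragment m/z within an absolute tolerance (Da). A simplified stand-in
    for a metabolite-spectrum match (MSM) score."""
    total = sum(intensity for _, intensity in peaks)
    matched = sum(
        intensity
        for mz, intensity in peaks
        if any(abs(mz - p) <= tol for p in predicted_mz)
    )
    return matched / total if total else 0.0

# Observed (m/z, intensity) peaks and fragments predicted by bond dissociation
spectrum = [(85.028, 40.0), (103.039, 100.0), (121.050, 25.0)]
fragments = [85.029, 103.040]  # hypothetical fragment masses
score = explained_intensity(spectrum, fragments)
print(round(score, 3))  # 0.848
```

Ranking all database metabolites by such a score, highest first, yields the candidate list that then requires confirmation against standard compounds.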

  5. Reducing process delays for real-time earthquake parameter estimation - An application of KD tree to large databases for Earthquake Early Warning

    Science.gov (United States)

    Yin, Lucy; Andrews, Jennifer; Heaton, Thomas

    2018-05-01

    Earthquake parameter estimation using nearest-neighbor searching among a large database of observations can lead to reliable prediction results. However, in the real-time application of Earthquake Early Warning (EEW) systems, the accurate prediction afforded by a large database is penalized by a significant delay in processing time. We propose to use a multidimensional binary search tree (KD tree) data structure to organize large seismic databases to reduce the processing time of nearest-neighbor searches for predictions. We evaluated the performance of the KD tree on the Gutenberg Algorithm, a database-searching algorithm for EEW. We constructed an offline test to predict peak ground motions using a database with feature sets of waveform filter-bank characteristics, and compared the results with the observed seismic parameters. We concluded that a large database provides more accurate predictions of ground motion information, such as peak ground acceleration, velocity, and displacement (PGA, PGV, PGD), than of source parameters, such as hypocenter distance. Applying the KD tree search to organize the database reduced the average search time by 85% relative to the exhaustive method, making the approach feasible for real-time implementation. The algorithm is straightforward, and the results will reduce the overall time of warning delivery for EEW.
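
    A minimal KD tree build and nearest-neighbor search can be sketched in pure Python. The two-dimensional feature vectors below are toy stand-ins for the waveform filter-bank features described above; a production EEW system would use many more dimensions and records:

```python
import math

def dist(a, b):
    """Euclidean distance between two k-dimensional points."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def build_kdtree(points, depth=0):
    """Recursively build a KD tree, splitting on axes in rotation."""
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {
        "point": points[mid],
        "left": build_kdtree(points[:mid], depth + 1),
        "right": build_kdtree(points[mid + 1:], depth + 1),
    }

def nearest(node, target, depth=0, best=None):
    """Return the stored point closest to target, pruning far subtrees."""
    if node is None:
        return best
    axis = depth % len(target)
    point = node["point"]
    if best is None or dist(point, target) < dist(best, target):
        best = point
    if target[axis] < point[axis]:
        near, far = node["left"], node["right"]
    else:
        near, far = node["right"], node["left"]
    best = nearest(near, target, depth + 1, best)
    # Only descend the far side if the splitting plane is closer than best
    if abs(target[axis] - point[axis]) < dist(best, target):
        best = nearest(far, target, depth + 1, best)
    return best

# Toy feature vectors (e.g. filter-bank amplitudes) from past records
records = [(0.2, 1.1), (0.9, 0.4), (1.5, 1.6), (0.1, 0.3)]
tree = build_kdtree(records)
print(nearest(tree, (1.0, 0.5)))  # (0.9, 0.4)
```

Because whole subtrees are pruned when the splitting plane lies farther away than the current best match, the average query touches far fewer records than an exhaustive scan, which is the source of the reported speedup.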

  6. Digital Dental X-ray Database for Caries Screening

    Science.gov (United States)

    Rad, Abdolvahab Ehsani; Rahim, Mohd Shafry Mohd; Rehman, Amjad; Saba, Tanzila

    2016-06-01

    A standard database is an essential requirement for comparing the performance of image analysis techniques; hence a main obstacle in dental image analysis is the lack of an available image database, which this paper provides. Periapical dental X-ray images suitable for analysis and approved by many dental experts were collected. This type of dental radiograph imaging is common and inexpensive, and is normally used for diagnosing dental disease and detecting abnormalities. The database contains 120 periapical X-ray images covering the upper and lower jaws. This digital dental database is constructed to provide a source for researchers to use in comparing image analysis techniques and improving the performance of each technique.

  7. CHANDRA OBSERVATIONS OF 3C RADIO SOURCES WITH z < 0.3. II. COMPLETING THE SNAPSHOT SURVEY

    Energy Technology Data Exchange (ETDEWEB)

    Massaro, F. [SLAC National Laboratory and Kavli Institute for Particle Astrophysics and Cosmology, 2575 Sand Hill Road, Menlo Park, CA 94025 (United States); Tremblay, G. R. [European Southern Observatory, Karl-Schwarzschild-Str. 2, D-85748 Garching bei Muenchen (Germany); Harris, D. E.; O' Dea, C. P. [Smithsonian Astrophysical Observatory, 60 Garden Street, Cambridge, MA 02138 (United States); Kharb, P.; Axon, D. [Department of Physics, Rochester Institute of Technology, Carlson Center for Imaging Science 76-3144, 84 Lomb Memorial Dr., Rochester, NY 14623 (United States); Balmaverde, B.; Capetti, A. [INAF-Osservatorio Astrofisico di Torino, Strada Osservatorio 20, I-10025 Pino Torinese (Italy); Baum, S. A. [Carlson Center for Imaging Science 76-3144, 84 Lomb Memorial Dr., Rochester, NY 14623 (United States); Chiaberge, M.; Macchetto, F. D.; Sparks, W. [Space Telescope Science Institute, 3700 San Martine Drive, Baltimore, MD 21218 (United States); Gilli, R. [INAF-Osservatorio Astronomico di Bologna, Via Ranzani 1, I-40127 Bologna (Italy); Giovannini, G. [INAF-Istituto di Radioastronomia di Bologna, Via Gobetti 101, I-40129 Bologna (Italy); Grandi, P.; Torresi, E. [INAF-IASF-Istituto di Astrofisica Spaziale e fisica Cosmica di Bologna, Via P. Gobetti 101, I-40129 Bologna (Italy); Risaliti, G. [INAF-Osservatorio Astronomico di Arcetri, Largo E. Fermi 5, I-50125 Firenze (Italy)

    2012-12-15

    We report on the second round of Chandra observations of the 3C snapshot survey developed to observe the complete sample of 3C radio sources with z < 0.3 for 8 ks each. In the first paper, we illustrated the basic data reduction and analysis procedures performed for the 30 sources of the 3C sample observed during Chandra Cycle 9, while here we present the data for the remaining 27 sources observed during Cycle 12. We measured the X-ray intensity of the nuclei and of any radio hot spots and jet features with associated X-ray emission. X-ray fluxes in three energy bands, i.e., soft, medium, and hard, for all the sources analyzed are also reported. For the stronger nuclei, we also applied the standard spectral analysis, which provides the best-fit values of the X-ray spectral index and absorbing column density. In addition, a detailed analysis of bright X-ray nuclei that could be affected by pile-up has been performed. X-ray emission was detected for all the nuclei of the radio sources in our sample except for 3C 319. Among the current sample, there are two compact steep spectrum radio sources, two broad-line radio galaxies, and one wide angle tail radio galaxy, 3C 89, hosted in a cluster of galaxies clearly visible in our Chandra snapshot observation. In addition, we also detected soft X-ray emission arising from the galaxy cluster surrounding 3C 196.1. Finally, X-ray emission from hot spots has been found in three FR II radio sources and, in the case of 3C 459, we also report the detection of X-ray emission associated with the eastern radio lobe as well as X-ray emission cospatial with radio jets in 3C 29 and 3C 402.

  8. Source of the 26Al observed in the interstellar medium

    International Nuclear Information System (INIS)

    Dearborn, D.S.P.; Blake, J.B.

    1985-01-01

    Recent HEAO 3 observations have been interpreted by Mahoney and colleagues as requiring approximately 3 M⊙ of live ²⁶Al in the interstellar medium. Calculations briefly discussed in this Letter indicate that there is substantial production and dispersal of ²⁶Al in the stellar winds of O and Wolf-Rayet stars and suggest that the stellar winds of very massive stars are a significant source of ²⁶Al.

  9. Observational constraints on the physical nature of submillimetre source multiplicity: chance projections are common

    Science.gov (United States)

    Hayward, Christopher C.; Chapman, Scott C.; Steidel, Charles C.; Golob, Anneya; Casey, Caitlin M.; Smith, Daniel J. B.; Zitrin, Adi; Blain, Andrew W.; Bremer, Malcolm N.; Chen, Chian-Chou; Coppin, Kristen E. K.; Farrah, Duncan; Ibar, Eduardo; Michałowski, Michał J.; Sawicki, Marcin; Scott, Douglas; van der Werf, Paul; Fazio, Giovanni G.; Geach, James E.; Gurwell, Mark; Petitpas, Glen; Wilner, David J.

    2018-05-01

    Interferometric observations have demonstrated that a significant fraction of single-dish submillimetre (submm) sources are blends of multiple submm galaxies (SMGs), but the nature of this multiplicity, i.e. whether the galaxies are physically associated or chance projections, has not been determined. We performed spectroscopy of 11 SMGs in six multicomponent submm sources, obtaining spectroscopic redshifts for nine of them. For two additional component SMGs, we detected continuum emission but no obvious features. We supplement our observed sources with four single-dish submm sources from the literature. This sample allows us to statistically constrain the physical nature of single-dish submm source multiplicity for the first time. In three (3/7, or 43^{+39}_{-33} per cent at 95 per cent confidence) of the single-dish sources for which the nature of the blending is unambiguous, the components for which spectroscopic redshifts are available are physically associated, whereas 4/7 (57^{+33}_{-39} per cent) have at least one unassociated component. When components whose spectra exhibit continuum but no features and for which the photometric redshift is significantly different from the spectroscopic redshift of the other component are also considered, 6/9 (67^{+26}_{-37} per cent) of the single-dish sources are comprised of at least one unassociated component SMG. The nature of the multiplicity of one single-dish source is ambiguous. We conclude that physically associated systems and chance projections both contribute to the multicomponent single-dish submm source population. This result contradicts the conventional wisdom that bright submm sources are solely a result of merger-induced starbursts, as blending of unassociated galaxies is also important.

  10. Using Large Diabetes Databases for Research.

    Science.gov (United States)

    Wild, Sarah; Fischbacher, Colin; McKnight, John

    2016-09-01

    There are an increasing number of clinical, administrative and trial databases that can be used for research. These are particularly valuable if there are opportunities for linkage to other databases. This paper describes examples of the use of large diabetes databases for research. It reviews the advantages and disadvantages of using large diabetes databases for research and suggests solutions for some challenges. Large, high-quality databases offer potential sources of information for research at relatively low cost. Fundamental issues in using databases for research are the completeness of capture of cases within the population and time period of interest, and the accuracy of the diagnosis of diabetes and outcomes of interest. The extent to which people included in the database are representative should be considered if the database is not population based and there is the intention to extrapolate findings to the wider diabetes population. Information on key variables such as date of diagnosis or duration of diabetes may not be available at all, may be inaccurate or may contain a large amount of missing data. Information on key confounding factors is rarely available for the nondiabetic or general population, limiting comparisons with the population of people with diabetes. However, comparisons that allow for differences in the distribution of important demographic factors may be feasible using data for the whole population or a matched cohort study design. In summary, diabetes databases can be used to address important research questions. Understanding the strengths and limitations of this approach is crucial to interpret the findings appropriately. © 2016 Diabetes Technology Society.

  11. The Brainomics/Localizer database.

    Science.gov (United States)

    Papadopoulos Orfanos, Dimitri; Michel, Vincent; Schwartz, Yannick; Pinel, Philippe; Moreno, Antonio; Le Bihan, Denis; Frouin, Vincent

    2017-01-01

    The Brainomics/Localizer database exposes part of the data collected by the in-house Localizer project, which planned to acquire four types of data from volunteer research subjects: anatomical MRI scans, functional MRI data, behavioral and demographic data, and DNA sampling. Over the years, this local project has been collecting such data from hundreds of subjects. We had selected 94 of these subjects for their complete datasets, including all four types of data, as the basis for a prior publication; the Brainomics/Localizer database publishes the data associated with these 94 subjects. Since regulatory rules prevent us from making genetic data available for download, the database serves only anatomical MRI scans, functional MRI data, behavioral and demographic data. To publish this set of heterogeneous data, we use dedicated software based on the open-source CubicWeb semantic web framework. Through genericity in the data model and flexibility in the display of data (web pages, CSV, JSON, XML), CubicWeb helps us expose these complex datasets in original and efficient ways. Copyright © 2015 Elsevier Inc. All rights reserved.

  12. Annual seminar on electronic sources of information

    International Nuclear Information System (INIS)

    Ravichandra Rao, I.K.

    2000-03-01

    With the rapid development of IT and the emergence of the Internet, a multitude of information sources are now available on electronic media. They include e-journals and other electronic publications - online databases, reference documents, newspapers, magazines, etc. In addition to these online sources, there are thousands of CD-ROM databases. The CD-ROM databases and the online sources are collectively referred to as electronic sources of information. Libraries in no part of the world can afford to ignore these sources. The emergence of these new sources has resulted in a change in traditional library functions, including collection development, acquisitions, cataloguing, user instruction, etc. It is inevitable that in the next five to ten years, special libraries may have to allocate a considerable amount towards subscriptions to e-journals and other e-publications. The papers in this seminar volume discuss several aspects related to the theme of the seminar and cover e-journals, different sources available on the Net, classification of electronic sources, online public access catalogues, and different aspects of the Internet. Papers relevant to INIS are indexed separately.

  13. ARACHNID: A prototype object-oriented database tool for distributed systems

    Science.gov (United States)

    Younger, Herbert; Oreilly, John; Frogner, Bjorn

    1994-01-01

    This paper discusses the results of a Phase 2 SBIR project sponsored by NASA and performed by MIMD Systems, Inc. A major objective of this project was to develop specific concepts for improved performance in accessing large databases. An object-oriented and distributed approach was used for the general design, while a geographical decomposition was used as a specific solution. The resulting software framework is called ARACHNID. The Faint Source Catalog developed by NASA was the initial database testbed. This is a database of many gigabytes, where an order-of-magnitude improvement in query speed is being sought. The database contains faint infrared point sources obtained from telescope measurements of the sky. A geographical decomposition of this database is an attractive approach to dividing it into pieces: each piece can then be searched on an individual processor, with only a weak data linkage between processors required. As a further demonstration of the concepts implemented in ARACHNID, a tourist information system is discussed. This version of ARACHNID is the commercial result of the project. It is a distributed, networked database application where speed, maintenance, and reliability are important considerations. This paper focuses on the design concepts and technologies that form the basis for ARACHNID.
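
    The geographical-decomposition idea can be sketched as a tiling of the sky: sources are bucketed into fixed-size (RA, Dec) tiles so a box query touches only the overlapping tiles rather than the whole catalog. The tile size and coordinates below are illustrative, not ARACHNID's actual partitioning scheme:

```python
from collections import defaultdict

def build_tiles(sources, tile_deg=10.0):
    """Partition (ra, dec) sources into fixed-size sky tiles so a query
    only needs to touch the tiles overlapping its search box."""
    tiles = defaultdict(list)
    for ra, dec in sources:
        tiles[(int(ra // tile_deg), int(dec // tile_deg))].append((ra, dec))
    return tiles

def query_box(tiles, ra_min, ra_max, dec_min, dec_max, tile_deg=10.0):
    """Return sources inside the box, scanning only overlapping tiles."""
    hits = []
    for tx in range(int(ra_min // tile_deg), int(ra_max // tile_deg) + 1):
        for ty in range(int(dec_min // tile_deg), int(dec_max // tile_deg) + 1):
            for ra, dec in tiles.get((tx, ty), []):
                if ra_min <= ra <= ra_max and dec_min <= dec <= dec_max:
                    hits.append((ra, dec))
    return hits

sources = [(12.3, 45.6), (12.9, 44.1), (200.0, -30.0), (13.5, 47.2)]
tiles = build_tiles(sources)
print(query_box(tiles, 12.0, 13.0, 44.0, 46.0))  # [(12.3, 45.6), (12.9, 44.1)]
```

In a distributed setting each tile (or group of tiles) would live on its own processor, so the per-tile scans above run in parallel with only weak linkage between nodes.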

  14. Toxic Substances Control Act test submissions database (TSCATS) - comprehensive update. Data file

    International Nuclear Information System (INIS)

    1993-01-01

    The Toxic Substances Control Act Test Submissions Database (TSCATS) was developed to make unpublished test data available to the public. The test data are submitted to the U.S. Environmental Protection Agency by industry under the Toxic Substances Control Act. 'Test' is broadly defined to include case reports, episodic incidents such as spills, and formal test study presentations. The database allows searching of test submissions according to specific chemical identity or type of study when used with an appropriate search and retrieval software program. Studies are indexed under three broad subject areas: health effects, environmental effects, and environmental fate. Additional controlled-vocabulary terms are assigned which describe the experimental protocol and test observations. Records identify the reference information needed to locate the source document, as well as the submitting organization and the reason for submission of the test data.

  15. DisFace: A Database of Human Facial Disorders

    Directory of Open Access Journals (Sweden)

    Paramjit Kaur

    2017-10-01

    The face is an integral part of the human body by which an individual communicates in society; its importance is highlighted by the fact that a person deprived of a face cannot sustain themselves in the living world. In the past few decades, the human face has gained the attention of several researchers, whether in relation to facial anthropometry, facial disorders, face transplantation, or face reconstruction. Several studies have also shown the correlation between neuropsychiatric disorders and the human face, and how face recognition abilities are correlated with these disorders. Currently, several databases exist which contain facial images of individuals captured from different sources; their advantage is that the images can be used for testing and training purposes. However, to date no database exists which provides not only facial images of individuals, but also the literature concerning the human face, a list of genes controlling the human face, a list of facial disorders, and the various tools that work on facial images. Thus, the current research aims at developing a database of human facial disorders using a bioinformatics approach. The database will contain information about facial diseases, medications, symptoms, findings, etc. The information will be extracted from several other databases, such as OMIM, PubChem, Radiopedia, Medline Plus, and the FDA, and links to them will also be provided. Initially, diseases specific to the human face were obtained from an already-created published corpus of literature using a text mining approach; the Becas tool was used for this task. A dataset will be created and stored in the form of a database containing a cross-referenced index of human facial diseases, medications, symptoms, signs, etc. Thus, a database on the human face with complete existing information about human facial disorders will be developed. The novelty of the

  16. HIPdb: a database of experimentally validated HIV inhibiting peptides.

    Science.gov (United States)

    Qureshi, Abid; Thakur, Nishant; Kumar, Manoj

    2013-01-01

    Besides antiretroviral drugs, peptides have also demonstrated potential to inhibit the Human immunodeficiency virus (HIV). For example, T20 has been discovered to effectively block HIV entry and was approved by the FDA as a novel anti-HIV peptide (AHP). We have collated all experimental information on AHPs on a single platform. HIPdb is a manually curated database of experimentally verified HIV-inhibiting peptides targeting various steps or proteins involved in the life cycle of HIV, e.g. fusion, integration, reverse transcription, etc. This database provides experimental information on 981 peptides. These are of varying length, obtained from natural as well as synthetic sources and tested on different cell lines. Important fields included are peptide sequence, length, source, target, cell line, inhibition/IC50, assay and reference. The database provides user-friendly browse, search, sort and filter options. It also contains useful services like BLAST and 'Map' for alignment with user-provided sequences. In addition, predicted structure and physicochemical properties of the peptides are also included. The HIPdb database is freely available at http://crdd.osdd.net/servers/hipdb. The comprehensive information in this database will be helpful in selecting/designing effective anti-HIV peptides; thus it may prove a useful resource to researchers for peptide-based therapeutics development.

  17. Advanced techniques for high resolution spectroscopic observations of cosmic gamma-ray sources

    International Nuclear Information System (INIS)

    Matteson, J.L.; Pelling, M.R.; Peterson, L.E.

    1985-08-01

    We describe an advanced gamma-ray spectrometer that is currently in development. It will obtain a sensitivity of 10⁻⁴ ph cm⁻² s⁻¹ in a 6 hour balloon observation and uses innovative techniques for background reduction and source imaging.

  18. FCDD: A Database for Fruit Crops Diseases.

    Science.gov (United States)

    Chauhan, Rupal; Jasrai, Yogesh; Pandya, Himanshu; Chaudhari, Suman; Samota, Chand Mal

    2014-01-01

    The Fruit Crops Diseases Database (FCDD) requires a number of biotechnology and bioinformatics tools. The FCDD is a unique bioinformatics resource that compiles detailed information on 162 fruit crop diseases, including disease type, causal organism, images, symptoms, and control. The FCDD contains 171 phytochemicals from 25 fruits, their 2D images and their 20 possible sequences. This information has been manually extracted and manually verified from numerous sources, including other electronic databases, textbooks and scientific journals. FCDD is fully searchable and supports extensive text search. The main focus of the FCDD is on providing possible information on fruit crop diseases, which will help in the discovery of potential drugs from one of the common bioresources: fruits. The database was developed using MySQL. The database interface is developed in PHP, HTML and Java. FCDD is freely available at http://www.fruitcropsdd.com/

  19. CHID: a unique health information and education database.

    OpenAIRE

    Lunin, L F; Stein, R S

    1987-01-01

    The public's growing interest in health information and the health professions' increasing need to locate health education materials can be answered in part by the new Combined Health Information Database (CHID). This unique database focuses on materials and programs in professional and patient education, general health education, and community risk reduction. Accessible through BRS, CHID suggests sources for procuring brochures, pamphlets, articles, and films on community services, programs ...

  20. An Intelligent Assistant for Construction of Terrain Databases

    OpenAIRE

    Rowe, Neil C.; Reed, Chris; Jackson, Leroy; Baer, Wolfgang

    1998-01-01

    1998 Command and Control Research and Technology Symposium, Monterey CA, June 1998, 481-486. We describe TELLUSPLAN, an intelligent assistant for the problem of bargaining between user goals and system resources in the integration of terrain databases from separate source databases. TELLUSPLAN uses nondeterministic methods from artificial intelligence and a detailed cost model to infer the most reasonable compromise with the user's needs. Supported by the Army Artificial Int...

  1. SINBAD: Shielding integral benchmark archive and database

    International Nuclear Information System (INIS)

    Hunter, H.T.; Ingersoll, D.T.; Roussin, R.W.

    1996-01-01

    SINBAD is a new electronic database developed to store a variety of radiation shielding benchmark data so that users can easily retrieve and incorporate the data into their calculations. SINBAD is an excellent data source for users who require the quality assurance necessary in developing cross-section libraries or radiation transport codes. The future needs of the scientific community are best served by the electronic database format of SINBAD and its user-friendly interface, combined with its data accuracy and integrity

  2. The IAEA's Illicit Trafficking Database Programme

    International Nuclear Information System (INIS)

    Anzelon, G.; Hammond, W.; Nicholas, M.

    2001-01-01

    Full text: As part of its overall programme on nuclear material security, the IAEA has since 1995 maintained a database of incidents of trafficking in nuclear materials and other radioactive sources. The Illicit Trafficking Database Programme (ITDP) is intended to assist Member States by alerting them to current incidents, by facilitating exchange of reliable, detailed information about incidents, and by identifying any common threads or trends that might assist States in combating illicit trafficking. The ITDP also seeks to better inform the public by providing basic information to the media concerning illicit trafficking events. Approximately 70 States have joined this programme for collecting and sharing information on trafficking incidents. Reporting States have the opportunity to designate what information may be shared with other States and what may be shared with the public. In cases where the IAEA's first information about a possible incident comes from news media or other open sources rather than from a State notification, the information is first evaluated, and then, if warranted, the relevant State or States are contacted to request confirmation or clarification of an alleged incident. During 2000, as a result of experience gained working with information on illicit nuclear trafficking, the IAEA developed a flexible and comprehensive new database system. The new system has an open architecture that accommodates structured information from States, in-house information, open-source articles, and other information sources, such as pictures, maps and web links. The graphical user interface allows data entry, maintenance, and standard and ad-hoc reporting. The system is also linked to a Web-based query engine, which enables searching of both structured and open-source information. For the period 1 January 1993 through 31 March 2001, the database recorded more than 550 incidents, of which about two-thirds have been confirmed by States. (Some of these

  3. Trading Time with Space - Development of subduction zone parameter database for a maximum magnitude correlation assessment

    Science.gov (United States)

    Schaefer, Andreas; Wenzel, Friedemann

    2017-04-01

    technically trades time with space, considering subduction zones where we have likely not observed the maximum possible event yet. However, by identifying sources of the same class, the not-yet observed temporal behavior can be replaced by spatial similarity among different subduction zones. This database aims to enhance the research and understanding of subduction zones and to quantify their potential in producing mega earthquakes considering potential strong motion impact on nearby cities and their tsunami potential.

  4. ARTI refrigerant database

    Energy Technology Data Exchange (ETDEWEB)

    Calm, J.M. [Calm (James M.), Great Falls, VA (United States)

    1998-08-01

    The Refrigerant Database is an information system on alternative refrigerants, associated lubricants, and their use in air conditioning and refrigeration. It consolidates and facilitates access to property, compatibility, environmental, safety, application and other information. It provides corresponding information on older refrigerants to assist manufacturers and those using alternative refrigerants in making comparisons and determining differences. The underlying purpose is to accelerate the phase-out of chemical compounds of environmental concern. The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air-conditioning and refrigeration equipment. The complete documents are not included, though some may be added at a later date. The database identifies sources of specific information on many refrigerants including propane, ammonia, water, carbon dioxide, propylene, ethers, and others as well as azeotropic and zeotropic blends of these fluids. It addresses lubricants including alkylbenzene, polyalkylene glycol, polyolester, and other synthetics as well as mineral oils. It also references documents addressing compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits. Incomplete citations or abstracts are provided for some documents. They are included to accelerate availability of the information and will be completed or replaced in future updates.

  5. SPITZER OBSERVATIONS OF GX17+2: CONFIRMATION OF A PERIODIC SYNCHROTRON SOURCE

    International Nuclear Information System (INIS)

    Harrison, Thomas E.; McNamara, Bernard J.; Bornak, Jillian; Gelino, Dawn M.; Wachter, Stefanie; Rupen, Michael P.; Gelino, Christopher R.

    2011-01-01

    GX17+2 is a low-mass X-ray binary (LMXB) that is also a member of a small family of LMXBs known as 'Z-sources' that are believed to have persistent X-ray luminosities that are very close to the Eddington limit. GX17+2 is highly variable at both radio and X-ray frequencies, a feature common to Z-sources. What sets GX17+2 apart is its dramatic variability in the near-infrared, where it changes by ΔK ∼ 3 mag. Previous investigations have shown that these brightenings are periodic, recurring every 3.01 days. Given its high extinction (A_V ≥ 9 mag), it has not been possible to ascertain the nature of these events with ground-based observations. We report mid-infrared Spitzer observations of GX17+2 which indicate a synchrotron spectrum for the infrared brightenings. In addition, GX17+2 is highly variable in the mid-infrared during these events. The combination of the large-scale outbursts, the presence of a synchrotron spectrum, and the dramatic variability in the mid-infrared suggest that the infrared brightening events are due to the periodic transit of a synchrotron jet across our line of sight. An analysis of both new and archival infrared observations has led us to revise the period for these events to 3.0367 days. We also present new Rossi X-Ray Timing Explorer (RXTE) data for GX17+2 obtained during two predicted infrared brightening events. Analysis of these new data, and data from the RXTE archive, indicates that there is no correlation between the X-ray behavior of this source and the observed infrared brightenings. We examine various scenarios that might produce periodic jet emission.

  6. Database Description - Yeast Interacting Proteins Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Yeast Interacting Proteins Database: Database Description. Database name: Yeast Interacting Proteins Database. Alternative name: -. DOI: 10.18908/lsdba.nbdc00742-000. Creator: C... (...-ken 277-8561; Tel: +81-4-7136-3989; FAX: +81-4-7136-3979; E-mail: ...). Database classification: ... Species: Saccharomyces cerevisiae (Taxonomy ID: 4932). Database description: information on interactions and related information obtained ... Reference: Proc Natl Acad Sci U S A. 2001 Apr 10;98(8):4569-74. Epub 2001 Mar 13. External Links: original website information.

  7. Healthcare databases in Europe for studying medicine use and safety during pregnancy

    DEFF Research Database (Denmark)

    Charlton, Rachel A; Neville, Amanda J; Jordan, Sue

    2014-01-01

    PURPOSE: The aim of this study was to describe a number of electronic healthcare databases in Europe in terms of the population covered, the source of the data captured and the availability of data on key variables required for evaluating medicine use and medicine safety during pregnancy. METHODS: ... data recorded by primary-care practitioners. All databases captured maternal co-prescribing and a measure of socioeconomic status. CONCLUSION: This study suggests that within Europe, electronic healthcare databases may be valuable sources of data for evaluating medicine use and safety during pregnancy. The suitability of a particular database, however, will depend on the research question, the type of medicine to be evaluated, the prevalence of its use and any adverse outcomes of interest. © 2014 The Authors. Pharmacoepidemiology and Drug Safety published by John Wiley & Sons, Ltd.

  8. High-resolution observations of low-luminosity gigahertz-peaked spectrum and compact steep-spectrum sources

    Science.gov (United States)

    Collier, J. D.; Tingay, S. J.; Callingham, J. R.; Norris, R. P.; Filipović, M. D.; Galvin, T. J.; Huynh, M. T.; Intema, H. T.; Marvil, J.; O'Brien, A. N.; Roper, Q.; Sirothia, S.; Tothill, N. F. H.; Bell, M. E.; For, B.-Q.; Gaensler, B. M.; Hancock, P. J.; Hindson, L.; Hurley-Walker, N.; Johnston-Hollitt, M.; Kapińska, A. D.; Lenc, E.; Morgan, J.; Procopio, P.; Staveley-Smith, L.; Wayth, R. B.; Wu, C.; Zheng, Q.; Heywood, I.; Popping, A.

    2018-06-01

    We present very long baseline interferometry observations of a faint and low-luminosity (L_{1.4 GHz}) gigahertz-peaked spectrum (GPS) and compact steep-spectrum (CSS) sample. We select eight sources from deep radio observations that have radio spectra characteristic of a GPS or CSS source and an angular size of θ ≲ 2 arcsec, and detect six of them with the Australian Long Baseline Array. We determine their linear sizes, and model their radio spectra using synchrotron self-absorption (SSA) and free-free absorption (FFA) models. We derive statistical model ages, based on a fitted scaling relation, and spectral ages, based on the radio spectrum, which are generally consistent with the hypothesis that GPS and CSS sources are young and evolving. We resolve the morphology of one CSS source with a radio luminosity of 10^{25} W Hz^{-1}, and find what appear to be two hotspots spanning 1.7 kpc. We find that our sources follow the turnover-linear size relation, and that both homogeneous SSA and an inhomogeneous FFA model can account for the spectra with observable turnovers. All but one of the FFA models do not require a spectral break to account for the radio spectrum, while all but one of the alternative SSA and power-law models do require a spectral break to account for the radio spectrum. We conclude that our low-luminosity sample is similar to brighter samples in terms of their spectral shape, turnover frequencies, linear sizes, and ages, but cannot test for a difference in morphology.
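The SSA and FFA fits mentioned in the abstract are usually written in closed form. As a generic sketch (parametrizations vary between papers, and these are not necessarily the exact forms fitted by the authors):

```latex
% Homogeneous synchrotron self-absorption (SSA): optically thick index +5/2,
% optically thin index \alpha, turnover governed by the optical depth \tau.
S_\nu = a \left(\frac{\nu}{\nu_0}\right)^{5/2}
        \left[1 - \exp\!\left(-\tau\left(\frac{\nu}{\nu_0}\right)^{\alpha - 5/2}\right)\right]

% Homogeneous external free-free absorption (FFA) of a power-law source,
% with the standard \nu^{-2.1} frequency dependence of the free-free opacity.
S_\nu = a\,\nu^{\alpha}\,\exp\!\left(-\tau\,\nu^{-2.1}\right)
```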

  9. Interactive bibliographical database on color

    Science.gov (United States)

    Caivano, Jose L.

    2002-06-01

    The paper describes the methodology and results of a project under development, aimed at the elaboration of an interactive bibliographical database on color in all fields of application: philosophy, psychology, semiotics, education, anthropology, physical and natural sciences, biology, medicine, technology, industry, architecture and design, arts, linguistics, geography, history. The project is initially based upon an already developed bibliography, published in different journals, updated on various occasions, and now available on the Internet, with more than 2,000 entries. The interactive database will amplify that bibliography, incorporating hyperlinks and contents (indexes, abstracts, keywords, introductions, or eventually the complete document), and devising mechanisms for information retrieval. The sources to be included are: books, doctoral dissertations, multimedia publications, reference works. The main arrangement will be chronological, but the design of the database will allow rearrangements or selections by different fields: subject, Decimal Classification System, author, language, country, publisher, etc. A further project is to develop another database, including color-specialized journals or newsletters, and articles on color published in international journals, arranged in this case by journal name and date of publication, but allowing also rearrangements or selections by author, subject and keywords.

  10. The Chandra Source Catalog: Storage and Interfaces

    Science.gov (United States)

    van Stone, David; Harbo, Peter N.; Tibbetts, Michael S.; Zografou, Panagoula; Evans, Ian N.; Primini, Francis A.; Glotfelty, Kenny J.; Anderson, Craig S.; Bonaventura, Nina R.; Chen, Judy C.; Davis, John E.; Doe, Stephen M.; Evans, Janet D.; Fabbiano, Giuseppina; Galle, Elizabeth C.; Gibbs, Danny G., II; Grier, John D.; Hain, Roger; Hall, Diane M.; He, Xiang Qun (Helen); Houck, John C.; Karovska, Margarita; Kashyap, Vinay L.; Lauer, Jennifer; McCollough, Michael L.; McDowell, Jonathan C.; Miller, Joseph B.; Mitschang, Arik W.; Morgan, Douglas L.; Mossman, Amy E.; Nichols, Joy S.; Nowak, Michael A.; Plummer, David A.; Refsdal, Brian L.; Rots, Arnold H.; Siemiginowska, Aneta L.; Sundheim, Beth A.; Winkelman, Sherry L.

    2009-09-01

    The Chandra Source Catalog (CSC) is part of the Chandra Data Archive (CDA) at the Chandra X-ray Center. The catalog contains source properties and associated data objects such as images, spectra, and lightcurves. The source properties are stored in relational databases and the data objects are stored in files with their metadata stored in databases. The CDA supports different versions of the catalog: multiple fixed release versions and a live database version. There are several interfaces to the catalog: CSCview, a graphical interface for building and submitting queries and for retrieving data objects; a command-line interface for property and source searches using ADQL; and VO-compliant services discoverable through the VO registry. This poster describes the structure of the catalog and provides an overview of the interfaces.
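The ADQL command-line route lends itself to scripted queries. A minimal sketch of building such a query in Python; the table and column names (`master_source`, `ra`, `dec`, `flux_aper_b`) are illustrative assumptions, not the documented CSC schema:

```python
# Build an ADQL query string for sources in a small box around (ra, dec).
# Hypothetical table/column names; real queries would use the CSC schema.
def box_query(ra, dec, half_width_deg, limit=100):
    """Return an ADQL SELECT for a box search centered on (ra, dec) degrees."""
    return (
        "SELECT m.name, m.ra, m.dec, m.flux_aper_b "
        "FROM master_source m "
        "WHERE m.ra BETWEEN {:.4f} AND {:.4f} "
        "AND m.dec BETWEEN {:.4f} AND {:.4f}"
    ).format(ra - half_width_deg, ra + half_width_deg,
             dec - half_width_deg, dec + half_width_deg)

print(box_query(246.8, -24.6, 0.05))
```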

  11. Update History of This Database - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Trypanosomes Database: Update History of This Database. 2014/05/07 The contact information is corrected; the features and manner of utilization of the database are corrected. 2014/02/04 Trypanosomes Database English archive site is opened. 2011/04/04 Trypanosomes Database ( http://www.tanpaku.org/tdb/ ) is opened.

  12. Benzene observations and source apportionment in a region of oil and natural gas development

    Science.gov (United States)

    Halliday, Hannah Selene

    Benzene is a primarily anthropogenic volatile organic compound (VOC) with a small number of well characterized sources. Atmospheric benzene affects human health and welfare, and low level exposure (...). Measurements were made at the Platteville Atmospheric Observatory (PAO) in Colorado to investigate how O&NG development impacts air quality within the Wattenberg Gas Field (WGF) in the Denver-Julesburg Basin. The measurements were carried out in July and August 2014 as part of NASA's DISCOVER-AQ field campaign. The PTR-QMS data were supported by pressurized whole air canister samples and airborne vertical and horizontal surveys of VOCs. Unexpectedly high benzene mixing ratios were observed at PAO at ground level (mean benzene = 0.53 ppbv, maximum benzene = 29.3 ppbv), primarily at night (mean nighttime benzene = 0.73 ppbv). These high benzene levels were associated with southwesterly winds. The airborne measurements indicate that benzene originated from within the WGF, and typical source signatures detected in the canister samples implicate emissions from O&NG activities rather than urban vehicular emissions as the primary benzene source. This conclusion is backed by a regional toluene-to-benzene ratio analysis which associated southerly flow with vehicular emissions from the Denver area. Weak benzene-to-CO correlations confirmed that traffic emissions were not responsible for the observed high benzene levels. Previous measurements at the Boulder Atmospheric Observatory (BAO) and our data obtained at PAO allow us to locate the source of benzene enhancements between the two atmospheric observatories. Fugitive emissions of benzene from O&NG operations in the Platteville area are discussed as the most likely causes of enhanced benzene levels at PAO. A limited information source attribution with the PAO dataset was completed using the EPA's positive matrix factorization (PMF) source receptor model. Six VOCs from the PTR-QMS measurement were used along with CO and NO for a total of eight chemical species. Six sources
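The toluene-to-benzene ratio analysis mentioned in the abstract can be sketched as a simple screen: fresh traffic emissions typically show higher T/B ratios than aged or O&NG-influenced air. The cutoff value below is a placeholder assumption, not a number from the study:

```python
# Toy T/B ratio screen (illustrative only; the 1.0 cutoff is an assumed
# placeholder, not a value taken from the dissertation).
def classify_plume(toluene_ppbv, benzene_ppbv, traffic_ratio_cutoff=1.0):
    """Flag an air sample as traffic-like or O&NG-like from its T/B ratio."""
    if benzene_ppbv <= 0:
        return "no benzene signal"
    ratio = toluene_ppbv / benzene_ppbv
    return "traffic-like" if ratio >= traffic_ratio_cutoff else "O&NG-like"

print(classify_plume(toluene_ppbv=0.3, benzene_ppbv=0.9))  # O&NG-like
print(classify_plume(toluene_ppbv=2.1, benzene_ppbv=0.8))  # traffic-like
```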

  13. Summary of earthquake experience database

    International Nuclear Information System (INIS)

    1999-01-01

    Strong-motion earthquakes frequently occur throughout the Pacific Basin, where power plants or industrial facilities are included in the affected areas. By studying the performance of these earthquake-affected (or database) facilities, a large inventory of various types of equipment installations can be compiled that have experienced substantial seismic motion. The primary purposes of the seismic experience database are summarized as follows: to determine the most common sources of seismic damage, or adverse effects, on equipment installations typical of industrial facilities; to determine the thresholds of seismic motion corresponding to various types of seismic damage; to determine the general performance of equipment during earthquakes, regardless of the levels of seismic motion; to determine minimum standards in equipment construction and installation, based on past experience, to assure the ability to withstand anticipated seismic loads. To summarize, the primary assumption in compiling an experience database is that the actual seismic hazard to industrial installations is best demonstrated by the performance of similar installations in past earthquakes

  14. The STRING database in 2017

    DEFF Research Database (Denmark)

    Szklarczyk, Damian; Morris, John H; Cook, Helen

    2017-01-01

    A system-wide understanding of cellular function requires knowledge of all functional interactions between the expressed proteins. The STRING database aims to collect and integrate this information, by consolidating known and predicted protein-protein association data for a large number of organisms. The associations in STRING include direct (physical) interactions, as well as indirect (functional) interactions, as long as both are specific and biologically meaningful. Apart from collecting and reassessing available experimental data on protein-protein interactions, and importing known pathways and protein complexes from curated databases, interaction predictions are derived from the following sources: (i) systematic co-expression analysis, (ii) detection of shared selective signals across genomes, (iii) automated text-mining of the scientific literature and (iv) computational transfer...

  15. Learning lessons from Natech accidents - the eNATECH accident database

    Science.gov (United States)

    Krausmann, Elisabeth; Girgin, Serkan

    2016-04-01

    When natural hazards impact industrial facilities that house or process hazardous materials, fires, explosions and toxic releases can occur. This type of accident is commonly referred to as a Natech accident. In order to prevent the recurrence of accidents or to better mitigate their consequences, lessons-learned type studies using available accident data are usually carried out. Through post-accident analysis, conclusions can be drawn on the most common damage and failure modes and hazmat release paths, particularly vulnerable storage and process equipment, and the hazardous materials most commonly involved in these types of accidents. These analyses also lend themselves to identifying technical and organisational risk-reduction measures that require improvement or are missing. Industrial accident databases are commonly used for retrieving sets of Natech accident case histories for further analysis. These databases contain accident data from the open literature, government authorities or in-company sources. The quality of reported information is not uniform and exhibits different levels of detail and accuracy. This is due to the difficulty of finding qualified information sources, especially in situations where accident reporting by the industry or by authorities is not compulsory, e.g. when spill quantities are below the reporting threshold. Data collection then has to rely on voluntary record keeping, often by non-experts. The level of detail is particularly non-uniform for Natech accident data depending on whether the consequences of the Natech event were major or minor, and whether comprehensive information was available for reporting. In addition to the reporting bias towards high-consequence events, industrial accident databases frequently lack information on the severity of the triggering natural hazard, as well as on failure modes that led to the hazmat release. This makes it difficult to reconstruct the dynamics of the accident and renders the development of

  16. IMPLEMENTATION OF COLUMN-ORIENTED DATABASE IN POSTGRESQL FOR OPTIMIZATION OF READ-ONLY QUERIES

    OpenAIRE

    Aditi D. Andurkar

    2012-01-01

    The era of column-oriented database systems has truly begun with open source database systems like C-Store, MonetDb, LucidDb and commercial ones like Vertica. Column-oriented database stores data column-by-column which means it stores information of single attribute collectively. The need for Column-oriented database arose from the need of business intelligence for efficient decision making where traditional row-oriented database gives poor performance. PostgreSql is an open so...
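The row-versus-column distinction the abstract describes can be shown in a few lines. This is a toy sketch, not C-Store or MonetDB internals:

```python
# The same three rows stored row-wise and column-wise. A read-only aggregate
# over one attribute only touches that column's list in the columnar layout.
rows = [
    {"id": 1, "region": "EU", "sales": 100},
    {"id": 2, "region": "US", "sales": 250},
    {"id": 3, "region": "EU", "sales": 175},
]

# Column-oriented layout: one contiguous list per attribute.
columns = {key: [r[key] for r in rows] for key in rows[0]}

# SELECT SUM(sales) scans a single column instead of every row.
total_sales = sum(columns["sales"])
print(total_sales)  # 525
```

For analytic, read-mostly queries this is why column stores win: only the queried attribute is read from storage, and a homogeneous list compresses and scans far better than heterogeneous row records.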

  17. Molecule database framework: a framework for creating database applications with chemical structure search capability.

    Science.gov (United States)

    Kiener, Joos

    2013-12-11

    Research in organic chemistry generates samples of novel chemicals together with their properties and other related data. The involved scientists must be able to store this data and search it by chemical structure. There are commercial solutions for common needs like chemical registration systems or electronic lab notebooks. However, for specific requirements of in-house databases and processes no such solutions exist. Another issue is that commercial solutions have the risk of vendor lock-in and may require an expensive license of a proprietary relational database management system. To speed up and simplify the development for applications that require chemical structure search capabilities, I have developed Molecule Database Framework. The framework abstracts the storing and searching of chemical structures into method calls. Therefore software developers do not require extensive knowledge about chemistry and the underlying database cartridge. This decreases application development time. Molecule Database Framework is written in Java and I created it by integrating existing free and open-source tools and frameworks. The core functionality includes:
    • Support for multi-component compounds (mixtures)
    • Import and export of SD-files
    • Optional security (authorization)
    For chemical structure searching, Molecule Database Framework leverages the capabilities of the Bingo Cartridge for PostgreSQL and provides type-safe searching, caching, transactions and optional method level security. Furthermore, the design of entity classes and the reasoning behind it are explained. By means of a simple web application I describe how the framework could be used. I then benchmarked this example application to create some basic performance expectations for chemical structure searches and import and export of SD-files. By using a simple web application it was shown that Molecule Database Framework
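The idea of abstracting structure storage and search into method calls can be sketched as follows. This toy registry (exact-match lookup on an assumed canonical-SMILES key) stands in for the real Bingo-backed substructure search, which the framework delegates to the database cartridge:

```python
# Toy sketch only: real structure search needs a chemistry toolkit to
# canonicalize and match structures; here the caller is assumed to supply
# already-canonical SMILES strings (an assumption for illustration).
class MoleculeRegistry:
    def __init__(self):
        self._by_structure = {}

    def register(self, canonical_smiles, properties):
        """Store a sample's property record under its structure key."""
        self._by_structure.setdefault(canonical_smiles, []).append(properties)

    def find_exact(self, canonical_smiles):
        """Return all property records registered for this exact structure."""
        return self._by_structure.get(canonical_smiles, [])

registry = MoleculeRegistry()
registry.register("CCO", {"name": "ethanol", "mp_celsius": -114})
registry.register("c1ccccc1", {"name": "benzene", "mp_celsius": 5})

print(registry.find_exact("CCO")[0]["name"])  # ethanol
```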

  18. Russian Virtual Observatory: Data Sources

    Directory of Open Access Journals (Sweden)

    Malkov O.

    2016-03-01

    Full Text Available The purpose of this review is to analyze main directions of creation and functioning of major data sources developed by Russian astronomers or with their participation and to compare them with the worldwide trends in these fields. We discuss astronomical space missions of the past, present, and future (Astron, INTEGRAL, WSO-UV, Spectrum Roentgen Gamma, Lyra-B), high-quality photometric atlases and catalogues, and spectroscopic data sources, primarily VALD and the global VAMDC framework for the maintenance and distribution of atomic and molecular data. We describe collection, analysis, and dissemination of astronomical data on minor bodies of the Solar System and on variable stars. Also described is the project joining data for all observational types of binary and multiple stars, the Binary star DataBase (BDB).

  19. TrSDB: a proteome database of transcription factors

    Science.gov (United States)

    Hermoso, Antoni; Aguilar, Daniel; Aviles, Francesc X.; Querol, Enrique

    2004-01-01

    TrSDB—TranScout Database—(http://ibb.uab.es/trsdb) is a proteome database of eukaryotic transcription factors based upon predicted motifs by TranScout and data sources such as InterPro and Gene Ontology Annotation. Nine eukaryotic proteomes are included in the current version. Extensive and diverse information for each database entry, different analyses considering TranScout classification and similarity relationships are offered for research on transcription factors or gene expression. PMID:14681387

  20. High Energy Nuclear Database: A Testbed for Nuclear Data Information Technology

    International Nuclear Information System (INIS)

    Brown, D A; Vogt, R; Beck, B; Pruet, J

    2007-01-01

    We describe the development of an on-line high-energy heavy-ion experimental database. When completed, the database will be searchable and cross-indexed with relevant publications, including published detector descriptions. While this effort is relatively new, it will eventually contain all published data from older heavy-ion programs as well as published data from current and future facilities. These data include all measured observables in proton-proton, proton-nucleus and nucleus-nucleus collisions. Once in general use, this database will have tremendous scientific payoff as it makes systematic studies easier and allows simpler benchmarking of theoretical models for a broad range of experiments. Furthermore, there is a growing need for compilations of high-energy nuclear data for applications including stockpile stewardship, technology development for inertial confinement fusion, target and source development for upcoming facilities such as the International Linear Collider, and homeland security. This database is part of a larger proposal that includes the production of periodic data evaluations and topical reviews. These reviews would provide an alternative and impartial mechanism to resolve discrepancies between published data from rival experiments and between theory and experiment. Since this database will be a community resource, it requires the high-energy nuclear physics community's financial and manpower support. This project serves as a testbed for the further development of an object-oriented nuclear data format and database system. By using "off-the-shelf" software tools and techniques, the system is simple, robust, and extensible. Eventually we envision a "Grand Unified Nuclear Format" encapsulating data types used in the ENSDF, ENDF/B, EXFOR, NSR and other formats, including processed data formats

  1. Database usage and performance for the Fermilab Run II experiments

    International Nuclear Information System (INIS)

    Bonham, D.; Box, D.; Gallas, E.; Guo, Y.; Jetton, R.; Kovich, S.; Kowalkowski, J.; Kumar, A.; Litvintsev, D.; Lueking, L.; Stanfield, N.; Trumbo, J.; Vittone-Wiersma, M.; White, S.P.; Wicklund, E.; Yasuda, T.; Maksimovic, P.

    2004-01-01

    The Run II experiments at Fermilab, CDF and D0, have extensive database needs covering many areas of their online and offline operations. Delivering data to users and processing farms worldwide has represented major challenges to both experiments. The range of applications employing databases includes calibration (conditions), trigger information, run configuration, run quality, luminosity, data management, and others. Oracle is the primary database product being used for these applications at Fermilab and some of its advanced features have been employed, such as table partitioning and replication. There is also experience with open source database products such as MySQL for secondary databases used, for example, in monitoring. Tools employed for monitoring the operation and diagnosing problems are also described.
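Calibration ("conditions") data of the kind listed above is typically keyed by an interval of validity: given a run number, the database returns the calibration valid for that run. A minimal sketch of such a lookup, with an assumed run-number scheme rather than the actual CDF/D0 schema:

```python
import bisect

# Interval-of-validity lookup (illustrative sketch, not experiment code):
# each payload is valid from its start run until the next stored start run.
class ConditionsDB:
    def __init__(self):
        self._starts = []    # sorted interval start runs
        self._payloads = []  # calibration payload valid from that run onward

    def store(self, first_valid_run, payload):
        i = bisect.bisect_left(self._starts, first_valid_run)
        self._starts.insert(i, first_valid_run)
        self._payloads.insert(i, payload)

    def fetch(self, run):
        # Rightmost interval whose start run is <= the requested run.
        i = bisect.bisect_right(self._starts, run) - 1
        if i < 0:
            raise KeyError("no calibration valid for run %d" % run)
        return self._payloads[i]

db = ConditionsDB()
db.store(1000, {"pedestal": 3.1})
db.store(2000, {"pedestal": 3.4})
print(db.fetch(1500)["pedestal"])  # 3.1
```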

  2. Deep Galex Observations of the Coma Cluster: Source Catalog and Galaxy Counts

    Science.gov (United States)

    Hammer, D.; Hornschemeier, A. E.; Mobasher, B.; Miller, N.; Smith, R.; Arnouts, S.; Milliard, B.; Jenkins, L.

    2010-01-01

    We present a source catalog from deep 26 ks GALEX observations of the Coma cluster in the far-UV (FUV; 1530 Angstroms) and near-UV (NUV; 2310 Angstroms) wavebands. The observed field is centered 0.9 deg. (1.6 Mpc) south-west of the Coma core, and has full optical photometric coverage by SDSS and spectroscopic coverage to r ∼ 21. The catalog consists of 9700 galaxies with GALEX and SDSS photometry, including 242 spectroscopically-confirmed Coma member galaxies that range from giant spirals and elliptical galaxies to dwarf irregular and early-type galaxies. The full multi-wavelength catalog (cluster plus background galaxies) is 80% complete to NUV=23 and FUV=23.5, and has a limiting depth at NUV=24.5 and FUV=25.0, which corresponds to a star formation rate of 10^-3 solar masses yr^-1 at the distance of Coma. The GALEX images presented here are very deep and include detections of many resolved cluster members superposed on a dense field of unresolved background galaxies. This required a two-fold approach to generating a source catalog: we used a Bayesian deblending algorithm to measure faint and compact sources (using SDSS coordinates as a position prior), and used the GALEX pipeline catalog for bright and/or extended objects. We performed simulations to assess the importance of systematic effects (e.g. object blends, source confusion, Eddington bias) that influence source detection and photometry when using both methods. The Bayesian deblending method roughly doubles the number of source detections and provides reliable photometry to a few magnitudes deeper than the GALEX pipeline catalog. This method is also free from source confusion over the UV magnitude range studied here; conversely, we estimate that the GALEX pipeline catalogs are confusion limited at NUV approximately 23 and FUV approximately 24. We have measured the total UV galaxy counts using our catalog and report a 50% excess of counts across FUV=22-23.5 and NUV=21.5-23 relative to previous GALEX

  3. Footprint Database and web services for the Herschel space observatory

    Science.gov (United States)

    Verebélyi, Erika; Dobos, László; Kiss, Csaba

    2015-08-01

    Using all telemetry and observational meta-data, we created a searchable database of Herschel observation footprints. Data from the Herschel space observatory is freely available for everyone but no uniformly processed catalog of all observations has been published yet. As a first step, we unified the data model for all three Herschel instruments in all observation modes and compiled a database of sky coverage information. As opposed to methods using a pixellation of the sphere, in our database, sky coverage is stored in exact geometric form allowing for precise area calculations. Indexing of the footprints allows for very fast search among observations based on pointing, time, sky coverage overlap and meta-data. This enables us, for example, to find moving objects easily in Herschel fields. The database is accessible via a web site and also as a set of REST web service functions which makes it usable from program clients like Python or IDL scripts. Data is available in various formats including Virtual Observatory standards.
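Storing footprints in exact geometric form means coverage queries reduce to polygon tests rather than pixelated sky-map lookups. A simplified flat-coordinate sketch (the real database works on the celestial sphere and adds indexing on top):

```python
# Ray-casting point-in-polygon test on flat 2-D coordinates; a stored
# footprint is a list of (x, y) vertices. Illustrative only: spherical
# geometry and footprint indexing are omitted.
def point_in_polygon(x, y, polygon):
    """Return True if (x, y) lies inside the polygon (ray-casting test)."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does the edge straddle the horizontal ray through y?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

footprint = [(10.0, 20.0), (10.5, 20.0), (10.5, 20.4), (10.0, 20.4)]
print(point_in_polygon(10.2, 20.1, footprint))  # True
print(point_in_polygon(11.0, 20.1, footprint))  # False
```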

  4. Integr8: enhanced inter-operability of European molecular biology databases.

    Science.gov (United States)

    Kersey, P J; Morris, L; Hermjakob, H; Apweiler, R

    2003-01-01

    The increasing production of molecular biology data in the post-genomic era, and the proliferation of databases that store it, require the development of an integrative layer in database services to facilitate the synthesis of related information. The solution of this problem is made more difficult by the absence of universal identifiers for biological entities, and the breadth and variety of available data. Integr8 was modelled using UML (Unified Modelling Language). Integr8 is being implemented as an n-tier system using a modern object-oriented programming language (Java). An object-relational mapping tool, OJB, is being used to specify the interface between the upper layers and an underlying relational database. The European Bioinformatics Institute is launching the Integr8 project. Integr8 will be an automatically populated database in which we will maintain stable identifiers for biological entities, describe their relationships with each other (in accordance with the central dogma of biology), and store equivalences between identified entities in the source databases. Only core data will be stored in Integr8, with web links to the source databases providing further information. Integr8 will provide the integrative layer of the next generation of bioinformatics services from the EBI. Web-based interfaces will be developed to offer gene-centric views of the integrated data, presenting (where known) the links between genome, proteome and phenotype.

  5. Assessment of COPD-related outcomes via a national electronic medical record database.

    Science.gov (United States)

    Asche, Carl; Said, Quayyim; Joish, Vijay; Hall, Charles Oaxaca; Brixner, Diana

    2008-01-01

    The technology and sophistication of healthcare utilization databases have expanded over the last decade to include results of lab tests, vital signs, and other clinical information. This review provides an assessment of the methodological and analytical challenges of conducting chronic obstructive pulmonary disease (COPD) outcomes research in a national electronic medical records (EMR) dataset and its potential application towards the assessment of national health policy issues, as well as a description of the challenges or limitations. An EMR database and its application to measuring outcomes for COPD are described. The ability to measure adherence to the COPD evidence-based practice guidelines, generated by the NIH and HEDIS quality indicators, in this database was examined. Case studies, before and after their publication, were used to assess the adherence to guidelines and gauge the conformity to quality indicators. EMR was the only source of information for pulmonary function tests, but low frequency in ordering by primary care was an issue. The EMR data can be used to explore impact of variation in healthcare provision on clinical outcomes. The EMR database permits access to specific lab data and biometric information. The richness and depth of information on "real world" use of health services for large population-based analytical studies at relatively low cost render such databases an attractive resource for outcomes research. Various sources of information exist to perform outcomes research. It is important to understand the desired endpoints of such research and choose the appropriate database source.

  6. Data integration for plant genomics--exemplars from the integration of Arabidopsis thaliana databases.

    Science.gov (United States)

    Lysenko, Artem; Lysenko, Atem; Hindle, Matthew Morritt; Taubert, Jan; Saqi, Mansoor; Rawlings, Christopher John

    2009-11-01

    The development of a systems based approach to problems in plant sciences requires integration of existing information resources. However, the available information is currently often incomplete and dispersed across many sources and the syntactic and semantic heterogeneity of the data is a challenge for integration. In this article, we discuss strategies for data integration and we use a graph based integration method (Ondex) to illustrate some of these challenges with reference to two example problems concerning integration of (i) metabolic pathway and (ii) protein interaction data for Arabidopsis thaliana. We quantify the degree of overlap for three commonly used pathway and protein interaction information sources. For pathways, we find that the AraCyc database contains the widest coverage of enzyme reactions and for protein interactions we find that the IntAct database provides the largest unique contribution to the integrated dataset. For both examples, however, we observe a relatively small amount of data common to all three sources. Analysis and visual exploration of the integrated networks was used to identify a number of practical issues relating to the interpretation of these datasets. We demonstrate the utility of these approaches to the analysis of groups of coexpressed genes from an individual microarray experiment, in the context of pathway information and for the combination of coexpression data with an integrated protein interaction network.

  7. Two-Component Structure of the Radio Source 0014+813 from VLBI Observations within the CONT14 Program

    Science.gov (United States)

    Titov, O. A.; Lopez, Yu. R.

    2018-03-01

    We consider a method of reconstructing the structure delay of extended radio sources without constructing their radio images. The residuals derived after the adjustment of geodetic VLBI observations are used for this purpose. We show that the simplest model of a radio source consisting of two point components can be represented by four parameters (the angular separation of the components, the mutual orientation relative to the poleward direction, the flux-density ratio, and the spectral index difference) that are determined for each baseline of a multi-baseline VLBI network. The efficiency of this approach is demonstrated by estimating the coordinates of the radio source 0014+813 observed during the two-week CONT14 program organized by the International VLBI Service (IVS) in May 2014. Large systematic deviations have been detected in the residuals of the observations for the radio source 0014+813. The averaged characteristics of the radio structure of 0014+813 at a frequency of 8.4 GHz can be calculated from these deviations. Our modeling using four parameters has confirmed that the source consists of two components at an angular separation of 0.5 mas in the north-south direction. Using the structure delay when adjusting the CONT14 observations leads to a correction of the average declination estimate for the radio source 0014+813 by 0.070 mas.
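The two-point-component model above has a standard interferometric form. As a hedged sketch (the textbook two-component visibility with flux-density ratio K and separation vector s, which may differ in detail from the authors' exact parametrization):

```latex
% Visibility of two point components on baseline vector B at wavelength lambda:
%   V(B) = S_1 + S_2 e^{-i x}, with x = (2\pi/\lambda)\,\mathbf{B}\cdot\mathbf{s}
% and flux-density ratio K = S_2/S_1. The structure phase and the
% corresponding structure delay are then
\begin{align}
  \phi_{\mathrm{str}} &= \arctan\!\left(\frac{-K\sin x}{1 + K\cos x}\right),
  & \tau_{\mathrm{str}} &= \frac{\partial \phi_{\mathrm{str}}}{\partial \omega}.
\end{align}
% The spectral-index difference enters through the frequency dependence of K;
% the mutual orientation of the pair fixes the projection of s on each baseline.
```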

  8. Filling Terrorism Gaps: VEOs, Evaluating Databases, and Applying Risk Terrain Modeling to Terrorism

    Energy Technology Data Exchange (ETDEWEB)

    Hagan, Ross F. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2016-08-29

    This paper aims to address three issues: the lack of literature differentiating terrorism and violent extremist organizations (VEOs), terrorism incident databases, and the applicability of Risk Terrain Modeling (RTM) to terrorism. Current open source literature and publicly available government sources do not differentiate between terrorism and VEOs; furthermore, they fail to define them. Addressing the lack of a comprehensive comparison of existing terrorism data sources, a matrix comparing a dozen terrorism databases is constructed, providing insight toward the array of data available. RTM, a method for spatial risk analysis at a micro level, has some applicability to terrorism research, particularly for studies looking at risk indicators of terrorism. Leveraging attack data from multiple databases, combined with RTM, offers one avenue for closing existing research gaps in terrorism literature.

  9. Quasars Probing Quasars. X. The Quasar Pair Spectral Database

    Science.gov (United States)

    Findlay, Joseph R.; Prochaska, J. Xavier; Hennawi, Joseph F.; Fumagalli, Michele; Myers, Adam D.; Bartle, Stephanie; Chehade, Ben; DiPompeo, Michael A.; Shanks, Tom; Lau, Marie Wingyee; Rubin, Kate H. R.

    2018-06-01

    The rare close projection of two quasars on the sky provides the opportunity to study the host galaxy environment of a foreground quasar in absorption against the continuum emission of a background quasar. For over a decade the “Quasars probing quasars” series has utilized this technique to further the understanding of galaxy formation and evolution in the presence of a quasar at z > 2, resolving scales as small as a galactic disk and from bound gas in the circumgalactic medium to the diffuse environs of intergalactic space. Presented here is the public release of the quasar pair spectral database utilized in these studies. In addition to projected pairs at z > 2, the database also includes quasar pair members at z useful for small-scale clustering studies. In total, the database catalogs 5627 distinct objects, with 4083 lying within 5′ of at least one other source. A spectral library contains 3582 optical and near-infrared spectra for 3028 of the cataloged sources. As well as reporting on 54 newly discovered quasar pairs, we outline the key contributions made by this series over the last 10 years, summarize the imaging and spectroscopic data used for target selection, discuss the target selection methodologies, describe the database content, and explore some avenues for future work. Full documentation for the spectral database, including download instructions, is supplied at http://specdb.readthedocs.io/en/latest/.

  10. Lessons Learned from OMI Observations of Point Source SO2 Pollution

    Science.gov (United States)

    Krotkov, N.; Fioletov, V.; McLinden, Chris

    2011-01-01

    The Ozone Monitoring Instrument (OMI) on NASA's Aura satellite makes global daily measurements of the total column of sulfur dioxide (SO2), a short-lived trace gas produced by fossil fuel combustion, smelting, and volcanoes. Although anthropogenic SO2 signals may not be detectable in a single OMI pixel, it is possible to see the source and determine its exact location by averaging a large number of individual measurements. We describe new techniques for spatial and temporal averaging that have been applied to the OMI SO2 data to determine the spatial distributions or "fingerprints" of SO2 burdens from the top 100 pollution sources in North America. The technique requires averaging several years of OMI daily measurements to observe SO2 pollution from typical anthropogenic sources. We found that the largest point sources of SO2 in the U.S. produce elevated SO2 values over a relatively small area, within a 20-30 km radius. Therefore, one needs higher spatial resolution than OMI's to monitor typical SO2 sources. The TROPOMI instrument on the ESA Sentinel-5 Precursor mission will have improved ground resolution (approximately 7 km at nadir), but is limited to one measurement per day. A pointable geostationary UVB spectrometer with variable spatial resolution and flexible sampling frequency could potentially achieve the goal of daily monitoring of SO2 point sources and resolve downwind plumes. This concept of taking measurements at high frequency to enhance weak signals needs to be demonstrated with a GEOCAPE precursor mission before 2020, which will help formulate GEOCAPE measurement requirements.
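The averaging idea above can be illustrated with synthetic numbers: a point-source signal much weaker than single-retrieval noise emerges once many daily measurements are averaged, since the noise shrinks roughly as 1/sqrt(N). All values below are invented for the sketch and are not OMI data:

```python
import random

# Synthetic illustration: averaging N noisy daily retrievals suppresses
# random noise by ~1/sqrt(N), letting a weak SO2 enhancement emerge.
random.seed(42)

TRUE_SIGNAL = 0.3   # weak SO2 column enhancement (arbitrary units, invented)
NOISE_SIGMA = 1.5   # single-retrieval noise, much larger than the signal

def daily_retrieval():
    """One simulated noisy daily measurement over the source."""
    return TRUE_SIGNAL + random.gauss(0.0, NOISE_SIGMA)

one_day = daily_retrieval()
multi_year = sum(daily_retrieval() for _ in range(2000)) / 2000

# A single day's error is of order NOISE_SIGMA; the multi-year average's
# error is of order NOISE_SIGMA / sqrt(2000), i.e. ~45x smaller.
print(abs(one_day - TRUE_SIGNAL))
print(abs(multi_year - TRUE_SIGNAL))
```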

  11. Bisphosphonate adverse effects, lessons from large databases

    DEFF Research Database (Denmark)

    Abrahamsen, Bo

    2010-01-01

    To review the latest findings on bisphosphonate safety from health databases, in particular sources that can provide incidence rates for stress fractures, osteonecrosis of the jaw (ONJ), atrial fibrillation and gastrointestinal lesions including esophageal cancer. The main focus is on bisphosphon...

  12. DEEP GALEX OBSERVATIONS OF THE COMA CLUSTER: SOURCE CATALOG AND GALAXY COUNTS

    International Nuclear Information System (INIS)

    Hammer, D.; Hornschemeier, A. E.; Miller, N.; Jenkins, L.; Mobasher, B.; Smith, R.; Arnouts, S.; Milliard, B.

    2010-01-01

    We present a source catalog from a deep 26 ks Galaxy Evolution Explorer (GALEX) observation of the Coma cluster in the far-UV (FUV; 1530 A) and near-UV (NUV; 2310 A) wavebands. The observed field is centered ∼0.9° (1.6 Mpc) southwest of the Coma core in a well-studied region of the cluster known as 'Coma-3'. The entire field is located within the apparent virial radius of the Coma cluster, and has optical photometric coverage with Sloan Digital Sky Survey (SDSS) and deep spectroscopic coverage to r ∼ 21. We detect GALEX sources to NUV = 24.5 and FUV = 25.0, which corresponds to a star formation rate of ∼10^-3 M_sun yr^-1 for galaxies at the distance of Coma. We have assembled a catalog of 9700 galaxies with GALEX and SDSS photometry, including 242 spectroscopically confirmed Coma member galaxies that span a large range of galaxy types from giant spirals and elliptical galaxies to dwarf irregular and early-type galaxies. The full multi-wavelength catalog (cluster plus background galaxies) is ∼80% complete to NUV = 23 and FUV = 23.5. The GALEX images presented here are very deep and include detections of many resolved cluster members superposed on a dense field of unresolved background galaxies. This required a two-fold approach to generating a source catalog: we used a Bayesian deblending algorithm to measure faint and compact sources (using SDSS coordinates as a position prior), and used the GALEX pipeline catalog for bright and/or extended objects. We performed simulations to assess the importance of systematic effects (e.g., object blends, source confusion, Eddington Bias) that influence the source detection and photometry when using both methods. The Bayesian deblending method roughly doubles the number of source detections and provides reliable photometry to a few magnitudes deeper than the GALEX pipeline catalog. This method is free from source confusion over the UV magnitude range studied here; we estimate that the GALEX pipeline catalogs are

  13. Reexamining Operating System Support for Database Management

    OpenAIRE

    Vasil, Tim

    2003-01-01

    In 1981, Michael Stonebraker [21] observed that database management systems written for commodity operating systems could not effectively take advantage of key operating system services, such as buffer pool management and process scheduling, due to expensive overhead and lack of customizability. The “not quite right” fit between these kernel services and the demands of database systems forced database designers to work around such limitations or re-implement some kernel functionality in user ...

  14. Modeled and observed ozone sensitivity to mobile-source emissions in Mexico City

    Directory of Open Access Journals (Sweden)

    M. Zavala

    2009-01-01

    The emission characteristics of mobile sources in the Mexico City Metropolitan Area (MCMA) have changed significantly over the past few decades in response to emission control policies, advancements in vehicle technologies and improvements in fuel quality, among others. Along with these changes, concurrent non-linear changes in photochemical levels and criteria pollutants have been observed, providing a unique opportunity to understand the effects of perturbations of mobile emission levels on the photochemistry in the region using observational and modeling approaches. The observed historical trends of ozone (O3), carbon monoxide (CO) and nitrogen oxides (NOx) suggest that ozone production in the MCMA has changed from a low to a high VOC-sensitive regime over a period of 20 years. Comparison of the historical emission trends of CO, NOx and hydrocarbons derived from mobile-source emission studies in the MCMA from 1991 to 2006 with the trends of the concentrations of CO, NOx, and the CO/NOx ratio during peak traffic hours also indicates that fuel-based fleet average emission factors have significantly decreased for CO and VOCs during this period, whereas NOx emission factors do not show any strong trend, effectively reducing the ambient VOC/NOx ratio.

    This study presents the results of model analyses on the sensitivity of the observed ozone levels to the estimated historical changes in its precursors. The model sensitivity analyses used a well-validated base case simulation of a high pollution episode in the MCMA with the mathematical Decoupled Direct Method (DDM) and the standard Brute Force Method (BFM) in the 3-D CAMx chemical transport model. The model reproduces adequately the observed historical trends and current photochemical levels. Comparison of the BFM and the DDM sensitivity techniques indicates that the model yields ozone values that increase linearly with
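The Brute Force Method mentioned above estimates sensitivities by re-running the model with perturbed emissions and differencing the outputs. A minimal sketch, using an invented algebraic stand-in for the chemical transport model (the study itself uses the 3-D CAMx model, with DDM as the alternative technique):

```python
# Toy illustration of the Brute Force Method (BFM): perturb one emission
# input and difference the model output. The "model" below is a made-up
# nonlinear function, not CAMx; only the finite-difference idea is real.

def ozone_model(voc, nox):
    """Hypothetical nonlinear ozone response to VOC and NOx emissions."""
    return 120.0 * voc * nox / (voc + 8.0 * nox)

def bfm_sensitivity(model, voc, nox, dnox=1e-6):
    """Central finite difference of O3 with respect to NOx emissions."""
    return (model(voc, nox + dnox) - model(voc, nox - dnox)) / (2 * dnox)

base = ozone_model(voc=40.0, nox=10.0)
sens = bfm_sensitivity(ozone_model, 40.0, 10.0)
print(round(base, 2))   # 400.0
print(round(sens, 3))   # 13.333  (matches the analytic 120*v^2/(v+8n)^2)
```

DDM, by contrast, propagates the derivatives through the model equations directly, avoiding the repeated full model runs that BFM requires.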

  15. Update History of This Database - Arabidopsis Phenome Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Update history of this database: 2017/02/27 the Arabidopsis Phenome Database English archive site is opened. The Arabidopsis Phenome Database (http://jphenome.info/?page_id=95) is opened.

  16. Variability Properties of Four Million Sources in the TESS Input Catalog Observed with the Kilodegree Extremely Little Telescope Survey

    Science.gov (United States)

    Oelkers, Ryan J.; Rodriguez, Joseph E.; Stassun, Keivan G.; Pepper, Joshua; Somers, Garrett; Kafka, Stella; Stevens, Daniel J.; Beatty, Thomas G.; Siverd, Robert J.; Lund, Michael B.; Kuhn, Rudolf B.; James, David; Gaudi, B. Scott

    2018-01-01

    The Kilodegree Extremely Little Telescope (KELT) has been surveying more than 70% of the celestial sphere for nearly a decade. While the primary science goal of the survey is the discovery of transiting, large-radii planets around bright host stars, the survey has collected more than 10^6 images, with a typical cadence of 10–30 minutes, for more than four million sources with apparent visual magnitudes in the approximate range 7TESS Input catalog and the AAVSO Variable Star Index to precipitate the follow-up and classification of each source. The catalog is maintained as a living database on the Filtergraph visualization portal at the URL https://filtergraph.com/kelt_vars.

  17. PathwayAccess: CellDesigner plugins for pathway databases.

    Science.gov (United States)

    Van Hemert, John L; Dickerson, Julie A

    2010-09-15

    CellDesigner provides a user-friendly interface for graphical biochemical pathway description. Many pathway databases are not directly exportable to CellDesigner models. PathwayAccess is an extensible suite of CellDesigner plugins, which connect CellDesigner directly to pathway databases using respective Java application programming interfaces. The process is streamlined for creating new PathwayAccess plugins for specific pathway databases. Three PathwayAccess plugins, MetNetAccess, BioCycAccess and ReactomeAccess, directly connect CellDesigner to the pathway databases MetNetDB, BioCyc and Reactome. PathwayAccess plugins enable CellDesigner users to expose pathway data to analytical CellDesigner functions, curate their pathway databases and visually integrate pathway data from different databases using standard Systems Biology Markup Language and Systems Biology Graphical Notation. Implemented in Java, PathwayAccess plugins run with CellDesigner version 4.0.1 and were tested on Ubuntu Linux, Windows XP and 7, and MacOSX. Source code, binaries, documentation and video walkthroughs are freely available at http://vrac.iastate.edu/~jlv.

  18. Database on veterinary clinical research in homeopathy.

    Science.gov (United States)

    Clausen, Jürgen; Albrecht, Henning

    2010-07-01

    The aim of the present report is to provide an overview of the first database on clinical research in veterinary homeopathy. Detailed searches were performed in the database 'Veterinary Clinical Research-Database in Homeopathy' (http://www.carstens-stiftung.de/clinresvet/index.php). The database contains about 200 entries of randomised clinical trials, non-randomised clinical trials, observational studies, drug provings, case reports and case series. Twenty-two clinical fields are covered and eight different groups of species are included. The database is free of charge and open to all interested veterinarians and researchers. The database enables researchers and veterinarians, sceptics and supporters alike, to get a quick overview of the status of veterinary clinical research in homeopathy, and facilitates the preparation of systematic reviews and may stimulate replications or even new studies. 2010 Elsevier Ltd. All rights reserved.

  19. Observations of variable and transient X-ray sources with the Ariel V Sky Survey Experiment

    International Nuclear Information System (INIS)

    Pounds, K.A.; Cooke, B.A.; Ricketts, M.J.; Turner, M.J.; Peacock, A.; Eadie, G.

    1976-01-01

    Results obtained during the first six months in orbit of Ariel V with the Leicester Sky Survey are reviewed. Among 80 sources found by a scan of the Milky Way, 16 are new, and 11 UHURU sources in the scanned region are not detected. Some of these sources may be transient. The light curve of Cen X-3 in a binary cycle shows a dip between phase 0.5 and 0.75, and a secondary maximum at the centre of the dip. The dip and the maximum get progressively weaker in the succeeding cycles. These features are interpreted in terms of the stellar wind accretion model. Cyg X-1 observation for 14 days gives a broad minimum around superior conjunction. Four bright transient sources of nova-like light curves have been observed. The light curves and the spectra are given for TrA X-1 (A1524-62) and Tau X-T (A0535+26). (Auth.)

  20. Observations of VHE γ-Ray Sources with the MAGIC Telescope

    Science.gov (United States)

    Bartko, H.

    2008-10-01

    The MAGIC telescope with its 17m diameter mirror is today the largest operating single-dish Imaging Air Cherenkov Telescope (IACT). It is located on the Canary Island La Palma, at an altitude of 2200m above sea level, as part of the Roque de los Muchachos European Northern Observatory. The MAGIC telescope detects celestial very high energy γ-radiation in the energy band between about 50 GeV and 10 TeV. Since Autumn of 2004 MAGIC has been taking data routinely, observing various objects like supernova remnants (SNRs), γ-ray binaries, Pulsars, Active Galactic Nuclei (AGN) and Gamma-ray Bursts (GRB). We briefly describe the observational strategy, the procedure implemented for the data analysis, and discuss the results for individual sources. An outlook to the construction of the second MAGIC telescope is given.

  1. Construction of Database for Pulsating Variable Stars

    Science.gov (United States)

    Chen, B. Q.; Yang, M.; Jiang, B. W.

    2011-07-01

    A database for the pulsating variable stars is constructed for Chinese astronomers to study the variable stars conveniently. The database includes about 230000 variable stars in the Galactic bulge, LMC and SMC observed by the MACHO (MAssive Compact Halo Objects) and OGLE (Optical Gravitational Lensing Experiment) projects at present. The software used for the construction is LAMP, i.e., Linux+Apache+MySQL+PHP. A web page is provided to search the photometric data and the light curve in the database through the right ascension and declination of the object. More data will be incorporated into the database.
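The search-by-coordinates page described above reduces to a simple range query over right ascension and declination. A hedged sketch using SQLite and an invented table layout (the actual site runs MySQL under LAMP, and its schema is not published in the abstract):

```python
import sqlite3

# Hypothetical schema and coordinates, for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE variable_star (
    id INTEGER PRIMARY KEY, ra REAL, dec REAL, survey TEXT)""")
conn.executemany(
    "INSERT INTO variable_star (ra, dec, survey) VALUES (?, ?, ?)",
    [(80.89, -69.76, "MACHO"),   # invented LMC-like position
     (13.19, -72.83, "OGLE"),    # invented SMC-like position
     (270.90, -30.00, "OGLE")])  # invented bulge-like position

def box_search(ra0, dec0, radius_deg):
    """Return stars inside a simple RA/Dec box around (ra0, dec0)."""
    cur = conn.execute(
        "SELECT id, survey FROM variable_star "
        "WHERE ra BETWEEN ? AND ? AND dec BETWEEN ? AND ?",
        (ra0 - radius_deg, ra0 + radius_deg,
         dec0 - radius_deg, dec0 + radius_deg))
    return cur.fetchall()

print(box_search(80.9, -69.8, 0.5))  # [(1, 'MACHO')]
```

A production service would refine the box into a true cone search (accounting for the cos(dec) stretch in RA) and index the coordinate columns.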

  2. MonetDB: Two Decades of Research in Column-oriented Database Architectures

    NARCIS (Netherlands)

    S. Idreos (Stratos); F.E. Groffen (Fabian); N.J. Nes (Niels); S. Manegold (Stefan); K.S. Mullender (Sjoerd); M.L. Kersten (Martin)

    2012-01-01

    textabstractMonetDB is a state-of-the-art open-source column-store database management system targeting applications in need for analytics over large collections of data. MonetDB is actively used nowadays in health care, in telecommunications as well as in scientific databases and in data management

  4. Update History of This Database - SKIP Stemcell Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Update history of this database: 2017/03/13 the SKIP Stemcell Database English archive site is opened. 2013/03/29 the SKIP Stemcell Database (https://www.skip.med.keio.ac.jp/SKIPSearch/top?lang=en) is opened.

  5. Geo-scientific database for research and development purposes

    International Nuclear Information System (INIS)

    Tabani, P.; Mangeot, A.; Crabol, V.; Delage, P.; Dewonck, S.; Auriere, C.

    2012-01-01

    Document available in extended abstract form only. The Research and Development Division must manage, in a secure and reliable manner, a large number of data from diverse scientific disciplines and acquisition methods (observations, measurements, experiments, etc.). This management is particularly important for the Underground Research Laboratory, the source of many continuous measurement recordings. Thus, from its conception, Andra has implemented two tools for managing scientific information: the 'Acquisition System and Data Management' (SAGD) and the GEO database with its associated applications. Beyond its own needs, Andra wants to share its achievements with the scientific community, and it therefore provides the data stored in its databases, or samples of rock or water, when they are available. Acquisition and Data Management (SAGD): this system manages data from sensors installed at several sites. Some sites are on the surface (piezometric, atmospheric and environmental stations), the others are in the Underground Research Laboratory. The system also incorporates data from experiments in which Andra participates at the Mont Terri Laboratory in Switzerland. SAGD fulfils these objectives by: making available in real time, on a single system, all experimental data from measurement points, both to Andra scientists and to the partners or providers who need them; displaying the recorded data over temporal windows at specific time steps; allowing remote control of the experiments; ensuring the traceability of all recorded information; and ensuring data storage in a database. SAGD was deployed in the first experimental drift at -445 m in November 2004. It was subsequently extended to the underground Mont Terri laboratory in Switzerland in 2005, to the entire surface logging network of the Meuse/Haute-Marne Center in 2008, and to the environmental network in 2011. All information is acquired, stored and managed by software called Geoscope.
    This software

  6. Development of a data entry auditing protocol and quality assurance for a tissue bank database.

    Science.gov (United States)

    Khushi, Matloob; Carpenter, Jane E; Balleine, Rosemary L; Clarke, Christine L

    2012-03-01

    Human transcription error is an acknowledged risk when extracting information from paper records for entry into a database. For a tissue bank, it is critical that accurate data are provided to researchers with approved access to tissue bank material. The challenges of tissue bank data collection include manual extraction of data from complex medical reports that are accessed from a number of sources and that differ in style and layout. As a quality assurance measure, the Breast Cancer Tissue Bank (http://www.abctb.org.au) has implemented an auditing protocol and, in order to efficiently execute the process, has developed an open source database plug-in tool (eAuditor) to assist in auditing of data held in our tissue bank database. Using eAuditor, we have identified that human entry errors range from 0.01% when entering donor's clinical follow-up details, to 0.53% when entering pathological details, highlighting the importance of an audit protocol tool such as eAuditor in a tissue bank database. eAuditor was developed and tested on the Caisis open source clinical-research database; however, it can be integrated into other databases where similar functionality is required.
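An audit pass of the kind described above amounts to comparing database entries field-by-field against values re-extracted from the source records and reporting an error rate. A minimal sketch with invented field names and records (not eAuditor's actual data model):

```python
# Sketch of a data-entry audit: compare database rows against
# independently re-extracted source values and compute a per-field
# discrepancy rate. Records and the "grade" field are invented.

def audit(db_records, source_records, field):
    """Fraction of audited records whose `field` disagrees with source."""
    checked = errors = 0
    for key, db_row in db_records.items():
        src_row = source_records.get(key)
        if src_row is None:
            continue  # no source document re-extracted for this record
        checked += 1
        if db_row[field] != src_row[field]:
            errors += 1
    return errors / checked if checked else 0.0

db = {1: {"grade": "II"}, 2: {"grade": "III"}, 3: {"grade": "I"}}
src = {1: {"grade": "II"}, 2: {"grade": "II"}, 3: {"grade": "I"}}
print(audit(db, src, "grade"))  # 1 mismatch out of 3 audited records
```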

  7. ARTI refrigerant database

    Energy Technology Data Exchange (ETDEWEB)

    Calm, J.M. [Calm (James M.), Great Falls, VA (United States)

    1996-04-15

    The Refrigerant Database is an information system on alternative refrigerants, associated lubricants, and their use in air conditioning and refrigeration. It consolidates and facilitates access to property, compatibility, environmental, safety, application and other information. It provides corresponding information on older refrigerants to assist manufacturers and users of alternative refrigerants in making comparisons and determining differences. The underlying purpose is to accelerate the phase-out of chemical compounds of environmental concern. The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air-conditioning and refrigeration equipment. The complete documents are not included, though some may be added at a later date. The database identifies sources of specific information on refrigerants. It addresses lubricants including alkylbenzene, polyalkylene glycol, polyolester, and other synthetics as well as mineral oils. It also references documents addressing compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits. Incomplete citations or abstracts are provided for some documents. They are included to accelerate availability of the information and will be completed or replaced in future updates. Citations in this report are divided into the following topics: thermophysical properties; materials compatibility; lubricants and tribology; application data; safety; test and analysis methods; impacts; regulatory actions; substitute refrigerants; identification; absorption and adsorption; research programs; and miscellaneous documents. Information is also presented on ordering instructions for the computerized version.

  8. The Ensembl genome database project.

    Science.gov (United States)

    Hubbard, T; Barker, D; Birney, E; Cameron, G; Chen, Y; Clark, L; Cox, T; Cuff, J; Curwen, V; Down, T; Durbin, R; Eyras, E; Gilbert, J; Hammond, M; Huminiecki, L; Kasprzyk, A; Lehvaslaiho, H; Lijnzaad, P; Melsopp, C; Mongin, E; Pettett, R; Pocock, M; Potter, S; Rust, A; Schmidt, E; Searle, S; Slater, G; Smith, J; Spooner, W; Stabenau, A; Stalker, J; Stupka, E; Ureta-Vidal, A; Vastrik, I; Clamp, M

    2002-01-01

    The Ensembl (http://www.ensembl.org/) database project provides a bioinformatics framework to organise biology around the sequences of large genomes. It is a comprehensive source of stable automatic annotation of the human genome sequence, with confirmed gene predictions that have been integrated with external data sources, and is available as either an interactive web site or as flat files. It is also an open source software engineering project to develop a portable system able to handle very large genomes and associated requirements from sequence analysis to data storage and visualisation. The Ensembl site is one of the leading sources of human genome sequence annotation and provided much of the analysis for publication by the international human genome project of the draft genome. The Ensembl system is being installed around the world in both companies and academic sites on machines ranging from supercomputers to laptops.

  9. Database citation in supplementary data linked to Europe PubMed Central full text biomedical articles.

    Science.gov (United States)

    Kafkas, Şenay; Kim, Jee-Hyub; Pi, Xingjun; McEntyre, Johanna R

    2015-01-01

    In this study, we present an analysis of data citation practices in full text research articles and their corresponding supplementary data files, made available in the Open Access set of articles from Europe PubMed Central. Our aim is to investigate whether supplementary data files should be considered as a source of information for integrating the literature with biomolecular databases. Using text-mining methods to identify and extract a variety of core biological database accession numbers, we found that the supplemental data files contain many more database citations than the body of the article, and that those citations often take the form of a relatively small number of articles citing large collections of accession numbers in text-based files. Moreover, citation of value-added databases derived from submission databases (such as Pfam, UniProt or Ensembl) is common, demonstrating the reuse of these resources as datasets in themselves. All the database accession numbers extracted from the supplementary data are publicly accessible from http://dx.doi.org/10.5281/zenodo.11771. Our study suggests that supplementary data should be considered when linking articles with data, in curation pipelines, and in information retrieval tasks in order to make full use of the entire research article. These observations highlight the need to improve the management of supplemental data in general, in order to make this information more discoverable and useful.
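The accession-number extraction that underpins this kind of study can be sketched with a few regular expressions. The patterns below are illustrative simplifications, not the rules used by the Europe PMC text-mining pipeline:

```python
import re

# Illustrative accession-number patterns (hypothetical simplifications,
# not the production rules used by Europe PMC text mining).
PATTERNS = {
    "uniprot": re.compile(r"\b[OPQ][0-9][A-Z0-9]{3}[0-9]\b"),
    "pdb":     re.compile(r"\b[1-9][A-Za-z0-9]{3}\b"),
    "ensembl": re.compile(r"\bENS[A-Z]*[GTP]\d{11}\b"),
}

def extract_accessions(text):
    """Return {database: sorted list of unique accession-like tokens}."""
    hits = {}
    for db, pattern in PATTERNS.items():
        found = sorted(set(pattern.findall(text)))
        if found:
            hits[db] = found
    return hits

sample = "Protein P12345 (UniProt) maps to Ensembl gene ENSG00000139618."
print(extract_accessions(sample))
# → {'uniprot': ['P12345'], 'ensembl': ['ENSG00000139618']}
```

Run over supplementary text files rather than article bodies, counts from a function like this would reproduce the kind of comparison the study describes.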

  10. Verification of Data Accuracy in Japan Congenital Cardiovascular Surgery Database Including Its Postprocedural Complication Reports.

    Science.gov (United States)

    Takahashi, Arata; Kumamaru, Hiraku; Tomotaki, Ai; Matsumura, Goki; Fukuchi, Eriko; Hirata, Yasutaka; Murakami, Arata; Hashimoto, Hideki; Ono, Minoru; Miyata, Hiroaki

    2018-03-01

    The Japan Congenital Cardiovascular Surgical Database (JCCVSD) is a nationwide registry whose data are used for health quality assessment and clinical research in Japan. We evaluated the completeness of case registration and the accuracy of recorded data components, including postprocedural mortality and complications, in the database via on-site data adjudication. We validated the records from JCCVSD 2010 to 2012 containing congenital cardiovascular surgery data performed in 111 facilities throughout Japan. We randomly chose nine facilities for site visits by the auditor team and conducted on-site data adjudication. We assessed whether the records in JCCVSD matched the data in the source materials. We identified 1,928 cases of eligible surgeries performed at the facilities, of which 1,910 were registered (99.1% completeness), with 6 cases of duplication and 1 inappropriate case registration. Data components including gender, age, and surgery time (hours) were highly accurate with 98% to 100% concordance. Mortality at discharge and at 30 and 90 postoperative days was 100% accurate. Among the five complications studied, reoperation was the most frequently observed, with 16 and 21 cases recorded in the database and source materials, respectively, having a sensitivity of 0.67 and a specificity of 0.99. Validation of the JCCVSD database showed high registration completeness and high accuracy, especially in the categorical data components. Adjudicated mortality was 100% accurate. While limited in numbers, the recorded cases of postoperative complications all had high specificities but lower sensitivities (0.67-1.00). Continued activities for data quality improvement and assessment are necessary for optimizing the utility of these registries.

  11. The development of large-scale de-identified biomedical databases in the age of genomics-principles and challenges.

    Science.gov (United States)

    Dankar, Fida K; Ptitsyn, Andrey; Dankar, Samar K

    2018-04-10

    Contemporary biomedical databases include a wide range of information types from various observational and instrumental sources. Among the most important features that unite biomedical databases across the field are high volume of information and high potential to cause damage through data corruption, loss of performance, and loss of patient privacy. Thus, issues of data governance and privacy protection are essential for the construction of data depositories for biomedical research and healthcare. In this paper, we discuss various challenges of data governance in the context of population genome projects. The various challenges along with best practices and current research efforts are discussed through the steps of data collection, storage, sharing, analysis, and knowledge dissemination.

  12. Database design and database administration for a kindergarten

    OpenAIRE

    Vítek, Daniel

    2009-01-01

    The bachelor thesis deals with the creation of a database design for a standard kindergarten, the installation of the designed database into the database system Oracle Database 10g Express Edition, and a demonstration of administration tasks in this database system. The design was verified by means of a purpose-built access application.

  13. Allometric biomass and carbon factors database

    Energy Technology Data Exchange (ETDEWEB)

    Somogyi, Z. [European Commission Joint Research Centre, Ispra (Italy). Institute for Environment and Sustainability]|[Hungarian Forest Research Institute, Budapest (Hungary); Teobaldelli, M.; Federici, S.; Pagliari, V.; Grassi, G.; Seufert, G. [European Commission Joint Research Centre, Ispra (Italy). Institute for Environment and Sustainability; Matteucci, G. [Consiglio Nazionale delle Ricerche, Rende (Italy). Istituto per i Sistemi Agricoli e Forestali del Mediterraneo

    2008-09-30

    The 'Allometric, Biomass and Carbon factors' database (ABC factors database) was designed to facilitate the estimation of the biomass carbon stocks of forests, in order to support the development and verification of greenhouse gas inventories in the LULUCF sector. The database contains several types of expansion, conversion, and combined factors, by various tree species or species groups, that can be used to calculate the biomass or carbon of forests of the Eurasian region from proxy variables (e.g., tree volume) that may come from forest inventories. In addition to the factors, and depending on the information available in the cited source, the database indicates: (1) the biomass compartments involved when the factor was developed; and (2) the possible applicability of the factor, e.g. by country or by ecological region. The applicability of the factors is either suggested by the source itself, or by the type of source (e.g. a National Greenhouse Gas Inventory Report), or was based on the expert judgement of the compilers of the database. Finally, in order to facilitate the selection of the most appropriate factors, the web-based interface provides the possibility to compare several factors that may come from different sources.
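The way such factors are applied is simple arithmetic: a proxy variable from a forest inventory is multiplied by a conversion/expansion factor and a carbon fraction. The numbers in this sketch are invented placeholders, not entries from the ABC factors database:

```python
# Convert inventoried stem volume to biomass carbon via a combined
# biomass conversion-and-expansion factor (BCEF) and a carbon fraction.
# All numeric values here are invented placeholders, not database entries.
def biomass_carbon(stem_volume_m3, bcef_t_per_m3, carbon_fraction=0.5):
    """tonnes C = stem volume (m3) x BCEF (t dry biomass / m3) x C fraction."""
    return stem_volume_m3 * bcef_t_per_m3 * carbon_fraction

# 1000 m3 of stem volume with a placeholder BCEF of 0.9 t/m3:
print(biomass_carbon(1000.0, 0.9))  # → 450.0 tonnes C
```

Selecting the right factor for a species, region, and biomass compartment is exactly the lookup the database interface is meant to support.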

  14. Developing a stone database for clinical practice.

    Science.gov (United States)

    Turney, Benjamin W; Noble, Jeremy G; Reynard, John M

    2011-09-01

    Our objective was to design an intranet-based database to streamline stone patient management and data collection. The system developers used a rapid development approach that removed the need for laborious and unnecessary documentation, instead focusing on producing a rapid prototype that could then be altered iteratively. By using open source development software and website best practice, the development cost was kept very low in comparison with traditional clinical applications. Information about each patient episode can be entered via a user-friendly interface. The bespoke electronic stone database removes the need for handwritten notes, dictation, and typing. From the database, files may be automatically generated for clinic letters, operation notes, and letters to family doctors. These may be printed or e-mailed from the database. Data may be easily exported for audits, coding, and research. Data collection remains central to medical practice, to improve patient safety, to analyze medical and surgical outcomes, and to evaluate emerging treatments. Establishing prospective data collection is crucial to this process. In the current era, we have the opportunity to embrace available technology to facilitate this process. The database template could be modified for use in other clinics. The database that we have designed helps to provide a modern and efficient clinical stone service.

  15. Review and Comparison of the Search Effectiveness and User Interface of Three Major Online Chemical Databases

    Science.gov (United States)

    Bharti, Neelam; Leonard, Michelle; Singh, Shailendra

    2016-01-01

    Online chemical databases are the largest source of chemical information and, therefore, the main resource for retrieving results from published journals, books, patents, conference abstracts, and other relevant sources. Various commercial, as well as free, chemical databases are available. SciFinder, Reaxys, and Web of Science are three major…

  16. Database Description - Open TG-GATEs Pathological Image Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us Open TG-GATEs Pathological Image Database. Database Description, general information. Database name: Open TG-GATEs Pathological Image Database. Alternative name: -. DOI: 10.18908/lsdba.nbdc00954-0 ...iomedical Innovation, 7-6-8 Saito-asagi, Ibaraki-city, Osaka 567-0085, Japan. TEL: 81-72-641-9826. Email: -. Database classification: Toxicogenomics Database. Organism Taxonomy Name: Rattus norvegi... Article title: Author name(s): Journal: External Links: Original website information: Database

  17. Nine years of global hydrocarbon emissions based on source inversion of OMI formaldehyde observations

    Directory of Open Access Journals (Sweden)

    M. Bauwens

    2016-08-01

    Full Text Available As formaldehyde (HCHO) is a high-yield product in the oxidation of most volatile organic compounds (VOCs) emitted by fires, vegetation, and anthropogenic activities, satellite observations of HCHO are well-suited to inform us on the spatial and temporal variability of the underlying VOC sources. The long record of space-based HCHO column observations from the Ozone Monitoring Instrument (OMI) is used to infer emission flux estimates from pyrogenic and biogenic volatile organic compounds (VOCs) on the global scale over 2005–2013. This is realized through the method of source inverse modeling, which consists in the optimization of emissions in a chemistry-transport model (CTM) in order to minimize the discrepancy between the observed and modeled HCHO columns. The top–down fluxes are derived in the global CTM IMAGESv2 by an iterative minimization algorithm based on the full adjoint of IMAGESv2, starting from a priori emission estimates provided by the newly released GFED4s (Global Fire Emission Database, version 4s) inventory for fires, and by the MEGAN-MOHYCAN inventory for isoprene emissions. The top–down fluxes are compared to two independent inventories for fire (GFAS and FINNv1.5) and isoprene emissions (MEGAN-MACC and GUESS-ES). The inversion indicates a moderate decrease (ca. 20 %) in the average annual global fire and isoprene emissions, from 2028 Tg C in the a priori to 1653 Tg C for burned biomass, and from 343 to 272 Tg for isoprene fluxes. Those estimates are acknowledged to depend on the accuracy of formaldehyde data, as well as on the assumed fire emission factors and the oxidation mechanisms leading to HCHO production. Strongly decreased top–down fire fluxes (30–50 %) are inferred in the peak fire season in Africa and during years with strong a priori fluxes associated with forest fires in Amazonia (in 2005, 2007, and 2010), bushfires in Australia (in 2006 and 2011), and peat burning in Indonesia (in 2006 and 2009), whereas

  18. Design and implementation of the NPOI database and website

    Science.gov (United States)

    Newman, K.; Jorgensen, A. M.; Landavazo, M.; Sun, B.; Hutter, D. J.; Armstrong, J. T.; Mozurkewich, David; Elias, N.; van Belle, G. T.; Schmitt, H. R.; Baines, E. K.

    2014-07-01

    The Navy Precision Optical Interferometer (NPOI) has been recording astronomical observations for nearly two decades, at this point with hundreds of thousands of individual observations recorded to date, for a total data volume of many terabytes. To make maximum use of the NPOI data it is necessary to organize them in an easily searchable manner and to be able to extract essential diagnostic information from the data, allowing users to quickly gauge data quality and suitability for a specific science investigation. This sets the motivation for creating a comprehensive database of observation metadata as well as, at least, reduced data products. The NPOI database is implemented in MySQL using standard database tools and interfaces. The use of standard database tools allows us to focus on top-level database and interface implementation and to take advantage of standard features such as backup, remote access, mirroring, and complex queries which would otherwise be time-consuming to implement. A website was created in order to give scientists a user-friendly interface for searching the database. It allows the user to select various metadata to search for and also to decide how and what results are displayed. This streamlines searches, making it easier and quicker for scientists to find the information they are looking for. The website supports multiple browsers and devices. In this paper we present the design of the NPOI database and website, and give examples of their use.
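The metadata-search idea described above can be sketched with a toy schema. SQLite stands in for MySQL here, and the table and column names are hypothetical, not the actual NPOI layout:

```python
import sqlite3

# Hypothetical, much-simplified observation-metadata schema; the real NPOI
# database is in MySQL and its table layout is not reproduced here.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE observations (
        obs_id     INTEGER PRIMARY KEY,
        target     TEXT NOT NULL,
        obs_date   TEXT NOT NULL,      -- ISO 8601
        n_scans    INTEGER,
        quality    REAL                -- diagnostic score, 0..1
    )""")
conn.executemany(
    "INSERT INTO observations (target, obs_date, n_scans, quality) "
    "VALUES (?, ?, ?, ?)",
    [("FKV0193", "2013-05-02", 42, 0.91),
     ("FKV0193", "2013-05-03", 17, 0.44),
     ("HR4689",  "2013-06-11", 28, 0.85)])

# The kind of query a search website would issue: good-quality data
# for one target, ordered by date.
rows = conn.execute(
    "SELECT obs_date, n_scans FROM observations "
    "WHERE target = ? AND quality >= ? ORDER BY obs_date",
    ("FKV0193", 0.8)).fetchall()
print(rows)  # → [('2013-05-02', 42)]
```

Storing a per-observation quality score is what lets users gauge suitability for a science program before downloading any data.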

  19. A Comparison between Predicted and Observed Atmospheric States and their Effects on Infrasonic Source Time Function Inversion at Source Physics Experiment 6

    Science.gov (United States)

    Aur, K. A.; Poppeliers, C.; Preston, L. A.

    2017-12-01

    The Source Physics Experiment (SPE) consists of a series of underground chemical explosions at the Nevada National Security Site (NNSS) designed to gain an improved understanding of the generation and propagation of physical signals in the near and far field. Characterizing the acoustic and infrasound source mechanism from underground explosions is of great importance to underground explosion monitoring. To this end we perform full waveform source inversion of infrasound data collected from the SPE-6 experiment at distances from 300 m to 6 km and frequencies up to 20 Hz. Our method requires estimating the state of the atmosphere at the time of each experiment, computing Green's functions through these atmospheric models, and subsequently inverting the observed data in the frequency domain to obtain a source time function. To estimate the state of the atmosphere at the time of the experiment, we utilize the Weather Research and Forecasting - Data Assimilation (WRF-DA) modeling system to derive a unified atmospheric state model by combining Global Energy and Water Cycle Experiment (GEWEX) Continental-scale International Project (GCIP) data and locally obtained sonde and surface weather observations collected at the time of the experiment. We synthesize Green's functions through these atmospheric models using Sandia's moving media acoustic propagation simulation suite (TDAAPS). These models include 3-D variations in topography, temperature, pressure, and wind. We compare inversion results using the atmospheric models derived from the unified weather models versus previous modeling results and discuss how these differences affect computed source waveforms with respect to observed waveforms at various distances. Sandia National Laboratories is a multi-mission laboratory managed and operated by National Technology and Engineering Solutions of Sandia LLC, a wholly owned subsidiary of Honeywell International Inc. for the U.S. Department of Energy's National Nuclear

  20. Human Variome Project Quality Assessment Criteria for Variation Databases.

    Science.gov (United States)

    Vihinen, Mauno; Hancock, John M; Maglott, Donna R; Landrum, Melissa J; Schaafsma, Gerard C P; Taschner, Peter

    2016-06-01

    Numerous databases containing information about DNA, RNA, and protein variations are available. Gene-specific variant databases (locus-specific variation databases, LSDBs) are typically curated and maintained for single genes or groups of genes for a certain disease(s). These databases are widely considered as the most reliable information source for a particular gene/protein/disease, but it should also be made clear they may have widely varying contents, infrastructure, and quality. Quality is very important to evaluate because these databases may affect health decision-making, research, and clinical practice. The Human Variome Project (HVP) established a Working Group for Variant Database Quality Assessment. The basic principle was to develop a simple system that nevertheless provides a good overview of the quality of a database. The HVP quality evaluation criteria that resulted are divided into four main components: data quality, technical quality, accessibility, and timeliness. This report elaborates on the developed quality criteria and how implementation of the quality scheme can be achieved. Examples are provided for the current status of the quality items in two different databases, BTKbase, an LSDB, and ClinVar, a central archive of submissions about variants and their clinical significance. © 2016 WILEY PERIODICALS, INC.

  1. The optical, infrared and radio properties of extragalactic sources observed by SDSS, 2mass and first surveys

    International Nuclear Information System (INIS)

    Z. Ivezic et al.

    2002-01-01

    We positionally match sources observed by the Sloan Digital Sky Survey (SDSS), the Two Micron All Sky Survey (2MASS), and the Faint Images of the Radio Sky at Twenty-cm (FIRST) survey. Practically all 2MASS sources are matched to an SDSS source within 2 arcsec; ∼ 11% of them are optically resolved galaxies and the rest are dominated by stars. About 1/3 of FIRST sources are matched to an SDSS source within 2 arcsec; ∼ 80% of these are galaxies and the rest are dominated by quasars. Based on these results, we project that by the completion of these surveys the matched samples will include about 10^7 and 10^6 galaxies observed by both SDSS and 2MASS, and about 250,000 galaxies and 50,000 quasars observed by both SDSS and FIRST. Here we present a preliminary analysis of the optical, infrared and radio properties for the extragalactic sources from the matched samples. In particular, we find that the fraction of quasars with stellar colors missed by the SDSS spectroscopic survey is probably not larger than ∼ 10%, and that the optical colors of radio-loud quasars are ∼ 0.05 mag redder (with 4σ significance) than the colors of radio-quiet quasars.
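Positional matching of the kind described above reduces to computing angular separations and keeping pairs within a tolerance (2 arcsec here). A minimal sketch, with no claim to reproduce the surveys' actual matching pipeline:

```python
import math

def ang_sep_arcsec(ra1, dec1, ra2, dec2):
    """Angular separation (arcsec) between two sky positions given in degrees."""
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
    # Haversine formula: numerically stable for the small separations
    # relevant to cross-matching.
    dra, ddec = ra2 - ra1, dec2 - dec1
    a = (math.sin(ddec / 2) ** 2
         + math.cos(dec1) * math.cos(dec2) * math.sin(dra / 2) ** 2)
    return math.degrees(2 * math.asin(math.sqrt(a))) * 3600.0

# Two hypothetical catalog positions; match if within 2 arcsec,
# as in the SDSS/2MASS/FIRST cross-identification.
print(ang_sep_arcsec(150.0, 2.0, 150.0003, 2.0002) < 2.0)  # → True
```

Real pipelines index one catalog spatially (e.g. with a k-d tree or HEALPix cells) so each source is only compared against nearby candidates.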

  2. Metadata database and data analysis software for the ground-based upper atmospheric data developed by the IUGONET project

    Science.gov (United States)

    Hayashi, H.; Tanaka, Y.; Hori, T.; Koyama, Y.; Shinbori, A.; Abe, S.; Kagitani, M.; Kouno, T.; Yoshida, D.; Ueno, S.; Kaneda, N.; Yoneda, M.; Tadokoro, H.; Motoba, T.; Umemura, N.; Iugonet Project Team

    2011-12-01

    The Inter-university Upper atmosphere Global Observation NETwork (IUGONET) is a Japanese inter-university project by the National Institute of Polar Research (NIPR), Tohoku University, Nagoya University, Kyoto University, and Kyushu University to build a database of metadata for ground-based observations of the upper atmosphere. The IUGONET institutes/universities have been collecting various types of data by radars, magnetometers, photometers, radio telescopes, helioscopes, etc. at various locations all over the world and at various altitude layers from the Earth's surface to the Sun. The metadata database will be of great help to researchers in efficiently finding and obtaining these observational data spread over the institutes/universities. This should also facilitate synthetic analysis of multi-disciplinary data, which will lead to new types of research in the upper atmosphere. The project has also been developing software to help researchers download, visualize, and analyze the data provided by the IUGONET institutes/universities. The metadata database system is built on the platform of DSpace, which is an open source software package for digital repositories. The data analysis software is written in the IDL language with the TDAS (THEMIS Data Analysis Software suite) library. These products have just been released for beta-testing.

  3. On an Allan variance approach to classify VLBI radio-sources on the basis of their astrometric stability

    Science.gov (United States)

    Gattano, C.; Lambert, S.; Bizouard, C.

    2017-12-01

    In the context of selecting sources defining the celestial reference frame, we compute astrometric time series of all VLBI radio-sources from observations in the International VLBI Service database. The time series are then analyzed with the Allan variance in order to estimate the astrometric stability. From these results, we establish a new classification that takes into account the whole multi-time-scale information. The algorithm is flexible on the definition of "stable source" through an adjustable threshold.
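The Allan variance underlying the classification can be sketched as follows. This toy version uses a simple non-overlapping estimator and synthetic series; it is not the multi-time-scale algorithm of the paper, and the 0.05 threshold below is an arbitrary demo value:

```python
import random
import statistics

def allan_variance(series, m):
    """Allan variance at averaging window m (non-overlapping estimator)."""
    # Average the series in consecutive, non-overlapping blocks of length m,
    # then take half the mean squared difference of adjacent block means.
    means = [statistics.fmean(series[i:i + m])
             for i in range(0, len(series) - m + 1, m)]
    diffs = [(b - a) ** 2 for a, b in zip(means, means[1:])]
    return 0.5 * statistics.fmean(diffs)

# White noise averages down (Allan variance falls as the window grows);
# a drifting "unstable" source does not.
random.seed(0)
noise = [random.gauss(0.0, 1.0) for _ in range(4096)]
drift = [0.002 * i + random.gauss(0.0, 1.0) for i in range(4096)]

for label, series in (("noise", noise), ("drift", drift)):
    print(label, [round(allan_variance(series, m), 4) for m in (1, 16, 256)])

# An adjustable threshold on the long-window Allan variance is the knob
# the classification turns on; 0.05 here is purely illustrative.
stable = allan_variance(noise, 256) < 0.05
```

Comparing the variance across several window lengths, rather than one, is what gives the classification its multi-time-scale character.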

  4. High energy nuclear database: a test-bed for nuclear data information technology

    International Nuclear Information System (INIS)

    Brown, D.A.; Vogt, R.; Beck, B.; Pruet, J.; Vogt, R.

    2008-01-01

    We describe the development of an on-line high-energy heavy-ion experimental database. When completed, the database will be searchable and cross-indexed with relevant publications, including published detector descriptions. While this effort is relatively new, it will eventually contain all published data from older heavy-ion programs as well as published data from current and future facilities. These data include all measured observables in proton-proton, proton-nucleus and nucleus-nucleus collisions. Once in general use, this database will have tremendous scientific payoff as it makes systematic studies easier and allows simpler benchmarking of theoretical models for a broad range of experiments. Furthermore, there is a growing need for compilations of high-energy nuclear data for applications including stockpile stewardship, technology development for inertial confinement fusion, target and source development for upcoming facilities such as the International Linear Collider, and homeland security. This database is part of a larger proposal that includes the production of periodic data evaluations and topical reviews. These reviews would provide an alternative and impartial mechanism to resolve discrepancies between published data from rival experiments and between theory and experiment. Since this database will be a community resource, it requires the high-energy nuclear physics community's financial and manpower support. This project serves as a test-bed for the further development of an object-oriented nuclear data format and database system. By using 'off-the-shelf' software tools and techniques, the system is simple, robust, and extensible. Eventually we envision a 'Grand Unified Nuclear Format' encapsulating data types used in the ENSDF, ENDF/B, EXFOR, NSR and other formats, including processed data formats. (authors)

  5. Deployment and Evaluation of an Observations Data Model

    Science.gov (United States)

    Horsburgh, J. S.; Tarboton, D. G.; Zaslavsky, I.; Maidment, D. R.; Valentine, D.

    2007-12-01

    Environmental observations are fundamental to hydrology and water resources, and the way these data are organized and manipulated either enables or inhibits the analyses that can be performed. The CUAHSI Hydrologic Information System project is developing information technology infrastructure to support hydrologic science. This includes an Observations Data Model (ODM) that provides a new and consistent format for the storage and retrieval of environmental observations in a relational database designed to facilitate integrated analysis of large datasets collected by multiple investigators. Within this data model, observations are stored with sufficient ancillary information (metadata) about the observations to allow them to be unambiguously interpreted and used, and to provide traceable heritage from raw measurements to useable information. The design is based upon a relational database model that exposes each single observation as a record, taking advantage of the capability in relational database systems for querying based upon data values and enabling cross-dimension data retrieval and analysis. This data model has been deployed, as part of the HIS Server, at the WATERS Network test bed observatories across the U.S., where it serves as a repository for real-time data in the observatory information system. The ODM holds the data that are then made available to investigators and the public through web services and the Data Access System for Hydrology (DASH) map-based interface. In the WATERS Network test bed settings the ODM has been used to ingest, analyze and publish data from a variety of sources and disciplines. This paper will present an evaluation of the effectiveness of this initial deployment and the revisions that are being instituted to address shortcomings.
The ODM represents a new, systematic way for hydrologists, scientists, and engineers to organize and share their data and thereby facilitate a fuller integrated understanding of water resources based on
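The "one observation per record, query on values" design described above can be illustrated with a toy table. SQLite and the field names are simplified stand-ins, not the actual ODM schema:

```python
import sqlite3

# Illustrative single-table version of the one-observation-per-record idea;
# column names are simplified stand-ins, not the actual ODM schema.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE data_values (
    site      TEXT,    -- where the observation was made
    variable  TEXT,    -- what was measured
    utc_time  TEXT,    -- when (ISO 8601)
    value     REAL,    -- the observation itself
    qc_code   TEXT     -- quality-control metadata
)""")
db.executemany("INSERT INTO data_values VALUES (?, ?, ?, ?, ?)", [
    ("LoganRiver", "discharge_cms", "2007-06-01T00:00", 2.7, "ok"),
    ("LoganRiver", "discharge_cms", "2007-06-01T01:00", 9.4, "estimated"),
    ("RedButte",   "discharge_cms", "2007-06-01T00:00", 0.8, "ok"),
])

# Because each observation is its own record, queries can filter on the
# data values themselves, across sites and investigators.
high = db.execute(
    "SELECT site, utc_time, value FROM data_values "
    "WHERE variable = ? AND value > ? AND qc_code = 'ok'",
    ("discharge_cms", 1.0)).fetchall()
print(high)  # → [('LoganRiver', '2007-06-01T00:00', 2.7)]
```

Carrying the quality-control code on every record is what provides the "traceable heritage" from raw measurement to usable information.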

  6. Historical tsunami database for France and its overseas territories

    Directory of Open Access Journals (Sweden)

    J. Lambert

    2011-04-01

    Full Text Available A search and analysis of a large number of historical documents has made it possible: (i) to discover so-far unknown tsunamis that have hit the French coasts during the last centuries, and (ii) conversely, to disprove the tsunami nature of several events referred to in recent catalogues. This information has been structured into a database and also made available as a website (http://www.tsunamis.fr) that is accessible in French, English and Spanish. So far 60 genuine ("true") tsunamis have been described (with their dates, causes, oceans/seas, places observed, number of waves, flood and ebb distances, run-up, and intensities) and referenced against contemporary sources. Digitized documents are accessible online. In addition, so as to avoid confusion, tsunamis revealed as "false" or "doubtful" have been compiled into a second catalogue.

    Both the database and the website are updated annually corresponding to the state of knowledge, so as to take into account newly discovered historical references and the occurrence of new tsunamis on the coasts of France and many of its overseas territories: Guadeloupe, Martinique, French Guiana, New Caledonia, Réunion, and Mayotte.

  7. Database description for the biosphere code BIOMOD

    International Nuclear Information System (INIS)

    Kane, P.; Thorne, M.C.; Coughtrey, P.J.

    1983-03-01

    The development of a biosphere model for use in comparative radiological assessments of UK low and intermediate level waste repositories is discussed. The nature, content and sources of data contained in the four files that comprise the database for the biosphere code BIOMOD are described. (author)

  8. SEVEN-YEAR WILKINSON MICROWAVE ANISOTROPY PROBE (WMAP) OBSERVATIONS: PLANETS AND CELESTIAL CALIBRATION SOURCES

    International Nuclear Information System (INIS)

    Weiland, J. L.; Odegard, N.; Hill, R. S.; Greason, M. R.; Wollack, E.; Hinshaw, G.; Kogut, A.; Jarosik, N.; Page, L.; Bennett, C. L.; Gold, B.; Larson, D.; Dunkley, J.; Halpern, M.; Komatsu, E.; Limon, M.; Meyer, S. S.; Nolta, M. R.; Smith, K. M.; Spergel, D. N.

    2011-01-01

    We present WMAP seven-year observations of bright sources which are often used as calibrators at microwave frequencies. Ten objects are studied in five frequency bands (23-94 GHz): the outer planets (Mars, Jupiter, Saturn, Uranus, and Neptune) and five fixed celestial sources (Cas A, Tau A, Cyg A, 3C274, and 3C58). The seven-year analysis of Jupiter provides temperatures which are within 1σ of the previously published WMAP five-year values, with slightly tighter constraints on variability with orbital phase (0.2% ± 0.4%), and limits (but no detections) on linear polarization. Observed temperatures for both Mars and Saturn vary significantly with viewing geometry. Scaling factors are provided which, when multiplied by the Wright Mars thermal model predictions at 350 μm, reproduce WMAP seasonally averaged observations of Mars within ∼2%. An empirical model is described which fits brightness variations of Saturn due to geometrical effects and can be used to predict the WMAP observations to within 3%. Seven-year mean temperatures for Uranus and Neptune are also tabulated. Uncertainties in Uranus temperatures are 3%-4% in the 41, 61, and 94 GHz bands; the smallest uncertainty for Neptune is 8% for the 94 GHz band. Intriguingly, the spectrum of Uranus appears to show a dip at ∼30 GHz of unidentified origin, although the feature is not of high statistical significance. Flux densities for the five selected fixed celestial sources are derived from the seven-year WMAP sky maps and are tabulated for Stokes I, Q, and U, along with polarization fraction and position angle. Fractional uncertainties for the Stokes I fluxes are typically 1% to 3%. Source variability over the seven-year baseline is also estimated. Significant secular decrease is seen for Cas A and Tau A: our results are consistent with a frequency-independent decrease of about 0.53% per year for Cas A and 0.22% per year for Tau A. We present WMAP polarization data with uncertainties of a few percent for Tau

  9. A Decade of Combination Antiretroviral Treatment in Asia: The TREAT Asia HIV Observational Database Cohort.

    Science.gov (United States)

    2016-08-01

    Asian countries have seen the expansion of combination antiretroviral therapy (cART) over the past decade. The TREAT Asia HIV Observational Database (TAHOD) was established in 2003 comprising 23 urban referral sites in 13 countries across the region. We examined trends in treatment outcomes in patients who initiated cART between 2003 and 2013. Time of cART initiation was grouped into three periods: 2003-2005, 2006-2009, and 2010-2013. We analyzed trends in undetectable viral load (VL; defined as VL treatment outcomes, with older age and higher CD4 counts being associated with undetectable VL. Survival and VL response on cART have improved over the past decade in TAHOD, although CD4 count at cART initiation remained low. Greater effort should be made to facilitate earlier HIV diagnosis and linkage to care and treatment, to achieve greater improvements in treatment outcomes.

  10. Database for earthquake strong motion studies in Italy

    Science.gov (United States)

    Scasserra, G.; Stewart, J.P.; Kayen, R.E.; Lanzo, G.

    2009-01-01

    We describe an Italian database of strong ground motion recordings and databanks delineating conditions at the instrument sites and characteristics of the seismic sources. The strong motion database consists of 247 corrected recordings from 89 earthquakes and 101 recording stations. Uncorrected recordings were drawn from public web sites and processed on a record-by-record basis using a procedure utilized in the Next-Generation Attenuation (NGA) project to remove instrument resonances, minimize noise effects through low- and high-pass filtering, and baseline correction. The number of available uncorrected recordings was reduced by 52% (mostly because of s-triggers) to arrive at the 247 recordings in the database. The site databank includes for every recording site the surface geology, a measurement or estimate of average shear wave velocity in the upper 30 m (Vs30), and information on instrument housing. Of the 89 sites, 39 have on-site velocity measurements (17 of which were performed as part of this study using SASW techniques). For remaining sites, we estimate Vs30 based on measurements on similar geologic conditions where available. Where no local velocity measurements are available, correlations with surface geology are used. Source parameters are drawn from databanks maintained (and recently updated) by Istituto Nazionale di Geofisica e Vulcanologia and include hypocenter location and magnitude for small events (M < ∼5.5) and finite source parameters for larger events. © 2009 A.S. Elnashai & N.N. Ambraseys.

  11. The Einstein Observatory stellar X-ray database

    International Nuclear Information System (INIS)

    Harnden, F.R. Jr.; Sciortino, S.; Micela, G.; Maggio, A.; Schmitt, J.H.M.M.

    1990-01-01

    We present the motivation for and methodology followed in constructing the Einstein Observatory Stellar X-ray Database from a uniform analysis of nearly 4000 Imaging Proportional Counter fields obtained during the life of this mission. This project has been implemented using the INGRES database system, so that statistical analyses of the properties of detected X-ray sources are relatively easily and flexibly accomplished. Some illustrative examples will furnish a general view both of the kind and amount of the archived information and of the statistical approach used in analyzing the global properties of the data. (author)

  12. Design research of uranium mine borehole database

    International Nuclear Information System (INIS)

    Xie Huaming; Hu Guangdao; Zhu Xianglin; Chen Dehua; Chen Miaoshun

    2008-01-01

    With energy sources in short supply, uranium exploration has been intensified; however, the storage, analysis and use of uranium exploration data are currently not highly computerized in China, so the data are poorly shared and used and cannot meet the needs of production and research. These problems would be addressed if the data were stored and managed in a database system. The conceptual structure design, logical structure design and data integrity checks are discussed according to the demands of applications and an analysis of uranium exploration data. An application of the database is illustrated finally. (authors)

  13. Update History of This Database - Yeast Interacting Proteins Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Update History of This Database. 2010/03/29: Yeast Interacting Proteins Database English archive site is opened. 2000/12/4: Yeast Interacting Proteins Database ( http://itolab.cb.k.u-tokyo.ac.jp/Y2H/ ) is released.

  14. Features of TMR for a Successful Clinical and Research Database

    OpenAIRE

    Pryor, David B.; Stead, William W.; Hammond, W. Edward; Califf, Robert M.; Rosati, Robert A.

    1982-01-01

    A database can be used for clinical practice and for research. The design of the database is important if both uses are to succeed. A clinical database must be efficient and flexible. A research database requires consistent observations recorded in a format which permits complete recall of the experience. In addition, the database should be designed to distinguish between missing data and negative responses, and to minimize transcription errors during the recording process.
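
    The distinction the abstract draws between missing data and negative responses can be sketched in a few lines; the field names below are invented for illustration:

    ```python
    # None marks data that was never recorded (missing); False marks an
    # explicit negative response. Conflating the two corrupts research recall.
    record = {
        "chest_pain": True,      # observed and present
        "dyspnea": False,        # asked, explicitly absent
        "family_history": None,  # never recorded: missing, not negative
    }

    def negatives(rec):
        """Fields explicitly answered 'no' (excludes missing data)."""
        return [k for k, v in rec.items() if v is False]

    def missing(rec):
        """Fields with no recorded observation."""
        return [k for k, v in rec.items() if v is None]
    ```

    A query for "patients without dyspnea" should use only the explicit negatives, never the missing fields.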

  15. The Single- and Multichannel Audio Recordings Database (SMARD)

    DEFF Research Database (Denmark)

    Nielsen, Jesper Kjær; Jensen, Jesper Rindom; Jensen, Søren Holdt

    2014-01-01

    A new single- and multichannel audio recordings database (SMARD) is presented in this paper. The database contains recordings from a box-shaped listening room for various loudspeaker and array types. The recordings were made for 48 different configurations of three different loudspeakers and four different microphone arrays. In each configuration, 20 different audio segments were played and recorded, ranging from simple artificial sounds to polyphonic music. SMARD can be used for testing algorithms developed for numerous applications, and we give examples of source localisation results.

  16. Sources and methods to reconstruct past masting patterns in European oak species.

    Science.gov (United States)

    Szabó, Péter

    2012-01-01

    The irregular occurrence of good seed years in forest trees is known in many parts of the world. Mast year frequency in the past few decades can be examined through field observational studies; however, masting patterns in the more distant past are equally important in gaining a better understanding of long-term forest ecology. Past masting patterns can be studied through the examination of historical written sources. These pose considerable challenges, because data in them were usually not recorded with the aim of providing information about masting. Several studies have examined masting in the deeper past; however, authors have hardly ever considered the methodological implications of using and combining various source types. This paper provides a critical overview of the types of archival written sources that are available for the reconstruction of past masting patterns for European oak species and proposes a method to unify and evaluate different types of data. Available sources cover approximately eight centuries and fall into two basic categories: direct observations on the amount of acorns and references to sums of money received in exchange for access to acorns. Because archival sources are highly different in origin and quality, the optimal solution for creating databases of past masting data is a three-point scale: zero mast, moderate mast, good mast. When larger amounts of data are available in a unified three-point-scale database, they can be used to test hypotheses about past masting frequencies, the driving forces of masting or regional masting patterns.
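
    The proposed three-point scale could be applied to heterogeneous archival wording along these lines; the phrase-to-score mapping is a hypothetical example, not taken from the paper:

    ```python
    # Hypothetical normalization of archival masting reports onto the
    # three-point scale (0 = zero mast, 1 = moderate mast, 2 = good mast).
    # The keyword list is illustrative, not from the source material.
    SCALE = {
        "no acorns": 0, "crop failed": 0,
        "some acorns": 1, "middling crop": 1,
        "abundant acorns": 2, "full mast": 2,
    }

    def to_three_point(report):
        """Return 0/1/2 for a known phrase, or None when the source
        wording cannot be classified."""
        return SCALE.get(report.strip().lower())
    ```

    Records that return None would stay as gaps in the database rather than being guessed at, which mirrors the paper's caution about combining sources of uneven quality.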

  17. The Nuclear Science References (NSR) database and Web Retrieval System

    International Nuclear Information System (INIS)

    Pritychenko, B.; Betak, E.; Kellett, M.A.; Singh, B.; Totans, J.

    2011-01-01

    The Nuclear Science References (NSR) database together with its associated Web interface is the world's only comprehensive source of easily accessible low- and intermediate-energy nuclear physics bibliographic information for more than 200,000 articles since the beginning of nuclear science. The weekly updated NSR database provides essential support for nuclear data evaluation, compilation and research activities. The principles of the database and Web application development and maintenance are described. Examples of nuclear structure, reaction and decay applications are specifically included. The complete NSR database is freely available at the websites of the National Nuclear Data Center (http://www.nndc.bnl.gov/nsr) and the International Atomic Energy Agency (http://www-nds.iaea.org/nsr).

  18. Model/observational data cross analysis in planetary plasma sciences with IMPEx

    Science.gov (United States)

    Genot, V. N.; Khodachenko, M.; Kallio, E. J.; Al-Ubaidi, T.; Alexeev, I. I.; Gangloff, M.; Bourrel, N.; André, N.; Modolo, R.; Hess, S.; Topf, F.; Perez-Suarez, D.; Belenkaya, E. S.; Kalegaev, V. V.; Hakkinen, L. V.

    2013-12-01

    This presentation details how the FP7 IMPEx (http://impex-fp7.oeaw.ac.at/) infrastructure helps scientists inter-compare observational and model data in planetary plasma sciences. Within the project, data originate from multiple sources: large observational databases (CDAWeb, AMDA at CDPP, CLWeb at IRAP), simulation databases for hybrid and MHD codes (FMI, LATMOS), and a planetary magnetic field model database and online services (SINP). To navigate this large data ensemble, IMPEx offers a distributed framework in which these data may be visualized, analyzed, and shared thanks to a set of interoperable tools (AMDA, 3DView, CLWeb). A simulation data model, based on SPASE, has been designed to ease data exchange within the infrastructure. From the communication point of view, the Virtual Observatory paradigm is followed and the architecture is based on web services and the IVOA SAMP protocol. These choices enable a high level of versatility, with the goal of allowing other model or data providers to distribute their own resources via the IMPEx infrastructure. A detailed use case based on Mars data and hybrid models will be presented, showing how the tools may be operated synchronously to manipulate heterogeneous data sets. Facilitating the analysis of future MAVEN observations is one possible application of the IMPEx infrastructure.

  19. Database Description - RMOS | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available General information of database. Database name: RMOS. Contact: Research Unit, Shoshi Kikuchi. Database classification: Plant databases - Rice Microarray Data and other Gene Expression Databases. Organism: Oryza sativa (Taxonomy ID: 4530). Referenced databases: Rice Expression Database (RED), Rice full-length cDNA Database (KOME), Rice Genome Integrated Map Database (INE), Rice Mutant Panel Database (Tos17), Rice Genome Annotation Database.

  20. Abstract databases in nuclear medicine; New database for articles not indexed in PubMed

    International Nuclear Information System (INIS)

    Ugrinska, A.; Mustafa, B.

    2004-01-01

    large number of abstracts and servicing a larger user-community. The database is placed at the URL: http://www.nucmediex.net. We hope that nuclear medicine professionals will contribute building this database and that it will be valuable source of information. (author)

  1. Drug interaction databases in medical literature

    DEFF Research Database (Denmark)

    Kongsholm, Gertrud Gansmo; Nielsen, Anna Katrine Toft; Damkier, Per

    2015-01-01

    PURPOSE: It is well documented that drug-drug interaction databases (DIDs) differ substantially with respect to classification of drug-drug interactions (DDIs). The aim of this study was to assess the online available transparency of ownership, funding, information, classifications, staff training, and underlying documentation of open access DIDs and the three most commonly used subscription DIDs in the medical literature. The following parameters were assessed for each of the databases: ownership, classification of interactions, primary information sources, and staff qualification. We compared the overall proportion of yes/no answers from open access and subscription DIDs. The online available transparency of ownership, funding, information, classifications, staff training, and underlying documentation varies substantially among various DIDs. Open access DIDs had a statistically lower score on the parameters assessed.

  2. World-wide ocean optics database WOOD (NODC Accession 0092528)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — WOOD was developed to be a comprehensive publicly-available oceanographic bio-optical database providing global coverage. It includes nearly 250 major data sources...

  3. KALIMER database development (database configuration and design methodology)

    International Nuclear Information System (INIS)

    Jeong, Kwan Seong; Kwon, Young Min; Lee, Young Bum; Chang, Won Pyo; Hahn, Do Hee

    2001-10-01

    The KALIMER database is an advanced database supporting integrated management of Liquid Metal Reactor design technology development through web applications. It consists of a Results Database, an Inter-Office Communication (IOC) system, a 3D CAD database, a Team Cooperation system, and Reserved Documents. The Results Database holds the research results produced during Phase II of the mid- and long-term nuclear R&D program for Liquid Metal Reactor design technology development. IOC is a linkage control system between sub-projects for sharing and integrating KALIMER research results. The 3D CAD database gives a schematic design overview of KALIMER. The Team Cooperation system informs team members about research cooperation and meetings. Finally, KALIMER Reserved Documents was developed to manage collected data and various documents accumulated since the project's accomplishment. This report describes the hardware and software features and the database design methodology for KALIMER.

  4. Direct Power Control for Three-Phase Two-Level Voltage-Source Rectifiers Based on Extended-State Observation

    DEFF Research Database (Denmark)

    Song, Zhanfeng; Tian, Yanjun; Yan, Zhuo

    2016-01-01

    This paper proposes a direct power control strategy for three-phase two-level voltage-source rectifiers based on extended-state observation. Active and reactive powers are directly regulated in the stationary reference frame. Similar to the family of predictive controllers, whose inherent characteristics …

  5. MonetDB/SQL Meets SkyServer: the Challenges of a Scientific Database.

    NARCIS (Netherlands)

    M.G. Ivanova (Milena); N.J. Nes (Niels); R.A. Goncalves (Romulo); M.L. Kersten (Martin)

    2007-01-01

    textabstractThis paper presents our experiences in porting the Sloan Digital Sky Survey(SDSS)/ SkyServer to the state-of-the-art open source database system MonetDB/SQL. SDSS acts as a well-documented benchmark for scientific database management. We have achieved a fully functional prototype for the

  6. Review on recent progress in observations, source identifications and countermeasures of PM2.5.

    Science.gov (United States)

    Liang, Chun-Sheng; Duan, Feng-Kui; He, Ke-Bin; Ma, Yong-Liang

    2016-01-01

    Recently, PM2.5 (atmospheric fine particulate matter with aerodynamic diameter ≤ 2.5 μm) has received so much attention that its observation, source apportionment and countermeasures have been widely studied, owing to its harmful impacts on visibility, mood (mental health), physical health, traffic safety, construction, economy and nature, as well as its complex interaction with climate. A review of PM2.5-related research is therefore necessary. We start with a summary of the chemical composition and characteristics of PM2.5 that contains both macro- and micro-scale observation results and analysis, including the temporal variability of concentrations of PM2.5 and its major components reported in many recent studies. This is closely followed by an overview of source apportionment, covering the composition and sources of PM2.5 in different countries across the six inhabitable continents based on the best available results. Besides summarizing PM2.5 pollution countermeasures in policy, planning, technology and ideology, a World Air Day is proposed to inspire and promote crucial social action in energy saving and emission reduction. Some updated knowledge of the important topics (such as formation and evolution mechanisms of hazes, secondary aerosols, aerosol mass spectrometers, organic tracers, radiocarbon, emissions, solutions for air pollution problems, etc.) is also included in the present review by logically synthesizing the studies. In addition, the key research challenges and future directions are put forward. Despite our efforts, our understanding of the recently reported observations, source identifications and countermeasures of PM2.5 is limited, and subsequent efforts by both the authors and readers are needed. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. AutoLabDB: a substantial open source database schema to support a high-throughput automated laboratory.

    Science.gov (United States)

    Sparkes, Andrew; Clare, Amanda

    2012-05-15

    Modern automated laboratories need substantial data management solutions to both store and make accessible the details of the experiments they perform. To be useful, a modern Laboratory Information Management System (LIMS) should be flexible and easily extensible to support evolving laboratory requirements, and should be based on the solid foundations of a robust, well-designed database. We have developed such a database schema to support an automated laboratory that performs experiments in systems biology and high-throughput screening. We describe the design of the database schema (AutoLabDB), detailing the main features and describing why we believe it will be relevant to LIMS manufacturers or custom builders. This database has been developed to support two large automated Robot Scientist systems over the last 5 years, where it has been used as the basis of an LIMS that helps to manage both the laboratory and all the experiment data produced.
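
    As a minimal sketch of the kind of experiment-tracking schema such a LIMS database might contain (the table and column names here are illustrative assumptions, not the published AutoLabDB schema):

    ```python
    import sqlite3

    # Two-table sketch: experiments and the measurements they produce,
    # linked by a foreign key, as in any LIMS-style relational design.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE experiment (
            id      INTEGER PRIMARY KEY,
            name    TEXT NOT NULL,
            started TEXT
        );
        CREATE TABLE measurement (
            id            INTEGER PRIMARY KEY,
            experiment_id INTEGER NOT NULL REFERENCES experiment(id),
            quantity      TEXT NOT NULL,
            value         REAL
        );
    """)
    conn.execute("INSERT INTO experiment (name, started) VALUES (?, ?)",
                 ("growth_curve_001", "2012-05-15"))
    conn.execute(
        "INSERT INTO measurement (experiment_id, quantity, value) "
        "VALUES (1, 'OD600', 0.42)")
    rows = conn.execute(
        "SELECT e.name, m.quantity, m.value FROM measurement m "
        "JOIN experiment e ON e.id = m.experiment_id").fetchall()
    ```

    The real schema described in the paper is far richer, covering plates, protocols and robot actions; the point of the sketch is only the relational linkage between experiments and their data.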

  8. EFFECTIVELY SEARCHING SPECIMEN AND OBSERVATION DATA WITH TOQE, THE THESAURUS OPTIMIZED QUERY EXPANDER

    Directory of Open Access Journals (Sweden)

    Anton Güntsch

    2009-09-01

    Full Text Available Today’s specimen and observation data portals lack a flexible mechanism, able to link up thesaurus-enabled data sources such as taxonomic checklist databases and expand user queries to related terms, significantly enhancing result sets. The TOQE system (Thesaurus Optimized Query Expander is a REST-like XML web-service implemented in Python and designed for this purpose. Acting as an interface between portals and thesauri, TOQE allows the implementation of specialized portal systems with a set of thesauri supporting its specific focus. It is both easy to use for portal programmers and easy to configure for thesaurus database holders who want to expose their system as a service for query expansions. Currently, TOQE is used in four specimen and observation data portals. The documentation is available from http://search.biocase.org/toqe/.
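
    The query-expansion idea behind TOQE can be sketched as follows; the thesaurus content and function name are invented stand-ins for a real checklist database and the actual service:

    ```python
    # Toy thesaurus mapping a taxon name to related terms (synonyms,
    # child taxa). A real system would query a checklist database.
    THESAURUS = {
        "quercus robur": ["quercus pedunculata"],          # synonym
        "quercus": ["quercus robur", "quercus petraea"],   # child taxa
    }

    def expand_query(term):
        """Return the original term plus all related thesaurus terms,
        following relations transitively."""
        seen, stack = [], [term.lower()]
        while stack:
            t = stack.pop()
            if t not in seen:
                seen.append(t)
                stack.extend(THESAURUS.get(t, []))
        return seen
    ```

    A portal would then send the expanded term list, rather than the single user-supplied name, to the specimen data sources, which is what enlarges the result sets.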

  9. Construction of the Database for Pulsating Variable Stars

    Science.gov (United States)

    Chen, Bing-Qiu; Yang, Ming; Jiang, Bi-Wei

    2012-01-01

    A database for pulsating variable stars has been constructed to facilitate the study of variable stars in China. The database includes about 230,000 variable stars in the Galactic bulge, LMC and SMC, observed over a roughly 10 yr period by the MACHO (MAssive Compact Halo Objects) and OGLE (Optical Gravitational Lensing Experiment) projects. The software used for the construction is LAMP, i.e., Linux + Apache + MySQL + PHP. A web page is provided for searching the photometric data and light curves in the database through the right ascension and declination of an object. Because of the flexibility of this database, more up-to-date data on variable stars can be incorporated conveniently.
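
    The coordinate-based lookup described above might look roughly like this; the table layout and star names are illustrative, not the actual database schema:

    ```python
    import sqlite3

    # Find stars within a small box around a given right ascension and
    # declination (both in degrees). A real service would use a proper
    # angular-distance cut; a box search keeps the sketch simple.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE star (name TEXT, ra REAL, dec REAL)")
    conn.executemany("INSERT INTO star VALUES (?, ?, ?)", [
        ("LMC-CEP-0001", 80.47, -68.78),
        ("SMC-CEP-0002", 12.10, -73.20),
    ])

    def box_search(ra, dec, radius=0.5):
        """Return stars inside a radius-degree box centred on (ra, dec)."""
        return conn.execute(
            "SELECT name FROM star WHERE ra BETWEEN ? AND ? "
            "AND dec BETWEEN ? AND ?",
            (ra - radius, ra + radius, dec - radius, dec + radius)
        ).fetchall()
    ```

    The web page described in the abstract would run a query of this shape and then return the matching photometric data and light curves.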

  10. Database Description - SAHG | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available General information of database. Database name: SAHG. Contact: Chie Motono (Tel: +81-3-3599-8067). Database classification: Structure databases - Protein properties. Organism: Homo sapiens (Taxonomy ID: 9606). Database maintenance site: The Molecular Profiling Research Center for D… Need for user registration: not available.

  11. European Vegetation Archive (EVA): an integrated database of European vegetation plots

    DEFF Research Database (Denmark)

    Chytrý, M; Hennekens, S M; Jiménez-Alfaro, B

    2015-01-01

    The European Vegetation Archive (EVA) is a centralized database of European vegetation plots developed by the IAVS Working Group European Vegetation Survey. It has been in development since 2012 and was first made available for use in research projects in 2014. It stores copies of national and regional vegetation-plot databases on a single software platform. Data storage in EVA does not affect the on-going independent development of the contributing databases, which remain the property of the data contributors. EVA uses a prototype of the database management software TURBOVEG 3 developed for joint management, and serves as a data source for large-scale analyses of European vegetation diversity for both fundamental research and nature conservation applications. Updated information on EVA is available online at http://euroveg.org/eva-database.

  12. Environmental Data Sources

    Data.gov (United States)

    Kansas Data Access and Support Center — This database includes gauging stations, climatic data centers, and storet sites. The accuracy of the locations is dependent on the source data for each of the...

  13. PEP725 Pan European Phenological Database

    Science.gov (United States)

    Koch, E.; Adler, S.; Lipa, W.; Ungersböck, M.; Zach-Hermann, S.

    2010-09-01

    Europe is in the fortunate situation of having a long tradition of phenological networking: the history of collecting phenological data and using them in climatology has its starting point in 1751, when Carl von Linné outlined in his work Philosophia Botanica methods for compiling annual plant calendars of leaf opening, flowering, fruiting and leaf fall together with climatological observations "so as to show how areas differ". In most European countries, phenological observations have recently been carried out routinely for more than 50 years by different governmental and non-governmental organisations following different observation guidelines, with the data stored at different places in different formats. This has seriously hampered pan-European studies, as one has to address many network operators to get access to the data before one can begin to bring them into a uniform style. From 2004 to 2009 the COST action 725 established a Europe-wide data set of phenological observations. The deliverables of this COST action were not only the common phenological database and common observation guidelines: COST725 also helped to trigger a revival of some old networks and to establish new ones, for instance in Sweden. At the end of the COST action in 2009, the database comprised about 8 million records in total from 15 European countries, plus the data from the International Phenological Gardens (IPG). In January 2010 PEP725 began its work as a follow-up project with funding from EUMETNET, the network of European meteorological services, and from ZAMG, the Austrian national meteorological service. PEP725 will not only maintain and update the COST725 database, but also bring in phenological data from the time before 1951, develop better quality-checking procedures and ensure open access to the database.
An attractive webpage will make phenology and climate impacts on vegetation more visible to the public, enabling monitoring of vegetation development.

  14. The Ruby UCSC API: accessing the UCSC genome database using Ruby.

    Science.gov (United States)

    Mishima, Hiroyuki; Aerts, Jan; Katayama, Toshiaki; Bonnal, Raoul J P; Yoshiura, Koh-ichiro

    2012-09-21

    The University of California, Santa Cruz (UCSC) genome database is among the most used sources of genomic annotation in human and other organisms. The database offers an excellent web-based graphical user interface (the UCSC genome browser) and several means for programmatic queries. A simple application programming interface (API) in a scripting language aimed at the biologist was however not yet available. Here, we present the Ruby UCSC API, a library to access the UCSC genome database using Ruby. The API is designed as a BioRuby plug-in and built on the ActiveRecord 3 framework for the object-relational mapping, making writing SQL statements unnecessary. The current version of the API supports databases of all organisms in the UCSC genome database including human, mammals, vertebrates, deuterostomes, insects, nematodes, and yeast. The API uses the bin index (if available) when querying for genomic intervals. The API also supports genomic sequence queries using locally downloaded *.2bit files that are not stored in the official MySQL database. The API is implemented in pure Ruby and is therefore available in different environments and with different Ruby interpreters (including JRuby). Assisted by the straightforward object-oriented design of Ruby and ActiveRecord, the Ruby UCSC API will facilitate biologists to query the UCSC genome database programmatically. The API is available through the RubyGem system. Source code and documentation are available at https://github.com/misshie/bioruby-ucsc-api/ under the Ruby license. Feedback and help is provided via the website at http://rubyucscapi.userecho.com/.
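
    The bin index mentioned above refers to the standard UCSC binning scheme for interval queries (Kent et al., 2002). A commonly reproduced rendering of the range-to-bin calculation, shown here in Python rather than the API's Ruby, is:

    ```python
    def bin_from_range(start, end):
        """Return the smallest UCSC bin fully containing the 0-based,
        half-open genomic interval [start, end)."""
        # Offsets of the five bin levels, from finest (128 kb bins,
        # offset 585) to the single coarsest bin (offset 0).
        bin_offsets = [512 + 64 + 8 + 1, 64 + 8 + 1, 8 + 1, 1, 0]
        start_bin = start >> 17   # finest level covers 2**17 bp per bin
        end_bin = (end - 1) >> 17
        for offset in bin_offsets:
            if start_bin == end_bin:
                return offset + start_bin
            start_bin >>= 3       # each coarser level is 8x wider
            end_bin >>= 3
        raise ValueError("interval out of range for the binning scheme")
    ```

    A query for features overlapping an interval first computes the small set of bins that could contain them and restricts the SQL `WHERE` clause to those bins, avoiding a scan of the whole table.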

  15. The Ruby UCSC API: accessing the UCSC genome database using Ruby

    Science.gov (United States)

    2012-01-01

    Background The University of California, Santa Cruz (UCSC) genome database is among the most used sources of genomic annotation in human and other organisms. The database offers an excellent web-based graphical user interface (the UCSC genome browser) and several means for programmatic queries. A simple application programming interface (API) in a scripting language aimed at the biologist was however not yet available. Here, we present the Ruby UCSC API, a library to access the UCSC genome database using Ruby. Results The API is designed as a BioRuby plug-in and built on the ActiveRecord 3 framework for the object-relational mapping, making writing SQL statements unnecessary. The current version of the API supports databases of all organisms in the UCSC genome database including human, mammals, vertebrates, deuterostomes, insects, nematodes, and yeast. The API uses the bin index—if available—when querying for genomic intervals. The API also supports genomic sequence queries using locally downloaded *.2bit files that are not stored in the official MySQL database. The API is implemented in pure Ruby and is therefore available in different environments and with different Ruby interpreters (including JRuby). Conclusions Assisted by the straightforward object-oriented design of Ruby and ActiveRecord, the Ruby UCSC API will facilitate biologists to query the UCSC genome database programmatically. The API is available through the RubyGem system. Source code and documentation are available at https://github.com/misshie/bioruby-ucsc-api/ under the Ruby license. Feedback and help is provided via the website at http://rubyucscapi.userecho.com/. PMID:22994508

  16. The Ruby UCSC API: accessing the UCSC genome database using Ruby

    Directory of Open Access Journals (Sweden)

    Mishima Hiroyuki

    2012-09-01

    Full Text Available Abstract Background The University of California, Santa Cruz (UCSC) genome database is among the most used sources of genomic annotation in human and other organisms. The database offers an excellent web-based graphical user interface (the UCSC genome browser) and several means for programmatic queries. A simple application programming interface (API) in a scripting language aimed at the biologist was however not yet available. Here, we present the Ruby UCSC API, a library to access the UCSC genome database using Ruby. Results The API is designed as a BioRuby plug-in and built on the ActiveRecord 3 framework for the object-relational mapping, making writing SQL statements unnecessary. The current version of the API supports databases of all organisms in the UCSC genome database including human, mammals, vertebrates, deuterostomes, insects, nematodes, and yeast. The API uses the bin index—if available—when querying for genomic intervals. The API also supports genomic sequence queries using locally downloaded *.2bit files that are not stored in the official MySQL database. The API is implemented in pure Ruby and is therefore available in different environments and with different Ruby interpreters (including JRuby). Conclusions Assisted by the straightforward object-oriented design of Ruby and ActiveRecord, the Ruby UCSC API will facilitate biologists to query the UCSC genome database programmatically. The API is available through the RubyGem system. Source code and documentation are available at https://github.com/misshie/bioruby-ucsc-api/ under the Ruby license. Feedback and help is provided via the website at http://rubyucscapi.userecho.com/.

  17. Using relational databases to collect and store discrete-event simulation results

    DEFF Research Database (Denmark)

    Poderys, Justas; Soler, José

    2016-01-01

    …export the results to a data carrier file and then process the results stored in the file using data-processing software. In this work, we propose to save the simulation results directly from a simulation tool to a computer database. We implemented a link between the discrete-event simulation tool and the database and performed a performance evaluation of 3 different open-source database systems. We show that, with the right choice of database system, simulation results can be collected and exported up to 2.67 times faster, and use 1.78 times less disk space, when compared to using simulation software built…
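
    The direct-to-database approach can be sketched with the standard library's sqlite3; the schema and event loop below are illustrative, not the tool evaluated in the paper:

    ```python
    import sqlite3

    # The simulation writes each result row straight into a database
    # table instead of appending to a results file parsed later.
    conn = sqlite3.connect(":memory:")
    conn.execute("""CREATE TABLE result (
        run_id   INTEGER,
        sim_time REAL,
        metric   TEXT,
        value    REAL)""")

    def record_result(run_id, sim_time, metric, value):
        """Called from the simulation event loop for every sample."""
        conn.execute("INSERT INTO result VALUES (?, ?, ?, ?)",
                     (run_id, sim_time, metric, value))

    # Stand-in event loop producing queue-length samples:
    for t in range(5):
        record_result(1, float(t), "queue_len", t * 2.0)
    conn.commit()

    count, = conn.execute("SELECT COUNT(*) FROM result").fetchone()
    ```

    Once results live in a table, post-processing becomes SQL queries over runs and metrics rather than ad-hoc parsing of result files, which is where the speed and disk-space gains reported above come from.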

  18. Earthquake-induced ground failures in Italy from a reviewed database

    Science.gov (United States)

    Martino, S.; Prestininzi, A.; Romeo, R. W.

    2014-04-01

    A database (Italian acronym CEDIT) of earthquake-induced ground failures in Italy is presented, and the related content is analysed. The catalogue collects data regarding landslides, liquefaction, ground cracks, surface faulting and ground changes triggered by earthquakes of Mercalli epicentral intensity 8 or greater that occurred in the last millennium in Italy. As of January 2013, the CEDIT database has been available online for public use (http://www.ceri.uniroma1.it/cn/gis.jsp) and is presently hosted by the website of the Research Centre for Geological Risks (CERI) of the Sapienza University of Rome. Summary statistics of the database content indicate that 14% of the Italian municipalities have experienced at least one earthquake-induced ground failure and that landslides are the most common ground effects (approximately 45%), followed by ground cracks (32%) and liquefaction (18%). The relationships between ground effects and earthquake parameters such as seismic source energy (earthquake magnitude and epicentral intensity), local conditions (site intensity) and source-to-site distances are also analysed. The analysis indicates that liquefaction, surface faulting and ground changes are much more dependent on the earthquake source energy (i.e. magnitude) than landslides and ground cracks. In contrast, the latter effects are triggered at lower site intensities and greater epicentral distances than the other environmental effects.

  19. [Exploration and construction of the full-text database of acupuncture literature in the Republic of China].

    Science.gov (United States)

    Fei, Lin; Zhao, Jing; Leng, Jiahao; Zhang, Shujian

    2017-10-12

    The ALIPORC full-text database is a specialized full-text database of acupuncture literature from the Republic of China. Since construction started in 2015, the database has been progressively completed, focusing on acupuncture-related books, articles and advertising documents written or published in the Republic of China. The construction of this database aims to achieve shared access to acupuncture medical literature of the Republic of China through diverse retrieval approaches and accurate content presentation; it contributes to scholarly exchange, reduces the paper damage caused by page-turning, and simplifies retrieval of rare literature. The writers explain the database in light of its sources, characteristics and current state of construction, and discuss how to improve the efficiency and integrity of the database and deepen the development of acupuncture literature of the Republic of China.

  20. 2008 Availability and Utilization of Electronic Information Databases ...

    African Journals Online (AJOL)

    Gbaje E.S

    electronic information databases include; research work, to update knowledge in their field of interest and Current awareness. ... be read by a computer device. CD ROMs are ... business and government innovation. Its ... technologies, ideas and management practices ..... sources of information and storage devices bring.

  1. Database Description - PSCDB | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available abase Description General information of database Database name PSCDB Alternative n...rial Science and Technology (AIST) Takayuki Amemiya E-mail: Database classification Structure Databases - Protein structure Database...554-D558. External Links: Original website information Database maintenance site Graduate School of Informat...available URL of Web services - Need for user registration Not available About This Database Database Descri...ption Download License Update History of This Database Site Policy | Contact Us Database Description - PSCDB | LSDB Archive ...

  2. Reactome graph database: Efficient access to complex pathway data

    Science.gov (United States)

    Korninger, Florian; Viteri, Guilherme; Marin-Garcia, Pablo; Ping, Peipei; Wu, Guanming; Stein, Lincoln; D’Eustachio, Peter

    2018-01-01

    Reactome is a free, open-source, open-data, curated and peer-reviewed knowledgebase of biomolecular pathways. One of its main priorities is to provide easy and efficient access to its high quality curated data. At present, biological pathway databases typically store their contents in relational databases. This limits access efficiency because there are performance issues associated with queries traversing highly interconnected data. The same data in a graph database can be queried more efficiently. Here we present the rationale behind the adoption of a graph database (Neo4j) as well as the new ContentService (REST API) that provides access to these data. The Neo4j graph database and its query language, Cypher, provide efficient access to the complex Reactome data model, facilitating easy traversal and knowledge discovery. The adoption of this technology greatly improved query efficiency, reducing the average query time by 93%. The web service built on top of the graph database provides programmatic access to Reactome data by object oriented queries, but also supports more complex queries that take advantage of the new underlying graph-based data storage. By adopting graph database technology we are providing a high performance pathway data resource to the community. The Reactome graph database use case shows the power of NoSQL database engines for complex biological data types. PMID:29377902

  3. Reactome graph database: Efficient access to complex pathway data.

    Directory of Open Access Journals (Sweden)

    Antonio Fabregat

    2018-01-01

    Full Text Available Reactome is a free, open-source, open-data, curated and peer-reviewed knowledgebase of biomolecular pathways. One of its main priorities is to provide easy and efficient access to its high quality curated data. At present, biological pathway databases typically store their contents in relational databases. This limits access efficiency because there are performance issues associated with queries traversing highly interconnected data. The same data in a graph database can be queried more efficiently. Here we present the rationale behind the adoption of a graph database (Neo4j) as well as the new ContentService (REST API) that provides access to these data. The Neo4j graph database and its query language, Cypher, provide efficient access to the complex Reactome data model, facilitating easy traversal and knowledge discovery. The adoption of this technology greatly improved query efficiency, reducing the average query time by 93%. The web service built on top of the graph database provides programmatic access to Reactome data by object oriented queries, but also supports more complex queries that take advantage of the new underlying graph-based data storage. By adopting graph database technology we are providing a high performance pathway data resource to the community. The Reactome graph database use case shows the power of NoSQL database engines for complex biological data types.

  4. Reactome graph database: Efficient access to complex pathway data.

    Science.gov (United States)

    Fabregat, Antonio; Korninger, Florian; Viteri, Guilherme; Sidiropoulos, Konstantinos; Marin-Garcia, Pablo; Ping, Peipei; Wu, Guanming; Stein, Lincoln; D'Eustachio, Peter; Hermjakob, Henning

    2018-01-01

    Reactome is a free, open-source, open-data, curated and peer-reviewed knowledgebase of biomolecular pathways. One of its main priorities is to provide easy and efficient access to its high quality curated data. At present, biological pathway databases typically store their contents in relational databases. This limits access efficiency because there are performance issues associated with queries traversing highly interconnected data. The same data in a graph database can be queried more efficiently. Here we present the rationale behind the adoption of a graph database (Neo4j) as well as the new ContentService (REST API) that provides access to these data. The Neo4j graph database and its query language, Cypher, provide efficient access to the complex Reactome data model, facilitating easy traversal and knowledge discovery. The adoption of this technology greatly improved query efficiency, reducing the average query time by 93%. The web service built on top of the graph database provides programmatic access to Reactome data by object oriented queries, but also supports more complex queries that take advantage of the new underlying graph-based data storage. By adopting graph database technology we are providing a high performance pathway data resource to the community. The Reactome graph database use case shows the power of NoSQL database engines for complex biological data types.
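    The efficiency argument above (traversals over highly interconnected data are cheap in a graph model, costly as repeated relational self-joins) can be illustrated with a toy sketch in Python; the pathway names and adjacency-list layout are invented for illustration and are not Reactome's actual schema.

```python
# Toy adjacency-list "graph store": finding every sub-event of a pathway is a
# single depth-first walk, with no join per level of nesting.
# Pathway names are invented; this is not Reactome's schema or query language.

def descendants(graph, node):
    """Return every event reachable from `node`."""
    seen, stack = set(), [node]
    while stack:
        for child in graph.get(stack.pop(), []):
            if child not in seen:
                seen.add(child)
                stack.append(child)
    return seen

pathways = {
    "Signal Transduction": ["MAPK cascade", "PI3K signalling"],
    "MAPK cascade": ["RAF activation"],
    "PI3K signalling": ["AKT phosphorylation"],
}

print(sorted(descendants(pathways, "Signal Transduction")))
# ['AKT phosphorylation', 'MAPK cascade', 'PI3K signalling', 'RAF activation']
```

    In Neo4j, a traversal like this is expressed as a Cypher path pattern rather than a chain of SQL joins, which is the kind of access pattern behind the quoted reduction in average query time.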

  5. Data Preparation Process for the Buildings Performance Database

    Energy Technology Data Exchange (ETDEWEB)

    Walter, Travis; Dunn, Laurel; Mercado, Andrea; Brown, Richard E.; Mathew, Paul

    2014-06-30

    The Buildings Performance Database (BPD) includes empirically measured data from a variety of data sources with varying degrees of data quality and data availability. The purpose of the data preparation process is to maintain data quality within the database and to ensure that all database entries have sufficient data for meaningful analysis and for the database API. Data preparation is a systematic process of mapping data into the Building Energy Data Exchange Specification (BEDES), cleansing data using a set of criteria and rules of thumb, and deriving values such as energy totals and dominant asset types. The data preparation process takes the most effort and time; therefore, most of the cleansing process has been automated. The process also needs to adapt as more data is contributed to the BPD and as building technologies evolve over time. The data preparation process is an essential step between data contributed by providers and data published to the public in the BPD.
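    The map / cleanse / derive sequence described above can be sketched in Python; the field names, the BEDES-style vocabulary and the rejection rule are illustrative assumptions, not the BPD's actual specification.

```python
# Sketch of the map -> cleanse -> derive pipeline. Field names, the target
# vocabulary and the rejection rule are illustrative, not actual BEDES terms
# or BPD cleansing criteria.

FIELD_MAP = {"sqft": "gross_floor_area", "elec_kwh": "electricity_use", "gas_kwh": "gas_use"}

def prepare(record):
    # 1. Map provider fields onto the common vocabulary.
    mapped = {FIELD_MAP.get(key, key): value for key, value in record.items()}
    # 2. Cleanse: drop entries without sufficient data for meaningful analysis.
    if mapped.get("gross_floor_area", 0) <= 0:
        return None
    # 3. Derive values such as energy totals.
    mapped["total_energy"] = mapped.get("electricity_use", 0) + mapped.get("gas_use", 0)
    return mapped

print(prepare({"sqft": 5000, "elec_kwh": 12000, "gas_kwh": 3000}))
# {'gross_floor_area': 5000, 'electricity_use': 12000, 'gas_use': 3000, 'total_energy': 15000}
```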

  6. Primary Numbers Database for ATLAS Detector Description Parameters

    CERN Document Server

    Vaniachine, A; Malon, D; Nevski, P; Wenaus, T

    2003-01-01

    We present the design and the status of the database for detector description parameters in ATLAS experiment. The ATLAS Primary Numbers are the parameters defining the detector geometry and digitization in simulations, as well as certain reconstruction parameters. Since the detailed ATLAS detector description needs more than 10,000 such parameters, a preferred solution is to have a single verified source for all these data. The database stores the data dictionary for each parameter collection object, providing schema evolution support for object-based retrieval of parameters. The same Primary Numbers are served to many different clients accessing the database: the ATLAS software framework Athena, the Geant3 heritage framework Atlsim, the Geant4 developers framework FADS/Goofy, the generator of XML output for detector description, and several end-user clients for interactive data navigation, including web-based browsers and ROOT. The choice of the MySQL database product for the implementation provides addition...

  7. A European Flood Database: facilitating comprehensive flood research beyond administrative boundaries

    Directory of Open Access Journals (Sweden)

    J. Hall

    2015-06-01

    Full Text Available The current work addresses one of the key building blocks towards an improved understanding of flood processes and associated changes in flood characteristics and regimes in Europe: the development of a comprehensive, extensive European flood database. The presented work results from ongoing cross-border research collaborations initiated with data collection and joint interpretation in mind. A detailed account of the current state, characteristics and spatial and temporal coverage of the European Flood Database, is presented. At this stage, the hydrological data collection is still growing and consists at this time of annual maximum and daily mean discharge series, from over 7000 hydrometric stations of various data series lengths. Moreover, the database currently comprises data from over 50 different data sources. The time series have been obtained from different national and regional data sources in a collaborative effort of a joint European flood research agreement based on the exchange of data, models and expertise, and from existing international data collections and open source websites. These ongoing efforts are contributing to advancing the understanding of regional flood processes beyond individual country boundaries and to a more coherent flood research in Europe.
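    Deriving an annual maximum series from a daily mean discharge series, the two kinds of data the database holds, can be sketched as follows; the station data are invented for illustration.

```python
# Annual maximum series from daily mean discharge (station data invented).
from collections import defaultdict

def annual_maxima(daily):
    """daily: iterable of (ISO date, discharge in m^3/s) -> {year: annual max}."""
    maxima = defaultdict(lambda: float("-inf"))
    for date, discharge in daily:
        year = int(date[:4])
        maxima[year] = max(maxima[year], discharge)
    return dict(maxima)

series = [("2001-03-01", 120.0), ("2001-08-15", 310.5), ("2002-04-02", 95.2)]
print(annual_maxima(series))  # {2001: 310.5, 2002: 95.2}
```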

  8. Database Description - ASTRA | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available abase Description General information of database Database name ASTRA Alternative n...tics Journal Search: Contact address Database classification Nucleotide Sequence Databases - Gene structure,...3702 Taxonomy Name: Oryza sativa Taxonomy ID: 4530 Database description The database represents classified p...(10):1211-6. External Links: Original website information Database maintenance site National Institute of Ad... for user registration Not available About This Database Database Description Dow

  9. Database system selection for marketing strategies support in information systems

    Directory of Open Access Journals (Sweden)

    František Dařena

    2007-01-01

    Full Text Available In today’s dynamically changing environment, marketing has a significant role. Creating successful marketing strategies requires a large amount of high-quality information of various kinds and data types. A powerful database management system is a necessary condition for supporting the creation of marketing strategies. The paper briefly describes the field of marketing strategies and specifies the features that database systems should provide to support these strategies. Major commercial (Oracle, DB2, MS SQL, Sybase) and open-source (PostgreSQL, MySQL, Firebird) databases are then examined for their accordance with these characteristics, and a comparison is made. The results are useful for deciding on the acquisition of a database system during the specification of an information system’s architecture.

  10. The eNanoMapper database for nanomaterial safety information.

    Science.gov (United States)

    Jeliazkova, Nina; Chomenidis, Charalampos; Doganis, Philip; Fadeel, Bengt; Grafström, Roland; Hardy, Barry; Hastings, Janna; Hegi, Markus; Jeliazkov, Vedrin; Kochev, Nikolay; Kohonen, Pekka; Munteanu, Cristian R; Sarimveis, Haralambos; Smeets, Bart; Sopasakis, Pantelis; Tsiliki, Georgia; Vorgrimmler, David; Willighagen, Egon

    2015-01-01

    The NanoSafety Cluster, a cluster of projects funded by the European Commision, identified the need for a computational infrastructure for toxicological data management of engineered nanomaterials (ENMs). Ontologies, open standards, and interoperable designs were envisioned to empower a harmonized approach to European research in nanotechnology. This setting provides a number of opportunities and challenges in the representation of nanomaterials data and the integration of ENM information originating from diverse systems. Within this cluster, eNanoMapper works towards supporting the collaborative safety assessment for ENMs by creating a modular and extensible infrastructure for data sharing, data analysis, and building computational toxicology models for ENMs. The eNanoMapper database solution builds on the previous experience of the consortium partners in supporting diverse data through flexible data storage, open source components and web services. We have recently described the design of the eNanoMapper prototype database along with a summary of challenges in the representation of ENM data and an extensive review of existing nano-related data models, databases, and nanomaterials-related entries in chemical and toxicogenomic databases. This paper continues with a focus on the database functionality exposed through its application programming interface (API), and its use in visualisation and modelling. Considering the preferred community practice of using spreadsheet templates, we developed a configurable spreadsheet parser facilitating user friendly data preparation and data upload. We further present a web application able to retrieve the experimental data via the API and analyze it with multiple data preprocessing and machine learning algorithms. We demonstrate how the eNanoMapper database is used to import and publish online ENM and assay data from several data sources, how the "representational state transfer" (REST) API enables building user friendly
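    The configurable spreadsheet parser mentioned above can be sketched as a column-mapping step: a small configuration tells the parser which spreadsheet column feeds which database field, so differently laid-out community templates can populate one schema. The column and field names below are illustrative assumptions, not eNanoMapper's actual template format.

```python
# A configurable parser: `config` maps database fields to the column headers
# used by a particular spreadsheet template. Headers and fields are invented,
# not eNanoMapper's actual template layout.
import csv
import io

def parse_template(text, config):
    """Parse CSV text, renaming columns per config: {field: column_header}."""
    rows = csv.DictReader(io.StringIO(text))
    return [{field: row[column] for field, column in config.items()} for row in rows]

sheet = "Material,Assay result\nTiO2-NP,0.42\n"
config = {"material": "Material", "value": "Assay result"}
print(parse_template(sheet, config))  # [{'material': 'TiO2-NP', 'value': '0.42'}]
```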

  11. Healthcare databases in Europe for studying medicine use and safety during pregnancy

    OpenAIRE

    Charlton, Rachel A.; Neville, Amanda J.; Jordan, Sue; Pierini, Anna; Damase-Michel, Christine; Klungsøyr, Kari; Andersen, Anne-Marie Nybo; Hansen, Anne Vinkel; Gini, Rosa; Bos, Jens H.J.; Puccini, Aurora; Hurault-Delarue, Caroline; Brooks, Caroline J.; De Jong-van den Berg, Lolkje T.V.; de Vries, Corinne S.

    2014-01-01

    Purpose The aim of this study was to describe a number of electronic healthcare databases in Europe in terms of the population covered, the source of the data captured and the availability of data on key variables required for evaluating medicine use and medicine safety during pregnancy. Methods A sample of electronic healthcare databases that captured pregnancies and prescription data was selected on the basis of contacts within the EUROCAT network. For each participating database, a data...

  12. Database Description - RPD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available ase Description General information of database Database name RPD Alternative name Rice Proteome Database...titute of Crop Science, National Agriculture and Food Research Organization Setsuko Komatsu E-mail: Database... classification Proteomics Resources Plant databases - Rice Organism Taxonomy Name: Oryza sativa Taxonomy ID: 4530 Database... description Rice Proteome Database contains information on protei...and entered in the Rice Proteome Database. The database is searchable by keyword,

  13. Database Description - PLACE | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available abase Description General information of database Database name PLACE Alternative name A Database...Kannondai, Tsukuba, Ibaraki 305-8602, Japan National Institute of Agrobiological Sciences E-mail : Databas...e classification Plant databases Organism Taxonomy Name: Tracheophyta Taxonomy ID: 58023 Database...99, Vol.27, No.1 :297-300 External Links: Original website information Database maintenance site National In...- Need for user registration Not available About This Database Database Descripti

  14. Development of a GIS Snowstorm Database

    Science.gov (United States)

    Squires, M. F.

    2010-12-01

    This paper describes the development of a GIS Snowstorm Database (GSDB) at NOAA’s National Climatic Data Center. The snowstorm database is a collection of GIS layers and tabular information for 471 snowstorms between 1900 and 2010. Each snowstorm has undergone automated and manual quality control. The beginning and ending date of each snowstorm is specified. The original purpose of this data was to serve as input for NCDC’s new Regional Snowfall Impact Scale (ReSIS). However, this data is being preserved and used to investigate the impacts of snowstorms on society. GSDB is used to summarize the impact of snowstorms on transportation (interstates) and various classes of facilities (roads, schools, hospitals, etc.). GSDB can also be linked to other sources of impacts such as insurance loss information and Storm Data. Thus the snowstorm database is suited for many different types of users including the general public, decision makers, and researchers. This paper summarizes quality control issues associated with using snowfall data, methods used to identify the starting and ending dates of a storm, and examples of the tables that combine snowfall and societal data.
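    One simple way to identify the starting and ending dates of a storm from a daily snowfall series is to take the first and last days on which snowfall exceeds a trace threshold; a minimal sketch with an invented threshold and data, not NCDC's actual method:

```python
# Storm window as the first and last days with snowfall above a trace
# threshold. Threshold and observations are invented for illustration.

def storm_window(daily_snowfall, trace=0.1):
    """daily_snowfall: list of (date, inches) -> (start_date, end_date) or None."""
    snowy = [date for date, snow in daily_snowfall if snow > trace]
    return (snowy[0], snowy[-1]) if snowy else None

obs = [("2010-02-04", 0.0), ("2010-02-05", 6.2),
       ("2010-02-06", 10.8), ("2010-02-07", 0.05)]
print(storm_window(obs))  # ('2010-02-05', '2010-02-06')
```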

  15. High energy X-ray observations of COS-B gamma-ray sources from OSO-8

    Science.gov (United States)

    Dolan, J. F.; Crannell, C. J.; Dennis, B. R.; Frost, K. J.; Orwig, L. E.; Caraveo, P. A.

    1985-01-01

    During the three years between satellite launch in June 1975 and turn-off in October 1978, the high energy X-ray spectrometer on board OSO-8 observed nearly all of the COS-B gamma-ray source positions given in the 2CG catalog (Swanenburg et al., 1981). An X-ray source was detected at energies above 20 keV at the 6-sigma level of significance in the gamma-ray error box containing 2CG342 - 02 and at the 3-sigma level of significance in the error boxes containing 2CG065 + 00, 2CG195 + 04, and 2CG311 - 01. No definite association between the X-ray and gamma-ray sources can be made from these data alone. Upper limits are given for the 2CG sources from which no X-ray flux was detected above 20 keV.

  16. Database Description - JSNP | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available base Description General information of database Database name JSNP Alternative nam...n Science and Technology Agency Creator Affiliation: Contact address E-mail : Database...sapiens Taxonomy ID: 9606 Database description A database of about 197,000 polymorphisms in Japanese populat...1):605-610 External Links: Original website information Database maintenance site Institute of Medical Scien...er registration Not available About This Database Database Description Download License Update History of This Database

  17. Database Description - RED | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available ase Description General information of database Database name RED Alternative name Rice Expression Database...enome Research Unit Shoshi Kikuchi E-mail : Database classification Plant databases - Rice Database classifi...cation Microarray, Gene Expression Organism Taxonomy Name: Oryza sativa Taxonomy ID: 4530 Database descripti... Article title: Rice Expression Database: the gateway to rice functional genomics...nt Science (2002) Dec 7 (12):563-564 External Links: Original website information Database maintenance site

  18. InverPep: A database of invertebrate antimicrobial peptides.

    Science.gov (United States)

    Gómez, Esteban A; Giraldo, Paula; Orduz, Sergio

    2017-03-01

    The aim of this work was to construct InverPep, a database specialised in experimentally validated antimicrobial peptides (AMPs) from invertebrates. AMP data contained in InverPep were manually curated from other databases and the scientific literature. MySQL was integrated with the development platform Laravel; this framework allows PHP code to be combined with HTML and was used to design the interface of the InverPep web page. InverPep contains 18 separate fields, including InverPep code, phylum and species source, peptide name, sequence, peptide length, secondary structure, molar mass, charge, isoelectric point, hydrophobicity, Boman index, aliphatic index and percentage of hydrophobic amino acids. CALCAMPI, an algorithm that calculates the physicochemical properties of multiple peptides simultaneously, was programmed in the Perl language. To date, InverPep contains 702 experimentally validated AMPs from invertebrate species. All of the peptides contain information associated with their source, physicochemical properties, secondary structure, biological activity and links to external literature. Most AMPs in InverPep have a length between 10 and 50 amino acids, a positive charge, a Boman index between 0 and 2 kcal/mol, and 30-50% hydrophobic amino acids. InverPep includes 33 AMPs not reported in other databases. In addition, CALCAMPI and a statistical analysis of InverPep data are presented. The InverPep database is available in English and Spanish. InverPep is a useful database for studying invertebrate AMPs, and its information could be used for the design of new peptides. The user-friendly interface of InverPep and its information can be freely accessed via a web-based browser at http://ciencias.medellin.unal.edu.co/gruposdeinvestigacion/prospeccionydisenobiomoleculas/InverPep/public/home_en. Copyright © 2016 International Society for Chemotherapy of Infection and Cancer. Published by Elsevier Ltd. All rights reserved.
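    Two of the physicochemical properties listed above (peptide length and percentage of hydrophobic amino acids) are simple to compute; a minimal Python sketch, using a common hydrophobic amino-acid set that may differ from the one CALCAMPI uses:

```python
# Length and percentage of hydrophobic amino acids for a peptide sequence.
# The hydrophobic set (A, V, I, L, M, F, W, C) is a common convention and may
# differ from the set CALCAMPI actually uses.

HYDROPHOBIC = set("AVILMFWC")

def peptide_props(sequence):
    hydrophobic_count = sum(residue in HYDROPHOBIC for residue in sequence)
    return {
        "length": len(sequence),
        "pct_hydrophobic": round(100.0 * hydrophobic_count / len(sequence), 1),
    }

print(peptide_props("GIGKFLHSAKKFGKAFVGEIMNS"))
# {'length': 23, 'pct_hydrophobic': 43.5}
```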

  19. Seven-year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Planets and Celestial Calibration Sources

    Science.gov (United States)

    Weiland, J. L.; Odegard, N.; Hill, R. S.; Wollack, E.; Hinshaw, G.; Greason, M. R.; Jarosik, N.; Page, L.; Bennett, C. L.; Dunkley, J.; Gold, B.; Halpern, M.; Kogut, A.; Komatsu, E.; Larson, D.; Limon, M.; Meyer, S. S.; Nolta, M. R.; Smith, K. M.; Spergel, D. N.; Tucker, G. S.; Wright, E. L.

    2011-02-01

    We present WMAP seven-year observations of bright sources which are often used as calibrators at microwave frequencies. Ten objects are studied in five frequency bands (23-94 GHz): the outer planets (Mars, Jupiter, Saturn, Uranus, and Neptune) and five fixed celestial sources (Cas A, Tau A, Cyg A, 3C274, and 3C58). The seven-year analysis of Jupiter provides temperatures which are within 1σ of the previously published WMAP five-year values, with slightly tighter constraints on variability with orbital phase (0.2% ± 0.4%), and limits (but no detections) on linear polarization. Observed temperatures for both Mars and Saturn vary significantly with viewing geometry. Scaling factors are provided which, when multiplied by the Wright Mars thermal model predictions at 350 μm, reproduce WMAP seasonally averaged observations of Mars within ~2%. An empirical model is described which fits brightness variations of Saturn due to geometrical effects and can be used to predict the WMAP observations to within 3%. Seven-year mean temperatures for Uranus and Neptune are also tabulated. Uncertainties in Uranus temperatures are 3%-4% in the 41, 61, and 94 GHz bands; the smallest uncertainty for Neptune is 8% for the 94 GHz band. Intriguingly, the spectrum of Uranus appears to show a dip at ~30 GHz of unidentified origin, although the feature is not of high statistical significance. Flux densities for the five selected fixed celestial sources are derived from the seven-year WMAP sky maps and are tabulated for Stokes I, Q, and U, along with polarization fraction and position angle. Fractional uncertainties for the Stokes I fluxes are typically 1% to 3%. Source variability over the seven-year baseline is also estimated. Significant secular decrease is seen for Cas A and Tau A: our results are consistent with a frequency-independent decrease of about 0.53% per year for Cas A and 0.22% per year for Tau A. We present WMAP polarization data with uncertainties of a few percent for Tau A
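    The quoted secular decreases (about 0.53% per year for Cas A and 0.22% per year for Tau A) can be applied as a simple multiplicative model to project relative flux over the mission baseline; a sketch, not the WMAP fitting procedure:

```python
# Projecting relative flux under a constant fractional decrease per year.
# A simple compounding model, not the WMAP fitting procedure.

def projected_flux(flux0, pct_per_year, years):
    return flux0 * (1.0 - pct_per_year / 100.0) ** years

# Relative flux of Cas A after the seven-year baseline (roughly a 3.7% drop):
print(projected_flux(1.0, 0.53, 7.0))
```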

  20. Database Description - ConfC | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available abase Description General information of database Database name ConfC Alternative name Database...amotsu Noguchi Tel: 042-495-8736 E-mail: Database classification Structure Database...s - Protein structure Structure Databases - Small molecules Structure Databases - Nucleic acid structure Database... services - Need for user registration - About This Database Database Description Download License Update History of This Database... Site Policy | Contact Us Database Description - ConfC | LSDB Archive ...

  1. Exploration of a Vision for Actor Database Systems

    DEFF Research Database (Denmark)

    Shah, Vivek

    of these services. Existing popular approaches to building these services either use an in-memory database system or an actor runtime. We observe that these approaches have complementary strengths and weaknesses. In this dissertation, we propose the integration of actor programming models in database systems....... In doing so, we lay down a vision for a new class of systems called actor database systems. To explore this vision, this dissertation crystallizes the notion of an actor database system by defining its feature set in light of current application and hardware trends. In order to explore the viability...... of the outlined vision, a new programming model named Reactors has been designed to enrich classic relational database programming models with logical actor programming constructs. To support the reactor programming model, a high-performance in-memory multi-core OLTP database system named REACTDB has been built...

  2. An international database of radionuclide concentration ratios for wildlife: development and uses

    International Nuclear Information System (INIS)

    Copplestone, D.; Beresford, N.A.; Brown, J.E.; Yankovich, T.

    2013-01-01

    A key element of most systems for assessing the impact of radionuclides on the environment is a means to estimate the transfer of radionuclides to organisms. To facilitate this, an international wildlife transfer database has been developed to provide an online, searchable compilation of transfer parameters in the form of equilibrium-based whole-organism to media concentration ratios. This paper describes the derivation of the wildlife transfer database, the key data sources it contains and highlights the applications for the data. -- Highlights: • An online database containing wildlife radionuclide transfer parameters is described. • Database underpins recent ICRP and IAEA data wildlife transfer compilations. • Database contains equilibrium based whole organism to media concentration ratios
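    The transfer parameter compiled in the database is an equilibrium concentration ratio: the whole-organism activity concentration divided by the activity concentration in the reference medium. A minimal sketch with invented numbers:

```python
# Equilibrium concentration ratio: whole-organism activity concentration
# divided by the concentration in the reference medium (soil or water).
# The example values are invented for illustration.

def concentration_ratio(organism_bq_per_kg, medium_bq_per_kg):
    if medium_bq_per_kg <= 0:
        raise ValueError("medium concentration must be positive")
    return organism_bq_per_kg / medium_bq_per_kg

# 50 Bq/kg in the organism against 200 Bq/kg in soil:
print(concentration_ratio(50.0, 200.0))  # 0.25
```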

  3. Linking optical and infrared observations with gravitational wave sources through transient variability

    International Nuclear Information System (INIS)

    Stubbs, C W

    2008-01-01

    Optical and infrared observations have thus far detected more celestial cataclysms than have been seen in gravity waves (GW). This argues that we should search for gravity wave signatures that correspond to transient variables seen at optical wavelengths, at precisely known positions. There is an unknown time delay between the optical and gravitational transient, but knowing the source location precisely specifies the corresponding time delays across the gravitational antenna network as a function of the GW-to-optical arrival time difference. Optical searches should detect virtually all supernovae that are plausible gravitational radiation sources. The transient optical signature expected from merging compact objects is not as well understood, but there are good reasons to expect detectable transient optical/IR emission from most of these sources as well. The next generation of deep wide-field surveys (for example PanSTARRS and LSST) will be sensitive to subtle optical variability, but we need to fill the 'blind spots' that exist in the galactic plane, and for optically bright transient sources. In particular, a galactic plane variability survey at λ∼ 2 μm seems worthwhile. Science would benefit from closer coordination between the various optical survey projects and the gravity wave community
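    The claim that a precisely known source position specifies the inter-site time delays follows from plane-wave geometry: for a wavefront travelling along unit vector n̂, the delay between two detectors separated by baseline vector d is (d · n̂)/c. A sketch with invented detector coordinates:

```python
# Plane-wave arrival delay between two detector sites: (d . n_hat) / c, where
# d is the baseline vector and n_hat the propagation direction. The site
# coordinates are invented round numbers, not real detector positions.

C = 299_792_458.0  # speed of light, m/s

def arrival_delay(site_a, site_b, n_hat):
    """Seconds by which the wavefront reaches site_b after site_a."""
    baseline = [b - a for a, b in zip(site_a, site_b)]
    return sum(d * n for d, n in zip(baseline, n_hat)) / C

# Wave travelling along +x, sites 3000 km apart along x (about 10 ms delay):
print(arrival_delay((0.0, 0.0, 0.0), (3.0e6, 0.0, 0.0), (1.0, 0.0, 0.0)))
```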

  4. Observations of the unidentified gamma-ray source TeV J2032+4130 by Veritas

    Energy Technology Data Exchange (ETDEWEB)

    Aliu, E.; Errando, M. [Department of Physics and Astronomy, Barnard College, Columbia University, NY 10027 (United States); Aune, T. [Department of Physics and Astronomy, University of California, Los Angeles, CA 90095 (United States); Behera, B.; Chen, X.; Federici, S. [DESY, Platanenallee 6, D-15738 Zeuthen (Germany); Beilicke, M.; Buckley, J. H.; Bugaev, V. [Department of Physics, Washington University, St. Louis, MO 63130 (United States); Benbow, W.; Cerruti, M. [Fred Lawrence Whipple Observatory, Harvard-Smithsonian Center for Astrophysics, Amado, AZ 85645 (United States); Berger, K. [Department of Physics and Astronomy and the Bartol Research Institute, University of Delaware, Newark, DE 19716 (United States); Bird, R. [School of Physics, University College Dublin, Belfield, Dublin 4 (Ireland); Cardenzana, J. V. [Department of Physics and Astronomy, Iowa State University, Ames, IA 50011 (United States); Ciupik, L. [Astronomy Department, Adler Planetarium and Astronomy Museum, Chicago, IL 60605 (United States); Connolly, M. P. [School of Physics, National University of Ireland Galway, University Road, Galway (Ireland); Cui, W. [Department of Physics, Purdue University, West Lafayette, IN 47907 (United States); Duke, C. [Department of Physics, Grinnell College, Grinnell, IA 50112-1690 (United States); Dumm, J. [School of Physics and Astronomy, University of Minnesota, Minneapolis, MN 55455 (United States); Falcone, A., E-mail: pratik.majumdar@saha.ac.in, E-mail: gareth.hughes@desy.de [Department of Astronomy and Astrophysics, 525 Davey Lab, Pennsylvania State University, University Park, PA 16802 (United States); and others

    2014-03-01

    TeV J2032+4130 was the first unidentified source discovered at very high energies (VHEs; E > 100 GeV), with no obvious counterpart in any other wavelength. It is also the first extended source to be observed in VHE gamma rays. Following its discovery, intensive observational campaigns have been carried out in all wavelengths in order to understand the nature of the object, which have met with limited success. We report here on a deep observation of TeV J2032+4130 based on 48.2 hr of data taken from 2009 to 2012 by the Very Energetic Radiation Imaging Telescope Array System experiment. The source is detected at 8.7 standard deviations (σ) and is found to be extended and asymmetric, with a width of 9.5' ± 1.2' along the major axis and 4.0' ± 0.5' along the minor axis. The spectrum is well described by a differential power law with an index of 2.10 ± 0.14(stat) ± 0.21(sys) and a normalization of (9.5 ± 1.6(stat) ± 2.2(sys)) × 10⁻¹³ TeV⁻¹ cm⁻² s⁻¹ at 1 TeV. We interpret these results in the context of multiwavelength scenarios which particularly favor the pulsar wind nebula interpretation.
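    The reported spectrum is a differential power law, dN/dE = N0 (E / 1 TeV)^(-Γ), with the best-fit index and normalization quoted above; evaluating it at a few energies (statistical and systematic uncertainties ignored):

```python
# Differential power law dN/dE = N0 * (E / 1 TeV)**(-GAMMA) with the best-fit
# values from the text (errors ignored).

N0 = 9.5e-13   # TeV^-1 cm^-2 s^-1 at 1 TeV
GAMMA = 2.10

def dnde(energy_tev):
    return N0 * energy_tev ** (-GAMMA)

print(dnde(1.0))   # 9.5e-13: the normalization, by construction
print(dnde(10.0))  # ~7.5e-15: roughly two decades lower at 10 TeV
```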

  5. Iterative observer based method for source localization problem for Poisson equation in 3D

    KAUST Repository

    Majeed, Muhammad Usman; Laleg-Kirati, Taous-Meriem

    2017-01-01

A state-observer-based method is developed to solve the point source localization problem for the Poisson equation in a 3D rectangular prism with available boundary data. The technique requires a weighted sum of solutions of multiple boundary data

  6. Database management systems understanding and applying database technology

    CERN Document Server

    Gorman, Michael M

    1991-01-01

Database Management Systems: Understanding and Applying Database Technology focuses on the processes, methodologies, techniques, and approaches involved in database management systems (DBMSs). The book first takes a look at ANSI database standards and DBMS applications and components. Discussion focuses on application components and DBMS components, implementing the dynamic relationship application, problems and benefits of dynamic relationship DBMSs, the nature of a dynamic relationship application, ANSI/NDL, and DBMS standards. The manuscript then ponders on logical database, interrogation, and phy

  7. The National Solar Radiation Database (NSRDB)

    Energy Technology Data Exchange (ETDEWEB)

    Sengupta, Manajit; Habte, Aron; Lopez, Anthony; Xie, Yu; Molling, Christine; Gueymard, Christian

    2017-03-13

    This presentation provides a high-level overview of the National Solar Radiation Database (NSRDB), including sensing, measurement and forecasting, and discusses observations that are needed for research and product development.

  8. The Steward Observatory asteroid relational database

    Science.gov (United States)

    Sykes, Mark V.; Alvarezdelcastillo, Elizabeth M.

    1991-01-01

The Steward Observatory Asteroid Relational Database (SOARD) was created as a flexible tool for undertaking studies of asteroid populations and sub-populations, probing the biases intrinsic to asteroid databases, ascertaining the completeness of data pertaining to specific problems, aiding in the development of observational programs, and developing pedagogical materials. To date, SOARD has compiled an extensive list of data available on asteroids and made it accessible through a single menu-driven database program. Users may obtain tailored lists of asteroid properties for any subset of asteroids, or output files suitable for plotting spectral data on individual asteroids. The program has online help as well as user and programmer documentation manuals. SOARD has already provided data to fulfill requests from members of the astronomical community, and it continues to grow as data are added to the database and new features are added to the program.

  9. The Footprint Database and Web Services of the Herschel Space Observatory

    Science.gov (United States)

    Dobos, László; Varga-Verebélyi, Erika; Verdugo, Eva; Teyssier, David; Exter, Katrina; Valtchanov, Ivan; Budavári, Tamás; Kiss, Csaba

    2016-10-01

Data from the Herschel Space Observatory are freely available to the public, but no uniformly processed catalogue of the observations has been published so far. To date, the Herschel Science Archive does not contain the exact sky coverage (footprint) of individual observations and supports searches for measurements based on bounding circles only. Drawing on previous experience in implementing footprint databases, we built the Herschel Footprint Database and Web Services for the Herschel Space Observatory to provide efficient search capabilities for typical astronomical queries. The database was designed with the following main goals in mind: (a) provide a unified data model for meta-data of all instruments and observational modes, (b) quickly find observations covering a selected object and its neighbourhood, (c) quickly find every observation in a larger area of the sky, (d) allow for finding solar system objects crossing observation fields. As a first step, we developed a unified data model of observations of all three Herschel instruments for all pointing and instrument modes. Then, using telescope pointing information and observational meta-data, we compiled a database of footprints. As opposed to methods using pixellation of the sphere, we represent sky coverage in an exact geometric form allowing for precise area calculations. For easier handling of Herschel observation footprints with rather complex shapes, two algorithms were implemented to reduce the outline. Furthermore, a new visualisation tool to plot footprints with various spherical projections was developed. Indexing of the footprints using the Hierarchical Triangular Mesh makes it possible to quickly find observations based on sky coverage, time and meta-data. The database is accessible via a web site, http://herschel.vo.elte.hu, and also as a set of REST web service functions, which makes it readily usable from programming environments such as Python or IDL. The web service allows downloading footprint data
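Representing sky coverage "in an exact geometric form allowing for precise area calculations", as described above, comes down to spherical geometry. A toy sketch (not the database's actual code) computing the solid angle of a spherical triangle from its vertices with L'Huilier's theorem:

```python
import math

def angle(u, v):
    """Angle between two unit vectors (great-circle side length)."""
    return math.acos(max(-1.0, min(1.0, sum(a * b for a, b in zip(u, v)))))

def spherical_triangle_area(p1, p2, p3):
    """Solid angle (steradians) of a spherical triangle via L'Huilier's theorem."""
    a, b, c = angle(p2, p3), angle(p1, p3), angle(p1, p2)
    s = (a + b + c) / 2.0
    t = (math.tan(s / 2) * math.tan((s - a) / 2)
         * math.tan((s - b) / 2) * math.tan((s - c) / 2))
    return 4.0 * math.atan(math.sqrt(max(t, 0.0)))

# One octant of the unit sphere covers 4*pi/8 = pi/2 steradians.
octant = spherical_triangle_area((1, 0, 0), (0, 1, 0), (0, 0, 1))
print(abs(octant - math.pi / 2) < 1e-12)  # True
```

A complex footprint outline would be decomposed into such triangles and the solid angles summed; the Hierarchical Triangular Mesh index mentioned above serves only to narrow the candidate set before exact tests.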

  10. Database Description - RMG | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available ase Description General information of database Database name RMG Alternative name ...raki 305-8602, Japan National Institute of Agrobiological Sciences E-mail : Database... classification Nucleotide Sequence Databases Organism Taxonomy Name: Oryza sativa Japonica Group Taxonomy ID: 39947 Database...rnal: Mol Genet Genomics (2002) 268: 434–445 External Links: Original website information Database...available URL of Web services - Need for user registration Not available About This Database Database Descri

  11. SOURCE REGIONS OF THE TYPE II RADIO BURST OBSERVED DURING A CME–CME INTERACTION ON 2013 MAY 22

    International Nuclear Information System (INIS)

    Mäkelä, P.; Reiner, M. J.; Akiyama, S.; Gopalswamy, N.; Krupar, V.

    2016-01-01

We report on our study of radio source regions during the type II radio burst on 2013 May 22, based on direction-finding analysis of the Wind/WAVES and STEREO/WAVES (SWAVES) radio observations at decameter–hectometric wavelengths. The type II emission showed an enhancement that coincided with the interaction of two coronal mass ejections (CMEs) launched in sequence along closely spaced trajectories. The triangulation of the SWAVES source directions placed the ecliptic projections of the radio sources near the line connecting the Sun and the STEREO-A spacecraft. The WAVES and SWAVES source directions revealed shifts in the latitude of the radio source, indicating that the spatial location of the dominant source of the type II emission varies during the CME–CME interaction. The WAVES source directions close to 1 MHz frequencies matched the location of the leading edge of the primary CME seen in the images of the LASCO/C3 coronagraph. This correspondence of spatial locations at both wavelengths confirms that the CME–CME interaction region is the source of the type II enhancement. Comparison of radio and white-light observations also showed that at lower frequencies scattering significantly affects radio wave propagation.
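Triangulation of the kind described above can be reduced, in the ecliptic-plane projection, to intersecting two observer lines of sight. A simplified 2-D illustration (the positions and directions below are invented; the real analysis works with measured source directions and their uncertainties):

```python
def triangulate(p1, d1, p2, d2):
    """Intersect two 2-D lines of sight p + t*d; returns the crossing point or None."""
    # Solve p1 + t1*d1 = p2 + t2*d2 as a 2x2 linear system via Cramer's rule.
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    if abs(det) < 1e-12:
        return None  # parallel lines of sight: no unique crossing
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (rx * (-d2[1]) - (-d2[0]) * ry) / det
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])

# Two observers at (0, 0) and (2, 0) both see a source at (1, 1).
print(triangulate((0, 0), (1, 1), (2, 0), (-1, 1)))  # (1.0, 1.0)
```
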

  12. Relational databases

    CERN Document Server

    Bell, D A

    1986-01-01

    Relational Databases explores the major advances in relational databases and provides a balanced analysis of the state of the art in relational databases. Topics covered include capture and analysis of data placement requirements; distributed relational database systems; data dependency manipulation in database schemata; and relational database support for computer graphics and computer aided design. This book is divided into three sections and begins with an overview of the theory and practice of distributed systems, using the example of INGRES from Relational Technology as illustration. The

  13. Database Description - DGBY | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available base Description General information of database Database name DGBY Alternative name Database...EL: +81-29-838-8066 E-mail: Database classification Microarray Data and other Gene Expression Databases Orga...nism Taxonomy Name: Saccharomyces cerevisiae Taxonomy ID: 4932 Database descripti...-called phenomics). We uploaded these data on this website which is designated DGBY(Database for Gene expres...ma J, Ando A, Takagi H. Journal: Yeast. 2008 Mar;25(3):179-90. External Links: Original website information Database

  14. Database Description - KOME | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available base Description General information of database Database name KOME Alternative nam... Sciences Plant Genome Research Unit Shoshi Kikuchi E-mail : Database classification Plant databases - Rice ...Organism Taxonomy Name: Oryza sativa Taxonomy ID: 4530 Database description Information about approximately ...Hayashizaki Y, Kikuchi S. Journal: PLoS One. 2007 Nov 28; 2(11):e1235. External Links: Original website information Database...OS) Rice mutant panel database (Tos17) A Database of Plant Cis-acting Regulatory

  15. Landslide databases for applied landslide impact research: the example of the landslide database for the Federal Republic of Germany

    Science.gov (United States)

    Damm, Bodo; Klose, Martin

    2014-05-01

This contribution presents an initiative to develop a national landslide database for the Federal Republic of Germany. It highlights the structure and contents of the landslide database and outlines its major data sources and the strategy of information retrieval. Furthermore, the contribution exemplifies the database's potential in applied landslide impact research, including statistics of landslide damage, repair, and mitigation. Thanks to systematic regional data compilation, the landslide database offers a differentiated data pool of more than 5,000 data sets and over 13,000 single data files. It dates back to 1137 AD and covers landslide sites throughout Germany. In seven main data blocks, the landslide database stores, besides information on landslide types, dimensions, and processes, additional data on soil and bedrock properties, geomorphometry, and climatic or other major triggering events. A peculiarity of this landslide database is its storage of data sets on land use effects, damage impacts, hazard mitigation, and landslide costs. Compilation of landslide data is based on a two-tier strategy of data collection. The first step of information retrieval includes systematic web content mining and exploration of online archives of emergency agencies, fire and police departments, and news organizations. Using web and RSS feeds, and soon also a focused web crawler, this enables effective nationwide data collection for recent landslides. On the basis of this information, in-depth data mining is performed to deepen and diversify the data pool in key landslide areas. This makes it possible to gather detailed landslide information from, amongst others, agency records, geotechnical reports, climate statistics, maps, and satellite imagery. Landslide data are extracted from these information sources using a mix of methods, including statistical techniques, imagery analysis, and qualitative text interpretation. The landslide database is currently being migrated to a spatial database system
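The first-tier, feed-based retrieval step described above might look like the following sketch using only the standard library; the feed snippet and the keyword list are invented for illustration and are not the project's actual sources or filter terms:

```python
import xml.etree.ElementTree as ET

# Invented RSS 2.0 snippet standing in for a news-agency feed.
SAMPLE_FEED = """<rss version="2.0"><channel>
  <item><title>Landslide blocks federal road near Siegen</title>
        <pubDate>Mon, 13 Jan 2014 08:00:00 +0100</pubDate></item>
  <item><title>Storm front crosses northern Germany</title>
        <pubDate>Mon, 13 Jan 2014 09:30:00 +0100</pubDate></item>
</channel></rss>"""

KEYWORDS = ("landslide", "rockfall", "debris flow")  # hypothetical filter terms

def candidate_events(feed_xml):
    """Return titles of feed items mentioning a landslide-related keyword."""
    root = ET.fromstring(feed_xml)
    return [item.findtext("title")
            for item in root.iter("item")
            if any(k in item.findtext("title", "").lower() for k in KEYWORDS)]

print(candidate_events(SAMPLE_FEED))  # ['Landslide blocks federal road near Siegen']
```

Matches from such a keyword pass would feed the second-tier, in-depth data mining described in the abstract.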

  16. Evolution and applications of plant pathway resources and databases

    DEFF Research Database (Denmark)

    Sucaet, Yves; Deva, Taru

    2011-01-01

    Plants are important sources of food and plant products are essential for modern human life. Plants are increasingly gaining importance as drug and fuel resources, bioremediation tools and as tools for recombinant technology. Considering these applications, database infrastructure for plant model...... systems deserves much more attention. Study of plant biological pathways, the interconnection between these pathways and plant systems biology on the whole has in general lagged behind human systems biology. In this article we review plant pathway databases and the resources that are currently available...

  17. Report from the 2nd Workshop on Extremely Large Databases

    Directory of Open Access Journals (Sweden)

    Jacek Becla

    2009-03-01

Full Text Available The complexity and sophistication of large-scale analytics in science and industry have advanced dramatically in recent years. Analysts are struggling to use complex techniques such as time series analysis and classification algorithms because their familiar, powerful tools are not scalable and cannot effectively use scalable database systems. The 2nd Extremely Large Databases (XLDB) workshop was organized to understand these issues, examine their implications, and brainstorm possible solutions. The design of a new open-source science database, SciDB, which emerged from the first workshop in this series, was also debated. This paper is the final report of the discussions and activities at this workshop.

  18. 5S ribosomal RNA database Y2K.

    Science.gov (United States)

    Szymanski, M; Barciszewska, M Z; Barciszewski, J; Erdmann, V A

    2000-01-01

This paper presents the updated version (Y2K) of the database of ribosomal 5S ribonucleic acids (5S rRNA) and their genes (5S rDNA), http://rose.man.poznan.pl/5SData/index.html. This edition of the database contains 1985 primary structures of 5S rRNA and 5S rDNA. They include 60 archaebacterial, 470 eubacterial, 63 plastid, nine mitochondrial and 1383 eukaryotic sequences. The nucleotide sequences of the 5S rRNAs or 5S rDNAs are divided according to the taxonomic position of the source organisms.

  19. Some characteristics of atmospheric gravity waves observed by radio-interferometry

    Directory of Open Access Journals (Sweden)

    Claude Mercier

Full Text Available Observations of atmospheric acoustic-gravity waves (AGWs) are considered through their effect on the horizontal gradient G of the slant total electron content (slant TEC), which can be directly obtained from two-dimensional radio-interferometric observations of cosmic radio sources with the Nançay radioheliograph (2.2°E, 47.3°N). Azimuths of propagation can be deduced (modulo 180°). The total database amounts to about 800 h of observations at various elevations, local times and seasons. The main results are:

    (a) AGWs are partially directive, confirming our previous results.

    (b) The propagation azimuths, considered globally, are widely scattered, with a preference towards the south.

    (c) They show a bimodal time distribution, with preferential directions towards the SE during daytime and towards the SW during night-time (rather than a clockwise rotation, as reported by previous authors).

    (d) The periods are scattered but are larger during night-time than during daytime by about 60%.

    (e) The effects observed with the solar radio sources are significantly stronger than with other radio sources (particularly at higher elevations), showing the role of geometry in line-of-sight-integrated observations.

  20. Data analysis and pattern recognition in multiple databases

    CERN Document Server

    Adhikari, Animesh; Pedrycz, Witold

    2014-01-01

Pattern recognition in data is a well-known classical problem that falls under the ambit of data analysis. As we need to handle different data, the nature of patterns, their recognition, and the types of data analyses are bound to change. As the number of data collection channels has increased in recent times and become more diversified, many real-world data mining tasks can easily acquire multiple databases from various sources. In these cases, data mining becomes more challenging, for several essential reasons. We may encounter sensitive data originating from different sources - those cannot be amalgamated. Even if we are allowed to place different data together, we are certainly not able to analyse them when local identities of patterns are required to be retained. Thus, pattern recognition in multiple databases gives rise to a suite of new, challenging problems different from those encountered before. Association rule mining, global pattern discovery, and mining patterns of select items provide different...

  1. Advanced SPARQL querying in small molecule databases.

    Science.gov (United States)

    Galgonek, Jakub; Hurt, Tomáš; Michlíková, Vendula; Onderka, Petr; Schwarz, Jan; Vondrášek, Jiří

    2016-01-01

    In recent years, the Resource Description Framework (RDF) and the SPARQL query language have become more widely used in the area of cheminformatics and bioinformatics databases. These technologies allow better interoperability of various data sources and powerful searching facilities. However, we identified several deficiencies that make usage of such RDF databases restrictive or challenging for common users. We extended a SPARQL engine to be able to use special procedures inside SPARQL queries. This allows the user to work with data that cannot be simply precomputed and thus cannot be directly stored in the database. We designed an algorithm that checks a query against data ontology to identify possible user errors. This greatly improves query debugging. We also introduced an approach to visualize retrieved data in a user-friendly way, based on templates describing visualizations of resource classes. To integrate all of our approaches, we developed a simple web application. Our system was implemented successfully, and we demonstrated its usability on the ChEBI database transformed into RDF form. To demonstrate procedure call functions, we employed compound similarity searching based on OrChem. The application is publicly available at https://bioinfo.uochb.cas.cz/projects/chemRDF.

  2. UbiProt: a database of ubiquitylated proteins

    Directory of Open Access Journals (Sweden)

    Kondratieva Ekaterina V

    2007-04-01

Full Text Available Abstract Background Post-translational protein modification with ubiquitin, or ubiquitylation, is one of the hottest topics in modern biology due to its dramatic impact on diverse metabolic pathways and its involvement in the pathogenesis of severe human diseases. A great number of eukaryotic proteins have been found to be ubiquitylated. However, data about particular ubiquitylated proteins are rather disembodied. Description To fill a general need for collecting and systematizing experimental data concerning ubiquitylation, we have developed a new resource, the UbiProt Database, a knowledgebase of ubiquitylated proteins. The database contains retrievable information about the overall characteristics of a particular protein, ubiquitylation features, related ubiquitylation and de-ubiquitylation machinery, and literature references reflecting experimental evidence of ubiquitylation. UbiProt is available at http://ubiprot.org.ru for free. Conclusion The UbiProt Database is a public resource offering comprehensive information on ubiquitylated proteins. The resource can serve as a general reference source both for researchers in the ubiquitin field and for those who deal with particular ubiquitylated proteins of interest to them. Further development of the UbiProt Database is expected to be of common interest for research groups involved in studies of the ubiquitin system.

  3. The Coral Trait Database, a curated database of trait information for coral species from the global oceans

    Science.gov (United States)

    Madin, Joshua S.; Anderson, Kristen D.; Andreasen, Magnus Heide; Bridge, Tom C. L.; Cairns, Stephen D.; Connolly, Sean R.; Darling, Emily S.; Diaz, Marcela; Falster, Daniel S.; Franklin, Erik C.; Gates, Ruth D.; Hoogenboom, Mia O.; Huang, Danwei; Keith, Sally A.; Kosnik, Matthew A.; Kuo, Chao-Yang; Lough, Janice M.; Lovelock, Catherine E.; Luiz, Osmar; Martinelli, Julieta; Mizerek, Toni; Pandolfi, John M.; Pochon, Xavier; Pratchett, Morgan S.; Putnam, Hollie M.; Roberts, T. Edward; Stat, Michael; Wallace, Carden C.; Widman, Elizabeth; Baird, Andrew H.

    2016-03-01

    Trait-based approaches advance ecological and evolutionary research because traits provide a strong link to an organism’s function and fitness. Trait-based research might lead to a deeper understanding of the functions of, and services provided by, ecosystems, thereby improving management, which is vital in the current era of rapid environmental change. Coral reef scientists have long collected trait data for corals; however, these are difficult to access and often under-utilized in addressing large-scale questions. We present the Coral Trait Database initiative that aims to bring together physiological, morphological, ecological, phylogenetic and biogeographic trait information into a single repository. The database houses species- and individual-level data from published field and experimental studies alongside contextual data that provide important framing for analyses. In this data descriptor, we release data for 56 traits for 1547 species, and present a collaborative platform on which other trait data are being actively federated. Our overall goal is for the Coral Trait Database to become an open-source, community-led data clearinghouse that accelerates coral reef research.

  4. The Coral Trait Database, a curated database of trait information for coral species from the global oceans.

    Science.gov (United States)

    Madin, Joshua S; Anderson, Kristen D; Andreasen, Magnus Heide; Bridge, Tom C L; Cairns, Stephen D; Connolly, Sean R; Darling, Emily S; Diaz, Marcela; Falster, Daniel S; Franklin, Erik C; Gates, Ruth D; Harmer, Aaron; Hoogenboom, Mia O; Huang, Danwei; Keith, Sally A; Kosnik, Matthew A; Kuo, Chao-Yang; Lough, Janice M; Lovelock, Catherine E; Luiz, Osmar; Martinelli, Julieta; Mizerek, Toni; Pandolfi, John M; Pochon, Xavier; Pratchett, Morgan S; Putnam, Hollie M; Roberts, T Edward; Stat, Michael; Wallace, Carden C; Widman, Elizabeth; Baird, Andrew H

    2016-03-29

    Trait-based approaches advance ecological and evolutionary research because traits provide a strong link to an organism's function and fitness. Trait-based research might lead to a deeper understanding of the functions of, and services provided by, ecosystems, thereby improving management, which is vital in the current era of rapid environmental change. Coral reef scientists have long collected trait data for corals; however, these are difficult to access and often under-utilized in addressing large-scale questions. We present the Coral Trait Database initiative that aims to bring together physiological, morphological, ecological, phylogenetic and biogeographic trait information into a single repository. The database houses species- and individual-level data from published field and experimental studies alongside contextual data that provide important framing for analyses. In this data descriptor, we release data for 56 traits for 1547 species, and present a collaborative platform on which other trait data are being actively federated. Our overall goal is for the Coral Trait Database to become an open-source, community-led data clearinghouse that accelerates coral reef research.

  5. Databases

    Directory of Open Access Journals (Sweden)

    Nick Ryan

    2004-01-01

    Full Text Available Databases are deeply embedded in archaeology, underpinning and supporting many aspects of the subject. However, as well as providing a means for storing, retrieving and modifying data, databases themselves must be a result of a detailed analysis and design process. This article looks at this process, and shows how the characteristics of data models affect the process of database design and implementation. The impact of the Internet on the development of databases is examined, and the article concludes with a discussion of a range of issues associated with the recording and management of archaeological data.

  6. NVST Data Archiving System Based On FastBit NoSQL Database

    Science.gov (United States)

    Liu, Ying-bo; Wang, Feng; Ji, Kai-fan; Deng, Hui; Dai, Wei; Liang, Bo

    2014-06-01

The New Vacuum Solar Telescope (NVST) is a 1-meter vacuum solar telescope that aims to observe the fine structures of active regions on the Sun. The main tasks of the NVST are high-resolution imaging and spectral observations, including measurements of the solar magnetic field. The NVST has collected more than 20 million FITS files since it began routine observations in 2012 and produces up to 120 thousand observational records in a day. Given the large number of files, their effective archiving and retrieval becomes a critical and urgent problem. In this study, we implement a new data archiving system for the NVST based on the FastBit Not Only Structured Query Language (NoSQL) database. Compared to a relational database (i.e., MySQL; My Structured Query Language), the FastBit database shows distinctive advantages in indexing and querying performance. In a large-scale database of 40 million records, multi-field combined queries against the FastBit database respond about 15 times faster, fully meeting the requirements of the NVST. Our study brings a new idea for massive astronomical data archiving and should contribute to the design of data management systems for other astronomical telescopes.
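FastBit's query performance rests on bitmap indexing. The following toy sketch shows the core idea behind the multi-field combined queries mentioned above (a query answered by a single bitwise AND over per-value bitmasks); it is an illustration of the technique, not FastBit's actual implementation, and the record fields are invented:

```python
def build_bitmap_index(records, field):
    """Map each distinct value of `field` to a bitmask over record positions."""
    index = {}
    for i, rec in enumerate(records):
        index[rec[field]] = index.get(rec[field], 0) | (1 << i)
    return index

# Invented stand-ins for observational file meta-data records.
records = [
    {"instrument": "imaging", "quality": "good"},
    {"instrument": "spectral", "quality": "good"},
    {"instrument": "imaging", "quality": "poor"},
]
by_instrument = build_bitmap_index(records, "instrument")
by_quality = build_bitmap_index(records, "quality")

# A multi-field query reduces to one bitwise AND over the per-value bitmasks.
hits = by_instrument["imaging"] & by_quality["good"]
print([records[i] for i in range(len(records)) if hits >> i & 1])
```

Real bitmap-index engines additionally compress the bitmasks (e.g. with run-length schemes) so that indexes over tens of millions of records stay compact.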

  7. Final Report on Atomic Database Project

    International Nuclear Information System (INIS)

    Yuan, J.; Gui, Z.; Moses, G.A.

    2006-01-01

Atomic physics in hot dense plasmas is essential for understanding the radiative properties of plasmas, whether produced terrestrially, such as in fusion energy research, or in space, such as in the study of the core of the Sun. Various kinds of atomic data are needed for spectrum analysis or for radiation hydrodynamics simulations. There are many atomic databases accessible publicly through the web, such as CHIANTI (an atomic database for spectroscopic diagnostics of astrophysical plasmas) from the Naval Research Laboratory [1], the collaborative development of TOPbase (The Opacity Project for astrophysically abundant elements) [2], the NIST atomic spectra database from NIST [3], TOPS Opacities from Los Alamos National Laboratory [4], etc. Most of these databases are specific to astrophysics; they provide energy levels, oscillator strengths f, and photoionization cross sections for astrophysical elements (Z = 1-26). There are abundant spectrum data sources for spectral analysis of low-Z elements. For opacities used for radiation transport, TOPS Opacities from LANL is the most valuable source. The database provides mixed opacities for elements from H (Z = 1) to Zn (Z = 30). The data in TOPS Opacities are calculated by the code LEDCOP. In the Fusion Technology Institute, we have also developed several different models to calculate atomic data and opacities, such as the detailed term accounting (DTA) model and the unresolved transition array (UTA) model. We use the DTA model for low-Z materials, since an enormous number of transitions would need to be computed for medium- or high-Z materials. For medium- and high-Z materials, we use the UTA model, which simulates the enormous number of transitions by using a single line profile to represent a collection of transition arrays. These models have been implemented in our computing codes JATBASE and RSSUTA. For plasma populations, two models are used in JATBASE: one is the local thermodynamic equilibrium (LTE) model and the second is the non-LTE model. For the

  8. TOPDOM: database of conservatively located domains and motifs in proteins.

    Science.gov (United States)

    Varga, Julia; Dobson, László; Tusnády, Gábor E

    2016-09-01

The TOPDOM database, originally created as a collection of domains and motifs located consistently on the same side of the membrane in α-helical transmembrane proteins, has been updated and extended by also taking into consideration consistently localized domains and motifs in globular proteins. By taking advantage of the recently developed CCTOP algorithm to determine the type of a protein and predict topology in the case of transmembrane proteins, by applying a thorough search for domains and motifs, and by utilizing the most up-to-date versions of all source databases, we managed to achieve a 6-fold increase in the size of the whole database and a 2-fold increase in the number of transmembrane proteins. The TOPDOM database is available at http://topdom.enzim.hu. The webpage uses the common Apache, PHP5 and MySQL software to provide the user interface for accessing and searching the database. The database itself is generated on a high-performance computer. Contact: tusnady.gabor@ttk.mta.hu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.

  9. Atlas of Iberian water beetles (ESACIB database).

    Science.gov (United States)

    Sánchez-Fernández, David; Millán, Andrés; Abellán, Pedro; Picazo, Félix; Carbonell, José A; Ribera, Ignacio

    2015-01-01

The ESACIB ('EScarabajos ACuáticos IBéricos') database is provided, including all available distributional data of Iberian and Balearic water beetles from the literature up to 2013, as well as from museum and private collections, PhD theses, and other unpublished sources. The database contains 62,015 records with associated geographic data (10×10 km UTM squares) for 488 species and subspecies of water beetles, 120 of them endemic to the Iberian Peninsula and eight to the Balearic Islands. This database was used for the elaboration of the "Atlas de los Coleópteros Acuáticos de España Peninsular". In this dataset, data for 15 additional species have been added: 11 that occur in the Balearic Islands or mainland Portugal but not in peninsular Spain, and another four, with mainly terrestrial habits, within the genus Helophorus (for taxonomic coherence). The complete dataset is provided in Darwin Core Archive format.
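A Darwin Core Archive stores occurrence records as a delimited text table with standard column names. A minimal sketch of reading such a table (the two example rows are invented, and the actual ESACIB dataset records 10×10 km UTM squares rather than the point coordinates shown here):

```python
import csv
import io

# Invented two-row stand-in for a Darwin Core occurrence table (tab-separated).
SAMPLE = (
    "scientificName\tdecimalLatitude\tdecimalLongitude\n"
    "Helophorus sp.\t39.5\t2.9\n"
    "Agabus sp.\t42.6\t0.5\n"
)

def read_occurrences(text):
    """Parse a Darwin Core occurrence table into dicts, coercing coordinates."""
    rows = list(csv.DictReader(io.StringIO(text), delimiter="\t"))
    for row in rows:
        row["decimalLatitude"] = float(row["decimalLatitude"])
        row["decimalLongitude"] = float(row["decimalLongitude"])
    return rows

occ = read_occurrences(SAMPLE)
print(len(occ), occ[0]["scientificName"])  # 2 Helophorus sp.
```

In a full archive, the occurrence file ships inside a zip together with a `meta.xml` descriptor that maps columns to Darwin Core terms.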

  10. Atlas of Iberian water beetles (ESACIB database)

    Science.gov (United States)

    Sánchez-Fernández, David; Millán, Andrés; Abellán, Pedro; Picazo, Félix; Carbonell, José A.; Ribera, Ignacio

    2015-01-01

Abstract The ESACIB ('EScarabajos ACuáticos IBéricos') database is provided, including all available distributional data of Iberian and Balearic water beetles from the literature up to 2013, as well as from museum and private collections, PhD theses, and other unpublished sources. The database contains 62,015 records with associated geographic data (10×10 km UTM squares) for 488 species and subspecies of water beetles, 120 of them endemic to the Iberian Peninsula and eight to the Balearic Islands. This database was used for the elaboration of the "Atlas de los Coleópteros Acuáticos de España Peninsular". In this dataset, data for 15 additional species have been added: 11 that occur in the Balearic Islands or mainland Portugal but not in peninsular Spain, and another four, with mainly terrestrial habits, within the genus Helophorus (for taxonomic coherence). The complete dataset is provided in Darwin Core Archive format. PMID:26448717

  11. Nuclear Data and Reaction Rate Databases in Nuclear Astrophysics

    Science.gov (United States)

    Lippuner, Jonas

    2018-06-01

    Astrophysical simulations and models require a large variety of micro-physics data, such as equation of state tables, atomic opacities, properties of nuclei, and nuclear reaction rates. Some of the required data is experimentally accessible, but the extreme conditions present in many astrophysical scenarios cannot be reproduced in the laboratory and thus theoretical models are needed to supplement the empirical data. Collecting data from various sources and making them available as a database in a unified format is a formidable task. I will provide an overview of the data requirements in astrophysics with an emphasis on nuclear astrophysics. I will then discuss some of the existing databases, the science they enable, and their limitations. Finally, I will offer some thoughts on how to design a useful database.

  12. Search for neutrinos from transient sources with the ANTARES telescope and optical follow-up observations

    International Nuclear Information System (INIS)

    Ageron, Michel; Al Samarai, Imen; Akerlof, Carl; Basa, Stéphane; Bertin, Vincent; Boer, Michel; Brunner, Juergen; Busto, Jose; Dornic, Damien; Klotz, Alain; Schussler, Fabian; Vallage, Bertrand; Vecchi, Manuela; Zheng, Weikang

    2012-01-01

The ANTARES telescope is well suited to detect neutrinos produced in astrophysical transient sources, as it can observe a full hemisphere of the sky at all times with a duty cycle close to unity and an angular resolution better than 0.5°. Potential sources include gamma-ray bursts (GRBs), core-collapse supernovae (SNe), and flaring active galactic nuclei (AGNs). To enhance the sensitivity of ANTARES to such sources, a new detection method based on coincident observations of neutrinos and optical signals has been developed. A fast online muon track reconstruction is used to trigger a network of small automatic optical telescopes. Such alerts are generated once or twice per month for special events, such as two or more neutrinos coincident in time and direction, or single neutrinos of very high energy. Since February 2009, ANTARES has sent 37 alert triggers to the TAROT and ROTSE telescope networks, 27 of which have been followed up. First results of the optical image analysis searching for GRBs are presented.

  13. Search for neutrinos from transient sources with the ANTARES telescope and optical follow-up observations

    Science.gov (United States)

    Ageron, Michel; Al Samarai, Imen; Akerlof, Carl; Basa, Stéphane; Bertin, Vincent; Boer, Michel; Brunner, Juergen; Busto, Jose; Dornic, Damien; Klotz, Alain; Schussler, Fabian; Vallage, Bertrand; Vecchi, Manuela; Zheng, Weikang

    2012-11-01

The ANTARES telescope is well suited to detect neutrinos produced in astrophysical transient sources, as it can observe a full hemisphere of the sky at all times with a duty cycle close to unity and an angular resolution better than 0.5°. Potential sources include gamma-ray bursts (GRBs), core-collapse supernovae (SNe), and flaring active galactic nuclei (AGNs). To enhance the sensitivity of ANTARES to such sources, a new detection method based on coincident observations of neutrinos and optical signals has been developed. A fast online muon track reconstruction is used to trigger a network of small automatic optical telescopes. Such alerts are generated once or twice per month for special events, such as two or more neutrinos coincident in time and direction, or single neutrinos of very high energy. Since February 2009, ANTARES has sent 37 alert triggers to the TAROT and ROTSE telescope networks, 27 of which have been followed up. First results of the optical image analysis searching for GRBs are presented.

  14. Search for neutrinos from transient sources with the ANTARES telescope and optical follow-up observations

    Energy Technology Data Exchange (ETDEWEB)

    Ageron, Michel [CPPM, CNRS/IN2P3 - Universite de Mediterranee, 163 avenue de Luminy, 13288 Marseille Cedex 09 (France); Al Samarai, Imen, E-mail: samarai@cppm.in2p3.fr [CPPM, CNRS/IN2P3 - Universite de Mediterranee, 163 avenue de Luminy, 13288 Marseille Cedex 09 (France); Akerlof, Carl [Randall Laboratory of Physics, University of Michigan, 450 Church Street, Ann Arbor, MI 48109-1040 (United States); Basa, Stephane [LAM, BP8, Traverse du siphon, 13376 Marseille Cedex 12 (France); Bertin, Vincent [CPPM, CNRS/IN2P3 - Universite de Mediterranee, 163 avenue de Luminy, 13288 Marseille Cedex 09 (France); Boer, Michel [OHP, 04870 Saint Michel de l' Observatoire (France); Brunner, Juergen; Busto, Jose; Dornic, Damien [CPPM, CNRS/IN2P3 - Universite de Mediterranee, 163 avenue de Luminy, 13288 Marseille Cedex 09 (France); Klotz, Alain [OHP, 04870 Saint Michel de l' Observatoire (France); IRAP, 9 avenue du colonel Roche, 31028 Toulouse Cedex 4 (France); Schussler, Fabian; Vallage, Bertrand [CEA-IRFU, centre de Saclay, 91191 Gif-sur-Yvette (France); Vecchi, Manuela [CPPM, CNRS/IN2P3 - Universite de Mediterranee, 163 avenue de Luminy, 13288 Marseille Cedex 09 (France); Zheng, Weikang [Randall Laboratory of Physics, University of Michigan, 450 Church Street, Ann Arbor, MI 48109-1040 (United States)

    2012-11-11

The ANTARES telescope is well suited to detect neutrinos produced in astrophysical transient sources, as it can observe a full hemisphere of the sky at all times with a duty cycle close to unity and an angular resolution better than 0.5°. Potential sources include gamma-ray bursts (GRBs), core-collapse supernovae (SNe), and flaring active galactic nuclei (AGNs). To enhance the sensitivity of ANTARES to such sources, a new detection method based on coincident observations of neutrinos and optical signals has been developed. A fast online muon track reconstruction is used to trigger a network of small automatic optical telescopes. Such alerts are generated once or twice per month for special events, such as two or more neutrinos coincident in time and direction, or single neutrinos of very high energy. Since February 2009, ANTARES has sent 37 alert triggers to the TAROT and ROTSE telescope networks, 27 of which have been followed up. First results of the optical image analysis searching for GRBs are presented.

  15. Pesticide Information Sources in the United States.

    Science.gov (United States)

    Alston, Patricia Gayle

    1992-01-01

    Presents an overview of electronic and published sources on pesticides. Includes sources such as databases, CD-ROMs, books, journals, brochures, pamphlets, fact sheets, hotlines, courses, electronic mail, and electronic bulletin boards. (MCO)

  16. Techniques to Access Databases and Integrate Data for Hydrologic Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Whelan, Gene; Tenney, Nathan D.; Pelton, Mitchell A.; Coleman, Andre M.; Ward, Duane L.; Droppo, James G.; Meyer, Philip D.; Dorow, Kevin E.; Taira, Randal Y.

    2009-06-17

    This document addresses techniques to access and integrate data for defining site-specific conditions and behaviors associated with ground-water and surface-water radionuclide transport applicable to U.S. Nuclear Regulatory Commission reviews. Environmental models typically require input data from multiple internal and external sources that may include, but are not limited to, stream and rainfall gage data, meteorological data, hydrogeological data, habitat data, and biological data. These data may be retrieved from a variety of organizations (e.g., federal, state, and regional) and source types (e.g., HTTP, FTP, and databases). Available data sources relevant to hydrologic analyses for reactor licensing are identified and reviewed. The data sources described can be useful to define model inputs and parameters, including site features (e.g., watershed boundaries, stream locations, reservoirs, site topography), site properties (e.g., surface conditions, subsurface hydraulic properties, water quality), and site boundary conditions, input forcings, and extreme events (e.g., stream discharge, lake levels, precipitation, recharge, flood and drought characteristics). Available software tools for accessing established databases, retrieving the data, and integrating it with models were identified and reviewed. The emphasis in this review was on existing software products with minimal required modifications to enable their use with the FRAMES modeling framework. The ability of four of these tools to access and retrieve the identified data sources was reviewed. These four software tools were the Hydrologic Data Acquisition and Processing System (HDAPS), Integrated Water Resources Modeling System (IWRMS) External Data Harvester, Data for Environmental Modeling Environmental Data Download Tool (D4EM EDDT), and the FRAMES Internet Database Tools. 
The IWRMS External Data Harvester and the D4EM EDDT were identified as the most promising tools based on their ability to access and
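Much of what the reviewed tools do reduces to fetching delimited text from an agency server and mapping it to model inputs. As a hedged sketch of that step, the snippet below parses a small payload in the style of the USGS tab-delimited (RDB) format; the sample content and column names are made up, and a real harvester would fetch the text over HTTP or FTP first.

```python
# Illustrative parser for a USGS-style tab-delimited (RDB) payload:
# '#' lines are comments, then a header row, then a column-format row,
# then the data rows. The sample below is invented for demonstration.
SAMPLE = """\
# comment header from the agency
agency_cd\tsite_no\tdatetime\tdischarge
5s\t15s\t20d\t14n
USGS\t01234567\t2009-06-17 00:00\t3210
USGS\t01234567\t2009-06-17 00:15\t3190
"""

def parse_rdb(text):
    """Return a list of dicts, one per data row, keyed by header names."""
    lines = [l for l in text.splitlines() if l and not l.startswith("#")]
    header = lines[0].split("\t")
    records = []
    for line in lines[2:]:          # lines[1] is the RDB column-format row
        records.append(dict(zip(header, line.split("\t"))))
    return records

records = parse_rdb(SAMPLE)
```

Feeding a model then becomes a matter of selecting the relevant keys (here the assumed `datetime` and `discharge` columns) and converting units as the model requires.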

  17. Techniques to Access Databases and Integrate Data for Hydrologic Modeling

    International Nuclear Information System (INIS)

    Whelan, Gene; Tenney, Nathan D.; Pelton, Mitchell A.; Coleman, Andre M.; Ward, Duane L.; Droppo, James G.; Meyer, Philip D.; Dorow, Kevin E.; Taira, Randal Y.

    2009-01-01

    This document addresses techniques to access and integrate data for defining site-specific conditions and behaviors associated with ground-water and surface-water radionuclide transport applicable to U.S. Nuclear Regulatory Commission reviews. Environmental models typically require input data from multiple internal and external sources that may include, but are not limited to, stream and rainfall gage data, meteorological data, hydrogeological data, habitat data, and biological data. These data may be retrieved from a variety of organizations (e.g., federal, state, and regional) and source types (e.g., HTTP, FTP, and databases). Available data sources relevant to hydrologic analyses for reactor licensing are identified and reviewed. The data sources described can be useful to define model inputs and parameters, including site features (e.g., watershed boundaries, stream locations, reservoirs, site topography), site properties (e.g., surface conditions, subsurface hydraulic properties, water quality), and site boundary conditions, input forcings, and extreme events (e.g., stream discharge, lake levels, precipitation, recharge, flood and drought characteristics). Available software tools for accessing established databases, retrieving the data, and integrating it with models were identified and reviewed. The emphasis in this review was on existing software products with minimal required modifications to enable their use with the FRAMES modeling framework. The ability of four of these tools to access and retrieve the identified data sources was reviewed. These four software tools were the Hydrologic Data Acquisition and Processing System (HDAPS), Integrated Water Resources Modeling System (IWRMS) External Data Harvester, Data for Environmental Modeling Environmental Data Download Tool (D4EM EDDT), and the FRAMES Internet Database Tools. 
The IWRMS External Data Harvester and the D4EM EDDT were identified as the most promising tools based on their ability to access and

  18. Database Description - SSBD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available base Description General information of database Database name SSBD Alternative nam...ss 2-2-3 Minatojima-minamimachi, Chuo-ku, Kobe 650-0047, Japan, RIKEN Quantitative Biology Center Shuichi Onami E-mail: Database... classification Other Molecular Biology Databases Database classification Dynamic databa...elegans Taxonomy ID: 6239 Taxonomy Name: Escherichia coli Taxonomy ID: 562 Database description Systems Scie...i Onami Journal: Bioinformatics/April, 2015/Volume 31, Issue 7 External Links: Original website information Database

  19. Database Description - GETDB | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available abase Description General information of database Database name GETDB Alternative n...ame Gal4 Enhancer Trap Insertion Database DOI 10.18908/lsdba.nbdc00236-000 Creator Creator Name: Shigeo Haya... Chuo-ku, Kobe 650-0047 Tel: +81-78-306-3185 FAX: +81-78-306-3183 E-mail: Database classification Expression... Invertebrate genome database Organism Taxonomy Name: Drosophila melanogaster Taxonomy ID: 7227 Database des...riginal website information Database maintenance site Drosophila Genetic Resource

  20. EMEN2: an object oriented database and electronic lab notebook.

    Science.gov (United States)

    Rees, Ian; Langley, Ed; Chiu, Wah; Ludtke, Steven J

    2013-02-01

Transmission electron microscopy and associated methods, such as single particle analysis, two-dimensional crystallography, helical reconstruction, and tomography, are highly data-intensive experimental sciences, which also have substantial variability in experimental technique. Object-oriented databases present an attractive alternative to traditional relational databases for situations where the experiments themselves are continually evolving. We present EMEN2, an easy-to-use object-oriented database with a highly flexible infrastructure originally targeted for transmission electron microscopy and tomography, which has been extended to be adaptable for use in virtually any experimental science. It is a pure object-oriented database designed for easy adoption in diverse laboratory environments and does not require professional database administration. It includes a full-featured, dynamic web interface in addition to APIs for programmatic access. EMEN2 installations currently support roughly 800 scientists worldwide with over half a million experimental records and over 20 TB of experimental data. The software is freely available with complete source.
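The core idea above, a record store whose fields are not fixed in advance so that evolving experiments need no schema migrations, can be sketched briefly. This is a minimal illustration of that general design, not EMEN2's actual API or storage layout; the class and method names are assumptions.

```python
import json
import sqlite3

# Minimal sketch of a schemaless experimental-record store in the spirit of
# EMEN2's flexible records (illustrative only; not EMEN2's real interface).
class RecordStore:
    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        # Arbitrary per-record fields are kept as a JSON document, so new
        # experimental parameters need no ALTER TABLE.
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS records ("
            " id INTEGER PRIMARY KEY, rectype TEXT, body TEXT)")

    def put(self, rectype, **fields):
        cur = self.db.execute(
            "INSERT INTO records (rectype, body) VALUES (?, ?)",
            (rectype, json.dumps(fields)))
        return cur.lastrowid

    def find(self, rectype, **criteria):
        # Filter by record type in SQL, then by arbitrary fields in Python.
        rows = self.db.execute(
            "SELECT id, body FROM records WHERE rectype = ?", (rectype,))
        for rec_id, body in rows:
            fields = json.loads(body)
            if all(fields.get(k) == v for k, v in criteria.items()):
                yield rec_id, fields

store = RecordStore()
store.put("micrograph", voltage_kv=300, grid="C-flat")
store.put("micrograph", voltage_kv=200, grid="Quantifoil")
hits = list(store.find("micrograph", voltage_kv=300))
```

A production system would index the JSON fields rather than scan them, but the sketch shows why such a store needs no administrator-managed schema.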

  1. JICST Factual DatabaseJICST Chemical Substance Safety Regulation Database

    Science.gov (United States)

    Abe, Atsushi; Sohma, Tohru

JICST Chemical Substance Safety Regulation Database is based on the Database of Safety Laws for Chemical Compounds constructed by the Japan Chemical Industry Ecology-Toxicology & Information Center (JETOC), sponsored by the Science and Technology Agency, in 1987. JICST modified the JETOC database system, added data, and started the online service through JOIS-F (JICST Online Information Service-Factual database) in January 1990. The JICST database comprises eighty-three laws and fourteen hundred compounds. The authors outline the database, its data items, files, and search commands. An example of an online session is presented.

  2. Incremental Observer Relative Data Extraction

    DEFF Research Database (Denmark)

    Bukauskas, Linas; Bøhlen, Michael Hanspeter

    2004-01-01

The visual exploration of large databases calls for a tight coupling of database and visualization systems. Current visualization systems typically fetch all the data and organize it in a scene tree that is then used to render the visible data. For immersive data explorations in a Cave or a Panorama, where the observer is inside the data space, this approach is far from optimal. A more scalable approach is to make the database system observer-aware and to restrict the communication between the database and visualization systems to the relevant data. In this paper VR-tree, an extension of the R-tree, is used to index visibility ranges of objects. We introduce a new operator for incremental Observer Relative data Extraction (iORDE). We propose the Volatile Access STructure (VAST), a lightweight main-memory structure that is created on the fly and is maintained during visual data explorations. VAST...
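The abstract's central idea, extracting only objects whose visibility range contains the observer and reporting incremental changes as the observer moves, can be sketched in a few lines. The structures below are a simplified stand-in under assumed names, not the paper's actual VR-tree or VAST implementation.

```python
import math

# Illustrative sketch of observer-relative extraction: each object is visible
# only within a distance range of the observer, and moving the observer yields
# incremental "appeared"/"disappeared" deltas instead of refetching the scene.
# (A real system would answer visible() with a spatial index, not a scan.)

def visible(objects, observer):
    """Return the names of objects whose visibility radius covers observer."""
    vis = set()
    for name, (x, y, radius) in objects.items():
        if math.dist(observer, (x, y)) <= radius:
            vis.add(name)
    return vis

def delta(objects, old_pos, new_pos):
    """Incremental extraction: only what enters or leaves the visible set."""
    before, after = visible(objects, old_pos), visible(objects, new_pos)
    return after - before, before - after  # (appeared, disappeared)

# Two objects with different visibility radii.
objects = {"a": (0.0, 0.0, 5.0), "b": (10.0, 0.0, 3.0)}
appeared, disappeared = delta(objects, (0.0, 0.0), (9.0, 0.0))
```

The payoff of the incremental form is that the visualization system receives only the two delta sets per observer move, which is the communication restriction the abstract argues for.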

  3. ALMA OBSERVATIONS OF THE OUTFLOW FROM SOURCE I IN THE ORION-KL REGION

    Energy Technology Data Exchange (ETDEWEB)

    Zapata, Luis A.; Rodriguez, Luis F.; Loinard, Laurent [Centro de Radioastronomia y Astrofisica, UNAM, Apdo. Postal 3-72 (Xangari), 58089 Morelia, Michoacan (Mexico); Schmid-Burgk, Johannes; Menten, Karl M. [Max-Planck-Institut fuer Radioastronomie, Auf dem Huegel 69, 53121 Bonn (Germany); Curiel, Salvador [Instituto de Astronomia, Universidad Nacional Autonoma de Mexico, Ap. 70-264, 04510 DF (Mexico)

    2012-07-20

In this Letter, we present sensitive millimeter SiO (J = 5-4; ν = 0) line observations of the outflow arising from the enigmatic object Orion Source I made with the Atacama Large Millimeter/Submillimeter Array (ALMA). The observations reveal that at scales of a few thousand AU, the outflow has a marked 'butterfly' morphology along a northeast-southwest axis. However, contrary to what is found in the SiO and H₂O maser observations at scales of tens of AU, the blueshifted radial velocities of the moving gas are found to the northwest, while the redshifted velocities are in the southeast. The ALMA observations are complemented with SiO (J = 8-7; ν = 0) maps (with a similar spatial resolution) obtained with the Submillimeter Array. These observations also show a similar morphology and velocity structure in this outflow. We discuss some possibilities to explain these differences at small and large scales across the flow.

  4. Effects of pointing compared with naming and observing during encoding on item and source memory in young and older adults.

    Science.gov (United States)

    Ouwehand, Kim; van Gog, Tamara; Paas, Fred

    2016-10-01

    Research showed that source memory functioning declines with ageing. Evidence suggests that encoding visual stimuli with manual pointing in addition to visual observation can have a positive effect on spatial memory compared with visual observation only. The present study investigated whether pointing at picture locations during encoding would lead to better spatial source memory than naming (Experiment 1) and visual observation only (Experiment 2) in young and older adults. Experiment 3 investigated whether response modality during the test phase would influence spatial source memory performance. Experiments 1 and 2 supported the hypothesis that pointing during encoding led to better source memory for picture locations than naming or observation only. Young adults outperformed older adults on the source memory but not the item memory task in both Experiments 1 and 2. In Experiments 1 and 2, participants manually responded in the test phase. Experiment 3 showed that if participants had to verbally respond in the test phase, the positive effect of pointing compared with naming during encoding disappeared. The results suggest that pointing at picture locations during encoding can enhance spatial source memory in both young and older adults, but only if the response modality is congruent in the test phase.

  5. Analyzing GAIAN Database (GaianDB) on a Tactical Network

    Science.gov (United States)

    2015-11-30

databases, and other files, and exposes them as one unified structured query language (SQL)-compliant data source. This “store locally query anywhere...UDP server that could communicate directly with the CSRs via the CSR’s serial port. However, GAIAN has over 800,000 lines of source code. It...management, by which all would have to be modified to communicate with our server and maintain utility. Not only did we quickly realize that this

  6. Volcanoes of the World: Reconfiguring a scientific database to meet new goals and expectations

    Science.gov (United States)

    Venzke, Edward; Andrews, Ben; Cottrell, Elizabeth

    2015-04-01

The Smithsonian Global Volcanism Program's (GVP) database of Holocene volcanoes and eruptions, Volcanoes of the World (VOTW), originated in 1971, and was largely populated with content from the IAVCEI Catalog of Active Volcanoes and some independent datasets. Volcanic activity reported by Smithsonian's Bulletin of the Global Volcanism Network and USGS/SI Weekly Activity Reports (and their predecessors), published research, and other varied sources has expanded the database significantly over the years. Three editions of the VOTW were published in book form, creating a catalog with new ways to display data that included regional directories, a gazetteer, and a 10,000-year chronology of eruptions. The widespread dissemination of the data in electronic media since the first GVP website in 1995 has created new challenges and opportunities for this unique collection of information. To better meet current and future goals and expectations, we have recently transitioned VOTW into a SQL Server database. This process included significant schema changes to the previous relational database, data auditing, and content review. We replaced a disparate, confusing, and changeable volcano numbering system with unique and permanent volcano numbers. We reconfigured structures for recording eruption data to allow greater flexibility in describing the complexity of observed activity, adding the ability to distinguish episodes within eruptions (in time and space) and events (including dates) rather than characteristics that take place during an episode. We have added a reference link field in multiple tables to enable attribution of sources at finer levels of detail. 
We now store and connect synonyms and feature names in a more consistent manner, which will allow for morphological features to be given unique numbers and linked to specific eruptions or samples; if the designated overall volcano name is also a morphological feature, it is then also listed and described as
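The volcano → eruption → episode → event hierarchy described in this record can be sketched as a set of linked tables. The tables, columns, and sample rows below are illustrative assumptions (the actual VOTW SQL Server schema is not public in this abstract); SQLite stands in for SQL Server.

```python
import sqlite3

# Sketch of permanent volcano numbers plus the eruption -> episode -> event
# hierarchy; names, columns, and data are made up for illustration.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE volcano (
  volcano_number INTEGER PRIMARY KEY,   -- unique, permanent identifier
  name TEXT NOT NULL
);
CREATE TABLE eruption (
  eruption_id INTEGER PRIMARY KEY,
  volcano_number INTEGER REFERENCES volcano(volcano_number),
  start_date TEXT
);
CREATE TABLE episode (
  episode_id INTEGER PRIMARY KEY,
  eruption_id INTEGER REFERENCES eruption(eruption_id),
  vent TEXT                             -- episodes distinguished in space
);
CREATE TABLE event (
  event_id INTEGER PRIMARY KEY,
  episode_id INTEGER REFERENCES episode(episode_id),
  event_type TEXT,
  event_date TEXT                       -- dated events within an episode
);
""")
db.execute("INSERT INTO volcano VALUES (123456, 'Example Volcano')")
db.execute("INSERT INTO eruption VALUES (1, 123456, '1883-05-20')")
db.execute("INSERT INTO episode VALUES (1, 1, 'main vent')")
db.execute("INSERT INTO event VALUES (1, 1, 'explosion', '1883-08-26')")

# Walking the hierarchy from a dated event back to its volcano:
row = db.execute("""
  SELECT v.name, e.event_type FROM event e
  JOIN episode ep ON e.episode_id = ep.episode_id
  JOIN eruption er ON ep.eruption_id = er.eruption_id
  JOIN volcano v ON er.volcano_number = v.volcano_number""").fetchone()
```

Separating episodes and events, rather than attaching characteristics directly to an eruption, is what lets the schema record where and when each phase of a complex eruption occurred.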

  7. CTDB: An Integrated Chickpea Transcriptome Database for Functional and Applied Genomics

    OpenAIRE

    Verma, Mohit; Kumar, Vinay; Patel, Ravi K.; Garg, Rohini; Jain, Mukesh

    2015-01-01

    Chickpea is an important grain legume used as a rich source of protein in human diet. The narrow genetic diversity and limited availability of genomic resources are the major constraints in implementing breeding strategies and biotechnological interventions for genetic enhancement of chickpea. We developed an integrated Chickpea Transcriptome Database (CTDB), which provides the comprehensive web interface for visualization and easy retrieval of transcriptome data in chickpea. The database fea...

  8. DCODE: A Distributed Column-Oriented Database Engine for Big Data Analytics

    OpenAIRE

    Liu, Yanchen; Cao, Fang; Mortazavi, Masood; Chen, Mengmeng; Yan, Ning; Ku, Chi; Adnaik, Aniket; Morgan, Stephen; Shi, Guangyu; Wang, Yuhu; Fang, Fan

    2015-01-01

We propose a novel Distributed Column-Oriented Database Engine (DCODE) for efficient analytic query processing that combines advantages of both column storage and parallel processing. In DCODE, we enhance an existing open-source columnar database engine by adding the capability for handling queries over a cluster. Specifically, we studied parallel query execution and optimization techniques such as horizontal partitioning, exchange op...

  9. Interplanetary scintillation observations of an unbiased sample of 90 Ooty occultation radio sources at 326.5 MHz

    International Nuclear Information System (INIS)

    Banhatti, D.G.; Ananthakrishnan, S.

    1989-01-01

    We present 327-MHz interplanetary scintillation (IPS) observations of an unbiased sample of 90 extragalactic radio sources selected from the ninth Ooty lunar occultation list. The sources are brighter than 0.75 Jy at 327 MHz and lie outside the galactic plane. We derive values, the fraction of scintillating flux density, and the equivalent Gaussian diameter for the scintillating structure. Various correlations are found between the observed parameters. In particular, the scintillating component weakens and broadens with increasing largest angular size, and stronger scintillators have more compact scintillating components. (author)

  10. Documentation for the U.S. Geological Survey Public-Supply Database (PSDB): A database of permitted public-supply wells, surface-water intakes, and systems in the United States

    Science.gov (United States)

    Price, Curtis V.; Maupin, Molly A.

    2014-01-01

    The U.S. Geological Survey (USGS) has developed a database containing information about wells, surface-water intakes, and distribution systems that are part of public water systems across the United States, its territories, and possessions. Programs of the USGS such as the National Water Census, the National Water Use Information Program, and the National Water-Quality Assessment Program all require a complete and current inventory of public water systems, the sources of water used by those systems, and the size of populations served by the systems across the Nation. Although the U.S. Environmental Protection Agency’s Safe Drinking Water Information System (SDWIS) database already exists as the primary national Federal database for information on public water systems, the Public-Supply Database (PSDB) was developed to add value to SDWIS data with enhanced location and ancillary information, and to provide links to other databases, including the USGS’s National Water Information System (NWIS) database.

  11. Database Description - KAIKOcDNA | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us KAIKOcDNA Database Description General information of database Database name KAIKOcDNA Alter...National Institute of Agrobiological Sciences Akiya Jouraku E-mail : Database cla...ssification Nucleotide Sequence Databases Organism Taxonomy Name: Bombyx mori Taxonomy ID: 7091 Database des...rnal: G3 (Bethesda) / 2013, Sep / vol.9 External Links: Original website information Database maintenance si...available URL of Web services - Need for user registration Not available About This Database Database

  12. Download - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

Full Text Available List Contact us Trypanosomes Database Download First of all, please read the license of this database. Data ...1.4 KB) Simple search and download Download via FTP FTP server is sometimes jammed. If it is, access [here]. About This Database Data...base Description Download License Update History of This Database Site Policy | Contact Us Download - Trypanosomes Database | LSDB Archive ...

  13. License - Arabidopsis Phenome Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

Full Text Available List Contact us Arabidopsis Phenome Database License License to Use This Database Last updated : 2017/02/27 You may use this database...cense specifies the license terms regarding the use of this database and the requirements you must follow in using this database.... The license for this database is specified in the Creative ...Commons Attribution-Share Alike 4.0 International . If you use data from this database, please be sure to attribute this database...ative Commons Attribution-Share Alike 4.0 International is found here . With regard to this database, you ar

  14. Updated Palaeotsunami Database for Aotearoa/New Zealand

    Science.gov (United States)

    Gadsby, M. R.; Goff, J. R.; King, D. N.; Robbins, J.; Duesing, U.; Franz, T.; Borrero, J. C.; Watkins, A.

    2016-12-01

The updated configuration, design, and implementation of a national palaeotsunami (pre-historic tsunami) database for Aotearoa/New Zealand (A/NZ) are near completion. This tool enables correlation of events along different stretches of the NZ coastline, provides information on frequency and extent of local, regional and distant-source tsunamis, and delivers detailed information on the science and proxies used to identify the deposits. In A/NZ a plethora of data, scientific research and experience surrounds palaeotsunami deposits, but much of this information has been difficult to locate, has variable reporting standards, and lacked quality assurance. The original database was created by Professor James Goff while working at the National Institute of Water & Atmospheric Research in A/NZ, but has subsequently been updated during his tenure at the University of New South Wales. The updating and establishment of the national database was funded by the Ministry of Civil Defence and Emergency Management (MCDEM), led by Environment Canterbury Regional Council, and supported by all 16 regions of A/NZ's local government. Creation of a single database has consolidated a wide range of published and unpublished research contributions from many science providers on palaeotsunamis in A/NZ. The information is now easily accessible and quality assured and allows examination of frequency, extent and correlation of events. This provides authoritative scientific support for coastal-marine planning and risk management. The database will complement the GNS New Zealand Historical Database, and contributes to a heightened public awareness of tsunami by being a "one-stop-shop" for information on past tsunami impacts. There is scope for this to become an international database, enabling the Pacific-wide correlation of large events, as well as identifying smaller regional ones. The Australian research community has already expressed an interest, and the database is also compatible with a

  15. DGIdb 3.0: a redesign and expansion of the drug-gene interaction database.

    Science.gov (United States)

    Cotto, Kelsy C; Wagner, Alex H; Feng, Yang-Yang; Kiwala, Susanna; Coffman, Adam C; Spies, Gregory; Wollam, Alex; Spies, Nicholas C; Griffith, Obi L; Griffith, Malachi

    2018-01-04

The drug-gene interaction database (DGIdb, www.dgidb.org) consolidates, organizes and presents drug-gene interactions and gene druggability information from papers, databases and web resources. DGIdb normalizes content from 30 disparate sources and allows for user-friendly advanced browsing, searching and filtering for ease of access through an intuitive web user interface, application programming interface (API) and public cloud-based server image. DGIdb v3.0 represents a major update of the database. Nine of the previously included 24 sources were updated. Six new resources were added, bringing the total number of sources to 30. These updates and additions of sources have cumulatively resulted in 56,309 interaction claims. This has also substantially expanded the comprehensive catalogue of druggable genes and anti-neoplastic drug-gene interactions included in the DGIdb. Along with these content updates, v3.0 has received a major overhaul of its codebase, including an updated user interface, preset interaction search filters, consolidation of interaction information into interaction groups, greatly improved search response times and upgrading the underlying web application framework. In addition, the expanded API features new endpoints which allow users to extract more detailed information about queried drugs, genes and drug-gene interactions, including listings of PubMed IDs, interaction type and other interaction metadata.
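Programmatic access to an interaction API of this kind typically amounts to building a parameterized query URL. The sketch below only constructs such a URL; the base path, endpoint name, and parameter names are assumptions for illustration, so the real routes should be taken from the DGIdb API documentation rather than from this example.

```python
from urllib.parse import urlencode, urljoin

# Hypothetical base URL and endpoint, for illustration only; consult the
# DGIdb API documentation for the actual routes and parameters.
BASE = "https://dgidb.org/api/v2/"

def interaction_query(genes, interaction_types=None):
    """Build a query URL for drug-gene interactions on a list of genes."""
    params = {"genes": ",".join(genes)}
    if interaction_types:
        params["interaction_types"] = ",".join(interaction_types)
    return urljoin(BASE, "interactions.json") + "?" + urlencode(params)

url = interaction_query(["BRAF", "EGFR"])
```

The resulting URL could then be fetched with any HTTP client and the JSON response walked for PubMed IDs, interaction types, and the other metadata the abstract mentions.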

  16. Extended emission sources observed via two-proton correlations

    International Nuclear Information System (INIS)

    Awes, T.C.; Ferguson, R.L.; Obenshain, F.E.

    1988-01-01

Two-proton correlations were measured as a function of the total energy and relative momentum of the protons. The correlation is analyzed for different orientations of the relative momentum, which allows information on the size and lifetime of the emission source to be extracted. The most energetic particles are emitted from a short-lived source of compound nucleus dimensions, while the lower-energy protons appear to be emitted from a source considerably larger than the compound nucleus. 9 refs., 3 figs.

  17. Database system of geological information for geological evaluation base of NPP sites(I)

    International Nuclear Information System (INIS)

    Lim, C. B.; Choi, K. R.; Sim, T. M.; No, M. H.; Lee, H. W.; Kim, T. K.; Lim, Y. S.; Hwang, S. K.

    2002-01-01

This study aims to provide a database system for site-suitability analyses of geological information and a processing program for domestic NPP site evaluation. The database system program includes MapObjects provided by ESRI and Spread 3.5 OCX, and is written in Visual Basic. Major functions of the database program include vector- and raster-format topographic maps, database design and application, geological symbol plotting, and search of the database for plotted geological symbols. The program can also be applied to analyzing lineament trends as well as to statistical treatment of geological site and laboratory information and sources in digital form, using algorithms in general international use.

  18. A Community Data Model for Hydrologic Observations

    Science.gov (United States)

    Tarboton, D. G.; Horsburgh, J. S.; Zaslavsky, I.; Maidment, D. R.; Valentine, D.; Jennings, B.

    2006-12-01

The CUAHSI Hydrologic Information System project is developing information technology infrastructure to support hydrologic science. Hydrologic information science involves the description of hydrologic environments in a consistent way, using data models for information integration. This includes a hydrologic observations data model for the storage and retrieval of hydrologic observations in a relational database, designed to facilitate data retrieval for integrated analysis of information collected by multiple investigators. It is intended to provide a standard format to facilitate the effective sharing of information between investigators and to facilitate analysis of information within a single study area or hydrologic observatory, or across hydrologic observatories and regions. The observations data model is designed to store hydrologic observations and sufficient ancillary information (metadata) about the observations to allow them to be unambiguously interpreted and used, and to provide traceable heritage from raw measurements to usable information. The design is based on the premise that a relational database at the single-observation level is most effective for providing querying capability and cross-dimension data retrieval and analysis. This premise is being tested through the implementation of a prototype hydrologic observations database and the development of web services for the retrieval of data from, and ingestion of data into, the database. These web services, hosted by the San Diego Supercomputer Center, make data in the database accessible both through a Hydrologic Data Access System portal and directly from applications software such as Excel, Matlab and ArcGIS that have Simple Object Access Protocol (SOAP) capability. This paper will (1) describe the data model; (2) demonstrate the capability for representing diverse data in the same database; and (3) demonstrate the use of the database from applications software for the performance of hydrologic analysis.
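The premise of a relational database at the single-observation level can be sketched with a minimal schema: one row per observation, with site and variable metadata factored into side tables. The table and column names below are simplified assumptions, not the actual CUAHSI observations data model.

```python
import sqlite3

# Minimal, illustrative observations schema (invented names, not CUAHSI ODM):
# each observation is one row, joined to site and variable metadata.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sites (site_id INTEGER PRIMARY KEY, site_code TEXT,
                    latitude REAL, longitude REAL);
CREATE TABLE variables (variable_id INTEGER PRIMARY KEY,
                        variable_name TEXT, units TEXT);
CREATE TABLE observations (
    observation_id INTEGER PRIMARY KEY,
    site_id INTEGER REFERENCES sites(site_id),
    variable_id INTEGER REFERENCES variables(variable_id),
    obs_time TEXT,
    value REAL
);
""")
conn.execute("INSERT INTO sites VALUES (1, 'LoganRiver', 41.74, -111.83)")
conn.execute("INSERT INTO variables VALUES (1, 'Discharge', 'm3/s')")
conn.executemany("INSERT INTO observations VALUES (?, 1, 1, ?, ?)",
                 [(1, "2006-10-01T00:00", 2.5), (2, "2006-10-01T01:00", 2.7)])

# Cross-dimension retrieval: all discharge values at a given site.
rows = conn.execute("""
    SELECT o.obs_time, o.value, v.units
    FROM observations o
    JOIN sites s ON s.site_id = o.site_id
    JOIN variables v ON v.variable_id = o.variable_id
    WHERE s.site_code = 'LoganRiver' AND v.variable_name = 'Discharge'
    ORDER BY o.obs_time
""").fetchall()
print(rows)
```

Because every observation is a single row, the same `JOIN` pattern supports queries across sites, variables, or time windows without changing the schema.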

  19. PFTijah: text search in an XML database system

    NARCIS (Netherlands)

    Hiemstra, Djoerd; Rode, H.; van Os, R.; Flokstra, Jan

    2006-01-01

    This paper introduces the PFTijah system, a text search system that is integrated with an XML/XQuery database management system. We present examples of its use, we explain some of the system internals, and discuss plans for future work. PFTijah is part of the open source release of MonetDB/XQuery.

  20. Nine years of global hydrocarbon emissions based on source inversion of OMI formaldehyde observations

    NARCIS (Netherlands)

    Bauwens, Maite; Stavrakou, Trissevgeni; Müller, Jean François; De Smedt, Isabelle; Van Roozendael, Michel; Van Der Werf, Guido R.; Wiedinmyer, Christine; Kaiser, Johannes W.; Sindelarova, Katerina; Guenther, Alex

    2016-01-01

    As formaldehyde (HCHO) is a high-yield product in the oxidation of most volatile organic compounds (VOCs) emitted by fires, vegetation, and anthropogenic activities, satellite observations of HCHO are well-suited to inform us on the spatial and temporal variability of the underlying VOC sources. The

  1. Database Description - AcEST | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available abase Description General information of database Database name AcEST Alternative n...hi, Tokyo-to 192-0397 Tel: +81-42-677-1111(ext.3654) E-mail: Database classificat...eneris Taxonomy ID: 13818 Database description This is a database of EST sequences of Adiantum capillus-vene...(3): 223-227. External Links: Original website information Database maintenance site Plant Environmental Res...base Database Description Download License Update History of This Database Site Policy | Contact Us Database Description - AcEST | LSDB Archive ...

  2. License - SKIP Stemcell Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us SKIP Stemcell Database License License to Use This Database Last updated : 2017/03/13 You may use this database...specifies the license terms regarding the use of this database and the requirements you must follow in using this database.... The license for this database is specified in the Creative Common...s Attribution-Share Alike 4.0 International . If you use data from this database, please be sure attribute this database...al ... . The summary of the Creative Commons Attribution-Share Alike 4.0 International is found here . With regard to this database

  3. Identification of tropospheric emissions sources from satellite observations: Synergistic use of HCHO, NO2, and SO2 trace gas measurements

    Science.gov (United States)

    Marbach, T.; Beirle, S.; Khokhar, F.; Platt, U.

    2005-12-01

We present case studies of combined HCHO, NO2, and SO2 satellite observations derived from GOME measurements. Launched on the ERS-2 satellite in April 1995, GOME has already performed continuous operations for over 8 years, providing global observations of the different trace gases. In this way, satellite observations provide unique opportunities for the identification of trace gas sources. The satellite HCHO observations provide information on the localization of biomass burning (an intense source of HCHO). The principal biomass burning areas can be observed in the Amazon basin region and in central Africa. Weaker HCHO sources (the south-east of the United States, the northern part of the Amazon basin, and the African tropical forest), not correlated with biomass burning, could be due to biogenic isoprene emissions. The HCHO data can be compared with NO2 and SO2 results to identify the tropospheric sources more precisely (biomass burning events, human activities, additional sources such as volcanic emissions). Biomass burning events are important tropospheric sources of both HCHO and NO2. Nevertheless, HCHO reflects biomass burning more reliably, as it appears in all biomass burning events. NO2 correlates with HCHO over Africa (grassland fires) but not over Indonesia (forest fires). In South America, an increase in NO2 concentrations can be observed as the fires shift from forest to grassland vegetation, so there appears to be a dependence of NO2 emissions during biomass burning on the vegetation type. Other high HCHO, SO2, and NO2 emissions can be correlated with climatic events such as the 1997 El Niño, which induced dry conditions in Indonesia, causing many forest fires.
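The region-dependent correlation between trace-gas columns described above is, numerically, just a correlation between two time (or space) series. As a minimal sketch, the function below computes a Pearson correlation coefficient as one might for co-located HCHO and NO2 columns over a fire region; the values are invented, not GOME data.

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

# Invented column amounts over a hypothetical grassland-fire region, where
# HCHO and NO2 would be expected to rise together.
hcho = [4.0, 6.5, 9.0, 12.5, 15.0]
no2  = [1.1, 1.6, 2.2, 2.9, 3.4]

print(round(pearson(hcho, no2), 3))
```

A coefficient near +1 over grassland fires and near zero over forest fires would reproduce the Africa/Indonesia contrast reported in the abstract.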

  4. KALIMER database development

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, Kwan Seong; Lee, Yong Bum; Jeong, Hae Yong; Ha, Kwi Seok

    2003-03-01

The KALIMER database is an advanced database for the integrated management of liquid metal reactor design technology development using Web applications. The KALIMER design database is composed of a results database, Inter-Office Communication (IOC), a 3D CAD database, and a reserved documents database. The results database holds research results from all phases of liquid metal reactor design technology development under the mid-term and long-term nuclear R and D program. IOC is a linkage control system between sub-projects to share and integrate the research results for KALIMER. The 3D CAD database provides a schematic overview of the KALIMER design structure. Finally, the reserved documents database was developed to manage the documents and reports accumulated during the project.

  5. KALIMER database development

    International Nuclear Information System (INIS)

    Jeong, Kwan Seong; Lee, Yong Bum; Jeong, Hae Yong; Ha, Kwi Seok

    2003-03-01

The KALIMER database is an advanced database for the integrated management of liquid metal reactor design technology development using Web applications. The KALIMER design database is composed of a results database, Inter-Office Communication (IOC), a 3D CAD database, and a reserved documents database. The results database holds research results from all phases of liquid metal reactor design technology development under the mid-term and long-term nuclear R and D program. IOC is a linkage control system between sub-projects to share and integrate the research results for KALIMER. The 3D CAD database provides a schematic overview of the KALIMER design structure. Finally, the reserved documents database was developed to manage the documents and reports accumulated during the project.

  6. ODG: Omics database generator - a tool for generating, querying, and analyzing multi-omics comparative databases to facilitate biological understanding.

    Science.gov (United States)

    Guhlin, Joseph; Silverstein, Kevin A T; Zhou, Peng; Tiffin, Peter; Young, Nevin D

    2017-08-10

Rapid generation of omics data in recent years has resulted in vast amounts of disconnected datasets without systemic integration and knowledge building, while individual groups have made customized, annotated datasets available on the web with few ways to link them to in-lab datasets. With so many research groups generating their own data, the ability to relate it to the larger genomic and comparative genomic context is becoming increasingly crucial to make full use of the data. The Omics Database Generator (ODG) allows users to create customized databases that utilize published genomics data integrated with experimental data, which can be queried using a flexible graph database. When provided with omics and experimental data, ODG will create a comparative, multi-dimensional graph database. ODG can import definitions and annotations from other sources such as InterProScan, the Gene Ontology, ENZYME, UniPathway, and others. This annotation data can be especially useful for studying new or understudied species for which transcripts have only been predicted, and rapidly gives additional layers of annotation to predicted genes. In better-studied species, ODG can perform syntenic annotation translations or rapidly identify characteristics of a set of genes or nucleotide locations, such as hits from an association study. ODG provides a web-based user interface for configuring the data import and for querying the database. Queries can also be run from the command line, and the database can be queried directly through programming-language hooks available for most languages. ODG supports most common genomic formats as well as a generic, easy-to-use tab-separated-value format for user-provided annotations. ODG is a user-friendly database generation and query tool that adapts to the supplied data to produce a comparative genomic database or multi-layered annotation database.
ODG provides rapid comparative genomic annotation and is therefore particularly useful for non-model or
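The kind of graph traversal such a database performs can be illustrated with a toy in-memory edge list: nodes are genes and annotations, edges are typed relations. Every gene name, domain ID, and relation below is invented for illustration and carries no biological claim.

```python
# Toy multi-omics graph: (source, relation, target) triples. All identifiers
# and relations here are invented; a real graph database stores the same
# structure with indexes and a query language.
edges = [
    ("GeneA", "has_domain", "PF00069"),
    ("GeneA", "annotated_with", "GO:0016301"),
    ("GeneB", "annotated_with", "GO:0016301"),
    ("GeneB", "has_domain", "PF07714"),
]

def neighbors(node, relation):
    """All targets reachable from `node` via edges of type `relation`."""
    return sorted(t for s, r, t in edges if s == node and r == relation)

def genes_with(annotation):
    """Reverse lookup: source nodes carrying a given annotation."""
    return sorted(s for s, r, t in edges if t == annotation)

print(neighbors("GeneA", "annotated_with"))
print(genes_with("GO:0016301"))
```

The reverse lookup is the graph analogue of "find all genes sharing this annotation", the kind of query the abstract describes for characterizing association-study hits.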

  7. Database Description - RPSD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available base Description General information of database Database name RPSD Alternative nam...e Rice Protein Structure Database DOI 10.18908/lsdba.nbdc00749-000 Creator Creator Name: Toshimasa Yamazaki ... Ibaraki 305-8602, Japan National Institute of Agrobiological Sciences Toshimasa Yamazaki E-mail : Databas...e classification Structure Databases - Protein structure Organism Taxonomy Name: Or...or name(s): Journal: External Links: Original website information Database maintenance site National Institu

  8. Disaster Debris Recovery Database - Landfills

    Science.gov (United States)

The US EPA Region 5 Disaster Debris Recovery Database includes public datasets of over 6,000 composting facilities, demolition contractors, transfer stations, landfills and recycling facilities for construction and demolition materials, electronics, household hazardous waste, metals, tires, and vehicles in the states of Illinois, Indiana, Iowa, Kentucky, Michigan, Minnesota, Missouri, North Dakota, Ohio, Pennsylvania, South Dakota, West Virginia and Wisconsin. In this update, facilities in the 7 states that border the EPA Region 5 states were added to assist interstate disaster debris management. Also, the datasets for composters, construction and demolition recyclers, demolition contractors, and metals recyclers were verified and source information added for each record using these sources: AGC, Biocycle, BMRA, CDRA, ISRI, NDA, USCC, FEMA Debris Removal Contractor Registry, EPA Facility Registry System, and State and local listings.

  9. Disaster Debris Recovery Database - Recovery

    Science.gov (United States)

The US EPA Region 5 Disaster Debris Recovery Database includes public datasets of over 6,000 composting facilities, demolition contractors, transfer stations, landfills and recycling facilities for construction and demolition materials, electronics, household hazardous waste, metals, tires, and vehicles in the states of Illinois, Indiana, Iowa, Kentucky, Michigan, Minnesota, Missouri, North Dakota, Ohio, Pennsylvania, South Dakota, West Virginia and Wisconsin. In this update, facilities in the 7 states that border the EPA Region 5 states were added to assist interstate disaster debris management. Also, the datasets for composters, construction and demolition recyclers, demolition contractors, and metals recyclers were verified and source information added for each record using these sources: AGC, Biocycle, BMRA, CDRA, ISRI, NDA, USCC, FEMA Debris Removal Contractor Registry, EPA Facility Registry System, and State and local listings.

  10. Database Description - FANTOM5 | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us FANTOM5 Database Description General information of database Database name FANTOM5 Alternati...me: Rattus norvegicus Taxonomy ID: 10116 Taxonomy Name: Macaca mulatta Taxonomy ID: 9544 Database descriptio...l Links: Original website information Database maintenance site RIKEN Center for Life Science Technologies, ...ilable Web services Not available URL of Web services - Need for user registration Not available About This Database Database... Description Download License Update History of This Database Site Policy | Contact Us Database Description - FANTOM5 | LSDB Archive ...

  11. A VBA Desktop Database for Proposal Processing at National Optical Astronomy Observatories

    Science.gov (United States)

    Brown, Christa L.

    National Optical Astronomy Observatories (NOAO) has developed a relational Microsoft Windows desktop database using Microsoft Access and the Microsoft Office programming language, Visual Basic for Applications (VBA). The database is used to track data relating to observing proposals from original receipt through the review process, scheduling, observing, and final statistical reporting. The database has automated proposal processing and distribution of information. It allows NOAO to collect and archive data so as to query and analyze information about our science programs in new ways.

  12. NoSQL databases

    OpenAIRE

    Mrozek, Jakub

    2012-01-01

    This thesis deals with database systems referred to as NoSQL databases. In the second chapter, I explain basic terms and the theory of database systems. A short explanation is dedicated to database systems based on the relational data model and the SQL standardized query language. Chapter Three explains the concept and history of the NoSQL databases, and also presents database models, major features and the use of NoSQL databases in comparison with traditional database systems. In the fourth ...

  13. The eNanoMapper database for nanomaterial safety information

    Directory of Open Access Journals (Sweden)

    Nina Jeliazkova

    2015-07-01

Full Text Available Background: The NanoSafety Cluster, a cluster of projects funded by the European Commission, identified the need for a computational infrastructure for toxicological data management of engineered nanomaterials (ENMs). Ontologies, open standards, and interoperable designs were envisioned to empower a harmonized approach to European research in nanotechnology. This setting provides a number of opportunities and challenges in the representation of nanomaterials data and the integration of ENM information originating from diverse systems. Within this cluster, eNanoMapper works towards supporting the collaborative safety assessment for ENMs by creating a modular and extensible infrastructure for data sharing, data analysis, and building computational toxicology models for ENMs. Results: The eNanoMapper database solution builds on the previous experience of the consortium partners in supporting diverse data through flexible data storage, open source components and web services. We have recently described the design of the eNanoMapper prototype database along with a summary of challenges in the representation of ENM data and an extensive review of existing nano-related data models, databases, and nanomaterials-related entries in chemical and toxicogenomic databases. This paper continues with a focus on the database functionality exposed through its application programming interface (API), and its use in visualisation and modelling. Considering the preferred community practice of using spreadsheet templates, we developed a configurable spreadsheet parser facilitating user-friendly data preparation and data upload. We further present a web application able to retrieve the experimental data via the API and analyze it with multiple data preprocessing and machine learning algorithms. Conclusion: We demonstrate how the eNanoMapper database is used to import and publish online ENM and assay data from several data sources, how the “representational state
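The idea of a configurable spreadsheet parser, as mentioned in the abstract, can be sketched in a few lines: a small mapping tells the parser which template column feeds which database field. The column names and fields below are invented, not the actual eNanoMapper template.

```python
import csv
import io

# Hypothetical column-to-field mapping; a real configuration would cover
# units, protocols, and nested endpoints. All names here are invented.
CONFIG = {
    "material": "Nanomaterial name",
    "endpoint": "Assay endpoint",
    "value": "Measured value",
}

def parse_template(text, config):
    """Parse CSV text into records, renaming columns per the config mapping."""
    reader = csv.DictReader(io.StringIO(text))
    return [{field: row[column] for field, column in config.items()}
            for row in reader]

sheet = """Nanomaterial name,Assay endpoint,Measured value
TiO2-NP,cell viability,82
ZnO-NP,cell viability,41
"""
print(parse_template(sheet, CONFIG))
```

Keeping the mapping in data rather than code is what makes the parser "configurable": a new template needs a new `CONFIG`, not new parsing logic.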

  14. The volatile compound BinBase mass spectral database.

    Science.gov (United States)

    Skogerson, Kirsten; Wohlgemuth, Gert; Barupal, Dinesh K; Fiehn, Oliver

    2011-08-04

    Volatile compounds comprise diverse chemical groups with wide-ranging sources and functions. These compounds originate from major pathways of secondary metabolism in many organisms and play essential roles in chemical ecology in both plant and animal kingdoms. In past decades, sampling methods and instrumentation for the analysis of complex volatile mixtures have improved; however, design and implementation of database tools to process and store the complex datasets have lagged behind. The volatile compound BinBase (vocBinBase) is an automated peak annotation and database system developed for the analysis of GC-TOF-MS data derived from complex volatile mixtures. The vocBinBase DB is an extension of the previously reported metabolite BinBase software developed to track and identify derivatized metabolites. The BinBase algorithm uses deconvoluted spectra and peak metadata (retention index, unique ion, spectral similarity, peak signal-to-noise ratio, and peak purity) from the Leco ChromaTOF software, and annotates peaks using a multi-tiered filtering system with stringent thresholds. The vocBinBase algorithm assigns the identity of compounds existing in the database. Volatile compound assignments are supported by the Adams mass spectral-retention index library, which contains over 2,000 plant-derived volatile compounds. Novel molecules that are not found within vocBinBase are automatically added using strict mass spectral and experimental criteria. Users obtain fully annotated data sheets with quantitative information for all volatile compounds for studies that may consist of thousands of chromatograms. The vocBinBase database may also be queried across different studies, comprising currently 1,537 unique mass spectra generated from 1.7 million deconvoluted mass spectra of 3,435 samples (18 species). Mass spectra with retention indices and volatile profiles are available as free download under the CC-BY agreement (http://vocbinbase.fiehnlab.ucdavis.edu). The Bin
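A multi-tiered filter over peak metadata, as described above, amounts to checking each peak against a set of thresholds. The threshold values and field names in this sketch are assumptions for illustration, not the published BinBase parameters.

```python
# Illustrative peak filter in the spirit of the BinBase approach; the
# threshold values and field names are assumptions, not the published ones.
THRESHOLDS = {"signal_to_noise": 5.0, "purity": 0.8, "similarity": 700}

def passes_filters(peak, thresholds=THRESHOLDS):
    """True if the peak meets every metadata threshold."""
    return (peak["signal_to_noise"] >= thresholds["signal_to_noise"]
            and peak["purity"] >= thresholds["purity"]
            and peak["similarity"] >= thresholds["similarity"])

# Invented deconvoluted peaks: one clean, one with a poor signal-to-noise
# ratio that should be rejected.
peaks = [
    {"name": "hexanal",  "signal_to_noise": 12.0, "purity": 0.95, "similarity": 850},
    {"name": "unknown1", "signal_to_noise": 3.0,  "purity": 0.90, "similarity": 900},
]
kept = [p["name"] for p in peaks if passes_filters(p)]
print(kept)
```

In the real system such a filter is only the first tier; retention-index matching against library entries then decides whether a surviving peak is assigned to an existing bin or added as a new one.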

  15. The volatile compound BinBase mass spectral database

    Directory of Open Access Journals (Sweden)

    Barupal Dinesh K

    2011-08-01

Full Text Available Abstract Background Volatile compounds comprise diverse chemical groups with wide-ranging sources and functions. These compounds originate from major pathways of secondary metabolism in many organisms and play essential roles in chemical ecology in both plant and animal kingdoms. In past decades, sampling methods and instrumentation for the analysis of complex volatile mixtures have improved; however, design and implementation of database tools to process and store the complex datasets have lagged behind. Description The volatile compound BinBase (vocBinBase) is an automated peak annotation and database system developed for the analysis of GC-TOF-MS data derived from complex volatile mixtures. The vocBinBase DB is an extension of the previously reported metabolite BinBase software developed to track and identify derivatized metabolites. The BinBase algorithm uses deconvoluted spectra and peak metadata (retention index, unique ion, spectral similarity, peak signal-to-noise ratio, and peak purity) from the Leco ChromaTOF software, and annotates peaks using a multi-tiered filtering system with stringent thresholds. The vocBinBase algorithm assigns the identity of compounds existing in the database. Volatile compound assignments are supported by the Adams mass spectral-retention index library, which contains over 2,000 plant-derived volatile compounds. Novel molecules that are not found within vocBinBase are automatically added using strict mass spectral and experimental criteria. Users obtain fully annotated data sheets with quantitative information for all volatile compounds for studies that may consist of thousands of chromatograms. The vocBinBase database may also be queried across different studies, comprising currently 1,537 unique mass spectra generated from 1.7 million deconvoluted mass spectra of 3,435 samples (18 species).
Mass spectra with retention indices and volatile profiles are available as free download under the CC-BY agreement (http

  16. National Solar Radiation Database 1991-2010 Update: User's Manual

    Energy Technology Data Exchange (ETDEWEB)

    Wilcox, S. M.

    2012-08-01

    This user's manual provides information on the updated 1991-2010 National Solar Radiation Database. Included are data format descriptions, data sources, production processes, and information about data uncertainty.

  17. The ChArMEx database

    Science.gov (United States)

    Ferré, Helene; Belmahfoud, Nizar; Boichard, Jean-Luc; Brissebrat, Guillaume; Descloitres, Jacques; Fleury, Laurence; Focsa, Loredana; Henriot, Nicolas; Mastrorillo, Laurence; Mière, Arnaud; Vermeulen, Anne

    2014-05-01

observations or products that will be provided to the database. - A search tool to browse the catalogue using thematic, geographic and/or temporal criteria. - A shopping-cart web interface to order in situ data files. - A web interface to select and access homogenized datasets. Interoperability between the two data centres is being set up using the OPeNDAP protocol. The data portal will soon offer user-friendly access to satellite products managed by the ICARE data centre (SEVIRI, TRMM, PARASOL...). In order to meet the operational needs of the airborne and ground-based observational teams during the ChArMEx 2012 and 2013 campaigns, a day-to-day chart and report display website has been developed as well: http://choc.sedoo.org. It offers a convenient way to browse weather conditions and chemical composition during the campaign periods.

  18. The Danish ventral hernia database

    DEFF Research Database (Denmark)

    Helgstrand, Frederik; Jorgensen, Lars Nannestad

    2016-01-01

Aim: The Danish Ventral Hernia Database (DVHD) provides national surveillance of current surgical practice and clinical postoperative outcomes. The intention is to reduce postoperative morbidity and hernia recurrence, evaluate new treatment strategies, and facilitate nationwide implementation of ... of operations and is an excellent tool for observing changes over time, including adjustment of several confounders. This national database registry has impacted on clinical practice in Denmark and led to a high number of scientific publications in recent years. ... to the surgical repair are recorded. Data registration is mandatory. Data may be merged with other Danish health registries and information from patient questionnaires or clinical examinations. Descriptive data: More than 37,000 operations have been registered. Data have demonstrated high agreement with patient...

  19. Design database for quantitative trait loci (QTL) data warehouse, data mining, and meta-analysis.

    Science.gov (United States)

    Hu, Zhi-Liang; Reecy, James M; Wu, Xiao-Lin

    2012-01-01

    A database can be used to warehouse quantitative trait loci (QTL) data from multiple sources for comparison, genomic data mining, and meta-analysis. A robust database design involves sound data structure logistics, meaningful data transformations, normalization, and proper user interface designs. This chapter starts with a brief review of relational database basics and concentrates on issues associated with curation of QTL data into a relational database, with emphasis on the principles of data normalization and structure optimization. In addition, some simple examples of QTL data mining and meta-analysis are included. These examples are provided to help readers better understand the potential and importance of sound database design.
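The normalization principle discussed in this chapter can be shown with a two-table example: trait names that would otherwise repeat across QTL rows are factored into their own table and referenced by ID. The schema and data are invented for illustration, not the chapter's actual design.

```python
import sqlite3

# Invented, minimal illustration of normalization for QTL curation: the trait
# name is stored once in `traits`, and each QTL row references it by ID.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE traits (trait_id INTEGER PRIMARY KEY, trait_name TEXT UNIQUE);
CREATE TABLE qtl (
    qtl_id INTEGER PRIMARY KEY,
    trait_id INTEGER REFERENCES traits(trait_id),
    chromosome TEXT,
    peak_cm REAL
);
""")

def add_qtl(conn, trait_name, chromosome, peak_cm):
    """Insert a QTL, reusing (or creating) the normalized trait record."""
    conn.execute("INSERT OR IGNORE INTO traits(trait_name) VALUES (?)",
                 (trait_name,))
    (trait_id,) = conn.execute(
        "SELECT trait_id FROM traits WHERE trait_name = ?",
        (trait_name,)).fetchone()
    conn.execute("INSERT INTO qtl(trait_id, chromosome, peak_cm) VALUES (?,?,?)",
                 (trait_id, chromosome, peak_cm))

add_qtl(conn, "backfat thickness", "2", 54.0)
add_qtl(conn, "backfat thickness", "7", 61.5)

# Two QTL rows, but the trait name is stored exactly once.
(count,) = conn.execute("SELECT COUNT(*) FROM traits").fetchone()
print(count)
```

Normalizing this way keeps trait spellings consistent across sources, which is exactly what cross-study comparison and meta-analysis depend on.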

  20. Searching mixed DNA profiles directly against profile databases.

    Science.gov (United States)

    Bright, Jo-Anne; Taylor, Duncan; Curran, James; Buckleton, John

    2014-03-01

DNA databases have revolutionised forensic science. They are a powerful investigative tool as they have the potential to identify persons of interest in criminal investigations. Routinely, a DNA profile generated from a crime sample could only be searched for in a database of individuals if the stain was from a single contributor (single source) or if a contributor could unambiguously be determined from a mixed DNA profile. This meant that a significant number of samples were unsuitable for database searching. The advent of continuous methods for the interpretation of DNA profiles offers an advanced way to draw inferential power from the considerable investment made in DNA databases. Using these methods, each profile on the database may be considered a possible contributor to a mixture and a likelihood ratio (LR) can be formed. Those profiles which produce a sufficiently large LR can serve as an investigative lead. In this paper, empirical studies are described to determine what constitutes a large LR. We investigate the effect on a database search of complex mixed DNA profiles with contributors in equal proportions with dropout as a consideration, and also the effect of an incorrect assignment of the number of contributors to a profile. In addition, we give, as a demonstration of the method, the results using two crime samples that were previously unsuitable for database comparison. We show that effective management of the selection of samples for searching and the interpretation of the output can be highly informative. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
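The likelihood ratio compared here is LR = P(evidence | the database profile is a contributor) / P(evidence | it is not). As a toy sketch of how per-locus ratios combine into an overall LR, the snippet below multiplies them on a log10 scale; all probability values are invented and far simpler than a real continuous interpretation model.

```python
import math

def log10_lr(per_locus):
    """Combine per-locus (p_Hp, p_Hd) probability pairs into a log10 LR.

    Loci are treated as independent, so the overall LR is the product of
    per-locus ratios, i.e. the sum of their log10 values.
    """
    return sum(math.log10(p_hp / p_hd) for p_hp, p_hd in per_locus)

# Invented per-locus probabilities under the two hypotheses.
loci = [(0.8, 0.05), (0.6, 0.10), (0.9, 0.02)]
print(round(log10_lr(loci), 2))  # → 3.64, i.e. an LR of a few thousand
```

Whether an LR of that magnitude is "sufficiently large" to report as an investigative lead is precisely the empirical question the paper's database searches address.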

  1. Comparison of Various Databases for Estimation of Dietary Polyphenol Intake in the Population of Polish Adults

    Directory of Open Access Journals (Sweden)

    Anna M. Witkowska

    2015-11-01

Full Text Available The primary aim of the study was to estimate the consumption of polyphenols in a population of 6661 subjects aged between 20 and 74 years representing a cross-section of the Polish society, and the second objective was to compare the intakes of flavonoids calculated on the basis of the two commonly used databases. Daily food consumption data were collected in 2003–2005 using a single 24-hour dietary recall. Intake of total polyphenols was estimated using the online Phenol-Explorer database, and flavonoid intake was determined using the following data sources: the United States Department of Agriculture (USDA) flavonoid and isoflavone databases combined, and the Phenol-Explorer database. Total polyphenol intake, calculated with the Phenol-Explorer database, was 989 mg/day, with major contributions from phenolic acids (556 mg/day) and flavonoids (403.5 mg/day). The flavonoid intake calculated on the basis of the USDA databases was 525 mg/day. This study found that tea is the primary source of polyphenols and flavonoids for the studied population, mainly flavanols, while coffee is the most important contributor of phenolic acids, mostly hydroxycinnamic acids. Our study also demonstrated that flavonoid intakes estimated according to various databases may differ substantially. Further work should be undertaken to expand polyphenol databases to better reflect their food contents.
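The database comparison described above reduces to the same arithmetic for each subject: multiply each food's consumed amount by its polyphenol content per 100 g in a given composition table, then sum. The snippet below does this for two invented composition tables; all food names and values are made up and do not reflect USDA or Phenol-Explorer content.

```python
# Daily consumption in grams (invented values for one hypothetical subject).
consumption_g = {"tea": 500, "apple": 150}

# Flavonoid content in mg per 100 g, per composition database (invented).
db_a = {"tea": 30.0, "apple": 8.0}
db_b = {"tea": 25.0, "apple": 10.0}

def total_intake_mg(consumption, composition):
    """Sum flavonoid intake across foods for one composition database."""
    return sum(grams * composition[food] / 100.0
               for food, grams in consumption.items())

print(total_intake_mg(consumption_g, db_a))  # → 162.0
print(total_intake_mg(consumption_g, db_b))  # → 140.0
```

Even with identical consumption data, the two tables yield different totals, which is the effect the study quantifies at population scale.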

  2. News from the Library: Looking for materials properties? Find the answer in CINDAS databases

    CERN Multimedia

    CERN Library

    2012-01-01

    Materials properties databases are a crucial source of information when doing research in Materials Science. The creation and regular updating of such databases requires identification and collection of relevant worldwide scientific and technical literature, followed by the compilation, critical evaluation, correlation and synthesis of both existing and new experimental data.   The Center for Information and Numerical Data Analysis and Synthesis (CINDAS) at Purdue University produces several databases on the properties and behaviour of materials. The databases include: - ASMD (Aerospace Structural Metals Database) which gives access to approximately 80,000 data curves on over 220 alloys used in the aerospace and other industries - the Microelectronics Packaging Materials Database (MPMD), providing data and information on the thermal, mechanical, electrical and physical properties of electronics packaging materials, and - the Thermophysical Properties of Matter Database (TPMD), covering the...

  3. The ChArMEx database

    Science.gov (United States)

    Ferré, Hélène; Descloitres, Jacques; Fleury, Laurence; Boichard, Jean-Luc; Brissebrat, Guillaume; Focsa, Loredana; Henriot, Nicolas; Mastrorillo, Laurence; Mière, Arnaud; Vermeulen, Anne

    2013-04-01

The Chemistry-Aerosol Mediterranean Experiment (ChArMEx, http://charmex.lsce.ipsl.fr/) aims at a scientific assessment of the present and future state of the atmospheric environment in the Mediterranean Basin, and of its impacts on the regional climate, air quality, and marine biogeochemistry. The project includes long term monitoring of environmental parameters, intensive field campaigns, use of satellite data and modelling studies. Therefore ChArMEx scientists produce and need to access a wide diversity of data. In this context, the objective of the database task is to organize data management, a distribution system and services such as facilitating the exchange of information and stimulating the collaboration between researchers within the ChArMEx community, and beyond. The database relies on a strong collaboration between the OMP and ICARE data centres and falls within the scope of the Mediterranean Integrated Studies at Regional And Local Scales (MISTRALS) program data portal. All the data produced by or of interest for the ChArMEx community will be documented in the data catalogue and accessible through the database website: http://mistrals.sedoo.fr/ChArMEx. The database website offers different tools: - A registration procedure which enables any scientist to accept the data policy and apply for a user database account. - Forms to document observations or products that will be provided to the database in compliance with metadata international standards (ISO 19115-19139; INSPIRE; Global Change Master Directory Thesaurus). - A search tool to browse the catalogue using thematic, geographic and/or temporal criteria. - Sorted lists of the datasets by thematic keywords, by measured parameters, by instruments or by platform type. - A shopping-cart web interface to order in situ data files. At present datasets from the background monitoring station of Ersa, Cape Corsica and from the 2012 ChArMEx pre-campaign are available. - A user-friendly access to satellite products

  4. Web Exploration Tools for a Fast Federated Optical Survey Database

    Science.gov (United States)

    Humphreys, Roberta M.

    2000-01-01

    We implemented several new web-based tools to improve the efficiency and versatility of access to the APS Catalog of the POSS I (Palomar Observatory-National Geographic Sky Survey) and its associated image database. The most important addition was a federated database system (FDBS) to link the APS Catalog and the image database into one Internet-accessible database. With the FDBS, queries and transactions on the integrated database are performed as if it were a single database. We installed Myriad, the FDBS developed by Professor Jaideep Srivastava and members of his group in the University of Minnesota Computer Science Department. It is the first system to provide schema integration, query processing and optimization, and transaction management capabilities in a single framework. The attached figure illustrates the Myriad architecture. The FDBS permits horizontal access to the data, not just vertical: for example, APS queries can be made not only by sky position, but also by any parameter present in either of the databases. APS users will be able to produce an image of all the blue galaxies and stellar sources for comparison with X-ray source error ellipses from AXAF (Advanced X-ray Astrophysics Facility, now Chandra), for example. The FDBS is now available as a beta release with the appropriate query forms at our web site. While much of our time was occupied with adapting Myriad to the APS environment, we also made major changes to StarBase, our DBMS for the Catalog, at the web interface to improve its efficiency in issuing and processing queries. StarBase is now three times faster for large queries. Improvements were also made at the web end of the image database for faster access, although work still needs to be done on the image database itself for more efficient return with the FDBS. During the past few years, we made several improvements to the database pipeline that creates the individual plate databases queried by StarBase. The changes include improved positions

  5. Intrinsic Radiation Source Generation with the ISC Package: Data Comparisons and Benchmarking

    International Nuclear Information System (INIS)

    Solomon, Clell J. Jr.

    2012-01-01

    The characterization of radioactive emissions from unstable isotopes (intrinsic radiation) is necessary for shielding and radiological-dose calculations involving radioactive materials. While most radiation transport codes, e.g., MCNP [X-5 Monte Carlo Team, 2003], provide the capability to input user-prescribed source definitions, such as radioactive emissions, they do not provide the capability to calculate the correct radioactive-source definition from the material compositions. Special modifications to MCNP have been developed in the past to allow the user to specify an intrinsic source, but these modifications have not been implemented into the primary code base [Estes et al., 1988]. To facilitate the description of the intrinsic-radiation source from a material with a specific composition, the Intrinsic Source Constructor library (LIBISC) and the MCNP Intrinsic Source Constructor (MISC) utility have been written. The combination of LIBISC and MISC is herein referred to as the ISC package. LIBISC is a statically linkable C++ library that provides the functionality necessary to construct the intrinsic-radiation source generated by a material. Furthermore, LIBISC provides the ability to use different particle-emission databases, radioactive-decay databases, and natural-abundance databases, allowing the user flexibility in the specification of the source if one database is preferred over others. LIBISC also provides functionality for aging materials and for producing a thick-target bremsstrahlung photon-source approximation from the electron emissions. The MISC utility links to LIBISC and facilitates the description of intrinsic-radiation sources in a format directly usable with the MCNP transport code. Through a series of input keywords and arguments, the MISC user can specify the material, age the material if desired, and produce a source description of the radioactive emissions from the material in an MCNP-readable format. 
Further details of using the MISC utility can
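    The material-aging capability described above comes down to radioactive-decay arithmetic. As a hedged illustration (this is not LIBISC's actual API, and the isotope inventory below is hypothetical), the surviving atom count and resulting activity of a single isotope can be sketched as:

```python
import math

def age_isotope(n0, half_life, t):
    """Atoms remaining after aging time t (same units as half_life): N(t) = N0 * 2**(-t/T_half)."""
    return n0 * 2.0 ** (-t / half_life)

def activity(n, half_life):
    """Activity A = lambda * N, with decay constant lambda = ln(2) / half_life."""
    return math.log(2.0) / half_life * n

# Example: 1e20 atoms of Co-60 (half-life ~5.27 years) aged for one half-life.
# Activities here are per-year, since the half-life is given in years.
n = age_isotope(1.0e20, 5.27, 5.27)
```

After one half-life exactly half the atoms remain, and the activity drops by the same factor.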

  6. The GLIMS Glacier Database

    Science.gov (United States)

    Raup, B. H.; Khalsa, S. S.; Armstrong, R.

    2007-12-01

    The Global Land Ice Measurements from Space (GLIMS) project has built a geospatial and temporal database of glacier data, composed of glacier outlines and various scalar attributes. These data are derived primarily from satellite imagery, such as ASTER and Landsat. Each "snapshot" of a glacier is from a specific time, and the database is designed to store multiple snapshots representative of different times. We have implemented two web-based interfaces to the database: one enables exploration of the data via interactive maps (web map server), while the other allows searches based on text-field constraints. The web map server is an Open Geospatial Consortium (OGC) compliant Web Map Server (WMS) and Web Feature Server (WFS). This means that other web sites can display glacier layers from our site over the Internet, or retrieve glacier features in vector format. All components of the system are implemented using open-source software: Linux, PostgreSQL, PostGIS (geospatial extensions to the database), MapServer (WMS and WFS), and several supporting components such as Proj.4 (a geographic projection library) and PHP. These tools are robust and provide a flexible and powerful framework for web mapping applications. As a service to the GLIMS community, the database contains metadata on all ASTER imagery acquired over glacierized terrain. Reduced-resolution versions of the images (browse imagery) can be viewed either as a layer in the MapServer application or overlaid on the virtual globe within Google Earth. The interactive map application allows the user to constrain by time what data appear on the map; for example, ASTER imagery or glacier outlines from 2002 only, or from autumn of any year, can be displayed. The system allows users to download their selected glacier data in a choice of formats. The results of a query based on spatial selection (using a mouse) or text-field constraints can be downloaded in any of these formats: ESRI shapefiles, KML (Google Earth), Map
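    Because the map interface is a standards-compliant OGC WMS, any client can request glacier layers with a plain GetMap URL. A minimal sketch (the endpoint and layer name below are hypothetical placeholders, not the actual GLIMS service parameters):

```python
from urllib.parse import urlencode

def wms_getmap_url(base_url, layer, bbox, width=800, height=600,
                   crs="EPSG:4326", fmt="image/png"):
    """Build an OGC WMS 1.3.0 GetMap request URL for one layer and bounding box."""
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.3.0",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "CRS": crs,
        "BBOX": ",".join(str(v) for v in bbox),  # minx,miny,maxx,maxy
        "WIDTH": width,
        "HEIGHT": height,
        "FORMAT": fmt,
    }
    return base_url + "?" + urlencode(params)

# Hypothetical server and layer name, for illustration only:
url = wms_getmap_url("https://glims.example.org/wms", "glacier_outlines",
                     (-50.0, 60.0, -48.0, 62.0))
```

Fetching such a URL from any WMS-capable client (or a plain HTTP GET) returns a rendered map image of the requested layer.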

  7. Catalog of Federal Funding Sources for Watershed Protection

    Data.gov (United States)

    U.S. Environmental Protection Agency — Catalog of Federal Funding Sources for Watershed Protection Web site is a searchable database of financial assistance sources (grants, loans) available to fund a...

  8. A database and API for variation, dense genotyping and resequencing data

    Directory of Open Access Journals (Sweden)

    Flicek Paul

    2010-05-01

    Full Text Available Abstract Background Advances in sequencing and genotyping technologies are leading to the widespread availability of multi-species variation data, dense genotype data and large-scale resequencing projects. The 1000 Genomes Project and similar efforts in other species are challenging the methods previously used for the storage and manipulation of such data, necessitating the redesign of existing genome-wide bioinformatics resources. Results Ensembl has created a database and software library to support data storage, analysis and access to the existing and emerging variation data from large mammalian and vertebrate genomes. These tools scale to thousands of individual genome sequences and are integrated into the Ensembl infrastructure for genome annotation and visualisation. The database and software system is easily expanded to integrate both public and non-public data sources in the context of an Ensembl software installation, and is already being used outside of the Ensembl project in a number of database and application environments. Conclusions Ensembl's powerful, flexible and open-source infrastructure for the management of variation, genotyping and resequencing data is freely available at http://www.ensembl.org.
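    For readers who want to look up a single variant rather than install the full software library, Ensembl also exposes its variation data over HTTP. The sketch below only composes a lookup URL; the endpoint layout is assumed from Ensembl's public REST service, which is a separate interface from the database API described in this abstract:

```python
from urllib.parse import quote

def variation_url(species, rsid, server="https://rest.ensembl.org"):
    """Compose a JSON lookup URL for Ensembl's REST variation endpoint."""
    return (f"{server}/variation/{quote(species)}/{quote(rsid)}"
            "?content-type=application/json")

url = variation_url("human", "rs699")
```

An HTTP GET on the resulting URL returns the variant record (alleles, mappings, evidence) as JSON.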

  9. Upwelling to Outflowing Oxygen Ions at Auroral Latitudes during Quiet Times: Exploiting a New Satellite Database

    Science.gov (United States)

    Redmon, Robert J.

    The mechanisms by which thermal O+ escapes from the top of the ionosphere into the magnetosphere are not fully understood, even after 30 years of active research. This thesis introduces a new database, builds a simulation framework around a thermospheric model, and exploits these tools to gain new insights into the study of O+ ion outflows. A dynamic auroral boundary identification system is developed using Defense Meteorological Satellite Program (DMSP) spacecraft observations at 850 km to build a database characterizing the oxygen source region. This database resolves the ambiguity introduced by the expansion and contraction of the auroral zone. Mining this new dataset reveals new understanding. We describe the statistical trajectory of the cleft ion fountain return flows over the polar cap as a function of activity and the orientation of the interplanetary magnetic field y-component. A substantial peak in upward-moving O+ in the morning hours is discovered. Using published high-altitude data we demonstrate that between 850 and 6000 km altitude, O+ is energized predominantly through transverse heating, and acceleration in this altitude region is relatively more important in the cusp than at midnight. We compare data with a thermospheric model to study the effects of solar irradiance, electron precipitation and neutral wind on the distribution of upward O+ at auroral latitudes. EUV irradiance is shown to play a dominant role in establishing a dawn-focused source population of upwelling O+ that is responsible for a pre-noon feature in escaping O+ fluxes. This feature has been corroborated by observations from platforms including the Dynamics Explorer 1 (DE-1), Polar, and Fast Auroral SnapshoT (FAST) spacecraft. During quiet times our analysis shows that the neutral wind is more important than electron precipitation in establishing the dayside O+ upwelling distribution. 
Electron precipitation is found to play a relatively modest role in controlling dayside, and a

  10. Database Description - DMPD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database Description. General information of database: Database name: DMPD; alternative name: Dynamic Macrophage Pathway CSML Database; DOI: 10.18908/lsdba.nbdc00558-000. Creator Name: Masao Naga... (...ty of Tokyo, 4-6-1 Shirokanedai, Minato-ku, Tokyo 108-8639; Tel: +81-3-5449-5615; FAX: +83-3-5449-5442). Taxonomy Name: Mammalia; Taxonomy ID: 40674. Database description: DMPD collects... Article title: ...; Author name(s): ...; Journal: ...; External Links: ... Original website information; database maintenan...

  11. An information integration system for structured documents, Web, and databases

    OpenAIRE

    Morishima, Atsuyuki

    1998-01-01

    Rapid advance in computer network technology has changed the style of computer utilization. Distributed computing resources over world-wide computer networks are available from our local computers. They include powerful computers and a variety of information sources. This change is raising more advanced requirements. Integration of distributed information sources is one of such requirements. In addition to conventional databases, structured documents have been widely used, and have increasing...

  12. The formation and design of the 'Acute Admission Database'- a database including a prospective, observational cohort of 6279 patients triaged in the emergency department in a larger Danish hospital

    Directory of Open Access Journals (Sweden)

    Barfod Charlotte

    2012-04-01

    Full Text Available Abstract Background Management and care of the acutely ill patient have improved over recent years due to the introduction of systematic assessment and accelerated treatment protocols. We have, however, sparse knowledge of the association between patient status at admission to hospital and patient outcome. A likely explanation is the difficulty in retrieving all relevant information from one database. The objective of this article was 1) to describe the formation and design of the 'Acute Admission Database', and 2) to characterize the cohort included. Methods All adult patients triaged at the Emergency Department at Hillerød Hospital and admitted either to the observation unit or to a general in-hospital ward were prospectively included during a period of 22 weeks. The triage system used was a Danish adaptation of the Swedish triage system ADAPT. Data from 3 different data sources were merged using a unique identifier, the Central Personal Registry (CPR) number: 1) data from patient admission: time and date, vital signs, presenting complaint and triage category; 2) blood sample results taken at admission, including a venous acid-base status; and 3) outcome measures, e.g. length of stay, admission to the Intensive Care Unit, and mortality within 7 and 28 days after admission. Results In primary triage, patients were categorized as red (4.4%), orange (25.2%), yellow (38.7%) and green (31.7%). Abnormal vital signs were present at admission in 25% of the patients, most often temperature (10.5%), saturation of peripheral oxygen (9.2%), Glasgow Coma Score (6.6%) and respiratory rate (4.8%). A venous acid-base status was obtained in 43% of all patients. The majority (78%) had a pH within the normal range (7.35-7.45); 15% had acidosis (pH < 7.35) and the remainder alkalosis (pH > 7.45). Median length of stay was 2 days (range 1-123). The proportion of patients admitted to the Intensive Care Unit was 1.6% (95% CI 1.2-2.0), 1.8% (95% CI 1.5-2.2) died within 7 days, and 4.2% (95% CI 3.7-4.7) died within 28 days after admission
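    The key design step described above — joining admission, laboratory and outcome records on the CPR number — can be sketched minimally as follows (the field names are hypothetical; the abstract does not describe the actual schema):

```python
def merge_admission_records(triage, labs, outcomes):
    """Join three per-patient tables on the CPR number (the unique identifier)."""
    merged = {}
    for cpr, rec in triage.items():
        merged[cpr] = dict(rec)
        merged[cpr].update(labs.get(cpr, {}))      # blood samples; may be missing
        merged[cpr].update(outcomes.get(cpr, {}))  # length of stay, mortality, etc.
    return merged

# Illustrative single-patient example with made-up values:
triage = {"0101801234": {"triage": "orange", "resp_rate": 22}}
labs = {"0101801234": {"ph": 7.31}}
outcomes = {"0101801234": {"los_days": 3, "dead_28d": False}}
combined = merge_admission_records(triage, labs, outcomes)
```

Patients without a venous acid-base sample (57% of the cohort here) simply lack those fields in the merged record.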

  13. DEEP SPITZER OBSERVATIONS OF INFRARED-FAINT RADIO SOURCES: HIGH-REDSHIFT RADIO-LOUD ACTIVE GALACTIC NUCLEI?

    International Nuclear Information System (INIS)

    Norris, Ray P.; Mao, Minnie; Afonso, Jose; Cava, Antonio; Farrah, Duncan; Oliver, Seb; Huynh, Minh T.; Mauduit, Jean-Christophe; Surace, Jason; Ivison, R. J.; Jarvis, Matt; Lacy, Mark; Maraston, Claudia; Middelberg, Enno; Seymour, Nick

    2011-01-01

    Infrared-faint radio sources (IFRSs) are a rare class of objects which are relatively bright at radio wavelengths but very faint at infrared and optical wavelengths. Here we present sensitive near-infrared observations of a sample of these sources taken as part of the Spitzer Extragalactic Representative Volume Survey. Nearly all the IFRSs are undetected at a level of ∼1 μJy in these new deep observations, and even the detections are consistent with confusion with unrelated galaxies. A stacked image implies that the median flux density is S(3.6 μm) ∼ 0.2 μJy or less, giving extreme values of the radio-infrared flux density ratio. Comparison of these objects with known classes of object suggests that the majority are probably high-redshift radio-loud galaxies, possibly suffering from significant dust extinction.

  14. FAINT RADIO-SOURCES WITH PEAKED SPECTRA .1. VLA OBSERVATIONS OF A NEW SAMPLE WITH INTERMEDIATE FLUX-DENSITIES

    NARCIS (Netherlands)

    SNELLEN, IAG; ZHANG, M; SCHILIZZI, RT; ROTTGERING, HJA; DEBRUYN, AG; MILEY, GK

    We present 2 and 20 cm observations with the VLA of 25 candidate peaked-spectrum radio sources. These data, combined with those from earlier surveys, have allowed us to construct radio spectra spanning a frequency range from 0.3 to 15 GHz. Ten of the 25 sources are found to be variable with no
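    Spectra assembled from such multi-frequency flux densities are conventionally characterized by a two-point spectral index α, defined through S ∝ ν^α, so that α = log(S₂/S₁)/log(ν₂/ν₁); a peaked-spectrum source has α > 0 below its turnover frequency and α < 0 above it. A minimal sketch with made-up flux densities:

```python
import math

def spectral_index(s1, nu1, s2, nu2):
    """Two-point spectral index alpha, using the convention S proportional to nu**alpha."""
    return math.log(s2 / s1) / math.log(nu2 / nu1)

# A hypothetical source falling from 200 mJy at 1.4 GHz to 50 mJy at 15 GHz
# has a steep (negative) spectrum above the turnover:
alpha = spectral_index(200.0, 1.4, 50.0, 15.0)
```

Computing α separately below and above the suspected peak is the basic test for a genuinely peaked spectrum.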

  15. Database Dump - fRNAdb | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us fRNAdb Database Dump. Data detail: Data name: Database Dump; DOI: 10.18908/lsdba.nbdc00452-002; data format: tab-separated text. Data file: File name: Database_Dump; File URL: ftp://ftp.biosciencedbc.jp/archive/frnadb/LATEST/Database_Dump; File size: 673 MB. Number of data entries: 4 files. About This Database: Database Description, Download License, Update History of This Database, Site Policy | Contact Us. Database Dump - fRNAdb | LSDB Archive

  16. Construction of crystal structure prototype database: methods and applications.

    Science.gov (United States)

    Su, Chuanxun; Lv, Jian; Li, Quan; Wang, Hui; Zhang, Lijun; Wang, Yanchao; Ma, Yanming

    2017-04-26

    Crystal structure prototype data have become a useful source of information for materials discovery in the fields of crystallography, chemistry, physics, and materials science. This work reports the development of a robust and efficient method for assessing the similarity of structures on the basis of their interatomic distances. Using this method, we proposed a simple and unambiguous definition of a crystal structure prototype based on hierarchical clustering theory, and constructed the crystal structure prototype database (CSPD) by filtering the known crystallographic structures in a database. With a similar method, a program, the structure prototype analysis package (SPAP), was developed to remove similar structures in CALYPSO prediction results and to extract predicted low-energy structures for a separate theoretical structure database. A series of statistics describing the distribution of crystal structure prototypes in the CSPD was compiled to provide important insight for structure prediction and high-throughput calculations. Illustrative examples of the application of the proposed database are given, including the generation of initial structures for structure prediction and the determination of prototype structures in databases. These examples demonstrate the CSPD to be a generally applicable and useful tool for materials discovery.
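    The idea of judging structural similarity through interatomic distances can be illustrated with a toy sketch. This is not the CSPD's actual similarity measure — just a minimal distance-fingerprint comparison for point sets, to make the principle concrete:

```python
import math

def distance_fingerprint(points, k=8):
    """Sorted list of the up-to-k shortest pairwise distances: a crude structural descriptor."""
    dists = sorted(
        math.dist(p, q)
        for i, p in enumerate(points)
        for q in points[i + 1:]
    )
    return dists[:k]

def similarity(fp_a, fp_b):
    """Map the mean absolute fingerprint difference onto (0, 1]; 1.0 means identical."""
    diff = sum(abs(a - b) for a, b in zip(fp_a, fp_b)) / max(len(fp_a), 1)
    return 1.0 / (1.0 + diff)

# Two copies of the same 2-D "cell" are maximally similar:
square = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
fp = distance_fingerprint(square)
```

Pairwise similarities of this kind feed directly into hierarchical (agglomerative) clustering, with each resulting cluster represented by one prototype.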

  17. Construction of crystal structure prototype database: methods and applications

    International Nuclear Information System (INIS)

    Su, Chuanxun; Lv, Jian; Wang, Hui; Wang, Yanchao; Ma, Yanming; Li, Quan; Zhang, Lijun

    2017-01-01

    Crystal structure prototype data have become a useful source of information for materials discovery in the fields of crystallography, chemistry, physics, and materials science. This work reports the development of a robust and efficient method for assessing the similarity of structures on the basis of their interatomic distances. Using this method, we proposed a simple and unambiguous definition of a crystal structure prototype based on hierarchical clustering theory, and constructed the crystal structure prototype database (CSPD) by filtering the known crystallographic structures in a database. With a similar method, a program, the structure prototype analysis package (SPAP), was developed to remove similar structures in CALYPSO prediction results and to extract predicted low-energy structures for a separate theoretical structure database. A series of statistics describing the distribution of crystal structure prototypes in the CSPD was compiled to provide important insight for structure prediction and high-throughput calculations. Illustrative examples of the application of the proposed database are given, including the generation of initial structures for structure prediction and the determination of prototype structures in databases. These examples demonstrate the CSPD to be a generally applicable and useful tool for materials discovery. (paper)

  18. Global Partitioning of NOx Sources Using Satellite Observations: Relative Roles of Fossil Fuel Combustion, Biomass Burning and Soil Emissions

    Science.gov (United States)

    Jaegle, Lyatt; Steinberger, Linda; Martin, Randall V.; Chance, Kelly

    2005-01-01

    This document contains the following abstract for the paper "Global partitioning of NOx sources using satellite observations: Relative roles of fossil fuel combustion, biomass burning and soil emissions." Satellite observations have been used to provide important new information about emissions of nitrogen oxides. Nitrogen oxides (NOx) are significant in atmospheric chemistry, playing a role in ozone air pollution, acid deposition and climate change. We know that human activities have led to a three- to six-fold increase in NOx emissions since pre-industrial times, and that there are three main surface sources of NOx: fuel combustion, large-scale fires, and microbial soil processes. How each of these sources contributes to total NOx emissions is subject to some doubt, however. The problem is that current NOx emission inventories rely on bottom-up approaches, compiling large quantities of statistical information from diverse sources such as fuel and land use, agricultural data, and estimates of burned areas. This results in inherently large uncertainties. To overcome this, Lyatt Jaegle and colleagues from the University of Washington, USA, used new satellite observations from the Global Ozone Monitoring Experiment (GOME) instrument. As the spatial and seasonal distribution of each of the sources of NOx can be clearly mapped from space, the team could provide independent top-down constraints on the individual strengths of NOx sources, and thus help resolve discrepancies in existing inventories. Jaegle's analysis of the satellite observations, presented at the recent Faraday Discussion on "Atmospheric Chemistry", shows that fuel combustion dominates emissions at northern mid-latitudes, while fires are a significant source in the Tropics. Additionally, she discovered a larger than expected role for soil emissions, especially over agricultural regions with heavy fertilizer use. Additional information is included in the original extended abstract.

  19. ACToR Chemical Structure processing using Open Source ...

    Science.gov (United States)

    ACToR (Aggregated Computational Toxicology Resource) is a centralized database repository developed by the National Center for Computational Toxicology (NCCT) at the U.S. Environmental Protection Agency (EPA). Free and open-source tools were used to compile toxicity data from over 1,950 public sources. ACToR contains chemical structure information and toxicological data for over 558,000 unique chemicals. The database primarily includes data from NCCT research programs: in vivo toxicity data from ToxRef, human exposure data from ExpoCast, high-throughput screening data from ToxCast, and high-quality chemical structure information from the EPA DSSTox program. The DSSTox database is a chemical structure inventory for the NCCT programs and currently has about 16,000 unique structures. Also included are data from PubChem, ChemSpider, USDA, FDA, NIH and several other public data sources. ACToR has been a resource for various international and national research groups. Most of our recent efforts on ACToR have focused on improving the structural identifiers and physico-chemical properties of the chemicals in the database. Organizing this huge collection of data and improving the chemical structure quality of the database have posed some major challenges. Workflows have been developed to process structures, calculate chemical properties and identify relationships between CAS numbers. The structure processing workflow integrates web services (PubChem and NIH NCI Cactus) to d
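    One concrete piece of such CAS-number bookkeeping is validating the registry number itself: the final digit of a CAS number is a checksum, equal to the sum of the other digits weighted by their position counted from the right, modulo 10. A small sketch:

```python
def cas_checksum_ok(cas):
    """Validate the check digit of a CAS Registry Number written as NNNNNNN-NN-N."""
    digits = cas.replace("-", "")
    body, check = digits[:-1], int(digits[-1])
    # Weight each digit by its position from the right (1, 2, 3, ...).
    total = sum(int(d) * w for w, d in enumerate(reversed(body), start=1))
    return total % 10 == check

# Water is CAS 7732-18-5: (8*1 + 1*2 + 2*3 + 3*4 + 7*5 + 7*6) = 105, and 105 mod 10 = 5.
ok = cas_checksum_ok("7732-18-5")
```

Checksum validation catches transcription errors early, before attempting to link a CAS number across the aggregated source databases.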

  20. HCVpro: Hepatitis C virus protein interaction database

    KAUST Repository

    Kwofie, Samuel K.

    2011-12-01

    It is essential to catalog characterized hepatitis C virus (HCV) protein-protein interaction (PPI) data and the associated plethora of vital functional information to augment the search for therapies, vaccines and diagnostic biomarkers. In furtherance of these goals, we have developed the hepatitis C virus protein interaction database (HCVpro) by integrating manually verified hepatitis C virus-virus and virus-human protein interactions curated from literature and databases. HCVpro is a comprehensive and integrated HCV-specific knowledgebase housing consolidated information on PPIs, functional genomics and molecular data obtained from a variety of virus databases (VirHostNet, VirusMint, HCVdb and euHCVdb), and from BIND and other relevant biology repositories. HCVpro is further populated with information on hepatocellular carcinoma (HCC) related genes that are mapped onto their encoded cellular proteins. Incorporated proteins have been mapped onto Gene Ontologies, canonical pathways, Online Mendelian Inheritance in Man (OMIM) and extensively cross-referenced to other essential annotations. The database is enriched with exhaustive reviews on structure and functions of HCV proteins, current state of drug and vaccine development and links to recommended journal articles. Users can query the database using specific protein identifiers (IDs), chromosomal locations of a gene, interaction detection methods, indexed PubMed sources as well as HCVpro, BIND and VirusMint IDs. The use of HCVpro is free and the resource can be accessed via http://apps.sanbi.ac.za/hcvpro/ or http://cbrc.kaust.edu.sa/hcvpro/. © 2011 Elsevier B.V.