WorldWideScience

Sample records for multiple source databases

  1. The Protein Identifier Cross-Referencing (PICR) service: reconciling protein identifiers across multiple source databases

    Directory of Open Access Journals (Sweden)

    Leinonen Rasko

    2007-10-01

    Full Text Available Abstract Background Each major protein database uses its own conventions when assigning protein identifiers. Resolving the various, potentially unstable, identifiers that refer to identical proteins is a major challenge. This is a common problem when attempting to unify datasets that have been annotated with proteins from multiple data sources, or when querying data providers with one flavour of protein identifiers when the source database uses another. Partial solutions for protein identifier mapping exist, but they are limited to specific species or techniques and to a very small number of databases. As a result, we have not found a solution that is generic enough and broad enough in mapping scope to suit our needs. Results We have created the Protein Identifier Cross-Reference (PICR) service, a web application that provides interactive and programmatic (SOAP and REST) access to a mapping algorithm that uses the UniProt Archive (UniParc) as a data warehouse to offer protein cross-references based on 100% sequence identity to proteins from over 70 distinct source databases loaded into UniParc. Mappings can be limited by source database, taxonomic ID and activity status in the source database. Users can copy/paste or upload files containing protein identifiers or sequences in FASTA format to obtain mappings using the interactive interface. Search results can be viewed in simple or detailed HTML tables or downloaded as comma-separated values (CSV) or Microsoft Excel (XLS) files suitable for use in a local database or a spreadsheet. Alternatively, a SOAP interface is available to integrate PICR functionality in other applications, as is a lightweight REST interface. Conclusion We offer a publicly available service that can interactively map protein identifiers and protein sequences to the majority of commonly used protein databases. Programmatic access is available through a standards-compliant SOAP interface or a lightweight REST interface. The PICR
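
    The lightweight REST interface described above lends itself to simple scripted lookups. The sketch below only illustrates the pattern of building such a query and parsing a CSV response; the endpoint URL and parameter names are placeholders, not the documented PICR API, and the response is a canned string rather than a live call.

```python
import csv
import io
from urllib.parse import urlencode

# Hypothetical endpoint and parameter names -- placeholders, not the real PICR API.
BASE_URL = "https://example.org/picr/rest/getUPIForAccession"

def build_mapping_query(accession, target_databases, taxon_id=None):
    """Build a REST query URL asking to map one identifier to target databases."""
    params = [("accession", accession)]
    params += [("database", db) for db in target_databases]
    if taxon_id is not None:
        params.append(("taxid", str(taxon_id)))
    return BASE_URL + "?" + urlencode(params)

def parse_csv_mappings(csv_text):
    """Parse a CSV response of (source_id, target_db, target_id) rows."""
    return [tuple(row) for row in csv.reader(io.StringIO(csv_text)) if row]

url = build_mapping_query("P12345", ["SWISSPROT", "ENSEMBL"], taxon_id=9606)
print(url)

# A canned string standing in for the service's CSV download.
sample = "P12345,SWISSPROT,P12345\nP12345,ENSEMBL,ENSP00000354687\n"
for src, db, tgt in parse_csv_mappings(sample):
    print(src, "->", db, tgt)
```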

  2. TRAM (Transcriptome Mapper): database-driven creation and analysis of transcriptome maps from multiple sources

    Directory of Open Access Journals (Sweden)

    Danieli Gian

    2011-02-01

    clusters with differential expression during differentiation toward megakaryocytes were identified. Conclusions TRAM is designed to create, and statistically analyze, quantitative transcriptome maps based on gene expression data from multiple sources. The release includes a FileMaker Pro database management runtime application and is freely available at http://apollo11.isto.unibo.it/software/, along with preconfigured implementations for mapping the human, mouse and zebrafish transcriptomes.

  3. Mobile Source Observation Database (MSOD)

    Science.gov (United States)

    The Mobile Source Observation Database (MSOD) is a relational database developed by the Assessment and Standards Division (ASD) of the U.S. EPA Office of Transportation and Air Quality (formerly the Office of Mobile Sources).

  4. Mobile Source Observation Database (MSOD)

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Mobile Source Observation Database (MSOD) is a relational database being developed by the Assessment and Standards Division (ASD) of the US Environmental...

  5. Technical Note: A new global database of trace gases and aerosols from multiple sources of high vertical resolution measurements

    Directory of Open Access Journals (Sweden)

    G. E. Bodeker

    2008-09-01

    Full Text Available A new database of trace gases and aerosols with global coverage, derived from high vertical resolution profile measurements, has been assembled as a collection of binary data files; hereafter referred to as the "Binary DataBase of Profiles" (BDBP). Version 1.0 of the BDBP, described here, includes measurements from different satellite-based (HALOE, POAM II and III, SAGE I and II) and ground-based (ozonesondes) measurement systems. In addition to the primary product of ozone, secondary measurements of other trace gases, aerosol extinction, and temperature are included. All data are subjected to very strict quality control, and for every measurement a percentage error is included. To facilitate analyses, each measurement is added to 3 different instances (3 different grids) of the database, where measurements are indexed by: (1) geographic latitude, longitude, altitude (in 1 km steps) and time; (2) geographic latitude, longitude, pressure (at levels ~1 km apart) and time; (3) equivalent latitude, potential temperature (8 levels from 300 K to 650 K) and time.

    In contrast to existing zonal mean databases, by including a wider range of measurement sources (both satellite and ozonesonde), the BDBP is sufficiently dense to permit calculation of changes in ozone by latitude, longitude and altitude. In addition, by including other trace gases such as water vapour, this database can be used for comprehensive radiative transfer calculations. By providing the original measurements rather than derived monthly means, the BDBP is applicable to a wider range of applications than databases containing only monthly mean data. Monthly mean zonal mean ozone concentrations calculated from the BDBP are compared with the database of Randel and Wu, which has been used in many earlier analyses. As opposed to that database, which is generated from regression model fits, the BDBP uses the original (quality controlled) measurements with no smoothing applied in any
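
    The three-grid indexing scheme described above can be sketched as follows; the field names and record layout are illustrative assumptions, not the BDBP binary format.

```python
from collections import defaultdict

def altitude_bin(alt_km):
    """Bin altitude into the 1 km steps used by the first grid."""
    return int(alt_km)

def build_grids(measurements):
    """Index each measurement into three grids keyed as described for the BDBP:
    (lat, lon, altitude, time), (lat, lon, pressure level, time), and
    (equivalent latitude, potential temperature level, time)."""
    grid_alt = defaultdict(list)
    grid_prs = defaultdict(list)
    grid_theta = defaultdict(list)
    for m in measurements:
        grid_alt[(m["lat"], m["lon"], altitude_bin(m["alt_km"]), m["time"])].append(m)
        grid_prs[(m["lat"], m["lon"], m["pressure_level"], m["time"])].append(m)
        grid_theta[(m["eq_lat"], m["theta_level"], m["time"])].append(m)
    return grid_alt, grid_prs, grid_theta

# One toy ozone measurement with a percentage error, as in the database.
obs = [{"lat": -45, "lon": 170, "alt_km": 22.4, "pressure_level": 50,
        "eq_lat": -47, "theta_level": 550, "time": "2000-01",
        "o3_ppmv": 4.9, "err_pct": 5.0}]
g1, g2, g3 = build_grids(obs)
print(len(g1), len(g2), len(g3))  # → 1 1 1
```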

  6. Open Source Vulnerability Database Project

    Directory of Open Access Journals (Sweden)

    Jake Kouns

    2008-06-01

    Full Text Available This article introduces the Open Source Vulnerability Database (OSVDB) project, which manages a global collection of computer security vulnerabilities, available for free use by the information security community. This collection contains information on known security weaknesses in operating systems, software products, protocols, hardware devices, and other infrastructure elements of information technology. The OSVDB project is intended to be the centralized global open source vulnerability collection on the Internet.

  7. Free software and open source databases

    Directory of Open Access Journals (Sweden)

    Napoleon Alexandru SIRITEANU

    2006-01-01

    Full Text Available The emergence of free/open source software (FS/OSS) enterprises seeks to push software development out of the academic stream into the commercial mainstream, and as a result, end-user applications such as open source database management systems (PostgreSQL, MySQL, Firebird) are becoming more popular. Companies like Sybase, Oracle, Sun, and IBM are increasingly implementing open source strategies and porting programs/applications to the Linux environment. Open source software is redefining the software industry in general and database development in particular.

  8. DEIMOS – an Open Source Image Database

    Directory of Open Access Journals (Sweden)

    M. Blazek

    2011-12-01

    Full Text Available DEIMOS (DatabasE of Images: Open Source) is an open-source database of images and videos for testing, verifying and comparing various image and/or video processing techniques such as enhancement, compression and reconstruction. The main advantage of DEIMOS is its orientation to various application fields – multimedia, television, security, assistive technology, biomedicine, astronomy etc. DEIMOS is being built gradually, step by step, from the contributions of team members. The paper describes the basic parameters of the DEIMOS database, including application examples.

  9. Data analysis and pattern recognition in multiple databases

    CERN Document Server

    Adhikari, Animesh; Pedrycz, Witold

    2014-01-01

    Pattern recognition in data is a well known classical problem that falls under the ambit of data analysis. As we need to handle different data, the nature of patterns, their recognition and the types of data analyses are bound to change. As data collection channels have recently grown in number and diversity, many real-world data mining tasks can easily acquire multiple databases from various sources. In these cases, data mining becomes more challenging for several essential reasons. We may encounter sensitive data originating from different sources that cannot be amalgamated. Even if we are allowed to place different data together, we are still unable to analyse them when the local identities of patterns must be retained. Thus, pattern recognition in multiple databases gives rise to a suite of new, challenging problems different from those encountered before. Association rule mining, global pattern discovery, and mining patterns of select items provide different...

  10. The Development of Ontology from Multiple Databases

    Science.gov (United States)

    Kasim, Shahreen; Aswa Omar, Nurul; Fudzee, Mohd Farhan Md; Azhar Ramli, Azizul; Aizi Salamat, Mohamad; Mahdin, Hairulnizam

    2017-08-01

    The halal industry is the fastest growing global business across the world. The halal food industry is thus crucial for Muslims all over the world, as it serves to assure them that the food items they consume daily are syariah compliant. Currently, ontology is widely used in computer science, for example in heterogeneous information processing on the web, the semantic web, and information retrieval. However, ontology has still not been used widely in the halal industry. Today, the Muslim community still has difficulty verifying the halal status of products in the market, especially foods containing E numbers. This research tries to solve the problem of validating halal status from various halal sources. Various chemical ontologies from multiple databases were found to support this ontology development. The E numbers in these chemical ontologies are codes for chemicals that can be used as food additives. With this E-number ontology, the Muslim community can effectively identify and verify the halal status of products in the market.

  11. Indexing Fingerprint Databases Based on Multiple Features

    NARCIS (Netherlands)

    de Boer, Johan; Bazen, Asker M.; Gerez, Sabih H.

    2001-01-01

    In a fingerprint identification system, a person is identified only by his fingerprint. To accomplish this, a database is searched by matching all entries to the given fingerprint. However, the maximum size of the database is limited, since each match takes some amount of time and has a small probab

  12. Linking Multiple Databases: Term Project Using "Sentences" DBMS.

    Science.gov (United States)

    King, Ronald S.; Rainwater, Stephen B.

    This paper describes a methodology for use in teaching an introductory Database Management System (DBMS) course. Students master basic database concepts through the use of a multiple component project implemented in both relational and associative data models. The associative data model is a new approach for designing multi-user, Web-enabled…

  13. Power source roadmaps using bibliometrics and database tomography

    Energy Technology Data Exchange (ETDEWEB)

    Kostoff, R.N.; Pfeil, K.M. [Office of Naval Research, Arlington, VA (United States); Tshiteya, R. [DDL OMNI Engineering, Mclean, VA (United States); Humenik, J.A. [Noesis Inc., Manassas, VA (United States); Karypis, G. [University of Minnesota, Minneapolis, MN (United States). Computer Science and Engineering Dept.

    2005-04-01

    Database Tomography (DT) is a textual database analysis system consisting of two major components: (1) algorithms for extracting multi-word phrase frequencies and phrase proximities (physical closeness of the multi-word technical phrases) from any type of large textual database, to augment (2) interpretative capabilities of the expert human analyst. DT was used to derive technical intelligence from a Power Sources database derived from the Science Citation Index. Phrase frequency analysis by the technical domain experts provided the pervasive technical themes of the Power Sources database, and the phrase proximity analysis provided the relationships among the pervasive technical themes. Bibliometric analysis of the Power Sources literature supplemented the DT results with author/journal/institution/country publication and citation data. (author)
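
    The phrase-frequency component of Database Tomography can be illustrated with a minimal multi-word phrase (n-gram) counter; the corpus and phrase length below are toy assumptions, not the authors' Science Citation Index data.

```python
import re
from collections import Counter
from itertools import islice

def phrases(text, n):
    """Yield all n-word phrases (n-grams) from a text, lowercased."""
    words = re.findall(r"[a-z]+", text.lower())
    return zip(*(islice(words, i, None) for i in range(n)))

def phrase_frequencies(docs, n=2, top=3):
    """Count n-word phrase frequencies across a document collection."""
    counts = Counter()
    for doc in docs:
        counts.update(phrases(doc, n))
    return counts.most_common(top)

# Toy stand-in for abstracts from a power-sources literature database.
corpus = [
    "lithium ion battery electrodes for lithium ion cells",
    "fuel cell and lithium ion battery power sources",
]
print(phrase_frequencies(corpus, n=2))
```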

  14. Deductive Coordination of Multiple Geospatial Knowledge Sources

    Science.gov (United States)

    Waldinger, R.; Reddy, M.; Culy, C.; Hobbs, J.; Jarvis, P.; Dungan, J. L.

    2002-12-01

    Deductive inference is applied to choreograph the cooperation of multiple knowledge sources to respond to geospatial queries. When no one source can provide an answer, the response may be deduced from pieces of the answer provided by many sources. Examples of sources include (1) The Alexandria Digital Library Gazetteer, a repository that gives the locations for almost six million place names, (2) The CIA World Factbook, an online almanac with basic information about more than 200 countries, (3) The SRI TerraVision 3D Terrain Visualization System, which displays a flight-simulator-like interactive display of geographic data held in a database, (4) The NASA GDACC WebGIS client for searching satellite and other geographic data available through OpenGIS Consortium (OGC) Web Map Servers, and (5) The Northern Arizona University Latitude/Longitude Distance Calculator. Queries are phrased in English and are translated into logical theorems by the Gemini Natural Language Parser. The theorems are proved by SNARK, a first-order-logic theorem prover, in the context of an axiomatic geospatial theory. The theory embodies a representational scheme that takes into account the fact that the same place may have many names, and the same name may refer to many places. SNARK has built-in procedures (RCC8 and the Allen calculus, respectively) for reasoning about spatial and temporal concepts. External knowledge sources may be consulted by SNARK as the proof is in progress, so that most knowledge need not be stored axiomatically. The Open Agent Architecture (OAA) facilitates communication between sources that may be implemented on different machines in different computer languages. An answer to the query, in the form of text or an image, is extracted from the proof. Currently, three-dimensional images are displayed by TerraVision but other displays are possible. The combined system is called Geo-Logica. Some example queries that can be handled by Geo-Logica include: (1) show the

  15. Orthographic and Phonological Neighborhood Databases across Multiple Languages.

    Science.gov (United States)

    Marian, Viorica

    2017-01-01

    The increased globalization of science and technology and the growing number of bilinguals and multilinguals in the world have made research with multiple languages a mainstay for scholars who study human function and especially those who focus on language, cognition, and the brain. Such research can benefit from large-scale databases and online resources that describe and measure lexical, phonological, orthographic, and semantic information. The present paper discusses currently-available resources and underscores the need for tools that enable measurements both within and across multiple languages. A general review of language databases is followed by a targeted introduction to databases of orthographic and phonological neighborhoods. A specific focus on CLEARPOND illustrates how databases can be used to assess and compare neighborhood information across languages, to develop research materials, and to provide insight into broad questions about language. As an example of how using large-scale databases can answer questions about language, a closer look at neighborhood effects on lexical access reveals that not only orthographic, but also phonological neighborhoods can influence visual lexical access both within and across languages. We conclude that capitalizing upon large-scale linguistic databases can advance, refine, and accelerate scientific discoveries about the human linguistic capacity.
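
    An orthographic neighborhood in the classic sense (same-length words differing by exactly one letter, i.e. Coltheart's N) can be computed in a few lines; the toy lexicon below is illustrative, and CLEARPOND itself additionally covers phonological and cross-language neighborhoods.

```python
def is_neighbor(a, b):
    """True if b is an orthographic neighbor of a: same length, one letter differs."""
    return (len(a) == len(b) and a != b
            and sum(x != y for x, y in zip(a, b)) == 1)

def neighborhood(word, lexicon):
    """All orthographic neighbors of a word in a lexicon, sorted."""
    return sorted(w for w in lexicon if is_neighbor(word, w))

# Toy lexicon; a real neighborhood database is built over a full word list.
lexicon = {"cat", "cot", "coat", "bat", "can", "dog", "cab"}
print(neighborhood("cat", lexicon))  # → ['bat', 'cab', 'can', 'cot']
```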

  16. Multiple Object Retrieval in Image Databases Using Hierarchical Segmentation Tree

    Science.gov (United States)

    Chen, Wei-Bang

    2012-01-01

    The purpose of this research is to develop a new visual information analysis, representation, and retrieval framework for automatic discovery of salient objects of user's interest in large-scale image databases. In particular, this dissertation describes a content-based image retrieval framework which supports multiple-object retrieval. The…

  18. Quality Assurance Source Requirements Traceability Database

    Energy Technology Data Exchange (ETDEWEB)

    Murthy, R.; Naydenova, A.; Deklever, R.; Boone, A.

    2006-01-30

    At the Yucca Mountain Project, the Project Requirements Processing System assists in the management of relationships between regulatory and national/industry standards source criteria and Quality Assurance Requirements and Description document (DOE/RW-0333P) requirements to create compliance matrices representing the respective relationships. The matrices are submitted to the U.S. Nuclear Regulatory Commission to assist in the Commission's review, interpretation, and concurrence with the Yucca Mountain Project QA program document. The tool is highly customized to meet the needs of the Office of Civilian Radioactive Waste Management Office of Quality Assurance.

  19. DBMLoc: a Database of proteins with multiple subcellular localizations

    Directory of Open Access Journals (Sweden)

    Zhou Yun

    2008-02-01

    Full Text Available Abstract Background Subcellular localization information is one of the key features in protein function research. Locating to a specific subcellular compartment is essential for a protein to function efficiently. Proteins with multiple localizations provide additional clues; such proteins may account for a high proportion, possibly more than 35%. Description We have developed a database of proteins with multiple subcellular localizations, designated DBMLoc. The initial release contains 10470 multiple subcellular localization-annotated entries. Annotations are collected from primary protein databases, specific subcellular localization databases and literature texts. All protein entries are cross-referenced to GO annotations and SwissProt. Protein-protein interactions are also annotated. Entries are classified into 12 large subcellular localization categories based on the GO hierarchical architecture and original annotations. Download, search and sequence BLAST tools are also available on the website. Conclusion DBMLoc is a protein database which collects proteins with more than one subcellular localization annotation. It is freely accessible at http://www.bioinfo.tsinghua.edu.cn/DBMLoc/index.htm.

  20. Land Streamer Surveying Using Multiple Sources

    KAUST Repository

    Mahmoud, Sherif

    2014-12-11

    Various examples are provided for land streamer seismic surveying using multiple sources. In one example, among others, a method includes disposing a land streamer in-line with first and second shot sources. The first shot source is at a first source location adjacent to a proximal end of the land streamer and the second shot source is at a second source location separated by a fixed length corresponding to a length of the land streamer. Shot gathers can be obtained when the shot sources are fired. In another example, a system includes a land streamer including a plurality of receivers, a first shot source located adjacent to the proximal end of the land streamer, and a second shot source located in-line with the land streamer and the first shot source. The second shot source is separated from the first shot source by a fixed overall length corresponding to the land streamer.

  1. Integrating Multi-Source Web Records into Relational Database

    Institute of Scientific and Technical Information of China (English)

    HUANG Jianbin; JI Hongbing; SUN Heli

    2006-01-01

    How to integrate heterogeneous semi-structured Web records into a relational database is an important and challenging research topic. An improved model of conditional random fields was presented to combine the learning of labeled samples and unlabeled database records in order to reduce the dependence on tediously hand-labeled training data. The proposed model was used to solve the problem of schema matching between the data source schema and the database schema. Experimental results using a large number of Web pages from diverse domains show the novel approach's effectiveness.

  2. Multiple source ground heat storage

    Science.gov (United States)

    Belzile, P.; Lamarche, L.; Rousse, D. R.

    2016-09-01

    Shared geothermal borefields are usually operated with each borehole having the same inlet conditions (flow rate, temperature and fluid). The objective of this research is to improve the energy efficiency of shared and hybrid geothermal borefields by segregating heat transfer sources. Two models are briefly presented: the first allows the inlet conditions to be segregated for each borefield; the second allows circuits to be defined independently for each leg of the double U-tubes in a borehole. An application couples residential heat pumps and arrays of solar collectors. The independent-circuit configuration gave the best energy savings with a symmetric configuration, the largest shank spacing, and solar collectors operating all year long. In this configuration the boreholes could be shortened from 300 m to 150 m.

  3. Feasibility and utility of applications of the common data model to multiple, disparate observational health databases.

    Science.gov (United States)

    Voss, Erica A; Makadia, Rupa; Matcho, Amy; Ma, Qianli; Knoll, Chris; Schuemie, Martijn; DeFalco, Frank J; Londhe, Ajit; Zhu, Vivienne; Ryan, Patrick B

    2015-05-01

    To evaluate the utility of applying the Observational Medical Outcomes Partnership (OMOP) Common Data Model (CDM) across multiple observational databases within an organization and to apply standardized analytics tools for conducting observational research. Six deidentified patient-level datasets were transformed to the OMOP CDM. We evaluated the extent of information loss that occurred through the standardization process. We developed a standardized analytic tool to replicate the cohort construction process from a published epidemiology protocol and applied the analysis to all 6 databases to assess time-to-execution and comparability of results. Transformation to the CDM resulted in minimal information loss across all 6 databases. Patients and observations were excluded due to identified data quality issues in the source system; 96% to 99% of condition records and 90% to 99% of drug records were successfully mapped into the CDM using the standard vocabulary. The full cohort replication and descriptive baseline summary was executed for 2 cohorts in 6 databases in less than 1 hour. The standardization process improved data quality, increased efficiency, and facilitated cross-database comparisons to support a more systematic approach to observational research. Comparisons across data sources showed consistency in the impact of the inclusion criteria applied using the protocol, and identified differences in patient characteristics and coding practices across databases. Standardizing data structure (through a CDM), content (through a standard vocabulary with source code mappings), and analytics can enable an institution to apply a network-based approach to observational research across multiple, disparate observational health databases. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association.
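
    The vocabulary-mapping step described above, including the report of what fraction of source codes mapped successfully, can be sketched as follows; the mapping table and concept identifiers are toy stand-ins, not the OMOP standard vocabulary.

```python
# Toy source-code-to-standard-concept table; real OMOP mappings come from the
# standard vocabulary, and these concept ids are invented for illustration.
SOURCE_TO_STANDARD = {
    "ICD9:250.00": 201826,
    "ICD9:401.9": 320128,
    "NDC:00093101": 1503297,
}

def map_records(records):
    """Map source codes to standard concepts; return mapped, unmapped, and % mapped."""
    mapped, unmapped = [], []
    for code in records:
        concept_id = SOURCE_TO_STANDARD.get(code)
        (mapped if concept_id else unmapped).append((code, concept_id))
    pct = 100.0 * len(mapped) / len(records) if records else 0.0
    return mapped, unmapped, pct

recs = ["ICD9:250.00", "ICD9:401.9", "LOCAL:abc"]
mapped, unmapped, pct = map_records(recs)
print(f"{pct:.0f}% mapped; unmapped: {[c for c, _ in unmapped]}")
```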

  4. Multiple Source Adaptation and the Renyi Divergence

    CERN Document Server

    Mansour, Yishay; Rostamizadeh, Afshin

    2012-01-01

    This paper presents a novel theoretical study of the general problem of multiple source adaptation using the notion of Renyi divergence. Our results build on our previous work [12], but significantly broaden the scope of that work in several directions. We extend previous multiple source loss guarantees based on distribution weighted combinations to arbitrary target distributions P, not necessarily mixtures of the source distributions, analyze both known and unknown target distribution cases, and prove a lower bound. We further extend our bounds to deal with the case where the learner receives an approximate distribution for each source instead of the exact one, and show that similar loss guarantees can be achieved depending on the divergence between the approximate and true distributions. We also analyze the case where the labeling functions of the source domains are somewhat different. Finally, we report the results of experiments with both an artificial data set and a sentiment analysis task, showing the p...

  5. Sparse Signal Reconstruction with Multiple Side Information using Adaptive Weights for Multiview Sources

    DEFF Research Database (Denmark)

    Luong, Huynh Van; Seiler, Jürgen; Kaup, André

    2016-01-01

    This work considers reconstructing a target signal in a context of distributed sparse sources. We propose an efficient reconstruction algorithm with the aid of other given sources as multiple side information (SI). The proposed algorithm takes advantage of compressive sensing (CS) with SI and adaptive ... the proposed reconstruction algorithm with multiple SI using adaptive weights (RAMSIA) to robustly exploit the multiple SIs with different qualities. We experimentally perform our algorithm on generated sparse signals and also on correlated feature histograms as multiview sparse sources from a multiview image database. The results ...

  6. Round-robin multiple-source localization.

    Science.gov (United States)

    Mantzel, William; Romberg, Justin; Sabra, Karim G

    2014-01-01

    This paper introduces a round-robin approach for multi-source localization based on matched-field processing. Each new source location is estimated from the ambiguity function after nulling the current source location estimates from the data vector using a robust projection matrix. This projection matrix effectively minimizes mean-square energy near current source location estimates, subject to a rank constraint that prevents excessive interference with sources outside of these neighborhoods. Numerical simulations are presented for multiple sources transmitting through a fixed (and presumed known) generic Pekeris ocean waveguide in the single-frequency and broadband-coherent cases, illustrating the performance of the proposed approach, which compares favorably against other previously published approaches. Furthermore, the efficacy with which randomized back-propagations may be incorporated for computational advantage is also presented.
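
    The round-robin idea of estimating one source at a time and nulling its neighborhood can be illustrated on a 1-D ambiguity function; real matched-field processing operates on complex pressure fields with a rank-constrained projection matrix, so the crude masking below is only a conceptual sketch.

```python
def round_robin_peaks(ambiguity, n_sources, null_radius=1):
    """Estimate source positions one at a time: pick the strongest peak of the
    ambiguity function, null the energy in its neighborhood, and repeat."""
    amb = list(ambiguity)  # work on a copy
    estimates = []
    for _ in range(n_sources):
        peak = max(range(len(amb)), key=lambda i: amb[i])
        estimates.append(peak)
        lo, hi = max(0, peak - null_radius), min(len(amb), peak + null_radius + 1)
        for i in range(lo, hi):  # null energy near the current estimate
            amb[i] = float("-inf")
    return estimates

# Toy ambiguity surface with two well-separated peaks.
surface = [0.1, 0.2, 0.9, 0.3, 0.1, 0.7, 0.2, 0.1]
print(round_robin_peaks(surface, n_sources=2))  # → [2, 5]
```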

  7. Database and selection method for portable power sources

    Energy Technology Data Exchange (ETDEWEB)

    Fu, L.; Huber, J.E. [Department of Engineering, University of Cambridge, Trumpington St., Cambridge CB2 1PZ (United Kingdom); Lu, T.J. [Department of Engineering, University of Cambridge, Trumpington St., Cambridge CB2 1PZ (United Kingdom); School of Aerospace, Xian Jiaotong University, Xian 710049 (China)

    2005-08-01

    A method for selecting power sources including batteries, fuel cells and solar cells is developed. It is based on matching the physical and performance characteristics of power sources, such as weight, volume, capacity, voltage and cost, to the requirements of the given task. Physical and performance characteristics are collated from manufacturers' data and a database is built using advanced selection software. This allows the construction of performance maps in terms of voltage, maximum current, mass energy density, volume energy density and cost, giving a systematic comparison of different kinds of power sources. The use of the method as a preliminary design tool is demonstrated in a case study on the selection of batteries for mobile phones. (Abstract Copyright [2005], Wiley Periodicals, Inc.)
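
    The requirement-matching selection described above amounts to filtering records on physical constraints and ranking the survivors; the records and field names below are illustrative, not the authors' database.

```python
# Toy power-source records; a real database would hold many more entries
# and characteristics (energy density, max current, etc.).
sources = [
    {"name": "Li-ion cell", "voltage_v": 3.7, "mass_g": 45,
     "capacity_mah": 2200, "cost_usd": 4.0},
    {"name": "Alkaline AA", "voltage_v": 1.5, "mass_g": 23,
     "capacity_mah": 2500, "cost_usd": 0.5},
    {"name": "Small fuel cell", "voltage_v": 6.0, "mass_g": 180,
     "capacity_mah": 12000, "cost_usd": 60.0},
]

def select(sources, min_voltage, max_mass_g, max_cost_usd):
    """Keep sources meeting the task requirements, highest capacity first."""
    ok = [s for s in sources
          if s["voltage_v"] >= min_voltage
          and s["mass_g"] <= max_mass_g
          and s["cost_usd"] <= max_cost_usd]
    return sorted(ok, key=lambda s: -s["capacity_mah"])

# Example: requirements resembling a mobile-phone battery selection.
for s in select(sources, min_voltage=3.0, max_mass_g=100, max_cost_usd=10):
    print(s["name"])  # → Li-ion cell
```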

  8. The Multiple-Institution Database for Investigating Engineering Longitudinal Development: An Experiential Case Study of Data Sharing and Reuse

    Science.gov (United States)

    Ohland, Matthew W.; Long, Russell A.

    2016-01-01

    Sharing longitudinal student record data and merging data from different sources is critical to addressing important questions being asked of higher education. The Multiple-Institution Database for Investigating Engineering Longitudinal Development (MIDFIELD) is a multi-institution, longitudinal, student record level dataset that is used to answer…

  9. Efficient Processing of Multiple DTW Queries in Time Series Databases

    DEFF Research Database (Denmark)

    Kremer, Hardy; Günnemann, Stephan; Ivanescu, Anca-Maria

    2011-01-01

    ... In many of today’s applications, however, large numbers of queries arise at any given time. Existing DTW techniques do not process multiple DTW queries simultaneously, a serious limitation which slows down overall processing. In this paper, we propose an efficient processing approach for multiple DTW queries.
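
    The DTW distance underlying these queries is the textbook O(nm) dynamic-programming recurrence; the sketch below implements that baseline without the paper's multi-query optimizations.

```python
def dtw(a, b):
    """Dynamic time warping distance between two numeric sequences,
    using absolute difference as the local cost."""
    n, m = len(a), len(b)
    INF = float("inf")
    # D[i][j] = cost of the best warping path aligning a[:i] with b[:j].
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]

# The second sequence is the first with one sample repeated, so the
# warping path absorbs the stretch at zero cost.
print(dtw([0, 1, 2, 1, 0], [0, 1, 1, 2, 1, 0]))  # → 0.0
```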

  10. Data-based matched-mode source localization for a moving source.

    Science.gov (United States)

    Yang, T C

    2014-03-01

    A data-based matched-mode source localization method is proposed in this paper for a moving source, using mode wavenumbers and depth functions estimated directly from the data, without requiring any environmental acoustic information or assuming any propagation model. The method is in theory free of the environmental mismatch problem because the mode replicas are estimated from the same data used to localize the source. Besides the estimation error due to the approximations made in deriving the data-based algorithms, the method has some inherent drawbacks: (1) it uses a smaller number of modes than theoretically possible because some modes are not resolved in the measurements, and (2) the depth search is limited to the depth covered by the receivers. Using simulated data, it is found that the performance degradation due to the aforementioned approximations/limitations is marginal compared with the original matched-mode source localization method. The proposed method has the potential to estimate source range and depth from real data while remaining free of the environmental mismatch problem, noting that certain aspects of the (estimation) algorithms have previously been tested against data. The key issues are discussed in this paper.

  11. Asteroid Models from Multiple Data Sources

    CERN Document Server

    Durech, J; Delbo, M; Kaasalainen, M; Viikinkoski, M

    2015-01-01

    In the past decade, hundreds of asteroid shape models have been derived using the lightcurve inversion method. At the same time, a new framework of 3-D shape modeling based on the combined analysis of widely different data sources such as optical lightcurves, disk-resolved images, stellar occultation timings, mid-infrared thermal radiometry, optical interferometry, and radar delay-Doppler data, has been developed. This multi-data approach allows the determination of most of the physical and surface properties of asteroids in a single, coherent inversion, with spectacular results. We review the main results of asteroid lightcurve inversion and also recent advances in multi-data modeling. We show that models based on remote sensing data were confirmed by spacecraft encounters with asteroids, and we discuss how the multiplication of highly detailed 3-D models will help to refine our general knowledge of the asteroid population. The physical and surface properties of asteroids, i.e., their spin, 3-D shape, densit...

  12. Verification of road databases using multiple road models

    Science.gov (United States)

    Ziems, Marcel; Rottensteiner, Franz; Heipke, Christian

    2017-08-01

    In this paper a new approach for automatic road database verification based on remote sensing images is presented. In contrast to existing methods, the applicability of the new approach is not restricted to specific road types, context areas or geographic regions. This is achieved by combining several state-of-the-art road detection and road verification approaches that work well under different circumstances. Each one serves as an independent module representing a unique road model and a specific processing strategy. All modules provide independent solutions for the verification problem of each road object stored in the database in the form of two probability distributions: the first for the state of a database object (correct or incorrect), and the second for the state of the underlying road model (applicable or not applicable). In accordance with the Dempster-Shafer theory, both distributions are mapped to a new state space comprising the classes correct, incorrect and unknown. Statistical reasoning is applied to obtain the optimal state of a road object. A comparison with state-of-the-art road detection approaches using benchmark datasets shows that in general the proposed approach provides results with higher completeness. Additional experiments reveal that the proposed method can serve as the basis for a highly reliable semi-automatic approach to road database verification.
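    The per-module mapping and the Dempster-Shafer combination described in this abstract can be sketched as follows; the mass assignments and all numeric inputs are illustrative, not values from the paper.

```python
def module_to_mass(p_correct, p_applicable):
    """Map one module's two distributions onto {correct, incorrect, unknown}:
    if the module's road model is not applicable, its verdict is 'unknown'."""
    return {
        "correct": p_applicable * p_correct,
        "incorrect": p_applicable * (1.0 - p_correct),
        "unknown": 1.0 - p_applicable,
    }

def combine(m1, m2):
    """Dempster's rule of combination, with 'unknown' acting as the full
    frame of discernment (total ignorance)."""
    conflict = (m1["correct"] * m2["incorrect"]
                + m1["incorrect"] * m2["correct"])
    norm = 1.0 - conflict
    return {
        "correct": (m1["correct"] * m2["correct"]
                    + m1["correct"] * m2["unknown"]
                    + m1["unknown"] * m2["correct"]) / norm,
        "incorrect": (m1["incorrect"] * m2["incorrect"]
                      + m1["incorrect"] * m2["unknown"]
                      + m1["unknown"] * m2["incorrect"]) / norm,
        "unknown": m1["unknown"] * m2["unknown"] / norm,
    }

# Two hypothetical modules voting on one road object.
fused = combine(module_to_mass(p_correct=0.9, p_applicable=0.8),
                module_to_mass(p_correct=0.7, p_applicable=0.3))
verdict = max(fused, key=fused.get)
```

    A module whose road model does not apply contributes mostly "unknown" mass, so it cannot veto a confident module, which is the point of the three-class state space.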

  13. 46 CFR 111.10-5 - Multiple energy sources.

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 4 2010-10-01 2010-10-01 false Multiple energy sources. 111.10-5 Section 111.10-5...-GENERAL REQUIREMENTS Power Supply § 111.10-5 Multiple energy sources. Failure of any single generating set energy source such as a boiler, diesel, gas turbine, or steam turbine must not cause all generating...

  14. Multiple k Nearest Neighbor Query Processing in Spatial Network Databases

    DEFF Research Database (Denmark)

    Xuegang, Huang; Jensen, Christian Søndergaard; Saltenis, Simonas

    2006-01-01

    This paper concerns the efficient processing of multiple k nearest neighbor queries in a road-network setting. The assumed setting covers a range of scenarios such as the one where a large population of mobile service users that are constrained to a road network issue nearest-neighbor queries...... for points of interest that are accessible via the road network. Given multiple k nearest neighbor queries, the paper proposes progressive techniques that selectively cache query results in main memory and subsequently reuse these for query processing. The paper initially proposes techniques for the case...... where an upper bound on k is known a priori and then extends the techniques to the case where this is not so. Based on empirical studies with real-world data, the paper offers insight into the circumstances under which the different proposed techniques can be used with advantage for multiple k nearest...
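    The core caching idea, reusing a previously computed result list for later queries with smaller k, can be illustrated with a toy sketch; the class and function names are invented, and the paper's actual techniques operate on network distances and are considerably more elaborate.

```python
class KnnCache:
    """Serve kNN queries from cached results where possible: a cached list of
    a vertex's k' nearest points of interest answers any later query at the
    same vertex with k <= k'. (A simplified illustration of the caching idea,
    not the paper's actual network-distance algorithms.)"""

    def __init__(self, knn_fn):
        self.knn_fn = knn_fn   # the expensive road-network kNN search
        self.cache = {}        # query vertex -> cached sorted result list

    def query(self, vertex, k):
        hit = self.cache.get(vertex)
        if hit is None or len(hit) < k:
            hit = self.knn_fn(vertex, k)   # recompute with the larger k
            self.cache[vertex] = hit
        return hit[:k]

calls = []
def network_knn(vertex, k):
    calls.append((vertex, k))              # stands in for a costly search
    return [f"poi{i}" for i in range(k)]

cache = KnnCache(network_knn)
first = cache.query("v1", 5)   # triggers the expensive search
second = cache.query("v1", 3)  # answered from the cached 5-NN result
```

    When an upper bound on k is known a priori, the first query can be issued at that bound so every subsequent query at the same vertex is a cache hit.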

  15. Spontaneous Sourcing among Students Reading Multiple Documents

    Science.gov (United States)

    Stromso, Helge I.; Braten, Ivar; Britt, M. Anne; Ferguson, Leila E.

    2013-01-01

    This study used think-aloud methodology to explore undergraduates' spontaneous attention to and use of source information while reading six documents that presented conflicting views on a controversial social scientific issue in a Google-like environment. Results showed that students explicitly and implicitly paid attention to sources of documents…

  16. Oak Ridge Reservation Environmental Protection Rad Neshaps Radionuclide Inventory Web Database and Rad Neshaps Source and Dose Database.

    Science.gov (United States)

    Scofield, Patricia A; Smith, Linda L; Johnson, David N

    2017-07-01

    The U.S. Environmental Protection Agency promulgated national emission standards for emissions of radionuclides other than radon from US Department of Energy facilities in Chapter 40 of the Code of Federal Regulations (CFR) 61, Subpart H. This regulatory standard limits the annual effective dose that any member of the public can receive from Department of Energy facilities to 0.1 mSv. As defined in the preamble of the final rule, all of the facilities on the Oak Ridge Reservation, i.e., the Y-12 National Security Complex, Oak Ridge National Laboratory, East Tennessee Technology Park, and any other U.S. Department of Energy operations on Oak Ridge Reservation, combined, must meet the annual dose limit of 0.1 mSv. At Oak Ridge National Laboratory, there are monitored sources and numerous unmonitored sources. To maintain radiological source and inventory information for these unmonitored sources, e.g., laboratory hoods, equipment exhausts, and room exhausts not currently venting to monitored stacks on the Oak Ridge National Laboratory campus, the Environmental Protection Rad NESHAPs Inventory Web Database was developed. This database is updated annually and is used to compile emissions data for the annual Radionuclide National Emission Standards for Hazardous Air Pollutants (Rad NESHAPs) report required by 40 CFR 61.94. It also provides supporting documentation for facility compliance audits. In addition, a Rad NESHAPs source and dose database was developed to import the source and dose summary data from Clean Air Act Assessment Package-1988 computer model files. This database provides Oak Ridge Reservation and facility-specific source inventory; doses associated with each source and facility; and total doses for the Oak Ridge Reservation dose.

  17. Databases

    Data.gov (United States)

    National Aeronautics and Space Administration — The databases of computational and experimental data from the first Aeroelastic Prediction Workshop are located here. The databases file names tell their contents by...

  18. Combining Multiple Knowledge Sources for Discourse Segmentation

    CERN Document Server

    Litman, D J; Litman, Diane J.; Passonneau, Rebecca J.

    1995-01-01

    We predict discourse segment boundaries from linguistic features of utterances, using a corpus of spoken narratives as data. We present two methods for developing segmentation algorithms from training data: hand tuning and machine learning. When multiple types of features are used, results approach human performance on an independent test set (both methods), and using cross-validation (machine learning).

  19. Healthcare Databases in Thailand and Japan: Potential Sources for Health Technology Assessment Research.

    Science.gov (United States)

    Saokaew, Surasak; Sugimoto, Takashi; Kamae, Isao; Pratoomsoot, Chayanin; Chaiyakunapruk, Nathorn

    2015-01-01

    Health technology assessment (HTA) has been continuously used for value-based healthcare decisions over the last decade. Healthcare databases represent an important source of information for HTA, which has seen a surge in use in Western countries. Although HTA agencies have been established in the Asia-Pacific region, application and understanding of healthcare databases for HTA are rather limited. Thus, we reviewed existing databases to assess their potential for HTA in Thailand, where HTA has been used officially, and Japan, where HTA is going to be officially introduced. Existing healthcare databases in Thailand and Japan were compiled and reviewed. Database characteristics, e.g., name of database, host, scope/objective, time/sample size, design, data collection method, population/sample, and variables, were described. Databases were assessed for their potential HTA use in terms of safety/efficacy/effectiveness, social/ethical, organization/professional, economic, and epidemiological domains. The request route for each database was also provided. Forty databases, 20 from Thailand and 20 from Japan, were included. These comprised national censuses, surveys, registries, administrative data, and claims databases. All databases could potentially be used for epidemiological studies. In addition, data on mortality, morbidity, disability, adverse events, quality of life, service/technology utilization, length of stay, and economics were also found in some databases. However, access to patient-level data was limited since information about the databases was not available from public sources. Our findings show that existing databases provide valuable information for HTA research, with limitations on accessibility. Mutual dialogue on healthcare database development and usage for HTA within the Asia-Pacific region is needed.

  20. Multiple Cosmic Sources for Meteorite Macromolecules?

    Science.gov (United States)

    Sephton, Mark A; Watson, Jonathan S; Meredith, William; Love, Gordon D; Gilmour, Iain; Snape, Colin E

    2015-10-01

    The major organic component in carbonaceous meteorites is an organic macromolecular material. The Murchison macromolecular material comprises aromatic units connected by aliphatic and heteroatom-containing linkages or occluded within the wider structure. The macromolecular material source environment remains elusive. Traditionally, attempts to determine source have strived to identify a single environment. Here, we apply a highly efficient hydrogenolysis method to liberate units from the macromolecular material and use mass spectrometric techniques to determine their chemical structures and individual stable carbon isotope ratios. We confirm that the macromolecular material comprises a labile fraction with small aromatic units enriched in (13)C and a refractory fraction made up of large aromatic units depleted in (13)C. Our findings suggest that the macromolecular material may be derived from at least two separate environments. Compound-specific carbon isotope trends for aromatic compounds with carbon number may reflect mixing of the two sources. The story of the quantitatively dominant macromolecular material in meteorites appears to be made up of more than one chapter.

  1. Zebrafish Database: Customizable, Free, and Open-Source Solution for Facility Management.

    Science.gov (United States)

    Yakulov, Toma Antonov; Walz, Gerd

    2015-12-01

    Zebrafish Database is a web-based customizable database solution, which can be easily adapted to serve both single laboratories and facilities housing thousands of zebrafish lines. The database allows the users to keep track of details regarding the various genomic features, zebrafish lines, zebrafish batches, and their respective locations. Advanced search and reporting options are available. Unique features are the ability to upload files and images that are associated with the respective records and an integrated calendar component that supports multiple calendars and categories. Built on the basis of the Joomla content management system, the Zebrafish Database is easily extendable without the need for advanced programming skills.

  2. Databases

    Directory of Open Access Journals (Sweden)

    Nick Ryan

    2004-01-01

    Full Text Available Databases are deeply embedded in archaeology, underpinning and supporting many aspects of the subject. However, as well as providing a means for storing, retrieving and modifying data, databases themselves must be a result of a detailed analysis and design process. This article looks at this process, and shows how the characteristics of data models affect the process of database design and implementation. The impact of the Internet on the development of databases is examined, and the article concludes with a discussion of a range of issues associated with the recording and management of archaeological data.

  3. Classification of Mars Terrain Using Multiple Data Sources

    Data.gov (United States)

    National Aeronautics and Space Administration — Classification of Mars Terrain Using Multiple Data Sources Alan Kraut1, David Wettergreen1 ABSTRACT. Images of Mars are being collected faster than they can be...

  4. European database on indoor air pollution sources in buildings: Current status of database structure and software

    NARCIS (Netherlands)

    Molina, J.L.; Clausen, G.H.; Saarela, K.; Plokker, W.; Bluyssen, P.M.; Bishop, W.; Oliveira Fernandes, E. de

    1996-01-01

    the European Joule II Project European Data Base for Indoor Air Pollution Sources in Buildings. The aim of the project is to produce a tool which would be used by designers to take into account the actual pollution of the air from the building elements and ventilation and air conditioning system com

  6. Multiple Objective Fuzzy Sourcing Problem with Multiple Items in Discount Environments

    OpenAIRE

    Feyzan Arikan

    2015-01-01

    The selection of proper supply sources plays a vital role in maintaining companies' competitiveness. In this study a multiple criteria fuzzy sourcing problem with multiple items in a discount environment is considered as a multiple objective mixed integer linear programming problem. Fuzzy parameters are the demand level and/or the aspiration levels of objectives. Three objective functions are minimization of the total production and ordering costs, the total number of rejected units, and the total number ...

  7. A Database of Phase Calibration Sources and their Radio Spectra for the Giant Metrewave Radio Telescope

    CERN Document Server

    Lal, Dharam V; Sherkar, Sachin S

    2016-01-01

    We are pursuing a project to build a database of phase calibration sources suitable for Giant Metrewave Radio Telescope (GMRT). Here we present the first release of 45 low frequency calibration sources at 235 MHz and 610 MHz. These calibration sources are broadly divided into quasars, radio galaxies and unidentified sources. We provide their flux densities, models for calibration sources, (u,v) plots, final deconvolved restored maps and clean-component lists/files for use in the Astronomical Image Processing System (AIPS) and the Common Astronomy Software Applications (CASA). We also assign a quality factor to each of the calibration sources. These data products are made available online through the GMRT observatory website. In addition we find that (i) these 45 low frequency calibration sources are uniformly distributed in the sky and future efforts to increase the size of the database should populate the sky further, (ii) spectra of these calibration sources are about equally divided between straight, curve...

  8. GaussDal: An open source database management system for quantum chemical computations

    Science.gov (United States)

    Alsberg, Bjørn K.; Bjerke, Håvard; Navestad, Gunn M.; Åstrand, Per-Olof

    2005-09-01

    An open source software system called GaussDal for management of results from quantum chemical computations is presented. Chemical data contained in output files from different quantum chemical programs are automatically extracted and incorporated into a relational database (PostgreSQL). The Structured Query Language (SQL) is used to extract combinations of chemical properties (e.g., molecules, orbitals, thermo-chemical properties, basis sets) into data tables for further data analysis, processing and visualization. This type of data management is particularly suited for projects involving a large number of molecules. The current version of GaussDal supports parsers for Gaussian and Dalton output files; future versions may include parsers for other quantum chemical programs. For visualization and analysis of data tables generated by GaussDal we have used the locally developed open source software SciCraft. Program summary: Title of program: GaussDal. Catalogue identifier: ADVT. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVT. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Computers: Any. Operating system under which the system has been tested: Linux. Programming language used: Python. Memory required to execute with typical data: 256 MB. No. of bits in word: 32 or 64. No. of processors used: 1. Has the code been vectorized or parallelized?: No. No. of lines in distributed program, including test data, etc.: 543 531. No. of bytes in distributed program, including test data, etc.: 7 718 121. Distribution format: tar.gzip file. Nature of physical problem: Handling of large amounts of data from quantum chemistry computations. Method of solution: Use of SQL-based database and quantum chemistry software specific parsers. Restriction on the complexity of the problem: Program is currently limited to Gaussian and Dalton output, but expandable to other formats. Generates subsets of multiple data tables from
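    The style of SQL-driven extraction described here can be sketched with an in-memory SQLite database; the table and column names below are invented for illustration and are not GaussDal's actual PostgreSQL schema.

```python
import sqlite3

# Toy relational schema in the spirit of GaussDal: molecules and their
# computed properties live in separate tables and are joined via SQL.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE molecule (id INTEGER PRIMARY KEY, name TEXT, basis_set TEXT);
CREATE TABLE property (mol_id INTEGER REFERENCES molecule(id),
                       name TEXT, value REAL);
INSERT INTO molecule VALUES (1, 'H2O', 'cc-pVDZ'), (2, 'NH3', 'cc-pVDZ');
INSERT INTO property VALUES (1, 'total_energy', -76.02),
                            (2, 'total_energy', -56.20);
""")

# Extract a data table of (molecule, basis set, energy) for further analysis.
rows = con.execute("""
    SELECT m.name, m.basis_set, p.value
    FROM molecule m JOIN property p ON p.mol_id = m.id
    WHERE p.name = 'total_energy'
    ORDER BY m.name
""").fetchall()
```

    The same join, filtered or grouped differently, yields the combinations of properties the abstract mentions, which is why a relational backend suits projects with many molecules.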

  9. Development of fully Bayesian multiple-time-window source inversion

    Science.gov (United States)

    Kubo, Hisahiko; Asano, Kimiyuki; Iwata, Tomotaka; Aoi, Shin

    2016-03-01

    In the estimation of spatiotemporal slip models, kinematic source inversions using Akaike's Bayesian Information Criterion (ABIC) and the multiple-time-window method have often been used. However, there are cases in which conventional ABIC-based source inversions do not work well in the determination of hyperparameters when a non-negative slip constraint is used. In order to overcome this problem, a new source inversion method was developed in this study. The new method introduces a fully Bayesian method into the kinematic multiple-time-window source inversion. The multiple-time-window method is one common way of parametrizing a source time function and is highly flexible in terms of the shape of the source time function. The probability distributions of model parameters and hyperparameters can be directly obtained by using the Markov chain Monte Carlo method. These probability distributions are useful for simply evaluating the uniqueness and reliability of the derived model, which is another advantage of a fully Bayesian method. This newly developed source inversion method was applied to the 2011 Ibaraki-oki, Japan, earthquake (Mw 7.9) to demonstrate its usefulness. It was demonstrated that the problem with using the conventional ABIC-based source inversion method for hyperparameter determination appeared in the spatiotemporal source inversion of this event and that the newly developed source inversion could overcome this problem.
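    A minimal random-walk Metropolis sketch of the sampling idea follows, with the non-negativity constraint handled by rejecting negative proposals; the target density, parameter values, and function names are all illustrative stand-ins, not the paper's actual formulation.

```python
import math
import random

def metropolis(log_post, x0, steps=2000, scale=0.1, seed=0):
    """Random-walk Metropolis sampler, a minimal stand-in for the MCMC used
    to obtain posterior distributions of model parameters. The non-negative
    slip constraint is enforced by giving negative proposals zero posterior
    mass (they are always rejected)."""
    rng = random.Random(seed)
    x, lp = list(x0), log_post(x0)
    samples = []
    for _ in range(steps):
        cand = [xi + rng.gauss(0.0, scale) for xi in x]
        if all(c >= 0 for c in cand):          # non-negativity constraint
            lc = log_post(cand)
            if lc >= lp or rng.random() < math.exp(lc - lp):
                x, lp = cand, lc
        samples.append(list(x))
    return samples

# Toy posterior: a Gaussian centered at 1.0 (sd 0.2), truncated at zero.
def log_post(x):
    return -sum((xi - 1.0) ** 2 for xi in x) / (2 * 0.2 ** 2)

samples = metropolis(log_post, x0=[1.0])
posterior_mean = sum(s[0] for s in samples) / len(samples)
```

    The retained samples approximate the full posterior, so uniqueness and reliability of the derived model can be read off the sample spread directly, which is the advantage the abstract highlights over ABIC-based hyperparameter selection.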

  10. Multivariate-adjusted pharmacoepidemiologic analyses of confidential information pooled from multiple health care utilization databases.

    Science.gov (United States)

    Rassen, Jeremy A; Avorn, Jerry; Schneeweiss, Sebastian

    2010-08-01

    Mandated post-marketing drug safety studies require vast databases pooled from multiple administrative data sources, which can contain private and proprietary information. We sought to create a method to conduct pooled analyses while keeping information private and allowing for full confounder adjustment. We propose a method based on propensity score (PS) techniques. A set of propensity scores is computed in each data-contributing center and a PS-adjusted analysis is then carried out on a pooled basis. The method is demonstrated in a study of the potentially negative effects of concurrent initiation of clopidogrel and proton pump inhibitors (PPIs) in four cohorts of patients assembled from North American claims data sources. Clinical outcomes were myocardial infarction (MI) hospitalization and hospitalization for a revascularization procedure. Success of the method was indicated by equivalent performance of our PS-based method and traditional confounder adjustment. We also implemented and evaluated high-dimensional propensity scores and meta-analytic techniques. On both a pooled and individual cohort basis, we saw substantially similar point estimates and confidence intervals for studies adjusted by covariates and by privacy-maintaining propensity scores. The pooled, adjusted OR for MI hospitalization was 1.20 (95% confidence interval 1.03, 1.41) with individual variable adjustment and 1.16 (1.00, 1.36) with PS adjustment. The revascularization OR estimates differed by analysis, and pooling yielded substantially similar results. We observed little difference in point estimates when we employed standard techniques or the proposed privacy-maintaining pooling method. We would recommend the technique in instances where multi-center studies require both privacy and multivariate adjustment.
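    The flavor of privacy-preserving pooling can be sketched as follows: each center fits propensity scores locally, stratifies its patients, and shares only aggregate 2x2 counts per stratum; a Mantel-Haenszel estimate is then computed centrally. This is a simplified stand-in for the paper's method, and all counts below are invented.

```python
def mh_odds_ratio(strata):
    """Mantel-Haenszel odds ratio over 2x2 tables (a, b, c, d) =
    (exposed cases, exposed non-cases, unexposed cases, unexposed non-cases),
    one table per propensity-score stratum per center."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

# Each center computes propensity scores locally, stratifies patients, and
# ships only the per-stratum counts -- no patient-level covariates leave.
center_1 = [(10, 90, 5, 95)]
center_2 = [(8, 40, 12, 60)]
pooled_or = mh_odds_ratio(center_1 + center_2)
```

    Because confounding is absorbed into the locally computed strata, the central analysis never needs the private covariate data, which is the property the paper exploits.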

  11. Cold Atom Source Containing Multiple Magneto-Optical Traps

    Science.gov (United States)

    Ramirez-Serrano, Jaime; Kohel, James; Kellogg, James; Lim, Lawrence; Yu, Nan; Maleki, Lute

    2007-01-01

    An apparatus that serves as a source of a cold beam of atoms contains multiple two-dimensional (2D) magneto-optical traps (MOTs). (Cold beams of atoms are used in atomic clocks and in diverse scientific experiments and applications.) The multiple-2D-MOT design of this cold atom source stands in contrast to the single-2D-MOT designs of prior cold atom sources of the same type. The principal advantage afforded by the present design is that this apparatus is smaller than prior designs.

  12. Transient visual evoked neuromagnetic responses: Identification of multiple sources

    Energy Technology Data Exchange (ETDEWEB)

    Aine, C.; George, J.; Medvick, P.; Flynn, E.; Bodis-Wollner, I.; Supek, S.

    1989-01-01

    Neuromagnetic measurements and associated modeling procedures must be able to resolve multiple sources in order to localize and accurately characterize the generators of visual evoked neuromagnetic activity. Workers have identified at least 11 areas in the macaque, throughout occipital, parietal, and temporal cortex, which are primarily or entirely visual in function. The surface area of the human occipital lobe is estimated to be 150-250 cm². Primary visual cortex covers approximately 26 cm² while secondary visual areas comprise the remaining area. For evoked response amplitudes typical of human MEG data, one report estimates that a two-dipole field may be statistically distinguishable from that of a single dipole when the separation is greater than 1-2 cm. Given the estimated expanse of cortex devoted to visual processes, along with this estimate of resolution limits, it is likely that MEG can resolve sources associated with activity in multiple visual areas. Researchers have noted evidence for the existence of multiple sources when presenting visual stimuli in a half field; however, they did not attempt to localize them. We have examined numerous human MEG field patterns resulting from different visual field placements of a small sinusoidal grating which suggest the existence of multiple sources. The analyses we have utilized for resolving multiple sources in these studies differ depending on whether there was evidence of (1) synchronous activation of two spatially discrete sources or (2) two discrete asynchronous sources. In some cases we have observed field patterns which appear to be adequately explained by a single source changing its orientation and location across time. 4 refs., 2 figs.

  13. Database of potential sources for earthquakes larger than M 5.5 in Italy. Version 2.0-2001

    Energy Technology Data Exchange (ETDEWEB)

    Valensise, G.; Pantatosti, D. [Istituto Nazionale di Geofisica e Vulcanologia, Rome (Italy)

    2001-07-01

    This volume presents and describes Version 2.0 of the Database of potential sources for earthquakes larger than M 5.5 in Italy (also referred to as DISS, an acronym for Database of Italy's Seismogenic Sources, or simply as the database), which was first conceived at the Istituto Nazionale di Geofisica e Vulcanologia of Rome in 1996.

  14. Portable source initiative for distribution of cross-platform compatible multispectral databases

    Science.gov (United States)

    Nichols, William K.

    2003-09-01

    In response to popular demand, The Visual Group has undertaken an effort known as the Portable Source Initiative, a program intended to create cross-platform compatible multi-spectral databases by building a managed source set of data and expending a minimal amount of effort republishing run-time formatted proprietary databases. By building visual and sensor databases using a variety of sources, then feeding all value-added work back into standard, open, widely used source formats, databases can be published from this "refined source data" in a relatively simple, automated, and repeatable fashion. To this end, with the endorsement of the Army's PM Cargo, we have offered a sample set of the source data we are building for the CH-47F TFPS program to the visual simulation industry at large to be republished into runtime formats. The results of this effort were an overwhelming acceptance of the concepts and theories within, and support from both industry and multi-service flight simulation teams.

  15. Database of potential sources for earthquakes larger than magnitude 6 in Northern California

    Science.gov (United States)

    ,

    1996-01-01

    The Northern California Earthquake Potential (NCEP) working group, composed of many contributors and reviewers in industry, academia and government, has pooled its collective expertise and knowledge of regional tectonics to identify potential sources of large earthquakes in northern California. We have created a map and database of active faults, both surficial and buried, that forms the basis for the northern California portion of the national map of probabilistic seismic hazard. The database contains 62 potential sources, including fault segments and areally distributed zones. The working group has integrated constraints from broadly based plate tectonic and VLBI models with local geologic slip rates, geodetic strain rate, and microseismicity. Our earthquake source database derives from a scientific consensus that accounts for conflict in the diverse data. Our preliminary product, as described in this report, brings to light many gaps in the data, including a need for better information on the proportion of deformation in fault systems that is aseismic.

  16. Common source-multiple load vs. separate source-individual load photovoltaic system

    Science.gov (United States)

    Appelbaum, Joseph

    1989-01-01

    A comparison of system performance is made for two possible system setups: (1) individual loads powered by separate solar cell sources; and (2) multiple loads powered by a common solar cell source. A proof for resistive loads is given that shows the advantage of a common source over a separate source photovoltaic system for a large range of loads. For identical loads, both systems perform the same.

  17. Automatic Modulation Recognition of Mixed Multiple Source Signals

    Science.gov (United States)

    Tan, Xiaobo; Zhang, Hang; Lu, Wei

    2011-03-01

    In this paper, automatic modulation recognition of mixed multiple source signals is discussed. First, the algorithm of equivariant adaptive source separation (EASI) is employed to separate signals from their mixed waveforms. Four features of five modulated signals are extracted, and then two classifiers, a decision tree and a neural network, are used to complete modulation classification. The effects of symbol shaping on feature extraction and the validation of source separation are also investigated. Simulations show that the average probability of correct recognition of the classifiers depends strongly on the performance of source separation. When the SNR (signal-to-noise ratio) is larger than 15 dB and the number of mixed source signals is less than 4, the average probability of correct recognition is above 0.6 for the decision tree classifier and 0.63 for the neural network classifier. Simulations and discussions about automatic modulation recognition for source signals subject to Rayleigh flat fading are also presented.

  18. Resolving the problem of multiple accessions of the same transcript deposited across various public databases.

    Science.gov (United States)

    Weirick, Tyler; John, David; Uchida, Shizuka

    2017-03-01

    Maintaining the consistency of genomic annotations is an increasingly complex task because of the iterative and dynamic nature of assembly and annotation, growing numbers of biological databases and insufficient integration of annotations across databases. As information exchange among databases is poor, a 'novel' sequence from one reference annotation could already be annotated in another. Furthermore, relationships to nearby or overlapping annotated transcripts are even more complicated when using different genome assemblies. To better understand these problems, we surveyed current and previous versions of genomic assemblies and annotations across a number of public databases containing long noncoding RNA. We identified numerous discrepancies among transcripts regarding their genomic locations, transcript lengths and identifiers. Further investigation showed that the positional differences between reference annotations of essentially the same transcript could lead to differences in its measured expression at the RNA level. To aid in resolving these problems, we present the algorithm 'Universal Genomic Accession Hash (UGAHash)' and created an open source web tool to encourage the usage of the UGAHash algorithm. The UGAHash web tool (http://ugahash.uni-frankfurt.de) can be accessed freely without registration. The web tool allows researchers to generate Universal Genomic Accessions for genomic features or to explore annotations deposited in public databases in past and present versions. We anticipate that the UGAHash web tool will be a valuable tool for checking the existence of transcripts before judging newly discovered transcripts as novel.
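    The core idea, deriving a database-independent identifier by hashing a transcript's genomic structure, can be sketched as follows. The encoding below is an invented illustration; the actual UGAHash scheme may differ in its key layout and hash choice.

```python
import hashlib

def genomic_accession(assembly, chrom, strand, exons):
    """Derive a deterministic accession from a transcript's genomic structure,
    so identical coordinates always yield the same identifier regardless of
    which database the transcript came from. (Illustrative scheme only; the
    actual UGAHash encoding may differ.)"""
    key = "|".join([assembly, chrom, strand]
                   + [f"{start}-{end}" for start, end in sorted(exons)])
    return "UGA_" + hashlib.sha1(key.encode()).hexdigest()[:12].upper()

a = genomic_accession("GRCh38", "chr7", "+", [(1000, 1200), (1500, 1700)])
b = genomic_accession("GRCh38", "chr7", "+", [(1500, 1700), (1000, 1200)])
# a == b: exon order does not affect the accession
```

    Because the accession is a pure function of coordinates, two databases that annotate the same transcript independently produce the same identifier without exchanging any data.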

  19. Integrating multiple genome annotation databases improves the interpretation of microarray gene expression data

    Directory of Open Access Journals (Sweden)

    Kennedy Breandan

    2010-01-01

    Full Text Available Abstract. Background: The Affymetrix GeneChip is a widely used gene expression profiling platform. Since the chips were originally designed, the genome databases and gene definitions have been considerably updated. Thus, more accurate interpretation of microarray data requires parallel updating of the specificity of GeneChip probes. We propose a new probe remapping protocol, using the zebrafish GeneChips as an example, by removing nonspecific probes and grouping the probes into transcript-level probe sets using an integrated zebrafish genome annotation. This genome annotation is based on combining transcript information from multiple databases. This new remapping protocol, especially the new genome annotation, is shown here to be an important factor in improving the interpretation of gene expression microarray data. Results: Transcript data from the RefSeq, GenBank and Ensembl databases were downloaded from the UCSC genome browser and integrated to generate a combined zebrafish genome annotation. Affymetrix probes were filtered and remapped according to the new annotation. The influence of transcript collection and gene definition methods was tested using two microarray data sets. Compared to remapping using a single database, this new remapping protocol results in up to 20% more probes being retained, leading to approximately 1,000 more genes being detected. The differentially expressed gene lists are consequently increased by up to 30%. We are also able to detect up to three times more alternative splicing events. A small number of the bioinformatics predictions were confirmed using real-time PCR validation. Conclusions: By combining gene definitions from multiple databases, it is possible to greatly increase the numbers of genes and splice variants that can be detected in microarray gene expression experiments.
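    A toy sketch of the filtering-and-grouping step follows. The data structures and the gene-level grouping are simplifications (the paper builds transcript-level probe sets), and all identifiers are invented.

```python
def remap_probes(probe_hits, transcript_to_gene):
    """Drop probes whose matches span more than one gene (nonspecific),
    then group the remaining probe ids into per-gene probe sets.
    probe_hits: probe id -> set of transcript ids the probe matches.
    transcript_to_gene: merged annotation from several databases."""
    probe_sets = {}
    for probe, transcripts in probe_hits.items():
        genes = {transcript_to_gene[t] for t in transcripts
                 if t in transcript_to_gene}
        if len(genes) == 1:                  # specific: exactly one gene
            probe_sets.setdefault(genes.pop(), []).append(probe)
    return probe_sets

# Merged annotation combining hypothetical RefSeq and Ensembl entries.
annotation = {"NM_1": "geneA", "ENSDART_9": "geneA", "NM_2": "geneB"}
hits = {
    "probe1": {"NM_1"},                  # specific to geneA
    "probe2": {"NM_1", "ENSDART_9"},     # still specific: both are geneA
    "probe3": {"NM_1", "NM_2"},          # nonspecific: two genes, dropped
}
grouped = remap_probes(hits, annotation)
```

    Merging annotations before filtering is what rescues probes like probe2: against a single database, its second hit would look like a different gene and the probe would be discarded.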

  20. Bit-Based Joint Source-Channel Decoding of Huffman Encoded Markov Multiple Sources

    Directory of Open Access Journals (Sweden)

    Weiwei Xiang

    2010-04-01

    Full Text Available Multimedia transmission over time-varying channels such as wireless channels has recently motivated research on joint source-channel techniques. In this paper, we present a method for joint source-channel soft decision decoding of Huffman encoded multiple sources. By exploiting the a priori bit probabilities in multiple sources, the decoding performance is greatly improved. Compared with the single source decoding scheme addressed by Marion Jeanne, the proposed technique is more practical in wideband wireless communications. Simulation results show our new method obtains substantial improvements with a minor increase in complexity. For two sources, the gain in SNR is around 1.5 dB using convolutional codes when the symbol-error rate (SER) reaches 10^-2, and around 2 dB using Turbo codes.

  1. A database of phase calibration sources and their radio spectra for the Giant Metrewave Radio Telescope

    Science.gov (United States)

    Lal, Dharam V.; Dubal, Shilpa S.; Sherkar, Sachin S.

    2016-12-01

    We are pursuing a project to build a database of phase calibration sources suitable for the Giant Metrewave Radio Telescope (GMRT). Here we present the first release of 45 low frequency calibration sources at 235 MHz and 610 MHz. These calibration sources are broadly divided into quasars, radio galaxies and unidentified sources. We provide their flux densities, models for calibration sources, (u, v) plots, final deconvolved restored maps and clean-component lists/files for use in the Astronomical Image Processing System (AIPS) and the Common Astronomy Software Applications (CASA). We also assign a quality factor to each of the calibration sources. These data products are made available online through the GMRT observatory website. In addition we find that (i) these 45 low frequency calibration sources are uniformly distributed in the sky and future efforts to increase the size of the database should populate the sky further, (ii) spectra of these calibration sources are about equally divided between straight, curved and complex shapes, (iii) quasars tend to exhibit flatter radio spectra as compared to the radio galaxies or the unidentified sources, (iv) quasars are also known to be radio variable and hence possibly show complex spectra more frequently, and (v) radio galaxies tend to have steeper spectra, which are possibly due to the large redshifts of distant galaxies causing the shift of spectrum to lower frequencies.
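
    The flat-versus-steep classification mentioned in (iii) and (v) rests on the two-point spectral index between the two observing frequencies. A minimal sketch, with invented flux densities:

    ```python
    import math

    def spectral_index(s_235, s_610):
        """Two-point spectral index alpha, with S(nu) proportional to nu**alpha,
        from flux densities at 235 MHz and 610 MHz."""
        return math.log(s_610 / s_235) / math.log(610.0 / 235.0)

    # A source fading with frequency (4 Jy -> 2 Jy) has a negative, steep index:
    alpha = spectral_index(4.0, 2.0)
    ```

    An index near zero would count as flat, typical of the quasars in the sample; a clearly negative index marks a steep-spectrum source.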

  3. Datafish Multiphase Data Mining Technique to Match Multiple Mutually Inclusive Independent Variables in Large PACS Databases.

    Science.gov (United States)

    Kelley, Brendan P; Klochko, Chad; Halabi, Safwan; Siegal, Daniel

    2016-06-01

    Retrospective data mining has tremendous potential in research but is time and labor intensive. Current data mining software contains many advanced search features but is limited in its ability to identify patients who meet multiple complex independent search criteria. Simple keyword and Boolean search techniques are ineffective when more complex searches are required, or when a search for multiple mutually inclusive variables becomes important. This is particularly true when trying to identify patients with a set of specific radiologic findings or proximity in time across multiple different imaging modalities. Another challenge that arises in retrospective data mining is that much variation still exists in how image findings are described in radiology reports. We present an algorithmic approach to solve this problem and describe a specific use case scenario in which we applied our technique to a real-world data set in order to identify patients who matched several independent variables in our institution's picture archiving and communication systems (PACS) database.
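
    The kind of multi-criteria, time-proximity matching described above can be illustrated with a small sketch. The report records, keywords and 7-day window below are all invented for illustration; this is not the authors' algorithm:

    ```python
    from datetime import datetime, timedelta

    # Hypothetical report records: (patient id, modality, report date, text).
    reports = [
        ("p1", "CT", datetime(2015, 3, 1), "pulmonary embolism in right lower lobe"),
        ("p1", "US", datetime(2015, 3, 3), "deep venous thrombosis of the left leg"),
        ("p2", "CT", datetime(2015, 5, 1), "pulmonary embolism"),
    ]

    def match_patients(reports, criteria, window_days=7):
        """Return patients with one report per criterion, all within the window.
        criteria: list of (modality, keyword) pairs that must all be satisfied."""
        by_patient = {}
        for pid, modality, date, text in reports:
            by_patient.setdefault(pid, []).append((modality, date, text))
        matched = set()
        for pid, recs in by_patient.items():
            hits = []
            for modality, keyword in criteria:
                dates = [d for m, d, t in recs if m == modality and keyword in t]
                if not dates:
                    break  # one criterion unmet: patient cannot match
                hits.append(min(dates))
            else:
                # All criteria met; check proximity in time across modalities.
                if max(hits) - min(hits) <= timedelta(days=window_days):
                    matched.add(pid)
        return matched

    found = match_patients(reports, [("CT", "embolism"), ("US", "thrombosis")])
    ```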

  4. Prediction of biological targets for compounds using multiple-category Bayesian models trained on chemogenomics databases.

    Science.gov (United States)

    Nidhi; Glick, Meir; Davies, John W; Jenkins, Jeremy L

    2006-01-01

    Target identification is a critical step following the discovery of small molecules that elicit a biological phenotype. The present work seeks to provide an in silico correlate of experimental target fishing technologies in order to rapidly fish out potential targets for compounds on the basis of chemical structure alone. A multiple-category Laplacian-modified naïve Bayesian model was trained on extended-connectivity fingerprints of compounds from 964 target classes in the WOMBAT (World Of Molecular BioAcTivity) chemogenomics database. The model was employed to predict the top three most likely protein targets for all MDDR (MDL Drug Database Report) database compounds. On average, the correct target was found 77% of the time for compounds from 10 MDDR activity classes with known targets. For MDDR compounds annotated with only therapeutic or generic activities such as "antineoplastic", "kinase inhibitor", or "anti-inflammatory", the model was able to systematically deconvolute the generic activities to specific targets associated with the therapeutic effect. Examples of successful deconvolution are given, demonstrating the usefulness of the tool for improving knowledge in chemogenomics databases and for predicting new targets for orphan compounds.
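
    A toy stand-in for the multiple-category Laplacian-modified naive Bayesian idea, using invented fingerprint features and target classes rather than extended-connectivity fingerprints and WOMBAT data:

    ```python
    import math
    from collections import defaultdict

    class MultiCategoryNB:
        """Tiny Laplace-smoothed naive Bayes over sets of binary features;
        a sketch of the multiple-category model, not the published one."""

        def __init__(self):
            self.feature_counts = defaultdict(lambda: defaultdict(int))
            self.class_counts = defaultdict(int)

        def fit(self, samples):
            for features, target in samples:
                self.class_counts[target] += 1
                for f in features:
                    self.feature_counts[target][f] += 1

        def rank_targets(self, features):
            total = sum(self.class_counts.values())
            scores = {}
            for target, n in self.class_counts.items():
                score = math.log(n / total)
                for f in features:
                    # Laplace correction avoids zero probabilities.
                    score += math.log((self.feature_counts[target][f] + 1) / (n + 2))
                scores[target] = score
            return sorted(scores, key=scores.get, reverse=True)

    model = MultiCategoryNB()
    model.fit([({"ring", "amine"}, "kinase"),
               ({"ring", "halogen"}, "kinase"),
               ({"sugar", "amine"}, "protease")])
    top = model.rank_targets({"ring"})[0]
    ```

    Reporting the top three entries of `rank_targets` mirrors the "top three most likely protein targets" usage in the abstract.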

  5. Evaluation of Network Reliability for Computer Networks with Multiple Sources

    Directory of Open Access Journals (Sweden)

    Yi-Kuei Lin

    2012-01-01

    Full Text Available Evaluating the reliability of a network with multiple sources to multiple sinks is a critical issue from the perspective of quality management. Due to the unrealistic definition of paths of network models in previous literature, existing models are not appropriate for real-world computer networks such as the Taiwan Advanced Research and Education Network (TWAREN). This paper proposes a modified stochastic-flow network model to evaluate the network reliability of a practical computer network with multiple sources where data is transmitted through several light paths (LPs). Network reliability is defined as being the probability of delivering a specified amount of data from the sources to the sink. It is taken as a performance index to measure the service level of TWAREN. This paper studies the network reliability of the international portion of TWAREN from two sources (Taipei and Hsinchu) to one sink (New York) that goes through a submarine and land surface cable between Taiwan and the United States.
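
    The reliability definition above, the probability that working links can deliver a demanded amount of data from the sources to the sink, can be estimated by simple Monte Carlo. The link names, capacities and up-probabilities below are invented; this is not the paper's exact stochastic-flow model:

    ```python
    import random

    def network_reliability(links, paths, demand, trials=20000, seed=1):
        """Monte Carlo estimate of the probability that the surviving light
        paths can carry `demand` units from the sources to the sink.
        links: {name: up probability}; paths: list of (capacity, [links])."""
        random.seed(seed)
        success = 0
        for _ in range(trials):
            up = {name: random.random() < p for name, p in links.items()}
            # A path contributes its capacity only if every link on it is up.
            flow = sum(cap for cap, used in paths if all(up[l] for l in used))
            if flow >= demand:
                success += 1
        return success / trials

    # Two sources sharing a submarine segment to one sink (toy topology):
    links = {"tp-sub": 0.99, "hc-sub": 0.99, "sub-ny": 0.95}
    paths = [(10, ["tp-sub", "sub-ny"]), (10, ["hc-sub", "sub-ny"])]
    rel = network_reliability(links, paths, demand=10)
    ```

    With these numbers the exact answer is 0.95 * (1 - 0.01**2), roughly 0.95, so the estimate should land close by.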

  6. Databases

    Digital Repository Service at National Institute of Oceanography (India)

    Kunte, P.D.

  7. Optimal Rate Allocation Algorithm for Multiple Source Video Streaming

    Institute of Scientific and Technical Information of China (English)

    戢彦泓; 郭常杰; 钟玉琢; 孙立峰

    2004-01-01

    Video streaming is one of the most important applications used in the best-effort Internet. This paper presents a new scheme for multiple source video streaming in which the traditional fine granular scalable coding was rebuilt into a multiple sub-streams based transmission model. A peak signal to noise ratio based stream rate allocation algorithm was then developed based on the transmission model. In tests, the algorithm performance is about 1 dB higher than that of a uniform rate allocation algorithm. Therefore, this scheme can overcome bottlenecks along a single link and smooth jitter to achieve high quality and stable video.
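
    A greedy marginal-gain allocation illustrates the flavor of PSNR-driven rate allocation across sub-streams. The concave gain model and the numbers are assumptions for illustration, not the paper's algorithm:

    ```python
    import math

    def allocate_rate(total_units, gains):
        """Greedy rate allocation: repeatedly give one unit of rate to the
        sub-stream with the largest marginal PSNR gain, under a toy concave
        model psnr_i(r) = gains[i] * log(1 + r)."""
        rates = [0] * len(gains)
        for _ in range(total_units):
            def marginal(i):
                r = rates[i]
                return gains[i] * (math.log(2 + r) - math.log(1 + r))
            best = max(range(len(gains)), key=marginal)
            rates[best] += 1
        return rates

    # Three sub-streams with unequal rate-distortion slopes:
    rates = allocate_rate(6, [3.0, 2.0, 1.0])
    ```

    For a concave gain model this greedy loop yields an allocation that favors the sub-streams where extra rate buys the most quality.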

  8. Multisensory softness perceived compliance from multiple sources of information

    CERN Document Server

    Luca, Massimiliano Di

    2014-01-01

    Offers a unique multidisciplinary overview of how humans interact with soft objects and how multiple sensory signals are used to perceive material properties, with an emphasis on object deformability. The authors describe a range of setups that have been employed to study and exploit sensory signals involved in interactions with compliant objects as well as techniques to simulate and modulate softness - including a psychophysical perspective of the field. Multisensory Softness focuses on the cognitive mechanisms underlying the use of multiple sources of information in softness perception. D

  9. DABAM: an open-source database of X-ray mirrors metrology

    Energy Technology Data Exchange (ETDEWEB)

    Sanchez del Rio, Manuel, E-mail: srio@esrf.eu [ESRF - The European Synchrotron, 71 Avenue des Martyrs, 38000 Grenoble (France); Bianchi, Davide [AC2T Research GmbH, Viktro-Kaplan-Strasse 2-C, 2700 Wiener Neustadt (Austria); Cocco, Daniele [SLAC National Accelerator Laboratory, 2575 Sand Hill Road, Menlo Park, CA 94025 (United States); Glass, Mark [ESRF - The European Synchrotron, 71 Avenue des Martyrs, 38000 Grenoble (France); Idir, Mourad [NSLS II, Brookhaven National Laboratory, Upton, NY 11973-5000 (United States); Metz, Jim [InSync Inc., 2511C Broadbent Parkway, Albuquerque, NM 87107 (United States); Raimondi, Lorenzo; Rebuffi, Luca [Elettra-Sincrotrone Trieste SCpA, Basovizza (TS) (Italy); Reininger, Ruben; Shi, Xianbo [Advanced Photon Source, Argonne National Laboratory, Argonne, IL 60439 (United States); Siewert, Frank [BESSY II, Helmholtz Zentrum Berlin, Institute for Nanometre Optics and Technology, Albert-Einstein-Strasse 15, 12489 Berlin (Germany); Spielmann-Jaeggi, Sibylle [Swiss Light Source at Paul Scherrer Institut, CH-5232 Villigen PSI (Switzerland); Takacs, Peter [Instrumentation Division, Brookhaven National Laboratory, Upton, NY 11973-5000 (United States); Tomasset, Muriel [Synchrotron Soleil (France); Tonnessen, Tom [InSync Inc., 2511C Broadbent Parkway, Albuquerque, NM 87107 (United States); Vivo, Amparo [ESRF - The European Synchrotron, 71 Avenue des Martyrs, 38000 Grenoble (France); Yashchuk, Valeriy [Advanced Light Source, Lawrence Berkeley National Laboratory, MS 15-R0317, 1 Cyclotron Road, Berkeley, CA 94720-8199 (United States)

    2016-04-20

    DABAM is an open-source database of X-ray mirror metrology to be used with ray-tracing and wave-propagation codes for simulating the effect of surface errors on the performance of a synchrotron radiation beamline. An open-source database containing metrology data for X-ray mirrors is presented. It makes available metrology data (mirror height and slope profiles) that can be used with simulation tools for calculating the effects of optical surface errors on the performance of an optical instrument, such as a synchrotron beamline. A typical case is the degradation of the intensity profile at the focal position of a beamline due to mirror surface errors. This database for metrology (DABAM) aims to provide users of simulation tools with data from real mirrors. The data included in the database are described in this paper, with details of how the mirror parameters are stored. Accompanying software is provided to allow simple access and processing of these data, to calculate the most common statistical parameters, and to create input files for the most widely used simulation codes. Some optics simulations are presented and discussed to illustrate a real use of the profiles from the database.
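
    One of the usual statistical parameters for such metrology data is the RMS slope error, obtainable from a height profile by finite differences. A minimal sketch with synthetic numbers (real DABAM profiles have thousands of points):

    ```python
    import math

    def slope_profile(x, heights):
        """Forward-difference slopes (rad) from a measured height profile (m)."""
        return [(heights[i + 1] - heights[i]) / (x[i + 1] - x[i])
                for i in range(len(x) - 1)]

    def rms(values):
        return math.sqrt(sum(v * v for v in values) / len(values))

    # Synthetic 4-point profile with 0.1 m spacing (illustrative numbers only):
    x = [0.0, 0.1, 0.2, 0.3]
    h = [0.0, 1e-8, 3e-8, 2e-8]
    slopes = slope_profile(x, h)
    slope_rms = rms(slopes)
    ```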

  10. Multiple sparse volumetric priors for distributed EEG source reconstruction.

    Science.gov (United States)

    Strobbe, Gregor; van Mierlo, Pieter; De Vos, Maarten; Mijović, Bogdan; Hallez, Hans; Van Huffel, Sabine; López, José David; Vandenberghe, Stefaan

    2014-10-15

    We revisit the multiple sparse priors (MSP) algorithm implemented in the statistical parametric mapping software (SPM) for distributed EEG source reconstruction (Friston et al., 2008). In the present implementation, multiple cortical patches are introduced as source priors based on a dipole source space restricted to a cortical surface mesh. In this note, we present a technique to construct volumetric cortical regions to introduce as source priors by restricting the dipole source space to a segmented gray matter layer and using a region growing approach. This extension allows the reconstruction of brain structures beyond the cortical surface and facilitates the use of more realistic volumetric head models with more layers, such as cerebrospinal fluid (CSF), compared to the standard 3-layered scalp-skull-brain head models. We illustrated the technique with ERP data and anatomical MR images in 12 subjects. Based on the segmented gray matter for each of the subjects, cortical regions were created and introduced as source priors for MSP-inversion assuming two types of head models: the standard 3-layered scalp-skull-brain head models and extended 4-layered head models including CSF. We compared these models with the current implementation by assessing the free energy corresponding to each of the reconstructions using Bayesian model selection for group studies. Strong evidence was found in favor of the volumetric MSP approach compared to the MSP approach based on cortical patches for both types of head models. Overall, the strongest evidence was found in favor of the volumetric MSP reconstructions based on the extended head models including CSF. These results were verified by comparing the reconstructed activity. The use of volumetric cortical regions as source priors is a useful complement to the present implementation as it allows more complex head models and volumetric source priors to be introduced in future studies.

  11. MIDAS (Material Implementation, Database, and Analysis Source): A comprehensive resource of material properties

    Energy Technology Data Exchange (ETDEWEB)

    Tang, M; Norquist, P; Barton, N; Durrenberger, K; Florando, J; Attia, A

    2010-12-13

    MIDAS aims to be an easy-to-use and comprehensive common source for material properties, including both experimental data and models with their parameters. At LLNL, we will develop MIDAS to be the central repository for material strength related data and models, with the long-term goal of encompassing other material properties. MIDAS will allow users to upload experimental data and updated models, to view and read materials data and references, to manipulate models and their parameters, and to serve as the central location from which the application codes access the continuously growing body of model source codes. MIDAS contains a suite of interoperable tools and utilizes components already existing at LLNL: MSD (material strength database), MatProp (database of materials properties files), and MSlib (library of material model source codes). MIDAS requires significant development of the computer science framework for the interfaces between these components. We present the current status of MIDAS and its future development in this paper.

  12. Multiple LDPC decoding for distributed source coding and video coding

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Luong, Huynh Van; Huang, Xin

    2011-01-01

    Distributed source coding (DSC) is a coding paradigm for systems which fully or partly exploit the source statistics at the decoder to reduce the computational burden at the encoder. Distributed video coding (DVC) is one example. This paper considers the use of Low Density Parity Check Accumulate...... (LDPCA) codes in a DSC scheme with feed-back. To improve the LDPC coding performance in the context of DSC and DVC, while retaining short encoder blocks, this paper proposes multiple parallel LDPC decoding. The proposed scheme passes soft information between decoders to enhance performance. Experimental...

  13. Efficient quantum transmission in multiple-source networks.

    Science.gov (United States)

    Luo, Ming-Xing; Xu, Gang; Chen, Xiu-Bo; Yang, Yi-Xian; Wang, Xiaojun

    2014-04-02

    A difficult problem in quantum network communications is how to efficiently transmit quantum information over large-scale networks with common channels. We propose a solution by developing a quantum encoding approach. Different quantum states are encoded into a coherent superposition state using quantum linear optics. The transmission congestion in the common channel may be avoided by transmitting the superposition state. For further decoding and continued transmission, special phase transformations are applied to incoming quantum states using phase shifters such that decoders can distinguish outgoing quantum states. These phase shifters may be precisely controlled using classical chaos synchronization via additional classical channels. Based on this design and the reduction of multiple-source network under the assumption of restricted maximum-flow, the optimal scheme is proposed for specially quantized multiple-source network. In comparison with previous schemes, our scheme can greatly increase the transmission efficiency.

  14. How to support learning from multiple hypertext sources.

    Science.gov (United States)

    Naumann, Anja B; Wechsung, Ina; Krems, Josef F

    2009-08-01

    In the present study, we investigated three factors assumed to have a significant influence on the success of learning from multiple hypertexts, and on the construction of a documents model in particular. These factors were task (argumentative vs. narrative), available text material (with vs. without primary sources), and presentation format (active vs. static). The study was conducted using a combination of three tools (DEWEX, Chemnitz LogAnalyzer, and SummTool) developed for Web-based experimenting. The results show that the task is the most important factor for successful learning from multiple hypertexts. Depending on the task, the participants were either able or unable to apply adequate strategies, such as considering the source information. It was also observed that argumentative tasks were supported by an active hypertext presentation format, whereas performance on narrative tasks increased with a static presentation format. No effect was shown for the type of texts available.

  15. Probabilistic inference of transcription factor binding from multiple data sources.

    Directory of Open Access Journals (Sweden)

    Harri Lähdesmäki

    Full Text Available An important problem in molecular biology is to build a complete understanding of transcriptional regulatory processes in the cell. We have developed a flexible, probabilistic framework to predict TF binding from multiple data sources that differs from the standard hypothesis testing (scanning) methods in several ways. Our probabilistic modeling framework estimates the probability of binding and, thus, naturally reflects our degree of belief in binding. Probabilistic modeling also allows for easy and systematic integration of our binding predictions into other probabilistic modeling methods, such as expression-based gene network inference. The method answers the question of whether the whole analyzed promoter has a binding site, but can also be extended to estimate the binding probability at each nucleotide position. Further, we introduce an extension to model combinatorial regulation by several TFs. Most importantly, the proposed methods can make principled probabilistic inference from multiple evidence sources, such as multiple statistical models (motifs) of the TFs, evolutionary conservation, regulatory potential, CpG islands, nucleosome positioning, DNase hypersensitive sites, ChIP-chip binding segments and other (prior) sequence-based biological knowledge. We developed both a likelihood and a Bayesian method, where the latter is implemented with a Markov chain Monte Carlo algorithm. Results on a carefully constructed test set from the mouse genome demonstrate that principled data fusion can significantly improve the performance of TF binding prediction methods. We also applied the probabilistic modeling framework to all promoters in the mouse genome and the results indicate a sparse connectivity between transcriptional regulators and their target promoters.
To facilitate analysis of other sequences and additional data, we have developed an on-line web tool, ProbTF, which implements our probabilistic TF binding prediction method using multiple
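
    The principled fusion of independent evidence sources can be sketched in its simplest naive-Bayes form: sum log-likelihood ratios onto the prior log-odds of binding. The likelihood ratios below are invented, and this is a far simpler model than the paper's MCMC-based framework:

    ```python
    import math

    def fuse_evidence(prior, likelihood_ratios):
        """Combine independent evidence sources for 'site is bound' by adding
        each source's log-likelihood ratio to the prior log-odds, then map
        back to a probability with the logistic function."""
        log_odds = math.log(prior / (1 - prior))
        for lr in likelihood_ratios:
            log_odds += math.log(lr)
        return 1 / (1 + math.exp(-log_odds))

    # A weak prior pushed up by motif, conservation and DNase evidence
    # (likelihood ratios 8x, 3x and 5x, all invented):
    p = fuse_evidence(0.01, [8.0, 3.0, 5.0])
    ```

    The output is a binding probability rather than a hard yes/no call, which is exactly the property the abstract emphasizes.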

  16. Freshwater Biological Traits Database (Traits)

    Science.gov (United States)

    The traits database was compiled for a project on climate change effects on river and stream ecosystems. The traits data, gathered from multiple sources, focused on information published or otherwise well-documented by trustworthy sources.

  17. Ibmdbpy-spatial : An Open-source implementation of in-database geospatial analytics in Python

    Science.gov (United States)

    Roy, Avipsa; Fouché, Edouard; Rodriguez Morales, Rafael; Moehler, Gregor

    2017-04-01

    As the amount of spatial data acquired from several geodetic sources has grown over the years and as data infrastructure has become more powerful, the need for adoption of in-database analytic technology within geosciences has grown rapidly. In-database analytics on spatial data stored in a traditional enterprise data warehouse enables much faster retrieval and analysis, making better predictions about risks and opportunities, identifying trends and spotting anomalies. Although there are a number of open-source spatial analysis libraries like geopandas and shapely available today, most of them have been restricted to manipulation and analysis of geometric objects with a dependency on GEOS and similar libraries. We present an open-source software package, written in Python, to fill the gap between spatial analysis and in-database analytics. Ibmdbpy-spatial provides a geospatial extension to the ibmdbpy package, implemented in 2015. It provides an interface for spatial data manipulation and access to in-database algorithms in IBM dashDB, a data warehouse platform with a spatial extender that runs as a service on IBM's cloud platform called Bluemix. Working in-database reduces the network overload, as the complete data need not be replicated into the user's local system and only a subset of the entire dataset is fetched into memory in a single instance. Ibmdbpy-spatial accelerates Python analytics by seamlessly pushing operations written in Python into the underlying database for execution using the dashDB spatial extender, thereby benefiting from in-database performance-enhancing features, such as columnar storage and parallel processing. The package is currently supported on Python versions from 2.7 up to 3.4.
The basic architecture of the package consists of three main components - 1) a connection to the dashDB represented by the instance IdaDataBase, which uses a middleware API namely - pypyodbc or jaydebeapi to establish the database connection via

  18. Quantum Query Complexity for Searching Multiple Marked States from an Unsorted Database

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    An important and common sort of search problem is to find all marked states in an unsorted database with a large number of states. Grover's original quantum search algorithm finds a single marked state with uncertainty; it has been generalized to the case of multiple marked states, as well as modified to find a single marked state with certainty. However, the query complexity for finding all of multiple marked states has not been addressed. We use a generalized Long's algorithm with high precision to solve such a problem. We calculate the approximate query complexity, which increases with the number of marked states and with the precision that we demand. In the end we introduce an algorithm for the problem on a "duality computer" and show its advantage over other algorithms.
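
    For orientation, the textbook Grover iteration count for M marked states among N (the starting point such generalizations build on, not the generalized Long's algorithm used in the paper) can be computed directly:

    ```python
    import math

    def grover_iterations(n_states, n_marked):
        """Optimal number of Grover iterations, approximately
        (pi/4) * sqrt(N/M), to maximize the probability of measuring
        one of M marked states among N."""
        theta = math.asin(math.sqrt(n_marked / n_states))
        return round((math.pi / (2 * theta) - 1) / 2)

    # One marked state in a database of 2**10 states:
    k = grover_iterations(2 ** 10, 1)
    ```

    The sqrt(N/M) scaling is why the query complexity grows as the number of marked states M shrinks relative to N.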

  19. Multiple approaches to microbial source tracking in tropical northern Australia

    KAUST Repository

    Neave, Matthew

    2014-09-16

    Microbial source tracking is an area of research in which multiple approaches are used to identify the sources of elevated bacterial concentrations in recreational lakes and beaches. At our study location in Darwin, northern Australia, water quality in the harbor is generally good, however dry-season beach closures due to elevated Escherichia coli and enterococci counts are a cause for concern. The sources of these high bacteria counts are currently unknown. To address this, we sampled sewage outfalls, other potential inputs, such as urban rivers and drains, and surrounding beaches, and used genetic fingerprints from E. coli and enterococci communities, fecal markers and 454 pyrosequencing to track contamination sources. A sewage effluent outfall (Larrakeyah discharge) was a source of bacteria, including fecal bacteria that impacted nearby beaches. Two other treated effluent discharges did not appear to influence sites other than those directly adjacent. Several beaches contained fecal indicator bacteria that likely originated from urban rivers and creeks within the catchment. Generally, connectivity between the sites was observed within distinct geographical locations and it appeared that most of the bacterial contamination on Darwin beaches was confined to local sources.

  20. Definitions of database files and fields of the Personal Computer-Based Water Data Sources Directory

    Science.gov (United States)

    Green, J. Wayne

    1991-01-01

    This report describes the data-base files and fields of the personal computer-based Water Data Sources Directory (WDSD). The personal computer-based WDSD was derived from the U.S. Geological Survey (USGS) mainframe computer version. The mainframe version of the WDSD is a hierarchical data-base design. The personal computer-based WDSD is a relational data-base design. This report describes the data-base files and fields of the relational data-base design in dBASE IV (the use of brand names in this abstract is for identification purposes only and does not constitute endorsement by the U.S. Geological Survey) for the personal computer. The WDSD contains information on (1) the type of organization, (2) the major orientation of water-data activities conducted by each organization, (3) the names, addresses, and telephone numbers of offices within each organization from which water data may be obtained, (4) the types of data held by each organization and the geographic locations within which these data have been collected, (5) alternative sources of an organization's data, (6) the designation of liaison personnel in matters related to water-data acquisition and indexing, (7) the volume of water data indexed for the organization, and (8) information about other types of data and services available from the organization that are pertinent to water-resources activities.

  1. Open source database of images DEIMOS: extension for large-scale subjective image quality assessment

    Science.gov (United States)

    Vítek, Stanislav

    2014-09-01

    DEIMOS (Database of Images: Open Source) is an open-source database of images and video sequences for testing, verification and comparison of various image and/or video processing techniques such as compression, reconstruction and enhancement. This paper deals with an extension of the database that allows large-scale web-based subjective image quality assessment to be performed. The extension implements both an administrative and a client interface. The proposed system is aimed mainly at mobile communication devices, taking advantage of HTML5 technology; participants do not need to install any application, and assessment can be performed using a web browser. The assessment campaign administrator can select images from the large database and then apply rules defined by various test procedure recommendations. The standard test procedures may be fully customized and saved as a template. Alternatively, the administrator can define a custom test using images from the pool and other components, such as evaluation forms and ongoing questionnaires. The image sequence is delivered to the online client, e.g. a smartphone or tablet, as a fully automated assessment sequence, or the viewer can decide on the timing of the assessment if required. Environmental data and viewing conditions (e.g. illumination, vibrations, GPS coordinates, etc.) may be collected and subsequently analyzed.

  2. jSPyDB, an open source database-independent tool for data management

    Science.gov (United States)

    Pierro, Giuseppe Antonio; Cavallari, Francesca; Di Guida, Salvatore; Innocente, Vincenzo

    2011-12-01

    Nowadays, the number of commercial tools for accessing databases, built on Java or .Net, is increasing. However, many of these applications have several drawbacks: usually they are not open-source, they provide interfaces only to a specific kind of database, and they are platform-dependent and very CPU- and memory-intensive. jSPyDB is a free web-based tool written in Python and Javascript. It relies on jQuery and Python libraries, and is intended to provide a simple handler to different database technologies inside a local web browser. Such a tool, exploiting fast access libraries such as SQLAlchemy, is easy to install and configure. The design of this tool envisages three layers. The front-end client side in the local web browser communicates with a backend server. Only the server is able to connect to the different databases for the purposes of performing data definition and manipulation. The server makes the data available to the client, so that the user can display and handle them safely. Moreover, thanks to jQuery libraries, this tool supports export of data in different formats, such as XML and JSON. Finally, by using a set of pre-defined functions, users are allowed to create customized views for better data visualization. In this way, we optimize the performance of database servers by avoiding short connections and concurrent sessions. In addition, security is enforced since we do not give users the possibility to directly execute arbitrary SQL statements.
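
    The server-side pattern the tool describes (run a query through a Python backend, hand the rows to the browser client as JSON) can be sketched with the standard library, using sqlite3 as a stand-in for the server's database connection. Table and column names are invented:

    ```python
    import json
    import sqlite3

    def query_to_json(conn, sql):
        """Run a read-only query and serialize the rows as a JSON array of
        objects keyed by column name, ready to send to a browser client."""
        cur = conn.execute(sql)
        columns = [d[0] for d in cur.description]
        return json.dumps([dict(zip(columns, row)) for row in cur.fetchall()])

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE runs (id INTEGER, status TEXT)")
    conn.executemany("INSERT INTO runs VALUES (?, ?)", [(1, "ok"), (2, "failed")])
    payload = query_to_json(conn, "SELECT id, status FROM runs ORDER BY id")
    ```

    Keeping query execution on the server, as here, is what lets a tool like this withhold direct SQL access from the client.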

  3. Handling multiple testing while interpreting microarrays with the Gene Ontology Database

    Directory of Open Access Journals (Sweden)

    Zhao Hongyu

    2004-09-01

    Full Text Available Abstract Background The development of software tools that analyze microarray data in the context of genetic knowledgebases is being pursued by multiple research groups using different methods. A common problem for many of these tools is how to correct for multiple statistical testing since simple corrections are overly conservative and more sophisticated corrections are currently impractical. A careful study of the nature of the distribution one would expect by chance, such as by a simulation study, may be able to guide the development of an appropriate correction that is not overly time consuming computationally. Results We present the results from a preliminary study of the distribution one would expect for analyzing sets of genes extracted from Drosophila, S. cerevisiae, Wormbase, and Gramene databases using the Gene Ontology Database. Conclusions We found that the estimated distribution is not regular and is not predictable outside of a particular set of genes. Permutation-based simulations may be necessary to determine the confidence in results of such analyses.
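
    The permutation-based simulation the authors suggest can be sketched as follows: draw random gene sets of the same size and count how often the annotation overlap is at least as extreme as the observed one, yielding an empirical p-value with no parametric assumption about the null distribution. This is a generic sketch, not the authors' code; the function name and the add-one correction are illustrative choices.

```python
import random

def permutation_pvalue(observed_hits, gene_set_size, annotated_genes, all_genes,
                       n_perm=1000, rng=None):
    """Empirical p-value for observing >= observed_hits annotated genes in a
    random gene set of gene_set_size drawn from all_genes."""
    rng = rng or random.Random(0)
    annotated = set(annotated_genes)
    extreme = 0
    for _ in range(n_perm):
        sample = rng.sample(all_genes, gene_set_size)
        hits = sum(1 for g in sample if g in annotated)
        if hits >= observed_hits:
            extreme += 1
    # add-one correction keeps the estimate away from exactly zero
    return (extreme + 1) / (n_perm + 1)
```

    Running this once per Gene Ontology term builds exactly the kind of term-specific empirical null the abstract argues is necessary.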

  4. A New Database on Physical Capital Stock: Sources, Methodology, and Results

    Directory of Open Access Journals (Sweden)

    Ashok Dhareshwar

    1993-03-01

    Full Text Available This paper describes the derivation of a new database of physical capital stock estimates for a selected group of 92 developing and industrial countries from 1960 to 1990 (of which 68 are developing countries). This work is part of a larger research effort to analyze the sources of growth in developing countries and assess the effects of changes in the international economic environment on developing countries' prospects. A special effort was made to compile investment series from 1950 onward for as many countries as possible, and these were then accumulated using the perpetual inventory method. In addition, various techniques were evaluated for the estimation of an initial capital stock, and a modified Harberger approach was considered most suitable. The derived capital stock series were compared with series prepared by other researchers; the tests show they correlate well with the results of most similar exercises. The capital stock estimates were used to calculate median capital-output ratios and aggregate growth rates of the capital stock by country group; again, the results seem reasonable and consistent with our prior understanding of the relevant developing regions. Finally, the capital stock series were used to calculate total factor productivity growth rates using a constant-returns-to-scale Cobb-Douglas production function with imposed factor shares. The results of this exercise were presented both on a regional basis and against per capita income in 1960 and were found to be consistent with the findings of other researchers.
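
    The perpetual inventory method used above accumulates investment flows into a capital stock with geometric depreciation, K_t = (1 - δ) K_{t-1} + I_t. A minimal sketch (the depreciation rate and the figures below are illustrative, not the paper's data):

```python
def perpetual_inventory(k0, investments, delta=0.05):
    """Build a capital stock series via K_t = (1 - delta) * K_{t-1} + I_t,
    starting from an initial stock k0."""
    series = [k0]
    for inv in investments:
        series.append((1 - delta) * series[-1] + inv)
    return series
```

    The paper's "modified Harberger approach" is one way of supplying the initial stock k0 that the recursion needs.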

  5. On the "Dependence" of "Independent" Group EEG Sources; an EEG Study on Two Large Databases.

    OpenAIRE

    Congedo, Marco; John, Roy; De Ridder, Dirk; Prichep, Leslie; Isenhart, Robert

    2010-01-01

    International audience; The aim of this work is to study the coherence profile (dependence) of robust eyes-closed resting EEG sources isolated by group blind source separation (gBSS). We employ a test-retest strategy using two large normative databases (N = 57 and 84). Using a BSS method in the complex Fourier domain, we show that we can rigorously study the out-of-phase dependence of the extracted components, although they are extracted so as to be in-phase independent (by BSS definiti...

  6. Multiple Objective Fuzzy Sourcing Problem with Multiple Items in Discount Environments

    Directory of Open Access Journals (Sweden)

    Feyzan Arikan

    2015-01-01

    Full Text Available The selection of proper supply sources plays a vital role in maintaining companies' competitiveness. In this study, a multiple-criteria fuzzy sourcing problem with multiple items in a discount environment is formulated as a multiple-objective mixed integer linear programming problem. The fuzzy parameters are the demand level and/or the aspiration levels of the objectives. The three objective functions minimize the total production and ordering costs, the total number of rejected units, and the total number of late-delivered units, respectively. The model is developed for the all-units discount scheme; the modifications required for incremental discount and volume discount environments are also discussed. A previously proposed interactive fuzzy approach combined with three fuzzy mathematical models is employed to obtain the most satisfactory solution, which is also nondominated. This study provides a realistic mathematical model and a promising solution strategy for the multiple-item, single-period sourcing problem in a discount environment. The consideration of fuzziness makes the obtained nondominated solution implementable in real cases.

  7. Dynamic knowledge management from multiple sources in crowdsourcing environments

    Science.gov (United States)

    Kim, Mucheol; Rho, Seungmin

    2015-10-01

    Due to the spread of smart devices and the development of network technology, a large number of people can now easily use the web to acquire information and various services. Collective intelligence has emerged as a core driver of technological evolution in the Web 2.0 generation: people who are interested in a specific domain of knowledge can not only make use of the information but also participate in the knowledge production process. Since a large volume of knowledge is produced by multiple contributors, it is important to integrate and manage that knowledge efficiently. In this paper, we propose a social-tagging-based dynamic knowledge management system for crowdsourcing environments. The approach is to categorize and package knowledge from multiple sources in such a way that it links easily to the target knowledge.

  8. Multiple-Source Shortest Paths in Embedded Graphs

    CERN Document Server

    Cabello, Sergio; Erickson, Jeff

    2012-01-01

    Let G be a directed graph with n vertices and non-negative weights in its directed edges, embedded on a surface of genus g, and let F be an arbitrary face of G. We describe an algorithm to preprocess the graph in O(gn log n) time, so that the shortest-path distance from any vertex on the boundary of F to any other vertex in G can be retrieved in O(log n) time. Our result directly generalizes the O(n log n)-time algorithm of Klein [SODA 2005] for multiple-source shortest paths in planar graphs. Intuitively, our preprocessing algorithm maintains a shortest-path tree as its source point moves continuously around the boundary of F. As an application of our algorithm, we describe algorithms to compute a shortest non-contractible or non-separating cycle in embedded, undirected graphs in O(g^2 n log n) time.
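
    For intuition, the naive baseline that the O(gn log n) preprocessing improves upon simply runs Dijkstra's algorithm once from each source vertex on the boundary of F, costing O(n^2 log n) in the worst case when the boundary has Θ(n) vertices. A standard-library sketch of that baseline (the graph representation and names are illustrative, not from the paper):

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths with non-negative weights.
    graph maps vertex -> list of (neighbor, weight) pairs."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def multi_source_table(graph, boundary):
    """Naive baseline: one Dijkstra run per boundary vertex; the paper's
    preprocessing answers the same queries in O(log n) after O(gn log n) setup."""
    return {s: dijkstra(graph, s) for s in boundary}
```

    The paper's contribution is that, on a genus-g surface, the shortest-path trees for consecutive boundary sources differ so little that they can be maintained incrementally instead of recomputed.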

  9. Collaborative mining of graph patterns from multiple sources

    Science.gov (United States)

    Levchuk, Georgiy; Colonna-Romano, John

    2016-05-01

    Intelligence analysts require automated tools to mine multi-source data, including answering queries, learning patterns of life, and discovering malicious or anomalous activities. Graph mining algorithms have recently attracted significant attention in the intelligence community, because text-derived knowledge can be efficiently represented as graphs of entities and relationships. However, graph mining models are limited to use cases involving collocated data, and often make restrictive assumptions about the types of patterns that need to be discovered, the relationships between individual sources, and the availability of accurate data segmentation. In this paper we present a model to learn graph patterns from multiple relational data sources when each source might hold only a fragment (or subgraph) of the knowledge that needs to be discovered and no segmentation of the data into training or testing instances is available. Our model is based on distributed collaborative graph learning and is effective in situations where the data must be kept locally and cannot be moved to a centralized location. Our experiments show that the proposed collaborative learning achieves better learning quality than aggregated centralized graph learning, with learning time comparable to traditional distributed learning in which knowledge of the data segmentation is required.

  10. jSPyDB, an open source database-independent tool for data management

    CERN Document Server

    Pierro, Giuseppe Antonio

    2010-01-01

    Nowadays, the number of commercial tools available for accessing Databases, built on Java or .Net, is increasing. However, many of these applications have several drawbacks: usually they are not open-source, they provide interfaces only with a specific kind of database, they are platform-dependent and very CPU and memory consuming. jSPyDB is a free web-based tool written in Python and JavaScript. It relies on jQuery and Python libraries, and is intended to provide a simple handler to different database technologies inside a local web browser. Such a tool, exploiting fast access libraries such as SQLAlchemy, is easy to install and configure. The design of this tool envisages three layers. The front-end client side in the local web browser communicates with a backend server. Only the server is able to connect to the different databases for the purposes of performing data definition and manipulation. The server makes the data available to the client, so that the user can display and handle them safely. ...

  11. DABAM: an open-source database of X-ray mirrors metrology.

    Science.gov (United States)

    Sanchez Del Rio, Manuel; Bianchi, Davide; Cocco, Daniele; Glass, Mark; Idir, Mourad; Metz, Jim; Raimondi, Lorenzo; Rebuffi, Luca; Reininger, Ruben; Shi, Xianbo; Siewert, Frank; Spielmann-Jaeggi, Sibylle; Takacs, Peter; Tomasset, Muriel; Tonnessen, Tom; Vivo, Amparo; Yashchuk, Valeriy

    2016-05-01

    An open-source database containing metrology data for X-ray mirrors is presented. It makes available metrology data (mirror height and slope profiles) that can be used with simulation tools to calculate the effects of optical surface errors on the performance of an optical instrument, such as a synchrotron beamline. A typical case is the degradation of the intensity profile at the focal position of a beamline due to mirror surface errors. This database for metrology (DABAM) aims to provide users of simulation tools with data from real mirrors. The data included in the database are described in this paper, with details of how the mirror parameters are stored. Accompanying software is provided to allow simple access and processing of these data, to calculate the most common statistical parameters, and to create input files for the most widely used simulation codes. Some optics simulations are presented and discussed to illustrate the use of real profiles from the database.

  12. DABAM: an open-source database of X-ray mirrors metrology

    Science.gov (United States)

    Sanchez del Rio, Manuel; Bianchi, Davide; Cocco, Daniele; Glass, Mark; Idir, Mourad; Metz, Jim; Raimondi, Lorenzo; Rebuffi, Luca; Reininger, Ruben; Shi, Xianbo; Siewert, Frank; Spielmann-Jaeggi, Sibylle; Takacs, Peter; Tomasset, Muriel; Tonnessen, Tom; Vivo, Amparo; Yashchuk, Valeriy

    2016-01-01

    An open-source database containing metrology data for X-ray mirrors is presented. It makes available metrology data (mirror height and slope profiles) that can be used with simulation tools to calculate the effects of optical surface errors on the performance of an optical instrument, such as a synchrotron beamline. A typical case is the degradation of the intensity profile at the focal position of a beamline due to mirror surface errors. This database for metrology (DABAM) aims to provide users of simulation tools with data from real mirrors. The data included in the database are described in this paper, with details of how the mirror parameters are stored. Accompanying software is provided to allow simple access and processing of these data, to calculate the most common statistical parameters, and to create input files for the most widely used simulation codes. Some optics simulations are presented and discussed to illustrate the use of real profiles from the database. PMID:27140145

  13. DABAM: an open-source database of X-ray mirrors metrology

    Energy Technology Data Exchange (ETDEWEB)

    Sanchez del Rio, Manuel; Bianchi, Davide; Cocco, Daniele; Glass, Mark; Idir, Mourad; Metz, Jim; Raimondi, Lorenzo; Rebuffi, Luca; Reininger, Ruben; Shi, Xianbo; Siewert, Frank; Spielmann-Jaeggi, Sibylle; Takacs, Peter; Tomasset, Muriel; Tonnessen, Tom; Vivo, Amparo; Yashchuk, Valeriy

    2016-04-20

    An open-source database containing metrology data for X-ray mirrors is presented. It makes available metrology data (mirror height and slope profiles) that can be used with simulation tools to calculate the effects of optical surface errors on the performance of an optical instrument, such as a synchrotron beamline. A typical case is the degradation of the intensity profile at the focal position of a beamline due to mirror surface errors. This database for metrology (DABAM) aims to provide users of simulation tools with data from real mirrors. The data included in the database are described in this paper, with details of how the mirror parameters are stored. Accompanying software is provided to allow simple access and processing of these data, to calculate the most common statistical parameters, and to create input files for the most widely used simulation codes. Some optics simulations are presented and discussed to illustrate the use of real profiles from the database.
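
    As an illustration of the usual statistical parameters such software computes, the sketch below derives a slope profile from mirror heights by finite differences and takes its root-mean-square. This is not the DABAM accompanying software itself; the function names and the assumption of a simple forward-difference scheme are illustrative.

```python
import math

def slope_profile(x, heights):
    """Finite-difference slopes (rad) from a mirror height profile (m vs m)."""
    return [(heights[i + 1] - heights[i]) / (x[i + 1] - x[i])
            for i in range(len(x) - 1)]

def rms(values):
    """Root-mean-square, a standard figure of merit for height and slope errors."""
    return math.sqrt(sum(v * v for v in values) / len(values))
```

    The RMS slope error obtained this way is the quantity typically fed into ray-tracing or wave-optics codes to model focal spot degradation.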

  14. Mood and multiple source characteristics: mood congruency of source consensus status and source trustworthiness as determinants of message scrutiny.

    Science.gov (United States)

    Ziegler, Rene; Diehl, Michael

    2011-08-01

    This research deals with the interplay of mood and multiple source characteristics in regard to persuasion processes and attitudes. In a four-factorial experiment, mood (positive vs. negative), source consensus status (majority vs. minority), source trustworthiness (high vs. low), and message strength (strong vs. weak) were manipulated. Results were in line with predictions of a mood-congruent expectancies perspective rather than competing predictions of a mood-as-information perspective. Specifically, individuals in both moods evinced higher message scrutiny given mood-incongruent (vs. mood-congruent) source characteristics. That is, across source trustworthiness, positive (negative) mood led to higher message scrutiny given a minority (majority) versus a majority (minority) source. Furthermore, across source consensus, positive (negative) mood led to higher message scrutiny given an untrustworthy (trustworthy) versus a trustworthy (untrustworthy) source. Additional analyses revealed that processing effort increased from doubly mood-congruent source combinations (low effort) over mixed-source combinations (intermediate effort) to doubly mood-incongruent combinations (high effort). Implications are discussed.

  15. ZeBase: an open-source relational database for zebrafish laboratories.

    Science.gov (United States)

    Hensley, Monica R; Hassenplug, Eric; McPhail, Rodney; Leung, Yuk Fai

    2012-03-01

    Abstract ZeBase is an open-source relational database for zebrafish inventory. It is designed for the recording of genetic, breeding, and survival information of fish lines maintained in a single- or multi-laboratory environment. Users can easily access ZeBase through standard web-browsers anywhere on a network. Convenient search and reporting functions are available to facilitate routine inventory work; such functions can also be automated by simple scripting. Optional barcode generation and scanning are also built-in for easy access to the information related to any fish. Further information of the database and an example implementation can be found at http://zebase.bio.purdue.edu.

  16. Assessment of the neutron cross section database for mercury for the ORNL spallation source

    Energy Technology Data Exchange (ETDEWEB)

    Leal, L.C.; Spencer, R.R.; Ingersoll, D.T.; Gabriel, T.A. [Oak Ridge National Lab., TN (United States)

    1996-06-01

    Neutron source generation based on a high-energy particle accelerator has been considered as an alternative to the canceled Advanced Neutron Source project at Oak Ridge National Laboratory. The proposed technique consists of a spallation neutron source in which neutrons are produced via the interaction of high-energy charged particles with a heavy metal target. Preliminary studies indicate that liquid mercury bombarded with GeV protons provides an excellent neutron source. Accordingly, a survey has been made of the available neutron cross-section data. Since it is expected that spectral modifiers, specifically moderators, will also be incorporated into the source design, the survey included thermal-energy, resonance-region, and high-energy data. It was found that data for individual isotopes were almost non-existent and that the only evaluation found for the natural element had regions of missing or discrepant data. Therefore, it appears that to achieve the desired degree of accuracy in the spallation source design, it is necessary to re-evaluate the mercury database, including making new measurements. The presentation will cover the currently available data and propose experiments that can lead to design-quality cross sections.

  17. MSblender: A probabilistic approach for integrating peptide identifications from multiple database search engines.

    Science.gov (United States)

    Kwon, Taejoon; Choi, Hyungwon; Vogel, Christine; Nesvizhskii, Alexey I; Marcotte, Edward M

    2011-07-01

    Shotgun proteomics using mass spectrometry is a powerful method for protein identification but suffers from limited sensitivity in complex samples. Integrating peptide identifications from multiple database search engines is a promising strategy to increase the number of peptide identifications and reduce the volume of unassigned tandem mass spectra. Existing methods pool statistical significance scores, such as p-values or posterior probabilities of peptide-spectrum matches (PSMs), from multiple search engines after high-scoring peptides have been assigned to spectra, but these methods lack reliable control of identification error rates as data are integrated from different search engines. We developed a statistically coherent method for integrative analysis, termed MSblender. MSblender converts raw search scores from the search engines into a probability score for every possible PSM and properly accounts for the correlation between search scores. The method reliably estimates false discovery rates and identifies more PSMs than any single search engine at the same false discovery rate. The increased identifications raise spectral counts for most proteins and allow quantification of proteins that would not have been quantified by individual search engines. We also demonstrate that the enhanced quantification contributes to improved sensitivity in differential expression analyses.
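
    For contrast with MSblender's probabilistic model, the conventional target-decoy estimate of the false discovery rate can be sketched in a few lines. This naive ranking approach is what such integrative methods improve upon: it neither converts scores into calibrated probabilities nor models correlation between search engines, and the data structure below is purely illustrative.

```python
def decoy_fdr(psms):
    """Rank PSMs by score and estimate the FDR at each cutoff as
    (#decoys above cutoff) / (#targets above cutoff).
    psms: list of (score, is_decoy) tuples."""
    ranked = sorted(psms, key=lambda p: -p[0])
    targets = decoys = 0
    out = []
    for score, is_decoy in ranked:
        if is_decoy:
            decoys += 1
        else:
            targets += 1
        out.append((score, decoys / max(targets, 1)))
    return out
```

    Choosing the score cutoff where this estimate first exceeds, say, 1% gives the usual single-engine FDR threshold that MSblender's blended probability score is compared against.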

  18. Feature extraction from multiple data sources using genetic programming.

    Energy Technology Data Exchange (ETDEWEB)

    Szymanski, J. J. (John J.); Brumby, Steven P.; Pope, P. A. (Paul A.); Eads, D. R. (Damian R.); Galassi, M. C. (Mark C.); Harvey, N. R. (Neal R.); Perkins, S. J. (Simon J.); Porter, R. B. (Reid B.); Theiler, J. P. (James P.); Young, A. C. (Aaron Cody); Bloch, J. J. (Jeffrey J.); David, N. A. (Nancy A.); Esch-Mosher, D. M. (Diana M.)

    2002-01-01

    Feature extraction from imagery is an important and long-standing problem in remote sensing. In this paper, we report on work using genetic programming to perform feature extraction simultaneously from multispectral and digital elevation model (DEM) data. The tool used is the GENetic Imagery Exploitation (GENIE) software, which produces image-processing software that inherently combines spatial and spectral processing. GENIE is particularly useful in exploratory studies of imagery, such as when combining data from multiple sources. The user trains the software by painting the feature of interest with a simple graphical user interface. GENIE then uses genetic programming techniques to produce an image-processing pipeline. Here, we demonstrate the evolution of image-processing algorithms that extract a range of land-cover features including towns, grasslands, wildfire burn scars, and several types of forest. We use imagery from the DOE/NNSA Multispectral Thermal Imager (MTI) spacecraft, fused with USGS 1:24000-scale DEM data.

  19. Multiple Unicast Capacity of 2-Source 2-Sink Networks

    CERN Document Server

    Wang, Chenwei; Jafar, Syed A

    2011-01-01

    We study the sum capacity of multiple unicasts in wired and wireless multihop networks. With 2 source nodes and 2 sink nodes, there are a total of 4 independent unicast sessions (messages), one from each source to each sink node (this setting is also known as an X network). For wired networks with arbitrary connectivity, the sum capacity is achieved simply by routing. For wireless networks, we explore the degrees of freedom (DoF) of multihop X networks with a layered structure, allowing an arbitrary number of hops and arbitrary connectivity within each hop. For the case when there are no more than two relay nodes in each layer, the DoF can only take the values 1, 4/3, 3/2 or 2, based on the connectivity of the network, for almost all values of the channel coefficients. When there is an arbitrary number of relays in each layer, the DoF can also take the value 5/3. Achievability schemes incorporate linear forwarding, interference alignment and aligned interference neutralization principles. Information theoretic converse ...

  20. Genetic mixture of multiple source populations accelerates invasive range expansion.

    Science.gov (United States)

    Wagner, Natalie K; Ochocki, Brad M; Crawford, Kerri M; Compagnoni, Aldo; Miller, Tom E X

    2017-01-01

    A wealth of population genetic studies have documented that many successful biological invasions stem from multiple introductions from genetically distinct source populations. Yet, mechanistic understanding of whether and how genetic mixture promotes invasiveness has lagged behind documentation that such mixture commonly occurs. We conducted a laboratory experiment to test the influence of genetic mixture on the velocity of invasive range expansion. The mechanistic basis for effects of genetic mixture could include evolutionary responses (mixed invasions may harbour greater genetic diversity and thus elevated evolutionary potential) and/or fitness advantages of between-population mating (heterosis). If driven by evolution, positive effects of source population mixture should increase through time, as selection sculpts genetic variation. If driven by heterosis, effects of mixture should peak following first reproductive contact and then dissipate. Using a laboratory model system (beetles spreading through artificial landscapes), we quantified the velocity of range expansion for invasions initiated with one, two, four or six genetic sources over six generations. Our experiment was designed to test predictions corresponding to the evolutionary and heterosis mechanisms, asking whether any effects of genetic mixture occurred in early or later generations of range expansion. We also quantified demography and dispersal for each experimental treatment, since any effects of mixture should be manifest in one or both of these traits. Over six generations, invasions with any amount of genetic mixture (two, four and six sources) spread farther than single-source invasions. Our data suggest that heterosis provided a 'catapult effect', leaving a lasting signature on range expansion even though the benefits of outcrossing were transient. Individual-level trait data indicated that genetic mixture had positive effects on local demography (reduced extinction risk and enhanced

  1. Compilation of a source profile database for hydrocarbon and OVOC emissions in China

    Science.gov (United States)

    Mo, Ziwei; Shao, Min; Lu, Sihua

    2016-10-01

    Source profiles are essential for quantifying the role of volatile organic compound (VOC) emissions in air pollution. This study compiled a database of VOC source profiles in China, with 75 species drawn from five major categories: transportation, solvent use, biomass burning, fossil fuel burning, and industrial processes. Source profiles were updated for diesel vehicles, biomass burning, and residential coal burning by measuring both hydrocarbons and oxygenated VOCs (OVOCs), while other source profiles were derived from the available literature. The OVOCs contributed 53.8% of total VOCs in the profiles of heavy-duty diesel vehicle exhaust and 12.4%-46.3% in biomass and residential coal burning, indicating the importance of primary OVOC emissions from combustion-related sources. Taking the national emission inventory from 2008 as an example, we established an approach for assigning source profiles to develop a speciated VOC and OVOC emission inventory. The results showed that aromatics contributed 30% of the total 26 Tg of VOCs, followed by alkanes (24%), alkenes (19%) and OVOCs (12%). Aromatics (7.9 Tg) were much higher than in previous results (1.1 Tg and 3.4 Tg), while OVOCs (3.1 Tg) were comparable with the 3.3 Tg and 4.3 Tg reported in studies using profiles from the US. The current emission inventories are built on emission factors from non-methane hydrocarbon measurements, so the contribution of OVOC emissions has been neglected, leading to up to 30% underestimation of total VOC emissions. As a result, appropriate emission factors and source profiles that include OVOC measurements need to be deployed to reduce the uncertainty of estimated emissions and chemical reactivity potential.

  2. Raman database of amino acids solutions: A critical study of Extended Multiplicative Signal Correction

    KAUST Repository

    Candeloro, Patrizio

    2013-01-01

    The Raman spectra of biological materials always exhibit complex profiles, comprising several peaks and/or bands that arise from the large variety of biomolecules. The extraction of quantitative information from these spectra is not a trivial task. While qualitative information can be retrieved from changes in peak frequencies or from the appearance/disappearance of some peaks, quantitative analysis requires an examination of peak intensities. Unfortunately, in biological samples it is not easy to identify a reference peak for normalizing intensities, which makes studying peak intensities very difficult. In recent decades a more refined mathematical tool, extended multiplicative signal correction (EMSC), has been proposed for treating infrared spectra, and it is also capable of providing quantitative information. From the mathematical and physical point of view, EMSC can also be applied to Raman spectra, as recently proposed. In this work the reliability of the EMSC procedure is tested by application to a well-defined biological system: the 20 standard amino acids and their combinations in peptides. The first step is the collection of a Raman database of these 20 amino acids; EMSC processing is then applied to retrieve quantitative information from amino acid mixtures and peptides. A critical review of the results is presented, showing that EMSC has to be handled carefully for complex biological systems. © 2013 The Royal Society of Chemistry.
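
    The core of (E)MSC can be illustrated with the basic two-parameter model: fit each spectrum against a reference as s ≈ a + b·r by least squares, then correct the spectrum as (s − a)/b. Full EMSC adds polynomial baseline and interference terms; this minimal standard-library sketch covers only the additive/multiplicative part, and the function name is an illustrative choice.

```python
def msc_correct(spectrum, reference):
    """Basic multiplicative signal correction: fit spectrum ~ a + b * reference
    by ordinary least squares, then return (spectrum - a) / b."""
    n = len(spectrum)
    mean_s = sum(spectrum) / n
    mean_r = sum(reference) / n
    cov = sum((r - mean_r) * (s - mean_s) for r, s in zip(reference, spectrum))
    var = sum((r - mean_r) ** 2 for r in reference)
    b = cov / var          # multiplicative (scaling) term
    a = mean_s - b * mean_r  # additive (baseline offset) term
    return [(s - a) / b for s in spectrum]
```

    After correction, spectra share a common intensity scale, which is what makes peak intensities comparable across samples without a designated reference peak.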

  3. Testability evaluation using prior information of multiple sources

    Institute of Scientific and Technical Information of China (English)

    Wang Chao; Qiu Jing; Liu Guanjun; Zhang Yong

    2014-01-01

    Testability plays an important role in improving the readiness and decreasing the life-cycle cost of equipment. Testability demonstration and evaluation is of significance in measuring such testability indexes as fault detection rate (FDR) and fault isolation rate (FIR), which is useful to the producer in mastering the testability level and improving the testability design, and helpful to the consumer in making purchase decisions. Aiming at the problems with a small sample of testability demonstration test data (TDTD) such as low evaluation confidence and inaccurate result, a testability evaluation method is proposed based on the prior information of multiple sources and Bayes theory. Firstly, the types of prior information are analyzed. The maximum entropy method is applied to the prior information with the mean and interval estimate forms on the testability index to obtain the parameters of prior probability density function (PDF), and the empirical Bayesian method is used to get the parameters for the prior information with a success-fail form. Then, a parametrical data consistency check method is used to check the compatibility between all the sources of prior information and TDTD. For the prior information to pass the check, the prior credibility is calculated. A mixed prior distribution is formed based on the prior PDFs and the corresponding credibility. The Bayesian posterior distribution model is acquired with the mixed prior distribution and TDTD, based on which the point and interval estimates are calculated. Finally, examples of a flying control system are used to verify the proposed method. The results show that the proposed method is feasible and effective.

  4. Testability evaluation using prior information of multiple sources

    Directory of Open Access Journals (Sweden)

    Wang Chao

    2014-08-01

    Full Text Available Testability plays an important role in improving the readiness and decreasing the life-cycle cost of equipment. Testability demonstration and evaluation is of significance in measuring such testability indexes as fault detection rate (FDR) and fault isolation rate (FIR), which is useful to the producer in mastering the testability level and improving the testability design, and helpful to the consumer in making purchase decisions. Aiming at the problems with a small sample of testability demonstration test data (TDTD) such as low evaluation confidence and inaccurate result, a testability evaluation method is proposed based on the prior information of multiple sources and Bayes theory. Firstly, the types of prior information are analyzed. The maximum entropy method is applied to the prior information with the mean and interval estimate forms on the testability index to obtain the parameters of prior probability density function (PDF), and the empirical Bayesian method is used to get the parameters for the prior information with a success-fail form. Then, a parametrical data consistency check method is used to check the compatibility between all the sources of prior information and TDTD. For the prior information to pass the check, the prior credibility is calculated. A mixed prior distribution is formed based on the prior PDFs and the corresponding credibility. The Bayesian posterior distribution model is acquired with the mixed prior distribution and TDTD, based on which the point and interval estimates are calculated. Finally, examples of a flying control system are used to verify the proposed method. The results show that the proposed method is feasible and effective.
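
    The Beta-Binomial machinery behind such a mixed-prior evaluation can be sketched as follows: each prior source contributes a Beta component with a credibility weight, the weights are updated by the marginal likelihood of the demonstration data, and the posterior mean follows in closed form. This is a hedged illustration of the general conjugate calculation, not the paper's exact model (which also converts mean/interval-form priors via maximum entropy); names and numbers are illustrative.

```python
from math import exp, lgamma

def log_beta(a, b):
    """log of the Beta function B(a, b)."""
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def mixture_posterior_mean(components, s, n):
    """Posterior mean of a success probability (e.g. fault detection rate)
    under a credibility-weighted mixture of Beta(a, b) priors, after
    observing s successes in n demonstration trials.
    components: list of (weight, a, b) triples."""
    f = n - s
    post = []
    for w, a, b in components:
        # marginal likelihood of the data under this prior component
        log_ml = log_beta(a + s, b + f) - log_beta(a, b)
        post.append((w * exp(log_ml), a + s, b + f))
    z = sum(p[0] for p in post)
    return sum(w / z * a / (a + b) for w, a, b in post)
```

    With a single uniform prior the result reduces to the familiar Laplace-style estimate (s + 1) / (n + 2), which is a useful sanity check on the mixture bookkeeping.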

  5. 75 FR 54073 - Medicaid Program; Withdrawal of Determination of Average Manufacturer Price, Multiple Source Drug...

    Science.gov (United States)

    2010-09-03

    ... Average Manufacturer Price, Multiple Source Drug Definition, and Upper Limits for Multiple Source Drugs... the statutory definition of `average manufacturer price' or the statutory definition of `multiple... provisions we are proposing to withdraw are as follows: The determination of average manufacturer price (AMP...

  6. Medicaid program; withdrawal of determination of average manufacturer prices, multiple source drug definition, and upper limits for multiple source drugs. Final rule.

    Science.gov (United States)

    2010-11-15

    This final rule withdraws two provisions from the "Medicaid Program; Prescription Drugs" final rule (referred to hereafter as "AMP final rule") published in the July 17, 2007 Federal Register. The provisions we are withdrawing are as follows: The determination of average manufacturer price, and the Federal upper limits for multiple source drugs. We are also withdrawing the definition of "multiple source drug" as it was revised in the "Medicaid Program; Multiple Source Drug Definition" final rule published in the October 7, 2008 Federal Register.

  7. Combining data from multiple sources using the CUAHSI Hydrologic Information System

    Science.gov (United States)

    Tarboton, D. G.; Ames, D. P.; Horsburgh, J. S.; Goodall, J. L.

    2012-12-01

    The Consortium of Universities for the Advancement of Hydrologic Science, Inc. (CUAHSI) has developed a Hydrologic Information System (HIS) to provide better access to data by enabling the publication, cataloging, discovery, retrieval, and analysis of hydrologic data using web services. The CUAHSI HIS is an Internet-based system composed of hydrologic databases and servers connected through web services, as well as software for data publication, discovery, and access. The HIS metadata catalog lists close to 100 web services registered to provide data through this system, ranging from large federal agency data sets to experimental watersheds managed by university investigators. The system's flexibility in storing and enabling public access to similarly formatted data and metadata has created a community data resource from governmental and academic data that might otherwise remain private or be analyzed only in isolation. Comprehensive understanding of hydrology requires integration of this information from multiple sources. HydroDesktop is the client application developed as part of HIS to support data discovery and access through this system. HydroDesktop is founded on an open-source GIS client and has a plug-in architecture that has enabled the integration of modeling and analysis capability with the functionality for data discovery and access. Model integration is possible through a plug-in built on the OpenMI standard, and data visualization and analysis are supported by an R plug-in. This presentation will demonstrate HydroDesktop, showing how it provides an analysis environment within which data from multiple sources can be discovered, accessed, and integrated.

  8. Does NoSQL have a place in GIS? - An open-source spatial database performance comparison with proven RDBMS

    OpenAIRE

    McCarthy, Christopher

    2014-01-01

    With the relational database model being more than 40 years old, combined with the continuously increasing use of ‘big data’, NoSQL systems are marketed as providing a more efficient means of dealing with large quantities of usually unstructured data. NoSQL systems may provide advantages over relational databases but generally trade away relational robustness for those advantages. This project attempts to contribute to the GIS field in comparing Open-Source RDBMS and NoSQL systems, storing ...

  9. Expanding the use of administrative claims databases in conducting clinical real-world evidence studies in multiple sclerosis.

    Science.gov (United States)

    Capkun, Gorana; Lahoz, Raquel; Verdun, Elisabetta; Song, Xue; Chen, Weston; Korn, Jonathan R; Dahlke, Frank; Freitas, Rita; Fraeman, Kathy; Simeone, Jason; Johnson, Barbara H; Nordstrom, Beth

    2015-05-01

    Administrative claims databases provide a wealth of data for assessing the effect of treatments in clinical practice. Our aim was to propose methodology for real-world studies in multiple sclerosis (MS) using these databases. In three large US administrative claims databases: MarketScan, PharMetrics Plus and Department of Defense (DoD), patients with MS were selected using an algorithm identified in the published literature and refined for accuracy. Algorithms for detecting newly diagnosed ('incident') MS cases were also refined and tested. Methodology based on resource and treatment use was developed to differentiate between relapses with and without hospitalization. When various patient selection criteria were applied to the MarketScan database, an algorithm requiring two MS diagnoses at least 30 days apart was identified as the preferred method of selecting patient cohorts. Attempts to detect incident MS cases were confounded by the limited continuous enrollment of patients in these databases. Relapse detection algorithms identified similar proportions of patients in the MarketScan and PharMetrics Plus databases experiencing relapses with (2% in both databases) and without (15-20%) hospitalization in the 1 year follow-up period, providing findings in the range of those in the published literature. Additional validation of the algorithms proposed here would increase their credibility. The methods suggested in this study offer a good foundation for performing real-world research in MS using administrative claims databases, potentially allowing evidence from different studies to be compared and combined more systematically than in current research practice.
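
    The preferred cohort-selection rule identified above (two MS diagnoses at least 30 days apart) can be sketched as a filter over claim records. This is a minimal sketch under assumed inputs: the record layout and the example claims are hypothetical, not the schema of MarketScan, PharMetrics Plus, or DoD.

```python
from datetime import date

def select_ms_cohort(claims, min_gap_days=30):
    """Return patient IDs with at least two MS-diagnosis claims
    `min_gap_days` or more apart (the selection rule above).
    `claims` is an iterable of (patient_id, service_date, is_ms_dx)."""
    by_patient = {}
    for pid, svc_date, is_ms in claims:
        if is_ms:
            by_patient.setdefault(pid, []).append(svc_date)
    cohort = set()
    for pid, dates in by_patient.items():
        dates.sort()
        # if the earliest and latest MS claims are >= min_gap_days
        # apart, some pair of claims satisfies the rule
        if len(dates) >= 2 and (dates[-1] - dates[0]).days >= min_gap_days:
            cohort.add(pid)
    return cohort

claims = [
    ("A", date(2012, 1, 5), True),
    ("A", date(2012, 3, 1), True),   # 56 days after the first claim
    ("B", date(2012, 1, 5), True),
    ("B", date(2012, 1, 20), True),  # only 15 days apart: excluded
    ("C", date(2012, 2, 1), False),  # no MS diagnosis at all
]
print(select_ms_cohort(claims))  # {'A'}
```

    In a real claims study the diagnosis flag would come from ICD codes and the continuous-enrollment restriction noted in the abstract would be applied before this filter.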

  10. Federated or cached searches: Providing expected performance from multiple invasive species databases

    Science.gov (United States)

    Graham, Jim; Jarnevich, Catherine S.; Simpson, Annie; Newman, Gregory J.; Stohlgren, Thomas J.

    2011-06-01

    Invasive species are a universal global problem, but the information to identify them, manage them, and prevent invasions is stored around the globe in a variety of formats. The Global Invasive Species Information Network is a consortium of organizations working toward providing seamless access to these disparate databases via the Internet. A distributed network of databases can be created using the Internet and a standard web service protocol. There are two options to provide this integration. First, federated searches are being proposed to allow users to search "deep" web documents such as databases for invasive species. A second method is to create a cache of data from the databases for searching. We compare these two methods, and show that federated searches will not provide the performance and flexibility required by users and that a central cache of the data is required to improve performance.
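
    The two integration options can be contrasted in a toy sketch: a federated search pays one round trip per provider on every query, while a cached search harvests the providers once and answers from a local index. The provider names, contents, and simulated latency are invented for illustration.

```python
import time

# three simulated provider databases (contents invented for the sketch)
SOURCES = {
    "db1": {"kudzu", "zebra mussel"},
    "db2": {"kudzu", "cane toad"},
    "db3": {"water hyacinth"},
}

def query_source(name, term, delay=0.01):
    time.sleep(delay)              # stand-in for one network round trip
    return term in SOURCES[name]

def federated_search(term):
    # one live round trip per provider on every user query
    return {name for name in SOURCES if query_source(name, term)}

def build_cache():
    # harvest every provider once, up front
    cache = {}
    for name, records in SOURCES.items():
        for rec in records:
            cache.setdefault(rec, set()).add(name)
    return cache

cache = build_cache()

def cached_search(term):
    # a single local lookup, independent of the number of providers
    return cache.get(term, set())

assert federated_search("kudzu") == cached_search("kudzu") == {"db1", "db2"}
```

    The trade-off the paper measures falls out of the structure: federated latency grows with the number (and slowest member) of the providers, while the cache answers in constant time but must be refreshed to stay current.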

  11. Federated or cached searches: Providing expected performance from multiple invasive species databases

    Institute of Scientific and Technical Information of China (English)

    Jim GRAHAM; Catherine S.JARNEVICH; Annie SIMPSON; Gregory J.NEWMAN; Thomas J.STOHLGREN

    2011-01-01

    Invasive species are a universal global problem, but the information to identify them, manage them, and prevent invasions is stored around the globe in a variety of formats. The Global Invasive Species Information Network is a consortium of organizations working toward providing seamless access to these disparate databases via the Internet. A distributed network of databases can be created using the Internet and a standard web service protocol. There are two options to provide this integration. First, federated searches are being proposed to allow users to search "deep" web documents such as databases for invasive species. A second method is to create a cache of data from the databases for searching. We compare these two methods, and show that federated searches will not provide the performance and flexibility required by users and that a central cache of the data is required to improve performance.

  12. Effect of multiple-source entry on price competition after patent expiration in the pharmaceutical industry.

    OpenAIRE

    Suh, D.C.; Manning, W G; Schondelmeyer, S; Hadsall, R S

    2000-01-01

    OBJECTIVE: To analyze the effect of multiple-source drug entry on price competition after patent expiration in the pharmaceutical industry. DATA SOURCES: Originators and their multiple-source drugs selected from the 35 chemical entities whose patents expired from 1984 through 1987. Data were obtained from various primary and secondary sources for the patents' expiration dates, sales volume and units sold, and characteristics of drugs in the sample markets. STUDY DESIGN: The study was designed...

  13. A Dynamic Solution for Document Redundancies in Multiple Sources

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    With the development of IT, more and more document resources are available over the Internet. In order to facilitate users' retrieval of digital documents, integration of the multi-source systems is necessary. Since the individual sources collect their information independently, the same papers may be stored in different source systems. The traditional solutions to the redundancy problems in distributed environments are usually based on the global catalogs which keep the redundancy information for the syst...

  14. Relationship between exposure to multiple noise sources and noise annoyance

    NARCIS (Netherlands)

    Miedema, H.M.E.

    2004-01-01

    Relationships between exposure to noise [metric: day-night level (DNL) or day-evening-night level (DENL)] from a single source (aircraft, road traffic, or railways) and annoyance based on a large international dataset have been published earlier. Also for stationary sources relationships have been

  15. A localization model to localize multiple sources using Bayesian inference

    Science.gov (United States)

    Dunham, Joshua Rolv

    Accurate localization of a sound source in a room setting is important in both psychoacoustics and architectural acoustics. Binaural models have been proposed to explain how the brain processes and utilizes the interaural time differences (ITDs) and interaural level differences (ILDs) of sound waves arriving at the ears of a listener in determining source location. Recent work shows that applying Bayesian methods to this problem is proving fruitful. In this thesis, pink noise samples are convolved with head-related transfer functions (HRTFs) and compared to combinations of one and two anechoic speech signals convolved with different HRTFs or binaural room impulse responses (BRIRs) to simulate room positions. Through exhaustive calculation of Bayesian posterior probabilities and using a maximum likelihood approach, model selection will determine the number of sources present, and parameter estimation will result in the azimuthal direction of the source(s).
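
    The model-selection step can be illustrated with a heavily simplified one-dimensional stand-in for the thesis's HRTF-based model: observations are noisy azimuth cues, and the marginal likelihood (evidence) of a one-source model is compared with that of a two-source model over a uniform azimuth grid. All numbers are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 5.0                      # cue noise (degrees), assumed known
grid = np.arange(-90.0, 91.0, 2.0)

# synthetic cues from two sources at -40 and +40 degrees azimuth
data = np.concatenate([rng.normal(-40, sigma, 40),
                       rng.normal(40, sigma, 40)])

def log_lik_one(theta):
    # all cues attributed to a single source at azimuth theta
    return np.sum(-0.5 * ((data - theta) / sigma) ** 2
                  - np.log(sigma * np.sqrt(2 * np.pi)))

def log_lik_two(t1, t2):
    # equal-weight two-source mixture likelihood
    a = -0.5 * ((data[:, None] - np.array([t1, t2])) / sigma) ** 2
    comp = np.exp(a) / (sigma * np.sqrt(2 * np.pi))
    return np.sum(np.log(0.5 * comp.sum(axis=1)))

def log_evidence(log_liks):
    # marginalize over a uniform grid prior (log-sum-exp for stability)
    m = max(log_liks)
    return m + np.log(sum(np.exp(l - m) for l in log_liks)) - np.log(len(log_liks))

ev1 = log_evidence([log_lik_one(t) for t in grid])
ev2 = log_evidence([log_lik_two(t1, t2)
                    for t1 in grid for t2 in grid if t1 < t2])
print(ev2 > ev1)
```

    With well-separated sources the two-source evidence dominates despite the larger parameter space, which is the Occam-factor behaviour that lets Bayesian model selection infer the number of sources.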

  16. Understanding genetic toxicity through data mining: the process of building knowledge by integrating multiple genetic toxicity databases.

    Science.gov (United States)

    Yang, C; Hasselgren, C H; Boyer, S; Arvidson, K; Aveston, S; Dierkes, P; Benigni, R; Benz, R D; Contrera, J; Kruhlak, N L; Matthews, E J; Han, X; Jaworska, J; Kemper, R A; Rathman, J F; Richard, A M

    2008-01-01

    Genetic toxicity data from various sources were integrated into a rigorously designed database using the ToxML schema. The public database sources include the U.S. Food and Drug Administration (FDA) submission data from approved new drug applications, food contact notifications, generally recognized as safe food ingredients, and chemicals from the NTP and CCRIS databases. The data from public sources were then combined with data from private industry according to ToxML criteria. The resulting "integrated" database, enriched in pharmaceuticals, was used for data mining analysis. Structural features describing the database were used to differentiate the chemical spaces of drugs/candidates, food ingredients, and industrial chemicals. In general, structures for drugs/candidates and food ingredients are associated with lower frequencies of mutagenicity and clastogenicity, whereas industrial chemicals as a group contain a much higher proportion of positives. Structural features were selected to analyze endpoint outcomes of the genetic toxicity studies. Although most of the well-known genotoxic carcinogenic alerts were identified, some discrepancies from the classic Ashby-Tennant alerts were observed. Using these influential features as the independent variables, the results of four types of genotoxicity studies were correlated. High Pearson correlations were found between the results of Salmonella mutagenicity and mouse lymphoma assay testing as well as those from in vitro chromosome aberration studies. This paper demonstrates the usefulness of representing a chemical by its structural features and the use of these features to profile a battery of tests rather than relying on a single toxicity test of a given chemical. This paper presents data mining/profiling methods applied in a weight-of-evidence approach to assess potential for genetic toxicity, and to guide the development of intelligent testing strategies.

  17. Modeling water demand when households have multiple sources of water

    Science.gov (United States)

    Coulibaly, Lassina; Jakus, Paul M.; Keith, John E.

    2014-07-01

    A significant portion of the world's population lives in areas where public water delivery systems are unreliable and/or deliver poor quality water. In response, people have developed important alternatives to publicly supplied water. To date, most water demand research has been based on single-equation models for a single source of water, with very few studies that have examined water demand from two sources of water (where all nonpublic system water sources have been aggregated into a single demand). This modeling approach leads to two outcomes. First, the demand models do not capture the full range of alternatives, so the true economic relationship among the alternatives is obscured. Second, and more seriously, economic theory predicts that demand for a good becomes more price-elastic as the number of close substitutes increases. If researchers artificially limit the number of alternatives studied to something less than the true number, the price elasticity estimate may be biased downward. This paper examines water demand in a region with near universal access to piped water, but where system reliability and quality is such that many alternative sources of water exist. In extending the demand analysis to four sources of water, we are able to (i) demonstrate why households choose the water sources they do, (ii) provide a richer description of the demand relationships among sources, and (iii) calculate own-price elasticity estimates that are more elastic than those generally found in the literature.

  18. Automated 3D Scene Reconstruction from Open Geospatial Data Sources: Airborne Laser Scanning and a 2D Topographic Database

    OpenAIRE

    Lingli Zhu; Matti Lehtomäki; Juha Hyyppä; Eetu Puttonen; Anssi Krooks; Hannu Hyyppä

    2015-01-01

    Open geospatial data sources provide opportunities for low-cost 3D scene reconstruction. In this study, based on a sparse airborne laser scanning (ALS) point cloud (0.8 points/m²) obtained from open source databases, a building reconstruction pipeline for CAD building models was developed. The pipeline includes voxel-based roof patch segmentation, extraction of the key-points representing the roof patch outline, step edge identification and adjustment, and CAD building model generation. The a...

  19. 75 FR 69591 - Medicaid Program; Withdrawal of Determination of Average Manufacturer Price, Multiple Source Drug...

    Science.gov (United States)

    2010-11-15

    ... of Average Manufacturer Price, Multiple Source Drug Definition, and Upper Limits for Multiple Source....502 ``Definitions'' was intended to apply to both AMP and best price calculations. While the... Determination of Best Price (Sec. 447.505). Therefore, we see no need to withdraw the definition of bona fide...

  20. Metasurface Cloak Performance Near-by Multiple Line Sources and PEC Cylindrical Objects

    DEFF Research Database (Denmark)

    Arslanagic, Samel; Yatman, William H.; Pehrson, Signe

    2014-01-01

    The performance/robustness of metasurface cloaks to a complex field environment which may represent a realistic scenario of radiating sources is presently reported. Attention is devoted to the cloak operation near-by multiple line sources and multiple perfectly electrically conducting cylinders...

  1. Development of an enterprise-wide clinical data repository: merging multiple legacy databases.

    Science.gov (United States)

    Scully, K W; Pates, R D; Desper, G S; Connors, A F; Harrell, F E; Pieper, K S; Hannan, R L; Reynolds, R E

    1997-01-01

    We describe the development of a clinical data repository whose core consists of four years of inpatient administrative and billing data from the mainframe legacy systems of the University of Virginia Health System (UVAHS). To these data we have linked a cardiac surgery clinical database and our physician billing data (inpatient and outpatient). Other databases will be merged in the future. A relational database management system (Sybase) running on a dedicated IBM RS/6000 minicomputer was employed to assemble 2.5 Gigabytes of core data describing approximately 100,000 hospital admissions over the four year period. To enable convenient data queries, the system has been equipped with a custom-built WWW user interface, which generates Structured Query Language (SQL) automatically. We illustrate the rapid reporting capabilities of the resulting system with reference to patients undergoing coronary artery bypass graft surgery (CABG). We conclude that this information system: a) constitutes a convenient and low-cost method to increase data availability across the UVAHS; b) provides clinicians with a tool for surveillance of patient care and outcomes; c) forms the core of a comprehensive database from which clinical research may proceed; d) provides a flexible interface empowering a wide variety of clinical departments to share and enrich their own clinical data.
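
    The abstract notes that the repository's web interface generates SQL automatically from user queries. Below is a minimal sketch of such form-to-SQL generation using parameter placeholders; the table and column names (and the CABG-style filter) are hypothetical, not the actual UVAHS schema.

```python
def build_query(table, filters, columns=("*",)):
    """Build a parameterized SQL SELECT from web-form filters, in the
    spirit of the repository's auto-generated SQL. `filters` is a list
    of (column, operator, value) triples from the form."""
    allowed_ops = {"=", "<", ">", "<=", ">=", "LIKE"}
    where, params = [], []
    for column, op, value in filters:
        if op not in allowed_ops:            # whitelist the operators
            raise ValueError(f"operator not allowed: {op}")
        where.append(f"{column} {op} ?")     # placeholder, not the value
        params.append(value)
    sql = f"SELECT {', '.join(columns)} FROM {table}"
    if where:
        sql += " WHERE " + " AND ".join(where)
    return sql, params

# hypothetical admissions table and filter for long-stay CABG patients
sql, params = build_query(
    "admissions",
    [("drg_code", "=", "106"), ("los_days", ">", 7)],
    columns=("patient_id", "admit_date"),
)
print(sql)
print(params)
```

    Keeping the values in a separate parameter list (rather than splicing them into the string) is what lets the generated SQL be executed safely by a database driver.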

  2. Using Advice from Multiple Sources to Revise and Improve Judgments

    Science.gov (United States)

    Yaniv, Ilan; Milyavsky, Maxim

    2007-01-01

    How might people revise their opinions on the basis of multiple pieces of advice? What sort of gains could be obtained from rules for using advice? In the present studies judges first provided their initial estimates for a series of questions; next they were presented with several (2, 4, or 8) opinions from an ecological pool of advisory estimates…

  3. Evaluation of the Legibility for Characters Composed of Multiple Point Sources in Fog

    Science.gov (United States)

    Tsukada, Yuki; Toyofuku, Yoshinori; Aoki, Yoshiro

    The luminance conditions under which characters composed of multiple point sources are as legible in fog as a character having a uniformly luminous surface were investigated, in order to make the use of variable-message signs practical at airports. As a result, it was found that the thicker the fog or the higher the illuminance, the better the legibility of the point-source characters becomes compared with the uniformly luminous surface characters. It is supposed that the ease of extracting each individual point source makes the characters composed of multiple point sources more legible even if their luminance is low. The results therefore show that if the conventional luminance standard is applied to the average luminance of a character composed of multiple point sources, such a character can be recognized without any degradation in legibility.

  4. Multiple sources and multiple measures based traffic flow prediction using the chaos theory and support vector regression method

    Science.gov (United States)

    Cheng, Anyu; Jiang, Xiao; Li, Yongfu; Zhang, Chao; Zhu, Hao

    2017-01-01

    This study proposes a multiple-source, multiple-measure traffic flow prediction algorithm using chaos theory and the support vector regression method. In particular, first, the chaotic characteristics of traffic flow associated with the speed, occupancy, and flow are identified using the maximum Lyapunov exponent. Then, the phase spaces of the multiple-measure chaotic time series are reconstructed based on phase space reconstruction theory and fused into the same multi-dimensional phase space using Bayesian estimation theory. In addition, the support vector regression (SVR) model is designed to predict the traffic flow. Numerical experiments are performed using data from multiple sources. The results show that, compared with a single measure, the proposed method has better performance for short-term traffic flow prediction in terms of accuracy and timeliness.
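
    The reconstruct-then-predict pipeline can be sketched as follows. The time-delay embedding implements the phase-space reconstruction step; for brevity a nearest-neighbour regressor stands in for the SVR model, and the 'traffic flow' series is synthetic, so this is an illustration of the structure rather than the paper's method.

```python
import numpy as np

def time_delay_embed(series, dim, tau):
    """Phase-space reconstruction: map a scalar series into
    dim-dimensional delay vectors (s_i, s_{i+tau}, ..., s_{i+(dim-1)tau})."""
    n = len(series) - (dim - 1) * tau
    return np.array([[series[i + j * tau] for j in range(dim)]
                     for i in range(n)])

def knn_predict(train_x, train_y, query, k=3):
    # nearest-neighbour regression as a simple stand-in for SVR
    d = np.linalg.norm(train_x - query, axis=1)
    return train_y[np.argsort(d)[:k]].mean()

# synthetic 'traffic flow' with daily periodicity plus noise
rng = np.random.default_rng(1)
t = np.arange(600)
flow = 50 + 20 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 1, len(t))

dim, tau = 4, 3
X = time_delay_embed(flow, dim, tau)
y = flow[(dim - 1) * tau + 1:]          # one-step-ahead targets
X = X[:-1]                              # align inputs with targets
pred = knn_predict(X[:500], y[:500], X[520])
print(round(pred, 1), round(y[520], 1))
```

    In the paper's multi-measure setting, the embeddings of speed, occupancy, and flow would be fused into one joint phase space before the regression step; here a single measure is shown for clarity.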

  5. Construction of a nasopharyngeal carcinoma 2D/MS repository with Open Source XML Database – Xindice

    Directory of Open Access Journals (Sweden)

    Li Jianling

    2006-01-01

    Full Text Available Abstract Background Many proteomics initiatives require integration of all information with uniform criteria from collection of samples and data display to publication of experimental results. The integration and exchange of these data of different formats and structure imposes a great challenge to us. The XML technology presents a promise in handling this task due to its simplicity and flexibility. Nasopharyngeal carcinoma (NPC) is one of the most common cancers in southern China and Southeast Asia, which has marked geographic and racial differences in incidence. Although there are some cancer proteome databases now, there is still no NPC proteome database. Results The raw NPC proteome experiment data were captured into one XML document with the Human Proteome Markup Language (HUP-ML) editor and imported into the native XML database Xindice. The 2D/MS repository of the NPC proteome was constructed with Apache, PHP and Xindice to provide access to the database via the Internet. On our website, two methods, keyword query and click query, were provided at the same time to access the entries of the NPC proteome database. Conclusion Our 2D/MS repository can be used to share the raw NPC proteomics data that are generated from gel-based proteomics experiments. The database, as well as the PHP source codes for constructing users' own proteome repository, can be accessed at http://www.xyproteomics.org/.

  6. Multiple sources of soluble atmospheric iron to Antarctic waters

    Science.gov (United States)

    Winton, V. H. L.; Edwards, R.; Delmonte, B.; Ellis, A.; Andersson, P. S.; Bowie, A.; Bertler, N. A. N.; Neff, P.; Tuohy, A.

    2016-03-01

    The Ross Sea, Antarctica, is a highly productive region of the Southern Ocean. Significant new sources of iron (Fe) are required to sustain phytoplankton blooms in the austral summer. Atmospheric deposition is one potential source. The fractional solubility of Fe is an important variable determining Fe availability for biological uptake. To constrain aerosol Fe inputs to the Ross Sea region, fractional solubility of Fe was analyzed in a snow pit from Roosevelt Island, eastern Ross Sea. In addition, aluminum, dust, and refractory black carbon (rBC) concentrations were analyzed, to determine the contribution of mineral dust and combustion sources to the supply of aerosol Fe. We estimate an exceptionally high dissolved Fe (dFe) flux of 1.2 × 10⁻⁶ g m⁻² y⁻¹ and a total dissolvable Fe flux of 140 × 10⁻⁶ g m⁻² y⁻¹ for 2011/2012. Deposition of dust, Fe, Al, and rBC occurs primarily during spring-summer. The observed background fractional Fe solubility of ~0.7% is consistent with a mineral dust source. Radiogenic isotopic ratios and the particle size distribution of dust indicate that the site is influenced by local and remote sources. In the 2011/2012 summer, relatively high dFe concentrations paralleled both mineral dust and rBC deposition. Around half of the annual aerosol Fe deposition occurred in the austral summer phytoplankton growth season; however, the fractional Fe solubility was low. Our results suggest that the seasonality of dFe deposition can vary and should be considered on longer glacial-interglacial timescales.

  7. Multiple Chemical Sources Localization Using Virtual Physics-Based Robots with Release Strategy

    Directory of Open Access Journals (Sweden)

    Yuli Zhang

    2015-01-01

    Full Text Available This paper presents a novel method of simultaneously locating chemical sources by a virtual physics-based multirobot system with a release strategy. The proposed release strategy includes setting a forbidden area, releasing the robots from declared sources, and escaping from them by a rotary force and a goal force. This strategy can prevent the robots from relocating the same source which has been located by other robots and leads them to move toward other sources. Various turbulent plume environments are simulated with the Fluent and Gambit software, and a set of simulations is performed on different scenarios using a group of six robots or parallel search by multiple groups of robots to validate the proposed methodology. The experimental results show that the release strategy can be successfully used to find multiple chemical sources, even when multiple plumes overlap. It can also extend the operation of many chemical source localization algorithms developed for single source localization.
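
    The escape mechanics of the release strategy can be sketched in two dimensions: a unit goal force attracts the robot toward a target, and a rotary (tangential) force is added whenever the robot is inside a declared source's forbidden area, steering it around rather than back into the already-located source. The gains and radii below are invented for illustration.

```python
import math

def escape_force(robot, declared_source, goal, forbidden_radius=2.0,
                 k_goal=1.0, k_rot=1.5):
    """Combine a goal-attraction force with a rotary force that pushes
    the robot around (not through) a declared source's forbidden area;
    returns the resultant force vector (fx, fy)."""
    gx, gy = goal[0] - robot[0], goal[1] - robot[1]
    g = math.hypot(gx, gy)
    fx, fy = k_goal * gx / g, k_goal * gy / g   # unit goal force
    dx, dy = robot[0] - declared_source[0], robot[1] - declared_source[1]
    d = math.hypot(dx, dy)
    if d < forbidden_radius:
        # rotary force: perpendicular to the source-to-robot direction
        fx += k_rot * (-dy / d)
        fy += k_rot * (dx / d)
    return fx, fy

# robot inside the forbidden area of an already-declared source
f = escape_force(robot=(1.0, 0.0), declared_source=(0.0, 0.0),
                 goal=(10.0, 0.0))
print(f)
```

    Because the rotary component is perpendicular to the direction back to the declared source, it deflects the trajectory sideways while the goal force keeps pulling the robot toward unexplored plumes.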

  8. Distributed Joint Source-Channel Coding on a Multiple Access Channel with Side Information

    CERN Document Server

    Rajesh, R

    2008-01-01

    We consider the problem of transmission of several distributed sources over a multiple access channel (MAC) with side information at the sources and the decoder. Source-channel separation does not hold for this channel. Sufficient conditions are provided for transmission of sources with a given distortion. The source and/or the channel could have continuous alphabets (thus Gaussian sources and Gaussian MACs are special cases). Various previous results are obtained as special cases. We also provide several good joint source-channel coding schemes for a discrete/continuous source and discrete/continuous alphabet channel. Channels with feedback and fading are also considered. Keywords: Multiple access channel, side information, lossy joint source-channel coding, channels with feedback, fading channels.

  9. Multiple-source multiple-harmonic active vibration control of variable section cylindrical structures: A numerical study

    Science.gov (United States)

    Liu, Jinxin; Chen, Xuefeng; Gao, Jiawei; Zhang, Xingwu

    2016-12-01

    Air vehicles, space vehicles and underwater vehicles, the cabins of which can be viewed as variable section cylindrical structures, have multiple rotational vibration sources (e.g., engines, propellers, compressors and motors), making the spectrum of noise multiple-harmonic. The suppression of such noise has been a focus of interest in the field of active vibration control (AVC). In this paper, a multiple-source multiple-harmonic (MSMH) active vibration suppression algorithm with a feed-forward structure is proposed based on reference amplitude rectification and the conjugate gradient method (CGM). An AVC simulation scheme called finite element model in-loop simulation (FEMILS) is also proposed for rapid algorithm verification. Numerical studies of AVC are conducted on a variable section cylindrical structure based on the proposed MSMH algorithm and FEMILS scheme. It can be seen from the numerical studies that: (1) the proposed MSMH algorithm can individually suppress each component of the multiple-harmonic noise with a unified and improved convergence rate; (2) the FEMILS scheme is convenient and straightforward for multiple-source simulations with an acceptable loop time. Moreover, the simulations have a similar procedure to real-life control and can be easily extended to a physical model platform.

  10. MAXIMUM LIKELIHOOD SOURCE SEPARATION FOR FINITE IMPULSE RESPONSE MULTIPLE INPUT-MULTIPLE OUTPUT CHANNELS IN THE PRESENCE OF ADDITIVE NOISE

    Institute of Scientific and Technical Information of China (English)

    Kazi Takpaya; Wei Gang

    2003-01-01

    Blind identification-blind equalization for Finite Impulse Response (FIR) Multiple Input-Multiple Output (MIMO) channels can be reformulated as the problem of blind source separation. It has been shown that the blind identification via decorrelating sub-channels method could recover the input sources. The Blind Identification via Decorrelating Sub-channels (BIDS) algorithm first constructs a set of decorrelators, which decorrelate the output signals of sub-channels, then estimates the channel matrix using the transfer functions of the decorrelators, and finally recovers the input signal using the estimated channel matrix. In this paper, a new approximation of the input source for FIR-MIMO channels based on the maximum likelihood source separation method is proposed. The proposed method outperforms BIDS in the presence of additive white Gaussian noise.

  11. MAXIMUM LIKELIHOOD SOURCE SEPARATION FOR FINITE IMPULSE RESPONSE MULTIPLE INPUT-MULTIPLE OUTPUT CHANNELS IN THE PRESENCE OF ADDITIVE NOISE

    Institute of Scientific and Technical Information of China (English)

    Kazi Takpaya; Wei Gang

    2003-01-01

    Blind identification-blind equalization for Finite Impulse Response (FIR) Multiple Input-Multiple Output (MIMO) channels can be reformulated as the problem of blind source separation. It has been shown that the blind identification via decorrelating sub-channels method could recover the input sources. The Blind Identification via Decorrelating Sub-channels (BIDS) algorithm first constructs a set of decorrelators, which decorrelate the output signals of sub-channels, then estimates the channel matrix using the transfer functions of the decorrelators, and finally recovers the input signal using the estimated channel matrix. In this paper, a new approximation of the input source for FIR-MIMO channels based on the maximum likelihood source separation method is proposed. The proposed method outperforms BIDS in the presence of additive white Gaussian noise.

  12. Combining Multiple Knowledge Sources for Continuous Speech Recognition

    Science.gov (United States)

    1989-08-01

    phonetic, phonological, and grammatical knowledge. The complete system, called BYBLOS, has been shown to achieve the highest recognition accuracy to... speech which is capable of incorporating knowledge from several sources, including lexical, phonetic, phonological, and grammatical knowledge. The... accuracy. Some of these include: a phonetic lexicon specifying the most likely pronunciations for each word, extended by a set of phonological rules

  13. Shape reconstruction of irregular bodies with multiple complementary data sources

    Science.gov (United States)

    Kaasalainen, M.; Viikinkoski, M.; Carry, B.; Durech, J.; Lamy, P.; Jorda, L.; Marchis, F.; Hestroffer, D.

    2011-10-01

    Irregularly shaped bodies with at most partial in situ data are a particular challenge for shape reconstruction and mapping. We have created an inversion algorithm and software package for complementary data sources, with which it is possible to create shape and spin models with feature details even when only groundbased data are available. The procedure uses photometry, adaptive optics or other images, occultation timings, and interferometry as main data sources, and we are extending it to include range-Doppler radar and thermal infrared data as well. The data sources are described as generalized projections in various observable spaces [2], which allows their uniform handling with essentially the same techniques, making the addition of new data sources inexpensive in terms of computation time or software development. We present a generally applicable shape support that can be automatically used for all surface types, including strongly nonconvex or non-starlike shapes. New models of Kleopatra (from photometry, adaptive optics, and interferometry) and Hermione are examples of this approach. When using adaptive optics images, the main information is extracted from the limb and terminator contours, which can be determined much more accurately than the image pixel brightnesses that inevitably contain large errors for most targets. We have shown that the contours yield a wealth of information independent of the scattering properties of the surface [3]. Their use also facilitates a very fast and robustly converging algorithm. An important concept in the inversion is the optimal weighting of the various data modes. We have developed a mathematically rigorous scheme for this purpose. The resulting maximum compatibility estimate [3], a multimodal generalization of the maximum likelihood estimate, ensures that the actual information content of each source is properly taken into account, and that the resolution scale of the ensuing model can be reliably estimated.

  14. Multiple sources of boron in urban surface waters and groundwaters

    Energy Technology Data Exchange (ETDEWEB)

    Hasenmueller, Elizabeth A., E-mail: eahasenm@wustl.edu; Criss, Robert E.

    2013-03-01

    Previous studies attribute abnormal boron (B) levels in streams and groundwaters to wastewater and fertilizer inputs. This study shows that municipal drinking water used for lawn irrigation contributes substantial non-point loads of B and other chemicals (S-species, Li, and Cu) to surface waters and shallow groundwaters in the St. Louis, Missouri, area. Background levels and potential B sources were characterized by analysis of lawn and street runoff, streams, rivers, springs, local rainfall, wastewater influent and effluent, and fertilizers. Urban surface waters and groundwaters are highly enriched in B (to 250 μg/L) compared to background levels found in rain and pristine, carbonate-hosted streams and springs (< 25 μg/L), but have similar concentrations (150 to 259 μg/L) compared to municipal drinking waters derived from the Missouri River. Other data including B/SO{sub 4}{sup 2-}−S and B/Li ratios confirm major contributions from this source. Moreover, sequential samples of runoff collected during storms show that B concentrations decrease with increased discharge, proving that elevated B levels are not primarily derived from combined sewer overflows (CSOs) during flooding. Instead, non-point source B exhibits complex behavior depending on land use. In urban settings B is rapidly mobilized from lawns during “first flush” events, likely representing surficial salt residues from drinking water used to irrigate lawns, and is also associated with the baseflow fraction, likely derived from the shallow groundwater reservoir that over time accumulates B from drinking water that percolates into the subsurface. The opposite occurs in small rural watersheds, where B is leached from soils by recent rainfall and covaries with the event water fraction. Highlights: ► Boron sources and loads differ between urban and rural watersheds. ► Wastewaters are not the major boron source in small St. Louis, MO watersheds. ► Municipal drinking water used for lawn

  15. A multiple-source consecutive localization algorithm based on quantized measurement for wireless sensor network

    Science.gov (United States)

    Chu, Hao; Wu, Chengdong

    2016-10-01

    Source localization based on wireless sensor networks has attracted considerable attention in recent years. However, most previous works focus on accurate measurement or single-source localization, while multiple-source localization has extensive application prospects in many fields. Quantized measurement is a low-cost, low-energy-consumption solution for wireless sensor networks. In this paper, we present a novel multiple-source consecutive localization algorithm using quantized measurements. We first introduce the multiple acoustic sources model and the quantized measurement method. The maximum likelihood method is then used to establish the localization function, and particle swarm optimization is employed to estimate the initial position of each source. Finally, a Kalman filter is used to mitigate the random process noise. Simulation results show that the proposed method achieves high localization accuracy.
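The core of such an approach can be sketched in a few lines: each sensor reports only the quantization bin of its energy reading, and a maximum-likelihood search over candidate positions scores how well each position's predicted energies explain the observed bins. The energy-decay model, thresholds, and brute-force grid search below are illustrative assumptions, not the authors' implementation (which adds particle swarm initialization and Kalman filtering):

```python
import math

def quantize(value, thresholds):
    """Map a raw energy reading to the index of its quantization bin."""
    for i, t in enumerate(thresholds):
        if value < t:
            return i
    return len(thresholds)

def bin_log_likelihood(mu, sigma, bin_idx, thresholds):
    """Log-probability that a N(mu, sigma) reading falls in bin bin_idx."""
    def cdf(x):
        return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))
    lo = cdf(thresholds[bin_idx - 1]) if bin_idx > 0 else 0.0
    hi = cdf(thresholds[bin_idx]) if bin_idx < len(thresholds) else 1.0
    return math.log(max(hi - lo, 1e-300))

def locate(sensors, bins, thresholds, amplitude, sigma=0.05, step=0.25, size=10.0):
    """Grid-search ML estimate of one source position from quantized readings."""
    best, best_ll = None, -math.inf
    n = int(size / step) + 1
    for ix in range(n):
        for iy in range(n):
            x, y = ix * step, iy * step
            ll = 0.0
            for (sx, sy), b in zip(sensors, bins):
                d2 = (x - sx) ** 2 + (y - sy) ** 2 + 1.0  # +1 avoids the singularity at d = 0
                mu = amplitude / d2                        # inverse-square energy decay (assumed)
                ll += bin_log_likelihood(mu, sigma, b, thresholds)
            if ll > best_ll:
                best, best_ll = (x, y), ll
    return best

# Five sensors observe a source at (4, 6) with amplitude 10; each reports only a bin index.
sensors = [(0, 0), (10, 0), (0, 10), (10, 10), (5, 0)]
thresholds = [0.05, 0.1, 0.2, 0.4, 0.8, 1.6]
true = (4.0, 6.0)
bins = [quantize(10.0 / ((true[0] - sx) ** 2 + (true[1] - sy) ** 2 + 1.0), thresholds)
        for sx, sy in sensors]
est = locate(sensors, bins, thresholds, amplitude=10.0)
```

Even with only a handful of bins per sensor, the intersection of the bin-consistent annuli pins the source down to within a grid cell or two of its true position.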

  16. A robust poverty profile for Brazil using multiple data sources

    Directory of Open Access Journals (Sweden)

    Ferreira Francisco H. G.

    2003-01-01

    Full Text Available This paper presents a poverty profile for Brazil, based on three different sources of household data for 1996. We use PPV consumption data to estimate poverty and indigence lines. ''Contagem'' data is used to allow for an unprecedented refinement of the country's poverty map. Poverty measures and shares are also presented for a wide range of population subgroups, based on the PNAD 1996, with new adjustments for imputed rents and spatial differences in cost of living. Robustness of the profile is verified with respect to different poverty lines, spatial price deflators, and equivalence scales. Overall poverty incidence ranges from 23% with respect to an indigence line to 45% with respect to a more generous poverty line. More importantly, however, poverty is found to vary significantly across regions and city sizes, with rural areas, small and medium towns and the metropolitan peripheries of the North and Northeast regions being poorest.

  17. Data integration and knowledge discovery in biomedical databases. Reliable information from unreliable sources

    Directory of Open Access Journals (Sweden)

    A Mitnitski

    2003-01-01

    Full Text Available To better understand information about human health from databases we analyzed three datasets collected for different purposes in Canada: a biomedical database of older adults, a large population survey across all adult ages, and vital statistics. Redundancy in the variables was established, and this led us to derive a generalized (macroscopic) state variable, a fitness/frailty index that reflects both individual and group health status. Evaluation of the relationship between fitness/frailty and the mortality rate revealed that the latter could be expressed in terms of variables generally available from any cross-sectional database. In practical terms, this means that the risk of mortality might readily be assessed from standard biomedical appraisals collected for other purposes.

  18. Automated 3D Scene Reconstruction from Open Geospatial Data Sources: Airborne Laser Scanning and a 2D Topographic Database

    Directory of Open Access Journals (Sweden)

    Lingli Zhu

    2015-05-01

    Full Text Available Open geospatial data sources provide opportunities for low-cost 3D scene reconstruction. In this study, based on a sparse airborne laser scanning (ALS) point cloud (0.8 points/m²) obtained from open source databases, a building reconstruction pipeline for CAD building models was developed. The pipeline includes voxel-based roof patch segmentation, extraction of the key points representing the roof patch outline, step edge identification and adjustment, and CAD building model generation. The advantages of our method lie in generating CAD building models without enforcing the edges to be parallel or applying building regularization. Furthermore, although it has been challenging to use sparse datasets for 3D building reconstruction, our result demonstrates the great potential of such applications. In this paper, we also investigated the applicability of open geospatial datasets for 3D road detection and reconstruction. Road central lines were acquired from an open source 2D topographic database. ALS data were utilized to obtain the height and width of the road. A constrained search method (CSM) was developed for road width detection. The CSM method was conducted by splitting a given road into patches according to height and direction criteria. The road edges were detected patch by patch. The road width was determined by the average distance from the edge points to the central line. As a result, 3D roads were reconstructed from ALS data and a topographic database.
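The width computation described here, the average distance from detected edge points to the road central line, reduces to simple plane geometry. The sketch below assumes a straight centerline segment and already-classified edge points (both assumptions of this illustration, not of the paper's patch-wise CSM procedure):

```python
import math

def point_line_distance(p, a, b):
    """Perpendicular distance from point p to the infinite line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    num = abs((by - ay) * px - (bx - ax) * py + bx * ay - by * ax)
    return num / math.hypot(bx - ax, by - ay)

def road_width(left_edge, right_edge, center_a, center_b):
    """Sum of the mean offsets of the two detected road edges from the central line."""
    def mean_offset(points):
        return sum(point_line_distance(p, center_a, center_b) for p in points) / len(points)
    return mean_offset(left_edge) + mean_offset(right_edge)
```

For a centerline along the x-axis with edge points scattered around y = ±3 m, this returns a width of about 6 m, averaging out per-point detection noise.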

  19. Joint source-channel coding for a quantum multiple access channel

    Science.gov (United States)

    Wilde, Mark M.; Savov, Ivan

    2012-11-01

    Suppose that two senders each obtain one share of the output of a classical, bivariate, correlated information source. They would like to transmit the correlated source to a receiver using a quantum multiple access channel. In prior work, Cover, El Gamal and Salehi provided a combined source-channel coding strategy for a classical multiple access channel which outperforms the simpler ‘separation’ strategy where separate codebooks are used for the source coding and the channel coding tasks. In this paper, we prove that a coding strategy similar to the Cover-El Gamal-Salehi strategy and a corresponding quantum simultaneous decoder allow for the reliable transmission of a source over a quantum multiple access channel, as long as a set of information inequalities involving the Holevo quantity hold.

  20. Joint source-channel coding for a quantum multiple access channel

    CERN Document Server

    Wilde, Mark M

    2012-01-01

    Suppose that two senders each obtain one share of the output of a classical, bivariate, correlated information source. They would like to transmit the correlated source to a receiver using a quantum multiple access channel. In prior work, Cover, El Gamal, and Salehi provided a combined source-channel coding strategy for a classical multiple access channel which outperforms the simpler "separation" strategy where separate codebooks are used for the source coding and the channel coding tasks. In the present paper, we prove that a coding strategy similar to the Cover-El Gamal-Salehi strategy and a corresponding quantum simultaneous decoder allow for the reliable transmission of a source over a quantum multiple access channel, as long as a set of information inequalities involving the Holevo quantity hold.

  1. Genetic diversity and antimicrobial resistance of Escherichia coli from human and animal sources uncovers multiple resistances from human sources.

    Science.gov (United States)

    Ibekwe, A Mark; Murinda, Shelton E; Graves, Alexandria K

    2011-01-01

    Escherichia coli are widely used as indicators of fecal contamination, and in some cases to identify host sources of fecal contamination in surface water. Prevalence, genetic diversity and antimicrobial susceptibility were determined for 600 generic E. coli isolates obtained from surface water and sediment from creeks and channels along the middle Santa Ana River (MSAR) watershed of southern California, USA, after a 12 month study. Evaluation of E. coli populations along the creeks and channels showed that E. coli were more prevalent in sediment compared to surface water. E. coli populations were not significantly different (P = 0.05) between urban runoff sources and agricultural sources, however, E. coli genotypes determined by pulsed-field gel electrophoresis (PFGE) were less diverse in the agricultural sources than in urban runoff sources. PFGE also showed that E. coli populations in surface water were more diverse than in the sediment, suggesting isolates in sediment may be dominated by clonal populations.Twenty four percent (144 isolates) of the 600 isolates exhibited resistance to more than one antimicrobial agent. Most multiple resistances were associated with inputs from urban runoff and involved the antimicrobials rifampicin, tetracycline, and erythromycin. The occurrence of a greater number of E. coli with multiple antibiotic resistances from urban runoff sources than agricultural sources in this watershed provides useful evidence in planning strategies for water quality management and public health protection.

  2. Updating river bathymetry with multiple data sources using kriging

    Science.gov (United States)

    Jha, S. K.; Bailey, B.; Minsker, B. S.; Cash, R. W.; Best, J. L.

    2011-12-01

    Understanding spatially distributed bathymetry at a range of spatial scales is important to understanding river and sediment dynamics. Most river sand dunes are 10-100 m long, but man-made features such as pipes, groynes, and piers can be less than a meter wide. Therefore it is necessary to conduct high-resolution survey measurements to accurately capture the spatial variation in bed profile. With rapidly changing bathymetry in large rivers, detailed surveys must be done frequently to capture short- and long-term changes in the river bed, but this is challenging for manually intensive and expensive high-resolution surveys. In this paper, we propose the use of geostatistical models to update measurements from a periodic detailed survey, which is used as a baseline morphology, with less dense data collected from routine boat traffic equipped with less expensive sensors. Our study area is a six-kilometer reach of the Mississippi River. We obtain measurements of depth at different spatial and temporal resolutions from two types of data sources: detailed surveys using a multi-beam echosounder (MBES) bed profiler and routine depth data from two sensors installed on a boat making a single pass down the Mississippi River. The MBES measurements consist of latitude, longitude, and depth at a spatial resolution of 0.5 m × 0.5 m, collected during three surveys over a period of one year. These three surveys were conducted immediately after seasons when the river experiences large variations in bed bathymetry. While conducting the Survey 3 measurements, we also measured latitude, longitude, and depth once per minute (approximately every 140 m) along the boat route using two single-beam depth sensors. A four-step methodology was then developed to rapidly update the baseline morphology and provide a near-real-time estimate of the bathymetry: (i) use Survey 2 measurements to estimate the variance structure and develop a geostatistical model; (ii) use boat measurements during Survey 3 to
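Step (i) of a methodology like this, estimating the variance structure from survey data, typically begins with an empirical semivariogram. The following is a generic, stdlib-only illustration of that first step (the lag binning and the O(n²) pair loop are simplifications for small samples, not the authors' code):

```python
import math
from collections import defaultdict

def empirical_semivariogram(points, bin_width=1.0):
    """points: list of (x, y, depth). Returns {lag_bin_index: mean semivariance}.

    The semivariance of a pair is 0.5 * (z_i - z_j)^2; pairs are grouped into
    distance bins of width bin_width, and each bin's values are averaged.
    """
    sums = defaultdict(float)
    counts = defaultdict(int)
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            xi, yi, zi = points[i]
            xj, yj, zj = points[j]
            h = math.hypot(xi - xj, yi - yj)      # separation distance of the pair
            b = int(h // bin_width)               # which lag bin it falls into
            sums[b] += 0.5 * (zi - zj) ** 2
            counts[b] += 1
    return {b: sums[b] / counts[b] for b in sums}
```

Fitting a model (spherical, exponential, etc.) to these binned semivariances is what supplies the covariance structure that kriging then uses to weight the sparse boat measurements against the dense baseline survey.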

  3. The risk of fracture in patients with multiple sclerosis: The UK general practice research database

    DEFF Research Database (Denmark)

    Bazelier, Marloes T; van Staa, Tjeerd; Uitdehaag, Bernard Mj;

    2011-01-01

    Patients with multiple sclerosis (MS) may be at an increased risk of fracture owing to a greater risk of falling and decreased bone mineral density when compared with the general population. This study was designed to estimate the relative and absolute risk of fracture in patients with MS. We...... were used to derive adjusted hazard ratios (HRs) for fracture associated with MS. Time-dependent adjustments were made for age, comorbidity, and drug use. Absolute 5- and 10-year risks of fracture were estimated for MS patients as a function of age. Compared with controls, MS patients had an almost...

  4. BioZone: Exploiting Source-Capability Information for Integrated Access to Multiple Bioinformatics Data Sources

    Energy Technology Data Exchange (ETDEWEB)

    Liu, L; Buttler, D; Paques, H; Pu, C; Critchlow

    2002-01-28

    Modern bioinformatics data sources are widely used by molecular biologists for homology searching and new drug discovery. User-friendly and yet responsive access is one of the most desirable properties for integrated access to the rapidly growing, heterogeneous, and distributed collection of data sources. The increasing volume and diversity of digital information related to bioinformatics (such as genomes, protein sequences, protein structures, etc.) have led to a growing problem that conventional data management systems do not address, namely finding which information sources out of many candidate choices are the most relevant and most accessible to answer a given user query. We refer to this problem as the query routing problem. In this paper we introduce the notion and issues of query routing, and present a practical solution for designing a scalable query routing system based on multi-level progressive pruning strategies. The key idea is to create and maintain source-capability profiles independently, and to provide algorithms that can dynamically discover relevant information sources for a given query through the smart use of source profiles. Compared to the keyword-based indexing techniques adopted in most search engines and software, our approach offers fine-grained interest matching, and is thus more powerful and effective for handling queries with complex conditions.
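A minimal sketch of the idea, assuming a hypothetical two-field capability profile (a data domain plus a list of queryable attributes; the schema here is invented for illustration, and actual BioZone profiles are richer): candidate sources are pruned level by level until only those able to answer the query remain.

```python
def route(query, profiles):
    """Two-level progressive pruning of candidate sources against capability profiles."""
    # Level 1: prune sources whose data domain cannot match the query at all.
    candidates = {name: prof for name, prof in profiles.items()
                  if prof["domain"] == query["domain"]}
    # Level 2: keep only sources able to evaluate every query condition.
    return sorted(name for name, prof in candidates.items()
                  if set(query["conditions"]) <= set(prof["queryable"]))

# Hypothetical profiles for three sources; only one can answer the example query.
profiles = {
    "swissprot": {"domain": "protein", "queryable": ["id", "organism", "keyword"]},
    "pdb": {"domain": "protein", "queryable": ["id"]},
    "genbank": {"domain": "nucleotide", "queryable": ["id", "organism"]},
}
query = {"domain": "protein", "conditions": ["id", "organism"]}
```

Because profiles are matched on structured capabilities rather than keywords, a query with complex conditions is routed only to sources that can actually evaluate all of them.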

  5. Building a multi-scaled geospatial temporal ecology database from disparate data sources: fostering open science and data reuse.

    Science.gov (United States)

    Soranno, Patricia A; Bissell, Edward G; Cheruvelil, Kendra S; Christel, Samuel T; Collins, Sarah M; Fergus, C Emi; Filstrup, Christopher T; Lapierre, Jean-Francois; Lottig, Noah R; Oliver, Samantha K; Scott, Caren E; Smith, Nicole J; Stopyak, Scott; Yuan, Shuai; Bremigan, Mary Tate; Downing, John A; Gries, Corinna; Henry, Emily N; Skaff, Nick K; Stanley, Emily H; Stow, Craig A; Tan, Pang-Ning; Wagner, Tyler; Webster, Katherine E

    2015-01-01

    Although there are considerable site-based data for individual or groups of ecosystems, these datasets are widely scattered, have different data formats and conventions, and often have limited accessibility. At the broader scale, national datasets exist for a large number of geospatial features of land, water, and air that are needed to fully understand variation among these ecosystems. However, such datasets originate from different sources and have different spatial and temporal resolutions. By taking an open-science perspective and by combining site-based ecosystem datasets and national geospatial datasets, science gains the ability to ask important research questions related to grand environmental challenges that operate at broad scales. Documentation of such complicated database integration efforts, through peer-reviewed papers, is recommended to foster reproducibility and future use of the integrated database. Here, we describe the major steps, challenges, and considerations in building an integrated database of lake ecosystems, called LAGOS (LAke multi-scaled GeOSpatial and temporal database), that was developed at the sub-continental study extent of 17 US states (1,800,000 km(2)). LAGOS includes two modules: LAGOSGEO, with geospatial data on every lake with surface area larger than 4 ha in the study extent (~50,000 lakes), including climate, atmospheric deposition, land use/cover, hydrology, geology, and topography measured across a range of spatial and temporal extents; and LAGOSLIMNO, with lake water quality data compiled from ~100 individual datasets for a subset of lakes in the study extent (~10,000 lakes). Procedures for the integration of datasets included: creating a flexible database design; authoring and integrating metadata; documenting data provenance; quantifying spatial measures of geographic data; quality-controlling integrated and derived data; and extensively documenting the database. Our procedures make a large, complex, and integrated

  6. Building a multi-scaled geospatial temporal ecology database from disparate data sources: Fostering open science through data reuse

    Science.gov (United States)

    Soranno, Patricia A.; Bissell, E.G.; Cheruvelil, Kendra S.; Christel, Samuel T.; Collins, Sarah M.; Fergus, C. Emi; Filstrup, Christopher T.; Lapierre, Jean-Francois; Lotting, Noah R.; Oliver, Samantha K.; Scott, Caren E.; Smith, Nicole J.; Stopyak, Scott; Yuan, Shuai; Bremigan, Mary Tate; Downing, John A.; Gries, Corinna; Henry, Emily N.; Skaff, Nick K.; Stanley, Emily H.; Stow, Craig A.; Tan, Pang-Ning; Wagner, Tyler; Webster, Katherine E.

    2015-01-01

    Although there are considerable site-based data for individual or groups of ecosystems, these datasets are widely scattered, have different data formats and conventions, and often have limited accessibility. At the broader scale, national datasets exist for a large number of geospatial features of land, water, and air that are needed to fully understand variation among these ecosystems. However, such datasets originate from different sources and have different spatial and temporal resolutions. By taking an open-science perspective and by combining site-based ecosystem datasets and national geospatial datasets, science gains the ability to ask important research questions related to grand environmental challenges that operate at broad scales. Documentation of such complicated database integration efforts, through peer-reviewed papers, is recommended to foster reproducibility and future use of the integrated database. Here, we describe the major steps, challenges, and considerations in building an integrated database of lake ecosystems, called LAGOS (LAke multi-scaled GeOSpatial and temporal database), that was developed at the sub-continental study extent of 17 US states (1,800,000 km2). LAGOS includes two modules: LAGOSGEO, with geospatial data on every lake with surface area larger than 4 ha in the study extent (~50,000 lakes), including climate, atmospheric deposition, land use/cover, hydrology, geology, and topography measured across a range of spatial and temporal extents; and LAGOSLIMNO, with lake water quality data compiled from ~100 individual datasets for a subset of lakes in the study extent (~10,000 lakes). Procedures for the integration of datasets included: creating a flexible database design; authoring and integrating metadata; documenting data provenance; quantifying spatial measures of geographic data; quality-controlling integrated and derived data; and extensively documenting the database. Our procedures make a large, complex, and integrated

  7. Investigating sources and pathways of perfluoroalkyl acids (PFAAs) in aquifers in Tokyo using multiple tracers.

    Science.gov (United States)

    Kuroda, Keisuke; Murakami, Michio; Oguma, Kumiko; Takada, Hideshige; Takizawa, Satoshi

    2014-08-01

    We employed a multi-tracer approach to investigate sources and pathways of perfluoroalkyl acids (PFAAs) in urban groundwater, based on 53 groundwater samples taken from confined aquifers and unconfined aquifers in Tokyo. While the median concentrations of groundwater PFAAs were several ng/L, the maximum concentrations of perfluorooctane sulfonate (PFOS, 990 ng/L), perfluorooctanoate (PFOA, 1800 ng/L) and perfluorononanoate (PFNA, 620 ng/L) in groundwater were several times higher than those of wastewater and street runoff reported in the literature. PFAAs were more frequently detected than sewage tracers (carbamazepine and crotamiton), presumably owing to the higher persistence of PFAAs, the multiple sources of PFAAs beyond sewage (e.g., surface runoff, point sources) and the formation of PFAAs from their precursors. Use of multiple methods of source apportionment including principal component analysis-multiple linear regression (PCA-MLR) and perfluoroalkyl carboxylic acid ratio analysis highlighted sewage and point sources as the primary sources of PFAAs in the most severely polluted groundwater samples, with street runoff being a minor source (44.6% sewage, 45.7% point sources and 9.7% street runoff, by PCA-MLR). Tritium analysis indicated that, while young groundwater (recharged during or after the 1970s, when PFAAs were already in commercial use) in shallow aquifers (< 50 m depth) was naturally highly vulnerable to PFAA pollution, PFAAs were also found in old groundwater (recharged before the 1950s, when PFAAs were not in use) in deep aquifers (50-500 m depth). This study demonstrated the utility of multiple uses of tracers (pharmaceuticals and personal care products; PPCPs, tritium) and source apportionment methods in investigating sources and pathways of PFAAs in multiple aquifer systems.

  8. The Usefulness of Multilevel Hash Tables with Multiple Hash Functions in Large Databases

    Directory of Open Access Journals (Sweden)

    A.T. Akinwale

    2009-05-01

    Full Text Available In this work, an attempt is made to select three good hash functions which uniformly distribute hash values, permute their internal states, and allow the input bits to generate different output bits. These functions are used in different levels of hash tables coded in the Java programming language, and a sizeable number of data records serves as primary data for testing the performances. The result shows that a two-level hash table with three different hash functions gives superior performance over a one-level hash table with two hash functions or a zero-level hash table with one function, in terms of reducing key conflicts and enabling quick lookup of a particular element. The result also helps reduce the complexity of the join operation in a query language from O(n²) to O(1) by placing larger query results, if any, in multilevel hash tables with multiple hash functions, generating shorter query results.
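The complexity claim rests on a standard idea: materialize one relation in a hash table keyed on the join attribute, so each probe is an expected O(1) lookup, turning the O(n·m) nested-loop join into O(n + m). A single-level, single-hash-function sketch (the paper's multilevel tables with three hash functions go further):

```python
def hash_join(left, right, key):
    """Equi-join two lists of dict rows via a hash index on the join key."""
    # Build phase: index the left relation by its join-key value.
    index = {}
    for row in left:
        index.setdefault(row[key], []).append(row)
    # Probe phase: each right row looks up its matches in O(1) expected time.
    out = []
    for row in right:
        for match in index.get(row[key], []):
            out.append({**match, **row})  # merge the two matching rows
    return out
```

The nested-loop alternative would compare every left row with every right row; here each row is touched a constant number of times, which is the effect the multilevel hash tables aim for at a larger scale.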

  9. Investigating sources and pathways of perfluoroalkyl acids (PFAAs) in aquifers in Tokyo using multiple tracers

    Energy Technology Data Exchange (ETDEWEB)

    Kuroda, Keisuke, E-mail: keisukekr@gmail.com [Department of Urban Engineering, Graduate School of Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo, Tokyo 113-8656 (Japan); Murakami, Michio [Institute of Industrial Science, The University of Tokyo, 4-6-1 Komaba, Meguro, Tokyo 153-8505 (Japan); Oguma, Kumiko [Department of Urban Engineering, Graduate School of Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo, Tokyo 113-8656 (Japan); Takada, Hideshige [Laboratory of Organic Geochemistry (LOG), Institute of Symbiotic Science and Technology, Tokyo University of Agriculture and Technology, Fuchu, Tokyo 183-8509 (Japan); Takizawa, Satoshi [Department of Urban Engineering, Graduate School of Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo, Tokyo 113-8656 (Japan)

    2014-08-01

    We employed a multi-tracer approach to investigate sources and pathways of perfluoroalkyl acids (PFAAs) in urban groundwater, based on 53 groundwater samples taken from confined aquifers and unconfined aquifers in Tokyo. While the median concentrations of groundwater PFAAs were several ng/L, the maximum concentrations of perfluorooctane sulfonate (PFOS, 990 ng/L), perfluorooctanoate (PFOA, 1800 ng/L) and perfluorononanoate (PFNA, 620 ng/L) in groundwater were several times higher than those of wastewater and street runoff reported in the literature. PFAAs were more frequently detected than sewage tracers (carbamazepine and crotamiton), presumably owing to the higher persistence of PFAAs, the multiple sources of PFAAs beyond sewage (e.g., surface runoff, point sources) and the formation of PFAAs from their precursors. Use of multiple methods of source apportionment including principal component analysis–multiple linear regression (PCA–MLR) and perfluoroalkyl carboxylic acid ratio analysis highlighted sewage and point sources as the primary sources of PFAAs in the most severely polluted groundwater samples, with street runoff being a minor source (44.6% sewage, 45.7% point sources and 9.7% street runoff, by PCA–MLR). Tritium analysis indicated that, while young groundwater (recharged during or after the 1970s, when PFAAs were already in commercial use) in shallow aquifers (< 50 m depth) was naturally highly vulnerable to PFAA pollution, PFAAs were also found in old groundwater (recharged before the 1950s, when PFAAs were not in use) in deep aquifers (50–500 m depth). This study demonstrated the utility of multiple uses of tracers (pharmaceuticals and personal care products; PPCPs, tritium) and source apportionment methods in investigating sources and pathways of PFAAs in multiple aquifer systems. - Highlights: • Aquifers in Tokyo had high levels of perfluoroalkyl acids (up to 1800 ng/L). • PFAAs were more frequently detected than sewage

  10. Ignition probability of polymer-bonded explosives accounting for multiple sources of material stochasticity

    Energy Technology Data Exchange (ETDEWEB)

    Kim, S.; Barua, A.; Zhou, M., E-mail: min.zhou@me.gatech.edu [The George W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, Georgia 30332-0405 (United States); Horie, Y. [Air Force Research Lab, Munitions Directorate, 2306 Perimeter Road, Eglin AFB, Florida 32542 (United States)

    2014-05-07

    Accounting for the combined effect of multiple sources of stochasticity in material attributes, we develop an approach that computationally predicts the probability of ignition of polymer-bonded explosives (PBXs) under impact loading. The probabilistic nature of the specific ignition processes is assumed to arise from two sources of stochasticity. The first source involves random variations in material microstructural morphology; the second source involves random fluctuations in grain-binder interfacial bonding strength. The effect of the first source of stochasticity is analyzed with multiple sets of statistically similar microstructures and constant interfacial bonding strength. Subsequently, each of the microstructures in the multiple sets is assigned multiple instantiations of randomly varying grain-binder interfacial strengths to analyze the effect of the second source of stochasticity. Critical hotspot size-temperature states reaching the threshold for ignition are calculated through finite element simulations that explicitly account for microstructure and bulk and interfacial dissipation to quantify the time to criticality (t{sub c}) of individual samples, allowing the probability distribution of the time to criticality that results from each source of stochastic variation for a material to be analyzed. Two probability superposition models are considered to combine the effects of the multiple sources of stochasticity. The first is a parallel and series combination model, and the second is a nested probability function model. Results show that the nested Weibull distribution provides an accurate description of the combined ignition probability. The approach developed here represents a general framework for analyzing the stochasticity in the material behavior that arises out of multiple types of uncertainty associated with the structure, design, synthesis and processing of materials.
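The series part of the first superposition model follows the usual independence rule: if either stochastic mechanism alone would trigger ignition, the combined probability is the complement of neither triggering. The Weibull form and the toy parameters below are generic illustrations of those two building blocks, not the paper's fitted distributions:

```python
import math

def weibull_cdf(t, scale, shape):
    """Probability of ignition by time t under a Weibull(scale, shape) model."""
    return 1.0 - math.exp(-((t / scale) ** shape))

def series_combination(probs):
    """Ignition occurs if any independent source triggers it: 1 - prod(1 - p_i)."""
    survive = 1.0
    for p in probs:
        survive *= (1.0 - p)   # probability that this source does NOT trigger ignition
    return 1.0 - survive
```

For example, two independent sources each giving a 50% ignition probability combine to 75%, and feeding per-source Weibull CDFs evaluated at the same time into the combination rule yields the system-level ignition probability as a function of time.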

  11. Separation of Correlated Astrophysical Sources Using Multiple-Lag Data Covariance Matrices

    Directory of Open Access Journals (Sweden)

    Baccigalupi C

    2005-01-01

    Full Text Available This paper proposes a new strategy to separate astrophysical sources that are mutually correlated. This strategy is based on second-order statistics and exploits prior information about the possible structure of the mixing matrix. Unlike ICA blind separation approaches, where the sources are assumed mutually independent and no prior knowledge of the mixing matrix is available, our strategy allows the independence assumption to be relaxed and performs the separation of even significantly correlated sources. Besides the mixing matrix, our strategy is also capable of evaluating the source covariance functions at several lags. Moreover, once the mixing parameters have been identified, a simple deconvolution can be used to estimate the probability density functions of the source processes. To benchmark our algorithm, we used a database that simulates the one expected from the instruments that will operate onboard ESA's Planck Surveyor Satellite to measure the CMB anisotropies all over the celestial sphere.

  12. Next generation sequencing (NGS) database for tandem repeats with multiple pattern 2°-shaft multicore string matching

    Directory of Open Access Journals (Sweden)

    Chinta Someswara Rao

    2016-03-01

    Full Text Available Next generation sequencing (NGS) technologies have been rapidly applied in biomedical and biological research in recent years. To provide a comprehensive NGS resource for this research, in this paper we have considered 10 loci/codi/repeats: TAGA, TCAT, GAAT, AGAT, AGAA, GATA, TATC, CTTT, TCTG and TCTA. We then developed the NGS Tandem Repeat Database (TandemRepeatDB) for all the chromosomes of the Homo sapiens, Callithrix jacchus, Chlorocebus sabaeus, Gorilla gorilla, Macaca fascicularis, Macaca mulatta, Nomascus leucogenys, Pan troglodytes, Papio anubis and Pongo abelii genome data sets for all those loci. We find the successive occurrence frequency for all of the above 10 SSRs (simple sequence repeats) in these genome data sets on a chromosome-by-chromosome basis with multiple pattern 2°-shaft multicore string matching.
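Counting the successive occurrence frequency of an SSR motif amounts to scanning a sequence for runs of consecutive motif copies. A stdlib-only sketch of that counting step (the 2°-shaft multicore matcher described here parallelizes the search across patterns and chromosomes; this single-threaded version only illustrates the per-sequence logic):

```python
def longest_tandem_run(seq, motif):
    """Length, in repeat units, of the longest run of back-to-back motif copies."""
    best = run = 0
    i, m = 0, len(motif)
    while i + m <= len(seq):
        if seq[i:i + m] == motif:
            run += 1
            best = max(best, run)
            i += m          # jump past the matched copy to stay in tandem phase
        else:
            run = 0         # the run is broken; resume scanning base by base
            i += 1
    return best

def tandem_run_lengths(seq, motifs):
    """Longest tandem run in seq for each SSR motif of interest."""
    return {motif: longest_tandem_run(seq, motif) for motif in motifs}
```

Applied chromosome by chromosome for each of the ten tetranucleotide motifs, this kind of scan produces exactly the per-locus repeat statistics a tandem-repeat database stores.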

  13. Next generation sequencing (NGS) database for tandem repeats with multiple pattern 2°-shaft multicore string matching

    Science.gov (United States)

    Someswara Rao, Chinta; Raju, S. Viswanadha

    2016-01-01

    Next generation sequencing (NGS) technologies have been rapidly applied in biomedical and biological research in recent years. To provide a comprehensive NGS resource for this research, in this paper we have considered 10 loci/codi/repeats: TAGA, TCAT, GAAT, AGAT, AGAA, GATA, TATC, CTTT, TCTG and TCTA. We then developed the NGS Tandem Repeat Database (TandemRepeatDB) for all the chromosomes of the Homo sapiens, Callithrix jacchus, Chlorocebus sabaeus, Gorilla gorilla, Macaca fascicularis, Macaca mulatta, Nomascus leucogenys, Pan troglodytes, Papio anubis and Pongo abelii genome data sets for all those loci. We find the successive occurrence frequency for all of the above 10 SSRs (simple sequence repeats) in these genome data sets on a chromosome-by-chromosome basis with multiple pattern 2° shaft multicore string matching. PMID:26981434

  14. Next generation sequencing (NGS) database for tandem repeats with multiple pattern 2°-shaft multicore string matching.

    Science.gov (United States)

    Someswara Rao, Chinta; Raju, S Viswanadha

    2016-03-01

    Next generation sequencing (NGS) technologies have been rapidly applied in biomedical and biological research in recent years. To provide a comprehensive NGS resource for this research, in this paper we have considered 10 loci/codi/repeats: TAGA, TCAT, GAAT, AGAT, AGAA, GATA, TATC, CTTT, TCTG and TCTA. We then developed the NGS Tandem Repeat Database (TandemRepeatDB) for all the chromosomes of the Homo sapiens, Callithrix jacchus, Chlorocebus sabaeus, Gorilla gorilla, Macaca fascicularis, Macaca mulatta, Nomascus leucogenys, Pan troglodytes, Papio anubis and Pongo abelii genome data sets for all those loci. We find the successive occurrence frequency for all of the above 10 SSRs (simple sequence repeats) in these genome data sets on a chromosome-by-chromosome basis with multiple pattern 2° shaft multicore string matching.
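
    A minimal sketch of the successive-occurrence counting described above, in Python. The motif list comes from the abstract; the sample sequence and the regex-based matcher are illustrative only, not the paper's 2°-shaft multicore string-matching algorithm:

```python
import re

# The 10 SSR motifs considered in the paper
MOTIFS = ["TAGA", "TCAT", "GAAT", "AGAT", "AGAA",
          "GATA", "TATC", "CTTT", "TCTG", "TCTA"]

def longest_tandem_run(seq, motif):
    """Maximum number of back-to-back copies of `motif` in `seq`."""
    runs = re.findall(r"(?:%s)+" % motif, seq)
    return max((len(r) // len(motif) for r in runs), default=0)

seq = "GGTAGATAGATAGACCTCATTCAT"       # toy sequence, not genome data
print(longest_tandem_run(seq, "TAGA"))  # 3
print(longest_tandem_run(seq, "TCAT"))  # 2
```

    Per-chromosome counts of this kind, computed for each motif over each genome, are what a tandem-repeat database would tabulate.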

  15. Open Rotor Tone Shielding Methods for System Noise Assessments Using Multiple Databases

    Science.gov (United States)

    Bahr, Christopher J.; Thomas, Russell H.; Lopes, Leonard V.; Burley, Casey L.; Van Zante, Dale E.

    2014-01-01

    Advanced aircraft designs such as the hybrid wing body, in conjunction with open rotor engines, may allow for significant improvements in the environmental impact of aviation. System noise assessments allow for the prediction of the aircraft noise of such designs while they are still in the conceptual phase. Because of the heavy computational requirements of direct simulation, these predictions still rely on experimental data to account for the interaction of the open rotor tones with the hybrid wing body airframe. Recently, multiple aircraft system noise assessments have been conducted for hybrid wing body designs with open rotor engines. These assessments utilized measured benchmark data from a Propulsion Airframe Aeroacoustic interaction effects test. The measured data demonstrated airframe shielding of open rotor tonal and broadband noise with legacy F7/A7 open rotor blades. Two methods are proposed for improving the use of these data on general open rotor designs in a system noise assessment. The first, direct difference, is a simple octave band subtraction which does not account for tone distribution within the rotor acoustic signal. The second, tone matching, is a higher-fidelity process incorporating additional physical aspects of the problem, where isolated rotor tones are matched by their directivity to determine tone-by-tone shielding. A case study is conducted with the two methods to assess how well each reproduces the measured data and identify the merits of each. Both methods perform similarly for system level results and successfully approach the experimental data for the case study. The tone matching method provides additional tools for assessing the quality of the match to the data set. Additionally, a potential path to improve the tone matching method is provided.
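
    The simpler of the two proposed methods, the "direct difference", is an octave-band subtraction. A hedged sketch follows (the band levels are invented for illustration; the actual assessments use measured Propulsion Airframe Aeroacoustic test data):

```python
import numpy as np

# Per-octave-band sound pressure levels (dB), illustrative values only
iso_db = np.array([95.0, 98.0, 101.0, 97.0])    # isolated rotor
inst_db = np.array([90.0, 91.0, 96.0, 95.0])    # installed (shielded by airframe)

shielding_db = iso_db - inst_db                 # band-by-band attenuation
new_rotor_db = np.array([96.0, 99.0, 100.0, 94.0])
shielded_new = new_rotor_db - shielding_db      # apply measured shielding to a new design
print(shielding_db.tolist())   # [5.0, 7.0, 5.0, 2.0]
```

    As the abstract notes, this ignores how individual tones are distributed within each band, which is what the higher-fidelity tone-matching method addresses.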

  16. Linked Patient-Reported Outcomes Data From Patients With Multiple Sclerosis Recruited on an Open Internet Platform to Health Care Claims Databases Identifies a Representative Population for Real-Life Data Analysis in Multiple Sclerosis.

    Science.gov (United States)

    Risson, Valery; Ghodge, Bhaskar; Bonzani, Ian C; Korn, Jonathan R; Medin, Jennie; Saraykar, Tanmay; Sengupta, Souvik; Saini, Deepanshu; Olson, Melvin

    2016-09-22

    An enormous amount of information relevant to public health is being generated directly by online communities. To explore the feasibility of creating a dataset that links patient-reported outcomes data, from a Web-based survey of US patients with multiple sclerosis (MS) recruited on open Internet platforms, to health care utilization information from health care claims databases. The dataset was generated by linkage analysis to a broader MS population in the United States using both pharmacy and medical claims data sources. US Facebook users with an interest in MS were alerted to a patient-reported survey by targeted advertisements. Eligibility criteria were diagnosis of MS by a specialist (primary progressive, relapsing-remitting, or secondary progressive), ≥12-month history of disease, age 18-65 years, and commercial health insurance. Participants completed a questionnaire including data on demographic and disease characteristics, current and earlier therapies, relapses, disability, health-related quality of life, and employment status and productivity. A unique anonymous profile was generated for each survey respondent. Each anonymous profile was linked to a number of medical and pharmacy claims datasets in the United States. Linkage rates were assessed and survey respondents' representativeness was evaluated based on differences in the distribution of characteristics between the linked survey population and the general MS population in the claims databases. The advertisement was placed on 1,063,973 Facebook users' pages generating 68,674 clicks, 3719 survey attempts, and 651 successfully completed surveys, of which 440 could be linked to any of the claims databases for 2014 or 2015 (67.6% linkage rate). Overall, no significant differences were found between patients who were linked and not linked for educational status, ethnicity, current or prior disease-modifying therapy (DMT) treatment, or presence of a relapse in the last 12 months. 
The frequencies of the
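
    The 67.6% linkage rate quoted above is simply the number of linked surveys divided by the number of completed surveys; a quick arithmetic check:

```python
completed = 651   # successfully completed surveys
linked = 440      # surveys linked to a claims database for 2014 or 2015
rate = linked / completed
print(round(100 * rate, 1))  # 67.6
```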

  17. Linked Patient-Reported Outcomes Data From Patients With Multiple Sclerosis Recruited on an Open Internet Platform to Health Care Claims Databases Identifies a Representative Population for Real-Life Data Analysis in Multiple Sclerosis

    Science.gov (United States)

    Ghodge, Bhaskar; Bonzani, Ian C; Korn, Jonathan R; Medin, Jennie; Saraykar, Tanmay; Sengupta, Souvik; Saini, Deepanshu; Olson, Melvin

    2016-01-01

    Background An enormous amount of information relevant to public health is being generated directly by online communities. Objective To explore the feasibility of creating a dataset that links patient-reported outcomes data, from a Web-based survey of US patients with multiple sclerosis (MS) recruited on open Internet platforms, to health care utilization information from health care claims databases. The dataset was generated by linkage analysis to a broader MS population in the United States using both pharmacy and medical claims data sources. Methods US Facebook users with an interest in MS were alerted to a patient-reported survey by targeted advertisements. Eligibility criteria were diagnosis of MS by a specialist (primary progressive, relapsing-remitting, or secondary progressive), ≥12-month history of disease, age 18-65 years, and commercial health insurance. Participants completed a questionnaire including data on demographic and disease characteristics, current and earlier therapies, relapses, disability, health-related quality of life, and employment status and productivity. A unique anonymous profile was generated for each survey respondent. Each anonymous profile was linked to a number of medical and pharmacy claims datasets in the United States. Linkage rates were assessed and survey respondents’ representativeness was evaluated based on differences in the distribution of characteristics between the linked survey population and the general MS population in the claims databases. Results The advertisement was placed on 1,063,973 Facebook users’ pages generating 68,674 clicks, 3719 survey attempts, and 651 successfully completed surveys, of which 440 could be linked to any of the claims databases for 2014 or 2015 (67.6% linkage rate). Overall, no significant differences were found between patients who were linked and not linked for educational status, ethnicity, current or prior disease-modifying therapy (DMT) treatment, or presence of a relapse in

  18. Radionuclide transport analysis considering the effects of multiple sources in an HLW repository

    Energy Technology Data Exchange (ETDEWEB)

    Hatanaka, Koichiro [Japan Nuclear Cycle Development Inst., Tokai Works, Tokai, Ibaraki (Japan)

    2002-06-01

    This study focused on the effect of multiple sources due to the disposal of high-level radioactive waste at different positions in the repository. When the effect of multiple sources is taken into consideration, concentration interference in the repository region becomes possible. Therefore, a radionuclide transport model/code considering the effect of concentration interference due to multiple sources was developed to assess the effect quantitatively. The newly developed model/code was verified through comparison analysis with the existing radionuclide transport code used in the performance assessment analysis for the second progress report summarized by JNC. In addition, the effect of the concentration interference was evaluated by setting up a simple one-dimensional problem. The result shows that the maximum peak value of the radionuclide transport rates from the repository was approximately two orders of magnitude lower than in an analysis based on a single-canister configuration. (author)
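
    Because the transport equation is linear, concentration interference from several sources can be modelled by superposing single-source solutions. A toy 1-D diffusion sketch of this idea (the Gaussian point-source solution and all parameters are illustrative, not JNC's transport model):

```python
import math

def gaussian_plume(x, t, x0, mass, D):
    """Instantaneous point-source solution of 1-D diffusion."""
    return mass / math.sqrt(4 * math.pi * D * t) * math.exp(-(x - x0) ** 2 / (4 * D * t))

def multi_source(x, t, sources, D=1e-2):
    """Linear superposition over (position, mass) source pairs."""
    return sum(gaussian_plume(x, t, x0, m, D) for x0, m in sources)

sources = [(0.0, 1.0), (5.0, 1.0), (10.0, 1.0)]   # three canister positions
at_middle = multi_source(5.0, 50.0, sources)
alone = gaussian_plume(5.0, 50.0, 5.0, 1.0, 1e-2)
print(at_middle > alone)  # True: overlapping plumes raise the local concentration
```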

  19. Existing data sources for clinical epidemiology: the Danish Quality Database of Mammography Screening

    Directory of Open Access Journals (Sweden)

    Langagergaard V

    2013-03-01

    Full Text Available Vivian Langagergaard,1 Jens P Garne,2 Ilse Vejborg,3 Walter Schwartz,4 Martin Bak,5 Anders Lernevall,1 Nikolaj B Mogensen,6 Heidi Larsson,7 Berit Andersen,1 Ellen M Mikkelsen7 1Department of Public Health Programs, Randers Hospital, Randers, Denmark; 2Department of Breast Surgery, Aalborg Hospital, Aalborg, Denmark; 3Diagnostic Imaging Center, Copenhagen University Hospital, Rigshospitalet, Copenhagen, Denmark; 4Center of Mammography, 5Department of Pathology, Odense University Hospital, Denmark; 6Department of Radiology, Ringsted Hospital, Ringsted, Denmark; 7Department of Clinical Epidemiology, Aarhus University Hospital, Aarhus, Denmark Abstract: The Danish Quality Database of Mammography Screening (DKMS) was established in 2007, when screening was implemented on a nationwide basis and offered biennially to all Danish women aged 50–69 years. The primary aims of the database are to monitor and evaluate the quality of the screening program and – after years of follow-up – to evaluate the effect of nationwide screening on breast cancer-specific mortality. Here, we describe the database and present results for quality assurance from the first round of national screening. The steering committee for the DKMS defined eleven organizational and clinical quality indicators and standards to monitor the Danish breast cancer screening program. We calculated the relevant proportions and ratios with 95% confidence intervals for each quality indicator. All indicators were assessed at a national and regional level. Of 670,039 women invited for mammography, 518,823 (77.4%) participated. Seventy-one percent of the women received the result of their mammography examination within 10 days of screening, and 3% of the participants were recalled for further investigation. Among all detected cancers, 86% were invasive cancers, and the proportion of women with node-negative cancer was 67%. Thirty-six percent of the women had small cancers, and the ratio of surgery for
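
    Indicators of this kind are proportions reported with 95% confidence intervals. A minimal sketch using the participation figures quoted above (normal-approximation interval; the DKMS may use a different interval method):

```python
import math

def prop_ci(k, n, z=1.96):
    """Proportion k/n with a normal-approximation 95% confidence interval."""
    p = k / n
    se = math.sqrt(p * (1 - p) / n)
    return p, p - z * se, p + z * se

p, low, high = prop_ci(518823, 670039)   # participants / invited women
print(round(100 * p, 1))  # 77.4
```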

  20. The astrocosmic databases for multi-wavelength and cosmological properties of extragalactic sources

    Science.gov (United States)

    Vavilova, I. B.; Ivashchenko, G. Yu.; Babyk, Yu. V.; Sergijenko, O.; Dobrycheva, D. V.; Torbaniuk, O. O.; Vasylenko, A. A.; Pulatova, N. G.

    2015-12-01

    The article briefly describes the new specially-oriented Astro Space databases obtained with ground-based telescopes and space observatories. As a result, multi-wavelength spectral and physical properties of galaxies and galaxy clusters were analyzed in more detail, particularly 1) to study the spectral properties of quasars and the distribution of matter on intergalactic scales using the Lyman-alpha forest; 2) to study galaxies (including those with active nuclei), especially the formation of large-scale structures in the Universe and the influence of the environment on the internal parameters of galaxies; 3) to estimate the visible and dark matter content in galaxy clusters and to test cosmological parameters and the evolution of matter over a wide range of the age of the Universe.

  1. Joint part-of-speech and dependency projection from multiple sources

    DEFF Research Database (Denmark)

    Johannsen, Anders Trærup; Agic, Zeljko; Søgaard, Anders

    2016-01-01

    Most previous work on annotation projection has been limited to a subset of Indo-European languages, using only a single source language, and projecting annotation for one task at a time. In contrast, we present an Integer Linear Programming (ILP) algorithm that simultaneously projects annotation for multiple tasks from multiple source languages, relying on parallel corpora available for hundreds of languages. When training POS taggers and dependency parsers on jointly projected POS tags and syntactic dependencies using our algorithm, we obtain better performance than a standard approach on 20...

  2. Use of ultrasonic array method for positioning multiple partial discharge sources in transformer oil.

    Science.gov (United States)

    Xie, Qing; Tao, Junhan; Wang, Yongqiang; Geng, Jianghai; Cheng, Shuyi; Lü, Fangcheng

    2014-08-01

    Fast and accurate positioning of partial discharge (PD) sources in transformer oil is very important for the safe, stable operation of power systems because it allows timely elimination of insulation faults. There is usually more than one PD source once an insulation fault occurs in the transformer oil. This study, which has both theoretical and practical significance, proposes a method of identifying multiple PD sources in the transformer oil. The method combines the two-sided correlation transformation algorithm for broadband signal focusing with the modified Gerschgorin disk estimator. The multiple signal classification (MUSIC) method is used to determine the directions of arrival of signals from multiple PD sources. The ultrasonic array positioning method is based on multi-platform direction finding and global optimization searching. Both the 4 × 4 square planar ultrasonic sensor array and the ultrasonic array detection platform were built to test the method of identifying and positioning multiple PD sources. The obtained results verify the validity and the engineering practicability of this method.

  3. Multiple concurrent sources localization based on a two-node distributed acoustic sensor network

    Science.gov (United States)

    Xu, Jiaxin; Zhao, Zhao; Chen, Chunzeng; Xu, Zhiyong

    2017-01-01

    In this work, we propose a new approach to localize multiple concurrent sources using a distributed acoustic sensor network. Only two node-arrays are required in this sensor network, and each node-array consists of only two widely spaced sensors. First, the directions of arrival (DOAs) of multiple sources are estimated at each node-array by utilizing a new pooled angular spectrum proposed in this paper, which suppresses spatial aliasing effectively. Based on minimum variance distortionless response (MVDR) beamforming and the DOA estimates of the sources, the time-frequency spectra containing the corresponding energy distribution features associated with those sources are reconstructed in each node-array. Then, scale invariant feature transform (SIFT) is employed to solve the DOA association problem. Performance evaluation is conducted with field recordings, and the experimental results demonstrate the effectiveness and feasibility of the proposed method.
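
    Each node-array's two widely spaced sensors yield a direction of arrival from a time difference of arrival (TDOA). A hedged sketch of that elementary step (a cross-correlation delay estimate mapped through theta = arcsin(c*tau/d); the spacing, sample rate and synthetic signal are illustrative, and this is not the paper's pooled-angular-spectrum method):

```python
import numpy as np

fs = 48_000.0     # sample rate (Hz), illustrative
d = 1.0           # sensor spacing (m), "widely spaced"
c = 343.0         # speed of sound (m/s)

true_theta = np.deg2rad(30.0)
lag_true = int(round(d * np.sin(true_theta) / c * fs))   # true delay in samples

rng = np.random.default_rng(0)
s = rng.standard_normal(4096)          # broadband source signal
x1 = s
x2 = np.roll(s, lag_true)              # delayed copy at the second sensor

xc = np.correlate(x2, x1, mode="full")
lag = int(np.argmax(xc)) - (len(s) - 1)          # estimated delay in samples
theta_hat = np.degrees(np.arcsin(c * lag / (fs * d)))
print(round(theta_hat, 1))  # 30.0
```

    Intersecting the DOA lines from the two node-arrays is what localizes a source; the pooled angular spectrum and SIFT-based association handle multiple concurrent sources.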

  4. Multiple sclerosis: patients' information sources and needs on disease symptoms and management.

    Science.gov (United States)

    Matti, Albert I; McCarl, Helen; Klaer, Pamela; Keane, Miriam C; Chen, Celia S

    2010-06-24

    To investigate the current information sources of patients with multiple sclerosis (MS) in the early stages of their disease and to identify patients' preferred sources of information. The relative amounts of information from the different sources were also compared. Participants at a newly diagnosed information session organized by the Multiple Sclerosis Society of South Australia were invited to complete a questionnaire. Participants were asked to rate on a visual analog scale how much information they had received about MS and optic neuritis from different information sources and how much information they would like to receive from each of the sources. A close to ideal amount of information is being provided by the MS society and MS specialist nurses. There is a clear deficit between the information patients are currently receiving and the amount of information they actually want from various sources. Patients wish to receive significantly more information from treating general practitioners, eye specialists, neurologists, and education sessions. Patients reported less than adequate information received on optic neuritis from all sources. This study noted a clear information deficit regarding MS from all sources. This information deficit is more pronounced in relation to optic neuritis and needs to be addressed in the future. More information and counselling need to be provided to patients with MS even in the early stages of their disease, especially in relation to management of disease relapse.

  5. Studies on plasma production in a large volume system using multiple compact ECR plasma sources

    Science.gov (United States)

    Tarey, R. D.; Ganguli, A.; Sahu, D.; Narayanan, R.; Arora, N.

    2017-01-01

    This paper presents a scheme for large volume plasma production using multiple highly portable compact ECR plasma sources (CEPS) (Ganguli et al 2016 Plasma Source Sci. Technol. 25 025026). The large volume plasma system (LVPS) described in the paper is a scalable, cylindrical vessel of diameter ≈1 m, consisting of source and spacer sections with multiple CEPS mounted symmetrically on the periphery of the source sections. Scaling is achieved by altering the number of source sections/the number of sources in a source section or changing the number of spacer sections for adjusting the spacing between the source sections. A series of plasma characterization experiments using argon gas were conducted on the LVPS under different configurations of CEPS, source and spacer sections, for an operating pressure in the range 0.5-20 mTorr, and a microwave power level in the range 400-500 W per source. Using Langmuir probes (LP), it was possible to show that the plasma density (~1-2 × 10¹¹ cm⁻³) remains fairly uniform inside the system and decreases marginally close to the chamber wall, and this uniformity increases with an increase in the number of sources. It was seen that a warm electron population (60-80 eV) is always present and is about 0.1% of the bulk plasma density. The mechanism of plasma production is discussed in light of the results obtained for a single CEPS (Ganguli et al 2016 Plasma Source Sci. Technol. 25 025026).

  6. Open-Source Multi-Language Audio Database for Spoken Language Processing Applications

    Science.gov (United States)

    2012-12-01

    Chinese language has a large number of dialects with a resulting big influence on how people pronounce Mandarin [9]. Accents among Mandarin speakers...large open-source database of speech passages from web sites such as YouTube. 300 passages were collected in each of three languages: English ...additional native language listeners. The English and Mandarin were then force-aligned and labeled at the phonetic level using a combination of

  7. openBIS ELN-LIMS: an open-source database for academic laboratories

    OpenAIRE

    Barillari, Caterina; Ottoz, Diana S. M.; Fuentes-Serna, Juan Mariano; Ramakrishnan, Chandrasekhar; Rinn, Bernd; Rudolf, Fabian

    2015-01-01

    Summary: The open-source platform openBIS (open Biology Information System) offers an Electronic Laboratory Notebook and a Laboratory Information Management System (ELN-LIMS) solution suitable for the academic life science laboratories. openBIS ELN-LIMS allows researchers to efficiently document their work, to describe materials and methods and to collect raw and analyzed data. The system comes with a user-friendly web interface where data can be added, edited, browsed and searched. Availabil...

  8. Reading on the World Wide Web: Dealing with conflicting information from multiple sources

    NARCIS (Netherlands)

    Van Strien, Johan; Brand-Gruwel, Saskia; Boshuizen, Els

    2011-01-01

    Van Strien, J. L. H., Brand-Gruwel, S., & Boshuizen, H. P. A. (2011, August). Reading on the World Wide Web: Dealing with conflicting information from multiple sources. Poster session presented at the biannual conference of the European Association for Research on Learning and Instruction, Exeter, U

  9. Organizational Communication in Emergencies: Using Multiple Channels and Sources to Combat Noise and Capture Attention

    Science.gov (United States)

    Stephens, Keri K.; Barrett, Ashley K.; Mahometa, Michael J.

    2013-01-01

    This study relies on information theory, social presence, and source credibility to uncover what best helps people grasp the urgency of an emergency. We surveyed a random sample of 1,318 organizational members who received multiple notifications about a large-scale emergency. We found that people who received 3 redundant messages coming through at…

  10. Reading on the World Wide Web: Dealing with conflicting information from multiple sources

    NARCIS (Netherlands)

    Van Strien, Johan; Brand-Gruwel, Saskia; Boshuizen, Els

    2011-01-01

    Van Strien, J. L. H., Brand-Gruwel, S., & Boshuizen, H. P. A. (2011, August). Reading on the World Wide Web: Dealing with conflicting information from multiple sources. Poster session presented at the biannual conference of the European Association for Research on Learning and Instruction, Exeter, U

  11. WORKING FEATURES OF POWER SOURCE SYSTEMS – A MULTIPLE CURRENT PULSE GENERATOR

    Directory of Open Access Journals (Sweden)

    Shs.V. Argun

    2013-04-01

    Full Text Available An analysis of circuit designs for connecting a magnetic-pulse action tool to a power source has been carried out. Design features of the control and monitoring system of a magnetic-pulse installation operating in a multiple current pulse mode have been identified. Block diagrams of the control and monitoring system are described.

  12. Mobility and Sector-specific Effects of Changes in Multiple Sources ...

    African Journals Online (AJOL)

    Mobility and Sector-specific Effects of Changes in Multiple Sources of Deprivation in Cameroon. ... deprivations associated with human capital and labour capital reduced, ... Further, deprivations in urban areas decreased, with the rural areas ... encourage family planning; as well as encourage employment mobility and ...

  13. Performance characteristics of white light sources consisting of multiple light-emitting diodes

    Science.gov (United States)

    Li, Yun-Li; Shah, Jay M.; Leung, P.-H.; Gessmann, Thomas; Schubert, E. F.

    2004-01-01

    The performance characteristics of white light sources based on a multiple-LED approach, in particular dichromatic and trichromatic sources, are analyzed in detail. Figures of merit such as the luminous efficacy, color temperature, and color rendering capabilities are provided for a wide range of primary emission wavelengths. Spectral power density functions of LEDs are assumed to be thermally and inhomogeneously broadened to a full width at half maximum of several kT, in agreement with experimental results. A Gaussian line shape is assumed for each of the emission bands. It is shown that multi-LED white light sources have the potential for luminous efficacies greater than 400 lm/W (dichromatic source) and color rendering indices greater than 90 (trichromatic source). Contour maps of the color rendering indices and luminous efficacies versus the three wavelengths are given.
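
    The luminous-efficacy figure of merit can be sketched for a dichromatic source by weighting Gaussian emission lines with an approximation to the eye-sensitivity curve (here the common Gaussian fit V(λ) ≈ 1.019·exp(−285.4(λ−0.5591)²), λ in μm; the LED wavelengths and linewidths below are illustrative, not the paper's optimized values):

```python
import numpy as np

lam = np.linspace(380.0, 780.0, 2001)     # wavelength grid (nm)

def gauss(lam, center, fwhm):
    sigma = fwhm / 2.355                  # convert FWHM to standard deviation
    return np.exp(-((lam - center) ** 2) / (2 * sigma ** 2))

def v_lambda(lam):
    """Gaussian approximation to the CIE photopic sensitivity V(lambda)."""
    l_um = lam / 1000.0
    return 1.019 * np.exp(-285.4 * (l_um - 0.5591) ** 2)

# Dichromatic white source: blue + yellow lines, each a few kT wide
spectrum = gauss(lam, 455.0, 30.0) + 0.8 * gauss(lam, 575.0, 30.0)
ler = 683.0 * (spectrum * v_lambda(lam)).sum() / spectrum.sum()
print(round(ler), "lm/W")   # luminous efficacy of radiation
```

    Moving the two wavelengths toward 555 nm raises the efficacy (toward the >400 lm/W potential noted above) at the cost of color rendering, which is the trade-off the paper's contour maps chart.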

  14. Isotopic evidence on multiple sources of nitrogen in the northern Jiulong River, Southeast China

    Science.gov (United States)

    Cao, Wenzhi; Huang, Zheng; Zhai, Weidong; Li, Ying; Hong, Huasheng

    2015-09-01

    Riverine export accounts for a large portion of estuarine and coastal nutrients and can lead to severe eutrophication. However, nitrogen (N) sources at the catchment scale remain unclear because of spatial and temporal variations. The stable isotope 15N, which has proven effective in deducing sources and testing biogeochemical behaviours, is applied in this study to explore multiple sources of riverine N and their nutrient concentrations in the northern section of the Jiulong River catchment across seasons. Results show that drastic seasonal variation in external nitrate sources occurs in the river; manure and sewage dominate during base flows, whereas organic N in soil and atmospheric deposition dominate during storm flows. The external sources change throughout the year depending on the environmental conditions. Furthermore, riverine nitrate import in the northern Jiulong River is dominated by an external process because of the increasing δ15N value of NO3- accompanied by an increasing NO3--N concentration (p management and eutrophication reversal.
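
    Apportioning nitrate between two endmembers from δ15N follows a linear mixing relation; a minimal sketch (the endmember values below are illustrative, not the study's measurements):

```python
def source_fraction(delta_mix, delta_a, delta_b):
    """Fraction of endmember A in a two-source linear isotope mixture."""
    return (delta_mix - delta_b) / (delta_a - delta_b)

# e.g. manure/sewage (~ +10 per mil) vs soil organic N (~ +4 per mil)
f_manure = source_fraction(delta_mix=8.5, delta_a=10.0, delta_b=4.0)
print(round(f_manure, 2))  # 0.75
```

    With more than two candidate sources, as in this catchment, additional tracers or Bayesian mixing models are needed to resolve the fractions.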

  15. The eye in hand: predicting others' behavior by integrating multiple sources of information.

    Science.gov (United States)

    Ambrosini, Ettore; Pezzulo, Giovanni; Costantini, Marcello

    2015-04-01

    The ability to predict the outcome of other beings' actions confers significant adaptive advantages. Experiments have established that human action observation can use multiple information sources, but it is currently unknown how they are integrated and how conflicts between them are resolved. To address this issue, we designed an action observation paradigm requiring the integration of multiple, potentially conflicting sources of evidence about the action target: the actor's gaze direction, hand preshape, and arm trajectory, and their availability and relative uncertainty in time. In two experiments, we analyzed participants' action prediction ability by using eye tracking and behavioral measures. The results show that the information provided by the actor's gaze affected participants' explicit predictions. However, results also show that gaze information was disregarded as soon as information on the actor's hand preshape was available, and this latter information source had widespread effects on participants' prediction ability. Furthermore, as the action unfolded in time, participants relied increasingly more on the arm movement source, showing sensitivity to its increasing informativeness. Therefore, the results suggest that the brain forms a robust estimate of the actor's motor intention by integrating multiple sources of information. However, when informative motor cues such as a preshaped hand with a given grip are available and might help in selecting action targets, people tend to capitalize on such motor cues, thus becoming more accurate and faster in inferring the object to be manipulated by the other's hand. Copyright © 2015 the American Physiological Society.
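
    A standard account of this kind of cue integration is reliability weighting: each source (gaze, hand preshape, arm trajectory) contributes in proportion to its inverse variance. A hedged sketch of that idea (the cue values and variances are invented, not data from the study):

```python
def integrate(cues):
    """cues: list of (estimate, variance) pairs.
    Returns the precision-weighted fused estimate and its variance."""
    weights = [1.0 / var for _, var in cues]
    total = sum(weights)
    fused = sum(w * est for (est, _), w in zip(cues, weights)) / total
    return fused, 1.0 / total

# Early in the action: gaze is reliable, the arm trajectory is not yet
fused, var = integrate([(0.2, 0.01), (0.5, 0.04), (0.9, 0.25)])
print(round(fused, 3), round(var, 4))
```

    As a cue's variance shrinks over time (e.g. the arm trajectory late in the reach), its weight, and hence its influence on the fused estimate, grows — consistent with the shift toward arm-movement information reported above.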

  16. An Applied Framework for Incorporating Multiple Sources of Uncertainty in Fisheries Stock Assessments.

    Directory of Open Access Journals (Sweden)

    Finlay Scott

    Full Text Available Estimating fish stock status is very challenging given the many sources and high levels of uncertainty surrounding the biological processes (e.g. natural variability in the demographic rates), model selection (e.g. choosing growth or stock assessment models) and parameter estimation. Incorporating multiple sources of uncertainty in a stock assessment allows advice to better account for the risks associated with proposed management options, promoting decisions that are more robust to such uncertainty. However, a typical assessment only reports the model fit and variance of estimated parameters, thereby underreporting the overall uncertainty. Additionally, although multiple candidate models may be considered, only one is selected as the 'best' result, effectively rejecting the plausible assumptions behind the other models. We present an applied framework to integrate multiple sources of uncertainty in the stock assessment process. The first step is the generation and conditioning of a suite of stock assessment models that contain different assumptions about the stock and the fishery. The second step is the estimation of parameters, including fitting of the stock assessment models. The final step integrates across all of the results to reconcile the multi-model outcome. The framework is flexible enough to be tailored to particular stocks and fisheries and can draw on information from multiple sources to implement a broad variety of assumptions, making it applicable to stocks with varying levels of data availability. The Iberian hake stock in International Council for the Exploration of the Sea (ICES) Divisions VIIIc and IXa is used to demonstrate the framework, starting from length-based stock and indices data. Process and model uncertainty are considered through the growth, natural mortality, fishing mortality, survey catchability and stock-recruitment relationship. Estimation uncertainty is included as part of the fitting process. Simple model

  17. An Applied Framework for Incorporating Multiple Sources of Uncertainty in Fisheries Stock Assessments.

    Science.gov (United States)

    Scott, Finlay; Jardim, Ernesto; Millar, Colin P; Cerviño, Santiago

    2016-01-01

    Estimating fish stock status is very challenging given the many sources and high levels of uncertainty surrounding the biological processes (e.g. natural variability in the demographic rates), model selection (e.g. choosing growth or stock assessment models) and parameter estimation. Incorporating multiple sources of uncertainty in a stock assessment allows advice to better account for the risks associated with proposed management options, promoting decisions that are more robust to such uncertainty. However, a typical assessment only reports the model fit and variance of estimated parameters, thereby underreporting the overall uncertainty. Additionally, although multiple candidate models may be considered, only one is selected as the 'best' result, effectively rejecting the plausible assumptions behind the other models. We present an applied framework to integrate multiple sources of uncertainty in the stock assessment process. The first step is the generation and conditioning of a suite of stock assessment models that contain different assumptions about the stock and the fishery. The second step is the estimation of parameters, including fitting of the stock assessment models. The final step integrates across all of the results to reconcile the multi-model outcome. The framework is flexible enough to be tailored to particular stocks and fisheries and can draw on information from multiple sources to implement a broad variety of assumptions, making it applicable to stocks with varying levels of data availability. The Iberian hake stock in International Council for the Exploration of the Sea (ICES) Divisions VIIIc and IXa is used to demonstrate the framework, starting from length-based stock and indices data. Process and model uncertainty are considered through the growth, natural mortality, fishing mortality, survey catchability and stock-recruitment relationship. Estimation uncertainty is included as part of the fitting process. Simple model averaging is used to
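
    The final "simple model averaging" step can be sketched as an equal-weight combination of the candidate models' estimates (the spawning-stock-biomass figures and model names below are invented for illustration; likelihood- or AIC-based weights would be one refinement):

```python
import statistics

ssb_estimates = {                 # SSB (kt) from hypothetical candidate models
    "low_M_growth_A": 24.0,
    "high_M_growth_A": 31.0,
    "low_M_growth_B": 27.5,
}
avg = statistics.fmean(ssb_estimates.values())
spread = statistics.stdev(ssb_estimates.values())
print(avg, spread)  # 27.5 3.5
```

    The spread across models carries the structural uncertainty that selecting a single "best" model would discard.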

  18. Simulation of neutron multiplicity measurements using Geant4. Open source software for nuclear arms control

    Energy Technology Data Exchange (ETDEWEB)

    Kuett, Moritz

    2016-07-07

    Nuclear arms control, including nuclear safeguards and verification technologies for nuclear disarmament typically use software as part of many different technological applications. This thesis proposes to use three open source criteria for such software, allowing users and developers to have free access to a program, have access to the full source code and be able to publish modifications for the program. This proposition is presented and analyzed in detail, together with the description of the development of "Open Neutron Multiplicity Simulation", an open source software tool to simulate neutron multiplicity measurements. The description includes physical background of the method, details of the developed program and a comprehensive set of validation calculations.

  19. The Freight Analysis Framework Version 4 (FAF4) - Building the FAF4 Regional Database: Data Sources and Estimation Methodologies

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Ho-Ling [ORNL; Hargrove, Stephanie [ORNL; Chin, Shih-Miao [ORNL; Wilson, Daniel W [ORNL; Taylor, Rob D [ORNL; Davidson, Diane [ORNL

    2016-09-01

    The Freight Analysis Framework (FAF) integrates data from a variety of sources to create a comprehensive national picture of freight movements among states and major metropolitan areas by all modes of transportation. It provides a national picture of current freight flows to, from, and within the United States, assigns the flows to the transportation network, and projects freight flow patterns into the future. The FAF4 is the fourth database of its kind: FAF1 provided estimates for truck, rail, and water tonnage for calendar year 1998; FAF2 provided a more complete picture based on the 2002 Commodity Flow Survey (CFS); and FAF3 made further improvements building on the 2007 CFS. Since the first FAF effort, a number of changes in both data sources and products have taken place. The FAF4 flow matrix described in this report is used as the base-year data to forecast future freight activities, projecting shipment weights and values from year 2020 to 2045 in five-year intervals. It also provides the basis for annual estimates to the FAF4 flow matrix, aiming to provide users with the timeliest data. Furthermore, FAF4 truck freight is routed on the national highway network to produce the FAF4 network database and flow assignments for trucks. This report details the data sources and methodologies applied to develop the base year 2012 FAF4 database. An overview of the FAF4 components is briefly discussed in Section 2. Effects on FAF4 from the changes in the 2012 CFS are highlighted in Section 3. Section 4 provides a general discussion on the process used in filling data gaps within the domestic CFS matrix, specifically on the estimation of CFS suppressed/unpublished cells. Over a dozen CFS OOS components of FAF4 are then addressed in Section 5 through Section 11 of this report. This includes discussions of farm-based agricultural shipments in Section 5 and shipments from fishery and logging sectors in Section 6. Shipments of municipal solid wastes and debris from construction

  20. An analytical solution for VOCs emission from multiple sources/sinks in buildings

    Institute of Scientific and Technical Information of China (English)

    DENG BaoQing; YU Bo; Chang Nyung KIM

    2008-01-01

    An analytical solution is presented to describe the emission/sorption of volatile organic compounds (VOCs) from/on multiple single-layer materials coexisting in buildings. The diffusion of VOCs within each material is described by a transient diffusion equation. All diffusion equations are coupled with each other through the equation of mass conservation in the air. The analytical solution is validated against experimental data in the literature. Compared to the one-material case, the coexistence of multiple materials may decrease the emission rate of VOCs from each material. The smaller the diffusion coefficient is, the more the emission rate decreases. Whether a material is a source or a sink in the case of multiple coexisting materials is not affected by the diffusion coefficient. For the case of multiple materials with different partition coefficients, a material with a high partition coefficient may become a sink. This may promote the emission of VOCs from other materials.
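
As a rough illustration of the coupling described above (not the paper's analytical solution), the sketch below uses a heavily simplified lumped-parameter mass balance: each material exchanges VOC with well-mixed room air, and a material whose material-phase concentration is low relative to its partition coefficient turns into a sink. All parameter values are invented.

```python
import numpy as np

# Lumped-parameter sketch of two materials exchanging a VOC with well-mixed
# room air, coupled through the air-phase mass balance. Illustrative only.
V, Q = 30.0, 15.0              # room volume (m^3), ventilation rate (m^3/h)
A = np.array([10.0, 10.0])     # exposed material areas (m^2)
h = np.array([1.0, 1.0])       # gas-side mass-transfer coefficients (m/h)
K = np.array([5000.0, 500.0])  # material/air partition coefficients (-)
Cm = np.array([1.0e5, 1.0e2])  # material-phase VOC concentrations (ug/m^3)
L = 0.001                      # material slab thickness (m)
Ca = 0.0                       # air-phase concentration (ug/m^3)

dt, t_end = 0.001, 10.0        # time step and horizon (h), forward Euler
for _ in range(int(t_end / dt)):
    # Flux from each material: positive = emission, negative = sorption.
    flux = h * (Cm / K - Ca)                    # ug/(m^2 h)
    Ca += dt * (np.sum(A * flux) - Q * Ca) / V  # air mass balance
    Cm -= dt * flux / L                         # per unit slab volume

# Material 2 becomes a sink as soon as the air concentration exceeds Cm2/K2.
print(round(Ca, 3), flux)
```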

  1. Multiple Frequency Contrast Source Inversion Method for Vertical Electromagnetic Profiling: 2D Simulation Results and Analyses

    Science.gov (United States)

    Li, Jinghe; Song, Linping; Liu, Qing Huo

    2016-02-01

    A simultaneous multiple frequency contrast source inversion (CSI) method is applied to reconstructing hydrocarbon reservoir targets in a complex multilayered medium in two dimensions. It simulates the effects of a salt dome sedimentary formation in the context of reservoir monitoring. In this method, the stabilized biconjugate-gradient fast Fourier transform (BCGS-FFT) algorithm is applied as a fast solver of the 2D volume integral equation for the forward computation. The inversion technique with CSI combines the efficient FFT algorithm to speed up the matrix-vector multiplication with the stable convergence of the simultaneous multiple frequency CSI in the iteration process. As a result, this method is capable of effective quantitative conductivity image reconstruction for large-scale electromagnetic oil exploration problems, including the vertical electromagnetic profiling (VEP) survey investigated here. A number of numerical examples are presented to validate the effectiveness and capacity of the simultaneous multiple frequency CSI method for a limited array view in VEP.

  2. Numerical Procedure to Forecast the Tsunami Parameters from a Database of Pre-Simulated Seismic Unit Sources

    Science.gov (United States)

    Jiménez, César; Carbonel, Carlos; Rojas, Joel

    2017-09-01

    We have implemented a numerical procedure to forecast the parameters of a tsunami, such as the arrival time of the front of the first wave and the maximum wave height, in real and virtual tidal stations along the Peruvian coast. For this purpose, a database of pre-computed synthetic tsunami waveforms (or Green functions) was obtained from numerical simulation of seismic unit sources (dimension: 50 × 50 km2) for subduction zones from southern Chile to northern Mexico. A bathymetry resolution of 30 arc-sec (approximately 927 m) was used. The resulting tsunami waveform is obtained from the superposition of synthetic waveforms corresponding to several seismic unit sources contained within the tsunami source geometry. The numerical procedure was applied to the Chilean tsunami of April 1, 2014. The results show a very good correlation for stations with wave amplitude greater than 1 m: in the case of the Arica tide station an error (from the maximum height of the observed and simulated waveform) of 3.5% was obtained; for Callao station the error was 12%; and the largest error was in Chimbote with 53.5%, however, due to the low amplitude of the Chimbote wave (tsunami early warning, where speed is required rather than accuracy, so the results should be taken as preliminary.
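
The superposition step can be sketched as a slip-weighted sum of precomputed unit-source waveforms. The toy Green functions below are synthetic placeholders standing in for the database of simulated waveforms; delays, periods and slips are invented.

```python
import numpy as np

# Forecast waveform at a tide station = slip-weighted sum of precomputed
# unit-source waveforms (Green functions). Synthetic placeholders throughout.
t = np.linspace(0, 3600, 361)  # seconds, 10 s sampling

def unit_waveform(delay_s, period_s=900.0):
    """Toy Green function: a delayed, decaying sine for a 1 m slip unit source."""
    tau = np.clip(t - delay_s, 0.0, None)
    return np.sin(2 * np.pi * tau / period_s) * np.exp(-tau / 1800.0) * (t >= delay_s)

# Three unit sources inside the rupture geometry, each with its own slip (m).
green = np.vstack([unit_waveform(300), unit_waveform(420), unit_waveform(540)])
slip = np.array([2.0, 4.0, 1.5])

eta = slip @ green  # superposed sea-surface elevation at the station

arrival_idx = np.argmax(np.abs(eta) > 0.01)  # first sample above threshold
print("arrival (s):", t[arrival_idx], "max height (m):", round(eta.max(), 2))
```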

  3. The multiplicity of 250-$\\mu$m Herschel sources in the COSMOS field

    CERN Document Server

    Scudder, Jillian M; Hurley, Peter D; Griffin, Matt; Sargent, Mark T; Scott, Douglas; Wang, Lingyu; Wardlow, Julie L

    2016-01-01

    We investigate the multiplicity of extragalactic sources detected by the Herschel Space Observatory in the COSMOS field. Using 3.6- and 24-$\\mu$m catalogues, in conjunction with 250-$\\mu$m data from Herschel, we seek to determine if a significant fraction of Herschel sources are composed of multiple components emitting at 250 $\\mu$m. We use the XID+ code, which applies Bayesian inference methods to produce probability distributions of the possible contributions to the observed 250-$\\mu$m flux for each potential component. The fraction of Herschel flux assigned to the brightest component is highest for sources with total 250-$\\mu$m fluxes < 45 mJy; however, the flux in the brightest component is still highest in the brightest Herschel sources. The faintest 250-$\\mu$m sources (30-45 mJy) have the majority of their flux assigned to a single bright component; the second brightest component is typically significantly weaker, and contains the remainder of the 250-$\\mu$m source flux. At the highest 250-$\\mu$m fluxes (45-...

  4. Intelligent power management in a vehicular system with multiple power sources

    Science.gov (United States)

    Murphey, Yi L.; Chen, ZhiHang; Kiliaris, Leonidas; Masrur, M. Abul

    This paper presents an optimal online power management strategy applied to a vehicular power system that contains multiple power sources and deals with widely fluctuating load requests. The optimal online power management strategy is developed using machine learning and fuzzy logic. A machine learning algorithm has been developed to learn how to minimize power loss in a Multiple Power Sources and Loads (M_PS&LD) system. The algorithm exploits the fact that different power sources used to deliver a load request have different power losses under different vehicle states. The machine learning algorithm is used to train an intelligent power controller, an online fuzzy power controller, FPC_MPS, that has the capability of finding combinations of power sources that minimize power losses while satisfying a given set of system and component constraints during a drive cycle. The FPC_MPS was implemented in two simulated systems, a power system of four power sources and a vehicle system of three power sources. Experimental results show that the proposed machine learning approach combined with fuzzy control is a promising technology for intelligent vehicle power management in a M_PS&LD power system.
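
The underlying optimization that such a controller is trained to approximate can be sketched as a brute-force search over power splits that meet the load while minimizing total loss. The source names, loss models and limits below are invented for illustration; a real controller would learn this mapping offline rather than enumerate it online.

```python
from itertools import product

# Invented loss models for three power sources (loss in kW at delivered power p).
sources = {
    "battery":   {"max_kw": 30.0, "loss": lambda p: 0.04 * p + 0.002 * p**2},
    "generator": {"max_kw": 50.0, "loss": lambda p: 1.0 + 0.06 * p},
    "ultracap":  {"max_kw": 20.0, "loss": lambda p: 0.01 * p + 0.004 * p**2},
}

def best_split(load_kw, step=1.0):
    """Enumerate power splits on a grid and return (loss, split) with least loss."""
    names = list(sources)
    grids = [
        [x * step for x in range(int(sources[n]["max_kw"] / step) + 1)]
        for n in names
    ]
    best = None
    for combo in product(*grids):
        if abs(sum(combo) - load_kw) > 1e-9:
            continue  # split must meet the load exactly
        # Idle sources (p == 0) contribute no loss, including fixed costs.
        loss = sum(sources[n]["loss"](p) for n, p in zip(names, combo) if p > 0)
        if best is None or loss < best[0]:
            best = (loss, dict(zip(names, combo)))
    return best

loss, split = best_split(40.0)
print(split, round(loss, 2))
```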

  5. Incident signal power comparison for localization of concurrent multiple acoustic sources.

    Science.gov (United States)

    Salvati, Daniele; Canazza, Sergio

    2014-01-01

    In this paper, a method to solve the localization of concurrent multiple acoustic sources in large open spaces is presented. The problem of multisource localization in far-field conditions is to correctly associate the directions of arrival (DOAs) estimated by a network array system with the same source. The use of systems implementing a Bayesian filter is a traditional approach to the problem of localization in a multisource acoustic scenario. However, in a real noisy open space the acoustic sources are often discontinuous, with numerous short-duration events, and thus the filtering methods may have difficulty tracking the multiple sources. Incident signal power comparison (ISPC) is proposed to compute the DOA association. ISPC is based on identifying the incident signal power (ISP) of the sources on a microphone array using beamforming methods and comparing the ISPs between different arrays using spectral distance (SD) measurement techniques. This method solves the ambiguities due to the presence of simultaneous sources by identifying sounds through the minimization of an error criterion on SD measures of DOA combinations. The experiments were conducted in a real noisy outdoor environment, and the ISPC performance is reported using different beamforming techniques and SD functions.
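
The association step can be sketched as follows: each array reports one ISP spectrum per DOA, and the pairing of DOAs across arrays is chosen to minimize the total spectral distance. The spectra below are synthetic, and the log-spectral distance is one common SD choice, not necessarily the one used in the paper.

```python
import numpy as np
from itertools import permutations

# Two sources observed by two arrays; array B reports them in swapped order.
rng = np.random.default_rng(0)
src = rng.random((2, 64))                       # two 64-bin power spectra
isp_a = src + 0.01 * rng.random((2, 64))        # array A's ISP estimates
isp_b = src[::-1] + 0.01 * rng.random((2, 64))  # array B sees them swapped

def spectral_distance(p, q):
    """Log-spectral distance (dB RMS) between two power spectra."""
    return np.sqrt(np.mean((10 * np.log10(p / q)) ** 2))

# Try every pairing of array-B DOAs to array-A DOAs; keep the one that
# minimizes the summed spectral distance (the error criterion).
best = min(
    permutations(range(2)),
    key=lambda perm: sum(spectral_distance(isp_a[i], isp_b[j])
                         for i, j in enumerate(perm)),
)
print("array-B DOA matched to each array-A DOA:", best)
```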

  6. Intelligent power management in a vehicular system with multiple power sources

    Energy Technology Data Exchange (ETDEWEB)

    Murphey, Yi L.; Chen, ZhiHang; Kiliaris, Leonidas [Department of Electrical and Computer Engineering at the University of Michigan-Dearborn, 4901 Evergreen Rd., Dearborn, MI 48128 (United States); Masrur, M. Abul [U.S. Army RDECOM-TARDE, Warren, MI (United States)

    2011-01-15

    This paper presents an optimal online power management strategy applied to a vehicular power system that contains multiple power sources and deals with widely fluctuating load requests. The optimal online power management strategy is developed using machine learning and fuzzy logic. A machine learning algorithm has been developed to learn how to minimize power loss in a Multiple Power Sources and Loads (M_PS&LD) system. The algorithm exploits the fact that different power sources used to deliver a load request have different power losses under different vehicle states. The machine learning algorithm is used to train an intelligent power controller, an online fuzzy power controller, FPC_MPS, that has the capability of finding combinations of power sources that minimize power losses while satisfying a given set of system and component constraints during a drive cycle. The FPC_MPS was implemented in two simulated systems, a power system of four power sources and a vehicle system of three power sources. Experimental results show that the proposed machine learning approach combined with fuzzy control is a promising technology for intelligent vehicle power management in a M_PS&LD power system. (author)

  7. Incident Signal Power Comparison for Localization of Concurrent Multiple Acoustic Sources

    Directory of Open Access Journals (Sweden)

    Daniele Salvati

    2014-01-01

    Full Text Available In this paper, a method to solve the localization of concurrent multiple acoustic sources in large open spaces is presented. The problem of multisource localization in far-field conditions is to correctly associate the directions of arrival (DOAs) estimated by a network array system with the same source. The use of systems implementing a Bayesian filter is a traditional approach to the problem of localization in a multisource acoustic scenario. However, in a real noisy open space the acoustic sources are often discontinuous, with numerous short-duration events, and thus the filtering methods may have difficulty tracking the multiple sources. Incident signal power comparison (ISPC) is proposed to compute the DOA association. ISPC is based on identifying the incident signal power (ISP) of the sources on a microphone array using beamforming methods and comparing the ISPs between different arrays using spectral distance (SD) measurement techniques. This method solves the ambiguities due to the presence of simultaneous sources by identifying sounds through the minimization of an error criterion on SD measures of DOA combinations. The experiments were conducted in a real noisy outdoor environment, and the ISPC performance is reported using different beamforming techniques and SD functions.

  8. Detecting and accounting for multiple sources of positional variance in peak list registration analysis and spin system grouping.

    Science.gov (United States)

    Smelter, Andrey; Rouchka, Eric C; Moseley, Hunter N B

    2017-08-16

    Peak lists derived from nuclear magnetic resonance (NMR) spectra are commonly used as input data for a variety of computer assisted and automated analyses. These include automated protein resonance assignment and protein structure calculation software tools. Prior to these analyses, peak lists must be aligned to each other and sets of related peaks must be grouped based on common chemical shift dimensions. Even when programs can perform peak grouping, they require the user to provide uniform match tolerances or use default values. However, peak grouping is further complicated by multiple sources of variance in peak position, limiting the effectiveness of grouping methods that utilize uniform match tolerances. In addition, no method currently exists for deriving peak positional variances from single peak lists for grouping peaks into spin systems, i.e. spin system grouping within a single peak list. Therefore, we developed a complementary pair of peak list registration analysis and spin system grouping algorithms designed to overcome these limitations. We have implemented these algorithms into an approach that can identify multiple dimension-specific positional variances that exist in a single peak list and group peaks from a single peak list into spin systems. The resulting software tools generate a variety of useful statistics on both a single peak list and pairwise peak list alignment, especially for quality assessment of peak list datasets. We used a range of low and high quality experimental solution NMR and solid-state NMR peak lists to assess the performance of our registration analysis and grouping algorithms. Analyses show that an algorithm using a single iteration and uniform match tolerances is only able to recover 50 to 80% of the spin systems due to the presence of multiple sources of variance. Our algorithm recovers additional spin systems by reevaluating match tolerances in multiple iterations. To facilitate evaluation of the
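
Tolerance-based grouping with one tolerance-reestimation pass can be sketched as below. The peak positions, dimensions (H, N) and tolerances are invented for illustration; the paper's algorithms are considerably more sophisticated.

```python
# Peaks sharing common chemical-shift dimensions within match tolerances are
# grouped into one spin system; tolerances are then re-estimated from the
# observed within-group spread and grouping is repeated. Values are invented.
peaks = [  # (H shift, N shift) in ppm
    (8.21, 118.40), (8.22, 118.43), (8.20, 118.38),   # spin system 1
    (7.95, 121.10), (7.96, 121.14),                   # spin system 2
]

def group(peaks, tol_h, tol_n):
    """Greedy grouping: join a peak to the first system whose centroid matches."""
    systems = []
    for h, n in peaks:
        for sys in systems:
            ch = sum(p[0] for p in sys) / len(sys)
            cn = sum(p[1] for p in sys) / len(sys)
            if abs(h - ch) <= tol_h and abs(n - cn) <= tol_n:
                sys.append((h, n))
                break
        else:
            systems.append([(h, n)])
    return systems

# Iteration 1: generous uniform tolerances; iteration 2: dimension-specific
# tolerances derived from the observed within-group positional spread.
systems = group(peaks, 0.05, 0.5)
spread_h = max(abs(p[0] - sum(q[0] for q in s) / len(s)) for s in systems for p in s)
spread_n = max(abs(p[1] - sum(q[1] for q in s) / len(s)) for s in systems for p in s)
systems = group(peaks, 2 * spread_h, 2 * spread_n)
print(len(systems), [len(s) for s in systems])
```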

  9. Preferred sources of health information in persons with multiple sclerosis: degree of trust and information sought.

    Science.gov (United States)

    Marrie, Ruth Ann; Salter, Amber R; Tyry, Tuula; Fox, Robert J; Cutter, Gary R

    2013-03-17

    Effective health communication is important for informed decision-making, yet little is known about the range of information sources used by persons with multiple sclerosis (MS), the perceived trust in those information sources, or how this might vary according to patient characteristics. We aimed to investigate the sources of health information used by persons with MS, their preferences for the source of health information, and levels of trust in those information sources. We also aimed to evaluate how these findings varied according to participant characteristics. In 2011, participants in the North American Research Committee on Multiple Sclerosis (NARCOMS) Registry were asked about their sources of health information using selected questions adapted from the 2007 Health Information National Trends (HINTS) survey. Of 12,974 eligible participants, 66.18% (8586/12,974) completed the questionnaire. Mass media sources, rather than interpersonal information sources, were the first sources used by 83.22% (5953/7153) of participants for general health topics and by 68.31% (5026/7357) of participants for MS concerns. Specifically, the Internet was the first source of health information for general health issues (5332/7267, 73.40%) and MS (4369/7376, 59.23%). In a logistic regression model, younger age, less disability, and higher annual income were independently associated with increased odds of use of mass media rather than interpersonal sources of information first. The most trusted information source was a physician, with 97.94% (8318/8493) reporting that they trusted a physician some or a lot. Information sought included treatment for MS (4470/5663, 78.93%), general information about MS (3378/5405, 62.50%), paying for medical care (1096/4282, 25.59%), where to get medical care (787/4282, 18.38%), and supports for coping with MS (2775/5031, 55.16%). Nearly 40% (2998/7521) of participants had concerns about the quality of the information they gathered. Although

  10. Treatment patterns and health care resource utilization associated with dalfampridine extended release in multiple sclerosis: a retrospective claims database analysis

    Directory of Open Access Journals (Sweden)

    Guo A

    2016-05-01

    Full Text Available Amy Guo,1 Michael Grabner,2 Swetha Rao Palli,2 Jessica Elder,1 Matthew Sidovar,1 Peter Aupperle,1 Stephen Krieger3 1Acorda Therapeutics Inc., Ardsley, New York, NY, USA; 2HealthCore Inc., Wilmington, DE, USA; 3Corinne Goldsmith Dickinson Center for MS, Icahn School of Medicine at Mount Sinai, New York, NY, USA Background: Although previous studies have demonstrated the clinical benefits of dalfampridine extended release (D-ER) tablets in patients with multiple sclerosis (MS), there are limited real-world data on D-ER utilization and associated outcomes in patients with MS. Purpose: The objective of this study was to evaluate treatment patterns, budget impact, and health care resource utilization (HRU) associated with D-ER use in a real-world setting. Methods: A retrospective claims database analysis was conducted using the HealthCore Integrated Research DatabaseSM. Adherence (measured by medication possession ratio, or MPR) and persistence (measured by days between initial D-ER claim and discontinuation or end of follow-up) were evaluated over 1-year follow-up. Budget impact was calculated as cost per member per month (PMPM) over the available follow-up period. D-ER and control cohorts were propensity-score matched on baseline demographics, comorbidities, and MS-related resource utilization to compare walking-impairment-related HRU over follow-up. Results: Of the 2,138 MS patients identified, 1,200 were not treated with D-ER (control) and 938 were treated with D-ER. Patients were aged 51 years on average and 74% were female. Approximately 82.6% of D-ER patients were adherent (MPR >80%). The estimated budget impact range of D-ER was $0.014–$0.026 PMPM. Propensity-score-matched D-ER and controls yielded 479 patients in each cohort. Postmatching comparison showed that the D-ER cohort was associated with fewer physician (21.5% vs 62.4%, P<0.0001) and other outpatient visits (22.8% vs 51.4%, P<0.0001) over the 12-month follow-up. Changes in HRU from follow
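
The two utilization measures named in the abstract, MPR-based adherence and persistence, can be computed from a claims history as sketched below. The claim dates are fabricated and the 60-day grace period is a hypothetical discontinuation rule, not necessarily the study's definition.

```python
from datetime import date

# One patient's (fill date, days supplied) claims over a 1-year follow-up.
claims = [
    (date(2012, 1, 10), 30), (date(2012, 2, 9), 30), (date(2012, 3, 12), 30),
    (date(2012, 4, 11), 30), (date(2012, 6, 20), 30),
]
follow_up_days = 365
grace_days = 60  # gap beyond supply that counts as discontinuation (assumed)

# Medication possession ratio: total days supplied over the follow-up window.
mpr = sum(supply for _, supply in claims) / follow_up_days

# Persistence: days from the initial claim until discontinuation (a gap larger
# than supply + grace between refills) or until supply from the last claim ends.
persistence = None
for (d1, s1), (d2, _) in zip(claims, claims[1:]):
    if (d2 - d1).days > s1 + grace_days:
        persistence = (d1 - claims[0][0]).days + s1
        break
if persistence is None:
    last_date, last_supply = claims[-1]
    persistence = (last_date - claims[0][0]).days + last_supply

adherent = mpr > 0.8  # the study's adherence threshold (MPR > 80%)
print(round(mpr, 2), persistence, adherent)
```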

  11. Assimilation and contrast in persuasion: the effects of source credibility in multiple message situations.

    Science.gov (United States)

    Tormala, Zakary L; Clarkson, Joshua J

    2007-04-01

    The present research explores a contextual perspective on persuasion in multiple message situations. It is proposed that when people receive persuasive messages, the effects of those messages are influenced by other messages to which people recently have been exposed. In two experiments, participants received a target persuasive message from a moderately credible source. Immediately before this message, participants received another message, on a different topic, from a source with high or low credibility. In Experiment 1, participants' attitudes toward the target issue were more favorable after they had first been exposed to a different message from a low rather than high credibility source (contrast). In Experiment 2, this effect only emerged when a priming manipulation gave participants a dissimilarity mindset. When participants were primed with a similarity mindset, their attitudes toward the target issue were more favorable following a different message from a high rather than low credibility source (assimilation).

  12. A method to suppress spurious multiples in virtual-source gathers retrieved using seismic interferometry with reflection data

    NARCIS (Netherlands)

    Boullenger, B.; Wapenaar, C.P.A.; Draganov, D.S.

    2014-01-01

    Seismic interferometry applied to surface reflection data (with sources and receivers at the surface) allows the retrieval of virtual-source gathers at the positions of receivers where no source was shot. As a result of the crosscorrelation of all primary and multiple reflections, the virtual-source gathe

  13. Luminosity Functions and Point Source Properties from Multiple Chandra Observations of M81

    CERN Document Server

    Sell, P H; Zezas, A; Heinz, S; Homan, J; Lewin, W H G

    2011-01-01

    We present an analysis of 15 Chandra observations of the nearby spiral galaxy M81 taken over the course of six weeks in May–July 2005. Each observation reaches a sensitivity of ~10^37 erg/s. With these observations and one previous deeper Chandra observation, we compile a master source list of 265 point sources, extract and fit their spectra, and differentiate basic populations of sources through their colors. We also carry out variability analyses of individual point sources and of X-ray luminosity functions in multiple regions of M81 on timescales of days, months, and years. We find that, despite measuring significant variability in a considerable fraction of sources, snapshot observations provide a consistent determination of the X-ray luminosity function of M81. We also fit the X-ray luminosity functions for multiple regions of M81 and, using common parametrizations, compare these luminosity functions to those of two other spiral galaxies, M31 and the Milky Way.

  14. Combining Multiple Algorithms for Road Network Tracking from Multiple Source Remotely Sensed Imagery: a Practical System and Performance Evaluation

    Science.gov (United States)

    Lin, Xiangguo; Liu, Zhengjun; Zhang, Jixian; Shen, Jing

    2009-01-01

    In light of the increasing availability of commercial high-resolution imaging sensors, automatic interpretation tools are needed to extract road features. Currently, many approaches for road extraction are available, but it is acknowledged that there is no single method that would be successful in extracting all types of roads from any remotely sensed imagery. In this paper, a novel classification of roads is proposed, based on both the roads' geometrical and radiometric properties and the characteristics of the sensors. Subsequently, a general road tracking framework is proposed, and one or more suitable road trackers are designed or combined for each type of road. Extensive experiments are performed to extract roads from aerial/satellite imagery, and the results show that a combination strategy can automatically extract more than 60% of the total roads from very high resolution imagery such as QuickBird and DMC images, with a time-saving of approximately 20% and acceptable spatial accuracy. It is proven that a combination of multiple algorithms is more reliable, more efficient and more robust for extracting road networks from multiple-source remotely sensed imagery than the individual algorithms. PMID:22399965

  15. Use of Multiple DC Magnetron Deposition Sources for Uniform Coating of Large Areas (Preprint)

    Science.gov (United States)

    2009-06-01

    Report period: 2005 to 1 June 2009. Contract number: FA9451-04-C-0067 DF297548. ...thickness at some point on the substrate plane to yield a relative thickness distribution, or it can be used to find the ratio Mlm, which will be useful... Mlm of the material deposited in each area is shown in columns 3 through 5 for the three sources. For example, within the area from the center of the

  16. Source location in plates based on the multiple sensors array method and wavelet analysis

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Hong Jun; Shin, Tae Jin; Lee, Sang Kwon [Inha University, Incheon (Korea, Republic of)

    2014-01-15

    A new method for impact source localization in a plate is proposed based on multiple signal classification (MUSIC) and wavelet analysis. For source localization, the direction of arrival of the wave caused by an impact on a plate and the distance between the impact position and the sensor should be estimated. The direction of arrival can be estimated accurately using the MUSIC method. The distance can be obtained by using the time delay of arrival and the group velocity of the Lamb wave in a plate. The time delay is experimentally estimated using the continuous wavelet transform of the wave. Elastodynamic theory is used for the group velocity estimation.
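
The final localization step reduces to simple geometry once the two quantities are estimated. The sketch below assumes the DOA and time delay are already available (e.g. from MUSIC and the wavelet transform, as in the abstract) and uses illustrative numbers throughout.

```python
import math

# Assumed inputs: DOA from the sensor array and the arrival time delay.
doa_deg = 30.0           # direction of arrival, degrees (illustrative)
time_delay_s = 2.5e-4    # delay estimated from the continuous wavelet transform
group_velocity = 2000.0  # Lamb-wave group velocity in the plate, m/s
array_xy = (0.1, 0.2)    # array reference position on the plate, m

# Distance = group velocity x time delay; impact position follows from the DOA.
distance = group_velocity * time_delay_s  # m
theta = math.radians(doa_deg)
impact_xy = (array_xy[0] + distance * math.cos(theta),
             array_xy[1] + distance * math.sin(theta))
print(tuple(round(c, 3) for c in impact_xy))
```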

  17. “You can know an analysis by the source of its data”. A conceptual and methodological review of databases on armed conflict in Colombia

    Directory of Open Access Journals (Sweden)

    Nicolás Espinosa M

    2011-07-01

    Full Text Available In Colombia, there are quantitative databases on armed conflict, some of them public and open-access. This paper reviews the main theoretical and methodological traits of some of those databases and their main features when defining and measuring political violence. Based on the results of research performed in a Colombian region, several methodological issues are presented concerning the statistical, spatial and cartographic processing of the information delivered by those databases. Several procedures are then described that made it possible to measure differences among the databases. The aim is to show the consequences that using one or another database has for the quantitative analysis of armed conflict, since the results of those analyses will depend on the source used.

  18. Multiple Factor Analysis and k-Means Clustering-Based Classification of the DOE Groundwater Contaminant Database

    Science.gov (United States)

    Faybishenko, B.; Hazen, T. C.

    2009-12-01

    A proper classification of the plume characteristics is critical for selecting the most suitable characterization, monitoring, and remediation technologies. To perform a statistical analysis of the different groundwater plume characteristics, we used the DOE Groundwater Database, including 221 groundwater plumes located at 60 DOE sites. To classify the plume characteristics, we used a multiple factor analysis (MFA), including a principal component analysis (PCA) of quantitative plume characteristics and a multiple correspondence analysis (MCA) of qualitative plume characteristics. The input parameters used for the statistical analysis are: the presence of eight types of contaminant groups—chlorinated hydrocarbons, fuels, explosives, sulfates, nitrates, metals, tritium, and radioisotopes; a number and associations of contaminant groups; a contamination severity index (based on the association of contaminant groups and complexity of remediation); contaminant mass and plume volumes; groundwater depth and velocities; and climatic conditions. The input variables are also partitioned into the active and supplementary plume characteristics. Statistical results include the evaluation of the correlation matrix between the groups of variables and individual plume characteristics. From the results of the MFA, the first four factors can be used to describe the variability of the basic plume characteristics. The contaminant severity index and the number of contaminant groups provide a major contribution to the 1st factor; the types of contaminant groups and carbon tetrachloride concentrations provide the major contribution to the 2nd factor. The contribution of the supplementary data (climate and plume depth and velocity) is insignificant. The presence of radioactive contaminants is mostly related to the 1st factor; the presence of sulfates, and to a lesser degree the presence of nitrates and metals, is related to the 2nd factor. The strongest relationship is, as expected
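
The PCA part of the MFA can be sketched on a stand-in data matrix: standardize the quantitative plume characteristics, decompose their correlation structure, and read off how much variability the leading factors explain. The random matrix below is a placeholder, not the DOE Groundwater Database, and a full MFA would additionally weight and combine the MCA of the qualitative variables.

```python
import numpy as np

# Stand-in for the quantitative plume characteristics (221 plumes, 6 variables).
rng = np.random.default_rng(1)
X = rng.normal(size=(221, 6))

Z = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize each characteristic
cov = np.cov(Z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)     # ascending eigenvalues
order = np.argsort(eigvals)[::-1]
explained = eigvals[order] / eigvals.sum()  # variance share per factor

scores = Z @ eigvecs[:, order]              # plume coordinates on the factors
print(np.round(explained, 3), scores.shape)
```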

  19. The diurnal cycle of clouds and precipitation : an evaluation of multiple data sources

    OpenAIRE

    Pfeifroth, Uwe Anton

    2016-01-01

    Clouds and precipitation are essential climate variables. Because of their high spatial and temporal variability, their observation and modeling are difficult. In this thesis multiple observational data sources, including satellite data and station data, are globally analyzed to understand the distribution and variability of clouds and precipitation, with a special focus on the diurnal cycle of both variables. Substantial diurnal cycles of clouds and precipitation are observed in the tropic...

  20. Analysis of magnetic source localization of P300 using the MUSIC (multiple signal classification) algorithm

    OpenAIRE

    魚橋, 哲夫

    2006-01-01

    The authors studied the localization of P300 magnetic sources using the multiple signal classification (MUSIC) algorithm. Six healthy subjects (aged 24–34 years) were investigated with 148-channel whole-head magnetoencephalography using an auditory oddball paradigm in passive mode. The authors also compared six stimulus combinations in order to find the optimal stimulus parameters for the P300 magnetic field (P300m) in passive mode. Bilateral MUSIC peaks were located on the mesial tempora...

  1. A hierarchical model for optimal supplier selection in multiple sourcing contexts

    OpenAIRE

    Dotoli, Mariagrazia; Falagario, Marco

    2011-01-01

    The paper addresses a crucial objective of the strategic purchasing function in supply chains, i.e., optimal supplier selection. We present a hierarchical extension of the Data Envelopment Analysis (DEA), the most widespread method for supplier rating in the literature, for application in a multiple sourcing strategy context. The proposed hierarchical technique is based on three levels. First, a modified DEA approach is used to evaluate the efficiency of each supplier acco...

  2. Inverse Estimation of Localized Sources in Multiple Atmospheric Release Scenario Using Mixture of Concentration Measurements

    Science.gov (United States)

    Singh, S. K.; Kumar, P.; Turbelin, G.; Issartel, J. P.; Feiz, A. A.; Ngae, P.; Bekka, N.

    2016-12-01

    In accidental release scenarios, a reliable prediction of the origin and strength of unknown releases is critical for emergency response authorities in order to ensure safety and security for human health and the environment. The accidental scenarios might involve one or more simultaneous releases emitting the same contaminant. In this case, the fields of the plumes may overlap significantly and the sampled concentrations may become a mixture of the concentrations originating from all the releases. The study addresses an inverse modelling procedure for identifying the origin and strength of a known number of simultaneous releases from the sampled mixture of concentrations. A two-step inversion algorithm is developed in conjunction with an adjoint representation of the source-receptor relationship. The computational efficiency is increased by deriving the distributed source information observable from the given monitoring design and number of measurements. The technique leads to an exact retrieval of the true release parameters when measurements are noise free and exactly described by the dispersion model. The inversion algorithm is evaluated using real data from the Fusion Field Trials, involving multiple (two, three and four sources) release experiments emitting propylene, conducted in September 2007 at Dugway Proving Ground, Utah, USA. The release locations are retrieved, on average, within 45 m of the true sources. The analysis of posterior uncertainties shows that the variations in location error and retrieved strength are within 10 m and 0.07%, respectively. Further, the inverse modelling is tested using 4-16 measurements in the retrieval of four releases and is found to work reasonably well (within 146±79 m). The sensitivity studies highlight that the covariance statistics, model representativeness errors, source-receptor distance, distance between localized sources, monitoring design and number of measurements play an important role in multiple source estimation.
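
The linear source-receptor relation behind such an inversion can be sketched as follows; the matrix and release rates are hypothetical, and the paper's actual method is a two-step adjoint-based algorithm that also retrieves locations.

```python
# Sketch of the mixing relation: sampled concentrations are C = A @ q, where
# A[i, j] is the modelled contribution of a unit release at source j to
# sensor i. With noise-free, model-consistent data, plain least squares
# recovers the true strengths exactly, mirroring the exact-retrieval
# property noted above. All numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
A = rng.uniform(0.1, 1.0, size=(16, 4))   # 16 sensors, 4 simultaneous releases
q_true = np.array([2.0, 0.5, 1.5, 3.0])   # hypothetical release rates (g/s)
C = A @ q_true                            # mixed concentrations, noise free

q_hat, *_ = np.linalg.lstsq(A, C, rcond=None)
print(q_hat)  # recovers q_true to machine precision
```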

  3. Gas Production Strategy of Underground Coal Gasification Based on Multiple Gas Sources

    Directory of Open Access Journals (Sweden)

    Duan Tianhong

    2014-01-01

    To lower the stability requirement of gas production in UCG (underground coal gasification), create better space and opportunities of development for UCG, an emerging sunrise industry, in its initial stage, and reduce the emission of blast furnace gas, converter gas, and coke oven gas, this paper, for the first time, puts forward a new mode of utilization of multiple gas sources, mainly including ground gasifier gas, UCG gas, blast furnace gas, converter gas, and coke oven gas; the new mode was demonstrated by field tests. According to the field tests, the existing power generation technology can fully adapt to the situation of high hydrogen content, low calorific value, and gas output fluctuation in UCG gas production in multiple-gas-sources power generation; there are large fluctuations and air can serve as a gasifying agent; the gas production of UCG in the mode of both power and methanol based on multiple gas sources has a strict requirement for stability. It was demonstrated by the field tests that the fluctuations in gas production in UCG can be well monitored through a quality control chart method.
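
The quality-control-chart idea for monitoring gas output fluctuations can be sketched as a basic Shewhart chart; the flow values below are hypothetical, and the paper does not specify which chart variant was used.

```python
# Minimal sketch of a Shewhart control chart for monitoring UCG gas output:
# points outside the baseline mean ± 3-sigma limits are flagged as
# out-of-control fluctuations. Data values are hypothetical.
from statistics import mean, stdev

def control_limits(samples):
    m, s = mean(samples), stdev(samples)
    return m - 3 * s, m + 3 * s

def out_of_control(samples, new_points):
    lo, hi = control_limits(samples)
    return [x for x in new_points if not lo <= x <= hi]

baseline = [10.2, 9.8, 10.0, 10.4, 9.9, 10.1, 10.3, 9.7]  # gas flow, 1000 m3/h
flagged = out_of_control(baseline, [10.0, 12.9, 9.6])
print(flagged)  # only the 12.9 excursion is flagged
```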

  4. Gas production strategy of underground coal gasification based on multiple gas sources.

    Science.gov (United States)

    Tianhong, Duan; Zuotang, Wang; Limin, Zhou; Dongdong, Li

    2014-01-01

    To lower the stability requirement of gas production in UCG (underground coal gasification), create better space and opportunities of development for UCG, an emerging sunrise industry, in its initial stage, and reduce the emission of blast furnace gas, converter gas, and coke oven gas, this paper, for the first time, puts forward a new mode of utilization of multiple gas sources, mainly including ground gasifier gas, UCG gas, blast furnace gas, converter gas, and coke oven gas; the new mode was demonstrated by field tests. According to the field tests, the existing power generation technology can fully adapt to the situation of high hydrogen content, low calorific value, and gas output fluctuation in UCG gas production in multiple-gas-sources power generation; there are large fluctuations and air can serve as a gasifying agent; the gas production of UCG in the mode of both power and methanol based on multiple gas sources has a strict requirement for stability. It was demonstrated by the field tests that the fluctuations in gas production in UCG can be well monitored through a quality control chart method.

  5. Investigating Multiple Household Water Sources and Uses with a Computer-Assisted Personal Interviewing (CAPI) Survey

    Directory of Open Access Journals (Sweden)

    Morgan C. MacDonald

    2016-12-01

    The investigation of multiple sources in household water management is considered overly complicated and time consuming using paper and pen interviewing (PAPI). We assess the advantages of computer-assisted personal interviewing (CAPI) in Pacific Island Countries (PICs). We adapted an existing PAPI survey on multiple water sources and expanded it to incorporate location of water use and the impacts of extreme weather events using SurveyCTO on Android tablets. We then compared the efficiency and accuracy of data collection using the PAPI version (n = 44) with the CAPI version (n = 291), including interview duration, error rate and trends in interview duration with enumerator experience. CAPI surveys facilitated high-quality data collection and were an average of 15.2 min faster than PAPI. CAPI survey duration decreased by 0.55% per survey delivered (p < 0.0001), whilst embedded skip patterns and answer lists lowered data entry error rates, relative to PAPI (p < 0.0001). Large-scale household surveys commonly used in global monitoring and evaluation do not differentiate multiple water sources and uses. CAPI equips water researchers with a quick and reliable tool to address these knowledge gaps and advance our understanding of development research priorities.

  6. Gas Production Strategy of Underground Coal Gasification Based on Multiple Gas Sources

    Science.gov (United States)

    Tianhong, Duan; Zuotang, Wang; Limin, Zhou; Dongdong, Li

    2014-01-01

    To lower the stability requirement of gas production in UCG (underground coal gasification), create better space and opportunities of development for UCG, an emerging sunrise industry, in its initial stage, and reduce the emission of blast furnace gas, converter gas, and coke oven gas, this paper, for the first time, puts forward a new mode of utilization of multiple gas sources, mainly including ground gasifier gas, UCG gas, blast furnace gas, converter gas, and coke oven gas; the new mode was demonstrated by field tests. According to the field tests, the existing power generation technology can fully adapt to the situation of high hydrogen content, low calorific value, and gas output fluctuation in UCG gas production in multiple-gas-sources power generation; there are large fluctuations and air can serve as a gasifying agent; the gas production of UCG in the mode of both power and methanol based on multiple gas sources has a strict requirement for stability. It was demonstrated by the field tests that the fluctuations in gas production in UCG can be well monitored through a quality control chart method. PMID:25114953

  7. Strong ground motion simulation of the 2016 Kumamoto earthquake of April 16 using multiple point sources

    Science.gov (United States)

    Nagasaka, Yosuke; Nozu, Atsushi

    2017-02-01

    The pseudo point-source model approximates the rupture process on faults with multiple point sources for simulating strong ground motions. A simulation with this point-source model is conducted by combining a simple source spectrum following the omega-square model with a path spectrum, an empirical site amplification factor, and phase characteristics. Realistic waveforms can be synthesized using the empirical site amplification factor and phase models even though the source model is simple. The Kumamoto earthquake occurred on April 16, 2016, with MJMA 7.3. Many strong motions were recorded at stations around the source region. Some records were considered to be affected by the rupture directivity effect. This earthquake was suitable for investigating the applicability of the pseudo point-source model, the current version of which does not consider the rupture directivity effect. Three subevents (point sources) were located on the fault plane, and the parameters of the simulation were determined. The simulated results were compared with the observed records at K-NET and KiK-net stations. It was found that the synthetic Fourier spectra and velocity waveforms generally explained the characteristics of the observed records, except for underestimation in the low frequency range. Troughs in the observed Fourier spectra were also well reproduced by placing multiple subevents near the hypocenter. The underestimation is presumably due to the following two reasons. The first is that the pseudo point-source model targets subevents that generate strong ground motions and does not consider the shallow large slip. The second is that the current version of the pseudo point-source model does not consider the rupture directivity effect. Consequently, strong pulses were not sufficiently reproduced at stations northeast of Subevent 3, such as KMM004, where the effect of rupture directivity was significant, while the amplitude was well reproduced at most of the other stations.
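
The omega-square source spectrum at the heart of the pseudo point-source model has a simple closed form: flat below the corner frequency and falling off as f⁻² above it. A minimal sketch, with illustrative parameter values only:

```python
# Omega-square displacement source spectrum: flat plateau below the corner
# frequency fc, ~f^-2 roll-off above it. omega0 and fc are illustrative.
def omega_square(f, omega0=1.0, fc=0.5):
    """Displacement source spectrum amplitude at frequency f (Hz)."""
    return omega0 / (1.0 + (f / fc) ** 2)

low = omega_square(0.01)   # ≈ omega0 on the low-frequency plateau
high = omega_square(5.0)   # well above fc: amplitude falls roughly as (fc/f)^2
print(low, high)
```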

  8. Managing Multiple Sources of Competitive Advantage in a Complex Competitive Environment

    Directory of Open Access Journals (Sweden)

    Alexandre Howard Henry Lapersonne

    2013-12-01

    The aim of this article is to review the literature on the topic of sustained and temporary competitive advantage creation, specifically in dynamic markets, and to propose further research possibilities. After analyzing the main trends and scholars’ works on the subject, it is concluded that a firm that has experienced erosion of its core sources of economic rent generation should diversify its strategy portfolio in search of new sources of competitive advantage, ones that can compensate for the decline of profits provoked by intensive competitive environments. This review concludes with the hypothesis that firms that have decided to enter and manage multiple competitive environments should develop a multiple-strategies framework approach. Managing this portfolio of sources of competitive advantage should allow a firm's superior economic performance to persist through the management of diverse temporary advantage lifecycles and through a resilience effect, where a very successful source of competitive advantage compensates for the ones that have been eroded. Additionally, the review indicates that the economies of emerging countries, such as those of the BRIC bloc, present a more complex competitive environment due to their historical cultural diversity, social contrasts and frequent economic disruption, and also because recent institutional normalization has turned the market into hypercompetition. Consequently, the study of complex competition should be appropriate in such environments.

  9. Inference of emission rates from multiple sources using Bayesian probability theory.

    Science.gov (United States)

    Yee, Eugene; Flesch, Thomas K

    2010-03-01

    The determination of atmospheric emission rates from multiple sources using inversion (regularized least-squares or best-fit technique) is known to be very susceptible to measurement and model errors in the problem, rendering the solution unusable. In this paper, a new perspective is offered for this problem: namely, it is argued that the problem should be addressed as one of inference rather than inversion. Towards this objective, Bayesian probability theory is used to estimate the emission rates from multiple sources. The posterior probability distribution for the emission rates is derived, accounting fully for the measurement errors in the concentration data and the model errors in the dispersion model used to interpret the data. The Bayesian inferential methodology for emission rate recovery is validated against real dispersion data, obtained from a field experiment involving various source-sensor geometries (scenarios) consisting of four synthetic area sources and eight concentration sensors. The recovery of discrete emission rates from three different scenarios obtained using Bayesian inference and singular value decomposition inversion are compared and contrasted.
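
For the special case of a linear dispersion model with Gaussian noise and a Gaussian prior, the posterior over emission rates has a closed form; the following sketch illustrates that case with synthetic numbers, whereas the paper's full treatment also models dispersion-model error.

```python
# Hedged sketch of the Bayesian formulation: for C = A q + Gaussian noise
# and a Gaussian prior on the emission rates q, the posterior is Gaussian
# with closed-form mean and covariance. Matrix and rates are synthetic.
import numpy as np

def posterior(A, C, sigma_noise, sigma_prior):
    """Posterior mean and covariance of q for C = A q + Gaussian noise."""
    prec = A.T @ A / sigma_noise**2 + np.eye(A.shape[1]) / sigma_prior**2
    cov = np.linalg.inv(prec)
    mean = cov @ (A.T @ C) / sigma_noise**2
    return mean, cov

rng = np.random.default_rng(2)
A = rng.uniform(0.1, 1.0, size=(8, 4))       # 8 sensors, 4 area sources
q_true = np.array([1.0, 2.0, 0.5, 1.5])      # true emission rates (g/s)
C = A @ q_true + rng.normal(0.0, 0.001, 8)   # slightly noisy concentrations
q_mean, q_cov = posterior(A, C, sigma_noise=0.001, sigma_prior=10.0)
print(q_mean)  # close to q_true
```

The posterior covariance `q_cov` carries the uncertainty quantification that a plain least-squares inversion lacks.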

  10. A rainfall design method for spatial flood risk assessment: considering multiple flood sources

    Science.gov (United States)

    Jiang, X.; Tatano, H.

    2015-08-01

    Information about the spatial distribution of flood risk is important for integrated urban flood risk management. Focusing on urban areas, spatial flood risk assessment must reflect all risk information derived from multiple flood sources: rivers, drainage, coastal flooding etc. that may affect the area. However, conventional flood risk assessment deals with each flood source independently, which leads to an underestimation of flood risk in the floodplain. Even in floodplains that have no risk from coastal flooding, flooding from river channels and inundation caused by insufficient drainage capacity should be considered simultaneously. For integrated flood risk management, it is necessary to establish a methodology to estimate flood risk distribution across a floodplain. In this paper, a rainfall design method for spatial flood risk assessment, which considers the joint effects of multiple flood sources, is proposed. The concept of critical rainfall duration determined by the concentration time of flooding is introduced to connect response characteristics of different flood sources with rainfall. A copula method is then adopted to capture the correlation of rainfall amount with different critical rainfall durations. Rainfall events are designed taking advantage of the copula structure of correlation and marginal distribution of rainfall amounts within different critical rainfall durations. A case study in the Otsu River Basin, Osaka prefecture, Japan was conducted to demonstrate this methodology.
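
The copula step described above can be sketched as follows; the correlation, the lognormal marginals, and the two critical durations are hypothetical, whereas the paper fits these quantities to observed rainfall.

```python
# Illustrative sketch of the copula step of the rainfall design method: draw
# correlated uniform margins from a Gaussian copula, then map each margin
# through its own marginal distribution of rainfall amount for a critical
# rainfall duration. All parameter values are hypothetical.
import math
import random
from statistics import NormalDist

def gaussian_copula_pair(rho, rng):
    """One (u1, u2) draw from a bivariate Gaussian copula with correlation rho."""
    z1 = rng.gauss(0.0, 1.0)
    z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
    nd = NormalDist()
    return nd.cdf(z1), nd.cdf(z2)

rng = random.Random(3)
samples = []
for _ in range(2000):
    u1, u24 = gaussian_copula_pair(0.8, rng)
    # hypothetical lognormal marginals for the 1 h and 24 h rainfall depths (mm)
    r1 = math.exp(NormalDist(mu=2.0, sigma=0.5).inv_cdf(u1))
    r24 = math.exp(NormalDist(mu=4.0, sigma=0.6).inv_cdf(u24))
    samples.append((r1, r24))
print(samples[0])
```

Each sampled pair is one designed rainfall event whose amounts for the two durations respect both the marginals and the copula correlation.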

  11. A rainfall design method for spatial flood risk assessment: considering multiple flood sources

    Directory of Open Access Journals (Sweden)

    X. Jiang

    2015-08-01

    Information about the spatial distribution of flood risk is important for integrated urban flood risk management. Focusing on urban areas, spatial flood risk assessment must reflect all risk information derived from multiple flood sources: rivers, drainage, coastal flooding etc. that may affect the area. However, conventional flood risk assessment deals with each flood source independently, which leads to an underestimation of flood risk in the floodplain. Even in floodplains that have no risk from coastal flooding, flooding from river channels and inundation caused by insufficient drainage capacity should be considered simultaneously. For integrated flood risk management, it is necessary to establish a methodology to estimate flood risk distribution across a floodplain. In this paper, a rainfall design method for spatial flood risk assessment, which considers the joint effects of multiple flood sources, is proposed. The concept of critical rainfall duration determined by the concentration time of flooding is introduced to connect response characteristics of different flood sources with rainfall. A copula method is then adopted to capture the correlation of rainfall amount with different critical rainfall durations. Rainfall events are designed taking advantage of the copula structure of correlation and marginal distribution of rainfall amounts within different critical rainfall durations. A case study in the Otsu River Basin, Osaka prefecture, Japan was conducted to demonstrate this methodology.

  12. Subspace electrode selection methodology for EEG multiple source localization error reduction due to uncertain conductivity values.

    Science.gov (United States)

    Crevecoeur, Guillaume; Yitembe, Bertrand; Dupre, Luc; Van Keer, Roger

    2013-01-01

    This paper proposes a modification of the subspace correlation cost function and the Recursively Applied and Projected Multiple Signal Classification (RAP-MUSIC) method for electroencephalography (EEG) source analysis in epilepsy. This makes it possible to reconstruct neural source locations and orientations that are less degraded by uncertain knowledge of the head conductivity values. An extended linear forward model is used in the subspace correlation cost function that incorporates the sensitivity of the EEG potentials to the uncertain conductivity parameter. More specifically, the principal vector of the subspace correlation function is used to provide relevant information for solving the EEG inverse problem. A simulation study is carried out on a simplified spherical head model with an uncertain skull-to-soft-tissue conductivity ratio. Results show an improvement in the reconstruction accuracy of source parameters compared to the traditional methodology when using conductivity ratio values that differ from the actual conductivity ratio.
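
The subspace-correlation core of (RAP-)MUSIC can be sketched as follows; the gain vectors are random stand-ins for a real EEG lead field, and no conductivity modelling (the paper's actual contribution) is attempted here.

```python
# Sketch of the subspace correlation used by (RAP-)MUSIC: estimate the signal
# subspace from the data, then score a candidate source by the correlation of
# its forward-model gain vector with that subspace. Gains are random stand-ins.
import numpy as np

def subspace_correlation(gain, signal_subspace):
    """Cosine of the principal angle between a gain vector and the subspace."""
    g = gain / np.linalg.norm(gain)
    return np.linalg.norm(signal_subspace.T @ g)

rng = np.random.default_rng(4)
n_sensors, n_sources = 32, 2
G = rng.standard_normal((n_sensors, n_sources))           # gains of the true sources
S = rng.standard_normal((n_sources, 500))                 # source time courses
X = G @ S + 0.01 * rng.standard_normal((n_sensors, 500))  # sensor data

U, s, _ = np.linalg.svd(X, full_matrices=False)
Us = U[:, :n_sources]                                     # estimated signal subspace

true_score = subspace_correlation(G[:, 0], Us)
rand_score = subspace_correlation(rng.standard_normal(n_sensors), Us)
print(true_score, rand_score)
```

A true source's gain vector scores near 1, while an arbitrary vector scores far lower; MUSIC scans candidate locations for peaks of this score.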

  13. PSI/TM-Coffee: a web server for fast and accurate multiple sequence alignments of regular and transmembrane proteins using homology extension on reduced databases.

    Science.gov (United States)

    Floden, Evan W; Tommaso, Paolo D; Chatzou, Maria; Magis, Cedrik; Notredame, Cedric; Chang, Jia-Ming

    2016-07-08

    The PSI/TM-Coffee web server performs multiple sequence alignment (MSA) of proteins by combining homology extension with a consistency based alignment approach. Homology extension is performed with Position Specific Iterative (PSI) BLAST searches against a choice of redundant and non-redundant databases. The main novelty of this server is to allow databases of reduced complexity to rapidly perform homology extension. This server also gives the possibility to use transmembrane proteins (TMPs) reference databases to allow even faster homology extension on this important category of proteins. Aside from an MSA, the server also outputs topological prediction of TMPs using the HMMTOP algorithm. Previous benchmarking of the method has shown this approach outperforms the most accurate alignment methods such as MSAProbs, Kalign, PROMALS, MAFFT, ProbCons and PRALINE™. The web server is available at http://tcoffee.crg.cat/tmcoffee. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  14. Relapse rates in patients with multiple sclerosis switching from interferon to fingolimod or glatiramer acetate: a US claims database study.

    Directory of Open Access Journals (Sweden)

    Niklas Bergvall

    BACKGROUND: Approximately one-third of patients with multiple sclerosis (MS) are unresponsive to, or intolerant of, interferon (IFN) therapy, prompting a switch to other disease-modifying therapies. Clinical outcomes of switching therapy are unknown. This retrospective study assessed differences in relapse rates among patients with MS switching from IFN to fingolimod or glatiramer acetate (GA) in a real-world setting. METHODS: US administrative claims data from the PharMetrics Plus™ database were used to identify patients with MS who switched from IFN to fingolimod or GA between October 1, 2010 and March 31, 2012. Patients were matched 1:1 using propensity scores within strata (number of pre-index relapses) on demographic (e.g. age and gender) and disease (e.g. timing of pre-index relapse, comorbidities and symptoms) characteristics. A claims-based algorithm was used to identify relapses while patients were persistent with therapy over 360 days post-switch. Differences in both the probability of experiencing a relapse and the annualized relapse rate (ARR) while persistent with therapy were assessed. RESULTS: The matched sample population contained 264 patients (n = 132 in each cohort). Before switching, 33.3% of patients in both cohorts had experienced at least one relapse. During the post-index persistence period, the proportion of patients with at least one relapse was lower in the fingolimod cohort (12.9%) than in the GA cohort (25.0%), and ARRs were lower with fingolimod (0.19) than with GA (0.51). Patients treated with fingolimod had a 59% lower probability of relapse (odds ratio, 0.41; 95% confidence interval [CI], 0.21-0.80; p = 0.0091) and 62% fewer relapses per year (rate ratio, 0.38; 95% CI, 0.21-0.68; p = 0.0013) compared with those treated with GA. CONCLUSIONS: In a real-world setting, patients with MS who switched from IFNs to fingolimod were significantly less likely to experience relapses than those who switched to GA.
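
The 1:1 propensity-score matching step of such a design can be sketched with a greedy nearest-neighbor pairing; the patient identifiers, scores, and caliper below are hypothetical, and the study itself matched within strata of pre-index relapse counts.

```python
# Minimal sketch of greedy 1:1 propensity-score matching: pair each treated
# (fingolimod) patient with the closest-scoring unmatched control (GA)
# patient within a caliper. All identifiers and scores are hypothetical.
def match_1to1(treated, controls, caliper=0.05):
    """treated/controls: lists of (patient_id, propensity_score) -> matched pairs."""
    pairs, used = [], set()
    for pid, score in sorted(treated, key=lambda t: t[1]):
        best = None
        for cid, cscore in controls:
            if cid in used or abs(cscore - score) > caliper:
                continue
            if best is None or abs(cscore - score) < abs(best[1] - score):
                best = (cid, cscore)
        if best:
            used.add(best[0])
            pairs.append((pid, best[0]))
    return pairs

fingolimod = [("F1", 0.31), ("F2", 0.52), ("F3", 0.90)]
ga = [("G1", 0.33), ("G2", 0.50), ("G3", 0.34)]
print(match_1to1(fingolimod, ga))  # F3 has no control within the caliper
```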

  15. Accounting for multiple sources of uncertainty in impact assessments: The example of the BRACE study

    Science.gov (United States)

    O'Neill, B. C.

    2015-12-01

    Assessing climate change impacts often requires the use of multiple scenarios, types of models, and data sources, leading to a large number of potential sources of uncertainty. For example, a single study might require a choice of a forcing scenario, climate model, bias correction and/or downscaling method, societal development scenario, model (typically several) for quantifying elements of societal development such as economic and population growth, biophysical model (such as for crop yields or hydrology), and societal impact model (e.g. economic or health model). Some sources of uncertainty are reduced or eliminated by the framing of the question. For example, it may be useful to ask what an impact outcome would be conditional on a given societal development pathway, forcing scenario, or policy. However, many sources of uncertainty remain, and it is rare for all or even most of these sources to be accounted for. I use the example of a recent integrated project on the Benefits of Reduced Anthropogenic Climate changE (BRACE) to explore useful approaches to uncertainty across multiple components of an impact assessment. BRACE comprises 23 papers that assess the differences in impacts between two alternative climate futures: those associated with Representative Concentration Pathways (RCPs) 4.5 and 8.5. It quantifies differences in impacts in terms of extreme events, health, agriculture, tropical cyclones, and sea level rise. Methodologically, it includes climate modeling, statistical analysis, integrated assessment modeling, and sector-specific impact modeling. It employs alternative scenarios of both radiative forcing and societal development, but generally uses a single climate model (CESM), partially accounting for climate uncertainty by drawing heavily on large initial condition ensembles. Strengths and weaknesses of the approach to uncertainty in BRACE are assessed. Options under consideration for improving the approach include the use of perturbed physics

  16. Interpolating between random walks and optimal transportation routes: Flow with multiple sources and targets

    Science.gov (United States)

    Guex, Guillaume

    2016-05-01

    In recent articles about graphs, different models proposed a formalism to find a type of path between two nodes, the source and the target, at a crossroads between the shortest path and the random-walk path. These models include a freely adjustable parameter, allowing one to tune the behavior of the path toward randomized movements or direct routes. This article presents a natural generalization of these models, namely a model with multiple sources and targets. In this context, source nodes can be viewed as locations with a supply of a certain good (e.g. people, money, information) and target nodes as locations with a demand for the same good. An algorithm is constructed to display the flow of goods in the network between sources and targets. With again a freely adjustable parameter, this flow can be tuned to follow routes of minimum cost, thus displaying the flow in the context of the optimal transportation problem or, by contrast, a random flow, known to be similar to the electrical current flow if the random walk is reversible. Moreover, a source-target coupling can be retrieved from this flow, offering an optimal assignment for the transportation problem. This algorithm is described in the first part of this article and then illustrated with case studies.

  17. View-Aware Image Object Compositing and Synthesis from Multiple Sources

    Institute of Scientific and Technical Information of China (English)

    Xiang Chen; Wei-Wei Xu; Sai-Kit Yeung; Kun Zhou

    2016-01-01

    Image compositing is widely used to combine visual elements from separate source images into a single image. Although recent image compositing techniques are capable of achieving smooth blending of visual elements from different sources, most of them implicitly assume that the source images are taken from the same viewpoint. In this paper, we present an approach to compositing novel image objects from multiple source images that have different viewpoints. Our key idea is to construct 3D proxies for meaningful components of the source image objects, and use these 3D component proxies to warp and seamlessly merge the components together in the same viewpoint. To realize this idea, we introduce a coordinate-frame-based single-view camera calibration algorithm to handle general types of image objects, a structure-aware cuboid optimization algorithm to obtain cuboid proxies for image object components with the correct structural relationships, and finally a 3D-proxy-transformation-guided image warping algorithm to stitch object components. We further describe a novel application based on this compositing approach to automatically synthesize a large number of image objects from a set of exemplars. Experimental results show that our compositing approach can be applied to a variety of image objects, such as chairs, cups, lamps, and robots, and the synthesis application can create novel image objects with significant shape and style variations from a small set of exemplars.

  18. A factor analysis-multiple regression model for source apportionment of suspended particulate matter

    Science.gov (United States)

    Okamoto, Shin'ichi; Hayashi, Masayuki; Nakajima, Masaomi; Kainuma, Yasutaka; Shiozawa, Kiyoshige

    A factor analysis-multiple regression (FA-MR) model has been used for a source apportionment study in the Tokyo metropolitan area. By a varimax-rotated factor analysis, five source types could be identified: refuse incineration, soil and automobile, secondary particles, sea salt and steel mill. Quantitative estimations using the FA-MR model corresponded to the contributing concentrations calculated with a weighted least-squares CMB model. However, the source type of refuse incineration identified by the FA-MR model was similar to that of biomass burning, rather than that produced by an incineration plant. The estimated contributions of sea salt and steel mill by the FA-MR model contained those of other sources that have the same temporal variation of contributing concentrations; this artifact was caused by multicollinearity. Although this result shows the limitation of the multivariate receptor model, it gives useful information concerning source types and their distribution when compared with the results of the CMB model. In the Tokyo metropolitan area, the contributions from soil (including road dust), automobile, secondary particles and refuse incineration (biomass burning) were larger than the industrial contributions: fuel oil combustion and steel mill. However, since vanadium is highly correlated with SO₄²⁻ and other secondary-particle-related elements, a major portion of the secondary particles is considered to be related to fuel oil combustion.
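
The multiple-regression step of an FA-MR receptor model can be sketched as follows; the factor scores and coefficients are synthetic, and the factor-extraction (varimax) step is assumed to have been done already.

```python
# Sketch of the regression step of FA-MR source apportionment: total
# suspended particulate mass is regressed on (already extracted) factor
# scores; each coefficient scales that factor's source contribution.
# All data are synthetic.
import numpy as np

rng = np.random.default_rng(5)
n_samples = 120
scores = rng.standard_normal((n_samples, 3))        # hypothetical factor scores:
                                                    # incineration, soil/auto, sea salt
beta_true = np.array([12.0, 25.0, 6.0])             # µg/m3 per unit factor score
intercept = 40.0
spm = intercept + scores @ beta_true + rng.normal(0.0, 1.0, n_samples)

X = np.column_stack([np.ones(n_samples), scores])   # add intercept column
coef, *_ = np.linalg.lstsq(X, spm, rcond=None)
print(coef)  # ≈ [intercept, 12, 25, 6]
```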

  19. CoCoTools: open-source software for building connectomes using the CoCoMac anatomical database.

    Science.gov (United States)

    Blumenfeld, Robert S; Bliss, Daniel P; Perez, Fernando; D'Esposito, Mark

    2014-04-01

    Neuroanatomical tracer studies in the nonhuman primate macaque monkey are a valuable resource for cognitive neuroscience research. These data ground theories of cognitive function in anatomy, and with the emergence of graph theoretical analyses in neuroscience, there is high demand for these data to be consolidated into large-scale connection matrices ("macroconnectomes"). Because manual review of the anatomical literature is time consuming and error prone, computational solutions are needed to accomplish this task. Here we describe the "CoCoTools" open-source Python library, which automates collection and integration of macaque connectivity data for visualization and graph theory analysis. CoCoTools both interfaces with the CoCoMac database, which houses a vast amount of annotated tracer results from 100 years (1905-2005) of neuroanatomical research, and implements coordinate-free registration algorithms, which allow studies that use different parcellations of the brain to be translated into a single graph. We show that using CoCoTools to translate all of the data stored in CoCoMac produces graphs with properties consistent with what is known about global brain organization. Moreover, in addition to describing CoCoTools' processing pipeline, we provide worked examples, tutorials, links to on-line documentation, and detailed appendices to aid scientists interested in using CoCoTools to gather and analyze CoCoMac data.

  20. Application of evidence theory in information fusion of multiple sources in bayesian analysis

    Institute of Scientific and Technical Information of China (English)

    周忠宝; 蒋平; 武小悦

    2004-01-01

    How to obtain a proper prior distribution is one of the most critical problems in Bayesian analysis. In many practical cases, the prior information comes from different sources, and the form of the prior distribution may be easy to establish while its parameters are hard to determine. In this paper, based on evidence theory, a new method is presented to fuse the information of multiple sources and determine the parameters of the prior distribution when its form is known. By taking the prior distributions that result from the information of multiple sources and converting them into corresponding mass functions, which can be combined by the Dempster-Shafer (D-S) method, we get the combined mass function and the representative points of the prior distribution. These points are used to fit the given distribution form to determine the parameters of the prior distribution. The fused prior distribution is then obtained and Bayesian analysis can be performed. How to convert the prior distributions into mass functions properly and how to get the representative points of the fused prior distribution are the central questions we address in this paper. A simulation example shows that the proposed method is effective.
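
The Dempster-Shafer combination step at the core of this fusion can be sketched as follows; the frame of discernment and the mass assignments are hypothetical.

```python
# Minimal sketch of Dempster's rule of combination for fusing mass functions
# from two prior-information sources. Focal elements are frozensets over the
# frame of discernment; the masses below are hypothetical.
def combine(m1, m2):
    """Dempster's rule: normalized conjunctive combination of two mass functions."""
    fused, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                fused[inter] = fused.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    return {k: v / (1.0 - conflict) for k, v in fused.items()}

m1 = {frozenset("A"): 0.6, frozenset("AB"): 0.4}   # source 1: mostly supports A
m2 = {frozenset("B"): 0.3, frozenset("AB"): 0.7}   # source 2: weakly supports B
print(combine(m1, m2))
```

The combined masses sum to one after the conflict (here 0.6 × 0.3 = 0.18) is renormalized away.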

  1. Evaluation of multiple-sphere head models for MEG source localization

    Energy Technology Data Exchange (ETDEWEB)

    Lalancette, M; Cheyne, D [Department of Diagnostic Imaging, The Hospital for Sick Children, 555 University Ave., Toronto, Ontario M5G 1X8 (Canada); Quraan, M, E-mail: marc.lalancette@sickkids.ca, E-mail: douglas.cheyne@utoronto.ca [Krembil Neuroscience Centre, Toronto Western Research Institute, University Health Network, Toronto, Ontario M5T 2S8 (Canada)

    2011-09-07

    Magnetoencephalography (MEG) source analysis has largely relied on spherical conductor models of the head to simplify forward calculations of the brain's magnetic field. Multiple- (or overlapping, local) sphere models, where an optimal sphere is selected for each sensor, are considered an improvement over single-sphere models and are computationally simpler than realistic models. However, there is limited information available regarding the different methods used to generate these models and their relative accuracy. We describe a variety of single- and multiple-sphere fitting approaches, including a novel method that attempts to minimize the field error. An accurate boundary element method simulation was used to evaluate the relative field measurement error (12% on average) and dipole fit localization bias (3.5 mm) of each model over the entire brain. All spherical models can contribute in the order of 1 cm to the localization bias in regions of the head that depart significantly from a sphere (inferior frontal and temporal). These spherical approximation errors can give rise to larger localization differences when all modeling effects are taken into account and with more complex source configurations or other inverse techniques, as shown with a beamformer example. Results differed noticeably depending on the source location, making it difficult to recommend a fitting method that performs best in general. Given these limitations, it may be advisable to expand the use of realistic head models.

  2. Glutathione provides a source of cysteine essential for intracellular multiplication of Francisella tularensis.

    Directory of Open Access Journals (Sweden)

    Khaled Alkhuder

    2009-01-01

Full Text Available Francisella tularensis is a highly infectious bacterium causing the zoonotic disease tularemia. Its ability to multiply and survive in macrophages is critical for its virulence. By screening a bank of HimarFT transposon mutants of the F. tularensis live vaccine strain (LVS) to isolate intracellular growth-deficient mutants, we selected one mutant in a gene encoding a putative gamma-glutamyl transpeptidase (GGT). This gene (FTL_0766) was hence designated ggt. The mutant strain showed impaired intracellular multiplication and was strongly attenuated for virulence in mice. Here we present evidence that the GGT activity of F. tularensis allows utilization of glutathione (GSH, gamma-glutamyl-cysteinyl-glycine) and gamma-glutamyl-cysteine dipeptide as cysteine sources to ensure intracellular growth. This is the first demonstration of the essential role of a nutrient acquisition system in the intracellular multiplication of F. tularensis. GSH is the most abundant source of cysteine in the host cytosol. Thus, the capacity this intracellular bacterial pathogen has evolved to utilize the available GSH as a source of cysteine in the host cytosol constitutes a paradigm of bacteria-host adaptation.

  3. Evaluation of multiple-sphere head models for MEG source localization.

    Science.gov (United States)

    Lalancette, M; Quraan, M; Cheyne, D

    2011-09-07

    Magnetoencephalography (MEG) source analysis has largely relied on spherical conductor models of the head to simplify forward calculations of the brain's magnetic field. Multiple- (or overlapping, local) sphere models, where an optimal sphere is selected for each sensor, are considered an improvement over single-sphere models and are computationally simpler than realistic models. However, there is limited information available regarding the different methods used to generate these models and their relative accuracy. We describe a variety of single- and multiple-sphere fitting approaches, including a novel method that attempts to minimize the field error. An accurate boundary element method simulation was used to evaluate the relative field measurement error (12% on average) and dipole fit localization bias (3.5 mm) of each model over the entire brain. All spherical models can contribute in the order of 1 cm to the localization bias in regions of the head that depart significantly from a sphere (inferior frontal and temporal). These spherical approximation errors can give rise to larger localization differences when all modeling effects are taken into account and with more complex source configurations or other inverse techniques, as shown with a beamformer example. Results differed noticeably depending on the source location, making it difficult to recommend a fitting method that performs best in general. Given these limitations, it may be advisable to expand the use of realistic head models.

  4. The impact of different sources of body mass index assessment on smoking onset: An application of multiple-source information models.

    Science.gov (United States)

    Caria, Maria Paola; Bellocco, Rino; Galanti, Maria Rosaria; Horton, Nicholas J

    2011-01-01

Multiple-source data are often collected to provide better information about some underlying construct that is difficult to measure or likely to be missing. In this article, we describe regression-based methods for analyzing multiple-source data in Stata. We use data from the BROMS Cohort Study, a cohort of Swedish adolescents for whom body mass index was both self-reported and measured by nurses. We draw both source reports together into a single frame of reference and relate these to smoking onset. This unified method has two advantages over traditional approaches: 1) the relative predictiveness of each source can be assessed, and 2) all subjects contribute to the analysis. The methods are applicable to other areas of epidemiology where multiple-source reports are used.

  5. Reducing the probability of false positive research findings by pre-publication validation – Experience with a large multiple sclerosis database

    Directory of Open Access Journals (Sweden)

    Heinz Moritz

    2008-04-01

Full Text Available Abstract Background Published false positive research findings are a major problem in the process of scientific discovery. There is a high rate of failure to replicate results in clinical research in general, and multiple sclerosis research is no exception. Our aim was to develop and implement a policy that reduces the probability of publishing false positive research findings. We have assessed the utility of working with a pre-publication validation policy after several years of research in the context of a large multiple sclerosis database. Methods The large database of the Sylvia Lawry Centre for Multiple Sclerosis Research was split in two parts: one for hypothesis generation and a validation part for confirmation of selected results. We present case studies from 5 finalized projects that have used the validation policy, and results from a simulation study. Results In one project, the "relapse and disability" project described in section II (example 3), findings could not be confirmed in the validation part of the database. The simulation study showed that the percentage of false positive findings can exceed 20% depending on variable selection. Conclusion We conclude that the validation policy has prevented the publication of at least one research finding that could not be validated in an independent data set (and probably would have been a "true" false positive finding) over the past three years, and has led to improved data analysis, statistical programming, and selection of hypotheses. The advantages outweigh the loss of statistical power inherent in the process.
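The split-and-validate policy can be sketched generically: partition the records at random, generate hypotheses on one part, and report a finding only if it also holds in the held-back part. The function names and the toy "finding" below are hypothetical, not the Centre's actual procedure.

```python
import random

def split_sample(records, validation_frac=0.5, seed=42):
    """Randomly split a database into a hypothesis-generation part
    and a held-back validation part."""
    rng = random.Random(seed)
    shuffled = list(records)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - validation_frac))
    return shuffled[:cut], shuffled[cut:]

def publishable(finding_holds, explore, validate):
    """Under the policy, a result is reported only if the finding
    replicates in the independent validation part."""
    return finding_holds(explore) and finding_holds(validate)

explore, validate = split_sample(range(100))
# A toy "finding": the sample mean exceeds 40
finding = lambda half: sum(half) / len(half) > 40
reported = publishable(finding, explore, validate)
```

A chance finding in the exploration half alone is filtered out, at the cost of the reduced statistical power the abstract acknowledges.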

  6. Computational determination of absorbed dose distributions from multiple volumetric gamma ray sources

    Science.gov (United States)

    Zhou, Chuanyu; Inanc, Feyzi

    2002-05-01

Determination of absorbed dose distributions is very important in brachytherapy procedures. The typical computation involves superposition of absorbed dose distributions from a single seed to compute the combined absorbed dose distribution formed by multiple seeds. This approach does not account for the shadow effect caused by the metallic nature of volumetric radioactive seeds. Since this shadow effect causes deviations from the targeted dose distribution, it may have important implications for the success of the procedures. We demonstrated the accuracy of our deterministic algorithms for isotropic point sources in the past. We will show that we now have the capability of computing absorbed dose distributions from multiple volumetric seeds and demonstrate that our results are quite close to the results published in the literature.

  7. Three Dimensional Electromagnetic Inversion Based on Electric Dipole Source Multiple Locations Excitation

    Directory of Open Access Journals (Sweden)

    Jianping Li

    2013-07-01

Full Text Available In this paper, we use an integral-equation formulation and the damped least-squares method to invert the electromagnetic field of a three-dimensional anomalous body excited by a horizontal electric dipole source at multiple locations. Multiple groups of electromagnetic field data, from different excitation and receiving points, are considered jointly in a single inversion; the Jacobian matrix is obtained and divided into linear and nonlinear terms. Finally, the forward-simulation data are fitted to the measured data, and the geoelectric model parameter values are modified step by step until an optimal fit is achieved, yielding the resistivity of the three-dimensional anomalous body. Model tests show that the inversion algorithm converges quickly, depends little on the initial value, and gives accurate and reliable results. It is an effective solution to inversion failures caused by an insufficient amount of data.

  8. GYRE: An open-source stellar oscillation code based on a new Magnus Multiple Shooting Scheme

    CERN Document Server

    Townsend, R H D

    2013-01-01

    We present a new oscillation code, GYRE, which solves the stellar pulsation equations (both adiabatic and non-adiabatic) using a novel Magnus Multiple Shooting numerical scheme devised to overcome certain weaknesses of the usual relaxation and shooting schemes appearing in the literature. The code is accurate (up to 6th order in the number of grid points), robust, efficiently makes use of multiple processor cores and/or nodes, and is freely available in source form for use and distribution. We verify the code against analytic solutions and results from other oscillation codes, in all cases finding good agreement. Then, we use the code to explore how the asteroseismic observables of a 1.5 M_sun star change as it evolves through the red-giant bump.
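GYRE's Magnus Multiple Shooting scheme is beyond a short sketch, but the underlying shooting idea, integrating from one boundary and adjusting a free parameter until the far boundary condition is met, can be illustrated on a toy two-point boundary-value problem. This is a generic textbook scheme, not GYRE's algorithm, and the problem below is purely illustrative.

```python
import math

def rk4(f, y0, x0, x1, n=200):
    """Classic 4th-order Runge-Kutta integration of the system y' = f(x, y)."""
    h = (x1 - x0) / n
    x, y = x0, list(y0)
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
        k3 = f(x + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
        k4 = f(x + h, [yi + h * ki for yi, ki in zip(y, k3)])
        y = [yi + h / 6 * (a + 2 * b + 2 * c + d)
             for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
        x += h
    return y

# Toy boundary-value problem: y'' = -y with y(0) = 0 and y(pi/2) = 1
f = lambda x, y: [y[1], -y[0]]

def miss(slope):
    """End-point error when shooting with trial initial slope y'(0)."""
    return rk4(f, [0.0, slope], 0.0, math.pi / 2)[0] - 1.0

lo, hi = 0.0, 2.0                    # bracket the correct slope
for _ in range(60):                  # bisect on the shooting parameter
    mid = (lo + hi) / 2
    lo, hi = (lo, mid) if miss(lo) * miss(mid) <= 0 else (mid, hi)
slope = (lo + hi) / 2                # exact solution is y = sin(x), so y'(0) = 1
```

Multiple-shooting schemes like GYRE's subdivide the domain and match many such segments simultaneously, which improves stability for stiff oscillation equations.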

  9. Semantic-JSON: a lightweight web service interface for Semantic Web contents integrating multiple life science databases.

    Science.gov (United States)

    Kobayashi, Norio; Ishii, Manabu; Takahashi, Satoshi; Mochizuki, Yoshiki; Matsushima, Akihiro; Toyoda, Tetsuro

    2011-07-01

    Global cloud frameworks for bioinformatics research databases become huge and heterogeneous; solutions face various diametric challenges comprising cross-integration, retrieval, security and openness. To address this, as of March 2011 organizations including RIKEN published 192 mammalian, plant and protein life sciences databases having 8.2 million data records, integrated as Linked Open or Private Data (LOD/LPD) using SciNetS.org, the Scientists' Networking System. The huge quantity of linked data this database integration framework covers is based on the Semantic Web, where researchers collaborate by managing metadata across public and private databases in a secured data space. This outstripped the data query capacity of existing interface tools like SPARQL. Actual research also requires specialized tools for data analysis using raw original data. To solve these challenges, in December 2009 we developed the lightweight Semantic-JSON interface to access each fragment of linked and raw life sciences data securely under the control of programming languages popularly used by bioinformaticians such as Perl and Ruby. Researchers successfully used the interface across 28 million semantic relationships for biological applications including genome design, sequence processing, inference over phenotype databases, full-text search indexing and human-readable contents like ontology and LOD tree viewers. Semantic-JSON services of SciNetS.org are provided at http://semanticjson.org.

  10. Swept-Source OCT Angiography Shows Sparing of the Choriocapillaris in Multiple Evanescent White Dot Syndrome.

    Science.gov (United States)

    Yannuzzi, Nicolas A; Swaminathan, Swarup S; Zheng, Fang; Miller, Andrew; Gregori, Giovanni; Davis, Janet L; Rosenfeld, Philip J

    2017-01-01

    Two women with unilateral vision loss from multiple evanescent white dot syndrome were imaged serially with swept-source optical coherence tomography (SS-OCT). En face wide-field structural images revealed peripapillary outer photoreceptor disruption better than conventional fundus autofluorescence imaging. OCT angiography (OCTA) imaging showed preservation of flow within the retinal vasculature and choriocapillaris. As OCTA imaging of the choriocapillaris continues to evolve, these images may lay the groundwork for future investigation. [Ophthalmic Surg Lasers Imaging Retina. 2017;48:69-74.].

  11. An Open-source Toolbox for Analysing and Processing PhysioNet Databases in MATLAB and Octave.

    Science.gov (United States)

    Silva, Ikaro; Moody, George B

The WaveForm DataBase (WFDB) Toolbox for MATLAB/Octave enables integrated access to PhysioNet's software and databases. Using the WFDB Toolbox for MATLAB/Octave, users have access to over 50 physiological databases in PhysioNet. The toolbox provides access to over 4 TB of biomedical signals including ECG, EEG, EMG, and PLETH. Additionally, most signals are accompanied by metadata such as medical annotations of clinical events: arrhythmias, sleep stages, seizures, hypotensive episodes, etc. Users of this toolbox should easily be able to reproduce, validate, and compare results published based on PhysioNet's software and databases.

  12. Joint source based analysis of multiple brain structures in studying major depressive disorder

    Science.gov (United States)

    Ramezani, Mahdi; Rasoulian, Abtin; Hollenstein, Tom; Harkness, Kate; Johnsrude, Ingrid; Abolmaesumi, Purang

    2014-03-01

We propose a joint Source-Based Analysis (jSBA) framework to identify brain structural variations in patients with Major Depressive Disorder (MDD). In this framework, features representing position, orientation and size (i.e. pose), shape, and local tissue composition are extracted. Subsequently, simultaneous analysis of these features within a joint analysis method is performed to generate the basis sources that show significant differences between subjects with MDD and those in the healthy control group. Moreover, in a leave-one-out cross-validation experiment, we use a Fisher Linear Discriminant (FLD) classifier to identify individuals within the MDD group. Results show that we can classify the MDD subjects with an accuracy of 76% solely based on the information gathered from the joint analysis of pose, shape, and tissue composition in multiple brain structures.

  13. Proceedings of the 4th MultiClust Workshop on Multiple Clusterings, Multi-view Data, and Multi-source Knowledge-driven Clustering

    DEFF Research Database (Denmark)

Cluster detection is a very traditional data analysis task with several decades of research. However, it also includes a large variety of different subtopics investigated by different communities such as data mining, machine learning, statistics, and database systems. "Multiple Clusterings, Multi-view Data, and Multi-source Knowledge-driven Clustering" names several challenges around clustering: making sense or even making use of many, possibly redundant, clustering results, of different representations and properties of data, and of different sources of knowledge. Approaches such as ensemble clustering, semi-supervised clustering, and subspace clustering meet around these problems. Yet they tackle these problems with different backgrounds, focus on different details, and include ideas from different research communities. This diversity is a major potential for this emerging field and should be highlighted...

  14. Researching the mental health needs of hard-to-reach groups: managing multiple sources of evidence

    Directory of Open Access Journals (Sweden)

    Lamb Jonathan

    2009-12-01

    Full Text Available Abstract Background Common mental health problems impose substantial challenges to patients, carers, and health care systems. A range of interventions have demonstrable efficacy in improving the lives of people experiencing such problems. However many people are disadvantaged, either because they are unable to access primary care, or because access does not lead to adequate help. New methods are needed to understand the problems of access and generate solutions. In this paper we describe our methodological approach to managing multiple and diverse sources of evidence, within a research programme to increase equity of access to high quality mental health services in primary care. Methods We began with a scoping review to identify the range and extent of relevant published material, and establish key concepts related to access. We then devised a strategy to collect - in parallel - evidence from six separate sources: a systematic review of published quantitative data on access-related studies; a meta-synthesis of published qualitative data on patient perspectives; dialogues with local stakeholders; a review of grey literature from statutory and voluntary service providers; secondary analysis of patient transcripts from previous qualitative studies; and primary data from interviews with service users and carers. We synthesised the findings from these diverse sources, made judgements on key emerging issues in relation to needs and services, and proposed a range of potential interventions. These proposals were debated and refined using iterative electronic and focus group consultation procedures involving international experts, local stakeholders and service users. Conclusions Our methods break new ground by generating and synthesising multiple sources of evidence, connecting scientific understanding with the perspectives of users, in order to develop innovative ways to meet the mental health needs of under-served groups.

  15. Do individual Spitzer young stellar object candidates enclose multiple UKIDSS sources?

    CERN Document Server

    Morales, Esteban F E

    2016-01-01

We analyze near-infrared UKIDSS observations of a sample of 8325 objects taken from a catalog of intrinsically red sources in the Galactic plane selected in the Spitzer-GLIMPSE survey. Given the differences in angular resolution (a factor of >2 better in UKIDSS), our aim is to investigate whether there are multiple UKIDSS sources that might all contribute to the GLIMPSE flux, or whether there is only one dominant UKIDSS counterpart. We then study possible corrections to estimates of the SFR based on counts of GLIMPSE young stellar objects (YSOs). This represents an exploratory work towards the construction of a hierarchical YSO catalog. After performing PSF fitting photometry on the UKIDSS data, we implemented a technique to automatically recognize the dominant UKIDSS sources by evaluating their match with the spectral energy distribution (SED) of the associated GLIMPSE red sources. This is a generic method which could be robustly applied for matching SEDs across gaps at other wavelengths. We found that most (87.0% ± 1.6...

  16. Advanced prediction for multiple disaster sources of laneway under complicated geological conditions

    Institute of Scientific and Technical Information of China (English)

    Wang Bo; Liu Shengdong; Liu Jing; Huang Lanying; Zhao Ligui

    2011-01-01

The driving safety of laneways is often controlled by multiple disaster sources, which include fault fracture zones, water-bearing bodies, goafs and collapse columns. Their advanced prediction has become a research hotspot. Based on an analysis of the physical characteristics of the disaster sources and a comparative evaluation of the accuracy of the main advanced geophysical detection methods, we propose a comprehensive judging criterion: a tectonic interface can be judged by an elastic-wave energy anomaly, and the water abundance of strata can be discriminated by differences in apparent-resistivity response; on this basis we establish a reasonable advanced prediction system. The results show that concealed disaster sources are detected effectively, with an accuracy rate of 80%, when advanced prediction methods of integrated geophysics are used together with corrections of seismic and electromagnetic parameters; moreover, by applying geological data, we can distinguish the types of disaster sources and provide a qualitative forecast. The advanced prediction system therefore plays an important referential and instructive role in laneway driving projects.

  17. A Multiple System of Radio Sources at the Core of the L723 Multipolar Outflow

    CERN Document Server

    Carrasco-Gonzalez, Carlos; Rodriguez, Luis F; Torrelles, Jose M; Osorio, Mayra; Girart, Jose M

    2007-01-01

    We present high angular resolution Very Large Array multi-epoch continuum observations at 3.6 cm and 7 mm towards the core of the L723 multipolar outflow revealing a multiple system of four radio sources suspected to be YSOs in a region of only ~4 arcsecs (1200 AU) in extent. The 3.6 cm observations show that the previously detected source VLA 2 contains a close (separation ~0.29 arcsecs or ~90 AU) radio binary, with components (A and B) along a position angle of ~150 degrees. The northern component (VLA 2A) of this binary system is also detected in the 7 mm observations, with a positive spectral index between 3.6 cm and 7 mm. In addition, the source VLA 2A is associated with extended emission along a position angle of ~115 degrees, that we interpret as outflowing shock-ionized gas that is exciting a system of HH objects with the same position angle. A third, weak 3.6 cm source, VLA 2C, that is detected also at 7 mm, is located ~0.7 arcsecs northeast of VLA 2A, and is possibly associated with the water maser ...

  18. DMPD: Suppressor of cytokine signaling (SOCS) 2, a protein with multiple functions. [Dynamic Macrophage Pathway CSML Database

    Lifescience Database Archive (English)

Full Text Available Title: Suppressor of cytokine signaling (SOCS) 2, a protein with multiple functions. Authors: Rico-Bautista E, Flores-Morales A, Fernandez-Perez L. PubmedID: 17070092. Epub 2006 Oct 27.

  19. Moisture transport pathways into the American Southwest from multiple oceanic sources as deduced from hydrogen isotopes.

    Science.gov (United States)

    Strong, M.; Sharp, Z. D.; Gutzler, D. S.

    2006-12-01

    switch from the Gulf of Mexico to the Gulf of California in as little as 12 hours. Variations of δDwv are also observed within vertical profiles, where multiple layers of water vapor with distinctive δDwv values are usually noted. Trajectory analyses terminated at different altitudes allow us to correlate these variations of δDwv with different source regions. It appears that within a single column of air, water vapor from multiple source regions may be present. We also conclude that water vapor contributions from evapotranspiration in this semi-arid area are too small to significantly affect δDwv values.

  20. Optimizing Irrigation Water Allocation under Multiple Sources of Uncertainty in an Arid River Basin

    Science.gov (United States)

    Wei, Y.; Tang, D.; Gao, H.; Ding, Y.

    2015-12-01

Population growth and climate change add pressures that affect water resources management strategies for meeting demands from different economic sectors. This is especially challenging in arid regions where fresh water is limited. For instance, in the Tailanhe River Basin (Xinjiang, China), a compromise must be made between water suppliers and users during drought years. This study presents a multi-objective irrigation water allocation model to cope with water scarcity in arid river basins. To deal with the uncertainties from multiple sources in the water allocation system (e.g., variations in the available water amount, crop yield, crop prices, and water price), the model employs an interval linear programming approach. The multi-objective optimization model developed in this study is characterized by integrating ecosystem service theory into water-saving measures. For evaluation purposes, the model is used to construct an optimal allocation system for irrigation areas fed by the Tailan River (Xinjiang Province, China). The objective functions to be optimized are formulated based on these irrigation areas' economic, social, and ecological benefits. Optimal irrigation water allocation plans are made under different hydroclimatic conditions (wet year, normal year, and dry year), with multiple sources of uncertainty represented. The modeling tool and results are valuable for advising decision making by the local water authority and the agricultural community, especially on measures for coping with water scarcity (by incorporating uncertain factors associated with crop production planning).
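The interval-programming idea of solving the allocation at both bounds of an uncertain quantity can be illustrated with a deliberately simplified model: a single shared water supply with linear per-unit benefits, for which a greedy (fractional-knapsack) allocation is the linear-programming optimum. All crop names, demands, and benefit coefficients below are hypothetical; this is not the paper's model.

```python
def allocate(demands, benefits, supply):
    """Greedy allocation of one shared water supply with linear per-unit
    benefits: serve crops in decreasing benefit order (the LP optimum
    for this single-constraint special case)."""
    alloc = {}
    remaining = supply
    for crop in sorted(demands, key=lambda c: benefits[c], reverse=True):
        give = min(demands[crop], remaining)
        alloc[crop] = give
        remaining -= give
    return alloc

# Hypothetical demands (10^6 m^3) and benefit coefficients per unit of water
demands = {"cotton": 40.0, "wheat": 30.0, "fruit": 20.0}
benefits = {"cotton": 3.0, "wheat": 2.0, "fruit": 5.0}

# Interval uncertainty on supply: solve at both bounds of the interval
dry_alloc = allocate(demands, benefits, supply=50.0)  # dry-year bound
wet_alloc = allocate(demands, benefits, supply=90.0)  # wet-year bound
```

Solving the two bound problems brackets the optimal plan over the whole supply interval, which is the essence of the interval linear programming approach.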

  1. E-SovTox: An online database of the main publicly-available sources of toxicity data concerning REACH-relevant chemicals published in the Russian language.

    Science.gov (United States)

    Sihtmäe, Mariliis; Blinova, Irina; Aruoja, Villem; Dubourguier, Henri-Charles; Legrand, Nicolas; Kahru, Anne

    2010-08-01

    A new open-access online database, E-SovTox, is presented. E-SovTox provides toxicological data for substances relevant to the EU Registration, Evaluation, Authorisation and Restriction of Chemicals (REACH) system, from publicly-available Russian language data sources. The database contains information selected mainly from scientific journals published during the Soviet Union era. The main information source for this database - the journal, Gigiena Truda i Professional'nye Zabolevania [Industrial Hygiene and Occupational Diseases], published between 1957 and 1992 - features acute, but also chronic, toxicity data for numerous industrial chemicals, e.g. for rats, mice, guinea-pigs and rabbits. The main goal of the abovementioned toxicity studies was to derive the maximum allowable concentration limits for industrial chemicals in the occupational health settings of the former Soviet Union. Thus, articles featured in the database include mostly data on LD50 values, skin and eye irritation, skin sensitisation and cumulative properties. Currently, the E-SovTox database contains toxicity data selected from more than 500 papers covering more than 600 chemicals. The user is provided with the main toxicity information, as well as abstracts of these papers in Russian and in English (given as provided in the original publication). The search engine allows cross-searching of the database by the name or CAS number of the compound, and the author of the paper. The E-SovTox database can be used as a decision-support tool by researchers and regulators for the hazard assessment of chemical substances.

  2. Source apportionment based on an atmospheric dispersion model and multiple linear regression analysis

    Science.gov (United States)

    Fushimi, Akihiro; Kawashima, Hiroto; Kajihara, Hideo

    Understanding the contribution of each emission source of air pollutants to ambient concentrations is important to establish effective measures for risk reduction. We have developed a source apportionment method based on an atmospheric dispersion model and multiple linear regression analysis (MLR) in conjunction with ambient concentrations simultaneously measured at points in a grid network. We used a Gaussian plume dispersion model developed by the US Environmental Protection Agency called the Industrial Source Complex model (ISC) in the method. Our method does not require emission amounts or source profiles. The method was applied to the case of benzene in the vicinity of the Keiyo Central Coastal Industrial Complex (KCCIC), one of the biggest industrial complexes in Japan. Benzene concentrations were simultaneously measured from December 2001 to July 2002 at sites in a grid network established in the KCCIC and the surrounding residential area. The method was used to estimate benzene emissions from the factories in the KCCIC and from automobiles along a section of a road, and then the annual average contribution of the KCCIC to the ambient concentrations was estimated based on the estimated emissions. The estimated contributions of the KCCIC were 65% inside the complex, 49% at 0.5-km sites, 35% at 1.5-km sites, 20% at 3.3-km sites, and 9% at a 5.6-km site. The estimated concentrations agreed well with the measured values. The estimated emissions from the factories and the road were slightly larger than those reported in the first Pollutant Release and Transfer Register (PRTR). These results support the reliability of our method. This method can be applied to other chemicals or regions to achieve reasonable source apportionments.
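The core of the method above, estimating unknown emission rates by regressing measured concentrations on modeled unit-emission dispersion factors, can be sketched for two sources. The dispersion factors and receptor data below are synthetic and purely illustrative, not output of the ISC model.

```python
def fit_emissions(D, c):
    """Least-squares fit of c ≈ D q for two sources via the 2x2 normal
    equations. D[j][i] is the modeled concentration at receptor j per
    unit emission from source i; c[j] is the measured concentration."""
    a11 = sum(row[0] * row[0] for row in D)
    a12 = sum(row[0] * row[1] for row in D)
    a22 = sum(row[1] * row[1] for row in D)
    b1 = sum(row[0] * cj for row, cj in zip(D, c))
    b2 = sum(row[1] * cj for row, cj in zip(D, c))
    det = a11 * a22 - a12 * a12
    q1 = (a22 * b1 - a12 * b2) / det
    q2 = (a11 * b2 - a12 * b1) / det
    return q1, q2

# Hypothetical unit-emission dispersion factors at 4 receptors (factory, road)
D = [[0.8, 0.1], [0.5, 0.3], [0.2, 0.6], [0.1, 0.2]]
true_q = (10.0, 5.0)                                   # synthetic true emissions
c = [row[0] * true_q[0] + row[1] * true_q[1] for row in D]  # noise-free data
q1, q2 = fit_emissions(D, c)
```

With the emissions recovered, each source's contribution at a receptor is simply its fitted emission times its dispersion factor, which is how the per-site contribution percentages in the abstract are obtained.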

  3. Radiation exposure modeling for apartment living spaces with multiple radioactive sources.

    Science.gov (United States)

    Hwang, J S; Chan, C C; Wang, J D; Chang, W P

    1998-03-01

Since late 1992, over 100 building complexes in Taiwan, including both public and private schools and 1,000 apartments, have been identified as emitting elevated levels of gamma radiation. These high levels of gamma radiation have been traced to construction steel contaminated with 60Co. Accurate reconstruction of the radiation exposure dose among residents is complicated by the discovery of multiple radioactive sources within the living spaces and by the lack of comprehensive information about residents' lifestyles and occupancy patterns within these contaminated spaces. The objective of this study was to evaluate the sensitivity of the dose reconstruction approach currently employed in an epidemiological study of the health effects in these occupants. We apply a statistical method of local smoothing to dose rate estimation and examine factors that are closely associated with radiation exposure from multiple radioactive sources in an apartment. Two examples are used: a simulated measurement in a hypothetical room with three radioactive sources, and a real apartment in Ming-Shan Villa, one of the contaminated buildings. The simulated and estimated means are compared along 5-10 selected points of measurement: by the local smoothing approach, with the furniture-adjusted space, and with the occupancy time-weighted mean. We found that the local smoothing approach came much closer to theoretical values and may serve as a refined method of modeling the radiation dose distribution in exposure estimation. Before environmental exposure assessment, "highly occupied zones" (HOZs) in the contaminated spaces must be identified; estimates of the time spent in these HOZs are essential to obtain accurate dose values. These results will facilitate a more accurate dose reconstruction in the assessment of residential exposure in apartments with elevated levels of radioactivity.
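The local smoothing idea can be sketched with a simple Nadaraya-Watson kernel estimator in one dimension; the paper's actual method differs, and the measurement values below are hypothetical.

```python
import math

def local_smooth(positions, doses, x0, bandwidth=0.5):
    """Nadaraya-Watson kernel estimate of the dose rate at position x0:
    a Gaussian-weighted average of nearby measurements."""
    weights = [math.exp(-((x - x0) ** 2) / (2 * bandwidth ** 2))
               for x in positions]
    return sum(w * d for w, d in zip(weights, doses)) / sum(weights)

# Hypothetical dose-rate measurements (uSv/h) along one wall of a room;
# a contaminated steel member near x = 2 m dominates the field.
positions = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
doses = [1.0, 1.5, 3.0, 8.0, 20.0, 7.0, 2.5]
estimate = local_smooth(positions, doses, 1.75)
```

The estimate at an unmeasured point is pulled toward nearby high readings, which is why identifying highly occupied zones near hidden sources matters for the time-weighted dose.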

  4. The burden of multiple myeloma: assessment on occurrence, outcomes and cost using a retrospective longitudinal study based on administrative claims database

    Directory of Open Access Journals (Sweden)

    Simona De Portu

    2011-12-01

    Full Text Available

    Objective: Multiple myeloma (MM is a malignancy of plasma cells that results in an overproduction of light and heavy chain monoclonal immunoglobulins. Multiple myeloma imposes a significant economic and humanistic burden on patients and society. The present study is aimed at assessing the burden of multiple myeloma in both epidemiologic and economic terms.

    Methods: A retrospective, naturalistic, longitudinal study of the occurrence, outcomes and costs of multiple myeloma was performed using an administrative database. We selected residents of a North-eastern Region of Italy who had their first hospital admission for multiple myeloma during the period 2001-2005. This group was followed up until 31-12-2006, death, or transfer to other regional health services. Direct medical costs were quantified from the perspective of the Regional Health Service.

    Results: During the period 2001-2005, in a population of 1.2 million inhabitants, we observed 517 incident patients diagnosed with MM (52% female). During the observation period, 364 (70.4%) subjects died. Total health care costs per patient over the maximum follow-up were estimated at 76,630 Euro for subjects younger than 70 years and 22,892 Euro in the older group.

    Conclusions: Multiple myeloma imposes a significant epidemiological and economic burden on the healthcare system.

  5. Distinct neural responses to chord violations: a multiple source analysis study.

    Science.gov (United States)

    Garza Villarreal, Eduardo A; Brattico, Elvira; Leino, Sakari; Ostergaard, Leif; Vuust, Peter

    2011-05-10

    The human brain is constantly predicting the auditory environment by representing sequential similarities and extracting temporal regularities. It has been proposed that simple auditory regularities are extracted at lower stations of the auditory cortex and more complex ones at other brain regions, such as the prefrontal cortex. Deviations from auditory regularities elicit a family of early negative electric potentials distributed over the frontal regions of the scalp. In this study, we wished to disentangle the brain processes associated with sequential vs. hierarchical auditory regularities in a musical context by studying the event-related potentials (ERPs), the behavioral responses to violations of these regularities, and the localization of the underlying ERP generators using two different source analysis algorithms. To this end, participants listened to musical cadences consisting of seven chords, each containing either harmonically congruous chords, harmonically incongruous chords, or harmonically congruous but mistuned chords. EEG was recorded and multiple source analysis was performed. Incongruous chords violating the rules of harmony elicited a bilateral early right anterior negativity (ERAN), whereas mistuned chords within chord sequences elicited a right-lateralized mismatch negativity (MMN). We found that the dominant cortical sources for the ERAN were localized around Broca's area and its right homolog, whereas the MMN generators were localized around the primary auditory cortex. These findings suggest a predominant role of the auditory cortices in detecting sequential scale regularities and of the posterior prefrontal cortex in parsing hierarchical regularities in music. Copyright © 2011 Elsevier B.V. All rights reserved.

  6. Misconceptions and biases in German students' perception of multiple energy sources: implications for science education

    Science.gov (United States)

    Lee, Roh Pin

    2016-04-01

    Misconceptions and biases in energy perception could influence people's support for developments integral to the success of restructuring a nation's energy system. Science education, in equipping young adults with the cognitive skills and knowledge necessary to navigate in the confusing energy environment, could play a key role in paving the way for informed decision-making. This study examined German students' knowledge of the contribution of diverse energy sources to their nation's energy mix as well as their affective energy responses so as to identify implications for science education. Specifically, the study investigated whether and to what extent students hold mistaken beliefs about the role of multiple energy sources in their nation's energy mix, and assessed how misconceptions could act as self-generated reference points to underpin support/resistance of proposed developments. An in-depth analysis of spontaneous affective associations with five key energy sources also enabled the identification of underlying concerns driving people's energy responses and facilitated an examination of how affective perception, in acting as a heuristic, could lead to biases in energy judgment and decision-making. Finally, subgroup analysis differentiated by education and gender supported insights into a 'two culture' effect on energy perception and the challenge it poses to science education.

  7. Do individual Spitzer young stellar object candidates enclose multiple UKIDSS sources?

    Science.gov (United States)

    Morales, Esteban F. E.; Robitaille, Thomas P.

    2017-02-01

    Aims: We analyze United Kingdom Infrared Deep Sky Survey (UKIDSS) observations of a sample of 8325 objects taken from a catalog of intrinsically red sources selected in the Spitzer Galactic Legacy Infrared Mid-Plane Survey Extraordinaire (GLIMPSE). Given the differences in angular resolution (factor >2 better in UKIDSS), our aim is to investigate whether there are multiple UKIDSS sources that might all contribute to the GLIMPSE flux, or whether there is only one dominant UKIDSS counterpart. We then study possible corrections to estimates of the star formation rate (SFR) based on counts of GLIMPSE young stellar objects (YSOs). This represents exploratory work toward the construction of a hierarchical YSO catalog. Methods: After performing PSF fitting photometry on the UKIDSS data, we implemented a technique to recognize the dominant UKIDSS sources automatically by evaluating their match with the spectral energy distribution (SED) of the associated GLIMPSE red sources. This is a generic method that could be robustly applied for matching SEDs across gaps at other wavelengths. Results: We found that most (87.0 ± 1.6%) of the candidate YSOs from the GLIMPSE red source catalog have only one dominant UKIDSS counterpart that matches the mid-infrared SED (fainter associated UKIDSS sources might still be present). Although at first sight this could seem surprising, given that YSOs are typically in clustered environments, we argue that within the mass range covered by the GLIMPSE YSO candidates (intermediate to high masses), clustering with objects of comparable mass is unlikely at the GLIMPSE resolution. Indeed, by performing simple clustering experiments based on a population synthesis model of Galactic YSOs, we found that although 60% of the GLIMPSE YSOs enclose at least two UKIDSS sources, in general only one dominates the flux. Conclusions: No significant corrections are needed for estimates of the SFR of the Milky Way based on the assumption that the GLIMPSE YSOs

  8. Native Health Research Database

    Science.gov (United States)

  9. XML databases and the semantic web

    CERN Document Server

    Thuraisingham, Bhavani

    2002-01-01

    Efficient access to data, sharing data, extracting information from data, and making use of the information have become urgent needs for today's corporations. With so much data on the Web, managing it with conventional tools is becoming almost impossible. New tools and techniques are necessary to provide interoperability as well as warehousing between multiple data sources and systems, and to extract information from the databases. XML Databases and the Semantic Web focuses on critical and new Web technologies needed for organizations to carry out transactions on the Web, to understand how to use the Web effectively, and to exchange complex documents on the Web. This reference for database administrators, database designers, and Web designers working in tandem with database technologists covers three emerging technologies of significant impact for electronic business: Extensible Markup Language (XML), semi-structured databases, and the semantic Web. The first two parts of the book explore these emerging techn...

  10. Community Response to Multiple Sound Sources: Integrating Acoustic and Contextual Approaches in the Analysis.

    Science.gov (United States)

    Lercher, Peter; De Coensel, Bert; Dekonink, Luc; Botteldooren, Dick

    2017-06-20

    Sufficient data refer to the relevant prevalence of sound exposure by mixed traffic sources in many nations. Furthermore, consideration of the potential effects of combined sound exposure is required in legal procedures such as environmental health impact assessments. Nevertheless, current practice still uses single exposure-response functions. It is silently assumed that those standard exposure-response curves also accommodate mixed exposures, although some evidence from experimental and field studies casts doubt on this practice. The ALPNAP study population (N = 1641) shows sufficient subgroups with combinations of rail-highway, highway-main road and rail-highway-main road sound exposure. In this paper we apply a few approaches suggested in the literature to investigate exposure-response curves and their major determinants in the case of exposure to multiple traffic sources. High/moderate annoyance and full-scale mean annoyance served as outcomes. The results show several limitations of the current approaches. Even facing the inherent methodological limitations (energy-equivalent summation of sound, rating of overall annoyance), the consideration of main contextual factors jointly occurring with the sources (such as vibration, air pollution) or coping activities and judgments of the wider-area soundscape increases the variance explained from up to 8% (bivariate) and up to 15% (base adjustments) to up to 55% (full contextual model). The added predictors vary significantly depending on the source combination (e.g., significant vibration effects with main road/railway, but not highway). Although no significant interactions were found, the observed additive effects are of public health importance. Especially in the case of a three-source exposure situation, the overall annoyance is already high at lower levels, and the contribution of the acoustic indicators is small compared with the non-acoustic and contextual predictors.
Noise mapping needs to go down to levels of 40 d
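    One way to see how contextual predictors can raise explained variance beyond an acoustic indicator alone is to fit nested ordinary-least-squares models. The sketch below does this on purely synthetic data (all variables, coefficients, and the resulting R² values are invented; this is not the ALPNAP data or model):

```python
import random

random.seed(1)
n = 500

# Hypothetical predictors: one acoustic indicator plus two contextual factors.
noise = [random.gauss(60, 8) for _ in range(n)]       # sound level, dB
vibration = [random.gauss(0, 1) for _ in range(n)]    # perceived vibration (z)
air = [random.gauss(0, 1) for _ in range(n)]          # perceived air pollution (z)
# Annoyance driven mostly by the contextual factors, echoing the record above.
annoyance = [0.03 * a + 0.6 * v + 0.5 * p + random.gauss(0, 1)
             for a, v, p in zip(noise, vibration, air)]

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    m = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, m):
            f = M[r][col] / M[col][col]
            for c in range(col, m + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * m
    for r in range(m - 1, -1, -1):
        x[r] = (M[r][m] - sum(M[r][c] * x[c] for c in range(r + 1, m))) / M[r][r]
    return x

def r_squared(cols, y):
    """R^2 of an OLS fit with intercept, via the normal equations."""
    X = [[1.0] + [col[i] for col in cols] for i in range(len(y))]
    k = len(X[0])
    XtX = [[sum(row[a] * row[b] for row in X) for b in range(k)] for a in range(k)]
    Xty = [sum(row[a] * yi for row, yi in zip(X, y)) for a in range(k)]
    beta = solve(XtX, Xty)
    yhat = [sum(w * xi for w, xi in zip(beta, row)) for row in X]
    ybar = sum(y) / len(y)
    ss_res = sum((yi - yh) ** 2 for yi, yh in zip(y, yhat))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

r2_acoustic = r_squared([noise], annoyance)
r2_full = r_squared([noise, vibration, air], annoyance)
```

    With these synthetic coefficients the acoustic-only R² stays small while the full model explains far more variance, mirroring the pattern the record reports.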

  11. Community Response to Multiple Sound Sources: Integrating Acoustic and Contextual Approaches in the Analysis

    Directory of Open Access Journals (Sweden)

    Peter Lercher

    2017-06-01

    Full Text Available Sufficient data refer to the relevant prevalence of sound exposure by mixed traffic sources in many nations. Furthermore, consideration of the potential effects of combined sound exposure is required in legal procedures such as environmental health impact assessments. Nevertheless, current practice still uses single exposure response functions. It is silently assumed that those standard exposure-response curves accommodate also for mixed exposures—although some evidence from experimental and field studies casts doubt on this practice. The ALPNAP-study population (N = 1641) shows sufficient subgroups with combinations of rail-highway, highway-main road and rail-highway-main road sound exposure. In this paper we apply a few suggested approaches of the literature to investigate exposure-response curves and its major determinants in the case of exposure to multiple traffic sources. Highly/moderate annoyance and full scale mean annoyance served as outcome. The results show several limitations of the current approaches. Even facing the inherent methodological limitations (energy equivalent summation of sound, rating of overall annoyance) the consideration of main contextual factors jointly occurring with the sources (such as vibration, air pollution) or coping activities and judgments of the wider area soundscape increases the variance explanation from up to 8% (bivariate), up to 15% (base adjustments) up to 55% (full contextual model). The added predictors vary significantly, depending on the source combination (e.g., significant vibration effects with main road/railway, not highway). Although no significant interactions were found, the observed additive effects are of public health importance. Especially in the case of a three source exposure situation the overall annoyance is already high at lower levels and the contribution of the acoustic indicators is small compared with the non-acoustic and contextual predictors. Noise mapping needs to go down to

  13. Design and optimization of an RF energy harvesting system from multiple sources

    Science.gov (United States)

    Ali, Mai; Albasha, Lutfi; Qaddoumi, Nasser

    2013-05-01

    This paper presents the design and optimization of an RF energy harvesting system that draws on multiple sources. The RF power is harvested from four frequency bands covering five wireless systems, namely GSM, UMTS, DTV, Wi-Fi, and a road tolling system. A Schottky diode model was developed, based on which an RF-DC rectifier combined with voltage multiplier circuits was designed. The simulation results of the complete RF harvesting system showed performance superior to similar state-of-the-art systems. To further optimize the design, and to eliminate the use of a non-standard CMOS process associated with Schottky diodes, the Schottky-diode-based rectifier was replaced by a diode-connected transistor configuration based on the self-threshold cancellation (SVC) technique.

  14. Reconciling multiple data sources to improve accuracy of large-scale prediction of forest disease incidence

    Science.gov (United States)

    Hanks, E.M.; Hooten, M.B.; Baker, F.A.

    2011-01-01

    Ecological spatial data often come from multiple sources, varying in extent and accuracy. We describe a general approach to reconciling such data sets through the use of the Bayesian hierarchical framework. This approach provides a way for the data sets to borrow strength from one another while allowing for inference on the underlying ecological process. We apply this approach to study the incidence of eastern spruce dwarf mistletoe (Arceuthobium pusillum) in Minnesota black spruce (Picea mariana). A Minnesota Department of Natural Resources operational inventory of black spruce stands in northern Minnesota found mistletoe in 11% of surveyed stands, while a small, specific-pest survey found mistletoe in 56% of the surveyed stands. We reconcile these two surveys within a Bayesian hierarchical framework and predict that 35-59% of black spruce stands in northern Minnesota are infested with dwarf mistletoe. © 2011 by the Ecological Society of America.
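    The abstract does not give the model's likelihood, so the toy sketch below shows the reconciliation idea on a grid: two surveys with different assumed detection sensitivities are tied to a shared true prevalence, and their likelihoods are multiplied under a uniform prior. All counts and sensitivities here are invented, not the paper's.

```python
import math

def binom_pmf(k, n, p):
    """Binomial probability of k positives in n trials with success rate p."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

# Hypothetical surveys: a large operational inventory with low detection
# sensitivity, and a small intensive pest survey with high sensitivity.
surveys = [
    {"positives": 55, "n": 500, "sensitivity": 0.25},  # assumed detection rate
    {"positives": 28, "n": 50,  "sensitivity": 0.95},
]

# Grid posterior over the true infestation prevalence theta, uniform prior.
grid = [i / 200 for i in range(201)]
post = []
for theta in grid:
    like = 1.0
    for s in surveys:
        p_obs = min(theta * s["sensitivity"], 1.0)  # chance a stand is flagged
        like *= binom_pmf(s["positives"], s["n"], p_obs)
    post.append(like)
z = sum(post)
post = [p / z for p in post]
post_mean = sum(t * p for t, p in zip(grid, post))
```

    The posterior concentrates between the two naive survey estimates, analogous in spirit to the 35-59% interval reported above.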

  15. Electrical source imaging of interictal spikes using multiple sparse volumetric priors for presurgical epileptogenic focus localization

    Directory of Open Access Journals (Sweden)

    Gregor Strobbe

    2016-01-01

    Full Text Available Electrical source imaging of interictal spikes observed in EEG recordings of patients with refractory epilepsy provides useful information for localizing the epileptogenic focus during the presurgical evaluation. However, the selection of the time points or time epochs of the spikes for estimating the origin of the activity remains a challenge. In this study, we consider a Bayesian EEG source imaging technique for distributed sources, i.e. the multiple volumetric sparse priors (MSVP) approach. The approach allows estimation of the time courses of the intensity of the sources corresponding to a specific time epoch of the spike. Based on presurgical averaged interictal spikes in six patients who were successfully treated with surgery, we estimated the time courses of the source intensities for three different time epochs: (i) an epoch starting 50 ms before the spike peak and ending at 50% of the spike peak during the rising phase of the spike, (ii) an epoch starting 50 ms before the spike peak and ending at the spike peak, and (iii) an epoch containing the full spike time period, starting 50 ms before the spike peak and ending 230 ms after the spike peak. To identify the primary source of the spike activity, the source with the maximum energy from 50 ms before the spike peak until 50% of the spike peak was subsequently selected for each of the time windows. For comparison, the activity at the spike peaks and at 50% of the peaks was localized using the LORETA inversion technique and an ECD approach. Both patient-specific spherical forward models and patient-specific 5-layered finite difference models were considered to evaluate the influence of the forward model. Based on the resected zones in each of the patients, extracted from post-operative MR images, we compared the distances to the resection border of the estimated activity. Using the spherical models, the distances to the resection border for the MSVP approach and each of the different time

  16. A CORBA server for the Radiation Hybrid DataBase.

    Science.gov (United States)

    Rodriguez-Tomé, P; Helgesen, C; Lijnzaad, P; Jungfer, K

    1997-01-01

    Modern biology depends on a wide range of software interacting with a large number of data sources varying in size, complexity and structure. The range of important databases in molecular biology and genetics makes it crucial to overcome the problems that this multiplicity presents. At EMBL-EBI we have started to use CORBA technology to support interoperability between a variety of databases, as well as to facilitate the integration of tools that access these databases. Within the Radiation Hybrid DataBase project we are confronted daily with interoperation and linking issues. In this paper we present a CORBA infrastructure implemented to access the Radiation Hybrid DataBase.

  17. EPSILON-CP: using deep learning to combine information from multiple sources for protein contact prediction.

    Science.gov (United States)

    Stahl, Kolja; Schneider, Michael; Brock, Oliver

    2017-06-17

    Accurately predicted contacts allow computation of the 3D structure of a protein. Since the solution space of native residue-residue contact pairs is very large, it is necessary to leverage information to identify relevant regions of the solution space, i.e. correct contacts. Every additional source of information can contribute to narrowing down candidate regions. Therefore, recent methods have combined evolutionary and sequence-based information as well as evolutionary and physicochemical information. We develop a new contact predictor (EPSILON-CP) that goes beyond current methods by combining evolutionary, physicochemical, and sequence-based information. The problems resulting from the increased dimensionality and complexity of the learning problem are combated with a careful feature analysis, which results in a drastically reduced feature set. The different information sources are combined using deep neural networks. On 21 hard CASP11 FM targets, EPSILON-CP achieves a mean precision of 35.7% for top-L/10 predicted long-range contacts, which is 11% better than the CASP11-winning version of MetaPSICOV. The improvement for top-1.5L contacts is 17%. Furthermore, in this study we find that amino acid composition, a commonly used feature, is rendered ineffective in the context of meta approaches. The size of the refined feature set decreased by 75%, enabling a significant increase in training data for machine learning and contributing significantly to the observed improvements. Exploiting as much and as diverse information as possible is key to accurate contact prediction. Simply merging the information introduces new challenges. Our study suggests that critical feature analysis can improve the performance of contact prediction methods that combine multiple information sources. EPSILON-CP is available as a webservice: http://compbio.robotics.tu-berlin.de/epsilon/.

  18. Occurrence and profiling of multiple nitrosamines in source water and drinking water of China.

    Science.gov (United States)

    Wang, Wanfeng; Yu, Jianwei; An, Wei; Yang, Min

    2016-05-01

    The occurrence of multiple nitrosamines was investigated in 54 drinking water treatment plants (DWTPs) from 30 cities across major watersheds of China, and the formation potential (FP) and cancer risk of the dominant nitrosamines were studied for profiling purposes. The results showed that N-nitrosodimethylamine (NDMA), N-nitrosodiethylamine (NDEA) and N-nitrosodi-n-butylamine (NDBA) were the most abundant in DWTPs, and the concentrations in source water and finished water samples ranged from not detected (ND) to 53.6 ng/L (NDMA), ND to 68.5 ng/L (NDEA), and ND to 48.2 ng/L (NDBA). The frequencies of detection in source waters were 64.8%, 61.1% and 51.8%, and 57.4%, 53.7% and 37% for finished waters, respectively. Further study indicated that the FPs of the three main nitrosamines during chloramination were higher than those during chlorination and in drinking water. The results of Principal Components Analysis (PCA) showed that ammonia was the factor most closely associated with nitrosamine formation in the investigated source water; however, there was no significant correlation between nitrosamine FPs and the values of dominant water-quality parameters. The advanced treatment units (i.e., ozonation and biological activated carbon) used in DWTPs were able to control the nitrosamine FPs effectively after disinfection. The target pollutants posed median and maximum cancer risks of 2.99×10⁻⁵ and 35.5×10⁻⁵ to the local populations due to their occurrence in drinking water.

  19. Development of a Monte Carlo multiple source model for inclusion in a dose calculation auditing tool.

    Science.gov (United States)

    Faught, Austin M; Davidson, Scott E; Fontenot, Jonas; Kry, Stephen F; Etzel, Carol; Ibbott, Geoffrey S; Followill, David S

    2017-09-01

    The Imaging and Radiation Oncology Core Houston (IROC-H) (formerly the Radiological Physics Center) has reported varying levels of agreement in their anthropomorphic phantom audits. There is reason to believe one source of error in this observed disagreement is the accuracy of the dose calculation algorithms and heterogeneity corrections used. To audit this component of the radiotherapy treatment process, an independent dose calculation tool is needed. Monte Carlo multiple source models for Elekta 6 MV and 10 MV therapeutic x-ray beams were commissioned based on measurement of central axis depth dose data for a 10 × 10 cm² field size and dose profiles for a 40 × 40 cm² field size. The models were validated against open field measurements consisting of depth dose data and dose profiles for field sizes ranging from 3 × 3 cm² to 30 × 30 cm². The models were then benchmarked against measurements in IROC-H's anthropomorphic head and neck and lung phantoms. Validation results showed 97.9% and 96.8% of depth dose data passed a ±2% Van Dyk criterion for the 6 MV and 10 MV models, respectively. Dose profile comparisons showed an average agreement using a ±2%/2 mm criterion of 98.0% and 99.0% for the 6 MV and 10 MV models, respectively. Phantom plan comparisons were evaluated using a ±3%/2 mm gamma criterion, and average passing rates between Monte Carlo and measurements were 87.4% and 89.9% for the 6 MV and 10 MV models, respectively. Accurate multiple source models for Elekta 6 MV and 10 MV x-ray beams have been developed for inclusion in an independent dose calculation tool for use in clinical trial audits. © 2017 American Association of Physicists in Medicine.
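    The ±2% pass-rate check used in the validation can be illustrated with a simplified computation, normalized to the local measured dose. This is a sketch only: the full Van Dyk formalism treats high-dose, high-gradient, and penumbra regions differently, and the depth-dose values below are invented.

```python
def percent_passing(measured, calculated, tolerance=0.02):
    """Percentage of points where the calculated dose lies within a
    +/- tolerance (default 2%) of the locally measured dose."""
    passing = sum(
        1 for m, c in zip(measured, calculated)
        if abs(c - m) <= tolerance * m
    )
    return 100.0 * passing / len(measured)

# Toy depth-dose curves (arbitrary units): one point deviates by 5%.
measured   = [100.0, 95.0, 88.0, 80.0, 72.0]
calculated = [100.5, 94.2, 88.9, 84.0, 71.5]

rate = percent_passing(measured, calculated)  # 4 of 5 points pass -> 80.0
```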

  20. Field validation of secondary data sources: a novel measure of representativity applied to a Canadian food outlet database.

    Science.gov (United States)

    Clary, Christelle M; Kestens, Yan

    2013-06-19

    Validation studies of secondary datasets used to characterize neighborhood food businesses generally evaluate how accurately the database represents the true situation on the ground. Depending on the research objectives, the characterization of the business environment may tolerate some inaccuracies (e.g. minor imprecisions in location or errors in business names). Furthermore, if the number of false negatives (FNs) and false positives (FPs) is balanced within a given area, one could argue that the database still provides a "fair" representation of existing resources in this area. Yet, traditional validation measures do not relax matching criteria, and treat FNs and FPs independently. Through the field validation of food businesses found in a Canadian database, this paper proposes alternative criteria for validity. Field validation of the 2010 Enhanced Points of Interest (EPOI) database (DMTI Spatial®) was performed in 2011 in 12 census tracts (CTs) in Montreal, Canada. Some 410 food outlets were extracted from the database and 484 were observed in the field. First, traditional measures of sensitivity and positive predictive value (PPV) accounting for every single mismatch between the field and the database were computed. Second, relaxed measures of sensitivity and PPV that tolerate mismatches in business names or slight imprecisions in location were assessed. A novel measure of representativity that further allows for compensation between FNs and FPs within the same business category and area was proposed. Representativity was computed at the CT level as (TPs + |FPs − FNs|) / (TPs + FNs), with TPs meaning true positives and |FPs − FNs| being the absolute value of the difference between the number of FNs and the number of FPs within each outlet category. The EPOI database had a "moderate" capacity to detect an outlet present in the field (sensitivity: 54.5%) or to list only the outlets that actually existed in the field (PPV: 64.4%).
Relaxed measures of sensitivity and PPV
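    Writing the measures out as functions makes the comparison concrete. The representativity formula below is implemented exactly as stated in the abstract; the example counts are invented.

```python
def sensitivity(tp, fn):
    """Traditional sensitivity: share of field outlets found in the database."""
    return tp / (tp + fn)

def representativity(tp, fp, fn):
    """Representativity as the abstract states it:
    (TPs + |FPs - FNs|) / (TPs + FNs), per outlet category and census tract."""
    return (tp + abs(fp - fn)) / (tp + fn)

# Hypothetical counts for one outlet category in one census tract.
tp, fp, fn = 50, 20, 10
sens = sensitivity(tp, fn)
rep = representativity(tp, fp, fn)
```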

  1. The Opera del Vocabolario Italiano Database: Full-Text Searching Early Italian Vernacular Sources on the Web.

    Science.gov (United States)

    DuPont, Christian

    2001-01-01

    Introduces and describes the functions of the Opera del Vocabolario Italiano (OVI) database, a powerful Web-based, full-text, searchable electronic archive that contains early Italian vernacular texts whose composition may be dated prior to 1375. Examples are drawn from scholars in various disciplines who have employed the OVI in support of their…

  3. Relative accuracy and availability of an Irish National Database of dispensed medication as a source of medication history information: observational study and retrospective record analysis.

    LENUS (Irish Health Repository)

    Grimes, T

    2013-01-27

    WHAT IS KNOWN AND OBJECTIVE: The medication reconciliation process begins by identifying which medicines a patient used before presentation to hospital. This is time-consuming, labour-intensive and may involve interruption of clinicians. We sought to identify the availability and accuracy of data held in a national dispensing database, relative to other sources of medication history information. METHODS: For patients admitted to two acute hospitals in Ireland, a Gold Standard Pre-Admission Medication List (GSPAML) was identified and corroborated with the patient or carer. The GSPAML was compared for accuracy and availability to PAMLs from other sources, including the Health Service Executive Primary Care Reimbursement Scheme (HSE-PCRS) dispensing database. RESULTS: Some 1111 medications were assessed for 97 patients, who were of median age 74 years (range 18-92 years), had a median of four co-morbidities (range 1-9) and used a median of 10 medications (range 3-25); half (52%) were male. The HSE-PCRS PAML was the most accurate source compared to lists provided by the general practitioner, community pharmacist or cited in previous hospital documentation: the list agreed for 74% of the medications the patients actually used, representing complete agreement for all medications in 17% of patients. It was equally contemporaneous to other sources, but was less reliable for male than female patients, those using increasing numbers of medications and those using one or more items that were not reimbursable by the HSE. WHAT IS NEW AND CONCLUSION: The HSE-PCRS database is a relatively accurate, available and contemporaneous source of medication history information and could support acute hospital medication reconciliation.

  4. Building a Database for a Quantitative Model

    Science.gov (United States)

    Kahn, C. Joseph; Kleinhammer, Roger

    2014-01-01

    A database can greatly benefit a quantitative analysis. The defining characteristic of a quantitative risk, or reliability, model is the use of failure estimate data. Models can easily contain a thousand Basic Events, relying on hundreds of individual data sources. Obviously, entering so much data by hand will eventually lead to errors. Less obviously, entering data this way does not aid in linking the Basic Events to their data sources. The best way to organize large amounts of data on a computer is with a database. But a model does not require a large, enterprise-level database with dedicated developers and administrators. A database built in Excel can be quite sufficient. A simple spreadsheet database can link every Basic Event to the individual data source selected for it. This database can also contain the manipulations appropriate to how the data are used in the model. These manipulations include stressing factors based on use and maintenance cycles, dormancy, unique failure modes, the modeling of multiple items as a single "Super component" Basic Event, and Bayesian updating based on flight and testing experience. A simple, unique metadata field in both the model and the database provides a link from any Basic Event in the model to its data source and all relevant calculations. The credibility of the entire model often rests on the credibility and traceability of the data.
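    The metadata-key idea can be sketched in a few lines: each Basic Event stores only a unique key, which resolves to a data source row where the base estimate and any stressing manipulation live. All keys, names, and rates below are hypothetical illustrations, not values from any real model.

```python
# Spreadsheet-like data source table, keyed by a unique metadata field.
data_sources = {
    "DS-0042": {"source": "MIL-HDBK-217F", "base_rate": 2.0e-6, "stress": 1.5},
    "DS-0043": {"source": "Vendor test report", "base_rate": 5.0e-7, "stress": 1.0},
}

# Basic Events in the model reference data sources only through the key.
basic_events = [
    {"name": "Valve fails to open", "ds_key": "DS-0042"},
    {"name": "Sensor gives false reading", "ds_key": "DS-0043"},
]

def failure_rate(event):
    """Resolve a Basic Event's failure estimate via its metadata key,
    applying the stressing factor held alongside the source record."""
    row = data_sources[event["ds_key"]]
    return row["base_rate"] * row["stress"]

rates = {e["name"]: failure_rate(e) for e in basic_events}
```

    Because each Basic Event stores only the key, correcting a data source or its stressing factor automatically updates every event that references it, which is the traceability benefit the record describes.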

  5. Backward probability model using multiple observations of contamination to identify groundwater contamination sources at the Massachusetts Military Reservation

    Science.gov (United States)

    Neupauer, R. M.; Wilson, J. L.

    2005-02-01

    Backward location and travel time probability density functions characterize the possible former locations (or the source location) of contamination that is observed in an aquifer. For an observed contaminant particle, the backward location probability density function (PDF) describes its position at a fixed time prior to sampling, and the backward travel time PDF describes the amount of time required for the particle to travel to the sampling location from a fixed upgradient position. The backward probability model has been developed for a single observation of contamination (e.g., Neupauer and Wilson, 1999). In practical situations, contamination is sampled at multiple locations and times, and these additional data provide information that can be used to better characterize the former position of contamination. Through Bayes' theorem, we combine the individual PDFs for each observation to obtain a PDF for multiple observations that describes the possible source locations or release times of all observed contaminant particles, assuming they originated from the same instantaneous point source. We show that the multiple-observation PDF is the normalized product of the single-observation PDFs. The additional information available from multiple observations reduces the variances of the source location and travel time PDFs and improves the characterization of the contamination source. We apply the backward probability model to a trichloroethylene (TCE) plume at the Massachusetts Military Reservation (MMR). We use four TCE samples distributed throughout the plume to obtain single-observation and multiple-observation location and travel time PDFs in three dimensions. These PDFs provide information about the possible sources of contamination. Under the assumptions that the existing MMR model is properly calibrated and the conceptual model is correct, the results confirm the two suspected sources of
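    The central result, that the multiple-observation PDF is the normalized product of the single-observation PDFs, can be illustrated numerically on a one-dimensional grid. The Gaussian single-observation PDFs below are illustrative stand-ins, not output of the MMR model; the point is that the combined PDF is narrower than either input, matching the variance-reduction claim.

```python
# Sketch: combine single-observation backward-location PDFs into a
# multiple-observation PDF as their normalized pointwise product.
import math

def gaussian_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def combine_pdfs(grid, single_obs_pdfs):
    """Pointwise product of single-observation PDFs, renormalized on the grid."""
    product = [math.prod(p(x) for p in single_obs_pdfs) for x in grid]
    dx = grid[1] - grid[0]
    total = sum(product) * dx  # Riemann-sum normalization
    return [v / total for v in product]

# Two observations of the same plume, each constraining the source location:
grid = [i * 0.1 for i in range(-100, 101)]
pdfs = [lambda x: gaussian_pdf(x, 2.0, 1.5), lambda x: gaussian_pdf(x, 3.0, 1.0)]
combined = combine_pdfs(grid, pdfs)
# The combined PDF peaks between the two observation-based estimates and has
# a smaller variance than either input, reflecting the added information.
```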

  6. Systematic identification of yeast cell cycle transcription factors using multiple data sources

    Directory of Open Access Journals (Sweden)

    Li Wen-Hsiung

    2008-12-01

    Full Text Available Abstract Background The eukaryotic cell cycle is a complex process and is precisely regulated at many levels. Many genes specific to the cell cycle are regulated transcriptionally and are expressed just before they are needed. To understand the cell cycle process, it is important to identify the cell cycle transcription factors (TFs) that regulate the expression of cell cycle-regulated genes. Results We developed a method to identify cell cycle TFs in yeast by integrating current ChIP-chip, mutant, transcription factor binding site (TFBS), and cell cycle gene expression data. We identified 17 cell cycle TFs, 12 of which are known cell cycle TFs, while the remaining five (Ash1, Rlm1, Ste12, Stp1, Tec1) are putative novel cell cycle TFs. For each cell cycle TF, we assigned specific cell cycle phases in which the TF functions and identified the time lag for the TF to exert regulatory effects on its target genes. We also identified 178 novel cell cycle-regulated genes, among which 59 have unknown functions, but they may now be annotated as cell cycle-regulated genes. Most of our predictions are supported by previous experimental or computational studies. Furthermore, a high-confidence TF-gene regulatory matrix is derived as a byproduct of our method. Each TF-gene regulatory relationship in this matrix is supported by at least three data sources: gene expression, TFBS, and ChIP-chip and/or mutant data. We show that our method performs better than four existing methods for identifying yeast cell cycle TFs. Finally, an application of our method to different cell cycle gene expression datasets suggests that our method is robust. Conclusion Our method is effective for identifying yeast cell cycle TFs and cell cycle-regulated genes. Many of our predictions are validated by the literature. Our study shows that integrating multiple data sources is a powerful approach to studying complex biological systems.
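    The "supported by at least three data sources" criterion for the high-confidence TF-gene matrix can be sketched as a simple evidence filter: a pair must appear in gene expression and TFBS evidence, plus ChIP-chip and/or mutant evidence. The TF-gene pairs and evidence sets below are invented for illustration; the paper's actual integration is more involved.

```python
# Sketch: keep a TF-gene pair only if expression and TFBS evidence both
# support it, together with ChIP-chip and/or mutant evidence.

def high_confidence_pairs(evidence):
    """evidence maps a source name to a set of (TF, gene) pairs."""
    required = ["expression", "tfbs"]
    either = ["chip_chip", "mutant"]
    candidates = set().union(*evidence.values())
    keep = set()
    for pair in candidates:
        if all(pair in evidence[s] for s in required) and any(
            pair in evidence[s] for s in either
        ):
            keep.add(pair)
    return keep

evidence = {
    "expression": {("Swi4", "CLN1"), ("Ash1", "HO"), ("Mbp1", "CLB5")},
    "tfbs":       {("Swi4", "CLN1"), ("Ash1", "HO")},
    "chip_chip":  {("Swi4", "CLN1")},
    "mutant":     {("Ash1", "HO"), ("Mbp1", "CLB5")},
}
print(sorted(high_confidence_pairs(evidence)))
# ("Mbp1", "CLB5") is dropped: it lacks TFBS support, so only two sources back it.
```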

  7. Multiple data sources improve DNA-based mark-recapture population estimates of grizzly bears.

    Science.gov (United States)

    Boulanger, John; Kendall, Katherine C; Stetz, Jeffrey B; Roon, David A; Waits, Lisette P; Paetkau, David

    2008-04-01

    A fundamental challenge to estimating population size with mark-recapture methods is heterogeneous capture probabilities and subsequent bias of population estimates. Confronting this problem usually requires substantial sampling effort that can be difficult to achieve for some species, such as carnivores. We developed a methodology that uses two data sources to deal with heterogeneity and applied this to DNA mark-recapture data from grizzly bears (Ursus arctos). We improved population estimates by incorporating additional DNA "captures" of grizzly bears obtained by collecting hair from unbaited bear rub trees concurrently with baited, grid-based, hair snag sampling. We consider a Lincoln-Petersen estimator with hair snag captures as the initial session and rub tree captures as the recapture session and develop an estimator in program MARK that treats hair snag and rub tree samples as successive sessions. Using empirical data from a large-scale project in the greater Glacier National Park, Montana, USA, area and simulation modeling we evaluate these methods and compare the results to hair-snag-only estimates. Empirical results indicate that, compared with hair-snag-only data, the joint hair-snag-rub-tree methods produce similar but more precise estimates if capture and recapture rates are reasonably high for both methods. Simulation results suggest that estimators are potentially affected by correlation of capture probabilities between sample types in the presence of heterogeneity. Overall, closed population Huggins-Pledger estimators showed the highest precision and were most robust to sparse data, heterogeneity, and capture probability correlation among sampling types. Results also indicate that these estimators can be used when a segment of the population has zero capture probability for one of the methods. We propose that this general methodology may be useful for other species in which mark-recapture data are available from multiple sources.
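    The basic Lincoln-Petersen idea mentioned in the abstract, with hair-snag detections as the marking session and rub-tree detections as the recapture session, can be sketched with the standard Chapman bias correction. The counts are made up; the study itself fitted Huggins-Pledger models in program MARK rather than this bare two-sample estimator.

```python
# Sketch: Chapman's bias-corrected Lincoln-Petersen population estimate,
# treating hair snags as session 1 and rub trees as session 2.

def lincoln_petersen_chapman(n1, n2, m):
    """n1: bears genotyped at hair snags (session 1)
    n2: bears genotyped at rub trees (session 2)
    m:  bears detected by both methods
    """
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

# Hypothetical genotype counts:
n_hat = lincoln_petersen_chapman(n1=180, n2=120, m=45)
print(round(n_hat))  # 475
```

    A large overlap m relative to n1 and n2 pulls the estimate down toward the observed counts; heterogeneous capture probabilities, the problem the paper confronts, bias m and hence the estimate.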

  8. JET2 Viewer: a database of predicted multiple, possibly overlapping, protein–protein interaction sites for PDB structures

    Science.gov (United States)

    Ripoche, Hugues; Laine, Elodie; Ceres, Nicoletta; Carbone, Alessandra

    2017-01-01

    The database JET2 Viewer, openly accessible at http://www.jet2viewer.upmc.fr/, reports putative protein binding sites for all three-dimensional (3D) structures available in the Protein Data Bank (PDB). This knowledge base was generated by applying the computational method JET2 at large scale on more than 20 000 chains. The JET2 strategy yields very precise predictions of interacting surfaces and unravels their evolutionary process and complexity. JET2 Viewer provides an online intelligent display, including interactive 3D visualization of the binding sites mapped onto PDB structures and suitable files recording JET2 analyses. Predictions were evaluated on more than 15 000 experimentally characterized protein interfaces. This is, to our knowledge, the largest evaluation of a protein binding site prediction method. The overall performance of JET2 on all interfaces is: Sen = 52.52, PPV = 51.24, Spe = 80.05, Acc = 75.89. The data can be used to foster new strategies for protein–protein interactions modulation and interaction surface redesign. PMID:27899675

  9. Multiple plant-wax compounds record differential sources and ecosystem structure in large river catchments

    Science.gov (United States)

    Hemingway, Jordon D.; Schefuß, Enno; Dinga, Bienvenu Jean; Pryer, Helena; Galy, Valier V.

    2016-07-01

    The concentrations, distributions, and stable carbon isotopes (δ13C) of plant waxes carried by fluvial suspended sediments contain valuable information about terrestrial ecosystem characteristics. To properly interpret past changes recorded in sedimentary archives it is crucial to understand the sources and variability of exported plant waxes in modern systems on seasonal to inter-annual timescales. To determine such variability, we present concentrations and δ13C compositions of three compound classes (n-alkanes, n-alcohols, n-alkanoic acids) in a 34-month time series of suspended sediments from the outflow of the Congo River. We show that exported plant-dominated n-alkanes (C25-C35) represent a mixture of C3 and C4 end members, each with distinct molecular distributions, as evidenced by an 8.1 ± 0.7‰ (±1σ standard deviation) spread in δ13C values across chain-lengths, and weak correlations between individual homologue concentrations (r = 0.52-0.94). In contrast, plant-dominated n-alcohols (C26-C36) and n-alkanoic acids (C26-C36) exhibit stronger positive correlations (r = 0.70-0.99) between homologue concentrations and depleted δ13C values (individual homologues average ⩽-31.3‰ and -30.8‰, respectively), with lower δ13C variability across chain-lengths (2.6 ± 0.6‰ and 2.0 ± 1.1‰, respectively). All individual plant-wax lipids show little temporal δ13C variability throughout the time-series (1σ ⩽ 0.9‰), indicating that their stable carbon isotopes are not a sensitive tracer for temporal changes in plant-wax source in the Congo basin on seasonal to inter-annual timescales. Carbon-normalized concentrations and relative abundances of n-alcohols (19-58% of total plant-wax lipids) and n-alkanoic acids (26-76%) respond rapidly to seasonal changes in runoff, indicating that they are mostly derived from a recently entrained local source. In contrast, a lack of correlation with discharge and low, stable relative abundances (5-16%) indicate that

  10. Assessment of malignancy risk in patients with multiple sclerosis treated with intramuscular interferon beta-1a: retrospective evaluation using a health insurance claims database and postmarketing surveillance data

    Directory of Open Access Journals (Sweden)

    Bloomgren G

    2012-06-01

    Full Text Available Gary Bloomgren, Bjørn Sperling, Kimberly Cushing, Madé Wenten. Biogen Idec Inc., Weston, MA, USA. Background: Intramuscular interferon beta-1a (IFNβ-1a), a multiple sclerosis (MS) therapy that has been commercially available for over a decade, provides a unique opportunity to retrospectively assess postmarketing data for evidence of malignancy risk, compared with relatively limited data available for more recently approved therapies. Postmarketing and claims data were analyzed to determine the risk of malignancy in MS patients treated with intramuscular IFNβ-1a. Materials and methods: The cumulative reporting rates of suspected adverse drug reactions coded to malignancy in the intramuscular IFNβ-1a global safety database were compared with malignancy incidence rates in the World Health Organization GLOBOCAN database. In addition, using data from a large US claims database, the cumulative prevalence of malignancy in MS patients treated with intramuscular IFNβ-1a was compared with non-MS population controls, MS patients without intramuscular IFNβ-1a use, and untreated MS patients. Mean follow-up was approximately 3 years for all groups, ie, 3.1 years for the intramuscular IFNβ-1a group (range 0.02–6.0 years), 2.6 years for non-MS population controls (range 0–6.0 years), 2.6 years for the intramuscular IFNβ-1a nonuse group (range 0.01–6.0 years), and 2.4 years for the untreated MS group (range 0.01–6.0 years). Results: An estimated 402,250 patients received intramuscular IFNβ-1a during the postmarketing period. Cumulative reporting rates of malignancy in this population were consistent with GLOBOCAN incidence rates observed within the general population. The claims database included 12,894 MS patients who received intramuscular IFNβ-1a. No significant difference in malignancy prevalence was observed in intramuscular IFNβ-1a users compared with other groups. Conclusion: Results from this evaluation provide no evidence of an increased risk of

  11. BioSharing: curated and crowd-sourced metadata standards, databases and data policies in the life sciences.

    Science.gov (United States)

    McQuilton, Peter; Gonzalez-Beltran, Alejandra; Rocca-Serra, Philippe; Thurston, Milo; Lister, Allyson; Maguire, Eamonn; Sansone, Susanna-Assunta

    2016-01-01

    BioSharing (http://www.biosharing.org) is a manually curated, searchable portal of three linked registries. These resources cover standards (terminologies, formats and models, and reporting guidelines), databases, and data policies in the life sciences, broadly encompassing the biological, environmental and biomedical sciences. Launched in 2011 and built by the same core team as the successful MIBBI portal, BioSharing harnesses community curation to collate and cross-reference resources across the life sciences from around the world. BioSharing makes these resources findable and accessible (the core of the FAIR principle). Every record is designed to be interlinked, providing a detailed description not only of the resource itself, but also of its relations with other life science infrastructures. Serving a variety of stakeholders, BioSharing cultivates a growing community, to which it offers diverse benefits. It is a resource for funding bodies and journal publishers to navigate the metadata landscape of the biological sciences; an educational resource for librarians and information advisors; a publicising platform for standard and database developers/curators; and a research tool for bench and computer scientists to plan their work. BioSharing is working with an increasing number of journals and other registries, for example linking standards and databases to training material and tools. Driven by an international Advisory Board, the BioSharing user base has grown by over 40% (by unique IP address) in the last year, thanks to successful engagement with researchers, publishers, librarians, developers and other stakeholders via several routes, including a joint RDA/Force11 working group and a collaboration with the International Society for Biocuration. In this article, we describe BioSharing, with a particular focus on community-led curation. Database URL: https://www.biosharing.org.

  12. Development of a microfocus x-ray tube with multiple excitation sources

    Science.gov (United States)

    Maeo, Shuji; Krämer, Markus; Taniguchi, Kazuo

    2009-03-01

    A microfocus x-ray tube with multiple targets and an electron gun with a focal spot size of 10 μm in diameter has been developed. The electron gun contains a LaB6 cathode and an Einzel lens. The x-ray tube can be operated at 50 W (50 kV, 1 mA) and has three targets on the anode, namely Cr, W, and Rh, which can be selected solely by moving the anode position. A focal spot size of 10 μm in diameter can be achieved at 0.5 mA current. To demonstrate the usability of a multi-excitation x-ray tube, fluorescence x-rays were measured using a powder specimen containing equal quantities of TiO2, Co, and Zr. Differences in excitation efficiency appeared clearly as the excitation source was changed. From the results discussed here, the presented x-ray tube can be expected to be a powerful tool in micro-x-ray fluorescence spectrometers and various other x-ray instruments.

  13. Invasive meningococcal disease in England: assessing disease burden through linkage of multiple national data sources.

    Science.gov (United States)

    Ladhani, Shamez N; Waight, Pauline A; Ribeiro, Sonia; Ramsay, Mary E

    2015-12-01

    In England, Public Health England (PHE) conducts enhanced surveillance of invasive meningococcal disease (IMD) through its Meningococcal Reference Unit (MRU). The continuing decline in reported IMD cases has raised concerns that the MRU may be underestimating true IMD incidence. We linked five national datasets to estimate disease burden over five years, including MRU confirmations, hospital episode statistics (HES), electronic reports of significant infections by National Health Service (NHS) hospitals, death registrations and private laboratory reports. During 2007-11, the MRU confirmed 5115 IMD cases and 4275 (84%) matched to HES, including 3935 (92%) with A39* (meningococcal disease) and 340 (8%) with G00* (bacterial meningo-encephalitis) ICD-10 codes. An additional 2792 hospitalised cases with an A39* code were identified in HES. Of these, 1465 (52%) matched to one of 53,806 samples tested PCR-negative for IMD by the MRU, and only 73 of the remaining 1327 hospitalised A39* cases were confirmed locally or by a private laboratory. The characteristics of hospitalised cases without laboratory confirmation were more similar to those of PCR-negative than PCR-positive IMD cases. Interrogation of multiple national data sources identified very few laboratory confirmations in addition to the MRU-confirmed cases. The large number of unconfirmed and PCR-negative cases in HES suggests increased awareness among clinicians, with low thresholds for hospitalising patients with suspected IMD.
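    The kind of deterministic linkage described, matching laboratory confirmations to hospital episodes, can be sketched as a join on a patient identifier plus an admission date within a tolerance window. The field names, identifiers, and 14-day window below are assumptions for illustration, not the study's actual linkage rules.

```python
# Sketch: link laboratory-confirmed cases to hospital episodes on patient ID
# and admission date within a configurable window.
from datetime import date, timedelta

def link(confirmations, episodes, window_days=14):
    window = timedelta(days=window_days)
    matched = []
    for c in confirmations:
        for e in episodes:
            if (c["patient_id"] == e["patient_id"]
                    and abs(c["sample_date"] - e["admission_date"]) <= window):
                matched.append((c["case_id"], e["episode_id"]))
                break  # take the first episode within the window
    return matched

confirmations = [
    {"case_id": "MRU-1", "patient_id": "P01", "sample_date": date(2010, 3, 4)},
    {"case_id": "MRU-2", "patient_id": "P02", "sample_date": date(2010, 5, 20)},
]
episodes = [
    {"episode_id": "HES-9", "patient_id": "P01", "admission_date": date(2010, 3, 2)},
    {"episode_id": "HES-7", "patient_id": "P02", "admission_date": date(2010, 8, 1)},
]
print(link(confirmations, episodes))  # only MRU-1 falls within the window
```

    Cases left unmatched by such a join are exactly the "hospitalised but unconfirmed" group the abstract goes on to characterize.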

  14. Slow invasion of a nonwetting fluid from multiple inlet sources in a thin porous layer.

    Science.gov (United States)

    Ceballos, L; Prat, M; Duru, P

    2011-11-01

    We numerically study the process of quasistatic invasion of a nonwetting fluid in 2D and 3D porous layers from multiple inlet injection sources and show that a porous layer acts as a two-phase filter as a result of the repeated convergence of flow paths: the probability for a pore at the outlet to be a breakthrough point is significantly lower than the fraction of active injection points at the inlet, owing to the merging within the porous layer of liquid paths originating from different inlet injection points. The study of the breakthrough point statistics indicates that the number of breakthrough points diminishes with the system thickness and that the behavior of thin layers, defined here as systems of typical thicknesses of less than 15 lattice spacing units (≈15 pore or grain mean sizes), is distinct from that of thicker layers. For thicker systems, it is found that the probability of an outlet pore being a breakthrough pore scales as l^(1-d), where l is the system thickness and d is the space dimensionality, whereas a power-law behavior is not obtained for a thin system. Other properties, such as the invading-phase occupancy profiles, are also studied. We also describe a kinetic algorithm that allowed us to compute the occurrence times of breakthrough points. The distribution of these times is markedly different in 2D and in 3D.

  15. ON SOURCE ANALYSIS BY WAVE SPLITTING WITH APPLICATIONS IN INVERSE SCATTERING OF MULTIPLE OBSTACLES

    Institute of Scientific and Technical Information of China (English)

    Fahmi ben Hassen; Jijun Liu; Roland Potthast

    2007-01-01

    We study wave splitting procedures for acoustic or electromagnetic scattering problems. The idea of these procedures is to split some scattered field into a sum of fields coming from different spatial regions such that this information can be used either for inversion algorithms or for active noise control. Splitting algorithms can be based on general boundary layer potential representations or on Green's representation formula. We prove the unique decomposition of the scattered wave outside the specified reference domain G and the unique decomposition of the far-field pattern with respect to different reference domains G. Further, we employ the splitting technique for field reconstruction for a scatterer with two or more separate components, by combining it with the point source method for wave recovery. Using the decomposition of the scattered wave as well as of its far-field pattern, the wave splitting procedure proposed in this paper gives an efficient way to compute the scattered wave near the obstacle, from which the multiple obstacles that cause the far-field pattern can be reconstructed separately. This considerably extends the range of the decomposition methods in the area of inverse scattering. Finally, we provide numerical examples to demonstrate the feasibility of the splitting method.

  16. Use of multiple data sources to estimate the economic cost of dengue illness in Malaysia.

    Science.gov (United States)

    Shepard, Donald S; Undurraga, Eduardo A; Lees, Rosemary Susan; Halasa, Yara; Lum, Lucy Chai See; Ng, Chiu Wan

    2012-11-01

    Dengue represents a substantial burden in many tropical and sub-tropical regions of the world. We estimated the economic burden of dengue illness in Malaysia. Information about economic burden is needed for setting health policy priorities, but accurate estimation is difficult because of incomplete data. We overcame this limitation by merging multiple data sources to refine our estimates, including an extensive literature review, discussion with experts, review of data from health and surveillance systems, and implementation of a Delphi process. Because Malaysia has a passive surveillance system, the number of dengue cases is under-reported. Using an adjusted estimate of total dengue cases, we estimated an economic burden of dengue illness of US$56 million (Malaysian Ringgit MYR196 million) per year, which is approximately US$2.03 (Malaysian Ringgit 7.14) per capita. The overall economic burden of dengue would be even higher if we included costs associated with dengue prevention and control, dengue surveillance, and long-term sequelae of dengue.

  17. Maximum Likelihood DOA Estimation of Multiple Wideband Sources in the Presence of Nonuniform Sensor Noise

    Directory of Open Access Journals (Sweden)

    K. Yao

    2007-12-01

    Full Text Available We investigate the maximum likelihood (ML) direction-of-arrival (DOA) estimation of multiple wideband sources in the presence of unknown nonuniform sensor noise. A new closed-form expression for the direction estimation Cramér-Rao bound (CRB) has been derived. The performance of the conventional wideband uniform ML estimator under nonuniform noise has been studied. In order to mitigate the performance degradation caused by the nonuniformity of the noise, a new deterministic wideband nonuniform ML DOA estimator is derived and two associated processing algorithms are proposed. The first algorithm is based on an iterative procedure which stepwise concentrates the log-likelihood function with respect to the DOAs and the noise nuisance parameters, while the second is a noniterative algorithm that maximizes the derived approximately concentrated log-likelihood function. The performance of the proposed algorithms is tested through extensive computer simulations. Simulation results show the stepwise-concentrated ML algorithm (SC-ML) requires only a few iterations to converge, and both the SC-ML and the approximately concentrated ML algorithm (AC-ML) attain a solution close to the derived CRB at high signal-to-noise ratio.
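    The concentrated-likelihood grid search underlying such estimators can be illustrated in a greatly simplified setting: a single narrowband source on a uniform linear array with half-wavelength spacing and no noise. This sketch only shows the grid-search idea; the paper's estimators handle multiple wideband sources under unknown nonuniform noise, which this does not attempt.

```python
# Simplified illustration: for one narrowband source, the concentrated
# likelihood is maximized where the snapshot projects most strongly onto
# the array steering vector, so a 1-D grid search recovers the DOA.
import cmath, math

def steering(theta_deg, n_sensors):
    phi = math.pi * math.sin(math.radians(theta_deg))  # spacing d = lambda/2
    return [cmath.exp(-1j * k * phi) for k in range(n_sensors)]

def doa_estimate(snapshot, grid_deg):
    def power(theta):
        a = steering(theta, len(snapshot))
        num = abs(sum(ai.conjugate() * xi for ai, xi in zip(a, snapshot))) ** 2
        return num / len(snapshot)
    return max(grid_deg, key=power)

true_doa = 20.0
x = steering(true_doa, n_sensors=8)          # noise-free snapshot
grid = [g / 2 for g in range(-180, 181)]     # -90..90 deg in 0.5 deg steps
print(doa_estimate(x, grid))  # recovers 20.0
```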

  18. SCGPred: A Score-based Method for Gene Structure Prediction by Combining Multiple Sources of Evidence

    Institute of Scientific and Technical Information of China (English)

    Xiao Li; Qingan Ren; Yang Weng; Haoyang Cai; Yunmin Zhu; Yizheng Zhang

    2008-01-01

    Predicting protein-coding genes still remains a significant challenge. Although a variety of computational programs that commonly use machine learning methods have emerged, prediction accuracy remains low when they are applied to large genomic sequences. Moreover, computational gene finding in newly sequenced genomes is an especially difficult task due to the absence of a training set of abundant validated genes. Here we present a new gene-finding program, SCGPred, to improve the accuracy of prediction by combining multiple sources of evidence. SCGPred can perform both a supervised method in previously well-studied genomes and an unsupervised one in novel genomes. By testing with datasets composed of large DNA sequences from human and the novel genome of Ustilago maydis, SCGPred achieves a significant improvement in comparison to popular ab initio gene predictors. We also demonstrate that SCGPred can significantly improve prediction in novel genomes by combining several foreign gene finders with similarity alignments, which is superior to other unsupervised methods. Therefore, SCGPred can serve as an alternative gene-finding tool for newly sequenced eukaryotic genomes. The program is freely available at http://bio.scu.edu.cn/SCGPred/.
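    Score-based combination of evidence, in the spirit the abstract describes, can be sketched as a weighted sum of per-position scores from several sources followed by thresholding. The sources, weights, and threshold below are invented; SCGPred's actual scoring scheme is not reproduced here.

```python
# Sketch: combine per-position coding scores from multiple evidence sources
# (e.g. an ab initio predictor and similarity alignments) by weighted sum,
# then call positions above a threshold as coding.

def combine_scores(sources, weights):
    length = len(next(iter(sources.values())))
    return [
        sum(weights[name] * sources[name][i] for name in sources)
        for i in range(length)
    ]

def call_coding(combined, threshold=0.5):
    return [score >= threshold for score in combined]

sources = {
    "ab_initio":  [0.9, 0.8, 0.2, 0.1, 0.7],
    "similarity": [1.0, 0.9, 0.0, 0.0, 0.9],
}
weights = {"ab_initio": 0.4, "similarity": 0.6}
combined = combine_scores(sources, weights)
print(call_coding(combined))  # positions 0, 1 and 4 are called coding
```

    Weighting similarity evidence more heavily, as here, mirrors the unsupervised setting the abstract highlights, where ab initio models trained on foreign genomes are less trustworthy.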

  19. Urban land cover thematic disaggregation, employing datasets from multiple sources and RandomForests modeling

    Science.gov (United States)

    Gounaridis, Dimitrios; Koukoulas, Sotirios

    2016-09-01

    Urban land cover mapping has lately attracted a vast amount of attention as it closely relates to a broad scope of scientific and management applications. Recent methodological and technological advancements facilitate the development of datasets with improved accuracy. However, the thematic resolution of urban land cover has received much less attention so far, a fact that hampers the utility of the produced datasets. This paper seeks to provide insights towards the improvement of the thematic resolution of urban land cover classification. We integrate existing, readily available datasets of acceptable accuracy from multiple sources with remote sensing techniques. The study site is Greece, and the urban land cover is classified nationwide into five classes using the RandomForests algorithm. The results allowed us to quantify, for the first time with good accuracy, the proportion occupied by each urban land cover class. The total area covered by urban land cover is 2280 km2 (1.76% of the total terrestrial area), the dominant class is discontinuous dense urban fabric (50.71% of urban land cover) and the least occurring class is discontinuous very low density urban fabric (2.06% of urban land cover).

  20. An objective-oriented approach to program comprehension using multiple information sources

    Institute of Scientific and Technical Information of China (English)

    ZHAO Wei; ZHANG Lu; SUN JiaSu; MEI Hong

    2008-01-01

    Program comprehension is a key activity throughout software maintenance and reuse. The knowledge acquired through comprehending programs can guide engineers in performing various kinds of software maintenance and reuse tasks. An effective comprehension strategy and the associated efficient approach, as well as sophisticated tool support, are the indispensable elements of an entire solution to program comprehension that reduces the high costs of this nontrivial activity. This paper presents an objective-oriented comprehension strategy, in contrast to the traditional comprehensive understanding strategy in the literature. It is a kind of on-demand understanding for specific tasks and is more effective in practice. In addition, using multiple information sources to understand programs is proposed, with a corresponding framework. From these two points of view, we propose a feature-oriented program comprehension approach using requirement documentation. This approach aims at a specific category of feature-related software maintenance and reuse tasks. Case studies were conducted to evaluate the proposed solution. Results from the studied cases show that the experimental prototype provides more explicit advice for software engineers when performing these tasks.

  1. Brain Tumor Database, a free relational database for collection and analysis of brain tumor patient information.

    Science.gov (United States)

    Bergamino, Maurizio; Hamilton, David J; Castelletti, Lara; Barletta, Laura; Castellan, Lucio

    2015-03-01

    In this study, we describe the development and utilization of a relational database designed to manage the clinical and radiological data of patients with brain tumors. The Brain Tumor Database was implemented using MySQL v.5.0, while the graphical user interface was created using PHP and HTML, thus making it easily accessible through a web browser. This web-based approach allows for multiple institutions to potentially access the database. The BT Database can record brain tumor patient information (e.g. clinical features, anatomical attributes, and radiological characteristics) and be used for clinical and research purposes. Analytic tools to automatically generate statistics and different plots are provided. The BT Database is a free and powerful user-friendly tool with a wide range of possible clinical and research applications in neurology and neurosurgery. The BT Database graphical user interface source code and manual are freely available at http://tumorsdatabase.altervista.org.

  2. A methodology for combining multiple commercial data sources to improve measurement of the food and alcohol environment: applications of geographical information systems

    Directory of Open Access Journals (Sweden)

    Dara D. Mendez

    2014-11-01

    Full Text Available Commercial data sources have been increasingly used to measure and locate community resources. We describe a methodology for combining and comparing the differences in commercial data on the food and alcohol environment. We used data from two commercial databases (InfoUSA and Dun & Bradstreet) for 2003 and 2009 to obtain information on food and alcohol establishments and developed a matching process using computer algorithms and manual review, applying ArcGIS to geocode addresses, standard industrial classification and North American industry classification taxonomy for type of establishment, and establishment name. We constructed population- and area-based density measures (e.g. grocery stores) and assessed differences across data sources, and used ArcGIS to map the densities. The matching process resulted in 8,705 and 7,078 unique establishments for 2003 and 2009, respectively. There were more establishments captured in the combined dataset than by relying on one data source alone, and the additional establishments captured ranged from 1,255 to 2,752 in 2009. The correlations for the density measures between the two data sources were highest for alcohol outlets (r = 0.75 and 0.79 for per capita and area, respectively) and lowest for grocery stores/supermarkets (r = 0.32 for both). This process of applying geographical information systems to combine multiple commercial data sources and develop measures of the food and alcohol environment captured more establishments than relying on one data source alone. This replicable methodology was found to be useful for understanding the food and alcohol environment when local or public data are limited.
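    The name/address matching step can be sketched as normalization followed by an exact join on the normalized fields: records from the two vendors are treated as the same establishment when both normalized name and address agree. The study's real matching also used geocoding and industry classification codes, and the records below are fabricated.

```python
# Sketch: normalize establishment names and addresses (lowercase, strip
# punctuation and corporate suffixes), then match across the two vendor
# datasets on the normalized (name, address) key.
import re

def normalize(text):
    text = text.lower()
    text = re.sub(r"[^a-z0-9 ]", "", text)             # drop punctuation
    text = re.sub(r"\b(inc|llc|co|corp)\b", "", text)  # drop corporate suffixes
    return " ".join(text.split())

def match(records_a, records_b):
    index = {(normalize(r["name"]), normalize(r["address"])): r for r in records_a}
    return [
        (index[key]["id"], r["id"])
        for r in records_b
        if (key := (normalize(r["name"]), normalize(r["address"]))) in index
    ]

infousa = [{"id": "A1", "name": "Joe's Grocery, Inc.", "address": "12 Main St."}]
dnb = [{"id": "B7", "name": "JOES GROCERY", "address": "12 MAIN ST"}]
print(match(infousa, dnb))  # the two vendor records unify to one establishment
```

    Records that survive normalization without a partner are the "additional establishments captured" that make the combined dataset more complete than either source alone.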

  3. Physical and mental health comorbidity is common in people with multiple sclerosis: nationally representative cross-sectional population database analysis.

    Science.gov (United States)

    Simpson, Robert J; McLean, Gary; Guthrie, Bruce; Mair, Frances; Mercer, Stewart W

    2014-06-13

    Comorbidity in Multiple Sclerosis (MS) is associated with worse health and higher mortality. This study aims to describe clinician-recorded comorbidities in people with MS. 39 comorbidities in 3826 people with MS aged ≥25 years were compared against 1,268,859 controls. Results were analysed by age, gender, and socioeconomic status, with unadjusted and adjusted Odds Ratios (ORs) calculated using logistic regression. People with MS were more likely to have one (OR 2.44; 95% CI 2.26-2.64), two (OR 1.49; 95% CI 1.38-1.62), three (OR 1.86; 95% CI 1.69-2.04), or four or more (OR 1.61; 95% CI 1.47-1.77) non-MS chronic conditions than controls, and greater mental health comorbidity (OR 2.94; 95% CI 2.75-3.14), which increased as the number of physical comorbidities rose. Cardiovascular conditions, including atrial fibrillation (OR 0.49; 95% CI 0.36-0.67), chronic kidney disease (OR 0.51; 95% CI 0.40-0.65), heart failure (OR 0.62; 95% CI 0.45-0.85), coronary heart disease (OR 0.64; 95% CI 0.52-0.71), and hypertension (OR 0.65; 95% CI 0.59-0.72), were significantly less common in people with MS. People with MS have an excess of multiple chronic conditions, with associated increased mental health comorbidity. The low recorded cardiovascular comorbidity warrants further investigation.
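    The unadjusted odds ratio with its 95% confidence interval, the quantity reported throughout the abstract, can be computed from a 2x2 table with the standard log-OR normal approximation. The counts below are hypothetical; the study's adjusted ORs came from logistic regression, which this sketch does not reproduce.

```python
# Sketch: unadjusted odds ratio and Wald 95% CI from a 2x2 table of
# condition present/absent in cases (people with MS) vs. controls.
import math

def odds_ratio_ci(a, b, c, d):
    """a, b: cases with/without the condition; c, d: controls with/without."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - 1.96 * se_log)
    hi = math.exp(math.log(or_) + 1.96 * se_log)
    return or_, lo, hi

# Hypothetical: 300 of 1000 MS patients vs. 150 of 1000 controls with a condition
or_, lo, hi = odds_ratio_ci(300, 700, 150, 850)
print(f"OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

    A CI lying entirely above (or below) 1, as in the abstract's figures, is what marks a condition as significantly more (or less) common in the MS group.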

  4. DUSTMS-D: DISPOSAL UNIT SOURCE TERM - MULTIPLE SPECIES - DISTRIBUTED FAILURE DATA INPUT GUIDE.

    Energy Technology Data Exchange (ETDEWEB)

    SULLIVAN, T.M.

    2006-01-01

    Performance assessment of a low-level waste (LLW) disposal facility begins with an estimation of the rate at which radionuclides migrate out of the facility (i.e., the source term). The focus of this work is to develop a methodology for calculating the source term. In general, the source term is influenced by the radionuclide inventory, the wasteforms and containers used to dispose of the inventory, and the physical processes that lead to release from the facility (fluid flow, container degradation, wasteform leaching, and radionuclide transport). Many of these physical processes are influenced by the design of the disposal facility (e.g., how the engineered barriers control infiltration of water). The complexity of the problem and the absence of appropriate data prevent development of an entirely mechanistic representation of radionuclide release from a disposal facility. Typically, a number of assumptions, based on knowledge of the disposal system, are used to simplify the problem. This has been done and the resulting models have been incorporated into the computer code DUST-MS (Disposal Unit Source Term-Multiple Species). The DUST-MS computer code is designed to model water flow, container degradation, release of contaminants from the wasteform to the contacting solution and transport through the subsurface media. Water flow through the facility over time is modeled using tabular input. Container degradation models include three types of failure rates: (a) instantaneous (all containers in a control volume fail at once), (b) uniformly distributed failures (containers fail at a linear rate between a specified starting and ending time), and (c) gaussian failure rates (containers fail at a rate determined by a mean failure time, standard deviation and gaussian distribution). 
Wasteform release models include four release mechanisms: (a) rinse with partitioning (inventory is released instantly upon container failure subject to equilibrium partitioning (sorption) with
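    The three container-degradation failure distributions can be sketched as cumulative failed fractions over time. The function and argument names below are illustrative, not DUST-MS input syntax:

    ```python
    import math

    def failed_fraction(t, mode, t_start=None, t_end=None, mu=None, sigma=None):
        """Cumulative fraction of containers failed by time t, for the three models."""
        if mode == "instantaneous":
            # All containers in the control volume fail at once at t_start
            return 1.0 if t >= t_start else 0.0
        if mode == "uniform":
            # Containers fail at a linear rate between t_start and t_end
            if t <= t_start:
                return 0.0
            if t >= t_end:
                return 1.0
            return (t - t_start) / (t_end - t_start)
        if mode == "gaussian":
            # Failure times normally distributed with mean mu and std sigma
            return 0.5 * (1.0 + math.erf((t - mu) / (sigma * math.sqrt(2.0))))
        raise ValueError(f"unknown failure mode: {mode}")
    ```

    Multiplying this fraction by the inventory of a control volume gives the inventory exposed to leaching at time t.
    
    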

  5. Understanding selection bias, time-lags and measurement bias in secondary data sources: Putting the Encyclopedia of Associations database in broader context.

    Science.gov (United States)

    Bevan, Shaun; Baumgartner, Frank R; Johnson, Erik W; McCarthy, John D

    2013-11-01

    Secondary data gathered for purposes other than research play an important role in the social sciences. A recent data release has made an important source of publicly available data on associational interests, the Encyclopedia of Associations (EA), readily accessible to scholars (www.policyagendas.org). In this paper we introduce these new data and systematically investigate issues of lag between events and subsequent reporting in the EA, as these have important but under-appreciated effects on time-series statistical models. We further analyze the accuracy and coverage of the database in numerous ways. Our study serves as a guide to potential users of this database, but we also reflect upon a number of issues that should concern all researchers who use secondary data such as newspaper records, IRS reports and FBI Uniform Crime Reports. Copyright © 2013 Elsevier Inc. All rights reserved.

  6. COXPRESdb in 2015: coexpression database for animal species by DNA-microarray and RNAseq-based expression data with multiple quality assessment systems.

    Science.gov (United States)

    Okamura, Yasunobu; Aoki, Yuichi; Obayashi, Takeshi; Tadaka, Shu; Ito, Satoshi; Narise, Takafumi; Kinoshita, Kengo

    2015-01-01

    The COXPRESdb (http://coxpresdb.jp) provides gene coexpression relationships for animal species. Here, we report the updates of the database, mainly focusing on the following two points. For the first point, we added RNAseq-based gene coexpression data for three species (human, mouse and fly), and largely increased the number of microarray experiments to nine species. The increase of the number of expression data with multiple platforms could enhance the reliability of coexpression data. For the second point, we refined the data assessment procedures, for each coexpressed gene list and for the total performance of a platform. The assessment of coexpressed gene list now uses more reasonable P-values derived from platform-specific null distribution. These developments greatly reduced pseudo-predictions for directly associated genes, thus expanding the reliability of coexpression data to design new experiments and to discuss experimental results. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  7. EVIDENCE FOR MULTIPLE SOURCES OF ¹⁰Be IN THE EARLY SOLAR SYSTEM

    Energy Technology Data Exchange (ETDEWEB)

    Wielandt, Daniel; Krot, Alexander N.; Bizzarro, Martin [Centre for Star and Planet Formation, Natural History Museum of Denmark, University of Copenhagen, Copenhagen DK-1350 (Denmark); Nagashima, Kazuhide; Huss, Gary R. [Hawai'i Institute of Geophysics and Planetology, University of Hawai'i at Manoa, HI 96822 (United States); Ivanova, Marina A. [Vernadsky Institute of Geochemistry and Analytical Chemistry, Moscow 119991 (Russian Federation)

    2012-04-01

    Beryllium-10 is a short-lived radionuclide (t₁/₂ = 1.4 Myr) uniquely synthesized by spallation reactions and inferred to have been present when the solar system's oldest solids (calcium-aluminum-rich inclusions, CAIs) formed. Yet, the astrophysical site of ¹⁰Be nucleosynthesis is uncertain. We report Li-Be-B isotope measurements of CAIs from CV chondrites, including CAIs that formed with the canonical ²⁶Al/²⁷Al ratio of ~5 × 10⁻⁵ (canonical CAIs) and CAIs with Fractionation and Unidentified Nuclear isotope effects (FUN-CAIs) characterized by ²⁶Al/²⁷Al ratios much lower than the canonical value. Our measurements demonstrate the presence of four distinct fossil ¹⁰Be/⁹Be isochrons, lower in the FUN-CAIs than in the canonical CAIs, and variable within these classes. Given that FUN-CAI precursors escaped evaporation-recondensation prior to evaporative melting, we suggest that the ¹⁰Be/⁹Be ratio recorded by FUN-CAIs represents a baseline level present in presolar material inherited from the protosolar molecular cloud, generated via enhanced trapping of galactic cosmic rays. The higher and possibly variable apparent ¹⁰Be/⁹Be ratios of canonical CAIs reflect additional spallogenesis, either in the gaseous CAI-forming reservoir, or in the inclusions themselves: this indicates at least two nucleosynthetic sources of ¹⁰Be in the early solar system. The most promising locale for ¹⁰Be synthesis is close to the proto-Sun during its early mass-accreting stages, as these are thought to coincide with periods of intense particle irradiation occurring on timescales significantly shorter than the formation interval of canonical CAIs.

  8. Multiple species beam production on laser ion source for electron beam ion source in Brookhaven National Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Sekine, M., E-mail: sekine.m.ae@m.titech.ac.jp [Research Laboratory for Nuclear Reactors, Tokyo Institute of Technology, Meguro, Tokyo (Japan); Riken, Wako, Saitama (Japan); Ikeda, S. [Riken, Wako, Saitama (Japan); Department of Energy Science, Tokyo Institute of Technology, Yokohama, Kanagawa (Japan); Hayashizaki, N. [Research Laboratory for Nuclear Reactors, Tokyo Institute of Technology, Meguro, Tokyo (Japan); Kanesue, T.; Okamura, M. [Collider-Accelerator Department, Brookhaven National Laboratory, Upton, New York 11973 (United States)

    2014-02-15

    Extracted ion beams from the test laser ion source (LIS) were transported through a test beam transport line which is almost identical to the actual primary beam transport in the current electron beam ion source apparatus. The tested species were C, Al, Si, Cr, Fe, Cu, Ag, Ta, and Au. All measured beam currents fulfilled the requirements. However, for light-mass ions, the recorded emittance shapes show larger aberrations, and the RMS values exceed the design goal of 0.06 π mm mrad. Since there is margin to enhance the beam current, the required number of singly charged ions within the acceptance can still be supplied if some beam losses at the injection point are allowed. For heavier ions such as Ag, Ta, and Au, the LIS showed very good performance.

  9. Outage Performance of Cooperative Relay Selection with Multiple Source and Destination Antennas over Dissimilar Nakagami-m Fading Channels

    Science.gov (United States)

    Lee, Wooju; Yoon, Dongweon

    Cooperative relay selection, in which one of multiple relays is selected to retransmit the source signal to the destination, has received considerable attention in recent years, because it is a simple way to obtain cooperative diversity in wireless networks. The exact expression of outage probability for a decode-and-forward cooperative relay selection with multiple source and destination antennas over Rayleigh fading channels was recently derived in [9]. In this letter, we derive the exact expressions of outage probability and diversity-multiplexing tradeoff over independent and non-identically distributed Nakagami-m fading channels as an extension of [9]. We then analyze the effects of various parameters such as fading conditions, number of relays, and number of source and destination antennas on the outage probability.

  10. Using Multiple-Variable Matching to Identify Cultural Sources of Differential Item Functioning

    Science.gov (United States)

    Wu, Amery D.; Ercikan, Kadriye

    2006-01-01

    Identifying the sources of differential item functioning (DIF) in international assessments is very challenging, because such sources are often nebulous and intertwined. Even though researchers frequently focus on test translation and content area, few actually go beyond these factors to investigate other cultural sources of DIF. This article…

  11. submitter BioSharing: curated and crowd-sourced metadata standards, databases and data policies in the life sciences

    CERN Document Server

    McQuilton, Peter; Rocca-Serra, Philippe; Thurston, Milo; Lister, Allyson; Maguire, Eamonn; Sansone, Susanna-Assunta

    2016-01-01

    BioSharing (http://www.biosharing.org) is a manually curated, searchable portal of three linked registries. These resources cover standards (terminologies, formats and models, and reporting guidelines), databases, and data policies in the life sciences, broadly encompassing the biological, environmental and biomedical sciences. Launched in 2011 and built by the same core team as the successful MIBBI portal, BioSharing harnesses community curation to collate and cross-reference resources across the life sciences from around the world. BioSharing makes these resources findable and accessible (the core of the FAIR principle). Every record is designed to be interlinked, providing a detailed description not only on the resource itself, but also on its relations with other life science infrastructures. Serving a variety of stakeholders, BioSharing cultivates a growing community, to which it offers diverse benefits. It is a resource for funding bodies and journal publishers to navigate the metadata landscape of the ...

  12. Viewpoints: a framework for object oriented database modelling and distribution

    Directory of Open Access Journals (Sweden)

    Fouzia Benchikha

    2006-01-01

    Full Text Available The viewpoint concept has received widespread attention recently. Its integration into a data model improves the flexibility of the conventional object-oriented data model and allows one to improve the modelling power of objects. The viewpoint paradigm can be used as a means of providing multiple descriptions of an object and as a means of mastering the complexity of current database systems enabling them to be developed in a distributed manner. The contribution of this paper is twofold: to define an object data model integrating viewpoints in databases and to present a federated database system integrating multiple sources following a local-as-extended-view approach.

  13. White light sources based on multiple precision selective micro-filling of structured optical waveguides.

    Science.gov (United States)

    Canning, J; Stevenson, M; Yip, T K; Lim, S K; Martelli, C

    2008-09-29

    Multiple precision selective micro-filling of a structured optical fibre using three luminescent dyes enables the simultaneous capture of red, blue and green luminescence within the core to generate white light. The technology opens up a new approach to integration and superposition of the properties of multiple materials to create unique composite properties within structured waveguides.

  14. Estimation of source location and ground impedance using a hybrid multiple signal classification and Levenberg-Marquardt approach

    Science.gov (United States)

    Tam, Kai-Chung; Lau, Siu-Kit; Tang, Shiu-Keung

    2016-07-01

    A microphone array signal processing method for locating a stationary point source over a locally reactive ground and for estimating ground impedance is examined in detail in the present study. A non-linear least squares approach using the Levenberg-Marquardt method is proposed to overcome the problem of unknown ground impedance. The multiple signal classification method (MUSIC) is used to give the initial estimate of the source location, while the technique of forward-backward spatial smoothing is adopted as a pre-processor for the source localization to minimize the effects of source coherence. The accuracy and robustness of the proposed signal processing method are examined. Results show that source localization in the horizontal direction by MUSIC is satisfactory. However, source coherence drastically reduces the accuracy in estimating the source height. The further application of the Levenberg-Marquardt method, with the results from MUSIC as the initial inputs, significantly improves the accuracy of source height estimation. The proposed method provides effective and robust estimation of the ground surface impedance.
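    The MUSIC initialization step can be illustrated on a simulated uniform linear array. The geometry, noise level, and grid resolution below are assumptions for the demo, not the authors' microphone configuration:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    M, N = 8, 200                       # sensors (half-wavelength ULA), snapshots
    theta_true = 20.0                   # assumed source bearing (degrees)

    def steering(theta_deg):
        """Array response of the ULA for a far-field source at theta_deg."""
        k = np.pi * np.sin(np.radians(theta_deg))    # element spacing d = lambda/2
        return np.exp(-1j * k * np.arange(M))

    s = rng.standard_normal(N) + 1j * rng.standard_normal(N)          # source signal
    noise = 0.05 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
    X = np.outer(steering(theta_true), s) + noise                     # snapshots

    R = X @ X.conj().T / N                       # sample covariance matrix
    w, V = np.linalg.eigh(R)                     # eigenvalues in ascending order
    En = V[:, :-1]                               # noise subspace (single source)

    grid = np.arange(-90.0, 90.0, 0.1)
    A = np.stack([steering(t) for t in grid], axis=1)
    # MUSIC pseudospectrum: peaks where steering vectors are orthogonal to En
    P = 1.0 / np.sum(np.abs(En.conj().T @ A) ** 2, axis=0)
    theta_hat = grid[np.argmax(P)]
    ```

    In the paper's setting, coherent sources would first be decorrelated by forward-backward spatial smoothing before forming the covariance matrix.
    
    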

  15. Real-Time Localization of Moving Dipole Sources for Tracking Multiple Free-Swimming Weakly Electric Fish.

    Directory of Open Access Journals (Sweden)

    James Jaeyoon Jun

    Full Text Available In order to survive, animals must quickly and accurately locate prey, predators, and conspecifics using the signals they generate. The signal source location can be estimated using multiple detectors and the inverse relationship between the received signal intensity (RSI) and the distance, but the difficulty of source localization increases if there is an additional dependence on the orientation of the signal source. In such cases, the signal source can be approximated as an ideal dipole for simplification. Based on a theoretical model, the RSI can be directly predicted from a known dipole location; but estimating a dipole location from RSIs has no direct analytical solution. Here, we propose an efficient solution to the dipole localization problem by using a lookup table (LUT) to store RSIs predicted by our theoretically derived dipole model at many possible dipole positions and orientations. For a given set of RSIs measured at multiple detectors, our algorithm finds the dipole location with the closest matching normalized RSIs in the LUT, and further refines the location at higher resolution. Studying the natural behavior of weakly electric fish (WEF) requires efficiently computing their location and the temporal pattern of their electric signals over extended periods. Our dipole localization method was successfully applied to track single or multiple freely swimming WEF in shallow water in real time, as each fish can be closely approximated by an ideal current dipole in two dimensions. Our optimized search algorithm found the animals' positions, orientations, and tail-bending angles quickly and accurately under various conditions, without the need for calibrating individual-specific parameters. Our dipole localization method is directly applicable to studying the role of active sensing during spatial navigation, or social interactions between multiple WEF.
Furthermore, our method could be extended to other application areas involving dipole
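    The lookup-table idea can be sketched in two dimensions. The detector-ring geometry and the simplified dipole intensity model below are illustrative assumptions, not the authors' calibrated model:

    ```python
    import numpy as np

    # Eight detectors on a unit ring around the tank (illustrative geometry)
    det = np.array([[np.cos(a), np.sin(a)]
                    for a in np.linspace(0, 2 * np.pi, 8, endpoint=False)])

    def rsi(pos, angle):
        """Predicted RSI at each detector for a 2-D dipole at pos with given orientation."""
        m = np.array([np.cos(angle), np.sin(angle)])   # unit dipole moment
        d = det - pos                                   # detector-to-source vectors
        r = np.linalg.norm(d, axis=1)
        return np.abs(d @ m) / r**3                     # |m.(r-p)| / |r-p|^3

    # Precompute normalized RSI signatures over a coarse grid of poses
    xs = ys = np.linspace(-0.5, 0.5, 11)
    angles = np.linspace(0, np.pi, 12, endpoint=False)
    keys, table = [], []
    for x in xs:
        for y in ys:
            for a in angles:
                v = rsi(np.array([x, y]), a)
                table.append(v / np.linalg.norm(v))
                keys.append((x, y, a))
    table = np.array(table)

    def locate(measured):
        """Nearest-signature lookup: best-matching (x, y, angle) from the LUT."""
        q = measured / np.linalg.norm(measured)
        return keys[int(np.argmin(np.linalg.norm(table - q, axis=1)))]
    ```

    A refinement pass, as in the paper, would then repeat the search on a finer grid around the coarse match.
    
    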

  16. Automated classification of seismic sources in a large database: a comparison of Random Forests and Deep Neural Networks.

    Science.gov (United States)

    Hibert, Clement; Stumpf, André; Provost, Floriane; Malet, Jean-Philippe

    2017-04-01

    In the past decades, the increasing quality of seismic sensors and the capability to transfer large quantities of data remotely have led to a fast densification of local, regional and global seismic networks for near real-time monitoring of crustal and surface processes. This technological advance permits the use of seismology to document geological and natural/anthropogenic processes (volcanoes, ice-calving, landslides, snow and rock avalanches, geothermal fields), but has also led to an ever-growing quantity of seismic data. This wealth of seismic data makes the construction of complete seismicity catalogs, which include earthquakes but also other sources of seismic waves, more challenging and very time-consuming, as this critical pre-processing stage is classically done by human operators and because hundreds of thousands of seismic signals have to be processed. To overcome this issue, the development of automatic methods for the processing of continuous seismic data appears to be a necessity. The classification algorithm should satisfy the need for a method that is robust, precise and versatile enough to be deployed to monitor seismicity in very different contexts. In this study, we evaluate the ability of machine learning algorithms to analyse seismic sources at the Piton de la Fournaise volcano, namely Random Forest and Deep Neural Network classifiers. We gather a catalog of more than 20,000 events belonging to 8 classes of seismic sources. We define 60 attributes, based on the waveform, the frequency content and the polarization of the seismic waves, to parameterize the recorded seismic signals. We show that both algorithms provide similar positive classification rates, with values exceeding 90% of the events. When trained with a sufficient number of events, the rate of positive identification can reach 99%.
These very high rates of positive identification open the perspective of an operational implementation of these algorithms for near-real time monitoring of
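    The classification workflow (one attribute vector per event, supervised training, positive-identification rate) can be sketched with scikit-learn. The synthetic features and three class labels below are stand-ins for the real 60-attribute, 8-class catalog:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    n_per, n_feat = 300, 60            # events per class; waveform/spectral/polarization features
    classes = ["earthquake", "rockfall", "volcano-tectonic"]   # illustrative subset of classes

    # Synthetic stand-in for the attribute catalog: each class has a shifted feature mean
    X = np.vstack([rng.normal(loc=i, scale=1.0, size=(n_per, n_feat))
                   for i in range(len(classes))])
    y = np.repeat(classes, n_per)

    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25,
                                          random_state=0, stratify=y)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr, ytr)
    acc = clf.score(Xte, yte)          # positive classification rate on held-out events
    ```

    With well-separated synthetic classes the held-out accuracy is near 1.0; real seismic catalogs are far less separable, which is why the attribute design matters.
    
    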

  17. Unified Regional Tomography and Source Moment Tensor Inversions Based on Finite-Difference Strain Green Tensor Databases

    Science.gov (United States)

    2009-09-30

    for earthquakes in southern California, Bull. Seism. Soc. Am. 94: 1748-1761. Liu, Q., and J. Tromp (2006). Finite-frequency kernels based on adjoint... (2008a). Component-dependent Frechet sensitivity kernels and utility of three-component seismic records. Bull. Seism. Soc. Am. 98: doi.10.1785/0120070283... L., P. Chen, and T. H. Jordan (2006). Strain Green tensor, reciprocity, and their applications to seismic source and structure studies, Bull. Seism.

  18. Incidence and prevalence of multiple sclerosis in the UK 1990-2010: a descriptive study in the General Practice Research Database.

    Science.gov (United States)

    Mackenzie, I S; Morant, S V; Bloomfield, G A; MacDonald, T M; O'Riordan, J

    2014-01-01

    To estimate the incidence and prevalence of multiple sclerosis (MS) by age and describe secular trends and geographic variations within the UK over the 20-year period between 1990 and 2010 and hence to provide updated information on the impact of MS throughout the UK. A descriptive study. The study was carried out in the General Practice Research Database (GPRD), a primary care database representative of the UK population. Incidence and prevalence of MS per 100 000 population. Secular and geographical trends in incidence and prevalence of MS. The prevalence of MS recorded in GPRD increased by about 2.4% per year (95% CI 2.3% to 2.6%) reaching 285.8 per 100 000 in women (95% CI 278.7 to 293.1) and 113.1 per 100 000 in men (95% CI 108.6 to 117.7) by 2010. There was a consistent downward trend in incidence of MS reaching 11.52 per 100 000/year (95% CI 10.96 to 12.11) in women and 4.84 per 100 000/year (95% CI 4.54 to 5.16) in men by 2010. Peak incidence occurred between ages 40 and 50 years and maximum prevalence between ages 55 and 60 years. Women accounted for 72% of prevalent and 71% of incident cases. Scotland had the highest incidence and prevalence rates in the UK. We estimate that 126 669 people were living with MS in the UK in 2010 (203.4 per 100 000 population) and that 6003 new cases were diagnosed that year (9.64 per 100 000/year). There is an increasing population living longer with MS, which has important implications for resource allocation for MS in the UK.

  19. Frequency-swept laser light source at 1050 nm with higher bandwidth due to multiple semiconductor optical amplifiers in series

    DEFF Research Database (Denmark)

    Marschall, Sebastian; Thrane, Lars; Andersen, Peter E.;

    2009-01-01

    We report on the development of an all-fiber frequency-swept laser light source in the 1050 nm range based on semiconductor optical amplifiers (SOA) with improved bandwidth due to multiple gain media. It is demonstrated that even two SOAs with nearly equal gain spectra can improve the performance. At high sweep rates (…Hz) the SSOA configuration can maintain a significantly higher bandwidth (~50% higher) compared to the MOPA architecture, and correspondingly narrower point spread functions can be generated in a Michelson interferometer.

  20. Cloud Databases: A Paradigm Shift in Databases

    Directory of Open Access Journals (Sweden)

    Indu Arora

    2012-07-01

    Full Text Available Relational databases ruled the Information Technology (IT) industry for almost 40 years. But the last few years have seen sea changes in the way IT is being used and viewed. Stand-alone applications have been replaced with web-based applications, dedicated servers with multiple distributed servers, and dedicated storage with network storage. Cloud computing has become a reality due to its lower cost, scalability and pay-as-you-go model. It is one of the biggest changes in IT since the rise of the World Wide Web. Cloud databases such as Big Table, Sherpa and SimpleDB are becoming popular. They address the limitations of existing relational databases related to scalability, ease of use and dynamic provisioning. Cloud databases are mainly used for data-intensive applications such as data warehousing, data mining and business intelligence. These applications are read-intensive, scalable and elastic in nature. Transactional data management applications such as banking, airline reservation, online e-commerce and supply chain management applications are write-intensive. Databases supporting such applications require ACID (Atomicity, Consistency, Isolation and Durability) properties, but these databases are difficult to deploy in the cloud. The goal of this paper is to review the state of the art in cloud databases and various architectures. It further assesses the challenges in developing cloud databases that meet user requirements and discusses popular cloud databases.

  1. A Novel Method for Separating and Locating Multiple Partial Discharge Sources in a Substation.

    Science.gov (United States)

    Li, Pengfei; Zhou, Wenjun; Yang, Shuai; Liu, Yushun; Tian, Yan; Wang, Yong

    2017-01-27

    To separate and locate multi-partial discharge (PD) sources in a substation, the use of spectrum differences of ultra-high frequency signals radiated from various sources as characteristic parameters has been previously reported. However, the separation success rate was poor when signal-to-noise ratio was low, and the localization result was a coordinate on two-dimensional plane. In this paper, a novel method is proposed to improve the separation rate and the localization accuracy. A directional measuring platform is built using two directional antennas. The time delay (TD) of the signals captured by the antennas is calculated, and TD sequences are obtained by rotating the platform at different angles. The sequences are separated with the TD distribution feature, and the directions of the multi-PD sources are calculated. The PD sources are located by directions using the error probability method. To verify the method, a simulated model with three PD sources was established by XFdtd. Simulation results show that the separation rate is increased from 71% to 95% compared with the previous method, and an accurate three-dimensional localization result was obtained. A field test with two PD sources was carried out, and the sources were separated and located accurately by the proposed method.
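    The direction-finding step, turning the inter-antenna time delay into a bearing, follows the standard far-field relation Δt = d·sin(θ)/c. The function below is a generic sketch of that relation, not the authors' error-probability localization method:

    ```python
    import math

    def bearing_from_delay(dt, baseline, c=3.0e8):
        """Far-field bearing (degrees) from the time delay dt (s) between two
        antennas separated by `baseline` metres; c is the propagation speed."""
        s = c * dt / baseline
        s = max(-1.0, min(1.0, s))   # clamp against noise-induced overshoot
        return math.degrees(math.asin(s))
    ```

    Rotating the platform and repeating the measurement yields one bearing per angle, which is how the TD sequences for the multiple sources are built up.
    
    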

  2. Congestion control for ATM multiplexers using neural networks:multiple sources/single buffer scenario

    Institute of Scientific and Technical Information of China (English)

    杜树新; 袁石勇

    2004-01-01

    A new neural network based method for solving the problem of congestion control arising at the user network interface (UNI) of ATM networks is proposed in this paper. Unlike previous methods, where the coding rate for all traffic sources as controller output signals is tuned as a whole, the proposed method adjusts the coding rate for only a part of the traffic sources while the remaining sources send cells at the previous coding rate when congestion occurs. The controller output signals include the source coding rate and the percentage of the sources that send cells at the corresponding coding rate. The control method not only minimizes the cell loss rate but also guarantees the quality of information (such as voice sources) fed into the multiplexer buffer. Simulations with 150 ADPCM voice sources fed into the multiplexer buffer showed that the proposed method has an advantage over previous methods with respect to performance indices such as cell loss rate (CLR) and voice quality.

  3. The test beamline of the European Spallation Source - Instrumentation development and wavelength frame multiplication

    DEFF Research Database (Denmark)

    Woracek, R.; Hofmann, T.; Bulat, M.

    2016-01-01

    The European Spallation Source (ESS), scheduled to start operation in 2020, is aiming to deliver the most intense neutron beams for experimental research of any facility worldwide. Its long pulse time structure implies significant differences for instrumentation compared to other spallation sources, which, in contrast, all provide short neutron pulses. In order to enable the development of methods and technology adapted to this novel type of source well in advance of the first instruments being constructed at ESS, a test beamline (TBL) was designed and built at the BER II research reactor...

  4. A 5-kg time-resolved luminescence photometer with multiple excitation sources

    Science.gov (United States)

    A portable fluorometer was developed to detect food contaminants and environmental pollutants including, in particular, two classes of antibiotics: tetracyclines and fluoroquinolones. Time resolution was implemented to take advantage of lanthanide-sensitized luminescence. Excitation sources included...

  5. μ-diff: An open-source Matlab toolbox for computing multiple scattering problems by disks

    Science.gov (United States)

    Thierry, Bertrand; Antoine, Xavier; Chniti, Chokri; Alzubaidi, Hasan

    2015-07-01

    The aim of this paper is to describe a Matlab toolbox, called μ-diff, for modeling and numerically solving two-dimensional complex multiple scattering by a large collection of circular cylinders. The approximation methods in μ-diff are based on the Fourier series expansions of the four basic integral operators arising in scattering theory. Based on these expressions, an efficient spectrally accurate finite-dimensional solution of multiple scattering problems can be simply obtained for complex media even when many scatterers are considered as well as large frequencies. The solution of the global linear system to solve can use either direct solvers or preconditioned iterative Krylov subspace solvers for block Toeplitz matrices. Based on this approach, this paper explains how the code is built and organized. Some complete numerical examples of applications (direct and inverse scattering) are provided to show that μ-diff is a flexible, efficient and robust toolbox for solving some complex multiple scattering problems.

  6. The Candida genome database incorporates multiple Candida species: multispecies search and analysis tools with curated gene and protein information for Candida albicans and Candida glabrata.

    Science.gov (United States)

    Inglis, Diane O; Arnaud, Martha B; Binkley, Jonathan; Shah, Prachi; Skrzypek, Marek S; Wymore, Farrell; Binkley, Gail; Miyasato, Stuart R; Simison, Matt; Sherlock, Gavin

    2012-01-01

    The Candida Genome Database (CGD, http://www.candidagenome.org/) is an internet-based resource that provides centralized access to genomic sequence data and manually curated functional information about genes and proteins of the fungal pathogen Candida albicans and other Candida species. As the scope of Candida research, and the number of sequenced strains and related species, has grown in recent years, the need for expanded genomic resources has also grown. To answer this need, CGD has expanded beyond storing data solely for C. albicans, now integrating data from multiple species. Herein we describe the incorporation of this multispecies information, which includes curated gene information and the reference sequence for C. glabrata, as well as orthology relationships that interconnect Locus Summary pages, allowing easy navigation between genes of C. albicans and C. glabrata. These orthology relationships are also used to predict GO annotations of their products. We have also added protein information pages that display domains, structural information and physicochemical properties; bibliographic pages highlighting important topic areas in Candida biology; and a laboratory strain lineage page that describes the lineage of commonly used laboratory strains. All of these data are freely available at http://www.candidagenome.org/. We welcome feedback from the research community at candida-curator@lists.stanford.edu.

  7. A best-case probe, light source, and database for H2O absorption thermometry to 2100 K and 50 bar

    Science.gov (United States)

    Brittelle, Mack S.

    This work aspired to improve the ability of forthcoming researchers to utilize near-IR H2O absorption spectroscopy for thermometry with the development of three best-case techniques: the design of novel high-temperature sapphire optical access probes, the construction of a fixed-wavelength H2O absorption spectroscopy system enhanced by an on-board external-cavity diode laser, and the creation of an architecture for a high-temperature and -pressure H2O absorption cross-section database. Each area's main goal was to realize the best case for direct absorption spectroscopy H2O vapor thermometry at combustion conditions. Optical access to combustion devices is explored through the design and implementation of two versions of novel high-temperature (2000 K) sapphire immersion probes (HTSIPs) for use in ambient flames and gas turbine combustors. The development and evaluation of a fixed-wavelength H2O absorption spectroscopy (FWAS) system demonstrates how the ECDL allows the system to operate in multiple modes that enhance FWAS measurement accuracy by improving wavelength position monitoring and reducing non-absorption-based contamination in spectral scans. The architecture of a high-temperature (2100 K) and -pressure (50 bar) database (HTPD) is developed that can enhance absorption-spectroscopy-based thermometry. The HTPD formation is developed by the evaluation of two approaches: a line-by-line (LBL) approach, where transition lineshape parameters are extracted from spectra and used along with a physics-based model to allow the simulation of spectra over a wide range of temperatures and pressures, and an absorption cross-section (σabs) approach, where spectra generated from a high-temperature and -pressure furnace are cataloged at various conditions, forming a database of absorption cross-sections that is then interpolated to provide a simulated absorbance spectrum based on measured reference-grade spectra. Utilizing near future reference grade H2O
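    The cross-section approach, cataloging σabs on a (T, P) grid and interpolating between entries to simulate absorbance spectra, can be sketched with bilinear interpolation. The grid values and the synthetic cross-sections below are placeholders, not HTPD data:

    ```python
    import numpy as np

    # Toy cross-section "database": sigma(T, P, nu) on a coarse grid (synthetic values)
    T_grid = np.array([1000.0, 1500.0, 2000.0])   # K
    P_grid = np.array([1.0, 25.0, 50.0])          # bar
    nu = np.linspace(7000.0, 7010.0, 101)         # wavenumber axis, cm^-1
    sigma = np.array([[np.exp(-((nu - 7005.0) ** 2) / (0.01 * T / 1000.0 + 0.02 * P))
                       for P in P_grid] for T in T_grid])

    def interp_sigma(T, P):
        """Bilinear interpolation of the cataloged cross-sections in (T, P)."""
        i = int(np.clip(np.searchsorted(T_grid, T) - 1, 0, len(T_grid) - 2))
        j = int(np.clip(np.searchsorted(P_grid, P) - 1, 0, len(P_grid) - 2))
        tT = (T - T_grid[i]) / (T_grid[i + 1] - T_grid[i])
        tP = (P - P_grid[j]) / (P_grid[j + 1] - P_grid[j])
        return ((1 - tT) * (1 - tP) * sigma[i, j] + tT * (1 - tP) * sigma[i + 1, j]
                + (1 - tT) * tP * sigma[i, j + 1] + tT * tP * sigma[i + 1, j + 1])
    ```

    Thermometry then amounts to finding the (T, P) whose interpolated spectrum best matches a measured absorbance spectrum, avoiding the lineshape modelling of the LBL approach.
    
    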

  8. Integrated Tsunami Database: simulation and identification of seismic tsunami sources, 3D visualization and post-disaster assessment on the shore

    Science.gov (United States)

    Krivorot'ko, Olga; Kabanikhin, Sergey; Marinin, Igor; Karas, Adel; Khidasheli, David

    2013-04-01

    One of the most important problems in tsunami investigation is the reconstruction of the seismic tsunami source. The non-profit organization WAPMERR (http://wapmerr.org) has provided a historical database of presumed tsunami sources around the world, obtained with the help of information about seaquakes. WAPMERR also has a database of observations of tsunami waves in coastal areas. The main idea of the presentation is to determine the tsunami source parameters using seismic data and observations of the tsunami waves on the shore, and to expand and refine the database of presupposed tsunami sources for prompt and accurate prediction of hazards and assessment of risks and consequences. We also present 3D visualization of real-time tsunami wave propagation and loss assessment, characterizing the nature of the building stock in cities at risk, and monitoring by satellite images using the modern GIS technology ITRIS (Integrated Tsunami Research and Information System) developed by WAPMERR and Informap Ltd. Special scientific plug-in components are embedded in a purpose-built GIS-type graphic shell for easy data retrieval, visualization and processing. The physical models most suitable for simulating tsunamis are based on the shallow water equations. We consider the initial-boundary value problem in Ω := {(x,y) ∈ R² : x ∈ (0,Lx), y ∈ (0,Ly), Lx,Ly > 0} for the well-known linear shallow water equations in the Cartesian coordinate system, written in dimensional form in terms of the liquid flow components. Here η(x,y,t) defines the vertical displacement of the free water surface, i.e. the amplitude of the tsunami wave, and q(x,y) is the initial amplitude of the tsunami wave. The lateral boundary is assumed to be a non-reflecting boundary of the domain, that is, it allows the free passage of the propagating waves. Assume that the free-surface oscillation data at points (xm, ym) are given as measured output data from tsunami records: fm(t) := η(xm, ym, t).
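The equations themselves did not survive extraction. A standard dimensional form of the linear shallow-water system the abstract refers to (the textbook form, not necessarily the exact one in the original) is:

```latex
\frac{\partial u}{\partial t} = -g\,\frac{\partial \eta}{\partial x}, \qquad
\frac{\partial v}{\partial t} = -g\,\frac{\partial \eta}{\partial y}, \qquad
\frac{\partial \eta}{\partial t}
  = -\frac{\partial (H u)}{\partial x} - \frac{\partial (H v)}{\partial y},
\qquad \eta\big|_{t=0} = q(x,y), \quad u\big|_{t=0} = v\big|_{t=0} = 0,
```

where (u, v) are the depth-averaged flow components, H(x,y) is the undisturbed water depth, and g is gravitational acceleration; the records fm(t) = η(xm, ym, t) then constrain the inversion for q(x,y).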

  9. From Big Data to Smart Data for Pharmacovigilance: The Role of Healthcare Databases and Other Emerging Sources.

    Science.gov (United States)

    Trifirò, Gianluca; Sultana, Janet; Bate, Andrew

    2017-08-24

    In the last decade 'big data' has become a buzzword used in several industrial sectors, including but not limited to telephony, finance and healthcare. Despite its popularity, it is not always clear what big data refers to exactly. Big data has become a very popular topic in healthcare, where the term primarily refers to the vast and growing volumes of computerized medical information available in the form of electronic health records, administrative or health claims data, disease and drug monitoring registries and so on. This kind of data is generally collected routinely during administrative processes and clinical practice by different healthcare professionals: from doctors recording their patients' medical history, drug prescriptions or medical claims to pharmacists registering dispensed prescriptions. For a long time, this data accumulated without its value being fully recognized and leveraged. Today big data has an important place in healthcare, including in pharmacovigilance. The expanding role of big data in pharmacovigilance includes signal detection, substantiation and validation of drug or vaccine safety signals, and increasingly new sources of information such as social media are also being considered. The aim of the present paper is to discuss the uses of big data for drug safety post-marketing assessment.

  10. Using Multiple Sources of Information in Establishing Text Complexity. Reading Research Report. #11.03

    Science.gov (United States)

    Hiebert, Elfrieda H.

    2011-01-01

    A focus of the Common Core State Standards/English Language Arts (CCSS/ELA) is that students become increasingly more capable with complex text over their school careers. This focus has redirected attention to the measurement of text complexity. Although CCSS/ELA suggests multiple criteria for this task, the standards offer a single measure of…

  11. Cellular sources of dysregulated cytokines in relapsing-remitting multiple sclerosis

    DEFF Research Database (Denmark)

    Romme Christensen, Jeppe; Börnsen, Lars; Hesse, Dan

    2012-01-01

    Numerous cytokines are implicated in the immunopathogenesis of multiple sclerosis (MS), but studies are often limited to whole blood (WB) or peripheral blood mononuclear cells (PBMCs), thereby omitting important information about the cellular origin of the cytokines. Knowledge about the relation ...

  12. Isolating and Examining Sources of Suppression and Multicollinearity in Multiple Linear Regression

    Science.gov (United States)

    Beckstead, Jason W.

    2012-01-01

    The presence of suppression (and multicollinearity) in multiple regression analysis complicates interpretation of predictor-criterion relationships. The mathematical conditions that produce suppression in regression analysis have received considerable attention in the methodological literature but until now nothing in the way of an analytic…

  13. The NCBI Taxonomy database.

    Science.gov (United States)

    Federhen, Scott

    2012-01-01

    The NCBI Taxonomy database (http://www.ncbi.nlm.nih.gov/taxonomy) is the standard nomenclature and classification repository for the International Nucleotide Sequence Database Collaboration (INSDC), comprising the GenBank, ENA (EMBL) and DDBJ databases. It includes organism names and taxonomic lineages for each of the sequences represented in the INSDC's nucleotide and protein sequence databases. The taxonomy database is manually curated by a small group of scientists at the NCBI who use the current taxonomic literature to maintain a phylogenetic taxonomy for the source organisms represented in the sequence databases. The taxonomy database is a central organizing hub for many of the resources at the NCBI, and provides a means for clustering elements within other domains of the NCBI web site, for internal linking between domains of the Entrez system, and for linking out to taxon-specific external resources on the web. Our primary purpose is to index the domain of sequences as conveniently as possible for our user community.

  14. Radio Properties of Low Redshift Broad Line Active Galactic Nuclei Including Multiple Component Radio Sources

    Science.gov (United States)

    Rafter, Stephen E.

    2010-01-01

    We present results on the radio properties of a low redshift (z FRIIs. From these data we find an FRI/FRII luminosity dividing line like that found by Fanaroff & Riley (1974), where we use our core-only sources as proxies for FRIs, and our multi-component sources for the FRIIs. We find a bimodal distribution for the radio loudness (R = L(radio)/L(opt)) where the lower radio luminosity core-only sources appear as a population separate from the multi-component extended sources, compared with no evidence for bimodality when just the core-only sources are used. We also find that a log(R) value of 1.75 is well suited to separate the FRIs from the FRIIs, and that the R bimodality seen here is really a manifestation of the FRI/FRII break originally found by Fanaroff & Riley (1974). We find modest trends in the radio loud fraction as a function of Eddington ratio and black hole mass, where the fraction of RL AGNs decreases with increasing Eddington ratio, and increases when the black hole mass is above 2 × 10^8 solar masses.

  15. Estimating sediment sources by multiple scale field measurements and fingerprinting using radionuclides

    Science.gov (United States)

    Onda, Y.; Mizugaki, S.; Nanko, K.; Asai, H.

    2006-12-01

    To study fluvial sediment sources in a forested watershed on Shikoku Island, Japan, field measurements and radionuclide analyses were conducted. Erosion and runoff processes were observed at multiple scales in an unmanaged Japanese cypress (Chamaecyparis obtusa) plantation catchment using splash cups, runoff plots, Parshall flumes and integrated suspended sediment samplers for 5 months. For fingerprinting of suspended sediment, Cs-137 and Pb-210ex were determined by gamma-ray spectrometry for the potential sources (surface soil of the forest floor, stream bank and skid trail), for sediment eroded by splash and runoff, and for fluvial sediment. The concentrations of Cs-137 and Pb-210ex in fluvial sediment were found to vary between sampling periods, indicating temporal variation of suspended sediment sources in the watershed. The contribution of the forest floor as a suspended sediment source was estimated to be as high as ~77 % from Cs-137. The results suggest that the forest floor should be recognized as an important source of fluvial sediment in this watershed. Based on the field measurements, splash detachment and overland flow occurred during rainfall events on the hillslope, eroded the surface soil on the forest floor, and transported fine particles downslope. Overland flow on the skid trail network can transport the forest floor sediment into the stream channel, resulting in a high contribution of forest floor soil to fluvial sediment in this Japanese cypress catchment.
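In the two-source case, the fingerprinting step above reduces to linearly un-mixing tracer concentrations between the candidate sources. A minimal sketch with made-up Cs-137 activities (the study used measured activities and more than two sources, so this is only the core idea):

```python
def two_source_fraction(c_sediment, c_source_a, c_source_b):
    """Fraction of sediment from source A under a two-source linear
    mixing model: c_sed = f * c_A + (1 - f) * c_B, solved for f."""
    return (c_sediment - c_source_b) / (c_source_a - c_source_b)

# Hypothetical Cs-137 activities (Bq/kg): the forest floor surface soil is
# enriched, the stream bank is depleted, and fluvial sediment lies between.
f_floor = two_source_fraction(c_sediment=16.0, c_source_a=20.0, c_source_b=2.0)
# f_floor is the estimated forest-floor contribution (a fraction in [0, 1]).
```

With these invented numbers the forest-floor fraction comes out near 0.78, i.e. roughly the order of contribution reported in the abstract.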

  16. A Methodology for a Comprehensive Probabilistic Tsunami Hazard Assessment: Multiple Sources and Short-Term Interactions

    Directory of Open Access Journals (Sweden)

    Grezio Anita

    2015-01-01

    Full Text Available We propose a methodological approach for a comprehensive and total probabilistic tsunami hazard assessment (TotPTHA), in which many different possible source types concur to the definition of the total tsunami hazard at given target sites. In a multi-hazard and multi-risk perspective, the approach allows us to consider all possible tsunamigenic sources (seismic events, slides, volcanic eruptions, asteroids, etc.). In this respect, we also formally introduce and discuss the treatment of interaction/cascade effects in the TotPTHA analysis, and we demonstrate how triggering events may induce significant temporary variations in the short-term tsunami hazard. In two target sites (the city of Naples and the island of Ischia in Italy) we prove the feasibility of the TotPTHA methodology in the multi-source case, considering near submarine seismic sources and submarine mass failures in the study area. The TotPTHA indicated that the tsunami hazard increases significantly when both the potential submarine mass failures and the submarine seismic events are considered. Finally, the importance of the source interactions is evaluated by applying a triggering seismic event that causes relevant changes in the short-term TotPTHA.
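If the source types are treated as independent (the paper's interaction/cascade treatment relaxes exactly this assumption and is not reproduced here), the total exceedance probability combines per-source probabilities multiplicatively. A minimal sketch with invented probabilities:

```python
def total_exceedance_probability(p_sources):
    """P(at least one source type causes the hazard metric to be
    exceeded), assuming independent sources: 1 - prod(1 - p_i)."""
    p_none = 1.0
    for p in p_sources:
        p_none *= (1.0 - p)   # probability that no source causes exceedance
    return 1.0 - p_none

# Hypothetical per-source exceedance probabilities at one target site:
# submarine seismic events and submarine mass failures.
p_total = total_exceedance_probability([0.02, 0.01])
```

Adding the mass-failure term raises the total hazard above the seismic-only value, mirroring the abstract's qualitative finding that considering both source types increases the hazard significantly.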

  17. Integrity Constraint Checking in Federated Databases

    NARCIS (Netherlands)

    Grefen, Paul; Widom, Jennifer

    1996-01-01

    A federated database comprises multiple interconnected databases that cooperate in an autonomous fashion. Global integrity constraints are very useful in federated databases, but the lack of global queries, global transaction mechanisms, and global concurrency control renders traditional constraint…

  18. Multiple information sources and consequences of conflicting information about medicine use during pregnancy: a multinational Internet-based survey.

    Science.gov (United States)

    Hämeen-Anttila, Katri; Nordeng, Hedvig; Kokki, Esa; Jyrkkä, Johanna; Lupattelli, Angela; Vainio, Kirsti; Enlund, Hannes

    2014-02-20

    A wide variety of information sources on medicines is available for pregnant women. When using multiple information sources, there is the risk that information will vary or even conflict. The objective of this multinational study was to analyze the extent to which pregnant women use multiple information sources and the consequences of conflicting information, and to investigate which maternal sociodemographic, lifestyle, and medical factors were associated with these outcomes. An anonymous Internet-based questionnaire was made accessible during a period of 2 months, on 1 to 4 Internet websites used by pregnant women in 5 regions (Eastern Europe, Western Europe, Northern Europe, Americas, Australia). A total of 7092 responses were obtained (n=5090 pregnant women; n=2002 women with a child younger than 25 weeks). Descriptive statistics and logistic regression analysis were used. Of the respondents who stated that they needed information, 16.16% (655/4054) used one information source and 83.69% (3393/4054) used multiple information sources. Of respondents who used more than one information source, 22.62% (759/3355) stated that the information conflicted. According to multivariate logistic regression analysis, factors significantly associated with experiencing conflicting medicine information included being a mother (OR 1.32, 95% CI 1.11-1.58), having university (OR 1.33, 95% CI 1.09-1.63) or other education (OR 1.49, 95% CI 1.09-2.03), residing in Eastern Europe (OR 1.52, 95% CI 1.22-1.89) or Australia (OR 2.28, 95% CI 1.42-3.67), use of 3 (OR 1.29, 95% CI 1.04-1.60) or >4 information sources (OR 1.82, 95% CI 1.49-2.23), and having ≥2 chronic diseases (OR 1.49, 95% CI 1.18-1.89). Because of conflicting information, 43.61% (331/759) decided not to use medication during pregnancy, 30.30% (230/759) sought a new information source, 32.67% (248/759) chose to rely on one source and ignore the conflicting one, 25.03% (190/759) became anxious, and 2.64% (20/759) did…

  19. Sampling versus Random Binning for Multiple Descriptions of a Bandlimited Source

    DEFF Research Database (Denmark)

    Mashiach, Adam; Østergaard, Jan; Zamir, Ram

    2013-01-01

    Random binning is an efficient, yet complex, coding technique for the symmetric L-description source coding problem. We propose an alternative approach that uses the quantized samples of a bandlimited source as "descriptions". By the Nyquist condition, the source can be reconstructed if enough samples are received. We examine a coding scheme that combines sampling and noise-shaped quantization for a scenario in which only K descriptions are received; some K-sets correspond to uniform sampling while others to non-uniform sampling. This scheme achieves the optimum rate-distortion performance for uniform-sampling K-sets, but suffers noise amplification for non-uniform-sampling K-sets. We then show that by increasing the sampling rate and adding a random-binning stage, the optimal operation point is achieved for any K-set.

  20. How organic carbon derived from multiple sources contributes to carbon sequestration processes in a shallow coastal system?

    Science.gov (United States)

    Watanabe, Kenta; Kuwae, Tomohiro

    2015-04-16

    Carbon captured by marine organisms helps sequester atmospheric CO2, especially in shallow coastal ecosystems, where rates of primary production and burial of organic carbon (OC) from multiple sources are high. However, linkages between the dynamics of OC derived from multiple sources and carbon sequestration are poorly understood. We investigated the origin (terrestrial, phytobenthos derived, and phytoplankton derived) of particulate OC (POC) and dissolved OC (DOC) in the water column and sedimentary OC using elemental, isotopic, and optical signatures in Furen Lagoon, Japan. Based on this analysis, we explored how OC from multiple sources contributes to sequestration via storage in sediments, water column sequestration, and air-sea CO2 exchanges, and analyzed how the contributions vary with salinity in a shallow seagrass meadow as well. The relative contribution of terrestrial POC in the water column decreased with increasing salinity, whereas autochthonous POC increased in the salinity range 10-30. Phytoplankton-derived POC dominated the water column POC (65-95%) within this salinity range; however, it was minor in the sediments (3-29%). In contrast, terrestrial and phytobenthos-derived POC were relatively minor contributors in the water column but were major contributors in the sediments (49-78% and 19-36%, respectively), indicating that terrestrial and phytobenthos-derived POC were selectively stored in the sediments. Autochthonous DOC, part of which can contribute to long-term carbon sequestration in the water column, accounted for >25% of the total water column DOC pool in the salinity range 15-30. Autochthonous OC production decreased the concentration of dissolved inorganic carbon in the water column and thereby contributed to atmospheric CO2 uptake, except in the low-salinity zone. Our results indicate that shallow coastal ecosystems function not only as transition zones between land and ocean but also as carbon sequestration filters.

  1. Land-Use Intensity of Electricity Production: Comparison Across Multiple Sources

    Science.gov (United States)

    Swain, M.; Lovering, J.; Blomqvist, L.; Nordhaus, T.; Hernandez, R. R.

    2015-12-01

    Land is an increasingly scarce global resource that is subject to competing pressures from agriculture, human settlement, and energy development. As countries concerned about climate change seek to decarbonize their power sectors, renewable energy sources like wind and solar offer obvious advantages. However, the land needed for new energy infrastructure is also an important environmental consideration. The land requirement of different electricity sources varies considerably, but very few studies offer a normalized comparison. In this paper, we use meta-analysis to calculate the land-use intensity (LUI) of the following electricity generation sources: wind, solar photovoltaic (PV), concentrated solar power (CSP), hydropower, geothermal, nuclear, biomass, natural gas, and coal. We used data from existing studies as well as original data gathered from public records and geospatial analysis. Our land-use metric includes land needed for the generation facility (e.g., power plant or wind farm) as well as the area needed to mine fuel for natural gas, coal, and nuclear power plants. We found the lowest total LUI for nuclear power (115 ha/TWh/y) and the highest for biomass (114,817 ha/TWh/y). Solar PV and CSP had a considerably lower LUI than wind power, but both were an order of magnitude higher than fossil fuels (which ranged from 435 ha/TWh/y for natural gas to 579 ha/TWh/y for coal). Our results suggest that a large build-out of renewable electricity, though it would offer many environmental advantages over fossil fuel power sources, would require considerable land area. Among low-carbon energy sources, relatively compact sources like nuclear and solar have the potential to reduce land requirements.
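The LUI metric is simply occupied land normalized by annual generation. A minimal sketch; the plant footprint and output below are hypothetical inputs chosen only to land near the nuclear figure quoted in the abstract, not data from the study:

```python
def land_use_intensity(area_ha, annual_generation_twh):
    """Land-use intensity in ha/TWh/y: total occupied land (generation
    facility plus fuel-cycle mining share, where applicable) divided by
    yearly electricity generation."""
    return area_ha / annual_generation_twh

# Hypothetical plant: 1000 ha total footprint (including a mining share),
# generating 8.7 TWh per year.
lui = land_use_intensity(1000.0, 8.7)   # ~115 ha/TWh/y, cf. nuclear in the text
```

Normalizing by energy delivered per year (rather than by nameplate capacity) is what makes sources with very different capacity factors comparable on one axis.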

  2. Integrating multiple data sources in species distribution modeling: a framework for data fusion.

    Science.gov (United States)

    Pacifici, Krishna; Reich, Brian J; Miller, David A W; Gardner, Beth; Stauffer, Glenn; Singh, Susheela; McKerrow, Alexa; Collazo, Jaime A

    2017-03-01

    The last decade has seen a dramatic increase in the use of species distribution models (SDMs) to characterize patterns of species' occurrence and abundance. Efforts to parameterize SDMs often create a tension between the quality and quantity of data available to fit models. Estimation methods that integrate both standardized and non-standardized data types offer a potential solution to the tradeoff between data quality and quantity. Recently several authors have developed approaches for jointly modeling two sources of data (one of high quality and one of lesser quality). We extend their work by allowing for explicit spatial autocorrelation in occurrence and detection error using a Multivariate Conditional Autoregressive (MVCAR) model, and we develop three models that share information in a less direct manner, resulting in more robust performance when the auxiliary data are of lesser quality. We describe these three new approaches ("Shared," "Correlation," "Covariates") for combining data sources and show their use in a case study of the Brown-headed Nuthatch in the Southeastern U.S. and through simulations. All three approaches that used the second data source improved out-of-sample predictions relative to a single data source ("Single"). When the information in the second data source is of high quality, the Shared model performs best, but the Correlation and Covariates models also perform well. When the second data source is of lesser quality, the Correlation and Covariates models performed better, suggesting they are robust alternatives when little is known about auxiliary data collected opportunistically or through citizen scientists. Methods that allow for both data types to be used will maximize the useful information available for estimating species distributions. © 2016 The Authors. Ecology, published by Wiley Periodicals, Inc., on behalf of the Ecological Society of America.

  3. Can we use the pharmacy data to estimate the prevalence of chronic conditions? a comparison of multiple data sources

    Directory of Open Access Journals (Sweden)

    Borgia Piero

    2011-09-01

    Full Text Available Abstract Background The prevalence of the most common chronic conditions (CCs) is estimated using direct methods such as prevalence surveys, but also indirect methods based on health administrative databases. The aim of this study is to provide prevalence estimates of CCs in the Lazio region of Italy (including Rome) using the drug prescription database, and to compare these estimates with those obtained from other health administrative databases. Methods The prevalence of CCs was estimated from pharmacy data (PD) using the Anatomical Therapeutic Chemical (ATC) classification system. Prevalence estimates were compared with those derived from the hospital information system (HIS), using lists of ICD9-CM diagnosis codes, from the registry of patients exempt from health care costs for a specific pathology (REP), and from the national health survey performed by the Italian bureau of census (ISTAT). Results From the PD we identified 20 CCs. About one fourth of the population received a drug for treating a cardiovascular disease, and 9% for treating a rheumatologic condition. The prevalences estimated from the PD were usually higher than those obtained from the other sources. In the comparison with the ISTAT survey there was good agreement for cardiovascular disease, diabetes and thyroid disorders, whereas for rheumatologic conditions, chronic respiratory illnesses, migraine and Alzheimer's disease the prevalence estimates were lower than those from the ISTAT survey. Estimates derived from the HIS and the REP were usually lower than those from the PD (except for malignancies and chronic renal diseases). Conclusion Our study showed that PD can be used to provide reliable prevalence estimates of several CCs in the general population.
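Estimating prevalence from pharmacy data boils down to counting distinct residents with at least one dispensed drug whose ATC code falls in the group mapped to the condition, then dividing by the population. A minimal sketch: the ATC prefixes C (cardiovascular system) and A10 (drugs used in diabetes) are real top-level groups, but the prescription records and population are invented:

```python
def prevalence_from_pharmacy(prescriptions, atc_prefix, population):
    """Prevalence of a chronic condition from a prescription register:
    distinct persons with >= 1 drug in the ATC group / population size."""
    treated = {person for person, atc in prescriptions if atc.startswith(atc_prefix)}
    return len(treated) / population

# Toy register of (person_id, ATC code) dispensings; person 1 appears twice
# but must be counted once.
rx = [(1, "C09AA05"), (1, "C07AB02"), (2, "A10BA02"), (3, "C03CA01"), (4, "N02BE01")]
cvd_prev = prevalence_from_pharmacy(rx, "C", population=10)     # cardiovascular
dm_prev = prevalence_from_pharmacy(rx, "A10", population=10)    # diabetes
```

Deduplicating on person, not on dispensing, is the step that turns a prescription count into a person-level prevalence, which is the comparison the abstract makes against survey and registry estimates.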

  4. Contamination event detection using multiple types of conventional water quality sensors in source water.

    Science.gov (United States)

    Liu, Shuming; Che, Han; Smith, Kate; Chen, Lei

    2014-08-01

    Early warning systems are often used to detect deliberate and accidental contamination events in a water system. Conventional methods normally detect a contamination event by comparing the predicted and observed water quality values from one sensor. This paper proposes a new method for event detection by exploring the correlative relationships between multiple types of conventional water quality sensors. The performance of the proposed method was evaluated using data from contaminant injection experiments in a laboratory. Results from these experiments demonstrated the correlative responses of multiple types of sensors. It was observed that the proposed method could detect a contamination event 9 minutes after the introduction of lead nitrate solution with a concentration of 0.01 mg L(-1). The proposed method employs three parameters. Their impact on the detection performance was also analyzed. The initial analysis showed that the correlative response is contaminant-specific, which implies that it can be utilized not only for contamination detection, but also for contaminant identification.
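The correlative idea above, declaring an event only when residuals from several sensor types deviate together, can be sketched compactly. Everything below (sensor names, the z-score threshold, the minimum-sensor rule, and the readings) is invented for illustration and does not reproduce the paper's algorithm or its three parameters:

```python
def detect_event(residuals_by_sensor, z_threshold=2.0, min_sensors=2):
    """Flag a contamination event at each time step when at least
    `min_sensors` sensor types simultaneously deviate from prediction.

    residuals_by_sensor: {sensor: [residual at t0, t1, ...]}, where a
    residual is (observed - predicted) scaled by the historical std.
    """
    n_steps = len(next(iter(residuals_by_sensor.values())))
    alarms = []
    for t in range(n_steps):
        deviating = sum(1 for r in residuals_by_sensor.values()
                        if abs(r[t]) > z_threshold)
        alarms.append(deviating >= min_sensors)
    return alarms

# Toy residuals: pH and ORP respond together from step 2 onward, while a
# lone turbidity spike at step 0 is (correctly) not flagged.
residuals = {
    "pH":        [0.1, 0.3, 2.5, 2.8],
    "ORP":       [0.2, 0.1, 3.1, 2.9],
    "turbidity": [2.4, 0.2, 0.3, 0.1],
}
alarms = detect_event(residuals)
```

Requiring agreement across sensor types is what suppresses single-sensor noise, the same intuition that lets the paper's correlative method outperform one-sensor prediction-residual detection.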

  5. Food Habits Database (FHDBS)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The NEFSC Food Habits Database has two major sources of data. The first, and most extensive, is the standard NEFSC Bottom Trawl Surveys Program. During these...

  6. Capturing domain knowledge from multiple sources: the rare bone disorders use case

    OpenAIRE

    Groza, Tudor; Tudorache, Tania; Peter N Robinson; Zankl, Andreas

    2015-01-01

    Background Lately, ontologies have become a fundamental building block in the process of formalising and storing complex biomedical information. The community-driven ontology curation process, however, ignores the possibility of multiple communities building, in parallel, conceptualisations of the same domain, and thus providing slightly different perspectives on the same knowledge. The individual nature of this effort leads to the need of a mechanism to enable us to create an overarching and...

  7. On the Exposure Limits for Extended Source Multiple Pulse Laser Exposures

    Science.gov (United States)

    2013-08-01

    Electrotechnical Commission (IEC) 60825-1, respectively], and of the International Commission on Non-Ionizing Radiation Protection (ICNIRP) laser exposure limit...of the three rules will be less intuitive. This paper presents an analysis of the proposed exposure limits for multiple pulse laser exposure to the...similar manner to the classification of laser products. The main goal of this paper is to identify and discuss criteria that can identify which of

  8. Choosing and Using Multiple Information Sources: Some New Findings and Emergent Issues

    Science.gov (United States)

    Goldman, Susan R.

    2011-01-01

    This commentary highlights important contributions of four empirical investigations of how people sort through the masses of information that are now available to them and find what they need to solve some problem or make some decision. These studies demonstrate the impact of differences among information sources and the role that these…


  10. Memory for Textual Conflicts Predicts Sourcing When Adolescents Read Multiple Expository Texts

    Science.gov (United States)

    Stang Lund, Elisabeth; Bråten, Ivar; Brante, Eva W.; Strømsø, Helge I.

    2017-01-01

    This study investigated whether memory for conflicting information predicted mental representation of source-content links (i.e., who said what) in a sample of 86 Norwegian adolescent readers. Participants read four texts presenting conflicting claims about sun exposure and health. With differences in gender, prior knowledge, and interest…

  11. Trust and Mistrust when Students Read Multiple Information Sources about Climate Change

    Science.gov (United States)

    Braten, Ivar; Stromso, Helge I.; Salmeron, Ladislao

    2011-01-01

    The present study investigated how undergraduates judged the trustworthiness of different information sources that they read about climate change. Results showed that participants (N = 128) judged information from textbook and official documents to be more trustworthy than information from newspapers and a commercial agent. Moreover, participants…

  12. Evaluation of Personal and Built Environment Attributes to Physical Activity: A Multilevel Analysis on Multiple Population-Based Data Sources

    Directory of Open Access Journals (Sweden)

    Wei Yang

    2012-01-01

    Full Text Available Background. Studies have documented that built environment factors can promote or impede leisure-time physical activity (LTPA). This study explored the relationship between multiple built environment factors and individual characteristics on LTPA. Methods. Multiple data sources were utilized, including individual-level data on health behaviors and health status from the Nevada Behavioral Risk Factor Surveillance System (BRFSS) and community-level data from different data sources, including indicators for recreation facilities, safety, air quality, commute time, urbanization, population density, and land-mix level. Mixed-model logistic regression and geographic information system (GIS) spatial analysis were conducted. Results. Among 6,311 respondents, 24.4% reported no LTPA engagement during the past 30 days. No engagement in LTPA was significantly associated with (1) individual factors: older age, less education, lower income, obesity, and low life satisfaction; and (2) community factors: longer commute time, higher crime rate, urban residence, and higher population density, but not with density of and distance to recreation facilities, air quality, or land mix. Conclusions. Multiple data systems, including complex population surveys and spatial analysis, are valuable tools in studies of health and the built environment.

  13. Locating non-volcanic tremor along the San Andreas Fault using a multiple array source imaging technique

    Science.gov (United States)

    Ryberg, T.; Haberland, C.H.; Fuis, G.S.; Ellsworth, W.L.; Shelly, D.R.

    2010-01-01

    Non-volcanic tremor (NVT) has been observed at several subduction zones and at the San Andreas Fault (SAF). Tremor locations are commonly derived by cross-correlating envelope-transformed seismic traces in combination with source-scanning techniques. Recently, they have also been located by using relative relocations with master events, that is, low-frequency earthquakes that are part of the tremor; locations are derived by conventional traveltime-based methods. Here we present a method to locate the sources of NVT using an imaging approach for multiple array data. The performance of the method is checked with synthetic tests and the relocation of earthquakes. We also applied the method to tremor occurring near Cholame, California. A set of small-aperture arrays (i.e. an array consisting of arrays) installed around Cholame provided the data set for this study. We observed several tremor episodes and located tremor sources in the vicinity of the SAF. During individual tremor episodes, we observed a systematic change of source location, indicating rapid migration of the tremor source along the SAF. © 2010 The Authors, Geophysical Journal International, © 2010 RAS.
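The source-scanning idea can be illustrated by back-projecting envelope amplitudes onto a grid of candidate source positions and keeping the point whose predicted travel times best stack the arrivals. A deliberately tiny sketch: 1-D geometry, constant velocity, synthetic Gaussian envelopes; none of this reproduces the paper's multi-array implementation or its velocity model:

```python
import math

def stack_power(candidate_x, stations, envelopes, dt, v=5.0):
    """Delay-and-stack imaging: sum each station's envelope at the
    travel time predicted for a candidate source position."""
    total = 0.0
    for x_st, env in zip(stations, envelopes):
        t_pred = abs(x_st - candidate_x) / v   # straight-ray travel time
        idx = int(round(t_pred / dt))
        if 0 <= idx < len(env):
            total += env[idx]
    return total

# Synthetic setup: 3 stations along a line, true source at x = 30 km,
# constant velocity 5 km/s, Gaussian envelope pulses at the true arrivals.
stations, true_x, v, dt, n = [0.0, 50.0, 80.0], 30.0, 5.0, 0.1, 200
envelopes = []
for x_st in stations:
    t_arr = abs(x_st - true_x) / v
    envelopes.append([math.exp(-((i * dt - t_arr) ** 2) / 0.5) for i in range(n)])

# Grid search over candidate positions: the stack peaks at the true source.
best = max(range(0, 81), key=lambda x: stack_power(float(x), stations, envelopes, dt))
```

Because only the position where all predicted delays line up stacks all pulses coherently, the grid search recovers the true source without any phase picking, which is what makes the approach attractive for emergent tremor signals.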

  14. Pesticide pollution of multiple drinking water sources in the Mekong Delta, Vietnam: evidence from two provinces.

    Science.gov (United States)

    Chau, N D G; Sebesvari, Z; Amelung, W; Renaud, F G

    2015-06-01

    Pollution of drinking water sources with agrochemicals is often a major threat to human and ecosystem health in some river deltas, where agricultural production must meet the requirements of national food security or export aspirations. This study was performed to survey the use of different drinking water sources and their pollution with pesticides in order to inform on potential exposure sources to pesticides in rural areas of the Mekong River delta, Vietnam. The field work comprised both household surveys and monitoring of 15 frequently used pesticide active ingredients in different water sources used for drinking (surface water, groundwater, water at public pumping stations, surface water chemically treated at household level, harvested rainwater, and bottled water). Our research also considered the surrounding land use systems as well as the cropping seasons. Improper pesticide storage and waste disposal as well as inadequate personal protection during pesticide handling and application were widespread amongst the interviewed households, with little overall risk awareness for human and environmental health. The results show that despite the local differences in the amount and frequency of pesticides applied, pesticide pollution was ubiquitous. Isoprothiolane (max. concentration 8.49 μg L(-1)), fenobucarb (max. 2.32 μg L(-1)), and fipronil (max. 0.41 μg L(-1)) were detected in almost all analyzed water samples (98 % of all surface samples contained isoprothiolane, for instance). Other pesticides quantified comprised butachlor, pretilachlor, propiconazole, hexaconazole, difenoconazole, cypermethrin, fenoxaprop-p-ethyl, tebuconazole, trifloxystrobin, azoxystrobin, quinalphos, and thiamethoxam. Among the studied water sources, concentrations were highest in canal waters. Pesticide concentrations varied with cropping season but did not diminish through the year. Even in harvested rainwater or purchased bottled water, up to 12 different pesticides were detected at…

  15. Design database for quantitative trait loci (QTL) data warehouse, data mining, and meta-analysis.

    Science.gov (United States)

    Hu, Zhi-Liang; Reecy, James M; Wu, Xiao-Lin

    2012-01-01

    A database can be used to warehouse quantitative trait loci (QTL) data from multiple sources for comparison, genomic data mining, and meta-analysis. A robust database design involves sound data structure logistics, meaningful data transformations, normalization, and proper user interface designs. This chapter starts with a brief review of relational database basics and concentrates on issues associated with curation of QTL data into a relational database, with emphasis on the principles of data normalization and structure optimization. In addition, some simple examples of QTL data mining and meta-analysis are included. These examples are provided to help readers better understand the potential and importance of sound database design.
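The normalization principles described above can be sketched with a toy relational schema. Everything below is hypothetical and far simpler than a real QTL database: traits and source publications are factored into their own tables so each fact is stored once and referenced by key, and a simple grouped query stands in for meta-analysis.

```python
import sqlite3

# In-memory database; a minimal, illustrative QTL schema.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.executescript("""
CREATE TABLE trait (
    trait_id   INTEGER PRIMARY KEY,
    name       TEXT NOT NULL UNIQUE
);
CREATE TABLE publication (
    pub_id     INTEGER PRIMARY KEY,
    reference  TEXT NOT NULL
);
CREATE TABLE qtl (
    qtl_id     INTEGER PRIMARY KEY,
    trait_id   INTEGER NOT NULL REFERENCES trait(trait_id),
    pub_id     INTEGER NOT NULL REFERENCES publication(pub_id),
    chromosome TEXT NOT NULL,
    start_cm   REAL,
    end_cm     REAL
);
""")

# Invented sample rows for illustration only.
cur.execute("INSERT INTO trait (name) VALUES ('backfat thickness')")
cur.execute("INSERT INTO publication (reference) VALUES ('Smith et al. 2001')")
cur.execute("INSERT INTO qtl (trait_id, pub_id, chromosome, start_cm, end_cm) "
            "VALUES (1, 1, '4', 10.0, 25.0)")

# A meta-analysis-style query: QTL counts per trait across all sources.
cur.execute("""
    SELECT t.name, COUNT(*)
    FROM qtl q JOIN trait t ON q.trait_id = t.trait_id
    GROUP BY t.name
""")
print(cur.fetchall())
```

Because trait names and references live in one place, curating data from a new source only adds rows; it never duplicates or contradicts existing facts.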

  16. Proceedings of the 4th MultiClust Workshop on Multiple Clusterings, Multi-view Data, and Multi-source Knowledge-driven Clustering

    DEFF Research Database (Denmark)

    Cluster detection is a very traditional data analysis task with several decades of research. However, it also includes a large variety of different subtopics investigated by different communities such as data mining, machine learning, statistics, and database systems. "Multiple Clusterings, Multi...

  17. A Semantic-based Clustering Method to Build Domain Ontology from Multiple Heterogeneous Knowledge Sources

    Institute of Scientific and Technical Information of China (English)

    LING Ling; HU Yu-jin; WANG Xue-lin; LI Cheng-gang

    2006-01-01

    In order to improve the efficiency of ontology construction from heterogeneous knowledge sources, a semantic-based approach is presented. The ontology is constructed incrementally by applying a clustering technique. First, terms are extracted from the knowledge sources and, after pretreatment, gathered into a term set. The concept set is then built via semantic-based clustering according to the semantics of the terms as provided by WordNet. Next, a concept tree is constructed using mapping rules between semantic relationships and concept relationships. This semi-automatic approach avoids the inconsistencies that arise when knowledge engineers understand the same concept differently, and the resulting ontology is easy to extend.
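The semantic clustering step can be sketched as a greedy threshold procedure. Here a toy pairwise-similarity table stands in for a WordNet-based measure (e.g. path similarity between synsets); all terms, scores, and the threshold are invented for illustration.

```python
# Hypothetical similarity scores standing in for a WordNet-derived measure.
SIM = {
    frozenset({"car", "automobile"}): 0.9,
    frozenset({"car", "vehicle"}): 0.7,
    frozenset({"automobile", "vehicle"}): 0.7,
    frozenset({"apple", "pear"}): 0.8,
}

def similarity(a, b):
    return 1.0 if a == b else SIM.get(frozenset({a, b}), 0.0)

def cluster_terms(terms, threshold=0.6):
    """Greedy incremental clustering: a term joins the first cluster whose
    members are all at least `threshold`-similar to it, else starts a new one."""
    clusters = []
    for term in terms:
        for c in clusters:
            if all(similarity(term, m) >= threshold for m in c):
                c.append(term)
                break
        else:
            clusters.append([term])
    return clusters

print(cluster_terms(["car", "automobile", "apple", "vehicle", "pear"]))
# [['car', 'automobile', 'vehicle'], ['apple', 'pear']]
```

The incremental nature of the approach shows up directly: adding a new term only compares it against existing clusters rather than re-clustering everything.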

  18. A Comprehensive Probabilistic Tsunami Hazard Assessment: Multiple Sources and Short-Term Interactions

    Science.gov (United States)

    Anita, G.; Selva, J.; Laura, S.

    2011-12-01

    We develop a comprehensive and total probabilistic tsunami hazard assessment (TotPTHA), in which many different possible source types concur to the definition of the total tsunami hazard at given target sites. In a multi-hazard and multi-risk perspective, such an innovative approach makes it possible, in principle, to consider all possible tsunamigenic sources, from seismic events to slides, asteroids, volcanic eruptions, etc. In this respect, we also formally introduce and discuss the treatment of interaction/cascade effects in the TotPTHA analysis. We demonstrate how external triggering events may induce significant temporary variations in the tsunami hazard. Because of this, such effects should always be considered, at least in short-term applications, to obtain unbiased analyses. Finally, we prove the feasibility of the TotPTHA and of the treatment of interaction/cascade effects by applying this methodology to an ideal region with realistic characteristics (Neverland).
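As a baseline for how per-source hazards combine into a total hazard, consider the simplifying assumption that sources act independently (this is only the textbook starting point, not the paper's interaction/cascade treatment): the probability that at least one source exceeds a threshold is P = 1 - prod(1 - p_i). The probabilities below are hypothetical.

```python
import math

def total_exceedance(probabilities):
    """Probability that at least one source produces an exceedance,
    assuming independent sources: P = 1 - prod(1 - p_i)."""
    return 1.0 - math.prod(1.0 - p for p in probabilities)

# Hypothetical annual exceedance probabilities: seismic, landslide, volcanic.
print(total_exceedance([0.01, 0.002, 0.0005]))
```

Interaction/cascade effects break exactly this independence assumption, which is why the paper argues they must be modelled explicitly in short-term applications.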

  19. Eutrophication assessment and management methodology of multiple pollution sources of a landscape lake in North China.

    Science.gov (United States)

    Chen, Yanxi; Niu, Zhiguang; Zhang, Hongwei

    2013-06-01

    Landscape lakes in cities face a high risk of eutrophication because of their special characteristics and functions in the urban water circulation system. Using HMLA, a landscape lake in Tianjin City, North China, that receives a mixture of point-source (PS) and non-point-source (NPS) pollution, we applied Fluent and AQUATOX to simulate and predict the state of the lake, and used a trophic index to assess its eutrophication state. We then used water compensation optimization and three scenarios to determine the optimal management methodology: an ecological restoration scenario, a best management practices (BMPs) scenario, and a scenario combining both. Our results suggest that maintaining a healthy ecosystem through ecoremediation is necessary and that BMPs have a far-reaching effect on water reuse and NPS pollution control. This study has implications for eutrophication control and management under continuing urbanization in China.

  20. Using multiple isotopes to understand the source of ingredients used in golden beverages

    Science.gov (United States)

    Wynn, J. G.

    2011-12-01

    Traditionally, beer contains 4 simple ingredients: water, barley, hops and yeast. Each of these ingredients used in the brewing process contributes some combination of a number of "traditional" stable isotopes (i.e., isotopes of H, C, O, N and S) to the final product. As an educational exercise in an "Analytical Techniques in Geology" course, a group of students analyzed the isotopic composition of the gas, liquid and solid phases of a variety of beer samples collected from throughout the world (including other beverages). The hydrogen and oxygen isotopic composition of the water followed closely the isotopic composition of local meteoric water at the source of the brewery, although there is a systematic offset from the global meteoric water line that may be due to the effects of CO2-H2O equilibration. The carbon isotopic composition of the CO2 reflected that of the solid residue (the source of carbon used as a fermentation substrate), but may potentially be modified by addition of gas-phase CO2 from an inorganic source. The carbon isotopic composition of the solid residue similarly tracks that of the fermentation substrate, and may indicate some alcohol fermented from added sugars in some cases. The nitrogen isotopic composition of the solid residue was relatively constant, and may track the source of nitrogen in the barley, hops and yeast. Each of the analytical methods used is a relatively standard technique used in geological applications, making this a "fun" exercise for those involved, and gives the students hands-on experience with a variety of analytes from a non-traditional sample material.

  1. A probabilistic graphical model approach in 30 m land cover mapping with multiple data sources

    OpenAIRE

    Wang, Jie; Ji, Luyan; Huang, Xiaomeng; Fu, Haohuan; Xu, Shiming; Li, Congcong

    2016-01-01

    There is a trend to acquire high accuracy land-cover maps using multi-source classification methods, most of which are based on data fusion, especially pixel- or feature-level fusions. A probabilistic graphical model (PGM) approach is proposed in this research for 30 m resolution land-cover mapping with multi-temporal Landsat and MODerate Resolution Imaging Spectroradiometer (MODIS) data. Independent classifiers were applied to two single-date Landsat 8 scenes and the MODIS time-series data, ...

  2. True lemurs…true species - species delimitation using multiple data sources in the brown lemur complex

    OpenAIRE

    2013-01-01

    Background Species are the fundamental units in evolutionary biology. However, defining them as evolutionary independent lineages requires integration of several independent sources of information in order to develop robust hypotheses for taxonomic classification. Here, we exemplarily propose an integrative framework for species delimitation in the “brown lemur complex” (BLC) of Madagascar, which consists of seven allopatric populations of the genus Eulemur (Primates: Lemuridae), which were s...

  3. An Adaptable Multiple Power Source for Mass Spectrometry and other Scientific Instruments

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Tzu-Yung; Anderson, Gordon A.; Norheim, Randolph V.; Prost, Spencer A.; Lamarche, Brian L.; Leach, Franklin E.; Auberry, Kenneth J.; Smith, Richard D.; Koppenaal, David W.; Robinson, Errol W.; Pasa-Tolic, Ljiljana

    2015-09-18

    Power supplies are commonly used in the operation of many types of scientific equipment, including mass spectrometers and ancillary instrumentation. A generic modern mass spectrometer comprises an ionization source, such as electrospray ionization (ESI), ion transfer devices such as ion funnels and multipole ion guides, and ion signal detection apparatus. Very often such platforms include, or are interfaced with ancillary elements in order to manipulate samples before or after ionization. In order to operate such scientific instruments, numerous direct current (DC) channels and radio frequency (RF) signals are required, along with other controls such as temperature regulation. In particular, DC voltages in the range of ±400 V, along with MHz range RF signals with peak-to-peak amplitudes in the hundreds of volts range are commonly used to transfer ionized samples under vacuum. Additionally, an ESI source requires a high voltage (HV) DC source capable of producing several thousand volts and heaters capable of generating temperatures up to 300°C. All of these signals must be properly synchronized and managed in order to carry out ion trapping, accumulation and detection.

  4. Identification of Multiple Subtypes of Campylobacter jejuni in Chicken Meat and the Impact on Source Attribution

    Directory of Open Access Journals (Sweden)

    John A. Hudson

    2013-09-01

    Most source attribution studies for Campylobacter use subtyping data based on single isolates from foods and environmental sources in an attempt to draw epidemiological inferences. It has been suggested that subtyping only one Campylobacter isolate per chicken carcass incurs a risk of failing to recognise the presence of clinically relevant, but numerically infrequent, subtypes. To investigate this, between 21 and 25 Campylobacter jejuni isolates from each of ten retail chicken carcasses were subtyped by pulsed-field gel electrophoresis (PFGE) using the two restriction enzymes SmaI and KpnI. Among the 227 isolates, thirteen subtypes were identified, the most frequently occurring subtype being isolated from three carcasses. Six carcasses carried a single subtype, three carcasses carried two subtypes each and one carcass carried three subtypes. Some subtypes carried by an individual carcass were shown to be potentially clonally related. Comparison of C. jejuni subtypes from chickens with isolate subtypes from human clinical cases (n = 1248) revealed seven of the thirteen chicken subtypes were indistinguishable from human cases. None of the numerically minor chicken subtypes were identified in the human data. Therefore, typing only one Campylobacter isolate from individual chicken carcasses may be adequate to inform Campylobacter source attribution.

  5. Sourcing sediment using multiple tracers in the catchment of Lake Argyle, Northwestern Australia.

    Science.gov (United States)

    Wasson, R J; Caitcheon, Gary; Murray, Andrew S; McCulloch, Malcolm; Quade, Jay

    2002-05-01

    Control of sedimentation in large reservoirs requires soil conservation at the catchment scale. In large, heterogeneous catchments, soil conservation planning needs to be based on sound information, and set within the framework of a sediment budget to ensure that all of the potentially significant sources and sinks are considered. The major sources of sediment reaching the reservoir, Lake Argyle, in tropical northwestern Australia, have been determined by combining measured sediment fluxes in rivers with spatial tracer-based estimates of proportional contributions from tributaries of the main stream entering the lake, the Ord River. The spatial tracers used are mineral particle magnetics, the strontium isotopic ratio, and the neodymium isotopic ratio. Fallout of 137Cs has been used to estimate the proportion of the sediment in Lake Argyle eroded from surface soils by sheet and rill erosion, and, by difference, the proportion eroded from subsurface soils by gully and channel erosion. About 96% of the sediment in the reservoir has come from less than 10% of the catchment, in the area of highly erodible soils formed on Cambrian-age sedimentary rocks. About 80% of the sediment in the reservoir has come from gully and channel erosion. A major catchment revegetation program, designed to slow sedimentation in the reservoir, appears to have had little effect because it did not target gullies, the major source of sediment. Had knowledge of the sediment budget been available before the revegetation program was designed, an entirely different approach would have been taken.

  6. Human responses to multiple sources of directional information in virtual crowd evacuations.

    Science.gov (United States)

    Bode, Nikolai W F; Kemloh Wagoum, Armel U; Codling, Edward A

    2014-02-06

    The evacuation of crowds from buildings or vehicles is one example that highlights the importance of understanding how individual-level interactions and decision-making combine and lead to the overall behaviour of crowds. In particular, to make evacuations safer, we need to understand how individuals make movement decisions in crowds. Here, we present an evacuation experiment with over 500 participants testing individual behaviour in an interactive virtual environment. Participants had to choose between different exit routes under the influence of three different types of directional information: static information (signs), dynamic information (movement of simulated crowd) and memorized information, as well as the combined effect of these different sources of directional information. In contrast to signs, crowd movement and memorized information did not have a significant effect on human exit route choice in isolation. However, when we combined the latter two treatments with additional directly conflicting sources of directional information, for example signs, they showed a clear effect by reducing the number of participants that followed the opposing directional information. This suggests that the signals participants observe more closely in isolation do not simply overrule alternative sources of directional information. Age and gender did not consistently explain differences in behaviour in our experiments.

  7. Supplier selection and order splitting in multiple-sourcing inventory systems

    Institute of Scientific and Technical Information of China (English)

    Guicong WANG; Zhaoliang JIANG; Zhaoqian LI; Wenping LIU

    2008-01-01

    Supplier selection and inventory control are critical decision processes in single-item, multiple-supplier systems. An integer linear programming model is proposed to help managers determine the reorder level, choose the best suppliers, and place the optimum order quantities so that the total average inventory cost is minimized, subject to constraints on supplier capacity, quality, and demand. An algorithm combining branch-and-bound with enumeration is developed to solve the problem. Application of the proposed model in the automobile industry shows that it is effective.
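To illustrate the kind of decision this model automates, here is a toy brute-force sketch (not the paper's branch-and-bound algorithm): it enumerates supplier subsets, discards those without enough capacity, and splits the order greedily by unit price within each feasible subset. All names, costs, and capacities are invented, and quality constraints are omitted.

```python
from itertools import combinations

# name: (fixed ordering cost, unit price, capacity) -- hypothetical values.
suppliers = {
    "A": (50.0, 4.0, 60),
    "B": (30.0, 5.0, 80),
    "C": (80.0, 3.5, 100),
}
demand = 120

def best_plan():
    """Enumerate supplier subsets; within a subset, fill cheapest unit
    price first (optimal once the subset's fixed costs are committed)."""
    best = None
    for r in range(1, len(suppliers) + 1):
        for subset in combinations(suppliers, r):
            if sum(suppliers[s][2] for s in subset) < demand:
                continue  # infeasible: not enough total capacity
            remaining, order, cost = demand, {}, 0.0
            for s in sorted(subset, key=lambda s: suppliers[s][1]):
                fixed, price, cap = suppliers[s]
                qty = min(cap, remaining)
                order[s] = qty
                # Fixed cost is incurred for every supplier in the subset.
                cost += fixed + price * qty
                remaining -= qty
            if best is None or cost < best[0]:
                best = (cost, order)
    return best

print(best_plan())
```

Enumeration is exponential in the number of suppliers, which is exactly why the paper pairs it with branch-and-bound for realistic problem sizes.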

  8. Multiple remote sensing data sources to assess spatio-temporal patterns of fire incidence over Campos Amazônicos Savanna Vegetation Enclave (Brazilian Amazon).

    Science.gov (United States)

    Alves, Daniel Borini; Pérez-Cabello, Fernando

    2017-12-01

    Fire activity plays an important role in the past, present and future of Earth system behavior. Monitoring and assessing spatial and temporal fire dynamics have a fundamental relevance in the understanding of ecological processes and the human impacts on different landscapes and multiple spatial scales. This work analyzes the spatio-temporal distribution of burned areas in one of the biggest savanna vegetation enclaves in the southern Brazilian Amazon, from 2000 to 2016, deriving information from multiple remote sensing data sources (Landsat and MODIS surface reflectance, TRMM pluviometry and Vegetation Continuous Field tree cover layers). A fire scars database with 30 m spatial resolution was generated using a Landsat time series. MODIS daily surface reflectance was used for accurate dating of the fire scars. TRMM pluviometry data were analyzed to dynamically establish time limits of the yearly dry season and burning periods. Burned area extent, frequency and recurrence were quantified comparing the results annually/seasonally. Additionally, Vegetation Continuous Field tree cover layers were used to analyze fire incidence over different types of tree cover domains. In the last seventeen years, 1.03 million ha were burned within the study area, distributed across 1432 fire occurrences, highlighting 2005, 2010 and 2014 as the most affected years. Middle dry season fires represent 86.21% of the total burned areas and 32.05% of fire occurrences, affecting a larger amount of higher-density tree surfaces than other burning periods. The results provide new insights into the analysis of burned areas of the neotropical savannas, spatially and statistically reinforcing important aspects linked to the seasonality patterns of fire incidence in this landscape.

  9. The Origin of Bright X-Ray Sources in Multiple Stars

    Energy Technology Data Exchange (ETDEWEB)

    Makarov, V V; Eggleton, P P

    2009-04-23

    Luminous X-ray stars are very often found in visual double or multiple stars. Binaries with periods of a few days possess the highest degree of coronal X-ray activity among regular, non-relativistic stars. But the orbital periods in visual double stars are too large for any direct interaction between the companions to take place. We suggest that most of the strongest X-ray components in resolved binaries are yet-undiscovered short-period binaries, and that a few are merged remnants of such binaries. The omnipresence of short-period active stars, e.g. of BY-Dra-type binaries, in multiple systems is explained via the dynamical evolution of triple stars with large mutual inclinations. The dynamical perturbation on the inner pair pumps up the eccentricity in a cyclic manner, a phenomenon known as Kozai cycling. At times of close periapsis, tidal friction reduces the angular momentum of the binary, causing it to shrink. When the orbital period of the inner pair drops to a few days, fast surface rotation of the companions is driven by tidal forces, boosting activity by a few orders of magnitude. If the period drops still further, a merger may take place leaving a rapidly-rotating active dwarf with only a distant companion.

  10. The Danish Multiple Sclerosis Treatment Register

    DEFF Research Database (Denmark)

    Magyari, Melinda; Koch-Henriksen, Nils; Sørensen, Per Soelberg

    2016-01-01

    AIM OF THE DATABASE: The Danish Multiple Sclerosis Treatment Register (DMSTR) serves as a clinical quality register, enabling the health authorities to monitor the quality of the disease-modifying treatment, and it is an important data source for epidemiological research. STUDY POPULATION......: The DMSTR includes all patients with multiple sclerosis who had been treated with disease-modifying drugs since 1996. At present, more than 8,400 patients have been registered in this database. Data are continuously entered online into a central database from all sites in Denmark at start and at regular...

  11. Relational databases

    CERN Document Server

    Bell, D A

    1986-01-01

    Relational Databases explores the major advances in relational databases and provides a balanced analysis of the state of the art in relational databases. Topics covered include capture and analysis of data placement requirements; distributed relational database systems; data dependency manipulation in database schemata; and relational database support for computer graphics and computer aided design. This book is divided into three sections and begins with an overview of the theory and practice of distributed systems, using the example of INGRES from Relational Technology as illustration. The

  12. Multiple sources driving the organic matter dynamics in two contrasting tropical mangroves.

    Science.gov (United States)

    Ray, R; Shahraki, M

    2016-11-15

    In this study, we have selected two different mangroves based on their geological, hydrological and climatological variations to investigate the origin (terrestrial, phytobenthos derived, and phytoplankton derived) of dissolved organic carbon (DOC), particulate organic carbon (POC) in the water column and the sedimentary OC using elemental ratios and stable isotopes. Qeshm Island, representing the Iranian mangroves received no attention before this study in terms of DOC, POC biogeochemistry and their sources unlike the Sundarbans (Indian side), the world's largest mangrove system. Slightly higher DOC concentrations in the Iranian mangroves were recorded in our field campaigns between 2011 and 2014, compared to the Sundarbans (315±25μM vs. 278±42μM), owing to the longer water residence times, while 9-10 times greater POC concentration (303±37μM, n=82) was linked to both suspended load (345±104mgL(-1)) and high algal production. Yearlong phytoplankton bloom in the mangrove-lined Persian Gulf was reported to be the perennial source of both POC and DOC contributing 80-86% to the DOC and 90-98% to the POC pool. Whereas in the Sundarbans, riverine input contributed 50-58% to the DOC pool and POC composition was regulated by the seasonal litter fall, river discharge and phytoplankton production. Algal derived organic matter (microphytobenthos) represented the maximum contribution (70-76%) to the sedimentary OC at Qeshm Island, while mangrove leaf litters dominated the OC pool in the Indian Sundarbans. Finally, hydrographical settings (i.e. riverine transport) appeared to be the determinant factor in differentiating OM sources in the water column between the dry and wet mangroves.
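Source contributions like the percentages reported above are typically derived from linear isotope mixing models. A minimal two-end-member sketch follows; the δ13C values below are hypothetical illustrations, not figures from the study.

```python
def mixing_fractions(d_sample, d_source_a, d_source_b):
    """Two-end-member isotope mixing: fraction of source A in a sample,
    given the sample's delta value and the two end-member signatures.
    f_A = (d_sample - d_B) / (d_A - d_B); f_B = 1 - f_A."""
    f_a = (d_sample - d_source_b) / (d_source_a - d_source_b)
    return f_a, 1.0 - f_a

# Hypothetical d13C end-members: mangrove litter (-28 permil) vs.
# phytoplankton (-21 permil), for a sample measured at -23 permil.
f_mangrove, f_phyto = mixing_fractions(-23.0, -28.0, -21.0)
print(round(f_mangrove, 3), round(f_phyto, 3))
```

With three or more candidate sources (terrestrial, phytobenthos, phytoplankton), additional tracers such as C/N ratios are needed to keep the system of mixing equations solvable.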

  13. Multi-source feature learning for joint analysis of incomplete multiple heterogeneous neuroimaging data.

    Science.gov (United States)

    Yuan, Lei; Wang, Yalin; Thompson, Paul M; Narayan, Vaibhav A; Ye, Jieping

    2012-07-02

    Analysis of incomplete data is a big challenge when integrating large-scale brain imaging datasets from different imaging modalities. In the Alzheimer's Disease Neuroimaging Initiative (ADNI), for example, over half of the subjects lack cerebrospinal fluid (CSF) measurements; an independent half of the subjects do not have fluorodeoxyglucose positron emission tomography (FDG-PET) scans; many lack proteomics measurements. Traditionally, subjects with missing measures are discarded, resulting in a severe loss of available information. In this paper, we address this problem by proposing an incomplete Multi-Source Feature (iMSF) learning method where all the samples (with at least one available data source) can be used. To illustrate the proposed approach, we classify patients from the ADNI study into groups with Alzheimer's disease (AD), mild cognitive impairment (MCI) and normal controls, based on the multi-modality data. At baseline, ADNI's 780 participants (172 AD, 397 MCI, 211 NC) have at least one of four data types: magnetic resonance imaging (MRI), FDG-PET, CSF and proteomics. These data are used to test our algorithm. Depending on the problem being solved, we divide our samples according to the availability of data sources, and we learn shared sets of features with state-of-the-art sparse learning methods. To build a practical and robust system, we construct a classifier ensemble by combining our method with four other methods for missing value estimation. Comprehensive experiments with various parameters show that our proposed iMSF method and the ensemble model yield stable and promising results.
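The sample-partitioning step, grouping subjects by which data sources they actually have, can be sketched as follows. The modality names mirror those above, but the data structures and values are hypothetical; real iMSF then learns a shared sparse feature set per group.

```python
# Group subjects by availability pattern so each group can be fitted
# using only the modalities its members actually have.
SOURCES = ("MRI", "PET", "CSF", "proteomics")

def availability_pattern(subject):
    """Tuple of flags marking which modalities are present (not None)."""
    return tuple(subject.get(m) is not None for m in SOURCES)

def partition(subjects):
    groups = {}
    for s in subjects:
        groups.setdefault(availability_pattern(s), []).append(s)
    return groups

# Hypothetical subjects with different missing-modality patterns.
subjects = [
    {"id": 1, "MRI": [0.1], "PET": [0.2], "CSF": None, "proteomics": None},
    {"id": 2, "MRI": [0.3], "PET": None,  "CSF": None, "proteomics": None},
    {"id": 3, "MRI": [0.4], "PET": [0.5], "CSF": None, "proteomics": None},
]
for pattern, members in partition(subjects).items():
    print(pattern, [m["id"] for m in members])
```

No subject is discarded: every pattern with at least one available source forms its own training group, which is the core idea that lets iMSF use all 780 participants.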

  14. IMPLEMENTATION OF CENTRAL QUEUE BASED REALTIME SCHEDULER FOR MULTIPLE SOURCE DATA STREAMING

    Directory of Open Access Journals (Sweden)

    V. Kaviha

    2014-01-01

    Real-time data packet sources are required to remain robust against different security threats. This study proposes a secure real-time scheduling strategy for data transmission that enhances communication throughput and reduces overheads. The proposed system combines real-time scheduling with security service enhancement and error detection; the real-time scheduler is based on the EDF algorithm, runs on the μC/OS-II real-time operating system, and is implemented on a Cortex-M3 processor. The scheduling unit uses a central queue management model, and the security enhancement scheme adopts a Blowfish encryption mechanism.
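The central-queue EDF policy described above can be sketched minimally in Python (a real implementation on μC/OS-II would be in C, and the task names and deadlines here are hypothetical): tasks are kept in a priority queue ordered by absolute deadline, and the scheduler always dispatches the earliest deadline.

```python
import heapq

class EDFQueue:
    """Central queue dispatching by earliest-deadline-first (EDF)."""

    def __init__(self):
        self._heap = []

    def submit(self, deadline, name):
        # Heap ordered by absolute deadline; earliest deadline pops first.
        heapq.heappush(self._heap, (deadline, name))

    def next_task(self):
        """Pop the ready task with the earliest absolute deadline."""
        return heapq.heappop(self._heap)[1] if self._heap else None

q = EDFQueue()
q.submit(30, "telemetry")
q.submit(10, "sensor-read")
q.submit(20, "encrypt-packet")
print(q.next_task())  # "sensor-read": earliest deadline runs first
```

EDF is optimal for preemptive uniprocessor scheduling in the sense that if any policy can meet all deadlines, EDF can, which is why it is a common choice for such real-time streaming schedulers.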

  15. Estimation of Multiple Point Sources for Linear Fractional Order Systems Using Modulating Functions

    KAUST Repository

    Belkhatir, Zehor

    2017-06-28

    This paper proposes an estimation algorithm for the characterization of multiple point inputs for linear fractional order systems. First, using polynomial modulating functions method and a suitable change of variables the problem of estimating the locations and the amplitudes of a multi-pointwise input is decoupled into two algebraic systems of equations. The first system is nonlinear and solves for the time locations iteratively, whereas the second system is linear and solves for the input’s amplitudes. Second, closed form formulas for both the time location and the amplitude are provided in the particular case of single point input. Finally, numerical examples are given to illustrate the performance of the proposed technique in both noise-free and noisy cases. The joint estimation of pointwise input and fractional differentiation orders is also presented. Furthermore, a discussion on the performance of the proposed algorithm is provided.

  16. Control and Management of Electrical Propulsion and Optimal Consumption despite Multiple Electrical and Hybrid Sources

    Directory of Open Access Journals (Sweden)

    Ali Reza Pakkhesal

    2015-03-01

    Despite the high potential of distributed and renewable sources, their operation may cause problems because of their variability. Moreover, wind fluctuations or extreme weather changes may lead to temporary voltage fluctuations. Research shows that energy storage can compensate for this random behavior and can also be effective over short durations without requiring load cut-off. Furthermore, energy storage devices provide more suitable conditions for using the produced power and can be considered an economic solution. Therefore, in this paper a smart energy management system is designed to optimize the operation of a sample system, production planning, and energy storage. The proposed method determines the optimal operating point for different goals and their relative weighting coefficients. The usage times and amounts drawn from the different energy sources are determined so that the lowest cost and minimum environmental pollution are achieved, based on Pareto optimization. Finally, to validate the proposed algorithm, the method is implemented on a sample electric propulsion system in MATLAB and GAMS, and the results are discussed.

  17. Needs Analysis: Investigating Students’ Self-directed Learning Needs Using Multiple Data Sources

    Directory of Open Access Journals (Sweden)

    Keiko Takahashi

    2013-09-01

    The learning advisor (LA) team at Kanda University of International Studies (KUIS) has engaged in redesigning a curriculum for the Self-Access Learning Centre (SALC) by following a framework adapted from the Nation and Macalister (2010) model. This framework, which is based on an investigation of student needs, aims to establish criteria in the shape of clear principles and goals. Following the Environment Analysis stage, detailed in the previous installment of this column (Thornton, 2013), this paper describes the needs analysis stage, which was undertaken in 2012. Long (2005) emphasizes the importance of triangulating needs analysis data, and discusses a number of sources that may be consulted to establish a comprehensive picture of needs. In the KUIS context, the LA team identified four major stakeholders in the SALC curriculum as sources of information for needs analysis: LAs, students, teachers and the university senior management team. In order to conduct a thorough needs analysis to guide curriculum evaluation and design, the LA team decided to investigate each stakeholder group’s perceptions of students’ self-directed learning (SDL) needs. This second installment showcases each research project, and demonstrates how the data from the four projects were collated in order to discover freshman student SDL needs, resulting in a document of Learning Outcomes for the future curriculum.

  18. Exploring multiple sources of climatic information within personal and medical diaries, Bombay 1799-1828

    Science.gov (United States)

    Adamson, George

    2016-04-01

    Private diaries are being recognised as an important source of information on past climatic conditions, providing place-specific, often daily records of meteorological information. As many were not intended for publication, or indeed to be read by anyone other than the author, issues of observer bias are lower than in some other types of documentary sources. This paper explores the variety of types of climatic information that can be mined from a single document or set of documents. The focus of the analysis is three private diaries and one medical diary kept by British colonists in Bombay, western India, during the first decades of the nineteenth century. The paper discusses the potential of the diaries for reconstruction of precipitation, temperature and extreme events. Ad-hoc temperature observations collected by the four observers prove to be particularly fruitful for reconstructing monthly extreme temperatures, with values comparable to more systematic observations collected during the period. This leads to a tentative conclusion that extreme temperatures in Bombay were around 5°C lower during the period than today, a difference likely attributable predominantly to the urban heat island effect.

  19. A Database Indexing Algorithm Based on Introduction of Multi-source Data Phase Spectrum Compensation%引入多源数据相位谱补偿的数据库索引算法

    Institute of Scientific and Technical Information of China (English)

    王小琼; 王艳淑

    2015-01-01

    对层次网络数据库的敏感信息快速索引是提高数据库访问技术的基础,传统方法采用矢量模型特征聚类算法进行数据库敏感信息特征提取和索引,当数据库中的信息呈现多源化状态时,数据库索引精度不高.提出一种基于多源数据相位谱补偿的数据库索引算法.构建多源数据库模型,进行数据库访问信道分配设计,分析多源数据的相位谱特征,进行相位谱补偿实现数据库索引算法优化,仿真结果表明,采用该算法对含有多源信息特征的数据库进行信息检索和访问,信息匹配准确度较高,特征提取准确,提高数据库访问性能.%Fast indexing of sensitive information in hierarchical network databases is the foundation for improving database access technology. Traditional methods use a vector-model feature clustering algorithm to extract and index sensitive database information, but when the information in the database is multi-source, indexing accuracy is low. A database indexing algorithm based on phase spectrum compensation of multi-source data is proposed: a multi-source database model is constructed, the database access channel assignment is designed, the phase spectrum characteristics of the multi-source data are analyzed, and phase spectrum compensation is applied to optimize the indexing algorithm. Simulation results show that using this algorithm for information retrieval and access in databases containing multi-source information yields high information-matching accuracy and accurate feature extraction, improving database access performance.

  20. Biofuel Database

    Science.gov (United States)

    Biofuel Database (Web, free access). This database brings together structural, biological, and thermodynamic data for enzymes that are either in current use or are being considered for use in the production of biofuels.

  1. Onzekere databases

    NARCIS (Netherlands)

    van Keulen, Maurice

    A recent development in database research concerns so-called 'uncertain databases'. This article describes what uncertain databases are, how they can be used, and which applications in particular could benefit from this technology.

  2. Community Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This excel spreadsheet is the result of merging at the port level of several of the in-house fisheries databases in combination with other demographic databases such...

  3. System simulation method for fiber-based homodyne multiple target interferometers using short coherence length laser sources

    Science.gov (United States)

    Fox, Maik; Beuth, Thorsten; Streck, Andreas; Stork, Wilhelm

    2015-09-01

    Homodyne laser interferometers for velocimetry are well-known optical systems used in many applications. While the detector power output signal of such a system, using a long coherence length laser and a single target, is easily modelled using the Doppler shift, scenarios with a short coherence length source, e.g. an unstabilized semiconductor laser, and multiple weak targets demand a more elaborate approach to simulation. Especially when using fiber components, the actual setup is an important factor for system performance, as effects like return losses and multiple-path propagation have to be taken into account. If the power received from the targets is in the same region as stray light created in the fiber setup, a complete system simulation becomes a necessity. In previous work, a phasor-based signal simulation approach for interferometers based on short coherence length laser sources was evaluated. To facilitate the use of the signal simulation, a fiber component ray tracer has since been developed that allows the creation of input files for the signal simulation environment. The software uses object-oriented MATLAB code, simplifying the entry of different fiber setups and the extension of the ray tracer. Thus, a seamless path from a system description based on arbitrarily interconnected fiber components to a signal simulation for different target scenarios has been established. The ray tracer and signal simulation are being used for the evaluation of interferometer concepts incorporating delay lines to compensate for the short coherence length.
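
The phasor-based simulation idea described in this abstract can be illustrated with a minimal sketch: the detector power is the squared magnitude of the sum of a reference phasor and a Doppler-shifted target phasor, with the short coherence length modelled as a visibility factor on the interference term. All names and parameter values below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def detector_signal(v_target=0.1, wavelength=1.55e-6, coherence_length=1e-3,
                    path_mismatch=0.0, t_end=1e-4, fs=10e6):
    """Minimal phasor model of a homodyne interferometer output.

    Reference and target fields are summed as complex phasors; a short
    coherence length is modelled as a visibility factor
    exp(-|path_mismatch| / coherence_length) on the interference term.
    All parameter values are illustrative assumptions.
    """
    n = int(round(t_end * fs))
    t = np.arange(n) / fs
    f_doppler = 2.0 * v_target / wavelength                  # Doppler beat frequency (Hz)
    visibility = np.exp(-abs(path_mismatch) / coherence_length)
    e_ref = 1.0                                              # reference phasor (normalised)
    e_tgt = 0.05 * np.exp(1j * 2 * np.pi * f_doppler * t)    # weak target return
    power = (np.abs(e_ref) ** 2 + np.abs(e_tgt) ** 2
             + 2.0 * visibility * np.real(e_ref * np.conj(e_tgt)))
    return t, power
```

With zero path mismatch the output beats at the Doppler frequency; with a mismatch far beyond the coherence length the interference term vanishes and only the DC power remains.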

  4. A Multiple Source Approach to Organisational Justice: The Role of the Organisation, Supervisors, Coworkers, and Customers

    Directory of Open Access Journals (Sweden)

    Agustin Molina

    2015-07-01

    Full Text Available The vast research on organisational justice has focused on the organisation and the supervisor. This study aims to further this line of research by integrating two trends within organisational justice research: the overall approach to justice perceptions and the multifoci perspective of justice judgments. Specifically, this study aims to explore the effects of two additional sources of justice, coworker-focused justice and customer-focused justice, on relevant employee outcomes—burnout, turnover intentions, job satisfaction, and workplace deviance—while controlling for the effects of organisation-focused justice and supervisor-focused justice. Given the increased importance attributed to coworkers and customers, we expect coworker-focused justice and customer-focused justice to explain incremental variance in the measured outcomes, above and beyond the effects of organisation-focused justice and supervisor-focused justice. Participants will be university students from Austria and Germany employed by service organisations. Data analysis will be conducted using structural equation modeling.

  5. Evaluating the Sources of Uncertainties in the Measurements from Multiple Pyranometers and Pyrheliometers

    Energy Technology Data Exchange (ETDEWEB)

    Habte, Aron; Sengupta, Manajit; Andreas, Afshin; Dooraghi, Mike; Reda, Ibrahim; Kutchenreiter, Mark

    2017-03-13

    Traceable radiometric data sets are essential for validating climate models, validating satellite-based models for estimating solar resources, and validating solar radiation forecasts. Current state-of-the-art radiometers have uncertainties in the range of 2%-5%, and sometimes more [1]. The National Renewable Energy Laboratory (NREL) and other organizations are identifying uncertainties, improving radiometric measurement performance, and developing a consensus methodology for acquiring radiometric data. This study analyzes the impact of differing specifications, such as cosine response, thermal offset, and spectral response, on the accuracy of radiometric data for various radiometers. The study will also provide insight on how to perform a measurement uncertainty analysis and how to reduce the impact of some of the sources of uncertainty.
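
A measurement uncertainty analysis of the kind mentioned here typically combines independent standard uncertainty components by root-sum-of-squares and then expands the result with a coverage factor (k = 2 for roughly 95% confidence), following the GUM approach. The component names and values below are illustrative assumptions, not NREL's figures.

```python
import math

def combined_uncertainty(components, coverage_factor=2.0):
    """Combine independent standard uncertainties (in %) by
    root-sum-of-squares, then expand with a coverage factor
    (k=2 for ~95% confidence), following the GUM approach.
    """
    u_c = math.sqrt(sum(u * u for u in components.values()))
    return coverage_factor * u_c

# Illustrative (assumed) standard uncertainty components for a pyranometer, in %:
components = {
    "cosine_response": 1.0,
    "thermal_offset": 0.5,
    "spectral_response": 0.5,
    "calibration": 0.9,
}
```

For these assumed values the expanded uncertainty is about 3.0%, which is consistent with the 2%-5% range quoted in the abstract.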

  6. Oil seepage onshore West Greenland: evidence of multiple source rocks and oil mixing

    Energy Technology Data Exchange (ETDEWEB)

    Bojesen-Koefoed, J.; Christiansen, F.G.; Nytoft, H.P.; Pedersen, A.K.

    1998-08-01

    Widespread oil seepage and staining are observed in lavas and hyaloclastites in the lower part of the volcanic succession on northwestern Disko and western Nuussuaq, central West Greenland. Chemical analyses suggest the existence of several petroleum systems in the underlying Cretaceous and Paleocene fluviodeltaic to marine sediments. Seepage and staining commonly occur within vesicular lava flow tops, and are often associated with mineral veins (mostly carbonates) in major fracture systems. Organic geochemical analyses suggest the existence of at least five distinct oil types: (1) a waxy oil which, on the basis of abundant angiosperm biological markers, is interpreted as generated from Paleocene mudstones (the 'Marraat type'); (2) a waxy oil, probably generated from coals and shales of the Cretaceous Atane Formation (the 'Kuugannguaq type'); (3) a low to moderately waxy oil containing 28,30-bisnorhopane and abundant C27-diasteranes and regular steranes (the 'Itilli type'), possibly generated from presently unknown Cenomanian-Turonian marine mudstones; (4) a low-wax oil of marine, possibly lagoonal/saline lacustrine origin, containing ring-A methylated steranes and a previously unknown series of extended 28-norhopanes (the 'Eqalulik type'); (5) a waxy oil with biological marker characteristics different from both the Kuugannguaq and Marraat oil types (the 'Niaqornaarsuk type'), probably generated from Campanian mudstones. The presence of widespread seepage and staining originating from several source rocks is encouraging for exploration in basins both onshore and offshore western Greenland, where the existence of prolific source rocks has previously been the main exploration risk. (au) EFP-96. 34 refs.

  7. Scenario Based Approach for Multiple Source Tsunami Hazard Assessment for Sines, Portugal

    Science.gov (United States)

    Wronna, Martin; Omira, Rachid; Baptista, Maria Ana

    2015-04-01

    In this paper, we present a scenario-based approach to tsunami hazard assessment for the city and harbour of Sines, Portugal, one of the test sites of project ASTARTE. Sines holds one of the most important deep-water ports, which contains oil-bearing, petrochemical, liquid bulk, coal and container terminals. The port and its industrial infrastructures face the ocean to the southwest, towards the main seismogenic sources. This work considers two different seismic zones: the Southwest Iberian Margin and the Gloria Fault. Within these two regions, a total of five scenarios were selected to assess tsunami impact at the test site. These scenarios correspond to the worst-case credible scenario approach, based upon the largest events of the historical and paleotsunami catalogues. The tsunami simulation from the source area towards the coast is carried out using NSWING, a Non-linear Shallow Water model wIth Nested Grids. The code solves the non-linear shallow water equations using an explicit leap-frog finite difference scheme, in a Cartesian or spherical frame. The initial sea surface displacement is assumed to be equal to the sea bottom deformation, which is computed by the Okada equations. Both uniform and non-uniform slip conditions are used; the presented results correspond to the models using non-uniform slip conditions. In this study, the static effect of tides is analysed for three different tidal stages: MLLW (mean lower low water), MSL (mean sea level) and MHHW (mean higher high water). For each scenario, inundation is described by maximum values of wave height, flow depth, drawdown, run-up and inundation distance. Synthetic waveforms are computed at virtual tide gauges at specific locations outside and inside the harbour. The final results consist of aggregate scenario maps presented for the different inundation parameters.
This work is funded by ASTARTE - Assessment, Strategy And Risk Reduction for Tsunamis in Europe - FP7-ENV2013 6.4-3, Grant 603839

  8. Deterministic approach for multiple-source tsunami hazard assessment for Sines, Portugal

    Science.gov (United States)

    Wronna, M.; Omira, R.; Baptista, M. A.

    2015-11-01

    In this paper, we present a deterministic approach to tsunami hazard assessment for the city and harbour of Sines, Portugal, one of the test sites of project ASTARTE (Assessment, STrategy And Risk Reduction for Tsunamis in Europe). Sines has one of the most important deep-water ports, which has oil-bearing, petrochemical, liquid-bulk, coal, and container terminals. The port and its industrial infrastructures face the ocean southwest towards the main seismogenic sources. This work considers two different seismic zones: the Southwest Iberian Margin and the Gloria Fault. Within these two regions, we selected a total of six scenarios to assess the tsunami impact at the test site. The tsunami simulations are computed using NSWING, a Non-linear Shallow Water model wIth Nested Grids. In this study, the static effect of tides is analysed for three different tidal stages: MLLW (mean lower low water), MSL (mean sea level), and MHHW (mean higher high water). For each scenario, the tsunami hazard is described by maximum values of wave height, flow depth, drawback, maximum inundation area and run-up. Synthetic waveforms are computed at virtual tide gauges at specific locations outside and inside the harbour. The final results describe the impact at the Sines test site considering the single scenarios at mean sea level, the aggregate scenario, and the influence of the tide on the aggregate scenario. The results confirm the composite source of Horseshoe and Marques de Pombal faults as the worst-case scenario, with wave heights of over 10 m, which reach the coast approximately 22 min after the rupture. It dominates the aggregate scenario by about 60 % of the impact area at the test site, considering maximum wave height and maximum flow depth. The HSMPF scenario inundates a total area of 3.5 km2.
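
The explicit leap-frog shallow-water scheme that tsunami codes such as NSWING are built on can be sketched in one dimension: surface elevation and volume flux live on a staggered grid and are updated alternately from the continuity and momentum equations. The sketch below is heavily simplified (linear terms, flat bottom, closed walls) and is for illustration only, not the NSWING code; in a real setup the initial surface would come from an Okada-type bottom deformation.

```python
import numpy as np

def leapfrog_sw_1d(eta0, depth=4000.0, dx=1000.0, dt=2.0, nsteps=200, g=9.81):
    """Linear 1-D shallow-water solver on a staggered grid with the
    explicit leap-frog update: continuity first (surface from flux
    divergence), then momentum (flux from surface gradient).

    eta0 : initial free-surface displacement (m).
    Stability requires sqrt(g*depth)*dt/dx < 1 (here ~0.4).
    """
    eta = eta0.astype(float)
    flux = np.zeros(eta.size + 1)        # volume flux at cell interfaces; walls stay 0
    for _ in range(nsteps):
        # continuity: update surface elevation from flux divergence
        eta -= dt / dx * (flux[1:] - flux[:-1])
        # momentum: update interior fluxes from the surface gradient
        flux[1:-1] -= g * depth * dt / dx * (eta[1:] - eta[:-1])
    return eta
```

Because the boundary fluxes are held at zero, the scheme conserves the total water volume exactly, which is a convenient sanity check.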

  9. Database Administrator

    Science.gov (United States)

    Moore, Pam

    2010-01-01

    The Internet and electronic commerce (e-commerce) generate lots of data. Data must be stored, organized, and managed. Database administrators, or DBAs, work with database software to find ways to do this. They identify user needs, set up computer databases, and test systems. They ensure that systems perform as they should and add people to the…

  11. Evolution of urea transporters in vertebrates: adaptation to urea's multiple roles and metabolic sources.

    Science.gov (United States)

    LeMoine, Christophe M R; Walsh, Patrick J

    2015-06-01

    In the two decades since the first cloning of the mammalian kidney urea transporter (UT-A), UT genes have been identified in a plethora of organisms, ranging from single-celled bacteria to metazoans. In this review, focusing mainly on vertebrates, we first reiterate the multiple catabolic and anabolic pathways that produce urea, then we reconstruct the phylogenetic history of UTs, and finally we examine the tissue distribution of UTs in selected vertebrate species. Our analysis reveals that from an ancestral UT, three homologues evolved in piscine lineages (UT-A, UT-C and UT-D), followed by a subsequent reduction to a single UT-A in lobe-finned fish and amphibians. A later internal tandem duplication of UT-A occurred in the amniote lineage (UT-A1), followed by a second tandem duplication in mammals to give rise to UT-B. While the expected UT expression is evident in excretory and osmoregulatory tissues in ureotelic taxa, UTs are also expressed ubiquitously in non-ureotelic taxa, and in tissues without a complete ornithine-urea cycle (OUC). We posit that non-OUC production of urea from arginine by arginase, an important pathway to generate ornithine for synthesis of molecules such as polyamines for highly proliferative tissues (e.g. testis, embryos), and neurotransmitters such as glutamate for neural tissues, is an important evolutionary driving force for the expression of UTs in these taxa and tissues.

  12. Bayesian inference based modelling for gene transcriptional dynamics by integrating multiple source of knowledge

    Directory of Open Access Journals (Sweden)

    Wang Shu-Qiang

    2012-07-01

    Full Text Available Abstract Background A key challenge in the post-genome era is to identify genome-wide transcriptional regulatory networks, which specify the interactions between transcription factors and their target genes. Numerous methods have been developed for reconstructing gene regulatory networks from expression data. However, most of them are based on coarse-grained qualitative models and cannot provide a quantitative view of regulatory systems. Results A binding-affinity-based regulatory model is proposed to quantify the transcriptional regulatory network. Multiple quantities, including binding affinity and the activity level of the transcription factor (TF), are incorporated into a general learning model. The sequence features of the promoter and the possible occupancy of nucleosomes are exploited to estimate the binding probability of regulators. Compared with previous models that employ only microarray data, the proposed model can bridge the gap between the relative background frequency of the observed nucleotide and the gene's transcription rate. Conclusions We test the proposed approach on two real-world microarray datasets. Experimental results show that the proposed model can effectively identify the parameters and the activity level of the TF. Moreover, the kinetic parameters introduced in the proposed model reveal more biological insight than previous models can.

  13. Construction of East China unconventional oil and gas source database system%华东非常规油气源头数据库系统建设

    Institute of Scientific and Technical Information of China (English)

    敬钟伟

    2013-01-01

    With the continued growth of the unconventional oil and gas business of SINOPEC's East China Company, integrated research, production operations, and business management in the oil field are moving in a deeper and more refined direction, and the demands on information are rising accordingly: data must be "complete and unified, accurate and timely". Through earlier informatization efforts, the East China Company has established a number of specialized databases and data application systems that support oilfield management, research, and production well. However, as with conventional oil and gas fields, the business spans several exploration and development domains, such as geophysical exploration, well logging, and analysis and testing, that have not been comprehensively organised and for which no mature business standard exists as a reference. It is therefore necessary to sort out the unconventional business through the collection, population, and construction of a source database to support the company's operations. As applications deepen, the need for data integration and source organisation becomes ever more apparent. This paper gives a brief introduction to the design of the system, from the establishment of the database to its management, so that unconventional business data can be supervised and counted in a timely manner, and so that the large volumes of data produced by the company's modern production, research, and management processes can be better stored, managed, and applied.

  14. The NorWeST project: Crowd-sourcing a big data stream temperature database and high-resolution climate scenarios for western rivers and streams

    Science.gov (United States)

    Isaak, D.; Wenger, S. J.; Peterson, E.; Ver Hoef, J.; Luce, C.; Hostetler, S.

    2015-12-01

    Climate change is warming streams across the western U.S. and threatens billions of dollars of investments made to conserve valuable cold-water species like trout and salmon. Efficient threat response requires prioritization of limited conservation resources and coordinated interagency efforts guided by accurate information about climate at scales relevant to the distributions of species across landscapes. To provide that information, the NorWeST project was initiated in 2011 to aggregate stream temperature data from all available sources and create high-resolution climate scenarios. The database has since grown into the largest of its kind globally, and now consists of >60,000,000 hourly temperature recordings at >20,000 unique stream sites that were contributed by hundreds of professionals working for >95 state, federal, tribal, municipal, county, and private resource agencies. This poster shows a high-resolution (1-kilometer) summer temperature scenario created with these data and mapped to 800,000 kilometers of network across eight western states (ID, WA, OR, MT, WY, UT, NV, CA). The geospatial data associated with this climate scenario and thirty others developed in this project are distributed in user-friendly digital formats through the NorWeST website (http://www.fs.fed.us/rm/boise/AWAE/projects/NorWeST.shtml). The accuracy, utility, and convenience of NorWeST data products have led to their rapid adoption and use by the management and research communities for conservation planning, inter-agency coordination of monitoring networks, and new research on stream temperatures and thermal ecology. A project of this scope and utility was possible only through crowd-sourcing techniques, which have also served to engage data contributors in the process of science creation while strengthening the social networks needed for effective conservation.

  15. Surgical research using national databases.

    Science.gov (United States)

    Alluri, Ram K; Leland, Hyuma; Heckmann, Nathanael

    2016-10-01

    Recent changes in healthcare and advances in technology have increased the use of large-volume national databases in surgical research. These databases have been used to develop perioperative risk stratification tools, assess postoperative complications, calculate costs, and investigate numerous other topics across multiple surgical specialties. The results of these studies contain variable information but are subject to unique limitations. The use of large-volume national databases is increasing in popularity, and thorough understanding of these databases will allow for a more sophisticated and better educated interpretation of studies that utilize such databases. This review will highlight the composition, strengths, and weaknesses of commonly used national databases in surgical research.

  16. ANEMOS: A computer code to estimate air concentrations and ground deposition rates for atmospheric nuclides emitted from multiple operating sources

    Energy Technology Data Exchange (ETDEWEB)

    Miller, C.W.; Sjoreen, A.L.; Begovich, C.L.; Hermann, O.W.

    1986-11-01

    This code estimates concentrations in air and ground deposition rates for Atmospheric Nuclides Emitted from Multiple Operating Sources. ANEMOS is one component of an integrated Computerized Radiological Risk Investigation System (CRRIS) developed for the US Environmental Protection Agency (EPA) for use in performing radiological assessments and in developing radiation standards. The concentrations and deposition rates calculated by ANEMOS are used in subsequent portions of the CRRIS for estimating doses and risks to man. The calculations made in ANEMOS are based on the use of a straight-line Gaussian plume atmospheric dispersion model with both dry and wet deposition parameter options. The code will accommodate a ground-level or elevated point and area source or windblown source. Adjustments may be made during the calculations for surface roughness, building wake effects, terrain height, wind speed at the height of release, the variation in plume rise as a function of downwind distance, and the in-growth and decay of daughter products in the plume as it travels downwind. ANEMOS can also accommodate multiple particle sizes and clearance classes, and it may be used to calculate the dose from a finite plume of gamma-ray-emitting radionuclides passing overhead. The output of this code is presented for 16 sectors of a circular grid. ANEMOS can calculate both the sector-average concentrations and deposition rates at a given set of downwind distances in each sector and the average of these quantities over an area within each sector bounded by two successive downwind distances. ANEMOS is designed to be used primarily for continuous, long-term radionuclide releases. This report describes the models used in the code, their computer implementation, the uncertainty associated with their use, and the use of ANEMOS in conjunction with other codes in the CRRIS. A listing of the code is included in Appendix C.
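
The straight-line Gaussian plume model at the core of ANEMOS has a standard closed form: the concentration falls off as Gaussians in the crosswind and vertical directions, with an image term for reflection at the ground. The sketch below uses illustrative Briggs-style dispersion coefficients for a single stability class; these coefficients and the function interface are assumptions, not the parameterisation used in ANEMOS itself.

```python
import numpy as np

def gaussian_plume(q, u, x, y, z, stack_height,
                   sigma_y_coeff=(0.08, 0.0001), sigma_z_coeff=(0.06, 0.0015)):
    """Straight-line Gaussian plume concentration (g/m^3) with ground
    reflection, the core dispersion model a code like ANEMOS builds on.

    q : emission rate (g/s), u : wind speed (m/s),
    x : downwind distance, y : crosswind offset, z : receptor height (m).
    The sigma fits are illustrative Briggs-style curves for one
    stability class (assumed values).
    """
    a, b = sigma_y_coeff
    c, d = sigma_z_coeff
    sy = a * x / np.sqrt(1.0 + b * x)      # horizontal dispersion width (m)
    sz = c * x / np.sqrt(1.0 + d * x)      # vertical dispersion width (m)
    h = stack_height
    return (q / (2.0 * np.pi * u * sy * sz)
            * np.exp(-y ** 2 / (2.0 * sy ** 2))
            * (np.exp(-(z - h) ** 2 / (2.0 * sz ** 2))      # direct plume
               + np.exp(-(z + h) ** 2 / (2.0 * sz ** 2))))  # ground reflection
```

The crosswind symmetry and the decay away from the plume centerline follow directly from the two Gaussian factors.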

  17. Aging US males with multiple sources of emotional social support have low testosterone.

    Science.gov (United States)

    Gettler, Lee T; Oka, Rahul C

    2016-02-01

    Among species expressing bi-parental care, males' testosterone is often low when they cooperate with females to raise offspring. In humans, low-testosterone men might have an advantage as nurturant partners and parents because they are less prone to anger and reactive aggression and are more empathetic. However, humans engage in cooperative, supportive relationships beyond the nuclear family, and these prosocial capacities were likely critical to our evolutionary success. Despite the diversity of human prosociality, no prior study has tested whether men's testosterone is also reduced when they participate in emotionally supportive relationships beyond partnering and parenting. Here, we draw on testosterone and emotional social support data collected from older men (n=371; mean: 61.2 years of age) enrolled in the National Health and Nutrition Examination Survey, a US nationally representative study. Men who reported receiving emotional support from two or more sources had lower testosterone than men reporting zero support (all p < 0.05), consistent with the idea that men's neuroendocrine physiology is responsive to a broader range of supportive social relationships. Our results contribute novel insights on the intersections between health, social support, and physiology.

  18. IPeak: An open source tool to combine results from multiple MS/MS search engines.

    Science.gov (United States)

    Wen, Bo; Du, Chaoqin; Li, Guilin; Ghali, Fawaz; Jones, Andrew R; Käll, Lukas; Xu, Shaohang; Zhou, Ruo; Ren, Zhe; Feng, Qiang; Xu, Xun; Wang, Jun

    2015-09-01

    Liquid chromatography coupled tandem mass spectrometry (LC-MS/MS) is an important technique for detecting peptides in proteomics studies. Here, we present an open source software tool, termed IPeak, a peptide identification pipeline that is designed to combine the Percolator post-processing algorithm and a multi-search strategy to enhance the sensitivity of peptide identifications without compromising accuracy. IPeak provides a graphical user interface (GUI) as well as a command-line interface, is implemented in JAVA, and can work on all three major operating system platforms: Windows, Linux/Unix and OS X. IPeak has been designed to work with the mzIdentML standard from the Proteomics Standards Initiative (PSI) as an input and output, and has also been fully integrated into the associated mzidLibrary project, providing access to the overall pipeline, as well as modules for calling Percolator on individual search engine result files. The integration thus enables IPeak (and Percolator) to be used in conjunction with any software packages implementing the mzIdentML data standard. IPeak is freely available and can be downloaded under an Apache 2.0 license at https://code.google.com/p/mzidentml-lib/.

  19. Integration of Multiple Genomic Data Sources in a Bayesian Cox Model for Variable Selection and Prediction.

    Science.gov (United States)

    Treppmann, Tabea; Ickstadt, Katja; Zucknick, Manuela

    2017-01-01

    Bayesian variable selection becomes more and more important in statistical analyses, in particular when performing variable selection in high dimensions. For survival time models and in the presence of genomic data, the state of the art is still quite unexploited. One of the more recent approaches suggests a Bayesian semiparametric proportional hazards model for right censored time-to-event data. We extend this model to directly include variable selection, based on a stochastic search procedure within a Markov chain Monte Carlo sampler for inference. This equips us with an intuitive and flexible approach and provides a way for integrating additional data sources and further extensions. We make use of the possibility of implementing parallel tempering to help improve the mixing of the Markov chains. In our examples, we use this Bayesian approach to integrate copy number variation data into a gene-expression-based survival prediction model. This is achieved by formulating an informed prior based on copy number variation. We perform a simulation study to investigate the model's behavior and prediction performance in different situations before applying it to a dataset of glioblastoma patients and evaluating the biological relevance of the findings.

  20. Sequence-based analysis of the microbial composition of water kefir from multiple sources.

    Science.gov (United States)

    Marsh, Alan J; O'Sullivan, Orla; Hill, Colin; Ross, R Paul; Cotter, Paul D

    2013-11-01

    Water kefir is a water-sucrose-based beverage, fermented by a symbiosis of bacteria and yeast to produce a final product that is lightly carbonated, acidic and that has a low alcohol percentage. The microorganisms present in water kefir are introduced via water kefir grains, which consist of a polysaccharide matrix in which the microorganisms are embedded. We aimed to provide a comprehensive sequencing-based analysis of the bacterial population of water kefir beverages and grains, while providing an initial insight into the corresponding fungal population. To facilitate this objective, four water kefirs were sourced from the UK, Canada and the United States. Culture-independent, high-throughput, sequencing-based analyses revealed that the bacterial fraction of each water kefir and grain was dominated by Zymomonas, an ethanol-producing bacterium, which has not previously been detected at such a scale. The other genera detected were representatives of the lactic acid bacteria and acetic acid bacteria. Our analysis of the fungal component established that it was comprised of the genera Dekkera, Hanseniaspora, Saccharomyces, Zygosaccharomyces, Torulaspora and Lachancea. This information will assist in the ultimate identification of the microorganisms responsible for the potentially health-promoting attributes of these beverages. © 2013 Federation of European Microbiological Societies. Published by John Wiley & Sons Ltd. All rights reserved.

  1. Transfer learning based clinical concept extraction on data from multiple sources.

    Science.gov (United States)

    Lv, Xinbo; Guan, Yi; Deng, Benyang

    2014-12-01

    Machine learning methods usually assume that training data and test data are drawn from the same distribution. However, this assumption often cannot be satisfied in the task of clinical concept extraction. The main aim of this paper was to use training data from one institution to build a concept extraction model for data from another institution with a different distribution. An instance-based transfer learning method, TrAdaBoost, was applied in this work. To prevent the occurrence of a negative transfer phenomenon with TrAdaBoost, we integrated it with Bagging, which provides a "softer" weight-update mechanism, using only a tiny amount of training data from the target domain. Two data sets from the 2010 i2b2/VA challenge, BETH and PARTNERS, as well as BETHBIO, a data set we constructed ourselves, were employed to demonstrate the transfer ability of our method. Our method outperforms the baseline model by 2.3% and 4.4% when the baseline model is trained on data combined from the source domain and the target domain, in the two experiments of BETH vs. PARTNERS and BETHBIO vs. PARTNERS, respectively. Additionally, confidence intervals for the performance metrics suggest that our method's results are statistically significant. Moreover, we explore the applicability of our method in further experiments. With our method, only a tiny amount of labeled data from the target domain is required to build a concept extraction model that produces better performance.
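
TrAdaBoost's core idea, reweighting source and target instances in opposite directions, can be sketched as a single round of its weight update: misclassified source instances are down-weighted (they look unlike the target distribution), while misclassified target instances are up-weighted as in AdaBoost. The function below follows the spirit of the published update rule (Dai et al.), but the interface, normalisation, and error clamping are illustrative choices, not this paper's implementation.

```python
import numpy as np

def tradaboost_weight_update(w_src, w_tgt, err_src, err_tgt, n_iters):
    """One round of a TrAdaBoost-style instance-weight update.

    w_src, w_tgt : current instance weights for source/target domains.
    err_src, err_tgt : 0/1 arrays flagging misclassified instances.
    n_iters : total number of boosting rounds (enters the source beta).
    """
    n_src = len(w_src)
    beta_src = 1.0 / (1.0 + np.sqrt(2.0 * np.log(n_src) / n_iters))
    # weighted training error measured on the target domain only
    eps = np.sum(w_tgt * err_tgt) / np.sum(w_tgt)
    eps = min(max(eps, 1e-10), 0.499)            # keep the update well-defined
    beta_tgt = eps / (1.0 - eps)
    w_src_new = w_src * beta_src ** err_src      # shrink misclassified source points
    w_tgt_new = w_tgt * beta_tgt ** (-err_tgt)   # grow misclassified target points
    total = w_src_new.sum() + w_tgt_new.sum()
    return w_src_new / total, w_tgt_new / total
```

After a few rounds, source instances that persistently disagree with the target concept carry negligible weight, which is exactly the behaviour that makes negative transfer possible when target data are very scarce, and which motivates the Bagging-based softening the authors describe.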

  2. Delineation of Piceance Basin basement structures using multiple source data: Implications for fractured reservoir exploration

    Energy Technology Data Exchange (ETDEWEB)

    Hoak, T.E.; Klawitter, A.L.

    1995-10-01

    Fractured production trends in Piceance Basin Cretaceous-age Mesaverde Group gas reservoirs are controlled by subsurface structures. Because many of the subsurface structures are controlled by basement fault trends, a new interpretation of basement structure was performed using an integrated interpretation of Landsat Thematic Mapper (TM), side-looking airborne radar (SLAR), high-altitude false-color aerial photography, gas and water production data, high-resolution aeromagnetic data, subsurface geologic information, and surficial fracture maps. This new interpretation demonstrates the importance of basement structures on the nucleation and development of overlying structures and associated natural fractures in the hydrocarbon-bearing section. Grand Valley, Parachute, Rulison, Plateau, Shire Gulch, White River Dome, Divide Creek and Wolf Creek fields all produce gas from fractured tight gas sand and coal reservoirs within the Mesaverde Group. Tectonic fracturing involving basement structures is responsible for development of permeability allowing economic production from the reservoirs. In this context, the significance of detecting natural fractures using the integrated fracture detection technique is critical to developing tight gas resources. Integration of data from widely available, relatively inexpensive sources such as high-resolution aeromagnetics, remote sensing imagery analysis and regional geologic syntheses provides diagnostic data sets to incorporate into an overall methodology for targeting fractured reservoirs. The ultimate application of this methodology is the development and calibration of a potent exploration tool to predict subsurface fractured reservoirs and to target areas for exploration drilling and for infill and step-out development programs.

  3. Construction of a single/multiple wavelength RZ optical pulse source at 40 GHz by use of wavelength conversion in a high-nonlinearity DSF-NOLM

    DEFF Research Database (Denmark)

    Yu, Jianjun; Yujun, Qian; Jeppesen, Palle;

    2001-01-01

A single or multiple wavelength RZ optical pulse source at 40 GHz is successfully obtained by using wavelength conversion in a nonlinear optical loop mirror consisting of high-nonlinearity dispersion-shifted fiber.

  4. sources

    Directory of Open Access Journals (Sweden)

    Shu-Yin Chiang

    2002-01-01

    Full Text Available In this paper, we study simplified models of the ATM (Asynchronous Transfer Mode) multiplexer network with Bernoulli random traffic sources. Based on the model, the performance measures are analyzed under the different output service schemes.
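
The queueing behavior summarized above can be illustrated with a toy simulation. The sketch below is a minimal slotted-time model with hypothetical parameters (number of sources, emission probability, buffer size, service rate); it is an illustration only, not the paper's analytical model, which derives performance measures in closed form.

```python
import random

def simulate_mux(n_sources, p_on, buffer_size, service_per_slot, n_slots, seed=0):
    """Toy slotted multiplexer: each Bernoulli source emits one cell per
    slot with probability p_on; the buffer holds at most buffer_size
    cells and service_per_slot cells depart each slot (FIFO).
    Returns the fraction of offered cells that were lost to overflow."""
    rng = random.Random(seed)
    queue = 0
    lost = offered = 0
    for _ in range(n_slots):
        arrivals = sum(rng.random() < p_on for _ in range(n_sources))
        offered += arrivals
        queue += arrivals
        if queue > buffer_size:          # cells beyond the buffer are dropped
            lost += queue - buffer_size
            queue = buffer_size
        queue = max(0, queue - service_per_slot)
    return lost / offered if offered else 0.0
```

Running the model at heavy load (offered traffic well above the service rate) shows substantial cell loss, while a lightly loaded configuration loses essentially nothing, which is the kind of trade-off the analytical performance measures quantify.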

  5. Cr(VI) reduction capacity of activated sludge as affected by nitrogen and carbon sources, microbial acclimation and cell multiplication

    Energy Technology Data Exchange (ETDEWEB)

    Ferro Orozco, A.M., E-mail: mferro@cidca.org.ar [Centro de Investigacion y Desarrollo en Criotecnologia de Alimentos (CIDCA) CCT La Plata CONICET - Fac. de Cs. Exactas, UNLP. 47 y 116 (B1900AJJ) La Plata (Argentina); Contreras, E.M.; Zaritzky, N.E. [Centro de Investigacion y Desarrollo en Criotecnologia de Alimentos (CIDCA) CCT La Plata CONICET - Fac. de Cs. Exactas, UNLP. 47 y 116 (B1900AJJ) La Plata (Argentina); Fac. de Ingenieria, UNLP. 47 y 1 (B1900AJJ) - La Plata (Argentina)

    2010-04-15

The objectives of the present work were: (i) to analyze the capacity of activated sludge to reduce hexavalent chromium using different carbon sources as electron donors in batch reactors, (ii) to determine the relationship between biomass growth and the amount of Cr(VI) reduced, considering the effect of the nitrogen to carbon source ratio, and (iii) to determine the effect of the Cr(VI) acclimation stage on the performance of biological chromium reduction, assessing the stability of the Cr(VI) reduction capacity of the activated sludge. The highest specific Cr(VI) removal rate (q_Cr) was attained with cheese whey or lactose as electron donors, decreasing in the following order: cheese whey ≈ lactose > glucose > citrate > acetate. Batch assays with different nitrogen to carbon source ratios demonstrated that biological Cr(VI) reduction is associated with the cell multiplication phase; as a result, maximum Cr(VI) removal rates occur when there is no substrate limitation. The biomass can be acclimated to the presence of Cr(VI) and generate new cells that maintain the ability to reduce chromate. Therefore, the activated sludge process could be applied to a continuous Cr(VI) removal process.

  6. Exposure of children to polycyclic aromatic hydrocarbons in Mexico: assessment of multiple sources.

    Science.gov (United States)

    Martínez-Salinas, Rebeca I; Elena Leal, M; Batres-Esquivel, Lilia E; Domínguez-Cortinas, Gabriela; Calderón, Jacqueline; Díaz-Barriga, Fernando; Pérez-Maldonado, Iván N

    2010-08-01

Biological monitoring of polycyclic aromatic hydrocarbons (PAHs) has expanded rapidly since urinary 1-hydroxypyrene (1-OHP) was suggested as a biological index for pyrene. Because pyrene is often present in PAH mixtures, 1-OHP has also been considered an indirect indicator of exposure to these mixtures. Sources of PAHs in developing countries are numerous; however, exposure of children to PAHs has not been studied in detail. Therefore, the aim of this study was to assess exposure of children to PAHs in different scenarios: (a) children living next to highways with heavy traffic; (b) a sanitary landfill; (c) brick kiln communities; and (d) children exposed to biomass combustion. A total of 258 children (aged 3-13) participated in the study. The analyses were performed by HPLC with fluorescence detection. Urinary 1-OHP concentrations were then adjusted by urinary creatinine. The highest levels of 1-OHP in this study were found in children exposed to biomass combustion (mean value 3.25 micromol/mol creatinine), but exposure was also detected in children living in communities with a brick kiln industry (mean 0.35 micromol/mol creatinine), in a community next to a sanitary landfill with waste combustion (0.30 micromol/mol creatinine) and in children exposed to traffic (mean values 0.2 and 0.08 micromol/mol creatinine). Considering our results, and taking into account that millions of children in Mexico are living in scenarios similar to those studied in this work, the assessment of health effects in children exposed to PAHs is urgently needed; furthermore, PAHs have to be declared contaminants of concern at a national level.

  7. Sources of Divergence in Remote Sensing of Vegetation Phenology From Multiple Long Term Satellite Data Records

    Science.gov (United States)

    Barreto, A.; Didan, K.; Miura, T.

    2008-12-01

mosaic nature of these areas. Most other areas were very similar. We conclude that estimating consistent phenology from multiple sensors is possible provided the inter-sensor continuity, especially over dense tropical and boreal forests, is properly addressed. Furthermore, our results indicate that estimating phenology for cropped areas needs to be addressed on a local to regional basis using finer resolution data that can properly account for their mosaic nature.

  8. Tiered Human Integrated Sequence Search Databases for Shotgun Proteomics.

    Science.gov (United States)

    Deutsch, Eric W; Sun, Zhi; Campbell, David S; Binz, Pierre-Alain; Farrah, Terry; Shteynberg, David; Mendoza, Luis; Omenn, Gilbert S; Moritz, Robert L

    2016-11-04

The results of analysis of shotgun proteomics mass spectrometry data can be greatly affected by the selection of the reference protein sequence database against which the spectra are matched. For many species there are multiple sources from which somewhat different sequence sets can be obtained. This can lead to confusion about which database is best in which circumstances, a problem especially acute in human sample analysis. All sequence databases are genome-based, with sequences for the predicted genes and their protein translation products compiled. Our goal is to create a set of primary sequence databases that comprise the union of sequences from many of the different available sources and make the result easily available to the community. We have compiled a set of four sequence databases of varying sizes, from a small database consisting of only the ∼20,000 primary isoforms plus contaminants to a very large database that includes almost all nonredundant protein sequences from several sources. This set of tiered, increasingly complete human protein sequence databases suitable for mass spectrometry proteomics sequence database searching is called the Tiered Human Integrated Search Proteome set. In order to evaluate the utility of these databases, we have analyzed two different data sets, one from the HeLa cell line and the other from normal human liver tissue, with each of the four tiers of database complexity. The result is that approximately 0.8%, 1.1%, and 1.5% additional peptides can be identified for Tiers 2, 3, and 4, respectively, as compared with the Tier 1 database, at substantially increasing computational cost. This increase in computational cost may be worth bearing if the identification of sequence variants or the discovery of sequences that are not present in the reviewed knowledge base entries is an important goal of the study. We find that it is useful to search a data set against a simpler database, and then check the uniqueness of the
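
As a rough sketch of how a union of sequence sources can be made nonredundant, the snippet below merges several identifier-to-sequence collections, collapsing identical sequences onto one record. This is an illustration only, with hypothetical names; the actual build procedure and source formats of the Tiered Human Integrated Search Proteome set are not described here.

```python
def build_union_fasta(sources):
    """Merge several {identifier: sequence} collections into one
    nonredundant set keyed by exact sequence; identifiers that share a
    sequence are collapsed onto a single record whose header lists all
    contributing source|identifier pairs."""
    by_seq = {}
    for source_name, records in sources.items():
        for ident, seq in records.items():
            by_seq.setdefault(seq, []).append(f"{source_name}|{ident}")
    return {"|".join(ids): seq for seq, ids in by_seq.items()}
```

For example, two sources that both carry the same sequence under different accessions would contribute a single merged entry, which is the sense in which the larger tiers are "nonredundant" unions rather than simple concatenations.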

  9. Screening the Medicines for Malaria Venture Pathogen Box across Multiple Pathogens Reclassifies Starting Points for Open-Source Drug Discovery.

    Science.gov (United States)

    Duffy, Sandra; Sykes, Melissa L; Jones, Amy J; Shelper, Todd B; Simpson, Moana; Lang, Rebecca; Poulsen, Sally-Ann; Sleebs, Brad E; Avery, Vicky M

    2017-09-01

Open-access drug discovery provides a substantial resource for diseases primarily affecting the poor and disadvantaged. The open-access Pathogen Box collection comprises compounds with demonstrated biological activity against specific pathogenic organisms. The supply of this resource by the Medicines for Malaria Venture has the potential to provide new chemical starting points for a number of tropical and neglected diseases, through repurposing of these compounds for use in drug discovery campaigns for these additional pathogens. We tested the Pathogen Box against kinetoplastid parasites and malaria life cycle stages in vitro. Consequently, chemical starting points for malaria, human African trypanosomiasis, Chagas disease, and leishmaniasis drug discovery efforts have been identified. Inclusive of this in vitro biological evaluation, outcomes from extensive literature reviews and database searches are provided. This information encompasses commercial availability, literature reference citations, other aliases and ChEMBL number with associated biological activity, where available. The release of this new data for the Pathogen Box collection into the public domain will aid the open-source model of drug discovery. Importantly, this will provide novel chemical starting points for drug discovery and target identification in tropical disease research. Copyright © 2017 Duffy et al.

  10. Acoustic multipole sources for the regularized lattice Boltzmann method: Comparison with multiple-relaxation-time models in the inviscid limit.

    Science.gov (United States)

    Zhuo, Congshan; Sagaut, Pierre

    2017-06-01

In this paper, a variant of the acoustic multipole source (AMS) method is proposed within the framework of the lattice Boltzmann method. A quadrupole term is directly included in the stress system (equilibrium momentum flux), and the dependency of the quadrupole source in the inviscid limit upon the fortuitous discretization error reported in the work of E. M. Viggen [Phys. Rev. E 87, 023306 (2013)] is removed. The regularized lattice Boltzmann (RLB) method with this variant AMS method is presented for 2D and 3D acoustic problems in the inviscid limit, and without loss of generality, the D3Q19 model is considered in this work. To assess the accuracy and the advantage of the RLB scheme with this AMS for acoustic point sources, numerical investigations and comparisons with the multiple-relaxation-time (MRT) models and the analytical solutions are performed on 2D and 3D acoustic multipole point sources in the inviscid limit, including monopoles, x dipoles, and xx quadrupoles. The present results validate the good precision of this AMS method; the RLB scheme exhibits some superconvergence properties for the monopole sources compared with the MRT models, and both the RLB and MRT models have the same accuracy for the simulations of acoustic dipole and quadrupole sources. To further validate the capability of the RLB scheme with AMS, another basic acoustic problem, the acoustic scattering from a solid cylinder presented at the Second Computational Aeroacoustics Workshop on Benchmark Problems, is numerically considered. The directivity pattern of the acoustic field is computed at r = 7.5; the present results agree well with the exact solutions. Also, the effects of slip and no-slip wall treatments within the regularized boundary condition on this pure acoustic scattering problem are tested; compared with the exact solution, the slip wall treatment gives the better result.
All simulations demonstrate

  11. English language learners with reading-related LD: linking data from multiple sources to make eligibility determinations.

    Science.gov (United States)

    Wilkinson, Cheryl Y; Ortiz, Alba A; Robertson, Phyllis M; Kushner, Millicent I

    2006-01-01

    Results are reported for an exploratory study of eligibility decisions made for 21 Spanish-speaking English language learners (ELLs) with learning disabilities (LD) and no secondary disabilities who received special education support in reading. Eligibility determinations by an expert panel resulted in decisions that differed significantly from those of school multidisciplinary teams. The panel agreed that some students appeared to have reading-related LD (n = 5) but also identified students that they believed had disabilities, but not necessarily reading-related LD (n = 6). Another group of students (n = 10) had learning problems that the panel believed could be attributed to factors other than LD or for whom substantive additional data would be required to validate eligibility. Issues associated with referral, assessment, and eligibility determinations for ELLs are discussed, and recommendations for improving practice are offered, with an emphasis on the importance of linking data from multiple sources when deciding whether ELLs qualify for special education.

  12. Reclamation research database

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2007-07-01

A reclamation research database was compiled to help stakeholders search publications and research related to the reclamation of Alberta's oil sands region. New publications are added to the database by the Cumulative Environmental Management Association (CEMA), a nonprofit association whose mandate is to develop frameworks and guidelines for the management of cumulative environmental effects in the oil sands region. A total of 514 research papers have been compiled in the database to date. Topics include recent research on hydrology, aquatic and terrestrial ecosystems, laboratory studies on biodegradation, and the effects of oil sands processing on micro-organisms. The database includes a wide variety of studies related to reconstructed wetlands as well as the ecological effects of hydrocarbons on phytoplankton and other organisms. Database entries record the formats in which each publication is available and the authors' affiliations, with links to external abstracts and details of source information provided where available.

  13. Influence Of Iron Sources In The Nutrient Medium On In Vitro Shoot Multiplication And Rooting Of Magnolia And Cherry Plum

    Directory of Open Access Journals (Sweden)

    Sokolov Rosen S.

    2015-12-01

    Full Text Available In this study, the effects of compounds providing Fe in chelated (NaFeEDTA and Fe(III)AC) and non-chelated (FeSO4·7H2O) forms, as components of culture media, on in vitro shoot multiplication and rooting of Magnolia soulangeana ‘Alexandrina’, Magnolia grandiflora and Prunus cerasifera ‘Nigra’ were comparatively evaluated. Each of the tested chemicals was used as a single Fe source in the basal salt medium. In the shoot multiplication and rooting stages, plant response was scored by biometrical indices (number of shoots, leaves and roots, shoot and root length, percent of rooted plants and root hairs). The occurrence of physiological disorders was estimated by visual observation. In the presence of FeSO4, symptoms of chlorosis, hyperhydricity, early senescence and a specific root morphology, suggesting Fe deficiency, were observed. These deteriorations were entirely prevented by the application of the Fe chelates, of which Fe(III)AC was tested for the first time in this experimental system. The addition of Fe(III)AC positively affected plant quality to an extent comparable to that of NaFeEDTA. The obtained data suggest that both of the applied Fe chelates are more appropriate than the non-chelated Fe form and can be used alternatively in the optimization of nutrient media for micropropagation of Magnolia and Prunus cerasifera genotypes.

  14. Multiple episodes in children and adolescents with bipolar disorder: comorbidity, hospitalization, and treatment (data from a cohort of 8,129 patients of a national managed care database).

    Science.gov (United States)

    Castilla-Puentes, Ruby

    2008-01-01

The purpose of this study was to delineate the prevalence, demographic characteristics, comorbidity, hospitalization, and medication use of a large cohort of patients with and without multiple episodes per year. We hypothesized that children and adolescents with multiple episodes per year would have a higher comorbidity and require more hospitalizations and pharmacological treatment than their counterparts without multiple episodes. Analysis was conducted on a cohort of 8,129 children and adolescent patients with bipolar disorders (BD) from the Integrated Healthcare Information Services (IHCIS) database, identified from June 30, 2000 to July 1, 2003. Demographic variables, type of hospitalization, and psychotropic medication used in the year of follow-up were compared between children and adolescents with and without multiple episodes per year. Included were 58 patients with multiple episodes (defined as four or more reports of inpatient treatment for any affective disorder per year) and 8,071 without multiple episodes. Children and adolescents with multiple episodes, versus those without, were differentiated by more comorbid attention deficit disorder (ADD) (80.9% versus 29.4%; chi2 = 70.61, df = 1). Children and adolescents with multiple episodes per year present a higher comorbidity and require more hospitalizations and pharmacological treatment than those without multiple episodes. The diagnosis and treatment of children and adolescents with BD will have to take into account the high comorbidity of ADD, mainly in children and adolescents with multiple episodes. Future prospective studies will help to better characterize the impact of multiple episodes on the course of pediatric BD and facilitate appropriate treatment strategies.

  15. Specialist Bibliographic Databases.

    Science.gov (United States)

    Gasparyan, Armen Yuri; Yessirkepov, Marlen; Voronov, Alexander A; Trukhachev, Vladimir I; Kostyukova, Elena I; Gerasimov, Alexey N; Kitas, George D

    2016-05-01

Specialist bibliographic databases offer essential online tools for researchers and authors who work on specific subjects and perform comprehensive and systematic syntheses of evidence. This article presents examples of the established specialist databases, which may be of interest to those engaged in multidisciplinary science communication. Access to most specialist databases is through subscription schemes and membership in professional associations. Several aggregators of information and database vendors, such as EBSCOhost and ProQuest, facilitate advanced searches supported by specialist keyword thesauri. Searches of items through specialist databases are complementary to those through multidisciplinary research platforms, such as PubMed, Web of Science, and Google Scholar. Familiarity with the functional characteristics of biomedical and nonbiomedical bibliographic search tools is mandatory for researchers, authors, editors, and publishers. Database users are offered updates of the indexed journal lists, abstracts, author profiles, and links to other metadata. Editors and publishers may find the source selection criteria particularly useful and may apply for coverage of their peer-reviewed journals and grey literature sources. These criteria are aimed at accepting relevant sources with established editorial policies and quality controls.

  16. Web-MCQ: a set of methods and freely available open source code for administering online multiple choice question assessments.

    Science.gov (United States)

    Hewson, Claire

    2007-08-01

    E-learning approaches have received increasing attention in recent years. Accordingly, a number of tools have become available to assist the nonexpert computer user in constructing and managing virtual learning environments, and implementing computer-based and/or online procedures to support pedagogy. Both commercial and free packages are now available, with new developments emerging periodically. Commercial products have the advantage of being comprehensive and reliable, but tend to require substantial financial investment and are not always transparent to use. They may also restrict pedagogical choices due to their predetermined ranges of functionality. With these issues in mind, several authors have argued for the pedagogical benefits of developing freely available, open source e-learning resources, which can be shared and further developed within a community of educational practitioners. The present paper supports this objective by presenting a set of methods, along with supporting freely available, downloadable, open source programming code, to allow administration of online multiple choice question assessments to students.

  17. Comparison of multiple viral population characterization methods on a candidate cross-protection Citrus tristeza virus (CTV) source.

    Science.gov (United States)

    Kleynhans, Jackie; Pietersen, Gerhard

    2016-11-01

Citrus tristeza virus (CTV) is the most economically important virus found on citrus and influences production worldwide. The 3' half of the RNA genome is generally conserved amongst sources, whereas the 5' portion is more divergent, allowing for the classification of the virus into a number of genotypes based on sequence diversity. The acknowledged genotypes of CTV are continually being expanded, and thus far include T36, T30, T3, VT, B165, HA16-5, T68 and RB. The genotype composition of the CTV populations of a potential cross-protection source in Mexican lime was studied whilst comparing different techniques of viral population characterization. Cloning and sequencing of an ORF 1a fragment, genotype-specific RT-PCRs, and Illumina sequencing of the p33 gene using both total RNA and RNA template enriched through immuno-capture were done. Primers used in the cloning and sequencing proved to be biased towards detection of the VT genotype. RT-PCR and Illumina sequencing using the two different templates provided relatively comparable results, even though the immuno-capture-enriched template provided less CTV-specific data than expected, while the RT-PCRs and p33 sequencing cannot be used to make inferences about the rest of the genome, which may vary due to recombination. The source was found to contain multiple genotypes, including RB and VT. When choosing a characterization method, the features of the virus under study should be considered. It was found that Illumina sequencing offers an opportunity to gain a large amount of information regarding the entire viral genome, but the challenges encountered are discussed. Copyright © 2016 Elsevier B.V. All rights reserved.

  18. Constructing a Geology Ontology Using a Relational Database

    Science.gov (United States)

    Hou, W.; Yang, L.; Yin, S.; Ye, J.; Clarke, K.

    2013-12-01

In the geology community, the creation of a common geology ontology has become a useful means to solve problems of data integration, knowledge transformation and the interoperation of multi-source, heterogeneous and multiple-scale geological data. Currently, human-computer interaction methods and relational database-based methods are the primary ontology construction methods. Some human-computer interaction methods, such as the Geo-rule based method, the ontology life cycle method and the module design method, have been proposed for applied geological ontologies. Essentially, the relational database-based method is a reverse engineering of abstracted semantic information from an existing database. The key is to construct rules for the transformation of database entities into the ontology. Relative to the human-computer interaction method, relational database-based methods can use existing resources and the stated semantic relationships among geological entities. However, two problems challenge their development and application. One is the transformation of multiple inheritances and nested relationships and their representation in an ontology. The other is that most of these methods do not measure the semantic retention of the transformation process. In this study, we focused on constructing a rule set to convert the semantics in a geological database into a geological ontology. According to the relational schema of a geological database, a conversion approach is presented to convert a geological spatial database to an OWL-based geological ontology, which is based on identifying semantics such as entities, relationships, inheritance relationships, nested relationships and cluster relationships. The semantic integrity of the transformation was verified using an inverse mapping process. In a geological ontology, inheritance and union operations between superclass and subclass were used to represent the nested relationship in a geochronology and the multiple inheritances
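
A minimal version of such a database-to-ontology rule set can be sketched as follows, on a toy schema: a table maps to a class, each row to an individual, plain columns to datatype properties, and foreign-key columns to object properties referencing individuals of another table. The function and naming conventions below are hypothetical illustrations, not the authors' rule set.

```python
def table_to_triples(table, rows, fk_cols=None):
    """Naive relational-to-ontology mapping: emit (subject, predicate,
    object) triples making the table a class, each row an individual,
    plain columns datatype properties, and foreign-key columns object
    properties. fk_cols maps a column name to the referenced table."""
    fk_cols = fk_cols or {}
    triples = [(f":{table}", "rdf:type", "owl:Class")]
    for row in rows:
        ind = f":{table}_{row['id']}"
        triples.append((ind, "rdf:type", f":{table}"))
        for col, val in row.items():
            if col == "id":
                continue
            if col in fk_cols:  # foreign key -> object property
                triples.append((ind, f":has{fk_cols[col]}", f":{fk_cols[col]}_{val}"))
            else:               # plain column -> datatype property
                triples.append((ind, f":{col}", repr(val)))
    return triples
```

The harder cases the abstract singles out, multiple inheritance and nested relationships, are exactly what such a flat rule set cannot capture directly, which is why the authors add dedicated operations and an inverse mapping check for semantic retention.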

  19. Database Manager

    Science.gov (United States)

    Martin, Andrew

    2010-01-01

    It is normal practice today for organizations to store large quantities of records of related information as computer-based files or databases. Purposeful information is retrieved by performing queries on the data sets. The purpose of DATABASE MANAGER is to communicate to students the method by which the computer performs these queries. This…

  20. Development of the crop residue and rangeland burning in the 2014 National Emissions Inventory using information from multiple sources.

    Science.gov (United States)

    Pouliot, George; Rao, Venkatesh; McCarty, Jessica L; Soja, Amber

    2017-05-01

Biomass burning has been identified as an important contributor to the degradation of air quality because of its impact on ozone and particulate matter. One component of the biomass burning inventory, crop residue burning, has been poorly characterized in the National Emissions Inventory (NEI). In the 2011 NEI, wildland fires, prescribed fires, and crop residue burning collectively were the largest source of PM2.5. This paper summarizes our 2014 NEI method to estimate crop residue burning emissions and grass/pasture burning emissions using remote sensing data, field information, and literature-based, crop-specific emission factors. We focus on both the pre-harvest and post-harvest burning that takes place with bluegrass, corn, cotton, rice, soybeans, sugarcane and wheat. Estimates for 2014 indicate that over the continental United States (CONUS), crop residue burning (excluding all areas identified as Pasture/Grass, Grassland Herbaceous, and Pasture/Hay) occurred over approximately 1.5 million acres of land and produced 19,600 short tons of PM2.5. For areas identified as Pasture/Grass, Grassland Herbaceous, and Pasture/Hay, biomass burning occurred over approximately 1.6 million acres of land and produced 30,000 short tons of PM2.5. For comparison, the corresponding estimates were 49,650 short tons in the 2008 NEI and 141,180 short tons in the 2011 NEI, although rangeland burning was not well defined in those two inventories, so the comparison is not exact. The remote sensing data also provided verification of our existing diurnal profile for crop residue burning emissions used in chemical transport modeling. In addition, the entire database used to estimate this sector of emissions is available on EPA's Clearinghouse for Inventories and Emission Factors (CHIEF, http://www3.epa.gov/ttn/chief/index.html ).
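
Bottom-up burning inventories of this kind typically follow the standard emission equation: emissions = burned area × fuel load × combustion completeness × emission factor. The sketch below illustrates that arithmetic with made-up values; it does not reproduce the NEI's crop-specific factors or unit conventions.

```python
def burn_emissions_tons(area_acres, fuel_load_tons_per_acre,
                        combustion_completeness, ef_lb_per_ton):
    """Bottom-up burning estimate: burned area times fuel load times
    the fraction of fuel actually combusted gives tons of dry fuel
    burned; multiplying by a pollutant emission factor (here assumed
    tabulated in lb pollutant per ton of fuel) and converting from
    pounds yields emissions in short tons."""
    fuel_burned_tons = area_acres * fuel_load_tons_per_acre * combustion_completeness
    return fuel_burned_tons * ef_lb_per_ton / 2000.0  # 2000 lb per short ton
```

For example, 1,000 acres carrying 2 tons/acre of residue, half of which combusts, with a hypothetical factor of 20 lb PM2.5 per ton of fuel, yields 10 short tons of PM2.5.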

  1. Genome databases

    Energy Technology Data Exchange (ETDEWEB)

    Courteau, J.

    1991-10-11

    Since the Genome Project began several years ago, a plethora of databases have been developed or are in the works. They range from the massive Genome Data Base at Johns Hopkins University, the central repository of all gene mapping information, to small databases focusing on single chromosomes or organisms. Some are publicly available, others are essentially private electronic lab notebooks. Still others limit access to a consortium of researchers working on, say, a single human chromosome. An increasing number incorporate sophisticated search and analytical software, while others operate as little more than data lists. In consultation with numerous experts in the field, a list has been compiled of some key genome-related databases. The list was not limited to map and sequence databases but also included the tools investigators use to interpret and elucidate genetic data, such as protein sequence and protein structure databases. Because a major goal of the Genome Project is to map and sequence the genomes of several experimental animals, including E. coli, yeast, fruit fly, nematode, and mouse, the available databases for those organisms are listed as well. The author also includes several databases that are still under development - including some ambitious efforts that go beyond data compilation to create what are being called electronic research communities, enabling many users, rather than just one or a few curators, to add or edit the data and tag it as raw or confirmed.

  2. Multiple source genes of HAmo SINE actively expanded and ongoing retroposition in cyprinid genomes relying on its partner LINE

    Directory of Open Access Journals (Sweden)

    Gan Xiaoni

    2010-04-01

    Full Text Available Abstract Background We recently characterized HAmo SINE and its partner LINE in silver carp and bighead carp based on hybridization capture of repetitive elements from digested genomic DNA in solution using a bead-probe [1]. To reveal the distribution and evolutionary history of SINEs and LINEs in cyprinid genomes, we performed a multi-species search for HAmo SINE and its partner LINE using the bead-probe capture and internal-primer-SINE polymerase chain reaction (PCR) techniques. Results Sixty-seven full-size and 125 internal-SINE sequences (as well as 34 full-size and 9 internal sequences previously reported in bighead carp and silver carp) from 17 species of the family Cyprinidae were aligned, as well as 14 newly isolated HAmoL2 sequences. Four subfamilies (types I, II, III and IV), which were divided based on diagnostic nucleotides in the tRNA-unrelated region, expanded preferentially within a certain lineage or within the whole family of Cyprinidae as multiple active source genes. The copy numbers of HAmo SINEs were estimated by quantitative RT-PCR to vary from 10^4 to 10^6 in cyprinid genomes. Over one hundred type IV members were identified and characterized in the genome of the primitive cyprinid Danio rerio, but only tens of sequences were found to be similar to types I, II and III, since type IV is the oldest subfamily and its members are dispersed in almost all investigated cyprinid fishes. To determine the taxonomic distribution of HAmo SINE, inter-primer SINE PCR was conducted in other non-cyprinid fishes; the results show that HAmo SINE-related sequences may be dispersed in other families of the order Cypriniformes but are absent in other orders of bony fishes: Siluriformes, Polypteriformes, Lepidosteiformes, Acipenseriformes and Osteoglossiformes. Conclusions Depending on HAmo LINE2, multiple source genes (subfamilies) of HAmo SINE actively expanded and underwent retroposition in a certain lineage or within the whole family of Cyprinidae. From this

  3. Underwater acoustic channel estimation using multiple sources and receivers in shallow waters at very-high frequencies

    Science.gov (United States)

    Kaddouri, Samar

The underwater channel poses numerous challenges for acoustic communication. Acoustic waves suffer long propagation delay, multipath, fading, and potentially high spatial and temporal variability. In addition, there is no typical underwater acoustic channel; every body of water exhibits quantifiably different properties. Underwater acoustic modems are traditionally operated at low frequencies. However, broadband, high-frequency communication is a good alternative because of the lower background noise compared to low frequencies, the considerably larger bandwidth and the better source transducer efficiency. One of the biggest problems in underwater acoustic communications at high frequencies is time-selective fading, resulting in Doppler spread. While many Doppler detection, estimation and compensation techniques can be found in the literature, their applications are limited to systems operating at low frequencies, ranging from a few hundred hertz to around 30 kHz. This dissertation proposes two robust channel estimation techniques for simultaneous transmissions using multiple sources and multiple receivers (MIMO) that closely follow the rapidly time-varying nature of the underwater channel. The first method is a trended least-squares (LS) estimation that combines the traditional LS method with an empirical mode decomposition (EMD) based trend extraction algorithm. This method allows separating the slow fading modes in the MIMO channels from the fast-fading ones and thus achieves close tracking of the channel impulse response time fluctuations. This dissertation also outlines a time-varying underwater channel estimation method based on the channel's sparsity. The sparsity of the underwater communication channel is exploited by using the MIMO P-iterative greedy orthogonal matching pursuit (MIMO-OMP) algorithm for the channel estimation.
Both techniques are demonstrated in a fully controlled environment, using simulated
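The greedy orthogonal matching pursuit step at the heart of the sparse channel estimator can be sketched as follows. This is a minimal single-channel illustration, not the MIMO-OMP algorithm of the dissertation; the dictionary, tap positions and noise level are all invented.

```python
import numpy as np

def omp(A, y, n_nonzero):
    """Greedy orthogonal matching pursuit: find a sparse x with y ≈ A @ x."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(n_nonzero):
        # select the dictionary column most correlated with the residual
        idx = int(np.argmax(np.abs(A.T @ residual)))
        if idx not in support:
            support.append(idx)
        # re-fit by least squares on the current support
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x[:] = 0.0
        x[support] = coef
        residual = y - A @ x
    return x

# toy sparse "channel": 3 active taps out of 64
rng = np.random.default_rng(0)
A = rng.standard_normal((128, 64))            # known probe/dictionary matrix
h = np.zeros(64)
h[[5, 20, 41]] = [1.0, -0.6, 0.3]             # sparse impulse response
y = A @ h + 0.01 * rng.standard_normal(128)   # noisy received signal
h_hat = omp(A, y, n_nonzero=3)
print(np.flatnonzero(np.abs(h_hat) > 0.1))    # indices of the recovered taps
```

Each iteration adds the column that best explains the remaining residual, then re-fits all selected taps jointly, which is what makes the method robust at moderate noise levels.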

  4. Database Replication

    CERN Document Server

    Kemme, Bettina

    2010-01-01

Database replication is widely used for fault-tolerance, scalability and performance. The failure of one database replica does not stop the system from working, as available replicas can take over the tasks of the failed replica. Scalability can be achieved by distributing the load across all replicas, and adding new replicas should the load increase. Finally, database replication can provide fast local access to geographically distributed clients if data copies are located close to them. Despite its advantages, replication is not a straightforward technique to apply, and

  5. Probabilistic Databases

    CERN Document Server

    Suciu, Dan; Koch, Christoph

    2011-01-01

    Probabilistic databases are databases where the value of some attributes or the presence of some records are uncertain and known only with some probability. Applications in many areas such as information extraction, RFID and scientific data management, data cleaning, data integration, and financial risk assessment produce large volumes of uncertain data, which are best modeled and processed by a probabilistic database. This book presents the state of the art in representation formalisms and query processing techniques for probabilistic data. It starts by discussing the basic principles for rep
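A minimal illustration of the tuple-independent model that underlies much of this query processing: each tuple exists with an independent marginal probability, and a Boolean query is answered with a probability rather than a yes/no. The relation and its values are invented for illustration.

```python
from math import prod

# A tuple-independent probabilistic relation: each row exists independently
# with the given marginal probability (schema and values are illustrative,
# e.g. uncertain facts produced by information extraction).
sightings = [
    {"entity": "gene_X", "location": "A", "p": 0.9},
    {"entity": "gene_X", "location": "B", "p": 0.5},
    {"entity": "gene_Y", "location": "A", "p": 0.7},
]

def prob_exists(rows, predicate):
    """P(at least one qualifying tuple exists) under tuple independence."""
    return 1 - prod(1 - r["p"] for r in rows if predicate(r))

p = prob_exists(sightings, lambda r: r["entity"] == "gene_X")
print(round(p, 3))  # 1 - (1 - 0.9)(1 - 0.5) = 0.95
```

The existential query multiplies the probabilities that every matching tuple is absent; more complex queries require the lineage-based techniques the book surveys.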

  6. Using volume holograms to search digital databases

    Science.gov (United States)

    Burr, Geoffrey W.; Maltezos, George; Grawert, Felix; Kobras, Sebastian; Hanssen, Holger; Coufal, Hans J.

    2002-01-01

    Holographic data storage offers the potential for simultaneous search of an entire database by performing multiple optical correlations between stored data pages and a search argument. This content-addressable retrieval produces one analog correlation score for each stored volume hologram. We have previously developed fuzzy encoding techniques for this fast parallel search, and holographically searched a small database with high fidelity. We recently showed that such systems can be configured to produce true inner-products, and proposed an architecture in which massively-parallel searches could be implemented. However, the speed advantage over conventional electronic search provided by parallelism brings with it the possibility of erroneous search results, since these analog correlation scores are subject to various noise sources. We show that the fidelity of such an optical search depends not only on the usual holographic storage signal-to-noise factors (such as readout power, diffraction efficiency, and readout speed), but also on the particular database query being made. In effect, the presence of non-matching database records with nearly the same correlation score as the targeted matching records reduces the speed advantage of the parallel search. Thus for any given fidelity target, the performance improvement offered by a content-addressable holographic storage can vary from query to query even within the same database.

  7. FishTraits Database

    Science.gov (United States)

    Angermeier, Paul L.; Frimpong, Emmanuel A.

    2009-01-01

    The need for integrated and widely accessible sources of species traits data to facilitate studies of ecology, conservation, and management has motivated development of traits databases for various taxa. In spite of the increasing number of traits-based analyses of freshwater fishes in the United States, no consolidated database of traits of this group exists publicly, and much useful information on these species is documented only in obscure sources. The largely inaccessible and unconsolidated traits information makes large-scale analysis involving many fishes and/or traits particularly challenging. FishTraits is a database of >100 traits for 809 (731 native and 78 exotic) fish species found in freshwaters of the conterminous United States, including 37 native families and 145 native genera. The database contains information on four major categories of traits: (1) trophic ecology, (2) body size and reproductive ecology (life history), (3) habitat associations, and (4) salinity and temperature tolerances. Information on geographic distribution and conservation status is also included. Together, we refer to the traits, distribution, and conservation status information as attributes. Descriptions of attributes are available here. Many sources were consulted to compile attributes, including state and regional species accounts and other databases.

  8. The CARLSBAD database: a confederated database of chemical bioactivities.

    Science.gov (United States)

    Mathias, Stephen L; Hines-Kay, Jarrett; Yang, Jeremy J; Zahoransky-Kohalmi, Gergely; Bologa, Cristian G; Ursu, Oleg; Oprea, Tudor I

    2013-01-01

Many bioactivity databases offer information regarding the biological activity of small molecules on protein targets. Information in these databases is often hard to resolve with certainty because the data are subset and formatted differently, different bioactivity metrics are used, chemicals and proteins carry different identifiers, and each resource must be accessed through its own query interface. Given the multitude of data sources, interfaces and standards, it is challenging to gather relevant facts and make appropriate connections and decisions regarding chemical-protein associations. The CARLSBAD database has been developed as an integrated resource, focused on high-quality subsets from several bioactivity databases, which are aggregated and presented in a uniform manner, suitable for the study of the relationships between small molecules and targets. In contrast to data collection resources, CARLSBAD provides a single normalized activity value of a given type for each unique chemical-protein target pair. Two types of scaffold perception methods have been implemented and are available for data mining: HierS (hierarchical scaffolds) and MCES (maximum common edge subgraph). The 2012 release of CARLSBAD contains 439 985 unique chemical structures, mapped onto 1 420 889 unique bioactivities, and annotated with 277 140 HierS scaffolds and 54 135 MCES chemical patterns, respectively. Of the 890 323 unique structure-target pairs curated in CARLSBAD, 13.95% are aggregated from multiple structure-target values: 94 975 are aggregated from two bioactivities, 14 544 from three, 7 930 from four and 2 214 from five bioactivities, respectively. CARLSBAD captures bioactivities and tags for 1 435 unique chemical structures of active pharmaceutical ingredients (i.e. 'drugs'). CARLSBAD processing resulted in a net 17.3% data reduction for chemicals, 34.3% reduction for bioactivities, 23% reduction for HierS and 25% reduction for MCES, respectively. The CARLSBAD database
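The aggregation step CARLSBAD performs, collapsing multiple structure-target measurements into a single normalized activity value per pair, can be sketched as follows. The records, identifiers and the choice of the median as the aggregate are hypothetical, not CARLSBAD's actual normalization rule.

```python
from collections import defaultdict
from statistics import median

# Hypothetical bioactivity records pooled from several source databases:
# multiple measurements for the same (chemical, target) pair are collapsed
# to one value of a given type, here the median pIC50.
records = [
    ("CHEM1", "TARGET_A", 6.1),
    ("CHEM1", "TARGET_A", 6.5),
    ("CHEM1", "TARGET_A", 7.0),
    ("CHEM2", "TARGET_A", 5.2),
    ("CHEM2", "TARGET_B", 8.3),
]

by_pair = defaultdict(list)
for chem, target, value in records:
    by_pair[(chem, target)].append(value)

aggregated = {pair: median(vals) for pair, vals in by_pair.items()}
print(aggregated[("CHEM1", "TARGET_A")])  # 6.5
```

Grouping before aggregating is also what produces the data-reduction figures the abstract reports: five raw records collapse to three unique pairs here.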

  9. Federated Spatial Databases and Interoperability

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

It is a period of information explosion. Especially in spatial information science, information can be acquired in many ways, such as by man-made satellites, aeroplanes, lasers, digital photogrammetry and so on. Spatial data sources are usually distributed and heterogeneous. A federated database is the best resolution for the sharing and interoperation of spatial databases. In this paper, the concepts of federated databases and interoperability are introduced. Three heterogeneous kinds of spatial data, vector, image and DEM, are used to create an integrated database. A data model of federated spatial databases is given

  10. Dealer Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The dealer reporting databases contain the primary data reported by federally permitted seafood dealers in the northeast. Electronic reporting was implemented May 1,...

  11. RDD Databases

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This database was established to oversee documents issued in support of fishery research activities including experimental fishing permits (EFP), letters of...

  12. Sources

    OpenAIRE

    2015-01-01

MANUSCRIPT SOURCES Archives nationales Taille tax rolls 1768/71 Z1G-344/18 Aulnay Z1G-343a/02 Gennevilliers Z1G-340/01 Ivry Z1G-340/05 Orly Z1G-334c/09 Saint-Remy-lès-Chevreuse Z1G-344/18 Sevran Z1G-340/05 Thiais 1779/80 Z1G-391a/18 Aulnay Z1G-380/02 Gennevilliers Z1G-385/01 Ivry Z1G-387b/05 Orly Z1G-388a/09 Saint-Remy-lès-Chevreuse Z1G-391a/18 Sevran Z1G-387b/05 Thiais 1788/89 Z1G-451/18 Aulnay Z1G-452/21 Chennevières Z1G-443b/02 Gennevilliers Z1G-440a/01 Ivry Z1G-452/17 Noiseau Z1G-445b/05 ...

  13. National database

    DEFF Research Database (Denmark)

    Kristensen, Helen Grundtvig; Stjernø, Henrik

    1995-01-01

Article about the national database for nursing research established at the Danish Institute for Health and Nursing Research. The aim of the database is to gather knowledge about research and development activities within nursing.

  14. Slow invasion of a fluid from multiple inlet sources in a thin porous layer: influence of trapping and wettability.

    Science.gov (United States)

    Ceballos, L; Prat, M

    2013-04-01

We study numerically the process of quasistatic invasion of a fluid in thin porous layers from multiple inlet injection sources, focusing on the effect of trapping or mixed wettability, that is, when hydrophobic and hydrophilic pores coexist in the system. Two flow scenarios are considered. In the first one, referred to as the sequential scenario, the injection bonds at the inlet are activated one after the other. In the second one, referred to as the kinetic scenario, the injection bonds at the inlet are activated simultaneously. In contrast with the case of purely hydrophobic systems with no trapping, studied in a previous work, it is shown that the invasion pattern and the breakthrough point statistics at the end of the displacement depend on the flow scenario when trapping or mixed wettability effects are taken into account. The transport properties of the defending phase are also studied, and it is shown that a one-to-one relationship between the overall diffusive conductance and the mean saturation cannot be expected in a thin system. In contrast with thick systems, the diffusive conductance also depends on the thickness when the system is thin. After consideration of various generic aspects characterizing thin porous systems, the main results are briefly discussed in relation to the water management problem in proton exchange membrane fuel cells.

  15. Production and application of a novel bioflocculant by multiple-microorganism consortia using brewery wastewater as carbon source

    Institute of Scientific and Technical Information of China (English)

    ZHANG Zhi-qiang; LIN Bo; XIA Si-qing; WANG Xue-jiang; YANG A-ming

    2007-01-01

The flocculating activity of a novel bioflocculant MMF1 produced by the multiple-microorganism consortia MM1 was investigated. MM1 was composed of strain BAFRT4, identified as Staphylococcus sp., and strain CYGS1, identified as Pseudomonas sp. The flocculating activity of MMF1 isolated from the screening medium was 82.9%, which is remarkably higher than that of the bioflocculant produced by either of the strains under the same conditions. Brewery wastewater was also used as the carbon source for MM1, and the cost-effective production medium for MM1 mainly comprised 1.0 L brewery wastewater (chemical oxygen demand (COD) 5000 mg/L), 0.5 g/L urea, 0.5 g/L yeast extract, and 0.2 g/L (NH4)2SO4. The optimal conditions for the production of MMF1 were inoculum size 2%, initial pH 6.0, cultivating temperature 30°C, and shaking speed 160 r/min, under which the flocculating activity of MMF1 reached 96.8%. Fifteen grams of purified bioflocculant could be recovered from 1.0 L of fermentation broth. MMF1 was identified as a macromolecular substance containing both protein and polysaccharide. It showed good flocculating performance in treating indigotin printing and dyeing wastewater, and the maximal removal efficiencies of COD and chroma were 79.2% and 86.5%, respectively.

  16. Discovering perturbation of modular structure in HIV progression by integrating multiple data sources through non-negative matrix factorization.

    Science.gov (United States)

    Ray, Sumanta; Maulik, Ujjwal

    2016-12-20

Detecting perturbation in modular structure during HIV-1 disease progression is an important step toward understanding the stage-specific infection pattern of the HIV-1 virus in human cells. In this article, we propose a novel methodology that integrates multiple types of biological information to identify such disruptions in human gene modules during different stages of HIV-1 infection. We integrate three types of biological information, gene expression, protein-protein interaction (PPI) and gene ontology (GO) information, into single gene meta-modules through non-negative matrix factorization (NMF). As the identified meta-modules inherit this information, detecting their perturbation reflects the changes in expression pattern, in PPI structure and in functional similarity of genes over the course of infection. NMF-based clustering is utilized to integrate modules from different data sources into strong meta-modules. Perturbation in the meta-modular structure is identified by investigating topological and intramodular properties and ranking the meta-modules using a rank aggregation algorithm. We have also analyzed the preservation structure of significant GO terms in which the human proteins of the meta-modules participate. Moreover, we have performed an analysis showing the change in the coregulation pattern of identified transcription factors (TFs) over the HIV progression stages.
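A minimal sketch of NMF with multiplicative updates, the factorization used above to fuse several non-negative data matrices into meta-modules. The toy matrix, rank and iteration count are assumptions; the article's actual integration pipeline is more involved.

```python
import numpy as np

def nmf(V, k, iters=200, seed=0):
    """Basic multiplicative-update NMF: V ≈ W @ H with non-negative factors."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k)) + 1e-3
    H = rng.random((k, m)) + 1e-3
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)   # update H, keep W fixed
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)   # update W, keep H fixed
    return W, H

# toy integrated matrix: rows = genes, columns = stacked non-negative
# features from several data sources (expression, PPI, GO similarity)
rng = np.random.default_rng(1)
V = np.vstack([rng.random((5, 8)) + [3, 0, 0, 3, 0, 0, 3, 0],
               rng.random((5, 8)) + [0, 3, 3, 0, 3, 3, 0, 3]])
W, H = nmf(V, k=2)
# assign each gene to the meta-module with the largest factor loading
modules = W.argmax(axis=1)
print(modules)
```

The row loadings in W act as soft cluster memberships, which is why NMF doubles as a clustering method when modules from different sources are merged.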

  17. Automated classification of seismic sources in large database using random forest algorithm: First results at Piton de la Fournaise volcano (La Réunion).

    Science.gov (United States)

    Hibert, Clément; Provost, Floriane; Malet, Jean-Philippe; Stumpf, André; Maggi, Alessia; Ferrazzini, Valérie

    2016-04-01

    In the past decades the increasing quality of seismic sensors and capability to transfer remotely large quantity of data led to a fast densification of local, regional and global seismic networks for near real-time monitoring. This technological advance permits the use of seismology to document geological and natural/anthropogenic processes (volcanoes, ice-calving, landslides, snow and rock avalanches, geothermal fields), but also led to an ever-growing quantity of seismic data. This wealth of seismic data makes the construction of complete seismicity catalogs, that include earthquakes but also other sources of seismic waves, more challenging and very time-consuming as this critical pre-processing stage is classically done by human operators. To overcome this issue, the development of automatic methods for the processing of continuous seismic data appears to be a necessity. The classification algorithm should satisfy the need of a method that is robust, precise and versatile enough to be deployed to monitor the seismicity in very different contexts. We propose a multi-class detection method based on the random forests algorithm to automatically classify the source of seismic signals. Random forests is a supervised machine learning technique that is based on the computation of a large number of decision trees. The multiple decision trees are constructed from training sets including each of the target classes. In the case of seismic signals, these attributes may encompass spectral features but also waveform characteristics, multi-stations observations and other relevant information. The Random Forests classifier is used because it provides state-of-the-art performance when compared with other machine learning techniques (e.g. SVM, Neural Networks) and requires no fine tuning. Furthermore it is relatively fast, robust, easy to parallelize, and inherently suitable for multi-class problems. 
In this work, we present the first results of the classification method applied
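The idea of bagging many randomized weak classifiers and voting, which underlies the random forests approach described above, can be sketched with decision stumps on two synthetic waveform attributes. The features, class structure and ensemble size are invented; a production classifier would use full decision trees and many more attributes.

```python
import numpy as np

rng = np.random.default_rng(42)

# synthetic two-class "catalog": each event described by two waveform
# attributes (say, dominant frequency and signal duration) -- illustrative
n = 200
quakes = np.column_stack([rng.normal(5, 1, n), rng.normal(20, 5, n)])
rockfalls = np.column_stack([rng.normal(12, 1, n), rng.normal(5, 2, n)])
X = np.vstack([quakes, rockfalls])
y = np.array([0] * n + [1] * n)

def fit_stump(X, y, rng):
    """Randomized decision stump: random feature, best threshold by accuracy."""
    f = int(rng.integers(X.shape[1]))
    best = None
    for t in np.quantile(X[:, f], np.linspace(0.1, 0.9, 9)):
        pred = (X[:, f] > t).astype(int)
        for flip in (pred, 1 - pred):
            acc = (flip == y).mean()
            if best is None or acc > best[0]:
                best = (acc, f, t, flip is not pred)
    return best[1:]

def forest_predict(stumps, X):
    votes = []
    for f, t, flipped in stumps:
        pred = (X[:, f] > t).astype(int)
        votes.append(1 - pred if flipped else pred)
    return (np.mean(votes, axis=0) > 0.5).astype(int)

# bootstrap-aggregated ensemble of randomized stumps (a toy random forest)
stumps = []
for _ in range(25):
    idx = rng.integers(0, len(X), len(X))   # bootstrap resample
    stumps.append(fit_stump(X[idx], y[idx], rng))
acc = (forest_predict(stumps, X) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

Bootstrapping the training set and randomizing the feature choice per learner are the two ingredients that make the real algorithm robust without fine tuning.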

  18. Household trends in access to improved water sources and sanitation facilities in Vietnam and associated factors: findings from the Multiple Indicator Cluster Surveys, 2000–2011

    OpenAIRE

    Tuyet-Hanh, Tran Thi; Lee, Jong-Koo; Oh, Juhwan; Van Minh, Hoang; Lee, Chul Ou; Hoan, Le Thi; Nam, You-Seon; Long, Tran Khanh

    2016-01-01

    Background: Despite progress made by the Millennium Development Goal (MDG) number 7.C, Vietnam still faces challenges with regard to the provision of access to safe drinking water and basic sanitation.Objective: This paper describes household trends in access to improved water sources and sanitation facilities separately, and analyses factors associated with access to improved water sources and sanitation facilities in combination.Design: Secondary data from the Vietnam Multiple Indicator Clu...

  19. Database for West Africa

    African Journals Online (AJOL)

Such a database can prove an invaluable source of information for a wide range of agricultural and ... national soil classification systems around the world ... West African Journal of Applied Ecology, vol. .... SDB FAO-ISRIC English, French, Spanish Morphology and analytical ..... Furthermore, it will enhance the state of soil.

  20. Effectiveness of Partition and Graph Theoretic Clustering Algorithms for Multiple Source Partial Discharge Pattern Classification Using Probabilistic Neural Network and Its Adaptive Version: A Critique Based on Experimental Studies

    Directory of Open Access Journals (Sweden)

    S. Venkatesh

    2012-01-01

Full Text Available Partial discharge (PD) is a major cause of failure of power apparatus, and hence its measurement and analysis have emerged as a vital field in assessing the condition of the insulation system. Several efforts have been undertaken by researchers to classify PD pulses utilizing artificial intelligence techniques. Recently, the focus has shifted to the identification of multiple sources of PD, since this is often encountered in real-time measurements. Studies have indicated that classification of multi-source PD becomes difficult with the degree of overlap, and that several techniques such as mixed Weibull functions, neural networks, and wavelet transformation have been attempted with limited success. Since digital PD acquisition systems record data for a substantial period, the database becomes large, posing considerable difficulties during classification. This research work aims firstly at analyzing aspects concerning classification capability during the discrimination of multisource PD patterns. Secondly, it attempts to extend the previous work of the authors in utilizing the novel approach of probabilistic neural network versions from classifying moderate sets of PD sources to large sets. The third focus is on comparing the ability of partition-based algorithms, namely the labelled (learning vector quantization) and unlabelled (K-means) versions, with that of a novel hypergraph-based clustering method in providing parsimonious sets of centers during classification.
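The unlabelled K-means partitioning mentioned above (Lloyd's algorithm) can be sketched in a few lines; the two-dimensional PD feature space and the cluster positions are invented for illustration.

```python
import numpy as np

def kmeans(X, centers, iters=50):
    """Plain Lloyd's K-means from given initial centers."""
    for _ in range(iters):
        # assign each point to its nearest center, then recompute the means
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0)
                            for j in range(len(centers))])
    return labels, centers

# two synthetic clusters in a 2-D feature space (e.g. two PD sources)
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(3, 0.3, (30, 2))])
# simple farthest-point initialization for two centers
init = np.array([X[0], X[np.argmax(((X - X[0]) ** 2).sum(1))]])
labels, centers = kmeans(X, init)
print(sorted(np.round(centers[:, 0]).astype(int).tolist()))
```

The returned centers are the "parsimonious sets of centers" such partition algorithms feed to a downstream classifier like the probabilistic neural network.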

  1. Text-Based Argumentation with Multiple Sources: A Descriptive Study of Opportunity to Learn in Secondary English Language Arts, History, and Science

    Science.gov (United States)

    Litman, Cindy; Marple, Stacy; Greenleaf, Cynthia; Charney-Sirott, Irisa; Bolz, Michael J.; Richardson, Lisa K.; Hall, Allison H.; George, MariAnne; Goldman, Susan R.

    2017-01-01

    This study presents a descriptive analysis of 71 videotaped lessons taught by 34 highly regarded secondary English language arts, history, and science teachers, collected to inform an intervention focused on evidence-based argumentation from multiple text sources. Studying the practices of highly regarded teachers is valuable for identifying…

  2. Students Working with Multiple Conflicting Documents on a Scientific Issue: Relations between Epistemic Cognition while Reading and Sourcing and Argumentation in Essays

    Science.gov (United States)

    Bråten, Ivar; Ferguson, Leila E.; Strømsø, Helge I.; Anmarkrud, Øistein

    2014-01-01

    Background: There is burgeoning research within educational psychology on both epistemic cognition and multiple-documents literacy, as well as on relationships between the two constructs. Aim: To examine relationships between epistemic cognition concerning the justification of knowledge claims and sourcing and argumentation skills. Sample:…

  4. References for Galaxy Clusters Database

    OpenAIRE

    Kalinkov, M.; Valtchanov, I.; Kuneva, I.

    1998-01-01

A bibliographic database will be constructed with the purpose of serving as a general tool for searching references for galaxy clusters. The structure of the database will be completely different from the currently available databases such as NED, SIMBAD and LEDA. Searches based on a hierarchical keyword system will be performed through web interfaces over numerous bibliographic sources -- journal articles, preprints, unpublished results and papers, theses, scientific reports. Data from the very beginning of the extra...

  5. The Hidden Health and Economic Burden of Rotavirus Gastroenteritis in Malaysia: An Estimation Using Multiple Data Sources.

    Science.gov (United States)

    Loganathan, Tharani; Ng, Chiu-Wan; Lee, Way-Seah; Jit, Mark

    2016-06-01

Rotavirus gastroenteritis (RVGE) results in substantial mortality and morbidity worldwide. However, an accurate estimation of the health and economic burden of RVGE in Malaysia covering public, private and home treatment is lacking. Data from multiple sources were used to estimate diarrheal mortality and morbidity according to health service utilization. The proportion of this burden attributable to rotavirus was estimated from a community-based study and a meta-analysis we conducted of primary hospital-based studies. Rotavirus incidence was determined by multiplying acute gastroenteritis incidence with estimates of the proportion of gastroenteritis attributable to rotavirus. The economic burden of rotavirus disease was estimated from the health systems and societal perspectives. Annually, rotavirus results in 27 deaths, 31,000 hospitalizations, 41,000 outpatient visits and 145,000 episodes of home-treated gastroenteritis in Malaysia. We estimate an annual rotavirus incidence of 1 death per 100,000 children, and 12 hospitalizations, 16 outpatient clinic visits and 57 home-treated episodes per 1000 children under 5 years. Annually, RVGE is estimated to cost US$ 34 million to the healthcare provider and US$ 50 million to society. Productivity loss contributes almost a third of costs to society. Publicly, privately and home-treated episodes account for 52%, 27% and 21% of the total societal costs, respectively. RVGE represents a considerable health and economic burden in Malaysia. Much of the burden lies in privately or home-treated episodes and is poorly captured in previous studies. This study provides vital information for future cost-effectiveness evaluations, which are necessary for policy-making regarding universal vaccination.
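The core incidence calculation, multiplying the all-cause gastroenteritis rate by the rotavirus-attributable fraction, is simple arithmetic; the input values below are invented placeholders chosen to be consistent with the reported outputs, not the study's actual data.

```python
# Hypothetical inputs (the abstract reports outputs, not these raw inputs):
# all-cause acute gastroenteritis hospitalizations per 1000 children under 5,
# and the proportion of those attributable to rotavirus.
age_hosp_per_1000 = 31.0       # assumed all-cause AGE hospitalization rate
rotavirus_fraction = 0.39      # assumed rotavirus-attributable proportion

rv_per_1000 = age_hosp_per_1000 * rotavirus_fraction
children_under_5 = 2_600_000   # assumed population denominator
annual_hospitalizations = rv_per_1000 / 1000 * children_under_5
print(round(rv_per_1000, 1), round(annual_hospitalizations))
```

Scaling the per-1000 rate by an assumed population of 2.6 million children yields an annual count on the order of the 31,000 hospitalizations the abstract reports.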

  6. Contextualized perceptions of movement as a source of expanded insight: People with multiple sclerosis' experience with physiotherapy.

    Science.gov (United States)

    Normann, Britt; Sørgaard, Knut W; Salvesen, Rolf; Moe, Siri

    2013-01-01

Hospital outpatient clinics for people with multiple sclerosis (PwMS) play an important role in health care. Research regarding physiotherapy in such clinics is limited. The purpose was to investigate how PwMS perceive movement during single sessions of physiotherapy in a hospital's outpatient clinic, and what these experiences mean for patients' insight into their movement disturbances. Qualitative research interviews were performed with a purposive sample of 12 PwMS and supplemented with seven videotaped sessions. Content analysis was performed. The results indicate that contextualized perceptions of movement appear to be an essential source for PwMS to gain expanded insight into their individual movement disturbances, regardless of their ambulatory status. The contextualization implies that perceptions of movement are integrated with the physiotherapist's explanations regarding optimizing gait and balance or other activities of daily life. Perceptions of improvement in body part movement and/or functional activities are vital to enhancing patients' understanding of their individual movement disorders, and they may provide expanded insight regarding future possibilities and limitations involving everyday tasks. The implementation of movements, which transforms the perceived improvement into self-assisted exercises, appeared to be meaningful. Contextualized perceptions of improvements in movement may strengthen the person's sense of ownership and sense of agency and thus promote autonomy and self-encouragement. The findings underpin the importance of contextualized perceptions of movement, based on exploration of the potential for change, as an integrated part of information and communication in the health care for PwMS. Further investigations are necessary to deepen our knowledge.

  7. Biological Databases

    Directory of Open Access Journals (Sweden)

    Kaviena Baskaran

    2013-12-01

Full Text Available Biology has entered a new era of distributing information based on databases, and these collections have become primary vehicles for publishing information. This data publishing is done through Internet Gopher, where information resources are offered easily and affordably by powerful research tools. The more important thing now is the development of high-quality and professionally operated electronic data publishing sites. To enhance the service, appropriate editorial policies for electronic data publishing have been established, and the editors of articles shoulder the responsibility.

  8. Quantifying methane emission from fugitive sources by combining tracer release and downwind measurements – A sensitivity analysis based on multiple field surveys

    DEFF Research Database (Denmark)

    Mønster, Jacob; Samuelsson, Jerker; Kjeldsen, Peter

    2014-01-01

Using a dual species methane/acetylene instrument based on cavity ring down spectroscopy (CRDS), the dynamic plume tracer dispersion method for quantifying the emission rate of methane was successfully tested in four measurement campaigns: (1) controlled methane and trace gas release with different... trace gas configurations, (2) landfill with unknown emission source locations, (3) landfill with closely located emission sources, and (4) comparing with a Fourier transform infrared spectroscopy (FTIR) instrument using multiple trace gases for source separation. The new real-time, high precision... instrument can measure methane plumes more than 1.2km away from small sources (about 5kgh−1) in urban areas with a measurement frequency allowing plume crossing at normal driving speed. The method can be used for quantification of total methane emissions from diffuse area sources down to 1kg per hour and can...
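The tracer dilution calculation behind this method scales the known tracer release rate by the ratio of plume-integrated methane to plume-integrated acetylene, converted from a molar to a mass basis. The numbers below are invented for illustration.

```python
# Tracer dilution: the methane emission rate equals the known tracer release
# rate times the ratio of plume-integrated methane to plume-integrated
# acetylene, converted from a molar to a mass basis. All values are assumed.
tracer_release_kg_h = 2.0       # acetylene released at a known rate
ch4_plume_integral = 8.4e-6     # plume-integrated CH4 mole fraction (assumed)
c2h2_plume_integral = 1.2e-6    # plume-integrated C2H2 mole fraction (assumed)

M_CH4, M_C2H2 = 16.04, 26.04    # molar masses, g/mol
emission_kg_h = (tracer_release_kg_h
                 * (ch4_plume_integral / c2h2_plume_integral)
                 * M_CH4 / M_C2H2)
print(round(emission_kg_h, 2))
```

Because both species disperse identically, the ratio of their plume integrals cancels the unknown atmospheric dilution, which is the key idea of the method.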

  9. Core Data of Yeast Interacting Proteins Database (Annotation Updated Version) - Yeast Interacting Proteins Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available nteractions are required. Several sources including YPD (Yeast Proteome Database, Costanzo, M. C., Hogan, J....erse direction. *1 The yeast proteome database (YPD) and Caenorhabditis elegans proteome database (WormPD): comprehensive resources

  10. Numerical databases in marine biology

    Digital Repository Service at National Institute of Oceanography (India)

    Sarupria, J.S.; Bhargava, R.M.S.


  11. Creating the green's response to a virtual source inside a medium using reflection data with internal multiples

    NARCIS (Netherlands)

    Broggini, F.; Snieder, R.; Wapenaar, C.P.A.; Thorbecke, J.W.

    2013-01-01

    Seismic interferometry is a technique that allows one to reconstruct the full wavefield originating from a virtual source inside a medium, assuming a receiver is present at the virtual source location. We discuss a method that creates a virtual source inside a medium from reflection data measured at

  12. Database of Interacting Proteins (DIP)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The DIP database catalogs experimentally determined interactions between proteins. It combines information from a variety of sources to create a single, consistent...

  13. A New On-Land Seismogenic Structure Source Database from the Taiwan Earthquake Model (TEM Project for Seismic Hazard Analysis of Taiwan

    Directory of Open Access Journals (Sweden)

    J. Bruce H. Shyu

    2016-09-01

Full Text Available Taiwan is located at an active plate boundary and prone to earthquake hazards. To evaluate the island's seismic risk, the Taiwan Earthquake Model (TEM) project, supported by the Ministry of Science and Technology, evaluates earthquake hazard, risk, and related social and economic impact models for Taiwan through multidisciplinary collaboration. One of the major tasks of TEM is to construct a complete and updated seismogenic structure database for Taiwan to assess future seismic hazards. Toward this end, we have combined information from pre-existing databases and data obtained from new analyses to build an updated and digitized three-dimensional seismogenic structure map for Taiwan. Thirty-eight on-land active seismogenic structures are identified. For detailed information on individual structures, such as their long-term slip rates and potential recurrence intervals, we collected data from existing publications, as well as calculating from the results of our own field surveys and investigations. We hope this updated database will become a significant constraint for seismic hazard assessment calculations in Taiwan, and will provide important information for engineers and hazard mitigation agencies.

  14. SIMS: addressing the problem of heterogeneity in databases

    Science.gov (United States)

    Arens, Yigal

    1997-02-01

    The heterogeneity of remotely accessible databases -- with respect to contents, query language, semantics, organization, etc. -- presents serious obstacles to convenient querying. The SIMS (single interface to multiple sources) system addresses this global integration problem. It does so by defining a single language for describing the domain about which information is stored in the databases and using this language as the query language. Each database to which SIMS is to provide access is modeled using this language. The model describes a database's contents, organization, and other relevant features. SIMS uses these models, together with a planning system drawing on techniques from artificial intelligence, to decompose a given user's high-level query into a series of queries against the databases and other data manipulation steps. The retrieval plan is constructed so as to minimize data movement over the network and maximize parallelism to increase execution speed. SIMS can recover from network failures during plan execution by obtaining data from alternate sources, when possible. SIMS has been demonstrated in the domains of medical informatics and logistics, using real databases.
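A toy mediator in the spirit of SIMS: a single domain-level query is decomposed into sub-queries against whichever sources model the needed attributes, and the partial results are joined. Source names, schemas and data are all invented.

```python
# Each "source" is described by a model of the attributes it covers, and the
# mediator plans which sources to query to answer a domain-level request.
SOURCES = {
    "hospital_db": {"attrs": {"patient", "diagnosis"},
                    "rows": [{"patient": "p1", "diagnosis": "flu"}]},
    "pharmacy_db": {"attrs": {"patient", "drug"},
                    "rows": [{"patient": "p1", "drug": "oseltamivir"}]},
}

def plan(needed_attrs):
    """Greedily pick sources until the requested attributes are covered."""
    chosen, covered = [], set()
    for name, src in SOURCES.items():
        if src["attrs"] - covered:      # source contributes something new
            chosen.append(name)
            covered |= src["attrs"]
        if needed_attrs <= covered:
            break
    return chosen

def execute(needed_attrs):
    # run the sub-queries and join partial results on the shared key
    result = {}
    for name in plan(needed_attrs):
        for row in SOURCES[name]["rows"]:
            result.setdefault(row["patient"], {}).update(row)
    return result

print(execute({"patient", "diagnosis", "drug"}))
```

A real mediator would also translate the domain query into each source's query language and order the sub-queries to minimize data movement, as the abstract describes.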

  15. Globin gene server: a prototype E-mail database server featuring extensive multiple alignments and data compilation for electronic genetic analysis.

    Science.gov (United States)

    Hardison, R; Chao, K M; Schwartz, S; Stojanovic, N; Ganetsky, M; Miller, W

    1994-05-15

    The sequence of virtually the entire cluster of beta-like globin genes has been determined from several mammals, and many regulatory regions have been analyzed by mutagenesis, functional assays, and nuclear protein binding studies. This very large amount of sequence and functional data needs to be compiled in a readily accessible and usable manner to optimize data analysis, hypothesis testing, and model building. We report a Globin Gene Server that will provide this service in a constantly updated manner when fully implemented. The Server has two principal functions. The first (currently available) provides an annotated multiple alignment of the DNA sequences throughout the gene cluster from representatives of all species analyzed. The second compiles data on functional and protein binding assays throughout the gene cluster. A prototype of this compilation using the aligned 5' flanking region of beta-globin genes from five species shows examples of (1) well-conserved regions that have demonstrated functions, including cases in which the functional data are in apparent conflict, (2) proposed functional regions that are not well conserved, and (3) conserved regions with no currently assigned function. Such an electronic genetic analysis leads to many readily testable hypotheses that were not immediately apparent without the multiple alignment and compilation. The Server is accessible via E-mail on computer networks, and printed results can be obtained by request to the authors. This prototype will be a helpful guide for developing similar tools for many genomic loci.

  16. Database Vs Data Warehouse

    Directory of Open Access Journals (Sweden)

    2007-01-01

    Full Text Available Data warehouse technology includes a set of concepts and methods that offer users useful information for decision making. The necessity to build a data warehouse arises from the necessity to improve the quality of information in the organization. The data, which proceed from different sources and take a variety of forms, both structured and unstructured, are filtered according to business rules and integrated into a single large data collection. Using informatics solutions, managers have understood that data stored in operational systems, including databases, are an informational gold mine that must be exploited. Data warehouses have been developed to answer the increasing demand for complex analysis, which could not be properly achieved with operational databases. The present paper emphasizes some of the criteria that application developers can use in order to choose between a database solution and a data warehouse solution.

  17. Mouse genome database 2016.

    Science.gov (United States)

    Bult, Carol J; Eppig, Janan T; Blake, Judith A; Kadin, James A; Richardson, Joel E

    2016-01-01

    The Mouse Genome Database (MGD; http://www.informatics.jax.org) is the primary community model organism database for the laboratory mouse and serves as the source for key biological reference data related to mouse genes, gene functions, phenotypes and disease models with a strong emphasis on the relationship of these data to human biology and disease. As the cost of genome-scale sequencing continues to decrease and new technologies for genome editing become widely adopted, the laboratory mouse is more important than ever as a model system for understanding the biological significance of human genetic variation and for advancing the basic research needed to support the emergence of genome-guided precision medicine. Recent enhancements to MGD include new graphical summaries of biological annotations for mouse genes, support for mobile access to the database, tools to support the annotation and analysis of sets of genes, and expanded support for comparative biology through the expansion of homology data.

  18. Semi-quantitative evaluation of fecal contamination potential by human and ruminant sources using multiple lines of evidence

    Science.gov (United States)

    Stoeckel, D.M.; Stelzer, E.A.; Stogner, R.W.; Mau, D.P.

    2011-01-01

    Protocols for microbial source tracking of fecal contamination generally are able to identify when a source of contamination is present, but thus far have been unable to evaluate what portion of fecal-indicator bacteria (FIB) came from various sources. A mathematical approach to estimate relative amounts of FIB, such as Escherichia coli, from various sources based on the concentration and distribution of microbial source tracking markers in feces was developed. The approach was tested using dilute fecal suspensions, then applied as part of an analytical suite to a contaminated headwater stream in the Rocky Mountains (Upper Fountain Creek, Colorado). In one single-source fecal suspension, a source that was not present could not be excluded because of incomplete marker specificity; however, human and ruminant sources were detected whenever they were present. In the mixed-feces suspension (pet and human), the minority contributor (human) was detected at a concentration low enough to preclude human contamination as the dominant source of E. coli to the sample. Without the semi-quantitative approach described, simple detections of human-associated marker in stream samples would have provided inaccurate evidence that human contamination was a major source of E. coli to the stream. In samples from Upper Fountain Creek the pattern of E. coli, general and host-associated microbial source tracking markers, nutrients, and wastewater-associated chemical detections, augmented with local observations and land-use patterns, indicated that, contrary to expectations, birds rather than humans or ruminants were the predominant source of fecal contamination to Upper Fountain Creek. This new approach to E. coli allocation, validated by a controlled study and tested by application in a relatively simple setting, represents a widely applicable step forward in the field of microbial source tracking of fecal contamination. © 2011 Elsevier Ltd.
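
The allocation arithmetic behind this abstract can be illustrated with a small hedged sketch. The function and every number below are invented placeholders, not the authors' published values or code:

```python
# Illustrative sketch (not the authors' method verbatim) of the
# semi-quantitative idea: if feces from a host carry a roughly known ratio
# of source-tracking marker to E. coli, then the marker concentration
# measured in a stream bounds the share of the stream's E. coli that the
# host source could have contributed.
def max_source_share(marker_conc, marker_per_ecoli, total_ecoli):
    """Upper bound on the fraction of E. coli attributable to one host source."""
    implied_ecoli = marker_conc / marker_per_ecoli
    return min(implied_ecoli / total_ecoli, 1.0)

# Hypothetical sample: human marker at 200 copies/100 mL, ~50 marker copies
# per E. coli in human feces, and 400 CFU/100 mL E. coli in the stream.
share = max_source_share(200.0, 50.0, 400.0)
print(f"human share <= {share:.0%}")  # -> human share <= 1%
```

A detected human marker can thus coexist with evidence that humans are only a minor E. coli source, which is exactly the scenario the study describes.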

  19. The nature of caregiving in children of a parent with multiple sclerosis from multiple sources and the associations between caregiving activities and youth adjustment over time.

    Science.gov (United States)

    Pakenham, Kenneth I; Cox, Stephen

    2012-01-01

    This study explored youth caregiving for a parent with multiple sclerosis (MS) from multiple perspectives, and examined associations between caregiving and child negative (behavioural/emotional difficulties, somatisation) and positive (life satisfaction, positive affect, prosocial behaviour) adjustment outcomes over time. A total of 88 families participated: 85 parents with MS, 55 partners, and 130 children completed questionnaires at Time 1. Child caregiving was assessed by the Youth Activities of Caregiving Scale (YACS). Child and parent questionnaire data were collected at Time 1, and child data were collected 12 months later (Time 2). Factor analysis of the child and parent YACS data replicated the four factors (instrumental, social-emotional, personal-intimate, and domestic-household care), all of which were psychometrically sound. The YACS factors were related to parental illness and caregiving context variables that reflected increased caregiving demands. The Time 1 instrumental and social-emotional care domains were associated with poorer Time 2 adjustment, whereas personal-intimate care was related to better adjustment and domestic-household care was unrelated to adjustment. Children and their parents exhibited the highest agreement on personal-intimate, instrumental, and total caregiving, and the least on domestic-household and social-emotional care. Findings delineate the key dimensions of young caregiving in MS and the differential links between caregiving activities and youth adjustment.

  20. Advances in knowledge discovery in databases

    CERN Document Server

    Adhikari, Animesh

    2015-01-01

    This book presents recent advances in knowledge discovery in databases (KDD), with a focus on market basket databases, time-stamped databases, and multiple related databases. Various interesting and intelligent algorithms are reported for data mining tasks. A large number of association measures are presented, which play significant roles in decision support applications. The book presents, discusses, and contrasts new developments in mining time-stamped data, time-based data analyses, the identification of temporal patterns, the mining of multiple related databases, and local pattern analysis.

  1. Students' Consideration of Source Information during the Reading of Multiple Texts and Its Effect on Intertextual Conflict Resolution

    Science.gov (United States)

    Kobayashi, Keiichi

    2014-01-01

    This study investigated students' spontaneous use of source information for the resolution of conflicts between texts. One-hundred fifty-four undergraduate students read two conflicting explanations concerning the relationship between blood type and personality under two conditions: either one explanation with a higher credibility source and…

  3. Children's Ability to Distinguish between Memories from Multiple Sources: Implications for the Quality and Accuracy of Eyewitness Statements.

    Science.gov (United States)

    Roberts, Kim P.

    2002-01-01

    Outlines five perspectives addressing alternate aspects of the development of children's source monitoring: source-monitoring theory, fuzzy-trace theory, schema theory, person-based perspective, and mental-state reasoning model. Discusses research areas with relation to forensic developmental psychology: agent identity, prospective processing,…

  4. A Combined Multiple-SLED Broadband Light Source at 1300 nm for High Resolution Optical Coherence Tomography.

    Science.gov (United States)

    Wang, Hui; Jenkins, Michael W; Rollins, Andrew M

    2008-04-01

    We demonstrate a compact, inexpensive, and reliable fiber-coupled light source with broad bandwidth and sufficient power at 1300 nm for high resolution optical coherence tomography (OCT) imaging in real-time applications. By combining four superluminescent diodes (SLEDs) with different central wavelengths, the light source has a bandwidth of 145 nm centered at 1325 nm with over 10 mW of power. OCT images of an excised stage 30 embryonic chick heart acquired with our combined SLED light source (<5 μm axial resolution in tissue) are compared with images obtained with a single SLED source (~10 μm axial resolution in tissue). The high resolution OCT system with the combined SLED light source provides better image quality (smaller speckle noise) and a greater ability to observe fine structures in the embryonic heart.
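
As a rough plausibility check (not a calculation from the paper), the quoted resolution follows from the standard Gaussian-spectrum formula for OCT axial resolution; the tissue refractive index used below is an assumed value:

```python
import math

# OCT axial resolution for a Gaussian source spectrum:
#   dz = (2 ln 2 / pi) * lambda0^2 / (n * dlambda)
def axial_resolution_um(center_nm, bandwidth_nm, n=1.0):
    lam = center_nm * 1e-9
    dlam = bandwidth_nm * 1e-9
    return (2 * math.log(2) / math.pi) * lam ** 2 / (n * dlam) * 1e6

# Combined four-SLED source: 145 nm bandwidth centered at 1325 nm, with an
# assumed tissue index of ~1.38; consistent with the quoted <5 um in tissue.
print(round(axial_resolution_um(1325, 145, n=1.38), 1))  # -> 3.9
```

Narrowing the bandwidth to that of a single SLED roughly quadruples this figure, which is why the combined source resolves finer structure.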

  6. MEG (Magnetoencephalography) multipolar modeling of distributed sources using RAP-MUSIC (Recursively Applied and Projected Multiple Signal Characterization)

    Energy Technology Data Exchange (ETDEWEB)

    Mosher, J. C. (John C.); Baillet, S. (Sylvain); Jerbi, K. (Karim); Leahy, R. M. (Richard M.)

    2001-01-01

    We describe the use of truncated multipolar expansions for producing dynamic images of cortical neural activation from measurements of the magnetoencephalogram. We use a signal-subspace method to find the locations of a set of multipolar sources, each of which represents a region of activity in the cerebral cortex. Our method builds up an estimate of the sources in a recursive manner, i.e. we first search for point current dipoles, then magnetic dipoles, and finally first order multipoles. The dynamic behavior of these sources is then computed using a linear fit to the spatiotemporal data. The final step in the procedure is to map each of the multipolar sources into an equivalent distributed source on the cortical surface. The method is illustrated through an application to epileptic interictal MEG data.

  7. Heat pump air conditioner based on multiple heat sources

    Institute of Scientific and Technical Information of China (English)

    吴国珊; 凌勋

    2012-01-01

    It is proposed that an air-water multiple heat source could serve as the heat source of a heat pump air conditioner. Based on the current state of research, a heat pump air conditioner with an air/domestic-wastewater multiple heat source is preliminarily designed. The working cycle and characteristics of the air conditioner are analyzed using thermodynamic principles. The results show that the refrigeration performance of this heat pump air conditioner is higher than that of an air-source heat pump air conditioner, that heating performance is improved, and that frosting of the outdoor heat exchanger is reduced.

  8. Ionizing radiation sources: very diversified means, multiple applications and a changing regulatory environment. Conference proceedings

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2011-11-15

    This document brings together the available presentations given at the conference organised by the French society of radiation protection on ionizing radiation sources: their means, applications and regulatory environment. Twenty-eight presentations (slides) are compiled in this document and deal with: 1 - Overview of sources - some quantitative data from the national inventory of ionizing radiation sources (Yann Billarand, IRSN); 2 - Overview of sources (Jerome Fradin, ASN); 3 - Regulatory framework (Sylvie Rodde, ASN); 4 - Alternatives to Iridium radiography - the case of pressure devices at the manufacturing stage (Henri Walaszek, Cetim; Bruno Kowalski, Welding Institute); 5 - Dosimetric stakes of medical scanner examinations (Jean-Louis Greffe, Charleroi hospital of Medical University); 6 - The removal of ionic smoke detectors (Bruno Charpentier, ASN); 7 - Joint-activity and reciprocal liabilities - Organisation of labour risk prevention in case of companies joint-activity (Paulo Pinto, DGT); 8 - Consideration of gamma-graphic testing in the organization of unit outage activities (Jean-Gabriel Leonard, EDF); 9 - Radiological risk control at a closed and independent work field (Stephane Sartelet, Areva); 10 - Incidents and accidents status and typology (Pascale Scanff, IRSN); 11 - Regional overview of radiation protection significant events (Philippe Menechal, ASN); 12 - Incident leading to a tritium contamination in an urban area - consequences and experience feedback (Laurence Fusil, CEA); 13 - Experience feedback - loss of sealing of a calibration source (Philippe Mougnard, Areva); 14 - Blocking incident of a ⁶⁰Co source (Bruno Delille, Salvarem); 15 - Triggering of a gantry alarm: status of findings (Philippe Prat, Syctom); 16 - Non-medical electric devices: regulatory changes (Sophie Dagois, IRSN; Jerome Fradin, ASN); 17 - Evaluation of the dose equivalent rate in pulsed fields: method proposed by the IRSN and implementation test (Laurent Donadille

  9. Epidemiologic study of neural tube defects in Los Angeles County. I. Prevalence at birth based on multiple sources of case ascertainment

    Energy Technology Data Exchange (ETDEWEB)

    Sever, L.E. (Pacific Northwest Lab., Richland, WA); Sanders, M.; Monsen, R.

    1982-01-01

    Epidemiologic studies of the neural tube defects (NTDs), anencephalus and spina bifida, have for the most part been based on single sources of case ascertainment in past studies. The present investigation attempts total ascertainment of NTD cases in the newborn population of Los Angeles County residents for the period 1966 to 1972. Design of the study, sources of data, and estimates of prevalence rates based on single and multiple sources of case ascertainment are here discussed. Anencephalus cases totaled 448, spina bifida 442, and encephalocele 72, giving prevalence rates of 0.52, 0.51, and 0.08 per 1000 total births, respectively, for these neural tube defects - rates considered to be low. The Los Angeles County prevalence rates are compared with those of other recent North American studies and support is provided for earlier suggestions of low rates on the West Coast.

  10. Towards P2P XML Database Technology

    NARCIS (Netherlands)

    Y. Zhang (Ying)

    2007-01-01

    textabstractTo ease the development of data-intensive P2P applications, we envision a P2P XML Database Management System (P2P XDBMS) that acts as a database middle-ware, providing a uniform database abstraction on top of a dynamic set of distributed data sources. In this PhD work, we research which

  11. Bisphosphonate adverse effects, lessons from large databases

    DEFF Research Database (Denmark)

    Abrahamsen, Bo

    2010-01-01

    PURPOSE OF REVIEW: To review the latest findings on bisphosphonate safety from health databases, in particular sources that can provide incidence rates for stress fractures, osteonecrosis of the jaw (ONJ), atrial fibrillation and gastrointestinal lesions including esophageal cancer. The main focus ... health databases. However, database studies have limited specificity and sensitivity for atypical fractures and ONJ. Clinical case control studies are recommended.

  12. Pre-coding method and apparatus for multiple source or time-shifted single source data and corresponding inverse post-decoding method and apparatus

    Science.gov (United States)

    Yeh, Pen-Shu (Inventor)

    1998-01-01

    A pre-coding method and device for improving data compression performance by removing correlation between a first original data set and a second original data set, each having M members, respectively. The pre-coding method produces a compression-efficiency-enhancing double-difference data set. The method and device produce a double-difference data set, i.e., an adjacent-delta calculation performed on a cross-delta data set or a cross-delta calculation performed on two adjacent-delta data sets, from either one of (1) two adjacent spectral bands coming from two discrete sources, respectively, or (2) two time-shifted data sets coming from a single source. The resulting double-difference data set is then coded using either a distortionless data encoding scheme (entropy encoding) or a lossy data compression scheme. Also, a post-decoding method and device for recovering a second original data set having been represented by such a double-difference data set.
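
A minimal sketch of the double-difference idea on simple integer sequences; the function names and data below are illustrative, not taken from the patent:

```python
# Pre-coder: take the element-wise difference between two correlated data
# sets (cross-delta), then difference adjacent members of that result
# (adjacent-delta). The resulting double-difference set has small values
# that entropy coders compress well.
def double_difference(a, b):
    cross = [y - x for x, y in zip(a, b)]                       # cross-delta
    return [cross[0]] + [cross[i] - cross[i - 1]                # adjacent-delta
                         for i in range(1, len(cross))]

# Post-decoder: undo the adjacent-delta with a running sum, then undo the
# cross-delta against the first (reference) data set.
def recover_second(a, dd):
    cross, run = [], 0
    for d in dd:
        run += d
        cross.append(run)
    return [x + c for x, c in zip(a, cross)]

band1 = [100, 102, 105, 107]   # e.g. one spectral band
band2 = [103, 106, 108, 111]   # an adjacent, highly correlated band
dd = double_difference(band1, band2)
print(dd)                                   # -> [3, 1, -1, 1]
assert recover_second(band1, dd) == band2   # lossless round trip
```

The two bands could equally be two time-shifted captures from a single source; the correlation, not its origin, is what makes the residuals small.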

  13. Having a lot of a good thing: multiple important group memberships as a source of self-esteem.

    Directory of Open Access Journals (Sweden)

    Jolanda Jetten

    Full Text Available Membership in important social groups can promote a positive identity. We propose and test an identity resource model in which personal self-esteem is boosted by membership in additional important social groups. Belonging to multiple important group memberships predicts personal self-esteem in children (Study 1a), older adults (Study 1b), and former residents of a homeless shelter (Study 1c). Study 2 shows that the effects of multiple important group memberships on personal self-esteem are not reducible to number of interpersonal ties. Studies 3a and 3b provide longitudinal evidence that multiple important group memberships predict personal self-esteem over time. Studies 4 and 5 show that collective self-esteem mediates this effect, suggesting that membership in multiple important groups boosts personal self-esteem because people take pride in, and derive meaning from, important group memberships. Discussion focuses on when and why important group memberships act as a social resource that fuels personal self-esteem.

  15. Online genetic databases informing human genome epidemiology

    Directory of Open Access Journals (Sweden)

    Higgins Julian PT

    2007-07-01

    Full Text Available Abstract Background With the advent of high throughput genotyping technology and the information available via projects such as the human genome sequencing and the HapMap project, more and more data relevant to the study of genetics and disease risk will be produced. Systematic reviews and meta-analyses of human genome epidemiology studies rely on the ability to identify relevant studies and to obtain suitable data from these studies. A first port of call for most such reviews is a search of MEDLINE. We examined whether this could be usefully supplemented by identifying databases on the World Wide Web that contain genetic epidemiological information. Methods We conducted a systematic search for online databases containing genetic epidemiological information on gene prevalence or gene-disease association. In those containing information on genetic association studies, we examined what additional information could be obtained to supplement a MEDLINE literature search. Results We identified 111 databases containing prevalence data, 67 databases specific to a single gene and only 13 that contained information on gene-disease associations. Most of the latter 13 databases were linked to MEDLINE, although five contained information that may not be available from other sources. Conclusion There is no single resource of structured data from genetic association studies covering multiple diseases, and in relation to the number of studies being conducted there is very little information specific to gene-disease association studies currently available on the World Wide Web. Until comprehensive data repositories are created and utilized regularly, new data will remain largely inaccessible to many systematic review authors and meta-analysts.

  16. Testing for multiple invasion routes and source populations for the invasive brown treesnake (Boiga irregularis) on Guam: implications for pest management

    Science.gov (United States)

    Richmond, Jonathan Q.; Wood, Dustin A.; Stanford, James W.; Fisher, Robert N.

    2014-01-01

    The brown treesnake (Boiga irregularis) population on the Pacific island of Guam has reached iconic status as one of the most destructive invasive species of modern times, yet no published works have used genetic data to identify a source population. We used DNA sequence data from multiple genetic markers and coalescent-based phylogenetic methods to place the Guam population within the broader phylogeographic context of B. irregularis across its native range and tested whether patterns of genetic variation on the island are consistent with one or multiple introductions from different source populations. We also modeled a series of demographic scenarios that differed in the effective size and duration of a population bottleneck immediately following the invasion on Guam, and measured the fit of these simulations to the observed data using approximate Bayesian computation. Our results exclude the possibility of serial introductions from different source populations, and instead verify a single origin from the Admiralty Archipelago off the north coast of Papua New Guinea. This finding is consistent with the hypothesis that B. irregularis was accidentally transported to Guam during military relocation efforts at the end of World War II. Demographic model comparisons suggest that multiple snakes were transported to Guam from the source locality, but that fewer than 10 individuals could be responsible for establishing the population. Our results also provide evidence that low genetic diversity stemming from the founder event has not been a hindrance to the ecological success of B. irregularis on Guam, and at the same time offers a unique ‘genetic opening’ to manage snake density using classical biological approaches.

  17. Database of groundwater levels and hydrograph descriptions for the Nevada Test Site area, Nye County, Nevada

    Science.gov (United States)

    Elliott, Peggy E.; Fenelon, Joseph M.

    2010-01-01

    A database containing water levels measured from wells in and near areas of underground nuclear testing at the Nevada Test Site was developed. The water-level measurements were collected from 1941 to 2016. The database provides information for each well including well construction, borehole lithology, units contributing water to the well, and general site remarks. Water-level information provided in the database includes measurement source, status, method, accuracy, and specific water-level remarks. Additionally, the database provides hydrograph narratives that document the water-level history and describe and interpret the water-level hydrograph for each well.Water levels in the database were quality assured and analyzed. Multiple conditions were assigned to each water-level measurement to describe the hydrologic conditions at the time of measurement. General quality, temporal variability, regional significance, and hydrologic conditions are attributed to each water-level measurement.

  18. The RIKEN integrated database of mammals.

    Science.gov (United States)

    Masuya, Hiroshi; Makita, Yuko; Kobayashi, Norio; Nishikata, Koro; Yoshida, Yuko; Mochizuki, Yoshiki; Doi, Koji; Takatsuki, Terue; Waki, Kazunori; Tanaka, Nobuhiko; Ishii, Manabu; Matsushima, Akihiro; Takahashi, Satoshi; Hijikata, Atsushi; Kozaki, Kouji; Furuichi, Teiichi; Kawaji, Hideya; Wakana, Shigeharu; Nakamura, Yukio; Yoshiki, Atsushi; Murata, Takehide; Fukami-Kobayashi, Kaoru; Mohan, Sujatha; Ohara, Osamu; Hayashizaki, Yoshihide; Mizoguchi, Riichiro; Obata, Yuichi; Toyoda, Tetsuro

    2011-01-01

    The RIKEN integrated database of mammals (http://scinets.org/db/mammal) is the official undertaking to integrate its mammalian databases produced from multiple large-scale programs that have been promoted by the institute. The database integrates not only RIKEN's original databases, such as FANTOM, the ENU mutagenesis program, the RIKEN Cerebellar Development Transcriptome Database and the Bioresource Database, but also imported data from public databases, such as Ensembl, MGI and biomedical ontologies. Our integrated database has been implemented on the infrastructure of publication medium for databases, termed SciNetS/SciNeS, or the Scientists' Networking System, where the data and metadata are structured as a semantic web and are downloadable in various standardized formats. The top-level ontology-based implementation of mammal-related data directly integrates the representative knowledge and individual data records in existing databases to ensure advanced cross-database searches and reduced unevenness of the data management operations. Through the development of this database, we propose a novel methodology for the development of standardized comprehensive management of heterogeneous data sets in multiple databases to improve the sustainability, accessibility, utility and publicity of the data of biomedical information.

  19. Multiple-source tracking: Investigating sources of pathogens, nutrients, and sediment in the Upper Little River Basin, Kentucky, water years 2013–14

    Science.gov (United States)

    Crain, Angela S.; Cherry, Mac A.; Williamson, Tanja N.; Bunch, Aubrey R.

    2017-09-20

    The South Fork Little River (SFLR) and the North Fork Little River (NFLR) are two major headwater tributaries that flow into the Little River just south of Hopkinsville, Kentucky. Both tributaries are included in those water bodies in Kentucky and across the Nation that have been reported with declining water quality. Each tributary has been listed by the Kentucky Energy and Environment Cabinet—Kentucky Division of Water in the 303(d) List of Waters for Kentucky Report to Congress as impaired by nutrients, pathogens, and sediment for contact recreation from point and nonpoint sources since 2002. In 2009, the Kentucky Energy and Environment Cabinet—Kentucky Division of Water developed a pathogen total maximum daily load (TMDL) for the Little River Basin including the SFLR and NFLR Basins. Future nutrient and suspended-sediment TMDLs are planned once nutrient criteria and suspended-sediment protocols have been developed for Kentucky. In this study, different approaches were used to identify potential sources of fecal-indicator bacteria (FIB), nitrate, and suspended sediment; to inform the TMDL process; and to aid in the implementation of effective watershed-management activities. The main focus of source identification was in the SFLR Basin.To begin understanding the potential sources of fecal contamination, samples were collected at 19 sites for densities of FIB (E. coli) in water and fluvial sediment and at 11 sites for Bacteroidales genetic markers (General AllBac, human HF183, ruminant BoBac, canid BacCan, and waterfowl GFD) during the recreational season (May through October) in 2013 and 2014. Results indicated 34 percent of all E. coli water samples (n=227 samples) did not meet the U.S. Environmental Protection Agency 2012 recommended national criteria for primary recreational waters. No criterion currently exists for E. coli in fluvial sediment. By use of the Spearman’s rank correlation test, densities of FIB in fluvial sediments were observed to have a

  20. Characteristics of Atmospheric Heat Sources over Asia in Summer:Comparison of Results Calculated Using Multiple Reanalysis Datasets

    Institute of Scientific and Technical Information of China (English)

    ZHANG Bo; CHEN Longxun; HE Jinhai; ZHU Congwen; LI Wei

    2009-01-01

    Using 1979-2000 daily NCEP/NCAR (National Centers for Environmental Prediction/National Center for Atmospheric Research) reanalysis data (version 1, hereafter referred to as NCEP1; version 2, hereafter referred to as NCEP2), ECMWF (European Centre for Medium-Range Weather Forecasts) reanalysis data (ERA), and the Global Asian Monsoon Experiment (GAME) reanalysis data for summer 1998, the vertically integrated heat source (Q1) in summer is calculated, and the results obtained from the different datasets are compared. The distributions of Q1 calculated using NCEP1 are in good agreement with rainfall observations over the Arabian Sea/Indian Peninsula, the Bay of Bengal (BOB), and East China. The distributions of Q1 revealed by NCEP2 are unrealistic in the southern Indian Peninsula, the BOB, and the South China Sea. Using ERA, the heat sources over tropical Asia are in accordance with the summer precipitation; however, the distributions of Q1 in East China are unreasonable. In the tropical region, the distributions of the summer heat source given by NCEP1 and ERA seem to be more accurate than those revealed by NCEP2. The NCEP1 and NCEP2 data are better for calculating heat sources over the subtropical and eastern regions of mainland China.
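    The vertically integrated heat source referred to above is a mass-weighted column integral over pressure, <Q1> = (1/g) ∫ Q1 dp. A minimal sketch of that integral, using an illustrative heating profile rather than reanalysis data:

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def column_integrate(q1, p_levels):
    """Mass-weighted vertical integral <Q1> = (1/g) * integral of Q1 dp,
    by the trapezoid rule. q1: heating rate per unit mass (W/kg) on
    pressure levels (Pa), ordered from the surface (high p) upward."""
    dp = np.diff(p_levels)            # negative steps (p decreases upward)
    avg = 0.5 * (q1[:-1] + q1[1:])    # trapezoid midpoints
    # Sign flip so that net column heating comes out positive
    return -np.sum(avg * dp) / G

p = np.array([100000.0, 85000.0, 70000.0, 50000.0, 30000.0, 10000.0])  # Pa
q1 = np.array([0.02, 0.05, 0.08, 0.06, 0.03, 0.0])  # W/kg (illustrative)
total = column_integrate(q1, p)  # column heating in W/m^2
```

For real reanalysis fields the same integral is applied at every grid point, giving the <Q1> maps compared in the abstract.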

  1. Improving and Testing Regional Attenuation and Spreading Models Using Well-Constrained Source Terms, Multiple Methods and Datasets

    Science.gov (United States)

    2013-07-03

    and ABKT for event 13117 (right), using Q corrections from fitting source-corrected spectra (labeled MDF), and from tomography by LANL and LLNL (see legends). 2.2. Data Quality Control. Second, data quality directly impacts

  2. On-site passive flux sampler measurement of emission rates of carbonyls and VOCs from multiple indoor sources

    Energy Technology Data Exchange (ETDEWEB)

    Shinohara, Naohide [Research Institute of Science for Safety and Sustainability (RISS), National Institute of Advanced Industrial Science and Technology (AIST), 16-1 Onogawa, Tsukuba City, Ibaraki 305-8569 (Japan); Kai, Yuya; Mizukoshi, Atsushi; Kumagai, Kazukiyo; Okuizumi, Yumiko; Jona, Miki; Yanagisawa, Yukio [Department of Environment Systems, Institute of Environmental Studies, Graduate School of Frontier Sciences, The University of Tokyo, 5-1-5 Kashiwa-no-ha, Kashiwa-shi, Chiba 277-8563 (Japan); Fujii, Minoru [Research Center for Material Cycles and Waste Management, National Institute for Environmental Studies, 16-2 Onogawa, Tsukuba City, Ibaraki 305-8506 (Japan)

    2009-05-15

    In indoor environments with high levels of air pollution, it is desirable to remove major sources of emissions to improve air quality. In order to identify the emission sources that contribute most to the concentrations of indoor air pollutants, we used passive flux samplers (PFSs) to measure emission rates of carbonyl compounds and volatile organic compounds (VOCs) from many of the building materials and furnishings present in a room in a reinforced concrete building in Tokyo, Japan. The emission flux of formaldehyde from a desk was high (125 µg/m²/h), whereas fluxes from a door and flooring were low (21.5 and 16.5 µg/m²/h, respectively). The emission fluxes of toluene from the ceiling and the carpet were high (80.0 and 72.3 µg/m²/h, respectively), whereas that from the flooring was low (9.09 µg/m²/h). The indoor and outdoor concentrations of formaldehyde were 61.5 and 8.64 µg/m³, respectively, and those of toluene were 43.2 and 17.5 µg/m³, respectively. The air exchange rate of the room as measured by the perfluorocarbon tracer (PFT) method was 1.84/h. Taking into consideration the areas of the emission sources, the carpet, ceiling, and walls were identified as the principal emission sources, contributing 24%, 20%, and 22% of the formaldehyde, respectively, and 22%, 27%, and 14% of the toluene, respectively, assuming that the emission rate from every major emission source could be measured. In contrast, the door, the flooring, and the desk contributed little to the indoor levels of formaldehyde (1.0%, 0.54%, and 4.1%, respectively) and toluene (2.2%, 0.31%, and 0.85%, respectively). (author)
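    The apportionment arithmetic described above (emission flux times surface area, normalized over all sources) can be sketched as follows. The fluxes and areas here are hypothetical placeholders, not the paper's measured values; the steady-state concentration line assumes a well-mixed room.

```python
# Hypothetical emission fluxes (ug/m^2/h) and surface areas (m^2)
sources = {
    "carpet":   {"flux": 30.0,  "area": 20.0},
    "ceiling":  {"flux": 25.0,  "area": 20.0},
    "walls":    {"flux": 15.0,  "area": 40.0},
    "desk":     {"flux": 125.0, "area": 1.0},
    "door":     {"flux": 21.5,  "area": 1.6},
    "flooring": {"flux": 16.5,  "area": 2.0},
}

# Emission rate of each source = flux * area (ug/h); share = rate / total
rates = {name: s["flux"] * s["area"] for name, s in sources.items()}
total = sum(rates.values())
shares = {name: 100.0 * r / total for name, r in rates.items()}

# Well-mixed steady state: indoor concentration contributed by the sources
ach, volume = 1.84, 50.0  # air exchange rate (1/h) and room volume (m^3, assumed)
indoor_from_sources = total / (ach * volume)  # ug/m^3
```

Note how a high flux alone (the desk) does not make a dominant source: large-area surfaces with modest fluxes carry most of the total emission rate.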

  3. Use of the Medicare database in epidemiologic and health services research: a valuable source of real-world evidence on the older and disabled populations in the US

    Directory of Open Access Journals (Sweden)

    Mues KE

    2017-05-01

    Full Text Available Katherine E Mues,1 Alexander Liede,1 Jiannong Liu,2 James B Wetmore,2 Rebecca Zaha,1 Brian D Bradbury,1 Allan J Collins,2 David T Gilbertson2 1Center for Observational Research, Amgen Inc., Thousand Oaks and San Francisco, CA, 2Chronic Disease Research Group, Minneapolis, MN, USA Abstract: Medicare is the federal health insurance program for individuals in the US who are aged ≥65 years, select individuals with disabilities aged <65 years, and individuals with end-stage renal disease. The Centers for Medicare and Medicaid Services grants researchers access to Medicare administrative claims databases for epidemiologic and health outcomes research. The data cover beneficiaries’ encounters with the health care system and receipt of therapeutic interventions, including medications, procedures, and services. Medicare data have been used to describe patterns of morbidity and mortality, describe burden of disease, compare effectiveness of pharmacologic therapies, examine cost of care, evaluate the effects of provider practices on the delivery of care and patient outcomes, and explore the health impacts of important Medicare policy changes. Considering that the vast majority of US citizens ≥65 years of age have Medicare insurance, analyses of Medicare data are now essential for understanding the provision of health care among older individuals in the US and are critical for providing real-world evidence to guide decision makers. This review is designed to provide researchers with a summary of Medicare data, including the types of data that are captured, and how they may be used in epidemiologic and health outcomes research. We highlight strengths, limitations, and key considerations when designing a study using Medicare data. Additionally, we illustrate the potential impact that Centers for Medicare and Medicaid Services policy changes may have on data collection, coding, and ultimately on findings derived from the data. Keywords: Medicare

  4. Brain source localization: a new method based on MUltiple SIgnal Classification algorithm and spatial sparsity of the field signal for electroencephalogram measurements.

    Science.gov (United States)

    Vergallo, P; Lay-Ekuakille, A

    2013-08-01

    Brain activity can be recorded by means of EEG (electroencephalogram) electrodes placed on the scalp of the patient. The EEG reflects the activity of groups of neurons located in the head, and a fundamental problem in neurophysiology is the identification of the sources responsible for brain activity, especially when a seizure occurs, in which case localizing it is important. The studies conducted to formalize the relationship between the electromagnetic activity in the head and the recording of the generated external field make it possible to know the pattern of brain activity. The inverse problem, in which the underlying activity must be determined from the field sampled at the different electrodes, is more difficult because it may not have a unique solution, or the search for the solution is hampered by a low spatial resolution that may not allow activities involving sources close to each other to be distinguished. Thus, sources of interest may be obscured or not detected, and a known method for the source localization problem such as MUSIC (MUltiple SIgnal Classification) can fail. Many advanced source localization techniques achieve a better resolution by exploiting sparsity: if the number of sources is small, then the neural power versus location is sparse. In this work a solution based on the spatial sparsity of the field signal is presented and analyzed to improve the MUSIC method. For this purpose, it is necessary to set a priori information about the sparsity of the signal. The problem is formulated and solved using a regularization method such as Tikhonov's, which calculates the solution that best balances two cost functions to be minimized, one related to the fitting of the data, and another concerning the maintenance of the sparsity of the signal. First, the method is tested on simulated EEG signals obtained by solving the forward problem. Relative to the model considered for the head and brain sources, the result obtained allows to
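    The regularized inversion described above can be sketched with a plain L2 (Tikhonov) penalty, which has a closed-form solution. The lead-field matrix and sparse source vector below are synthetic assumptions; the paper's sparsity-maintaining penalty would replace the L2 term, for which no closed form exists.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "lead field": 16 electrodes observing 40 candidate source locations
A = rng.standard_normal((16, 40))

# Sparse ground-truth source vector: only two active sources
x_true = np.zeros(40)
x_true[5], x_true[23] = 2.0, -1.5
b = A @ x_true + 0.01 * rng.standard_normal(16)  # noisy electrode measurements

def tikhonov(A, b, lam):
    """Closed-form Tikhonov solution: argmin ||A x - b||^2 + lam * ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

x_hat = tikhonov(A, b, lam=0.1)
residual = np.linalg.norm(A @ x_hat - b)
```

The trade-off the abstract describes is visible in `lam`: small values favour fitting the data, larger values favour the penalty term.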

  5. ARTI Refrigerant Database

    Energy Technology Data Exchange (ETDEWEB)

    Calm, J.M.

    1992-11-09

    The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air-conditioning and refrigeration equipment. The database identifies sources of specific information on R-32, R-123, R-124, R-125, R-134, R-134a, R-141b, R-142b, R-143a, R-152a, R-245ca, R-290 (propane), R-717 (ammonia), ethers, and others, as well as azeotropic and zeotropic blends of these fluids. It addresses lubricants including alkylbenzene, polyalkylene glycol, ester, and other synthetics as well as mineral oils. It also references documents on compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits. A computerized version is available that includes retrieval software.

  6. Relationship between multiple sources of perceived social support and psychological and academic adjustment in early adolescence: comparisons across gender.

    Science.gov (United States)

    Rueger, Sandra Yu; Malecki, Christine Kerres; Demaray, Michelle Kilpatrick

    2010-01-01

    The current study investigated gender differences in the relationship between sources of perceived support (parent, teacher, classmate, friend, school) and psychological and academic adjustment in a sample of 636 (49% male) middle school students. Longitudinal data were collected at two time points in the same school year. The study provided psychometric support for the Child and Adolescent Social Support Scale (Malecki et al., A working manual on the development of the Child and Adolescent Social Support Scale (2000). Unpublished manuscript, Northern Illinois University, 2003) across gender, and demonstrated gender differences in perceptions of support in early adolescence. In addition, there were significant associations between all sources of support with depressive symptoms, anxiety, self-esteem, and academic adjustment, but fewer significant unique effects of each source. Parental support was a robust unique predictor of adjustment for both boys and girls, and classmates' support was a robust unique predictor for boys. These results illustrate the importance of examining gender differences in the social experience of adolescents with careful attention to measurement and analytic issues.

  7. Optical Identification of Multiple Faint X-ray Sources in the Globular Cluster NGC 6752 Evidence for Numerous Cataclysmic Variables

    CERN Document Server

    Pooley, D; Homer, L; Verbunt, F; Anderson, S F; Gaensler, B M; Margon, B; Miller, J; Fox, D W; Kaspi, V M; Van der Klis, M

    2002-01-01

    We report on the Chandra ACIS-S3 imaging observation of the globular cluster NGC 6752. We detect 6 X-ray sources within the 10.5" core radius and 13 more within the 115" half-mass radius down to a limiting luminosity of Lx ≈ 10^30 erg/s for cluster sources. We reanalyze archival data from the Hubble Space Telescope and the Australia Telescope Compact Array and make 12 optical identifications and one radio identification. Based on X-ray and optical properties of the identifications, we find 10 likely cataclysmic variables (CVs), 1-3 likely RS CVn or BY Dra systems, and 1 or 2 possible background objects. Of the 7 sources for which no optical identifications were made, we expect that ~2-4 are background objects and that the rest are either CVs or some or all of the 5 millisecond pulsars whose radio positions are not yet accurately known. These and other Chandra results on globular clusters indicate that the dozens of CVs per cluster expected by theoretical arguments are finally being found. The findings ...

  8. Development of multiple source data processing for structural analysis at a regional scale. [digital remote sensing in geology

    Science.gov (United States)

    Carrere, Veronique

    1990-01-01

    Various image processing techniques developed for enhancement and extraction of linear features, of interest to the structural geologist, from digital remote sensing, geologic, and gravity data, are presented. These techniques include: (1) automatic detection of linear features and construction of rose diagrams from Landsat MSS data; (2) enhancement of principal structural directions using selective filters on Landsat MSS, Spacelab panchromatic, and HCMM NIR data; (3) directional filtering of Spacelab panchromatic data using Fast Fourier Transform; (4) detection of linear/elongated zones of high thermal gradient from thermal infrared data; and (5) extraction of strong gravimetric gradients from digitized Bouguer anomaly maps. Processing results can be compared to each other through the use of a geocoded database to evaluate the structural importance of each lineament according to its depth: superficial structures in the sedimentary cover, or deeper ones affecting the basement. These image processing techniques were successfully applied to achieve a better understanding of the transition between Provence and the Pyrenees structural blocks, in southeastern France, for an improved structural interpretation of the Mediterranean region.
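    The FFT-based directional filtering in technique (3) can be sketched by masking a wedge of orientations in the Fourier plane; components aligned with the chosen direction are kept, all others suppressed. The image and wedge width below are synthetic assumptions.

```python
import numpy as np

def directional_filter(img, angle_deg, half_width_deg=15.0):
    """Keep Fourier components whose orientation lies within a wedge
    around angle_deg; suppress the rest. Returns the filtered image."""
    ny, nx = img.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    theta = np.degrees(np.arctan2(fy, fx))  # orientation of each frequency
    # Angular distance to the wedge axis, symmetric under 180-degree
    # rotation (the spectrum of a real image is Hermitian)
    d = np.abs((theta - angle_deg + 90.0) % 180.0 - 90.0)
    mask = d <= half_width_deg
    mask[0, 0] = True  # always keep the DC (mean) component
    F = np.fft.fft2(img)
    return np.real(np.fft.ifft2(F * mask))

# Synthetic image: horizontal stripes (vary along y) + vertical stripes (vary along x)
y, x = np.mgrid[0:64, 0:64]
img = np.sin(2 * np.pi * y / 8) + np.sin(2 * np.pi * x / 8)
vertical_only = directional_filter(img, angle_deg=0.0)  # keeps variation along x
```

On real Landsat or Spacelab scenes the wedge would be steered along each principal structural direction in turn, isolating lineaments of that trend.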

  9. Filling Terrorism Gaps: VEOs, Evaluating Databases, and Applying Risk Terrain Modeling to Terrorism

    Energy Technology Data Exchange (ETDEWEB)

    Hagan, Ross F. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2016-08-29

    This paper aims to address three issues: the lack of literature differentiating terrorism and violent extremist organizations (VEOs), the evaluation of terrorism incident databases, and the applicability of Risk Terrain Modeling (RTM) to terrorism. Current open-source literature and publicly available government sources do not differentiate between terrorism and VEOs; furthermore, they fail to define them. To address the lack of a comprehensive comparison of existing terrorism data sources, a matrix comparing a dozen terrorism databases is constructed, providing insight into the array of data available. RTM, a method for spatial risk analysis at a micro level, has some applicability to terrorism research, particularly for studies looking at risk indicators of terrorism. Leveraging attack data from multiple databases, combined with RTM, offers one avenue for closing existing research gaps in terrorism literature.

  10. Wavelet-transform-based power management of hybrid vehicles with multiple on-board energy sources including fuel cell, battery and ultracapacitor

    Science.gov (United States)

    Zhang, Xi; Mi, Chris Chunting; Masrur, Abul; Daniszewski, David

    A wavelet-transform-based strategy is proposed for the power management of hybrid electric vehicles (HEVs) with multiple on-board energy sources and energy storage systems including a battery, a fuel cell, and an ultra-capacitor. The proposed wavelet-transform algorithm is capable of identifying the high-frequency transient and real-time power demand of the HEV, and of allocating power components with different frequency contents to the corresponding sources to achieve an optimal power management control algorithm. By using the wavelet decomposition algorithm, a proper combination can be achieved, with a properly sized ultra-capacitor dealing with the chaotic high-frequency components of the total power demand, while the fuel cell and battery deal with the low- and medium-frequency power demand. Thus the system efficiency can be improved and the life expectancy of the components greatly extended. Simulation and experimental results validated the effectiveness of the wavelet-transform-based power management algorithm.
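    The frequency splitting that underlies such a controller can be sketched with a one-level Haar wavelet transform written in plain numpy. The demand trace and the allocation rule below are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

S = np.sqrt(2.0)

def haar_split(x):
    """One-level Haar DWT: return (approximation, detail) coefficients.
    x must have even length."""
    a = (x[0::2] + x[1::2]) / S
    d = (x[0::2] - x[1::2]) / S
    return a, d

def haar_merge(a, d):
    """Inverse one-level Haar DWT."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / S
    x[1::2] = (a - d) / S
    return x

# Hypothetical power demand (kW): slow ramp plus high-frequency transients
t = np.arange(256)
demand = 20 + 0.05 * t + 5 * np.sin(2 * np.pi * t / 4)

a, d = haar_split(demand)
smooth = haar_merge(a, np.zeros_like(d))     # low frequency -> fuel cell/battery
transient = haar_merge(np.zeros_like(a), d)  # high frequency -> ultra-capacitor
```

Because the transform is perfectly invertible, the two allocated components always sum back to the original demand, so no requested power is lost in the split.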

  11. SIAPEM - Brazilian Software Database for Multiple Sclerosis ...

    African Journals Online (AJOL)

    ... in the multicentre study named "Proyecto Atlántico Sur", which ... steps for the collection of the clinical, epidemiological, and disease-course data of the ... the macro- and micro-environmental factors and the laboratory data.

  12. The combined application of organic sulphur and isotope geochemistry to assess multiple sources of palaeobiochemicals with identical carbon skeletons

    Science.gov (United States)

    Kohnen, M. E.; Schouten, S.; Sinninghe Damste, J. S.; de Leeuw, J. W.; Merrit, D.; Hayes, J. M.

    1992-01-01

    Five immature sediments from a Messinian evaporitic basin, representing one evaporitic cycle, were studied using molecular organic sulphur and isotope geochemistry. It is shown that a specific carbon skeleton which is present in different "modes of occurrence" ("free" hydrocarbon, alkylthiophene, alkylthiolane, alkyldithiane, alkylthiane, and sulphur-bound in macromolecules) may have different biosynthetic precursors which are possibly derived from different biota. It is demonstrated that the mode of occurrence and the carbon isotopic composition of a sedimentary lipid can be used to "reconstruct" its biochemical precursor. This novel approach of recognition of the suite of palaeobiochemicals present during the time of deposition allows for identification of the biological sources with an unprecedented specificity.

  13. Three-dimensional printing of X-ray computed tomography datasets with multiple materials using open-source data processing.

    Science.gov (United States)

    Sander, Ian M; McGoldrick, Matthew T; Helms, My N; Betts, Aislinn; van Avermaete, Anthony; Owers, Elizabeth; Doney, Evan; Liepert, Taimi; Niebur, Glen; Liepert, Douglas; Leevy, W Matthew

    2017-07-01

    Advances in three-dimensional (3D) printing allow for digital files to be turned into a "printed" physical product. For example, complex anatomical models derived from clinical or pre-clinical X-ray computed tomography (CT) data of patients or research specimens can be constructed using various printable materials. Although 3D printing has the potential to advance learning, many academic programs have been slow to adopt its use in the classroom despite increased availability of the equipment and digital databases already established for educational use. Herein, a protocol is reported for the production of an enlarged bone core and an accurate representation of human sinus passages in a 3D printed format using entirely consumer-grade printers and a combination of free software platforms. The comparative resolutions of three surface rendering programs were also determined using the sinus, human body, and human wrist data files to compare the abilities of different software available for surface map generation of biomedical data. The data show that 3D Slicer provided the highest compatibility and surface resolution for anatomical 3D printing. Generated surface maps were then 3D printed via fused deposition modeling (FDM printing). In conclusion, a methodological approach is presented that explains the production of anatomical models using entirely consumer-grade fused deposition modeling machines and a combination of free software platforms. The methods outlined will facilitate the incorporation of 3D printed anatomical models in the classroom. Anat Sci Educ 10: 383-391. © 2017 American Association of Anatomists.
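    Surface maps destined for FDM printing are commonly exchanged as STL files. Below is a minimal sketch of the ASCII STL format written with pure Python, using a single illustrative triangle rather than a CT-derived mesh; real pipelines export thousands of facets from the surface-rendering software.

```python
def write_ascii_stl(path, name, triangles):
    """Write triangles to an ASCII STL file.
    triangles: list of ((v0, v1, v2), normal), each a 3-tuple of floats."""
    with open(path, "w") as f:
        f.write(f"solid {name}\n")
        for (v0, v1, v2), n in triangles:
            f.write(f"  facet normal {n[0]:e} {n[1]:e} {n[2]:e}\n")
            f.write("    outer loop\n")
            for v in (v0, v1, v2):
                f.write(f"      vertex {v[0]:e} {v[1]:e} {v[2]:e}\n")
            f.write("    endloop\n")
            f.write("  endfacet\n")
        f.write(f"endsolid {name}\n")

# A single triangle in the z = 0 plane, with its normal pointing up
tri = (((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)), (0.0, 0.0, 1.0))
write_ascii_stl("model.stl", "demo", [tri])
```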

  14. Development of a Broadly Tunable Mid-Infrared Laser Source for Highly Sensitive Multiple Trace Gas Detection

    Institute of Scientific and Technical Information of China (English)

    Xiao-Juan Cui; Feng-Zhong Dong; Yu-Jun Zhang; Rui-Feng Kan; Yi-Ben Cui; Min Wang; Dong Chen; Jian-Guo Liu; Wen-Qing Liu

    2008-01-01

    A room-temperature, broadly tunable mid-infrared difference-frequency laser source for highly sensitive trace gas detection has been developed recently in our laboratory. The mid-infrared laser system is based on quasi-phase-matched (QPM) difference frequency generation (DFG) in a multi-grating, temperature-controlled periodically poled LiNbO3 (PPLN) crystal and employs two near-infrared diode lasers as pump sources. The mid-infrared coherent radiation generated is tunable from 3.2 μm to 3.7 μm with an output power of about 100 μW. By replacing one of the pump laser heads with one of a different wavelength range, other needed mid-infrared wavelength ranges can readily be covered. The performance of the mid-infrared laser system has been evaluated, and its application to highly sensitive spectroscopic detection of CH4, HCl, CH2O, and NO2 has been carried out. A multi-reflection White cell was used in the experiment, providing ppb-level sensitivity. The DFG laser system features compact size, room-temperature operation, narrow linewidth, and a broad, continuously tunable range, for potential applications in industry and environmental monitoring.
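    Difference frequency generation subtracts the frequencies of the two pumps, so the idler wavelength follows 1/λ_idler = 1/λ_1 - 1/λ_2. A small sketch; the pump wavelengths here are illustrative assumptions, chosen only so that the idler lands in the 3.2-3.7 μm window quoted above.

```python
def dfg_idler_wavelength_nm(lam1_nm, lam2_nm):
    """Idler wavelength from difference frequency generation:
    1/lam_idler = 1/lam1 - 1/lam2, with lam1 < lam2 (both in nm)."""
    return 1.0 / (1.0 / lam1_nm - 1.0 / lam2_nm)

# Illustrative near-infrared pump wavelengths (nm), not the paper's values
idler = dfg_idler_wavelength_nm(1064.0, 1550.0)  # ~3.4 um
```

Tuning either pump by a few nanometres shifts the idler by tens of nanometres in the mid-infrared, which is how a broadly tunable source is obtained from modest diode tuning ranges.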

  15. Mapping the sources of the seismic wave field at Kilauea volcano, Hawaii, using data recorded on multiple seismic antennas

    Science.gov (United States)

    Almendros, J.; Chouet, B.; Dawson, P.; Huber, Caleb G.

    2002-01-01

    Seismic antennas constitute a powerful tool for the analysis of complex wave fields. Well-designed antennas can identify and separate components of a complex wave field based on their distinct propagation properties. The combination of several antennas provides the basis for a more complete understanding of volcanic wave fields, including an estimate of the location of each individual wave-field component identified simultaneously by at least two antennas. We used frequency-slowness analyses of data from three antennas to identify and locate the different components contributing to the wave fields recorded at Kilauea volcano, Hawaii, in February 1997. The wave-field components identified are (1) a sustained background volcanic tremor in the form of body waves generated in a shallow hydrothermal system located below the northeastern edge of the Halemaumau pit crater; (2) surface waves generated along the path between this hydrothermal source and the antennas; (3) back-scattered surface wave energy from a shallow reflector located near the southeastern rim of Kilauea caldera; (4) evidence for diffracted wave components originating at the southeastern edge of Halemaumau; and (5) body waves reflecting the activation of a deeper tremor source between 02 hr 00 min and 16 hr 00 min Hawaii Standard Time on 11 February.
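    The frequency-slowness analysis described above amounts to scanning trial slowness vectors and measuring the delay-and-sum beam power of the antenna for each. A single-frequency sketch with a made-up five-station geometry and slowness (all values illustrative):

```python
import numpy as np

# Station coordinates of a small antenna (km), roughly a cross shape
rx = np.array([0.0, 0.3, -0.3, 0.0, 0.0])
ry = np.array([0.0, 0.0, 0.0, 0.3, -0.3])

f = 2.0                           # analysis frequency (Hz)
true_s = np.array([0.2, -0.1])    # true slowness vector of the plane wave (s/km)

# Complex spectra of the plane wave at frequency f at each station:
# a phase delay of 2*pi*f*(s . r) relative to the array origin
spectra = np.exp(-2j * np.pi * f * (true_s[0] * rx + true_s[1] * ry))

def beam_power(sx, sy):
    """Delay-and-sum beam power at one frequency for trial slowness (sx, sy)."""
    steer = np.exp(2j * np.pi * f * (sx * rx + sy * ry))
    return np.abs(np.sum(spectra * steer)) ** 2

# Grid search over trial slownesses; the peak recovers the true slowness
grid = np.linspace(-0.5, 0.5, 101)
P = np.array([[beam_power(sx, sy) for sx in grid] for sy in grid])
iy, ix = np.unravel_index(np.argmax(P), P.shape)
best = (grid[ix], grid[iy])
```

The back azimuth and apparent velocity follow from the recovered slowness vector; intersecting such estimates from two or more antennas is what locates each wave-field component.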

  16. Database development and management

    CERN Document Server

    Chao, Lee

    2006-01-01

    Introduction to Database Systems: Functions of a Database; Database Management System; Database Components; Database Development Process. Conceptual Design and Data Modeling: Introduction to Database Design Process; Understanding Business Process; Entity-Relationship Data Model; Representing Business Process with Entity-Relationship Model. Table Structure and Normalization: Introduction to Tables; Table Normalization. Transforming Data Models to Relational Databases: DBMS Selection; Transforming Data Models to Relational Databases; Enforcing Constraints; Creating Database for Business Process. Physical Design and Database

  17. Multiple sulphur and lead sources recorded in hydrothermal exhalites associated with the Lemarchant volcanogenic massive sulphide deposit, central Newfoundland, Canada

    Science.gov (United States)

    Lode, Stefanie; Piercey, Stephen J.; Layne, Graham D.; Piercey, Glenn; Cloutier, Jonathan

    2016-04-01

    Metalliferous sedimentary rocks (mudstones, exhalites) associated with the Cambrian precious metal-bearing Lemarchant Zn-Pb-Cu-Au-Ag-Ba volcanogenic massive sulphide (VMS) deposit, Tally Pond volcanic belt, precipitated both before and after VMS mineralization. Sulphur and Pb isotopic studies of sulphides within the Lemarchant exhalites provide insight into the sources of S and Pb in the exhalites as a function of paragenesis and evolution of the deposit and subsequent post-depositional modification. In situ S isotope microanalyses of polymetallic sulphides (euhedral and framboidal pyrite, anhedral chalcopyrite, pyrrhotite, galena and euhedral arsenopyrite) by secondary ion mass spectrometry (SIMS) yielded δ34S values ranging from -38.8 to +14.4 ‰, with an average of ~ -12.8 ‰. The δ34S systematics indicate sulphur was predominantly biogenically derived via microbial/biogenic sulphate reduction of seawater sulphate, microbial sulphide oxidation, and microbial disproportionation of intermediate S compounds. These biogenic processes are coupled and occur within layers of microbial mats consisting of different bacterial/archaeal species, i.e., sulphate reducers, sulphide oxidizers and those that disproportionate sulphur compounds. Inorganic processes or sources (i.e., thermochemical sulphate reduction of seawater sulphate, leached or direct igneous sulphur) also contributed to the S budget in the hydrothermal exhalites and are more pronounced in exhalites that are immediately associated with massive sulphides. Galena Pb isotopic compositions obtained by SIMS microanalysis suggest derivation of Pb from the underlying crustal basement (felsic volcanic rocks of the Sandy Brook Group), whereas less radiogenic Pb was derived from juvenile sources leached from mafic volcanic rocks of the Sandy Brook Group and/or Tally Pond group. This requires that the hydrothermal fluids interacted with juvenile and evolved crust during hydrothermal circulation, which is consistent with the existing

  18. Web portal for dynamic creation and publication of teaching materials in multiple formats from a single source representation

    Science.gov (United States)

    Roganov, E. A.; Roganova, N. A.; Aleksandrov, A. I.; Ukolova, A. V.

    2017-01-01

    We implement a web portal which dynamically creates documents in more than 30 different formats, including html, pdf and docx, from a single original source. This is achieved by using a number of free software tools such as Markdown (a markup language), Pandoc (a document converter), MathJax (a library to display mathematical notation in web browsers), and the framework Ruby on Rails. The portal enables the creation of documents with high-quality visualization of mathematical formulas, is compatible with mobile devices, and allows one to search documents by text or formula fragments. Moreover, it gives professors the ability to develop educational materials with the latest technology, without the assistance of qualified technicians, thus improving the quality of the whole educational process.
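    A portal like the one described might shell out to Pandoc for the per-format conversions. A minimal sketch: the source and output file names are hypothetical, and the commands are only executed when Pandoc is actually installed.

```python
import os
import shutil
import subprocess

# Hypothetical mapping of output formats to file names
FORMATS = {"html": "index.html", "pdf": "notes.pdf", "docx": "notes.docx"}

def pandoc_cmd(src_md, out_path):
    """Build a pandoc invocation converting Markdown to the format implied
    by out_path's extension; --mathjax covers formula display in HTML."""
    return ["pandoc", src_md, "--mathjax", "-o", out_path]

cmds = [pandoc_cmd("lecture.md", out) for out in FORMATS.values()]

# Only invoke pandoc when it is installed and the source file exists
if shutil.which("pandoc") and os.path.exists("lecture.md"):
    for cmd in cmds:
        subprocess.run(cmd, check=True)
```

Pandoc infers the target format from the output extension, which is why one command template serves all 30+ formats the portal offers.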

  19. Open-source tool for automatic import of coded surveying data to multiple vector layers in GIS environment

    Directory of Open Access Journals (Sweden)

    Eva Stopková

    2016-12-01

    Full Text Available This paper deals with a tool that enables import of coded data in a single text file to more than one vector layer (including attribute tables), together with automatic drawing of line and polygon objects and with optional conversion to CAD. The Python script v.in.survey is available as an add-on for the open-source software GRASS GIS (GRASS Development Team). The paper describes a case study based on surveying at the archaeological mission at Tell el-Retaba (Egypt). Advantages of the tool (e.g., significant optimization of surveying work) and its limits (demands on keeping the conventions for coding the points' names) are discussed here as well. Possibilities of future development are suggested (e.g., generalization of the points' name coding or more complex attribute table creation).
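    The coded-point grouping that such a tool performs can be sketched as follows. The line format and the prefix-as-layer convention here are assumptions for illustration, not the add-on's actual specification.

```python
from collections import defaultdict

def parse_coded_points(lines):
    """Group coded survey points into per-layer point lists.
    Each line: '<code><number> <x> <y> <z>'; the alphabetic prefix of the
    point name selects the target layer (a hypothetical convention)."""
    layers = defaultdict(list)
    for line in lines:
        if not line.strip():
            continue
        name, x, y, z = line.split()
        code = name.rstrip("0123456789")  # e.g. 'WALL12' -> layer 'WALL'
        layers[code].append((name, float(x), float(y), float(z)))
    return dict(layers)

# Hypothetical coded surveying data, one point per line
raw = [
    "WALL1 3.10 7.25 0.42",
    "WALL2 3.95 7.30 0.40",
    "PIT1  4.60 2.15 -0.55",
]
layers = parse_coded_points(raw)
```

This is why keeping the naming convention matters: a mistyped prefix silently creates a new layer instead of extending the intended one.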

  20. Multilevel analysis of the influence of patients' and general practitioners' characteristics on patented versus multiple-sourced statin prescribing in France.

    Science.gov (United States)

    Pichetti, Sylvain; Sermet, Catherine; Godman, Brian; Campbell, Stephen M; Gustafsson, Lars L

    2013-06-01

    The French National Health Insurance and the Ministry of Health have introduced multiple reforms in recent years to increase prescribing efficiency. These include guidelines, academic detailing, financial incentives for the prescribing and dispensing of generic drugs, as well as a voluntary pay-for-performance programme. However, the quality and efficiency of prescribing could potentially be enhanced if there were a better understanding of the dynamics of prescribing behaviour in France. To analyse the patient and general practitioner characteristics that influence patented versus multiple-sourced statin prescribing in France, statistical analysis was performed on the statin prescribing habits of 341 general practitioners (GPs) included in the IMS-Health Permanent Survey on Medical Prescription in France, which was conducted between 2009 and 2010 and involved 14,360 patients. Patient characteristics included their age and gender as well as five medical profiles that were constructed from the diagnoses obtained during consultations. These were (1) disorders of lipoprotein metabolism, (2) heart disease, (3) diabetes, (4) complex profiles and (5) profiles based on other diagnoses. Physician characteristics included their age, gender, solo or group practice, weekly workload and payment scheme. Patient age had a statistically significant impact on statin prescribing for patients in profile 1 (disorders of lipoprotein metabolism) and profile 4 (complex profiles), with a greater number of patented statins being prescribed for the youngest patients. For instance, patients older than 76 years with a complex profile were prescribed fewer patented statins than patients aged 68-76 years old with the same medical profile (coefficient: -0.225; p = 0.0008). By contrast, regardless of the patient's age, the medical profile did not affect the probability of prescribing a patented statin, except in young patients with heart disease, who were prescribed a greater number of

  1. ARTI refrigerant database

    Energy Technology Data Exchange (ETDEWEB)

    Calm, J.M.

    1997-02-01

    The Refrigerant Database is an information system on alternative refrigerants, associated lubricants, and their use in air conditioning and refrigeration. It consolidates and facilitates access to property, compatibility, environmental, safety, application and other information. It provides corresponding information on older refrigerants to assist manufacturers and those using alternative refrigerants to make comparisons and determine differences. The underlying purpose is to accelerate phase-out of chemical compounds of environmental concern. The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air-conditioning and refrigeration equipment. The complete documents are not included, though some may be added at a later date. The database identifies sources of specific information on various refrigerants. It addresses lubricants including alkylbenzene, polyalkylene glycol, polyolester, and other synthetics as well as mineral oils. It also references documents addressing compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits. Incomplete citations or abstracts are provided for some documents. They are included to accelerate availability of the information and will be completed or replaced in future updates.

  2. ARTI Refrigerant Database

    Energy Technology Data Exchange (ETDEWEB)

    Cain, J.M. (Calm (James M.), Great Falls, VA (United States))

    1993-04-30

    The Refrigerant Database consolidates and facilitates access to information to assist industry in developing equipment using alternative refrigerants. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern. The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air-conditioning and refrigeration equipment. The complete documents are not included. The database identifies sources of specific information on R-32, R-123, R-124, R-125, R-134, R-134a, R-141b, R-142b, R-143a, R-152a, R-245ca, R-290 (propane), R-717 (ammonia), ethers, and others as well as azeotropic and zeotropic blends of these fluids. It addresses lubricants including alkylbenzene, polyalkylene glycol, ester, and other synthetics as well as mineral oils. It also references documents addressing compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits. Incomplete citations or abstracts are provided for some documents to accelerate availability of the information and will be completed or replaced in future updates.

  4. ARTI refrigerant database

    Energy Technology Data Exchange (ETDEWEB)

    Calm, J.M. [Calm (James M.), Great Falls, VA (United States)

    1998-08-01

    The Refrigerant Database is an information system on alternative refrigerants, associated lubricants, and their use in air conditioning and refrigeration. It consolidates and facilitates access to property, compatibility, environmental, safety, application and other information. It provides corresponding information on older refrigerants to assist manufacturers and those using alternative refrigerants in making comparisons and determining differences. The underlying purpose is to accelerate the phase-out of chemical compounds of environmental concern. The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air-conditioning and refrigeration equipment. The complete documents are not included, though some may be added at a later date. The database identifies sources of specific information on many refrigerants including propane, ammonia, water, carbon dioxide, propylene, ethers, and others as well as azeotropic and zeotropic blends of these fluids. It addresses lubricants including alkylbenzene, polyalkylene glycol, polyolester, and other synthetics as well as mineral oils. It also references documents addressing compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits. Incomplete citations or abstracts are provided for some documents. They are included to accelerate availability of the information and will be completed or replaced in future updates.

  5. High throughput multiple locus variable number of tandem repeat analysis (MLVA) of Staphylococcus aureus from human, animal and food sources.

    Science.gov (United States)

    Sobral, Daniel; Schwarz, Stefan; Bergonier, Dominique; Brisabois, Anne; Feßler, Andrea T; Gilbert, Florence B; Kadlec, Kristina; Lebeau, Benoit; Loisy-Hamon, Fabienne; Treilles, Michaël; Pourcel, Christine; Vergnaud, Gilles

    2012-01-01

    Staphylococcus aureus is a major human pathogen, a relevant pathogen in veterinary medicine, and a major cause of food poisoning. Epidemiological investigation tools are needed to establish surveillance of S. aureus strains in humans, animals and food. In this study, we investigated 145 S. aureus isolates recovered from various animal species, disease conditions, food products and food poisoning events. Multiple Locus Variable Number of Tandem Repeat (VNTR) analysis (MLVA), known to be highly efficient for the genotyping of human S. aureus isolates, was used and shown to be equally well suited for the typing of animal S. aureus isolates. MLVA was improved by using sixteen VNTR loci amplified in two multiplex PCRs and analyzed by capillary electrophoresis, ensuring high throughput and high discriminatory power. The isolates were assigned to twelve known clonal complexes (CCs) and a few singletons. Half of the test collection belonged to four CCs (CC9, CC97, CC133, CC398) previously described as mostly associated with animals. The remaining eight CCs (CC1, CC5, CC8, CC15, CC25, CC30, CC45, CC51), representing 46% of the animal isolates, are common in humans. Interestingly, isolates responsible for food poisoning show a CC distribution signature typical of human isolates and strikingly different from animal isolates, suggesting a predominantly human origin.

  6. High throughput multiple locus variable number of tandem repeat analysis (MLVA) of Staphylococcus aureus from human, animal and food sources.

    Directory of Open Access Journals (Sweden)

    Daniel Sobral

    Full Text Available Staphylococcus aureus is a major human pathogen, a relevant pathogen in veterinary medicine, and a major cause of food poisoning. Epidemiological investigation tools are needed to establish surveillance of S. aureus strains in humans, animals and food. In this study, we investigated 145 S. aureus isolates recovered from various animal species, disease conditions, food products and food poisoning events. Multiple Locus Variable Number of Tandem Repeat (VNTR) analysis (MLVA), known to be highly efficient for the genotyping of human S. aureus isolates, was used and shown to be equally well suited for the typing of animal S. aureus isolates. MLVA was improved by using sixteen VNTR loci amplified in two multiplex PCRs and analyzed by capillary electrophoresis, ensuring high throughput and high discriminatory power. The isolates were assigned to twelve known clonal complexes (CCs) and a few singletons. Half of the test collection belonged to four CCs (CC9, CC97, CC133, CC398) previously described as mostly associated with animals. The remaining eight CCs (CC1, CC5, CC8, CC15, CC25, CC30, CC45, CC51), representing 46% of the animal isolates, are common in humans. Interestingly, isolates responsible for food poisoning show a CC distribution signature typical of human isolates and strikingly different from animal isolates, suggesting a predominantly human origin.

  7. Connecting multiple clouds and mixing real and virtual resources via the open source WNoDeS framework

    CERN Document Server

    CERN. Geneva; Italiano, Alessandro

    2012-01-01

    In this paper we present the latest developments introduced in the WNoDeS framework (http://web.infn.it/wnodes); we will in particular describe inter-cloud connectivity, support for multiple batch systems, and the coexistence of virtual and real environments on the same hardware. Specific effort has been dedicated to the work needed to deploy a "multi-sites" WNoDeS installation. The goal is to give end users the possibility to submit requests for resources using cloud interfaces on several sites in a transparent way. To this extent, we will show how we have exploited already existing and deployed middleware within the framework of the IGI (Italian Grid Initiative) and EGI (European Grid Infrastructure) services. In this context, we will also describe the developments that have taken place in order to have the possibility to dynamically exploit public cloud services like Amazon EC2. The latter gives WNoDeS the capability to serve, for example, part of the user requests through external computing resources when ne...

  8. CycADS: an annotation database system to ease the development and update of BioCyc databases.

    Science.gov (United States)

    Vellozo, Augusto F; Véron, Amélie S; Baa-Puyoulet, Patrice; Huerta-Cepas, Jaime; Cottret, Ludovic; Febvay, Gérard; Calevro, Federica; Rahbé, Yvan; Douglas, Angela E; Gabaldón, Toni; Sagot, Marie-France; Charles, Hubert; Colella, Stefano

    2011-01-01

    In recent years, genomes from an increasing number of organisms have been sequenced, but their annotation remains a time-consuming process. The BioCyc databases offer a framework for the integrated analysis of metabolic networks. The Pathway Tools software suite allows the automated construction of a database starting from an annotated genome, but it requires prior integration of all annotations into a specific summary file or into a GenBank file. To allow the easy creation and update of a BioCyc database starting from the multiple genome annotation resources available over time, we have developed an ad hoc data management system that we called the Cyc Annotation Database System (CycADS). CycADS is centred on a specific database model and on a set of Java programs to import, filter and export relevant information. Data from GenBank and other annotation sources (including, for example, KAAS, PRIAM, Blast2GO and PhylomeDB) are collected into a database to be subsequently filtered and extracted to generate a complete annotation file. This file is then used to build an enriched BioCyc database using the PathoLogic program of Pathway Tools. The CycADS pipeline for annotation management was used to build the AcypiCyc database for the pea aphid (Acyrthosiphon pisum), whose genome was recently sequenced. The AcypiCyc database webpage also includes, for comparative analyses, two other metabolic reconstruction BioCyc databases generated using CycADS: TricaCyc for Tribolium castaneum and DromeCyc for Drosophila melanogaster. Owing to its flexible design, CycADS offers a powerful software tool for the generation and regular updating of enriched BioCyc databases. The CycADS system is particularly suited for metabolic gene annotation and network reconstruction in newly sequenced genomes. Because of the uniform annotation used for metabolic network reconstruction, CycADS is particularly useful for comparative analysis of the metabolism of different organisms.
Database URL: http://www.cycadsys.org.
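    The pipeline described above — collect per-gene annotations from several tools, then filter and merge them into a single export — can be sketched minimally. The tool names below come from the abstract; the gene IDs and annotation values are invented for illustration:

```python
def merge_annotations(sources):
    """Combine per-gene annotations from several tools into one record
    per gene, keeping each tool's evidence side by side."""
    merged = {}
    for tool, annotations in sources.items():
        for gene, term in annotations.items():
            merged.setdefault(gene, {})[tool] = term
    return merged

# Hypothetical outputs from three of the tools named in the abstract
sources = {
    "KAAS":     {"g1": "K00845", "g2": "K01810"},
    "PRIAM":    {"g1": "2.7.1.2"},
    "Blast2GO": {"g2": "GO:0004347"},
}
merged = merge_annotations(sources)
print(merged["g1"])
```

In CycADS the merged records are exported as the annotation file consumed by Pathway Tools' PathoLogic; only the merge step is shown here.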

  9. Utilization of host iron sources by Corynebacterium diphtheriae: multiple hemoglobin-binding proteins are essential for the use of iron from the hemoglobin-haptoglobin complex.

    Science.gov (United States)

    Allen, Courtni E; Schmitt, Michael P

    2015-02-01

    The use of hemin iron by Corynebacterium diphtheriae requires the DtxR- and iron-regulated ABC hemin transporter HmuTUV and the secreted Hb-binding protein HtaA. We recently described two surface anchored proteins, ChtA and ChtC, which also bind hemin and Hb. ChtA and ChtC share structural similarities to HtaA; however, a function for ChtA and ChtC was not determined. In this study, we identified additional host iron sources that are utilized by C. diphtheriae. We show that several C. diphtheriae strains use the hemoglobin-haptoglobin (Hb-Hp) complex as an iron source. We report that an htaA deletion mutant of C. diphtheriae strain 1737 is unable to use the Hb-Hp complex as an iron source, and we further demonstrate that a chtA-chtC double mutant is also unable to use Hb-Hp iron. Single-deletion mutants of chtA or chtC use Hb-Hp iron in a manner similar to that of the wild type. These findings suggest that both HtaA and either ChtA or ChtC are essential for the use of Hb-Hp iron. Enzyme-linked immunosorbent assay (ELISA) studies show that HtaA binds the Hb-Hp complex, and the substitution of a conserved tyrosine (Y361) for alanine in HtaA results in significantly reduced binding. C. diphtheriae was also able to use human serum albumin (HSA) and myoglobin (Mb) but not hemopexin as iron sources. These studies identify a biological function for the ChtA and ChtC proteins and demonstrate that the use of the Hb-Hp complex as an iron source by C. diphtheriae requires multiple iron-regulated surface components.

  10. Uncertainty in biological monitoring: a framework for data collection and analysis to account for multiple sources of sampling bias

    Science.gov (United States)

    Ruiz-Gutierrez, Viviana; Hooten, Melvin B.; Campbell Grant, Evan H.

    2016-01-01

    Biological monitoring programmes are increasingly relying upon large volumes of citizen-science data to improve the scope and spatial coverage of information, challenging the scientific community to develop design and model-based approaches to improve inference. Recent statistical models in ecology have been developed to accommodate false-negative errors, although current work points to false-positive errors as equally important sources of bias. This is of particular concern for the success of any monitoring programme given that rates as small as 3% could lead to the overestimation of the occurrence of rare events by as much as 50%, and even small false-positive rates can severely bias estimates of occurrence dynamics. We present an integrated, computationally efficient Bayesian hierarchical model to correct for false-positive and false-negative errors in detection/non-detection data. Our model combines independent, auxiliary data sources with field observations to improve the estimation of false-positive rates, when a subset of field observations cannot be validated a posteriori or assumed as perfect. We evaluated the performance of the model across a range of occurrence rates, false-positive and false-negative errors, and quantity of auxiliary data. The model performed well under all simulated scenarios, and we were able to identify critical auxiliary data characteristics which resulted in improved inference. We applied our false-positive model to a large-scale, citizen-science monitoring programme for anurans in the north-eastern United States, using auxiliary data from an experiment designed to estimate false-positive error rates. Not correcting for false-positive rates resulted in biased estimates of occupancy in 4 of the 10 anuran species we analysed, leading to an overestimation of the average number of occupied survey routes by as much as 70%. The framework we present for data collection and analysis is able to efficiently provide reliable inference for
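    The false-positive bias described above is easy to reproduce by simulation: with a naive estimator (a site counts as occupied if it yields at least one detection), even a small per-visit false-positive rate inflates the apparent occurrence of a rare species. A toy sketch, not the authors' hierarchical model — all rates below are made up:

```python
import random

def naive_occupancy(psi, p_det, p_fp, n_sites=20000, n_visits=3, seed=1):
    """Fraction of sites with at least one detection over n_visits surveys.
    psi   -- true occupancy probability
    p_det -- per-visit detection probability at occupied sites
    p_fp  -- per-visit false-positive probability at unoccupied sites
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sites):
        occupied = rng.random() < psi
        p = p_det if occupied else p_fp
        if any(rng.random() < p for _ in range(n_visits)):
            hits += 1
    return hits / n_sites

# For a rare species (true occupancy 5%), a 3% per-visit false-positive
# rate more than doubles the naive estimate of occurrence.
print(naive_occupancy(0.05, 0.8, 0.00))
print(naive_occupancy(0.05, 0.8, 0.03))
```

The correction the paper develops amounts to estimating p_fp from auxiliary data and accounting for it in the likelihood instead of using the naive estimator.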

  11. Sr isotope tracing of multiple water sources in a complex river system, Noteć River, central Poland

    Energy Technology Data Exchange (ETDEWEB)

    Zieliński, Mateusz, E-mail: mateusz.zielinski@amu.edu.pl [Institute of Geoecology and Geoinformation, Adam Mickiewicz University, Dzięgielowa 27, 61-680 Poznań (Poland); Dopieralska, Jolanta, E-mail: dopieralska@amu.edu.pl [Poznań Science and Technology Park, Adam Mickiewicz University Foundation, Rubież 46, 61-612 Poznań (Poland); Belka, Zdzislaw, E-mail: zbelka@amu.edu.pl [Isotope Laboratory, Adam Mickiewicz University, Dzięgielowa 27, 61-680 Poznań (Poland); Walczak, Aleksandra, E-mail: awalczak@amu.edu.pl [Isotope Laboratory, Adam Mickiewicz University, Dzięgielowa 27, 61-680 Poznań (Poland); Siepak, Marcin, E-mail: siep@amu.edu.pl [Institute of Geology, Adam Mickiewicz University, Maków Polnych 16, 61-606 Poznań (Poland); Jakubowicz, Michal, E-mail: mjakub@amu.edu.pl [Institute of Geoecology and Geoinformation, Adam Mickiewicz University, Dzięgielowa 27, 61-680 Poznań (Poland)

    2016-04-01

    Anthropogenic impact on surface waters and other elements in the environment was investigated in the Noteć River basin in central Poland. The approach was to trace changes in the Sr isotope composition (⁸⁷Sr/⁸⁶Sr) and concentration in space and time. Systematic sampling of the river water shows a very wide range of ⁸⁷Sr/⁸⁶Sr ratios, from 0.7089 to 0.7127. This strong variation, however, is restricted to the upper course of the river, whereas the water in the lower course typically shows ⁸⁷Sr/⁸⁶Sr values around 0.7104–0.7105. Variations in ⁸⁷Sr/⁸⁶Sr are associated with a wide range of Sr concentrations, from 0.14 to 1.32 mg/L. We find that strong variations in ⁸⁷Sr/⁸⁶Sr and Sr concentrations can be accounted for by mixing of two end-members: 1) atmospheric waters charged with Sr from the near-surface weathering and wash-out of Quaternary glaciogenic deposits, and 2) waters introduced into the river from an open pit lignite mine. The first reservoir is characterized by a low Sr content and high ⁸⁷Sr/⁸⁶Sr ratios, whereas mine waters display opposite characteristics. Anthropogenic pollution is also induced by extensive use of fertilizers, which constitute the third source of Sr in the environment. The study has an important implication for future archeological studies in the region. It shows that the present-day Sr isotope signatures of river water, flora and fauna cannot be used unambiguously to determine the “baseline” for bioavailable ⁸⁷Sr/⁸⁶Sr in the past. - Highlights: • Sr isotopes fingerprint water sources and their interactions in a complex river system. • Mine waters and fertilizers are critical anthropogenic additions in the river water. • Limited usage of environmental isotopic data in archeological studies. • Sr budget of the river is dynamic and temporary.
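    The two-end-member mixing invoked above can be written down directly: for conservative mixing, Sr concentration combines linearly, while the isotope ratio of the mixture is weighted by each end-member's Sr contribution. A sketch with hypothetical end-member values loosely inspired by the reported ranges (not the study's actual data):

```python
def mix_sr(f, c1, r1, c2, r2):
    """Conservative two-end-member mixing.
    f      -- mass fraction of end-member 1 in the mixture (0..1)
    c1, c2 -- Sr concentrations (mg/L) of end-members 1 and 2
    r1, r2 -- 87Sr/86Sr ratios of end-members 1 and 2
    Returns (concentration, ratio) of the mixture.
    """
    c = f * c1 + (1 - f) * c2
    # The isotope ratio is weighted by each end-member's Sr contribution,
    # so the Sr-rich end-member dominates the mixture's signature.
    r = (f * c1 * r1 + (1 - f) * c2 * r2) / c
    return c, r

# Hypothetical end-members: 1) dilute, radiogenic atmospheric/weathering
# water; 2) Sr-rich mine water with a lower ratio.
c, r = mix_sr(0.5, 0.14, 0.7127, 1.32, 0.7089)
print(round(c, 3), round(r, 5))
```

Because the ratio is concentration-weighted, a 50:50 mix already sits close to the mine-water end-member — one reason small mine-water inputs can dominate the river's signal.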

  12. Development of a flattening filter free multiple source model for use as an independent, Monte Carlo, dose calculation, quality assurance tool for clinical trials.

    Science.gov (United States)

    Faught, Austin M; Davidson, Scott E; Popple, Richard; Kry, Stephen F; Etzel, Carol; Ibbott, Geoffrey S; Followill, David S

    2017-09-01

    The Imaging and Radiation Oncology Core-Houston (IROC-H) Quality Assurance Center (formerly the Radiological Physics Center) has reported varying levels of compliance from their anthropomorphic phantom auditing program. IROC-H studies have suggested that one source of disagreement between institution-submitted calculated doses and measurement is the accuracy of the institution's treatment planning system dose calculations and heterogeneity corrections used. In order to audit this step of the radiation therapy treatment process, an independent dose calculation tool is needed. Monte Carlo multiple source models for Varian flattening filter free (FFF) 6 MV and FFF 10 MV therapeutic x-ray beams were commissioned based on central axis depth dose data from a 10 × 10 cm² field size and dose profiles for a 40 × 40 cm² field size. The models were validated against open-field measurements in a water tank for field sizes ranging from 3 × 3 cm² to 40 × 40 cm². The models were then benchmarked against IROC-H's anthropomorphic head and neck phantom and lung phantom measurements. Validation results, assessed with a ±2%/2 mm gamma criterion, showed average agreement of 99.9% and 99.0% for central axis depth dose data for the FFF 6 MV and FFF 10 MV models, respectively. Dose profile agreement using the same evaluation technique averaged 97.8% and 97.9% for the respective models. Phantom benchmarking comparisons were evaluated with a ±3%/2 mm gamma criterion, and agreement averaged 90.1% and 90.8% for the respective models. Multiple source models for Varian FFF 6 MV and FFF 10 MV beams have been developed, validated, and benchmarked for inclusion in an independent dose calculation quality assurance tool for use in clinical trial audits. © 2017 American Association of Physicists in Medicine.
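    The ±dose%/distance-mm gamma criterion used in the comparisons above combines a dose-difference test with a distance-to-agreement test: a point passes if some reference point lies inside the combined acceptance ellipse. A simplified 1-D, global-normalization sketch (brute-force search; the profiles below are invented):

```python
def gamma_pass_rate(ref, test, dx_mm, dose_tol=0.03, dist_tol_mm=2.0):
    """1-D global gamma analysis.
    ref, test   -- dose profiles sampled on the same grid
    dx_mm       -- grid spacing in mm
    dose_tol    -- dose criterion as a fraction of the global maximum
    dist_tol_mm -- distance-to-agreement criterion in mm
    Returns the fraction of test points with gamma <= 1.
    """
    norm = max(ref)  # global normalization
    passed = 0
    for i, d_test in enumerate(test):
        # gamma^2 at point i: minimum over all reference points of the
        # normalized dose difference plus the normalized spatial offset
        gamma_sq = min(
            ((d_test - d_ref) / (dose_tol * norm)) ** 2
            + ((i - j) * dx_mm / dist_tol_mm) ** 2
            for j, d_ref in enumerate(ref)
        )
        passed += gamma_sq <= 1.0
    return passed / len(test)

ref = [100, 98, 95, 90, 80, 60, 30, 10, 5, 2]
shifted = ref[1:] + [1]  # roughly a one-grid-step spatial shift
print(gamma_pass_rate(ref, shifted, dx_mm=1.0))
```

A pure spatial shift within the distance tolerance passes, while a systematic dose error in the high-dose region fails — which is exactly why the criterion is preferred over a pointwise dose comparison.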

  13. A Taxonomic Search Engine: Federating taxonomic databases using web services

    Directory of Open Access Journals (Sweden)

    Page Roderic DM

    2005-03-01

    Full Text Available Abstract Background The taxonomic name of an organism is a key link between different databases that store information on that organism. However, in the absence of a single, comprehensive database of organism names, individual databases lack an easy means of checking the correctness of a name. Furthermore, the same organism may have more than one name, and the same name may apply to more than one organism. Results The Taxonomic Search Engine (TSE) is a web application written in PHP that queries multiple taxonomic databases (ITIS, Index Fungorum, IPNI, NCBI, and uBio) and summarises the results in a consistent format. It supports "drill-down" queries to retrieve a specific record. The TSE can optionally suggest alternative spellings the user can try. It also acts as a Life Science Identifier (LSID) authority for the source taxonomic databases, providing globally unique identifiers (and associated metadata) for each name. Conclusion The Taxonomic Search Engine is available at http://darwin.zoology.gla.ac.uk/~rpage/portal/ and provides a simple demonstration of the potential of the federated approach to providing access to taxonomic names.
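    The federated pattern TSE implements — query every provider, normalize each hit to a common record shape, and tolerate provider outages — can be sketched generically. The provider functions below are stand-ins, not the real ITIS/IPNI/NCBI/uBio interfaces:

```python
def federated_search(name, providers):
    """Query each provider, normalize hits to a common record shape, and
    tag each with its source; a failing provider is skipped rather than
    failing the whole search."""
    results = []
    for source, query in providers.items():
        try:
            for hit in query(name):
                results.append({"source": source,
                                "name": hit["name"],
                                "id": hit["id"]})
        except Exception:
            continue  # tolerate provider outages
    return results

def broken_provider(name):
    raise RuntimeError("service unavailable")

# Stand-in providers returning canned records (the real TSE queries the
# remote databases over the network and parses heterogeneous responses).
providers = {
    "itis": lambda q: [{"name": q, "id": "itis:0000001"}],
    "ncbi": lambda q: [{"name": q, "id": "ncbi:0000002"}],
    "down": broken_provider,
}
print(federated_search("Homo sapiens", providers))
```

The per-provider try/except is the essential design choice: a federated search is only as reliable as its most fragile source unless failures are isolated.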

  14. EuroFIR-BASIS - a combined composition and biological activity database for bioactive compounds in plant-based foods

    DEFF Research Database (Denmark)

    Gry, Jørn; Black, Lucinda; Eriksen, Folmer Damsted

    2007-01-01

    Mounting evidence suggests that certain non-nutrient bioactive compounds promote optimal human health and reduce the risk of chronic disease. An Internet-deployed database, EuroFIR-BASIS, which uniquely combines food composition and biological effects data for plant-based bioactive compounds, is being developed. The database covers multiple compound classes and 330 major food plants and their edible parts, with data sourced from quality-assessed, peer-reviewed literature. The database will be a valuable resource for food regulatory and advisory bodies, risk authorities, epidemiologists and researchers interested in diet and health relationships, and product developers within the food industry.

  15. In vivo measurement of non-keratinized squamous epithelium using a spectroscopic microendoscope with multiple source-detector separations

    Science.gov (United States)

    Greening, Gage J.; Rajaram, Narasimhan; Muldoon, Timothy J.

    2016-03-01

    In the non-keratinized epithelia, dysplasia typically arises near the basement membrane and proliferates into the upper epithelial layers over time. We present a non-invasive, multimodal technique combining high-resolution fluorescence imaging and broadband sub-diffuse reflectance spectroscopy (sDRS) to monitor health at various tissue layers. This manuscript focuses on characterization of the sDRS modality, which contains two source-detector separations (SDSs) of 374 μm and 730 μm, so that it can be used to extract in vivo optical parameters from human oral mucosa at two tissue thicknesses. First, we present empirical lookup tables (LUTs) describing the relationship between the reduced scattering (μs') and absorption (μa) coefficients and absolute reflectance. LUTs were shown to extract μs' and μa with accuracies of approximately 4% and 8%, respectively. We then present LUTs describing the relationship between μs', μa and sampling depth. Sampling depths range between 210-480 and 260-620 μm for the 374 and 730 μm SDSs, respectively. We then demonstrate the ability to extract in vivo μs', μa, hemoglobin concentration, bulk tissue oxygen saturation, scattering exponent, and sampling depth from the inner lip of thirteen healthy volunteers to elucidate the differences in the extracted optical parameters from each SDS (374 and 730 μm) within non-keratinized squamous epithelia.
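    The lookup-table inversion described above maps a measured reflectance back to optical properties by searching a precomputed grid. A minimal nearest-neighbour sketch; the forward model here is an invented placeholder, not the empirically built LUT, and a real inversion uses spectrally resolved reflectance at each SDS rather than a single scalar:

```python
def forward_model(mus, mua):
    """Placeholder forward model: reflectance rises with scattering and
    falls with absorption. Stands in for the empirically measured LUT."""
    return mus / (mus + 10.0 * mua)

# Build the lookup table over a grid of optical properties (cm^-1)
lut = [(mus, mua, forward_model(mus, mua))
       for mus in [5, 10, 15, 20, 25]
       for mua in [0.1, 0.5, 1.0, 2.0, 5.0]]

def invert(measured_reflectance):
    """Return the (mus', mua) grid point whose predicted reflectance is
    closest to the measurement (nearest neighbour in reflectance)."""
    return min(lut, key=lambda row: abs(row[2] - measured_reflectance))[:2]

# Simulate a measurement from known properties and recover them
r = forward_model(15, 1.0)
print(invert(r))
```

A single scalar cannot uniquely pin down two unknowns in general; the broadband spectra and two SDSs used in the paper are what break that degeneracy.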

  16. The 2008 Wells, Nevada earthquake sequence: Source constraints using calibrated multiple event relocation and InSAR

    Science.gov (United States)

    Nealy, Jennifer; Benz, Harley M.; Hayes, Gavin; Berman, Eric; Barnhart, William

    2017-01-01

    The 2008 Wells, NV earthquake represents the largest domestic event in the conterminous U.S. outside of California since the October 1983 Borah Peak earthquake in southern Idaho. We present an improved catalog of the foreshock-aftershock sequence, complete to magnitude 1.6, supplementing the current U.S. Geological Survey (USGS) Preliminary Determination of Epicenters (PDE) catalog with 1,928 well-located events. In order to create this catalog, both subspace and kurtosis detectors are used to obtain an initial set of earthquakes and associated locations. The latter are then calibrated through the implementation of the hypocentroidal decomposition method and relocated using the BayesLoc relocation technique. We additionally perform a finite fault slip analysis of the mainshock using InSAR observations. By combining the relocated sequence with the finite fault analysis, we show that the aftershocks occur primarily updip and along the southwestern edge of the zone of maximum slip. The aftershock locations illuminate areas of post-mainshock strain increase; aftershock depths, ranging from 5 to 16 km, are consistent with InSAR imaging, which shows that the Wells earthquake was a buried source with no observable near-surface offset.

  17. Gender differences in drunk driving prevalence rates and trends: a 20-year assessment using multiple sources of evidence.

    Science.gov (United States)

    Schwartz, Jennifer

    2008-09-01

    This research tracked women's and men's drunk driving rates and the DUI sex ratio in the United States from 1982-2004 using three diverse sources of evidence. Sex-specific prevalence estimates and the sex ratio are derived from official arrest statistics from the Federal Bureau of Investigation, self-reports from the Centers for Disease Control and Prevention, and traffic fatality data from the National Highway Traffic Safety Administration. Drunk driving trends were analyzed using augmented Dickey-Fuller time series techniques. Female DUI arrest rates increased whereas male rates declined then stabilized, producing a significantly narrower sex ratio. According to self-report and traffic data, women's and men's drunk driving rates declined and the gender gap was unchanged. Women's overrepresentation in arrests relative to their share of offending began in the 1990s and accelerated in 2000. Women's arrest gains, contrasted with no systematic change in DUI behavior, and the timing of this shift suggest an increased vulnerability to arrest. More stringent laws and enforcement directed at less intoxicated offenders may inadvertently target female offending patterns.

  18. Fossil DNA Stratigraphy revealed Multiple Sources of Alkenones in the Holocene Black Sea at the Strain Level: Implications for UK37 Paleothermometry

    Science.gov (United States)

    Coolen, M. J.; Saenz, J. P.; Trowbridge, N.; Eglinton, T.

    2007-12-01

    The fossil distribution of long-chain alkenones is now a widely accepted tool to reconstruct past sea surface temperatures (SST) (i.e. the UK37 index). In most studies, the UK37 index is calibrated for the main source of alkenones, the coccolithophorid haptophyte Emiliania huxleyi. Besides temperature, additional factors such as salinity, growth conditions, or different or multiple biological sources seem to influence the level of unsaturation of alkenones and the reliability of the UK37-inferred SST. The Black Sea is an interesting setting in which to study such factors, since unreliable SSTs were reconstructed from the Holocene sapropel with high concentrations of an unusual "Black Sea" alkenone (C36:2 ethyl ketone) whereas calcium-bearing microfossils (coccoliths) of haptophytes are lacking. To identify Holocene sources of alkenones in the Black Sea at unprecedented strain-level resolution and to refine paleoenvironmental conditions, we searched for multiple fossil genetic signatures of haptophytes. This revealed that the slow increase in salinity resulting from the post-glacial introduction of Mediterranean waters into the paleo-lacustrine Black Sea caused a succession of alkenone-biosynthesizing haptophytes from Isochrysis spp. (which do not produce coccoliths), to a mixture of Isochrysis and E. huxleyi strains, then only E. huxleyi strains; when the salinity reached a threshold of 18 per mille at 3000 years BP, the fossilized calcium-bearing E. huxleyi strain was introduced. At least 11 E. huxleyi strains were identified, and the first non-fossilizing strains had already colonized the Black Sea 4000 years before the fossilized calcium-bearing strain appeared. Most E. huxleyi strains were likely sources of C36:2 eK, but the presence of one fossil "phylotype" coincided with the highest levels of this unusual alkenone (more than 80 percent of the total alkenone content) and unreliable past SST (varying between 5 and 30 degrees C; 7500-5500 years BP).
C36:2 eK was not biosynthesized by

  19. DATABASE REPLICATION IN HETEROGENOUS PLATFORM

    Directory of Open Access Journals (Sweden)

    Hendro Nindito

    2014-01-01

    Full Text Available The application of diverse database technologies in enterprises today is an increasingly common practice. To provide high availability and survivability of real-time information, a database replication technology that is capable of replicating databases across heterogeneous platforms is required. The purpose of this research is to find a technology with such capability. In this research, the source data are stored in a MSSQL database server running on Windows and replicated to a MySQL server running on Linux as the destination. The method applied in this research is prototyping, in which the processes of development and testing can be done interactively and repeatedly. The key result of this research is that the replication technology applied, Oracle GoldenGate, successfully replicates data in real time across heterogeneous platforms.

  20. Multiple linear regression model for bromate formation based on the survey data of source waters from geographically different regions across China.

    Science.gov (United States)

    Yu, Jianwei; Liu, Juan; An, Wei; Wang, Yongjing; Zhang, Junzhi; Wei, Wei; Su, Ming; Yang, Min

    2015-01-01

    A total of 86 source water samples from 38 cities across major watersheds of China were collected for a bromide (Br⁻) survey, and the bromate (BrO₃⁻) formation potentials (BFPs) of 41 samples with Br⁻ concentrations >20 μg L⁻¹ were evaluated using a batch ozonation reactor. Statistical analyses indicated that higher alkalinity, hardness, and pH of water samples could lead to higher BFPs, with alkalinity as the most important factor. Based on the survey data, a multiple linear regression (MLR) model including three parameters (alkalinity, ozone dose, and total organic carbon (TOC)) was established with a relatively good prediction performance (model selection criterion = 2.01, R² = 0.724), using logarithmic transformation of the variables. Furthermore, a contour plot was used to interpret the influence of alkalinity and TOC on BrO₃⁻ formation with prediction accuracy as high as 71%, suggesting that these two parameters, apart from ozone dosage, were the most important ones affecting the BFPs of source waters with Br⁻ concentrations >20 μg L⁻¹. The model could be a useful tool for the prediction of the BFPs of source water.
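    A log-transformed MLR of the kind described takes the form log(BFP) = b0 + b1*log(alkalinity) + b2*log(ozone dose) + b3*log(TOC). A sketch of fitting such a model by ordinary least squares on synthetic data — the coefficients below are invented for illustration, not the paper's fitted values:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Synthetic predictors: alkalinity (mg/L as CaCO3), ozone dose (mg/L), TOC (mg/L)
alk = rng.uniform(50, 300, n)
o3 = rng.uniform(0.5, 5.0, n)
toc = rng.uniform(1.0, 8.0, n)

# Hypothetical "true" model in log space, plus measurement noise
log_bfp = 0.5 + 0.8 * np.log(alk) + 0.6 * np.log(o3) + 0.4 * np.log(toc)
log_bfp += rng.normal(0, 0.05, n)

# Design matrix with an intercept column; fit by ordinary least squares
X = np.column_stack([np.ones(n), np.log(alk), np.log(o3), np.log(toc)])
coef, *_ = np.linalg.lstsq(X, log_bfp, rcond=None)
print(np.round(coef, 2))
```

Fitting in log space means each coefficient acts as an elasticity: a 1% increase in alkalinity changes the predicted BFP by roughly b1 percent, which matches the multiplicative way these water-quality factors interact.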