WorldWideScience

Sample records for distributed heterogeneous database

  1. Heterogeneous distributed databases: A case study

    Science.gov (United States)

    Stewart, Tracy R.; Mukkamala, Ravi

    1991-01-01

    Alternatives are reviewed for accessing distributed heterogeneous databases and a recommended solution is proposed. The current study is limited to the Automated Information Systems Center at the Naval Sea Combat Systems Engineering Station at Norfolk, VA. This center maintains two databases located on Digital Equipment Corporation's VAX computers running under the VMS operating system. The first database, ICMS, resides on a VAX 11/780 and has been implemented using VAX DBMS, a CODASYL-based system. The second database, CSA, resides on a VAX 6460 and has been implemented using the ORACLE relational database management system (RDBMS). Both databases are used for configuration management within the U.S. Navy, and each supports a different customer base. ICMS tracks U.S. Navy ships and major systems (anti-sub, sonar, etc.). Even though the major systems on ships and submarines have totally different functions, some of the equipment within the major systems is common to both ships and submarines.

  2. Study on distributed generation algorithm of variable precision concept lattice based on ontology heterogeneous database

    Science.gov (United States)

    WANG, Qingrong; ZHU, Changfeng

    2017-06-01

    Integration of distributed heterogeneous data sources is a key issue in big data applications. In this paper, the strategy of variable precision is introduced into the concept lattice, and a one-to-one mapping between the variable precision concept lattice and the ontology concept lattice is constructed so that a local ontology is produced by building the variable precision concept lattice for each subsystem. Drawing on this special relationship between concept lattices and ontology construction, a distributed generation algorithm of the variable precision concept lattice based on an ontology heterogeneous database is proposed. Finally, taking the main concept lattice generated from the existing heterogeneous database as the standard, a case study has been carried out to verify the feasibility and validity of this algorithm, and the differences between the main concept lattice and the standard concept lattice are compared. The analysis results show that the proposed algorithm can automate the construction of the distributed concept lattice over heterogeneous data sources.

  3. Interconnecting heterogeneous database management systems

    Science.gov (United States)

    Gligor, V. D.; Luckenbaugh, G. L.

    1984-01-01

    It is pointed out that there is still a great need for the development of improved communication between remote, heterogeneous database management systems (DBMS). Problems regarding the effective communication between distributed DBMSs are primarily related to significant differences between local data managers, local data models and representations, and local transaction managers. A system of interconnected DBMSs which exhibit such differences is called a network of distributed, heterogeneous DBMSs. In order to achieve effective interconnection of remote, heterogeneous DBMSs, the users must have uniform, integrated access to the different DBMSs. The present investigation is mainly concerned with an analysis of the existing approaches to interconnecting heterogeneous DBMSs, taking into account four experimental DBMS projects.

  4. Replikasi Unidirectional pada Heterogen Database [Unidirectional Replication in Heterogeneous Databases]

    Directory of Open Access Journals (Sweden)

    Hendro Nindito

    2013-12-01

    Full Text Available The use of diverse database technologies in enterprises today cannot be avoided, so technology is needed to deliver information in real time. The purpose of this research is to discuss a database replication technology that can be applied in heterogeneous database environments. In this study we replicate from a Windows-based MS SQL Server database to a Linux-based Oracle database as the target. The research method used is prototyping, in which development can be done quickly and working models are tested through repeated interaction. From this research it is found that database replication using Oracle GoldenGate can be applied in heterogeneous environments in real time.
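
    A minimal sketch of the unidirectional-replication idea described in this record. Oracle GoldenGate itself is configured through its own extract/replicat processes, which are not shown here; instead, sqlite3 databases stand in for the MS SQL Server source and Oracle target, and the table, columns, and change-tracking scheme are hypothetical.

    ```python
    import sqlite3

    # Stand-ins for the MS SQL Server source and the Oracle target databases.
    source = sqlite3.connect(":memory:")
    target = sqlite3.connect(":memory:")
    for db in (source, target):
        db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT, updated_at INTEGER)")

    source.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                       [(1, "NEW", 100), (2, "PAID", 120)])

    def replicate_changes(src, dst, last_sync):
        """One unidirectional sync cycle: pull rows changed since last_sync and upsert them."""
        rows = src.execute(
            "SELECT id, status, updated_at FROM orders WHERE updated_at > ?", (last_sync,)
        ).fetchall()
        dst.executemany(
            "INSERT INTO orders (id, status, updated_at) VALUES (?, ?, ?) "
            "ON CONFLICT(id) DO UPDATE SET status=excluded.status, updated_at=excluded.updated_at",
            rows,
        )
        dst.commit()
        return max((r[2] for r in rows), default=last_sync)   # new replication checkpoint

    checkpoint = replicate_changes(source, target, last_sync=0)
    print(target.execute("SELECT * FROM orders").fetchall(), "checkpoint:", checkpoint)
    ```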

  5. REPLIKASI UNIDIRECTIONAL PADA HETEROGEN DATABASE [UNIDIRECTIONAL REPLICATION IN HETEROGENEOUS DATABASES]

    OpenAIRE

    Hendro Nindito; Evaristus Didik Madyatmadja; Albert Verasius Dian Sano

    2013-01-01

    The use of diverse database technologies in enterprises today cannot be avoided. Thus, technology is needed to generate information in real time. The purpose of this research is to discuss a database replication technology that can be applied in heterogeneous database environments. In this study we use a Windows-based MS SQL Server database as the source and a Linux-based Oracle database as the target. The research method used is prototyping where development can be done quickly and testing of working models of the...

  6. DATABASE REPLICATION IN HETEROGENOUS PLATFORM

    OpenAIRE

    Hendro Nindito; Evaristus Didik Madyatmadja; Albert Verasius Dian Sano

    2014-01-01

    The application of diverse database technologies in enterprises today is increasingly common practice. To provide high availability and survivability of real-time information, a database replication technology that can replicate databases across heterogeneous platforms is required. The purpose of this research is to find a technology with such capability. In this research, the data source is stored in an MS SQL database server running on Windows. The data will be replicated to MyS...

  7. How to ensure sustainable interoperability in heterogeneous distributed systems through architectural approach.

    Science.gov (United States)

    Pape-Haugaard, Louise; Frank, Lars

    2011-01-01

    A major obstacle to ensuring ubiquitous information is the use of heterogeneous systems in eHealth. The objective of this paper is to illustrate how an architecture for distributed eHealth databases can be designed without losing the characteristic features of traditional sustainable databases. The approach is first to explain traditional architecture in central and homogeneous distributed database computing, followed by a possible approach that uses an architectural framework to obtain sustainability across disparate systems, i.e. heterogeneous databases, and concluded with a discussion. It is seen that, through a method of using relaxed ACID properties on a service-oriented architecture, it is possible to achieve the data consistency that is essential for ensuring sustainable interoperability.
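
    A small sketch of the relaxed-ACID idea this record refers to: instead of one global distributed transaction, each local update commits independently and a compensating action is issued if a later step fails, giving eventual consistency across heterogeneous stores. The function names and the toy in-memory "databases" are illustrative, not taken from the paper.

    ```python
    def transfer_record(write_local, write_remote, compensate_local, payload):
        """Apply an update to two autonomous databases with relaxed ACID semantics.

        Each write commits locally; if the remote write fails, the local write is
        undone by a compensating update rather than by a global rollback.
        """
        write_local(payload)              # committed immediately in database A
        try:
            write_remote(payload)         # committed independently in database B
        except Exception:
            compensate_local(payload)     # semantic undo restores global consistency
            raise

    # Toy usage with lists standing in for two heterogeneous eHealth databases.
    db_a, db_b = [], []
    transfer_record(db_a.append, db_b.append, db_a.remove, {"id": 7, "obs": "BP 120/80"})
    print(db_a, db_b)
    ```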

  8. Research on distributed heterogeneous data PCA algorithm based on cloud platform

    Science.gov (United States)

    Zhang, Jin; Huang, Gang

    2018-05-01

    Principal component analysis (PCA) of heterogeneous data sets can overcome the limited scalability of centralized data processing. In order to reduce the generation of intermediate data and the error components of distributed heterogeneous data sets, a principal component analysis algorithm for heterogeneous data sets under a cloud platform is proposed. The algorithm performs eigenvalue processing using Householder tridiagonalization and QR factorization, and calculates the error component of the heterogeneous database associated with the public key to obtain the intermediate data set and the lost information. Experiments on distributed DBM heterogeneous datasets show that the method is feasible and reliable in terms of execution time and accuracy.
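
    A hedged numpy sketch of the general principle behind distributed PCA on partitioned data: each node reports only small summary statistics (row counts, feature sums, scatter matrices), and a coordinator assembles the global covariance and diagonalises it (numpy's eigh uses Householder tridiagonalization internally). This is only the aggregation idea, not the paper's cloud-platform algorithm.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # Three "nodes", each holding a local partition over the same 5 features.
    partitions = [rng.normal(size=(n, 5)) for n in (200, 350, 120)]

    # Each node sends only its row count, feature sums, and X^T X (no raw rows).
    counts   = [x.shape[0] for x in partitions]
    sums     = [x.sum(axis=0) for x in partitions]
    scatters = [x.T @ x for x in partitions]

    # The coordinator assembles the global covariance from the summaries.
    n_total = sum(counts)
    mean    = sum(sums) / n_total
    cov     = (sum(scatters) - n_total * np.outer(mean, mean)) / (n_total - 1)

    # Eigendecomposition of the symmetric covariance matrix.
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]
    principal_components = eigvecs[:, order[:2]]      # top-2 components
    print("explained variance:", eigvals[order[:2]])
    ```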

  9. Distributed Access View Integrated Database (DAVID) system

    Science.gov (United States)

    Jacobs, Barry E.

    1991-01-01

    The Distributed Access View Integrated Database (DAVID) System, which was adopted by the Astrophysics Division for their Astrophysics Data System, is a solution to the system heterogeneity problem. The heterogeneous components of the Astrophysics problem are outlined. The Library and Library Consortium levels of the DAVID approach are described, the 'books' and 'kits' level is discussed, and the Universal Object Typer Management System level is described. The relation of the DAVID project to the Small Business Innovative Research (SBIR) program is explained.

  10. Ontology based heterogeneous materials database integration and semantic query

    Science.gov (United States)

    Zhao, Shuai; Qian, Quan

    2017-10-01

    Materials digital data, high-throughput experiments, and high-throughput computations are regarded as three key pillars of materials genome initiatives. With the fast growth of materials data, the integration and sharing of data have become urgent and a hot topic in materials informatics. Due to the lack of semantic description, it is difficult to integrate data deeply at the semantic level when adopting conventional heterogeneous database integration approaches such as federated databases or data warehouses. In this paper, a semantic integration method is proposed that creates a semantic ontology by extracting the database schema semi-automatically. Other heterogeneous databases are integrated into the ontology by means of relational algebra and the rooted graph. Based on the integrated ontology, semantic queries can be issued using SPARQL. In the experiments, two well-known first-principles computational databases, OQMD and Materials Project, are used as the integration targets, which shows the availability and effectiveness of our method.
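
    A minimal rdflib sketch of the kind of semantic query described above, once heterogeneous sources have been lifted into a common ontology. The namespace, class, and property names are invented for illustration and are not the paper's actual schema.

    ```python
    from rdflib import Graph, Literal, Namespace, RDF

    MAT = Namespace("http://example.org/materials#")   # hypothetical ontology namespace
    g = Graph()

    # Triples that two heterogeneous databases might contribute after integration.
    g.add((MAT.Fe2O3, RDF.type, MAT.Compound))
    g.add((MAT.Fe2O3, MAT.bandGapEV, Literal(2.1)))
    g.add((MAT.Fe2O3, MAT.sourceDB, Literal("OQMD")))
    g.add((MAT.TiO2, RDF.type, MAT.Compound))
    g.add((MAT.TiO2, MAT.bandGapEV, Literal(3.0)))
    g.add((MAT.TiO2, MAT.sourceDB, Literal("Materials Project")))

    # One SPARQL query spans both sources through the shared ontology terms.
    query = """
    PREFIX mat: <http://example.org/materials#>
    SELECT ?compound ?gap ?db WHERE {
        ?compound a mat:Compound ;
                  mat:bandGapEV ?gap ;
                  mat:sourceDB ?db .
        FILTER (?gap > 2.5)
    }
    """
    for row in g.query(query):
        print(row.compound, row.gap, row.db)
    ```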

  11. Integrating heterogeneous databases in clustered medic care environments using object-oriented technology

    Science.gov (United States)

    Thakore, Arun K.; Sauer, Frank

    1994-05-01

    The organization of modern medical care environments into disease-related clusters, such as a cancer center, a diabetes clinic, etc., has the side effect of introducing multiple heterogeneous databases, often containing similar information, within the same organization. This heterogeneity fosters incompatibility and prevents the effective sharing of data amongst applications at different sites. Although integration of heterogeneous databases is now feasible, in the medical arena this is often an ad hoc process, not founded on proven database technology or formal methods. In this paper we illustrate the use of a high-level object-oriented semantic association method to model information found in different databases into an integrated conceptual global model that integrates the databases. We provide examples from the medical domain to illustrate an integration approach that results in a consistent global view, without compromising the autonomy of the underlying databases.

  12. Heterogeneous Biomedical Database Integration Using a Hybrid Strategy: A p53 Cancer Research Database

    Directory of Open Access Journals (Sweden)

    Vadim Y. Bichutskiy

    2006-01-01

    Full Text Available Complex problems in life science research give rise to multidisciplinary collaboration, and hence, to the need for heterogeneous database integration. The tumor suppressor p53 is mutated in close to 50% of human cancers, and a small drug-like molecule with the ability to restore native function to cancerous p53 mutants is a long-held medical goal of cancer treatment. The Cancer Research DataBase (CRDB) was designed in support of a project to find such small molecules. As a cancer informatics project, the CRDB involved small molecule data, computational docking results, functional assays, and protein structure data. As an example of the hybrid strategy for data integration, it combined the mediation and data warehousing approaches. This paper uses the CRDB to illustrate the hybrid strategy as a viable approach to heterogeneous data integration in biomedicine, and provides a design method for those considering similar systems. More efficient data sharing implies increased productivity, and, hopefully, improved chances of success in cancer research. (Code and database schemas are freely downloadable, http://www.igb.uci.edu/research/research.html.)

  13. Tradeoffs in distributed databases

    OpenAIRE

    Juntunen, R. (Risto)

    2016-01-01

    Abstract In a distributed database, data is spread throughout the network into separate nodes with different DBMSs (Date, 2000). According to the CAP theorem, the three database properties of consistency, availability, and partition tolerance cannot all be achieved simultaneously in distributed database systems; two of these properties can be achieved, but not all three at the same time (Brewer, 2000). Since this theorem there has b...

  14. Towards cloud-centric distributed database evaluation

    OpenAIRE

    Seybold, Daniel

    2016-01-01

    The area of cloud computing has also pushed the evolution of distributed databases, resulting in a variety of distributed database systems, which can be classified into relational, NoSQL, and NewSQL database systems. In general, all representatives of these database system classes claim to provide elasticity and "unlimited" horizontal scalability. As these characteristics comply with the cloud, distributed databases seem to be a perfect match for Database-as-a-Service (DBaaS) systems.

  16. Palantiri: a distributed real-time database system for process control

    International Nuclear Information System (INIS)

    Tummers, B.J.; Heubers, W.P.J.

    1992-01-01

    The medium-energy accelerator MEA, located in Amsterdam, is controlled by a heterogeneous computer network. A large real-time database contains the parameters involved in the control of the accelerator and the experiments. This database system was implemented about ten years ago and has since been extended several times. In response to increased needs the database system has been redesigned. The new database environment, as described in this paper, consists of two new concepts: (1) a Palantir, which is a per-machine process that stores the locally declared data and forwards all non-local requests for data access to the appropriate machine; it acts as a storage device for data and a looking glass upon the world. (2) Golems: working units that define the data within the Palantir and that have knowledge of the hardware they control. Applications access the data of a Golem by name (names resemble Unix path names). The Palantir that runs on the same machine as the application handles the distribution of access requests. This paper focuses on the Palantir concept as a distributed data storage and event-handling device for process control. (author)

  17. SIMS: addressing the problem of heterogeneity in databases

    Science.gov (United States)

    Arens, Yigal

    1997-02-01

    The heterogeneity of remotely accessible databases -- with respect to contents, query language, semantics, organization, etc. -- presents serious obstacles to convenient querying. The SIMS (single interface to multiple sources) system addresses this global integration problem. It does so by defining a single language for describing the domain about which information is stored in the databases and using this language as the query language. Each database to which SIMS is to provide access is modeled using this language. The model describes a database's contents, organization, and other relevant features. SIMS uses these models, together with a planning system drawing on techniques from artificial intelligence, to decompose a given user's high-level query into a series of queries against the databases and other data manipulation steps. The retrieval plan is constructed so as to minimize data movement over the network and maximize parallelism to increase execution speed. SIMS can recover from network failures during plan execution by obtaining data from alternate sources, when possible. SIMS has been demonstrated in the domains of medical informatics and logistics, using real databases.

  18. Use of Graph Database for the Integration of Heterogeneous Biological Data.

    Science.gov (United States)

    Yoon, Byoung-Ha; Kim, Seon-Kyu; Kim, Seon-Young

    2017-03-01

    Understanding complex relationships among heterogeneous biological data is one of the fundamental goals in biology. In most cases, diverse biological data are stored in relational databases, such as MySQL and Oracle, which store data in multiple tables and then infer relationships by multiple-join statements. Recently, a new type of database, called the graph-based database, was developed to natively represent various kinds of complex relationships, and it is widely used among computer science communities and IT industries. Here, we demonstrate the feasibility of using a graph-based database for complex biological relationships by comparing the performance between MySQL and Neo4j, one of the most widely used graph databases. We collected various biological data (protein-protein interaction, drug-target, gene-disease, etc.) from several existing sources, removed duplicate and redundant data, and finally constructed a graph database containing 114,550 nodes and 82,674,321 relationships. When we tested the query execution performance of MySQL versus Neo4j, we found that Neo4j outperformed MySQL in all cases. While Neo4j exhibited a very fast response for various queries, MySQL exhibited latent or unfinished responses for complex queries with multiple-join statements. These results show that using graph-based databases, such as Neo4j, is an efficient way to store complex biological relationships. Moreover, querying a graph database in diverse ways has the potential to reveal novel relationships among heterogeneous biological data.

  19. Development of database on the distribution coefficient. 2. Preparation of database

    Energy Technology Data Exchange (ETDEWEB)

    Takebe, Shinichi; Abe, Masayoshi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2001-03-01

    The distribution coefficient is a very important parameter for environmental impact assessment of the disposal of radioactive waste arising from research institutes. The 'Database on the Distribution Coefficient' was built up from information obtained through a domestic literature survey covering items such as the value, measuring method, and measurement conditions of the distribution coefficient, in order to allow selection of reasonable distribution coefficient values for use in safety evaluations. This report explains the outline of the preparation of this database and serves as a user guide for it. (author)

  1. BioCarian: search engine for exploratory searches in heterogeneous biological databases.

    Science.gov (United States)

    Zaki, Nazar; Tennakoon, Chandana

    2017-10-02

    There are a large number of biological databases publicly available to scientists on the web, as well as many private databases generated in the course of research projects. These databases come in a wide variety of formats. Web standards have evolved in recent times, and semantic web technologies are now available to interconnect diverse and heterogeneous sources of data. Therefore, integration and querying of biological databases can be facilitated by techniques used in the semantic web. Heterogeneous databases can be converted into the Resource Description Framework (RDF) and queried using the SPARQL language. Searching for exact queries in these databases is trivial; however, exploratory searches need customized solutions, especially when multiple databases are involved. This process is cumbersome and time consuming for those without a sufficient background in computer science. In this context, a search engine facilitating exploratory searches of databases would be of great help to the scientific community. We present BioCarian, an efficient and user-friendly search engine for performing exploratory searches on biological databases. The search engine is an interface for SPARQL queries over RDF databases. We note that many of the databases can be converted to tabular form. We first convert the tabular databases to RDF. The search engine provides a graphical interface based on facets to explore the converted databases. The facet interface is more advanced than conventional facets: it allows complex queries to be constructed, and has additional features like ranking of facet values based on several criteria, visually indicating the relevance of a facet value, and presenting the most important facet values when a large number of choices are available. For advanced users, SPARQL queries can be run directly on the databases. Using this feature, users will be able to incorporate federated searches of SPARQL endpoints. We used the search engine to do an exploratory search
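
    A small rdflib sketch of the tabular-to-RDF step and a facet-style filter query, in the spirit of the workflow described above; the column names and namespace are invented for illustration and are not BioCarian's actual data model.

    ```python
    import csv, io
    from rdflib import Graph, Literal, Namespace, RDF

    BIO = Namespace("http://example.org/bio#")          # hypothetical namespace
    table = io.StringIO("gene,species,expression\nTP53,human,high\nBRCA1,human,low\n")

    g = Graph()
    for row in csv.DictReader(table):                   # lift each tabular row into triples
        subject = BIO[row["gene"]]
        g.add((subject, RDF.type, BIO.Gene))
        g.add((subject, BIO.species, Literal(row["species"])))
        g.add((subject, BIO.expression, Literal(row["expression"])))

    # Facet-style query: restrict one facet (expression) and list matching genes.
    results = g.query("""
        PREFIX bio: <http://example.org/bio#>
        SELECT ?gene ?expr WHERE {
            ?gene a bio:Gene ; bio:expression ?expr .
            FILTER (str(?expr) = "high")
        }
    """)
    print([str(r.gene) for r in results])
    ```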

  2. Distributed database kriging for adaptive sampling (D2KAS)

    International Nuclear Information System (INIS)

    Roehm, Dominic; Pavel, Robert S.; Barros, Kipton; Rouet-Leduc, Bertrand; McPherson, Allen L.; Germann, Timothy C.; Junghans, Christoph

    2015-01-01

    We present an adaptive sampling method supplemented by a distributed database and a prediction method for multiscale simulations using the Heterogeneous Multiscale Method. A finite-volume scheme integrates the macro-scale conservation laws for elastodynamics, which are closed by momentum and energy fluxes evaluated at the micro-scale. In the original approach, molecular dynamics (MD) simulations are launched for every macro-scale volume element. Our adaptive sampling scheme replaces a large fraction of costly micro-scale MD simulations with fast table lookup and prediction. The cloud database Redis provides the plain table lookup, and with locality-aware hashing we gather input data for our prediction scheme. For the latter we use kriging, which estimates an unknown value and its uncertainty (error) at a specific location in parameter space by using weighted averages of the neighboring points. We find that our adaptive scheme significantly improves simulation performance by a factor of 2.5 to 25, while retaining high accuracy for various choices of the algorithm parameters.
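
    A hedged sketch of the lookup-then-predict pattern described above: check a Redis table for a previously computed micro-scale result, and if nearby entries exist, predict the value with a Gaussian-weighted average of neighbours (a simplified stand-in for full kriging, which would also propagate the prediction error). The key layout, bandwidth, and tolerances are invented.

    ```python
    import json
    import numpy as np
    import redis  # assumes a locally reachable Redis server; pip install redis

    r = redis.Redis(decode_responses=True)

    def lookup_or_predict(point, neighbors, bandwidth=0.1):
        """Return a cached flux for `point`, or a weighted prediction from neighbours.

        `neighbors` is a list of (parameter_vector, value) pairs gathered e.g. via
        locality-aware hashing; the weighting below is a simplified stand-in for kriging.
        """
        point = np.asarray(point, dtype=float)
        key = "flux:" + json.dumps(np.round(point, 3).tolist())
        cached = r.get(key)
        if cached is not None:
            return float(cached)                       # plain table lookup hit

        if neighbors:
            pts = np.array([p for p, _ in neighbors], dtype=float)
            vals = np.array([v for _, v in neighbors], dtype=float)
            w = np.exp(-np.sum((pts - point) ** 2, axis=1) / (2 * bandwidth ** 2))
            if w.sum() > 1e-12:
                prediction = float(w @ vals / w.sum())
                r.set(key, prediction)                 # cache the interpolated value
                return prediction

        return None  # caller falls back to launching a full MD simulation
    ```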

  3. The design of distributed database system for HIRFL

    International Nuclear Information System (INIS)

    Wang Hong; Huang Xinmin

    2004-01-01

    This paper focuses on a distributed database system used in the HIRFL distributed control system. The database of this distributed database system is established with SQL Server 2000, and its application system adopts the Client/Server model. Visual C++ is used to develop the applications, which access the database through ODBC. (authors)

  4. Distributed Database Semantic Integration of Wireless Sensor Network to Access the Environmental Monitoring System

    Directory of Open Access Journals (Sweden)

    Ubaidillah Umar

    2018-06-01

    Full Text Available A wireless sensor network (WSN) works continuously to gather information from sensors that generate large volumes of data to be handled and processed by applications. Current efforts in sensor networks focus more on networking and development services for a variety of applications and less on processing and integrating data from heterogeneous sensors. There is an increased need for information to become shareable across different sensors, database platforms, and applications that are not easily implemented in traditional database systems. To solve the issue of these large amounts of data from different servers and database platforms (including sensor data), a semantic sensor web service platform is needed to enable a machine to extract meaningful information from the sensor’s raw data. This additionally helps to minimize and simplify data processing and to deduce new information from existing data. This paper implements a semantic web data platform (SWDP) to manage the distribution of data sensors based on the semantic database system. SWDP uses sensors for temperature, humidity, carbon monoxide, carbon dioxide, luminosity, and noise. The system uses the Sesame semantic web database for data processing and a WSN to distribute, minimize, and simplify information processing. The sensor nodes are distributed in different places to collect sensor data. The SWDP generates context information in the form of a resource description framework. The experiment results demonstrate that the SWDP is more efficient than the traditional database system in terms of memory usage and processing time.

  5. Software for Distributed Computation on Medical Databases: A Demonstration Project

    Directory of Open Access Journals (Sweden)

    Balasubramanian Narasimhan

    2017-05-01

    Full Text Available Bringing together the information latent in distributed medical databases promises to personalize medical care by enabling reliable, stable modeling of outcomes with rich feature sets (including patient characteristics and treatments received). However, there are barriers to aggregation of medical data, due to lack of standardization of ontologies, privacy concerns, proprietary attitudes toward data, and a reluctance to give up control over end use. Aggregation of data is not always necessary for model fitting. In models based on maximizing a likelihood, the computations can be distributed, with aggregation limited to the intermediate results of calculations on local data, rather than raw data. Distributed fitting is also possible for singular value decomposition. There has been work on the technical aspects of shared computation for particular applications, but little has been published on the software needed to support the "social networking" aspect of shared computing, to reduce the barriers to collaboration. We describe a set of software tools that allow the rapid assembly of a collaborative computational project, based on the flexible and extensible R statistical software and other open source packages, that can work across a heterogeneous collection of database environments, with full transparency to allow local officials concerned with privacy protections to validate the safety of the method. We describe the principles, architecture, and successful test results for the site-stratified Cox model and rank-k singular value decomposition.
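
    A numpy sketch of the core idea that aggregation can be limited to intermediate results of likelihood calculations: each site computes the gradient of its local log-likelihood (a plain logistic model here, for brevity, rather than the site-stratified Cox model the record describes) and only these summaries are pooled for each update step.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def local_gradient(X, y, beta):
        """Gradient of the local logistic log-likelihood; only this summary leaves the site."""
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        return X.T @ (y - p)

    # Three hospitals with private data; raw rows never leave the site.
    true_beta = np.array([0.8, -0.5, 0.3])
    sites = []
    for n in (400, 250, 600):
        X = rng.normal(size=(n, 3))
        y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ true_beta)))
        sites.append((X, y))

    beta = np.zeros(3)
    for _ in range(200):                               # simple distributed gradient ascent
        grad = sum(local_gradient(X, y, beta) for X, y in sites)
        beta += 1e-3 * grad
    print("estimated coefficients:", beta.round(2))
    ```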

  6. Concurrency control in distributed database systems

    CERN Document Server

    Cellary, W; Gelenbe, E

    1989-01-01

    Distributed Database Systems (DDBS) may be defined as integrated database systems composed of autonomous local databases, geographically distributed and interconnected by a computer network. The purpose of this monograph is to present DDBS concurrency control algorithms and their related performance issues. The most recent results have been taken into consideration. A detailed analysis and selection of these results has been made so as to include those which will promote applications and progress in the field. The application of the methods and algorithms presented is not limited to DDBSs but a
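
    A tiny illustration of one classical concurrency-control building block in this area, strict two-phase locking: a transaction acquires all its locks before writing and releases them only at commit. This is a generic textbook sketch, not an algorithm taken from the monograph.

    ```python
    import threading

    class LockManager:
        """Strict two-phase locking: grow (acquire) then shrink (release at commit)."""
        def __init__(self):
            self._locks = {}            # data item -> threading.Lock
            self._guard = threading.Lock()

        def _lock_for(self, item):
            with self._guard:
                return self._locks.setdefault(item, threading.Lock())

        def acquire_all(self, items):
            for item in sorted(items):  # fixed ordering avoids deadlock in this sketch
                self._lock_for(item).acquire()

        def release_all(self, items):
            for item in sorted(items):
                self._lock_for(item).release()

    lm = LockManager()
    store = {"x": 0, "y": 0}

    def transfer(amount):
        items = ("x", "y")
        lm.acquire_all(items)           # growing phase
        try:
            store["x"] -= amount        # writes happen while all locks are held
            store["y"] += amount
        finally:
            lm.release_all(items)       # shrinking phase at commit

    threads = [threading.Thread(target=transfer, args=(1,)) for _ in range(100)]
    for t in threads: t.start()
    for t in threads: t.join()
    print(store)                        # {'x': -100, 'y': 100}
    ```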

  7. Measuring the effects of heterogeneity on distributed systems

    Science.gov (United States)

    El-Toweissy, Mohamed; Zeineldine, Osman; Mukkamala, Ravi

    1991-01-01

    Distributed computer systems in daily use are becoming more and more heterogeneous. Currently, much of the design and analysis of such systems assumes homogeneity. This assumption of homogeneity has been driven mainly by the resulting simplicity in modeling and analysis. A simulation study is presented which investigated the effects of heterogeneity on scheduling algorithms for hard real-time distributed systems. In contrast to previous results which indicate that random scheduling may be as good as a more complex scheduler, the algorithm studied here is shown to be consistently better than a random scheduler. This conclusion is more prevalent at high workloads as well as at high levels of heterogeneity.

  8. A performance study on the synchronisation of heterogeneous Grid databases using CONStanza

    CERN Document Server

    Pucciani, G; Domenici, Andrea; Stockinger, Heinz

    2010-01-01

    In Grid environments, several heterogeneous database management systems are used in various administrative domains. However, data exchange and synchronisation need to be available across different sites and different database systems. In this article we present our data consistency service CONStanza and give details on how we achieve relaxed update synchronisation between different database implementations. The integration in existing Grid environments is one of the major goals of the system. Performance tests have been executed following a factorial approach. Detailed experimental results and a statistical analysis are presented to evaluate the system components and drive future developments. (C) 2010 Elsevier B.V. All rights reserved.

  9. The Development of a Combined Search for a Heterogeneous Chemistry Database

    Directory of Open Access Journals (Sweden)

    Lulu Jiang

    2015-05-01

    Full Text Available A combined search, which joins a slow molecule structure search with a fast compound property search, results in more accurate search results and has been applied in several chemistry databases. However, the problems of search speed differences and combining the two separate search results are two major challenges. In this paper, two kinds of search strategies, synchronous search and asynchronous search, are proposed to solve these problems in the heterogeneous structure database and the property database found in ChemDB, a chemistry database owned by the Institute of Process Engineering, CAS. Their advantages and disadvantages under different conditions are discussed in detail. Furthermore, we applied these two searches to ChemDB and used them to screen for potential molecules that can work as CO2 absorbents. The results reveal that this combined search discovers reasonable target molecules within an acceptable time frame.
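
    A brief sketch of the combination problem described above: the fast property search and the slow structure search run concurrently and the combined result is their intersection. The search functions, timings, and identifiers are invented placeholders, not ChemDB's actual interfaces.

    ```python
    import time
    from concurrent.futures import ThreadPoolExecutor

    def property_search(query):          # fast search on the property database (stub)
        time.sleep(0.1)
        return {"mol_001", "mol_002", "mol_003"}

    def structure_search(query):         # slow substructure search (stub)
        time.sleep(2.0)
        return {"mol_002", "mol_003", "mol_009"}

    def combined_search(query):
        """Asynchronous strategy: launch both searches, join the results when both finish."""
        with ThreadPoolExecutor(max_workers=2) as pool:
            fast = pool.submit(property_search, query)
            slow = pool.submit(structure_search, query)
            return fast.result() & slow.result()

    print(combined_search("CO2 absorbent candidates"))   # {'mol_002', 'mol_003'}
    ```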

  10. Effect of Heterogeneity in Initial Geographic Distribution on Opinions’ Competitiveness

    Directory of Open Access Journals (Sweden)

    Alexander S. Balankin

    2015-05-01

    Full Text Available Spin dynamics on networks allows us to understand how a global consensus emerges out of individual opinions. Here, we are interested in the effect of heterogeneity in the initial geographic distribution of a competing opinion on that opinion's competitiveness. Accordingly, in this work, we studied the effect of spatial heterogeneity on majority-rule dynamics using a three-state spin model, in which one state is neutral. Monte Carlo simulations were performed on square lattices divided into square blocks (cells). One competing opinion was distributed uniformly among cells, whereas the spatial distribution of the rival opinion was varied from uniform to heterogeneous, with the median-to-mean ratio in the range from 1 to 0. When the size of the discussion group is odd, the uncommitted agents disappear completely after 3.30 ± 0.05 update cycles, and then the system evolves in a two-state regime with complementary spatial distributions of the two competing opinions. Even so, the initial heterogeneity in the spatial distribution of one of the competing opinions causes a decrease in that opinion's competitiveness. That is, the opinion with an initially heterogeneous spatial distribution has a lower probability of winning than the opinion with an initially uniform spatial distribution, even when the initial concentrations of both opinions are equal. We found that, although the time to consensus, the opinion's recession rate is determined during the first 3.3 update cycles. On the other hand, we found that the initial heterogeneity of the opinion's spatial distribution assists the formation of quasi-stable regions in which this opinion is dominant. The results of the Monte Carlo simulations are discussed with regard to the electoral competition of political parties.

  11. Comparison of the Frontier Distributed Database Caching System with NoSQL Databases

    CERN Document Server

    Dykstra, David

    2012-01-01

    One of the main attractions of non-relational "NoSQL" databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also has high scalability and wide-area distributability for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.

  12. Heterogeneous Distribution of Chromium on Mercury

    Science.gov (United States)

    Nittler, L. R.; Boujibar, A.; Crapster-Pregont, E.; Frank, E. A.; McCoy, T. J.; McCubbin, F. M.; Starr, R. D.; Vander Kaaden, K. E.; Vorburger, A.; Weider, S. Z.

    2018-05-01

    Mercury's surface has an average Cr/Si ratio of 0.003 (Cr 800 ppm), with at least a factor of 2 systematic uncertainty. Cr is heterogeneously distributed and correlated with Mg, Ca, S, and Fe and anti-correlated with Al.

  13. Heterogeneous game resource distributions promote cooperation in spatial prisoner's dilemma game

    Science.gov (United States)

    Cui, Guang-Hai; Wang, Zhen; Yang, Yan-Cun; Tian, Sheng-Wen; Yue, Jun

    2018-01-01

    In social networks, individual abilities to establish interactions are always heterogeneous and independent of the number of topological neighbors. Here we study the influence of heterogeneous distributions of these abilities on the evolution of cooperation in the spatial prisoner's dilemma game. First, we introduce a prisoner's dilemma game that takes into account individuals' heterogeneous abilities to establish games, which are determined by the game resources they own. Second, we study three types of game resource distributions that follow a power law. Simulation results show that a heterogeneous distribution of individual game resources can promote cooperation effectively, and that the heterogeneity level of the resource distribution has a positive influence on the maintenance of cooperation. Extensive analysis shows that cooperators with large resource capacities can foster cooperator clusters around themselves. Furthermore, when the temptation to defect is high, cooperator clusters in which the central pure cooperators have larger game resource capacities are more stable than other cooperator clusters.

  14. Aspects of the design of distributed databases

    OpenAIRE

    Burlacu Irina-Andreea

    2011-01-01

    Distributed data is data, processed by a system, that can be distributed among several computers but is accessible from any of them. A distributed database design problem is presented that involves the development of a global model, a fragmentation, and a data allocation. The student is given a conceptual entity-relationship model for the database and a description of the transactions and a generic network environment. A stepwise solution approach to this problem is shown, based on mean value a...

  15. Comparison of the Frontier Distributed Database Caching System to NoSQL Databases

    Science.gov (United States)

    Dykstra, Dave

    2012-12-01

    One of the main attractions of non-relational “NoSQL” databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also adds high scalability and the ability to be distributed over a wide-area for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.

  16. Comparison of the Frontier Distributed Database Caching System to NoSQL Databases

    International Nuclear Information System (INIS)

    Dykstra, Dave

    2012-01-01

    One of the main attractions of non-relational “NoSQL” databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also adds high scalability and the ability to be distributed over a wide-area for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.

  17. Comparison of the Frontier Distributed Database Caching System to NoSQL Databases

    Energy Technology Data Exchange (ETDEWEB)

    Dykstra, Dave [Fermilab

    2012-07-20

    One of the main attractions of non-relational NoSQL databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also adds high scalability and the ability to be distributed over a wide-area for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.

  18. Comparison of the Frontier Distributed Database Caching System with NoSQL Databases

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Non-relational "NoSQL" databases such as Cassandra and CouchDB are best known for their ability to scale to large numbers of clients spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects, is based on traditional SQL databases but also has the same high scalability and wide-area distributability for an important subset of applications. This paper compares the architectures, behavior, performance, and maintainability of the two different approaches and identifies the criteria for choosing which approach to prefer over the other.

  19. A distributed scheduling algorithm for heterogeneous real-time systems

    Science.gov (United States)

    Zeineldine, Osman; El-Toweissy, Mohamed; Mukkamala, Ravi

    1991-01-01

    Much of the previous work on load balancing and scheduling in distributed environments was concerned with homogeneous systems and homogeneous loads. Several of the results indicated that random policies are as effective as other, more complex load allocation policies. Here, the effects of heterogeneity on scheduling algorithms for hard real-time systems are examined. A distributed scheduler designed specifically to handle heterogeneities in both nodes and node traffic is proposed. The performance of the algorithm is measured in terms of the percentage of jobs discarded. While random task allocation is very sensitive to heterogeneities, the algorithm is shown to be robust to such non-uniformities in system components and load.
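
    A toy sketch of the comparison the study makes: random task allocation versus an allocation that accounts for node heterogeneity (here, different processing speeds). The node speeds, deadlines, and discard criterion are invented for illustration and do not reproduce the paper's simulation model.

    ```python
    import random
    random.seed(0)

    nodes = [{"speed": s, "busy_until": 0.0} for s in (1.0, 0.5, 2.0, 0.25)]  # heterogeneous speeds
    jobs = [{"arrival": t * 0.2, "work": 1.0, "deadline": t * 0.2 + 2.5} for t in range(200)]

    def run(policy):
        for n in nodes:
            n["busy_until"] = 0.0
        discarded = 0
        for job in jobs:
            node = policy(job)
            start = max(job["arrival"], node["busy_until"])
            finish = start + job["work"] / node["speed"]
            if finish > job["deadline"]:
                discarded += 1               # job cannot meet its deadline and is dropped
            else:
                node["busy_until"] = finish
        return discarded / len(jobs)

    random_policy = lambda job: random.choice(nodes)
    aware_policy = lambda job: min(
        nodes, key=lambda n: max(job["arrival"], n["busy_until"]) + job["work"] / n["speed"]
    )

    print("random discard rate:        ", run(random_policy))
    print("heterogeneity-aware discard:", run(aware_policy))
    ```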

  20. Integrating CLIPS applications into heterogeneous distributed systems

    Science.gov (United States)

    Adler, Richard M.

    1991-01-01

    SOCIAL is an advanced, object-oriented development tool for integrating intelligent and conventional applications across heterogeneous hardware and software platforms. SOCIAL defines a family of 'wrapper' objects called agents, which incorporate predefined capabilities for distributed communication and control. Developers embed applications within agents and establish interactions between distributed agents via non-intrusive message-based interfaces. This paper describes a predefined SOCIAL agent that is specialized for integrating C Language Integrated Production System (CLIPS)-based applications. The agent's high-level Application Programming Interface supports bidirectional flow of data, knowledge, and commands to other agents, enabling CLIPS applications to initiate interactions autonomously, and respond to requests and results from heterogeneous remote systems. The design and operation of CLIPS agents are illustrated with two distributed applications that integrate CLIPS-based expert systems with other intelligent systems for isolating and mapping problems in the Space Shuttle Launch Processing System at the NASA Kennedy Space Center.

  1. The ATLAS Distributed Data Management System & Databases

    CERN Document Server

    Garonne, V; The ATLAS collaboration; Barisits, M; Beermann, T; Vigne, R; Serfon, C

    2013-01-01

    The ATLAS Distributed Data Management (DDM) System is responsible for the global management of petabytes of high energy physics data. The current system, DQ2, has a critical dependency on Relational Database Management Systems (RDBMS), like Oracle. RDBMS are well-suited to enforcing data integrity in online transaction processing applications, however, concerns have been raised about the scalability of its data warehouse-like workload. In particular, analysis of archived data or aggregation of transactional data for summary purposes is problematic. Therefore, we have evaluated new approaches to handle vast amounts of data. We have investigated a class of database technologies commonly referred to as NoSQL databases. This includes distributed filesystems, like HDFS, that support parallel execution of computational tasks on distributed data, as well as schema-less approaches via key-value stores, like HBase. In this talk we will describe our use cases in ATLAS, share our experiences with various databases used ...

  2. A Preliminary Study on the Multiple Mapping Structure of Classification Systems for Heterogeneous Databases

    OpenAIRE

    Seok-Hyoung Lee; Hwan-Min Kim; Ho-Seop Choe

    2012-01-01

    While science and technology information service portals and heterogeneous databases produced in Korea and other countries are being integrated, methods of connecting the unique classification systems applied to each database have been studied. Results of technologists' research, such as journal articles, patent specifications, and research reports, are organically related to each other. In this case, if the most basic and meaningful classification systems are not connected, it is difficult to ach...

  3. Distributed Database Management Systems A Practical Approach

    CERN Document Server

    Rahimi, Saeed K

    2010-01-01

    This book addresses issues related to managing data across a distributed database system. It is unique because it covers traditional database theory and current research, explaining the difficulties in providing a unified user interface and global data dictionary. The book gives implementers guidance on hiding discrepancies across systems and creating the illusion of a single repository for users. It also includes three sample frameworks, implemented using J2SE with JMS, J2EE, and Microsoft .NET, that readers can use to learn how to implement a distributed database management system. IT and

  4. LHCb distributed conditions database

    International Nuclear Information System (INIS)

    Clemencic, M

    2008-01-01

    The LHCb Conditions Database project provides the necessary tools to handle non-event time-varying data. The main users of conditions are reconstruction and analysis processes, which are running on the Grid. To allow efficient access to the data, we need to use a synchronized replica of the content of the database located at the same site as the event data file, i.e. the LHCb Tier1. The replica to be accessed is selected from information stored on LFC (LCG File Catalog) and managed with the interface provided by the LCG developed library CORAL. The plan to limit the submission of jobs to those sites where the required conditions are available will also be presented. LHCb applications are using the Conditions Database framework on a production basis since March 2007. We have been able to collect statistics on the performance and effectiveness of both the LCG library COOL (the library providing conditions handling functionalities) and the distribution framework itself. Stress tests on the CNAF hosted replica of the Conditions Database have been performed and the results will be summarized here

  5. The ATLAS TAGS database distribution and management - Operational challenges of a multi-terabyte distributed database

    International Nuclear Information System (INIS)

    Viegas, F; Nairz, A; Goossens, L; Malon, D; Cranshaw, J; Dimitrov, G; Nowak, M; Gamboa, C; Gallas, E; Wong, A; Vinek, E

    2010-01-01

    The TAG files store summary event quantities that allow a quick selection of interesting events. This data will be produced at a nominal rate of 200 Hz, and is uploaded into a relational database for access from websites and other tools. The estimated database volume is 6 TB per year, making it the largest application running on the ATLAS relational databases, at CERN and at other voluntary sites. The sheer volume and high rate of production make this application a challenge to data and resource management, in many aspects. This paper will focus on the operational challenges of this system. These include: uploading the data from files to CERN's and remote sites' databases; distributing the TAG metadata that is essential to guide the user through event selection; and controlling resource usage of the database, from the user query load to the strategy of cleaning and archiving of old TAG data.

  6. The ATLAS TAGS database distribution and management - Operational challenges of a multi-terabyte distributed database

    Energy Technology Data Exchange (ETDEWEB)

    Viegas, F; Nairz, A; Goossens, L [CERN, CH-1211 Geneve 23 (Switzerland); Malon, D; Cranshaw, J [Argonne National Laboratory, 9700 S. Cass Avenue, Argonne, IL 60439 (United States); Dimitrov, G [DESY, D-22603 Hamburg (Germany); Nowak, M; Gamboa, C [Brookhaven National Laboratory, PO Box 5000 Upton, NY 11973-5000 (United States); Gallas, E [University of Oxford, Denys Wilkinson Building, Keble Road, Oxford OX1 3RH (United Kingdom); Wong, A [Triumf, 4004 Wesbrook Mall, Vancouver, BC, V6T 2A3 (Canada); Vinek, E [University of Vienna, Dr.-Karl-Lueger-Ring 1, 1010 Vienna (Austria)

    2010-04-01

    The TAG files store summary event quantities that allow a quick selection of interesting events. This data will be produced at a nominal rate of 200 Hz, and is uploaded into a relational database for access from websites and other tools. The estimated database volume is 6 TB per year, making it the largest application running on the ATLAS relational databases, at CERN and at other voluntary sites. The sheer volume and high rate of production make this application a challenge to data and resource management, in many aspects. This paper will focus on the operational challenges of this system. These include: uploading the data from files to CERN's and remote sites' databases; distributing the TAG metadata that is essential to guide the user through event selection; and controlling resource usage of the database, from the user query load to the strategy of cleaning and archiving of old TAG data.

  7. Secure Distributed Databases Using Cryptography

    Directory of Open Access Journals (Sweden)

    Ion IVAN

    2006-01-01

    Full Text Available Computational encryption is used intensively by database management systems to ensure the privacy and integrity of information that is physically stored in files. The information is also sent over networks and replicated on different distributed systems. It is proved that a satisfactory level of security is achieved if the rows and columns of tables are encrypted independently of the table or computer that holds the data. It is also very important that SQL (Structured Query Language) query requests and responses be encrypted over the network connection between the client and the database server. All these techniques and methods must be implemented by database administrators, designers, and developers as part of a consistent security policy.
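
    A minimal sketch of the idea of encrypting values independently of the table or machine that holds them, using the symmetric Fernet scheme from the Python cryptography package. The column names are hypothetical, and in a real deployment key handling and column-level policies would come from the consistent security policy the article calls for.

    ```python
    from cryptography.fernet import Fernet  # pip install cryptography

    key = Fernet.generate_key()     # in practice, held by a key-management service
    cipher = Fernet(key)

    # Encrypt each sensitive cell independently before it is stored or replicated.
    row = {"patient_id": "P-104", "diagnosis": "hypertension"}
    stored_row = {col: cipher.encrypt(val.encode()) for col, val in row.items()}

    # The replicated ciphertext is opaque to any node that does not hold the key.
    print(stored_row["diagnosis"])
    print(cipher.decrypt(stored_row["diagnosis"]).decode())   # authorized client side
    ```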

  8. Cellular- and micro-dosimetry of heterogeneously distributed tritium.

    Science.gov (United States)

    Chao, Tsi-Chian; Wang, Chun-Ching; Li, Junli; Li, Chunyan; Tung, Chuan-Jong

    2012-01-01

    The assessment of radiotoxicity for heterogeneously distributed tritium should be based on the subcellular dose and relative biological effectiveness (RBE) for the cell nucleus. In the present work, geometry-dependent absorbed dose and RBE were calculated using Monte Carlo codes for tritium in the cell, cell surface, cytoplasm, or cell nucleus. The Penelope (PENetration and Energy LOss of Positrons and Electrons) code was used to calculate the geometry-dependent absorbed dose, lineal energy, and electron fluence spectrum. RBE for the intestinal crypt regeneration was calculated using a lineal energy-dependent biological weighting function. RBE for the induction of DNA double strand breaks was estimated using a nucleotide-level map for clustered DNA lesions of the Monte Carlo damage simulation (MCDS) code. For a typical cell of 10 μm radius and 5 μm nuclear radius, tritium in the cell nucleus resulted in a much higher RBE-weighted absorbed dose than tritium distributed uniformly. Conversely, tritium distributed on the cell surface led to a trivial RBE-weighted absorbed dose due to irradiation geometry and great attenuation of beta particles in the cytoplasm. For tritium uniformly distributed in the cell, the RBE-weighted absorbed dose was larger compared to tritium uniformly distributed in the tissue. Cellular- and micro-dosimetry models were developed for the assessment of heterogeneously distributed tritium.

  9. PostGIS-Based Heterogeneous Sensor Database Framework for the Sensor Observation Service

    Directory of Open Access Journals (Sweden)

    Ikechukwu Maduako

    2012-10-01

    Full Text Available Environmental monitoring and management systems in most cases deal with models and spatial analytics that involve the integration of in-situ and remote sensor observations. In-situ sensor observations and those gathered by remote sensors are usually provided by different databases and services in real-time dynamic services such as the Geo-Web Services. Thus, data have to be pulled from different databases and transferred over the network before they are fused and processed on the service middleware. This places massive and unnecessary communication and workload on the service: large rasters are downloaded from flat-file raster data sources each time a request is made, and heavy integration and geo-processing work lands on the service middleware, although it could be better leveraged at the database level. In this paper, we propose and present a heterogeneous sensor database framework for the integration, geo-processing, and spatial analysis of remote and in-situ sensor observations at the database level, and we show how this can be integrated into the Sensor Observation Service (SOS) to reduce communication and massive workload on the Geospatial Web Services, as well as to make query requests from the user end a lot more flexible.

  10. Heterogeneity of D-Serine Distribution in the Human Central Nervous System

    Science.gov (United States)

    Suzuki, Masataka; Imanishi, Nobuaki; Mita, Masashi; Hamase, Kenji; Aiso, Sadakazu

    2017-01-01

    D-serine is an endogenous ligand for N-methyl-D-aspartate glutamate receptors. Accumulating evidence, including genetic associations of D-serine metabolism with neurological or psychiatric diseases, suggests that D-serine is crucial in human neurophysiology. However, the distribution and regulation of D-serine in humans are not well understood. Here, we found that D-serine is heterogeneously distributed in the human central nervous system (CNS). The cerebrum contains the highest level of D-serine among the areas of the CNS. There is heterogeneity in its distribution in the cerebrum and even within the cerebral neocortex. The neocortical heterogeneity is associated with Brodmann or functional areas but is unrelated to basic patterns of cortical layer structure or regional variation in the expression of metabolic enzymes for D-serine. Such a D-serine distribution may reflect the functional diversity of glutamatergic neurons in the human CNS, which may serve as a basis for clinical and pharmacological studies on D-serine modulation. PMID:28604057

  11. DAD - Distributed Adamo Database system at Hermes

    International Nuclear Information System (INIS)

    Wander, W.; Dueren, M.; Ferstl, M.; Green, P.; Potterveld, D.; Welch, P.

    1996-01-01

    Software development for the HERMES experiment faces the challenges of many other experiments in modern High Energy Physics: complex data structures and relationships have to be processed at high I/O rates. Experimental control and data analysis are done in a distributed environment of CPUs with various operating systems and require access to different time-dependent databases such as calibration and geometry. Slow control and experiment control need flexible inter-process communication. Program development is done in different programming languages, where interfaces to the libraries should not restrict the capabilities of the language. The needs of handling complex data structures are fulfilled by the ADAMO entity-relationship model. Mixed-language programming can be provided using the CFORTRAN package. DAD, the Distributed ADAMO Database library, was developed to provide the I/O and database functionality requirements. (author)

  12. Distributed Database Access in the LHC Computing Grid with CORAL

    CERN Document Server

    Molnár, Z; Düllmann, D; Giacomo, G; Kalkhof, A; Valassi, A; CERN. Geneva. IT Department

    2009-01-01

    The CORAL package is the LCG Persistency Framework foundation for accessing relational databases. From the start CORAL has been designed to facilitate the deployment of the LHC experiment database applications in a distributed computing environment. In particular we cover: improvements to database service scalability by client connection management; platform-independent, multi-tier scalable database access by connection multiplexing and caching; and a secure authentication and authorisation scheme integrated with existing grid services. We will summarize the deployment experience from several experiment productions using the distributed database infrastructure, which is now available in LCG. Finally, we present perspectives for future developments in this area.

  13. An approach for heterogeneous and loosely coupled geospatial data distributed computing

    Science.gov (United States)

    Chen, Bin; Huang, Fengru; Fang, Yu; Huang, Zhou; Lin, Hui

    2010-07-01

    Most GIS (Geographic Information System) applications tend to have heterogeneous and autonomous geospatial information resources, and the availability of these local resources is unpredictable and dynamic under a distributed computing environment. In order to make use of these local resources together to solve larger geospatial information processing problems that are related to an overall situation, in this paper, with the support of peer-to-peer computing technologies, we propose a geospatial data distributed computing mechanism that involves loosely coupled geospatial resource directories and a term named as Equivalent Distributed Program of global geospatial queries to solve geospatial distributed computing problems under heterogeneous GIS environments. First, a geospatial query process schema for distributed computing as well as a method for equivalent transformation from a global geospatial query to distributed local queries at SQL (Structured Query Language) level to solve the coordinating problem among heterogeneous resources are presented. Second, peer-to-peer technologies are used to maintain a loosely coupled network environment that consists of autonomous geospatial information resources, thus to achieve decentralized and consistent synchronization among global geospatial resource directories, and to carry out distributed transaction management of local queries. Finally, based on the developed prototype system, example applications of simple and complex geospatial data distributed queries are presented to illustrate the procedure of global geospatial information processing.
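A minimal sketch of the kind of equivalent transformation described above, under the assumption that each peer exposes its local data through an SQL interface: the global bounding-box query is rewritten into identical local queries that run on each autonomous peer in parallel, and the partial results are merged. Peer endpoints, table and column names are hypothetical, and sqlite3 merely stands in for an arbitrary local GIS database.

```python
# Hedged sketch of an "equivalent distributed program" for a global query:
# rewrite the global bounding-box query into per-peer local SQL and merge.
import sqlite3
from concurrent.futures import ThreadPoolExecutor

LOCAL_SQL = ("SELECT id, name, lon, lat FROM features "
             "WHERE lon BETWEEN ? AND ? AND lat BETWEEN ? AND ?")

def run_local_query(peer, bbox):
    """Execute the rewritten local query on one peer (here a sqlite file)."""
    with sqlite3.connect(peer) as conn:
        return conn.execute(LOCAL_SQL, bbox).fetchall()

def global_query(peers, bbox):
    """Fan the equivalent local queries out to all peers and merge the parts."""
    with ThreadPoolExecutor() as pool:
        partials = list(pool.map(lambda peer: run_local_query(peer, bbox), peers))
    return [row for part in partials for row in part]

# bbox = (lon_min, lon_max, lat_min, lat_max); peers are local database paths.
```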

  14. Content-Agnostic Malware Detection in Heterogeneous Malicious Distribution Graph

    KAUST Repository

    Alabdulmohsin, Ibrahim

    2016-10-26

    Malware detection has been widely studied by analysing either file dropping relationships or characteristics of the file distribution network. This paper, for the first time, studies a global heterogeneous malware delivery graph fusing file dropping relationship and the topology of the file distribution network. The integration offers a unique ability of structuring the end-to-end distribution relationship. However, it brings large heterogeneous graphs to analysis. In our study, an average daily generated graph has more than 4 million edges and 2.7 million nodes that differ in type, such as IPs, URLs, and files. We propose a novel Bayesian label propagation model to unify the multi-source information, including content-agnostic features of different node types and topological information of the heterogeneous network. Our approach does not need to examine the source codes nor inspect the dynamic behaviours of a binary. Instead, it estimates the maliciousness of a given file through a semi-supervised label propagation procedure, which has a linear time complexity w.r.t. the number of nodes and edges. The evaluation on 567 million real-world download events validates that our proposed approach efficiently detects malware with a high accuracy. © 2016 Copyright held by the owner/author(s).
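A hedged sketch of the semi-supervised propagation step named in the abstract, not the paper's exact Bayesian model: maliciousness scores spread from a few labelled seed nodes over the heterogeneous file/URL/IP graph, with one sweep costing time linear in the number of edges.

```python
# Hedged sketch of semi-supervised label propagation on a heterogeneous graph.
# Unlabelled nodes repeatedly average their neighbours' scores; labelled seeds
# (known malicious = 1, known benign = 0) stay clamped.
import numpy as np
from scipy.sparse import csr_matrix

def propagate(adj: csr_matrix, seeds: dict, n_nodes: int, iters: int = 20):
    score = np.full(n_nodes, 0.5)            # prior: unknown maliciousness
    for node, label in seeds.items():
        score[node] = label
    deg = np.asarray(adj.sum(axis=1)).ravel()
    deg[deg == 0] = 1.0                      # avoid division by zero
    for _ in range(iters):                   # each sweep is O(edges)
        score = adj.dot(score) / deg         # neighbour average
        for node, label in seeds.items():    # re-clamp labelled seeds
            score[node] = label
    return score

# Toy graph: file0 - url1 - file2, with file0 malicious and file2 benign.
rows, cols = [0, 1, 1, 2], [1, 0, 2, 1]
adj = csr_matrix((np.ones(4), (rows, cols)), shape=(3, 3))
print(propagate(adj, seeds={0: 1.0, 2: 0.0}, n_nodes=3))
```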

  15. Klaim-DB: A Modeling Language for Distributed Database Applications

    DEFF Research Database (Denmark)

    Wu, Xi; Li, Ximeng; Lluch Lafuente, Alberto

    2015-01-01

    We present the modelling language, Klaim-DB, for distributed database applications. Klaim-DB borrows the distributed nets of the coordination language Klaim but essentially re-incarnates the tuple spaces of Klaim as databases, and provides high-level language abstractions for the access and manipulation of structured data, with integrity and atomicity considerations. We present the formal semantics of Klaim-DB and illustrate the use of the language in a scenario where the sales from different branches of a chain of department stores are aggregated from their local databases. It can be seen that raising the abstraction level and encapsulating integrity checks (concerning the schema of tables, etc.) in the language primitives for database operations benefit the modelling task considerably.

  16. Obtaining contaminant arrival distributions for steady flow in heterogeneous systems

    International Nuclear Information System (INIS)

    Anon.

    1977-01-01

    The versatility of the new contaminant arrival distributions for determining environmental consequences of subsurface pollution problems is demonstrated through application to a field example involving land drainage in heterogeneous porous materials. Though the four phases of the hydrologic evaluations are complicated because of the material heterogeneity encountered in the field problem, the arrival distributions still effectively summarize the minimal amount of data required to determine the environmental implications. These arrival distributions yield a single graph or tabular set of data giving the consequences of the subsurface pollution problems. Accordingly, public control authorities would be well advised to request that the results of subsurface pollution investigations be provided in the form of arrival distributions and the resulting simpler summary curve or tabulation. Such an objective is most easily accomplished through compliance with the requirements for assuring a complete subsurface evaluation

  17. A MODEL OF HETEROGENEOUS DISTRIBUTED SYSTEM FOR FOREIGN EXCHANGE PORTFOLIO ANALYSIS

    Directory of Open Access Journals (Sweden)

    Dragutin Kermek

    2006-06-01

    Full Text Available The paper investigates the design of a heterogeneous distributed system for foreign exchange portfolio analysis. The proposed model includes a few separate and dislocated but connected parts communicating through distributed mechanisms. Making the system distributed brings new perspectives to performance boosting, where a software-based load balancer plays a very important role. The desired system should spread over multiple, heterogeneous platforms in order to fulfil the open-platform goal. Building such a model incorporates different patterns, from GOF design patterns, business patterns, J2EE patterns, integration patterns, enterprise patterns and distributed design patterns to Web services patterns. The authors try to find as many appropriate patterns as possible for the planned tasks in order to capture the best modelling and programming practices.

  18. ISSUES IN MOBILE DISTRIBUTED REAL TIME DATABASES: PERFORMANCE AND REVIEW

    OpenAIRE

    VISHNU SWAROOP,; Gyanendra Kumar Gupta,; UDAI SHANKER

    2011-01-01

    The increase in small, handy electronic devices in computing has made computing more popular and useful in business. Tremendous advances in wireless networks and portable computing devices have led to the development of mobile computing. Support for real-time database systems depends upon timing constraints and the availability of data in distributed databases, and ubiquitous computing pulls the mobile database concept into a new form of technology as mobile distributed ...

  19. PRISMA database machine: A distributed, main-memory approach

    NARCIS (Netherlands)

    Schmidt, J.W.; Apers, Peter M.G.; Ceri, S.; Kersten, Martin L.; Oerlemans, Hans C.M.; Missikoff, M.

    1988-01-01

    The PRISMA project is a large-scale research effort in the design and implementation of a highly parallel machine for data and knowledge processing. The PRISMA database machine is a distributed, main-memory database management system implemented in an object-oriented language that runs on top of a

  20. Heterogeneous ice slurry flow and concentration distribution in horizontal pipes

    International Nuclear Information System (INIS)

    Wang, Jihong; Zhang, Tengfei; Wang, Shugang

    2013-01-01

    Highlights: • A Mixture CFD model is applied to describe heterogeneous ice slurry flow. • The ice slurry rheological behavior is considered piecewise. • The coupled flow and concentration profiles in heterogeneous slurry flow are acquired. • The current numerical model achieves a good balance between precision and universality. -- Abstract: Ice slurry is an energy-intensive solid–liquid mixture fluid which may play an important role in various cooling applications. Knowing detailed flow information is important from the system design point of view. However, heterogeneous ice slurry flow is difficult to quantify owing to its complex two-phase flow characteristics. The present study applies a Mixture computational fluid dynamics (CFD) model based on piecewise rheological behavior to characterize heterogeneous ice slurry flow. The Mixture CFD model was first validated against three different experiments. The validated model was then applied to solve isothermal ice slurry flow by considering the rheological behavior piecewise. Finally, the numerical solutions display the coupled flow information, such as slurry velocity, ice particle concentration and pressure drop distribution. The results show that the ice slurry flow distribution exhibits varying degrees of asymmetry under different operating conditions, and the rheological behavior is affected by these asymmetric flow distributions. When the mean flow velocity is high, the Thomas equation is appropriate for describing ice slurry viscosity, while as the mean flow velocity decreases, the ice slurry exhibits Bingham rheology. Compared with experimental pressure drop results, the relative errors of the numerical computation are almost within ±15%. The Mixture CFD model is validated as an effective model for describing heterogeneous ice slurry flow and could supply plentiful flow information
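To make the piecewise rheology concrete, the following sketch evaluates the Thomas correlation for suspension viscosity (the high-velocity branch mentioned above) and a generic Bingham-plastic law for the low-velocity branch; the carrier-fluid viscosity, yield stress and plastic viscosity used in the example are placeholders, not values from the study.

```python
# Hedged sketch of the piecewise rheology: Thomas correlation at high mean
# velocity, Bingham-plastic law at low velocity. Constants of the Thomas
# correlation are the commonly quoted ones; tau_y and mu_p are placeholders.
import math

def thomas_relative_viscosity(phi: float) -> float:
    """Relative viscosity of a suspension with ice volume fraction phi."""
    return 1.0 + 2.5 * phi + 10.05 * phi**2 + 0.00273 * math.exp(16.6 * phi)

def bingham_shear_stress(gamma_dot: float, tau_y: float, mu_p: float) -> float:
    """Bingham-plastic shear stress at shear rate gamma_dot."""
    return tau_y + mu_p * gamma_dot

# Example: carrier-fluid viscosity 2.0e-3 Pa.s, 15 % ice volume fraction.
print(2.0e-3 * thomas_relative_viscosity(0.15))
```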

  1. Electrical resistivity sounding to study water content distribution in heterogeneous soils

    Science.gov (United States)

    Electrical resistivity (ER) sounding is increasingly being used as non-invasive technique to reveal and map soil heterogeneity. The objective of this work was to assess ER sounding applicability to study soil water distribution in spatially heterogeneous soils. The 30x30-m study plot was located at ...

  2. ADAPTIVE DISTRIBUTION OF A SWARM OF HETEROGENEOUS ROBOTS

    Directory of Open Access Journals (Sweden)

    Amanda Prorok

    2016-02-01

    Full Text Available We present a method that distributes a swarm of heterogeneous robots among a set of tasks that require specialized capabilities in order to be completed. We model the system of heterogeneous robots as a community of species, where each species (robot type is defined by the traits (capabilities that it owns. Our method is based on a continuous abstraction of the swarm at a macroscopic level as we model robots switching between tasks. We formulate an optimization problem that produces an optimal set of transition rates for each species, so that the desired trait distribution is reached as quickly as possible. Since our method is based on the derivation of an analytical gradient, it is very efficient with respect to state-of-the-art methods. Building on this result, we propose a real-time optimization method that enables an online adaptation of transition rates. Our approach is well-suited for real-time applications that rely on online redistribution of large-scale robotic systems.
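A minimal sketch of the macroscopic abstraction the abstract refers to: the task distribution of one species evolves under a rate matrix K, and integrating dx/dt = Kx drives the swarm toward the stationary distribution implied by the transition rates. The rate matrix below is illustrative; the paper's optimized rates are not reproduced.

```python
# Hedged sketch of the macroscopic swarm model dx/dt = K x, where K holds
# task-switching rates for one species. K below is illustrative only.
import numpy as np

def simulate(K: np.ndarray, x0: np.ndarray, dt: float = 0.01, steps: int = 2000):
    """Integrate dx/dt = K x with forward Euler; columns of K sum to zero."""
    x = x0.astype(float).copy()
    for _ in range(steps):
        x += dt * K.dot(x)
    return x

K = np.array([[-0.4,  0.2],   # task 1: leaves at rate 0.4, gains from task 2 at 0.2
              [ 0.4, -0.2]])  # task 2: gains from task 1 at 0.4, leaves at rate 0.2
print(simulate(K, np.array([0.5, 0.5])))   # converges toward [1/3, 2/3]
```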

  3. Generative Adversarial Networks Based Heterogeneous Data Integration and Its Application for Intelligent Power Distribution and Utilization

    Directory of Open Access Journals (Sweden)

    Yuanpeng Tan

    2018-01-01

    Full Text Available Heterogeneous characteristics of big data systems for intelligent power distribution and utilization have become more and more prominent, which brings new challenges for traditional data analysis technologies and restricts the comprehensive management of distribution network assets. In order to solve the problem that heterogeneous data resources of power distribution systems are difficult to utilize effectively, a novel generative adversarial networks (GANs) based heterogeneous data integration method for intelligent power distribution and utilization is proposed. In the proposed method, GANs theory is introduced to expand the distribution of complete data samples. Then, a so-called peak clustering algorithm is proposed to realize a finite open coverage of the expanded sample space and to repair incomplete samples, eliminating the heterogeneous characteristics. Finally, in order to realize the integration of the heterogeneous data for intelligent power distribution and utilization, the well-trained discriminator model of the GANs is employed to check the restored data samples. The simulation experiments verified the validity and stability of the proposed heterogeneous data integration method, which provides a novel perspective for further data quality management of power distribution systems.

  4. Heterogeneous Data Fusion Method to Estimate Travel Time Distributions in Congested Road Networks

    OpenAIRE

    Chaoyang Shi; Bi Yu Chen; William H. K. Lam; Qingquan Li

    2017-01-01

    Travel times in congested urban road networks are highly stochastic. Provision of travel time distribution information, including both mean and variance, can be very useful for travelers to make reliable path choice decisions to ensure higher probability of on-time arrival. To this end, a heterogeneous data fusion method is proposed to estimate travel time distributions by fusing heterogeneous data from point and interval detectors. In the proposed method, link travel time distributions are f...

  5. Observed and unobserved heterogeneity in stochastic frontier models: An application to the electricity distribution industry

    International Nuclear Information System (INIS)

    Kopsakangas-Savolainen, Maria; Svento, Rauli

    2011-01-01

    In this study we combine different possibilities to model firm-level heterogeneity in stochastic frontier analysis. We show that both observed and unobserved heterogeneity cause serious biases in inefficiency results. Modelling observed and unobserved heterogeneity treats individual firms in different ways, and even though the expected mean inefficiency scores diminish in both cases, the firm-level efficiency rank orders turn out to be very different. The best fit with the data is obtained by modelling unobserved heterogeneity through randomizing frontier parameters and at the same time explicitly modelling the observed heterogeneity into the inefficiency distribution. These results are obtained by using data from Finnish electricity distribution utilities and are relevant to electricity distribution pricing and regulation. -- Research Highlights: → We show that both observed and unobserved heterogeneity of firms cause biases in inefficiency results. → Different ways of accounting for firm-level heterogeneity end up with very different rank orders of firms. → The model which combines the characteristics of unobserved and observed heterogeneity fits the data best.

  6. Data Mining on Distributed Medical Databases: Recent Trends and Future Directions

    Science.gov (United States)

    Atilgan, Yasemin; Dogan, Firat

    As computerization in healthcare services increase, the amount of available digital data is growing at an unprecedented rate and as a result healthcare organizations are much more able to store data than to extract knowledge from it. Today the major challenge is to transform these data into useful information and knowledge. It is important for healthcare organizations to use stored data to improve quality while reducing cost. This paper first investigates the data mining applications on centralized medical databases, and how they are used for diagnostic and population health, then introduces distributed databases. The integration needs and issues of distributed medical databases are described. Finally the paper focuses on data mining studies on distributed medical databases.

  7. Investigation of realistic PET simulations incorporating tumor patient's specificity using anthropomorphic models: Creation of an oncology database

    International Nuclear Information System (INIS)

    Papadimitroulas, Panagiotis; Efthimiou, Nikos; Nikiforidis, George C.; Kagadis, George C.; Loudos, George; Le Maitre, Amandine; Hatt, Mathieu; Tixier, Florent; Visvikis, Dimitris

    2013-01-01

    Purpose: The GATE Monte Carlo simulation toolkit is used for the implementation of realistic PET simulations incorporating tumor heterogeneous activity distributions. The reconstructed patient images include noise from the acquisition process, imaging system's performance restrictions and have limited spatial resolution. For those reasons, the measured intensity cannot be simply introduced in GATE simulations, to reproduce clinical data. Investigation of the heterogeneity distribution within tumors applying partial volume correction (PVC) algorithms was assessed. The purpose of the present study was to create a simulated oncology database based on clinical data with realistic intratumor uptake heterogeneity properties.Methods: PET/CT data of seven oncology patients were used in order to create a realistic tumor database investigating the heterogeneity activity distribution of the simulated tumors. The anthropomorphic models (NURBS based cardiac torso and Zubal phantoms) were adapted to the CT data of each patient, and the activity distribution was extracted from the respective PET data. The patient-specific models were simulated with the Monte Carlo Geant4 application for tomography emission (GATE) in three different levels for each case: (a) using homogeneous activity within the tumor, (b) using heterogeneous activity distribution in every voxel within the tumor as it was extracted from the PET image, and (c) using heterogeneous activity distribution corresponding to the clinical image following PVC. The three different types of simulated data in each case were reconstructed with two iterations and filtered with a 3D Gaussian postfilter, in order to simulate the intratumor heterogeneous uptake. Heterogeneity in all generated images was quantified using textural feature derived parameters in 3D according to the ground truth of the simulation, and compared to clinical measurements. Finally, profiles were plotted in central slices of the tumors, across lines with

  8. Distribution of model-based multipoint heterogeneity lod scores.

    Science.gov (United States)

    Xing, Chao; Morris, Nathan; Xing, Guan

    2010-12-01

    The distribution of two-point heterogeneity lod scores (HLOD) has been intensively investigated because the conventional χ(2) approximation to the likelihood ratio test is not directly applicable. However, there was no study investigating the distribution of the multipoint HLOD despite its wide application. Here we want to point out that, compared with the two-point HLOD, the multipoint HLOD essentially tests for homogeneity given linkage and follows a relatively simple limiting distribution ½χ²₀ + ½χ²₁, which can be obtained by established statistical theory. We further examine the theoretical result by simulation studies. © 2010 Wiley-Liss, Inc.
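A small worked example of using the stated limiting mixture in practice, assuming the usual conversion of a lod score to a likelihood-ratio statistic (2 ln 10 times the lod); the point mass at zero contributes nothing to the tail, so the p-value is half the chi-square(1) tail probability.

```python
# Hedged sketch: p-value for a multipoint HLOD under the ½χ²(0) + ½χ²(1)
# limiting mixture. The lod-to-LR conversion factor 2*ln(10) is the standard
# one used in linkage analysis.
from math import log
from scipy.stats import chi2

def hlod_p_value(hlod: float) -> float:
    stat = 2.0 * log(10) * hlod        # likelihood-ratio statistic
    if stat <= 0:
        return 1.0                     # the χ²(0) component is a point mass at 0
    return 0.5 * chi2.sf(stat, df=1)   # half of the χ²(1) tail

print(hlod_p_value(3.0))               # e.g. HLOD = 3
```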

  9. Effect of heterogeneous microvasculature distribution on drug delivery to solid tumour

    International Nuclear Information System (INIS)

    Zhan, Wenbo; Xu, Xiao Yun; Gedroyc, Wladyslaw

    2014-01-01

    Most of the computational models of drug transport in vascular tumours assume a uniform distribution of blood vessels through which anti-cancer drugs are delivered. However, it is well known that solid tumours are characterized by dilated microvasculature with non-uniform diameters and irregular branching patterns. In this study, the effect of heterogeneous vasculature on drug transport and uptake is investigated by means of mathematical modelling of the key physical and biochemical processes in drug delivery. An anatomically realistic tumour model accounting for heterogeneous distribution of blood vessels is reconstructed based on magnetic resonance images of a liver tumour. Numerical simulations are performed for different drug delivery modes, including direct continuous infusion and thermosensitive liposome-mediated delivery, and the anti-cancer effectiveness is evaluated through changes in tumour cell density based on predicted intracellular concentrations. Comparisons are made between regions of different vascular density, and between the two drug delivery modes. Our numerical results show that both extra- and intra-cellular concentrations in the liver tumour are non-uniform owing to the heterogeneous distribution of tumour vasculature. Drugs accumulate faster in well-vascularized regions, where they are also cleared out more quickly, resulting in less effective tumour cell killing in these regions. Compared with direct continuous infusion, the influence of heterogeneous vasculature on anti-cancer effectiveness is more pronounced for thermosensitive liposome-mediated delivery. (paper)

  10. Protect Heterogeneous Environment Distributed Computing from Malicious Code Assignment

    Directory of Open Access Journals (Sweden)

    V. S. Gorbatov

    2011-09-01

    Full Text Available The paper describes the practical implementation of a system protecting distributed computing in a heterogeneous environment from malicious code in assigned tasks. The choice of technologies, the development of data structures, and a performance evaluation of the implemented system's security are presented.

  11. DAVID Knowledgebase: a gene-centered database integrating heterogeneous gene annotation resources to facilitate high-throughput gene functional analysis

    Directory of Open Access Journals (Sweden)

    Baseler Michael W

    2007-11-01

    Full Text Available Abstract Background Due to the complex and distributed nature of biological research, our current biological knowledge is spread over many redundant annotation databases maintained by many independent groups. Analysts usually need to visit many of these bioinformatics databases in order to integrate comprehensive annotation information for their genes, which becomes one of the bottlenecks, particularly for the analytic task associated with a large gene list. Thus, a highly centralized and ready-to-use gene-annotation knowledgebase is in demand for high throughput gene functional analysis. Description The DAVID Knowledgebase is built around the DAVID Gene Concept, a single-linkage method to agglomerate tens of millions of gene/protein identifiers from a variety of public genomic resources into DAVID gene clusters. The grouping of such identifiers improves the cross-reference capability, particularly across NCBI and UniProt systems, enabling more than 40 publicly available functional annotation sources to be comprehensively integrated and centralized by the DAVID gene clusters. The simple, pair-wise, text format files which make up the DAVID Knowledgebase are freely downloadable for various data analysis uses. In addition, a well organized web interface allows users to query different types of heterogeneous annotations in a high-throughput manner. Conclusion The DAVID Knowledgebase is designed to facilitate high throughput gene functional analysis. For a given gene list, it not only provides the quick accessibility to a wide range of heterogeneous annotation data in a centralized location, but also enriches the level of biological information for an individual gene. Moreover, the entire DAVID Knowledgebase is freely downloadable or searchable at http://david.abcc.ncifcrf.gov/knowledgebase/.

  12. Optimal File-Distribution in Heterogeneous and Asymmetric Storage Networks

    Science.gov (United States)

    Langner, Tobias; Schindelhauer, Christian; Souza, Alexander

    We consider an optimisation problem which is motivated by storage virtualisation in the Internet. While storage networks make use of dedicated hardware to provide homogeneous bandwidth between servers and clients, in the Internet, connections between storage servers and clients are heterogeneous and often asymmetric with respect to upload and download. Thus, for a large file, the question arises how it should be fragmented and distributed among the servers to grant "optimal" access to the contents. We concentrate on the transfer time of a file, which is the time needed for one upload and a sequence of n downloads, using a set of m servers with heterogeneous bandwidths. We assume that fragments of the file can be transferred in parallel to and from multiple servers. This model yields a distribution problem that examines the question of how these fragments should be distributed onto those servers in order to minimise the transfer time. We present an algorithm, called FlowScaling, that finds an optimal solution within running time O(m log m). We formulate the distribution problem as a maximum flow problem, which involves a function that states whether a solution with a given transfer time bound exists. This function is then used with a scaling argument to determine an optimal solution within the claimed time complexity.
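The sketch below mirrors only the outer pattern the abstract describes, namely a feasibility test for a given transfer-time bound combined with a search over that bound; the real FlowScaling algorithm uses a max-flow feasibility function and an exact scaling argument to reach O(m log m), which a plain bisection does not.

```python
# Hedged sketch: search for the smallest transfer time T accepted by a
# monotone feasibility test (here an abstract callable, e.g. backed by a
# max-flow computation). This bisection is a generic stand-in, not FlowScaling.
def min_transfer_time(feasible, t_lo=0.0, t_hi=1e6, tol=1e-6):
    """Smallest T (within tol) for which feasible(T) is True."""
    while t_hi - t_lo > tol:
        mid = 0.5 * (t_lo + t_hi)
        if feasible(mid):
            t_hi = mid
        else:
            t_lo = mid
    return t_hi

# Toy feasibility test: a single server of bandwidth b = 5 must carry a file
# of size s = 10 once up and n = 3 times down, so T must be at least 8.
print(min_transfer_time(lambda T: T >= (1 + 3) * 10.0 / 5.0))
```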

  13. Secure Distributed Databases Using Cryptography

    OpenAIRE

    Ion IVAN; Cristian TOMA

    2006-01-01

    The computational encryption is used intensively by different database management systems for ensuring the privacy and integrity of information that is physically stored in files. Also, the information is sent over networks and is replicated on different distributed systems. It is proved that a satisfying level of security is achieved if the rows and columns of tables are encrypted independently of the table or computer that sustains the data. Also, it is very important that the SQL - Structured Que...
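A hedged illustration of encrypting columns independently of the table or machine that stores them, so that a replica holding only ciphertext cannot recover values without the per-column keys; Fernet from the Python cryptography package stands in for the unspecified cipher, and the column names are hypothetical.

```python
# Hedged sketch: per-column symmetric encryption so replicated rows remain
# opaque without the column keys. Column names are hypothetical.
from cryptography.fernet import Fernet

columns = ("name", "salary")
keys = {col: Fernet(Fernet.generate_key()) for col in columns}

def encrypt_row(row: dict) -> dict:
    """Encrypt each field with its own column key."""
    return {col: keys[col].encrypt(str(val).encode()) for col, val in row.items()}

def decrypt_row(row: dict) -> dict:
    """Decrypt each field; values come back as strings."""
    return {col: keys[col].decrypt(tok).decode() for col, tok in row.items()}

cipher_row = encrypt_row({"name": "alice", "salary": 4200})
print(decrypt_row(cipher_row))
```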

  14. Overview of the Benefits and Costs of Integrating Heterogeneous Applications by Using Relaxed ACID Properties

    DEFF Research Database (Denmark)

    Frank, Lars

    2012-01-01

    In central databases the consistency of data is normally implemented by using the ACID (Atomicity, Consistency, Isolation and Durability) properties of a DBMS (Data Base Management System). In practice, it is not possible to implement the ACID properties if heterogeneous or distributed databases ...

  15. Managing Consistency Anomalies in Distributed Integrated Databases with Relaxed ACID Properties

    DEFF Research Database (Denmark)

    Frank, Lars; Ulslev Pedersen, Rasmus

    2014-01-01

    In central databases the consistency of data is normally implemented by using the ACID (Atomicity, Consistency, Isolation and Durability) properties of a DBMS (Data Base Management System). This is not possible if distributed and/or mobile databases are involved and the availability of data also has to be optimized. Therefore, we will in this paper use so-called relaxed ACID properties across different locations. The objective of designing relaxed ACID properties across different database locations is that the users can trust the data they use even if the distributed database temporarily is inconsistent. It is also important that disconnected locations can operate in a meaningful way in so-called disconnected mode. A database is DBMS consistent if its data complies with the consistency rules of the DBMS's metadata. If the database is DBMS consistent both when a transaction starts and when it has...

  16. P-wave scattering and the distribution of heterogeneity around Etna volcano

    Directory of Open Access Journals (Sweden)

    Toni Zieger

    2016-09-01

    Full Text Available Volcanoes and fault zones are areas of increased heterogeneity in the Earth's crust, which leads to strong scattering of seismic waves. For the understanding of the volcanic structure and the role of attenuation and scattering processes it is important to investigate the distribution of heterogeneity. We used the signals of air-gun shots to investigate the distribution of heterogeneity around Mount Etna. We devise a new methodology based on the coda energy ratio, which we define as the ratio between the energy of the direct P-wave and the energy in a later coda window. This is based on the basic assumption that scattering caused by heterogeneity removes energy from the direct P-waves. We show that measurements of the energy ratio are stable with respect to changes in the details of the time window definitions. As an independent proxy of the scattering strength along the ray path we measure the peak delay time of the direct P-wave. The peak delay time is well correlated with the coda energy ratio. We project the observations in the directions of the incident rays at the stations. Most notable is an area with increased wave scattering in the volcano and east of it. The strong heterogeneity found supports earlier observations and confirms the possibility of using P-wave sources for the determination of scattering properties. We interpret the extension of the highly heterogeneous zone towards the east as a potential signature of inelastic deformation processes induced by the eastward sliding of the flank of the volcano.
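A minimal sketch of the coda-energy-ratio measurement as defined above: the energy in a short window around the direct P arrival divided by the energy in a later coda window. The window lengths and the synthetic trace are illustrative choices, not the values used in the study.

```python
# Hedged sketch of the coda energy ratio for one trace: E(direct P window)
# divided by E(later coda window). Window lengths are illustrative.
import numpy as np

def coda_energy_ratio(trace, fs, t_p, direct_win=1.0, coda_start=4.0, coda_win=4.0):
    """Energy in the direct-P window over energy in a later coda window."""
    def energy(t0, length):
        i0 = int(t0 * fs)
        seg = trace[i0:i0 + int(length * fs)]
        return float(np.sum(seg ** 2))
    return energy(t_p, direct_win) / energy(t_p + coda_start, coda_win)

fs = 100.0                                   # sampling rate in Hz
t = np.arange(0, 20, 1 / fs)
trace = np.exp(-0.5 * t) * np.random.default_rng(1).standard_normal(t.size)
print(coda_energy_ratio(trace, fs, t_p=2.0))
```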

  17. Investigation of realistic PET simulations incorporating tumor patient's specificity using anthropomorphic models: Creation of an oncology database

    Energy Technology Data Exchange (ETDEWEB)

    Papadimitroulas, Panagiotis; Efthimiou, Nikos; Nikiforidis, George C.; Kagadis, George C. [Department of Medical Physics, School of Medicine, University of Patras, Rion, GR 265 04 (Greece); Loudos, George [Department of Biomedical Engineering, Technological Educational Institute of Athens, Ag. Spyridonos Street, Egaleo GR 122 10, Athens (Greece); Le Maitre, Amandine; Hatt, Mathieu; Tixier, Florent; Visvikis, Dimitris [Medical Information Processing Laboratory (LaTIM), National Institute of Health and Medical Research (INSERM), 29609 Brest (France)

    2013-11-15

    Purpose: The GATE Monte Carlo simulation toolkit is used for the implementation of realistic PET simulations incorporating tumor heterogeneous activity distributions. The reconstructed patient images include noise from the acquisition process, imaging system's performance restrictions and have limited spatial resolution. For those reasons, the measured intensity cannot be simply introduced in GATE simulations, to reproduce clinical data. Investigation of the heterogeneity distribution within tumors applying partial volume correction (PVC) algorithms was assessed. The purpose of the present study was to create a simulated oncology database based on clinical data with realistic intratumor uptake heterogeneity properties.Methods: PET/CT data of seven oncology patients were used in order to create a realistic tumor database investigating the heterogeneity activity distribution of the simulated tumors. The anthropomorphic models (NURBS based cardiac torso and Zubal phantoms) were adapted to the CT data of each patient, and the activity distribution was extracted from the respective PET data. The patient-specific models were simulated with the Monte Carlo Geant4 application for tomography emission (GATE) in three different levels for each case: (a) using homogeneous activity within the tumor, (b) using heterogeneous activity distribution in every voxel within the tumor as it was extracted from the PET image, and (c) using heterogeneous activity distribution corresponding to the clinical image following PVC. The three different types of simulated data in each case were reconstructed with two iterations and filtered with a 3D Gaussian postfilter, in order to simulate the intratumor heterogeneous uptake. Heterogeneity in all generated images was quantified using textural feature derived parameters in 3D according to the ground truth of the simulation, and compared to clinical measurements. Finally, profiles were plotted in central slices of the tumors, across lines

  18. A uniform approach for programming distributed heterogeneous computing systems.

    Science.gov (United States)

    Grasso, Ivan; Pellegrini, Simone; Cosenza, Biagio; Fahringer, Thomas

    2014-12-01

    Large-scale compute clusters of heterogeneous nodes equipped with multi-core CPUs and GPUs are getting increasingly popular in the scientific community. However, such systems require a combination of different programming paradigms making application development very challenging. In this article we introduce libWater, a library-based extension of the OpenCL programming model that simplifies the development of heterogeneous distributed applications. libWater consists of a simple interface, which is a transparent abstraction of the underlying distributed architecture, offering advanced features such as inter-context and inter-node device synchronization. It provides a runtime system which tracks dependency information enforced by event synchronization to dynamically build a DAG of commands, on which we automatically apply two optimizations: collective communication pattern detection and device-host-device copy removal. We assess libWater's performance in three compute clusters available from the Vienna Scientific Cluster, the Barcelona Supercomputing Center and the University of Innsbruck, demonstrating improved performance and scaling with different test applications and configurations.

  19. A Preliminary Study on the Multiple Mapping Structure of Classification Systems for Heterogeneous Databases

    Directory of Open Access Journals (Sweden)

    Seok-Hyoung Lee

    2012-06-01

    Full Text Available While science and technology information service portals and heterogeneous databases produced in Korea and other countries are being integrated, methods of connecting the unique classification systems applied to each database have been studied. Results of technologists' research, such as journal articles, patent specifications, and research reports, are organically related to each other. In this case, if the most basic and meaningful classification systems are not connected, it is difficult to achieve interoperability of the information and thus not easy to implement meaningful science and technology information services through information convergence. This study aims to address the aforementioned issue by analyzing mapping systems between classification systems in order to design a structure to connect the variety of classification systems used in the academic information database of the Korea Institute of Science and Technology Information, which provides a science and technology information portal service. This study also aims to design a mapping system for the classification systems to be applied to actual science and technology information services and information management systems.

  20. LHCb Conditions Database Operation Assistance Systems

    CERN Multimedia

    Shapoval, Illya

    2012-01-01

    The Conditions Database of the LHCb experiment (CondDB) provides versioned, time dependent geometry and conditions data for all LHCb data processing applications (simulation, high level trigger, reconstruction, analysis) in a heterogeneous computing environment ranging from user laptops to the HLT farm and the Grid. These different use cases impose front-end support for multiple database technologies (Oracle and SQLite are used). Sophisticated distribution tools are required to ensure timely and robust delivery of updates to all environments. The content of the database has to be managed to ensure that updates are internally consistent and externally compatible with multiple versions of the physics application software. In this paper we describe three systems that we have developed to address these issues: - an extension to the automatic content validation done by the “Oracle Streams” replication technology, to trap cases when the replication was unsuccessful; - an automated distribution process for the S...

  1. TCP isoeffect analysis using a heterogeneous distribution of radiosensitivity

    International Nuclear Information System (INIS)

    Carlone, Marco; Wilkins, David; Nyiri, Balazs; Raaphorst, Peter

    2004-01-01

    A formula for the α/β ratio is derived using the heterogeneous (population averaged) tumor control model. This formula is nearly identical to the formula obtained using the homogeneous (individual) tumor control model, but the new formula includes extra terms showing that the α/β ratio, the ratio of the mean value of α divided by the mean value of β that would be observed in a patient population, explicitly depends on the survival level and heterogeneity. The magnitude of this correction is estimated for prostate cancer, and this appears to raise the mean value of the ratio estimate by about 20%. The method also allows investigation of confidence limits for α/β based on a population distribution of radiosensitivity. For a widely heterogeneous population, the upper 95% confidence interval for the α/β ratio can be as high as 7.3 Gy, even though the population mean is between 2.3 and 2.6 Gy
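For context, a hedged sketch of the standard linear-quadratic isoeffect relation that such α/β estimates rest on, together with a population-averaged (heterogeneous) tumour control probability in which the radiosensitivity α is averaged over a distribution g(α); the notation N₀ and g(α) is assumed here and the expressions are the textbook forms, not the paper's exact derivation.

```latex
% Standard LQ isoeffect relation: two schedules with doses per fraction
% d_1, d_2 and total doses D_1, D_2 are isoeffective when
\[
  D_1\!\left(1+\frac{d_1}{\alpha/\beta}\right)
  \;=\;
  D_2\!\left(1+\frac{d_2}{\alpha/\beta}\right).
\]
% A population-averaged (heterogeneous) tumour control probability averages
% the individual Poisson TCP over a radiosensitivity distribution g(\alpha),
% with N_0 the initial clonogen number (notation assumed here):
\[
  \overline{\mathrm{TCP}}(D,d)
  \;=\;
  \int \exp\!\bigl[-N_0\, e^{-\alpha D - \beta d D}\bigr]\, g(\alpha)\,\mathrm{d}\alpha .
\]
```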

  2. Monte Carlo Estimation of Absorbed Dose Distributions Obtained from Heterogeneous 106Ru Eye Plaques.

    Science.gov (United States)

    Zaragoza, Francisco J; Eichmann, Marion; Flühs, Dirk; Sauerwein, Wolfgang; Brualla, Lorenzo

    2017-09-01

    The distribution of the emitter substance in 106 Ru eye plaques is usually assumed to be homogeneous for treatment planning purposes. However, this distribution is never homogeneous, and it widely differs from plaque to plaque due to manufacturing factors. By Monte Carlo simulation of radiation transport, we study the absorbed dose distribution obtained from the specific CCA1364 and CCB1256 106 Ru plaques, whose actual emitter distributions were measured. The idealized, homogeneous CCA and CCB plaques are also simulated. The largest discrepancy in depth dose distribution observed between the heterogeneous and the homogeneous plaques was 7.9 and 23.7% for the CCA and CCB plaques, respectively. In terms of isodose lines, the line referring to 100% of the reference dose penetrates 0.2 and 1.8 mm deeper in the case of heterogeneous CCA and CCB plaques, respectively, with respect to the homogeneous counterpart. The observed differences in absorbed dose distributions obtained from heterogeneous and homogeneous plaques are clinically irrelevant if the plaques are used with a lateral safety margin of at least 2 mm. However, these differences may be relevant if the plaques are used in eccentric positioning.

  3. Distributed Pseudo-Random Number Generation and Its Application to Cloud Database

    OpenAIRE

    Chen, Jiageng; Miyaji, Atsuko; Su, Chunhua

    2014-01-01

    Cloud databases are a rapidly growing trend in the cloud computing market. They enable clients to run their computations on out-sourced databases or to access distributed database services on the cloud. At the same time, security and privacy concerns are a major challenge for cloud databases to continue growing. To enhance the security and privacy of cloud database technology, pseudo-random number generation (PRNG) plays an important role in data encryption and privacy-pr...

  4. Heterogeneous Data Fusion Method to Estimate Travel Time Distributions in Congested Road Networks.

    Science.gov (United States)

    Shi, Chaoyang; Chen, Bi Yu; Lam, William H K; Li, Qingquan

    2017-12-06

    Travel times in congested urban road networks are highly stochastic. Provision of travel time distribution information, including both mean and variance, can be very useful for travelers to make reliable path choice decisions to ensure higher probability of on-time arrival. To this end, a heterogeneous data fusion method is proposed to estimate travel time distributions by fusing heterogeneous data from point and interval detectors. In the proposed method, link travel time distributions are first estimated from point detector observations. The travel time distributions of links without point detectors are imputed based on their spatial correlations with links that have point detectors. The estimated link travel time distributions are then fused with path travel time distributions obtained from the interval detectors using Dempster-Shafer evidence theory. Based on fused path travel time distribution, an optimization technique is further introduced to update link travel time distributions and their spatial correlations. A case study was performed using real-world data from Hong Kong and showed that the proposed method obtained accurate and robust estimations of link and path travel time distributions in congested road networks.
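A hedged sketch of the Dempster-Shafer combination step named above: two basic probability assignments, one from point-detector evidence and one from interval-detector evidence, are combined with Dempster's rule over a small frame of travel-time classes. The frame and the mass values are illustrative only.

```python
# Hedged sketch of Dempster's rule of combination over a small frame of
# travel-time classes. Masses are illustrative, not estimated from data.
from itertools import product

def combine(m1: dict, m2: dict) -> dict:
    """Combine two basic probability assignments (keys are frozensets)."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb                    # mass assigned to conflict
    norm = 1.0 - conflict
    return {k: v / norm for k, v in combined.items()}

S, M, L = "short", "medium", "long"
m_point = {frozenset({S}): 0.6, frozenset({S, M}): 0.3, frozenset({S, M, L}): 0.1}
m_interval = {frozenset({S}): 0.5, frozenset({M}): 0.3, frozenset({S, M, L}): 0.2}
print(combine(m_point, m_interval))
```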

  5. Database Independent Migration of Objects into an Object-Relational Database

    CERN Document Server

    Ali, A; Munir, K; Waseem-Hassan, M; Willers, I

    2002-01-01

    CERN's (European Organization for Nuclear Research) WISDOM project [1] deals with the replication of data between homogeneous sources in a Wide Area Network (WAN) using the extensible Markup Language (XML). The last phase of the WISDOM (Wide-area, database Independent Serialization of Distributed Objects for data Migration) project [2], indicates the future directions for this work to be to incorporate heterogeneous sources as compared to homogeneous sources as described by [3]. This work will become essential for the CERN community once the need to transfer their legacy data to some other source, other then Objectivity [4], arises. Oracle 9i - an Object-Relational Database (including support for abstract data types, ADTs) appears to be a potential candidate for the physics event store in the CERN CMS experiment as suggested by [4] & [5]. Consequently this database has been selected for study. As a result of this work the HEP community will get a tool for migrating their data from Objectivity to Oracle9i.

  6. Database interfaces on NASA's heterogeneous distributed database system

    Science.gov (United States)

    Huang, Shou-Hsuan Stephen

    1989-01-01

    The syntax and semantics of all commands used in the template are described. Template builders should consult this document for proper commands in the template. Previous documents (Semiannual reports) described other aspects of this project. Appendix 1 contains all substituting commands used in the system. Appendix 2 includes all repeating commands. Appendix 3 is a collection of DEFINE templates from eight different DBMS's.

  7. ARACHNID: A prototype object-oriented database tool for distributed systems

    Science.gov (United States)

    Younger, Herbert; Oreilly, John; Frogner, Bjorn

    1994-01-01

    This paper discusses the results of a Phase 2 SBIR project sponsored by NASA and performed by MIMD Systems, Inc. A major objective of this project was to develop specific concepts for improved performance in accessing large databases. An object-oriented and distributed approach was used for the general design, while a geographical decomposition was used as a specific solution. The resulting software framework is called ARACHNID. The Faint Source Catalog developed by NASA was the initial database testbed. This is a database of many giga-bytes, where an order of magnitude improvement in query speed is being sought. This database contains faint infrared point sources obtained from telescope measurements of the sky. A geographical decomposition of this database is an attractive approach to dividing it into pieces. Each piece can then be searched on individual processors with only a weak data linkage between the processors being required. As a further demonstration of the concepts implemented in ARACHNID, a tourist information system is discussed. This version of ARACHNID is the commercial result of the project. It is a distributed, networked, database application where speed, maintenance, and reliability are important considerations. This paper focuses on the design concepts and technologies that form the basis for ARACHNID.

  8. Computer simulation of the interplay between fractal structures and surrounding heterogeneous multifractal distributions. Applications

    OpenAIRE

    Martin Martin, Miguel Angel; Reyes Castro, Miguel E.; Taguas Coejo, Fco. Javier

    2014-01-01

    In a large number of physical, biological and environmental processes, interfaces with highly irregular geometry appear, separating media (phases) in which the heterogeneity of constituents is present. In this work the quantification of the interplay between irregular structures and surrounding heterogeneous distributions in the plane is made. For a geometric set and a mass distribution (measure) supported in it, the mass gives account of the interplay between th...

  9. Effects of heterogeneous wealth distribution on public cooperation with collective risk

    Science.gov (United States)

    Wang, Jing; Fu, Feng; Wang, Long

    2010-07-01

    The distribution of wealth among individuals in real society can be well described by the Pareto principle or “80-20 rule.” How does such heterogeneity in initial wealth distribution affect the emergence of public cooperation, when individuals, the rich and the poor, engage in a collective-risk enterprise, not to gain a profit but to avoid a potential loss? Here we address this issue by studying a simple but effective model based on threshold public goods games. We analyze the evolutionary dynamics for two distinct scenarios, respectively: one with fair sharers versus defectors and the other with altruists versus defectors. For both scenarios, in particular, we study in detail the dynamics of the population with dichotomic initial wealth—the rich versus the poor. Moreover, we demonstrate the possible steady compositions of the population and provide the conditions for stability of these steady states. We prove that in a population with heterogeneous wealth distribution, richer individuals are more likely to cooperate than poorer ones. Participants with lower initial wealth may choose to cooperate only if all players richer than them are cooperators. The emergence of public cooperation largely relies on rich individuals. Furthermore, whenever the wealth gap between the rich and the poor is sufficiently large, cooperation of a few rich individuals can substantially elevate the overall level of social cooperation, which is in line with the well-known Pareto principle. Our work may offer an insight into the emergence of cooperative behavior in real social situations where heterogeneous distribution of wealth among individuals is omnipresent.

  10. Effects of heterogeneous wealth distribution on public cooperation with collective risk.

    Science.gov (United States)

    Wang, Jing; Fu, Feng; Wang, Long

    2010-07-01

    The distribution of wealth among individuals in real society can be well described by the Pareto principle or "80-20 rule." How does such heterogeneity in initial wealth distribution affect the emergence of public cooperation, when individuals, the rich and the poor, engage in a collective-risk enterprise, not to gain a profit but to avoid a potential loss? Here we address this issue by studying a simple but effective model based on threshold public goods games. We analyze the evolutionary dynamics for two distinct scenarios, respectively: one with fair sharers versus defectors and the other with altruists versus defectors. For both scenarios, in particular, we study in detail the dynamics of the population with dichotomic initial wealth-the rich versus the poor. Moreover, we demonstrate the possible steady compositions of the population and provide the conditions for stability of these steady states. We prove that in a population with heterogeneous wealth distribution, richer individuals are more likely to cooperate than poorer ones. Participants with lower initial wealth may choose to cooperate only if all players richer than them are cooperators. The emergence of public cooperation largely relies on rich individuals. Furthermore, whenever the wealth gap between the rich and the poor is sufficiently large, cooperation of a few rich individuals can substantially elevate the overall level of social cooperation, which is in line with the well-known Pareto principle. Our work may offer an insight into the emergence of cooperative behavior in real social situations where heterogeneous distribution of wealth among individuals is omnipresent.

  11. Development of database on the distribution coefficient. 1. Collection of the distribution coefficient data

    Energy Technology Data Exchange (ETDEWEB)

    Takebe, Shinichi; Abe, Masayoshi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2001-03-01

    The distribution coefficient is a very important parameter for environmental impact assessment of the disposal of radioactive waste arising from research institutes. A literature survey within the country was carried out mainly for the purpose of selecting reasonable distribution coefficient values for use in safety evaluations. This report arranges the extensive information on the distribution coefficient from each literature source for input to the database, and summarizes it as literature information data on the distribution coefficient. (author)

  12. Heterogeneous Data Fusion Method to Estimate Travel Time Distributions in Congested Road Networks

    Directory of Open Access Journals (Sweden)

    Chaoyang Shi

    2017-12-01

    Full Text Available Travel times in congested urban road networks are highly stochastic. Provision of travel time distribution information, including both mean and variance, can be very useful for travelers to make reliable path choice decisions to ensure higher probability of on-time arrival. To this end, a heterogeneous data fusion method is proposed to estimate travel time distributions by fusing heterogeneous data from point and interval detectors. In the proposed method, link travel time distributions are first estimated from point detector observations. The travel time distributions of links without point detectors are imputed based on their spatial correlations with links that have point detectors. The estimated link travel time distributions are then fused with path travel time distributions obtained from the interval detectors using Dempster-Shafer evidence theory. Based on fused path travel time distribution, an optimization technique is further introduced to update link travel time distributions and their spatial correlations. A case study was performed using real-world data from Hong Kong and showed that the proposed method obtained accurate and robust estimations of link and path travel time distributions in congested road networks.

  13. Heterogeneity phantoms for visualization of 3D dose distributions by MRI-based polymer gel dosimetry

    International Nuclear Information System (INIS)

    Watanabe, Yoichi; Mooij, Rob; Mark Perera, G.; Maryanski, Marek J.

    2004-01-01

    Heterogeneity corrections in dose calculations are necessary for radiation therapy treatment plans. Dosimetric measurements of heterogeneity effects are hampered if the detectors are large and their radiological characteristics are not equivalent to water. Gel dosimetry can solve these problems. Furthermore, it provides three-dimensional (3D) dose distributions. We used a cylindrical phantom filled with BANG-3® polymer gel to measure 3D dose distributions in heterogeneous media. The phantom has a cavity, in which water-equivalent or bone-like solid blocks can be inserted. The irradiated phantom was scanned with a magnetic resonance imaging (MRI) scanner. Dose distributions were obtained by calibrating the polymer gel for the relationship between the absorbed dose and the spin-spin relaxation rate of the magnetic resonance (MR) signal. To study dose distributions we had to analyze MR imaging artifacts. This was done in three ways: comparison of a measured dose distribution in a simulated homogeneous phantom with a reference dose distribution, comparison of a sagittally scanned image with a sagittal image reconstructed from axially scanned data, and coregistration of MR and computed-tomography images. We found that the MRI artifacts cause a geometrical distortion of less than 2 mm and less than a 10% change in the dose around solid inserts. With these limitations in mind we could make some qualitative measurements. In particular, we observed clear differences between the measured dose distributions around an air gap and around bone-like material for a 6 MV photon beam. In conclusion, gel dosimetry has the potential to qualitatively characterize the dose distributions near heterogeneities in 3D
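A minimal sketch of the calibration step described above, assuming a linear relation between absorbed dose and the spin-spin relaxation rate R2 over the working range (real BANG-type gels are linear only over a limited dose range); the calibration points are invented for illustration.

```python
# Hedged sketch: fit dose vs. R2 = 1/T2 from calibration vials, then apply the
# fit voxel-wise to an R2 map. Calibration numbers are illustrative only.
import numpy as np

cal_dose = np.array([0.0, 1.0, 2.0, 4.0, 6.0])       # Gy
cal_r2 = np.array([1.10, 1.45, 1.82, 2.55, 3.28])    # 1/s, illustrative

slope, intercept = np.polyfit(cal_dose, cal_r2, 1)    # R2 = slope*D + intercept

def dose_from_r2(r2_map: np.ndarray) -> np.ndarray:
    """Convert an R2 map (1/s) into an absorbed-dose map (Gy)."""
    return (r2_map - intercept) / slope

print(dose_from_r2(np.array([1.5, 2.0, 3.0])))
```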

  14. Income distribution patterns from a complete social security database

    Science.gov (United States)

    Derzsy, N.; Néda, Z.; Santos, M. A.

    2012-11-01

    We analyze the income distribution of employees for 9 consecutive years (2001-2009) using a complete social security database for an economically important district of Romania. The database contains detailed information on more than half a million taxpayers, including their monthly salaries from all employers where they worked. Besides studying the characteristic distribution functions in the high and low/medium income limits, the database allows a detailed dynamical study by following the time-evolution of the taxpayers' income. To our knowledge, this is the first extensive study of this kind (a previous Japanese taxpayers survey was limited to two years). In the high income limit we prove once again the validity of Pareto’s law, obtaining a perfect scaling on four orders of magnitude in the rank for all the studied years. The obtained Pareto exponents are quite stable with values around α≈2.5, in spite of the fact that during this period the economy developed rapidly and also a financial-economic crisis hit Romania in 2007-2008. For the low and medium income category we confirmed the exponential-type income distribution. Following the income of employees in time, we have found that the top limit of the income distribution is a highly dynamical region with strong fluctuations in the rank. In this region, the observed dynamics is consistent with a multiplicative random growth hypothesis. Contrary to previous results obtained for the Japanese employees, we find that the logarithmic growth-rate is not independent of the income.
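As an illustration of estimating the Pareto exponent of the high-income tail, the following sketch applies the Hill maximum-likelihood estimator to a synthetic Pareto sample; the tail fraction and the sample are illustrative and are not drawn from the social security database analysed in the paper.

```python
# Hedged sketch: Hill estimator of the Pareto tail exponent from the top
# fraction of a sample. The synthetic sample (true alpha = 2.5) is for
# illustration only.
import numpy as np

def hill_pareto_exponent(incomes: np.ndarray, tail_fraction: float = 0.01):
    """Estimate alpha from the top `tail_fraction` of the sample."""
    x = np.sort(incomes)[::-1]                 # descending order statistics
    k = max(int(len(x) * tail_fraction), 10)
    tail, x_min = x[:k], x[k]                  # threshold: (k+1)-th largest
    return k / np.sum(np.log(tail / x_min))

rng = np.random.default_rng(0)
sample = (rng.pareto(2.5, size=500_000) + 1.0) * 1000.0
print(hill_pareto_exponent(sample))            # should be close to 2.5
```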

  15. A Database Approach to Distributed State Space Generation

    NARCIS (Netherlands)

    Blom, Stefan; Lisser, Bert; van de Pol, Jan Cornelis; Weber, M.

    2007-01-01

    We study distributed state space generation on a cluster of workstations. It is explained why state space partitioning by a global hash function is problematic when states contain variables from unbounded domains, such as lists or other recursive datatypes. Our solution is to introduce a database

  16. A Database Approach to Distributed State Space Generation

    NARCIS (Netherlands)

    Blom, Stefan; Lisser, Bert; van de Pol, Jan Cornelis; Weber, M.; Cerna, I.; Haverkort, Boudewijn R.H.M.

    2008-01-01

    We study distributed state space generation on a cluster of workstations. It is explained why state space partitioning by a global hash function is problematic when states contain variables from unbounded domains, such as lists or other recursive datatypes. Our solution is to introduce a database

  17. Heterogeneous distribution of prokaryotes and viruses at the microscale in a tidal sediment

    DEFF Research Database (Denmark)

    Carreira, Cátia; Larsen, Morten; Glud, Ronnie

    2013-01-01

    In this study we show for the first time the microscale (mm) 2- and 3-dimensional spatial distribution and abundance of prokaryotes, viruses, and oxygen in a tidal sediment. Prokaryotes and viruses were highly heterogeneously distributed with patches of elevated abundances surrounded by areas of ...

  18. An Analysis of Weakly Consistent Replication Systems in an Active Distributed Network

    OpenAIRE

    Amit Chougule; Pravin Ghewari

    2011-01-01

    With the sudden increase in heterogeneity and distribution of data in wide-area networks, more flexible, efficient and autonomous approaches for management and data distribution are needed. In recent years, the proliferation of inter-networks and distributed applications has increased the demand for geographically-distributed replicated databases. The architecture of Bayou provides features that address the needs of database storage of world-wide applications. Key is the use of weak consisten...

  19. Management of Distributed and Extendible Heterogeneous Radio Architectures

    DEFF Research Database (Denmark)

    Ramkumar, Venkata; Mihovska, Albena D.; Prasad, Neeli R.

    2009-01-01

    Wireless communication systems are dynamic by nature, which comes from several factors, namely: radio propagation impairments, traffic changes, interference conditions, user mobility, etc. In a heterogeneous environment, the dynamic network behavior calls for a dynamic management of the radio...... resources; a process that associates a large number of parameters and quality/performance indicators that need to be set, measured, analyzed, and optimized. Radio-over-fiber (RoF) technology involves the use of optical fiber links to distribute radio frequency (RF) signals from a central location to remote...

  20. CT Identification and Fractal Characterization of 3-D Propagation and Distribution of Hydrofracturing Cracks in Low-Permeability Heterogeneous Rocks

    Science.gov (United States)

    Liu, Peng; Ju, Yang; Gao, Feng; Ranjith, Pathegama G.; Zhang, Qianbing

    2018-03-01

    Understanding and characterization of the three-dimensional (3-D) propagation and distribution of hydrofracturing cracks in heterogeneous rock are key for enhancing the stimulation of low-permeability petroleum reservoirs. In this study, we investigated the propagation and distribution characteristics of hydrofracturing cracks by conducting true triaxial hydrofracturing tests and computed tomography on artificial heterogeneous rock specimens. Silica sand, Portland cement, and aedelforsite were mixed to create artificial heterogeneous rock specimens based on the mineral compositions, coarse gravel distribution, and mechanical properties measured from natural heterogeneous glutenite cores. To probe the effects of material heterogeneity on hydrofracturing cracks, artificial homogeneous specimens were created using the same matrix composition as the heterogeneous rock specimens and then fractured for comparison. The effects of the horizontal geostress ratio on the 3-D growth and distribution of cracks during hydrofracturing were examined. A fractal-based method was proposed to characterize the complexity of fractures and the efficiency of hydrofracturing stimulation of heterogeneous media. The material heterogeneity and horizontal geostress ratio were found to significantly influence the 3-D morphology, growth, and distribution of hydrofracturing cracks. A horizontal geostress ratio of 1.7 appears to be the upper limit for the occurrence of multiple cracks; higher ratios cause a single crack perpendicular to the minimum horizontal geostress component. The fracturing efficiency is associated not only with the fractured volume but also with the complexity of the crack network.

  1. Effects of Fiber Type and Size on the Heterogeneity of Oxygen Distribution in Exercising Skeletal Muscle

    Science.gov (United States)

    Liu, Gang; Mac Gabhann, Feilim; Popel, Aleksander S.

    2012-01-01

    The process of oxygen delivery from capillary to muscle fiber is essential for a tissue with variable oxygen demand, such as skeletal muscle. Oxygen distribution in exercising skeletal muscle is regulated by convective oxygen transport in the blood vessels, oxygen diffusion and consumption in the tissue. Spatial heterogeneities in oxygen supply, such as microvascular architecture and hemodynamic variables, had been observed experimentally and their marked effects on oxygen exchange had been confirmed using mathematical models. In this study, we investigate the effects of heterogeneities in oxygen demand on tissue oxygenation distribution using a multiscale oxygen transport model. Muscles are composed of different ratios of the various fiber types. Each fiber type has characteristic values of several parameters, including fiber size, oxygen consumption, myoglobin concentration, and oxygen diffusivity. Using experimentally measured parameters for different fiber types and applying them to the rat extensor digitorum longus muscle, we evaluated the effects of heterogeneous fiber size and fiber type properties on the oxygen distribution profile. Our simulation results suggest a marked increase in spatial heterogeneity of oxygen due to fiber size distribution in a mixed muscle. Our simulations also suggest that the combined effects of fiber type properties, except size, do not contribute significantly to the tissue oxygen spatial heterogeneity. However, the incorporation of the difference in oxygen consumption rates of different fiber types alone causes higher oxygen heterogeneity compared to control cases with uniform fiber properties. In contrast, incorporating variation in other fiber type-specific properties, such as myoglobin concentration, causes little change in spatial tissue oxygenation profiles. PMID:23028531

  2. Effects of species biological traits and environmental heterogeneity on simulated tree species distribution shifts under climate change.

    Science.gov (United States)

    Wang, Wen J; He, Hong S; Thompson, Frank R; Spetich, Martin A; Fraser, Jacob S

    2018-09-01

    Demographic processes (fecundity, dispersal, colonization, growth, and mortality) and their interactions with environmental changes are not well represented in current climate-distribution models (e.g., niche and biophysical process models) and constitute a large uncertainty in projections of future tree species distribution shifts. We investigate how species' biological traits and environmental heterogeneity affect species distribution shifts. We used a species-specific, spatially explicit forest dynamics model, LANDIS PRO, which incorporates site-scale tree species demography and competition, landscape-scale dispersal and disturbances, and regional-scale abiotic controls, to simulate the distribution shifts of four representative tree species with distinct biological traits in the central hardwood forest region of the United States. Our results suggested that biological traits (e.g., dispersal capacity, maturation age) were important for determining tree species distribution shifts. Environmental heterogeneity, on average, reduced shift rates by 8% compared to perfect environmental conditions. The average distribution shift rates ranged from 24 to 200 m year⁻¹ under climate change scenarios, implying that many tree species may not be able to keep up with climate change because of limited dispersal capacity, long generation time, and environmental heterogeneity. We suggest that climate-distribution models should include species demographic processes (e.g., fecundity, dispersal, colonization), biological traits (e.g., dispersal capacity, maturation age), and environmental heterogeneity (e.g., habitat fragmentation) to improve future predictions of species distribution shifts in response to changing climates. Copyright © 2018 Elsevier B.V. All rights reserved.

  3. Schema architecture and their relationships to transaction processing in distributed database systems

    NARCIS (Netherlands)

    Apers, Peter M.G.; Scheuermann, P.

    1991-01-01

    We discuss the different types of schema architectures which could be supported by distributed database systems, making a clear distinction between logical, physical, and federated distribution. We elaborate on the additional mapping information required in architecture based on logical distribution

  4. GPCALMA: A Tool For Mammography With A GRID-Connected Distributed Database

    International Nuclear Information System (INIS)

    Bottigli, U.; Golosio, B.; Masala, G.L.; Oliva, P.; Stumbo, S.; Cerello, P.; Cheran, S.; Delogu, P.; Fantacci, M.E.; Retico, A.; Fauci, F.; Magro, R.; Raso, G.; Lauria, A.; Palmiero, R.; Lopez Torres, E.; Tangaro, S.

    2003-01-01

    The GPCALMA (Grid Platform for Computer Assisted Library for MAmmography) collaboration involves several physics departments, INFN (National Institute of Nuclear Physics) sections, and Italian hospitals. The aim of this collaboration is to develop a tool that can help radiologists in the early detection of breast cancer. GPCALMA has built a large distributed database of digitised mammographic images (about 5500 images corresponding to 1650 patients) and developed CAD (Computer Aided Detection) software which is integrated in a station that can also be used to acquire new images, serve as an archive, and perform statistical analysis. The images (18×24 cm², digitised by a CCD linear scanner with an 85 μm pitch and 4096 gray levels) are completely described: pathological ones are characterized consistently with the radiologist's diagnosis and histological data, while non-pathological ones correspond to patients with a follow-up of at least three years. The distributed database is realized through the connection of all the hospitals and research centers using GRID technology. In each hospital, the local patients' digital images are stored in the local database. Using the GRID connection, GPCALMA will allow each node to work on distributed database data as well as local database data. Using its database, the GPCALMA tools perform several analyses. A texture analysis, i.e. an automated classification into adipose, dense or glandular texture, can be provided by the system. The GPCALMA software also allows classification of pathological features, in particular analysis of massive lesions (both opacities and spiculated lesions) and of microcalcification clusters. The detection of pathological features is made using neural network software that provides a selection of areas showing a given 'suspicion level' of lesion occurrence. The performance of the GPCALMA system will be presented in terms of ROC (Receiver Operating Characteristic) curves. The results of the GPCALMA system as 'second reader' will also

  5. Long-term spatial heterogeneity in mallard distribution in the Prairie pothole region

    Science.gov (United States)

    Janke, Adam K.; Anteau, Michael J.; Stafford, Joshua D.

    2017-01-01

    The Prairie Pothole Region (PPR) of the north-central United States and south-central Canada supports more than half of all breeding mallards (Anas platyrhynchos) counted annually in North America and is the focus of widespread conservation and research efforts. Allocation of conservation resources for this socioeconomically important population would benefit from an understanding of the nature of spatiotemporal variation in the distribution of breeding mallards throughout the 850,000 km² landscape. We used mallard counts from the Waterfowl Breeding Population and Habitat Survey to test for spatial heterogeneity and identify high- and low-abundance regions of breeding mallards over a 50-year time series. We found strong annual spatial heterogeneity in all years: 90% of mallards counted annually were on an average of only 15% of surveyed segments. Using a local indicator of spatial autocorrelation, we found a relatively static distribution of low-count clusters in northern Montana, USA, and southern Alberta, Canada, and a dynamic distribution of high-count clusters throughout the study period. The distribution of high-count clusters shifted southeast from northwestern portions of the PPR in Alberta and western Saskatchewan, Canada, to North and South Dakota, USA, during the latter half of the study period. This spatial redistribution of core mallard breeding populations was likely driven by interactions between environmental variation that created favorable hydrological conditions for wetlands in the eastern PPR and dynamic land-use patterns related to upland cropping practices and government land-retirement programs. Our results highlight an opportunity for prioritizing relatively small regions within the PPR for allocation of wetland and grassland conservation for mallard populations. However, the extensive spatial heterogeneity in core distributions over our study period suggests such spatial prioritization will have to overcome challenges presented by dynamic land

  6. The Power of Heterogeneity: Parameter Relationships from Distributions

    Science.gov (United States)

    Röding, Magnus; Bradley, Siobhan J.; Williamson, Nathan H.; Dewi, Melissa R.; Nann, Thomas; Nydén, Magnus

    2016-01-01

    Complex scientific data are becoming the norm: many disciplines are growing immensely data-rich, and higher-dimensional measurements are performed to resolve complex relationships between parameters. Inherently multi-dimensional measurements can directly provide information on both the distributions of individual parameters and the relationships between them, such as in nuclear magnetic resonance and optical spectroscopy. However, when data originate from different measurements and come in different forms, resolving parameter relationships is a matter of data analysis rather than experiment. We present a method for resolving relationships between parameters that are distributed individually and also correlated. In two case studies, we model the relationships between diameter and luminescence properties of quantum dots and the relationship between molecular weight and diffusion coefficient for polymers. Although resolving complicated correlated relationships is expected to require inherently multi-dimensional measurements, our method constitutes a useful contribution to the modelling of quantitative relationships between correlated parameters and measurements. We emphasise the general applicability of the method in fields where heterogeneity and complex distributions of parameters are obstacles to scientific insight. PMID:27182701

  7. A distributed database view of network tracking systems

    Science.gov (United States)

    Yosinski, Jason; Paffenroth, Randy

    2008-04-01

    In distributed tracking systems, multiple non-collocated trackers cooperate to fuse local sensor data into a global track picture. Generating this global track picture at a central location is fairly straightforward, but the single point of failure and excessive bandwidth requirements introduced by centralized processing motivate the development of decentralized methods. In many decentralized tracking systems, trackers communicate with their peers via a lossy, bandwidth-limited network in which dropped, delayed, and out of order packets are typical. Oftentimes the decentralized tracking problem is viewed as a local tracking problem with a networking twist; we believe this view can underestimate the network complexities to be overcome. Indeed, a subsequent 'oversight' layer is often introduced to detect and handle track inconsistencies arising from a lack of robustness to network conditions. We instead pose the decentralized tracking problem as a distributed database problem, enabling us to draw inspiration from the vast extant literature on distributed databases. Using the two-phase commit algorithm, a well known technique for resolving transactions across a lossy network, we describe several ways in which one may build a distributed multiple hypothesis tracking system from the ground up to be robust to typical network intricacies. We pay particular attention to the dissimilar challenges presented by network track initiation vs. maintenance and suggest a hybrid system that balances speed and robustness by utilizing two-phase commit for only track initiation transactions. Finally, we present simulation results contrasting the performance of such a system with that of more traditional decentralized tracking implementations.
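
    The two-phase commit idea used above for track-initiation transactions can be illustrated with a minimal coordinator/participant sketch (class and method names are hypothetical; the record does not describe the authors' actual interfaces, and a real tracker would add timeouts, logging, and conflict checks).

      # Minimal two-phase commit sketch for a "track initiation" transaction.
      # All names are illustrative; network transport and failure handling are omitted.
      class Participant:
          def __init__(self, name):
              self.name = name
              self.staged = None

          def prepare(self, track):
              # Phase 1: vote yes only if the proposed track can be accepted locally.
              self.staged = track
              return True

          def commit(self):
              print(f"{self.name}: committed {self.staged}")

          def abort(self):
              self.staged = None
              print(f"{self.name}: aborted")

      def two_phase_commit(participants, track):
          votes = [p.prepare(track) for p in participants]   # phase 1: prepare/vote
          if all(votes):
              for p in participants:                         # phase 2: commit everywhere
                  p.commit()
              return True
          for p in participants:                             # any 'no' vote aborts all
              p.abort()
          return False

      two_phase_commit([Participant("tracker-A"), Participant("tracker-B")], "track-042")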

  8. Breast dose in mammography is about 30% lower when realistic heterogeneous glandular distributions are considered

    Energy Technology Data Exchange (ETDEWEB)

    Hernandez, Andrew M., E-mail: amhern@ucdavis.edu [Biomedical Engineering Graduate Group, University of California Davis, Sacramento, California 95817 (United States); Seibert, J. Anthony; Boone, John M. [Departments of Radiology and Biomedical Engineering, Biomedical Engineering Graduate Group, University of California Davis, Sacramento, California 95817 (United States)

    2015-11-15

    Purpose: Current dosimetry methods in mammography assume that the breast is composed of a homogeneous mixture of glandular and adipose tissue. Three-dimensional (3D) dedicated breast CT (bCT) data sets were used previously to assess the complex anatomical structure within the breast, characterizing the statistical distribution of glandular tissue in the breast. The purpose of this work was to investigate the effect of bCT-derived heterogeneous glandular distributions on dosimetry in mammography. Methods: bCT-derived breast diameters, volumes, and 3D fibroglandular distributions were used to design realistic compressed breast models composed of heterogeneous distributions of glandular tissue. The bCT-derived glandular distributions were fit to biGaussian functions and used as probability density maps to assign the density distributions within compressed breast models. The MCNPX 2.6.0 Monte Carlo code was used to estimate monoenergetic normalized mean glandular dose "DgN(E)" values in mammography geometry. The DgN(E) values were then weighted by typical mammography x-ray spectra to determine polyenergetic DgN (pDgN) coefficients for heterogeneous (pDgN_hetero) and homogeneous (pDgN_homo) cases. The dependence of the estimated pDgN values on phantom size, volumetric glandular fraction (VGF), x-ray technique factors, and location of the heterogeneous glandular distributions was investigated. Results: The pDgN_hetero coefficients were on average 35.3% (SD, 4.1) and 24.2% (SD, 3.0) lower than the pDgN_homo coefficients for the Mo–Mo and W–Rh x-ray spectra, respectively, across all phantom sizes and VGFs when the glandular distributions were centered within the breast phantom in the coronal plane. At constant breast size, increasing VGF from 7.3% to 19.1% led to a reduction in pDgN_hetero relative to pDgN_homo of 23.6%–27.4% for a W–Rh spectrum. Displacement of the glandular distribution, at a distance equal to 10% of the
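
    The polyenergetic coefficients above are obtained by weighting monoenergetic DgN(E) values with an x-ray spectrum; a minimal numerical sketch of that weighting (the energy grid, spectral weights, and DgN(E) numbers are invented placeholders, not the paper's Monte Carlo results) is:

      # Sketch: spectrum-weighted polyenergetic DgN from monoenergetic DgN(E) values.
      # All numbers are illustrative placeholders, not the paper's Monte Carlo output.
      import numpy as np

      energy_keV = np.array([10.0, 15.0, 20.0, 25.0, 30.0])     # energy bins
      spectrum   = np.array([0.05, 0.30, 0.40, 0.20, 0.05])     # relative weight per bin
      dgn_E      = np.array([0.08, 0.25, 0.45, 0.62, 0.75])     # DgN(E), illustrative

      # Weight each monoenergetic coefficient by the normalized spectrum.
      pDgN = np.sum(spectrum * dgn_E) / np.sum(spectrum)
      mean_E = np.sum(spectrum * energy_keV) / np.sum(spectrum)
      print(f"polyenergetic DgN ~ {pDgN:.3f} (mean spectrum energy ~ {mean_E:.1f} keV)")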

  9. The EDEN-IW ontology model for sharing knowledge and water quality data between heterogenous databases

    DEFF Research Database (Denmark)

    Stjernholm, M.; Poslad, S.; Zuo, L.

    2004-01-01

    The Environmental Data Exchange Network for Inland Water (EDEN-IW) project's main aim is to develop a system for making disparate and heterogeneous databases of Inland Water quality more accessible to users. The core technology is based upon a combination of: ontological model to represent...... a Semantic Web based data model for IW; software agents as an infrastructure to share and reason about the IW semantic data model and XML to make the information accessible to Web portals and mainstream Web services. This presentation focuses on the Semantic Web or Ontological model. Currently, we have...

  10. Calculation of braking radiation dose fields in heterogeneous media by a method of the transformation of axial distribution

    International Nuclear Information System (INIS)

    Mil'shtejn, R.S.

    1988-01-01

    Analysis of dose fields in a heterogeneous tissue-equivalent medium has shown that the dose distributions have radial symmetry and can be described by a curve of axial distribution with renormalization of the depth of maximum ionization. A method for calculating the dose field in a heterogeneous medium using the principle of radial symmetry is presented

  11. Sales Comparison Approach Indicating Heterogeneity of Particular Type of Real Estate and Corresponding Valuation Accuracy

    Directory of Open Access Journals (Sweden)

    Martin Cupal

    2017-01-01

    The article focuses on the heterogeneity of goods, namely real estate, and consequently deals with market valuation accuracy. The heterogeneity of real estate lies in the fact that every unit is unique in terms of its construction, condition, financing and, above all, location, and thus assessing its value is necessarily difficult. This research also indicates the relative efficiency of markets across property types based on their level of variability. The research is based on two databases consisting of various types of real estate with specific market parameters. These parameters determine the differences across the types and reveal heterogeneity. The first database was built from valuations by the sales comparison approach and the second from data on real properties offered on the market. The methodology is based on univariate and multivariate statistics of key variables of those databases. The multivariate analysis is performed with a Hotelling T² control chart and statistics with appropriate numerical characteristics. The results from both databases were joined by weights with regard to the dependence criterion of the variables. The final results indicate potential valuation accuracy across the types. The main contribution of the research is that the evaluation is derived not only from the price deviation or distribution, but also from the causes of real property heterogeneity as a whole.
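
    The Hotelling T² statistic mentioned above measures the multivariate distance of an observation from a reference mean, scaled by the sample covariance; a minimal sketch with invented property variables:

      # Sketch: Hotelling T^2 statistic for a single multivariate observation,
      # as used in control charts.  Variables and numbers are illustrative only.
      import numpy as np

      rng = np.random.default_rng(1)
      # Reference sample of property records: [price per m^2, age in years, floor area in m^2].
      X = rng.normal(loc=[2500.0, 30.0, 80.0], scale=[400.0, 10.0, 20.0], size=(200, 3))

      mean = X.mean(axis=0)
      cov_inv = np.linalg.inv(np.cov(X, rowvar=False))

      def hotelling_t2(x):
          d = x - mean
          return float(d @ cov_inv @ d)

      # A unit far from the reference profile yields a large T^2 value.
      print(hotelling_t2(np.array([3800.0, 12.0, 130.0])))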

  12. Distributed data collection for a database of radiological image interpretations

    Science.gov (United States)

    Long, L. Rodney; Ostchega, Yechiam; Goh, Gin-Hua; Thoma, George R.

    1997-01-01

    The National Library of Medicine, in collaboration with the National Center for Health Statistics and the National Institute for Arthritis and Musculoskeletal and Skin Diseases, has built a system for collecting radiological interpretations for a large set of x-ray images acquired as part of the data gathered in the second National Health and Nutrition Examination Survey. This system is capable of delivering across the Internet 5- and 10-megabyte x-ray images to Sun workstations equipped with X Window-based 2048 × 2560 image displays, for the purpose of having these images interpreted for the degree of presence of particular osteoarthritic conditions in the cervical and lumbar spines. The collected interpretations can then be stored in a database at the National Library of Medicine, under control of the Illustra DBMS. This system is a client/server database application which integrates (1) distributed server processing of client requests, (2) a customized image transmission method for faster Internet data delivery, (3) distributed client workstations with high resolution displays, image processing functions and an on-line digital atlas, and (4) relational database management of the collected data.

  13. DCODE: A Distributed Column-Oriented Database Engine for Big Data Analytics

    OpenAIRE

    Liu, Yanchen; Cao, Fang; Mortazavi, Masood; Chen, Mengmeng; Yan, Ning; Ku, Chi; Adnaik, Aniket; Morgan, Stephen; Shi, Guangyu; Wang, Yuhu; Fang, Fan

    2015-01-01

    We propose a novel Distributed Column-Oriented Database Engine (DCODE) for efficient analytic query processing that combines advantages of both column storage and parallel processing. In DCODE, we enhance an existing open-source columnar database engine by adding the capability for handling queries over a cluster. Specifically, we studied parallel query execution and optimization techniques such as horizontal partitioning, exchange op...

  14. Coordinated Collaboration between Heterogeneous Distributed Energy Resources

    Directory of Open Access Journals (Sweden)

    Shahin Abdollahy

    2014-01-01

    A power distribution feeder, where a heterogeneous set of distributed energy resources is deployed, is examined by simulation. The energy resources include PV, battery storage, a natural gas GenSet, fuel cells, and active thermal storage for commercial buildings. The resource scenario considered is one that may exist in the not-too-distant future. Two cases of interaction between different resources are examined. One interaction involves a GenSet used to partially offset the duty cycle of a smoothing battery connected to a large PV system. The other example involves the coordination of twenty thermal storage devices, each associated with a commercial building. The storage devices are intended to provide maximum benefit to the building, but it is shown that this can have a deleterious effect on the overall system unless the action of the individual storage devices is coordinated. A network-based approach is also introduced to calculate an effectiveness metric for all available resources that take part in coordinated operation. The main finding is that it is possible to achieve synergy between DERs on a system; however, this requires a unified strategy to coordinate the action of all devices in a decentralized way.

  15. A Database for Decision-Making in Training and Distributed Learning Technology

    National Research Council Canada - National Science Library

    Stouffer, Virginia

    1998-01-01

    .... A framework for incorporating data about distributed learning courseware into the existing training database was devised and a plan for a national electronic courseware redistribution network was recommended...

  16. The response time distribution in a real-time database with optimistic concurrency control

    NARCIS (Netherlands)

    Sassen, S.A.E.; Wal, van der J.

    1996-01-01

    For a real-time shared-memory database with optimistic concurrency control, an approximation for the transaction response time distribution is obtained. The model assumes that transactions arrive at the database according to a Poisson process, that every transaction uses an equal number of

  17. Experience using a distributed object oriented database for a DAQ system

    International Nuclear Information System (INIS)

    Bee, C.P.; Eshghi, S.; Jones, R.

    1996-01-01

    To configure the RD13 data acquisition system, we need many parameters which describe the various hardware and software components. Such information has been defined using an entity-relation model and stored in a commercial memory-resident database. During the last year, Itasca, an object oriented database management system (OODB), was chosen as a replacement database system. We have ported the existing databases (hardware and software configurations, run parameters, etc.) to Itasca and integrated it with the run control system. We believe that it is possible to use an OODB in real-time environments such as DAQ systems. In this paper, we present our experience and impressions: why we wanted to change from an entity-relational approach, some useful features of Itasca, and the issues we met during this project, including integration of the database into an existing distributed environment and factors which influence performance. (author)

  18. Application of new type of distributed multimedia databases to networked electronic museum

    Science.gov (United States)

    Kuroda, Kazuhide; Komatsu, Naohisa; Komiya, Kazumi; Ikeda, Hiroaki

    1999-01-01

    Recently, various kinds of multimedia application systems have been actively developed, building on advances in high-speed communication networks, computer processing technologies, and digital content-handling technologies. Against this background, this paper proposes a new distributed multimedia database system which can effectively perform a new function of cooperative retrieval among distributed databases. The proposed system introduces the new concept of a 'retrieval manager', which functions as an intelligent controller so that the user can recognize a set of distributed databases as one logical database. The logical database dynamically generates and executes a preferred combination of retrieval parameters on the basis of both directory data and the system environment. Moreover, a concept of 'domain' is defined in the system as a managing unit of retrieval. Retrieval can be performed effectively by cooperative processing among multiple domains. A communication language and protocols are also defined in the system and are used in every communication action within it. A language interpreter in each machine translates the communication language into the internal language used in that machine. Using the language interpreter, internal modules such as the DBMS and user interface modules can be selected freely. The concept of a 'content-set' is also introduced. A content-set is defined as a package of contents that are related to each other, and the system handles a content-set as one object. The user terminal can effectively control the display of retrieved contents by referring to data indicating the relations of the contents in the content-set. In order to verify the function of the proposed system, a networked electronic museum was experimentally built. The results of this experiment indicate that the proposed system can effectively retrieve the target contents under the control of a number of distributed

  19. Measurement of heterogeneous distribution on technegas SPECT images by three-dimensional fractal analysis

    International Nuclear Information System (INIS)

    Nagao, Michinobu; Murase, Kenya

    2002-01-01

    This review article describes a method for quantifying the heterogeneous distribution on Technegas (99mTc-carbon particle radioaerosol) SPECT images by three-dimensional fractal analysis (3D-FA). Technegas SPECT was performed to quantify the severity of pulmonary emphysema. We delineated the SPECT images using five cut-offs (15, 20, 25, 30 and 35% of the maximal voxel radioactivity) and measured the total number of voxels in the areas surrounded by the contours obtained at each cut-off level. We calculated fractal dimensions from the relationship between the total number of voxels and the cut-off levels transformed into natural logarithms. The fractal dimension derived from 3D-FA is a relative and objective measurement that can assess the heterogeneous distribution on Technegas SPECT images. The fractal dimension correlates strongly with pulmonary function in patients with emphysema and documents well the overall and regional severity of emphysema. (author)
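
    The fractal dimension described above follows from regressing the logarithm of the voxel count against the logarithm of the cut-off level; a minimal sketch of that regression (the voxel counts are synthetic, not SPECT measurements):

      # Sketch: fractal dimension from cut-off levels as in the 3D fractal analysis
      # described above.  Voxel counts are synthetic, not SPECT data.
      import numpy as np

      cutoffs = np.array([15.0, 20.0, 25.0, 30.0, 35.0])        # % of maximal activity
      voxels  = np.array([52000, 31000, 18000, 11000, 6500])    # voxels above each cut-off

      # Fractal dimension = |slope| of ln(voxel count) versus ln(cut-off level).
      slope, _ = np.polyfit(np.log(cutoffs), np.log(voxels), 1)
      print(f"fractal dimension ~ {abs(slope):.2f}")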

  20. New model for distributed multimedia databases and its application to networking of museums

    Science.gov (United States)

    Kuroda, Kazuhide; Komatsu, Naohisa; Komiya, Kazumi; Ikeda, Hiroaki

    1998-02-01

    This paper proposes a new distributed multimedia database system in which databases storing MPEG-2 videos and/or super high definition images are connected together through B-ISDNs, and also refers to an example of the networking of museums on the basis of the proposed database system. The proposed database system introduces the new concept of a 'retrieval manager', which functions as an intelligent controller so that the user can recognize a set of image databases as one logical database. A user terminal issues a retrieval request to the retrieval manager located nearest to it on the network. The retrieved contents are then sent through the B-ISDNs directly to the user terminal from the server which stores the designated contents. In this case, the designated logical database dynamically generates the best combination of retrieval parameters, such as the data transfer path, by referring to directory data and taking the system environment into account. The generated retrieval parameters are then executed to select the most suitable data transfer path on the network. In this way, the best combination of these parameters fits the distributed multimedia database system.

  1. Present and future status of distributed database for nuclear materials (Data-Free-Way)

    International Nuclear Information System (INIS)

    Fujita, Mitsutane; Xu, Yibin; Kaji, Yoshiyuki; Tsukada, Takashi

    2004-01-01

    Data-Free-Way (DFW) is a distributed database for nuclear materials. DFW has been developed since 1990 by three organizations: the National Institute for Materials Science (NIMS), the Japan Atomic Energy Research Institute (JAERI) and the Japan Nuclear Cycle Development Institute (JNC). Each organization constructs a materials database in its strongest field, and the members of the three organizations can use these databases over the Internet. The construction of DFW, the stored data, an outline of the knowledge data system, the preparation of knowledge notes, and the activities of the three organizations are described. For NIMS, the nuclear reaction database for materials is explained. For JAERI, data analysis using IASCC data in JMPD is included. The main databases of JNC are the 'Experimental database of coexistence of engineering ceramics in liquid sodium at high temperature', the 'Tensile test database of irradiated 304 stainless steel' and the 'Technical information database'. (S.Y.)

  2. TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems

    OpenAIRE

    Abadi, Martín; Agarwal, Ashish; Barham, Paul; Brevdo, Eugene; Chen, Zhifeng; Citro, Craig; Corrado, Greg S.; Davis, Andy; Dean, Jeffrey; Devin, Matthieu; Ghemawat, Sanjay; Goodfellow, Ian; Harp, Andrew; Irving, Geoffrey; Isard, Michael

    2016-01-01

    TensorFlow is an interface for expressing machine learning algorithms, and an implementation for executing such algorithms. A computation expressed using TensorFlow can be executed with little or no change on a wide variety of heterogeneous systems, ranging from mobile devices such as phones and tablets up to large-scale distributed systems of hundreds of machines and thousands of computational devices such as GPU cards. The system is flexible and can be used to express a wide variety of algo...
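
    As a small illustration of the portability described above, the same TensorFlow code can be run unchanged whether or not a GPU is present (TensorFlow 2.x API; the computation itself is arbitrary):

      # Sketch: a tiny computation expressed with TensorFlow 2.x; device placement is
      # handled automatically, so the same code runs on CPU or GPU without change.
      import tensorflow as tf

      a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
      b = tf.constant([[1.0], [0.5]])
      y = tf.matmul(a, b) + 1.0

      print(y.numpy())
      print("visible devices:", [d.device_type for d in tf.config.list_physical_devices()])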

  3. Research and Implementation of Distributed Database HBase Monitoring System

    Directory of Open Access Journals (Sweden)

    Guo Lisi

    2017-01-01

    With the arrival of the big data age, the distributed database HBase has become an important tool for storing massive data. The normal operation of the HBase database is an important guarantee of the security of data storage, so designing a sound HBase monitoring system is of great practical significance. In this article, we introduce a solution, containing performance monitoring and fault alarm modules, that meets an operator's requirements for monitoring the HBase database in actual production projects. We designed a monitoring system which consists of a flexible and extensible monitoring agent, a monitoring server based on the SSM architecture, and a concise monitoring display layer. Moreover, in order to deal with pages rendering too slowly in actual operation, we present a solution: reducing the number of SQL queries. It has been shown that reducing SQL queries can effectively improve system performance and user experience. The system works well in monitoring the status of the HBase database, flexibly extending the monitoring metrics, and issuing a warning when a fault occurs, so that it improves the working efficiency of the administrator and ensures the smooth operation of the project.

  4. Distributed Database Control and Allocation. Volume 3. Distributed Database System Designer’s Handbook.

    Science.gov (United States)

    1983-10-01

    2.7 Multiversion Data 2-18; 2.7.1 Multiversion Timestamping 2-20; 2.7.2 Multiversion Locking 2-20; 2.8 Combining the Techniques 2-22; 3. Database Recovery Algorithms ... See [THOM79, GIFF79] for details. 2.7 Multiversion Data: Let us return to a database system model where each logical data item is stored at one DM. In a multiversion database, each Write wi[x] produces a new copy (or version) of x, denoted xi. Thus, the value of x is a set of versions. For each

  5. A database for on-line event analysis on a distributed memory machine

    CERN Document Server

    Argante, E; Van der Stok, P D V; Willers, Ian Malcolm

    1995-01-01

    Parallel in-memory databases can enhance the structuring and parallelization of programs used in High Energy Physics (HEP). Efficient database access routines are used as communication primitives which hide the communication topology, in contrast to more explicit communication libraries like PVM or MPI. A parallel in-memory database, called SPIDER, has been implemented on a 32 node Meiko CS-2 distributed memory machine. The SPIDER primitives generate lower overhead than those of PVM or MPI. The event reconstruction program CPREAD of the CPLEAR experiment has been used as a test case. Performance measurements were carried out at the event rate generated by CPLEAR.

  6. Heterogeneous Wireless Networks for Smart Grid Distribution Systems: Advantages and Limitations.

    Science.gov (United States)

    Khalifa, Tarek; Abdrabou, Atef; Shaban, Khaled; Gaouda, A M

    2018-05-11

    Supporting a conventional power grid with advanced communication capabilities is a cornerstone of transforming it into a smart grid. A reliable communication infrastructure with high throughput can lay the foundation for the ultimate objective of a fully automated power grid with self-healing capabilities. In order to realize this objective, the communication infrastructure of a power distribution network needs to be extended to cover all substations, including medium/low voltage ones. This enables information exchange among substations for a variety of system automation purposes with a latency low enough for time-critical applications. This paper proposes the integration of two heterogeneous wireless technologies (such as WiFi and cellular 3G/4G) to provide reliable and fast communication among primary and secondary distribution substations. This integration allows the transmission of different data packets (not packet replicas) over two radio interfaces, making these interfaces act like one data pipe. Thus, the paper investigates the applicability and effectiveness of employing heterogeneous wireless networks (HWNs) in achieving the desired reliability and timeliness requirements of future smart grids. We study the performance of HWNs in a realistic scenario under different data transfer loads and packet loss ratios. Our findings reveal that HWNs can be a viable data transfer option for smart grids.

  7. Heterogeneous Wireless Networks for Smart Grid Distribution Systems: Advantages and Limitations

    Directory of Open Access Journals (Sweden)

    Tarek Khalifa

    2018-05-01

    Supporting a conventional power grid with advanced communication capabilities is a cornerstone of transforming it into a smart grid. A reliable communication infrastructure with high throughput can lay the foundation for the ultimate objective of a fully automated power grid with self-healing capabilities. In order to realize this objective, the communication infrastructure of a power distribution network needs to be extended to cover all substations, including medium/low voltage ones. This enables information exchange among substations for a variety of system automation purposes with a latency low enough for time-critical applications. This paper proposes the integration of two heterogeneous wireless technologies (such as WiFi and cellular 3G/4G) to provide reliable and fast communication among primary and secondary distribution substations. This integration allows the transmission of different data packets (not packet replicas) over two radio interfaces, making these interfaces act like one data pipe. Thus, the paper investigates the applicability and effectiveness of employing heterogeneous wireless networks (HWNs) in achieving the desired reliability and timeliness requirements of future smart grids. We study the performance of HWNs in a realistic scenario under different data transfer loads and packet loss ratios. Our findings reveal that HWNs can be a viable data transfer option for smart grids.

  8. Practical private database queries based on a quantum-key-distribution protocol

    International Nuclear Information System (INIS)

    Jakobi, Markus; Simon, Christoph; Gisin, Nicolas; Bancal, Jean-Daniel; Branciard, Cyril; Walenta, Nino; Zbinden, Hugo

    2011-01-01

    Private queries allow a user, Alice, to learn an element of a database held by a provider, Bob, without revealing which element she is interested in, while limiting her information about the other elements. We propose to implement private queries based on a quantum-key-distribution protocol, with changes only in the classical postprocessing of the key. This approach makes our scheme both easy to implement and loss tolerant. While unconditionally secure private queries are known to be impossible, we argue that an interesting degree of security can be achieved by relying on fundamental physical principles instead of unverifiable security assumptions in order to protect both the user and the database. We think that the scope exists for such practical private queries to become another remarkable application of quantum information in the footsteps of quantum key distribution.

   9. Long-lived CO₂ lasers with distributed heterogeneous catalysis

    Energy Technology Data Exchange (ETDEWEB)

    Browne, P G; Smith, A L.S.

    1974-12-11

    In a sealed CO₂-N₂-He system with a clean discharge tube, the degree of dissociation of the CO₂ is greater than 80 percent (with no hydrogen present), and laser action cannot be obtained. If Pt is distributed along the discharge tube walls as a discontinuous film, it catalyses back-reactions reforming CO₂. The degree of dissociation is then less than 40 percent, and efficient laser action at 10.6 μm is obtained. Using such distributed heterogeneous catalysis, a CO₂-N₂-He-Xe laser has operated for more than 3000 h. In this system, both H₂ and D₂ are undesirable additives because they decrease the excitation rate of the upper laser level. (auth)

  10. Distributed Input and State Estimation Using Local Information in Heterogeneous Sensor Networks

    Directory of Open Access Journals (Sweden)

    Dzung Tran

    2017-07-01

    A new distributed input and state estimation architecture is introduced and analyzed for heterogeneous sensor networks. Specifically, the nodes of a given sensor network are allowed to have heterogeneous information roles, in the sense that a subset of nodes can be active (that is, subject to observations of a process of interest) and the rest can be passive (that is, subject to no observations). Both fixed and varying active and passive roles of sensor nodes in the network are investigated. In addition, these nodes are allowed to have non-identical sensor modalities under the common underlying assumption that they have complementary properties distributed over the sensor network so as to achieve collective observability. The key feature of our framework is that it utilizes local information not only during the execution of the proposed distributed input and state estimation architecture but also in its design, in that global uniform ultimate boundedness of the error dynamics is guaranteed once each node satisfies given local stability conditions independent of the graph topology and the neighboring information of these nodes. As a special case (e.g., when all nodes are active and a positive real condition is satisfied), asymptotic stability can be achieved with our algorithm. Several illustrative numerical examples are further provided to demonstrate the efficacy of the proposed architecture.

  11. Effect of Heterogeneity of JSFR Fuel Assemblies to Power Distribution

    International Nuclear Information System (INIS)

    Takeda, Toshikazu; Shimazu, Yoichiro; Hibi, Koki; Fujimura, Koji

    2013-01-01

    Conclusions: 1) The strong heterogeneity of JSFR assemblies was successfully calculated by BACH. 2) Verification tests of BACH: • infinite assembly model; • color-set model; • good agreement with Monte Carlo results. 3) Core calculations: three models for the inner duct were used, namely the inward model, the outward model and the homogeneous model. • The k_eff difference between the inward and outward models → 0.3%Δk; • ~20% effect on the flux and power distributions. Therefore, careful attention must be paid to the location of the inner duct in the fuel loading of JSFR

  12. An XML-Based Networking Method for Connecting Distributed Anthropometric Databases

    Directory of Open Access Journals (Sweden)

    H Cheng

    2007-03-01

    Anthropometric data are used by numerous types of organizations for health evaluation, ergonomics, apparel sizing, fitness training, and many other applications. Data have been collected and stored in electronic databases since at least the 1940s. These databases are owned by many organizations around the world. In addition, the anthropometric studies stored in these databases often employ different standards, terminology, procedures, or measurement sets. To promote the use and sharing of these databases, the World Engineering Anthropometry Resources (WEAR) group was formed and tasked with the integration and publishing of member resources. It is easy to see that organizing worldwide anthropometric data into a single database architecture could be a daunting and expensive undertaking. The challenges of WEAR integration lie mainly in the areas of distributed and disparate data, different standards and formats, independent memberships, and limited development resources. Fortunately, XML schema and web services provide an alternative method for networking databases, referred to as the Loosely Coupled WEAR Integration. A standard XML schema can be defined and used as a type of Rosetta stone to translate the anthropometric data into a universal format, and a web services system can be set up to link the databases to one another. In this way, the originators of the data can keep their data locally along with their own data management system and user interface, but their data can be searched and accessed as part of the larger data network, and even combined with the data of others. This paper will identify requirements for WEAR integration, review XML as the universal format, review different integration approaches, and propose a hybrid web services/data mart solution.
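
    As a rough illustration of the 'Rosetta stone' schema idea, the sketch below maps a local record into a hypothetical common XML format using the Python standard library (the element and field names are invented; the actual WEAR schema is not given in the record):

      # Sketch: translating a local anthropometric record into a hypothetical common
      # XML format.  Element and field names are invented for illustration only.
      import xml.etree.ElementTree as ET

      local_record = {"subject": "S-001", "stature_cm": 172.4, "mass_kg": 68.0}

      root = ET.Element("wearRecord")                    # hypothetical root element
      ET.SubElement(root, "subjectId").text = local_record["subject"]
      stature = ET.SubElement(root, "measurement", name="stature", unit="cm")
      stature.text = str(local_record["stature_cm"])
      mass = ET.SubElement(root, "measurement", name="bodyMass", unit="kg")
      mass.text = str(local_record["mass_kg"])

      print(ET.tostring(root, encoding="unicode"))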

  13. Dynamic heterogeneity in life histories

    DEFF Research Database (Denmark)

    Tuljapurkar, Shripad; Steiner, Uli; Orzack, Steven Hecht

    2009-01-01

    or no fixed heterogeneity influences this trait. We propose that dynamic heterogeneity provides a 'neutral' model for assessing the possible role of unobserved 'quality' differences between individuals. We discuss fitness for dynamic life histories, and the implications of dynamic heterogeneity...... generate dynamic heterogeneity: life-history differences produced by stochastic stratum dynamics. We characterize dynamic heterogeneity in a range of species across taxa by properties of the Markov chain: the entropy, which describes the extent of heterogeneity, and the subdominant eigenvalue, which...... distributions of lifetime reproductive success. Dynamic heterogeneity contrasts with fixed heterogeneity: unobserved differences that generate variation between life histories. We show by an example that observed distributions of lifetime reproductive success are often consistent with the claim that little...

  14. Relational databases

    CERN Document Server

    Bell, D A

    1986-01-01

    Relational Databases explores the major advances in relational databases and provides a balanced analysis of the state of the art in relational databases. Topics covered include capture and analysis of data placement requirements; distributed relational database systems; data dependency manipulation in database schemata; and relational database support for computer graphics and computer aided design. This book is divided into three sections and begins with an overview of the theory and practice of distributed systems, using the example of INGRES from Relational Technology as illustration. The

  15. Spatiotemporal Distribution of β-Amyloid in Alzheimer Disease Is the Result of Heterogeneous Regional Carrying Capacities.

    Science.gov (United States)

    Whittington, Alex; Sharp, David J; Gunn, Roger N

    2018-05-01

    β-amyloid (Aβ) accumulation in the brain is 1 of 2 pathologic hallmarks of Alzheimer disease (AD), and the spatial distribution of Aβ has been studied extensively ex vivo. Methods: We applied mathematical modeling to Aβ in vivo PET imaging data to investigate competing theories of Aβ spread in AD. Results: Our results provided evidence that Aβ accumulation starts in all brain regions simultaneously and that its spatiotemporal distribution is due to heterogeneous regional carrying capacities (regional maximum possible concentration of Aβ) for the aggregated protein rather than to longer-term spreading from seed regions. Conclusion: The in vivo spatiotemporal distribution of Aβ in AD can be mathematically modeled using a logistic growth model in which the Aβ carrying capacity is heterogeneous across the brain but the exponential growth rate and time of half maximal Aβ concentration are constant. © 2018 by the Society of Nuclear Medicine and Molecular Imaging.
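
    A minimal sketch of the logistic form implied by the abstract, in generic notation (a region-specific carrying capacity K_r with a growth rate λ and half-maximum time t_50 shared across regions; the authors' exact parameterization may differ):

      A_r(t) = \frac{K_r}{1 + e^{-\lambda\,(t - t_{50})}},
      \qquad
      \frac{dA_r}{dt} = \lambda\, A_r \left(1 - \frac{A_r}{K_r}\right),

    where K_r varies across brain regions while \lambda and t_{50} are constant.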

  16. SPANG: a SPARQL client supporting generation and reuse of queries for distributed RDF databases.

    Science.gov (United States)

    Chiba, Hirokazu; Uchiyama, Ikuo

    2017-02-08

    Toward improved interoperability of distributed biological databases, an increasing number of datasets have been published in the standardized Resource Description Framework (RDF). Although the powerful SPARQL Protocol and RDF Query Language (SPARQL) provides a basis for exploiting RDF databases, writing SPARQL code is burdensome for users including bioinformaticians. Thus, an easy-to-use interface is necessary. We developed SPANG, a SPARQL client that has unique features for querying RDF datasets. SPANG dynamically generates typical SPARQL queries according to specified arguments. It can also call SPARQL template libraries constructed in a local system or published on the Web. Further, it enables combinatorial execution of multiple queries, each with a distinct target database. These features facilitate easy and effective access to RDF datasets and integrative analysis of distributed data. SPANG helps users to exploit RDF datasets by generation and reuse of SPARQL queries through a simple interface. This client will enhance integrative exploitation of biological RDF datasets distributed across the Web. This software package is freely available at http://purl.org/net/spang .
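
    For context, a plain SPARQL query against an RDF endpoint can be issued as below, using the generic SPARQLWrapper client and a placeholder endpoint URL; this illustrates SPARQL itself, not SPANG's own command-line interface or template libraries:

      # Sketch: issuing a SPARQL query against an RDF endpoint.  The endpoint URL and
      # query are placeholders; this uses the generic SPARQLWrapper client, not SPANG.
      from SPARQLWrapper import SPARQLWrapper, JSON

      endpoint = SPARQLWrapper("https://example.org/sparql")     # placeholder endpoint
      endpoint.setQuery("""
          PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
          SELECT ?s ?label WHERE { ?s rdfs:label ?label . } LIMIT 10
      """)
      endpoint.setReturnFormat(JSON)

      results = endpoint.query().convert()
      for row in results["results"]["bindings"]:
          print(row["s"]["value"], row["label"]["value"])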

  17. A Universal Isotherm Model to Capture Adsorption Uptake and Energy Distribution of Porous Heterogeneous Surface

    KAUST Repository

    Ng, Kim Choon; Burhan, Muhammad; Shahzad, Muhammad Wakil; Ismail, Azahar Bin

    2017-01-01

    The adsorbate-adsorbent thermodynamics are complex, as they are influenced by the pore size distribution, surface heterogeneity and site energy distribution, as well as the adsorbate properties. Together, these parameters define the adsorbate uptake, forming the state diagrams known as adsorption isotherms when the sorption site energies on the pore surfaces are favorable. The available adsorption models for describing vapor uptake or isotherms have hitherto been defined individually to correlate with a certain type of isotherm pattern. There has as yet been no universal approach to developing these isotherm models. In this paper, we demonstrate that the characteristics of all sorption isotherm types can be succinctly unified by a revised Langmuir model when merged with the concept of the Homotattic Patch Approximation (HPA) and the availability of multiple sets of site energies accompanied by their respective fractional probability factors. The total uptake (q/q*) at assorted pressure ratios (P/Ps) is inextricably traced to the manner in which the site energies are spread, either naturally or engineered by scientists, over and across the heterogeneous surfaces. An insight into the porous heterogeneous surface characteristics, in terms of adsorption site availability, is presented, describing the unique behavior of each isotherm type.
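
    The multi-patch Langmuir superposition suggested by the abstract can be written, in generic notation (the exact functional form and symbols used by the authors may differ), as:

      \frac{q}{q^{*}} \;=\; \sum_{i} \alpha_i \,
          \frac{K_i \,(P/P_s)}{1 + K_i \,(P/P_s)},
      \qquad \sum_{i} \alpha_i = 1,

    where each patch i has its own characteristic site energy (entering through K_i) and fractional probability factor \alpha_i.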

  18. A Universal Isotherm Model to Capture Adsorption Uptake and Energy Distribution of Porous Heterogeneous Surface

    KAUST Repository

    Ng, Kim Choon

    2017-08-31

    The adsorbate-adsorbent thermodynamics are complex, as they are influenced by the pore size distribution, surface heterogeneity and site energy distribution, as well as the adsorbate properties. Together, these parameters define the adsorbate uptake, forming the state diagrams known as adsorption isotherms when the sorption site energies on the pore surfaces are favorable. The available adsorption models for describing vapor uptake or isotherms have hitherto been defined individually to correlate with a certain type of isotherm pattern. There has as yet been no universal approach to developing these isotherm models. In this paper, we demonstrate that the characteristics of all sorption isotherm types can be succinctly unified by a revised Langmuir model when merged with the concept of the Homotattic Patch Approximation (HPA) and the availability of multiple sets of site energies accompanied by their respective fractional probability factors. The total uptake (q/q*) at assorted pressure ratios (P/Ps) is inextricably traced to the manner in which the site energies are spread, either naturally or engineered by scientists, over and across the heterogeneous surfaces. An insight into the porous heterogeneous surface characteristics, in terms of adsorption site availability, is presented, describing the unique behavior of each isotherm type.

  19. Mesoscale characterization of local property distributions in heterogeneous electrodes

    Science.gov (United States)

    Hsu, Tim; Epting, William K.; Mahbub, Rubayyat; Nuhfer, Noel T.; Bhattacharya, Sudip; Lei, Yinkai; Miller, Herbert M.; Ohodnicki, Paul R.; Gerdes, Kirk R.; Abernathy, Harry W.; Hackett, Gregory A.; Rollett, Anthony D.; De Graef, Marc; Litster, Shawn; Salvador, Paul A.

    2018-05-01

    The performance of electrochemical devices depends on the three-dimensional (3D) distributions of microstructural features in their electrodes. Several mature methods exist to characterize 3D microstructures over the microscale (tens of microns), which are useful in understanding homogeneous electrodes. However, methods that capture mesoscale (hundreds of microns) volumes at appropriate resolution (tens of nm) are lacking, though they are needed to understand more common, less ideal electrodes. Using serial sectioning with a Xe plasma focused ion beam combined with scanning electron microscopy (Xe PFIB-SEM), two commercial solid oxide fuel cell (SOFC) electrodes are reconstructed over volumes of 126 × 73 × 12.5 and 124 × 110 × 8 μm³ with a resolution on the order of ≈50³ nm³. The mesoscale distributions of microscale structural features are quantified and both microscale and mesoscale inhomogeneities are found. We analyze the origin of inhomogeneity over different length scales by comparing experimental and synthetic microstructures, generated with different particle size distributions, with such synthetic microstructures capturing well the high-frequency heterogeneity. Effective medium theory models indicate that significant mesoscale variations in local electrochemical activity are expected throughout such electrodes. These methods offer improved understanding of the performance of complex electrodes in energy conversion devices.

  20. Vivaldi: A Domain-Specific Language for Volume Processing and Visualization on Distributed Heterogeneous Systems.

    Science.gov (United States)

    Choi, Hyungsuk; Choi, Woohyuk; Quan, Tran Minh; Hildebrand, David G C; Pfister, Hanspeter; Jeong, Won-Ki

    2014-12-01

    As the size of image data from microscopes and telescopes increases, the need for high-throughput processing and visualization of large volumetric data has become more pressing. At the same time, many-core processors and GPU accelerators are commonplace, making high-performance distributed heterogeneous computing systems affordable. However, effectively utilizing GPU clusters is difficult for novice programmers, and even experienced programmers often fail to fully leverage the computing power of new parallel architectures due to their steep learning curve and programming complexity. In this paper, we propose Vivaldi, a new domain-specific language for volume processing and visualization on distributed heterogeneous computing systems. Vivaldi's Python-like grammar and parallel processing abstractions provide flexible programming tools for non-experts to easily write high-performance parallel computing code. Vivaldi provides commonly used functions and numerical operators for customized visualization and high-throughput image processing applications. We demonstrate the performance and usability of Vivaldi on several examples ranging from volume rendering to image segmentation.

  1. Inventory calculations in sediment samples with heterogeneous plutonium activity distribution

    International Nuclear Information System (INIS)

    Eriksson, M.; Dahlgaard, H.

    2002-01-01

    A method to determine the total inventory of a heterogeneously distributed contamination of marine sediments is described. The study site is the Bylot Sound off the Thule Airbase, NW Greenland, where marine sediments became contaminated with plutonium in 1968 after a nuclear weapons accident. The calculation is based on a gamma spectrometric screening of the ²⁴¹Am concentration in 450 one-gram aliquots from 6 sediment cores. A Monte Carlo programme then simulates a probable distribution of the activity, and based on that, a total inventory is estimated by integrating a double exponential function. The present data indicate a total inventory around 3.5 kg, which is 7 times higher than earlier estimates (0.5 kg). The difference is partly explained by the inclusion of hot particles in the present calculation. A large uncertainty is connected to this estimate, and it should be regarded as preliminary. (au)

  2. The phytophthora genome initiative database: informatics and analysis for distributed pathogenomic research.

    Science.gov (United States)

    Waugh, M; Hraber, P; Weller, J; Wu, Y; Chen, G; Inman, J; Kiphart, D; Sobral, B

    2000-01-01

    The Phytophthora Genome Initiative (PGI) is a distributed collaboration to study the genome and evolution of a particularly destructive group of plant pathogenic oomycetes, with the goal of understanding the mechanisms of infection and resistance. NCGR provides informatics support for the collaboration as well as a centralized data repository. In the pilot phase of the project, several investigators prepared Phytophthora infestans and Phytophthora sojae EST and Phytophthora sojae BAC libraries and sent them to another laboratory for sequencing. Data from sequencing reactions were transferred to NCGR for analysis and curation. An analysis pipeline transforms raw data by performing simple analyses (i.e., vector removal and similarity searching) that are stored and can be retrieved by investigators using a web browser. Here we describe the database and access tools, provide an overview of the data therein and outline future plans. This resource has provided a unique opportunity for the distributed, collaborative study of a genus from which relatively little sequence data are available. Results may lead to insight into how better to control these pathogens. The homepage of PGI can be accessed at http:www.ncgr.org/pgi, with database access through the database access hyperlink.

  3. RAINBIO: a mega-database of tropical African vascular plants distributions

    Directory of Open Access Journals (Sweden)

    Dauby Gilles

    2016-11-01

    Full Text Available The tropical vegetation of Africa is characterized by high levels of species diversity but is undergoing important shifts in response to ongoing climate change and increasing anthropogenic pressures. Although our knowledge of plant species distribution patterns in the African tropics has been improving over the years, it remains limited. Here we present RAINBIO, a unique comprehensive mega-database of georeferenced records for vascular plants in continental tropical Africa. The geographic focus of the database is the region south of the Sahel and north of Southern Africa, and the majority of data originate from tropical forest regions. RAINBIO is a compilation of 13 datasets, either publicly available or personal. Numerous in-depth data quality checks, both automatic and manual (via several African flora experts), were undertaken for georeferencing, standardization of taxonomic names, and identification and merging of duplicated records. The resulting RAINBIO data allow exploration and extraction of distribution data for 25,356 native tropical African vascular plant species, which represents ca. 89% of all known plant species in the area of interest. Habit information is also provided for 91% of these species.

  4. Object-oriented modeling and design of database federations

    NARCIS (Netherlands)

    Balsters, H.

    2003-01-01

    We describe a logical architecture and a general semantic framework for precise specification of so-called database federations. A database federation provides for tight coupling of a collection of heterogeneous component databases into a global integrated system. Our approach to database federation

  5. Distributed Circumnavigation Control with Dynamic Spacings for a Heterogeneous Multi-robot System

    OpenAIRE

    Yao, Weijia; Luo, Sha; Lu, Huimin; Xiao, Junhao

    2018-01-01

    Circumnavigation control is useful in real-world applications such as entrapping a hostile target. In this paper, we consider a heterogeneous multi-robot system where robots have different physical properties, such as maximum movement speeds. Instead of equal-spacings, dynamic spacings according to robots' properties, which are termed utilities in this paper, will be more desirable in a scenario such as target entrapment. A distributed circumnavigation control algorithm based on utilities is ...

  6. Site initialization, recovery, and back-up in a distributed database system

    International Nuclear Information System (INIS)

    Attar, R.; Bernstein, P.A.; Goodman, N.

    1982-01-01

    Site initialization is the problem of integrating a new site into a running distributed database system (DDBS). Site recovery is the problem of integrating an old site into a DDBS when the site recovers from failure. Site backup is the problem of creating a static backup copy of a database for archival or query purposes. We present an algorithm that solves the site initialization problem. By modifying the algorithm slightly, we get solutions to the other two problems as well. Our algorithm exploits the fact that a correct DDBS must run a serializable concurrency control algorithm. Our algorithm relies on the concurrency control algorithm to handle all inter-site synchronization

  7. A dedicated database system for handling multi-level data in systems biology

    OpenAIRE

    Pornputtapong, Natapol; Wanichthanarak, Kwanjeera; Nilsson, Avlant; Nookaew, Intawat; Nielsen, Jens

    2014-01-01

    Background Advances in high-throughput technologies have enabled extensive generation of multi-level omics data. These data are crucial for systems biology research, though they are complex, heterogeneous, highly dynamic, incomplete and distributed among public databases. This leads to difficulties in data accessibility and often results in errors when data are merged and integrated from varied resources. Therefore, integration and management of systems biological data remain very challenging...

  8. Availability and temporal heterogeneity of water supply affect the vertical distribution and mortality of a belowground herbivore and consequently plant growth.

    Science.gov (United States)

    Tsunoda, Tomonori; Kachi, Naoki; Suzuki, Jun-Ichirou

    2014-01-01

    We examined how the volume and temporal heterogeneity of water supply changed the vertical distribution and mortality of a belowground herbivore, and consequently affected plant biomass. Plantago lanceolata (Plantaginaceae) seedlings were grown one per pot under different combinations of water volume (large or small volume) and heterogeneity (homogeneous water conditions, watered every day; heterogeneous conditions, watered every 4 days) in the presence or absence of a larva of the belowground herbivorous insect, Anomala cuprea (Coleoptera: Scarabaeidae). The larva was confined in different vertical distributions to the top feeding zone (top treatment), middle feeding zone (middle treatment), or bottom feeding zone (bottom treatment); alternatively, no larva was introduced (control treatment) or larval movement was not confined (free treatment). A three-way interaction between water volume, heterogeneity, and the herbivore significantly affected plant biomass. With a large water volume, plant biomass was lower in the free treatment than in the control treatment regardless of heterogeneity. Plant biomass in the free treatment was as low as in the top treatment. With a small water volume and in the free treatment, plant biomass was low (similar to that under the top treatment) under homogeneous water conditions but high under heterogeneous ones (similar to that under the middle or bottom treatment). Therefore, there was little effect of belowground herbivory on plant growth under heterogeneous water conditions. In the other watering regimes, herbivores would be distributed in the shallow soil and reduce root biomass. Herbivore mortality was high with homogeneous application of a large water volume or heterogeneous application of a small water volume. Under the large water volume, plant biomass was high in pots in which the herbivore had died. Thus, the combinations of water volume and heterogeneity affected plant growth via changes in the vertical distribution and mortality of a belowground herbivore.

  9. A Heterogeneous Distributed Virtual Geographic Environment—Potential Application in Spatiotemporal Behavior Experiments

    Directory of Open Access Journals (Sweden)

    Shen Shen

    2018-02-01

    Full Text Available Due to their strong immersion and real-time interactivity, helmet-mounted virtual reality (VR) devices are becoming increasingly popular. Based on these devices, an immersive virtual geographic environment (VGE) provides a promising method for research into crowd behavior in an emergency. However, the current cheaper helmet-mounted VR devices are not popular enough, and will continue to coexist with personal computer (PC)-based systems for a long time. Therefore, a heterogeneous distributed virtual geographic environment (HDVGE) could be a feasible solution to the heterogeneous problems caused by various types of clients, and support the implementation of spatiotemporal crowd behavior experiments with large numbers of concurrent participants. In this study, we developed an HDVGE framework, and put forward a set of design principles to define the similarities between the real world and the VGE. We discussed the HDVGE architecture, and proposed an abstract interaction layer, a protocol-based interaction algorithm, and an adjusted dead reckoning algorithm to solve the heterogeneous distributed problems. We then implemented an HDVGE prototype system focusing on subway fire evacuation experiments. Two types of clients are considered in the system: PC, and all-in-one VR. Finally, we evaluated the performances of the prototype system and the key algorithms. The results showed that in a low-latency local area network (LAN) environment, the prototype system can smoothly support 90 concurrent users consisting of PC and all-in-one VR clients. HDVGE provides a feasible solution for studying not only spatiotemporal crowd behaviors in normal conditions, but also evacuation behaviors in emergency conditions such as fires and earthquakes. HDVGE could also serve as a new means of obtaining observational data about individual and group behavior in support of human geography research.

  10. Histologic heterogeneity of triple negative breast cancer: A National Cancer Centre Database analysis.

    Science.gov (United States)

    Mills, Matthew N; Yang, George Q; Oliver, Daniel E; Liveringhouse, Casey L; Ahmed, Kamran A; Orman, Amber G; Laronga, Christine; Hoover, Susan J; Khakpour, Nazanin; Costa, Ricardo L B; Diaz, Roberto

    2018-06-02

    Triple negative breast cancer (TNBC) is an aggressive disease, but recent studies have identified heterogeneity in patient outcomes. However, the utility of histologic subtyping in TNBC has not yet been well-characterised. This study utilises data from the National Cancer Center Database (NCDB) to complete the largest series to date investigating the prognostic importance of histology within TNBC. A total of 729,920 patients (pts) with invasive ductal carcinoma (IDC), metaplastic breast carcinoma (MBC), medullary breast carcinoma (MedBC), adenoid cystic carcinoma (ACC), invasive lobular carcinoma (ILC) or apocrine breast carcinoma (ABC) treated between 2004 and 2012 were identified in the NCDB. Of these, 89,222 pts with TNBC that received surgery were analysed. Kaplan-Meier analysis, log-rank testing and multivariate Cox proportional hazards regression were utilised with overall survival (OS) as the primary outcome. MBC (74.1%), MedBC (60.6%), ACC (75.7%), ABC (50.1%) and ILC (1.8%) had significantly different proportions of triple negativity when compared to IDC (14.0%, p < 0.001). TNBC predicted an inferior OS in IDC (p < 0.001) and ILC (p < 0.001). Lumpectomy and radiation (RT) were more common in MedBC (51.7%) and ACC (51.5%) and less common in MBC (33.1%) and ILC (25.4%), when compared to IDC (42.5%, p < 0.001). TNBC patients with MBC (HR 1.39, p < 0.001), MedBC (HR 0.42, p < 0.001) and ACC (HR 0.32, p = 0.003) differed significantly in OS when compared to IDC. Our results indicate that histologic heterogeneity in TNBC significantly informs patient outcomes and thus, has the potential to aid in the development of optimum personalised treatments. Copyright © 2018 Elsevier Ltd. All rights reserved.

  11. Fluid-driven fracture propagation in heterogeneous media: Probability distributions of fracture trajectories.

    Science.gov (United States)

    Santillán, David; Mosquera, Juan-Carlos; Cueto-Felgueroso, Luis

    2017-11-01

    Hydraulic fracture trajectories in rocks and other materials are highly affected by spatial heterogeneity in their mechanical properties. Understanding the complexity and structure of fluid-driven fractures and their deviation from the predictions of homogenized theories is a practical problem in engineering and geoscience. We conduct a Monte Carlo simulation study to characterize the influence of heterogeneous mechanical properties on the trajectories of hydraulic fractures propagating in elastic media. We generate a large number of random fields of mechanical properties and simulate pressure-driven fracture propagation using a phase-field model. We model the mechanical response of the material as that of an elastic isotropic material with heterogeneous Young modulus and Griffith energy release rate, assuming that fractures propagate in the toughness-dominated regime. Our study shows that the variance and the spatial covariance of the mechanical properties are controlling factors in the tortuousness of the fracture paths. We characterize the deviation of fracture paths from the homogeneous case statistically, and conclude that the maximum deviation grows linearly with the distance from the injection point. Additionally, fracture path deviations seem to be normally distributed, suggesting that fracture propagation in the toughness-dominated regime may be described as a random walk.

  12. Heterogeneous distribution of water in the mantle transition zone beneath United States inferred from seismic observations

    Science.gov (United States)

    Wang, Y.; Pavlis, G. L.; Li, M.

    2017-12-01

    The amount of water in the Earth's deep mantle is critical for the evolution of the solid Earth and the atmosphere. Mineral physics studies have revealed that Wadsleyite and Ringwoodite in the mantle transition zone could store several times the volume of water in the ocean. However, the water content and its distribution in the transition zone remain enigmatic due to a lack of direct observations. Here we use seismic data from the full deployment of the Earthscope Transportable Array to produce a 3D image of P to S scattering of the mantle transition zone beneath the United States. We compute the image volume from 141,080 pairs of high quality receiver functions defined by the Earthscope Automated Receiver Survey, reprocessed by the generalized iterative deconvolution method and imaged by the plane wave migration method. We find that the transition zone is filled with previously unrecognized small-scale heterogeneities that produce pervasive, negative polarity P to S conversions. Seismic synthetic modeling using a point source simulation method suggests two possible structures for these objects: 1) a set of randomly distributed blobs of slightly different sizes, and 2) near-vertical diapir structures from small-scale convection. Combined with geodynamic simulations, we interpret the observation as compositional heterogeneity from small-scale, low-velocity bodies that are water enriched. Our results indicate there is a heterogeneous distribution of water through the entire mantle transition zone beneath the contiguous United States.

  13. LHCb Conditions database operation assistance systems

    International Nuclear Information System (INIS)

    Clemencic, M; Shapoval, I; Cattaneo, M; Degaudenzi, H; Santinelli, R

    2012-01-01

    The Conditions Database (CondDB) of the LHCb experiment provides versioned, time dependent geometry and conditions data for all LHCb data processing applications (simulation, high level trigger (HLT), reconstruction, analysis) in a heterogeneous computing environment ranging from user laptops to the HLT farm and the Grid. These different use cases impose front-end support for multiple database technologies (Oracle and SQLite are used). Sophisticated distribution tools are required to ensure timely and robust delivery of updates to all environments. The content of the database has to be managed to ensure that updates are internally consistent and externally compatible with multiple versions of the physics application software. In this paper we describe three systems that we have developed to address these issues. The first system is a CondDB state tracking extension to the Oracle 3D Streams replication technology, to trap cases when the CondDB replication was corrupted. Second, an automated distribution system for the SQLite-based CondDB, providing also smart backup and checkout mechanisms for the CondDB managers and LHCb users respectively. And, finally, a system to verify and monitor the internal (CondDB self-consistency) and external (LHCb physics software vs. CondDB) compatibility. The former two systems are used in production in the LHCb experiment and have achieved the desired goal of higher flexibility and robustness for the management and operation of the CondDB. The latter has been fully designed and is currently moving to the implementation stage.

  14. An approach for access differentiation design in medical distributed applications built on databases.

    Science.gov (United States)

    Shoukourian, S K; Vasilyan, A M; Avagyan, A A; Shukurian, A K

    1999-01-01

    A formalized "top to bottom" design approach was described in [1] for distributed applications built on databases, which were considered as a medium between virtual and real user environments for a specific medical application. Merging different components within a unified distributed application posits new essential problems for software. In particular, protection tools that are sufficient separately become deficient during the integration due to specific additional links and relationships not considered formerly. E.g., it is impossible to protect a shared object in the virtual operating room using only DBMS protection tools, if the object is stored as a record in DB tables. The solution of the problem should be found only within the more general application framework. Appropriate tools are absent or unavailable. The present paper suggests a detailed outline of a design and testing toolset for access differentiation systems (ADS) in distributed medical applications which use databases. The appropriate formal model as well as tools for its mapping to a DBMS are suggested. Remote users connected via global networks are considered too.

  15. Quartile and Outlier Detection on Heterogeneous Clusters Using Distributed Radix Sort

    International Nuclear Information System (INIS)

    Meredith, Jeremy S.; Vetter, Jeffrey S.

    2011-01-01

    In the past few years, performance improvements in CPUs and memory technologies have outpaced those of storage systems. When extrapolated to the exascale, this trend places strict limits on the amount of data that can be written to disk for full analysis, resulting in an increased reliance on characterizing in-memory data. Many of these characterizations are simple, but require sorted data. This paper explores an example of this type of characterization - the identification of quartiles and statistical outliers - and presents a performance analysis of a distributed heterogeneous radix sort as well as an assessment of current architectural bottlenecks.

  16. An object-oriented framework for managing cooperating legacy databases

    NARCIS (Netherlands)

    Balsters, H; de Brock, EO

    2003-01-01

    We describe a general semantic framework for precise specification of so-called database federations. A database federation provides for tight coupling of a collection of heterogeneous legacy databases into a global integrated system. Our approach to database federation is based on the UML/OCL data

  17. Optimistic protocol for partitioned distributed database systems

    International Nuclear Information System (INIS)

    Davidson, S.B.

    1982-01-01

    A protocol for transaction processing during partition failures is presented which guarantees mutual consistency between copies of data-items after repair is completed. The protocol is optimistic in that transactions are processed without restrictions during the failure; conflicts are detected at repair time using a precedence graph and are resolved by backing out transactions according to some backout strategy. The protocol is then evaluated using simulation and probabilistic modeling. In the simulation, several parameters are varied such as the number of transactions processed in a group, the type of transactions processed, the number of data-items present in the database, and the distribution of references to data-items. The simulation also uses different backout strategies. From these results we note conditions under which the protocol performs well, i.e., conditions under which the protocol backs out a small percentage of the transaction run. A probabilistic model is developed to estimate the expected number of transactions backed out using most of the above database and transaction parameters, and is shown to agree with simulation results. Suggestions are then made on how to improve the performance of the protocol. Insights gained from the simulation and probabilistic modeling are used to develop a backout strategy which takes into account individual transaction costs and attempts to minimize total backout cost. Although the problem of choosing transactions to minimize total backout cost is, in general, NP-complete, the backout strategy is efficient and produces very good results
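
    As an illustration of the repair-time idea (detect conflicts with a precedence graph, then back out transactions until the combined history is serializable), the sketch below removes cycle participants greedily using networkx. It is not the paper's protocol or its cost-based backout strategy; the transaction names, conflict edges and greedy victim-selection rule are all invented for illustration.

    # Illustrative sketch: build a precedence graph over transactions run in
    # the two partitions and back out transactions until the graph is acyclic,
    # i.e. the merged history is serializable. Conflicts are made up.
    import networkx as nx

    def backout_until_acyclic(conflicts):
        """conflicts: iterable of (t1, t2) edges meaning 't1 must precede t2'.
        Returns the set of transactions chosen for backout (greedy strategy)."""
        g = nx.DiGraph(conflicts)
        backed_out = set()
        while not nx.is_directed_acyclic_graph(g):
            cycle = nx.find_cycle(g)                 # list of (u, v) edges
            # greedy stand-in for a real cost model: drop the transaction in
            # the cycle with the most conflict edges
            victim = max({u for u, _ in cycle}, key=g.degree)
            backed_out.add(victim)
            g.remove_node(victim)
        return backed_out

    if __name__ == "__main__":
        # T1 and T2 conflict both ways across the partition, forming a cycle
        edges = [("T1", "T2"), ("T2", "T1"), ("T2", "T3")]
        print("back out:", backout_until_acyclic(edges))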

  18. Predicting plant distribution in an heterogeneous Alpine landscape: does soil matter?

    Science.gov (United States)

    Buri, Aline; Cianfrani, Carmen; Pradervand, Jean-Nicolas; Guisan, Antoine

    2016-04-01

    Topographic and climatic factors are usually used to predict plant distribution because they are known to explain species presence or absence. Soil properties have been widely shown to influence plant growth and distributions. However, they are rarely taken into account as predictors in plant species distribution models (SDMs) for edaphically heterogeneous landscapes, and when they are, interpolation techniques are used to project soil factors in space. In a heterogeneous landscape such as the Alps, where soil properties change abruptly as a function of environmental conditions over short distances, interpolation techniques require huge quantities of samples to be efficient. This is costly and time consuming, and introduces more errors than a predictive approach for an equivalent number of samples. In this study we aimed to assess whether soil properties can be generalized over entire mountainous geographic extents and can improve predictions of plant distributions over traditional topo-climatic predictors. First, we used a predictive approach to map two soil properties based on field measurements in the western Swiss Alps region: soil pH and the stable carbon isotope ratio 13C/12C of soil organic matter (δ13CSOM). We used ensemble forecasting techniques combining several predictive algorithms to build models of the geographic variation of both soil properties and projected them over the entire study area. As predictive factors, we employed very high resolution topo-climatic data. In a second step, the output maps from the previous task were used as input for regional vegetation models. We added the predicted soil properties to a set of basic topo-climatic predictors known to be important for modelling plant species. We then modelled the distribution of 156 plant species inhabiting the study area. Finally, we compared the quality of models with and without soil properties as predictors to evaluate their effect on the predictive power of our models.

  19. Impact of distributions on the archetypes and prototypes in heterogeneous nanoparticle ensembles.

    Science.gov (United States)

    Fernandez, Michael; Wilson, Hugh F; Barnard, Amanda S

    2017-01-05

    The magnitude and complexity of the structural and functional data available on nanomaterials requires data analytics, statistical analysis and information technology to drive discovery. We demonstrate that multivariate statistical analysis can recognise the sets of truly significant nanostructures and their most relevant properties in heterogeneous ensembles with different probability distributions. The prototypical and archetypal nanostructures of five virtual ensembles of Si quantum dots (SiQDs) with Boltzmann, frequency, normal, Poisson and random distributions are identified using clustering and archetypal analysis, where we find that their diversity is defined by size and shape, regardless of the type of distribution. At the convex hull of the SiQD ensembles, simple configuration archetypes can efficiently describe a large number of SiQDs, whereas more complex shapes are needed to represent the average ordering of the ensembles. This approach provides a route towards the characterisation of computationally intractable virtual nanomaterial spaces, which can convert big data into smart data, and significantly reduce the workload to simulate experimentally relevant virtual samples.

  20. Inference of R0 and Transmission Heterogeneity from the Size Distribution of Stuttering Chains

    Science.gov (United States)

    Blumberg, Seth; Lloyd-Smith, James O.

    2013-01-01

    For many infectious disease processes such as emerging zoonoses and vaccine-preventable diseases, R0 < 1 and infections occur as self-limited stuttering transmission chains. A mechanistic understanding of transmission is essential for characterizing the risk of emerging diseases and monitoring spatio-temporal dynamics. Thus methods for inferring R0 and the degree of heterogeneity in transmission from stuttering chain data have important applications in disease surveillance and management. Previous researchers have used chain size distributions to infer R0, but estimation of the degree of individual-level variation in infectiousness (as quantified by the dispersion parameter, k) has typically required contact tracing data. Utilizing branching process theory along with a negative binomial offspring distribution, we demonstrate how maximum likelihood estimation can be applied to chain size data to infer both R0 and the dispersion parameter that characterizes heterogeneity. While the maximum likelihood value for R0 is a simple function of the average chain size, the associated confidence intervals are dependent on the inferred degree of transmission heterogeneity. As demonstrated for monkeypox data from the Democratic Republic of Congo, this impacts when a statistically significant change in R0 is detectable. In addition, by allowing for superspreading events, inference of k shifts the threshold above which a transmission chain should be considered anomalously large for a given value of R0 (thus reducing the probability of false alarms about pathogen adaptation). Our analysis of monkeypox also clarifies the various ways that imperfect observation can impact inference of transmission parameters, and highlights the need to quantitatively evaluate whether observation is likely to significantly bias results. PMID:23658504
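
    As a concrete illustration of the approach described in this abstract, the chain-size likelihood for a branching process with negative binomial offspring can be maximised numerically. The probability mass function used below is the standard textbook form for such a process and the "observed" chain sizes are invented; this is a sketch of the technique, not code or data from the paper.

    # Minimal sketch of maximum likelihood inference of R0 and the dispersion
    # parameter k from observed chain sizes, using the standard chain-size
    # distribution of a branching process with negative binomial offspring.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import gammaln

    def log_chain_size_pmf(j, r0, k):
        """log P(chain size = j) for a subcritical branching process."""
        j = np.asarray(j, dtype=float)
        return (gammaln(k * j + j - 1) - gammaln(k * j) - gammaln(j + 1)
                + (j - 1) * np.log(r0 / k)
                - (k * j + j - 1) * np.log1p(r0 / k))

    def fit_r0_k(chain_sizes):
        """Return (R0_hat, k_hat) maximising the chain-size log likelihood."""
        sizes = np.asarray(chain_sizes, dtype=float)
        nll = lambda p: -np.sum(log_chain_size_pmf(sizes, p[0], p[1]))
        res = minimize(nll, x0=[0.5, 0.5],
                       bounds=[(1e-3, 0.999), (1e-3, 50.0)],  # subcritical R0
                       method="L-BFGS-B")
        return tuple(res.x)

    if __name__ == "__main__":
        observed = [1, 1, 1, 2, 1, 3, 1, 1, 6, 1, 2, 1]   # made-up surveillance data
        r0_hat, k_hat = fit_r0_k(observed)
        print(f"R0 ~= {r0_hat:.2f}, k ~= {k_hat:.2f}")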

  1. Data Mining in Distributed Database of the First Egyptian Thermal Research Reactor (ETRR-1)

    International Nuclear Information System (INIS)

    Abo Elez, R.H.; Ayad, N.M.A.; Ghuname, A.A.A.

    2006-01-01

    Distributed database (DDB) technology application systems are growing to cover many fields and domains, and at different levels. The aim of this paper is to shed some light on applying the new technology of distributed databases to the ETRR-1 operation data logged by the data acquisition system (DACQUS), so that useful knowledge can be extracted. Data mining with scientific methods and specialized tools is used to support the extraction of useful knowledge from the rapidly growing volumes of data. There are many shapes and forms of data mining methods. Predictive methods furnish models capable of anticipating the future behavior of quantitative or qualitative database variables. When the relationship between the dependent and independent variables is nearly linear, linear regression is the appropriate data mining strategy. Hence, multiple linear regression models have been applied to a set of data samples of the ETRR-1 operation data, using the least squares method. The results show an accurate analysis of the multiple linear regression models as applied to the ETRR-1 operation data.
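
    The regression step described here amounts to an ordinary least squares fit of a multiple linear model. The sketch below shows such a fit with NumPy on synthetic stand-in variables; the variable names and numbers are illustrative assumptions, not ETRR-1 operation data.

    # Illustrative multiple linear regression fitted by ordinary least squares.
    # The "operation data" are synthetic stand-ins, not reactor measurements.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    # hypothetical logged variables: inlet temperature, coolant flow, rod position
    X = rng.normal(size=(n, 3))
    true_coeffs = np.array([2.0, -1.5, 0.7])
    y = 10.0 + X @ true_coeffs + rng.normal(scale=0.3, size=n)   # e.g. a power signal

    # add an intercept column and solve the least squares problem
    A = np.column_stack([np.ones(n), X])
    beta, residuals, rank, _ = np.linalg.lstsq(A, y, rcond=None)

    print("fitted intercept and coefficients:", np.round(beta, 3))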

  2. A Context-Aware Adaptive Streaming Media Distribution System in a Heterogeneous Network with Multiple Terminals

    Directory of Open Access Journals (Sweden)

    Yepeng Ni

    2016-01-01

    Full Text Available We consider the problem of streaming media transmission in a heterogeneous network from a multisource server to multiple home terminals. In a wired network, the transmission performance is limited by the network state (e.g., bandwidth variation, jitter, and packet loss). In a wireless network, multiple user terminals can cause bandwidth competition. Thus, streaming media distribution in a heterogeneous network becomes a severe challenge, which is critical for the QoS guarantee. In this paper, we propose a context-aware adaptive streaming media distribution system (CAASS), which implements a context-aware module to perceive the environment parameters and uses a strategy analysis (SA) module to deduce the most suitable service level. This approach is able to improve the video quality and guarantee streaming QoS. We formulate the optimization problem relating QoS to the environment parameters based on the QoS testing algorithm for IPTV in ITU-T G.1070. We evaluate the performance of the proposed CAASS through 12 types of experimental environments using a prototype system. Experimental results show that CAASS can dynamically adjust the service level according to environment variation (e.g., network state and terminal performance) and outperforms existing streaming approaches in adaptive streaming media distribution in terms of peak signal-to-noise ratio (PSNR).

  3. Optimum parameters in a model for tumour control probability, including interpatient heterogeneity: evaluation of the log-normal distribution

    International Nuclear Information System (INIS)

    Keall, P J; Webb, S

    2007-01-01

    The heterogeneity of human tumour radiation response is well known. Researchers have used the normal distribution to describe interpatient tumour radiosensitivity. However, many natural phenomena show a log-normal distribution. Log-normal distributions are common when mean values are low, variances are large and values cannot be negative. These conditions apply to radiosensitivity. The aim of this work was to evaluate the log-normal distribution to predict clinical tumour control probability (TCP) data and to compare the results with the homogeneous (δ-function with single α-value) and normal distributions. The clinically derived TCP data for four tumour types (melanoma, breast, squamous cell carcinoma and nodes) were used to fit the TCP models. Three forms of interpatient tumour radiosensitivity were considered: the log-normal, normal and δ-function. The free parameters in the models were the radiosensitivity mean, standard deviation and clonogenic cell density. The evaluation metric was the deviance of the maximum likelihood estimation of the fit of the TCP calculated using the predicted parameters to the clinical data. We conclude that (1) the log-normal and normal distributions of interpatient tumour radiosensitivity heterogeneity more closely describe clinical TCP data than a single radiosensitivity value and (2) the log-normal distribution has some theoretical and practical advantages over the normal distribution. Further work is needed to test these models on higher quality clinical outcome datasets
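
    A minimal numerical sketch of the kind of calculation discussed: a Poisson-statistics TCP for a single radiosensitivity value, averaged over a log-normal interpatient distribution by Monte Carlo sampling. The functional form and all parameter values are generic textbook assumptions, not the fitted parameters reported in the paper.

    # Population TCP sketch: the single-patient Poisson TCP, exp(-N * exp(-alpha*D)),
    # is averaged over a log-normal interpatient radiosensitivity distribution.
    # All parameter values are illustrative assumptions only.
    import numpy as np

    def population_tcp(dose, n_clonogens, alpha_mean, alpha_sd, samples=200_000, seed=1):
        rng = np.random.default_rng(seed)
        # log-normal parameterised so the linear-scale mean and sd match the inputs
        sigma2 = np.log(1.0 + (alpha_sd / alpha_mean) ** 2)
        mu = np.log(alpha_mean) - 0.5 * sigma2
        alpha = rng.lognormal(mean=mu, sigma=np.sqrt(sigma2), size=samples)
        tcp_per_patient = np.exp(-n_clonogens * np.exp(-alpha * dose))
        return tcp_per_patient.mean()

    if __name__ == "__main__":
        for d in (50.0, 60.0, 70.0):   # total dose in Gy (fractionation effects ignored)
            print(d, "Gy ->", round(population_tcp(d, 1e7, 0.3, 0.1), 3))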

  4. Tracer test modeling for characterizing heterogeneity and local scale residence time distribution in an artificial recharge site.

    Science.gov (United States)

    Valhondo, Cristina; Martinez-Landa, Lurdes; Carrera, Jesús; Hidalgo, Juan J.; Ayora, Carlos

    2017-04-01

    Artificial recharge (AR) of aquifers is a standard technique to replenish and enhance groundwater resources, and it has been widely used due to the increasing demand for quality water. AR through infiltration basins consists of infiltrating surface water, which may be affected to a greater or lesser degree by treatment plant effluents, runoff and other undesirable water sources, into an aquifer. Water quality improves during the passage through the soil: organic matter, nutrients, organic contaminants and bacteria are reduced, mainly due to biodegradation and adsorption. Therefore, one of the goals of AR is to ensure a good quality status of the aquifer even if lower quality water is used for recharge. Understanding the behavior and transport of the potential contaminants is essential for an appropriate management of the artificial recharge system. Knowledge of the flux distribution around the recharge system and of the relationship between the recharge system and the aquifer (area affected by the recharge, mixing ratios of recharged and native groundwater, travel times) is essential to achieve this goal. Evaluating the flux distribution is not always simple because of the complexity and heterogeneity of natural systems. Indeed, it is not so much regulated by the hydraulic conductivity of the different geological units as by their continuity and inter-connectivity, particularly in the vertical direction. In summary, appropriate management of an artificial recharge system requires acknowledging the heterogeneity of the medium. Aiming at characterizing the residence time distributions (RTDs) of a pilot artificial recharge system and the extent to which heterogeneity affects RTDs, we performed and evaluated a pulse injection tracer test. The artificial recharge system was simulated as a multilayer model, which was used to evaluate the measured breakthrough curves at six monitoring points. Flow and transport parameters were calibrated under two hypotheses. The first

  5. Metaheuristic Based Scheduling Meta-Tasks in Distributed Heterogeneous Computing Systems

    Directory of Open Access Journals (Sweden)

    Hesam Izakian

    2009-07-01

    Full Text Available Scheduling is a key problem in distributed heterogeneous computing systems in order to benefit from the large computing capacity of such systems, and it is an NP-complete problem. In this paper, we present a metaheuristic technique, namely the Particle Swarm Optimization (PSO) algorithm, for this problem. PSO is a population-based search algorithm based on the simulation of the social behavior of bird flocking and fish schooling. Particles fly through the problem search space to find optimal or near-optimal solutions. The scheduler aims at minimizing makespan, which is the time at which the latest task finishes. Experimental studies show that the proposed method is more efficient and surpasses previously reported PSO and GA approaches for this problem.
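
    The sketch below is a generic PSO scheduler for independent meta-tasks on heterogeneous machines, with makespan as the fitness, to make the idea concrete. The continuous-position encoding (rounded to machine indices), the synthetic expected-time-to-compute matrix and all PSO parameters are simplifying assumptions and not necessarily the scheme used in the paper.

    # Generic PSO sketch for mapping meta-tasks onto heterogeneous machines
    # so as to minimise makespan. All data and parameters are synthetic.
    import numpy as np

    rng = np.random.default_rng(42)
    n_tasks, n_machines = 20, 4
    # ETC matrix: expected time to compute task i on machine j (synthetic)
    etc = rng.uniform(5.0, 50.0, size=(n_tasks, n_machines))

    def makespan(position):
        assignment = np.clip(position.round().astype(int), 0, n_machines - 1)
        loads = np.zeros(n_machines)
        for task, machine in enumerate(assignment):
            loads[machine] += etc[task, machine]
        return loads.max()

    n_particles, iters, w, c1, c2 = 30, 200, 0.7, 1.5, 1.5
    pos = rng.uniform(0, n_machines - 1, size=(n_particles, n_tasks))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([makespan(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()

    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0, n_machines - 1)
        vals = np.array([makespan(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()

    print("best makespan found:", round(pbest_val.min(), 2))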

  6. The response-time distribution in a real-time database with optimistic concurrency control and constant execution times

    NARCIS (Netherlands)

    Sassen, S.A.E.; Wal, van der J.

    1997-01-01

    For a real-time shared-memory database with optimistic concurrency control, an approximation for the transaction response-time distribution is obtained. The model assumes that transactions arrive at the database according to a Poisson process, that every transaction uses an equal number of

  7. The response-time distribution in a real-time database with optimistic concurrency control and exponential execution times

    NARCIS (Netherlands)

    Sassen, S.A.E.; Wal, van der J.

    1997-01-01

    For a real-time shared-memory database with optimistic concurrency control, an approximation for the transaction response-time distribution is obtained. The model assumes that transactions arrive at the database according to a Poisson process, that every transaction takes an exponential execution

  8. Conceptual Model Formalization in a Semantic Interoperability Service Framework: Transforming Relational Database Schemas to OWL.

    Science.gov (United States)

    Bravo, Carlos; Suarez, Carlos; González, Carolina; López, Diego; Blobel, Bernd

    2014-01-01

    Healthcare information is distributed through multiple heterogeneous and autonomous systems. Access to, and sharing of, distributed information sources are challenging tasks. To contribute to meeting this challenge, this paper presents a formal, complete and semi-automatic transformation service from Relational Databases to Web Ontology Language. The proposed service makes use of an algorithm that allows to transform several data models of different domains by deploying mainly inheritance rules. The paper emphasizes the relevance of integrating the proposed approach into an ontology-based interoperability service to achieve semantic interoperability.
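
    A toy sketch of the core transformation idea: map one relational table to an owl:Class and its columns to datatype properties, emitting OWL with rdflib. The table, columns and namespace are hypothetical, and the paper's actual algorithm (which also exploits inheritance rules) is considerably richer.

    # Toy relational-to-OWL mapping: table -> owl:Class, columns -> datatype
    # properties. The "Patient" table and the namespace are made up.
    from rdflib import Graph, Namespace, RDF, RDFS
    from rdflib.namespace import OWL, XSD

    EX = Namespace("http://example.org/hospital#")   # hypothetical namespace

    def table_to_owl(table, columns):
        g = Graph()
        g.bind("ex", EX)
        g.bind("owl", OWL)
        cls = EX[table]
        g.add((cls, RDF.type, OWL.Class))
        for col, xsd_type in columns.items():
            prop = EX[f"{table}_{col}"]
            g.add((prop, RDF.type, OWL.DatatypeProperty))
            g.add((prop, RDFS.domain, cls))
            g.add((prop, RDFS.range, xsd_type))
        return g

    if __name__ == "__main__":
        g = table_to_owl("Patient", {"name": XSD.string, "birthDate": XSD.date})
        print(g.serialize(format="turtle"))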

  9. Wide-area-distributed storage system for a multimedia database

    Science.gov (United States)

    Ueno, Masahiro; Kinoshita, Shigechika; Kuriki, Makato; Murata, Setsuko; Iwatsu, Shigetaro

    1998-12-01

    We have developed a wide-area-distributed storage system for multimedia databases, which minimizes the possibility of simultaneous failure of multiple disks in the event of a major disaster. It features a RAID system whose member disks are spatially distributed over a wide area. Each node has a device, which includes the controller of the RAID and the controller of the member disks controlled by other nodes. The devices in the node are connected to a computer, using fiber optic cables, and communicate using fiber-channel technology. Any computer at a node can utilize multiple devices connected by optical fibers as a single 'virtual disk.' The advantage of this system structure is that devices and fiber optic cables are shared by the computers. In this report, we first describe our proposed system, and a prototype used for testing. We then discuss its performance; i.e., how read and write throughputs are affected by data-access delay, the RAID level, and queuing.

  10. The online database MaarjAM reveals global and ecosystemic distribution patterns in arbuscular mycorrhizal fungi (Glomeromycota).

    Science.gov (United States)

    Opik, M; Vanatoa, A; Vanatoa, E; Moora, M; Davison, J; Kalwij, J M; Reier, U; Zobel, M

    2010-10-01

    • Here, we describe a new database, MaarjAM, that summarizes publicly available Glomeromycota DNA sequence data and associated metadata. The goal of the database is to facilitate the description of distribution and richness patterns in this group of fungi. • Small subunit (SSU) rRNA gene sequences and available metadata were collated from all suitable taxonomic and ecological publications. These data have been made accessible in an open-access database (http://maarjam.botany.ut.ee). • Two hundred and eighty-two SSU rRNA gene virtual taxa (VT) were described based on a comprehensive phylogenetic analysis of all collated Glomeromycota sequences. Two-thirds of VT showed limited distribution ranges, occurring in single current or historic continents or climatic zones. Those VT that associated with a taxonomically wide range of host plants also tended to have a wide geographical distribution, and vice versa. No relationships were detected between VT richness and latitude, elevation or vascular plant richness. • The collated Glomeromycota molecular diversity data suggest limited distribution ranges in most Glomeromycota taxa and a positive relationship between the width of a taxon's geographical range and its host taxonomic range. Inconsistencies between molecular and traditional taxonomy of Glomeromycota, and shortage of data from major continents and ecosystems, are highlighted.

  11. A Hierarchical and Distributed Approach for Mapping Large Applications to Heterogeneous Grids using Genetic Algorithms

    Science.gov (United States)

    Sanyal, Soumya; Jain, Amit; Das, Sajal K.; Biswas, Rupak

    2003-01-01

    In this paper, we propose a distributed approach for mapping a single large application to a heterogeneous grid environment. To minimize the execution time of the parallel application, we distribute the mapping overhead to the available nodes of the grid. This approach not only provides a fast mapping of tasks to resources but is also scalable. We adopt a hierarchical grid model and accomplish the job of mapping tasks to this topology using a scheduler tree. Results show that our three-phase algorithm provides high quality mappings, and is fast and scalable.

  12. Investigation on Oracle GoldenGate Veridata for Data Consistency in WLCG Distributed Database Environment

    OpenAIRE

    Asko, Anti; Lobato Pardavila, Lorena

    2014-01-01

    In a distributed database environment, data divergence can be an important problem: if it is not discovered and correctly identified, incorrect data can lead to poor decision making, service errors and operational errors. Oracle GoldenGate Veridata is a product to compare two sets of data and identify and report on data that is out of synchronization. IT DB is providing a replication service between databases at CERN and other computer centers worldwide as a par...

  13. DOT Online Database

    Science.gov (United States)

    Full-text, web-searchable databases of DOT Advisory Circulars and related data collection and distribution policies (Document Database Website provided by MicroSearch).

  14. Inference of R0 and transmission heterogeneity from the size distribution of stuttering chains.

    Directory of Open Access Journals (Sweden)

    Seth Blumberg

    Full Text Available For many infectious disease processes such as emerging zoonoses and vaccine-preventable diseases, R0 < 1 and infections occur as self-limited stuttering transmission chains. A mechanistic understanding of transmission is essential for characterizing the risk of emerging diseases and monitoring spatio-temporal dynamics. Thus methods for inferring R0 and the degree of heterogeneity in transmission from stuttering chain data have important applications in disease surveillance and management. Previous researchers have used chain size distributions to infer R0, but estimation of the degree of individual-level variation in infectiousness (as quantified by the dispersion parameter, k) has typically required contact tracing data. Utilizing branching process theory along with a negative binomial offspring distribution, we demonstrate how maximum likelihood estimation can be applied to chain size data to infer both R0 and the dispersion parameter that characterizes heterogeneity. While the maximum likelihood value for R0 is a simple function of the average chain size, the associated confidence intervals are dependent on the inferred degree of transmission heterogeneity. As demonstrated for monkeypox data from the Democratic Republic of Congo, this impacts when a statistically significant change in R0 is detectable. In addition, by allowing for superspreading events, inference of k shifts the threshold above which a transmission chain should be considered anomalously large for a given value of R0 (thus reducing the probability of false alarms about pathogen adaptation). Our analysis of monkeypox also clarifies the various ways that imperfect observation can impact inference of transmission parameters, and highlights the need to quantitatively evaluate whether observation is likely to significantly bias results.

  15. Large epidemic thresholds emerge in heterogeneous networks of heterogeneous nodes

    Science.gov (United States)

    Yang, Hui; Tang, Ming; Gross, Thilo

    2015-08-01

    One of the famous results of network science states that networks with heterogeneous connectivity are more susceptible to epidemic spreading than their more homogeneous counterparts. In particular, in networks of identical nodes it has been shown that network heterogeneity, i.e. a broad degree distribution, can lower the epidemic threshold at which epidemics can invade the system. Network heterogeneity can thus allow diseases with lower transmission probabilities to persist and spread. However, it has been pointed out that networks in which the properties of nodes are intrinsically heterogeneous can be very resilient to disease spreading. Heterogeneity in structure can enhance or diminish the resilience of networks with heterogeneous nodes, depending on the correlations between the topological and intrinsic properties. Here, we consider a plausible scenario where people have intrinsic differences in susceptibility and adapt their social network structure to the presence of the disease. We show that the resilience of networks with heterogeneous connectivity can surpass those of networks with homogeneous connectivity. For epidemiology, this implies that network heterogeneity should not be studied in isolation, it is instead the heterogeneity of infection risk that determines the likelihood of outbreaks.
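
    For reference, the classic heterogeneous mean-field result being alluded to (for SIS-like spreading on an uncorrelated network with degree distribution P(k)) is usually written as below; this is the textbook form, not a formula quoted from the paper.

    \[
      \lambda_c \;=\; \frac{\langle k \rangle}{\langle k^2 \rangle}
    \]

    A broad (heavy-tailed) degree distribution inflates the second moment and pushes this threshold toward zero, which is the sense in which heterogeneous connectivity favours epidemic spreading in networks of identical nodes.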

  16. Large epidemic thresholds emerge in heterogeneous networks of heterogeneous nodes.

    Science.gov (United States)

    Yang, Hui; Tang, Ming; Gross, Thilo

    2015-08-21

    One of the famous results of network science states that networks with heterogeneous connectivity are more susceptible to epidemic spreading than their more homogeneous counterparts. In particular, in networks of identical nodes it has been shown that network heterogeneity, i.e. a broad degree distribution, can lower the epidemic threshold at which epidemics can invade the system. Network heterogeneity can thus allow diseases with lower transmission probabilities to persist and spread. However, it has been pointed out that networks in which the properties of nodes are intrinsically heterogeneous can be very resilient to disease spreading. Heterogeneity in structure can enhance or diminish the resilience of networks with heterogeneous nodes, depending on the correlations between the topological and intrinsic properties. Here, we consider a plausible scenario where people have intrinsic differences in susceptibility and adapt their social network structure to the presence of the disease. We show that the resilience of networks with heterogeneous connectivity can surpass those of networks with homogeneous connectivity. For epidemiology, this implies that network heterogeneity should not be studied in isolation, it is instead the heterogeneity of infection risk that determines the likelihood of outbreaks.

  17. Information system architecture to support transparent access to distributed, heterogeneous data sources

    International Nuclear Information System (INIS)

    Brown, J.C.

    1994-08-01

    Quality situation assessment and decision making require access to multiple sources of data and information. Insufficient accessibility to data exists for many large corporations and Government agencies. By utilizing current advances in computer technology, today's situation analysts have a wealth of information at their disposal. There are many potential solutions to the information accessibility problem using today's technology. The United States Department of Energy (US-DOE) faced this problem when dealing with one class of problem in the US. The result of their efforts has been the creation of the Tank Waste Information Network System -- TWINS. The TWINS solution combines many technologies to address problems in several areas such as User Interfaces, Transparent Access to Multiple Data Sources, and Integrated Data Access. Data related to the complex is currently distributed throughout several US-DOE installations. Over time, each installation has adopted their own set of standards as related to information management. Heterogeneous hardware and software platforms exist both across the complex and within a single installation. Standards for information management vary between US-DOE mission areas within installations. These factors contribute to the complexity of accessing information in a manner that enhances the performance and decision making process of the analysts. This paper presents one approach taken by the DOE to resolve the problem of distributed, heterogeneous, multi-media information management for the HLW Tank complex. The information system architecture developed for the DOE by the TWINS effort is one that is adaptable to other problem domains and uses

  18. Coordinating complex decision support activities across distributed applications

    Science.gov (United States)

    Adler, Richard M.

    1994-01-01

    Knowledge-based technologies have been applied successfully to automate planning and scheduling in many problem domains. Automation of decision support can be increased further by integrating task-specific applications with supporting database systems, and by coordinating interactions between such tools to facilitate collaborative activities. Unfortunately, the technical obstacles that must be overcome to achieve this vision of transparent, cooperative problem-solving are daunting. Intelligent decision support tools are typically developed for standalone use, rely on incompatible, task-specific representational models and application programming interfaces (API's), and run on heterogeneous computing platforms. Getting such applications to interact freely calls for platform independent capabilities for distributed communication, as well as tools for mapping information across disparate representations. Symbiotics is developing a layered set of software tools (called NetWorks!) for integrating and coordinating heterogeneous distributed applications. The top layer of tools consists of an extensible set of generic, programmable coordination services. Developers access these services via high-level API's to implement the desired interactions between distributed applications.

  19. Database Replication

    CERN Document Server

    Kemme, Bettina

    2010-01-01

    Database replication is widely used for fault-tolerance, scalability and performance. The failure of one database replica does not stop the system from working as available replicas can take over the tasks of the failed replica. Scalability can be achieved by distributing the load across all replicas, and adding new replicas should the load increase. Finally, database replication can provide fast local access, even to geographically distributed clients, if data copies are located close to them. Despite its advantages, replication is not a straightforward technique to apply, and

  20. 3D Game Content Distributed Adaptation in Heterogeneous Environments

    Directory of Open Access Journals (Sweden)

    Berretty Robert-Paul

    2007-01-01

    Full Text Available Most current multiplayer 3D games can only be played on a single dedicated platform (a particular computer, console, or cell phone), requiring specifically designed content and communication over a predefined network. Below we show how, by using signal processing techniques such as multiresolution representation and scalable coding for all the components of a 3D graphics object (geometry, texture, and animation), we enable online dynamic content adaptation, and thus delivery of the same content over heterogeneous networks to terminals with very different profiles, and its rendering on them. We present quantitative results demonstrating how the best displayed quality versus computational complexity versus bandwidth tradeoffs have been achieved, given the distributed resources available over the end-to-end content delivery chain. Additionally, we use state-of-the-art, standardised content representation and compression formats (MPEG-4 AFX, JPEG 2000, XML), enabling deployment over existing infrastructure, while keeping hooks to well-established practices in the game industry.

  1. Heat transfer enhancement in a natural draft dry cooling tower under crosswind operation with heterogeneous water distribution

    Energy Technology Data Exchange (ETDEWEB)

    Goodarzi, Mohsen; Amooie, Hossein [Bu-Ali Sina Univ., Hamedan (Iran, Islamic Republic of). Dept. of Mechanical Engineering

    2016-04-15

    Crosswind significantly decreases the cooling efficiency of a natural draft dry cooling tower. The possibility of improving cooling efficiency with heterogeneous water distribution within the cooling tower radiators under crosswind conditions is analysed. A CFD approach was used to model the flow field and heat transfer phenomena within the cooling tower and the airflow surrounding it. A mathematical model was developed from the various CFD results. Using a Genetic Algorithm trained with the results of the mathematical model, the best water distribution was found among the candidates. Remodeling the best water distribution with the CFD approach showed the highest enhancement of heat transfer compared to the usual uniform water distribution.

  2. Heat transfer enhancement in a natural draft dry cooling tower under crosswind operation with heterogeneous water distribution

    International Nuclear Information System (INIS)

    Goodarzi, Mohsen; Amooie, Hossein

    2016-01-01

    Crosswind significantly decreases the cooling efficiency of a natural draft dry cooling tower. The possibility of improving cooling efficiency with heterogeneous water distribution within the cooling tower radiators under crosswind conditions is analysed. A CFD approach was used to model the flow field and heat transfer phenomena within the cooling tower and the airflow surrounding it. A mathematical model was developed from the various CFD results. Using a Genetic Algorithm trained with the results of the mathematical model, the best water distribution was found among the candidates. Remodeling the best water distribution with the CFD approach showed the highest enhancement of heat transfer compared to the usual uniform water distribution.

  3. Earthquake scaling laws for rupture geometry and slip heterogeneity

    Science.gov (United States)

    Thingbaijam, Kiran K. S.; Mai, P. Martin; Goda, Katsuichiro

    2016-04-01

    We analyze an extensive compilation of finite-fault rupture models to investigate earthquake scaling of source geometry and slip heterogeneity to derive new relationships for seismic and tsunami hazard assessment. Our dataset comprises 158 earthquakes with a total of 316 rupture models selected from the SRCMOD database (http://equake-rc.info/srcmod). We find that fault-length does not saturate with earthquake magnitude, while fault-width reveals inhibited growth due to the finite seismogenic thickness. For strike-slip earthquakes, fault-length grows more rapidly with increasing magnitude compared to events of other faulting types. Interestingly, our derived relationship falls between the L-model and W-model end-members. In contrast, both reverse and normal dip-slip events are more consistent with self-similar scaling of fault-length. However, fault-width scaling relationships for large strike-slip and normal dip-slip events, occurring on steeply dipping faults (δ~90° for strike-slip faults, and δ~60° for normal faults), deviate from self-similarity. Although reverse dip-slip events in general show self-similar scaling, the restricted growth of down-dip fault extent (with upper limit of ~200 km) can be seen for mega-thrust subduction events (M~9.0). Despite this fact, for a given earthquake magnitude, subduction reverse dip-slip events occupy relatively larger rupture area, compared to shallow crustal events. In addition, we characterize slip heterogeneity in terms of its probability distribution and spatial correlation structure to develop a complete stochastic random-field characterization of earthquake slip. We find that truncated exponential law best describes the probability distribution of slip, with observable scale parameters determined by the average and maximum slip. Applying Box-Cox transformation to slip distributions (to create quasi-normal distributed data) supports cube-root transformation, which also implies distinctive non-Gaussian slip
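
    One common way to write a truncated exponential slip law consistent with this description (a scale parameter tied to the average slip and a hard cutoff at the maximum slip) is given below; the exact parameterisation used in the paper may differ.

    \[
      p(s) \;=\; \frac{\exp(-s/s_c)}{s_c\,\bigl[1 - \exp(-s_{\max}/s_c)\bigr]},
      \qquad 0 \le s \le s_{\max},
    \]

    where s_c is the scale parameter (related to the average slip) and s_max is the maximum slip on the fault.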

  4. Reliability parameters of distribution networks components

    Energy Technology Data Exchange (ETDEWEB)

    Gono, R.; Kratky, M.; Rusek, S.; Kral, V. [Technical Univ. of Ostrava (Czech Republic)

    2009-03-11

    This paper presented a framework for the retrieval of parameters from various heterogeneous power system databases. The framework was designed to transform the heterogeneous outage data into a common relational scheme. The framework was used to retrieve outage data parameters from the Czech and Slovak republics in order to demonstrate the scalability of the framework. The reliability computation of the system was performed in 2 phases, representing the retrieval of component reliability parameters and the reliability computation itself. Reliability rates were determined using component reliability and global reliability indices. Input data for the reliability computation were retrieved from data on equipment operating under similar conditions, while the probability of failure-free operation was evaluated by determining component status. Anomalies in distribution outage data were described as scheme, attribute, and term differences. Input types consisted of input relations; transformation programs; codebooks; and translation tables. The system was used to successfully retrieve data from 7 distributors in the Czech Republic and Slovak Republic between 2000 and 2007. The database included 301,555 records. Data were queried using the SQL language. 29 refs., 2 tabs., 2 figs.
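
    The transformation of heterogeneous outage records into a common relational scheme, followed by an SQL query, might look like the following sketch. The table layout, column names, and sample rows are illustrative assumptions only and do not reproduce the authors' scheme.

```python
# Sketch of loading heterogeneous distributor outage records into a common
# relational scheme and querying failure rates with SQL. Table layout,
# column names and the sample rows are illustrative assumptions only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE outage (
    distributor TEXT, component TEXT, year INTEGER, failures INTEGER, units INTEGER)""")

# Two "source databases" with different field names, mapped to the common scheme.
source_a = [{"dist": "CZ-1", "equip": "MV line", "yr": 2005, "fail": 12, "n": 400}]
source_b = [{"utility": "SK-2", "comp": "transformer", "rok": 2005, "poruchy": 3, "pocet": 150}]

for r in source_a:
    conn.execute("INSERT INTO outage VALUES (?, ?, ?, ?, ?)",
                 (r["dist"], r["equip"], r["yr"], r["fail"], r["n"]))
for r in source_b:
    conn.execute("INSERT INTO outage VALUES (?, ?, ?, ?, ?)",
                 (r["utility"], r["comp"], r["rok"], r["poruchy"], r["pocet"]))

# Component failure rate (failures per unit-year) across all distributors.
for row in conn.execute("""SELECT component,
                                  CAST(SUM(failures) AS REAL) / SUM(units) AS rate
                           FROM outage GROUP BY component"""):
    print(row)
```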

  5. Quantitative multi-scale analysis of mineral distributions and fractal pore structures for a heterogeneous Junger Basin shale

    International Nuclear Information System (INIS)

    Wang, Y.D.; Ren, Y.Q.; Hu, T.; Deng, B.; Xiao, T.Q.; Liu, K.Y.; Yang, Y.S.

    2016-01-01

    Three dimensional (3D) characterization of shales has recently attracted wide attention in relation to the growing importance of shale oil and gas. Obtaining a complete 3D compositional distribution of shale has proven to be challenging due to its multi-scale characteristics. A combined multi-energy X-ray micro-CT technique and data-constrained modelling (DCM) approach has been used to quantitatively investigate the multi-scale mineral and porosity distributions of a heterogeneous shale from the Junger Basin, northwestern China, by sub-sampling. The 3D sub-resolution structures of minerals and pores in the samples are quantitatively obtained as partial volume fraction distributions, with colours representing compositions. The shale sub-samples from two areas have different physical structures for minerals and pores, with the dominant minerals being feldspar and dolomite, respectively. Significant heterogeneities have been observed in the analysis. The sub-voxel sized pores form large interconnected clusters with fractal structures. The fractal dimensions of the largest clusters for both sub-samples were quantitatively calculated and found to be 2.34 and 2.86, respectively. The results are relevant to quantitative modelling of gas transport in shale reservoirs.

  6. A Survey on Distributed Mobile Database and Data Mining

    Science.gov (United States)

    Goel, Ajay Mohan; Mangla, Neeraj; Patel, R. B.

    2010-11-01

    The anticipated increase in popular use of the Internet has created more opportunities in information dissemination, E-commerce, and multimedia communication. It has also created more challenges in organizing information and facilitating its efficient retrieval. In response, new techniques have evolved which facilitate the creation of such applications. Certainly the most promising among the new paradigms is the use of mobile agents. In this paper, mobile agent and distributed database technologies are applied to the banking system. Many approaches have been proposed to schedule data items for broadcasting in a mobile environment. In this paper, an efficient strategy for accessing multiple data items in mobile environments is proposed and the bottleneck of the current banking system is addressed.

  7. Addressing population heterogeneity and distribution in epidemics models using a cellular automata approach.

    Science.gov (United States)

    López, Leonardo; Burguerner, Germán; Giovanini, Leonardo

    2014-04-12

    The spread of an infectious disease is determined by biological and social factors. Models based on cellular automata are adequate to describe such natural systems consisting of a massive collection of simple interacting objects. They characterize the time evolution of the global system as the emergent behaviour resulting from the interaction of the objects, whose behaviour is defined through a set of simple rules that encode the individual behaviour and the transmission dynamics. An epidemic is characterized through an individual-based model built upon cellular automata. In the proposed model, each individual of the population is represented by a cell of the automaton. This way of modeling an epidemic situation allows the characteristics of each individual to be defined, different scenarios to be established, and control strategies to be implemented. A cellular automata model to study the time evolution of heterogeneous populations through the various stages of disease was proposed, allowing the inclusion of individual heterogeneity, geographical characteristics and social factors that determine the dynamics of the disease. Different assumptions made to build the classical model were evaluated, leading to the following results: i) for a low contact rate (as in a quarantine process or low density population areas) the number of infective individuals is lower than in other areas where the contact rate is higher, and ii) for different initial spatial distributions of infected individuals, different epidemic dynamics are obtained due to their influence on the transition rate and the reproductive ratio of the disease. The contact rate and spatial distributions have a central role in the spread of a disease. For low density populations the spread is very low and the number of infected individuals is lower than in highly populated areas. The spatial distribution of the population and the disease focus as well as the geographical characteristic of the area play a central role in the dynamics of the
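
    A minimal sketch of an individual-based SIR cellular automaton in the spirit described above is given below; the transition rules, parameter values, and the low-contact region are illustrative assumptions rather than the rules of the cited model.

```python
# Minimal sketch of an individual-based SIR cellular automaton on a grid.
# Transition rules, rates and the initial focus are illustrative assumptions,
# not the specific rules of the cited model.
import numpy as np

S, I, R = 0, 1, 2
rng = np.random.default_rng(0)

def step(state, beta, gamma=0.1):
    """beta: per-cell contact/transmission probability (can vary in space)."""
    new = state.copy()
    infected = (state == I)
    # Count infected neighbours (von Neumann neighbourhood).
    nbrs = sum(np.roll(infected, shift, axis) for shift, axis in
               [(1, 0), (-1, 0), (1, 1), (-1, 1)])
    p_inf = 1 - (1 - beta) ** nbrs            # prob. a susceptible cell is infected
    new[(state == S) & (rng.random(state.shape) < p_inf)] = I
    new[infected & (rng.random(state.shape) < gamma)] = R
    return new

grid = np.full((100, 100), S)
grid[50, 50] = I                              # single initial disease focus
beta = np.full(grid.shape, 0.3)
beta[:, :50] = 0.05                           # low-contact region (e.g. quarantine)

for _ in range(100):
    grid = step(grid, beta)
print("infected:", int((grid == I).sum()), "recovered:", int((grid == R).sum()))
```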

  8. BioWarehouse: a bioinformatics database warehouse toolkit.

    Science.gov (United States)

    Lee, Thomas J; Pouliot, Yannick; Wagner, Valerie; Gupta, Priyanka; Stringer-Calvert, David W J; Tenenbaum, Jessica D; Karp, Peter D

    2006-03-23

    This article addresses the problem of interoperation of heterogeneous bioinformatics databases. We introduce BioWarehouse, an open source toolkit for constructing bioinformatics database warehouses using the MySQL and Oracle relational database managers. BioWarehouse integrates its component databases into a common representational framework within a single database management system, thus enabling multi-database queries using the Structured Query Language (SQL) but also facilitating a variety of database integration tasks such as comparative analysis and data mining. BioWarehouse currently supports the integration of a pathway-centric set of databases including ENZYME, KEGG, and BioCyc, and in addition the UniProt, GenBank, NCBI Taxonomy, and CMR databases, and the Gene Ontology. Loader tools, written in the C and JAVA languages, parse and load these databases into a relational database schema. The loaders also apply a degree of semantic normalization to their respective source data, decreasing semantic heterogeneity. The schema supports the following bioinformatics datatypes: chemical compounds, biochemical reactions, metabolic pathways, proteins, genes, nucleic acid sequences, features on protein and nucleic-acid sequences, organisms, organism taxonomies, and controlled vocabularies. As an application example, we applied BioWarehouse to determine the fraction of biochemically characterized enzyme activities for which no sequences exist in the public sequence databases. The answer is that no sequence exists for 36% of enzyme activities for which EC numbers have been assigned. These gaps in sequence data significantly limit the accuracy of genome annotation and metabolic pathway prediction, and are a barrier for metabolic engineering. Complex queries of this type provide examples of the value of the data warehousing approach to bioinformatics research. BioWarehouse embodies significant progress on the database integration problem for bioinformatics.
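
    The kind of multi-database query enabled by such a warehouse can be sketched as follows; the two-table schema, table names, and sample rows are simplified assumptions, not the actual BioWarehouse schema.

```python
# Sketch of the kind of cross-database SQL query a warehouse approach enables:
# counting enzyme activities (EC numbers) with no sequence in any loaded source.
# The two-table schema and sample rows below are simplified assumptions.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE enzyme_activity (ec_number TEXT PRIMARY KEY, name TEXT);
CREATE TABLE protein_sequence (id INTEGER PRIMARY KEY, ec_number TEXT, source_db TEXT);
INSERT INTO enzyme_activity VALUES ('1.1.1.1', 'alcohol dehydrogenase'),
                                   ('4.2.1.999', 'hypothetical lyase');
INSERT INTO protein_sequence VALUES (1, '1.1.1.1', 'UniProt');
""")

query = """
SELECT COUNT(*) FROM enzyme_activity e
WHERE NOT EXISTS (SELECT 1 FROM protein_sequence p WHERE p.ec_number = e.ec_number)
"""
missing, = db.execute(query).fetchone()
total, = db.execute("SELECT COUNT(*) FROM enzyme_activity").fetchone()
print(f"{missing}/{total} enzyme activities have no sequence in the warehouse")
```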

  9. Isotoxic dose escalation in the treatment of lung cancer by means of heterogeneous dose distributions in the presence of respiratory motion

    DEFF Research Database (Denmark)

    Baker, Mariwan; Nielsen, Morten; Hansen, Olfred

    2011-01-01

    To test, in the presence of intrafractional respiration movement, a margin recipe valid for a homogeneous and conformal dose distribution and to test whether the use of smaller margins combined with heterogeneous dose distributions allows an isotoxic dose escalation when respiratory motion...

  10. A Secure Scheme for Distributed Consensus Estimation against Data Falsification in Heterogeneous Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Shichao Mi

    2016-02-01

    Full Text Available Heterogeneous wireless sensor networks (HWSNs) can achieve more tasks and prolong the network lifetime. However, they are vulnerable to attacks from the environment or malicious nodes. This paper is concerned with the issues of a secure consensus scheme in HWSNs consisting of two types of sensor nodes. Sensor nodes (SNs) have more computation power, while relay nodes (RNs) with low power can only transmit information for sensor nodes. To address the security issues of distributed estimation in HWSNs, we exploit the heterogeneity of responsibilities between the two types of sensors and propose a parameter adjusted-based consensus scheme (PACS) to mitigate the effect of the malicious node. Finally, the convergence property is proven to be guaranteed, and the simulation results validate the effectiveness and efficiency of PACS.
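
    The general idea of reducing the influence of a node whose reported value deviates strongly from its neighbourhood can be sketched as follows. This is a generic weighted average-consensus illustration under an assumed ring topology and weighting rule, not the PACS algorithm of the paper.

```python
# Loose sketch of weighted average consensus in which the influence of a node
# whose value deviates strongly from its neighbourhood is reduced, mitigating
# a data-falsifying node. Illustration of the general idea only; not PACS.
import numpy as np

rng = np.random.default_rng(1)
n = 10
true_value = 25.0
x = true_value + rng.normal(0, 0.5, n)    # sensor readings
neighbors = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}  # ring topology

for _ in range(50):
    x[3] = 80.0                           # malicious node keeps injecting false data
    x_new = x.copy()
    for i in range(n):
        if i == 3:
            continue
        vals = np.array([x[j] for j in neighbors[i]])
        dev = np.abs(vals - x[i])
        w = 1.0 / (1.0 + dev ** 2)        # strongly down-weight outlying neighbours
        w = w / (w.sum() + 1.0)           # reserve weight for the node's own value
        x_new[i] = (1.0 - w.sum()) * x[i] + np.dot(w, vals)
    x = x_new

print("estimates:", np.round(x, 2))       # honest nodes stay close to the true value
```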

  11. Glycogen distribution in the microwave-fixed mouse brain reveals heterogeneous astrocytic patterns.

    Science.gov (United States)

    Oe, Yuki; Baba, Otto; Ashida, Hitoshi; Nakamura, Kouichi C; Hirase, Hajime

    2016-09-01

    In the brain, glycogen metabolism has been implied in synaptic plasticity and learning, yet the distribution of this molecule has not been fully described. We investigated cerebral glycogen of the mouse by immunohistochemistry (IHC) using two monoclonal antibodies that have different affinities depending on the glycogen size. The use of focused microwave irradiation yielded well-defined glycogen immunoreactive signals compared with the conventional periodic acid-Schiff method. The IHC signals displayed a punctate distribution localized predominantly in astrocytic processes. Glycogen immunoreactivity (IR) was high in the hippocampus, striatum, cortex, and cerebellar molecular layer, whereas it was low in the white matter and most of the subcortical structures. Additionally, glycogen distribution in the hippocampal CA3-CA1 and striatum had a 'patchy' appearance with glycogen-rich and glycogen-poor astrocytes appearing in alternation. The glycogen patches were more evident with large-molecule glycogen in young adult mice but they were hardly observable in aged mice (1-2 years old). Our results reveal brain region-dependent glycogen accumulation and possibly metabolic heterogeneity of astrocytes. GLIA 2016;64:1532-1545. © 2016 The Authors. Glia Published by Wiley Periodicals, Inc.

  12. Glycogen distribution in the microwave‐fixed mouse brain reveals heterogeneous astrocytic patterns

    Science.gov (United States)

    Baba, Otto; Ashida, Hitoshi; Nakamura, Kouichi C.

    2016-01-01

    In the brain, glycogen metabolism has been implied in synaptic plasticity and learning, yet the distribution of this molecule has not been fully described. We investigated cerebral glycogen of the mouse by immunohistochemistry (IHC) using two monoclonal antibodies that have different affinities depending on the glycogen size. The use of focused microwave irradiation yielded well‐defined glycogen immunoreactive signals compared with the conventional periodic acid‐Schiff method. The IHC signals displayed a punctate distribution localized predominantly in astrocytic processes. Glycogen immunoreactivity (IR) was high in the hippocampus, striatum, cortex, and cerebellar molecular layer, whereas it was low in the white matter and most of the subcortical structures. Additionally, glycogen distribution in the hippocampal CA3‐CA1 and striatum had a ‘patchy’ appearance with glycogen‐rich and glycogen‐poor astrocytes appearing in alternation. The glycogen patches were more evident with large‐molecule glycogen in young adult mice but they were hardly observable in aged mice (1–2 years old). Our results reveal brain region‐dependent glycogen accumulation and possibly metabolic heterogeneity of astrocytes. GLIA 2016;64:1532–1545 PMID:27353480

  13. Heterogeneous Face Attribute Estimation: A Deep Multi-Task Learning Approach.

    Science.gov (United States)

    Han, Hu; K Jain, Anil; Shan, Shiguang; Chen, Xilin

    2017-08-10

    Face attribute estimation has many potential applications in video surveillance, face retrieval, and social media. While a number of methods have been proposed for face attribute estimation, most of them did not explicitly consider the attribute correlation and heterogeneity (e.g., ordinal vs. nominal and holistic vs. local) during feature representation learning. In this paper, we present a Deep Multi-Task Learning (DMTL) approach to jointly estimate multiple heterogeneous attributes from a single face image. In DMTL, we tackle attribute correlation and heterogeneity with convolutional neural networks (CNNs) consisting of shared feature learning for all the attributes, and category-specific feature learning for heterogeneous attributes. We also introduce an unconstrained face database (LFW+), an extension of public-domain LFW, with heterogeneous demographic attributes (age, gender, and race) obtained via crowdsourcing. Experimental results on benchmarks with multiple face attributes (MORPH II, LFW+, CelebA, LFWA, and FotW) show that the proposed approach has superior performance compared to state of the art. Finally, evaluations on a public-domain face database (LAP) with a single attribute show that the proposed approach has excellent generalization ability.

  14. Imaging geochemical heterogeneities using inverse reactive transport modeling: An example relevant for characterizing arsenic mobilization and distribution

    DEFF Research Database (Denmark)

    Fakhreddine, Sarah; Lee, Jonghyun; Kitanidis, Peter K.

    2016-01-01

    The spatial distribution of reactive minerals in the subsurface is often a primary factor controlling the fate and transport of contaminants in groundwater systems. However, direct measurement and estimation of heterogeneously distributed minerals are often costly and difficult to obtain. While ... groundwater parameters. Specifically, we simulate the mobilization of arsenic via kinetic oxidative dissolution of As-bearing pyrite due to dissolved oxygen in the ambient groundwater, which is an important mechanism for arsenic release in groundwater both under natural conditions and in engineering applications ...

  15. SMALL-SCALE AND GLOBAL DYNAMOS AND THE AREA AND FLUX DISTRIBUTIONS OF ACTIVE REGIONS, SUNSPOT GROUPS, AND SUNSPOTS: A MULTI-DATABASE STUDY

    Energy Technology Data Exchange (ETDEWEB)

    Muñoz-Jaramillo, Andrés; Windmueller, John C.; Amouzou, Ernest C.; Longcope, Dana W. [Department of Physics, Montana State University, Bozeman, MT 59717 (United States); Senkpeil, Ryan R. [Department of Physics, Purdue University, West Lafayette, IN 47907 (United States); Tlatov, Andrey G. [Kislovodsk Mountain Astronomical Station of the Pulkovo Observatory, Kislovodsk 357700 (Russian Federation); Nagovitsyn, Yury A. [Pulkovo Astronomical Observatory, Russian Academy of Sciences, St. Petersburg 196140 (Russian Federation); Pevtsov, Alexei A. [National Solar Observatory, Sunspot, NM 88349 (United States); Chapman, Gary A.; Cookson, Angela M. [San Fernando Observatory, Department of Physics and Astronomy, California State University Northridge, Northridge, CA 91330 (United States); Yeates, Anthony R. [Department of Mathematical Sciences, Durham University, South Road, Durham DH1 3LE (United Kingdom); Watson, Fraser T. [National Solar Observatory, Tucson, AZ 85719 (United States); Balmaceda, Laura A. [Institute for Astronomical, Terrestrial and Space Sciences (ICATE-CONICET), San Juan (Argentina); DeLuca, Edward E. [Harvard-Smithsonian Center for Astrophysics, Cambridge, MA 02138 (United States); Martens, Petrus C. H., E-mail: munoz@solar.physics.montana.edu [Department of Physics and Astronomy, Georgia State University, Atlanta, GA 30303 (United States)

    2015-02-10

    In this work, we take advantage of 11 different sunspot group, sunspot, and active region databases to characterize the area and flux distributions of photospheric magnetic structures. We find that, when taken separately, different databases are better fitted by different distributions (as has been reported previously in the literature). However, we find that all our databases can be reconciled by the simple application of a proportionality constant, and that, in reality, different databases are sampling different parts of a composite distribution. This composite distribution is made up by linear combination of Weibull and log-normal distributions—where a pure Weibull (log-normal) characterizes the distribution of structures with fluxes below (above) 10^21 Mx (10^22 Mx). Additionally, we demonstrate that the Weibull distribution shows the expected linear behavior of a power-law distribution (when extended to smaller fluxes), making our results compatible with the results of Parnell et al. We propose that this is evidence of two separate mechanisms giving rise to visible structures on the photosphere: one directly connected to the global component of the dynamo (and the generation of bipolar active regions), and the other with the small-scale component of the dynamo (and the fragmentation of magnetic structures due to their interaction with turbulent convection).
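
    The composite distribution described above, a linear combination of Weibull and log-normal densities, can be sketched as follows; the mixing fraction and all shape and scale parameters are illustrative placeholders, not the fitted values reported in the paper.

```python
# Sketch of the composite flux distribution: a linear combination of a Weibull
# and a log-normal probability density, with the Weibull dominating at low flux
# and the log-normal at high flux. All parameter values are illustrative only.
import numpy as np
from scipy import stats

def composite_pdf(flux, mix=0.7,
                  weib_shape=0.5, weib_scale=3e20,
                  ln_sigma=1.0, ln_scale=5e21):
    """mix * Weibull + (1 - mix) * log-normal, evaluated at flux [Mx]."""
    w = stats.weibull_min.pdf(flux, c=weib_shape, scale=weib_scale)
    ln = stats.lognorm.pdf(flux, s=ln_sigma, scale=ln_scale)
    return mix * w + (1.0 - mix) * ln

flux = np.logspace(19, 23, 5)             # magnetic flux values in Maxwell
print(np.column_stack([flux, composite_pdf(flux)]))
```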

  16. Heterogeneity in the WTP for recreational access

    DEFF Research Database (Denmark)

    Campbell, Danny; Vedel, Suzanne Elizabeth; Thorsen, Bo Jellesmark

    2014-01-01

    In this study we have addressed appropriate modelling of heterogeneity in willingness to pay (WTP) for environmental goods, and have demonstrated its importance using a case of forest access in Denmark. We compared WTP distributions for four models: (1) a multinomial logit model, (2) a mixed logit model assuming a univariate Normal distribution, (3) or assuming a multivariate Normal distribution allowing for correlation across attributes, and (4) a mixture of two truncated Normal distributions, allowing for correlation among attributes. In the first two models mean WTP for enhanced access was negative. However, models accounting for preference heterogeneity found a positive mean WTP, but a large sub-group with negative WTP. Accounting for preference heterogeneity can alter overall conclusions, which highlights the importance of this for policy recommendations.

  17. BioWarehouse: a bioinformatics database warehouse toolkit

    Directory of Open Access Journals (Sweden)

    Stringer-Calvert David WJ

    2006-03-01

    Full Text Available Abstract Background This article addresses the problem of interoperation of heterogeneous bioinformatics databases. Results We introduce BioWarehouse, an open source toolkit for constructing bioinformatics database warehouses using the MySQL and Oracle relational database managers. BioWarehouse integrates its component databases into a common representational framework within a single database management system, thus enabling multi-database queries using the Structured Query Language (SQL) but also facilitating a variety of database integration tasks such as comparative analysis and data mining. BioWarehouse currently supports the integration of a pathway-centric set of databases including ENZYME, KEGG, and BioCyc, and in addition the UniProt, GenBank, NCBI Taxonomy, and CMR databases, and the Gene Ontology. Loader tools, written in the C and JAVA languages, parse and load these databases into a relational database schema. The loaders also apply a degree of semantic normalization to their respective source data, decreasing semantic heterogeneity. The schema supports the following bioinformatics datatypes: chemical compounds, biochemical reactions, metabolic pathways, proteins, genes, nucleic acid sequences, features on protein and nucleic-acid sequences, organisms, organism taxonomies, and controlled vocabularies. As an application example, we applied BioWarehouse to determine the fraction of biochemically characterized enzyme activities for which no sequences exist in the public sequence databases. The answer is that no sequence exists for 36% of enzyme activities for which EC numbers have been assigned. These gaps in sequence data significantly limit the accuracy of genome annotation and metabolic pathway prediction, and are a barrier for metabolic engineering. Complex queries of this type provide examples of the value of the data warehousing approach to bioinformatics research. Conclusion BioWarehouse embodies significant progress on the

  18. Security in the CernVM File System and the Frontier Distributed Database Caching System

    International Nuclear Information System (INIS)

    Dykstra, D; Blomer, J

    2014-01-01

    Both the CernVM File System (CVMFS) and the Frontier Distributed Database Caching System (Frontier) distribute centrally updated data worldwide for LHC experiments using http proxy caches. Neither system provides privacy or access control on reading the data, but both control access to updates of the data and can guarantee the authenticity and integrity of the data transferred to clients over the internet. CVMFS has since its early days required digital signatures and secure hashes on all distributed data, and recently Frontier has added X.509-based authenticity and integrity checking. In this paper we detail and compare the security models of CVMFS and Frontier.

  19. Security in the CernVM File System and the Frontier Distributed Database Caching System

    Science.gov (United States)

    Dykstra, D.; Blomer, J.

    2014-06-01

    Both the CernVM File System (CVMFS) and the Frontier Distributed Database Caching System (Frontier) distribute centrally updated data worldwide for LHC experiments using http proxy caches. Neither system provides privacy or access control on reading the data, but both control access to updates of the data and can guarantee the authenticity and integrity of the data transferred to clients over the internet. CVMFS has since its early days required digital signatures and secure hashes on all distributed data, and recently Frontier has added X.509-based authenticity and integrity checking. In this paper we detail and compare the security models of CVMFS and Frontier.
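
    The underlying pattern, content addressed by secure hashes with a digitally signed catalogue so that clients behind untrusted proxies can verify authenticity and integrity, can be sketched generically as follows. The catalogue format, the Ed25519 key choice, and the third-party 'cryptography' package are assumptions for illustration; they are not the actual CVMFS or Frontier formats.

```python
# Generic sketch of the integrity/authenticity pattern described above:
# content is addressed by a secure hash and the catalogue of hashes is
# digitally signed, so a client behind an untrusted HTTP proxy can verify
# what it received. Illustration only; not the actual CVMFS catalogue or
# Frontier message format. Requires the third-party 'cryptography' package.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# --- publisher side -------------------------------------------------------
content = b"calibration data distributed via proxy caches"
catalog = hashlib.sha256(content).hexdigest().encode()   # "catalogue" of valid hashes

signing_key = Ed25519PrivateKey.generate()
signature = signing_key.sign(catalog)
public_key = signing_key.public_key()      # shipped to clients out of band

# --- client side ----------------------------------------------------------
def verify(received_content, received_catalog, received_signature):
    try:
        public_key.verify(received_signature, received_catalog)   # authenticity
    except InvalidSignature:
        return False
    # integrity: content must match a hash listed in the signed catalogue
    digest = hashlib.sha256(received_content).hexdigest().encode()
    return digest in received_catalog.splitlines()

print(verify(content, catalog, signature))                  # True
print(verify(b"tampered by a proxy", catalog, signature))   # False
```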

  20. Reliability models for a nonrepairable system with heterogeneous components having a phase-type time-to-failure distribution

    International Nuclear Information System (INIS)

    Kim, Heungseob; Kim, Pansoo

    2017-01-01

    This research paper presents practical stochastic models for designing and analyzing the time-dependent reliability of nonrepairable systems. The models are formulated for nonrepairable systems with heterogeneous components having phase-type time-to-failure distributions by a structured continuous time Markov chain (CTMC). The versatility of the phase-type distributions enhances the flexibility and practicality of the systems. By virtue of these benefits, studies in reliability engineering can advance beyond previous work. This study attempts to solve a redundancy allocation problem (RAP) by using these new models. The implications of mixing components, redundancy levels, and redundancy strategies are simultaneously considered to maximize the reliability of a system. An imperfect switching case in a standby redundant system is also considered. Furthermore, the experimental results for a well-known RAP benchmark problem are presented to demonstrate the approximation error of the previous reliability function for a standby redundant system and the usefulness of the current research. - Highlights: • Phase-type time-to-failure distribution is used for components. • Reliability model for nonrepairable system is developed using Markov chain. • System is composed of heterogeneous components. • Model provides the real value of standby system reliability, not an approximation. • Redundancy allocation problem is used to show usefulness of this model.
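
    For a single component with a phase-type time-to-failure distribution, the survival (reliability) function is R(t) = α·exp(Tt)·1, where α is the initial probability vector over the transient phases and T is the sub-generator of the CTMC. The sketch below evaluates this with a matrix exponential; the particular α and T are illustrative values, not a model from the paper.

```python
# Sketch of evaluating the reliability (survival) function of a component with
# a phase-type time-to-failure distribution: R(t) = alpha * expm(T t) * 1,
# where T is the sub-generator over the transient states of a CTMC.
# The particular alpha and T below are illustrative values only.
import numpy as np
from scipy.linalg import expm

alpha = np.array([1.0, 0.0])              # start in the first transient phase
T = np.array([[-0.5, 0.5],                # sub-generator over transient phases
              [0.0, -0.2]])               # absorption (failure) rate 0.2 from phase 2

def reliability(t):
    """Probability the component has not yet failed by time t."""
    return float(alpha @ expm(T * t) @ np.ones(len(alpha)))

for t in (1.0, 5.0, 10.0):
    print(f"R({t}) = {reliability(t):.4f}")
```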

  1. Synchronous message-based communication for distributed heterogeneous systems

    International Nuclear Information System (INIS)

    Wilkinson, N.; Dohan, D.

    1992-01-01

    The use of a synchronous, message-based real-time operating system (Unison) as the basis of transparent interprocess and inter-processor communication over VME-bus is described. The implementation of a synchronous, message-based protocol for network communication between heterogeneous systems is discussed. In particular, the design and implementation of a message-based session layer over a virtual circuit transport layer protocol using UDP/IP is described. Inter-process communication is achieved via message-based semantics which are portable by virtue of their ease of implementation in other operating system environments. Protocol performance for network communication among heterogeneous architectures is presented, including VMS, Unix, Mach and Unison. (author)

  2. Multilingual Federated Searching Across Heterogeneous Collections.

    Science.gov (United States)

    Powell, James; Fox, Edward A.

    1998-01-01

    Describes a scalable system for searching heterogeneous multilingual collections on the World Wide Web. Details Searchable Database Markup Language (SearchDB-ML) for describing the characteristics of a search engine and its interface, and a protocol for requesting word translations between languages. (Author)

  3. Evaluating and categorizing the reliability of distribution coefficient values in the sorption database

    International Nuclear Information System (INIS)

    Ochs, Michael; Saito, Yoshihiko; Kitamura, Akira; Shibata, Masahiro; Sasamoto, Hiroshi; Yui, Mikazu

    2007-03-01

    Japan Atomic Energy Agency (JAEA) has developed the sorption database (JNC-SDB) for bentonite and rocks in order to assess the retardation property of important radioactive elements in natural and engineered barriers in the H12 report. The database includes distribution coefficients (Kd) of important radionuclides. The SDB contains about 20,000 Kd values. The SDB includes a great variety of Kd values and additional key information from many different literature sources. Accordingly, a classification guideline and classification system were developed in order to evaluate the reliability of each Kd value (Th, Pa, U, Np, Pu, Am, Cm, Cs, Ra, Se, Tc on bentonite). The reliability of 3740 Kd values is evaluated and categorized. (author)

  4. Artificial Radionuclides Database in the Pacific Ocean: HAM Database

    Directory of Open Access Journals (Sweden)

    Michio Aoyama

    2004-01-01

    Full Text Available The database “Historical Artificial Radionuclides in the Pacific Ocean and its Marginal Seas”, or HAM database, has been created. The database includes 90Sr, 137Cs, and 239,240Pu concentration data from the seawater of the Pacific Ocean and its marginal seas, with some measurements from the sea surface to the bottom. The data in the HAM database were collected from about 90 literature citations, which include published papers; annual reports by the Hydrographic Department, Maritime Safety Agency, Japan; and unpublished data provided by individuals. The concentration data for 90Sr, 137Cs, and 239,240Pu cover the period 1957–1998. The present HAM database includes 7737 records for 137Cs concentration data, 3972 records for 90Sr concentration data, and 2666 records for 239,240Pu concentration data. The spatial distribution of sampling stations in the HAM database is heterogeneous, namely, more than 80% of the data for each radionuclide is from the Pacific Ocean and the Sea of Japan, while a relatively small portion of data is from the South Pacific. This HAM database will allow us to use these radionuclides as significant chemical tracers for oceanographic study as well as for the assessment of environmental effects of anthropogenic radionuclides over these 5 decades. Furthermore, these radionuclides can be used to verify oceanic general circulation models on the time scale of several decades.

  5. Development and Field Test of a Real-Time Database in the Korean Smart Distribution Management System

    Directory of Open Access Journals (Sweden)

    Sang-Yun Yun

    2014-03-01

    Full Text Available Recently, a distribution management system (DMS) that can conduct periodical system analysis and control by mounting various application programs has been actively developed. In this paper, we summarize the development and demonstration of a database structure that can perform real-time system analysis and control of the Korean smart distribution management system (KSDMS). The developed database structure consists of a common information model (CIM)-based off-line database (DB), a physical DB (PDB) for DB establishment of the operating server, a real-time DB (RTDB) for real-time server operation and remote terminal unit data interconnection, and an application common model (ACM) DB for running application programs. The ACM DB for real-time system analysis and control of the application programs was developed by using a parallel table structure and a link list model, thereby providing fast input and output as well as high execution speed of application programs. Furthermore, the ACM DB was configured with hierarchical and non-hierarchical data models to reflect the system models, increasing the DB size and operation speed through the reduction of system elements that were unnecessary for analysis and control. The proposed database model was implemented and tested at the Gochaing and Jeju offices using a real system. Through data measurement of the remote terminal units, and through the operation and control of the application programs using the measurement, the performance, speed, and integrity of the proposed database model were validated, thereby demonstrating that this model can be applied to real systems.

  6. On the spatial distribution of the transpiration and soil moisture of a Mediterranean heterogeneous ecosystem in water-limited conditions.

    Science.gov (United States)

    Curreli, Matteo; Corona, Roberto; Montaldo, Nicola; Albertson, John D.; Oren, Ram

    2014-05-01

    Mediterranean ecosystems are characterized by strong heterogeneity, and often by water-limited conditions. Under these conditions contrasting plant functional types (PFT, e.g. grass and woody vegetation) compete for water use. Both the vegetation cover spatial distribution and the soil properties affect the soil moisture (SM) spatial distribution. Indeed, vegetation cover density and type affect evapotranspiration (ET), which is the main loss term of the soil water balance in these ecosystems. With the objective of carefully estimating the SM and ET spatial distributions in a Mediterranean water-limited ecosystem and understanding SM and ET relationships, an extended field campaign was carried out. The study was performed in a heterogeneous ecosystem in Orroli, Sardinia (Italy). The experimental site is a typical Mediterranean ecosystem where the vegetation is distributed in patches of woody vegetation (mainly wild olives) and grass. Soil depth is low and varies spatially between 10 cm and 40 cm, without any correlation with the vegetation spatial distribution. ET, land-surface fluxes and CO2 fluxes are estimated with an eddy covariance technique using a micrometeorological tower. But in heterogeneous ecosystems a key assumption of the eddy covariance theory, the homogeneity of the surface, is not preserved and the ET estimate may be incorrect. Hence, we estimate ET of the woody vegetation using the thermal dissipation method (i.e. sap flow technique) in order to compare the two methodologies. Due to the high heterogeneity of the vegetation and soil properties of the field, a total of 54 sap flux sensors were installed. 14 clumps of wild olives within the eddy covariance footprint were identified as the most representative source of flux and they were instrumented with the thermal dissipation probes. Measurements of diameter at the height of sensor installation (height of 0.4 m above ground) were recorded in all the clumps. Bark thickness and sapwood depth were measured on several

  7. Nuclear analysis of the Chornobyl fuel containing masses with heterogeneous fuel distribution

    International Nuclear Information System (INIS)

    Turski, R. B.

    1998-01-01

    Although significant data have been obtained on the condition and composition of the fuel containing masses (FCM) located in the concrete chambers under the Chernobyl Unit 4 reactor cavity, there is still uncertainty regarding the possible recriticality of this material. The high radiation levels make access extremely difficult, and most of the samples are from the FCM surface regions. There is little information on the interior regions of the FCM, and one cannot assume with confidence that the surface measurements are representative of the interior regions. Therefore, reasonable assumptions on the key parameters such as fuel concentration, the concentrations of impurities and neutron poisons (especially boron), the void fraction of the FCM due to its known porosity, and the degree of fuel heterogeneity, are necessary to evaluate the possibility of recriticality. The void fraction is important since it introduces the possibility of water moderator being distributed throughout the FCM. Calculations indicate that the addition of 10 to 30 volume percent (v/o) water to the FCM has a significant impact on the calculated reactivity of the FCM. Therefore, water addition must be considered carefully. The other possible moderators are graphite and silicon dioxide. As discussed later in this paper, silicon dioxide moderation does not represent a criticality threat. For graphite, both heterogeneous fuel arrangements and very large volume fractions of graphite are necessary for a graphite moderated system to go critical. Based on the observations and measurements of the FCM compositions, these conditions do not appear credible for the Chernobyl FCM. Therefore, the focus of the analysis reported in this paper will be on reasonable heterogeneous fuel arrangements and water moderation. The analysis will evaluate a range of fuel and diluent compositions.

  8. Multiple co-clustering based on nonparametric mixture models with heterogeneous marginal distributions.

    Science.gov (United States)

    Tokuda, Tomoki; Yoshimoto, Junichiro; Shimizu, Yu; Okada, Go; Takamura, Masahiro; Okamoto, Yasumasa; Yamawaki, Shigeto; Doya, Kenji

    2017-01-01

    We propose a novel method for multiple clustering, which is useful for analysis of high-dimensional data containing heterogeneous types of features. Our method is based on nonparametric Bayesian mixture models in which features are automatically partitioned (into views) for each clustering solution. This feature partition works as feature selection for a particular clustering solution, which screens out irrelevant features. To make our method applicable to high-dimensional data, a co-clustering structure is newly introduced for each view. Further, the outstanding novelty of our method is that we simultaneously model different distribution families, such as Gaussian, Poisson, and multinomial distributions in each cluster block, which widens areas of application to real data. We apply the proposed method to synthetic and real data, and show that our method outperforms other multiple clustering methods both in recovering true cluster structures and in computation time. Finally, we apply our method to a depression dataset with no true cluster structure available, from which useful inferences are drawn about possible clustering structures of the data.

  9. Multiple co-clustering based on nonparametric mixture models with heterogeneous marginal distributions.

    Directory of Open Access Journals (Sweden)

    Tomoki Tokuda

    Full Text Available We propose a novel method for multiple clustering, which is useful for analysis of high-dimensional data containing heterogeneous types of features. Our method is based on nonparametric Bayesian mixture models in which features are automatically partitioned (into views) for each clustering solution. This feature partition works as feature selection for a particular clustering solution, which screens out irrelevant features. To make our method applicable to high-dimensional data, a co-clustering structure is newly introduced for each view. Further, the outstanding novelty of our method is that we simultaneously model different distribution families, such as Gaussian, Poisson, and multinomial distributions in each cluster block, which widens areas of application to real data. We apply the proposed method to synthetic and real data, and show that our method outperforms other multiple clustering methods both in recovering true cluster structures and in computation time. Finally, we apply our method to a depression dataset with no true cluster structure available, from which useful inferences are drawn about possible clustering structures of the data.

  10. Multiple co-clustering based on nonparametric mixture models with heterogeneous marginal distributions

    Science.gov (United States)

    Yoshimoto, Junichiro; Shimizu, Yu; Okada, Go; Takamura, Masahiro; Okamoto, Yasumasa; Yamawaki, Shigeto; Doya, Kenji

    2017-01-01

    We propose a novel method for multiple clustering, which is useful for analysis of high-dimensional data containing heterogeneous types of features. Our method is based on nonparametric Bayesian mixture models in which features are automatically partitioned (into views) for each clustering solution. This feature partition works as feature selection for a particular clustering solution, which screens out irrelevant features. To make our method applicable to high-dimensional data, a co-clustering structure is newly introduced for each view. Further, the outstanding novelty of our method is that we simultaneously model different distribution families, such as Gaussian, Poisson, and multinomial distributions in each cluster block, which widens areas of application to real data. We apply the proposed method to synthetic and real data, and show that our method outperforms other multiple clustering methods both in recovering true cluster structures and in computation time. Finally, we apply our method to a depression dataset with no true cluster structure available, from which useful inferences are drawn about possible clustering structures of the data. PMID:29049392

  11. A Unified Peer-to-Peer Database Framework for XQueries over Dynamic Distributed Content and its Application for Scalable Service Discovery

    CERN Document Server

    Hoschek, Wolfgang

    In a large distributed system spanning administrative domains such as a Grid, it is desirable to maintain and query dynamic and timely information about active participants such as services, resources and user communities. The web services vision promises that programs are made more flexible and powerful by querying Internet databases (registries) at runtime in order to discover information and network attached third-party building blocks. Services can advertise themselves and related metadata via such databases, enabling the assembly of distributed higher-level components. In support of this vision, this thesis shows how to support expressive general-purpose queries over a view that integrates autonomous dynamic database nodes from a wide range of distributed system topologies. We motivate and justify the assertion that realistic ubiquitous service and resource discovery requires a rich general-purpose query language such as XQuery or SQL. Next, we introduce the Web Service Discovery Architecture (WSDA), wh...

  12. Chloride Transport in Heterogeneous Formation

    Science.gov (United States)

    Mukherjee, A.; Holt, R. M.

    2017-12-01

    The chloride mass balance (CMB) is a commonly-used method for estimating groundwater recharge. Observations of the vertical distribution of pore-water chloride are related to the groundwater infiltration rates (i.e. recharge rates). In the CMB method, the chloride distribution is attributed mainly to the assumption of one-dimensional piston flow. In many places, however, the vertical distribution of chloride will be influenced by heterogeneity, leading to horizontal movement of infiltrating waters. The impact of heterogeneity will be particularly important when recharge is locally focused. When recharge is focused in an area, horizontal movement of chloride-bearing waters, coupled with upward movement driven by evapotranspiration, may lead to chloride bulges that could be misinterpreted if the CMB method is used to estimate recharge. We numerically simulate chloride transport and evaluate the validity of the CMB method in highly heterogeneous systems. This simulation is conducted for the unsaturated zone of the Ogallala, Antlers, and Gatuna (OAG) formations in Andrews County, Texas. A two-dimensional finite element model will show the movement of chloride through heterogeneous systems. We expect to see chloride bulges not only close to the surface but also at depths characterized by horizontal or upward movement. A comparative study of the focused recharge estimates from this study with available recharge data will be presented.
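
    Under the one-dimensional piston-flow assumption, the CMB recharge estimate reduces to R = P·Cl_precip / Cl_porewater. The sketch below applies this relation with assumed values; the numbers are illustrative and are not site data from the study.

```python
# Minimal sketch of the chloride mass balance (CMB) recharge estimate under the
# one-dimensional piston-flow assumption: R = P * Cl_precip / Cl_porewater.
# The numbers below are illustrative, not site data from the study.
def cmb_recharge(precip_mm_yr, cl_precip_mg_l, cl_porewater_mg_l):
    """Recharge rate in mm/yr from the chloride mass balance."""
    return precip_mm_yr * cl_precip_mg_l / cl_porewater_mg_l

P = 400.0        # mean annual precipitation (mm/yr), assumed
Cl_p = 0.6       # chloride concentration in precipitation (mg/L), assumed
Cl_uz = 120.0    # pore-water chloride below the root zone (mg/L), assumed

print(f"estimated recharge: {cmb_recharge(P, Cl_p, Cl_uz):.2f} mm/yr")
```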

  13. "Mr. Database" : Jim Gray and the History of Database Technologies.

    Science.gov (United States)

    Hanwahr, Nils C

    2017-12-01

    Although the widespread use of the term "Big Data" is comparatively recent, it invokes a phenomenon in the developments of database technology with distinct historical contexts. The database engineer Jim Gray, known as "Mr. Database" in Silicon Valley before his disappearance at sea in 2007, was involved in many of the crucial developments since the 1970s that constitute the foundation of exceedingly large and distributed databases. Jim Gray was involved in the development of relational database systems based on the concepts of Edgar F. Codd at IBM in the 1970s before he went on to develop principles of Transaction Processing that enable the parallel and highly distributed performance of databases today. He was also involved in creating forums for discourse between academia and industry, which influenced industry performance standards as well as database research agendas. As a co-founder of the San Francisco branch of Microsoft Research, Gray increasingly turned toward scientific applications of database technologies, e. g. leading the TerraServer project, an online database of satellite images. Inspired by Vannevar Bush's idea of the memex, Gray laid out his vision of a Personal Memex as well as a World Memex, eventually postulating a new era of data-based scientific discovery termed "Fourth Paradigm Science". This article gives an overview of Gray's contributions to the development of database technology as well as his research agendas and shows that central notions of Big Data have been occupying database engineers for much longer than the actual term has been in use.

  14. 1.15 - Structural Chemogenomics Databases to Navigate Protein–Ligand Interaction Space

    NARCIS (Netherlands)

    Kanev, G.K.; Kooistra, A.J.; de Esch, I.J.P.; de Graaf, C.

    2017-01-01

    Structural chemogenomics databases allow the integration and exploration of heterogeneous genomic, structural, chemical, and pharmacological data in order to extract useful information that is applicable for the discovery of new protein targets and biologically active molecules. Integrated databases

  15. A heterogeneous fleet vehicle routing model for solving the LPG distribution problem: A case study

    International Nuclear Information System (INIS)

    Onut, S; Kamber, M R; Altay, G

    2014-01-01

    The Vehicle Routing Problem (VRP) is an important management problem in the field of distribution and logistics. In VRPs, routes from a distribution point to geographically distributed points are designed with minimum cost while considering customer demands. All points should be visited only once and by one vehicle in one route. The total demand in one route should not exceed the capacity of the vehicle assigned to that route. VRPs vary due to real-life constraints related to vehicle types, number of depots, transportation conditions and time periods, etc. The heterogeneous fleet vehicle routing problem is a kind of VRP in which vehicles have different capacities and costs. There are two types of vehicles in our problem. In this study, real-world data obtained from a company operating in the LPG sector in Turkey are used. An optimization model is established for planning daily routes and assigning vehicles. The model is solved by GAMS and the optimal solution is found in a reasonable time.

  16. A heterogeneous fleet vehicle routing model for solving the LPG distribution problem: A case study

    Science.gov (United States)

    Onut, S.; Kamber, M. R.; Altay, G.

    2014-03-01

    The Vehicle Routing Problem (VRP) is an important management problem in the field of distribution and logistics. In VRPs, routes from a distribution point to geographically distributed points are designed with minimum cost while considering customer demands. All points should be visited only once and by one vehicle in one route. The total demand in one route should not exceed the capacity of the vehicle assigned to that route. VRPs vary due to real-life constraints related to vehicle types, number of depots, transportation conditions and time periods, etc. The heterogeneous fleet vehicle routing problem is a kind of VRP in which vehicles have different capacities and costs. There are two types of vehicles in our problem. In this study, real-world data obtained from a company operating in the LPG sector in Turkey are used. An optimization model is established for planning daily routes and assigning vehicles. The model is solved by GAMS and the optimal solution is found in a reasonable time.
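
    A toy heterogeneous-fleet instance can be sketched with a greedy nearest-feasible construction heuristic as below. The customer coordinates, demands, and vehicle data are hypothetical, and the heuristic only illustrates the problem structure; it is not the optimization model solved with GAMS in the study.

```python
# Toy sketch of a heterogeneous-fleet routing instance: customers are assigned
# greedily (nearest feasible) to routes served by two vehicle types with
# different capacities and costs. Simple construction heuristic for
# illustration only; not the GAMS optimization model of the paper.
import math

depot = (0.0, 0.0)
customers = {1: ((2, 3), 4), 2: ((5, 1), 6), 3: ((6, 5), 3),   # id: (xy, demand)
             4: ((1, 7), 5), 5: ((8, 2), 7)}
vehicle_types = [{"name": "small", "capacity": 10, "cost_per_km": 1.0},
                 {"name": "large", "capacity": 18, "cost_per_km": 1.6}]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

unserved = set(customers)
routes = []
for vt in vehicle_types * 3:                      # allow several vehicles per type
    if not unserved:
        break
    load, pos, route = 0, depot, []
    while True:
        feasible = [c for c in unserved if load + customers[c][1] <= vt["capacity"]]
        if not feasible:
            break
        nxt = min(feasible, key=lambda c: dist(pos, customers[c][0]))
        route.append(nxt); load += customers[nxt][1]
        pos = customers[nxt][0]; unserved.remove(nxt)
    if route:
        length = dist(depot, customers[route[0]][0]) + sum(
            dist(customers[a][0], customers[b][0]) for a, b in zip(route, route[1:])
        ) + dist(customers[route[-1]][0], depot)
        routes.append((vt["name"], route, round(length * vt["cost_per_km"], 2)))

print(routes)
```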

  17. Access Selection Algorithm of Heterogeneous Wireless Networks for Smart Distribution Grid Based on Entropy-Weight and Rough Set

    Science.gov (United States)

    Xiang, Min; Qu, Qinqin; Chen, Cheng; Tian, Li; Zeng, Lingkang

    2017-11-01

    To improve the reliability of communication service in the smart distribution grid (SDG), an access selection algorithm based on dynamic network status and different service types for heterogeneous wireless networks was proposed. The network performance index values were obtained in real time by a multimode terminal, and the variation trend of the index values was analyzed using the growth matrix. The index weights were calculated by the entropy-weight method and then modified by rough set theory to obtain the final weights. Grey relational analysis was then combined to rank the candidate networks, and the optimum communication network is selected. Simulation results show that the proposed algorithm can effectively implement dynamic access selection in heterogeneous wireless networks of the SDG and reduce the network blocking probability.
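
    The entropy-weight and grey relational steps can be sketched as follows; the decision matrix of candidate networks is hypothetical, and the rough-set correction of the weights described in the paper is omitted.

```python
# Sketch of entropy-weight scoring followed by grey relational analysis for
# ranking candidate networks. The decision matrix (rows = networks, columns =
# criteria, all benefit-type here) is hypothetical, and the rough-set weight
# correction step of the paper is omitted.
import numpy as np

# criteria: bandwidth, 1/delay, 1/packet loss, 1/cost  (higher is better)
X = np.array([[54.0, 1/20, 1/0.02, 1/5],    # WLAN
              [10.0, 1/50, 1/0.05, 1/3],    # WiMAX-like
              [2.0,  1/80, 1/0.10, 1/1]])   # cellular

# 1) entropy weights
P = X / X.sum(axis=0)
entropy = -(P * np.log(P)).sum(axis=0) / np.log(len(X))
weights = (1 - entropy) / (1 - entropy).sum()

# 2) grey relational coefficients against the ideal (column-wise maximum)
norm = X / X.max(axis=0)
ideal = norm.max(axis=0)
delta = np.abs(norm - ideal)
rho = 0.5
coeff = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())

# 3) weighted grey relational grade per network
grade = coeff @ weights
best = int(np.argmax(grade))
print("grades:", np.round(grade, 3), "-> select network", best)
```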

  18. Noninvasive In-Vivo Quantification of Mechanical Heterogeneity of Invasive Breast Carcinomas.

    Directory of Open Access Journals (Sweden)

    Tengxiao Liu

    Full Text Available Heterogeneity is a hallmark of cancer whether one considers the genotype of cancerous cells, the composition of their microenvironment, the distribution of blood and lymphatic microvasculature, or the spatial distribution of the desmoplastic reaction. It is logical to expect that this heterogeneity in tumor microenvironment will lead to spatial heterogeneity in its mechanical properties. In this study we seek to quantify the mechanical heterogeneity within malignant and benign tumors using ultrasound based elasticity imaging. By creating in-vivo elastic modulus images for ten human subjects with breast tumors, we show that Young's modulus distribution in cancerous breast tumors is more heterogeneous when compared with tumors that are not malignant, and that this signature may be used to distinguish malignant breast tumors. Our results complement the view of cancer as a heterogeneous disease on multiple length scales by demonstrating that mechanical properties within cancerous tumors are also spatially heterogeneous.

  19. Distributed open environment for data retrieval based on pattern recognition techniques

    International Nuclear Information System (INIS)

    Pereira, A.; Vega, J.; Castro, R.; Portas, A.

    2010-01-01

    Pattern recognition methods for data retrieval have been applied to fusion databases for the localization and extraction of similar waveforms within temporal evolution signals. In order to standardize the use of these methods, a distributed open environment has been designed. It is based on a client/server architecture that supports distribution, interoperability and portability between heterogeneous platforms. The server part is a single desktop application based on J2EE (Java 2 Enterprise Edition), which provides a mature standard framework and a modular architecture. It can handle transactions and concurrency of components that are deployed on JETTY, an embedded web container within the Java server application for providing HTTP services. The data management is based on Apache DERBY, a relational database engine also embedded on the same Java based solution. This encapsulation allows hiding of unnecessary details about the installation, distribution, and configuration of all these components but with the flexibility to create and allocate many databases on different servers. The DERBY network module increases the scope of the installed database engine by providing traditional Java database network connections (JDBC-TCP/IP). This avoids scattering several database engines (a single embedded engine defines the rules for accessing the distributed data). Java thin clients (Java 5 or above is the only requirement) can be executed on the same computer as the server program (for example a desktop computer), but server and client software can also be distributed in a remote participation environment (wide area networks). The thin client provides a graphic user interface to look for patterns (entire waveforms or specific structural forms) and display the most similar ones. This is obtained with HTTP requests and by generating dynamic content (servlets) in response to these client requests.

  20. Distributed Open Environment for Data Retrieval based on Pattern Recognition Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Pereira, A.; Vega, J.; Castro, R.; Portas, A. [Association EuratomCIEMAT para Fusion, Madrid (Spain)

    2009-07-01

    Full text of publication follows: Pattern recognition methods for data retrieval have been applied to fusion databases for the localization and extraction of similar waveforms within temporal evolution signals. In order to standardize the use of these methods, a distributed open environment has been designed. It is based on a client/server architecture that supports distribution, inter-operability and portability between heterogeneous platforms. The server part is a single desktop application based on J2EE, which provides a mature standard framework and a modular architecture. It can handle transactions and concurrency of components that are deployed on JETTY, an embedded web container within the Java server application for providing HTTP services. The data management is based on Apache DERBY, a relational database engine also embedded on the same Java based solution. This encapsulation allows concealment of unnecessary details about the installation, distribution, and configuration of all these components but with the flexibility to create and allocate many databases on different servers. The DERBY network module increases the scope of the installed database engine by providing traditional Java database network connections (JDBC-TCP/IP). This avoids scattering several database engines (a single embedded engine defines the rules for accessing the distributed data). Java thin clients (Java 5 or above is the only requirement) can be executed on the same computer as the server program (for example a desktop computer), but server and client software can also be distributed in a remote participation environment (wide area networks). The thin client provides a graphic user interface to look for patterns (entire waveforms or specific structural forms) and display the most similar ones. This is obtained with HTTP requests and by generating dynamic content (servlets) in response to these client requests. (authors)

  1. Distributed open environment for data retrieval based on pattern recognition techniques

    Energy Technology Data Exchange (ETDEWEB)

    Pereira, A., E-mail: augusto.pereira@ciemat.e [Asociacion EURATOM/CIEMAT para Fusion, CIEMAT, Edificio 66, Avda. Complutense, 22, 28040 Madrid (Spain); Vega, J.; Castro, R.; Portas, A. [Asociacion EURATOM/CIEMAT para Fusion, CIEMAT, Edificio 66, Avda. Complutense, 22, 28040 Madrid (Spain)

    2010-07-15

    Pattern recognition methods for data retrieval have been applied to fusion databases for the localization and extraction of similar waveforms within temporal evolution signals. In order to standardize the use of these methods, a distributed open environment has been designed. It is based on a client/server architecture that supports distribution, interoperability and portability between heterogeneous platforms. The server part is a single desktop application based on J2EE (Java 2 Enterprise Edition), which provides a mature standard framework and a modular architecture. It can handle transactions and concurrency of components that are deployed on JETTY, an embedded web container within the Java server application for providing HTTP services. The data management is based on Apache DERBY, a relational database engine also embedded on the same Java based solution. This encapsulation allows hiding of unnecessary details about the installation, distribution, and configuration of all these components but with the flexibility to create and allocate many databases on different servers. The DERBY network module increases the scope of the installed database engine by providing traditional Java database network connections (JDBC-TCP/IP). This avoids scattering several database engines (a single embedded engine defines the rules for accessing the distributed data). Java thin clients (Java 5 or above is the only requirement) can be executed on the same computer as the server program (for example a desktop computer), but server and client software can also be distributed in a remote participation environment (wide area networks). The thin client provides a graphic user interface to look for patterns (entire waveforms or specific structural forms) and display the most similar ones. This is obtained with HTTP requests and by generating dynamic content (servlets) in response to these client requests.
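
    The retrieval idea, comparing a query pattern against stored temporal-evolution signals and returning the most similar segments, can be sketched as follows. The synthetic signals and the plain Euclidean distance are illustrative assumptions and do not reproduce the feature-extraction method of the system described above.

```python
# Minimal sketch of similarity-based waveform retrieval: a query pattern is
# slid over each stored temporal-evolution signal and the best-matching
# segments are returned. Synthetic signals and plain Euclidean distance are
# illustrative only.
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 500)
database = {f"shot_{i}": np.sin(2 * np.pi * (3 + i) * t) + 0.1 * rng.normal(size=t.size)
            for i in range(5)}
query = np.sin(2 * np.pi * 5 * t[:100])          # pattern to look for

def best_match(signal, pattern):
    """Return (distance, offset) of the closest segment of `signal`."""
    n = pattern.size
    dists = [np.linalg.norm(signal[i:i + n] - pattern)
             for i in range(signal.size - n + 1)]
    i = int(np.argmin(dists))
    return dists[i], i

ranking = sorted(best_match(sig, query) + (name,) for name, sig in database.items())
for d, offset, name in ranking[:3]:
    print(f"{name}: distance {d:.2f} at sample {offset}")
```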

  2. Multicriterion problem of allocation of resources in the heterogeneous distributed information processing systems

    Science.gov (United States)

    Antamoshkin, O. A.; Kilochitskaya, T. R.; Ontuzheva, G. A.; Stupina, A. A.; Tynchenko, V. S.

    2018-05-01

    This study reviews the problem of allocation of resources in the heterogeneous distributed information processing systems, which may be formalized in the form of a multicriterion multi-index problem with the linear constraints of the transport type. The algorithms for solution of this problem suggest a search for the entire set of Pareto-optimal solutions. For some classes of hierarchical systems, it is possible to significantly speed up the procedure of verification of a system of linear algebraic inequalities for consistency due to the reducibility of them to the stream models or the application of other solution schemes (for strongly connected structures) that take into account the specifics of the hierarchies under consideration.

  3. Unobserved heterogeneity in the power law nonhomogeneous Poisson process

    International Nuclear Information System (INIS)

    Asfaw, Zeytu Gashaw; Lindqvist, Bo Henry

    2015-01-01

    A study of possible consequences of heterogeneity in the failure intensity of repairable systems is presented. The basic model studied is the nonhomogeneous Poisson process with power law intensity function. When several similar systems are under observation, the assumption that the corresponding processes are independent and identically distributed is often questionable. In practice there may be an unobserved heterogeneity among the systems. The heterogeneity is modeled by introducing unobserved gamma distributed frailties. The relevant likelihood function is derived, and maximum likelihood estimation is illustrated. In a simulation study we then compare results obtained with a power law model that ignores heterogeneity to the corresponding results obtained when the heterogeneity is accounted for. A motivating data example is also given. - Highlights: • Consequences of overlooking heterogeneity in similar repairable systems are studied. • Likelihood functions are established for power law NHPP with and without heterogeneity. • ML estimators for parameters of power law NHPP with heterogeneity are derived. • A simulation study shows the effects of heterogeneity and of ignoring it in the models
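
    As a pointer to the kind of model involved (standard parameterization; the paper's notation may differ), the power law NHPP has intensity and cumulative intensity

        \lambda(t) = \frac{\beta}{\theta}\Big(\frac{t}{\theta}\Big)^{\beta-1},
        \qquad
        \Lambda(t) = \Big(\frac{t}{\theta}\Big)^{\beta},

    and, assuming an unobserved gamma frailty Z_i with unit mean and variance \delta multiplying the intensity of system i, integrating Z_i out gives the marginal likelihood contribution of a system observed on [0, \tau_i] with n_i failures at times t_{i1} < \dots < t_{in_i}:

        L_i(\theta,\beta,\delta)
          = \Bigg[\prod_{j=1}^{n_i}\lambda(t_{ij})\Bigg]
            \frac{\Gamma(n_i + 1/\delta)}{\Gamma(1/\delta)}\,
            \frac{\delta^{\,n_i}}{\bigl(1 + \delta\,\Lambda(\tau_i)\bigr)^{\,n_i + 1/\delta}} .

    Letting \delta \to 0 recovers the ordinary power law NHPP likelihood without heterogeneity.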

  4. Distributed Service Discovery for Heterogeneous Wireless Sensor Networks

    NARCIS (Netherlands)

    Marin Perianu, Raluca; Scholten, Johan; Havinga, Paul J.M.

    Service discovery in heterogeneous Wireless Sensor Networks is a challenging research objective, due to the inherent limitations of sensor nodes and their extensive and dense deployment. The protocols proposed for ad hoc networks are too heavy for sensor environments. This paper presents a

  5. Heterogeneous continuous-time random walks

    Science.gov (United States)

    Grebenkov, Denis S.; Tupikina, Liubov

    2018-01-01

    We introduce a heterogeneous continuous-time random walk (HCTRW) model as a versatile analytical formalism for studying and modeling diffusion processes in heterogeneous structures, such as porous or disordered media, multiscale or crowded environments, weighted graphs or networks. We derive the exact form of the propagator and investigate the effects of spatiotemporal heterogeneities on the diffusive dynamics via the spectral properties of the generalized transition matrix. In particular, we show how the distribution of first-passage times changes due to local and global heterogeneities of the medium. The HCTRW formalism offers a unified mathematical language to address various diffusion-reaction problems, with numerous applications in material sciences, physics, chemistry, biology, and social sciences.

  6. [Micro-simulation of firms' heterogeneity on pollution intensity and regional characteristics].

    Science.gov (United States)

    Zhao, Nan; Liu, Yi; Chen, Ji-Ning

    2009-11-01

    Within the same industrial sector, pollution intensity is heterogeneous across firms. Errors therefore arise when the sector's average pollution intensity, calculated from the limited number of firms in the environmental statistics database, is used to represent the sector's regional economic-environmental status. Based on a production function that includes environmental depletion as an input, a micro-simulation model of firms' operational decision making is proposed, which describes the heterogeneity of firms' pollution intensity mechanistically. Taking the mechanical manufacturing sector of Deyang city in 2005 as the case, the model's parameters were estimated, and the actual COD emission intensities of the firms in the environmental statistics database were properly matched by the simulation. The model's results also show that the regional average COD emission intensity calculated from the environmental statistics firms (0.0026 t per 10,000 yuan of fixed assets, 0.0015 t per 10,000 yuan of production value) is lower than the regional average intensity calculated from all firms in the region (0.0030 t per 10,000 yuan of fixed assets, 0.0023 t per 10,000 yuan of production value). The differences among the average intensities of the six counties are significant as well. These regional characteristics of pollution intensity are attributable to the sector's inner structure (the distributions of firm scale and technology) and its spatial variation.

  7. Elucidating the impact of micro-scale heterogeneous bacterial distribution on biodegradation

    Science.gov (United States)

    Schmidt, Susanne I.; Kreft, Jan-Ulrich; Mackay, Rae; Picioreanu, Cristian; Thullner, Martin

    2018-06-01

    Groundwater microorganisms hardly ever cover the solid matrix uniformly; instead they form micro-scale colonies. To what extent such colony formation limits the bioavailability and biodegradation of a substrate is poorly understood. We used a high-resolution numerical model of a single pore channel inhabited by bacterial colonies to simulate the transport and biodegradation of organic substrates. These high-resolution 2D simulation results were compared to 1D simulations that were based on effective rate laws for bioavailability-limited biodegradation. We (i) quantified the observed bioavailability limitations and (ii) evaluated the applicability of previously established effective rate concepts when microorganisms are heterogeneously distributed. Effective bioavailability reductions of up to more than one order of magnitude were observed, showing that the micro-scale aggregation of bacterial cells into colonies can severely restrict the bioavailability of a substrate and reduce in situ degradation rates. Effective rate laws proved applicable for upscaling when using the introduced effective colony sizes.

  8. A dynamic Brownian bridge movement model to estimate utilization distributions for heterogeneous animal movement.

    Science.gov (United States)

    Kranstauber, Bart; Kays, Roland; Lapoint, Scott D; Wikelski, Martin; Safi, Kamran

    2012-07-01

    1. The recently developed Brownian bridge movement model (BBMM) has advantages over traditional methods because it quantifies the utilization distribution of an animal based on its movement path rather than individual points and accounts for temporal autocorrelation and high data volumes. However, the BBMM assumes unrealistic homogeneous movement behaviour across all data. 2. Accurate quantification of the utilization distribution is important for identifying the way animals use the landscape. 3. We improve the BBMM by allowing for changes in behaviour, using likelihood statistics to determine change points along the animal's movement path. 4. This novel extension outperforms the current BBMM, as indicated by simulations and examples of a territorial mammal and a migratory bird. The unique ability of our model to work with tracks that are not sampled regularly is especially important for GPS tags that have frequent failed fixes or dynamic sampling schedules. Moreover, our model extension provides a useful one-dimensional measure of behavioural change along animal tracks. 5. This new method provides a more accurate utilization distribution that better describes the space use of realistic, behaviourally heterogeneous tracks. © 2012 The Authors. Journal of Animal Ecology © 2012 British Ecological Society.
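
    For orientation, the Brownian bridge density underlying the BBMM (standard formulation following Horne et al.; the dynamic extension described here lets the Brownian motion variance \sigma_m^2 change between behavioural segments): between two observed locations a at time t_a and b at time t_b, with location errors \delta_a and \delta_b, the animal's position at time t is modelled as Gaussian with

        \mu(t) = a + \alpha\,(b - a), \qquad \alpha = \frac{t - t_a}{t_b - t_a},

        \sigma^2(t) = (t_b - t_a)\,\alpha(1 - \alpha)\,\sigma_m^2
                      + (1 - \alpha)^2\,\delta_a^2 + \alpha^2\,\delta_b^2 .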

  9. A dedicated database system for handling multi-level data in systems biology.

    Science.gov (United States)

    Pornputtapong, Natapol; Wanichthanarak, Kwanjeera; Nilsson, Avlant; Nookaew, Intawat; Nielsen, Jens

    2014-01-01

    Advances in high-throughput technologies have enabled extensive generation of multi-level omics data. These data are crucial for systems biology research, though they are complex, heterogeneous, highly dynamic, incomplete and distributed among public databases. This leads to difficulties in data accessibility and often results in errors when data are merged and integrated from varied resources. Therefore, integration and management of systems biological data remain very challenging. To overcome this, we designed and developed a dedicated database system that can serve and solve the vital issues in data management and thereby facilitate data integration, modeling and analysis in systems biology within a sole database. In addition, a yeast data repository was implemented as an integrated database environment which is operated by the database system. Two applications were implemented to demonstrate the extensibility and utilization of the system. Both illustrate how the user can access the database via the web query function and implemented scripts. These scripts are specific for two sample cases: 1) Detecting the pheromone pathway in protein interaction networks; and 2) Finding metabolic reactions regulated by Snf1 kinase. In this study we present the design of a database system which offers an extensible environment to efficiently capture the majority of biological entities and relations encountered in systems biology. Critical functions and control processes were designed and implemented to ensure consistent, efficient, secure and reliable transactions. The two sample cases on the yeast integrated data clearly demonstrate the value of a sole database environment for systems biology research.

  10. Spatial heterogeneity analysis of brain activation in fMRI

    Directory of Open Access Journals (Sweden)

    Lalit Gupta

    2014-01-01

    In many brain diseases it can be qualitatively observed that spatial patterns in blood oxygenation level dependent (BOLD) activation maps appear more (diffusively) distributed than in healthy controls. However, measures that can quantitatively characterize this spatial distributiveness in individual subjects are lacking. In this study, we propose a number of spatial heterogeneity measures to characterize brain activation maps. The proposed methods focus on different aspects of heterogeneity, including the shape (compactness), the complexity in the distribution of activated regions (fractal dimension and co-occurrence matrix), and the gappiness between activated regions (lacunarity). To this end, functional MRI derived activation maps of a language and a motor task were obtained in language-impaired children with (Rolandic) epilepsy and compared to age-matched healthy controls. Group analysis of the activation maps revealed no significant differences between patients and controls for both tasks. However, for the language task the activation maps in patients appeared more heterogeneous than in controls. Lacunarity was the best measure to discriminate activation patterns of patients from controls (sensitivity 74%, specificity 70%) and illustrates the increased irregularity of gaps between activated regions in patients. The combination of heterogeneity measures and a support vector machine approach yielded a further increase in sensitivity and specificity to 78% and 80%, respectively. This illustrates that activation distributions in impaired brains can be complex and more heterogeneous than in normal brains and cannot be captured fully by a single quantity. In conclusion, heterogeneity analysis has the potential to robustly characterize the increased distributiveness of brain activation in individual patients.

  11. Determination of stress distribution in III-V single crystal layers for heterogeneous integration applications

    Energy Technology Data Exchange (ETDEWEB)

    Jackson, M.; Hayashi, S. [Dept. of Materials Science and Engineering, University of California, Los Angeles, CA 90095 (United States); Goorsky, M.S.; Sandhu, R.; Chang-Chien, P.; Gutierrez-Aitken, A.; Tsai, R. [Northrop Grumman Space Technology, Redondo Beach, CA 90278 (United States); Noori, A.; Poust, B. [Dept. of Materials Science and Engineering, University of California, Los Angeles, CA 90095 (United States); Northrop Grumman Space Technology, Redondo Beach, CA 90278 (United States)

    2007-08-15

    Double crystal X-ray diffraction imaging and a variable temperature stage are employed to determine the stress distribution in heterogeneous wafer bonded layers through the superposition of images produced at different rocking curve angles. The stress distribution in InP layers transferred to a silicon substrate at room temperature exhibits an anticlastic deformation, with different regions of the wafer experiencing different signs of curvature. Measurements at elevated temperatures (≤125 °C) reveal that differences in thermal expansion coefficients dominate the stress and that interfacial particulates introduce very high local stress gradients that increase with increasing temperature. For thinned GaAs substrates (100 µm) bonded using patterned metal interlayers to a separate GaAs substrate at approximately 200 °C, residual stresses are produced at room temperature due to local stress points from metallization contacts and vias, and the complex stress patterns can be observed using the diffraction imaging technique. (copyright 2007 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  12. Determination of the relative power density distribution in a heterogeneous reactor from the results of measurements of the reactivity effects and the neutron importance function

    International Nuclear Information System (INIS)

    Bobrov, A. A.; Glushkov, E. S.; Zimin, A. A.; Kapitonova, A. V.; Kompaniets, G. V.; Nosov, V. I.; Petrushenko, R. P.; Smirnov, O. N.

    2012-01-01

    A method for experimental determination of the relative power density distribution in a heterogeneous reactor based on measurements of fuel reactivity effects and importance of neutrons from a californium source is proposed. The method was perfected on two critical assembly configurations at the NARCISS facility of the Kurchatov Institute, which simulated a small-size heterogeneous nuclear reactor. The neutron importance measurements were performed on subcritical and critical assemblies. It is shown that, along with traditionally used activation methods, the developed method can be applied to experimental studies of special features of the power density distribution in critical assemblies and reactors.

  13. Producing Distribution Maps for a Spatially-Explicit Ecosystem Model Using Large Monitoring and Environmental Databases and a Combination of Interpolation and Extrapolation

    Directory of Open Access Journals (Sweden)

    Arnaud Grüss

    2018-01-01

    To be able to simulate spatial patterns of predator-prey interactions, many spatially-explicit ecosystem modeling platforms, including Atlantis, need to be provided with distribution maps defining the annual or seasonal spatial distributions of functional groups and life stages. We developed a methodology combining extrapolation and interpolation of the predictions made by statistical habitat models to produce distribution maps for the fish and invertebrates represented in the Atlantis model of the Gulf of Mexico (GOM) Large Marine Ecosystem (LME) (“Atlantis-GOM”). This methodology consists of: (1) compiling a large monitoring database, gathering all the fisheries-independent and fisheries-dependent data collected in the northern (U.S.) GOM since 2000; (2) compiling a large environmental database, storing all the environmental parameters known to influence the spatial distribution patterns of fish and invertebrates of the GOM; (3) fitting binomial generalized additive models (GAMs) to the large monitoring and environmental databases, and geostatistical binomial generalized linear mixed models (GLMMs) to the large monitoring database; and (4) employing GAM predictions to infer spatial distributions in the southern GOM, and GLMM predictions to infer spatial distributions in the U.S. GOM. Thus, our methodology allows for reasonable extrapolation in the southern GOM based on a large amount of monitoring and environmental data, and for interpolation in the U.S. GOM accurately reflecting the probability of encountering fish and invertebrates in that region. We used an iterative cross-validation procedure to validate GAMs. When a GAM did not pass the validation test, we employed a GAM for a related functional group/life stage to generate distribution maps for the southern GOM. In addition, no geostatistical GLMMs were fit for the functional groups and life stages whose depth, longitudinal and latitudinal ranges within the U.S. GOM are not entirely covered by
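
    As a reminder of the model classes named above (generic forms only; the covariates listed are illustrative, not the exact set used in the study), a binomial GAM for the probability p of encountering a group at a sampling location has the form

        \operatorname{logit}(p) = \beta_0 + f_1(\mathrm{depth}) + f_2(\mathrm{temperature}) + f_3(\mathrm{longitude}, \mathrm{latitude}) + \cdots,

    with smooth functions f_j estimated from the monitoring and environmental databases, while a geostatistical binomial GLMM typically replaces the smooths with fixed effects plus a spatially correlated random effect.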

  14. Checkpointing and Recovery in Distributed and Database Systems

    Science.gov (United States)

    Wu, Jiang

    2011-01-01

    A transaction-consistent global checkpoint of a database records a state of the database which reflects the effect of only completed transactions and not the results of any partially executed transactions. This thesis establishes the necessary and sufficient conditions for a checkpoint of a data item (or the checkpoints of a set of data items) to…

  15. A Distributed Dynamic Super Peer Selection Method Based on Evolutionary Game for Heterogeneous P2P Streaming Systems

    Directory of Open Access Journals (Sweden)

    Jing Chen

    2013-01-01

    Due to its high efficiency and good scalability, hierarchical hybrid P2P architecture has drawn more and more attention in P2P streaming research and application fields recently. The problem of super peer selection, which is the key problem in hybrid heterogeneous P2P architectures, is highly challenging because super peers must be selected from a huge and dynamically changing network. A distributed super peer selection (SPS) algorithm for hybrid heterogeneous P2P streaming systems based on evolutionary game theory is proposed in this paper. The super peer selection procedure is first modeled within an evolutionary game framework and its evolutionarily stable strategies (ESSs) are analyzed. A distributed Q-learning algorithm (ESS-SPS), based on the mixed strategies obtained from this analysis, is then proposed for the peers to converge to the ESSs using their own payoff histories. Compared to the traditional random super peer selection scheme, experimental results show that the proposed ESS-SPS algorithm achieves better performance in terms of social welfare and average upload rate of super peers and keeps the upload capacity of the P2P streaming system increasing steadily as the number of peers increases.
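
    A minimal, generic sketch of the kind of per-peer Q-learning update such a scheme relies on (this is not the paper's ESS-SPS algorithm; the two actions, the payoff model and all parameter values are made up for illustration):

        import java.util.Random;

        public class SuperPeerQLearningSketch {
            // Two candidate actions for a peer: stay an ordinary peer or act as a super peer.
            static final int ORDINARY = 0, SUPER = 1;

            public static void main(String[] args) {
                double[] q = new double[2];   // Q-values for the two actions
                double alpha = 0.1;           // learning rate
                double epsilon = 0.1;         // exploration probability
                Random rnd = new Random(42);

                for (int round = 0; round < 10_000; round++) {
                    // Epsilon-greedy action selection from the current Q-values.
                    int action = (rnd.nextDouble() < epsilon || q[0] == q[1])
                            ? rnd.nextInt(2)
                            : (q[SUPER] > q[ORDINARY] ? SUPER : ORDINARY);

                    // Illustrative payoff: acting as a super peer yields higher utility
                    // but pays an upload-capacity cost (all numbers are made up).
                    double payoff = (action == SUPER)
                            ? 1.0 - 0.4 + 0.1 * rnd.nextGaussian()
                            : 0.5 + 0.1 * rnd.nextGaussian();

                    // Stateless Q-learning update towards the observed payoff.
                    q[action] += alpha * (payoff - q[action]);
                }
                System.out.printf("Q(ordinary)=%.3f  Q(super)=%.3f%n", q[ORDINARY], q[SUPER]);
            }
        }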

  16. Generic Entity Resolution in Relational Databases

    Science.gov (United States)

    Sidló, Csaba István

    Entity Resolution (ER) covers the problem of identifying distinct representations of real-world entities in heterogeneous databases. We consider the generic formulation of ER problems (GER) with exact outcome. In practice, input data usually resides in relational databases and can grow to huge volumes. Yet, typical solutions described in the literature employ standalone, memory-resident algorithms. In this paper we utilize facilities of standard, unmodified relational database management systems (RDBMS) to enhance the efficiency of GER algorithms. We study and revise the problem formulation, and propose practical and efficient algorithms optimized for RDBMS external memory processing. We outline a real-world scenario and demonstrate the advantage of the algorithms by performing experiments on insurance customer data.
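
    A toy illustration of pushing part of the entity resolution work into the RDBMS itself instead of a standalone memory-resident program (this is not one of the paper's GER algorithms): candidate duplicates are "blocked" with a GROUP BY executed inside an embedded Derby instance. The table, columns and data are hypothetical, and derby.jar is assumed to be on the classpath.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class BlockingInRdbmsSketch {
            public static void main(String[] args) throws Exception {
                // In-memory Derby database; in a real setting this would be the existing RDBMS.
                try (Connection c = DriverManager.getConnection("jdbc:derby:memory:erDemo;create=true");
                     Statement st = c.createStatement()) {

                    st.executeUpdate("CREATE TABLE customer(id INT PRIMARY KEY, name VARCHAR(64), zip VARCHAR(8))");
                    st.executeUpdate("INSERT INTO customer VALUES (1, 'J. Smith',  '12345')");
                    st.executeUpdate("INSERT INTO customer VALUES (2, 'John Smith','12345')");
                    st.executeUpdate("INSERT INTO customer VALUES (3, 'A. Jones',  '99999')");

                    // Blocking: only records sharing a zip code are candidate duplicates,
                    // so expensive pairwise comparison is restricted to these blocks.
                    String blocking =
                        "SELECT zip, COUNT(*) AS n FROM customer GROUP BY zip HAVING COUNT(*) > 1";
                    try (ResultSet rs = st.executeQuery(blocking)) {
                        while (rs.next()) {
                            System.out.println("candidate block: zip=" + rs.getString("zip")
                                    + " (" + rs.getInt("n") + " records)");
                        }
                    }
                }
            }
        }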

  17. Efficiency effects of observed and unobserved heterogeneity: Evidence from Norwegian electricity distribution networks

    International Nuclear Information System (INIS)

    Growitsch, Christian; Jamasb, Tooraj; Wetzel, Heike

    2012-01-01

    Since the 1990s, efficiency and benchmarking analysis has increasingly been used in network utilities research and regulation. A recurrent concern is the effect of observable environmental factors that are beyond the influence of firms, and of unobserved factors that are not identifiable, on the measured cost and quality performance of firms. This paper analyses the effect of observed geographic and weather factors and unobserved heterogeneity on a set of 128 Norwegian electricity distribution utilities for the 2001–2004 period. We utilise data on 78 geographic and weather variables to identify real economic inefficiency while controlling for observed and unobserved heterogeneity. We use the Factor Analysis technique to reduce the number of environmental factors into a few composite variables and to avoid the problem of multicollinearity. In order to identify firm-specific inefficiency, we then estimate a pooled version of the established stochastic frontier model of Aigner et al. (1977) and the recent true random effects model of Greene (2004; 2005a,b), without and with environmental variables. The results indicate that the observed environmental factors have a rather limited influence on the utilities' average efficiency and the efficiency rankings. Moreover, the difference between the average efficiency scores and the efficiency rankings among the pooled and the true random effects models implies that the type of SFA model used strongly influences the efficiency estimates.

  18. Intratumor heterogeneity alters most effective drugs in designed combinations.

    Science.gov (United States)

    Zhao, Boyang; Hemann, Michael T; Lauffenburger, Douglas A

    2014-07-22

    The substantial spatial and temporal heterogeneity observed in patient tumors poses considerable challenges for the design of effective drug combinations with predictable outcomes. Currently, the implications of tissue heterogeneity and sampling bias during diagnosis are unclear for selection and subsequent performance of potential combination therapies. Here, we apply a multiobjective computational optimization approach integrated with empirical information on efficacy and toxicity for individual drugs with respect to a spectrum of genetic perturbations, enabling derivation of optimal drug combinations for heterogeneous tumors comprising distributions of subpopulations possessing these perturbations. Analysis across probabilistic samplings from the spectrum of various possible distributions reveals that the most beneficial (considering both efficacy and toxicity) set of drugs changes as the complexity of genetic heterogeneity increases. Importantly, a significant likelihood arises that a drug selected as the most beneficial single agent with respect to the predominant subpopulation in fact does not reside within the most broadly useful drug combinations for heterogeneous tumors. The underlying explanation appears to be that heterogeneity essentially homogenizes the benefit of drug combinations, reducing the special advantage of a particular drug on a specific subpopulation. Thus, this study underscores the importance of considering heterogeneity in choosing drug combinations and offers a principled approach toward designing the most likely beneficial set, even if the subpopulation distribution is not precisely known.

  19. Study on managing EPICS database using ORACLE

    International Nuclear Information System (INIS)

    Liu Shu; Wang Chunhong; Zhao Jijiu

    2007-01-01

    EPICS is used as a development toolkit for the BEPCII control system. The core of EPICS is a distributed database residing in front-end machines. The distributed database is usually created with tools such as VDCT or a text editor on the host, then loaded into the front-end target IOCs through the network. In the BEPCII control system there are about 20,000 signals, which are distributed over more than 20 IOCs. All the databases are developed by device control engineers using VDCT or a text editor. There are no uniform tools providing transparent management. The paper firstly presents the current status of EPICS database management issues in many labs. Secondly, it studies the EPICS database and the interface between ORACLE and the EPICS database. Finally, it introduces the software development and its application in the BEPCII control system. (authors)

  20. Study on parallel and distributed management of RS data based on spatial database

    Science.gov (United States)

    Chen, Yingbiao; Qian, Qinglan; Wu, Hongqiao; Liu, Shijin

    2009-10-01

    With the rapid development of current earth-observing technology, RS (remote sensing) image data storage, management and information publication have become a bottleneck for its application and popularization. There are two prominent problems in RS image data storage and management systems. First, a background server can hardly handle the heavy processing of the great volume of RS data stored at different nodes in a distributed environment; a heavy burden is placed on the background server. Second, there is no unique, standard and rational organization of multi-sensor RS data for its storage and management, and much information is lost or not included at storage time. Facing these two problems, the paper puts forward a framework for a parallel and distributed RS image data management and storage system. This system aims at an RS data information system based on a parallel background server and a distributed data management system. Aiming at the above two goals, this paper has studied the following key techniques and reached some instructive conclusions. The paper puts forward a solid index of "Pyramid, Block, Layer, Epoch" according to the properties of RS image data. With this solid index mechanism, a rational organization of multi-sensor RS image data of different resolutions, areas, bands and periods is achieved. For data storage, RS data are not divided into binary large objects to be stored in a current relational database system; instead they are reconstructed through the above solid index mechanism, and a logical image database for the RS image data files is constructed. In system architecture, this paper sets up a framework based on a parallel server of several common computers. Under this framework, the background process is divided into two parts, the common WEB process and the parallel process.
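
    A minimal sketch of what such a four-dimensional "solid index" key might look like as a data structure (the field names and key encoding are illustrative, not taken from the paper):

        import java.util.Objects;

        /** Illustrative index key for a tile of multi-sensor RS imagery. */
        public final class SolidIndexKey {
            private final int pyramidLevel; // resolution level in the image pyramid
            private final long blockId;     // spatial block (tile) within that level
            private final int layer;        // band or sensor layer
            private final int epoch;        // acquisition period

            public SolidIndexKey(int pyramidLevel, long blockId, int layer, int epoch) {
                this.pyramidLevel = pyramidLevel;
                this.blockId = blockId;
                this.layer = layer;
                this.epoch = epoch;
            }

            @Override
            public boolean equals(Object o) {
                if (!(o instanceof SolidIndexKey)) return false;
                SolidIndexKey k = (SolidIndexKey) o;
                return pyramidLevel == k.pyramidLevel && blockId == k.blockId
                        && layer == k.layer && epoch == k.epoch;
            }

            @Override
            public int hashCode() {
                return Objects.hash(pyramidLevel, blockId, layer, epoch);
            }

            @Override
            public String toString() {
                return "P" + pyramidLevel + "/B" + blockId + "/L" + layer + "/E" + epoch;
            }
        }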

  1. Renewal-anomalous-heterogeneous files

    International Nuclear Information System (INIS)

    Flomenbom, Ophir

    2010-01-01

    Renewal-anomalous-heterogeneous files are solved. A simple file is made of Brownian hard spheres that diffuse stochastically in an effective 1D channel. Generally, Brownian files are heterogeneous: the spheres' diffusion coefficients are distributed and the initial spheres' density is non-uniform. In renewal-anomalous files, the distribution of waiting times for individual jumps is not exponential as in Brownian files, but instead obeys ψ_α(t) ∼ t^(−1−α), 0 < α < 1. The mean square displacement (MSD), ⟨r²⟩, obeys ⟨r²⟩ ∼ (⟨r²⟩_nrml)^α, where ⟨r²⟩_nrml is the MSD in the corresponding Brownian file. This scaling is an outcome of an exact relation (derived here) connecting probability density functions of Brownian files and renewal-anomalous files. It is also shown that non-renewal-anomalous files are slower than the corresponding renewal ones.

  2. The 2005 Tarapaca, Chile, Intermediate-depth Earthquake: Evidence of Heterogeneous Fluid Distribution Across the Plate?

    Science.gov (United States)

    Kuge, K.; Kase, Y.; Urata, Y.; Campos, J.; Perez, A.

    2008-12-01

    The physical mechanism of intermediate-depth earthquakes remains unsolved, and dehydration embrittlement in subducting plates is a candidate. An earthquake of Mw7.8 occurred at a depth of 115 km beneath Tarapaca, Chile. In this study, we suggest that the earthquake rupture can be attributed to heterogeneous fluid distribution across the subducting plate. The distribution of aftershocks suggests that the earthquake occurred on the subhorizontal fault plane. By modeling regional waveforms, we determined the spatiotemporal distribution of moment release on the fault plane, testing a different suite of velocity models and hypocenters. Two patches of high slip were robustly obtained, although their geometry tends to vary. We tested the results separately by computing the synthetic teleseismic P and pP waveforms. Observed P waveforms are generally modeled, whereas two pulses of observed pP require that the two patches are in the WNW-ESE direction. From the selected moment-release evolution, the dynamic rupture model was constructed by means of Mikumo et al. (1998). The model shows two patches of high dynamic stress drop. Notable is a region of negative stress drop between the two patches. This was required so that the region could lack wave radiation but propagate rupture from the first to the second patches. We found from teleseismic P that the radiation efficiency of the earthquake is relatively small, which can support the existence of negative stress drop during the rupture. The heterogeneous distribution of stress drop that we found can be caused by fluid. The T-P condition of dehydration explains the locations of double seismic zones (e.g. Hacker et al., 2003). The distance between the two patches of high stress drop agrees with the distance between the upper and lower layers of the double seismic zone observed in the south (Rietbrock and Waldhauser, 2004). The two patches can be parts of the double seismic zone, indicating the existence of fluid from dehydration

  3. Data-mining analysis of the global distribution of soil carbon in observational databases and Earth system models

    Science.gov (United States)

    Hashimoto, Shoji; Nanko, Kazuki; Ťupek, Boris; Lehtonen, Aleksi

    2017-03-01

    Future climate change will dramatically change the carbon balance in the soil, and this change will affect the terrestrial carbon stock and the climate itself. Earth system models (ESMs) are used to understand the current climate and to project future climate conditions, but the soil organic carbon (SOC) stock simulated by ESMs and those of observational databases are not well correlated when the two are compared at fine grid scales. However, the specific key processes and factors, as well as the relationships among these factors that govern the SOC stock, remain unclear; the inclusion of such missing information would improve the agreement between modeled and observational data. In this study, we sought to identify the influential factors that govern global SOC distribution in observational databases, as well as those simulated by ESMs. We used a data-mining (machine-learning) scheme, boosted regression trees (BRT), to identify the factors affecting the SOC stock. We applied the BRT scheme to three observational databases and 15 ESM outputs from the fifth phase of the Coupled Model Intercomparison Project (CMIP5) and examined the effects of 13 variables/factors categorized into five groups (climate, soil property, topography, vegetation, and land-use history). Globally, the contributions of mean annual temperature, clay content, carbon-to-nitrogen (CN) ratio, wetland ratio, and land cover were high in the observational databases, whereas the contributions of mean annual temperature, land cover, and net primary productivity (NPP) were predominant in the SOC distribution in ESMs. A comparison of the influential factors at a global scale revealed that the most distinct differences between the SOCs from the observational databases and ESMs were the low clay content and CN ratio contributions, and the high NPP contribution, in the ESMs. The results of this study will aid in identifying the causes of the current mismatches between observational SOC databases and ESM outputs

  4. Heterogeneous Rock Simulation Using DIP-Micromechanics-Statistical Methods

    Directory of Open Access Journals (Sweden)

    H. Molladavoodi

    2018-01-01

    Rock as a natural material is heterogeneous. Rock material consists of minerals, crystals, cement, grains, and microcracks. Each component of rock has a different mechanical behavior under an applied loading condition. Therefore, the distribution of rock components has an important effect on rock mechanical behavior, especially in the post-peak region. In this paper, the rock sample was studied by digital image processing (DIP), micromechanics, and statistical methods. Using image processing, the volume fractions of the minerals composing the rock sample were evaluated precisely. The mechanical properties of the rock matrix were determined based on upscaling micromechanics. In order to consider the effect of rock heterogeneities on mechanical behavior, a heterogeneity index was calculated within the framework of a statistical method. A Weibull distribution function was fitted to the Young's modulus distribution of the minerals. Finally, statistical and Mohr–Coulomb strain-softening models were used simultaneously as the constitutive model in a DEM code. The acoustic emission, strain energy release, and the effect of rock heterogeneities on the post-peak behavior were investigated. The numerical results are in good agreement with experimental data.

  5. Next Generation Workload Management System For Big Data on Heterogeneous Distributed Computing

    International Nuclear Information System (INIS)

    Klimentov, A; Maeno, T; Nilsson, P; Panitkin, S; Wenaus, T; Buncic, P; De, K; Oleynik, D; Petrosyan, A; Jha, S; Mount, R; Porter, R J; Read, K F; Wells, J C; Vaniachine, A

    2015-01-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS and ALICE are the largest collaborations ever assembled in the sciences and are at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, both experiments rely on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System (WMS) for managing the workflow for all data processing on hundreds of data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. The scale is demonstrated by the following numbers: PanDA manages O(10²) sites, O(10⁵) cores, O(10⁸) jobs per year and O(10³) users, and the ATLAS data volume is O(10¹⁷) bytes. In 2013 we started an ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF). The project titled ‘Next Generation Workload Management and Analysis System for Big Data’ (BigPanDA) is funded by DOE ASCR and HEP. Extending PanDA to clouds and LCF presents new challenges in managing heterogeneity and supporting workflow. The BigPanDA project is underway to set up and tailor PanDA at the Oak Ridge Leadership Computing Facility (OLCF) and at the National Research Center 'Kurchatov Institute' together with ALICE distributed computing and ORNL computing professionals. Our approach to integration of HPC platforms at the OLCF and elsewhere is to reuse, as much as possible, existing components of the

  6. Fine-Scale Spatial Heterogeneity in the Distribution of Waterborne Protozoa in a Drinking Water Reservoir.

    Science.gov (United States)

    Burnet, Jean-Baptiste; Ogorzaly, Leslie; Penny, Christian; Cauchie, Henry-Michel

    2015-09-23

    The occurrence of faecal pathogens in drinking water resources constitutes a threat to the supply of safe drinking water, even in industrialized nations. To efficiently assess and monitor the risk posed by these pathogens, sampling deserves careful design, based on preliminary knowledge on their distribution dynamics in water. For the protozoan pathogens Cryptosporidium and Giardia, only little is known about their spatial distribution within drinking water supplies, especially at fine scale. Two-dimensional distribution maps were generated by sampling cross-sections at meter resolution in two different zones of a drinking water reservoir. Samples were analysed for protozoan pathogens as well as for E. coli, turbidity and physico-chemical parameters. Parasites displayed heterogeneous distribution patterns, as reflected by significant (oo)cyst density gradients along reservoir depth. Spatial correlations between parasites and E. coli were observed near the reservoir inlet but were absent in the downstream lacustrine zone. Measurements of surface and subsurface flow velocities suggest a role of local hydrodynamics on these spatial patterns. This fine-scale spatial study emphasizes the importance of sampling design (site, depth and position on the reservoir) for the acquisition of representative parasite data and for optimization of microbial risk assessment and monitoring. Such spatial information should prove useful to the modelling of pathogen transport dynamics in drinking water supplies.

  7. The heterogeneous gas with singular interaction: generalized circular law and heterogeneous renormalized energy

    International Nuclear Information System (INIS)

    Molino, Luis Carlos García del; Pakdaman, Khashayar; Touboul, Jonathan

    2015-01-01

    We introduce and analyze d-dimensional Coulomb gases with random charge distribution and general external confining potential. We show that these gases satisfy a large-deviation principle. The analysis of the minima of the rate function (which is the leading term of the energy) reveals that, at equilibrium, the particle distribution is a generalized circular law (i.e. with spherical support but not necessarily uniform distribution). In the classical electrostatic external potential, there are infinitely many minimizers of the rate function. The most likely macroscopic configuration is a disordered distribution in which particles are uniformly distributed (for d = 2, the circular law), and charges are independent of the positions of the particles. General charge-dependent confining potentials unfold this degenerate situation: in contrast, the particle density is not uniform, and particles spontaneously organize according to their charge. In this picture the classical electrostatic potential appears as a transition at which order is lost. Sub-leading terms of the energy are derived: we show that these are related to an operator, generalizing the Coulomb renormalized energy, which incorporates the heterogeneous nature of the charges. This heterogeneous renormalized energy informs us about the microscopic arrangements of the particles, which are non-standard, strongly dependent on the charges, and include progressive and irregular lattices. (paper)

  8. Datamining on distributed medical databases

    DEFF Research Database (Denmark)

    Have, Anna Szynkowiak

    2004-01-01

    This Ph.D. thesis focuses on clustering techniques for Knowledge Discovery in Databases. Various data mining tasks relevant for medical applications are described and discussed. A general framework which combines data projection and data mining and interpretation is presented. An overview...... is available. If data is unlabeled, then it is possible to generate keywords (in case of textual data) or key-patterns, as an informative representation of the obtained clusters. The methods are applied on simple artificial data sets, as well as collections of textual and medical data. In Danish: Denne ph...

  9. Heterogeneous Epidemic Model for Assessing Data Dissemination in Opportunistic Networks

    DEFF Research Database (Denmark)

    Rozanova, Liudmila; Alekseev, Vadim; Temerev, Alexander

    2014-01-01

    that amount of data transferred between network nodes possesses a Pareto distribution, implying scale-free properties. In this context, more heterogeneity in susceptibility means a less severe epidemic progression, and, on the contrary, more heterogeneity in infectivity leads to more severe epidemics...... — assuming that the other parameter (either heterogeneity or susceptibility) stays fixed. The results are general enough to be useful for estimating the epidemic progression with no significant acquired immunity — in the cases where the Pareto distribution holds....

  10. Efficient Partitioning of Large Databases without Query Statistics

    Directory of Open Access Journals (Sweden)

    Shahidul Islam KHAN

    2016-11-01

    An efficient way of improving the performance of a database management system is distributed processing. Distribution of data involves fragmentation or partitioning, replication, and allocation processes. Previous research works provided partitioning based on empirical data about the type and frequency of the queries. These solutions are not suitable at the initial stage of a distributed database, as query statistics are not available then. In this paper, I present a fragmentation technique, Matrix based Fragmentation (MMF), which can be applied at the initial stage as well as at later stages of distributed databases. Instead of using empirical data, I have developed a matrix, Modified Create, Read, Update and Delete (MCRUD), to partition a large database properly. Allocation of fragments is done simultaneously in my proposed technique. So using MMF, no additional complexity is added for allocating the fragments to the sites of a distributed database, as fragmentation is synchronized with allocation. The performance of a DDBMS can be improved significantly by avoiding frequent remote access and high data transfer among the sites. Results show that the proposed technique can solve the initial partitioning problem of large distributed databases.
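
    A toy illustration of matrix-driven fragmentation and allocation (this is not the paper's MMF/MCRUD algorithm; the operation weights, predicates and access strings are made up): each predicate, i.e. candidate fragment, is allocated to the site whose applications use it most heavily according to weighted Create/Read/Update/Delete counts.

        import java.util.HashMap;
        import java.util.Map;

        public class MatrixFragmentationSketch {
            public static void main(String[] args) {
                // Hypothetical weights for Create, Read, Update, Delete operations.
                Map<Character, Integer> opWeight = Map.of('C', 3, 'R', 1, 'U', 2, 'D', 3);

                // Rows = predicates (candidate fragments), columns = sites/applications.
                String[] predicates = {"age < 30", "age >= 30"};
                String[] sites = {"site-A", "site-B"};
                // Access patterns per predicate and site, e.g. "RRU" = two reads and one update.
                String[][] access = {{"RRRR", "U"}, {"C", "RRUUD"}};

                Map<String, String> allocation = new HashMap<>();
                for (int p = 0; p < predicates.length; p++) {
                    int bestSite = 0, bestCost = -1;
                    for (int s = 0; s < sites.length; s++) {
                        int cost = 0;
                        for (char op : access[p][s].toCharArray()) {
                            cost += opWeight.get(op);
                        }
                        if (cost > bestCost) { bestCost = cost; bestSite = s; }
                    }
                    allocation.put(predicates[p], sites[bestSite]);
                }
                allocation.forEach((pred, site) ->
                        System.out.println("fragment [" + pred + "] -> " + site));
            }
        }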

  11. Temperature dependent heterogeneous rotational correlation in lipids.

    Science.gov (United States)

    Dadashvand, Neda; Othon, Christina M

    2016-11-15

    Lipid structures exhibit complex and highly dynamic lateral structure, and changes in lipid density and fluidity are believed to play an essential role in membrane targeting and function. The dynamic structure of liquids on the molecular scale can exhibit complex transient density fluctuations. Here the lateral heterogeneity of lipid dynamics is explored in free standing lipid monolayers. As the temperature is lowered the probes exhibit increasingly broad and heterogeneous rotational correlation. This increase in heterogeneity appears to exhibit a critical onset, similar to those observed for glass forming fluids. We explore heterogeneous relaxation in a single-constituent lipid monolayer of 1,2-dimyristoyl-sn-glycero-3-phosphocholine by measuring the rotational diffusion of a fluorescent probe (1-palmitoyl-2-[1]-sn-glycero-3-phosphocholine), which is embedded in the lipid monolayer at low labeling density. Dynamic distributions are measured using wide-field time-resolved fluorescence anisotropy. The observed relaxation exhibits a narrow, liquid-like distribution at high temperatures (τ ∼ 2.4 ns), consistent with previous experimental measures (Dadashvand et al 2014 Struct. Dyn. 1 054701, Loura and Ramalho 2007 Biochim. Biophys. Acta 1768 467-478). However, as the temperature is quenched, the distribution broadens, and we observe the appearance of a long relaxation population (τ ∼ 16.5 ns). This supports the heterogeneity observed for lipids at high packing densities, and demonstrates that the nanoscale diffusion and reorganization in lipid structures can be significantly complex, even in the simplest amorphous architectures. Dynamical heterogeneity of this form can have a significant impact on the organization, permeability and energetics of lipid membrane structures.

  12. Requirements for the next generation of nuclear databases and services

    Energy Technology Data Exchange (ETDEWEB)

    Pronyaev, Vladimir; Zerkin, Viktor; Muir, Douglas [International Atomic Energy Agency, Nuclear Data Section, Vienna (Austria); Winchell, David; Arcilla, Ramon [Brookhaven National Laboratory, National Nuclear Data Center, Upton, NY (United States)

    2002-08-01

    The use of relational database technology and general requirements for the next generation of nuclear databases and services are discussed. These requirements take into account an increased number of co-operating data centres working on diverse hardware and software platforms and users with different data-access capabilities. It is argued that the introduction of programming standards will allow the development of nuclear databases and data retrieval tools in a heterogeneous hardware and software environment. The functionality of this approach was tested with full-scale nuclear databases installed on different platforms having different operating and database management systems. User access through local network, internet, or CD-ROM has been investigated. (author)

  13. CracidMex1: a comprehensive database of global occurrences of cracids (Aves, Galliformes) with distribution in Mexico

    Directory of Open Access Journals (Sweden)

    Gonzalo Pinilla-Buitrago

    2014-06-01

    Cracids are among the most vulnerable groups of Neotropical birds. Almost half of the species of this family are included in a conservation risk category. Twelve taxa occur in Mexico, six of which are considered at risk at the national level and two of which are globally endangered. Therefore, it is imperative that high quality, comprehensive, and high-resolution spatial data on the occurrence of these taxa are made available as a valuable tool in the process of defining appropriate management strategies for conservation at a local and global level. We constructed the CracidMex1 database by collating global records of all cracid taxa that occur in Mexico from available electronic databases, museum specimens, publications, “grey literature”, and unpublished records. We generated a database with 23,896 clean, validated, and standardized geographic records. Database quality control was an iterative process that commenced with the consolidation and elimination of duplicate records, followed by the geo-referencing of records when necessary, and their taxonomic and geographic validation using GIS tools and expert knowledge. We followed the geo-referencing protocol proposed by the Mexican National Commission for the Use and Conservation of Biodiversity. We could not estimate the geographic coordinates of 981 records due to inconsistencies or lack of sufficient information in the description of the locality. Given that current records for most of the taxa have some degree of distributional bias, with redundancies at different spatial scales, the CracidMex1 database has allowed us to detect areas where more sampling effort is required to have a better representation of the global spatial occurrence of these cracids. We also found that particular attention needs to be given to taxa identification in those areas where congeners or conspecifics co-occur in order to avoid taxonomic uncertainty. The construction of the CracidMex1 database represents the first

  14. Non-invasive assessment of distribution volume ratios and binding potential: tissue heterogeneity and interindividually averaged time-activity curves

    Energy Technology Data Exchange (ETDEWEB)

    Reimold, M.; Mueller-Schauenburg, W.; Dohmen, B.M.; Bares, R. [Department of Nuclear Medicine, University of Tuebingen, Otfried-Mueller-Strasse 14, 72076, Tuebingen (Germany); Becker, G.A. [Nuclear Medicine, University of Leipzig, Leipzig (Germany); Reischl, G. [Radiopharmacy, University of Tuebingen, Tuebingen (Germany)

    2004-04-01

    Due to the stochastic nature of radioactive decay, any measurement of radioactivity concentration requires spatial averaging. In pharmacokinetic analysis of time-activity curves (TAC), such averaging over heterogeneous tissues may introduce a systematic error (heterogeneity error) but may also improve the accuracy and precision of parameter estimation. In addition to spatial averaging (inevitable due to limited scanner resolution and intended in ROI analysis), interindividual averaging may theoretically be beneficial, too. The aim of this study was to investigate the effect of such averaging on the binding potential (BP) calculated with Logan's non-invasive graphical analysis and the "simplified reference tissue method" (SRTM) proposed by Lammertsma and Hume, on the basis of simulated and measured positron emission tomography data: [11C]d-threo-methylphenidate (dMP) and [11C]raclopride (RAC) PET. dMP was not quantified with SRTM since the low k2 (washout rate constant from the first tissue compartment) introduced a high noise sensitivity. Even for considerably different shapes of TAC (dMP PET in parkinsonian patients and healthy controls, [11C]raclopride in patients with and without haloperidol medication) and a high variance in the rate constants (e.g. simulated standard deviation of K1 = 25%), the BP obtained from the average TAC was close to the mean BP (<5%). However, unfavourably distributed parameters, especially a correlated large variance in two or more parameters, may lead to larger errors. In Monte Carlo simulations, interindividual averaging before quantification reduced the variance from the SRTM (beyond a critical signal to noise ratio) and the bias in Logan's method. Interindividual averaging may further increase accuracy when there is an error term in the reference tissue assumption E = DV2 - DV' (DV2 = distribution volume of the first tissue compartment, DV'
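
    For reference, the non-invasive Logan graphical analysis mentioned here estimates the distribution volume ratio (DVR) as the late-time slope of a linear relation of the standard form (notation may differ from the paper)

        \frac{\int_0^T C(t)\,dt}{C(T)}
          = \mathrm{DVR}\;\frac{\int_0^T C_{\mathrm{ref}}(t)\,dt + C_{\mathrm{ref}}(T)/\overline{k_2}}{C(T)} + b,
        \qquad \mathrm{BP} = \mathrm{DVR} - 1,

    where C and C_ref are the target and reference tissue time-activity curves and \overline{k_2} is a population-average reference clearance rate; the SRTM instead fits R_1, k_2 and BP directly to the operational equation of Lammertsma and Hume.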

  15. Micro-macro model for prediction of local temperature distribution in heterogeneous and two-phase media

    Directory of Open Access Journals (Sweden)

    Furmański Piotr

    2014-09-01

    Heat flow in heterogeneous media with complex microstructure follows tortuous paths, and the determination of the temperature distribution in such media is therefore a challenging task. A two-scale, micro-macro model of heat conduction with phase change in such media was considered in the paper. A relation was derived between the temperature distribution on the microscopic level, i.e., on the level of the details of the microstructure, and the temperature distribution on the macroscopic level, i.e., on the level where the properties were homogenized and treated as effective. An expansion applied to this relation allowed a more simplified, approximate form to be obtained, corresponding to the separation of micro- and macro-scales. The validity of this model was then checked by performing calculations for the 2D microstructure of a composite made of two constituents. The range of application of the proposed micro-macro model was considered for transient states of heat conduction, both when phase change in the material is present and when it is absent. The variation of the effective thermal conductivity with time was considered, and a criterion was found for which application of the considered model is justified.

  16. Heterogeneous Gossip

    Science.gov (United States)

    Frey, Davide; Guerraoui, Rachid; Kermarrec, Anne-Marie; Koldehofe, Boris; Mogensen, Martin; Monod, Maxime; Quéma, Vivien

    Gossip-based information dissemination protocols are considered easy to deploy, scalable and resilient to network dynamics. Load-balancing is inherent in these protocols as the dissemination work is evenly spread among all nodes. Yet, large-scale distributed systems are usually heterogeneous with respect to network capabilities such as bandwidth. In practice, a blind load-balancing strategy might significantly hamper the performance of the gossip dissemination.

  17. Distributed memory in a heterogeneous network, as used in the CERN-PS complex timing system

    CERN Document Server

    Kovaltsov, V I

    1995-01-01

    The Distributed Table Manager (DTM) is a fast and efficient utility for distributing named binary data structures called Tables, of arbitrary size and structure, around a heterogeneous network of computers to a set of registered clients. The Tables are transmitted over a UDP network between DTM servers in network format, where the servers perform the conversions to and from host format for local clients. The servers provide clients with synchronization mechanisms, a choice of network data flows, and table options such as keeping table disc copies, shared memory or heap memory table allocation, table read/write permissions, and table subnet broadcasting. DTM has been designed to be easily maintainable, and to automatically recover from the type of errors typically encountered in a large control system network. The DTM system is based on a three level server daemon hierarchy, in which an inter daemon protocol handles network failures, and incorporates recovery procedures which will guarantee table consistency w...

  18. A Support Database System for Integrated System Health Management (ISHM)

    Science.gov (United States)

    Schmalzel, John; Figueroa, Jorge F.; Turowski, Mark; Morris, John

    2007-01-01

    The development, deployment, operation and maintenance of Integrated Systems Health Management (ISHM) applications require the storage and processing of tremendous amounts of low-level data. This data must be shared in a secure and cost-effective manner between developers, and processed within several heterogeneous architectures. Modern database technology allows this data to be organized efficiently, while ensuring the integrity and security of the data. The extensibility and interoperability of the current database technologies also allows for the creation of an associated support database system. A support database system provides additional capabilities by building applications on top of the database structure. These applications can then be used to support the various technologies in an ISHM architecture. This presentation and paper propose a detailed structure and application description for a support database system, called the Health Assessment Database System (HADS). The HADS provides a shared context for organizing and distributing data as well as a definition of the applications that provide the required data-driven support to ISHM. This approach provides another powerful tool for ISHM developers, while also enabling novel functionality. This functionality includes: automated firmware updating and deployment, algorithm development assistance and electronic datasheet generation. The architecture for the HADS has been developed as part of the ISHM toolset at Stennis Space Center for rocket engine testing. A detailed implementation has begun for the Methane Thruster Testbed Project (MTTP) in order to assist in developing health assessment and anomaly detection algorithms for ISHM. The structure of this implementation is shown in Figure 1. The database structure consists of three primary components: the system hierarchy model, the historical data archive and the firmware codebase. The system hierarchy model replicates the physical relationships between

  19. A Scalable Data Access Layer to Manage Structured Heterogeneous Biomedical Data.

    Directory of Open Access Journals (Sweden)

    Giovanni Delussu

    This work presents a scalable data access layer, called PyEHR, designed to support the implementation of data management systems for secondary use of structured heterogeneous biomedical and clinical data. PyEHR adopts the openEHR's formalisms to guarantee the decoupling of data descriptions from implementation details and exploits structure indexing to accelerate searches. Data persistence is guaranteed by a driver layer with a common driver interface. Interfaces for two NoSQL Database Management Systems are already implemented: MongoDB and Elasticsearch. We evaluated the scalability of PyEHR experimentally through two types of tests, called "Constant Load" and "Constant Number of Records", with queries of increasing complexity on synthetic datasets of ten million records each, containing very complex openEHR archetype structures, distributed on up to ten computing nodes.

  20. A Scalable Data Access Layer to Manage Structured Heterogeneous Biomedical Data

    Science.gov (United States)

    Lianas, Luca; Frexia, Francesca; Zanetti, Gianluigi

    2016-01-01

    This work presents a scalable data access layer, called PyEHR, designed to support the implementation of data management systems for secondary use of structured heterogeneous biomedical and clinical data. PyEHR adopts the openEHR’s formalisms to guarantee the decoupling of data descriptions from implementation details and exploits structure indexing to accelerate searches. Data persistence is guaranteed by a driver layer with a common driver interface. Interfaces for two NoSQL Database Management Systems are already implemented: MongoDB and Elasticsearch. We evaluated the scalability of PyEHR experimentally through two types of tests, called “Constant Load” and “Constant Number of Records”, with queries of increasing complexity on synthetic datasets of ten million records each, containing very complex openEHR archetype structures, distributed on up to ten computing nodes. PMID:27936191

  1. A Scalable Data Access Layer to Manage Structured Heterogeneous Biomedical Data.

    Science.gov (United States)

    Delussu, Giovanni; Lianas, Luca; Frexia, Francesca; Zanetti, Gianluigi

    2016-01-01

    This work presents a scalable data access layer, called PyEHR, designed to support the implementation of data management systems for secondary use of structured heterogeneous biomedical and clinical data. PyEHR adopts the openEHR's formalisms to guarantee the decoupling of data descriptions from implementation details and exploits structure indexing to accelerate searches. Data persistence is guaranteed by a driver layer with a common driver interface. Interfaces for two NoSQL Database Management Systems are already implemented: MongoDB and Elasticsearch. We evaluated the scalability of PyEHR experimentally through two types of tests, called "Constant Load" and "Constant Number of Records", with queries of increasing complexity on synthetic datasets of ten million records each, containing very complex openEHR archetype structures, distributed on up to ten computing nodes.

  2. A 1998 Workshop on Heterogeneous Computing

    Science.gov (United States)

    1998-09-18

    Programming Heterogeneous Computing Systems? Panel Chair: Gul A. Agha, University of Illinois, Urbana-Champaign, IL, USA. Modular Heterogeneous System... electrical engineering from the University of Illinois, Urbana-Champaign, in 1975. She worked at the I.B.M. T.J. Watson Research Center with the... Distributed System Environment". I Encuentro de Computación. Taller de Sistemas Distribuidos y Paralelos. Memorias. Querétaro, Qro., Mexico. September 1997

  3. Report on the database structuring project in fiscal 1996 related to the 'surveys on making databases for energy saving (2)'; 1996 nendo database kochiku jigyo hokokusho. Sho energy database system ka ni kansuru chosa 2

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-03-01

    With the objective of supporting the promotion of energy conservation in such countries as Japan, China, Indonesia, the Philippines, Thailand, Malaysia, Taiwan and Korea, primary information on energy conservation in each country was collected and a database was structured. This paper summarizes the achievements in fiscal 1996. Based on the results of the database project carried out to date and on the various data collected, this fiscal year's work discussed how to structure the database for its distribution and dissemination. In the discussion, the functional requirements for the database, the data items to be recorded, and the processing of the recorded data were organized with reference to proposals on the database environment. Demonstrations of the distribution version of the database were performed in the Philippines, Indonesia and China. Three hundred CDs were prepared for distribution in each country. The supplied computers were adjusted and their operation confirmed, and operation briefing meetings were held in China and the Philippines. (NEDO)

  4. The Make 2D-DB II package: conversion of federated two-dimensional gel electrophoresis databases into a relational format and interconnection of distributed databases.

    Science.gov (United States)

    Mostaguir, Khaled; Hoogland, Christine; Binz, Pierre-Alain; Appel, Ron D

    2003-08-01

    The Make 2D-DB tool has been previously developed to help build federated two-dimensional gel electrophoresis (2-DE) databases on one's own web site. The purpose of our work is to extend the strength of the first package and to build a more efficient environment. Such an environment should be able to fulfill the different needs and requirements arising from both the growing use of 2-DE techniques and the increasing amount of distributed experimental data.

  5. Large Survey Database: A Distributed Framework for Storage and Analysis of Large Datasets

    Science.gov (United States)

    Juric, Mario

    2011-01-01

    The Large Survey Database (LSD) is a Python framework and DBMS for distributed storage, cross-matching and querying of large survey catalogs (>10^9 rows, >1 TB). The primary driver behind its development is the analysis of Pan-STARRS PS1 data. It is specifically optimized for fast queries and parallel sweeps of positionally and temporally indexed datasets. It transparently scales to more than 10^2 nodes, and can be made to function in "shared nothing" architectures. An LSD database consists of a set of vertically and horizontally partitioned tables, physically stored as compressed HDF5 files. Vertically, we partition the tables into groups of related columns ('column groups'), storing together logically related data (e.g., astrometry, photometry). Horizontally, the tables are partitioned into partially overlapping "cells" by position in space (lon, lat) and time (t). This organization allows for fast lookups based on spatial and temporal coordinates, as well as data and task distribution. The design was inspired by the success of Google BigTable (Chang et al., 2006). Our programming model is a pipelined extension of MapReduce (Dean and Ghemawat, 2004). An SQL-like query language is used to access data. For complex tasks, map-reduce "kernels" that operate on query results on a per-cell basis can be written, with the framework taking care of scheduling and execution. The combination leverages users' familiarity with SQL, while offering a fully distributed computing environment. LSD adds little overhead compared to direct Python file I/O. In tests, we swept through 1.1 Grows (billion rows) of PanSTARRS+SDSS data (220 GB) in less than 15 minutes on a dual-CPU machine. In a cluster environment, we achieved bandwidths of 17 Gbit/s (I/O limited). Based on current experience, we believe LSD should scale to be useful for analysis and storage of LSST-scale datasets. It can be downloaded from http://mwscience.net/lsd.
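
    The per-cell "kernel" idea described above can be pictured with a small pure-Python sketch: rows are grouped into spatial cells and a user-supplied map function runs once per cell, followed by a reduce step. The cell size, field names and kernel below are invented for illustration; LSD's real query language, HDF5 storage and scheduling are not reproduced here.

        from collections import defaultdict

        # Toy catalog rows: (lon, lat, magnitude); LSD would read these from HDF5 column groups.
        rows = [(10.1, -5.2, 17.3), (10.4, -5.1, 18.0), (200.7, 44.9, 16.2), (200.9, 44.1, 19.5)]

        def cell_of(lon, lat, size=1.0):
            """Assign a row to a spatial cell, mimicking the (lon, lat) partitioning."""
            return (int(lon // size), int(lat // size))

        def map_kernel(cell, cell_rows):
            """Per-cell kernel: emit the cell's brightest (smallest-magnitude) source."""
            yield cell, min(r[2] for r in cell_rows)

        # Group rows by cell, run the kernel per cell, then reduce the emitted values.
        cells = defaultdict(list)
        for r in rows:
            cells[cell_of(r[0], r[1])].append(r)

        emitted = [kv for cell, cell_rows in cells.items() for kv in map_kernel(cell, cell_rows)]
        print(dict(emitted))   # {(10, -6): 17.3, (200, 44): 16.2}, one result per cell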

  6. Integrating mean and variance heterogeneities to identify differentially expressed genes.

    Science.gov (United States)

    Ouyang, Weiwei; An, Qiang; Zhao, Jinying; Qin, Huaizhen

    2016-12-06

    In functional genomics studies, tests on mean heterogeneity have been widely employed to identify differentially expressed genes with distinct mean expression levels under different experimental conditions. Variance heterogeneity (aka, the difference between condition-specific variances) of gene expression levels is simply neglected or calibrated for as an impediment. The mean heterogeneity in the expression level of a gene reflects one aspect of its distribution alteration, and variance heterogeneity induced by condition change may reflect another aspect. Change in condition may alter both the mean and some higher-order characteristics of the distributions of expression levels of susceptible genes. In this report, we put forth the concept of mean-variance differentially expressed (MVDE) genes, whose expression means and variances are sensitive to the change in experimental condition. We mathematically proved the null independence of existing mean heterogeneity tests and variance heterogeneity tests. Based on this independence, we proposed an integrative mean-variance test (IMVT) to combine gene-wise mean heterogeneity and variance heterogeneity induced by condition change. The IMVT outperformed its competitors under comprehensive simulations of normality and Laplace settings. For moderate samples, the IMVT well controlled type I error rates, and so did the existing mean heterogeneity tests (i.e., the Welch t test (WT) and the moderated Welch t test (MWT)) and the procedure of separate tests on mean and variance heterogeneities (SMVT), but the likelihood ratio test (LRT) severely inflated type I error rates. In the presence of variance heterogeneity, the IMVT appeared noticeably more powerful than all the valid mean heterogeneity tests. Application to the gene profiles of peripheral circulating B raised solid evidence of informative variance heterogeneity. After adjusting for background data structure, the IMVT replicated previous discoveries and identified novel experiment
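
    The core idea above, combining a mean heterogeneity test with a variance heterogeneity test for each gene, can be sketched naively with standard SciPy tests. This is not the authors' IMVT statistic; it simply pairs a Welch t test with Levene's test and merges the two p-values by Fisher's method, as a hedged illustration of joint mean-variance screening.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        # Toy expression values for one gene under two conditions: same mean, different variance.
        condition_a = rng.normal(loc=5.0, scale=1.0, size=30)
        condition_b = rng.normal(loc=5.0, scale=3.0, size=30)

        # Mean heterogeneity: Welch t test (does not assume equal variances).
        _, p_mean = stats.ttest_ind(condition_a, condition_b, equal_var=False)
        # Variance heterogeneity: Levene's test.
        _, p_var = stats.levene(condition_a, condition_b)
        # Naive combination of the two p-values (independent under the null, per the abstract).
        _, p_combined = stats.combine_pvalues([p_mean, p_var], method="fisher")

        print(f"mean p={p_mean:.3f}  variance p={p_var:.3f}  combined p={p_combined:.3g}")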

  7. Heterogeneous iris image hallucination using sparse representation on a learned heterogeneous patch dictionary

    Science.gov (United States)

    Li, Yung-Hui; Zheng, Bo-Ren; Ji, Dai-Yan; Tien, Chung-Hao; Liu, Po-Tsun

    2014-09-01

    Cross-sensor iris matching may seriously degrade recognition performance because of the sensor mismatch between the iris images acquired at the enrollment and test stages. In this paper, we propose two novel patch-based heterogeneous dictionary learning methods to attack this problem. The first method applies the latest sparse representation theory, while the second tries to learn the correspondence relationship through PCA in a heterogeneous patch space. Both methods learn the basic atoms of iris textures across different image sensors and build connections between them. Once such connections are built, it is possible at the test stage to hallucinate (synthesize) iris images across different sensors. By matching training images with hallucinated images, the recognition rate can be successfully enhanced. The experimental results were satisfactory both visually and in terms of recognition rate. Experimenting with an iris database consisting of 3015 images, we show that the EER is decreased by 39.4% in relative terms by the proposed method.

  8. Biomine: predicting links between biological entities using network models of heterogeneous databases

    Directory of Open Access Journals (Sweden)

    Eronen Lauri

    2012-06-01

    Full Text Available Abstract Background Biological databases contain large amounts of data concerning the functions and associations of genes and proteins. Integration of data from several such databases into a single repository can aid the discovery of previously unknown connections spanning multiple types of relationships and databases. Results Biomine is a system that integrates cross-references from several biological databases into a graph model with multiple types of edges, such as protein interactions, gene-disease associations and gene ontology annotations. Edges are weighted based on their type, reliability, and informativeness. We present Biomine and evaluate its performance in link prediction, where the goal is to predict pairs of nodes that will be connected in the future, based on current data. In particular, we formulate protein interaction prediction and disease gene prioritization tasks as instances of link prediction. The predictions are based on a proximity measure computed on the integrated graph. We consider and experiment with several such measures, and perform a parameter optimization procedure where different edge types are weighted to optimize link prediction accuracy. We also propose a novel method for disease-gene prioritization, defined as finding a subset of candidate genes that cluster together in the graph. We experimentally evaluate Biomine by predicting future annotations in the source databases and prioritizing lists of putative disease genes. Conclusions The experimental results show that Biomine has strong potential for predicting links when a set of selected candidate links is available. The predictions obtained using the entire Biomine dataset are shown to clearly outperform ones obtained using any single source of data alone, when different types of links are suitably weighted. In the gene prioritization task, an established reference set of disease-associated genes is useful, but the results show that under favorable
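
    As a hedged illustration of the proximity-on-a-weighted-graph idea described above, the sketch below scores a candidate gene-disease link by a "best path" probability: edge reliabilities in (0, 1] are turned into additive costs via -log, a shortest path is found, and the product of reliabilities along it serves as the proximity. The graph, node names and weights are invented; this is not the Biomine scoring function itself.

        import math
        import networkx as nx

        # Toy integrated graph: edges carry a reliability/informativeness weight in (0, 1].
        G = nx.Graph()
        edges = [("geneA", "protein1", 0.9), ("protein1", "protein2", 0.8),
                 ("protein2", "diseaseX", 0.7), ("geneA", "annotation1", 0.5),
                 ("annotation1", "diseaseX", 0.4)]
        for u, v, w in edges:
            # Converting weights to -log(w) makes the cheapest path the most reliable one.
            G.add_edge(u, v, weight=w, cost=-math.log(w))

        def proximity(graph, source, target):
            """Probability of the single most reliable path between two nodes."""
            cost = nx.shortest_path_length(graph, source, target, weight="cost")
            return math.exp(-cost)

        print(f"proximity(geneA, diseaseX) = {proximity(G, 'geneA', 'diseaseX'):.3f}")
        # Candidate links can then be ranked by this score for link prediction.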

  9. Links in a distributed database: Theory and implementation

    International Nuclear Information System (INIS)

    Karonis, N.T.; Kraimer, M.R.

    1991-12-01

    This document addresses the problem of extending database links across Input/Output Controller (IOC) boundaries. It lays a foundation by reviewing the current system and proposing an implementation specification designed to guide all work in this area. The document also describes an implementation that is less ambitious than our formally stated proposal, one that does not extend the reach of all database links across IOC boundaries. Specifically, it introduces an implementation of input and output links and comments on that overall implementation. We include a set of manual pages describing each of the new functions the implementation provides

  10. in Heterogeneous Media

    Directory of Open Access Journals (Sweden)

    Saeed Balouchi

    2013-01-01

    Full Text Available Fractured reservoirs contain about 85 and 90 percent of the oil and gas resources in Iran, respectively. A comprehensive study and investigation of fractures, as the main factor affecting fluid flow or perhaps acting as a flow barrier, seems necessary for reservoir development studies. High degrees of heterogeneity and sparseness of data have incapacitated conventional deterministic methods in fracture network modeling. Recently, simulated annealing (SA) has been applied to generate stochastic realizations of spatially correlated fracture networks by assuming that the elastic energy of fractures follows a Boltzmann distribution. Although SA honors local variability, the objective function of geometrical fracture modeling is defined for homogeneous conditions. In this study, after the introduction of SA and the derivation of the energy function, a novel technique is presented to adjust the model to highly heterogeneous data for a fractured field in the southwest of Iran. To this end, the regular object-based model is combined with a grid-based technique to cover the heterogeneity of reservoir properties. The original SA algorithm is also modified by being constrained in different directions and by weighting the energy function to make it appropriate for heterogeneous conditions. The simulation results of the presented approach are in good agreement with the observed field data.

  11. Heterogeneous Beliefs, Public Information, and Option Markets

    DEFF Research Database (Denmark)

    Qin, Zhenjiang

    In an incomplete market setting with heterogeneous prior beliefs, I show that public information and the strike price of an option have a substantial influence on asset pricing in option markets, by investigating an absolute option pricing model with negative exponential utility investors and a normally distributed dividend. I demonstrate that heterogeneous prior variances give rise to the economic value of option markets. Investors speculate in the option market, and public information improves the allocational efficiency of markets, only when there is heterogeneity in prior variance. Heterogeneity in mean is neither a necessary nor a sufficient condition for generating speculation in option markets. With heterogeneous beliefs, options are non-redundant assets which can facilitate side-betting and enable investors to take advantage of the disagreements and the differences in confidence. This fact leads to a higher growth ...

  12. Land surface temperature representativeness in a heterogeneous area through a distributed energy-water balance model and remote sensing data

    Directory of Open Access Journals (Sweden)

    C. Corbari

    2010-10-01

    Full Text Available Land surface temperature is the link between soil-vegetation-atmosphere fluxes and soil water content through the energy-water balance. This paper analyses the representativeness of land surface temperature (LST) for a distributed hydrological water balance model (FEST-EWB), using LST from the AHS (airborne hyperspectral scanner), with a spatial resolution between 2 and 4 m, LST from MODIS, with a spatial resolution of 1000 m, and thermal infrared radiometric ground measurements, which are compared with the representative equilibrium temperature that closes the energy balance equation in the distributed hydrological model.

    Diurnal and nocturnal images are analyzed because of the unstable behaviour of the thermodynamic temperature and the non-linear effects induced by spatial heterogeneity.

    Spatial autocorrelation and scale of fluctuation of land surface temperature from FEST-EWB and AHS are analysed at different aggregation areas to better understand the scale of representativeness of land surface temperature in a hydrological process.

    The study site is the agricultural area of Barrax (Spain), a heterogeneous area with a patchwork of irrigated and non-irrigated vegetated fields and bare soil. The data set used was collected during a field campaign from 10 to 15 July 2005 in the framework of the SEN2FLEX project.

  13. The Brainomics/Localizer database.

    Science.gov (United States)

    Papadopoulos Orfanos, Dimitri; Michel, Vincent; Schwartz, Yannick; Pinel, Philippe; Moreno, Antonio; Le Bihan, Denis; Frouin, Vincent

    2017-01-01

    The Brainomics/Localizer database exposes part of the data collected by the in-house Localizer project, which planned to acquire four types of data from volunteer research subjects: anatomical MRI scans, functional MRI data, behavioral and demographic data, and DNA sampling. Over the years, this local project has been collecting such data from hundreds of subjects. We had selected 94 of these subjects for their complete datasets, including all four types of data, as the basis for a prior publication; the Brainomics/Localizer database publishes the data associated with these 94 subjects. Since regulatory rules prevent us from making genetic data available for download, the database serves only anatomical MRI scans, functional MRI data, behavioral and demographic data. To publish this set of heterogeneous data, we use dedicated software based on the open-source CubicWeb semantic web framework. Through genericity in the data model and flexibility in the display of data (web pages, CSV, JSON, XML), CubicWeb helps us expose these complex datasets in original and efficient ways. Copyright © 2015 Elsevier Inc. All rights reserved.

  14. HSM: Heterogeneous Subspace Mining in High Dimensional Data

    DEFF Research Database (Denmark)

    Müller, Emmanuel; Assent, Ira; Seidl, Thomas

    2009-01-01

    Heterogeneous data, i.e. data with both categorical and continuous values, is common in many databases. However, most data mining algorithms assume either continuous or categorical attributes, but not both. In high dimensional data, phenomena due to the "curse of dimensionality" pose additional challenges. Usually, due to locally varying relevance of attributes, patterns do not show across the full set of attributes. In this paper we propose HSM, which defines a new pattern model for heterogeneous high dimensional data. It allows data mining in arbitrary subsets of the attributes that are relevant for the respective patterns. Based on this model we propose an efficient algorithm, which is aware of the heterogeneity of the attributes. We extend an indexing structure for continuous attributes such that HSM indexing adapts to different attribute types. In our experiments we show that HSM efficiently mines...

  15. The mining of toxin-like polypeptides from EST database by single residue distribution analysis.

    Science.gov (United States)

    Kozlov, Sergey; Grishin, Eugene

    2011-01-31

    Novel high throughput sequencing technologies require permanent development of bioinformatics data processing methods. Among them, rapid and reliable identification of encoded proteins plays a pivotal role. To search for particular protein families, the amino acid sequence motifs suitable for selective screening of nucleotide sequence databases may be used. In this work, we suggest a novel method for simplified representation of protein amino acid sequences named Single Residue Distribution Analysis, which is applicable both for homology search and database screening. Using the procedure developed, a search for amino acid sequence motifs in sea anemone polypeptides was performed, and 14 different motifs with broad and low specificity were discriminated. The adequacy of motifs for mining toxin-like sequences was confirmed by their ability to identify 100% toxin-like anemone polypeptides in the reference polypeptide database. The employment of novel motifs for the search of polypeptide toxins in Anemonia viridis EST dataset allowed us to identify 89 putative toxin precursors. The translated and modified ESTs were scanned using a special algorithm. In addition to direct comparison with the motifs developed, the putative signal peptides were predicted and homology with known structures was examined. The suggested method may be used to retrieve structures of interest from the EST databases using simple amino acid sequence motifs as templates. The efficiency of the procedure for directed search of polypeptides is higher than that of most currently used methods. Analysis of 39939 ESTs of sea anemone Anemonia viridis resulted in identification of five protein precursors of earlier described toxins, discovery of 43 novel polypeptide toxins, and prediction of 39 putative polypeptide toxin sequences. In addition, two precursors of novel peptides presumably displaying neuronal function were disclosed.
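
    The motif-based screening strategy described above can be pictured with a tiny Python sketch that scans translated sequences against a cysteine-spacing pattern. The pattern and sequences below are made up for illustration and are not the 14 motifs derived in the paper.

        import re

        # Hypothetical toxin-like motif: cysteines with loosely constrained spacing.
        MOTIF = re.compile(r"C.{2,6}C.{3,8}C.{1,4}C")

        translated_ests = {
            "est_001": "MKTLLVLAVCLAVSAQECFKDGQCSSNPACRAKCYG",
            "est_002": "MSGRGKQGGKARAKAKTRSSRAGLQFPVGRV",
        }

        # Keep only sequences containing the motif, a crude stand-in for database screening.
        hits = {name: seq for name, seq in translated_ests.items() if MOTIF.search(seq)}
        print(sorted(hits))   # ['est_001']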

  16. Database and Expert Systems Applications

    DEFF Research Database (Denmark)

    Viborg Andersen, Kim; Debenham, John; Wagner, Roland

    This book constitutes the refereed proceedings of the 16th International Conference on Database and Expert Systems Applications, DEXA 2005, held in Copenhagen, Denmark, in August 2005. The 92 revised full papers presented together with 2 invited papers were carefully reviewed and selected from 390 submissions. The papers are organized in topical sections on workflow automation, database queries, data classification and recommendation systems, information retrieval in multimedia databases, Web applications, implementational aspects of databases, multimedia databases, XML processing, security, XML schemata, query evaluation, semantic processing, information retrieval, temporal and spatial databases, querying XML, organisational aspects of databases, natural language processing, ontologies, Web data extraction, semantic Web, data stream management, data extraction, distributed database systems ...

  17. Imperfect repair and lifesaving in heterogeneous populations

    Energy Technology Data Exchange (ETDEWEB)

    Finkelstein, Maxim [Department of Mathematical Statistics, University of the Free State, PO Box 339, 9300 Bloemfontein (South Africa) and Max Planck Institute for Demographic Research, Rostock (Germany)]. E-mail: FinkelM.SCl@mail.uovs.ac.za

    2007-12-15

    In this theoretical paper we generalize the notion of minimal repair to the heterogeneous case, when the lifetime distribution function can be modeled by continuous or a discrete mixture of distributions. The statistical (black box) minimal repair and the minimal repair based on information just before the failure of an object are considered. The corresponding failure (intensity) rate processes are defined and analyzed. Demographic lifesaving model is also considered: each life is saved (cured) with some probability (or equivalently a proportion of individuals who would have died are now resuscitated and given another chance). Those who are saved experience the statistical minimal repair. Both of these models are based on the Poisson or non-homogeneous Poisson processes of underlying events, which allow for considering heterogeneity. We also consider the new model of imperfect repair in the homogeneous case and present generalizations to the heterogeneous setting.

  18. Integration of Biodiversity Databases in Taiwan and Linkage to Global Databases

    Directory of Open Access Journals (Sweden)

    Kwang-Tsao Shao

    2007-03-01

    Full Text Available The biodiversity databases in Taiwan were dispersed across various institutions and colleges, each with a limited amount of data, by 2001. The Natural Resources and Ecology GIS Database sponsored by the Council of Agriculture, which is part of the National Geographic Information System planned by the Ministry of Interior, was the most well established biodiversity database in Taiwan. This database, however, mainly collected distribution data of terrestrial animals and plants within the Taiwan area. In 2001, GBIF was formed, and Taiwan joined as an Associate Participant, starting the establishment and integration of animal and plant species databases so that TaiBIF could co-operate with GBIF. The information on the Catalog of Life, specimens, and alien species was integrated using the Darwin Core standard. These metadata standards allowed the biodiversity information of Taiwan to connect with global databases.

  19. Fiber Bundle Model Under Heterogeneous Loading

    Science.gov (United States)

    Roy, Subhadeep; Goswami, Sanchari

    2018-03-01

    The present work deals with the behavior of the fiber bundle model under a heterogeneous loading condition. The model is explored both in the mean-field limit as well as with local stress concentration. In the mean-field limit, the failure abruptness decreases with increasing order k of the heterogeneous loading. In this limit, a brittle to quasi-brittle transition is observed at a particular strength of disorder, which changes with k. On the other hand, the model is hardly affected by such heterogeneity in the limit where local stress concentration plays a crucial role. The continuous limit of the heterogeneous loading is also studied and discussed in this paper. Some of the important results related to the fiber bundle model are reviewed and their responses to our new scheme of heterogeneous loading are studied in detail. Our findings are universal with respect to the nature of the threshold distribution adopted to assign strength to an individual fiber.
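
    For readers unfamiliar with the model, the following sketch simulates the plain mean-field (equal-load-sharing) fiber bundle: fibers get random strength thresholds, the external load is shared equally among surviving fibers, and failures are iterated until the bundle stabilizes or collapses. The heterogeneous-loading scheme of the paper is not reproduced; this is only the homogeneous baseline, with all parameters chosen arbitrarily.

        import numpy as np

        rng = np.random.default_rng(7)
        N = 100_000
        thresholds = rng.uniform(0.0, 1.0, size=N)   # fiber strengths from a uniform distribution

        def surviving_fraction(sigma):
            """Fraction of intact fibers under applied stress sigma per original fiber (ELS rule)."""
            alive = np.ones(N, dtype=bool)
            while True:
                load = sigma * N / alive.sum()        # total load shared equally by survivors
                newly_broken = alive & (thresholds < load)
                if not newly_broken.any():
                    return alive.sum() / N
                alive &= ~newly_broken
                if not alive.any():
                    return 0.0

        for sigma in (0.10, 0.20, 0.24, 0.26):
            print(f"sigma={sigma:.2f}  surviving fraction={surviving_fraction(sigma):.3f}")
        # For uniform thresholds the bundle collapses abruptly near sigma = 0.25.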

  20. Optimization of an algorithm for 3D calculation of radiation dose distribution in heterogeneous media for use in radiotherapy planning

    International Nuclear Information System (INIS)

    Perles, L.A.; Chinellato, C.D.; Rocha, J.R.O.

    2001-01-01

    This paper presents a modification of an algorithm for three-dimensional (3D) calculation of radiation dose distribution in heterogeneous media by convolutions. The modification maintained good agreement between the calculated data and data simulated with the EGS4 code. The results of the algorithm were compared with the commercial program PLATO, where inconsistencies were noticed for equivalent-density regions in a muscle-lung-muscle interface system.

  1. From inter-specific behavioural interactions to species distribution patterns along gradients of habitat heterogeneity.

    Science.gov (United States)

    Laiolo, Paola

    2013-01-01

    The strength of the behavioural processes associated with competitor coexistence may vary when different physical environments, and their biotic communities, come into contact, although empirical evidence of how interference varies across gradients of environmental complexity is still scarce in vertebrates. Here, I analyse how behavioural interactions and habitat selection regulate the local distribution of steppeland larks (Alaudidae) in a gradient from simple to heterogeneous agricultural landscapes in Spain, using crested lark Galerida cristata and Thekla lark G. theklae as study models. Galerida larks significantly partitioned by habitat but frequently co-occurred in heterogeneous environments. Irrespective of habitat divergence, however, the local densities of the two larks were negatively correlated, and the mechanisms beyond this pattern were investigated by means of playback experiments. When simulating the intrusion of the congener by broadcasting the species territorial calls, both larks responded with an aggressive response as intense with respect to warning and approach behaviour as when responding to the intrusion of a conspecific. However, birds promptly responded to playbacks only when congener territories were nearby, a phenomenon that points to learning as the mechanisms through which individuals finely tune their aggressive responses to the local competition levels. Heterospecifics occurred in closer proximity in diverse agro-ecosystems, possibly because of more abundant or diverse resources, and here engage in antagonistic interactions. The drop of species diversity associated with agricultural homogenisation is therefore likely to also bring about the disappearance of the behavioural repertoires associated with species interactions.

  2. A Spatio-Temporal Building Exposure Database and Information Life-Cycle Management Solution

    Directory of Open Access Journals (Sweden)

    Marc Wieland

    2017-04-01

    Full Text Available With an ever-increasing volume and complexity of data collected from a variety of sources, the efficient management of geospatial information becomes a key topic in disaster risk management. For example, the representation of assets exposed to natural disasters is subjected to changes throughout the different phases of risk management reaching from pre-disaster mitigation to the response after an event and the long-term recovery of affected assets. Spatio-temporal changes need to be integrated into a sound conceptual and technological framework able to deal with data coming from different sources, at varying scales, and changing in space and time. Especially managing the information life-cycle, the integration of heterogeneous information and the distributed versioning and release of geospatial information are important topics that need to become essential parts of modern exposure modelling solutions. The main purpose of this study is to provide a conceptual and technological framework to tackle the requirements implied by disaster risk management for describing exposed assets in space and time. An information life-cycle management solution is proposed, based on a relational spatio-temporal database model coupled with Git and GeoGig repositories for distributed versioning. Two application scenarios focusing on the modelling of residential building stocks are presented to show the capabilities of the implemented solution. A prototype database model is shared on GitHub along with the necessary scenario data.

  3. Job Heterogeneity and Coordination Frictions

    DEFF Research Database (Denmark)

    Kennes, John; le Maire, Daniel

    We develop a new directed search model of a frictional labor market with a continuum of heterogeneous workers and firms. We estimate two versions of the model - auction and price posting - using Danish data on wages and productivities. Assuming heterogeneous workers with no comparative advantage, we ... the job ladder, how the identification of assortative matching is fundamentally different in directed and undirected search models, how our theory accounts for business cycle facts related to inter-temporal changes in job offer distributions, and how our model could also be used to identify ...

  4. Measurement and analysis of neutron flux distribution of STACY heterogeneous core by position sensitive proportional counter. Contract research

    CERN Document Server

    Murazaki, M; Uno, Y

    2003-01-01

    We have measured the neutron flux distribution around the core tank of the STACY heterogeneous core with a position sensitive proportional counter (PSPC) in order to develop a method to measure the reactivity of subcritical systems. Neutron flux distribution data with a position accuracy of ±13 mm have been obtained over the uranium concentration range of 50 g/L to 210 g/L, both in the critical and in the subcritical state. The prompt neutron decay constant, alpha, was evaluated from the measurement data of pulsed neutron source experiments. We also calculated the distribution of the neutron flux and the 3He reaction rates at the location of the PSPC by using the continuous energy Monte Carlo code MCNP. The measurement data were compared with the calculation results. As a result of the comparison, the calculated values generally agreed with the measurement data of the PSPC with Cd cover in the region above half of the solution height, but the difference between the calculated values and the measurement data was large in the region below half of the solution height. On the other hand, ...

  5. Calculation of Investments for the Distribution of GPON Technology in the village of Bishtazhin through database

    Directory of Open Access Journals (Sweden)

    MSc. Jusuf Qarkaxhija

    2013-12-01

    Full Text Available According to daily reports, the income from internet services is getting lower each year. Landline phone services are running at a loss, mobile phone services have become commoditized, and the only bright spot keeping cable operators (ISPs) in positive balance is the income from broadband services (fast internet, IPTV). Broadband technology is a term that covers multiple methods of distributing information over the internet at high speed. Some of the broadband technologies are: optical fiber, coaxial cable, DSL, wireless, mobile broadband, and satellite connection. The ultimate goal of any broadband service provider is to deliver voice, data and video through a single network, the so-called triple play service. Internet distribution remains an important issue in Kosovo, particularly in rural zones. Considering the rapid development of these technologies and the different alternatives we may face, the goal of this paper is to emphasize the necessity of forecasting such an investment and to share experience in this respect. Because this investment involves many factors related to population, geography and several technologies, and because these factors change continuously, the best approach is to store all the data in a database and to use this database to derive the different results. This database lets us replace the previous manual calculations with an automatic calculation procedure. This way of working improves the workflow, providing all the tools needed to take the right decision about an internet investment while considering all its aspects.

  6. Engineering the object-relation database model in O-Raid

    Science.gov (United States)

    Dewan, Prasun; Vikram, Ashish; Bhargava, Bharat

    1989-01-01

    Raid is a distributed database system based on the relational model. O-Raid is an extension of the Raid system and will support complex data objects. The design of O-Raid is evolutionary and retains all features of relational database systems and those of a general-purpose object-oriented programming language. O-Raid has several novel properties. Objects, classes, and inheritance are supported together with a predicate-based relational query language. O-Raid objects are compatible with C++ objects and may be read and manipulated by a C++ program without any 'impedance mismatch'. Relations and columns within relations may themselves be treated as objects with associated variables and methods. Relations may contain heterogeneous objects, that is, objects of more than one class in a certain column, which can individually evolve by being reclassified. Special facilities are provided to reduce the data search in a relation containing complex objects.

  7. Conversion and distribution of bibliographic information for further use on microcomputers with database software such as CDS/ISIS

    International Nuclear Information System (INIS)

    Nieuwenhuysen, P.; Besemer, H.

    1990-05-01

    This paper describes methods to work on microcomputers with data obtained from bibliographic and related databases distributed by online data banks, on CD-ROM or on tape. Also, we mention some user reactions to this technique. We list the different types of software needed to perform these services. Afterwards, we report about our development of software, to convert data so that they can be entered into UNESCO's program named CDS/ISIS (Version 2.3) for local database management on IBM microcomputers or compatibles; this software allows the preservation of the structure of the source data in records, fields, subfields and field occurrences. (author). 10 refs, 1 fig

  8. The mining of toxin-like polypeptides from EST database by single residue distribution analysis

    Directory of Open Access Journals (Sweden)

    Grishin Eugene

    2011-01-01

    Full Text Available Abstract Background Novel high throughput sequencing technologies require permanent development of bioinformatics data processing methods. Among them, rapid and reliable identification of encoded proteins plays a pivotal role. To search for particular protein families, the amino acid sequence motifs suitable for selective screening of nucleotide sequence databases may be used. In this work, we suggest a novel method for simplified representation of protein amino acid sequences named Single Residue Distribution Analysis, which is applicable both for homology search and database screening. Results Using the procedure developed, a search for amino acid sequence motifs in sea anemone polypeptides was performed, and 14 different motifs with broad and low specificity were discriminated. The adequacy of motifs for mining toxin-like sequences was confirmed by their ability to identify 100% toxin-like anemone polypeptides in the reference polypeptide database. The employment of novel motifs for the search of polypeptide toxins in Anemonia viridis EST dataset allowed us to identify 89 putative toxin precursors. The translated and modified ESTs were scanned using a special algorithm. In addition to direct comparison with the motifs developed, the putative signal peptides were predicted and homology with known structures was examined. Conclusions The suggested method may be used to retrieve structures of interest from the EST databases using simple amino acid sequence motifs as templates. The efficiency of the procedure for directed search of polypeptides is higher than that of most currently used methods. Analysis of 39939 ESTs of sea anemone Anemonia viridis resulted in identification of five protein precursors of earlier described toxins, discovery of 43 novel polypeptide toxins, and prediction of 39 putative polypeptide toxin sequences. In addition, two precursors of novel peptides presumably displaying neuronal function were disclosed.

  9. Directory of IAEA databases

    International Nuclear Information System (INIS)

    1992-12-01

    This second edition of the Directory of IAEA Databases has been prepared within the Division of Scientific and Technical Information (NESI). Its main objective is to describe the computerized information sources available to staff members. This directory contains all databases produced at the IAEA, including databases stored on the mainframe, LANs and PCs. All IAEA Division Directors have been requested to register the existence of their databases with NESI. For the second edition, database owners were requested to review the existing entries for their databases and answer four additional questions. The four additional questions concerned the type of database (e.g. Bibliographic, Text, Statistical etc.), the category of database (e.g. Administrative, Nuclear Data etc.), the available documentation and the type of media used for distribution. In the individual entries on the following pages the answers to the first two questions (type and category) are always listed, but the answers to the second two questions (documentation and media) are only listed when information has been made available.

  10. Influence of initial heterogeneities and recharge limitations on the evolution of aperture distributions in carbonate aquifers

    Directory of Open Access Journals (Sweden)

    B. Hubinger

    2011-12-01

    Full Text Available Karst aquifers evolve where the dissolution of soluble rocks causes the enlargement of discrete pathways along fractures or bedding planes, thus creating highly conductive solution conduits. To identify general interrelations between hydrogeological conditions and the properties of the evolving conduit systems the aperture-size frequency distributions resulting from generic models of conduit evolution are analysed. For this purpose, a process-based numerical model coupling flow and rock dissolution is employed. Initial protoconduits are represented by tubes with log-normally distributed aperture sizes with a mean μ0 = 0.5 mm for the logarithm of the diameters. Apertures are spatially uncorrelated and widen up to the metre range due to dissolution by chemically aggressive waters. Several examples of conduit development are examined focussing on influences of the initial heterogeneity and the available amount of recharge. If the available recharge is sufficiently high the evolving conduits compete for flow and those with large apertures and high hydraulic gradients attract more and more water. As a consequence, the positive feedback between increasing flow and dissolution causes the breakthrough of a conduit pathway connecting the recharge and discharge sides of the modelling domain. Under these competitive flow conditions dynamically stable bimodal aperture distributions are found to evolve, i.e. a certain percentage of tubes continues to be enlarged while the remaining tubes stay small-sized. The percentage of strongly widened tubes is found to be independent of the breakthrough time and decreases with increasing heterogeneity of the initial apertures and decreasing amount of available water. If the competition for flow is suppressed because the availability of water is strongly limited breakthrough of a conduit pathway is inhibited and the conduit pathways widen very slowly. The resulting aperture distributions are found to be

  11. Carrying capacity in a heterogeneous environment with habitat connectivity.

    Science.gov (United States)

    Zhang, Bo; Kula, Alex; Mack, Keenan M L; Zhai, Lu; Ryce, Arrix L; Ni, Wei-Ming; DeAngelis, Donald L; Van Dyken, J David

    2017-09-01

    A large body of theory predicts that populations diffusing in heterogeneous environments reach higher total size than if non-diffusing, and, paradoxically, higher size than in a corresponding homogeneous environment. However, this theory and its assumptions have not been rigorously tested. Here, we extended previous theory to include exploitable resources, proving qualitatively novel results, which we tested experimentally using spatially diffusing laboratory populations of yeast. Consistent with previous theory, we predicted and experimentally observed that spatial diffusion increased total equilibrium population abundance in heterogeneous environments, with the effect size depending on the relationship between r and K. Refuting previous theory, however, we discovered that homogeneously distributed resources support higher total carrying capacity than heterogeneously distributed resources, even with species diffusion. Our results provide rigorous experimental tests of new and old theory, demonstrating how the traditional notion of carrying capacity is ambiguous for populations diffusing in spatially heterogeneous environments. © 2017 John Wiley & Sons Ltd/CNRS.
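
    The first effect described above, diffusion raising total equilibrium abundance in a heterogeneous environment, can be reproduced with a toy two-patch logistic model. Parameters are arbitrary but chosen with r and K positively correlated across patches, a regime where the diffusing total can exceed K1 + K2; this sketch is not the authors' experimental or theoretical model.

        import numpy as np

        def total_equilibrium(D, r=(0.5, 1.0), K=(50.0, 100.0), dt=0.01, steps=200_000):
            """Euler-integrate two logistic patches coupled by diffusion rate D."""
            n = np.array([10.0, 10.0])
            r, K = np.asarray(r), np.asarray(K)
            for _ in range(steps):
                growth = r * n * (1.0 - n / K)
                exchange = D * (n[::-1] - n)          # symmetric diffusion between the patches
                n = n + dt * (growth + exchange)
            return n.sum()

        print("no diffusion  :", round(total_equilibrium(D=0.0), 1))   # ~150 = K1 + K2
        print("with diffusion:", round(total_equilibrium(D=0.1), 1))   # exceeds K1 + K2 here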

  12. Individual vision and peak distribution in collective actions

    Science.gov (United States)

    Lu, Peng

    2017-06-01

    People decide, with heterogeneous visions, whether to take part in collective actions as participants or to stay out as free riders. Besides utility heterogeneity and cost heterogeneity, this work includes and investigates the effect of vision heterogeneity by constructing a decision model, i.e. the revised peak model of participants. In this model, potential participants make decisions under the joint influence of utility, cost, and vision heterogeneities. The outcomes of simulations indicate that vision heterogeneity reduces the values of peaks, and that the relative variance of peaks is stable. Under normal distributions of vision heterogeneity and the other factors, the peaks of participants are normally distributed as well. Therefore, it is necessary to predict the distribution traits of peaks based on the distribution traits of related factors such as vision heterogeneity. We predict the distribution of peaks with parameters of both mean and standard deviation, which provides confidence intervals and robust predictions of peaks. In addition, we validate the peak model via the Yuyuan Incident, a real case in China (2014), and the model works well in explaining the dynamics and predicting the peak of the real case.

  13. Heterogeneity, learning and information stickiness in inflation expectations

    DEFF Research Database (Denmark)

    Pfajfar, Damjan; Santoro, Emiliano

    2010-01-01

    In this paper we propose novel techniques for the empirical analysis of adaptive learning and sticky information in inflation expectations. These methodologies are applied to the distribution of households’ inflation expectations collected by the University of Michigan Survey Research Center. To account for the evolution of the cross-section of inflation forecasts over time and measure the degree of heterogeneity in private agents’ forecasts, we explore time series of percentiles from the empirical distribution. Our results show that heterogeneity is pervasive in the process of inflation ... hand side of the median formed in accordance with adaptive learning and sticky information.

  14. Strategic Voting in Heterogeneous Electorates: An Experimental Study

    Directory of Open Access Journals (Sweden)

    Marcelo Tyszler

    2013-11-01

    Full Text Available We study strategic voting in a setting where voters choose from three options and Condorcet cycles may occur. We introduce heterogeneity in preference intensity into the electorate by allowing voters to differ in the extent to which they value the three options. Three information conditions are tested: uninformed, in which voters know only their own preference ordering and their own benefits from each option; aggregate information, in which they additionally know the realized aggregate distribution of the preference orderings; and full information, in which they also know how the relative importance attributed to the options is distributed within the electorate. As a general result, heterogeneity seems to decrease the level of strategic voting in our experiment compared to the homogeneous preference case that we study in a companion paper. Both theoretically and empirically (with data collected in a laboratory experiment), the main comparative statics results obtained for the homogeneous case carry over to the present setting with preference heterogeneity. Moreover, information about the realized aggregate distribution of preferences seems to be the element that best explains observed differences in voting behavior. Additional information about the realized distribution of preference intensity does not yield significant further changes.

  15. A normalization method for combination of laboratory test results from different electronic healthcare databases in a distributed research network.

    Science.gov (United States)

    Yoon, Dukyong; Schuemie, Martijn J; Kim, Ju Han; Kim, Dong Ki; Park, Man Young; Ahn, Eun Kyoung; Jung, Eun-Young; Park, Dong Kyun; Cho, Soo Yeon; Shin, Dahye; Hwang, Yeonsoo; Park, Rae Woong

    2016-03-01

    Distributed research networks (DRNs) afford statistical power by integrating observational data from multiple partners for retrospective studies. However, laboratory test results across care sites are derived using different assays from varying patient populations, making it difficult to simply combine the data for analysis. Additionally, existing normalization methods are not suitable for retrospective studies. We normalized laboratory results from different data sources by adjusting for the heterogeneous clinico-epidemiologic characteristics of the data and called this the subgroup-adjusted normalization (SAN) method. Subgroup-adjusted normalization renders the means and standard deviations of distributions identical under population structure-adjusted conditions. To evaluate its performance, we compared SAN with existing methods for simulated and real datasets consisting of blood urea nitrogen, serum creatinine, hematocrit, hemoglobin, serum potassium, and total bilirubin. Various clinico-epidemiologic characteristics can be applied together in SAN. For simplicity of comparison, age and gender were used to adjust for population heterogeneity in this study. In simulations, SAN had the lowest standardized difference in means (SDM) and Kolmogorov-Smirnov values for all tests (p ...); SAN normalization performed better than normalization using other methods. The SAN method is applicable in a DRN environment and should facilitate analysis of data integrated across DRN partners for retrospective observational studies. Copyright © 2015 John Wiley & Sons, Ltd.
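
    A much-simplified version of the subgroup idea, standardizing each site's values within age/gender subgroups and rescaling them to a reference site's subgroup statistics, is sketched below with pandas. It is not the published SAN algorithm, only an illustration of adjusting for population structure before pooling; all column names and data are invented.

        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(3)
        def fake_site(n, shift, scale):
            """Synthetic lab results with a site-specific assay bias."""
            return pd.DataFrame({
                "age_group": rng.choice(["<40", ">=40"], size=n),
                "gender": rng.choice(["F", "M"], size=n),
                "creatinine": rng.normal(1.0, 0.3, size=n) * scale + shift,
            })

        reference, partner = fake_site(2000, 0.0, 1.0), fake_site(2000, 0.4, 1.5)
        keys = ["age_group", "gender"]

        # Subgroup-wise z-score at the partner site, then rescale to the reference subgroups.
        ref_stats = reference.groupby(keys)["creatinine"].agg(["mean", "std"])
        par_stats = partner.groupby(keys)["creatinine"].agg(["mean", "std"])

        def normalize(row):
            key = (row["age_group"], row["gender"])
            z = (row["creatinine"] - par_stats.loc[key, "mean"]) / par_stats.loc[key, "std"]
            return z * ref_stats.loc[key, "std"] + ref_stats.loc[key, "mean"]

        partner["creatinine_norm"] = partner.apply(normalize, axis=1)
        print(partner.groupby(keys)["creatinine_norm"].mean().round(2))   # close to reference subgroup means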

  16. Heterogeneity of long-history migration predicts emotion recognition accuracy.

    Science.gov (United States)

    Wood, Adrienne; Rychlowska, Magdalena; Niedenthal, Paula M

    2016-06-01

    Recent work (Rychlowska et al., 2015) demonstrated the power of a relatively new cultural dimension, historical heterogeneity, in predicting cultural differences in the endorsement of emotion expression norms. Historical heterogeneity describes the number of source countries that have contributed to a country's present-day population over the last 500 years. People in cultures originating from a large number of source countries may have historically benefited from greater and clearer emotional expressivity, because they lacked a common language and well-established social norms. We therefore hypothesized that in addition to endorsing more expressive display rules, individuals from heterogeneous cultures will also produce facial expressions that are easier to recognize by people from other cultures. By reanalyzing cross-cultural emotion recognition data from 92 papers and 82 cultures, we show that emotion expressions of people from heterogeneous cultures are more easily recognized by observers from other cultures than are the expressions produced in homogeneous cultures. Heterogeneity influences expression recognition rates alongside the individualism-collectivism of the perceivers' culture, as more individualistic cultures were more accurate in emotion judgments than collectivistic cultures. This work reveals the present-day behavioral consequences of long-term historical migration patterns and demonstrates the predictive power of historical heterogeneity. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  17. Design issues of an efficient distributed database scheduler for telecom

    NARCIS (Netherlands)

    Bodlaender, M.P.; Stok, van der P.D.V.

    1998-01-01

    We optimize the speed of real-time databases by optimizing the scheduler. The performance of a database is directly linked to the environment it operates in, and we use environment characteristics as guidelines for the optimization. A typical telecom environment is investigated, and characteristics

  18. Comprehensive Monitoring for Heterogeneous Geographically Distributed Storage

    Energy Technology Data Exchange (ETDEWEB)

    Ratnikova, N. [Fermilab]; Karavakis, E. [CERN]; Lammel, S. [Fermilab]; Wildish, T. [Princeton U.]

    2015-12-23

    Storage capacity at CMS Tier-1 and Tier-2 sites reached over 100 Petabytes in 2014, and will be substantially increased during Run 2 data taking. The allocation of storage for individual users' analysis data, which is not accounted for as centrally managed storage space, will be increased to up to 40%. For comprehensive tracking and monitoring of storage utilization across all participating sites, CMS developed a space monitoring system, which provides a central view of the geographically dispersed heterogeneous storage systems. The first prototype was deployed at pilot sites in summer 2014, and has been substantially reworked since then. In this paper we discuss the functionality of the system and our experience with its deployment and operation at the full CMS scale.

  19. A Data Analysis Expert System For Large Established Distributed Databases

    Science.gov (United States)

    Gnacek, Anne-Marie; An, Y. Kim; Ryan, J. Patrick

    1987-05-01

    The purpose of this work is to analyze the applicability of artificial intelligence techniques for developing a user-friendly, parallel interface to large isolated, incompatible NASA databases for the purpose of assisting the management decision process. To carry out this work, a survey was conducted to establish the data access requirements of several key NASA user groups. In addition, current NASA database access methods were evaluated. The results of this work are presented in the form of a design for a natural language database interface system, called the Deductively Augmented NASA Management Decision Support System (DANMDS). This design is feasible principally because of recently announced commercial hardware and software product developments which allow cross-vendor compatibility. The goal of the DANMDS system is commensurate with the central dilemma confronting most large companies and institutions in America, the retrieval of information from large, established, incompatible database systems. The DANMDS system implementation would represent a significant first step toward this problem's resolution.

  20. Dynamical links between small- and large-scale mantle heterogeneity: Seismological evidence

    Science.gov (United States)

    Frost, Daniel A.; Garnero, Edward J.; Rost, Sebastian

    2018-01-01

    We identify PKP•PKP scattered waves (also known as P′•P′) from earthquakes recorded at small-aperture seismic arrays at distances less than 65°. P′•P′ energy travels as a PKP wave through the core, up into the mantle, then scatters back down through the core to the receiver as a second PKP. P′•P′ waves are unique in that they allow scattering heterogeneities throughout the mantle to be imaged. We use array-processing methods to amplify low-amplitude, coherent scattered energy signals and resolve their incoming direction. We deterministically map scattering heterogeneity locations from the core-mantle boundary to the surface. We use an extensive dataset with sensitivity to a large volume of the mantle and a location method allowing us to resolve and map more heterogeneities than have previously been possible, representing a significant increase in our understanding of small-scale structure within the mantle. Our results demonstrate that the distribution of scattering heterogeneities varies both radially and laterally. Scattering is most abundant in the uppermost and lowermost mantle, and at a minimum in the mid-mantle, resembling the radial distribution of tomographically derived whole-mantle velocity heterogeneity. We investigate the spatial correlation of scattering heterogeneities with large-scale tomographic velocities, lateral velocity gradients, the locations of deep-seated hotspots and subducted slabs. In the lowermost 1500 km of the mantle, small-scale heterogeneities correlate with regions of low seismic velocity, high lateral seismic gradient, and proximity to hotspots. In the upper 1000 km of the mantle there is no significant correlation between scattering heterogeneity location and subducted slabs. Between 600 and 900 km depth, scattering heterogeneities are more common in the regions most remote from slabs, and close to hotspots. Scattering heterogeneities show an affinity for regions close to slabs within the upper 200 km of the

  1. Compressing DNA sequence databases with coil

    Directory of Open Access Journals (Sweden)

    Hendy Michael D

    2008-05-01

    Full Text Available Abstract Background Publicly available DNA sequence databases such as GenBank are large, and are growing at an exponential rate. The sheer volume of data being dealt with presents serious storage and data communications problems. Currently, sequence data is usually kept in large "flat files," which are then compressed using standard Lempel-Ziv (gzip compression – an approach which rarely achieves good compression ratios. While much research has been done on compressing individual DNA sequences, surprisingly little has focused on the compression of entire databases of such sequences. In this study we introduce the sequence database compression software coil. Results We have designed and implemented a portable software package, coil, for compressing and decompressing DNA sequence databases based on the idea of edit-tree coding. coil is geared towards achieving high compression ratios at the expense of execution time and memory usage during compression – the compression time represents a "one-off investment" whose cost is quickly amortised if the resulting compressed file is transmitted many times. Decompression requires little memory and is extremely fast. We demonstrate a 5% improvement in compression ratio over state-of-the-art general-purpose compression tools for a large GenBank database file containing Expressed Sequence Tag (EST data. Finally, coil can efficiently encode incremental additions to a sequence database. Conclusion coil presents a compelling alternative to conventional compression of flat files for the storage and distribution of DNA sequence databases having a narrow distribution of sequence lengths, such as EST data. Increasing compression levels for databases having a wide distribution of sequence lengths is a direction for future work.
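
    A toy comparison, assuming synthetic random nucleotide text, of general-purpose zlib against trivial 2-bit-per-base packing: both land near the 4:1 mark, which illustrates why domain-specific coders such as coil's edit-tree approach are needed to do substantially better on real sequence databases. This sketch does not implement coil itself.

        import zlib
        import random

        random.seed(0)
        # Synthetic "EST-like" data: many short sequences over the 4-letter alphabet.
        sequences = ["".join(random.choice("ACGT") for _ in range(400)) for _ in range(500)]
        raw = "".join(sequences).encode("ascii")

        def pack_2bit(data: bytes) -> bytes:
            """Pack 4 bases per byte using 2 bits per base (A=0, C=1, G=2, T=3)."""
            code = {65: 0, 67: 1, 71: 2, 84: 3}       # ASCII codes for A, C, G, T
            out, acc, nbits = bytearray(), 0, 0
            for b in data:
                acc = (acc << 2) | code[b]
                nbits += 2
                if nbits == 8:
                    out.append(acc)
                    acc, nbits = 0, 0
            if nbits:                                  # flush a final partial byte
                out.append(acc << (8 - nbits))
            return bytes(out)

        # On 4-letter text, general-purpose zlib cannot do much better than ~2 bits per base,
        # which is exactly what the trivial packing already achieves.
        print("raw bytes     :", len(raw))
        print("zlib only     :", len(zlib.compress(raw, 9)))
        print("2-bit packed  :", len(pack_2bit(raw)))
        print("packed + zlib :", len(zlib.compress(pack_2bit(raw), 9)))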

  2. Adaptive data migration scheme with facilitator database and multi-tier distributed storage in LHD

    International Nuclear Information System (INIS)

    Nakanishi, Hideya; Ohsuna, Masaki; Kojima, Mamoru; Imazu, Setsuo; Nonomura, Miki; Watanabe, Kenji; Moriya, Masayoshi; Nagayama, Yoshio; Kawahata, Kazuo

    2008-01-01

    The recent 'data explosion' has created a demand for highly flexible storage extension and data migration. The data volume of LHD plasma diagnostics has grown 4.6 times larger than it was three years earlier. Frequent migration or replication among many distributed storage volumes becomes mandatory and increases the human operational costs. To reduce these costs computationally, a new adaptive migration scheme has been developed for LHD's multi-tier distributed storage. HSM (Hierarchical Storage Management) software usually adopts a low-level cache mechanism or simple watermarks to trigger data stage-in and stage-out between two storage devices. The new scheme, however, can deal with a number of distributed storage volumes through a facilitator database that manages all data locations together with their access histories and retrieval priorities. Not only inter-tier migration but also intra-tier replication and relocation can be managed, which is a great help when extending or replacing storage equipment. The access history of each data object is also utilized to optimize the volume size of the fast but costly RAID, in addition to providing a normal cache effect for frequently retrieved data. The effectiveness of the new scheme has been verified, so that LHD's multi-tier distributed storage and other next-generation experiments can obtain such flexible expandability
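
    A minimal sketch of the facilitator-database idea, assuming a toy catalog: the `FacilitatorDB` class, its tier names and the `plan_migrations` policy are hypothetical, not the LHD software. The point is that migration decisions come from per-object access histories and retrieval priorities held in one catalog, rather than from per-device watermarks.

```python
import time
from dataclasses import dataclass, field

@dataclass
class DataObject:
    """Catalog entry for one diagnostic data object."""
    name: str
    tier: str                                   # e.g. "raid", "disk-archive", "tape"
    size_gb: float
    access_times: list = field(default_factory=list)

class FacilitatorDB:
    """Toy facilitator catalog: tracks locations and access histories and
    proposes inter-tier migrations from retrieval priorities."""

    def __init__(self, raid_quota_gb):
        self.raid_quota_gb = raid_quota_gb
        self.objects = {}

    def register(self, obj):
        self.objects[obj.name] = obj

    def record_access(self, name):
        self.objects[name].access_times.append(time.time())

    def retrieval_priority(self, obj, horizon_s=30 * 86400):
        """Priority = number of accesses within the recent horizon."""
        cutoff = time.time() - horizon_s
        return sum(1 for t in obj.access_times if t >= cutoff)

    def plan_migrations(self):
        """Keep the hottest objects on RAID up to the quota; stage the rest out."""
        ranked = sorted(self.objects.values(),
                        key=self.retrieval_priority, reverse=True)
        plan, used = [], 0.0
        for obj in ranked:
            target = "raid" if used + obj.size_gb <= self.raid_quota_gb else "disk-archive"
            if target == "raid":
                used += obj.size_gb
            if target != obj.tier:
                plan.append((obj.name, obj.tier, target))
        return plan

# Usage: register objects, record retrievals, then ask for a migration plan.
db = FacilitatorDB(raid_quota_gb=100)
db.register(DataObject("shot-12345", "disk-archive", 60))
db.register(DataObject("shot-10001", "raid", 80))
db.record_access("shot-12345")
print(db.plan_migrations())
```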

  3. Spatially correlated heterogeneous aspirations to enhance network reciprocity

    Science.gov (United States)

    Tanimoto, Jun; Nakata, Makoto; Hagishima, Aya; Ikegaya, Naoki

    2012-02-01

    Perc & Wang demonstrated that aspiring to be the fittest under conditions of pairwise strategy updating enhances network reciprocity in structured populations playing 2×2 Prisoner's Dilemma games (Z. Wang, M. Perc, Aspiring to the fittest and promotion of cooperation in the Prisoner's Dilemma game, Physical Review E 82 (2010) 021115; M. Perc, Z. Wang, Heterogeneous aspiration promotes cooperation in the Prisoner's Dilemma game, PLoS ONE 5 (12) (2010) e15117). Through numerical simulations, this paper shows that network reciprocity is even greater if heterogeneous aspirations are imposed. We also suggest why heterogeneous aspiration fosters network reciprocity: it distributes strategy updating speed among agents in a manner that fortifies the initially allocated cooperators' clusters against invasion. This finding prompted us to further enhance the usual heterogeneous aspiration cases for heterogeneous network topologies. We find that a negative correlation between degree and aspiration level does extend cooperation among heterogeneously structured agents.
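
    A minimal sketch of an aspiration-based spatial Prisoner's Dilemma, assuming one plausible update rule (a Fermi comparison of a random neighbour's payoff against the focal agent's own aspiration). The lattice size, payoff b, noise K and the uniform aspiration distribution are illustrative and not necessarily those of the cited papers.

```python
import numpy as np

rng = np.random.default_rng(0)

L, b, K, steps = 50, 1.4, 0.1, 200      # lattice side, temptation, noise, rounds

# Heterogeneous aspirations: each agent gets its own aspiration level,
# expressed here on the 4-neighbour payoff scale.
aspiration = 4.0 * rng.uniform(0.0, 1.0, size=(L, L))
coop = rng.integers(0, 2, size=(L, L))  # 1 = cooperator, 0 = defector

def payoffs(coop):
    """Accumulated payoff against the four von Neumann neighbours
    (weak dilemma: R=1 for C-C, T=b for D exploiting C, S=P=0)."""
    total = np.zeros_like(coop, dtype=float)
    for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nb = np.roll(coop, shift, axis=(0, 1))
        total += np.where(coop == 1, nb * 1.0, nb * b)
    return total

for _ in range(steps):
    pay = payoffs(coop)
    # Every site looks at one randomly chosen neighbour (synchronous toy update).
    shift = [(1, 0), (-1, 0), (0, 1), (0, -1)][rng.integers(4)]
    nb_strat = np.roll(coop, shift, axis=(0, 1))
    nb_pay = np.roll(pay, shift, axis=(0, 1))
    # Aspiration-based Fermi rule: adopt the neighbour's strategy with a
    # probability that grows when its payoff exceeds the focal aspiration.
    p_copy = 1.0 / (1.0 + np.exp((aspiration - nb_pay) / K))
    coop = np.where(rng.random((L, L)) < p_copy, nb_strat, coop)

print("final cooperation level:", coop.mean())
```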

  4. System of and method for transparent management of data objects in containers across distributed heterogenous resources

    Science.gov (United States)

    Moore, Reagan W.; Rajasekar, Arcot; Wan, Michael Y.

    2007-09-11

    A system of and method for maintaining data objects in containers across a network of distributed heterogeneous resources in a manner which is transparent to a client. A client request pertaining to containers is resolved by querying meta data for the container, processing the request through one or more copies of the container maintained on the system, updating the meta data for the container to reflect any changes made to the container as a result of processing the request, and, if a copy of the container has changed, changing the status of the copy to indicate dirty status or synchronizing the copy to one or more other copies that may be present on the system.
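
    The metadata-driven dirty/synchronize bookkeeping described above can be sketched as follows. This is a toy illustration of the idea only, not the patented system; every class and method name here is hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class ContainerCopy:
    resource: str          # storage resource holding this copy
    dirty: bool = False    # True if this copy has un-propagated changes

@dataclass
class ContainerMeta:
    name: str
    copies: list = field(default_factory=list)

class ContainerCatalog:
    """Toy metadata catalog mediating client requests to container copies."""

    def __init__(self):
        self.meta = {}

    def register(self, name, resources):
        self.meta[name] = ContainerMeta(name, [ContainerCopy(r) for r in resources])

    def write(self, name, resource):
        """Apply a client write through one copy and flag it dirty."""
        meta = self.meta[name]                 # 1. query metadata for the container
        copy = next(c for c in meta.copies if c.resource == resource)
        # ... perform the actual write on `copy` here ...
        copy.dirty = True                      # 2. record that the copies diverged

    def synchronize(self, name):
        """Propagate a dirty copy to all other copies and clear the flags."""
        meta = self.meta[name]
        if any(c.dirty for c in meta.copies):
            # ... copy the dirty replica's contents to the other resources ...
            for c in meta.copies:
                c.dirty = False

catalog = ContainerCatalog()
catalog.register("exp42", ["cache-disk", "tape-archive"])
catalog.write("exp42", "cache-disk")
catalog.synchronize("exp42")
```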

  5. Percolation in Heterogeneous Media

    International Nuclear Information System (INIS)

    Vocka, Radim

    1999-01-01

    This work is a theoretical reflection on the problem of modeling heterogeneous media, that is, on ways to represent them simply while conserving their characteristic features. Two particular problems are addressed in this thesis. First, we study transport in porous media, i.e. in heterogeneous media whose structure is quenched. The pore space is represented in a simple way: a pore is symbolized as a tube of a given length and a given diameter. Taking the correlations in the pore-size distribution into account by constructing a hierarchical network makes it possible to model porous media whose porosity is distributed over several length scales. Transport in the hierarchical network shows phenomena qualitatively different from those observed in simpler models. A comparison of numerical results with experimental data shows that the hierarchical network gives a good qualitative representation of the structure of real porous media. Second, we study the problem of transport in heterogeneous media whose structure evolves in time. Models in which the evolution of the structure is not influenced by the transport are studied in detail. These models present a phase transition of the same nature as that observed on percolation networks. We propose a new theoretical description of this transition, and we express the critical exponents describing the evolution of the conductivity as functions of the fundamental exponents of percolation theory. (author) [fr

  6. A LATENT CLASS POISSON REGRESSION-MODEL FOR HETEROGENEOUS COUNT DATA

    NARCIS (Netherlands)

    WEDEL, M; DESARBO, WS; BULT, [No Value; RAMASWAMY, [No Value

    1993-01-01

    In this paper an approach is developed that accommodates heterogeneity in Poisson regression models for count data. The model developed assumes that heterogeneity arises from a distribution of both the intercept and the coefficients of the explanatory variables. We assume that the mixing
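
    The latent-class idea, a finite mixture of Poisson regressions whose intercepts and slopes differ across unobserved classes, can be sketched by maximizing the mixture likelihood directly. The two-class model, the synthetic data and the optimizer choice are assumptions for illustration; the paper's estimation procedure may differ.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln, expit

rng = np.random.default_rng(1)

# Synthetic heterogeneous count data: two latent classes with different
# intercepts and slopes (hypothetical example, not the paper's data).
n = 1000
x = rng.normal(size=n)
z = rng.random(n) < 0.4                              # latent class membership
beta_true = np.where(z[:, None], [0.5, 1.0], [1.5, -0.5])
y = rng.poisson(np.exp(beta_true[:, 0] + beta_true[:, 1] * x))

def neg_loglik(theta):
    """Negative log-likelihood of a 2-class Poisson regression mixture."""
    b01, b11, b02, b12, logit_pi = theta
    pi = expit(logit_pi)                             # mixing proportion of class 1
    mu1 = np.exp(b01 + b11 * x)
    mu2 = np.exp(b02 + b12 * x)
    logpmf1 = y * np.log(mu1) - mu1 - gammaln(y + 1)
    logpmf2 = y * np.log(mu2) - mu2 - gammaln(y + 1)
    # Log of the mixture density, computed stably.
    m = np.maximum(logpmf1, logpmf2)
    ll = m + np.log(pi * np.exp(logpmf1 - m) + (1 - pi) * np.exp(logpmf2 - m))
    return -ll.sum()

res = minimize(neg_loglik, x0=[0.0, 0.5, 1.0, 0.0, 0.0], method="Nelder-Mead",
               options={"maxiter": 20000, "xatol": 1e-6, "fatol": 1e-6})
# Note: the two class labels may come out switched relative to the simulation.
print("estimated parameters:", np.round(res.x, 2))
```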

  7. Security Research on Engineering Database System

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    The engine engineering database system is a CAD-oriented applied database management system with the capability of managing distributed data. The paper discusses the security issues of the engine engineering database management system (EDBMS). By studying and analyzing database security, a series of security rules is derived that reaches the B1-level security standard, which includes discretionary access control (DAC), mandatory access control (MAC) and audit. The EDBMS implements functions of DAC, ...
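
    A toy sketch of combining discretionary access control, mandatory access control and auditing in the spirit of a B1-level policy. The level names, the simplified read/write rules and the in-memory audit log are illustrative assumptions, not the EDBMS implementation.

```python
# Bell-LaPadula-style MAC levels (higher = more sensitive) plus a DAC ACL.
LEVELS = {"public": 0, "internal": 1, "confidential": 2, "secret": 3}

class SecureObject:
    def __init__(self, name, level, acl):
        self.name = name
        self.level = level          # MAC classification
        self.acl = acl              # DAC: {user: set of allowed operations}

class Subject:
    def __init__(self, name, clearance):
        self.name = name
        self.clearance = clearance  # MAC clearance

audit_log = []

def check_access(subject, obj, op):
    """Grant `op` only if both DAC and MAC rules allow it; audit everything."""
    dac_ok = op in obj.acl.get(subject.name, set())
    if op == "read":     # no read up
        mac_ok = LEVELS[subject.clearance] >= LEVELS[obj.level]
    else:                # "write": no write down (simple *-property)
        mac_ok = LEVELS[subject.clearance] <= LEVELS[obj.level]
    granted = dac_ok and mac_ok
    audit_log.append((subject.name, obj.name, op, granted))
    return granted

drawing = SecureObject("engine-drawing", "confidential", {"alice": {"read", "write"}})
alice = Subject("alice", "secret")
print(check_access(alice, drawing, "read"))    # True: cleared and on the ACL
print(check_access(alice, drawing, "write"))   # False: writing down is blocked
print(audit_log)
```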

  8. Pivot/Remote: a distributed database for remote data entry in multi-center clinical trials.

    Science.gov (United States)

    Higgins, S B; Jiang, K; Plummer, W D; Edens, T R; Stroud, M J; Swindell, B B; Wheeler, A P; Bernard, G R

    1995-01-01

    1. INTRODUCTION. Data collection is a critical component of multi-center clinical trials. Clinical trials conducted in intensive care units (ICU) are even more difficult because the acute nature of illnesses in ICU settings requires that masses of data be collected in a short time. More than a thousand data points are routinely collected for each study patient. The majority of clinical trials are still "paper-based," even if a remote data entry (RDE) system is utilized. The typical RDE system consists of a computer housed in the CC office and connected by modem to a centralized data coordinating center (DCC). Study data must first be recorded on a paper case report form (CRF), transcribed into the RDE system, and transmitted to the DCC. This approach requires additional monitoring since both the paper CRF and study database must be verified. The paper-based RDE system cannot take full advantage of automatic data checking routines. Much of the effort (and expense) of a clinical trial is ensuring that study data matches the original patient data. 2. METHODS. We have developed an RDE system, Pivot/Remote, that eliminates the need for paper-based CRFs. It creates an innovative, distributed database. The database resides partially at the study clinical centers (CC) and at the DCC. Pivot/Remote is descended from technology introduced with Pivot [1]. Study data is collected at the bedside with laptop computers. A graphical user interface (GUI) allows the display of electronic CRFs that closely mimic the normal paper-based forms. Data entry time is the same as for paper CRFs. Pull-down menus, displaying the possible responses, simplify the process of entering data. Edit checks are performed on most data items. For example, entered dates must conform to some temporal logic imposed by the study. Data must conform to some acceptable range of values. Calculations, such as computing the subject's age or the APACHE II score, are automatically made as the data is entered. Data
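
    The kinds of edit checks described, temporal logic, range checks and automatically computed fields, can be sketched as below. The field names, plausibility ranges and age rule are hypothetical, not the Pivot/Remote schema.

```python
from datetime import date

def crf_edit_checks(record):
    """Return a list of edit-check failures for one electronic CRF record."""
    errors = []

    # Temporal logic: enrollment cannot precede admission.
    if record["enrollment_date"] < record["admission_date"]:
        errors.append("enrollment_date is before admission_date")

    # Range checks on physiologic values.
    if not (20 <= record["heart_rate"] <= 300):
        errors.append("heart_rate out of plausible range")
    if not (50 <= record["systolic_bp"] <= 300):
        errors.append("systolic_bp out of plausible range")

    # Derived field: age is computed, never typed in by the coordinator.
    dob, enroll = record["date_of_birth"], record["enrollment_date"]
    record["age"] = enroll.year - dob.year - (
        (enroll.month, enroll.day) < (dob.month, dob.day))
    if record["age"] < 18:
        errors.append("subject is younger than the protocol allows")

    return errors

record = {
    "admission_date": date(1995, 3, 1),
    "enrollment_date": date(1995, 3, 2),
    "date_of_birth": date(1950, 7, 15),
    "heart_rate": 112,
    "systolic_bp": 88,
}
print(crf_edit_checks(record), record["age"])   # [] 44
```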

  9. Heterogeneous cores for fast breeder reactor

    International Nuclear Information System (INIS)

    Schroeder, R.; Spenke, H.

    1980-01-01

    Firstly, the motivation for heterogeneous cores is discussed. This is followed by an outline of two reactor designs, both of which are variants of the combined ring and island core. These designs are presented by means of figures and detailed tables. Subsequently, a description of two international projects at fast critical zero energy facilities is given. Both of them support the nuclear design of heterogeneous cores. In addition to a survey of these projects, a typical experiment is discussed: the measurement of rate distributions. (orig.) [de

  10. Heterogeneous characters modeling of instant message services users' online behavior.

    Science.gov (United States)

    Cui, Hongyan; Li, Ruibing; Fang, Yajun; Horn, Berthold; Welsch, Roy E

    2018-01-01

    Research on temporal characteristics of human dynamics has attracted much attention for its contribution to various areas such as communication, medical treatment, finance, etc. Existing studies show that the time intervals between two consecutive events present different non-Poisson characteristics, such as power-law, Pareto, bimodal distribution of power-law, exponential distribution, piecewise power-law, etc. With the emergence of new services, new types of distributions may arise. In this paper, we study the distributions of the time intervals between two consecutive visits to the QQ and WeChat services, the two most popular instant messaging services in China, and present a new finding that when the value of the statistical unit T is set to 0.001 s, the inter-event time distribution follows a piecewise distribution of exponential and power-law, indicating the heterogeneous character of IM services users' online behavior on different time scales. We infer that the heterogeneous character is related to the communication mechanism of IM and the habits of users. Then we develop a combination of an exponential model and an interest model to characterize the heterogeneity. Furthermore, we find that the exponent of the inter-event time distribution of the same service is different in two cities, which is correlated with the popularity of the services. Our research is useful for the application of information diffusion, prediction of economic development of cities, and so on.
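
    A sketch of the piecewise behaviour: synthetic inter-event times are drawn from an exponential body plus a power-law tail, and the tail exponent is recovered with the standard continuous maximum-likelihood (Hill-type) estimator. The mixture weight, scales and exponent are illustrative, not fitted QQ/WeChat values.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy inter-event times: short gaps ~ exponential (within-session activity),
# long gaps ~ power law (returns after inactivity).
n = 100_000
is_short = rng.random(n) < 0.7
short = rng.exponential(scale=30.0, size=n)                        # seconds
alpha, t_min = 2.2, 60.0
long_ = t_min * (1.0 - rng.random(n)) ** (-1.0 / (alpha - 1.0))    # Pareto tail
t = np.where(is_short, short, long_)

# Continuous MLE (Hill-type) estimate of the power-law exponent of the tail.
tail = t[t >= t_min]
alpha_hat = 1.0 + tail.size / np.log(tail / t_min).sum()
print("estimated tail exponent:", round(alpha_hat, 2))

# Plotting the empirical survival function on log-log axes would show the bend
# between the exponential regime (short gaps) and the power-law regime.
```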

  11. Impact of mechanical heterogeneity on joint density in a welded ignimbrite

    Science.gov (United States)

    Soden, A. M.; Lunn, R. J.; Shipton, Z. K.

    2016-08-01

    Joints are conduits for groundwater, hydrocarbons and hydrothermal fluids. Robust fluid flow models rely on accurate characterisation of joint networks, in particular joint density. It is generally assumed that the predominant factor controlling joint density in layered stratigraphy is the thickness of the mechanical layer where the joints occur. Mechanical heterogeneity within the layer is considered a lesser influence on joint formation. We analysed the frequency and distribution of joints within a single 12-m thick ignimbrite layer to identify the controls on joint geometry and distribution. The observed joint distribution is not related to the thickness of the ignimbrite layer. Rather, joint initiation, propagation and termination are controlled by the shape, spatial distribution and mechanical properties of fiamme, which are present within the ignimbrite. The observations and analysis presented here demonstrate that models of joint distribution, particularly in thicker layers, that do not fully account for mechanical heterogeneity are likely to underestimate joint density, the spatial variability of joint distribution and the complex joint geometries that result. Consequently, we recommend that characterisation of a layer's compositional and material properties improves predictions of subsurface joint density in rock layers that are mechanically heterogeneous.

  12. Privacy and Security Research Group workshop on network and distributed system security: Proceedings

    Energy Technology Data Exchange (ETDEWEB)

    1993-05-01

    This report contains papers on the following topics: NREN Security Issues: Policies and Technologies; Layer Wars: Protect the Internet with Network Layer Security; Electronic Commission Management; Workflow 2000 - Electronic Document Authorization in Practice; Security Issues of a UNIX PEM Implementation; Implementing Privacy Enhanced Mail on VMS; Distributed Public Key Certificate Management; Protecting the Integrity of Privacy-enhanced Electronic Mail; Practical Authorization in Large Heterogeneous Distributed Systems; Security Issues in the Truffles File System; Issues surrounding the use of Cryptographic Algorithms and Smart Card Applications; Smart Card Augmentation of Kerberos; and An Overview of the Advanced Smart Card Access Control System. Selected papers were processed separately for inclusion in the Energy Science and Technology Database.

  13. Brede Tools and Federating Online Neuroinformatics Databases

    DEFF Research Database (Denmark)

    Nielsen, Finn Årup

    2014-01-01

    As open science neuroinformatics databases the Brede Database and Brede Wiki seek to make distribution and federation of their content as easy and transparent as possible. The databases rely on simple formats and allow other online tools to reuse their content. This paper describes the possible i...

  14. Measurement and analysis of neutron flux distribution of STACY heterogeneous core by position sensitive proportional counter. Contract research

    Energy Technology Data Exchange (ETDEWEB)

    Murazaki, Minoru; Uno, Yuichi; Miyoshi, Yoshinori [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2003-03-01

    We have measured the neutron flux distribution around the core tank of the STACY heterogeneous core with a position sensitive proportional counter (PSPC) in order to develop a method for measuring the reactivity of subcritical systems. Neutron flux distribution data with a position accuracy of ±13 mm have been obtained in the uranium concentration range of 50 g/L to 210 g/L, in both critical and subcritical states. The prompt neutron decay constant, α, was evaluated from the measurement data of pulsed neutron source experiments. We also calculated the distribution of the neutron flux and the ³He reaction rates at the location of the PSPC using the continuous-energy Monte Carlo code MCNP, and compared the measurement data with the calculation results. The calculated values generally agreed with the measurement data of the PSPC with Cd cover in the region above half of the solution height, but the difference between calculation and measurement was large in the region below half of the solution height. On the other hand, the calculated values agreed well with the measurement data of the PSPC without Cd cover. (author)

  15. MONTE CARLO SIMULATIONS OF THE ADSORPTION OF DIMERS ON STRUCTURED HETEROGENEOUS SURFACES

    Directory of Open Access Journals (Sweden)

    Abreu C.R.A.

    2001-01-01

    Full Text Available The effect of surface topography upon the adsorption of dimer molecules is analyzed by means of grand canonical ensemble Monte Carlo simulations. Heterogeneous surfaces were assumed to consist of a square lattice containing active sites with two different energies. These were distributed in three different configurations: a random distribution of isolated sites; a random distribution of grains with four high-energy sites; and a random distribution of grains with nine high-energy sites. For the random distribution of isolated sites, the results are in good agreement with the molecular simulations performed by Nitta et al. (1997). In general, the comparison with theoretical models shows that the Nitta et al. (1984) isotherm presents good predictions of dimer adsorption both on homogeneous and heterogeneous surfaces with sites having small differences in characteristic energies. The molecular simulation results also show that the energy topology of the solid surfaces plays an important role in the adsorption of dimers on solids with large differences in site energies. For these cases, the Nitta et al. model does not describe well the data on dimer adsorption on random heterogeneous surfaces (grains with one acid site), but does describe reasonably well the adsorption of dimers on more patchwise heterogeneous surfaces (grains with nine acid sites).
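
    A simplified grand canonical Monte Carlo sketch of dimer adsorption on a periodic square lattice with two site energies, using a textbook-style insertion/deletion acceptance rule. The lattice size, energies, chemical potential and the 10% fraction of strong sites are illustrative assumptions, not the parameters of the study.

```python
import numpy as np

rng = np.random.default_rng(3)

L = 32                        # lattice side (periodic boundaries)
beta = 1.0                    # inverse temperature 1/kT
mu = -2.5                     # chemical potential of the dimer reservoir
# Site adsorption energies: 10% randomly placed "strong" sites, the rest weak.
eps = np.where(rng.random((L, L)) < 0.1, 3.0, 1.0)

occupied = np.zeros((L, L), dtype=bool)
dimers = []                   # list of ((x1, y1), (x2, y2)) occupied site pairs
n_placements = 2 * L * L      # distinct dimer placements on the lattice

def dimer_energy(a, b):
    """Adsorption energy of a dimer on sites a and b (attractive, negative)."""
    return -(eps[a] + eps[b])

def second_site(site, orientation):
    x, y = site
    return ((x + 1) % L, y) if orientation == 0 else (x, (y + 1) % L)

for _ in range(200_000):
    if rng.random() < 0.5:                       # attempt an insertion
        a = (int(rng.integers(L)), int(rng.integers(L)))
        b = second_site(a, rng.integers(2))
        if not occupied[a] and not occupied[b]:
            dE = dimer_energy(a, b)
            acc = (n_placements / (len(dimers) + 1)) * np.exp(-beta * (dE - mu))
            if rng.random() < min(1.0, acc):
                occupied[a] = occupied[b] = True
                dimers.append((a, b))
    elif dimers:                                 # attempt a deletion
        idx = int(rng.integers(len(dimers)))
        a, b = dimers[idx]
        dE = dimer_energy(a, b)
        acc = (len(dimers) / n_placements) * np.exp(beta * (dE - mu))
        if rng.random() < min(1.0, acc):
            occupied[a] = occupied[b] = False
            dimers[idx] = dimers[-1]
            dimers.pop()

print("total coverage:", 2 * len(dimers) / (L * L))
print("coverage on strong sites:", occupied[eps == 3.0].mean())
```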

  16. In vivo study on influence of the heterogeneity of tissues in the dose distribution in high energy X ray therapy

    International Nuclear Information System (INIS)

    Aldred, M.A.

    1987-01-01

    Several authors have investigated the effect of tissue heterogeneity on the dose distribution in radiation therapy. Practically all of them carried out in vitro measurements using a solid body immersed in a water phantom in order to simulate an inhomogeneity such as bone, an air cavity, etc. In the present work, in vivo measurements were performed utilizing thermoluminescent dosimeters, whose appropriateness and convenience are well known. Eight patients at the Instituto de Radioterapia Oswaldo Cruz who were undergoing irradiation treatment of the pelvic region were selected. The ratio between the body entry radiation dose and the corresponding exit dose, when compared to the same ratio for a homogeneous phantom, gives the influence of the heterogeneity of the tissue the radiation crosses. The results found in those eight patients have shown that the in vivo measurements present a ratio about 8% smaller than in the homogeneous phantom case. (author) [pt

  17. A Metric and Workflow for Quality Control in the Analysis of Heterogeneity in Phenotypic Profiles and Screens

    Science.gov (United States)

    Gough, Albert; Shun, Tongying; Taylor, D. Lansing; Schurdak, Mark

    2016-01-01

    Heterogeneity is well recognized as a common property of cellular systems that impacts biomedical research and the development of therapeutics and diagnostics. Several studies have shown that analysis of heterogeneity gives insight into mechanisms of action of perturbagens, can be used to predict optimal combination therapies, and can quantify heterogeneity in tumors, where heterogeneity is believed to be associated with adaptation and resistance. Cytometry methods including high content screening (HCS), high throughput microscopy, flow cytometry, mass spec imaging and digital pathology capture cell level data for populations of cells. However, it is often assumed that the population response is normally distributed and therefore that the average adequately describes the results. A deeper understanding of the results of the measurements and more effective comparison of perturbagen effects requires analysis that takes into account the distribution of the measurements, i.e. the heterogeneity. However, the reproducibility of heterogeneous data collected on different days, and in different plates/slides has not previously been evaluated. Here we show that conventional assay quality metrics alone are not adequate for quality control of the heterogeneity in the data. To address this need, we demonstrate the use of the Kolmogorov-Smirnov statistic as a metric for monitoring the reproducibility of heterogeneity in an SAR screen, and describe a workflow for quality control in heterogeneity analysis. One major challenge in high throughput biology is the evaluation and interpretation of heterogeneity in thousands of samples, such as compounds in a cell-based screen. In this study we also demonstrate that three previously reported heterogeneity indices capture the shapes of the distributions and provide a means to filter and browse big data sets of cellular distributions in order to compare and identify distributions of interest. These metrics and methods are presented as a
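
    A minimal sketch of Kolmogorov-Smirnov-based quality control on cell-level distributions, assuming synthetic replicate data; the acceptance threshold is an illustrative placeholder rather than a recommended value.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(4)

# Synthetic cell-level readouts for the same control sample on two days:
# equal means but different heterogeneity (spread) across the cell population.
day1 = rng.normal(loc=10.0, scale=1.0, size=5000)
day2 = rng.normal(loc=10.0, scale=3.0, size=5000)

# A mean-based QC metric sees no difference between the replicates...
print("means:", round(day1.mean(), 2), round(day2.mean(), 2))

# ...but the two-sample Kolmogorov-Smirnov statistic, which compares the full
# empirical distributions, flags the change in heterogeneity.
ks_stat, p_value = ks_2samp(day1, day2)
print("KS statistic:", round(ks_stat, 3), "p-value:", p_value)

# A simple QC rule: reject the plate/day if the KS statistic against the
# reference control distribution exceeds a pre-defined threshold.
KS_THRESHOLD = 0.1            # illustrative acceptance limit
print("replicate passes QC:", ks_stat <= KS_THRESHOLD)
```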

  18. The CTTC 5G End-to-End Experimental Platform : Integrating Heterogeneous Wireless/Optical Networks, Distributed Cloud, and IoT Devices

    OpenAIRE

    Munoz, Raul; Mangues-Bafalluy, Josep; Vilalta, Ricard; Verikoukis, Christos; Alonso-Zarate, Jesus; Bartzoudis, Nikolaos; Georgiadis, Apostolos; Payaro, Miquel; Perez-Neira, Ana; Casellas, Ramon; Martinez, Ricardo; Nunez-Martinez, Jose; Requena Esteso, Manuel; Pubill, David; Font-Bach, Oriol

    2016-01-01

    The Internet of Things (IoT) will facilitate a wide variety of applications in different domains, such as smart cities, smart grids, industrial automation (Industry 4.0), smart driving, assistance of the elderly, and home automation. Billions of heterogeneous smart devices with different application requirements will be connected to the networks and will generate huge aggregated volumes of data that will be processed in distributed cloud infrastructures. On the other hand, there is also a gen...

  19. Impacts of Streambed Heterogeneity and Anisotropy on Residence Time of Hyporheic Zone.

    Science.gov (United States)

    Liu, Suning; Chui, Ting Fong May

    2018-05-01

    The hyporheic zone (HZ), which is the region beneath or alongside a streambed, plays an important role in the stream's ecology. The duration that a water molecule or a solute remains within the HZ, or residence time (RT), is one of the most common metrics used to evaluate the function of the HZ. The RT is greatly influenced by the streambed's hydraulic conductivity (K), which is intrinsically difficult to characterize due to its heterogeneity and anisotropy. Many laboratory and numerical studies of the HZ have simplified the streambed K to a constant, thus producing RT values that may differ from those gathered from the field. Some studies have considered the heterogeneity of the HZ, but very few have accounted for anisotropy or the natural K distributions typically found in real streambeds. This study developed numerical models in MODFLOW to examine the influence of heterogeneity and anisotropy, and that of the natural K distribution in a streambed, on the RT of the HZ. Heterogeneity and anisotropy were both found to shorten the mean and median RTs while increasing the range of the RTs. Moreover, heterogeneous K fields arranged in a more orderly pattern had longer RTs than those with random K distributions. These results could facilitate the design of streambed K values and distributions to achieve the desired RT during river restoration. They could also assist the translation of results from the more commonly considered homogeneous and/or isotropic conditions into heterogeneous and anisotropic field situations. © 2017, National Ground Water Association.

  20. The World Bacterial Biogeography and Biodiversity through Databases: A Case Study of NCBI Nucleotide Database and GBIF Database

    Directory of Open Access Journals (Sweden)

    Okba Selama

    2013-01-01

    Full Text Available Databases are an essential tool and resource within the field of bioinformatics. The primary aim of this study was to generate an overview of global bacterial biodiversity and biogeography using available data from the two largest public online databases, NCBI Nucleotide and GBIF. The secondary aim was to highlight the contribution each geographic area has to each database. The basis for the data analysis of this study was the metadata provided by both databases, mainly the taxonomy and the geographical area of origin of isolation of the microorganism (record). These were directly obtained from GBIF through the online interface, while E-utilities and Python were used in combination with programmatic web service access to obtain data from the NCBI Nucleotide Database. Results indicate that the American continent, and more specifically the USA, is the top contributor, while Africa and Antarctica are less well represented. This highlights the imbalance of exploration within these areas rather than any reduction in biodiversity. This study describes a novel approach to generating global scale patterns of bacterial biodiversity and biogeography and indicates that the Proteobacteria are the most abundant and widely distributed phylum within both databases.
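
    A minimal sketch of the programmatic E-utilities access mentioned above, using Biopython's Entrez module; the search terms and the e-mail address are placeholders.

```python
# Requires Biopython. NCBI asks that users identify themselves with an e-mail.
from Bio import Entrez

Entrez.email = "someone@example.org"     # placeholder; set a real address

def count_records(term):
    """Return how many Nucleotide records match an Entrez search term."""
    handle = Entrez.esearch(db="nucleotide", term=term, retmax=0)
    record = Entrez.read(handle)
    handle.close()
    return int(record["Count"])

# Records for a phylum, and records that also mention a country of isolation.
total = count_records("Proteobacteria[Organism]")
with_country = count_records('Proteobacteria[Organism] AND "USA"[All Fields]')
print(total, with_country)
```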

  1. Acknowledging patient heterogeneity in economic evaluation : a systematic literature review.

    Science.gov (United States)

    Grutters, Janneke P C; Sculpher, Mark; Briggs, Andrew H; Severens, Johan L; Candel, Math J; Stahl, James E; De Ruysscher, Dirk; Boer, Albert; Ramaekers, Bram L T; Joore, Manuela A

    2013-02-01

    Patient heterogeneity is the part of variability that can be explained by certain patient characteristics (e.g. age, disease stage). Population reimbursement decisions that acknowledge patient heterogeneity could potentially save money and increase population health. To date, however, economic evaluations pay only limited attention to patient heterogeneity. The objective of the present paper is to provide a comprehensive overview of the current knowledge regarding patient heterogeneity within economic evaluation of healthcare programmes. A systematic literature review was performed to identify methodological papers on the topic of patient heterogeneity in economic evaluation. Data were obtained using a keyword search of the PubMed database and manual searches. Handbooks were also included. Relevant data were extracted regarding potential sources of patient heterogeneity, in which of the input parameters of an economic evaluation these occur, methods to acknowledge patient heterogeneity and specific concerns associated with this acknowledgement. A total of 20 articles and five handbooks were included. The relevant sources of patient heterogeneity (demographics, preferences and clinical characteristics) and the input parameters where they occurred (baseline risk, treatment effect, health state utility and resource utilization) were combined in a framework. Methods were derived for the design, analysis and presentation phases of an economic evaluation. Concerns related mainly to the danger of false-positive results and equity issues. By systematically reviewing current knowledge regarding patient heterogeneity within economic evaluations of healthcare programmes, we provide guidance for future economic evaluations. Guidance is provided on which sources of patient heterogeneity to consider, how to acknowledge them in economic evaluation and potential concerns. The improved acknowledgement of patient heterogeneity in future economic evaluations may well improve the

  2. Indexed University presses: overlap and geographical distribution in five book assessment databases

    Energy Technology Data Exchange (ETDEWEB)

    Mañana-Rodriguez, J.; Gimenez-Toledo, E

    2016-07-01

    Scholarly books remained at the periphery of bibliometric study until recent developments provided tools for assessment purposes. Among scholarly book publishers, University Presses (UPs hereinafter), subject to specific ends and constraints in their publishing activity, might also remain on a second-level periphery despite their relevance as scholarly book publishers. In this study the authors analyze the absolute and relative presence, overlap and uniquely-indexed cases of 503 UPs by country, among five assessment-oriented databases containing data on scholarly book publishers: Book Citation Index, Scopus, Scholarly Publishers Indicators (Spain), the lists of publishers from the Norwegian System (CRISTIN) and the lists of publishers from the Finnish System (JUFO). The comparison between commercial databases and public, national databases points towards a differential pattern: prestigious UPs in the English-speaking world represent larger shares and there is a higher overall percentage of UPs in the commercial databases, while the richness and diversity is higher in the case of national databases. Explicit or de facto biases towards production in English by commercial databases, as well as diverse indexation criteria, might explain the differences observed. The analysis of the presence of UPs in different numbers of databases by country also provides a general picture of the average degree of diffusion of UPs among information systems. The analysis of ‘endemic’ UPs, those indexed in only one of the five databases, points to strongly different compositions of UPs in commercial and non-commercial databases. A combination of commercial and non-commercial databases seems to be the optimal option for assessment purposes, while the validity and desirability of the ongoing debate on the role of UPs can also be concluded. (Author)
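
    The overlap and 'endemic' analysis can be sketched with simple set operations; the press names below are invented, and only the five database labels echo the study.

```python
# Illustrative overlap analysis: which university presses are indexed in which
# assessment databases, and which are "endemic" to a single database.
databases = {
    "BKCI":    {"Press A", "Press B", "Press C"},
    "Scopus":  {"Press A", "Press B", "Press D"},
    "SPI":     {"Press B", "Press E"},
    "CRISTIN": {"Press B", "Press C", "Press E", "Press F"},
    "JUFO":    {"Press B", "Press F"},
}

all_presses = set.union(*databases.values())
for press in sorted(all_presses):
    hits = [name for name, presses in databases.items() if press in presses]
    label = "endemic to " + hits[0] if len(hits) == 1 else f"in {len(hits)} databases"
    print(f"{press}: {label}")
```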

  3. Modeling Transport of Cesium in Grimsel Granodiorite With Micrometer Scale Heterogeneities and Dynamic Update of Kd

    Science.gov (United States)

    Voutilainen, Mikko; Kekäläinen, Pekka; Siitari-Kauppi, Marja; Sardini, Paul; Muuri, Eveliina; Timonen, Jussi; Martin, Andrew

    2017-11-01

    Transport and retardation of cesium in Grimsel granodiorite taking into account heterogeneity of mineral and pore structure was studied using rock samples overcored from an in situ diffusion test at the Grimsel Test Site. The field test was part of the Long-Term Diffusion (LTD) project designed to characterize retardation properties (diffusion and distribution coefficients) under in situ conditions. Results of the LTD experiment for cesium showed that in-diffusion profiles and spatial concentration distributions were strongly influenced by the heterogeneous pore structure and mineral distribution. In order to study the effect of heterogeneity on the in-diffusion profile and spatial concentration distribution, a Time Domain Random Walk (TDRW) method was applied along with a feature for modeling chemical sorption in geological materials. A heterogeneous mineral structure of Grimsel granodiorite was constructed using X-ray microcomputed tomography (X-μCT) and the map was linked to previous results for mineral specific porosities and distribution coefficients (Kd) that were determined using C-14-PMMA autoradiography and batch sorption experiments, respectively. After this the heterogeneous structure contains information on local porosity and Kd in 3-D. It was found that the heterogeneity of the mineral structure on the micrometer scale affects significantly the diffusion and sorption of cesium in Grimsel granodiorite at the centimeter scale. Furthermore, the modeled in-diffusion profiles and spatial concentration distributions show similar shape and pattern to those from the LTD experiment. It was concluded that the use of detailed structure characterization and quantitative data on heterogeneity can significantly improve the interpretation and evaluation of transport experiments.
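
    A one-dimensional Time Domain Random Walk sketch in which voxel-wise heterogeneous porosity and Kd enter through a local retardation factor. The grid, parameter values and the exponential waiting-time rule are illustrative simplifications of the 3-D, image-based TDRW used in the study.

```python
import numpy as np

rng = np.random.default_rng(5)

# 1-D voxel grid with heterogeneous porosity and sorption (illustrative values).
n_vox = 200
dx = 1e-4                                    # voxel size [m]
Dp = 1e-11                                   # pore diffusion coefficient [m^2/s]
rho_b = 2700.0                               # bulk density [kg/m^3]
porosity = rng.uniform(0.003, 0.01, n_vox)   # mineral-dependent local porosity
Kd = rng.choice([1e-4, 1e-2], size=n_vox, p=[0.8, 0.2])   # local Kd [m^3/kg]
R = 1.0 + rho_b * Kd / porosity              # local retardation factor

def tdrw_final_voxel(t_max):
    """Walk one particle from the inlet (voxel 0) until t_max; return its voxel."""
    pos, t = 0, 0.0
    while True:
        tau = R[pos] * dx**2 / (2.0 * Dp)    # mean residence time in this voxel
        dt = rng.exponential(tau)            # time-domain step
        if t + dt > t_max:
            return pos
        t += dt
        step = 1 if (pos == 0 or rng.random() < 0.5) else -1   # reflecting inlet
        pos = min(pos + step, n_vox - 1)

t_max = 3.0 * 365.0 * 86400.0                # roughly a multi-year in situ test
positions = np.array([tdrw_final_voxel(t_max) for _ in range(1000)])
print("median penetration depth [mm]:", np.median(positions) * dx * 1e3)
```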

  4. Multiscale Investigation on Biofilm Distribution and Its Impact on Macroscopic Biogeochemical Reaction Rates: BIOFILM DISTRIBUTION AND RATE SCALING

    Energy Technology Data Exchange (ETDEWEB)

    Yan, Zhifeng [Institute of Surface-Earth System Science, Tianjin University, Tianjin China; Pacific Northwest National Laboratory, Richland WA USA; Liu, Chongxuan [Pacific Northwest National Laboratory, Richland WA USA; School of Environmental Science and Engineering, Southern University of Science and Technology, Shenzhen China; Liu, Yuanyuan [Pacific Northwest National Laboratory, Richland WA USA; School of Earth Science and Engineering, Nanjing University, Nanjing China; Bailey, Vanessa L. [Pacific Northwest National Laboratory, Richland WA USA

    2017-11-01

    Biofilms are critical locations for biogeochemical reactions in the subsurface environment. The occurrence and distribution of biofilms at microscale as well as their impacts on macroscopic biogeochemical reaction rates are still poorly understood. This paper investigated the formation and distributions of biofilms in heterogeneous sediments using multiscale models, and evaluated the effects of biofilm heterogeneity on local and macroscopic biogeochemical reaction rates. Sediment pore structures derived from X-ray computed tomography were used to simulate the microscale flow dynamics and biofilm distribution in the sediment column. The response of biofilm formation and distribution to the variations in hydraulic and chemical properties was first examined. One representative biofilm distribution was then utilized to evaluate its effects on macroscopic reaction rates using nitrate reduction as an example. The results revealed that microorganisms primarily grew on the surfaces of grains and aggregates near preferential flow paths where both electron donor and acceptor were readily accessible, leading to the heterogeneous distribution of biofilms in the sediments. The heterogeneous biofilm distribution decreased the macroscopic rate of biogeochemical reactions as compared with those in homogeneous cases. Operationally considering the heterogeneous biofilm distribution in macroscopic reactive transport models such as using dual porosity domain concept can significantly improve the prediction of biogeochemical reaction rates. Overall, this study provided important insights into the biofilm formation and distribution in soils and sediments as well as their impacts on the macroscopic manifestation of reaction rates.

  5. International Ventilation Cooling Application Database

    DEFF Research Database (Denmark)

    Holzer, Peter; Psomas, Theofanis Ch.; OSullivan, Paul

    2016-01-01

    The currently running International Energy Agency, Energy and Conservation in Buildings, Annex 62 Ventilative Cooling (VC) project, is coordinating research towards extended use of VC. Within this Annex 62 the joint research activity of International VC Application Database has been carried out...... and locations, using VC as a means of indoor comfort improvement. The building-spreadsheet highlights distributions of technologies and strategies, such as the following. (Numbers in % refer to the sample of the database’s 91 buildings.) It may be concluded that Ventilative Cooling is applied in temporary......, systematically investigating the distribution of technologies and strategies within VC. The database is structured as both a ticking-list-like building-spreadsheet and a collection of building-datasheets. The content of both closely follows Annex 62 State-Of-The- Art-Report. The database has been filled, based...

  6. Effects of fault heterogeneity on seismic energy and spectrum

    Science.gov (United States)

    Dragoni, Michele; Santini, Stefano

    2017-12-01

    We study the effects of friction heterogeneity on the dynamics of a seismogenic fault. To this aim, we consider a fault model containing two asperities with different static frictions and a rate-dependent dynamic friction. We consider the seismic events produced by the consecutive failure of the two asperities and study their properties as functions of the ratio between static frictions. In particular, we calculate the moment rate, the stress evolution during fault slip, the average stress drop, the partitioning of energy release, the seismic energy, the far-field waveforms and the spectrum of seismic waves. These quantities depend to varying extents on the friction distribution on the fault. In particular, the stress distribution on the fault is always strongly heterogeneous at the beginning of the seismic event. Seismic energy and frictional heat decrease with increasing friction heterogeneity, while seismic efficiency is constant. We obtain an equation relating seismic efficiency to the parameters of the friction law, showing that the efficiency is maximum for smaller values of dynamic friction. The seismic spectrum depends on the friction distribution through the positions and values of its minima. However, under the model assumption that the slip durations are the same for both asperities, the corner frequency is independent of the friction distribution, but it depends on the friction law and on the coupling between asperities. The model provides a relation between the total radiated energy and the seismic moment that is consistent with the empirical relation between the two quantities. The fault model with one asperity is also considered as a particular case. The model is applied to the 1965 Rat Islands (Alaska) earthquake and shows the role of fault heterogeneity in controlling the spatial distribution of stress drop as well as the time dependence and the final amount of radiated energy.

  7. Application of Voxel Phantoms to Study the Influence of Heterogeneous Distribution of Actinides in Lungs on In Vivo Counting Calibration Factors Using Animal Experimentations

    Energy Technology Data Exchange (ETDEWEB)

    Lamart, S.; Pierrat, N.; De Carlan, L.; Franck, D. [IRSN/DRPH/SDI/LEDI, BP 17, F-92 262 Fontenay-aux-Roses (France); Dudoignon, N. [IRSN/DRPH/SRBE/LRPAT, BP 17, F-92 262 Fontenay-aux-Roses (France); Rateau, S.; Van der Meeren, A.; Rouit, E. [CEA/DSV/DRR/SRCA/LRT BP no 12, F-91680 Bruyeres-le-Chatel (France); Bottlaender, M. [CEA/SHFJ, 4, place du General Leclerc F-91400 Orsay (France)

    2006-07-01

    Calibration of lung counting systems dedicated to the retention assessment of actinides in the lungs remains critical due to large uncertainties in calibration factors. Among them, the detector positioning, the chest wall thickness and composition (muscle/fat) assessment, and the distribution of the contamination are the main parameters influencing the detector response. In order to reduce these uncertainties, a numerical approach based on the application of voxel phantoms (numerical phantoms based on tomographic images, CT or MRI) associated with a Monte-Carlo code (namely M.C.N.P.) was developed. It led to the development of a dedicated tool, called O.E.D.I.P.E., that allows realistic voxel phantoms to be easily handled for the simulation of in vivo measurements (or dose calculation, an application that will not be presented in this paper). The goal of this paper is to present our study of the influence of the lung distribution on calibration factors using both animal experimentations and our numerical method. Indeed, physical anthropomorphic phantoms used for calibration always consider a uniform distribution of the source in the lungs, which is not true in many contamination conditions. The purpose of the study is to compare the response of the measurement detectors using a real distribution of actinide particles in the lungs, obtained from animal experimentations, with the homogeneous one considered as the reference. This comparison was performed using O.E.D.I.P.E., which can simulate almost any source distribution. A non-human primate was contaminated heterogeneously by intra-tracheal administration of actinide oxide. After euthanasia, gamma spectrometry measurements were performed on the pulmonary lobes to obtain the distribution of the contamination in the lungs. This realistic distribution was used to simulate a heterogeneous contamination in the numerical phantom of the non-human primate, which was compared with a simulation of a homogeneous contamination presenting the

  8. Application of Voxel Phantoms to Study the Influence of Heterogeneous Distribution of Actinides in Lungs on In Vivo Counting Calibration Factors Using Animal Experimentations

    International Nuclear Information System (INIS)

    Lamart, S.; Pierrat, N.; De Carlan, L.; Franck, D.; Dudoignon, N.; Rateau, S.; Van der Meeren, A.; Rouit, E.; Bottlaender, M.

    2006-01-01

    Calibration of lung counting systems dedicated to the retention assessment of actinides in the lungs remains critical due to large uncertainties in calibration factors. Among them, the detector positioning, the chest wall thickness and composition (muscle/fat) assessment, and the distribution of the contamination are the main parameters influencing the detector response. In order to reduce these uncertainties, a numerical approach based on the application of voxel phantoms (numerical phantoms based on tomographic images, CT or MRI) associated with a Monte-Carlo code (namely M.C.N.P.) was developed. It led to the development of a dedicated tool, called O.E.D.I.P.E., that allows realistic voxel phantoms to be easily handled for the simulation of in vivo measurements (or dose calculation, an application that will not be presented in this paper). The goal of this paper is to present our study of the influence of the lung distribution on calibration factors using both animal experimentations and our numerical method. Indeed, physical anthropomorphic phantoms used for calibration always consider a uniform distribution of the source in the lungs, which is not true in many contamination conditions. The purpose of the study is to compare the response of the measurement detectors using a real distribution of actinide particles in the lungs, obtained from animal experimentations, with the homogeneous one considered as the reference. This comparison was performed using O.E.D.I.P.E., which can simulate almost any source distribution. A non-human primate was contaminated heterogeneously by intra-tracheal administration of actinide oxide. After euthanasia, gamma spectrometry measurements were performed on the pulmonary lobes to obtain the distribution of the contamination in the lungs. This realistic distribution was used to simulate a heterogeneous contamination in the numerical phantom of the non-human primate, which was compared with a simulation of a homogeneous contamination presenting the

  9. Statistical Analysis of Spatiotemporal Heterogeneity of the Distribution of Air Quality and Dominant Air Pollutants and the Effect Factors in Qingdao Urban Zones

    Directory of Open Access Journals (Sweden)

    Xiangwei Zhao

    2018-04-01

    Full Text Available Air pollution has impacted people’s lives in urban China, and the analysis of the distribution and driving factors behind air quality has become a current research focus. In this study, the temporal heterogeneity of air quality (AQ and the dominant air pollutants across the four seasons were analyzed based on the Kruskal-Wallis rank-sum test method. Then, the spatial heterogeneity of AQ and the dominant air pollutants across four sites were analyzed based on the Wilcoxon signed-rank test method. Finally, the copula model was introduced to analyze the effect of relative factors on dominant air pollutants. The results show that AQ and dominant air pollutants present significant spatiotemporal heterogeneity in the study area. AQ is worst in winter and best in summer. PM10, O3, and PM2.5 are the dominant air pollutants in spring, summer, and winter, respectively. The average concentration of dominant air pollutants presents significant and diverse daily peaks and troughs across the four sites. The main driving factors are pollutants such as SO2, NO2, and CO, so pollutant emission reduction is the key to improving air quality. Corresponding pollution control measures should account for this heterogeneity in terms of AQ and the dominant air pollutants among different urban zones.
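
    The two rank-based tests can be applied as below; the seasonal AQI samples and the paired site values are synthetic, for illustration only.

```python
import numpy as np
from scipy.stats import kruskal, wilcoxon

rng = np.random.default_rng(6)

# Synthetic daily AQI samples for four seasons (illustrative values only):
# winter is shifted upward (worse air quality), summer downward.
winter = rng.normal(150, 30, 90)
spring = rng.normal(110, 25, 92)
summer = rng.normal(70, 20, 92)
autumn = rng.normal(100, 25, 91)

# Kruskal-Wallis rank-sum test: do the four seasons share one AQI distribution?
h_stat, p_seasonal = kruskal(winter, spring, summer, autumn)
print("seasonal heterogeneity: H =", round(h_stat, 1), ", p =", p_seasonal)

# Wilcoxon signed-rank test: paired daily PM2.5 at two monitoring sites.
site_a = rng.normal(80, 15, 90)
site_b = site_a + rng.normal(5, 10, 90)      # site B systematically higher
w_stat, p_spatial = wilcoxon(site_a, site_b)
print("spatial heterogeneity between paired sites: p =", p_spatial)
```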

  10. Direct Breakthrough Curve Prediction From Statistics of Heterogeneous Conductivity Fields

    Science.gov (United States)

    Hansen, Scott K.; Haslauer, Claus P.; Cirpka, Olaf A.; Vesselinov, Velimir V.

    2018-01-01

    This paper presents a methodology to predict the shape of solute breakthrough curves in heterogeneous aquifers at early times and/or under high degrees of heterogeneity, both cases in which the classical macrodispersion theory may not be applicable. The methodology relies on the observation that breakthrough curves in heterogeneous media are generally well described by lognormal distributions, and mean breakthrough times can be predicted analytically. The log-variance of solute arrival is thus sufficient to completely specify the breakthrough curves, and this is calibrated as a function of aquifer heterogeneity and dimensionless distance from a source plane by means of Monte Carlo analysis and statistical regression. Using the ensemble of simulated groundwater flow and solute transport realizations employed to calibrate the predictive regression, reliability estimates for the prediction are also developed. Additional theoretical contributions include heuristics for the time until an effective macrodispersion coefficient becomes applicable, and also an expression for its magnitude that applies in highly heterogeneous systems. It is seen that the results here represent a way to derive continuous time random walk transition distributions from physical considerations rather than from empirical field calibration.
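
    A sketch of the lognormal breakthrough-curve construction: given a predicted mean arrival time and a calibrated log-variance, the whole curve follows. Both numbers below are placeholders, not values from the paper's regression.

```python
import numpy as np
from scipy.stats import lognorm

# Predicted mean arrival time (e.g. advective travel time x / v) and the
# log-variance of arrival taken from a heterogeneity/distance regression.
t_mean = 200.0        # days (placeholder)
sigma2 = 0.8          # calibrated log-variance of solute arrival (placeholder)

# Lognormal parameters chosen so that the curve reproduces the mean arrival.
sigma = np.sqrt(sigma2)
mu = np.log(t_mean) - sigma2 / 2.0
btc = lognorm(s=sigma, scale=np.exp(mu))     # breakthrough-curve distribution

t = np.linspace(1.0, 1000.0, 1000)
pdf = btc.pdf(t)                             # breakthrough-curve shape
print("peak arrival time:", round(t[np.argmax(pdf)], 1), "days")
print("mean arrival time reproduced:", round(btc.mean(), 1), "days")
```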

  11. Heterogeneous characters modeling of instant message services users' online behavior.

    Directory of Open Access Journals (Sweden)

    Hongyan Cui

    Full Text Available Research on temporal characteristics of human dynamics has attracted much attention for its contribution to various areas such as communication, medical treatment, finance, etc. Existing studies show that the time intervals between two consecutive events present different non-Poisson characteristics, such as power-law, Pareto, bimodal distribution of power-law, exponential distribution, piecewise power-law, etc. With the emergence of new services, new types of distributions may arise. In this paper, we study the distributions of the time intervals between two consecutive visits to the QQ and WeChat services, the two most popular instant messaging services in China, and present a new finding that when the value of the statistical unit T is set to 0.001 s, the inter-event time distribution follows a piecewise distribution of exponential and power-law, indicating the heterogeneous character of IM services users' online behavior on different time scales. We infer that the heterogeneous character is related to the communication mechanism of IM and the habits of users. Then we develop a combination of an exponential model and an interest model to characterize the heterogeneity. Furthermore, we find that the exponent of the inter-event time distribution of the same service is different in two cities, which is correlated with the popularity of the services. Our research is useful for the application of information diffusion, prediction of economic development of cities, and so on.

  12. Heterogeneous characters modeling of instant message services users’ online behavior

    Science.gov (United States)

    Fang, Yajun; Horn, Berthold

    2018-01-01

    Research on temporal characteristics of human dynamics has attracted much attention for its contribution to various areas such as communication, medical treatment, finance, etc. Existing studies show that the time intervals between two consecutive events present different non-Poisson characteristics, such as power-law, Pareto, bimodal distribution of power-law, exponential distribution, piecewise power-law, etc. With the emergence of new services, new types of distributions may arise. In this paper, we study the distributions of the time intervals between two consecutive visits to the QQ and WeChat services, the two most popular instant messaging services in China, and present a new finding that when the value of the statistical unit T is set to 0.001 s, the inter-event time distribution follows a piecewise distribution of exponential and power-law, indicating the heterogeneous character of IM services users' online behavior on different time scales. We infer that the heterogeneous character is related to the communication mechanism of IM and the habits of users. Then we develop a combination of an exponential model and an interest model to characterize the heterogeneity. Furthermore, we find that the exponent of the inter-event time distribution of the same service is different in two cities, which is correlated with the popularity of the services. Our research is useful for the application of information diffusion, prediction of economic development of cities, and so on. PMID:29734327

  13. Spatial heterogeneity study of vegetation coverage at Heihe River Basin

    Science.gov (United States)

    Wu, Lijuan; Zhong, Bo; Guo, Liyu; Zhao, Xiangwei

    2014-11-01

    Spatial heterogeneity of the animal-landscape system has three major components: heterogeneity of resource distributions in the physical environment, heterogeneity of plant tissue chemistry, and heterogeneity of movement modes by the animal. Furthermore, all three types of heterogeneity interact with each other and can either reinforce or offset one another, thereby affecting system stability and dynamics. In previous studies, the study areas were investigated by field sampling, which costs a large amount of manpower. In addition, uncertainty in sampling affects the quality of field data, which leads to unsatisfactory results for the entire study. In this study, remote sensing data are used to guide the sampling for research on the heterogeneity of vegetation coverage, to avoid errors caused by the randomness of field sampling. Semi-variance and fractal dimension analysis are used to analyze the spatial heterogeneity of vegetation coverage at the Heihe River Basin. The spherical model with nugget is used to fit the semivariogram of vegetation coverage. Based on the experiment above, it is found that (1) there is a strong correlation between vegetation coverage and the distance between vegetation populations within the range of 0-28051.3188 m at the Heihe River Basin, but the correlation is lost suddenly when the distance is greater than 28051.3188 m; (2) the degree of spatial heterogeneity of vegetation coverage at the Heihe River Basin is medium; (3) spatial distribution variability of vegetation occurs mainly on small scales; (4) the degree of spatial autocorrelation is 72.29%, between 25% and 75%, which means that the spatial correlation of vegetation coverage at the Heihe River Basin is medium high.
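
    A sketch of the spherical semivariogram with nugget used above; the nugget and sill values are illustrative, and only the ~28 km range echoes the study.

```python
import numpy as np

def spherical_variogram(h, nugget, sill, a):
    """Spherical semivariogram with nugget; `a` is the correlation range."""
    h = np.asarray(h, dtype=float)
    gamma = np.where(
        h < a,
        nugget + (sill - nugget) * (1.5 * h / a - 0.5 * (h / a) ** 3),
        sill,
    )
    return np.where(h == 0, 0.0, gamma)   # semivariance is zero at lag zero

# Parameters in the spirit of the study: correlation vanishes beyond ~28 km.
nugget, sill, a = 0.005, 0.02, 28051.3188          # range in metres

lags = np.array([0, 5_000, 15_000, 28_051, 40_000])
print(spherical_variogram(lags, nugget, sill, a))

# The nugget/sill ratio quantifies the spatially unstructured share of the
# variance; 1 - nugget/sill = 0.75 here, of the same order as the ~72%
# degree of spatial autocorrelation reported above.
print("spatially structured fraction:", 1 - nugget / sill)
```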

  14. Development, deployment and operations of ATLAS databases

    International Nuclear Information System (INIS)

    Vaniachine, A. V.; von der Schmitt, J. G.

    2008-01-01

    In preparation for ATLAS data taking, a coordinated shift from development towards operations has occurred in ATLAS database activities. In addition to development and commissioning activities in databases, ATLAS is active in the development and deployment (in collaboration with the WLCG 3D project) of the tools that allow the worldwide distribution and installation of databases and related datasets, as well as the actual operation of this system on ATLAS multi-grid infrastructure. We describe development and commissioning of major ATLAS database applications for online and offline. We present the first scalability test results and ramp-up schedule over the initial LHC years of operations towards the nominal year of ATLAS running, when the database storage volumes are expected to reach 6.1 TB for the Tag DB and 1.0 TB for the Conditions DB. ATLAS database applications require robust operational infrastructure for data replication between online and offline at Tier-0, and for the distribution of the offline data to Tier-1 and Tier-2 computing centers. We describe ATLAS experience with Oracle Streams and other technologies for coordinated replication of databases in the framework of the WLCG 3D services

  15. Synchronization in networks with heterogeneous coupling delays

    Science.gov (United States)

    Otto, Andreas; Radons, Günter; Bachrathy, Dániel; Orosz, Gábor

    2018-01-01

    Synchronization in networks of identical oscillators with heterogeneous coupling delays is studied. A decomposition of the network dynamics is obtained by block diagonalizing a newly introduced adjacency lag operator which contains the topology of the network as well as the corresponding coupling delays. This generalizes the master stability function approach, which was developed for homogeneous delays. As a result the network dynamics can be analyzed by delay differential equations with distributed delay, where different delay distributions emerge for different network modes. Frequency domain methods are used for the stability analysis of synchronized equilibria and synchronized periodic orbits. As an example, the synchronization behavior in a system of delay-coupled Hodgkin-Huxley neurons is investigated. It is shown that the parameter regions where synchronized periodic spiking is unstable expand when increasing the delay heterogeneity.

  16. Exploiting Distributed, Heterogeneous and Sensitive Data Stocks while Maintaining the Owner's Data Sovereignty.

    Science.gov (United States)

    Lablans, M; Kadioglu, D; Muscholl, M; Ückert, F

    2015-01-01

    To achieve statistical significance in medical research, biological or data samples from several bio- or databanks often need to be complemented by those of other institutions. For that purpose, IT-based search services have been established to locate datasets matching a given set of criteria in databases distributed across several institutions. However, previous approaches require data owners to disclose information about their samples, raising a barrier for their participation in the network. To devise a method to search distributed databases for datasets matching a given set of criteria while fully maintaining their owner's data sovereignty. As a modification to traditional federated search services, we propose the decentral search, which allows the data owner a high degree of control. Relevant data are loaded into local bridgeheads, each under their owner's sovereignty. Researchers can formulate criteria sets along with a project proposal using a central search broker, which then notifies the bridgeheads. The criteria are, however, treated as an inquiry rather than a query: Instead of responding with results, bridgeheads notify their owner and wait for his/her decision regarding whether and what to answer based on the criteria set, the matching datasets and the specific project proposal. Without the owner's explicit consent, no data leaves his/her institution. The decentral search has been deployed in one of the six German Centers for Health Research, comprised of eleven university hospitals. In the process, compliance with German data protection regulations has been confirmed. The decentral search also marks the centerpiece of an open source registry software toolbox aiming to build a national registry of rare diseases in Germany. While the sacrifice of real-time answers impairs some use-cases, it leads to several beneficial side effects: improved data protection due to data parsimony, tolerance for incomplete data schema mappings and flexibility with regard
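
    The inquiry-and-consent flow can be sketched as below. This is a toy illustration; the class names and the aggregate-count reply are assumptions, not the deployed registry toolbox.

```python
from dataclasses import dataclass

@dataclass
class Inquiry:
    criteria: dict          # e.g. {"diagnosis": "rare-x"}
    proposal: str           # project proposal shown to the data owner

class Bridgehead:
    """Local component under the data owner's control; nothing leaves it
    without an explicit decision by the owner."""

    def __init__(self, owner, records):
        self.owner = owner
        self.records = records
        self.inbox = []

    def notify(self, inquiry):
        # The criteria set is treated as an inquiry, not a query:
        # it is queued for the owner instead of being answered automatically.
        self.inbox.append(inquiry)

    def owner_reviews(self, inquiry, approve):
        """Called by the owner; only on approval is an aggregate count returned."""
        if not approve:
            return None
        matches = [r for r in self.records
                   if all(r.get(k) == v for k, v in inquiry.criteria.items())]
        return {"site": self.owner, "matching_datasets": len(matches)}

class SearchBroker:
    """Central broker: it only forwards inquiries and stores no patient data."""

    def __init__(self, bridgeheads):
        self.bridgeheads = bridgeheads

    def submit(self, inquiry):
        for bh in self.bridgeheads:
            bh.notify(inquiry)

# Usage: a researcher submits criteria plus a proposal; each owner decides.
site_a = Bridgehead("Hospital A", [{"diagnosis": "rare-x"}, {"diagnosis": "y"}])
site_b = Bridgehead("Hospital B", [{"diagnosis": "rare-x"}])
broker = SearchBroker([site_a, site_b])
inquiry = Inquiry({"diagnosis": "rare-x"}, "Proposal: natural history study")
broker.submit(inquiry)
print(site_a.owner_reviews(inquiry, approve=True))    # owner consents
print(site_b.owner_reviews(inquiry, approve=False))   # owner declines: no data
```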

  17. Comparative analysis of perioperative complications between a multicenter prospective cervical deformity database and the Nationwide Inpatient Sample database.

    Science.gov (United States)

    Passias, Peter G; Horn, Samantha R; Jalai, Cyrus M; Poorman, Gregory; Bono, Olivia J; Ramchandran, Subaraman; Smith, Justin S; Scheer, Justin K; Sciubba, Daniel M; Hamilton, D Kojo; Mundis, Gregory; Oh, Cheongeun; Klineberg, Eric O; Lafage, Virginie; Shaffrey, Christopher I; Ames, Christopher P

    2017-11-01

    Complication rates for adult cervical deformity are poorly characterized given the complexity and heterogeneity of cases. To compare perioperative complication rates following adult cervical deformity corrective surgery between a prospective multicenter database for patients with cervical deformity (PCD) and the Nationwide Inpatient Sample (NIS). Retrospective review of prospective databases. A total of 11,501 adult patients with cervical deformity (11,379 patients from the NIS and 122 patients from the PCD database). Perioperative medical and surgical complications. The NIS was queried (2001-2013) for cervical deformity discharges for patients ≥18 years undergoing cervical fusions using International Classification of Disease, Ninth Revision (ICD-9) coding. Patients ≥18 years from the PCD database (2013-2015) were selected. Equivalent complications were identified and rates were compared. Bonferroni correction (pdatabases. A total of 11,379 patients from the NIS database and 122 patients from the PCD database were identified. Patients from the PCD database were older (62.49 vs. 55.15, pdatabase. The PCD database had an increased risk of reporting overall complications compared with the NIS (odds ratio: 2.81, confidence interval: 1.81-4.38). Only device-related complications were greater in the NIS (7.1% vs. 1.1%, p=.007). Patients from the PCD database displayed higher rates of the following complications: peripheral vascular (0.8% vs. 0.1%, p=.001), gastrointestinal (GI) (2.5% vs. 0.2%, pdatabases (p>.004). Based on surgical approach, the PCD reported higher GI and neurologic complication rates for combined anterior-posterior procedures (pdatabase revealed higher overall and individual complication rates and higher data granularity. The nationwide database may underestimate complications of patients with adult cervical deformity (ACD) particularly in regard to perioperative surgical details owing to coding and deformity generalizations. The surgeon-maintained database

  18. Towards P2P XML Database Technology

    NARCIS (Netherlands)

    Y. Zhang (Ying)

    2007-01-01

    To ease the development of data-intensive P2P applications, we envision a P2P XML Database Management System (P2P XDBMS) that acts as a database middle-ware, providing a uniform database abstraction on top of a dynamic set of distributed data sources. In this PhD work, we research which

  19. A web-based, relational database for studying glaciers in the Italian Alps

    Science.gov (United States)

    Nigrelli, G.; Chiarle, M.; Nuzzi, A.; Perotti, L.; Torta, G.; Giardino, M.

    2013-02-01

    Glaciers are among the best terrestrial indicators of climate change and thus glacier inventories have attracted a growing, worldwide interest in recent years. In Italy, the first official glacier inventory was completed in 1925 and 774 glacial bodies were identified. As the amount of data continues to increase, and new techniques become available, there is a growing demand for computer tools that can efficiently manage the collected data. The Research Institute for Geo-hydrological Protection of the National Research Council, in cooperation with the Departments of Computer Science and Earth Sciences of the University of Turin, created a database that provides a modern tool for storing, processing and sharing glaciological data. The database was developed to meet the need to store heterogeneous information, which can be retrieved through a set of web search queries. The database's architecture is server-side and was built using open source software. The website interface, simple and intuitive, was intended to meet the needs of a distributed public: through this interface, any type of glaciological data can be managed, specific queries can be performed, and the results can be exported in a standard format. The use of a relational database to store and organize a large variety of information about Italian glaciers collected over the last hundred years constitutes a significant step forward in ensuring the safety and accessibility of such data. Moreover, the same benefits extend to handling information in the future, including new and emerging types of data formats, such as geographic and multimedia files. Future developments include the integration of cartographic data, such as base maps, satellite images and vector data. The relational database described in this paper will be the heart of a new geographic system that will merge data, data attributes and maps, leading to a complete description of Italian glacial

  20. Timeliness and Predictability in Real-Time Database Systems

    National Research Council Canada - National Science Library

    Son, Sang H

    1998-01-01

    The confluence of computers, communications, and databases is quickly creating a globally distributed database where many applications require real time access to both temporally accurate and multimedia data...

  1. Heterogeneous distribution of a diffusional tracer in the aortic wall of normal and atherosclerotic rabbits

    International Nuclear Information System (INIS)

    Tsutsui, H.; Tomoike, H.; Nakamura, M.

    1990-01-01

    Tracer distribution as an index of nutritional support across the thoracic and abdominal aortas in rabbits in the presence or absence of atherosclerotic lesions was evaluated using [14C]antipyrine, a metabolically inert, diffusible indicator. Intimal plaques were produced by endothelial balloon denudation of the thoracic aorta and a 1% cholesterol diet. After a steady intravenous infusion of 200 microCi of [14C]antipyrine for 60 seconds, thoracic and abdominal aortas and the heart were excised, and autoradiograms of 20-microns-thick sections were quantified, using microcomputer-aided densitometry. Regional radioactivity and regional diffusional support, as an index of nutritional flow estimated from the timed collections of arterial blood, were 367 and 421 nCi.g-1 (82 and 106 ml.min-1.100 g-1) in thoracic aortic media of the normal and atherosclerotic rabbits, respectively. Radioactivity at the thickened intima was 179 nCi.g-1 (p less than 0.01 versus media). The gruel was noted at a deeper site within the thickened intima, and diffusional support here was 110 nCi.g-1 (p less than 0.01 versus an average radioactivity at the thickened intima). After ligating the intercostal arteries, regional tracer distribution in the media beneath the fibrofatty lesion, but not the plaque-free intima, was reduced to 46%. Thus, in the presence of advanced intimal thickening, the heterogeneous distribution of diffusional flow is prominent across the vessel wall, and abluminal routes are crucial to meet the increased demands of nutritional requirements

  2. Explaining local-scale species distributions: relative contributions of spatial autocorrelation and landscape heterogeneity for an avian assemblage.

    Directory of Open Access Journals (Sweden)

    Brady J Mattsson

    Full Text Available Understanding interactions between mobile species distributions and landcover characteristics remains an outstanding challenge in ecology. Multiple factors could explain species distributions, including endogenous evolutionary traits leading to conspecific clustering and exogenous habitat features that support life history requirements. Birds are a useful taxon for examining hypotheses about the relative importance of these factors among species in a community. We developed a hierarchical Bayes approach to model the relationships between bird species occupancy and local landcover variables accounting for spatial autocorrelation, species similarities, and partial observability. We fit alternative occupancy models to detections of 90 bird species observed during repeat visits to 316 point-counts forming a 400-m grid throughout the Patuxent Wildlife Research Refuge in Maryland, USA. Models with landcover variables performed significantly better than our autologistic and null models, supporting the hypothesis that local landcover heterogeneity is important as an exogenous driver for species distributions. Conspecific clustering alone was a comparatively poor descriptor of local community composition, but there was evidence for spatial autocorrelation in all species. Considerable uncertainty remains as to whether landcover combined with spatial autocorrelation is most parsimonious for describing bird species distributions at a local scale. Spatial structuring may be weaker at intermediate scales within which dispersal is less frequent, information flows are localized, and landcover types become spatially diversified and therefore exhibit little aggregation. Examining such hypotheses across species assemblages contributes to our understanding of community-level associations with conspecifics and landscape composition.

  3. Overload cascading failure on complex networks with heterogeneous load redistribution

    Science.gov (United States)

    Hou, Yueyi; Xing, Xiaoyun; Li, Menghui; Zeng, An; Wang, Yougui

    2017-09-01

    Many real systems including the Internet, power-grid and financial networks experience rare but large overload cascading failures triggered by small initial shocks. Many models on complex networks have been developed to investigate this phenomenon. Most of these models are based on the load redistribution process and assume that the load on a failed node shifts to nearby nodes in the networks either evenly or according to the load distribution rule before the cascade. Inspired by the fact that real power-grid tends to place the excess load on the nodes with high remaining capacities, we study a heterogeneous load redistribution mechanism in a simplified sandpile model in this paper. We find that weak heterogeneity in load redistribution can effectively mitigate the cascade while strong heterogeneity in load redistribution may even enlarge the size of the final failure. With a parameter θ to control the degree of the redistribution heterogeneity, we identify a rather robust optimal θ∗ = 1. Finally, we find that θ∗ tends to shift to a larger value if the initial sand distribution is homogeneous.
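
    A toy simulation can make the role of θ in the redistribution rule described above concrete. The network, capacities and initial loads below are arbitrary illustrative choices rather than the paper's sandpile setup, and the weighting (remaining capacity)**theta is the assumed heterogeneity rule, with theta = 0 recovering even redistribution.

```python
import random

# Toy cascade model: a failed node's load is shared among surviving neighbours
# in proportion to (remaining capacity)**theta.

def redistribute(load, capacity, neighbours, node, theta, failed):
    receivers = [n for n in neighbours[node] if n not in failed]
    if not receivers:
        return
    weights = [max(capacity[n] - load[n], 0.0) ** theta for n in receivers]
    total = sum(weights)
    for n, w in zip(receivers, weights):
        share = w / total if total > 0 else 1.0 / len(receivers)
        load[n] += load[node] * share
    load[node] = 0.0

def cascade_size(load, capacity, neighbours, seed, theta):
    failed, queue = set(), [seed]
    load[seed] = capacity[seed] + 1.0           # small initial shock
    while queue:
        node = queue.pop()
        if node in failed:
            continue
        failed.add(node)
        redistribute(load, capacity, neighbours, node, theta, failed)
        queue.extend(n for n in neighbours[node]
                     if n not in failed and load[n] > capacity[n])
    return len(failed)

if __name__ == "__main__":
    random.seed(0)
    N = 200
    neighbours = {i: random.sample([j for j in range(N) if j != i], 4) for i in range(N)}
    for theta in (0.0, 1.0, 4.0):
        capacity = [1.0 + random.random() for _ in range(N)]
        load = [0.6 * c for c in capacity]
        print(f"theta={theta}: cascade size {cascade_size(load, capacity, neighbours, 0, theta)}")
```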

  4. Design considerations for large heterogeneous liquid-metal fast breeder reactors

    International Nuclear Information System (INIS)

    Tzanos, C.P.; Barthold, W.P.

    1977-01-01

    A systematic method for designing heterogeneous configurations having a near-zero value of sodium void reactivity is presented. It is based on the following principles: (a) the thickness of the internal blanket zones should be such that the reactivity change resulting from voiding any core zone is practically independent of any further increase in the thickness of these zones, and (b) the sodium void reactivity of each core zone must have a near-zero value. Neutronic coupling among the core zones of heterogeneous configurations decreases as the thickness of the internal blanket zones increases. To quantify coupling, Avery's coupling coefficients are used. Reduced coupling among the core zones of a heterogeneous design, compared to a homogeneous design, results in (a) increased sensitivity of the power distribution to enrichment distribution perturbations, (b) reduced reactivity worth of local perturbations, and (c) higher cladding temperatures during operational transients initiated by local perturbations. Heterogeneous designs compared to equivalent homogeneous designs have (a) lower core Doppler coefficient values, (b) larger fuel compaction reactivities, and (c) higher maximum cladding temperatures

  5. Modeling heterogeneous unsaturated porous media flow at Yucca Mountain

    International Nuclear Information System (INIS)

    Robey, T.H.

    1994-01-01

    Geologic systems are inherently heterogeneous and this heterogeneity can have a significant impact on unsaturated flow through porous media. Most previous efforts to model groundwater flow through Yucca Mountain have used stratigraphic units with homogeneous properties. However, modeling heterogeneous porous and fractured tuff in a more realistic manner requires numerical methods for generating heterogeneous simulations of the media, scaling of material properties from core scale to computational scale, and flow modeling that allows channeling. The Yucca Mountain test case of the INTRAVAL project is used to test the numerical approaches. Geostatistics is used to generate more realistic representations of the stratigraphic units and heterogeneity within units is generated using sampling from property distributions. Scaling problems are reduced using an adaptive grid that minimizes heterogeneity within each flow element. A flow code based on the dual mixed-finite-element method that allows for heterogeneity and channeling is employed. In the Yucca Mountain test case, the simulated volumetric water contents matched the measured values at drill hole USW UZ-16 except in the nonwelded portion of Prow Pass

  6. Electron beam dosimetry in heterogeneous phantoms using a MAGIC normoxic polymer gel

    International Nuclear Information System (INIS)

    Ghahraman Asl, R.; Nedaie, H.; Bolouri, B.; Arbabi, A.

    2010-01-01

    Nowadays radiosensitive polymer gels are used as a reliable dosimetry tool for verification of 3D dose distributions. Special characteristics of these dosimeters have made them useful for verification of complex dose distributions in clinical situations. The aim of this work was to evaluate the capability of a normoxic polymer gel to determine electron dose distributions in different slab phantoms in the presence of small heterogeneities. Materials and Methods: Different cylindrical phantoms containing gel were used under slab phantoms during each irradiation. MR images of irradiated gel phantoms were obtained to determine their R2 relaxation maps. 1D and 2D lateral dose profiles were acquired at depths of 1 cm for an 8 MeV beam and 1 and 4 cm for the 15 MeV energy, and then compared with the lateral dose profiles measured using a diode detector. In addition, 3D dose distributions around these heterogeneities for the same energies and depths were measured using a gel dosimeter. Results: Dose resolution for MR gel images in the range of 0-10 Gy was less than 1.55 Gy. Mean dose difference and distance to agreement for dose profiles were 2.6% and 2.2 mm, respectively. The results of the MAGIC-type polymer gel for bone heterogeneity at 8 MeV showed a reduction in dose of approximately 50%, and of 30% and 10% at depths of 1 and 4 cm at 15 MeV. However, for the air heterogeneity, increases in dose of approximately 50% at a depth of 1 cm under the heterogeneity at 8 MeV, and of 20% and 45%, respectively, at 15 MeV were observed. Discussion and Conclusion: Generally, electron beam distributions are significantly altered in the presence of tissue inhomogeneities such as bone and air cavities, this being related to mass stopping and mass scattering powers of heterogeneous materials. At the same time, hot and cold scatter lobes under heterogeneity regions due to scatter edge effects were also seen. However, these effects (increased dose, reduced dose, hot and cold spots) at deeper depths, are

  7. Evidence for Truncated Exponential Probability Distribution of Earthquake Slip

    KAUST Repository

    Thingbaijam, Kiran Kumar; Mai, Paul Martin

    2016-01-01

    Earthquake ruptures comprise spatially varying slip on the fault surface, where slip represents the displacement discontinuity between the two sides of the rupture plane. In this study, we analyze the probability distribution of coseismic slip, which provides important information to better understand earthquake source physics. Although the probability distribution of slip is crucial for generating realistic rupture scenarios for simulation-based seismic and tsunami-hazard analysis, the statistical properties of earthquake slip have received limited attention so far. Here, we use the online database of earthquake source models (SRCMOD) to show that the probability distribution of slip follows the truncated exponential law. This law agrees with rupture-specific physical constraints limiting the maximum possible slip on the fault, similar to physical constraints on maximum earthquake magnitudes. We show that the parameters of the best-fitting truncated exponential distribution scale with average coseismic slip. This scaling property reflects the control of the underlying stress distribution and fault strength on the rupture dimensions, which determines the average slip. Thus, the scale-dependent behavior of slip heterogeneity is captured by the probability distribution of slip. We conclude that the truncated exponential law accurately quantifies coseismic slip distribution and therefore allows for more realistic modeling of rupture scenarios. © 2016, Seismological Society of America. All rights reserved.
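
    For reference, one standard parameterization of a truncated exponential density for slip s on [0, s_max], consistent with the description above but not necessarily the exact notation used in the paper, is:

```latex
f(s \mid \lambda, s_{\max}) =
  \frac{\exp(-s/\lambda)}{\lambda\left[1 - \exp(-s_{\max}/\lambda)\right]},
  \qquad 0 \le s \le s_{\max},
\qquad
\mathbb{E}[s] = \lambda - \frac{s_{\max}\, e^{-s_{\max}/\lambda}}{1 - e^{-s_{\max}/\lambda}} .
```

    Here λ sets the scale, so a best-fitting λ that grows with average slip matches the reported scaling, while s_max encodes the physical limit on the maximum possible slip.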

  8. Evidence for Truncated Exponential Probability Distribution of Earthquake Slip

    KAUST Repository

    Thingbaijam, Kiran K. S.

    2016-07-13

    Earthquake ruptures comprise spatially varying slip on the fault surface, where slip represents the displacement discontinuity between the two sides of the rupture plane. In this study, we analyze the probability distribution of coseismic slip, which provides important information to better understand earthquake source physics. Although the probability distribution of slip is crucial for generating realistic rupture scenarios for simulation-based seismic and tsunami-hazard analysis, the statistical properties of earthquake slip have received limited attention so far. Here, we use the online database of earthquake source models (SRCMOD) to show that the probability distribution of slip follows the truncated exponential law. This law agrees with rupture-specific physical constraints limiting the maximum possible slip on the fault, similar to physical constraints on maximum earthquake magnitudes. We show that the parameters of the best-fitting truncated exponential distribution scale with average coseismic slip. This scaling property reflects the control of the underlying stress distribution and fault strength on the rupture dimensions, which determines the average slip. Thus, the scale-dependent behavior of slip heterogeneity is captured by the probability distribution of slip. We conclude that the truncated exponential law accurately quantifies coseismic slip distribution and therefore allows for more realistic modeling of rupture scenarios. © 2016, Seismological Society of America. All rights reserved.

  9. Active in-database processing to support ambient assisted living systems.

    Science.gov (United States)

    de Morais, Wagner O; Lundström, Jens; Wickström, Nicholas

    2014-08-12

    As an alternative to the existing software architectures that underpin the development of smart homes and ambient assisted living (AAL) systems, this work presents a database-centric architecture that takes advantage of active databases and in-database processing. Current platforms supporting AAL systems use database management systems (DBMSs) exclusively for data storage. Active databases employ database triggers to detect and react to events taking place inside or outside of the database. DBMSs can be extended with stored procedures and functions that enable in-database processing. This means that the data processing is integrated and performed within the DBMS. The feasibility and flexibility of the proposed approach were demonstrated with the implementation of three distinct AAL services. The active database was used to detect bed-exits and to discover common room transitions and deviations during the night. In-database machine learning methods were used to model early night behaviors. Consequently, active in-database processing avoids transferring sensitive data outside the database, and this improves performance, security and privacy. Furthermore, centralizing the computation into the DBMS facilitates code reuse, adaptation and maintenance. These are important system properties that take into account the evolving heterogeneity of users, their needs and the devices that are characteristic of smart homes and AAL systems. Therefore, DBMSs can provide capabilities to address requirements for scalability, security, privacy, dependability and personalization in applications of smart environments in healthcare.
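
    As a minimal illustration of the active-database idea described above (triggers reacting to events inside the DBMS), the sketch below detects a bed-exit as rows arrive, without the raw sensor data ever leaving the database. Table, column and event names are invented for the example, and SQLite stands in for the full DBMS with stored procedures used by the cited system.

```python
import sqlite3

# A trigger detects a 1 -> 0 transition of the bed occupancy sensor and logs
# a "bed_exit" event entirely inside the database.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE bed_sensor (ts TEXT, occupied INTEGER);
CREATE TABLE events (ts TEXT, event TEXT);

CREATE TRIGGER detect_bed_exit
AFTER INSERT ON bed_sensor
WHEN NEW.occupied = 0
BEGIN
    INSERT INTO events
    SELECT NEW.ts, 'bed_exit'
    WHERE (SELECT occupied FROM bed_sensor
           WHERE ts < NEW.ts ORDER BY ts DESC LIMIT 1) = 1;
END;
""")

rows = [("2024-01-01T02:00", 1), ("2024-01-01T02:10", 1), ("2024-01-01T02:20", 0)]
conn.executemany("INSERT INTO bed_sensor VALUES (?, ?)", rows)
print(conn.execute("SELECT * FROM events").fetchall())
# -> [('2024-01-01T02:20', 'bed_exit')]
```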

  10. How heterogeneous susceptibility and recovery rates affect the spread of epidemics on networks

    Directory of Open Access Journals (Sweden)

    Wei Gou

    2017-08-01

    Full Text Available In this paper, an extended heterogeneous SIR model is proposed, which generalizes the heterogeneous mean-field theory. Unlike the traditional heterogeneous mean-field model, which takes into account only the heterogeneity of degree, our model considers not only the heterogeneity of degree but also the heterogeneity of susceptibility and recovery rates. Then, we analytically study the basic reproductive number and the final epidemic size. Combined with numerical simulations, it is found that the basic reproductive number depends on the mean of the distributions of susceptibility and disease course when the two are independent. If the mean of these two distributions is identical, increasing the variance of susceptibility may block the spread of epidemics, while the corresponding increase in the variance of disease course has little effect on the final epidemic size. It is also shown that positive correlations between individual susceptibility, course of disease and the square of degree make the population more vulnerable to epidemics and facilitate epidemic prevalence, whereas negative correlations make the population less vulnerable and impede epidemic prevalence. Keywords: Networks, Heterogeneity, Susceptibility, Recovery rates, Correlation, The basic reproductive number, The final epidemic size
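
    A rough Monte Carlo sketch of the susceptibility-variance effect described above: individuals receive their own susceptibility (equal means, different variances) on a random contact network, and the resulting final epidemic sizes are compared. The network model, update rule and parameter values are illustrative assumptions, not the paper's extended heterogeneous SIR formulation.

```python
import random

# Discrete-time SIR on a random contact network where node j has its own
# susceptibility susc[j] and node i its own recovery probability rec[i].

def final_size(n, k, trans, susc, rec, seed=1):
    rng = random.Random(seed)
    nbrs = [set() for _ in range(n)]
    while sum(len(s) for s in nbrs) < n * k:      # crude random graph, mean degree ~k
        a, b = rng.randrange(n), rng.randrange(n)
        if a != b:
            nbrs[a].add(b)
            nbrs[b].add(a)
    state = ["S"] * n
    state[0] = "I"
    while "I" in state:
        nxt = state[:]
        for i, s in enumerate(state):
            if s != "I":
                continue
            for j in nbrs[i]:
                if state[j] == "S" and rng.random() < trans * susc[j]:
                    nxt[j] = "I"
            if rng.random() < rec[i]:
                nxt[i] = "R"
        state = nxt
    return state.count("R") / n

if __name__ == "__main__":
    rng = random.Random(0)
    n = 500
    homogeneous = [1.0] * n                                    # identical susceptibility
    heterogeneous = [rng.uniform(0.0, 2.0) for _ in range(n)]  # same mean, larger variance
    recovery = [0.3] * n
    for label, susc in (("homogeneous", homogeneous), ("heterogeneous", heterogeneous)):
        print(label, round(final_size(n, 4, 0.2, susc, recovery), 3))
```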

  11. Application of cluster and discriminant analyses to diagnose lithological heterogeneity of the parent material according to its particle-size distribution

    Science.gov (United States)

    Giniyatullin, K. G.; Valeeva, A. A.; Smirnova, E. V.

    2017-08-01

    Particle-size distribution in soddy-podzolic and light gray forest soils of the Botanical Garden of Kazan Federal University has been studied. The cluster analysis of data on the samples from genetic soil horizons attests to the lithological heterogeneity of the profiles of all the studied soils. It is probable that they are developed from the two-layered sediments with the upper colluvial layer underlain by the alluvial layer. According to the discriminant analysis, the major contribution to the discrimination of colluvial and alluvial layers is that of the fraction >0.25 mm. The results of canonical analysis show that there is only one significant discriminant function that separates alluvial and colluvial sediments on the investigated territory. The discriminant function correlates with the contents of fractions 0.05-0.01, 0.25-0.05, and >0.25 mm. Classification functions making it possible to distinguish between alluvial and colluvial sediments have been calculated. Statistical assessment of particle-size distribution data obtained for the plow horizons on ten plowed fields within the garden indicates that this horizon is formed from colluvial sediments. We conclude that the contents of separate fractions and their ratios cannot be used as a universal criterion of the lithological heterogeneity. However, adequate combination of the cluster and discriminant analyses makes it possible to give a comprehensive assessment of the lithology of soil samples from data on the contents of sand and silt fractions, which considerably increases the information value and reliability of the results.
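
    The two-step workflow described above (cluster the samples by their particle-size fractions, then use discriminant analysis to see which fractions separate the putative layers) can be sketched as follows; the data below are synthetic stand-ins, not the soil measurements from the study.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Synthetic fraction percentages (>0.25, 0.25-0.05, 0.05-0.01 mm) for two
# hypothetical sediment types.
rng = np.random.default_rng(0)
colluvial = rng.normal([12, 30, 8], 2, size=(20, 3))
alluvial = rng.normal([25, 22, 5], 2, size=(20, 3))
X = np.vstack([colluvial, alluvial])

# Step 1: unsupervised clustering suggests two lithological groups.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Step 2: discriminant analysis quantifies which fractions drive the split.
lda = LinearDiscriminantAnalysis().fit(X, labels)
print("cluster sizes:", np.bincount(labels))
print("discriminant coefficients per fraction:", lda.coef_.round(2))
```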

  12. The impact of individual-level heterogeneity on estimated infectious disease burden: a simulation study.

    Science.gov (United States)

    McDonald, Scott A; Devleesschauwer, Brecht; Wallinga, Jacco

    2016-12-08

    Disease burden is not evenly distributed within a population; this uneven distribution can be due to individual heterogeneity in progression rates between disease stages. Composite measures of disease burden that are based on disease progression models, such as the disability-adjusted life year (DALY), are widely used to quantify the current and future burden of infectious diseases. Our goal was to investigate to what extent ignoring the presence of heterogeneity could bias DALY computation. Simulations using individual-based models for hypothetical infectious diseases with short and long natural histories were run assuming either "population-averaged" progression probabilities between disease stages, or progression probabilities that were influenced by an a priori defined individual-level frailty (i.e., heterogeneity in disease risk) distribution, and DALYs were calculated. Under the assumption of heterogeneity in transition rates and increasing frailty with age, the short natural history disease model predicted 14% fewer DALYs compared with the homogenous population assumption. Simulations of a long natural history disease indicated that assuming homogeneity in transition rates when heterogeneity was present could overestimate total DALYs, in the present case by 4% (95% quantile interval: 1-8%). The consequences of ignoring population heterogeneity should be considered when defining transition parameters for natural history models and when interpreting the resulting disease burden estimates.

  13. IDENTIFIABILITY VERSUS HETEROGENEITY IN GROUNDWATER MODELING SYSTEMS

    Directory of Open Access Journals (Sweden)

    A M BENALI

    2003-06-01

    Full Text Available Review of history matching of reservoir parameters in groundwater flow raises the problem of identifiability of aquifer systems. Lack of identifiability means that there exist parameters to which the heads are insensitive. From the guidelines of the study of the homogeneous case, we inspect the identifiability of the distributed transmissivity field of heterogeneous groundwater aquifers. These are derived from multiple realizations of a random function Y = log T whose probability distribution function is normal. We follow the identifiability of the autocorrelated block transmissivities through the measure of the sensitivity of the local derivatives DTh = (∂hi / ∂Tj), computed for each sample of a population N(0; σY, αY). Results obtained from a Monte Carlo-type analysis suggest that the more a system is heterogeneous, the less it is identifiable.

  14. Enhancing yeast transcription analysis through integration of heterogeneous data

    DEFF Research Database (Denmark)

    Grotkjær, Thomas; Nielsen, Jens

    2004-01-01

    DNA microarray technology enables the simultaneous measurement of the transcript level of thousands of genes. Primary analysis can be done with basic statistical tools and cluster analysis, but effective and in depth analysis of the vast amount of transcription data requires integration with data from several heterogeneous data sources, such as upstream promoter sequences, genome-scale metabolic models, annotation databases and other experimental data. In this review, we discuss how experimental design, normalisation, heterogeneous data and mathematical modelling can enhance analysis of Saccharomyces cerevisiae whole genome transcription data. A special focus is on the quantitative aspects of normalisation and mathematical modelling approaches, since they are expected to play an increasing role in future DNA microarray analysis studies. Data analysis is exemplified with cluster analysis

  15. Derivation of Batho's correction factor for heterogeneities

    International Nuclear Information System (INIS)

    Lulu, B.A.; Bjaerngard, B.E.

    1982-01-01

    Batho's correction factor for dose in a heterogeneous, layered medium is derived from the tissue--air ratio method (TARM). The reason why the Batho factor is superior to the TARM factor at low energy is ascribed to the fact that it accounts for the distribution of the scatter-generating matter along the centerline. The poor behavior of the Batho factor at high energies is explained as a consequence of the lack of electron equilibrium at appreciable depth below the surface. Key words: Batho factor, heterogeneity, inhomogeneity, tissue--air ratio method

  16. Distributed monitoring system based on Icinga

    International Nuclear Information System (INIS)

    Haen, C.; Bonaccorsi, E.; Neufeld, N.

    2012-01-01

    The LHCb online system relies on a large and heterogeneous IT infrastructure: it comprises more than 2000 servers and embedded systems and more than 200 network devices. Much of this equipment is critical to running the experiment, and it is important to have a monitoring solution efficient enough that the experts can diagnose and act quickly. While our previous system was based on a central Nagios server, our current system uses a distributed Icinga infrastructure. We have a single instance of Icinga that schedules the checks and deals with the results, but we have other servers (called 'workers') to perform the checks. The load induced on the workers is negligible, whereas the central server is fully busy. A client/server model based on Gearman manages the queues that the clients use to get their tasks and return their results. Another interesting feature of Icinga is the database back-end: Icinga logs every action and check result in a database. The new installation is now running 36000 service checks on 2100 hosts with 50 Gearman workers. Performance has improved dramatically.

  17. Fast Physically Accurate Rendering of Multimodal Signatures of Distributed Fracture in Heterogeneous Materials.

    Science.gov (United States)

    Visell, Yon

    2015-04-01

    This paper proposes a fast, physically accurate method for synthesizing multimodal, acoustic and haptic, signatures of distributed fracture in quasi-brittle heterogeneous materials, such as wood, granular media, or other fiber composites. Fracture processes in these materials are challenging to simulate with existing methods, due to the prevalence of large numbers of disordered, quasi-random spatial degrees of freedom, representing the complex physical state of a sample over the geometric volume of interest. Here, I develop an algorithm for simulating such processes, building on a class of statistical lattice models of fracture that have been widely investigated in the physics literature. This algorithm is enabled through a recently published mathematical construction based on the inverse transform method of random number sampling. It yields a purely time domain stochastic jump process representing stress fluctuations in the medium. The latter can be readily extended by a mean field approximation that captures the averaged constitutive (stress-strain) behavior of the material. Numerical simulations and interactive examples demonstrate the ability of these algorithms to generate physically plausible acoustic and haptic signatures of fracture in complex, natural materials interactively at audio sampling rates.
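
    The inverse transform method mentioned above can be sketched as follows for drawing the waiting times of a time-domain jump process; the exponential waiting-time law and Gaussian jump amplitudes are illustrative assumptions, not the lattice-derived statistics of the paper.

```python
import math
import random

def sample_waiting_time(rate, u=None):
    """Inverse transform: solve F(t) = u for the exponential CDF F(t) = 1 - exp(-rate*t)."""
    u = random.random() if u is None else u
    return -math.log(1.0 - u) / rate

def jump_process(duration, rate, rng=random):
    """Return (time, cumulative stress fluctuation) pairs up to `duration` seconds."""
    t, level, events = 0.0, 0.0, []
    while True:
        t += sample_waiting_time(rate)
        if t > duration:
            return events
        level += rng.gauss(0.0, 1.0)          # toy jump amplitude
        events.append((t, level))

if __name__ == "__main__":
    random.seed(0)
    for time, level in jump_process(duration=0.01, rate=2000.0)[:5]:
        print(f"{time * 1000:6.3f} ms  ->  {level:+.2f}")
```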

  18. Databases for INDUS-1 and INDUS-2

    International Nuclear Information System (INIS)

    Merh, Bhavna N.; Fatnani, Pravin

    2003-01-01

    The databases for Indus are relational databases designed to store various categories of data related to the accelerator. The data archiving and retrieval system in Indus is based on a client/server model. A general purpose commercial database is used to store parameters and equipment data for the whole machine. The database manages configuration, on-line and historical databases. Online and offline applications distributed across several systems can store and retrieve the data from the database over the network. This paper describes the structure of databases for Indus-1 and Indus-2 and their integration within the software architecture. The data analysis, design, resulting data-schema and implementation issues are discussed. (author)

  19. The genetic validation of heterogeneity in schizophrenia.

    Science.gov (United States)

    Tsutsumi, Atsushi; Glatt, Stephen J; Kanazawa, Tetsufumi; Kawashige, Seiya; Uenishi, Hiroyuki; Hokyo, Akira; Kaneko, Takao; Moritani, Makiko; Kikuyama, Hiroki; Koh, Jun; Matsumura, Hitoshi; Yoneda, Hiroshi

    2011-10-07

    Schizophrenia is a heritable disorder, however clear genetic architecture has not been detected. To overcome this state of uncertainty, the SZGene database has been established by including all published case-control genetic association studies appearing in peer-reviewed journals. In the current study, we aimed to determine if genetic variants strongly suggested by SZGene are associated with risk of schizophrenia in our case-control samples of Japanese ancestry. In addition, by employing the additive model for aggregating the effect of seven variants, we aimed to verify the genetic heterogeneity of schizophrenia diagnosed by an operative diagnostic manual, the DSM-IV. Each positively suggested genetic polymorphism was ranked according to its p-value, then the seven top-ranked variants (p Japanese population. It is also important to aggregate the updated positive variants in the SZGene database when the replication work is conducted.

  20. Dynamic tables: an architecture for managing evolving, heterogeneous biomedical data in relational database management systems.

    Science.gov (United States)

    Corwin, John; Silberschatz, Avi; Miller, Perry L; Marenco, Luis

    2007-01-01

    Data sparsity and schema evolution issues affecting clinical informatics and bioinformatics communities have led to the adoption of vertical or object-attribute-value-based database schemas to overcome limitations posed when using conventional relational database technology. This paper explores these issues and discusses why biomedical data are difficult to model using conventional relational techniques. The authors propose a solution to these obstacles based on a relational database engine using a sparse, column-store architecture. The authors provide benchmarks comparing the performance of queries and schema-modification operations using three different strategies: (1) the standard conventional relational design; (2) past approaches used by biomedical informatics researchers; and (3) their sparse, column-store architecture. The performance results show that their architecture is a promising technique for storing and processing many types of data that are not handled well by the other two semantic data models.
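
    A tiny illustration of the entity-attribute-value ("vertical") layout that the paper contrasts with conventional wide tables: sparse, evolving attributes become rows, so adding a new clinical attribute needs no schema change, while pivoting back to a wide view is where such schemas pay a query cost. Names and data are invented, and SQLite stands in for the sparse column-store engine the authors propose.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE eav (entity TEXT, attribute TEXT, value TEXT)")
db.executemany("INSERT INTO eav VALUES (?, ?, ?)", [
    ("patient-1", "diagnosis", "asthma"),
    ("patient-1", "fev1_l", "2.9"),
    ("patient-2", "diagnosis", "copd"),        # patient-2 simply lacks fev1_l
    ("patient-2", "genotype", "rs123:AA"),     # new attribute, no ALTER TABLE needed
])

# Pivot back to a wide view for analysis.
wide = db.execute("""
    SELECT entity,
           MAX(CASE WHEN attribute = 'diagnosis' THEN value END) AS diagnosis,
           MAX(CASE WHEN attribute = 'fev1_l'    THEN value END) AS fev1_l
    FROM eav GROUP BY entity ORDER BY entity
""").fetchall()
print(wide)   # [('patient-1', 'asthma', '2.9'), ('patient-2', 'copd', None)]
```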

  1. Spatial heterogeneity and the distribution of bromeliad pollinators in the Atlantic Forest

    Science.gov (United States)

    Varassin, Isabela Galarda; Sazima, Marlies

    2012-08-01

    Interactions between plants and their pollinators are influenced by environmental heterogeneity, resulting in small-scale variations in interactions. This may influence pollinator co-existence and plant reproductive success. This study, conducted at the Estação Biológica de Santa Lúcia (EBSL), a remnant of the Atlantic Forest in southeastern Brazil, investigated the effect of small-scale spatial variations on the interactions between bromeliads and their pollinators. Overall, hummingbirds pollinated 19 of 23 bromeliad species, of which 11 were also pollinated by bees and/or butterflies. However, spatial heterogeneity unrelated to the spatial location of plots or bromeliad species abundance influenced the presence of pollinators. Hummingbirds were the most ubiquitous pollinators at the high-elevation transect, with insect participation clearly declining as transect elevation increased. In the redundancy analysis, the presence of the hummingbird species Phaethornis eurynome, Phaethornis squalidus, Ramphodon naevius, and Thalurania glaucopis, and the butterfly species Heliconius erato and Heliconius nattereri in each plot was correlated with environmental factors such as bromeliad and tree abundance, and was also correlated with horizontal diversity. Since plant-pollinator interactions varied within the environmental mosaics at the study site, this small-scale environmental heterogeneity may relax competition among pollinators, and may explain the high diversity of bromeliads and pollinators generally found in the Atlantic Forest.

  2. The Current Landscape of US Pediatric Anesthesiologists: Demographic Characteristics and Geographic Distribution.

    Science.gov (United States)

    Muffly, Matthew K; Muffly, Tyler M; Weterings, Robbie; Singleton, Mark; Honkanen, Anita

    2016-07-01

    There is no comprehensive database of pediatric anesthesiologists, their demographic characteristics, or geographic location in the United States. We endeavored to create a comprehensive database of pediatric anesthesiologists by merging individuals identified as US pediatric anesthesiologists by the American Board of Anesthesiology, National Provider Identifier registry, Healthgrades.com database, and the Society for Pediatric Anesthesia membership list as of November 5, 2015. Professorial rank was accessed via the Association of American Medical Colleges and other online sources. Descriptive statistics characterized pediatric anesthesiologists' demographics. Pediatric anesthesiologists' locations at the city and state level were geocoded and mapped with the use of ArcGIS Desktop 10.1 mapping software (Redlands, CA). We identified 4048 pediatric anesthesiologists in the United States, which is approximately 8.8% of the physician anesthesiology workforce (n = 46,000). The median age of pediatric anesthesiologists was 49 years (interquartile range, 40-57 years), and the majority (56.4%) were men. Approximately two-thirds of identified pediatric anesthesiologists were subspecialty board certified in pediatric anesthesiology, and 33% of pediatric anesthesiologists had an identified academic affiliation. There is substantial heterogeneity in the geographic distribution of pediatric anesthesiologists by state and US Census Division with urban clustering. This description of pediatric anesthesiologists' demographic characteristics and geographic distribution fills an important gap in our understanding of pediatric anesthesia systems of care.

  3. Supporting Telecom Business Processes by means of Workflow Management and Federated Databases

    NARCIS (Netherlands)

    Nijenhuis, Wim; Jonker, Willem; Grefen, P.W.P.J.

    This report addresses the issues related to the use of workflow management systems and federated databases to support business processes that operate on large and heterogeneous collections of autonomous information systems. We discuss how they can enhance the overall IT-architecture. Starting from

  4. Evaluation of heterogeneity corrections in stereotactic body radiation therapy for the lung

    International Nuclear Information System (INIS)

    Matsuo, Yukinori; Narita, Yuichiro; Nakata, Manabu

    2008-01-01

    The purpose was to evaluate impact of heterogeneity corrections on dose distributions for stereotactic body radiation therapy (SBRT) for the lung. This study was conducted with the treatment plans of 28 cases in which we performed SBRT for solitary lung tumors with 48 Gy in 12-Gy fractions at the isocenter. The treatment plans were recalculated under three conditions of heterogeneity correction as follows: pencil beam convolution with Batho power law correction (PBC-BPL), pencil beam convolution with no correction (PBC-NC), and anisotropic analytical algorithm with heterogeneity correction (AAA). Dose-volumetric data were compared among the three conditions. Heterogeneity corrections had a significant impact on all dose-volumetric parameters. Means of isocenter dose were 48.0 Gy, 44.6 Gy, and 48.4 Gy in PBC-BPL, PBC-NC, and AAA, respectively. PTV D95 were 45.2 Gy, 41.1 Gy, and 42.1 Gy, and V20 of the lung were 4.1%, 3.7%, and 3.9%, respectively. Significant differences in dose distribution were observed among heterogeneity corrections. Attention needs to be paid to the differences. (author)

  5. Identifying and quantifying heterogeneity in high content analysis: application of heterogeneity indices to drug discovery.

    Directory of Open Access Journals (Sweden)

    Albert H Gough

    Full Text Available One of the greatest challenges in biomedical research, drug discovery and diagnostics is understanding how seemingly identical cells can respond differently to perturbagens including drugs for disease treatment. Although heterogeneity has become an accepted characteristic of a population of cells, in drug discovery it is not routinely evaluated or reported. The standard practice for cell-based, high content assays has been to assume a normal distribution and to report a well-to-well average value with a standard deviation. To address this important issue we sought to define a method that could be readily implemented to identify, quantify and characterize heterogeneity in cellular and small organism assays to guide decisions during drug discovery and experimental cell/tissue profiling. Our study revealed that heterogeneity can be effectively identified and quantified with three indices that indicate diversity, non-normality and percent outliers. The indices were evaluated using the induction and inhibition of STAT3 activation in five cell lines where the systems response including sample preparation and instrument performance were well characterized and controlled. These heterogeneity indices provide a standardized method that can easily be integrated into small and large scale screening or profiling projects to guide interpretation of the biology, as well as the development of therapeutics and diagnostics. Understanding the heterogeneity in the response to perturbagens will become a critical factor in designing strategies for the development of therapeutics including targeted polypharmacology.
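
    The three kinds of indices described above (diversity, non-normality, percent outliers) can be sketched with common stand-ins: normalized Shannon entropy, one minus the Shapiro-Wilk statistic, and an IQR outlier rule. These particular formulas are illustrative choices, not necessarily the indices defined in the paper.

```python
import numpy as np
from scipy import stats

def heterogeneity_indices(cell_values, bins=20):
    x = np.asarray(cell_values, dtype=float)
    hist, _ = np.histogram(x, bins=bins)
    p = hist[hist > 0] / hist.sum()
    diversity = -(p * np.log(p)).sum() / np.log(bins)   # 0 = all cells respond alike
    non_normality = 1.0 - stats.shapiro(x)[0]            # ~0 for a normally distributed well
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    outliers = np.mean((x < q1 - 1.5 * iqr) | (x > q3 + 1.5 * iqr)) * 100
    return diversity, non_normality, outliers

rng = np.random.default_rng(1)
unimodal = rng.normal(1.0, 0.1, 2000)                    # fairly homogeneous well
bimodal = np.concatenate([rng.normal(0.5, 0.1, 1000), rng.normal(2.0, 0.1, 1000)])
for name, well in (("unimodal", unimodal), ("bimodal", bimodal)):
    d, nn, out = heterogeneity_indices(well)
    print(f"{name:8s}  diversity={d:.2f}  non-normality={nn:.3f}  outliers={out:.1f}%")
```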

  6. CFD analysis of a symmetrical planar SOFC with heterogeneous electrode properties

    International Nuclear Information System (INIS)

    Shi Junxiang; Xue Xingjian

    2010-01-01

    A comprehensive 2-D CFD model is developed to investigate bi-electrode supported cell (BSC) performance. The model takes into account the coupled complex transport phenomena of mass/heat transfer, charge (electron/ion) transport, and electrochemical reactions. The uniqueness of this modeling work is that heterogeneous electrode properties are taken into account, which includes not only linear functionally graded porosity distribution but also various nonlinear distributions in a general sense according to porous electrode features in BSC design. Extensive numerical analysis is performed to elucidate various heterogeneous porous electrode property effects on cell performance. Results indicate that cell performance is strongly dependent on porous microstructure distributions of electrodes. Among the various porosity distributions, inverse parabolic porosity distribution shows promising effects on cell performance. For a given porosity distribution of electrodes, cell performance is also dependent on operating conditions, typically fuel/gas pressure losses across the electrodes. The mathematical model developed in this paper can be utilized for high performance BSC SOFC design and optimization.

  7. Model of monopolistic competition with heterogeneous labor

    Directory of Open Access Journals (Sweden)

    Filatov Alexander

    2017-01-01

    Full Text Available The paper presents a tool for modelling monopolistic competition markets, based on the Dixit-Stiglitz approach but taking into account heterogeneity in the labor market. We analyse several modifications of a two-sector general equilibrium model. In the basic one, with two levels of worker qualification, the shares of each type are determined endogenously by comparing the higher wage of skilled workers with heterogeneous education costs, also taking into account labor mobility between the manufacturing and agricultural sectors. The model is generalized to the case of a continuous distribution of labor qualification. The impact of the model parameters (ratio of fixed and variable costs, market size, heterogeneity in productivity, elasticity of substitution, etc.) on the resulting equilibrium prices, quantities, wages, number and size of firms, and social welfare is investigated.

  8. Influence of pH, layer charge location and crystal thickness distribution on U(VI) sorption onto heterogeneous dioctahedral smectite

    Energy Technology Data Exchange (ETDEWEB)

    Guimarães, Vanessa [Instituto de Ciências da Terra – Porto, DGAOT, Faculdade de Ciências, Universidade do Porto, Rua do Campo Alegre 687, 4169-007 Porto (Portugal); Geobiotec. Departamento de Geociências da Universidade de Aveiro, Campo Universitário de Santiago, 3810-193 Aveiro (Portugal); Rodríguez-Castellón, Enrique; Algarra, Manuel [Departamento de Química Inorgánica, Facultad de Ciencias, Universidad de Málaga. Campus de Teatino s/n, 29071 Málaga (Spain); Rocha, Fernando [Geobiotec. Departamento de Geociências da Universidade de Aveiro, Campo Universitário de Santiago, 3810-193 Aveiro (Portugal); Bobos, Iuliu, E-mail: ibobos@fc.up.pt [Instituto de Ciências da Terra – Porto, DGAOT, Faculdade de Ciências, Universidade do Porto, Rua do Campo Alegre 687, 4169-007 Porto (Portugal)

    2016-11-05

    Highlights: • The UO₂²⁺ sorption at pH 4 and 6 on heterogeneous smectite structure. • The cation exchange process is affected by layer charge distribution. • Surface complexation and cation exchange modelling. • New binding energy components identified by X-ray photoelectron spectroscopy. - Abstract: The UO₂²⁺ adsorption on smectite (samples BA1, PS2 and PS3) with a heterogeneous structure was investigated at pH 4 (I = 0.02 M) and pH 6 (I = 0.2 M) in batch experiments, with the aim of evaluating the influence of pH, layer charge location and crystal thickness distribution. The mean crystal thickness of the smectite crystallites used in the sorption experiments ranges from 4.8 nm (sample PS2) to 5.1 nm (sample PS3) and 7.4 nm (sample BA1). Smaller crystallites have higher total surface area and sorption capacity. Octahedral charge location favors higher sorption capacity. The sorption isotherms of Freundlich, Langmuir and SIPS were used to model the sorption experiments. The surface complexation and cation exchange reactions were modeled using the PHREEQC code to describe the UO₂²⁺ sorption on smectite. The amount of UO₂²⁺ adsorbed on smectite samples decreased significantly at pH 6 and higher ionic strength, where the sorption mechanism was restricted to the edge sites of smectite. Two binding energy components at 380.8 ± 0.3 and 382.2 ± 0.3 eV, assigned to hydrated UO₂²⁺ adsorbed by cation exchange and by inner-sphere complexation on the external sites at pH 4, were identified after the U4f₇/₂ peak deconvolution by X-ray photoelectron spectroscopy. Also, two new binding energy components at 380.3 ± 0.3 and 381.8 ± 0.3 eV assigned to ≡AlOUO₂⁺ and ≡SiOUO₂⁺ surface species were observed at pH 6.

  9. Influence of pH, layer charge location and crystal thickness distribution on U(VI) sorption onto heterogeneous dioctahedral smectite

    International Nuclear Information System (INIS)

    Guimarães, Vanessa; Rodríguez-Castellón, Enrique; Algarra, Manuel; Rocha, Fernando; Bobos, Iuliu

    2016-01-01

    Highlights: • The UO₂²⁺ sorption at pH 4 and 6 on heterogeneous smectite structure. • The cation exchange process is affected by layer charge distribution. • Surface complexation and cation exchange modelling. • New binding energy components identified by X-ray photoelectron spectroscopy. - Abstract: The UO₂²⁺ adsorption on smectite (samples BA1, PS2 and PS3) with a heterogeneous structure was investigated at pH 4 (I = 0.02 M) and pH 6 (I = 0.2 M) in batch experiments, with the aim of evaluating the influence of pH, layer charge location and crystal thickness distribution. The mean crystal thickness of the smectite crystallites used in the sorption experiments ranges from 4.8 nm (sample PS2) to 5.1 nm (sample PS3) and 7.4 nm (sample BA1). Smaller crystallites have higher total surface area and sorption capacity. Octahedral charge location favors higher sorption capacity. The sorption isotherms of Freundlich, Langmuir and SIPS were used to model the sorption experiments. The surface complexation and cation exchange reactions were modeled using the PHREEQC code to describe the UO₂²⁺ sorption on smectite. The amount of UO₂²⁺ adsorbed on smectite samples decreased significantly at pH 6 and higher ionic strength, where the sorption mechanism was restricted to the edge sites of smectite. Two binding energy components at 380.8 ± 0.3 and 382.2 ± 0.3 eV, assigned to hydrated UO₂²⁺ adsorbed by cation exchange and by inner-sphere complexation on the external sites at pH 4, were identified after the U4f₇/₂ peak deconvolution by X-ray photoelectron spectroscopy. Also, two new binding energy components at 380.3 ± 0.3 and 381.8 ± 0.3 eV assigned to ≡AlOUO₂⁺ and ≡SiOUO₂⁺ surface species were observed at pH 6.

  10. Studies of spatial decoupling in heterogeneous LMFBR critical assemblies

    International Nuclear Information System (INIS)

    Brumbach, S.B.; Goin, R.W.; Carpenter, S.G.

    1984-01-01

    Recent measurements at the Zero Power Plutonium Reactor have studied the spatial decoupling in large, heterogeneous assemblies. These assemblies exhibited a significantly greater degree of decoupling than previous homogeneous assemblies of similar size. The flux distributions in these heterogeneous assemblies were very sensitive to reactivity perturbations, and perturbed flux distributions were achieved relatively slowly. Decoupling was investigated using rod-drop, boron-oscillator and noise-coherence techniques which emphasized different times following the perturbations. Reactivity changes could be measured by analyzing the power history from a single detector using inverse kinetics methods with the assumption of an instantaneous efficiency change for the detector. For assemblies more decoupled than ZPPR-13, the instantaneous efficiency change assumption begins to be invalid

  11. Three-dimensional cluster formation and structure in heterogeneous dose distribution of intensity modulated radiation therapy.

    Science.gov (United States)

    Chao, Ming; Wei, Jie; Narayanasamy, Ganesh; Yuan, Yading; Lo, Yeh-Chi; Peñagarícano, José A

    2018-05-01

    To investigate three-dimensional cluster structure and its correlation to clinical endpoint in heterogeneous dose distributions from intensity modulated radiation therapy. Twenty-five clinical plans from twenty-one head and neck (HN) patients were used for a phenomenological study of the cluster structure formed from the dose distributions of organs at risks (OARs) close to the planning target volumes (PTVs). Initially, OAR clusters were searched to examine the pattern consistence among ten HN patients and five clinically similar plans from another HN patient. Second, clusters of the esophagus from another ten HN patients were scrutinized to correlate their sizes to radiobiological parameters. Finally, an extensive Monte Carlo (MC) procedure was implemented to gain deeper insights into the behavioral properties of the cluster formation. Clinical studies showed that OAR clusters had drastic differences despite similar PTV coverage among different patients, and the radiobiological parameters failed to positively correlate with the cluster sizes. MC study demonstrated the inverse relationship between the cluster size and the cluster connectivity, and the nonlinear changes in cluster size with dose thresholds. In addition, the clusters were insensitive to the shape of OARs. The results demonstrated that the cluster size could serve as an insightful index of normal tissue damage. The clinical outcome of the same dose-volume might be potentially different. Copyright © 2018 Elsevier B.V. All rights reserved.

  12. Three-phase flow in heterogeneous wettability porous media; Deplacements triphasiques en milieux poreux de mouillabilite heterogene

    Energy Technology Data Exchange (ETDEWEB)

    Jaffrennou-Laroche, C

    1998-11-26

    Better understanding and modelling of three-phase flow through porous media is of great interest, especially for improved oil recovery methods such as gas injection processes. Early theoretical and experimental studies have already demonstrated that the wettability characteristics of the solid surface and the spreading characteristics of the fluid system hold the key roles. This observation is confirmed by our theoretical results using DLP theory on the stability and the thickness of static oil films. In most of the works related to three-phase flow processes, homogeneous wettability is assumed. There exist only a few studies demonstrating the tremendous impact of the wettability heterogeneities on gas injection. The objective of the present work is twofold: to demonstrate the effect of small scale wettability heterogeneities on gas injection efficiency, and to develop a tool to predict this impact for various patterns and spatial distributions. To this end an experimental investigation in transparent glass micro-models is performed and a theoretical simulator is developed. Secondary and tertiary gas injections are performed for different heterogeneity patterns obtained by selective silane grafting. Displacement sequences are video-recorded and fluid saturations are determined by image analysis. Visualization of the displacement mechanisms provides the network model with the basic rules for water/oil and water/oil/gas motion. In water/oil displacement, drainage and imbibition occur according to the local wettability. Three-phase displacement is dominated by drainage mechanisms. The simulator allows the flow of oil through wetting films in the oil-wet regions and through spreading films on water in the water-wet regions. The effect of the wettability heterogeneities on: displacement mechanisms, sweep efficiency, and fluid distribution in three-phase gas injection is clearly demonstrated and successfully described by the network simulator. (author) 175 refs.

  13. A semantic data dictionary method for database schema integration in CIESIN

    Science.gov (United States)

    Hinds, N.; Huang, Y.; Ravishankar, C.

    1993-08-01

    CIESIN (Consortium for International Earth Science Information Network) is funded by NASA to investigate the technology necessary to integrate and facilitate the interdisciplinary use of Global Change information. A clear goal of this mission is to provide a link between the various global change data sets, in particular the physical sciences and the human (social) sciences. The typical scientist using the CIESIN system will want to know how phenomena in an outside field affect his/her work. For example, a medical researcher might ask: how does air quality affect emphysema? This and many similar questions will require sophisticated semantic data integration. The researcher who raised the question may be familiar with medical data sets containing emphysema occurrences. But this same investigator may know little, if anything, about the existence or location of air-quality data. It is easy to envision a system which would allow that investigator to locate and perform a "join" on two data sets, one containing emphysema cases and the other containing air-quality levels. No such system exists today. One major obstacle to providing such a system will be overcoming the heterogeneity, which falls into two broad categories. "Database system" heterogeneity involves differences in data models and packages. "Data semantic" heterogeneity involves differences in terminology between disciplines, which translate into data semantic issues, and varying levels of data refinement, from raw to summary. Our work investigates a global data dictionary mechanism to facilitate a merged data service. Specifically, we propose using a semantic tree during schema definition to aid in locating and integrating heterogeneous databases.

  14. Domain Regeneration for Cross-Database Micro-Expression Recognition

    Science.gov (United States)

    Zong, Yuan; Zheng, Wenming; Huang, Xiaohua; Shi, Jingang; Cui, Zhen; Zhao, Guoying

    2018-05-01

    In this paper, we investigate the cross-database micro-expression recognition problem, where the training and testing samples are from two different micro-expression databases. Under this setting, the training and testing samples would have different feature distributions and hence the performance of most existing micro-expression recognition methods may decrease greatly. To solve this problem, we propose a simple yet effective method called Target Sample Re-Generator (TSRG) in this paper. By using TSRG, we are able to re-generate the samples from target micro-expression database and the re-generated target samples would share same or similar feature distributions with the original source samples. For this reason, we can then use the classifier learned based on the labeled source samples to accurately predict the micro-expression categories of the unlabeled target samples. To evaluate the performance of the proposed TSRG method, extensive cross-database micro-expression recognition experiments designed based on SMIC and CASME II databases are conducted. Compared with recent state-of-the-art cross-database emotion recognition methods, the proposed TSRG achieves more promising results.

  15. Directory of IAEA databases. 3. ed.

    International Nuclear Information System (INIS)

    1993-12-01

    This second edition of the Directory of IAEA Databases has been prepared within the Division of Scientific and Technical Information. Its main objective is to describe the computerized information sources available to staff members. This directory contains all databases produced at the IAEA, including databases stored on the mainframe, LANs and PCs. All IAEA Division Directors have been requested to register the existence of their databases with NESI. For the second edition, database owners were requested to review the existing entries for their databases and answer four additional questions. The four additional questions concerned the type of database (e.g. Bibliographic, Text, Statistical etc.), the category of database (e.g. Administrative, Nuclear Data etc.), the available documentation and the type of media used for distribution. In the individual entries on the following pages the answers to the first two questions (type and category) are always listed, but the answers to the last two questions (documentation and media) are only listed when information has been made available

  16. Arcade: A Web-Java Based Framework for Distributed Computing

    Science.gov (United States)

    Chen, Zhikai; Maly, Kurt; Mehrotra, Piyush; Zubair, Mohammad; Bushnell, Dennis M. (Technical Monitor)

    2000-01-01

    Distributed heterogeneous environments are increasingly being used to execute a variety of large-scale simulations and computational problems. We are developing Arcade, a web-based environment to design, execute, monitor, and control distributed applications. The targeted applications consist of independent heterogeneous modules which can be executed on a distributed heterogeneous environment. In this paper we describe the overall design of the system and discuss the prototype implementation of the core functionalities required to support such a framework.

  17. Heterogeneous firms, mark-ups and income inequality

    NARCIS (Netherlands)

    Tamminen, S.H.

    2014-01-01

    Firm heterogeneity affects not only the implications of trade policies for countries, but also income distributions within countries, since firms generate most wage and capital income payments. Recently, both within-country wage inequality and capital income inequality have been rising in various countries.

  18. An Incremental Classification Algorithm for Mining Data with Feature Space Heterogeneity

    Directory of Open Access Journals (Sweden)

    Yu Wang

    2014-01-01

    Full Text Available Feature space heterogeneity often exists in many real world data sets so that some features are of different importance for classification over different subsets. Moreover, the pattern of feature space heterogeneity might dynamically change over time as more and more data are accumulated. In this paper, we develop an incremental classification algorithm, Supervised Clustering for Classification with Feature Space Heterogeneity (SCCFSH), to address this problem. In our approach, supervised clustering is implemented to obtain a number of clusters such that samples in each cluster are from the same class. After the removal of outliers, the relevance of features in each cluster is calculated based on their variations in this cluster. The feature relevance is incorporated into distance calculation for classification. The main advantage of SCCFSH lies in the fact that it is capable of solving a classification problem with feature space heterogeneity in an incremental way, which is favorable for online classification tasks with continuously changing data. Experimental results on a series of data sets and application to a database marketing problem show the efficiency and effectiveness of the proposed approach.
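
    The mechanics described above (per-class supervised clustering, feature relevance derived from within-cluster variation, and a relevance-weighted distance for classification) can be sketched as follows. This is a simplified, non-incremental stand-in written in Python for illustration; the clustering routine, weighting formula and outlier handling are assumptions, not the authors' SCCFSH implementation.

      import numpy as np
      from sklearn.cluster import KMeans

      def fit_clusters(X, y, n_clusters_per_class=2, seed=0):
          """Cluster each class separately and derive per-cluster feature
          weights from within-cluster variation (low variance -> locally
          informative feature)."""
          clusters = []
          for label in np.unique(y):
              Xc = X[y == label]
              km = KMeans(n_clusters=n_clusters_per_class, n_init=10,
                          random_state=seed).fit(Xc)
              for k in range(n_clusters_per_class):
                  members = Xc[km.labels_ == k]
                  centre = members.mean(axis=0)
                  weights = 1.0 / (members.var(axis=0) + 1e-6)
                  clusters.append((centre, weights / weights.sum(), label))
          return clusters

      def predict(clusters, X):
          """Assign each sample the class of the nearest cluster under a
          relevance-weighted Euclidean distance."""
          preds = []
          for x in X:
              dists = [np.sqrt(np.sum(w * (x - c) ** 2)) for c, w, _ in clusters]
              preds.append(clusters[int(np.argmin(dists))][2])
          return np.array(preds)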

  19. Metagenomic Taxonomy-Guided Database-Searching Strategy for Improving Metaproteomic Analysis.

    Science.gov (United States)

    Xiao, Jinqiu; Tanca, Alessandro; Jia, Ben; Yang, Runqing; Wang, Bo; Zhang, Yu; Li, Jing

    2018-04-06

    Metaproteomics provides a direct measure of the functional information by investigating all proteins expressed by a microbiota. However, due to the complexity and heterogeneity of microbial communities, it is very hard to construct a sequence database suitable for a metaproteomic study. Using a public database, researchers might not be able to identify proteins from poorly characterized microbial species, while a sequencing-based metagenomic database may not provide adequate coverage for all potentially expressed protein sequences. To address this challenge, we propose a metagenomic taxonomy-guided database-search strategy (MT), in which a merged database is employed, consisting of both taxonomy-guided reference protein sequences from public databases and proteins from metagenome assembly. By applying our MT strategy to a mock microbial mixture, about two times as many peptides were detected as with the metagenomic database only. According to the evaluation of the reliability of taxonomic attribution, the rate of misassignments was comparable to that obtained using an a priori matched database. We also evaluated the MT strategy with a human gut microbial sample, and we found 1.7 times as many peptides as using a standard metagenomic database. In conclusion, our MT strategy allows the construction of databases able to provide high sensitivity and precision in peptide identification in metaproteomic studies, enabling the detection of proteins from poorly characterized species within the microbiota.

  20. A data analysis expert system for large established distributed databases

    Science.gov (United States)

    Gnacek, Anne-Marie; An, Y. Kim; Ryan, J. Patrick

    1987-01-01

    A design for a natural language database interface system, called the Deductively Augmented NASA Management Decision support System (DANMDS), is presented. The DANMDS system components have been chosen on the basis of the following considerations: maximal employment of the existing NASA IBM-PC computers and supporting software; local structuring and storing of external data via the entity-relationship model; a natural, easy-to-use, error-free database query language; user ability to alter the query language vocabulary and data analysis heuristics; and significant artificial intelligence data analysis heuristic techniques that allow the system to become progressively and automatically more useful.

  1. Scaling properties of conduction velocity in heterogeneous excitable media

    Science.gov (United States)

    Shajahan, T. K.; Borek, Bartłomiej; Shrier, Alvin; Glass, Leon

    2011-10-01

    Waves of excitation through excitable media, such as cardiac tissue, can propagate as plane waves or break up to form reentrant spiral waves. In diseased hearts reentrant waves can be associated with fatal cardiac arrhythmias. In this paper we investigate the conditions that lead to wave break, reentry, and propagation failure in mathematical models of heterogeneous excitable media. Two types of heterogeneities are considered: sinks are regions in space in which the voltage is fixed at its rest value, and breaks are nonconducting regions with no-flux boundary conditions. We find that randomly distributed heterogeneities in the medium have a decremental effect on the velocity, and above a critical density of such heterogeneities the conduction fails. Using numerical and analytical methods we derive the general relationship among the conduction velocity, density of heterogeneities, diffusion coefficient, and the rise time of the excitation in both two and three dimensions. This work helps us understand the factors leading to reduced propagation velocity and the formation of spiral waves in heterogeneous excitable media.

  2. PL/SQL and Bind Variable: the two ways to increase the efficiency of Network Databases

    Directory of Open Access Journals (Sweden)

    Hitesh KUMAR SHARMA

    2011-12-01

    Full Text Available Modern data analysis applications are driven by network databases. They are pushing traditional database and data warehousing technologies beyond their limits due to their massively increasing data volumes and demands for low latency. There are three major challenges in working with network databases: interoperability, due to heterogeneous data repositories; proactivity, due to the autonomy of data sources; and high efficiency, to meet the application demand. This paper presents two ways to meet the third challenge. This goal can be achieved by the network database administrator through the use of PL/SQL blocks and bind variables. The paper explains the effect of PL/SQL blocks and bind variables on network database efficiency in meeting the demands of modern data analysis applications.
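
    The efficiency argument is easiest to see in a generic database API: a statement built from literals must be parsed and planned anew for every execution, whereas a statement with bind variables (placeholders) is parsed once and re-executed with new values. A minimal illustration using Python's sqlite3 module as a stand-in for the Oracle/PL/SQL setting the paper targets (table and data are invented):

      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.execute("CREATE TABLE readings (sensor_id INTEGER, value REAL)")
      rows = [(i % 10, float(i)) for i in range(1000)]

      # Without bind variables: every statement has different literals, so the
      # database must parse and plan each one separately (a "hard parse").
      for sensor_id, value in rows:
          conn.execute(f"INSERT INTO readings VALUES ({sensor_id}, {value})")

      # With bind variables: one statement text, many executions, so the parsed
      # plan can be reused -- the efficiency gain the paper attributes to bind
      # variables in PL/SQL.
      conn.executemany("INSERT INTO readings VALUES (?, ?)", rows)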

  3. Active In-Database Processing to Support Ambient Assisted Living Systems

    Directory of Open Access Journals (Sweden)

    Wagner O. de Morais

    2014-08-01

    Full Text Available As an alternative to the existing software architectures that underpin the development of smart homes and ambient assisted living (AAL) systems, this work presents a database-centric architecture that takes advantage of active databases and in-database processing. Current platforms supporting AAL systems use database management systems (DBMSs) exclusively for data storage. Active databases employ database triggers to detect and react to events taking place inside or outside of the database. DBMSs can be extended with stored procedures and functions that enable in-database processing. This means that the data processing is integrated and performed within the DBMS. The feasibility and flexibility of the proposed approach were demonstrated with the implementation of three distinct AAL services. The active database was used to detect bed-exits and to discover common room transitions and deviations during the night. In-database machine learning methods were used to model early night behaviors. Consequently, active in-database processing avoids transferring sensitive data outside the database, and this improves performance, security and privacy. Furthermore, centralizing the computation into the DBMS facilitates code reuse, adaptation and maintenance. These are important system properties that take into account the evolving heterogeneity of users, their needs and the devices that are characteristic of smart homes and AAL systems. Therefore, DBMSs can provide capabilities to address requirements for scalability, security, privacy, dependability and personalization in applications of smart environments in healthcare.
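
    As a small illustration of the ``active in-database processing'' idea (rules and computation living inside the DBMS rather than in application code), here is a hedged sketch using SQLite from Python; the table names and the bed-exit rule are hypothetical and much simpler than the services described in the paper:

      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.executescript("""
      CREATE TABLE bed_sensor (ts TEXT, occupied INTEGER);
      CREATE TABLE bed_exit_events (ts TEXT);

      -- An "active database" rule: a trigger reacts to each new sensor row and
      -- records a bed-exit event whenever occupancy switches from 1 to 0.
      CREATE TRIGGER detect_bed_exit AFTER INSERT ON bed_sensor
      WHEN NEW.occupied = 0
           AND (SELECT occupied FROM bed_sensor
                WHERE ts < NEW.ts ORDER BY ts DESC LIMIT 1) = 1
      BEGIN
          INSERT INTO bed_exit_events VALUES (NEW.ts);
      END;
      """)

      readings = [("2014-08-01 22:00", 1), ("2014-08-02 02:13", 0),
                  ("2014-08-02 02:40", 1), ("2014-08-02 06:55", 0)]
      for ts, occupied in readings:
          conn.execute("INSERT INTO bed_sensor VALUES (?, ?)", (ts, occupied))

      print(conn.execute("SELECT * FROM bed_exit_events").fetchall())
      # [('2014-08-02 02:13',), ('2014-08-02 06:55',)]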

  4. Simulation model of load balancing in distributed computing systems

    Science.gov (United States)

    Botygin, I. A.; Popov, V. N.; Frolov, S. G.

    2017-02-01

    The availability of high-performance computing, high-speed data transfer over the network and the widespread availability of software for design and pre-production in mechanical engineering mean that, at present, large industrial enterprises and small engineering companies alike implement complex computer systems for the efficient solution of production and management tasks. Such computer systems are generally built on the basis of distributed heterogeneous computer systems. The analytical problems solved by such systems are the key subject of research, but the system-wide problems of efficient distribution (balancing) of the computational load and of the placement of input, intermediate and output databases are no less important. The main tasks of this balancing system are load and condition monitoring of compute nodes, and the selection of a node to which a user's request is forwarded in accordance with a predetermined algorithm. Load balancing is one of the most widely used methods of increasing the productivity of distributed computing systems through the optimal allocation of tasks between the computer system nodes. Therefore, the development of methods and algorithms for computing an optimal schedule in a distributed system that dynamically changes its infrastructure is an important task.
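
    The core selection step described above (monitor the load of each compute node, then forward an incoming request according to a predetermined rule) can be reduced to a few lines. The sketch below implements the simplest such rule, least-loaded-node dispatch, with invented node names and a toy load metric; it is an illustration of the concept, not the simulation model of the paper.

      import random

      class Node:
          def __init__(self, name):
              self.name = name
              self.load = 0  # number of active tasks (stand-in for a real metric)

      def pick_node(nodes):
          """Select the least-loaded node for the next user request."""
          return min(nodes, key=lambda n: n.load)

      nodes = [Node(f"node-{i}") for i in range(4)]
      random.seed(1)
      for _ in range(20):                      # 20 incoming requests
          target = pick_node(nodes)
          target.load += 1                     # dispatch to the chosen node
          if random.random() < 0.5:            # occasionally a task finishes
              busy = max(nodes, key=lambda n: n.load)
              busy.load = max(0, busy.load - 1)

      print({n.name: n.load for n in nodes})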

  5. Design and analysis of stochastic DSS query optimizers in a distributed database system

    Directory of Open Access Journals (Sweden)

    Manik Sharma

    2016-07-01

    Full Text Available Query optimization is a stimulating task for any database system. A number of heuristics have been applied in recent times, proposing new algorithms for substantially improving the performance of a query. The hunt for a better solution still continues. The continual developments in the field of Decision Support System (DSS) databases are producing data at an exceptional rate. The massive volume of DSS data is consequential only when it can be accessed and analyzed by different researchers. Here, an innovative stochastic framework of a DSS query optimizer is proposed to further optimize the design of existing genetic query optimization approaches. The results of the Entropy Based Restricted Stochastic Query Optimizer (ERSQO) are compared with the results of the Exhaustive Enumeration Query Optimizer (EAQO), the Simple Genetic Query Optimizer (SGQO), the Novel Genetic Query Optimizer (NGQO) and the Restricted Stochastic Query Optimizer (RSQO). In terms of Total Costs, EAQO outperforms SGQO, NGQO, RSQO and ERSQO. However, the stochastic approaches dominate in terms of runtime. The Total Costs produced by ERSQO are better than those of SGQO, NGQO and RSQO by 12%, 8% and 5%, respectively. Moreover, the effect of replicating data on the Total Costs of DSS queries is also examined. In addition, the statistical analysis revealed a 2-tailed significant correlation between the number of join operations and the Total Costs of distributed DSS queries. Finally, with regard to the consistency of the stochastic query optimizers, the results of SGQO, NGQO, RSQO and ERSQO are 96.2%, 97.2%, 97.4% and 97.8% consistent, respectively.

  6. An Ontology as a Tool for Representing Fuzzy Data in Relational Databases

    Directory of Open Access Journals (Sweden)

    Carmen Martinez-Cruz

    2012-11-01

    Full Text Available Several applications to represent classical or fuzzy data in databases have been developed in the last two decades. However, these representations present some limitations, especially related to system portability and complexity. Ontologies provide a mechanism to represent data in an implementation-independent and web-accessible way. To take advantage of this, in this paper an ontology that represents the fuzzy relational database model has been redefined to connect users or applications with fuzzy data stored in fuzzy databases. The communication channel established between the ontology and any Relational Database Management System (RDBMS) is analysed in depth throughout the text to justify some of the advantages of the system: expressiveness, portability and platform heterogeneity. Moreover, some tools have been developed to define and manage fuzzy and classical data in relational databases using this ontology. An application that performs fuzzy queries using the same technology is also included in this proposal, together with some examples using real databases.
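
    Independently of the ontology machinery the paper describes, the following toy sketch shows what a fuzzy query over relational-style data means in practice: each row is matched against a fuzzy term (here "age is about 30") through a membership function, and rows are returned together with their membership degree. The membership function and data are hypothetical.

      def triangular(x, a, b, c):
          """Membership degree of x in a triangular fuzzy set (a, b, c)."""
          if x <= a or x >= c:
              return 0.0
          return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

      people = [("Ann", 27), ("Bob", 31), ("Eve", 45), ("Joe", 38)]

      # Fuzzy query: "age is about 30", keeping rows with membership >= 0.5.
      about_30 = lambda age: triangular(age, 20, 30, 40)
      results = [(name, age, round(about_30(age), 2))
                 for name, age in people if about_30(age) >= 0.5]
      print(results)  # [('Ann', 27, 0.7), ('Bob', 31, 0.9)]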

  7. Set-membership estimations for the evolution of infectious diseases in heterogeneous populations.

    Science.gov (United States)

    Tsachev, Tsvetomir; Veliov, Vladimir M; Widder, Andreas

    2017-04-01

    The paper presents an approach for set-membership estimation of the state of a heterogeneous population in which an infectious disease is spreading. The population state may consist of susceptible, infected, recovered, etc. groups, where the individuals are heterogeneous with respect to traits relevant to the particular disease. Set-membership estimations in this context are reasonable, since only vague information about the distribution of the population along the space of heterogeneity is available in practice. The presented approach comprises adapted versions of methods known in estimation and control theory, and involves solving parametrized families of optimization problems. Since the models of disease spreading in heterogeneous populations involve distributed systems (with non-local dynamics and endogenous boundary conditions), these problems are non-standard. The paper develops the needed theoretical instruments and a solution scheme. SI and SIR models of epidemic diseases are considered as case studies, and the results reveal qualitative properties that may be of interest.
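
    The kind of heterogeneous epidemic dynamics that the estimation targets can be sketched with a discretized two-group SIR model, in which susceptible individuals differ in a trait (here, a group-specific transmission rate) but share one infected pool. The parameters below are purely illustrative, and the paper's distributed, set-membership formulation is not reproduced.

      import numpy as np

      beta = np.array([0.30, 0.60])   # group-specific transmission rates
      gamma = 0.10                    # recovery rate
      S = np.array([0.49, 0.49])      # susceptible fraction in each group
      I, R = 0.02, 0.0
      dt, T = 0.1, 200.0

      for _ in range(int(T / dt)):    # forward-Euler integration
          new_inf = beta * S * I      # force of infection differs per group
          dI = new_inf.sum() - gamma * I
          S = S - dt * new_inf
          R = R + dt * gamma * I
          I = I + dt * dI

      print("final susceptible per group:", S.round(3), "recovered:", round(R, 3))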

  8. The coalescence of heterogeneous liquid metal on nano substrate

    Science.gov (United States)

    Wang, Long; Li, Yifan; Zhou, Xuyan; Li, Tao; Li, Hui

    2017-06-01

    Molecular dynamics simulation has been performed to study the asymmetric coalescence of heterogeneous liquid metal on graphene. Simulation results show that the anomalies in the drop coalescence are mainly caused by the wettability of the heterogeneous liquid metal. The silver atoms incline to distribute on the outer layer of the gold and copper droplets, revealing that the structure is determined by the interaction between different metal atoms. The coalescence and fusion of heterogeneous liquid metal drops can be predicted by comparing the wettability and the atomic mass of the metallic liquid drops, which has important implications for industrial applications such as ink-jet printing and metallurgy.

  9. Reducing the effects of acoustic heterogeneity with an iterative reconstruction method from experimental data in microwave induced thermoacoustic tomography

    International Nuclear Information System (INIS)

    Wang, Jinguo; Zhao, Zhiqin; Song, Jian; Chen, Guoping; Nie, Zaiping; Liu, Qing-Huo

    2015-01-01

    Purpose: An iterative reconstruction method has been previously reported by the authors of this paper. However, the iterative reconstruction method was demonstrated solely using numerical simulations. It is essential to apply the iterative reconstruction method under practical conditions. The objective of this work is to validate the capability of the iterative reconstruction method for reducing the effects of acoustic heterogeneity with experimental data in microwave induced thermoacoustic tomography. Methods: Most existing reconstruction methods need to be combined with ultrasonic measurement technology to quantitatively measure the velocity distribution of the heterogeneity, which increases the system complexity. Unlike existing reconstruction methods, the iterative reconstruction method combines the time reversal mirror technique, the fast marching method, and the simultaneous algebraic reconstruction technique to iteratively estimate the velocity distribution of heterogeneous tissue using solely the measured data. The estimated velocity distribution is then used to reconstruct a highly accurate image of the microwave absorption distribution. Experiments in which a target is placed in an acoustically heterogeneous environment are performed to validate the iterative reconstruction method. Results: By using the estimated velocity distribution, the target in an acoustically heterogeneous environment can be reconstructed with better shape and higher image contrast than targets reconstructed with a homogeneous velocity distribution. Conclusions: The distortions caused by the acoustic heterogeneity can be efficiently corrected by utilizing the velocity distribution estimated by the iterative reconstruction method. The advantage of the iterative reconstruction method over existing correction methods is that it improves the quality of the image of the microwave absorption distribution without increasing the system complexity.

  10. License - PLACE | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available PLACE License - License to Use This Database. Last updated: 2014/07/17. You may use this database in accordance with the license terms regarding the use of this database and the requirements you must follow in using this database. The license for this database is Creative Commons Attribution-Share Alike 2.1 Japan. If you use data from this database, please be sure to attribute this database. With regard to this database, you are licensed to: freely access part or whole of this database, and acquire data; freely redistribute part or whole of the data from this database; and freely create and distribute databases based on this database.

  11. Effect of heterogeneous distribution of crosslink density on physical properties of radiation vulcanized NR (Natural Rubber) latex film

    International Nuclear Information System (INIS)

    Keizo Makuuchi; Fumio Yoshii; Miura, H.; Murakami, K.

    1996-01-01

    A study has thus been carried out to investigate the effect of particle-to-particle variation in crosslink density on the physical properties of radiation vulcanized NR latex film. NR latex was irradiated in small bottles by γ rays, without vulcanization accelerator, to provide latex rubber particles having a homogeneous distribution of crosslink density. The doses were 30, 50, 100, 250, 300, 400, 500 and 600 kGy. The weight swelling ratio, gel fraction, tensile strength and elongation at break of the latex film from the mixed latex were measured. The vulcanization dose of this latex was 250 kGy. Two different latexes were then mixed in such a way as to adjust the average dose to 250 kGy, to prepare a latex consisting of rubber particles with a heterogeneous distribution of crosslink density. The tensile strength of the latex film was depressed by mixing. The reduction increased with the decrease in gel fraction caused by mixing. However, the reduction was not serious when the dose difference between the two latexes was less than 200 kGy.

  12. HETERO code, heterogeneous procedure for reactor calculation

    International Nuclear Information System (INIS)

    Jovanovic, S.M.; Raisic, N.M.

    1966-11-01

    This report describes the procedure for calculating the parameters of a heterogeneous reactor system, taking into account the interaction between fuel elements for an established geometry. The first part contains the analysis of a single fuel element in a diffusion medium, and the criticality condition of the reactor system described by superposition of the element interactions. The possibility of performing such an analysis by determination of the heterogeneous system lattice is described in the second part. The computer code HETERO, with the code KETAP (calculation of the criticality factor η_n and flux distribution), is part of this report, together with an example of the RB reactor square lattice.

  13. Translation from the collaborative OSM database to cartography

    Science.gov (United States)

    Hayat, Flora

    2018-05-01

    The OpenStreetMap (OSM) database includes original items very useful for geographical analysis and for creating thematic maps. Contributors record in the open database various themes regarding amenities, leisure, transport, buildings and boundaries. The Michelin mapping department develops map prototypes to test the feasibility of mapping based on OSM. A research project is in development to translate the OSM database structure into a database structure fitted to Michelin graphic guidelines. It aims at defining the right structure for Michelin's uses. The research project relies on the analysis of semantic and geometric heterogeneities in OSM data. To that end, Michelin implements methods to transform the input geographical database into a cartographic image dedicated to specific uses (routing and tourist maps). The paper focuses on the mapping tools available to produce a personalised spatial database. Based on the processed data, paper and Web maps can be displayed. Two prototypes are described in this article: a vector tile web map and a mapping method to produce paper maps at a regional scale. The vector tile mapping method offers easy navigation within the map and within graphic and thematic guidelines. Paper maps can be partly automatically drawn. The drawing automation and data management are part of the map creation process, as well as the final hand-drawing phase. Both prototypes have been set up using the OSM technical ecosystem.

  14. Potential impacts of OCS oil and gas activities on fisheries. Volume 1. Annotated bibliography and database descriptions for target-species distribution and abundance studies. Section 1, Part 2. Final report

    International Nuclear Information System (INIS)

    Tear, L.M.

    1989-10-01

    The purpose of the volume is to present an annotated bibliography of unpublished and grey literature related to the distribution and abundance of select species of finfish and shellfish along the coasts of the United States. The volume also includes descriptions of databases that contain information related to target species' distribution and abundance. An index is provided at the end of each section to help the reader locate studies or databases related to a particular species

  15. Potential impacts of OCS oil and gas activities on fisheries. Volume 1. Annotated bibliography and database descriptions for target species distribution and abundance studies. Section 1, Part 1. Final report

    International Nuclear Information System (INIS)

    Tear, L.M.

    1989-10-01

    The purpose of the volume is to present an annotated bibliography of unpublished and grey literature related to the distribution and abundance of select species of finfish and shellfish along the coasts of the United States. The volume also includes descriptions of databases that contain information related to target species' distribution and abundance. An index is provided at the end of each section to help the reader locate studies or databases related to a particular species

  16. BClass: A Bayesian Approach Based on Mixture Models for Clustering and Classification of Heterogeneous Biological Data

    Directory of Open Access Journals (Sweden)

    Arturo Medrano-Soto

    2004-12-01

    Full Text Available Based on mixture models, we present a Bayesian method (called BClass) to classify biological entities (e.g. genes) when variables of quite heterogeneous nature are analyzed. Various statistical distributions are used to model the continuous/categorical data commonly produced by genetic experiments and large-scale genomic projects. We calculate the posterior probability of each entry belonging to each element (group) in the mixture. In this way, an original set of heterogeneous variables is transformed into a set of purely homogeneous characteristics represented by the probabilities of each entry belonging to the groups. The number of groups in the analysis is controlled dynamically by rendering the groups as 'alive' and 'dormant' depending upon the number of entities classified within them. Using standard Metropolis-Hastings and Gibbs sampling algorithms, we constructed a sampler to approximate posterior moments and grouping probabilities. Since this method does not require the definition of similarity measures, it is especially suitable for data mining and knowledge discovery in biological databases. We applied BClass to classify genes in RegulonDB, a database specialized in information about the transcriptional regulation of gene expression in the bacterium Escherichia coli. The classification obtained is consistent with current knowledge and allowed prediction of missing values for a number of genes. BClass is object-oriented and fully programmed in Lisp-Stat. The output grouping probabilities are analyzed and interpreted using graphical (dynamically linked) plots and query-based approaches. We discuss the advantages of using Lisp-Stat as a programming language as well as the problems we faced when the data volume increased exponentially due to the ever-growing number of genomic projects.
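
    BClass itself is a fully Bayesian MCMC sampler written in Lisp-Stat; as a rough maximum-likelihood analogue of its central idea (mapping measurements onto per-group membership probabilities), one could fit a Gaussian mixture with scikit-learn. This sketch handles continuous variables only and omits the categorical modelling and the alive/dormant group mechanism; the data are synthetic.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(0)
      # Synthetic measurements for 300 "genes" drawn from two underlying groups.
      X = np.vstack([rng.normal(0.0, 1.0, size=(150, 3)),
                     rng.normal(3.0, 1.0, size=(150, 3))])

      gm = GaussianMixture(n_components=2, random_state=0).fit(X)

      # Posterior probability of each entry belonging to each mixture component:
      # the "purely homogeneous characteristics" the abstract refers to.
      membership = gm.predict_proba(X)
      print(membership[:3].round(3))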

  17. Rethinking the evolution of specialization: A model for the evolution of phenotypic heterogeneity.

    Science.gov (United States)

    Rubin, Ilan N; Doebeli, Michael

    2017-12-21

    Phenotypic heterogeneity refers to genetically identical individuals that express different phenotypes, even when in the same environment. Traditionally, "bet-hedging" in fluctuating environments is offered as the explanation for the evolution of phenotypic heterogeneity. However, there are an increasing number of examples of microbial populations that display phenotypic heterogeneity in stable environments. Here we present an evolutionary model of phenotypic heterogeneity of microbial metabolism and a resultant theory for the evolution of phenotypic versus genetic specialization. We use two-dimensional adaptive dynamics to track the evolution of the population phenotype distribution of the expression of two metabolic processes with a concave trade-off. Rather than assume a Gaussian phenotype distribution, we use a Beta distribution that is capable of describing genotypes that manifest as individuals with two distinct phenotypes. Doing so, we find that environmental variation is not a necessary condition for the evolution of phenotypic heterogeneity, which can evolve as a form of specialization in a stable environment. There are two competing pressures driving the evolution of specialization: directional selection toward the evolution of phenotypic heterogeneity and disruptive selection toward genetically determined specialists. Because of the lack of a singular point in the two-dimensional adaptive dynamics and the fact that directional selection is a first order process, while disruptive selection is of second order, the evolution of phenotypic heterogeneity dominates and often precludes speciation. We find that branching, and therefore genetic specialization, occurs mainly under two conditions: the presence of a cost to maintaining a high phenotypic variance or when the effect of mutations is large. A cost to high phenotypic variance dampens the strength of selection toward phenotypic heterogeneity and, when sufficiently large, introduces a singular point into
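
    The modelling choice highlighted above (a Beta rather than Gaussian phenotype distribution) matters because a Beta density with both shape parameters below one is bimodal, piling mass near 0 and 1, and can therefore describe a single genotype expressed as two distinct phenotypes. A small numerical illustration with arbitrary parameter values:

      from scipy import stats

      # Both shape parameters < 1: mass near 0 and 1 (two distinct phenotypes).
      bimodal = stats.beta(0.3, 0.3)
      # Both shape parameters > 1: unimodal around the mean (a generalist).
      unimodal = stats.beta(5.0, 5.0)

      for x in (0.05, 0.50, 0.95):
          print(x, round(bimodal.pdf(x), 2), round(unimodal.pdf(x), 2))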

  18. ATLAS DDM/DQ2 & NoSQL databases: Use cases and experiences

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    NoSQL databases. These include distributed file systems like HDFS that support parallel execution of computational tasks on distributed data, as well as schema-less approaches via key-value/document stores, like HBase, Cassandra or MongoDB. These databases provide solutions to particular types...

  19. Heterogeneous modelling and finite element analysis of the femur

    Directory of Open Access Journals (Sweden)

    Zhang Binkai

    2017-01-01

    Full Text Available As the largest and longest bone in the human body, the femur has important research value and application prospects. This paper introduces a fast reconstruction method using the Mimics and ANSYS software to realize heterogeneous modelling of the femur according to the HU (Hounsfield unit) distribution of the CT series, and simulates it in various situations by finite element analysis to study the mechanical characteristics of the femur. The femoral heterogeneous model shows the distribution of bone mineral density and material properties, which can be used to assess the diagnosis and treatment of bone diseases. The stress concentration position of the femur under different conditions can be calculated by the simulation, which can provide a reference for the design and material selection of prostheses.

  20. Sparse covariance estimation in heterogeneous samples.

    Science.gov (United States)

    Rodríguez, Abel; Lenkoski, Alex; Dobra, Adrian

    Standard Gaussian graphical models implicitly assume that the conditional independence among variables is common to all observations in the sample. However, in practice, observations are usually collected from heterogeneous populations where such an assumption is not satisfied, leading in turn to nonlinear relationships among variables. To address such situations we explore mixtures of Gaussian graphical models; in particular, we consider both infinite mixtures and infinite hidden Markov models where the emission distributions correspond to Gaussian graphical models. Such models allow us to divide a heterogeneous population into homogenous groups, with each cluster having its own conditional independence structure. As an illustration, we study the trends in foreign exchange rate fluctuations in the pre-Euro era.

  1. Efficient Support for Matrix Computations on Heterogeneous Multi-core and Multi-GPU Architectures

    Energy Technology Data Exchange (ETDEWEB)

    Dong, Fengguang [Univ. of Tennessee, Knoxville, TN (United States); Tomov, Stanimire [Univ. of Tennessee, Knoxville, TN (United States); Dongarra, Jack [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2011-06-01

    We present a new methodology for utilizing all CPU cores and all GPUs on a heterogeneous multicore and multi-GPU system to support matrix computations efficiently. Our approach is able to achieve the objectives of a high degree of parallelism, minimized synchronization, minimized communication, and load balancing. Our main idea is to treat the heterogeneous system as a distributed-memory machine, and to use a heterogeneous 1-D block cyclic distribution to allocate data to the host system and GPUs to minimize communication. We have designed heterogeneous algorithms with two different tile sizes (one for CPU cores and the other for GPUs) to cope with processor heterogeneity. We propose an auto-tuning method to determine the best tile sizes to attain both high performance and load balancing. We have also implemented a new runtime system and applied it to the Cholesky and QR factorizations. Our experiments on a compute node with two Intel Westmere hexa-core CPUs and three Nvidia Fermi GPUs demonstrate good weak scalability, strong scalability, load balance, and efficiency of our approach.
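
    The data placement idea (treat the node as a small distributed-memory machine and assign matrix column blocks to CPUs and GPUs in a 1-D block cyclic fashion) can be sketched in a few lines. The two-tile-size scheme and the auto-tuning described in the abstract are omitted, and the device names are illustrative only.

      def block_cyclic_owner(num_blocks, devices):
          """Map column-block index -> device under a 1-D block cyclic
          distribution (simplified: one block per device per cycle)."""
          return {j: devices[j % len(devices)] for j in range(num_blocks)}

      # An illustrative heterogeneous node: two CPU "devices" and three GPUs.
      devices = ["cpu0", "cpu1", "gpu0", "gpu1", "gpu2"]
      print(block_cyclic_owner(12, devices))
      # 0 -> cpu0, 1 -> cpu1, 2 -> gpu0, 3 -> gpu1, 4 -> gpu2, 5 -> cpu0, ...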

  2. License - RMG | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available RMG License - License to Use This Database. Last updated: 2013/08/07. You may use this database in accordance with the license terms regarding the use of this database and the requirements you must follow in using this database. The license for this database is Creative Commons Attribution-Share Alike 2.1 Japan. If you use data from this database, please be sure to attribute this database. With regard to this database, you are licensed to: freely access part or whole of this database, and acquire data; freely redistribute part or whole of the data from this database; and freely create and distribute databases based on this database.

  3. Database for Simulation of Electron Spectra for Surface Analysis (SESSA)

    Science.gov (United States)

    SRD 100 Database for Simulation of Electron Spectra for Surface Analysis (SESSA) (PC database for purchase). This database has been designed to facilitate quantitative interpretation of Auger-electron and X-ray photoelectron spectra and to improve the accuracy of quantitation in routine analysis. The database contains all physical data needed to perform quantitative interpretation of an electron spectrum for a thin-film specimen of given composition. A simulation module provides an estimate of peak intensities as well as the energy and angular distributions of the emitted electron flux.

  4. A Distributed Database System for Developing Ontological and Lexical Resources in Harmony

    NARCIS (Netherlands)

    Horák, A.; Vossen, P.T.J.M.; Rambousek, A.; Gelbukh, A.

    2010-01-01

    In this article, we present the basic ideas of creating a new information-rich lexical database of Dutch, called Cornetto, that is interconnected with corresponding English synsets and a formal ontology. The Cornetto database is based on two existing electronic dictionaries - the Referentie Bestand

  5. Multi-layer distributed storage of LHD plasma diagnostic database

    International Nuclear Information System (INIS)

    Nakanishi, Hideya; Kojima, Mamoru; Ohsuna, Masaki; Nonomura, Miki; Imazu, Setsuo; Nagayama, Yoshio

    2006-01-01

    At the end of the LHD experimental campaign in 2003, the amount of raw data from the whole set of plasma diagnostics had reached 3.16 GB in a long-pulse experiment. This is a new world record in fusion plasma experiments, far beyond the previous value of 1.5 GB/shot. The total size of the LHD diagnostic data is about 21.6 TB for the whole six years of experiments, and it continues to grow at an increasing rate. The LHD diagnostic database and storage system, i.e. the LABCOM system, has a completely distributed architecture so as to be sufficiently flexible and easily expandable to maintain the integrity of the total amount of data. It has three categories of storage layer: OODBMS volumes in data acquisition servers, RAID servers, and mass storage systems, such as MO jukeboxes and DVD-R changers. These are equally accessible through the network. By data migration between them, they can be considered a virtual OODB extension area. Their data contents have been listed in a 'facilitator' PostgreSQL RDBMS, which contains about 6.2 million entries and informs clients requesting data of the optimized priority. Using 'glib' compression for all of the binary data and applying the three-tier application model for the OODB data transfer/retrieval, an optimized OODB read-out rate of 1.7 MB/s and an effective client access speed of 3-25 MB/s have been achieved. As a result, the LABCOM data system has succeeded in combining RDBMS, OODBMS, RAID, and MSS to enable a virtual and always expandable storage volume, simultaneously with rapid data access. (author)

  6. Pleurochrysome: A Web Database of Pleurochrysis Transcripts and Orthologs Among Heterogeneous Algae

    Science.gov (United States)

    Fujiwara, Shoko; Takatsuka, Yukiko; Hirokawa, Yasutaka; Tsuzuki, Mikio; Takano, Tomoyuki; Kobayashi, Masaaki; Suda, Kunihiro; Asamizu, Erika; Yokoyama, Koji; Shibata, Daisuke; Tabata, Satoshi; Yano, Kentaro

    2016-01-01

    Pleurochrysis is a coccolithophorid genus, which belongs to the Coccolithales in the Haptophyta. The genus has been used extensively for biological research, together with Emiliania in the Isochrysidales, to understand distinctive features between the two coccolithophorid-including orders. However, molecular biological research on Pleurochrysis such as elucidation of the molecular mechanism behind coccolith formation has not made great progress at least in part because of lack of comprehensive gene information. To provide such information to the research community, we built an open web database, the Pleurochrysome (http://bioinf.mind.meiji.ac.jp/phapt/), which currently stores 9,023 unique gene sequences (designated as UNIGENEs) assembled from expressed sequence tag sequences of P. haptonemofera as core information. The UNIGENEs were annotated with gene sequences sharing significant homology, conserved domains, Gene Ontology, KEGG Orthology, predicted subcellular localization, open reading frames and orthologous relationship with genes of 10 other algal species, a cyanobacterium and the yeast Saccharomyces cerevisiae. This sequence and annotation information can be easily accessed via several search functions. Besides fundamental functions such as BLAST and keyword searches, this database also offers search functions to explore orthologous genes in the 12 organisms and to seek novel genes. The Pleurochrysome will promote molecular biological and phylogenetic research on coccolithophorids and other haptophytes by helping scientists mine data from the primary transcriptome of P. haptonemofera. PMID:26746174

  7. BIOZON: a system for unification, management and analysis of heterogeneous biological data

    Directory of Open Access Journals (Sweden)

    Yona Golan

    2006-02-01

    Full Text Available Background: Integration of heterogeneous data types is a challenging problem, especially in biology, where the number of databases and data types increase rapidly. Amongst the problems that one has to face are integrity, consistency, redundancy, connectivity, expressiveness and updatability. Description: Here we present a system (Biozon) that addresses these problems, and offers biologists a new knowledge resource to navigate through and explore. Biozon unifies multiple biological databases consisting of a variety of data types (such as DNA sequences, proteins, interactions and cellular pathways). It is fundamentally different from previous efforts as it uses a single extensive and tightly connected graph schema wrapped with a hierarchical ontology of documents and relations. Beyond warehousing existing data, Biozon computes and stores novel derived data, such as similarity relationships and functional predictions. The integration of similarity data allows propagation of knowledge through inference and fuzzy searches. Sophisticated methods of query that span multiple data types were implemented and first-of-a-kind biological ranking systems were explored and integrated. Conclusion: The Biozon system is an extensive knowledge resource of heterogeneous biological data. Currently, it holds more than 100 million biological documents and 6.5 billion relations between them. The database is accessible through an advanced web interface that supports complex queries, "fuzzy" searches, data materialization and more, online at http://biozon.org.

  8. Distributed control and data processing system with a centralized database for a BWR power plant

    International Nuclear Information System (INIS)

    Fujii, K.; Neda, T.; Kawamura, A.; Monta, K.; Satoh, K.

    1980-01-01

    Recent digital techniques based on advances in electronics and computer technologies have enabled very wide-scale application of computers to BWR power plant control and instrumentation. Computers of many kinds, from micro to mega, are introduced separately, and to obtain better control and instrumentation system performance, a hierarchical computer complex system architecture has been developed. This paper addresses the hierarchical computer complex system architecture, which enables more efficient introduction of computer systems to a nuclear power plant. Distributed control and processing systems, which are the components of the hierarchical computer complex, are described in some detail, and the database for the hierarchical computer complex is also discussed. The hierarchical computer complex system has been developed and is now in the detailed design stage for actual power plant application. (auth)

  9. USBombus, a database of contemporary survey data for North American Bumble Bees (Hymenoptera, Apidae, Bombus) distributed in the United States.

    Science.gov (United States)

    Koch, Jonathan B; Lozier, Jeffrey; Strange, James P; Ikerd, Harold; Griswold, Terry; Cordes, Nils; Solter, Leellen; Stewart, Isaac; Cameron, Sydney A

    2015-01-01

    Bumble bees (Hymenoptera: Apidae, Bombus) are pollinators of wild and economically important flowering plants. However, at least four bumble bee species have declined significantly in population abundance and geographic range relative to historic estimates, and one species is possibly extinct. While a wealth of historic data is now available in online databases for many of the North American species found to be in decline, systematic survey data on stable species are still not publicly available. The availability of contemporary survey data is critically important for the future monitoring of wild bumble bee populations. Without such data, the ability to ascertain the conservation status of bumble bees in the United States will remain challenging. This paper describes USBombus, a large database that represents the outcomes of one of the largest standardized surveys of bumble bee pollinators (Hymenoptera, Apidae, Bombus) globally. The motivation to collect live bumble bees across the United States was to examine the decline and conservation status of Bombus affinis, B. occidentalis, B. pensylvanicus, and B. terricola. Prior to our national survey of bumble bees in the United States from 2007 to 2010, there had only been regional accounts of bumble bee abundance and richness. In addition to surveying declining bumble bees, we also collected and documented a diversity of co-occurring bumble bees. However, we had not yet completely reported their distribution and diversity on a public online platform. Now, for the first time, we report the geographic distribution of bumble bees reported to be in decline (Cameron et al. 2011), as well as bumble bees that appeared to be stable on a large geographic scale in the United States (not in decline). In this database we report a total of 17,930 adult occurrence records across 397 locations and 39 species of Bombus detected in our national survey. We summarize their abundance and distribution across the United States and

  10. Column Store for GWAC: A High-cadence, High-density, Large-scale Astronomical Light Curve Pipeline and Distributed Shared-nothing Database

    Science.gov (United States)

    Wan, Meng; Wu, Chao; Wang, Jing; Qiu, Yulei; Xin, Liping; Mullender, Sjoerd; Mühleisen, Hannes; Scheers, Bart; Zhang, Ying; Nes, Niels; Kersten, Martin; Huang, Yongpan; Deng, Jinsong; Wei, Jianyan

    2016-11-01

    The ground-based wide-angle camera array (GWAC), a part of the SVOM space mission, will search for various types of optical transients by continuously imaging a field of view (FOV) of 5,000 square degrees every 15 s. Each exposure consists of 36 × 4k × 4k pixels, typically resulting in 36 × ~175,600 extracted sources. For a modern time-domain astronomy project like GWAC, which produces massive amounts of data with a high cadence, it is challenging to search for short-timescale transients in both real-time and archived data, and to build long-term light curves for variable sources. Here, we develop a high-cadence, high-density light curve pipeline (HCHDLP) to process the GWAC data in real time, and design a distributed shared-nothing database to manage the massive amount of archived data, which will be used to generate a source catalog with more than 100 billion records during 10 years of operation. First, we develop HCHDLP based on the column-store DBMS MonetDB, taking advantage of MonetDB's high performance when applied to massive data processing. To realize the real-time functionality of HCHDLP, we optimize the pipeline in its source association function, including both time and space complexity from outside the database (SQL semantics) and inside (RANGE-JOIN implementation), as well as in its strategy for building complex light curves. The optimized source association function is accelerated by three orders of magnitude. Second, we build a distributed database using a two-level time partitioning strategy via the MERGE TABLE and REMOTE TABLE technology of MonetDB. Intensive tests validate that our database architecture is able to achieve both linear scalability in response time and concurrent access by multiple users. In summary, our studies provide guidance for a solution for GWAC in real-time data processing and management of massive data.
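
    The "source association" step that the pipeline optimizes is, conceptually, a positional cross-match between the sources extracted from each new frame and the accumulated catalogue. A naive out-of-database sketch of that operation is given below with invented coordinates; the production pipeline performs an equivalent, far faster RANGE-JOIN inside MonetDB.

      import numpy as np

      def associate(new_xy, catalog_xy, radius=2.0):
          """Match each new detection to the nearest catalogue source within
          `radius` pixels; unmatched detections are transient candidates."""
          matches = []
          for i, p in enumerate(new_xy):
              d = np.hypot(*(catalog_xy - p).T)
              j = int(np.argmin(d))
              matches.append((i, j) if d[j] <= radius else (i, None))
          return matches

      catalog = np.array([[10.0, 10.0], [50.0, 80.0], [200.0, 150.0]])
      new = np.array([[10.5, 9.8], [120.0, 40.0], [199.0, 151.0]])
      print(associate(new, catalog))  # [(0, 0), (1, None), (2, 2)]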

  11. Macroeconomic Policies and Agent Heterogeneity

    OpenAIRE

    GOTTLIEB, Charles

    2012-01-01

    Defence date: 24 February 2012 Examining Board: Giancarlo Corsetti, Arpad Abraham, Juan Carlos Conesa, Jonathan Heathcote. This thesis contributes to the understanding of macroeconomic policies’ impact on the distribution of wealth. It belongs to the strand of literature that departs from the representative agent assumption and perceives agent heterogeneity and the induced disparities in wealth accumulation, as an important dimension of economic policy-making. Within such economic envir...

  12. Spatial Heterogeneity of the Forest Canopy Scales with the Heterogeneity of an Understory Shrub Based on Fractal Analysis

    Directory of Open Access Journals (Sweden)

    Catherine K. Denny

    2017-04-01

    Full Text Available Spatial heterogeneity of vegetation is an important landscape characteristic, but is difficult to assess due to scale-dependence. Here we examine how spatial patterns in the forest canopy affect those of understory plants, using the shrub Canada buffaloberry (Shepherdia canadensis (L.) Nutt.) as a focal species. Evergreen and deciduous forest canopy and buffaloberry shrub presence were measured with line-intercept sampling along ten 2-km transects in the Rocky Mountain foothills of west-central Alberta, Canada. Relationships between overstory canopy and understory buffaloberry presence were assessed for scales ranging from 2 m to 502 m. Fractal dimensions of both canopy and buffaloberry were estimated and then related using box-counting methods to evaluate spatial heterogeneity based on patch distribution and abundance. Effects of canopy presence on buffaloberry were scale-dependent, with shrub presence negatively related to evergreen canopy cover and positively related to deciduous cover. The effect of evergreen canopy was significant at a local scale between 2 m and 42 m, while that of deciduous canopy was significant at a meso-scale between 150 m and 358 m. Fractal analysis indicated that buffaloberry heterogeneity positively scaled with evergreen canopy heterogeneity, but was unrelated to that of deciduous canopy. This study demonstrates that evergreen canopy cover is a determinant of buffaloberry heterogeneity, highlighting the importance of spatial scale and canopy composition in understanding canopy-understory relationships.

  13. Whole-body voxel-based personalized dosimetry: Multiple voxel S-value approach for heterogeneous media with non-uniform activity distributions.

    Science.gov (United States)

    Lee, Min Sun; Kim, Joong Hyun; Paeng, Jin Chul; Kang, Keon Wook; Jeong, Jae Min; Lee, Dong Soo; Lee, Jae Sung

    2017-12-14

    Personalized dosimetry with high accuracy is becoming more important because of the growing interests in personalized medicine and targeted radionuclide therapy. Voxel-based dosimetry using dose point kernel or voxel S-value (VSV) convolution is available. However, these approaches do not consider medium heterogeneity. Here, we propose a new method for whole-body voxel-based personalized dosimetry for heterogeneous media with non-uniform activity distributions, which is referred to as the multiple VSV approach. Methods: The multiple numbers (N) of VSVs for media with different densities covering the whole-body density ranges were used instead of using only a single VSV for water. The VSVs were pre-calculated using GATE Monte Carlo simulation; those were convoluted with the time-integrated activity to generate density-specific dose maps. Computed tomography-based segmentation was conducted to generate binary maps for each density region. The final dose map was acquired by the summation of N segmented density-specific dose maps. We tested several sets of VSVs with different densities: N = 1 (single water VSV), 4, 6, 8, 10, and 20. To validate the proposed method, phantom and patient studies were conducted and compared with direct Monte Carlo, which was considered the ground truth. Finally, patient dosimetry (10 subjects) was conducted using the multiple VSV approach and compared with the single VSV and organ-based dosimetry approaches. Errors at the voxel- and organ-levels were reported for eight organs. Results: In the phantom and patient studies, the multiple VSV approach showed significant improvements regarding voxel-level errors, especially for the lung and bone regions. As N increased, voxel-level errors decreased, although some overestimations were observed at lung boundaries. In the case of multiple VSVs ( N = 8), we achieved voxel-level errors of 2.06%. In the dosimetry study, our proposed method showed much improved results compared to the single VSV and
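
    Numerically, the approach amounts to convolving the time-integrated activity with each density-specific kernel, masking each result to the voxels that fall in that density class, and summing the N partial dose maps. The sketch below illustrates that bookkeeping with random arrays and placeholder kernels; real VSV kernels, density bounds and units are not represented.

      import numpy as np
      from scipy.ndimage import convolve

      def multiple_vsv_dose(activity, density, vsv_kernels):
          """Sum of density-masked convolutions: one VSV kernel per density class.
          `vsv_kernels` maps (low, high) density bounds -> 3-D kernel."""
          dose = np.zeros_like(activity, dtype=float)
          for (low, high), kernel in vsv_kernels.items():
              mask = (density >= low) & (density < high)   # binary segmentation
              dose += mask * convolve(activity, kernel, mode="constant")
          return dose

      activity = np.random.rand(16, 16, 16)                     # arbitrary units
      density = np.random.uniform(0.2, 1.9, size=(16, 16, 16))  # g/cm^3
      kernels = {(0.0, 0.9): np.full((3, 3, 3), 1 / 27) * 1.5,  # lung-like
                 (0.9, 1.2): np.full((3, 3, 3), 1 / 27),        # soft tissue
                 (1.2, 3.0): np.full((3, 3, 3), 1 / 27) * 0.7}  # bone-like
      print(multiple_vsv_dose(activity, density, kernels).mean())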

  14. Characterizing heterogeneous dynamics at hydrated electrode surfaces

    Science.gov (United States)

    Willard, Adam P.; Limmer, David T.; Madden, Paul A.; Chandler, David

    2013-05-01

    In models of Pt 111 and Pt 100 surfaces in water, motions of molecules in the first hydration layer are spatially and temporally correlated. To interpret these collective motions, we apply quantitative measures of dynamic heterogeneity that are standard tools for considering glassy systems. Specifically, we carry out an analysis in terms of mobility fields and distributions of persistence times and exchange times. In so doing, we show that dynamics in these systems is facilitated by transient disorder in frustrated two-dimensional hydrogen bonding networks. The frustration is the result of unfavorable geometry imposed by strong metal-water bonding. The geometry depends upon the structure of the underlying metal surface. Dynamic heterogeneity of water on the Pt 111 surface is therefore qualitatively different than that for water on the Pt 100 surface. In both cases, statistics of this ad-layer dynamic heterogeneity responds asymmetrically to applied voltage.

  15. Characterizing heterogeneous dynamics at hydrated electrode surfaces.

    Science.gov (United States)

    Willard, Adam P; Limmer, David T; Madden, Paul A; Chandler, David

    2013-05-14

    In models of Pt 111 and Pt 100 surfaces in water, motions of molecules in the first hydration layer are spatially and temporally correlated. To interpret these collective motions, we apply quantitative measures of dynamic heterogeneity that are standard tools for considering glassy systems. Specifically, we carry out an analysis in terms of mobility fields and distributions of persistence times and exchange times. In so doing, we show that dynamics in these systems is facilitated by transient disorder in frustrated two-dimensional hydrogen bonding networks. The frustration is the result of unfavorable geometry imposed by strong metal-water bonding. The geometry depends upon the structure of the underlying metal surface. Dynamic heterogeneity of water on the Pt 111 surface is therefore qualitatively different than that for water on the Pt 100 surface. In both cases, statistics of this ad-layer dynamic heterogeneity responds asymmetrically to applied voltage.

  16. XML-based approaches for the integration of heterogeneous bio-molecular data.

    Science.gov (United States)

    Mesiti, Marco; Jiménez-Ruiz, Ernesto; Sanz, Ismael; Berlanga-Llavori, Rafael; Perlasca, Paolo; Valentini, Giorgio; Manset, David

    2009-10-15

    Today's public database infrastructure spans a very large collection of heterogeneous biological data, opening new opportunities for molecular biology, biomedical and bioinformatics research, but also raising new problems for their integration and computational processing. In this paper we survey the most interesting and novel approaches for the representation, integration and management of different kinds of biological data by exploiting XML and the related recommendations and approaches. Moreover, we present new and interesting cutting-edge approaches for the appropriate management of heterogeneous biological data represented through XML. XML has succeeded in the integration of heterogeneous biomolecular information, and has established itself as the syntactic glue for biological data sources. Nevertheless, a large variety of XML-based data formats have been proposed, making effective integration of bioinformatics data schemes difficult. The adoption of a few semantically rich standard formats is urgent to achieve a seamless integration of the current biological resources.
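
    The practical difficulty the survey points to (many XML dialects describing the same biological entities) comes down to mapping differently structured documents onto one shared representation. A toy sketch, with both record formats invented for illustration:

      import xml.etree.ElementTree as ET

      # Two hypothetical XML records for the same protein, in different formats.
      source_a = "<protein><id>P12345</id><organism>E. coli</organism></protein>"
      source_b = "<entry accession='P12345'><taxon name='Escherichia coli'/></entry>"

      def normalize_a(xml_text):
          root = ET.fromstring(xml_text)
          return {"accession": root.findtext("id"),
                  "organism": root.findtext("organism")}

      def normalize_b(xml_text):
          root = ET.fromstring(xml_text)
          return {"accession": root.get("accession"),
                  "organism": root.find("taxon").get("name")}

      # Mapping both formats onto one shared schema is the (greatly simplified)
      # integration step.
      print(normalize_a(source_a))
      print(normalize_b(source_b))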

  17. An open, object-based modeling approach for simulating subsurface heterogeneity

    Science.gov (United States)

    Bennett, J.; Ross, M.; Haslauer, C. P.; Cirpka, O. A.

    2017-12-01

    Characterization of subsurface heterogeneity with respect to hydraulic and geochemical properties is critical in hydrogeology, as their spatial distribution controls groundwater flow and solute transport. Many approaches to characterizing subsurface heterogeneity do not account for well-established geological concepts about the deposition of the aquifer materials; those that do (i.e. process-based methods) often require forcing parameters that are difficult to derive from site observations. We have developed a new method for simulating subsurface heterogeneity that honors concepts of sequence stratigraphy, resolves fine-scale heterogeneity and anisotropy of distributed parameters, and resembles observed sedimentary deposits. The method implements a multi-scale hierarchical facies modeling framework based on architectural element analysis, with larger features composed of smaller sub-units. The Hydrogeological Virtual Reality simulator (HYVR) simulates distributed parameter models using an object-based approach. Input parameters are derived from observations of stratigraphic morphology in sequence type-sections. Simulation outputs can be used for generic simulations of groundwater flow and solute transport, and for the generation of three-dimensional training images needed in applications of multiple-point geostatistics. The HYVR algorithm is flexible and easy to customize. The algorithm was written in the open-source programming language Python, and is intended to form a code base for hydrogeological researchers, as well as a platform that can be further developed to suit investigators' individual needs. This presentation will encompass the conceptual background and computational methods of the HYVR algorithm, the derivation of input parameters from site characterization, and the results of groundwater flow and solute transport simulations in different depositional settings.

  18. ADANS database specification

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-01-16

    The purpose of the Air Mobility Command (AMC) Deployment Analysis System (ADANS) Database Specification (DS) is to describe the database organization and storage allocation and to provide the detailed data model of the physical design and information necessary for the construction of the parts of the database (e.g., tables, indexes, rules, defaults). The DS includes entity relationship diagrams, table and field definitions, reports on other database objects, and a description of the ADANS data dictionary. ADANS is the automated system used by Headquarters AMC and the Tanker Airlift Control Center (TACC) for airlift planning and scheduling of peacetime and contingency operations as well as for deliberate planning. ADANS also supports planning and scheduling of Air Refueling Events by the TACC and the unit-level tanker schedulers. ADANS receives input in the form of movement requirements and air refueling requests. It provides a suite of tools for planners to manipulate these requirements/requests against mobility assets and to develop, analyze, and distribute schedules. Analysis tools are provided for assessing the products of the scheduling subsystems, and editing capabilities support the refinement of schedules. A reporting capability provides formatted screen, print, and/or file outputs of various standard reports. An interface subsystem handles message traffic to and from external systems. The database is an integral part of the functionality summarized above.

  19. A database of worldwide glacier thickness observations

    DEFF Research Database (Denmark)

    Gärtner-Roer, I.; Naegeli, K.; Huss, M.

    2014-01-01

    One of the grand challenges in glacier research is to assess the total ice volume and its global distribution. Over the past few decades the compilation of a world glacier inventory has been well advanced both in institutional set-up and in spatial coverage. The inventory is, however, restricted to glacier surface observations, and although thickness has been observed on many glaciers and ice caps around the globe, it has not yet been published in the shape of a readily available database. Here, we present a standardized database of glacier thickness observations compiled by an extensive literature review and from airborne data extracted from NASA's Operation IceBridge. This database contains ice thickness observations from roughly 1100 glaciers and ice caps, including 550 glacier-wide estimates and 750,000 point observations. A comparison of these observational ice thicknesses with results from the different estimation approaches is also presented. This initial database of glacier and ice cap thickness will hopefully be further enlarged and intensively used for a better understanding of the global glacier ice volume and its distribution.

  20. Study on evaluation method for heterogeneous sedimentary rocks based on forward model

    International Nuclear Information System (INIS)

    Masui, Yasuhiro; Kawada, Koji; Katoh, Arata; Tsuji, Takashi; Suwabe, Mizue

    2004-02-01

    It is very important to estimate the facies distribution of heterogeneous sedimentary rocks for geological disposal of high-level radioactive waste. The heterogeneity of sedimentary rocks is due to the variable distribution of grain size and mineral composition. The objective of this study is to establish an evaluation method for heterogeneous sedimentary rocks based on a forward model. The study consisted of a geological study of the Horonobe area and the development of software for a sedimentary model. The geological study was composed of the following items. 1. The sedimentary system of the Koetoi and Wakkanai formations in the Horonobe area was compiled based on published papers. 2. The cores of HDB-1 were observed mainly from a sedimentological point of view. 3. The facies and compaction properties of argillaceous rocks were studied based on physical logs and core analysis data from wells. 4. Structure maps, isochrone maps, isopach maps and restored geological sections were made. Software for a sedimentary model showing the sedimentary system on a basin scale was developed. This software estimates the facies distribution and hydraulic conductivity of sedimentary rocks in three dimensions by numerical simulation. (author)

  1. Advanced technologies for scalable ATLAS conditions database access on the grid

    CERN Document Server

    Basset, R; Dimitrov, G; Girone, M; Hawkings, R; Nevski, P; Valassi, A; Vaniachine, A; Viegas, F; Walker, R; Wong, A

    2010-01-01

    During massive data reprocessing operations an ATLAS Conditions Database application must support concurrent access from numerous ATLAS data processing jobs running on the Grid. By simulating realistic work-flow, ATLAS database scalability tests provided feedback for Conditions Db software optimization and allowed precise determination of required distributed database resources. In distributed data processing one must take into account the chaotic nature of Grid computing characterized by peak loads, which can be much higher than average access rates. To validate database performance at peak loads, we tested database scalability at very high concurrent jobs rates. This has been achieved through coordinated database stress tests performed in series of ATLAS reprocessing exercises at the Tier-1 sites. The goal of database stress tests is to detect scalability limits of the hardware deployed at the Tier-1 sites, so that the server overload conditions can be safely avoided in a production environment. Our analysi...
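
    The actual tests above ran against Oracle-based Conditions DB servers at the Tier-1 sites; purely to illustrate the idea of probing concurrency limits, the sketch below queries a local SQLite stand-in from a growing number of client threads and reports throughput and mean latency. The schema, file name and workload are hypothetical.

      import sqlite3, threading, time

      DB = "conditions_demo.db"                      # hypothetical local stand-in

      def setup():
          con = sqlite3.connect(DB)
          con.execute("CREATE TABLE IF NOT EXISTS conditions (iov INTEGER PRIMARY KEY, payload TEXT)")
          con.executemany("INSERT OR REPLACE INTO conditions VALUES (?, ?)",
                          [(i, f"payload-{i}") for i in range(10_000)])
          con.commit(); con.close()

      def worker(n_queries, latencies):
          con = sqlite3.connect(DB)                  # one connection per client thread
          for i in range(n_queries):
              t0 = time.perf_counter()
              con.execute("SELECT payload FROM conditions WHERE iov = ?", (i % 10_000,)).fetchone()
              latencies.append(time.perf_counter() - t0)
          con.close()

      if __name__ == "__main__":
          setup()
          for n_threads in (1, 4, 16, 64):           # ramp up the number of concurrent "jobs"
              latencies = []
              threads = [threading.Thread(target=worker, args=(500, latencies)) for _ in range(n_threads)]
              t0 = time.perf_counter()
              for t in threads: t.start()
              for t in threads: t.join()
              wall = time.perf_counter() - t0
              print(f"{n_threads:3d} clients: {n_threads * 500 / wall:8.0f} queries/s, "
                    f"mean latency {1000 * sum(latencies) / len(latencies):.2f} ms")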

  2. Optimizing queries in distributed systems

    Directory of Open Access Journals (Sweden)

    Ion LUNGU

    2006-01-01

    Full Text Available This research presents the main elements of query optimization in distributed systems. First, the data architecture, in accordance with the system-level architecture in a distributed environment, is presented. Then the architecture of a distributed database management system (DDBMS) is described at the conceptual level, followed by a presentation of the distributed query execution steps in these information systems. The research ends with a presentation of some aspects of distributed database query optimization and the strategies used for it.
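
    As a toy illustration of the cost-based reasoning a distributed query optimizer performs, the sketch below enumerates candidate execution sites for a two-relation join and picks the plan with the lowest estimated transfer-plus-processing cost. The relations, statistics and cost model are invented.

      # Hypothetical statistics: relation -> (site where it is stored, size in MB)
      relations = {"ORDERS": ("site_A", 800), "CUSTOMERS": ("site_B", 50)}
      transfer_cost_per_mb = 1.0      # invented network cost unit
      join_cost_per_mb = 0.1          # invented local processing cost unit

      def plan_cost(execution_site):
          """Ship every relation not already at the execution site, then join locally."""
          shipped = sum(size for site, size in relations.values() if site != execution_site)
          total_size = sum(size for _, size in relations.values())
          return shipped * transfer_cost_per_mb + total_size * join_cost_per_mb

      candidate_sites = {site for site, _ in relations.values()}
      best = min(candidate_sites, key=plan_cost)
      for site in sorted(candidate_sites):
          print(f"join at {site}: estimated cost {plan_cost(site):.1f}")
      print("chosen plan: execute join at", best)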

  3. Sources of heterogeneity in studies of the BMI-mortality association.

    Science.gov (United States)

    Peter, Raphael Simon; Nagel, Gabriele

    2017-06-01

    To date, the amount of heterogeneity among studies of the body mass index-mortality association attributable to differences in the age distribution and length of follow-up has not been quantified. Therefore, we wanted to quantify the amount of heterogeneity attributable to age and follow-up in results of studies on the body mass index-mortality relation. We used optima of the body mass index mortality association reported for 30 populations and performed meta-regression to estimate the amount of heterogeneity attributable to sex, ethnicity, mean age at baseline, percentage smokers, and length of follow-up. Ethnicity as single factor accounted for 36% (95% CI, 11-56%) of heterogeneity. Mean age and length of follow-up had an interactive effect and together accounted for 56% (95% CI, 24-74%) of the remaining heterogeneity. Sex did not significantly contribute to the heterogeneity, after controlling for ethnicity, age, and length of follow-up. A considerable amount of heterogeneity in studies of the body mass index-mortality association is attributable to ethnicity, age, and length of follow-up. Copyright © 2017 The Authors. Production and hosting by Elsevier B.V. All rights reserved.
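
    A hedged sketch of the kind of meta-regression described above: study-level BMI optima are regressed on candidate moderators and the share of between-study variance explained is read off from R-squared. The data values below are invented, and the original analysis used weighted meta-regression on 30 published population optima rather than this unweighted toy.

      import numpy as np

      # Invented example data: per-study BMI optimum, mean age at baseline, follow-up (years).
      optima    = np.array([24.1, 25.3, 26.0, 27.2, 23.8, 26.5, 25.0, 27.9])
      mean_age  = np.array([45.0, 52.0, 60.0, 68.0, 40.0, 63.0, 50.0, 70.0])
      follow_up = np.array([ 5.0, 10.0, 12.0, 20.0,  4.0, 15.0,  8.0, 25.0])

      # Moderators plus their interaction, mirroring the age-by-follow-up effect reported above.
      X = np.column_stack([np.ones_like(optima), mean_age, follow_up, mean_age * follow_up])
      beta, *_ = np.linalg.lstsq(X, optima, rcond=None)
      fitted = X @ beta
      ss_res = np.sum((optima - fitted) ** 2)
      ss_tot = np.sum((optima - optima.mean()) ** 2)
      print("coefficients:", np.round(beta, 3))
      print(f"share of heterogeneity explained (R^2): {1 - ss_res / ss_tot:.2f}")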

  4. On Optimal Geographical Caching in Heterogeneous Cellular Networks

    NARCIS (Netherlands)

    Serbetci, Berksan; Goseling, Jasper

    2017-01-01

    In this work we investigate optimal geographical caching in heterogeneous cellular networks where different types of base stations (BSs) have different cache capacities. Users request files from a content library according to a known probability distribution. The performance metric is the total hit
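
    As a minimal sketch of the optimization target described above, the code below computes the total hit probability when each tier of base stations caches its most popular files from a Zipf-distributed content library, under the simplifying (and here purely assumed) model that a user is covered by each tier independently with a fixed probability. All capacities, coverage probabilities and the Zipf exponent are invented.

      import numpy as np

      library_size = 1000
      zipf_s = 0.8
      popularity = 1.0 / np.arange(1, library_size + 1) ** zipf_s
      popularity /= popularity.sum()                 # request probability per file

      # Hypothetical tiers: (cache capacity in files, probability of coverage by that tier)
      tiers = [(50, 0.9), (200, 0.3)]

      # A request is a hit if at least one covering tier caches the file; each tier here
      # simply caches its top-capacity most popular files (most-popular-content policy).
      miss = np.ones(library_size)
      for capacity, p_cover in tiers:
          cached = np.zeros(library_size); cached[:capacity] = 1.0
          miss *= 1.0 - p_cover * cached             # miss if not covered-and-cached by this tier
      total_hit = np.sum(popularity * (1.0 - miss))
      print(f"total hit probability: {total_hit:.3f}")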

  5. Biophysical, infrastructural and social heterogeneities explain spatial distribution of waterborne gastrointestinal disease burden in Mexico City

    Science.gov (United States)

    Baeza, Andrés; Estrada-Barón, Alejandra; Serrano-Candela, Fidel; Bojórquez, Luis A.; Eakin, Hallie; Escalante, Ana E.

    2018-06-01

    Due to unplanned growth, large extension and limited resources, most megacities in the developing world are vulnerable to hydrological hazards and infectious diseases caused by waterborne pathogens. Here we aim to elucidate the extent of the relation between the spatial heterogeneity of physical and socio-economic factors associated with hydrological hazards (flooding and scarcity) and the spatial distribution of gastrointestinal disease in Mexico City, a megacity with more than 8 million people. We applied spatial statistics and multivariate regression analyses to high resolution records of gastrointestinal diseases during two time frames (2007–2009 and 2010–2014). Results show a pattern of significant association between water flooding events and disease incidence in the city center (lowlands). We also found that in the periphery (highlands), higher incidence is generally associated with household infrastructure deficiency. Our findings suggest the need for integrated and spatially tailored interventions by public works and public health agencies, aimed to manage socio-hydrological vulnerability in Mexico City.

  6. Spatial heterogeneity of biofouling under different cross-flow velocities in reverse osmosis membrane systems

    KAUST Repository

    Farhat, Nadia

    2016-09-06

    The spatially heterogeneous distribution of biofouling in spiral wound membrane systems restricts (i) the water distribution over the membrane surface and therefore (ii) the membrane-based water treatment. The objective of the study was to assess the spatial heterogeneity of biofilm development over the membrane fouling simulator (MFS) length (inlet and outlet part) at three different cross-flow velocities (0.08, 0.12 and 0.16 m/s). The MFS contained sheets of membrane and feed spacer and simulated the first 0.20 m of spiral-wound membrane modules where biofouling accumulates the most in practice. In-situ non-destructive oxygen imaging using planar optodes was applied to determine the biofilm spatially resolved activity and heterogeneity.

  7. Physical heterogeneity control on effective mineral dissolution rates

    Science.gov (United States)

    Jung, Heewon; Navarre-Sitchler, Alexis

    2018-04-01

    Hydrologic heterogeneity may be an important factor contributing to the discrepancy in laboratory and field measured dissolution rates, but the governing factors influencing mineral dissolution rates among various representations of physical heterogeneity remain poorly understood. Here, we present multiple reactive transport simulations of anorthite dissolution in 2D latticed random permeability fields and link the information from local grid scale (1 cm or 4 m) dissolution rates to domain-scale (1m or 400 m) effective dissolution rates measured by the flux-weighted average of an ensemble of flow paths. We compare results of homogeneous models to heterogeneous models with different structure and layered permeability distributions within the model domain. Chemistry is simplified to a single dissolving primary mineral (anorthite) distributed homogeneously throughout the domain and a single secondary mineral (kaolinite) that is allowed to dissolve or precipitate. Results show that increasing size in correlation structure (i.e. long integral scales) and high variance in permeability distribution are two important factors inducing a reduction in effective mineral dissolution rates compared to homogeneous permeability domains. Larger correlation structures produce larger zones of low permeability where diffusion is an important transport mechanism. Due to the increased residence time under slow diffusive transport, the saturation state of a solute with respect to a reacting mineral approaches equilibrium and reduces the reaction rate. High variance in permeability distribution favorably develops large low permeability zones that intensifies the reduction in mixing and effective dissolution rate. However, the degree of reduction in effective dissolution rate observed in 1 m × 1 m domains is too small (equilibrium conditions reduce the effective dissolution rate by increasing the saturation state. However, in large domains where less- or non-reactive zones develop, higher
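
    The domain-scale effective rate described above is measured as the flux-weighted average over an ensemble of flow paths. A minimal numpy sketch of that bookkeeping, with invented local rates and path fluxes (not values from the study), is shown below; it also prints the unweighted mean for comparison.

      import numpy as np

      # Invented local (grid-scale) dissolution rates along each flow path [mol m-2 s-1]
      path_rates  = np.array([1.2e-10, 8.0e-11, 3.5e-11, 9.5e-11])
      # Invented volumetric flux carried by each flow path [m3 s-1]
      path_fluxes = np.array([2.0e-4,  1.5e-4,  0.2e-4,  1.0e-4])

      effective_rate = np.sum(path_fluxes * path_rates) / np.sum(path_fluxes)
      print(f"flux-weighted effective dissolution rate: {effective_rate:.2e} mol m-2 s-1")
      print(f"unweighted mean of local rates:           {path_rates.mean():.2e} mol m-2 s-1")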

  8. Variable EBV DNA Load Distributions and Heterogeneous EBV mRNA Expression Patterns in the Circulation of Solid Organ versus Stem Cell Transplant Recipients

    Directory of Open Access Journals (Sweden)

    A. E. Greijer

    2012-01-01

    Full Text Available Epstein-Barr virus (EBV)-driven post-transplant lymphoproliferative disease (PTLD) is a heterogeneous and potentially life-threatening condition. Early identification of aberrant EBV activity may prevent progression to B-cell lymphoma. We measured EBV DNA load and RNA profiles in plasma and cellular blood compartments of stem cell transplant recipients (SCT; n=5), solid organ transplant recipients (SOT; n=15), and SOT having chronic elevated EBV-DNA load (n=12). In SCT, EBV DNA was heterogeneously distributed, either in plasma or leukocytes or both. In SOT, EBV DNA load was always cell associated, predominantly in B cells, but occasionally in T cells (CD4 and CD8) or monocytes. All SCT with cell-associated EBV DNA showed BARTs and EBNA1 expression, while LMP1 and LMP2 mRNA was found in 1 and 3 cases, respectively. In SOT, expression of BARTs was detected in all leukocyte samples. LMP2 and EBNA1 mRNA was found in 5/15 and 2/15, respectively, but LMP1 mRNA in only 1, coinciding with severe PTLD and high EBV DNA. Conclusion: EBV DNA is differently distributed between white cells and plasma in SOT versus SCT. EBV RNA profiling in blood is feasible and may have added value for understanding pathogenic virus activity in patients with elevated EBV-DNA.

  9. A heterogeneous boron distribution in soil influences the poplar root system architecture development

    Science.gov (United States)

    Rees, R.; Robinson, B. H.; Hartmann, S.; Lehmann, E.; Schulin, R.

    2009-04-01

    Poplars are well suited for the phytomanagement of boron (B)-contaminated sites, due to their high transpiration rate and tolerance to elevated soil B concentrations. However, the uptake and the fate of B in poplar stands are not well understood. This information is crucial to improve the design of phytomanagement systems, where the primary role of poplars is to reduce B leaching by reducing the water flux through the contaminated material. Like other trace elements, B occurs heterogeneously in soils. Concentrations can differ up to an order of magnitude within centimetres. These gradients affect plant root growth and thus via preferential flow along the roots water and mass transport in soils to ground and surface waters. Generally there are three possible reactions of plant roots to patches with elevated trace element concentrations in soils: indifference, avoidance, or foraging. While avoidance or indifference might seem to be the most obvious strategies, foraging cannot be excluded a priori, because of the high demand of poplars for B compared to other tree species. We aimed to determine the rooting strategies of poplars in soils where B is either homo- or heterogeneously distributed. We planted 5 cm cuttings of Populus tremula var. Birmensdorf clones in aluminum (Al) containers with internal dimensions of 64 x 67 x 1.2 cm. The soil used was subsoil from northern Switzerland with a naturally low B and organic C concentration. We setup two treatments and a control with three replicates each. We spiked a bigger and a smaller portion of the soil with the same amount of B(OH)3-salt, in order to obtain soil concentrations of 7.5 mg B kg-1 and 20 mg B kg-1. We filled the containers with (a) un-spiked soil, (b) the 7.5 mg B kg-1 soil and (c) heterogeneously. The heterogeneous treatment consisted of one third 20 mg B kg-1 soil and two thirds control soil. We grew the poplars in a small greenhouse over 2 months and from then on in a climate chamber for another 3 months

  10. An empirical likelihood ratio test robust to individual heterogeneity for differential expression analysis of RNA-seq.

    Science.gov (United States)

    Xu, Maoqi; Chen, Liang

    2018-01-01

    The individual sample heterogeneity is one of the biggest obstacles in biomarker identification for complex diseases such as cancers. Current statistical models to identify differentially expressed genes between disease and control groups often overlook the substantial human sample heterogeneity. Meanwhile, traditional nonparametric tests lose detailed data information and sacrifice the analysis power, although they are distribution free and robust to heterogeneity. Here, we propose an empirical likelihood ratio test with a mean-variance relationship constraint (ELTSeq) for the differential expression analysis of RNA sequencing (RNA-seq). As a distribution-free nonparametric model, ELTSeq handles individual heterogeneity by estimating an empirical probability for each observation without making any assumption about read-count distribution. It also incorporates a constraint for the read-count overdispersion, which is widely observed in RNA-seq data. ELTSeq demonstrates a significant improvement over existing methods such as edgeR, DESeq, t-tests, Wilcoxon tests and the classic empirical likelihood-ratio test when handling heterogeneous groups. It will significantly advance the transcriptomics studies of cancers and other complex disease. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  11. Using XML technology for the ontology-based semantic integration of life science databases.

    Science.gov (United States)

    Philippi, Stephan; Köhler, Jacob

    2004-06-01

    Several hundred internet accessible life science databases with constantly growing contents and varying areas of specialization are publicly available via the internet. Database integration, consequently, is a fundamental prerequisite to be able to answer complex biological questions. Due to the presence of syntactic, schematic, and semantic heterogeneities, large scale database integration at present takes considerable efforts. As there is a growing apprehension of extensible markup language (XML) as a means for data exchange in the life sciences, this article focuses on the impact of XML technology on database integration in this area. In detail, a general architecture for ontology-driven data integration based on XML technology is introduced, which overcomes some of the traditional problems in this area. As a proof of concept, a prototypical implementation of this architecture based on a native XML database and an expert system shell is described for the realization of a real world integration scenario.
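
    As a hedged sketch of the general XML-mediation idea discussed above (not the authors' architecture, which uses a native XML database and an expert system shell), the snippet below maps two hypothetical source documents with different tag names onto one shared vocabulary via a small ontology-style dictionary. All tag names, values and mappings are invented.

      import xml.etree.ElementTree as ET

      # Two hypothetical source records using different (heterogeneous) schemas.
      source_a = "<entry><geneName>TP53</geneName><organism>Homo sapiens</organism></entry>"
      source_b = "<record><symbol>TP53</symbol><species>Homo sapiens</species></record>"

      # Ontology-style mapping: source tag -> shared concept used in the integrated view.
      mapping = {"geneName": "gene", "symbol": "gene", "organism": "taxon", "species": "taxon"}

      def to_integrated(xml_text):
          integrated = ET.Element("integratedEntry")
          for child in ET.fromstring(xml_text):
              concept = mapping.get(child.tag)
              if concept is not None:                # drop tags the ontology does not know
                  ET.SubElement(integrated, concept).text = child.text
          return integrated

      for src in (source_a, source_b):
          print(ET.tostring(to_integrated(src), encoding="unicode"))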

  12. Spatial heterogeneity of plant-soil feedback affects root interactions and interspecific competition.

    Science.gov (United States)

    Hendriks, Marloes; Ravenek, Janneke M; Smit-Tiekstra, Annemiek E; van der Paauw, Jan Willem; de Caluwe, Hannie; van der Putten, Wim H; de Kroon, Hans; Mommer, Liesje

    2015-08-01

    Plant-soil feedback is receiving increasing interest as a factor influencing plant competition and species coexistence in grasslands. However, we do not know how spatial distribution of plant-soil feedback affects plant below-ground interactions. We investigated the way in which spatial heterogeneity of soil biota affects competitive interactions in grassland plant species. We performed a pairwise competition experiment combined with heterogeneous distribution of soil biota using four grassland plant species and their soil biota. Patches were applied as quadrants of 'own' and 'foreign' soils from all plant species in all pairwise combinations. To evaluate interspecific root responses, species-specific root biomass was quantified using real-time PCR. All plant species suffered negative soil feedback, but strength was species-specific, reflected by a decrease in root growth in own compared with foreign soil. Reduction in root growth in own patches by the superior plant competitor provided opportunities for inferior competitors to increase root biomass in these patches. These patterns did not cascade into above-ground effects during our experiment. We show that root distributions can be determined by spatial heterogeneity of soil biota, affecting plant below-ground competitive interactions. Thus, spatial heterogeneity of soil biota may contribute to plant species coexistence in species-rich grasslands. © 2015 The Authors. New Phytologist © 2015 New Phytologist Trust.

  13. Enabling Computational Dynamics in Distributed Computing Environments Using a Heterogeneous Computing Template

    Science.gov (United States)

    2011-08-09

    ...heterogeneous computing concept advertised recently as the paradigm capable of delivering exascale flop rates by the end of the decade. In this framework...

  14. Distributed multimedia database technologies supported by MPEG-7 and MPEG-21

    CERN Document Server

    Kosch, Harald

    2003-01-01

    Contents (excerpt): Introduction; Multimedia Content: Context; Multimedia Systems and Databases; (Multi)Media Data and Multimedia Metadata; Purpose and Organization of the Book; MPEG-7: The Multimedia Content Description Standard; Introduction; MPEG-7 and Multimedia Database Systems; Principles for Creating MPEG-7 Documents; MPEG-7 Description Definition Language; Step-by-Step Approach for Creating an MPEG-7 Document; Extending the Description Schema of MPEG-7; Encoding and Decoding of MPEG-7 Documents for Delivery (Binary Format for MPEG-7); Audio Part of MPEG-7; MPEG-7 Supporting Tools and Referen...

  15. Dosimetric evaluation in heterogeneous tissue of anterior electron beam irradiation for treatment of retinoblastoma

    International Nuclear Information System (INIS)

    Kirsner, S.M.; Hogstrom, K.R.; Kurup, R.G.; Moyers, M.F.

    1987-01-01

    A dosimetric study of anterior electron beam irradiation for treatment of retinoblastoma was performed to evaluate the influence of tissue heterogeneities on the dose distribution within the eye and the accuracy of the dose calculated by a pencil beam algorithm. Film measurements were made in a variety of polystyrene phantoms and in a removable polystyrene eye incorporated into a tissue substitute phantom constructed from a human skull. Measurements in polystyrene phantoms were used to demonstrate the algorithm's ability to predict the effect of a lens block placed in the beam, as well as the eye's irregular surface shape. The eye phantom was used to measure dose distributions within the eye in both the sagittal and transverse planes in order to test the algorithm's ability to predict the dose distribution when bony heterogeneities are present. Results show (1) that previous treatment planning conclusions based on flat, uniform phantoms for central-axis depth dose are adequate; (2) that a three-dimensional heterogeneity correction is required for accurate dose calculations; and (3) that if only a two-dimensional heterogeneity correction is used in calculating the dose, it is more accurate for the sagittal than the transverse plane

  16. Heterogeneous patterns enhancing static and dynamic texture classification

    International Nuclear Information System (INIS)

    Silva, Núbia Rosa da; Martinez Bruno, Odemir

    2013-01-01

    Some mixtures, such as colloids like milk, blood, and gelatin, have a homogeneous appearance when viewed with the naked eye; however, observing them at the nanoscale makes it possible to understand the heterogeneity of their components. The same phenomenon can occur in pattern recognition, in which it is possible to see heterogeneous patterns in texture images. However, current methods of texture analysis cannot adequately describe such heterogeneous patterns. Common methods used by researchers analyse the image information in a global way, taking all its features in an integrated manner. Furthermore, multi-scale analysis examines the patterns at different scales, but still preserves the homogeneous analysis. On the other hand, various methods use textons to represent the texture, breaking texture down into its smallest unit. To tackle this problem, we propose a method to identify texture patterns that are not as small as textons, at distinct scales, enhancing the separability among different types of texture. We find sub-patterns of texture according to the scale and then group similar patterns for a more refined analysis. Tests were performed on four static texture databases and one dynamic one. Results show that our method provides better classification rates compared with conventional approaches for both static and dynamic textures.

  17. A Simulation Tool for Distributed Databases.

    Science.gov (United States)

    1981-09-01

    Reed's multiversion system [RE1T8] may also be viewed as updating only copies until the commit is made. The decision to make the changes...distributed voting, and Ellis' ring algorithm. Other, significantly different algorithms not covered in his work include Reed's multiversion algorithm, the

  18. Smart Control of Energy Distribution Grids over Heterogeneous Communication Networks

    DEFF Research Database (Denmark)

    Olsen, Rasmus Løvenstein; Iov, Florin; Hägerling, Christian

    2014-01-01

    The expected growth in distributed generation will significantly affect the operation and control of today's distribution grids. Being confronted with short-time power variations of distributed generation, the assurance of a reliable service (grid stability, avoidance of energy losses) and the qu...

  19. Providing Availability, Performance, and Scalability By Using Cloud Database

    OpenAIRE

    Prof. Dr. Alaa Hussein Al-Hamami; RafalAdeeb Al-Khashab

    2014-01-01

    With the development of the internet, new technologies and concepts have drawn the attention of internet users, especially in information technology; one such concept is the cloud. Cloud computing includes different components, of which the cloud database has become an important one. A cloud database is a distributed database that delivers computing as a service, or in the form of a virtual machine image, instead of as a product via the internet; its advantage is that the database can...

  20. Large Sample Neutron Activation Analysis of Heterogeneous Samples

    International Nuclear Information System (INIS)

    Stamatelatos, I.E.; Vasilopoulou, T.; Tzika, F.

    2018-01-01

    A Large Sample Neutron Activation Analysis (LSNAA) technique was developed for non-destructive analysis of heterogeneous bulk samples. The technique incorporated collimated scanning and combining experimental measurements and Monte Carlo simulations for the identification of inhomogeneities in large volume samples and the correction of their effect on the interpretation of gamma-spectrometry data. Corrections were applied for the effect of neutron self-shielding, gamma-ray attenuation, geometrical factor and heterogeneous activity distribution within the sample. A benchmark experiment was performed to investigate the effect of heterogeneity on the accuracy of LSNAA. Moreover, a ceramic vase was analyzed as a whole demonstrating the feasibility of the technique. The LSNAA results were compared against results obtained by INAA and a satisfactory agreement between the two methods was observed. This study showed that LSNAA is a technique capable to perform accurate non-destructive, multi-elemental compositional analysis of heterogeneous objects. It also revealed the great potential of the technique for the analysis of precious objects and artefacts that need to be preserved intact and cannot be damaged for sampling purposes. (author)

  1. Surface current double-heterogeneous multilayer multicell methodology

    International Nuclear Information System (INIS)

    Stepanek, J.; Segev, M.

    1991-01-01

    A surface current methodology is developed to respond to the need for treating the various levels of material heterogeneity in a double-heterogeneous multilayer multicell in processing neutron multigroup cross sections in the resonance as well as thermal energy range. First, the basic surface cosine current transport equations to calculate the energy-dependent neutron flux spatial distribution in the multilayered multicell are formulated. Slab, spherical and cylindrical geometries, as well as square and hexagonal lattices and pebble-bed configurations with white or reflective cell boundary conditions, are considered. Second, starting from the surface cosine-current formulation, a two-zone three-layer multicell formalism for reduction of heterogeneous flux expressions to equivalent homogeneous flux expression for table method was developed. This formalism allows an infinite, as well as a limited, number of second-heterogeneity cells within a partial first-heterogeneity cell layer to be considered. Also, the number of the first-and second-heterogeneity cell types is quite general. The 'outer' (right side) as well as 'inner' (left side) Dancoff probabilities can be calculated for any particular layer. An accurate, efficient, and compact interpolation procedure is developed to calculate the basic collision probabilities. These are transmission and escape probabilities for shells in slab, cylindrical, and spherical geometries, as well as Dancoff probabilities for cylinders in square and hexagonal lattices. The use of the interpolation procedure is exemplified in a multilayer multicell approximation for the Dancoff probability, enabling a routine evaluation of the equivalence-based shielded resonance integral in highly complex lattices of slab, cylindrical, or spherical cells. (author) 1 fig., 2 tabs., 10 refs

  2. The Earth's mantle in a microwave oven: thermal convection driven by a heterogeneous distribution of heat sources

    Science.gov (United States)

    Fourel, Loïc; Limare, Angela; Jaupart, Claude; Surducan, Emanoil; Farnetani, Cinzia G.; Kaminski, Edouard C.; Neamtu, Camelia; Surducan, Vasile

    2017-08-01

    Convective motions in silicate planets are largely driven by internal heat sources and secular cooling. The exact amount and distribution of heat sources in the Earth are poorly constrained and the latter is likely to change with time due to mixing and to the deformation of boundaries that separate different reservoirs. To improve our understanding of planetary-scale convection in these conditions, we have designed a new laboratory setup allowing a large range of heat source distributions. We illustrate the potential of our new technique with a study of an initially stratified fluid involving two layers with different physical properties and internal heat production rates. A modified microwave oven is used to generate a uniform radiation propagating through the fluids. Experimental fluids are solutions of hydroxyethyl cellulose and salt in water, such that salt increases both the density and the volumetric heating rate. We determine temperature and composition fields in 3D with non-invasive techniques. Two fluorescent dyes are used to determine temperature. A Nd:YAG planar laser beam excites fluorescence, and an optical system, involving a beam splitter and a set of colour filters, captures the fluorescence intensity distribution on two separate spectral bands. The ratio between the two intensities provides an instantaneous determination of temperature with an uncertainty of 5% (typically 1K). We quantify mixing processes by precisely tracking the interfaces separating the two fluids. These novel techniques allow new insights on the generation, morphology and evolution of large-scale heterogeneities in the Earth's lower mantle.

  3. A Generative Approach for Building Database Federations

    Directory of Open Access Journals (Sweden)

    Uwe Hohenstein

    1999-11-01

    Full Text Available A comprehensive, specification-based approach for building database federations is introduced that supports integrated, ODMG 2.0-conforming access to heterogeneous data sources seamlessly in C++. The approach is centered around several generators. A first set of generators produces ODMG adapters for local sources in order to homogenize them. Each adapter represents an ODMG view and supports ODMG manipulation and querying. The adapters can be plugged into a federation framework. Another generator produces a homogeneous and uniform view by putting an ODMG-conforming federation layer on top of the adapters. Inputs to these generators are schema specifications. Schemata are defined in corresponding specification languages. There are languages to homogenize relational and object-oriented databases, as well as ordinary file systems. Any specification defines an ODMG schema and relates it to an existing data source. An integration language is then used to integrate the schemata and to build system-spanning federated views thereupon. The generative nature provides flexibility with respect to schema modification of component databases. Any time a schema changes, only the specification has to be adapted; new adapters are generated automatically.
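
    The original generators emit C++ ODMG adapters from schema specifications; as an illustration only, the Python sketch below turns a tiny, invented specification into the source code of an adapter class whose attribute names follow the federated schema rather than the local source, and then loads and uses it. The specification format, class names and column names are all hypothetical.

      # Hypothetical schema specification: federated attribute -> column in the local source.
      spec = {
          "class_name": "Customer",
          "source_table": "KUNDEN",
          "attributes": {"id": "KND_NR", "name": "KND_NAME", "city": "ORT"},
      }

      def generate_adapter(spec):
          """Emit Python source for an adapter class homogenizing one local table."""
          lines = [f"class {spec['class_name']}Adapter:",
                   f"    source_table = {spec['source_table']!r}",
                   "    def __init__(self, row):"]
          for fed_attr, local_col in spec["attributes"].items():
              lines.append(f"        self.{fed_attr} = row[{local_col!r}]")
          return "\n".join(lines)

      adapter_source = generate_adapter(spec)
      print(adapter_source)
      namespace = {}
      exec(adapter_source, namespace)                # "generate, then plug into the framework"
      obj = namespace["CustomerAdapter"]({"KND_NR": 7, "KND_NAME": "Acme", "ORT": "Bonn"})
      print(obj.id, obj.name, obj.city)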

  4. Optimization of Hierarchically Scheduled Heterogeneous Embedded Systems

    DEFF Research Database (Denmark)

    Pop, Traian; Pop, Paul; Eles, Petru

    2005-01-01

    We present an approach to the analysis and optimization of heterogeneous distributed embedded systems. The systems are heterogeneous not only in terms of hardware components, but also in terms of communication protocols and scheduling policies. When several scheduling policies share a resource......, they are organized in a hierarchy. In this paper, we address design problems that are characteristic to such hierarchically scheduled systems: assignment of scheduling policies to tasks, mapping of tasks to hardware components, and the scheduling of the activities. We present algorithms for solving these problems....... Our heuristics are able to find schedulable implementations under limited resources, achieving an efficient utilization of the system. The developed algorithms are evaluated using extensive experiments and a real-life example....

  5. Developments in diffraction databases

    International Nuclear Information System (INIS)

    Jenkins, R.

    1999-01-01

    Full text: There are a number of databases available to the diffraction community. Two of the more important of these are the Powder Diffraction File (PDF) maintained by the International Centre for Diffraction Data (ICDD), and the Inorganic Crystal Structure Database (ICSD) maintained by Fachsinformationzentrum (FIZ, Karlsruhe). In application, the PDF has been used as an indispensable tool in phase identification and identification of unknowns. The ICSD database has extensive and explicit reference to the structures of compounds: atomic coordinates, space group and even thermal vibration parameters. A similar database, but for organic compounds, is maintained by the Cambridge Crystallographic Data Centre. These databases are often used as independent sources of information. However, little thought has been given on how to exploit the combined properties of structural database tools. A recently completed agreement between ICDD and FIZ, plus ICDD and Cambridge, provides a first step in complementary use of the PDF and the ICSD databases. The focus of this paper (as indicated below) is to examine ways of exploiting the combined properties of both databases. In 1996, there were approximately 76,000 entries in the PDF and approximately 43,000 entries in the ICSD database. The ICSD database has now been used to calculate entries in the PDF. Thus, to derive d-spacing and peak intensity data requires the synthesis of full diffraction patterns, i.e., we use the structural data in the ICSD database and then add instrumental resolution information. The combined data from PDF and ICSD can be effectively used in many ways. For example, we can calculate PDF data for an ideally random crystal distribution and also in the absence of preferred orientation. Again, we can use systematic studies of intermediate members in solid solutions series to help produce reliable quantitative phase analyses. In some cases, we can study how solid solution properties vary with composition and

  6. Capturing spatial heterogeneity of soil organic carbon under changing climate

    Science.gov (United States)

    Mishra, U.; Fan, Z.; Jastrow, J. D.; Matamala, R.; Vitharana, U.

    2015-12-01

    The spatial heterogeneity of the land surface affects water, energy, and greenhouse gas exchanges with the atmosphere. Designing observation networks that capture land surface spatial heterogeneity is a critical scientific challenge. Here, we present a geospatial approach to capture the existing spatial heterogeneity of soil organic carbon (SOC) stocks across Alaska, USA. We used the standard deviation of 556 georeferenced SOC profiles previously compiled in Mishra and Riley (2015, Biogeosciences, 12:3993-4004) to calculate the number of observations that would be needed to reliably estimate Alaskan SOC stocks. This analysis indicated that 906 randomly distributed observation sites would be needed to quantify the mean value of SOC stocks across Alaska at a confidence interval of ± 5 kg m-2. We then used soil-forming factors (climate, topography, land cover types, surficial geology) to identify the locations of appropriately distributed observation sites by using the conditioned Latin hypercube sampling approach. Spatial correlation and variogram analyses demonstrated that the spatial structures of soil-forming factors were adequately represented by these 906 sites. Using the spatial correlation length of existing SOC observations, we identified 484 new observation sites would be needed to provide the best estimate of the present status of SOC stocks in Alaska. We then used average decadal projections (2020-2099) of precipitation, temperature, and length of growing season for three representative concentration pathway (RCP 4.5, 6.0, and 8.5) scenarios of the Intergovernmental Panel on Climate Change to investigate whether the location of identified observation sites will shift/change under future climate. Our results showed 12-41 additional observation sites (depending on emission scenarios) will be required to capture the impact of projected climatic conditions by 2100 on the spatial heterogeneity of Alaskan SOC stocks. Our results represent an ideal distribution
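
    The 906-site figure quoted above follows from a standard sample-size calculation for estimating a mean to a given precision from the observed standard deviation of the SOC profiles. A sketch of that calculation is given below; the standard deviation used here is an assumed, illustrative value, not the one reported by the study, so the resulting count only approximates the published number.

      import math

      # Sample-size formula for estimating a mean within a margin of error E:
      #     n = (z * sigma / E) ** 2
      z = 1.96            # 95% confidence
      E = 5.0             # desired half-width of the confidence interval [kg m-2]
      sigma = 75.0        # assumed standard deviation of SOC stocks [kg m-2]; illustrative only

      n = math.ceil((z * sigma / E) ** 2)
      print(f"required number of randomly located observation sites: {n}")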

  7. Multiscale characteristics of mechanical and mineralogical heterogeneity using nanoindentation and Maps Mineralogy in Mancos Shale

    Science.gov (United States)

    Yoon, H.; Mook, W. M.; Dewers, T. A.

    2017-12-01

    Multiscale characteristics of textural and compositional (e.g., clay, cement, organics, etc.) heterogeneity profoundly influence the mechanical properties of shale. In particular, strongly anisotropic (i.e., laminated) heterogeneities are often observed to have a significant influence on hydrological and mechanical properties. In this work, we investigate a sample of the Cretaceous Mancos Shale to explore the importance of lamination, cements, organic content, and the spatial distribution of these characteristics. For compositional and structural characterization, the mineralogical distribution of thin core sample polished by ion-milling is analyzed using QEMSCAN® with MAPS MineralogyTM (developed by FEI Corporoation). Based on mineralogy and organic matter distribution, multi-scale nanoindentation testing was performed to directly link compositional heterogeneity to mechanical properties. With FIB-SEM (3D) and high-magnitude SEM (2D) images, key nanoindentation patterns are analyzed to evaluate elastic and plastic responses. Combined with MAPs Mineralogy data and fine-resolution BSE images, nanoindentation results are explained as a function of compositional and structural heterogeneity. Finite element modeling is used to quantitatively evaluate the link between the heterogeneity and mechanical behavior during nanoindentation. In addition, the spatial distribution of compositional heterogeneity, anisotropic bedding patterns, and mechanical anisotropy are employed as inputs for multiscale brittle fracture simulations using a phase field model. Comparison of experimental and numerical simulations reveal that proper incorporation of additional material information, such as bedding layer thickness and other geometrical attributes of the microstructures, may yield improvements on the numerical predictions of the mesoscale fracture patterns and hence the macroscopic effective toughness. Sandia National Laboratories is a multimission laboratory managed and operated by

  8. Plio-Pleistocene climate change and geographic heterogeneity in plant diversity-environment relationships

    DEFF Research Database (Denmark)

    Svenning, J.-C.; Normand, Signe; Skov, Flemming

    2009-01-01

    Plio-Pleistocene climate change may have induced geographic heterogeneity in plant species richness-environment relationships in Europe due to greater in situ species survival and speciation rates in southern Europe. We formulate distinct hypotheses on how Plio-Pleistocene climate change may have...... affected richness-topographic heterogeneity and richness-water-energy availability relationships, causing steeper relationships in southern Europe. We investigated these hypotheses using data from Atlas Florae Europaeae on the distribution of 3069 species and geographically weighted regression (GWR). Our...... analyses showed that plant species richness generally increased with topographic heterogeneity (ln-transformed altitudinal range) and actual evapotranspiration (AET). We also found evidence for strong geographic heterogeneity in the species richness-environment relationship, with a greater increase...

  9. Geographical Distribution of Biomass Carbon in Tropical Southeast Asian Forests: A Database; TOPICAL

    International Nuclear Information System (INIS)

    Brown, S

    2001-01-01

    A database was generated of estimates of geographically referenced carbon densities of forest vegetation in tropical Southeast Asia for 1980. A geographic information system (GIS) was used to incorporate spatial databases of climatic, edaphic, and geomorphological indices and vegetation to estimate potential (i.e., in the absence of human intervention and natural disturbance) carbon densities of forests. The resulting map was then modified to estimate actual 1980 carbon density as a function of population density and climatic zone. The database covers the following 13 countries: Bangladesh, Brunei, Cambodia (Campuchea), India, Indonesia, Laos, Malaysia, Myanmar (Burma), Nepal, the Philippines, Sri Lanka, Thailand, and Vietnam. The data sets within this database are provided in three file formats: ARC/INFO(trademark) exported integer grids, ASCII (American Standard Code for Information Interchange) files formatted for raster-based GIS software packages, and generic ASCII files with x, y coordinates for use with non-GIS software packages

  10. i-Genome: A database to summarize oligonucleotide data in genomes

    Directory of Open Access Journals (Sweden)

    Chang Yu-Chung

    2004-10-01

    Full Text Available Abstract Background Information on the occurrence of sequence features in genomes is crucial to comparative genomics, evolutionary analysis, the analyses of regulatory sequences and the quantitative evaluation of sequences. Computing the frequencies and the occurrences of a pattern in complete genomes is time-consuming. Results The proposed database provides information about sequence features generated by exhaustively computing the sequences of the complete genome. The repetitive elements in the eukaryotic genomes, such as LINEs, SINEs, Alu and LTR, are obtained from Repbase. The database supports various complete genomes including human, yeast, worm, and 128 microbial genomes. Conclusions This investigation presents and implements an efficient computational approach to accumulate the occurrences of oligonucleotides or patterns in complete genomes. A database is established to maintain the information on sequence features, including the distributions of oligonucleotides, the gene distribution, the distribution of repetitive elements in genomes and the occurrences of the oligonucleotides. The database provides a more effective and efficient way to access the repetitive features in genomes.
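
    A minimal sketch of the underlying computation, exhaustively counting oligonucleotide (k-mer) occurrences in a sequence, using a sliding window and a counter; the example sequence and k value are invented, and this is not the i-Genome implementation.

      from collections import Counter

      def count_oligonucleotides(sequence, k):
          """Count every overlapping k-mer occurrence in the sequence."""
          sequence = sequence.upper()
          return Counter(sequence[i:i + k] for i in range(len(sequence) - k + 1))

      genome_fragment = "ATGCGATATATCGCGATATATATGC"   # invented example sequence
      counts = count_oligonucleotides(genome_fragment, k=4)
      for kmer, n in counts.most_common(5):
          print(kmer, n)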

  11. A New Reversible Database Watermarking Approach with Firefly Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Mustafa Bilgehan Imamoglu

    2017-01-01

    Full Text Available Up-to-date information is crucial in many fields such as medicine, science, and stock market, where data should be distributed to clients from a centralized database. Shared databases are usually stored in data centers where they are distributed over insecure public access network, the Internet. Sharing may result in a number of problems such as unauthorized copies, alteration of data, and distribution to unauthorized people for reuse. Researchers proposed using watermarking to prevent problems and claim digital rights. Many methods are proposed recently to watermark databases to protect digital rights of owners. Particularly, optimization based watermarking techniques draw attention, which results in lower distortion and improved watermark capacity. Difference expansion watermarking (DEW with Firefly Algorithm (FFA, a bioinspired optimization technique, is proposed to embed watermark into relational databases in this work. Best attribute values to yield lower distortion and increased watermark capacity are selected efficiently by the FFA. Experimental results indicate that FFA has reduced complexity and results in less distortion and improved watermark capacity compared to similar works reported in the literature.
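
    As an illustration of the difference-expansion step alone (leaving out the firefly-based selection of attribute values), the sketch below embeds one watermark bit reversibly into a pair of integer attribute values and then extracts the bit and restores the originals. The attribute values are invented.

      def dew_embed(x, y, bit):
          """Reversibly embed one bit into an integer pair via difference expansion."""
          l, h = (x + y) // 2, x - y
          h_exp = 2 * h + bit                        # expand the difference, append the bit
          return l + (h_exp + 1) // 2, l - h_exp // 2

      def dew_extract(x_w, y_w):
          """Recover the bit and restore the original pair."""
          l, h_exp = (x_w + y_w) // 2, x_w - y_w
          bit, h = h_exp % 2, h_exp // 2
          return bit, (l + (h + 1) // 2, l - h // 2)

      x, y = 118, 112                                # invented attribute values
      xw, yw = dew_embed(x, y, 1)
      bit, restored = dew_extract(xw, yw)
      print(xw, yw, "-> bit", bit, "restored", restored)   # restored == (118, 112)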

  12. Development of a PSA information database system

    International Nuclear Information System (INIS)

    Kim, Seung Hwan

    2005-01-01

    The need to develop the PSA information database for performing a PSA has been growing rapidly. For example, performing a PSA requires a lot of data to analyze, to evaluate the risk, to trace the process of results and to verify the results. PSA information database is a system that stores all PSA related information into the database and file system with cross links to jump to the physical documents whenever they are needed. Korea Atomic Energy Research Institute is developing a PSA information database system, AIMS (Advanced Information Management System for PSA). The objective is to integrate and computerize all the distributed information of a PSA into a system and to enhance the accessibility to PSA information for all PSA related activities. This paper describes how we implemented such a database centered application in the view of two areas, database design and data (document) service

  13. Long-term Differences in Tillage and Land Use Affect Intra-aggregate Pore Heterogeneity

    International Nuclear Information System (INIS)

    Kravchenko, A.N.; Wang, A.N.W.; Smucker, A.J.M.; Rivers, M.L.

    2011-01-01

    Recent advances in computed tomography provide measurement tools to study internal structures of soil aggregates at micrometer resolutions and to improve our understanding of specific mechanisms of various soil processes. Fractal analysis is one of the data analysis tools that can be helpful in evaluating heterogeneity of the intra-aggregate internal structures. The goal of this study was to examine how long-term tillage and land use differences affect intra-aggregate pore heterogeneity. The specific objectives were: (i) to develop an approach to enhance utility of box-counting fractal dimension in characterizing intra-aggregate pore heterogeneity; (ii) to examine intra-aggregate pores in macro-aggregates (4-6 mm in size) using the computed tomography scanning and fractal analysis, and (iii) to compare heterogeneity of intra-aggregate pore space in aggregates from loamy Alfisol soil subjected to 20 yr of contrasting management practices, namely, conventional tillage (chisel plow) (CT), no-till (NT), and native succession vegetation (NS). Three-dimensional images of the intact aggregates were obtained with a resolution of 14.6 µm at the Advanced Photon Source, Argonne National Laboratory, Argonne, IL. Proposed box-counting fractal dimension normalization was successfully implemented to estimate heterogeneity of pore voxel distributions without bias associated with different porosities in soil aggregates. The aggregates from all three studied treatments had higher porosity associated with large (>100 µm) pores present in their centers than in their exteriors. Pores 15 to 60 µm were equally abundant throughout entire aggregates but their distributions were more heterogeneous in aggregate interiors. The CT aggregates had greater numbers of pores 15 to 60 µm than NT and NS. Distribution of pore voxels belonging to large pores was most heterogeneous in the aggregates from NS, followed by NT and by CT. This result was consistent with presence of
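
    A minimal 2-D sketch of the box-counting estimate used in such fractal analyses: count the boxes containing at least one pore pixel at several box sizes and fit the log-log slope. The binary pore image below is synthetic, and the porosity normalization proposed in the study is not reproduced.

      import numpy as np

      def box_count_dimension(binary, sizes=(1, 2, 4, 8, 16)):
          """Estimate the box-counting dimension of a 2-D binary image."""
          counts = []
          for s in sizes:
              n = binary.shape[0] // s
              # A box is occupied if it contains at least one foreground (pore) pixel.
              boxes = binary[:n * s, :n * s].reshape(n, s, n, s).any(axis=(1, 3))
              counts.append(boxes.sum())
          slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
          return slope

      rng = np.random.default_rng(0)
      pores = rng.random((128, 128)) < 0.15          # synthetic pore map, ~15% porosity
      print(f"estimated box-counting dimension: {box_count_dimension(pores):.2f}")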

  14. Statistical analysis of the ASME KIc database

    International Nuclear Information System (INIS)

    Sokolov, M.A.

    1998-01-01

    The American Society of Mechanical Engineers (ASME) KIc curve is a function of test temperature (T) normalized to a reference nil-ductility temperature, RTNDT, namely, T-RTNDT. It was constructed as the lower boundary to the available KIc database. Being a lower bound to the unique but limited database, the ASME KIc curve concept does not discuss probability matters. However, a continuing evolution of fracture mechanics advances has led to employment of the Weibull distribution function to model the scatter of fracture toughness values in the transition range. The Weibull statistic/master curve approach was applied to analyze the current ASME KIc database. It is shown that the Weibull distribution function models the scatter in KIc data from different materials very well, while the temperature dependence is described by the master curve. Probabilistic-based tolerance-bound curves are suggested to describe lower-bound KIc values
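
    As a hedged sketch of fitting a Weibull distribution to fracture-toughness values at a single normalized temperature, the snippet below uses scipy with invented KIc values and an assumed fixed threshold of 20 MPa*sqrt(m) (a convention often used for toughness data, here simply an assumption); the full master-curve treatment (temperature dependence, censoring, specimen-size adjustment) is not reproduced.

      import numpy as np
      from scipy import stats

      # Invented KIc values at a single normalized temperature [MPa*sqrt(m)]
      k_ic = np.array([62.0, 75.0, 81.0, 90.0, 104.0, 110.0, 128.0, 133.0])

      # Three-parameter Weibull fit with the threshold (location) fixed at 20 MPa*sqrt(m).
      shape, loc, scale = stats.weibull_min.fit(k_ic, floc=20.0)
      print(f"Weibull shape = {shape:.2f}, scale = {scale:.1f} MPa*sqrt(m)")

      # 5% lower tolerance bound implied by the fitted distribution.
      print(f"5th percentile: {stats.weibull_min.ppf(0.05, shape, loc, scale):.1f} MPa*sqrt(m)")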

  15. A Novel Energy-Aware Distributed Clustering Algorithm for Heterogeneous Wireless Sensor Networks in the Mobile Environment.

    Science.gov (United States)

    Gao, Ying; Wkram, Chris Hadri; Duan, Jiajie; Chou, Jarong

    2015-12-10

    In order to prolong the network lifetime, energy-efficient protocols adapted to the features of wireless sensor networks should be used. This paper explores in depth the nature of heterogeneous wireless sensor networks, and finally proposes an algorithm to address the problem of finding an effective pathway for heterogeneous clustering energy. The proposed algorithm implements cluster head selection according to the degree of energy attenuation during the network's running and the degree of candidate nodes' effective coverage on the whole network, so as to obtain an even energy consumption over the whole network for the situation with high degree of coverage. Simulation results show that the proposed clustering protocol has better adaptability to heterogeneous environments than existing clustering algorithms in prolonging the network lifetime.
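
    As an illustration only (not the authors' exact protocol), the sketch below scores candidate cluster heads by combining residual energy with the fraction of the field's nodes they cover, the two ingredients named above, and selects the highest-scoring nodes. Node positions, energies, the coverage radius and the weighting are all invented.

      import random, math

      random.seed(1)
      FIELD = 100.0          # field side length (m)
      RADIUS = 25.0          # hypothetical effective coverage radius of a cluster head (m)

      # Hypothetical heterogeneous nodes: position plus residual energy (J).
      nodes = [{"id": i, "x": random.uniform(0, FIELD), "y": random.uniform(0, FIELD),
                "energy": random.uniform(0.5, 2.0)} for i in range(50)]

      def coverage_fraction(candidate, nodes):
          """Fraction of all nodes lying within the candidate's coverage radius."""
          covered = sum(1 for n in nodes
                        if math.hypot(n["x"] - candidate["x"], n["y"] - candidate["y"]) <= RADIUS)
          return covered / len(nodes)

      max_e = max(n["energy"] for n in nodes)
      def score(n):          # invented weighting of the two selection criteria
          return 0.6 * (n["energy"] / max_e) + 0.4 * coverage_fraction(n, nodes)

      cluster_heads = sorted(nodes, key=score, reverse=True)[:5]
      print("selected cluster heads:", [n["id"] for n in cluster_heads])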

  16. Antibiotic distribution channels in Thailand: results of key-informant interviews, reviews of drug regulations and database searches.

    Science.gov (United States)

    Sommanustweechai, Angkana; Chanvatik, Sunicha; Sermsinsiri, Varavoot; Sivilaikul, Somsajee; Patcharanarumol, Walaiporn; Yeung, Shunmay; Tangcharoensathien, Viroj

    2018-02-01

    To analyse how antibiotics are imported, manufactured, distributed and regulated in Thailand. We gathered information, on antibiotic distribution in Thailand, in in-depth interviews - with 43 key informants from farms, health facilities, pharmaceutical and animal feed industries, private pharmacies and regulators- and in database and literature searches. In 2016-2017, licensed antibiotic distribution in Thailand involves over 700 importers and about 24 000 distributors - e.g. retail pharmacies and wholesalers. Thailand imports antibiotics and active pharmaceutical ingredients. There is no system for monitoring the distribution of active ingredients, some of which are used directly on farms, without being processed. Most antibiotics can be bought from pharmacies, for home or farm use, without a prescription. Although the 1987 Drug Act classified most antibiotics as "dangerous drugs", it only classified a few of them as prescription-only medicines and placed no restrictions on the quantities of antibiotics that could be sold to any individual. Pharmacists working in pharmacies are covered by some of the Act's regulations, but the quality of their dispensing and prescribing appears to be largely reliant on their competences. In Thailand, most antibiotics are easily and widely available from retail pharmacies, without a prescription. If the inappropriate use of active pharmaceutical ingredients and antibiotics is to be reduced, we need to reclassify and restrict access to certain antibiotics and to develop systems to audit the dispensing of antibiotics in the retail sector and track the movements of active ingredients.

  17. Database Description - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database name: Trypanosomes Database. Maintainer: National Institute of Genetics, Research Organization of Information and Systems, Yata 1111, Mishima, Shizuoka 411-8540, Japan. Taxonomy: Trypanosoma (Taxonomy ID: 5690); Homo sapiens (Taxonomy ID: 9606). External links: PDB (Protein Data Bank), KEGG PATHWAY Database, DrugPort. Entry list: available. Query search: available. Web service: available.

  18. Linking the Taiwan Fish Database to the Global Database

    Directory of Open Access Journals (Sweden)

    Kwang-Tsao Shao

    2007-03-01

    Full Text Available Under the support of the National Digital Archive Program (NDAP, basic species information about most Taiwanese fishes, including their morphology, ecology, distribution, specimens with photos, and literatures have been compiled into the "Fish Database of Taiwan" (http://fishdb.sinica.edu.tw. We expect that the all Taiwanese fish species databank (RSD, with 2800+ species, and the digital "Fish Fauna of Taiwan" will be completed in 2007. Underwater ecological photos and video images for all 2,800+ fishes are quite difficult to achieve but will be collected continuously in the future. In the last year of NDAP, we have successfully integrated all fish specimen data deposited at 7 different institutes in Taiwan as well as their collection maps on the Google Map and Google Earth. Further, the database also provides the pronunciation of Latin scientific names and transliteration of Chinese common names by referring to the Romanization system for all Taiwanese fishes (2,902 species in 292 families so far. The Taiwanese fish species checklist with Chinese common/vernacular names and specimen data has been updated periodically and provided to the global FishBase as well as the Global Biodiversity Information Facility (GBIF through the national portal of the Taiwan Biodiversity Information Facility (TaiBIF. Thus, Taiwanese fish data can be queried and browsed on the WWW. For contributing to the "Barcode of Life" and "All Fishes" international projects, alcohol-preserved specimens of more than 1,800 species and cryobanking tissues of 800 species have been accumulated at RCBAS in the past two years. Through this close collaboration between local and global databases, "The Fish Database of Taiwan" now attracts more than 250,000 visitors and achieves 5 million hits per month. We believe that this local database is becoming an important resource for education, research, conservation, and sustainable use of fish in Taiwan.

  19. RTDB: A memory resident real-time object database

    International Nuclear Information System (INIS)

    Nogiec, Jerzy M.; Desavouret, Eugene

    2003-01-01

    RTDB is a fast, memory-resident object database with built-in support for distribution. It constitutes an attractive alternative for architecting real-time solutions with multiple, possibly distributed, processes or agents sharing data. RTDB offers both direct and navigational access to stored objects, with local and remote random access by object identifiers, and immediate direct access via object indices. The database supports transparent access to objects stored in multiple collaborating dispersed databases and includes a built-in cache mechanism that allows for keeping local copies of remote objects, with specifiable invalidation deadlines. Additional features of RTDB include a trigger mechanism on objects that allows for issuing events or activating handlers when objects are accessed or modified and a very fast, attribute based search/query mechanism. The overall architecture and application of RTDB in a control and monitoring system is presented
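
    A hedged sketch of one RTDB feature described above, a local cache of remote objects with specifiable invalidation deadlines, implemented as a small wrapper class; the class name, method names and the stand-in fetch function are invented and do not reflect the actual RTDB interface.

      import time

      class DeadlineCache:
          """Keep local copies of remote objects until their invalidation deadline passes."""
          def __init__(self, fetch_remote):
              self._fetch = fetch_remote             # callable: object_id -> object
              self._store = {}                       # object_id -> (object, deadline)

          def get(self, object_id, max_age_s=1.0):
              entry = self._store.get(object_id)
              if entry and time.monotonic() < entry[1]:
                  return entry[0]                    # still valid: serve the local copy
              obj = self._fetch(object_id)           # expired or missing: refresh from remote
              self._store[object_id] = (obj, time.monotonic() + max_age_s)
              return obj

      def fake_remote_fetch(object_id):              # stands in for a remote database read
          return {"id": object_id, "read_at": time.monotonic()}

      cache = DeadlineCache(fake_remote_fetch)
      a = cache.get("magnet/42", max_age_s=0.5)
      b = cache.get("magnet/42", max_age_s=0.5)      # served from cache (same timestamp)
      print(a["read_at"] == b["read_at"])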

  20. Formation Learning Control of Multiple Autonomous Underwater Vehicles With Heterogeneous Nonlinear Uncertain Dynamics.

    Science.gov (United States)

    Yuan, Chengzhi; Licht, Stephen; He, Haibo

    2017-09-26

    In this paper, a new concept of formation learning control is introduced to the field of formation control of multiple autonomous underwater vehicles (AUVs), which specifies a joint objective of distributed formation tracking control and learning/identification of nonlinear uncertain AUV dynamics. A novel two-layer distributed formation learning control scheme is proposed, which consists of an upper-layer distributed adaptive observer and a lower-layer decentralized deterministic learning controller. This new formation learning control scheme advances existing techniques in three important ways: 1) the multi-AUV system under consideration has heterogeneous nonlinear uncertain dynamics; 2) the formation learning control protocol can be designed and implemented by each local AUV agent in a fully distributed fashion without using any global information; and 3) in addition to the formation control performance, the distributed control protocol is also capable of accurately identifying the AUVs' heterogeneous nonlinear uncertain dynamics and utilizing experiences to improve formation control performance. Extensive simulations have been conducted to demonstrate the effectiveness of the proposed results.

  1. Database and applications security integrating information security and data management

    CERN Document Server

    Thuraisingham, Bhavani

    2005-01-01

    This is the first book to provide an in-depth coverage of all the developments, issues and challenges in secure databases and applications. It provides directions for data and application security, including securing emerging applications such as bioinformatics, stream information processing and peer-to-peer computing. Divided into eight sections, each of which focuses on a key concept of secure databases and applications, this book deals with all aspects of technology, including secure relational databases, inference problems, secure object databases, secure distributed databases and emerging

  2. Preliminary surficial geologic map database of the Amboy 30 x 60 minute quadrangle, California

    Science.gov (United States)

    Bedford, David R.; Miller, David M.; Phelps, Geoffrey A.

    2006-01-01

    The surficial geologic map database of the Amboy 30x60 minute quadrangle presents characteristics of surficial materials for an area approximately 5,000 km2 in the eastern Mojave Desert of California. This map consists of new surficial mapping conducted between 2000 and 2005, as well as compilations of previous surficial mapping. Surficial geology units are mapped and described based on depositional process and age categories that reflect the mode of deposition, pedogenic effects occurring post-deposition, and, where appropriate, the lithologic nature of the material. The physical properties recorded in the database focus on those that drive hydrologic, biologic, and physical processes such as particle size distribution (PSD) and bulk density. This version of the database is distributed with point data representing locations of samples for both laboratory determined physical properties and semi-quantitative field-based information. Future publications will include the field and laboratory data as well as maps of distributed physical properties across the landscape tied to physical process models where appropriate. The database is distributed in three parts: documentation, spatial map-based data, and printable map graphics of the database. Documentation includes this file, which provides a discussion of the surficial geology and describes the format and content of the map data, a database 'readme' file, which describes the database contents, and FGDC metadata for the spatial map information. Spatial data are distributed as Arc/Info coverage in ESRI interchange (e00) format, or as tabular data in the form of DBF3-file (.DBF) file formats. Map graphics files are distributed as Postscript and Adobe Portable Document Format (PDF) files, and are appropriate for representing a view of the spatial database at the mapped scale.
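
    The tabular part of the database is distributed as DBF (.DBF) files; as a hedged illustration, the sketch below reads such a table with the third-party dbfread package (pip install dbfread). The filename is hypothetical and stands in for whichever .DBF file accompanies the spatial data.

```python
# Sketch: reading one of the distributed .DBF tables with the third-party
# `dbfread` package. The filename below is hypothetical; substitute the
# DBF file shipped with the map database.
from dbfread import DBF

table = DBF("amboy_samples.dbf", load=True)  # load=True pulls all records into memory
print(table.field_names)                     # column names from the DBF header
for record in table.records[:5]:             # each record behaves like a dict
    print(dict(record))
```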

  3. The Erasmus insurance case and a related questionnaire for distributed database management systems

    NARCIS (Netherlands)

    S.C. van der Made-Potuijt

    1990-01-01

    This is the third report concerning transaction management in the database environment. In the first report the role of the transaction manager in protecting the integrity of a database has been studied [van der Made-Potuijt 1989]. In the second report a model has been given for a

  4. Cluster-based service discovery for heterogeneous wireless sensor networks

    NARCIS (Netherlands)

    Marin Perianu, Raluca; Scholten, Johan; Havinga, Paul J.M.; Hartel, Pieter H.

    2007-01-01

    We propose an energy-efficient service discovery protocol for heterogeneous wireless sensor networks. Our solution exploits a cluster overlay, where the clusterhead nodes form a distributed service registry. A service lookup results in visiting only the clusterhead nodes. We aim for minimizing the

  5. Distributing Knight. Using Type-Based Publish/Subscribe for Building Distributed Collaboration Tools

    DEFF Research Database (Denmark)

    Damm, Christian Heide; Hansen, Klaus Marius

    2002-01-01

    Distributed applications are hard to understand, build, and evolve. The need for decoupling, flexibility, and heterogeneity in distributed collaboration tools presents particular problems; for such applications, having the right abstractions and primitives for distributed communication becomes even more important. We present Distributed Knight, an extension to the Knight tool for distributed, collaborative, and gesture-based object-oriented modelling. Distributed Knight was built using the type-based publish/subscribe paradigm. Based on this case, we argue that type-based publish/subscribe provides a natural and effective abstraction for developing distributed collaboration tools.
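
    As a hedged, single-process illustration of the type-based publish/subscribe idea (not the actual Distributed Knight implementation, and with hypothetical event and class names), subscribers in the sketch below register for an event type and receive every published instance of that type or of its subtypes.

```python
# Minimal single-process sketch of type-based publish/subscribe: subscribers
# register for an event *type* and receive every published instance of that
# type or its subtypes. Event and class names are hypothetical.
from dataclasses import dataclass

class EventBus:
    def __init__(self):
        self._subscribers = []                 # list of (event_type, handler)

    def subscribe(self, event_type, handler):
        self._subscribers.append((event_type, handler))

    def publish(self, event):
        for event_type, handler in self._subscribers:
            if isinstance(event, event_type):  # dispatch on the event's type
                handler(event)

@dataclass
class ElementMoved:                            # hypothetical modelling event
    element_id: str
    x: float
    y: float

bus = EventBus()
bus.subscribe(ElementMoved, lambda e: print("redraw", e.element_id, e.x, e.y))
bus.publish(ElementMoved("class:Order", 10.0, 42.0))
```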

  6. Database Description - SKIP Stemcell Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available SKIP Stemcell Database: Database Description. General information: contact address http://www.skip.med.keio.ac.jp/en/contact/. Database classification: Human Genes and Diseases; Stemcell. Organism: Homo sapiens (Taxonomy ID: 9606). Database maintenance site: Center for Medical Genetics, School of Medicine, Keio University. Web services: not available. Need for user registration: not available.

  7. Epigenetic and conventional regulation is distributed among activators of FLO11 allowing tuning of population-level heterogeneity in its expression.

    Directory of Open Access Journals (Sweden)

    Leah M Octavio

    2009-10-01

    Full Text Available Epigenetic switches encode their state information either locally, often via covalent modification of DNA or histones, or globally, usually in the level of a trans-regulatory factor. Here we examine how the regulation of cis-encoded epigenetic switches controls the extent of heterogeneity in gene expression, which is ultimately tied to phenotypic diversity in a population. We show that two copies of the FLO11 locus in Saccharomyces cerevisiae switch between a silenced and competent promoter state in a random and independent fashion, implying that the molecular event leading to the transition occurs locally at the promoter, in cis. We further quantify the effect of trans regulators both on the slow epigenetic transitions between a silenced and competent promoter state and on the fast promoter transitions associated with conventional regulation of FLO11. We find different classes of regulators affect epigenetic, conventional, or both forms of regulation. Distributing kinetic control of epigenetic silencing and conventional gene activation offers cells flexibility in shaping the distribution of gene expression and phenotype within a population.

  8. Database Description - Arabidopsis Phenome Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Arabidopsis Phenome Database: Database Description. General information: database maintained by the RIKEN BioResource Center (Hiroshi Masuya). Database classification: Plant databases - Arabidopsis thaliana. Organism: Arabidopsis thaliana (Taxonomy ID: 3702). Database description: the new Arabidopsis Phenome Database integrates two novel databases, one providing useful materials for experimental research and the other being the "Database of Curated Plant Phenome".

  9. Quantification of type I error probabilities for heterogeneity LOD scores.

    Science.gov (United States)

    Abreu, Paula C; Hodge, Susan E; Greenberg, David A

    2002-02-01

    Locus heterogeneity is a major confounding factor in linkage analysis. When no prior knowledge of linkage exists, and one aims to detect linkage and heterogeneity simultaneously, classical distribution theory of log-likelihood ratios does not hold. Despite some theoretical work on this problem, no generally accepted practical guidelines exist. Nor has anyone rigorously examined the combined effect of testing for linkage and heterogeneity and simultaneously maximizing over two genetic models (dominant, recessive). The effect of linkage phase represents another uninvestigated issue. Using computer simulation, we investigated the type I error (P value) of the "admixture" heterogeneity LOD (HLOD) score, i.e., the LOD score maximized over both the recombination fraction theta and the admixture parameter alpha, and we compared this with the P values obtained when one maximizes only with respect to theta (i.e., the standard LOD score). We generated datasets of phase-known and -unknown nuclear families, with k = 2, 4, and 6 children, under fully penetrant autosomal dominant inheritance. We analyzed these datasets (1) assuming a single genetic model and maximizing the HLOD over theta and alpha; and (2) maximizing the HLOD additionally over the two dominance models (dominant vs. recessive), then subtracting a 0.3 correction. For both (1) and (2), P values increased with family size k; rose less for phase-unknown families than for phase-known ones, with the former approaching the latter as k increased; and did not exceed the one-sided mixture distribution ξ = (1/2)χ²(1) + (1/2)χ²(2). Thus, maximizing the HLOD over theta and alpha appears to add considerably less than an additional degree of freedom to the associated χ²(1) distribution. We conclude with practical guidelines for linkage investigators. Copyright 2002 Wiley-Liss, Inc.
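
    As a small numerical companion to the mixture distribution mentioned above, the sketch below computes the tail probability of ξ = (1/2)χ²(1) + (1/2)χ²(2) for a given HLOD, using the standard conversion of a LOD score to a likelihood-ratio statistic; it illustrates the bound discussed in the abstract and is not a reproduction of the authors' simulations.

```python
# Sketch: tail probability of the mixture xi = (1/2)*chi2(1 df) + (1/2)*chi2(2 df)
# used in the abstract as an upper bound for the HLOD P value. A LOD score is
# converted to a likelihood-ratio statistic via 2*ln(10)*LOD.
from math import log
from scipy.stats import chi2

def mixture_p_value(hlod):
    statistic = 2.0 * log(10.0) * hlod          # LOD -> likelihood-ratio statistic
    return 0.5 * chi2.sf(statistic, df=1) + 0.5 * chi2.sf(statistic, df=2)

print(mixture_p_value(3.0))   # e.g. an HLOD of 3
```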

  10. The Molecular Signatures Database (MSigDB) hallmark gene set collection.

    Science.gov (United States)

    Liberzon, Arthur; Birger, Chet; Thorvaldsdóttir, Helga; Ghandi, Mahmoud; Mesirov, Jill P; Tamayo, Pablo

    2015-12-23

    The Molecular Signatures Database (MSigDB) is one of the most widely used and comprehensive databases of gene sets for performing gene set enrichment analysis. Since its creation, MSigDB has grown beyond its roots in metabolic disease and cancer to include >10,000 gene sets. These better represent a wider range of biological processes and diseases, but the utility of the database is reduced by increased redundancy across, and heterogeneity within, gene sets. To address this challenge, here we use a combination of automated approaches and expert curation to develop a collection of "hallmark" gene sets as part of MSigDB. Each hallmark in this collection consists of a "refined" gene set, derived from multiple "founder" sets, that conveys a specific biological state or process and displays coherent expression. The hallmarks effectively summarize most of the relevant information of the original founder sets and, by reducing both variation and redundancy, provide more refined and concise inputs for gene set enrichment analysis.
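
    The redundancy that the hallmark collection is designed to reduce can be quantified in a simple way by the pairwise overlap of gene sets; the sketch below uses the Jaccard index on two small hypothetical gene sets (not actual MSigDB content) to illustrate the idea.

```python
# Sketch: quantifying redundancy between gene sets with the Jaccard index,
# the kind of pairwise overlap the hallmark collection aims to reduce.
# The two "founder" sets below are hypothetical examples, not MSigDB content.
def jaccard(set_a, set_b):
    return len(set_a & set_b) / len(set_a | set_b)

founder_1 = {"TP53", "CDKN1A", "MDM2", "BAX"}
founder_2 = {"TP53", "CDKN1A", "GADD45A", "SFN"}
print(f"redundancy = {jaccard(founder_1, founder_2):.2f}")
```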

  11. Analytic Coarse-Mesh Finite-Difference Method Generalized for Heterogeneous Multidimensional Two-Group Diffusion Calculations

    International Nuclear Information System (INIS)

    Garcia-Herranz, Nuria; Cabellos, Oscar; Aragones, Jose M.; Ahnert, Carol

    2003-01-01

    In order to take into account in a more effective and accurate way the intranodal heterogeneities in coarse-mesh finite-difference (CMFD) methods, a new equivalent parameter generation methodology has been developed and tested. This methodology accounts for the dependence of the nodal homogenized two-group cross sections and nodal coupling factors on the intranodal flux-spectrum and burnup distributions, as well as on neighbor effects, through interface flux discontinuity (IFD) factors. The methodology has been implemented in an analytic CMFD method, rigorously obtained for homogeneous nodes with transverse leakage and now generalized for heterogeneous nodes by including IFD heterogeneity factors. When intranodal heterogeneity vanishes, the heterogeneous solution tends to the analytic homogeneous nodal solution. On the other hand, when intranodal heterogeneity increases, high accuracy is maintained, since the linear and nonlinear feedbacks on the equivalent parameters have been shown to be a very effective way of accounting for heterogeneity effects in two-group multidimensional coarse-mesh diffusion calculations.

  12. Optimal Routing for Heterogeneous Fixed Fleets of Multicompartment Vehicles

    OpenAIRE

    Wang, Qian; Ji, Qingkai; Chiu, Chun-Hung

    2014-01-01

    We present a metaheuristic called the reactive guided tabu search (RGTS) to solve the heterogeneous fleet multicompartment vehicle routing problem (MCVRP), where a single vehicle is required for co-transporting multiple customer orders. MCVRP is commonly found in the delivery of fashion apparel, petroleum distribution, food distribution, and waste collection. In searching for the optimum solution of MCVRP, we need to handle a large number of local optima in the solution space. To overcome this proble...

  13. Calculation code of heterogeneity effects for analysis of small sample reactivity worth

    International Nuclear Information System (INIS)

    Okajima, Shigeaki; Mukaiyama, Takehiko; Maeda, Akio.

    1988-03-01

    The discrepancy between experimental and calculated central reactivity worths has been one of the most significant issues in the analysis of fast reactor critical experiments. Two effects have been pointed out as possible causes of the discrepancy that should be taken into account in the calculation: one is the local heterogeneity effect, which is associated with the measurement geometry; the other is the heterogeneity effect on the distribution of the intracell adjoint flux. In order to evaluate these effects in the analysis of FCA actinide sample reactivity worths, a calculation code based on the collision probability method was developed. The code can handle the sample size effect, which is one of the local heterogeneity effects, as well as the intracell adjoint heterogeneity effect. (author)

  14. Geological entropy and solute transport in heterogeneous porous media

    Science.gov (United States)

    Bianchi, Marco; Pedretti, Daniele

    2017-06-01

    We propose a novel approach to link solute transport behavior to the physical heterogeneity of the aquifer, which we fully characterize with two measurable parameters: the variance of the log K values (σY2), and a new indicator (HR) that integrates multiple properties of the K field into a global measure of spatial disorder or geological entropy. From the results of a detailed numerical experiment considering solute transport in K fields representing realistic distributions of hydrofacies in alluvial aquifers, we identify empirical relationships between the two parameters and the first three central moments of the distributions of arrival times of solute particles at a selected control plane. The analysis of the experimental data indicates that the mean and the variance of the solute arrival times tend to increase with spatial disorder (i.e., increasing HR), while highly skewed distributions are observed in more orderly structures (i.e., decreasing HR) or at higher σY2. We found that simple closed-form empirical expressions of the bivariate dependency of skewness on HR and σY2 can be used to predict the emergence of non-Fickian transport in K fields covering a range of structures and heterogeneity levels, some of which are based on documented real aquifers. The accuracy of these predictions and, more generally, the results from this study indicate that a description of the global variability and structure of the K field in terms of variance and geological entropy offers a valid and broadly applicable approach for the interpretation and prediction of transport in heterogeneous porous media.

  15. CQPSO scheduling algorithm for heterogeneous multi-core DAG task model

    Science.gov (United States)

    Zhai, Wenzheng; Hu, Yue-Li; Ran, Feng

    2017-07-01

    Efficient task scheduling is critical to achieving high performance in a heterogeneous multi-core computing environment. The paper focuses on the heterogeneous multi-core directed acyclic graph (DAG) task model and proposes a novel task scheduling method based on an improved chaotic quantum-behaved particle swarm optimization (CQPSO) algorithm. A task priority scheduling list was built, and the processor with the minimum cumulative earliest finish time (EFT) was selected as the target of the first task assignment. The task precedence relationships were satisfied and the total execution time of all tasks was minimized. The experimental results show that the proposed algorithm has good optimization ability, is simple and feasible, converges quickly, and can be applied to task scheduling optimization for other heterogeneous and distributed environments.
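
    The CQPSO algorithm itself is not reproduced here; the sketch below only illustrates the earliest-finish-time assignment step described in the abstract, assuming a task list that is already in priority order and ignoring precedence constraints and communication delays.

```python
# Sketch of the earliest-finish-time (EFT) assignment step: tasks are taken
# from an already priority-ordered list and each is placed on the heterogeneous
# processor that finishes it soonest. Precedence and communication delays are
# ignored to keep the illustration short; running times are hypothetical.
def eft_schedule(exec_times):
    """exec_times[t][p] = running time of task t on processor p."""
    n_procs = len(exec_times[0])
    ready_at = [0.0] * n_procs                 # when each processor becomes free
    assignment = []
    for task, times in enumerate(exec_times):
        finish = [ready_at[p] + times[p] for p in range(n_procs)]
        best = min(range(n_procs), key=finish.__getitem__)
        assignment.append((task, best, finish[best]))
        ready_at[best] = finish[best]
    makespan = max(ready_at)
    return assignment, makespan

schedule, makespan = eft_schedule([[4.0, 6.0], [5.0, 2.0], [3.0, 3.5]])
print(schedule, makespan)
```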

  16. SIRSALE: integrated video database management tools

    Science.gov (United States)

    Brunie, Lionel; Favory, Loic; Gelas, J. P.; Lefevre, Laurent; Mostefaoui, Ahmed; Nait-Abdesselam, F.

    2002-07-01

    Video databases became an active field of research during the last decade. The main objective of such systems is to provide users with capabilities to search, access, and play back distributed stored video data in a user-friendly way, just as they do with traditional distributed databases. Hence, such systems need to deal with hard issues: (a) video documents generate huge volumes of data and are time sensitive (streams must be delivered at a specific bitrate), and (b) the content of video data is very hard to extract automatically and needs to be annotated by humans. To cope with these issues, many approaches have been proposed in the literature, including data models, query languages, video indexing, etc. In this paper, we present SIRSALE: a set of video database management tools that allow users to manipulate video documents and streams stored in large distributed repositories. All the proposed tools are based on generic models that can be customized for specific applications using ad hoc adaptation modules. More precisely, SIRSALE allows users to (a) browse video documents by structure (sequences, scenes, shots) and (b) query the video database content by using a graphical tool adapted to the nature of the target video documents. This paper also presents an annotation interface that allows archivists to describe the content of video documents. All these tools are coupled to a video player integrating remote VCR functionalities and are based on active network technology. We also present how dedicated active services allow optimized transport of video streams (with Tamanoir active nodes). We then describe experiments using SIRSALE on an archive of news videos and soccer matches. The system has been demonstrated to professionals with positive feedback. Finally, we discuss open issues and present some perspectives.

  17. Scaling Effects of Cr(VI) Reduction Kinetics. The Role of Geochemical Heterogeneity

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Li [Pennsylvania State Univ., State College, PA (United States); Li, Li [Pennsylvania State Univ., State College, PA (United States)

    2015-10-22

    The natural subsurface is highly heterogeneous, with minerals distributed in different spatial patterns. A fundamental understanding of how mineral spatial distribution patterns regulate sorption processes is important for predicting the transport and fate of chemicals. Existing studies of sorption were carried out in well-mixed batch reactors or uniformly packed columns, with few data available on the effects of spatial heterogeneities. As a result, there is a lack of data and understanding on how spatial heterogeneities control sorption processes. In this project, we aim to understand and develop modeling capabilities to predict the sorption of Cr(VI), an omnipresent contaminant in natural systems due to its natural occurrence and industrial utilization. We systematically examine the role of spatial patterns of illite, a common clay, in determining the extent of transport limitation and the scaling effects associated with Cr(VI) sorption capacity and kinetics, using column experiments and reactive transport modeling. Our results showed that the sorbed mass and rates can differ by an order of magnitude due to the illite spatial heterogeneities and transport limitation. With constraints from data, we also developed the capability of modeling Cr(VI) in heterogeneous media. The developed model is then utilized to understand the general principles that govern the relationship between sorption and connectivity, a key measure of the spatial pattern characteristics. This correlation can be used to estimate Cr(VI) sorption characteristics in heterogeneous porous media. Insights gained here bridge gaps between laboratory and field applications in hydrogeology and geochemistry, and advance predictive understanding of reactive transport processes in the natural heterogeneous subsurface. We believe that these findings will be of interest to a large number of environmental geochemists and engineers, hydrogeologists, and those interested in contaminant fate and transport.

  18. Mosquito population regulation and larval source management in heterogeneous environments.

    Directory of Open Access Journals (Sweden)

    David L Smith

    Full Text Available An important question for mosquito population dynamics, mosquito-borne pathogen transmission and vector control is how mosquito populations are regulated. Here we develop simple models with heterogeneity in egg-laying patterns and in the responses of larval populations to crowding in aquatic habitats. We use the models to evaluate how such heterogeneity affects mosquito population regulation and the effects of larval source management (LSM). We revisit the notion of a carrying capacity and show how heterogeneity changes our understanding of density dependence and the outcome of LSM. Crowding in, and productivity of, aquatic habitats are highly uneven unless egg-laying distributions are fine-tuned to match the distribution of the habitats' carrying capacities. LSM reduces mosquito population density linearly with coverage if adult mosquitoes avoid laying eggs in treated habitats, but quadratically if eggs are laid in treated habitats and the effort is therefore wasted (i.e., treating 50% of habitats reduces mosquito density by approximately 75%). Unsurprisingly, targeting (i.e., treating a subset of the most productive pools) gives much larger reductions for similar coverage, but with poor targeting, increasing coverage could increase adult mosquito population densities if eggs are laid in higher-capacity habitats. Our analysis suggests that, in some contexts, LSM models that account for heterogeneity in the production of adult mosquitoes provide theoretical support for pursuing mosquito-borne disease prevention through strategic and repeated application of modern larvicides.
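
    The linear and quadratic coverage relationships stated in the abstract can be written down directly; the sketch below encodes both cases and reproduces the quoted example that treating 50% of habitats yields roughly a 75% reduction when eggs are still laid in treated habitats.

```python
# Sketch of the coverage relationships stated in the abstract: if adults avoid
# laying eggs in treated habitats, density falls linearly with coverage c;
# if eggs are still laid in treated habitats (the mosquitoes' egg-laying effort
# is wasted), the reduction is 1 - (1 - c)**2, so c = 0.5 gives about 75%.
def density_reduction(coverage, adults_avoid_treated):
    if adults_avoid_treated:
        return coverage                     # linear case
    return 1.0 - (1.0 - coverage) ** 2      # quadratic ("wasted eggs") case

for c in (0.25, 0.5, 0.75):
    print(c, density_reduction(c, True), density_reduction(c, False))
```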

  19. Evaluation of permeability of compacted bentonite ground considering heterogeneity by geostatistics

    International Nuclear Information System (INIS)

    Tanaka, Yukihisa; Nakamura, Kunihiko; Kudo, Kohji; Hironaga, Michihiko; Nakagami, Motonori; Niwase, Kazuhito; Komatsu, Shin-ichi

    2007-01-01

    The permeability of a bentonite ground used as an engineered barrier may need to be designed to a value lower than that determined from the required performance, because of the heterogeneous distribution of permeability within the ground, which can be considerable when the ground is created by the compaction method. The effect of this heterogeneity should be evaluated in terms of the overall permeability of the ground, whereas in practice it is evaluated from the distribution of permeability within the ground. Thus, in this study, the overall permeability of the bentonite ground is evaluated from the permeability distribution determined using a geostatistical method with dry density data as well as permeability data from undisturbed samples recovered from the bentonite ground. Consequently, it was shown through this study that the possibility of overestimating the permeability of the bentonite ground can be reduced if the overall permeability is used. (author)

  20. Evaluation of the Accuracy of Polymer Gels for Determining Electron Dose Distributions in the Presence of Small Heterogeneities.

    Science.gov (United States)

    Asl, R Ghahraman; Nedaie, H A; Banaee, N

    2017-12-01

    The aim of this study is to evaluate the application and accuracy of polymer gels for determining electron dose distributions in the presence of small heterogeneities made of bone and air. Different cylindrical phantoms containing MAGIC (Methacrylic and Ascorbic acid in Gelatin Initiated by Copper) normoxic polymer gel were used under the slab phantoms during irradiation. MR images of the irradiated gel phantoms were obtained to determine their R2 (spin-spin) relaxation maps for conversion to absorbed dose. One- and two-dimensional lateral dose profiles were acquired at depths of 1 and 4 cm for 8 and 15 MeV electron beams. The results were compared with the doses measured by a diode detector at the same positions. In addition, the dose distribution in the axial orientation was measured by the gel dosimeter. The slope and intercept of the R2-versus-dose curve were 0.509 ± 0.002 Gy⁻¹s⁻¹ and 4.581 ± 0.005 s⁻¹, respectively. No significant variation in the dose-R2 response was seen for the two electron energies within the applied dose ranges. The mean dose difference between the gel-measured dose profiles and those measured by the diode detector was smaller than 3%. These results provide further demonstration that electron dose distributions are significantly altered in the presence of tissue inhomogeneities such as bone and air cavities and that MAGIC gel is a useful tool for 3-dimensional dose visualization and qualitative assessment of tissue inhomogeneity effects in electron beam dosimetry.
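
    Reading the reported calibration as a linear relation R2 = slope x dose + intercept (with R2 in s⁻¹), a measured relaxation rate can be converted back to absorbed dose as sketched below; the example R2 value is hypothetical.

```python
# Sketch: inverting the linear R2-dose calibration reported in the abstract
# (slope ~ 0.509 Gy^-1 s^-1, intercept ~ 4.581 s^-1) to convert a measured
# R2 relaxation rate into absorbed dose. The example R2 value is hypothetical.
SLOPE = 0.509       # s^-1 per Gy, as reported
INTERCEPT = 4.581   # s^-1, as reported

def r2_to_dose(r2):
    """Convert an R2 value (s^-1) to absorbed dose (Gy)."""
    return (r2 - INTERCEPT) / SLOPE

print(r2_to_dose(6.5))   # roughly 3.8 Gy for this hypothetical R2
```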

  1. The UDEPO database of the International Atomic Energy Agency

    International Nuclear Information System (INIS)

    Bruneton, P.

    2009-01-01

    The author presents the work performed and the data collected by the IAEA on uranium deposits around the world. Several documents have been published: a map called 'World Distribution of Uranium Deposits' and a guidebook to the map with brief descriptions of 582 deposits. These deposits have been classified into 14 different types, which led to the development of a database, UDEPO (World Distribution of Uranium Deposits). As uranium exploration activities started again, new data were published in 2003 and a web site was created. In 2009, 1,176 deposits were present in the database, along with many geographical, geological and technical parameters. Maps, photos, plans and drawings may also be present in the database. However, some data are either not included, at the request of certain countries, or not yet verified.

  2. Optimization of light source parameters in the photodynamic therapy of heterogeneous prostate

    International Nuclear Information System (INIS)

    Li Jun; Altschuler, Martin D; Hahn, Stephen M; Zhu, Timothy C

    2008-01-01

    The three-dimensional (3D) heterogeneous distributions of optical properties in a patient prostate can now be measured in vivo. Such data can be used to obtain a more accurate light-fluence kernel. (For specified sources and points, the kernel gives the fluence delivered to a point by a source of unit strength.) In turn, the kernel can be used to solve the inverse problem that determines the source strengths needed to deliver a prescribed photodynamic therapy (PDT) dose (or light-fluence) distribution within the prostate (assuming uniform drug concentration). We have developed and tested computational procedures to use the new heterogeneous data to optimize delivered light-fluence. New problems arise, however, in quickly obtaining an accurate kernel following the insertion of interstitial light sources and data acquisition. (1) The light-fluence kernel must be calculated in 3D and separately for each light source, which increases kernel size. (2) An accurate kernel for light scattering in a heterogeneous medium requires ray tracing and volume partitioning, thus significant calculation time. To address these problems, two different kernels were examined and compared for speed of creation and accuracy of dose. Kernels derived more quickly involve simpler algorithms. Our goal is to achieve optimal dose planning with patient-specific heterogeneous optical data applied through accurate kernels, all within clinical times. The optimization process is restricted to accepting the given (interstitially inserted) sources, and determining the best source strengths with which to obtain a prescribed dose. The Cimmino feasibility algorithm is used for this purpose. The dose distribution and source weights obtained for each kernel are analyzed. In clinical use, optimization will also be performed prior to source insertion to obtain initial source positions, source lengths and source weights, but with the assumption of homogeneous optical properties. For this reason, we compare the
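
    The clinical optimization is not reproduced here; the sketch below shows a basic Cimmino-style iteration for the generic problem the abstract describes, choosing nonnegative source strengths w so that a kernel-predicted fluence K @ w approaches a prescribed dose d. The kernel matrix and prescription are small hypothetical arrays.

```python
# Sketch (assumptions flagged): a basic Cimmino-style iteration for choosing
# nonnegative source strengths w so that the kernel-predicted fluence K @ w
# approaches a prescribed dose d. K and d are small hypothetical arrays; the
# clinical implementation referenced in the abstract is more elaborate.
import numpy as np

def cimmino(K, d, iterations=500, relaxation=1.0):
    m, n = K.shape
    w = np.zeros(n)
    row_norms = (K ** 2).sum(axis=1)           # ||a_i||^2 for each dose point
    weights = np.full(m, 1.0 / m)              # equal weight on every row
    for _ in range(iterations):
        residual = d - K @ w                   # per-point dose shortfall/excess
        step = (weights * residual / row_norms) @ K
        w = np.maximum(w + relaxation * step, 0.0)   # keep strengths nonnegative
    return w

K = np.array([[0.9, 0.1, 0.2],
              [0.2, 0.8, 0.1],
              [0.1, 0.3, 0.7]])
d = np.array([1.0, 1.0, 1.0])
print(cimmino(K, d))
```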

  3. Changing the scale of hydrogeophysical aquifer heterogeneity characterization

    Science.gov (United States)

    Paradis, Daniel; Tremblay, Laurie; Ruggeri, Paolo; Brunet, Patrick; Fabien-Ouellet, Gabriel; Gloaguen, Erwan; Holliger, Klaus; Irving, James; Molson, John; Lefebvre, Rene

    2015-04-01

    Contaminant remediation and management require the quantitative predictive capabilities of groundwater flow and mass transport numerical models. Such models have to encompass source zones and receptors, and thus typically cover several square kilometers. To predict the path and fate of contaminant plumes, these models have to represent the heterogeneous distribution of hydraulic conductivity (K). However, hydrogeophysics has generally been used to image relatively restricted areas of the subsurface (small fractions of km2), so there is a need for approaches defining heterogeneity at larger scales and providing data to constrain conceptual and numerical models of aquifer systems. This communication describes a workflow defining aquifer heterogeneity that was applied over a 12 km2 sub-watershed surrounding a decommissioned landfill emitting landfill leachate. The aquifer is a shallow, 10 to 20 m thick, highly heterogeneous and anisotropic assemblage of littoral sand and silt. Field work involved the acquisition of a broad range of data: geological, hydraulic, geophysical, and geochemical. The emphasis was put on high-resolution and continuous hydrogeophysical data, the use of direct-push fully-screened wells, and the acquisition of targeted high-resolution hydraulic data covering the range of observed aquifer materials. The main methods were: 1) surface geophysics (ground-penetrating radar and electrical resistivity); 2) direct-push operations with a geotechnical drilling rig (cone penetration tests with soil moisture resistivity CPT/SMR; full-screen well installation); and 3) borehole operations, including high-resolution hydraulic tests and geochemical sampling. New methods were developed to acquire high vertical resolution hydraulic data in direct-push wells, including both vertical and horizontal K (Kv and Kh). Various data integration approaches were used to represent aquifer properties in 1D, 2D and 3D. Using relevance vector machines (RVM), the mechanical and

  4. A coordination language for databases

    DEFF Research Database (Denmark)

    Li, Ximeng; Wu, Xi; Lluch Lafuente, Alberto

    2017-01-01

    We present a coordination language for the modeling of distributed database applications. The language, baptized Klaim-DB, borrows the concepts of localities and nets of the coordination language Klaim but re-incarnates the tuple spaces of Klaim as databases. It provides high-level abstractions and primitives for the access and manipulation of structured data, with integrity and atomicity considerations. We present the formal semantics of Klaim-DB and develop a type system that avoids potential runtime errors such as certain evaluation errors and mismatches of data format in tables, which are monitored in the semantics. The use of the language is illustrated in a scenario where the sales from different branches of a chain of department stores are aggregated from their local databases. Raising the abstraction level and encapsulating integrity checks in the language primitives have benefited the modeling task.

  5. Automation of multi-agent control for complex dynamic systems in heterogeneous computational network

    Science.gov (United States)

    Oparin, Gennady; Feoktistov, Alexander; Bogdanova, Vera; Sidorov, Ivan

    2017-01-01

    The rapid progress of high-performance computing entails new challenges related to solving large scientific problems in various subject domains in a heterogeneous distributed computing environment (e.g., a network, Grid system, or Cloud infrastructure). Specialists in the field of parallel and distributed computing pay special attention to the scalability of applications for problem solving. Effective management of a scalable application in a heterogeneous distributed computing environment is still a non-trivial issue, and control systems that operate in networks are especially affected by it. We propose a new approach to multi-agent management of scalable applications in a heterogeneous computational network. The fundamentals of our approach are the integrated use of conceptual programming, simulation modeling, network monitoring, multi-agent management, and service-oriented programming. We developed a special framework for automating the problem solving. The advantages of the proposed approach are demonstrated on the example of the parametric synthesis of a static linear regulator for complex dynamic systems. The benefits of the scalable application for solving this problem include automation of the multi-agent control of the systems in a parallel mode with various degrees of detail.

  6. Distributed MDSplus database performance with Linux clusters

    International Nuclear Information System (INIS)

    Minor, D.H.; Burruss, J.R.

    2006-01-01

    The staff at the DIII-D National Fusion Facility, operated for the USDOE by General Atomics, are investigating the use of grid computing and Linux technology to improve performance in our core data management services. We are in the process of converting much of our functionality to cluster-based and grid-enabled software. One of the most important pieces is a new distributed version of the MDSplus scientific data management system that is presently used to support fusion research in over 30 countries worldwide. To improve data handling performance, the staff is investigating the use of Linux clusters for both data clients and servers. The new distributed capability will result in better load balancing between these clients and servers, and more efficient use of network resources resulting in improved support of the data analysis needs of the scientific staff

  7. The Network Configuration of an Object Relational Database Management System

    Science.gov (United States)

    Diaz, Philip; Harris, W. C.

    2000-01-01

    The networking and implementation of the Oracle Database Management System (ODBMS) requires developers to have knowledge of the UNIX operating system as well as all the features of the Oracle Server. The server is an object relational database management system (DBMS). By using distributed processing, processes are split up between the database server and client application programs. The DBMS handles all the responsibilities of the server. The workstations running the database application concentrate on the interpretation and display of data.

  8. An advanced method of heterogeneous reactor theory

    International Nuclear Information System (INIS)

    Kochurov, B.P.

    1994-08-01

    Recent approaches to heterogeneous reactor theory for numerical applications were presented in the course of 8 lectures given at JAERI. The limitations of the initial theory, known since the First Conference on Peaceful Uses of Atomic Energy held in Geneva in 1955 as the Galanine-Feinberg heterogeneous theory (the matrix form of the equations and the lack of a consistent theory for the heterogeneous parameters of a reactor cell), were overcome by a transformation of the heterogeneous reactor equations to a difference form and by the development of a consistent theory for the characteristics of a reactor cell based on detailed space-energy calculations. General few-group (G groups) heterogeneous reactor equations in the dipole approximation are formulated, with the extension of the two-dimensional problem to three dimensions by a finite Fourier expansion of the axial dependence of the neutron fluxes. A transformation of the initial matrix reactor equations to a difference form is presented. The methods for calculating the heterogeneous reactor cell characteristics, giving the relation between the vector-flux and vector-current on a cell boundary, are based on a set of detailed space-energy neutron flux distribution calculations with zero current across the cell boundary and G calculations with linearly independent currents across the cell boundary. The equations for the reaction rate matrices are formulated. Specific methods were developed for the description of neutron migration in the axial and radial directions, and a resonance-level approach was used for numerous high-energy resonances. On the basis of these approaches, the theory, methods and computer codes were developed for 3D space-time reactor problems, including the simulation of slow processes with fuel burn-up, control rod movements and Xe poisoning, and of fast transients depending on prompt and delayed neutrons. As a result, reactors with several thousand channels having a non-uniform axial structure can be feasibly treated. (author)

  9. Representative measurement of two-dimensional reactive phosphate distributions and co-distributed iron(II) and sulfide in seagrass sediment porewaters

    DEFF Research Database (Denmark)

    Pagès, Anaïs; Teasdale, Peter R.; Robertson, David

    2011-01-01

    The high degree of heterogeneity within sediments can make interpreting one-dimensional measurements difficult. The recent development and use of in situ techniques that measure two-dimensional distributions of porewater solutes have facilitated investigation of the role of spatial heterogeneity ...

  10. Building Vietnamese Herbal Database Towards Big Data Science in Nature-Based Medicine

    Science.gov (United States)

    2018-01-04

    Traditional herbal medicine is used as a remedy for many types of diseases. Poor hand-written records and current text-based databases, however, perplex the process of conventionalizing and evaluating these remedies. A Vietnamese herbal database is being built from online and hard-copied references, and text mining is planned. (DISTRIBUTION A. Approved for public release: distribution unlimited.)

  11. A FAST AND ELITIST BI-OBJECTIVE EVOLUTIONARY ALGORITHM FOR SCHEDULING INDEPENDENT TASKS ON HETEROGENEOUS SYSTEMS

    Directory of Open Access Journals (Sweden)

    G.Subashini

    2010-07-01

    Full Text Available To meet increasing computational demands, geographically distributed resources need to be logically coupled to make them work as a unified resource. In analyzing the performance of such distributed heterogeneous computing systems, scheduling a set of tasks to the available set of resources for execution is highly important. Task scheduling being an NP-complete problem, the use of metaheuristics is more appropriate for obtaining optimal solutions. Schedules thus obtained can be evaluated using several criteria that may conflict with one another, which requires a multi-objective problem formulation. This paper investigates the application of an elitist Nondominated Sorting Genetic Algorithm (NSGA-II) to efficiently schedule a set of independent tasks in a heterogeneous distributed computing system. The objectives considered in this paper include minimizing makespan and average flowtime simultaneously. The implementations of the NSGA-II algorithm and a Weighted-Sum Genetic Algorithm (WSGA) have been tested on benchmark instances for distributed heterogeneous systems. As NSGA-II generates a set of Pareto-optimal solutions, to verify the effectiveness of NSGA-II over WSGA, a fuzzy-based membership value assignment method is employed to choose the best compromise solution from the obtained Pareto solution set.
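
    NSGA-II itself is not reproduced here; the sketch below only shows the non-dominated (Pareto) filtering of candidate schedules scored by (makespan, mean flowtime), which is the core notion behind the Pareto solution set mentioned above. The candidate scores are hypothetical.

```python
# Sketch: extracting the non-dominated (Pareto) front from candidate schedules
# scored by (makespan, mean flowtime). Candidate scores are hypothetical.
def dominates(a, b):
    """a dominates b if it is no worse in both objectives and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(scores):
    return [s for s in scores
            if not any(dominates(other, s) for other in scores if other != s)]

candidates = [(120.0, 48.0), (110.0, 55.0), (130.0, 45.0), (115.0, 60.0)]
print(pareto_front(candidates))   # (115, 60) is dominated by (110, 55)
```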

  12. Combining evidence from multiple electronic health care databases: performances of one-stage and two-stage meta-analysis in matched case-control studies.

    Science.gov (United States)

    La Gamba, Fabiola; Corrao, Giovanni; Romio, Silvana; Sturkenboom, Miriam; Trifirò, Gianluca; Schink, Tania; de Ridder, Maria

    2017-10-01

    Clustering of patients in databases is usually ignored in one-stage meta-analysis of multi-database studies using matched case-control data. The aim of this study was to compare bias and efficiency of such a one-stage meta-analysis with a two-stage meta-analysis. First, we compared the approaches by generating matched case-control data under 5 simulated scenarios, built by varying: (1) the exposure-outcome association; (2) its variability among databases; (3) the confounding strength of one covariate on this association; (4) its variability; and (5) the (heterogeneous) confounding strength of two covariates. Second, we made the same comparison using empirical data from the ARITMO project, a multiple database study investigating the risk of ventricular arrhythmia following the use of medications with arrhythmogenic potential. In our study, we specifically investigated the effect of current use of promethazine. Bias increased for one-stage meta-analysis with increasing (1) between-database variance of exposure effect and (2) heterogeneous confounding generated by two covariates. The efficiency of one-stage meta-analysis was slightly lower than that of two-stage meta-analysis for the majority of investigated scenarios. Based on ARITMO data, there were no evident differences between one-stage (OR = 1.50, CI = [1.08; 2.08]) and two-stage (OR = 1.55, CI = [1.12; 2.16]) approaches. When the effect of interest is heterogeneous, a one-stage meta-analysis ignoring clustering gives biased estimates. Two-stage meta-analysis generates estimates at least as accurate and precise as one-stage meta-analysis. However, in a study using small databases and rare exposures and/or outcomes, a correct one-stage meta-analysis becomes essential. Copyright © 2017 John Wiley & Sons, Ltd.
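
    As a hedged illustration of the second stage of a two-stage approach, the sketch below pools per-database odds ratios by inverse-variance (fixed-effect) weighting on the log scale; the input odds ratios and confidence intervals are hypothetical and are not ARITMO results, and a random-effects version would additionally estimate the between-database variance.

```python
# Sketch: a minimal two-stage, fixed-effect pooling of per-database odds ratios
# by inverse-variance weighting on the log scale. Inputs are hypothetical.
import math

def pool(odds_ratios, ci_lowers, ci_uppers):
    log_ors = [math.log(o) for o in odds_ratios]
    # back out the standard errors from the 95% confidence intervals
    ses = [(math.log(u) - math.log(l)) / (2 * 1.96)
           for l, u in zip(ci_lowers, ci_uppers)]
    weights = [1.0 / se ** 2 for se in ses]
    pooled_log_or = sum(w * b for w, b in zip(weights, log_ors)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return (math.exp(pooled_log_or),
            math.exp(pooled_log_or - 1.96 * pooled_se),
            math.exp(pooled_log_or + 1.96 * pooled_se))

# pooled OR with its 95% confidence interval
print(pool([1.4, 1.8, 1.2], [0.9, 1.1, 0.7], [2.2, 2.9, 2.1]))
```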

  13. Experimental validation of a new heterogeneous mechanical test design

    Science.gov (United States)

    Aquino, J.; Campos, A. Andrade; Souto, N.; Thuillier, S.

    2018-05-01

    Standard material parameter identification strategies generally use an extensive number of classical tests for collecting the required experimental data. However, a great effort has been made recently by the scientific and industrial communities to base this experimental database on heterogeneous tests. These tests can provide richer information on the material behavior, allowing the identification of a more complete set of material parameters. This is a result of the recent development of full-field measurement techniques, like digital image correlation (DIC), that can capture the heterogeneous deformation fields on the specimen surface during the test. Recently, new specimen geometries were designed to enhance the richness of the strain field and capture supplementary strain states. The butterfly specimen is an example of these new geometries, designed through a numerical optimization procedure in which an indicator capable of evaluating the heterogeneity and the richness of the strain information was used. However, no experimental validation has yet been performed. The aim of this work is to experimentally validate the heterogeneous butterfly mechanical test in the parameter identification framework. To this end, the DIC technique and a Finite Element Model Updating inverse strategy are used together for the parameter identification of a DC04 steel, as well as for the calculation of the indicator. The experimental tests are carried out in a universal testing machine with the ARAMIS measuring system to provide the strain states on the specimen surface. The identification strategy is carried out with the data obtained from the experimental tests, and the results are compared to a reference numerical solution.

  14. Hepatocyte heterogeneity in the metabolism of amino acids and ammonia

    NARCIS (Netherlands)

    Häussinger, D.; Lamers, W. H.; Moorman, A. F.

    1992-01-01

    With respect to hepatocyte heterogeneity in ammonia and amino acid metabolism, two different patterns of sublobular gene expression are distinguished: 'gradient-type' and 'strict- or compartment-type' zonation. An example for strict-type zonation is the reciprocal distribution of carbamoylphosphate

  15. Estimates of Between-Study Heterogeneity for 705 Meta-Analyses Reported in Psychological Bulletin From 1990–2013

    Directory of Open Access Journals (Sweden)

    Sara van Erp

    2017-08-01

    Full Text Available We present a data set containing 705 between-study heterogeneity estimates τ² as reported in 61 articles published in 'Psychological Bulletin' from 1990–2013. The data set also includes information about the number and type of effect sizes, the Q- and I²-statistics, and publication bias. The data set is stored in the Open Science Framework repository (https://osf.io/wyhve/) and can be used for several purposes: (1) to compare a specific heterogeneity estimate to the distribution of between-study heterogeneity estimates in psychology; (2) to construct an informed prior distribution for the between-study heterogeneity in psychology; (3) to obtain realistic population values for Monte Carlo simulations investigating the performance of meta-analytic methods. Funding statement: This research was supported by the ERC project "Bayes or Bust".
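
    Use (2) above can be illustrated with a few lines of code: summarize a vector of τ² estimates and fit a simple parametric distribution to the positive values as a candidate informed prior. The values below are placeholders; the actual 705 estimates are available from the OSF repository linked in the abstract.

```python
# Sketch of use (2): summarizing tau^2 estimates and fitting a lognormal to the
# positive values as a candidate informed prior. The values are placeholders;
# the real 705 estimates are in the OSF repository cited in the abstract.
import numpy as np
from scipy import stats

tau2 = np.array([0.00, 0.01, 0.02, 0.05, 0.08, 0.10, 0.15, 0.30, 0.60, 1.20])
positive = tau2[tau2 > 0]

print("quantiles:", np.percentile(tau2, [25, 50, 75]))
shape, loc, scale = stats.lognorm.fit(positive, floc=0)   # fix location at zero
print("lognormal(sigma=%.2f, median=%.3f)" % (shape, scale))
```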

  16. Materials data through a bibliographic database INIS

    International Nuclear Information System (INIS)

    Yamamoto, Akira; Itabashi, Keizo; Nakajima, Hidemitsu

    1992-01-01

    INIS (International Nuclear Information System) is a bibliographic database produced through the collaboration of the IAEA and its member countries, holding 1,500,000 records as of 1991. Although a bibliographic database does not provide numerical data itself, specific materials information can be obtained through retrievals specifying materials, properties, conditions, measuring methods, etc. In addition, 'data flagging' facilitates searching for records containing data. INIS also has a clearinghouse function that provides original documents of limited distribution. Hard copies of technical reports and other non-conventional literature are available. An efficient use of the INIS database for materials data is presented using an on-line terminal. (author)

  17. Analysis and Synthesis of Communication-Intensive Heterogeneous Real-Time Systems

    DEFF Research Database (Denmark)

    Pop, Paul

    2003-01-01

    Embedded computer systems are now everywhere: from alarm clocks to PDAs, from mobile phones to cars, almost all the devices we use are controlled by embedded computer systems. An important class of embedded computer systems is that of real-time systems, which have to fulfill strict timing requirements. As real-time systems become more complex, they are often implemented using distributed heterogeneous architectures. The main objective of this thesis is to develop analysis and synthesis methods for communication-intensive heterogeneous hard real-time systems. The systems are heterogeneous ... An important design problem is the synthesis of the communication infrastructure, which has a significant impact on the overall system performance and cost. To reduce the time-to-market of products, the design of real-time systems seldom starts from scratch. Typically, designers start from an already existing system, running certain

  18. The Global Terrestrial Network for Permafrost Database: metadata statistics and prospective analysis on future permafrost temperature and active layer depth monitoring site distribution

    Science.gov (United States)

    Biskaborn, B. K.; Lanckman, J.-P.; Lantuit, H.; Elger, K.; Streletskiy, D. A.; Cable, W. L.; Romanovsky, V. E.

    2015-03-01

    The Global Terrestrial Network for Permafrost (GTN-P) provides the first dynamic database associated with the Thermal State of Permafrost (TSP) and the Circumpolar Active Layer Monitoring (CALM) programs, which extensively collect permafrost temperature and active layer thickness data from Arctic, Antarctic and Mountain permafrost regions. The purpose of the database is to establish an "early warning system" for the consequences of climate change in permafrost regions and to provide standardized thermal permafrost data to global models. In this paper we perform statistical analysis of the GTN-P metadata aiming to identify the spatial gaps in the GTN-P site distribution in relation to climate-effective environmental parameters. We describe the concept and structure of the Data Management System in regard to user operability, data transfer and data policy. We outline data sources and data processing including quality control strategies. Assessment of the metadata and data quality reveals 63% metadata completeness at active layer sites and 50% metadata completeness for boreholes. Voronoi Tessellation Analysis on the spatial sample distribution of boreholes and active layer measurement sites quantifies the distribution inhomogeneity and provides potential locations of additional permafrost research sites to improve the representativeness of thermal monitoring across areas underlain by permafrost. The depth distribution of the boreholes reveals that 73% are shallower than 25 m and 27% are deeper, reaching a maximum of 1 km depth. Comparison of the GTN-P site distribution with permafrost zones, soil organic carbon contents and vegetation types exhibits different local to regional monitoring situations on maps. Preferential slope orientation at the sites most likely causes a bias in the temperature monitoring and should be taken into account when using the data for global models. The distribution of GTN-P sites within zones of projected temperature change show a high

  19. A database application for the Naval Command Physical Readiness Testing Program

    OpenAIRE

    Quinones, Frances M.

    1998-01-01

    Approved for public release; distribution is unlimited. IT-21 envisions a Navy with standardized, state-of-the-art computer systems. Based on this vision, Naval database management systems will also need to become standardized among Naval commands. Today most commercial off-the-shelf (COTS) database management systems provide a graphical user interface. Among the many Naval database systems currently in use, the Navy's Physical Readiness Program database has continued to exist at the command leve...

  20. Role of adenosine in regulating the heterogeneity of skeletal muscle blood flow during exercise in humans

    DEFF Research Database (Denmark)

    Heinonen, Ilkka; Nesterov, Sergey V; Kemppainen, Jukka

    2007-01-01

    Evidence from both animal and human studies suggests that adenosine plays a role in the regulation of exercise hyperemia in skeletal muscle. We tested whether adenosine also plays a role in the regulation of blood flow (BF) distribution and heterogeneity among and within quadriceps femoris (QF) muscles ... receptor blockade. BF heterogeneity within muscles was calculated from 16-mm(3) voxels in BF images, and heterogeneity among the muscles from the mean values of the four QF compartments. Mean BF in the whole QF and its four parts increased, and heterogeneity decreased, with workload both without ... and with theophylline (P ... heterogeneity among the QF muscles, yet blockade increased within-muscle BF heterogeneity in all four QF muscles (P = 0.03). Taken together, these results show that BF becomes less heterogeneous with increasing...