WorldWideScience

Sample records for distributed heterogeneous database

  1. Heterogeneous distributed databases: A case study

    Science.gov (United States)

    Stewart, Tracy R.; Mukkamala, Ravi

    1991-01-01

    Alternatives are reviewed for accessing distributed heterogeneous databases and a recommended solution is proposed. The current study is limited to the Automated Information Systems Center at the Naval Sea Combat Systems Engineering Station at Norfolk, VA. This center maintains two databases located on Digital Equipment Corporation's VAX computers running under the VMS operating system. The first database, ICMS, resides on a VAX 11/780 and has been implemented using VAX DBMS, a CODASYL-based system. The second database, CSA, resides on a VAX 6460 and has been implemented using the ORACLE relational database management system (RDBMS). Both databases are used for configuration management within the U.S. Navy, and each supports a different customer base. ICMS tracks U.S. Navy ships and major systems (anti-submarine, sonar, etc.). Even though the major systems on ships and submarines have totally different functions, some of the equipment within the major systems is common to both ships and submarines.

  2. Study on distributed generation algorithm of variable precision concept lattice based on ontology heterogeneous database

    Science.gov (United States)

    WANG, Qingrong; ZHU, Changfeng

    2017-06-01

    Integration of distributed heterogeneous data sources is a key issue in big data applications. In this paper, the notion of variable precision is introduced into the concept lattice, and a one-to-one mapping between the variable precision concept lattice and the ontology concept lattice is constructed so that a local ontology can be produced by building a variable precision concept lattice for each subsystem. Drawing on this special relationship between concept lattices and ontology construction, a distributed generation algorithm for variable precision concept lattices based on an ontology heterogeneous database is proposed. Finally, taking the main concept lattice generated from the existing heterogeneous database as the standard, a case study is carried out to verify the feasibility and validity of this algorithm, and the differences between the main concept lattice and the standard concept lattice are compared. The analysis shows that the proposed algorithm can automate the construction of a distributed concept lattice over heterogeneous data sources.

  3. Interconnecting heterogeneous database management systems

    Science.gov (United States)

    Gligor, V. D.; Luckenbaugh, G. L.

    1984-01-01

    It is pointed out that there is still a great need for improved communication between remote, heterogeneous database management systems (DBMS). Problems regarding effective communication between distributed DBMSs are primarily related to significant differences between local data managers, local data models and representations, and local transaction managers. A system of interconnected DBMSs that exhibit such differences is called a network of distributed, heterogeneous DBMSs. In order to achieve effective interconnection of remote, heterogeneous DBMSs, users must have uniform, integrated access to the different DBMSs. The present investigation is mainly concerned with an analysis of the existing approaches to interconnecting heterogeneous DBMSs, taking into account four experimental DBMS projects.

  4. REPLIKASI UNIDIRECTIONAL PADA HETEROGEN DATABASE [Unidirectional Replication on a Heterogeneous Database]

    OpenAIRE

    Hendro Nindito; Evaristus Didik Madyatmadja; Albert Verasius Dian Sano

    2013-01-01

    The use of diverse database technologies in today's enterprises cannot be avoided, so technology is needed to deliver information in real time. The purpose of this research is to discuss a database replication technology that can be applied in heterogeneous database environments. In this study we replicate from a Windows-based MS SQL Server database to a Linux-based Oracle database as the target. The research method used is prototyping, in which development can be done quickly and testing of working models of the...

  5. DATABASE REPLICATION IN HETEROGENOUS PLATFORM

    OpenAIRE

    Hendro Nindito; Evaristus Didik Madyatmadja; Albert Verasius Dian Sano

    2014-01-01

    The application of diverse database technologies in enterprises today is increasingly common practice. To provide high availability and survivability of real-time information, a database replication technology that can replicate databases across heterogeneous platforms is required. The purpose of this research is to find a technology with such capability. In this research, the source data is stored in an MS SQL Server database running on Windows. The data will be replicated to MyS...

  6. Replikasi Unidirectional pada Heterogen Database [Unidirectional Replication on a Heterogeneous Database]

    Directory of Open Access Journals (Sweden)

    Hendro Nindito

    2013-12-01

    The use of diverse database technologies in today's enterprises cannot be avoided, so technology is needed to deliver information in real time. The purpose of this research is to discuss a database replication technology that can be applied in heterogeneous database environments. In this study we replicate from a Windows-based MS SQL Server database to a Linux-based Oracle database as the target. The research method used is prototyping, in which development can be done quickly and working models of the interaction process are tested repeatedly. From this research it is found that database replication using Oracle GoldenGate can be applied in heterogeneous environments in real time.

  7. Tradeoffs in distributed databases

    OpenAIRE

    Juntunen, R. (Risto)

    2016-01-01

    In a distributed database, data is spread throughout the network into separate nodes with different DBMS systems (Date, 2000). According to the CAP theorem, the three database properties of consistency, availability and partition tolerance cannot all be achieved simultaneously in a distributed database system: two of these properties can be achieved, but not all three at the same time (Brewer, 2000). Since this theorem there has b...

  8. LHCb distributed conditions database

    International Nuclear Information System (INIS)

    Clemencic, M

    2008-01-01

    The LHCb Conditions Database project provides the necessary tools to handle non-event time-varying data. The main users of conditions are reconstruction and analysis processes, which run on the Grid. To allow efficient access to the data, we need to use a synchronized replica of the content of the database located at the same site as the event data file, i.e. the LHCb Tier1. The replica to be accessed is selected from information stored in the LFC (LCG File Catalog) and managed with the interface provided by the LCG-developed library CORAL. The plan to limit the submission of jobs to those sites where the required conditions are available will also be presented. LHCb applications have been using the Conditions Database framework in production since March 2007. We have been able to collect statistics on the performance and effectiveness of both the LCG library COOL (the library providing conditions-handling functionality) and the distribution framework itself. Stress tests on the CNAF-hosted replica of the Conditions Database have been performed and the results will be summarized here.

  9. Distributed Access View Integrated Database (DAVID) system

    Science.gov (United States)

    Jacobs, Barry E.

    1991-01-01

    The Distributed Access View Integrated Database (DAVID) System, which was adopted by the Astrophysics Division for their Astrophysics Data System, is a solution to the system heterogeneity problem. The heterogeneous components of the Astrophysics problem are outlined. The Library and Library Consortium levels of the DAVID approach are described, followed by the 'books' and 'kits' level and the Universal Object Typer Management System level. The relation of the DAVID project to the Small Business Innovative Research (SBIR) program is explained.

  10. Towards cloud-centric distributed database evaluation

    OpenAIRE

    Seybold, Daniel

    2016-01-01

    The rise of cloud computing has also pushed the evolution of distributed databases, resulting in a variety of distributed database systems, which can be classified into relational, NoSQL and NewSQL database systems. In general, all representatives of these database system classes claim to provide elasticity and "unlimited" horizontal scalability. As these characteristics fit the cloud model, distributed databases seem to be a perfect match for Database-as-a-Service (DBaaS) systems.

  12. Secure Distributed Databases Using Cryptography

    Directory of Open Access Journals (Sweden)

    Ion IVAN

    2006-01-01

    Computational encryption is used intensively by database management systems to ensure the privacy and integrity of information that is physically stored in files. The information is also sent over the network and replicated on different distributed systems. It is proved that a satisfying level of security is achieved if the rows and columns of tables are encrypted independently of the table or computer that holds the data. It is also very important that the SQL (Structured Query Language) query requests and responses be encrypted over the network connection between the client and the database server. All these techniques and methods must be implemented by database administrators, designers and developers as part of a consistent security policy.
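
    The encrypt-rows-and-columns-independently idea in this record can be illustrated with a small client-side sketch (not the authors' implementation): each sensitive column is encrypted with its own key before the value ever reaches the database server. The cryptography package, the column names, and the key-per-column layout are assumptions made for illustration.

```python
# Illustrative sketch: client-side, per-column encryption before storage.
# Assumes the 'cryptography' package; table layout and column names are hypothetical.
from cryptography.fernet import Fernet

COLUMN_KEYS = {                       # one key per sensitive column,
    "salary": Fernet.generate_key(),  # held by the client, never by the DB server
    "ssn": Fernet.generate_key(),
}

def encrypt_row(row: dict) -> dict:
    """Encrypt sensitive columns independently; other columns pass through."""
    out = {}
    for col, value in row.items():
        if col in COLUMN_KEYS:
            out[col] = Fernet(COLUMN_KEYS[col]).encrypt(str(value).encode())
        else:
            out[col] = value
    return out

def decrypt_value(col: str, token: bytes) -> str:
    return Fernet(COLUMN_KEYS[col]).decrypt(token).decode()

encrypted = encrypt_row({"id": 7, "salary": 52000, "ssn": "123-45-6789"})
print(decrypt_value("salary", encrypted["salary"]))   # -> "52000"
```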

  13. Secure Distributed Databases Using Cryptography

    OpenAIRE

    Ion IVAN; Cristian TOMA

    2006-01-01

    Computational encryption is used intensively by database management systems to ensure the privacy and integrity of information that is physically stored in files. The information is also sent over the network and replicated on different distributed systems. It is proved that a satisfying level of security is achieved if the rows and columns of tables are encrypted independently of the table or computer that sustains the data. It is also very important that the SQL - Structured Que...

  14. Ontology based heterogeneous materials database integration and semantic query

    Science.gov (United States)

    Zhao, Shuai; Qian, Quan

    2017-10-01

    Materials digital data, high-throughput experiments, and high-throughput computations are regarded as the three key pillars of materials genome initiatives. With the fast growth of materials data, the integration and sharing of data have become urgent and a hot topic in materials informatics. Due to the lack of semantic description, it is difficult to integrate data deeply at the semantic level when adopting conventional heterogeneous database integration approaches such as federated databases or data warehouses. In this paper, a semantic integration method is proposed that creates a semantic ontology by extracting the database schema semi-automatically. Other heterogeneous databases are integrated into the ontology by means of relational algebra and rooted graphs. Based on the integrated ontology, semantic queries can be issued using SPARQL. In the experiments, two well-known first-principles computation databases, OQMD and the Materials Project, are used as the integration targets, which shows the availability and effectiveness of our method.

  15. SIMS: addressing the problem of heterogeneity in databases

    Science.gov (United States)

    Arens, Yigal

    1997-02-01

    The heterogeneity of remotely accessible databases -- with respect to contents, query language, semantics, organization, etc. -- presents serious obstacles to convenient querying. The SIMS (single interface to multiple sources) system addresses this global integration problem. It does so by defining a single language for describing the domain about which information is stored in the databases and using this language as the query language. Each database to which SIMS is to provide access is modeled using this language. The model describes a database's contents, organization, and other relevant features. SIMS uses these models, together with a planning system drawing on techniques from artificial intelligence, to decompose a given user's high-level query into a series of queries against the databases and other data manipulation steps. The retrieval plan is constructed so as to minimize data movement over the network and maximize parallelism to increase execution speed. SIMS can recover from network failures during plan execution by obtaining data from alternate sources, when possible. SIMS has been demonstrated in the domains of medical informatics and logistics, using real databases.

  16. Heterogeneous Distribution of Chromium on Mercury

    Science.gov (United States)

    Nittler, L. R.; Boujibar, A.; Crapster-Pregont, E.; Frank, E. A.; McCoy, T. J.; McCubbin, F. M.; Starr, R. D.; Vander Kaaden, K. E.; Vorburger, A.; Weider, S. Z.

    2018-05-01

    Mercury's surface has an average Cr/Si ratio of 0.003 (Cr 800 ppm), with at least a factor of 2 systematic uncertainty. Cr is heterogeneously distributed and correlated with Mg, Ca, S, and Fe and anti-correlated with Al.

  17. Integrating CLIPS applications into heterogeneous distributed systems

    Science.gov (United States)

    Adler, Richard M.

    1991-01-01

    SOCIAL is an advanced, object-oriented development tool for integrating intelligent and conventional applications across heterogeneous hardware and software platforms. SOCIAL defines a family of 'wrapper' objects called agents, which incorporate predefined capabilities for distributed communication and control. Developers embed applications within agents and establish interactions between distributed agents via non-intrusive message-based interfaces. This paper describes a predefined SOCIAL agent that is specialized for integrating C Language Integrated Production System (CLIPS)-based applications. The agent's high-level Application Programming Interface supports bidirectional flow of data, knowledge, and commands to other agents, enabling CLIPS applications to initiate interactions autonomously, and respond to requests and results from heterogeneous remote systems. The design and operation of CLIPS agents are illustrated with two distributed applications that integrate CLIPS-based expert systems with other intelligent systems for isolating and mapping problems in the Space Shuttle Launch Processing System at the NASA Kennedy Space Center.

  18. Aspects of the design of distributed databases

    OpenAIRE

    Burlacu Irina-Andreea

    2011-01-01

    Distributed data is data processed by a system that may be spread among several computers but is accessible from any of them. A distributed database design problem is presented that involves the development of a global model, a fragmentation, and a data allocation. The student is given a conceptual entity-relationship model for the database, a description of the transactions, and a generic network environment. A stepwise solution approach to this problem is shown, based on mean value a...

  19. Datamining on distributed medical databases

    DEFF Research Database (Denmark)

    Have, Anna Szynkowiak

    2004-01-01

    This Ph.D. thesis focuses on clustering techniques for Knowledge Discovery in Databases. Various data mining tasks relevant for medical applications are described and discussed. A general framework which combines data projection and data mining and interpretation is presented. An overview...... is available. If data is unlabeled, then it is possible to generate keywords (in case of textual data) or key-patterns, as an informative representation of the obtained clusters. The methods are applied on simple artificial data sets, as well as collections of textual and medical data. In Danish: Denne ph...

  20. Research on distributed heterogeneous data PCA algorithm based on cloud platform

    Science.gov (United States)

    Zhang, Jin; Huang, Gang

    2018-05-01

    Principal component analysis (PCA) of heterogeneous data sets can overcome the limited scalability of centralized data. In order to reduce the generation of intermediate data and the error components of distributed heterogeneous data sets, a principal component analysis algorithm for heterogeneous data sets on a cloud platform is proposed. The algorithm performs the eigenvalue processing by using Householder tridiagonalization and QR factorization to calculate the error component of the heterogeneous database associated with the public key, obtaining the intermediate data set and the lost information. Experiments on distributed DBM heterogeneous datasets show that the method is feasible and reliable in terms of execution time and accuracy.
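
    A rough sketch of the distributed-PCA idea described above, under the assumption that each site ships only an aggregate (row count, column sums, and Gram matrix) rather than raw rows. The Householder tridiagonalization and QR steps mentioned in the record are delegated here to NumPy's symmetric eigensolver; the data, shapes, and site split are invented.

```python
# Sketch: distributed PCA where each site ships only (row count, column sums,
# X^T X) as its "intermediate data" instead of raw rows. The symmetric
# eigen-decomposition (Householder tridiagonalization + QR in the record) is
# delegated to NumPy. All data and shapes are invented.
import numpy as np

rng = np.random.default_rng(0)
sites = [rng.normal(size=(n, 5)) for n in (120, 80, 200)]   # heterogeneous row counts

stats = [(x.shape[0], x.sum(axis=0), x.T @ x) for x in sites]

n_total = sum(n for n, _, _ in stats)
mean = sum(s for _, s, _ in stats) / n_total
scatter = sum(g for _, _, g in stats) - n_total * np.outer(mean, mean)
cov = scatter / (n_total - 1)

eigvals, eigvecs = np.linalg.eigh(cov)            # symmetric eigensolver
order = np.argsort(eigvals)[::-1]
components = eigvecs[:, order[:2]]                # top-2 principal directions

# Sanity check: spectrum matches centralized PCA on the pooled raw data.
pooled = np.vstack(sites)
ref_vals = np.linalg.eigvalsh(np.cov(pooled, rowvar=False))
print(np.allclose(np.sort(eigvals), np.sort(ref_vals)))   # True
```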

  1. Database interfaces on NASA's heterogeneous distributed database system

    Science.gov (United States)

    Huang, Shou-Hsuan Stephen

    1989-01-01

    The syntax and semantics of all commands used in the template are described. Template builders should consult this document for proper commands in the template. Previous documents (Semiannual reports) described other aspects of this project. Appendix 1 contains all substituting commands used in the system. Appendix 2 includes all repeating commands. Appendix 3 is a collection of DEFINE templates from eight different DBMS's.

  2. Distributed Database Management Systems A Practical Approach

    CERN Document Server

    Rahimi, Saeed K

    2010-01-01

    This book addresses issues related to managing data across a distributed database system. It is unique because it covers traditional database theory and current research, explaining the difficulties in providing a unified user interface and global data dictionary. The book gives implementers guidance on hiding discrepancies across systems and creating the illusion of a single repository for users. It also includes three sample frameworks (implemented using J2SE with JMS, J2EE, and Microsoft .NET) that readers can use to learn how to implement a distributed database management system. IT and

  3. Concurrency control in distributed database systems

    CERN Document Server

    Cellary, W; Gelenbe, E

    1989-01-01

    Distributed Database Systems (DDBS) may be defined as integrated database systems composed of autonomous local databases, geographically distributed and interconnected by a computer network. The purpose of this monograph is to present DDBS concurrency control algorithms and their related performance issues. The most recent results have been taken into consideration. A detailed analysis and selection of these results has been made so as to include those which will promote applications and progress in the field. The application of the methods and algorithms presented is not limited to DDBSs but a

  4. The ATLAS Distributed Data Management System & Databases

    CERN Document Server

    Garonne, V; The ATLAS collaboration; Barisits, M; Beermann, T; Vigne, R; Serfon, C

    2013-01-01

    The ATLAS Distributed Data Management (DDM) System is responsible for the global management of petabytes of high energy physics data. The current system, DQ2, has a critical dependency on Relational Database Management Systems (RDBMS), like Oracle. RDBMS are well-suited to enforcing data integrity in online transaction processing applications; however, concerns have been raised about their scalability under data warehouse-like workloads. In particular, analysis of archived data or aggregation of transactional data for summary purposes is problematic. Therefore, we have evaluated new approaches to handle vast amounts of data. We have investigated a class of database technologies commonly referred to as NoSQL databases. This includes distributed filesystems, like HDFS, that support parallel execution of computational tasks on distributed data, as well as schema-less approaches via key-value stores, like HBase. In this talk we will describe our use cases in ATLAS, share our experiences with various databases used ...

  5. Heterogeneous Biomedical Database Integration Using a Hybrid Strategy: A p53 Cancer Research Database

    Directory of Open Access Journals (Sweden)

    Vadim Y. Bichutskiy

    2006-01-01

    Complex problems in life science research give rise to multidisciplinary collaboration, and hence, to the need for heterogeneous database integration. The tumor suppressor p53 is mutated in close to 50% of human cancers, and a small drug-like molecule with the ability to restore native function to cancerous p53 mutants is a long-held medical goal of cancer treatment. The Cancer Research DataBase (CRDB) was designed in support of a project to find such small molecules. As a cancer informatics project, the CRDB involved small molecule data, computational docking results, functional assays, and protein structure data. As an example of the hybrid strategy for data integration, it combined the mediation and data warehousing approaches. This paper uses the CRDB to illustrate the hybrid strategy as a viable approach to heterogeneous data integration in biomedicine, and provides a design method for those considering similar systems. More efficient data sharing implies increased productivity, and, hopefully, improved chances of success in cancer research. (Code and database schemas are freely downloadable from http://www.igb.uci.edu/research/research.html.)

  6. DAD - Distributed Adamo Database system at Hermes

    International Nuclear Information System (INIS)

    Wander, W.; Dueren, M.; Ferstl, M.; Green, P.; Potterveld, D.; Welch, P.

    1996-01-01

    Software development for the HERMES experiment faces the challenges of many other experiments in modern High Energy Physics: complex data structures and relationships have to be processed at high I/O rates. Experimental control and data analysis are done on a distributed environment of CPUs with various operating systems and require access to different time-dependent databases such as calibration and geometry. Slow control and experimental control need flexible inter-process communication. Program development is done in different programming languages, where interfaces to the libraries should not restrict the capabilities of the language. The needs of handling complex data structures are fulfilled by the ADAMO entity-relationship model. Mixed-language programming can be provided using the CFORTRAN package. DAD, the Distributed ADAMO Database library, was developed to provide the required I/O and database functionality. (author)

  7. Coordinated Collaboration between Heterogeneous Distributed Energy Resources

    Directory of Open Access Journals (Sweden)

    Shahin Abdollahy

    2014-01-01

    A power distribution feeder, on which a heterogeneous set of distributed energy resources is deployed, is examined by simulation. The energy resources include PV, battery storage, a natural gas GenSet, fuel cells, and active thermal storage for commercial buildings. The resource scenario considered is one that may exist in the not too distant future. Two cases of interaction between different resources are examined. One interaction involves a GenSet used to partially offset the duty cycle of a smoothing battery connected to a large PV system. The other example involves the coordination of twenty thermal storage devices, each associated with a commercial building. Storage devices are intended to provide maximum benefit to the building, but it is shown that this can have a deleterious effect on the overall system unless the action of the individual storage devices is coordinated. A network-based approach is also introduced to assign an effectiveness metric to all available resources that take part in coordinated operation. The main finding is that it is possible to achieve synergy between DERs on a system; however, this requires a unified strategy to coordinate the action of all devices in a decentralized way.

  8. The design of distributed database system for HIRFL

    International Nuclear Information System (INIS)

    Wang Hong; Huang Xinmin

    2004-01-01

    This paper focuses on a distributed database system used in the HIRFL distributed control system. The database of this distributed database system is built with SQL Server 2000, and its application system adopts the Client/Server model. Visual C++ is used to develop the applications, which access the database through ODBC. (authors)

  9. The ATLAS TAGS database distribution and management - Operational challenges of a multi-terabyte distributed database

    Energy Technology Data Exchange (ETDEWEB)

    Viegas, F; Nairz, A; Goossens, L [CERN, CH-1211 Geneve 23 (Switzerland); Malon, D; Cranshaw, J [Argonne National Laboratory, 9700 S. Cass Avenue, Argonne, IL 60439 (United States); Dimitrov, G [DESY, D-22603 Hamburg (Germany); Nowak, M; Gamboa, C [Brookhaven National Laboratory, PO Box 5000 Upton, NY 11973-5000 (United States); Gallas, E [University of Oxford, Denys Wilkinson Building, Keble Road, Oxford OX1 3RH (United Kingdom); Wong, A [Triumf, 4004 Wesbrook Mall, Vancouver, BC, V6T 2A3 (Canada); Vinek, E [University of Vienna, Dr.-Karl-Lueger-Ring 1, 1010 Vienna (Austria)

    2010-04-01

    The TAG files store summary event quantities that allow a quick selection of interesting events. This data will be produced at a nominal rate of 200 Hz, and is uploaded into a relational database for access from websites and other tools. The estimated database volume is 6 TB per year, making it the largest application running on the ATLAS relational databases, at CERN and at other voluntary sites. The sheer volume and high rate of production make this application a challenge to data and resource management, in many aspects. This paper will focus on the operational challenges of this system. These include: uploading the data from files to CERN's and remote sites' databases; distributing the TAG metadata that is essential to guide the user through event selection; and controlling resource usage of the database, from the user query load to the strategy of cleaning and archiving of old TAG data.

  10. The ATLAS TAGS database distribution and management - Operational challenges of a multi-terabyte distributed database

    International Nuclear Information System (INIS)

    Viegas, F; Nairz, A; Goossens, L; Malon, D; Cranshaw, J; Dimitrov, G; Nowak, M; Gamboa, C; Gallas, E; Wong, A; Vinek, E

    2010-01-01

    The TAG files store summary event quantities that allow a quick selection of interesting events. This data will be produced at a nominal rate of 200 Hz, and is uploaded into a relational database for access from websites and other tools. The estimated database volume is 6 TB per year, making it the largest application running on the ATLAS relational databases, at CERN and at other voluntary sites. The sheer volume and high rate of production make this application a challenge to data and resource management, in many aspects. This paper will focus on the operational challenges of this system. These include: uploading the data from files to CERN's and remote sites' databases; distributing the TAG metadata that is essential to guide the user through event selection; and controlling resource usage of the database, from the user query load to the strategy of cleaning and archiving of old TAG data.

  11. Integrating heterogeneous databases in clustered medical care environments using object-oriented technology

    Science.gov (United States)

    Thakore, Arun K.; Sauer, Frank

    1994-05-01

    The organization of modern medical care environments into disease-related clusters, such as a cancer center, a diabetes clinic, etc., has the side-effect of introducing multiple heterogeneous databases, often containing similar information, within the same organization. This heterogeneity fosters incompatibility and prevents the effective sharing of data amongst applications at different sites. Although integration of heterogeneous databases is now feasible, in the medical arena this is often an ad hoc process, not founded on proven database technology or formal methods. In this paper we illustrate the use of a high-level object-oriented semantic association method to model information found in different databases into an integrated conceptual global model that integrates the databases. We provide examples from the medical domain to illustrate an integration approach resulting in a consistent global view, without attacking the autonomy of the underlying databases.

  12. How to ensure sustainable interoperability in heterogeneous distributed systems through architectural approach.

    Science.gov (United States)

    Pape-Haugaard, Louise; Frank, Lars

    2011-01-01

    A major obstacle to ensuring ubiquitous information is the use of heterogeneous systems in eHealth. The objective of this paper is to illustrate how an architecture for distributed eHealth databases can be designed without losing the characteristic features of traditional sustainable databases. The approach is first to explain the traditional architecture of centralized and homogeneous distributed database computing, followed by a possible architectural framework for obtaining sustainability across disparate systems, i.e. heterogeneous databases, and concluded with a discussion. It is seen that, through a method of using relaxed ACID properties on a service-oriented architecture, it is possible to achieve the data consistency that is essential for ensuring sustainable interoperability.

  13. Software for Distributed Computation on Medical Databases: A Demonstration Project

    Directory of Open Access Journals (Sweden)

    Balasubramanian Narasimhan

    2017-05-01

    Bringing together the information latent in distributed medical databases promises to personalize medical care by enabling reliable, stable modeling of outcomes with rich feature sets (including patient characteristics and treatments received). However, there are barriers to aggregation of medical data, due to lack of standardization of ontologies, privacy concerns, proprietary attitudes toward data, and a reluctance to give up control over end use. Aggregation of data is not always necessary for model fitting. In models based on maximizing a likelihood, the computations can be distributed, with aggregation limited to the intermediate results of calculations on local data, rather than raw data. Distributed fitting is also possible for singular value decomposition. There has been work on the technical aspects of shared computation for particular applications, but little has been published on the software needed to support the "social networking" aspect of shared computing, to reduce the barriers to collaboration. We describe a set of software tools that allow the rapid assembly of a collaborative computational project, based on the flexible and extensible R statistical software and other open source packages, that can work across a heterogeneous collection of database environments, with full transparency to allow local officials concerned with privacy protections to validate the safety of the method. We describe the principles, architecture, and successful test results for the site-stratified Cox model and rank-k singular value decomposition.
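
    The core idea of the record, aggregating only intermediate results of likelihood calculations instead of raw data, can be sketched as below. Logistic regression is used as a simple stand-in for the site-stratified Cox model, and the data, sites, and coefficients are invented; this is not the authors' R toolkit.

```python
# Sketch: sites exchange only gradients and Hessians of their local
# log-likelihoods, never raw records. Logistic regression stands in for the
# site-stratified Cox model; the data and true coefficients are invented.
import numpy as np

rng = np.random.default_rng(1)

def make_site(n, beta_true=(0.8, -0.5, 0.3)):
    X = rng.normal(size=(n, 3))
    p = 1 / (1 + np.exp(-X @ np.array(beta_true)))
    return X, (rng.random(n) < p).astype(float)

sites = [make_site(n) for n in (150, 90, 210)]      # three "hospitals"

def local_grad_hess(X, y, beta):
    p = 1 / (1 + np.exp(-X @ beta))
    grad = X.T @ (y - p)                            # local score vector
    hess = -(X * (p * (1 - p))[:, None]).T @ X      # local (negative) information
    return grad, hess

beta = np.zeros(3)
for _ in range(25):                                 # Newton-Raphson on aggregates
    grads, hesses = zip(*(local_grad_hess(X, y, beta) for X, y in sites))
    beta = beta - np.linalg.solve(sum(hesses), sum(grads))

print(np.round(beta, 3))   # close to the true coefficients (0.8, -0.5, 0.3)
```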

  14. Optimistic protocol for partitioned distributed database systems

    International Nuclear Information System (INIS)

    Davidson, S.B.

    1982-01-01

    A protocol for transaction processing during partition failures is presented which guarantees mutual consistency between copies of data-items after repair is completed. The protocol is optimistic in that transactions are processed without restrictions during the failure; conflicts are detected at repair time using a precedence graph and are resolved by backing out transactions according to some backout strategy. The protocol is then evaluated using simulation and probabilistic modeling. In the simulation, several parameters are varied such as the number of transactions processed in a group, the type of transactions processed, the number of data-items present in the database, and the distribution of references to data-items. The simulation also uses different backout strategies. From these results we note conditions under which the protocol performs well, i.e., conditions under which the protocol backs out a small percentage of the transaction run. A probabilistic model is developed to estimate the expected number of transactions backed out using most of the above database and transaction parameters, and is shown to agree with simulation results. Suggestions are then made on how to improve the performance of the protocol. Insights gained from the simulation and probabilistic modeling are used to develop a backout strategy which takes into account individual transaction costs and attempts to minimize total backout cost. Although the problem of choosing transactions to minimize total backout cost is, in general, NP-complete, the backout strategy is efficient and produces very good results
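
    A minimal sketch of the detect-and-back-out step described above: after the partitions reconnect, a precedence graph is built from the transactions' read/write sets and transactions are backed out until no conflict cycle remains. The conflict rule, the example history, and the back-out heuristic are simplified assumptions, not the paper's protocol.

```python
# Sketch: build a precedence graph from read/write sets recorded during the
# partition failure, then back out transactions until no conflict cycle remains.
# The conflict rule and the back-out heuristic below are simplified illustrations.
from itertools import combinations

# A hypothetical history of (transaction id, partition, read set, write set) tuples.
history = [
    ("T1", "A", {"x"}, {"x"}),
    ("T2", "B", {"x", "y"}, {"y"}),
    ("T3", "A", {"y"}, {"z"}),
    ("T4", "B", {"z"}, {"x"}),
]

def precedence_edges(txns):
    """Add Ti -> Tj whenever Tj (in the other partition) wrote an item Ti touched."""
    edges = set()
    for (ti, pi, ri, wi), (tj, pj, rj, wj) in combinations(txns, 2):
        if pi == pj:
            continue
        if wj & (ri | wi):
            edges.add((ti, tj))
        if wi & (rj | wj):
            edges.add((tj, ti))
    return edges

def has_cycle(nodes, edges):
    graph = {n: [b for a, b in edges if a == n] for n in nodes}
    WHITE, GREY, BLACK = 0, 1, 2
    color = dict.fromkeys(nodes, WHITE)
    def dfs(u):
        color[u] = GREY
        for v in graph[u]:
            if color[v] == GREY or (color[v] == WHITE and dfs(v)):
                return True
        color[u] = BLACK
        return False
    return any(color[n] == WHITE and dfs(n) for n in nodes)

txns, backed_out = list(history), []
while has_cycle([t[0] for t in txns], precedence_edges(txns)):
    victim = max(txns, key=lambda t: len(t[3]))     # crude cost proxy: write-set size
    txns.remove(victim)
    backed_out.append(victim[0])
print("backed out:", backed_out)                    # -> backed out: ['T1']
```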

  15. Measuring the effects of heterogeneity on distributed systems

    Science.gov (United States)

    El-Toweissy, Mohamed; Zeineldine, Osman; Mukkamala, Ravi

    1991-01-01

    Distributed computer systems in daily use are becoming more and more heterogeneous. Currently, much of the design and analysis studies of such systems assume homogeneity. This assumption of homogeneity has been mainly driven by the resulting simplicity in modeling and analysis. A simulation study is presented which investigated the effects of heterogeneity on scheduling algorithms for hard real time distributed systems. In contrast to previous results which indicate that random scheduling may be as good as a more complex scheduler, this algorithm is shown to be consistently better than a random scheduler. This conclusion is more prevalent at high workloads as well as at high levels of heterogeneity.

  16. A distributed scheduling algorithm for heterogeneous real-time systems

    Science.gov (United States)

    Zeineldine, Osman; El-Toweissy, Mohamed; Mukkamala, Ravi

    1991-01-01

    Much of the previous work on load balancing and scheduling in distributed environments was concerned with homogeneous systems and homogeneous loads. Several of the results indicated that random policies are as effective as other, more complex load allocation policies. The effects of heterogeneity on scheduling algorithms for hard real-time systems are examined. A distributed scheduler designed specifically to handle heterogeneities in both nodes and node traffic is proposed. The performance of the algorithm is measured in terms of the percentage of jobs discarded. While a random task allocation is very sensitive to heterogeneities, the algorithm is shown to be robust to such non-uniformities in system components and load.

  17. Comprehensive Monitoring for Heterogeneous Geographically Distributed Storage

    Energy Technology Data Exchange (ETDEWEB)

    Ratnikova, N. [Fermilab]; Karavakis, E. [CERN]; Lammel, S. [Fermilab]; Wildish, T. [Princeton U.]

    2015-12-23

    Storage capacity at CMS Tier-1 and Tier-2 sites reached over 100 Petabytes in 2014, and will be substantially increased during Run 2 data taking. The allocation of storage for individual users' analysis data, which is not accounted for as centrally managed storage space, will be increased to up to 40%. For comprehensive tracking and monitoring of the storage utilization across all participating sites, CMS developed a space monitoring system, which provides a central view of the geographically dispersed heterogeneous storage systems. The first prototype was deployed at pilot sites in summer 2014, and has been substantially reworked since then. In this paper we discuss the functionality and our experience of system deployment and operation on the full CMS scale.

  18. Distributed database kriging for adaptive sampling (D2KAS)

    International Nuclear Information System (INIS)

    Roehm, Dominic; Pavel, Robert S.; Barros, Kipton; Rouet-Leduc, Bertrand; McPherson, Allen L.; Germann, Timothy C.; Junghans, Christoph

    2015-01-01

    We present an adaptive sampling method supplemented by a distributed database and a prediction method for multiscale simulations using the Heterogeneous Multiscale Method. A finite-volume scheme integrates the macro-scale conservation laws for elastodynamics, which are closed by momentum and energy fluxes evaluated at the micro-scale. In the original approach, molecular dynamics (MD) simulations are launched for every macro-scale volume element. Our adaptive sampling scheme replaces a large fraction of costly micro-scale MD simulations with fast table lookup and prediction. The cloud database Redis provides the plain table lookup, and with locality-aware hashing we gather input data for our prediction scheme. For the latter we use kriging, which estimates an unknown value and its uncertainty (error) at a specific location in parameter space by using weighted averages of the neighboring points. We find that our adaptive scheme significantly improves simulation performance by a factor of 2.5 to 25, while retaining high accuracy for various choices of the algorithm parameters.
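
    The lookup-then-predict loop described above can be sketched roughly as follows, with a plain dict standing in for the Redis store and a Gaussian-weighted average standing in for full kriging; the threshold, cache-key rounding, and placeholder micro-scale model are illustrative assumptions.

```python
# Sketch of the lookup-then-predict loop: reuse nearby cached micro-scale results
# when a simple weighted-average prediction looks confident, otherwise fall back
# to the expensive simulation. A dict stands in for Redis; numbers are illustrative.
import math

cache = {}                                    # rounded input -> micro-scale result

def expensive_micro_simulation(x):
    return math.sin(3 * x) + 0.5 * x          # placeholder for an MD evaluation

def predict(x, radius=0.1):
    """Gaussian-weighted average of cached neighbours and a crude error proxy."""
    pts = [(k, v) for k, v in cache.items() if abs(k - x) <= radius]
    if not pts:
        return None, float("inf")
    w = [math.exp(-((k - x) / radius) ** 2) for k, _ in pts]
    mean = sum(wi * v for wi, (_, v) in zip(w, pts)) / sum(w)
    spread = max(abs(v - mean) for _, v in pts)
    return mean, spread

def flux(x, tol=0.05):
    value, err = predict(x)
    if err < tol:                             # confident -> reuse cached knowledge
        return value
    value = expensive_micro_simulation(x)     # otherwise run the micro-scale model
    cache[round(x, 3)] = value
    return value

for x in (0.10, 0.101, 0.102, 0.5, 0.501):
    flux(x)
print(f"{len(cache)} micro-scale runs for 5 macro-scale requests")   # -> 2 runs
```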

  19. Effect of Heterogeneity in Initial Geographic Distribution on Opinions’ Competitiveness

    Directory of Open Access Journals (Sweden)

    Alexander S. Balankin

    2015-05-01

    Spin dynamics on networks allows us to understand how a global consensus emerges out of individual opinions. Here, we are interested in the effect of heterogeneity in the initial geographic distribution of a competing opinion on that opinion's competitiveness. Accordingly, in this work, we studied the effect of spatial heterogeneity on the majority rule dynamics using a three-state spin model, in which one state is neutral. Monte Carlo simulations were performed on square lattices divided into square blocks (cells). Accordingly, one competing opinion was distributed uniformly among cells, whereas the spatial distribution of the rival opinion was varied from uniform to heterogeneous, with the median-to-mean ratio in the range from 1 to 0. When the size of the discussion group is odd, the uncommitted agents disappear completely after 3.30 ± 0.05 update cycles, and then the system evolves in a two-state regime with complementary spatial distributions of the two competing opinions. Even so, the initial heterogeneity in the spatial distribution of one of the competing opinions causes a decrease in that opinion's competitiveness. That is, the opinion with an initially heterogeneous spatial distribution has a lower probability of winning than the opinion with the initially uniform spatial distribution, even when the initial concentrations of both opinions are equal. We found that although the time to consensus , the opinion's recession rate is determined during the first 3.3 update cycles. On the other hand, we found that the initial heterogeneity of the opinion spatial distribution assists the formation of quasi-stable regions in which this opinion is dominant. The results of the Monte Carlo simulations are discussed with regard to the electoral competition of political parties.
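
    A minimal sketch of a three-state majority-rule update with a neutral state, in the spirit of the model described above; the lattice size, group size, initial concentrations, and tie-handling rule are arbitrary illustrative choices, not the paper's settings.

```python
# Minimal three-state majority-rule sketch: +1 and -1 are competing opinions,
# 0 is uncommitted. Group size, lattice size, and initial fractions are arbitrary.
import numpy as np

rng = np.random.default_rng(42)
L, GROUP = 64, 5                                   # lattice side, odd group size
lattice = rng.choice([-1, 0, 1], size=L * L, p=[0.3, 0.4, 0.3])

def update_cycle(spins):
    order = rng.permutation(spins.size)            # random discussion groups
    for start in range(0, spins.size - GROUP + 1, GROUP):
        group = order[start:start + GROUP]
        s = spins[group].sum()
        if s != 0:                                 # the net vote decides; ties do nothing
            spins[group] = 1 if s > 0 else -1
    return spins

for cycle in range(6):
    lattice = update_cycle(lattice)
    print(f"cycle {cycle + 1}: uncommitted fraction = {np.mean(lattice == 0):.3f}")
```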

  20. A Preliminary Study on the Multiple Mapping Structure of Classification Systems for Heterogeneous Databases

    OpenAIRE

    Seok-Hyoung Lee; Hwan-Min Kim; Ho-Seop Choe

    2012-01-01

    While science and technology information service portals and heterogeneous databases produced in Korea and other countries are being integrated, methods of connecting the unique classification systems applied to each database have been studied. The results of technologists' research, such as journal articles, patent specifications, and research reports, are organically related to each other. In this case, if the most basic and meaningful classification systems are not connected, it is difficult to ach...

  1. Distributed Database Access in the LHC Computing Grid with CORAL

    CERN Document Server

    Molnár, Z; Düllmann, D; Giacomo, G; Kalkhof, A; Valassi, A; CERN. Geneva. IT Department

    2009-01-01

    The CORAL package is the LCG Persistency Framework foundation for accessing relational databases. From the start CORAL has been designed to facilitate the deployment of the LHC experiment database applications in a distributed computing environment. In particular we cover: improvements to database service scalability through client connection management; platform-independent, multi-tier scalable database access through connection multiplexing and caching; and a secure authentication and authorisation scheme integrated with existing grid services. We will summarize the deployment experience from several experiment productions using the distributed database infrastructure, which is now available in LCG. Finally, we present perspectives for future developments in this area.

  2. ISSUES IN MOBILE DISTRIBUTED REAL TIME DATABASES: PERFORMANCE AND REVIEW

    OpenAIRE

    VISHNU SWAROOP; Gyanendra Kumar Gupta; UDAI SHANKER

    2011-01-01

    The increase in small, handy electronic devices in computing fields makes computing more popular and useful in business. Tremendous advances in wireless networks and portable computing devices have led to the development of mobile computing. Support for real-time database systems depends upon timing constraints; the availability of data in distributed databases and ubiquitous computing pull the mobile database concept, which emerges as a new form of technology: mobile distributed ...

  3. A performance study on the synchronisation of heterogeneous Grid databases using CONStanza

    CERN Document Server

    Pucciani, G; Domenici, Andrea; Stockinger, Heinz

    2010-01-01

    In Grid environments, several heterogeneous database management systems are used in various administrative domains. However, data exchange and synchronisation need to be available across different sites and different database systems. In this article we present our data consistency service CONStanza and give details on how we achieve relaxed update synchronisation between different database implementations. The integration in existing Grid environments is one of the major goals of the system. Performance tests have been executed following a factorial approach. Detailed experimental results and a statistical analysis are presented to evaluate the system components and drive future developments. (C) 2010 Elsevier B.V. All rights reserved.

  4. Development of database on the distribution coefficient. 2. Preparation of database

    Energy Technology Data Exchange (ETDEWEB)

    Takebe, Shinichi; Abe, Masayoshi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2001-03-01

    The distribution coefficient is a very important parameter for environmental impact assessment of the disposal of radioactive waste arising from research institutes. The 'Database on the Distribution Coefficient' was built from information obtained through a domestic literature survey covering items such as the value, measuring method, and measurement conditions of the distribution coefficient, in order to allow a reasonable distribution coefficient value to be selected when this value is used in safety evaluations. This report explains the outline of the preparation of this database and serves as a user's guide to the database. (author)

  6. Obtaining contaminant arrival distributions for steady flow in heterogeneous systems

    International Nuclear Information System (INIS)

    Anon.

    1977-01-01

    The versatility of the new contaminant arrival distributions for determining environmental consequences of subsurface pollution problems is demonstrated through application to a field example involving land drainage in heterogeneous porous materials. Though the four phases of the hydrologic evaluations are complicated because of the material heterogeneity encountered in the field problem, the arrival distributions still effectively summarize the minimal amount of data required to determine the environmental implications. These arrival distributions yield a single graph or tabular set of data giving the consequences of the subsurface pollution problems. Accordingly, public control authorities would be well advised to request that the results of subsurface pollution investigations be provided in the form of arrival distributions and the resulting simpler summary curve or tabulation. Such an objective is most easily accomplished through compliance with the requirements for assuring a complete subsurface evaluation

  7. Protect Heterogeneous Environment Distributed Computing from Malicious Code Assignment

    Directory of Open Access Journals (Sweden)

    V. S. Gorbatov

    2011-09-01

    The paper describes the practical implementation of a system for protecting distributed computing in a heterogeneous environment from malicious code in submitted assignments. The choice of technologies, the development of data structures, and a performance evaluation of the implemented system security are presented.

  8. BioCarian: search engine for exploratory searches in heterogeneous biological databases.

    Science.gov (United States)

    Zaki, Nazar; Tennakoon, Chandana

    2017-10-02

    There are a large number of biological databases publicly available for scientists on the web, as well as many private databases generated in the course of research projects. These databases exist in a wide variety of formats. Web standards have evolved in recent times, and semantic web technologies are now available to interconnect diverse and heterogeneous sources of data. Therefore, integration and querying of biological databases can be facilitated by techniques used in the semantic web. Heterogeneous databases can be converted into Resource Description Framework (RDF) and queried using the SPARQL language. Searching for exact queries in these databases is trivial. However, exploratory searches need customized solutions, especially when multiple databases are involved. This process is cumbersome and time consuming for those without a sufficient background in computer science. In this context, a search engine facilitating exploratory searches of databases would be of great help to the scientific community. We present BioCarian, an efficient and user-friendly search engine for performing exploratory searches on biological databases. The search engine is an interface for SPARQL queries over RDF databases. We note that many of the databases can be converted to tabular form. We first convert the tabular databases to RDF. The search engine provides a graphical interface based on facets to explore the converted databases. The facet interface is more advanced than conventional facets: it allows complex queries to be constructed, and has additional features like ranking of facet values based on several criteria, visually indicating the relevance of a facet value and presenting the most important facet values when a large number of choices are available. For advanced users, SPARQL queries can be run directly on the databases. Using this feature, users will be able to incorporate federated searches of SPARQL endpoints. We used the search engine to do an exploratory search
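
    The tabular-to-RDF conversion and SPARQL querying described in this record can be sketched with rdflib as below; the gene/disease rows, the example.org namespace, and the query are invented for illustration and do not reflect BioCarian's own pipeline.

```python
# Sketch: convert a tiny tabular "database" to RDF with rdflib and run a SPARQL
# query over it. Rows, namespace, and predicates are invented for illustration.
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/")
rows = [
    {"gene": "TP53", "disease": "lung cancer", "score": 0.92},
    {"gene": "TP53", "disease": "glioma", "score": 0.71},
    {"gene": "BRCA1", "disease": "breast cancer", "score": 0.95},
]

g = Graph()
for i, row in enumerate(rows):                 # one RDF resource per table row
    subject = EX[f"association/{i}"]
    g.add((subject, EX.gene, Literal(row["gene"])))
    g.add((subject, EX.disease, Literal(row["disease"])))
    g.add((subject, EX.score, Literal(row["score"])))

# Exploratory, facet-style question: diseases linked to TP53, strongest first.
query = """
PREFIX ex: <http://example.org/>
SELECT ?disease ?score WHERE {
    ?a ex:gene "TP53" ; ex:disease ?disease ; ex:score ?score .
} ORDER BY DESC(?score)
"""
for disease, score in g.query(query):
    print(disease, score)
```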

  9. TCP isoeffect analysis using a heterogeneous distribution of radiosensitivity

    International Nuclear Information System (INIS)

    Carlone, Marco; Wilkins, David; Nyiri, Balazs; Raaphorst, Peter

    2004-01-01

    A formula for the α/β ratio is derived using the heterogeneous (population averaged) tumor control model. This formula is nearly identical to the formula obtained using the homogeneous (individual) tumor control model, but the new formula includes extra terms showing that the α/β ratio, the ratio of the mean value of α divided by the mean value of β that would be observed in a patient population, explicitly depends on the survival level and heterogeneity. The magnitude of this correction is estimated for prostate cancer, and this appears to raise the mean value of the ratio estimate by about 20%. The method also allows investigation of confidence limits for α/β based on a population distribution of radiosensitivity. For a widely heterogeneous population, the upper 95% confidence interval for the α/β ratio can be as high as 7.3 Gy, even though the population mean is between 2.3 and 2.6 Gy

  10. Distribution of model-based multipoint heterogeneity lod scores.

    Science.gov (United States)

    Xing, Chao; Morris, Nathan; Xing, Guan

    2010-12-01

    The distribution of two-point heterogeneity lod scores (HLOD) has been intensively investigated because the conventional χ² approximation to the likelihood ratio test is not directly applicable. However, there was no study investigating the distribution of the multipoint HLOD despite its wide application. Here we want to point out that, compared with the two-point HLOD, the multipoint HLOD essentially tests for homogeneity given linkage and follows a relatively simple limiting distribution ½χ²₀ + ½χ²₁, which can be obtained by established statistical theory. We further examine the theoretical result by simulation studies. © 2010 Wiley-Liss, Inc.

  11. A Simulation Tool for Distributed Databases.

    Science.gov (United States)

    1981-09-01

    11-8 . Reed’s multiversion system [RE1T8] may also be viewed aa updating only copies until the commit is made. The decision to make the changes...distributed voting, and Ellis’ ring algorithm. Other, significantly different algorithms not covered in his work include Reed’s multiversion algorithm, the

  12. A Methodology for Distributing the Corporate Database.

    Science.gov (United States)

    McFadden, Fred R.

    The trend to distributed processing is being fueled by numerous forces, including advances in technology, corporate downsizing, increasing user sophistication, and acquisitions and mergers. Increasingly, the trend in corporate information systems (IS) departments is toward sharing resources over a network of multiple types of processors, operating…

  13. Heterogeneous ice slurry flow and concentration distribution in horizontal pipes

    International Nuclear Information System (INIS)

    Wang, Jihong; Zhang, Tengfei; Wang, Shugang

    2013-01-01

    Highlights: • A Mixture CFD model is applied to describe heterogeneous ice slurry flow. • The ice slurry rheological behavior is considered piecewise. • The coupled flow and concentration profiles in heterogeneous slurry flow are acquired. • The current numerical model achieves a good balance between precision and universality. -- Abstract: Ice slurry is an energy-intensive solid–liquid mixture fluid which may play an important role in various cooling applications. Knowing detailed flow information is important from the system design point of view. However, the heterogeneous nature of ice slurry flow makes it difficult to quantify, owing to its complex two-phase flow characteristics. The present study applies a Mixture computational fluid dynamics (CFD) model based on different rheological behavior to characterize heterogeneous ice slurry flow. The Mixture CFD model was first validated against three different experiments. The validated model was then applied to solve isothermal ice slurry flow by considering the rheological behavior piecewise. Finally, the numerical solutions display the coupled flow information, such as slurry velocity, ice particle concentration and pressure drop distribution. The results show that the ice slurry flow distribution exhibits varying degrees of asymmetry under different operating conditions, and the rheological behavior is affected by the asymmetric flow distributions. When the mean flow velocity is high, the Thomas equation is appropriate for describing ice slurry viscosity, while with decreasing mean flow velocity the ice slurry exhibits Bingham rheology. Compared with experimental pressure drop results, the relative errors of the numerical computation are almost within ±15%. The Mixture CFD model is thus validated as an effective model for describing heterogeneous ice slurry flow and can supply plentiful flow information.

  14. The Development of a Combined Search for a Heterogeneous Chemistry Database

    Directory of Open Access Journals (Sweden)

    Lulu Jiang

    2015-05-01

    A combined search, which joins a slow molecule structure search with a fast compound property search, results in more accurate search results and has been applied in several chemistry databases. However, the problems of search speed differences and combining the two separate search results are two major challenges. In this paper, two kinds of search strategies, synchronous search and asynchronous search, are proposed to solve these problems in the heterogeneous structure database and the property database found in ChemDB, a chemistry database owned by the Institute of Process Engineering, CAS. Their advantages and disadvantages under different conditions are discussed in detail. Furthermore, we applied these two searches to ChemDB and used them to screen for potential molecules that can work as CO2 absorbents. The results reveal that this combined search discovers reasonable target molecules within an acceptable time frame.
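
    The asynchronous variant of the combined search can be sketched as two concurrent coroutines whose hit sets are merged, as below; the compound records, timings, and match rules are invented, and ChemDB's actual search code is not shown.

```python
# Sketch of the asynchronous combined-search strategy: a fast property filter and
# a slow structure search run concurrently and their hit sets are intersected.
# The compound records, timings, and match rules are invented.
import asyncio

COMPOUNDS = {
    "C1": {"boiling_point": 78, "substructure_hit": True},
    "C2": {"boiling_point": 100, "substructure_hit": True},
    "C3": {"boiling_point": 56, "substructure_hit": False},
}

async def property_search(max_bp):
    await asyncio.sleep(0.01)                 # fast, indexed property lookup
    return {cid for cid, r in COMPOUNDS.items() if r["boiling_point"] <= max_bp}

async def structure_search():
    await asyncio.sleep(0.5)                  # slow substructure matching
    return {cid for cid, r in COMPOUNDS.items() if r["substructure_hit"]}

async def combined_search(max_bp):
    # launch both searches at once, then merge their results
    fast, slow = await asyncio.gather(property_search(max_bp), structure_search())
    return fast & slow

print(asyncio.run(combined_search(max_bp=90)))    # -> {'C1'}
```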

  15. Distributed MDSplus database performance with Linux clusters

    International Nuclear Information System (INIS)

    Minor, D.H.; Burruss, J.R.

    2006-01-01

    The staff at the DIII-D National Fusion Facility, operated for the USDOE by General Atomics, are investigating the use of grid computing and Linux technology to improve performance in our core data management services. We are in the process of converting much of our functionality to cluster-based and grid-enabled software. One of the most important pieces is a new distributed version of the MDSplus scientific data management system that is presently used to support fusion research in over 30 countries worldwide. To improve data handling performance, the staff is investigating the use of Linux clusters for both data clients and servers. The new distributed capability will result in better load balancing between these clients and servers, and more efficient use of network resources resulting in improved support of the data analysis needs of the scientific staff

  16. PRISMA database machine: A distributed, main-memory approach

    NARCIS (Netherlands)

    Schmidt, J.W.; Apers, Peter M.G.; Ceri, S.; Kersten, Martin L.; Oerlemans, Hans C.M.; Missikoff, M.

    1988-01-01

    The PRISMA project is a large-scale research effort in the design and implementation of a highly parallel machine for data and knowledge processing. The PRISMA database machine is a distributed, main-memory database management system implemented in an object-oriented language that runs on top of a

  17. Use of Graph Database for the Integration of Heterogeneous Biological Data.

    Science.gov (United States)

    Yoon, Byoung-Ha; Kim, Seon-Kyu; Kim, Seon-Young

    2017-03-01

    Understanding complex relationships among heterogeneous biological data is one of the fundamental goals in biology. In most cases, diverse biological data are stored in relational databases, such as MySQL and Oracle, which store data in multiple tables and then infer relationships by multiple-join statements. Recently, a new type of database, called the graph-based database, was developed to natively represent various kinds of complex relationships, and it is widely used among computer science communities and IT industries. Here, we demonstrate the feasibility of using a graph-based database for complex biological relationships by comparing the performance between MySQL and Neo4j, one of the most widely used graph databases. We collected various biological data (protein-protein interaction, drug-target, gene-disease, etc.) from several existing sources, removed duplicate and redundant data, and finally constructed a graph database containing 114,550 nodes and 82,674,321 relationships. When we tested the query execution performance of MySQL versus Neo4j, we found that Neo4j outperformed MySQL in all cases. While Neo4j exhibited a very fast response for various queries, MySQL exhibited latent or unfinished responses for complex queries with multiple-join statements. These results show that using graph-based databases, such as Neo4j, is an efficient way to store complex biological relationships. Moreover, querying a graph database in diverse ways has the potential to reveal novel relationships among heterogeneous biological data.
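
    The contrast between multiple-join statements and native graph traversal can be made concrete with a query pair such as the one below; the table, node and relationship names are illustrative assumptions, not the schema used in the study.

      # Illustrative query pair: drugs indirectly linked to a disease through a target gene.

      # Relational form: the relationship chain is reconstructed with multiple JOINs.
      sql_query = """
      SELECT DISTINCT d.name
      FROM drug d
      JOIN drug_target dt ON dt.drug_id = d.id
      JOIN gene g          ON g.id = dt.gene_id
      JOIN gene_disease gd ON gd.gene_id = g.id
      JOIN disease z       ON z.id = gd.disease_id
      WHERE z.name = 'asthma';
      """

      # Graph form (Cypher): the same path is expressed directly as a pattern.
      cypher_query = """
      MATCH (d:Drug)-[:TARGETS]->(g:Gene)-[:ASSOCIATED_WITH]->(z:Disease {name: 'asthma'})
      RETURN DISTINCT d.name;
      """

      print(sql_query)
      print(cypher_query)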

  18. Cellular- and micro-dosimetry of heterogeneously distributed tritium.

    Science.gov (United States)

    Chao, Tsi-Chian; Wang, Chun-Ching; Li, Junli; Li, Chunyan; Tung, Chuan-Jong

    2012-01-01

    The assessment of radiotoxicity for heterogeneously distributed tritium should be based on the subcellular dose and the relative biological effectiveness (RBE) for the cell nucleus. In the present work, geometry-dependent absorbed dose and RBE were calculated using Monte Carlo codes for tritium in the whole cell, on the cell surface, in the cytoplasm, or in the cell nucleus. The PENELOPE (PENetration and Energy LOss of Positrons and Electrons) code was used to calculate the geometry-dependent absorbed dose, lineal energy, and electron fluence spectrum. The RBE for intestinal crypt regeneration was calculated using a lineal energy-dependent biological weighting function. The RBE for the induction of DNA double strand breaks was estimated using a nucleotide-level map for clustered DNA lesions from the Monte Carlo damage simulation (MCDS) code. For a typical cell of 10 μm radius and 5 μm nuclear radius, tritium in the cell nucleus resulted in a much higher RBE-weighted absorbed dose than uniformly distributed tritium. Conversely, tritium distributed on the cell surface led to a trivial RBE-weighted absorbed dose due to the irradiation geometry and the strong attenuation of beta particles in the cytoplasm. For tritium uniformly distributed in the cell, the RBE-weighted absorbed dose was larger than for tritium uniformly distributed in the tissue. Cellular- and micro-dosimetry models were developed for the assessment of heterogeneously distributed tritium.

  19. Content-Agnostic Malware Detection in Heterogeneous Malicious Distribution Graph

    KAUST Repository

    Alabdulmohsin, Ibrahim

    2016-10-26

    Malware detection has been widely studied by analysing either file dropping relationships or characteristics of the file distribution network. This paper, for the first time, studies a global heterogeneous malware delivery graph fusing file dropping relationships and the topology of the file distribution network. The integration offers a unique ability to structure the end-to-end distribution relationship. However, it brings large heterogeneous graphs into the analysis. In our study, an average daily generated graph has more than 4 million edges and 2.7 million nodes that differ in type, such as IPs, URLs, and files. We propose a novel Bayesian label propagation model to unify the multi-source information, including content-agnostic features of different node types and topological information of the heterogeneous network. Our approach does not need to examine source code nor inspect the dynamic behaviour of a binary. Instead, it estimates the maliciousness of a given file through a semi-supervised label propagation procedure, which has a linear time complexity w.r.t. the number of nodes and edges. The evaluation on 567 million real-world download events validates that our proposed approach efficiently detects malware with high accuracy. © 2016 Copyright held by the owner/author(s).
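
    The semi-supervised propagation step can be pictured with a toy version of label propagation on a small graph: scores spread from a few labelled seed nodes to their neighbours at a cost linear in the number of edges per iteration. This dense-matrix sketch omits the paper's Bayesian formulation and content-agnostic node features.

      # Toy label propagation: estimate maliciousness scores on a small heterogeneous graph.
      import numpy as np

      # Nodes 0-5 (a mix of files, URLs and IPs); edges encode dropping/hosting links.
      edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (1, 4)]
      n = 6
      A = np.zeros((n, n))
      for i, j in edges:
          A[i, j] = A[j, i] = 1.0
      P = A / A.sum(axis=1, keepdims=True)   # row-normalised transition matrix

      scores = np.full(n, 0.5)               # unknown nodes start undecided
      seeds = {0: 1.0, 5: 0.0}               # node 0 known malicious, node 5 known benign
      for node, label in seeds.items():
          scores[node] = label

      for _ in range(50):
          scores = P @ scores                # propagate neighbours' scores
          for node, label in seeds.items():
              scores[node] = label           # clamp the labelled seeds

      print(np.round(scores, 3))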

  20. Comparison of the Frontier Distributed Database Caching System with NoSQL Databases

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Non-relational "NoSQL" databases such as Cassandra and CouchDB are best known for their ability to scale to large numbers of clients spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects, is based on traditional SQL databases but also has the same high scalability and wide-area distributability for an important subset of applications. This paper compares the architectures, behavior, performance, and maintainability of the two different approaches and identifies the criteria for choosing which approach to prefer over the other.

  1. Optimal File-Distribution in Heterogeneous and Asymmetric Storage Networks

    Science.gov (United States)

    Langner, Tobias; Schindelhauer, Christian; Souza, Alexander

    We consider an optimisation problem which is motivated by storage virtualisation in the Internet. While storage networks make use of dedicated hardware to provide homogeneous bandwidth between servers and clients, in the Internet, connections between storage servers and clients are heterogeneous and often asymmetric with respect to upload and download. Thus, for a large file, the question arises of how it should be fragmented and distributed among the servers to grant "optimal" access to the contents. We concentrate on the transfer time of a file, which is the time needed for one upload and a sequence of n downloads, using a set of m servers with heterogeneous bandwidths. We assume that fragments of the file can be transferred in parallel to and from multiple servers. This model yields a distribution problem that examines the question of how these fragments should be distributed onto those servers in order to minimise the transfer time. We present an algorithm, called FlowScaling, that finds an optimal solution within running time O(m log m). We formulate the distribution problem as a maximum flow problem, which involves a function that states whether a solution with a given transfer time bound exists. This function is then used with a scaling argument to determine an optimal solution within the claimed time complexity.
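
    The reduction to a flow problem can be sketched by asking whether a candidate transfer-time bound T is feasible and then searching over T. The sketch below uses a generic max-flow routine with bisection and a single download pass, rather than the paper's O(m log m) scaling construction; the bandwidth figures are made up.

      # Sketch: feasibility of a transfer-time bound T posed as a max-flow question.
      import networkx as nx

      file_size = 1000.0                                   # MB
      servers = {"s1": (50.0, 20.0), "s2": (30.0, 60.0)}   # (upload-to, download-from) MB/s

      def feasible(T):
          """Within time T the uploader can push up_i*T MB to server i and the
          downloader can pull down_i*T MB from it; the bound is feasible iff a
          flow of file_size fits through the resulting network."""
          G = nx.DiGraph()
          for name, (up, down) in servers.items():
              G.add_edge("src", name, capacity=up * T)
              G.add_edge(name, "dst", capacity=down * T)
          value, _ = nx.maximum_flow(G, "src", "dst")
          return value >= file_size - 1e-9

      lo, hi = 0.0, 1e4
      for _ in range(60):                                  # bisection on the time bound
          mid = (lo + hi) / 2
          lo, hi = (lo, mid) if feasible(mid) else (mid, hi)
      print(f"approximate optimal transfer time: {hi:.2f} s")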

  2. Distributed Database Control and Allocation. Volume 3. Distributed Database System Designer’s Handbook.

    Science.gov (United States)

    1983-10-01

    Multiversion Data 2-18; 2.7.1 Multiversion Timestamping 2-20; 2.7.2 Multiversion Locking 2-20; 2.8 Combining the Techniques 2-22; 3. Database Recovery Algorithms... See [THEM79, GIFF79] for details. 2.7 Multiversion Data. Let us return to a database system model where each logical data item is stored at one DM... In a multiversion database, each Write wi[x] produces a new copy (or version) of x, denoted xi. Thus, the value of x is a set of versions. For each

  3. Distributed Database Semantic Integration of Wireless Sensor Network to Access the Environmental Monitoring System

    Directory of Open Access Journals (Sweden)

    Ubaidillah Umar

    2018-06-01

    Full Text Available A wireless sensor network (WSN) works continuously to gather information from sensors that generate large volumes of data to be handled and processed by applications. Current efforts in sensor networks focus more on networking and development services for a variety of applications and less on processing and integrating data from heterogeneous sensors. There is an increased need for information to become shareable across different sensors, database platforms, and applications that are not easily implemented in traditional database systems. To solve the issue of these large amounts of data from different servers and database platforms (including sensor data), a semantic sensor web service platform is needed to enable a machine to extract meaningful information from the sensor’s raw data. This additionally helps to minimize and simplify data processing and to deduce new information from existing data. This paper implements a semantic web data platform (SWDP) to manage the distribution of data sensors based on the semantic database system. SWDP uses sensors for temperature, humidity, carbon monoxide, carbon dioxide, luminosity, and noise. The system uses the Sesame semantic web database for data processing and a WSN to distribute, minimize, and simplify information processing. The sensor nodes are distributed in different places to collect sensor data. The SWDP generates context information in the form of a resource description framework. The experiment results demonstrate that the SWDP is more efficient than the traditional database system in terms of memory usage and processing time.
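
    A minimal sketch of the idea of storing sensor observations as RDF triples and querying them with SPARQL is shown below, using rdflib as a stand-in for the Sesame store mentioned above; the namespace and property names are illustrative assumptions.

      # Sketch: sensor observations as RDF triples plus a SPARQL query.
      from rdflib import Graph, Literal, Namespace, RDF
      from rdflib.namespace import XSD

      SWDP = Namespace("http://example.org/swdp#")
      g = Graph()
      g.bind("swdp", SWDP)

      readings = [("node1", "temperature", 28.4), ("node1", "humidity", 71.0),
                  ("node2", "temperature", 30.1)]
      for i, (node, prop, value) in enumerate(readings):
          obs = SWDP[f"obs{i}"]
          g.add((obs, RDF.type, SWDP.Observation))
          g.add((obs, SWDP.sensorNode, Literal(node)))
          g.add((obs, SWDP.observedProperty, Literal(prop)))
          g.add((obs, SWDP.value, Literal(value, datatype=XSD.double)))

      query = """
      PREFIX swdp: <http://example.org/swdp#>
      SELECT ?node ?value WHERE {
        ?obs a swdp:Observation ;
             swdp:observedProperty "temperature" ;
             swdp:sensorNode ?node ;
             swdp:value ?value .
      }
      """
      for row in g.query(query):
          print(row.node, row.value)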

  4. Comparison of the Frontier Distributed Database Caching System to NoSQL Databases

    Science.gov (United States)

    Dykstra, Dave

    2012-12-01

    One of the main attractions of non-relational “NoSQL” databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also adds high scalability and the ability to be distributed over a wide-area for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.

  5. Comparison of the Frontier Distributed Database Caching System to NoSQL Databases

    International Nuclear Information System (INIS)

    Dykstra, Dave

    2012-01-01

    One of the main attractions of non-relational “NoSQL” databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also adds high scalability and the ability to be distributed over a wide-area for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.

  6. Comparison of the Frontier Distributed Database Caching System with NoSQL Databases

    CERN Document Server

    Dykstra, David

    2012-01-01

    One of the main attractions of non-relational "NoSQL" databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also has high scalability and wide-area distributability for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.

  7. Comparison of the Frontier Distributed Database Caching System to NoSQL Databases

    Energy Technology Data Exchange (ETDEWEB)

    Dykstra, Dave [Fermilab

    2012-07-20

    One of the main attractions of non-relational NoSQL databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also adds high scalability and the ability to be distributed over a wide-area for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.

  8. Klaim-DB: A Modeling Language for Distributed Database Applications

    DEFF Research Database (Denmark)

    Wu, Xi; Li, Ximeng; Lluch Lafuente, Alberto

    2015-01-01

    We present the modelling language Klaim-DB for distributed database applications. Klaim-DB borrows the distributed nets of the coordination language Klaim but essentially re-incarnates the tuple spaces of Klaim as databases, and provides high-level language abstractions for the access and manipulation of structured data, with integrity and atomicity considerations. We present the formal semantics of Klaim-DB and illustrate the use of the language in a scenario where the sales from different branches of a chain of department stores are aggregated from their local databases. It can be seen that raising the abstraction level and encapsulating integrity checks (concerning the schema of tables, etc.) in the language primitives for database operations benefit the modelling task considerably.

  9. Effect of Heterogeneity of JSFR Fuel Assemblies to Power Distribution

    International Nuclear Information System (INIS)

    Takeda, Toshikazu; Shimazu, Yoichiro; Hibi, Koki; Fujimura, Koji

    2013-01-01

    Conclusion: 1) The strong heterogeneity of JSFR assemblies was successfully calculated by BACH. 2) Verification test of BACH: • Infinite assembly model; • Color set model; • Good agreement with Monte-Carlo results. 3) Core calculations: three models for the inner duct were used, namely an inward model, an outward model and a homogeneous model. • keff difference between the inward and outward models → 0.3%Δk; • ~20% effect on flux and power distributions. Therefore, careful attention has to be paid to the location of the inner duct in fuel loading of JSFR.

  10. Management of Distributed and Extendible Heterogeneous Radio Architectures

    DEFF Research Database (Denmark)

    Ramkumar, Venkata; Mihovska, Albena D.; Prasad, Neeli R.

    2009-01-01

    Wireless communication systems are dynamic by nature, which comes from several factors, namely: radio propagation impairments, traffic changes, interference conditions, user mobility, etc. In a heterogeneous environment, the dynamic network behavior calls for a dynamic management of the radio resources; a process that associates a large number of parameters and quality/performance indicators that need to be set, measured, analyzed, and optimized. Radio-over-fiber (RoF) technology involves the use of optical fiber links to distribute radio frequency (RF) signals from a central location to remote...

  11. PostGIS-Based Heterogeneous Sensor Database Framework for the Sensor Observation Service

    Directory of Open Access Journals (Sweden)

    Ikechukwu Maduako

    2012-10-01

    Full Text Available Environmental monitoring and management systems in most cases deal with models and spatial analytics that involve the integration of in-situ and remote sensor observations. In-situ sensor observations and those gathered by remote sensors are usually provided by different databases and services in real-time dynamic services such as the Geo-Web Services. Thus, data have to be pulled from different databases and transferred over the network before they are fused and processed on the service middleware. This process imposes heavy and unnecessary communication and workload on the service: large raster downloads from flat-file raster data sources each time a request is made, and substantial integration and geo-processing workload on the service middleware that could be better leveraged at the database level. In this paper, we propose and present a heterogeneous sensor database framework, or model, for integration, geo-processing and spatial analysis of remote and in-situ sensor observations at the database level, and we show how it can be integrated into the Sensor Observation Service (SOS) to reduce communication and workload on the geospatial web services, as well as to make query requests from the user end considerably more flexible.

  12. The EDEN-IW ontology model for sharing knowledge and water quality data between heterogenous databases

    DEFF Research Database (Denmark)

    Stjernholm, M.; Poslad, S.; Zuo, L.

    2004-01-01

    The Environmental Data Exchange Network for Inland Water (EDEN-IW) project's main aim is to develop a system for making disparate and heterogeneous databases of Inland Water quality more accessible to users. The core technology is based upon a combination of: an ontological model to represent a Semantic Web based data model for IW; software agents as an infrastructure to share and reason about the IW semantic data model; and XML to make the information accessible to Web portals and mainstream Web services. This presentation focuses on the Semantic Web or Ontological model. Currently, we have...

  13. Inventory calculations in sediment samples with heterogeneous plutonium activity distribution

    International Nuclear Information System (INIS)

    Eriksson, M.; Dahlgaard, H.

    2002-01-01

    A method to determine the total inventory of a heterogeneously distributed contamination of marine sediments is described. The study site is the Bylot Sound off the Thule Airbase, NW Greenland, where marine sediments became contaminated with plutonium in 1968 after a nuclear weapons accident. The calculation is based on a gamma spectrometric screening of the 241Am concentration in 450 one-gram aliquots from 6 sediment cores. A Monte Carlo programme then simulates a probable distribution of the activity, and based on that, a total inventory is estimated by integrating a double exponential function. The present data indicate a total inventory around 3.5 kg, which is 7 times higher than earlier estimates (0.5 kg). The difference is partly explained by the inclusion of hot particles in the present calculation. A large uncertainty is connected to this estimate, and it should be regarded as preliminary. (au)

  14. A uniform approach for programming distributed heterogeneous computing systems.

    Science.gov (United States)

    Grasso, Ivan; Pellegrini, Simone; Cosenza, Biagio; Fahringer, Thomas

    2014-12-01

    Large-scale compute clusters of heterogeneous nodes equipped with multi-core CPUs and GPUs are getting increasingly popular in the scientific community. However, such systems require a combination of different programming paradigms making application development very challenging. In this article we introduce libWater, a library-based extension of the OpenCL programming model that simplifies the development of heterogeneous distributed applications. libWater consists of a simple interface, which is a transparent abstraction of the underlying distributed architecture, offering advanced features such as inter-context and inter-node device synchronization. It provides a runtime system which tracks dependency information enforced by event synchronization to dynamically build a DAG of commands, on which we automatically apply two optimizations: collective communication pattern detection and device-host-device copy removal. We assess libWater's performance in three compute clusters available from the Vienna Scientific Cluster, the Barcelona Supercomputing Center and the University of Innsbruck, demonstrating improved performance and scaling with different test applications and configurations.

  15. ADAPTIVE DISTRIBUTION OF A SWARM OF HETEROGENEOUS ROBOTS

    Directory of Open Access Journals (Sweden)

    Amanda Prorok

    2016-02-01

    Full Text Available We present a method that distributes a swarm of heterogeneous robots among a set of tasks that require specialized capabilities in order to be completed. We model the system of heterogeneous robots as a community of species, where each species (robot type) is defined by the traits (capabilities) that it owns. Our method is based on a continuous abstraction of the swarm at a macroscopic level as we model robots switching between tasks. We formulate an optimization problem that produces an optimal set of transition rates for each species, so that the desired trait distribution is reached as quickly as possible. Since our method is based on the derivation of an analytical gradient, it is very efficient with respect to state-of-the-art methods. Building on this result, we propose a real-time optimization method that enables an online adaptation of transition rates. Our approach is well-suited for real-time applications that rely on online redistribution of large-scale robotic systems.
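
    The macroscopic abstraction can be summarised as a linear rate equation on task occupancy, dx/dt = K x, where the off-diagonal entries of K are the task-to-task transition rates and the diagonal balances the outflow. The sketch below integrates this equation for fixed, made-up rates; the paper's gradient-based optimisation of the rates is not reproduced.

      # Sketch: macroscopic evolution of one robot species over three tasks under fixed rates.
      import numpy as np

      k = np.array([[0.0, 0.4, 0.1],    # k[i, j] = rate of switching from task i to task j
                    [0.2, 0.0, 0.3],
                    [0.1, 0.2, 0.0]])

      # Rate matrix: K[j, i] = k[i, j] off-diagonal, diagonal balances the outflow.
      K = k.T - np.diag(k.sum(axis=1))

      x = np.array([1.0, 0.0, 0.0])     # fraction of robots per task, all starting at task 0
      dt = 0.01
      for _ in range(5000):             # simple Euler integration of dx/dt = K x
          x = x + dt * (K @ x)

      print(np.round(x, 3), "sum =", round(float(x.sum()), 6))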

  16. A Database Approach to Distributed State Space Generation

    NARCIS (Netherlands)

    Blom, Stefan; Lisser, Bert; van de Pol, Jan Cornelis; Weber, M.

    2007-01-01

    We study distributed state space generation on a cluster of workstations. It is explained why state space partitioning by a global hash function is problematic when states contain variables from unbounded domains, such as lists or other recursive datatypes. Our solution is to introduce a database

  17. A Database Approach to Distributed State Space Generation

    NARCIS (Netherlands)

    Blom, Stefan; Lisser, Bert; van de Pol, Jan Cornelis; Weber, M.; Cerna, I.; Haverkort, Boudewijn R.H.M.

    2008-01-01

    We study distributed state space generation on a cluster of workstations. It is explained why state space partitioning by a global hash function is problematic when states contain variables from unbounded domains, such as lists or other recursive datatypes. Our solution is to introduce a database

  18. 3D Game Content Distributed Adaptation in Heterogeneous Environments

    Directory of Open Access Journals (Sweden)

    Berretty Robert-Paul

    2007-01-01

    Full Text Available Most current multiplayer 3D games can only be played on a single dedicated platform (a particular computer, console, or cell phone), requiring specifically designed content and communication over a predefined network. Below we show how, by using signal processing techniques such as multiresolution representation and scalable coding for all the components of a 3D graphics object (geometry, texture, and animation), we enable online dynamic content adaptation, and thus delivery of the same content over heterogeneous networks to terminals with very different profiles, and its rendering on them. We present quantitative results demonstrating how the best displayed quality versus computational complexity versus bandwidth tradeoffs have been achieved, given the distributed resources available over the end-to-end content delivery chain. Additionally, we use state-of-the-art, standardised content representation and compression formats (MPEG-4 AFX, JPEG 2000, XML), enabling deployment over existing infrastructure, while keeping hooks to well-established practices in the game industry.

  19. The Power of Heterogeneity: Parameter Relationships from Distributions

    Science.gov (United States)

    Röding, Magnus; Bradley, Siobhan J.; Williamson, Nathan H.; Dewi, Melissa R.; Nann, Thomas; Nydén, Magnus

    2016-01-01

    Complex scientific data is becoming the norm: many disciplines are growing immensely data-rich, and higher-dimensional measurements are performed to resolve complex relationships between parameters. Inherently multi-dimensional measurements can directly provide information on both the distributions of individual parameters and the relationships between them, such as in nuclear magnetic resonance and optical spectroscopy. However, when data originates from different measurements and comes in different forms, resolving parameter relationships is a matter of data analysis rather than experiment. We present a method for resolving relationships between parameters that are distributed individually and also correlated. In two case studies, we model the relationships between diameter and luminescence properties of quantum dots and the relationship between molecular weight and diffusion coefficient for polymers. Although it is expected that resolving complicated correlated relationships requires inherently multi-dimensional measurements, our method constitutes a useful contribution to the modelling of quantitative relationships between correlated parameters and measurements. We emphasise the general applicability of the method in fields where heterogeneity and complex distributions of parameters are obstacles to scientific insight. PMID:27182701

  20. Mesoscale characterization of local property distributions in heterogeneous electrodes

    Science.gov (United States)

    Hsu, Tim; Epting, William K.; Mahbub, Rubayyat; Nuhfer, Noel T.; Bhattacharya, Sudip; Lei, Yinkai; Miller, Herbert M.; Ohodnicki, Paul R.; Gerdes, Kirk R.; Abernathy, Harry W.; Hackett, Gregory A.; Rollett, Anthony D.; De Graef, Marc; Litster, Shawn; Salvador, Paul A.

    2018-05-01

    The performance of electrochemical devices depends on the three-dimensional (3D) distributions of microstructural features in their electrodes. Several mature methods exist to characterize 3D microstructures over the microscale (tens of microns), which are useful in understanding homogeneous electrodes. However, methods that capture mesoscale (hundreds of microns) volumes at appropriate resolution (tens of nm) are lacking, though they are needed to understand more common, less ideal electrodes. Using serial sectioning with a Xe plasma focused ion beam combined with scanning electron microscopy (Xe PFIB-SEM), two commercial solid oxide fuel cell (SOFC) electrodes are reconstructed over volumes of 126 × 73 × 12.5 and 124 × 110 × 8 μm³ with a resolution on the order of ≈ 50³ nm³. The mesoscale distributions of microscale structural features are quantified and both microscale and mesoscale inhomogeneities are found. We analyze the origin of inhomogeneity over different length scales by comparing experimental and synthetic microstructures, generated with different particle size distributions, with such synthetic microstructures capturing well the high-frequency heterogeneity. Effective medium theory models indicate that significant mesoscale variations in local electrochemical activity are expected throughout such electrodes. These methods offer improved understanding of the performance of complex electrodes in energy conversion devices.

  1. A Preliminary Study on the Multiple Mapping Structure of Classification Systems for Heterogeneous Databases

    Directory of Open Access Journals (Sweden)

    Seok-Hyoung Lee

    2012-06-01

    Full Text Available While science and technology information service portals and heterogeneous databases produced in Korea and other countries are being integrated, methods of connecting the unique classification systems applied to each database have been studied. Research outputs, such as journal articles, patent specifications, and research reports, are organically related to each other. If the most basic and meaningful classification systems are not connected, it is difficult to achieve interoperability of the information, and thus not easy to implement meaningful science and technology information services through information convergence. This study addresses this issue by analyzing mapping systems between classification systems in order to design a structure to connect the variety of classification systems used in the academic information database of the Korea Institute of Science and Technology Information, which provides a science and technology information portal service. This study also aims to design a mapping system for the classification systems to be applied to actual science and technology information services and information management systems.

  2. DAVID Knowledgebase: a gene-centered database integrating heterogeneous gene annotation resources to facilitate high-throughput gene functional analysis

    Directory of Open Access Journals (Sweden)

    Baseler Michael W

    2007-11-01

    Full Text Available Abstract Background Due to the complex and distributed nature of biological research, our current biological knowledge is spread over many redundant annotation databases maintained by many independent groups. Analysts usually need to visit many of these bioinformatics databases in order to integrate comprehensive annotation information for their genes, which becomes one of the bottlenecks, particularly for the analytic task associated with a large gene list. Thus, a highly centralized and ready-to-use gene-annotation knowledgebase is in demand for high throughput gene functional analysis. Description The DAVID Knowledgebase is built around the DAVID Gene Concept, a single-linkage method to agglomerate tens of millions of gene/protein identifiers from a variety of public genomic resources into DAVID gene clusters. The grouping of such identifiers improves the cross-reference capability, particularly across NCBI and UniProt systems, enabling more than 40 publicly available functional annotation sources to be comprehensively integrated and centralized by the DAVID gene clusters. The simple, pair-wise, text format files which make up the DAVID Knowledgebase are freely downloadable for various data analysis uses. In addition, a well organized web interface allows users to query different types of heterogeneous annotations in a high-throughput manner. Conclusion The DAVID Knowledgebase is designed to facilitate high throughput gene functional analysis. For a given gene list, it not only provides the quick accessibility to a wide range of heterogeneous annotation data in a centralized location, but also enriches the level of biological information for an individual gene. Moreover, the entire DAVID Knowledgebase is freely downloadable or searchable at http://david.abcc.ncifcrf.gov/knowledgebase/.
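
    The single-linkage agglomeration of identifiers can be illustrated with a small union-find sketch: any two identifiers linked by a cross-reference in some source end up in the same cluster. The identifier pairs below are made-up placeholders, not DAVID content.

      # Sketch: single-linkage agglomeration of gene/protein identifiers with union-find.
      from collections import defaultdict

      parent = {}

      def find(x):
          parent.setdefault(x, x)
          while parent[x] != x:                 # path halving keeps trees shallow
              parent[x] = parent[parent[x]]
              x = parent[x]
          return x

      def union(a, b):
          ra, rb = find(a), find(b)
          if ra != rb:
              parent[ra] = rb

      # Hypothetical cross-reference pairs harvested from different annotation sources.
      xrefs = [("ENSG0001", "P12345"), ("P12345", "NM_0001"),
               ("ENSG0002", "Q99999"), ("NM_0001", "GeneID:7157")]
      for a, b in xrefs:
          union(a, b)

      clusters = defaultdict(set)
      for ident in parent:
          clusters[find(ident)].add(ident)
      print(list(clusters.values()))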

  3. Development of database on the distribution coefficient. 1. Collection of the distribution coefficient data

    Energy Technology Data Exchange (ETDEWEB)

    Takebe, Shinichi; Abe, Masayoshi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2001-03-01

    The distribution coefficient is a very important parameter for the environmental impact assessment of the disposal of radioactive waste arising from research institutes. A literature survey within the country was carried out mainly for the purpose of selecting reasonable distribution coefficient values for use in the safety evaluation. This report compiles, for each literature source, the information on distribution coefficients to be input to the database, and summarizes it as literature information data on the distribution coefficient. (author)

  4. A distributed database view of network tracking systems

    Science.gov (United States)

    Yosinski, Jason; Paffenroth, Randy

    2008-04-01

    In distributed tracking systems, multiple non-collocated trackers cooperate to fuse local sensor data into a global track picture. Generating this global track picture at a central location is fairly straightforward, but the single point of failure and excessive bandwidth requirements introduced by centralized processing motivate the development of decentralized methods. In many decentralized tracking systems, trackers communicate with their peers via a lossy, bandwidth-limited network in which dropped, delayed, and out of order packets are typical. Oftentimes the decentralized tracking problem is viewed as a local tracking problem with a networking twist; we believe this view can underestimate the network complexities to be overcome. Indeed, a subsequent 'oversight' layer is often introduced to detect and handle track inconsistencies arising from a lack of robustness to network conditions. We instead pose the decentralized tracking problem as a distributed database problem, enabling us to draw inspiration from the vast extant literature on distributed databases. Using the two-phase commit algorithm, a well known technique for resolving transactions across a lossy network, we describe several ways in which one may build a distributed multiple hypothesis tracking system from the ground up to be robust to typical network intricacies. We pay particular attention to the dissimilar challenges presented by network track initiation vs. maintenance and suggest a hybrid system that balances speed and robustness by utilizing two-phase commit for only track initiation transactions. Finally, we present simulation results contrasting the performance of such a system with that of more traditional decentralized tracking implementations.
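
    A minimal in-process sketch of the two-phase commit pattern mentioned above for track-initiation transactions is given below; real deployments add timeouts, persistent logs and recovery, none of which are shown, and the tracker class is a made-up stand-in.

      # Toy two-phase commit: the coordinator collects votes before a new track is initiated.
      class TrackerNode:
          def __init__(self, name, will_accept=True):
              self.name, self.will_accept, self.tracks = name, will_accept, []
              self._pending = None

          def prepare(self, track):        # phase 1: vote on the tentative track
              self._pending = track
              return self.will_accept

          def commit(self):                # phase 2a: make the track durable
              self.tracks.append(self._pending)
              self._pending = None

          def abort(self):                 # phase 2b: discard the tentative track
              self._pending = None

      def initiate_track(peers, track):
          votes = [peer.prepare(track) for peer in peers]
          if all(votes):
              for peer in peers:
                  peer.commit()
              return True
          for peer in peers:
              peer.abort()
          return False

      peers = [TrackerNode("A"), TrackerNode("B"), TrackerNode("C", will_accept=False)]
      print(initiate_track(peers, {"id": 42, "state": [1.0, 2.0]}))   # aborted: one 'no' vote
      peers[2].will_accept = True
      print(initiate_track(peers, {"id": 42, "state": [1.0, 2.0]}))   # committed everywhere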

  5. Palantiri: a distributed real-time database system for process control

    International Nuclear Information System (INIS)

    Tummers, B.J.; Heubers, W.P.J.

    1992-01-01

    The medium-energy accelerator MEA, located in Amsterdam, is controlled by a heterogeneous computer network. A large real-time database contains the parameters involved in the control of the accelerator and the experiments. This database system was implemented about ten years ago and has since been extended several times. In response to increased needs the database system has been redesigned. The new database environment, as described in this paper, consists of two new concepts: (1) A Palantir, which is a per-machine process that stores the locally declared data and forwards all non-local requests for data access to the appropriate machine. It acts as a storage device for data and a looking glass upon the world. (2) Golems: working units that define the data within the Palantir, and that have knowledge of the hardware they control. Applications access the data of a Golem by name (names resemble Unix path names). The Palantir that runs on the same machine as the application handles the distribution of access requests. This paper focuses on the Palantir concept as a distributed data storage and event handling device for process control. (author)

  6. Research and Implementation of Distributed Database HBase Monitoring System

    Directory of Open Access Journals (Sweden)

    Guo Lisi

    2017-01-01

    Full Text Available With the arrival of the big data age, the distributed database HBase has become an important tool for storing massive data. The normal operation of the HBase database is an important guarantee of data storage security, so designing a reasonable HBase monitoring system is of great practical significance. In this article, we introduce a solution, containing performance monitoring and fault alarm function modules, that meets an operator's demand for monitoring the HBase database in actual production projects. We designed a monitoring system that consists of a flexible and extensible monitoring agent, a monitoring server based on the SSM architecture, and a concise monitoring display layer. Moreover, in order to deal with the problem that pages render too slowly during actual operation, we present a solution: reducing the number of SQL queries. It has been shown that reducing SQL queries can effectively improve system performance and user experience. The system works well in monitoring the status of the HBase database, flexibly extending the monitoring indices, and issuing a warning when a fault occurs, so that it improves the working efficiency of the administrator and ensures the smooth operation of the project.

  7. Distributed data collection for a database of radiological image interpretations

    Science.gov (United States)

    Long, L. Rodney; Ostchega, Yechiam; Goh, Gin-Hua; Thoma, George R.

    1997-01-01

    The National Library of Medicine, in collaboration with the National Center for Health Statistics and the National Institute for Arthritis and Musculoskeletal and Skin Diseases, has built a system for collecting radiological interpretations for a large set of x-ray images acquired as part of the data gathered in the second National Health and Nutrition Examination Survey. This system is capable of delivering across the Internet 5- and 10-megabyte x-ray images to Sun workstations equipped with X Window based 2048 X 2560 image displays, for the purpose of having these images interpreted for the degree of presence of particular osteoarthritic conditions in the cervical and lumbar spines. The collected interpretations can then be stored in a database at the National Library of Medicine, under control of the Illustra DBMS. This system is a client/server database application which integrates (1) distributed server processing of client requests, (2) a customized image transmission method for faster Internet data delivery, (3) distributed client workstations with high resolution displays, image processing functions and an on-line digital atlas, and (4) relational database management of the collected data.

  8. Income distribution patterns from a complete social security database

    Science.gov (United States)

    Derzsy, N.; Néda, Z.; Santos, M. A.

    2012-11-01

    We analyze the income distribution of employees for 9 consecutive years (2001-2009) using a complete social security database for an economically important district of Romania. The database contains detailed information on more than half a million taxpayers, including their monthly salaries from all employers where they worked. Besides studying the characteristic distribution functions in the high and low/medium income limits, the database allows a detailed dynamical study by following the time-evolution of the taxpayers' income. To our knowledge, this is the first extensive study of this kind (a previous Japanese taxpayers survey was limited to two years). In the high income limit we prove once again the validity of Pareto's law, obtaining a perfect scaling on four orders of magnitude in the rank for all the studied years. The obtained Pareto exponents are quite stable with values around α≈2.5, in spite of the fact that during this period the economy developed rapidly and also a financial-economic crisis hit Romania in 2007-2008. For the low and medium income category we confirmed the exponential-type income distribution. Following the income of employees in time, we have found that the top limit of the income distribution is a highly dynamical region with strong fluctuations in the rank. In this region, the observed dynamics is consistent with a multiplicative random growth hypothesis. Contrary to previous results obtained for Japanese employees, we find that the logarithmic growth-rate is not independent of the income.
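
    For reference, the two regimes described above are commonly written as follows; this is a generic parameterisation with fitted scale parameters x_0 and T_s, not the authors' exact fit.

      % High-income tail: Pareto law for the complementary cumulative distribution.
      P(X > x) \;\propto\; \left(\frac{x}{x_0}\right)^{-\alpha}, \qquad x \ge x_0, \quad \alpha \approx 2.5 .
      % Low and medium incomes: exponential distribution with income scale T_s.
      P(X > x) \;\propto\; \exp\!\left(-\frac{x}{T_s}\right), \qquad x < x_0 .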

  9. Histologic heterogeneity of triple negative breast cancer: A National Cancer Centre Database analysis.

    Science.gov (United States)

    Mills, Matthew N; Yang, George Q; Oliver, Daniel E; Liveringhouse, Casey L; Ahmed, Kamran A; Orman, Amber G; Laronga, Christine; Hoover, Susan J; Khakpour, Nazanin; Costa, Ricardo L B; Diaz, Roberto

    2018-06-02

    Triple negative breast cancer (TNBC) is an aggressive disease, but recent studies have identified heterogeneity in patient outcomes. However, the utility of histologic subtyping in TNBC has not yet been well-characterised. This study utilises data from the National Cancer Center Database (NCDB) to complete the largest series to date investigating the prognostic importance of histology within TNBC. A total of 729,920 patients (pts) with invasive ductal carcinoma (IDC), metaplastic breast carcinoma (MBC), medullary breast carcinoma (MedBC), adenoid cystic carcinoma (ACC), invasive lobular carcinoma (ILC) or apocrine breast carcinoma (ABC) treated between 2004 and 2012 were identified in the NCDB. Of these, 89,222 pts with TNBC that received surgery were analysed. Kaplan-Meier analysis, log-rank testing and multivariate Cox proportional hazards regression were utilised with overall survival (OS) as the primary outcome. MBC (74.1%), MedBC (60.6%), ACC (75.7%), ABC (50.1%) and ILC (1.8%) had significantly different proportions of triple negativity when compared to IDC (14.0%, p < 0.001). TNBC predicted an inferior OS in IDC (p < 0.001) and ILC (p < 0.001). Lumpectomy and radiation (RT) were more common in MedBC (51.7%) and ACC (51.5%) and less common in MBC (33.1%) and ILC (25.4%), when compared to IDC (42.5%, p < 0.001). TNBC patients with MBC (HR 1.39, p < 0.001), MedBC (HR 0.42, p < 0.001) and ACC (HR 0.32, p = 0.003) differed significantly in OS when compared to IDC. Our results indicate that histologic heterogeneity in TNBC significantly informs patient outcomes and thus, has the potential to aid in the development of optimum personalised treatments. Copyright © 2018 Elsevier Ltd. All rights reserved.

  10. Smart Control of Energy Distribution Grids over Heterogeneous Communication Networks

    DEFF Research Database (Denmark)

    Olsen, Rasmus Løvenstein; Iov, Florin; Hägerling, Christian

    2014-01-01

    The expected growth in distributed generation will significantly affect the operation and control of today's distribution grids. Being confronted with short-time power variations of distributed generations, the assurance of a reliable service (grid stability, avoidance of energy losses) and the qu...

  11. Heterogeneous game resource distributions promote cooperation in spatial prisoner's dilemma game

    Science.gov (United States)

    Cui, Guang-Hai; Wang, Zhen; Yang, Yan-Cun; Tian, Sheng-Wen; Yue, Jun

    2018-01-01

    In social networks, individual abilities to establish interactions are always heterogeneous and independent of the number of topological neighbors. We here study the influence of heterogeneous distributions of abilities on the evolution of individual cooperation in the spatial prisoner's dilemma game. First, we introduced a prisoner's dilemma game, taking into account individual heterogeneous abilities to establish games, which are determined by the owned game resources. Second, we studied three types of game resource distributions that follow the power-law property. Simulation results show that the heterogeneous distribution of individual game resources can promote cooperation effectively, and the heterogeneous level of resource distributions has a positive influence on the maintenance of cooperation. Extensive analysis shows that cooperators with large resource capacities can foster cooperator clusters around themselves. Furthermore, when the temptation to defect is high, cooperator clusters in which the central pure cooperators have larger game resource capacities are more stable than other cooperator clusters.
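
    The heterogeneous resource assignment can be sketched by drawing game-resource capacities from a power law via inverse-transform sampling; the exponent, minimum resource and player count below are illustrative assumptions.

      # Sketch: assign power-law distributed game resources to players.
      import random

      def sample_power_law(x_min, exponent):
          """Inverse-transform sample from p(x) ~ x**(-exponent), x >= x_min, exponent > 1."""
          u = random.random()
          return x_min * (1.0 - u) ** (-1.0 / (exponent - 1.0))

      random.seed(1)
      n_players = 10_000
      resources = [sample_power_law(x_min=1.0, exponent=2.5) for _ in range(n_players)]

      # A player's resources cap how many games it can establish per round (1 unit per game).
      capacities = [int(r) for r in resources]
      print("mean capacity:", sum(capacities) / n_players, "max capacity:", max(capacities))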

  12. A Survey on Distributed Mobile Database and Data Mining

    Science.gov (United States)

    Goel, Ajay Mohan; Mangla, Neeraj; Patel, R. B.

    2010-11-01

    The anticipated increase in popular use of the Internet has created more opportunities in information dissemination, e-commerce, and multimedia communication. It has also created more challenges in organizing information and facilitating its efficient retrieval. In response to this, new techniques have evolved which facilitate the creation of such applications. Certainly the most promising among the new paradigms is the use of mobile agents. In this paper, mobile agent and distributed database technologies are applied to the banking system. Many approaches have been proposed to schedule data items for broadcasting in a mobile environment. In this paper, an efficient strategy for accessing multiple data items in mobile environments is proposed, addressing the bottleneck of current banking systems.

  13. Distributed Service Discovery for Heterogeneous Wireless Sensor Networks

    NARCIS (Netherlands)

    Marin Perianu, Raluca; Scholten, Johan; Havinga, Paul J.M.

    Service discovery in heterogeneous Wireless Sensor Networks is a challenging research objective, due to the inherent limitations of sensor nodes and their extensive and dense deployment. The protocols proposed for ad hoc networks are too heavy for sensor environments. This paper presents a

  14. Heterogeneous Data Fusion Method to Estimate Travel Time Distributions in Congested Road Networks

    OpenAIRE

    Chaoyang Shi; Bi Yu Chen; William H. K. Lam; Qingquan Li

    2017-01-01

    Travel times in congested urban road networks are highly stochastic. Provision of travel time distribution information, including both mean and variance, can be very useful for travelers to make reliable path choice decisions to ensure higher probability of on-time arrival. To this end, a heterogeneous data fusion method is proposed to estimate travel time distributions by fusing heterogeneous data from point and interval detectors. In the proposed method, link travel time distributions are f...

  15. Wide-area-distributed storage system for a multimedia database

    Science.gov (United States)

    Ueno, Masahiro; Kinoshita, Shigechika; Kuriki, Makato; Murata, Setsuko; Iwatsu, Shigetaro

    1998-12-01

    We have developed a wide-area-distributed storage system for multimedia databases, which minimizes the possibility of simultaneous failure of multiple disks in the event of a major disaster. It features a RAID system whose member disks are spatially distributed over a wide area. Each node has a device which includes the controller of the RAID and the controller of the member disks controlled by other nodes. The devices in each node are connected to a computer using fiber optic cables and communicate using fiber-channel technology. Any computer at a node can utilize multiple devices connected by optical fibers as a single 'virtual disk.' The advantage of this system structure is that devices and fiber optic cables are shared by the computers. In this report, we first describe the proposed system and the prototype used for testing. We then discuss its performance, i.e., how read and write throughputs are affected by data-access delay, the RAID level, and queuing.

  16. Synchronous message-based communication for distributed heterogeneous systems

    International Nuclear Information System (INIS)

    Wilkinson, N.; Dohan, D.

    1992-01-01

    The use of a synchronous, message-based real-time operating system (Unison) as the basis of transparent interprocess and inter-processor communication over the VME-bus is described. The implementation of a synchronous, message-based protocol for network communication between heterogeneous systems is discussed. In particular, the design and implementation of a message-based session layer over a virtual circuit transport layer protocol using UDP/IP is described. Inter-process communication is achieved via a message-based semantic which is portable by virtue of its ease of implementation in other operating system environments. Protocol performance for network communication among heterogeneous architectures is presented, including VMS, Unix, Mach and Unison. (author)

  17. Smart Control of Energy Distribution Grids over Heterogeneous Communication Networks

    DEFF Research Database (Denmark)

    Schwefel, Hans-Peter; Silva, Nuno; Olsen, Rasmus Løvenstein

    2018-01-01

    Off-the shelf wireless communication technologies reduce infrastructure deployment costs and are thus attractive for distribution system control. Wireless communication however may lead to variable network performance. Hence the impact of this variability on overall distribution system control be...

  18. Electrical resistivity sounding to study water content distribution in heterogeneous soils

    Science.gov (United States)

    Electrical resistivity (ER) sounding is increasingly being used as non-invasive technique to reveal and map soil heterogeneity. The objective of this work was to assess ER sounding applicability to study soil water distribution in spatially heterogeneous soils. The 30x30-m study plot was located at ...

  19. Generative Adversarial Networks Based Heterogeneous Data Integration and Its Application for Intelligent Power Distribution and Utilization

    Directory of Open Access Journals (Sweden)

    Yuanpeng Tan

    2018-01-01

    Full Text Available The heterogeneous characteristics of big data systems for intelligent power distribution and utilization have become more and more prominent, which brings new challenges for traditional data analysis technologies and restricts the comprehensive management of distribution network assets. In order to solve the problem that heterogeneous data resources of power distribution systems are difficult to utilize effectively, a novel generative adversarial networks (GANs) based heterogeneous data integration method for intelligent power distribution and utilization is proposed. In the proposed method, GAN theory is introduced to expand the distribution of complete data samples. Then, a so-called peak clustering algorithm is proposed to realize a finite open coverage of the expanded sample space and to repair incomplete samples, eliminating the heterogeneous characteristics. Finally, in order to realize the integration of the heterogeneous data for intelligent power distribution and utilization, the well-trained discriminator model of the GAN is employed to check the restored data samples. Simulation experiments verified the validity and stability of the proposed heterogeneous data integration method, which provides a novel perspective for further data quality management of power distribution systems.

  20. Computer simulation of the interplay between fractal structures and surrounding heterogeneous multifractal distributions. Applications

    OpenAIRE

    Martin Martin, Miguel Angel; Reyes Castro, Miguel E.; Taguas Coejo, Fco. Javier

    2014-01-01

    In a large number of physical, biological and environmental processes, interfaces with highly irregular geometry appear, separating media (phases) in which the heterogeneity of constituents is present. In this work the quantification of the interplay between irregular structures and surrounding heterogeneous distributions in the plane is made. For a geometric set and a mass distribution (measure) supported in a region containing it, the mass gives account of the interplay between th...

  1. Schema architecture and their relationships to transaction processing in distributed database systems

    NARCIS (Netherlands)

    Apers, Peter M.G.; Scheuermann, P.

    1991-01-01

    We discuss the different types of schema architectures which could be supported by distributed database systems, making a clear distinction between logical, physical, and federated distribution. We elaborate on the additional mapping information required in architecture based on logical distribution

  2. Observed and unobserved heterogeneity in stochastic frontier models: An application to the electricity distribution industry

    International Nuclear Information System (INIS)

    Kopsakangas-Savolainen, Maria; Svento, Rauli

    2011-01-01

    In this study we combine different possibilities to model firm-level heterogeneity in stochastic frontier analysis. We show that both observed and unobserved heterogeneity cause serious biases in inefficiency results. Modelling observed and unobserved heterogeneity treats individual firms in different ways, and even though the expected mean inefficiency scores diminish in both cases, the firm-level efficiency rank orders turn out to be very different. The best fit with the data is obtained by modelling unobserved heterogeneity through randomizing frontier parameters and, at the same time, explicitly modelling the observed heterogeneity in the inefficiency distribution. These results are obtained using data from Finnish electricity distribution utilities and are relevant to electricity distribution pricing and regulation. -- Research Highlights: → We show that both observed and unobserved heterogeneity of firms cause biases in inefficiency results. → Different ways of accounting for firm-level heterogeneity end up with very different rank orders of firms. → The model which combines the characteristics of unobserved and observed heterogeneity fits the data best.

  3. Distributed Pseudo-Random Number Generation and Its Application to Cloud Database

    OpenAIRE

    Chen, Jiageng; Miyaji, Atsuko; Su, Chunhua

    2014-01-01

    Cloud databases are now a rapidly growing trend in the cloud computing market. They enable clients to run their computations on outsourced databases or to access distributed database services in the cloud. At the same time, security and privacy concerns are a major challenge for cloud databases to continue growing. To enhance the security and privacy of cloud database technology, pseudo-random number generation (PRNG) plays an important role in data encryption and privacy-pr...
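
    One common construction for per-node pseudo-random streams is to hash a shared master seed together with a node identifier and a counter; the toy sketch below only illustrates that idea and is neither the scheme proposed in the paper nor cryptographic advice.

      # Toy counter-mode PRNG: each cloud-database node derives its own stream from a
      # shared master seed and its node id, so the streams are distinct by construction.
      import hashlib

      class NodePRNG:
          def __init__(self, master_seed: bytes, node_id: str):
              self.key = hashlib.sha256(master_seed + node_id.encode()).digest()
              self.counter = 0

          def next_uint64(self) -> int:
              block = hashlib.sha256(self.key + self.counter.to_bytes(16, "big")).digest()
              self.counter += 1
              return int.from_bytes(block[:8], "big")

      rng_a = NodePRNG(b"shared-master-seed", "node-A")
      rng_b = NodePRNG(b"shared-master-seed", "node-B")
      print([rng_a.next_uint64() % 100 for _ in range(5)])
      print([rng_b.next_uint64() % 100 for _ in range(5)])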

  4. Design issues of an efficient distributed database scheduler for telecom

    NARCIS (Netherlands)

    Bodlaender, M.P.; Stok, van der P.D.V.

    1998-01-01

    We optimize the speed of real-time databases by optimizing the scheduler. The performance of a database is directly linked to the environment it operates in, and we use environment characteristics as guidelines for the optimization. A typical telecom environment is investigated, and characteristics

  5. Checkpointing and Recovery in Distributed and Database Systems

    Science.gov (United States)

    Wu, Jiang

    2011-01-01

    A transaction-consistent global checkpoint of a database records a state of the database which reflects the effect of only completed transactions and not the results of any partially executed transactions. This thesis establishes the necessary and sufficient conditions for a checkpoint of a data item (or the checkpoints of a set of data items) to…

  6. Multi-layer distributed storage of LHD plasma diagnostic database

    International Nuclear Information System (INIS)

    Nakanishi, Hideya; Kojima, Mamoru; Ohsuna, Masaki; Nonomura, Miki; Imazu, Setsuo; Nagayama, Yoshio

    2006-01-01

    At the end of the LHD experimental campaign in 2003, the amount of whole plasma diagnostics raw data had reached 3.16 GB in a long-pulse experiment. This is a new world record in fusion plasma experiments, far beyond the previous value of 1.5 GB/shot. The total size of the LHD diagnostic data is about 21.6 TB for the whole six years of experiments, and it continues to grow at an increasing rate. The LHD diagnostic database and storage system, i.e. the LABCOM system, has a completely distributed architecture to be sufficiently flexible and easily expandable to maintain integrity of the total amount of data. It has three categories of storage layer: OODBMS volumes in data acquisition servers, RAID servers, and mass storage systems, such as MO jukeboxes and DVD-R changers. These are equally accessible through the network. By data migration between them, they can be considered a virtual OODB extension area. Their data contents are listed in a 'facilitator' PostgreSQL RDBMS, which contains about 6.2 million entries and informs clients requesting data of the optimized priority. Using 'glib' compression for all of the binary data and applying the three-tier application model for OODB data transfer/retrieval, an optimized OODB read-out rate of 1.7 MB/s and an effective client access speed of 3-25 MB/s have been achieved. As a result, the LABCOM data system has succeeded in combining RDBMS, OODBMS, RAID, and MSS to enable a virtual and always expandable storage volume, simultaneously with rapid data access. (author)

  7. A Database for Decision-Making in Training and Distributed Learning Technology

    National Research Council Canada - National Science Library

    Stouffer, Virginia

    1998-01-01

    .... A framework for incorporating data about distributed learning courseware into the existing training database was devised and a plan for a national electronic courseware redistribution network was recommended...

  8. A Data Analysis Expert System For Large Established Distributed Databases

    Science.gov (United States)

    Gnacek, Anne-Marie; An, Y. Kim; Ryan, J. Patrick

    1987-05-01

    The purpose of this work is to analyze the applicability of artificial intelligence techniques for developing a user-friendly, parallel interface to large isolated, incompatible NASA databases for the purpose of assisting the management decision process. To carry out this work, a survey was conducted to establish the data access requirements of several key NASA user groups. In addition, current NASA database access methods were evaluated. The results of this work are presented in the form of a design for a natural language database interface system, called the Deductively Augmented NASA Management Decision Support System (DANMDS). This design is feasible principally because of recently announced commercial hardware and software product developments which allow cross-vendor compatibility. The goal of the DANMDS system is commensurate with the central dilemma confronting most large companies and institutions in America, the retrieval of information from large, established, incompatible database systems. The DANMDS system implementation would represent a significant first step toward this problem's resolution.

  9. Heterogeneous distribution of prokaryotes and viruses at the microscale in a tidal sediment

    DEFF Research Database (Denmark)

    Carreira, Cátia; Larsen, Morten; Glud, Ronnie

    2013-01-01

    In this study we show for the first time the microscale (mm) 2- and 3-dimensional spatial distribution and abundance of prokaryotes, viruses, and oxygen in a tidal sediment. Prokaryotes and viruses were highly heterogeneously distributed with patches of elevated abundances surrounded by areas of ...

  10. Databases

    Digital Repository Service at National Institute of Oceanography (India)

    Kunte, P.D.

    Information on bibliographic as well as numeric/textual databases relevant to coastal geomorphology has been included in a tabular form. Databases cover a broad spectrum of related subjects like coastal environment and population aspects, coastline...

  11. Heterogeneous and Evolving Distributions of Pluto's Volatile Surface Ices

    Science.gov (United States)

    Grundy, William M.; Olkin, C. B.; Young, L. A.; Buie, M. W.; Young, E. F.

    2013-10-01

    We report observations of Pluto's 0.8 to 2.4 µm reflectance spectrum with IRTF/SpeX on 70 nights over the 13 years from 2001 to 2013. The spectra show numerous vibrational absorption features of simple molecules CH4, CO, and N2 condensed as ices on Pluto's surface. These absorptions are modulated by the planet's 6.39 day rotation period, enabling us to constrain the longitudinal distributions of the three ices. Absorptions of CO and N2 are concentrated on Pluto's anti-Charon hemisphere, unlike absorptions of less volatile CH4 ice that are offset by roughly 90° from the longitude of maximum CO and N2 absorption. In addition to the diurnal/longitudinal variations, the spectra show longer term trends. On decadal timescales, Pluto's stronger CH4 absorption bands have deepened, while the amplitude of their diurnal variation has diminished, consistent with additional CH4 absorption by high northern latitude regions rotating into view as the sub-Earth latitude moves north (as defined by the system's angular momentum vector). Unlike the CH4 absorptions, Pluto's CO and N2 absorptions are declining over time, suggesting more equatorial or southerly distributions of those species. The authors gratefully thank the staff of IRTF for their tremendous assistance over the dozen+ years of this project. The work was funded in part by NSF grants AST-0407214 and AST-0085614 and NASA grants NAG5-4210 and NAG5-12516.

  12. Dynamic tables: an architecture for managing evolving, heterogeneous biomedical data in relational database management systems.

    Science.gov (United States)

    Corwin, John; Silberschatz, Avi; Miller, Perry L; Marenco, Luis

    2007-01-01

    Data sparsity and schema evolution issues affecting clinical informatics and bioinformatics communities have led to the adoption of vertical or object-attribute-value-based database schemas to overcome limitations posed when using conventional relational database technology. This paper explores these issues and discusses why biomedical data are difficult to model using conventional relational techniques. The authors propose a solution to these obstacles based on a relational database engine using a sparse, column-store architecture. The authors provide benchmarks comparing the performance of queries and schema-modification operations using three different strategies: (1) the standard conventional relational design; (2) past approaches used by biomedical informatics researchers; and (3) their sparse, column-store architecture. The performance results show that their architecture is a promising technique for storing and processing many types of data that are not handled well by the other two semantic data models.
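
    As a rough illustration of the trade-off the abstract describes, the sketch below contrasts a conventional wide relational table with an entity-attribute-value (vertical) layout. It uses SQLite purely for demonstration; the table and attribute names are hypothetical, and the sparse column-store engine evaluated by the authors is not reproduced here.

        # Minimal sketch of the entity-attribute-value (EAV) layout discussed above.
        # Table and column names are hypothetical illustration only.
        import sqlite3

        con = sqlite3.connect(":memory:")
        cur = con.cursor()

        # Conventional wide table: one column per attribute, many NULLs for sparse data.
        cur.execute("CREATE TABLE patient_wide (id INTEGER PRIMARY KEY, age INTEGER, hr INTEGER, glucose REAL)")

        # EAV layout: one row per (entity, attribute, value) triple; adding a new
        # attribute becomes a plain INSERT instead of an ALTER TABLE operation.
        cur.execute("CREATE TABLE patient_eav (entity INTEGER, attribute TEXT, value TEXT)")
        cur.executemany("INSERT INTO patient_eav VALUES (?, ?, ?)",
                        [(1, "age", "54"), (1, "glucose", "5.6"), (2, "hr", "72")])

        # Pivot the EAV rows back out for a single attribute.
        for row in cur.execute("SELECT entity, value FROM patient_eav WHERE attribute = 'glucose'"):
            print(row)
        con.close()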

  13. Links in a distributed database: Theory and implementation

    International Nuclear Information System (INIS)

    Karonis, N.T.; Kraimer, M.R.

    1991-12-01

    This document addresses the problem of extending database links across Input/Output Controller (IOC) boundaries. It lays a foundation by reviewing the current system and proposing an implementation specification designed to guide all work in this area. The document also describes an implementation that is less ambitious than our formally stated proposal, one that does not extend the reach of all database links across IOC boundaries. Specifically, it introduces an implementation of input and output links and comments on that overall implementation. We include a set of manual pages describing each of the new functions the implementation provides

  14. A data analysis expert system for large established distributed databases

    Science.gov (United States)

    Gnacek, Anne-Marie; An, Y. Kim; Ryan, J. Patrick

    1987-01-01

    A design for a natural language database interface system, called the Deductively Augmented NASA Management Decision support System (DANMDS), is presented. The DANMDS system components have been chosen on the basis of the following considerations: maximal employment of the existing NASA IBM-PC computers and supporting software; local structuring and storing of external data via the entity-relationship model; a natural easy-to-use error-free database query language; user ability to alter query language vocabulary and data analysis heuristic; and significant artificial intelligence data analysis heuristic techniques that allow the system to become progressively and automatically more useful.

  15. Databases

    Directory of Open Access Journals (Sweden)

    Nick Ryan

    2004-01-01

    Full Text Available Databases are deeply embedded in archaeology, underpinning and supporting many aspects of the subject. However, as well as providing a means for storing, retrieving and modifying data, databases themselves must be a result of a detailed analysis and design process. This article looks at this process, and shows how the characteristics of data models affect the process of database design and implementation. The impact of the Internet on the development of databases is examined, and the article concludes with a discussion of a range of issues associated with the recording and management of archaeological data.

  16. An approach for heterogeneous and loosely coupled geospatial data distributed computing

    Science.gov (United States)

    Chen, Bin; Huang, Fengru; Fang, Yu; Huang, Zhou; Lin, Hui

    2010-07-01

    Most GIS (Geographic Information System) applications tend to have heterogeneous and autonomous geospatial information resources, and the availability of these local resources is unpredictable and dynamic under a distributed computing environment. In order to make use of these local resources together to solve larger geospatial information processing problems that are related to an overall situation, in this paper, with the support of peer-to-peer computing technologies, we propose a geospatial data distributed computing mechanism that involves loosely coupled geospatial resource directories and a term named as Equivalent Distributed Program of global geospatial queries to solve geospatial distributed computing problems under heterogeneous GIS environments. First, a geospatial query process schema for distributed computing as well as a method for equivalent transformation from a global geospatial query to distributed local queries at SQL (Structured Query Language) level to solve the coordinating problem among heterogeneous resources are presented. Second, peer-to-peer technologies are used to maintain a loosely coupled network environment that consists of autonomous geospatial information resources, thus to achieve decentralized and consistent synchronization among global geospatial resource directories, and to carry out distributed transaction management of local queries. Finally, based on the developed prototype system, example applications of simple and complex geospatial data distributed queries are presented to illustrate the procedure of global geospatial information processing.
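
    The following sketch illustrates, under simplified assumptions, the "global query to equivalent local queries" idea described above: the same SQL-level spatial predicate is pushed to each autonomous peer and the partial results are merged. The peers are emulated with in-memory SQLite databases, and the schema and bounding-box predicate are hypothetical.

        # Minimal sketch: a global bounding-box query is rewritten as identical
        # local SQL queries against each peer, and the results are unioned.
        import sqlite3

        def make_peer(rows):
            con = sqlite3.connect(":memory:")
            con.execute("CREATE TABLE roads (name TEXT, xmin REAL, ymin REAL, xmax REAL, ymax REAL)")
            con.executemany("INSERT INTO roads VALUES (?, ?, ?, ?, ?)", rows)
            return con

        def global_bbox_query(peers, bbox):
            xmin, ymin, xmax, ymax = bbox
            local_sql = ("SELECT name FROM roads "
                         "WHERE xmax >= ? AND xmin <= ? AND ymax >= ? AND ymin <= ?")
            results = []
            for peer in peers:              # in a real system this loop runs on remote peers
                results.extend(peer.execute(local_sql, (xmin, xmax, ymin, ymax)).fetchall())
            return results

        peers = [make_peer([("A1", 0, 0, 1, 1)]), make_peer([("B7", 2, 2, 3, 3)])]
        print(global_bbox_query(peers, (0.5, 0.5, 2.5, 2.5)))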

  17. Biomine: predicting links between biological entities using network models of heterogeneous databases

    Directory of Open Access Journals (Sweden)

    Eronen Lauri

    2012-06-01

    Full Text Available Abstract Background Biological databases contain large amounts of data concerning the functions and associations of genes and proteins. Integration of data from several such databases into a single repository can aid the discovery of previously unknown connections spanning multiple types of relationships and databases. Results Biomine is a system that integrates cross-references from several biological databases into a graph model with multiple types of edges, such as protein interactions, gene-disease associations and gene ontology annotations. Edges are weighted based on their type, reliability, and informativeness. We present Biomine and evaluate its performance in link prediction, where the goal is to predict pairs of nodes that will be connected in the future, based on current data. In particular, we formulate protein interaction prediction and disease gene prioritization tasks as instances of link prediction. The predictions are based on a proximity measure computed on the integrated graph. We consider and experiment with several such measures, and perform a parameter optimization procedure where different edge types are weighted to optimize link prediction accuracy. We also propose a novel method for disease-gene prioritization, defined as finding a subset of candidate genes that cluster together in the graph. We experimentally evaluate Biomine by predicting future annotations in the source databases and prioritizing lists of putative disease genes. Conclusions The experimental results show that Biomine has strong potential for predicting links when a set of selected candidate links is available. The predictions obtained using the entire Biomine dataset are shown to clearly outperform ones obtained using any single source of data alone, when different types of links are suitably weighted. In the gene prioritization task, an established reference set of disease-associated genes is useful, but the results show that under favorable
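
    A minimal sketch of one plausible proximity measure of the kind discussed above is shown below: each edge carries a reliability in (0, 1], and the proximity of two nodes is taken as the largest product of reliabilities over any connecting path, computed as a shortest path over negative log reliabilities. The graph, edge weights and node names are hypothetical, and this is not necessarily the exact measure used by Biomine.

        # Sketch of a "best path" proximity on a weighted heterogeneous graph.
        import math
        import networkx as nx

        G = nx.Graph()
        edges = [("geneA", "proteinA", 0.9),        # hypothetical cross-references
                 ("proteinA", "diseaseX", 0.6),
                 ("geneA", "go:0005515", 0.8),
                 ("go:0005515", "diseaseX", 0.5)]
        for u, v, reliability in edges:
            G.add_edge(u, v, cost=-math.log(reliability))

        def proximity(g, a, b):
            # shortest path on -log(reliability) == most reliable path
            return math.exp(-nx.shortest_path_length(g, a, b, weight="cost"))

        print(round(proximity(G, "geneA", "diseaseX"), 3))   # best of 0.9*0.6 vs 0.8*0.5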

  18. Present and future status of distributed database for nuclear materials (Data-Free-Way)

    International Nuclear Information System (INIS)

    Fujita, Mitsutane; Xu, Yibin; Kaji, Yoshiyuki; Tsukada, Takashi

    2004-01-01

    Data-Free-Way (DFW) is a distributed database for nuclear materials. DFW has been developed since 1990 by three organizations: the National Institute for Materials Science (NIMS), the Japan Atomic Energy Research Institute (JAERI) and the Japan Nuclear Cycle Development Institute (JNC). Each organization constructs a materials database in its strongest field, and members of the three organizations can use these databases via the Internet. The construction of DFW, the stored data, an outline of the knowledge data system, the data manufacturing of knowledge notes, and the activities of the three organizations are described. For NIMS, the nuclear reaction database for materials is explained. For JAERI, data analysis using IASCC data in JMPD is included. The main databases of JNC are the 'Experimental database of coexistence of engineering ceramics in liquid sodium at high temperature', the 'Tensile test database of irradiated 304 stainless steel' and the 'Technical information database'. (S.Y.)

  19. Effects of Fiber Type and Size on the Heterogeneity of Oxygen Distribution in Exercising Skeletal Muscle

    Science.gov (United States)

    Liu, Gang; Mac Gabhann, Feilim; Popel, Aleksander S.

    2012-01-01

    The process of oxygen delivery from capillary to muscle fiber is essential for a tissue with variable oxygen demand, such as skeletal muscle. Oxygen distribution in exercising skeletal muscle is regulated by convective oxygen transport in the blood vessels, oxygen diffusion and consumption in the tissue. Spatial heterogeneities in oxygen supply, such as microvascular architecture and hemodynamic variables, had been observed experimentally and their marked effects on oxygen exchange had been confirmed using mathematical models. In this study, we investigate the effects of heterogeneities in oxygen demand on tissue oxygenation distribution using a multiscale oxygen transport model. Muscles are composed of different ratios of the various fiber types. Each fiber type has characteristic values of several parameters, including fiber size, oxygen consumption, myoglobin concentration, and oxygen diffusivity. Using experimentally measured parameters for different fiber types and applying them to the rat extensor digitorum longus muscle, we evaluated the effects of heterogeneous fiber size and fiber type properties on the oxygen distribution profile. Our simulation results suggest a marked increase in spatial heterogeneity of oxygen due to fiber size distribution in a mixed muscle. Our simulations also suggest that the combined effects of fiber type properties, except size, do not contribute significantly to the tissue oxygen spatial heterogeneity. However, the incorporation of the difference in oxygen consumption rates of different fiber types alone causes higher oxygen heterogeneity compared to control cases with uniform fiber properties. In contrast, incorporating variation in other fiber type-specific properties, such as myoglobin concentration, causes little change in spatial tissue oxygenation profiles. PMID:23028531

  20. A MODEL OF HETEROGENEOUS DISTRIBUTED SYSTEM FOR FOREIGN EXCHANGE PORTFOLIO ANALYSIS

    Directory of Open Access Journals (Sweden)

    Dragutin Kermek

    2006-06-01

    Full Text Available The paper investigates the design of a heterogeneous distributed system for foreign exchange portfolio analysis. The proposed model includes a few separated and dislocated parts connected through distributed mechanisms. Making the system distributed brings new perspectives to performance boosting, where a software-based load balancer plays a very important role. The desired system should spread over multiple, heterogeneous platforms in order to fulfil the open platform goal. Building such a model incorporates different patterns, from GOF design patterns, business patterns, J2EE patterns, integration patterns, enterprise patterns and distributed design patterns to Web services patterns. The authors try to find as many appropriate patterns as possible for the planned tasks in order to capture best modelling and programming practices.

  1. Heterogeneity phantoms for visualization of 3D dose distributions by MRI-based polymer gel dosimetry

    International Nuclear Information System (INIS)

    Watanabe, Yoichi; Mooij, Rob; Mark Perera, G.; Maryanski, Marek J.

    2004-01-01

    Heterogeneity corrections in dose calculations are necessary for radiation therapy treatment plans. Dosimetric measurements of the heterogeneity effects are hampered if the detectors are large and their radiological characteristics are not equivalent to water. Gel dosimetry can solve these problems. Furthermore, it provides three-dimensional (3D) dose distributions. We used a cylindrical phantom filled with BANG-3 registered polymer gel to measure 3D dose distributions in heterogeneous media. The phantom has a cavity, in which water-equivalent or bone-like solid blocks can be inserted. The irradiated phantom was scanned with a magnetic resonance imaging (MRI) scanner. Dose distributions were obtained by calibrating the polymer gel for a relationship between the absorbed dose and the spin-spin relaxation rate of the magnetic resonance (MR) signal. To study dose distributions we had to analyze MR imaging artifacts. This was done in three ways: comparison of a measured dose distribution in a simulated homogeneous phantom with a reference dose distribution, comparison of a sagittally scanned image with a sagittal image reconstructed from axially scanned data, and coregistration of MR and computed-tomography images. We found that the MRI artifacts cause a geometrical distortion of less than 2 mm and less than 10% change in the dose around solid inserts. With these limitations in mind we could make some qualitative measurements. In particular, we observed clear differences between the measured dose distributions around an air-gap and around bone-like material for a 6 MV photon beam. In conclusion, gel dosimetry has the potential to qualitatively characterize the dose distributions near heterogeneities in 3D.

  2. Pleurochrysome: A Web Database of Pleurochrysis Transcripts and Orthologs Among Heterogeneous Algae

    Science.gov (United States)

    Fujiwara, Shoko; Takatsuka, Yukiko; Hirokawa, Yasutaka; Tsuzuki, Mikio; Takano, Tomoyuki; Kobayashi, Masaaki; Suda, Kunihiro; Asamizu, Erika; Yokoyama, Koji; Shibata, Daisuke; Tabata, Satoshi; Yano, Kentaro

    2016-01-01

    Pleurochrysis is a coccolithophorid genus, which belongs to the Coccolithales in the Haptophyta. The genus has been used extensively for biological research, together with Emiliania in the Isochrysidales, to understand distinctive features between the two coccolithophorid-including orders. However, molecular biological research on Pleurochrysis such as elucidation of the molecular mechanism behind coccolith formation has not made great progress at least in part because of lack of comprehensive gene information. To provide such information to the research community, we built an open web database, the Pleurochrysome (http://bioinf.mind.meiji.ac.jp/phapt/), which currently stores 9,023 unique gene sequences (designated as UNIGENEs) assembled from expressed sequence tag sequences of P. haptonemofera as core information. The UNIGENEs were annotated with gene sequences sharing significant homology, conserved domains, Gene Ontology, KEGG Orthology, predicted subcellular localization, open reading frames and orthologous relationship with genes of 10 other algal species, a cyanobacterium and the yeast Saccharomyces cerevisiae. This sequence and annotation information can be easily accessed via several search functions. Besides fundamental functions such as BLAST and keyword searches, this database also offers search functions to explore orthologous genes in the 12 organisms and to seek novel genes. The Pleurochrysome will promote molecular biological and phylogenetic research on coccolithophorids and other haptophytes by helping scientists mine data from the primary transcriptome of P. haptonemofera. PMID:26746174

  3. Monte Carlo Estimation of Absorbed Dose Distributions Obtained from Heterogeneous 106Ru Eye Plaques.

    Science.gov (United States)

    Zaragoza, Francisco J; Eichmann, Marion; Flühs, Dirk; Sauerwein, Wolfgang; Brualla, Lorenzo

    2017-09-01

    The distribution of the emitter substance in 106 Ru eye plaques is usually assumed to be homogeneous for treatment planning purposes. However, this distribution is never homogeneous, and it widely differs from plaque to plaque due to manufacturing factors. By Monte Carlo simulation of radiation transport, we study the absorbed dose distribution obtained from the specific CCA1364 and CCB1256 106 Ru plaques, whose actual emitter distributions were measured. The idealized, homogeneous CCA and CCB plaques are also simulated. The largest discrepancy in depth dose distribution observed between the heterogeneous and the homogeneous plaques was 7.9 and 23.7% for the CCA and CCB plaques, respectively. In terms of isodose lines, the line referring to 100% of the reference dose penetrates 0.2 and 1.8 mm deeper in the case of heterogeneous CCA and CCB plaques, respectively, with respect to the homogeneous counterpart. The observed differences in absorbed dose distributions obtained from heterogeneous and homogeneous plaques are clinically irrelevant if the plaques are used with a lateral safety margin of at least 2 mm. However, these differences may be relevant if the plaques are used in eccentric positioning.

  4. Heterogeneous Data Fusion Method to Estimate Travel Time Distributions in Congested Road Networks

    Directory of Open Access Journals (Sweden)

    Chaoyang Shi

    2017-12-01

    Full Text Available Travel times in congested urban road networks are highly stochastic. Provision of travel time distribution information, including both mean and variance, can be very useful for travelers to make reliable path choice decisions to ensure higher probability of on-time arrival. To this end, a heterogeneous data fusion method is proposed to estimate travel time distributions by fusing heterogeneous data from point and interval detectors. In the proposed method, link travel time distributions are first estimated from point detector observations. The travel time distributions of links without point detectors are imputed based on their spatial correlations with links that have point detectors. The estimated link travel time distributions are then fused with path travel time distributions obtained from the interval detectors using Dempster-Shafer evidence theory. Based on fused path travel time distribution, an optimization technique is further introduced to update link travel time distributions and their spatial correlations. A case study was performed using real-world data from Hong Kong and showed that the proposed method obtained accurate and robust estimations of link and path travel time distributions in congested road networks.
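
    The fusion step named above relies on Dempster's rule of combination. The sketch below shows that rule applied to two hypothetical mass functions over discrete travel-time bins, standing in for evidence from point and interval detectors; the bin definitions and mass values are invented for illustration, and the paper's full estimation pipeline is not reproduced.

        # Minimal sketch of Dempster's rule of combination over discrete hypotheses.
        def dempster_combine(m1, m2):
            """Combine two mass functions defined over frozenset hypotheses."""
            combined, conflict = {}, 0.0
            for a, ma in m1.items():
                for b, mb in m2.items():
                    inter = a & b
                    if inter:
                        combined[inter] = combined.get(inter, 0.0) + ma * mb
                    else:
                        conflict += ma * mb          # mass assigned to contradictory evidence
            if conflict >= 1.0:
                raise ValueError("total conflict, evidence cannot be combined")
            return {h: v / (1.0 - conflict) for h, v in combined.items()}

        SHORT, MED, LONG = frozenset({"short"}), frozenset({"medium"}), frozenset({"long"})
        m_point = {SHORT: 0.5, MED: 0.3, SHORT | MED | LONG: 0.2}   # e.g. from point detectors
        m_interval = {MED: 0.6, SHORT | MED: 0.4}                   # e.g. from interval detectors
        print(dempster_combine(m_point, m_interval))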

  5. Heterogeneous Data Fusion Method to Estimate Travel Time Distributions in Congested Road Networks.

    Science.gov (United States)

    Shi, Chaoyang; Chen, Bi Yu; Lam, William H K; Li, Qingquan

    2017-12-06

    Travel times in congested urban road networks are highly stochastic. Provision of travel time distribution information, including both mean and variance, can be very useful for travelers to make reliable path choice decisions to ensure higher probability of on-time arrival. To this end, a heterogeneous data fusion method is proposed to estimate travel time distributions by fusing heterogeneous data from point and interval detectors. In the proposed method, link travel time distributions are first estimated from point detector observations. The travel time distributions of links without point detectors are imputed based on their spatial correlations with links that have point detectors. The estimated link travel time distributions are then fused with path travel time distributions obtained from the interval detectors using Dempster-Shafer evidence theory. Based on fused path travel time distribution, an optimization technique is further introduced to update link travel time distributions and their spatial correlations. A case study was performed using real-world data from Hong Kong and showed that the proposed method obtained accurate and robust estimations of link and path travel time distributions in congested road networks.

  6. Managing Consistency Anomalies in Distributed Integrated Databases with Relaxed ACID Properties

    DEFF Research Database (Denmark)

    Frank, Lars; Ulslev Pedersen, Rasmus

    2014-01-01

    In central databases the consistency of data is normally implemented by using the ACID (Atomicity, Consistency, Isolation and Durability) properties of a DBMS (Data Base Management System). This is not possible if distributed and/or mobile databases are involved and the availability of data also has to be optimized. Therefore, we will in this paper use so-called relaxed ACID properties across different locations. The objective of designing relaxed ACID properties across different database locations is that the users can trust the data they use even if the distributed database is temporarily inconsistent. It is also important that disconnected locations can operate in a meaningful way in so-called disconnected mode. A database is DBMS consistent if its data complies with the consistency rules of the DBMS's metadata. If the database is DBMS consistent both when a transaction starts and when it has...

  7. Effect of heterogeneous microvasculature distribution on drug delivery to solid tumour

    International Nuclear Information System (INIS)

    Zhan, Wenbo; Xu, Xiao Yun; Gedroyc, Wladyslaw

    2014-01-01

    Most of the computational models of drug transport in vascular tumours assume a uniform distribution of blood vessels through which anti-cancer drugs are delivered. However, it is well known that solid tumours are characterized by dilated microvasculature with non-uniform diameters and irregular branching patterns. In this study, the effect of heterogeneous vasculature on drug transport and uptake is investigated by means of mathematical modelling of the key physical and biochemical processes in drug delivery. An anatomically realistic tumour model accounting for heterogeneous distribution of blood vessels is reconstructed based on magnetic resonance images of a liver tumour. Numerical simulations are performed for different drug delivery modes, including direct continuous infusion and thermosensitive liposome-mediated delivery, and the anti-cancer effectiveness is evaluated through changes in tumour cell density based on predicted intracellular concentrations. Comparisons are made between regions of different vascular density, and between the two drug delivery modes. Our numerical results show that both extra- and intra-cellular concentrations in the liver tumour are non-uniform owing to the heterogeneous distribution of tumour vasculature. Drugs accumulate faster in well-vascularized regions, where they are also cleared out more quickly, resulting in less effective tumour cell killing in these regions. Compared with direct continuous infusion, the influence of heterogeneous vasculature on anti-cancer effectiveness is more pronounced for thermosensitive liposome-mediated delivery. (paper)

  8. The response time distribution in a real-time database with optimistic concurrency control

    NARCIS (Netherlands)

    Sassen, S.A.E.; Wal, van der J.

    1996-01-01

    For a real-time shared-memory database with optimistic concurrency control, an approximation for the transaction response time distribution is obtained. The model assumes that transactions arrive at the database according to a Poisson process, that every transaction uses an equal number of

  9. P-wave scattering and the distribution of heterogeneity around Etna volcano

    Directory of Open Access Journals (Sweden)

    Toni Zieger

    2016-09-01

    Full Text Available Volcanoes and fault zones are areas of increased heterogeneity in the Earth's crust that leads to strong scattering of seismic waves. To understand the volcanic structure and the role of attenuation and scattering processes it is important to investigate the distribution of heterogeneity. We used the signals of air-gun shots to investigate the distribution of heterogeneity around Mount Etna. We devise a new methodology based on the coda energy ratio, which we define as the ratio between the energy of the direct P-wave and the energy in a later coda window. This is based on the basic assumption that scattering caused by heterogeneity removes energy from the direct P-waves. We show that measurements of the energy ratio are stable with respect to changes in the details of the time window definitions. As an independent proxy of the scattering strength along the ray path we measure the peak delay time of the direct P-wave. The peak delay time is well correlated with the coda energy ratio. We project the observations in the directions of the incident rays at the stations. Most notable is an area with increased wave scattering in the volcano and east of it. The strong heterogeneity found supports earlier observations and confirms the possibility of using P-wave sources for the determination of scattering properties. We interpret the extension of the highly heterogeneous zone towards the east as a potential signature of inelastic deformation processes induced by the eastward sliding of the flank of the volcano.

  10. Calculation of braking radiation dose fields in heterogeneous media by a method of the transformation of axial distribution

    International Nuclear Information System (INIS)

    Mil'shtejn, R.S.

    1988-01-01

    Analysis of dose fields in a heterogeneous tissue-equivalent medium has shown that dose distributions have radial symmetry and can be described by a curve of axial distribution with renormalization of the maximum ionization depth. A method for calculating a dose field in a heterogeneous medium using the principle of radial symmetry is presented.

  11. Carotenoids Database: structures, chemical fingerprints and distribution among organisms.

    Science.gov (United States)

    Yabuzaki, Junko

    2017-01-01

    To promote understanding of how organisms are related via carotenoids, either evolutionarily or symbiotically, or in food chains through natural histories, we built the Carotenoids Database. This provides chemical information on 1117 natural carotenoids with 683 source organisms. For extracting organisms closely related through the biosynthesis of carotenoids, we offer a new similarity search system 'Search similar carotenoids' using our original chemical fingerprint 'Carotenoid DB Chemical Fingerprints'. These Carotenoid DB Chemical Fingerprints describe the chemical substructure and the modification details based upon International Union of Pure and Applied Chemistry (IUPAC) semi-systematic names of the carotenoids. The fingerprints also allow (i) easier prediction of six biological functions of carotenoids: provitamin A, membrane stabilizers, odorous substances, allelochemicals, antiproliferative activity and reverse MDR activity against cancer cells, (ii) easier classification of carotenoid structures, (iii) partial and exact structure searching and (iv) easier extraction of structural isomers and stereoisomers. We believe this to be the first attempt to establish fingerprints using the IUPAC semi-systematic names. For extracting close profiled organisms, we provide a new tool 'Search similar profiled organisms'. Our current statistics show some insights into natural history: carotenoids seem to have been spread largely by bacteria, as they produce C30, C40, C45 and C50 carotenoids, with the widest range of end groups, and they share a small portion of C40 carotenoids with eukaryotes. Archaea share an even smaller portion with eukaryotes. Eukaryotes then have evolved a considerable variety of C40 carotenoids. Considering carotenoids, eukaryotes seem more closely related to bacteria than to archaea aside from 16S rRNA lineage analysis. Database URL: http://carotenoiddb.jp. © The Author(s) 2017. Published by Oxford University Press.
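
    A similarity search over such fingerprints can be sketched with the Tanimoto coefficient on feature sets, as below. The feature strings are hypothetical stand-ins and are not the actual Carotenoid DB Chemical Fingerprints; the snippet only illustrates how a 'Search similar carotenoids' style ranking could work.

        # Sketch of fingerprint similarity ranking with the Tanimoto coefficient.
        def tanimoto(fp_a: set, fp_b: set) -> float:
            if not fp_a and not fp_b:
                return 1.0
            return len(fp_a & fp_b) / len(fp_a | fp_b)

        # Hypothetical substructure/modification flags, not real DB fingerprints.
        fingerprints = {
            "beta-carotene": {"C40", "beta-end-group:2", "conjugated-double-bonds:11"},
            "zeaxanthin":    {"C40", "beta-end-group:2", "hydroxyl:2", "conjugated-double-bonds:11"},
            "lycopene":      {"C40", "acyclic", "conjugated-double-bonds:11"},
        }

        query = fingerprints["beta-carotene"]
        ranked = sorted(((tanimoto(query, fp), name) for name, fp in fingerprints.items()), reverse=True)
        for score, name in ranked:
            print(f"{name:15s} {score:.2f}")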

  12. Heterogeneity of D-Serine Distribution in the Human Central Nervous System

    Science.gov (United States)

    Suzuki, Masataka; Imanishi, Nobuaki; Mita, Masashi; Hamase, Kenji; Aiso, Sadakazu

    2017-01-01

    D-serine is an endogenous ligand for N-methyl-D-aspartate glutamate receptors. Accumulating evidence including genetic associations of D-serine metabolism with neurological or psychiatric diseases suggest that D-serine is crucial in human neurophysiology. However, distribution and regulation of D-serine in humans are not well understood. Here, we found that D-serine is heterogeneously distributed in the human central nervous system (CNS). The cerebrum contains the highest level of D-serine among the areas in the CNS. There is heterogeneity in its distribution in the cerebrum and even within the cerebral neocortex. The neocortical heterogeneity is associated with Brodmann or functional areas but is unrelated to basic patterns of cortical layer structure or regional expressional variation of metabolic enzymes for D-serine. Such D-serine distribution may reflect functional diversity of glutamatergic neurons in the human CNS, which may serve as a basis for clinical and pharmacological studies on D-serine modulation. PMID:28604057

  13. Measurement of heterogeneous distribution on technegas SPECT images by three-dimensional fractal analysis

    International Nuclear Information System (INIS)

    Nagao, Michinobu; Murase, Kenya

    2002-01-01

    This review article describes a method for quantifying heterogeneous distribution on Technegas ( 99m Tc-carbon particle radioaerosol) SPECT images by three-dimensional fractal analysis (3D-FA). Technegas SPECT was performed to quantify the severity of pulmonary emphysema. We delineated the SPECT images by using five cut-offs (15, 20, 25, 30 and 35% of the maximal voxel radioactivity), and measured the total number of voxels in the areas surrounded by the contours obtained with each cut-off level. We calculated fractal dimensions from the relationship between the total number of voxels and the cut-off levels transformed into natural logarithms. The fractal dimension derived from 3D-FA is a relative and objective measurement which can assess the heterogeneous distribution on Technegas SPECT images. The fractal dimension correlates strongly with pulmonary function in patients with emphysema and documents well the overall and regional severity of emphysema. (author)
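
    Numerically, the fractal dimension described above can be obtained as the magnitude of the slope of a regression of voxel counts against cut-off levels in natural-log space. The sketch below shows this calculation with invented voxel counts; it is an illustration of the procedure, not the authors' implementation.

        # Sketch of the log-log slope estimate of the fractal dimension.
        import numpy as np

        cutoffs = np.array([15, 20, 25, 30, 35], dtype=float)        # % of max voxel activity
        voxel_counts = np.array([52000, 31000, 18500, 11000, 6500])  # hypothetical counts

        slope, intercept = np.polyfit(np.log(cutoffs), np.log(voxel_counts), 1)
        fractal_dimension = abs(slope)
        print(f"estimated fractal dimension: {fractal_dimension:.2f}")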

  14. Breast dose in mammography is about 30% lower when realistic heterogeneous glandular distributions are considered

    Energy Technology Data Exchange (ETDEWEB)

    Hernandez, Andrew M., E-mail: amhern@ucdavis.edu [Biomedical Engineering Graduate Group, University of California Davis, Sacramento, California 95817 (United States); Seibert, J. Anthony; Boone, John M. [Departments of Radiology and Biomedical Engineering, Biomedical Engineering Graduate Group, University of California Davis, Sacramento, California 95817 (United States)

    2015-11-15

    Purpose: Current dosimetry methods in mammography assume that the breast is comprised of a homogeneous mixture of glandular and adipose tissues. Three-dimensional (3D) dedicated breast CT (bCT) data sets were used previously to assess the complex anatomical structure within the breast, characterizing the statistical distribution of glandular tissue in the breast. The purpose of this work was to investigate the effect of bCT-derived heterogeneous glandular distributions on dosimetry in mammography. Methods: bCT-derived breast diameters, volumes, and 3D fibroglandular distributions were used to design realistic compressed breast models comprised of heterogeneous distributions of glandular tissue. The bCT-derived glandular distributions were fit to biGaussian functions and used as probability density maps to assign the density distributions within compressed breast models. The MCNPX 2.6.0 Monte Carlo code was used to estimate monoenergetic normalized mean glandular dose “DgN(E)” values in mammography geometry. The DgN(E) values were then weighted by typical mammography x-ray spectra to determine polyenergetic DgN (pDgN) coefficients for heterogeneous (pDgN_hetero) and homogeneous (pDgN_homo) cases. The dependence of estimated pDgN values on phantom size, volumetric glandular fraction (VGF), x-ray technique factors, and location of the heterogeneous glandular distributions was investigated. Results: The pDgN_hetero coefficients were on average 35.3% (SD, 4.1) and 24.2% (SD, 3.0) lower than the pDgN_homo coefficients for the Mo–Mo and W–Rh x-ray spectra, respectively, across all phantom sizes and VGFs when the glandular distributions were centered within the breast phantom in the coronal plane. At constant breast size, increasing VGF from 7.3% to 19.1% led to a reduction in pDgN_hetero relative to pDgN_homo of 23.6%–27.4% for a W–Rh spectrum. Displacement of the glandular distribution, at a distance equal to 10% of the

  15. Data Mining on Distributed Medical Databases: Recent Trends and Future Directions

    Science.gov (United States)

    Atilgan, Yasemin; Dogan, Firat

    As computerization in healthcare services increases, the amount of available digital data is growing at an unprecedented rate and, as a result, healthcare organizations are much more able to store data than to extract knowledge from it. Today the major challenge is to transform these data into useful information and knowledge. It is important for healthcare organizations to use stored data to improve quality while reducing cost. This paper first investigates data mining applications on centralized medical databases and how they are used for diagnostics and population health, and then introduces distributed databases. The integration needs and issues of distributed medical databases are described. Finally, the paper focuses on data mining studies on distributed medical databases.

  16. Distributed Circumnavigation Control with Dynamic Spacings for a Heterogeneous Multi-robot System

    OpenAIRE

    Yao, Weijia; Luo, Sha; Lu, Huimin; Xiao, Junhao

    2018-01-01

    Circumnavigation control is useful in real-world applications such as entrapping a hostile target. In this paper, we consider a heterogeneous multi-robot system where robots have different physical properties, such as maximum movement speeds. Instead of equal-spacings, dynamic spacings according to robots' properties, which are termed utilities in this paper, will be more desirable in a scenario such as target entrapment. A distributed circumnavigation control algorithm based on utilities is ...

  17. Heterogeneous distribution of water in the mantle transition zone beneath United States inferred from seismic observations

    Science.gov (United States)

    Wang, Y.; Pavlis, G. L.; Li, M.

    2017-12-01

    The amount of water in the Earth's deep mantle is critical for the evolution of the solid Earth and the atmosphere. Mineral physics studies have revealed that wadsleyite and ringwoodite in the mantle transition zone could store several times the volume of water in the ocean. However, the water content and its distribution in the transition zone remain enigmatic due to a lack of direct observations. Here we use seismic data from the full deployment of the Earthscope Transportable Array to produce a 3D image of P-to-S scattering of the mantle transition zone beneath the United States. We compute the image volume from 141,080 pairs of high quality receiver functions defined by the Earthscope Automated Receiver Survey, reprocessed by the generalized iterative deconvolution method and imaged by the plane wave migration method. We find that the transition zone is filled with previously unrecognized small-scale heterogeneities that produce pervasive, negative polarity P-to-S conversions. Seismic synthetic modeling using a point source simulation method suggests two possible structures for these objects: 1) a set of randomly distributed blobs of slightly different sizes, and 2) near-vertical diapir structures from small-scale convection. Combined with geodynamic simulations, we interpret the observations as compositional heterogeneity from small-scale, low-velocity bodies that are water enriched. Our results indicate there is a heterogeneous distribution of water through the entire mantle transition zone beneath the contiguous United States.

  18. A Hierarchical and Distributed Approach for Mapping Large Applications to Heterogeneous Grids using Genetic Algorithms

    Science.gov (United States)

    Sanyal, Soumya; Jain, Amit; Das, Sajal K.; Biswas, Rupak

    2003-01-01

    In this paper, we propose a distributed approach for mapping a single large application to a heterogeneous grid environment. To minimize the execution time of the parallel application, we distribute the mapping overhead to the available nodes of the grid. This approach not only provides a fast mapping of tasks to resources but is also scalable. We adopt a hierarchical grid model and accomplish the job of mapping tasks to this topology using a scheduler tree. Results show that our three-phase algorithm provides high quality mappings, and is fast and scalable.

  19. CT Identification and Fractal Characterization of 3-D Propagation and Distribution of Hydrofracturing Cracks in Low-Permeability Heterogeneous Rocks

    Science.gov (United States)

    Liu, Peng; Ju, Yang; Gao, Feng; Ranjith, Pathegama G.; Zhang, Qianbing

    2018-03-01

    Understanding and characterization of the three-dimensional (3-D) propagation and distribution of hydrofracturing cracks in heterogeneous rock are key for enhancing the stimulation of low-permeability petroleum reservoirs. In this study, we investigated the propagation and distribution characteristics of hydrofracturing cracks, by conducting true triaxial hydrofracturing tests and computed tomography on artificial heterogeneous rock specimens. Silica sand, Portland cement, and aedelforsite were mixed to create artificial heterogeneous rock specimens using the data of mineral compositions, coarse gravel distribution, and mechanical properties that were measured from the natural heterogeneous glutenite cores. To probe the effects of material heterogeneity on hydrofracturing cracks, the artificial homogenous specimens were created using the identical matrix compositions of the heterogeneous rock specimens and then fractured for comparison. The effects of horizontal geostress ratio on the 3-D growth and distribution of cracks during hydrofracturing were examined. A fractal-based method was proposed to characterize the complexity of fractures and the efficiency of hydrofracturing stimulation of heterogeneous media. The material heterogeneity and horizontal geostress ratio were found to significantly influence the 3-D morphology, growth, and distribution of hydrofracturing cracks. A horizontal geostress ratio of 1.7 appears to be the upper limit for the occurrence of multiple cracks, and higher ratios cause a single crack perpendicular to the minimum horizontal geostress component. The fracturing efficiency is associated with not only the fractured volume but also the complexity of the crack network.

  20. Inference of R0 and Transmission Heterogeneity from the Size Distribution of Stuttering Chains

    Science.gov (United States)

    Blumberg, Seth; Lloyd-Smith, James O.

    2013-01-01

    For many infectious disease processes such as emerging zoonoses and vaccine-preventable diseases, R0 < 1 and infections occur as self-limited stuttering transmission chains. A mechanistic understanding of transmission is essential for characterizing the risk of emerging diseases and monitoring spatio-temporal dynamics. Thus methods for inferring R0 and the degree of heterogeneity in transmission from stuttering chain data have important applications in disease surveillance and management. Previous researchers have used chain size distributions to infer R0, but estimation of the degree of individual-level variation in infectiousness (as quantified by the dispersion parameter, k) has typically required contact tracing data. Utilizing branching process theory along with a negative binomial offspring distribution, we demonstrate how maximum likelihood estimation can be applied to chain size data to infer both R0 and the dispersion parameter that characterizes heterogeneity. While the maximum likelihood value for R0 is a simple function of the average chain size, the associated confidence intervals are dependent on the inferred degree of transmission heterogeneity. As demonstrated for monkeypox data from the Democratic Republic of Congo, this impacts when a statistically significant change in R0 is detectable. In addition, by allowing for superspreading events, inference of k shifts the threshold above which a transmission chain should be considered anomalously large for a given value of R0 (thus reducing the probability of false alarms about pathogen adaptation). Our analysis of monkeypox also clarifies the various ways that imperfect observation can impact inference of transmission parameters, and highlights the need to quantitatively evaluate whether observation is likely to significantly bias results. PMID:23658504
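
    The "simple function of the average chain size" mentioned above follows from the fact that, for a subcritical branching process, the expected chain size is 1/(1 - R0), so the point estimate from observed chain sizes is R0 = 1 - 1/mean(size). The sketch below computes only this point estimate with invented chain sizes; estimating the dispersion parameter k requires the full negative binomial likelihood and is not attempted here.

        # Sketch of the point estimate of R0 from stuttering chain sizes.
        def estimate_r0(chain_sizes):
            mean_size = sum(chain_sizes) / len(chain_sizes)
            return 1.0 - 1.0 / mean_size        # mean chain size = 1 / (1 - R0)

        observed_chains = [1, 1, 2, 1, 4, 1, 1, 3, 1, 2]   # hypothetical outbreak sizes
        print(f"R0 estimate: {estimate_r0(observed_chains):.2f}")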

  1. A Universal Isotherm Model to Capture Adsorption Uptake and Energy Distribution of Porous Heterogeneous Surface

    KAUST Repository

    Ng, Kim Choon; Burhan, Muhammad; Shahzad, Muhammad Wakil; Ismail, Azahar Bin

    2017-01-01

    The adsorbate-adsorbent thermodynamics are complex, as they are influenced by the pore size distribution, surface heterogeneity and site energy distribution, as well as the adsorbate properties. Together, these parameters define the adsorbate uptake forming the state diagrams, known as the adsorption isotherms, when the sorption site energies on the pore surfaces are favorable. The available adsorption models for describing the vapor uptake or isotherms have, hitherto, been individually defined to correlate to a certain type of isotherm pattern. There is as yet no universal approach to developing these isotherm models. In this paper, we demonstrate that the characteristics of all sorption isotherm types can be succinctly unified by a revised Langmuir model when merged with the concept of Homotattic Patch Approximation (HPA) and the availability of multiple sets of site energy accompanied by their respective fractional probability factors. The total uptake (q/q*) at assorted pressure ratios (P/Ps) is inextricably traced to the manner in which the site energies are spread, either naturally or engineered by scientists, over and across the heterogeneous surfaces. An insight into the porous heterogeneous surface characteristics, in terms of adsorption site availability, has been presented, describing the unique behavior of each isotherm type.
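
    The patch-wise idea described above can be sketched numerically as a probability-weighted sum of Langmuir terms, one per site-energy group, evaluated at a given pressure ratio. The fractions and affinity constants below are invented for illustration and are not fitted values from the paper.

        # Sketch of a patch-weighted Langmuir uptake q/q* at a pressure ratio P/Ps.
        def uptake(pressure_ratio, fractions, affinities):
            assert abs(sum(fractions) - 1.0) < 1e-9, "fractional probabilities must sum to 1"
            return sum(f * (k * pressure_ratio) / (1.0 + k * pressure_ratio)
                       for f, k in zip(fractions, affinities))

        fractions  = [0.6, 0.3, 0.1]      # probability of each site-energy patch (invented)
        affinities = [50.0, 5.0, 0.5]     # dimensionless affinity of each patch (invented)

        for p in (0.01, 0.1, 0.5, 0.9):
            print(f"P/Ps = {p:4.2f}  q/q* = {uptake(p, fractions, affinities):.3f}")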

  2. A Universal Isotherm Model to Capture Adsorption Uptake and Energy Distribution of Porous Heterogeneous Surface

    KAUST Repository

    Ng, Kim Choon

    2017-08-31

    The adsorbate-adsorbent thermodynamics are complex, as they are influenced by the pore size distribution, surface heterogeneity and site energy distribution, as well as the adsorbate properties. Together, these parameters define the adsorbate uptake forming the state diagrams, known as the adsorption isotherms, when the sorption site energies on the pore surfaces are favorable. The available adsorption models for describing the vapor uptake or isotherms have, hitherto, been individually defined to correlate to a certain type of isotherm pattern. There is as yet no universal approach to developing these isotherm models. In this paper, we demonstrate that the characteristics of all sorption isotherm types can be succinctly unified by a revised Langmuir model when merged with the concept of Homotattic Patch Approximation (HPA) and the availability of multiple sets of site energy accompanied by their respective fractional probability factors. The total uptake (q/q*) at assorted pressure ratios (P/Ps) is inextricably traced to the manner in which the site energies are spread, either naturally or engineered by scientists, over and across the heterogeneous surfaces. An insight into the porous heterogeneous surface characteristics, in terms of adsorption site availability, has been presented, describing the unique behavior of each isotherm type.

  3. Experience using a distributed object oriented database for a DAQ system

    International Nuclear Information System (INIS)

    Bee, C.P.; Eshghi, S.; Jones, R.

    1996-01-01

    To configure the RD13 data acquisition system, we need many parameters which describe the various hardware and software components. Such information has been defined using an entity-relation model and stored in a commercial memory-resident database. During the last year, Itasca, an object-oriented database management system (OODB), was chosen as a replacement database system. We have ported the existing databases (hardware and software configurations, run parameters, etc.) to Itasca and integrated it with the run control system. We believe that it is possible to use an OODB in real-time environments such as DAQ systems. In this paper, we present our experience and impressions: why we wanted to change from an entity-relational approach, some useful features of Itasca, the issues we met during this project, including integration of the database into an existing distributed environment, and factors which influence performance. (author)

  4. Investigation on Oracle GoldenGate Veridata for Data Consistency in WLCG Distributed Database Environment

    OpenAIRE

    Asko, Anti; Lobato Pardavila, Lorena

    2014-01-01

    Abstract In the distributed database environment, data divergence can be an important problem: if it is not discovered and correctly identified, incorrect data can lead to poor decision making, errors in the service and operational errors. Oracle GoldenGate Veridata is a product to compare two sets of data and to identify and report on data that is out of synchronization. IT DB is providing a replication service between databases at CERN and other computer centers worldwide as a par...

  5. DCODE: A Distributed Column-Oriented Database Engine for Big Data Analytics

    OpenAIRE

    Liu, Yanchen; Cao, Fang; Mortazavi, Masood; Chen, Mengmeng; Yan, Ning; Ku, Chi; Adnaik, Aniket; Morgan, Stephen; Shi, Guangyu; Wang, Yuhu; Fang, Fan

    2015-01-01

    We propose a novel Distributed Column-Oriented Database Engine (DCODE) for efficient analytic query processing that combines advantages of both column storage and parallel processing. In DCODE, we enhance an existing open-source columnar database engine by adding the capability for handling queries over a cluster. Specifically, we studied parallel query execution and optimization techniques such as horizontal partitioning, exchange op...

  6. A Secure Scheme for Distributed Consensus Estimation against Data Falsification in Heterogeneous Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Shichao Mi

    2016-02-01

    Full Text Available Heterogeneous wireless sensor networks (HWSNs) can achieve more tasks and prolong the network lifetime. However, they are vulnerable to attacks from the environment or from malicious nodes. This paper is concerned with the issues of a secure consensus scheme in HWSNs consisting of two types of sensor nodes. Sensor nodes (SNs) have more computation power, while relay nodes (RNs) with low power can only transmit information for sensor nodes. To address the security issues of distributed estimation in HWSNs, we exploit the heterogeneity of responsibilities between the two types of sensors and then propose a parameter adjusted-based consensus scheme (PACS) to mitigate the effect of the malicious node. Finally, the convergence property is proven to be guaranteed, and the simulation results validate the effectiveness and efficiency of PACS.
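
    Schemes of this kind build on the basic average-consensus iteration, in which each node repeatedly moves its estimate toward those of its neighbours. The sketch below shows only that baseline iteration on a hypothetical four-node topology; the PACS parameter adjustment that mitigates malicious nodes is not reproduced.

        # Sketch of synchronous average-consensus updates on an undirected graph.
        neighbours = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}   # hypothetical topology
        estimate = {0: 10.0, 1: 12.0, 2: 11.0, 3: 30.0}             # node 3 starts far off
        epsilon = 0.2                                               # step size < 1/max_degree

        for step in range(50):
            estimate = {i: x + epsilon * sum(estimate[j] - x for j in neighbours[i])
                        for i, x in estimate.items()}

        print({i: round(x, 2) for i, x in estimate.items()})        # all close to the mean 15.75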

  7. Effects of heterogeneous wealth distribution on public cooperation with collective risk

    Science.gov (United States)

    Wang, Jing; Fu, Feng; Wang, Long

    2010-07-01

    The distribution of wealth among individuals in real society can be well described by the Pareto principle or “80-20 rule.” How does such heterogeneity in initial wealth distribution affect the emergence of public cooperation, when individuals, the rich and the poor, engage in a collective-risk enterprise, not to gain a profit but to avoid a potential loss? Here we address this issue by studying a simple but effective model based on threshold public goods games. We analyze the evolutionary dynamics for two distinct scenarios, respectively: one with fair sharers versus defectors and the other with altruists versus defectors. For both scenarios, particularly, we in detail study the dynamics of the population with dichotomic initial wealth—the rich versus the poor. Moreover, we demonstrate the possible steady compositions of the population and provide the conditions for stability of these steady states. We prove that in a population with heterogeneous wealth distribution, richer individuals are more likely to cooperate than poorer ones. Participants with lower initial wealth may choose to cooperate only if all players richer than them are cooperators. The emergence of pubic cooperation largely relies on rich individuals. Furthermore, whenever the wealth gap between the rich and the poor is sufficiently large, cooperation of a few rich individuals can substantially elevate the overall level of social cooperation, which is in line with the well-known Pareto principle. Our work may offer an insight into the emergence of cooperative behavior in real social situations where heterogeneous distribution of wealth among individual is omnipresent.

  8. Effects of heterogeneous wealth distribution on public cooperation with collective risk.

    Science.gov (United States)

    Wang, Jing; Fu, Feng; Wang, Long

    2010-07-01

    The distribution of wealth among individuals in real society can be well described by the Pareto principle or "80-20 rule." How does such heterogeneity in initial wealth distribution affect the emergence of public cooperation, when individuals, the rich and the poor, engage in a collective-risk enterprise, not to gain a profit but to avoid a potential loss? Here we address this issue by studying a simple but effective model based on threshold public goods games. We analyze the evolutionary dynamics for two distinct scenarios, respectively: one with fair sharers versus defectors and the other with altruists versus defectors. For both scenarios, in particular, we study in detail the dynamics of the population with dichotomic initial wealth-the rich versus the poor. Moreover, we demonstrate the possible steady compositions of the population and provide the conditions for stability of these steady states. We prove that in a population with heterogeneous wealth distribution, richer individuals are more likely to cooperate than poorer ones. Participants with lower initial wealth may choose to cooperate only if all players richer than them are cooperators. The emergence of public cooperation largely relies on rich individuals. Furthermore, whenever the wealth gap between the rich and the poor is sufficiently large, cooperation of a few rich individuals can substantially elevate the overall level of social cooperation, which is in line with the well-known Pareto principle. Our work may offer an insight into the emergence of cooperative behavior in real social situations where heterogeneous distribution of wealth among individuals is omnipresent.

  9. A database for on-line event analysis on a distributed memory machine

    CERN Document Server

    Argante, E; Van der Stok, P D V; Willers, Ian Malcolm

    1995-01-01

    Parallel in-memory databases can enhance the structuring and parallelization of programs used in High Energy Physics (HEP). Efficient database access routines are used as communication primitives which hide the communication topology, in contrast to more explicit communication libraries like PVM or MPI. A parallel in-memory database, called SPIDER, has been implemented on a 32-node Meiko CS-2 distributed memory machine. The SPIDER primitives generate a lower overhead than that generated by PVM or MPI. The event reconstruction program CPREAD of the CPLEAR experiment has been used as a test case. Performance measurements were carried out at the event rate generated by CPLEAR.

  10. A Context-Aware Adaptive Streaming Media Distribution System in a Heterogeneous Network with Multiple Terminals

    Directory of Open Access Journals (Sweden)

    Yepeng Ni

    2016-01-01

    Full Text Available We consider the problem of streaming media transmission in a heterogeneous network from a multisource server to multiple home terminals. In a wired network, the transmission performance is limited by the network state (e.g., bandwidth variation, jitter, and packet loss). In a wireless network, multiple user terminals can cause bandwidth competition. Thus, streaming media distribution in a heterogeneous network becomes a severe challenge which is critical for QoS guarantees. In this paper, we propose a context-aware adaptive streaming media distribution system (CAASS), which implements a context-aware module to perceive the environment parameters and uses a strategy analysis (SA) module to deduce the most suitable service level. This approach is able to improve the video quality while guaranteeing streaming QoS. We formulate the optimization problem of the QoS relationship with the environment parameters based on the QoS testing algorithm for IPTV in ITU-T G.1070. We evaluate the performance of the proposed CAASS through 12 types of experimental environments using a prototype system. Experimental results show that CAASS can dynamically adjust the service level according to environment variations (e.g., network state and terminal performance) and outperforms existing streaming approaches in adaptive streaming media distribution in terms of peak signal-to-noise ratio (PSNR).

  11. Distributed Input and State Estimation Using Local Information in Heterogeneous Sensor Networks

    Directory of Open Access Journals (Sweden)

    Dzung Tran

    2017-07-01

    Full Text Available A new distributed input and state estimation architecture is introduced and analyzed for heterogeneous sensor networks. Specifically, nodes of a given sensor network are allowed to have heterogeneous information roles in the sense that a subset of nodes can be active (that is, subject to observations of a process of interest) and the rest can be passive (that is, subject to no observations). Both fixed and varying active and passive roles of sensor nodes in the network are investigated. In addition, these nodes are allowed to have non-identical sensor modalities under the common underlying assumption that they have complementary properties distributed over the sensor network so as to achieve collective observability. The key feature of our framework is that it utilizes local information not only during the execution of the proposed distributed input and state estimation architecture but also in its design, in that global uniform ultimate boundedness of the error dynamics is guaranteed once each node satisfies given local stability conditions independent of the graph topology and neighboring information of these nodes. As a special case (e.g., when all nodes are active and a positive real condition is satisfied), asymptotic stability can be achieved with our algorithm. Several illustrative numerical examples are further provided to demonstrate the efficacy of the proposed architecture.

  12. Quartile and Outlier Detection on Heterogeneous Clusters Using Distributed Radix Sort

    International Nuclear Information System (INIS)

    Meredith, Jeremy S.; Vetter, Jeffrey S.

    2011-01-01

    In the past few years, performance improvements in CPUs and memory technologies have outpaced those of storage systems. When extrapolated to the exascale, this trend places strict limits on the amount of data that can be written to disk for full analysis, resulting in an increased reliance on characterizing in-memory data. Many of these characterizations are simple, but require sorted data. This paper explores an example of this type of characterization - the identification of quartiles and statistical outliers - and presents a performance analysis of a distributed heterogeneous radix sort as well as an assessment of current architectural bottlenecks.
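
    As a minimal, non-distributed sketch of the kind of characterization described above (the radix sort itself and the GPU/cluster distribution are not reproduced here), quartiles and IQR-based outliers can be read directly off sorted data; the sample data and the 1.5×IQR rule are illustrative assumptions.

```python
# Minimal sketch: quartiles and IQR-based outliers on sorted data.
import numpy as np

def quartiles_and_outliers(values, k=1.5):
    data = np.sort(np.asarray(values))            # stand-in for the distributed sort
    q1, q2, q3 = np.percentile(data, [25, 50, 75])
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr           # classic 1.5*IQR fences
    outliers = data[(data < lo) | (data > hi)]
    return (q1, q2, q3), outliers

rng = np.random.default_rng(0)
sample = np.concatenate([rng.normal(0, 1, 10_000), [9.5, -8.0]])  # two planted outliers
print(quartiles_and_outliers(sample))
```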

  13. TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems

    OpenAIRE

    Abadi, Martín; Agarwal, Ashish; Barham, Paul; Brevdo, Eugene; Chen, Zhifeng; Citro, Craig; Corrado, Greg S.; Davis, Andy; Dean, Jeffrey; Devin, Matthieu; Ghemawat, Sanjay; Goodfellow, Ian; Harp, Andrew; Irving, Geoffrey; Isard, Michael

    2016-01-01

    TensorFlow is an interface for expressing machine learning algorithms, and an implementation for executing such algorithms. A computation expressed using TensorFlow can be executed with little or no change on a wide variety of heterogeneous systems, ranging from mobile devices such as phones and tablets up to large-scale distributed systems of hundreds of machines and thousands of computational devices such as GPU cards. The system is flexible and can be used to express a wide variety of algo...
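
    A minimal sketch of what expressing a computation with TensorFlow looks like in practice, assuming a recent TensorFlow 2.x installation; the values are arbitrary and device placement is left to the runtime, as the abstract describes.

```python
# Minimal TensorFlow sketch: the same expression runs unchanged on CPU or GPU,
# with the runtime choosing where to place the computation.
import tensorflow as tf

a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[1.0, 0.0], [0.0, 1.0]])
c = tf.matmul(a, b)                          # executed on whatever device is available

print(c.numpy())
print(tf.config.list_physical_devices())     # lists the CPUs/GPUs the runtime can target
```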

  14. Fluid-driven fracture propagation in heterogeneous media: Probability distributions of fracture trajectories.

    Science.gov (United States)

    Santillán, David; Mosquera, Juan-Carlos; Cueto-Felgueroso, Luis

    2017-11-01

    Hydraulic fracture trajectories in rocks and other materials are highly affected by spatial heterogeneity in their mechanical properties. Understanding the complexity and structure of fluid-driven fractures and their deviation from the predictions of homogenized theories is a practical problem in engineering and geoscience. We conduct a Monte Carlo simulation study to characterize the influence of heterogeneous mechanical properties on the trajectories of hydraulic fractures propagating in elastic media. We generate a large number of random fields of mechanical properties and simulate pressure-driven fracture propagation using a phase-field model. We model the mechanical response of the material as that of an elastic isotropic material with heterogeneous Young modulus and Griffith energy release rate, assuming that fractures propagate in the toughness-dominated regime. Our study shows that the variance and the spatial covariance of the mechanical properties are controlling factors in the tortuousness of the fracture paths. We characterize the deviation of fracture paths from the homogeneous case statistically, and conclude that the maximum deviation grows linearly with the distance from the injection point. Additionally, fracture path deviations seem to be normally distributed, suggesting that fracture propagation in the toughness-dominated regime may be described as a random walk.

  15. New model for distributed multimedia databases and its application to networking of museums

    Science.gov (United States)

    Kuroda, Kazuhide; Komatsu, Naohisa; Komiya, Kazumi; Ikeda, Hiroaki

    1998-02-01

    This paper proposes a new distributed multimedia database system in which databases storing MPEG-2 videos and/or super-high-definition images are connected together through B-ISDNs, and also describes an example of networking museums on the basis of the proposed database system. The proposed database system introduces a new concept, the 'retrieval manager', which functions as an intelligent controller so that the user can recognize a set of image databases as one logical database. A user terminal issues a content retrieval request to the retrieval manager located nearest to it on the network. The retrieved contents are then sent through the B-ISDNs directly to the user terminal from the server which stores the designated contents. In this case, the designated logical database dynamically generates the best combination of retrieval parameters, such as the data transfer path and the data to be referenced, on the basis of the system environment. The generated retrieval parameters are then used to select the most suitable data transfer path on the network, so that the best combination of these parameters fits the distributed multimedia database system.

  16. ARACHNID: A prototype object-oriented database tool for distributed systems

    Science.gov (United States)

    Younger, Herbert; Oreilly, John; Frogner, Bjorn

    1994-01-01

    This paper discusses the results of a Phase 2 SBIR project sponsored by NASA and performed by MIMD Systems, Inc. A major objective of this project was to develop specific concepts for improved performance in accessing large databases. An object-oriented and distributed approach was used for the general design, while a geographical decomposition was used as a specific solution. The resulting software framework is called ARACHNID. The Faint Source Catalog developed by NASA was the initial database testbed. This is a database of many giga-bytes, where an order of magnitude improvement in query speed is being sought. This database contains faint infrared point sources obtained from telescope measurements of the sky. A geographical decomposition of this database is an attractive approach to dividing it into pieces. Each piece can then be searched on individual processors with only a weak data linkage between the processors being required. As a further demonstration of the concepts implemented in ARACHNID, a tourist information system is discussed. This version of ARACHNID is the commercial result of the project. It is a distributed, networked, database application where speed, maintenance, and reliability are important considerations. This paper focuses on the design concepts and technologies that form the basis for ARACHNID.

  17. Long-term spatial heterogeneity in mallard distribution in the Prairie pothole region

    Science.gov (United States)

    Janke, Adam K.; Anteau, Michael J.; Stafford, Joshua D.

    2017-01-01

    The Prairie Pothole Region (PPR) of north-central United States and south-central Canada supports greater than half of all breeding mallards (Anas platyrhynchos) annually counted in North America and is the focus of widespread conservation and research efforts. Allocation of conservation resources for this socioeconomically important population would benefit from an understanding of the nature of spatiotemporal variation in distribution of breeding mallards throughout the 850,000 km2 landscape. We used mallard counts from the Waterfowl Breeding Population and Habitat Survey to test for spatial heterogeneity and identify high- and low-abundance regions of breeding mallards over a 50-year time series. We found strong annual spatial heterogeneity in all years: 90% of mallards counted annually were on an average of only 15% of surveyed segments. Using a local indicator of spatial autocorrelation, we found a relatively static distribution of low-count clusters in northern Montana, USA, and southern Alberta, Canada, and a dynamic distribution of high-count clusters throughout the study period. Distribution of high-count clusters shifted southeast from northwestern portions of the PPR in Alberta and western Saskatchewan, Canada, to North and South Dakota, USA, during the latter half of the study period. This spatial redistribution of core mallard breeding populations was likely driven by interactions between environmental variation that created favorable hydrological conditions for wetlands in the eastern PPR and dynamic land-use patterns related to upland cropping practices and government land-retirement programs. Our results highlight an opportunity for prioritizing relatively small regions within the PPR for allocation of wetland and grassland conservation for mallard populations. However, the extensive spatial heterogeneity in core distributions over our study period suggests such spatial prioritization will have to overcome challenges presented by dynamic land

  18. A Heterogeneous Distributed Virtual Geographic Environment—Potential Application in Spatiotemporal Behavior Experiments

    Directory of Open Access Journals (Sweden)

    Shen Shen

    2018-02-01

    Full Text Available Due to their strong immersion and real-time interactivity, helmet-mounted virtual reality (VR) devices are becoming increasingly popular. Based on these devices, an immersive virtual geographic environment (VGE) provides a promising method for research into crowd behavior in an emergency. However, the current cheaper helmet-mounted VR devices are not yet popular enough, and will continue to coexist with personal computer (PC)-based systems for a long time. Therefore, a heterogeneous distributed virtual geographic environment (HDVGE) could be a feasible solution to the heterogeneity problems caused by various types of clients, and support the implementation of spatiotemporal crowd behavior experiments with large numbers of concurrent participants. In this study, we developed an HDVGE framework, and put forward a set of design principles to define the similarities between the real world and the VGE. We discussed the HDVGE architecture, and proposed an abstract interaction layer, a protocol-based interaction algorithm, and an adjusted dead reckoning algorithm to solve the heterogeneous distributed problems. We then implemented an HDVGE prototype system focusing on subway fire evacuation experiments. Two types of clients are considered in the system: PC and all-in-one VR. Finally, we evaluated the performance of the prototype system and the key algorithms. The results showed that in a low-latency local area network (LAN) environment, the prototype system can smoothly support 90 concurrent users consisting of PC and all-in-one VR clients. HDVGE provides a feasible solution for studying not only spatiotemporal crowd behaviors in normal conditions, but also evacuation behaviors in emergency conditions such as fires and earthquakes. HDVGE could also serve as a new means of obtaining observational data about individual and group behavior in support of human geography research.
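
    The adjusted dead reckoning algorithm itself is not given in the abstract; the following is a generic dead-reckoning sketch of the basic idea (the RemoteAvatar class and the update scheme are hypothetical): between server updates, each client extrapolates remote participants from the last reported position and velocity, then corrects when the next update arrives.

```python
# Generic dead-reckoning sketch: extrapolate between updates, correct on arrival.
from dataclasses import dataclass

@dataclass
class RemoteAvatar:
    x: float          # last reported position
    y: float
    vx: float         # last reported velocity
    vy: float
    t: float          # timestamp of that report

    def predict(self, now: float):
        # extrapolate position while no newer packet has arrived
        dt = now - self.t
        return self.x + self.vx * dt, self.y + self.vy * dt

    def on_update(self, x, y, vx, vy, t):
        # naive correction: accept the authoritative state; a smoother variant
        # would blend predicted and reported positions over a short window
        self.x, self.y, self.vx, self.vy, self.t = x, y, vx, vy, t

avatar = RemoteAvatar(0.0, 0.0, 1.0, 0.0, t=0.0)
print(avatar.predict(now=0.5))   # (0.5, 0.0) while waiting for the next packet
```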

  19. Inference of R0 and transmission heterogeneity from the size distribution of stuttering chains.

    Directory of Open Access Journals (Sweden)

    Seth Blumberg

    Full Text Available For many infectious disease processes such as emerging zoonoses and vaccine-preventable diseases, R0 < 1 and infections occur as self-limited stuttering transmission chains. A mechanistic understanding of transmission is essential for characterizing the risk of emerging diseases and monitoring spatio-temporal dynamics. Thus methods for inferring R0 and the degree of heterogeneity in transmission from stuttering chain data have important applications in disease surveillance and management. Previous researchers have used chain size distributions to infer R0, but estimation of the degree of individual-level variation in infectiousness (as quantified by the dispersion parameter, k) has typically required contact tracing data. Utilizing branching process theory along with a negative binomial offspring distribution, we demonstrate how maximum likelihood estimation can be applied to chain size data to infer both R0 and the dispersion parameter that characterizes heterogeneity. While the maximum likelihood value for R0 is a simple function of the average chain size, the associated confidence intervals are dependent on the inferred degree of transmission heterogeneity. As demonstrated for monkeypox data from the Democratic Republic of Congo, this impacts when a statistically significant change in R0 is detectable. In addition, by allowing for superspreading events, inference of k shifts the threshold above which a transmission chain should be considered anomalously large for a given value of R0 (thus reducing the probability of false alarms about pathogen adaptation). Our analysis of monkeypox also clarifies the various ways that imperfect observation can impact inference of transmission parameters, and highlights the need to quantitatively evaluate whether observation is likely to significantly bias results.
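
    For a subcritical branching process the expected chain size is 1/(1 - R0), so the maximum-likelihood estimate of R0 from observed chain sizes is 1 - 1/(mean chain size); the sketch below illustrates this relation on hypothetical chain-size data and is not the authors' code. Estimating the dispersion parameter k additionally requires numerical maximization of the negative binomial chain-size likelihood, which is omitted here.

```python
# MLE of R0 from the sizes of self-limited (stuttering) transmission chains,
# using mean chain size = 1 / (1 - R0) for a subcritical branching process.
import numpy as np

def estimate_R0(chain_sizes):
    mean_size = np.mean(chain_sizes)
    return 1.0 - 1.0 / mean_size

chains = [1, 1, 2, 1, 4, 1, 1, 3, 1, 2]   # hypothetical observed chain sizes
print(round(estimate_R0(chains), 3))       # ~0.412 for a mean chain size of 1.7
```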

  20. Long-lived CO2 lasers with distributed heterogeneous catalysis

    Energy Technology Data Exchange (ETDEWEB)

    Browne, P G; Smith, A L.S.

    1974-12-11

    In a sealed CO2-N2-He system with a clean discharge tube, the degree of dissociation of the CO2 is greater than 80 percent (with no hydrogen present), and laser action cannot be obtained. If Pt is distributed along the discharge tube walls as a discontinuous film, it catalyses back-reactions reforming CO2. The degree of dissociation is then less than 40 percent, and efficient laser action at 10.6 μm is obtained. Using such distributed heterogeneous catalysis, a CO2-N2-He-Xe laser has operated for more than 3000 h. In this system, both H2 and D2 are undesirable additives because they decrease the excitation rate of the upper laser level. (auth)

  1. Distributed memory in a heterogeneous network, as used in the CERN-PS complex timing system

    CERN Document Server

    Kovaltsov, V I

    1995-01-01

    The Distributed Table Manager (DTM) is a fast and efficient utility for distributing named binary data structures called Tables, of arbitrary size and structure, around a heterogeneous network of computers to a set of registered clients. The Tables are transmitted over a UDP network between DTM servers in network format, where the servers perform the conversions to and from host format for local clients. The servers provide clients with synchronization mechanisms, a choice of network data flows, and table options such as keeping table disc copies, shared memory or heap memory table allocation, table read/write permissions, and table subnet broadcasting. DTM has been designed to be easily maintainable, and to automatically recover from the type of errors typically encountered in a large control system network. The DTM system is based on a three level server daemon hierarchy, in which an inter daemon protocol handles network failures, and incorporates recovery procedures which will guarantee table consistency w...

  2. Information system architecture to support transparent access to distributed, heterogeneous data sources

    International Nuclear Information System (INIS)

    Brown, J.C.

    1994-08-01

    Quality situation assessment and decision making require access to multiple sources of data and information. Insufficient accessibility to data exists for many large corporations and Government agencies. By utilizing current advances in computer technology, today's situation analysts have a wealth of information at their disposal. There are many potential solutions to the information accessibility problem using today's technology. The United States Department of Energy (US-DOE) faced this problem when dealing with one class of problem in the US. The result of their efforts has been the creation of the Tank Waste Information Network System -- TWINS. The TWINS solution combines many technologies to address problems in several areas such as User Interfaces, Transparent Access to Multiple Data Sources, and Integrated Data Access. Data related to the complex are currently distributed throughout several US-DOE installations. Over time, each installation has adopted its own set of standards for information management. Heterogeneous hardware and software platforms exist both across the complex and within a single installation. Standards for information management vary between US-DOE mission areas within installations. These factors contribute to the complexity of accessing information in a manner that enhances the performance and decision making process of the analysts. This paper presents one approach taken by the DOE to resolve the problem of distributed, heterogeneous, multi-media information management for the HLW Tank complex. The information system architecture developed for the DOE by the TWINS effort is one that is adaptable to other problem domains and uses

  3. Heterogeneous Wireless Networks for Smart Grid Distribution Systems: Advantages and Limitations.

    Science.gov (United States)

    Khalifa, Tarek; Abdrabou, Atef; Shaban, Khaled; Gaouda, A M

    2018-05-11

    Supporting a conventional power grid with advanced communication capabilities is a cornerstone to transferring it to a smart grid. A reliable communication infrastructure with a high throughput can lay the foundation towards the ultimate objective of a fully automated power grid with self-healing capabilities. In order to realize this objective, the communication infrastructure of a power distribution network needs to be extended to cover all substations including medium/low voltage ones. This shall enable information exchange among substations for a variety of system automation purposes with a low latency that suits time critical applications. This paper proposes the integration of two heterogeneous wireless technologies (such as WiFi and cellular 3G/4G) to provide reliable and fast communication among primary and secondary distribution substations. This integration allows the transmission of different data packets (not packet replicas) over two radio interfaces, making these interfaces act like one data pipe. Thus, the paper investigates the applicability and effectiveness of employing heterogeneous wireless networks (HWNs) in achieving the desired reliability and timeliness requirements of future smart grids. We study the performance of HWNs in a realistic scenario under different data transfer loads and packet loss ratios. Our findings reveal that HWNs can be a viable data transfer option for smart grids.

  4. Vivaldi: A Domain-Specific Language for Volume Processing and Visualization on Distributed Heterogeneous Systems.

    Science.gov (United States)

    Choi, Hyungsuk; Choi, Woohyuk; Quan, Tran Minh; Hildebrand, David G C; Pfister, Hanspeter; Jeong, Won-Ki

    2014-12-01

    As the size of image data from microscopes and telescopes increases, the need for high-throughput processing and visualization of large volumetric data has become more pressing. At the same time, many-core processors and GPU accelerators are commonplace, making high-performance distributed heterogeneous computing systems affordable. However, effectively utilizing GPU clusters is difficult for novice programmers, and even experienced programmers often fail to fully leverage the computing power of new parallel architectures due to their steep learning curve and programming complexity. In this paper, we propose Vivaldi, a new domain-specific language for volume processing and visualization on distributed heterogeneous computing systems. Vivaldi's Python-like grammar and parallel processing abstractions provide flexible programming tools for non-experts to easily write high-performance parallel computing code. Vivaldi provides commonly used functions and numerical operators for customized visualization and high-throughput image processing applications. We demonstrate the performance and usability of Vivaldi on several examples ranging from volume rendering to image segmentation.

  5. Heterogeneous Wireless Networks for Smart Grid Distribution Systems: Advantages and Limitations

    Directory of Open Access Journals (Sweden)

    Tarek Khalifa

    2018-05-01

    Full Text Available Supporting a conventional power grid with advanced communication capabilities is a cornerstone to transferring it to a smart grid. A reliable communication infrastructure with a high throughput can lay the foundation towards the ultimate objective of a fully automated power grid with self-healing capabilities. In order to realize this objective, the communication infrastructure of a power distribution network needs to be extended to cover all substations including medium/low voltage ones. This shall enable information exchange among substations for a variety of system automation purposes with a low latency that suits time critical applications. This paper proposes the integration of two heterogeneous wireless technologies (such as WiFi and cellular 3G/4G) to provide reliable and fast communication among primary and secondary distribution substations. This integration allows the transmission of different data packets (not packet replicas) over two radio interfaces, making these interfaces act like one data pipe. Thus, the paper investigates the applicability and effectiveness of employing heterogeneous wireless networks (HWNs) in achieving the desired reliability and timeliness requirements of future smart grids. We study the performance of HWNs in a realistic scenario under different data transfer loads and packet loss ratios. Our findings reveal that HWNs can be a viable data transfer option for smart grids.

  6. Security in the CernVM File System and the Frontier Distributed Database Caching System

    International Nuclear Information System (INIS)

    Dykstra, D; Blomer, J

    2014-01-01

    Both the CernVM File System (CVMFS) and the Frontier Distributed Database Caching System (Frontier) distribute centrally updated data worldwide for LHC experiments using http proxy caches. Neither system provides privacy or access control on reading the data, but both control access to updates of the data and can guarantee the authenticity and integrity of the data transferred to clients over the internet. CVMFS has since its early days required digital signatures and secure hashes on all distributed data, and recently Frontier has added X.509-based authenticity and integrity checking. In this paper we detail and compare the security models of CVMFS and Frontier.

  7. Security in the CernVM File System and the Frontier Distributed Database Caching System

    Science.gov (United States)

    Dykstra, D.; Blomer, J.

    2014-06-01

    Both the CernVM File System (CVMFS) and the Frontier Distributed Database Caching System (Frontier) distribute centrally updated data worldwide for LHC experiments using http proxy caches. Neither system provides privacy or access control on reading the data, but both control access to updates of the data and can guarantee the authenticity and integrity of the data transferred to clients over the internet. CVMFS has since its early days required digital signatures and secure hashes on all distributed data, and recently Frontier has added X.509-based authenticity and integrity checking. In this paper we detail and compare the security models of CVMFS and Frontier.
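
    A minimal sketch of the content-integrity idea described in the two records above (secure hashes on data distributed through untrusted proxy caches); this is only an illustration with Python's hashlib, not the actual CVMFS or Frontier implementation, and the signed-catalogue step is indicated only in a comment.

```python
# Sketch: a client recomputes a secure hash of downloaded content and compares it
# with the expected digest obtained out of band (e.g. from a signed catalogue).
import hashlib

def content_hash(payload: bytes) -> str:
    return hashlib.sha256(payload).hexdigest()

def verify(payload: bytes, expected_hash: str) -> bool:
    # data fetched through an untrusted cache is accepted only if the hashes match
    return content_hash(payload) == expected_hash

data = b"calibration blob"
digest = content_hash(data)               # in practice, published via signed metadata
print(verify(data, digest))               # True
print(verify(b"tampered blob", digest))   # False
```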

  8. GPCALMA: A Tool For Mammography With A GRID-Connected Distributed Database

    International Nuclear Information System (INIS)

    Bottigli, U.; Golosio, B.; Masala, G.L.; Oliva, P.; Stumbo, S.; Cerello, P.; Cheran, S.; Delogu, P.; Fantacci, M.E.; Retico, A.; Fauci, F.; Magro, R.; Raso, G.; Lauria, A.; Palmiero, R.; Lopez Torres, E.; Tangaro, S.

    2003-01-01

    The GPCALMA (Grid Platform for Computer Assisted Library for MAmmography) collaboration involves several departments of physics, INFN (National Institute of Nuclear Physics) sections, and Italian hospitals. The aim of this collaboration is to develop a tool that can help radiologists in the early detection of breast cancer. GPCALMA has built a large distributed database of digitised mammographic images (about 5500 images corresponding to 1650 patients) and developed CAD (Computer Aided Detection) software which is integrated in a station that can also be used to acquire new images, serve as an archive and perform statistical analysis. The images (18x24 cm2, digitised by a CCD linear scanner with an 85 μm pitch and 4096 gray levels) are completely described: pathological ones have a characterization consistent with the radiologist's diagnosis and histological data, while non-pathological ones correspond to patients with a follow-up of at least three years. The distributed database is realized through the connection of all the hospitals and research centers using GRID technology. In each hospital, local patients' digital images are stored in the local database. Using the GRID connection, GPCALMA will allow each node to work on distributed database data as well as local database data. Using its database, the GPCALMA tools perform several analyses. A texture analysis, i.e. an automated classification into adipose, dense or glandular texture, can be provided by the system. The GPCALMA software also allows classification of pathological features, in particular analysis of massive lesions (both opacities and spiculated lesions) and analysis of microcalcification clusters. The detection of pathological features is made using neural network software that provides a selection of areas showing a given 'suspicion level' of lesion occurrence. The performance of the GPCALMA system will be presented in terms of ROC (Receiver Operating Characteristic) curves. The results of the GPCALMA system as 'second reader' will also

  9. Site initialization, recovery, and back-up in a distributed database system

    International Nuclear Information System (INIS)

    Attar, R.; Bernstein, P.A.; Goodman, N.

    1982-01-01

    Site initialization is the problem of integrating a new site into a running distributed database system (DDBS). Site recovery is the problem of integrating an old site into a DDBS when the site recovers from failure. Site backup is the problem of creating a static backup copy of a database for archival or query purposes. We present an algorithm that solves the site initialization problem. By modifying the algorithm slightly, we get solutions to the other two problems as well. Our algorithm exploits the fact that a correct DDBS must run a serializable concurrency control algorithm. Our algorithm relies on the concurrency control algorithm to handle all inter-site synchronization

  10. Fine-Scale Spatial Heterogeneity in the Distribution of Waterborne Protozoa in a Drinking Water Reservoir.

    Science.gov (United States)

    Burnet, Jean-Baptiste; Ogorzaly, Leslie; Penny, Christian; Cauchie, Henry-Michel

    2015-09-23

    The occurrence of faecal pathogens in drinking water resources constitutes a threat to the supply of safe drinking water, even in industrialized nations. To efficiently assess and monitor the risk posed by these pathogens, sampling deserves careful design, based on preliminary knowledge on their distribution dynamics in water. For the protozoan pathogens Cryptosporidium and Giardia, only little is known about their spatial distribution within drinking water supplies, especially at fine scale. Two-dimensional distribution maps were generated by sampling cross-sections at meter resolution in two different zones of a drinking water reservoir. Samples were analysed for protozoan pathogens as well as for E. coli, turbidity and physico-chemical parameters. Parasites displayed heterogeneous distribution patterns, as reflected by significant (oo)cyst density gradients along reservoir depth. Spatial correlations between parasites and E. coli were observed near the reservoir inlet but were absent in the downstream lacustrine zone. Measurements of surface and subsurface flow velocities suggest a role of local hydrodynamics on these spatial patterns. This fine-scale spatial study emphasizes the importance of sampling design (site, depth and position on the reservoir) for the acquisition of representative parasite data and for optimization of microbial risk assessment and monitoring. Such spatial information should prove useful to the modelling of pathogen transport dynamics in drinking water supplies.

  11. Addressing population heterogeneity and distribution in epidemics models using a cellular automata approach.

    Science.gov (United States)

    López, Leonardo; Burguerner, Germán; Giovanini, Leonardo

    2014-04-12

    The spread of an infectious disease is determined by biological and social factors. Models based on cellular automata are adequate to describe such natural systems consisting of a massive collection of simple interacting objects. They characterize the time evolution of the global system as the emergent behaviour resulting from the interaction of the objects, whose behaviour is defined through a set of simple rules that encode the individual behaviour and the transmission dynamics. An epidemic is characterized through an individual-based model built upon cellular automata. In the proposed model, each individual of the population is represented by a cell of the automaton. This way of modeling an epidemic situation makes it possible to define the characteristics of each individual, establish different scenarios and implement control strategies. A cellular automata model was proposed to study the time evolution of a heterogeneous population through the various stages of disease, allowing the inclusion of individual heterogeneity, geographical characteristics and social factors that determine the dynamics of the disease. Different assumptions made to build the classical model were evaluated, leading to the following results: i) for a low contact rate (as in a quarantine process or low-density population areas) the number of infective individuals is lower than in other areas where the contact rate is higher, and ii) for different initial spatial distributions of infected individuals, different epidemic dynamics are obtained due to their influence on the transition rate and the reproductive ratio of the disease. The contact rate and spatial distributions have a central role in the spread of a disease. For low-density populations the spread is very low and the number of infected individuals is lower than in highly populated areas. The spatial distribution of the population and the disease focus, as well as the geographical characteristics of the area, play a central role in the dynamics of the
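
    A minimal cellular-automaton SIR sketch in the spirit of the model described above (not the authors' implementation; the grid size, Moore neighbourhood and probabilities are illustrative assumptions), showing how a lower infection probability or a different initial spatial distribution of infected cells changes the spread.

```python
# Toy cellular-automaton epidemic: each cell is an individual in state S, I or R;
# susceptibles are infected by infected neighbours with a contact-dependent probability.
import numpy as np

S, I, R = 0, 1, 2
rng = np.random.default_rng(1)

def step(grid, p_infect=0.3, p_recover=0.1):
    new = grid.copy()
    n, m = grid.shape
    for i in range(n):
        for j in range(m):
            if grid[i, j] == S:
                neighbours = grid[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
                infected = np.count_nonzero(neighbours == I)
                # probability of escaping all infected neighbours is (1 - p)^k
                if rng.random() < 1 - (1 - p_infect) ** infected:
                    new[i, j] = I
            elif grid[i, j] == I and rng.random() < p_recover:
                new[i, j] = R
    return new

grid = np.zeros((50, 50), dtype=int)
grid[25, 25] = I                       # initial focus of infection
for _ in range(30):
    grid = step(grid)
print("infected:", np.count_nonzero(grid == I),
      "recovered:", np.count_nonzero(grid == R))
```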

  12. Application of new type of distributed multimedia databases to networked electronic museum

    Science.gov (United States)

    Kuroda, Kazuhide; Komatsu, Naohisa; Komiya, Kazumi; Ikeda, Hiroaki

    1999-01-01

    Recently, various kinds of multimedia application systems have actively been developed based on the achievement of advanced high-speed communication networks, computer processing technologies, and digital content-handling technologies. Against this background, this paper proposes a new distributed multimedia database system which can effectively perform a new function of cooperative retrieval among distributed databases. The proposed system introduces a new concept of a 'retrieval manager', which functions as an intelligent controller so that the user can recognize a set of distributed databases as one logical database. The logical database dynamically generates and executes a preferred combination of retrieval parameters on the basis of both directory data and the system environment. Moreover, a concept of 'domain' is defined in the system as a managing unit of retrieval. Retrieval can effectively be performed by cooperative processing among multiple domains. A communication language and protocols are also defined in the system; these are used in every communication action in the system. A language interpreter in each machine translates the communication language into the internal language used in that machine. Thanks to the language interpreter, internal processing modules, such as the DBMS and user interface modules, can be freely selected. A concept of 'content-set' is also introduced. A content-set is defined as a package of contents. Contents in the content-set are related to each other. The system handles a content-set as one object. The user terminal can effectively control the display of retrieved contents, referring to data indicating the relation of the contents in the content-set. In order to verify the function of the proposed system, a networked electronic museum was experimentally built. The results of this experiment indicate that the proposed system can effectively retrieve the objective contents under the control of a number of distributed

  13. Predicting plant distribution in a heterogeneous Alpine landscape: does soil matter?

    Science.gov (United States)

    Buri, Aline; Cianfrani, Carmen; Pradervand, Jean-Nicolas; Guisan, Antoine

    2016-04-01

    Topographic and climatic factors are usually used to predict plant distribution because they are known to explain plant presence or absence. Soil properties have been widely shown to influence plant growth and distributions. However, they are rarely taken into account as predictors in plant species distribution models (SDMs) over an edaphically heterogeneous landscape, and when they are, interpolation techniques are used to project soil factors in space. In a heterogeneous landscape, such as the Alps, where soil properties change abruptly as a function of environmental conditions over short distances, interpolation techniques require huge quantities of samples to be efficient. This is costly and time consuming, and brings more errors than a predictive approach for an equivalent number of samples. In this study we aimed to assess whether soil properties can be generalized over entire mountainous geographic extents and can improve predictions of plant distributions over traditional topo-climatic predictors. First, we used a predictive approach to map two soil properties based on field measurements in the western Swiss Alps region: the soil pH and the ratio of stable isotopes 13C/12C (called δ13CSOM). We used ensemble forecasting techniques combining several predictive algorithms to build models of the geographic variation in the values of both soil properties and projected them over the entire study area. As predictive factors, we employed very high resolution topo-climatic data. In a second step, the output maps from the previous task were used as input for regional vegetation models. We added the predicted soil properties to a set of basic topo-climatic predictors known to be important for modelling plant species, and then modelled the distribution of 156 plant species inhabiting the study area. Finally, we compared the quality of the models with and without soil properties as predictors to evaluate their effect on the predictive power of our models

  14. Nuclear analysis of the Chornobyl fuel containing masses with heterogeneous fuel distribution

    International Nuclear Information System (INIS)

    Turski, R. B.

    1998-01-01

    Although significant data has been obtained on the condition and composition of the fuel containing masses (FCM) located in the concrete chambers under the Chernobyl Unit 4 reactor cavity, there is still uncertainty regarding the possible recriticality of this material. The high radiation levels make access extremely difficult, and most of the samples are from the FCM surface regions. There is little information on the interior regions of the FCM, and one cannot assume with confidence that the surface measurements are representative of the interior regions. Therefore, reasonable assumptions on the key parameters such as fuel concentration, the concentrations of impurities and neutron poisons (especially boron), the void fraction of the FCM due to its known porosity, and the degree of fuel heterogeneity, are necessary to evaluate the possibility of recriticality. The void fraction is important since it introduces the possibility of water moderator being distributed throughout the FCM. Calculations indicate that the addition of 10 to 30 volume percent (v/o) water to the FCM has a significant impact on the calculated reactivity of the FCM. Therefore, water addition must be considered carefully. The other possible moderators are graphite and silicon dioxide. As discussed later in this paper, silicon dioxide moderation does not represent a criticality threat. For graphite, both heterogeneous fuel arrangements and very large volume fractions of graphite are necessary for a graphite moderated system to go critical. Based on the observations and measurements of the FCM compositions, these conditions do not appear credible for the Chernobyl FCM. Therefore, the focus of the analysis reported in this paper will be on reasonable heterogeneous fuel arrangements and water moderation. The analysis will evaluate a range of fuel and diluent compositions

  15. Effects of species biological traits and environmental heterogeneity on simulated tree species distribution shifts under climate change.

    Science.gov (United States)

    Wang, Wen J; He, Hong S; Thompson, Frank R; Spetich, Martin A; Fraser, Jacob S

    2018-09-01

    Demographic processes (fecundity, dispersal, colonization, growth, and mortality) and their interactions with environmental changes are not well represented in current climate-distribution models (e.g., niche and biophysical process models) and constitute a large uncertainty in projections of future tree species distribution shifts. We investigate how species biological traits and environmental heterogeneity affect species distribution shifts. We used a species-specific, spatially explicit forest dynamic model, LANDIS PRO, which incorporates site-scale tree species demography and competition, landscape-scale dispersal and disturbances, and regional-scale abiotic controls, to simulate the distribution shifts of four representative tree species with distinct biological traits in the central hardwood forest region of the United States. Our results suggested that biological traits (e.g., dispersal capacity, maturation age) were important for determining tree species distribution shifts. Environmental heterogeneity, on average, reduced shift rates by 8% compared to perfect environmental conditions. The average distribution shift rates ranged from 24 to 200 m per year under climate change scenarios, implying that many tree species may not be able to keep up with climate change because of limited dispersal capacity, long generation time, and environmental heterogeneity. We suggest that climate-distribution models should include species demographic processes (e.g., fecundity, dispersal, colonization), biological traits (e.g., dispersal capacity, maturation age), and environmental heterogeneity (e.g., habitat fragmentation) to improve future predictions of species distribution shifts in response to changing climates. Copyright © 2018 Elsevier B.V. All rights reserved.

  16. Efficiency effects of observed and unobserved heterogeneity: Evidence from Norwegian electricity distribution networks

    International Nuclear Information System (INIS)

    Growitsch, Christian; Jamasb, Tooraj; Wetzel, Heike

    2012-01-01

    Since the 1990s, efficiency and benchmarking analysis has increasingly been used in network utilities research and regulation. A recurrent concern is the effect of observable environmental factors that are beyond the influence of firms and unobserved factors that are not identifiable on the measured cost and quality performance of firms. This paper analyses the effect of observed geographic and weather factors and unobserved heterogeneity on a set of 128 Norwegian electricity distribution utilities for the 2001–2004 period. We utilise data on 78 geographic and weather variables to identify real economic inefficiency while controlling for observed and unobserved heterogeneity. We use the Factor Analysis technique to reduce the number of environmental factors to a few composite variables and to avoid the problem of multicollinearity. In order to identify firm-specific inefficiency, we then estimate a pooled version of the established stochastic frontier analysis (SFA) model of Aigner et al. (1977) and the recent true random effects model of Greene (2004; 2005a,b), without and with environmental variables. The results indicate that the observed environmental factors have a rather limited influence on the utilities' average efficiency and the efficiency rankings. Moreover, the differences in average efficiency scores and efficiency rankings between the pooled and the true random effects models imply that the type of SFA model used strongly influences the efficiency estimates.
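
    A minimal sketch of the dimensionality-reduction step described above, using scikit-learn's FactorAnalysis on synthetic data standing in for the 78 geographic and weather variables; it is not the authors' estimation code, and the subsequent stochastic frontier models are not reproduced.

```python
# Compress many correlated environmental variables into a few composite factors
# before they enter an efficiency (frontier) model, to avoid multicollinearity.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
latent = rng.normal(size=(128, 3))                  # 3 underlying environmental conditions
loadings = rng.normal(size=(3, 78))                 # 78 observed geo/weather variables
X = latent @ loadings + 0.1 * rng.normal(size=(128, 78))

fa = FactorAnalysis(n_components=3, random_state=0)
Z = fa.fit_transform(X)                             # composite variables, one row per utility
print(Z.shape)                                      # (128, 3)
```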

  17. From inter-specific behavioural interactions to species distribution patterns along gradients of habitat heterogeneity.

    Science.gov (United States)

    Laiolo, Paola

    2013-01-01

    The strength of the behavioural processes associated with competitor coexistence may vary when different physical environments, and their biotic communities, come into contact, although empirical evidence of how interference varies across gradients of environmental complexity is still scarce in vertebrates. Here, I analyse how behavioural interactions and habitat selection regulate the local distribution of steppeland larks (Alaudidae) along a gradient from simple to heterogeneous agricultural landscapes in Spain, using the crested lark Galerida cristata and Thekla lark G. theklae as study models. Galerida larks significantly partitioned by habitat but frequently co-occurred in heterogeneous environments. Irrespective of habitat divergence, however, the local densities of the two larks were negatively correlated, and the mechanisms behind this pattern were investigated by means of playback experiments. When simulating the intrusion of the congener by broadcasting the species' territorial calls, both larks responded with an aggressive response as intense, with respect to warning and approach behaviour, as when responding to the intrusion of a conspecific. However, birds promptly responded to playbacks only when congener territories were nearby, a phenomenon that points to learning as the mechanism through which individuals finely tune their aggressive responses to local competition levels. Heterospecifics occurred in closer proximity in diverse agro-ecosystems, possibly because of more abundant or diverse resources, and there engage in antagonistic interactions. The drop in species diversity associated with agricultural homogenisation is therefore likely to also bring about the disappearance of the behavioural repertoires associated with species interactions.

  18. Elucidating the impact of micro-scale heterogeneous bacterial distribution on biodegradation

    Science.gov (United States)

    Schmidt, Susanne I.; Kreft, Jan-Ulrich; Mackay, Rae; Picioreanu, Cristian; Thullner, Martin

    2018-06-01

    Groundwater microorganisms hardly ever cover the solid matrix uniformly; instead they form micro-scale colonies. To what extent such colony formation limits the bioavailability and biodegradation of a substrate is poorly understood. We used a high-resolution numerical model of a single pore channel inhabited by bacterial colonies to simulate the transport and biodegradation of organic substrates. These high-resolution 2D simulation results were compared to 1D simulations that were based on effective rate laws for bioavailability-limited biodegradation. We (i) quantified the observed bioavailability limitations and (ii) evaluated the applicability of previously established effective rate concepts when microorganisms are heterogeneously distributed. Effective bioavailability reductions of up to more than one order of magnitude were observed, showing that the micro-scale aggregation of bacterial cells into colonies can severely restrict the bioavailability of a substrate and reduce in situ degradation rates. Effective rate laws proved applicable for upscaling when using the introduced effective colony sizes.

  19. Biophysical, infrastructural and social heterogeneities explain spatial distribution of waterborne gastrointestinal disease burden in Mexico City

    Science.gov (United States)

    Baeza, Andrés; Estrada-Barón, Alejandra; Serrano-Candela, Fidel; Bojórquez, Luis A.; Eakin, Hallie; Escalante, Ana E.

    2018-06-01

    Due to unplanned growth, large extension and limited resources, most megacities in the developing world are vulnerable to hydrological hazards and infectious diseases caused by waterborne pathogens. Here we aim to elucidate the extent of the relation between the spatial heterogeneity of physical and socio-economic factors associated with hydrological hazards (flooding and scarcity) and the spatial distribution of gastrointestinal disease in Mexico City, a megacity with more than 8 million people. We applied spatial statistics and multivariate regression analyses to high resolution records of gastrointestinal diseases during two time frames (2007–2009 and 2010–2014). Results show a pattern of significant association between water flooding events and disease incidence in the city center (lowlands). We also found that in the periphery (highlands), higher incidence is generally associated with household infrastructure deficiency. Our findings suggest the need for integrated and spatially tailored interventions by public works and public health agencies, aimed to manage socio-hydrological vulnerability in Mexico City.

  20. System of and method for transparent management of data objects in containers across distributed heterogenous resources

    Science.gov (United States)

    Moore, Reagan W.; Rajasekar, Arcot; Wan, Michael Y.

    2007-09-11

    A system of and method for maintaining data objects in containers across a network of distributed heterogeneous resources in a manner which is transparent to a client. A client request pertaining to containers is resolved by querying meta data for the container, processing the request through one or more copies of the container maintained on the system, updating the meta data for the container to reflect any changes made to the container as a result of processing the request, and, if a copy of the container has changed, changing the status of the copy to indicate dirty status or synchronizing the copy to one or more other copies that may be present on the system.

  1. Metaheuristic Based Scheduling Meta-Tasks in Distributed Heterogeneous Computing Systems

    Directory of Open Access Journals (Sweden)

    Hesam Izakian

    2009-07-01

    Full Text Available Scheduling is a key problem in distributed heterogeneous computing systems in order to benefit from the large computing capacity of such systems, and is an NP-complete problem. In this paper, we present a metaheuristic technique, namely the Particle Swarm Optimization (PSO) algorithm, for this problem. PSO is a population-based search algorithm based on the simulation of the social behavior of bird flocking and fish schooling. Particles fly in the problem search space to find optimal or near-optimal solutions. The scheduler aims at minimizing the makespan, which is the completion time of the latest task. Experimental studies show that the proposed method is more efficient and surpasses previously reported PSO and GA approaches for this problem.
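
    A compact PSO sketch for meta-task scheduling on heterogeneous machines, minimizing makespan over an expected-time-to-compute (ETC) matrix; the continuous-to-discrete mapping, the parameters and the random ETC data are illustrative assumptions rather than the paper's exact encoding.

```python
# PSO for meta-task scheduling: each particle holds one continuous value per task,
# mapped to a machine index; fitness is the makespan under a random ETC matrix.
import numpy as np

rng = np.random.default_rng(0)
n_tasks, n_machines = 20, 4
etc = rng.uniform(5, 50, size=(n_tasks, n_machines))   # expected time to compute

def makespan(position):
    assignment = np.floor(position).astype(int) % n_machines
    loads = np.zeros(n_machines)
    for task, machine in enumerate(assignment):
        loads[machine] += etc[task, machine]
    return loads.max()

n_particles, iters, w, c1, c2 = 30, 200, 0.7, 1.5, 1.5
pos = rng.uniform(0, n_machines, size=(n_particles, n_tasks))
vel = np.zeros_like(pos)
pbest, pbest_fit = pos.copy(), np.array([makespan(p) for p in pos])
gbest = pbest[pbest_fit.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, n_machines - 1e-9)
    fit = np.array([makespan(p) for p in pos])
    improved = fit < pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[pbest_fit.argmin()].copy()

print("best makespan:", round(pbest_fit.min(), 2))
```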

  2. Multicriterion problem of allocation of resources in the heterogeneous distributed information processing systems

    Science.gov (United States)

    Antamoshkin, O. A.; Kilochitskaya, T. R.; Ontuzheva, G. A.; Stupina, A. A.; Tynchenko, V. S.

    2018-05-01

    This study reviews the problem of allocation of resources in heterogeneous distributed information processing systems, which may be formalized as a multicriterion multi-index problem with linear constraints of the transport type. The algorithms for solving this problem involve a search for the entire set of Pareto-optimal solutions. For some classes of hierarchical systems, it is possible to significantly speed up the procedure of verifying a system of linear algebraic inequalities for consistency, owing to their reducibility to stream models or the applicability of other solution schemes (for strongly connected structures) that take into account the specifics of the hierarchies under consideration.
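
    A single-criterion sketch of an allocation problem with transport-type linear constraints, solved with scipy.optimize.linprog on assumed data; a multicriterion version along the lines of the paper would trace the Pareto set, for example by re-solving with different scalarization weights over several objectives.

```python
# Transport-type allocation: ship all work from sources to heterogeneous nodes
# at minimum cost without exceeding node capacities.
import numpy as np
from scipy.optimize import linprog

cost = np.array([[4.0, 6.0, 9.0],        # cost[i, j]: cost of one unit of work
                 [5.0, 3.0, 7.0]])       # from source i to processing node j
supply = np.array([30.0, 50.0])          # work available at each source
capacity = np.array([35.0, 25.0, 40.0])  # capacity of each node

m, n = cost.shape
c = cost.ravel()                          # variables x[i, j] flattened row-major

A_eq = np.zeros((m, m * n)); b_eq = supply          # each source ships out all its work
for i in range(m):
    A_eq[i, i * n:(i + 1) * n] = 1.0

A_ub = np.zeros((n, m * n)); b_ub = capacity        # each node stays within capacity
for j in range(n):
    A_ub[j, j::n] = 1.0

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print(res.x.reshape(m, n), res.fun)
```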

  3. A dynamic Brownian bridge movement model to estimate utilization distributions for heterogeneous animal movement.

    Science.gov (United States)

    Kranstauber, Bart; Kays, Roland; Lapoint, Scott D; Wikelski, Martin; Safi, Kamran

    2012-07-01

    1. The recently developed Brownian bridge movement model (BBMM) has advantages over traditional methods because it quantifies the utilization distribution of an animal based on its movement path rather than individual points and accounts for temporal autocorrelation and high data volumes. However, the BBMM assumes unrealistic homogeneous movement behaviour across all data. 2. Accurate quantification of the utilization distribution is important for identifying the way animals use the landscape. 3. We improve the BBMM by allowing for changes in behaviour, using likelihood statistics to determine change points along the animal's movement path. 4. This novel extension outperforms the current BBMM, as indicated by simulations and examples of a territorial mammal and a migratory bird. The unique ability of our model to work with tracks that are not sampled regularly is especially important for GPS tags that have frequent failed fixes or dynamic sampling schedules. Moreover, our model extension provides a useful one-dimensional measure of behavioural change along animal tracks. 5. This new method provides a more accurate utilization distribution that better describes the space use of realistic, behaviourally heterogeneous tracks. © 2012 The Authors. Journal of Animal Ecology © 2012 British Ecological Society.

  4. A heterogeneous fleet vehicle routing model for solving the LPG distribution problem: A case study

    International Nuclear Information System (INIS)

    Onut, S; Kamber, M R; Altay, G

    2014-01-01

    Vehicle Routing Problem (VRP) is an important management problem in the field of distribution and logistics. In VRPs, routes from a distribution point to geographically distributed points are designed with minimum cost while considering customer demands. All points should be visited only once and by one vehicle in one route. The total demand in one route should not exceed the capacity of the vehicle that is assigned to that route. VRPs vary due to real-life constraints related to vehicle types, number of depots, transportation conditions and time periods, etc. The heterogeneous fleet vehicle routing problem is a kind of VRP in which vehicles have different capacities and costs. There are two types of vehicles in our problem. In this study, real-world data obtained from a company that operates in the LPG sector in Turkey are used. An optimization model is established for planning daily routes and assigning vehicles. The model is solved by GAMS and an optimal solution is found in a reasonable time

  5. A heterogeneous fleet vehicle routing model for solving the LPG distribution problem: A case study

    Science.gov (United States)

    Onut, S.; Kamber, M. R.; Altay, G.

    2014-03-01

    Vehicle Routing Problem (VRP) is an important management problem in the field of distribution and logistics. In VRPs, routes from a distribution point to geographically distributed points are designed with minimum cost while considering customer demands. All points should be visited only once and by one vehicle in one route. The total demand in one route should not exceed the capacity of the vehicle that is assigned to that route. VRPs vary due to real-life constraints related to vehicle types, number of depots, transportation conditions and time periods, etc. The heterogeneous fleet vehicle routing problem is a kind of VRP in which vehicles have different capacities and costs. There are two types of vehicles in our problem. In this study, real-world data obtained from a company that operates in the LPG sector in Turkey are used. An optimization model is established for planning daily routes and assigning vehicles. The model is solved by GAMS and an optimal solution is found in a reasonable time.
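
    The papers above solve the routing model exactly with GAMS; as that model is not given in the abstracts, the sketch below is only a toy greedy heuristic for a heterogeneous fleet (the customers, demands and vehicle capacities are illustrative), assigning customers in nearest-neighbour order while respecting each vehicle's capacity.

```python
# Toy heterogeneous-fleet heuristic: fill each vehicle greedily by nearest-neighbour
# order from the depot, never exceeding that vehicle's capacity.
import math

depot = (0.0, 0.0)
customers = {                              # id: (x, y, demand)
    1: (2, 1, 4), 2: (5, 2, 6), 3: (1, 5, 3), 4: (6, 6, 5), 5: (3, 7, 2),
}
vehicles = [("small", 8), ("large", 15)]   # (type, capacity)

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

unserved = set(customers)
routes = []
for vtype, cap in vehicles:
    route, load, pos = [], 0, depot
    while True:
        feasible = [c for c in unserved if load + customers[c][2] <= cap]
        if not feasible:
            break
        nxt = min(feasible, key=lambda c: dist(pos, customers[c][:2]))
        route.append(nxt)
        load += customers[nxt][2]
        pos = customers[nxt][:2]
        unserved.discard(nxt)
    routes.append((vtype, route))

print(routes, "unserved:", unserved)
```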

  6. Impact of distributions on the archetypes and prototypes in heterogeneous nanoparticle ensembles.

    Science.gov (United States)

    Fernandez, Michael; Wilson, Hugh F; Barnard, Amanda S

    2017-01-05

    The magnitude and complexity of the structural and functional data available on nanomaterials requires data analytics, statistical analysis and information technology to drive discovery. We demonstrate that multivariate statistical analysis can recognise the sets of truly significant nanostructures and their most relevant properties in heterogeneous ensembles with different probability distributions. The prototypical and archetypal nanostructures of five virtual ensembles of Si quantum dots (SiQDs) with Boltzmann, frequency, normal, Poisson and random distributions are identified using clustering and archetypal analysis, where we find that their diversity is defined by size and shape, regardless of the type of distribution. At the convex hull of the SiQD ensembles, simple configuration archetypes can efficiently describe a large number of SiQDs, whereas more complex shapes are needed to represent the average ordering of the ensembles. This approach provides a route towards the characterisation of computationally intractable virtual nanomaterial spaces, which can convert big data into smart data, and significantly reduce the workload to simulate experimentally relevant virtual samples.

  7. Glycogen distribution in the microwave-fixed mouse brain reveals heterogeneous astrocytic patterns.

    Science.gov (United States)

    Oe, Yuki; Baba, Otto; Ashida, Hitoshi; Nakamura, Kouichi C; Hirase, Hajime

    2016-09-01

    In the brain, glycogen metabolism has been implicated in synaptic plasticity and learning, yet the distribution of this molecule has not been fully described. We investigated cerebral glycogen of the mouse by immunohistochemistry (IHC) using two monoclonal antibodies that have different affinities depending on the glycogen size. The use of focused microwave irradiation yielded well-defined glycogen immunoreactive signals compared with the conventional periodic acid-Schiff method. The IHC signals displayed a punctate distribution localized predominantly in astrocytic processes. Glycogen immunoreactivity (IR) was high in the hippocampus, striatum, cortex, and cerebellar molecular layer, whereas it was low in the white matter and most of the subcortical structures. Additionally, glycogen distribution in the hippocampal CA3-CA1 and striatum had a 'patchy' appearance with glycogen-rich and glycogen-poor astrocytes appearing in alternation. The glycogen patches were more evident with large-molecule glycogen in young adult mice but they were hardly observable in aged mice (1-2 years old). Our results reveal brain region-dependent glycogen accumulation and possibly metabolic heterogeneity of astrocytes. GLIA 2016;64:1532-1545. © 2016 The Authors. Glia Published by Wiley Periodicals, Inc.

  8. Glycogen distribution in the microwave‐fixed mouse brain reveals heterogeneous astrocytic patterns

    Science.gov (United States)

    Baba, Otto; Ashida, Hitoshi; Nakamura, Kouichi C.

    2016-01-01

    In the brain, glycogen metabolism has been implicated in synaptic plasticity and learning, yet the distribution of this molecule has not been fully described. We investigated cerebral glycogen of the mouse by immunohistochemistry (IHC) using two monoclonal antibodies that have different affinities depending on the glycogen size. The use of focused microwave irradiation yielded well‐defined glycogen immunoreactive signals compared with the conventional periodic acid‐Schiff method. The IHC signals displayed a punctate distribution localized predominantly in astrocytic processes. Glycogen immunoreactivity (IR) was high in the hippocampus, striatum, cortex, and cerebellar molecular layer, whereas it was low in the white matter and most of the subcortical structures. Additionally, glycogen distribution in the hippocampal CA3‐CA1 and striatum had a ‘patchy’ appearance with glycogen‐rich and glycogen‐poor astrocytes appearing in alternation. The glycogen patches were more evident with large‐molecule glycogen in young adult mice but they were hardly observable in aged mice (1–2 years old). Our results reveal brain region‐dependent glycogen accumulation and possibly metabolic heterogeneity of astrocytes. GLIA 2016;64:1532–1545 PMID:27353480

  9. An XML-Based Networking Method for Connecting Distributed Anthropometric Databases

    Directory of Open Access Journals (Sweden)

    H Cheng

    2007-03-01

    Full Text Available Anthropometric data are used by numerous types of organizations for health evaluation, ergonomics, apparel sizing, fitness training, and many other applications. Data have been collected and stored in electronic databases since at least the 1940s. These databases are owned by many organizations around the world. In addition, the anthropometric studies stored in these databases often employ different standards, terminology, procedures, or measurement sets. To promote the use and sharing of these databases, the World Engineering Anthropometry Resources (WEAR) group was formed and tasked with the integration and publishing of member resources. It is easy to see that organizing worldwide anthropometric data into a single database architecture could be a daunting and expensive undertaking. The challenges of WEAR integration reflect mainly in the areas of distributed and disparate data, different standards and formats, independent memberships, and limited development resources. Fortunately, XML schema and web services provide an alternative method for networking databases, referred to as the Loosely Coupled WEAR Integration. A standard XML schema can be defined and used as a type of Rosetta stone to translate the anthropometric data into a universal format, and a web services system can be set up to link the databases to one another. In this way, the originators of the data can keep their data locally along with their own data management system and user interface, but their data can be searched and accessed as part of the larger data network, and even combined with the data of others. This paper will identify requirements for WEAR integration, review XML as the universal format, review different integration approaches, and propose a hybrid web services/data mart solution.
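
    The WEAR schema itself is not given in the abstract; as a minimal sketch of the "Rosetta stone" idea, with a purely hypothetical local record layout and invented target element names, a local anthropometric record could be mapped to a shared XML format like this:

```python
# Hypothetical sketch: translate one local anthropometric record into a shared
# XML format, in the spirit of the schema-based integration described above.
import xml.etree.ElementTree as ET

local_record = {"subj": "S-001", "stature_mm": 1742, "mass_kg": 68.4}  # made-up fields

root = ET.Element("AnthropometricRecord")                 # assumed element names
ET.SubElement(root, "SubjectID").text = local_record["subj"]
m = ET.SubElement(root, "Measurement", name="stature", unit="mm")
m.text = str(local_record["stature_mm"])
m = ET.SubElement(root, "Measurement", name="body_mass", unit="kg")
m.text = str(local_record["mass_kg"])

print(ET.tostring(root, encoding="unicode"))
```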

  10. Practical private database queries based on a quantum-key-distribution protocol

    International Nuclear Information System (INIS)

    Jakobi, Markus; Simon, Christoph; Gisin, Nicolas; Bancal, Jean-Daniel; Branciard, Cyril; Walenta, Nino; Zbinden, Hugo

    2011-01-01

    Private queries allow a user, Alice, to learn an element of a database held by a provider, Bob, without revealing which element she is interested in, while limiting her information about the other elements. We propose to implement private queries based on a quantum-key-distribution protocol, with changes only in the classical postprocessing of the key. This approach makes our scheme both easy to implement and loss tolerant. While unconditionally secure private queries are known to be impossible, we argue that an interesting degree of security can be achieved by relying on fundamental physical principles instead of unverifiable security assumptions in order to protect both the user and the database. We think that the scope exists for such practical private queries to become another remarkable application of quantum information in the footsteps of quantum key distribution.

  11. Influence of initial heterogeneities and recharge limitations on the evolution of aperture distributions in carbonate aquifers

    Directory of Open Access Journals (Sweden)

    B. Hubinger

    2011-12-01

    Full Text Available Karst aquifers evolve where the dissolution of soluble rocks causes the enlargement of discrete pathways along fractures or bedding planes, thus creating highly conductive solution conduits. To identify general interrelations between hydrogeological conditions and the properties of the evolving conduit systems the aperture-size frequency distributions resulting from generic models of conduit evolution are analysed. For this purpose, a process-based numerical model coupling flow and rock dissolution is employed. Initial protoconduits are represented by tubes with log-normally distributed aperture sizes with a mean μ0 = 0.5 mm for the logarithm of the diameters. Apertures are spatially uncorrelated and widen up to the metre range due to dissolution by chemically aggressive waters. Several examples of conduit development are examined focussing on influences of the initial heterogeneity and the available amount of recharge. If the available recharge is sufficiently high the evolving conduits compete for flow and those with large apertures and high hydraulic gradients attract more and more water. As a consequence, the positive feedback between increasing flow and dissolution causes the breakthrough of a conduit pathway connecting the recharge and discharge sides of the modelling domain. Under these competitive flow conditions dynamically stable bimodal aperture distributions are found to evolve, i.e. a certain percentage of tubes continues to be enlarged while the remaining tubes stay small-sized. The percentage of strongly widened tubes is found to be independent of the breakthrough time and decreases with increasing heterogeneity of the initial apertures and decreasing amount of available water. If the competition for flow is suppressed because the availability of water is strongly limited breakthrough of a conduit pathway is inhibited and the conduit pathways widen very slowly. The resulting aperture distributions are found to be

  12. The 2005 Tarapaca, Chile, Intermediate-depth Earthquake: Evidence of Heterogeneous Fluid Distribution Across the Plate?

    Science.gov (United States)

    Kuge, K.; Kase, Y.; Urata, Y.; Campos, J.; Perez, A.

    2008-12-01

    The physical mechanism of intermediate-depth earthquakes remains unsolved, and dehydration embrittlement in subducting plates is a candidate. An earthquake of Mw7.8 occurred at a depth of 115 km beneath Tarapaca, Chile. In this study, we suggest that the earthquake rupture can be attributed to heterogeneous fluid distribution across the subducting plate. The distribution of aftershocks suggests that the earthquake occurred on the subhorizontal fault plane. By modeling regional waveforms, we determined the spatiotemporal distribution of moment release on the fault plane, testing a different suite of velocity models and hypocenters. Two patches of high slip were robustly obtained, although their geometry tends to vary. We tested the results separately by computing the synthetic teleseismic P and pP waveforms. Observed P waveforms are generally modeled, whereas two pulses of observed pP require that the two patches are in the WNW-ESE direction. From the selected moment-release evolution, the dynamic rupture model was constructed by means of Mikumo et al. (1998). The model shows two patches of high dynamic stress drop. Notable is a region of negative stress drop between the two patches. This was required so that the region could lack wave radiation but propagate rupture from the first to the second patches. We found from teleseismic P that the radiation efficiency of the earthquake is relatively small, which can support the existence of negative stress drop during the rupture. The heterogeneous distribution of stress drop that we found can be caused by fluid. The T-P condition of dehydration explains the locations of double seismic zones (e.g. Hacker et al., 2003). The distance between the two patches of high stress drop agrees with the distance between the upper and lower layers of the double seismic zone observed in the south (Rietbrock and Waldhauser, 2004). The two patches can be parts of the double seismic zone, indicating the existence of fluid from dehydration

  13. SPANG: a SPARQL client supporting generation and reuse of queries for distributed RDF databases.

    Science.gov (United States)

    Chiba, Hirokazu; Uchiyama, Ikuo

    2017-02-08

    Toward improved interoperability of distributed biological databases, an increasing number of datasets have been published in the standardized Resource Description Framework (RDF). Although the powerful SPARQL Protocol and RDF Query Language (SPARQL) provides a basis for exploiting RDF databases, writing SPARQL code is burdensome for users including bioinformaticians. Thus, an easy-to-use interface is necessary. We developed SPANG, a SPARQL client that has unique features for querying RDF datasets. SPANG dynamically generates typical SPARQL queries according to specified arguments. It can also call SPARQL template libraries constructed in a local system or published on the Web. Further, it enables combinatorial execution of multiple queries, each with a distinct target database. These features facilitate easy and effective access to RDF datasets and integrative analysis of distributed data. SPANG helps users to exploit RDF datasets by generation and reuse of SPARQL queries through a simple interface. This client will enhance integrative exploitation of biological RDF datasets distributed across the Web. This software package is freely available at http://purl.org/net/spang .
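
    As an illustration only (this is not SPANG's own interface), the following sketch shows the underlying idea of generating a simple SPARQL query from a few arguments and running it against a public RDF endpoint with the SPARQLWrapper library; the endpoint URL and the chosen RDF class are assumptions for the example:

```python
# Illustrative only (not SPANG itself): generate a simple SPARQL query from
# arguments and run it against an RDF endpoint with SPARQLWrapper.
from SPARQLWrapper import SPARQLWrapper, JSON

def build_query(rdf_type: str, limit: int = 10) -> str:
    # Dynamically assemble a typical "list subjects of a given type" query.
    return f"SELECT ?s WHERE {{ ?s a <{rdf_type}> }} LIMIT {limit}"

endpoint = "https://sparql.uniprot.org/sparql"            # assumed public endpoint
sparql = SPARQLWrapper(endpoint)
sparql.setQuery(build_query("http://purl.uniprot.org/core/Taxon", limit=5))
sparql.setReturnFormat(JSON)

for row in sparql.query().convert()["results"]["bindings"]:
    print(row["s"]["value"])
```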

  14. Three-dimensional cluster formation and structure in heterogeneous dose distribution of intensity modulated radiation therapy.

    Science.gov (United States)

    Chao, Ming; Wei, Jie; Narayanasamy, Ganesh; Yuan, Yading; Lo, Yeh-Chi; Peñagarícano, José A

    2018-05-01

    To investigate three-dimensional cluster structure and its correlation to clinical endpoint in heterogeneous dose distributions from intensity modulated radiation therapy. Twenty-five clinical plans from twenty-one head and neck (HN) patients were used for a phenomenological study of the cluster structure formed from the dose distributions of organs at risks (OARs) close to the planning target volumes (PTVs). Initially, OAR clusters were searched to examine the pattern consistence among ten HN patients and five clinically similar plans from another HN patient. Second, clusters of the esophagus from another ten HN patients were scrutinized to correlate their sizes to radiobiological parameters. Finally, an extensive Monte Carlo (MC) procedure was implemented to gain deeper insights into the behavioral properties of the cluster formation. Clinical studies showed that OAR clusters had drastic differences despite similar PTV coverage among different patients, and the radiobiological parameters failed to positively correlate with the cluster sizes. MC study demonstrated the inverse relationship between the cluster size and the cluster connectivity, and the nonlinear changes in cluster size with dose thresholds. In addition, the clusters were insensitive to the shape of OARs. The results demonstrated that the cluster size could serve as an insightful index of normal tissue damage. The clinical outcome of the same dose-volume might be potentially different. Copyright © 2018 Elsevier B.V. All rights reserved.
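
    The paper's exact cluster definition is not spelled out in the abstract; one plausible reading, sketched below on a synthetic dose grid, is to threshold the dose and group supra-threshold voxels into 26-connected components, whose sizes then play the role of the cluster-size index (the threshold value and grid are made up):

```python
# Sketch of 3-D cluster formation on a synthetic dose grid: voxels above a dose
# threshold are grouped into 26-connected components and their sizes reported.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
dose = ndimage.gaussian_filter(rng.random((40, 40, 40)), sigma=2)  # toy OAR dose

threshold = 0.53                                   # hypothetical dose threshold
mask = dose > threshold
structure = np.ones((3, 3, 3), dtype=int)          # 26-connectivity in 3-D
labels, n_clusters = ndimage.label(mask, structure=structure)
sizes = np.sort(ndimage.sum(mask, labels, index=range(1, n_clusters + 1)))[::-1]

print(f"{n_clusters} clusters above {threshold:.2f}; largest sizes:", sizes[:5])
```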

  15. Multiple co-clustering based on nonparametric mixture models with heterogeneous marginal distributions.

    Science.gov (United States)

    Tokuda, Tomoki; Yoshimoto, Junichiro; Shimizu, Yu; Okada, Go; Takamura, Masahiro; Okamoto, Yasumasa; Yamawaki, Shigeto; Doya, Kenji

    2017-01-01

    We propose a novel method for multiple clustering, which is useful for analysis of high-dimensional data containing heterogeneous types of features. Our method is based on nonparametric Bayesian mixture models in which features are automatically partitioned (into views) for each clustering solution. This feature partition works as feature selection for a particular clustering solution, which screens out irrelevant features. To make our method applicable to high-dimensional data, a co-clustering structure is newly introduced for each view. Further, the outstanding novelty of our method is that we simultaneously model different distribution families, such as Gaussian, Poisson, and multinomial distributions in each cluster block, which widens areas of application to real data. We apply the proposed method to synthetic and real data, and show that our method outperforms other multiple clustering methods both in recovering true cluster structures and in computation time. Finally, we apply our method to a depression dataset with no true cluster structure available, from which useful inferences are drawn about possible clustering structures of the data.
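
    The nonparametric Bayesian model itself is not reproduced here; purely to illustrate what a row/column co-clustering (block) structure looks like on a feature matrix, a simpler, non-Bayesian stand-in such as scikit-learn's SpectralCoclustering can be run on toy data:

```python
# Not the authors' nonparametric Bayesian model: a simpler spectral co-clustering
# is used here only to illustrate the row/column block structure on toy data.
import numpy as np
from sklearn.datasets import make_biclusters
from sklearn.cluster import SpectralCoclustering

data, rows, cols = make_biclusters(shape=(60, 40), n_clusters=3,
                                   noise=5, random_state=0)
model = SpectralCoclustering(n_clusters=3, random_state=0)
model.fit(data)

print("row cluster sizes:", np.bincount(model.row_labels_))
print("column cluster sizes:", np.bincount(model.column_labels_))
```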

  16. Multiple co-clustering based on nonparametric mixture models with heterogeneous marginal distributions.

    Directory of Open Access Journals (Sweden)

    Tomoki Tokuda

    Full Text Available We propose a novel method for multiple clustering, which is useful for analysis of high-dimensional data containing heterogeneous types of features. Our method is based on nonparametric Bayesian mixture models in which features are automatically partitioned (into views) for each clustering solution. This feature partition works as feature selection for a particular clustering solution, which screens out irrelevant features. To make our method applicable to high-dimensional data, a co-clustering structure is newly introduced for each view. Further, the outstanding novelty of our method is that we simultaneously model different distribution families, such as Gaussian, Poisson, and multinomial distributions in each cluster block, which widens areas of application to real data. We apply the proposed method to synthetic and real data, and show that our method outperforms other multiple clustering methods both in recovering true cluster structures and in computation time. Finally, we apply our method to a depression dataset with no true cluster structure available, from which useful inferences are drawn about possible clustering structures of the data.

  17. Multiple co-clustering based on nonparametric mixture models with heterogeneous marginal distributions

    Science.gov (United States)

    Yoshimoto, Junichiro; Shimizu, Yu; Okada, Go; Takamura, Masahiro; Okamoto, Yasumasa; Yamawaki, Shigeto; Doya, Kenji

    2017-01-01

    We propose a novel method for multiple clustering, which is useful for analysis of high-dimensional data containing heterogeneous types of features. Our method is based on nonparametric Bayesian mixture models in which features are automatically partitioned (into views) for each clustering solution. This feature partition works as feature selection for a particular clustering solution, which screens out irrelevant features. To make our method applicable to high-dimensional data, a co-clustering structure is newly introduced for each view. Further, the outstanding novelty of our method is that we simultaneously model different distribution families, such as Gaussian, Poisson, and multinomial distributions in each cluster block, which widens areas of application to real data. We apply the proposed method to synthetic and real data, and show that our method outperforms other multiple clustering methods both in recovering true cluster structures and in computation time. Finally, we apply our method to a depression dataset with no true cluster structure available, from which useful inferences are drawn about possible clustering structures of the data. PMID:29049392

  18. Determination of stress distribution in III-V single crystal layers for heterogeneous integration applications

    Energy Technology Data Exchange (ETDEWEB)

    Jackson, M.; Hayashi, S. [Dept. of Materials Science and Engineering, University of California, Los Angeles, CA 90095 (United States); Goorsky, M.S.; Sandhu, R.; Chang-Chien, P.; Gutierrez-Aitken, A.; Tsai, R. [Northrop Grumman Space Technology, Redondo Beach, CA 90278 (United States); Noori, A.; Poust, B. [Dept. of Materials Science and Engineering, University of California, Los Angeles, CA 90095 (United States); Northrop Grumman Space Technology, Redondo Beach, CA 90278 (United States)

    2007-08-15

    Double crystal X-ray diffraction imaging and a variable temperature stage are employed to determine the stress distribution in heterogeneous wafer bonded layers through the superposition of images produced at different rocking curve angles. The stress distribution in InP layers transferred to a silicon substrate at room temperature exhibits an anticlastic deformation, with different regions of the wafer experiencing different signs of curvature. Measurements at elevated temperatures (≤125 °C) reveal that differences in thermal expansion coefficients dominate the stress and that interfacial particulates introduce very high local stress gradients that increase with increased temperature. For thinned GaAs substrates (100 µm) bonded using patterned metal interlayers to a separate GaAs substrate at ≈200 °C, residual stresses are produced at room temperature due to local stress points from metallization contacts and vias, and the complex stress patterns can be observed using the diffraction imaging technique. (copyright 2007 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  19. Data Mining in Distributed Database of the First Egyptian Thermal Research Reactor (ETRR-1)

    International Nuclear Information System (INIS)

    Abo Elez, R.H.; Ayad, N.M.A.; Ghuname, A.A.A.

    2006-01-01

    Distributed database (DDB) technology application systems are growing to cover many fields and domains, and at different levels. The aim of this paper is to shed some light on applying the new technology of distributed databases to the ETRR-1 operation data logged by the data acquisition system (DACQUS), from which useful knowledge can be extracted. Data mining with scientific methods and specialized tools is used to support the extraction of useful knowledge from the rapidly growing volumes of data. There are many shapes and forms of data mining methods. Predictive methods furnish models capable of anticipating the future behavior of quantitative or qualitative database variables. When the relationship between the dependent and independent variables is nearly linear, linear regression is the appropriate data mining strategy. Therefore, multiple linear regression models have been applied to a set of data samples of the ETRR-1 operation data, using the least squares method. The results show an accurate analysis of the multiple linear regression models as applied to the ETRR-1 operation data.
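
    The ETRR-1 operation logs are not public, so synthetic stand-in data are used below; the sketch only shows the least-squares fit of a multiple linear regression model of the kind described (variable names are invented):

```python
# Multiple linear regression by least squares, as described above, applied to
# synthetic stand-in data (the real ETRR-1 operation logs are not public).
import numpy as np

rng = np.random.default_rng(1)
n = 200
X = rng.normal(size=(n, 3))                      # e.g. power, flow, inlet temp
true_coef = np.array([2.0, -1.5, 0.7])
y = 5.0 + X @ true_coef + rng.normal(scale=0.3, size=n)   # e.g. outlet temp

A = np.column_stack([np.ones(n), X])             # add intercept column
coef, residuals, rank, _ = np.linalg.lstsq(A, y, rcond=None)
print("intercept and coefficients:", np.round(coef, 3))
```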

  20. The phytophthora genome initiative database: informatics and analysis for distributed pathogenomic research.

    Science.gov (United States)

    Waugh, M; Hraber, P; Weller, J; Wu, Y; Chen, G; Inman, J; Kiphart, D; Sobral, B

    2000-01-01

    The Phytophthora Genome Initiative (PGI) is a distributed collaboration to study the genome and evolution of a particularly destructive group of plant pathogenic oomycetes, with the goal of understanding the mechanisms of infection and resistance. NCGR provides informatics support for the collaboration as well as a centralized data repository. In the pilot phase of the project, several investigators prepared Phytophthora infestans and Phytophthora sojae EST and Phytophthora sojae BAC libraries and sent them to another laboratory for sequencing. Data from sequencing reactions were transferred to NCGR for analysis and curation. An analysis pipeline transforms raw data by performing simple analyses (i.e., vector removal and similarity searching) that are stored and can be retrieved by investigators using a web browser. Here we describe the database and access tools, provide an overview of the data therein and outline future plans. This resource has provided a unique opportunity for the distributed, collaborative study of a genus from which relatively little sequence data are available. Results may lead to insight into how better to control these pathogens. The homepage of PGI can be accessed at http://www.ncgr.org/pgi, with database access through the database access hyperlink.

  1. Heterogeneous distribution of a diffusional tracer in the aortic wall of normal and atherosclerotic rabbits

    International Nuclear Information System (INIS)

    Tsutsui, H.; Tomoike, H.; Nakamura, M.

    1990-01-01

    Tracer distribution as an index of nutritional support across the thoracic and abdominal aortas in rabbits in the presence or absence of atherosclerotic lesions was evaluated using [14C]antipyrine, a metabolically inert, diffusible indicator. Intimal plaques were produced by endothelial balloon denudation of the thoracic aorta and a 1% cholesterol diet. After a steady intravenous infusion of 200 microCi of [14C]antipyrine for 60 seconds, thoracic and abdominal aortas and the heart were excised, and autoradiograms of 20-micron-thick sections were quantified, using microcomputer-aided densitometry. Regional radioactivity and regional diffusional support, as an index of nutritional flow estimated from the timed collections of arterial blood, were 367 and 421 nCi·g⁻¹ (82 and 106 ml·min⁻¹·100 g⁻¹) in the thoracic aortic media of the normal and atherosclerotic rabbits, respectively. Radioactivity at the thickened intima was 179 nCi·g⁻¹ (p less than 0.01 versus media). The gruel was noted at a deeper site within the thickened intima, and diffusional support here was 110 nCi·g⁻¹ (p less than 0.01 versus the average radioactivity at the thickened intima). After ligating the intercostal arteries, regional tracer distribution in the media beneath the fibrofatty lesion, but not the plaque-free intima, was reduced to 46%. Thus, in the presence of advanced intimal thickening, the heterogeneous distribution of diffusional flow is prominent across the vessel wall, and abluminal routes are crucial to meet the increased demands of nutritional requirements.

  2. Next Generation Workload Management System For Big Data on Heterogeneous Distributed Computing

    International Nuclear Information System (INIS)

    Klimentov, A; Maeno, T; Nilsson, P; Panitkin, S; Wenaus, T; Buncic, P; De, K; Oleynik, D; Petrosyan, A; Jha, S; Mount, R; Porter, R J; Read, K F; Wells, J C; Vaniachine, A

    2015-01-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS and ALICE are the largest collaborations ever assembled in the sciences and are at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, both experiments rely on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System (WMS) for managing the workflow for all data processing on hundreds of data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. The scale is demonstrated by the following numbers: PanDA manages O(10^2) sites, O(10^5) cores, O(10^8) jobs per year, O(10^3) users, and the ATLAS data volume is O(10^17) bytes. In 2013 we started an ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF). The project titled ‘Next Generation Workload Management and Analysis System for Big Data’ (BigPanDA) is funded by DOE ASCR and HEP. Extending PanDA to clouds and LCF presents new challenges in managing heterogeneity and supporting workflow. The BigPanDA project is underway to set up and tailor PanDA at the Oak Ridge Leadership Computing Facility (OLCF) and at the National Research Center 'Kurchatov Institute' together with ALICE distributed computing and ORNL computing professionals. Our approach to integration of HPC platforms at the OLCF and elsewhere is to reuse, as much as possible, existing components of the

  3. Evaluating and categorizing the reliability of distribution coefficient values in the sorption database

    International Nuclear Information System (INIS)

    Ochs, Michael; Saito, Yoshihiko; Kitamura, Akira; Shibata, Masahiro; Sasamoto, Hiroshi; Yui, Mikazu

    2007-03-01

    Japan Atomic Energy Agency (JAEA) has developed the sorption database (JNC-SDB) for bentonite and rocks in order to assess the retardation properties of important radioactive elements in natural and engineered barriers in the H12 report. The database includes distribution coefficients (Kd) of important radionuclides and contains about 20,000 Kd values. The SDB includes a great variety of Kd values and additional key information drawn from many different publications. Accordingly, a classification guideline and classification system were developed in order to evaluate the reliability of each Kd value (Th, Pa, U, Np, Pu, Am, Cm, Cs, Ra, Se, Tc on bentonite). The reliability of 3740 Kd values was evaluated and categorized. (author)

  4. An approach for access differentiation design in medical distributed applications built on databases.

    Science.gov (United States)

    Shoukourian, S K; Vasilyan, A M; Avagyan, A A; Shukurian, A K

    1999-01-01

    A formalized "top to bottom" design approach was described in [1] for distributed applications built on databases, which were considered as a medium between virtual and real user environments for a specific medical application. Merging different components within a unified distributed application posits new essential problems for software. Particularly protection tools, which are sufficient separately, become deficient during the integration due to specific additional links and relationships not considered formerly. E.g., it is impossible to protect a shared object in the virtual operating room using only DBMS protection tools, if the object is stored as a record in DB tables. The solution of the problem should be found only within the more general application framework. Appropriate tools are absent or unavailable. The present paper suggests a detailed outline of a design and testing toolset for access differentiation systems (ADS) in distributed medical applications which use databases. The appropriate formal model as well as tools for its mapping to a DMBS are suggested. Remote users connected via global networks are considered too.

  5. Heterogeneous Cytoskeletal Force Distribution Delineates the Onset Ca2+ Influx Under Fluid Shear Stress in Astrocytes

    Directory of Open Access Journals (Sweden)

    Mohammad M. Maneshi

    2018-03-01

    Full Text Available Mechanical perturbations increase intracellular Ca2+ in cells, but the coupling of mechanical forces to the Ca2+ influx is not well understood. We used a microfluidic chamber driven with a high-speed pressure servo to generate defined fluid shear stress to cultured astrocytes, and simultaneously measured cytoskeletal forces using a force sensitive actinin optical sensor and intracellular Ca2+. Fluid shear generated non-uniform forces in actinin that critically depended on the stimulus rise time, emphasizing the presence of viscoelasticity in the activating sequence. A short (ms) shear pulse with a fast rise time (2 ms) produced an immediate increase in actinin tension at the upstream end of the cell with minimal changes at the downstream end. The onset of Ca2+ rise began at highly strained areas. In contrast to stimulus steps, slow ramp stimuli produced uniform forces throughout the cells and only a small Ca2+ response. The heterogeneity of force distribution is exaggerated in cells having fewer stress fibers and lower pre-tension in actinin. Disruption of the cytoskeleton with cytochalasin-D (Cyt-D) eliminated force gradients, and in those cells Ca2+ elevation started from the soma. Thus, Ca2+ influx with a mechanical stimulus depends on local stress within the cell, and that is time dependent due to viscoelastic mechanics.

  6. Fast Physically Accurate Rendering of Multimodal Signatures of Distributed Fracture in Heterogeneous Materials.

    Science.gov (United States)

    Visell, Yon

    2015-04-01

    This paper proposes a fast, physically accurate method for synthesizing multimodal, acoustic and haptic, signatures of distributed fracture in quasi-brittle heterogeneous materials, such as wood, granular media, or other fiber composites. Fracture processes in these materials are challenging to simulate with existing methods, due to the prevalence of large numbers of disordered, quasi-random spatial degrees of freedom, representing the complex physical state of a sample over the geometric volume of interest. Here, I develop an algorithm for simulating such processes, building on a class of statistical lattice models of fracture that have been widely investigated in the physics literature. This algorithm is enabled through a recently published mathematical construction based on the inverse transform method of random number sampling. It yields a purely time domain stochastic jump process representing stress fluctuations in the medium. The latter can be readily extended by a mean field approximation that captures the averaged constitutive (stress-strain) behavior of the material. Numerical simulations and interactive examples demonstrate the ability of these algorithms to generate physically plausible acoustic and haptic signatures of fracture in complex, natural materials interactively at audio sampling rates.
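
    The lattice model itself is not reproduced in the abstract; the sketch below only illustrates the inverse transform method mentioned there, drawing waiting times of a toy time-domain jump process under the assumption of exponentially distributed inter-event times (the rate value is invented):

```python
# Minimal inverse-transform sketch: draw waiting times between "micro-fracture"
# events of a toy time-domain jump process (exponential inter-event times assumed;
# the paper's actual lattice statistics are not reproduced here).
import numpy as np

rng = np.random.default_rng(42)

def sample_waiting_times(rate_hz: float, n_events: int) -> np.ndarray:
    u = rng.random(n_events)                 # uniform samples in [0, 1)
    return -np.log(1.0 - u) / rate_hz        # inverse CDF of the exponential law

times = np.cumsum(sample_waiting_times(rate_hz=800.0, n_events=10))
print("event times (s):", np.round(times, 4))
```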

  7. Heat transfer enhancement in a natural draft dry cooling tower under crosswind operation with heterogeneous water distribution

    Energy Technology Data Exchange (ETDEWEB)

    Goodarzi, Mohsen; Amooie, Hossein [Bu-Ali Sina Univ., Hamedan (Iran, Islamic Republic of). Dept. of Mechanical Engineering

    2016-04-15

    Crosswind significantly decreases the cooling efficiency of a natural draft dry cooling tower. The possibility of improving cooling efficiency with a heterogeneous water distribution within the cooling tower radiators under crosswind conditions is analysed. A CFD approach was used to model the flow field and heat transfer phenomena within the cooling tower and the airflow surrounding it. A mathematical model was developed from the various CFD results. Using a Genetic Algorithm trained with the results of the mathematical model, the best water distribution was found among the candidates. Remodeling the best water distribution with the CFD approach showed the highest enhancement of heat transfer compared to the usual uniform water distribution.

  8. Heat transfer enhancement in a natural draft dry cooling tower under crosswind operation with heterogeneous water distribution

    International Nuclear Information System (INIS)

    Goodarzi, Mohsen; Amooie, Hossein

    2016-01-01

    Crosswind significantly decreases the cooling efficiency of a natural draft dry cooling tower. The possibility of improving cooling efficiency with a heterogeneous water distribution within the cooling tower radiators under crosswind conditions is analysed. A CFD approach was used to model the flow field and heat transfer phenomena within the cooling tower and the airflow surrounding it. A mathematical model was developed from the various CFD results. Using a Genetic Algorithm trained with the results of the mathematical model, the best water distribution was found among the candidates. Remodeling the best water distribution with the CFD approach showed the highest enhancement of heat transfer compared to the usual uniform water distribution.

  9. Exploiting Distributed, Heterogeneous and Sensitive Data Stocks while Maintaining the Owner's Data Sovereignty.

    Science.gov (United States)

    Lablans, M; Kadioglu, D; Muscholl, M; Ückert, F

    2015-01-01

    To achieve statistical significance in medical research, biological or data samples from several bio- or databanks often need to be complemented by those of other institutions. For that purpose, IT-based search services have been established to locate datasets matching a given set of criteria in databases distributed across several institutions. However, previous approaches require data owners to disclose information about their samples, raising a barrier for their participation in the network. To devise a method to search distributed databases for datasets matching a given set of criteria while fully maintaining their owner's data sovereignty. As a modification to traditional federated search services, we propose the decentral search, which allows the data owner a high degree of control. Relevant data are loaded into local bridgeheads, each under their owner's sovereignty. Researchers can formulate criteria sets along with a project proposal using a central search broker, which then notifies the bridgeheads. The criteria are, however, treated as an inquiry rather than a query: Instead of responding with results, bridgeheads notify their owner and wait for his/her decision regarding whether and what to answer based on the criteria set, the matching datasets and the specific project proposal. Without the owner's explicit consent, no data leaves his/her institution. The decentral search has been deployed in one of the six German Centers for Health Research, comprised of eleven university hospitals. In the process, compliance with German data protection regulations has been confirmed. The decentral search also marks the centerpiece of an open source registry software toolbox aiming to build a national registry of rare diseases in Germany. While the sacrifice of real-time answers impairs some use-cases, it leads to several beneficial side effects: improved data protection due to data parsimony, tolerance for incomplete data schema mappings and flexibility with regard

  10. The response-time distribution in a real-time database with optimistic concurrency control and constant execution times

    NARCIS (Netherlands)

    Sassen, S.A.E.; Wal, van der J.

    1997-01-01

    For a real-time shared-memory database with optimistic concurrency control, an approximation for the transaction response-time distribution is obtained. The model assumes that transactions arrive at the database according to a Poisson process, that every transaction uses an equal number of

  11. The response-time distribution in a real-time database with optimistic concurrency control and exponential execution times

    NARCIS (Netherlands)

    Sassen, S.A.E.; Wal, van der J.

    1997-01-01

    For a real-time shared-memory database with optimistic concurrency control, an approximation for the transaction response-time distribution is obtained. The model assumes that transactions arrive at the database according to a Poisson process, that every transaction takes an exponential execution

  12. The mining of toxin-like polypeptides from EST database by single residue distribution analysis.

    Science.gov (United States)

    Kozlov, Sergey; Grishin, Eugene

    2011-01-31

    Novel high throughput sequencing technologies require permanent development of bioinformatics data processing methods. Among them, rapid and reliable identification of encoded proteins plays a pivotal role. To search for particular protein families, the amino acid sequence motifs suitable for selective screening of nucleotide sequence databases may be used. In this work, we suggest a novel method for simplified representation of protein amino acid sequences named Single Residue Distribution Analysis, which is applicable both for homology search and database screening. Using the procedure developed, a search for amino acid sequence motifs in sea anemone polypeptides was performed, and 14 different motifs with broad and low specificity were discriminated. The adequacy of motifs for mining toxin-like sequences was confirmed by their ability to identify 100% toxin-like anemone polypeptides in the reference polypeptide database. The employment of novel motifs for the search of polypeptide toxins in Anemonia viridis EST dataset allowed us to identify 89 putative toxin precursors. The translated and modified ESTs were scanned using a special algorithm. In addition to direct comparison with the motifs developed, the putative signal peptides were predicted and homology with known structures was examined. The suggested method may be used to retrieve structures of interest from the EST databases using simple amino acid sequence motifs as templates. The efficiency of the procedure for directed search of polypeptides is higher than that of most currently used methods. Analysis of 39939 ESTs of sea anemone Anemonia viridis resulted in identification of five protein precursors of earlier described toxins, discovery of 43 novel polypeptide toxins, and prediction of 39 putative polypeptide toxin sequences. In addition, two precursors of novel peptides presumably displaying neuronal function were disclosed.
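
    The 14 motifs themselves are not listed in the abstract, so the pattern below is invented; the sketch only illustrates the screening step of matching a spacing-style sequence motif against translated sequences:

```python
# Illustrative motif screen over translated sequences; the pattern below is
# made up and is NOT one of the 14 motifs derived in the paper.
import re

motif = re.compile(r"C.{2,6}C.{3,8}C.{2,6}C")      # hypothetical cysteine-spacing motif
sequences = {
    "est_001": "MKTLLVLAVCLCAAAWACDEPCRKYCAASGCSTTC",
    "est_002": "MSSPQLLLLLLLPLLSAQEVKDE",
}

for name, seq in sequences.items():
    hit = motif.search(seq)
    print(name, "matches" if hit else "no match", hit.group(0) if hit else "")
```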

  13. The mining of toxin-like polypeptides from EST database by single residue distribution analysis

    Directory of Open Access Journals (Sweden)

    Grishin Eugene

    2011-01-01

    Full Text Available Abstract Background Novel high throughput sequencing technologies require permanent development of bioinformatics data processing methods. Among them, rapid and reliable identification of encoded proteins plays a pivotal role. To search for particular protein families, the amino acid sequence motifs suitable for selective screening of nucleotide sequence databases may be used. In this work, we suggest a novel method for simplified representation of protein amino acid sequences named Single Residue Distribution Analysis, which is applicable both for homology search and database screening. Results Using the procedure developed, a search for amino acid sequence motifs in sea anemone polypeptides was performed, and 14 different motifs with broad and low specificity were discriminated. The adequacy of motifs for mining toxin-like sequences was confirmed by their ability to identify 100% toxin-like anemone polypeptides in the reference polypeptide database. The employment of novel motifs for the search of polypeptide toxins in Anemonia viridis EST dataset allowed us to identify 89 putative toxin precursors. The translated and modified ESTs were scanned using a special algorithm. In addition to direct comparison with the motifs developed, the putative signal peptides were predicted and homology with known structures was examined. Conclusions The suggested method may be used to retrieve structures of interest from the EST databases using simple amino acid sequence motifs as templates. The efficiency of the procedure for directed search of polypeptides is higher than that of most currently used methods. Analysis of 39939 ESTs of sea anemone Anemonia viridis resulted in identification of five protein precursors of earlier described toxins, discovery of 43 novel polypeptide toxins, and prediction of 39 putative polypeptide toxin sequences. In addition, two precursors of novel peptides presumably displaying neuronal function were disclosed.

  14. RAINBIO: a mega-database of tropical African vascular plants distributions

    Directory of Open Access Journals (Sweden)

    Dauby Gilles

    2016-11-01

    Full Text Available The tropical vegetation of Africa is characterized by high levels of species diversity but is undergoing important shifts in response to ongoing climate change and increasing anthropogenic pressures. Although our knowledge of plant species distribution patterns in the African tropics has been improving over the years, it remains limited. Here we present RAINBIO, a unique comprehensive mega-database of georeferenced records for vascular plants in continental tropical Africa. The geographic focus of the database is the region south of the Sahel and north of Southern Africa, and the majority of data originate from tropical forest regions. RAINBIO is a compilation of 13 datasets either publicly available or personal ones. Numerous in depth data quality checks, automatic and manual via several African flora experts, were undertaken for georeferencing, standardization of taxonomic names and identification and merging of duplicated records. The resulting RAINBIO data allows exploration and extraction of distribution data for 25,356 native tropical African vascular plant species, which represents ca. 89% of all known plant species in the area of interest. Habit information is also provided for 91% of these species.

  15. Calculation of Investments for the Distribution of GPON Technology in the village of Bishtazhin through database

    Directory of Open Access Journals (Sweden)

    MSc. Jusuf Qarkaxhija

    2013-12-01

    Full Text Available According to daily reports, the income from internet services is getting lower each year. Landline phone services are running at a loss, whereas mobile phone services are becoming commonplace, and the only bright spot keeping cable operators (ISPs) in positive balance is the income from broadband services (fast internet, IPTV). Broadband technology is a term that covers multiple methods of distributing information over the internet at high speed. Some of the broadband technologies are: optic fiber, coaxial cable, DSL, wireless, mobile broadband, and satellite connection. The ultimate goal of any broadband service provider is being able to provide voice, data and video through a single network, called triple play service. Internet distribution remains an important issue in Kosovo, particularly in rural zones. Considering the immense development of these technologies and the different alternatives available, the goal of this paper is to emphasize the need to forecast such an investment and to share experience in this area. Because this investment involves many factors related to population, geography and several technologies, and because these factors change continuously, the best approach is to store all the data in a database and use this database to derive different results. This database allows us to replace the previous manual calculations with an automated calculation procedure. This way of working improves the workflow, providing all the tools needed to make the right decision about an Internet investment while considering all of its aspects.

  16. Spatiotemporal Distribution of β-Amyloid in Alzheimer Disease Is the Result of Heterogeneous Regional Carrying Capacities.

    Science.gov (United States)

    Whittington, Alex; Sharp, David J; Gunn, Roger N

    2018-05-01

    β-amyloid (Aβ) accumulation in the brain is 1 of 2 pathologic hallmarks of Alzheimer disease (AD), and the spatial distribution of Aβ has been studied extensively ex vivo. Methods: We applied mathematical modeling to Aβ in vivo PET imaging data to investigate competing theories of Aβ spread in AD. Results: Our results provided evidence that Aβ accumulation starts in all brain regions simultaneously and that its spatiotemporal distribution is due to heterogeneous regional carrying capacities (regional maximum possible concentration of Aβ) for the aggregated protein rather than to longer-term spreading from seed regions. Conclusion: The in vivo spatiotemporal distribution of Aβ in AD can be mathematically modeled using a logistic growth model in which the Aβ carrying capacity is heterogeneous across the brain but the exponential growth rate and time of half maximal Aβ concentration are constant. © 2018 by the Society of Nuclear Medicine and Molecular Imaging.
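
    A numerical sketch of the logistic model described above follows, with a common growth rate and time of half-maximal load but region-specific carrying capacities; all parameter values and region names are illustrative, not fitted values from the paper:

```python
# Sketch of the logistic model described above: a common growth rate and time of
# half-maximal amyloid load, with region-specific carrying capacities (made up).
import numpy as np

regions = {"precuneus": 1.0, "frontal": 0.8, "occipital": 0.35}  # hypothetical K_r
growth_rate = 0.25          # per year, assumed
t_half = 15.0               # years from onset to half-maximal load, assumed

def amyloid_load(t_years: float, carrying_capacity: float) -> float:
    return carrying_capacity / (1.0 + np.exp(-growth_rate * (t_years - t_half)))

for t in (5, 15, 30):
    loads = {r: round(amyloid_load(t, k), 3) for r, k in regions.items()}
    print(f"t = {t:2d} y:", loads)
```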

  17. Adaptive data migration scheme with facilitator database and multi-tier distributed storage in LHD

    International Nuclear Information System (INIS)

    Nakanishi, Hideya; Masaki, Ohsuna; Mamoru, Kojima; Setsuo, Imazu; Miki, Nonomura; Kenji, Watanabe; Masayoshi, Moriya; Yoshio, Nagayama; Kazuo, Kawahata

    2008-01-01

    The recent 'data explosion' creates a demand for highly flexible storage extension and data migration. The data volume of LHD plasma diagnostics has grown 4.6 times larger than it was three years before. Frequent migration or replication among many distributed storage systems becomes mandatory and thus increases human operational costs. To reduce them computationally, a new adaptive migration scheme has been developed on LHD's multi-tier distributed storage. So-called HSM (Hierarchical Storage Management) software usually adopts a low-level cache mechanism or simple watermarks for triggering data stage-in and stage-out between two storage devices. However, the new scheme can deal with a number of distributed storage systems through the facilitator database, which manages all data locations together with their access histories and retrieval priorities. Not only inter-tier migration but also intra-tier replication and relocation can be managed, which is a great help when extending or replacing storage equipment. The access history of each data object is also utilized to optimize the volume size of the fast and costly RAID, in addition to a normal cache effect for frequently retrieved data. The effectiveness of the new scheme has been verified, so that LHD multi-tier distributed storage and other next-generation experiments can obtain such flexible expandability.

  18. Optimum parameters in a model for tumour control probability, including interpatient heterogeneity: evaluation of the log-normal distribution

    International Nuclear Information System (INIS)

    Keall, P J; Webb, S

    2007-01-01

    The heterogeneity of human tumour radiation response is well known. Researchers have used the normal distribution to describe interpatient tumour radiosensitivity. However, many natural phenomena show a log-normal distribution. Log-normal distributions are common when mean values are low, variances are large and values cannot be negative. These conditions apply to radiosensitivity. The aim of this work was to evaluate the log-normal distribution to predict clinical tumour control probability (TCP) data and to compare the results with the homogeneous (δ-function with single α-value) and normal distributions. The clinically derived TCP data for four tumour types-melanoma, breast, squamous cell carcinoma and nodes-were used to fit the TCP models. Three forms of interpatient tumour radiosensitivity were considered: the log-normal, normal and δ-function. The free parameters in the models were the radiosensitivity mean, standard deviation and clonogenic cell density. The evaluation metric was the deviance of the maximum likelihood estimation of the fit of the TCP calculated using the predicted parameters to the clinical data. We conclude that (1) the log-normal and normal distributions of interpatient tumour radiosensitivity heterogeneity more closely describe clinical TCP data than a single radiosensitivity value and (2) the log-normal distribution has some theoretical and practical advantages over the normal distribution. Further work is needed to test these models on higher quality clinical outcome datasets
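
    The clinical datasets are not reproduced here; the sketch below only illustrates the modelling idea, averaging a Poisson TCP over a log-normal interpatient distribution of the radiosensitivity parameter alpha (the clonogen number and distribution parameters are invented):

```python
# Numeric sketch of a Poisson TCP averaged over a log-normal interpatient
# distribution of radiosensitivity alpha (all parameter values illustrative).
import numpy as np

rng = np.random.default_rng(0)
n_clonogens = 1e7                      # assumed clonogen number
alpha_median, alpha_sigma = 0.25, 0.4  # Gy^-1; sigma of ln(alpha); both assumed

alphas = rng.lognormal(mean=np.log(alpha_median), sigma=alpha_sigma, size=20000)

def population_tcp(dose_gy: float) -> float:
    # Poisson TCP per sampled patient, then averaged over the population.
    return float(np.mean(np.exp(-n_clonogens * np.exp(-alphas * dose_gy))))

for d in (50, 60, 70, 80):
    print(f"D = {d} Gy  ->  TCP = {population_tcp(d):.3f}")
```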

  19. Large Survey Database: A Distributed Framework for Storage and Analysis of Large Datasets

    Science.gov (United States)

    Juric, Mario

    2011-01-01

    The Large Survey Database (LSD) is a Python framework and DBMS for distributed storage, cross-matching and querying of large survey catalogs (>10^9 rows, >1 TB). The primary driver behind its development is the analysis of Pan-STARRS PS1 data. It is specifically optimized for fast queries and parallel sweeps of positionally and temporally indexed datasets. It transparently scales to more than 10^2 nodes, and can be made to function in "shared nothing" architectures. An LSD database consists of a set of vertically and horizontally partitioned tables, physically stored as compressed HDF5 files. Vertically, we partition the tables into groups of related columns ('column groups'), storing together logically related data (e.g., astrometry, photometry). Horizontally, the tables are partitioned into partially overlapping "cells" by position in space (lon, lat) and time (t). This organization allows for fast lookups based on spatial and temporal coordinates, as well as data and task distribution. The design was inspired by the success of Google BigTable (Chang et al., 2006). Our programming model is a pipelined extension of MapReduce (Dean and Ghemawat, 2004). An SQL-like query language is used to access data. For complex tasks, map-reduce "kernels" that operate on query results on a per-cell basis can be written, with the framework taking care of scheduling and execution. The combination leverages users' familiarity with SQL, while offering a fully distributed computing environment. LSD adds little overhead compared to direct Python file I/O. In tests, we swept through 1.1 Grows of Pan-STARRS+SDSS data (220 GB) in less than 15 minutes on a dual-CPU machine. In a cluster environment, we achieved bandwidths of 17 Gbit/s (I/O limited). Based on current experience, we believe LSD should scale to be useful for analysis and storage of LSST-scale datasets. It can be downloaded from http://mwscience.net/lsd.
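
    The following toy sketch is not LSD's actual API; it only illustrates the idea of partitioning catalog rows into spatial cells and sweeping a per-cell "kernel" over them in parallel, using the standard library and invented data:

```python
# Toy sketch of the LSD idea (not its real API): partition catalog rows into
# spatial cells and run a per-cell "kernel" over them with a process pool.
from collections import defaultdict
from concurrent.futures import ProcessPoolExecutor

rows = [(lon, lat, 20.0 + 0.01 * i) for i, (lon, lat) in
        enumerate((x * 3.6, y * 1.8 - 90) for x in range(100) for y in range(100))]

def cell_key(lon: float, lat: float, size_deg: float = 10.0) -> tuple:
    return (int(lon // size_deg), int(lat // size_deg))

cells = defaultdict(list)                 # "horizontal partitioning" by position
for lon, lat, mag in rows:
    cells[cell_key(lon, lat)].append(mag)

def kernel(mags: list) -> tuple:          # per-cell map step: count and mean mag
    return len(mags), sum(mags) / len(mags)

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(kernel, cells.values()))
    total = sum(n for n, _ in results)    # reduce step
    print(f"{len(cells)} cells, {total} rows swept")
```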

  20. Study on parallel and distributed management of RS data based on spatial database

    Science.gov (United States)

    Chen, Yingbiao; Qian, Qinglan; Wu, Hongqiao; Liu, Shijin

    2009-10-01

    With the rapid development of current earth-observing technology, RS image data storage, management and information publication have become a bottleneck for its application and popularization. There are two prominent problems in RS image data storage and management systems. First, a background server can hardly handle the heavy processing of the great volume of RS data stored at different nodes in a distributed environment, so a heavy burden is placed on the background server. Second, there is no unique, standard and rational organization of multi-sensor RS data for its storage and management, and much information is lost or not included at storage time. Facing the above two problems, this paper puts forward a framework for parallel and distributed RS image data management and storage. The system aims at an RS data information system based on a parallel background server and a distributed data management system. Aiming at the above two goals, this paper studies the following key techniques and draws some instructive conclusions. The paper puts forward a solid index of "Pyramid, Block, Layer, Epoch" according to the properties of RS image data. With this solid index mechanism, a rational organization of multi-sensor RS image data of different resolutions, areas, bands and periods is achieved. For data storage, RS data are not divided into binary large objects stored in a current relational database system; instead, they are reconstructed through the above solid index mechanism, and a logical image database for the RS image data files is constructed. In the system architecture, this paper sets up a framework based on a parallel server of several common computers. Under this framework, the background process is divided into two parts: the common web process and the parallel process.
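
    The paper's exact index encoding is not published in the abstract, so the layout below is hypothetical; it sketches how a "Pyramid, Block, Layer, Epoch" key for an image tile might be composed and used as a lookup key:

```python
# Hypothetical sketch of a "Pyramid, Block, Layer, Epoch" solid index: compose a
# key for each image tile and use it for lookups (the exact encoding used by the
# paper is not published in the abstract, so this layout is illustrative only).
from dataclasses import dataclass

@dataclass(frozen=True)
class TileKey:
    pyramid: int      # resolution level (0 = full resolution)
    block: tuple      # (row, col) of the tile within the level
    layer: str        # spectral band, e.g. "B04"
    epoch: str        # acquisition period, e.g. "2009-06"

    def as_string(self) -> str:
        return f"P{self.pyramid}/B{self.block[0]}_{self.block[1]}/{self.layer}/{self.epoch}"

tiles = {}                                        # stand-in for the image database
key = TileKey(pyramid=2, block=(14, 37), layer="B04", epoch="2009-06")
tiles[key.as_string()] = b"...compressed tile bytes..."

print(key.as_string(), "->", len(tiles[key.as_string()]), "bytes stored")
```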

  1. A heterogeneous boron distribution in soil influences the poplar root system architecture development

    Science.gov (United States)

    Rees, R.; Robinson, B. H.; Hartmann, S.; Lehmann, E.; Schulin, R.

    2009-04-01

    Poplars are well suited for the phytomanagement of boron (B)-contaminated sites, due to their high transpiration rate and tolerance to elevated soil B concentrations. However, the uptake and the fate of B in poplar stands are not well understood. This information is crucial to improve the design of phytomanagement systems, where the primary role of poplars is to reduce B leaching by reducing the water flux through the contaminated material. Like other trace elements, B occurs heterogeneously in soils. Concentrations can differ up to an order of magnitude within centimetres. These gradients affect plant root growth and thus via preferential flow along the roots water and mass transport in soils to ground and surface waters. Generally there are three possible reactions of plant roots to patches with elevated trace element concentrations in soils: indifference, avoidance, or foraging. While avoidance or indifference might seem to be the most obvious strategies, foraging cannot be excluded a priori, because of the high demand of poplars for B compared to other tree species. We aimed to determine the rooting strategies of poplars in soils where B is either homo- or heterogeneously distributed. We planted 5 cm cuttings of Populus tremula var. Birmensdorf clones in aluminum (Al) containers with internal dimensions of 64 x 67 x 1.2 cm. The soil used was subsoil from northern Switzerland with a naturally low B and organic C concentration. We setup two treatments and a control with three replicates each. We spiked a bigger and a smaller portion of the soil with the same amount of B(OH)3-salt, in order to obtain soil concentrations of 7.5 mg B kg-1 and 20 mg B kg-1. We filled the containers with (a) un-spiked soil, (b) the 7.5 mg B kg-1 soil and (c) heterogeneously. The heterogeneous treatment consisted of one third 20 mg B kg-1 soil and two thirds control soil. We grew the poplars in a small greenhouse over 2 months and from then on in a climate chamber for another 3 months

  2. Design and analysis of stochastic DSS query optimizers in a distributed database system

    Directory of Open Access Journals (Sweden)

    Manik Sharma

    2016-07-01

    Full Text Available Query optimization is a stimulating task in any database system. A number of heuristics have been applied in recent times that propose new algorithms for substantially improving the performance of a query. The hunt for a better solution still continues. The relentless developments in the field of Decision Support System (DSS) databases are producing data at an exceptional rate. The massive volume of DSS data is consequential only when it can be accessed and analyzed by different researchers. Here, an innovative stochastic framework for a DSS query optimizer is proposed to further optimize the design of existing genetic query optimization approaches. The results of the Entropy Based Restricted Stochastic Query Optimizer (ERSQO) are compared with the results of the Exhaustive Enumeration Query Optimizer (EAQO), the Simple Genetic Query Optimizer (SGQO), the Novel Genetic Query Optimizer (NGQO) and the Restricted Stochastic Query Optimizer (RSQO). In terms of Total Costs, EAQO outperforms SGQO, NGQO, RSQO and ERSQO. However, the stochastic approaches dominate in terms of runtime. The Total Costs produced by ERSQO are better than those of SGQO, NGQO and RSQO by 12%, 8% and 5%, respectively. Moreover, the effect of replicating data on the Total Costs of DSS queries is also examined. In addition, the statistical analysis revealed a 2-tailed significant correlation between the number of join operations and the Total Costs of distributed DSS queries. Finally, with regard to the consistency of the stochastic query optimizers, the results of SGQO, NGQO, RSQO and ERSQO are 96.2%, 97.2%, 97.4% and 97.8% consistent, respectively.
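
    The ERSQO algorithm and its cost model are not reproduced in the abstract; the toy sketch below only illustrates the general idea of stochastic query optimization, searching over join orders with random restarts and swap mutations under a made-up cost function:

```python
# Toy stochastic join-order search (not the paper's ERSQO): random restarts plus
# swap mutations over permutations of relations, with a made-up cost model.
import random

random.seed(0)
card = {"R1": 1000, "R2": 50, "R3": 5000, "R4": 200}   # hypothetical cardinalities

def cost(order):
    # Crude left-deep cost: cumulative size of intermediate results.
    total, inter = 0, card[order[0]]
    for rel in order[1:]:
        inter = inter * card[rel] // 1000     # pretend each join has 0.1% selectivity
        total += inter
    return total

def mutate(order):
    i, j = random.sample(range(len(order)), 2)
    order = list(order)
    order[i], order[j] = order[j], order[i]
    return order

best = min((random.sample(list(card), len(card)) for _ in range(20)), key=cost)
for _ in range(200):                           # hill climbing with swap mutations
    cand = mutate(best)
    if cost(cand) < cost(best):
        best = cand
print("best order:", best, "cost:", cost(best))
```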

  3. Distributed control and data processing system with a centralized database for a BWR power plant

    International Nuclear Information System (INIS)

    Fujii, K.; Neda, T.; Kawamura, A.; Monta, K.; Satoh, K.

    1980-01-01

    Recent advances in electronics and computer technology have enabled a very wide range of computer applications in BWR power plant control and instrumentation. Computers of many scales, from micro to mega, are being introduced separately, and to obtain better control and instrumentation system performance a hierarchical computer complex architecture has been developed. This paper addresses the hierarchical computer complex system architecture, which enables more efficient introduction of computer systems to a Nuclear Power Plant. Distributed control and processing systems, which are the components of the hierarchical computer complex, are described in some detail, and the database for the hierarchical computer complex is also discussed. The hierarchical computer complex system has been developed and is now in the detailed design stage for actual power plant application. (auth)

  4. Improving the analysis, storage and sharing of neuroimaging data using relational databases and distributed computing.

    Science.gov (United States)

    Hasson, Uri; Skipper, Jeremy I; Wilde, Michael J; Nusbaum, Howard C; Small, Steven L

    2008-01-15

    The increasingly complex research questions addressed by neuroimaging research impose substantial demands on computational infrastructures. These infrastructures need to support management of massive amounts of data in a way that affords rapid and precise data analysis, to allow collaborative research, and to achieve these aims securely and with minimum management overhead. Here we present an approach that overcomes many current limitations in data analysis and data sharing. This approach is based on open source database management systems that support complex data queries as an integral part of data analysis, flexible data sharing, and parallel and distributed data processing using cluster computing and Grid computing resources. We assess the strengths of these approaches as compared to current frameworks based on storage of binary or text files. We then describe in detail the implementation of such a system and provide a concrete description of how it was used to enable a complex analysis of fMRI time series data.
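
    The paper does not publish its schema, so the following is only a minimal sketch, assuming SQLite as a stand-in relational back end, of the general idea of storing voxel time series in a relational table so that analysis steps become SQL queries. Table and column names are invented for illustration.

```python
import sqlite3

# Minimal illustrative schema (not the authors' actual design): one row per
# voxel observation, so analyses can be expressed as SQL queries.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE timeseries (
    subject TEXT, run INTEGER, voxel INTEGER, t INTEGER, bold REAL)""")

# Toy data: two subjects, one run, three voxels, ten time points.
rows = [(s, 1, v, t, float(v + t)) for s in ("s01", "s02")
        for v in range(3) for t in range(10)]
con.executemany("INSERT INTO timeseries VALUES (?, ?, ?, ?, ?)", rows)

# A query-as-analysis example: mean BOLD per voxel for one subject, which
# would otherwise require parsing binary or text files by hand.
for voxel, mean_bold in con.execute(
        "SELECT voxel, AVG(bold) FROM timeseries "
        "WHERE subject = 's01' GROUP BY voxel"):
    print(voxel, round(mean_bold, 2))
```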

  5. Specification of electron beam quality from ionization distributions on the central axis and a study of heterogeneity effects on these distributions

    International Nuclear Information System (INIS)

    Fernandes, R.F.

    1984-01-01

    Some parameters of the physical and therapeutic properties of the electron beams generated by the ''nevratron XII'' accelerator of the Hospital of Medicine in Ribeirao Preto - USP, Brazil, are evaluated. The effects of heterogeneities in an irradiated medium, called a ''phantom'' or simulator, are studied. (M.J.C.) [pt

  6. Tracer test modeling for characterizing heterogeneity and local scale residence time distribution in an artificial recharge site.

    Science.gov (United States)

    Valhondo, Cristina; Martinez-Landa, Lurdes; Carrera, Jesús; Hidalgo, Juan J.; Ayora, Carlos

    2017-04-01

    Artificial recharge of aquifers (AR) is a standard technique to replenish and enhance groundwater resources that has been widely used due to the increasing demand for quality water. AR through infiltration basins consists of infiltrating surface water, which may be affected to a greater or lesser degree by treatment plant effluents, runoff and other undesirable water sources, into an aquifer. Water quality improves during passage through the soil, and organic matter, nutrients, organic contaminants, and bacteria are reduced mainly by biodegradation and adsorption. Therefore, one of the goals of AR is to ensure a good quality status of the aquifer even if lower-quality water is used for recharge. Understanding the behavior and transport of the potential contaminants is essential for an appropriate management of the artificial recharge system. Knowledge of the flux distribution around the recharge system and of the relationship between the recharge system and the aquifer (area affected by the recharge, mixing ratios of recharged and native groundwater, travel times) is essential to achieve this goal. Evaluating the flux distribution is not always simple because of the complexity and heterogeneity of natural systems. Indeed, it is governed not so much by the hydraulic conductivity of the different geological units as by their continuity and inter-connectivity, particularly in the vertical direction. In summary, for an appropriate management of an artificial recharge system the heterogeneity of the media must be acknowledged. Aiming at characterizing the residence time distributions (RTDs) of a pilot artificial recharge system and the extent to which heterogeneity affects the RTDs, we performed and evaluated a pulse-injection tracer test. The artificial recharge system was simulated as a multilayer model, which was used to evaluate the measured breakthrough curves at six monitoring points. Flow and transport parameters were calibrated under two hypotheses. The first
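
    The calibration described above relies on a multilayer flow-and-transport model; as a far simpler, hedged illustration of evaluating a pulse-injection breakthrough curve at a single monitoring point, one can fit a 1-D advection-dispersion response to observed concentrations. All numbers below are synthetic assumptions, not site data.

```python
import numpy as np
from scipy.optimize import curve_fit

x = 10.0  # assumed distance from injection to observation point [m]

def pulse_btc(t, mass, v, D):
    """1-D advection-dispersion response to an instantaneous pulse."""
    return mass / np.sqrt(4 * np.pi * D * t) * np.exp(-(x - v * t) ** 2 / (4 * D * t))

# Synthetic "observed" breakthrough curve with noise.
rng = np.random.default_rng(0)
t_obs = np.linspace(0.5, 40, 80)                       # days
c_obs = pulse_btc(t_obs, 50.0, 0.8, 1.5) + rng.normal(0, 0.05, t_obs.size)

# Calibrate apparent velocity and dispersion; these control the local
# residence-time distribution at the monitoring point.
(p_mass, p_v, p_D), _ = curve_fit(pulse_btc, t_obs, c_obs, p0=[10.0, 0.5, 1.0])
print(f"mean arrival ~ {x / p_v:.1f} days, dispersion D ~ {p_D:.2f} m2/day")
```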

  7. Optimization of an algorithm for 3D calculation of radiation dose distribution in heterogeneous media for use in radiotherapy planning

    International Nuclear Information System (INIS)

    Perles, L.A.; Chinellato, C.D.; Rocha, J.R.O.

    2001-01-01

    In this paper a modification of an algorithm for calculating three-dimensional (3D) radiation dose distributions in heterogeneous media by convolutions is presented. The modification maintains good agreement between the calculated data and data simulated with the EGS4 code. The results of the algorithm were compared with the commercial program PLATO, where inconsistencies were noticed for equivalent-density regions in a muscle-lung-muscle interface system
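
    The modified algorithm itself is not given in the abstract; the sketch below only illustrates the generic convolution idea (dose obtained by convolving TERMA with an energy-deposition kernel) on made-up arrays, without the heterogeneity correction that the paper addresses.

```python
import numpy as np
from scipy.signal import fftconvolve

# Toy 3-D grid: photon TERMA (total energy released per unit mass) for a
# narrow beam entering along the z axis; values are illustrative only.
terma = np.zeros((32, 32, 64))
terma[14:18, 14:18, :] = np.exp(-0.05 * np.arange(64))  # exponential attenuation

# Toy point-spread (energy deposition) kernel: isotropic exponential falloff.
zk, yk, xk = np.mgrid[-5:6, -5:6, -5:6]
kernel = np.exp(-np.sqrt(xk**2 + yk**2 + zk**2))
kernel /= kernel.sum()

# Homogeneous-medium dose: superposition of kernels weighted by TERMA.
dose = fftconvolve(terma, kernel, mode="same")
print("max dose (arb. units):", round(float(dose.max()), 3))
```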

  8. The Make 2D-DB II package: conversion of federated two-dimensional gel electrophoresis databases into a relational format and interconnection of distributed databases.

    Science.gov (United States)

    Mostaguir, Khaled; Hoogland, Christine; Binz, Pierre-Alain; Appel, Ron D

    2003-08-01

    The Make 2D-DB tool has been previously developed to help build federated two-dimensional gel electrophoresis (2-DE) databases on one's own web site. The purpose of our work is to extend the strength of the first package and to build a more efficient environment. Such an environment should be able to fulfill the different needs and requirements arising from both the growing use of 2-DE techniques and the increasing amount of distributed experimental data.

  9. Imaging geochemical heterogeneities using inverse reactive transport modeling: An example relevant for characterizing arsenic mobilization and distribution

    DEFF Research Database (Denmark)

    Fakhreddine, Sarah; Lee, Jonghyun; Kitanidis, Peter K.

    2016-01-01

    groundwater parameters. Specifically, we simulate the mobilization of arsenic via kinetic oxidative dissolution of As-bearing pyrite due to dissolved oxygen in the ambient groundwater, which is an important mechanism for arsenic release in groundwater both under natural conditions and engineering applications......The spatial distribution of reactive minerals in the subsurface is often a primary factor controlling the fate and transport of contaminants in groundwater systems. However, direct measurement and estimation of heterogeneously distributed minerals are often costly and difficult to obtain. While...

  10. A high-performance model for shallow-water simulations in distributed and heterogeneous architectures

    Science.gov (United States)

    Conde, Daniel; Canelas, Ricardo B.; Ferreira, Rui M. L.

    2017-04-01

    One of the most common challenges in hydrodynamic modelling is the trade-off one must make between highly resolved simulations and the time required for their computation. In the particular case of urban floods, modelers are often forced to simplify the complex geometries of the problem, or to implicitly include some of its hydrodynamic effects, due to the typically very large spatial scales involved and limited computational resources. At CEris - Instituto Superior Técnico, Universidade de Lisboa - the STAV-2D shallow-water model, particularly suited for strong transient flows in complex and dynamic geometries, has been under development for the past several years (Canelas et al., 2013 & Conde et al., 2013). The model is based on an explicit, first-order 2DH finite-volume discretization scheme for unstructured triangular meshes, in which a flux-splitting technique is paired with a reviewed Roe-Riemann solver, yielding a model applicable to discontinuous flows over time-evolving geometries. STAV-2D features solid transport in both Eulerian and Lagrangian forms, with the former aimed at describing the transport of fine natural sediments and the latter at large individual debris. The model has been validated with theoretical solutions and laboratory experiments (Canelas et al., 2013 & Conde et al., 2015). This work presents our most recent effort in STAV-2D: the re-design of the code in a modern Object-Oriented parallel framework for heterogeneous computations on CPUs and GPUs. The programming language of choice for this re-design was C++, due to its wide support of established and emerging parallel programming interfaces. The current implementation of STAV-2D provides two different levels of parallel granularity: inter-node and intra-node. Inter-node parallelism is achieved by distributing a simulation across a set of worker nodes, with communication between nodes being explicitly managed through MPI. At this level, the main difficulty is associated with the
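
    STAV-2D itself is written in C++; purely as a language-neutral illustration of the inter-node layer described above, here is a hedged mpi4py sketch of a 1-D domain decomposition with halo (ghost-cell) exchange between neighbouring worker nodes. It is not STAV-2D code, and the update rule is a placeholder.

```python
# Run with e.g.: mpirun -n 4 python halo_exchange.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Each rank owns a strip of cells plus one ghost cell on each side.
n_local = 100
h = np.full(n_local + 2, float(rank))    # toy water-depth field

left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

for step in range(10):
    # Exchange halos: send my boundary cells, receive neighbours' into ghosts.
    comm.Sendrecv(h[1:2], dest=left, recvbuf=h[-1:], source=right)
    comm.Sendrecv(h[-2:-1], dest=right, recvbuf=h[0:1], source=left)
    # Placeholder "solver" update using the freshly exchanged ghost values.
    h[1:-1] = 0.5 * h[1:-1] + 0.25 * (h[:-2] + h[2:])

print(f"rank {rank}: mean depth {h[1:-1].mean():.3f}")
```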

  11. On the spatial distribution of the transpiration and soil moisture of a Mediterranean heterogeneous ecosystem in water-limited conditions.

    Science.gov (United States)

    Curreli, Matteo; Corona, Roberto; Montaldo, Nicola; Albertson, John D.; Oren, Ram

    2014-05-01

    Mediterranean ecosystems are characterized by a strong heterogeneity, and often by water-limited conditions. In these conditions contrasting plant functional types (PFT, e.g. grass and woody vegetation) compete for water use. Both the vegetation cover spatial distribution and the soil properties impact the soil moisture (SM) spatial distribution. Indeed, vegetation cover density and type affect evapotranspiration (ET), which is the main loss term of the soil water balance in these ecosystems. With the objective of carefully estimating the SM and ET spatial distributions in a Mediterranean water-limited ecosystem and understanding the SM-ET relationships, an extended field campaign was carried out. The study was performed in a heterogeneous ecosystem in Orroli, Sardinia (Italy). The experimental site is a typical Mediterranean ecosystem where the vegetation is distributed in patches of woody vegetation (mainly wild olives) and grass. Soil depth is shallow and varies spatially between 10 cm and 40 cm, without any correlation with the vegetation spatial distribution. ET, land-surface fluxes and CO2 fluxes are estimated by a micrometeorological tower based on the eddy covariance technique. However, in heterogeneous ecosystems a key assumption of eddy covariance theory, the homogeneity of the surface, is not preserved and the ET estimate may not be correct. Hence, we estimate ET of the woody vegetation using the thermal dissipation method (i.e. sap flow technique) in order to compare the two methodologies. Due to the high heterogeneity of the vegetation and soil properties of the field, a total of 54 sap flux sensors were installed. Fourteen clumps of wild olives within the eddy covariance footprint were identified as the most representative source of flux and they were instrumented with the thermal dissipation probes. Measurements of diameter at the height of sensor installation (0.4 m above ground) were recorded in all the clumps. Bark thickness and sapwood depth were measured on several

  12. Enabling Computational Dynamics in Distributed Computing Environments Using a Heterogeneous Computing Template

    Science.gov (United States)

    2011-08-09

    heterogeneous computing concept advertised recently as the paradigm capable of delivering exascale flop rates by the end of the decade. In this framework...

  13. Pivot/Remote: a distributed database for remote data entry in multi-center clinical trials.

    Science.gov (United States)

    Higgins, S B; Jiang, K; Plummer, W D; Edens, T R; Stroud, M J; Swindell, B B; Wheeler, A P; Bernard, G R

    1995-01-01

    1. INTRODUCTION. Data collection is a critical component of multi-center clinical trials. Clinical trials conducted in intensive care units (ICU) are even more difficult because the acute nature of illnesses in ICU settings requires that masses of data be collected in a short time. More than a thousand data points are routinely collected for each study patient. The majority of clinical trials are still "paper-based," even if a remote data entry (RDE) system is utilized. The typical RDE system consists of a computer housed in the clinical center (CC) office and connected by modem to a centralized data coordinating center (DCC). Study data must first be recorded on a paper case report form (CRF), transcribed into the RDE system, and transmitted to the DCC. This approach requires additional monitoring since both the paper CRF and study database must be verified. The paper-based RDE system cannot take full advantage of automatic data checking routines. Much of the effort (and expense) of a clinical trial is spent ensuring that study data matches the original patient data. 2. METHODS. We have developed an RDE system, Pivot/Remote, that eliminates the need for paper-based CRFs. It creates an innovative, distributed database. The database resides partially at the study CCs and partially at the DCC. Pivot/Remote is descended from technology introduced with Pivot [1]. Study data is collected at the bedside with laptop computers. A graphical user interface (GUI) allows the display of electronic CRFs that closely mimic the normal paper-based forms. Data entry time is the same as for paper CRFs. Pull-down menus, displaying the possible responses, simplify the process of entering data. Edit checks are performed on most data items. For example, entered dates must conform to some temporal logic imposed by the study. Data must conform to some acceptable range of values. Calculations, such as computing the subject's age or the APACHE II score, are automatically made as the data is entered. Data
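
    The abstract mentions automatic edit checks (temporal logic on dates, range checks, derived values such as age or the APACHE II score); the sketch below is a hypothetical illustration of that kind of validation, not Pivot/Remote code, and all field names and ranges are invented.

```python
from datetime import date

def check_crf(record):
    """Return a list of edit-check failures for one case report form record.
    Field names and ranges are illustrative assumptions, not the study's."""
    errors = []
    # Temporal logic: enrollment cannot precede birth or follow discharge.
    if not (record["birth_date"] <= record["enroll_date"] <= record["discharge_date"]):
        errors.append("dates violate birth <= enrollment <= discharge")
    # Range check on a physiological value.
    if not (30 <= record["heart_rate"] <= 250):
        errors.append("heart_rate out of plausible range")
    # Derived field computed at entry time, as with age or APACHE II.
    record["age"] = (record["enroll_date"] - record["birth_date"]).days // 365
    return errors

crf = {"birth_date": date(1950, 3, 1), "enroll_date": date(1994, 6, 15),
       "discharge_date": date(1994, 7, 2), "heart_rate": 380}
print(check_crf(crf), "derived age:", crf["age"])
```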

  14. In vivo study on influence of the heterogeneity of tissues in the dose distribution in high energy X ray therapy

    International Nuclear Information System (INIS)

    Aldred, M.A.

    1987-01-01

    Several authors have investigated the effect of tissue heterogeneity on the dose distribution in radiation therapy. Practically all of them carried out ''in vitro'' measurements using a solid body immersed in a water phantom in order to simulate inhomogeneities such as bone, air cavities, etc. In the present work, ''in vivo'' measurements were performed using thermoluminescent dosimeters, whose appropriateness and convenience are well known. Eight patients at the Instituto de Radioterapia Oswaldo Cruz who were undergoing irradiation treatment of the pelvic region were selected. The ratio between the body entry radiation dose and the corresponding exit dose, when compared to the same ratio for a homogeneous phantom, gives the influence of the heterogeneity of the tissue the radiation crosses. The results for those eight patients have shown that the ''in vivo'' measurements present a ratio about 8% smaller than in the homogeneous phantom case. (author) [pt

  15. Access Selection Algorithm of Heterogeneous Wireless Networks for Smart Distribution Grid Based on Entropy-Weight and Rough Set

    Science.gov (United States)

    Xiang, Min; Qu, Qinqin; Chen, Cheng; Tian, Li; Zeng, Lingkang

    2017-11-01

    To improve the reliability of communication services in the smart distribution grid (SDG), an access selection algorithm for heterogeneous wireless networks based on dynamic network status and different service types was proposed. The network performance index values were obtained in real time by a multimode terminal, and the variation trend of the index values was analyzed with a growth matrix. The index weights were calculated by the entropy-weight method and then modified by rough set theory to obtain the final weights. Grey relational analysis was then used to rank the candidate networks, and the optimum communication network was selected. Simulation results show that the proposed algorithm can effectively implement dynamic access selection in the heterogeneous wireless networks of the SDG and reduce the network blocking probability.
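
    The growth-matrix and rough-set steps are not reproduced here; the sketch below shows only the entropy-weighting and grey relational ranking stages on a made-up decision matrix (rows are candidate networks, columns are benefit-type indices), as an assumption-laden illustration of the general approach.

```python
import numpy as np

# Toy decision matrix: 3 candidate networks x 4 benefit-type indices
# (e.g. bandwidth, 1/delay, 1/loss, signal quality), already scaled to (0, 1].
X = np.array([[0.8, 0.6, 0.9, 0.5],
              [0.6, 0.9, 0.7, 0.8],
              [0.9, 0.5, 0.6, 0.7]])

def entropy_weights(X):
    """Entropy-weight method: low-entropy (more discriminating) indices get
    larger weights. Assumes all entries are strictly positive."""
    P = X / X.sum(axis=0)
    E = -(P * np.log(P)).sum(axis=0) / np.log(X.shape[0])
    d = 1.0 - E
    return d / d.sum()

def grey_relational_grades(X, w, rho=0.5):
    """Grey relational grade of each network against the ideal network."""
    delta = np.abs(X - X.max(axis=0))        # distance to the per-index best
    xi = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
    return xi @ w

w = entropy_weights(X)                       # rough-set correction omitted here
grades = grey_relational_grades(X, w)
print("weights:", w.round(3), "best network:", int(grades.argmax()))
```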

  16. Patterns in the distribution of vegetation in paramo areas: heterogeneity and spatial dependence

    OpenAIRE

    Arellano-P., Henry; Rangel-CH, J. Orlando

    2012-01-01

    Two methods of exploratory spatial data analysis (ESDA), analysis of spatial heterogeneity and analysis of spatial dependence (auto-correlation), were applied to the cover patterns from ten paramo localities in the Central and Eastern cordilleras of Colombia. Among the localities studied, the high montane region of the Serrania de Perija, the paramo region of Los Nevados National Park, and the paramo region under the management of CORPOGUAVIO showed a good state of conservation and a satisfactory level of conn...

  17. Spatial heterogeneity and the distribution of bromeliad pollinators in the Atlantic Forest

    Science.gov (United States)

    Varassin, Isabela Galarda; Sazima, Marlies

    2012-08-01

    Interactions between plants and their pollinators are influenced by environmental heterogeneity, resulting in small-scale variations in interactions. This may influence pollinator co-existence and plant reproductive success. This study, conducted at the Estação Biológica de Santa Lúcia (EBSL), a remnant of the Atlantic Forest in southeastern Brazil, investigated the effect of small-scale spatial variations on the interactions between bromeliads and their pollinators. Overall, hummingbirds pollinated 19 of 23 bromeliad species, of which 11 were also pollinated by bees and/or butterflies. However, spatial heterogeneity unrelated to the spatial location of plots or bromeliad species abundance influenced the presence of pollinators. Hummingbirds were the most ubiquitous pollinators at the high-elevation transect, with insect participation clearly declining as transect elevation increased. In the redundancy analysis, the presence of the hummingbird species Phaethornis eurynome, Phaethornis squalidus, Ramphodon naevius, and Thalurania glaucopis, and the butterfly species Heliconius erato and Heliconius nattereri in each plot was correlated with environmental factors such as bromeliad and tree abundance, and was also correlated with horizontal diversity. Since plant-pollinator interactions varied within the environmental mosaics at the study site, this small-scale environmental heterogeneity may relax competition among pollinators, and may explain the high diversity of bromeliads and pollinators generally found in the Atlantic Forest.

  18. An optimized approach for simultaneous horizontal data fragmentation and allocation in Distributed Database Systems (DDBSs).

    Science.gov (United States)

    Amer, Ali A; Sewisy, Adel A; Elgendy, Taha M A

    2017-12-01

    With the continuing advances in the field of data and information management, the Distributed Database System (DDBS) remains the most widely demanded tool for handling constantly growing volumes of data. However, the efficiency and adequacy of a DDBS depend strongly on the reliability and precision of the process by which it is designed. Several design strategies have therefore been developed in the literature to improve DDBS performance. Of these, data fragmentation, data allocation and replication, and site clustering are the most widely used techniques; without them, DDBS design and operation would be prohibitively expensive. On the one hand, accurate, well-architected data fragmentation and allocation greatly increase data locality and the overall DDBS throughput. On the other hand, a practical site clustering process contributes markedly to reducing the overall Transmission Costs (TC). Consolidating all of these strategies into a single approach therefore promises a substantial improvement in DDBS performance. In this paper, an optimized heuristic horizontal fragmentation and allocation approach is developed in which the strategies above are combined into a single effective method. Internal and external evaluations are presented, and the experimental findings are clearly in favor of improved DDBS performance.
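
    The paper's heuristic is not reproduced here; the sketch below merely illustrates the two ingredients it combines, predicate-based horizontal fragmentation of a relation and allocation of each fragment to the site that accesses it most, using invented predicates and query frequencies.

```python
# Hypothetical simple predicates over an "orders" relation and per-site
# access frequencies for each resulting horizontal fragment.
predicates = {
    "frag_low":  lambda row: row["amount"] < 1_000,
    "frag_high": lambda row: row["amount"] >= 1_000,
}
access_freq = {            # access_freq[fragment][site] = queries per day
    "frag_low":  {"site_A": 120, "site_B": 10, "site_C": 35},
    "frag_high": {"site_A": 5,   "site_B": 90, "site_C": 40},
}

def fragment(rows):
    """Split tuples into disjoint horizontal fragments by predicate."""
    return {name: [r for r in rows if pred(r)] for name, pred in predicates.items()}

def allocate(freq):
    """Non-replicated allocation: place each fragment at its hottest site,
    which maximizes local access and so reduces Transmission Costs (TC)."""
    return {frag: max(sites, key=sites.get) for frag, sites in freq.items()}

rows = [{"id": 1, "amount": 250}, {"id": 2, "amount": 4_000}, {"id": 3, "amount": 800}]
print({k: len(v) for k, v in fragment(rows).items()})
print(allocate(access_freq))
```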

  19. Quantitative multi-scale analysis of mineral distributions and fractal pore structures for a heterogeneous Junger Basin shale

    International Nuclear Information System (INIS)

    Wang, Y.D.; Ren, Y.Q.; Hu, T.; Deng, B.; Xiao, T.Q.; Liu, K.Y.; Yang, Y.S.

    2016-01-01

    Three-dimensional (3D) characterization of shales has recently attracted wide attention owing to the growing importance of shale oil and gas. Obtaining a complete 3D compositional distribution of shale has proven to be challenging due to its multi-scale characteristics. A combined multi-energy X-ray micro-CT technique and data-constrained modelling (DCM) approach has been used to quantitatively investigate the multi-scale mineral and porosity distributions of a heterogeneous shale from the Junger Basin, northwestern China, by sub-sampling. The 3D sub-resolution structures of minerals and pores in the samples are quantitatively obtained as partial volume fraction distributions, with colours representing compositions. The shale sub-samples from two areas have different physical structures for minerals and pores, with the dominant minerals being feldspar and dolomite, respectively. Significant heterogeneities have been observed in the analysis. The sub-voxel-sized pores form large interconnected clusters with fractal structures. The fractal dimensions of the largest clusters for both sub-samples were quantitatively calculated and found to be 2.34 and 2.86, respectively. The results are relevant to quantitative modelling of gas transport in shale reservoirs
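
    The abstract reports fractal dimensions of 2.34 and 2.86 for the connected pore clusters; a generic box-counting estimate on a binary 3-D pore image (synthetic here) would look roughly as follows. This is only an illustrative sketch, not the DCM workflow used in the study.

```python
import numpy as np

def box_counting_dimension(volume, sizes=(2, 4, 8, 16)):
    """Estimate the fractal (box-counting) dimension of a binary 3-D image."""
    counts = []
    for s in sizes:
        n = volume.shape[0] // s
        # Count boxes of edge length s that contain at least one pore voxel.
        boxes = volume[:n*s, :n*s, :n*s].reshape(n, s, n, s, n, s)
        counts.append(boxes.any(axis=(1, 3, 5)).sum())
    # Slope of log(count) versus log(1/size) gives the dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

pores = np.random.rand(64, 64, 64) < 0.2        # synthetic pore voxels
print("box-counting dimension ~", round(box_counting_dimension(pores), 2))
```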

  20. Reliability models for a nonrepairable system with heterogeneous components having a phase-type time-to-failure distribution

    International Nuclear Information System (INIS)

    Kim, Heungseob; Kim, Pansoo

    2017-01-01

    This research paper presents practical stochastic models for designing and analyzing the time-dependent reliability of nonrepairable systems. The models are formulated for nonrepairable systems with heterogeneous components having phase-type time-to-failure distributions, using a structured continuous-time Markov chain (CTMC). The versatility of phase-type distributions enhances the flexibility and practicality of the models, allowing reliability studies to go beyond previous work. This study applies the new models to a redundancy allocation problem (RAP). The implications of mixing components, redundancy levels, and redundancy strategies are considered simultaneously to maximize the reliability of a system. An imperfect switching case in a standby redundant system is also considered. Furthermore, experimental results for a well-known RAP benchmark problem are presented to demonstrate the approximation error of the previous reliability function for a standby redundant system and the usefulness of the current research. - Highlights: • A phase-type time-to-failure distribution is used for components. • A reliability model for nonrepairable systems is developed using a Markov chain. • The system is composed of heterogeneous components. • The model provides the exact value of standby system reliability, not an approximation. • A redundancy allocation problem is used to show the usefulness of the model.
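
    A phase-type time to failure is defined by an initial probability vector alpha and the sub-generator matrix S of a CTMC with one absorbing (failed) state, so that R(t) = alpha·exp(St)·1. The sketch below evaluates this for an invented two-phase component; it illustrates the distribution family only, not the paper's RAP models.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical 2-phase component: degrades from phase 1 to phase 2 at rate
# 0.5/yr, fails from phase 1 at 0.1/yr and from phase 2 at 0.8/yr.
alpha = np.array([1.0, 0.0])                # starts in phase 1
S = np.array([[-0.6, 0.5],
              [ 0.0, -0.8]])                # sub-generator (absorbing state dropped)

def reliability(t):
    """R(t) = alpha · exp(S t) · 1 for a phase-type time to failure."""
    return float(alpha @ expm(S * t) @ np.ones(len(alpha)))

for t in (1.0, 5.0, 10.0):
    print(f"R({t}) = {reliability(t):.3f}")
```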

  1. Relational databases

    CERN Document Server

    Bell, D A

    1986-01-01

    Relational Databases explores the major advances in relational databases and provides a balanced analysis of the state of the art in relational databases. Topics covered include capture and analysis of data placement requirements; distributed relational database systems; data dependency manipulation in database schemata; and relational database support for computer graphics and computer aided design. This book is divided into three sections and begins with an overview of the theory and practice of distributed systems, using the example of INGRES from Relational Technology as illustration. The

  2. Adaptive Monitoring and Control Architectures for Power Distribution Grids over Heterogeneous ICT Networks

    DEFF Research Database (Denmark)

    Olsen, Rasmus Løvenstein; Hägerling, Christian; Kurtz, Fabian M.

    2014-01-01

    The expected growth in distributed generation will significantly affect the operation and control of today’s distribution grids. Being confronted with short time power variations of distributed generations, the assurance of a reliable service (grid stability, avoidance of energy losses) and the q...... to the reliability due to the stochastic behaviour found in such networks. Therefore, key concepts are presented in this paper targeting the support of proper smart grid control in these network environments. An overview on the required Information and Communication Technology (ICT) architecture and its...

  3. Employing Measures of Heterogeneity and an Object-Based Approach to Extrapolate Tree Species Distribution Data

    Directory of Open Access Journals (Sweden)

    Trevor G. Jones

    2014-07-01

    Full Text Available Information derived from high spatial resolution remotely sensed data is critical for the effective management of forested ecosystems. However, high spatial resolution data-sets are typically costly to acquire and process and usually provide limited geographic coverage. In contrast, moderate spatial resolution remotely sensed data, while not able to provide the spectral or spatial detail required for certain types of products and applications, offer inexpensive, comprehensive landscape-level coverage. This study assessed the use of an object-based approach to extrapolate detailed tree species heterogeneity beyond the extent of hyperspectral/LiDAR flightlines to the broader area covered by a Landsat scene. Using image segments, regression trees established ecologically decipherable relationships between tree species heterogeneity and the spectral properties of Landsat segments. The spectral properties of Landsat bands 4 (NIR: 0.76–0.90 µm), 5 (SWIR: 1.55–1.75 µm) and 7 (SWIR: 2.08–2.35 µm) were consistently selected as predictor variables, explaining approximately 50% of the variance in richness and diversity. Results have important ramifications for ongoing management initiatives in the study area and are applicable to a wide range of applications.
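
    As an illustration of the kind of model used (not the study's calibrated trees), a regression tree relating segment-level values of bands 4, 5 and 7 to a diversity response can be sketched with scikit-learn on synthetic data; the coefficients generating the response below are pure assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Synthetic Landsat segment spectra: columns stand in for bands 4 (NIR),
# 5 (SWIR) and 7 (SWIR) as surface reflectance.
X = rng.uniform(0.05, 0.45, size=(500, 3))
# Synthetic "diversity" response loosely driven by NIR/SWIR contrast + noise.
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(0, 0.05, 500)

tree = DecisionTreeRegressor(max_depth=4).fit(X, y)
print("R^2 on training segments:", round(tree.score(X, y), 2))
print("band importances (4, 5, 7):", tree.feature_importances_.round(2))
```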

  4. Agent-based Integration of Complex and Heterogeneous Distributed Energy Resources in Virtual Power Plants

    DEFF Research Database (Denmark)

    Clausen, Anders; Umair, Aisha; Demazeau, Yves

    2017-01-01

    A Virtual Power Plant aggregates several Distributed Energy Resources in order to expose them as a single, controllable entity. This enables smaller Distributed Energy Resources to take part in Demand Response programs which traditionally only targeted larger consumers. To date, models for Virtual...

  5. Distributed constraint satisfaction for coordinating and integrating a large-scale, heterogeneous enterprise

    CERN Document Server

    Eisenberg, C

    2003-01-01

    Market forces are continuously driving public and private organisations towards higher productivity, shorter process and production times, and fewer labour hours. To cope with these changes, organisations are adopting new organisational models of coordination and cooperation that increase their flexibility, consistency, efficiency, productivity and profit margins. In this thesis an organisational model of coordination and cooperation is examined using a real-life example: the technical integration of a distributed large-scale project of an international physics collaboration. The distributed resource-constrained project scheduling problem is modelled and solved with the methods of distributed constraint satisfaction. A distributed local search method, the distributed breakout algorithm (DisBO), is used as the basis for the coordination scheme. The efficiency of the local search method is improved by extending it with an incremental problem solving scheme with variable ordering. The scheme is implemented as cen...

  6. The Erasmus insurance case and a related questionnaire for distributed database management systems

    NARCIS (Netherlands)

    S.C. van der Made-Potuijt

    1990-01-01

    This is the third report concerning transaction management in the database environment. In the first report the role of the transaction manager in protecting the integrity of a database has been studied [van der Made-Potuijt 1989]. In the second report a model has been given for a

  7. A Distributed Database System for Developing Ontological and Lexical Resources in Harmony

    NARCIS (Netherlands)

    Horák, A.; Vossen, P.T.J.M.; Rambousek, A.; Gelbukh, A.

    2010-01-01

    In this article, we present the basic ideas of creating a new information-rich lexical database of Dutch, called Cornetto, that is interconnected with corresponding English synsets and a formal ontology. The Cornetto database is based on two existing electronic dictionaries - the Referentie Bestand

  8. Indexed University presses: overlap and geographical distribution in five book assessment databases

    Energy Technology Data Exchange (ETDEWEB)

    Mañana-Rodriguez, J.; Gimenez-Toledo, E

    2016-07-01

    Scholarly books remained at the periphery of bibliometric study until recent developments provided tools for assessment purposes. Among scholarly book publishers, University Presses (UPs hereinafter), subject to specific ends and constraints in their publishing activity, might also remain on a second-level periphery despite their relevance as scholarly book publishers. In this study the authors analyze the absolute and relative presence, overlap and uniquely-indexed cases of 503 UPs by country, among five assessment-oriented databases containing data on scholarly book publishers: Book Citation Index, Scopus, Scholarly Publishers Indicators (Spain), the lists of publishers from the Norwegian System (CRISTIN) and the lists of publishers from the Finnish System (JUFO). The comparison between commercial databases and public, national databases points towards a differential pattern: prestigious UPs in the English-speaking world represent larger shares and there is a higher overall percentage of UPs in the commercial databases, while richness and diversity are higher in the national databases. Explicit or de facto biases towards production in English by commercial databases, as well as diverse indexation criteria, might explain the differences observed. The analysis of the presence of UPs in different numbers of databases by country also provides a general picture of the average degree of diffusion of UPs among information systems. The analysis of ‘endemic’ UPs, those indexed only in one of the five databases, points to strongly different compositions of UPs in commercial and non-commercial databases. A combination of commercial and non-commercial databases seems to be the optimal option for assessment purposes, and the validity and desirability of the ongoing debate on the role of UPs can also be concluded. (Author)

  9. A Distributed Dynamic Super Peer Selection Method Based on Evolutionary Game for Heterogeneous P2P Streaming Systems

    Directory of Open Access Journals (Sweden)

    Jing Chen

    2013-01-01

    Full Text Available Due to its high efficiency and good scalability, hierarchical hybrid P2P architecture has recently drawn more and more attention in P2P streaming research and application fields. The problem of super peer selection, which is the key problem in hybrid heterogeneous P2P architectures, is highly challenging because super peers must be selected from a huge and dynamically changing network. A distributed super peer selection (SPS) algorithm for hybrid heterogeneous P2P streaming systems based on evolutionary games is proposed in this paper. The super peer selection procedure is first modeled within an evolutionary game framework, and its evolutionarily stable strategies (ESSs) are analyzed. Then a distributed Q-learning algorithm (ESS-SPS), based on the mixed strategies obtained from this analysis, is proposed so that peers converge to the ESSs based on their own payoff histories. Compared to the traditional random super peer selection scheme, experimental results show that the proposed ESS-SPS algorithm achieves better performance in terms of social welfare and average upload rate of super peers and keeps the upload capacity of the P2P streaming system increasing steadily as the number of peers increases.

  10. Land surface temperature representativeness in a heterogeneous area through a distributed energy-water balance model and remote sensing data

    Directory of Open Access Journals (Sweden)

    C. Corbari

    2010-10-01

    Full Text Available Land surface temperature is the link between soil-vegetation-atmosphere fluxes and soil water content through the energy-water balance. This paper analyses the representativeness of land surface temperature (LST) for a distributed hydrological water balance model (FEST-EWB) using LST from the AHS (airborne hyperspectral scanner), with a spatial resolution of 2–4 m, LST from MODIS, with a spatial resolution of 1000 m, and thermal infrared radiometric ground measurements, which are compared with the representative equilibrium temperature that closes the energy balance equation in the distributed hydrological model.

    Diurnal and nocturnal images are analyzed due to the unstable behaviour of the thermodynamic temperature and to the non-linear effects induced by spatial heterogeneity.

    Spatial autocorrelation and scale of fluctuation of land surface temperature from FEST-EWB and AHS are analysed at different aggregation areas to better understand the scale of representativeness of land surface temperature in a hydrological process.

    The study site is the agricultural area of Barrax (Spain), which is a heterogeneous area with a patchwork of irrigated and non-irrigated vegetated fields and bare soil. The data set used was collected during a field campaign from 10 to 15 July 2005 in the framework of the SEN2FLEX project.

  11. Distributed Data Management on the Petascale using Heterogeneous Grid Infrastructures with DQ2

    CERN Document Server

    Branco, M; Salgado, P; Lassnig, M

    2008-01-01

    We describe Don Quijote 2 (DQ2), a new approach to the management of large scientific datasets by a dedicated middleware. This middleware is designed to handle the data organisation and data movement on the petascale for the High-Energy Physics Experiment ATLAS at CERN. DQ2 is able to maintain a well-defined quality of service in a scalable way, guarantees data consistency for the collaboration and bridges the gap between EGEE, OSG and NorduGrid infrastructures to enable true interoperability. DQ2 is specifically designed to support the access and management of large scientific datasets produced by the ATLAS experiment using heterogeneous Grid infrastructures. The DQ2 middleware manages those datasets with global services, local site services and enduser interfaces. The global services, or central catalogues, are responsible for the mapping of individual files onto DQ2 datasets. The local site services are responsible for tracking files available on-site, managing data movement and guaranteeing consistency of...

  12. Conversion and distribution of bibliographic information for further use on microcomputers with database software such as CDS/ISIS

    International Nuclear Information System (INIS)

    Nieuwenhuysen, P.; Besemer, H.

    1990-05-01

    This paper describes methods to work on microcomputers with data obtained from bibliographic and related databases distributed by online data banks, on CD-ROM or on tape. We also mention some user reactions to this technique, and we list the different types of software needed to perform these services. Afterwards, we report on our development of software to convert data so that they can be entered into UNESCO's program named CDS/ISIS (Version 2.3) for local database management on IBM microcomputers or compatibles; this software allows the preservation of the structure of the source data in records, fields, subfields and field occurrences. (author). 10 refs, 1 fig

  13. Non-invasive assessment of distribution volume ratios and binding potential: tissue heterogeneity and interindividually averaged time-activity curves

    Energy Technology Data Exchange (ETDEWEB)

    Reimold, M.; Mueller-Schauenburg, W.; Dohmen, B.M.; Bares, R. [Department of Nuclear Medicine, University of Tuebingen, Otfried-Mueller-Strasse 14, 72076, Tuebingen (Germany); Becker, G.A. [Nuclear Medicine, University of Leipzig, Leipzig (Germany); Reischl, G. [Radiopharmacy, University of Tuebingen, Tuebingen (Germany)

    2004-04-01

    Due to the stochastic nature of radioactive decay, any measurement of radioactivity concentration requires spatial averaging. In pharmacokinetic analysis of time-activity curves (TAC), such averaging over heterogeneous tissues may introduce a systematic error (heterogeneity error) but may also improve the accuracy and precision of parameter estimation. In addition to spatial averaging (inevitable due to limited scanner resolution and intended in ROI analysis), interindividual averaging may theoretically be beneficial, too. The aim of this study was to investigate the effect of such averaging on the binding potential (BP) calculated with Logan's non-invasive graphical analysis and the ''simplified reference tissue method'' (SRTM) proposed by Lammertsma and Hume, on the basis of simulated and measured positron emission tomography data: [11C]d-threo-methylphenidate (dMP) and [11C]raclopride (RAC) PET. dMP was not quantified with SRTM since the low k2 (washout rate constant from the first tissue compartment) introduced a high noise sensitivity. Even for considerably different shapes of TAC (dMP PET in parkinsonian patients and healthy controls, [11C]raclopride in patients with and without haloperidol medication) and a high variance in the rate constants (e.g. simulated standard deviation of K1 = 25%), the BP obtained from the average TAC was close to the mean BP (<5%). However, unfavourably distributed parameters, especially a correlated large variance in two or more parameters, may lead to larger errors. In Monte Carlo simulations, interindividual averaging before quantification reduced the variance from the SRTM (beyond a critical signal-to-noise ratio) and the bias in Logan's method. Interindividual averaging may further increase accuracy when there is an error term in the reference tissue assumption E = DV2 - DV' (DV2 = distribution volume of the first tissue compartment, DV'

  14. A case study of heterogeneous fleet vehicle routing problem: Touristic distribution application in Alanya

    Directory of Open Access Journals (Sweden)

    Kenan Karagül

    2014-07-01

    Full Text Available In this study, the Fleet Size and Mix Vehicle Routing Problem is considered in order to optimize the distribution of tourists travelling between the airport and the hotels over the shortest distance and at minimum cost. The initial solution space for the related methods is formed as a combination of the Savings algorithm, the Sweep algorithm and random permutation alignment. Then, two well-known solution methods, a Standard Genetic Algorithm and a random search algorithm, are used to change the initial solutions. The computational power of the machine and heuristic algorithms are used instead of human experience and intuition in order to solve the problem of distributing tourists arriving at Antalya airport to hotels in the Alanya region. For this case study, daily data on tourist distributions performed by an agency operating in the Alanya region are considered. These distributions are then modeled as a Vehicle Routing Problem to calculate the solutions for various applications. From comparisons with the decisions of a human expert, it is seen that the proposed methods produce better solutions than human experience and insight. The random search method produces solutions in a more favorable time. In conclusion, owing to the distribution plans offered by the obtained solutions, the agencies may reduce costs by achieving savings of up to 35%.

  15. Extreme deconvolution: Inferring complete distribution functions from noisy, heterogeneous and incomplete observations

    Science.gov (United States)

    Bovy Jo; Hogg, David W.; Roweis, Sam T.

    2011-06-01

    We generalize the well-known mixtures of Gaussians approach to density estimation and the accompanying Expectation-Maximization technique for finding the maximum likelihood parameters of the mixture to the case where each data point carries an individual d-dimensional uncertainty covariance and has unique missing data properties. This algorithm reconstructs the error-deconvolved or "underlying" distribution function common to all samples, even when the individual data points are samples from different distributions, obtained by convolving the underlying distribution with the heteroskedastic uncertainty distribution of the data point and projecting out the missing data directions. We show how this basic algorithm can be extended with conjugate priors on all of the model parameters and a "split-and-merge" procedure designed to avoid local maxima of the likelihood. We demonstrate the full method by applying it to the problem of inferring the three-dimensional velocity distribution of stars near the Sun from noisy two-dimensional, transverse velocity measurements from the Hipparcos satellite.
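
    The full algorithm handles multi-component mixtures, missing data and projections; the sketch below strips it down to the simplest case, recovering the mean and variance of a single underlying 1-D Gaussian from points that each carry their own known noise variance, to show the flavour of the EM updates. The data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: underlying Gaussian N(mu=2, V=1) observed with a different,
# known noise variance s_i for every point (heteroskedastic uncertainties).
n = 2000
truth = rng.normal(2.0, 1.0, n)
s = rng.uniform(0.2, 2.0, n)                 # per-point noise variances
x = truth + rng.normal(0.0, np.sqrt(s))

mu, V = 0.0, 4.0                             # crude starting guesses
for _ in range(200):
    # E-step: posterior mean/variance of each underlying value given x_i.
    gain = V / (V + s)
    b = mu + gain * (x - mu)
    B = gain * s
    # M-step: update the deconvolved ("underlying") distribution parameters.
    mu = b.mean()
    V = ((b - mu) ** 2 + B).mean()

print(f"recovered mu ~ {mu:.2f}, V ~ {V:.2f} (true 2.00, 1.00)")
```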

  16. Geographical Distribution of Biomass Carbon in Tropical Southeast Asian Forests: A Database; TOPICAL

    International Nuclear Information System (INIS)

    Brown, S

    2001-01-01

    A database was generated of estimates of geographically referenced carbon densities of forest vegetation in tropical Southeast Asia for 1980. A geographic information system (GIS) was used to incorporate spatial databases of climatic, edaphic, and geomorphological indices and vegetation to estimate potential (i.e., in the absence of human intervention and natural disturbance) carbon densities of forests. The resulting map was then modified to estimate actual 1980 carbon density as a function of population density and climatic zone. The database covers the following 13 countries: Bangladesh, Brunei, Cambodia (Campuchea), India, Indonesia, Laos, Malaysia, Myanmar (Burma), Nepal, the Philippines, Sri Lanka, Thailand, and Vietnam. The data sets within this database are provided in three file formats: ARC/INFO(trademark) exported integer grids, ASCII (American Standard Code for Information Interchange) files formatted for raster-based GIS software packages, and generic ASCII files with x, y coordinates for use with non-GIS software packages

  17. Explaining local-scale species distributions: relative contributions of spatial autocorrelation and landscape heterogeneity for an avian assemblage.

    Directory of Open Access Journals (Sweden)

    Brady J Mattsson

    Full Text Available Understanding interactions between mobile species distributions and landcover characteristics remains an outstanding challenge in ecology. Multiple factors could explain species distributions, including endogenous evolutionary traits leading to conspecific clustering and exogenous habitat features that support life history requirements. Birds are a useful taxon for examining hypotheses about the relative importance of these factors among species in a community. We developed a hierarchical Bayes approach to model the relationships between bird species occupancy and local landcover variables accounting for spatial autocorrelation, species similarities, and partial observability. We fit alternative occupancy models to detections of 90 bird species observed during repeat visits to 316 point-counts forming a 400-m grid throughout the Patuxent Wildlife Research Refuge in Maryland, USA. Models with landcover variables performed significantly better than our autologistic and null models, supporting the hypothesis that local landcover heterogeneity is important as an exogenous driver for species distributions. Conspecific clustering alone was a comparatively poor descriptor of local community composition, but there was evidence for spatial autocorrelation in all species. Considerable uncertainty remains as to whether landcover combined with spatial autocorrelation is the most parsimonious description of bird species distributions at a local scale. Spatial structuring may be weaker at intermediate scales within which dispersal is less frequent, information flows are localized, and landcover types become spatially diversified and therefore exhibit little aggregation. Examining such hypotheses across species assemblages contributes to our understanding of community-level associations with conspecifics and landscape composition.

  18. VLab: A Science Gateway for Distributed First Principles Calculations in Heterogeneous High Performance Computing Systems

    Science.gov (United States)

    da Silveira, Pedro Rodrigo Castro

    2014-01-01

    This thesis describes the development and deployment of a cyberinfrastructure for distributed high-throughput computations of materials properties at high pressures and/or temperatures--the Virtual Laboratory for Earth and Planetary Materials--VLab. VLab was developed to leverage the aggregated computational power of grid systems to solve…

  19. Heterogeneous distribution of Zn stable isotopes in mice and applications to medical sciences

    Science.gov (United States)

    Moynier, F.; Fujii, T.; Shaw, A.; Le Borgne, M.

    2013-12-01

    Zinc is required for the function of more than 300 enzymes involved in many metabolic pathways, and is a vital micronutrient for living organisms. To investigate whether Zn isotopes could be used to better understand metal homeostasis, as well as serve as a biomarker for diseases, we assessed the distribution of natural Zn isotopes in various mouse tissues. We found that, with respect to Zn isotopes, most mouse organs are isotopically distinct and that the total range of variation within one mouse encompasses the variations observed in the Earth's crust. Therefore, biological activity must have a major impact on the distribution of Zn isotopes in inorganic materials. The most striking aspect of the data is that red blood cells and bones are enriched by ~0.5 per mil in 66Zn relative to 64Zn when compared to serum, and up to ~1 per mil when compared to the brain and liver. This fractionation is well explained by the equilibrium distribution of isotopes between different bonding environments of Zn in different organs. Differences in gender and genetic background did not appear to affect the isotopic distribution of Zn. Together, these results suggest a potential use of Zn isotopes as a tracer for dietary Zn and for detecting disturbances in Zn metabolism due to pathological conditions.

  20. The online database MaarjAM reveals global and ecosystemic distribution patterns in arbuscular mycorrhizal fungi (Glomeromycota).

    Science.gov (United States)

    Opik, M; Vanatoa, A; Vanatoa, E; Moora, M; Davison, J; Kalwij, J M; Reier, U; Zobel, M

    2010-10-01

    • Here, we describe a new database, MaarjAM, that summarizes publicly available Glomeromycota DNA sequence data and associated metadata. The goal of the database is to facilitate the description of distribution and richness patterns in this group of fungi. • Small subunit (SSU) rRNA gene sequences and available metadata were collated from all suitable taxonomic and ecological publications. These data have been made accessible in an open-access database (http://maarjam.botany.ut.ee). • Two hundred and eighty-two SSU rRNA gene virtual taxa (VT) were described based on a comprehensive phylogenetic analysis of all collated Glomeromycota sequences. Two-thirds of VT showed limited distribution ranges, occurring in single current or historic continents or climatic zones. Those VT that associated with a taxonomically wide range of host plants also tended to have a wide geographical distribution, and vice versa. No relationships were detected between VT richness and latitude, elevation or vascular plant richness. • The collated Glomeromycota molecular diversity data suggest limited distribution ranges in most Glomeromycota taxa and a positive relationship between the width of a taxon's geographical range and its host taxonomic range. Inconsistencies between molecular and traditional taxonomy of Glomeromycota, and shortage of data from major continents and ecosystems, are highlighted.

  1. Spatial heterogeneity in resource distribution promotes facultative sociality in two trans-Saharan migratory birds.

    Directory of Open Access Journals (Sweden)

    Ainara Cortés-Avizanda

    Full Text Available BACKGROUND: Migrant populations must cope not only with environmental changes in different biomes, but also with the continuous constraints imposed by human-induced changes through landscape transformation and resource patchiness. Theoretical studies suggest that changes in food distribution can promote changes in the social arrangement of individuals without apparent adaptive value. Empirical research on this subject has only been performed at reduced geographical scales and/or for single species. However, the relative contribution of food patchiness and predictability, both in space and time, to abundance and sociality can vary among species, depending on their degree of flexibility. METHODOLOGY/PRINCIPAL FINDINGS: By means of constrained zero-inflated Generalized Additive Models we analysed the spatial distribution of two trans-Saharan avian scavengers that breed (Europe) and winter (Africa) sympatrically, in relation to food availability. In the summering grounds, the probability of finding large numbers of both species increases close to predictable feeding sources, whereas in the wintering grounds, where food resources are widespread, we did not find such aggregation patterns, except for the black kite, which aggregated at desert locust outbreaks. The comparison of diets in both species through stable isotopes revealed that their diets overlapped during summering, but not during wintering. CONCLUSIONS/SIGNIFICANCE: Our results suggest that bird sociality at feeding grounds is closely linked to the pattern of spatial distribution and predictability of trophic resources, which are ultimately induced by human activities. Migrant species can show adaptive foraging strategies to face changing distributions of food availability in both wintering and summering quarters. Understanding these effects is a key aspect for predicting the fitness costs and population consequences of habitat transformations on the viability of endangered migratory species.

  2. Optimal distribution of incentives for public cooperation in heterogeneous interaction environments

    Directory of Open Access Journals (Sweden)

    Xiaojie eChen

    2014-07-01

    Full Text Available In the framework of evolutionary games with institutional reciprocity, limited incentives are available for rewarding cooperators and punishing defectors. In the simplest case, it can be assumed that, depending on their strategies, all players receive equal incentives from the common pool. The question arises, however: what is the optimal distribution of institutional incentives? How should we best reward and punish individuals for cooperation to thrive? We study this problem for the public goods game on a scale-free network. We show that if the synergetic effects of group interactions are weak, the level of cooperation in the population can be maximized simply by adopting the simplest ''equal distribution'' scheme. If synergetic effects are strong, however, it is best to reward high-degree nodes more than low-degree nodes. These distribution schemes for institutional rewards are independent of payoff normalization. For institutional punishment, however, the same optimization problem is more complex, and its solution depends on whether absolute or degree-normalized payoffs are used. We find that degree-normalized payoffs require high-degree nodes to be punished more leniently than low-degree nodes. Conversely, if absolute payoffs count, then high-degree nodes should be punished more strongly than low-degree nodes.

  3. Isotoxic dose escalation in the treatment of lung cancer by means of heterogeneous dose distributions in the presence of respiratory motion

    DEFF Research Database (Denmark)

    Baker, Mariwan; Nielsen, Morten; Hansen, Olfred

    2011-01-01

    To test, in the presence of intrafractional respiration movement, a margin recipe valid for a homogeneous and conformal dose distribution and to test whether the use of smaller margins combined with heterogeneous dose distributions allows an isotoxic dose escalation when respiratory motion...

  4. Evidence for a Heterogeneous Distribution of Water in the Martian Interior

    Science.gov (United States)

    McCubbin, Francis; Boyce, Jeremy W.; Srinvasan, Poorna; Santos, Alison R.; Elardo, Stephen M.; Filiberto, Justin; Steele, Andrew; Shearer, Charles K.

    2016-01-01

    The abundance and distribution of H2O within the terrestrial planets, as well as its timing of delivery, is a topic of vital importance for understanding the chemical and physical evolution of planets and their potential for hosting habitable environments. Analysis of planetary materials from Mars, the Moon, and the eucrite parent body (i.e., asteroid 4Vesta) have confirmed the presence of H2O within their interiors. Moreover, H and N isotopic data from these planetary materials suggests H2O was delivered to the inner solar system very early from a common source, similar in composition to the carbonaceous chondrites. Despite the ubiquity of H2O in the inner Solar System, the only destination with any prospects for past or present habitable environments at this time, outside of the Earth, is Mars. Although the presence of H2O within the martian interior has been confirmed, very little is known regarding its abundance and distribution within the martian interior and how the martian water inventory has changed over time. By combining new analyses of martian apatites within a large number of martian meteorite types with previously published volatile data and recently determined mineral-melt partition coefficients for apatite, we report new insights into the abundance and distribution of volatiles in the martian crust and mantle. Using the subset of samples that did not exhibit crustal contamination, we determined that the enriched shergottite mantle source has 36-73 ppm H2O and the depleted shergottite mantle source has 14-23 ppm H2O. This result is consistent with other observed geochemical differences between enriched and depleted shergottites and supports the idea that there are at least two geochemically distinct reservoirs in the martian mantle. We also estimated the H2O content of the martian crust using the revised mantle H2O abundances and known crust-mantle distributions of incompatible lithophile elements. We determined that the bulk martian crust has

  5. Availability and temporal heterogeneity of water supply affect the vertical distribution and mortality of a belowground herbivore and consequently plant growth.

    Science.gov (United States)

    Tsunoda, Tomonori; Kachi, Naoki; Suzuki, Jun-Ichirou

    2014-01-01

    We examined how the volume and temporal heterogeneity of water supply changed the vertical distribution and mortality of a belowground herbivore, and consequently affected plant biomass. Plantago lanceolata (Plantaginaceae) seedlings were grown at one per pot under different combinations of water volume (large or small volume) and heterogeneity (homogeneous water conditions, watered every day; heterogeneous conditions, watered every 4 days) in the presence or absence of a larva of the belowground herbivorous insect, Anomala cuprea (Coleoptera: Scarabaeidae). The larva was confined to the top feeding zone (top treatment), the middle feeding zone (middle treatment), or the bottom feeding zone (bottom treatment); alternatively no larva was introduced (control treatment) or larval movement was not confined (free treatment). A three-way interaction between water volume, heterogeneity, and the herbivore significantly affected plant biomass. With a large water volume, plant biomass was lower in the free treatment than in the control treatment regardless of heterogeneity. Plant biomass in the free treatment was as low as in the top treatment. With a small water volume and in the free treatment, plant biomass was low (similar to that under top treatment) under homogeneous water conditions but high under heterogeneous ones (similar to that under middle or bottom treatment). Therefore, there was little effect of belowground herbivory on plant growth under heterogeneous water conditions. In the other watering regimes, herbivores would be distributed in the shallow soil and reduce root biomass. Herbivore mortality was high with homogeneous application of a large volume or heterogeneous application of a small water volume. Under the large water volume, plant biomass was high in pots in which the herbivore had died. Thus, the combinations of water volume and heterogeneity affected plant growth via changes in the belowground herbivore's distribution and mortality.

  6. Distributed multimedia database technologies supported by MPEG-7 and MPEG-21

    CERN Document Server

    Kosch, Harald

    2003-01-01

    15 Introduction Multimedia Content: Context Multimedia Systems and Databases (Multi)Media Data and Multimedia Metadata Purpose and Organization of the Book MPEG-7: The Multimedia Content Description Standard Introduction MPEG-7 and Multimedia Database Systems Principles for Creating MPEG-7 Documents MPEG-7 Description Definition Language Step-by-Step Approach for Creating an MPEG-7 Document Extending the Description Schema of MPEG-7 Encoding and Decoding of MPEG-7 Documents for Delivery-Binary Format for MPEG-7 Audio Part of MPEG-7 MPEG-7 Supporting Tools and Referen

  7. Modeling species distributions from heterogeneous data for the biogeographic regionalization of the European bryophyte flora.

    Directory of Open Access Journals (Sweden)

    Rubén G Mateo

    Full Text Available The definition of biogeographic regions provides a fundamental framework for a range of basic and applied questions in biogeography, evolutionary biology, systematics and conservation. Previous research suggested that environmental forcing results in highly congruent regionalization patterns across taxa, but that the size and number of regions depends on the dispersal ability of the taxa considered. We produced a biogeographic regionalization of European bryophytes and hypothesized that (1) regions defined for bryophytes would differ from those defined for other taxa due to the highly specific eco-physiology of the group and (2) their high dispersal ability would result in the resolution of few, large regions. Species distributions were recorded using 10,000 km2 MGRS pixels. Because of the lack of data across large portions of the area, species distribution models employing macroclimatic variables as predictors were used to determine the potential composition of empty pixels. K-means clustering analyses of the pixels based on their potential species composition were employed to define biogeographic regions. The optimal number of regions was determined by v-fold cross-validation and Moran's I statistic. The spatial congruence of the regions identified from their potential bryophyte assemblages with large-scale vegetation patterns is at odds with our primary hypothesis. This reinforces the notion that post-glacial migration patterns might have been much more similar in bryophytes and vascular plants than previously thought. The substantially lower optimal number of clusters and the absence of nested patterns within the main biogeographic regions, as compared to identical analyses in vascular plants, support our second hypothesis. The modelling approach implemented here is, however, based on many assumptions that are discussed but can only be tested when additional data on species distributions become available, highlighting the substantial
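
    The clustering step described above can be sketched as follows. This is a minimal illustration on synthetic pixel-by-species probabilities, using scikit-learn's KMeans; the silhouette criterion stands in for the v-fold cross-validation and Moran's I procedure used in the study.

```python
# Illustrative sketch: group grid pixels by modelled species composition and
# pick a cluster count. Data are synthetic; silhouette replaces the paper's
# v-fold cross-validation / Moran's I selection.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(1)
# rows = 10,000 km2 pixels, columns = modelled probability of occurrence per species
pixels_x_species = rng.random((500, 120))

scores = {}
for k in range(2, 11):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels_x_species)
    scores[k] = silhouette_score(pixels_x_species, km.labels_)

best_k = max(scores, key=scores.get)
regions = KMeans(n_clusters=best_k, n_init=10, random_state=0).fit_predict(pixels_x_species)
print("chosen number of biogeographic regions:", best_k)
```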

  8. MODELING PARTICLE SIZE DISTRIBUTION IN HETEROGENEOUS POLYMERIZATION SYSTEMS USING MULTIMODAL LOGNORMAL FUNCTION

    Directory of Open Access Journals (Sweden)

    J. C. Ferrari

    Full Text Available This work evaluates the use of the multimodal lognormal function to describe Particle Size Distributions (PSDs) of emulsion and suspension polymerization processes, including continuous reactions with particle re-nucleation leading to complex multimodal PSDs. A global optimization algorithm, namely Particle Swarm Optimization (PSO), was used for parameter estimation of the proposed model, minimizing the objective function defined by the mean squared errors. Statistical evaluation of the results indicated that the multimodal lognormal function could describe distinctive features of different types of PSDs with accuracy and consistency.
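
    A compact illustration of the fitting idea follows: a bimodal lognormal mixture is fitted to a synthetic distribution by minimizing the mean squared error with a small particle swarm optimizer. The parameter bounds, swarm settings and data are illustrative assumptions, not those of the paper.

```python
# Sketch: fit a bimodal lognormal PSD by minimizing MSE with a tiny PSO.
import numpy as np

rng = np.random.default_rng(2)

def bimodal_lognormal(d, params):
    w1, mu1, s1, mu2, s2 = params
    def pdf(mu, s):
        return np.exp(-(np.log(d) - mu) ** 2 / (2 * s ** 2)) / (d * s * np.sqrt(2 * np.pi))
    return w1 * pdf(mu1, s1) + (1 - w1) * pdf(mu2, s2)

# synthetic "measured" distribution (diameters in micrometres)
d = np.linspace(0.05, 5.0, 200)
true = bimodal_lognormal(d, (0.4, np.log(0.2), 0.3, np.log(1.5), 0.25))
measured = true + rng.normal(0, 0.01, d.size)

def mse(params):
    return np.mean((bimodal_lognormal(d, params) - measured) ** 2)

# bounds: weight, log-mean 1, sigma 1, log-mean 2, sigma 2
lo = np.array([0.0, np.log(0.05), 0.05, np.log(0.5), 0.05])
hi = np.array([1.0, np.log(0.5), 1.0, np.log(5.0), 1.0])

n, iters, w, c1, c2 = 40, 200, 0.7, 1.5, 1.5       # swarm size, iterations, PSO weights
pos = rng.uniform(lo, hi, (n, 5))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([mse(p) for p in pos])
gbest = pbest[pbest_val.argmin()]

for _ in range(iters):
    r1, r2 = rng.random((n, 5)), rng.random((n, 5))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    vals = np.array([mse(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()]

print("fitted parameters:", np.round(gbest, 3))
```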

  9. Measurement and analysis of neutron flux distribution of STACY heterogeneous core by position sensitive proportional counter. Contract research

    Energy Technology Data Exchange (ETDEWEB)

    Murazaki, Minoru; Uno, Yuichi; Miyoshi, Yoshinori [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2003-03-01

    We have measured neutron flux distribution around the core tank of STACY heterogeneous core by position sensitive proportional counter (PSPC) to develop the method to measure reactivity for subcritical systems. The neutron flux distribution data in the position accuracy of ±13 mm have been obtained in the range of uranium concentration of 50 g/L to 210 g/L both in critical and in subcritical state. The prompt neutron decay constant, α, was evaluated from the measurement data of pulsed neutron source experiments. We also calculated distribution of neutron flux and 3He reaction rates at the location of PSPC by using continuous energy Monte Carlo code MCNP. The measurement data was compared with the calculation results. As results of comparison, calculated values agreed generally with measurement data of PSPC with Cd cover in the region above half of solution height, but the difference between calculated value and measurement data was large in the region below half of solution height. On the other hand, calculated value agreed well with measurement data of PSPC without Cd cover. (author)

  10. Measurement and analysis of neutron flux distribution of STACY heterogeneous core by position sensitive proportional counter. Contract research

    CERN Document Server

    Murazaki, M; Uno, Y

    2003-01-01

    We have measured neutron flux distribution around the core tank of STACY heterogeneous core by position sensitive proportional counter (PSPC) to develop the method to measure reactivity for subcritical systems. The neutron flux distribution data in the position accuracy of ±13 mm have been obtained in the range of uranium concentration of 50 g/L to 210 g/L both in critical and in subcritical state. The prompt neutron decay constant, α, was evaluated from the measurement data of pulsed neutron source experiments. We also calculated distribution of neutron flux and 3He reaction rates at the location of PSPC by using continuous energy Monte Carlo code MCNP. The measurement data was compared with the calculation results. As results of comparison, calculated values agreed generally with measurement data of PSPC with Cd cover in the region above half of solution height, but the difference between calculated value and measurement data was large in the region below half of solution height. On the other hand, ...

  11. Micro-macro model for prediction of local temperature distribution in heterogeneous and two-phase media

    Directory of Open Access Journals (Sweden)

    Furmański Piotr

    2014-09-01

    Full Text Available Heat flow in heterogeneous media with complex microstructure follows a tortuous path, and therefore determination of the temperature distribution in such media is a challenging task. A two-scale, micro-macro model of heat conduction with phase change in such media is considered in the paper. A relation between the temperature distribution on the microscopic level, i.e., on the level of details of the microstructure, and the temperature distribution on the macroscopic level, i.e., on the level where the properties were homogenized and treated as effective, was derived. An expansion applied to this relation allowed a simplified, approximate form to be obtained, corresponding to the separation of micro- and macro-scales. The validity of this model was then checked by performing calculations for a 2D microstructure of a composite made of two constituents. The range of application of the proposed micro-macro model was considered in transient states of heat conduction, both when phase change in the material is present and when it is absent. Variation of the effective thermal conductivity with time was considered, and a criterion was found under which application of the considered model is justified.

  12. The Earth's mantle in a microwave oven: thermal convection driven by a heterogeneous distribution of heat sources

    Science.gov (United States)

    Fourel, Loïc; Limare, Angela; Jaupart, Claude; Surducan, Emanoil; Farnetani, Cinzia G.; Kaminski, Edouard C.; Neamtu, Camelia; Surducan, Vasile

    2017-08-01

    Convective motions in silicate planets are largely driven by internal heat sources and secular cooling. The exact amount and distribution of heat sources in the Earth are poorly constrained and the latter is likely to change with time due to mixing and to the deformation of boundaries that separate different reservoirs. To improve our understanding of planetary-scale convection in these conditions, we have designed a new laboratory setup allowing a large range of heat source distributions. We illustrate the potential of our new technique with a study of an initially stratified fluid involving two layers with different physical properties and internal heat production rates. A modified microwave oven is used to generate a uniform radiation propagating through the fluids. Experimental fluids are solutions of hydroxyethyl cellulose and salt in water, such that salt increases both the density and the volumetric heating rate. We determine temperature and composition fields in 3D with non-invasive techniques. Two fluorescent dyes are used to determine temperature. A Nd:YAG planar laser beam excites fluorescence, and an optical system, involving a beam splitter and a set of colour filters, captures the fluorescence intensity distribution on two separate spectral bands. The ratio between the two intensities provides an instantaneous determination of temperature with an uncertainty of 5% (typically 1K). We quantify mixing processes by precisely tracking the interfaces separating the two fluids. These novel techniques allow new insights on the generation, morphology and evolution of large-scale heterogeneities in the Earth's lower mantle.

  13. DEVELOPMENT OF A HETEROGENIC DISTRIBUTED ENVIRONMENT FOR SPATIAL DATA PROCESSING USING CLOUD TECHNOLOGIES

    Directory of Open Access Journals (Sweden)

    A. S. Garov

    2016-06-01

    Full Text Available We are developing a unified distributed communication environment for processing of spatial data which integrates web-, desktop- and mobile platforms and combines volunteer computing model and public cloud possibilities. The main idea is to create a flexible working environment for research groups, which may be scaled according to required data volume and computing power, while keeping infrastructure costs at minimum. It is based upon the "single window" principle, which combines data access via geoportal functionality, processing possibilities and communication between researchers. Using an innovative software environment the recently developed planetary information system (http://cartsrv.mexlab.ru/geoportal) will be updated. The new system will provide spatial data processing, analysis and 3D-visualization and will be tested based on freely available Earth remote sensing data as well as Solar system planetary images from various missions. Based on this approach it will be possible to organize the research and representation of results on a new technology level, which provides more possibilities for immediate and direct reuse of research materials, including data, algorithms, methodology, and components. The new software environment is targeted at remote scientific teams, and will provide access to existing spatial distributed information for which we suggest implementation of a user interface as an advanced front-end, e.g., for virtual globe system.

  14. Estimation Accuracy on Execution Time of Run-Time Tasks in a Heterogeneous Distributed Environment

    Directory of Open Access Journals (Sweden)

    Qi Liu

    2016-08-01

    Full Text Available Distributed computing has achieved tremendous development since cloud computing was proposed in 2006, and has played a vital role in promoting the rapid growth of data collection and analysis models, e.g., the Internet of Things, Cyber-Physical Systems, Big Data analytics, etc. Hadoop has become a data convergence platform for sensor networks. As one of the core components, MapReduce facilitates allocating, processing and mining of collected large-scale data, where speculative execution strategies help solve straggler problems. However, there is still no efficient solution for accurately estimating the execution time of run-time tasks, which affects task allocation and distribution in MapReduce. In this paper, task execution data have been collected and employed for the estimation. A two-phase regression (TPR) method is proposed to predict the finishing time of each task accurately. Detailed data for each task were analyzed and a detailed analysis report produced. According to the results, the prediction accuracy of concurrent tasks’ execution time can be improved, in particular for some regular jobs.
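
    The estimation idea can be sketched with a simplified two-phase (piecewise linear) regression of task progress against time, from which a finishing time is extrapolated. The data, the breakpoint search and the progress model below are illustrative assumptions rather than the paper's exact TPR formulation.

```python
# Sketch of a two-phase (piecewise linear) fit of task progress vs. time,
# used to extrapolate a running task's finishing time; data are synthetic.
import numpy as np

def two_phase_fit(t, progress):
    """Fit two straight lines joined at the best breakpoint; return (t_break, line1, line2)."""
    best = (np.inf, None)
    for k in range(2, len(t) - 2):                      # candidate breakpoints
        A1 = np.vstack([t[:k], np.ones(k)]).T
        A2 = np.vstack([t[k:], np.ones(len(t) - k)]).T
        c1, res1, *_ = np.linalg.lstsq(A1, progress[:k], rcond=None)
        c2, res2, *_ = np.linalg.lstsq(A2, progress[k:], rcond=None)
        sse = (res1.sum() if res1.size else 0.0) + (res2.sum() if res2.size else 0.0)
        if sse < best[0]:
            best = (sse, (t[k], c1, c2))
    return best[1]

# progress of a map task: slow setup phase, then a faster steady phase
t = np.arange(0, 60, 2.0)
progress = np.where(t < 20, 0.005 * t, 0.1 + 0.02 * (t - 20)) \
    + np.random.default_rng(3).normal(0, 0.005, t.size)

t_break, (a1, b1), (a2, b2) = two_phase_fit(t, progress)
finish_estimate = (1.0 - b2) / a2                       # time when phase 2 reaches progress = 1
print(f"breakpoint at t={t_break:.1f}s, estimated finish at t={finish_estimate:.1f}s")
```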

  15. Individual choice and reputation distribution of cooperative behaviors among heterogeneous groups

    International Nuclear Information System (INIS)

    Lu, Peng

    2015-01-01

    Highlights:
    • Cooperation macrocosmically refers to the overall cooperation rate, while reputation microcosmically records individual choices.
    • Therefore, reputation should be preferred in order to investigate how individual choices evolve.
    • Both the mean and standard deviation of reputation follow clear patterns, and some factors have quadratic effects on them.
    Abstract: Cooperation is vital for our society, but the temptation of cheating on cooperative partners undermines cooperation. The mechanism of reputation is introduced to countervail this temptation and therefore promote cooperation. Reputation microcosmically records individual choices, while cooperation macrocosmically refers to the group or averaged cooperation level. Reputation should be preferred in order to investigate how individual choices evolve. In this work, we study the distribution of reputation to figure out how individuals make choices within cooperation and defection. We decompose reputation into its mean and standard deviation and inspect the effects of their factors respectively. To achieve this goal, we construct a model where agents of three groups or classes play the prisoners' dilemma game with neighbors on a square lattice. The outcomes indicate that the distribution of reputation is distinct from that of cooperation and that both the mean and standard deviation of reputation follow clear patterns. Some factors have negative quadratic effects on reputation's mean or standard deviation, and some have merely linear effects.

  16. Determining space-energy distribution of thermal neutrons in heterogeneous cylindrically symmetric reactor cell, Master Thesis

    International Nuclear Information System (INIS)

    Matausek, M. V.

    1966-06-01

    A combination of the multigroup method and the P3 approximation of the spherical harmonics method was chosen for calculating the space-energy distribution of thermal neutron flux in an elementary reactor cell. Application of these methods reduced the solution of the complicated transport equation to the problem of solving an inhomogeneous system of six ordinary first-order differential equations. A procedure is proposed which avoids numerical solution and enables analytical solution when applying certain approximations. Based on this approach, computer codes were written for the ZUSE-Z-23 computer: the SIGMA code for calculating group constants for a given material, and the MULTI code, which uses the results of SIGMA as input and calculates the spatial and energy distribution of thermal neutron flux in a reactor cell. Calculations of thermal neutron spectra for a number of reactor cells were compared to results available from the literature. Agreement was satisfactory in all the cases, which proved the correctness of the applied method. Some possibilities for improving the precision and accelerating the calculation process were found during the calculations. (author)

  17. Development of a Heterogenic Distributed Environment for Spatial Data Processing Using Cloud Technologies

    Science.gov (United States)

    Garov, A. S.; Karachevtseva, I. P.; Matveev, E. V.; Zubarev, A. E.; Florinsky, I. V.

    2016-06-01

    We are developing a unified distributed communication environment for processing of spatial data which integrates web-, desktop- and mobile platforms and combines volunteer computing model and public cloud possibilities. The main idea is to create a flexible working environment for research groups, which may be scaled according to required data volume and computing power, while keeping infrastructure costs at minimum. It is based upon the "single window" principle, which combines data access via geoportal functionality, processing possibilities and communication between researchers. Using an innovative software environment the recently developed planetary information system (http://cartsrv.mexlab.ru/geoportal) will be updated. The new system will provide spatial data processing, analysis and 3D-visualization and will be tested based on freely available Earth remote sensing data as well as Solar system planetary images from various missions. Based on this approach it will be possible to organize the research and representation of results on a new technology level, which provides more possibilities for immediate and direct reuse of research materials, including data, algorithms, methodology, and components. The new software environment is targeted at remote scientific teams, and will provide access to existing spatial distributed information for which we suggest implementation of a user interface as an advanced front-end, e.g., for virtual globe system.

  18. Bench-mark experiments to study the neutron distribution in a heterogeneous reactor shielding

    International Nuclear Information System (INIS)

    Bolyatko, V.V.; Vyrskij, M.Yu.; Mashkovich, V.P.; Nagaev, R.Kh.; Prit'mov, A.P.; Sakharov, V.K.; Troshin, V.S.; Tikhonov, E.G.

    1981-01-01

    The bench-mark experiments performed at the B-2 facility of the BR-10 reactor to investigate the spatial and energy neutron distributions are described. The experimental facility includes a neutron beam channel with a slide; the shielding composition investigated consisted of sequential layers of steel (1KH18N9T) and graphite slabs. The neutron spectra were measured by the activation method, using a set of threshold and resonance detectors. The detectors made it possible to obtain the absolute neutron spectra in the 1.4 eV-10 MeV range. The comparison of calculations with the results of the bench-mark experiments made it possible to prove the neutron transport calculational model realized in the ROZ-9 and ARAMAKO-2F computer codes and to evaluate the validity of the ARAMAKO constants for the class of shielding compositions in question.

  19. Development and Field Test of a Real-Time Database in the Korean Smart Distribution Management System

    Directory of Open Access Journals (Sweden)

    Sang-Yun Yun

    2014-03-01

    Full Text Available Recently, a distribution management system (DMS) that can conduct periodic system analysis and control by mounting various application programs has been actively developed. In this paper, we summarize the development and demonstration of a database structure that can perform real-time system analysis and control of the Korean smart distribution management system (KSDMS). The developed database structure consists of a common information model (CIM)-based off-line database (DB), a physical DB (PDB) for DB establishment of the operating server, a real-time DB (RTDB) for real-time server operation and remote terminal unit data interconnection, and an application common model (ACM) DB for running application programs. The ACM DB for real-time system analysis and control of the application programs was developed by using a parallel table structure and a linked-list model, thereby providing fast input and output as well as high execution speed of application programs. Furthermore, the ACM DB was configured with hierarchical and non-hierarchical data models to reflect the system models, improving DB size and operation speed by excluding system elements unnecessary for analysis and control. The proposed database model was implemented and tested at the Gochaing and Jeju offices using a real system. Through data measurement of the remote terminal units, and through the operation and control of the application programs using the measurements, the performance, speed, and integrity of the proposed database model were validated, thereby demonstrating that this model can be applied to real systems.

  20. An improved energy aware distributed unequal clustering protocol for heterogeneous wireless sensor networks

    Directory of Open Access Journals (Sweden)

    Vrinda Gupta

    2016-06-01

    Full Text Available In this paper, an improved version of the energy aware distributed unequal clustering protocol (EADUC) is proposed. The EADUC protocol is commonly used for solving the energy hole problem in multi-hop wireless sensor networks. In the EADUC, the location of the base station and the residual energy are given importance as clustering parameters. Based on these parameters, different competition radii are assigned to nodes. Herein, a new approach is proposed to improve the working of EADUC by electing cluster heads considering the number of nodes in the neighborhood in addition to the above two parameters. The inclusion of the neighborhood information in the computation of the competition radii provides better balancing of energy in comparison with the existing approach. Furthermore, for the selection of the next-hop node, the relay metric is defined directly in terms of energy expense instead of only the distance information used in the EADUC, and the data transmission phase is extended in every round by performing data collection multiple times through the use of major slots and mini-slots. The methodology of retaining the same clusters for a few rounds is effective in reducing the clustering overhead. The performance of the proposed protocol has been evaluated under three different scenarios and compared with existing protocols through simulations. The results show that the proposed scheme outperforms the existing protocols in terms of network lifetime in all the scenarios.
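
    A rough sketch of the competition-radius idea follows: residual energy, distance to the base station and neighbourhood size jointly shrink or enlarge a node's radius. The weighting factors and the exact combination rule are illustrative assumptions, not the protocol's published equations.

```python
# Sketch of an EADUC-style competition radius: smaller radius near the base
# station, for low-energy nodes, and in dense neighbourhoods. Weights are illustrative.
import math, random

random.seed(4)
BS = (50.0, 150.0)
R_MAX, ALPHA, BETA, GAMMA = 40.0, 0.4, 0.3, 0.2   # max radius and weighting factors

nodes = [{"id": i,
          "pos": (random.uniform(0, 100), random.uniform(0, 100)),
          "energy": random.uniform(0.2, 1.0)} for i in range(100)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

d_bs = [dist(n["pos"], BS) for n in nodes]
d_min, d_max = min(d_bs), max(d_bs)
e_max = max(n["energy"] for n in nodes)
neigh = [sum(1 for m in nodes if m is not n and dist(n["pos"], m["pos"]) <= R_MAX)
         for n in nodes]
n_max = max(neigh)

def competition_radius(i):
    """Combine normalized distance-to-BS, residual energy and neighbour count."""
    f_dist = (d_bs[i] - d_min) / (d_max - d_min)          # far from BS -> larger radius
    f_energy = 1.0 - nodes[i]["energy"] / e_max           # low energy  -> smaller radius
    f_neigh = neigh[i] / n_max                            # dense area  -> smaller radius
    return R_MAX * (1.0 - ALPHA * f_energy - BETA * (1.0 - f_dist) - GAMMA * f_neigh)

radii = [competition_radius(i) for i in range(len(nodes))]
print("radius range:", round(min(radii), 1), "-", round(max(radii), 1))
```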

  1. Seismic Search Engine: A distributed database for mining large scale seismic data

    Science.gov (United States)

    Liu, Y.; Vaidya, S.; Kuzma, H. A.

    2009-12-01

    The International Monitoring System (IMS) of the CTBTO collects terabytes worth of seismic measurements from many receiver stations situated around the earth with the goal of detecting underground nuclear testing events and distinguishing them from other benign, but more common events such as earthquakes and mine blasts. The International Data Center (IDC) processes and analyzes these measurements, as they are collected by the IMS, to summarize event detections in daily bulletins. Thereafter, the data measurements are archived into a large format database. Our proposed Seismic Search Engine (SSE) will facilitate a framework for data exploration of the seismic database as well as the development of seismic data mining algorithms. Analogous to GenBank, the annotated genetic sequence database maintained by NIH, through SSE, we intend to provide public access to seismic data and a set of processing and analysis tools, along with community-generated annotations and statistical models to help interpret the data. SSE will implement queries as user-defined functions composed from standard tools and models. Each query is compiled and executed over the database internally before reporting results back to the user. Since queries are expressed with standard tools and models, users can easily reproduce published results within this framework for peer-review and making metric comparisons. As an illustration, an example query is “what are the best receiver stations in East Asia for detecting events in the Middle East?” Evaluating this query involves listing all receiver stations in East Asia, characterizing known seismic events in that region, and constructing a profile for each receiver station to determine how effective its measurements are at predicting each event. The results of this query can be used to help prioritize how data is collected, identify defective instruments, and guide future sensor placements.

  2. The spatial heterogeneity between Japanese encephalitis incidence distribution and environmental variables in Nepal.

    Directory of Open Access Journals (Sweden)

    Daniel E Impoinvil

    Full Text Available To identify potential environmental drivers of Japanese encephalitis virus (JE) transmission in Nepal, we conducted an ecological study to determine the spatial association between 2005 Nepal JE incidence and climate, agricultural, and land-cover variables at the district level. District-level data on JE cases were examined using Local Indicators of Spatial Association (LISA) analysis to identify spatial clusters from 2004 to 2008, and 2005 data were used to fit a spatial lag regression model with climate, agriculture and land-cover variables. Prior to 2006, there was a single large cluster of JE cases located in the Far-West and Mid-West terai regions of Nepal. After 2005, the distribution of JE cases in Nepal shifted, with clusters found in the central hill areas. JE incidence during the 2005 epidemic had a stronger association with May mean monthly temperature and April mean monthly total precipitation compared to mean annual temperature and precipitation. A parsimonious spatial lag regression model revealed (1) a significant negative relationship between JE incidence and April precipitation, (2) a significant positive relationship between JE incidence and the percentage of irrigated land, (3) a non-significant negative relationship between JE incidence and the percentage of grassland cover, and (4) a unimodal non-significant relationship between JE incidence and the pig-to-human ratio. JE cases clustered in the terai prior to 2006, after which the clustering seemed to shift to the Kathmandu region in subsequent years. The spatial pattern of JE cases during the 2005 epidemic in Nepal was significantly associated with low precipitation and the percentage of irrigated land. Despite the availability of an effective vaccine, it is still important to understand environmental drivers of JEV transmission since the enzootic cycle of JEV transmission is not likely to be totally interrupted. Understanding the spatial dynamics of JE risk factors may be useful in providing important information to the

  3. Effect of heterogeneous distribution of crosslink density on physical properties of radiation vulcanized NR (Natural Rubber) latex film

    International Nuclear Information System (INIS)

    Keizo Makuuchi; Fumio Yoshii; Miura, H.; Murakami, K.

    1996-01-01

    Thus a study has been carried out to investigate the effect of particle-to-particle variation in crosslink density on the physical properties of radiation-vulcanized NR latex film. NR latex was irradiated in small bottles by γ-rays, without a vulcanization accelerator, to provide latex rubber particles having a homogeneous distribution of crosslink density. The doses were 30, 50, 100, 250, 300, 400, 500 and 600 kGy. The weight swelling ratio, gel fraction, tensile strength and elongation at break of the latex film from the mixed latex were measured. The vulcanization dose of this latex was 250 kGy. The two different latexes were then mixed in such a way as to give an average dose of 250 kGy, to prepare a latex consisting of rubber particles having a heterogeneous distribution of crosslink density. The tensile strength of the latex film was depressed by mixing. The reduction increased with the decrease in gel fraction caused by mixing. However, the reduction was not serious when the dose difference between the two latexes was less than 200 kGy.

  4. A normalization method for combination of laboratory test results from different electronic healthcare databases in a distributed research network.

    Science.gov (United States)

    Yoon, Dukyong; Schuemie, Martijn J; Kim, Ju Han; Kim, Dong Ki; Park, Man Young; Ahn, Eun Kyoung; Jung, Eun-Young; Park, Dong Kyun; Cho, Soo Yeon; Shin, Dahye; Hwang, Yeonsoo; Park, Rae Woong

    2016-03-01

    Distributed research networks (DRNs) afford statistical power by integrating observational data from multiple partners for retrospective studies. However, laboratory test results across care sites are derived using different assays from varying patient populations, making it difficult to simply combine data for analysis. Additionally, existing normalization methods are not suitable for retrospective studies. We normalized laboratory results from different data sources by adjusting for heterogeneous clinico-epidemiologic characteristics of the data and called this the subgroup-adjusted normalization (SAN) method. Subgroup-adjusted normalization renders the means and standard deviations of distributions identical under population structure-adjusted conditions. To evaluate its performance, we compared SAN with existing methods for simulated and real datasets consisting of blood urea nitrogen, serum creatinine, hematocrit, hemoglobin, serum potassium, and total bilirubin. Various clinico-epidemiologic characteristics can be applied together in SAN. For simplicity of comparison, age and gender were used to adjust for population heterogeneity in this study. In simulations, SAN had the lowest standardized difference in means (SDM) and Kolmogorov-Smirnov values for all tests, and SAN normalization performed better than normalization using the other methods. The SAN method is applicable in a DRN environment and should facilitate analysis of data integrated across DRN partners for retrospective observational studies. Copyright © 2015 John Wiley & Sons, Ltd.
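
    A minimal sketch of the subgroup-adjusted idea, assuming pandas is available and using synthetic creatinine values: each result is z-scored within its (age band, sex) subgroup at the source site and rescaled to the reference site's subgroup statistics. The column names and subgroup definitions are illustrative, not the paper's implementation.

```python
# Sketch of subgroup-adjusted normalization across two sites with different assays.
import pandas as pd
import numpy as np

rng = np.random.default_rng(5)
def fake_site(mean, sd, n):
    return pd.DataFrame({
        "age_band": rng.choice(["<40", "40-65", ">65"], n),
        "sex": rng.choice(["F", "M"], n),
        "creatinine": rng.normal(mean, sd, n)})

site_a = fake_site(0.9, 0.2, 1000)     # reference site
site_b = fake_site(1.1, 0.3, 1000)     # site with a different assay / population mix

def san_normalize(source, reference, value_col, group_cols):
    """Z-score within each subgroup of the source, then map onto the reference subgroup stats."""
    ref_stats = reference.groupby(group_cols)[value_col].agg(["mean", "std"])
    src_stats = source.groupby(group_cols)[value_col].agg(["mean", "std"])
    out = source.copy()
    key = list(zip(*(source[c] for c in group_cols)))
    z = (source[value_col].to_numpy() - src_stats.loc[key, "mean"].to_numpy()) \
        / src_stats.loc[key, "std"].to_numpy()
    out[value_col + "_san"] = z * ref_stats.loc[key, "std"].to_numpy() \
        + ref_stats.loc[key, "mean"].to_numpy()
    return out

normalized_b = san_normalize(site_b, site_a, "creatinine", ["age_band", "sex"])
print(normalized_b.groupby(["age_band", "sex"])["creatinine_san"].mean().round(2))
```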

  5. Determination of the relative power density distribution in a heterogeneous reactor from the results of measurements of the reactivity effects and the neutron importance function

    International Nuclear Information System (INIS)

    Bobrov, A. A.; Glushkov, E. S.; Zimin, A. A.; Kapitonova, A. V.; Kompaniets, G. V.; Nosov, V. I.; Petrushenko, R. P.; Smirnov, O. N.

    2012-01-01

    A method for experimental determination of the relative power density distribution in a heterogeneous reactor based on measurements of fuel reactivity effects and importance of neutrons from a californium source is proposed. The method was perfected on two critical assembly configurations at the NARCISS facility of the Kurchatov Institute, which simulated a small-size heterogeneous nuclear reactor. The neutron importance measurements were performed on subcritical and critical assemblies. It is shown that, along with traditionally used activation methods, the developed method can be applied to experimental studies of special features of the power density distribution in critical assemblies and reactors.

  6. 146Sm-142Nd systematics measured in enstatite chondrites reveals a heterogeneous distribution of 142Nd in the solar nebula.

    Science.gov (United States)

    Gannoun, Abdelmouhcine; Boyet, Maud; Rizo, Hanika; El Goresy, Ahmed

    2011-05-10

    The short-lived 146Sm-142Nd chronometer (T1/2 = 103 Ma) is used to constrain the early silicate evolution of planetary bodies. The composition of bulk terrestrial planets is then considered to be similar to that of primitive chondrites, which represent the building blocks of rocky planets. However, for many elements chondrites preserve small isotope differences. In this case it is not always clear to what extent these variations reflect the isotope heterogeneity of the protosolar nebula rather than being produced by the decay of parent isotopes. Here we present Sm-Nd isotope data measured in a comprehensive suite of enstatite chondrites (EC). The EC preserve 142Nd/144Nd ratios that range from those of ordinary chondrites to values similar to terrestrial samples. The EC having terrestrial 142Nd/144Nd ratios are also characterized by small excesses of 144Sm, which is a pure p-process nuclide. The correlation between 144Sm and 142Nd for chondrites may indicate a heterogeneous distribution in the solar nebula of p-process matter synthesized in supernovae. However, to explain the difference in 142Nd/144Nd ratios, a p-process contribution of 20% to 142Nd is required, at odds with the value of 4% currently proposed in stellar models. This study highlights the necessity of obtaining high-precision 144Sm measurements to interpret properly measured 142Nd signatures. Another explanation could be that the chondrites sample material formed during different pulses of the lifetime of asymptotic giant branch stars. Then the isotope signature measured in presolar SiC grains would not represent the unique s-process signature of the material present in the solar nebula during accretion.

  7. Analysis of Java Distributed Architectures in Designing and Implementing a Client/Server Database System

    National Research Council Canada - National Science Library

    Akin, Ramis

    1998-01-01

    .... Information is scattered throughout organizations and must be easily accessible. A new solution is needed for effective and efficient management of data in today's distributed client/server environment...

  8. Evaluation of the Accuracy of Polymer Gels for Determining Electron Dose Distributions in the Presence of Small Heterogeneities.

    Science.gov (United States)

    Asl, R Ghahraman; Nedaie, H A; Banaee, N

    2017-12-01

    The aim of this study is to evaluate the application and accuracy of polymer gels for determining electron dose distributions in the presence of small heterogeneities made of bone and air. Different cylindrical phantoms containing MAGIC (Methacrylic and Ascorbic acid in Gelatin Initiated by Copper) normoxic polymer gel were used under the slab phantoms during irradiation. MR images of the irradiated gel phantoms were obtained to determine their R2 (spin-spin) relaxation maps for conversion to absorbed dose. One- and 2-dimensional lateral dose profiles were acquired at depths of 1 and 4 cm for 8 and 15 MeV electron beams. The results were compared with the doses measured by a diode detector at the same positions. In addition, the dose distribution in the axial orientation was measured by the gel dosimeter. The slope and intercept for the R2 versus dose curve were 0.509 ± 0.002 Gy s and 4.581 ± 0.005 s, respectively. No significant variation in dose-R2 response was seen for the two electron energies within the applied dose ranges. The mean dose difference between the measured gel dose profiles was smaller than 3% compared to those measured by the diode detector. These results provide further demonstration that electron dose distributions are significantly altered in the presence of tissue inhomogeneities such as bone and air cavity and that MAGIC gel is a useful tool for 3-dimensional dose visualization and qualitative assessment of tissue inhomogeneity effects in electron beam dosimetry.

  9. Distributed Database Control and Allocation. Volume 1. Frameworks for Understanding Concurrency Control and Recovery Algorithms.

    Science.gov (United States)

    1983-10-01

    an Abort_i, it forwards the operation directly to the recovery system. When the recovery system acknowledges that the operation has been processed, the ... list ... Abort_i: write Ti into the abort list. Then undo all of Ti's writes by reading their before-images from the audit trail and writing them back into the stable database. [Ack] Then, delete Ti from the active list. Restart: process Abort_i for each Ti on the active list. [Ack] In this algorithm
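
    The undo-based abort processing excerpted above can be illustrated with a small sketch; the data structures (audit-trail entries with before-images, active and abort lists) are simplified stand-ins for illustration, not the report's actual algorithm specification.

```python
# Sketch of aborting transaction Ti: read before-images from the audit trail,
# write them back to the stable database, then retire Ti from the active list.
stable_db = {"x": 5, "y": 7}                 # current (possibly dirty) stable storage
audit_trail = [                              # before-images logged per write
    {"txn": "T1", "key": "x", "before": 1},
    {"txn": "T2", "key": "y", "before": 7},
    {"txn": "T1", "key": "y", "before": 3},
]
active_list, abort_list = {"T1", "T2"}, set()

def process_abort(txn):
    """Record txn in the abort list, undo its writes in reverse order, then retire it."""
    abort_list.add(txn)
    for entry in reversed(audit_trail):      # undo newest write first
        if entry["txn"] == txn:
            stable_db[entry["key"]] = entry["before"]
    active_list.discard(txn)                 # acknowledge: txn no longer active

def restart():
    """On restart, abort every transaction still on the active list."""
    for txn in list(active_list):
        process_abort(txn)

process_abort("T1")
print(stable_db, active_list, abort_list)    # {'x': 1, 'y': 3} {'T2'} {'T1'}
```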

  10. Data-mining analysis of the global distribution of soil carbon in observational databases and Earth system models

    Science.gov (United States)

    Hashimoto, Shoji; Nanko, Kazuki; Ťupek, Boris; Lehtonen, Aleksi

    2017-03-01

    Future climate change will dramatically change the carbon balance in the soil, and this change will affect the terrestrial carbon stock and the climate itself. Earth system models (ESMs) are used to understand the current climate and to project future climate conditions, but the soil organic carbon (SOC) stock simulated by ESMs and that of observational databases are not well correlated when the two are compared at fine grid scales. However, the specific key processes and factors, as well as the relationships among these factors that govern the SOC stock, remain unclear; the inclusion of such missing information would improve the agreement between modeled and observational data. In this study, we sought to identify the influential factors that govern global SOC distribution in observational databases, as well as those simulated by ESMs. We used a data-mining (machine-learning) scheme - boosted regression trees (BRT) - to identify the factors affecting the SOC stock. We applied the BRT scheme to three observational databases and 15 ESM outputs from the fifth phase of the Coupled Model Intercomparison Project (CMIP5) and examined the effects of 13 variables/factors categorized into five groups (climate, soil property, topography, vegetation, and land-use history). Globally, the contributions of mean annual temperature, clay content, carbon-to-nitrogen (CN) ratio, wetland ratio, and land cover were high in observational databases, whereas the contributions of the mean annual temperature, land cover, and net primary productivity (NPP) were predominant in the SOC distribution in ESMs. A comparison of the influential factors at a global scale revealed that the most distinct differences between the SOCs from the observational databases and ESMs were the low clay content and CN ratio contributions, and the high NPP contribution in the ESMs. The results of this study will aid in identifying the causes of the current mismatches between observational SOC databases and ESM outputs.
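
    The boosted-regression-tree step can be sketched with scikit-learn's gradient boosting on synthetic driver variables; the data, hyperparameters and variable names below are illustrative, and the feature importances stand in for the relative factor contributions reported in the study.

```python
# Sketch: fit SOC stock against candidate drivers and report relative contributions.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(6)
n = 2000
X = pd.DataFrame({
    "mean_annual_temp": rng.normal(10, 8, n),
    "clay_content": rng.uniform(0, 60, n),
    "cn_ratio": rng.uniform(8, 30, n),
    "wetland_ratio": rng.uniform(0, 1, n),
    "npp": rng.uniform(100, 1200, n),
})
# synthetic SOC stock loosely driven by temperature and clay content, plus noise
soc = 120 - 3.0 * X["mean_annual_temp"] + 0.8 * X["clay_content"] + rng.normal(0, 10, n)

brt = GradientBoostingRegressor(n_estimators=500, learning_rate=0.05, max_depth=3)
brt.fit(X, soc)

contrib = pd.Series(brt.feature_importances_, index=X.columns).sort_values(ascending=False)
print((100 * contrib).round(1))              # relative contribution (%) of each factor
```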

  11. A distributed atomic physics database and modeling system for plasma spectroscopy

    International Nuclear Information System (INIS)

    Nash, J.K.; Liedahl, D.; Chen, M.H.; Iglesias, C.A.; Lee, R.W.; Salter, J.M.

    1995-08-01

    We are undertaking to develop a set of computational capabilities which will facilitate the access, manipulation, and understanding of atomic data in calculations of x-ray spectral modeling. In the present limited description we will emphasize the objectives for this work, the design philosophy, and aspects of the atomic database, as a more complete description of this work is available. The project is referred to as the Plasma Spectroscopy Initiative; the computing environment is called PSI, or the "PSI shell", since the primary interface resembles a UNIX shell window. The working group consists of researchers in the fields of x-ray plasma spectroscopy, atomic physics, plasma diagnostics, line shape theory, astrophysics, and computer science. To date, our focus has been to develop the software foundations, including the atomic physics database, and to apply the existing capabilities to a range of working problems. These problems have been chosen in part to exercise the overall design and implementation of the shell. For successful implementation the final design must have great flexibility, since our goal is not simply to satisfy our interests but to provide a tool of general use to the community.

  12. Mars Global Digital Dune Database (MGD3): Global dune distribution and wind pattern observations

    Science.gov (United States)

    Hayward, Rosalyn K.; Fenton, Lori; Titus, Timothy N.

    2014-01-01

    The Mars Global Digital Dune Database (MGD3) is complete and now extends from 90°N to 90°S latitude. The recently released south pole (SP) portion (MC-30) of MGD3 adds ∼60,000 km2 of medium to large-size dark dune fields and ∼15,000 km2 of sand deposits and smaller dune fields to the previously released equatorial (EQ, ∼70,000 km2), and north pole (NP, ∼845,000 km2) portions of the database, bringing the global total to ∼975,000 km2. Nearly all NP dunes are part of large sand seas, while the majority of EQ and SP dune fields are individual dune fields located in craters. Despite the differences between Mars and Earth, their dune and dune field morphologies are strikingly similar. Bullseye dune fields, named for their concentric ring pattern, are the exception, possibly owing their distinctive appearance to winds that are unique to the crater environment. Ground-based wind directions are derived from slipface (SF) orientation and dune centroid azimuth (DCA), a measure of the relative location of a dune field inside a crater. SF and DCA often preserve evidence of different wind directions, suggesting the importance of local, topographically influenced winds. In general however, ground-based wind directions are broadly consistent with expected global patterns, such as polar easterlies. Intriguingly, between 40°S and 80°S latitude both SF and DCA preserve their strongest, though different, dominant wind direction, with transport toward the west and east for SF-derived winds and toward the north and west for DCA-derived winds.

  13. Geographical distribution of centenarians in Colombia: an analysis of three databases

    Directory of Open Access Journals (Sweden)

    Diego Rosselli

    2017-07-01

    Conclusions: Although the results are consistent with the number and geographical distribution of centenarians, some errors may be found in the date of birth stated in the records, which is the basis for estimating age in the three sources. Other factors potentially involved in the results may be physical activity, family and community support, low stress and healthy diet in these regions.

  14. Intratumor heterogeneous distribution of 10B-compounds suggested by the radiobiological findings from in vivo mouse studies

    International Nuclear Information System (INIS)

    Masunaga, S.; Ono, K.; Sakurai, Y.; Takagaki, M.; Kobayashi, T.; Kinashi, Y.; Akaboshi, M.; Akuta, K.

    2000-01-01

    After continuous labeling with or without 5-bromo-2'-deoxyuridine (BrdU), SCC VII tumor-bearing mice received one of the following treatments in vivo: 1) Tumor excision right after thermal neutron irradiation following sodium borocaptate-10B (BSH) or p-boronophenylalanine-10B (BPA) administration. 2) Tumor excision 5 min through 72 h after thermal neutron or γ-ray irradiation. 3) Determination of hypoxic fraction (HF) of implanted tumors by γ-ray test irradiation 5 min through 72 h after thermal neutron or γ-ray irradiation. 4) Determination of the tumor sensitivity to γ-rays 0-24 h after thermal neutron or γ-ray irradiation. The following results were obtained: 1) BSH and BPA sensitized quiescent (Q) and total (proliferating (P) + Q) tumor cells, respectively, and the use of 10B-compound, especially BPA, widened the sensitivity difference between Q and total cells. 2) The use of 10B-compound, especially BPA, increased the repair capacity from potentially lethal damage (PLDR) and induced PLDR pattern like post-γ-ray irradiation. 3) Reoxygenation after thermal neutron irradiation following 10B-compound, especially BPA, administration occurred slowly, compared with after neutron irradiation only and looked like after γ-ray irradiation. 4) The use of 10B-compound, especially BPA, promoted sublethal damage repair (SLDR) in total cells and the recruitment from Q to P state, compared with after thermal neutron irradiation alone. All these findings suggested the difficulty in distribution of 10B-compound, especially BPA, in Q cells and the heterogeneity in intratumor distribution of 10B-compound. (author)

  15. Heterogeneous Gossip

    Science.gov (United States)

    Frey, Davide; Guerraoui, Rachid; Kermarrec, Anne-Marie; Koldehofe, Boris; Mogensen, Martin; Monod, Maxime; Quéma, Vivien

    Gossip-based information dissemination protocols are considered easy to deploy, scalable and resilient to network dynamics. Load-balancing is inherent in these protocols as the dissemination work is evenly spread among all nodes. Yet, large-scale distributed systems are usually heterogeneous with respect to network capabilities such as bandwidth. In practice, a blind load-balancing strategy might significantly hamper the performance of the gossip dissemination.
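
    The heterogeneity issue can be illustrated with a toy push-gossip round in which each informed node forwards to a number of peers scaled by its bandwidth capability; the parameters and the capability model are illustrative assumptions, not the protocol proposed in the paper.

```python
# Sketch of capability-aware push gossip: fanout proportional to bandwidth
# instead of a uniform fanout. All parameters are illustrative.
import random

random.seed(7)
N, BASE_FANOUT, ROUNDS = 500, 3, 8
# heterogeneous capabilities: a few fast nodes, many slow ones
capability = [4.0 if random.random() < 0.1 else 0.8 for _ in range(N)]
mean_cap = sum(capability) / N

informed = {0}                                   # the source knows the rumour
for _ in range(ROUNDS):
    new = set()
    for node in informed:
        fanout = max(1, round(BASE_FANOUT * capability[node] / mean_cap))
        targets = random.sample(range(N), fanout)
        new.update(targets)
    informed |= new

print(f"informed after {ROUNDS} rounds: {len(informed)}/{N}")
```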

  16. Antibiotic distribution channels in Thailand: results of key-informant interviews, reviews of drug regulations and database searches.

    Science.gov (United States)

    Sommanustweechai, Angkana; Chanvatik, Sunicha; Sermsinsiri, Varavoot; Sivilaikul, Somsajee; Patcharanarumol, Walaiporn; Yeung, Shunmay; Tangcharoensathien, Viroj

    2018-02-01

    To analyse how antibiotics are imported, manufactured, distributed and regulated in Thailand. We gathered information on antibiotic distribution in Thailand through in-depth interviews with 43 key informants from farms, health facilities, the pharmaceutical and animal feed industries, private pharmacies and regulators, and through database and literature searches. In 2016-2017, licensed antibiotic distribution in Thailand involved over 700 importers and about 24 000 distributors - e.g. retail pharmacies and wholesalers. Thailand imports antibiotics and active pharmaceutical ingredients. There is no system for monitoring the distribution of active ingredients, some of which are used directly on farms without being processed. Most antibiotics can be bought from pharmacies, for home or farm use, without a prescription. Although the 1987 Drug Act classified most antibiotics as "dangerous drugs", it only classified a few of them as prescription-only medicines and placed no restrictions on the quantities of antibiotics that could be sold to any individual. Pharmacists working in pharmacies are covered by some of the Act's regulations, but the quality of their dispensing and prescribing appears to be largely reliant on their competences. In Thailand, most antibiotics are easily and widely available from retail pharmacies, without a prescription. If the inappropriate use of active pharmaceutical ingredients and antibiotics is to be reduced, we need to reclassify and restrict access to certain antibiotics and to develop systems to audit the dispensing of antibiotics in the retail sector and track the movements of active ingredients.

  17. Database Replication

    CERN Document Server

    Kemme, Bettina

    2010-01-01

    Database replication is widely used for fault-tolerance, scalability and performance. The failure of one database replica does not stop the system from working as available replicas can take over the tasks of the failed replica. Scalability can be achieved by distributing the load across all replicas, and adding new replicas should the load increase. Finally, database replication can provide fast local access, even if clients are geographically distributed clients, if data copies are located close to clients. Despite its advantages, replication is not a straightforward technique to apply, and

  18. Pediatric Vital Sign Distribution Derived From a Multi-Centered Emergency Department Database

    Directory of Open Access Journals (Sweden)

    Robert J. Sepanski

    2018-03-01

    Full Text Available Background: We hypothesized that current vital sign thresholds used in pediatric emergency department (ED) screening tools do not reflect observed vital signs in this population. We analyzed a large multi-centered database to develop heart rate (HR) and respiratory rate centile rankings and z-scores that could be incorporated into electronic health record ED screening tools, and we compared our derived centiles to previously published centiles and Pediatric Advanced Life Support (PALS) vital sign thresholds. Methods: Initial HR and respiratory rate data entered into the Cerner™ electronic health record at 169 participating hospitals' EDs over 5 years (2009 through 2013) as part of routine care were analyzed. Analysis was restricted to non-admitted children (0 to <18 years). Centile curves and z-scores were developed using generalized additive models for location, scale, and shape. A split-sample validation using two-thirds of the sample was compared with the remaining one-third. Centile values were compared with results from previous studies and guidelines. Results: HR and RR centiles and z-scores were determined from ~1.2 million records. Empirical 95th centiles for HR and respiratory rate were higher than previously published results, and both deviated from PALS guideline recommendations. Conclusion: Heart and respiratory rate centiles derived from a large real-world non-hospitalized ED pediatric population can inform the modification of electronic and paper-based screening tools to stratify children by the degree of deviation from normal for age rather than dichotomizing children into groups having "normal" versus "abnormal" vital signs. Furthermore, these centiles also may be useful in paper-based screening tools and bedside alarm limits for children in areas other than the ED and may establish improved alarm limits for bedside monitors.
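
    A simplified sketch of deriving age-conditional centiles and z-scores from such a database follows; it uses empirical per-age-band statistics on synthetic data, whereas the study fits smooth generalized additive models for location, scale, and shape, which this stand-in does not attempt.

```python
# Sketch: empirical heart-rate centiles and z-scores by 12-month age band.
import numpy as np
import pandas as pd

rng = np.random.default_rng(8)
ages_months = rng.uniform(0, 216, 50_000)                       # 0 to <18 years
hr = rng.normal(140 - 0.35 * ages_months, 15)                   # synthetic heart rates

df = pd.DataFrame({"age_months": ages_months, "hr": hr})
df["age_band"] = pd.cut(df["age_months"], bins=range(0, 217, 12))

grouped = df.groupby("age_band", observed=True)["hr"]
stats = grouped.agg(mean="mean", sd="std", p50="median")
stats["p5"] = grouped.quantile(0.05)
stats["p95"] = grouped.quantile(0.95)

def hr_z_score(age_months, heart_rate):
    """Degree of deviation from normal-for-age, using the band's empirical mean and SD."""
    band = pd.cut([age_months], bins=range(0, 217, 12))[0]
    row = stats.loc[band]
    return (heart_rate - row["mean"]) / row["sd"]

print(stats.head(3))
print("z-score for a 24-month-old with HR 170:", round(hr_z_score(24, 170), 2))
```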

  19. Evaluating and categorizing the reliability of distribution coefficient values in the sorption database (4)

    International Nuclear Information System (INIS)

    Suyama, Tadahiro; Tachi, Yukio; Ganter, Charlotte; Kunze, Susanne; Ochs, Michael

    2011-02-01

    Sorption of radionuclides in bentonites and rocks is one of the key processes in the safe geological disposal of radioactive waste. The Japan Atomic Energy Agency (JAEA) has developed a sorption database (JAEA-SDB) which includes an extensive compilation of sorption Kd data from batch experiments, extracted from the published literature. JAEA published the first SDB as an important basis for the H12 performance assessment (PA), and has been continuing to improve and update the SDB in view of potential future data needs, focusing on assuring the desired quality level and practical applications to Kd-setting for the geological environment. The JAEA-SDB includes more than 24,000 Kd data points obtained under various conditions and by various methods, with different reliabilities. Accordingly, a quality assurance (QA) and classification guideline/criteria has been developed in order to evaluate the reliability of each Kd value. The reliability of Kd values of key radionuclides for bentonite, mudstone, granite, Fe-oxide/hydroxide and Al-oxide/hydroxide has already been evaluated. This QA information has been accessible through the web-based JAEA-SDB since March 2009. In this report, the QA/classification of selected entries in the JAEA-SDB, focusing on sorption of key radionuclides (Th, Np, Am, Se and Cs) on tuff, which occurs widely in the geological environment, was carried out following the approach/guideline defined in our previous report. As a result, the reliability of 560 Kd values was evaluated and classified. This classification scheme is expected to make it possible to obtain a quick overview of the available data in the SDB, and to provide suitable access to the respective data for Kd-setting in PA. (author)

  20. Evaluating and categorizing the reliability of distribution coefficient values in the sorption database (3)

    International Nuclear Information System (INIS)

    Ochs, Michael; Kunze, Susanne; Suyama, Tadahiro; Tachi, Yukio; Yui, Mikazu

    2010-02-01

    Sorption of radionuclides in bentonites and rocks is one of the key processes in the safe geological disposal of radioactive waste. The Japan Atomic Energy Agency (JAEA) has developed a sorption database (JAEA-SDB) which includes an extensive compilation of sorption Kd data from batch experiments, extracted from the published literature. JAEA published the first SDB as an important basis for the H12 performance assessment (PA), and has been continuing to improve and update the SDB in view of potential future data needs, focusing on assuring the desired quality level and practical applications to Kd-setting for the geological environment. The JAEA-SDB includes more than 24,000 Kd data points obtained under various conditions and by various methods, with different reliabilities. Accordingly, a quality assurance (QA) and classification guideline/criteria has been developed in order to evaluate the reliability of each Kd value. The reliability of Kd values of key radionuclides for the bentonite and mudstone systems has already been evaluated. To make this QA information usable, the new web-based JAEA-SDB was published in March 2009. In this report, the QA/classification of selected entries for key radionuclides (Th, Np, Am, Se and Cs) in the JAEA-SDB was carried out following the approach/guideline defined in our previous report, focusing on granite rocks, which are related to reference systems in the H12 PA and possible applications in the context of URL activities, and on Fe-oxide/hydroxide and Al-oxide/hydroxide, which occur widely in the geological environment. As a result, the reliability of 1,373 Kd values was evaluated and classified. This classification scheme is expected to make it possible to obtain a quick overview of the available data in the SDB, and to provide suitable access to the respective data for Kd-setting in PA. (author)

  1. Model checking software for phylogenetic trees using distribution and database methods

    Directory of Open Access Journals (Sweden)

    Requeno José Ignacio

    2013-12-01

    Full Text Available Model checking, a generic and formal paradigm stemming from computer science and based on temporal logics, has been proposed for the study of biological properties that emerge from the labeling of the states defined over the phylogenetic tree. This strategy allows us to use generic software tools already present in the industry. However, the performance of traditional model checking is penalized when scaling the system for large phylogenies. To this end, two strategies are presented here. The first one consists of partitioning the phylogenetic tree into a set of subgraphs, each one representing a subproblem to be verified, so as to speed up the computation time and distribute the memory consumption. The second strategy is based on uncoupling the information associated with each state of the phylogenetic tree (mainly, the DNA sequence) and exporting it to an external tool for the management of large information systems. The integration of all these approaches outperforms the results of monolithic model checking and helps us to execute the verification of properties in a real phylogenetic tree.

  2. USBombus, a database of contemporary survey data for North American Bumble Bees (Hymenoptera, Apidae, Bombus) distributed in the United States.

    Science.gov (United States)

    Koch, Jonathan B; Lozier, Jeffrey; Strange, James P; Ikerd, Harold; Griswold, Terry; Cordes, Nils; Solter, Leellen; Stewart, Isaac; Cameron, Sydney A

    2015-01-01

    Bumble bees (Hymenoptera: Apidae, Bombus) are pollinators of wild and economically important flowering plants. However, at least four bumble bee species have declined significantly in population abundance and geographic range relative to historic estimates, and one species is possibly extinct. While a wealth of historic data is now available in online databases for many of the North American species found to be in decline, systematic survey data for stable species are still not publicly available. The availability of contemporary survey data is critically important for the future monitoring of wild bumble bee populations. Without such data, the ability to ascertain the conservation status of bumble bees in the United States will remain challenging. This paper describes USBombus, a large database that represents the outcomes of one of the largest standardized surveys of bumble bee pollinators (Hymenoptera, Apidae, Bombus) globally. The motivation to collect live bumble bees across the United States was to examine the decline and conservation status of Bombus affinis, B. occidentalis, B. pensylvanicus, and B. terricola. Prior to our national survey of bumble bees in the United States from 2007 to 2010, there had only been regional accounts of bumble bee abundance and richness. In addition to surveying declining bumble bees, we also collected and documented a diversity of co-occurring bumble bees. However, we have not yet completely reported their distribution and diversity on a public online platform. Now, for the first time, we report the geographic distribution of bumble bees reported to be in decline (Cameron et al. 2011), as well as bumble bees that appeared to be stable on a large geographic scale in the United States (not in decline). In this database we report a total of 17,930 adult occurrence records across 397 locations and 39 species of Bombus detected in our national survey. We summarize their abundance and distribution across the United States and

  3. Where the bugs are: analyzing distributions of bacterial phyla by descriptor keyword search in the nucleotide database.

    Science.gov (United States)

    Squartini, Andrea

    2011-07-26

    The associations between bacteria and environment underlie their preferential interactions with given physical or chemical conditions. Microbial ecology aims at extracting conserved patterns of occurrence of bacterial taxa in relation to defined habitats and contexts. In the present report the NCBI nucleotide sequence database is used as a dataset to extract information relative to the distribution of each of the 24 phyla of the bacteria superkingdom and of the Archaea. Over two and a half million records are filtered in their cross-association with each of 48 sets of keywords, defined to cover natural or artificial habitats, interactions with plant, animal or human hosts, and physical-chemical conditions. The results are processed showing: (a) how the different descriptors enrich or deplete the proportions at which the phyla occur in the total database; (b) the order of abundance in which the different keywords score for each phylum (preferred habitats or conditions), and the extent to which phyla are clustered around few descriptors (specific) or spread across many (cosmopolitan); (c) which keywords individuate the communities ranking highest for diversity and evenness. A number of cues emerge from the results, contributing to sharpen the picture on the functional systematic diversity of prokaryotes. Suggestions are given for a future automated service dedicated to refining and updating this kind of analysis via public bioinformatic engines.
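
    A minimal sketch of the enrichment/depletion calculation described above, assuming made-up record counts: the proportion of each phylum within a keyword-filtered subset is compared with its proportion in the whole database.

```python
# Illustrative sketch (made-up counts): enrichment/depletion of phyla under a
# keyword filter, expressed as the ratio of the within-subset proportion to the
# proportion in the whole database.
total_counts  = {"Proteobacteria": 1_200_000, "Firmicutes": 600_000,
                 "Cyanobacteria": 90_000, "Acidobacteria": 30_000}
subset_counts = {"Proteobacteria": 8_000, "Firmicutes": 1_500,
                 "Cyanobacteria": 2_500, "Acidobacteria": 400}   # e.g. keyword "marine"

total_n  = sum(total_counts.values())
subset_n = sum(subset_counts.values())

for phylum, n in sorted(subset_counts.items(), key=lambda kv: kv[1], reverse=True):
    ratio = (n / subset_n) / (total_counts[phylum] / total_n)
    status = "enriched" if ratio > 1 else "depleted"
    print(f"{phylum:15s} {ratio:5.2f}x ({status})")
```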

  4. Evaluation of sorption distribution coefficient of Cs onto granite using sorption data collected in sorption database and sorption model

    International Nuclear Information System (INIS)

    Nagasaki, S.

    2013-01-01

    Based on the sorption distribution coefficients (Kd) of Cs onto granite collected from the JAERI Sorption Database (SDB), the parameters for a two-site model without the triple-layer structure were optimized. Comparing the experimentally measured Kd values of Cs onto Mizunami granite carried out by JAEA with the Kd values predicted by the model, the effect of the ionic strength on the Kd values of Cs onto granite was evaluated. It was found that Kd values could be determined using the content of biotite in granite at a sodium concentration ([Na]) of 1 × 10^-2 to 5 × 10^-1 mol/dm^3. It was suggested that in high ionic strength solutions, the sorption of Cs onto other minerals such as microcline should also be taken into account. (author)

  5. Evaluation of sorption distribution coefficient of Cs onto granite using sorption data collected in sorption database and sorption model

    Energy Technology Data Exchange (ETDEWEB)

    Nagasaki, S., E-mail: nagasas@mcmaster.ca [McMaster Univ., Hamilton, Ontario (Canada)]

    2013-07-01

    Based on the sorption distribution coefficients (Kd) of Cs onto granite collected from the JAERI Sorption Database (SDB), the parameters for a two-site model without the triple-layer structure were optimized. Comparing the experimentally measured Kd values of Cs onto Mizunami granite carried out by JAEA with the Kd values predicted by the model, the effect of the ionic strength on the Kd values of Cs onto granite was evaluated. It was found that Kd values could be determined using the content of biotite in granite at a sodium concentration ([Na]) of 1 × 10^-2 to 5 × 10^-1 mol/dm^3. It was suggested that in high ionic strength solutions, the sorption of Cs onto other minerals such as microcline should also be taken into account. (author)

  6. Overview of the Benefits and Costs of Integrating Heterogeneous Applications by Using Relaxed ACID Properties

    DEFF Research Database (Denmark)

    Frank, Lars

    2012-01-01

    In central databases the consistency of data is normally implemented by using the ACID (Atomicity, Consistency, Isolation and Durability) properties of a DBMS (Data Base Management System). In practice, it is not possible to implement the ACID properties if heterogeneous or distributed databases ...

  7. Global spatiotemporal distribution of soil respiration modeled using a global database

    Science.gov (United States)

    Hashimoto, S.; Carvalhais, N.; Ito, A.; Migliavacca, M.; Nishina, K.; Reichstein, M.

    2015-07-01

    The flux of carbon dioxide from the soil to the atmosphere (soil respiration) is one of the major fluxes in the global carbon cycle. At present, the accumulated field observation data cover a wide range of geographical locations and climate conditions. However, there are still large uncertainties in the magnitude and spatiotemporal variation of global soil respiration. Using a global soil respiration data set, we developed a climate-driven model of soil respiration by modifying and updating Raich's model, and the global spatiotemporal distribution of soil respiration was examined using this model. The model was applied at a spatial resolution of 0.5° and a monthly time step. Soil respiration was divided into the heterotrophic and autotrophic components of respiration using an empirical model. The estimated mean annual global soil respiration was 91 Pg C yr^-1 (between 1965 and 2012; Monte Carlo 95% confidence interval: 87-95 Pg C yr^-1) and increased at the rate of 0.09 Pg C yr^-2. The contribution of soil respiration from boreal regions to the total increase in global soil respiration was on the same order of magnitude as that of tropical and temperate regions, despite a lower absolute magnitude of soil respiration in boreal regions. The estimated annual global heterotrophic respiration and global autotrophic respiration were 51 and 40 Pg C yr^-1, respectively. The global soil respiration responded to the increase in air temperature at the rate of 3.3 Pg C yr^-1 °C^-1, and Q10 = 1.4. Our study scaled up observed soil respiration values from field measurements to estimate global soil respiration and provide a data-oriented estimate of global soil respiration. The estimates are based on a semi-empirical model parameterized with over one thousand data points. Our analysis indicates that the climate controls on soil respiration may translate into an increasing trend in global soil respiration and our analysis emphasizes the relevance of the soil carbon flux from soil to
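
    The abstract reports an apparent temperature sensitivity of Q10 = 1.4. The sketch below shows a generic Q10-type temperature response only, not the authors' climate-driven parameterization; the reference rate, reference temperature and monthly temperatures are placeholders.

```python
import numpy as np

# Generic Q10 temperature-response sketch (NOT the authors' parameterization):
# R(T) = R_ref * Q10 ** ((T - T_ref) / 10).  r_ref and t_ref are assumptions.
def soil_respiration(temp_c, r_ref=1.0, t_ref=10.0, q10=1.4):
    """Soil respiration relative to the rate at t_ref (degrees C)."""
    return r_ref * q10 ** ((temp_c - t_ref) / 10.0)

monthly_temp = np.array([-5, -2, 3, 9, 15, 20, 23, 22, 17, 10, 4, -2])
monthly_r = soil_respiration(monthly_temp)
print(monthly_r.round(2))            # relative rates through the year
print(f"annual mean: {monthly_r.mean():.2f}")
```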

  8. The CTTC 5G End-to-End Experimental Platform : Integrating Heterogeneous Wireless/Optical Networks, Distributed Cloud, and IoT Devices

    OpenAIRE

    Munoz, Raul; Mangues-Bafalluy, Josep; Vilalta, Ricard; Verikoukis, Christos; Alonso-Zarate, Jesus; Bartzoudis, Nikolaos; Georgiadis, Apostolos; Payaro, Miquel; Perez-Neira, Ana; Casellas, Ramon; Martinez, Ricardo; Nunez-Martinez, Jose; Requena Esteso, Manuel; Pubill, David; Font-Bach, Oriol

    2016-01-01

    The Internet of Things (IoT) will facilitate a wide variety of applications in different domains, such as smart cities, smart grids, industrial automation (Industry 4.0), smart driving, assistance of the elderly, and home automation. Billions of heterogeneous smart devices with different application requirements will be connected to the networks and will generate huge aggregated volumes of data that will be processed in distributed cloud infrastructures. On the other hand, there is also a gen...

  9. DOT Online Database

    Science.gov (United States)

    [Web page navigation residue from the DOT online database portal. Recoverable content: searchable full-text databases of Advisory Circulars, including data collection and distribution policies; document database website provided by MicroSearch.]

  10. SMALL-SCALE AND GLOBAL DYNAMOS AND THE AREA AND FLUX DISTRIBUTIONS OF ACTIVE REGIONS, SUNSPOT GROUPS, AND SUNSPOTS: A MULTI-DATABASE STUDY

    Energy Technology Data Exchange (ETDEWEB)

    Muñoz-Jaramillo, Andrés; Windmueller, John C.; Amouzou, Ernest C.; Longcope, Dana W. [Department of Physics, Montana State University, Bozeman, MT 59717 (United States); Senkpeil, Ryan R. [Department of Physics, Purdue University, West Lafayette, IN 47907 (United States); Tlatov, Andrey G. [Kislovodsk Mountain Astronomical Station of the Pulkovo Observatory, Kislovodsk 357700 (Russian Federation); Nagovitsyn, Yury A. [Pulkovo Astronomical Observatory, Russian Academy of Sciences, St. Petersburg 196140 (Russian Federation); Pevtsov, Alexei A. [National Solar Observatory, Sunspot, NM 88349 (United States); Chapman, Gary A.; Cookson, Angela M. [San Fernando Observatory, Department of Physics and Astronomy, California State University Northridge, Northridge, CA 91330 (United States); Yeates, Anthony R. [Department of Mathematical Sciences, Durham University, South Road, Durham DH1 3LE (United Kingdom); Watson, Fraser T. [National Solar Observatory, Tucson, AZ 85719 (United States); Balmaceda, Laura A. [Institute for Astronomical, Terrestrial and Space Sciences (ICATE-CONICET), San Juan (Argentina); DeLuca, Edward E. [Harvard-Smithsonian Center for Astrophysics, Cambridge, MA 02138 (United States); Martens, Petrus C. H., E-mail: munoz@solar.physics.montana.edu [Department of Physics and Astronomy, Georgia State University, Atlanta, GA 30303 (United States)

    2015-02-10

    In this work, we take advantage of 11 different sunspot group, sunspot, and active region databases to characterize the area and flux distributions of photospheric magnetic structures. We find that, when taken separately, different databases are better fitted by different distributions (as has been reported previously in the literature). However, we find that all our databases can be reconciled by the simple application of a proportionality constant, and that, in reality, different databases are sampling different parts of a composite distribution. This composite distribution is made up by a linear combination of Weibull and log-normal distributions, where a pure Weibull (log-normal) characterizes the distribution of structures with fluxes below (above) 10^21 Mx (10^22 Mx). Additionally, we demonstrate that the Weibull distribution shows the expected linear behavior of a power-law distribution (when extended to smaller fluxes), making our results compatible with the results of Parnell et al. We propose that this is evidence of two separate mechanisms giving rise to visible structures on the photosphere: one directly connected to the global component of the dynamo (and the generation of bipolar active regions), and the other with the small-scale component of the dynamo (and the fragmentation of magnetic structures due to their interaction with turbulent convection).
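
    A hedged sketch of the composite-distribution idea: a linear combination of a Weibull and a log-normal probability density over magnetic flux. The mixing weight and all distribution parameters below are illustrative placeholders, not the fitted values of the study.

```python
import numpy as np
from scipy import stats

# Illustrative composite flux distribution: w * Weibull + (1 - w) * lognormal.
# All parameter values are placeholders, not the paper's fit.
def composite_pdf(flux, w=0.6, weib_shape=0.6, weib_scale=1e21,
                  ln_mu=np.log(3e22), ln_sigma=1.0):
    weibull = stats.weibull_min.pdf(flux, c=weib_shape, scale=weib_scale)
    lognorm = stats.lognorm.pdf(flux, s=ln_sigma, scale=np.exp(ln_mu))
    return w * weibull + (1 - w) * lognorm

flux = np.logspace(19, 24, 6)      # Mx
print(composite_pdf(flux))
```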

  11. Study of the heterogeneities effect in the dose distributions of Leksell Gamma Knife (R), through Monte Carlo simulation

    International Nuclear Information System (INIS)

    Rojas C, E.L.; Al-Dweri, F.M.O.; Lallena R, A.M.

    2005-01-01

    In this work, the effects on the dose profiles obtained with the Leksell Gamma Knife (R) when heterogeneities are taken into account are studied by means of Monte Carlo simulation. The heterogeneities considered simulate the skull and the air spaces in the head, such as the nasal sinuses and the auditory canals. The calculations were made using the PENELOPE Monte Carlo simulation code (v. 2003). The geometry of each of the 201 sources composing this instrument, as well as the corresponding collimation channels of the Gamma Knife (R), was described by means of a simplified geometry model that has been studied recently. The results obtained when the heterogeneities are taken into account show non-negligible differences with respect to those obtained when they are not considered. These differences are largest in the vicinity of the interfaces between different materials. (Author)

  12. A Novel Energy-Aware Distributed Clustering Algorithm for Heterogeneous Wireless Sensor Networks in the Mobile Environment.

    Science.gov (United States)

    Gao, Ying; Wkram, Chris Hadri; Duan, Jiajie; Chou, Jarong

    2015-12-10

    In order to prolong the network lifetime, energy-efficient protocols adapted to the features of wireless sensor networks should be used. This paper explores in depth the nature of heterogeneous wireless sensor networks and proposes an algorithm for energy-efficient clustering in heterogeneous networks. The proposed algorithm selects cluster heads according to the degree of energy attenuation during network operation and the degree of the candidate nodes' effective coverage of the whole network, so as to obtain even energy consumption over the whole network in situations with a high degree of coverage. Simulation results show that the proposed clustering protocol has better adaptability to heterogeneous environments than existing clustering algorithms in prolonging the network lifetime.
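
    As a reading of the selection criterion described above, the sketch below scores candidate cluster heads by residual energy and by the fraction of nodes they cover, then picks the top-k. The weights, coverage radius and node layout are assumptions, not the paper's formula.

```python
import random

# Hedged sketch: score candidate cluster heads by residual energy and the
# effective coverage they add, then pick the top-k candidates.
random.seed(1)
nodes = [{"id": i,
          "pos": (random.random(), random.random()),
          "energy": random.uniform(0.2, 1.0)} for i in range(50)]

def coverage(node, nodes, radius=0.2):
    """Fraction of all nodes within `radius` of this candidate."""
    x, y = node["pos"]
    near = sum(1 for n in nodes
               if (n["pos"][0] - x) ** 2 + (n["pos"][1] - y) ** 2 <= radius ** 2)
    return near / len(nodes)

def select_cluster_heads(nodes, k=5, w_energy=0.6, w_cov=0.4):
    scored = sorted(nodes,
                    key=lambda n: w_energy * n["energy"] + w_cov * coverage(n, nodes),
                    reverse=True)
    return [n["id"] for n in scored[:k]]

print(select_cluster_heads(nodes))
```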

  13. Statistical Analysis of Spatiotemporal Heterogeneity of the Distribution of Air Quality and Dominant Air Pollutants and the Effect Factors in Qingdao Urban Zones

    Directory of Open Access Journals (Sweden)

    Xiangwei Zhao

    2018-04-01

    Air pollution has impacted people’s lives in urban China, and the analysis of the distribution and driving factors behind air quality has become a current research focus. In this study, the temporal heterogeneity of air quality (AQ) and the dominant air pollutants across the four seasons were analyzed based on the Kruskal-Wallis rank-sum test method. Then, the spatial heterogeneity of AQ and the dominant air pollutants across four sites were analyzed based on the Wilcoxon signed-rank test method. Finally, the copula model was introduced to analyze the effect of relative factors on dominant air pollutants. The results show that AQ and dominant air pollutants present significant spatiotemporal heterogeneity in the study area. AQ is worst in winter and best in summer. PM10, O3, and PM2.5 are the dominant air pollutants in spring, summer, and winter, respectively. The average concentration of dominant air pollutants presents significant and diverse daily peaks and troughs across the four sites. The main driving factors are pollutants such as SO2, NO2, and CO, so pollutant emission reduction is the key to improving air quality. Corresponding pollution control measures should account for this heterogeneity in terms of AQ and the dominant air pollutants among different urban zones.
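
    The two tests named above are standard; a minimal sketch with synthetic air-quality data (all values made up) shows how they would be applied with scipy.

```python
import numpy as np
from scipy import stats

# Synthetic example of the two tests named in the abstract (data are made up).
rng = np.random.default_rng(0)
spring, summer, autumn, winter = (rng.normal(loc, 10, 90)
                                  for loc in (80, 55, 75, 110))   # e.g. daily AQI

# Kruskal-Wallis: does AQI differ across the four seasons?
h, p_kw = stats.kruskal(spring, summer, autumn, winter)
print(f"Kruskal-Wallis H={h:.1f}, p={p_kw:.3g}")

# Wilcoxon signed-rank: paired comparison of two monitoring sites
site_a = rng.normal(85, 10, 90)
site_b = site_a + rng.normal(5, 8, 90)     # site B systematically worse
w, p_w = stats.wilcoxon(site_a, site_b)
print(f"Wilcoxon W={w:.1f}, p={p_w:.3g}")
```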

  14. Spatial and temporal distribution of solute leaching in heterogeneous soils: analysis and application to multisampler lysimeter data

    NARCIS (Netherlands)

    Rooij, de G.H.; Stagnitti, F.

    2002-01-01

    Accurate assessment of the fate of salts, nutrients, and pollutants in natural, heterogeneous soils requires a proper quantification of both spatial and temporal solute spreading during solute movement. The number of experiments with multisampler devices that measure solute leaching as a function of

  15. A Unified Peer-to-Peer Database Framework for XQueries over Dynamic Distributed Content and its Application for Scalable Service Discovery

    CERN Document Server

    Hoschek, Wolfgang

    In a large distributed system spanning administrative domains such as a Grid, it is desirable to maintain and query dynamic and timely information about active participants such as services, resources and user communities. The web services vision promises that programs are made more flexible and powerful by querying Internet databases (registries) at runtime in order to discover information and network attached third-party building blocks. Services can advertise themselves and related metadata via such databases, enabling the assembly of distributed higher-level components. In support of this vision, this thesis shows how to support expressive general-purpose queries over a view that integrates autonomous dynamic database nodes from a wide range of distributed system topologies. We motivate and justify the assertion that realistic ubiquitous service and resource discovery requires a rich general-purpose query language such as XQuery or SQL. Next, we introduce the Web Service Discovery Architecture (WSDA), wh...

  16. Multi-slice MRI reveals heterogeneity in disease distribution along the length of muscle in Duchenne muscular dystrophy.

    Science.gov (United States)

    Chrzanowski, Stephen M; Baligand, Celine; Willcocks, Rebecca J; Deol, Jasjit; Schmalfuss, Ilona; Lott, Donovan J; Daniels, Michael J; Senesac, Claudia; Walter, Glenn A; Vandenborne, Krista

    2017-09-01

    Duchenne muscular dystrophy (DMD) causes progressive pathologic changes to muscle secondary to a cascade of inflammation, lipid deposition, and fibrosis. Clinically, this manifests as progressive weakness, functional loss, and premature mortality. Though insult to whole muscle groups is well established, less is known about the relationship between intramuscular pathology and function. Differences of intramuscular heterogeneity across muscle length were assessed using an ordinal MRI grading scale in lower leg muscles of boys with DMD and correlated to each patient's functional status. Cross-sectional T1-weighted MRI images with fat suppression were obtained from ambulatory boys with DMD. Six muscles (tibialis anterior, extensor digitorum longus, peroneus, soleus, medial and lateral gastrocnemii) were graded using an ordinal grading scale over 5 slice sections along the lower leg length. The scores from each slice were combined and results were compared to global motor function and age. Statistically greater differences of involvement were observed at the proximal ends of muscle compared to the midbellies. Multi-slice assessment correlated significantly to age and the Vignos functional scale, whereas single-slice assessment correlated to the Vignos functional scale only. Lastly, differential disease involvement of whole muscle groups and intramuscular heterogeneity were observed amongst similar-age subjects. A multi-slice ordinal MRI grading scale revealed that muscles are not uniformly affected, with more advanced disease visible near the tendons in a primarily ambulatory population with DMD. A geographically comprehensive evaluation of the heterogeneously affected muscle in boys with DMD may more accurately assess disease involvement.
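
    A synthetic sketch of the multi-slice scoring idea: ordinal grades from several slice levels are summed per subject and correlated (Spearman) with a functional score. Grade ranges, subject counts and the functional scores are invented for illustration.

```python
import numpy as np
from scipy import stats

# Synthetic sketch: 6 lower-leg muscles graded on an ordinal scale at 5 slice
# levels per subject; the per-subject score is the sum, correlated (Spearman)
# with a hypothetical functional scale.
rng = np.random.default_rng(3)
n_subjects, n_muscles, n_slices = 20, 6, 5

grades = rng.integers(0, 5, size=(n_subjects, n_muscles, n_slices))   # ordinal 0-4
multi_slice_score = grades.sum(axis=(1, 2))                           # per subject

# Hypothetical functional scores loosely related to imaging burden
functional = multi_slice_score / multi_slice_score.max() * 9 + rng.normal(0, 1, n_subjects)

rho, p = stats.spearmanr(multi_slice_score, functional)
print(f"Spearman rho={rho:.2f}, p={p:.3g}")
```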

  17. Implications of heterogeneous fracture distribution on reservoir quality; an analogue from the Torridon Group sandstone, Moine Thrust Belt, NW Scotland

    Science.gov (United States)

    Watkins, Hannah; Healy, David; Bond, Clare E.; Butler, Robert W. H.

    2018-03-01

    Understanding fracture network variation is fundamental in characterising fractured reservoirs. Simple relationships between fractures, stress and strain are commonly assumed in fold-thrust structures, inferring relatively homogeneous fracture patterns. In reality fractures are more complex, commonly appearing as heterogeneous networks at outcrop. We use the Achnashellach Culmination (NW Scotland) as an outcrop analogue to a folded tight sandstone reservoir in a thrust belt. We present fracture data collected from four fold-thrust structures to determine how fracture connectivity, orientation, permeability anisotropy and fill vary at different structural positions. We use a 3D model of the field area, constructed using field observations and bedding data, and geomechanically restored using Move software, to determine how factors such as fold curvature and strain influence fracture variation. Fracture patterns in the Torridon Group are consistent and predictable in high strain forelimbs; however, in low strain backlimbs fracture patterns are inconsistent. Heterogeneities in fracture connectivity and orientation in low strain regions do not correspond to fluctuations in strain or fold curvature. We infer that where strain is low, other factors such as lithology have a greater control on fracture formation. Despite unpredictable fracture attributes in low strain regions, fractured reservoir quality would be highest here because fractures in high strain forelimbs are infilled with quartz. Heterogeneities in fracture attribute data on fold backlimbs mean that fractured reservoir quality and reservoir potential are difficult to predict.

  18. Service Oriented Integration of Distributed Heterogeneous IT Systems in Production Engineering Using Information Standards and Linked Data

    Directory of Open Access Journals (Sweden)

    Navid Shariat Zadeh

    2017-01-01

    While the design of production systems based on digital models brings benefits, the communication of models comes with challenges, since models typically reside in a heterogeneous IT environment using different syntax and semantics. Coping with heterogeneity requires a smart integration strategy. One main paradigm for integrating data and IT systems is to deploy information standards. In particular, ISO 10303 STEP has been endorsed as a suitable standard to exchange a wide variety of product manufacturing data. On the other hand, service-oriented tool integration solutions are progressively adopted for the integration of data and IT tools, especially with the emergence of Open Services for Lifecycle Collaboration, whose focus is on the linking of data from heterogeneous software tools. In practice, there should be a combination of these approaches to facilitate the integration process. Hence, the aim of this paper is to investigate the applications of the approaches and the principles behind them and to find criteria for where to use which approach. In addition, we explore the synergy between them and consequently suggest an approach based on a combination of the two. Finally, a systematic approach is suggested to identify the required levels of integration and their corresponding approaches, exemplified in a typical IT system architecture in Production Engineering.

  19. Application of Voxel Phantoms to Study the Influence of Heterogeneous Distribution of Actinides in Lungs on In Vivo Counting Calibration Factors Using Animal Experimentations

    Energy Technology Data Exchange (ETDEWEB)

    Lamart, S.; Pierrat, N.; De Carlan, L.; Franck, D. [IRSN/DRPH/SDI/LEDI, BP 17, F-92 262 Fontenay-aux-Roses (France); Dudoignon, N. [IRSN/DRPH/SRBE/LRPAT, BP 17, F-92 262 Fontenay-aux-Roses (France); Rateau, S.; Van der Meeren, A.; Rouit, E. [CEA/DSV/DRR/SRCA/LRT BP no 12, F-91680 Bruyeres-le-Chatel (France); Bottlaender, M. [CEA/SHFJ, 4, place du General Leclerc F-91400 Orsay (France)

    2006-07-01

    Calibration of lung counting systems dedicated to the retention assessment of actinides in the lungs remains critical due to large uncertainties in calibration factors. Among them, the detector positioning, the chest wall thickness and composition (muscle/fat) assessment, and the distribution of the contamination are the main parameters influencing the detector response. In order to reduce these uncertainties, a numerical approach based on the application of voxel phantoms (numerical phantoms based on tomographic images, CT or MRI) associated with a Monte Carlo code (namely M.C.N.P.) was developed. It led to the development of a dedicated tool, called O.E.D.I.P.E., that allows realistic voxel phantoms to be easily handled for the simulation of in vivo measurements (or dose calculation, an application that will not be presented in this paper). The goal of this paper is to present our study of the influence of the lung distribution on calibration factors using both animal experimentation and our numerical method. Indeed, the physical anthropomorphic phantoms used for calibration always assume a uniform distribution of the source in the lungs, which is not true in many contamination conditions. The purpose of the study is to compare the response of the measurement detectors using a real distribution of actinide particles in the lungs, obtained from animal experimentation, with the homogeneous one considered as the reference. This comparison was performed using O.E.D.I.P.E., which can simulate almost any source distribution. A non-human primate was contaminated heterogeneously by intra-tracheal administration of actinide oxide. After euthanasia, gamma spectrometry measurements were performed on the pulmonary lobes to obtain the distribution of the contamination in the lungs. This realistic distribution was used to simulate a heterogeneous contamination in the numerical phantom of the non-human primate, which was compared with a simulation of a homogeneous contamination presenting the

  20. Application of Voxel Phantoms to Study the Influence of Heterogeneous Distribution of Actinides in Lungs on In Vivo Counting Calibration Factors Using Animal Experimentations

    International Nuclear Information System (INIS)

    Lamart, S.; Pierrat, N.; De Carlan, L.; Franck, D.; Dudoignon, N.; Rateau, S.; Van der Meeren, A.; Rouit, E.; Bottlaender, M.

    2006-01-01

    Calibration of lung counting systems dedicated to the retention assessment of actinides in the lungs remains critical due to large uncertainties in calibration factors. Among them, the detector positioning, the chest wall thickness and composition (muscle/fat) assessment, and the distribution of the contamination are the main parameters influencing the detector response. In order to reduce these uncertainties, a numerical approach based on the application of voxel phantoms (numerical phantoms based on tomographic images, CT or MRI) associated with a Monte Carlo code (namely M.C.N.P.) was developed. It led to the development of a dedicated tool, called O.E.D.I.P.E., that allows realistic voxel phantoms to be easily handled for the simulation of in vivo measurements (or dose calculation, an application that will not be presented in this paper). The goal of this paper is to present our study of the influence of the lung distribution on calibration factors using both animal experimentation and our numerical method. Indeed, the physical anthropomorphic phantoms used for calibration always assume a uniform distribution of the source in the lungs, which is not true in many contamination conditions. The purpose of the study is to compare the response of the measurement detectors using a real distribution of actinide particles in the lungs, obtained from animal experimentation, with the homogeneous one considered as the reference. This comparison was performed using O.E.D.I.P.E., which can simulate almost any source distribution. A non-human primate was contaminated heterogeneously by intra-tracheal administration of actinide oxide. After euthanasia, gamma spectrometry measurements were performed on the pulmonary lobes to obtain the distribution of the contamination in the lungs. This realistic distribution was used to simulate a heterogeneous contamination in the numerical phantom of the non-human primate, which was compared with a simulation of a homogeneous contamination presenting the

  1. Experimental study of the large-scale axially heterogeneous liquid-metal fast breeder reactor at the fast critical assembly: Power distribution measurements and their analyses

    International Nuclear Information System (INIS)

    Iijima, S.; Obu, M.; Hayase, T.; Ohno, A.; Nemoto, T.; Okajima, S.

    1988-01-01

    Power distributions of the large-scale axially heterogeneous liquid-metal fast breeder reactor were studied by using the experimental results of fast critical assemblies XI, XII, and XIII and the results of their analyses. The power distributions were examined by the gamma-scanning method and fission rate measurements using ^239Pu and ^238U fission counters and the foil irradiation method. In addition to the measurements in the reference core, the power distributions were measured in the core with a control rod inserted and in a modified core where the shape of the internal blanket was determined by the radial boundary. The calculation was made by using JENDL-2 and the Japan Atomic Energy Research Institute's standard calculation system for fast reactor neutronics. The power flattening trend, caused by the decrease of the fast neutron flux, was observed in the axial and radial power distributions. The effect of the radial boundary shape of the internal blanket on the power distribution was determined in the core. The thickness of the internal blanket was reduced at its radial boundary. The influence of the internal blanket was observed in the power distributions in the core with a control rod inserted. The calculation predicted a harder neutron spectrum in the internal blanket. In the radial distributions of ^239Pu fission rates, a space dependency of the calculated-to-experiment values was found in the active core close to the internal blanket

  2. Effects of species biological traits and environmental heterogeneity on simulated tree species distribution shifts under climate change

    Science.gov (United States)

    Wen J. Wang; Hong S. He; Frank R. Thompson; Martin A. Spetich; Jacob S. Fraser

    2018-01-01

    Demographic processes (fecundity, dispersal, colonization, growth, and mortality) and their interactions with environmental changes are not well represented in current climate-distribution models (e.g., niche and biophysical process models) and constitute a large uncertainty in projections of future tree species distribution shifts. We investigate how species biological...

  3. Why are most aquatic plants widely distributed? Dispersal, clonal growth and small-scale heterogeneity in a stressful environment

    NARCIS (Netherlands)

    Santamaria, L.

    2002-01-01

    Non-marine aquatic vascular plants generally show broad distributional ranges. Climatic factors seem to have limited effects on their distributions, besides the determination of major disjunctions (tropical-temperate-subarctic). Dispersal should have been frequent enough to assure the quick

  4. Variable EBV DNA Load Distributions and Heterogeneous EBV mRNA Expression Patterns in the Circulation of Solid Organ versus Stem Cell Transplant Recipients

    Directory of Open Access Journals (Sweden)

    A. E. Greijer

    2012-01-01

    Epstein-Barr virus (EBV)-driven post-transplant lymphoproliferative disease (PTLD) is a heterogeneous and potentially life-threatening condition. Early identification of aberrant EBV activity may prevent progression to B-cell lymphoma. We measured EBV DNA load and RNA profiles in plasma and cellular blood compartments of stem cell transplant (SCT; n=5) and solid organ transplant (SOT; n=15) recipients, and of SOT recipients having chronic elevated EBV-DNA load (n=12). In SCT, EBV DNA was heterogeneously distributed, either in plasma or leukocytes or both. In SOT, EBV DNA load was always cell associated, predominantly in B cells, but occasionally in T cells (CD4 and CD8) or monocytes. All SCT with cell-associated EBV DNA showed BARTs and EBNA1 expression, while LMP1 and LMP2 mRNA was found in 1 and 3 cases, respectively. In SOT, expression of BARTs was detected in all leukocyte samples. LMP2 and EBNA1 mRNA was found in 5/15 and 2/15, respectively, but LMP1 mRNA in only 1, coinciding with severe PTLD and high EBV DNA. Conclusion: EBV DNA is differently distributed between white cells and plasma in SOT versus SCT. EBV RNA profiling in blood is feasible and may have added value for understanding pathogenic virus activity in patients with elevated EBV-DNA.

  5. Application of cluster and discriminant analyses to diagnose lithological heterogeneity of the parent material according to its particle-size distribution

    Science.gov (United States)

    Giniyatullin, K. G.; Valeeva, A. A.; Smirnova, E. V.

    2017-08-01

    Particle-size distribution in soddy-podzolic and light gray forest soils of the Botanical Garden of Kazan Federal University has been studied. The cluster analysis of data on the samples from genetic soil horizons attests to the lithological heterogeneity of the profiles of all the studied soils. It is probable that they are developed from the two-layered sediments with the upper colluvial layer underlain by the alluvial layer. According to the discriminant analysis, the major contribution to the discrimination of colluvial and alluvial layers is that of the fraction >0.25 mm. The results of canonical analysis show that there is only one significant discriminant function that separates alluvial and colluvial sediments on the investigated territory. The discriminant function correlates with the contents of fractions 0.05-0.01, 0.25-0.05, and >0.25 mm. Classification functions making it possible to distinguish between alluvial and colluvial sediments have been calculated. Statistical assessment of particle-size distribution data obtained for the plow horizons on ten plowed fields within the garden indicates that this horizon is formed from colluvial sediments. We conclude that the contents of separate fractions and their ratios cannot be used as a universal criterion of the lithological heterogeneity. However, adequate combination of the cluster and discriminant analyses makes it possible to give a comprehensive assessment of the lithology of soil samples from data on the contents of sand and silt fractions, which considerably increases the information value and reliability of the results.
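
    A hedged stand-in for the discriminant step described above, using scikit-learn's linear discriminant analysis on synthetic sand-fraction data to separate colluvial from alluvial samples; the fraction means and spreads are invented, not the study's measurements.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Synthetic sketch: separate colluvial vs. alluvial samples from three
# particle-size fractions (>0.25, 0.25-0.05, 0.05-0.01 mm).  Values are made up.
rng = np.random.default_rng(42)
colluvial = rng.normal([5, 25, 30], [2, 5, 5], size=(30, 3))
alluvial  = rng.normal([20, 40, 20], [4, 6, 5], size=(30, 3))

X = np.vstack([colluvial, alluvial])
y = np.array([0] * 30 + [1] * 30)          # 0 = colluvium, 1 = alluvium

lda = LinearDiscriminantAnalysis(n_components=1).fit(X, y)
print("discriminant coefficients:", lda.coef_.round(2))
print("training accuracy:", lda.score(X, y))
```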

  6. Chromatin Heterogeneity and Distribution of Regulatory Elements in the Late-Replicating Intercalary Heterochromatin Domains of Drosophila melanogaster Chromosomes.

    Directory of Open Access Journals (Sweden)

    Varvara A Khoroshko

    Late-replicating domains (intercalary heterochromatin) in the Drosophila genome display a number of features suggesting their organization is quite unique. Typically, they are quite large and encompass clusters of functionally unrelated tissue-specific genes. They correspond to the topologically associating domains and conserved microsynteny blocks. Our study aims at exploring further details of molecular organization of intercalary heterochromatin and has uncovered surprising heterogeneity of chromatin composition in these regions. Using the 4HMM model developed in our group earlier, intercalary heterochromatin regions were found to host chromatin fragments with a particular epigenetic profile. Aquamarine chromatin fragments (spanning 0.67% of late-replicating regions) are characterized as a class of sequences that appear heterogeneous in terms of their decompactization. These fragments are enriched with enhancer sequences and binding sites for insulator proteins. They likely mark the chromatin state that is related to the binding of cis-regulatory proteins. Malachite chromatin fragments (11% of late-replicating regions) appear to function as universal transitional regions between two contrasting chromatin states. Namely, they invariably delimit intercalary heterochromatin regions from the adjacent active chromatin of interbands. Malachite fragments also flank aquamarine fragments embedded in the repressed chromatin of late-replicating regions. Significant enrichment of insulator proteins CP190, SU(HW), and MOD2.2 was observed in malachite chromatin. Neither aquamarine nor malachite chromatin types appear to correlate with the positions of highly conserved non-coding elements (HCNE) that are typically replete in intercalary heterochromatin. Malachite chromatin found on the flanks of intercalary heterochromatin regions tends to replicate earlier than the malachite chromatin embedded in intercalary heterochromatin. In other words, there exists a

  7. Influence of environmental heterogeneity on the distribution and persistence of a subterranean rodent in a highly unstable landscape.

    Science.gov (United States)

    Gómez Fernández, María Jimena; Boston, Emma S M; Gaggiotti, Oscar E; Kittlein, Marcelo J; Mirol, Patricia M

    2016-12-01

    In this study we combine information from landscape characteristics, demographic inference and species distribution modelling to identify environmental factors that shape the genetic distribution of the fossorial rodent Ctenomys. We sequenced the mtDNA control region and amplified 12 microsatellites from 27 populations distributed across the Iberá wetland ecosystem. Hierarchical Bayesian modelling was used to construct phylogenies and estimate divergence times. We developed species distribution models to determine what climatic variables and soil parameters predicted species presence by comparing the current to the historic and predicted future distribution of the species. Finally, we explore the impact of environmental variables on the genetic structure of Ctenomys based on current and past species distributions. The variables that consistently correlated with the predicted distribution of the species and explained the observed genetic differentiation among populations included the distribution of well-drained sandy soils and temperature seasonality. A core region of stable suitable habitat was identified from the Last Interglacial, which is projected to remain stable into the future. This region is also the most genetically diverse and is currently under strong anthropogenic pressure. Results reveal complex demographic dynamics, which have been in constant change in both time and space, and are likely linked to the evolution of the Paraná River. We suggest that any alteration of soil properties (climatic or anthropic) may significantly impact the availability of suitable habitat and consequently the ability of individuals to disperse. The protection of this core stable habitat is of prime importance given the increasing levels of human disturbance across this wetland system and the threat of climate change.

  8. LHCb Conditions Database Operation Assistance Systems

    CERN Multimedia

    Shapoval, Illya

    2012-01-01

    The Conditions Database of the LHCb experiment (CondDB) provides versioned, time dependent geometry and conditions data for all LHCb data processing applications (simulation, high level trigger, reconstruction, analysis) in a heterogeneous computing environment ranging from user laptops to the HLT farm and the Grid. These different use cases impose front-end support for multiple database technologies (Oracle and SQLite are used). Sophisticated distribution tools are required to ensure timely and robust delivery of updates to all environments. The content of the database has to be managed to ensure that updates are internally consistent and externally compatible with multiple versions of the physics application software. In this paper we describe three systems that we have developed to address these issues: - an extension to the automatic content validation done by the “Oracle Streams” replication technology, to trap cases when the replication was unsuccessful; - an automated distribution process for the S...

  9. A Methodology, Based on Analytical Modeling, for the Design of Parallel and Distributed Architectures for Relational Database Query Processors.

    Science.gov (United States)

    1987-12-01

    [Garbled OCR fragments from scanned figures: Figure 2, "Intelligent Disk Controller"; Figure 5, "Processor-Per-Head". The surviving text notes that the additional relational properties used have been proven in classical set and relation theory [75].]

  10. Producing Distribution Maps for a Spatially-Explicit Ecosystem Model Using Large Monitoring and Environmental Databases and a Combination of Interpolation and Extrapolation

    Directory of Open Access Journals (Sweden)

    Arnaud Grüss

    2018-01-01

    To be able to simulate spatial patterns of predator-prey interactions, many spatially-explicit ecosystem modeling platforms, including Atlantis, need to be provided with distribution maps defining the annual or seasonal spatial distributions of functional groups and life stages. We developed a methodology combining extrapolation and interpolation of the predictions made by statistical habitat models to produce distribution maps for the fish and invertebrates represented in the Atlantis model of the Gulf of Mexico (GOM) Large Marine Ecosystem (LME) (“Atlantis-GOM”). This methodology consists of: (1) compiling a large monitoring database, gathering all the fisheries-independent and fisheries-dependent data collected in the northern (U.S.) GOM since 2000; (2) compiling a large environmental database, storing all the environmental parameters known to influence the spatial distribution patterns of fish and invertebrates of the GOM; (3) fitting binomial generalized additive models (GAMs) to the large monitoring and environmental databases, and geostatistical binomial generalized linear mixed models (GLMMs) to the large monitoring database; and (4) employing GAM predictions to infer spatial distributions in the southern GOM, and GLMM predictions to infer spatial distributions in the U.S. GOM. Thus, our methodology allows for reasonable extrapolation in the southern GOM based on a large amount of monitoring and environmental data, and for interpolation in the U.S. GOM accurately reflecting the probability of encountering fish and invertebrates in that region. We used an iterative cross-validation procedure to validate GAMs. When a GAM did not pass the validation test, we employed a GAM for a related functional group/life stage to generate distribution maps for the southern GOM. In addition, no geostatistical GLMMs were fit for the functional groups and life stages whose depth, longitudinal and latitudinal ranges within the U.S. GOM are not entirely covered by
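
    As a simplified stand-in for the binomial habitat models described above, the sketch below fits a logistic GLM of presence/absence on two synthetic covariates with statsmodels; the study itself used binomial GAMs and geostatistical GLMMs, and all data and covariates here are invented.

```python
import numpy as np
import statsmodels.api as sm

# Simplified stand-in for the binomial habitat models: a logistic GLM of
# presence/absence on depth and temperature (synthetic data only).
rng = np.random.default_rng(7)
n = 500
depth = rng.uniform(5, 200, n)          # m
temp  = rng.uniform(15, 30, n)          # degrees C
logit_p = -1.5 + 0.02 * depth - 0.05 * (temp - 24) ** 2
presence = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(np.column_stack([depth, temp, (temp - 24) ** 2]))
model = sm.GLM(presence, X, family=sm.families.Binomial()).fit()
print(model.params.round(3))

# Predicted probability of presence on a coarse depth-temperature grid
grid_depth, grid_temp = np.meshgrid(np.linspace(5, 200, 5), np.linspace(15, 30, 5))
Xg = sm.add_constant(np.column_stack([grid_depth.ravel(), grid_temp.ravel(),
                                      (grid_temp.ravel() - 24) ** 2]))
print(model.predict(Xg).reshape(5, 5).round(2))
```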

  11. CracidMex1: a comprehensive database of global occurrences of cracids (Aves, Galliformes) with distribution in Mexico

    Directory of Open Access Journals (Sweden)

    Gonzalo Pinilla-Buitrago

    2014-06-01

    Cracids are among the most vulnerable groups of Neotropical birds. Almost half of the species of this family are included in a conservation risk category. Twelve taxa occur in Mexico, six of which are considered at risk at national level and two are globally endangered. Therefore, it is imperative that high quality, comprehensive, and high-resolution spatial data on the occurrence of these taxa are made available as a valuable tool in the process of defining appropriate management strategies for conservation at a local and global level. We constructed the CracidMex1 database by collating global records of all cracid taxa that occur in Mexico from available electronic databases, museum specimens, publications, “grey literature”, and unpublished records. We generated a database with 23,896 clean, validated, and standardized geographic records. Database quality control was an iterative process that commenced with the consolidation and elimination of duplicate records, followed by the geo-referencing of records when necessary, and their taxonomic and geographic validation using GIS tools and expert knowledge. We followed the geo-referencing protocol proposed by the Mexican National Commission for the Use and Conservation of Biodiversity. We could not estimate the geographic coordinates of 981 records due to inconsistencies or lack of sufficient information in the description of the locality. Given that current records for most of the taxa have some degree of distributional bias, with redundancies at different spatial scales, the CracidMex1 database has allowed us to detect areas where more sampling effort is required to have a better representation of the global spatial occurrence of these cracids. We also found that particular attention needs to be given to taxa identification in those areas where congeners or conspecifics co-occur in order to avoid taxonomic uncertainty. The construction of the CracidMex1 database represents the first

  12. Plant distribution patterns related to species characteristics and spatial and temporal habitat heterogeneity in a network of ditch banks

    NARCIS (Netherlands)

    Geertsema, W.; Sprangers, J.T.C.M.

    2002-01-01

    In this study we investigated the relationship between the distribution patterns of a number of herbaceous plant species and the isolation and age of habitat patches. The study was conducted for a network of ditch banks in an agricultural landscape in The Netherlands. Thirteen plant species were

  13. Distributed coordination of heterogeneous agents using a semantic overlay network and a goal-directed graphplan planner.

    Directory of Open Access Journals (Sweden)

    António Luís Lopes

    In this paper, we describe a distributed coordination system that allows agents to seamlessly cooperate in problem solving by partially contributing to a problem solution and delegating the subproblems for which they do not have the required skills or knowledge to appropriate agents. The coordination mechanism relies on a dynamically built semantic overlay network that allows the agents to efficiently locate, even in very large unstructured networks, the necessary skills for a specific problem. Each agent performs partial contributions to the problem solution using a new distributed goal-directed version of the Graphplan algorithm. This new goal-directed version of the original Graphplan algorithm provides an efficient solution to the problem of "distraction", which most forward-chaining algorithms suffer from. We also discuss a set of heuristics to be used in the backward-search process of the planning algorithm in order to distribute this process amongst idle agents in an attempt to find a solution in less time. The evaluation results show that our approach is effective in building a scalable and efficient agent society capable of solving complex distributable problems.

  14. Numerical Modeling Describing the Effects of Heterogeneous Distributions of Asperities on the Quasi-static Evolution of Frictional Slip

    Science.gov (United States)

    Selvadurai, P. A.; Parker, J. M.; Glaser, S. D.

    2017-12-01

    A better understanding of how slip accumulates along faults and its relation to the breakdown of shear stress is beneficial to many engineering disciplines, such as, hydraulic fracture and understanding induced seismicity (among others). Asperities forming along a preexisting fault resist the relative motion of the two sides of the interface and occur due to the interaction of the surface topographies. Here, we employ a finite element model to simulate circular partial slip asperities along a nominally flat frictional interface. Shear behavior of our partial slip asperity model closely matched the theory described by Cattaneo. The asperity model was employed to simulate a small section of an experimental fault formed between two bodies of polymethyl methacrylate, which consisted of multiple asperities whose location and sizes were directly measured using a pressure sensitive film. The quasi-static shear behavior of the interface was modeled for cyclical loading conditions, and the frictional dissipation (hysteresis) was normal stress dependent. We further our understanding by synthetically modeling lognormal size distributions of asperities that were randomly distributed in space. Synthetic distributions conserved the real contact area and aspects of the size distributions from the experimental case, allowing us to compare the constitutive behaviors based solely on spacing effects. Traction-slip behavior of the experimental interface appears to be considerably affected by spatial clustering of asperities that was not present in the randomly spaced, synthetic asperity distributions. Estimates of bulk interfacial shear stiffness were determined from the constitutive traction-slip behavior and were comparable to the theoretical estimates of multi-contact interfaces with non-interacting asperities.
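
    The synthetic-population step described above can be sketched as follows: draw a lognormal set of asperity radii, rescale so that the summed contact area matches a target real-contact fraction, and place the asperities at random positions. The interface size, contact fraction and lognormal parameters are assumptions, not the experimental values.

```python
import numpy as np

# Sketch: lognormal asperity radii rescaled to conserve a target real contact
# area, placed at uniformly random positions.  All parameters are assumptions.
rng = np.random.default_rng(0)

fault_area   = 0.1 * 0.1           # m^2 nominal interface
target_ratio = 0.02                # 2% real contact area
n_asperities = 200

radii = rng.lognormal(mean=np.log(5e-4), sigma=0.5, size=n_asperities)  # m
areas = np.pi * radii ** 2
areas *= target_ratio * fault_area / areas.sum()   # conserve real contact area
radii = np.sqrt(areas / np.pi)

positions = rng.uniform(0, 0.1, size=(n_asperities, 2))
print(f"real contact fraction: {areas.sum() / fault_area:.3f}")
print(f"median radius: {np.median(radii) * 1e6:.0f} micrometres")
```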

  15. Dynamic heterogeneity in life histories

    DEFF Research Database (Denmark)

    Tuljapurkar, Shripad; Steiner, Uli; Orzack, Steven Hecht

    2009-01-01

    or no fixed heterogeneity influences this trait. We propose that dynamic heterogeneity provides a 'neutral' model for assessing the possible role of unobserved 'quality' differences between individuals. We discuss fitness for dynamic life histories, and the implications of dynamic heterogeneity...... generate dynamic heterogeneity: life-history differences produced by stochastic stratum dynamics. We characterize dynamic heterogeneity in a range of species across taxa by properties of the Markov chain: the entropy, which describes the extent of heterogeneity, and the subdominant eigenvalue, which...... distributions of lifetime reproductive success. Dynamic heterogeneity contrasts with fixed heterogeneity: unobserved differences that generate variation between life histories. We show by an example that observed distributions of lifetime reproductive success are often consistent with the claim that little...
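
    The two Markov-chain summaries mentioned above (the entropy and the subdominant eigenvalue) can be computed directly; the sketch below does so for a toy stage-transition matrix that is purely illustrative and not taken from the paper.

```python
import numpy as np

# Toy stage-transition matrix (rows = current stratum, columns = next stratum).
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.3, 0.6]])

# Stationary distribution: left eigenvector of P for eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi /= pi.sum()

# Entropy rate of the chain (extent of dynamic heterogeneity)
logP = np.where(P > 0, np.log(np.where(P > 0, P, 1.0)), 0.0)
entropy_rate = -np.sum(pi[:, None] * P * logP)

# Subdominant eigenvalue (persistence of trajectories)
moduli = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
subdominant = moduli[1]

print(f"entropy rate  = {entropy_rate:.3f} nats")
print(f"|lambda_2|    = {subdominant:.3f}")
```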

  16. cDNA cloning, mRNA distribution and heterogeneity, chromosomal location, and RFLP analysis of human osteopontin (OPN)

    DEFF Research Database (Denmark)

    Young, M F; Kerr, J M; Termine, J D

    1990-01-01

    A human osteopontin (OP) cDNA was isolated from a library made from primary cultures of human bone cells. The distribution of osteopontin mRNA in human tissues was investigated by Northern analysis and showed that the human message was predominant in cultures of bone cells and in decidua cells...... osteopontin cDNA indicated that the gene is a single copy with an approximate length of 5.4-8.2 kb....

  17. Evaluation of heterogeneity dose distributions for Stereotactic Radiotherapy (SRT): comparison of commercially available Monte Carlo dose calculation with other algorithms

    Directory of Open Access Journals (Sweden)

    Takahashi Wataru

    2012-02-01

    Background The purpose of this study was to compare dose distributions from three different algorithms with the x-ray Voxel Monte Carlo (XVMC) calculations, in actual computed tomography (CT) scans, for use in stereotactic radiotherapy (SRT) of small lung cancers. Methods Slow CT scans of 20 patients were performed and the internal target volume (ITV) was delineated on Pinnacle3. All plans were first calculated with a scatter homogeneous mode (SHM), which is compatible with the Clarkson algorithm, using the Pinnacle3 treatment planning system (TPS). The planned dose was 48 Gy in 4 fractions. In a second step, the CT images, structures and beam data were exported to other treatment planning systems (TPSs). Collapsed cone convolution (CCC) from Pinnacle3, superposition (SP) from XiO, and XVMC from Monaco were used for recalculation. The dose distributions and the dose volume histograms (DVHs) were compared with each other. Results The phantom test revealed that all algorithms could reproduce the measured data within 1%, except for the SHM with the inhomogeneous phantom. For the patient study, the SHM greatly overestimated the isocenter (IC) doses and the minimal dose received by 95% of the PTV (PTV95) compared to XVMC. The differences in mean doses were 2.96 Gy (6.17%) for IC and 5.02 Gy (11.18%) for PTV95. The DVHs and dose distributions with CCC and SP were in agreement with those obtained by XVMC. The average differences in IC doses between CCC and XVMC, and between SP and XVMC, were -1.14% (p = 0.17) and -2.67% (p = 0.0036), respectively. Conclusions Our work clearly confirms that the actual practice of relying solely on a Clarkson algorithm may be inappropriate for SRT planning. Meanwhile, CCC and SP were close to XVMC simulations and actual dose distributions obtained in lung SRT.

  18. Fusion of Data from Heterogeneous Sensors with Distributed Fields of View and Situation Evaluation for Advanced Driver Assistance Systems

    OpenAIRE

    Otto, Carola

    2013-01-01

    In order to develop a driver assistance system for pedestrian protection, pedestrians in the environment of a truck are detected by radars and a camera and are tracked across distributed fields of view using a Joint Integrated Probabilistic Data Association filter. A robust approach for predicting the system vehicle's trajectory is presented. It serves the computation of a probabilistic collision risk based on reachable sets, where different sources of uncertainty are taken into account.

  19. Heterogeneity of demand responses in modelling the distributional consequences of tradable carbon permits in the road transport sector

    International Nuclear Information System (INIS)

    Wadud, Zia; Noland, Robert B.; Graham, Daniel J.

    2007-01-01

    The personal road transport sector is one of the largest and fastest growing sources of CO2 emissions. This paper investigates a tradable permit policy for mitigating carbon emissions from personal road transport and discusses various issues of permit allocation. As tradable permits will effectively raise the price of fuel, the policy has important distributional implications. The distribution of the burden depends on permit allocation strategies and on the consumer response to an increase in price. The behavioural response may vary among different segments of the population depending on their travel needs, which in turn are contingent upon their income, location of residence and other factors. The Consumer Expenditure Survey micro dataset from 1997 to 2002 has been used to econometrically model the possible variation of price elasticity for different socio-economic groups in the USA. Results indicate that the response of gasoline demand to a change in price does depend on the income level or location of the household. Distributional impacts of the tradable permit policy are then evaluated using the micro dataset for year 2002. In this regard, different permit allocation schemes are considered in the analysis. Impacts on households owning a vehicle and households with no vehicles have been evaluated as well

  20. Heterogeneous distribution of pelagic sediments incoming the Japan Trench possibly controlling slip propagation on shallow plate boundary fault

    Science.gov (United States)

    Yamaguchi, A.; Nakamura, Y.; Fukuchi, R.; Kurano, H.; Ikehara, K.; Kanamatsu, T.; Arai, K.; Usami, K.; Ashi, J.

    2017-12-01

    The catastrophic tsunami of the 2011 Tohoku Earthquake was triggered by large coseismic slip that reached the Japan Trench axis (e.g. Fujiwara et al., 2011, Science; Kodaira et al., 2012, Nature Geoscience). Results of the IODP Expedition 343 (JFAST) suggest that the low friction of smectite-rich pelagic clay caused slip propagation on the shallow plate boundary fault (Ujiie et al., 2013, Science; Kameda et al., 2015, Geology; Moore et al., 2015, Geosphere). On the other hand, JAMSTEC high-resolution seismic profiles show that the incoming sediments have large heterogeneities in thickness, and two areas of extremely thin sediments on the Pacific Plate (thickness less than 100 m) were found at around 39°N (Nakamura et al., AGU 2017, this session). To examine whether the smectite-rich pelagic clay even exists in these areas, we sampled surface sediments during the R/V Shinsei Maru KS-15-3 cruise. Seven piston cores were retrieved from the seaward trench slope, horst, graben, and graben edge. Core lithologies are mainly diatomaceous ooze/clay including tephra layers, not resembling the pelagic clays discovered in JFAST. Ages of tephra layers were estimated by correlating mineral assemblages and refractive indices of volcanic glasses to Japanese widespread tephras. Average sedimentation rates of the seaward trench slope, horst, graben, and graben edge are estimated to be 25-30, 6.5-20, 45, and 0.9 cm/kyr, respectively. These sedimentation rates imply that sediments on the seaward trench slope and horst have been deposited in the last 160-500 kyr, suggesting that the entire pelagic sediments, including the smectite-rich pelagic clay, have been removed by some process in the last 0.5 million years. A possible reason for such modification of the sediments is near-trench igneous activity known as petit-spot volcanism (Hirano et al., 2006, Science). The lack of smectite-rich pelagic clay near 39°N of the Japan Trench is consistent with results of tsunami inversions proposing shallow large coseismic slip propagated

  1. Beyond-laboratory-scale prediction for channeling flows through subsurface rock fractures with heterogeneous aperture distributions revealed by laboratory evaluation

    Science.gov (United States)

    Ishibashi, Takuya; Watanabe, Noriaki; Hirano, Nobuo; Okamoto, Atsushi; Tsuchiya, Noriyoshi

    2015-01-01

    The present study evaluates aperture distributions and fluid flow characteristics for variously sized laboratory-scale granite fractures under confining stress. As a significant result of the laboratory investigation, the contact area in the fracture plane was found to be virtually independent of scale. By combining this characteristic with the self-affine fractal nature of fracture surfaces, a novel method for predicting fracture aperture distributions beyond laboratory scale is developed. Validity of this method is revealed through reproduction of the results of the laboratory investigation and of the maximum aperture-fracture length relations, which are reported in the literature, for natural fractures. The present study finally predicts conceivable scale dependencies of fluid flows through joints (fractures without shear displacement) and faults (fractures with shear displacement). Both joint and fault aperture distributions are characterized by a scale-independent contact area, a scale-dependent geometric mean, and a scale-independent geometric standard deviation of aperture. The contact areas for joints and faults are approximately 60% and 40%. Changes in the geometric means of joint and fault apertures (µm), e_m,joint and e_m,fault, with fracture length (m), l, are approximated by e_m,joint = 1 × 10^2 l^0.1 and e_m,fault = 1 × 10^3 l^0.7, whereas the geometric standard deviations of both joint and fault apertures are approximately 3. Fluid flows through both joints and faults are characterized by the formation of preferential flow paths (i.e., channeling flows) with scale-independent flow areas of approximately 10%, whereas the joint and fault permeabilities (m^2), k_joint and k_fault, are scale dependent and are approximated as k_joint = 1 × 10^-12 l^0.2 and k_fault = 1 × 10^-8 l^1.1.
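
    The scaling relations quoted at the end of the abstract are simple power laws in fracture length, so they are easy to tabulate. The sketch below evaluates them over laboratory-to-field scales; it merely restates the fitted relations given above and is not a substitute for the underlying aperture model.

```python
import numpy as np

# Empirical scaling relations quoted in the abstract
# (aperture in micrometres, permeability in m^2, fracture length l in metres).
def joint_mean_aperture_um(l):
    return 1e2 * l ** 0.1

def fault_mean_aperture_um(l):
    return 1e3 * l ** 0.7

def joint_permeability_m2(l):
    return 1e-12 * l ** 0.2

def fault_permeability_m2(l):
    return 1e-8 * l ** 1.1

for l in [0.1, 1.0, 10.0, 100.0]:   # laboratory to field scales
    print(f"l = {l:6.1f} m: "
          f"e_m,joint = {joint_mean_aperture_um(l):8.1f} um, "
          f"e_m,fault = {fault_mean_aperture_um(l):10.1f} um, "
          f"k_joint = {joint_permeability_m2(l):.2e} m^2, "
          f"k_fault = {fault_permeability_m2(l):.2e} m^2")
```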

  2. A Complex Network Theory Approach for the Spatial Distribution of Fire Breaks in Heterogeneous Forest Landscapes for the Control of Wildland Fires.

    Science.gov (United States)

    Russo, Lucia; Russo, Paola; Siettos, Constantinos I

    2016-01-01

    Based on complex network theory, we propose a computational methodology which addresses the spatial distribution of fuel breaks for the inhibition of the spread of wildland fires on heterogeneous landscapes. This is a two-level approach where the dynamics of fire spread are modeled as a random Markov field process on a directed network whose edge weights are determined by a Cellular Automata model that integrates detailed GIS, landscape and meteorological data. Within this framework, the spatial distribution of fuel breaks is reduced to the problem of finding network nodes (small land patches) which favour fire propagation. Here, this is accomplished by exploiting network centrality statistics. We illustrate the proposed approach through (a) an artificial forest of randomly distributed density of vegetation, and (b) a real-world case concerning the island of Rhodes in Greece whose major part of its forest was burned in 2008. Simulation results show that the proposed methodology outperforms the benchmark/conventional policy of fuel reduction as this can be realized by selective harvesting and/or prescribed burning based on the density and flammability of vegetation. Interestingly, our approach reveals that patches with sparse density of vegetation may act as hubs for the spread of the fire.
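
    A minimal sketch of the node-ranking step is given below, using betweenness centrality on a toy directed lattice as one example of the network centrality statistics mentioned above. The grid size, random edge weights and the 5% fuel-break budget are illustrative assumptions; in the actual methodology the edge weights come from the Cellular Automata model driven by GIS, landscape and meteorological data.

```python
import random
import networkx as nx

# Toy directed lattice standing in for the fire-spread network.
random.seed(1)
n = 20
G = nx.DiGraph()
for i in range(n):
    for j in range(n):
        for di, dj in [(0, 1), (1, 0), (0, -1), (-1, 0)]:
            ni, nj = i + di, j + dj
            if 0 <= ni < n and 0 <= nj < n:
                # Placeholder spread weights; a real model derives these from a CA.
                G.add_edge((i, j), (ni, nj), weight=random.uniform(0.1, 1.0))

# Rank land patches (nodes) by a centrality statistic; high-ranking patches are
# candidates for fuel breaks because many potential spread paths pass through them.
centrality = nx.betweenness_centrality(G)        # unweighted here, for simplicity
budget = int(0.05 * n * n)                       # assume 5% of patches can be treated
fuel_breaks = sorted(centrality, key=centrality.get, reverse=True)[:budget]
G.remove_nodes_from(fuel_breaks)

largest = max(nx.weakly_connected_components(G), key=len)
print(f"placed {budget} fuel breaks; largest remaining patch cluster: {len(largest)} nodes")
```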

  3. A Complex Network Theory Approach for the Spatial Distribution of Fire Breaks in Heterogeneous Forest Landscapes for the Control of Wildland Fires.

    Directory of Open Access Journals (Sweden)

    Lucia Russo

    Full Text Available Based on complex network theory, we propose a computational methodology which addresses the spatial distribution of fuel breaks for the inhibition of the spread of wildland fires on heterogeneous landscapes. This is a two-level approach where the dynamics of fire spread are modeled as a random Markov field process on a directed network whose edge weights are determined by a Cellular Automata model that integrates detailed GIS, landscape and meteorological data. Within this framework, the spatial distribution of fuel breaks is reduced to the problem of finding network nodes (small land patches) which favour fire propagation. Here, this is accomplished by exploiting network centrality statistics. We illustrate the proposed approach through (a) an artificial forest of randomly distributed density of vegetation, and (b) a real-world case concerning the island of Rhodes in Greece whose major part of its forest was burned in 2008. Simulation results show that the proposed methodology outperforms the benchmark/conventional policy of fuel reduction as this can be realized by selective harvesting and/or prescribed burning based on the density and flammability of vegetation. Interestingly, our approach reveals that patches with sparse density of vegetation may act as hubs for the spread of the fire.

  4. Developing a Distributed Consensus-Based Cooperative Adaptive Cruise Control System for Heterogeneous Vehicles with Predecessor Following Topology

    Directory of Open Access Journals (Sweden)

    Ziran Wang

    2017-01-01

    Full Text Available Connected and automated vehicle (CAV) has become an increasingly popular topic recently. As an application, Cooperative Adaptive Cruise Control (CACC) systems are of high interest, allowing CAVs to communicate with each other and coordinate their maneuvers to form platoons, where one vehicle follows another with a constant velocity and/or time headway. In this study, we propose a novel CACC system, where a distributed consensus algorithm and protocol are designed for platoon formation, merging maneuvers, and splitting maneuvers. A predecessor following information flow topology is adopted for the system, where each vehicle only communicates with its following vehicle to reach consensus of the whole platoon, making the vehicle-to-vehicle (V2V) communication fast and accurate. Moreover, different from most studies assuming the type and dynamics of all the vehicles in a platoon to be homogeneous, we take into account the length, location of the GPS antenna on the vehicle, and braking performance of different vehicles. A simulation study has been conducted under scenarios including normal platoon formation, platoon restoration from disturbances, and merging and splitting maneuvers. We have also carried out a sensitivity analysis on the distributed consensus algorithm, investigating the effect of the damping gain on convergence rate, driving comfort, and driving safety of the system.
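
    As an illustration of the kind of update rule such a platoon controller uses, the sketch below simulates a simple constant-time-headway, predecessor-following law in which each follower regulates its gap and relative speed against the vehicle ahead. The gains, headway and vehicle lengths are made-up values, and this generic law is only a stand-in for the paper's actual consensus algorithm and V2V protocol.

```python
import numpy as np

# Simple predecessor-following platoon simulation (Euler integration).
dt, T = 0.1, 60.0
h = 1.2              # desired time headway [s] (assumed)
kp, kv = 0.45, 0.9   # spacing-error and speed-error gains (assumed)
lengths = np.array([18.0, 4.5, 4.5, 5.2])   # heterogeneous vehicle lengths [m]
x = np.array([120.0, 90.0, 60.0, 30.0])     # positions, index 0 = platoon leader
v = np.full(4, 20.0)                        # initial speeds [m/s]

for _ in np.arange(0.0, T, dt):
    a = np.zeros_like(v)
    a[0] = 0.0   # leader cruises at constant speed
    for i in range(1, len(x)):
        gap = x[i - 1] - x[i] - lengths[i - 1]          # bumper-to-bumper gap
        a[i] = kp * (gap - h * v[i]) + kv * (v[i - 1] - v[i])
    v += a * dt
    x += v * dt

print("steady-state gaps [m]:", np.round(x[:-1] - x[1:] - lengths[:-1], 2))
```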

  5. A statistical image analysis framework for pore-free islands derived from heterogeneity distribution of nuclear pore complexes.

    Science.gov (United States)

    Mimura, Yasuhiro; Takemoto, Satoko; Tachibana, Taro; Ogawa, Yutaka; Nishimura, Masaomi; Yokota, Hideo; Imamoto, Naoko

    2017-11-24

    Nuclear pore complexes (NPCs) maintain cellular homeostasis by mediating nucleocytoplasmic transport. Although cyclin-dependent kinases (CDKs) regulate NPC assembly in interphase, the location of NPC assembly on the nuclear envelope is not clear. CDKs also regulate the disappearance of pore-free islands, which are nuclear envelope subdomains; this subdomain gradually disappears with increase in homogeneity of the NPC in response to CDK activity. However, a causal relationship between pore-free islands and NPC assembly remains unclear. Here, we elucidated mechanisms underlying NPC assembly from a new perspective by focusing on pore-free islands. We proposed a novel framework for image-based analysis to automatically determine the detailed 'landscape' of pore-free islands from a large quantity of images, leading to the identification of NPC intermediates that appear in pore-free islands with increased frequency in response to CDK activity. Comparison of the spatial distribution between simulated and the observed NPC intermediates within pore-free islands showed that their distribution was spatially biased. These results suggested that the disappearance of pore-free islands is highly related to de novo NPC assembly and indicated the existence of specific regulatory mechanisms for the spatial arrangement of NPC assembly on nuclear envelopes.

  6. Whole-body voxel-based personalized dosimetry: Multiple voxel S-value approach for heterogeneous media with non-uniform activity distributions.

    Science.gov (United States)

    Lee, Min Sun; Kim, Joong Hyun; Paeng, Jin Chul; Kang, Keon Wook; Jeong, Jae Min; Lee, Dong Soo; Lee, Jae Sung

    2017-12-14

    Personalized dosimetry with high accuracy is becoming more important because of the growing interests in personalized medicine and targeted radionuclide therapy. Voxel-based dosimetry using dose point kernel or voxel S-value (VSV) convolution is available. However, these approaches do not consider medium heterogeneity. Here, we propose a new method for whole-body voxel-based personalized dosimetry for heterogeneous media with non-uniform activity distributions, which is referred to as the multiple VSV approach. Methods: The multiple numbers (N) of VSVs for media with different densities covering the whole-body density ranges were used instead of using only a single VSV for water. The VSVs were pre-calculated using GATE Monte Carlo simulation; those were convoluted with the time-integrated activity to generate density-specific dose maps. Computed tomography-based segmentation was conducted to generate binary maps for each density region. The final dose map was acquired by the summation of N segmented density-specific dose maps. We tested several sets of VSVs with different densities: N = 1 (single water VSV), 4, 6, 8, 10, and 20. To validate the proposed method, phantom and patient studies were conducted and compared with direct Monte Carlo, which was considered the ground truth. Finally, patient dosimetry (10 subjects) was conducted using the multiple VSV approach and compared with the single VSV and organ-based dosimetry approaches. Errors at the voxel- and organ-levels were reported for eight organs. Results: In the phantom and patient studies, the multiple VSV approach showed significant improvements regarding voxel-level errors, especially for the lung and bone regions. As N increased, voxel-level errors decreased, although some overestimations were observed at lung boundaries. In the case of multiple VSVs ( N = 8), we achieved voxel-level errors of 2.06%. In the dosimetry study, our proposed method showed much improved results compared to the single VSV and
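
    The core of the multiple-VSV idea is a per-density convolution followed by a segmentation-based recombination. The sketch below mimics those steps on synthetic arrays; the activity map, density map and the inverse-square 'kernel' are placeholders, whereas in the study the VSVs are precomputed with GATE Monte Carlo simulations and the segmentation comes from CT.

```python
import numpy as np
from scipy.ndimage import convolve

shape = (40, 40, 40)
rng = np.random.default_rng(1)
activity = rng.random(shape)                             # time-integrated activity map (synthetic)
density = np.where(rng.random(shape) > 0.7, 0.3, 1.0)    # lung-like vs water-like voxels (synthetic)

def toy_vsv_kernel(medium_density, size=7):
    """Placeholder kernel; real VSVs are precomputed with Monte Carlo (e.g. GATE)."""
    r = np.indices((size,) * 3) - size // 2
    dist2 = (r ** 2).sum(axis=0) + 1.0
    return 1.0 / (dist2 * medium_density)

dose = np.zeros(shape)
for rho in np.unique(density):
    mask = density == rho
    # Convolve the whole activity map with the density-specific kernel...
    dose_rho = convolve(activity, toy_vsv_kernel(rho), mode="constant")
    # ...but keep the result only in voxels belonging to that density class.
    dose[mask] = dose_rho[mask]

print("mean voxel dose (arbitrary units):", dose.mean())
```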

  7. Quantifying the impact of inter-site heterogeneity on the distribution of ChIP-seq data

    Directory of Open Access Journals (Sweden)

    Jonathan eCairns

    2014-11-01

    Full Text Available Chromatin Immunoprecipitation followed by sequencing (ChIP-seq) is a valuable tool for epigenetic studies. Analysis of the data arising from ChIP-seq experiments often requires implicit or explicit statistical modelling of the read counts. The simple Poisson model is attractive, but does not provide a good fit to observed ChIP-seq data. Researchers therefore often either extend to a more general model (e.g. the Negative Binomial), and/or exclude regions of the genome that do not conform to the model. Since many modelling strategies employed for ChIP-seq data reduce to fitting a mixture of Poisson distributions, we explore the problem of inferring the optimal mixing distribution. We apply the Constrained Newton Method (CNM), which suggests the Negative Binomial - Negative Binomial (NB-NB) mixture model as a candidate for modelling ChIP-seq data. We illustrate fitting the NB-NB model with an accelerated EM algorithm on four data sets from three species. Zero-inflated models have been suggested as an approach to improve model fit for ChIP-seq data. We show that the NB-NB mixture model requires no zero-inflation and suggest that in some cases the need for zero inflation is driven by the model's inability to cope with both artefactual large read counts and the frequently observed very low read counts. We see that the CNM-based approach is a useful diagnostic for the assessment of model fit and inference in ChIP-seq data and beyond. Use of the suggested NB-NB mixture model will be of value not only when calling peaks or otherwise modelling ChIP-seq data, but also when simulating data or constructing blacklists de novo.
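
    To make the NB-NB mixture concrete, the sketch below fits a two-component negative binomial mixture to synthetic 'read counts' with a plain EM loop, profiling each component's size parameter numerically. It is only an illustration of the model family; the paper itself infers the mixing distribution with the Constrained Newton Method and an accelerated EM, neither of which is reproduced here.

```python
import numpy as np
from scipy.stats import nbinom
from scipy.optimize import minimize_scalar

def weighted_nb_fit(x, w):
    """Fit NB(r, p) to counts x with weights w by profiling r (p fixed by the weighted mean)."""
    m = np.average(x, weights=w)
    def neg_ll(log_r):
        r = np.exp(log_r)
        p = r / (r + m)
        return -np.sum(w * nbinom.logpmf(x, r, p))
    r = np.exp(minimize_scalar(neg_ll, bounds=(-3, 8), method="bounded").x)
    return r, r / (r + m)

def fit_nb_nb(x, iters=50):
    """Plain EM for a two-component negative binomial mixture."""
    lab = (x > np.median(x)).astype(float)   # crude initialisation: split at the median
    w1, w2 = 1.0 - lab, lab
    pi = 0.5
    for _ in range(iters):
        (r1, p1), (r2, p2) = weighted_nb_fit(x, w1), weighted_nb_fit(x, w2)
        # E-step: responsibilities of component 2.
        l1 = np.log(1 - pi) + nbinom.logpmf(x, r1, p1)
        l2 = np.log(pi) + nbinom.logpmf(x, r2, p2)
        w2 = 1.0 / (1.0 + np.exp(l1 - l2))
        w1 = 1.0 - w2
        pi = w2.mean()
    return (r1, p1), (r2, p2), pi

# Synthetic "read counts": a background component plus an enriched component.
rng = np.random.default_rng(0)
counts = np.concatenate([rng.negative_binomial(2, 0.5, 8000),
                         rng.negative_binomial(10, 0.1, 2000)])
print(fit_nb_nb(counts))
```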

  8. The spatial distribution and chemical heterogeneity of clinoptilolite at Yucca Mountain, Nye County, Nevada: Evidence for polygenetic hypogene alteration

    International Nuclear Information System (INIS)

    Livingston, D.E.; Szymanski, J.S.

    1994-01-01

    This part of TRAC's Annual Report for 1993 summarizes the findings of previous reports on the major element geochemistry of zeolitic alteration of the tuffs at Yucca Mountain and updates the status of the work. In this report we examine the spatial distribution of zeolites by stratigraphic units and boreholes, and the various types of chemical alteration of clinoptilolite indicated by the data reported in Broxton et al. and Bish and Chipera. The purpose is to evaluate the extent of the metasomatic alteration and to test the hypogene hypothesis of Szymanski. In this regard, it is of prime importance to evaluate whether the metasomatic alteration at Yucca Mountain is due to supergene or hypogene processes. In this report, the term "supergene" denotes alteration and mineralization produced by fluids derived directly from atmospheric precipitation and infiltration through the vadose zone, and the term "hypogene" denotes alteration and mineralization produced by fluids from the phreatic zone regardless of their former location or residence time in the Earth's crust. This report begins with a review of previous work on the genesis of zeolites of the Nevada Test Site

  9. A survey of informatics platforms that enable distributed comparative effectiveness research using multi-institutional heterogeneous clinical data

    Science.gov (United States)

    Sittig, Dean F.; Hazlehurst, Brian L.; Brown, Jeffrey; Murphy, Shawn; Rosenman, Marc; Tarczy-Hornoch, Peter; Wilcox, Adam B.

    2012-01-01

    Comparative Effectiveness Research (CER) has the potential to transform the current healthcare delivery system by identifying the most effective medical and surgical treatments, diagnostic tests, disease prevention methods and ways to deliver care for specific clinical conditions. To be successful, such research requires the identification, capture, aggregation, integration, and analysis of disparate data sources held by different institutions with diverse representations of the relevant clinical events. In an effort to address these diverse demands, there have been multiple new designs and implementations of informatics platforms that provide access to electronic clinical data and the governance infrastructure required for inter-institutional CER. The goal of this manuscript is to help investigators understand why these informatics platforms are required and to compare and contrast six, large-scale, recently funded, CER-focused informatics platform development efforts. We utilized an 8-dimension, socio-technical model of health information technology use to help guide our work. We identified six generic steps that are necessary in any distributed, multi-institutional CER project: data identification, extraction, modeling, aggregation, analysis, and dissemination. We expect that over the next several years these projects will provide answers to many important, and heretofore unanswerable, clinical research questions. PMID:22692259

  10. A survey of informatics platforms that enable distributed comparative effectiveness research using multi-institutional heterogenous clinical data.

    Science.gov (United States)

    Sittig, Dean F; Hazlehurst, Brian L; Brown, Jeffrey; Murphy, Shawn; Rosenman, Marc; Tarczy-Hornoch, Peter; Wilcox, Adam B

    2012-07-01

    Comparative effectiveness research (CER) has the potential to transform the current health care delivery system by identifying the most effective medical and surgical treatments, diagnostic tests, disease prevention methods, and ways to deliver care for specific clinical conditions. To be successful, such research requires the identification, capture, aggregation, integration, and analysis of disparate data sources held by different institutions with diverse representations of the relevant clinical events. In an effort to address these diverse demands, there have been multiple new designs and implementations of informatics platforms that provide access to electronic clinical data and the governance infrastructure required for interinstitutional CER. The goal of this manuscript is to help investigators understand why these informatics platforms are required and to compare and contrast 6 large-scale, recently funded, CER-focused informatics platform development efforts. We utilized an 8-dimension, sociotechnical model of health information technology to help guide our work. We identified 6 generic steps that are necessary in any distributed, multi-institutional CER project: data identification, extraction, modeling, aggregation, analysis, and dissemination. We expect that over the next several years these projects will provide answers to many important, and heretofore unanswerable, clinical research questions.

  11. Influence of pH, layer charge location and crystal thickness distribution on U(VI) sorption onto heterogeneous dioctahedral smectite

    Energy Technology Data Exchange (ETDEWEB)

    Guimarães, Vanessa [Instituto de Ciências da Terra – Porto, DGAOT, Faculdade de Ciências, Universidade do Porto, Rua do Campo Alegre 687, 4169-007 Porto (Portugal); Geobiotec. Departamento de Geociências da Universidade de Aveiro, Campo Universitário de Santiago, 3810-193 Aveiro (Portugal); Rodríguez-Castellón, Enrique; Algarra, Manuel [Departamento de Química Inorgánica, Facultad de Ciencias, Universidad de Málaga. Campus de Teatino s/n, 29071 Málaga (Spain); Rocha, Fernando [Geobiotec. Departamento de Geociências da Universidade de Aveiro, Campo Universitário de Santiago, 3810-193 Aveiro (Portugal); Bobos, Iuliu, E-mail: ibobos@fc.up.pt [Instituto de Ciências da Terra – Porto, DGAOT, Faculdade de Ciências, Universidade do Porto, Rua do Campo Alegre 687, 4169-007 Porto (Portugal)

    2016-11-05

    Highlights: • The UO{sub 2}{sup 2+} sorption at pH 4 and 6 on heterogeneous smectite structure. • The cation exchange process is affected by layer charge distribution. • Surface complexation and cation exchange modelling. • New binding energy components identified by X-ray photoelectron spectroscopy. - Abstract: The UO{sub 2}{sup 2+} adsorption on smectite (samples BA1, PS2 and PS3) with a heterogeneous structure was investigated at pH 4 (I = 0.02 M) and pH 6 (I = 0.2 M) in batch experiments, with the aim to evaluate the influence of pH, layer charge location and crystal thickness distribution. Mean crystal thickness distribution of smectite crystallite used in sorption experiments range from 4.8 nm (sample PS2), to 5.1 nm (sample PS3) and, to 7.4 nm (sample BA1). Smaller crystallites have higher total surface area and sorption capacity. Octahedral charge location favor higher sorption capacity. The sorption isotherms of Freundlich, Langmuir and SIPS were used to model the sorption experiments. The surface complexation and cation exchange reactions were modeled using PHREEQC-code to describe the UO{sub 2}{sup 2+} sorption on smectite. The amount of UO{sub 2}{sup 2+} adsorbed on smectite samples decreased significantly at pH 6 and higher ionic strength, where the sorption mechanism was restricted to the edge sites of smectite. Two binding energy components at 380.8 ± 0.3 and 382.2 ± 0.3 eV, assigned to hydrated UO{sub 2}{sup 2+} adsorbed by cation exchange and by inner-sphere complexation on the external sites at pH 4, were identified after the U4f{sub 7/2} peak deconvolution by X-photoelectron spectroscopy. Also, two new binding energy components at 380.3 ± 0.3 and 381.8 ± 0.3 eV assigned to ≡AlOUO{sub 2}{sup +} and ≡SiOUO{sub 2}{sup +} surface species were observed at pH 6.
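
    The three isotherm forms named above have simple closed-form expressions, so fitting them to batch sorption data is a small least-squares exercise. The sketch below fits all three to synthetic (C, q) data with scipy; the parameter values, noise level and data are invented for illustration and do not reproduce the study's PHREEQC-based surface complexation modelling.

```python
import numpy as np
from scipy.optimize import curve_fit

# Isotherm forms (q: sorbed amount, C: equilibrium concentration).
def freundlich(C, KF, n):
    return KF * C ** n

def langmuir(C, qmax, KL):
    return qmax * KL * C / (1.0 + KL * C)

def sips(C, qmax, KS, n):
    return qmax * (KS * C) ** n / (1.0 + (KS * C) ** n)

C = np.linspace(0.01, 2.0, 25)
rng = np.random.default_rng(3)
q_obs = langmuir(C, 40.0, 3.0) + rng.normal(0.0, 0.8, C.size)   # synthetic batch data

for model, p0 in [(freundlich, (10.0, 0.5)), (langmuir, (30.0, 1.0)), (sips, (30.0, 1.0, 1.0))]:
    popt, _ = curve_fit(model, C, q_obs, p0=p0, maxfev=10000)
    rss = float(np.sum((q_obs - model(C, *popt)) ** 2))
    print(f"{model.__name__:10s} parameters = {np.round(popt, 3)}  RSS = {rss:.2f}")
```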

  12. Influence of pH, layer charge location and crystal thickness distribution on U(VI) sorption onto heterogeneous dioctahedral smectite

    International Nuclear Information System (INIS)

    Guimarães, Vanessa; Rodríguez-Castellón, Enrique; Algarra, Manuel; Rocha, Fernando; Bobos, Iuliu

    2016-01-01

    Highlights: • The UO2(2+) sorption at pH 4 and 6 on a heterogeneous smectite structure. • The cation exchange process is affected by layer charge distribution. • Surface complexation and cation exchange modelling. • New binding energy components identified by X-ray photoelectron spectroscopy. - Abstract: The UO2(2+) adsorption on smectite (samples BA1, PS2 and PS3) with a heterogeneous structure was investigated at pH 4 (I = 0.02 M) and pH 6 (I = 0.2 M) in batch experiments, with the aim of evaluating the influence of pH, layer charge location and crystal thickness distribution. The mean crystal thickness distribution of the smectite crystallites used in the sorption experiments ranges from 4.8 nm (sample PS2), to 5.1 nm (sample PS3) and to 7.4 nm (sample BA1). Smaller crystallites have higher total surface area and sorption capacity. Octahedral charge location favors higher sorption capacity. The sorption isotherms of Freundlich, Langmuir and SIPS were used to model the sorption experiments. The surface complexation and cation exchange reactions were modeled using the PHREEQC code to describe the UO2(2+) sorption on smectite. The amount of UO2(2+) adsorbed on the smectite samples decreased significantly at pH 6 and higher ionic strength, where the sorption mechanism was restricted to the edge sites of smectite. Two binding energy components at 380.8 ± 0.3 and 382.2 ± 0.3 eV, assigned to hydrated UO2(2+) adsorbed by cation exchange and by inner-sphere complexation on the external sites at pH 4, were identified after the U4f7/2 peak deconvolution by X-ray photoelectron spectroscopy. Also, two new binding energy components at 380.3 ± 0.3 and 381.8 ± 0.3 eV assigned to ≡AlOUO2(+) and ≡SiOUO2(+) surface species were observed at pH 6.

  13. Epigenetic and conventional regulation is distributed among activators of FLO11 allowing tuning of population-level heterogeneity in its expression.

    Directory of Open Access Journals (Sweden)

    Leah M Octavio

    2009-10-01

    Full Text Available Epigenetic switches encode their state information either locally, often via covalent modification of DNA or histones, or globally, usually in the level of a trans-regulatory factor. Here we examine how the regulation of cis-encoded epigenetic switches controls the extent of heterogeneity in gene expression, which is ultimately tied to phenotypic diversity in a population. We show that two copies of the FLO11 locus in Saccharomyces cerevisiae switch between a silenced and competent promoter state in a random and independent fashion, implying that the molecular event leading to the transition occurs locally at the promoter, in cis. We further quantify the effect of trans regulators both on the slow epigenetic transitions between a silenced and competent promoter state and on the fast promoter transitions associated with conventional regulation of FLO11. We find different classes of regulators affect epigenetic, conventional, or both forms of regulation. Distributing kinetic control of epigenetic silencing and conventional gene activation offers cells flexibility in shaping the distribution of gene expression and phenotype within a population.

  14. Column Store for GWAC: A High-cadence, High-density, Large-scale Astronomical Light Curve Pipeline and Distributed Shared-nothing Database

    Science.gov (United States)

    Wan, Meng; Wu, Chao; Wang, Jing; Qiu, Yulei; Xin, Liping; Mullender, Sjoerd; Mühleisen, Hannes; Scheers, Bart; Zhang, Ying; Nes, Niels; Kersten, Martin; Huang, Yongpan; Deng, Jinsong; Wei, Jianyan

    2016-11-01

    The ground-based wide-angle camera array (GWAC), a part of the SVOM space mission, will search for various types of optical transients by continuously imaging a field of view (FOV) of 5000 degrees2 every 15 s. Each exposure consists of 36 × 4k × 4k pixels, typically resulting in 36 × ˜175,600 extracted sources. For a modern time-domain astronomy project like GWAC, which produces massive amounts of data with a high cadence, it is challenging to search for short timescale transients in both real-time and archived data, and to build long-term light curves for variable sources. Here, we develop a high-cadence, high-density light curve pipeline (HCHDLP) to process the GWAC data in real-time, and design a distributed shared-nothing database to manage the massive amount of archived data which will be used to generate a source catalog with more than 100 billion records during 10 years of operation. First, we develop HCHDLP based on the column-store DBMS of MonetDB, taking advantage of MonetDB’s high performance when applied to massive data processing. To realize the real-time functionality of HCHDLP, we optimize the pipeline in its source association function, including both time and space complexity from outside the database (SQL semantic) and inside (RANGE-JOIN implementation), as well as in its strategy of building complex light curves. The optimized source association function is accelerated by three orders of magnitude. Second, we build a distributed database using a two-level time partitioning strategy via the MERGE TABLE and REMOTE TABLE technology of MonetDB. Intensive tests validate that our database architecture is able to achieve both linear scalability in response time and concurrent access by multiple users. In summary, our studies provide guidance for a solution to GWAC in real-time data processing and management of massive data.
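
    The two-level partitioning described above maps naturally onto MonetDB's MERGE TABLE and REMOTE TABLE features. The sketch below shows the general pattern through pymonetdb; the host names, ports, column layout and partition granularity are illustrative assumptions and do not reproduce the GWAC schema or cluster layout.

```python
import pymonetdb

# Hypothetical two-level time partitioning with MonetDB MERGE/REMOTE tables.
conn = pymonetdb.connect(username="monetdb", password="monetdb",
                         hostname="head-node", database="gwac")
cur = conn.cursor()

cols = "(src_id BIGINT, mjd DOUBLE, mag REAL, mag_err REAL)"

# Level 1: a MERGE TABLE on the head node unions one child table per month.
cur.execute(f"CREATE MERGE TABLE lightcurves {cols}")

# Level 2: each monthly child is a REMOTE TABLE living on a worker node, which
# can itself be a merge of finer (e.g. nightly) partitions on that node.
for month, node in [("2016_10", "worker1"), ("2016_11", "worker2")]:
    cur.execute(f"CREATE REMOTE TABLE lc_{month} {cols} "
                f"ON 'mapi:monetdb://{node}:50000/gwac'")
    cur.execute(f"ALTER TABLE lightcurves ADD TABLE lc_{month}")

conn.commit()
# Queries against `lightcurves` are transparently pushed down to the workers.
cur.execute("SELECT src_id, COUNT(*) FROM lightcurves GROUP BY src_id LIMIT 5")
print(cur.fetchall())
```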

  15. An Analysis of Weakly Consistent Replication Systems in an Active Distributed Network

    OpenAIRE

    Amit Chougule; Pravin Ghewari

    2011-01-01

    With the sudden increase in heterogeneity and distribution of data in wide-area networks, more flexible, efficient and autonomous approaches for management and data distribution are needed. In recent years, the proliferation of inter-networks and distributed applications has increased the demand for geographically-distributed replicated databases. The architecture of Bayou provides features that address the needs of database storage of world-wide applications. Key is the use of weak consisten...

  16. Root Systems of Individual Plants, and the Biotic and Abiotic Factors Controlling Their Depth and Distribution: a Synthesis Using a Global Database.

    Science.gov (United States)

    Tumber-Davila, S. J.; Schenk, H. J.; Jackson, R. B.

    2017-12-01

    This synthesis examines plant rooting distributions globally, by doubling the number of entries in the Root Systems of Individual Plants database (RSIP) created by Schenk and Jackson. Root systems influence many processes, including water and nutrient uptake and soil carbon storage. Root systems also mediate vegetation responses to changing climatic and environmental conditions. Therefore, a collective understanding of the importance of rooting systems to carbon sequestration, soil characteristics, hydrology, and climate, is needed. Current global models are limited by a poor understanding of the mechanisms affecting rooting, carbon stocks, and belowground biomass. This improved database contains an extensive bank of records describing the rooting system of individual plants, as well as detailed information on the climate and environment from which the observations are made. The expanded RSIP database will: 1) increase our understanding of rooting depths, lateral root spreads and above and belowground allometry; 2) improve the representation of plant rooting systems in Earth System Models; 3) enable studies of how climate change will alter and interact with plant species and functional groups in the future. We further focus on how plant rooting behavior responds to variations in climate and the environment, and create a model that can predict rooting behavior given a set of environmental conditions. Preliminary results suggest that high potential evapotranspiration and seasonality of precipitation are indicative of deeper rooting after accounting for plant growth form. When mapping predicted deep rooting by climate, we predict deepest rooting to occur in equatorial South America, Africa, and central India.

  17. Effects of Supported (nBuCp)2ZrCl2 Catalyst Active-Center Distribution on Ethylene–1-Hexene Copolymer Backbone Heterogeneity and Thermal Behaviors

    KAUST Repository

    Atiqullah, Muhammad

    2013-07-10

    Two catalysts, denoted as catalyst 1 [silica/MAO/(nBuCp)2ZrCl2] and catalyst 2 [silica/nBuSnCl3/MAO/(nBuCp)2ZrCl2], were synthesized and subsequently used to prepare, without separate feeding of methylaluminoxane (MAO), ethylene homopolymer 1 and homopolymer 2, respectively, and ethylene-1-hexene copolymer 1 and copolymer 2, respectively. Gel permeation chromatography (GPC), Crystaf, differential scanning calorimetry (DSC) [conventional and successive self-nucleation and annealing (SSA)], and 13C nuclear magnetic resonance (NMR) polymer characterization results were used, as appropriate, to model the catalyst active-center distribution, ethylene sequence (equilibrium crystal) distribution, and lamellar thickness distribution (both continuous and discrete). Five different types of active centers were predicted in each catalyst, as corroborated by the SSA experiments and complemented by an extended X-ray absorption fine structure (EXAFS) report published in the literature. 13C NMR spectroscopy also supported this active-center multiplicity. Models combined with experiments effectively illustrated how and why the active-center distribution and the variance in the design of the supported MAO anion, having different electronic and steric effects and coordination environments, influence the concerned copolymerization mechanism and polymer properties, including inter- and intrachain compositional heterogeneity and thermal behaviors. Copolymerization occurred according to the first-order Markovian terminal model, producing fairly random copolymers with minor skewedness toward blocky character. For each copolymer, the theoretical most probable ethylene sequence lengths, nE,MP(DSC-GT) and nE,MP(NMR-Flory), as well as the weight-average lamellar thicknesses, Lw,av(DSC-GT) and Lw,av(SSA-DSC), were found to be comparable. To the best of our knowledge, such a match has not previously been reported. The percentage crystallinities of the homo- and copolymers increased linearly as a function of

  18. Influence of pH, layer charge location and crystal thickness distribution on U(VI) sorption onto heterogeneous dioctahedral smectite.

    Science.gov (United States)

    Guimarães, Vanessa; Rodríguez-Castellón, Enrique; Algarra, Manuel; Rocha, Fernando; Bobos, Iuliu

    2016-11-05

    The UO2(2+) adsorption on smectite (samples BA1, PS2 and PS3) with a heterogeneous structure was investigated at pH 4 (I = 0.02 M) and pH 6 (I = 0.2 M) in batch experiments, with the aim of evaluating the influence of pH, layer charge location and crystal thickness distribution. The mean crystal thickness distribution of the smectite crystallites used in the sorption experiments ranges from 4.8 nm (sample PS2), to 5.1 nm (sample PS3) and to 7.4 nm (sample BA1). Smaller crystallites have higher total surface area and sorption capacity. Octahedral charge location favors higher sorption capacity. The sorption isotherms of Freundlich, Langmuir and SIPS were used to model the sorption experiments. The surface complexation and cation exchange reactions were modeled using the PHREEQC code to describe the UO2(2+) sorption on smectite. The amount of UO2(2+) adsorbed on the smectite samples decreased significantly at pH 6 and higher ionic strength, where the sorption mechanism was restricted to the edge sites of smectite. Two binding energy components at 380.8 ± 0.3 and 382.2 ± 0.3 eV, assigned to hydrated UO2(2+) adsorbed by cation exchange and by inner-sphere complexation on the external sites at pH 4, were identified after the U4f7/2 peak deconvolution by X-ray photoelectron spectroscopy. Also, two new binding energy components at 380.3 ± 0.3 and 381.8 ± 0.3 eV assigned to AlOUO2(+) and SiOUO2(+) surface species were observed at pH 6. Copyright © 2016 Elsevier B.V. All rights reserved.

  19. The distribution of blood eosinophil levels in a Japanese COPD clinical trial database and in the rest of the world

    Science.gov (United States)

    Ishii, Takeo; Hizawa, Nobuyuki; Midwinter, Dawn; James, Mark; Hilton, Emma; Jones, Paul W

    2018-01-01

    Background Blood eosinophil measurements may help to guide physicians on the use of inhaled corticosteroids (ICS) for patients with chronic obstructive pulmonary disease (COPD). Emerging data suggest that COPD patients with higher blood eosinophil counts may be at higher risk of exacerbations and more likely to benefit from combined ICS/long-acting beta2-agonist (LABA) treatment than therapy with a LABA alone. This analysis describes the distribution of blood eosinophil count at baseline in Japanese COPD patients in comparison with non-Japanese COPD patients. Methods A post hoc analysis of eosinophil distribution by percentage and absolute cell count was performed across 12 Phase II–IV COPD clinical studies (seven Japanese studies [N=848 available absolute eosinophil counts] and five global studies [N=5,397 available eosinophil counts] that included 246 Japanese patients resident in Japan with available counts). Blood eosinophil distributions were assessed at baseline, before blinded treatment assignment. Findings Among Japanese patients, the median (interquartile range) absolute eosinophil count was 170 cells/mm3 (100–280 cells/mm3). Overall, 612/1,094 Japanese patients (56%) had an absolute eosinophil count ≥150 cells/mm3 and 902/1,304 Japanese patients (69%) had a percentage eosinophil ≥2%. Among non-Japanese patients, these values were 160 (100–250) cells/mm3, 2,842/5,151 patients (55%), and 2,937/5,155 patients (57%), respectively. The eosinophil distribution among Japanese patients was similar to that among non-Japanese patients. Within multi-country studies with similar inclusion criteria, the eosinophil count was numerically lower in Japanese compared with non-Japanese patients (median 120 vs 160 cells/mm3). Interpretation The eosinophil distribution in Japanese patients seems comparable to that of non-Japanese patients; although within multi-country studies, there was a slightly lower median eosinophil count for Japanese patients compared with

  20. Potential impacts of OCS oil and gas activities on fisheries. Volume 1. Annotated bibliography and database descriptions for target-species distribution and abundance studies. Section 1, Part 2. Final report

    International Nuclear Information System (INIS)

    Tear, L.M.

    1989-10-01

    The purpose of the volume is to present an annotated bibliography of unpublished and grey literature related to the distribution and abundance of select species of finfish and shellfish along the coasts of the United States. The volume also includes descriptions of databases that contain information related to target species' distribution and abundance. An index is provided at the end of each section to help the reader locate studies or databases related to a particular species

  1. Potential impacts of OCS oil and gas activities on fisheries. Volume 1. Annotated bibliography and database descriptions for target species distribution and abundance studies. Section 1, Part 1. Final report

    International Nuclear Information System (INIS)

    Tear, L.M.

    1989-10-01

    The purpose of the volume is to present an annotated bibliography of unpublished and grey literature related to the distribution and abundance of select species of finfish and shellfish along the coasts of the United States. The volume also includes descriptions of databases that contain information related to target species' distribution and abundance. An index is provided at the end of each section to help the reader locate studies or databases related to a particular species

  2. The distribution of blood eosinophil levels in a Japanese COPD clinical trial database and in the rest of the world

    Directory of Open Access Journals (Sweden)

    Barnes N

    2018-02-01

    Full Text Available Neil Barnes,1,2 Takeo Ishii,3,4 Nobuyuki Hizawa,5 Dawn Midwinter,6 Mark James,3 Emma Hilton,1 Paul Jones1,7 (1Respiratory Medicine Franchise, GlaxoSmithKline, Brentford, UK; 2William Harvey Research Institute, Barts and The London School of Medicine and Dentistry, London, UK; 3Medical Affairs, GlaxoSmithKline K.K., Tokyo, Japan; 4Graduate School of Medicine, Nippon Medical School, Tokyo, Japan; 5Department of Pulmonary Medicine, Faculty of Medicine, University of Tsukuba, Tsukuba, Japan; 6Global Respiratory Department, GlaxoSmithKline, Stockley Park, UK; 7Institute of Infection and Immunity, St George’s University of London, London, UK) Background: Blood eosinophil measurements may help to guide physicians on the use of inhaled corticosteroids (ICS) for patients with chronic obstructive pulmonary disease (COPD). Emerging data suggest that COPD patients with higher blood eosinophil counts may be at higher risk of exacerbations and more likely to benefit from combined ICS/long-acting beta2-agonist (LABA) treatment than therapy with a LABA alone. This analysis describes the distribution of blood eosinophil count at baseline in Japanese COPD patients in comparison with non-Japanese COPD patients. Methods: A post hoc analysis of eosinophil distribution by percentage and absolute cell count was performed across 12 Phase II–IV COPD clinical studies (seven Japanese studies [N=848 available absolute eosinophil counts] and five global studies [N=5,397 available eosinophil counts] that included 246 Japanese patients resident in Japan with available counts). Blood eosinophil distributions were assessed at baseline, before blinded treatment assignment. Findings: Among Japanese patients, the median (interquartile range) absolute eosinophil count was 170 cells/mm3 (100–280 cells/mm3). Overall, 612/1,094 Japanese patients (56%) had an absolute eosinophil count ≥150 cells/mm3 and 902/1,304 Japanese patients (69%) had a percentage eosinophil ≥2%. Among non

  3. The Global Terrestrial Network for Permafrost Database: metadata statistics and prospective analysis on future permafrost temperature and active layer depth monitoring site distribution

    Science.gov (United States)

    Biskaborn, B. K.; Lanckman, J.-P.; Lantuit, H.; Elger, K.; Streletskiy, D. A.; Cable, W. L.; Romanovsky, V. E.

    2015-03-01

    The Global Terrestrial Network for Permafrost (GTN-P) provides the first dynamic database associated with the Thermal State of Permafrost (TSP) and the Circumpolar Active Layer Monitoring (CALM) programs, which extensively collect permafrost temperature and active layer thickness data from Arctic, Antarctic and Mountain permafrost regions. The purpose of the database is to establish an "early warning system" for the consequences of climate change in permafrost regions and to provide standardized thermal permafrost data to global models. In this paper we perform statistical analysis of the GTN-P metadata aiming to identify the spatial gaps in the GTN-P site distribution in relation to climate-effective environmental parameters. We describe the concept and structure of the Data Management System in regard to user operability, data transfer and data policy. We outline data sources and data processing including quality control strategies. Assessment of the metadata and data quality reveals 63% metadata completeness at active layer sites and 50% metadata completeness for boreholes. Voronoi Tessellation Analysis on the spatial sample distribution of boreholes and active layer measurement sites quantifies the distribution inhomogeneity and provides potential locations of additional permafrost research sites to improve the representativeness of thermal monitoring across areas underlain by permafrost. The depth distribution of the boreholes reveals that 73% are shallower than 25 m and 27% are deeper, reaching a maximum of 1 km depth. Comparison of the GTN-P site distribution with permafrost zones, soil organic carbon contents and vegetation types exhibits different local to regional monitoring situations on maps. Preferential slope orientation at the sites most likely causes a bias in the temperature monitoring and should be taken into account when using the data for global models. The distribution of GTN-P sites within zones of projected temperature change show a high
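
    The Voronoi tessellation step mentioned above can be prototyped in a few lines: each site's cell area indicates how much territory it alone represents, so unusually large cells flag spatial gaps. The sketch below uses random planar points as stand-ins for projected borehole locations; it ignores map projection and boundary effects, which a real analysis of the GTN-P metadata would have to handle.

```python
import numpy as np
from scipy.spatial import Voronoi, ConvexHull

# Random stand-ins for (projected) monitoring-site coordinates, in km.
rng = np.random.default_rng(42)
sites = rng.random((200, 2)) * [4000.0, 2000.0]
vor = Voronoi(sites)

areas = []
for region_index in vor.point_region:
    region = vor.regions[region_index]
    if -1 in region or len(region) == 0:      # skip unbounded cells on the boundary
        areas.append(np.nan)
        continue
    # In 2-D the ConvexHull "volume" of a cell's vertices is its area.
    areas.append(ConvexHull(vor.vertices[region]).volume)

finite = np.array(areas)[np.isfinite(areas)]
print(f"median cell area {np.median(finite):.0f} km^2, "
      f"max/median ratio {finite.max() / np.median(finite):.1f}")
# Sites whose cells are far larger than the median mark candidate gaps to fill.
```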

  4. Distribution and classification of Serine β-lactamases in Brazilian Hospital Sewage and Other Environmental Metagenomes deposited in Public Databases

    Directory of Open Access Journals (Sweden)

    Adriana Fróes

    2016-11-01

    Full Text Available β-lactam is the most used antibiotic class in the clinical area and it acts on blocking the bacteria cell wall synthesis, causing cell death. However, some bacteria have evolved resistance to these antibiotics mainly due to the production of enzymes known as β-lactamases. Hospital sewage is an important source of dispersion of multidrug-resistant bacteria in rivers and oceans. In this work, we used next-generation DNA sequencing to explore the diversity and dissemination of serine β-lactamases in two hospital sewages from Rio de Janeiro, Brazil (South Zone, SZ, and North Zone, NZ), presenting different profiles, and to compare them with public environmental data available. Also, we propose a Hidden-Markov-Model approach to screen potential serine β-lactamase genes (in public environmental samples and generated hospital sewage data), exploring their evolutionary relationships. Due to the high variability in β-lactamases, we used a position-specific scoring matrix search method (RPS-BLAST) against conserved domain database profiles (CDD, Pfam, and COG) followed by visual inspection to detect conserved motifs, to increase the reliability of the results and remove possible false positives. We were able to identify novel β-lactamases from Brazilian hospital sewage and to estimate the relative abundance of its types. The highest relative abundance found in SZ was Class A (50%), while Class D is predominant in NZ (55%). CfxA (65%) and ACC (47%) types were the most abundant genes detected in SZ, while in NZ the most frequent were OXA-10 (32%), CfxA (28%), ACC (21%), CEPA (20%) and FOX (19%). Phylogenetic analysis revealed that β-lactamases from Brazilian hospital sewage grouped in the same clade and close to sequences belonging to the Firmicutes and Bacteroidetes groups, but distant from potential β-lactamases screened from public environmental data, which grouped closer to β-lactamases of Proteobacteria. Our results demonstrated that the HMM-based approach identified homologs of

  5. Distribution and Classification of Serine β-Lactamases in Brazilian Hospital Sewage and Other Environmental Metagenomes Deposited in Public Databases.

    Science.gov (United States)

    Fróes, Adriana M; da Mota, Fábio F; Cuadrat, Rafael R C; Dávila, Alberto M R

    2016-01-01

    β-lactam is the most used antibiotic class in the clinical area and it acts on blocking the bacteria cell wall synthesis, causing cell death. However, some bacteria have evolved resistance to these antibiotics mainly due the production of enzymes known as β-lactamases. Hospital sewage is an important source of dispersion of multidrug-resistant bacteria in rivers and oceans. In this work, we used next-generation DNA sequencing to explore the diversity and dissemination of serine β-lactamases in two hospital sewage from Rio de Janeiro, Brazil (South Zone, SZ and North Zone, NZ), presenting different profiles, and to compare them with public environmental data available. Also, we propose a Hidden-Markov-Model approach to screen potential serine β-lactamases genes (in public environments samples and generated hospital sewage data), exploring its evolutionary relationships. Due to the high variability in β-lactamases, we used a position-specific scoring matrix search method (RPS-BLAST) against conserved domain database profiles (CDD, Pfam, and COG) followed by visual inspection to detect conserved motifs, to increase the reliability of the results and remove possible false positives. We were able to identify novel β-lactamases from Brazilian hospital sewage and to estimate relative abundance of its types. The highest relative abundance found in SZ was the Class A (50%), while Class D is predominant in NZ (55%). CfxA (65%) and ACC (47%) types were the most abundant genes detected in SZ, while in NZ the most frequent were OXA-10 (32%), CfxA (28%), ACC (21%), CEPA (20%), and FOX (19%). Phylogenetic analysis revealed β-lactamases from Brazilian hospital sewage grouped in the same clade and close to sequences belonging to Firmicutes and Bacteroidetes groups, but distant from potential β-lactamases screened from public environmental data, that grouped closer to β-lactamases of Proteobacteria. Our results demonstrated that HMM-based approach identified homologs of

  6. Mass and charge distributions of amyloid fibers involved in neurodegenerative diseases: mapping heterogeneity and polymorphism

    Science.gov (United States)

    Pansieri, Jonathan; Halim, Mohammad A.; Vendrely, Charlotte; Dumoulin, Mireille; Legrand, François; Sallanon, Marcelle Moulin; Chierici, Sabine; Denti, Simona; Dagany, Xavier; Dugourd, Philippe; Marquette, Christel

    2018-01-01

    Heterogeneity and polymorphism are generic features of amyloid fibers, with some important effects on the related disease development. We report here the characterization, by charge detection mass spectrometry, of amyloid fibers made of three polypeptides involved in neurodegenerative diseases: the Aβ1–42 peptide, tau and α-synuclein. Besides the mass of individual fibers, this technique enables characterization of the heterogeneity and the polymorphism of the population. In the case of the Aβ1–42 peptide and the tau protein, several coexisting species could be distinguished and characterized. In the case of α-synuclein, we show how the polymorphism affects the mass and charge distributions. PMID:29732065

  7. Heterogeneous reactors

    International Nuclear Information System (INIS)

    Moura Neto, C. de; Nair, R.P.K.

    1979-08-01

    The microscopic study of a cell aims at determining the infinite multiplication factor of the cell, which is given by the four-factor formula: k∞ = ηεpf. The analysis of a heterogeneous reactor is similar to that of a homogeneous reactor, but each factor of the four-factor formula cannot be calculated with the formulas developed for the homogeneous case. A great number of methods has been developed for the calculation of heterogeneous reactors, and some of them are discussed. (Author) [pt
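
    For reference, the four-factor formula written out in standard notation (the symbol names below follow the usual textbook convention, which the abstract's garbled rendering appears to intend):

```latex
% Infinite multiplication factor of a cell (four-factor formula)
% \eta: reproduction factor (neutrons emitted per thermal neutron absorbed in fuel)
% \varepsilon: fast fission factor
% p: resonance escape probability
% f: thermal utilization factor
k_\infty = \eta \, \varepsilon \, p \, f
```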

  8. Spatial distribution of clinical computer systems in primary care in England in 2016 and implications for primary care electronic medical record databases: a cross-sectional population study.

    Science.gov (United States)

    Kontopantelis, Evangelos; Stevens, Richard John; Helms, Peter J; Edwards, Duncan; Doran, Tim; Ashcroft, Darren M

    2018-02-28

    UK primary care databases (PCDs) are used by researchers worldwide to inform clinical practice. These databases have been primarily tied to single clinical computer systems, but little is known about the adoption of these systems by primary care practices or their geographical representativeness. We explore the spatial distribution of clinical computing systems and discuss the implications for the longevity and regional representativeness of these resources. Cross-sectional study. English primary care clinical computer systems. 7526 general practices in August 2016. Spatial mapping of family practices in England in 2016 by clinical computer system at two geographical levels, the lower Clinical Commissioning Group (CCG, 209 units) and the higher National Health Service regions (14 units). Data for practices included numbers of doctors, nurses and patients, and area deprivation. Of 7526 practices, Egton Medical Information Systems (EMIS) was used in 4199 (56%), SystmOne in 2552 (34%) and Vision in 636 (9%). Great regional variability was observed for all systems, with EMIS having a stronger presence in the West of England, London and the South; SystmOne in the East and some regions in the South; and Vision in London, the South, Greater Manchester and Birmingham. PCDs based on single clinical computer systems are geographically clustered in England. For example, Clinical Practice Research Datalink and The Health Improvement Network, the most popular primary care databases in terms of research outputs, are based on the Vision clinical computer system, used by <10% of practices and heavily concentrated in three major conurbations and the South. Researchers need to be aware of the analytical challenges posed by clustering, and barriers to accessing alternative PCDs need to be removed. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  9. Florabank1: a grid-based database on vascular plant distribution in the northern part of Belgium (Flanders and the Brussels Capital region

    Directory of Open Access Journals (Sweden)

    Wouter Van Landuyt

    2012-05-01

    Full Text Available Florabank1 is a database that contains distributional data on the wild flora (indigenous species, archeophytes and naturalised aliens) of Flanders and the Brussels Capital Region. It holds about 3 million records of vascular plants, dating from 1800 till present. Furthermore, it includes ecological data on vascular plant species, redlist category information, Ellenberg values, legal status, global distribution, seed bank, etc. The database is an initiative of “Flo.Wer” (www.plantenwerkgroep.be), the Research Institute for Nature and Forest (INBO: www.inbo.be) and the National Botanic Garden of Belgium (www.br.fgov.be). Florabank aims at centralizing botanical distribution data gathered by both professional and amateur botanists and at making these data available to the benefit of nature conservation, policy and scientific research. The occurrence data contained in Florabank1 are extracted from checklists, literature and herbarium specimen information. Of survey lists, the locality name (verbatimLocality), species name, observation date and IFBL square code, the grid system used for plant mapping in Belgium (Van Rompaey 1943), are recorded. For records dating from the period 1972–2004 all pertinent botanical journals dealing with the Belgian flora were systematically screened. Analysis of herbarium specimens in the collections of the National Botanic Garden of Belgium, the University of Ghent and the University of Liège provided interesting distribution knowledge concerning rare species; this information is also included in Florabank1. The data recorded before 1972 are available through the Belgian GBIF node (http://data.gbif.org/datasets/resource/10969/), not through Florabank1, to avoid duplication of information. A dedicated portal providing access to all published Belgian IFBL records is at this moment available at: http://projects.biodiversity.be/ifbl. All data in Florabank1 are georeferenced. Every record holds the decimal centroid coordinates of the

  10. Biofuel Database

    Science.gov (United States)

    Biofuel Database (Web, free access)   This database brings together structural, biological, and thermodynamic data for enzymes that are either in current use or are being considered for use in the production of biofuels.

  11. Community Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This excel spreadsheet is the result of merging at the port level of several of the in-house fisheries databases in combination with other demographic databases such...

  12. Database Administrator

    Science.gov (United States)

    Moore, Pam

    2010-01-01

    The Internet and electronic commerce (e-commerce) generate lots of data. Data must be stored, organized, and managed. Database administrators, or DBAs, work with database software to find ways to do this. They identify user needs, set up computer databases, and test systems. They ensure that systems perform as they should and add people to the…

  13. Large epidemic thresholds emerge in heterogeneous networks of heterogeneous nodes

    Science.gov (United States)

    Yang, Hui; Tang, Ming; Gross, Thilo

    2015-08-01

    One of the famous results of network science states that networks with heterogeneous connectivity are more susceptible to epidemic spreading than their more homogeneous counterparts. In particular, in networks of identical nodes it has been shown that network heterogeneity, i.e. a broad degree distribution, can lower the epidemic threshold at which epidemics can invade the system. Network heterogeneity can thus allow diseases with lower transmission probabilities to persist and spread. However, it has been pointed out that networks in which the properties of nodes are intrinsically heterogeneous can be very resilient to disease spreading. Heterogeneity in structure can enhance or diminish the resilience of networks with heterogeneous nodes, depending on the correlations between the topological and intrinsic properties. Here, we consider a plausible scenario where people have intrinsic differences in susceptibility and adapt their social network structure to the presence of the disease. We show that the resilience of networks with heterogeneous connectivity can surpass those of networks with homogeneous connectivity. For epidemiology, this implies that network heterogeneity should not be studied in isolation, it is instead the heterogeneity of infection risk that determines the likelihood of outbreaks.
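
    The classical result the abstract builds on can be illustrated numerically: in the heterogeneous mean-field approximation the SIS epidemic threshold scales as ⟨k⟩/⟨k²⟩, so a broad degree distribution lowers it. The sketch below compares that estimate for a narrow-degree and a broad-degree random graph; it illustrates only this baseline result, not the adaptive network of intrinsically heterogeneous nodes studied in the paper.

```python
import numpy as np
import networkx as nx

def epidemic_threshold(G):
    """Heterogeneous mean-field SIS threshold estimate, <k> / <k^2>."""
    k = np.array([d for _, d in G.degree()])
    return k.mean() / (k ** 2).mean()

homogeneous = nx.erdos_renyi_graph(10000, 6 / 9999, seed=1)    # narrow degree distribution
heterogeneous = nx.barabasi_albert_graph(10000, 3, seed=1)     # broad degree distribution

print("Erdos-Renyi threshold  ~", round(epidemic_threshold(homogeneous), 4))
print("Barabasi-Albert threshold ~", round(epidemic_threshold(heterogeneous), 4))
```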

  14. Large epidemic thresholds emerge in heterogeneous networks of heterogeneous nodes.

    Science.gov (United States)

    Yang, Hui; Tang, Ming; Gross, Thilo

    2015-08-21

    One of the famous results of network science states that networks with heterogeneous connectivity are more susceptible to epidemic spreading than their more homogeneous counterparts. In particular, in networks of identical nodes it has been shown that network heterogeneity, i.e. a broad degree distribution, can lower the epidemic threshold at which epidemics can invade the system. Network heterogeneity can thus allow diseases with lower transmission probabilities to persist and spread. However, it has been pointed out that networks in which the properties of nodes are intrinsically heterogeneous can be very resilient to disease spreading. Heterogeneity in structure can enhance or diminish the resilience of networks with heterogeneous nodes, depending on the correlations between the topological and intrinsic properties. Here, we consider a plausible scenario where people have intrinsic differences in susceptibility and adapt their social network structure to the presence of the disease. We show that the resilience of networks with heterogeneous connectivity can surpass those of networks with homogeneous connectivity. For epidemiology, this implies that network heterogeneity should not be studied in isolation, it is instead the heterogeneity of infection risk that determines the likelihood of outbreaks.

  15. data mining in distributed database

    International Nuclear Information System (INIS)

    Ghunaim, A.A.A.

    2007-01-01

    As we march into the age of digital information, the collection and storage of large quantities of data keep increasing, and the problem of data overload looms ominously ahead. It is estimated today that the volume of data stored by a company doubles every year, while the proportion of meaningful information decreases rapidly. The ability to analyze and understand massive datasets lags far behind the ability to gather and store the data. The unbridled growth of data will inevitably lead to a situation in which it is increasingly difficult to access the desired information; it will be like looking for a needle in a haystack, where only the amount of hay keeps growing. A new generation of computational techniques and tools is therefore required to analyze and understand the rapidly growing volumes of data. Because information technology (IT) has become a strategic weapon in modern life, new decision support tools are needed to remain an internationally competitive player. Data mining is one of these tools: its methods make it possible to extract the decisive knowledge needed by an enterprise, and it is concerned with inferring models from data, drawing on statistical pattern recognition, applied statistics, machine learning and neural networks. Data mining is a tool for increasing the productivity of people trying to build predictive models. Data mining techniques have been applied successfully to several real-world problem domains, but their application in the field of nuclear reactors has received only little attention. One of the main reasons is the difficulty in obtaining the data sets

  16. Query Optimization in Distributed Databases.

    Science.gov (United States)

    1982-10-01

    In general, some semijoin-based strategies are more time consuming than others and are usually not applied. ... A remaining direction is the study of the analytic behavior of the proposed heuristic algorithms; some analytic results from worst-case and average-case analysis are difficult to obtain.
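
    The strategies this report weighs involve semijoin reductions; as a purely illustrative sketch (relation names and data are invented, and this is not the report's algorithm), the following Python shows why shipping only the join-column values of one relation can cut communication before a distributed join:

        # Illustrative semijoin reduction for a distributed join R ⋉ S on column "id".
        # All relation names and tuples are hypothetical.

        R = [  # stored at site 1
            {"id": 1, "name": "pump"},
            {"id": 2, "name": "valve"},
            {"id": 3, "name": "seal"},
        ]
        S = [  # stored at site 2
            {"id": 2, "qty": 10},
            {"id": 3, "qty": 5},
        ]

        # Step 1: site 2 ships only the distinct join-column values (a small message).
        s_ids = {t["id"] for t in S}

        # Step 2: site 1 computes the semijoin R ⋉ S, discarding dangling tuples.
        r_reduced = [t for t in R if t["id"] in s_ids]

        # Step 3: only the reduced R is shipped for the final join.
        join = [{**r, **s} for r in r_reduced for s in S if r["id"] == s["id"]]
        print(join)  # [{'id': 2, 'name': 'valve', 'qty': 10}, {'id': 3, 'name': 'seal', 'qty': 5}]

    Only the reduced relation and the small set of join keys cross the network, which is the cost trade-off a distributed query optimizer evaluates when choosing among strategies.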

  17. in Heterogeneous Media

    Directory of Open Access Journals (Sweden)

    Saeed Balouchi

    2013-01-01

    Full Text Available Fractured reservoirs contain about 85 and 90 percent of oil and gas resources respectively in Iran. A comprehensive study and investigation of fractures, as the main factor affecting fluid flow or perhaps acting as a barrier to it, seems necessary for reservoir development studies. High degrees of heterogeneity and sparseness of data have incapacitated conventional deterministic methods in fracture network modeling. Recently, simulated annealing (SA) has been applied to generate stochastic realizations of spatially correlated fracture networks by assuming that the elastic energy of fractures follows a Boltzmann distribution. Although SA honors local variability, the objective function of geometrical fracture modeling is defined for homogeneous conditions. In this study, after the introduction of SA and the derivation of the energy function, a novel technique is presented to adjust the model with highly heterogeneous data for a fractured field from the southwest of Iran. To this end, the regular object-based model is combined with a grid-based technique to cover the heterogeneity of reservoir properties. The original SA algorithm is also modified by being constrained in different directions and weighting the energy function to make it appropriate for heterogeneous conditions. The simulation results of the presented approach are in good agreement with the observed field data.
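
    The acceptance rule at the heart of the simulated annealing described above can be illustrated generically; a minimal Metropolis-style sketch follows, in which the quadratic "energy" and the geometric cooling schedule are placeholders and not the study's actual objective function:

        import math
        import random

        def anneal(energy, state, neighbor, t0=1.0, cooling=0.995, steps=5000, rng=random.Random(0)):
            """Generic simulated annealing: accept worse states with Boltzmann probability."""
            e = energy(state)
            t = t0
            for _ in range(steps):
                cand = neighbor(state, rng)
                e_cand = energy(cand)
                # Accept downhill moves always; uphill moves with probability exp(-dE/T).
                if e_cand <= e or rng.random() < math.exp(-(e_cand - e) / t):
                    state, e = cand, e_cand
                t *= cooling  # geometric cooling schedule
            return state, e

        # Toy example: match a target mean fracture length of 12.0 (placeholder objective).
        target = 12.0
        energy = lambda s: (sum(s) / len(s) - target) ** 2
        neighbor = lambda s, rng: [x + rng.uniform(-0.5, 0.5) for x in s]
        best, e_best = anneal(energy, [10.0] * 20, neighbor)
        print(round(sum(best) / len(best), 2), round(e_best, 6))

    Constraining the neighbor moves and reweighting the energy term, as the abstract describes, changes which candidate configurations are proposed and how strongly local mismatches are penalized, but leaves this acceptance rule unchanged.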

  18. Sales Comparison Approach Indicating Heterogeneity of Particular Type of Real Estate and Corresponding Valuation Accuracy

    Directory of Open Access Journals (Sweden)

    Martin Cupal

    2017-01-01

    Full Text Available The article focuses on the heterogeneity of goods, namely real estate, and consequently deals with market valuation accuracy. The heterogeneity of real estate lies, in particular, in the fact that every unit is unique in terms of its construction, condition, financing and, above all, location, and thus assessing its value is necessarily difficult. This research also indicates the rate of efficiency of markets across the types based on their level of variability. The research is based on two databases consisting of various types of real estate with specific market parameters. These parameters determine the differences across the types and reveal heterogeneity. The first database was built on valuations by the sales comparison approach and the second on data of real properties offered on the market. The methodology is based on univariate and multivariate statistics of key variables of those databases. The multivariate analysis is performed with a Hotelling T2 control chart and statistics with appropriate numerical characteristics. The results from both databases were combined using weights based on the dependence criterion of the variables. The final results indicate potential valuation accuracy across the types. The main contribution of the research is that the evaluation was not only derived from the price deviation or distribution, but also draws on the causes of real property heterogeneity as a whole.
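
    The Hotelling T2 statistic used above measures how far a multivariate observation sits from the centroid of a reference sample; a generic sketch is given below, with placeholder data rather than the study's actual market parameters:

        import numpy as np

        def hotelling_t2(X, x_new):
            """T2 distance of one observation from the mean of reference sample X (n x p)."""
            mean = X.mean(axis=0)
            cov = np.cov(X, rowvar=False)          # p x p sample covariance
            diff = x_new - mean
            return float(diff @ np.linalg.inv(cov) @ diff)

        rng = np.random.default_rng(1)
        # Hypothetical reference sample: 200 properties, 3 market parameters each.
        X = rng.normal(size=(200, 3))
        print(round(hotelling_t2(X, np.array([0.1, -0.2, 0.3])), 3))  # small: near the centroid
        print(round(hotelling_t2(X, np.array([3.0, 3.0, 3.0])), 3))   # large: out-of-control signal

    On a control chart, points whose T2 exceeds a chi-square or F-based limit flag properties whose combination of parameters is atypical, which is one way heterogeneity across types becomes visible.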

  19. Percolation in Heterogeneous Media

    International Nuclear Information System (INIS)

    Vocka, Radim

    1999-01-01

    This work is a theoretical reflection on the problem of modeling heterogeneous media, that is, on ways of representing them simply while conserving their characteristic features. Two particular problems are addressed in this thesis. Firstly, we study transport in porous media, that is, in heterogeneous media whose structure is quenched. The pore space is represented in a simple way: a pore is symbolized as a tube of a given length and a given diameter. Correlations in the distribution of pore sizes are taken into account by the construction of a hierarchical network, which makes it possible to model porous media with a porosity distributed over several length scales. Transport in the hierarchical network shows phenomena qualitatively different from those observed in simpler models. A comparison of numerical results with experimental data shows that the hierarchical network gives a good qualitative representation of the structure of real porous media. Secondly, we study the problem of transport in heterogeneous media whose structure evolves over time. The models in which the evolution of the structure is not influenced by the transport are studied in detail. These models present a phase transition of the same nature as that observed on percolation networks. We propose a new theoretical description of this transition, and we express the critical exponents describing the evolution of the conductivity as a function of the fundamental exponents of percolation theory. (author) [fr]

  20. Federal databases

    International Nuclear Information System (INIS)

    Welch, M.J.; Welles, B.W.

    1988-01-01

    Accident statistics on all modes of transportation are available as risk assessment analytical tools through several federal agencies. This paper reports on the examination of the accident databases by personal contact with the federal staff responsible for administration of the database programs. This activity, sponsored by the Department of Energy through Sandia National Laboratories, is an overview of the national accident data on highway, rail, air, and marine shipping. For each mode, the definition or reporting requirements of an accident are determined and the method of entering the accident data into the database is established. Availability of the database to others, ease of access, costs, and who to contact were prime questions to each of the database program managers. Additionally, how the agency uses the accident data was of major interest

  1. Investigation of realistic PET simulations incorporating tumor patient's specificity using anthropomorphic models: Creation of an oncology database

    Energy Technology Data Exchange (ETDEWEB)

    Papadimitroulas, Panagiotis; Efthimiou, Nikos; Nikiforidis, George C.; Kagadis, George C. [Department of Medical Physics, School of Medicine, University of Patras, Rion, GR 265 04 (Greece); Loudos, George [Department of Biomedical Engineering, Technological Educational Institute of Athens, Ag. Spyridonos Street, Egaleo GR 122 10, Athens (Greece); Le Maitre, Amandine; Hatt, Mathieu; Tixier, Florent; Visvikis, Dimitris [Medical Information Processing Laboratory (LaTIM), National Institute of Health and Medical Research (INSERM), 29609 Brest (France)

    2013-11-15

    Purpose: The GATE Monte Carlo simulation toolkit is used for the implementation of realistic PET simulations incorporating tumor heterogeneous activity distributions. The reconstructed patient images include noise from the acquisition process, imaging system's performance restrictions and have limited spatial resolution. For those reasons, the measured intensity cannot be simply introduced in GATE simulations, to reproduce clinical data. Investigation of the heterogeneity distribution within tumors applying partial volume correction (PVC) algorithms was assessed. The purpose of the present study was to create a simulated oncology database based on clinical data with realistic intratumor uptake heterogeneity properties.Methods: PET/CT data of seven oncology patients were used in order to create a realistic tumor database investigating the heterogeneity activity distribution of the simulated tumors. The anthropomorphic models (NURBS based cardiac torso and Zubal phantoms) were adapted to the CT data of each patient, and the activity distribution was extracted from the respective PET data. The patient-specific models were simulated with the Monte Carlo Geant4 application for tomography emission (GATE) in three different levels for each case: (a) using homogeneous activity within the tumor, (b) using heterogeneous activity distribution in every voxel within the tumor as it was extracted from the PET image, and (c) using heterogeneous activity distribution corresponding to the clinical image following PVC. The three different types of simulated data in each case were reconstructed with two iterations and filtered with a 3D Gaussian postfilter, in order to simulate the intratumor heterogeneous uptake. Heterogeneity in all generated images was quantified using textural feature derived parameters in 3D according to the ground truth of the simulation, and compared to clinical measurements. Finally, profiles were plotted in central slices of the tumors, across lines

  2. Investigation of realistic PET simulations incorporating tumor patient's specificity using anthropomorphic models: Creation of an oncology database

    International Nuclear Information System (INIS)

    Papadimitroulas, Panagiotis; Efthimiou, Nikos; Nikiforidis, George C.; Kagadis, George C.; Loudos, George; Le Maitre, Amandine; Hatt, Mathieu; Tixier, Florent; Visvikis, Dimitris

    2013-01-01

    Purpose: The GATE Monte Carlo simulation toolkit is used for the implementation of realistic PET simulations incorporating tumor heterogeneous activity distributions. The reconstructed patient images include noise from the acquisition process, imaging system's performance restrictions and have limited spatial resolution. For those reasons, the measured intensity cannot be simply introduced in GATE simulations, to reproduce clinical data. Investigation of the heterogeneity distribution within tumors applying partial volume correction (PVC) algorithms was assessed. The purpose of the present study was to create a simulated oncology database based on clinical data with realistic intratumor uptake heterogeneity properties.Methods: PET/CT data of seven oncology patients were used in order to create a realistic tumor database investigating the heterogeneity activity distribution of the simulated tumors. The anthropomorphic models (NURBS based cardiac torso and Zubal phantoms) were adapted to the CT data of each patient, and the activity distribution was extracted from the respective PET data. The patient-specific models were simulated with the Monte Carlo Geant4 application for tomography emission (GATE) in three different levels for each case: (a) using homogeneous activity within the tumor, (b) using heterogeneous activity distribution in every voxel within the tumor as it was extracted from the PET image, and (c) using heterogeneous activity distribution corresponding to the clinical image following PVC. The three different types of simulated data in each case were reconstructed with two iterations and filtered with a 3D Gaussian postfilter, in order to simulate the intratumor heterogeneous uptake. Heterogeneity in all generated images was quantified using textural feature derived parameters in 3D according to the ground truth of the simulation, and compared to clinical measurements. Finally, profiles were plotted in central slices of the tumors, across lines with

  3. Directory of IAEA databases

    International Nuclear Information System (INIS)

    1992-12-01

    This second edition of the Directory of IAEA Databases has been prepared within the Division of Scientific and Technical Information (NESI). Its main objective is to describe the computerized information sources available to staff members. This directory contains all databases produced at the IAEA, including databases stored on the mainframe, LANs and PCs. All IAEA Division Directors have been requested to register the existence of their databases with NESI. For the second edition, database owners were requested to review the existing entries for their databases and answer four additional questions. The four additional questions concerned the type of database (e.g. Bibliographic, Text, Statistical etc.), the category of database (e.g. Administrative, Nuclear Data etc.), the available documentation and the type of media used for distribution. In the individual entries on the following pages the answers to the first two questions (type and category) are always listed, but the answers to the second two questions (documentation and media) are only listed when information has been made available

  4. Refactoring databases evolutionary database design

    CERN Document Server

    Ambler, Scott W

    2006-01-01

    Refactoring has proven its value in a wide range of development projects–helping software professionals improve system designs, maintainability, extensibility, and performance. Now, for the first time, leading agile methodologist Scott Ambler and renowned consultant Pramodkumar Sadalage introduce powerful refactoring techniques specifically designed for database systems. Ambler and Sadalage demonstrate how small changes to table structures, data, stored procedures, and triggers can significantly enhance virtually any database design–without changing semantics. You’ll learn how to evolve database schemas in step with source code–and become far more effective in projects relying on iterative, agile methodologies. This comprehensive guide and reference helps you overcome the practical obstacles to refactoring real-world databases by covering every fundamental concept underlying database refactoring. Using start-to-finish examples, the authors walk you through refactoring simple standalone databas...

  5. RDD Databases

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This database was established to oversee documents issued in support of fishery research activities including experimental fishing permits (EFP), letters of...

  6. Snowstorm Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Snowstorm Database is a collection of over 500 snowstorms dating back to 1900 and updated operationally. Only storms having large areas of heavy snowfall (10-20...

  7. Dealer Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The dealer reporting databases contain the primary data reported by federally permitted seafood dealers in the northeast. Electronic reporting was implemented May 1,...

  8. National database

    DEFF Research Database (Denmark)

    Kristensen, Helen Grundtvig; Stjernø, Henrik

    1995-01-01

    Article about the national database for nursing research established at Dansk Institut for Sundheds- og Sygeplejeforskning (the Danish Institute for Health and Nursing Research). The aim of the database is to gather knowledge about research and development activities within nursing.

  9. Objective sampling design in a highly heterogeneous landscape - characterizing environmental determinants of malaria vector distribution in French Guiana, in the Amazonian region.

    Science.gov (United States)

    Roux, Emmanuel; Gaborit, Pascal; Romaña, Christine A; Girod, Romain; Dessay, Nadine; Dusfour, Isabelle

    2013-12-01

    Sampling design is a key issue when establishing species inventories and characterizing habitats within highly heterogeneous landscapes. Sampling efforts in such environments may be constrained, and many field studies rely only on subjective and/or qualitative approaches to design their collection strategy. The region of Cacao, in French Guiana, provides an excellent study site to understand the presence and abundance of Anopheles mosquitoes, their species dynamics and the transmission risk of malaria across various environments. We propose an objective methodology to define a stratified sampling design. Following thorough environmental characterization, a factorial analysis of mixed groups allows the data to be reduced and non-collinear principal components to be identified while balancing the influences of the different environmental factors. Such components defined new variables which could then be used in a robust k-means clustering procedure. Then, we identified five clusters that corresponded to our sampling strata and selected sampling sites in each stratum. We validated our method by comparing the species overlap of entomological collections from selected sites and the environmental similarities of the same sites. The Morisita index was significantly correlated (Pearson linear correlation) with environmental similarity based on i) the balanced environmental variable groups considered jointly (p = 0.001) and ii) land cover/use, supporting the proposed sampling approach. Land cover/use maps (based on high spatial resolution satellite images) were shown to be particularly useful when studying the presence, density and diversity of Anopheles mosquitoes at local scales and in very heterogeneous landscapes.
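
    The reduce-then-cluster step described above can be sketched with standard tools; in the simplified version below the factorial analysis of mixed groups is replaced by plain PCA on numeric variables only, and the site table, variable count and stratum count are invented for illustration (scikit-learn assumed available):

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.preprocessing import StandardScaler
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)
        # Hypothetical site-by-variable table: 300 candidate sites, 8 environmental variables.
        env = rng.normal(size=(300, 8))

        # Reduce to a few non-collinear components after balancing variable scales.
        scores = PCA(n_components=3).fit_transform(StandardScaler().fit_transform(env))

        # Partition sites into 5 strata; pick one representative site per stratum (closest to centroid).
        km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(scores)
        for k in range(5):
            members = np.where(km.labels_ == k)[0]
            d = np.linalg.norm(scores[members] - km.cluster_centers_[k], axis=1)
            print(f"stratum {k}: {len(members)} sites, representative site index {members[d.argmin()]}")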

  10. An object-oriented framework for managing cooperating legacy databases

    NARCIS (Netherlands)

    Balsters, H; de Brock, EO

    2003-01-01

    We describe a general semantic framework for precise specification of so-called database federations. A database federation provides for tight coupling of a collection of heterogeneous legacy databases into a global integrated system. Our approach to database federation is based on the UML/OCL data

  11. Object-oriented modeling and design of database federations

    NARCIS (Netherlands)

    Balsters, H.

    2003-01-01

    We describe a logical architecture and a general semantic framework for precise specification of so-called database federations. A database federation provides for tight coupling of a collection of heterogeneous component databases into a global integrated system. Our approach to database federation

  12. Experiment Databases

    Science.gov (United States)

    Vanschoren, Joaquin; Blockeel, Hendrik

    Next to running machine learning algorithms based on inductive queries, much can be learned by immediately querying the combined results of many prior studies. Indeed, all around the globe, thousands of machine learning experiments are being executed on a daily basis, generating a constant stream of empirical information on machine learning techniques. While the information contained in these experiments might have many uses beyond their original intent, results are typically described very concisely in papers and discarded afterwards. If we properly store and organize these results in central databases, they can be immediately reused for further analysis, thus boosting future research. In this chapter, we propose the use of experiment databases: databases designed to collect all the necessary details of these experiments, and to intelligently organize them in online repositories to enable fast and thorough analysis of a myriad of collected results. They constitute an additional, queriable source of empirical meta-data based on principled descriptions of algorithm executions, without reimplementing the algorithms in an inductive database. As such, they engender a very dynamic, collaborative approach to experimentation, in which experiments can be freely shared, linked together, and immediately reused by researchers all over the world. They can be set up for personal use, to share results within a lab or to create open, community-wide repositories. Here, we provide a high-level overview of their design, and use an existing experiment database to answer various interesting research questions about machine learning algorithms and to verify a number of recent studies.
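
    The core idea of a queriable experiment repository can be conveyed with a tiny relational sketch; the schema and values below are invented for illustration and are not the authors' actual experiment database design:

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("""CREATE TABLE runs (
            algorithm TEXT, dataset TEXT, param_k INTEGER, accuracy REAL)""")
        con.executemany(
            "INSERT INTO runs VALUES (?, ?, ?, ?)",
            [("kNN", "iris", 3, 0.95), ("kNN", "iris", 7, 0.93),
             ("kNN", "wine", 3, 0.88), ("SVM", "iris", None, 0.97)],
        )

        # Meta-level question: how does accuracy vary with k for kNN, averaged over datasets?
        for k, acc in con.execute(
                "SELECT param_k, AVG(accuracy) FROM runs "
                "WHERE algorithm = 'kNN' GROUP BY param_k ORDER BY param_k"):
            print(k, round(acc, 3))

    Because every run is stored with its parameters and results, such questions can be answered by a query over many prior studies instead of by re-running the experiments.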

  13. Database and Expert Systems Applications

    DEFF Research Database (Denmark)

    Viborg Andersen, Kim; Debenham, John; Wagner, Roland

    This book constitutes the refereed proceedings of the 16th International Conference on Database and Expert Systems Applications, DEXA 2005, held in Copenhagen, Denmark, in August 2005. The 92 revised full papers presented together with 2 invited papers were carefully reviewed and selected from 390 submissions. The papers are organized in topical sections on workflow automation, database queries, data classification and recommendation systems, information retrieval in multimedia databases, Web applications, implementational aspects of databases, multimedia databases, XML processing, security, XML schemata, query evaluation, semantic processing, information retrieval, temporal and spatial databases, querying XML, organisational aspects of databases, natural language processing, ontologies, Web data extraction, semantic Web, data stream management, data extraction, and distributed database systems.

  14. A Moving Optical Fibre Technique for Structure Analysis of Heterogenous Products: Application to the Determination of the Bubble-Size Distribution in Liquid Foams

    OpenAIRE

    Bisperink, C. G. J.; Akkerman, J. C.; Prins, A.; Ronteltap, A. D.

    1992-01-01

    The bubble-size distribution in liquid foams measured as a function of time can be used to distinguish between the physical processes that determine the breakdown of foams. A new method based on an optical fibre technique was developed to measure various foam characteristics, e.g., the rate of drainage, the rate of foam collapse, the change in gas fraction, interbubble gas diffusion (disproportionation) and the evolution of the bubble-size distribution during the ageing of the foam. The metho...

  15. Distribution

    Science.gov (United States)

    John R. Jones

    1985-01-01

    Quaking aspen is the most widely distributed native North American tree species (Little 1971, Sargent 1890). It grows in a great diversity of regions, environments, and communities (Harshberger 1911). Only one deciduous tree species in the world, the closely related Eurasian aspen (Populus tremula), has a wider range (Weigle and Frothingham 1911)....

  16. A dedicated database system for handling multi-level data in systems biology

    OpenAIRE

    Pornputtapong, Natapol; Wanichthanarak, Kwanjeera; Nilsson, Avlant; Nookaew, Intawat; Nielsen, Jens

    2014-01-01

    Background: Advances in high-throughput technologies have enabled extensive generation of multi-level omics data. These data are crucial for systems biology research, though they are complex, heterogeneous, highly dynamic, incomplete and distributed among public databases. This leads to difficulties in data accessibility and often results in errors when data are merged and integrated from varied resources. Therefore, integration and management of systems biological data remain very challenging...

  17. Income and Wealth Distribution in a Neoclassical Two-Sector Heterogeneous-Households Growth Model with Elastic Labor Supply and Consumer Durable Goods

    Directory of Open Access Journals (Sweden)

    Wei-Bin ZHANG

    2017-06-01

    Full Text Available This paper proposes a two-sector two-group growth model with elastic labor supply and consumer durable goods. We study the dynamics of wealth and income distribution in a competitive economy with capital accumulation as the main engine of economic growth. The model is built on the Uzawa two-sector model. It is also influenced by the neoclassical growth theory and the post-Keynesian theory of growth and distribution. We plot the motion of the economic system and determine the economic equilibrium. We carry out a comparative dynamic analysis with regard to the propensity to save and improvements in human capital and technology.

  18. Leucosome distribution in migmatitic paragneisses and orthogneisses: A record of self-organized melt migration and entrapment in a heterogeneous partially-molten crust

    Science.gov (United States)

    Yakymchuk, C.; Brown, M.; Ivanic, T. J.; Korhonen, F. J.

    2013-09-01

    The depth to the bottom of the magnetic sources (DBMS) has been estimated from the aeromagnetic data of Central India. The conventional centroid method of DBMS estimation assumes a random, uniform, uncorrelated distribution of sources; to overcome this limitation, a modified centroid method based on scaling distribution has been proposed. Shallower values of the DBMS are found for the south-western region. The DBMS values are found to be as low as 22 km in the Deccan trap covered regions of the south west and as deep as 43 km in the Chhattisgarh Basin. In most places the DBMS is much shallower than the Moho depth found earlier from seismic studies, and may represent thermal/compositional/petrological boundaries. The large variation in the DBMS indicates the complex nature of the Indian crust.

  19. Geophysical Imaging for Investigating the Delivery and Distribution of Amendments in the Heterogeneous Subsurface of the F.E. Warren AFB

    Science.gov (United States)

    2012-12-01

    grain-fluid interface (Revil and Glover 1997, 1998). The electrical current distribution can be visualized by equipotential surfaces, with current flow ... during and after fracture emplacement using both surface and crosshole-based configurations. Several seismic methods were tested, including

  20. Quantitative imaging reveals heterogeneous growth dynamics and treatment-dependent residual tumor distributions in a three-dimensional ovarian cancer model

    Science.gov (United States)

    Celli, Jonathan P.; Rizvi, Imran; Evans, Conor L.; Abu-Yousif, Adnan O.; Hasan, Tayyaba

    2010-09-01

    Three-dimensional tumor models have emerged as valuable in vitro research tools, though the power of such systems as quantitative reporters of tumor growth and treatment response has not been adequately explored. We introduce an approach combining a 3-D model of disseminated ovarian cancer with high-throughput processing of image data for quantification of growth characteristics and cytotoxic response. We developed custom MATLAB routines to analyze longitudinally acquired dark-field microscopy images containing thousands of 3-D nodules. These data reveal a reproducible bimodal log-normal size distribution. Growth behavior is driven by migration and assembly, causing an exponential decay in spatial density concomitant with increasing mean size. At day 10, cultures are treated with either carboplatin or photodynamic therapy (PDT). We quantify size-dependent cytotoxic response for each treatment on a nodule by nodule basis using automated segmentation combined with ratiometric batch-processing of calcein and ethidium bromide fluorescence intensity data (indicating live and dead cells, respectively). Both treatments reduce viability, though carboplatin leaves micronodules largely structurally intact with a size distribution similar to untreated cultures. In contrast, PDT treatment disrupts micronodular structure, causing punctate regions of toxicity, shifting the distribution toward smaller sizes, and potentially increasing vulnerability to subsequent chemotherapeutic treatment.

  1. Job Heterogeneity and Coordination Frictions

    DEFF Research Database (Denmark)

    Kennes, John; le Maire, Daniel

    We develop a new directed search model of a frictional labor market with a continuum of heterogeneous workers and firms. We estimate two versions of the model - auction and price posting - using Danish data on wages and productivities. Assuming heterogeneous workers with no comparative advantage, we ... the job ladder, how the identification of assortative matching is fundamentally different in directed and undirected search models, how our theory accounts for business cycle facts related to inter-temporal changes in job offer distributions, and how our model could also be used to identify...

  2. Chloride Transport in Heterogeneous Formation

    Science.gov (United States)

    Mukherjee, A.; Holt, R. M.

    2017-12-01

    The chloride mass balance (CMB) is a commonly used method for estimating groundwater recharge. Observations of the vertical distribution of pore-water chloride are related to the groundwater infiltration rates (i.e. recharge rates). In the CMB method, the chloride distribution is interpreted mainly under the assumption of one-dimensional piston flow. In many places, however, the vertical distribution of chloride will be influenced by heterogeneity, leading to horizontal movement of infiltrating waters. The impact of heterogeneity will be particularly important when recharge is locally focused. When recharge is focused in an area, horizontal movement of chloride-bearing waters, coupled with upward movement driven by evapotranspiration, may lead to chloride bulges that could be misinterpreted if the CMB method is used to estimate recharge. We numerically simulate chloride transport and evaluate the validity of the CMB method in highly heterogeneous systems. This simulation is conducted for the unsaturated zone of the Ogallala, Antlers, and Gatuna (OAG) formations in Andrews County, Texas. A two-dimensional finite element model will show the movement of chloride through heterogeneous systems. We expect to see chloride bulges not only close to the surface but also at depths characterized by horizontal or upward movement. A comparison of the focused recharge estimates from this study with available recharge data will be presented.
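
    Under the one-dimensional piston-flow assumption mentioned above, the CMB estimate simply divides the chloride flux delivered at the surface by the pore-water chloride concentration; a small sketch follows, with invented numbers and an optional dry-deposition term as an assumption:

        def cmb_recharge(precip_mm_yr, cl_precip_mg_l, cl_porewater_mg_l, dry_dep_mg_m2_yr=0.0):
            """Recharge (mm/yr) from the chloride mass balance, assuming 1-D piston flow.

            Chloride input flux = P * Cl_p (+ dry deposition); at steady state the same
            flux leaves with recharge water, R * Cl_pw, hence R = flux / Cl_pw.
            (1 mm/yr of water over 1 m^2 is 1 L/m^2/yr, so mg/L and mm/yr combine directly.)
            """
            cl_flux = precip_mm_yr * cl_precip_mg_l + dry_dep_mg_m2_yr  # mg/m^2/yr
            return cl_flux / cl_porewater_mg_l                          # mm/yr

        # Hypothetical values: 450 mm/yr rain at 0.6 mg/L Cl, pore water at 90 mg/L Cl.
        print(round(cmb_recharge(450, 0.6, 90), 2), "mm/yr")  # 3.0 mm/yr

    Lateral or upward chloride movement of the kind simulated in the study violates the piston-flow premise behind this formula, which is why heterogeneity can bias the resulting recharge estimate.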

  3. Developments in diffraction databases

    International Nuclear Information System (INIS)

    Jenkins, R.

    1999-01-01

    Full text: There are a number of databases available to the diffraction community. Two of the more important of these are the Powder Diffraction File (PDF) maintained by the International Centre for Diffraction Data (ICDD), and the Inorganic Crystal Structure Database (ICSD) maintained by Fachinformationszentrum (FIZ, Karlsruhe). In application, the PDF has been used as an indispensable tool in phase identification and identification of unknowns. The ICSD database has extensive and explicit reference to the structures of compounds: atomic coordinates, space group and even thermal vibration parameters. A similar database, but for organic compounds, is maintained by the Cambridge Crystallographic Data Centre. These databases are often used as independent sources of information. However, little thought has been given to how to exploit the combined properties of structural database tools. A recently completed agreement between ICDD and FIZ, plus ICDD and Cambridge, provides a first step in complementary use of the PDF and the ICSD databases. The focus of this paper (as indicated below) is to examine ways of exploiting the combined properties of both databases. In 1996, there were approximately 76,000 entries in the PDF and approximately 43,000 entries in the ICSD database. The ICSD database has now been used to calculate entries in the PDF. Thus, deriving d-spacing and peak intensity data requires the synthesis of full diffraction patterns, i.e., we use the structural data in the ICSD database and then add instrumental resolution information. The combined data from PDF and ICSD can be effectively used in many ways. For example, we can calculate PDF data for an ideally random crystal distribution and also in the absence of preferred orientation. Again, we can use systematic studies of intermediate members in solid solution series to help produce reliable quantitative phase analyses. In some cases, we can study how solid solution properties vary with composition and

  4. Heterogeneous Distributions of Amino Acids Provide Evidence of Multiple Sources Within the Almahata Sitta Parent Body, Asteroid 2008 TC(sub 3)

    Science.gov (United States)

    Burton, Aaron S.; Glavin, Daniel P.; Callahan, Michael P.; Dworkin, Jason P.; Jenniskens, Peter; Shaddad, Muawia H.

    2011-01-01

    Two new fragments of the Almahata Sitta meteorite and a sample of sand from the related strewn field in the Nubian Desert, Sudan, were analyzed for two to six carbon aliphatic primary amino acids by ultrahigh performance liquid chromatography with UV-fluorescence detection and time-of-flight mass spectrometry (LC-FT/ToF-MS). The distribution of amino acids in fragment #25, an H5 ordinary chondrite, and fragment #27, a polymict ureilite, was compared with results from the previously analyzed fragment #4, also a polymict ureilite. All three meteorite fragments contain 180-270 parts-per-billion (ppb) of amino acids, roughly 1000-fold lower than the total amino acid abundance of the Murchison carbonaceous chondrite. All of the Almahata Sitta fragments analyzed have amino acid distributions that differ from the Nubian Desert sand, which primarily contains L-alpha-amino acids. In addition, the meteorites contain several amino acids that were not detected in the sand, indicating that many of the amino acids are extraterrestrial in origin. Despite their petrological differences, meteorite fragments #25 and #27 contain similar amino acid compositions; however, the distribution of amino acids in fragment #27 was distinct from that in fragment #4, even though both are polymict ureilites from the same parent body. Unlike in CM2 and CR2/3 meteorites, there are low relative abundances of alpha-amino acids in the Almahata Sitta meteorite fragments, which suggests that Strecker-type chemistry was not a significant amino acid formation mechanism. Given the high temperatures that asteroid 2008 TC3 appears to have experienced and the lack of evidence for aqueous alteration on the asteroid, it is possible that the extraterrestrial amino acids detected in Almahata Sitta were formed by Fischer-Tropsch/Haber-Bosch type gas-grain reactions at elevated temperatures.

  5. Fluid motion and solute distribution around sinking aggregates I : Small-scale fluxes and heterogeneity of nutrients in the pelagic environment

    DEFF Research Database (Denmark)

    Kiørboe, Thomas; Ploug, H.; Thygesen, Uffe Høgsbro

    2001-01-01

    ... in the ambient water. We described the fluid flow and solute distribution around a sinking aggregate by solving the Navier-Stokes equations and the advection-diffusion equations numerically. The model is valid for Reynolds numbers characteristic of marine snow, up to Re = 20. The model demonstrates ... in its wake, where solute concentration is either elevated (leaking substances) or depressed (consumed substances) relative to ambient concentration. Such plumes may impact the nutrition of osmotrophs. For example, based on published solubilization rates of aggregates we describe the amino acid plume...

  6. LHCb Conditions database operation assistance systems

    International Nuclear Information System (INIS)

    Clemencic, M; Shapoval, I; Cattaneo, M; Degaudenzi, H; Santinelli, R

    2012-01-01

    The Conditions Database (CondDB) of the LHCb experiment provides versioned, time dependent geometry and conditions data for all LHCb data processing applications (simulation, high level trigger (HLT), reconstruction, analysis) in a heterogeneous computing environment ranging from user laptops to the HLT farm and the Grid. These different use cases impose front-end support for multiple database technologies (Oracle and SQLite are used). Sophisticated distribution tools are required to ensure timely and robust delivery of updates to all environments. The content of the database has to be managed to ensure that updates are internally consistent and externally compatible with multiple versions of the physics application software. In this paper we describe three systems that we have developed to address these issues. The first system is a CondDB state tracking extension to the Oracle 3D Streams replication technology, to trap cases when the CondDB replication was corrupted. Second, an automated distribution system for the SQLite-based CondDB, providing also smart backup and checkout mechanisms for the CondDB managers and LHCb users respectively. And, finally, a system to verify and monitor the internal (CondDB self-consistency) and external (LHCb physics software vs. CondDB) compatibility. The former two systems are used in production in the LHCb experiment and have achieved the desired goal of higher flexibility and robustness for the management and operation of the CondDB. The latter one has been fully designed and is passing currently to the implementation stage.
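
    The record's notion of versioned, time-dependent conditions can be illustrated with a toy interval-of-validity (IOV) lookup; the schema, tags and payloads below are invented for illustration and are not the actual LHCb CondDB layout:

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("""CREATE TABLE conditions (
            tag TEXT, node TEXT, since INTEGER, until INTEGER, payload TEXT)""")
        con.executemany("INSERT INTO conditions VALUES (?, ?, ?, ?, ?)", [
            ("v1", "/Ecal/Calibration", 0,    1000, "gain=1.00"),
            ("v1", "/Ecal/Calibration", 1000, 2000, "gain=1.02"),
            ("v2", "/Ecal/Calibration", 0,    2000, "gain=1.01"),
        ])

        def lookup(tag, node, event_time):
            """Return the payload whose interval of validity [since, until) covers event_time."""
            row = con.execute(
                "SELECT payload FROM conditions "
                "WHERE tag = ? AND node = ? AND since <= ? AND ? < until",
                (tag, node, event_time, event_time)).fetchone()
            return row[0] if row else None

        print(lookup("v1", "/Ecal/Calibration", 1500))  # gain=1.02
        print(lookup("v2", "/Ecal/Calibration", 1500))  # gain=1.01

    Keeping every payload addressable by tag and validity interval is what allows identical replicas (Oracle, SQLite) to serve consistent conditions to laptops, the HLT farm and the Grid alike.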

  7. Artificial Radionuclides Database in the Pacific Ocean: HAM Database

    Directory of Open Access Journals (Sweden)

    Michio Aoyama

    2004-01-01

    Full Text Available The database “Historical Artificial Radionuclides in the Pacific Ocean and its Marginal Seas”, or HAM database, has been created. The database includes 90Sr, 137Cs, and 239,240Pu concentration data from the seawater of the Pacific Ocean and its marginal seas, with some measurements from the sea surface to the bottom. The data in the HAM database were collected from about 90 literature citations, which include published papers; annual reports by the Hydrographic Department, Maritime Safety Agency, Japan; and unpublished data provided by individuals. The concentration data for 90Sr, 137Cs, and 239,240Pu cover the period 1957–1998. The present HAM database includes 7737 records for 137Cs concentration data, 3972 records for 90Sr concentration data, and 2666 records for 239,240Pu concentration data. The spatial distribution of sampling stations in the HAM database is heterogeneous; namely, more than 80% of the data for each radionuclide is from the Pacific Ocean and the Sea of Japan, while a relatively small portion of data is from the South Pacific. This HAM database will allow us to use these radionuclides as significant chemical tracers for oceanographic study as well as for the assessment of the environmental effects of anthropogenic radionuclides over these five decades. Furthermore, these radionuclides can be used to verify oceanic general circulation models on the time scale of several decades.

  8. Quantifying hidden individual heterogeneity

    DEFF Research Database (Denmark)

    Steiner, Ulrich; Lenart, Adam; Vaupel, James W.

    Aging is assumed to be driven by the accumulation of damage or some other aging factor which shapes demographic patterns, including the classical late-age mortality plateaus. However, to date, heterogeneity in these damage stages has not been observed. Here, we estimate underlying stage distributions ... and stage dynamics, based on observed survival patterns of isoclonal bacteria. Our results reveal demographic dynamics dominated by low damage stages, and transmission of damage from mother to daughters is low. Still, our models are too simplistic and deterministic. Explaining the observed data ... requires more stochastic processes than our current models include. We are only at the beginning of understanding the diverse mechanisms behind aging and the shaping of senescence ...

  9. Timeliness and Predictability in Real-Time Database Systems

    National Research Council Canada - National Science Library

    Son, Sang H

    1998-01-01

    The confluence of computers, communications, and databases is quickly creating a globally distributed database where many applications require real time access to both temporally accurate and multimedia data...

  10. Database Independent Migration of Objects into an Object-Relational Database

    CERN Document Server

    Ali, A; Munir, K; Waseem-Hassan, M; Willers, I

    2002-01-01

    CERN's (European Organization for Nuclear Research) WISDOM project [1] deals with the replication of data between homogeneous sources in a Wide Area Network (WAN) using the eXtensible Markup Language (XML). The last phase of the WISDOM (Wide-area, database Independent Serialization of Distributed Objects for data Migration) project [2] indicates that the future direction for this work is to incorporate heterogeneous sources, as opposed to the homogeneous sources described by [3]. This work will become essential for the CERN community once the need arises to transfer their legacy data to some source other than Objectivity [4]. Oracle 9i - an Object-Relational Database (including support for abstract data types, ADTs) - appears to be a potential candidate for the physics event store in the CERN CMS experiment, as suggested by [4] & [5]. Consequently, this database has been selected for study. As a result of this work the HEP community will get a tool for migrating their data from Objectivity to Oracle9i.

  11. Influence of the Cr2O3 sintering additive on the homogenization of the plutonium distribution inside an heterogeneous MOX pellet

    International Nuclear Information System (INIS)

    Pieragnoli, A.

    2007-12-01

    This work has revealed the nature of the Cr2O3 action mechanisms on the development of the microstructure of a MOX pellet, and particularly on the improvement of the plutonium distribution. First, it was necessary to study thoroughly the interaction phenomena occurring in the U-Pu-Cr-O system. A model system constituted of the same materials (UO2, (U,Pu)O2 and Cr2O3) as those present in a MOX pellet, and thermally treated under similar sintering conditions, was set up. These tests were completed by studies of the reactivity between PuO2 and Cr2O3, the interdiffusion between UO2 and (U,Pu)O2 in the presence of chromium, and the solubility of chromium in (U,Pu)O2. Then, with all the data acquired, it was possible to describe the evolution of a MOX pellet in the presence of chromium during sintering of the microstructure. Microstructural characteristics such as the degree of plutonium homogenization and the grain size were studied as a function of temperature and sintering dwell time. The chromium oxide within the microstructure was studied as well. Finally, an interpretation of the influence of the presence of chromium on the development of the MOX pellet microstructure is given, focusing particularly on the plutonium distribution. This interpretation is based on the formation of the (U,Pu)CrO3 phase and on the stabilization of the plutonium oxidation state (+III) by chromium at the grain boundaries. Recommendations aiming at optimizing the impact of chromium on the development of the microstructure are given. In most cases, these recommendations are based on solutions which contribute, during the sintering thermal treatment, to the presence of the (U,Pu)CrO3 phase at lower temperature and to retaining a greater quantity of chromium inside the MOX pellet for longer. (O.M.)

  12. Understanding the heterogeneity in volume overload and fluid distribution in decompensated heart failure is key to optimal volume management: role for blood volume quantitation.

    Science.gov (United States)

    Miller, Wayne L; Mullan, Brian P

    2014-06-01

    This study sought to quantitate total blood volume (TBV) in patients hospitalized for decompensated chronic heart failure (DCHF) and to determine the extent of volume overload, and the magnitude and distribution of blood volume and body water changes following diuretic therapy. The accurate assessment and management of volume overload in patients with DCHF remains problematic. TBV was measured by a radiolabeled-albumin dilution technique with intravascular volume, pre-to-post-diuretic therapy, evaluated at hospital admission and at discharge. Change in body weight in relation to quantitated TBV was used to determine interstitial volume contribution to total fluid loss. Twenty-six patients were prospectively evaluated. Two patients had normal TBV at admission. Twenty-four patients were hypervolemic with TBV (7.4 ± 1.6 liters) increased by +39 ± 22% (range, +9.5% to +107%) above the expected normal volume. With diuresis, TBV decreased marginally (+30 ± 16%). Body weight declined by 6.9 ± 5.2 kg, and fluid intake/fluid output was a net negative 8.4 ± 5.2 liters. Interstitial compartment fluid loss was calculated at 6.2 ± 4.0 liters, accounting for 85 ± 15% of the total fluid reduction. TBV analysis demonstrated a wide range in the extent of intravascular overload. Dismissal measurements revealed marginally reduced intravascular volume post-diuretic therapy despite large reductions in body weight. Mobilization of interstitial fluid to the intravascular compartment with diuresis accounted for this disparity. Intravascular volume, however, remained increased at dismissal. The extent, composition, and distribution of volume overload are highly variable in DCHF, and this variability needs to be taken into account in the approach to individualized therapy. TBV quantitation, particularly serial measurements, can facilitate informed volume management with respect to a goal of treating to euvolemia.
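
    The radiolabeled-albumin technique rests on the indicator-dilution principle; the simplified sketch below shows that calculation and the percent deviation from an assumed normal volume (all numbers, the hematocrit scaling and the expected-volume figure are illustrative assumptions, not values or correction factors from the study):

        def total_blood_volume(injected_activity_kBq, plasma_activity_kBq_per_ml, hematocrit):
            """Indicator dilution: plasma volume = injected dose / plasma concentration,
            then scale to whole blood with the hematocrit (simplified, no correction factors)."""
            plasma_volume_ml = injected_activity_kBq / plasma_activity_kBq_per_ml
            return plasma_volume_ml / (1.0 - hematocrit)

        # Hypothetical patient: 1850 kBq injected, 0.55 kBq/mL measured, hematocrit 0.38.
        tbv_ml = total_blood_volume(1850, 0.55, 0.38)
        expected_ml = 5000.0  # assumed ideal volume for this body habitus
        excess_pct = 100.0 * (tbv_ml - expected_ml) / expected_ml
        print(f"TBV = {tbv_ml/1000:.1f} L, {excess_pct:+.0f}% vs expected")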

  13. The Recently Discovered Bokeloh Bat Lyssavirus: Insights Into Its Genetic Heterogeneity and Spatial Distribution in Europe and the Population Genetics of Its Primary Host.

    Science.gov (United States)

    Eggerbauer, Elisa; Troupin, Cécile; Passior, Karsten; Pfaff, Florian; Höper, Dirk; Neubauer-Juric, Antonie; Haberl, Stephanie; Bouchier, Christiane; Mettenleiter, Thomas C; Bourhy, Hervé; Müller, Thomas; Dacheux, Laurent; Freuling, Conrad M

    2017-01-01

    In 2010, a novel lyssavirus named Bokeloh bat lyssavirus (BBLV) was isolated from a Natterer's bat (Myotis nattereri) in Germany. Two further viruses were isolated in the same country and in France in recent years, all from the same bat species and all found in moribund or dead bats. Here we report the description and the full-length genome sequence of five additional BBLV isolates from Germany (n=4) and France (n=1). Interestingly, all of them were isolated from the Natterer's bat, except one from Germany, which was found in a common Pipistrelle bat (Pipistrellus pipistrellus), a widespread and abundant bat species in Europe. The latter represents the first case of transmission of BBLV to another bat species. Phylogenetic analysis clearly demonstrated the presence of two different lineages among this lyssavirus species: lineages A and B. The spatial distribution of these two lineages remains puzzling, as both of them comprised isolates from France and Germany, although clustering of isolates was observed on a regional scale, especially in Germany. Phylogenetic analysis based on the mitochondrial cytochrome b (CYTB) gene from positive Natterer's bats did not suggest a circulation of the respective BBLV sublineages in specific Natterer's bat subspecies, as all of them were shown to belong to the M. nattereri sensu stricto clade/subspecies and were closely related (German and French positive bats). At the bat host level, we demonstrated that the distribution of BBLV at the late stage of the disease appears broad, as viral RNA was detected in many different organs.

  14. ADANS database specification

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-01-16

    The purpose of the Air Mobility Command (AMC) Deployment Analysis System (ADANS) Database Specification (DS) is to describe the database organization and storage allocation and to provide the detailed data model of the physical design and information necessary for the construction of the parts of the database (e.g., tables, indexes, rules, defaults). The DS includes entity relationship diagrams, table and field definitions, reports on other database objects, and a description of the ADANS data dictionary. ADANS is the automated system used by Headquarters AMC and the Tanker Airlift Control Center (TACC) for airlift planning and scheduling of peacetime and contingency operations as well as for deliberate planning. ADANS also supports planning and scheduling of Air Refueling Events by the TACC and the unit-level tanker schedulers. ADANS receives input in the form of movement requirements and air refueling requests. It provides a suite of tools for planners to manipulate these requirements/requests against mobility assets and to develop, analyze, and distribute schedules. Analysis tools are provided for assessing the products of the scheduling subsystems, and editing capabilities support the refinement of schedules. A reporting capability provides formatted screen, print, and/or file outputs of various standard reports. An interface subsystem handles message traffic to and from external systems. The database is an integral part of the functionality summarized above.

  15. Database Perspectives on Blockchains

    OpenAIRE

    Cohen, Sara; Zohar, Aviv

    2018-01-01

    Modern blockchain systems are a fresh look at the paradigm of distributed computing, applied under assumptions of large-scale public networks. They can be used to store and share information without a trusted central party. There has been much effort to develop blockchain systems for a myriad of uses, ranging from cryptocurrencies to identity control, supply chain management, etc. None of this work has directly studied the fundamental database issues that arise when using blockchains as the u...

  16. "Mr. Database" : Jim Gray and the History of Database Technologies.

    Science.gov (United States)

    Hanwahr, Nils C

    2017-12-01

    Although the widespread use of the term "Big Data" is comparatively recent, it invokes a phenomenon in the developments of database technology with distinct historical contexts. The database engineer Jim Gray, known as "Mr. Database" in Silicon Valley before his disappearance at sea in 2007, was involved in many of the crucial developments since the 1970s that constitute the foundation of exceedingly large and distributed databases. Jim Gray was involved in the development of relational database systems based on the concepts of Edgar F. Codd at IBM in the 1970s before he went on to develop principles of Transaction Processing that enable the parallel and highly distributed performance of databases today. He was also involved in creating forums for discourse between academia and industry, which influenced industry performance standards as well as database research agendas. As a co-founder of the San Francisco branch of Microsoft Research, Gray increasingly turned toward scientific applications of database technologies, e. g. leading the TerraServer project, an online database of satellite images. Inspired by Vannevar Bush's idea of the memex, Gray laid out his vision of a Personal Memex as well as a World Memex, eventually postulating a new era of data-based scientific discovery termed "Fourth Paradigm Science". This article gives an overview of Gray's contributions to the development of database technology as well as his research agendas and shows that central notions of Big Data have been occupying database engineers for much longer than the actual term has been in use.

  17. Photon propagation in heterogeneous optical media with spatial correlations: enhanced mean-free-paths and wider-than-exponential free-path distributions

    International Nuclear Information System (INIS)

    Davis, A.B.; Marshak, Alexander

    2004-01-01

    Beer's law of exponential decay in direct transmission is well-known but its break-down in spatially variable optical media has been discussed only sporadically in the literature. We document here this break-down in three-dimensional (3D) media with complete generality and explore its ramifications for photon propagation. We show that effective transmission laws and their associated free-path distributions (FPDs) are in fact never exactly exponential in variable media of any kind. Moreover, if spatial correlations in the extinction field extend at least to the scale of the mean-free-path (MFP), FPDs are necessarily wider-than-exponential in the sense that all higher-order moments of the relevant mean-field FPDs exceed those of the exponential FPD, even if it is tuned to yield the proper MFP. The MFP itself is always larger than the inverse of average extinction in a variable medium. In a vast and important class of spatially-correlated random media, the MFP is indeed the average of the inverse of extinction. We translate these theoretical findings into a practical method for deciding a priori when 3D effects become important. Finally, we discuss an obvious but limited analogy between our analysis of spatial variability and the well-known effects of strong spectral variability in gaseous media when observed or modeled at moderate resolution
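
    The two central claims above (a mean free path larger than the inverse of the average extinction, and a wider-than-exponential free-path distribution) can be illustrated with a toy two-component medium; the sketch below takes the limiting case where spatial correlations are much longer than the free path, so each photon samples a single extinction value, and all numbers are invented:

        import random
        import statistics

        rng = random.Random(42)

        # Two-component medium: dense and tenuous regions in equal proportion.
        sigmas = [2.0, 0.2]                 # extinction coefficients (1/length)
        mean_sigma = statistics.mean(sigmas)

        # Correlation length >> free path: each photon sees one region and takes
        # an exponential step governed by that region's extinction.
        paths = [rng.expovariate(rng.choice(sigmas)) for _ in range(200_000)]

        mfp = statistics.mean(paths)
        print(f"1/<sigma>    = {1/mean_sigma:.3f}")   # 0.909 (homogeneous prediction)
        print(f"observed MFP = {mfp:.3f}")            # ~2.75 = <1/sigma>, i.e. larger
        print(f"relative var = {statistics.variance(paths)/mfp**2:.2f}")  # >1: wider than exponential

    The relative variance of an exponential FPD is exactly 1; the mixture exceeds it, and its mean is the average of the inverse extinction rather than the inverse of the average, which is the behavior the paper derives in general.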

  18. Multilingual Federated Searching Across Heterogeneous Collections.

    Science.gov (United States)

    Powell, James; Fox, Edward A.

    1998-01-01

    Describes a scalable system for searching heterogeneous multilingual collections on the World Wide Web. Details Searchable Database Markup Language (SearchDB-ML) for describing the characteristics of a search engine and its interface, and a protocol for requesting word translations between languages. (Author)

  19. A dedicated database system for handling multi-level data in systems biology.

    Science.gov (United States)

    Pornputtapong, Natapol; Wanichthanarak, Kwanjeera; Nilsson, Avlant; Nookaew, Intawat; Nielsen, Jens

    2014-01-01

    Advances in high-throughput technologies have enabled extensive generation of multi-level omics data. These data are crucial for systems biology research, though they are complex, heterogeneous, highly dynamic, incomplete and distributed among public databases. This leads to difficulties in data accessibility and often results in errors when data are merged and integrated from varied resources. Therefore, integration and management of systems biological data remain very challenging. To overcome this, we designed and developed a dedicated database system that can serve and solve the vital issues in data management and thereby facilitate data integration, modeling and analysis in systems biology within a single database. In addition, a yeast data repository was implemented as an integrated database environment which is operated by the database system. Two applications were implemented to demonstrate extensibility and utilization of the system. Both illustrate how the user can access the database via the web query function and implemented scripts. These scripts are specific for two sample cases: 1) Detecting the pheromone pathway in protein interaction networks; and 2) Finding metabolic reactions regulated by Snf1 kinase. In this study we present the design of a database system which offers an extensible environment to efficiently capture the majority of biological entities and relations encountered in systems biology. Critical functions and control processes were designed and implemented to ensure consistent, efficient, secure and reliable transactions. The two sample cases on the yeast integrated data clearly demonstrate the value of a single database environment for systems biology research.

  20. Stackfile Database

    Science.gov (United States)

    deVarvalho, Robert; Desai, Shailen D.; Haines, Bruce J.; Kruizinga, Gerhard L.; Gilmer, Christopher

    2013-01-01

    This software provides storage, retrieval, and analysis functionality for managing satellite altimetry data. It improves on the efficiency and analysis capabilities of existing database software, with improved flexibility and documentation. It offers flexibility in the type of data that can be stored. There is efficient retrieval either across the spatial domain or the time domain. Built-in analysis tools are provided for frequently performed altimetry tasks. This software package is used for storing and manipulating satellite measurement data. It was developed with a focus on handling the requirements of repeat-track altimetry missions such as Topex and Jason. It was, however, designed to work with a wide variety of satellite measurement data (e.g., the Gravity Recovery And Climate Experiment -- GRACE). The software consists of several command-line tools for importing, retrieving, and analyzing satellite measurement data.

  1. Conceptual Model Formalization in a Semantic Interoperability Service Framework: Transforming Relational Database Schemas to OWL.

    Science.gov (United States)

    Bravo, Carlos; Suarez, Carlos; González, Carolina; López, Diego; Blobel, Bernd

    2014-01-01

    Healthcare information is distributed through multiple heterogeneous and autonomous systems. Access to, and sharing of, distributed information sources are a challenging task. To contribute to meeting this challenge, this paper presents a formal, complete and semi-automatic transformation service from Relational Databases to Web Ontology Language. The proposed service makes use of an algorithm that allows several data models from different domains to be transformed, mainly by deploying inheritance rules. The paper emphasizes the relevance of integrating the proposed approach into an ontology-based interoperability service to achieve semantic interoperability.
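    To make the general idea of such a transformation concrete (this is an illustrative sketch, not the service described in the paper), the snippet below emits OWL classes and datatype properties in Turtle syntax from a hypothetical relational schema; the table names, column names and base IRI are assumptions made up for the example.

        # Minimal sketch: map relational tables to OWL classes and columns to
        # datatype properties, emitting Turtle text. The schema is hypothetical.
        SCHEMA = {
            "Patient":   {"id": "integer", "name": "string", "birth_date": "date"},
            "Encounter": {"id": "integer", "patient_id": "integer", "date": "date"},
        }

        XSD = {"integer": "xsd:integer", "string": "xsd:string", "date": "xsd:date"}

        def schema_to_owl(schema, base="http://example.org/onto#"):
            lines = [
                "@prefix :     <%s> ." % base,
                "@prefix owl:  <http://www.w3.org/2002/07/owl#> .",
                "@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .",
                "@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .",
                "",
            ]
            for table, columns in schema.items():
                lines.append(":%s a owl:Class ." % table)            # table -> class
                for column, sqltype in columns.items():
                    prop = "%s_%s" % (table.lower(), column)
                    lines += [
                        ":%s a owl:DatatypeProperty ;" % prop,       # column -> property
                        "    rdfs:domain :%s ;" % table,
                        "    rdfs:range %s ." % XSD[sqltype],
                    ]
                lines.append("")
            return "\n".join(lines)

        if __name__ == "__main__":
            print(schema_to_owl(SCHEMA))

    A fuller transformation would also turn foreign keys into object properties and primary keys into identifying axioms, which is where inheritance-style rules of the kind mentioned above come into play.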

  2. Renewal-anomalous-heterogeneous files

    International Nuclear Information System (INIS)

    Flomenbom, Ophir

    2010-01-01

    Renewal-anomalous-heterogeneous files are solved. A simple file is made of Brownian hard spheres that diffuse stochastically in an effective 1D channel. Generally, Brownian files are heterogeneous: the spheres' diffusion coefficients are distributed and the initial spheres' density is non-uniform. In renewal-anomalous files, the distribution of waiting times for individual jumps is not exponential as in Brownian files, yet obeys ψ_α(t) ∼ t^(-1-α), 0 < α < 1. The mean square displacement (MSD), ⟨r²⟩, obeys ⟨r²⟩ ∼ (⟨r²⟩_nrml)^α, where ⟨r²⟩_nrml is the MSD in the corresponding Brownian file. This scaling is an outcome of an exact relation (derived here) connecting probability density functions of Brownian files and renewal-anomalous files. It is also shown that non-renewal-anomalous files are slower than the corresponding renewal ones.
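    As a hedged numerical aside (not part of the cited work), waiting times with the heavy-tailed form ψ_α(t) ∼ t^(-1-α) can be drawn by inverse-transform sampling of a Pareto law; the unit lower cut-off t0 is an arbitrary choice for the example.

        import random

        def sample_waiting_time(alpha, t0=1.0):
            """Draw t from the Pareto density psi(t) = alpha * t0**alpha / t**(1 + alpha), t >= t0."""
            u = random.random()
            return t0 * (1.0 - u) ** (-1.0 / alpha)

        # For 0 < alpha < 1 (the anomalous regime) the mean waiting time diverges,
        # so the empirical mean keeps growing with the number of draws.
        alpha = 0.5
        draws = [sample_waiting_time(alpha) for _ in range(100000)]
        print("empirical mean over 1e5 draws:", sum(draws) / len(draws))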

  3. Estimating medication stopping fraction and real-time prevalence of drug use in pharmaco-epidemiologic databases. An application of the reverse waiting time distribution

    DEFF Research Database (Denmark)

    Støvring, Henrik; Pottegård, Anton; Hallas, Jesper

    2017-01-01

    Purpose: To introduce the reverse waiting time distribution (WTD) and show how it can be used to estimate stopping fractions and real-time prevalence of treatment in pharmacoepidemiological studies. Methods: The reverse WTD is the distribution of time from the last dispensed prescription of each patient within a time window to the end of it. It is a mirrored version of the ordinary WTD, which considers the first dispensed prescription of patients within a time window. Based on renewal process theory, the reverse WTD can be analyzed as an ordinary WTD with maximum likelihood estimation. Based... ...-hoc decision rules for automated implementations, and it yields estimates of real-time prevalence.
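    A minimal sketch of the quantity underlying the reverse WTD (not the authors' implementation; patient IDs, dates and the window are hypothetical): for each patient, take the time from the last dispensed prescription inside the observation window to the end of that window.

        from datetime import date

        # Hypothetical dispensing dates per patient within the observation window.
        dispensings = {
            "patient_A": [date(2016, 1, 10), date(2016, 4, 2), date(2016, 9, 15)],
            "patient_B": [date(2016, 2, 1)],
        }
        window_end = date(2016, 12, 31)

        # Reverse waiting time: end of window minus the last dispensing in the window.
        reverse_wtd = {
            pid: (window_end - max(dates)).days
            for pid, dates in dispensings.items()
        }
        print(reverse_wtd)  # {'patient_A': 107, 'patient_B': 334}

    The empirical distribution of these times across patients is then the reverse WTD to which a parametric renewal-theory model can be fitted.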

  4. The Neotoma Paleoecology Database

    Science.gov (United States)

    Grimm, E. C.; Ashworth, A. C.; Barnosky, A. D.; Betancourt, J. L.; Bills, B.; Booth, R.; Blois, J.; Charles, D. F.; Graham, R. W.; Goring, S. J.; Hausmann, S.; Smith, A. J.; Williams, J. W.; Buckland, P.

    2015-12-01

    The Neotoma Paleoecology Database (www.neotomadb.org) is a multiproxy, open-access, relational database that includes fossil data for the past 5 million years (the late Neogene and Quaternary Periods). Modern distributional data for various organisms are also being made available for calibration and paleoecological analyses. The project is a collaborative effort among individuals from more than 20 institutions worldwide, including domain scientists representing a spectrum of Pliocene-Quaternary fossil data types, as well as experts in information technology. Working groups are active for diatoms, insects, ostracodes, pollen and plant macroscopic remains, testate amoebae, rodent middens, vertebrates, age models, geochemistry and taphonomy. Groups are also active in developing online tools for data analyses and for developing modules for teaching at different levels. A key design concept of NeotomaDB is that stewards for various data types are able to remotely upload and manage data. Cooperatives for different kinds of paleo data, or from different regions, can appoint their own stewards. Over the past year, much progress has been made on development of the steward software-interface that will enable this capability. The steward interface uses web services that provide access to the database. More generally, these web services enable remote programmatic access to the database, which both desktop and web applications can use and which provide real-time access to the most current data. Use of these services can alleviate the need to download the entire database, which can be out-of-date as soon as new data are entered. In general, the Neotoma web services deliver data either from an entire table or from the results of a view. Upon request, new web services can be quickly generated. Future developments will likely expand the spatial and temporal dimensions of the database. NeotomaDB is open to receiving new datasets and stewards from the global Quaternary community

  5. Macroeconomic Policies and Agent Heterogeneity

    OpenAIRE

    GOTTLIEB, Charles

    2012-01-01

    Defence date: 24 February 2012 Examining Board: Giancarlo Corsetti, Arpad Abraham, Juan Carlos Conesa, Jonathan Heathcote. This thesis contributes to the understanding of macroeconomic policies’ impact on the distribution of wealth. It belongs to the strand of literature that departs from the representative agent assumption and perceives agent heterogeneity and the induced disparities in wealth accumulation, as an important dimension of economic policy-making. Within such economic envir...

  6. Vertical distribution of chlorophyll a concentration and phytoplankton community composition from in situ fluorescence profiles: a first database for the global ocean

    Science.gov (United States)

    Sauzède, R.; Lavigne, H.; Claustre, H.; Uitz, J.; Schmechtig, C.; D'Ortenzio, F.; Guinet, C.; Pesant, S.

    2015-10-01

    In vivo chlorophyll a fluorescence is a proxy of chlorophyll a concentration, and is one of the most frequently measured biogeochemical properties in the ocean. Thousands of profiles are available from historical databases and the integration of fluorescence sensors to autonomous platforms has led to a significant increase of chlorophyll fluorescence profile acquisition. To our knowledge, this important source of environmental data has not yet been included in global analyses. A total of 268 127 chlorophyll fluorescence profiles from several databases as well as published and unpublished individual sources were compiled. Following a robust quality control procedure detailed in the present paper, about 49 000 chlorophyll fluorescence profiles were converted into phytoplankton biomass (i.e., chlorophyll a concentration) and size-based community composition (i.e., microphytoplankton, nanophytoplankton and picophytoplankton), using a method specifically developed to harmonize fluorescence profiles from diverse sources. The data span over 5 decades from 1958 to 2015, including observations from all major oceanic basins and all seasons, and depths ranging from the surface to a median maximum sampling depth of around 700 m. Global maps of chlorophyll a concentration and phytoplankton community composition are presented here for the first time. Monthly climatologies were computed for three of Longhurst's ecological provinces in order to exemplify the potential use of the data product. Original data sets (raw fluorescence profiles) as well as calibrated profiles of phytoplankton biomass and community composition are available on open access at PANGAEA, Data Publisher for Earth and Environmental Science. Raw fluorescence profiles: http://doi.pangaea.de/10.1594/PANGAEA.844212 and Phytoplankton biomass and community composition: http://doi.pangaea.de/10.1594/PANGAEA.844485

  7. Extending Database Integration Technology

    National Research Council Canada - National Science Library

    Buneman, Peter

    1999-01-01

    Formal approaches to the semantics of databases and database languages can have immediate and practical consequences in extending database integration technologies to include a vastly greater range...

  8. Earthquake scaling laws for rupture geometry and slip heterogeneity

    Science.gov (United States)

    Thingbaijam, Kiran K. S.; Mai, P. Martin; Goda, Katsuichiro

    2016-04-01

    We analyze an extensive compilation of finite-fault rupture models to investigate earthquake scaling of source geometry and slip heterogeneity to derive new relationships for seismic and tsunami hazard assessment. Our dataset comprises 158 earthquakes with a total of 316 rupture models selected from the SRCMOD database (http://equake-rc.info/srcmod). We find that fault-length does not saturate with earthquake magnitude, while fault-width reveals inhibited growth due to the finite seismogenic thickness. For strike-slip earthquakes, fault-length grows more rapidly with increasing magnitude compared to events of other faulting types. Interestingly, our derived relationship falls between the L-model and W-model end-members. In contrast, both reverse and normal dip-slip events are more consistent with self-similar scaling of fault-length. However, fault-width scaling relationships for large strike-slip and normal dip-slip events, occurring on steeply dipping faults (δ~90° for strike-slip faults, and δ~60° for normal faults), deviate from self-similarity. Although reverse dip-slip events in general show self-similar scaling, the restricted growth of down-dip fault extent (with upper limit of ~200 km) can be seen for mega-thrust subduction events (M~9.0). Despite this fact, for a given earthquake magnitude, subduction reverse dip-slip events occupy relatively larger rupture area, compared to shallow crustal events. In addition, we characterize slip heterogeneity in terms of its probability distribution and spatial correlation structure to develop a complete stochastic random-field characterization of earthquake slip. We find that truncated exponential law best describes the probability distribution of slip, with observable scale parameters determined by the average and maximum slip. Applying Box-Cox transformation to slip distributions (to create quasi-normal distributed data) supports cube-root transformation, which also implies distinctive non-Gaussian slip
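    For reference, a truncated exponential law of the kind named above (written here in a standard textbook form rather than quoted from the paper) restricts an exponential density with scale σ to slip values between 0 and s_max:

        \[
          f(s) \;=\; \frac{\exp(-s/\sigma)}{\sigma\,\bigl[1 - \exp(-s_{\max}/\sigma)\bigr]},
          \qquad 0 \le s \le s_{\max},
        \]

    where, as the abstract indicates, the scale and truncation parameters are tied to the average and maximum slip of the rupture model.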

  9. Creating a database for evaluating the distribution of energy deposited at prostate using simulation in phantom with the Monte Carlo code EGSnrc

    International Nuclear Information System (INIS)

    Resende Filho, T.A.; Vieira, I.F.; Leal Neto, V.

    2009-01-01

    An exposure computational model (ECM) composed of a water tank phantom and a point, monoenergetic photon source, coupled to a Monte Carlo code to simulate the interaction and deposition of the energy emitted by I-125, is a tool that offers many advantages for dosimetric evaluations in areas such as the planning of brachytherapy treatments. Using DOSXYZnrc, it was possible to construct a data bank that allows the end user to estimate in advance the spatial distribution of the dose in the prostate, an important tool in the brachytherapy procedure. The results obtained show the fractional energy deposited in the water phantom evaluated at the energies 0.028 MeV and 0.035 MeV, both indicated for this procedure, as well as the dose distribution in the range between 0.10334 and 0.53156 μGy. The mean error is less than 2%, the tolerance limit considered in radiotherapy protocols. (author)

  10. Is the spatial distribution of brain lesions associated with closed-head injury predictive of subsequent development of attention-deficit/hyperactivity disorder? Analysis with brain-image database

    Science.gov (United States)

    Herskovits, E. H.; Megalooikonomou, V.; Davatzikos, C.; Chen, A.; Bryan, R. N.; Gerring, J. P.

    1999-01-01

    PURPOSE: To determine whether there is an association between the spatial distribution of lesions detected at magnetic resonance (MR) imaging of the brain in children after closed-head injury and the development of secondary attention-deficit/hyperactivity disorder (ADHD). MATERIALS AND METHODS: Data obtained from 76 children without prior history of ADHD were analyzed. MR images were obtained 3 months after closed-head injury. After manual delineation of lesions, images were registered to the Talairach coordinate system. For each subject, registered images and secondary ADHD status were integrated into a brain-image database, which contains depiction (visualization) and statistical analysis software. Using this database, we assessed visually the spatial distributions of lesions and performed statistical analysis of image and clinical variables. RESULTS: Of the 76 children, 15 developed secondary ADHD. Depiction of the data suggested that children who developed secondary ADHD had more lesions in the right putamen than children who did not develop secondary ADHD; this impression was confirmed statistically. After Bonferroni correction, we could not demonstrate significant differences between secondary ADHD status and lesion burdens for the right caudate nucleus or the right globus pallidus. CONCLUSION: Closed-head injury-induced lesions in the right putamen in children are associated with subsequent development of secondary ADHD. Depiction software is useful in guiding statistical analysis of image data.

  11. Brede Tools and Federating Online Neuroinformatics Databases

    DEFF Research Database (Denmark)

    Nielsen, Finn Årup

    2014-01-01

    As open science neuroinformatics databases the Brede Database and Brede Wiki seek to make distribution and federation of their content as easy and transparent as possible. The databases rely on simple formats and allow other online tools to reuse their content. This paper describes the possible i...

  12. Towards P2P XML Database Technology

    NARCIS (Netherlands)

    Y. Zhang (Ying)

    2007-01-01

    textabstractTo ease the development of data-intensive P2P applications, we envision a P2P XML Database Management System (P2P XDBMS) that acts as a database middle-ware, providing a uniform database abstraction on top of a dynamic set of distributed data sources. In this PhD work, we research which

  13. Heterogeneous Beliefs, Public Information, and Option Markets

    DEFF Research Database (Denmark)

    Qin, Zhenjiang

    In an incomplete market setting with heterogeneous prior beliefs, I show that public information and strike price of option have substantial influence on asset pricing in option markets, by investigating an absolute option pricing model with negative exponential utility investors and normally distributed dividend. I demonstrate that heterogeneous prior variances give rise to the economic value of option markets. Investors speculate in option market and public information improves allocational efficiency of markets only when there is heterogeneity in prior variance. Heterogeneity in mean is neither a necessary nor sufficient condition for generating speculations in option markets. With heterogeneous beliefs, options are non-redundant assets which can facilitate side-betting and enable investors to take advantage of the disagreements and the differences in confidence. This fact leads to a higher growth...

  14. Fiber Bundle Model Under Heterogeneous Loading

    Science.gov (United States)

    Roy, Subhadeep; Goswami, Sanchari

    2018-03-01

    The present work deals with the behavior of the fiber bundle model under heterogeneous loading conditions. The model is explored both in the mean-field limit as well as with local stress concentration. In the mean-field limit, the failure abruptness decreases with increasing order k of heterogeneous loading. In this limit, a brittle to quasi-brittle transition is observed at a particular strength of disorder, which changes with k. On the other hand, the model is hardly affected by such heterogeneity in the limit where local stress concentration plays a crucial role. The continuous limit of the heterogeneous loading is also studied and discussed in this paper. Some of the important results related to the fiber bundle model are reviewed and their responses to our new scheme of heterogeneous loading are studied in detail. Our findings are universal with respect to the nature of the threshold distribution adopted to assign strength to an individual fiber.

  15. Heterogeneity in the WTP for recreational access

    DEFF Research Database (Denmark)

    Campbell, Danny; Vedel, Suzanne Elizabeth; Thorsen, Bo Jellesmark

    2014-01-01

    In this study we have addressed appropriate modelling of heterogeneity in willingness to pay (WTP) for environmental goods, and have demonstrated its importance using a case of forest access in Denmark. We compared WTP distributions for four models: (1) a multinomial logit model, (2) a mixed logit model assuming a univariate Normal distribution, (3) or assuming a multivariate Normal distribution allowing for correlation across attributes, and (4) a mixture of two truncated Normal distributions, allowing for correlation among attributes. In the first two models mean WTP for enhanced access was negative. However, models accounting for preference heterogeneity found a positive mean WTP, but a large sub-group with negative WTP. Accounting for preference heterogeneity can alter overall conclusions, which highlights the importance of this for policy recommendations.

  16. Practical authorization in large heterogeneous distributed systems

    International Nuclear Information System (INIS)

    Fletcher, J.G.; Nessett, D.M.

    1992-11-01

    Requirements for access control, especially authorization, in practical computing environments are listed and discussed. These are used as the basis for a critique of existing access control mechanisms, which are found to present difficulties. A new mechanism, free of many of these difficulties, is then described and critiqued

  17. License - PLACE | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available PLACE License - License to Use This Database. Last updated: 2014/07/17. You may use this database in compliance with the license terms regarding the use of this database and the requirements you must follow in using this database. The license for this database is Creative Commons Attribution-Share Alike 2.1 Japan. If you use data from this database, please be sure to attribute this database as follows: ... With regard to this database, you are licensed to: freely access part or whole of this database, and acquire data; freely redistribute part or whole of the data from this database; and freely create and distribute databases ...

  18. Security Research on Engineering Database System

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    The engine engineering database system is a CAD-oriented applied database management system that has the capability of managing distributed data. The paper discusses the security issues of the engine engineering database management system (EDBMS). Through study and analysis of database security, a series of security rules is drawn up, reaching the B1-level security standard, which includes discretionary access control (DAC), mandatory access control (MAC) and audit. The EDBMS implements functions of DAC, ...
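    As a schematic illustration of the discretionary access control idea only (not the EDBMS implementation; users, objects and rights below are invented), DAC can be pictured as an owner-managed access-control list that is consulted before every operation:

        # Toy discretionary access control: owners grant rights on their own objects.
        acl = {}  # (user, object) -> set of granted rights

        def grant(owner, user, obj, right, owners):
            """The owner of obj may grant rights on it to other users (discretionary)."""
            if owners.get(obj) != owner:
                raise PermissionError("only the owner may grant rights on %s" % obj)
            acl.setdefault((user, obj), set()).add(right)

        def check(user, obj, right):
            return right in acl.get((user, obj), set())

        owners = {"engine_drawing_42": "alice"}
        grant("alice", "bob", "engine_drawing_42", "read", owners)
        print(check("bob", "engine_drawing_42", "read"))   # True
        print(check("bob", "engine_drawing_42", "write"))  # False

    Mandatory access control, by contrast, would compare fixed security labels on users and objects instead of consulting owner-granted lists.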

  19. Heterogeneous burnable poisons:

    International Nuclear Information System (INIS)

    Leiva, Sergio; Agueda, Horacio; Russo, Diego

    1989-01-01

    The use of materials possessing a high neutron absorption cross-section, commonly known as 'burnable poisons', has its origin in BWR reactors, with the purpose of improving the efficiency of the first fuel load. Later on, it was extended to PWRs to compensate for initial reactivity without infringing the requirement of maintaining a negative moderator coefficient. The present tendency is to increase the use of solid burnable poisons to extend the fuel cycle life and discharge burnup. There are two concepts for burnable poison utilization: 1) heterogeneous distributions in the form of rods, plates, etc., and 2) homogeneous dispersions of burnable poisons in the fuel. The purpose of this work is to present the results of sinterability studies performed on Al2O3-B4C and Al2O3-Gd2O3 systems. Experiments consisted of pressing, at room temperature, mixtures of powders containing up to 5 wt % of B4C or Gd2O3 in Al2O3 and subsequently sintering at 1750 deg C in a reducing atmosphere. Evaluations of density, porosity and microstructure were done and a comparison with previous experience is shown. (Author) [es

  20. A 1998 Workshop on Heterogeneous Computing

    Science.gov (United States)

    1998-09-18

    Programming Heterogeneous Computing Systems? Panel Chair: Gul A. Agha, University of Illinois, Urbana-Champaign, IL, USA. Modular Heterogeneous System ... electrical engineering from the University of Illinois, Urbana-Champaign, in 1975. She worked at the I.B.M. T.J. Watson Research Center with the ... "Distributed System Environment". I Encuentro de Computación. Taller de Sistemas Distribuidos y Paralelos. Memorias. Querétaro, Qro., Mexico. September 1997

  1. Heterogeneous Epidemic Model for Assessing Data Dissemination in Opportunistic Networks

    DEFF Research Database (Denmark)

    Rozanova, Liudmila; Alekseev, Vadim; Temerev, Alexander

    2014-01-01

    ... that the amount of data transferred between network nodes possesses a Pareto distribution, implying scale-free properties. In this context, more heterogeneity in susceptibility means a less severe epidemic progression, and, on the contrary, more heterogeneity in infectivity leads to more severe epidemics — assuming that the other parameter (either heterogeneity or susceptibility) stays fixed. The results are general enough to be useful for estimating the epidemic progression with no significant acquired immunity — in the cases where the Pareto distribution holds.

  2. Database development and management

    CERN Document Server

    Chao, Lee

    2006-01-01

    Introduction to Database Systems: Functions of a Database; Database Management System; Database Components; Database Development Process. Conceptual Design and Data Modeling: Introduction to Database Design Process; Understanding Business Process; Entity-Relationship Data Model; Representing Business Process with Entity-Relationship Model. Table Structure and Normalization: Introduction to Tables; Table Normalization. Transforming Data Models to Relational Databases: DBMS Selection; Transforming Data Models to Relational Databases; Enforcing Constraints; Creating Database for Business Process. Physical Design and Database ...

  3. 1.15 - Structural Chemogenomics Databases to Navigate Protein–Ligand Interaction Space

    NARCIS (Netherlands)

    Kanev, G.K.; Kooistra, A.J.; de Esch, I.J.P.; de Graaf, C.

    2017-01-01

    Structural chemogenomics databases allow the integration and exploration of heterogeneous genomic, structural, chemical, and pharmacological data in order to extract useful information that is applicable for the discovery of new protein targets and biologically active molecules. Integrated databases

  4. Heterogeneous network architectures

    DEFF Research Database (Denmark)

    Christiansen, Henrik Lehrmann

    2006-01-01

    Future networks will be heterogeneous! Due to the sheer size of networks (e.g., the Internet) upgrades cannot be instantaneous and thus heterogeneity appears. This means that instead of trying to find the solution, networks should be designed as being heterogeneous. One of the key requirements here is flexibility. This thesis investigates such heterogeneous network architectures and how to make them flexible. A survey of algorithms for network design is presented, and it is described how using heuristics can increase the speed. A hierarchical, MPLS based network architecture is described and it is discussed that it is advantageous to heterogeneous networks and illustrated by a number of examples. Modeling and simulation is a well-known way of doing performance evaluation. An approach to event-driven simulation of communication networks is presented and mixed complexity modeling, which can simplify...

  5. The GLIMS Glacier Database

    Science.gov (United States)

    Raup, B. H.; Khalsa, S. S.; Armstrong, R.

    2007-12-01

    Info, GML (Geography Markup Language) and GMT (Generic Mapping Tools). This "clip-and-ship" function allows users to download only the data they are interested in. Our flexible web interfaces to the database, which includes various support layers (e.g. a layer to help collaborators identify satellite imagery over their region of expertise) will facilitate enhanced analysis to be undertaken on glacier systems, their distribution, and their impacts on other Earth systems.

  6. A Support Database System for Integrated System Health Management (ISHM)

    Science.gov (United States)

    Schmalzel, John; Figueroa, Jorge F.; Turowski, Mark; Morris, John

    2007-01-01

    The development, deployment, operation and maintenance of Integrated Systems Health Management (ISHM) applications require the storage and processing of tremendous amounts of low-level data. This data must be shared in a secure and cost-effective manner between developers, and processed within several heterogeneous architectures. Modern database technology allows this data to be organized efficiently, while ensuring the integrity and security of the data. The extensibility and interoperability of the current database technologies also allows for the creation of an associated support database system. A support database system provides additional capabilities by building applications on top of the database structure. These applications can then be used to support the various technologies in an ISHM architecture. This presentation and paper propose a detailed structure and application description for a support database system, called the Health Assessment Database System (HADS). The HADS provides a shared context for organizing and distributing data as well as a definition of the applications that provide the required data-driven support to ISHM. This approach provides another powerful tool for ISHM developers, while also enabling novel functionality. This functionality includes: automated firmware updating and deployment, algorithm development assistance and electronic datasheet generation. The architecture for the HADS has been developed as part of the ISHM toolset at Stennis Space Center for rocket engine testing. A detailed implementation has begun for the Methane Thruster Testbed Project (MTTP) in order to assist in developing health assessment and anomaly detection algorithms for ISHM. The structure of this implementation is shown in Figure 1. The database structure consists of three primary components: the system hierarchy model, the historical data archive and the firmware codebase. The system hierarchy model replicates the physical relationships between
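    To make the three-part structure just described concrete, here is a hedged sqlite3 sketch; the table and column names are assumptions for illustration, not the actual HADS schema, with one table per component (system hierarchy model, historical data archive, firmware codebase).

        import sqlite3

        # Hypothetical minimal schema mirroring the three HADS components described above.
        conn = sqlite3.connect(":memory:")
        conn.executescript("""
        CREATE TABLE system_hierarchy (
            element_id INTEGER PRIMARY KEY,
            parent_id  INTEGER REFERENCES system_hierarchy(element_id),
            name       TEXT NOT NULL              -- e.g. test stand, feed line, sensor
        );
        CREATE TABLE historical_data (
            element_id INTEGER REFERENCES system_hierarchy(element_id),
            timestamp  TEXT NOT NULL,
            value      REAL NOT NULL
        );
        CREATE TABLE firmware_codebase (
            element_id INTEGER REFERENCES system_hierarchy(element_id),
            version    TEXT NOT NULL,
            image      BLOB
        );
        """)
        conn.execute("INSERT INTO system_hierarchy VALUES (1, NULL, 'test stand')")
        conn.execute("INSERT INTO system_hierarchy VALUES (2, 1, 'pressure sensor P-101')")
        conn.execute("INSERT INTO historical_data VALUES (2, '2007-01-01T00:00:00', 101.3)")
        print(conn.execute(
            "SELECT name FROM system_hierarchy WHERE parent_id = 1").fetchall())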

  7. Study on managing EPICS database using ORACLE

    International Nuclear Information System (INIS)

    Liu Shu; Wang Chunhong; Zhao Jijiu

    2007-01-01

    EPICS is used as the development toolkit of the BEPCII control system. The core of EPICS is a distributed database residing in front-end machines. The distributed database is usually created by tools such as VDCT and text editors in the host, then loaded to front-end target IOCs through the network. In the BEPCII control system there are about 20,000 signals, which are distributed over more than 20 IOCs. All the databases are developed by device control engineers using VDCT or a text editor; there are no uniform tools providing transparent management. The paper first presents the current status of EPICS database management issues in many labs. Secondly, it studies the EPICS database and the interface between ORACLE and the EPICS database. Finally, it introduces the software development and its application in the BEPCII control system. (authors)
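    As a hedged sketch of the kind of tooling such central management suggests (not the BEPCII software itself; the signal list and fields are illustrative assumptions), signal definitions held in a relational store could be rendered into EPICS-style record text before being loaded onto an IOC:

        # Hypothetical signal rows as they might be exported from a central database.
        signals = [
            {"name": "BPM01:X", "rtype": "ai", "desc": "Beam position X", "scan": "1 second"},
            {"name": "BPM01:Y", "rtype": "ai", "desc": "Beam position Y", "scan": "1 second"},
        ]

        def to_epics_db(rows):
            """Render rows as EPICS-style record definitions (.db text)."""
            chunks = []
            for r in rows:
                chunks.append(
                    'record(%s, "%s") {\n'
                    '    field(DESC, "%s")\n'
                    '    field(SCAN, "%s")\n'
                    '}\n' % (r["rtype"], r["name"], r["desc"], r["scan"])
                )
            return "\n".join(chunks)

        print(to_epics_db(signals))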

  8. Mathematics for Databases

    NARCIS (Netherlands)

    ir. Sander van Laar

    2007-01-01

    A formal description of a database consists of the description of the relations (tables) of the database together with the constraints that must hold on the database. Furthermore the contents of a database can be retrieved using queries. These constraints and queries for databases can very well be

  9. Databases and their application

    NARCIS (Netherlands)

    Grimm, E.C.; Bradshaw, R.H.W; Brewer, S.; Flantua, S.; Giesecke, T.; Lézine, A.M.; Takahara, H.; Williams, J.W.,Jr; Elias, S.A.; Mock, C.J.

    2013-01-01

    During the past 20 years, several pollen database cooperatives have been established. These databases are now constituent databases of the Neotoma Paleoecology Database, a public domain, multiproxy, relational database designed for Quaternary-Pliocene fossil data and modern surface samples. The

  10. License - RMG | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available RMG License - License to Use This Database. Last updated: 2013/08/07. You may use this database in compliance with the license terms regarding the use of this database and the requirements you must follow in using this database. The license for this database is Creative Commons Attribution-Share Alike 2.1 Japan. If you use data from this database, please be sure to attribute this database as follows: Ric... Japan is found here. With regard to this database, you are licensed to: freely access part or whole of this database, and acquire data; freely redistribute part or whole of the data from this database; and freely create and distribute databases ...

  11. Heterogeneous cellular networks

    CERN Document Server

    Hu, Rose Qingyang

    2013-01-01

    A timely publication providing coverage of radio resource management, mobility management and standardization in heterogeneous cellular networks The topic of heterogeneous cellular networks has gained momentum in industry and the research community, attracting the attention of standardization bodies such as 3GPP LTE and IEEE 802.16j, whose objectives are looking into increasing the capacity and coverage of the cellular networks. This book focuses on recent progresses,  covering the related topics including scenarios of heterogeneous network deployment, interference management i

  12. International Ventilation Cooling Application Database

    DEFF Research Database (Denmark)

    Holzer, Peter; Psomas, Theofanis Ch.; OSullivan, Paul

    2016-01-01

    The currently running International Energy Agency, Energy and Conservation in Buildings, Annex 62 Ventilative Cooling (VC) project is coordinating research towards extended use of VC. Within this Annex 62, the joint research activity of the International VC Application Database has been carried out, systematically investigating the distribution of technologies and strategies within VC. The database is structured as both a ticking-list-like building-spreadsheet and a collection of building-datasheets. The content of both closely follows the Annex 62 State-Of-The-Art Report. The database has been filled, based ... and locations, using VC as a means of indoor comfort improvement. The building-spreadsheet highlights distributions of technologies and strategies, such as the following. (Numbers in % refer to the sample of the database's 91 buildings.) It may be concluded that Ventilative Cooling is applied in temporary...

  13. Dietary Supplement Ingredient Database

    Science.gov (United States)

    ... and US Department of Agriculture Dietary Supplement Ingredient Database (DSID) ... values can be saved to build a small database or add to an existing database for national, ...

  14. Energy Consumption Database

    Science.gov (United States)

    The California Energy Commission has created this on-line database for informal reporting ... classifications. The database also provides easy downloading of energy consumption data into Microsoft Excel (XLSX ...

  15. Heterogeneous continuous-time random walks

    Science.gov (United States)

    Grebenkov, Denis S.; Tupikina, Liubov

    2018-01-01

    We introduce a heterogeneous continuous-time random walk (HCTRW) model as a versatile analytical formalism for studying and modeling diffusion processes in heterogeneous structures, such as porous or disordered media, multiscale or crowded environments, weighted graphs or networks. We derive the exact form of the propagator and investigate the effects of spatiotemporal heterogeneities onto the diffusive dynamics via the spectral properties of the generalized transition matrix. In particular, we show how the distribution of first-passage times changes due to local and global heterogeneities of the medium. The HCTRW formalism offers a unified mathematical language to address various diffusion-reaction problems, with numerous applications in material sciences, physics, chemistry, biology, and social sciences.
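    As a minimal numerical sketch in the spirit of such models (not the authors' formalism; the graph and waiting-time scales are invented for the example), a continuous-time random walk on a small graph with site-dependent mean waiting times shows how local heterogeneity feeds into first-passage times:

        import random

        # Ring of 6 sites; site 3 is a "slow" region with a ten-fold longer mean waiting time.
        neighbors = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
        mean_wait = {i: (10.0 if i == 3 else 1.0) for i in range(6)}

        def first_passage_time(start=0, target=5):
            """Simulate one trajectory and return the first-passage time to the target site."""
            site, t = start, 0.0
            while site != target:
                t += random.expovariate(1.0 / mean_wait[site])  # site-dependent waiting time
                site = random.choice(neighbors[site])           # unbiased jump to a neighbour
            return t

        times = [first_passage_time() for _ in range(2000)]
        print("mean first-passage time 0 -> 5:", sum(times) / len(times))

    Replacing the exponential waiting times with heavy-tailed ones, or making the jump probabilities asymmetric, gives the more general heterogeneous behaviour discussed above.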

  16. Collecting Taxes Database

    Data.gov (United States)

    US Agency for International Development — The Collecting Taxes Database contains performance and structural indicators about national tax systems. The database contains quantitative revenue performance...

  17. USAID Anticorruption Projects Database

    Data.gov (United States)

    US Agency for International Development — The Anticorruption Projects Database (Database) includes information about USAID projects with anticorruption interventions implemented worldwide between 2007 and...

  18. NoSQL databases

    OpenAIRE

    Mrozek, Jakub

    2012-01-01

    This thesis deals with database systems referred to as NoSQL databases. In the second chapter, I explain basic terms and the theory of database systems. A short explanation is dedicated to database systems based on the relational data model and the SQL standardized query language. Chapter Three explains the concept and history of the NoSQL databases, and also presents database models, major features and the use of NoSQL databases in comparison with traditional database systems. In the fourth ...

  19. Neurobiological heterogeneity in ADHD

    NARCIS (Netherlands)

    de Zeeuw, P.

    2011-01-01

    Attention-Deficit/Hyperactivity Disorder (ADHD) is a highly heterogeneous disorder clinically. Symptoms take many forms, from subtle but pervasive attention problems or dreaminess up to disruptive and unpredictable behavior. Interestingly, early neuroscientific work on ADHD assumed either a

  20. Heterogeneous Calculation of {epsilon}

    Energy Technology Data Exchange (ETDEWEB)

    Jonsson, Alf

    1961-02-15

    A heterogeneous method of calculating the fast fission factor given by Naudet has been applied to the Carlvik - Pershagen definition of {epsilon}. An exact calculation of the collision probabilities is included in the programme developed for the Ferranti - Mercury computer.

  1. Heterogeneous Calculation of ε

    International Nuclear Information System (INIS)

    Jonsson, Alf

    1961-02-01

    A heterogeneous method of calculating the fast fission factor given by Naudet has been applied to the Carlvik - Pershagen definition of ε. An exact calculation of the collision probabilities is included in the programme developed for the Ferranti - Mercury computer

  2. HETEROGENEOUS INTEGRATION TECHNOLOGY

    Science.gov (United States)

    2017-08-24

    AFRL-RY-WP-TR-2017-0168, Heterogeneous Integration Technology. Dr. Burhan Bayraktaroglu, Devices for Sensing Branch, Aerospace Components & Subsystems ... Final report, September 1, 2016 – May 1, 2017 ... provide a structure for this review. The history and the current status of integration technologies in each category are examined and product examples are ...

  3. Imperfect repair and lifesaving in heterogeneous populations

    Energy Technology Data Exchange (ETDEWEB)

    Finkelstein, Maxim [Department of Mathematical Statistics, University of the Free State, PO Box 339, 9300 Bloemfontein (South Africa) and Max Planck Institute for Demographic Research, Rostock (Germany)]. E-mail: FinkelM.SCl@mail.uovs.ac.za

    2007-12-15

    In this theoretical paper we generalize the notion of minimal repair to the heterogeneous case, when the lifetime distribution function can be modeled by a continuous or a discrete mixture of distributions. The statistical (black box) minimal repair and the minimal repair based on information just before the failure of an object are considered. The corresponding failure (intensity) rate processes are defined and analyzed. A demographic lifesaving model is also considered: each life is saved (cured) with some probability (or, equivalently, a proportion of individuals who would have died are now resuscitated and given another chance). Those who are saved experience the statistical minimal repair. Both of these models are based on Poisson or non-homogeneous Poisson processes of underlying events, which allow for considering heterogeneity. We also consider a new model of imperfect repair in the homogeneous case and present generalizations to the heterogeneous setting.
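    For background (a standard mixture identity stated here for orientation, not quoted from the paper), when lifetimes are a mixture over a frailty variable Z with density π(z), the failure rate observed in the whole population is the average of the subpopulation rates among survivors:

        \[
          \lambda_m(t) \;=\; \frac{\int \lambda(t,z)\,\bar F(t,z)\,\pi(z)\,dz}{\int \bar F(t,z)\,\pi(z)\,dz}
          \;=\; \mathbb{E}\bigl[\lambda(t,Z)\mid T > t\bigr],
        \]

    where \(\bar F(t,z)\) is the survival function and \(\lambda(t,z)\) the failure rate of the subpopulation with frailty z.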

  4. Distributed Database Storage Solution in Java

    OpenAIRE

    Funck, Johan

    2010-01-01

    Car sales companies have in the last couple of years discovered that there is a big market in storing their customers' summer and winter tires for a small fee. For the customers it is very convenient to get rid of the well-known storage problem with seasonal tires. Burlin Motor Umeå is one of these companies, and they offer seasonal storage and change of tires in autumn and spring as well as washing of tires. The main problem for this kind of storage is how to make the storage easy to overv...

  5. Heterogeneous gas core reactor

    International Nuclear Information System (INIS)

    Han, K.I.

    1977-01-01

    Preliminary investigations of a heterogeneous gas core reactor (HGCR) concept suggest that this potential power reactor offers distinct advantages over other existing or conceptual reactor power plants. One of the most favorable features of the HGCR is the flexibility of the power producing system which allows it to be efficiently designed to conform to a desired optimum condition without major conceptual changes. The arrangement of bundles of moderator/coolant channels in a fissionable gas or mixture of gases makes a truly heterogeneous nuclear reactor core. It is this full heterogeneity for a gas-fueled reactor core which accounts for the novelty of the heterogeneous gas core reactor concept and leads to noted significant advantages over previous gas core systems with respect to neutron and fuel economy, power density, and heat transfer characteristics. The purpose of this work is to provide an insight into the design, operating characteristics, and safety of a heterogeneous gas core reactor system. The studies consist mainly of neutronic, energetic and kinetic analyses of the power producing and conversion systems as a preliminary assessment of the heterogeneous gas core reactor concept and basic design. The results of the conducted research indicate a high potential for the heterogeneous gas core reactor system as an electrical power generating unit (either large or small), with an overall efficiency as high as 40 to 45%. The HGCR system is found to be stable and safe, under the conditions imposed upon the analyses conducted in this work, due to the inherent safety of an expanding gaseous fuel and the intrinsic feedback effects of the gas and water coolant

  6. PrimateLit Database

    Science.gov (United States)

    PrimateLit: A bibliographic database for primatology. The PrimateLit database is no longer being ... Resources, National Institutes of Health. The database is a collaborative project of the Wisconsin Primate ...

  7. Study of the heterogeneities effect in the dose distributions of Leksell Gamma Knife (R), through Monte Carlo simulation; Estudio del efecto de las heterogeneidades en las distribuciones de dosis del Leksell GammaKnife (R), mediante simulacion Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Rojas C, E.L. [ININ, 52045 Ocoyoacac, Estado de Mexico (Mexico); Al-Dweri, F.M.O.; Lallena R, A.M. [Universidad de Granada, Granada (Spain)]. e-mail: elrc@nuclear.inin.mx

    2005-07-01

    In this work, the effects that arise in the dose profiles obtained with the Leksell Gamma Knife (R) when heterogeneities are taken into account are studied by means of Monte Carlo simulation. The heterogeneities considered simulate the skull and the air spaces in the head, such as the nasal sinuses and the auditory canals. The calculations were made using the Monte Carlo simulation code PENELOPE (v. 2003). The geometry of each of the 201 sources of which this instrument is composed, as well as the corresponding collimation channels of the Gamma Knife (R), was described by means of a simplified geometry model that has recently been studied. The results obtained when heterogeneities are taken into account show non-negligible differences with respect to those obtained when they are not considered. These differences are largest in the vicinity of the interfaces between different materials. (Author)

  8. KALIMER database development

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, Kwan Seong; Lee, Yong Bum; Jeong, Hae Yong; Ha, Kwi Seok

    2003-03-01

    The KALIMER database is an advanced database for integrated management of liquid metal reactor design technology development using Web applications. The KALIMER design database is composed of a results database, Inter-Office Communication (IOC), a 3D CAD database, and a reserved documents database. The results database holds the research results from all phases of liquid metal reactor design technology development under the mid-term and long-term nuclear R and D program. IOC is a linkage control system between subprojects for sharing and integrating the research results for KALIMER. The 3D CAD database gives a schematic overview of the KALIMER design structure. And the reserved documents database has been developed to manage the various documents and reports produced since project accomplishment.

  9. KALIMER database development

    International Nuclear Information System (INIS)

    Jeong, Kwan Seong; Lee, Yong Bum; Jeong, Hae Yong; Ha, Kwi Seok

    2003-03-01

    The KALIMER database is an advanced database for integrated management of liquid metal reactor design technology development using Web applications. The KALIMER design database is composed of a results database, Inter-Office Communication (IOC), a 3D CAD database, and a reserved documents database. The results database holds the research results from all phases of liquid metal reactor design technology development under the mid-term and long-term nuclear R and D program. IOC is a linkage control system between subprojects for sharing and integrating the research results for KALIMER. The 3D CAD database gives a schematic overview of the KALIMER design structure. And the reserved documents database has been developed to manage the various documents and reports produced since project accomplishment

  10. Logical database design principles

    CERN Document Server

    Garmany, John; Clark, Terry

    2005-01-01

    INTRODUCTION TO LOGICAL DATABASE DESIGN: Understanding a Database; Database Architectures; Relational Databases; Creating the Database; System Development Life Cycle (SDLC); Systems Planning: Assessment and Feasibility; System Analysis: Requirements; System Analysis: Requirements Checklist; Models Tracking and Schedules; Design Modeling; Functional Decomposition Diagram; Data Flow Diagrams; Data Dictionary; Logical Structures and Decision Trees; System Design: Logical. SYSTEM DESIGN AND IMPLEMENTATION: The ER Approach; Entities and Entity Types; Attribute Domains; Attributes; Set-Valued Attributes; Weak Entities; Constraint ...

  11. An Interoperable Cartographic Database

    OpenAIRE

    Slobodanka Ključanin; Zdravko Galić

    2007-01-01

    The concept of producing a prototype of interoperable cartographic database is explored in this paper, including the possibilities of integration of different geospatial data into the database management system and their visualization on the Internet. The implementation includes vectorization of the concept of a single map page, creation of the cartographic database in an object-relation database, spatial analysis, definition and visualization of the database content in the form of a map on t...

  12. Software listing: CHEMTOX database

    International Nuclear Information System (INIS)

    Moskowitz, P.D.

    1993-01-01

    Initially launched in 1983, the CHEMTOX Database was among the first microcomputer databases containing hazardous chemical information. The database is used in many industries and government agencies in more than 17 countries. Updated quarterly, the CHEMTOX Database provides detailed environmental and safety information on 7500-plus hazardous substances covered by dozens of regulatory and advisory sources. This brief listing describes the method of accessing data and provides ordering information for those wishing to obtain the CHEMTOX Database

  13. Databases for INDUS-1 and INDUS-2

    International Nuclear Information System (INIS)

    Merh, Bhavna N.; Fatnani, Pravin

    2003-01-01

    The databases for Indus are relational databases designed to store various categories of data related to the accelerator. The data archiving and retrieving system in Indus is based on a client/server model. A general purpose commercial database is used to store parameters and equipment data for the whole machine. The database manages configuration, on-line and historical databases. On-line and off-line applications distributed in several systems can store and retrieve the data from the database over the network. This paper describes the structure of databases for Indus-1 and Indus-2 and their integration within the software architecture. The data analysis, design, resulting data-schema and implementation issues are discussed. (author)

  14. Green heterogeneous wireless networks

    CERN Document Server

    Ismail, Muhammad; Nee, Hans-Peter; Qaraqe, Khalid A; Serpedin, Erchin

    2016-01-01

    This book focuses on the emerging research topic "green (energy efficient) wireless networks" which has drawn huge attention recently from both academia and industry. This topic is highly motivated due to important environmental, financial, and quality-of-experience (QoE) considerations. Specifically, the high energy consumption of wireless networks accounts for approximately 2% of all CO2 emissions worldwide. This book presents the authors' visions and solutions for deployment of energy efficient (green) heterogeneous wireless communication networks. The book consists of three major parts. The first part provides an introduction to the "green networks" concept, the second part targets the green multi-homing resource allocation problem, and the third part presents a novel deployment of device-to-device (D2D) communications and its successful integration in Heterogeneous Networks (HetNets). The book is novel in that it specifically targets green networking in a heterogeneous wireless medium, which re...

  15. The Brainomics/Localizer database.

    Science.gov (United States)

    Papadopoulos Orfanos, Dimitri; Michel, Vincent; Schwartz, Yannick; Pinel, Philippe; Moreno, Antonio; Le Bihan, Denis; Frouin, Vincent

    2017-01-01

    The Brainomics/Localizer database exposes part of the data collected by the in-house Localizer project, which planned to acquire four types of data from volunteer research subjects: anatomical MRI scans, functional MRI data, behavioral and demographic data, and DNA sampling. Over the years, this local project has been collecting such data from hundreds of subjects. We had selected 94 of these subjects for their complete datasets, including all four types of data, as the basis for a prior publication; the Brainomics/Localizer database publishes the data associated with these 94 subjects. Since regulatory rules prevent us from making genetic data available for download, the database serves only anatomical MRI scans, functional MRI data, behavioral and demographic data. To publish this set of heterogeneous data, we use dedicated software based on the open-source CubicWeb semantic web framework. Through genericity in the data model and flexibility in the display of data (web pages, CSV, JSON, XML), CubicWeb helps us expose these complex datasets in original and efficient ways. Copyright © 2015 Elsevier Inc. All rights reserved.

  16. Isotopes in heterogeneous catalysis

    CERN Document Server

    Hargreaves, Justin SJ

    2006-01-01

    The purpose of this book is to review the current, state-of-the-art application of isotopic methods to the field of heterogeneous catalysis. Isotopic studies are arguably the ultimate technique in in situ methods for heterogeneous catalysis. In this review volume, chapters have been contributed by experts in the field and the coverage includes both the application of specific isotopes - Deuterium, Tritium, Carbon-14, Sulfur-35 and Oxygen-18 - as well as isotopic techniques - determination of surface mobility, steady state transient isotope kinetic analysis, and positron emission profiling.

  17. Cancer heterogeneity and imaging.

    Science.gov (United States)

    O'Connor, James P B

    2017-04-01

    There is interest in identifying and quantifying tumor heterogeneity at the genomic, tissue pathology and clinical imaging scales, as this may help better understand tumor biology and may yield useful biomarkers for guiding therapy-based decision making. This review focuses on the role and value of using x-ray, CT, MRI and PET based imaging methods that identify, measure and map tumor heterogeneity. In particular we highlight the potential value of these techniques and the key challenges required to validate and qualify these biomarkers for clinical use. Copyright © 2016. Published by Elsevier Ltd.

  18. Temperature dependent heterogeneous rotational correlation in lipids.

    Science.gov (United States)

    Dadashvand, Neda; Othon, Christina M

    2016-11-15

    Lipid structures exhibit complex and highly dynamic lateral structure, and changes in lipid density and fluidity are believed to play an essential role in membrane targeting and function. The dynamic structure of liquids on the molecular scale can exhibit complex transient density fluctuations. Here the lateral heterogeneity of lipid dynamics is explored in free standing lipid monolayers. As the temperature is lowered the probes exhibit increasingly broad and heterogeneous rotational correlation. This increase in heterogeneity appears to exhibit a critical onset, similar to those observed for glass forming fluids. We explore heterogeneous relaxation in a single constituent lipid monolayer of 1,2-dimyristoyl-sn-glycero-3-phosphocholine by measuring the rotational diffusion of a fluorescent probe (1-palmitoyl-2-[1]-sn-glycero-3-phosphocholine), which is embedded in the lipid monolayer at low labeling density. Dynamic distributions are measured using wide-field time-resolved fluorescence anisotropy. The observed relaxation exhibits a narrow, liquid-like distribution at high temperatures (τ ∼ 2.4 ns), consistent with previous experimental measures (Dadashvand et al 2014 Struct. Dyn. 1 054701, Loura and Ramalho 2007 Biochim. Biophys. Acta 1768 467-478). However, as the temperature is quenched, the distribution broadens, and we observe the appearance of a long relaxation population (τ ∼ 16.5 ns). This supports the heterogeneity observed for lipids at high packing densities, and demonstrates that the nanoscale diffusion and reorganization in lipid structures can be significantly complex, even in the simplest amorphous architectures. Dynamical heterogeneity of this form can have a significant impact on the organization, permeability and energetics of lipid membrane structures.

  19. A coordination language for databases

    DEFF Research Database (Denmark)

    Li, Ximeng; Wu, Xi; Lluch Lafuente, Alberto

    2017-01-01

    We present a coordination language for the modeling of distributed database applications. The language, baptized Klaim-DB, borrows the concepts of localities and nets of the coordination language Klaim but re-incarnates the tuple spaces of Klaim as databases. It provides high-level abstractions...... and primitives for the access and manipulation of structured data, with integrity and atomicity considerations. We present the formal semantics of Klaim-DB and develop a type system that avoids potential runtime errors such as certain evaluation errors and mismatches of data format in tables, which are monitored...... in the semantics. The use of the language is illustrated in a scenario where the sales from different branches of a chain of department stores are aggregated from their local databases. Raising the abstraction level and encapsulating integrity checks in the language primitives have benefited the modeling task...

  20. Mechanical heterogeneity in ionic liquids

    Science.gov (United States)

    Veldhorst, Arno A.; Ribeiro, Mauro C. C.

    2018-05-01

    Molecular dynamics (MD) simulations of five ionic liquids based on 1-alkyl-3-methylimidazolium cations, [CnC1im]+, have been performed in order to calculate high-frequency elastic moduli and to evaluate heterogeneity of local elastic moduli. The MD simulations of [CnC1im][NO3], n = 2, 4, 6, and 8, assessed the effect of domain segregation when the alkyl chain length increases, and [C8C1im][PF6] assessed the effect of strength of anion-cation interaction. Dispersion curves of excitation energies of longitudinal and transverse acoustic, LA and TA, modes were obtained from time correlation functions of mass currents at different wavevectors. High-frequency sound velocity of LA modes depends on the alkyl chain length, but sound velocity for TA modes does not. High-frequency bulk and shear moduli, K∞ and G∞, depend on the alkyl chain length because of a density effect. Both K∞ and G∞ are strongly dependent on the anion. The calculation of local bulk and shear moduli was accomplished by performing bulk and shear deformations of the systems cooled to 0 K. The simulations showed a clear connection between structural and elastic modulus heterogeneities. The development of nano-heterogeneous structure with increasing length of the alkyl chain in [CnC1im][NO3] implies lower values for local bulk and shear moduli in the non-polar domains. The mean value and the standard deviations of distributions of local elastic moduli decrease when [NO3]- is replaced by the less coordinating [PF6]- anion.
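
    One step of the analysis sketched above, reading a high-frequency sound velocity off a dispersion curve, can be illustrated numerically. The Python snippet below (with invented data, not taken from the simulations) fits the low-wavevector part of a hypothetical LA-mode dispersion ω(k) with a line through the origin, whose slope approximates the sound velocity v = dω/dk as k → 0.

        # Minimal sketch with invented data: estimate the high-frequency sound velocity
        # from the low-k slope of an acoustic dispersion curve omega(k), v = d(omega)/dk.
        import numpy as np

        # Hypothetical LA-mode excitation energies: k in 1/nm, omega in rad/ps.
        k = np.array([0.2, 0.4, 0.6, 0.8, 1.0, 1.2])
        omega = np.array([0.50, 0.99, 1.46, 1.90, 2.28, 2.60])

        low_k = k <= 0.6                      # keep the roughly linear region
        # Least-squares slope of a line through the origin: v = sum(k*w) / sum(k*k).
        v = np.dot(k[low_k], omega[low_k]) / np.dot(k[low_k], k[low_k])
        print(f"estimated sound velocity: {v:.2f} nm/ps  (about {v * 1000:.0f} m/s)")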

  1. HETERO code, heterogeneous procedure for reactor calculation

    International Nuclear Information System (INIS)

    Jovanovic, S.M.; Raisic, N.M.

    1966-11-01

    This report describes the procedure for calculating the parameters of a heterogeneous reactor system, taking into account the interaction between fuel elements in an established geometry. The first part contains the analysis of a single fuel element in a diffusion medium and the criticality condition of the reactor system described by superposition of element interactions. The possibility of performing such an analysis by determining the heterogeneous system lattice is described in the second part. The computer code HETERO, together with the code KETAP (calculation of the criticality factor η_n and flux distribution), is part of this report, along with an example of the RB reactor square lattice.
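
    The superposition idea behind such heterogeneous methods can be caricatured in a few lines of Python. The sketch below is a toy illustration only, not the HETERO or KETAP algorithms: it couples fuel elements on a square lattice through the two-dimensional diffusion kernel K0(r/L) and uses the dominant eigenvalue of the resulting interaction matrix as a crude indicator of how strongly the lattice elements interact. All parameter values are invented.

        # Toy illustration only (not the HETERO/KETAP algorithms): couple fuel elements
        # on a square lattice through the 2-D diffusion kernel K0(r/L) and inspect the
        # dominant eigenvalue of the interaction matrix. All numbers are invented.
        import numpy as np
        from scipy.special import k0

        def interaction_matrix(pitch_cm, n, diff_length_cm, self_term):
            """Pairwise K0(r/L) couplings between n*n fuel elements on a square lattice."""
            xy = np.array([(i * pitch_cm, j * pitch_cm)
                           for i in range(n) for j in range(n)], dtype=float)
            r = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
            safe_r = np.where(r > 0.0, r, 1.0)          # avoid K0(0) on the diagonal
            return np.where(r > 0.0, k0(safe_r / diff_length_cm), self_term)

        g = interaction_matrix(pitch_cm=12.0, n=6, diff_length_cm=30.0, self_term=2.5)
        dominant = np.max(np.linalg.eigvalsh(g))
        print(f"dominant eigenvalue of the interaction matrix: {dominant:.3f}")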

  2. Derivation of Batho's correction factor for heterogeneities

    International Nuclear Information System (INIS)

    Lulu, B.A.; Bjaerngard, B.E.

    1982-01-01

    Batho's correction factor for dose in a heterogeneous, layered medium is derived from the tissue-air ratio method (TARM). The reason why the Batho factor is superior to the TARM factor at low energy is ascribed to the fact that it accounts for the distribution of the scatter-generating matter along the centerline. The poor behavior of the Batho factor at high energies is explained as a consequence of the lack of electron equilibrium at appreciable depth below the surface. Key words: Batho factor, heterogeneity, inhomogeneity, tissue-air ratio method

  3. Heterogeneous cores for fast breeder reactor

    International Nuclear Information System (INIS)

    Schroeder, R.; Spenke, H.

    1980-01-01

    Firstly, the motivation for heterogeneous cores is discussed. This is followed by an outline of two reactor designs, both of which are variants of the combined ring and island core. These designs are presented by means of figures and detailed tables. Subsequently, a description of two international projects at fast critical zero energy facilities is given. Both of them support the nuclear design of heterogeneous cores. In addition to a survey of these projects, a typical experiment is discussed: the measurement of rate distributions. (orig.) [de

  4. Heterogeneity and Networks

    OpenAIRE

    Goyal, S.

    2018-01-01

    This chapter shows that networks can have large and differentiated effects on behaviour and then argues that social and economic pressures facilitate the formation of heterogeneous networks. Thus networks can play an important role in understanding the wide diversity in human behaviour and in economic outcomes.

  5. Heterogeneous Computing in Economics

    DEFF Research Database (Denmark)

    Dziubinski, M.P.; Grassi, S.

    2014-01-01

    This paper shows the potential of heterogeneous computing in solving dynamic equilibrium models in economics. We illustrate the power and simplicity of C++ Accelerated Massive Parallelism (C++ AMP) recently introduced by Microsoft. Starting from the same exercise as Aldrich et al. (J Econ Dyn...

  6. Heterogeneity of Dutch rainfall

    NARCIS (Netherlands)

    Witter, J.V.

    1984-01-01

    Rainfall data for the Netherlands have been used in this study to investigate aspects of heterogeneity of rainfall, in particular local differences in rainfall levels, time trends in rainfall, and local differences in rainfall trend. The possible effect of urbanization and industrialization on the

  7. Heterogeneous chromium catalysts

    NARCIS (Netherlands)

    2005-01-01

    The present invention relates to a heterogeneous chromium catalyst system for the polymerisation of ethylene and/or alpha olefins prepared by the steps of: (a) providing a silica-containing support, (b) treating the silica-containing support with a chromium compound to form a chromium-based

  8. Why does heterogeneity matter?

    Science.gov (United States)

    K.B. Pierce

    2007-01-01

    This is a review of the book "Ecosystem function in heterogeneous landscapes" published in 2005. The authors are G. Lovett, C. Jones, M.G. Turner, and K.C. Weathers. It was published by Springer, New York. The book is a synthesis of the 10th Cary Conference held at the Institute of Ecosystem Studies in Millbrook, New York, in 2003.

  9. Heterogeneity and option pricing

    NARCIS (Netherlands)

    Benninga, Simon; Mayshar, Joram

    2000-01-01

    An economy with agents having constant yet heterogeneous degrees of relative risk aversion prices assets as though there were a single decreasing relative risk aversion pricing representative agent. The pricing kernel has fat tails and option prices do not conform to the Black-Scholes formula.

  10. Dispersivity in heterogeneous permeable media

    International Nuclear Information System (INIS)

    Chesnut, D.A.

    1994-01-01

    When one fluid displaces another through a one-dimensional porous medium, the composition changes from pure displacing fluid at the inlet to pure displaced fluid some distance downstream. The distance over which an arbitrary percentage of this change occurs is defined as the mixing zone length, which increases with increasing average distance traveled by the displacement front. For continuous injection, the mixing zone size can be determined from a breakthrough curve as the time required for the effluent displacing fluid concentration to change from, say, 10% to 90%. In classical dispersion theory, the mixing zone grows in proportion to the square root of the mean distance traveled, or, equivalently, to the square root of the mean breakthrough time. In a multi-dimensional heterogeneous medium, especially at field scales, the size of the mixing zone grows almost linearly with mean distance or travel time. If an observed breakthrough curve is forced to fit the classical theory, the resulting effective dispersivity, instead of being constant, also increases almost linearly with the spatial or temporal scale of the problem. This occurs because the heterogeneity in flow properties creates a corresponding velocity distribution along the different flow pathways from the inlet to the outlet of the system. Mixing occurs mostly at the outlet, or wherever the fluid is sampled, rather than within the medium. In this paper, we consider the effects of this behavior on radionuclide or other contaminant migration.

  11. Dispersivity in heterogeneous permeable media

    International Nuclear Information System (INIS)

    Chesnut, D.A.

    1994-01-01

    When one fluid displaces another through a one-dimensional porous medium, the composition changes from pure displacing fluid at the inlet to pure displaced fluid some distance downstream. The distance over which an arbitrary percentage (typically 80%) of this change occurs is defined as the mixing zone length, which increases with increasing average distance traveled by the displacement front. Alternatively, for continuous injection, the mixing zone size can be determined from a breakthrough curve as the time required for the effluent displacing fluid concentration to change from, say, 10% to 90%. In classical dispersion theory, the mixing zone grows in proportion to the square root of the mean distance traveled, or, equivalently, to the square root of the mean breakthrough time. In a multi-dimensional heterogeneous medium, especially at field scales, the size of the mixing zone grows almost linearly with mean distance or travel time. If an observed breakthrough curve is forced to fit the classical theory, the resulting effective dispersivity, instead of being constant, also increases almost linearly with the spatial or temporal scale of the problem. This occurs because the heterogeneity in flow properties creates a corresponding velocity distribution along the different flow pathways from the inlet to the outlet of the system. Mixing occurs mostly at the outlet, or wherever the fluid is sampled, rather than within the medium. In this paper, we consider the effects of this behavior on radionuclide or other contaminant migration
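
    The contrast drawn above between square-root and near-linear growth of the mixing zone can be made concrete with the classical model. For continuous injection into a homogeneous one-dimensional medium the breakthrough is approximately C/C0 = 0.5 erfc((x - v t)/(2 sqrt(D t))), so the distance between the 10% and 90% concentration points grows like sqrt(t). The Python sketch below evaluates that width at a few times; the velocity and dispersion coefficient are hypothetical, and the calculation illustrates the classical scaling rather than the heterogeneous behavior discussed in the paper.

        # Classical-dispersion illustration (hypothetical parameters, not from the paper):
        # for continuous injection, C/C0 ~ 0.5*erfc((x - v*t) / (2*sqrt(D*t))), so the
        # 10%-90% mixing zone grows in proportion to sqrt(t).
        import numpy as np
        from scipy.special import erfcinv

        v = 1.0    # mean displacement-front velocity, m/day (assumed)
        D = 0.05   # dispersion coefficient, m^2/day (assumed)

        def mixing_zone_width(t):
            """Distance between the 10% and 90% concentration points at time t."""
            # C/C0 = 0.5*erfc(u) equals 0.1 and 0.9 at u = erfcinv(0.2) and erfcinv(1.8).
            return 2.0 * np.sqrt(D * t) * (erfcinv(0.2) - erfcinv(1.8))

        for t in (1.0, 4.0, 16.0, 64.0):
            print(f"t = {t:5.1f} d   mixing zone = {mixing_zone_width(t):.3f} m")
        # Quadrupling t doubles the width: the classical square-root growth law.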

  12. Heterogeneous Materials I and Heterogeneous Materials II

    International Nuclear Information System (INIS)

    Knowles, K M

    2004-01-01

    In these two volumes the author provides a comprehensive survey of the various mathematically-based models used in the research literature to predict the mechanical, thermal and electrical properties of heterogeneous materials, i.e., materials containing two or more phases such as fibre-reinforced polymers, cast iron and porous ceramic kiln furniture. Volume I covers linear properties such as linear dielectric constant, effective electrical conductivity and elastic moduli, while Volume II covers nonlinear properties, fracture and atomistic and multiscale modelling. Where appropriate, particular attention is paid to the use of fractal geometry and percolation theory in describing the structure and properties of these materials. The books are advanced-level texts reflecting the research interests of the author, which will be of significant interest to research scientists working at the forefront of the areas covered by the books. Others working more generally in the field of materials science interested in comparing predictions of properties with experimental results may well find the mathematical level quite daunting initially, as it is apparent that the author assumes a level of mathematics consistent with that taught in final-year undergraduate and graduate theoretical physics courses. However, for such readers it is well worth persevering because of the in-depth coverage to which the various models are subjected, and also because of the extensive reference lists at the back of both volumes which direct readers to the various source references in the scientific literature. Thus, for the wider materials science scientific community the two volumes will be a valuable library resource. While I would have liked to see more comparison with experimental data on both ideal and 'real' heterogeneous materials than is provided by the author and a discussion of how to model strong nonlinear current-voltage behaviour in systems such as zinc oxide varistors, my overall

  13. Database Description - PSCDB | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available. General information: database name PSCDB; classification: Structure Databases - Protein structure; contact: Takayuki Amemiya (National Institute of Advanced Industrial Science and Technology, AIST). Details of the maintenance site, external links, Web services, user registration, licensing and update history are given on the original LSDB Archive entry for this database.

  14. Directory of IAEA databases

    International Nuclear Information System (INIS)

    1991-11-01

    The first edition of the Directory of IAEA Databases is intended to describe the computerized information sources available to IAEA staff members. It contains a listing of all databases produced at the IAEA, together with information on their availability.

  15. Native Health Research Database

    Science.gov (United States)

    Welcome page for the Native Health Database (Indian Health Board): users enter search terms through a basic or advanced search, and a tutorial video explains how to search the database.

  16. Cell Centred Database (CCDB)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Cell Centered Database (CCDB) is a web accessible database for high resolution 2D, 3D and 4D data from light and electron microscopy, including correlated imaging.

  17. E3 Staff Database

    Data.gov (United States)

    US Agency for International Development — The E3 Staff database is maintained by the E3 PDMS (Professional Development & Management Services) office. The database is MySQL. It is manually updated by E3 staff as...

  18. National Geochronological Database

    Science.gov (United States)

    Revised by Sloan, Jan; Henry, Christopher D.; Hopkins, Melanie; Ludington, Steve; Original database by Zartman, Robert E.; Bush, Charles A.; Abston, Carl

    2003-01-01

    The National Geochronological Data Base (NGDB) was established by the United States Geological Survey (USGS) to collect and organize published isotopic (also known as radiometric) ages of rocks in the United States. The NGDB (originally known as the Radioactive Age Data Base, RADB) was started in 1974. A committee appointed by the Director of the USGS was given the mission to investigate the feasibility of compiling the published radiometric ages for the United States into a computerized data bank for ready access by the user community. A successful pilot program, which was conducted in 1975 and 1976 for the State of Wyoming, led to a decision to proceed with the compilation of the entire United States. For each dated rock sample reported in published literature, a record containing information on sample location, rock description, analytical data, age, interpretation, and literature citation was constructed and included in the NGDB. The NGDB was originally constructed and maintained on a mainframe computer, and later converted to a Helix Express relational database maintained on an Apple Macintosh desktop computer. The NGDB and a program to search the data files were published and distributed on Compact Disc-Read Only Memory (CD-ROM) in standard ISO 9660 format as USGS Digital Data Series DDS-14 (Zartman and others, 1995). As of May 1994, the NGDB consisted of more than 18,000 records containing over 30,000 individual ages, which is believed to represent approximately one-half the number of ages published for the United States through 1991. Because the organizational unit responsible for maintaining the database was abolished in 1996, and because we wanted to provide the data in more usable formats, we have reformatted the data, checked and edited the information in some records, and provided this online version of the NGDB. This report describes the changes made to the data and formats, and provides instructions for the use of the database in geographic
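
    As an illustration of the record structure described above (sample location, rock description, analytical data, age, interpretation and literature citation), the hypothetical Python/SQLite sketch below defines a minimal table with those fields and runs one query against it. It is not the NGDB's actual schema or software, and the sample row is invented.

        # Hypothetical sketch, not the actual NGDB schema or software: one row per dated
        # sample, with the fields listed in the database description above.
        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("""
            CREATE TABLE ages (
                sample_id      TEXT PRIMARY KEY,
                latitude       REAL,   -- sample location
                longitude      REAL,
                rock_desc      TEXT,   -- rock description
                method         TEXT,   -- analytical data: dating method
                age_ma         REAL,   -- age in millions of years
                age_err_ma     REAL,
                interpretation TEXT,
                citation       TEXT    -- literature citation
            )
        """)
        con.execute(
            "INSERT INTO ages VALUES (?,?,?,?,?,?,?,?,?)",
            ("WY-0001", 44.5, -110.0, "granite", "K-Ar", 2700.0, 35.0,
             "crystallization age", "Example et al., 1976"),
        )

        # Example query: samples older than 2.5 Ga inside a bounding box.
        rows = con.execute(
            "SELECT sample_id, age_ma FROM ages WHERE age_ma > 2500 "
            "AND latitude BETWEEN 40 AND 46 AND longitude BETWEEN -112 AND -104"
        ).fetchall()
        print(rows)   # [('WY-0001', 2700.0)]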

  19. Spatial heterogeneity in utilities, equity and collective efficiency: the case of rural electrification and demand side management; Heterogeneite spatiale d'un service de reseau, equite et efficacite collective: la distribution rurale d'electricite et la maitrise de la demande

    Energy Technology Data Exchange (ETDEWEB)

    Nadaud, F

    2005-11-15

    This thesis examines the evolution of the economic optimum in the electricity industry under a spatial-equity constraint, in a sector whose supply conditions show strong spatial heterogeneity. The evolution of the rural electrification regime in France is analysed in terms of both economic and social efficiency. We examine the rationale for extending sectoral optimization under an equity constraint to the rationalization of electricity end-uses across the heterogeneous space of rural electricity supply. Two answers are given to this question. The first is to modify the incentives in the institutional regime of rural electrification so that demand-side management (MDE) can be integrated into the strategies of the rural electrification syndicates, drawing on the incentive mechanisms of Anglo-Saxon DSM practice. The second is a statistical method for zoning demand and the distribution grid, whose purpose is to localize action basins for large-scale demand-side-management projects. (author)
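
    The abstract does not spell out the statistical zoning method itself. Purely as an illustration of the general idea, the hypothetical Python sketch below clusters rural areas on two invented features (electricity demand density and network reinforcement cost) to suggest candidate action basins for demand-side-management programmes; the features, units and cluster count are assumptions.

        # Purely illustrative zoning sketch; the thesis' actual statistical method is not
        # described in the abstract. Cluster rural areas on invented features to suggest
        # candidate "action basins" for demand-side management.
        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(1)
        # Hypothetical features per rural area:
        # column 0: demand density (kWh/km^2), column 1: grid reinforcement cost (kEUR/km).
        areas = np.column_stack([
            rng.gamma(shape=2.0, scale=150.0, size=200),
            rng.gamma(shape=2.0, scale=40.0, size=200),
        ])

        labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(areas)
        for zone in range(3):
            sel = areas[labels == zone]
            print(f"zone {zone}: {len(sel):3d} areas, "
                  f"mean demand {sel[:, 0].mean():6.1f}, mean cost {sel[:, 1].mean():5.1f}")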

  20. NIRS database of the original research database

    International Nuclear Information System (INIS)

    Morita, Kyoko

    1991-01-01

    Library staff have recently arranged and compiled the original research papers written by researchers over the 33 years since the National Institute of Radiological Sciences (NIRS) was established. This paper describes how the internal database of original research papers has been created. It is a small example of a hand-made database, accumulated by staff members using whatever knowledge of computers and programming they have. (author)