WorldWideScience

Sample records for lot distribution database

  1. Tradeoffs in distributed databases

    OpenAIRE

    Juntunen, R. (Risto)

    2016-01-01

    In a distributed database, data is spread throughout the network into separate nodes running different DBMS systems (Date, 2000). According to the CAP theorem, the three database properties of consistency, availability and partition tolerance cannot all be achieved simultaneously in a distributed database system; any two of them can be achieved, but not all three at the same time (Brewer, 2000). Since this theorem there has b...
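
    The two-of-three tradeoff can be made concrete with a toy sketch (all names and behavior below are invented for illustration, not taken from the cited works): during a network partition, a replica configured for consistency must refuse writes it cannot replicate, while one configured for availability accepts them and lets copies diverge.

```python
# Toy illustration of the CAP tradeoff during a network partition.

class Replica:
    def __init__(self, mode):
        self.mode = mode          # "CP" favours consistency, "AP" availability
        self.data = {}
        self.partitioned = False  # True while the node cannot reach its peers

    def write(self, key, value):
        if self.partitioned and self.mode == "CP":
            return False          # unavailable, but never inconsistent
        self.data[key] = value    # available, but replicas may diverge
        return True

cp, ap = Replica("CP"), Replica("AP")
cp.partitioned = ap.partitioned = True
print(cp.write("x", 1))  # False: the CP node sacrifices availability
print(ap.write("x", 1))  # True: the AP node sacrifices consistency
```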

  2. LHCb distributed conditions database

    International Nuclear Information System (INIS)

    Clemencic, M

    2008-01-01

    The LHCb Conditions Database project provides the necessary tools to handle non-event time-varying data. The main users of conditions are reconstruction and analysis processes, which are running on the Grid. To allow efficient access to the data, we need to use a synchronized replica of the content of the database located at the same site as the event data file, i.e. the LHCb Tier1. The replica to be accessed is selected from information stored on LFC (LCG File Catalog) and managed with the interface provided by the LCG developed library CORAL. The plan to limit the submission of jobs to those sites where the required conditions are available will also be presented. LHCb applications have been using the Conditions Database framework in production since March 2007. We have been able to collect statistics on the performance and effectiveness of both the LCG library COOL (the library providing conditions handling functionalities) and the distribution framework itself. Stress tests on the CNAF hosted replica of the Conditions Database have been performed and the results will be summarized here.

  3. 9 CFR 381.191 - Distribution of inspected products to small lot buyers.

    Science.gov (United States)

    2010-01-01

    ... small lot buyers. 381.191 Section 381.191 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE...; Exportation; or Sale of Poultry or Poultry Products § 381.191 Distribution of inspected products to small lot... small lot buyers (such as small restaurants), distributors or jobbers may remove inspected and passed...

  4. Simultaneous Optimal Placement of Distributed Generation and Electric Vehicle Parking Lots Based on Probabilistic EV Model

    OpenAIRE

    M.H. Amini; M. Parsa Moghaddam

    2013-01-01

    High penetration of distributed generation and the increasing demand for electric vehicles pose many issues for utilities. If these two key elements of the future power system are used in an unscheduled manner, they may dramatically increase losses in distribution networks. In this paper, the simultaneous allocation of distributed generations (DGs) and electric vehicle (EV) parking lots is studied in a radial distribution network. A distribution netwo...

  5. Towards cloud-centric distributed database evaluation

    OpenAIRE

    Seybold, Daniel

    2016-01-01

    The rise of cloud computing has also pushed the evolution of distributed databases, resulting in a variety of distributed database systems, which can be classified into relational, NoSQL and NewSQL database systems. In general, all representatives of these database system classes claim to provide elasticity and "unlimited" horizontal scalability. As these characteristics comply with the cloud, distributed databases seem to be a perfect match for Database-as-a-Service (DBaaS) systems.

  7. Secure Distributed Databases Using Cryptography

    Directory of Open Access Journals (Sweden)

    Ion IVAN

    2006-01-01

    Computational encryption is used intensively by database management systems to ensure the privacy and integrity of information that is physically stored in files. This information is also sent over networks and replicated on different distributed systems. A satisfactory level of security is achieved if the rows and columns of tables are encrypted independently of the table or computer that holds the data. It is also very important that SQL (Structured Query Language) query requests and responses be encrypted over the network connection between the client and the database server. All these techniques and methods must be implemented by database administrators, designers and developers within a consistent security policy.
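
    The idea of encrypting each cell independently of the table or machine that holds it can be sketched as follows (a toy illustration only, NOT production cryptography and not the paper's scheme; the secret, identifiers, and keystream construction are all invented for this example):

```python
import hashlib

# Toy cell-level encryption: each table cell gets a keystream derived from a
# secret plus its row and column identifiers, so cells can be encrypted,
# stored, and shipped over the network independently of one another.

SECRET = b"demo-secret"  # hypothetical shared secret

def _keystream(row, col, n):
    # SHA-256 in counter mode over (secret, row, col) -- illustration only
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(SECRET + row.encode() + col.encode()
                              + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def encrypt_cell(row, col, plaintext):
    data = plaintext.encode()
    ks = _keystream(row, col, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

def decrypt_cell(row, col, ciphertext):
    ks = _keystream(row, col, len(ciphertext))
    return bytes(a ^ b for a, b in zip(ciphertext, ks)).decode()

ct = encrypt_cell("row42", "salary", "55000")
print(decrypt_cell("row42", "salary", ct))  # 55000
```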

  8. Secure Distributed Databases Using Cryptography

    OpenAIRE

    Ion IVAN; Cristian TOMA

    2006-01-01

    Computational encryption is used intensively by database management systems to ensure the privacy and integrity of information that is physically stored in files. This information is also sent over networks and replicated on different distributed systems. A satisfactory level of security is achieved if the rows and columns of tables are encrypted independently of the table or computer that holds the data. It is also very important that the SQL - Structured Que...

  9. Optimal pricing and lot-sizing decisions under Weibull distribution deterioration and trade credit policy

    Directory of Open Access Journals (Sweden)

    Manna S.K.

    2008-01-01

    In this paper, we consider the problem of simultaneous determination of retail price and lot-size (RPLS) under the assumption that the supplier offers a fixed credit period to the retailer. It is assumed that the item in stock deteriorates over time at a rate that follows a two-parameter Weibull distribution, and that the price-dependent demand is represented by a constant-price-elasticity function of retail price. The RPLS decision model is developed and solved analytically. Results are illustrated with the help of a base example. Computational results show that the supplier earns more profit when the credit period is longer than the replenishment cycle length. Sensitivity of the solution to changes in the input parameters of the base example is also discussed.
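
    A simplified numeric sketch of such a model can make the ingredients concrete (all parameter values and the exact profit expression below are assumptions for illustration, not taken from the paper): demand has constant price elasticity, D(p) = a * p^(-e), stock deteriorates at the two-parameter Weibull rate theta(t) = alpha * beta * t^(beta-1), and with inventory exhausted at time T the lot size is Q = D * integral from 0 to T of exp(alpha * t^beta) dt.

```python
import math

# Simplified RPLS-style sketch: grid search over retail price and cycle
# length, maximizing profit per unit time under Weibull deterioration.

a, e = 5000.0, 1.8        # demand scale and price elasticity (assumed)
alpha, beta = 0.05, 2.0   # Weibull deterioration parameters (assumed)
c, K = 4.0, 50.0          # unit cost and fixed ordering cost (assumed)

def lot_size(D, T, steps=100):
    # trapezoidal rule for integral_0^T exp(alpha * t**beta) dt
    h = T / steps
    s = 0.5 * (1.0 + math.exp(alpha * T**beta))
    for i in range(1, steps):
        s += math.exp(alpha * (i * h)**beta)
    return D * h * s

def profit_rate(p, T):
    # profit per unit time: revenue minus purchasing and ordering costs
    D = a * p**(-e)
    return (p * D * T - c * lot_size(D, T) - K) / T

# coarse grid search over price 5.0..19.9 and cycle length 0.5..3.9
best = max(((profit_rate(p / 10, T / 10), p / 10, T / 10)
            for p in range(50, 200) for T in range(5, 40)),
           key=lambda t: t[0])
print("optimal price %.1f, cycle length %.1f" % (best[1], best[2]))
```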

  10. Aspects of the design of distributed databases

    OpenAIRE

    Burlacu Irina-Andreea

    2011-01-01

    Distributed data is data that, while processed by one system, can be distributed among several computers yet accessed from any of them. A distributed database design problem is presented that involves the development of a global model, a fragmentation, and a data allocation. The student is given a conceptual entity-relationship model for the database and a description of the transactions and a generic network environment. A stepwise solution approach to this problem is shown, based on mean value a...

  11. Datamining on distributed medical databases

    DEFF Research Database (Denmark)

    Have, Anna Szynkowiak

    2004-01-01

    This Ph.D. thesis focuses on clustering techniques for Knowledge Discovery in Databases. Various data mining tasks relevant for medical applications are described and discussed. A general framework which combines data projection and data mining and interpretation is presented. An overview ... is available. If data is unlabeled, then it is possible to generate keywords (in case of textual data) or key-patterns, as an informative representation of the obtained clusters. The methods are applied on simple artificial data sets, as well as collections of textual and medical data.

  12. Distributed Database Management Systems A Practical Approach

    CERN Document Server

    Rahimi, Saeed K

    2010-01-01

    This book addresses issues related to managing data across a distributed database system. It is unique because it covers traditional database theory and current research, explaining the difficulties in providing a unified user interface and global data dictionary. The book gives implementers guidance on hiding discrepancies across systems and creating the illusion of a single repository for users. It also includes three sample frameworks, implemented using J2SE with JMS, J2EE, and Microsoft .NET, that readers can use to learn how to implement a distributed database management system. IT and

  13. Concurrency control in distributed database systems

    CERN Document Server

    Cellary, W; Gelenbe, E

    1989-01-01

    Distributed Database Systems (DDBS) may be defined as integrated database systems composed of autonomous local databases, geographically distributed and interconnected by a computer network. The purpose of this monograph is to present DDBS concurrency control algorithms and their related performance issues. The most recent results have been taken into consideration. A detailed analysis and selection of these results has been made so as to include those which will promote applications and progress in the field. The application of the methods and algorithms presented is not limited to DDBSs but a

  14. Heterogeneous distributed databases: A case study

    Science.gov (United States)

    Stewart, Tracy R.; Mukkamala, Ravi

    1991-01-01

    Alternatives are reviewed for accessing distributed heterogeneous databases and a recommended solution is proposed. The current study is limited to the Automated Information Systems Center at the Naval Sea Combat Systems Engineering Station at Norfolk, VA. This center maintains two databases located on Digital Equipment Corporation's VAX computers running under the VMS operating system. The first data base, ICMS, resides on a VAX11/780 and has been implemented using VAX DBMS, a CODASYL based system. The second database, CSA, resides on a VAX 6460 and has been implemented using the ORACLE relational database management system (RDBMS). Both databases are used for configuration management within the U.S. Navy. Different customer bases are supported by each database. ICMS tracks U.S. Navy ships and major systems (anti-sub, sonar, etc.). Even though the major systems on ships and submarines have totally different functions, some of the equipment within the major systems are common to both ships and submarines.

  15. The ATLAS Distributed Data Management System & Databases

    CERN Document Server

    Garonne, V; The ATLAS collaboration; Barisits, M; Beermann, T; Vigne, R; Serfon, C

    2013-01-01

    The ATLAS Distributed Data Management (DDM) System is responsible for the global management of petabytes of high energy physics data. The current system, DQ2, has a critical dependency on Relational Database Management Systems (RDBMS), like Oracle. RDBMS are well-suited to enforcing data integrity in online transaction processing applications, however, concerns have been raised about the scalability of its data warehouse-like workload. In particular, analysis of archived data or aggregation of transactional data for summary purposes is problematic. Therefore, we have evaluated new approaches to handle vast amounts of data. We have investigated a class of database technologies commonly referred to as NoSQL databases. This includes distributed filesystems, like HDFS, that support parallel execution of computational tasks on distributed data, as well as schema-less approaches via key-value stores, like HBase. In this talk we will describe our use cases in ATLAS, share our experiences with various databases used ...

  16. Associations with HIV testing in Uganda: an analysis of the Lot Quality Assurance Sampling database 2003-2012.

    Science.gov (United States)

    Jeffery, Caroline; Beckworth, Colin; Hadden, Wilbur C; Ouma, Joseph; Lwanga, Stephen K; Valadez, Joseph J

    2016-01-01

    Beginning in 2003, Uganda used Lot Quality Assurance Sampling (LQAS) to assist district managers in collecting and using data to improve their human immunodeficiency virus (HIV)/AIDS program. Uganda's LQAS-database (2003-2012) covers up to 73 of 112 districts. Our multidistrict analysis of the LQAS data-set in 2003-2004 and 2012 examined gender variation among adults who ever tested for HIV over time, and attributes associated with testing. Conditional logistic regression matched men and women by community with seven model effect variables. HIV testing prevalence rose from 14% (men) and 12% (women) in 2003-2004 to 62% (men) and 80% (women) in 2012. In 2003-2004, knowing the benefits of testing (Odds Ratio [OR] = 6.09, 95% CI = 3.01-12.35), knowing where to get tested (OR = 2.83, 95% CI = 1.44-5.56), and secondary education (OR = 3.04, 95% CI = 1.19-7.77) were significantly associated with HIV testing. By 2012, knowing the benefits of testing (OR = 3.63, 95% CI = 2.25-5.83), where to get tested (OR = 5.15, 95% CI = 3.26-8.14), primary education (OR = 2.01, 95% CI = 1.39-2.91), being female (OR = 3.03, 95% CI = 2.53-3.62), and being married (OR = 1.81, 95% CI = 1.17-2.8) were significantly associated with HIV testing. HIV testing prevalence in Uganda has increased dramatically, more for women than men. Our results concur with other authors' findings that education, knowledge of HIV, and marriage (women only) are associated with testing for HIV, and suggest that couples testing is more prevalent than other authors have reported.
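
    The standard calculation behind figures such as "OR = 3.03, 95% CI = 2.53-3.62" can be sketched from a 2x2 table (the counts below are hypothetical, not taken from the Uganda LQAS data-set):

```python
import math

# Odds ratio and Wald 95% confidence interval from a hypothetical 2x2 table.
#               tested   not tested
exposed     = (   120,         80)   # e.g. knows where to get tested
unexposed   = (    60,        140)

a, b = exposed
c, d = unexposed
odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR), Wald method
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print("OR = %.2f, 95%% CI = %.2f-%.2f" % (odds_ratio, lo, hi))
# OR = 3.50, 95% CI = 2.31-5.30
```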

  17. DAD - Distributed Adamo Database system at Hermes

    International Nuclear Information System (INIS)

    Wander, W.; Dueren, M.; Ferstl, M.; Green, P.; Potterveld, D.; Welch, P.

    1996-01-01

    Software development for the HERMES experiment faces the challenges of many other experiments in modern High Energy Physics: complex data structures and relationships have to be processed at high I/O rates. Experimental control and data analysis are done in a distributed environment of CPUs with various operating systems and require access to different time-dependent databases, such as calibration and geometry. Slow control and experimental control need flexible inter-process communication. Program development is done in different programming languages, where interfaces to the libraries should not restrict the capabilities of the language. The need to handle complex data structures is fulfilled by the ADAMO entity-relationship model. Mixed-language programming can be provided using the CFORTRAN package. DAD, the Distributed ADAMO Database library, was developed to provide the required I/O and database functionality. (author)

  18. Distributed Access View Integrated Database (DAVID) system

    Science.gov (United States)

    Jacobs, Barry E.

    1991-01-01

    The Distributed Access View Integrated Database (DAVID) System, which was adopted by the Astrophysics Division for their Astrophysics Data System, is a solution to the system heterogeneity problem. The heterogeneous components of the Astrophysics problem are outlined. The Library and Library Consortium levels of the DAVID approach are described, as is the 'books' and 'kits' level. The Universal Object Typer Management System level is described. The relation of the DAVID project to the Small Business Innovative Research (SBIR) program is explained.

  19. The design of distributed database system for HIRFL

    International Nuclear Information System (INIS)

    Wang Hong; Huang Xinmin

    2004-01-01

    This paper focuses on the distributed database system used in the HIRFL distributed control system. The database is built with SQL Server 2000, and its application system adopts the client/server model. Visual C++ is used to develop the applications, which access the database through ODBC. (authors)

  20. The ATLAS TAGS database distribution and management - Operational challenges of a multi-terabyte distributed database

    Energy Technology Data Exchange (ETDEWEB)

    Viegas, F; Nairz, A; Goossens, L [CERN, CH-1211 Geneve 23 (Switzerland); Malon, D; Cranshaw, J [Argonne National Laboratory, 9700 S. Cass Avenue, Argonne, IL 60439 (United States); Dimitrov, G [DESY, D-22603 Hamburg (Germany); Nowak, M; Gamboa, C [Brookhaven National Laboratory, PO Box 5000 Upton, NY 11973-5000 (United States); Gallas, E [University of Oxford, Denys Wilkinson Building, Keble Road, Oxford OX1 3RH (United Kingdom); Wong, A [Triumf, 4004 Wesbrook Mall, Vancouver, BC, V6T 2A3 (Canada); Vinek, E [University of Vienna, Dr.-Karl-Lueger-Ring 1, 1010 Vienna (Austria)

    2010-04-01

    The TAG files store summary event quantities that allow a quick selection of interesting events. This data will be produced at a nominal rate of 200 Hz, and is uploaded into a relational database for access from websites and other tools. The estimated database volume is 6TB per year, making it the largest application running on the ATLAS relational databases, at CERN and at other voluntary sites. The sheer volume and high rate of production makes this application a challenge to data and resource management, in many aspects. This paper will focus on the operational challenges of this system. These include: uploading the data from files to the CERN's and remote sites' databases; distributing the TAG metadata that is essential to guide the user through event selection; controlling resource usage of the database, from the user query load to the strategy of cleaning and archiving of old TAG data.

  1. The ATLAS TAGS database distribution and management - Operational challenges of a multi-terabyte distributed database

    International Nuclear Information System (INIS)

    Viegas, F; Nairz, A; Goossens, L; Malon, D; Cranshaw, J; Dimitrov, G; Nowak, M; Gamboa, C; Gallas, E; Wong, A; Vinek, E

    2010-01-01

    The TAG files store summary event quantities that allow a quick selection of interesting events. This data will be produced at a nominal rate of 200 Hz, and is uploaded into a relational database for access from websites and other tools. The estimated database volume is 6TB per year, making it the largest application running on the ATLAS relational databases, at CERN and at other voluntary sites. The sheer volume and high rate of production makes this application a challenge to data and resource management, in many aspects. This paper will focus on the operational challenges of this system. These include: uploading the data from files to the CERN's and remote sites' databases; distributing the TAG metadata that is essential to guide the user through event selection; controlling resource usage of the database, from the user query load to the strategy of cleaning and archiving of old TAG data.

  2. Optimistic protocol for partitioned distributed database systems

    International Nuclear Information System (INIS)

    Davidson, S.B.

    1982-01-01

    A protocol for transaction processing during partition failures is presented which guarantees mutual consistency between copies of data-items after repair is completed. The protocol is optimistic in that transactions are processed without restrictions during the failure; conflicts are detected at repair time using a precedence graph and are resolved by backing out transactions according to some backout strategy. The protocol is then evaluated using simulation and probabilistic modeling. In the simulation, several parameters are varied such as the number of transactions processed in a group, the type of transactions processed, the number of data-items present in the database, and the distribution of references to data-items. The simulation also uses different backout strategies. From these results we note conditions under which the protocol performs well, i.e., conditions under which the protocol backs out a small percentage of the transaction run. A probabilistic model is developed to estimate the expected number of transactions backed out using most of the above database and transaction parameters, and is shown to agree with simulation results. Suggestions are then made on how to improve the performance of the protocol. Insights gained from the simulation and probabilistic modeling are used to develop a backout strategy which takes into account individual transaction costs and attempts to minimize total backout cost. Although the problem of choosing transactions to minimize total backout cost is, in general, NP-complete, the backout strategy is efficient and produces very good results
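
    The repair-time conflict detection can be sketched as follows (a much-simplified stand-in for the paper's precedence-graph analysis; the pairwise conflict test and the "keep one partition's history" backout strategy are invented for illustration):

```python
# Optimistic partition repair, simplified: transactions run unrestricted in
# two partitions; at repair time, cross-partition conflicts are detected and
# resolved by backing out transactions from one side.

def conflicts(t1, t2):
    # conflict: one transaction wrote an item the other read or wrote
    return bool(t1["writes"] & (t2["reads"] | t2["writes"]) or
                t2["writes"] & (t1["reads"] | t1["writes"]))

def repair(partition_a, partition_b):
    # naive backout strategy: keep partition A's history intact and back out
    # every B transaction that conflicts with some A transaction
    backed_out = []
    for tb in partition_b:
        if any(conflicts(ta, tb) for ta in partition_a):
            backed_out.append(tb["name"])
    return backed_out

A = [{"name": "T1", "reads": {"x"}, "writes": {"y"}}]
B = [{"name": "T2", "reads": {"y"}, "writes": {"z"}},
     {"name": "T3", "reads": {"w"}, "writes": {"w"}}]
print(repair(A, B))  # ['T2'] -- T2 read y, which T1 wrote in the other partition
```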

  3. Distributed Database Access in the LHC Computing Grid with CORAL

    CERN Document Server

    Molnár, Z; Düllmann, D; Giacomo, G; Kalkhof, A; Valassi, A; CERN. Geneva. IT Department

    2009-01-01

    The CORAL package is the LCG Persistency Framework foundation for accessing relational databases. From the start CORAL has been designed to facilitate the deployment of the LHC experiment database applications in a distributed computing environment. In particular we cover - improvements to database service scalability by client connection management - platform-independent, multi-tier scalable database access by connection multiplexing, caching - a secure authentication and authorisation scheme integrated with existing grid services. We will summarize the deployment experience from several experiment productions using the distributed database infrastructure, which is now available in LCG. Finally, we present perspectives for future developments in this area.

  4. ISSUES IN MOBILE DISTRIBUTED REAL TIME DATABASES: PERFORMANCE AND REVIEW

    OpenAIRE

    VISHNU SWAROOP,; Gyanendra Kumar Gupta,; UDAI SHANKER

    2011-01-01

    The increase in small, handy electronic devices in computing has made computing more popular and useful in business. Tremendous advances in wireless networks and portable computing devices have led to the development of mobile computing. Support for real-time database systems depends upon timing constraints; the availability of distributed data and ubiquitous computing drive the mobile database concept, which emerges as a new form of technology, the mobile distributed ...

  5. Development of database on the distribution coefficient. 2. Preparation of database

    Energy Technology Data Exchange (ETDEWEB)

    Takebe, Shinichi; Abe, Masayoshi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2001-03-01

    The distribution coefficient is a very important parameter for environmental impact assessment of the disposal of radioactive waste arising from research institutes. The 'Database on the Distribution Coefficient' was built up from information obtained by a domestic literature survey covering items such as the value, measuring method and measurement conditions of the distribution coefficient, in order to select reasonable distribution coefficient values for use in safety evaluation. This report explains the outline of the preparation of this database and serves as a user guide for it. (author)

  7. A Simulation Tool for Distributed Databases.

    Science.gov (United States)

    1981-09-01

    Reed's multiversion system [RE1T8] may also be viewed as updating only copies until the commit is made. The decision to make the changes... distributed voting, and Ellis' ring algorithm. Other, significantly different algorithms not covered in his work include Reed's multiversion algorithm, the

  8. A Methodology for Distributing the Corporate Database.

    Science.gov (United States)

    McFadden, Fred R.

    The trend to distributed processing is being fueled by numerous forces, including advances in technology, corporate downsizing, increasing user sophistication, and acquisitions and mergers. Increasingly, the trend in corporate information systems (IS) departments is toward sharing resources over a network of multiple types of processors, operating…

  9. Distributed MDSplus database performance with Linux clusters

    International Nuclear Information System (INIS)

    Minor, D.H.; Burruss, J.R.

    2006-01-01

    The staff at the DIII-D National Fusion Facility, operated for the USDOE by General Atomics, are investigating the use of grid computing and Linux technology to improve performance in our core data management services. We are in the process of converting much of our functionality to cluster-based and grid-enabled software. One of the most important pieces is a new distributed version of the MDSplus scientific data management system that is presently used to support fusion research in over 30 countries worldwide. To improve data handling performance, the staff is investigating the use of Linux clusters for both data clients and servers. The new distributed capability will result in better load balancing between these clients and servers, and more efficient use of network resources resulting in improved support of the data analysis needs of the scientific staff

  10. PRISMA database machine: A distributed, main-memory approach

    NARCIS (Netherlands)

    Schmidt, J.W.; Apers, Peter M.G.; Ceri, S.; Kersten, Martin L.; Oerlemans, Hans C.M.; Missikoff, M.

    1988-01-01

    The PRISMA project is a large-scale research effort in the design and implementation of a highly parallel machine for data and knowledge processing. The PRISMA database machine is a distributed, main-memory database management system implemented in an object-oriented language that runs on top of a

  11. Comparison of the Frontier Distributed Database Caching System with NoSQL Databases

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Non-relational "NoSQL" databases such as Cassandra and CouchDB are best known for their ability to scale to large numbers of clients spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects, is based on traditional SQL databases but also has the same high scalability and wide-area distributability for an important subset of applications. This paper compares the architectures, behavior, performance, and maintainability of the two different approaches and identifies the criteria for choosing which approach to prefer over the other.
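
    The mechanism that gives Frontier its scalability is read-through caching of query results with expiry. A minimal sketch of that idea (class names, TTL, and the stand-in backend are invented; this is not Frontier's actual implementation, which caches over HTTP proxies):

```python
import time

# Read-through query cache with expiry: repeated identical queries are
# served from cache, shielding the central SQL database from large numbers
# of distributed readers.

class QueryCache:
    def __init__(self, backend, ttl_seconds=300.0):
        self.backend = backend        # function: query -> result
        self.ttl = ttl_seconds
        self.store = {}               # query -> (expiry_time, result)
        self.misses = 0

    def get(self, query):
        now = time.monotonic()
        hit = self.store.get(query)
        if hit and hit[0] > now:
            return hit[1]             # served from cache, backend untouched
        self.misses += 1
        result = self.backend(query)
        self.store[query] = (now + self.ttl, result)
        return result

cache = QueryCache(lambda q: q.upper())   # stand-in for a real database call
cache.get("select 1"); cache.get("select 1"); cache.get("select 2")
print(cache.misses)  # 2 -- the repeated query was a cache hit
```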

  12. Distributed Database Control and Allocation. Volume 3. Distributed Database System Designer’s Handbook.

    Science.gov (United States)

    1983-10-01

    Multiversion Data 2-18; 2.7.1 Multiversion Timestamping 2-20; 2.7.2 Multiversion Locking 2-20; 2.8 Combining the Techniques 2-22; 3. Database Recovery Algorithms ... See [THEM79, GIFF79] for details. 2.7 Multiversion Data: Let us return to a database system model where each logical data item is stored at one DM ... In a multiversion database, each Write wi[x] produces a new copy (or version) of x, denoted xi. Thus, the value of x is a set of versions. For each
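
    The multiversion idea mentioned in the excerpt can be sketched in a few lines (a toy illustration, not the handbook's algorithms): each write of x creates a new timestamped version instead of overwriting, and a read at timestamp t sees the latest version written at or before t.

```python
import bisect

# Toy multiversion data item: writes append versions, reads pick by timestamp.
class MultiversionItem:
    def __init__(self):
        self.timestamps = []   # sorted write timestamps
        self.versions = []     # value written at each timestamp

    def write(self, ts, value):
        i = bisect.bisect(self.timestamps, ts)
        self.timestamps.insert(i, ts)
        self.versions.insert(i, value)

    def read(self, ts):
        # latest version written at or before ts, or None if none exists
        i = bisect.bisect(self.timestamps, ts)
        return self.versions[i - 1] if i else None

x = MultiversionItem()
x.write(1, "v1"); x.write(5, "v2")
print(x.read(3))  # v1 -- a read at ts=3 still sees the version written at ts=1
```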

  13. Comparison of the Frontier Distributed Database Caching System to NoSQL Databases

    Science.gov (United States)

    Dykstra, Dave

    2012-12-01

    One of the main attractions of non-relational “NoSQL” databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also adds high scalability and the ability to be distributed over a wide-area for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.

  14. Comparison of the Frontier Distributed Database Caching System to NoSQL Databases

    International Nuclear Information System (INIS)

    Dykstra, Dave

    2012-01-01

    One of the main attractions of non-relational “NoSQL” databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also adds high scalability and the ability to be distributed over a wide-area for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.

  15. Comparison of the Frontier Distributed Database Caching System with NoSQL Databases

    CERN Document Server

    Dykstra, David

    2012-01-01

    One of the main attractions of non-relational "NoSQL" databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also has high scalability and wide-area distributability for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.

  16. Comparison of the Frontier Distributed Database Caching System to NoSQL Databases

    Energy Technology Data Exchange (ETDEWEB)

    Dykstra, Dave [Fermilab

    2012-07-20

    One of the main attractions of non-relational NoSQL databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also adds high scalability and the ability to be distributed over a wide-area for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.

  17. Klaim-DB: A Modeling Language for Distributed Database Applications

    DEFF Research Database (Denmark)

    Wu, Xi; Li, Ximeng; Lluch Lafuente, Alberto

    2015-01-01

    We present the modelling language Klaim-DB for distributed database applications. Klaim-DB borrows the distributed nets of the coordination language Klaim but essentially re-incarnates the tuple spaces of Klaim as databases, and provides high-level language abstractions for the access and manipulation of structured data, with integrity and atomicity considerations. We present the formal semantics of Klaim-DB and illustrate the use of the language in a scenario where the sales from different branches of a chain of department stores are aggregated from their local databases. It can be seen that raising the abstraction level and encapsulating integrity checks (concerning the schema of tables, etc.) in the language primitives for database operations benefit the modelling task considerably.

  18. A Database Approach to Distributed State Space Generation

    NARCIS (Netherlands)

    Blom, Stefan; Lisser, Bert; van de Pol, Jan Cornelis; Weber, M.

    2007-01-01

    We study distributed state space generation on a cluster of workstations. It is explained why state space partitioning by a global hash function is problematic when states contain variables from unbounded domains, such as lists or other recursive datatypes. Our solution is to introduce a database

  19. A Database Approach to Distributed State Space Generation

    NARCIS (Netherlands)

    Blom, Stefan; Lisser, Bert; van de Pol, Jan Cornelis; Weber, M.; Cerna, I.; Haverkort, Boudewijn R.H.M.

    2008-01-01

    We study distributed state space generation on a cluster of workstations. It is explained why state space partitioning by a global hash function is problematic when states contain variables from unbounded domains, such as lists or other recursive datatypes. Our solution is to introduce a database

  20. Development of database on the distribution coefficient. 1. Collection of the distribution coefficient data

    Energy Technology Data Exchange (ETDEWEB)

    Takebe, Shinichi; Abe, Masayoshi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2001-03-01

    The distribution coefficient is a very important parameter for environmental impact assessment of the disposal of radioactive waste arising from research institutes. A literature survey within the country was carried out with the aim of selecting reasonable distribution coefficient values for use in safety evaluations. This report organizes, for each literature source, the extensive information on distribution coefficients to be input into the database, and summarizes it as literature information data on the distribution coefficient. (author)

  1. A distributed database view of network tracking systems

    Science.gov (United States)

    Yosinski, Jason; Paffenroth, Randy

    2008-04-01

    In distributed tracking systems, multiple non-collocated trackers cooperate to fuse local sensor data into a global track picture. Generating this global track picture at a central location is fairly straightforward, but the single point of failure and excessive bandwidth requirements introduced by centralized processing motivate the development of decentralized methods. In many decentralized tracking systems, trackers communicate with their peers via a lossy, bandwidth-limited network in which dropped, delayed, and out of order packets are typical. Oftentimes the decentralized tracking problem is viewed as a local tracking problem with a networking twist; we believe this view can underestimate the network complexities to be overcome. Indeed, a subsequent 'oversight' layer is often introduced to detect and handle track inconsistencies arising from a lack of robustness to network conditions. We instead pose the decentralized tracking problem as a distributed database problem, enabling us to draw inspiration from the vast extant literature on distributed databases. Using the two-phase commit algorithm, a well known technique for resolving transactions across a lossy network, we describe several ways in which one may build a distributed multiple hypothesis tracking system from the ground up to be robust to typical network intricacies. We pay particular attention to the dissimilar challenges presented by network track initiation vs. maintenance and suggest a hybrid system that balances speed and robustness by utilizing two-phase commit for only track initiation transactions. Finally, we present simulation results contrasting the performance of such a system with that of more traditional decentralized tracking implementations.
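
The two-phase commit approach described in this abstract can be sketched minimally as follows; the `Peer`/`Coordinator` class names and the single-field track model are illustrative assumptions, not from the paper.

```python
# Minimal two-phase commit sketch: a coordinator asks every peer to vote
# on a proposed track-initiation transaction, and commits only if all
# peers vote yes; otherwise the transaction is aborted everywhere.
# Names (Peer, Coordinator) are illustrative, not from the paper.

class Peer:
    def __init__(self, name):
        self.name = name
        self.tracks = set()
        self._pending = None

    def prepare(self, track_id):
        # Vote yes only if the track is not already known locally.
        if track_id in self.tracks:
            return False
        self._pending = track_id
        return True

    def commit(self):
        if self._pending is not None:
            self.tracks.add(self._pending)
            self._pending = None

    def abort(self):
        self._pending = None

class Coordinator:
    def __init__(self, peers):
        self.peers = peers

    def initiate_track(self, track_id):
        # Phase 1: collect votes from all peers.
        votes = [p.prepare(track_id) for p in self.peers]
        # Phase 2: commit everywhere, or abort everywhere.
        if all(votes):
            for p in self.peers:
                p.commit()
            return True
        for p in self.peers:
            p.abort()
        return False

peers = [Peer("A"), Peer("B"), Peer("C")]
coord = Coordinator(peers)
assert coord.initiate_track("T1")          # all peers accept
peers[1].tracks.add("T2")                  # peer B already knows T2
assert not coord.initiate_track("T2")      # so initiation aborts
```

Restricting this protocol to track initiation only, as the paper suggests, keeps the slower voting round off the latency-critical track-maintenance path.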

  2. Research and Implementation of Distributed Database HBase Monitoring System

    Directory of Open Access Journals (Sweden)

    Guo Lisi

    2017-01-01

    With the arrival of the big data age, the distributed database HBase has become an important tool for storing data at massive scale. The normal operation of the HBase database is an important guarantee of the security of data storage, so designing a reasonable HBase monitoring system is of great practical significance. In this article, we introduce a solution, comprising performance-monitoring and fault-alarm function modules, that meets an operator's requirements for HBase database monitoring in actual production projects. We designed a monitoring system which consists of a flexible and extensible monitoring agent, a monitoring server based on the SSM architecture, and a concise monitoring display layer. Moreover, to deal with pages rendering too slowly in actual operation, we present a solution: reducing the number of SQL queries. It has been proved that reducing SQL queries can effectively improve system performance and user experience. The system works well in monitoring the status of the HBase database, flexibly extending the monitoring indices, and issuing a warning when a fault occurs, so that it improves the working efficiency of the administrator and ensures the smooth operation of the project.
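
The query-reduction idea from this abstract can be illustrated with a tiny sketch: batch many per-row lookups into one query so a page render issues one statement instead of N. The in-memory `MetricsStore` standing in for the database is an assumption for demonstration.

```python
# Illustrative sketch of "reducing the SQL query": render a monitoring
# page with one batched query instead of one query per region. The fake
# in-memory "database" below is an assumption, not the paper's system.

class MetricsStore:
    def __init__(self, rows):
        self.rows = rows          # region -> metric value
        self.query_count = 0

    def query_one(self, region):
        self.query_count += 1     # one SQL statement per call
        return self.rows[region]

    def query_many(self, regions):
        self.query_count += 1     # a single batched SQL statement
        return {r: self.rows[r] for r in regions}

def render_naive(store, regions):
    return {r: store.query_one(r) for r in regions}

def render_batched(store, regions):
    return store.query_many(regions)

rows = {"r1": 10, "r2": 20, "r3": 30}
naive = MetricsStore(rows)
render_naive(naive, list(rows))
batched = MetricsStore(rows)
render_batched(batched, list(rows))
assert naive.query_count == 3 and batched.query_count == 1
```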

  3. Distributed data collection for a database of radiological image interpretations

    Science.gov (United States)

    Long, L. Rodney; Ostchega, Yechiam; Goh, Gin-Hua; Thoma, George R.

    1997-01-01

    The National Library of Medicine, in collaboration with the National Center for Health Statistics and the National Institute for Arthritis and Musculoskeletal and Skin Diseases, has built a system for collecting radiological interpretations for a large set of x-ray images acquired as part of the data gathered in the second National Health and Nutrition Examination Survey. This system is capable of delivering across the Internet 5- and 10-megabyte x-ray images to Sun workstations equipped with X Window based 2048 X 2560 image displays, for the purpose of having these images interpreted for the degree of presence of particular osteoarthritic conditions in the cervical and lumbar spines. The collected interpretations can then be stored in a database at the National Library of Medicine, under control of the Illustra DBMS. This system is a client/server database application which integrates (1) distributed server processing of client requests, (2) a customized image transmission method for faster Internet data delivery, (3) distributed client workstations with high resolution displays, image processing functions and an on-line digital atlas, and (4) relational database management of the collected data.

  4. Income distribution patterns from a complete social security database

    Science.gov (United States)

    Derzsy, N.; Néda, Z.; Santos, M. A.

    2012-11-01

    We analyze the income distribution of employees for 9 consecutive years (2001-2009) using a complete social security database for an economically important district of Romania. The database contains detailed information on more than half a million taxpayers, including their monthly salaries from all employers where they worked. Besides studying the characteristic distribution functions in the high and low/medium income limits, the database allows us a detailed dynamical study by following the time-evolution of the taxpayers' income. To our knowledge, this is the first extensive study of this kind (a previous Japanese taxpayers survey was limited to two years). In the high income limit we prove once again the validity of Pareto's law, obtaining a perfect scaling on four orders of magnitude in the rank for all the studied years. The obtained Pareto exponents are quite stable with values around α≈2.5, in spite of the fact that during this period the economy developed rapidly and also a financial-economic crisis hit Romania in 2007-2008. For the low and medium income category we confirmed the exponential-type income distribution. Following the income of employees in time, we have found that the top limit of the income distribution is a highly dynamical region with strong fluctuations in the rank. In this region, the observed dynamics is consistent with a multiplicative random growth hypothesis. Contrary to previous results obtained for the Japanese employees, we find that the logarithmic growth-rate is not independent of the income.
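
Estimating a Pareto tail exponent like the α≈2.5 reported above can be sketched with the standard Hill estimator on synthetic data; the numbers here are made up and have no connection to the Romanian dataset.

```python
import math
import random

# Hedged sketch: fit a Pareto tail exponent with the Hill estimator,
# alpha_hat = n / sum(ln(x_i / x_min)), on synthetic samples only.

def pareto_samples(alpha, x_min, n, seed=0):
    rng = random.Random(seed)
    # Inverse-transform sampling from P(X > x) = (x_min / x)**alpha;
    # use 1 - u so the base is in (0, 1] and never exactly zero.
    return [x_min * (1.0 - rng.random()) ** (-1.0 / alpha) for _ in range(n)]

def hill_estimator(samples, x_min):
    logs = [math.log(x / x_min) for x in samples if x >= x_min]
    return len(logs) / sum(logs)

data = pareto_samples(alpha=2.5, x_min=1000.0, n=50000)
alpha_hat = hill_estimator(data, x_min=1000.0)
assert abs(alpha_hat - 2.5) < 0.1   # recovers the true exponent closely
```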

  5. Software for Distributed Computation on Medical Databases: A Demonstration Project

    Directory of Open Access Journals (Sweden)

    Balasubramanian Narasimhan

    2017-05-01

    Bringing together the information latent in distributed medical databases promises to personalize medical care by enabling reliable, stable modeling of outcomes with rich feature sets (including patient characteristics and treatments received). However, there are barriers to aggregation of medical data, due to lack of standardization of ontologies, privacy concerns, proprietary attitudes toward data, and a reluctance to give up control over end use. Aggregation of data is not always necessary for model fitting. In models based on maximizing a likelihood, the computations can be distributed, with aggregation limited to the intermediate results of calculations on local data, rather than raw data. Distributed fitting is also possible for singular value decomposition. There has been work on the technical aspects of shared computation for particular applications, but little has been published on the software needed to support the "social networking" aspect of shared computing, to reduce the barriers to collaboration. We describe a set of software tools that allow the rapid assembly of a collaborative computational project, based on the flexible and extensible R statistical software and other open source packages, that can work across a heterogeneous collection of database environments, with full transparency to allow local officials concerned with privacy protections to validate the safety of the method. We describe the principles, architecture, and successful test results for the site-stratified Cox model and rank-k singular value decomposition.
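
The abstract's central point, that likelihood-based fitting needs only intermediate results rather than raw data, can be shown with a minimal example: each site shares only sufficient statistics, from which the global MLE is assembled. The exponential-rate model and the site data below are made up for illustration.

```python
# Distributed maximum likelihood without data aggregation: each site
# reveals only (count, sum), never individual records, yet the pooled
# MLE of an exponential rate equals the centralized one exactly.

def local_summary(site_data):
    # The only values a site ever transmits.
    return len(site_data), sum(site_data)

def pooled_rate_mle(summaries):
    n = sum(count for count, _ in summaries)
    total = sum(s for _, s in summaries)
    return n / total        # exponential-rate MLE: N / sum(x)

site_a = [1.0, 2.0, 3.0]
site_b = [2.0, 2.0]
distributed = pooled_rate_mle([local_summary(site_a), local_summary(site_b)])
centralized = len(site_a + site_b) / sum(site_a + site_b)
assert abs(distributed - centralized) < 1e-12   # identical estimates
```

The site-stratified Cox model in the paper follows the same pattern with richer intermediate results (score vectors and information matrices) in place of (count, sum).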

  6. Study on parallel and distributed management of RS data based on spatial database

    Science.gov (United States)

    Chen, Yingbiao; Qian, Qinglan; Wu, Hongqiao; Liu, Shijin

    2009-10-01

    With the rapid development of current earth-observing technology, RS image data storage, management and information publication have become a bottleneck for its application and popularization. There are two prominent problems in RS image data storage and management systems. First, the background server can hardly handle the heavy processing of the great volume of RS data stored at different nodes in a distributed environment; a heavy burden is placed on the background server. Second, there is no unique, standard and rational organization of multi-sensor RS data for storage and management, and much information is lost or not included at storage time. Facing these two problems, this paper puts forward a framework for a parallel and distributed RS image data management and storage system. This system aims at an RS data information system based on a parallel background server and a distributed data management system. Toward these two goals, this paper studies the following key techniques and draws some instructive conclusions. The paper puts forward a solid index of "Pyramid, Block, Layer, Epoch" according to the properties of RS image data. With this solid index mechanism, a rational organization of multi-sensor RS image data of different resolutions, areas, bands and periods is achieved. For data storage, RS data is not divided into binary large objects to be stored in a current relational database system; instead, it is reconstructed through the above solid index mechanism, and a logical image database for the RS image data files is constructed. In system architecture, this paper sets up a framework based on a parallel server of several common computers. Under this framework, the background process is divided into two parts: the common web process and the parallel process.
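
A "Pyramid, Block, Layer, Epoch" style composite key can be sketched as a tiny tile index; the field names and the in-memory dict standing in for the logical image database are assumptions for illustration, not the paper's implementation.

```python
from collections import namedtuple

# Composite key over resolution level (pyramid), spatial block, band
# (layer) and acquisition period (epoch), in the spirit of the solid
# index described above. All names are illustrative.

TileKey = namedtuple("TileKey", "pyramid block layer epoch")

class TileIndex:
    def __init__(self):
        self._tiles = {}

    def put(self, pyramid, block, layer, epoch, data):
        self._tiles[TileKey(pyramid, block, layer, epoch)] = data

    def get(self, pyramid, block, layer, epoch):
        return self._tiles.get(TileKey(pyramid, block, layer, epoch))

    def blocks_at(self, pyramid, layer):
        # All spatial blocks stored for one resolution level and band.
        return sorted(k.block for k in self._tiles
                      if k.pyramid == pyramid and k.layer == layer)

idx = TileIndex()
idx.put(pyramid=2, block=7, layer="NIR", epoch="2009-06", data=b"...")
idx.put(pyramid=2, block=8, layer="NIR", epoch="2009-06", data=b"...")
assert idx.get(2, 7, "NIR", "2009-06") == b"..."
assert idx.blocks_at(2, "NIR") == [7, 8]
```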

  7. Distributed database kriging for adaptive sampling (D2KAS)

    International Nuclear Information System (INIS)

    Roehm, Dominic; Pavel, Robert S.; Barros, Kipton; Rouet-Leduc, Bertrand; McPherson, Allen L.; Germann, Timothy C.; Junghans, Christoph

    2015-01-01

    We present an adaptive sampling method supplemented by a distributed database and a prediction method for multiscale simulations using the Heterogeneous Multiscale Method. A finite-volume scheme integrates the macro-scale conservation laws for elastodynamics, which are closed by momentum and energy fluxes evaluated at the micro-scale. In the original approach, molecular dynamics (MD) simulations are launched for every macro-scale volume element. Our adaptive sampling scheme replaces a large fraction of costly micro-scale MD simulations with fast table lookup and prediction. The cloud database Redis provides the plain table lookup, and with locality aware hashing we gather input data for our prediction scheme. For the latter we use kriging, which estimates an unknown value and its uncertainty (error) at a specific location in parameter space by using weighted averages of the neighboring points. We find that our adaptive scheme significantly improves simulation performance by a factor of 2.5 to 25, while retaining high accuracy for various choices of the algorithm parameters
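
The lookup-plus-prediction step above can be caricatured with a tiny weighted-average predictor. This uses inverse-distance weights as a simplified stand-in for kriging (no covariance model is fitted), and the sample "database" of (point, value) pairs is made up.

```python
import math

# Exact hits behave like the plain table lookup; otherwise the value is
# predicted as a weighted average of stored neighbors. Inverse-distance
# weighting is a deliberate simplification of kriging.

def predict(db, query, eps=1e-12):
    weights, values = [], []
    for point, value in db:
        d = math.dist(point, query)
        if d < eps:
            return value            # exact hit: plain table lookup
        weights.append(1.0 / d)
        values.append(value)
    total = sum(weights)
    return sum(w * v for w, v in zip(weights, values)) / total

db = [((0.0, 0.0), 1.0), ((1.0, 0.0), 3.0)]
assert predict(db, (0.0, 0.0)) == 1.0          # cached exact result
mid = predict(db, (0.5, 0.0))                  # interpolated estimate
assert abs(mid - 2.0) < 1e-9                   # equidistant -> mean
```

Full kriging additionally returns the prediction uncertainty, which is what lets the adaptive scheme decide when to fall back to a real MD simulation.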

  8. A Survey on Distributed Mobile Database and Data Mining

    Science.gov (United States)

    Goel, Ajay Mohan; Mangla, Neeraj; Patel, R. B.

    2010-11-01

    The anticipated increase in popular use of the Internet has created more opportunities in information dissemination, e-commerce, and multimedia communication. It has also created more challenges in organizing information and facilitating its efficient retrieval. In response, new techniques have evolved which facilitate the creation of such applications. Certainly the most promising among the new paradigms is the use of mobile agents. In this paper, mobile agent and distributed database technologies are applied to the banking system. Many approaches have been proposed to schedule data items for broadcasting in a mobile environment. Here, an efficient strategy for accessing multiple data items in mobile environments is proposed, addressing the bottleneck of current banking.

  9. Wide-area-distributed storage system for a multimedia database

    Science.gov (United States)

    Ueno, Masahiro; Kinoshita, Shigechika; Kuriki, Makato; Murata, Setsuko; Iwatsu, Shigetaro

    1998-12-01

    We have developed a wide-area-distributed storage system for multimedia databases, which minimizes the possibility of simultaneous failure of multiple disks in the event of a major disaster. It features a RAID system whose member disks are spatially distributed over a wide area. Each node has a device which includes the controller of the RAID and the controller of the member disks controlled by other nodes. The devices in the nodes are connected to computers using fiber optic cables and communicate using fiber-channel technology. Any computer at a node can utilize multiple devices connected by optical fibers as a single 'virtual disk.' The advantage of this system structure is that devices and fiber optic cables are shared by the computers. In this report, we first describe our proposed system and a prototype used for testing. We then discuss its performance, i.e., how read and write throughputs are affected by data-access delay, the RAID level, and queuing.
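
The survival property of such a geographically spread RAID can be illustrated with byte-level XOR parity (the RAID-4 scheme): lose any single member stripe and it is recoverable from the survivors plus parity. The data and the four-site layout below are made up.

```python
import functools
import operator

# Stripe a block across 4 "sites" plus one XOR parity stripe; any one
# lost stripe can be rebuilt by XOR-ing parity with the survivors.
# This is generic RAID-4 arithmetic, not the paper's exact design.

def xor_parity(stripes):
    return bytes(functools.reduce(operator.xor, col) for col in zip(*stripes))

def recover(stripes_with_gap, parity):
    # parity ^ (all surviving stripes) == the missing stripe.
    survivors = [s for s in stripes_with_gap if s is not None]
    return xor_parity(survivors + [parity])

data = b"multimedia-object"
pad = (-len(data)) % 4                  # pad so the split is even
padded = data + b"\x00" * pad
size = len(padded) // 4
stripes = [padded[i * size:(i + 1) * size] for i in range(4)]
parity = xor_parity(stripes)

lost = stripes[2]                       # simulate losing one site
stripes[2] = None
assert recover(stripes, parity) == lost
```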

  10. Schema architecture and their relationships to transaction processing in distributed database systems

    NARCIS (Netherlands)

    Apers, Peter M.G.; Scheuermann, P.

    1991-01-01

    We discuss the different types of schema architectures which could be supported by distributed database systems, making a clear distinction between logical, physical, and federated distribution. We elaborate on the additional mapping information required in architecture based on logical distribution

  11. Distributed Pseudo-Random Number Generation and Its Application to Cloud Database

    OpenAIRE

    Chen, Jiageng; Miyaji, Atsuko; Su, Chunhua

    2014-01-01

    Cloud database is now a rapidly growing trend in the cloud computing market. It enables clients to run their computations on outsourced databases or to access distributed database services in the cloud. At the same time, security and privacy concerns are a major challenge for cloud databases to continue growing. To enhance the security and privacy of cloud database technology, pseudo-random number generation (PRNG) plays an important role in data encryption and privacy-pr...

  12. Design issues of an efficient distributed database scheduler for telecom

    NARCIS (Netherlands)

    Bodlaender, M.P.; Stok, van der P.D.V.

    1998-01-01

    We optimize the speed of real-time databases by optimizing the scheduler. The performance of a database is directly linked to the environment it operates in, and we use environment characteristics as guidelines for the optimization. A typical telecom environment is investigated, and characteristics

  13. Checkpointing and Recovery in Distributed and Database Systems

    Science.gov (United States)

    Wu, Jiang

    2011-01-01

    A transaction-consistent global checkpoint of a database records a state of the database which reflects the effect of only completed transactions and not the results of any partially executed transactions. This thesis establishes the necessary and sufficient conditions for a checkpoint of a data item (or the checkpoints of a set of data items) to…

  14. Multi-layer distributed storage of LHD plasma diagnostic database

    International Nuclear Information System (INIS)

    Nakanishi, Hideya; Kojima, Mamoru; Ohsuna, Masaki; Nonomura, Miki; Imazu, Setsuo; Nagayama, Yoshio

    2006-01-01

    At the end of the LHD experimental campaign in 2003, the amount of whole plasma diagnostics raw data had reached 3.16 GB in a long-pulse experiment. This is a new world record in fusion plasma experiments, far beyond the previous value of 1.5 GB/shot. The total size of the LHD diagnostic data is about 21.6 TB for the whole six years of experiments, and it continues to grow at an increasing rate. The LHD diagnostic database and storage system, i.e. the LABCOM system, has a completely distributed architecture so as to be sufficiently flexible and easily expandable to maintain the integrity of the total amount of data. It has three categories of storage layer: OODBMS volumes in data acquisition servers, RAID servers, and mass storage systems such as MO jukeboxes and DVD-R changers. These are equally accessible through the network, and by data migration between them they can be considered a virtual OODB extension area. Their data contents are listed in a 'facilitator' PostgreSQL RDBMS, which contains about 6.2 million entries and informs clients requesting data of the optimized priority. Using the 'glib' compression for all of the binary data and applying the three-tier application model for OODB data transfer/retrieval, an optimized OODB read-out rate of 1.7 MB/s and an effective client access speed of 3-25 MB/s have been achieved. As a result, the LABCOM data system has succeeded in combining RDBMS, OODBMS, RAID, and MSS to enable a virtual and always expandable storage volume simultaneously with rapid data access. (author)
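
The multi-layer idea above, objects living in one of several tiers with a catalog recording where, can be modeled in a few lines. Tier names, the catalog layout, and the object-id format are illustrative assumptions, not the LABCOM implementation.

```python
# Toy model of tiered storage with migration: a catalog maps each data
# object to its current layer, and the tier's rank stands in for access
# latency. Everything here is illustrative.

TIERS = ["oodb", "raid", "mss"]        # fastest to slowest

class Catalog:
    def __init__(self):
        self.location = {}             # object id -> tier name

    def store(self, obj_id, tier):
        self.location[obj_id] = tier

    def migrate(self, obj_id, tier):
        self.location[obj_id] = tier

    def read_cost(self, obj_id):
        # Rank of the tier stands in for access latency.
        return TIERS.index(self.location[obj_id])

cat = Catalog()
cat.store("shot-12345/diag-A", "mss")
assert cat.read_cost("shot-12345/diag-A") == 2     # slow mass storage
cat.migrate("shot-12345/diag-A", "raid")           # staged for reuse
assert cat.read_cost("shot-12345/diag-A") == 1
```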

  15. A Database for Decision-Making in Training and Distributed Learning Technology

    National Research Council Canada - National Science Library

    Stouffer, Virginia

    1998-01-01

    .... A framework for incorporating data about distributed learning courseware into the existing training database was devised and a plan for a national electronic courseware redistribution network was recommended...

  16. A Data Analysis Expert System For Large Established Distributed Databases

    Science.gov (United States)

    Gnacek, Anne-Marie; An, Y. Kim; Ryan, J. Patrick

    1987-05-01

    The purpose of this work is to analyze the applicability of artificial intelligence techniques for developing a user-friendly, parallel interface to large isolated, incompatible NASA databases for the purpose of assisting the management decision process. To carry out this work, a survey was conducted to establish the data access requirements of several key NASA user groups. In addition, current NASA database access methods were evaluated. The results of this work are presented in the form of a design for a natural language database interface system, called the Deductively Augmented NASA Management Decision Support System (DANMDS). This design is feasible principally because of recently announced commercial hardware and software product developments which allow cross-vendor compatibility. The goal of the DANMDS system is commensurate with the central dilemma confronting most large companies and institutions in America, the retrieval of information from large, established, incompatible database systems. The DANMDS system implementation would represent a significant first step toward this problem's resolution.

  17. Green Lot-Sizing

    NARCIS (Netherlands)

    M. Retel Helmrich (Mathijn Jan)

    2013-01-01

    The lot-sizing problem concerns a manufacturer that needs to solve a production planning problem. The producer must decide at which points in time to set up a production process, and when he/she does, how much to produce. There is a trade-off between inventory costs and costs associated
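
The setup/inventory trade-off described above can be illustrated with a tiny uncapacitated lot-sizing dynamic program in the Wagner-Whitin style, a generic textbook method rather than anything taken from the thesis; the demand and cost numbers are made up.

```python
# Minimum total cost when an order placed in period t covers the
# demands of periods t..j: pay one setup cost plus holding cost for
# carrying each later period's demand, then recurse from j+1.

def lot_sizing(demand, setup_cost, holding_cost):
    n = len(demand)
    best = [0.0] * (n + 1)          # best[t] = min cost for periods t..n-1
    for t in range(n - 1, -1, -1):
        best[t] = float("inf")
        for j in range(t, n):
            hold = sum(holding_cost * (k - t) * demand[k]
                       for k in range(t, j + 1))
            best[t] = min(best[t], setup_cost + hold + best[j + 1])
    return best[0]

# Expensive setups favor one big lot; cheap setups favor lot-for-lot:
assert lot_sizing([10, 10], setup_cost=100, holding_cost=1) == 110.0
assert lot_sizing([10, 10], setup_cost=5, holding_cost=1) == 10.0
```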

  18. Databases

    Digital Repository Service at National Institute of Oceanography (India)

    Kunte, P.D.

    Information on bibliographic as well as numeric/textual databases relevant to coastal geomorphology has been included in a tabular form. Databases cover a broad spectrum of related subjects like coastal environment and population aspects, coastline...

  19. Links in a distributed database: Theory and implementation

    International Nuclear Information System (INIS)

    Karonis, N.T.; Kraimer, M.R.

    1991-12-01

    This document addresses the problem of extending database links across Input/Output Controller (IOC) boundaries. It lays a foundation by reviewing the current system and proposing an implementation specification designed to guide all work in this area. The document also describes an implementation that is less ambitious than our formally stated proposal, one that does not extend the reach of all database links across IOC boundaries. Specifically, it introduces an implementation of input and output links and comments on that overall implementation. We include a set of manual pages describing each of the new functions the implementation provides

  20. A data analysis expert system for large established distributed databases

    Science.gov (United States)

    Gnacek, Anne-Marie; An, Y. Kim; Ryan, J. Patrick

    1987-01-01

    A design for a natural language database interface system, called the Deductively Augmented NASA Management Decision support System (DANMDS), is presented. The DANMDS system components have been chosen on the basis of the following considerations: maximal employment of the existing NASA IBM-PC computers and supporting software; local structuring and storing of external data via the entity-relationship model; a natural easy-to-use error-free database query language; user ability to alter query language vocabulary and data analysis heuristic; and significant artificial intelligence data analysis heuristic techniques that allow the system to become progressively and automatically more useful.

  1. Databases

    Directory of Open Access Journals (Sweden)

    Nick Ryan

    2004-01-01

    Databases are deeply embedded in archaeology, underpinning and supporting many aspects of the subject. However, as well as providing a means for storing, retrieving and modifying data, databases themselves must be a result of a detailed analysis and design process. This article looks at this process, and shows how the characteristics of data models affect the process of database design and implementation. The impact of the Internet on the development of databases is examined, and the article concludes with a discussion of a range of issues associated with the recording and management of archaeological data.

  2. Present and future status of distributed database for nuclear materials (Data-Free-Way)

    International Nuclear Information System (INIS)

    Fujita, Mitsutane; Xu, Yibin; Kaji, Yoshiyuki; Tsukada, Takashi

    2004-01-01

    Data-Free-Way (DFW) is a distributed database for nuclear materials. DFW has been developed since 1990 by three organizations: the National Institute for Materials Science (NIMS), the Japan Atomic Energy Research Institute (JAERI) and the Japan Nuclear Cycle Development Institute (JNC). Each organization constructs a materials database in its strongest field, and members of the three organizations can use these databases via the Internet. The construction of DFW, the stored data, an outline of the knowledge data system, the data manufacturing of the knowledge note, and the activities of the three organizations are described. For NIMS, the nuclear reaction database for materials is explained. For JAERI, data analysis using IASCC data in JMPD is covered. The main databases of JNC are the 'experimental database of coexistence of engineering ceramics in liquid sodium at high temperature', the 'tensile test database of irradiated 304 stainless steel' and the 'technical information database'. (S.Y.)

  3. Managing Consistency Anomalies in Distributed Integrated Databases with Relaxed ACID Properties

    DEFF Research Database (Denmark)

    Frank, Lars; Ulslev Pedersen, Rasmus

    2014-01-01

    In central databases the consistency of data is normally implemented by using the ACID (Atomicity, Consistency, Isolation and Durability) properties of a DBMS (Data Base Management System). This is not possible if distributed and/or mobile databases are involved and the availability of data also has to be optimized. Therefore, we will in this paper use so-called relaxed ACID properties across different locations. The objective of designing relaxed ACID properties across different database locations is that the users can trust the data they use even if the distributed database temporarily is inconsistent. It is also important that disconnected locations can operate in a meaningful way in so-called disconnected mode. A database is DBMS consistent if its data complies with the consistency rules of the DBMS's metadata. If the database is DBMS consistent both when a transaction starts and when it has...

  4. The response time distribution in a real-time database with optimistic concurrency control

    NARCIS (Netherlands)

    Sassen, S.A.E.; Wal, van der J.

    1996-01-01

    For a real-time shared-memory database with optimistic concurrency control, an approximation for the transaction response time distribution is obtained. The model assumes that transactions arrive at the database according to a Poisson process, that every transaction uses an equal number of
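
The optimistic concurrency control assumed in this model can be sketched in miniature: transactions execute without locks, validate at commit time, and restart on conflict. The single shared counter and the contrived interleaving below are illustrative only.

```python
# Toy optimistic concurrency control: a commit succeeds only if the
# store's version is unchanged since the transaction's read; otherwise
# the transaction restarts. Response time in the queueing model above
# grows with the number of such restarts.

class OCCStore:
    def __init__(self):
        self.value = 0
        self.version = 0

    def read(self):
        return self.value, self.version

    def try_commit(self, read_version, new_value):
        # Validation: commit only if nobody committed since our read.
        if self.version != read_version:
            return False
        self.value = new_value
        self.version += 1
        return True

def increment(store):
    restarts = 0
    while True:
        value, version = store.read()
        if store.try_commit(version, value + 1):
            return restarts
        restarts += 1

store = OCCStore()
value, version = store.read()          # transaction T1 reads
increment(store)                       # T2 commits first
assert not store.try_commit(version, value + 1)   # T1 fails validation
assert increment(store) == 0           # T1 restarts and succeeds
assert store.value == 2
```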

  5. Carotenoids Database: structures, chemical fingerprints and distribution among organisms.

    Science.gov (United States)

    Yabuzaki, Junko

    2017-01-01

    To promote understanding of how organisms are related via carotenoids, either evolutionarily or symbiotically, or in food chains through natural histories, we built the Carotenoids Database. This provides chemical information on 1117 natural carotenoids with 683 source organisms. For extracting organisms closely related through the biosynthesis of carotenoids, we offer a new similarity search system 'Search similar carotenoids' using our original chemical fingerprint 'Carotenoid DB Chemical Fingerprints'. These Carotenoid DB Chemical Fingerprints describe the chemical substructure and the modification details based upon International Union of Pure and Applied Chemistry (IUPAC) semi-systematic names of the carotenoids. The fingerprints also allow (i) easier prediction of six biological functions of carotenoids: provitamin A, membrane stabilizers, odorous substances, allelochemicals, antiproliferative activity and reverse MDR activity against cancer cells, (ii) easier classification of carotenoid structures, (iii) partial and exact structure searching and (iv) easier extraction of structural isomers and stereoisomers. We believe this to be the first attempt to establish fingerprints using the IUPAC semi-systematic names. For extracting close profiled organisms, we provide a new tool 'Search similar profiled organisms'. Our current statistics show some insights into natural history: carotenoids seem to have been spread largely by bacteria, as they produce C30, C40, C45 and C50 carotenoids, with the widest range of end groups, and they share a small portion of C40 carotenoids with eukaryotes. Archaea share an even smaller portion with eukaryotes. Eukaryotes then have evolved a considerable variety of C40 carotenoids. Considering carotenoids, eukaryotes seem more closely related to bacteria than to archaea aside from 16S rRNA lineage analysis. Database URL: http://carotenoiddb.jp. © The Author(s) 2017. Published by Oxford University Press.
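
Fingerprint-based similarity search of the kind 'Search similar carotenoids' performs can be illustrated with Tanimoto (Jaccard) similarity over sets of fingerprint bits; the tiny made-up feature sets below only stand in for the database's actual Carotenoid DB Chemical Fingerprints.

```python
# Rank database entries by Tanimoto similarity to a query fingerprint.
# The feature names are invented for this sketch.

def tanimoto(fp_a, fp_b):
    # |A ∩ B| / |A ∪ B| over the sets of "on" fingerprint bits.
    inter = len(fp_a & fp_b)
    union = len(fp_a | fp_b)
    return inter / union if union else 1.0

fingerprints = {
    "beta-carotene": {"C40", "beta-ring", "conjugated-11"},
    "zeaxanthin":    {"C40", "beta-ring", "conjugated-11", "3-OH"},
    "lycopene":      {"C40", "acyclic", "conjugated-11"},
}

query = fingerprints["beta-carotene"]
ranked = sorted(fingerprints, key=lambda n: -tanimoto(query, fingerprints[n]))
assert ranked[0] == "beta-carotene"
assert ranked[1] == "zeaxanthin"       # shares 3 of its 4 features
```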

  6. Database Administrator

    Science.gov (United States)

    Moore, Pam

    2010-01-01

    The Internet and electronic commerce (e-commerce) generate lots of data. Data must be stored, organized, and managed. Database administrators, or DBAs, work with database software to find ways to do this. They identify user needs, set up computer databases, and test systems. They ensure that systems perform as they should and add people to the…

  7. Data Mining on Distributed Medical Databases: Recent Trends and Future Directions

    Science.gov (United States)

    Atilgan, Yasemin; Dogan, Firat

    As computerization in healthcare services increase, the amount of available digital data is growing at an unprecedented rate and as a result healthcare organizations are much more able to store data than to extract knowledge from it. Today the major challenge is to transform these data into useful information and knowledge. It is important for healthcare organizations to use stored data to improve quality while reducing cost. This paper first investigates the data mining applications on centralized medical databases, and how they are used for diagnostic and population health, then introduces distributed databases. The integration needs and issues of distributed medical databases are described. Finally the paper focuses on data mining studies on distributed medical databases.

  8. Experience using a distributed object oriented database for a DAQ system

    International Nuclear Information System (INIS)

    Bee, C.P.; Eshghi, S.; Jones, R.

    1996-01-01

    To configure the RD13 data acquisition system, we need many parameters which describe the various hardware and software components. Such information has been defined using an entity-relation model and stored in a commercial memory-resident database. During the last year, Itasca, an object-oriented database management system (OODB), was chosen as a replacement database system. We have ported the existing databases (hardware and software configurations, run parameters, etc.) to Itasca and integrated it with the run control system. We believe that it is possible to use an OODB in real-time environments such as DAQ systems. In this paper, we present our experience and impressions: why we wanted to change from an entity-relational approach, some useful features of Itasca, and the issues we met during this project, including integration of the database into an existing distributed environment and factors which influence performance. (author)

  9. Investigation on Oracle GoldenGate Veridata for Data Consistency in WLCG Distributed Database Environment

    OpenAIRE

    Asko, Anti; Lobato Pardavila, Lorena

    2014-01-01

    Abstract In a distributed database environment, data divergence can be an important problem: if it is not discovered and correctly identified, incorrect data can lead to poor decision making, service errors and operational errors. Oracle GoldenGate Veridata is a product to compare two sets of data and to identify and report on data that is out of synchronization. IT DB is providing a replication service between databases at CERN and other computer centers worldwide as a par...

  10. DCODE: A Distributed Column-Oriented Database Engine for Big Data Analytics

    OpenAIRE

    Liu, Yanchen; Cao, Fang; Mortazavi, Masood; Chen, Mengmeng; Yan, Ning; Ku, Chi; Adnaik, Aniket; Morgan, Stephen; Shi, Guangyu; Wang, Yuhu; Fang, Fan

    2015-01-01

    Part 10: Big Data and Text Mining; International audience; We propose a novel Distributed Column-Oriented Database Engine (DCODE) for efficient analytic query processing that combines advantages of both column storage and parallel processing. In DCODE, we enhance an existing open-source columnar database engine by adding the capability for handling queries over a cluster. Specifically, we studied parallel query execution and optimization techniques such as horizontal partitioning, exchange op...

  11. A database for on-line event analysis on a distributed memory machine

    CERN Document Server

    Argante, E; Van der Stok, P D V; Willers, Ian Malcolm

    1995-01-01

    Parallel in-memory databases can enhance the structuring and parallelization of programs used in High Energy Physics (HEP). Efficient database access routines are used as communication primitives which hide the communication topology, in contrast to more explicit communication libraries like PVM or MPI. A parallel in-memory database, called SPIDER, has been implemented on a 32-node Meiko CS-2 distributed memory machine. The SPIDER primitives generate a lower overhead than those of PVM or MPI. The event reconstruction program CPREAD of the CPLEAR experiment has been used as a test case. Performance measurements show that SPIDER can cope with the event rate generated by CPLEAR.

  12. Distributed Database Semantic Integration of Wireless Sensor Network to Access the Environmental Monitoring System

    Directory of Open Access Journals (Sweden)

    Ubaidillah Umar

    2018-06-01

    Full Text Available A wireless sensor network (WSN works continuously to gather information from sensors that generate large volumes of data to be handled and processed by applications. Current efforts in sensor networks focus more on networking and development services for a variety of applications and less on processing and integrating data from heterogeneous sensors. There is an increased need for information to become shareable across different sensors, database platforms, and applications that are not easily implemented in traditional database systems. To solve the issue of these large amounts of data from different servers and database platforms (including sensor data, a semantic sensor web service platform is needed to enable a machine to extract meaningful information from the sensor’s raw data. This additionally helps to minimize and simplify data processing and to deduce new information from existing data. This paper implements a semantic web data platform (SWDP to manage the distribution of data sensors based on the semantic database system. SWDP uses sensors for temperature, humidity, carbon monoxide, carbon dioxide, luminosity, and noise. The system uses the Sesame semantic web database for data processing and a WSN to distribute, minimize, and simplify information processing. The sensor nodes are distributed in different places to collect sensor data. The SWDP generates context information in the form of a resource description framework. The experiment results demonstrate that the SWDP is more efficient than the traditional database system in terms of memory usage and processing time.
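    The core idea of storing sensor readings as semantic triples can be sketched without any particular triple store. The following pure-Python pattern matcher is a toy stand-in for an RDF database such as Sesame; all node and predicate names are invented for illustration.

```python
class TripleStore:
    """Toy (subject, predicate, object) store with wildcard pattern matching."""

    def __init__(self):
        self.triples = set()

    def add(self, s, p, o):
        self.triples.add((s, p, o))

    def match(self, s=None, p=None, o=None):
        """Return triples matching the pattern; None acts as a wildcard."""
        return [t for t in self.triples
                if (s is None or t[0] == s)
                and (p is None or t[1] == p)
                and (o is None or t[2] == o)]

store = TripleStore()
store.add("node1", "measures", "temperature")
store.add("node1", "hasValue", "21.5")
store.add("node2", "measures", "humidity")
print(len(store.match(p="measures")))  # all triples with the 'measures' predicate
```

    A SPARQL engine generalizes this pattern matching with joins across several patterns, but the wildcard lookup above is the basic primitive.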

  13. New model for distributed multimedia databases and its application to networking of museums

    Science.gov (United States)

    Kuroda, Kazuhide; Komatsu, Naohisa; Komiya, Kazumi; Ikeda, Hiroaki

    1998-02-01

    This paper proposes a new distributed multimedia database system where databases storing MPEG-2 videos and/or super-high-definition images are connected together through B-ISDNs, and also refers to an example of the networking of museums on the basis of the proposed database system. The proposed database system introduces a new concept of the 'retrieval manager', which functions as an intelligent controller so that the user can recognize a set of image databases as one logical database. A user terminal issues a retrieval request to the retrieval manager located nearest to the user terminal on the network. The retrieved contents are then sent directly through the B-ISDNs to the user terminal from the server which stores the designated contents. In this case, the designated logical database dynamically generates the best combination of retrieval parameters, such as the data transfer path, referring to directory data and the environment of the system. The generated retrieval parameters are then used to select the most suitable data transfer path on the network. Therefore, the best combination of these parameters fits the distributed multimedia database system.

  14. ARACHNID: A prototype object-oriented database tool for distributed systems

    Science.gov (United States)

    Younger, Herbert; Oreilly, John; Frogner, Bjorn

    1994-01-01

    This paper discusses the results of a Phase 2 SBIR project sponsored by NASA and performed by MIMD Systems, Inc. A major objective of this project was to develop specific concepts for improved performance in accessing large databases. An object-oriented and distributed approach was used for the general design, while a geographical decomposition was used as a specific solution. The resulting software framework is called ARACHNID. The Faint Source Catalog developed by NASA was the initial database testbed. This is a database of many giga-bytes, where an order of magnitude improvement in query speed is being sought. This database contains faint infrared point sources obtained from telescope measurements of the sky. A geographical decomposition of this database is an attractive approach to dividing it into pieces. Each piece can then be searched on individual processors with only a weak data linkage between the processors being required. As a further demonstration of the concepts implemented in ARACHNID, a tourist information system is discussed. This version of ARACHNID is the commercial result of the project. It is a distributed, networked, database application where speed, maintenance, and reliability are important considerations. This paper focuses on the design concepts and technologies that form the basis for ARACHNID.

  15. Security in the CernVM File System and the Frontier Distributed Database Caching System

    International Nuclear Information System (INIS)

    Dykstra, D; Blomer, J

    2014-01-01

    Both the CernVM File System (CVMFS) and the Frontier Distributed Database Caching System (Frontier) distribute centrally updated data worldwide for LHC experiments using http proxy caches. Neither system provides privacy or access control on reading the data, but both control access to updates of the data and can guarantee the authenticity and integrity of the data transferred to clients over the internet. CVMFS has since its early days required digital signatures and secure hashes on all distributed data, and recently Frontier has added X.509-based authenticity and integrity checking. In this paper we detail and compare the security models of CVMFS and Frontier.
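    Both systems rest on the same basic integrity idea: distribute data together with a secure hash (and a signature over that hash) so a client can verify what it received from an untrusted cache. A minimal hash-verification sketch using Python's stdlib hashlib; the payload strings are made up, and real CVMFS/Frontier catalogs add digital signatures (now X.509-based in Frontier) on top of this check:

```python
import hashlib

def publish(content: bytes):
    """Publisher side: distribute content together with its secure hash."""
    return content, hashlib.sha256(content).hexdigest()

def verify(content: bytes, expected_digest: str) -> bool:
    """Client side: recompute the hash and compare before trusting the data."""
    return hashlib.sha256(content).hexdigest() == expected_digest

data, digest = publish(b"conditions payload v42")
assert verify(data, digest)            # untampered data passes
print(verify(b"tampered payload", digest))  # a modified payload fails
```

    Because the digest travels over a trusted channel (or is signed), any corruption introduced by an intermediate http proxy cache is detected at the client.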

  16. Security in the CernVM File System and the Frontier Distributed Database Caching System

    Science.gov (United States)

    Dykstra, D.; Blomer, J.

    2014-06-01

    Both the CernVM File System (CVMFS) and the Frontier Distributed Database Caching System (Frontier) distribute centrally updated data worldwide for LHC experiments using http proxy caches. Neither system provides privacy or access control on reading the data, but both control access to updates of the data and can guarantee the authenticity and integrity of the data transferred to clients over the internet. CVMFS has since its early days required digital signatures and secure hashes on all distributed data, and recently Frontier has added X.509-based authenticity and integrity checking. In this paper we detail and compare the security models of CVMFS and Frontier.

  17. GPCALMA: A Tool For Mammography With A GRID-Connected Distributed Database

    International Nuclear Information System (INIS)

    Bottigli, U.; Golosio, B.; Masala, G.L.; Oliva, P.; Stumbo, S.; Cerello, P.; Cheran, S.; Delogu, P.; Fantacci, M.E.; Retico, A.; Fauci, F.; Magro, R.; Raso, G.; Lauria, A.; Palmiero, R.; Lopez Torres, E.; Tangaro, S.

    2003-01-01

    The GPCALMA (Grid Platform for Computer Assisted Library for MAmmography) collaboration involves several departments of physics, INFN (National Institute of Nuclear Physics) sections, and Italian hospitals. The aim of this collaboration is to develop a tool that can help radiologists in the early detection of breast cancer. GPCALMA has built a large distributed database of digitised mammographic images (about 5500 images corresponding to 1650 patients) and developed a CAD (Computer Aided Detection) software which is integrated in a station that can also be used to acquire new images, as an archive and to perform statistical analysis. The images (18x24 cm2, digitised by a CCD linear scanner with a 85 μm pitch and 4096 gray levels) are completely described: pathological ones have a consistent characterization with the radiologist's diagnosis and histological data, while non-pathological ones correspond to patients with a follow-up of at least three years. The distributed database is realized through the connection of all the hospitals and research centers using GRID technology. In each hospital, local patients' digital images are stored in the local database. Using the GRID connection, GPCALMA will allow each node to work on distributed database data as well as local database data. Using its database, the GPCALMA tools perform several analyses. A texture analysis, i.e. an automated classification of adipose, dense or glandular texture, can be provided by the system. GPCALMA software also allows classification of pathological features, in particular massive lesion (both opacities and spiculated lesions) analysis and microcalcification cluster analysis. The detection of pathological features is made using neural network software that provides a selection of areas showing a given 'suspicion level' of lesion occurrence. The performance of the GPCALMA system will be presented in terms of ROC (Receiver Operating Characteristic) curves. The results of the GPCALMA system as 'second reader' will also

  18. Site initialization, recovery, and back-up in a distributed database system

    International Nuclear Information System (INIS)

    Attar, R.; Bernstein, P.A.; Goodman, N.

    1982-01-01

    Site initialization is the problem of integrating a new site into a running distributed database system (DDBS). Site recovery is the problem of integrating an old site into a DDBS when the site recovers from failure. Site backup is the problem of creating a static backup copy of a database for archival or query purposes. We present an algorithm that solves the site initialization problem. By modifying the algorithm slightly, we get solutions to the other two problems as well. Our algorithm exploits the fact that a correct DDBS must run a serializable concurrency control algorithm. Our algorithm relies on the concurrency control algorithm to handle all inter-site synchronization

  19. Application of new type of distributed multimedia databases to networked electronic museum

    Science.gov (United States)

    Kuroda, Kazuhide; Komatsu, Naohisa; Komiya, Kazumi; Ikeda, Hiroaki

    1999-01-01

    Recently, various kinds of multimedia application systems have actively been developed based on the achievements of advanced high-speed communication networks, computer processing technologies, and digital contents-handling technologies. Against this background, this paper proposes a new distributed multimedia database system which can effectively perform a new function of cooperative retrieval among distributed databases. The proposed system introduces a new concept of the 'retrieval manager', which functions as an intelligent controller so that the user can recognize a set of distributed databases as one logical database. The logical database dynamically generates and performs a preferred combination of retrieval parameters on the basis of both directory data and the system environment. Moreover, a concept of 'domain' is defined in the system as a managing unit of retrieval. Retrieval can effectively be performed by cooperative processing among multiple domains. Communication language and protocols are also defined in the system; these are used in every communication action in the system. A language interpreter in each machine translates the communication language into an internal language used in that machine. Using the language interpreter, such internal modules as the DBMS and user interface modules can freely be selected. A concept of 'content-set' is also introduced. A content-set is defined as a package of contents which are related to each other, and the system handles a content-set as one object. The user terminal can effectively control the display of retrieved contents, referring to data indicating the relations of the contents in the content-set. In order to verify the function of the proposed system, a networked electronic museum was experimentally built.
The results of this experiment indicate that the proposed system can effectively retrieve the objective contents under the control to a number of distributed

  20. Exploring lot-to-lot variation in spoilage bacterial communities on commercial modified atmosphere packaged beef.

    Science.gov (United States)

    Säde, Elina; Penttinen, Katri; Björkroth, Johanna; Hultman, Jenni

    2017-04-01

    Understanding the factors influencing meat bacterial communities is important as these communities are largely responsible for meat spoilage. The composition and structure of a bacterial community on a high-O2 modified-atmosphere packaged beef product were examined after packaging, on the use-by date and two days after, to determine whether the communities at each stage were similar to those in samples taken from different production lots. Furthermore, we examined whether the taxa associated with product spoilage were distributed across production lots. Results from 16S rRNA amplicon sequencing showed that while the early samples harbored distinct bacterial communities, after 8-12 days of storage at 6 °C the communities were similar to those in samples from different lots, comprising mainly the common meat spoilage bacteria Carnobacterium spp., Brochothrix spp., Leuconostoc spp. and Lactococcus spp. Interestingly, abundant operational taxonomic units associated with product spoilage were shared between the production lots, suggesting that the bacteria able to spoil the product were constant contaminants in the production chain. A characteristic succession pattern and the distribution of common spoilage bacteria between lots suggest that both the packaging type and the initial community structure influenced the development of the spoilage bacterial community. Copyright © 2016 Elsevier Ltd. All rights reserved.
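    The lot-to-lot comparison in studies like this reduces, at its simplest, to set overlap between the taxa detected in each lot. A toy sketch in Python (the genus names come from the abstract, but the per-lot assignments are invented):

```python
def shared_otus(lot_a, lot_b):
    """Operational taxonomic units detected in both production lots."""
    return set(lot_a) & set(lot_b)

# Invented detections for two hypothetical production lots.
lot1 = {"Carnobacterium", "Brochothrix", "Leuconostoc"}
lot2 = {"Carnobacterium", "Leuconostoc", "Lactococcus"}

overlap = shared_otus(lot1, lot2)
print(sorted(overlap))  # ['Carnobacterium', 'Leuconostoc']
```

    In practice the comparison is done on abundance-weighted community profiles (e.g. Bray-Curtis distances) rather than bare presence/absence sets, but the shared-taxa question is the same.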

  1. Palantiri: a distributed real-time database system for process control

    International Nuclear Information System (INIS)

    Tummers, B.J.; Heubers, W.P.J.

    1992-01-01

    The medium-energy accelerator MEA, located in Amsterdam, is controlled by a heterogeneous computer network. A large real-time database contains the parameters involved in the control of the accelerator and the experiments. This database system was implemented about ten years ago and has since been extended several times. In response to increased needs the database system has been redesigned. The new database environment, as described in this paper, consists of two new concepts: (1) A Palantir, which is a per-machine process that stores the locally declared data and forwards all non-local requests for data access to the appropriate machine. It acts as a storage device for data and a looking glass upon the world. (2) Golems: working units that define the data within the Palantir, and that have knowledge of the hardware they control. Applications access the data of a Golem by name (names which resemble Unix path names). The Palantir that runs on the same machine as the application handles the distribution of access requests. This paper focuses on the Palantir concept as a distributed data storage and event handling device for process control. (author)
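    The name-based access the paper describes, Unix-like path names resolved by a per-machine process, can be sketched as a flat path-to-value map with prefix queries. This is an illustrative toy, not the MEA implementation, and all paths and values below are invented:

```python
class Palantir:
    """Toy name-based store: slash-separated paths map to parameter values."""

    def __init__(self):
        self._data = {}

    def declare(self, path, value):
        """A Golem would declare its hardware parameters under a path."""
        self._data[path] = value

    def read(self, path):
        return self._data[path]

    def subtree(self, prefix):
        """All entries below a path prefix, like listing a Unix directory."""
        return {p: v for p, v in self._data.items() if p.startswith(prefix + "/")}

p = Palantir()
p.declare("/mea/magnet1/current", 310.5)
p.declare("/mea/magnet1/status", "on")
p.declare("/mea/rf/frequency", 476.0)
print(len(p.subtree("/mea/magnet1")))  # parameters under one device node
```

    The real system adds the distribution step: a local Palantir answers reads for locally declared paths and forwards anything else to the Palantir on the owning machine.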

  2. An XML-Based Networking Method for Connecting Distributed Anthropometric Databases

    Directory of Open Access Journals (Sweden)

    H Cheng

    2007-03-01

    Full Text Available Anthropometric data are used by numerous types of organizations for health evaluation, ergonomics, apparel sizing, fitness training, and many other applications. Data have been collected and stored in electronic databases since at least the 1940s. These databases are owned by many organizations around the world. In addition, the anthropometric studies stored in these databases often employ different standards, terminology, procedures, or measurement sets. To promote the use and sharing of these databases, the World Engineering Anthropometry Resources (WEAR) group was formed and tasked with the integration and publishing of member resources. It is easy to see that organizing worldwide anthropometric data into a single database architecture could be a daunting and expensive undertaking. The challenges of WEAR integration lie mainly in the areas of distributed and disparate data, different standards and formats, independent memberships, and limited development resources. Fortunately, XML schema and web services provide an alternative method for networking databases, referred to as the Loosely Coupled WEAR Integration. A standard XML schema can be defined and used as a type of Rosetta stone to translate the anthropometric data into a universal format, and a web services system can be set up to link the databases to one another. In this way, the originators of the data can keep their data locally along with their own data management system and user interface, but their data can be searched and accessed as part of the larger data network, and even combined with the data of others. This paper will identify requirements for WEAR integration, review XML as the universal format, review different integration approaches, and propose a hybrid web services/data mart solution.
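    The 'Rosetta stone' role of the standard schema is a mechanical translation step: parse a member database's local XML, normalize names and units, and emit the shared format. A sketch using Python's stdlib ElementTree, with a made-up local record and a made-up target element; the real WEAR schema differs:

```python
import xml.etree.ElementTree as ET

# Hypothetical local record from one member database (units in cm).
SOURCE = """<subject><stature unit="cm">175.5</stature></subject>"""

def to_wear_xml(source_xml: str) -> ET.Element:
    """Map a local record onto a shared (hypothetical) schema, converting to mm."""
    src = ET.fromstring(source_xml)
    node = src.find("stature")
    value = float(node.text)
    if node.get("unit") == "cm":
        value *= 10  # normalize every source to millimetres
    out = ET.Element("measurement", name="stature", unit="mm")
    out.text = f"{value:g}"
    return out

elem = to_wear_xml(SOURCE)
print(ET.tostring(elem, encoding="unicode"))
```

    A web service wrapping each member database would apply such a translator on the way out, so federated queries only ever see the universal format.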

  3. Practical private database queries based on a quantum-key-distribution protocol

    International Nuclear Information System (INIS)

    Jakobi, Markus; Simon, Christoph; Gisin, Nicolas; Bancal, Jean-Daniel; Branciard, Cyril; Walenta, Nino; Zbinden, Hugo

    2011-01-01

    Private queries allow a user, Alice, to learn an element of a database held by a provider, Bob, without revealing which element she is interested in, while limiting her information about the other elements. We propose to implement private queries based on a quantum-key-distribution protocol, with changes only in the classical postprocessing of the key. This approach makes our scheme both easy to implement and loss tolerant. While unconditionally secure private queries are known to be impossible, we argue that an interesting degree of security can be achieved by relying on fundamental physical principles instead of unverifiable security assumptions in order to protect both the user and the database. We think that the scope exists for such practical private queries to become another remarkable application of quantum information in the footsteps of quantum key distribution.

  4. SPANG: a SPARQL client supporting generation and reuse of queries for distributed RDF databases.

    Science.gov (United States)

    Chiba, Hirokazu; Uchiyama, Ikuo

    2017-02-08

    Toward improved interoperability of distributed biological databases, an increasing number of datasets have been published in the standardized Resource Description Framework (RDF). Although the powerful SPARQL Protocol and RDF Query Language (SPARQL) provides a basis for exploiting RDF databases, writing SPARQL code is burdensome for users including bioinformaticians. Thus, an easy-to-use interface is necessary. We developed SPANG, a SPARQL client that has unique features for querying RDF datasets. SPANG dynamically generates typical SPARQL queries according to specified arguments. It can also call SPARQL template libraries constructed in a local system or published on the Web. Further, it enables combinatorial execution of multiple queries, each with a distinct target database. These features facilitate easy and effective access to RDF datasets and integrative analysis of distributed data. SPANG helps users to exploit RDF datasets by generation and reuse of SPARQL queries through a simple interface. This client will enhance integrative exploitation of biological RDF datasets distributed across the Web. This software package is freely available at http://purl.org/net/spang .
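    Dynamic generation of a typical query from command-style arguments, the first SPANG feature listed above, amounts to filling a SELECT template and treating unspecified slots as variables. A simplified sketch; the argument names and output shape here are our own illustration, not SPANG's actual interface:

```python
def generate_sparql(subject=None, predicate=None, obj=None, limit=10):
    """Build a basic triple-pattern SELECT query; None becomes a variable."""
    s = subject or "?s"
    p = predicate or "?p"
    o = obj or "?o"
    # Project only the unbound positions; fall back to * if all are bound.
    variables = " ".join(v for v in (s, p, o) if v.startswith("?")) or "*"
    return (f"SELECT {variables}\n"
            f"WHERE {{ {s} {p} {o} . }}\n"
            f"LIMIT {limit}")

print(generate_sparql(predicate="rdfs:label"))
```

    Combinatorial execution against multiple endpoints then just means sending the generated text to each target database's SPARQL endpoint and merging the results.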

  5. Data Mining in Distributed Database of the First Egyptian Thermal Research Reactor (ETRR-1)

    International Nuclear Information System (INIS)

    Abo Elez, R.H.; Ayad, N.M.A.; Ghuname, A.A.A.

    2006-01-01

    Distributed database (DDB) technology and its application systems are growing to cover many fields and domains, at different levels. The aim of this paper is to shed some light on applying distributed database technology to the ETRR-1 operation data logged by the data acquisition system (DACQUS), so that useful knowledge can be extracted. Data mining with scientific methods and specialized tools is used to support the extraction of useful knowledge from the rapidly growing volumes of data. Data mining methods take many shapes and forms. Predictive methods furnish models capable of anticipating the future behavior of quantitative or qualitative database variables. When the relationship between the dependent and independent variables is nearly linear, linear regression is the appropriate data mining strategy. Therefore, multiple linear regression models have been applied to a set of data samples of the ETRR-1 operation data, using the least squares method. The results show an accurate analysis of the multiple linear regression models as applied to the ETRR-1 operation data.
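    The least-squares fit used above has a closed form; for brevity this sketch handles the single-predictor case (full multiple regression solves the normal equations X^T X b = X^T y). The operation-log numbers below are invented for illustration, not ETRR-1 data:

```python
def least_squares(x, y):
    """Ordinary least squares fit y ≈ a*x + b (single-predictor case)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)          # variance term
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))  # covariance term
    a = sxy / sxx
    b = my - a * mx
    return a, b

# Invented samples: reactor power level vs. coolant outlet temperature.
power = [0.0, 0.5, 1.0, 1.5, 2.0]
temp = [30.0, 35.0, 40.0, 45.0, 50.0]
a, b = least_squares(power, temp)
print(a, b)  # exact linear data recovers slope 10 and intercept 30
```

    With several independent variables, the same idea is carried out in matrix form, which is what a multiple linear regression tool automates.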

  6. The phytophthora genome initiative database: informatics and analysis for distributed pathogenomic research.

    Science.gov (United States)

    Waugh, M; Hraber, P; Weller, J; Wu, Y; Chen, G; Inman, J; Kiphart, D; Sobral, B

    2000-01-01

    The Phytophthora Genome Initiative (PGI) is a distributed collaboration to study the genome and evolution of a particularly destructive group of plant-pathogenic oomycetes, with the goal of understanding the mechanisms of infection and resistance. NCGR provides informatics support for the collaboration as well as a centralized data repository. In the pilot phase of the project, several investigators prepared Phytophthora infestans and Phytophthora sojae EST and Phytophthora sojae BAC libraries and sent them to another laboratory for sequencing. Data from sequencing reactions were transferred to NCGR for analysis and curation. An analysis pipeline transforms raw data by performing simple analyses (i.e., vector removal and similarity searching) that are stored and can be retrieved by investigators using a web browser. Here we describe the database and access tools, provide an overview of the data therein and outline future plans. This resource has provided a unique opportunity for the distributed, collaborative study of a genus from which relatively little sequence data are available. Results may lead to insight into how better to control these pathogens. The homepage of PGI can be accessed at http://www.ncgr.org/pgi, with database access through the database access hyperlink.

  7. Evaluating and categorizing the reliability of distribution coefficient values in the sorption database

    International Nuclear Information System (INIS)

    Ochs, Michael; Saito, Yoshihiko; Kitamura, Akira; Shibata, Masahiro; Sasamoto, Hiroshi; Yui, Mikazu

    2007-03-01

    Japan Atomic Energy Agency (JAEA) has developed a sorption database (JNC-SDB) for bentonite and rocks in order to assess the retardation properties of important radioactive elements in natural and engineered barriers in the H12 report. The database includes distribution coefficients (Kd) of important radionuclides; the SDB contains about 20,000 Kd values. The SDB includes a great variety of Kd values and additional key information from many different publications. Accordingly, a classification guideline and classification system were developed in order to evaluate the reliability of each Kd value (Th, Pa, U, Np, Pu, Am, Cm, Cs, Ra, Se, Tc on bentonite). The reliability of 3740 Kd values was evaluated and categorized. (author)

  8. Assumptions of acceptance sampling and the implications for lot contamination: Escherichia coli O157 in lots of Australian manufacturing beef.

    Science.gov (United States)

    Kiermeier, Andreas; Mellor, Glen; Barlow, Robert; Jenson, Ian

    2011-04-01

    The aims of this work were to determine the distribution and concentration of Escherichia coli O157 in lots of beef destined for grinding (manufacturing beef) that failed to meet Australian requirements for export, to use these data to better understand the performance of sampling plans based on the binomial distribution, and to consider alternative approaches for evaluating sampling plans. For each of five lots from which E. coli O157 had been detected, 900 samples from the external carcass surface were tested. E. coli O157 was not detected in three lots, whereas in two lots E. coli O157 was detected in 2 and 74 samples. For lots in which E. coli O157 was not detected in the present study, the E. coli O157 level was estimated to be contaminated carton, the total number of E. coli O157 cells was estimated to be 813. In the two lots in which E. coli O157 was detected, the pathogen was detected in 1 of 12 and 2 of 12 cartons. The use of acceptance sampling plans based on a binomial distribution can provide a falsely optimistic view of the value of sampling as a control measure when applied to assessment of E. coli O157 contamination in manufacturing beef. Alternative approaches to understanding sampling plans, which do not assume homogeneous contamination throughout the lot, appear more realistic. These results indicate that despite the application of stringent sampling plans, sampling and testing approaches are inefficient for controlling microbiological quality.
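    Under the binomial assumption the abstract criticizes, a zero-tolerance plan's operating characteristic has a simple form: a lot with homogeneous prevalence p passes n samples with probability (1-p)^n. The sketch below computes that curve; the paper's point is that clustered contamination (e.g. positives concentrated in one or two cartons) violates the homogeneity assumption behind this formula, so the computed curve can be falsely optimistic.

```python
def p_accept(prevalence, n_samples):
    """Probability that a zero-tolerance sampling plan accepts the lot,
    assuming homogeneous, independent (binomial) contamination."""
    return (1 - prevalence) ** n_samples

# Acceptance probability for a 60-sample plan at several prevalence levels.
for p in (0.001, 0.01, 0.05):
    print(p, round(p_accept(p, 60), 3))
```

    Even a 5% per-sample prevalence slips through a 60-sample plan about 5% of the time; when contamination is clustered, the per-sample detection probability no longer follows this independent model at all.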

  9. An approach for access differentiation design in medical distributed applications built on databases.

    Science.gov (United States)

    Shoukourian, S K; Vasilyan, A M; Avagyan, A A; Shukurian, A K

    1999-01-01

    A formalized "top to bottom" design approach was described in [1] for distributed applications built on databases, which were considered as a medium between virtual and real user environments for a specific medical application. Merging different components within a unified distributed application poses new essential problems for software. In particular, protection tools which are sufficient separately become deficient during integration, due to specific additional links and relationships not considered formerly. For example, it is impossible to protect a shared object in the virtual operating room using only DBMS protection tools if the object is stored as a record in DB tables. The solution of the problem should be sought within the more general application framework, and appropriate tools are absent or unavailable. The present paper suggests a detailed outline of a design and testing toolset for access differentiation systems (ADS) in distributed medical applications which use databases. The appropriate formal model as well as tools for its mapping to a DBMS are suggested. Remote users connected via global networks are considered too.

  10. Study on distributed generation algorithm of variable precision concept lattice based on ontology heterogeneous database

    Science.gov (United States)

    WANG, Qingrong; ZHU, Changfeng

    2017-06-01

    Integration of distributed heterogeneous data sources is a key issue in big data applications. In this paper, the strategy of variable precision is introduced into the concept lattice, and a one-to-one mapping between the variable precision concept lattice and the ontology concept lattice is constructed so that a local ontology is produced by building the variable precision concept lattice for each subsystem. A distributed generation algorithm for variable precision concept lattices based on an ontology heterogeneous database is then proposed, drawing on the close relationship between concept lattices and ontology construction. Finally, a case study based on the main concept lattice generated from an existing heterogeneous database was carried out to verify the feasibility and validity of the algorithm, and the differences between the main concept lattice and the standard concept lattice are compared. Analysis results show that the algorithm can automate the construction of a distributed concept lattice over heterogeneous data sources.

  11. The response-time distribution in a real-time database with optimistic concurrency control and constant execution times

    NARCIS (Netherlands)

    Sassen, S.A.E.; Wal, van der J.

    1997-01-01

    For a real-time shared-memory database with optimistic concurrency control, an approximation for the transaction response-time distribution is obtained. The model assumes that transactions arrive at the database according to a Poisson process, that every transaction uses an equal number of
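    The flavor of such a model can be reproduced with a crude Monte Carlo sketch: a constant execution time S, with each run failing optimistic validation with a fixed conflict probability q, gives a geometric number of reruns. This simplification is ours; it ignores queueing and the paper's actual conflict model:

```python
import random

def response_time(S, q, rng):
    """One transaction: rerun (length S each time) until optimistic
    validation succeeds; each run conflicts with probability q."""
    t = S
    while rng.random() < q:
        t += S  # validation failed: restart the whole transaction
    return t

rng = random.Random(1)
samples = [response_time(S=1.0, q=0.2, rng=rng) for _ in range(100_000)]
mean = sum(samples) / len(samples)
print(round(mean, 2))  # analytic mean is S / (1 - q) = 1.25
```

    The interesting quantity in the paper is the full response-time distribution, not just this mean, since real-time deadlines depend on the tail.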

  12. The response-time distribution in a real-time database with optimistic concurrency control and exponential execution times

    NARCIS (Netherlands)

    Sassen, S.A.E.; Wal, van der J.

    1997-01-01

    For a real-time shared-memory database with optimistic concurrency control, an approximation for the transaction response-time distribution is obtained. The model assumes that transactions arrive at the database according to a Poisson process, that every transaction takes an exponential execution

  13. The mining of toxin-like polypeptides from EST database by single residue distribution analysis.

    Science.gov (United States)

    Kozlov, Sergey; Grishin, Eugene

    2011-01-31

    Novel high throughput sequencing technologies require permanent development of bioinformatics data processing methods. Among them, rapid and reliable identification of encoded proteins plays a pivotal role. To search for particular protein families, the amino acid sequence motifs suitable for selective screening of nucleotide sequence databases may be used. In this work, we suggest a novel method for simplified representation of protein amino acid sequences named Single Residue Distribution Analysis, which is applicable both for homology search and database screening. Using the procedure developed, a search for amino acid sequence motifs in sea anemone polypeptides was performed, and 14 different motifs with broad and low specificity were discriminated. The adequacy of motifs for mining toxin-like sequences was confirmed by their ability to identify 100% toxin-like anemone polypeptides in the reference polypeptide database. The employment of novel motifs for the search of polypeptide toxins in Anemonia viridis EST dataset allowed us to identify 89 putative toxin precursors. The translated and modified ESTs were scanned using a special algorithm. In addition to direct comparison with the motifs developed, the putative signal peptides were predicted and homology with known structures was examined. The suggested method may be used to retrieve structures of interest from the EST databases using simple amino acid sequence motifs as templates. The efficiency of the procedure for directed search of polypeptides is higher than that of most currently used methods. Analysis of 39939 ESTs of sea anemone Anemonia viridis resulted in identification of five protein precursors of earlier described toxins, discovery of 43 novel polypeptide toxins, and prediction of 39 putative polypeptide toxin sequences. In addition, two precursors of novel peptides presumably displaying neuronal function were disclosed.
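    The motif-based screening described above can be illustrated with a toy pattern scan over translated ESTs. The cysteine-spacing motif and the sequences below are invented for illustration; they are not among the paper's 14 actual motifs.

```python
import re

# Hypothetical cysteine-spacing motif in PROSITE-like spirit: C-x(2,6)-C-x(3)-C
# (illustrative only; not one of the paper's motifs).
motif = re.compile(r"C.{2,6}C.{3}C")

# Invented translated EST sequences.
sequences = {
    "est1": "MKTLLVCAAACLLVSCQEWTCDDSCRATCGG",
    "est2": "MKAILSTLLIVGAMAVPQQEAGGLLDLF",
}

# Flag sequences whose residue spacing matches the motif.
hits = {name: bool(motif.search(seq)) for name, seq in sequences.items()}
print(hits)
```

A real screen would additionally check the predicted signal peptide and compare hits against known toxin structures, as the abstract describes.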

  14. The mining of toxin-like polypeptides from EST database by single residue distribution analysis

    Directory of Open Access Journals (Sweden)

    Grishin Eugene

    2011-01-01

    Full Text Available. Abstract. Background: Novel high throughput sequencing technologies require permanent development of bioinformatics data processing methods. Among them, rapid and reliable identification of encoded proteins plays a pivotal role. To search for particular protein families, the amino acid sequence motifs suitable for selective screening of nucleotide sequence databases may be used. In this work, we suggest a novel method for simplified representation of protein amino acid sequences named Single Residue Distribution Analysis, which is applicable both for homology search and database screening. Results: Using the procedure developed, a search for amino acid sequence motifs in sea anemone polypeptides was performed, and 14 different motifs with broad and low specificity were discriminated. The adequacy of motifs for mining toxin-like sequences was confirmed by their ability to identify 100% toxin-like anemone polypeptides in the reference polypeptide database. The employment of novel motifs for the search of polypeptide toxins in Anemonia viridis EST dataset allowed us to identify 89 putative toxin precursors. The translated and modified ESTs were scanned using a special algorithm. In addition to direct comparison with the motifs developed, the putative signal peptides were predicted and homology with known structures was examined. Conclusions: The suggested method may be used to retrieve structures of interest from the EST databases using simple amino acid sequence motifs as templates. The efficiency of the procedure for directed search of polypeptides is higher than that of most currently used methods. Analysis of 39939 ESTs of sea anemone Anemonia viridis resulted in identification of five protein precursors of earlier described toxins, discovery of 43 novel polypeptide toxins, and prediction of 39 putative polypeptide toxin sequences. In addition, two precursors of novel peptides presumably displaying neuronal function were disclosed.

  15. RAINBIO: a mega-database of tropical African vascular plants distributions

    Directory of Open Access Journals (Sweden)

    Dauby Gilles

    2016-11-01

    Full Text Available The tropical vegetation of Africa is characterized by high levels of species diversity but is undergoing important shifts in response to ongoing climate change and increasing anthropogenic pressures. Although our knowledge of plant species distribution patterns in the African tropics has been improving over the years, it remains limited. Here we present RAINBIO, a unique comprehensive mega-database of georeferenced records for vascular plants in continental tropical Africa. The geographic focus of the database is the region south of the Sahel and north of Southern Africa, and the majority of data originate from tropical forest regions. RAINBIO is a compilation of 13 datasets, either publicly available or personal. Numerous in-depth data quality checks, both automatic and manual (via several African flora experts), were undertaken for georeferencing, standardization of taxonomic names, and identification and merging of duplicated records. The resulting RAINBIO data allow exploration and extraction of distribution data for 25,356 native tropical African vascular plant species, which represents ca. 89% of all known plant species in the area of interest. Habit information is also provided for 91% of these species.

  16. Calculation of Investments for the Distribution of GPON Technology in the village of Bishtazhin through database

    Directory of Open Access Journals (Sweden)

    MSc. Jusuf Qarkaxhija

    2013-12-01

    Full Text Available According to daily reports, the income from internet services is getting lower each year. Landline phone services are running at a loss, mobile phone services have become commonplace, and the only bright spot keeping cable operators (ISPs) in positive balance is the income from broadband services (fast internet, IPTV). Broadband technology is a term that covers multiple methods of distributing information over the internet at high speed. Some of the broadband technologies are: optical fibre, coaxial cable, DSL, wireless, mobile broadband, and satellite connection. The ultimate goal of any broadband service provider is to deliver voice, data and video through a single network, the so-called triple play service. Internet distribution remains an important issue in Kosovo, particularly in rural zones. Considering the immense development of the technologies and the different alternatives we may face, the goal of this paper is to emphasize the necessity of forecasting such an investment and to share experience in this respect. Because the investment involves many factors related to population, geography and several technologies, and because these factors change continuously, the best approach is to store all the data in a database and to use this database for different results. The database replaces the previous manual calculations with an automatic calculation procedure. This way of working improves the workflow, providing all the tools needed to make the right decision about an internet investment, considering all aspects of that investment.
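    The paper's point, replacing manual investment calculations with database queries, can be sketched with a toy relational model. All table names, items, and figures below are invented; the actual GPON cost model for Bishtazhin is not given in the abstract.

```python
import sqlite3

# Toy schema: per-village parameters and unit prices (all figures invented).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE villages (name TEXT, households INTEGER, trench_km REAL)")
con.execute("CREATE TABLE prices (item TEXT, unit_cost REAL)")
con.executemany("INSERT INTO villages VALUES (?, ?, ?)",
                [("Bishtazhin", 400, 12.0), ("OtherVillage", 150, 5.0)])
con.executemany("INSERT INTO prices VALUES (?, ?)",
                [("ont_per_household", 60.0), ("fibre_per_km", 1500.0)])

# The "manual calculation" becomes one query: cost = households * ONT price
# + trench length * fibre price.
total = con.execute("""
    SELECT v.name,
           v.households * (SELECT unit_cost FROM prices WHERE item = 'ont_per_household')
         + v.trench_km  * (SELECT unit_cost FROM prices WHERE item = 'fibre_per_km')
    FROM villages v""").fetchall()
print(dict(total))
```

Updating a unit price in the `prices` table automatically updates every village's estimate on the next query, which is the workflow improvement the paper argues for.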

  17. Adaptive data migration scheme with facilitator database and multi-tier distributed storage in LHD

    International Nuclear Information System (INIS)

    Nakanishi, Hideya; Masaki, Ohsuna; Mamoru, Kojima; Setsuo, Imazu; Miki, Nonomura; Kenji, Watanabe; Masayoshi, Moriya; Yoshio, Nagayama; Kazuo, Kawahata

    2008-01-01

    The recent 'data explosion' creates demand for highly flexible storage extension and data migration. The data amount of LHD plasma diagnostics has grown to 4.6 times what it was three years before. Frequent migration or replication among many distributed storage volumes becomes mandatory, and thus increases the human operational costs. To reduce these costs computationally, a new adaptive migration scheme has been developed on LHD's multi-tier distributed storage. So-called HSM (Hierarchical Storage Management) software usually adopts a low-level cache mechanism or simple watermarks for triggering data stage-in and stage-out between two storage devices. The new scheme, however, can deal with a number of distributed storage volumes through a facilitator database that manages all data locations together with their access histories and retrieval priorities. Not only inter-tier migration but also intra-tier replication and moves are manageable, which is a big help when extending or replacing storage equipment. The access history of each data object is also utilized to optimize the volume size of the fast but costly RAID tier, in addition to a normal cache effect for frequently retrieved data. The new scheme has been verified to be effective, so that LHD's multi-tier distributed storage, and other next-generation experiments, can obtain such flexible expandability.
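    The facilitator-database idea, choosing migration candidates from per-object access histories rather than simple watermarks, might be sketched as follows. The catalogue schema, tier names, and thresholds are invented for illustration and are not LHD's actual design.

```python
import time

# Toy facilitator catalogue: one row per data object, with its current tier
# and last access time (schema and thresholds are illustrative only).
catalogue = [
    {"name": "shot10001", "tier": "raid", "last_access": time.time() - 86400 * 400},
    {"name": "shot20002", "tier": "raid", "last_access": time.time() - 3600},
    {"name": "shot30003", "tier": "tape", "last_access": time.time() - 60},
]

def plan_migrations(catalogue, cold_days=180):
    """Stage cold objects out of fast storage, and recently used ones back in."""
    now = time.time()
    plan = []
    for obj in catalogue:
        idle_days = (now - obj["last_access"]) / 86400
        if obj["tier"] == "raid" and idle_days > cold_days:
            plan.append((obj["name"], "raid", "tape"))      # stage out
        elif obj["tier"] == "tape" and idle_days < 1:
            plan.append((obj["name"], "tape", "raid"))      # stage in
    return plan

print(plan_migrations(catalogue))
```

Because the plan is computed from the catalogue rather than from device-level watermarks, the same logic extends to intra-tier moves when equipment is added or replaced.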

  18. 21 CFR 203.38 - Sample lot or control numbers; labeling of sample units.

    Science.gov (United States)

    2010-04-01

    ... Sample lot or control numbers; labeling of sample units. (a) Lot or control number required on drug sample labeling and sample ... identifying lot or control number that will permit the tracking of the distribution of each drug sample unit ... (21 Food and Drugs, 2010-04-01)

  19. Large Survey Database: A Distributed Framework for Storage and Analysis of Large Datasets

    Science.gov (United States)

    Juric, Mario

    2011-01-01

    The Large Survey Database (LSD) is a Python framework and DBMS for distributed storage, cross-matching and querying of large survey catalogs (>10^9 rows, >1 TB). The primary driver behind its development is the analysis of Pan-STARRS PS1 data. It is specifically optimized for fast queries and parallel sweeps of positionally and temporally indexed datasets. It transparently scales to more than 10^2 nodes, and can be made to function in "shared nothing" architectures. An LSD database consists of a set of vertically and horizontally partitioned tables, physically stored as compressed HDF5 files. Vertically, we partition the tables into groups of related columns ('column groups'), storing together logically related data (e.g., astrometry, photometry). Horizontally, the tables are partitioned into partially overlapping "cells" by position in space (lon, lat) and time (t). This organization allows for fast lookups based on spatial and temporal coordinates, as well as data and task distribution. The design was inspired by the success of Google BigTable (Chang et al., 2006). Our programming model is a pipelined extension of MapReduce (Dean and Ghemawat, 2004). An SQL-like query language is used to access data. For complex tasks, map-reduce "kernels" that operate on query results on a per-cell basis can be written, with the framework taking care of scheduling and execution. The combination leverages users' familiarity with SQL, while offering a fully distributed computing environment. LSD adds little overhead compared to direct Python file I/O. In tests, we swept through 1.1 Grows (billion rows) of PanSTARRS+SDSS data (220 GB) in less than 15 minutes on a dual CPU machine. In a cluster environment, we achieved bandwidths of 17 Gbits/sec (I/O limited). Based on current experience, we believe LSD should scale to be useful for analysis and storage of LSST-scale datasets. It can be downloaded from http://mwscience.net/lsd.
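    LSD's overlapping (lon, lat) cell scheme is more elaborate than the abstract can convey; as a hedged illustration of why positional partitioning makes spatial lookups fast, here is a minimal fixed-grid cell index. The cell size and catalogue are invented, and real survey databases use schemes closer to HEALPix.

```python
from collections import defaultdict

# Hypothetical fixed-grid cell index over (lon, lat); LSD's actual layout
# (partially overlapping cells, plus a time axis) is more sophisticated.
CELL_DEG = 10.0  # cell size in degrees (invented)

def cell_id(lon, lat):
    """Map a sky position to an integer cell identifier."""
    i = int((lon % 360.0) // CELL_DEG)
    j = int((lat + 90.0) // CELL_DEG)
    return j * int(360 / CELL_DEG) + i

# Group a toy catalogue by cell so a positional query touches few cells.
catalog = [(10.2, -45.0), (10.9, -44.1), (200.0, 30.0)]
cells = defaultdict(list)
for lon, lat in catalog:
    cells[cell_id(lon, lat)].append((lon, lat))

print({k: len(v) for k, v in cells.items()})
```

A cone search then only needs to open the handful of cells whose bounds intersect the query region, which is also the unit of work for per-cell map-reduce kernels.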

  20. Design and analysis of stochastic DSS query optimizers in a distributed database system

    Directory of Open Access Journals (Sweden)

    Manik Sharma

    2016-07-01

    Full Text Available Query optimization is a challenging task in any database system. A number of heuristics have been applied in recent times, proposing new algorithms that substantially improve the performance of a query, and the hunt for better solutions continues. Ongoing developments in the field of Decision Support System (DSS) databases are producing data at an exceptional rate, and the massive volume of DSS data is useful only when it can be accessed and analyzed by researchers. Here, an innovative stochastic framework for a DSS query optimizer is proposed to further optimize the design of existing genetic query optimization approaches. The results of the Entropy Based Restricted Stochastic Query Optimizer (ERSQO) are compared with the results of the Exhaustive Enumeration Query Optimizer (EAQO), the Simple Genetic Query Optimizer (SGQO), the Novel Genetic Query Optimizer (NGQO) and the Restricted Stochastic Query Optimizer (RSQO). In terms of total cost, EAQO outperforms SGQO, NGQO, RSQO and ERSQO; however, the stochastic approaches dominate in terms of runtime. The total cost produced by ERSQO is better than that of SGQO, NGQO and RSQO by 12%, 8% and 5%, respectively. Moreover, the effect of replicating data on the total cost of DSS queries is also examined. In addition, statistical analysis revealed a 2-tailed significant correlation between the number of join operations and the total cost of a distributed DSS query. Finally, in regard to consistency, the stochastic query optimizers SGQO, NGQO, RSQO and ERSQO are 96.2%, 97.2%, 97.4% and 97.8% consistent, respectively.
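    The abstract contrasts exhaustive enumeration with stochastic search over query plans. As a minimal, generic illustration (not the paper's ERSQO), the sketch below does random-restart search over join orders under an invented pairwise transfer-cost model.

```python
import random

# Toy cost model: the cost of a join order is the sum of pairwise "transfer
# costs" between consecutive relations.  Both the cost matrix and the search
# strategy are illustrative; genetic/entropy-based optimizers are richer.
random.seed(7)
N = 6
cost = [[0 if i == j else random.randint(1, 100) for j in range(N)] for i in range(N)]

def plan_cost(order):
    return sum(cost[a][b] for a, b in zip(order, order[1:]))

def stochastic_optimize(restarts=200):
    """Random-restart search over join orders (permutations of relations)."""
    best = list(range(N))
    for _ in range(restarts):
        cand = random.sample(range(N), N)
        if plan_cost(cand) < plan_cost(best):
            best = cand
    return best, plan_cost(best)

order, c = stochastic_optimize()
print(order, c)
```

Exhaustive enumeration would examine all N! orders and always find the optimum; the stochastic version trades a small cost penalty for much lower runtime, which mirrors the EAQO-versus-ERSQO trade-off reported above.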

  1. Distributed control and data processing system with a centralized database for a BWR power plant

    International Nuclear Information System (INIS)

    Fujii, K.; Neda, T.; Kawamura, A.; Monta, K.; Satoh, K.

    1980-01-01

    Recent digital techniques, driven by advances in electronics and computer technology, have enabled very wide-scale application of computers to BWR power plant control and instrumentation. Multifarious computers, from micro to mega, have been introduced separately, and to obtain better control and instrumentation system performance, a hierarchical computer complex system architecture has been developed. This paper addresses the hierarchical computer complex system architecture, which enables more efficient introduction of computer systems to a nuclear power plant. Distributed control and processing systems, which are the components of the hierarchical computer complex, are described in some detail, and the database for the hierarchical computer complex is also discussed. The hierarchical computer complex system has been developed and is now in the detailed design stage for actual power plant application. (auth)

  2. Improving the analysis, storage and sharing of neuroimaging data using relational databases and distributed computing.

    Science.gov (United States)

    Hasson, Uri; Skipper, Jeremy I; Wilde, Michael J; Nusbaum, Howard C; Small, Steven L

    2008-01-15

    The increasingly complex research questions addressed by neuroimaging research impose substantial demands on computational infrastructures. These infrastructures need to support management of massive amounts of data in a way that affords rapid and precise data analysis, allows collaborative research, and achieves these aims securely and with minimal management overhead. Here we present an approach that overcomes many current limitations in data analysis and data sharing. This approach is based on open source database management systems that support complex data queries as an integral part of data analysis, flexible data sharing, and parallel and distributed data processing using cluster computing and Grid computing resources. We assess the strengths of these approaches as compared to current frameworks based on storage of binary or text files. We then describe in detail the implementation of such a system and provide a concrete description of how it was used to enable a complex analysis of fMRI time series data.
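    The core idea, making queries an integral part of analysis instead of parsing binary files, can be sketched with an in-memory relational store. The schema, region names, and signal values below are invented; the paper's actual system and analyses are far richer.

```python
import sqlite3

# Minimal sketch of keeping time-series samples in a relational store so that
# an analysis step becomes a declarative query (schema and data invented).
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE bold (
    subject TEXT, region TEXT, t INTEGER, signal REAL)""")
rows = [("s01", "STG", t, 100.0 + t * 0.5) for t in range(10)] + \
       [("s01", "IFG", t, 90.0 - t * 0.2) for t in range(10)]
con.executemany("INSERT INTO bold VALUES (?, ?, ?, ?)", rows)

# Per-region mean signal: no file parsing, just a GROUP BY.
means = dict(con.execute(
    "SELECT region, AVG(signal) FROM bold WHERE subject = 's01' GROUP BY region"))
print(means)
```

Because the data live in tables, the same store supports sharing (grant access to the database) and distribution (shard by subject), which is what the abstract argues for.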

  3. The Make 2D-DB II package: conversion of federated two-dimensional gel electrophoresis databases into a relational format and interconnection of distributed databases.

    Science.gov (United States)

    Mostaguir, Khaled; Hoogland, Christine; Binz, Pierre-Alain; Appel, Ron D

    2003-08-01

    The Make 2D-DB tool has been previously developed to help build federated two-dimensional gel electrophoresis (2-DE) databases on one's own web site. The purpose of our work is to extend the strength of the first package and to build a more efficient environment. Such an environment should be able to fulfill the different needs and requirements arising from both the growing use of 2-DE techniques and the increasing amount of distributed experimental data.

  4. Web geoprocessing services on GML with a fast XML database ...

    African Journals Online (AJOL)

    Nowadays quite a lot of Spatial Data Infrastructures (SDIs) exist that give the Geographic Information Systems (GIS) user community access to distributed spatial data through web technology. However, sometimes users first have to process the available spatial data to obtain the needed information.

  5. Pivot/Remote: a distributed database for remote data entry in multi-center clinical trials.

    Science.gov (United States)

    Higgins, S B; Jiang, K; Plummer, W D; Edens, T R; Stroud, M J; Swindell, B B; Wheeler, A P; Bernard, G R

    1995-01-01

    1. INTRODUCTION. Data collection is a critical component of multi-center clinical trials. Clinical trials conducted in intensive care units (ICU) are even more difficult because the acute nature of illnesses in ICU settings requires that masses of data be collected in a short time: more than a thousand data points are routinely collected for each study patient. The majority of clinical trials are still "paper-based," even if a remote data entry (RDE) system is utilized. The typical RDE system consists of a computer housed in the clinical center office and connected by modem to a centralized data coordinating center (DCC). Study data must first be recorded on a paper case report form (CRF), transcribed into the RDE system, and transmitted to the DCC. This approach requires additional monitoring, since both the paper CRF and the study database must be verified, and a paper-based RDE system cannot take full advantage of automatic data-checking routines. Much of the effort (and expense) of a clinical trial goes into ensuring that study data match the original patient data. 2. METHODS. We have developed an RDE system, Pivot/Remote, that eliminates the need for paper-based CRFs. It creates an innovative, distributed database that resides partially at the study clinical centers (CC) and partially at the DCC. Pivot/Remote is descended from technology introduced with Pivot [1]. Study data are collected at the bedside with laptop computers. A graphical user interface (GUI) displays electronic CRFs that closely mimic the normal paper-based forms, and data entry time is the same as for paper CRFs. Pull-down menus displaying the possible responses simplify the process of entering data. Edit checks are performed on most data items: for example, entered dates must conform to the temporal logic imposed by the study, and data must fall within acceptable ranges of values. Calculations, such as computing the subject's age or the APACHE II score, are automatically made as the data is entered. Data
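    The entry-time edit checks described above can be illustrated with a small validator. The specific rules, field names, and limits below are invented for illustration; they are not Pivot/Remote's actual checks.

```python
from datetime import date

# Illustrative field-level edit checks of the kind an RDE system runs at
# entry time (rules and limits invented, not Pivot/Remote's actual ones).
def check_record(rec):
    errors = []
    if not (18 <= rec["age"] <= 110):
        errors.append("age out of range")
    if rec["discharge"] < rec["admission"]:
        errors.append("discharge precedes admission")
    if rec["temp_c"] is not None and not (30.0 <= rec["temp_c"] <= 43.0):
        errors.append("implausible temperature")
    return errors

rec = {"age": 57, "admission": date(1994, 3, 1),
       "discharge": date(1994, 2, 27), "temp_c": 38.2}
print(check_record(rec))
```

Running such checks at the bedside, rather than weeks later at the coordinating center, is precisely what removes the double verification burden of paper CRFs.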

  6. Development of a PSA information database system

    International Nuclear Information System (INIS)

    Kim, Seung Hwan

    2005-01-01

    The need to develop a PSA information database has been growing rapidly: performing a PSA requires a large amount of data for analysis, risk evaluation, tracing of the process behind results, and verification of those results. A PSA information database is a system that stores all PSA-related information in a database and file system, with cross links that jump to the physical documents whenever they are needed. The Korea Atomic Energy Research Institute is developing a PSA information database system, AIMS (Advanced Information Management System for PSA). The objective is to integrate and computerize all the distributed information of a PSA into one system and to enhance the accessibility of PSA information for all PSA-related activities. This paper describes how we implemented such a database-centered application in two areas: database design and data (document) services.

  7. An optimized approach for simultaneous horizontal data fragmentation and allocation in Distributed Database Systems (DDBSs).

    Science.gov (United States)

    Amer, Ali A; Sewisy, Adel A; Elgendy, Taha M A

    2017-12-01

    With continual advances in the field of data and information management, the Distributed Database System (DDBS) remains the most in-demand tool for handling constantly growing volumes of data. However, the efficiency and adequacy of a DDBS are strongly correlated with the reliability and precision of the process by which the DDBS is designed. Several strategies for DDBS design have therefore been developed in the literature to promote DDBS performance. Among these, data fragmentation, data allocation and replication, and site clustering are the most widely used techniques; without them, DDBS design and operation would be prohibitively expensive. On the one hand, accurate, well-architected data fragmentation and allocation greatly increase data locality and improve overall DDBS throughput. On the other hand, a practical site-clustering process contributes markedly to reducing the overall transmission costs (TC). Consolidating all these strategies into a single work therefore promises a significant gain in DDBS effectiveness. In this paper, an optimized heuristic horizontal fragmentation and allocation approach is developed: the strategies above are combined into a single effective approach so that a marked improvement in DDBS productivity can be fulfilled. Internal and external evaluations are presented in detail, and the findings of the conducted experiments consistently favor improved DDBS performance.
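    The combination of horizontal fragmentation with frequency-driven allocation can be sketched generically. The predicates, sites, and access frequencies below are invented, and the paper's heuristic (including site clustering and replication) is considerably more elaborate than this greedy placement.

```python
# Sketch of predicate-based horizontal fragmentation plus greedy allocation
# by site access frequency (all predicates, sites, and frequencies invented).
rows = [
    {"id": 1, "region": "north", "balance": 500},
    {"id": 2, "region": "south", "balance": 900},
    {"id": 3, "region": "north", "balance": 100},
]

# Minterm-style predicates define the horizontal fragments.
predicates = {
    "F1": lambda r: r["region"] == "north",
    "F2": lambda r: r["region"] == "south",
}
fragments = {f: [r for r in rows if p(r)] for f, p in predicates.items()}

# Access frequency of each fragment from each site (queries per day, invented).
freq = {"F1": {"siteA": 80, "siteB": 10}, "F2": {"siteA": 5, "siteB": 60}}

# Greedy allocation: place each fragment at the site that uses it most,
# maximizing locality and so minimizing transmission cost.
allocation = {f: max(sites, key=sites.get) for f, sites in freq.items()}
print(fragments, allocation)
```

Placing F1 at siteA and F2 at siteB means the dominant access pattern at each site is served locally, which is the locality/TC argument the abstract makes.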

  8. Relational databases

    CERN Document Server

    Bell, D A

    1986-01-01

    Relational Databases explores the major advances in relational databases and provides a balanced analysis of the state of the art in relational databases. Topics covered include capture and analysis of data placement requirements; distributed relational database systems; data dependency manipulation in database schemata; and relational database support for computer graphics and computer aided design. This book is divided into three sections and begins with an overview of the theory and practice of distributed systems, using the example of INGRES from Relational Technology as illustration. The

  9. The Erasmus insurance case and a related questionnaire for distributed database management systems

    NARCIS (Netherlands)

    S.C. van der Made-Potuijt

    1990-01-01

    This is the third report concerning transaction management in the database environment. In the first report the role of the transaction manager in protecting the integrity of a database has been studied [van der Made-Potuijt 1989]. In the second report a model has been given for a

  10. A Distributed Database System for Developing Ontological and Lexical Resources in Harmony

    NARCIS (Netherlands)

    Horák, A.; Vossen, P.T.J.M.; Rambousek, A.; Gelbukh, A.

    2010-01-01

    In this article, we present the basic ideas of creating a new information-rich lexical database of Dutch, called Cornetto, that is interconnected with corresponding English synsets and a formal ontology. The Cornetto database is based on two existing electronic dictionaries - the Referentie Bestand

  11. Indexed University presses: overlap and geographical distribution in five book assessment databases

    Energy Technology Data Exchange (ETDEWEB)

    Mañana-Rodriguez, J.; Gimenez-Toledo, E

    2016-07-01

    Scholarly books were a periphery among the objects of study of bibliometrics until recent developments provided tools for assessment purposes. Among scholarly book publishers, University Presses (UPs hereinafter), subject to specific ends and constraints in their publishing activity, might also remain on a second-level periphery despite their relevance as scholarly book publishers. In this study the authors analyze the absolute and relative presence, overlap and uniquely-indexed cases of 503 UPs, by country, across five assessment-oriented databases containing data on scholarly book publishers: Book Citation Index, Scopus, Scholarly Publishers Indicators (Spain), the lists of publishers from the Norwegian system (CRISTIN) and the lists of publishers from the Finnish system (JUFO). The comparison between commercial databases and public, national databases points towards a differential pattern: prestigious UPs in the English-speaking world represent larger shares, and there is a higher overall percentage of UPs, in the commercial databases, while richness and diversity are higher in the national databases. Explicit or de facto biases towards production in English in the commercial databases, as well as diverse indexation criteria, might explain the differences observed. The analysis of the presence of UPs in different numbers of databases by country also provides a general picture of the average degree of diffusion of UPs among information systems. The analysis of 'endemic' UPs, those indexed in only one of the five databases, points to strongly different compositions of UPs in commercial and non-commercial databases. A combination of commercial and non-commercial databases seems to be the optimal option for assessment purposes, and the validity and desirability of the ongoing debate on the role of UPs can also be concluded. (Author)

  12. 7 CFR 46.20 - Lot numbers.

    Science.gov (United States)

    2010-01-01

    ... and entered on all sales tickets identifying and segregating the sales from the various shipments on hand. The lot number shall be entered on the sales tickets by the salesmen at the time of sale or by the produce dispatcher, and not by bookkeepers or others after the sales have been made. No lot number...

  13. Conversion and distribution of bibliographic information for further use on microcomputers with database software such as CDS/ISIS

    International Nuclear Information System (INIS)

    Nieuwenhuysen, P.; Besemer, H.

    1990-05-01

    This paper describes methods for working on microcomputers with data obtained from bibliographic and related databases distributed by online data banks, on CD-ROM, or on tape. We also mention some user reactions to this technique, and list the different types of software needed to perform these services. We then report on our development of software to convert data so that they can be entered into UNESCO's CDS/ISIS program (Version 2.3) for local database management on IBM microcomputers or compatibles; this software preserves the structure of the source data in records, fields, subfields and field occurrences. (author). 10 refs, 1 fig
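    The kind of conversion described, turning downloaded records into a field-tagged form that database software such as CDS/ISIS can import, can be sketched as below. The tag numbers, input dictionary format, and output layout are all invented for illustration; they are not the paper's actual conversion tables or CDS/ISIS's real import syntax.

```python
# Sketch: convert downloaded bibliographic records into a simple field-tagged
# text form for import into local database software (all tags invented).
TAG_MAP = {"TI": 24, "AU": 70, "PY": 44}   # hypothetical field tags

def convert(record):
    """One dict per record in; one tagged line per (repeatable) field out."""
    lines = []
    for field, tag in TAG_MAP.items():
        for occurrence in record.get(field, []):
            lines.append(f"{tag:03d}: {occurrence}")
    lines.append("")          # blank line = record separator
    return "\n".join(lines)

rec = {"TI": ["Distributed databases"], "AU": ["Bell, D.", "Grimson, J."],
       "PY": ["1986"]}
print(convert(rec))
```

Note how the repeatable AU field yields one tagged line per occurrence, which is the record/field/occurrence structure the paper says its converter preserves.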

  14. Geographical Distribution of Biomass Carbon in Tropical Southeast Asian Forests: A Database; TOPICAL

    International Nuclear Information System (INIS)

    Brown, S

    2001-01-01

    A database was generated of estimates of geographically referenced carbon densities of forest vegetation in tropical Southeast Asia for 1980. A geographic information system (GIS) was used to incorporate spatial databases of climatic, edaphic, and geomorphological indices and vegetation to estimate potential (i.e., in the absence of human intervention and natural disturbance) carbon densities of forests. The resulting map was then modified to estimate actual 1980 carbon density as a function of population density and climatic zone. The database covers the following 13 countries: Bangladesh, Brunei, Cambodia (Campuchea), India, Indonesia, Laos, Malaysia, Myanmar (Burma), Nepal, the Philippines, Sri Lanka, Thailand, and Vietnam. The data sets within this database are provided in three file formats: ARC/INFO(trademark) exported integer grids, ASCII (American Standard Code for Information Interchange) files formatted for raster-based GIS software packages, and generic ASCII files with x, y coordinates for use with non-GIS software packages

  15. The online database MaarjAM reveals global and ecosystemic distribution patterns in arbuscular mycorrhizal fungi (Glomeromycota).

    Science.gov (United States)

    Opik, M; Vanatoa, A; Vanatoa, E; Moora, M; Davison, J; Kalwij, J M; Reier, U; Zobel, M

    2010-10-01

    • Here, we describe a new database, MaarjAM, that summarizes publicly available Glomeromycota DNA sequence data and associated metadata. The goal of the database is to facilitate the description of distribution and richness patterns in this group of fungi. • Small subunit (SSU) rRNA gene sequences and available metadata were collated from all suitable taxonomic and ecological publications. These data have been made accessible in an open-access database (http://maarjam.botany.ut.ee). • Two hundred and eighty-two SSU rRNA gene virtual taxa (VT) were described based on a comprehensive phylogenetic analysis of all collated Glomeromycota sequences. Two-thirds of VT showed limited distribution ranges, occurring in single current or historic continents or climatic zones. Those VT that associated with a taxonomically wide range of host plants also tended to have a wide geographical distribution, and vice versa. No relationships were detected between VT richness and latitude, elevation or vascular plant richness. • The collated Glomeromycota molecular diversity data suggest limited distribution ranges in most Glomeromycota taxa and a positive relationship between the width of a taxon's geographical range and its host taxonomic range. Inconsistencies between molecular and traditional taxonomy of Glomeromycota, and shortage of data from major continents and ecosystems, are highlighted.

  16. Distributed multimedia database technologies supported by MPEG-7 and MPEG-21

    CERN Document Server

    Kosch, Harald

    2003-01-01

    Contents: Introduction (Multimedia Content: Context; Multimedia Systems and Databases; (Multi)Media Data and Multimedia Metadata; Purpose and Organization of the Book); MPEG-7: The Multimedia Content Description Standard (Introduction; MPEG-7 and Multimedia Database Systems; Principles for Creating MPEG-7 Documents; MPEG-7 Description Definition Language; Step-by-Step Approach for Creating an MPEG-7 Document; Extending the Description Schema of MPEG-7; Encoding and Decoding of MPEG-7 Documents for Delivery - Binary Format for MPEG-7; Audio Part of MPEG-7; MPEG-7 Supporting Tools and Referen

  17. 7 CFR 989.104 - Lot.

    Science.gov (United States)

    2010-01-01

    ... Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE RAISINS PRODUCED FROM GRAPES GROWN IN... inspection after reconditioning (such as sorting or drying) and whose original lot identity is no longer...

  18. Reactionary responses to the Bad Lot Objection.

    Science.gov (United States)

    Dellsén, Finnur

    2017-02-01

    As it is standardly conceived, Inference to the Best Explanation (IBE) is a form of ampliative inference in which one infers a hypothesis because it provides a better potential explanation of one's evidence than any other available, competing explanatory hypothesis. Bas van Fraassen famously objected to IBE thus formulated that we may have no reason to think that any of the available, competing explanatory hypotheses are true. While revisionary responses to the Bad Lot Objection concede that IBE needs to be reformulated in light of this problem, reactionary responses argue that the Bad Lot Objection is fallacious, incoherent, or misguided. This paper shows that the most influential reactionary responses to the Bad Lot Objection do nothing to undermine the original objection. This strongly suggests that proponents of IBE should focus their efforts on revisionary responses, i.e. on finding a more sophisticated characterization of IBE for which the Bad Lot Objection loses its bite. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Development and Field Test of a Real-Time Database in the Korean Smart Distribution Management System

    Directory of Open Access Journals (Sweden)

    Sang-Yun Yun

    2014-03-01

    Recently, a distribution management system (DMS) that can conduct periodical system analysis and control by mounting various application programs has been actively developed. In this paper, we summarize the development and demonstration of a database structure that can perform real-time system analysis and control of the Korean smart distribution management system (KSDMS). The developed database structure consists of a common information model (CIM-based off-line database (DB, a physical DB (PDB for DB establishment of the operating server, a real-time DB (RTDB for real-time server operation and remote terminal unit data interconnection, and an application common model (ACM DB for running application programs. The ACM DB for real-time system analysis and control of the application programs was developed by using a parallel table structure and a link-list model, thereby providing fast input and output as well as high execution speed of application programs. Furthermore, the ACM DB was configured with hierarchical and non-hierarchical data models to reflect the system models, improving DB size and operation speed by excluding system elements unnecessary for analysis and control. The proposed database model was implemented and tested at the Gochang and Jeju offices using a real system. Through data measurement of the remote terminal units, and through the operation and control of the application programs using these measurements, the performance, speed, and integrity of the proposed database model were validated, thereby demonstrating that this model can be applied to real systems.
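
    The parallel-table-plus-link-list layout described above can be sketched as follows. This is a minimal illustration only; the class and field names (parents as feeders, children as line sections, head/next links) are assumptions, not the actual KSDMS schema.

```python
# Sketch of a parallel-table + link-list model for fast insertion and
# traversal. All names (parent = feeder, child = section) are illustrative
# assumptions, not the actual KSDMS database schema.

class LinkedTable:
    """Parallel arrays; child rows of one parent are chained by 'next' indices."""
    def __init__(self):
        self.parent_head = []   # per-parent: index of first child row, or -1
        self.child_parent = []  # per-child: owning parent index
        self.child_next = []    # per-child: next sibling row, or -1

    def add_parent(self):
        self.parent_head.append(-1)
        return len(self.parent_head) - 1

    def add_child(self, parent):
        idx = len(self.child_parent)
        self.child_parent.append(parent)
        # push-front: O(1) insertion keeps data input fast
        self.child_next.append(self.parent_head[parent])
        self.parent_head[parent] = idx
        return idx

    def children(self, parent):
        i = self.parent_head[parent]
        while i != -1:
            yield i
            i = self.child_next[i]

t = LinkedTable()
f = t.add_parent()          # one feeder
s1 = t.add_child(f)         # two sections on that feeder
s2 = t.add_child(f)
print(list(t.children(f)))  # -> [1, 0] (most recently added first)
```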

  20. La vallée du Lot en Lot-et-Garonne : inventaire topographique

    Directory of Open Access Journals (Sweden)

    Hélène Mousset

    2012-04-01

    The restoration of navigation on the Lot was the starting point for the project to inventory the heritage of the valley in its Lot-et-Garonne section. The extent of the territory (12 riverside cantons) and of the historical perspective (from the Middle Ages to the present day) demanded rigour and clear objectives from the outset: a reasoned topographical-inventory method for a homogeneous audit of the heritage, based on a systematic survey of the built landscape and public furnishings, without preconceptions. The first result is a heritage catalogue in the form of databases. But this heterogeneous and dense documentary corpus is not a mere collection of monographs: it can and must be interrogated and exploited as a whole, bringing renewed knowledge of the territory. Without claiming to synthesize all the data for the entire valley, the examples that follow illustrate how the inventory work brings both answers and new questions, notably concerning land occupation, landscapes and architecture in this part of the Agenais. Searching for the imprint of a given period, examining the permanence of built landscapes over the long term, and observing the traces of historical changes and inflections constitute the three levels of analysis expected of an inventory covering a vast rural territory.

  1. Seismic Search Engine: A distributed database for mining large scale seismic data

    Science.gov (United States)

    Liu, Y.; Vaidya, S.; Kuzma, H. A.

    2009-12-01

    The International Monitoring System (IMS) of the CTBTO collects terabytes worth of seismic measurements from many receiver stations situated around the earth with the goal of detecting underground nuclear testing events and distinguishing them from other benign, but more common events such as earthquakes and mine blasts. The International Data Center (IDC) processes and analyzes these measurements, as they are collected by the IMS, to summarize event detections in daily bulletins. Thereafter, the data measurements are archived into a large format database. Our proposed Seismic Search Engine (SSE) will facilitate a framework for data exploration of the seismic database as well as the development of seismic data mining algorithms. Analogous to GenBank, the annotated genetic sequence database maintained by NIH, through SSE, we intend to provide public access to seismic data and a set of processing and analysis tools, along with community-generated annotations and statistical models to help interpret the data. SSE will implement queries as user-defined functions composed from standard tools and models. Each query is compiled and executed over the database internally before reporting results back to the user. Since queries are expressed with standard tools and models, users can easily reproduce published results within this framework for peer-review and making metric comparisons. As an illustration, an example query is “what are the best receiver stations in East Asia for detecting events in the Middle East?” Evaluating this query involves listing all receiver stations in East Asia, characterizing known seismic events in that region, and constructing a profile for each receiver station to determine how effective its measurements are at predicting each event. The results of this query can be used to help prioritize how data is collected, identify defective instruments, and guide future sensor placements.
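
    The example query above can be sketched as a user-defined function composed from simple primitives; the station/event/detection records and their fields below are invented for illustration and do not reflect the actual SSE or IDC data model.

```python
# Hedged sketch of the query "what are the best receiver stations in East
# Asia for detecting events in the Middle East?" over toy records.
stations = [{"id": "S1", "region": "East Asia"},
            {"id": "S2", "region": "East Asia"},
            {"id": "S3", "region": "Europe"}]
events = [{"id": "E1", "region": "Middle East"},
          {"id": "E2", "region": "Middle East"},
          {"id": "E3", "region": "Middle East"}]
detections = {("S1", "E1"), ("S1", "E2"), ("S2", "E2"), ("S1", "E3")}

def detection_rate(station, events, detections):
    """Fraction of the given events this station detected."""
    hits = sum((station["id"], e["id"]) in detections for e in events)
    return hits / len(events)

me_events = [e for e in events if e["region"] == "Middle East"]
ranked = sorted((s for s in stations if s["region"] == "East Asia"),
                key=lambda s: detection_rate(s, me_events, detections),
                reverse=True)
print([s["id"] for s in ranked])  # -> ['S1', 'S2']
```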

  2. Analysis of Java Distributed Architectures in Designing and Implementing a Client/Server Database System

    National Research Council Canada - National Science Library

    Akin, Ramis

    1998-01-01

    .... Information is scattered throughout organizations and must be easily accessible. A new solution is needed for effective and efficient management of data in today's distributed client/server environment...

  3. Distributed Database Control and Allocation. Volume 1. Frameworks for Understanding Concurrency Control and Recovery Algorithms.

    Science.gov (United States)

    1983-10-01

    an Abort(i), it forwards the operation directly to the recovery system. When the recovery system acknowledges that the operation has been processed, the...list... Abort(i): Write Ti into the abort list. Then undo all of Ti's writes by reading their before-images from the audit trail and writing them back...into the stable database. [Ack] Then, delete Ti from the active list. Restart: Process Abort(i) for each Ti on the active list. [Ack] In this algorithm

  4. Data-mining analysis of the global distribution of soil carbon in observational databases and Earth system models

    Science.gov (United States)

    Hashimoto, Shoji; Nanko, Kazuki; Ťupek, Boris; Lehtonen, Aleksi

    2017-03-01

    Future climate change will dramatically change the carbon balance in the soil, and this change will affect the terrestrial carbon stock and the climate itself. Earth system models (ESMs) are used to understand the current climate and to project future climate conditions, but the soil organic carbon (SOC) stocks simulated by ESMs and those of observational databases are not well correlated when the two are compared at fine grid scales. However, the specific key processes and factors, as well as the relationships among these factors that govern the SOC stock, remain unclear; the inclusion of such missing information would improve the agreement between modeled and observational data. In this study, we sought to identify the influential factors that govern global SOC distribution in observational databases, as well as those simulated by ESMs. We used a data-mining (machine-learning) scheme, boosted regression trees (BRT), to identify the factors affecting the SOC stock. We applied the BRT scheme to three observational databases and 15 ESM outputs from the fifth phase of the Coupled Model Intercomparison Project (CMIP5) and examined the effects of 13 variables/factors categorized into five groups (climate, soil property, topography, vegetation, and land-use history). Globally, the contributions of mean annual temperature, clay content, carbon-to-nitrogen (CN) ratio, wetland ratio, and land cover were high in the observational databases, whereas the contributions of mean annual temperature, land cover, and net primary productivity (NPP) were predominant in the SOC distribution in ESMs. A comparison of the influential factors at a global scale revealed that the most distinct differences between the SOCs from the observational databases and the ESMs were the low clay content and CN ratio contributions, and the high NPP contribution, in the ESMs.
The results of this study will aid in identifying the causes of the current mismatches between observational SOC databases and ESM outputs
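
    The idea of ranking factor contributions with boosted regression trees can be illustrated with a minimal pure-Python sketch: fit depth-1 trees (stumps) to residuals and credit each feature with the squared-error reduction its splits achieve, a crude analogue of BRT relative-influence scores. The data, feature roles, and boosting details here are toy assumptions, not the configuration used in the study.

```python
# Toy boosted-stump importance ranking. Feature 0 (say, "mean annual
# temperature") drives y; feature 1 is noise. Names and data are invented.

def fit_stump(X, y):
    """Best single-feature threshold split minimizing squared error."""
    best = None
    for j in range(len(X[0])):
        for t in sorted({row[j] for row in X}):
            left = [yi for row, yi in zip(X, y) if row[j] <= t]
            right = [yi for row, yi in zip(X, y) if row[j] > t]
            if not left or not right:
                continue
            ml, mr = sum(left) / len(left), sum(right) / len(right)
            sse = (sum((v - ml) ** 2 for v in left)
                   + sum((v - mr) ** 2 for v in right))
            if best is None or sse < best[0]:
                best = (sse, j, t, ml, mr)
    return best

def boost(X, y, rounds=20, lr=0.5):
    importance = [0.0] * len(X[0])
    resid = list(y)
    for _ in range(rounds):
        mean = sum(resid) / len(resid)
        base = sum((v - mean) ** 2 for v in resid)
        sse, j, t, ml, mr = fit_stump(X, resid)
        importance[j] += base - sse          # error reduction credited to j
        resid = [r - lr * (ml if row[j] <= t else mr)
                 for row, r in zip(X, resid)]
    return importance

X = [[1, 5], [2, 3], [3, 9], [4, 1], [5, 7], [6, 2]]
y = [2.0, 4.1, 6.0, 8.2, 10.1, 12.0]
imp = boost(X, y)
print(imp[0] > imp[1])  # the informative feature gets the larger score
```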

  5. A distributed atomic physics database and modeling system for plasma spectroscopy

    International Nuclear Information System (INIS)

    Nash, J.K.; Liedahl, D.; Chen, M.H.; Iglesias, C.A.; Lee, R.W.; Salter, J.M.

    1995-08-01

    We are undertaking to develop a set of computational capabilities which will facilitate the access, manipulation, and understanding of atomic data in calculations of x-ray spectral modeling. In this present limited description we will emphasize the objectives for this work, the design philosophy, and aspects of the atomic database, as a more complete description of this work is available. The project is referred to as the Plasma Spectroscopy Initiative; the computing environment is called PSI, or the "PSI shell" since the primary interface resembles a UNIX shell window. The working group consists of researchers in the fields of x-ray plasma spectroscopy, atomic physics, plasma diagnostics, line shape theory, astrophysics, and computer science. To date, our focus has been to develop the software foundations, including the atomic physics database, and to apply the existing capabilities to a range of working problems. These problems have been chosen in part to exercise the overall design and implementation of the shell. For successful implementation the final design must have great flexibility since our goal is not simply to satisfy our interests but to provide a tool of general use to the community

  6. Mars Global Digital Dune Database (MGD3): Global dune distribution and wind pattern observations

    Science.gov (United States)

    Hayward, Rosalyn K.; Fenton, Lori; Titus, Timothy N.

    2014-01-01

    The Mars Global Digital Dune Database (MGD3) is complete and now extends from 90°N to 90°S latitude. The recently released south pole (SP) portion (MC-30) of MGD3 adds ∼60,000 km2 of medium to large-size dark dune fields and ∼15,000 km2 of sand deposits and smaller dune fields to the previously released equatorial (EQ, ∼70,000 km2), and north pole (NP, ∼845,000 km2) portions of the database, bringing the global total to ∼975,000 km2. Nearly all NP dunes are part of large sand seas, while the majority of EQ and SP dune fields are individual dune fields located in craters. Despite the differences between Mars and Earth, their dune and dune field morphologies are strikingly similar. Bullseye dune fields, named for their concentric ring pattern, are the exception, possibly owing their distinctive appearance to winds that are unique to the crater environment. Ground-based wind directions are derived from slipface (SF) orientation and dune centroid azimuth (DCA), a measure of the relative location of a dune field inside a crater. SF and DCA often preserve evidence of different wind directions, suggesting the importance of local, topographically influenced winds. In general however, ground-based wind directions are broadly consistent with expected global patterns, such as polar easterlies. Intriguingly, between 40°S and 80°S latitude both SF and DCA preserve their strongest, though different, dominant wind direction, with transport toward the west and east for SF-derived winds and toward the north and west for DCA-derived winds.
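
    Summarizing slipface azimuths into a single dominant wind direction requires circular statistics, since a plain arithmetic mean fails near the 0/360 degree wrap-around. A minimal sketch of a circular (vector) mean follows; the azimuth values are invented, not taken from MGD3.

```python
# Circular mean of azimuths in degrees; naive averaging of angles that
# straddle north (0/360) gives a direction pointing the wrong way.
import math

def circular_mean_deg(azimuths):
    x = sum(math.cos(math.radians(a)) for a in azimuths)
    y = sum(math.sin(math.radians(a)) for a in azimuths)
    return math.degrees(math.atan2(y, x)) % 360

# Slipfaces clustered just around north:
sf = [350, 10, 20]
print(circular_mean_deg(sf))  # -> ~7 (a naive mean gives ~127)
```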

  7. Geographical distribution of centenarians in Colombia: an analysis of three databases

    Directory of Open Access Journals (Sweden)

    Diego Rosselli

    2017-07-01

    Conclusions: Although the results are consistent with the number and geographical distribution of centenarians, some errors may be found in the date of birth stated in the records, which is the basis for estimating age in the three sources. Other factors potentially involved in the results may be physical activity, family and community support, low stress and healthy diet in these regions.

  8. The economic lot size and relevant costs

    NARCIS (Netherlands)

    Corbeij, M.H.; Jansen, R.A.; Grübström, R.W.; Hinterhuber, H.H.; Lundquist, J.

    1993-01-01

    In many accounting textbooks it is strongly argued that decisions should always be evaluated on relevant costs; that is variable costs and opportunity costs. Surprisingly, when it comes to Economic Order Quantities or Lot Sizes, some textbooks appear to be less straightforward. The question whether
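
    The lot size at issue is the classic economic order quantity, Q* = sqrt(2DS/H), which minimizes the sum of ordering and holding costs; whether the holding cost H should include only relevant (variable and opportunity) costs is exactly the textbook controversy the abstract raises. A minimal sketch with invented numbers:

```python
# Classic EOQ: D = annual demand (units), S = cost per order,
# H = holding cost per unit per year. Numbers are illustrative only.
import math

def eoq(demand, order_cost, holding_cost):
    return math.sqrt(2 * demand * order_cost / holding_cost)

print(eoq(1000, 50, 4))  # -> ~158.1 units per lot
```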

  9. 7 CFR 993.104 - Lot.

    Science.gov (United States)

    2010-01-01

    ... means any quantity of prunes delivered by one producer or one dehydrator to a handler on which... purposes of §§ 993.50 and 993.150 means: (1) With respect to in-line inspection either (i) the aggregate... identification (e.g., brand) if in consumer packages, and offered for inspection as a lot; or (ii) prunes...

  10. Antibiotic distribution channels in Thailand: results of key-informant interviews, reviews of drug regulations and database searches.

    Science.gov (United States)

    Sommanustweechai, Angkana; Chanvatik, Sunicha; Sermsinsiri, Varavoot; Sivilaikul, Somsajee; Patcharanarumol, Walaiporn; Yeung, Shunmay; Tangcharoensathien, Viroj

    2018-02-01

    To analyse how antibiotics are imported, manufactured, distributed and regulated in Thailand. We gathered information on antibiotic distribution in Thailand in in-depth interviews - with 43 key informants from farms, health facilities, pharmaceutical and animal feed industries, private pharmacies and regulators - and in database and literature searches. In 2016-2017, licensed antibiotic distribution in Thailand involved over 700 importers and about 24 000 distributors - e.g. retail pharmacies and wholesalers. Thailand imports antibiotics and active pharmaceutical ingredients. There is no system for monitoring the distribution of active ingredients, some of which are used directly on farms, without being processed. Most antibiotics can be bought from pharmacies, for home or farm use, without a prescription. Although the 1987 Drug Act classified most antibiotics as "dangerous drugs", it only classified a few of them as prescription-only medicines and placed no restrictions on the quantities of antibiotics that could be sold to any individual. Pharmacists working in pharmacies are covered by some of the Act's regulations, but the quality of their dispensing and prescribing appears to be largely reliant on their competences. In Thailand, most antibiotics are easily and widely available from retail pharmacies, without a prescription. If the inappropriate use of active pharmaceutical ingredients and antibiotics is to be reduced, we need to reclassify and restrict access to certain antibiotics and to develop systems to audit the dispensing of antibiotics in the retail sector and track the movements of active ingredients.

  11. Database Replication

    CERN Document Server

    Kemme, Bettina

    2010-01-01

    Database replication is widely used for fault-tolerance, scalability and performance. The failure of one database replica does not stop the system from working, as available replicas can take over the tasks of the failed replica. Scalability can be achieved by distributing the load across all replicas, and adding new replicas should the load increase. Finally, database replication can provide fast local access, even if clients are geographically distributed, as long as data copies are located close to clients. Despite its advantages, replication is not a straightforward technique to apply, and
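
    The load-distribution idea can be sketched minimally: route read-only queries round-robin across replicas while writes go to a primary. This illustrates the scalability argument only; real replication protocols must also handle the consistency and failover issues the text alludes to, and all names here are illustrative.

```python
# Toy read/write router over a replicated database. Routing reads by a
# simple SELECT-prefix check is an illustrative simplification.
import itertools

class ReplicatedDB:
    def __init__(self, primary, replicas):
        self.primary = primary
        self._cycle = itertools.cycle(replicas)

    def route(self, query):
        if query.strip().upper().startswith("SELECT"):
            return next(self._cycle)  # reads: spread across replicas
        return self.primary           # writes: single primary

db = ReplicatedDB("primary", ["r1", "r2"])
print([db.route(q) for q in
       ["SELECT 1", "SELECT 2", "INSERT ...", "SELECT 3"]])
# -> ['r1', 'r2', 'primary', 'r1']
```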

  12. Pediatric Vital Sign Distribution Derived From a Multi-Centered Emergency Department Database

    Directory of Open Access Journals (Sweden)

    Robert J. Sepanski

    2018-03-01

    Background: We hypothesized that current vital sign thresholds used in pediatric emergency department (ED) screening tools do not reflect observed vital signs in this population. We analyzed a large multi-centered database to develop heart rate (HR) and respiratory rate centile rankings and z-scores that could be incorporated into electronic health record ED screening tools, and we compared our derived centiles to previously published centiles and Pediatric Advanced Life Support (PALS) vital sign thresholds. Methods: Initial HR and respiratory rate data entered into the Cerner™ electronic health record at 169 participating hospitals' EDs over 5 years (2009 through 2013) as part of routine care were analyzed. Analysis was restricted to non-admitted children (0 to <18 years). Centile curves and z-scores were developed using generalized additive models for location, scale, and shape. A split-sample validation using two-thirds of the sample was compared with the remaining one-third. Centile values were compared with results from previous studies and guidelines. Results: HR and RR centiles and z-scores were determined from ~1.2 million records. Empirical 95th centiles for HR and respiratory rate were higher than previously published results, and both deviated from PALS guideline recommendations. Conclusion: Heart and respiratory rate centiles derived from a large real-world non-hospitalized ED pediatric population can inform the modification of electronic and paper-based screening tools to stratify children by the degree of deviation from normal for age, rather than dichotomizing children into groups having "normal" versus "abnormal" vital signs. Furthermore, these centiles may also be useful in paper-based screening tools and bedside alarm limits for children in areas other than the ED, and may establish improved alarm limits for bedside monitors.
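
    One common way age-specific centile curves are turned into z-scores is the LMS (lambda-mu-sigma) transformation, z = ((x/M)^L - 1)/(L*S). The sketch below applies it to a hypothetical heart-rate reference; the parameter values are invented, not the paper's fitted centiles.

```python
# LMS z-score: L (skewness), M (median), S (coefficient of variation)
# for a given age. Reference values below are illustrative assumptions.
import math

def lms_z(x, L, M, S):
    if L == 0:
        return math.log(x / M) / S
    return ((x / M) ** L - 1) / (L * S)

# Hypothetical reference for one age band: median HR 120 bpm.
z = lms_z(150, L=0.3, M=120.0, S=0.12)
print(round(z, 2))  # -> 1.92
```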

  13. Evaluating and categorizing the reliability of distribution coefficient values in the sorption database (4)

    International Nuclear Information System (INIS)

    Suyama, Tadahiro; Tachi, Yukio; Ganter, Charlotte; Kunze, Susanne; Ochs, Michael

    2011-02-01

    Sorption of radionuclides in bentonites and rocks is one of the key processes in the safe geological disposal of radioactive waste. Japan Atomic Energy Agency (JAEA) has developed a sorption database (JAEA-SDB) which includes an extensive compilation of sorption Kd data from batch experiments, extracted from published literature. JAEA published the first SDB as an important basis for the H12 performance assessment (PA), and has been continuing to improve and update the SDB in view of potential future data needs, focusing on assuring the desired quality level and on practical applications to Kd-setting for the geological environment. The JAEA-SDB includes more than 24,000 Kd data obtained under various conditions and methods, and of differing reliability. Accordingly, quality assurance (QA) and classification guidelines/criteria have been developed in order to evaluate the reliability of each Kd value. The reliability of Kd values of key radionuclides for bentonite, mudstone, granite, Fe-oxide/hydroxide and Al-oxide/hydroxide has already been evaluated. This QA information has been made accessible through the web-based JAEA-SDB since March 2009. In this report, the QA/classification of selected entries in the JAEA-SDB, focusing on sorption of key radionuclides (Th, Np, Am, Se and Cs) on tuff, which occurs widely in geological environments, was done following the approach/guideline defined in our previous report. As a result, the reliability of 560 Kd values was evaluated and classified. This classification scheme is expected to make it possible to obtain a quick overview of the available data from the SDB, and to provide suitable access to the respective data for Kd-setting in PA. (author)
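
    The general shape of such a QA-classification pass can be sketched as scoring each Kd entry by which experimental conditions it reports and binning it into reliability classes. The fields and thresholds below are assumptions for illustration only; the actual JAEA-SDB guideline criteria differ.

```python
# Purely illustrative reliability classification of Kd records.
# Field names and class thresholds are invented, not JAEA-SDB criteria.

REQUIRED = ["solid_liquid_ratio", "ph", "ionic_strength", "equilibration_time"]

def classify(entry):
    score = sum(1 for f in REQUIRED if entry.get(f) is not None)
    return {4: "A", 3: "B", 2: "C"}.get(score, "D")

rec = {"nuclide": "Cs", "solid": "tuff", "kd": 120.0,
       "ph": 8.1, "ionic_strength": 0.1, "solid_liquid_ratio": None,
       "equilibration_time": 7}
print(classify(rec))  # -> 'B'
```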

  14. Evaluating and categorizing the reliability of distribution coefficient values in the sorption database (3)

    International Nuclear Information System (INIS)

    Ochs, Michael; Kunze, Susanne; Suyama, Tadahiro; Tachi, Yukio; Yui, Mikazu

    2010-02-01

    Sorption of radionuclides in bentonites and rocks is one of the key processes in the safe geological disposal of radioactive waste. Japan Atomic Energy Agency (JAEA) has developed a sorption database (JAEA-SDB) which includes an extensive compilation of sorption Kd data from batch experiments, extracted from published literature. JAEA published the first SDB as an important basis for the H12 performance assessment (PA), and has been continuing to improve and update the SDB in view of potential future data needs, focusing on assuring the desired quality level and on practical applications to Kd-setting for the geological environment. The JAEA-SDB includes more than 24,000 Kd data obtained under various conditions and methods, and of differing reliability. Accordingly, quality assurance (QA) and classification guidelines/criteria have been developed in order to evaluate the reliability of each Kd value. The reliability of Kd values of key radionuclides for the bentonite and mudstone systems has already been evaluated. To make this QA information available, the new web-based JAEA-SDB was published in March 2009. In this report, the QA/classification of selected entries for key radionuclides (Th, Np, Am, Se and Cs) in the JAEA-SDB was done following the approach/guideline defined in our previous report, focusing on granitic rocks, which are related to the reference systems in the H12 PA and to possible applications in the context of URL activities, and on Fe-oxide/hydroxide and Al-oxide/hydroxide, which occur widely in geological environments. As a result, the reliability of 1,373 Kd values was evaluated and classified. This classification scheme is expected to make it possible to obtain a quick overview of the available data from the SDB, and to provide suitable access to the respective data for Kd-setting in PA. (author)

  15. Model checking software for phylogenetic trees using distribution and database methods

    Directory of Open Access Journals (Sweden)

    Requeno José Ignacio

    2013-12-01

    Model checking, a generic and formal paradigm stemming from computer science based on temporal logics, has been proposed for the study of biological properties that emerge from the labeling of the states defined over the phylogenetic tree. This strategy allows us to use generic software tools already present in the industry. However, the performance of traditional model checking is penalized when scaling the system for large phylogenies. To this end, two strategies are presented here. The first one consists of partitioning the phylogenetic tree into a set of subgraphs each one representing a subproblem to be verified so as to speed up the computation time and distribute the memory consumption. The second strategy is based on uncoupling the information associated to each state of the phylogenetic tree (mainly, the DNA sequence and exporting it to an external tool for the management of large information systems. The integration of all these approaches outperforms the results of monolithic model checking and helps us to execute the verification of properties in a real phylogenetic tree.
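
    The first strategy, partitioning a tree into subtrees so a property can be verified per part and the results combined, can be sketched minimally as follows. The tree shape and the checked property (every leaf carries a label) are toy assumptions, not the temporal-logic properties of the paper.

```python
# Partition a tree at a chosen depth, then verify a simple property on
# each part independently. Structure and property are illustrative only.

tree = ("root", [("a", [("a1", []), ("a2", [])]),
                 ("b", [("b1", [])])])

def subtrees_at_depth(node, depth):
    name, children = node
    if depth == 0 or not children:
        return [node]
    return [s for c in children for s in subtrees_at_depth(c, depth - 1)]

def all_leaves_labeled(node):
    name, children = node
    if not children:
        return bool(name)
    return all(all_leaves_labeled(c) for c in children)

parts = subtrees_at_depth(tree, 1)  # verify each part independently
print([p[0] for p in parts], all(all_leaves_labeled(p) for p in parts))
# -> ['a', 'b'] True
```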

  16. USBombus, a database of contemporary survey data for North American Bumble Bees (Hymenoptera, Apidae, Bombus) distributed in the United States.

    Science.gov (United States)

    Koch, Jonathan B; Lozier, Jeffrey; Strange, James P; Ikerd, Harold; Griswold, Terry; Cordes, Nils; Solter, Leellen; Stewart, Isaac; Cameron, Sydney A

    2015-01-01

    Bumble bees (Hymenoptera: Apidae, Bombus) are pollinators of wild and economically important flowering plants. However, at least four bumble bee species have declined significantly in population abundance and geographic range relative to historic estimates, and one species is possibly extinct. While a wealth of historic data is now available in online databases for many of the North American species found to be in decline, systematic survey data for stable species is still not publicly available. The availability of contemporary survey data is critically important for the future monitoring of wild bumble bee populations. Without such data, the ability to ascertain the conservation status of bumble bees in the United States will remain challenging. This paper describes USBombus, a large database that represents the outcomes of one of the largest standardized surveys of bumble bee pollinators (Hymenoptera, Apidae, Bombus) globally. The motivation to collect live bumble bees across the United States was to examine the decline and conservation status of Bombus affinis, B. occidentalis, B. pensylvanicus, and B. terricola. Prior to our national survey of bumble bees in the United States from 2007 to 2010, there had only been regional accounts of bumble bee abundance and richness. In addition to surveying declining bumble bees, we also collected and documented a diversity of co-occurring bumble bees. However, we have not yet completely reported their distribution and diversity on a public online platform. Now, for the first time, we report the geographic distribution of bumble bees reported to be in decline (Cameron et al. 2011), as well as bumble bees that appeared to be stable on a large geographic scale in the United States (not in decline). In this database we report a total of 17,930 adult occurrence records across 397 locations and 39 species of Bombus detected in our national survey. We summarize their abundance and distribution across the United States and

  17. Where the bugs are: analyzing distributions of bacterial phyla by descriptor keyword search in the nucleotide database.

    Science.gov (United States)

    Squartini, Andrea

    2011-07-26

    The associations between bacteria and environment underlie their preferential interactions with given physical or chemical conditions. Microbial ecology aims at extracting conserved patterns of occurrence of bacterial taxa in relation to defined habitats and contexts. In the present report the NCBI nucleotide sequence database is used as a dataset to extract information on the distribution of each of the 24 phyla of the bacteria superkingdom and of the Archaea. Over two and a half million records are filtered by their cross-association with each of 48 sets of keywords, defined to cover natural or artificial habitats, interactions with plant, animal or human hosts, and physical-chemical conditions. The results are processed showing: (a) how the different descriptors enrich or deplete the proportions at which the phyla occur in the total database; (b) in which order of abundance the different keywords score for each phylum (preferred habitats or conditions), and to what extent phyla are clustered to few descriptors (specific) or spread across many (cosmopolitan); (c) which keywords individuate the communities ranking highest for diversity and evenness. A number of cues emerge from the results, contributing to sharpen the picture on the functional systematic diversity of prokaryotes. Suggestions are given for a future automated service dedicated to refining and updating such analyses via public bioinformatic engines.
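
    The enrichment computation in point (a) can be sketched as comparing a phylum's share of records matching a keyword against its share of the whole database. The counts below are invented for illustration.

```python
# Enrichment ratio: a phylum's proportion under a keyword relative to its
# proportion in the total database (>1 means enriched). Counts are toy data.

def enrichment(phylum, keyword_counts, total_counts):
    kw_share = keyword_counts[phylum] / sum(keyword_counts.values())
    db_share = total_counts[phylum] / sum(total_counts.values())
    return kw_share / db_share

total = {"Proteobacteria": 800, "Firmicutes": 150, "Cyanobacteria": 50}
hot_spring = {"Proteobacteria": 40, "Firmicutes": 10, "Cyanobacteria": 50}
print(enrichment("Cyanobacteria", hot_spring, total))  # -> ~10x enriched
```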

  18. Evaluation of sorption distribution coefficient of Cs onto granite using sorption data collected in sorption database and sorption model

    International Nuclear Information System (INIS)

    Nagasaki, S.

    2013-01-01

    Based on the sorption distribution coefficients (Kd) of Cs onto granite collected from the JAERI Sorption Database (SDB), the parameters for a two-site model without the triple-layer structure were optimized. Comparing the experimentally measured Kd values of Cs onto Mizunami granite carried out by JAEA with the Kd values predicted by the model, the effect of the ionic strength on the Kd values of Cs onto granite was evaluated. It was found that Kd values could be determined using the content of biotite in granite at a sodium concentration ([Na]) of 1 × 10^-2 to 5 × 10^-1 mol/dm^3. It was suggested that in high ionic strength solutions, the sorption of Cs onto other minerals such as microcline should also be taken into account. (author)
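
    The finding that whole-rock Kd tracks biotite content suggests a component-additivity approximation: scale a biotite-specific Kd by the biotite fraction of the rock. The sketch below illustrates only that idea; the numbers are invented and other minerals (e.g. microcline at high ionic strength) are deliberately neglected, as the abstract warns.

```python
# Component-additivity sketch: whole-rock Kd from the biotite weight
# fraction and a biotite-specific Kd at the same Na concentration.
# Both input values are illustrative assumptions.

def granite_kd(f_biotite, kd_biotite):
    # assumes biotite dominates Cs sorption; other minerals neglected
    return f_biotite * kd_biotite

print(granite_kd(0.05, 2000.0))
```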

  19. Evaluation of sorption distribution coefficient of Cs onto granite using sorption data collected in sorption database and sorption model

    Energy Technology Data Exchange (ETDEWEB)

    Nagasaki, S., E-mail: nagasas@mcmaster.ca [McMaster Univ., Hamilton, Ontario (Canada)

    2013-07-01

    Based on the sorption distribution coefficients (Kd) of Cs onto granite collected from the JAERI Sorption Database (SDB), the parameters for a two-site model without the triple-layer structure were optimized. Comparing the experimentally measured Kd values of Cs onto Mizunami granite carried out by JAEA with the Kd values predicted by the model, the effect of the ionic strength on the Kd values of Cs onto granite was evaluated. It was found that Kd values could be determined using the content of biotite in granite at a sodium concentration ([Na]) of 1 × 10^-2 to 5 × 10^-1 mol/dm^3. It was suggested that in high ionic strength solutions, the sorption of Cs onto other minerals such as microcline should also be taken into account. (author)

  20. Global spatiotemporal distribution of soil respiration modeled using a global database

    Science.gov (United States)

    Hashimoto, S.; Carvalhais, N.; Ito, A.; Migliavacca, M.; Nishina, K.; Reichstein, M.

    2015-07-01

    The flux of carbon dioxide from the soil to the atmosphere (soil respiration) is one of the major fluxes in the global carbon cycle. At present, the accumulated field observation data cover a wide range of geographical locations and climate conditions. However, there are still large uncertainties in the magnitude and spatiotemporal variation of global soil respiration. Using a global soil respiration data set, we developed a climate-driven model of soil respiration by modifying and updating Raich's model, and the global spatiotemporal distribution of soil respiration was examined using this model. The model was applied at a spatial resolution of 0.5° and a monthly time step. Soil respiration was divided into the heterotrophic and autotrophic components of respiration using an empirical model. The estimated mean annual global soil respiration was 91 Pg C yr^-1 (between 1965 and 2012; Monte Carlo 95 % confidence interval: 87-95 Pg C yr^-1) and increased at the rate of 0.09 Pg C yr^-2. The contribution of soil respiration from boreal regions to the total increase in global soil respiration was on the same order of magnitude as that of tropical and temperate regions, despite a lower absolute magnitude of soil respiration in boreal regions. The estimated annual global heterotrophic respiration and global autotrophic respiration were 51 and 40 Pg C yr^-1, respectively. The global soil respiration responded to the increase in air temperature at the rate of 3.3 Pg C yr^-1 °C^-1, with Q10 = 1.4. Our study scaled up observed soil respiration values from field measurements to estimate global soil respiration and provide a data-oriented estimate of global soil respiration. The estimates are based on a semi-empirical model parameterized with over one thousand data points. Our analysis indicates that the climate controls on soil respiration may translate into an increasing trend in global soil respiration, and our analysis emphasizes the relevance of the soil carbon flux from soil to
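
    The temperature sensitivity quoted above follows the standard Q10 relationship, R(T) = R_ref * Q10^((T - T_ref)/10). With the study's global Q10 of 1.4, a 1 degree C warming raises respiration by about 3.4 percent, which applied to ~91 Pg C yr^-1 lands near the reported ~3 Pg C yr^-1 per degree response. The reference values in this sketch are illustrative choices, not the model's fitted parameters.

```python
# Q10 temperature response of soil respiration. r_ref and t_ref are
# illustrative; q10 = 1.4 is the study's global estimate.

def q10_respiration(t, r_ref=91.0, t_ref=0.0, q10=1.4):
    return r_ref * q10 ** ((t - t_ref) / 10)

delta = q10_respiration(1.0) - q10_respiration(0.0)
print(round(delta, 1))  # increase (Pg C / yr) per +1 degree C
```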

  1. Military Observer Mission Ecuador-Peru (MOMEP) Doing a Lot with a Little.

    Science.gov (United States)

    1997-06-01

    IPS), URL: <http://web.maxwell.syr.edu.nativew...aphy/latinam/ecuador/borderl6.html>, accessed 10 November 1996, pp. 1-2. "Evacuees in Loja Number...OBSERVER MISSION ECUADOR-PERU (MOMEP) DOING A LOT WITH A LITTLE BY LIEUTENANT COLONEL KEVIN M. HIGGINS United States Army DISTRIBUTION STATEMENT A...MISSION ECUADOR-PERU (MOMEP) Doing A Lot With a Little by Lieutenant Colonel Kevin M. Higgins United States Army Naval Postgraduate School Special

  2. DOT Online Database

    Science.gov (United States)

    Document Database Website (provided by MicroSearch) with table of contents, full-text search, and login, giving access to databases of Advisory Circulars (2,092 records) and to data collection and distribution policies.

  3. Negative Effects of Learning Spreadsheet Management on Learning Database Management

    Science.gov (United States)

    Vágner, Anikó; Zsakó, László

    2015-01-01

    A lot of students learn spreadsheet management before database management. The similarities between the two can cause a lot of negative effects when students learn database management. In this article, we consider these similarities and explain what can cause problems. First, we analyse basic concepts such as table, database, row, cell, reference, etc. Then, we…

  4. SMALL-SCALE AND GLOBAL DYNAMOS AND THE AREA AND FLUX DISTRIBUTIONS OF ACTIVE REGIONS, SUNSPOT GROUPS, AND SUNSPOTS: A MULTI-DATABASE STUDY

    Energy Technology Data Exchange (ETDEWEB)

    Muñoz-Jaramillo, Andrés; Windmueller, John C.; Amouzou, Ernest C.; Longcope, Dana W. [Department of Physics, Montana State University, Bozeman, MT 59717 (United States); Senkpeil, Ryan R. [Department of Physics, Purdue University, West Lafayette, IN 47907 (United States); Tlatov, Andrey G. [Kislovodsk Mountain Astronomical Station of the Pulkovo Observatory, Kislovodsk 357700 (Russian Federation); Nagovitsyn, Yury A. [Pulkovo Astronomical Observatory, Russian Academy of Sciences, St. Petersburg 196140 (Russian Federation); Pevtsov, Alexei A. [National Solar Observatory, Sunspot, NM 88349 (United States); Chapman, Gary A.; Cookson, Angela M. [San Fernando Observatory, Department of Physics and Astronomy, California State University Northridge, Northridge, CA 91330 (United States); Yeates, Anthony R. [Department of Mathematical Sciences, Durham University, South Road, Durham DH1 3LE (United Kingdom); Watson, Fraser T. [National Solar Observatory, Tucson, AZ 85719 (United States); Balmaceda, Laura A. [Institute for Astronomical, Terrestrial and Space Sciences (ICATE-CONICET), San Juan (Argentina); DeLuca, Edward E. [Harvard-Smithsonian Center for Astrophysics, Cambridge, MA 02138 (United States); Martens, Petrus C. H., E-mail: munoz@solar.physics.montana.edu [Department of Physics and Astronomy, Georgia State University, Atlanta, GA 30303 (United States)

    2015-02-10

    In this work, we take advantage of 11 different sunspot group, sunspot, and active region databases to characterize the area and flux distributions of photospheric magnetic structures. We find that, when taken separately, different databases are better fitted by different distributions (as has been reported previously in the literature). However, we find that all our databases can be reconciled by the simple application of a proportionality constant, and that, in reality, different databases are sampling different parts of a composite distribution. This composite distribution is made up of a linear combination of Weibull and log-normal distributions, where a pure Weibull (log-normal) characterizes the distribution of structures with fluxes below (above) 10^21 Mx (10^22 Mx). Additionally, we demonstrate that the Weibull distribution shows the expected linear behavior of a power-law distribution (when extended to smaller fluxes), making our results compatible with the results of Parnell et al. We propose that this is evidence of two separate mechanisms giving rise to visible structures on the photosphere: one directly connected to the global component of the dynamo (and the generation of bipolar active regions), and the other with the small-scale component of the dynamo (and the fragmentation of magnetic structures due to their interaction with turbulent convection).
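    The composite distribution described above can be sketched as a two-component mixture density. The mixing weight and all parameter values below are arbitrary placeholders for illustration, not the fitted values from the paper:

```python
import math

def weibull_pdf(x, shape, scale):
    """Weibull density; dominates the low-flux end of the composite."""
    z = x / scale
    return (shape / scale) * z ** (shape - 1.0) * math.exp(-z ** shape)

def lognormal_pdf(x, mu, sigma):
    """Log-normal density; dominates the high-flux end of the composite."""
    return math.exp(-((math.log(x) - mu) ** 2) / (2.0 * sigma ** 2)) \
        / (x * sigma * math.sqrt(2.0 * math.pi))

def composite_pdf(x, w=0.5, shape=1.5, scale=1.0, mu=0.0, sigma=0.5):
    """Linear combination w * Weibull + (1 - w) * log-normal."""
    return w * weibull_pdf(x, shape, scale) + (1.0 - w) * lognormal_pdf(x, mu, sigma)
```

    Because each component integrates to one, any convex combination of the two is again a valid density.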

  5. Evaluation of Radon Pollution in Underground Parking Lots by Discomfort Index

    Directory of Open Access Journals (Sweden)

    AH Bu-Olayan

    2016-06-01

    Full Text Available Introduction: Recent studies of public underground parking lots showed the influence of radon concentration and the probable discomfort caused by parking cars. Materials and Methods: Radon concentration was measured in semi-closed public parking lots in the six governorates of Kuwait, using a Durridge RAD7 radon detector (USA). Results: The peak radon concentration in the parking lots of the Kuwait governorates was relatively higher during winter (63.15 Bq/m3) compared to summer (41.73 Bq/m3). Radon in the evaluated parking lots revealed a mean annual absorbed dose (DRn) of 0.02 mSv/y and an annual effective dose (HE) of 0.06 mSv/y. Conclusion: This study validated the influence of relative humidity and temperature as the major components of the discomfort index (DI). The mean annual absorbed and effective doses of radon in the evaluated parking lots were found to be below the permissible limits. However, high radon DRn and HE were reported when the assessment included the parking lots, the surrounding residential apartments, and office premises. Furthermore, the time-series analysis indicated significant variations in the seasonal and site-wise distribution of radon concentrations in the indoor evaluated parking lots of the six Kuwait governorates.
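    The discomfort index mentioned above combines temperature and relative humidity. One common formulation is Thom's discomfort index; the paper's exact formula is not reproduced in the abstract, so the sketch below assumes this standard form:

```python
def thom_discomfort_index(t_c, rh_percent):
    """Thom's discomfort index from air temperature (deg C) and
    relative humidity (%). At 100% humidity it reduces to the air
    temperature itself; drier air lowers the perceived discomfort."""
    return t_c - 0.55 * (1.0 - 0.01 * rh_percent) * (t_c - 14.5)
```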

  6. A Unified Peer-to-Peer Database Framework for XQueries over Dynamic Distributed Content and its Application for Scalable Service Discovery

    CERN Document Server

    Hoschek, Wolfgang

    In a large distributed system spanning administrative domains such as a Grid, it is desirable to maintain and query dynamic and timely information about active participants such as services, resources and user communities. The web services vision promises that programs are made more flexible and powerful by querying Internet databases (registries) at runtime in order to discover information and network attached third-party building blocks. Services can advertise themselves and related metadata via such databases, enabling the assembly of distributed higher-level components. In support of this vision, this thesis shows how to support expressive general-purpose queries over a view that integrates autonomous dynamic database nodes from a wide range of distributed system topologies. We motivate and justify the assertion that realistic ubiquitous service and resource discovery requires a rich general-purpose query language such as XQuery or SQL. Next, we introduce the Web Service Discovery Architecture (WSDA), wh...

  7. Metal concentrations from permeable pavement parking lot in Edison, NJ

    Data.gov (United States)

    U.S. Environmental Protection Agency — The U.S. Environmental Protection Agency constructed a 4000-m2 parking lot in Edison, New Jersey in 2009. The parking lot is surfaced with three permeable pavements...

  8. 7 CFR 983.52 - Failed lots/rework procedure.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Failed lots/rework procedure. 983.52 Section 983.52..., ARIZONA, AND NEW MEXICO Regulations § 983.52 Failed lots/rework procedure. (a) Substandard pistachios... committee may establish, with the Secretary's approval, appropriate rework procedures. (b) Failed lot...

  9. 40 CFR 52.128 - Rule for unpaved parking lots, unpaved roads and vacant lots.

    Science.gov (United States)

    2010-07-01

    ... six (6) percent for unpaved road surfaces or eight (8) percent for unpaved parking lot surfaces as... calculating percent cover.) (iii) Vegetative Density Factor. Cut a single, representative piece of vegetation... that are not covered by any piece of the vegetation. To calculate percent vegetative density, use...

  10. New methodology for dynamic lot dispatching

    Science.gov (United States)

    Tai, Wei-Herng; Wang, Jiann-Kwang; Lin, Kuo-Cheng; Hsu, Yi-Chin

    1994-09-01

    This paper presents a new dynamic dispatching rule to improve delivery. The dynamic dispatching rule, named `SLACK and OTD (on-time delivery)', is developed to focus on due date and target cycle time in an IC manufacturing environment. The idea uses the traditional SLACK policy to control the long-term due date and a new OTD policy to reflect the short-term stage queue time. Through fuzzy theory, these two policies are combined into a dispatching controller that defines the lot priority across the entire production line. The system also automatically updates the lot priority according to the current line situation. Previously, wafer dispatching was controlled by the critical ratio, which led to low customer satisfaction; moreover, the overall slack time in the front end of the process was greater than in the rear end, revealing that machines in the rear end were overloaded by rush orders. With SLACK and OTD in use, due-date control has gradually improved: a wafer with either a long stage queue time or an urgent due date is pushed through the overall production line instead of being jammed in the front end. A demand-pull system is also developed to satisfy not only the due dates but also the quantity of monthly demand. The SLACK and OTD rule has been implemented at Taiwan Semiconductor Manufacturing Company for eight months with beneficial results. To clearly monitor the SLACK and OTD policy, a method called the box chart is used to simulate the entire production system. From the box chart, we can not only monitor the result of the decision policy but also display the production situation on a density figure. The production cycle time and delivery situation can also be investigated.
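    The combination of the two policies can be illustrated with a simple priority score. The paper combines them through fuzzy membership functions that the abstract does not specify, so the weighted sum below is a stand-in; all names and parameters are hypothetical:

```python
def slack_time(due_date, now, remaining_process_time):
    """Long-term urgency: time remaining to the due date minus the
    remaining processing time (classic SLACK policy)."""
    return due_date - now - remaining_process_time

def dispatch_priority(due_date, now, remaining_process_time,
                      stage_queue_time, target_queue_time, w=0.5):
    """Lower score = dispatch first. Combines SLACK (long-term due-date
    control) with OTD urgency (short-term stage queue time). The fixed
    weight w stands in for the paper's fuzzy combination."""
    otd_urgency = stage_queue_time / target_queue_time  # > 1 means over target
    return (w * slack_time(due_date, now, remaining_process_time)
            - (1.0 - w) * otd_urgency)
```

    With this score, a lot that has waited longer at its current stage is ranked ahead of an otherwise identical lot, matching the intended push-through behavior.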

  11. A lot to look forward to

    CERN Multimedia

    2013-01-01

    CERN moves from momentous year to momentous year, and although 2013 will be very different for us than 2012, there is still a lot to look forward to. As I write, the proton-lead run is just getting under way, giving the LHC experiments a new kind of data to investigate. But the run will be short, and our main activity this year will be the start of the LHC’s first long shutdown.   This is the first year I can remember in which all of CERN’s accelerators will be off. The reason is that there is much to be done: the older machines need maintenance, and the LHC has to be prepared for higher energy running. That involves opening up the interconnections between each of the machine’s 1,695 main magnet cryostats, consolidating all of the 10,170 splices carrying current to the main dipole and quadrupole windings, and a range of other work to improve the machine. The CERN accelerator complex will start to come back to life in 2014, and it’s fair to say that when...

  12. A Methodology, Based on Analytical Modeling, for the Design of Parallel and Distributed Architectures for Relational Database Query Processors.

    Science.gov (United States)

    1987-12-01

    Figure 2. Intelligent Disk Controller. Figure 5. Processor-Per-Head. However, these additional properties have been proven in classical set and relation theory [75]. These additional properties are described here

  13. Producing Distribution Maps for a Spatially-Explicit Ecosystem Model Using Large Monitoring and Environmental Databases and a Combination of Interpolation and Extrapolation

    Directory of Open Access Journals (Sweden)

    Arnaud Grüss

    2018-01-01

    Full Text Available To be able to simulate spatial patterns of predator-prey interactions, many spatially-explicit ecosystem modeling platforms, including Atlantis, need to be provided with distribution maps defining the annual or seasonal spatial distributions of functional groups and life stages. We developed a methodology combining extrapolation and interpolation of the predictions made by statistical habitat models to produce distribution maps for the fish and invertebrates represented in the Atlantis model of the Gulf of Mexico (GOM) Large Marine Ecosystem (LME) (“Atlantis-GOM”). This methodology consists of: (1) compiling a large monitoring database, gathering all the fisheries-independent and fisheries-dependent data collected in the northern (U.S.) GOM since 2000; (2) compiling a large environmental database, storing all the environmental parameters known to influence the spatial distribution patterns of fish and invertebrates of the GOM; (3) fitting binomial generalized additive models (GAMs) to the large monitoring and environmental databases, and geostatistical binomial generalized linear mixed models (GLMMs) to the large monitoring database; and (4) employing GAM predictions to infer spatial distributions in the southern GOM, and GLMM predictions to infer spatial distributions in the U.S. GOM. Thus, our methodology allows for reasonable extrapolation in the southern GOM based on a large amount of monitoring and environmental data, and for interpolation in the U.S. GOM accurately reflecting the probability of encountering fish and invertebrates in that region. We used an iterative cross-validation procedure to validate GAMs. When a GAM did not pass the validation test, we employed a GAM for a related functional group/life stage to generate distribution maps for the southern GOM. In addition, no geostatistical GLMMs were fit for the functional groups and life stages whose depth, longitudinal and latitudinal ranges within the U.S. GOM are not entirely covered by

  14. CracidMex1: a comprehensive database of global occurrences of cracids (Aves, Galliformes) with distribution in Mexico

    Directory of Open Access Journals (Sweden)

    Gonzalo Pinilla-Buitrago

    2014-06-01

    Full Text Available Cracids are among the most vulnerable groups of Neotropical birds. Almost half of the species of this family are included in a conservation risk category. Twelve taxa occur in Mexico, six of which are considered at risk at national level and two are globally endangered. Therefore, it is imperative that high quality, comprehensive, and high-resolution spatial data on the occurrence of these taxa are made available as a valuable tool in the process of defining appropriate management strategies for conservation at a local and global level. We constructed the CracidMex1 database by collating global records of all cracid taxa that occur in Mexico from available electronic databases, museum specimens, publications, “grey literature”, and unpublished records. We generated a database with 23,896 clean, validated, and standardized geographic records. Database quality control was an iterative process that commenced with the consolidation and elimination of duplicate records, followed by the geo-referencing of records when necessary, and their taxonomic and geographic validation using GIS tools and expert knowledge. We followed the geo-referencing protocol proposed by the Mexican National Commission for the Use and Conservation of Biodiversity. We could not estimate the geographic coordinates of 981 records due to inconsistencies or lack of sufficient information in the description of the locality. Given that current records for most of the taxa have some degree of distributional bias, with redundancies at different spatial scales, the CracidMex1 database has allowed us to detect areas where more sampling effort is required to have a better representation of the global spatial occurrence of these cracids. We also found that particular attention needs to be given to taxa identification in those areas where congeners or conspecifics co-occur in order to avoid taxonomic uncertainty. The construction of the CracidMex1 database represents the first

  15. Lot No. 1 of Frit 202 for DWPF cold runs

    International Nuclear Information System (INIS)

    Schumacher, R.F.

    1993-01-01

    This report was prepared at the end of 1992 and summarizes the evaluation of the first lot sample of DWPF Frit 202 from Cataphote Inc. Publication of this report was delayed until the results from the carbon analyses could be included. To avoid confusion the frit specifications presented in this report were those available at the end of 1992. The specifications were slightly modified early in 1993. The frit was received and evaluated for moisture, particle size distribution, organic-inorganic carbon and chemical composition. Moisture content and particle size distribution were determined on a representative sample at SRTC. These properties were within the DWPF specifications for Frit 202. A representative sample was submitted to Corning Engineering Laboratory Services for chemical analyses. The sample was split and two dissolutions prepared. Each dissolution was analyzed on two separate days. The results indicate that there is a high probability (>95%) that the silica content of this frit is below the specification limit of 77.0 ± 1.0 wt %. The average of the four analyzed values was 75.1 wt % with a standard deviation of 0.28 wt %. All other oxides were within the elliptical two sigma limits. Control standard frit samples were submitted and analyzed at the same time and the results were very similar to previous analyses of these materials

  16. 7 CFR 983.152 - Failed lots/rework procedure.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Failed lots/rework procedure. 983.152 Section 983.152..., ARIZONA, AND NEW MEXICO Rules and Regulations § 983.152 Failed lots/rework procedure. (a) Inshell rework procedure for aflatoxin. If inshell rework is selected as a remedy to meet the aflatoxin regulations of this...

  17. 7 CFR 33.7 - Less than carload lot.

    Science.gov (United States)

    2010-01-01

    ... Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing... ISSUED UNDER AUTHORITY OF THE EXPORT APPLE ACT Definitions § 33.7 Less than carload lot. Less than carload lot means a quantity of apples in packages not exceeding 20,000 pounds gross weight or 400...

  18. Tactical Production and Lot Size Planning with Lifetime Constraints

    DEFF Research Database (Denmark)

    Raiconi, Andrea; Pahl, Julia; Gentili, Monica

    2017-01-01

    In this work, we face a variant of the capacitated lot sizing problem. This is a classical problem addressing the issue of aggregating lot sizes for a finite number of discrete periodic demands that need to be satisfied, thus setting up production resources and eventually creating inventories...

  19. Column-oriented database management systems

    OpenAIRE

    Možina, David

    2013-01-01

    In the following thesis I will present column-oriented databases. Among other things, I will answer the question of why there is a need for a column-oriented database. In recent years there has been a lot of attention regarding column-oriented databases, even though the existence of columnar database management systems dates back to the early seventies of the last century. I will compare both systems for database management – a column-oriented database system and a row-oriented database system ...

  20. Column Store for GWAC: A High-cadence, High-density, Large-scale Astronomical Light Curve Pipeline and Distributed Shared-nothing Database

    Science.gov (United States)

    Wan, Meng; Wu, Chao; Wang, Jing; Qiu, Yulei; Xin, Liping; Mullender, Sjoerd; Mühleisen, Hannes; Scheers, Bart; Zhang, Ying; Nes, Niels; Kersten, Martin; Huang, Yongpan; Deng, Jinsong; Wei, Jianyan

    2016-11-01

    The ground-based wide-angle camera array (GWAC), a part of the SVOM space mission, will search for various types of optical transients by continuously imaging a field of view (FOV) of 5,000 square degrees every 15 s. Each exposure consists of 36 × 4k × 4k pixels, typically resulting in 36 × ~175,600 extracted sources. For a modern time-domain astronomy project like GWAC, which produces massive amounts of data with a high cadence, it is challenging to search for short timescale transients in both real-time and archived data, and to build long-term light curves for variable sources. Here, we develop a high-cadence, high-density light curve pipeline (HCHDLP) to process the GWAC data in real-time, and design a distributed shared-nothing database to manage the massive amount of archived data which will be used to generate a source catalog with more than 100 billion records during 10 years of operation. First, we develop HCHDLP based on the column-store DBMS of MonetDB, taking advantage of MonetDB's high performance when applied to massive data processing. To realize the real-time functionality of HCHDLP, we optimize the pipeline in its source association function, including both time and space complexity from outside the database (SQL semantic) and inside (RANGE-JOIN implementation), as well as in its strategy of building complex light curves. The optimized source association function is accelerated by three orders of magnitude. Second, we build a distributed database using a two-level time partitioning strategy via the MERGE TABLE and REMOTE TABLE technology of MonetDB. Intensive tests validate that our database architecture is able to achieve both linear scalability in response time and concurrent access by multiple users. In summary, our studies provide guidance for a solution to GWAC in real-time data processing and management of massive data.
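    The two-level time partitioning strategy can be sketched as a naming scheme that routes each observation to a coarse partition (a candidate REMOTE TABLE on another node) and a fine partition (a local MERGE TABLE member). The month/day granularity and the naming convention here are illustrative assumptions, not GWAC's actual scheme:

```python
from datetime import datetime

def partition_names(ts: datetime, table: str = "lightcurve"):
    """Return (first-level, second-level) partition names for a timestamp.
    First level groups rows by month; second level by day within that month."""
    level1 = f"{table}_{ts.year:04d}{ts.month:02d}"
    level2 = f"{table}_{ts.year:04d}{ts.month:02d}{ts.day:02d}"
    return level1, level2
```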

  1. Competition under capacitated dynamic lot-sizing with capacity acquisition

    DEFF Research Database (Denmark)

    Li, Hongyan; Meissner, Joern

    2011-01-01

    Lot-sizing and capacity planning are important supply chain decisions, and competition and cooperation affect the performance of these decisions. In this paper, we look into the dynamic lot-sizing and resource competition problem of an industry consisting of multiple firms. A capacity competition...... production setup, along with inventory carrying costs. The individual production lots of each firm are limited by a constant capacity restriction, which is purchased up front for the planning horizon. The capacity can be purchased from a spot market, and the capacity acquisition cost fluctuates...

  2. Review of: Recueil des travaux historiques de Ferdinand Lot

    Directory of Open Access Journals (Sweden)

    Eurípedes Simões de Paula

    1968-03-01

    Full Text Available RECUEIL DES TRAVAUX HISTORIQUES DE FERDINAND LOT. First volume. "Hautes Études Médiévales et Modernes" collection. Centre de Recherches d'Histoire et de Philologie de la IVe Section de l'École Pratique des Hautes Études. Preface by Ch. Samaran and biography by I. Vildé-Lot and M. Mahn-Lot. Published with the support of the Centre National de la Recherche Scientifique. Geneva, Librairie Droz, and Paris, Librairie Minard. In-8°, XVIII + 780 pp.

  3. Column-Oriented Database Systems (Tutorial)

    NARCIS (Netherlands)

    D. Abadi; P.A. Boncz (Peter); S. Harizopoulos

    2009-01-01

    textabstractColumn-oriented database systems (column-stores) have attracted a lot of attention in the past few years. Column-stores, in a nutshell, store each database table column separately, with attribute values belonging to the same column stored contiguously, compressed, and densely packed, as

  4. Cellular Manufacturing System with Dynamic Lot Size Material Handling

    Science.gov (United States)

    Khannan, M. S. A.; Maruf, A.; Wangsaputra, R.; Sutrisno, S.; Wibawa, T.

    2016-02-01

    Material handling plays an important role in Cellular Manufacturing System (CMS) design. In several studies of CMS design, material handling was assumed to be per piece or with a constant lot size. In real industrial practice, the lot size may change during the rolling period to cope with demand changes. This study develops a CMS model with dynamic lot size material handling. Integer linear programming is used to solve the problem. The objective function of this model minimizes the total expected cost, consisting of machinery depreciation cost, operating costs, inter-cell material handling cost, intra-cell material handling cost, machine relocation costs, setup costs, and production planning cost. The model determines the optimum cell formation and the optimum lot size. Numerical examples are elaborated in the paper to illustrate the characteristics of the model.

  5. Optimal Multi-Level Lot Sizing for Requirements Planning Systems

    OpenAIRE

    Earle Steinberg; H. Albert Napier

    1980-01-01

    The widespread use of advanced information systems such as Material Requirements Planning (MRP) has significantly altered the practice of dependent demand inventory management. Recent research has focused on the development of multi-level lot sizing heuristics for such systems. In this paper, we develop an optimal procedure for the multi-period, multi-product, multi-level lot sizing problem by modeling the system as a constrained generalized network with fixed charge arcs and side constraints. T...

  6. Changing the values of parameters on lot size reorder point model

    Directory of Open Access Journals (Sweden)

    Chang Hung-Chi

    2003-01-01

    Full Text Available The Just-In-Time (JIT) philosophy has received a great deal of attention. Several actions such as improving quality, reducing setup cost and shortening lead time have been recognized as effective ways to achieve the underlying goal of JIT. This paper considers a partial-backorder, lot size reorder point inventory system with an imperfect production process. The objective is to simultaneously optimize the lot size, reorder point, process quality, setup cost and lead time, constrained on a service level. We assume the explicit distributional form of lead time demand is unknown but the mean and standard deviation are given. The minimax distribution free approach is utilized to solve the problem, and a numerical example is provided to illustrate the results.
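    The minimax distribution-free approach rests on a classical worst-case bound on expected shortage that holds when only the mean and standard deviation of lead-time demand are known (the bound of Gallego and Moon). A minimal sketch of that bound:

```python
import math

def worst_case_expected_shortage(mu_l, sigma_l, reorder_point):
    """Tight upper bound on the expected shortage E[(X - r)+] over all
    demand distributions X with mean mu_l and standard deviation sigma_l.
    The minimax policy minimizes cost against this worst case instead of
    assuming a specific (e.g. normal) lead-time demand distribution."""
    dr = reorder_point - mu_l
    return 0.5 * (math.sqrt(sigma_l ** 2 + dr ** 2) - dr)
```

    Setting the reorder point at the mean gives a worst-case expected shortage of exactly half the standard deviation, and raising the reorder point shrinks the bound.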

  7. Root Systems of Individual Plants, and the Biotic and Abiotic Factors Controlling Their Depth and Distribution: a Synthesis Using a Global Database.

    Science.gov (United States)

    Tumber-Davila, S. J.; Schenk, H. J.; Jackson, R. B.

    2017-12-01

    This synthesis examines plant rooting distributions globally, by doubling the number of entries in the Root Systems of Individual Plants database (RSIP) created by Schenk and Jackson. Root systems influence many processes, including water and nutrient uptake and soil carbon storage. Root systems also mediate vegetation responses to changing climatic and environmental conditions. Therefore, a collective understanding of the importance of rooting systems to carbon sequestration, soil characteristics, hydrology, and climate, is needed. Current global models are limited by a poor understanding of the mechanisms affecting rooting, carbon stocks, and belowground biomass. This improved database contains an extensive bank of records describing the rooting system of individual plants, as well as detailed information on the climate and environment from which the observations are made. The expanded RSIP database will: 1) increase our understanding of rooting depths, lateral root spreads and above and belowground allometry; 2) improve the representation of plant rooting systems in Earth System Models; 3) enable studies of how climate change will alter and interact with plant species and functional groups in the future. We further focus on how plant rooting behavior responds to variations in climate and the environment, and create a model that can predict rooting behavior given a set of environmental conditions. Preliminary results suggest that high potential evapotranspiration and seasonality of precipitation are indicative of deeper rooting after accounting for plant growth form. When mapping predicted deep rooting by climate, we predict deepest rooting to occur in equatorial South America, Africa, and central India.

  8. The distribution of blood eosinophil levels in a Japanese COPD clinical trial database and in the rest of the world

    Science.gov (United States)

    Ishii, Takeo; Hizawa, Nobuyuki; Midwinter, Dawn; James, Mark; Hilton, Emma; Jones, Paul W

    2018-01-01

    Background Blood eosinophil measurements may help to guide physicians on the use of inhaled corticosteroids (ICS) for patients with chronic obstructive pulmonary disease (COPD). Emerging data suggest that COPD patients with higher blood eosinophil counts may be at higher risk of exacerbations and more likely to benefit from combined ICS/long-acting beta2-agonist (LABA) treatment than therapy with a LABA alone. This analysis describes the distribution of blood eosinophil count at baseline in Japanese COPD patients in comparison with non-Japanese COPD patients. Methods A post hoc analysis of eosinophil distribution by percentage and absolute cell count was performed across 12 Phase II–IV COPD clinical studies (seven Japanese studies [N=848 available absolute eosinophil counts] and five global studies [N=5,397 available eosinophil counts] that included 246 Japanese patients resident in Japan with available counts). Blood eosinophil distributions were assessed at baseline, before blinded treatment assignment. Findings Among Japanese patients, the median (interquartile range) absolute eosinophil count was 170 cells/mm3 (100–280 cells/mm3). Overall, 612/1,094 Japanese patients (56%) had an absolute eosinophil count ≥150 cells/mm3 and 902/1,304 Japanese patients (69%) had a percentage eosinophil ≥2%. Among non-Japanese patients, these values were 160 (100–250) cells/mm3, 2,842/5,151 patients (55%), and 2,937/5,155 patients (57%), respectively. The eosinophil distribution among Japanese patients was similar to that among non-Japanese patients. Within multi-country studies with similar inclusion criteria, the eosinophil count was numerically lower in Japanese compared with non-Japanese patients (median 120 vs 160 cells/mm3). Interpretation The eosinophil distribution in Japanese patients seems comparable to that of non-Japanese patients; although within multi-country studies, there was a slightly lower median eosinophil count for Japanese patients compared with

  9. Potential impacts of OCS oil and gas activities on fisheries. Volume 1. Annotated bibliography and database descriptions for target-species distribution and abundance studies. Section 1, Part 2. Final report

    International Nuclear Information System (INIS)

    Tear, L.M.

    1989-10-01

    The purpose of the volume is to present an annotated bibliography of unpublished and grey literature related to the distribution and abundance of select species of finfish and shellfish along the coasts of the United States. The volume also includes descriptions of databases that contain information related to target species' distribution and abundance. An index is provided at the end of each section to help the reader locate studies or databases related to a particular species

  10. Potential impacts of OCS oil and gas activities on fisheries. Volume 1. Annotated bibliography and database descriptions for target species distribution and abundance studies. Section 1, Part 1. Final report

    International Nuclear Information System (INIS)

    Tear, L.M.

    1989-10-01

    The purpose of the volume is to present an annotated bibliography of unpublished and grey literature related to the distribution and abundance of select species of finfish and shellfish along the coasts of the United States. The volume also includes descriptions of databases that contain information related to target species' distribution and abundance. An index is provided at the end of each section to help the reader locate studies or databases related to a particular species

  11. Column-Oriented Database Systems (Tutorial)

    OpenAIRE

    Abadi, D.; Boncz, Peter; Harizopoulos, S.

    2009-01-01

    textabstractColumn-oriented database systems (column-stores) have attracted a lot of attention in the past few years. Column-stores, in a nutshell, store each database table column separately, with attribute values belonging to the same column stored contiguously, compressed, and densely packed, as opposed to traditional database systems that store entire records (rows) one after the other. Reading a subset of a table’s columns becomes faster, at the potential expense of excessive disk-head s...

  12. A normalization method for combination of laboratory test results from different electronic healthcare databases in a distributed research network.

    Science.gov (United States)

    Yoon, Dukyong; Schuemie, Martijn J; Kim, Ju Han; Kim, Dong Ki; Park, Man Young; Ahn, Eun Kyoung; Jung, Eun-Young; Park, Dong Kyun; Cho, Soo Yeon; Shin, Dahye; Hwang, Yeonsoo; Park, Rae Woong

    2016-03-01

    Distributed research networks (DRNs) afford statistical power by integrating observational data from multiple partners for retrospective studies. However, laboratory test results across care sites are derived using different assays from varying patient populations, making it difficult to simply combine data for analysis. Additionally, existing normalization methods are not suitable for retrospective studies. We normalized laboratory results from different data sources by adjusting for heterogeneous clinico-epidemiologic characteristics of the data and called this the subgroup-adjusted normalization (SAN) method. Subgroup-adjusted normalization renders the means and standard deviations of distributions identical under population structure-adjusted conditions. To evaluate its performance, we compared SAN with existing methods for simulated and real datasets consisting of blood urea nitrogen, serum creatinine, hematocrit, hemoglobin, serum potassium, and total bilirubin. Various clinico-epidemiologic characteristics can be applied together in SAN. For simplicity of comparison, age and gender were used to adjust population heterogeneity in this study. In simulations, SAN had the lowest standardized difference in means (SDM) and Kolmogorov-Smirnov values for all tests; SAN performed better than normalization using other methods. The SAN method is applicable in a DRN environment and should facilitate analysis of data integrated across DRN partners for retrospective observational studies. Copyright © 2015 John Wiley & Sons, Ltd.
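The abstract does not give SAN's exact formulas, but the stated goal (making subgroup means and standard deviations identical under population structure-adjusted conditions) can be sketched as a per-stratum rescaling. The function name, the `ref_stats` mapping, and the toy values below are all assumptions for illustration, not the paper's API.

```python
from statistics import mean, stdev

def subgroup_adjusted_normalize(values, groups, ref_stats):
    """Illustrative sketch of subgroup-adjusted normalization (SAN):
    within each subgroup (e.g. an age/gender stratum), rescale values so
    the subgroup mean and SD match reference statistics for that stratum.
    ref_stats maps group -> (target_mean, target_sd). Hypothetical API."""
    by_group = {}
    for v, g in zip(values, groups):
        by_group.setdefault(g, []).append(v)
    # observed mean/SD per subgroup at this data source
    stats = {g: (mean(vs), stdev(vs)) for g, vs in by_group.items()}
    out = []
    for v, g in zip(values, groups):
        m, s = stats[g]
        tm, ts = ref_stats[g]
        out.append((v - m) / s * ts + tm)  # z-score, then map to reference scale
    return out

# toy example: one site's creatinine values mapped onto a common scale
vals = [0.8, 1.0, 1.2, 1.4]
grps = ["f", "f", "m", "m"]
ref = {"f": (1.0, 0.2), "m": (1.1, 0.2)}
print(subgroup_adjusted_normalize(vals, grps, ref))
```

After normalization each subgroup has exactly the reference mean and SD, so data from several DRN partners normalized against the same reference become directly poolable.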

  13. The distribution of blood eosinophil levels in a Japanese COPD clinical trial database and in the rest of the world

    Directory of Open Access Journals (Sweden)

    Barnes N

    2018-02-01

    Full Text Available Neil Barnes,1,2 Takeo Ishii,3,4 Nobuyuki Hizawa,5 Dawn Midwinter,6 Mark James,3 Emma Hilton,1 Paul Jones1,71Respiratory Medicine Franchise, GlaxoSmithKline, Brentford, UK; 2William Harvey Research Institute, Barts and The London School of Medicine and Dentistry, London, UK; 3Medical Affairs, GlaxoSmithKline K.K., Tokyo, Japan; 4Graduate School of Medicine, Nippon Medical School, Tokyo, Japan; 5Department of Pulmonary Medicine, Faculty of Medicine, University of Tsukuba, Tsukuba, Japan; 6Global Respiratory Department, GlaxoSmithKline, Stockley Park, UK; 7Institute of Infection and Immunity, St George’s University of London, London, UK Background: Blood eosinophil measurements may help to guide physicians on the use of inhaled corticosteroids (ICS) for patients with chronic obstructive pulmonary disease (COPD). Emerging data suggest that COPD patients with higher blood eosinophil counts may be at higher risk of exacerbations and more likely to benefit from combined ICS/long-acting beta2-agonist (LABA) treatment than therapy with a LABA alone. This analysis describes the distribution of blood eosinophil count at baseline in Japanese COPD patients in comparison with non-Japanese COPD patients.Methods: A post hoc analysis of eosinophil distribution by percentage and absolute cell count was performed across 12 Phase II–IV COPD clinical studies (seven Japanese studies [N=848 available absolute eosinophil counts] and five global studies [N=5,397 available eosinophil counts] that included 246 Japanese patients resident in Japan with available counts). Blood eosinophil distributions were assessed at baseline, before blinded treatment assignment.Findings: Among Japanese patients, the median (interquartile range) absolute eosinophil count was 170 cells/mm3 (100–280 cells/mm3). Overall, 612/1,094 Japanese patients (56%) had an absolute eosinophil count ≥150 cells/mm3 and 902/1,304 Japanese patients (69%) had a percentage eosinophil ≥2%. Among non-Japanese patients, these values were 160 (100–250) cells/mm3, 2,842/5,151 patients (55%), and 2,937/5,155 patients (57%), respectively.

  14. The Global Terrestrial Network for Permafrost Database: metadata statistics and prospective analysis on future permafrost temperature and active layer depth monitoring site distribution

    Science.gov (United States)

    Biskaborn, B. K.; Lanckman, J.-P.; Lantuit, H.; Elger, K.; Streletskiy, D. A.; Cable, W. L.; Romanovsky, V. E.

    2015-03-01

    The Global Terrestrial Network for Permafrost (GTN-P) provides the first dynamic database associated with the Thermal State of Permafrost (TSP) and the Circumpolar Active Layer Monitoring (CALM) programs, which extensively collect permafrost temperature and active layer thickness data from Arctic, Antarctic and Mountain permafrost regions. The purpose of the database is to establish an "early warning system" for the consequences of climate change in permafrost regions and to provide standardized thermal permafrost data to global models. In this paper we perform statistical analysis of the GTN-P metadata aiming to identify the spatial gaps in the GTN-P site distribution in relation to climate-effective environmental parameters. We describe the concept and structure of the Data Management System in regard to user operability, data transfer and data policy. We outline data sources and data processing including quality control strategies. Assessment of the metadata and data quality reveals 63% metadata completeness at active layer sites and 50% metadata completeness for boreholes. Voronoi Tessellation Analysis on the spatial sample distribution of boreholes and active layer measurement sites quantifies the distribution inhomogeneity and provides potential locations of additional permafrost research sites to improve the representativeness of thermal monitoring across areas underlain by permafrost. The depth distribution of the boreholes reveals that 73% are shallower than 25 m and 27% are deeper, reaching a maximum of 1 km depth. Comparison of the GTN-P site distribution with permafrost zones, soil organic carbon contents and vegetation types exhibits different local to regional monitoring situations on maps. Preferential slope orientation at the sites most likely causes a bias in the temperature monitoring and should be taken into account when using the data for global models. The distribution of GTN-P sites within zones of projected temperature change show a high
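The paper quantifies the inhomogeneity of the GTN-P site distribution with Voronoi tessellation. A simpler, related spatial-pattern diagnostic is the Clark-Evans nearest-neighbour ratio, sketched below as an illustrative stand-in (the paper's actual analysis differs); the function name and toy grid are made up.

```python
import math

def nearest_neighbor_ratio(points, area):
    """Clark-Evans ratio: observed mean nearest-neighbour distance divided
    by the value expected for a uniform random pattern of the same density
    (0.5 / sqrt(n / area)). R ~ 1 suggests a random pattern, R < 1 clustering,
    R > 1 even spreading. Edge effects are ignored in this sketch."""
    n = len(points)
    dists = []
    for i, (x1, y1) in enumerate(points):
        d = min(math.hypot(x1 - x2, y1 - y2)
                for j, (x2, y2) in enumerate(points) if j != i)
        dists.append(d)
    observed = sum(dists) / n
    expected = 0.5 / math.sqrt(n / area)
    return observed / expected

# perfectly regular 4x4 grid of monitoring sites over a 16-unit area
grid = [(i, j) for i in range(4) for j in range(4)]
print(round(nearest_neighbor_ratio(grid, 16.0), 3))
```

A clustered network of boreholes (many sites close together, large gaps elsewhere) would score well below 1, flagging the same kind of representativeness problem the Voronoi analysis identifies.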

  15. Distribution and classification of Serine β-lactamases in Brazilian Hospital Sewage and Other Environmental Metagenomes deposited in Public Databases

    Directory of Open Access Journals (Sweden)

    Adriana Fróes

    2016-11-01

    Full Text Available β-lactam is the most used antibiotic class in the clinical area and it acts on blocking the bacteria cell wall synthesis, causing cell death. However, some bacteria have evolved resistance to these antibiotics mainly due the production of enzymes known as β-lactamases. Hospital sewage is an important source of dispersion of multidrug-resistant bacteria in rivers and oceans. In this work, we used next-generation DNA sequencing to explore the diversity and dissemination of serine β-lactamases in two hospital sewage from Rio de Janeiro, Brazil (South Zone, SZ and North Zone, NZ), presenting different profiles, and to compare them with public environmental data available. Also, we propose a Hidden-Markov-Model approach to screen potential serine β-lactamases genes (in public environments samples and generated hospital sewage data), exploring its evolutionary relationships. Due to the high variability in β-lactamases, we used a position-specific scoring matrix search method (RPS-BLAST) against conserved domain database profiles (CDD, Pfam, and COG) followed by visual inspection to detect conserved motifs, to increase the reliability of the results and remove possible false positives. We were able to identify novel β-lactamases from Brazilian hospital sewage and to estimate relative abundance of its types. The highest relative abundance found in SZ was the Class A (50%), while Class D is predominant in NZ (55%). CfxA (65%) and ACC (47%) types were the most abundant genes detected in SZ, while in NZ the most frequent were OXA-10 (32%), CfxA (28%), ACC (21%), CEPA (20%), and FOX (19%). Phylogenetic analysis revealed β-lactamases from Brazilian hospital sewage grouped in the same clade and close to sequences belonging to Firmicutes and Bacteroidetes groups, but distant from potential β-lactamases screened from public environmental data, that grouped closer to β-lactamases of Proteobacteria. Our results demonstrated that HMM-based approach identified homologs of

  16. Distribution and Classification of Serine β-Lactamases in Brazilian Hospital Sewage and Other Environmental Metagenomes Deposited in Public Databases.

    Science.gov (United States)

    Fróes, Adriana M; da Mota, Fábio F; Cuadrat, Rafael R C; Dávila, Alberto M R

    2016-01-01

    β-lactam is the most used antibiotic class in the clinical area and it acts on blocking the bacteria cell wall synthesis, causing cell death. However, some bacteria have evolved resistance to these antibiotics mainly due the production of enzymes known as β-lactamases. Hospital sewage is an important source of dispersion of multidrug-resistant bacteria in rivers and oceans. In this work, we used next-generation DNA sequencing to explore the diversity and dissemination of serine β-lactamases in two hospital sewage from Rio de Janeiro, Brazil (South Zone, SZ and North Zone, NZ), presenting different profiles, and to compare them with public environmental data available. Also, we propose a Hidden-Markov-Model approach to screen potential serine β-lactamases genes (in public environments samples and generated hospital sewage data), exploring its evolutionary relationships. Due to the high variability in β-lactamases, we used a position-specific scoring matrix search method (RPS-BLAST) against conserved domain database profiles (CDD, Pfam, and COG) followed by visual inspection to detect conserved motifs, to increase the reliability of the results and remove possible false positives. We were able to identify novel β-lactamases from Brazilian hospital sewage and to estimate relative abundance of its types. The highest relative abundance found in SZ was the Class A (50%), while Class D is predominant in NZ (55%). CfxA (65%) and ACC (47%) types were the most abundant genes detected in SZ, while in NZ the most frequent were OXA-10 (32%), CfxA (28%), ACC (21%), CEPA (20%), and FOX (19%). Phylogenetic analysis revealed β-lactamases from Brazilian hospital sewage grouped in the same clade and close to sequences belonging to Firmicutes and Bacteroidetes groups, but distant from potential β-lactamases screened from public environmental data, that grouped closer to β-lactamases of Proteobacteria. Our results demonstrated that HMM-based approach identified homologs of

  17. Lot Sizing Based on Stochastic Demand and Service Level Constraint

    Directory of Open Access Journals (Sweden)

    hajar shirneshan

    2012-06-01

    Full Text Available Considering its application, stochastic lot sizing is a significant subject in production planning. Also, the concept of service level is more applicable than shortage cost from managers' viewpoint. In this paper, the stochastic multi-period multi-item capacitated lot sizing problem has been investigated considering a service level constraint. First, the single-item model with a service level constraint and no capacity constraint has been developed; it has been solved using a dynamic programming algorithm and the optimal solution has been derived. Then the model has been generalized to the multi-item problem with a capacity constraint. The stochastic multi-period multi-item capacitated lot sizing problem is NP-hard, hence the model could not be solved by exact optimization approaches. Therefore, simulated annealing has been applied for solving the problem. Finally, in order to evaluate the efficiency of the model, a low level criterion has been used.
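To illustrate the dynamic-programming flavour used for the single-item case, here is the classic Wagner-Whitin recursion for the deterministic, uncapacitated variant; the paper's stochastic, service-level version is more involved, and the demand/cost figures below are made up.

```python
def wagner_whitin(demand, setup_cost, hold_cost):
    """Single-item uncapacitated lot sizing (deterministic demand).
    best[t] = minimum cost of covering periods 0..t-1; each candidate
    order in period s covers demand for periods s..t-1 and pays holding
    cost proportional to how long units are carried."""
    n = len(demand)
    best = [0.0] + [float("inf")] * n
    for t in range(1, n + 1):
        for s in range(t):  # last order placed in period s
            hold = sum(hold_cost * (k - s) * demand[k] for k in range(s, t))
            best[t] = min(best[t], best[s] + setup_cost + hold)
    return best[n]

# toy instance: 4 periods, setup cost 100, unit holding cost 1 per period
print(wagner_whitin([20, 50, 10, 50], setup_cost=100, hold_cost=1))
```

The optimal policy here orders three times (periods 1, 2 and 4), trading one extra setup against the cost of carrying period-4 demand.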

  18. Solving portfolio selection problems with minimum transaction lots based on conditional-value-at-risk

    Science.gov (United States)

    Setiawan, E. P.; Rosadi, D.

    2017-01-01

    Portfolio selection problems conventionally mean ‘minimizing the risk, given a certain level of returns’ from some financial assets. This problem is frequently solved with quadratic or linear programming methods, depending on the risk measure used in the objective function. However, the solutions obtained by these methods are in real numbers, which may cause problems in real application because each asset usually has its minimum transaction lot. Classical approaches considering minimum transaction lots were developed based on linear Mean Absolute Deviation (MAD), variance (as in Markowitz’s model), and semi-variance as the risk measure. In this paper we investigated portfolio selection methods with minimum transaction lots with conditional value at risk (CVaR) as the risk measure. The mean-CVaR methodology only involves the part of the tail of the distribution that contributes to high losses. This approach works better with non-symmetric return probability distributions. Solutions of this method can be found with Genetic Algorithm (GA) methods. We provide real examples using stocks from the Indonesian stock market.
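The risk measure at the heart of this model can be computed directly from scenario data: CVaR at level alpha is the mean loss in the worst (1 - alpha) tail. A minimal sketch, with a made-up scenario set:

```python
def cvar(losses, alpha=0.95):
    """Conditional value-at-risk from a finite scenario set: the average of
    the worst (1 - alpha) fraction of losses. This is the risk measure the
    mean-CVaR portfolio model minimizes; the GA search over integer lot
    counts is a separate layer not shown here."""
    losses = sorted(losses)
    k = max(1, int(round(len(losses) * (1 - alpha))))  # tail size
    tail = losses[-k:]
    return sum(tail) / len(tail)

# 100 hypothetical daily portfolio losses: 95 small days, 5 bad days
scenarios = [1.0] * 95 + [10.0, 12.0, 14.0, 16.0, 18.0]
print(cvar(scenarios, alpha=0.95))
```

Because CVaR averages only the high-loss tail, two portfolios with the same variance can have very different CVaR when their return distributions are asymmetric, which is exactly the situation the abstract highlights.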

  19. Spatial distribution of clinical computer systems in primary care in England in 2016 and implications for primary care electronic medical record databases: a cross-sectional population study.

    Science.gov (United States)

    Kontopantelis, Evangelos; Stevens, Richard John; Helms, Peter J; Edwards, Duncan; Doran, Tim; Ashcroft, Darren M

    2018-02-28

    UK primary care databases (PCDs) are used by researchers worldwide to inform clinical practice. These databases have been primarily tied to single clinical computer systems, but little is known about the adoption of these systems by primary care practices or their geographical representativeness. We explore the spatial distribution of clinical computing systems and discuss the implications for the longevity and regional representativeness of these resources. Cross-sectional study. English primary care clinical computer systems. 7526 general practices in August 2016. Spatial mapping of family practices in England in 2016 by clinical computer system at two geographical levels, the lower Clinical Commissioning Group (CCG, 209 units) and the higher National Health Service regions (14 units). Data for practices included numbers of doctors, nurses and patients, and area deprivation. Of 7526 practices, Egton Medical Information Systems (EMIS) was used in 4199 (56%), SystmOne in 2552 (34%) and Vision in 636 (9%). Great regional variability was observed for all systems, with EMIS having a stronger presence in the West of England, London and the South; SystmOne in the East and some regions in the South; and Vision in London, the South, Greater Manchester and Birmingham. PCDs based on single clinical computer systems are geographically clustered in England. For example, Clinical Practice Research Datalink and The Health Improvement Network, the most popular primary care databases in terms of research outputs, are based on the Vision clinical computer system, used by <10% of practices and heavily concentrated in three major conurbations and the South. Researchers need to be aware of the analytical challenges posed by clustering, and barriers to accessing alternative PCDs need to be removed. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. 
No commercial use is permitted unless otherwise expressly granted.

  20. Shelf life extension for the lot AAE nozzle severance LSCs

    Science.gov (United States)

    Cook, M.

    1990-01-01

    Shelf life extension tests for the remaining lot AAE linear shaped charges for redesigned solid rocket motor nozzle aft exit cone severance were completed in the small motor conditioning and firing bay, T-11. Five linear shaped charge test articles were thermally conditioned and detonated, demonstrating proper end-to-end charge propagation. Penetration depth requirements were exceeded. Results indicate that there was no degradation in performance due to aging or the linear shaped charge curving process. It is recommended that the shelf life of the lot AAE nozzle severance linear shaped charges be extended through January 1992.

  1. Improving aggregate behavior in parking lots with appropriate local maneuvers

    KAUST Repository

    Rodriguez, Samuel

    2013-11-01

    In this paper we study the ingress and egress of pedestrians and vehicles in a parking lot. We show how local maneuvers executed by agents permit them to create trajectories in constrained environments, and to resolve the deadlocks between them in mixed-flow scenarios. We utilize a roadmap-based approach which allows us to map complex environments and generate heuristic local paths that are feasible for both pedestrians and vehicles. Finally, we examine the effect that some agent-behavioral parameters have on parking lot ingress and egress. © 2013 IEEE.

  2. Florabank1: a grid-based database on vascular plant distribution in the northern part of Belgium (Flanders and the Brussels Capital region)

    Directory of Open Access Journals (Sweden)

    Wouter Van Landuyt

    2012-05-01

    Full Text Available Florabank1 is a database that contains distributional data on the wild flora (indigenous species, archeophytes and naturalised aliens) of Flanders and the Brussels Capital Region. It holds about 3 million records of vascular plants, dating from 1800 till present. Furthermore, it includes ecological data on vascular plant species, redlist category information, Ellenberg values, legal status, global distribution, seed bank etc. The database is an initiative of “Flo.Wer” (www.plantenwerkgroep.be), the Research Institute for Nature and Forest (INBO: www.inbo.be) and the National Botanic Garden of Belgium (www.br.fgov.be). Florabank aims at centralizing botanical distribution data gathered by both professional and amateur botanists and to make these data available to the benefit of nature conservation, policy and scientific research. The occurrence data contained in Florabank1 are extracted from checklists, literature and herbarium specimen information. Of survey lists, the locality name (verbatimLocality), species name, observation date and IFBL square code, the grid system used for plant mapping in Belgium (Van Rompaey 1943), are recorded. For records dating from the period 1972–2004 all pertinent botanical journals dealing with Belgian flora were systematically screened. Analysis of herbarium specimens in the collection of the National Botanic Garden of Belgium, the University of Ghent and the University of Liège provided interesting distribution knowledge concerning rare species; this information is also included in Florabank1. The data recorded before 1972 is available through the Belgian GBIF node (http://data.gbif.org/datasets/resource/10969/), not through Florabank1, to avoid duplication of information. A dedicated portal providing access to all published Belgian IFBL records is available at http://projects.biodiversity.be/ifbl. All data in Florabank1 is georeferenced. Every record holds the decimal centroid coordinates of the

  3. Reubicación del parque de transformadores de los sistemas de distribución de Bogotá D.C. mediante algoritmos genéticos Relocation of electric transformers lot in Bogotá distribution systems using genetic algorithms

    Directory of Open Access Journals (Sweden)

    Johnn Alejandro Quintero Salazar

    2012-08-01

    maximize the recognition of assets that the regulator CREG (Comisión Reguladora de Energía y Gas) made to the various network operators, as set out in resolution 097 of 2008. For the application of the algorithm, we obtained maximum active power measurements for each hour of the year 2009 in a number of transformers of different capacities, chosen at random, installed in the distribution system of CODENSA SA ESP, the company that provides electric service in the city of Bogotá. With this information we built representative daily load curves and developed a database that contains the operating costs of moving equipment and tariffs, from which it was possible to model the objective function and constraints of the problem, obtaining a high number of possible combinations (about 1×10^134) due to the large number of nodes and transformers present in the distribution system. Conventional search for a solution in this situation implies prohibitive times. For this reason we implemented a classical genetic algorithm, obtaining an optimal solution that offers a financial gain in the first year, associated with the increase in the usage charge, of $ 253.446.362,47 (COP), a profit that could be increased considerably when running the algorithm on larger transformer parks.
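The "classical genetic algorithm" mentioned above follows a standard loop: selection, crossover, mutation. A minimal, self-contained sketch with a binary encoding and a toy fitness function (the real chromosome would encode transformer-to-node assignments, which is not reproduced here):

```python
import random

def genetic_algorithm(fitness, n_bits, pop_size=40, gens=60, p_mut=0.02, seed=1):
    """Minimal classical GA: binary chromosomes, tournament selection of
    size 2, one-point crossover, per-bit flip mutation. Returns the best
    individual in the final population. Parameters are illustrative."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        def pick():  # tournament of two
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, n_bits)          # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [bit ^ (rng.random() < p_mut)   # bit-flip mutation
                     for bit in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# toy objective ("one-max"): maximize the number of ones in the chromosome
best = genetic_algorithm(sum, n_bits=20)
print(sum(best))
```

For the relocation problem, `fitness` would be replaced by the regulated-asset-recognition objective with penalty terms for the problem's constraints; the search loop itself is unchanged.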

  4. The Lot Sizing and Scheduling of Sand Casting Operations

    NARCIS (Netherlands)

    Hans, Elias W.; van de Velde, S.L.; van de Velde, Steef

    2011-01-01

    We describe a real world case study that involves the monthly planning and scheduling of the sand-casting department in a metal foundry. The problem can be characterised as a single-level multi-item capacitated lot-sizing model with a variety of additional process-specific constraints. The main

  5. Activity Recognition and Localization on a Truck Parking Lot

    NARCIS (Netherlands)

    Andersson, M.; Patino, L.; Burghouts, G.J.; Flizikowski, A.; Evans, M.; Gustafsson, D.; Petersson, H.; Schutte, K.; Ferryman, J.

    2013-01-01

    In this paper we present a set of activity recognition and localization algorithms that together assemble a large amount of information about activities on a parking lot. The aim is to detect and recognize events that may pose a threat to truck drivers and trucks. The algorithms perform zone-based

  6. Biofuel Database

    Science.gov (United States)

    Biofuel Database (Web, free access)   This database brings together structural, biological, and thermodynamic data for enzymes that are either in current use or are being considered for use in the production of biofuels.

  7. Community Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This excel spreadsheet is the result of merging at the port level of several of the in-house fisheries databases in combination with other demographic databases such...

  8. Clustered lot quality assurance sampling to assess immunisation coverage: increasing rapidity and maintaining precision.

    Science.gov (United States)

    Pezzoli, Lorenzo; Andrews, Nick; Ronveaux, Olivier

    2010-05-01

    Vaccination programmes targeting disease elimination aim to achieve very high coverage levels (e.g. 95%). We calculated the precision of different clustered lot quality assurance sampling (LQAS) designs in computer-simulated surveys to provide local health officers in the field with preset LQAS plans to simply and rapidly assess programmes with high coverage targets. We calculated sample size (N), decision value (d) and misclassification errors (alpha and beta) of several LQAS plans by running 10 000 simulations. We kept the upper coverage threshold (UT) at 90% or 95% and decreased the lower threshold (LT) progressively by 5%. We measured the proportion of simulations with more than d unvaccinated individuals when the coverage was LT% (pLT) to calculate alpha (1-pLT). We divided N into clusters (between 5 and 10) and recalculated the errors, hypothesising that the coverage would vary between the clusters according to a binomial distribution with preset standard deviations of 0.05 and 0.1 from the mean lot coverage. Among the plans fulfilling the preset alpha and beta criteria, we selected LQAS plans dividing the lot into five clusters with N = 50 (5 x 10), with d = 4 to evaluate programmes with a 95% coverage target and d = 7 to evaluate programmes with a 90% target. These plans will considerably increase the feasibility and the rapidity of conducting the LQAS in the field.
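The misclassification errors of an LQAS plan like N = 50, d = 7 can be checked with exact binomial probabilities instead of simulation. The sketch below evaluates the simple (non-clustered) plan: a lot "passes" if at most d of the N sampled people are unvaccinated. The 75% lower threshold is an arbitrary illustrative choice, and the paper's cluster-level variability adjustment is omitted.

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p), by exact summation."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def lqas_errors(n, d, upper, lower):
    """Operating characteristics of a simple LQAS plan that passes a lot
    when at most d of n sampled individuals are unvaccinated. Returns
    (P(fail | true coverage = upper), P(pass | true coverage = lower)),
    i.e. the two misclassification risks of the decision rule."""
    fail_good = 1 - binom_cdf(d, n, 1 - upper)
    pass_bad = binom_cdf(d, n, 1 - lower)
    return fail_good, pass_bad

# the recommended 90%-target plan (N = 50, d = 7) against a 75% lower threshold
e_upper, e_lower = lqas_errors(50, 7, 0.90, 0.75)
print(round(e_upper, 3), round(e_lower, 3))
```

Raising d makes the plan more forgiving (fewer good lots rejected, more bad lots accepted); the paper's choice of d per coverage target balances these two risks.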

  9. data mining in distributed database

    International Nuclear Information System (INIS)

    Ghunaim, A.A.A.

    2007-01-01

    As we march into the age of digital information, the collection and storage of large quantities of data is increasing, and the problem of data overload looms ominously ahead. It is estimated today that the volume of data stored by a company doubles every year, but the amount of meaningful information decreases rapidly. The ability to analyze and understand massive datasets lags far behind the ability to gather and store the data. The unbridled growth of data will inevitably lead to a situation in which it is increasingly difficult to access the desired information; it will always be like looking for a needle in a haystack, where only the amount of hay keeps growing. So, a new generation of computational techniques and tools is required to analyze and understand the rapidly growing volumes of data. And, because information technology (IT) has become a strategic weapon in modern life, new decision-support tools are needed to remain an international competitor. Data mining is one of these tools: its methods make it possible to extract the decisive knowledge needed by an enterprise, and it is concerned with inferring models from data, drawing on statistical pattern recognition, applied statistics, machine learning, and neural networks. Data mining is a tool for increasing the productivity of people trying to build predictive models. Data mining techniques have been applied successfully to several real-world problem domains, but their application in the nuclear reactor field has received only little attention. One of the main reasons is the difficulty in obtaining the data sets.

  10. Query Optimization in Distributed Databases.

    Science.gov (United States)

    1982-10-01

    In general, the strategy a31 a11 a 3 is more time consuming than the strategy a, a, and usually we do not use it. Since the semijoin of R.XJ> RS requires... analytic behavior of those heuristic algorithms. Although some analytic results of worst case and average case analysis are difficult to obtain, some...
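The garbled fragment above concerns semijoin strategies, the core trick in distributed query optimization: ship only the join-key column of one relation to the other's site, filter there, and transfer the reduced relation instead of the whole one. A toy sketch (relations and column names are made up):

```python
def semijoin(r_rows, s_rows, key):
    """R semijoin S: keep only the rows of R whose join key also appears
    in S. In a distributed setting, only S's key column crosses the
    network, after which the reduced R (often much smaller) is shipped
    for the full join, cutting communication cost."""
    s_keys = {row[key] for row in s_rows}  # the only data shipped from S's site
    return [row for row in r_rows if row[key] in s_keys]

# toy relations at two different sites
R = [{"id": 1, "city": "Oslo"}, {"id": 2, "city": "Lyon"}, {"id": 3, "city": "Kobe"}]
S = [{"id": 2, "qty": 7}, {"id": 3, "qty": 1}]
print(semijoin(R, S, "id"))
```

Whether a semijoin-first strategy wins depends on how selective S's keys are against R, which is exactly the cost trade-off the report's heuristic algorithms analyze.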

  11. Can “Cleaned and Greened” Lots Take on the Role of Public Greenspace?

    Science.gov (United States)

    Megan Heckert; Michelle Kondo

    2018-01-01

    Cities are increasingly greening vacant lots to reduce blight. Such programs could reduce inequities in urban greenspace access, but whether and how greened lots are used remains unclear. We surveyed three hundred greened lots in Philadelphia for signs of use and compared characteristics of used and nonused lots. We found physical signs of use that might be found in...

  12. Database modeling and design logical design

    CERN Document Server

    Teorey, Toby J; Nadeau, Tom; Jagadish, HV

    2011-01-01

    Database systems and database design technology have undergone significant evolution in recent years. The relational data model and relational database systems dominate business applications; in turn, they are extended by other technologies like data warehousing, OLAP, and data mining. How do you model and design your database application in consideration of new technology or new business needs? In the extensively revised fifth edition, you'll get clear explanations, lots of terrific examples and an illustrative case, and the really practical advice you have come to count on--with design rules

  13. Database modeling and design logical design

    CERN Document Server

    Teorey, Toby J; Nadeau, Tom; Jagadish, HV

    2005-01-01

    Database systems and database design technology have undergone significant evolution in recent years. The relational data model and relational database systems dominate business applications; in turn, they are extended by other technologies like data warehousing, OLAP, and data mining. How do you model and design your database application in consideration of new technology or new business needs? In the extensively revised fourth edition, you'll get clear explanations, lots of terrific examples and an illustrative case, and the really practical advice you have come to count on--with design rules

  14. Extending cluster Lot Quality Assurance Sampling designs for surveillance programs

    OpenAIRE

    Hund, Lauren; Pagano, Marcello

    2014-01-01

    Lot quality assurance sampling (LQAS) has a long history of applications in industrial quality control. LQAS is frequently used for rapid surveillance in global health settings, with areas classified as poor or acceptable performance based on the binary classification of an indicator. Historically, LQAS surveys have relied on simple random samples from the population; however, implementing two-stage cluster designs for surveillance sampling is often more cost-effective than ...

  15. COAP BASED ACUTE PARKING LOT MONITORING SYSTEM USING SENSOR NETWORKS

    Directory of Open Access Journals (Sweden)

    R. Aarthi

    2014-06-01

    Full Text Available Vehicle parking is the act of temporarily maneuvering a vehicle into a certain location. To deal with parking monitoring system issues such as traffic, this paper proposes a vision of improvements in monitoring the vehicles in parking lots based on sensor networks. Most existing work deals with automated parking systems that are cluster based, each with its own overheads such as high power consumption, low energy efficiency, and incompatible lot sizes and spaces. The novel idea in this work is the usage of CoAP (Constrained Application Protocol), recently created by the IETF CoRE group (draft-ietf-core-coap-18, June 28, 2013) to develop a RESTful application layer protocol for communications within embedded wireless networks. This paper deals with an enhanced CoAP protocol using a multi-hop flat topology, which eases parking for drivers. We aim to minimize the time consumed in finding a free parking lot as well as to increase energy efficiency.
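CoAP's suitability for constrained parking sensors comes largely from its compact wire format: every message starts with a fixed 4-byte header (RFC 7252). A sketch of encoding just that header; the helper name is made up, and token, options, and payload that follow in a real message are omitted.

```python
def coap_header(msg_type, code_class, code_detail, message_id, tkl=0):
    """Fixed 4-byte CoAP header per RFC 7252: 2-bit version (always 1),
    2-bit type (0=CON, 1=NON, 2=ACK, 3=RST), 4-bit token length, an
    8-bit code written class.detail (GET is 0.01), and a 16-bit message ID."""
    b0 = (1 << 6) | (msg_type << 4) | tkl   # version=1, type, token length
    code = (code_class << 5) | code_detail  # e.g. (0, 1) -> GET
    return bytes([b0, code, message_id >> 8, message_id & 0xFF])

# a confirmable (CON) GET request with message ID 0x1234,
# e.g. polling a hypothetical /lots/free resource on a sensor node
print(coap_header(0, 0, 1, 0x1234).hex())
```

Four bytes of framing (versus dozens for an HTTP request line alone) is what makes per-space occupancy polling affordable on battery-powered nodes.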

  16. Federal databases

    International Nuclear Information System (INIS)

    Welch, M.J.; Welles, B.W.

    1988-01-01

    Accident statistics on all modes of transportation are available as risk assessment analytical tools through several federal agencies. This paper reports on the examination of the accident databases by personal contact with the federal staff responsible for administration of the database programs. This activity, sponsored by the Department of Energy through Sandia National Laboratories, is an overview of the national accident data on highway, rail, air, and marine shipping. For each mode, the definition or reporting requirements of an accident are determined and the method of entering the accident data into the database is established. Availability of the database to others, ease of access, costs, and who to contact were prime questions to each of the database program managers. Additionally, how the agency uses the accident data was of major interest

  17. Egg and a lot of science: an interdisciplinary experiment

    OpenAIRE

    Gayer, M. C.; Interdisciplinary Research Group on Teaching Practice, Graduate Program in Biochemistry, Unipampa, RS, Brazil Laboratory of Physicochemical Studies and Natural Products, Post Graduate Program in Biochemistry, Unipampa, RS, Brazil; T., Rodrigues D.; Interdisciplinary Research Group on Teaching Practice, Graduate Program in Biochemistry, Unipampa, RS, Brazil Laboratory of Physicochemical Studies and Natural Products, Post Graduate Program in Biochemistry, Unipampa, RS, Brazil; Denardin, E. L.G.; Laboratory of Physicochemical Studies and Natural Products, Post Graduate Program in Biochemistry, Unipampa, RS, Brazil; Roehrs, R.; Interdisciplinary Research Group on Teaching Practice, Graduate Program in Biochemistry, Unipampa, RS, Brazil Laboratory of Physicochemical Studies and Natural Products, Post Graduate Program in Biochemistry, Unipampa, RS, Brazil

    2014-01-01

    Egg and a lot of science: an interdisciplinary experiment. Gayer, M.C.1,2; Rodrigues, D.T.1,2; Escoto, D.F.1; Denardin, E.L.G.2; Roehrs, R.1,2. 1Interdisciplinary Research Group on Teaching Practice, Graduate Program in Biochemistry, Unipampa, RS, Brazil; 2Laboratory of Physicochemical Studies and Natural Products, Post Graduate Program in Biochemistry, Unipampa, RS, Brazil. Introduction: How to tell if an egg is rotten? How to calculate the volume of an egg? Why does a rotten egg float? Why has this...

  18. SAADA: Astronomical Databases Made Easier

    Science.gov (United States)

    Michel, L.; Nguyen, H. N.; Motch, C.

    2005-12-01

    Many astronomers wish to share datasets with their community but lack the manpower to develop databases with the functionalities required for high-level scientific applications. The SAADA project aims at automating the creation and deployment process of such databases. A generic but scientifically relevant data model has been designed which allows one to build databases by providing only a limited number of product mapping rules. Databases created by SAADA rely on a relational database supporting JDBC and covered by a Java layer including a lot of generated code. Such databases can simultaneously host spectra, images, source lists and plots. Data are grouped in user-defined collections whose content can be seen as one unique set per data type even if their formats differ. Datasets can be correlated with each other using qualified links. These links help, for example, to handle the nature of a cross-identification (e.g., a distance or a likelihood) or to describe their scientific content (e.g., by associating a spectrum to a catalog entry). The SAADA query engine is based on a language well suited to the data model which can handle constraints on linked data, in addition to classical astronomical queries. These constraints can be applied to the linked objects (number, class and attributes) and/or to the link qualifier values. Databases created by SAADA are accessed through a rich Web interface or a Java API. We are currently developing an interoperability module implementing VO protocols.

  19. Directory of IAEA databases

    International Nuclear Information System (INIS)

    1992-12-01

    This second edition of the Directory of IAEA Databases has been prepared within the Division of Scientific and Technical Information (NESI). Its main objective is to describe the computerized information sources available to staff members. This directory contains all databases produced at the IAEA, including databases stored on the mainframe, LANs and PCs. All IAEA Division Directors have been requested to register the existence of their databases with NESI. For the second edition, database owners were requested to review the existing entries for their databases and answer four additional questions. The four additional questions concerned the type of database (e.g. Bibliographic, Text, Statistical etc.), the category of database (e.g. Administrative, Nuclear Data etc.), the available documentation and the type of media used for distribution. In the individual entries on the following pages the answers to the first two questions (type and category) are always listed, but the answers to the second two questions (documentation and media) are only listed when information has been made available.

  20. Refactoring databases evolutionary database design

    CERN Document Server

    Ambler, Scott W

    2006-01-01

    Refactoring has proven its value in a wide range of development projects–helping software professionals improve system designs, maintainability, extensibility, and performance. Now, for the first time, leading agile methodologist Scott Ambler and renowned consultant Pramodkumar Sadalage introduce powerful refactoring techniques specifically designed for database systems. Ambler and Sadalage demonstrate how small changes to table structures, data, stored procedures, and triggers can significantly enhance virtually any database design–without changing semantics. You’ll learn how to evolve database schemas in step with source code–and become far more effective in projects relying on iterative, agile methodologies. This comprehensive guide and reference helps you overcome the practical obstacles to refactoring real-world databases by covering every fundamental concept underlying database refactoring. Using start-to-finish examples, the authors walk you through refactoring simple standalone databas...

  1. RDD Databases

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This database was established to oversee documents issued in support of fishery research activities including experimental fishing permits (EFP), letters of...

  2. Snowstorm Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Snowstorm Database is a collection of over 500 snowstorms dating back to 1900 and updated operationally. Only storms having large areas of heavy snowfall (10-20...

  3. Dealer Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The dealer reporting databases contain the primary data reported by federally permitted seafood dealers in the northeast. Electronic reporting was implemented May 1,...

  4. National database

    DEFF Research Database (Denmark)

    Kristensen, Helen Grundtvig; Stjernø, Henrik

    1995-01-01

    Article about the national database for nursing research established at the Danish Institute for Health and Nursing Research. The aim of the database is to gather knowledge about research and development activities within nursing.

  5. Heritage plaza parking lots improvement project- Solar PV installation

    Energy Technology Data Exchange (ETDEWEB)

    Hooks, Todd [Agua Caliente Indian Reservation, Palm Springs, CA (United States)

    2017-03-31

    The Agua Caliente Band of Cahuilla Indians (ACBCI or the “Tribe”) installed a 79.95 kW solar photovoltaic (PV) system to offset the energy usage costs of the Tribal Education and Family Services offices located at the Tribe's Heritage Plaza office building, 901 Tahquitz Way, Palm Springs, CA, 92262 (the "Project"). The installation of the solar PV system was part of the larger Heritage Plaza Parking Lot Improvements Project and mounted on the two southern carport shade structures. The solar PV system will offset 99% of the approximately 115,000 kWh in electricity delivered annually by Southern California Edison (SCE) to the Tribal Education and Family Services offices at Heritage Plaza, reducing their energy costs from approximately $22,000 annually to approximately $200. The total cost of the solar PV system is $240,000.

  6. Further observations on comparison of immunization coverage by lot quality assurance sampling and 30 cluster sampling.

    Science.gov (United States)

    Singh, J; Jain, D C; Sharma, R S; Verghese, T

    1996-06-01

    Lot Quality Assurance Sampling (LQAS) and standard EPI methodology (30 cluster sampling) were used to evaluate immunization coverage in a Primary Health Center (PHC) where coverage levels were reported to be more than 85%. Of 27 sub-centers (lots) evaluated by LQAS, only 2 were accepted for child coverage, whereas none was accepted for tetanus toxoid (TT) coverage in mothers. LQAS data were combined to obtain an estimate of coverage in the entire population; 41% (95% CI 36-46) of infants were immunized appropriately for their ages, while 42% (95% CI 37-47) of their mothers had received a second/booster dose of TT. TT coverage in 149 contemporary mothers sampled in the EPI survey was also 42% (95% CI 31-52). Although results by the two sampling methods were consistent with each other, a big gap was evident between reported coverage (in children as well as mothers) and survey results. LQAS was found to be operationally feasible, but it cost 40% more and required 2.5 times more time than the EPI survey. LQAS, therefore, is not a good substitute for current EPI methodology to evaluate immunization coverage in a large administrative area. However, LQAS has potential as a method to monitor health programs on a routine basis in small population sub-units, especially in areas with high and heterogeneously distributed immunization coverage.
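The lot accept/reject logic underlying LQAS reduces to a binomial tail calculation: a lot is accepted if at most a threshold number of sampled children are found unimmunized. The sample size (19), decision threshold (3), and function name below are illustrative assumptions, not the parameters used in this study.

```python
from math import comb

def accept_probability(n, d, coverage):
    """Probability that a lot is accepted, i.e. that at most d of the
    n sampled children are found unimmunized, when the true
    immunization coverage in the lot is `coverage`."""
    p_fail = 1.0 - coverage
    return sum(comb(n, k) * p_fail**k * (1 - p_fail)**(n - k)
               for k in range(d + 1))

# Hypothetical rule: sample 19 children, accept if at most 3 are unimmunized.
for cov in (0.50, 0.70, 0.85, 0.95):
    print(f"coverage {cov:.0%}: P(accept) = {accept_probability(19, 3, cov):.3f}")
```

Tabulating this acceptance probability against a range of true coverage values is how a decision rule's risks of wrongly accepting a poor lot or wrongly rejecting a good one are assessed.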

  7. Experiment Databases

    Science.gov (United States)

    Vanschoren, Joaquin; Blockeel, Hendrik

    Next to running machine learning algorithms based on inductive queries, much can be learned by immediately querying the combined results of many prior studies. Indeed, all around the globe, thousands of machine learning experiments are being executed on a daily basis, generating a constant stream of empirical information on machine learning techniques. While the information contained in these experiments might have many uses beyond their original intent, results are typically described very concisely in papers and discarded afterwards. If we properly store and organize these results in central databases, they can be immediately reused for further analysis, thus boosting future research. In this chapter, we propose the use of experiment databases: databases designed to collect all the necessary details of these experiments, and to intelligently organize them in online repositories to enable fast and thorough analysis of a myriad of collected results. They constitute an additional, queriable source of empirical meta-data based on principled descriptions of algorithm executions, without reimplementing the algorithms in an inductive database. As such, they engender a very dynamic, collaborative approach to experimentation, in which experiments can be freely shared, linked together, and immediately reused by researchers all over the world. They can be set up for personal use, to share results within a lab or to create open, community-wide repositories. Here, we provide a high-level overview of their design, and use an existing experiment database to answer various interesting research questions about machine learning algorithms and to verify a number of recent studies.

  8. Water in the Balance: A Parking Lot Story

    Science.gov (United States)

    Haas, N. A.; Vitousek, S.

    2017-12-01

    The greater Chicagoland region has seen a high degree of urbanization since 1970. For example, between 1970-1990 the region experienced 4% population growth, a 35% increase in urban land use, and the conversion of approximately 454 square miles of agricultural land, mostly into urban uses. Transformation of land into urban uses in the Chicagoland region has altered the stream and catchment response to rainfall events, specifically an increase in stream flashiness and an increase in urban flooding. Chicago has begun to address these changes through green infrastructure. To understand the impact of green infrastructure at local, city-wide, and watershed scales, individual projects need to be accurately and sufficiently modeled. A traditional parking lot conversion into a porous parking lot at the University of Illinois at Chicago was modeled using SWMM and scrutinized using field data to examine stormwater runoff and water balance prior to and post reconstruction. SWMM modeling suggested an 87% reduction in peak flow as well as a 100% reduction in flooding for a 24 hour, 1.72-inch storm. For the same storm, field data suggest an 89% reduction in peak flow as well as a 100% reduction in flooding. Modeling suggested 100% reductions in flooding for longer duration storms (24 hour+) and a smaller reduction in peak flow (~66%). The highly parameterized SWMM model agrees well with collected data and analysis. Further effort is being made to use data mining to create correlations within the collected datasets that can be integrated into a model that follows a standardized formation process and reduces parameterization.
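The peak-flow comparison above amounts to contrasting runoff behavior of impervious versus porous pavement. The rational-method sketch below, with assumed runoff coefficients, intensity, and area, illustrates the style of calculation; it is not the SWMM model from the study.

```python
def peak_flow_cfs(c, intensity_in_hr, area_acres):
    """Rational method: Q = C * i * A (US units: cfs, in/hr, acres)."""
    return c * intensity_in_hr * area_acres

# Assumed values: 2-acre lot, 0.5 in/hr design intensity,
# runoff coefficient 0.90 (asphalt) vs 0.10 (porous pavement).
q_before = peak_flow_cfs(0.90, 0.5, 2.0)
q_after = peak_flow_cfs(0.10, 0.5, 2.0)
reduction = (q_before - q_after) / q_before
print(f"peak flow reduced by {reduction:.0%}")  # → peak flow reduced by 89%
```

A dynamic model like SWMM also routes storage and infiltration over time, which is why it can predict the complete elimination of flooding for long-duration storms rather than only a proportional peak reduction.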

  9. Database and Expert Systems Applications

    DEFF Research Database (Denmark)

    Viborg Andersen, Kim; Debenham, John; Wagner, Roland

    This book constitutes the refereed proceedings of the 16th International Conference on Database and Expert Systems Applications, DEXA 2005, held in Copenhagen, Denmark, in August 2005. The 92 revised full papers presented together with 2 invited papers were carefully reviewed and selected from 390 submissions. The papers are organized in topical sections on workflow automation, database queries, data classification and recommendation systems, information retrieval in multimedia databases, Web applications, implementational aspects of databases, multimedia databases, XML processing, security, XML schemata, query evaluation, semantic processing, information retrieval, temporal and spatial databases, querying XML, organisational aspects of databases, natural language processing, ontologies, Web data extraction, semantic Web, data stream management, data extraction, and distributed database systems.

  10. Distribution

    Science.gov (United States)

    John R. Jones

    1985-01-01

    Quaking aspen is the most widely distributed native North American tree species (Little 1971, Sargent 1890). It grows in a great diversity of regions, environments, and communities (Harshberger 1911). Only one deciduous tree species in the world, the closely related Eurasian aspen (Populus tremula), has a wider range (Weigle and Frothingham 1911)....

  11. Lot quality assurance sampling for monitoring immunization programmes: cost-efficient or quick and dirty?

    Science.gov (United States)

    Sandiford, P

    1993-09-01

    In recent years Lot quality assurance sampling (LQAS), a method derived from production-line industry, has been advocated as an efficient means to evaluate the coverage rates achieved by child immunization programmes. This paper examines the assumptions on which LQAS is based and the effect that these assumptions have on its utility as a management tool. It shows that the attractively low sample sizes used in LQAS are achieved at the expense of specificity unless unrealistic assumptions are made about the distribution of coverage rates amongst the immunization programmes to which the method is applied. Although it is a very sensitive test and its negative predictive value is probably high in most settings, its specificity and positive predictive value are likely to be low. The implications of these strengths and weaknesses with regard to management decision-making are discussed.
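The paper's point about high sensitivity but low positive predictive value can be made concrete with Bayes' rule. The decision rule (n = 19, accept if at most 3 unimmunized), the coverage levels, and the 10% prevalence of truly poor programmes below are illustrative assumptions, not figures from the paper.

```python
from math import comb

def p_accept(n, d, coverage):
    """P(at most d unimmunized among n sampled | true coverage)."""
    q = 1.0 - coverage
    return sum(comb(n, k) * q**k * (1 - q)**(n - k) for k in range(d + 1))

n, d = 19, 3
sens = 1 - p_accept(n, d, 0.50)  # P(reject | truly poor: 50% coverage)
spec = p_accept(n, d, 0.90)      # P(accept | truly good: 90% coverage)

# If only 10% of programmes are truly poor, Bayes' rule gives the
# positive predictive value of a "reject" result:
prev = 0.10
ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
print(f"sensitivity={sens:.3f} specificity={spec:.3f} PPV={ppv:.2f}")
```

With these assumed numbers the rule almost never misses a genuinely poor programme, yet roughly half of all "reject" results still come from good programmes, which is exactly the trade-off the abstract describes.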

  12. Extending cluster lot quality assurance sampling designs for surveillance programs.

    Science.gov (United States)

    Hund, Lauren; Pagano, Marcello

    2014-07-20

    Lot quality assurance sampling (LQAS) has a long history of applications in industrial quality control. LQAS is frequently used for rapid surveillance in global health settings, with areas classified as poor or acceptable performance on the basis of the binary classification of an indicator. Historically, LQAS surveys have relied on simple random samples from the population; however, implementing two-stage cluster designs for surveillance sampling is often more cost-effective than simple random sampling. By applying survey sampling results to the binary classification procedure, we develop a simple and flexible nonparametric procedure to incorporate clustering effects into the LQAS sample design to appropriately inflate the sample size, accommodating finite numbers of clusters in the population when relevant. We use this framework to then discuss principled selection of survey design parameters in longitudinal surveillance programs. We apply this framework to design surveys to detect rises in malnutrition prevalence in nutrition surveillance programs in Kenya and South Sudan, accounting for clustering within villages. By combining historical information with data from previous surveys, we design surveys to detect spikes in the childhood malnutrition rate. Copyright © 2014 John Wiley & Sons, Ltd.
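A common way to account for clustering when sizing such a survey is to inflate the simple-random-sample size by the design effect, deff = 1 + (m - 1) * ICC. This textbook inflation, with assumed values, is a simpler stand-in for the authors' nonparametric procedure.

```python
import math

def design_effect(cluster_size, icc):
    """deff = 1 + (m - 1) * ICC for equal-sized clusters of m subjects."""
    return 1 + (cluster_size - 1) * icc

def inflated_sample_size(n_srs, cluster_size, icc):
    """Inflate a simple-random-sample size for clustering, rounded up
    to a whole number of clusters."""
    n = n_srs * design_effect(cluster_size, icc)
    clusters = math.ceil(n / cluster_size)
    return clusters * cluster_size

# Assumed: an SRS design needs 19 subjects; sampling 5 per village
# with an intra-cluster correlation of 0.1.
print(inflated_sample_size(19, 5, 0.1))  # → 30
```

The larger the clusters or the stronger the within-cluster correlation, the more the effective sample size shrinks and the more the nominal sample must grow to preserve the classification errors of the decision rule.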

  13. Record Dynamics and the Parking Lot Model for granular dynamics

    Science.gov (United States)

    Sibani, Paolo; Boettcher, Stefan

    Also known for its application to granular compaction (E. Ben-Naim et al., Physica D, 1998), the Parking Lot Model (PLM) describes the random parking of identical cars in a strip with no marked bays. In the thermally activated version considered, cars can be removed at an energy cost and, in thermal equilibrium, their average density increases as temperature decreases. However, equilibration at high density becomes exceedingly slow and the system enters an aging regime induced by a kinematic constraint, the fact that parked cars may not overlap. As parking an extra car reduces the available free space, the next parking event is even harder to achieve. Records in the number of parked cars mark the salient features of the dynamics and are shown to be well described by the log-Poisson statistics known from other glassy systems with record dynamics. Clusters of cars whose positions must be rearranged to make the next insertion possible have a length scale which grows logarithmically with age, while their life-time grows exponentially with size. The implications for a recent cluster model of colloidal dynamics (S. Boettcher and P. Sibani, J. Phys.: Cond. Matter, 2011; N. Becker et al., J. Phys.: Cond. Matter, 2014) are discussed. Support from the Villum Foundation is gratefully acknowledged.
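In the irreversible limit (no car removal), the Parking Lot Model reduces to random sequential adsorption of unit-length cars on a strip, where the density jams well below full packing. The strip length, attempt count, and seed in the short simulation below are arbitrary choices for illustration.

```python
import random

def parking_lot(strip_length, attempts, seed=0):
    """Random sequential adsorption of unit-length cars on [0, strip_length].
    Each attempt parks a car at a uniformly random position unless it
    would overlap an already-parked car; overlapping attempts are rejected."""
    rng = random.Random(seed)
    cars = []  # left endpoints of parked cars
    for _ in range(attempts):
        x = rng.uniform(0, strip_length - 1)
        if all(abs(x - c) >= 1 for c in cars):
            cars.append(x)
    return sorted(cars)

cars = parking_lot(strip_length=100, attempts=10_000)
print(f"jamming density ~ {len(cars) / 100:.2f}")
```

As the strip fills, gaps shorter than one car length can never be used, so successful insertions become exponentially rare; this is the kinematic constraint that drives the slow, record-dominated dynamics in the thermal version of the model.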

  14. Balancing Urban Biodiversity Needs and Resident Preferences for Vacant Lot Management

    Directory of Open Access Journals (Sweden)

    Christine C. Rega-Brodsky

    2018-05-01

    Full Text Available Urban vacant lots are often a contentious feature in cities, seen as overgrown, messy eyesores that plague neighborhoods. We propose a shift in this perception to locations of urban potential, because vacant lots may serve as informal greenspaces that maximize urban biodiversity while satisfying residents’ preferences for their design and use. Our goal was to assess what kind of vacant lots are ecologically valuable by assessing their biotic contents and residents’ preferences within a variety of settings. We surveyed 150 vacant lots throughout Baltimore, Maryland for their plant and bird communities, classified the lot’s setting within the urban matrix, and surveyed residents. Remnant vacant lots had greater vegetative structure and bird species richness as compared to other lot origins, while vacant lot settings had limited effects on their contents. Residents preferred well-maintained lots with more trees and less artificial cover, support of which may increase local biodiversity in vacant lots. Collectively, we propose that vacant lots with a mixture of remnant and planted vegetation can act as sustainable urban greenspaces with the potential for some locations to enhance urban tree cover and bird habitat, while balancing the needs and preferences of city residents.

  15. Developments in diffraction databases

    International Nuclear Information System (INIS)

    Jenkins, R.

    1999-01-01

    Full text: There are a number of databases available to the diffraction community. Two of the more important of these are the Powder Diffraction File (PDF) maintained by the International Centre for Diffraction Data (ICDD), and the Inorganic Crystal Structure Database (ICSD) maintained by Fachinformationszentrum (FIZ, Karlsruhe). In application, the PDF has been used as an indispensable tool in phase identification and identification of unknowns. The ICSD database has extensive and explicit reference to the structures of compounds: atomic coordinates, space group and even thermal vibration parameters. A similar database, but for organic compounds, is maintained by the Cambridge Crystallographic Data Centre. These databases are often used as independent sources of information. However, little thought has been given on how to exploit the combined properties of structural database tools. A recently completed agreement between ICDD and FIZ, plus ICDD and Cambridge, provides a first step in complementary use of the PDF and the ICSD databases. The focus of this paper (as indicated below) is to examine ways of exploiting the combined properties of both databases. In 1996, there were approximately 76,000 entries in the PDF and approximately 43,000 entries in the ICSD database. The ICSD database has now been used to calculate entries in the PDF. Thus, to derive d-spacing and peak intensity data requires the synthesis of full diffraction patterns, i.e., we use the structural data in the ICSD database and then add instrumental resolution information. The combined data from PDF and ICSD can be effectively used in many ways. For example, we can calculate PDF data for an ideally random crystal distribution and also in the absence of preferred orientation. Again, we can use systematic studies of intermediate members in solid solutions series to help produce reliable quantitative phase analyses. In some cases, we can study how solid solution properties vary with composition and

  16. Timeliness and Predictability in Real-Time Database Systems

    National Research Council Canada - National Science Library

    Son, Sang H

    1998-01-01

    The confluence of computers, communications, and databases is quickly creating a globally distributed database where many applications require real time access to both temporally accurate and multimedia data...

  17. ADANS database specification

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-01-16

    The purpose of the Air Mobility Command (AMC) Deployment Analysis System (ADANS) Database Specification (DS) is to describe the database organization and storage allocation and to provide the detailed data model of the physical design and information necessary for the construction of the parts of the database (e.g., tables, indexes, rules, defaults). The DS includes entity relationship diagrams, table and field definitions, reports on other database objects, and a description of the ADANS data dictionary. ADANS is the automated system used by Headquarters AMC and the Tanker Airlift Control Center (TACC) for airlift planning and scheduling of peacetime and contingency operations as well as for deliberate planning. ADANS also supports planning and scheduling of Air Refueling Events by the TACC and the unit-level tanker schedulers. ADANS receives input in the form of movement requirements and air refueling requests. It provides a suite of tools for planners to manipulate these requirements/requests against mobility assets and to develop, analyze, and distribute schedules. Analysis tools are provided for assessing the products of the scheduling subsystems, and editing capabilities support the refinement of schedules. A reporting capability provides formatted screen, print, and/or file outputs of various standard reports. An interface subsystem handles message traffic to and from external systems. The database is an integral part of the functionality summarized above.

  18. Database Perspectives on Blockchains

    OpenAIRE

    Cohen, Sara; Zohar, Aviv

    2018-01-01

    Modern blockchain systems are a fresh look at the paradigm of distributed computing, applied under assumptions of large-scale public networks. They can be used to store and share information without a trusted central party. There has been much effort to develop blockchain systems for a myriad of uses, ranging from cryptocurrencies to identity control, supply chain management, etc. None of this work has directly studied the fundamental database issues that arise when using blockchains as the u...

  19. "Mr. Database" : Jim Gray and the History of Database Technologies.

    Science.gov (United States)

    Hanwahr, Nils C

    2017-12-01

    Although the widespread use of the term "Big Data" is comparatively recent, it invokes a phenomenon in the developments of database technology with distinct historical contexts. The database engineer Jim Gray, known as "Mr. Database" in Silicon Valley before his disappearance at sea in 2007, was involved in many of the crucial developments since the 1970s that constitute the foundation of exceedingly large and distributed databases. Jim Gray was involved in the development of relational database systems based on the concepts of Edgar F. Codd at IBM in the 1970s before he went on to develop principles of Transaction Processing that enable the parallel and highly distributed performance of databases today. He was also involved in creating forums for discourse between academia and industry, which influenced industry performance standards as well as database research agendas. As a co-founder of the San Francisco branch of Microsoft Research, Gray increasingly turned toward scientific applications of database technologies, e.g., leading the TerraServer project, an online database of satellite images. Inspired by Vannevar Bush's idea of the memex, Gray laid out his vision of a Personal Memex as well as a World Memex, eventually postulating a new era of data-based scientific discovery termed "Fourth Paradigm Science". This article gives an overview of Gray's contributions to the development of database technology as well as his research agendas and shows that central notions of Big Data have been occupying database engineers for much longer than the actual term has been in use.

  20. Egg and a lot of science: an interdisciplinary experiment

    Directory of Open Access Journals (Sweden)

    M. C. Gayer

    2014-08-01

    Full Text Available Egg and a lot of science: an interdisciplinary experiment. Gayer, M.C.1,2; Rodrigues, D.T.1,2; Escoto, D.F.1; Denardin, E.L.G.2; Roehrs, R.1,2. 1Interdisciplinary Research Group on Teaching Practice, Graduate Program in Biochemistry, Unipampa, RS, Brazil; 2Laboratory of Physicochemical Studies and Natural Products, Post Graduate Program in Biochemistry, Unipampa, RS, Brazil. Introduction: How can you tell if an egg is rotten? How do you calculate the volume of an egg? Why does a rotten egg float? Why does it have that characteristic rotten-egg smell? Why does a gray-green color form on the surface of the cooked egg yolk? These questions are commonplace and go unnoticed in day-to-day life. Our grandmothers know how to tell whether an egg is rotten: just put the egg in a glass of water. If it floats it is rotten; if it sinks it is good. But why does this happen? That they cannot answer. With only one egg one can work with chemical reactions, macromolecules (proteins), density, membranes and conservation of matter. Hydrogen sulphide is responsible for the aroma of a freshly cooked egg. This gas is formed as the molecules of albumin, a protein present in the egg, break down. The color comes from a sulfide precipitate formed with the Fe2+ ion contained in the yolk (Fe2+ + S2− → FeS). The use of simple, easy-to-perform experiments correlating various kinds of knowledge proves a very useful tool in science education. Objectives: Develop multidisciplinary learning of content through problems. Materials and methods: The teacher provides students with a boiled egg, salt, a syringe, a cup, a plate and water. The teacher poses the aforementioned questions to the students and allows them to exchange information with each other, seeking answers through experimentation. Results and discussion: Students engaged with the activity and the groups interacted in order to solve the proposed problem. Still, through trial and error they sought in various ways to find the answers. This tool takes the student to

  1. Stackfile Database

    Science.gov (United States)

    deVarvalho, Robert; Desai, Shailen D.; Haines, Bruce J.; Kruizinga, Gerhard L.; Gilmer, Christopher

    2013-01-01

    This software provides storage, retrieval, and analysis functionality for managing satellite altimetry data. It improves the efficiency and analysis capabilities of existing database software with improved flexibility and documentation. It offers flexibility in the type of data that can be stored. There is efficient retrieval either across the spatial domain or the time domain. Built-in analysis tools are provided for frequently performed altimetry tasks. This software package is used for storing and manipulating satellite measurement data. It was developed with a focus on handling the requirements of repeat-track altimetry missions such as Topex and Jason. It was, however, designed to work with a wide variety of satellite measurement data (e.g., the Gravity Recovery and Climate Experiment, GRACE). The software consists of several command-line tools for importing, retrieving, and analyzing satellite measurement data.

  2. Choosing a Cluster Sampling Design for Lot Quality Assurance Sampling Surveys.

    Directory of Open Access Journals (Sweden)

    Lauren Hund

    Full Text Available Lot quality assurance sampling (LQAS surveys are commonly used for monitoring and evaluation in resource-limited settings. Recently several methods have been proposed to combine LQAS with cluster sampling for more timely and cost-effective data collection. For some of these methods, the standard binomial model can be used for constructing decision rules as the clustering can be ignored. For other designs, considered here, clustering is accommodated in the design phase. In this paper, we compare these latter cluster LQAS methodologies and provide recommendations for choosing a cluster LQAS design. We compare technical differences in the three methods and determine situations in which the choice of method results in a substantively different design. We consider two different aspects of the methods: the distributional assumptions and the clustering parameterization. Further, we provide software tools for implementing each method and clarify misconceptions about these designs in the literature. We illustrate the differences in these methods using vaccination and nutrition cluster LQAS surveys as example designs. The cluster methods are not sensitive to the distributional assumptions but can result in substantially different designs (sample sizes) depending on the clustering parameterization. However, none of the clustering parameterizations used in the existing methods appears to be consistent with the observed data, and, consequently, choice between the cluster LQAS methods is not straightforward. Further research should attempt to characterize clustering patterns in specific applications and provide suggestions for best-practice cluster LQAS designs on a setting-specific basis.

  3. Choosing a Cluster Sampling Design for Lot Quality Assurance Sampling Surveys.

    Science.gov (United States)

    Hund, Lauren; Bedrick, Edward J; Pagano, Marcello

    2015-01-01

    Lot quality assurance sampling (LQAS) surveys are commonly used for monitoring and evaluation in resource-limited settings. Recently several methods have been proposed to combine LQAS with cluster sampling for more timely and cost-effective data collection. For some of these methods, the standard binomial model can be used for constructing decision rules as the clustering can be ignored. For other designs, considered here, clustering is accommodated in the design phase. In this paper, we compare these latter cluster LQAS methodologies and provide recommendations for choosing a cluster LQAS design. We compare technical differences in the three methods and determine situations in which the choice of method results in a substantively different design. We consider two different aspects of the methods: the distributional assumptions and the clustering parameterization. Further, we provide software tools for implementing each method and clarify misconceptions about these designs in the literature. We illustrate the differences in these methods using vaccination and nutrition cluster LQAS surveys as example designs. The cluster methods are not sensitive to the distributional assumptions but can result in substantially different designs (sample sizes) depending on the clustering parameterization. However, none of the clustering parameterizations used in the existing methods appears to be consistent with the observed data, and, consequently, choice between the cluster LQAS methods is not straightforward. Further research should attempt to characterize clustering patterns in specific applications and provide suggestions for best-practice cluster LQAS designs on a setting-specific basis.
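
    For the designs where clustering can be ignored, the binomial decision rule amounts to choosing a sample size n and decision value d so that both classification errors stay within bounds. A minimal sketch of that calculation (the thresholds, error bounds, and accept-if-more-than-d rule below are illustrative assumptions, not the parameters used in the paper):

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def lqas_design(p_hi, p_lo, alpha_max, beta_max, n_max=200):
    """Smallest (n, d): accept the lot when more than d of n subjects are positive.

    alpha = P(reject | coverage p_hi) = P(X <= d | p_hi) <= alpha_max
    beta  = P(accept | coverage p_lo) = P(X >  d | p_lo) <= beta_max
    """
    for n in range(1, n_max + 1):
        for d in range(n):
            alpha = binom_cdf(d, n, p_hi)
            beta = 1 - binom_cdf(d, n, p_lo)
            if alpha <= alpha_max and beta <= beta_max:
                return n, d, alpha, beta
    raise ValueError("no design within n_max")
```

    For example, lqas_design(0.9, 0.7, 0.10, 0.10) returns the smallest design that distinguishes 90% from 70% coverage with both error rates at or below 10%.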

  4. EMODnet Thematic Lot n° 4 - Chemistry

    DEFF Research Database (Denmark)

    Beckers, Jean-Marie; Buga, Luminita; Debray, Noelie

    2015-01-01

    Data quality assurance and quality control (QA/QC) is an important issue in oceanographic data management, especially for the creation of multidisciplinary and comprehensive databases which include data from different and/or unknown origin covering long time periods. The data-collection methods i... ...will contribute considerably to the validation of large data collections. This report intends to be a reference manual for EMODnet Chemistry data QA/QC and the subsequent product generation. In fact, during the first data validation loop, each region adopted its own protocol and the results showed many inconsistent data quality flags and the need for coordination and harmonization of practices. A dedicated workshop was organized to review the different practices and agree on a common methodology for data QA/QC and Diva products generation for EMODnet Chemistry.

  5. Estimating medication stopping fraction and real-time prevalence of drug use in pharmaco-epidemiologic databases. An application of the reverse waiting time distribution

    DEFF Research Database (Denmark)

    Støvring, Henrik; Pottegård, Anton; Hallas, Jesper

    2017-01-01

    Purpose: To introduce the reverse waiting time distribution (WTD) and show how it can be used to estimate stopping fractions and real-time prevalence of treatment in pharmacoepidemiological studies. Methods: The reverse WTD is the distribution of time from the last dispensed prescription of each patient within a time window to the end of it. It is a mirrored version of the ordinary WTD, which considers the first dispensed prescription of patients within a time window. Based on renewal process theory, the reverse WTD can be analyzed as an ordinary WTD with maximum likelihood estimation. Based... ...-hoc decision rules for automated implementations, and it yields estimates of real-time prevalence.

  6. The Neotoma Paleoecology Database

    Science.gov (United States)

    Grimm, E. C.; Ashworth, A. C.; Barnosky, A. D.; Betancourt, J. L.; Bills, B.; Booth, R.; Blois, J.; Charles, D. F.; Graham, R. W.; Goring, S. J.; Hausmann, S.; Smith, A. J.; Williams, J. W.; Buckland, P.

    2015-12-01

    The Neotoma Paleoecology Database (www.neotomadb.org) is a multiproxy, open-access, relational database that includes fossil data for the past 5 million years (the late Neogene and Quaternary Periods). Modern distributional data for various organisms are also being made available for calibration and paleoecological analyses. The project is a collaborative effort among individuals from more than 20 institutions worldwide, including domain scientists representing a spectrum of Pliocene-Quaternary fossil data types, as well as experts in information technology. Working groups are active for diatoms, insects, ostracodes, pollen and plant macroscopic remains, testate amoebae, rodent middens, vertebrates, age models, geochemistry and taphonomy. Groups are also active in developing online tools for data analyses and for developing modules for teaching at different levels. A key design concept of NeotomaDB is that stewards for various data types are able to remotely upload and manage data. Cooperatives for different kinds of paleo data, or from different regions, can appoint their own stewards. Over the past year, much progress has been made on development of the steward software-interface that will enable this capability. The steward interface uses web services that provide access to the database. More generally, these web services enable remote programmatic access to the database, which both desktop and web applications can use and which provide real-time access to the most current data. Use of these services can alleviate the need to download the entire database, which can be out-of-date as soon as new data are entered. In general, the Neotoma web services deliver data either from an entire table or from the results of a view. Upon request, new web services can be quickly generated. Future developments will likely expand the spatial and temporal dimensions of the database. NeotomaDB is open to receiving new datasets and stewards from the global Quaternary community

  7. Choosing a design to fit the situation: how to improve specificity and positive predictive values using Bayesian lot quality assurance sampling

    OpenAIRE

    Olives, Casey; Pagano, Marcello

    2013-01-01

    Background Lot Quality Assurance Sampling (LQAS) is a provably useful tool for monitoring health programmes. Although LQAS ensures acceptable Producer and Consumer risks, the literature alleges that the method suffers from poor specificity and positive predictive values (PPVs). We suggest that poor LQAS performance is due, in part, to variation in the true underlying distribution. However, until now the role of the underlying distribution in expected performance has not been adequately examined.

  8. Vertical distribution of chlorophyll a concentration and phytoplankton community composition from in situ fluorescence profiles: a first database for the global ocean

    Science.gov (United States)

    Sauzède, R.; Lavigne, H.; Claustre, H.; Uitz, J.; Schmechtig, C.; D'Ortenzio, F.; Guinet, C.; Pesant, S.

    2015-10-01

    In vivo chlorophyll a fluorescence is a proxy of chlorophyll a concentration, and is one of the most frequently measured biogeochemical properties in the ocean. Thousands of profiles are available from historical databases and the integration of fluorescence sensors to autonomous platforms has led to a significant increase of chlorophyll fluorescence profile acquisition. To our knowledge, this important source of environmental data has not yet been included in global analyses. A total of 268 127 chlorophyll fluorescence profiles from several databases as well as published and unpublished individual sources were compiled. Following a robust quality control procedure detailed in the present paper, about 49 000 chlorophyll fluorescence profiles were converted into phytoplankton biomass (i.e., chlorophyll a concentration) and size-based community composition (i.e., microphytoplankton, nanophytoplankton and picophytoplankton), using a method specifically developed to harmonize fluorescence profiles from diverse sources. The data span over 5 decades from 1958 to 2015, including observations from all major oceanic basins and all seasons, and depths ranging from the surface to a median maximum sampling depth of around 700 m. Global maps of chlorophyll a concentration and phytoplankton community composition are presented here for the first time. Monthly climatologies were computed for three of Longhurst's ecological provinces in order to exemplify the potential use of the data product. Original data sets (raw fluorescence profiles) as well as calibrated profiles of phytoplankton biomass and community composition are available on open access at PANGAEA, Data Publisher for Earth and Environmental Science. Raw fluorescence profiles: http://doi.pangaea.de/10.1594/PANGAEA.844212 and Phytoplankton biomass and community composition: http://doi.pangaea.de/10.1594/PANGAEA.844485

  9. Nutrient concentrations in leachate and runoff from dairy cattle lots with different surface materials

    Science.gov (United States)

    Nitrogen (N) and phosphorus (P) loss from agriculture persists as a water quality issue, and outdoor cattle lots can have a high loss potential. We monitored hydrology and nutrient concentrations in leachate and runoff from dairy heifer lots constructed with three surface materials (soil, sand, bark...

  10. Applications of the lots computer code to laser fusion systems and other physical optics problems

    International Nuclear Information System (INIS)

    Lawrence, G.; Wolfe, P.N.

    1979-01-01

    The Laser Optical Train Simulation (LOTS) code has been developed at the Optical Sciences Center, University of Arizona, under contract to Los Alamos Scientific Laboratory (LASL). LOTS is a diffraction-based code designed to compute beam quality and energy of the laser fusion system in an end-to-end calculation

  11. Joint Economic Lot Size Model for a Vendor-Buyer Case with Probabilistic Demand

    Directory of Open Access Journals (Sweden)

    Wakhid Ahmad Jauhari

    2009-01-01

    Full Text Available In this paper we consider a single-vendor single-buyer integrated inventory model with probabilistic demand and equal delivery lot sizes. The model contributes to the current literature by relaxing the deterministic demand assumption which has been used for almost all integrated inventory models. The objective is to minimize the expected total costs incurred by the vendor and the buyer. We develop effective iterative procedures for finding the optimal solution. Numerical examples are used to illustrate the benefit of integration. A sensitivity analysis is performed to explore the effect of key parameters on delivery lot size, safety factor, production lot size factor and the expected total cost. The results of the numerical examples indicate that our models can achieve a significant amount of savings. Finally, we compare the results of our proposed model with a simulation model. Abstract in Bahasa Indonesia (translated): In this study an integrated vendor-buyer model with probabilistic demand and equal delivery lot sizes is developed. In the model, each order lot is delivered in several shipments, and the vendor produces the goods in a production batch that is an integer multiple of the delivery lot. An algorithm is developed to solve the resulting mathematical model. In addition, the effect of parameter changes on the model's behavior is examined through a sensitivity analysis of several key parameters, such as lot size, safety stock and total inventory cost. A simulation model is also built to assess the performance of the mathematical model under realistic conditions. Keywords: integrated model, probabilistic demand, delivery lot, supply chain
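
    The record does not reproduce the model itself, but the shape of such vendor-buyer cost functions can be illustrated with a standard single-vendor single-buyer formulation under normal lead-time demand, searched by brute force instead of the authors' iterative procedure. All symbols and parameter values here are illustrative assumptions:

```python
from math import sqrt, exp, erf, pi as PI

def phi(x):                      # standard normal pdf
    return exp(-x * x / 2) / sqrt(2 * PI)

def Phi(x):                      # standard normal cdf
    return 0.5 * (1 + erf(x / sqrt(2)))

def psi(k):                      # unit normal loss function E[(Z - k)+]
    return phi(k) - k * (1 - Phi(k))

def expected_total_cost(q, m, k, p):
    """q: delivery lot, m: shipments per production batch, k: safety factor."""
    setup  = p["D"] / (m * q) * p["S"]              # vendor setup, batch = m*q
    order  = p["D"] / q * p["A"]                    # buyer cost per shipment
    b_hold = p["hb"] * (q / 2 + k * p["sigL"])      # buyer cycle + safety stock
    v_hold = p["hv"] * q / 2 * (m * (1 - p["D"] / p["P"]) - 1 + 2 * p["D"] / p["P"])
    short  = p["pi"] * (p["D"] / q) * p["sigL"] * psi(k)   # backorder penalty
    return setup + order + b_hold + v_hold + short

def grid_search(p, q_step=5, q_max=400, m_max=6):
    """Brute-force search over (q, m, k) in place of an iterative procedure."""
    best = None
    for m in range(1, m_max + 1):
        for q in range(q_step, q_max + 1, q_step):
            for k in [i / 50 for i in range(0, 151)]:   # k in [0, 3]
                c = expected_total_cost(q, m, k, p)
                if best is None or c < best[0]:
                    best = (c, q, m, k)
    return best

# Illustrative parameters: demand D, production rate P, setup S, order cost A,
# holding costs hb/hv, lead-time demand std sigL, backorder cost pi.
params = dict(D=1000, P=3200, S=400, A=25, hb=5.0, hv=4.0, sigL=35.0, pi=15.0)
cost, q, m, k = grid_search(params)
```

    Here q is the delivery lot size, m the number of shipments per production batch, and k the safety factor, mirroring the decision variables named in the abstract.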

  12. LOD-a-lot : A queryable dump of the LOD cloud

    NARCIS (Netherlands)

    Fernández, Javier D.; Beek, Wouter; Martínez-Prieto, Miguel A.; Arias, Mario

    2017-01-01

    LOD-a-lot democratizes access to the Linked Open Data (LOD) Cloud by serving more than 28 billion unique triples from 650K datasets over a single self-indexed file. This corpus can be queried online with a sustainable Linked Data Fragments interface, or downloaded and consumed locally: LOD-a-lot

  13. A comparison of particle swarm optimizations for uncapacitated multilevel lot-sizing problems

    NARCIS (Netherlands)

    Han, Y.; Kaku, I.; Tang, J.; Dellaert, N.P.; Cai, J.; Li, Y.

    2010-01-01

    The multilevel lot-sizing (MLLS) problem is a key production planning problem in the material requirement planning (MRP) system. The MLLS problem deals with determining the production lot sizes of various items appearing in the product structure over a given finite planning horizon to minimize the

  14. 21 CFR 610.1 - Tests prior to release required for each lot.

    Science.gov (United States)

    2010-04-01

    ....1 Section 610.1 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES... release required for each lot. No lot of any licensed product shall be released by the manufacturer prior... considered in determining whether or not the test results meet the test objective, except that a test result...

  15. Extending Database Integration Technology

    National Research Council Canada - National Science Library

    Buneman, Peter

    1999-01-01

    Formal approaches to the semantics of databases and database languages can have immediate and practical consequences in extending database integration technologies to include a vastly greater range...

  16. Creating a database for evaluating the distribution of energy deposited at prostate using simulation in phantom with the Monte Carlo code EGSnrc

    International Nuclear Information System (INIS)

    Resende Filho, T.A.; Vieira, I.F.; Leal Neto, V.

    2009-01-01

    An exposure computational model (ECM), composed of a water tank phantom and a point monoenergetic photon source, coupled to a Monte Carlo code to simulate the interaction and deposition of the energy emitted by I-125, is a tool that presents many advantages for dosimetric evaluations in areas such as the planning of brachytherapy treatments. Using DOSXYZnrc, it was possible to construct a database allowing the end user to estimate in advance the spatial distribution of the prostate dose, an important tool in the brachytherapy procedure. The results obtained show the fractional energy deposited into the water phantom, evaluated at the energies 0.028 MeV and 0.035 MeV, both indicated for this procedure, as well as the dose distribution in the range between 0.10334 and 0.53156 μGy. The mean error is less than 2%, the tolerance limit considered in radiotherapy protocols. (author)

  17. Is the spatial distribution of brain lesions associated with closed-head injury predictive of subsequent development of attention-deficit/hyperactivity disorder? Analysis with brain-image database

    Science.gov (United States)

    Herskovits, E. H.; Megalooikonomou, V.; Davatzikos, C.; Chen, A.; Bryan, R. N.; Gerring, J. P.

    1999-01-01

    PURPOSE: To determine whether there is an association between the spatial distribution of lesions detected at magnetic resonance (MR) imaging of the brain in children after closed-head injury and the development of secondary attention-deficit/hyperactivity disorder (ADHD). MATERIALS AND METHODS: Data obtained from 76 children without prior history of ADHD were analyzed. MR images were obtained 3 months after closed-head injury. After manual delineation of lesions, images were registered to the Talairach coordinate system. For each subject, registered images and secondary ADHD status were integrated into a brain-image database, which contains depiction (visualization) and statistical analysis software. Using this database, we assessed visually the spatial distributions of lesions and performed statistical analysis of image and clinical variables. RESULTS: Of the 76 children, 15 developed secondary ADHD. Depiction of the data suggested that children who developed secondary ADHD had more lesions in the right putamen than children who did not develop secondary ADHD; this impression was confirmed statistically. After Bonferroni correction, we could not demonstrate significant differences between secondary ADHD status and lesion burdens for the right caudate nucleus or the right globus pallidus. CONCLUSION: Closed-head injury-induced lesions in the right putamen in children are associated with subsequent development of secondary ADHD. Depiction software is useful in guiding statistical analysis of image data.

  18. Brede Tools and Federating Online Neuroinformatics Databases

    DEFF Research Database (Denmark)

    Nielsen, Finn Årup

    2014-01-01

    As open science neuroinformatics databases the Brede Database and Brede Wiki seek to make distribution and federation of their content as easy and transparent as possible. The databases rely on simple formats and allow other online tools to reuse their content. This paper describes the possible i...

  19. Towards P2P XML Database Technology

    NARCIS (Netherlands)

    Y. Zhang (Ying)

    2007-01-01

    textabstractTo ease the development of data-intensive P2P applications, we envision a P2P XML Database Management System (P2P XDBMS) that acts as a database middle-ware, providing a uniform database abstraction on top of a dynamic set of distributed data sources. In this PhD work, we research which

  20. Received Signal Strength Database Interpolation by Kriging for a Wi-Fi Indoor Positioning System.

    Science.gov (United States)

    Jan, Shau-Shiun; Yeh, Shuo-Ju; Liu, Ya-Wen

    2015-08-28

    The main approach for a Wi-Fi indoor positioning system is based on received signal strength (RSS) measurements, and the fingerprinting method is utilized to determine the user position by matching the RSS values with the pre-surveyed RSS database. Building an RSS fingerprint database is essential for an RSS-based indoor positioning system, but it requires considerable time and effort, and the labor grows as the indoor environment becomes larger. To provide better indoor positioning services while reducing the labor required to establish the positioning system, an indoor positioning system with an appropriate spatial interpolation method is needed. In addition, the advantage of the RSS approach is that the signal strength decays as the transmission distance increases, and this signal propagation characteristic is applied to an interpolated database with the Kriging algorithm in this paper. Using the distribution of reference points (RPs) at measured points, the signal propagation model of the Wi-Fi access point (AP) in the building can be built and expressed as a function. The function, as the spatial structure of the environment, can create the RSS database quickly in different indoor environments. Thus, in this paper, a Wi-Fi indoor positioning system based on the Kriging fingerprinting method is developed. As the experimental results show, with a 72.2% probability, the error of the RSS database extended with Kriging is less than 3 dBm compared to the surveyed RSS database. Importantly, the positioning error of the developed Wi-Fi indoor positioning system with Kriging is reduced by 17.9% on average relative to the system without Kriging.
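
    The paper's implementation is not shown in the record, but ordinary Kriging over surveyed reference points can be sketched as below. The exponential variogram and its sill/range values are assumptions for illustration, not the model fitted in the study:

```python
from math import exp, hypot

def variogram(h, nugget=0.0, sill=25.0, rng=10.0):
    """Exponential variogram model for RSS (dBm^2 scale); illustrative parameters."""
    return nugget + sill * (1.0 - exp(-h / rng)) if h > 0 else 0.0

def solve(A, b):
    """Gaussian elimination with partial pivoting for the kriging system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for j in range(c, n + 1):
                M[r][j] -= f * M[c][j]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][j] * x[j] for j in range(r + 1, n))) / M[r][r]
    return x

def krige(points, values, target):
    """Ordinary kriging estimate of RSS at `target` from surveyed reference points."""
    n = len(points)
    # System: [Gamma 1; 1^T 0] [w; mu] = [gamma0; 1]
    A = [[variogram(hypot(points[i][0] - points[j][0], points[i][1] - points[j][1]))
          for j in range(n)] + [1.0] for i in range(n)]
    A.append([1.0] * n + [0.0])
    b = [variogram(hypot(p[0] - target[0], p[1] - target[1])) for p in points] + [1.0]
    w = solve(A, b)[:n]          # drop the Lagrange multiplier
    return sum(wi * vi for wi, vi in zip(w, values))
```

    Because the weights solve the kriging system with a unit-sum constraint, the estimate reproduces surveyed RSS values exactly at the reference points and interpolates between them elsewhere.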

  1. Global review of health care surveys using lot quality assurance sampling (LQAS), 1984-2004.

    Science.gov (United States)

    Robertson, Susan E; Valadez, Joseph J

    2006-09-01

    We conducted a global review on the use of lot quality assurance sampling (LQAS) to assess health care services, health behaviors, and disease burden. Publications and reports on LQAS surveys were sought from Medline and five other electronic databases; the World Health Organization; the World Bank; governments, nongovernmental organizations, and individual scientists. We identified a total of 805 LQAS surveys conducted by different management groups during January 1984 through December 2004. There was a striking increase in the annual number of LQAS surveys conducted in 2000-2004 (128/year) compared with 1984-1999 (10/year). Surveys were conducted in 55 countries, and in 12 of these countries there were 10 or more LQAS surveys. Geographically, 317 surveys (39.4%) were conducted in Africa, 197 (28.5%) in the Americas, 115 (14.3%) in the Eastern Mediterranean, 114 (14.2%) in South-East Asia, 48 (6.0%) in Europe, and 14 (1.8%) in the Western Pacific. Health care parameters varied, and some surveys assessed more than one parameter. There were 320 surveys about risk factors for HIV/AIDS/sexually transmitted infections; 266 surveys on immunization coverage, 240 surveys post-disasters, 224 surveys on women's health, 142 surveys on growth and nutrition, 136 surveys on diarrheal disease control, and 88 surveys on quality management. LQAS surveys to assess disease burden included 23 neonatal tetanus mortality surveys and 12 surveys on other diseases. LQAS is a practical field method which increasingly is being applied in assessment of preventive and curative health services, and may offer new research opportunities to social scientists. When LQAS data are collected recurrently at multiple time points, they can be used to measure the spatial variation in behavior change. Such data provide insight into understanding relationships between various investments in social, human, and physical capital, and into the effectiveness of different public health strategies in achieving

  2. Database interfaces on NASA's heterogeneous distributed database system

    Science.gov (United States)

    Huang, Shou-Hsuan Stephen

    1989-01-01

    The syntax and semantics of all commands used in the template are described. Template builders should consult this document for proper commands in the template. Previous documents (Semiannual reports) described other aspects of this project. Appendix 1 contains all substituting commands used in the system. Appendix 2 includes all repeating commands. Appendix 3 is a collection of DEFINE templates from eight different DBMS's.

  3. A Comparative Study on the Lot Release Systems for Vaccines as of 2016.

    Science.gov (United States)

    Fujita, Kentaro; Naito, Seishiro; Ochiai, Masaki; Konda, Toshifumi; Kato, Atsushi

    2017-09-25

    Many countries have already established their own vaccine lot release system that is designed for each country's situation, while the World Health Organization promotes the convergence of these regulatory systems so that vaccines of assured quality are provided globally. We conducted a questionnaire-based investigation of the lot release systems for vaccines in 7 countries and 2 regions. We found that a review of the summary protocol by the National Regulatory Authorities was commonly applied for the independent lot release of vaccines; however, we also noted some diversity between countries, especially in regard to the testing policy. Some countries and regions, including Japan, regularly tested every lot of vaccines, whereas the frequency of these tests was reduced in other countries and regions as determined based on the risk assessment of these products. Test items selected for the lot release varied among the countries or regions investigated, although there was a tendency to prioritize the potency tests. An understanding of the lot release policy may contribute to improving and harmonizing the lot release system globally in the future.

  4. License - PLACE | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us PLACE License License to Use This Database Last updated: 2014/07/17 You may use this database in compliance with the license terms regarding the use of this database and the requirements you must follow in using this database. The license for this database is the Creative Commons Attribution-Share Alike 2.1 Japan. If you use data from this database, please be sure to attribute this database as follows: ... With regard to this database, you are licensed to: freely access part or whole of this database, and acquire data; freely redistribute part or whole of the data from this database; and freely create and distribute databases...

  5. Use of Lot Quality Assurance Sampling to Ascertain Levels of Drug Resistant Tuberculosis in Western Kenya.

    Directory of Open Access Journals (Sweden)

    Julia Jezmir

    Full Text Available To classify the prevalence of multi-drug resistant tuberculosis (MDR-TB) in two different geographic settings in western Kenya using the Lot Quality Assurance Sampling (LQAS) methodology. The prevalence of drug resistance was classified among treatment-naïve smear positive TB patients in two settings, one rural and one urban. These regions were classified as having high or low prevalence of MDR-TB according to a static, two-way LQAS sampling plan selected to classify high resistance regions at greater than 5% resistance and low resistance regions at less than 1% resistance. This study classified both the urban and rural settings as having low levels of TB drug resistance. Out of the 105 patients screened in each setting, two patients were diagnosed with MDR-TB in the urban setting and one patient was diagnosed with MDR-TB in the rural setting. An additional 27 patients were diagnosed with a variety of mono- and poly-resistant strains. Further drug resistance surveillance using LQAS may help identify the levels and geographical distribution of drug resistance in Kenya and may have applications in other countries in the African Region facing similar resource constraints.

  6. Use of Lot Quality Assurance Sampling to Ascertain Levels of Drug Resistant Tuberculosis in Western Kenya.

    Science.gov (United States)

    Jezmir, Julia; Cohen, Ted; Zignol, Matteo; Nyakan, Edwin; Hedt-Gauthier, Bethany L; Gardner, Adrian; Kamle, Lydia; Injera, Wilfred; Carter, E Jane

    2016-01-01

    To classify the prevalence of multi-drug resistant tuberculosis (MDR-TB) in two different geographic settings in western Kenya using the Lot Quality Assurance Sampling (LQAS) methodology. The prevalence of drug resistance was classified among treatment-naïve smear positive TB patients in two settings, one rural and one urban. These regions were classified as having high or low prevalence of MDR-TB according to a static, two-way LQAS sampling plan selected to classify high resistance regions at greater than 5% resistance and low resistance regions at less than 1% resistance. This study classified both the urban and rural settings as having low levels of TB drug resistance. Out of the 105 patients screened in each setting, two patients were diagnosed with MDR-TB in the urban setting and one patient was diagnosed with MDR-TB in the rural setting. An additional 27 patients were diagnosed with a variety of mono- and poly- resistant strains. Further drug resistance surveillance using LQAS may help identify the levels and geographical distribution of drug resistance in Kenya and may have applications in other countries in the African Region facing similar resource constraints.

  7. Lot quality assurance sampling for screening communities hyperendemic for Schistosoma mansoni.

    Science.gov (United States)

    Rabarijaona, L P; Boisier, P; Ravaoalimalala, V E; Jeanne, I; Roux, J F; Jutand, M A; Salamon, R

    2003-04-01

    Lot quality assurance sampling (LQAS) was evaluated for rapid, low-cost identification of communities where Schistosoma mansoni infection was hyperendemic in southern Madagascar. In the study area, S. mansoni infection shows a very focal and heterogeneous distribution, requiring numerous local surveys. One sampling plan was tested in the field with schoolchildren and several others were simulated in the laboratory. Randomization and stool specimen collection were performed by voluntary teachers under direct supervision of the study staff and no significant problem occurred. As expected from Receiver Operating Characteristic (ROC) curves, all sampling plans allowed correct identification of hyperendemic communities and of most of the hypoendemic ones. Frequent misclassifications occurred for communities with intermediate prevalence and the cheapest plans had very low specificity. The study confirmed that LQAS would be a valuable tool for large-scale screening in a country with scarce financial and staff resources. Involving teachers appeared to be quite feasible and should not lower the reliability of surveys. We recommend that the national schistosomiasis control programme systematically uses LQAS for identification of communities, provided that sample sizes are adapted to the specific epidemiological patterns of S. mansoni infection in the main regions.

  8. Cluster-sample surveys and lot quality assurance sampling to evaluate yellow fever immunisation coverage following a national campaign, Bolivia, 2007.

    Science.gov (United States)

    Pezzoli, Lorenzo; Pineda, Silvia; Halkyer, Percy; Crespo, Gladys; Andrews, Nick; Ronveaux, Olivier

    2009-03-01

    To estimate the yellow fever (YF) vaccine coverage for the endemic and non-endemic areas of Bolivia and to determine whether selected districts had acceptable levels of coverage (>70%). We conducted two surveys of 600 individuals (25 x 12 clusters) to estimate coverage in the endemic and non-endemic areas. We assessed 11 districts using lot quality assurance sampling (LQAS). The lot (district) sample was 35 individuals with six as decision value (alpha error 6% if true coverage 70%; beta error 6% if true coverage 90%). To increase feasibility, we divided the lots into five clusters of seven individuals; to investigate the effect of clustering, we calculated alpha and beta by conducting simulations where each cluster's true coverage was sampled from a normal distribution with a mean of 70% or 90% and standard deviations of 5% or 10%. Estimated coverage was 84.3% (95% CI: 78.9-89.7) in endemic areas, 86.8% (82.5-91.0) in non-endemic and 86.0% (82.8-89.1) nationally. LQAS showed that four lots had unacceptable coverage levels. In six lots, results were inconsistent with the estimated administrative coverage. The simulations suggested that the effect of clustering the lots is unlikely to have significantly increased the risk of making incorrect accept/reject decisions. Estimated YF coverage was high. Discrepancies between administrative coverage and LQAS results may be due to incorrect population data. Even allowing for clustering in LQAS, the statistical errors would remain low. Catch-up campaigns are recommended in districts with unacceptable coverage.
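
    The clustered decision rule described above (35 subjects taken as five clusters of seven, decision value six) can be checked by simulation in the spirit of the authors' calculations. The direction of the accept/reject rule and the truncated-normal cluster coverages are assumptions for illustration:

```python
import random

N_CLUSTERS, PER_CLUSTER, DECISION = 5, 7, 6   # 5 x 7 = 35 subjects per lot

def lot_accepted(mean_cov, sd):
    """Sample one lot; accept (coverage judged >70%) if <= DECISION unvaccinated."""
    unvaccinated = 0
    for _ in range(N_CLUSTERS):
        # Cluster-level coverage drawn from a normal, truncated to [0, 1].
        p = min(1.0, max(0.0, random.gauss(mean_cov, sd)))
        unvaccinated += sum(random.random() > p for _ in range(PER_CLUSTER))
    return unvaccinated <= DECISION

def error_rates(sd, trials=20000):
    """Monte Carlo alpha (accept a 70% lot) and beta (reject a 90% lot)."""
    alpha = sum(lot_accepted(0.70, sd) for _ in range(trials)) / trials
    beta = sum(not lot_accepted(0.90, sd) for _ in range(trials)) / trials
    return alpha, beta
```

    error_rates(0.05) and error_rates(0.10) then approximate alpha and beta under the two clustering scenarios considered in the paper.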

  9. Security Research on Engineering Database System

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    The engine engineering database system is a CAD-oriented applied database management system with the capability of managing distributed data. The paper discusses the security issues of the engine engineering database management system (EDBMS). By studying and analyzing database security, a series of security rules is derived, reaching the B1-level security standard, which includes discretionary access control (DAC), mandatory access control (MAC) and audit. The EDBMS implements functions of DAC, ...

  10. LOT Project long term test of buffer material at the Aespoe HRL

    International Nuclear Information System (INIS)

    Karnland, O.; Olsson, S.; Dueck, A.; Birgersson, M.; Nilsson, U.; Hernan-Haakansson, T.; Pedersen, K.; Eriksson, S.; Eriksen, T.; Eriksson, S.; Rosborg, B.; Muurinen, A.; Rousset, D.; Mosser-Ruck, R.; Cathelineau, M.; Villieras, F.; Pelletier, M.; Kaufold, S.; Dohrmann, R.; Fernandez, R.; Maeder, U.; Koroleva, M.

    2010-01-01

    Document available in extended abstract form only. Bentonite clay has been proposed as buffer material in several concepts for HLW repositories. The decaying spent fuel in the HLW canisters will increase the temperature of the bentonite buffer. A number of laboratory test series, made by different research groups, have resulted in various bentonite alteration models. According to these models, no significant alteration of the buffer is expected to take place at the prevailing physico-chemical conditions in the proposed Swedish KBS-3 repository, neither during nor after water saturation. The ongoing LOT test series is focused on quantifying the mineralogical alteration of the buffer in a repository-like environment at the Aespoe HRL. Further, buffer-related processes concerning bacterial survival/activity, cation transport, and copper corrosion are studied. In total, the LOT test series includes seven test parcels, of which three are exposed to standard KBS-3 conditions and four test parcels are exposed to adverse conditions. Each test parcel contains a central Cu-tube surrounded by bentonite cylinder rings with a diameter of 30 cm, additional test material (Cu coupons, 60Co tracers, bacteria, etc.) and instruments. Electrical heaters were placed within the copper tube in order to simulate the effect of decaying power from the spent fuel. The entire test parcels were released from the rock after the field exposure by overlapping boring, and the bentonite material was analyzed with respect to: - physical properties (water content, density, swelling pressure, hydraulic conductivity, rheology); - mineralogical alteration in the bentonite; - distribution of added substances (e.g. diffusional transport of 60Co); - copper corrosion; - bacterial survival/activity. Two one-year tests were started in 1996 and terminated in 1998. The results from tests and analyses are presented in SKB TR-00-22. 
The remaining four test parcels were installed during the fall 1999 plus one additional one

  11. Database development and management

    CERN Document Server

    Chao, Lee

    2006-01-01

    Introduction to Database Systems; Functions of a Database; Database Management System; Database Components; Database Development Process; Conceptual Design and Data Modeling; Introduction to Database Design Process; Understanding Business Process; Entity-Relationship Data Model; Representing Business Process with Entity-Relationship Model; Table Structure and Normalization; Introduction to Tables; Table Normalization; Transforming Data Models to Relational Databases; DBMS Selection; Transforming Data Models to Relational Databases; Enforcing Constraints; Creating Database for Business Process; Physical Design and Database

  12. The 'Thinking a Lot' Idiom of Distress and PTSD: An Examination of Their Relationship among Traumatized Cambodian Refugees Using the 'Thinking a Lot' Questionnaire

    NARCIS (Netherlands)

    Hinton, D.E.; Reis, R.; de Jong, J.

    2015-01-01

    "Thinking a lot" (TAL)—also referred to as "thinking too much"—is a key complaint in many cultural contexts, and the current article profiles this idiom of distress among Cambodian refugees. The article also proposes a general model of how TAL generates various types of distress that then cause

  13. Lots of Small Stars Born in Starburst Region

    Science.gov (United States)

    1999-10-01

    study a starburst region on a star-by-star basis down to this low mass limit. For comparison, the most sensitive observations of the more distant Tarantula Nebula only reach down to a limit of about 1 solar mass. A most important conclusion of this study is that there are lots of sub-solar mass stars in NGC 3603 , i.e., contrary to several theoretical predictions, these low-mass stars do form in violent starbursts ! The overall age of stars in the contraction phase that are located in the innermost region of NGC 3603 was found to be 300,000 - 1,000,000 years. The counts clearly show that this cluster is well populated in sub-solar mass stars. The next steps The team describes these new results in a scientific article ( "Low-mass stars in the massive HII region NGC 3603 - Deep NIR imaging with ANTU/ISAAC") that will appear in the European research journal Astronomy & Astrophysics in December 1999. Further information about related work on NGC 3603 is available at a dedicated webpage. The present VLT data will now be used for continued studies during which the limits of detection and measurement will be further pushed by means of advanced image processing and analysis. It will also be interesting to look further into possible variations of the number of stars with a given mass over the observed field, not least, to compare the new results with other ongoing studies of different regions (although less massive), e.g. with the Hubble Space Telescope and its infrared instrument NICMOS or with ground-based Adaptive Optics instruments. Notes [1] The team consists of Bernhard Brandl (Principal Investigator; Cornell University, Ithaca, New York, USA), Wolfgang Brandner (University of Hawaii, Honolulu, USA), Frank Eisenhauer (Max-Planck-Institut für Extraterrestrische Physik, Garching, Germany), Anthony F.J. 
Moffat (Université de Montreal, Canada), Francesco Palla (Osservatorio Astrofisico di Arcetri, Florence, Italy) and Hans Zinnecker (Astrophysikalisches Institut Potsdam

  14. Dying Stars Indicate Lots of Dark Matter in Giant Galaxy

    Science.gov (United States)

    1994-04-01

    nebulae at once. In view of the very long exposure times needed, this is an absolute must in order to perform these observations within the available telescope time. Before the observations can begin, the exact positions of the planetary nebulae are measured. A metal mask is then prepared with holes that permit the light from these objects to pass into EMMI, but at the same time blocks most of the much brighter, disturbing light emitted by the Earth's atmosphere. With an additional optical filter, all but the green light is effectively filtered out; this further "removes" unwanted light and improves the chances of effective registration of the faint light from the planetary nebulae in NGC 1399. VELOCITIES OF PLANETARY NEBULAE IN NGC 1399 The careful preparations paid off and this observational strategy was successful. During two of the allocated nights (the third was lost due to bad weather), the Australian observers (Magda Arnaboldi and Ken Freeman) were able for the first time to measure individual velocities for 37 planetary nebulae in NGC 1399. Some of these are indicated on the picture that accompanies this Press Release. The difficulty of this observation is illustrated by the fact that in order to catch enough light from these faint objects, the total exposure time was no less than 5 hours, and only one field on either side of the galaxy could be observed per night. Already at the telescope the astronomers realised that the new results are very exciting; this was fully confirmed by the following long and complicated process of data reduction. In fact, although the inner parts of this galaxy rotate quite slowly, the planetary nebulae in the outer regions are in rapid motion and clearly indicate a fast rotation of these parts of the galaxy. This new observation is just as expected from the above described theory for the formation of giant galaxies and therefore provides very strong support for this theory. 
LOTS OF DARK MATTER IN NGC 1399 Perhaps the most exciting

  15. Design Schematics for a Sustainable Parking Lot: Building 2-2332, ENRD Classroom, Fort Bragg, NC

    National Research Council Canada - National Science Library

    Stumpf, Annette

    2003-01-01

    ...) was tasked with planning a sustainable design "charrette" to explore and develop alternative parking lot designs that would meet Fort Bragg's parking needs, as well as its need to meet sustainable...

  16. PENENTUAN PRODUCTION LOT SIZES DAN TRANSFER BATCH SIZES DENGAN PENDEKATAN MULTISTAGE

    Directory of Open Access Journals (Sweden)

    Purnawan Adi W

    2012-02-01

    Full Text Available Inventory control and maintenance is a problem often faced by organizations in every economic sector. One of the challenges in inventory control is determining the optimal lot size in production systems of various types. Analysis of production lots using a hybrid analytic-simulation approach is one line of research on optimal lot sizes. That research used a singlestage approach, in which there is no relationship between the processes at each stage; in other words, each process is independent of the others. Using the same research object, this study addresses the determination of production lot sizes with a multistage approach. First, using the same data as the previous research, the optimal production lot size is determined with linear programming. The production lot size is then used as simulation input to determine the transfer batch size. Average queue length and waiting time are the performance measures used as references for selecting the transfer batch size from several alternative sizes. In this research, the resulting production lot size equals the demand in each period. For the transfer batch size, the results determined by simulation were then implemented in the model. The result is an inventory reduction of 76.35% for the connector product and 50.59% for the box connector product compared with the inventory produced by the singlestage approach. Keywords: multistage, production lot, transfer batch

  17. MOTIVASI PEREMPUAN MEMBUKA USAHA SEKTOR INFORMAL DI DAYA TARIK WISATA TANAH LOT, TABANAN

    Directory of Open Access Journals (Sweden)

    Luh Putu Aritiana Kumala Pratiwi

    2016-08-01

    Full Text Available The development of tourism in Tanah Lot has opened up business opportunities for local women. The businesses most commonly run by women are selling traditional klepon snacks, postcards, and hairpins. Women who participate must weigh their decision to take on a dual role, as both housewives and sellers in Tanah Lot. This article analyzes the motivation of women to open a business in the Tanah Lot area. The results showed that women open businesses in the informal sector in Tanah Lot to meet physiological needs, safety needs, affiliation, appreciation, and self-actualization, and to gain work experience. The factors that affect women's motivations are internal factors such as age, educational background, family income, and marital status, and external factors such as the selling location, the condition of the selling place, and having their own income.

  18. A Heuristic Approach for Determining Lot Sizes and Schedules Using Power-of-Two Policy

    Directory of Open Access Journals (Sweden)

    Esra Ekinci

    2007-01-01

    Full Text Available We consider the problem of determining realistic and easy-to-schedule lot sizes in a multiproduct, multistage manufacturing environment. We concentrate on a specific type of production, namely, flow shop type production. The model developed consists of two parts: a lot sizing problem and a scheduling problem. In the lot sizing problem, we employ binary integer programming and determine reorder intervals for each product using a power-of-two policy. In the second part, using the results obtained from the lot sizing problem, we employ mixed integer programming to determine schedules for a multiproduct, multistage case with multiple machines in each stage. Finally, we provide a numerical example and compare the results with similar methods found in practice.
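The power-of-two idea mentioned above rounds each product's optimal reorder interval T* to a power of two times a base period; with the classic cost ratio 0.5(T/T* + T*/T), the rounded interval is known to stay within about 6% of optimal. The sketch below is an illustration of that rounding step under these standard assumptions, not the paper's integer program.

```python
import math

def power_of_two_interval(t_star, base=1.0):
    """Round an optimal reorder interval t_star to base * 2**k,
    minimizing the standard cost ratio 0.5 * (T/T* + T*/T)."""
    cost = lambda t: 0.5 * (t / t_star + t_star / t)
    k = round(math.log2(t_star / base))
    # Check the neighbours of the rounded exponent to be safe.
    candidates = [base * 2**(k + dk) for dk in (-1, 0, 1)]
    return min(candidates, key=cost)

t = power_of_two_interval(5.0)  # rounds to 4.0, cost ratio 1.025
```

For any T*, the chosen interval's cost ratio never exceeds 0.5(sqrt(2) + 1/sqrt(2)) ≈ 1.06, which is the usual justification for restricting schedules to power-of-two intervals.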

  19. Hybrid Discrete Differential Evolution Algorithm for Lot Splitting with Capacity Constraints in Flexible Job Scheduling

    Directory of Open Access Journals (Sweden)

    Xinli Xu

    2013-01-01

    Full Text Available A two-level batch chromosome coding scheme is proposed to solve the lot splitting problem with equipment capacity constraints in flexible job shop scheduling, which includes a lot splitting chromosome and a lot scheduling chromosome. To balance the global search and local exploration of the differential evolution algorithm, a hybrid discrete differential evolution algorithm (HDDE) is presented, in which a local strategy with dynamic random searching based on the critical path and a random mutation operator is developed. The performance of HDDE was evaluated on 14 benchmark problems and a practical dye vat scheduling problem. The simulation results showed that the proposed algorithm has strong global search capability and can effectively solve practical lot splitting problems with equipment capacity constraints.

  20. Single product lot-sizing on unrelated parallel machines with non-decreasing processing times

    Science.gov (United States)

    Eremeev, A.; Kovalyov, M.; Kuznetsov, P.

    2018-01-01

    We consider a problem in which at least a given quantity of a single product has to be partitioned into lots, and lots have to be assigned to unrelated parallel machines for processing. In one version of the problem, the maximum machine completion time should be minimized, in another version of the problem, the sum of machine completion times is to be minimized. Machine-dependent lower and upper bounds on the lot size are given. The product is either assumed to be continuously divisible or discrete. The processing time of each machine is defined by an increasing function of the lot volume, given as an oracle. Setup times and costs are assumed to be negligibly small, and therefore, they are not considered. We derive optimal polynomial time algorithms for several special cases of the problem. An NP-hard case is shown to admit a fully polynomial time approximation scheme. An application of the problem in energy efficient processors scheduling is considered.
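For the continuously divisible, min-makespan version described above, one natural approach is bisection: for a trial makespan T, each machine's maximal feasible lot follows from inverting its increasing processing-time oracle, and the smallest feasible T is found by shrinking the interval. This is a sketch of that idea under simplifying assumptions (no machine-dependent lower bounds, no setups), not the authors' exact algorithm.

```python
def min_makespan(procs, upper, Q, iters=60):
    """procs: list of increasing functions p_i(v) -> processing time (oracles).
    upper: per-machine upper bounds on lot volume.
    Q: total volume to process (continuously divisible).
    Returns an approximate minimal makespan."""

    def max_volume(p, u, T):
        # Largest v in [0, u] with p(v) <= T, by bisection (p is increasing).
        if p(u) <= T:
            return u
        lo, hi = 0.0, u
        for _ in range(iters):
            mid = (lo + hi) / 2
            if p(mid) <= T:
                lo = mid
            else:
                hi = mid
        return lo

    lo, hi = 0.0, max(p(u) for p, u in zip(procs, upper))
    for _ in range(iters):
        T = (lo + hi) / 2
        if sum(max_volume(p, u, T) for p, u in zip(procs, upper)) >= Q:
            hi = T  # feasible: try a smaller makespan
        else:
            lo = T
    return hi

# Two machines with linear oracles p(v) = a*v: the optimum is Q / sum(1/a_i).
T = min_makespan([lambda v: 1.0 * v, lambda v: 2.0 * v], [10.0, 10.0], 3.0)
```

With rates a = (1, 2) and Q = 3, both machines finish at T = 2 (volumes 2 and 1), which the bisection recovers numerically.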

  1. Note sur l'histoire démographique de Douelle (Lot) 1676-1914

    OpenAIRE

    Jean Fourastié

    1986-01-01

    Fourastié, Jean. Note on the demographic history of Douelle (Lot), 1676-1914. This article summarizes the demographic data contained in a book about the village of Douelle in the department of the Lot. Both family reconstitution and genealogies have been used to ascertain the major demographic characteristics of this region during the 17th and 18th centuries: a high rate of endogamous marriages, few remarriages, declining birth rates before the Revolution, a very low number of illegitimate b...

  2. Solving a combined cutting-stock and lot-sizing problem with a column generating procedure

    DEFF Research Database (Denmark)

    Nonås, Sigrid Lise; Thorstenson, Anders

    2008-01-01

    In Nonås and Thorstenson [A combined cutting stock and lot sizing problem. European Journal of Operational Research 120(2) (2000) 327-42] a combined cutting-stock and lot-sizing problem is outlined under static and deterministic conditions. In this paper we suggest a new column generating solutio...... indicate that the procedure works well also for the extended cutting-stock problem with only a setup cost for each pattern change....

  3. The GLIMS Glacier Database

    Science.gov (United States)

    Raup, B. H.; Khalsa, S. S.; Armstrong, R.

    2007-12-01

    Info, GML (Geography Markup Language) and GMT (Generic Mapping Tools). This "clip-and-ship" function allows users to download only the data they are interested in. Our flexible web interfaces to the database, which includes various support layers (e.g. a layer to help collaborators identify satellite imagery over their region of expertise) will facilitate enhanced analysis to be undertaken on glacier systems, their distribution, and their impacts on other Earth systems.

  4. Coal-tar-based parking lot sealcoat: An unrecognized source of PAH to settled house dust

    Science.gov (United States)

    Mahler, B.J.; Van Metre, P.C.; Wilson, J.T.; Musgrove, M.; Burbank, T.L.; Ennis, T.E.; Bashara, T.J.

    2010-01-01

    Despite much speculation, the principal factors controlling concentrations of polycyclic aromatic hydrocarbons (PAH) in settled house dust (SHD) have not yet been identified. In response to recent reports that dust from pavement with coal-tar-based sealcoat contains extremely high concentrations of PAH, we measured PAH in SHD from 23 apartments and in dust from their associated parking lots, one-half of which had coal-tar-based sealcoat (CT). The median concentration of total PAH (T-PAH) in dust from CT parking lots (4760 μg/g, n = 11) was 530 times higher than that from parking lots with other pavement surface types (asphalt-based sealcoat, unsealed asphalt, concrete [median 9.0 μg/g, n = 12]). T-PAH in SHD from apartments with CT parking lots (median 129 μg/g) was 25 times higher than that in SHD from apartments with parking lots with other pavement surface types (median 5.1 μg/g). Presence or absence of CT on a parking lot explained 48% of the variance in log-transformed T-PAH in SHD. Urban land-use intensity near the residence also had a significant but weaker relation to T-PAH. No other variables tested, including carpeting, frequency of vacuuming, and indoor burning, were significant. © 2010 American Chemical Society.

  5. Evaluation of coverage of enriched UF6 cylinder storage lots by existing criticality accident alarms

    International Nuclear Information System (INIS)

    Lee, B.L. Jr.; Dobelbower, M.C.; Woollard, J.E.; Sutherland, P.J.; Tayloe, R.W. Jr.

    1995-03-01

    The Portsmouth Gaseous Diffusion Plant (PORTS) is leased from the US Department of Energy (DOE) by the United States Enrichment Corporation (USEC), a government corporation formed in 1993. PORTS is in transition from regulation by DOE to regulation by the Nuclear Regulatory Commission (NRC). One regulation is 10 CFR Part 76.89, which requires that criticality alarm systems be provided for the site. PORTS originally installed criticality accident alarm systems in all buildings for which nuclear criticality accidents were credible. Currently, however, alarm systems are not installed in the enriched uranium hexafluoride (UF6) cylinder storage lots. This report analyzes and documents the extent to which enriched UF6 cylinder storage lots at PORTS are covered by criticality detectors and alarms currently installed in adjacent buildings. Monte Carlo calculations are performed on simplified models of the cylinder storage lots and adjacent buildings. The storage lots modelled are X-745B, X-745C, X-745D, X-745E, and X-745F. The criticality detectors modelled are located in building X-343, the building X-344A/X-342A complex, and portions of building X-330 (see Figures 1 and 2). These criticality detectors are those located closest to the cylinder storage lots. Results of this analysis indicate that the existing criticality detectors currently installed at PORTS are largely ineffective in detecting neutron radiation from criticality accidents in most of the cylinder storage lots at PORTS, except sometimes along portions of their peripheries

  6. Extended functions of the database machine FREND for interactive systems

    International Nuclear Information System (INIS)

    Hikita, S.; Kawakami, S.; Sano, K.

    1984-01-01

    Well-designed visual interfaces encourage non-expert users to use relational database systems. In systems such as office automation systems or engineering database systems, non-expert users interactively access the database from visual terminals. Depending on the situation, some users may want exclusive use of the database while others share it. Because those jobs take a long time to complete, concurrency control must be well designed to enhance concurrency. The extended method of concurrency control in FREND is presented in this paper. The authors assume that systems are composed of workstations, a local area network, and the database machine FREND. This paper also stresses that the workstations and FREND must cooperate to accomplish concurrency control for interactive applications

  7. Study on managing EPICS database using ORACLE

    International Nuclear Information System (INIS)

    Liu Shu; Wang Chunhong; Zhao Jijiu

    2007-01-01

    EPICS is used as a development toolkit for the BEPCII control system. The core of EPICS is a distributed database residing in front-end machines. The distributed database is usually created by tools such as VDCT or a text editor on the host, then loaded to front-end target IOCs through the network. In the BEPCII control system there are about 20,000 signals, which are distributed over more than 20 IOCs. All the databases are developed by device control engineers using VDCT or a text editor; there are no uniform tools providing transparent management. The paper first presents the current status of EPICS database management in many labs. Secondly, it studies the EPICS database and the interface between ORACLE and the EPICS database. Finally, it introduces the software development and its application in the BEPCII control system. (authors)

  8. Mathematics for Databases

    NARCIS (Netherlands)

    ir. Sander van Laar

    2007-01-01

    A formal description of a database consists of the description of the relations (tables) of the database together with the constraints that must hold on the database. Furthermore the contents of a database can be retrieved using queries. These constraints and queries for databases can very well be

  9. Databases and their application

    NARCIS (Netherlands)

    Grimm, E.C.; Bradshaw, R.H.W; Brewer, S.; Flantua, S.; Giesecke, T.; Lézine, A.M.; Takahara, H.; Williams, J.W.,Jr; Elias, S.A.; Mock, C.J.

    2013-01-01

    During the past 20 years, several pollen database cooperatives have been established. These databases are now constituent databases of the Neotoma Paleoecology Database, a public domain, multiproxy, relational database designed for Quaternary-Pliocene fossil data and modern surface samples. The

  10. License - RMG | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available RMG License: License to Use This Database. Last updated: 2013/08/07. You may use this database... terms regarding the use of this database and the requirements you must follow in using this database. The license for this database... Alike 2.1 Japan. If you use data from this database, please be sure to attribute this database as follows: Ric... Japan is found here. With regard to this database, you are licensed to: freely access part or whole of this database, and acquire data; freely redistribute part or whole of the data from this database; and freely create and distribute datab...

  11. Galaxy Clusters, Near and Far, Have a Lot in Common

    Science.gov (United States)

    2005-04-01

    Using two orbiting X-ray telescopes, a team of international astronomers has examined distant galaxy clusters in order to compare them with their counterparts that are relatively close by. Speaking today at the RAS National Astronomy Meeting in Birmingham, Dr. Ben Maughan (Harvard-Smithsonian Center for Astrophysics), presented the results of this new analysis. The observations indicate that, despite the great expansion that the Universe has undergone since the Big Bang, galaxy clusters both local and distant have a great deal in common. This discovery could eventually lead to a better understanding of how to "weigh" these enormous structures, and, in so doing, answer important questions about the nature and structure of the Universe. Clusters of galaxies, the largest known gravitationally-bound objects, are the knots in the cosmic web of structure that permeates the Universe. Theoretical models make predictions about the number, distribution and properties of these clusters. Scientists can test and improve models of the Universe by comparing these predictions with observations. The most powerful way of doing this is to measure the masses of galaxy clusters, particularly those in the distant Universe. However, weighing galaxy clusters is extremely difficult. One relatively easy way to weigh a galaxy cluster is to use simple laws ("scaling relations") to estimate its weight from properties that are easy to observe, like its luminosity (brightness) or temperature. This is like estimating someone's weight from their height if you didn't have any scales. Over the last 3 years, a team of researchers, led by Ben Maughan, has observed 11 distant galaxy clusters with ESA's XMM-Newton and NASA's Chandra X-ray Observatory. The clusters have redshifts of z = 0.6-1.0, which corresponds to distances of 6 to 8 billion light years. This means that we see them as they were when the Universe was half its present age. 
The survey included two unusual systems, one in which two massive

  12. Cluster lot quality assurance sampling: effect of increasing the number of clusters on classification precision and operational feasibility.

    Science.gov (United States)

    Okayasu, Hiromasa; Brown, Alexandra E; Nzioki, Michael M; Gasasira, Alex N; Takane, Marina; Mkanda, Pascal; Wassilak, Steven G F; Sutter, Roland W

    2014-11-01

    To assess the quality of supplementary immunization activities (SIAs), the Global Polio Eradication Initiative (GPEI) has used cluster lot quality assurance sampling (C-LQAS) methods since 2009. However, since the inception of C-LQAS, questions have been raised about the optimal balance between operational feasibility and precision of classification of lots to identify areas with low SIA quality that require corrective programmatic action. To determine if an increased precision in classification would result in differential programmatic decision making, we conducted a pilot evaluation in 4 local government areas (LGAs) in Nigeria with an expanded LQAS sample size of 16 clusters (instead of the standard 6 clusters) of 10 subjects each. The results showed greater heterogeneity between clusters than the assumed standard deviation of 10%, ranging from 12% to 23%. Comparing the distribution of 4-outcome classifications obtained from all possible combinations of 6-cluster subsamples to the observed classification of the 16-cluster sample, we obtained an exact match in classification in 56% to 85% of instances. We concluded that the 6-cluster C-LQAS provides acceptable classification precision for programmatic action. Considering the greater resources required to implement an expanded C-LQAS, the improvement in precision was deemed insufficient to warrant the effort. Published by Oxford University Press on behalf of the Infectious Diseases Society of America 2014. This work is written by (a) US Government employee(s) and is in the public domain in the US.
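The subsample comparison described above (all 6-cluster combinations drawn from a 16-cluster sample) can be mimicked with a small simulation. Everything below is an assumption for illustration: simulated cluster coverages, a simple pass/fail rule in place of the GPEI 4-outcome classification, and an arbitrary 80% pass mark.

```python
import random
from itertools import combinations
from math import comb

random.seed(1)

CLUSTERS, SUBJECTS, THRESHOLD = 16, 10, 0.80  # assumed design and pass mark

# Simulated cluster results: vaccinated count out of 10 per cluster, with
# between-cluster heterogeneity comparable to the 12-23% observed.
coverages = [random.gauss(0.80, 0.15) for _ in range(CLUSTERS)]
results = [sum(random.random() < min(max(c, 0.0), 1.0) for _ in range(SUBJECTS))
           for c in coverages]

def classify(sample):
    """Pass/fail on pooled coverage across the sampled clusters."""
    return sum(sample) / (len(sample) * SUBJECTS) >= THRESHOLD

reference = classify(results)  # verdict from the full 16-cluster sample

# Compare every possible 6-cluster subsample against the reference verdict.
subsamples = list(combinations(results, 6))
matches = sum(classify(s) == reference for s in subsamples)
match_rate = matches / len(subsamples)
```

With C(16, 6) = 8008 subsamples, the match rate plays the role of the 56% to 85% exact-match figures reported in the abstract; higher between-cluster variability drives it down.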

  13. Parcels and Land Ownership, Square-mile, section-wide, property ownership parcel and lot-block boundaries. Includes original platted lot lines. These coverages are maintained interactively by GIS staff. Primary attributes include Parcel IDS (Control, Key, and PIN), platted lot and, Published in 2008, 1:1200 (1in=100ft) scale, Sedgwick County Government.

    Data.gov (United States)

    NSGIC Local Govt | GIS Inventory — Parcels and Land Ownership dataset current as of 2008. Square-mile, section-wide, property ownership parcel and lot-block boundaries. Includes original platted lot...

  14. International Ventilation Cooling Application Database

    DEFF Research Database (Denmark)

    Holzer, Peter; Psomas, Theofanis Ch.; OSullivan, Paul

    2016-01-01

    The currently running International Energy Agency, Energy and Conservation in Buildings, Annex 62 Ventilative Cooling (VC) project, is coordinating research towards extended use of VC. Within this Annex 62 the joint research activity of International VC Application Database has been carried out...... and locations, using VC as a means of indoor comfort improvement. The building-spreadsheet highlights distributions of technologies and strategies, such as the following. (Numbers in % refer to the sample of the database’s 91 buildings.) It may be concluded that Ventilative Cooling is applied in temporary......, systematically investigating the distribution of technologies and strategies within VC. The database is structured as both a ticking-list-like building-spreadsheet and a collection of building-datasheets. The content of both closely follows the Annex 62 State-Of-The-Art-Report. The database has been filled, based

  15. Lot quality assurance sampling (LQAS) for monitoring a leprosy elimination program.

    Science.gov (United States)

    Gupte, M D; Narasimhamurthy, B

    1999-06-01

    In a statistical sense, prevalences of leprosy in different geographical areas can be called very low or rare. Conventional survey methods to monitor leprosy control programs, therefore, need large sample sizes, are expensive, and are time-consuming. Further, with the lowering of prevalence to the near-desired target level, 1 case per 10,000 population at national or subnational levels, the program administrator's concern will be shifted to smaller areas, e.g., districts, for assessment and, if needed, for necessary interventions. In this paper, Lot Quality Assurance Sampling (LQAS), a quality control tool in industry, is proposed to identify districts/regions having a prevalence of leprosy at or above a certain target level, e.g., 1 in 10,000. This technique can also be considered for identifying districts/regions at or below the target level of 1 per 10,000, i.e., areas where the elimination level is attained. For simulating various situations and strategies, a hypothetical computerized population of 10 million persons was created. This population mimics the actual population in terms of the empirical information on rural/urban distributions and the distribution of households by size for the state of Tamil Nadu, India. Various levels with respect to leprosy prevalence are created using this population. The distribution of the number of cases in the population was expected to follow the Poisson process, and this was also confirmed by examination. Sample sizes and corresponding critical values were computed using Poisson approximation. Initially, villages/towns are selected from the population and from each selected village/town households are selected using systematic sampling. Households instead of individuals are used as sampling units. This sampling procedure was simulated 1000 times in the computer from the base population. The results in four different prevalence situations meet the required limits of Type I error of 5% and 90% Power. It is concluded that
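The core LQAS calculation under the Poisson approximation mentioned above can be sketched as a search for the smallest sample size and critical value that keep both error rates within limits. The prevalence levels, error limits, and search grid below are illustrative assumptions, not the paper's actual design parameters.

```python
from math import exp

def poisson_cdf(c, lam):
    """P(X <= c) for X ~ Poisson(lam)."""
    term, total = exp(-lam), exp(-lam)
    for k in range(1, c + 1):
        term *= lam / k
        total += term
    return total

def lqas_design(p_hi, p_lo, alpha=0.05, power=0.90, n_max=200_000):
    """Smallest sample size n (persons, searched in steps of 1000) and
    critical value c such that P(reject | p_lo) <= alpha and
    P(reject | p_hi) >= power, where 'reject' (classify the region as
    at/above the target level) means observing more than c cases."""
    for n in range(1000, n_max, 1000):
        for c in range(30):
            if (1 - poisson_cdf(c, n * p_lo) <= alpha and
                    1 - poisson_cdf(c, n * p_hi) >= power):
                return n, c
    return None

# Illustrative: flag areas at twice the 1-per-10,000 target vs half of it.
design = lqas_design(p_hi=2 / 10_000, p_lo=0.5 / 10_000)
```

Because expected case counts at these prevalences are small, the required sample sizes run into the tens of thousands of persons, which is why the paper samples households in clusters rather than individuals directly.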

  16. Dietary Supplement Ingredient Database

    Science.gov (United States)

    ... and US Department of Agriculture Dietary Supplement Ingredient Database Toggle navigation Menu Home About DSID Mission Current ... values can be saved to build a small database or add to an existing database for national, ...

  17. Energy Consumption Database

    Science.gov (United States)

    Consumption Database The California Energy Commission has created this on-line database for informal reporting ) classifications. The database also provides easy downloading of energy consumption data into Microsoft Excel (XLSX

  18. Ammonia losses and nitrogen partitioning at a southern High Plains open lot dairy

    Science.gov (United States)

    Todd, Richard W.; Cole, N. Andy; Hagevoort, G. Robert; Casey, Kenneth D.; Auvermann, Brent W.

    2015-06-01

    Animal agriculture is a significant source of ammonia (NH3). Cattle excrete most ingested nitrogen (N); most urinary N is converted to NH3, volatilized and lost to the atmosphere. Open lot dairies on the southern High Plains are a growing industry and face environmental challenges as well as reporting requirements for NH3 emissions. We quantified NH3 emissions from the open lot and wastewater lagoons of a commercial New Mexico dairy during a nine-day summer campaign. The 3500-cow dairy consisted of open lot, manure-surfaced corrals (22.5 ha area). Lactating cows comprised 80% of the herd. A flush system using recycled wastewater intermittently removed manure from feeding alleys to three lagoons (1.8 ha area). Open path lasers measured atmospheric NH3 concentration, sonic anemometers characterized turbulence, and inverse dispersion analysis was used to quantify emissions. Ammonia fluxes (15-min) averaged 56 and 37 μg m-2 s-1 at the open lot and lagoons, respectively. Ammonia emission rate averaged 1061 kg d-1 at the open lot and 59 kg d-1 at the lagoons; 95% of NH3 was emitted from the open lot. The per capita emission rate of NH3 was 304 g cow-1 d-1 from the open lot (41% of N intake) and 17 g cow-1 d-1 from lagoons (2% of N intake). Daily N input at the dairy was 2139 kg d-1, with 43, 36, 19 and 2% of the N partitioned to NH3 emission, manure/lagoons, milk, and cows, respectively.
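The reported nitrogen partitioning can be checked with a simple mass balance: converting the NH3 emission rates to NH3-N (nitrogen is 14/17 of NH3 by mass) and dividing by the daily N input reproduces the stated percentages. A small sketch using only figures from the abstract:

```python
# Figures from the abstract (kg/d)
N_INPUT = 2139.0          # daily N input at the dairy
NH3_OPEN_LOT = 1061.0     # NH3 emission rate, open lot
NH3_LAGOONS = 59.0        # NH3 emission rate, lagoons
N_PER_NH3 = 14.0 / 17.0   # mass fraction of N in NH3

open_lot_share = NH3_OPEN_LOT * N_PER_NH3 / N_INPUT
lagoon_share = NH3_LAGOONS * N_PER_NH3 / N_INPUT
emitted_share = open_lot_share + lagoon_share

print(f"open lot: {open_lot_share:.0%}, lagoons: {lagoon_share:.0%}, "
      f"total NH3-N: {emitted_share:.0%}")
# → open lot: 41%, lagoons: 2%, total NH3-N: 43%
```

The totals match the abstract's partitioning of 41% and 2% of N intake lost as NH3 from the open lot and lagoons, and 43% of input N emitted overall.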

  19. Collecting Taxes Database

    Data.gov (United States)

    US Agency for International Development — The Collecting Taxes Database contains performance and structural indicators about national tax systems. The database contains quantitative revenue performance...

  20. USAID Anticorruption Projects Database

    Data.gov (United States)

    US Agency for International Development — The Anticorruption Projects Database (Database) includes information about USAID projects with anticorruption interventions implemented worldwide between 2007 and...

  1. NoSQL databases

    OpenAIRE

    Mrozek, Jakub

    2012-01-01

    This thesis deals with database systems referred to as NoSQL databases. In the second chapter, I explain basic terms and the theory of database systems. A short explanation is dedicated to database systems based on the relational data model and the SQL standardized query language. Chapter Three explains the concept and history of the NoSQL databases, and also presents database models, major features and the use of NoSQL databases in comparison with traditional database systems. In the fourth ...

  2. INTEGRATION OF PRODUCTION AND SUPPLY IN THE LEAN MANUFACTURING CONDITIONS ACCORDING TO THE LOT FOR LOT METHOD LOGIC - RESULTS OF RESEARCH

    Directory of Open Access Journals (Sweden)

    Roman Domański

    2015-12-01

    Background: A review of the literature and observations of business practice indicate that the integration of production and supply is not a well-developed area of science. The author notes that publications on integration most often focus on selected detailed aspects and are rather postulative in character, with an absence of specific utilitarian solutions (tools) that could be used in business practice. Methods: The research was conducted between 2009 and 2010 in a company in Wielkopolska operating in the machining sector. The solution of the research problem is based on the author's own concept, an integration model. A cost concept of the solution was built and verified (case study) on the basis of the conditions of the enterprise (industrial data). Results: Partial verifiability of results was demonstrated for the entire set of selected material indexes (although in two cases out of three the cost differences to the disadvantage of the lot-for-lot method were small). For the structure of the studied product range, a significant conformity of results, on the order of 67%, was achieved for items typically characteristic of the LfL method (group AX). Conclusions: The formulated research problem and the result of its solution (only 6 material items) demand a lot (orthodoxy) in terms of implementation conditions. The concept has a narrow field of application in the selected organizational conditions (the studied enterprise) and should be verified by independent studies of this kind at other enterprises.
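For readers unfamiliar with the lot-for-lot (LfL) logic the abstract refers to: an order is placed in each period for exactly the net requirement of that period, so no planned inventory is carried forward. A minimal sketch (the demand figures and starting stock are made up for illustration):

```python
def lot_for_lot(gross_requirements, on_hand=0):
    """Order exactly the net requirement of each period (LfL logic)."""
    orders = []
    for demand in gross_requirements:
        net = max(0, demand - on_hand)      # net requirement after stock
        on_hand = max(0, on_hand - demand)  # leftover stock carries over
        orders.append(net)
    return orders

# Hypothetical gross requirements over five periods, 8 units on hand:
print(lot_for_lot([10, 0, 5, 12, 3], on_hand=8))  # → [2, 0, 5, 12, 3]
```

Once the initial stock is exhausted, every order equals that period's demand, which is why LfL keeps inventory (and its holding cost) at zero but maximizes ordering frequency.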

  3. Deflection test evaluation of different lots of the same nickel-titanium wire commercial brand

    Directory of Open Access Journals (Sweden)

    Murilo Gaby Neves

    2016-02-01

    Introduction: The aim of this in vitro study was to compare the elastic properties, in terms of the load-deflection ratio, of orthodontic wires of different lot numbers and the same commercial brand. Methods: A total of 40 nickel-titanium (NiTi) wire segments (Morelli Ortodontia™, Sorocaba, SP, Brazil), 0.016-in in diameter, were used. Groups were sorted according to lot numbers (lots 1, 2, 3 and 4). 28-mm length segments from the straight portion (ends) of archwires were used. Deflection tests were performed in an EMIC universal testing machine with a 5-N load cell at 1 mm/minute speed. Force at deactivation was recorded at 0.5, 1, 2 and 3 mm deflection. Analysis of variance (ANOVA) was used to compare differences between group means. Results: When comparing the force of groups at the same deflection (3, 2 and 1 mm) during deactivation, no statistical differences were found. Conclusion: There are no changes in the elastic properties of different lots of the same commercial brand; thus, the use of different lots of the orthodontic wires used in this research does not compromise the final outcomes of the load-deflection ratio.

  4. Deflection test evaluation of different lots of the same nickel-titanium wire commercial brand.

    Science.gov (United States)

    Neves, Murilo Gaby; Lima, Fabrício Viana Pereira; Gurgel, Júlio de Araújo; Pinzan-Vercelino, Célia Regina Maio; Rezende, Fernanda Soares; Brandão, Gustavo Antônio Martins

    2016-01-01

    The aim of this in vitro study was to compare the elastic properties of the load-deflection ratio of orthodontic wires of different lot numbers and the same commercial brand. A total of 40 nickel-titanium (NiTi) wire segments (Morelli Ortodontia™--Sorocaba, SP, Brazil), 0.016-in in diameter were used. Groups were sorted according to lot numbers (lots 1, 2, 3 and 4). 28-mm length segments from the straight portion (ends) of archwires were used. Deflection tests were performed in an EMIC universal testing machine with 5-N load cell at 1 mm/minute speed. Force at deactivation was recorded at 0.5, 1, 2 and 3 mm deflection. Analysis of variance (ANOVA) was used to compare differences between group means. When comparing the force of groups at the same deflection (3, 2 and 1 mm), during deactivation, no statistical differences were found. There are no changes in the elastic properties of different lots of the same commercial brand; thus, the use of different lots of the orthodontic wires used in this research does not compromise the final outcomes of the load-deflection ratio.

  5. Selection of seed lots of Pinus taeda L. for tissue culture

    Directory of Open Access Journals (Sweden)

    Diego Pascoal Golle

    2014-06-01

    The aim of this work was to identify the fungal genera associated with three Pinus taeda L. seed lots, to assess the sanitary and physiological quality of these lots for use as selection criteria for tissue culture, and to evaluate the in vitro establishment of explants of seminal origin in different nutritive media. It was possible to discriminate the lots on sanitary and physiological quality, as well as to establish in vitro plants of Pinus taeda from cotyledonary nodes obtained from the aseptic germination of seeds of the lot selected for its higher sanitary and physiological quality. The nutritive media MS, ½ MS and WPM were equally suitable for this purpose. For the sanitary analysis, the fungal genera Fusarium, Penicillium and Trichoderma were those of the highest sensitivity. For the physiological evaluation, the important variables were: abnormal seedlings; strong normal seedlings; and the length, fresh weight and dry weight of strong normal seedlings. The analyses made it possible to choose seed lots for in vitro culture, and all culture media were adequate for the establishment of this species in tissue culture.

  6. Distributed Database Storage Solution in Java

    OpenAIRE

    Funck, Johan

    2010-01-01

    Car sales companies have in the last couple of years discovered that there is a big market in storing their customers' summer and winter tires for a small fee. For the customers it is very convenient to get rid of the well-known storage problem with season tires. Burlin Motor Umeå is one of these companies, offering seasonal storage and change of tires in autumn and spring as well as washing of tires. The main problem for this kind of storage is how to make the storage easy to overv...

  7. A non-permutation flowshop scheduling problem with lot streaming: A Mathematical model

    Directory of Open Access Journals (Sweden)

    Daniel Rossit

    2016-06-01

    In this paper we investigate the use of lot streaming in non-permutation flowshop scheduling problems. The objective is to minimize the makespan subject to the standard flowshop constraints, but where it is now permitted to reorder jobs between machines. In addition, the jobs can be divided into manageable sublots, a strategy known as lot streaming. Computational experiments show that lot streaming reduces the makespan up to 43% for a wide range of instances when compared to the case in which no job splitting is applied. The benefits grow as the number of stages in the production process increases but reach a limit. Beyond a certain point, the division of jobs into additional sublots does not improve the solution.
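The effect of lot streaming described above is easy to reproduce on a toy instance: splitting a lot into sublots lets downstream machines start before the whole lot is finished upstream. The standard flowshop completion-time recurrence C[j][m] = max(C[j-1][m], C[j][m-1]) + p[j][m] suffices; the two-machine instance below is invented for illustration.

```python
def flowshop_makespan(proc_times):
    """Makespan of a flowshop with sublots processed in the given order.

    proc_times[j][m] = processing time of sublot j on machine m.
    """
    n_machines = len(proc_times[0])
    completion = [0] * n_machines  # completion time per machine so far
    for sublot in proc_times:
        prev_machine_done = 0
        for m, p in enumerate(sublot):
            start = max(prev_machine_done, completion[m])
            completion[m] = start + p
            prev_machine_done = completion[m]
    return completion[-1]

# One lot of 4 units, 1 time unit per unit on each of 2 machines:
print(flowshop_makespan([[4, 4]]))      # → 8 (no streaming)
print(flowshop_makespan([[1, 1]] * 4))  # → 5 (four unit sublots)
```

Splitting the lot into four sublots overlaps the two stages and cuts the makespan from 8 to 5, the same mechanism behind the savings reported in the abstract.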

  8. Monitoring of services with non-relational databases and map-reduce framework

    International Nuclear Information System (INIS)

    Babik, M; Souto, F

    2012-01-01

    Service Availability Monitoring (SAM) is a well-established monitoring framework that performs regular measurements of the core site services and reports the corresponding availability and reliability of the Worldwide LHC Computing Grid (WLCG) infrastructure. One of the existing extensions of SAM is Site Wide Area Testing (SWAT), which gathers monitoring information from the worker nodes via instrumented jobs. This generates quite a lot of monitoring data to process, as there are several data points for every job and several million jobs are executed every day. The recent uptake of non-relational databases opens a new paradigm in the large-scale storage and distributed processing of systems with heavy read-write workloads. For SAM this brings new possibilities to improve its model, from performing aggregation of measurements to storing raw data and subsequent re-processing. Both SAM and SWAT are currently tuned to run at top performance, reaching some of the limits in storage and processing power of their existing Oracle relational database. We investigated the usability and performance of non-relational storage together with its distributed data processing capabilities. For this, several popular systems have been compared. In this contribution we describe our investigation of the existing non-relational databases suited for monitoring systems covering Cassandra, HBase and MongoDB. Further, we present our experiences in data modeling and prototyping map-reduce algorithms focusing on the extension of the already existing availability and reliability computations. Finally, possible future directions in this area are discussed, analyzing the current deficiencies of the existing Grid monitoring systems and proposing solutions to leverage the benefits of the non-relational databases to get more scalable and flexible frameworks.
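The aggregation the authors describe, reducing many per-job test results into per-site availability figures, maps naturally onto a map-reduce pattern. A minimal in-memory sketch (the record layout and site names are invented, not SAM/SWAT's actual schema):

```python
from collections import defaultdict

# Hypothetical SWAT-style job results: (site, test_status)
records = [
    ("SITE-A", "OK"), ("SITE-A", "OK"), ("SITE-A", "CRITICAL"),
    ("SITE-B", "OK"), ("SITE-B", "OK"),
]

# Map: emit (site, (ok_count, total_count)) for each record.
mapped = [(site, (1 if status == "OK" else 0, 1)) for site, status in records]

# Reduce: sum the pairs per site key.
totals = defaultdict(lambda: (0, 0))
for site, (ok, total) in mapped:
    prev_ok, prev_total = totals[site]
    totals[site] = (prev_ok + ok, prev_total + total)

availability = {site: ok / total for site, (ok, total) in totals.items()}
print(availability)
```

In a real deployment the map and reduce steps would run as distributed jobs over the raw measurements (e.g. in HBase or MongoDB), but the shape of the computation is the same.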

  9. PrimateLit Database

    Science.gov (United States)

    PrimateLit: A bibliographic database for primatology ... The PrimateLit database is no longer being ... Resources, National Institutes of Health. The database is a collaborative project of the Wisconsin Primate ...

  10. Neuro-ophthalmology of late-onset Tay-Sachs disease (LOTS).

    Science.gov (United States)

    Rucker, J C; Shapiro, B E; Han, Y H; Kumar, A N; Garbutt, S; Keller, E L; Leigh, R J

    2004-11-23

    Late-onset Tay-Sachs disease (LOTS) is an adult-onset, autosomal recessive, progressive variant of GM2 gangliosidosis, characterized by involvement of the cerebellum and anterior horn cells. To determine the range of visual and ocular motor abnormalities in LOTS, as a prelude to evaluating the effectiveness of novel therapies. Fourteen patients with biochemically confirmed LOTS (8 men; age range 24 to 53 years; disease duration 5 to 30 years) and 10 age-matched control subjects were studied. Snellen visual acuity, contrast sensitivity, color vision, stereopsis, and visual fields were measured, and optic fundi were photographed. Horizontal and vertical eye movements (search coil) were recorded, and saccades, pursuit, vestibulo-ocular reflex (VOR), vergence, and optokinetic (OK) responses were measured. All patients showed normal visual functions and optic fundi. The main eye movement abnormality concerned saccades, which were "multistep," consisting of a series of small saccades and larger movements that showed transient decelerations. Larger saccades ended earlier and more abruptly (greater peak deceleration) in LOTS patients than in control subjects; these changes can be attributed to premature termination of the saccadic pulse. Smooth-pursuit and slow-phase OK gains were reduced, but VOR, vergence, and gaze holding were normal. Patients with late-onset Tay-Sachs disease (LOTS) show characteristic abnormalities of saccades but normal afferent visual systems. Hypometria, transient decelerations, and premature termination of saccades suggest disruption of a "latch circuit" that normally inhibits pontine omnipause neurons, permitting burst neurons to discharge until the eye movement is completed. These measurable abnormalities of saccades provide a means to evaluate the effects of novel treatments for LOTS.

  11. KALIMER database development

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, Kwan Seong; Lee, Yong Bum; Jeong, Hae Yong; Ha, Kwi Seok

    2003-03-01

    The KALIMER database is an advanced database for the integrated management of liquid metal reactor design technology development, built on Web applications. The KALIMER design database is composed of a results database, an Inter-Office Communication (IOC) system, a 3D CAD database, and a reserved documents database. The results database holds research results from all phases of liquid metal reactor design technology development under the mid-term and long-term nuclear R and D program. IOC is a linkage control system between sub-projects, used to share and integrate research results for KALIMER. The 3D CAD database gives a schematic overview of the KALIMER design structure. The reserved documents database was developed to manage the various documents and reports accumulated over the project.

  12. KALIMER database development

    International Nuclear Information System (INIS)

    Jeong, Kwan Seong; Lee, Yong Bum; Jeong, Hae Yong; Ha, Kwi Seok

    2003-03-01

    The KALIMER database is an advanced database for the integrated management of liquid metal reactor design technology development, built on Web applications. The KALIMER design database is composed of a results database, an Inter-Office Communication (IOC) system, a 3D CAD database, and a reserved documents database. The results database holds research results from all phases of liquid metal reactor design technology development under the mid-term and long-term nuclear R and D program. IOC is a linkage control system between sub-projects, used to share and integrate research results for KALIMER. The 3D CAD database gives a schematic overview of the KALIMER design structure. The reserved documents database was developed to manage the various documents and reports accumulated over the project.

  13. A hybrid adaptive large neighborhood search algorithm applied to a lot-sizing problem

    DEFF Research Database (Denmark)

    Muller, Laurent Flindt; Spoorendonk, Simon

    This paper presents a hybrid of a general heuristic framework that has been successfully applied to vehicle routing problems and a general purpose MIP solver. The framework uses local search and an adaptive procedure which chooses among a set of large neighborhoods to be searched. A mixed integer...... of a solution and to investigate the feasibility of elements in such a neighborhood. The hybrid heuristic framework is applied to the multi-item capacitated lot sizing problem with dynamic lot sizes, where experiments have been conducted on a series of instances from the literature. On average the heuristic
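The adaptive layer of such a heuristic, roulette-wheel selection among neighborhoods with weights updated by past success, can be sketched independently of the lot-sizing model. Everything below (the toy objective, the two "neighborhoods", the weight-update rule) is an illustrative assumption, not the authors' implementation.

```python
import random

def alns_minimize(x0, cost, neighborhoods, iters=500, reaction=0.2, seed=1):
    """Minimal adaptive large neighborhood search skeleton."""
    rng = random.Random(seed)
    weights = [1.0] * len(neighborhoods)
    best = cur = x0
    for _ in range(iters):
        # Roulette-wheel choice of a neighborhood, biased by its weight.
        i = rng.choices(range(len(neighborhoods)), weights=weights)[0]
        cand = neighborhoods[i](cur, rng)
        score = 0.0
        if cost(cand) < cost(cur):
            cur, score = cand, 1.0       # accept improving moves only
            if cost(cur) < cost(best):
                best, score = cur, 2.0   # extra credit for a new best
        # Exponential smoothing of the weight (the "adaptive" part).
        weights[i] = (1 - reaction) * weights[i] + reaction * score
        weights[i] = max(weights[i], 0.05)  # keep every neighborhood alive
    return best

# Toy problem: minimize the sum of squares of an integer vector.
cost = lambda x: sum(v * v for v in x)
small_move = lambda x, rng: [v + rng.choice([-1, 1])
                             if rng.random() < 0.3 else v for v in x]
big_move = lambda x, rng: [0 if rng.random() < 0.5 else v for v in x]
best = alns_minimize([5, -7, 3, 9], cost, [small_move, big_move])
print(best, cost(best))
```

In the paper's hybrid, the destroy/repair moves would instead free and re-optimize parts of a lot-sizing solution, with the MIP solver searching the resulting large neighborhood.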

  14. AD620SQ/883B Total Ionizing Dose Radiation Lot Acceptance Report for RESTORE-LEO

    Science.gov (United States)

    Burton, Noah; Campola, Michael

    2017-01-01

    A Radiation Lot Acceptance Test was performed on the AD620SQ/883B, Lot 1708D, in accordance with MIL-STD-883, Method 1019, Condition D. Using a Co-60 source, 4 biased parts and 4 unbiased parts were irradiated at 10 mrad/s (0.036 krad/hr), in intervals of approximately 1 krad from 3-10 krad and intervals of 5 krad from 10-25 krad, after which the parts were annealed unbiased at 25 degrees Celsius for 2 days and then annealed biased at 25 degrees Celsius for another 7 days.

  15. Analysis of portfolio optimization with lot of stocks amount constraint: case study index LQ45

    Science.gov (United States)

    Chin, Liem; Chendra, Erwinna; Sukmana, Agus

    2018-01-01

    To form an optimum portfolio (in the sense of minimizing risk and/or maximizing return), the commonly used model is the mean-variance model of Markowitz. However, that model has no constraint on the number of lots of stocks, and retail investors in Indonesia cannot engage in short selling. In this study we therefore extend the existing model by adding lot-of-stocks and short-selling constraints, to obtain the minimum-risk portfolio with and without a target return. We analyse the stocks listed in the LQ45 index based on stock market capitalization, using the Solver add-in available in Microsoft Excel.
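The kind of model the abstract describes, minimum-variance weights constrained to whole lots with no short positions, can be illustrated by brute force on a two-stock example. The covariance numbers and the ten-lot budget are invented, and lots are assumed to have equal value so that weights are simply lot fractions.

```python
# Hypothetical covariance matrix for two stocks.
COV = [[0.04, 0.01],
       [0.01, 0.09]]
TOTAL_LOTS = 10  # invest the whole budget in whole lots; no short selling

def variance(weights):
    """Portfolio variance w' * COV * w."""
    return sum(weights[i] * COV[i][j] * weights[j]
               for i in range(2) for j in range(2))

best = min(
    ((n, TOTAL_LOTS - n) for n in range(TOTAL_LOTS + 1)),  # n >= 0: no shorts
    key=lambda lots: variance([l / TOTAL_LOTS for l in lots]),
)
print(best)  # → (7, 3)
```

The continuous minimum-variance weight for stock 1 here is 0.08/0.11 ≈ 0.727, so the best whole-lot allocation rounds to 7 lots versus 3, the kind of integer adjustment the lot constraint forces on the Markowitz solution.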

  16. Logical database design principles

    CERN Document Server

    Garmany, John; Clark, Terry

    2005-01-01

    INTRODUCTION TO LOGICAL DATABASE DESIGNUnderstanding a Database Database Architectures Relational Databases Creating the Database System Development Life Cycle (SDLC)Systems Planning: Assessment and Feasibility System Analysis: RequirementsSystem Analysis: Requirements Checklist Models Tracking and Schedules Design Modeling Functional Decomposition DiagramData Flow Diagrams Data Dictionary Logical Structures and Decision Trees System Design: LogicalSYSTEM DESIGN AND IMPLEMENTATION The ER ApproachEntities and Entity Types Attribute Domains AttributesSet-Valued AttributesWeak Entities Constraint

  17. An Interoperable Cartographic Database

    OpenAIRE

    Slobodanka Ključanin; Zdravko Galić

    2007-01-01

    The concept of producing a prototype of interoperable cartographic database is explored in this paper, including the possibilities of integration of different geospatial data into the database management system and their visualization on the Internet. The implementation includes vectorization of the concept of a single map page, creation of the cartographic database in an object-relation database, spatial analysis, definition and visualization of the database content in the form of a map on t...

  18. Software listing: CHEMTOX database

    International Nuclear Information System (INIS)

    Moskowitz, P.D.

    1993-01-01

    Initially launched in 1983, the CHEMTOX Database was among the first microcomputer databases containing hazardous chemical information. The database is used in many industries and government agencies in more than 17 countries. Updated quarterly, the CHEMTOX Database provides detailed environmental and safety information on 7500-plus hazardous substances covered by dozens of regulatory and advisory sources. This brief listing describes the method of accessing data and provides ordering information for those wishing to obtain the CHEMTOX Database

  19. Databases for INDUS-1 and INDUS-2

    International Nuclear Information System (INIS)

    Merh, Bhavna N.; Fatnani, Pravin

    2003-01-01

    The databases for Indus are relational databases designed to store various categories of data related to the accelerator. The data archiving and retrieval system in Indus is based on a client/server model. A general purpose commercial database is used to store parameters and equipment data for the whole machine. The database manages configuration, on-line, and historical databases. On-line and off-line applications distributed over several systems can store and retrieve data from the database over the network. This paper describes the structure of the databases for Indus-1 and Indus-2 and their integration within the software architecture. The data analysis, design, resulting data-schema and implementation issues are discussed. (author)

  20. Thermo-hydro-geochemical modelling of the bentonite buffer. LOT A2 experiment

    Energy Technology Data Exchange (ETDEWEB)

    Sena, Clara; Salas, Joaquin; Arcos, David (Amphos 21 Consulting S.L., Barcelona (Spain))

    2010-12-15

    The Swedish Nuclear Fuel and waste management company (SKB) is conducting a series of long term buffer material (LOT) tests at the Aespoe Hard Rock Laboratory (HRL) to test the behaviour of the bentonite buffer under conditions similar to those expected in a KBS-3 deep geological repository for high level nuclear waste (HLNW). In the present work a numerical model is developed to simulate (i) the thermo-hydraulic, (ii) transport and (iii) geochemical processes that have been observed in the LOT A2 test parcel. The LOT A2 test lasted approximately 6 years, and consists of a 4 m long vertical borehole drilled in diorite rock, from the ground of the Aespoe HRL tunnel. The borehole is composed of a central heater, maintained at 130 deg C in the lower 2 m of the borehole, a copper tube surrounding the heater and a 100 mm thick ring of pre-compacted Wyoming MX-80 bentonite around the copper tube /Karnland et al. 2009/. The numerical model developed here is a 1D axis-symmetric model that simulates the water saturation of the bentonite under a constant thermal gradient; the transport of solutes; and, the geochemical reactions observed in the bentonite blocks. Two cases have been modelled, one considering the highest temperature reached by the bentonite (at 3 m depth in the borehole, where temperatures of 130 and 85 deg C have been recorded near the copper tube and near the granitic host rock, respectively) and the other case assuming a constant temperature of 25 deg C, representing the upper part of borehole, where the bentonite has not been heated. In the LOT A2 test, the initial partially saturated bentonite becomes progressively water saturated, due to the injection of Aespoe granitic groundwater at granite - bentonite interface. 
The transport of solutes during the bentonite water saturation stage is believed to be controlled by water uptake from the surrounding groundwater to the wetting front and, additionally, in the case of heated bentonite, by a cyclic evaporation

  1. Thermo-hydro-geochemical modelling of the bentonite buffer. LOT A2 experiment

    International Nuclear Information System (INIS)

    Sena, Clara; Salas, Joaquin; Arcos, David

    2010-12-01

    The Swedish Nuclear Fuel and waste management company (SKB) is conducting a series of long term buffer material (LOT) tests at the Aespoe Hard Rock Laboratory (HRL) to test the behaviour of the bentonite buffer under conditions similar to those expected in a KBS-3 deep geological repository for high level nuclear waste (HLNW). In the present work a numerical model is developed to simulate (i) the thermo-hydraulic, (ii) transport and (iii) geochemical processes that have been observed in the LOT A2 test parcel. The LOT A2 test lasted approximately 6 years, and consists of a 4 m long vertical borehole drilled in diorite rock, from the ground of the Aespoe HRL tunnel. The borehole is composed of a central heater, maintained at 130 deg C in the lower 2 m of the borehole, a copper tube surrounding the heater and a 100 mm thick ring of pre-compacted Wyoming MX-80 bentonite around the copper tube /Karnland et al. 2009/. The numerical model developed here is a 1D axis-symmetric model that simulates the water saturation of the bentonite under a constant thermal gradient; the transport of solutes; and, the geochemical reactions observed in the bentonite blocks. Two cases have been modelled, one considering the highest temperature reached by the bentonite (at 3 m depth in the borehole, where temperatures of 130 and 85 deg C have been recorded near the copper tube and near the granitic host rock, respectively) and the other case assuming a constant temperature of 25 deg C, representing the upper part of borehole, where the bentonite has not been heated. In the LOT A2 test, the initial partially saturated bentonite becomes progressively water saturated, due to the injection of Aespoe granitic groundwater at granite - bentonite interface. 
The transport of solutes during the bentonite water saturation stage is believed to be controlled by water uptake from the surrounding groundwater to the wetting front and, additionally, in the case of heated bentonite, by a cyclic evaporation

  2. Safety, immunogenicity, and lot-to-lot consistency of a quadrivalent inactivated influenza vaccine in children, adolescents, and adults: A randomized, controlled, phase III trial.

    Science.gov (United States)

    Cadorna-Carlos, Josefina B; Nolan, Terry; Borja-Tabora, Charissa Fay; Santos, Jaime; Montalban, M Cecilia; de Looze, Ferdinandus J; Eizenberg, Peter; Hall, Stephen; Dupuy, Martin; Hutagalung, Yanee; Pépin, Stéphanie; Saville, Melanie

    2015-05-15

    Inactivated quadrivalent influenza vaccine (IIV4) containing two influenza A strains and one strain from each B lineage (Yamagata and Victoria) may offer broader protection against seasonal influenza than inactivated trivalent influenza vaccine (IIV3), containing a single B strain. This study examined the safety, immunogenicity, and lot consistency of an IIV4 candidate. This phase III, randomized, controlled, multicenter trial in children/adolescents (9 through 17 years) and adults (18 through 60 years) was conducted in Australia and in the Philippines in 2012. The study was double-blind for IIV4 lots and open-label for IIV4 vs IIV3. Children/adolescents were randomized 2:2:2:1 and adults 10:10:10:1 to receive one of three lots of IIV4 or licensed IIV3. Safety data were collected for up to 6 months post-vaccination. Hemagglutination inhibition and seroneutralization antibody titers were assessed pre-vaccination and 21 days post-vaccination. 1648 adults and 329 children/adolescents received IIV4, and 56 adults and 55 children/adolescents received IIV3. Solicited reactions, unsolicited adverse events, and serious adverse events were similar for IIV3 and IIV4 recipients in both age groups. Injection-site pain, headache, malaise, and myalgia were the most frequently reported solicited reactions, most of which were mild and resolved within 3 days. No vaccine-related serious adverse events or deaths were reported. Post-vaccination antibody responses, seroconversion rates, and seroprotection rates for the 3 strains common to both vaccines were comparable for IIV3 and IIV4 in both age groups. Antibody responses to IIV4 were equivalent among vaccine lots and comparable between age groups for each of the 4 strains. IIV4 met all European Medicines Agency immunogenicity criteria for adults for all 4 strains. In both age groups, IIV4 was well tolerated and caused no safety concerns, induced robust antibody responses to all 4 influenza strains, and met all EMA immunogenicity

  3. Determination of supplier-to-supplier and lot-to-lot variability in glycation of recombinant human serum albumin expressed in Oryza sativa.

    Directory of Open Access Journals (Sweden)

    Grant E Frahm

    The use of different expression systems to produce the same recombinant human protein can result in expression-dependent chemical modifications (CMs) leading to variability of structure, stability and immunogenicity. Of particular interest are recombinant human proteins expressed in plant-based systems, which have shown particularly high CM variability. In the studies presented here, recombinant human serum albumins (rHSA) produced in Oryza sativa (Asian rice) (OsrHSA) from a number of suppliers have been extensively characterized and compared to plasma-derived HSA (pHSA) and rHSA expressed in yeast (Pichia pastoris and Saccharomyces cerevisiae). The heterogeneity of each sample was evaluated using size exclusion chromatography (SEC), reversed-phase high-performance liquid chromatography (RP-HPLC) and capillary electrophoresis (CE). Modifications of the samples were identified by liquid chromatography-mass spectrometry (LC-MS). The secondary and tertiary structures of the albumin samples were assessed with far U/V circular dichroism spectropolarimetry (far U/V CD) and fluorescence spectroscopy, respectively. Far U/V CD and fluorescence analyses were also used to assess thermal stability and drug binding. High molecular weight aggregates in OsrHSA samples were detected with SEC, and supplier-to-supplier and, more critically, lot-to-lot variability in one manufacturer's supplied products were identified. LC-MS analysis identified a greater number of hexose-glycated arginine and lysine residues on OsrHSA compared to pHSA or rHSA expressed in yeast. This analysis also showed supplier-to-supplier and lot-to-lot variability in the degree of glycation at specific lysine and arginine residues for OsrHSA. Both the number of glycated residues and the degree of glycation correlated positively with the quantity of non-monomeric species and the chromatographic profiles of the samples. Tertiary structural changes were observed for most OsrHSA samples which

  4. A coordination language for databases

    DEFF Research Database (Denmark)

    Li, Ximeng; Wu, Xi; Lluch Lafuente, Alberto

    2017-01-01

    We present a coordination language for the modeling of distributed database applications. The language, baptized Klaim-DB, borrows the concepts of localities and nets of the coordination language Klaim but re-incarnates the tuple spaces of Klaim as databases. It provides high-level abstractions...... and primitives for the access and manipulation of structured data, with integrity and atomicity considerations. We present the formal semantics of Klaim-DB and develop a type system that avoids potential runtime errors such as certain evaluation errors and mismatches of data format in tables, which are monitored...... in the semantics. The use of the language is illustrated in a scenario where the sales from different branches of a chain of department stores are aggregated from their local databases. Raising the abstraction level and encapsulating integrity checks in the language primitives have benefited the modeling task...
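The paper's running scenario, aggregating sales across the local databases of several branches, looks roughly like this in plain Python, with in-memory sqlite3 databases standing in for the distributed tuple-space databases (schema and figures invented; Klaim-DB itself expresses this with locality-aware primitives rather than explicit connections):

```python
import sqlite3

def make_branch(sales):
    """One branch's local database (an in-memory stand-in)."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE sales (item TEXT, amount REAL)")
    db.executemany("INSERT INTO sales VALUES (?, ?)", sales)
    return db

branches = [
    make_branch([("shirt", 120.0), ("shoes", 80.0)]),
    make_branch([("shirt", 60.0)]),
]

# Aggregate per-item totals across all branch databases.
totals = {}
for db in branches:
    rows = db.execute("SELECT item, SUM(amount) FROM sales GROUP BY item")
    for item, amount in rows:
        totals[item] = totals.get(item, 0.0) + amount

print(totals)  # shirt: 180.0, shoes: 80.0
```

Klaim-DB's contribution is precisely to hide this plumbing: the aggregation is written once against logical localities, with integrity and atomicity checks encapsulated in the language primitives.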

  5. Hydrologic and Pollutant Removal Performance of a Full-Scale, Fully Functional Permeable Pavement Parking Lot

    Science.gov (United States)

    In accordance with the need for full-scale, replicated studies of permeable pavement systems used in their intended application (parking lot, roadway, etc.) across a range of climatic events, daily usage conditions, and maintenance regimes to evaluate these systems, the EPA’s Urb...

  6. Meta-Heuristics for Dynamic Lot Sizing: a review and comparison of solution approaches

    NARCIS (Netherlands)

    R.F. Jans (Raf); Z. Degraeve (Zeger)

    2004-01-01

    Proofs from complexity theory as well as computational experiments indicate that most lot sizing problems are hard to solve. Because these problems are so difficult, various solution techniques have been proposed to solve them. In the past decade, meta-heuristics such as tabu search,
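    The dynamic lot-sizing problem that such meta-heuristics target has a classic exact benchmark in the Wagner-Whitin dynamic program (single item, no capacity limit). A minimal sketch follows; the demand figures, setup cost, and holding cost are invented for illustration, not taken from the record above.

```python
# Wagner-Whitin dynamic program for single-item uncapacitated lot sizing.
# Minimizes total setup + holding cost over the horizon; all numbers below
# are illustrative assumptions.

def wagner_whitin(demand, K, h):
    """demand: per-period demands; K: setup cost; h: holding cost/unit/period."""
    T = len(demand)
    best = [0.0] * (T + 1)              # best[t]: min cost to cover periods 0..t-1
    for t in range(1, T + 1):
        candidates = []
        for s in range(t):              # last setup occurs in period s
            # holding cost for demand of periods s..t-1 all produced in s
            hold = sum(h * (j - s) * demand[j] for j in range(s, t))
            candidates.append(best[s] + K + hold)
        best[t] = min(candidates)
    return best[T]

cost = wagner_whitin([20, 50, 10, 50, 50], K=100, h=1)   # -> 320.0
```

Meta-heuristics such as tabu search matter because the capacitated and multi-item variants of this problem lose the simple DP structure and become NP-hard.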

  7. 9 CFR 351.19 - Refusal of certification for specific lots.

    Science.gov (United States)

    2010-01-01

    ... 9 Animals and Animal Products 2 2010-01-01 2010-01-01 false Refusal of certification for specific lots. 351.19 Section 351.19 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE AGENCY ORGANIZATION AND TERMINOLOGY; MANDATORY MEAT AND POULTRY PRODUCTS INSPECTION...

  8. Performance of engineered soil and trees in a parking lot bioswale

    Science.gov (United States)

    Qingfu Xiao; Gregory McPherson

    2011-01-01

    A bioswale integrating an engineered soil and trees was installed in a parking lot to evaluate its ability to reduce storm runoff, pollutant loading, and support tree growth. The adjacent control and treatment sites each received runoff from eight parking spaces and were identical except that there was no bioswale for the control site. A tree was planted at both sites...

  9. 78 FR 43753 - Inspection and Weighing of Grain in Combined and Single Lots

    Science.gov (United States)

    2013-07-22

    ... USGSA regulations for shiplots, unit trains, and lash barges. This final rule allows for breaks in... the loading of the lot must be reasonably continuous, with no consecutive break in loading to exceed... superseded; (iii) The location of the grain, if at rest, or the name(s) of the elevator(s) from which or into...

  10. How well do we understand nitrous oxide emissions from open-lot cattle systems?

    Science.gov (United States)

    Nitrous oxide is an important greenhouse gas that is produced in manure. Open lot beef cattle feedyards emit nitrous oxide but little information is available about exactly how much is produced. This has become an important research topic because of environmental concerns. Only a few methods are ava...

  11. The finite horizon economic lot sizing problem in job shops : the multiple cycle approach

    NARCIS (Netherlands)

    Ouenniche, J.; Bertrand, J.W.M.

    2001-01-01

    This paper addresses the multi-product, finite horizon, static demand, sequencing, lot sizing and scheduling problem in a job shop environment where the planning horizon length is finite and fixed by management. The objective pursued is to minimize the sum of setup costs, and work-in-process and

  12. Manufacturability: from design to SPC limits through "corner-lot" characterization

    Science.gov (United States)

    Hogan, Timothy J.; Baker, James C.; Wesneski, Lisa; Black, Robert S.; Rothenbury, Dave

    2005-01-01

    Texas Instruments' Digital Micromirror Device (DMD) is used in a wide variety of optical display applications ranging from fixed and portable projectors to high-definition television (HDTV) to digital cinema projection systems. A new DMD pixel architecture, called "FTP", was designed and qualified by Texas Instruments' DLP(TM) Group in 2003 to meet increased performance objectives for brightness and contrast ratio. Coordination between design, test and fabrication groups was required to balance pixel performance requirements and manufacturing capability. "Corner lot" designed experiments (DOE) were used to verify the "fabrication space" available for the pixel design. The corner-lot technique allows confirmation of manufacturability projections early in the design/qualification cycle. Through careful design and analysis of the corner-lot DOE, a balance of critical dimension (CD) "budgets" is possible so that specification and process-control limits can be established that meet both customer and factory requirements. The application of corner-lot DOE is illustrated in a case history of the DMD "FTP" pixel. The process for balancing test parameter requirements with multiple critical dimension budgets is shown. MEMS/MOEMS device design and fabrication can use similar techniques to achieve aggressive design-to-qualification goals.

  13. Flexible interaction of plug-in electric vehicle parking lots for efficient wind integration

    International Nuclear Information System (INIS)

    Heydarian-Forushani, E.; Golshan, M.E.H.; Shafie-khah, M.

    2016-01-01

    Highlights: • Interactive incorporation of plug-in electric vehicle parking lots is investigated. • Flexible energy and reserve services are provided by electric vehicle parking lots. • Uncertain characterization of electric vehicle owners' behavior is taken into account. • Coordinated operation of parking lots can facilitate wind power integration. - Abstract: The increasing share of uncertain wind generation has changed the traditional operation scheduling of power systems. The challenges of this additional variability raise the need for operational flexibility in providing both energy and reserve. One key solution is an effective incorporation of plug-in electric vehicles (PEVs) into the power system operation process. To this end, this paper proposes a two-stage stochastic programming market-clearing model considering the network constraints to achieve the optimal scheduling of conventional units as well as PEV parking lots (PLs) in providing both energy and reserve services. Different from existing works, the paper pays more attention to the uncertain characterization of PLs, taking into account the arrival/departure time of PEVs to/from the PL, the initial state of charge (SOC) of PEVs, and their battery capacity through a set of scenarios in addition to wind generation scenarios. The results reveal that although the cost saving from incorporating PLs into the grid is below 1% of total system cost, flexible interactions of PLs in the energy and reserve markets can promote the integration of wind power by more than 13.5%.
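    The two-stage structure described in the abstract can be illustrated with a toy dispatch problem: a first-stage generation decision is committed before wind is realized, and each wind scenario then incurs a recourse penalty for any shortfall. This is only a sketch of the general technique, not the paper's market-clearing model; demand, costs, and scenarios are invented.

```python
# Toy two-stage stochastic dispatch: pick first-stage conventional output g
# before wind w is known, then pay a recourse penalty for unserved load.
# All parameters are invented for illustration.

DEMAND = 100.0
C_GEN = 30.0             # $/MWh for first-stage conventional energy
C_SHORT = 200.0          # $/MWh recourse penalty for unserved load
SCENARIOS = [(0.3, 10.0), (0.5, 30.0), (0.2, 60.0)]   # (probability, wind MW)

def expected_cost(g):
    """First-stage cost plus probability-weighted recourse cost."""
    recourse = sum(p * C_SHORT * max(0.0, DEMAND - g - w) for p, w in SCENARIOS)
    return C_GEN * g + recourse

# Brute-force the first-stage decision on a coarse grid (a real model would
# use a stochastic-programming solver over unit-commitment variables).
best_g = min((float(g) for g in range(0, 101)), key=expected_cost)
```

Because the shortfall penalty dominates the generation cost here, the optimal first-stage decision covers the worst wind scenario.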

  14. A basic period approach to the economic lot scheduling problem with shelf life considerations

    NARCIS (Netherlands)

    Soman, C.A.; van Donk, D.P.; Gaalman, G.J.C.

    2004-01-01

    Almost all the research on the economic lot scheduling problem (ELSP) considering limited shelf life of products has assumed a common cycle approach and an unrealistic assumption of possibility of deliberately reducing the production rate. In many cases, like in food processing industry where

  15. Sequencing, lot sizing and scheduling in job shops: the common cycle approach

    NARCIS (Netherlands)

    Ouenniche, J.; Boctor, F.F.

    1998-01-01

    This paper deals with the multi-product, finite horizon, static demand, sequencing, lot sizing and scheduling problem in a job shop environment where the objective is to minimize the sum of setup and inventory holding costs while satisfying the demand with no backlogging. To solve this problem, we

  16. Aligning workload control theory and practice : lot splitting and operation overlapping issues

    NARCIS (Netherlands)

    Fernandes, Nuno O.; Land, Martin J.; Carmo-Silva, S.

    2016-01-01

    This paper addresses the problem of lot splitting in the context of workload control (WLC). Past studies on WLC assumed that jobs released to the shop floor proceed through the different stages of processing without being split. However, in practice, large jobs are often split into smaller transfer

  17. Precipitation and runoff water quality from an urban parking lot and implications for tree growth

    Science.gov (United States)

    C. H. Pham; H. G. Halverson; G. M. Heisler

    1978-01-01

    The water quality of precipitation and runoff from a large parking lot in New Brunswick, New Jersey was studied during the early growing season, from March to June 1976. Precipitation and runoff from 10 storms were analyzed. The runoff was higher in all constituents considered except for P, Pb, and Cu. Compared with published values for natural waters, sewage effluent...

  18. 7 CFR 56.37 - Lot marking of officially identified shell eggs.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 3 2010-01-01 2010-01-01 false Lot marking of officially identified shell eggs. 56.37... AND STANDARDS UNDER THE AGRICULTURAL MARKETING ACT OF 1946 AND THE EGG PRODUCTS INSPECTION ACT (CONTINUED) VOLUNTARY GRADING OF SHELL EGGS Grading of Shell Eggs Identifying and Marking Products § 56.37...

  19. Rapid assessment of antimicrobial resistance prevalence using a Lot Quality Assurance sampling approach

    NARCIS (Netherlands)

    van Leth, Frank; den Heijer, Casper; Beerepoot, Marielle; Stobberingh, Ellen; Geerlings, Suzanne; Schultsz, Constance

    2017-01-01

    Increasing antimicrobial resistance (AMR) requires rapid surveillance tools, such as Lot Quality Assurance Sampling (LQAS). LQAS classifies AMR as high or low based on set parameters. We compared classifications with the underlying true AMR prevalence using data on 1335 Escherichia coli isolates
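    LQAS reduces a prevalence survey to a binary classification: sample n isolates and declare resistance "high" if more than d are resistant, where n and d are chosen so that the misclassification risks at the two design prevalence thresholds stay acceptable. A minimal sketch of that decision rule and its operating characteristics follows; the n, d, and threshold values are illustrative assumptions, not the parameters used in the study above.

```python
# Lot Quality Assurance Sampling decision rule: classify a "lot" (here, a
# set of E. coli isolates) as high- or low-resistance from a small sample.
# n, d, and the 5%/20% design thresholds are illustrative assumptions.
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def classify(resistant_count, d):
    """Decision rule: more than d resistant isolates -> 'high'."""
    return "high" if resistant_count > d else "low"

n, d = 50, 5
# Operating characteristics at the two design thresholds:
p_wrong_if_low = 1 - binom_cdf(d, n, 0.05)    # P(classify high | true prevalence 5%)
p_wrong_if_high = binom_cdf(d, n, 0.20)       # P(classify low  | true prevalence 20%)
```

Both error probabilities stay below 10% for this (n, d) pair, which is what makes such small samples usable for rapid surveillance.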

  20. Alternate Methods of Effluent Disposal for On-Lot Home Sewage Systems. Special Circular 214.

    Science.gov (United States)

    Wooding, N. Henry

    This circular provides current information for homeowners who must repair or replace existing on-lot sewage disposal systems. Several alternatives such as elevated sand mounds, sand-lined beds and trenches and oversized absorption areas are discussed. Site characteristics and preparation are outlined. Each alternative is accompanied by a diagram…

  1. An approach using Lagrangian/surrogate relaxation for lot-sizing with transportation costs

    Directory of Open Access Journals (Sweden)

    Flavio Molina

    2009-08-01

    Full Text Available The aim of this work was to study a distribution and lot-sizing problem that considers transportation costs to a company warehouse as well as inventory, production and setup costs. The logistics costs are associated with the containers needed to pack the produced items. The company negotiates a long-term contract in which a fixed cost per period is associated with the transportation of the items; in return, a limited number of containers is made available at a lower cost than the standard cost. If an occasional demand increase occurs, additional containers can be used, but at a higher cost. A mathematical model was proposed in the literature and solved using a Lagrangian heuristic. Here, the use of a Lagrangian/surrogate heuristic to solve the problem is evaluated. Moreover, an extension of the literature model is considered, adding capacity constraints and allowing backlogging. Computational tests show that Lagrangian/surrogate heuristics are competitive, especially when the capacity constraints are tight.
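    The core mechanism behind Lagrangian heuristics of this kind can be shown on a much smaller problem: relax a coupling constraint with a multiplier, solve the now-separable subproblem by inspection, and adjust the multiplier by a subgradient step. The 0/1 covering problem and all cost data below are invented to keep the sketch tiny; it is the generic technique, not the paper's lot-sizing model.

```python
# Lagrangian relaxation with a subgradient update, on a toy problem:
#   min c.x  subject to  a.x >= b,  x in {0,1}^n.
# Relaxing a.x >= b with multiplier lam >= 0 makes the subproblem separable,
# so it is solved by inspecting each item independently. Data are invented.

c = [4.0, 3.0, 6.0, 5.0]    # item costs
a = [2.0, 1.0, 3.0, 2.0]    # item contributions to the relaxed constraint
b = 5.0                     # required total contribution

def solve_relaxed(lam):
    """min sum_i (c_i - lam*a_i) x_i + lam*b over x in {0,1}^n."""
    x = [1 if c[i] - lam * a[i] < 0 else 0 for i in range(len(c))]
    value = sum((c[i] - lam * a[i]) * x[i] for i in range(len(c))) + lam * b
    return x, value

lam, best_bound = 0.0, float("-inf")
for it in range(100):
    x, bound = solve_relaxed(lam)
    best_bound = max(best_bound, bound)            # best lower bound so far
    violation = b - sum(a[i] * x[i] for i in range(len(x)))
    lam = max(0.0, lam + (1.0 / (it + 1)) * violation)   # diminishing step
```

For this instance the dual bound converges to 10.0, which happens to equal the optimal primal cost (items 1 and 3), so there is no duality gap; in general the bound only brackets the optimum and a heuristic repairs the relaxed solution.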

  2. Parking Lot Runoff Quality and Treatment Efficiency of a Stormwater-Filtration Device, Madison, Wisconsin, 2005-07

    Science.gov (United States)

    Horwatich, Judy A.; Bannerman, Roger T.

    2010-01-01

    To evaluate the treatment efficiency of a stormwater-filtration device (SFD) for potential use at Wisconsin Department of Transportation (WisDOT) park-and-ride facilities, a SFD was installed at an employee parking lot in downtown Madison, Wisconsin. This type of parking lot was chosen for the test site because the constituent concentrations and particle-size distributions (PSDs) were expected to be similar to those of a typical park-and-ride lot operated by WisDOT. The objective of this particular installation was to reduce loads of total suspended solids (TSS) in stormwater runoff to Lake Monona. This study also was designed to provide a range of treatment efficiencies expected for a SFD. Samples from the inlet and outlet were analyzed for 33 organic and inorganic constituents, including 18 polycyclic aromatic hydrocarbons (PAHs). Samples were also analyzed for physical properties, including PSD. Water-quality samples were collected for 51 runoff events from November 2005 to August 2007. Samples from all runoff events were analyzed for concentrations of suspended sediment (SS). Samples from 31 runoff events were analyzed for 15 constituents, samples from 15 runoff events were analyzed for PAHs, and samples from 36 events were analyzed for PSD. The treatment efficiency of the SFD was calculated using the summation of loads (SOL) and the efficiency ratio methods. Constituents for which the concentrations and (or) loads were decreased by the SFD include TSS, SS, volatile suspended solids, total phosphorus (TP), total copper, total zinc, and PAHs. The efficiency ratios for these constituents are 45, 37, 38, 55, 22, 5, and 46 percent, respectively. The SOLs for these constituents are 32, 37, 28, 36, 23, 8, and 48 percent, respectively. The SOL for chloride was -21 percent and the efficiency ratio was -18 percent. Six chemical constituents or properties (dissolved phosphorus, chemical oxygen demand, dissolved zinc, total dissolved solids, dissolved chemical oxygen demand, and
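    The two efficiency metrics named in the abstract can be computed from paired inlet/outlet event data. The definitions below follow common stormwater-BMP monitoring practice (an assumption here, since the record does not spell them out), and every event value is invented.

```python
# Two common summaries of BMP treatment efficiency from monitored events.
# Definitions assumed from general stormwater-BMP practice; the per-event
# loads and event mean concentrations (EMCs) below are invented.

inlet_loads  = [12.0, 40.0, 8.0, 25.0]    # kg per event at the inlet
outlet_loads = [ 9.0, 24.0, 6.0, 18.0]    # kg per event at the outlet

# Summation of loads (SOL): percent reduction in the total load.
sol = 100.0 * (1 - sum(outlet_loads) / sum(inlet_loads))

inlet_emc  = [50.0, 80.0, 40.0, 62.5]     # EMCs, mg/L
outlet_emc = [35.0, 50.0, 30.0, 45.0]

# Efficiency ratio: percent reduction in the mean event concentration.
eff_ratio = 100.0 * (1 - (sum(outlet_emc) / len(outlet_emc))
                       / (sum(inlet_emc) / len(inlet_emc)))
```

A negative value of either metric, as reported for chloride above, means the device exported more of the constituent than it received.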

  3. Multidrug resistance among new tuberculosis cases: detecting local variation through lot quality-assurance sampling.

    Science.gov (United States)

    Hedt, Bethany Lynn; van Leth, Frank; Zignol, Matteo; Cobelens, Frank; van Gemert, Wayne; Nhung, Nguyen Viet; Lyepshina, Svitlana; Egwaga, Saidi; Cohen, Ted

    2012-03-01

    Current methodology for multidrug-resistant tuberculosis (MDR TB) surveys endorsed by the World Health Organization provides estimates of MDR TB prevalence among new cases at the national level. In the aggregate, local variation in the burden of MDR TB may be masked. This paper investigates the utility of applying lot quality-assurance sampling to identify geographic heterogeneity in the proportion of new cases with multidrug resistance. We simulated the performance of lot quality-assurance sampling by applying these classification-based approaches to data collected in the most recent TB drug-resistance surveys in Ukraine, Vietnam, and Tanzania. We explored 3 classification systems (two-way static, three-way static, and three-way truncated sequential sampling) at 2 sets of thresholds: low MDR TB = 2%, high MDR TB = 10%; and low MDR TB = 5%, high MDR TB = 20%. The lot quality-assurance sampling systems identified local variability in the prevalence of multidrug resistance in both high-resistance (Ukraine) and low-resistance (Vietnam) settings. In Tanzania, prevalence was uniformly low, and the lot quality-assurance sampling approach did not reveal variability. The three-way classification systems provide additional information, but the required sample sizes may not be obtainable in some settings. New rapid drug-sensitivity testing methods may allow truncated sequential sampling designs and early stopping within static designs, producing even greater efficiency gains. Lot quality-assurance sampling study designs may offer an efficient approach for collecting critical information on local variability in the burden of multidrug-resistant TB. Before this methodology is adopted, programs must determine appropriate classification thresholds, the most useful classification system, and appropriate weighting if unbiased national estimates are also desired.

  4. Application of lot quality assurance sampling for leprosy elimination monitoring--examination of some critical factors.

    Science.gov (United States)

    Gupte, M D; Murthy, B N; Mahmood, K; Meeralakshmi, S; Nagaraju, B; Prabhakaran, R

    2004-04-01

    The concept of elimination of an infectious disease is different from eradication, and in a way from control as well. In disease-elimination programmes, the desired reduced level of prevalence is set as the target to be achieved within a practical time frame. Elimination can be considered at national or regional levels. Prevalence levels depend on the occurrence of new cases and thus can fluctuate. There are no ready pragmatic methods to monitor the progress of leprosy elimination programmes, so we tried to explore newer methods to meet these demands. With the lowering of the prevalence of leprosy to the desired level of 1 case per 10,000 population at the global level, programme administrators' concern will shift to smaller areas, e.g. national and sub-national levels. For monitoring this situation, we earlier observed that lot quality assurance sampling (LQAS), a quality-control tool from industry, was useful in the initially high-endemic areas. However, critical factors such as the geographical distribution of cases and the adoption of a cluster sampling design instead of a simple random sampling design deserve attention before LQAS can be generally recommended. The present exercise was aimed at validating the applicability of LQAS, adopting these modifications, for monitoring leprosy elimination in Tamil Nadu state, India, which was highly endemic for leprosy. A representative sample of 64,000 people drawn from eight districts of Tamil Nadu, with a maximum allowable number of 25 cases, was considered, using LQAS methodology to test whether leprosy prevalence was at or below 7 per 10,000 population. The expected number of cases for each district was obtained assuming a Poisson distribution. Goodness of fit between the observed and expected cases (closeness of the expected number of cases to those observed) was tested through chi-square. The enhancing factor (design effect) for the sample size was obtained by computing the intraclass correlation. The survey actually
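    The goodness-of-fit step described in the abstract (observed district case counts compared with Poisson-expected counts via a chi-square statistic) can be sketched as follows. The district counts and populations are invented; only the 7-per-10,000 threshold comes from the record.

```python
# Chi-square goodness-of-fit of observed district case counts against
# expected counts under a Poisson assumption (expected = rate * population).
# District counts and populations below are invented for illustration.

observed   = [6, 4, 7, 5, 3, 6, 5, 4]    # cases found per district
population = [8000] * 8                  # people sampled per district
rate = 7 / 10000                         # threshold prevalence: 7 per 10,000

expected = [rate * n for n in population]            # 5.6 cases per district
chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# chi2 is compared against the critical value for (districts - 1) degrees
# of freedom; a statistic well below it means the Poisson model fits.
```

Here chi2 is about 2.66 against a 5% critical value of roughly 14.07 for 7 degrees of freedom, so these invented counts would be consistent with the Poisson assumption.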

  5. Securing SQL Server Protecting Your Database from Attackers

    CERN Document Server

    Cherry, Denny

    2011-01-01

    There is a lot at stake for administrators taking care of servers, since they house sensitive data like credit cards, social security numbers, medical records, and much more. In Securing SQL Server you will learn about the potential attack vectors that can be used to break into your SQL Server database, and how to protect yourself from these attacks. Written by a Microsoft SQL Server MVP, you will learn how to properly secure your database, from both internal and external threats. Best practices and specific tricks employed by the author will also be revealed. Learn expert techniques to protec

  6. Joint Economic Lot Sizing Optimization in a Supplier-Buyer Inventory System When the Supplier Offers Decremental Temporary Discounts

    Directory of Open Access Journals (Sweden)

    Diana Puspita Sari

    2012-02-01

    Full Text Available This research discusses mathematical models of joint economic lot size optimization in a supplier-buyer inventory system in a situation where the supplier offers decremental temporary discounts during a sale period. Here, the sale period consists of n phases, and the discounts offered descend across the phases. The highest discount is given when orders are placed in the first phase, while the lowest is given when they are placed in the last phase. In this situation, the supplier attempts to attract the buyer to place orders as early as possible during the sale period. The buyer responds to these offers by ordering a special quantity in one of the phases. In this paper, we propose such a forward-buying model with discount-proportionally-distributed time phases. To examine the behaviour of the proposed model, we conducted numerical experiments. We assumed that there are three phases of discounts during the sale period. We then compared the total joint costs of a special order placed in each phase for two scenarios. The first scenario is the independent case, in which there is no coordination between the buyer and the supplier, while the second scenario is the opposite, the coordinated model. Our results showed that the coordinated model outperforms the independent model in terms of total joint costs. We finally conducted a sensitivity analysis to examine further behaviour of the proposed model. Keywords: supplier-buyer inventory system, forward buying model, decremental temporary discounts, joint economic lot sizing, optimization.
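    The buyer's side of a forward-buy decision like the one above amounts to comparing the average cost rate of a one-off special order at the discounted price against staying on the regular economic-order-quantity policy. The sketch below is a simplified single-buyer view, not the paper's joint supplier-buyer model, and every parameter is invented.

```python
# Toy forward-buying comparison: is a one-off special order at a temporary
# unit discount cheaper, per unit time, than the regular EOQ policy?
# All parameters invented; the paper's joint model is richer than this.
from math import sqrt

D = 1200.0      # annual demand, units/yr
K = 50.0        # fixed ordering cost, $/order
c = 10.0        # regular unit price, $
i = 0.2         # holding cost rate, $/$ of inventory per year
disc = 0.05     # 5% temporary discount in the current phase

h = i * c
eoq = sqrt(2 * K * D / h)                          # regular lot size
regular_rate = K * D / eoq + h * eoq / 2 + c * D   # $/yr on the EOQ policy

def special_order_rate(Q):
    """Average cost per year while a special lot of Q discounted units lasts."""
    price = c * (1 - disc)
    cycle = Q / D                                  # time the special lot covers
    return (K + price * Q + i * price * (Q / 2) * cycle) / cycle

# Sweep candidate special quantities and keep the cheapest.
best_Q = min(range(100, 2001, 50), key=special_order_rate)
forward_buy_pays = special_order_rate(best_Q) < regular_rate
```

Comparing cost rates during the special cycle is a standard heuristic simplification; a full model would also credit the supplier's side, which is exactly the coordination question the paper studies.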

  7. Database Description - PSCDB | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available General information of database. Database name: PSCDB. Maintainer: National Institute of Advanced Industrial Science and Technology (AIST); contact: Takayuki Amemiya. Database classification: Structure Databases - Protein structure. External links: original publication, pages D554-D558; database maintenance site at the Graduate School of Informatics. URL of Web services: not available. User registration: not required.

  8. BGDB: a database of bivalent genes.

    Science.gov (United States)

    Li, Qingyan; Lian, Shuabin; Dai, Zhiming; Xiang, Qian; Dai, Xianhua

    2013-01-01

    A bivalent gene is a gene marked with both H3K4me3 and H3K27me3 epigenetic modifications in the same region, and is proposed to play a pivotal role related to pluripotency in embryonic stem (ES) cells. Identification of these bivalent genes and understanding of their functions are important for further research on lineage specification and embryo development. To date, a large amount of genome-wide histone modification data has been generated in mouse and human ES cells. These valuable data make it possible to identify bivalent genes, but no comprehensive data repository or analysis tool is currently available for bivalent genes. In this work, we develop BGDB, the database of bivalent genes. The database contains 6897 bivalent genes in human and mouse ES cells, which were manually collected from the scientific literature. Each entry contains curated information, including genomic context, sequences, gene ontology and other relevant information. The web services of the BGDB database were implemented with PHP + MySQL + JavaScript, and provide diverse query functions. Database URL: http://dailab.sysu.edu.cn/bgdb/

  9. Directory of IAEA databases

    International Nuclear Information System (INIS)

    1991-11-01

    The first edition of the Directory of IAEA Databases is intended to describe the computerized information sources available to IAEA staff members. It contains a listing of all databases produced at the IAEA, together with information on their availability

  10. Native Health Research Database

    Science.gov (United States)

    (Indian Health Board) Welcome to the Native Health Database.

  11. Cell Centred Database (CCDB)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Cell Centered Database (CCDB) is a web accessible database for high resolution 2D, 3D and 4D data from light and electron microscopy, including correlated imaging.

  12. E3 Staff Database

    Data.gov (United States)

    US Agency for International Development — E3 Staff database is maintained by the E3 PDMS (Professional Development & Management Services) office. The database is MySQL. It is manually updated by E3 staff as...

  13. National Geochronological Database

    Science.gov (United States)

    Revised by Sloan, Jan; Henry, Christopher D.; Hopkins, Melanie; Ludington, Steve; Original database by Zartman, Robert E.; Bush, Charles A.; Abston, Carl

    2003-01-01

    The National Geochronological Data Base (NGDB) was established by the United States Geological Survey (USGS) to collect and organize published isotopic (also known as radiometric) ages of rocks in the United States. The NGDB (originally known as the Radioactive Age Data Base, RADB) was started in 1974. A committee appointed by the Director of the USGS was given the mission to investigate the feasibility of compiling the published radiometric ages for the United States into a computerized data bank for ready access by the user community. A successful pilot program, which was conducted in 1975 and 1976 for the State of Wyoming, led to a decision to proceed with the compilation of the entire United States. For each dated rock sample reported in published literature, a record containing information on sample location, rock description, analytical data, age, interpretation, and literature citation was constructed and included in the NGDB. The NGDB was originally constructed and maintained on a mainframe computer, and later converted to a Helix Express relational database maintained on an Apple Macintosh desktop computer. The NGDB and a program to search the data files were published and distributed on Compact Disc-Read Only Memory (CD-ROM) in standard ISO 9660 format as USGS Digital Data Series DDS-14 (Zartman and others, 1995). As of May 1994, the NGDB consisted of more than 18,000 records containing over 30,000 individual ages, which is believed to represent approximately one-half the number of ages published for the United States through 1991. Because the organizational unit responsible for maintaining the database was abolished in 1996, and because we wanted to provide the data in more usable formats, we have reformatted the data, checked and edited the information in some records, and provided this online version of the NGDB. This report describes the changes made to the data and formats, and provides instructions for the use of the database in geographic

  14. NIRS database of the original research database

    International Nuclear Information System (INIS)

    Morita, Kyoko

    1991-01-01

    Recently, library staff arranged and compiled the original research papers that have been written by researchers in the 33 years since the National Institute of Radiological Sciences (NIRS) was established. This paper describes how the internal database of original research papers was created. It is a small example of a hand-made database, built up by library staff using whatever knowledge of computers and programming they had. (author)

  15. Chemical analysis of DC745 Materials: DEV Lot 1 reinvestigation; barcodes P053387, P053388, and P053389

    Energy Technology Data Exchange (ETDEWEB)

    Dirmyer, Matthew R. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-11-09

    This report serves as a follow-up to our initial development lot 1 chemical analysis report (LA-UR-16-21970). The purpose of that report was to determine whether or not certain combinations of resin lots and curing agent lots resulted in chemical differences in the final material. One finding of that report suggested that pad P053389 was different from the three other pads analyzed. This report consists of chemical analysis of P053387, P053388, and a reinvestigation of P053389, all of which came from the potentially suspect combination of resin and curing agent lots. The goal of this report is to determine whether the observations relating to P053389 were isolated to that particular pad or systemic to that combination of resin and curing agent lots. The following suite of analyses was performed on the pads: Differential Scanning Calorimetry (DSC), Thermogravimetric Analysis (TGA), Fourier Transform Infrared Spectroscopy (FT-IR), and Solid State Nuclear Magnetic Resonance (NMR). The overall conclusions of the study are that pads P053387 and P053388 behave more consistently with the pads of other resin lot and curing agent lot combinations, and that the chemical observations made regarding pad P053389 are isolated to that pad and not representative of an issue with that resin lot and curing agent lot combination.

  16. Scopus database: a review.

    Science.gov (United States)

    Burnham, Judy F

    2006-03-08

    The Scopus database provides access to STM journal articles and the references included in those articles, allowing the searcher to search both forward and backward in time. The database can be used for collection development as well as for research. This review provides information on the key points of the database and compares it to Web of Science. Neither database is all-inclusive; rather, the two complement each other. If a library can afford only one, the choice must be based on institutional needs.

  17. Aviation Safety Issues Database

    Science.gov (United States)

    Morello, Samuel A.; Ricks, Wendell R.

    2009-01-01

    The aviation safety issues database was instrumental in the refinement and substantiation of the National Aviation Safety Strategic Plan (NASSP). The issues database is a comprehensive set of issues from an extremely broad base of aviation functions, personnel, and vehicle categories, both nationally and internationally. Several aviation safety stakeholders such as the Commercial Aviation Safety Team (CAST) have already used the database. This broader interest was the genesis to making the database publically accessible and writing this report.

  18. LSD: Large Survey Database framework

    Science.gov (United States)

    Juric, Mario

    2012-09-01

    The Large Survey Database (LSD) is a Python framework and DBMS for distributed storage, cross-matching and querying of large survey catalogs (>10^9 rows, >1 TB). The primary driver behind its development is the analysis of Pan-STARRS PS1 data. It is specifically optimized for fast queries and parallel sweeps of positionally and temporally indexed datasets. It transparently scales to more than >10^2 nodes, and can be made to function in "shared nothing" architectures.
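    The positional cross-matching that LSD is optimized for can be illustrated at toy scale: for each source in one catalog, find the nearest source in another within a matching radius. The flat 2-D sketch below prunes candidates with a sort and binary search on one coordinate; real survey frameworks index on the sphere (e.g. with HEALPix), and all coordinates here are invented.

```python
# Toy positional cross-match between two catalogs: for each source in cat_a,
# find the nearest cat_b source within `radius`, using sort + bisect on x
# to prune candidates. Flat 2-D geometry only; survey frameworks such as
# LSD work on the sphere with spatial indexing.
from bisect import bisect_left, bisect_right
from math import hypot

def crossmatch(cat_a, cat_b, radius):
    """cat_a, cat_b: lists of (x, y); returns {index_in_a: index_in_b or None}."""
    order = sorted(range(len(cat_b)), key=lambda i: cat_b[i][0])
    xs = [cat_b[i][0] for i in order]
    matches = {}
    for ia, (x, y) in enumerate(cat_a):
        lo = bisect_left(xs, x - radius)
        hi = bisect_right(xs, x + radius)
        best, best_d = None, radius
        for k in range(lo, hi):               # only candidates close in x
            ib = order[k]
            d = hypot(x - cat_b[ib][0], y - cat_b[ib][1])
            if d <= best_d:
                best, best_d = ib, d
        matches[ia] = best
    return matches

m = crossmatch([(1.0, 1.0), (5.0, 5.0)], [(1.1, 1.0), (9.0, 9.0)], radius=0.5)
```

The sort-and-prune step is what keeps the match near O(n log n) instead of the naive all-pairs O(n*m), which is the same reason large frameworks bother with positional indexing at all.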

  19. Automated Oracle database testing

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    Ensuring database stability and steady performance in the modern world of agile computing is a major challenge. Various changes happening at any level of the computing infrastructure: OS parameters & packages, kernel versions, database parameters & patches, or even schema changes, all can potentially harm production services. This presentation shows how an automatic and regular testing of Oracle databases can be achieved in such agile environment.

  20. Inleiding database-systemen

    NARCIS (Netherlands)

    Pels, H.J.; Lans, van der R.F.; Pels, H.J.; Meersman, R.A.

    1993-01-01

    This article introduces the main concepts that play a role around databases, and it gives an overview of the objectives, the functions and the components of database systems. Although the function of a database is intuitively fairly clear, it is nevertheless a technologically complex

  1. Interconnecting heterogeneous database management systems

    Science.gov (United States)

    Gligor, V. D.; Luckenbaugh, G. L.

    1984-01-01

    It is pointed out that there is still a great need for the development of improved communication between remote, heterogeneous database management systems (DBMS). Problems regarding the effective communication between distributed DBMSs are primarily related to significant differences between local data managers, local data models and representations, and local transaction managers. A system of interconnected DBMSs which exhibit such differences is called a network of distributed, heterogeneous DBMSs. In order to achieve effective interconnection of remote, heterogeneous DBMSs, the users must have uniform, integrated access to the different DBMSs. The present investigation is mainly concerned with an analysis of the existing approaches to interconnecting heterogeneous DBMSs, taking into account four experimental DBMS projects.

  2. Database Description - RMOS | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database name: RMOS. Creator: Shoshi Kikuchi, Genome Research Unit. Database classification: Plant databases - Rice; Microarray Data and other Gene Expression Databases. Organism: Oryza sativa (Taxonomy ID: 4530). Referenced databases: Rice Expression Database (RED), Rice full-length cDNA Database (KOME), Rice Genome Integrated Map Database (INE), Rice Mutant Panel Database (Tos17), Rice Genome Annotation Database.

  3. An Interoperable Cartographic Database

    Directory of Open Access Journals (Sweden)

    Slobodanka Ključanin

    2007-05-01

    Full Text Available The concept of producing a prototype of an interoperable cartographic database is explored in this paper, including the possibilities of integrating different geospatial data into the database management system and their visualization on the Internet. The implementation includes vectorization of the concept of a single map page, creation of the cartographic database in an object-relational database, spatial analysis, and definition and visualization of the database content in the form of a map on the Internet.

  4. Keyword Search in Databases

    CERN Document Server

    Yu, Jeffrey Xu; Chang, Lijun

    2009-01-01

    It has become highly desirable to provide users with flexible ways to query/search information over databases as simple as keyword search like Google search. This book surveys the recent developments on keyword search over databases, and focuses on finding structural information among objects in a database using a set of keywords. Such structural information to be returned can be either trees or subgraphs representing how the objects, that contain the required keywords, are interconnected in a relational database or in an XML database. The structural keyword search is completely different from

  5. Nuclear power economic database

    International Nuclear Information System (INIS)

    Ding Xiaoming; Li Lin; Zhao Shiping

    1996-01-01

    Nuclear power economic database (NPEDB), based on ORACLE V6.0, consists of three parts, i.e., an economic database of nuclear power stations, an economic database of the nuclear fuel cycle, and an economic database of nuclear power planning and nuclear environment. The economic database of nuclear power stations includes data on general economics, technique, capital cost and benefit, etc. The economic database of the nuclear fuel cycle includes data on technique and nuclear fuel price. The economic database of nuclear power planning and nuclear environment includes data on energy history, forecast, energy balance, electric power and energy facilities

  6. Five years database of landslides and floods affecting Swiss transportation networks

    Science.gov (United States)

    Voumard, Jérémie; Derron, Marc-Henri; Jaboyedoff, Michel

    2017-04-01

    Switzerland is a country threatened by many natural hazards. Many events occur in the built environment, affecting infrastructure, buildings or transportation networks and occasionally producing expensive damage. This is the reason why large landslides are generally well studied and monitored in Switzerland to reduce the financial and human risks. However, we have noticed a lack of data on the small events which have impacted roads and railways in recent years. This is why we have collected in a database all the reported natural hazard events that have affected the Swiss transportation networks since 2012. More than 800 road and railway closures have been recorded in the five years from 2012 to 2016. These events are classified into six classes: earth flow, debris flow, rockfall, flood, avalanche and others. Data come from Swiss online press articles sorted by Google Alerts. The search is based on more than thirty keywords in three languages (Italian, French, German). After verifying that an article indeed describes an event which has affected a road or a railway track, it is studied in detail. We finally obtain information on about sixty attributes per event, covering event date, event type, event localisation and meteorological conditions, as well as impacts and damage to the track and human casualties. From this database, many trends over the five years of data collection can be outlined: in particular, the spatial and temporal distributions of the events, as well as their consequences in terms of traffic (closure duration, deviation, etc.). Even if the database is imperfect (because of the way it was built and the short time period considered), it highlights the non-negligible impact of small natural hazard events on roads and railways in Switzerland at a national level. This database helps to better understand and quantify these events and to better integrate them in risk assessment.

  7. Statistical validation of reagent lot change in the clinical chemistry laboratory can confer insights on good clinical laboratory practice.

    Science.gov (United States)

    Cho, Min-Chul; Kim, So Young; Jeong, Tae-Dong; Lee, Woochang; Chun, Sail; Min, Won-Ki

    2014-11-01

    Verification of a new reagent lot's suitability is necessary to ensure that results for patients' samples are consistent before and after reagent lot changes. A typical procedure is to measure results for some patients' samples along with quality control (QC) materials. In this study, the results of patients' samples and QC materials across reagent lot changes were analysed. In addition, a recommendation regarding QC target range adjustment along with reagent lot changes was proposed. Patients' sample and QC material results from 360 reagent lot change events involving 61 analytes and eight instrument platforms were analysed. The between-lot differences for the patients' samples (ΔP) and the QC materials (ΔQC) were tested by Mann-Whitney U tests. The size of the between-lot differences in the QC data was calculated as multiples of the standard deviation (SD). The ΔP and ΔQC values differed significantly in only 7.8% of the reagent lot change events. This frequency was not affected by the assay principle or the QC material source. One SD was proposed as the cutoff for maintaining the pre-existing target range after a reagent lot change. While non-commutable QC material results were infrequent in the present study, our data confirmed that QC materials have limited usefulness when assessing new reagent lots. Also, a 1 SD standard for establishing a new QC target range after a reagent lot change event was proposed.
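    The 1 SD rule described above is simple to operationalize. A minimal sketch (function names and the example QC values below are illustrative, not from the paper): express the between-lot shift in QC means as a multiple of the old lot's SD and keep the pre-existing target range only when the shift is at most 1 SD.

```python
from statistics import mean, stdev

def lot_shift_in_sd(qc_old, qc_new):
    """Between-lot shift of the QC mean, in multiples of the old lot's SD."""
    return abs(mean(qc_new) - mean(qc_old)) / stdev(qc_old)

def keep_existing_target_range(qc_old, qc_new, cutoff_sd=1.0):
    # Proposed rule: retain the pre-existing QC target range if the
    # between-lot shift does not exceed 1 SD.
    return lot_shift_in_sd(qc_old, qc_new) <= cutoff_sd

old_lot = [5.0, 5.1, 4.9, 5.0, 5.2, 4.8]     # QC results, old reagent lot
new_lot = [5.05, 5.1, 5.0, 4.95, 5.15, 5.0]  # QC results, new reagent lot
print(keep_existing_target_range(old_lot, new_lot))
```

    Here the shift is about 0.3 SD, so the existing target range would be kept; a shift above 1 SD would trigger establishment of a new target range.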

  8. Database Description - RPD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database name: RPD (Rice Proteome Database). Creator: Setsuko Komatsu, Institute of Crop Science, National Agriculture and Food Research Organization. Database classification: Proteomics Resources; Plant databases - Rice. Organism: Oryza sativa (Taxonomy ID: 4530). Database description: The Rice Proteome Database contains information on proteins and is searchable by keyword.

  9. Database Description - JSNP | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database name: JSNP. Creator affiliation: Japan Science and Technology Agency. Organism: Homo sapiens (Taxonomy ID: 9606). Database description: a database of about 197,000 polymorphisms in Japanese populations. Database maintenance site: Institute of Medical Science. Need for user registration: Not available.

  10. Database Description - ASTRA | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database name: ASTRA. Database classification: Nucleotide Sequence Databases - Gene structure. Organisms: Arabidopsis thaliana (Taxonomy ID: 3702), Oryza sativa (Taxonomy ID: 4530). Database maintenance site: National Institute of Advanced Industrial Science and Technology. Need for user registration: Not available.

  11. Database Description - RED | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database name: RED (Rice Expression Database). Creator: Shoshi Kikuchi, Genome Research Unit. Database classification: Plant databases - Rice; Microarray, Gene Expression. Organism: Oryza sativa (Taxonomy ID: 4530). Article: "Rice Expression Database: the gateway to rice functional genomics", Trends in Plant Science (2002) Dec; 7(12):563-564.

  12. Database Description - PLACE | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database name: PLACE. Creator: National Institute of Agrobiological Sciences, Kannondai, Tsukuba, Ibaraki 305-8602, Japan. Database classification: Plant databases. Organism: Tracheophyta (Taxonomy ID: 58023). Article: Nucleic Acids Research (1999) Vol. 27, No. 1:297-300. Need for user registration: Not available.

  13. COPEPOD: The Coastal & Oceanic Plankton Ecology, Production, & Observation Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Coastal & Oceanic Plankton Ecology, Production, & Observation Database (COPEPOD) provides NMFS scientists with quality-controlled, globally distributed...

  14. A software development for establishing optimal production lots and its application in academic and business environments

    Directory of Open Access Journals (Sweden)

    Javier Valencia Mendez

    2014-09-01

    Full Text Available The recent global economic downturn has increased an already perceived need in organizations for cost savings. To cope with this need, companies can opt for different strategies. This paper focuses on optimizing processes and, more specifically, on determining the optimal production lot size. To determine the optimal lot for a specific production process, new software was developed that not only incorporates various productive and logistical elements in its calculations but also affords users a practical way to manage the large number of input parameters required to determine the optimal batch. The software has not only been validated by several companies, both Spanish and Mexican, which achieved significant savings, but has also been used as a teaching tool in universities, with highly satisfactory results from the point of view of student learning. A special contribution of this work is that the developed tool can be sent to interested readers free of charge upon request.
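    The software itself is only available on request, so as a stand-in here is the classic economic order quantity (EOQ) formula, the textbook core of optimal-lot calculations; all parameter values below are invented for illustration and the paper's tool adds further productive and logistical terms on top of this.

```python
import math

def eoq(annual_demand, setup_cost, holding_cost_per_unit):
    """Lot size minimizing the sum of setup and holding costs: sqrt(2DS/H)."""
    return math.sqrt(2 * annual_demand * setup_cost / holding_cost_per_unit)

# Example: 12,000 units/year demand, 100 per setup, 2.4 per unit-year holding.
lot = eoq(annual_demand=12000, setup_cost=100, holding_cost_per_unit=2.4)
print(round(lot))  # -> 1000
```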

  15. Perception that "everything requires a lot of effort": transcultural SCL-25 item validation.

    Science.gov (United States)

    Moreau, Nicolas; Hassan, Ghayda; Rousseau, Cécile; Chenguiti, Khalid

    2009-09-01

    This brief report illustrates how the migration context can affect specific item validity of mental health measures. The SCL-25 was administered to 432 recently settled immigrants (220 Haitian and 212 Arabs). We performed descriptive analyses, as well as Infit and Outfit statistics analyses using WINSTEPS Rasch Measurement Software based on Item Response Theory. The participants' comments about the item You feel everything requires a lot of effort in the SCL-25 were also qualitatively analyzed. Results revealed that the item You feel everything requires a lot of effort is an outlier and does not adjust in an expected and valid fashion with its cluster items, as it is over-endorsed by Haitian and Arab healthy participants. Our study thus shows that, in transcultural mental health research, the cultural and migratory contexts may interact and significantly influence the meaning of some symptom items and consequently, the validity of symptom scales.

  16. An improved hierarchical A * algorithm in the optimization of parking lots

    Science.gov (United States)

    Wang, Yong; Wu, Junjuan; Wang, Ying

    2017-08-01

    In parking lot path optimization, the traditional evaluation index takes the shortest distance as the best criterion and does not consider actual road conditions. The introduction of a more practical evaluation index can not only simplify the hardware design of the guidance system but also save software overhead. Firstly, we establish a mathematical model of the parking lot network graph (RPCDV), and all nodes in the network are divided into two layers, each constructed with a different evaluation function, based on the improved hierarchical A* algorithm, which improves the search efficiency and precision of the time-optimal path. The final results show that, for different section attribute parameters, the algorithm always finds the time-optimal path faster.
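    As a rough illustration of the underlying idea only (the paper's RPCDV model and two-layer hierarchy are not reproduced here; the toy graph, travel times and heuristic values are invented), a plain A* search that minimizes travel time rather than distance looks like this:

```python
import heapq

def a_star(graph, heuristic, start, goal):
    """A* over a weighted digraph; returns (total time, path) or None."""
    frontier = [(heuristic[start], 0.0, start, [start])]
    best_g = {}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        if node in best_g and best_g[node] <= g:
            continue  # already reached this node more cheaply
        best_g[node] = g
        for nxt, t in graph[node]:
            ng = g + t
            heapq.heappush(frontier, (ng + heuristic[nxt], ng, nxt, path + [nxt]))
    return None

graph = {  # travel times in seconds between junctions (invented)
    'entry': [('A', 5), ('B', 9)],
    'A': [('spot', 8)],
    'B': [('spot', 2)],
    'spot': [],
}
h = {'entry': 6, 'A': 5, 'B': 2, 'spot': 0}  # admissible time estimates
print(a_star(graph, h, 'entry', 'spot'))
```

    With a time-based edge cost, the search prefers the 11-second route via B even though the route via A might be geometrically shorter, which is the point of the paper's evaluation index.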

  17. A review of lot streaming in a flow shop environment with makespan criteria

    Directory of Open Access Journals (Sweden)

    Pedro Gómez-Gasquet

    2013-07-01

    Full Text Available Purpose: This paper reviews current literature and contributes a set of findings that capture the current state-of-the-art of the topic of lot streaming in a flow shop. Design/methodology/approach: A literature review to capture, classify and summarize the main body of knowledge on lot streaming in a flow shop with makespan criteria and translate this into a form that is readily accessible to researchers and practitioners in the more mainstream production scheduling community. Findings and Originality/value: The existing knowledge base is somewhat fragmented. This is a relatively unexplored topic within mainstream operations management research and one which could provide rich opportunities for further exploration. Originality/value: This paper sets out to review current literature, from an advanced production scheduling perspective, and contributes a set of findings that capture the current state-of-the-art of this topic.

  18. Two parameter-tuned metaheuristic algorithms for the multi-level lot sizing and scheduling problem

    Directory of Open Access Journals (Sweden)

    S.M.T. Fatemi Ghomi

    2012-10-01

    Full Text Available This paper addresses the lot sizing and scheduling problem for n products and m machines in a flow shop environment where setups among machines are sequence-dependent and can be carried over. Products must be produced under capacity constraints, and backorders are allowed. Since lot sizing and scheduling problems are well known to be strongly NP-hard, much attention has been given to heuristic and metaheuristic methods. This paper presents two metaheuristic algorithms, namely a Genetic Algorithm (GA) and an Imperialist Competitive Algorithm (ICA). Moreover, Taguchi robust design methodology is employed to calibrate the parameters of the algorithms for different problem sizes. In addition, the parameter-tuned algorithms are compared against a presented lower bound on randomly generated problems. At the end, comprehensive numerical examples are presented to demonstrate the effectiveness of the proposed algorithms. The results showed that the performance of both GA and ICA is very promising and that ICA statistically outperforms GA.

  19. Brive-la-Gaillarde (Corrèze). Îlot Massénat

    OpenAIRE

    Ollivier, Julien

    2018-01-01

    The archaeological excavation of the îlot Massénat was undertaken in the summer of 2016, prior to the construction of housing and commercial premises with semi-underground parking. It covered an area of about 900 m2 and lasted 2 months with a team of 5 archaeologists. The site, evaluated in 2004 (dir. J. Roger, Inrap), is located south of Puy Saint-Pierre, where all the remains of the ancient occupation of Brive, still poorly characterized, have been discovered. The îlot is moreover...

  20. Database Description - Arabidopsis Phenome Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database name: Arabidopsis Phenome Database. Creator: Hiroshi Masuya, BioResource Center. Database classification: Plant databases - Arabidopsis thaliana. Organism: Arabidopsis thaliana (Taxonomy ID: 3702). Database description: The Arabidopsis Phenome Database integrates two novel databases, including the "Database of Curated Plant Phenome", providing useful materials for experimental research.

  1. 3. A 40-years record of the polymetallic pollution of the Lot River system, France

    Science.gov (United States)

    Audry, S.; Schäfer, J.; Blanc, G.; Veschambre, S.; Jouanneau, J.-M.

    2003-04-01

    The Lot River system (southwest France) is known for historic Zn and Cd pollution that originates from Zn ore treatment in the small Riou-Mort watershed and affects seafood production in the Gironde Estuary. We present a sedimentary record from 2 cores taken in a dam lake downstream of the Riou-Mort watershed, covering the evolution of metal inputs into the Lot River over the past 40 years (1960-2001). Depth profiles of Cd, Zn, Cu and Pb concentrations are comparable, indicating common sources and transport. The constant Zn/Cd ratio (~50) observed in the sediment cores is similar to that in SPM from the Riou-Mort watershed, indicating the dominance of point source pollution over the geochemical background signal. Cadmium, Zn, Cu and Pb concentrations in the studied sediment cores show an important peak at 42-44 cm depth, with up to 300 mg.kg-1 (Cd), 10,000 mg.kg-1 (Zn), 150 mg.kg-1 (Cu) and 930 mg.kg-1 (Pb). These concentrations are much higher than geochemical background values; for example, Cd concentrations are more than 350-fold higher than those measured in the same riverbed upstream of the confluence with the Riou-Mort River. This peak coincides with the upper 137Cs peak resulting from the Chernobyl accident (1986). Therefore, this heavy metal peak is attributed to the latest accidental Cd pollution of the Lot River in 1986. Several downward heavy metal peaks reflect varying inputs, probably due to changes in industrial activities within the Riou-Mort watershed. Given a mean sedimentation rate of about 2 cm.yr-1, the record suggests constant and much lower heavy metal concentrations since the early nineties due to the restriction of industrial activities and remediation efforts in the Riou-Mort watershed. Nevertheless, Cd, Zn, Cu and Pb concentrations in the upper sediment remain high compared to background values from reference sites in the upper Lot River system.

  2. Making Marble Tracks Can Involve Lots of Fun as Well as STEM Learning

    Science.gov (United States)

    Nagel, Bert

    2015-01-01

    Marble tracks are a very popular toy and big ones can be found in science centres in many countries. If children want to make a marble track themselves it is quite a job. It takes a long time, they can take up a lot of space and most structures are quite fragile, as the materials used can very quickly prove unfit for the task and do not last very…

  3. A DESIGN STUDY OF AN INNOVATIVE BARRIER SYSTEM FOR PERSONAL PARKING LOTS

    OpenAIRE

    BÖRKLÜ, Hüseyin; KALYON, Sadık

    2018-01-01

    The increase in the number of cars made it necessary to protect parking areas. This research includes a literature review about commercially available barriers, which are arm barriers, rising bollards, chain barriers, and automatic and manual private barriers, from the point of view of common and side-by-side parking lots. Their advantages and disadvantages are evaluated. After the literature review work, a design requirements list for a car park protector, which includes important and strong properties of ...

  4. Carbon dioxide and methane emissions from the scale model of open dairy lots.

    Science.gov (United States)

    Ding, Luyu; Cao, Wei; Shi, Zhengxiang; Li, Baoming; Wang, Chaoyuan; Zhang, Guoqiang; Kristensen, Simon

    2016-07-01

    To investigate the impacts of major factors on carbon loss via gaseous emissions, carbon dioxide (CO2) and methane (CH4) emissions from the ground of open dairy lots were tested in a scale model experiment at various air temperatures (15, 25, and 35 °C), surface velocities (0.4, 0.7, 1.0, and 1.2 m sec(-1)), and floor types (unpaved soil floor and brick-paved floor) under controlled laboratory conditions using the wind tunnel method. Generally, CO2 and CH4 emissions were significantly enhanced with the increase of air temperature and velocity (P < 0.05). Emissions were also affected by air temperature and the soil characteristics of the floor. Although different patterns were observed in CH4 emission from the soil and brick floors at different air temperature-velocity combinations, statistical analysis showed no significant difference in CH4 emissions between the floors (P > 0.05). For CO2, similar emissions were found from the soil and brick floors at 15 and 25 °C, whereas higher rates were detected from the brick floor at 35 °C (P < 0.05). CH4 emission from the scale model was exponentially related to CO2 flux, which might be helpful in estimating CH4 emission from manure management. Gaseous emissions from open lots are largely dependent on outdoor climate, floor systems, and management practices, which are quite different from those indoors. This study assessed the effects of floor types and air velocities on CO2 and CH4 emissions from open dairy lots at various temperatures using a wind tunnel. It provides some valuable information for decision-making and further studies on gaseous emissions from open lots.

  5. Exponential Smoothing for Multi-Product Lot-Sizing With Heijunka and Varying Demand

    OpenAIRE

    Grimaud Frédéric; Dolgui Alexandre; Korytkowski Przemyslaw

    2014-01-01

    Here we discuss a multi-product lot-sizing problem for a job shop controlled with a heijunka box. Demand is considered as a random variable with constant variation which must be absorbed somehow by the manufacturing system, either by increased inventory or by flexibility in the production. When a heijunka concept (production leveling) is used, fluctuations in customer orders are not transferred directly to the manufacturing system allowing for a smoother production and better production capac...
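    A minimal sketch of the leveling idea (not the authors' exact model; the smoothing constant and demand series below are invented): simple exponential smoothing turns a noisy order stream into a leveled production plan, with inventory absorbing the gap between orders and the smoothed quantity.

```python
def exp_smooth(demands, alpha=0.3):
    """Simple exponential smoothing; each value is the leveled plan quantity."""
    level = demands[0]
    plan = [level]
    for d in demands[1:]:
        level = alpha * d + (1 - alpha) * level  # blend new demand into level
        plan.append(level)
    return plan

orders = [100, 130, 80, 120, 90, 110]  # fluctuating customer orders (invented)
print([round(x) for x in exp_smooth(orders)])
```

    The plan stays in a narrow band around the mean demand, which is the heijunka effect: the manufacturing system sees smoothed quantities instead of the raw fluctuations.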

  6. Choosing a Cluster Sampling Design for Lot Quality Assurance Sampling Surveys

    OpenAIRE

    Hund, Lauren; Bedrick, Edward J.; Pagano, Marcello

    2015-01-01

    Lot quality assurance sampling (LQAS) surveys are commonly used for monitoring and evaluation in resource-limited settings. Recently several methods have been proposed to combine LQAS with cluster sampling for more timely and cost-effective data collection. For some of these methods, the standard binomial model can be used for constructing decision rules as the clustering can be ignored. For other designs, considered here, clustering is accommodated in the design phase. In this paper, we comp...
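    For the designs where clustering can be ignored, the standard binomial model mentioned above gives a simple decision rule. A hedged sketch (the n = 19, d = 3 design and the coverage values are illustrative, not taken from the paper):

```python
from math import comb

def prob_accept(n, d, p):
    """P(at most d failures among n sampled units) when true failure rate is p."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(d + 1))

# Decision rule: sample n units, "accept" the lot/area if failures <= d.
n, d = 19, 3
high = prob_accept(n, d, 0.10)  # area with good coverage: mostly accepted
low = prob_accept(n, d, 0.50)   # area with poor coverage: rarely accepted
print(round(high, 3), round(low, 3))
```

    The gap between the two acceptance probabilities is what makes the design useful for classifying areas with small samples.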

  7. APPLICATION OF LOT QUALITY ASSURANCE SAMPLING FOR ASSESSING DISEASE CONTROL PROGRAMMES - EXAMINATION OF SOME METHODOLOGICAL ISSUES

    OpenAIRE

    T. R. RAMESH RAO

    2011-01-01

    Lot Quality Assurance Sampling (LQAS), a statistical tool from industrial settings, has been in use since 1980 for monitoring and evaluation of programs on disease control / immunization status among children / health workers' performance in the health system. While conducting LQAS in the field, there are occasions, even after due care in design, when practical and methodological issues must be addressed before it is recommended for implementation and intervention. LQAS is applied under the assumpti...

  8. Optimal Lot Sizing with Scrap and Random Breakdown Occurring in Backorder Replenishing Period

    OpenAIRE

    Ting, Chia-Kuan; Chiu, Yuan-Shyi; Chan, Chu-Chai

    2011-01-01

    This paper is concerned with the determination of the optimal lot size for an economic production quantity model with scrap and random breakdowns occurring in the backorder replenishing period. In most real-life manufacturing systems, the generation of defective items and random breakdowns of production equipment are inevitable. To deal with stochastic machine failures, production planners practically calculate the mean time between failures (MTBF) and establish a robust plan accordingly, in terms of opt...

  9. Danish Colorectal Cancer Group Database.

    Science.gov (United States)

    Ingeholm, Peter; Gögenur, Ismail; Iversen, Lene H

    2016-01-01

    The aim of the database, which has existed for registration of all patients with colorectal cancer in Denmark since 2001, is to improve the prognosis for this patient group. It covers all Danish patients with newly diagnosed colorectal cancer who are either diagnosed or treated in a surgical department of a public Danish hospital. The database comprises an array of surgical, radiological, oncological, and pathological variables. The surgeons record data such as diagnostics performed, including the type and results of radiological examinations, lifestyle factors, comorbidity and performance, treatment including the surgical procedure, urgency of surgery, and intra- and postoperative complications within 30 days after surgery. The pathologists record data such as tumor type, number of lymph nodes and metastatic lymph nodes, surgical margin status, and other pathological risk factors. The database has had >95% completeness in including patients with colorectal adenocarcinoma, with >54,000 patients registered so far, approximately one-third rectal cancers and two-thirds colon cancers, and an overrepresentation of men among rectal cancer patients. The stage distribution was more or less constant until 2014, with a tendency toward a lower rate of stage IV and a higher rate of stage I after introduction of the national screening program in 2014. The 30-day mortality rate after elective surgery has been markedly reduced from >7% in 2001-2003. The database is a national population-based clinical database with high patient and data completeness for the perioperative period. The resolution of data is high for description of the patient at the time of diagnosis, including comorbidities, and for characterizing diagnosis, surgical interventions, and short-term outcomes. The database does not have high-resolution oncological data and does not register recurrences after primary surgery.
The Danish Colorectal Cancer Group provides high-quality data and has been documenting an increase in short- and long

  10. Transportation and Production Lot-size for Sugarcane under Uncertainty of Machine Capacity

    Directory of Open Access Journals (Sweden)

    Sudtachat Kanchala

    2018-01-01

    Full Text Available The integrated transportation and production lot-sizing problem has an important effect on the total cost of the operating system for sugar factories. In this research, we formulate a mathematical model that combines these two problems as a two-stage stochastic programming model. In the first stage, we determine the lot sizes of the transportation problem and allocate a fixed number of vehicles to transport sugarcane to the mill factory. Moreover, we consider uncertainty in machine (mill) capacities. After the machine (mill) capacities are realized, in the second stage we determine the production lot sizes and decide how many units of sugarcane to hold in front of the mills, based on discrete random variables for the machine (mill) capacities. We investigate the model using a small-size problem. The results show that the optimal solutions tend to choose the closest fields and the lowest holding cost per unit (at fields) to transport sugarcane to the mill factory. We show the results of a comparison of our model and the worst-case model (full capacity). The results show that our model provides better efficiency than the worst-case model.

  11. Study of Different Priming Treatments on Germination Traits of Soybean Seed Lots

    Directory of Open Access Journals (Sweden)

    Hossein Reza ROUHI

    2011-03-01

    Full Text Available Oilseeds are more susceptible to deterioration due to membrane disruption, a high free fatty acid level in seeds and free radical production. These factors tend to produce less vigorous seed. Priming treatments have been used to accelerate germination and seedling growth in most crops under normal and stress conditions. For susceptible and low-vigor soybean seed, this technique would be a promising method. First, in a separate experiment, the effects of hydropriming for 12, 24, 36 and 48 h, with a non-primed control, were evaluated on germination traits of soybean seed lots cv. 'Sari' (including 2 drying methods and 3 harvest moisture levels). Then, a second experiment was conducted to determine the best combination of osmopriming in soybean seed lots; hence, 3 osmotic potential levels (-8, -10 and -12 bar) at 4 durations (12, 24, 36 and 48 h) were compared. Analysis of variance showed that, except for seedling dry weight, the other traits, including standard germination, germination rate, seedling length and vigor index, were influenced by osmopriming. Hydropriming had no effect on these traits and decreased the rate of germination. Finally, the best combination of osmopriming was an osmotic potential of -12 bar for 12 hours, which gave acceptable results in all conditions and is recommended for soybean seed lots cv. 'Sari'.

  13. Solving lot-sizing problem with quantity discount and transportation cost

    Science.gov (United States)

    Lee, Amy H. I.; Kang, He-Yau; Lai, Chun-Mei

    2013-04-01

    Owing to today's increasingly competitive market and ever-changing manufacturing environment, the inventory problem is becoming more complicated to solve. Over the past decade, the incorporation of heuristic methods has become a new trend for tackling such complex problems. This article considers a lot-sizing problem whose objective is to minimise total costs, where the costs include ordering, holding, purchase and transportation costs, under the requirement that no inventory shortage is allowed in the system. We first formulate the lot-sizing problem as a mixed integer programming (MIP) model. Next, an efficient genetic algorithm (GA) model is constructed for solving large-scale lot-sizing problems. An illustrative example with two cases in a touch panel manufacturer is used to illustrate the practicality of these models, and a sensitivity analysis is applied to understand the impact of changes in parameters on the outcomes. The results demonstrate that both the MIP model and the GA model are effective and relatively accurate tools for determining multi-period replenishment for touch panel manufacturing with quantity discount and batch transportation. The contributions of this article are to construct an MIP model that obtains an optimal solution when the problem itself is not too complicated, and to present a GA model that finds a near-optimal solution efficiently when the problem is complicated.
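The cost structure described above (ordering, holding, purchase with an all-units quantity discount, and per-batch transportation) can be sketched for a tiny instance by enumerating zero-inventory-ordering plans, i.e. plans in which an order is placed only when inventory hits zero. This is a minimal illustration, not the article's MIP or GA: all parameter names and values (`K`, `disc_qty`, `batch_cap`, etc.) are hypothetical, and with quantity discounts the zero-inventory rule is only a heuristic, since over-ordering to reach a price break can sometimes be cheaper.

```python
from itertools import combinations
from math import ceil, inf

def plan_cost(order_periods, demand, K=100.0, h=1.0, base_price=10.0,
              disc_qty=50, disc_price=9.0, batch_cap=30, batch_cost=25.0):
    """Total cost when orders are placed only in `order_periods` and each
    order covers demand up to the next order period (zero-inventory rule)."""
    T = len(demand)
    periods = sorted(order_periods)
    cost = 0.0
    for i, t in enumerate(periods):
        nxt = periods[i + 1] if i + 1 < len(periods) else T
        q = sum(demand[t:nxt])                       # order quantity
        price = disc_price if q >= disc_qty else base_price
        cost += K + q * price + ceil(q / batch_cap) * batch_cost
        # holding cost: units consumed in period s are carried from t to s
        cost += sum(h * demand[s] * (s - t) for s in range(t, nxt))
    return cost

def best_plan(demand, **kw):
    """Enumerate all order-period subsets that include period 0."""
    T = len(demand)
    best = (inf, None)
    for r in range(1, T + 1):
        for combo in combinations(range(1, T), r - 1):
            plan = (0,) + combo
            c = plan_cost(plan, demand, **kw)
            if c < best[0]:
                best = (c, plan)
    return best
```

For a six-period demand vector the search considers only 2^5 = 32 plans; a realistic multi-period instance would require the MIP or GA the authors propose.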

  14. Three Permeable Pavements Performances for Priority Metal Pollutants and Metals associated with Deicing Chemicals from Edison Parking Lot, NJ - abstract

    Science.gov (United States)

    The U.S. Environmental Protection Agency constructed a 4000-m2 parking lot in Edison, New Jersey in 2009. The parking lot is surfaced with three permeable pavements [permeable interlocking concrete pavers (PICP), pervious concrete (PC), and porous asphalt (PA)]. Samples of each p...

  15. Three Permeable Pavements Performances for Priority Metal Pollutants and Metals Associated with Deicing Chemicals from Edison Parking Lot, NJ

    Science.gov (United States)

    The U.S. Environmental Protection Agency constructed a 4000-m2 parking lot in Edison, New Jersey in 2009. The parking lot is surfaced with three permeable pavements [permeable interlocking concrete pavers (PICP), pervious concrete (PC), and porous asphalt (PA)]. Samples of each p...

  16. Mentha spicata L. infusions as sources of antioxidant phenolic compounds: emerging reserve lots with special harvest requirements.

    Science.gov (United States)

    Rita, Ingride; Pereira, Carla; Barros, Lillian; Santos-Buelga, Celestino; Ferreira, Isabel C F R

    2016-10-12

    Mentha spicata L., commonly known as spearmint, is widely used in both fresh and dry forms, for infusion preparation or in European and Indian cuisines. Recently, with the evolution of the tea market, several novel products with added value are emerging, and the standard lots have evolved to reserve lots, with special harvest requirements that confer them with enhanced organoleptic and sensorial characteristics. The apical leaves of these batches are collected in specific conditions, giving them a different chemical profile. In the present study, standard and reserve lots of M. spicata were assessed in terms of the antioxidants present in infusions prepared from the different lots. The reserve lots presented the highest concentration of all the compounds identified relative to the standard lots, with 326 and 188 μg mL⁻¹ of total phenolic compounds, respectively. Both types of samples presented rosmarinic acid as the most abundant phenolic compound, at concentrations of 169 and 101 μg mL⁻¹ for reserve and standard lots, respectively. The antioxidant activity was higher in the reserve lots, which had the highest total phenolic compound content, with EC50 values ranging from 152 to 336 μg mL⁻¹. The obtained results provide scientific information that may allow the consumer to make a conscientious choice.

  17. The use of knowledge-based Genetic Algorithm for starting time optimisation in a lot-bucket MRP

    Science.gov (United States)

    Ridwan, Muhammad; Purnomo, Andi

    2016-01-01

    In production planning, Material Requirement Planning (MRP) is usually developed on a time-bucket system, in which a period in the MRP represents a time interval, usually a week. MRP has been implemented successfully in Make To Stock (MTS) manufacturing, where production must start before customer demand is received. To be implemented successfully in Make To Order (MTO) manufacturing, however, the conventional MRP must be modified to bring it in line with the real situation. In MTO manufacturing, the delivery schedule to customers is strictly defined and must be fulfilled in order to increase customer satisfaction. On the other hand, the company prefers to keep a constant number of workers, hence the production lot size should be constant as well. Since a bucket in a conventional MRP system represents time, usually a week, a strict delivery schedule cannot be accommodated. Fortunately, there is a modified time-bucket MRP system, called the lot-bucket MRP system, proposed by Casimir in 1999. In the lot-bucket MRP system, a bucket represents a lot, and the lot size is preferably constant. The time to finish each lot can vary depending on the lot's due date. The starting time of a lot must be determined so that every lot has a reasonable production time. So far there is no formal method to determine the optimum starting times in the lot-bucket MRP system; trial and error is usually used, but it sometimes leaves several lots with very short production times, making the lot-bucket MRP infeasible to execute. This paper presents the use of a Genetic Algorithm (GA) for optimisation of starting times in a lot-bucket MRP system. Even though GA is well known as a powerful searching algorithm, improvement is still required in order to increase the possibility of GA finding an optimum solution in a shorter time. A knowledge-based system has been embedded in the proposed GA as the improvement effort, and it is proven that the
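The starting-time search described above can be illustrated with a plain GA over lot boundary times. This is a hedged sketch under simplifying assumptions (one production line, lots processed back to back, due dates sorted ascending), and it omits the paper's knowledge-based component; all names (`evolve_start_times`, `mut`, etc.) are hypothetical. Fitness maximises the shortest production time across lots, so no lot is left with an unreasonably short slot.

```python
import random

def evolve_start_times(due, horizon_start=0.0, pop=40, gens=150,
                       mut=0.2, seed=1):
    """Search lot boundary times t[0..n-1] with t[i] <= due[i]; lot i
    occupies [t[i-1], t[i]] (t[-1] taken as horizon_start).  Fitness is
    the shortest production time over all lots, to be maximised."""
    rng = random.Random(seed)
    n = len(due)

    def random_ind():                 # feasible increasing boundary times
        t, lo = [], horizon_start
        for d in due:
            lo = rng.uniform(lo, d)
            t.append(lo)
        return t

    def fitness(t):
        bounds = [horizon_start] + t
        durations = [b - a for a, b in zip(bounds, bounds[1:])]
        if any(ti > d for ti, d in zip(t, due)) or min(durations) <= 0:
            return -1.0               # infeasible individual
        return min(durations)

    population = [random_ind() for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop // 2]          # elitist selection
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n) if n > 1 else 0   # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < mut:                      # jitter one gene
                i = rng.randrange(n)
                child[i] = rng.uniform(horizon_start, due[i])
            children.append(sorted(child))              # keep times ordered
        population = parents + children
    return max(population, key=fitness)
```

Because the parents survive each generation, the best feasible individual from the initial population is never lost, so the result is always executable as a lot-bucket schedule.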

  18. The benefits of a product-independent lexical database with formal word features

    NARCIS (Netherlands)

    Froon, Johanna; Froon, Janneke; de Jong, Franciska M.G.

    Dictionaries can be used as a basis for lexicon development for NLP applications. However, it often takes a lot of pre-processing before they are usable. In the last 5 years a product-independent database of formal word features has been developed on the basis of the Van Dale dictionaries for Dutch.

  19. Integrated remote sensing and visualization (IRSV) system for transportation infrastructure operations and management, phase two, volume 4 : web-based bridge information database--visualization analytics and distributed sensing.

    Science.gov (United States)

    2012-03-01

    This report introduces the design and implementation of a Web-based bridge information visual analytics system. This : project integrates Internet, multiple databases, remote sensing, and other visualization technologies. The result : combines a GIS ...

  20. Hazard Analysis Database Report

    CERN Document Server

    Grams, W H

    2000-01-01

    The Hazard Analysis Database was developed in conjunction with the hazard analysis activities conducted in accordance with DOE-STD-3009-94, Preparation Guide for U.S. Department of Energy Nonreactor Nuclear Facility Safety Analysis Reports, for HNF-SD-WM-SAR-067, Tank Farms Final Safety Analysis Report (FSAR). The FSAR is part of the approved Authorization Basis (AB) for the River Protection Project (RPP). This document describes, identifies, and defines the contents and structure of the Tank Farms FSAR Hazard Analysis Database and documents the configuration control changes made to the database. The Hazard Analysis Database contains the collection of information generated during the initial hazard evaluations and the subsequent hazard and accident analysis activities. The Hazard Analysis Database supports the preparation of Chapters 3, 4, and 5 of the Tank Farms FSAR and the Unreviewed Safety Question (USQ) process and consists of two major, interrelated data sets: (1) Hazard Analysis Database: Data from t...

  1. Database Optimizing Services

    Directory of Open Access Journals (Sweden)

    Adrian GHENCEA

    2010-12-01

    Full Text Available Almost every organization has a database at its centre. The database provides support for conducting different activities, whether production, sales and marketing, or internal operations. Every day, a database is accessed for help in strategic decisions. Satisfying such needs therefore requires high-quality security and availability, which can be realised using a DBMS (Database Management System), which is, in fact, the software behind a database. Technically speaking, it is software which uses a standard method of cataloguing, recovering, and running different data queries. A DBMS manages the input data, organizes it, and provides ways for its users or other programs to modify or extract the data. Managing a database is an operation that requires periodic updates, optimization and monitoring.

  2. National Database of Geriatrics

    DEFF Research Database (Denmark)

    Kannegaard, Pia Nimann; Vinding, Kirsten L; Hare-Bruun, Helle

    2016-01-01

    AIM OF DATABASE: The aim of the National Database of Geriatrics is to monitor the quality of interdisciplinary diagnostics and treatment of patients admitted to a geriatric hospital unit. STUDY POPULATION: The database population consists of patients who were admitted to a geriatric hospital unit. Geriatric patients cannot be defined by specific diagnoses; a geriatric patient is typically a frail multimorbid elderly patient with decreasing functional ability and social challenges. The database includes 14-15,000 admissions per year, and the database completeness has been stable at 90% during the past …, percentage of discharges with a rehabilitation plan, and the share of cases where an interdisciplinary conference has taken place. Data are recorded by doctors, nurses, and therapists in a database and linked to the Danish National Patient Register. DESCRIPTIVE DATA: Descriptive patient-related data include …

  3. Specialist Bibliographic Databases

    OpenAIRE

    Gasparyan, Armen Yuri; Yessirkepov, Marlen; Voronov, Alexander A.; Trukhachev, Vladimir I.; Kostyukova, Elena I.; Gerasimov, Alexey N.; Kitas, George D.

    2016-01-01

    Specialist bibliographic databases offer essential online tools for researchers and authors who work on specific subjects and perform comprehensive and systematic syntheses of evidence. This article presents examples of the established specialist databases, which may be of interest to those engaged in multidisciplinary science communication. Access to most specialist databases is through subscription schemes and membership in professional associations. Several aggregators of information and d...

  4. Supply Chain Initiatives Database

    Energy Technology Data Exchange (ETDEWEB)

    None

    2012-11-01

    The Supply Chain Initiatives Database (SCID) presents innovative approaches to engaging industrial suppliers in efforts to save energy, increase productivity and improve environmental performance. This comprehensive and freely-accessible database was developed by the Institute for Industrial Productivity (IIP). IIP acknowledges Ecofys for their valuable contributions. The database contains case studies searchable according to the types of activities buyers are undertaking to motivate suppliers, target sector, organization leading the initiative, and program or partnership linkages.

  5. When Location-Based Services Meet Databases

    Directory of Open Access Journals (Sweden)

    Dik Lun Lee

    2005-01-01

    Full Text Available As location-based services (LBSs) grow to support a larger and larger user community and to provide more and more intelligent services, they must face a few fundamental challenges, including the ability not only to accept coordinates as location data but also to manipulate high-level semantics of the physical environment. They must also handle a large volume of location updates and client requests and be able to scale up as their coverage increases. This paper describes some of our research in location modeling and updates, and techniques for enhancing system performance by caching and batch processing. It can be observed that the challenges facing LBSs share a lot of similarity with traditional database research (i.e., data modeling, indexing, caching, and query optimization), but the fact that LBSs are built into the physical space, and the opportunity to exploit spatial locality in system design, shed new light on LBS research.

  6. Database Description - SAHG | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database name: SAHG. Contact address: Chie Motono, Tel: +81-3-3599-8067. Database classification: Structure Databases - Protein properties. Organism: Homo sapiens (Taxonomy ID: 9606). Database maintenance site: The Molecular Profiling Research Center for D… User registration: Not available.

  7. Linking the Taiwan Fish Database to the Global Database

    Directory of Open Access Journals (Sweden)

    Kwang-Tsao Shao

    2007-03-01

    Full Text Available Under the support of the National Digital Archive Program (NDAP), basic species information about most Taiwanese fishes, including their morphology, ecology, distribution, specimens with photos, and literature, has been compiled into the "Fish Database of Taiwan" (http://fishdb.sinica.edu.tw). We expect that the all-Taiwan fish species databank (RSD), with 2,800+ species, and the digital "Fish Fauna of Taiwan" will be completed in 2007. Underwater ecological photos and video images for all 2,800+ fishes are quite difficult to achieve but will be collected continuously in the future. In the last year of NDAP, we successfully integrated all fish specimen data deposited at 7 different institutes in Taiwan, as well as their collection maps, on Google Map and Google Earth. Further, the database also provides the pronunciation of Latin scientific names and transliteration of Chinese common names, following the Romanization system, for all Taiwanese fishes (2,902 species in 292 families so far). The Taiwanese fish species checklist with Chinese common/vernacular names and specimen data has been updated periodically and provided to the global FishBase as well as the Global Biodiversity Information Facility (GBIF) through the national portal of the Taiwan Biodiversity Information Facility (TaiBIF). Thus, Taiwanese fish data can be queried and browsed on the WWW. Contributing to the "Barcode of Life" and "All Fishes" international projects, alcohol-preserved specimens of more than 1,800 species and cryobanked tissues of 800 species have been accumulated at RCBAS in the past two years. Through this close collaboration between local and global databases, "The Fish Database of Taiwan" now attracts more than 250,000 visitors and achieves 5 million hits per month. We believe that this local database is becoming an important resource for education, research, conservation, and sustainable use of fish in Taiwan.

  8. Intermodal Passenger Connectivity Database -

    Data.gov (United States)

    Department of Transportation — The Intermodal Passenger Connectivity Database (IPCD) is a nationwide data table of passenger transportation terminals, with data on the availability of connections...

  9. Transporter Classification Database (TCDB)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Transporter Classification Database details a comprehensive classification system for membrane transport proteins known as the Transporter Classification (TC)...

  10. Residency Allocation Database

    Data.gov (United States)

    Department of Veterans Affairs — The Residency Allocation Database is used to determine allocation of funds for residency programs offered by Veterans Affairs Medical Centers (VAMCs). Information...

  11. Smart Location Database - Service

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Smart Location Database (SLD) summarizes over 80 demographic, built environment, transit service, and destination accessibility attributes for every census block...

  12. Database principles programming performance

    CERN Document Server

    O'Neil, Patrick

    2014-01-01

    Database: Principles Programming Performance provides an introduction to the fundamental principles of database systems. This book focuses on database programming and the relationships between principles, programming, and performance.Organized into 10 chapters, this book begins with an overview of database design principles and presents a comprehensive introduction to the concepts used by a DBA. This text then provides grounding in many abstract concepts of the relational model. Other chapters introduce SQL, describing its capabilities and covering the statements and functions of the programmi

  13. Veterans Administration Databases

    Science.gov (United States)

    The Veterans Administration Information Resource Center provides database and informatics experts, customer service, expert advice, information products, and web technology to VA researchers and others.

  14. IVR EFP Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This database contains trip-level reports submitted by vessels participating in Exempted Fishery projects with IVR reporting requirements.

  15. Towards Sensor Database Systems

    DEFF Research Database (Denmark)

    Bonnet, Philippe; Gehrke, Johannes; Seshadri, Praveen

    2001-01-01

    These systems lack flexibility because data is extracted in a predefined way; also, they do not scale to a large number of devices because large volumes of raw data are transferred regardless of the queries that are submitted. In our new concept of sensor database system, queries dictate which data is extracted from the sensors. In this paper, we define the concept of sensor databases mixing stored data represented as relations and sensor data represented as time series. Each long-running query formulated over a sensor database defines a persistent view, which is maintained during a given time interval. We also describe the design and implementation of the COUGAR sensor database system.

  16. Database Publication Practices

    DEFF Research Database (Denmark)

    Bernstein, P.A.; DeWitt, D.; Heuer, A.

    2005-01-01

    There has been a growing interest in improving the publication processes for database research papers. This panel reports on recent changes in those processes and presents an initial cut at historical data for the VLDB Journal and ACM Transactions on Database Systems.

  17. Smart Location Database - Download

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Smart Location Database (SLD) summarizes over 80 demographic, built environment, transit service, and destination accessibility attributes for every census block...

  18. Database Description - RMG | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database name: RMG. Contact address: …raki 305-8602, Japan, National Institute of Agrobiological Sciences. Database classification: Nucleotide Sequence Databases. Organism: Oryza sativa Japonica Group (Taxonomy ID: 39947). Reference journal: Mol Genet Genomics (2002) 268: 434–445. User registration: Not available.

  19. Database Description - KOME | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database name: KOME. Contact: Plant Genome Research Unit, Shoshi Kikuchi. Database classification: Plant databases - Rice. Organism: Oryza sativa (Taxonomy ID: 4530). Database description: Information about approximately … Reference journal: PLoS One. 2007 Nov 28; 2(11):e1235 (… Hayashizaki Y, Kikuchi S.). Related databases: Rice mutant panel database (Tos17); A Database of Plant Cis-acting Regulatory …

  20. Update History of This Database - Arabidopsis Phenome Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Arabidopsis Phenome Database: Update History of This Database. 2017/02/27: Arabidopsis Phenome Database English archive site is opened. Arabidopsis Phenome Database (http://jphenome.info/?page_id=95) is opened.

  1. Update History of This Database - SKIP Stemcell Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available SKIP Stemcell Database: Update History of This Database. 2017/03/13: SKIP Stemcell Database English archive site is opened. 2013/03/29: SKIP Stemcell Database (https://www.skip.med.keio.ac.jp/SKIPSearch/top?lang=en) is opened.

  2. Update History of This Database - Yeast Interacting Proteins Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Yeast Interacting Proteins Database: Update History of This Database. 2010/03/29: Yeast Interacting Proteins Database English archive site is opened. 2000/12/4: Yeast Interacting Proteins Database (http://itolab.cb.k.u-tokyo.ac.jp/Y2H/) is released.

  3. Download - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Trypanosomes Database: Download. First of all, please read the license of this database. Data … (1.4 KB). Simple search and download. Download via FTP: the FTP server is sometimes jammed; if it is, access [here].

  4. Database design and database administration for a kindergarten

    OpenAIRE

    Vítek, Daniel

    2009-01-01

    The bachelor thesis deals with the creation of a database design for a standard kindergarten, installation of the designed database into the database system Oracle Database 10g Express Edition, and demonstration of administration tasks in this database system. The database was verified by means of a purpose-built access application.

  5. HIV Structural Database

    Science.gov (United States)

    SRD 102 HIV Structural Database (Web, free access)   The HIV Protease Structural Database is an archive of experimentally determined 3-D structures of Human Immunodeficiency Virus 1 (HIV-1), Human Immunodeficiency Virus 2 (HIV-2) and Simian Immunodeficiency Virus (SIV) Proteases and their complexes with inhibitors or products of substrate cleavage.

  6. Balkan Vegetation Database

    NARCIS (Netherlands)

    Vassilev, Kiril; Pedashenko, Hristo; Alexandrova, Alexandra; Tashev, Alexandar; Ganeva, Anna; Gavrilova, Anna; Gradevska, Asya; Assenov, Assen; Vitkova, Antonina; Grigorov, Borislav; Gussev, Chavdar; Filipova, Eva; Aneva, Ina; Knollová, Ilona; Nikolov, Ivaylo; Georgiev, Georgi; Gogushev, Georgi; Tinchev, Georgi; Pachedjieva, Kalina; Koev, Koycho; Lyubenova, Mariyana; Dimitrov, Marius; Apostolova-Stoyanova, Nadezhda; Velev, Nikolay; Zhelev, Petar; Glogov, Plamen; Natcheva, Rayna; Tzonev, Rossen; Boch, Steffen; Hennekens, Stephan M.; Georgiev, Stoyan; Stoyanov, Stoyan; Karakiev, Todor; Kalníková, Veronika; Shivarov, Veselin; Russakova, Veska; Vulchev, Vladimir

    2016-01-01

    The Balkan Vegetation Database (BVD; GIVD ID: EU-00-019; http://www.givd.info/ID/EU-00- 019) is a regional database that consists of phytosociological relevés from different vegetation types from six countries on the Balkan Peninsula (Albania, Bosnia and Herzegovina, Bulgaria, Kosovo, Montenegro

  7. World Database of Happiness

    NARCIS (Netherlands)

    R. Veenhoven (Ruut)

    1995-01-01

    ABSTRACT: The World Database of Happiness is an ongoing register of research on subjective appreciation of life. Its purpose is to make the wealth of scattered findings accessible, and to create a basis for further meta-analytic studies. The database involves four sections: …

  8. Dictionary as Database.

    Science.gov (United States)

    Painter, Derrick

    1996-01-01

    Discussion of dictionaries as databases focuses on the digitizing of The Oxford English dictionary (OED) and the use of Standard Generalized Mark-Up Language (SGML). Topics include the creation of a consortium to digitize the OED, document structure, relational databases, text forms, sequence, and discourse. (LRW)

  9. Fire test database

    International Nuclear Information System (INIS)

    Lee, J.A.

    1989-01-01

    This paper describes a project recently completed for EPRI by Impell. The purpose of the project was to develop a reference database of fire tests performed on non-typical fire-rated assemblies. The database is designed to let utility fire protection engineers locate test reports for power plant fire-rated assemblies. As utilities prepare to respond to Information Notice 88-04, the database will identify utilities, vendors or manufacturers who have specific fire test data. The database contains fire test report summaries for 729 tested configurations. For each summary, a contact is identified from whom a copy of the complete fire test report can be obtained. Five types of configurations are included: doors, dampers, seals, wraps and walls. The database is computerized, with one version for IBM PCs and one for the Mac; each is accessed through user-friendly software which allows adding, deleting, browsing, etc. There are five major database files, one for each of the five types of tested configurations; the contents of each provide significant information regarding the test method and the physical attributes of the tested configuration. 3 figs

  10. Children's Culture Database (CCD)

    DEFF Research Database (Denmark)

    Wanting, Birgit

    a Dialogue inspired database with documentation, network (individual and institutional profiles) and current news, paper presented at the research seminar: Electronic access to fiction, Copenhagen, November 11-13, 1996

  11. Atomic Spectra Database (ASD)

    Science.gov (United States)

    SRD 78 NIST Atomic Spectra Database (ASD) (Web, free access)   This database provides access and search capability for NIST critically evaluated data on atomic energy levels, wavelengths, and transition probabilities that are reasonably up-to-date. The NIST Atomic Spectroscopy Data Center has carried out these critical compilations.

  12. Consumer Product Category Database

    Science.gov (United States)

    The Chemical and Product Categories database (CPCat) catalogs the use of over 40,000 chemicals and their presence in different consumer products. The chemical use information is compiled from multiple sources while product information is gathered from publicly available Material Safety Data Sheets (MSDS). EPA researchers are evaluating the possibility of expanding the database with additional product and use information.

  13. Database in Artificial Intelligence.

    Science.gov (United States)

    Wilkinson, Julia

    1986-01-01

    Describes a specialist bibliographic database of literature in the field of artificial intelligence created by the Turing Institute (Glasgow, Scotland) using the BRS/Search information retrieval software. The subscription method for end-users--i.e., annual fee entitles user to unlimited access to database, document provision, and printed awareness…

  14. NoSQL database scaling

    OpenAIRE

    Žardin, Norbert

    2017-01-01

    NoSQL database scaling is a decision where system resources or financial expenses are traded for database performance or other benefits. By scaling a database, database performance and resource usage might increase or decrease, and such changes might have a negative impact on an application that uses the database. In this work it is analyzed how database scaling affects database resource usage and performance. As a result, calculations are acquired, using which database scaling types and differe...

  15. The effect of clustering on lot quality assurance sampling: a probabilistic model to calculate sample sizes for quality assessments.

    Science.gov (United States)

    Hedt-Gauthier, Bethany L; Mitsunaga, Tisha; Hund, Lauren; Olives, Casey; Pagano, Marcello

    2013-10-26

    Traditional Lot Quality Assurance Sampling (LQAS) designs assume observations are collected using simple random sampling. Alternatively, randomly sampling clusters of observations and then individuals within clusters reduces costs but decreases the precision of the classifications. In this paper, we develop a general framework for designing the cluster (C)-LQAS system and illustrate the method with the design of data quality assessments for the community health worker program in Rwanda. To determine sample size and decision rules for C-LQAS, we use the beta-binomial distribution to account for inflated risk of errors introduced by sampling clusters at the first stage. We present general theory and code for sample size calculations. The C-LQAS sample sizes provided in this paper constrain misclassification risks below user-specified limits. Multiple C-LQAS systems meet the specified risk requirements, but numerous considerations, including per-cluster versus per-individual sampling costs, help identify optimal systems for distinct applications. We show the utility of C-LQAS for data quality assessments, but the method generalizes to numerous applications. This paper provides the necessary technical detail and supplemental code to support the design of C-LQAS for specific programs.
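The beta-binomial construction described in the abstract can be sketched as follows. Assuming the clustering is folded into a Beta(a, b) mixing distribution with mean p and dispersion set by the intra-cluster correlation (a common moment-matching choice, not necessarily the authors' exact parameterisation; all function names here are hypothetical, and `icc` must be strictly positive), the two misclassification risks of an "accept if at least d successes out of n" rule can be computed and the smallest conforming design found by search.

```python
from math import lgamma, exp

def log_beta(a, b):
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def betabinom_pmf(k, n, a, b):
    """Beta-binomial pmf: binomial sampling with Beta(a, b)-mixed p."""
    log_choose = lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
    return exp(log_choose + log_beta(k + a, n - k + b) - log_beta(a, b))

def classification_risks(n, d, p_hi, p_lo, icc):
    """Risks of the rule 'accept if successes >= d' when the n observations
    are clustered, modelled by a Beta(a, b) with mean p and extra-binomial
    variance governed by `icc` (must be > 0)."""
    def accept_prob(p):
        a = p * (1 - icc) / icc
        b = (1 - p) * (1 - icc) / icc
        return sum(betabinom_pmf(k, n, a, b) for k in range(d, n + 1))
    alpha = 1 - accept_prob(p_hi)   # adequate area classified as failing
    beta = accept_prob(p_lo)        # inadequate area classified as passing
    return alpha, beta

def smallest_design(p_hi, p_lo, icc, max_risk=0.10, n_max=300):
    """Smallest n (with some decision rule d) meeting both risk limits."""
    for n in range(1, n_max + 1):
        for d in range(n + 1):
            alpha, beta = classification_risks(n, d, p_hi, p_lo, icc)
            if alpha <= max_risk and beta <= max_risk:
                return n, d
    return None
```

As `icc` approaches zero the beta-binomial collapses to the binomial and the design reduces to classical LQAS; larger `icc` inflates the variance and therefore the required sample size.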

  16. The LHCb configuration database

    CERN Document Server

    Abadie, L; Van Herwijnen, Eric; Jacobsson, R; Jost, B; Neufeld, N

    2005-01-01

    The aim of the LHCb configuration database is to store information about all the controllable devices of the detector. The experiment's control system (which uses PVSS) will configure, start up and monitor the detector from the information in the configuration database. The database will contain devices with their properties, connectivity and hierarchy. The ability to store and rapidly retrieve huge amounts of data, and the navigability between devices are important requirements. We have collected use cases to ensure the completeness of the design. Using the entity relationship modelling technique we describe the use cases as classes with attributes and links. We designed the schema for the tables using relational diagrams. This methodology has been applied to the TFC (switches) and DAQ system. Other parts of the detector will follow later. The database has been implemented using Oracle to benefit from central CERN database support. The project also foresees the creation of tools to populate, maintain, and co...

  17. Mycobacteriophage genome database.

    Science.gov (United States)

    Joseph, Jerrine; Rajendran, Vasanthi; Hassan, Sameer; Kumar, Vanaja

    2011-01-01

    Mycobacteriophage genome database (MGDB) is an exclusive repository of the 64 completely sequenced mycobacteriophages with annotated information. It is a comprehensive compilation of the various gene parameters captured from several databases pooled together to empower mycobacteriophage researchers. The MGDB (Version No. 1.0) comprises 6086 genes from 64 mycobacteriophages classified into 72 families based on the ACLAME database. Manual curation was aided by information available from public databases, which was enriched further by analysis. Its web interface allows browsing as well as querying the classification. The main objective is to collect and organize the complexity inherent to mycobacteriophage protein classification in a rational way. Another objective is to browse the existing and new genomes and describe their functional annotation. The database is available for free at http://mpgdb.ibioinformatics.org/mpgdb.php.

  18. Materials data through a bibliographic database INIS

    International Nuclear Information System (INIS)

    Yamamoto, Akira; Itabashi, Keizo; Nakajima, Hidemitsu

    1992-01-01

    INIS (International Nuclear Information System) is a bibliographic database produced in collaboration between the IAEA and its member countries, holding 1,500,000 records as of 1991. Although a bibliographic database does not itself provide numerical data, specific materials information can be obtained through retrieval specifying materials, properties, conditions, measuring methods, etc. Also, 'data flagging' facilitates searching for records containing data. INIS also has a clearing-house function that provides original documents of limited distribution. Hard copies of technical reports and other non-conventional literature are available. An efficient use of the INIS database for materials data is presented using an on-line terminal. (author)

  19. Development, deployment and operations of ATLAS databases

    International Nuclear Information System (INIS)

    Vaniachine, A. V.; von der Schmitt, J. G.

    2008-01-01

    In preparation for ATLAS data taking, a coordinated shift from development towards operations has occurred in ATLAS database activities. In addition to development and commissioning activities in databases, ATLAS is active in the development and deployment (in collaboration with the WLCG 3D project) of the tools that allow the worldwide distribution and installation of databases and related datasets, as well as the actual operation of this system on ATLAS multi-grid infrastructure. We describe development and commissioning of major ATLAS database applications for online and offline. We present the first scalability test results and ramp-up schedule over the initial LHC years of operations towards the nominal year of ATLAS running, when the database storage volumes are expected to reach 6.1 TB for the Tag DB and 1.0 TB for the Conditions DB. ATLAS database applications require robust operational infrastructure for data replication between online and offline at Tier-0, and for the distribution of the offline data to Tier-1 and Tier-2 computing centers. We describe ATLAS experience with Oracle Streams and other technologies for coordinated replication of databases in the framework of the WLCG 3D services.

  20. Negotiation-based Order Lot-Sizing Approach for Two-tier Supply Chain

    Science.gov (United States)

    Chao, Yuan; Lin, Hao Wen; Chen, Xili; Murata, Tomohiro

    This paper focuses on a negotiation-based collaborative planning process for the determination of order lot-size over multi-period planning, confined to a two-tier supply chain scenario. The aim is to study how negotiation-based planning processes can be used to refine locally preferred ordering patterns, which consequently affect the overall performance of the supply chain in terms of costs and service level. Minimal information exchanges in the form of mathematical models are suggested to represent the local preferences and to support the negotiation processes.

  1. Where is my car? Examining wayfinding behavior in a parking lot

    Directory of Open Access Journals (Sweden)

    Rodrigo Mora

    2014-08-01

    Full Text Available This article examines wayfinding behavior in an extended parking lot belonging to one of the largest shopping malls in Santiago, Chile. About 500 people were followed while going to the mall and returning from it, and their trajectories were mapped and analyzed. The results indicate that inbound paths were, on average, 10% shorter than outbound paths, and that people stopped three times more frequently when leaving the mall than when entering it. It is argued that these results are in line with previous research on the subject, which stresses the importance of environmental information in shaping people's behavior.

  2. Exponential Smoothing for Multi-Product Lot-Sizing With Heijunka and Varying Demand

    Directory of Open Access Journals (Sweden)

    Grimaud Frédéric

    2014-06-01

    Full Text Available Here we discuss a multi-product lot-sizing problem for a job shop controlled with a heijunka box. Demand is considered as a random variable with constant variation, which must be absorbed somehow by the manufacturing system, either by increased inventory or by flexibility in production. When a heijunka concept (production leveling) is used, fluctuations in customer orders are not transferred directly to the manufacturing system, allowing for smoother production and better utilization of production capacity. The problem, rather, is to determine a tradeoff between the variability in the production line capacity requirement and the inventory level.
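
    The leveling idea can be illustrated with simple exponential smoothing. This is a generic sketch under invented demand numbers, not the authors' model: the smoothed series stands in for the heijunka production plan, and inventory absorbs the gap between demand and production.

    ```python
    def exp_smooth(demand, alpha, s0=None):
        """Simple exponential smoothing, s_t = alpha*d_t + (1 - alpha)*s_{t-1}.
        The smoothed series serves as the leveled (heijunka) production plan;
        inventory absorbs the gap between actual demand and production."""
        s = demand[0] if s0 is None else s0
        plan = []
        for d in demand:
            s = alpha * d + (1 - alpha) * s
            plan.append(s)
        return plan

    def spread(xs):
        # crude variability measure: range of the series
        return max(xs) - min(xs)

    # Hypothetical demand fluctuating around 100 units per period
    demand = [100, 120, 80, 110, 90, 130, 70, 100]
    plan = exp_smooth(demand, alpha=0.3)
    # The plan varies far less than raw demand: spread(demand) = 60,
    # spread(plan) is roughly 11
    ```

    A smaller alpha levels production harder (less capacity variability) at the price of more inventory; that knob is exactly the tradeoff the abstract describes.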

  3. The Network Configuration of an Object Relational Database Management System

    Science.gov (United States)

    Diaz, Philip; Harris, W. C.

    2000-01-01

    The networking and implementation of the Oracle Database Management System (ODBMS) requires developers to have knowledge of the UNIX operating system as well as all the features of the Oracle Server. The server is an object relational database management system (DBMS). By using distributed processing, processes are split up between the database server and client application programs. The DBMS handles all the responsibilities of the server. The workstations running the database application concentrate on the interpretation and display of data.

  4. Providing Availability, Performance, and Scalability By Using Cloud Database

    OpenAIRE

    Prof. Dr. Alaa Hussein Al-Hamami; RafalAdeeb Al-Khashab

    2014-01-01

    With the development of the internet, new technologies and concepts have attracted the attention of all internet users, especially in the field of information technology; one such concept is the cloud. Cloud computing includes different components, of which the cloud database has become an important one. A cloud database is a distributed database that delivers computing as a service, or in the form of a virtual machine image instead of a product, via the internet; its advantage is that the database can...

  5. Database Description - SKIP Stemcell Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available General information: Database name: SKIP Stemcell Database. Contact address: http://www.skip.med.keio.ac.jp/en/contact/. Database classification: Human Genes and Diseases; Stemcell. Organism: Homo sapiens (Taxonomy ID: 9606). Database maintenance site: Center for Medical Genetics, School of Medicine, Keio University. Web services: not available. Need for user registration: not available. About This Database

  6. Intelligent optimization to integrate a plug-in hybrid electric vehicle smart parking lot with renewable energy resources and enhance grid characteristics

    International Nuclear Information System (INIS)

    Fazelpour, Farivar; Vafaeipour, Majid; Rahbari, Omid; Rosen, Marc A.

    2014-01-01

    Highlights: • The proposed algorithms handled the design steps of an efficient parking lot of PHEVs. • Optimizations are performed with 1 h intervals to find optimum charging rates. • Multi-objective optimization is performed to find the optimum size and site of DG. • Optimal sizing of a PV–wind–diesel HRES is attained. • Charging rates are optimized intelligently during peak and off-peak times. - Abstract: Widespread application of plug-in hybrid electric vehicles (PHEVs) as an important part of smart grids requires drivers' and power grid constraints to be satisfied simultaneously. We address these two challenges with the presence of renewable energy and charging rate optimization in the current paper. First, optimal sizing and siting for installation of a distributed generation (DG) system are performed through the grid, considering power loss minimization and voltage enhancement. Due to its benefits, the obtained optimum site is considered as the optimum location for constructing a movie theater complex equipped with a PHEV parking lot. To satisfy the obtained size of DG, an on-grid hybrid renewable energy system (HRES) is chosen. In the next set of optimizations, optimal sizing of the HRES is performed to minimize the energy cost and to find the best values of the decision variables, which are the numbers of the system’s components. Eventually, considering demand uncertainties due to the unpredictability of the arrival and departure times of the vehicles, time-dependent charging rate optimizations of the PHEVs are performed in 1 h intervals for the 24 h of a day. All optimization problems are solved using genetic algorithms (GAs). The outcome of the proposed optimization sets can be considered as design steps of an efficient grid-friendly parking lot of PHEVs. The results indicate a reduction in real power losses and improvement in the voltage profile through the distribution line. They also show the competence of the utilized energy delivery method in
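
    The genetic-algorithm layer of such a study can be sketched generically. Everything below is invented for illustration: the cost model, component prices, capacity ratings, and the 500 kW target bear no relation to the paper's data; the sketch only shows the GA mechanics (tournament selection, one-point crossover, mutation) applied to an integer sizing problem.

    ```python
    import random

    def ga_minimize(cost, bounds, pop_size=30, gens=60, seed=1):
        """Minimal integer genetic algorithm: two-way tournament selection,
        one-point crossover, random-reset mutation. Illustrative only."""
        rng = random.Random(seed)
        pop = [[rng.randint(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
        for _ in range(gens):
            nxt = []
            for _ in range(pop_size):
                p1 = min(rng.sample(pop, 2), key=cost)   # tournament of two
                p2 = min(rng.sample(pop, 2), key=cost)
                cut = rng.randrange(1, len(bounds))      # one-point crossover
                child = p1[:cut] + p2[cut:]
                if rng.random() < 0.2:                   # occasional mutation
                    i = rng.randrange(len(bounds))
                    child[i] = rng.randint(*bounds[i])
                nxt.append(child)
            pop = nxt
        return min(pop, key=cost)

    # Invented cost model: capital cost of PV panels / wind turbines / diesel
    # units plus a penalty for missing a 500 kW capacity target.
    def cost(x):
        pv, wind, diesel = x
        capacity = 2 * pv + 50 * wind + 100 * diesel      # kW, made-up ratings
        capital = 800 * pv + 30_000 * wind + 20_000 * diesel
        return capital + 1_000 * abs(500 - capacity)

    best = ga_minimize(cost, bounds=[(0, 200), (0, 10), (0, 5)])
    ```

    Real studies like the one above layer several such optimizations (siting, sizing, then hourly charging rates), each with its own objective and constraints.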

  7. Integration of Biodiversity Databases in Taiwan and Linkage to Global Databases

    Directory of Open Access Journals (Sweden)

    Kwang-Tsao Shao

    2007-03-01

    Full Text Available The biodiversity databases in Taiwan were dispersed across various institutions and colleges, with limited amounts of data, by 2001. The Natural Resources and Ecology GIS Database sponsored by the Council of Agriculture, which is part of the National Geographic Information System planned by the Ministry of Interior, was the best established biodiversity database in Taiwan. This database, however, mainly collected distribution data of terrestrial animals and plants within the Taiwan area. In 2001, GBIF was formed, and Taiwan joined as an Associate Participant, starting the establishment and integration of animal and plant species databases so that TaiBIF could co-operate with GBIF. The information of the Catalog of Life, specimens, and alien species was integrated using the Darwin Core standard. These metadata standards allowed the biodiversity information of Taiwan to connect with global databases.

  8. Hazard Analysis Database Report

    Energy Technology Data Exchange (ETDEWEB)

    GAULT, G.W.

    1999-10-13

    The Hazard Analysis Database was developed in conjunction with the hazard analysis activities conducted in accordance with DOE-STD-3009-94, Preparation Guide for US Department of Energy Nonreactor Nuclear Facility Safety Analysis Reports, for the Tank Waste Remediation System (TWRS) Final Safety Analysis Report (FSAR). The FSAR is part of the approved TWRS Authorization Basis (AB). This document describes, identifies, and defines the contents and structure of the TWRS FSAR Hazard Analysis Database and documents the configuration control changes made to the database. The TWRS Hazard Analysis Database contains the collection of information generated during the initial hazard evaluations and the subsequent hazard and accident analysis activities. The database supports the preparation of Chapters 3, 4, and 5 of the TWRS FSAR and the USQ process and consists of two major, interrelated data sets: (1) Hazard Evaluation Database--Data from the results of the hazard evaluations; and (2) Hazard Topography Database--Data from the system familiarization and hazard identification.

  9. Database for propagation models

    Science.gov (United States)

    Kantak, Anil V.

    1991-07-01

    A propagation researcher or a systems engineer who intends to use the results of a propagation experiment is generally faced with various database tasks such as the selection of the computer software, the hardware, and the writing of the programs to pass the data through the models of interest. This task is repeated every time a new experiment is conducted or the same experiment is carried out at a different location generating different data. Thus the users of this data have to spend a considerable portion of their time learning how to implement the computer hardware and the software towards the desired end. This situation may be eased considerably if an easily accessible propagation database is created that holds all the accepted (standardized) propagation phenomena models approved by the propagation research community. The handling of data will also become easier for the user. Such a database can only stimulate the growth of propagation research if it is available to all researchers, so that the results of an experiment conducted by one researcher can be examined independently by another, without different hardware and software being used. The database may be made flexible so that researchers need not be confined only to its contents. Another way in which the database may help researchers is that they will not have to document the software and hardware tools used in their research, since the propagation research community will already know the database. The following sections show a possible database construction, as well as properties of the database for propagation research.

  10. Directory of IAEA databases. 3. ed.

    International Nuclear Information System (INIS)

    1993-12-01

    This second edition of the Directory of IAEA Databases has been prepared within the Division of Scientific and Technical Information. Its main objective is to describe the computerized information sources available to staff members. The directory covers all databases produced at the IAEA, including databases stored on the mainframe, LANs and PCs. All IAEA Division Directors have been requested to register the existence of their databases with NESI. For the second edition, database owners were requested to review the existing entries for their databases and answer four additional questions. The additional questions concerned the type of database (e.g. bibliographic, text, statistical, etc.), the category of database (e.g. administrative, nuclear data, etc.), the available documentation and the type of media used for distribution. In the individual entries on the following pages the answers to the first two questions (type and category) are always listed, but the answers to the last two (documentation and media) are listed only when the information has been made available.

  11. Product Licenses Database Application

    CERN Document Server

    Tonkovikj, Petar

    2016-01-01

    The goal of this project is to organize and centralize the data about software tools available to CERN employees, as well as provide a system that simplifies the license management process by providing information about the available licenses and their expiry dates. The project development process consists of two steps: modeling the products (software tools), product licenses, legal agreements and other data related to these entities in a relational database, and developing the front-end user interface so that the user can interact with the database. The result is an ASP.NET MVC web application with interactive views for displaying and managing the data in the underlying database.

  12. LandIT Database

    DEFF Research Database (Denmark)

    Iftikhar, Nadeem; Pedersen, Torben Bach

    2010-01-01

    and reporting purposes. This paper presents the LandIT database, the result of the LandIT project, an industrial collaboration project that developed technologies for communication and data integration between farming devices and systems. The LandIT database in principle is based...... on the ISOBUS standard; however, the standard is extended with additional requirements, such as gradual data aggregation and flexible exchange of farming data. This paper describes the conceptual and logical schemas of the proposed database based on a real-life farming case study....

  13. JICST Factual Database(2)

    Science.gov (United States)

    Araki, Keisuke

    A computer program that builds atom-bond connection tables from nomenclature has been developed. Chemical substances are input with their nomenclature and a variety of trivial names or experimental code numbers. The chemical structures in the database are stored stereospecifically and can be searched and displayed according to stereochemistry. Source data come from the laws and regulations of Japan, the RTECS of the US, and so on. The database plays a central role within the integrated fact database service of JICST and makes interrelational retrieval possible.

  14. Lot quality assurance sampling techniques in health surveys in developing countries: advantages and current constraints.

    Science.gov (United States)

    Lanata, C F; Black, R E

    1991-01-01

    Traditional survey methods, which are generally costly and time-consuming, usually provide information at the regional or national level only. The utilization of lot quality assurance sampling (LQAS) methodology, developed in industry for quality control, makes it possible to use small sample sizes when conducting surveys in small geographical or population-based areas (lots). This article describes the practical use of LQAS for conducting health surveys to monitor health programmes in developing countries. Following a brief description of the method, the article explains how to build a sample frame and conduct the sampling to apply LQAS under field conditions. A detailed description of the procedure for selecting a sampling unit to monitor the health programme and a sample size is given. The sampling schemes utilizing LQAS applicable to health surveys, such as simple- and double-sampling schemes, are discussed. The interpretation of the survey results and the planning of subsequent rounds of LQAS surveys are also discussed. When describing the applicability of LQAS in health surveys in developing countries, the article considers current limitations for its use by health planners in charge of health programmes, and suggests ways to overcome these limitations through future research. It is hoped that with increasing attention being given to industrial sampling plans in general, and LQAS in particular, their utilization to monitor health programmes will provide health planners in developing countries with powerful techniques to help them achieve their health programme targets.
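
    The operating characteristics of an LQAS plan follow directly from the binomial distribution. A short sketch: the n = 19, d = 13 plan with 80%/50% coverage thresholds is a commonly cited example in the LQAS literature, not a figure taken from this paper.

    ```python
    from math import comb

    def accept_prob(n, d, p):
        """Probability that a lot is accepted by an LQAS plan that samples n
        individuals and requires at least d 'covered' responses, when the
        true coverage in the lot is p."""
        return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(d, n + 1))

    # A commonly cited plan: n = 19, decision rule d = 13, meant to separate
    # high-coverage (80%) from low-coverage (50%) lots.
    hi = accept_prob(19, 13, 0.80)   # high-coverage lots pass almost always
    lo = accept_prob(19, 13, 0.50)   # low-coverage lots rarely pass
    ```

    Both misclassification risks (rejecting a good lot, accepting a bad one) stay near or below 10% with only 19 observations per lot, which is exactly why LQAS is attractive for small-area health monitoring.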

  15. Why gerontology and geriatrics can teach us a lot about mentoring.

    Science.gov (United States)

    Clark, Phillip G

    2018-05-15

    Gerontology, geriatrics, and mentoring have a lot in common. The prototype of this role was Mentor, an older adult in Homer's The Odyssey, who was enlisted to look after Odysseus' son, Telemachus, while his father was away fighting the Trojan War. Portrayed as an older man, the name "mentor" literally means "a man who thinks," which is not a bad characterization generally for faculty members in gerontology! In particular, gerontological and geriatrics education can teach us a lot about the importance of mentoring and provide some critical insights into this role: (1) the importance of interprofessional leadership and modeling, (2) the application of the concept of "grand-generativity" to mentoring, (3) "it takes a community" to be effective in mentoring others, and (4) the need to tailor mentorship styles to the person and the situation. This discussion explores these topics and argues that gerontological and geriatrics educators have a particularly important role and responsibility in mentoring students, colleagues, and administrators related to the very future of our field.

  16. Demonstration Assessment of Light-Emitting Diode (LED) Parking Lot Lighting in Leavenworth, KS

    Energy Technology Data Exchange (ETDEWEB)

    Myer, Michael; Kinzey, Bruce R.; Curry, Ku'uipo

    2011-05-06

    This report describes the process and results of a demonstration of solid-state lighting (SSL) technology in a commercial parking lot lighting application, under the U.S. Department of Energy (DOE) Solid-State Lighting Technology GATEWAY Demonstration Program. The parking lot serves customers and employees of a Walmart Supercenter in Leavenworth, Kansas, and this installation represents the first use of the LED Parking Lot Performance Specification developed by DOE’s Commercial Building Energy Alliance. The application is a parking lot covering more than a half million square feet, lighted primarily by light-emitting diodes (LEDs). Metal halide wall packs were installed along the building facade. This site is new construction, so the installed baselines were hypothetical designs. It was acknowledged early on that deviating from Walmart’s typical design would reduce the illuminance on the site. Walmart primarily uses 1000W pulse-start metal halide (PMH) lamps. In order to provide a comparison between both the typical design and a design using conventional luminaires providing a lower illuminance, a 400W PMH design was also considered. As mentioned already, the illuminance would be reduced by shifting from the PMH system to the LED system. The Illuminating Engineering Society of North America (IES) provides recommended minimum illuminance values for parking lots. All designs exceeded the recommended illuminance values in IES RP-20, some by a wider margin than others. Energy savings from installing the LED system compared to the different PMH systems varied. Compared to the 1000W PMH system, the LED system would save 63 percent of the energy. However, this corresponds to a 68 percent reduction in illuminance as well. In comparison to the 400W PMH system, the LED system would save 44 percent of the energy and provide similar minimum illuminance values at the time of relamping. The LED system cost more than either of the PMH systems when comparing initial costs.

  17. Stability measures for rolling schedules with applications to capacity expansion planning, master production scheduling, and lot sizing

    OpenAIRE

    Kimms, Alf

    1996-01-01

    This contribution discusses the measurement of the (in-)stability of finite-horizon production planning when done on a rolling horizon basis. As examples we review strategic capacity expansion planning, tactical master production scheduling, and operational capacitated lot sizing.

  18. Comparison of Firefly algorithm and Artificial Immune System algorithm for lot streaming in m-machine flow shop scheduling

    Directory of Open Access Journals (Sweden)

    G. Vijay Chakaravarthy

    2012-11-01

    Full Text Available Lot streaming is a technique used to split the processing of lots into several sublots (transfer batches) to allow the overlapping of operations in a multistage manufacturing system, thereby shortening the production time (makespan). The objective of this paper is to minimize the makespan and total flow time of the n-job, m-machine lot streaming problem in a flow shop with equal and variable size sublots, and also to determine the optimal sublot size. In recent times researchers have been applying intelligent heuristics to solve flow shop problems with lot streaming. In this research, the Firefly Algorithm (FA) and Artificial Immune System (AIS) algorithms are used to solve the problem. The results obtained by the proposed algorithms are also compared with the performance of other established heuristics. The computational results show that the identified algorithms are more efficient and effective than the algorithms already tested for this problem.
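
    The benefit of lot streaming can be verified with a small makespan computation. This is a generic sketch assuming equal sublots in a permutation flow shop; the three-machine processing times are invented, and the metaheuristics in the paper search over sublot sizes on top of exactly this kind of evaluation.

    ```python
    def makespan(sublots):
        """Completion time of the last sublot on the last machine.
        sublots[j][i] is the processing time of sublot j on machine i; every
        sublot visits machines 0..m-1 in order, machines process sublots FIFO."""
        m = len(sublots[0])
        done = [0.0] * m        # completion time of the previous sublot per machine
        for times in sublots:
            t = 0.0
            for i in range(m):
                t = max(t, done[i]) + times[i]
                done[i] = t
        return done[-1]

    def equal_sublots(lot_times, s):
        # split one lot (per-machine times lot_times) into s equal sublots
        return [[p / s for p in lot_times] for _ in range(s)]

    # Invented example: one lot needing (6, 4, 8) hours on three machines.
    whole = makespan(equal_sublots([6, 4, 8], 1))   # 18.0, no overlap
    split = makespan(equal_sublots([6, 4, 8], 2))   # 13.0, operations overlap
    ```

    Splitting the lot lets machine 2 start as soon as the first sublot finishes on machine 1, which is the overlap the abstract refers to; more sublots shorten the makespan further at the cost of more transfers.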

  19. Compressing DNA sequence databases with coil

    Directory of Open Access Journals (Sweden)

    Hendy Michael D

    2008-05-01

    Full Text Available Abstract Background Publicly available DNA sequence databases such as GenBank are large, and are growing at an exponential rate. The sheer volume of data being dealt with presents serious storage and data communications problems. Currently, sequence data is usually kept in large "flat files," which are then compressed using standard Lempel-Ziv (gzip compression – an approach which rarely achieves good compression ratios. While much research has been done on compressing individual DNA sequences, surprisingly little has focused on the compression of entire databases of such sequences. In this study we introduce the sequence database compression software coil. Results We have designed and implemented a portable software package, coil, for compressing and decompressing DNA sequence databases based on the idea of edit-tree coding. coil is geared towards achieving high compression ratios at the expense of execution time and memory usage during compression – the compression time represents a "one-off investment" whose cost is quickly amortised if the resulting compressed file is transmitted many times. Decompression requires little memory and is extremely fast. We demonstrate a 5% improvement in compression ratio over state-of-the-art general-purpose compression tools for a large GenBank database file containing Expressed Sequence Tag (EST data. Finally, coil can efficiently encode incremental additions to a sequence database. Conclusion coil presents a compelling alternative to conventional compression of flat files for the storage and distribution of DNA sequence databases having a narrow distribution of sequence lengths, such as EST data. Increasing compression levels for databases having a wide distribution of sequence lengths is a direction for future work.
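
    The kind of ratio the abstract discusses can be reproduced in spirit with a general-purpose compressor. This sketch uses zlib (standing in for gzip), not coil, and synthetic sequences; it illustrates why byte-oriented Lempel-Ziv compression "rarely achieves good compression ratios" on non-repetitive DNA.

    ```python
    import random
    import zlib

    def compression_ratio(data: bytes, level: int = 9) -> float:
        # ratio > 1 means the compressed form is smaller than the input
        return len(data) / len(zlib.compress(data, level))

    rng = random.Random(0)
    random_dna = "".join(rng.choice("ACGT") for _ in range(100_000)).encode()
    repetitive = b"ACGT" * 25_000

    # A 4-letter alphabet needs only 2 bits/base (a 4x ratio at best), but a
    # generic byte-oriented compressor stays below that bound on random
    # sequence data, while repetitive data compresses dramatically better.
    r_random = compression_ratio(random_dna)
    r_repeat = compression_ratio(repetitive)
    ```

    Specialized encoders such as coil gain over this baseline by exploiting cross-sequence redundancy in the whole database rather than compressing each flat file independently.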

  20. Catalog of databases and reports

    International Nuclear Information System (INIS)

    Burtis, M.D.

    1997-04-01

    This catalog provides information about the many reports and materials made available by the US Department of Energy's (DOE's) Global Change Research Program (GCRP) and the Carbon Dioxide Information Analysis Center (CDIAC). The catalog is divided into nine sections plus the author and title indexes: Section A--US Department of Energy Global Change Research Program Research Plans and Summaries; Section B--US Department of Energy Global Change Research Program Technical Reports; Section C--US Department of Energy Atmospheric Radiation Measurement (ARM) Program Reports; Section D--Other US Department of Energy Reports; Section E--CDIAC Reports; Section F--CDIAC Numeric Data and Computer Model Distribution; Section G--Other Databases Distributed by CDIAC; Section H--US Department of Agriculture Reports on Response of Vegetation to Carbon Dioxide; and Section I--Other Publications

  1. Catalog of databases and reports

    Energy Technology Data Exchange (ETDEWEB)

    Burtis, M.D. [comp.]

    1997-04-01

    This catalog provides information about the many reports and materials made available by the US Department of Energy's (DOE's) Global Change Research Program (GCRP) and the Carbon Dioxide Information Analysis Center (CDIAC). The catalog is divided into nine sections plus the author and title indexes: Section A--US Department of Energy Global Change Research Program Research Plans and Summaries; Section B--US Department of Energy Global Change Research Program Technical Reports; Section C--US Department of Energy Atmospheric Radiation Measurement (ARM) Program Reports; Section D--Other US Department of Energy Reports; Section E--CDIAC Reports; Section F--CDIAC Numeric Data and Computer Model Distribution; Section G--Other Databases Distributed by CDIAC; Section H--US Department of Agriculture Reports on Response of Vegetation to Carbon Dioxide; and Section I--Other Publications.

  2. Database Access through Java Technologies

    Directory of Open Access Journals (Sweden)

    Nicolae MERCIOIU

    2010-09-01

    Full Text Available As a high-level development environment, the Java technologies offer support for the development of distributed applications, independent of the platform, providing a robust set of methods to access databases, used to create software components on the server side as well as on the client side. Analyzing the evolution of Java tools for data access, we notice that these tools evolved from simple methods that permitted queries, insertion, update and deletion of data to advanced implementations such as distributed transactions, cursors and batch files. The client-server architecture allows, through JDBC (Java Database Connectivity), the execution of SQL (Structured Query Language) instructions and the manipulation of the results in an independent and consistent manner. The JDBC API (Application Programming Interface) creates the level of abstraction needed to allow the call of SQL queries to any DBMS (Database Management System). The native JDBC driver and the ODBC (Open Database Connectivity)-JDBC bridge, together with the classes and interfaces of the JDBC API, will be described. The four steps needed to build a JDBC-driven application are presented briefly, emphasizing the way each step has to be accomplished and the expected results. In each step there are evaluations of the characteristics of the database systems and the way the JDBC programming interface adapts to each one. The data types provided by the SQL2 and SQL3 standards are analyzed by comparison with the Java data types, emphasizing the discrepancies between them and the SQL types, but also the methods that allow conversion between different types of data through the methods of the ResultSet object. Next, starting from the role of metadata and studying the Java programming interfaces that allow the query of result sets, we describe the advanced features of data mining with JDBC. As an alternative to result sets, the RowSets add new functionalities that
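
    The JDBC workflow (driver, connection, statement, result set) maps closely onto the database-access APIs of other languages. JDBC itself is a Java API; as a rough analogue, the same four steps can be sketched with Python's DB-API using sqlite3 as a stand-in DBMS (the table and data are invented).

    ```python
    import sqlite3

    # 1. connect (the driver is implicit in the sqlite3 module)
    conn = sqlite3.connect(":memory:")
    # 2. issue DDL/DML statements, with '?' placeholders for parameters
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("INSERT INTO users VALUES (?, ?)", (1, "Ada"))
    # 3. run a query; the cursor plays the role of JDBC's ResultSet
    cur = conn.execute("SELECT name FROM users WHERE id = ?", (1,))
    # 4. walk the result set and convert rows to host-language values
    rows = [name for (name,) in cur.fetchall()]
    conn.close()
    ```

    The type-mapping concerns the abstract raises (SQL types versus host-language types) show up here too: the driver decides how INTEGER and TEXT columns surface as int and str values.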

  3. Livestock Anaerobic Digester Database

    Science.gov (United States)

    The Anaerobic Digester Database provides basic information about anaerobic digesters on livestock farms in the United States, organized in Excel spreadsheets. It includes projects that are under construction, operating, or shut down.

  4. Toxicity Reference Database

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Toxicity Reference Database (ToxRefDB) contains approximately 30 years and $2 billion worth of animal studies. ToxRefDB allows scientists and the interested...

  5. Dissolution Methods Database

    Data.gov (United States)

    U.S. Department of Health & Human Services — For a drug product that does not have a dissolution test method in the United States Pharmacopeia (USP), the FDA Dissolution Methods Database provides information on...

  6. OTI Activity Database

    Data.gov (United States)

    US Agency for International Development — OTI's worldwide activity database is a simple and effective information system that serves as a program management, tracking, and reporting tool. In each country,...

  7. ARTI Refrigerant Database

    Energy Technology Data Exchange (ETDEWEB)

    Calm, J.M. [Calm (James M.), Great Falls, VA (United States)

    1994-05-27

    The Refrigerant Database consolidates and facilitates access to information to assist industry in developing equipment using alternative refrigerants. The underlying purpose is to accelerate the phase-out of chemical compounds of environmental concern.

  8. Marine Jurisdictions Database

    National Research Council Canada - National Science Library

    Goldsmith, Roger

    1998-01-01

    The purpose of this project was to take the data gathered for the Maritime Claims chart and create a Maritime Jurisdictions digital database suitable for use with oceanographic mission planning objectives...

  9. Medicaid CHIP ESPC Database

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Environmental Scanning and Program Characteristic (ESPC) Database is in a Microsoft (MS) Access format and contains Medicaid and CHIP data, for the 50 states and...

  10. Records Management Database

    Data.gov (United States)

    US Agency for International Development — The Records Management Database is a tool created in Microsoft Access specifically for USAID use. It contains metadata in order to access and retrieve the information...

  11. Reach Address Database (RAD)

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Reach Address Database (RAD) stores the reach address of each Water Program feature that has been linked to the underlying surface water features (streams,...

  12. Household Products Database: Pesticides

    Science.gov (United States)

  13. Mouse Phenome Database (MPD)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Mouse Phenome Database (MPD) has characterizations of hundreds of strains of laboratory mice to facilitate translational discoveries and to assist in selection...

  14. Consumer Product Category Database

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Chemical and Product Categories database (CPCat) catalogs the use of over 40,000 chemicals and their presence in different consumer products. The chemical use...

  15. Drycleaner Database - Region 7

    Data.gov (United States)

    U.S. Environmental Protection Agency — THIS DATA ASSET NO LONGER ACTIVE: This is metadata documentation for the Region 7 Drycleaner Database (R7DryClnDB) which tracks all Region7 drycleaners who notify...

  16. National Assessment Database

    Data.gov (United States)

    U.S. Environmental Protection Agency — The National Assessment Database stores and tracks state water quality assessment decisions, Total Maximum Daily Loads (TMDLs) and other watershed plans designed to...

  17. IVR RSA Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This database contains trip-level reports submitted by vessels participating in Research Set-Aside projects with IVR reporting requirements.

  18. Rat Genome Database (RGD)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Rat Genome Database (RGD) is a collaborative effort between leading research institutions involved in rat genetic and genomic research to collect, consolidate,...

  19. The CAPEC Database

    DEFF Research Database (Denmark)

    Nielsen, Thomas Lund; Abildskov, Jens; Harper, Peter Mathias

    2001-01-01

    The Computer-Aided Process Engineering Center (CAPEC) database of measured data was established with the aim to promote greater data exchange in the chemical engineering community. The target properties are pure component properties, mixture properties, and special drug solubility data. The database divides pure component properties into primary, secondary, and functional properties. Mixture properties are categorized in terms of the number of components in the mixture and the number of phases present. The compounds in the database have been classified on the basis of the functional groups in the compound. This classification makes the CAPEC database a very useful tool, for example, in the development of new property models, since properties of chemically similar compounds are easily obtained. A program with efficient search and retrieval functions of properties has been developed.

  20. Danish Urogynaecological Database

    DEFF Research Database (Denmark)

    Hansen, Ulla Darling; Gradel, Kim Oren; Larsen, Michael Due

    2016-01-01

    The Danish Urogynaecological Database is established in order to ensure high quality of treatment for patients undergoing urogynaecological surgery. The database contains details of all women in Denmark undergoing incontinence surgery or pelvic organ prolapse surgery, amounting to ~5,200 procedures..., complications if relevant, implants used if relevant, and 3-6-month postoperative recording of symptoms, if any. A set of clinical quality indicators is being maintained by the steering committee for the database and is published in an annual report which also contains extensive descriptive statistics. The database has a completeness of over 90% of all urogynaecological surgeries performed in Denmark. Some of the main variables have been validated using medical records as the gold standard; the positive predictive value was above 90%. The data are used as a quality monitoring tool by the hospitals and in a number...

  1. The Danish Urogynaecological Database

    DEFF Research Database (Denmark)

    Guldberg, Rikke; Brostrøm, Søren; Hansen, Jesper Kjær

    2013-01-01

    INTRODUCTION AND HYPOTHESIS: The Danish Urogynaecological Database (DugaBase) is a nationwide clinical database established in 2006 to monitor, ensure and improve the quality of urogynaecological surgery. We aimed to describe its establishment and completeness and to validate selected variables. This is the first study based on data from the DugaBase. METHODS: The database completeness was calculated as a comparison between urogynaecological procedures reported to the Danish National Patient Registry and to the DugaBase. Validity was assessed for selected variables from a random sample of 200 women in the DugaBase from 1 January 2009 to 31 October 2010, using medical records as a reference. RESULTS: A total of 16,509 urogynaecological procedures were registered in the DugaBase by 31 December 2010. The database completeness has increased by calendar time, from 38.2 % in 2007 to 93.2 % in 2010 for public...

  2. Danish Pancreatic Cancer Database

    DEFF Research Database (Denmark)

    Fristrup, Claus; Detlefsen, Sönke; Palnæs Hansen, Carsten

    2016-01-01

    AIM OF DATABASE: The Danish Pancreatic Cancer Database aims to prospectively register the epidemiology, diagnostic workup, diagnosis, treatment, and outcome of patients with pancreatic cancer in Denmark at an institutional and national level. STUDY POPULATION: Since May 1, 2011, all patients with microscopically verified ductal adenocarcinoma of the pancreas have been registered in the database. As of June 30, 2014, the total number of patients registered was 2,217. All data are cross-referenced with the Danish Pathology Registry and the Danish Patient Registry to ensure the completeness of registrations... Death is monitored using data from the Danish Civil Registry. This registry monitors the survival status of the Danish population, and the registration is virtually complete. All data in the database are audited by all participating institutions, with respect to baseline characteristics, key indicators...

  3. Food Habits Database (FHDBS)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The NEFSC Food Habits Database has two major sources of data. The first, and most extensive, is the standard NEFSC Bottom Trawl Surveys Program. During these...

  4. Functionally Graded Materials Database

    Science.gov (United States)

    Kisara, Katsuto; Konno, Tomomi; Niino, Masayuki

    2008-02-01

    Functionally Graded Materials Database (hereinafter referred to as FGMs Database) was opened to the public via the Internet in October 2002, and since then it has been managed by the Japan Aerospace Exploration Agency (JAXA). As of October 2006, the database includes 1,703 research information entries, along with data on 2,429 researchers, 509 institutions and so on. Reading materials such as "Applicability of FGMs Technology to Space Plane" and "FGMs Application to Space Solar Power System (SSPS)" were prepared in FY 2004 and 2005, respectively. The English version of "FGMs Application to Space Solar Power System (SSPS)" is now under preparation. This paper explains the FGMs Database, describing the research information data, the sitemap and how to use it. User access results and users' interests, derived from the access analysis, are discussed.

  5. Tethys Acoustic Metadata Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Tethys database houses the metadata associated with the acoustic data collection efforts by the Passive Acoustic Group. These metadata include dates, locations...

  6. NLCD 2011 database

    Data.gov (United States)

    U.S. Environmental Protection Agency — National Land Cover Database 2011 (NLCD 2011) is the most recent national land cover product created by the Multi-Resolution Land Characteristics (MRLC) Consortium....

  7. Medicare Coverage Database

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Medicare Coverage Database (MCD) contains all National Coverage Determinations (NCDs) and Local Coverage Determinations (LCDs), local articles, and proposed NCD...

  8. Household Products Database

    Data.gov (United States)

    U.S. Department of Health & Human Services — This database links over 4,000 consumer brands to health effects from Material Safety Data Sheets (MSDS) provided by the manufacturers and allows scientists and...

  9. Global Volcano Locations Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NGDC maintains a database of over 1,500 volcano locations obtained from the Smithsonian Institution Global Volcanism Program, Volcanoes of the World publication. The...

  10. 1988 Spitak Earthquake Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The 1988 Spitak Earthquake database is an extensive collection of geophysical and geological data, maps, charts, images and descriptive text pertaining to the...

  11. Uranium Location Database

    Data.gov (United States)

    U.S. Environmental Protection Agency — A GIS compiled locational database in Microsoft Access of ~15,000 mines with uranium occurrence or production, primarily in the western United States. The metadata...

  12. INIST: databases reorientation

    International Nuclear Information System (INIS)

    Bidet, J.C.

    1995-01-01

    INIST is a CNRS (Centre National de la Recherche Scientifique) laboratory devoted to the treatment of scientific and technical information and to the management of this information compiled in a database. A reorientation of the database content was proposed in 1994 to increase the transfer of research towards enterprises and services, to develop more automated access to the information, and to create a quality assurance plan. The catalogue of publications comprises 5800 periodical titles (1300 for fundamental research and 4500 for applied research). A multi-thematic science and technology database will be created in 1995 for the retrieval of applied and technical information. ''Grey literature'' (reports, theses, proceedings...) and human and social sciences data will be added to the base using information selected from the existing GRISELI and Francis databases. Strong modifications are also planned in the thematic coverage of Earth sciences and will considerably reduce the geological information content. (J.S.). 1 tab

  13. Fine Arts Database (FAD)

    Data.gov (United States)

    General Services Administration — The Fine Arts Database records information on federally owned art in the control of the GSA; this includes the location, current condition and information on artists.

  14. Kansas Cartographic Database (KCD)

    Data.gov (United States)

    Kansas Data Access and Support Center — The Kansas Cartographic Database (KCD) is an exact digital representation of selected features from the USGS 7.5 minute topographic map series. Features that are...

  15. Database Replication Prototype

    OpenAIRE

    Vandewall, R.

    2000-01-01

    This report describes the design of a Replication Framework that facilitates the implementation and comparison of database replication techniques. Furthermore, it discusses the implementation of a Database Replication Prototype and compares the performance measurements of two replication techniques based on the Atomic Broadcast communication primitive: pessimistic active replication and optimistic active replication. The main contributions of this report can be split into four parts....

  16. Database on Wind Characteristics

    DEFF Research Database (Denmark)

    Højstrup, J.; Ejsing Jørgensen, Hans; Lundtang Petersen, Erik

    1999-01-01

    This report describes the work and results of the project Database on Wind Characteristics, which was sponsored partly by the European Commission within the framework of the JOULE III programme under contract JOR3-CT95-0061.

  17. ORACLE DATABASE SECURITY

    OpenAIRE

    Cristina-Maria Titrade

    2011-01-01

    This paper presents some security issues, namely database-system-level security, data-level security, user-level security, user management, resource management and password management. Security is a constant concern in database design and development. Usually the question is not whether security should exist, but rather how extensive it should be. A typical DBMS has several levels of security, in addition to those offered by the operating system or the network. Typically, a DBMS has user a...

  18. Database computing in HEP

    International Nuclear Information System (INIS)

    Day, C.T.; Loken, S.; MacFarlane, J.F.; May, E.; Lifka, D.; Lusk, E.; Price, L.E.; Baden, A.

    1992-01-01

    The major SSC experiments are expected to produce up to 1 Petabyte of data per year each. Once the primary reconstruction is completed by farms of inexpensive processors, I/O becomes a major factor in further analysis of the data. We believe that the application of database techniques can significantly reduce the I/O performed in these analyses. We present examples of such I/O reductions in prototypes based on relational and object-oriented databases of CDF data samples.

  19. Update History of This Database - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Update History of This Database - Trypanosomes Database: 2014/05/07 The contact information is corrected; the features and manner of utilization of the database are corrected. 2014/02/04 The Trypanosomes Database English archive site is opened. 2011/04/04 The Trypanosomes Database ( http://www.tanpaku.org/tdb/ ) is opened.

  20. Specialist Bibliographic Databases.

    Science.gov (United States)

    Gasparyan, Armen Yuri; Yessirkepov, Marlen; Voronov, Alexander A; Trukhachev, Vladimir I; Kostyukova, Elena I; Gerasimov, Alexey N; Kitas, George D

    2016-05-01

    Specialist bibliographic databases offer essential online tools for researchers and authors who work on specific subjects and perform comprehensive and systematic syntheses of evidence. This article presents examples of the established specialist databases, which may be of interest to those engaged in multidisciplinary science communication. Access to most specialist databases is through subscription schemes and membership in professional associations. Several aggregators of information and database vendors, such as EBSCOhost and ProQuest, facilitate advanced searches supported by specialist keyword thesauri. Searches of items through specialist databases are complementary to those through multidisciplinary research platforms, such as PubMed, Web of Science, and Google Scholar. Familiarizing with the functional characteristics of biomedical and nonbiomedical bibliographic search tools is mandatory for researchers, authors, editors, and publishers. The database users are offered updates of the indexed journal lists, abstracts, author profiles, and links to other metadata. Editors and publishers may find particularly useful source selection criteria and apply for coverage of their peer-reviewed journals and grey literature sources. These criteria are aimed at accepting relevant sources with established editorial policies and quality controls.

  1. Specialist Bibliographic Databases

    Science.gov (United States)

    2016-01-01

    Specialist bibliographic databases offer essential online tools for researchers and authors who work on specific subjects and perform comprehensive and systematic syntheses of evidence. This article presents examples of the established specialist databases, which may be of interest to those engaged in multidisciplinary science communication. Access to most specialist databases is through subscription schemes and membership in professional associations. Several aggregators of information and database vendors, such as EBSCOhost and ProQuest, facilitate advanced searches supported by specialist keyword thesauri. Searches of items through specialist databases are complementary to those through multidisciplinary research platforms, such as PubMed, Web of Science, and Google Scholar. Familiarizing with the functional characteristics of biomedical and nonbiomedical bibliographic search tools is mandatory for researchers, authors, editors, and publishers. The database users are offered updates of the indexed journal lists, abstracts, author profiles, and links to other metadata. Editors and publishers may find particularly useful source selection criteria and apply for coverage of their peer-reviewed journals and grey literature sources. These criteria are aimed at accepting relevant sources with established editorial policies and quality controls. PMID:27134485

  2. Optimizing queries in distributed systems

    Directory of Open Access Journals (Sweden)

    Ion LUNGU

    2006-01-01

    Full Text Available This research presents the main elements of query optimization in distributed systems. First, the data architecture is presented in relation to the system-level architecture of a distributed environment. Then the architecture of a distributed database management system (DDBMS) is described at the conceptual level, followed by a presentation of the distributed query execution steps in these information systems. The research ends with a presentation of some aspects of distributed database query optimization and the strategies used for it.
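
    As a toy illustration of the cost-based reasoning such optimizers apply (the relation sizes and the pure data-shipping cost model below are invented, not taken from the paper): to join relation R stored at site 1 with relation S stored at site 2, with the result needed at site 3, the optimizer compares the bytes each candidate execution site forces across the network:

```python
# Invented relation and result sizes, in MB; a real optimizer reads these
# from catalog statistics.
R_SIZE, S_SIZE, RESULT_SIZE = 400, 50, 120

# Cost of each strategy = total data shipped between sites.
strategies = {
    "join_at_site1": S_SIZE + RESULT_SIZE,   # ship S to R's site, ship result on
    "join_at_site2": R_SIZE + RESULT_SIZE,   # ship R to S's site, ship result on
    "join_at_site3": R_SIZE + S_SIZE,        # ship both operands to the result site
}

best = min(strategies, key=strategies.get)
print(best, strategies[best])
```

    With these numbers the cheapest plan ships the small relation S and joins at site 1 (170 MB shipped); changing the relative sizes changes the chosen plan, which is exactly the decision a distributed query optimizer automates.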

  3. An XML Approach of Coding a Morphological Database for Arabic Language

    Directory of Open Access Journals (Sweden)

    Mourad Gridach

    2011-01-01

    Full Text Available We present an XML approach for the production of a morphological database for the Arabic language that will be used in morphological analysis for Modern Standard Arabic (MSA). Optimizing the production, maintenance, and extension of a morphological database is one of the crucial aspects impacting natural language processing (NLP). For Arabic, producing a morphological database is not an easy task, because the language has some particularities, such as the phenomenon of agglutination and a great deal of morphological ambiguity. The method presented can be exploited by NLP applications such as syntactic analysis, semantic analysis, information retrieval, and orthographical correction.

  4. A new method for assessing judgmental distributions

    NARCIS (Netherlands)

    Moors, J.J.A.; Schuld, M.H.; Mathijssen, A.C.A.

    1995-01-01

    For a number of statistical applications, subjective estimates of some distributional parameters - or even complete densities - are needed. The literature agrees that it is wise behaviour to ask only for some quantiles of the distribution; from these, the desired quantities are extracted. Quite a lot
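
    As a minimal illustration of extracting a density from elicited quantiles (this is not the paper's own method; the judgments and the normality assumption are invented for the example): if an expert states a median and a 90th percentile and a normal distribution is assumed, the two parameters follow directly:

```python
from statistics import NormalDist

# Hypothetical expert judgments: median 100, 90th percentile 130.
median, q90 = 100.0, 130.0

# For a normal distribution, median = mu and q90 = mu + z90 * sigma,
# where z90 is the standard normal 90% quantile (about 1.2816).
z90 = NormalDist().inv_cdf(0.90)
mu = median
sigma = (q90 - median) / z90

fitted = NormalDist(mu, sigma)
print(mu, round(sigma, 2))
```

    Asking for more quantiles than parameters would over-determine the fit and allow a consistency check on the expert's judgments, which is one motivation for quantile-based elicitation.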

  5. Programming a Distributed System Using Shared Objects

    NARCIS (Netherlands)

    Tanenbaum, A.S.; Bal, H.E.; Kaashoek, M.F.

    1993-01-01

    Building the hardware for a high-performance distributed computer system is a lot easier than building its software. The authors describe a model for programming distributed systems based on abstract data types that can be replicated on all machines that need them. Read operations are done locally,

  6. Model for teaching distributed computing in a distance-based educational environment

    CSIR Research Space (South Africa)

    le Roux, P

    2010-10-01

    Full Text Available Due to the prolific growth in connectivity, the development and implementation of distributed systems receives a lot of attention. Several technologies and languages exist for the development and implementation of such distributed systems; however...

  7. Database Description - TMFunction | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available ...residue (or mutant) in a protein. The experimental data are collected from the literature both by searching the... the sequence database, UniProt, structural database, PDB, and literature database...

  8. Database Description - RPSD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database Description - General information of database: Database name: RPSD. Alternative name: Rice Protein Structure Database. DOI: 10.18908/lsdba.nbdc00749-000. Creator: Toshimasa Yamazaki, National Institute of Agrobiological Sciences, Ibaraki 305-8602, Japan. E-mail: ... Database classification: Structure Databases - Protein structure. Organism Taxonomy Name: Or... Original website information: Database maintenance site: National Institu...

  9. Use of Occupancy Sensors in LED Parking Lot and Garage Applications: Early Experiences

    Energy Technology Data Exchange (ETDEWEB)

    Kinzey, Bruce R.; Myer, Michael; Royer, Michael P.; Sullivan, Greg P.

    2012-11-07

    Occupancy sensor systems are gaining traction as an effective technological approach to reducing energy use in exterior commercial lighting applications. Done correctly, occupancy sensors can substantially enhance the savings from an already efficient lighting system. However, this technology is confronted by several potential challenges and pitfalls that can leave a significant amount of the prospective savings on the table. This report describes anecdotal experiences from field installations of occupancy sensor controlled light-emitting diode (LED) lighting at two parking structures and two parking lots. The relative levels of success at these installations reflect a marked range of potential outcomes: from an additional 76% in energy savings to virtually no additional savings. Several issues that influenced savings were encountered in these early stage installations and are detailed in the report. Ultimately, care must be taken in the design, selection, and commissioning of a sensor-controlled lighting installation, or else the only guaranteed result may be its cost.

  10. Comparison of heuristics for an economic lot scheduling problem with deliberated coproduction

    Directory of Open Access Journals (Sweden)

    Pilar I. Vidal-Carreras

    2009-12-01

    Full Text Available We built on the Economic Lot Scheduling Problem (ELSP) literature by making some modifications in order to introduce new constraints which had not been thoroughly studied, with a view to simulating specific real situations. Specifically, our aim is to propose and simulate different scheduling policies for a new ELSP variant: deliberated coproduction. This problem comprises a product system in an ELSP environment in which we may choose whether more than one product is produced on the machine at a given time. We expressly consider the option of coproducing two products whose demand is not substitutable. In order to draw conclusions, a simulation model was developed and its results are presented in the article, employing modified Bomberger data which include two items that can be produced simultaneously.

  11. Assessment of Seed Germination and Dormancy of Thirty Seed Lots of

    Directory of Open Access Journals (Sweden)

    H.R Ehyaee

    2012-06-01

    Full Text Available Most seeds of medicinal plants, due to ecological adaptation to environmental conditions, have several types of dormancy. Hence, it is necessary to recognize the ecological factors that affect dormancy and provide optimum conditions for germination in medicinal plant species. Thirty seed lots were used to estimate germination and dormancy of medicinal plants. Treatments were KNO3 (2%), scarification of seeds by sand paper, sodium hypochlorite, and removing the seed coat, with four replicates of 25 seeds. Maximum and minimum germination were observed in H2O for Digitalis purpurea (100%) and Saponaria officinalis (0%). In the KNO3 treatment, Portulaca oleracea had the highest germination (91%) and Hyoscyamus niger had no germinated seeds. In the sand paper treatment, Saponaria officinalis and Datura stramonium had the maximum (33%) and minimum (0%) germination, respectively.

  12. New Mathematical Model and Algorithm for Economic Lot Scheduling Problem in Flexible Flow Shop

    Directory of Open Access Journals (Sweden)

    H. Zohali

    2018-03-01

    Full Text Available This paper addresses the lot sizing and scheduling problem for a number of products in a flexible flow shop with identical parallel machines. The production stages are in series, separated by finite intermediate buffers. The objective is to minimize the sum of setup and inventory holding costs per unit of time. The available mathematical model of this problem in the literature suffers from huge complexity in terms of size and computation. In this paper, a new mixed integer linear program is developed to deal with the huge dimensions of the problem. A new metaheuristic algorithm is also developed for the problem. The results of the numerical experiments show a significant advantage of the proposed model and algorithm compared with the available models and algorithms in the literature.
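
    For orientation on the cost trade-off such models optimize, the classical common-cycle baseline for the ELSP (a simple heuristic, not the paper's mixed-integer model; the item data below are invented) chooses one cycle length T for all items to balance setup cost against holding cost:

```python
import math

# Each item: (setup cost A, holding cost h per unit-day, demand rate d, production rate p).
items = [
    (100.0, 0.02, 40.0, 200.0),
    (150.0, 0.01, 60.0, 300.0),
    (80.0,  0.03, 25.0, 250.0),
]

# Cost per day of a common cycle T:  sum_i A_i / T  +  (T / 2) * sum_i h_i d_i (1 - d_i / p_i)
setup_sum = sum(a for a, _, _, _ in items)
hold_sum = sum(h * d * (1 - d / p) for _, h, d, p in items)

# Minimizing A/T + H*T/2 gives T* = sqrt(2A / H), the EOQ-style optimum,
# at which setup cost per day equals holding cost per day.
t_star = math.sqrt(2 * setup_sum / hold_sum)
daily_cost = setup_sum / t_star + hold_sum * t_star / 2
print(round(t_star, 2), round(daily_cost, 2))
```

    The MILP formulations discussed in the paper refine this by allowing item-specific cycles, sequencing on parallel machines, and finite buffers, which is where the computational difficulty arises.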

  13. Tracking of Vehicle Movement on a Parking Lot Based on Video Detection

    Directory of Open Access Journals (Sweden)

    Ján HALGAŠ

    2014-06-01

    Full Text Available This article deals with the topic of transport vehicle identification for dynamic and static transport based on video detection. It explains some of the technologies and approaches necessary for processing specific image information (a transport situation). The paper also describes the design of an algorithm for vehicle detection in a parking lot and the consecutive recording of trajectories into a virtual environment. It shows a new approach to moving object detection (vehicles, people, and handlers) in an enclosed area, with emphasis on secure parking. The created application enables automatic identification of the trajectories of specific objects moving within the parking area. The application was created in the programming language C++ using the open source library OpenCV.

  14. Trypanosoma brucei gambiense trypanosomiasis in Terego county, northern Uganda, 1996: a lot quality assurance sampling survey.

    Science.gov (United States)

    Hutin, Yvan J F; Legros, Dominique; Owini, Vincent; Brown, Vincent; Lee, Evan; Mbulamberi, Dawson; Paquet, Christophe

    2004-04-01

    We estimated the pre-intervention prevalence of Trypanosoma brucei gambiense (Tbg) trypanosomiasis using the lot quality assurance sampling (LQAS) method in 14 parishes of Terego County in northern Uganda. A total of 826 participants were included in the survey sample in 1996. The prevalence of laboratory-confirmed Tbg trypanosomiasis, adjusted for parish population sizes, was 2.2% (95% confidence interval = 1.1-3.2). This estimate was consistent with the 1.1% period prevalence calculated on the basis of cases identified through passive and active screening in 1996-1999. Ranking of parishes in four categories according to LQAS analysis of the 1996 survey predicted the prevalences observed during the first round of active screening in the population in 1997-1998 (P... The LQAS results were validated by the results of the population screening, suggesting that these survey methods may be useful in the pre-intervention phase of sleeping sickness control programs.

  15. Rapid assessment of antimicrobial resistance prevalence using a Lot Quality Assurance sampling approach.

    Science.gov (United States)

    van Leth, Frank; den Heijer, Casper; Beerepoot, Mariëlle; Stobberingh, Ellen; Geerlings, Suzanne; Schultsz, Constance

    2017-04-01

    Increasing antimicrobial resistance (AMR) requires rapid surveillance tools, such as Lot Quality Assurance Sampling (LQAS). LQAS classifies AMR as high or low based on set parameters. We compared these classifications with the underlying true AMR prevalence using data on 1335 Escherichia coli isolates from surveys of community-acquired urinary tract infection in women, by assessing operating curves, sensitivity and specificity. The sensitivity and specificity of any set of LQAS parameters were above 99% and between 79% and 90%, respectively. Operating curves showed high concordance of the LQAS classification with true AMR prevalence estimates. LQAS-based AMR surveillance is a feasible approach that provides timely and locally relevant estimates, and the necessary information to formulate and evaluate guidelines for empirical treatment.
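
    The classification these LQAS abstracts describe reduces to a binomial decision rule: sample n isolates and declare prevalence "high" when more than d are resistant. A sketch of how the resulting sensitivity and specificity can be computed; the n, d and prevalence thresholds below are invented, not those of the study:

```python
from math import comb

def p_classify_high(prevalence: float, n: int, d: int) -> float:
    """Probability that more than d of n sampled isolates are resistant,
    i.e. that LQAS classifies the lot as high-prevalence."""
    p_at_most_d = sum(
        comb(n, k) * prevalence**k * (1 - prevalence)**(n - k)
        for k in range(d + 1)
    )
    return 1.0 - p_at_most_d

# Invented example parameters: n = 25 samples, decision threshold d = 5,
# "high" means true prevalence >= 40%, "low" means <= 10%.
n, d = 25, 5
sensitivity = p_classify_high(0.40, n, d)   # P(classified high | truly high)
false_pos = p_classify_high(0.10, n, d)     # P(classified high | truly low)
specificity = 1.0 - false_pos
print(round(sensitivity, 3), round(specificity, 3))
```

    Sweeping the prevalence argument from 0 to 1 traces the operating curve the abstract refers to; tightening n and d trades sample size against misclassification risk.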

  16. Measurements of the Cosmic Radiation Doses on Board of Aircraft of Polish Airlines LOT. Part 1

    International Nuclear Information System (INIS)

    Bilski, P.; Budzanowski, M.; Horwacik, T.; Marczewska, B.; Olko, P.

    2000-12-01

    Radiation doses received by a group of 30 pilots of the Polish Airlines LOT were investigated between July and October 2000. The measurement of the low-LET component of the cosmic radiation, lasting on average two months, was performed with 7 LiF:Mg,Ti and 7 LiF:Mg,Cu,P thermoluminescent detectors. The neutron component was measured with thermoluminescent albedo cassettes. Additionally, altitude profiles were recorded for all flights, and effective doses were then calculated with the CARI-6 computer code. In total, about 560 flights were included in the calculations. The highest dose obtained was about 0.8 mSv in 2 months. The results of the calculations are mostly consistent with the results of the measurements. (author)
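    A route-dose calculation of the CARI-6 kind integrates an altitude- (and latitude-)dependent dose rate over the recorded flight profile. A toy sketch of that integration; the dose-rate table below is made up for illustration and is not CARI-6 output:

```python
# Hypothetical cruise-altitude dose rates in microsieverts per hour.
# Real rates also depend on geomagnetic latitude and solar activity.
DOSE_RATE_uSv_PER_H = {9000: 3.0, 10000: 4.0, 11000: 5.0, 12000: 6.5}

def flight_dose(profile):
    """profile: list of (hours_in_segment, altitude_m) pairs.
    Returns the accumulated dose in microsieverts."""
    return sum(hours * DOSE_RATE_uSv_PER_H[alt] for hours, alt in profile)

# e.g. a two-hour cruise at 11,000 m bracketed by climb/descent segments
dose = flight_dose([(0.3, 9000), (2.0, 11000), (0.3, 9000)])
```

Summing such per-flight doses over roughly 560 flights is how a two-month crew dose of the order reported above would be built up.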

  17. Database Description - DGBY | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available General information: Database name: DGBY. Contact: TEL: +81-29-838-8066. Database classification: Microarray Data and other Gene Expression Databases. Organism: Saccharomyces cerevisiae (Taxonomy ID: 4932). The database collects genome-wide gene expression data for yeast (so-called phenomics), uploaded to a website designated DGBY (Database for Gene expres...). Reference: Ma J, Ando A, Takagi H. Yeast. 2008 Mar;25(3):179-90. External Links: Original website information.

  18. [Acceptance of lot sampling: its applicability to the evaluation of the primary care services portfolio].

    Science.gov (United States)

    López-Picazo Ferrer, J

    2001-05-15

    To determine the applicability of lot quality assurance sampling (LQAS) to the primary care service portfolio, comparing its results with those given by classic evaluation. Compliance with the minimum technical norms (MTN) of the diabetic care service was evaluated through the classic methodology (confidence 95%, accuracy 5%, representativeness at area level, sample of 376 histories) and by LQAS (confidence 95%, power 80%, representativeness at primary care team (PCT) level, defining a lot by MTN and PCT, sample of 13 histories/PCT). Effort, the information obtained, and its operative nature were assessed. 44 PCTs from the Murcia Primary Care Region. Classic methodology: compliance with MTN ranged between 91.1% (diagnosis; 95% CI, 84.2-94.0) and 30% (visceral repercussion; 95% CI, 25.4-34.6). Objectives were reached in three MTN (diagnosis, history, and EKG). LQAS: no MTN was accepted in all the PCTs; the most widely accepted MTN was accepted in 42 PCTs (95.6%) and the least in 24 PCTs (55.6%). In 9 PCTs all MTN were accepted (20.4%), and in 2 none were accepted (4.5%). Data were analysed through Pareto charts. The classic methodology offered accurate results but did not identify which centres failed to comply (general focus). LQAS was preferable for evaluating MTN, and probably coverage, because: 1) it uses small samples, which foment internal quality-improvement initiatives; 2) it is easy and rapid to execute; 3) it identifies the PCTs and criteria where there is an opportunity for improvement (specific focus); and 4) it can be used operatively for monitoring.
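    The Pareto-chart analysis mentioned above amounts to ranking criteria by their failure counts and accumulating percentages, so the few criteria responsible for most non-compliance stand out. A sketch with hypothetical counts (the criterion names and numbers are illustrative, not the study's data):

```python
# Hypothetical non-compliance counts per minimum technical norm (MTN).
counts = {
    "visceral repercussion": 70,
    "funduscopy": 40,
    "HbA1c": 25,
    "EKG": 10,
    "diagnosis": 5,
}

def pareto(counts):
    """Sort criteria by failure count (descending) and attach the
    cumulative percentage of all failures accounted for so far."""
    total = sum(counts.values())
    rows, cum = [], 0
    for name, n in sorted(counts.items(), key=lambda kv: kv[1], reverse=True):
        cum += n
        rows.append((name, n, round(100 * cum / total, 1)))
    return rows
```

Reading the resulting table top-down shows which one or two criteria to target first for the largest quality-improvement payoff.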

  19. Storm water runoff for the Y-12 Plant and selected parking lots

    International Nuclear Information System (INIS)

    Collins, E.T.

    1996-01-01

    A comparison of storm water runoff from the Y-12 Plant and selected employee vehicle parking lots to various industry data is provided in this document. This work is an outgrowth of, and part of, the continuing Non-Point Source Pollution Elimination Project that was initiated in the late 1980s. This project seeks to identify area pollution sources and remediate these areas through the Resource Conservation and Recovery Act/Comprehensive Environmental Response, Compensation, and Liability Act (RCRA/CERCLA) process as managed by the Environmental Restoration Organization staff. This work is also driven by Clean Water Act Section 402(p), which, in part, deals with establishing a National Pollutant Discharge Elimination System (NPDES) permit for storm water discharges. Storm water data from events occurring in 1988 through 1991 were analyzed in two reports: Feasibility Study for the Best Management Practices to Control Area Source Pollution Derived from Parking Lots at the DOE Y-12 Plant, September 1992, and Feasibility Study of Best Management Practices for Non-Point Source Pollution Control at the Oak Ridge Y-12 Plant, February 1993. These data consisted of analyses of outfalls discharging to upper East Fork Poplar Creek (EFPC) within the confines of the Y-12 Plant (see Appendixes D and E). These reports identified the major characteristics of concern as copper, iron, lead, manganese, mercury, nitrate (as nitrogen), zinc, biological oxygen demand (BOD), chemical oxygen demand (COD), total suspended solids (TSS), fecal coliform, and aluminum. Specific sources of these contaminants were not identifiable because flows upstream of outfalls were not sampled. In general, many of these contaminants were a concern at many outfalls. Therefore, separate sampling exercises were executed to assist in identifying (or eliminating) specific suspected sources as areas of concern.

  20. Programming database tools for the casual user

    International Nuclear Information System (INIS)

    Katz, R.A.; Griffiths, C.

    1990-01-01

    The AGS Distributed Control System (AGSDCS) uses a relational database management system (InterBase) for the storage of all data associated with the control of the particle accelerator complex. This includes the static data that describes the component devices of the complex, as well as data for application program startup and data records used in analysis. Due to licensing restrictions, it was necessary to develop tools that allow programs requiring database access to be unconcerned with whether or not they are running on a licensed node. An in-house database server program was written, using Apollo mailbox communication protocols, allowing application programs to access the InterBase database via calls to this server. Initially, the tools used by the server to actually access the database were written using the GDML C host-language interface. Through an evolutionary learning process these tools have been converted to Dynamic SQL. Additionally, these tools have been extracted from the exclusive province of the database server and placed in their own library. This enables application programs to use these same tools on a licensed node without using the database server and without having to modify the application code. The syntax of the C calls remains the same.
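    The move from a precompiled host-language interface (GDML) to Dynamic SQL means the statement text is assembled and executed at run time rather than being baked into the program at compile time. A rough illustration of that idea using Python's DB-API with SQLite standing in for InterBase; the table and column names here are invented, not from the AGSDCS:

```python
import sqlite3

def fetch_device(conn, table, name):
    """Run a dynamically assembled query with a bound parameter.
    Table names cannot be bound as parameters, so the table is
    validated against a whitelist to guard against SQL injection."""
    assert table in {"devices", "settings"}
    cur = conn.execute(f"SELECT value FROM {table} WHERE name = ?", (name,))
    row = cur.fetchone()
    return row[0] if row else None

# Set up an in-memory stand-in for the control-system database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE devices (name TEXT, value REAL)")
conn.execute("INSERT INTO devices VALUES ('magnet_current', 42.0)")
```

Because the statement is interpreted at run time, the same helper can serve any whitelisted table without recompilation, which is the property that let the AGSDCS tools migrate out of the server into a shared library with unchanged call syntax.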