WorldWideScience

Sample records for lot distribution database

  1. 9 CFR 381.191 - Distribution of inspected products to small lot buyers.

    Science.gov (United States)

    2010-01-01

    ... small lot buyers. 381.191 Section 381.191 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE...; Exportation; or Sale of Poultry or Poultry Products § 381.191 Distribution of inspected products to small lot... small lot buyers (such as small restaurants), distributors or jobbers may remove inspected and passed...

  2. Simultaneous Optimal Placement of Distributed Generation and Electric Vehicle Parking Lots Based on Probabilistic EV Model

    OpenAIRE

    M.H. Amini; M. Parsa Moghaddam

    2013-01-01

    High penetration of distributed generation and the increasing demand for electric vehicles pose many issues for utilities. If these two key elements of the future power system are used in an unscheduled manner, losses in distribution networks may increase dramatically. In this paper, the simultaneous allocation of distributed generations (DGs) and electric vehicle (EV) parking lots is studied in a radial distribution network. A distribution netwo...

  3. Tradeoffs in distributed databases

    OpenAIRE

    Juntunen, R. (Risto)

    2016-01-01

    Abstract In a distributed database, data is spread throughout the network into separate nodes running different DBMS systems (Date, 2000). According to the CAP theorem, the three database properties of consistency, availability and partition tolerance cannot all be achieved simultaneously in a distributed database system; any two of these properties can be achieved, but not all three at the same time (Brewer, 2000). Since this theorem there has b...
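
The CAP tradeoff this abstract describes is often navigated in practice with tunable quorums. The sketch below is illustrative only and not from the thesis: with N replicas, a read quorum R and a write quorum W, choosing R + W > N guarantees that every read set overlaps the latest write (favoring consistency), while smaller quorums favor availability and latency.

```python
# Illustrative sketch (not from the cited thesis): the tunable-quorum rule
# used by Dynamo-style distributed databases to trade consistency against
# availability. With N replicas, read quorum R and write quorum W,
# R + W > N means any read quorum intersects any write quorum, so a read
# always sees the most recently written replica.

def quorum_is_consistent(n_replicas: int, r: int, w: int) -> bool:
    """Return True if read and write quorums always intersect."""
    return r + w > n_replicas

# N=3 with majority reads and writes: quorums overlap -> consistent reads
assert quorum_is_consistent(3, 2, 2)
# R=1, W=1 favors availability; a read may miss the newest write
assert not quorum_is_consistent(3, 1, 1)
```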

  4. Towards cloud-centric distributed database evaluation

    OpenAIRE

    Seybold, Daniel

    2016-01-01

    The area of cloud computing has also pushed the evolution of distributed databases, resulting in a variety of distributed database systems, which can be classified into relational, NoSQL and NewSQL database systems. In general, all representatives of these database system classes claim to provide elasticity and "unlimited" horizontal scalability. As these characteristics comply with the cloud, distributed databases seem to be a perfect match for Database-as-a-Service (DBaaS) systems.

  6. Development of database on the distribution coefficient. 2. Preparation of database

    Energy Technology Data Exchange (ETDEWEB)

    Takebe, Shinichi; Abe, Masayoshi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2001-03-01

    The distribution coefficient is a very important parameter for environmental impact assessment of the disposal of radioactive waste arising from research institutes. The 'Database on the Distribution Coefficient' was built up from information obtained by a nationwide literature survey of items such as the value, measuring method and measurement conditions of the distribution coefficient, in order to select reasonable distribution coefficient values for use in safety evaluations. This report explains the outline of the preparation of this database and serves as a user's guide to the database. (author)

  8. Exploring lot-to-lot variation in spoilage bacterial communities on commercial modified atmosphere packaged beef.

    Science.gov (United States)

    Säde, Elina; Penttinen, Katri; Björkroth, Johanna; Hultman, Jenni

    2017-04-01

    Understanding the factors influencing meat bacterial communities is important, as these communities are largely responsible for meat spoilage. The composition and structure of the bacterial community on a high-O2 modified-atmosphere packaged beef product were examined after packaging, on the use-by date and two days after it, to determine whether the communities at each stage were similar to those in samples taken from different production lots. Furthermore, we examined whether the taxa associated with product spoilage were distributed across production lots. Results from 16S rRNA amplicon sequencing showed that while the early samples harbored distinct bacterial communities, after 8-12 days of storage at 6 °C the communities were similar to those in samples from different lots, consisting mainly of the common meat spoilage bacteria Carnobacterium spp., Brochothrix spp., Leuconostoc spp. and Lactococcus spp. Interestingly, abundant operational taxonomic units associated with product spoilage were shared between the production lots, suggesting that the bacteria able to spoil the product were constant contaminants in the production chain. A characteristic succession pattern and the distribution of common spoilage bacteria between lots suggest that both the packaging type and the initial community structure influenced the development of the spoilage bacterial community. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. The design of distributed database system for HIRFL

    International Nuclear Information System (INIS)

    Wang Hong; Huang Xinmin

    2004-01-01

    This paper focuses on the distributed database system used in the HIRFL distributed control system. The database of this distributed database system is built with SQL Server 2000, and its application system adopts the Client/Server model. Visual C++ is used to develop the applications, which access the database through ODBC. (authors)

  10. Concurrency control in distributed database systems

    CERN Document Server

    Cellary, W; Gelenbe, E

    1989-01-01

    Distributed Database Systems (DDBS) may be defined as integrated database systems composed of autonomous local databases, geographically distributed and interconnected by a computer network. The purpose of this monograph is to present DDBS concurrency control algorithms and their related performance issues. The most recent results have been taken into consideration. A detailed analysis and selection of these results has been made so as to include those which will promote applications and progress in the field. The application of the methods and algorithms presented is not limited to DDBSs but a

  11. Assumptions of acceptance sampling and the implications for lot contamination: Escherichia coli O157 in lots of Australian manufacturing beef.

    Science.gov (United States)

    Kiermeier, Andreas; Mellor, Glen; Barlow, Robert; Jenson, Ian

    2011-04-01

    The aims of this work were to determine the distribution and concentration of Escherichia coli O157 in lots of beef destined for grinding (manufacturing beef) that failed to meet Australian requirements for export, to use these data to better understand the performance of sampling plans based on the binomial distribution, and to consider alternative approaches for evaluating sampling plans. For each of five lots from which E. coli O157 had been detected, 900 samples from the external carcass surface were tested. E. coli O157 was not detected in three lots, whereas in two lots E. coli O157 was detected in 2 and 74 samples. For lots in which E. coli O157 was not detected in the present study, the E. coli O157 level was estimated to be contaminated carton, the total number of E. coli O157 cells was estimated to be 813. In the two lots in which E. coli O157 was detected, the pathogen was detected in 1 of 12 and 2 of 12 cartons. The use of acceptance sampling plans based on a binomial distribution can provide a falsely optimistic view of the value of sampling as a control measure when applied to assessment of E. coli O157 contamination in manufacturing beef. Alternative approaches to understanding sampling plans, which do not assume homogeneous contamination throughout the lot, appear more realistic. These results indicate that despite the application of stringent sampling plans, sampling and testing approaches are inefficient for controlling microbiological quality.
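
The abstract's point about binomial sampling plans can be made concrete with a small calculation (the numbers below are invented for illustration, not taken from the study): a zero-tolerance plan accepts a lot only if all n samples test negative, so under the homogeneous-contamination assumption the acceptance probability is (1 - p)^n, which can look reassuring even when contamination is actually clustered in a few cartons.

```python
# A minimal sketch of acceptance sampling under the binomial
# (homogeneous-contamination) assumption. Parameter values are assumed
# for illustration only.

def p_accept(prevalence: float, n_samples: int) -> float:
    """P(all n samples negative) when each sample is positive with prob p."""
    return (1.0 - prevalence) ** n_samples

# Homogeneous assumption: 1% prevalence, 60 samples -> lot usually rejected
assert p_accept(0.01, 60) < 0.55
# Clustered reality: if only 1 carton in 12 carries the contamination and
# samples from clean cartons can never test positive, the effective
# prevalence seen by sampling is far lower and acceptance is very likely.
assert p_accept(0.01 / 12, 60) > 0.95
```

This is why plans that assume homogeneity can give a falsely optimistic view of sampling as a control measure, as the authors argue.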

  12. Comparison of the Frontier Distributed Database Caching System with NoSQL Databases

    CERN Document Server

    Dykstra, David

    2012-01-01

    One of the main attractions of non-relational "NoSQL" databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also has high scalability and wide-area distributability for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.

  13. Aspects of the design of distributed databases

    OpenAIRE

    Burlacu Irina-Andreea

    2011-01-01

    Distributed data: data processed by a system can be spread among several computers, yet remain accessible from any of them. A distributed database design problem is presented that involves the development of a global model, a fragmentation, and a data allocation. The student is given a conceptual entity-relationship model for the database, a description of the transactions and a generic network environment. A stepwise solution approach to this problem is shown, based on mean value a...

  14. Comparison of the Frontier Distributed Database Caching System to NoSQL Databases

    Science.gov (United States)

    Dykstra, Dave

    2012-12-01

    One of the main attractions of non-relational “NoSQL” databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also adds high scalability and the ability to be distributed over a wide-area for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.

  15. Comparison of the Frontier Distributed Database Caching System to NoSQL Databases

    International Nuclear Information System (INIS)

    Dykstra, Dave

    2012-01-01

    One of the main attractions of non-relational “NoSQL” databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also adds high scalability and the ability to be distributed over a wide-area for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.

  16. Comparison of the Frontier Distributed Database Caching System to NoSQL Databases

    Energy Technology Data Exchange (ETDEWEB)

    Dykstra, Dave [Fermilab

    2012-07-20

    One of the main attractions of non-relational NoSQL databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also adds high scalability and the ability to be distributed over a wide-area for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.

  17. Comparison of the Frontier Distributed Database Caching System with NoSQL Databases

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Non-relational "NoSQL" databases such as Cassandra and CouchDB are best known for their ability to scale to large numbers of clients spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects, is based on traditional SQL databases but also has the same high scalability and wide-area distributability for an important subset of applications. This paper compares the architectures, behavior, performance, and maintainability of the two different approaches and identifies the criteria for choosing which approach to prefer over the other.

  18. The ATLAS Distributed Data Management System & Databases

    CERN Document Server

    Garonne, V; The ATLAS collaboration; Barisits, M; Beermann, T; Vigne, R; Serfon, C

    2013-01-01

    The ATLAS Distributed Data Management (DDM) System is responsible for the global management of petabytes of high energy physics data. The current system, DQ2, has a critical dependency on Relational Database Management Systems (RDBMS), like Oracle. RDBMS are well-suited to enforcing data integrity in online transaction processing applications, however, concerns have been raised about the scalability of its data warehouse-like workload. In particular, analysis of archived data or aggregation of transactional data for summary purposes is problematic. Therefore, we have evaluated new approaches to handle vast amounts of data. We have investigated a class of database technologies commonly referred to as NoSQL databases. This includes distributed filesystems, like HDFS, that support parallel execution of computational tasks on distributed data, as well as schema-less approaches via key-value stores, like HBase. In this talk we will describe our use cases in ATLAS, share our experiences with various databases used ...

  19. Distributed Database Management Systems A Practical Approach

    CERN Document Server

    Rahimi, Saeed K

    2010-01-01

    This book addresses issues related to managing data across a distributed database system. It is unique because it covers traditional database theory and current research, explaining the difficulties in providing a unified user interface and global data dictionary. The book gives implementers guidance on hiding discrepancies across systems and creating the illusion of a single repository for users. It also includes three sample frameworks (implemented using J2SE with JMS, J2EE, and Microsoft .NET) that readers can use to learn how to implement a distributed database management system. IT and

  20. LHCb distributed conditions database

    International Nuclear Information System (INIS)

    Clemencic, M

    2008-01-01

    The LHCb Conditions Database project provides the necessary tools to handle non-event time-varying data. The main users of conditions are reconstruction and analysis processes, which are running on the Grid. To allow efficient access to the data, we need to use a synchronized replica of the content of the database located at the same site as the event data file, i.e. the LHCb Tier1. The replica to be accessed is selected from information stored in the LFC (LCG File Catalog) and managed with the interface provided by the LCG-developed library CORAL. The plan to limit the submission of jobs to those sites where the required conditions are available will also be presented. LHCb applications have been using the Conditions Database framework on a production basis since March 2007. We have been able to collect statistics on the performance and effectiveness of both the LCG library COOL (the library providing conditions handling functionalities) and the distribution framework itself. Stress tests on the CNAF-hosted replica of the Conditions Database have been performed and the results will be summarized here

  1. The ATLAS TAGS database distribution and management - Operational challenges of a multi-terabyte distributed database

    International Nuclear Information System (INIS)

    Viegas, F; Nairz, A; Goossens, L; Malon, D; Cranshaw, J; Dimitrov, G; Nowak, M; Gamboa, C; Gallas, E; Wong, A; Vinek, E

    2010-01-01

    The TAG files store summary event quantities that allow a quick selection of interesting events. This data will be produced at a nominal rate of 200 Hz, and is uploaded into a relational database for access from websites and other tools. The estimated database volume is 6TB per year, making it the largest application running on the ATLAS relational databases, at CERN and at other voluntary sites. The sheer volume and high rate of production makes this application a challenge to data and resource management, in many aspects. This paper will focus on the operational challenges of this system. These include: uploading the data from files to the CERN's and remote sites' databases; distributing the TAG metadata that is essential to guide the user through event selection; controlling resource usage of the database, from the user query load to the strategy of cleaning and archiving of old TAG data.

  2. Web geoprocessing services on GML with a fast XML database ...

    African Journals Online (AJOL)

    Nowadays there exist quite a lot of Spatial Database Infrastructures (SDI) that facilitate the Geographic Information Systems (GIS) user community in getting access to distributed spatial data through web technology. However, sometimes the users first have to process available spatial data to obtain the needed information.

  3. The ATLAS TAGS database distribution and management - Operational challenges of a multi-terabyte distributed database

    Energy Technology Data Exchange (ETDEWEB)

    Viegas, F; Nairz, A; Goossens, L [CERN, CH-1211 Geneve 23 (Switzerland); Malon, D; Cranshaw, J [Argonne National Laboratory, 9700 S. Cass Avenue, Argonne, IL 60439 (United States); Dimitrov, G [DESY, D-22603 Hamburg (Germany); Nowak, M; Gamboa, C [Brookhaven National Laboratory, PO Box 5000 Upton, NY 11973-5000 (United States); Gallas, E [University of Oxford, Denys Wilkinson Building, Keble Road, Oxford OX1 3RH (United Kingdom); Wong, A [Triumf, 4004 Wesbrook Mall, Vancouver, BC, V6T 2A3 (Canada); Vinek, E [University of Vienna, Dr.-Karl-Lueger-Ring 1, 1010 Vienna (Austria)

    2010-04-01

    The TAG files store summary event quantities that allow a quick selection of interesting events. This data will be produced at a nominal rate of 200 Hz, and is uploaded into a relational database for access from websites and other tools. The estimated database volume is 6TB per year, making it the largest application running on the ATLAS relational databases, at CERN and at other voluntary sites. The sheer volume and high rate of production makes this application a challenge to data and resource management, in many aspects. This paper will focus on the operational challenges of this system. These include: uploading the data from files to the CERN's and remote sites' databases; distributing the TAG metadata that is essential to guide the user through event selection; controlling resource usage of the database, from the user query load to the strategy of cleaning and archiving of old TAG data.

  4. Secure Distributed Databases Using Cryptography

    Directory of Open Access Journals (Sweden)

    Ion IVAN

    2006-01-01

    The computational encryption is used intensively by different database management systems to ensure the privacy and integrity of information physically stored in files. The information is also sent over the network and replicated on different distributed systems. It is proved that a satisfying level of security is achieved if the rows and columns of tables are encrypted independently of the table or computer that holds the data. It is also very important that SQL (Structured Query Language) query requests and responses be encrypted over the network connection between the client and the database server. All these techniques and methods must be implemented by database administrators, designers and developers in a consistent security policy.
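
The idea of encrypting each cell independently of the table or server that stores it can be sketched as follows. This is a toy illustration of the concept only, not the paper's scheme: the keystream is a SHA-256-based construction chosen because the Python standard library ships no block cipher; a real deployment would use a vetted AEAD cipher such as AES-GCM.

```python
# Toy sketch: per-cell encryption keyed by (key, table, row, column), so
# each cell is protected independently of where it is stored. SHA-256 as a
# keystream generator is for illustration only -- NOT production crypto.
import hashlib

def cell_xor(key: bytes, table: str, row: int, col: str, data: bytes) -> bytes:
    """XOR data with a keystream derived from the cell's coordinates."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hashlib.sha256(
            key + f"{table}:{row}:{col}:{counter}".encode()
        ).digest()
        out.extend(block)
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

key = b"demo-key"
ct = cell_xor(key, "employees", 7, "salary", b"52000")
assert ct != b"52000"                                  # cell is hidden at rest
assert cell_xor(key, "employees", 7, "salary", ct) == b"52000"  # XOR inverts itself
```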

  5. Database Administrator

    Science.gov (United States)

    Moore, Pam

    2010-01-01

    The Internet and electronic commerce (e-commerce) generate lots of data. Data must be stored, organized, and managed. Database administrators, or DBAs, work with database software to find ways to do this. They identify user needs, set up computer databases, and test systems. They ensure that systems perform as they should and add people to the…

  6. Heterogeneous distributed databases: A case study

    Science.gov (United States)

    Stewart, Tracy R.; Mukkamala, Ravi

    1991-01-01

    Alternatives are reviewed for accessing distributed heterogeneous databases and a recommended solution is proposed. The current study is limited to the Automated Information Systems Center at the Naval Sea Combat Systems Engineering Station at Norfolk, VA. This center maintains two databases located on Digital Equipment Corporation's VAX computers running under the VMS operating system. The first database, ICMS, resides on a VAX 11/780 and has been implemented using VAX DBMS, a CODASYL-based system. The second database, CSA, resides on a VAX 6460 and has been implemented using the ORACLE relational database management system (RDBMS). Both databases are used for configuration management within the U.S. Navy, and each supports a different customer base. ICMS tracks U.S. Navy ships and major systems (anti-sub, sonar, etc.). Even though the major systems on ships and submarines have totally different functions, some of the equipment within the major systems is common to both ships and submarines.

  7. Study on parallel and distributed management of RS data based on spatial database

    Science.gov (United States)

    Chen, Yingbiao; Qian, Qinglan; Wu, Hongqiao; Liu, Shijin

    2009-10-01

    With the rapid development of current earth-observing technology, the storage and management of RS image data and the publication of the information derived from it have become a bottleneck for its application and popularization. There are two prominent problems in RS image data storage and management systems. First, a single background server can hardly handle the heavy processing of the great volume of RS data stored at different nodes in a distributed environment, so a heavy burden is placed on the background server. Second, there is no unique, standard and rational organization of multi-sensor RS data for storage and management, and much information is lost or omitted at storage time. To address these two problems, this paper puts forward a framework for parallel and distributed management and storage of RS image data, aiming at an RS data information system based on a parallel background server and a distributed data management system. Toward these two goals, the paper studies the following key techniques and draws some instructive conclusions. It puts forward a solid index of "Pyramid, Block, Layer, Epoch" according to the properties of RS image data. With this solid index mechanism, a rational organization of multi-sensor RS image data of different resolutions, areas, bands and periods is achieved. For data storage, RS data is not divided into binary large objects stored in a conventional relational database system; instead, it is reconstructed through the above solid index mechanism, and a logical image database for the RS image data files is constructed. For the system architecture, the paper sets up a framework based on a parallel server built from several commodity computers, under which the background process is divided into two parts: the common web process and the parallel process.
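
The "Pyramid, Block, Layer, Epoch" solid index the abstract describes can be sketched as a four-part tile key (field names and the catalog layout below are assumptions for illustration, not the paper's schema): each tile of multi-sensor imagery is addressed by resolution level, spatial block, spectral band and acquisition period, so one key scheme covers different resolutions, areas, bands and periods.

```python
# Hedged sketch of a four-part "solid index" for RS imagery tiles.
# Field names and the storage-path format are invented for illustration.
from typing import NamedTuple

class TileKey(NamedTuple):
    pyramid: int   # resolution level (0 = full resolution)
    block: tuple   # (row, col) of the spatial block at that level
    layer: int     # spectral band / sensor layer
    epoch: str     # acquisition period, e.g. "2009-10"

# A logical image catalog mapping tile keys to physical file locations.
catalog: dict[TileKey, str] = {}
catalog[TileKey(2, (14, 9), 3, "2009-10")] = "node-B:/tiles/2/14_9/b3.tif"

# Lookup is a plain dictionary access on the four-part key.
assert catalog[TileKey(2, (14, 9), 3, "2009-10")].endswith("b3.tif")
```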

  8. DAD - Distributed Adamo Database system at Hermes

    International Nuclear Information System (INIS)

    Wander, W.; Dueren, M.; Ferstl, M.; Green, P.; Potterveld, D.; Welch, P.

    1996-01-01

    Software development for the HERMES experiment faces the challenges of many other experiments in modern High Energy Physics: complex data structures and relationships have to be processed at high I/O rates. Experimental control and data analysis are done in a distributed environment of CPUs with various operating systems and require access to different time-dependent databases, such as calibration and geometry. Slow control and experiment control need flexible inter-process communication. Program development is done in different programming languages, where interfaces to the libraries should not restrict the capabilities of the language. The needs of handling complex data structures are fulfilled by the ADAMO entity-relationship model. Mixed-language programming can be provided using the CFORTRAN package. DAD, the Distributed ADAMO Database library, was developed to provide the required I/O and database functionality. (author)

  9. Development of a PSA information database system

    International Nuclear Information System (INIS)

    Kim, Seung Hwan

    2005-01-01

    The need to develop a PSA information database for performing a PSA has been growing rapidly. For example, performing a PSA requires a lot of data to analyze, to evaluate the risk, to trace the process of results and to verify the results. A PSA information database is a system that stores all PSA-related information in a database and file system, with cross links that jump to the physical documents whenever they are needed. The Korea Atomic Energy Research Institute is developing a PSA information database system, AIMS (Advanced Information Management System for PSA). The objective is to integrate and computerize all the distributed information of a PSA into one system and to enhance the accessibility of PSA information for all PSA-related activities. This paper describes how we implemented such a database-centered application in view of two areas: database design and data (document) service.

  10. Distributed Database Access in the LHC Computing Grid with CORAL

    CERN Document Server

    Molnár, Z; Düllmann, D; Giacomo, G; Kalkhof, A; Valassi, A; CERN. Geneva. IT Department

    2009-01-01

    The CORAL package is the LCG Persistency Framework foundation for accessing relational databases. From the start CORAL has been designed to facilitate the deployment of the LHC experiment database applications in a distributed computing environment. In particular we cover - improvements to database service scalability by client connection management - platform-independent, multi-tier scalable database access by connection multiplexing, caching - a secure authentication and authorisation scheme integrated with existing grid services. We will summarize the deployment experience from several experiment productions using the distributed database infrastructure, which is now available in LCG. Finally, we present perspectives for future developments in this area.
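
The client connection management the abstract credits with improving database service scalability can be sketched as a small pool (this is an illustrative sketch, not CORAL's actual API): many logical sessions are multiplexed over a fixed number of physical connections, which are reused instead of being opened per request.

```python
# Hedged sketch of client-side connection pooling/multiplexing; the class
# and its interface are invented for illustration, not CORAL's API.
import queue
from contextlib import contextmanager

class ConnectionPool:
    def __init__(self, connect, size: int):
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(connect())  # open a fixed number of connections

    @contextmanager
    def session(self):
        conn = self._pool.get()        # block until a connection is free
        try:
            yield conn
        finally:
            self._pool.put(conn)       # return it for reuse

opened = []
pool = ConnectionPool(lambda: opened.append(1) or object(), size=2)
with pool.session() as c1:
    with pool.session() as c2:
        assert c1 is not c2            # two concurrent sessions, two connections
assert len(opened) == 2                # no matter how many sessions run later
```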

  11. Klaim-DB: A Modeling Language for Distributed Database Applications

    DEFF Research Database (Denmark)

    Wu, Xi; Li, Ximeng; Lluch Lafuente, Alberto

    2015-01-01

    We present the modelling language Klaim-DB for distributed database applications. Klaim-DB borrows the distributed nets of the coordination language Klaim but essentially re-incarnates the tuple spaces of Klaim as databases, and provides high-level language abstractions for the access and manipulation of structured data, with integrity and atomicity considerations. We present the formal semantics of Klaim-DB and illustrate the use of the language in a scenario where the sales from different branches of a chain of department stores are aggregated from their local databases. It can be seen that raising the abstraction level and encapsulating integrity checks (concerning the schema of tables, etc.) in the language primitives for database operations benefit the modelling task considerably.

  12. Negative Effects of Learning Spreadsheet Management on Learning Database Management

    Science.gov (United States)

    Vágner, Anikó; Zsakó, László

    2015-01-01

    A lot of students learn spreadsheet management before database management. Their similarities can cause a lot of negative effects when learning database management. In this article, we consider these similarities and explain what can cause problems. First, we analyse the basic concepts such as table, database, row, cell, reference, etc. Then, we…

  13. ISSUES IN MOBILE DISTRIBUTED REAL TIME DATABASES: PERFORMANCE AND REVIEW

    OpenAIRE

    VISHNU SWAROOP,; Gyanendra Kumar Gupta,; UDAI SHANKER

    2011-01-01

    The increase in handy, small electronic devices in computing fields makes computing more popular and useful in business. Tremendous advances in wireless networks and portable computing devices have led to the development of mobile computing. Support for real-time database systems depends upon timing constraints; the availability of data in distributed databases and ubiquitous computing pull the mobile database concept, which emerges as a new form of technology, the mobile distributed ...

  14. PRISMA database machine: A distributed, main-memory approach

    NARCIS (Netherlands)

    Schmidt, J.W.; Apers, Peter M.G.; Ceri, S.; Kersten, Martin L.; Oerlemans, Hans C.M.; Missikoff, M.

    1988-01-01

    The PRISMA project is a large-scale research effort in the design and implementation of a highly parallel machine for data and knowledge processing. The PRISMA database machine is a distributed, main-memory database management system implemented in an object-oriented language that runs on top of a

  15. 21 CFR 203.38 - Sample lot or control numbers; labeling of sample units.

    Science.gov (United States)

    2010-04-01

    ... numbers; labeling of sample units. (a) Lot or control number required on drug sample labeling and sample... identifying lot or control number that will permit the tracking of the distribution of each drug sample unit... 21 Food and Drugs 4 2010-04-01 2010-04-01 false Sample lot or control numbers; labeling of sample...

  16. Data Mining on Distributed Medical Databases: Recent Trends and Future Directions

    Science.gov (United States)

    Atilgan, Yasemin; Dogan, Firat

    As computerization in healthcare services increase, the amount of available digital data is growing at an unprecedented rate and as a result healthcare organizations are much more able to store data than to extract knowledge from it. Today the major challenge is to transform these data into useful information and knowledge. It is important for healthcare organizations to use stored data to improve quality while reducing cost. This paper first investigates the data mining applications on centralized medical databases, and how they are used for diagnostic and population health, then introduces distributed databases. The integration needs and issues of distributed medical databases are described. Finally the paper focuses on data mining studies on distributed medical databases.

  17. Secure Distributed Databases Using Cryptography

    OpenAIRE

    Ion IVAN; Cristian TOMA

    2006-01-01

    Computational encryption is used intensively by database management systems to ensure the privacy and integrity of information that is physically stored in files. The information is also sent over the network and replicated on different distributed systems. It is proved that a satisfying level of security is achieved if the rows and columns of tables are encrypted independently of the table or computer that holds the data. It is also very important that the SQL - Structured Que...
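
    The cell-independent encryption described above can be sketched as a toy illustration. The SHA-256 keystream, the cell-id scheme, and all function names below are assumptions made for demonstration, not the authors' actual construction (which would use a proper cipher):

    ```python
    import hashlib

    def keystream(key: bytes, cell_id: str, n: int) -> bytes:
        # Derive a per-cell keystream so each cell is encrypted independently
        # of the table or machine holding it.
        out = b""
        counter = 0
        while len(out) < n:
            out += hashlib.sha256(key + cell_id.encode() + counter.to_bytes(4, "big")).digest()
            counter += 1
        return out[:n]

    def encrypt_cell(key: bytes, table: str, row: int, col: str, value: str) -> bytes:
        cell_id = f"{table}/{row}/{col}"      # unique per (table, row, column)
        ks = keystream(key, cell_id, len(value.encode()))
        return bytes(a ^ b for a, b in zip(value.encode(), ks))

    def decrypt_cell(key: bytes, table: str, row: int, col: str, blob: bytes) -> str:
        cell_id = f"{table}/{row}/{col}"
        ks = keystream(key, cell_id, len(blob))
        return bytes(a ^ b for a, b in zip(blob, ks)).decode()

    key = b"demo-key"
    ct = encrypt_cell(key, "patients", 7, "name", "Alice")
    assert decrypt_cell(key, "patients", 7, "name", ct) == "Alice"
    ```

    Because every cell gets its own keystream, replicating an encrypted cell to another node reveals nothing about sibling cells in the same row or column.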

  18. Optimal pricing and lot-sizing decisions under Weibull distribution deterioration and trade credit policy

    Directory of Open Access Journals (Sweden)

    Manna S.K.

    2008-01-01

    In this paper, we consider the problem of simultaneous determination of retail price and lot size (RPLS) under the assumption that the supplier offers a fixed credit period to the retailer. It is assumed that the item in stock deteriorates over time at a rate that follows a two-parameter Weibull distribution, and that the price-dependent demand is represented by a constant-price-elasticity function of the retail price. The RPLS decision model is developed and solved analytically. Results are illustrated with the help of a base example. Computational results show that the supplier earns more profit when the credit period is longer than the replenishment cycle. Sensitivity of the solution to changes in the input parameters of the base example is also discussed.
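
    The two-parameter Weibull deterioration assumed in the model can be illustrated as follows; the parameter values below are made up for the example:

    ```python
    import math

    def deterioration_rate(t: float, alpha: float, beta: float) -> float:
        # Instantaneous deterioration rate of a two-parameter Weibull process.
        return alpha * beta * t ** (beta - 1)

    def surviving_fraction(t: float, alpha: float, beta: float) -> float:
        # Fraction of the lot still usable at time t: exp(-alpha * t**beta).
        return math.exp(-alpha * t ** beta)

    # With beta > 1 the rate increases with time (aging stock spoils faster).
    assert surviving_fraction(0.0, 0.05, 2.0) == 1.0
    assert surviving_fraction(2.0, 0.05, 2.0) < surviving_fraction(1.0, 0.05, 2.0)
    ```

    With beta = 1 this reduces to constant exponential deterioration; beta > 1 models items whose spoilage accelerates with age, which is what makes the credit period versus replenishment cycle trade-off non-trivial.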

  19. Managing Consistency Anomalies in Distributed Integrated Databases with Relaxed ACID Properties

    DEFF Research Database (Denmark)

    Frank, Lars; Ulslev Pedersen, Rasmus

    2014-01-01

    In central databases the consistency of data is normally implemented by using the ACID (Atomicity, Consistency, Isolation and Durability) properties of a DBMS (Database Management System). This is not possible if distributed and/or mobile databases are involved and the availability of data also has to be optimized. Therefore, we will in this paper use so-called relaxed ACID properties across different locations. The objective of designing relaxed ACID properties across different database locations is that users can trust the data they use even if the distributed database is temporarily inconsistent. It is also important that disconnected locations can operate in a meaningful way in so-called disconnected mode. A database is DBMS consistent if its data complies with the consistency rules of the DBMS's metadata. If the database is DBMS consistent both when a transaction starts and when it has...

  20. Column-Oriented Database Systems (Tutorial)

    NARCIS (Netherlands)

    D. Abadi; P.A. Boncz (Peter); S. Harizopoulos

    2009-01-01

    textabstractColumn-oriented database systems (column-stores) have attracted a lot of attention in the past few years. Column-stores, in a nutshell, store each database table column separately, with attribute values belonging to the same column stored contiguously, compressed, and densely packed, as
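
    The column-wise layout described in the abstract can be sketched in a few lines; the toy table and field names are invented for illustration:

    ```python
    # Row store: one record after another.
    rows = [(1, "ann", 34), (2, "bob", 29), (3, "cy", 41)]

    # Column store: each attribute stored contiguously.
    cols = {
        "id":   [1, 2, 3],
        "name": ["ann", "bob", "cy"],
        "age":  [34, 29, 41],
    }

    # Scanning one attribute touches only that column's values
    # (contiguous and hence easy to compress)...
    ages = cols["age"]
    # ...whereas a row store must walk every record.
    ages_from_rows = [r[2] for r in rows]

    assert ages == ages_from_rows == [34, 29, 41]
    ```

    This is why analytic queries that read a few columns of a wide table favor column stores, while point lookups and updates of whole records favor row stores.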

  1. Column-oriented database management systems

    OpenAIRE

    Možina, David

    2013-01-01

    In this thesis I present column-oriented databases. Among other things, I answer the question of why there is a need for a column-oriented database. In recent years there has been a lot of attention on column-oriented databases, even though columnar database management systems date back to the early seventies of the last century. I compare both systems for database management - a column-oriented database system and a row-oriented database system ...

  2. Evaluation of Radon Pollution in Underground Parking Lots by Discomfort Index

    Directory of Open Access Journals (Sweden)

    AH Bu-Olayan

    2016-06-01

    Introduction: Recent studies of public underground parking lots showed the influence of radon concentration and the probable discomfort caused by parking cars. Materials and Methods: Radon concentration was measured in semi-closed public parking lots in the six governorates of Kuwait, using a Durridge RAD7 radon detector (USA). Results: The peak radon concentration in the parking lots of the Kuwait governorates was relatively higher during winter (63.15 Bq/m3) compared to summer (41.73 Bq/m3). Radon in the evaluated parking lots revealed a mean annual absorbed dose (DRn) of 0.02 mSv/y and an annual effective dose (HE) of 0.06 mSv/y. Conclusion: This study validated the influence of relative humidity and temperature as the major components of the discomfort index (DI). The mean annual absorbed and effective doses of radon in the evaluated parking lots were found to be below the permissible limits. However, high radon DRn and HE were reported when the assessment included the parking lots, the surrounding residential apartments, and office premises. Furthermore, the time-series analysis indicated significant variations in the seasonal and site-wise distribution of radon concentrations in the indoor evaluated parking lots of the six Kuwait governorates.
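
    A commonly used formulation of the discomfort index from temperature and relative humidity is Thom's; whether the study used exactly this variant is an assumption here, and the example values are illustrative:

    ```python
    def discomfort_index(t_celsius: float, rh_percent: float) -> float:
        # Thom's discomfort index: combines air temperature and
        # relative humidity into one perceived-discomfort value.
        return t_celsius - 0.55 * (1 - 0.01 * rh_percent) * (t_celsius - 14.5)

    # At 100% RH the index equals the air temperature;
    # drier air lowers the perceived discomfort.
    assert abs(discomfort_index(30.0, 100.0) - 30.0) < 1e-9
    assert discomfort_index(30.0, 40.0) < discomfort_index(30.0, 80.0)
    ```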

  3. Distributed Pseudo-Random Number Generation and Its Application to Cloud Database

    OpenAIRE

    Chen, Jiageng; Miyaji, Atsuko; Su, Chunhua

    2014-01-01

    Cloud databases are a rapidly growing trend in the cloud computing market. They enable clients to run their computation on outsourced databases or to access distributed database services in the cloud. At the same time, security and privacy concerns are a major challenge for cloud databases to continue growing. To enhance the security and privacy of cloud database technology, pseudo-random number generation (PRNG) plays an important role in data encryption and privacy-pr...

  4. ARACHNID: A prototype object-oriented database tool for distributed systems

    Science.gov (United States)

    Younger, Herbert; Oreilly, John; Frogner, Bjorn

    1994-01-01

    This paper discusses the results of a Phase 2 SBIR project sponsored by NASA and performed by MIMD Systems, Inc. A major objective of this project was to develop specific concepts for improved performance in accessing large databases. An object-oriented and distributed approach was used for the general design, while a geographical decomposition was used as a specific solution. The resulting software framework is called ARACHNID. The Faint Source Catalog developed by NASA was the initial database testbed. This is a database of many gigabytes, where an order-of-magnitude improvement in query speed is being sought. The database contains faint infrared point sources obtained from telescope measurements of the sky. A geographical decomposition of this database is an attractive approach to dividing it into pieces. Each piece can then be searched on individual processors, with only a weak data linkage between the processors being required. As a further demonstration of the concepts implemented in ARACHNID, a tourist information system is discussed. This version of ARACHNID is the commercial result of the project. It is a distributed, networked database application where speed, maintenance, and reliability are important considerations. This paper focuses on the design concepts and technologies that form the basis for ARACHNID.

  5. Development of database on the distribution coefficient. 1. Collection of the distribution coefficient data

    Energy Technology Data Exchange (ETDEWEB)

    Takebe, Shinichi; Abe, Masayoshi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2001-03-01

    The distribution coefficient is a very important parameter for environmental impact assessment of the disposal of radioactive waste arising from research institutes. A literature survey within the country was carried out for the purpose of selecting reasonable distribution coefficient values for use in safety evaluation. This report arranges the information on the distribution coefficient from each literature source for input to the database, and summarizes it as literature information data on the distribution coefficient. (author)
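
    The distribution coefficient (Kd) itself is a simple ratio of sorbed to dissolved concentration; a minimal sketch, with units and example values assumed for illustration:

    ```python
    def distribution_coefficient(c_solid: float, c_liquid: float) -> float:
        # Kd (mL/g): ratio of the concentration sorbed on the solid phase
        # (e.g. Bq per gram of soil) to the concentration remaining in
        # solution (e.g. Bq per mL of water) at equilibrium.
        return c_solid / c_liquid

    # Example: 200 Bq/g sorbed vs 0.5 Bq/mL in solution -> Kd = 400 mL/g.
    assert distribution_coefficient(200.0, 0.5) == 400.0
    ```

    A higher Kd means stronger retention by the solid phase and slower migration of the radionuclide, which is why the choice of Kd values dominates safety evaluations like the one described.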

  6. Changing the values of parameters on lot size reorder point model

    Directory of Open Access Journals (Sweden)

    Chang Hung-Chi

    2003-01-01

    The Just-In-Time (JIT) philosophy has received a great deal of attention. Several actions, such as improving quality, reducing setup cost and shortening lead time, have been recognized as effective ways to achieve the underlying goal of JIT. This paper considers a partial-backorder, lot size reorder point inventory system with an imperfect production process. The objective is to simultaneously optimize the lot size, reorder point, process quality, setup cost and lead time, subject to a service-level constraint. We assume that the explicit distributional form of lead time demand is unknown, but that the mean and standard deviation are given. The minimax distribution-free approach is utilized to solve the problem, and a numerical example is provided to illustrate the results.
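
    Under the minimax distribution-free approach, only the mean and standard deviation of lead-time demand are used. A standard Scarf-type worst-case bound on expected shortage can be sketched as follows; that this is the exact bound used in the paper is an assumption, and the numbers are illustrative:

    ```python
    import math

    def worst_case_expected_shortage(mu: float, sigma: float, r: float) -> float:
        # Distribution-free upper bound on expected shortage E[(X - r)+]
        # when only the mean mu and standard deviation sigma of lead-time
        # demand X are known:
        #   E[(X - r)+] <= 0.5 * (sqrt(sigma^2 + (r - mu)^2) - (r - mu))
        return 0.5 * (math.sqrt(sigma ** 2 + (r - mu) ** 2) - (r - mu))

    # The bound shrinks as the reorder point r moves above the mean demand.
    assert worst_case_expected_shortage(100, 10, 100) == 5.0
    assert worst_case_expected_shortage(100, 10, 120) < worst_case_expected_shortage(100, 10, 110)
    ```

    Minimizing cost against this bound protects the policy against the worst demand distribution consistent with the given mean and standard deviation.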

  7. Income distribution patterns from a complete social security database

    Science.gov (United States)

    Derzsy, N.; Néda, Z.; Santos, M. A.

    2012-11-01

    We analyze the income distribution of employees for 9 consecutive years (2001-2009) using a complete social security database for an economically important district of Romania. The database contains detailed information on more than half a million taxpayers, including their monthly salaries from all employers where they worked. Besides studying the characteristic distribution functions in the high and low/medium income limits, the database allows a detailed dynamical study by following the time evolution of the taxpayers' income. To our knowledge, this is the first extensive study of this kind (a previous Japanese taxpayers survey was limited to two years). In the high income limit we prove once again the validity of Pareto's law, obtaining a perfect scaling over four orders of magnitude in the rank for all the studied years. The obtained Pareto exponents are quite stable with values around α≈2.5, in spite of the fact that during this period the economy developed rapidly and a financial-economic crisis hit Romania in 2007-2008. For the low and medium income category we confirmed the exponential-type income distribution. Following the income of employees in time, we found that the top of the income distribution is a highly dynamical region with strong fluctuations in the rank. In this region, the observed dynamics is consistent with a multiplicative random growth hypothesis. Contrary to previous results obtained for Japanese employees, we find that the logarithmic growth rate is not independent of income.
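
    Pareto tail exponents like the α ≈ 2.5 reported here are commonly estimated with Hill's maximum-likelihood estimator; the abstract does not state the authors' exact fitting procedure, so the sketch below, on synthetic data, is only an illustration of the technique:

    ```python
    import math
    import random

    def hill_pareto_exponent(incomes, x_min):
        # Hill's maximum-likelihood estimator for the Pareto tail exponent:
        #   alpha = n / sum(ln(x_i / x_min))  over the n values above x_min.
        tail = [x for x in incomes if x >= x_min]
        return len(tail) / sum(math.log(x / x_min) for x in tail)

    # Synthetic Pareto(alpha=2.5) sample via inverse-transform sampling:
    # X = x_min * U^(-1/alpha) with U uniform on (0, 1).
    random.seed(42)
    alpha_true, x_min = 2.5, 1000.0
    sample = [x_min * random.random() ** (-1.0 / alpha_true) for _ in range(100_000)]

    alpha_hat = hill_pareto_exponent(sample, x_min)
    assert abs(alpha_hat - alpha_true) < 0.2
    ```

    The same estimator applied year by year would reveal the stability of the exponent that the study reports.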

  8. A Database Approach to Distributed State Space Generation

    NARCIS (Netherlands)

    Blom, Stefan; Lisser, Bert; van de Pol, Jan Cornelis; Weber, M.

    2007-01-01

    We study distributed state space generation on a cluster of workstations. It is explained why state space partitioning by a global hash function is problematic when states contain variables from unbounded domains, such as lists or other recursive datatypes. Our solution is to introduce a database
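
    Global hash partitioning of the state space can be sketched as follows; the hashing scheme and worker count are illustrative assumptions. (The paper's point is that such hashing becomes problematic when states must first be serialized from unbounded recursive types such as lists.)

    ```python
    import hashlib

    def owner(state: bytes, n_workers: int) -> int:
        # Global hash partitioning: every worker computes the same owner
        # for a given (serialized) state, so each state is stored and
        # explored by exactly one worker.
        h = int.from_bytes(hashlib.sha256(state).digest()[:8], "big")
        return h % n_workers

    states = [b"s0", b"s1", b"s2", b"s0"]        # duplicate successor b"s0"
    owners = [owner(s, 4) for s in states]
    assert owners[0] == owners[3]                # duplicates map to the same worker
    assert all(0 <= o < 4 for o in owners)
    ```

    When states contain values from unbounded domains, computing a stable serialization (and hence a stable hash) across workers is the hard part, which motivates the database-backed alternative the paper introduces.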

  9. A Database Approach to Distributed State Space Generation

    NARCIS (Netherlands)

    Blom, Stefan; Lisser, Bert; van de Pol, Jan Cornelis; Weber, M.; Cerna, I.; Haverkort, Boudewijn R.H.M.

    2008-01-01

    We study distributed state space generation on a cluster of workstations. It is explained why state space partitioning by a global hash function is problematic when states contain variables from unbounded domains, such as lists or other recursive datatypes. Our solution is to introduce a database

  10. Military Observer Mission Ecuador-Peru (MOMEP) Doing a Lot with a Little.

    Science.gov (United States)

    1997-06-01

    MILITARY OBSERVER MISSION ECUADOR-PERU (MOMEP): Doing a Lot with a Little, by Lieutenant Colonel Kevin M. Higgins, United States Army, Naval Postgraduate School. DISTRIBUTION STATEMENT A.

  11. Column-Oriented Database Systems (Tutorial)

    OpenAIRE

    Abadi, D.; Boncz, Peter; Harizopoulos, S.

    2009-01-01

    textabstractColumn-oriented database systems (column-stores) have attracted a lot of attention in the past few years. Column-stores, in a nutshell, store each database table column separately, with attribute values belonging to the same column stored contiguously, compressed, and densely packed, as opposed to traditional database systems that store entire records (rows) one after the other. Reading a subset of a table’s columns becomes faster, at the potential expense of excessive disk-head s...

  12. Palantiri: a distributed real-time database system for process control

    International Nuclear Information System (INIS)

    Tummers, B.J.; Heubers, W.P.J.

    1992-01-01

    The medium-energy accelerator MEA, located in Amsterdam, is controlled by a heterogeneous computer network. A large real-time database contains the parameters involved in the control of the accelerator and the experiments. This database system was implemented about ten years ago and has since been extended several times. In response to increased needs, the database system has been redesigned. The new database environment, as described in this paper, consists of two new concepts: (1) a Palantir, a per-machine process that stores the locally declared data and forwards all non-local requests for data access to the appropriate machine. It acts as a storage device for data and a looking glass upon the world. (2) Golems: working units that define the data within the Palantir and that have knowledge of the hardware they control. Applications access the data of a Golem by name (names resemble Unix path names). The Palantir that runs on the same machine as the application handles the distribution of access requests. This paper focuses on the Palantir concept as a distributed data storage and event-handling device for process control. (author)

  13. Schema architecture and their relationships to transaction processing in distributed database systems

    NARCIS (Netherlands)

    Apers, Peter M.G.; Scheuermann, P.

    1991-01-01

    We discuss the different types of schema architectures which could be supported by distributed database systems, making a clear distinction between logical, physical, and federated distribution. We elaborate on the additional mapping information required in architectures based on logical distribution

  14. GPCALMA: A Tool For Mammography With A GRID-Connected Distributed Database

    International Nuclear Information System (INIS)

    Bottigli, U.; Golosio, B.; Masala, G.L.; Oliva, P.; Stumbo, S.; Cerello, P.; Cheran, S.; Delogu, P.; Fantacci, M.E.; Retico, A.; Fauci, F.; Magro, R.; Raso, G.; Lauria, A.; Palmiero, R.; Lopez Torres, E.; Tangaro, S.

    2003-01-01

    The GPCALMA (Grid Platform for Computer Assisted Library for MAmmography) collaboration involves several departments of physics, INFN (National Institute of Nuclear Physics) sections, and Italian hospitals. The aim of this collaboration is to develop a tool that can help radiologists in the early detection of breast cancer. GPCALMA has built a large distributed database of digitised mammographic images (about 5500 images corresponding to 1650 patients) and developed CAD (Computer Aided Detection) software which is integrated in a station that can also be used to acquire new images, serve as an archive, and perform statistical analysis. The images (18x24 cm2, digitised by a CCD linear scanner with an 85 μm pitch and 4096 gray levels) are completely described: pathological ones have a characterization consistent with the radiologist's diagnosis and histological data; non-pathological ones correspond to patients with a follow-up of at least three years. The distributed database is realized through the connection of all the hospitals and research centers using GRID technology. In each hospital, local patients' digital images are stored in the local database. Using the GRID connection, GPCALMA will allow each node to work on distributed database data as well as local database data. Using its database, the GPCALMA tools perform several analyses. A texture analysis, i.e. an automated classification into adipose, dense or glandular texture, can be provided by the system. GPCALMA software also allows classification of pathological features, in particular massive lesion (both opacities and spiculated lesions) analysis and microcalcification cluster analysis. The detection of pathological features is made using neural network software that provides a selection of areas showing a given 'suspicion level' of lesion occurrence. The performance of the GPCALMA system will be presented in terms of ROC (Receiver Operating Characteristic) curves. The results of the GPCALMA system as 'second reader' will also

  15. A distributed database view of network tracking systems

    Science.gov (United States)

    Yosinski, Jason; Paffenroth, Randy

    2008-04-01

    In distributed tracking systems, multiple non-collocated trackers cooperate to fuse local sensor data into a global track picture. Generating this global track picture at a central location is fairly straightforward, but the single point of failure and excessive bandwidth requirements introduced by centralized processing motivate the development of decentralized methods. In many decentralized tracking systems, trackers communicate with their peers via a lossy, bandwidth-limited network in which dropped, delayed, and out of order packets are typical. Oftentimes the decentralized tracking problem is viewed as a local tracking problem with a networking twist; we believe this view can underestimate the network complexities to be overcome. Indeed, a subsequent 'oversight' layer is often introduced to detect and handle track inconsistencies arising from a lack of robustness to network conditions. We instead pose the decentralized tracking problem as a distributed database problem, enabling us to draw inspiration from the vast extant literature on distributed databases. Using the two-phase commit algorithm, a well known technique for resolving transactions across a lossy network, we describe several ways in which one may build a distributed multiple hypothesis tracking system from the ground up to be robust to typical network intricacies. We pay particular attention to the dissimilar challenges presented by network track initiation vs. maintenance and suggest a hybrid system that balances speed and robustness by utilizing two-phase commit for only track initiation transactions. Finally, we present simulation results contrasting the performance of such a system with that of more traditional decentralized tracking implementations.
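
    The two-phase commit protocol leaned on above can be sketched minimally as follows; the class and method names are invented for illustration, and a real implementation would add the durable logging and timeout handling that the protocol requires:

    ```python
    from enum import Enum

    class Vote(Enum):
        COMMIT = 1
        ABORT = 2

    class Participant:
        def __init__(self, name: str, healthy: bool = True):
            self.name, self.healthy = name, healthy
            self.committed = False

        def prepare(self) -> Vote:
            # Phase 1: a participant that cannot guarantee the transaction
            # (e.g. a tracker behind a lossy link) votes ABORT.
            return Vote.COMMIT if self.healthy else Vote.ABORT

        def commit(self):
            self.committed = True

        def abort(self):
            self.committed = False

    def two_phase_commit(participants) -> bool:
        # Phase 1: collect votes; Phase 2: commit only on unanimity,
        # otherwise abort everywhere.
        if all(p.prepare() is Vote.COMMIT for p in participants):
            for p in participants:
                p.commit()
            return True
        for p in participants:
            p.abort()
        return False

    trackers = [Participant("t1"), Participant("t2"), Participant("t3")]
    assert two_phase_commit(trackers) is True
    trackers.append(Participant("t4", healthy=False))
    assert two_phase_commit(trackers) is False
    ```

    Reserving this heavier protocol for track initiation only, as the paper suggests, keeps routine track maintenance fast while still guaranteeing that all trackers agree on which tracks exist.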

  16. Database modeling and design logical design

    CERN Document Server

    Teorey, Toby J; Nadeau, Tom; Jagadish, HV

    2011-01-01

    Database systems and database design technology have undergone significant evolution in recent years. The relational data model and relational database systems dominate business applications; in turn, they are extended by other technologies like data warehousing, OLAP, and data mining. How do you model and design your database application in consideration of new technology or new business needs? In the extensively revised fifth edition, you'll get clear explanations, lots of terrific examples and an illustrative case, and the really practical advice you have come to count on--with design rules

  17. Database modeling and design logical design

    CERN Document Server

    Teorey, Toby J; Nadeau, Tom; Jagadish, HV

    2005-01-01

    Database systems and database design technology have undergone significant evolution in recent years. The relational data model and relational database systems dominate business applications; in turn, they are extended by other technologies like data warehousing, OLAP, and data mining. How do you model and design your database application in consideration of new technology or new business needs? In the extensively revised fourth edition, you'll get clear explanations, lots of terrific examples and an illustrative case, and the really practical advice you have come to count on--with design rul

  18. Distributed Access View Integrated Database (DAVID) system

    Science.gov (United States)

    Jacobs, Barry E.

    1991-01-01

    The Distributed Access View Integrated Database (DAVID) System, which was adopted by the Astrophysics Division for their Astrophysics Data System, is a solution to the system heterogeneity problem. The heterogeneous components of the Astrophysics problem is outlined. The Library and Library Consortium levels of the DAVID approach are described. The 'books' and 'kits' level is discussed. The Universal Object Typer Management System level is described. The relation of the DAVID project with the Small Business Innovative Research (SBIR) program is explained.

  19. La vallée du Lot en Lot-et-Garonne : inventaire topographique

    Directory of Open Access Journals (Sweden)

    Hélène Mousset

    2012-04-01

    The re-opening of the Lot to navigation was at the origin of the project to inventory the heritage of the valley in its Lot-et-Garonne section. The extent of the territory - 12 riverside cantons - and of the historical perspective - from the Middle Ages to the present day - demanded rigour and clear objectives from the outset: the reasoned method of the topographical inventory, applied to produce a homogeneous survey of the heritage, founded on a systematic study of the built landscape and public furnishings, without preconceptions. The first result is a heritage catalogue in the form of databases. But this heterogeneous and dense documentary corpus is not a mere addition of monographs: it can and must be interrogated and exploited as a whole, bringing renewed knowledge of the territory. Without claiming to synthesize all the data for the whole valley, the examples that follow illustrate the way in which the inventory work brings both answers and new questions, concerning in particular land use, landscapes and architecture in this part of the Agenais. Searching for the imprint of a given period, examining the permanence of built landscapes over the long term, and observing the traces of change and historical turning points constitute the three levels of analysis expected in the framework of an inventory of a vast rural territory.

  20. Distributed data collection for a database of radiological image interpretations

    Science.gov (United States)

    Long, L. Rodney; Ostchega, Yechiam; Goh, Gin-Hua; Thoma, George R.

    1997-01-01

    The National Library of Medicine, in collaboration with the National Center for Health Statistics and the National Institute for Arthritis and Musculoskeletal and Skin Diseases, has built a system for collecting radiological interpretations for a large set of x-ray images acquired as part of the data gathered in the second National Health and Nutrition Examination Survey. This system is capable of delivering across the Internet 5- and 10-megabyte x-ray images to Sun workstations equipped with X Window based 2048 X 2560 image displays, for the purpose of having these images interpreted for the degree of presence of particular osteoarthritic conditions in the cervical and lumbar spines. The collected interpretations can then be stored in a database at the National Library of Medicine, under control of the Illustra DBMS. This system is a client/server database application which integrates (1) distributed server processing of client requests, (2) a customized image transmission method for faster Internet data delivery, (3) distributed client workstations with high resolution displays, image processing functions and an on-line digital atlas, and (4) relational database management of the collected data.

  1. DCODE: A Distributed Column-Oriented Database Engine for Big Data Analytics

    OpenAIRE

    Liu, Yanchen; Cao, Fang; Mortazavi, Masood; Chen, Mengmeng; Yan, Ning; Ku, Chi; Adnaik, Aniket; Morgan, Stephen; Shi, Guangyu; Wang, Yuhu; Fang, Fan

    2015-01-01

    We propose a novel Distributed Column-Oriented Database Engine (DCODE) for efficient analytic query processing that combines the advantages of both column storage and parallel processing. In DCODE, we enhance an existing open-source columnar database engine by adding the capability for handling queries over a cluster. Specifically, we studied parallel query execution and optimization techniques such as horizontal partitioning, exchange op...

  2. A Database for Decision-Making in Training and Distributed Learning Technology

    National Research Council Canada - National Science Library

    Stouffer, Virginia

    1998-01-01

    .... A framework for incorporating data about distributed learning courseware into the existing training database was devised and a plan for a national electronic courseware redistribution network was recommended...

  3. The response time distribution in a real-time database with optimistic concurrency control

    NARCIS (Netherlands)

    Sassen, S.A.E.; Wal, van der J.

    1996-01-01

    For a real-time shared-memory database with optimistic concurrency control, an approximation for the transaction response time distribution is obtained. The model assumes that transactions arrive at the database according to a Poisson process, that every transaction uses an equal number of

  4. Solving portfolio selection problems with minimum transaction lots based on conditional-value-at-risk

    Science.gov (United States)

    Setiawan, E. P.; Rosadi, D.

    2017-01-01

    Portfolio selection problems conventionally mean 'minimizing the risk, given a certain level of return' from some financial assets. This problem is frequently solved with quadratic or linear programming methods, depending on the risk measure used in the objective function. However, the solutions obtained by these methods are real numbers, which may cause problems in real applications because each asset usually has a minimum transaction lot. Classical approaches considering minimum transaction lots were developed based on linear Mean Absolute Deviation (MAD), variance (as in Markowitz's model), and semi-variance as risk measures. In this paper we investigate portfolio selection methods with minimum transaction lots, with conditional value-at-risk (CVaR) as the risk measure. The mean-CVaR methodology involves only the part of the tail of the distribution that contributes to high losses. This approach works better with non-symmetric return probability distributions. Solutions of this method can be found with Genetic Algorithm (GA) methods. We provide real examples using stocks from the Indonesian stock market.
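
    The empirical CVaR objective used in such mean-CVaR formulations can be sketched as a tail average over loss scenarios; the confidence level and data below are illustrative, not from the paper:

    ```python
    def cvar(losses, beta=0.95):
        # Conditional value-at-risk at level beta: the mean loss over the
        # worst (1 - beta) fraction of equally likely scenarios.
        ordered = sorted(losses, reverse=True)
        k = max(1, int(round(len(ordered) * (1 - beta))))
        return sum(ordered[:k]) / k

    losses = list(range(1, 101))            # 100 equally likely loss scenarios
    assert cvar(losses, beta=0.95) == 98.0  # mean of the 5 worst losses: 96..100
    ```

    Because only the worst tail enters the objective, mean-CVaR is sensitive to exactly the asymmetric downside that variance-based models average away; the GA then searches over integer lot counts rather than real-valued weights.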

  5. Experience using a distributed object oriented database for a DAQ system

    International Nuclear Information System (INIS)

    Bee, C.P.; Eshghi, S.; Jones, R.

    1996-01-01

    To configure the RD13 data acquisition system, we need many parameters which describe the various hardware and software components. Such information has been defined using an entity-relationship model and stored in a commercial memory-resident database. During the last year, Itasca, an object-oriented database management system (OODB), was chosen as a replacement database system. We have ported the existing databases (hardware and software configurations, run parameters etc.) to Itasca and integrated it with the run control system. We believe that it is possible to use an OODB in real-time environments such as DAQ systems. In this paper, we present our experience and impressions: why we wanted to change from an entity-relational approach, some useful features of Itasca, and the issues we met during this project, including integration of the database into an existing distributed environment and factors which influence performance. (author)

  6. Application of new type of distributed multimedia databases to networked electronic museum

    Science.gov (United States)

    Kuroda, Kazuhide; Komatsu, Naohisa; Komiya, Kazumi; Ikeda, Hiroaki

    1999-01-01

    Recently, various kinds of multimedia application systems have actively been developed, based on advances in high-speed communication networks, computer processing technologies, and digital content-handling technologies. Against this background, this paper proposes a new distributed multimedia database system which can effectively perform a new function of cooperative retrieval among distributed databases. The proposed system introduces a new concept of a 'retrieval manager', which functions as an intelligent controller so that the user can recognize a set of distributed databases as one logical database. The logical database dynamically generates and executes a preferred combination of retrieval parameters on the basis of both directory data and the system environment. Moreover, a concept of 'domain' is defined in the system as a managing unit of retrieval. Retrieval can be performed effectively through cooperative processing among multiple domains. Communication languages and protocols are also defined in the system. These are used in every action for communication in the system. A language interpreter in each machine translates a communication language into the internal language used by that machine. Using the language interpreter, internal processing and such internal modules as the DBMS and user interface modules can be selected freely. A concept of 'content-set' is also introduced. A content-set is defined as a package of contents that are related to each other. The system handles a content-set as one object. The user terminal can effectively control the display of retrieved contents, referring to data indicating the relations among the contents in the content-set. In order to verify the function of the proposed system, a networked electronic museum was experimentally built.
    The results of this experiment indicate that the proposed system can effectively retrieve the objective contents under the control of a number of distributed

  7. New model for distributed multimedia databases and its application to networking of museums

    Science.gov (United States)

    Kuroda, Kazuhide; Komatsu, Naohisa; Komiya, Kazumi; Ikeda, Hiroaki

    1998-02-01

This paper proposes a new distributed multimedia database system in which the databases storing MPEG-2 videos and/or super-high-definition images are connected together through B-ISDNs, and also describes an example of the networking of museums on the basis of the proposed database system. The proposed database system introduces a new concept of the 'retrieval manager', which functions as an intelligent controller so that the user can recognize a set of image databases as one logical database. A user terminal issues a retrieval request to the retrieval manager located nearest to it on the network. The retrieved contents are then sent directly through the B-ISDNs to the user terminal from the server which stores the designated contents. In this case, the designated logical database dynamically generates the best combination of retrieval parameters, such as the data transfer path, referring to directory data and the environment of the system. The generated retrieval parameters are then executed to select the most suitable data transfer path on the network, so that the best combination of these parameters fits the distributed multimedia database system.

  8. Present and future status of distributed database for nuclear materials (Data-Free-Way)

    International Nuclear Information System (INIS)

    Fujita, Mitsutane; Xu, Yibin; Kaji, Yoshiyuki; Tsukada, Takashi

    2004-01-01

Data-Free-Way (DFW) is a distributed database for nuclear materials. DFW has been developed since 1990 by three organizations: the National Institute for Materials Science (NIMS), the Japan Atomic Energy Research Institute (JAERI) and the Japan Nuclear Cycle Development Institute (JNC). Each organization constructs a materials database in its strongest field, and the members of the three organizations can use these databases via the Internet. The construction of DFW, the stored data, an outline of the knowledge data system, the data manufacturing of the knowledge note, and the activities of the three organizations are described. For NIMS, the nuclear reaction database for materials is explained; for JAERI, data analysis using IASCC data in JMPD is covered. The main databases of JNC are the 'Experimental database of coexistence of engineering ceramics in liquid sodium at high temperature', the 'Tensile test database of irradiated 304 stainless steel' and the 'Technical information database'. (S.Y.)

  9. Distributed Database Semantic Integration of Wireless Sensor Network to Access the Environmental Monitoring System

    Directory of Open Access Journals (Sweden)

    Ubaidillah Umar

    2018-06-01

    Full Text Available A wireless sensor network (WSN works continuously to gather information from sensors that generate large volumes of data to be handled and processed by applications. Current efforts in sensor networks focus more on networking and development services for a variety of applications and less on processing and integrating data from heterogeneous sensors. There is an increased need for information to become shareable across different sensors, database platforms, and applications that are not easily implemented in traditional database systems. To solve the issue of these large amounts of data from different servers and database platforms (including sensor data, a semantic sensor web service platform is needed to enable a machine to extract meaningful information from the sensor’s raw data. This additionally helps to minimize and simplify data processing and to deduce new information from existing data. This paper implements a semantic web data platform (SWDP to manage the distribution of data sensors based on the semantic database system. SWDP uses sensors for temperature, humidity, carbon monoxide, carbon dioxide, luminosity, and noise. The system uses the Sesame semantic web database for data processing and a WSN to distribute, minimize, and simplify information processing. The sensor nodes are distributed in different places to collect sensor data. The SWDP generates context information in the form of a resource description framework. The experiment results demonstrate that the SWDP is more efficient than the traditional database system in terms of memory usage and processing time.
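The triple-based storage the abstract describes can be illustrated without any particular semantic web stack. A minimal sketch in plain Python, with invented node and property names (the actual SWDP uses the Sesame database and real RDF vocabularies):

```python
# Minimal sketch (not the paper's SWDP): sensor readings stored as
# RDF-style (subject, predicate, object) triples in a plain list.
# All names (ex:node1, ex:temperature, ...) are illustrative.

def add_reading(store, node, quantity, value):
    """Record one sensor reading as a set of triples."""
    rid = f"ex:reading{len(store)}"
    store.extend([
        (rid, "ex:measuredBy", node),
        (rid, "ex:quantity", quantity),
        (rid, "ex:value", value),
    ])
    return rid

def query(store, predicate, obj):
    """Return subjects of all triples matching (?, predicate, obj)."""
    return [s for (s, p, o) in store if p == predicate and o == obj]

store = []
add_reading(store, "ex:node1", "ex:temperature", 21.5)
add_reading(store, "ex:node2", "ex:humidity", 60.0)
add_reading(store, "ex:node1", "ex:temperature", 22.1)

# All readings taken by node1, regardless of which server stored them:
node1_readings = query(store, "ex:measuredBy", "ex:node1")
```

Because every fact has the same triple shape, readings from heterogeneous sensors and servers can be merged and queried uniformly, which is the property the paper exploits.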

  10. Research and Implementation of Distributed Database HBase Monitoring System

    Directory of Open Access Journals (Sweden)

    Guo Lisi

    2017-01-01

Full Text Available With the arrival of the big data age, the distributed database HBase has become an important tool for storing data at massive scale. The normal operation of the HBase database is an important guarantee of the security of data storage, so designing a reasonable HBase monitoring system is of great practical significance. In this article, we introduce a solution, containing performance monitoring and fault alarm function modules, that meets an operator's demand for HBase database monitoring in their actual production projects. We designed a monitoring system which consists of a flexible and extensible monitoring agent, a monitoring server based on the SSM architecture, and a concise monitoring display layer. Moreover, in order to deal with the problem that pages rendered too slowly in actual operation, we present a solution: reducing the number of SQL queries. It has been proved that reducing SQL queries can effectively improve system performance and user experience. The system works well in monitoring the status of the HBase database, flexibly extending the monitoring indices, and issuing a warning when a fault occurs, so that it is able to improve the working efficiency of the administrator and ensure the smooth operation of the project.

  11. Distributed Database Control and Allocation. Volume 3. Distributed Database System Designer’s Handbook.

    Science.gov (United States)

    1983-10-01

Multiversion Data 2-18; 2.7.1 Multiversion Timestamping 2-20; 2.7.2 Multiversion Locking 2-20; 2.8 Combining the Techniques 2-22; 3. Database Recovery Algorithms... See [THOM79, GIFF79] for details. 2.7 Multiversion Data: Let us return to a database system model where each logical data item is stored at one DM... In a multiversion database each write wi[x] produces a new copy (or version) of x, denoted xi. Thus, the value of x is a set of versions. For each
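The multiversion model quoted in this excerpt can be sketched concretely. In this illustrative version (not the handbook's full algorithm), each write of x produces a new version tagged with the writer's timestamp, and a read at timestamp ts sees the latest version written at or before ts:

```python
# Illustrative sketch of multiversion data: each write wi[x] produces a
# new version of x; reads are directed to the version visible at the
# reader's timestamp, so readers never block writers.

class MVItem:
    def __init__(self):
        self.versions = []          # list of (write_ts, value)

    def write(self, ts, value):
        self.versions.append((ts, value))

    def read(self, ts):
        visible = [(wts, v) for (wts, v) in self.versions if wts <= ts]
        if not visible:
            raise KeyError("no version visible at this timestamp")
        return max(visible)[1]      # latest visible version

x = MVItem()
x.write(10, "a")
x.write(20, "b")
value_at_15 = x.read(15)   # sees the version written at timestamp 10
```

Multiversion timestamping orders all operations by these timestamps; multiversion locking achieves the same effect with locks instead of timestamp checks.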

  12. A database for on-line event analysis on a distributed memory machine

    CERN Document Server

    Argante, E; Van der Stok, P D V; Willers, Ian Malcolm

    1995-01-01

Parallel in-memory databases can enhance the structuring and parallelization of programs used in High Energy Physics (HEP). Efficient database access routines are used as communication primitives which hide the communication topology, in contrast to more explicit communication libraries like PVM or MPI. A parallel in-memory database, called SPIDER, has been implemented on a 32-node Meiko CS-2 distributed memory machine. The SPIDER primitives generate a lower overhead than that generated by PVM or MPI. The event reconstruction program CPREAD of the CPLEAR experiment has been used as a test case, with performance measured at the event rate generated by CPLEAR.

  13. Practical private database queries based on a quantum-key-distribution protocol

    International Nuclear Information System (INIS)

    Jakobi, Markus; Simon, Christoph; Gisin, Nicolas; Bancal, Jean-Daniel; Branciard, Cyril; Walenta, Nino; Zbinden, Hugo

    2011-01-01

    Private queries allow a user, Alice, to learn an element of a database held by a provider, Bob, without revealing which element she is interested in, while limiting her information about the other elements. We propose to implement private queries based on a quantum-key-distribution protocol, with changes only in the classical postprocessing of the key. This approach makes our scheme both easy to implement and loss tolerant. While unconditionally secure private queries are known to be impossible, we argue that an interesting degree of security can be achieved by relying on fundamental physical principles instead of unverifiable security assumptions in order to protect both the user and the database. We think that the scope exists for such practical private queries to become another remarkable application of quantum information in the footsteps of quantum key distribution.

  14. Associations with HIV testing in Uganda: an analysis of the Lot Quality Assurance Sampling database 2003-2012.

    Science.gov (United States)

    Jeffery, Caroline; Beckworth, Colin; Hadden, Wilbur C; Ouma, Joseph; Lwanga, Stephen K; Valadez, Joseph J

    2016-01-01

    Beginning in 2003, Uganda used Lot Quality Assurance Sampling (LQAS) to assist district managers collect and use data to improve their human immunodeficiency virus (HIV)/AIDS program. Uganda's LQAS-database (2003-2012) covers up to 73 of 112 districts. Our multidistrict analysis of the LQAS data-set at 2003-2004 and 2012 examined gender variation among adults who ever tested for HIV over time, and attributes associated with testing. Conditional logistic regression matched men and women by community with seven model effect variables. HIV testing prevalence rose from 14% (men) and 12% (women) in 2003-2004 to 62% (men) and 80% (women) in 2012. In 2003-2004, knowing the benefits of testing (Odds Ratio [OR] = 6.09, 95% CI = 3.01-12.35), knowing where to get tested (OR = 2.83, 95% CI = 1.44-5.56), and secondary education (OR = 3.04, 95% CI = 1.19-7.77) were significantly associated with HIV testing. By 2012, knowing the benefits of testing (OR = 3.63, 95% CI = 2.25-5.83), where to get tested (OR = 5.15, 95% CI = 3.26-8.14), primary education (OR = 2.01, 95% CI = 1.39-2.91), being female (OR = 3.03, 95% CI = 2.53-3.62), and being married (OR = 1.81, 95% CI = 1.17-2.8) were significantly associated with HIV testing. HIV testing prevalence in Uganda has increased dramatically, more for women than men. Our results concurred with other authors that education, knowledge of HIV, and marriage (women only) are associated with testing for HIV and suggest that couples testing is more prevalent than other authors.

  15. 7 CFR 983.52 - Failed lots/rework procedure.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Failed lots/rework procedure. 983.52 Section 983.52..., ARIZONA, AND NEW MEXICO Regulations § 983.52 Failed lots/rework procedure. (a) Substandard pistachios... committee may establish, with the Secretary's approval, appropriate rework procedures. (b) Failed lot...

  16. 7 CFR 46.20 - Lot numbers.

    Science.gov (United States)

    2010-01-01

    ... and entered on all sales tickets identifying and segregating the sales from the various shipments on hand. The lot number shall be entered on the sales tickets by the salesmen at the time of sale or by the produce dispatcher, and not by bookkeepers or others after the sales have been made. No lot number...

  17. An XML-Based Networking Method for Connecting Distributed Anthropometric Databases

    Directory of Open Access Journals (Sweden)

    H Cheng

    2007-03-01

Full Text Available Anthropometric data are used by numerous types of organizations for health evaluation, ergonomics, apparel sizing, fitness training, and many other applications. Data have been collected and stored in electronic databases since at least the 1940s. These databases are owned by many organizations around the world. In addition, the anthropometric studies stored in these databases often employ different standards, terminology, procedures, or measurement sets. To promote the use and sharing of these databases, the World Engineering Anthropometry Resources (WEAR) group was formed and tasked with the integration and publishing of member resources. It is easy to see that organizing worldwide anthropometric data into a single database architecture could be a daunting and expensive undertaking. The challenges of WEAR integration lie mainly in the areas of distributed and disparate data, different standards and formats, independent memberships, and limited development resources. Fortunately, XML schema and web services provide an alternative method for networking databases, referred to as the Loosely Coupled WEAR Integration. A standard XML schema can be defined and used as a type of Rosetta stone to translate the anthropometric data into a universal format, and a web services system can be set up to link the databases to one another. In this way, the originators of the data can keep their data locally along with their own data management system and user interface, but their data can be searched and accessed as part of the larger data network, and even combined with the data of others. This paper will identify requirements for WEAR integration, review XML as the universal format, review different integration approaches, and propose a hybrid web services/data mart solution.
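The "Rosetta stone" idea can be sketched in a few lines: each member database keeps its local field names, and a per-source mapping translates records into one shared XML vocabulary. All element and field names below are invented for illustration, not WEAR's actual schema:

```python
# Hedged sketch of schema translation: two member databases use
# different field names; a mapping turns each record into one common
# XML format that can be queried uniformly.
import xml.etree.ElementTree as ET

# Per-source mappings from local field names to the shared vocabulary.
MAPPINGS = {
    "lab_a": {"stature_mm": "stature", "mass_kg": "body_mass"},
    "lab_b": {"height": "stature", "weight": "body_mass"},
}

def to_common_xml(source, record):
    subject = ET.Element("subject", source=source)
    for local_name, value in record.items():
        common_name = MAPPINGS[source][local_name]
        ET.SubElement(subject, common_name).text = str(value)
    return subject

a = to_common_xml("lab_a", {"stature_mm": 1760, "mass_kg": 70})
b = to_common_xml("lab_b", {"height": 1650, "weight": 58})

# Records from both sources can now be searched with one query:
statures = [int(s.findtext("stature")) for s in (a, b)]
```

The data never has to leave its home system: a web service at each site would apply its own mapping on the way out, which is the loose coupling the paper proposes.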

  18. Software for Distributed Computation on Medical Databases: A Demonstration Project

    Directory of Open Access Journals (Sweden)

    Balasubramanian Narasimhan

    2017-05-01

    Full Text Available Bringing together the information latent in distributed medical databases promises to personalize medical care by enabling reliable, stable modeling of outcomes with rich feature sets (including patient characteristics and treatments received. However, there are barriers to aggregation of medical data, due to lack of standardization of ontologies, privacy concerns, proprietary attitudes toward data, and a reluctance to give up control over end use. Aggregation of data is not always necessary for model fitting. In models based on maximizing a likelihood, the computations can be distributed, with aggregation limited to the intermediate results of calculations on local data, rather than raw data. Distributed fitting is also possible for singular value decomposition. There has been work on the technical aspects of shared computation for particular applications, but little has been published on the software needed to support the "social networking" aspect of shared computing, to reduce the barriers to collaboration. We describe a set of software tools that allow the rapid assembly of a collaborative computational project, based on the flexible and extensible R statistical software and other open source packages, that can work across a heterogeneous collection of database environments, with full transparency to allow local officials concerned with privacy protections to validate the safety of the method. We describe the principles, architecture, and successful test results for the site-stratified Cox model and rank-k singular value decomposition.
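The key observation above — that likelihood-based fitting only needs aggregation of intermediate results, not raw data — can be sketched with a simple model. Here a logistic regression stands in for the paper's site-stratified Cox model; each "site" computes a local gradient on its own data and shares only that:

```python
# Sketch of distributed likelihood fitting: each site computes the local
# gradient of the log-likelihood on its own records and shares only the
# aggregate, never the raw data. The data and model are synthetic.
import numpy as np

def local_gradient(beta, X, y):
    """Gradient of the logistic log-likelihood on one site's data."""
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    return X.T @ (y - p)

def distributed_fit(sites, dim, lr=1.0, steps=300):
    beta = np.zeros(dim)
    n = sum(len(y) for _, y in sites)
    for _ in range(steps):
        # Aggregation is limited to intermediate results (gradients).
        grad = sum(local_gradient(beta, X, y) for X, y in sites)
        beta += lr * grad / n
    return beta

rng = np.random.default_rng(0)
true_beta = np.array([1.5, -2.0])
sites = []
for _ in range(3):  # three hospitals; records never leave the site
    X = rng.normal(size=(200, 2))
    y = (rng.random(200) < 1 / (1 + np.exp(-X @ true_beta))).astype(float)
    sites.append((X, y))

beta_hat = distributed_fit(sites, dim=2)
```

The coordinator recovers essentially the same estimate as pooling all records, because the pooled log-likelihood is a sum of per-site terms and so is its gradient.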

  19. Relational databases

    CERN Document Server

    Bell, D A

    1986-01-01

    Relational Databases explores the major advances in relational databases and provides a balanced analysis of the state of the art in relational databases. Topics covered include capture and analysis of data placement requirements; distributed relational database systems; data dependency manipulation in database schemata; and relational database support for computer graphics and computer aided design. This book is divided into three sections and begins with an overview of the theory and practice of distributed systems, using the example of INGRES from Relational Technology as illustration. The

  20. SPANG: a SPARQL client supporting generation and reuse of queries for distributed RDF databases.

    Science.gov (United States)

    Chiba, Hirokazu; Uchiyama, Ikuo

    2017-02-08

Toward improved interoperability of distributed biological databases, an increasing number of datasets have been published in the standardized Resource Description Framework (RDF). Although the powerful SPARQL Protocol and RDF Query Language (SPARQL) provides a basis for exploiting RDF databases, writing SPARQL code is burdensome for users, including bioinformaticians; thus, an easy-to-use interface is necessary. We developed SPANG, a SPARQL client that has unique features for querying RDF datasets. SPANG dynamically generates typical SPARQL queries according to specified arguments. It can also call SPARQL template libraries constructed in a local system or published on the Web. Further, it enables combinatorial execution of multiple queries, each with a distinct target database. These features facilitate easy and effective access to RDF datasets and integrative analysis of distributed data. SPANG helps users to exploit RDF datasets by generation and reuse of SPARQL queries through a simple interface. This client will enhance integrative exploitation of biological RDF datasets distributed across the Web. This software package is freely available at http://purl.org/net/spang.
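The idea of generating a typical SPARQL query from a few arguments can be sketched as follows. This is not SPANG's actual interface, just an illustration of the pattern:

```python
# Hedged sketch in the spirit of SPANG: assemble a basic triple-pattern
# SPARQL query from optional arguments instead of writing it by hand.

def make_query(subject=None, predicate=None, obj=None, limit=10):
    s = subject or "?s"
    p = predicate or "?p"
    o = obj or "?o"
    # Project only the positions left unspecified (the variables).
    variables = " ".join(v for v in (s, p, o) if v.startswith("?")) or "*"
    return (f"SELECT {variables}\n"
            f"WHERE {{ {s} {p} {o} . }}\n"
            f"LIMIT {limit}")

# "Which subjects have the label p53?"
q = make_query(predicate="rdfs:label", obj='"p53"', limit=5)
```

Sending the same generated query to several SPARQL endpoints, and combining the results, gives the combinatorial multi-database execution the abstract describes.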

  1. The phytophthora genome initiative database: informatics and analysis for distributed pathogenomic research.

    Science.gov (United States)

    Waugh, M; Hraber, P; Weller, J; Wu, Y; Chen, G; Inman, J; Kiphart, D; Sobral, B

    2000-01-01

The Phytophthora Genome Initiative (PGI) is a distributed collaboration to study the genome and evolution of a particularly destructive group of plant-pathogenic oomycetes, with the goal of understanding the mechanisms of infection and resistance. NCGR provides informatics support for the collaboration as well as a centralized data repository. In the pilot phase of the project, several investigators prepared Phytophthora infestans and Phytophthora sojae EST and Phytophthora sojae BAC libraries and sent them to another laboratory for sequencing. Data from sequencing reactions were transferred to NCGR for analysis and curation. An analysis pipeline transforms raw data by performing simple analyses (i.e., vector removal and similarity searching) that are stored and can be retrieved by investigators using a web browser. Here we describe the database and access tools, provide an overview of the data therein and outline future plans. This resource has provided a unique opportunity for the distributed, collaborative study of a genus for which relatively little sequence data are available. Results may lead to insight into how better to control these pathogens. The homepage of PGI can be accessed at http://www.ncgr.org/pgi, with database access through the database access hyperlink.

  2. RAINBIO: a mega-database of tropical African vascular plants distributions

    Directory of Open Access Journals (Sweden)

    Dauby Gilles

    2016-11-01

Full Text Available The tropical vegetation of Africa is characterized by high levels of species diversity but is undergoing important shifts in response to ongoing climate change and increasing anthropogenic pressures. Although our knowledge of plant species distribution patterns in the African tropics has been improving over the years, it remains limited. Here we present RAINBIO, a unique comprehensive mega-database of georeferenced records for vascular plants in continental tropical Africa. The geographic focus of the database is the region south of the Sahel and north of Southern Africa, and the majority of data originate from tropical forest regions. RAINBIO is a compilation of 13 datasets, either publicly available or personal. Numerous in-depth data quality checks, both automatic and manual (the latter via several African flora experts), were undertaken for georeferencing, standardization of taxonomic names, and identification and merging of duplicated records. The resulting RAINBIO data allow exploration and extraction of distribution data for 25,356 native tropical African vascular plant species, which represents ca. 89% of all known plant species in the area of interest. Habit information is also provided for 91% of these species.

  3. Green Lot-Sizing

    NARCIS (Netherlands)

    M. Retel Helmrich (Mathijn Jan)

    2013-01-01

The lot-sizing problem concerns a manufacturer that needs to solve a production planning problem. The producer must decide at which points in time to set up a production process, and when he/she does, how much to produce. There is a trade-off between inventory costs and costs associated with setting up production.

  4. Site initialization, recovery, and back-up in a distributed database system

    International Nuclear Information System (INIS)

    Attar, R.; Bernstein, P.A.; Goodman, N.

    1982-01-01

    Site initialization is the problem of integrating a new site into a running distributed database system (DDBS). Site recovery is the problem of integrating an old site into a DDBS when the site recovers from failure. Site backup is the problem of creating a static backup copy of a database for archival or query purposes. We present an algorithm that solves the site initialization problem. By modifying the algorithm slightly, we get solutions to the other two problems as well. Our algorithm exploits the fact that a correct DDBS must run a serializable concurrency control algorithm. Our algorithm relies on the concurrency control algorithm to handle all inter-site synchronization

  5. Reactionary responses to the Bad Lot Objection.

    Science.gov (United States)

    Dellsén, Finnur

    2017-02-01

As it is standardly conceived, Inference to the Best Explanation (IBE) is a form of ampliative inference in which one infers a hypothesis because it provides a better potential explanation of one's evidence than any other available, competing explanatory hypothesis. Bas van Fraassen famously objected to IBE thus formulated that we may have no reason to think that any of the available, competing explanatory hypotheses are true. While revisionary responses to the Bad Lot Objection concede that IBE needs to be reformulated in light of this problem, reactionary responses argue that the Bad Lot Objection is fallacious, incoherent, or misguided. This paper shows that the most influential reactionary responses to the Bad Lot Objection do nothing to undermine the original objection. This strongly suggests that proponents of IBE should focus their efforts on revisionary responses, i.e. on finding a more sophisticated characterization of IBE for which the Bad Lot Objection loses its bite.

  6. Distributed database kriging for adaptive sampling (D2KAS)

    International Nuclear Information System (INIS)

    Roehm, Dominic; Pavel, Robert S.; Barros, Kipton; Rouet-Leduc, Bertrand; McPherson, Allen L.; Germann, Timothy C.; Junghans, Christoph

    2015-01-01

    We present an adaptive sampling method supplemented by a distributed database and a prediction method for multiscale simulations using the Heterogeneous Multiscale Method. A finite-volume scheme integrates the macro-scale conservation laws for elastodynamics, which are closed by momentum and energy fluxes evaluated at the micro-scale. In the original approach, molecular dynamics (MD) simulations are launched for every macro-scale volume element. Our adaptive sampling scheme replaces a large fraction of costly micro-scale MD simulations with fast table lookup and prediction. The cloud database Redis provides the plain table lookup, and with locality aware hashing we gather input data for our prediction scheme. For the latter we use kriging, which estimates an unknown value and its uncertainty (error) at a specific location in parameter space by using weighted averages of the neighboring points. We find that our adaptive scheme significantly improves simulation performance by a factor of 2.5 to 25, while retaining high accuracy for various choices of the algorithm parameters
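The kriging step described above — estimating an unknown value and its uncertainty from weighted averages of neighboring database points — can be sketched in a toy form. The Gaussian covariance model and its parameters here are illustrative, not the paper's calibrated choices:

```python
# Toy sketch of simple kriging: predict a value at x_new from stored
# (X, y) points, with weights obtained from a covariance model, and
# report the estimation variance alongside the prediction.
import numpy as np

def simple_kriging(X, y, x_new, length=1.0, mean=0.0):
    """Simple kriging predictor and variance at x_new."""
    def cov(a, b):
        d = np.linalg.norm(np.asarray(a) - np.asarray(b))
        return np.exp(-(d / length) ** 2)

    K = np.array([[cov(xi, xj) for xj in X] for xi in X])
    k = np.array([cov(xi, x_new) for xi in X])
    w = np.linalg.solve(K + 1e-9 * np.eye(len(X)), k)  # kriging weights
    pred = mean + w @ (np.asarray(y) - mean)
    var = cov(x_new, x_new) - w @ k  # uncertainty of the estimate
    return pred, var

X = [[0.0], [1.0], [2.0]]          # stored micro-scale results
y = [0.0, 1.0, 4.0]
pred, var = simple_kriging(X, y, [1.0])   # query at a stored point
```

In an adaptive-sampling scheme like D2KAS, the reported variance is what decides whether the lookup is trusted or a fresh micro-scale simulation must be launched.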

  7. Balancing Urban Biodiversity Needs and Resident Preferences for Vacant Lot Management

    Directory of Open Access Journals (Sweden)

    Christine C. Rega-Brodsky

    2018-05-01

    Full Text Available Urban vacant lots are often a contentious feature in cities, seen as overgrown, messy eyesores that plague neighborhoods. We propose a shift in this perception to locations of urban potential, because vacant lots may serve as informal greenspaces that maximize urban biodiversity while satisfying residents’ preferences for their design and use. Our goal was to assess what kind of vacant lots are ecologically valuable by assessing their biotic contents and residents’ preferences within a variety of settings. We surveyed 150 vacant lots throughout Baltimore, Maryland for their plant and bird communities, classified the lot’s setting within the urban matrix, and surveyed residents. Remnant vacant lots had greater vegetative structure and bird species richness as compared to other lot origins, while vacant lot settings had limited effects on their contents. Residents preferred well-maintained lots with more trees and less artificial cover, support of which may increase local biodiversity in vacant lots. Collectively, we propose that vacant lots with a mixture of remnant and planted vegetation can act as sustainable urban greenspaces with the potential for some locations to enhance urban tree cover and bird habitat, while balancing the needs and preferences of city residents.

  8. An approach for access differentiation design in medical distributed applications built on databases.

    Science.gov (United States)

    Shoukourian, S K; Vasilyan, A M; Avagyan, A A; Shukurian, A K

    1999-01-01

A formalized "top to bottom" design approach was described in [1] for distributed applications built on databases, which were considered as a medium between virtual and real user environments for a specific medical application. Merging different components within a unified distributed application poses new essential problems for software. In particular, protection tools that are sufficient separately become deficient during integration, due to specific additional links and relationships not considered formerly. For example, it is impossible to protect a shared object in the virtual operating room using only DBMS protection tools if the object is stored as a record in database tables; the solution should be found within the more general application framework, and appropriate tools are absent or unavailable. The present paper suggests a detailed outline of a design and testing toolset for access differentiation systems (ADS) in distributed medical applications which use databases. An appropriate formal model, as well as tools for its mapping to a DBMS, are suggested. Remote users connected via global networks are considered too.

  9. Optimistic protocol for partitioned distributed database systems

    International Nuclear Information System (INIS)

    Davidson, S.B.

    1982-01-01

    A protocol for transaction processing during partition failures is presented which guarantees mutual consistency between copies of data-items after repair is completed. The protocol is optimistic in that transactions are processed without restrictions during the failure; conflicts are detected at repair time using a precedence graph and are resolved by backing out transactions according to some backout strategy. The protocol is then evaluated using simulation and probabilistic modeling. In the simulation, several parameters are varied such as the number of transactions processed in a group, the type of transactions processed, the number of data-items present in the database, and the distribution of references to data-items. The simulation also uses different backout strategies. From these results we note conditions under which the protocol performs well, i.e., conditions under which the protocol backs out a small percentage of the transaction run. A probabilistic model is developed to estimate the expected number of transactions backed out using most of the above database and transaction parameters, and is shown to agree with simulation results. Suggestions are then made on how to improve the performance of the protocol. Insights gained from the simulation and probabilistic modeling are used to develop a backout strategy which takes into account individual transaction costs and attempts to minimize total backout cost. Although the problem of choosing transactions to minimize total backout cost is, in general, NP-complete, the backout strategy is efficient and produces very good results
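The repair-time step the abstract describes — detecting conflicts with a precedence graph and resolving them by backing out transactions under some strategy — can be sketched as follows. This is an illustration of the idea, not the thesis's exact algorithm; the cost-based victim selection is one possible backout strategy:

```python
# Illustrative sketch: after partitions reconnect, a precedence graph
# over the transactions is examined; while it contains a cycle, the
# backout strategy removes the cheapest transaction on some cycle.

def find_cycle(nodes, edges):
    """Return the set of nodes on one cycle, or None if acyclic."""
    adj = {n: [v for (u, v) in edges if u == n] for n in nodes}
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {n: WHITE for n in nodes}
    stack = []

    def dfs(n):
        color[n] = GRAY
        stack.append(n)
        for v in adj[n]:
            if color[v] == GRAY:                     # back edge: cycle
                return set(stack[stack.index(v):])
            if color[v] == WHITE:
                cyc = dfs(v)
                if cyc:
                    return cyc
        stack.pop()
        color[n] = BLACK
        return None

    for n in nodes:
        if color[n] == WHITE:
            cyc = dfs(n)
            if cyc:
                return cyc
    return None

def backout(nodes, edges, cost):
    """Back out cheapest transactions until the graph is acyclic."""
    backed_out = []
    while True:
        cyc = find_cycle(nodes, edges)
        if cyc is None:
            return backed_out
        victim = min(cyc, key=cost.get)
        backed_out.append(victim)
        nodes = [n for n in nodes if n != victim]
        edges = {(u, v) for (u, v) in edges if victim not in (u, v)}

# T1 -> T2 -> T3 -> T1 forms a cycle; T2 is the cheapest to back out.
nodes = ["T1", "T2", "T3"]
edges = {("T1", "T2"), ("T2", "T3"), ("T3", "T1")}
out = backout(nodes, edges, cost={"T1": 3, "T2": 1, "T3": 2})
```

Choosing victims by cost is the kind of strategy the abstract's final sentences evaluate; minimizing total backout cost exactly is NP-complete, so a greedy heuristic like this one is used instead.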

  10. Data Mining in Distributed Database of the First Egyptian Thermal Research Reactor (ETRR-1)

    International Nuclear Information System (INIS)

    Abo Elez, R.H.; Ayad, N.M.A.; Ghuname, A.A.A.

    2006-01-01

Distributed database (DDB) technology application systems are growing to cover many fields and domains, and at different levels. The aim of this paper is to shed some light on applying the new technology of distributed databases to the ETRR-1 operation data logged by the data acquisition system (DACQUS), from which useful knowledge can be extracted. Data mining with scientific methods and specialized tools is used to support the extraction of useful knowledge from the rapidly growing volumes of data. Data mining methods come in many shapes and forms: predictive methods furnish models capable of anticipating the future behavior of quantitative or qualitative database variables. When the relationship between the dependent and independent variables is nearly linear, linear regression is the appropriate data mining strategy. Therefore, multiple linear regression models have been applied to a set of data samples of the ETRR-1 operation data, using the least squares method. The results show an accurate analysis of the multiple linear regression models as applied to the ETRR-1 operation data.
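The regression step described above can be sketched with NumPy. The reactor data itself is not available here, so synthetic samples with invented variable names stand in:

```python
# Sketch of multiple linear regression by least squares, as applied in
# the paper. The predictors (e.g. coolant flow, inlet temperature) and
# the true coefficients below are made up for illustration.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))              # two logged predictors
# Hypothetical target: y = 2*x1 + 0.5*x2 + 3 + small noise
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + 3.0 + 0.01 * rng.normal(size=100)

A = np.column_stack([X, np.ones(len(X))])      # add intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)   # least-squares fit
```

The fitted coefficients recover the generating model closely when the relationship is nearly linear, which is exactly the condition the paper states for choosing this method.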

  11. 7 CFR 989.104 - Lot.

    Science.gov (United States)

    2010-01-01

    ... Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE RAISINS PRODUCED FROM GRAPES GROWN IN... inspection after reconditioning (such as sorting or drying) and whose original lot identity is no longer...

  12. Extended functions of the database machine FREND for interactive systems

    International Nuclear Information System (INIS)

    Hikita, S.; Kawakami, S.; Sano, K.

    1984-01-01

Well-designed visual interfaces encourage non-expert users to use relational database systems. In systems such as office automation systems or engineering database systems, non-expert users interactively access the database from visual terminals. Depending on the situation, some users may want exclusive use of the database while others may want to share it. Because those jobs need a lot of time to complete, concurrency control must be well designed to enhance concurrency. The extended method of concurrency control of FREND is presented in this paper. The authors assume that systems are composed of workstations, a local area network and the database machine FREND. This paper also stresses that those workstations and FREND must cooperate to complete concurrency control for interactive applications.

  13. Monitoring of services with non-relational databases and map-reduce framework

    International Nuclear Information System (INIS)

    Babik, M; Souto, F

    2012-01-01

    Service Availability Monitoring (SAM) is a well-established monitoring framework that performs regular measurements of the core site services and reports the corresponding availability and reliability of the Worldwide LHC Computing Grid (WLCG) infrastructure. One of the existing extensions of SAM is Site Wide Area Testing (SWAT), which gathers monitoring information from the worker nodes via instrumented jobs. This generates quite a lot of monitoring data to process, as there are several data points for every job and several million jobs are executed every day. The recent uptake of non-relational databases opens a new paradigm in the large-scale storage and distributed processing of systems with heavy read-write workloads. For SAM this brings new possibilities to improve its model, from performing aggregation of measurements to storing raw data and subsequent re-processing. Both SAM and SWAT are currently tuned to run at top performance, reaching some of the limits in storage and processing power of their existing Oracle relational database. We investigated the usability and performance of non-relational storage together with its distributed data processing capabilities. For this, several popular systems have been compared. In this contribution we describe our investigation of the existing non-relational databases suited for monitoring systems covering Cassandra, HBase and MongoDB. Further, we present our experiences in data modeling and prototyping map-reduce algorithms focusing on the extension of the already existing availability and reliability computations. Finally, possible future directions in this area are discussed, analyzing the current deficiencies of the existing Grid monitoring systems and proposing solutions to leverage the benefits of the non-relational databases to get more scalable and flexible frameworks.
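The map-reduce style aggregation discussed above can be shown in miniature, independently of Cassandra, HBase or MongoDB. The service names and statuses below are invented; real SAM records carry much more detail:

```python
# Minimal map-reduce sketch: per-test monitoring records are mapped to
# (service, ok) pairs, grouped by key, and reduced to an availability
# fraction per service.
from collections import defaultdict

tests = [  # raw monitoring records: (service, status)
    ("CE", "ok"), ("CE", "ok"), ("CE", "fail"),
    ("SRM", "ok"), ("SRM", "fail"), ("SRM", "fail"), ("SRM", "ok"),
]

def mapper(record):
    service, status = record
    yield service, 1 if status == "ok" else 0

def reducer(service, values):
    return service, sum(values) / len(values)  # availability fraction

# Shuffle phase: group the mapped pairs by key.
groups = defaultdict(list)
for record in tests:
    for key, value in mapper(record):
        groups[key].append(value)

availability = dict(reducer(k, v) for k, v in groups.items())
```

Because the mapper works record-by-record and the reducer only sees one key's values, both phases parallelize over the nodes of a distributed store, which is what makes re-processing raw data feasible at this scale.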

  14. The response-time distribution in a real-time database with optimistic concurrency control and constant execution times

    NARCIS (Netherlands)

    Sassen, S.A.E.; Wal, van der J.

    1997-01-01

    For a real-time shared-memory database with optimistic concurrency control, an approximation for the transaction response-time distribution is obtained. The model assumes that transactions arrive at the database according to a Poisson process, that every transaction uses an equal number of

  15. The response-time distribution in a real-time database with optimistic concurrency control and exponential execution times

    NARCIS (Netherlands)

    Sassen, S.A.E.; Wal, van der J.

    1997-01-01

    For a real-time shared-memory database with optimistic concurrency control, an approximation for the transaction response-time distribution is obtained. The model assumes that transactions arrive at the database according to a Poisson process, that every transaction takes an exponential execution
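The class of system modelled above can be illustrated with a rough Monte Carlo sketch (not the paper's analytical approximation): under optimistic concurrency control, a transaction that fails validation is restarted, so its response time is a geometric sum of exponential execution times. The fixed conflict probability is an assumption made here for simplicity; in the paper it would emerge from the Poisson arrival process and the data-access pattern.

```python
import random

def response_time(mean_exec=1.0, p_conflict=0.2, rng=random):
    """Total time until a transaction finally commits."""
    t = 0.0
    while True:
        t += rng.expovariate(1.0 / mean_exec)  # one execution attempt
        if rng.random() >= p_conflict:         # validation succeeds
            return t

random.seed(42)
samples = [response_time() for _ in range(100_000)]
mean = sum(samples) / len(samples)
print(f"estimated mean response time: {mean:.3f}")
# Theory for this toy model: E[T] = mean_exec / (1 - p_conflict) = 1.25
```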

  16. Wide-area-distributed storage system for a multimedia database

    Science.gov (United States)

    Ueno, Masahiro; Kinoshita, Shigechika; Kuriki, Makato; Murata, Setsuko; Iwatsu, Shigetaro

    1998-12-01

We have developed a wide-area-distributed storage system for multimedia databases, which minimizes the possibility of simultaneous failure of multiple disks in the event of a major disaster. It features a RAID system whose member disks are spatially distributed over a wide area. Each node has a device that includes the controller of the RAID and the controller of the member disks controlled by other nodes. The devices in the nodes are connected to computers using fiber optic cables and communicate using fibre-channel technology. Any computer at a node can use multiple devices connected by optical fibers as a single 'virtual disk.' The advantage of this system structure is that devices and fiber optic cables are shared by the computers. In this report, we first describe the proposed system and the prototype used for testing. We then discuss its performance, i.e., how read and write throughputs are affected by data-access delay, the RAID level, and queuing.
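The core RAID idea behind such a "virtual disk", striping data over remote members with a parity block so that one lost site can be rebuilt from the survivors, can be sketched with XOR parity. The block layout and sizes below are invented:

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte blocks together, byte by byte."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data_blocks = [b"node-A..", b"node-B..", b"node-C.."]  # equal-size stripes
parity = xor_blocks(data_blocks)

# Simulate losing node B and rebuilding its block from the rest + parity.
recovered = xor_blocks([data_blocks[0], data_blocks[2], parity])
assert recovered == data_blocks[1]
print("recovered:", recovered)  # recovered: b'node-B..'
```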

  17. The online database MaarjAM reveals global and ecosystemic distribution patterns in arbuscular mycorrhizal fungi (Glomeromycota).

    Science.gov (United States)

    Opik, M; Vanatoa, A; Vanatoa, E; Moora, M; Davison, J; Kalwij, J M; Reier, U; Zobel, M

    2010-10-01

    • Here, we describe a new database, MaarjAM, that summarizes publicly available Glomeromycota DNA sequence data and associated metadata. The goal of the database is to facilitate the description of distribution and richness patterns in this group of fungi. • Small subunit (SSU) rRNA gene sequences and available metadata were collated from all suitable taxonomic and ecological publications. These data have been made accessible in an open-access database (http://maarjam.botany.ut.ee). • Two hundred and eighty-two SSU rRNA gene virtual taxa (VT) were described based on a comprehensive phylogenetic analysis of all collated Glomeromycota sequences. Two-thirds of VT showed limited distribution ranges, occurring in single current or historic continents or climatic zones. Those VT that associated with a taxonomically wide range of host plants also tended to have a wide geographical distribution, and vice versa. No relationships were detected between VT richness and latitude, elevation or vascular plant richness. • The collated Glomeromycota molecular diversity data suggest limited distribution ranges in most Glomeromycota taxa and a positive relationship between the width of a taxon's geographical range and its host taxonomic range. Inconsistencies between molecular and traditional taxonomy of Glomeromycota, and shortage of data from major continents and ecosystems, are highlighted.

  18. 7 CFR 33.7 - Less than carload lot.

    Science.gov (United States)

    2010-01-01

    ... Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing... ISSUED UNDER AUTHORITY OF THE EXPORT APPLE ACT Definitions § 33.7 Less than carload lot. Less than carload lot means a quantity of apples in packages not exceeding 20,000 pounds gross weight or 400...

  19. Cellular Manufacturing System with Dynamic Lot Size Material Handling

    Science.gov (United States)

    Khannan, M. S. A.; Maruf, A.; Wangsaputra, R.; Sutrisno, S.; Wibawa, T.

    2016-02-01

Material handling plays an important role in Cellular Manufacturing System (CMS) design. In several studies of CMS design, material handling was assumed to be per piece or to use a constant lot size. In real industrial practice, lot sizes may change over rolling periods to cope with demand changes. This study develops a CMS model with dynamic lot size material handling. Integer linear programming is used to solve the problem. The objective function of this model minimizes total expected cost, consisting of machinery depreciation cost, operating costs, inter-cell material handling cost, intra-cell material handling cost, machine relocation costs, setup costs, and production planning cost. The model determines the optimum cell formation and the optimum lot size. Numerical examples are elaborated in the paper to illustrate the characteristics of the model.

  20. Investigation on Oracle GoldenGate Veridata for Data Consistency in WLCG Distributed Database Environment

    OpenAIRE

    Asko, Anti; Lobato Pardavila, Lorena

    2014-01-01

Abstract In the distributed database environment, data divergence can be an important problem: if it is not discovered and correctly identified, incorrect data can lead to poor decision making, service errors and operational errors. Oracle GoldenGate Veridata is a product that compares two sets of data and identifies and reports on data that is out of synchronization. IT DB is providing a replication service between databases at CERN and other computer centers worldwide as a par...

  1. DOT Online Database

    Science.gov (United States)

Full-text searchable databases of Advisory Circulars, including data collection and distribution policies. Document database website provided by MicroSearch.

  2. PENENTUAN PRODUCTION LOT SIZES DAN TRANSFER BATCH SIZES DENGAN PENDEKATAN MULTISTAGE

    Directory of Open Access Journals (Sweden)

    Purnawan Adi W

    2012-02-01

Full Text Available Inventory control and maintenance is a problem frequently faced by organizations in all economic sectors. One of the challenges in inventory control is determining the optimal lot size for production systems of various types. Production-lot analysis using a hybrid analytic-simulation approach is one line of research on optimal lot sizing. That research used a single-stage approach, in which there is no relationship between the processes at each stage; in other words, each process is independent of the others. Using the same research object, this study addresses the determination of production lot sizes with a multistage approach. First, using the same data as the earlier study, optimal production lot sizes are determined by linear programming. The production lot sizes are then used as simulation input to determine transfer batch sizes. Average queue length and waiting time are the performance measures used to select the transfer batch size from several alternatives. In this study, the resulting production lot size equals the demand of each period. The transfer batch sizes determined by simulation are then implemented in the model. The result is an inventory reduction of 76.35% for the connector product and 50.59% for the box connector product compared with the inventory produced by the single-stage approach. Keywords: multistage, production lot, transfer batch

  3. 7 CFR 983.152 - Failed lots/rework procedure.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Failed lots/rework procedure. 983.152 Section 983.152..., ARIZONA, AND NEW MEXICO Rules and Regulations § 983.152 Failed lots/rework procedure. (a) Inshell rework procedure for aflatoxin. If inshell rework is selected as a remedy to meet the aflatoxin regulations of this...

  4. Resenha de: Recueil des travaux historiques de Ferdinand Lot

    Directory of Open Access Journals (Sweden)

    Eurípedes Simões de Paula

    1968-03-01

Full Text Available RECUEIL DES TRAVAUX HISTORIQUES DE FERDINAND LOT. Volume one. "Hautes Études Médiévales et Modernes" collection, Centre de Recherches d'Histoire et de Philologie de la IVe Section de l'École Pratique des Hautes Études. Preface by Ch. Samaran and biography by I. Vildé-Lot and M. Mahn-Lot. Published with the support of the Centre National de la Recherche Scientifique. Geneva, Librairie Droz, and Paris, Librairie Minard. In-8°, XVIII + 780 pp.

  5. Database Replication

    CERN Document Server

    Kemme, Bettina

    2010-01-01

Database replication is widely used for fault tolerance, scalability and performance. The failure of one database replica does not stop the system from working, as the available replicas can take over the tasks of the failed replica. Scalability can be achieved by distributing the load across all replicas, and by adding new replicas should the load increase. Finally, database replication can provide fast local access, even if clients are geographically distributed, provided data copies are located close to them. Despite its advantages, replication is not a straightforward technique to apply, and
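A toy illustration of the ideas in this abstract, assuming a simple eager update-everywhere scheme invented here (not taken from the book): writes propagate to every live copy, reads are spread round-robin across replicas, and a failed replica is simply skipped.

```python
import itertools

class ReplicatedKV:
    """Tiny replicated key-value store: eager writes, round-robin reads."""

    def __init__(self, n_replicas=3):
        self.replicas = [dict() for _ in range(n_replicas)]
        self.alive = [True] * n_replicas
        self._rr = itertools.cycle(range(n_replicas))

    def write(self, key, value):
        for replica, ok in zip(self.replicas, self.alive):
            if ok:
                replica[key] = value  # eager propagation to every live copy

    def read(self, key):
        for _ in range(len(self.replicas)):
            i = next(self._rr)
            if self.alive[i]:
                return self.replicas[i][key]
        raise RuntimeError("no live replica")

db = ReplicatedKV()
db.write("x", 1)
db.alive[0] = False   # one replica fails...
print(db.read("x"))   # ...reads still succeed: 1
```

Real replication protocols must also handle writes that race with failures and recovering replicas, which is exactly where the "not straightforward" part begins.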

  6. Study on distributed generation algorithm of variable precision concept lattice based on ontology heterogeneous database

    Science.gov (United States)

    WANG, Qingrong; ZHU, Changfeng

    2017-06-01

Integration of distributed heterogeneous data sources is a key issue in big data applications. In this paper, the strategy of variable precision is introduced into the concept lattice, and a one-to-one mapping between the variable precision concept lattice and the ontology concept lattice is constructed: a local ontology is produced by building the variable precision concept lattice for each subsystem, and a distributed generation algorithm for variable precision concept lattices based on ontology heterogeneous databases is proposed, drawing on the close relationship between concept lattices and ontology construction. Finally, based on the standard main concept lattice generated from the existing heterogeneous databases, a case study is carried out to test the feasibility and validity of the algorithm, and the differences between the main concept lattice and the standard concept lattice are compared. The analysis results show that the algorithm can automatically carry out the construction of a distributed concept lattice over heterogeneous data sources.
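The concept-lattice machinery underlying the paper can be illustrated on a tiny invented binary context (objects x attributes); the variable-precision and ontology layers are omitted, leaving only brute-force formal concept computation via the extent/intent Galois connection:

```python
from itertools import combinations

context = {
    "o1": {"a", "b"},
    "o2": {"a", "c"},
    "o3": {"a", "b", "c"},
}
attributes = set().union(*context.values())

def extent(attrs):
    """Objects possessing every attribute in attrs."""
    return frozenset(o for o, a in context.items() if attrs <= a)

def intent(objs):
    """Attributes shared by every object in objs."""
    return frozenset(attributes if not objs
                     else set.intersection(*(context[o] for o in objs)))

# (extent(A), intent(extent(A))) is always a formal concept, so
# enumerating attribute subsets finds the whole lattice.
concepts = set()
for r in range(len(attributes) + 1):
    for attrs in combinations(sorted(attributes), r):
        e = extent(set(attrs))
        concepts.add((e, intent(e)))

for e, i in sorted(concepts, key=lambda c: (-len(c[0]), sorted(c[1]))):
    print(sorted(e), "<->", sorted(i))
```

For this context the lattice has four concepts, from the top ({o1,o2,o3}, {a}) down to ({o3}, {a,b,c}).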

  7. Competition under capacitated dynamic lot-sizing with capacity acquisition

    DEFF Research Database (Denmark)

    Li, Hongyan; Meissner, Joern

    2011-01-01

    Lot-sizing and capacity planning are important supply chain decisions, and competition and cooperation affect the performance of these decisions. In this paper, we look into the dynamic lot-sizing and resource competition problem of an industry consisting of multiple firms. A capacity competition...... production setup, along with inventory carrying costs. The individual production lots of each firm are limited by a constant capacity restriction, which is purchased up front for the planning horizon. The capacity can be purchased from a spot market, and the capacity acquisition cost fluctuates...
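The single-firm core of this problem can be sketched as a small dynamic program; the game-theoretic competition layer and the capacity-acquisition decision of the paper are omitted, and all cost numbers are invented:

```python
from functools import lru_cache

demand = [3, 2, 4, 1]
C      = 5    # per-period production capacity, purchased up front
setup  = 10   # fixed cost in any period with production
hold   = 1    # cost per unit carried into the next period

@lru_cache(maxsize=None)
def best(t, inv):
    """Minimum cost from period t onward, entering with inventory inv."""
    if t == len(demand):
        return 0
    need = demand[t]
    cost = float("inf")
    for q in range(C + 1):          # production quantity this period
        if inv + q < need:
            continue                # demand must be met
        c = (setup if q > 0 else 0) + hold * (inv + q - need)
        cost = min(cost, c + best(t + 1, inv + q - need))
    return cost

print("optimal cost:", best(0, 0))  # optimal cost: 23
```

Here the capacity C = 5 forces at least two setups (total demand is 10), and the optimum trades a saved setup against holding cost.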

  8. 40 CFR 52.128 - Rule for unpaved parking lots, unpaved roads and vacant lots.

    Science.gov (United States)

    2010-07-01

    ... six (6) percent for unpaved road surfaces or eight (8) percent for unpaved parking lot surfaces as... calculating percent cover.) (iii) Vegetative Density Factor. Cut a single, representative piece of vegetation... that are not covered by any piece of the vegetation. To calculate percent vegetative density, use...

  9. A Survey on Distributed Mobile Database and Data Mining

    Science.gov (United States)

    Goel, Ajay Mohan; Mangla, Neeraj; Patel, R. B.

    2010-11-01

The anticipated increase in popular use of the Internet has created more opportunity in information dissemination, e-commerce, and multimedia communication. It has also created more challenges in organizing information and facilitating its efficient retrieval. In response to this, new techniques have evolved which facilitate the creation of such applications. Certainly the most promising among the new paradigms is the use of mobile agents. In this paper, mobile agent and distributed database technologies are applied to the banking system. Many approaches have been proposed to schedule data items for broadcasting in a mobile environment. This paper proposes an efficient strategy for accessing multiple data items in mobile environments and discusses the bottleneck of current banking systems.

  10. Securing SQL Server Protecting Your Database from Attackers

    CERN Document Server

    Cherry, Denny

    2011-01-01

    There is a lot at stake for administrators taking care of servers, since they house sensitive data like credit cards, social security numbers, medical records, and much more. In Securing SQL Server you will learn about the potential attack vectors that can be used to break into your SQL Server database, and how to protect yourself from these attacks. Written by a Microsoft SQL Server MVP, you will learn how to properly secure your database, from both internal and external threats. Best practices and specific tricks employed by the author will also be revealed. Learn expert techniques to protec

  11. Tactical Production and Lot Size Planning with Lifetime Constraints

    DEFF Research Database (Denmark)

    Raiconi, Andrea; Pahl, Julia; Gentili, Monica

    2017-01-01

    In this work, we face a variant of the capacitated lot sizing problem. This is a classical problem addressing the issue of aggregating lot sizes for a finite number of discrete periodic demands that need to be satisfied, thus setting up production resources and eventually creating inventories...

  12. Open TG-GATEs: a large-scale toxicogenomics database

    Science.gov (United States)

    Igarashi, Yoshinobu; Nakatsu, Noriyuki; Yamashita, Tomoya; Ono, Atsushi; Ohno, Yasuo; Urushidani, Tetsuro; Yamada, Hiroshi

    2015-01-01

    Toxicogenomics focuses on assessing the safety of compounds using gene expression profiles. Gene expression signatures from large toxicogenomics databases are expected to perform better than small databases in identifying biomarkers for the prediction and evaluation of drug safety based on a compound's toxicological mechanisms in animal target organs. Over the past 10 years, the Japanese Toxicogenomics Project consortium (TGP) has been developing a large-scale toxicogenomics database consisting of data from 170 compounds (mostly drugs) with the aim of improving and enhancing drug safety assessment. Most of the data generated by the project (e.g. gene expression, pathology, lot number) are freely available to the public via Open TG-GATEs (Toxicogenomics Project-Genomics Assisted Toxicity Evaluation System). Here, we provide a comprehensive overview of the database, including both gene expression data and metadata, with a description of experimental conditions and procedures used to generate the database. Open TG-GATEs is available from http://toxico.nibio.go.jp/english/index.html. PMID:25313160

  13. An XML Approach of Coding a Morphological Database for Arabic Language

    Directory of Open Access Journals (Sweden)

    Mourad Gridach

    2011-01-01

Full Text Available We present an XML approach to the production of a morphological database for the Arabic language that will be used in morphological analysis for Modern Standard Arabic (MSA). Optimizing the production, maintenance, and extension of a morphological database is one of the crucial aspects impacting natural language processing (NLP). For Arabic, producing a morphological database is not an easy task, because the language has particularities such as agglutination and a great deal of morphological ambiguity. The method presented can be exploited by NLP applications such as syntactic analysis, semantic analysis, information retrieval, and orthographical correction.
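A hypothetical sketch of what one XML-coded entry in such a database might look like, built and queried with the standard library; the tag names, attributes and the chosen lemma are invented for illustration and are not the paper's actual schema:

```python
import xml.etree.ElementTree as ET

# Build one entry for the root k-t-b ("to write"); schema is invented.
entry = ET.Element("entry", lemma="كتب", pos="verb")
ET.SubElement(entry, "root").text = "ك ت ب"
forms = ET.SubElement(entry, "forms")
ET.SubElement(forms, "form", tense="perfect", person="3ms").text = "كتب"
ET.SubElement(forms, "form", tense="imperfect", person="3ms").text = "يكتب"

xml_text = ET.tostring(entry, encoding="unicode")
print(xml_text)

# Reading it back, e.g. to list all imperfect forms:
parsed = ET.fromstring(xml_text)
imperfects = [f.text for f in parsed.iter("form")
              if f.get("tense") == "imperfect"]
print(imperfects)
```

The point of the XML encoding is exactly this kind of structured retrieval: an analyzer can navigate from surface form to lemma, root and features without parsing free text.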

  14. Neuro-ophthalmology of late-onset Tay-Sachs disease (LOTS).

    Science.gov (United States)

    Rucker, J C; Shapiro, B E; Han, Y H; Kumar, A N; Garbutt, S; Keller, E L; Leigh, R J

    2004-11-23

    Late-onset Tay-Sachs disease (LOTS) is an adult-onset, autosomal recessive, progressive variant of GM2 gangliosidosis, characterized by involvement of the cerebellum and anterior horn cells. To determine the range of visual and ocular motor abnormalities in LOTS, as a prelude to evaluating the effectiveness of novel therapies. Fourteen patients with biochemically confirmed LOTS (8 men; age range 24 to 53 years; disease duration 5 to 30 years) and 10 age-matched control subjects were studied. Snellen visual acuity, contrast sensitivity, color vision, stereopsis, and visual fields were measured, and optic fundi were photographed. Horizontal and vertical eye movements (search coil) were recorded, and saccades, pursuit, vestibulo-ocular reflex (VOR), vergence, and optokinetic (OK) responses were measured. All patients showed normal visual functions and optic fundi. The main eye movement abnormality concerned saccades, which were "multistep," consisting of a series of small saccades and larger movements that showed transient decelerations. Larger saccades ended earlier and more abruptly (greater peak deceleration) in LOTS patients than in control subjects; these changes can be attributed to premature termination of the saccadic pulse. Smooth-pursuit and slow-phase OK gains were reduced, but VOR, vergence, and gaze holding were normal. Patients with late-onset Tay-Sachs disease (LOTS) show characteristic abnormalities of saccades but normal afferent visual systems. Hypometria, transient decelerations, and premature termination of saccades suggest disruption of a "latch circuit" that normally inhibits pontine omnipause neurons, permitting burst neurons to discharge until the eye movement is completed. These measurable abnormalities of saccades provide a means to evaluate the effects of novel treatments for LOTS.

  15. BGDB: a database of bivalent genes.

    Science.gov (United States)

    Li, Qingyan; Lian, Shuabin; Dai, Zhiming; Xiang, Qian; Dai, Xianhua

    2013-01-01

A bivalent gene is a gene marked with both the H3K4me3 and H3K27me3 epigenetic modifications in the same region, and such genes are proposed to play a pivotal role in pluripotency in embryonic stem (ES) cells. Identifying these bivalent genes and understanding their functions are important for further research on lineage specification and embryo development. So far, a large amount of genome-wide histone modification data has been generated in mouse and human ES cells. These valuable data make it possible to identify bivalent genes, but no comprehensive data repositories or analysis tools are currently available for bivalent genes. In this work, we develop BGDB, the database of bivalent genes. The database contains 6897 bivalent genes in human and mouse ES cells, which are manually collected from the scientific literature. Each entry contains curated information, including genomic context, sequences, gene ontology and other relevant information. The web services of the BGDB database were implemented with PHP + MySQL + JavaScript, and provide diverse query functions. Database URL: http://dailab.sysu.edu.cn/bgdb/
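The defining property of a bivalent gene can be expressed as a small interval-overlap check: a promoter region must overlap both an H3K4me3 peak and an H3K27me3 peak. The gene names, coordinates and peak sets below are invented toy data, not BGDB contents:

```python
def overlaps(interval, peaks):
    """True if interval (start, end) overlaps any (start, end) peak."""
    s, e = interval
    return any(ps < e and s < pe for ps, pe in peaks)

h3k4me3  = [(100, 250), (900, 1000)]
h3k27me3 = [(180, 300), (5000, 5200)]

promoters = {"geneA": (150, 260), "geneB": (920, 980), "geneC": (5100, 5150)}

bivalent = [g for g, iv in promoters.items()
            if overlaps(iv, h3k4me3) and overlaps(iv, h3k27me3)]
print(bivalent)  # ['geneA'] -- only geneA carries both marks
```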

  16. SMALL-SCALE AND GLOBAL DYNAMOS AND THE AREA AND FLUX DISTRIBUTIONS OF ACTIVE REGIONS, SUNSPOT GROUPS, AND SUNSPOTS: A MULTI-DATABASE STUDY

    Energy Technology Data Exchange (ETDEWEB)

    Muñoz-Jaramillo, Andrés; Windmueller, John C.; Amouzou, Ernest C.; Longcope, Dana W. [Department of Physics, Montana State University, Bozeman, MT 59717 (United States); Senkpeil, Ryan R. [Department of Physics, Purdue University, West Lafayette, IN 47907 (United States); Tlatov, Andrey G. [Kislovodsk Mountain Astronomical Station of the Pulkovo Observatory, Kislovodsk 357700 (Russian Federation); Nagovitsyn, Yury A. [Pulkovo Astronomical Observatory, Russian Academy of Sciences, St. Petersburg 196140 (Russian Federation); Pevtsov, Alexei A. [National Solar Observatory, Sunspot, NM 88349 (United States); Chapman, Gary A.; Cookson, Angela M. [San Fernando Observatory, Department of Physics and Astronomy, California State University Northridge, Northridge, CA 91330 (United States); Yeates, Anthony R. [Department of Mathematical Sciences, Durham University, South Road, Durham DH1 3LE (United Kingdom); Watson, Fraser T. [National Solar Observatory, Tucson, AZ 85719 (United States); Balmaceda, Laura A. [Institute for Astronomical, Terrestrial and Space Sciences (ICATE-CONICET), San Juan (Argentina); DeLuca, Edward E. [Harvard-Smithsonian Center for Astrophysics, Cambridge, MA 02138 (United States); Martens, Petrus C. H., E-mail: munoz@solar.physics.montana.edu [Department of Physics and Astronomy, Georgia State University, Atlanta, GA 30303 (United States)

    2015-02-10

In this work, we take advantage of 11 different sunspot group, sunspot, and active region databases to characterize the area and flux distributions of photospheric magnetic structures. We find that, when taken separately, different databases are better fitted by different distributions (as has been reported previously in the literature). However, we find that all our databases can be reconciled by the simple application of a proportionality constant, and that, in reality, different databases are sampling different parts of a composite distribution. This composite distribution is made up by linear combination of Weibull and log-normal distributions, where a pure Weibull (log-normal) characterizes the distribution of structures with fluxes below (above) 10^21 Mx (10^22 Mx). Additionally, we demonstrate that the Weibull distribution shows the expected linear behavior of a power-law distribution (when extended to smaller fluxes), making our results compatible with the results of Parnell et al. We propose that this is evidence of two separate mechanisms giving rise to visible structures on the photosphere: one directly connected to the global component of the dynamo (and the generation of bipolar active regions), and the other with the small-scale component of the dynamo (and the fragmentation of magnetic structures due to their interaction with turbulent convection).
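The proposed composite distribution can be sketched as a two-component mixture sampler: a Weibull component dominating small fluxes and a log-normal component dominating large ones. The mixture weight and every distribution parameter below are invented for illustration and are not the paper's fitted values:

```python
import random, math

random.seed(1)

def sample_flux(w=0.7, weib_scale=3e20, weib_shape=0.8,
                ln_mu=math.log(5e21), ln_sigma=0.8):
    """Draw one flux (in Mx) from a Weibull/log-normal mixture."""
    if random.random() < w:
        return random.weibullvariate(weib_scale, weib_shape)  # small fluxes
    return random.lognormvariate(ln_mu, ln_sigma)             # large fluxes

fluxes = [sample_flux() for _ in range(100_000)]
small = sum(f < 1e21 for f in fluxes) / len(fluxes)
print(f"fraction of structures below 10^21 Mx: {small:.2f}")
```

With these toy parameters, roughly two-thirds of sampled structures fall below the 10^21 Mx boundary, mirroring the paper's picture of a small-flux regime populated almost entirely by the Weibull component.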

  17. Metal concentrations from permeable pavement parking lot in Edison, NJ

    Data.gov (United States)

    U.S. Environmental Protection Agency — The U.S. Environmental Protection Agency constructed a 4000-m2 parking lot in Edison, New Jersey in 2009. The parking lot is surfaced with three permeable pavements...

  18. SAADA: Astronomical Databases Made Easier

    Science.gov (United States)

    Michel, L.; Nguyen, H. N.; Motch, C.

    2005-12-01

Many astronomers wish to share datasets with their community but lack the manpower to develop databases with the functionalities required for high-level scientific applications. The SAADA project aims at automating the creation and deployment of such databases. A generic but scientifically relevant data model has been designed which allows one to build databases by providing only a limited number of product mapping rules. Databases created by SAADA rely on a relational database supporting JDBC, covered by a Java layer that includes a large amount of generated code. Such databases can simultaneously host spectra, images, source lists and plots. Data are grouped in user-defined collections whose content can be seen as one unique set per data type even if their formats differ. Datasets can be correlated with one another using qualified links. These links help, for example, to handle the nature of a cross-identification (e.g., a distance or a likelihood) or to describe their scientific content (e.g., by associating a spectrum with a catalog entry). The SAADA query engine is based on a language well suited to the data model which can handle constraints on linked data, in addition to classical astronomical queries. These constraints can be applied to the linked objects (number, class and attributes) and/or to the link qualifier values. Databases created by SAADA are accessed through a rich web interface or a Java API. We are currently developing an interoperability module implementing VO protocols.

  19. Security in the CernVM File System and the Frontier Distributed Database Caching System

    International Nuclear Information System (INIS)

    Dykstra, D; Blomer, J

    2014-01-01

    Both the CernVM File System (CVMFS) and the Frontier Distributed Database Caching System (Frontier) distribute centrally updated data worldwide for LHC experiments using http proxy caches. Neither system provides privacy or access control on reading the data, but both control access to updates of the data and can guarantee the authenticity and integrity of the data transferred to clients over the internet. CVMFS has since its early days required digital signatures and secure hashes on all distributed data, and recently Frontier has added X.509-based authenticity and integrity checking. In this paper we detail and compare the security models of CVMFS and Frontier.
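The integrity model the two systems share, secure hashes on the content plus a signature over the catalog of hashes, can be sketched with the Python standard library. An HMAC stands in for the real X.509/RSA signature here, so this is an analogy for the structure of the checks rather than the actual CVMFS or Frontier mechanism:

```python
import hashlib, hmac

SECRET = b"stand-in for the repository signing key"  # assumption: shared key

def publish(content: bytes):
    """Produce (content, hash catalog, signature over the catalog)."""
    digest = hashlib.sha256(content).hexdigest()
    catalog = f"sha256:{digest}".encode()
    signature = hmac.new(SECRET, catalog, hashlib.sha256).hexdigest()
    return content, catalog, signature

def verify(content, catalog, signature):
    # 1) the catalog must be authentic (signed by the publisher)
    expected = hmac.new(SECRET, catalog, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return False
    # 2) the content must match the hash the catalog advertises
    return catalog == f"sha256:{hashlib.sha256(content).hexdigest()}".encode()

content, catalog, sig = publish(b"conditions data v42")
print(verify(content, catalog, sig))      # True
print(verify(b"tampered", catalog, sig))  # False
```

Because only the small signed catalog needs to be trusted, untrusted HTTP proxy caches can serve the bulk data, which is precisely what makes this design work over the internet.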

  20. Security in the CernVM File System and the Frontier Distributed Database Caching System

    Science.gov (United States)

    Dykstra, D.; Blomer, J.

    2014-06-01

    Both the CernVM File System (CVMFS) and the Frontier Distributed Database Caching System (Frontier) distribute centrally updated data worldwide for LHC experiments using http proxy caches. Neither system provides privacy or access control on reading the data, but both control access to updates of the data and can guarantee the authenticity and integrity of the data transferred to clients over the internet. CVMFS has since its early days required digital signatures and secure hashes on all distributed data, and recently Frontier has added X.509-based authenticity and integrity checking. In this paper we detail and compare the security models of CVMFS and Frontier.

  1. Can “Cleaned and Greened” Lots Take on the Role of Public Greenspace?

    Science.gov (United States)

    Megan Heckert; Michelle Kondo

    2018-01-01

    Cities are increasingly greening vacant lots to reduce blight. Such programs could reduce inequities in urban greenspace access, but whether and how greened lots are used remains unclear. We surveyed three hundred greened lots in Philadelphia for signs of use and compared characteristics of used and nonused lots. We found physical signs of use that might be found in...

  2. Evaluating and categorizing the reliability of distribution coefficient values in the sorption database

    International Nuclear Information System (INIS)

    Ochs, Michael; Saito, Yoshihiko; Kitamura, Akira; Shibata, Masahiro; Sasamoto, Hiroshi; Yui, Mikazu

    2007-03-01

Japan Atomic Energy Agency (JAEA) has developed a sorption database (JNC-SDB) for bentonite and rocks in order to assess the retardation properties of important radioactive elements in natural and engineered barriers for the H12 report. The database includes distribution coefficients (Kd) of important radionuclides; the SDB contains about 20,000 Kd values, together with additional key information, drawn from many different publications. Accordingly, a classification guideline and classification system were developed in order to evaluate the reliability of each Kd value (Th, Pa, U, Np, Pu, Am, Cm, Cs, Ra, Se, Tc on bentonite). The reliability of 3740 Kd values was evaluated and categorized. (author)

  3. Development and Field Test of a Real-Time Database in the Korean Smart Distribution Management System

    Directory of Open Access Journals (Sweden)

    Sang-Yun Yun

    2014-03-01

Full Text Available Recently, a distribution management system (DMS) that can conduct periodical system analysis and control by mounting various application programs has been actively developed. In this paper, we summarize the development and demonstration of a database structure that can perform real-time system analysis and control of the Korean smart distribution management system (KSDMS). The developed database structure consists of a common information model (CIM)-based off-line database (DB), a physical DB (PDB) for DB establishment on the operating server, a real-time DB (RTDB) for real-time server operation and remote terminal unit data interconnection, and an application common model (ACM) DB for running application programs. The ACM DB for real-time system analysis and control of the application programs was developed by using a parallel table structure and a linked-list model, thereby providing fast input and output as well as high execution speed of application programs. Furthermore, the ACM DB was configured with hierarchical and non-hierarchical data models to reflect the system models, improving DB size and operation speed through the reduction of system elements that are unnecessary for analysis and control. The proposed database model was implemented and tested at the Gochaing and Jeju offices using a real system. Through data measurement of the remote terminal units, and through the operation and control of the application programs using these measurements, the performance, speed, and integrity of the proposed database model were validated, thereby demonstrating that this model can be applied to real systems.

  4. MODEL JOINT ECONOMIC LOT SIZE PADA KASUS PEMASOK-PEMBELI DENGAN PERMINTAAN PROBABILISTIK

    Directory of Open Access Journals (Sweden)

    Wakhid Ahmad Jauhari

    2009-01-01

Full Text Available In this paper we consider a single-vendor single-buyer integrated inventory model with probabilistic demand and equal delivery lot sizes. The model contributes to the current literature by relaxing the deterministic-demand assumption used in almost all integrated inventory models. The objective is to minimize the expected total cost incurred by the vendor and the buyer. We develop effective iterative procedures for finding the optimal solution. Numerical examples are used to illustrate the benefit of integration. A sensitivity analysis is performed to explore the effect of key parameters on delivery lot size, safety factor, production lot size factor and the expected total cost. The results of the numerical examples indicate that our models can achieve a significant amount of savings. Finally, we compare the results of our proposed model with a simulation model. Abstract in Bahasa Indonesia (translated): This study develops a joint vendor-buyer model with probabilistic demand and equal delivery lot sizes. In the model, each order lot is delivered in several shipments, and the vendor produces the goods in a production batch that is an integer multiple of the delivery lot. An algorithm is developed to solve the resulting mathematical model. In addition, the effect of parameter changes on model behavior is studied through a sensitivity analysis of several key parameters, such as lot size, safety stock and total inventory cost. A simulation model is also built to examine the performance of the mathematical model under realistic conditions. Keywords: joint model, probabilistic demand, delivery lot, supply chain

  5. Clustered lot quality assurance sampling to assess immunisation coverage: increasing rapidity and maintaining precision.

    Science.gov (United States)

    Pezzoli, Lorenzo; Andrews, Nick; Ronveaux, Olivier

    2010-05-01

    Vaccination programmes targeting disease elimination aim to achieve very high coverage levels (e.g. 95%). We calculated the precision of different clustered lot quality assurance sampling (LQAS) designs in computer-simulated surveys to provide local health officers in the field with preset LQAS plans to simply and rapidly assess programmes with high coverage targets. We calculated sample size (N), decision value (d) and misclassification errors (alpha and beta) of several LQAS plans by running 10 000 simulations. We kept the upper coverage threshold (UT) at 90% or 95% and decreased the lower threshold (LT) progressively by 5%. We measured the proportion of simulations yielding more than d unvaccinated individuals when the coverage was LT% (pLT) to calculate alpha (1-pLT). We divided N into clusters (between 5 and 10) and recalculated the errors, hypothesising that the coverage would vary across the clusters according to a binomial distribution with preset standard deviations of 0.05 and 0.1 from the mean lot coverage. We selected the plans fulfilling the preset criteria on alpha and beta, recommending LQAS plans that divide the lot into five clusters with N = 50 (5 x 10) and d = 4 to evaluate programmes with a 95% coverage target and d = 7 to evaluate programmes with a 90% target. These plans will considerably increase the feasibility and rapidity of conducting the LQAS in the field.
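The simulation study described can be reproduced in miniature. The plan parameters (N = 50 in five clusters of 10, decision value d) come from the abstract; the error definitions and the truncated-normal stand-in for cluster-to-cluster coverage variation are common conventions assumed here, not necessarily the paper's exact procedure:

```python
import random

# Monte Carlo check of a clustered LQAS plan: sample five clusters of
# ten individuals, count the unvaccinated, and reject the lot if more
# than d are found. Error definitions below are assumptions.

def misclassification(coverage, d, n_clusters=5, per_cluster=10,
                      sd=0.0, sims=10000, rng=random.Random(1)):
    """Return P(more than d unvaccinated are found | true coverage)."""
    rejections = 0
    for _ in range(sims):
        unvaccinated = 0
        for _ in range(n_clusters):
            # cluster coverage varies around the lot mean
            p = min(1.0, max(0.0, rng.gauss(coverage, sd)))
            unvaccinated += sum(rng.random() > p
                                for _ in range(per_cluster))
        if unvaccinated > d:
            rejections += 1
    return rejections / sims

# risk of rejecting a lot that meets the 95% target with d = 4
print(misclassification(0.95, d=4))
# risk of accepting a lot whose true coverage is only 80%
print(1 - misclassification(0.80, d=4))
# same plan with cluster-level heterogeneity (sd = 0.05)
print(misclassification(0.95, d=4, sd=0.05))
```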

  6. A Unified Peer-to-Peer Database Framework for XQueries over Dynamic Distributed Content and its Application for Scalable Service Discovery

    CERN Document Server

    Hoschek, Wolfgang

    In a large distributed system spanning administrative domains such as a Grid, it is desirable to maintain and query dynamic and timely information about active participants such as services, resources and user communities. The web services vision promises that programs are made more flexible and powerful by querying Internet databases (registries) at runtime in order to discover information and network attached third-party building blocks. Services can advertise themselves and related metadata via such databases, enabling the assembly of distributed higher-level components. In support of this vision, this thesis shows how to support expressive general-purpose queries over a view that integrates autonomous dynamic database nodes from a wide range of distributed system topologies. We motivate and justify the assertion that realistic ubiquitous service and resource discovery requires a rich general-purpose query language such as XQuery or SQL. Next, we introduce the Web Service Discovery Architecture (WSDA), wh...

  7. 7 CFR 993.104 - Lot.

    Science.gov (United States)

    2010-01-01

    ... means any quantity of prunes delivered by one producer or one dehydrator to a handler on which... purposes of §§ 993.50 and 993.150 means: (1) With respect to in-line inspection either (i) the aggregate... identification (e.g., brand) if in consumer packages, and offered for inspection as a lot; or (ii) prunes...

  8. Selection of seed lots of Pinus taeda L. for tissue culture

    Directory of Open Access Journals (Sweden)

    Diego Pascoal Golle

    2014-06-01

    Full Text Available The aim of this work was to identify the fungal genera associated with three Pinus taeda L. seed lots, to assess the sanitary and physiological quality of these lots as selection criteria for tissue culture, and to evaluate the in vitro establishment of explants of seminal origin in different nutritive media. It was possible to discriminate the lots by sanitary and physiological quality, and to establish Pinus taeda plants in vitro from cotyledonary nodes obtained through aseptic seed germination of the lot selected for its superior sanitary and physiological quality. The nutritive media MS, ½ MS and WPM were equally suitable for this purpose. In the sanitary analysis, the fungal genera Fusarium, Penicillium and Trichoderma showed the highest sensitivity. For the physiological evaluation, the important variables were: abnormal seedlings; strong normal seedlings; and the length, fresh weight and dry weight of strong normal seedlings. The analyses supported the choice of seed lots for in vitro culture, and all culture media were adequate for the establishment of this species in tissue culture.

  9. "Mr. Database" : Jim Gray and the History of Database Technologies.

    Science.gov (United States)

    Hanwahr, Nils C

    2017-12-01

    Although the widespread use of the term "Big Data" is comparatively recent, it invokes a phenomenon in the developments of database technology with distinct historical contexts. The database engineer Jim Gray, known as "Mr. Database" in Silicon Valley before his disappearance at sea in 2007, was involved in many of the crucial developments since the 1970s that constitute the foundation of exceedingly large and distributed databases. Jim Gray was involved in the development of relational database systems based on the concepts of Edgar F. Codd at IBM in the 1970s before he went on to develop principles of Transaction Processing that enable the parallel and highly distributed performance of databases today. He was also involved in creating forums for discourse between academia and industry, which influenced industry performance standards as well as database research agendas. As a co-founder of the San Francisco branch of Microsoft Research, Gray increasingly turned toward scientific applications of database technologies, e.g. leading the TerraServer project, an online database of satellite images. Inspired by Vannevar Bush's idea of the memex, Gray laid out his vision of a Personal Memex as well as a World Memex, eventually postulating a new era of data-based scientific discovery termed "Fourth Paradigm Science". This article gives an overview of Gray's contributions to the development of database technology as well as his research agendas and shows that central notions of Big Data have been occupying database engineers for much longer than the actual term has been in use.

  10. Received Signal Strength Database Interpolation by Kriging for a Wi-Fi Indoor Positioning System.

    Science.gov (United States)

    Jan, Shau-Shiun; Yeh, Shuo-Ju; Liu, Ya-Wen

    2015-08-28

    The main approach for a Wi-Fi indoor positioning system is based on received signal strength (RSS) measurements, and the fingerprinting method is utilized to determine the user position by matching the RSS values with a pre-surveyed RSS database. Building an RSS fingerprint database is essential for an RSS-based indoor positioning system, but it requires substantial time and effort, and the labor grows as the indoor environment becomes larger. To provide better indoor positioning services while reducing the labor required to establish the positioning system, an indoor positioning system with an appropriate spatial interpolation method is needed. An advantage of the RSS approach is that the signal strength decays as the transmission distance increases, and this propagation characteristic is exploited in this paper by extending the database through interpolation with the Kriging algorithm. Using the distribution of reference points (RPs) at measured points, the signal propagation model of each Wi-Fi access point (AP) in the building can be built and expressed as a function. This function, capturing the spatial structure of the environment, can be used to create the RSS database quickly in different indoor environments. Thus, in this paper, a Wi-Fi indoor positioning system based on the Kriging fingerprinting method is developed. As the experimental results show, with a 72.2% probability the error of the RSS database extended with Kriging is less than 3 dBm compared to the surveyed RSS database. Importantly, the positioning error of the developed Wi-Fi indoor positioning system with Kriging is reduced by 17.9% on average compared to the system without Kriging.
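The core of the approach, ordinary kriging of surveyed RSS values onto unsurveyed grid points, can be sketched in a few lines; the exponential variogram and its parameters below are illustrative, not the paper's fitted propagation model:

```python
import numpy as np

# Minimal ordinary-kriging sketch of the RSS database extension idea:
# known fingerprints at reference points (RPs) are interpolated to a new
# grid point. Variogram model and all numbers are assumptions.

def variogram(h, sill=25.0, rng=8.0):
    """Exponential semivariogram (nugget 0), an illustrative choice."""
    return sill * (1.0 - np.exp(-h / rng))

def krige(coords, values, target):
    """Ordinary kriging estimate at `target` from RPs `coords`."""
    n = len(coords)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = variogram(d)
    A[n, n] = 0.0                      # Lagrange-multiplier corner
    b = np.ones(n + 1)
    b[:n] = variogram(np.linalg.norm(coords - target, axis=1))
    w = np.linalg.solve(A, b)[:n]      # kriging weights sum to 1
    return float(w @ values)

# surveyed RSS values (dBm) at four reference points on a 10 m grid
rps = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
rss = np.array([-40.0, -55.0, -52.0, -60.0])
print(krige(rps, rss, np.array([5.0, 5.0])))   # interpolated grid point
print(krige(rps, rss, rps[0]))                 # exact at a surveyed RP
```

With a zero-nugget variogram, kriging is an exact interpolator: predicting at a surveyed RP returns the surveyed value, which makes a convenient sanity check when extending a database this way.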

  11. A Comparative Study on the Lot Release Systems for Vaccines as of 2016.

    Science.gov (United States)

    Fujita, Kentaro; Naito, Seishiro; Ochiai, Masaki; Konda, Toshifumi; Kato, Atsushi

    2017-09-25

    Many countries have already established their own vaccine lot release systems designed for each country's situation, while the World Health Organization promotes the convergence of these regulatory systems so that vaccines of assured quality are provided globally. We conducted a questionnaire-based investigation of the lot release systems for vaccines in 7 countries and 2 regions. We found that a review of the summary protocol by the National Regulatory Authorities was commonly applied for the independent lot release of vaccines; however, we also noted some diversity between countries, especially in regard to testing policy. Some countries and regions, including Japan, regularly tested every lot of vaccines, whereas in other countries and regions the frequency of these tests was reduced based on risk assessment of the products. Test items selected for lot release varied among the countries and regions investigated, although there was a tendency to prioritize potency tests. An understanding of lot release policies may contribute to improving and harmonizing the lot release system globally in the future.

  12. Cluster-sample surveys and lot quality assurance sampling to evaluate yellow fever immunisation coverage following a national campaign, Bolivia, 2007.

    Science.gov (United States)

    Pezzoli, Lorenzo; Pineda, Silvia; Halkyer, Percy; Crespo, Gladys; Andrews, Nick; Ronveaux, Olivier

    2009-03-01

    To estimate the yellow fever (YF) vaccine coverage for the endemic and non-endemic areas of Bolivia and to determine whether selected districts had acceptable levels of coverage (>70%). We conducted two surveys of 600 individuals (25 x 12 clusters) to estimate coverage in the endemic and non-endemic areas. We assessed 11 districts using lot quality assurance sampling (LQAS). The lot (district) sample was 35 individuals with six as decision value (alpha error 6% if true coverage 70%; beta error 6% if true coverage 90%). To increase feasibility, we divided the lots into five clusters of seven individuals; to investigate the effect of clustering, we calculated alpha and beta by conducting simulations where each cluster's true coverage was sampled from a normal distribution with a mean of 70% or 90% and standard deviations of 5% or 10%. Estimated coverage was 84.3% (95% CI: 78.9-89.7) in endemic areas, 86.8% (82.5-91.0) in non-endemic and 86.0% (82.8-89.1) nationally. LQAS showed that four lots had unacceptable coverage levels. In six lots, results were inconsistent with the estimated administrative coverage. The simulations suggested that the effect of clustering the lots is unlikely to have significantly increased the risk of making incorrect accept/reject decisions. Estimated YF coverage was high. Discrepancies between administrative coverage and LQAS results may be due to incorrect population data. Even allowing for clustering in LQAS, the statistical errors would remain low. Catch-up campaigns are recommended in districts with unacceptable coverage.

  13. LOD-a-lot : A queryable dump of the LOD cloud

    NARCIS (Netherlands)

    Fernández, Javier D.; Beek, Wouter; Martínez-Prieto, Miguel A.; Arias, Mario

    2017-01-01

    LOD-a-lot democratizes access to the Linked Open Data (LOD) Cloud by serving more than 28 billion unique triples from 650K datasets over a single self-indexed file. This corpus can be queried online with a sustainable Linked Data Fragments interface, or downloaded and consumed locally: LOD-a-lot

  14. Optimal Multi-Level Lot Sizing for Requirements Planning Systems

    OpenAIRE

    Earle Steinberg; H. Albert Napier

    1980-01-01

    The widespread use of advanced information systems such as Material Requirements Planning (MRP) has significantly altered the practice of dependent-demand inventory management. Recent research has focused on the development of multi-level lot sizing heuristics for such systems. In this paper, we develop an optimal procedure for the multi-period, multi-product, multi-level lot sizing problem by modeling the system as a constrained generalized network with fixed charge arcs and side constraints. T...
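The paper's multi-level network formulation is too large for a short sketch, but the classic single-level Wagner-Whitin dynamic program that multi-level lot-sizing procedures build on fits in a few lines (costs and demands here are illustrative):

```python
# Single-level Wagner-Whitin dynamic program: the classic building block
# that multi-level lot-sizing procedures generalize. Production in a
# setup period covers demand until the next setup; holding cost accrues
# per unit per period carried. Instance data are illustrative.

def wagner_whitin(demand, setup, hold):
    """Minimum total setup + holding cost over the horizon."""
    n = len(demand)
    best = [0.0] + [float("inf")] * n   # best[j]: cost of first j periods
    for j in range(1, n + 1):
        for t in range(1, j + 1):       # last setup in period t
            carry = sum(hold * (i - t) * demand[i - 1]
                        for i in range(t, j + 1))
            best[j] = min(best[j], best[t - 1] + setup + carry)
    return best[n]

print(wagner_whitin([10, 20, 30], setup=40, hold=1))  # -> 100.0
```

For this instance the optimum places setups in periods 1 and 3: two setups (80) plus carrying period 2's demand one period (20), beating both lot-for-lot (120) and a single setup (120).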

  15. The benefits of a product-independent lexical database with formal word features

    NARCIS (Netherlands)

    Froon, Johanna; Froon, Janneke; de Jong, Franciska M.G.

    Dictionaries can be used as a basis for lexicon development for NLP applications. However, it often takes a lot of pre-processing before they are usable. In the last 5 years a product-independent database of formal word features has been developed on the basis of the Van Dale dictionaries for Dutch.

  16. Monitoring of services with non-relational databases and map-reduce framework

    CERN Document Server

    Babik, M; CERN. Geneva. IT Department

    2012-01-01

    Service Availability Monitoring (SAM) is a well-established monitoring framework that performs regular measurements of the core site services and reports the corresponding availability and reliability of the Worldwide LHC Computing Grid (WLCG) infrastructure. One of the existing extensions of SAM is Site Wide Area Testing (SWAT), which gathers monitoring information from the worker nodes via instrumented jobs. This generates quite a lot of monitoring data to process, as there are several data points for every job and several million jobs are executed every day. The recent uptake of non-relational databases opens a new paradigm in the large-scale storage and distributed processing of systems with heavy read-write workloads. For SAM this brings new possibilities to improve its model, from performing aggregation of measurements to storing raw data and subsequent re-processing. Both SAM and SWAT are currently tuned to run at top performance, reaching some of the limits in storage and processing power of their exi...
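The map-reduce style of aggregation alluded to above can be illustrated with a toy pass over SWAT-like job records; the record layout and site names below are invented for illustration:

```python
from collections import defaultdict

# Toy map-reduce pass over SWAT-style job records, sketching how raw
# per-job probe results could be aggregated into per-site availability.
# Record layout and site names are invented, not SAM/SWAT schemas.

jobs = [
    {"site": "CERN-PROD", "test": "swat.cpu", "ok": True},
    {"site": "CERN-PROD", "test": "swat.io",  "ok": False},
    {"site": "FZK-LCG2",  "test": "swat.cpu", "ok": True},
    {"site": "FZK-LCG2",  "test": "swat.cpu", "ok": True},
]

def map_phase(record):
    # emit (key, value) pairs: one (passed, total) sample per job
    yield record["site"], (1 if record["ok"] else 0, 1)

def reduce_phase(pairs):
    acc = defaultdict(lambda: (0, 0))
    for site, (ok, tot) in pairs:
        a, b = acc[site]
        acc[site] = (a + ok, b + tot)   # associative, so shardable
    return {s: ok / tot for s, (ok, tot) in acc.items()}

availability = reduce_phase(p for job in jobs for p in map_phase(job))
print(availability)  # per-site fraction of successful probes
```

Because the reduce step is a sum of pairs, it can be partitioned across nodes and combined later, which is the property that lets frameworks of this kind re-process raw monitoring data at scale.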

  17. Manufacturability: from design to SPC limits through "corner-lot" characterization

    Science.gov (United States)

    Hogan, Timothy J.; Baker, James C.; Wesneski, Lisa; Black, Robert S.; Rothenbury, Dave

    2005-01-01

    Texas Instruments' Digital Micromirror Device (DMD) is used in a wide variety of optical display applications ranging from fixed and portable projectors to high-definition television (HDTV) to digital cinema projection systems. A new DMD pixel architecture, called "FTP", was designed and qualified by Texas Instruments' DLP(TM) Group in 2003 to meet increased performance objectives for brightness and contrast ratio. Coordination between design, test and fabrication groups was required to balance pixel performance requirements and manufacturing capability. "Corner lot" designed experiments (DOE) were used to verify the "fabrication space" available for the pixel design. The corner-lot technique allows confirmation of manufacturability projections early in the design/qualification cycle. Through careful design and analysis of the corner-lot DOE, a balance of critical dimension (CD) "budgets" is possible so that specification and process control limits can be established that meet both customer and factory requirements. The application of corner-lot DOE is illustrated in a case history of the DMD "FTP" pixel. The process for balancing test parameter requirements with multiple critical dimension budgets is shown. MEMS/MOEMS device design and fabrication can use similar techniques to achieve aggressive design-to-qualification goals.

  18. Producing Distribution Maps for a Spatially-Explicit Ecosystem Model Using Large Monitoring and Environmental Databases and a Combination of Interpolation and Extrapolation

    Directory of Open Access Journals (Sweden)

    Arnaud Grüss

    2018-01-01

    Full Text Available To be able to simulate spatial patterns of predator-prey interactions, many spatially-explicit ecosystem modeling platforms, including Atlantis, need to be provided with distribution maps defining the annual or seasonal spatial distributions of functional groups and life stages. We developed a methodology combining extrapolation and interpolation of the predictions made by statistical habitat models to produce distribution maps for the fish and invertebrates represented in the Atlantis model of the Gulf of Mexico (GOM) Large Marine Ecosystem (LME) ("Atlantis-GOM"). This methodology consists of: (1) compiling a large monitoring database, gathering all the fisheries-independent and fisheries-dependent data collected in the northern (U.S.) GOM since 2000; (2) compiling a large environmental database, storing all the environmental parameters known to influence the spatial distribution patterns of fish and invertebrates of the GOM; (3) fitting binomial generalized additive models (GAMs) to the large monitoring and environmental databases, and geostatistical binomial generalized linear mixed models (GLMMs) to the large monitoring database; and (4) employing GAM predictions to infer spatial distributions in the southern GOM, and GLMM predictions to infer spatial distributions in the U.S. GOM. Thus, our methodology allows for reasonable extrapolation in the southern GOM based on a large amount of monitoring and environmental data, and for interpolation in the U.S. GOM accurately reflecting the probability of encountering fish and invertebrates in that region. We used an iterative cross-validation procedure to validate GAMs. When a GAM did not pass the validation test, we employed a GAM for a related functional group/life stage to generate distribution maps for the southern GOM. In addition, no geostatistical GLMMs were fit for the functional groups and life stages whose depth, longitudinal and latitudinal ranges within the U.S. GOM are not entirely covered by
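Both the GAMs and GLMMs mentioned rest on a binomial likelihood for presence/absence data. As a simplified stand-in for those library-heavy models, a plain binomial GLM can be fit by iteratively reweighted least squares (IRLS); the covariate name and data below are synthetic, not the paper's:

```python
import numpy as np

# Bare IRLS fit of a binomial (logistic) GLM for presence/absence data,
# a simplified stand-in for the spatial GAMs/GLMMs described above.
# Covariate and data are synthetic assumptions.

def fit_binomial_glm(X, y, iters=25, ridge=1e-8):
    """Return coefficients of P(present) = sigmoid(X @ beta)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        eta = X @ beta
        mu = 1.0 / (1.0 + np.exp(-eta))
        w = mu * (1.0 - mu)                        # IRLS weights
        z = eta + (y - mu) / np.maximum(w, 1e-12)  # working response
        H = X.T @ (w[:, None] * X) + ridge * np.eye(X.shape[1])
        beta = np.linalg.solve(H, X.T @ (w * z))
    return beta

rng = np.random.default_rng(0)
depth = rng.uniform(0, 200, 400)                   # synthetic depth (m)
p_true = 1.0 / (1.0 + np.exp(-(2.0 - 0.03 * depth)))
present = (rng.random(400) < p_true).astype(float)
X = np.column_stack([np.ones(400), depth])
beta = fit_binomial_glm(X, present)
print(beta)  # intercept and depth effect
```

A GAM replaces the single linear depth term with a smooth basis expansion, and a geostatistical GLMM adds a spatially correlated random effect, but the binomial IRLS core is the same.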

  19. Checkpointing and Recovery in Distributed and Database Systems

    Science.gov (United States)

    Wu, Jiang

    2011-01-01

    A transaction-consistent global checkpoint of a database records a state of the database which reflects the effect of only completed transactions and not the results of any partially executed transactions. This thesis establishes the necessary and sufficient conditions for a checkpoint of a data item (or the checkpoints of a set of data items) to…

  20. Study on managing EPICS database using ORACLE

    International Nuclear Information System (INIS)

    Liu Shu; Wang Chunhong; Zhao Jijiu

    2007-01-01

    EPICS is used as the development toolkit of the BEPCII control system. The core of EPICS is a distributed database residing in front-end machines. The distributed database is usually created with tools such as VDCT or a text editor on the host, then loaded to the front-end target IOCs through the network. In the BEPCII control system there are about 20,000 signals, distributed over more than 20 IOCs. All the databases are developed by device control engineers using VDCT or a text editor; there are no uniform tools providing transparent management. The paper first presents the current status of EPICS database management in many labs. Secondly, it studies the EPICS database and the interface between ORACLE and the EPICS database. Finally, it introduces the software development and its application in the BEPCII control system. (authors)

  1. Choosing a design to fit the situation: how to improve specificity and positive predictive values using Bayesian lot quality assurance sampling

    OpenAIRE

    Olives, Casey; Pagano, Marcello

    2013-01-01

    Background Lot Quality Assurance Sampling (LQAS) is a provably useful tool for monitoring health programmes. Although LQAS ensures acceptable Producer and Consumer risks, the literature alleges that the method suffers from poor specificity and positive predictive values (PPVs). We suggest that poor LQAS performance is due, in part, to variation in the true underlying distribution. However, until now the role of the underlying distribution in expected performance has not been adequately examined.

  2. Data-mining analysis of the global distribution of soil carbon in observational databases and Earth system models

    Science.gov (United States)

    Hashimoto, Shoji; Nanko, Kazuki; Ťupek, Boris; Lehtonen, Aleksi

    2017-03-01

    Future climate change will dramatically change the carbon balance in the soil, and this change will affect the terrestrial carbon stock and the climate itself. Earth system models (ESMs) are used to understand the current climate and to project future climate conditions, but the soil organic carbon (SOC) stocks simulated by ESMs and those of observational databases are not well correlated when the two are compared at fine grid scales. However, the specific key processes and factors, as well as the relationships among these factors that govern the SOC stock, remain unclear; the inclusion of such missing information would improve the agreement between modeled and observational data. In this study, we sought to identify the influential factors that govern global SOC distribution in observational databases, as well as those simulated by ESMs. We used a data-mining (machine-learning) scheme, boosted regression trees (BRT), to identify the factors affecting the SOC stock. We applied the BRT scheme to three observational databases and 15 ESM outputs from the fifth phase of the Coupled Model Intercomparison Project (CMIP5) and examined the effects of 13 variables/factors categorized into five groups (climate, soil property, topography, vegetation, and land-use history). Globally, the contributions of mean annual temperature, clay content, carbon-to-nitrogen (CN) ratio, wetland ratio, and land cover were high in observational databases, whereas the contributions of the mean annual temperature, land cover, and net primary productivity (NPP) were predominant in the SOC distribution in ESMs. A comparison of the influential factors at a global scale revealed that the most distinct differences between the SOCs from the observational databases and ESMs were the low clay content and CN ratio contributions, and the high NPP contribution in the ESMs. The results of this study will aid in identifying the causes of the current mismatches between observational SOC databases and ESM outputs
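The BRT idea, stagewise fitting of small trees to the residuals of the ensemble so far, can be illustrated with depth-one trees (stumps) on synthetic data; real BRT adds deeper trees, stochastic subsampling, and tuned shrinkage schedules:

```python
import numpy as np

# Miniature gradient-boosting sketch: regression stumps fit to
# residuals, the core idea behind boosted regression trees (BRT).
# Data and hyper-parameters are synthetic assumptions.

def best_stump(x, r):
    """Best single-split predictor of residuals r along feature x."""
    best = (np.inf, None)
    for thr in np.unique(x)[:-1]:
        left, right = r[x <= thr], r[x > thr]
        pred = np.where(x <= thr, left.mean(), right.mean())
        sse = ((r - pred) ** 2).sum()
        if sse < best[0]:
            best = (sse, (thr, left.mean(), right.mean()))
    return best[1]

def boost(x, y, rounds=50, lr=0.3):
    pred = np.full_like(y, y.mean())
    model = []
    for _ in range(rounds):
        # each stump is fit to the current residuals, then shrunk by lr
        thr, lo, hi = best_stump(x, y - pred)
        pred = pred + lr * np.where(x <= thr, lo, hi)
        model.append((thr, lo, hi))
    return model, pred

x = np.linspace(0, 10, 100)
y = np.sin(x)                      # smooth target for the stump ensemble
model, fit = boost(x, y)
print(float(((y - fit) ** 2).mean()))   # training MSE after boosting
```

In full BRT the relative reduction in error attributable to each predictor is accumulated into the "contribution" scores of the kind reported above.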

  3. The economic lot size and relevant costs

    NARCIS (Netherlands)

    Corbeij, M.H.; Jansen, R.A.; Grübström, R.W.; Hinterhuber, H.H.; Lundquist, J.

    1993-01-01

    In many accounting textbooks it is strongly argued that decisions should always be evaluated on relevant costs; that is variable costs and opportunity costs. Surprisingly, when it comes to Economic Order Quantities or Lot Sizes, some textbooks appear to be less straightforward. The question whether

  4. Ammonia losses and nitrogen partitioning at a southern High Plains open lot dairy

    Science.gov (United States)

    Todd, Richard W.; Cole, N. Andy; Hagevoort, G. Robert; Casey, Kenneth D.; Auvermann, Brent W.

    2015-06-01

    Animal agriculture is a significant source of ammonia (NH3). Cattle excrete most ingested nitrogen (N); most urinary N is converted to NH3, volatilized and lost to the atmosphere. Open lot dairies on the southern High Plains are a growing industry and face environmental challenges as well as reporting requirements for NH3 emissions. We quantified NH3 emissions from the open lot and wastewater lagoons of a commercial New Mexico dairy during a nine-day summer campaign. The 3500-cow dairy consisted of open lot, manure-surfaced corrals (22.5 ha area). Lactating cows comprised 80% of the herd. A flush system using recycled wastewater intermittently removed manure from feeding alleys to three lagoons (1.8 ha area). Open path lasers measured atmospheric NH3 concentration, sonic anemometers characterized turbulence, and inverse dispersion analysis was used to quantify emissions. Ammonia fluxes (15-min) averaged 56 and 37 μg m-2 s-1 at the open lot and lagoons, respectively. Ammonia emission rate averaged 1061 kg d-1 at the open lot and 59 kg d-1 at the lagoons; 95% of NH3 was emitted from the open lot. The per capita emission rate of NH3 was 304 g cow-1 d-1 from the open lot (41% of N intake) and 17 g cow-1 d-1 from lagoons (2% of N intake). Daily N input at the dairy was 2139 kg d-1, with 43, 36, 19 and 2% of the N partitioned to NH3 emission, manure/lagoons, milk, and cows, respectively.

  5. Datamining on distributed medical databases

    DEFF Research Database (Denmark)

    Have, Anna Szynkowiak

    2004-01-01

    This Ph.D. thesis focuses on clustering techniques for Knowledge Discovery in Databases. Various data mining tasks relevant for medical applications are described and discussed. A general framework which combines data projection and data mining and interpretation is presented. An overview ... is available. If data is unlabeled, then it is possible to generate keywords (in case of textual data) or key-patterns, as an informative representation of the obtained clusters. The methods are applied on simple artificial data sets, as well as collections of textual and medical data. In Danish: Denne ph...

  6. Lot Sizing Based on Stochastic Demand and Service Level Constraint

    Directory of Open Access Journals (Sweden)

    hajar shirneshan

    2012-06-01

    Full Text Available Considering its applications, stochastic lot sizing is a significant subject in production planning. Moreover, the concept of a service level is more applicable than a shortage cost from the managers' viewpoint. In this paper, the stochastic multi-period multi-item capacitated lot sizing problem is investigated under a service level constraint. First, the single-item model with a service level constraint and no capacity constraint is developed, solved using a dynamic programming algorithm, and the optimal solution derived. The model is then generalized to the multi-item problem with a capacity constraint. The stochastic multi-period multi-item capacitated lot sizing problem is NP-hard, hence the model cannot be solved by exact optimization approaches; therefore, a simulated annealing method is applied for solving the problem. Finally, in order to evaluate the efficiency of the model, a low level criterion has been used.
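A minimal simulated-annealing sketch on a small uncapacitated single-item instance illustrates the metaheuristic route taken for the NP-hard capacitated multi-item case; the instance data, neighbourhood, and cooling schedule are illustrative choices, far simpler than the paper's problem:

```python
import math
import random

# Simulated annealing over binary setup decisions for a small
# uncapacitated single-item lot-sizing instance: each setup period
# produces enough to cover demand until the next setup. All numbers
# below are illustrative assumptions.

demand = [40, 10, 60, 20, 80, 30]
SETUP, HOLD = 90.0, 1.0

def cost(setups):
    """Total setup + holding cost; setups[0] must be 1."""
    total, last = SETUP * sum(setups), 0
    for t, d in enumerate(demand):
        if setups[t]:
            last = t
        total += HOLD * (t - last) * d   # d carried since last setup
    return total

def anneal(iters=4000, temp=100.0, cooling=0.999,
           rng=random.Random(7)):
    cur = [1] * len(demand)              # start from lot-for-lot
    cur_c = cost(cur)
    best, best_c = cur[:], cur_c
    for _ in range(iters):
        cand = cur[:]
        cand[rng.randrange(1, len(demand))] ^= 1   # flip one period
        cand_c = cost(cand)
        if (cand_c <= cur_c
                or rng.random() < math.exp((cur_c - cand_c) / temp)):
            cur, cur_c = cand, cand_c
            if cur_c < best_c:
                best, best_c = cur[:], cur_c
        temp *= cooling
    return best, best_c

plan, total = anneal()
print(plan, total)
```

Accepting some uphill moves early (while the temperature is high) is what lets the search escape the local optima that make problems like the capacitated multi-item case hard for simple descent.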

  7. New free Danish online (Q)SAR predictions database with >600,000 substances

    DEFF Research Database (Denmark)

    Wedebye, Eva Bay; Dybdahl, Marianne; Reffstrup, Trine Klein

    Since 2005 the Danish (Q)SAR Database has been freely available on the Internet. It is a tool that allows single chemical substance profiling and screenings based on predicted hazard information. The database is also included in the OECD (Q)SAR Application Toolbox, which is used worldwide by regulators and industry. A lot of progress in (Q)SAR model development, application and documentation has been made since the publication in 2005. A new and completely rebuilt online (Q)SAR predictions database was therefore published in November 2015 at http://qsar.food.dtu.dk. The number of chemicals in the database has been expanded from 185,000 to >600,000. As far as possible, all organic single-constituent substances that were pre-registered under REACH have been included in the new structure set. The new Danish (Q)SAR Database includes estimates from more than 200 (Q)SARs covering a wide range of hazardous...

  8. Efficient Partitioning of Large Databases without Query Statistics

    Directory of Open Access Journals (Sweden)

    Shahidul Islam KHAN

    2016-11-01

    Full Text Available An efficient way of improving the performance of a database management system is distributed processing. Distribution of data involves fragmentation or partitioning, replication, and allocation process. Previous research works provided partitioning based on empirical data about the type and frequency of the queries. These solutions are not suitable at the initial stage of a distributed database as query statistics are not available then. In this paper, I have presented a fragmentation technique, Matrix based Fragmentation (MMF, which can be applied at the initial stage as well as at later stages of distributed databases. Instead of using empirical data, I have developed a matrix, Modified Create, Read, Update and Delete (MCRUD, to partition a large database properly. Allocation of fragments is done simultaneously in my proposed technique. So using MMF, no additional complexity is added for allocating the fragments to the sites of a distributed database as fragmentation is synchronized with allocation. The performance of a DDBMS can be improved significantly by avoiding frequent remote access and high data transfer among the sites. Results show that proposed technique can solve the initial partitioning problem of large distributed databases.
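A CRUD-matrix-driven allocation in the spirit of MCRUD can be sketched as follows; the operation weights, tie-breaking, and toy schema are assumptions for illustration, and the paper's MCRUD construction differs in detail:

```python
# Illustrative sketch of CRUD-matrix-driven fragmentation: each
# attribute is allocated to the fragment whose predicate class uses it
# most heavily, weighted by operation type. Weights and schema are
# hypothetical, not the paper's MCRUD definition.

WEIGHT = {"C": 4, "U": 3, "D": 2, "R": 1}   # assumed op weights

# rows: predicate classes (candidate fragments); cols: attribute usage
mcrud = {
    "patients_local":  {"name": "CRU", "ward": "CRUD", "bill": "R"},
    "billing_office":  {"name": "R",   "ward": "",     "bill": "CRUD"},
}

def allocate(matrix):
    attrs = {a for row in matrix.values() for a in row}
    placement = {}
    for a in sorted(attrs):
        score = {frag: sum(WEIGHT[op] for op in row.get(a, ""))
                 for frag, row in matrix.items()}
        placement[a] = max(score, key=score.get)
    return placement

print(allocate(mcrud))  # attribute -> fragment assignment
```

Because fragmentation here is decided from declared usage classes rather than observed query logs, it can run at the initial stage of a distributed database, which is the gap the paper targets.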

  9. Five years database of landslides and floods affecting Swiss transportation networks

    Science.gov (United States)

    Voumard, Jérémie; Derron, Marc-Henri; Jaboyedoff, Michel

    2017-04-01

    Switzerland is a country threatened by many natural hazards. Many events occur in the built environment, affecting infrastructure, buildings or transportation networks and occasionally producing expensive damage. This is the reason why large landslides are generally well studied and monitored in Switzerland to reduce the financial and human risks. However, we have noticed a lack of data on the small events which have impacted roads and railways in recent years. This is why we have collected, in a database, all the reported natural hazard events that have affected the Swiss transportation networks since 2012. More than 800 road and railway closures were recorded in the five years from 2012 to 2016. These events are classified into six classes: earth flow, debris flow, rockfall, flood, avalanche and others. Data come from Swiss online press articles sorted by Google Alerts. The search is based on more than thirty keywords in three languages (Italian, French, German). After verifying that an article indeed relates an event that affected a road or a railway track, it is studied in detail. We finally record about sixty attributes per event, covering the event date, event type, event location and meteorological conditions, as well as impacts and damage to the track and human casualties. From this database, many trends over the five years of data collection can be outlined: in particular, the spatial and temporal distributions of the events, as well as their consequences in terms of traffic (closure duration, deviation, etc.). Even if the database is imperfect (owing to the way it was built and the short time period considered), it highlights the non-negligible impact of small natural hazard events on roads and railways in Switzerland at the national level. This database helps to better understand and quantify these events and to better integrate them into risk assessment.

  10. A comparison of particle swarm optimizations for uncapacitated multilevel lot-sizing problems

    NARCIS (Netherlands)

    Han, Y.; Kaku, I.; Tang, J.; Dellaert, N.P.; Cai, J.; Li, Y.

    2010-01-01

    The multilevel lot-sizing (MLLS) problem is a key production planning problem in the material requirement planning (MRP) system. The MLLS problem deals with determining the production lot sizes of various items appearing in the product structure over a given finite planning horizon to minimize the

  11. The use of knowledge-based Genetic Algorithm for starting time optimisation in a lot-bucket MRP

    Science.gov (United States)

    Ridwan, Muhammad; Purnomo, Andi

    2016-01-01

    In production planning, Material Requirement Planning (MRP) is usually developed based on time-bucket system, a period in the MRP is representing the time and usually weekly. MRP has been successfully implemented in Make To Stock (MTS) manufacturing, where production activity must be started before customer demand is received. However, to be implemented successfully in Make To Order (MTO) manufacturing, a modification is required on the conventional MRP in order to make it in line with the real situation. In MTO manufacturing, delivery schedule to the customers is defined strictly and must be fulfilled in order to increase customer satisfaction. On the other hand, company prefers to keep constant number of workers, hence production lot size should be constant as well. Since a bucket in conventional MRP system is representing time and usually weekly, hence, strict delivery schedule could not be accommodated. Fortunately, there is a modified time-bucket MRP system, called as lot-bucket MRP system that proposed by Casimir in 1999. In the lot-bucket MRP system, a bucket is representing a lot, and the lot size is preferably constant. The time to finish every lot could be varying depends on due date of lot. Starting time of a lot must be determined so that every lot has reasonable production time. So far there is no formal method to determine optimum starting time in the lot-bucket MRP system. Trial and error process usually used for it but some time, it causes several lots have very short production time and the lot-bucket MRP would be infeasible to be executed. This paper presents the use of Genetic Algorithm (GA) for optimisation of starting time in a lot-bucket MRP system. Even though GA is well known as powerful searching algorithm, however, improvement is still required in order to increase possibility of GA in finding optimum solution in shorter time. 
A knowledge-based system has been embedded in the proposed GA as the improvement effort, and it is proven that the
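The starting-time optimisation described above can be sketched as a plain GA over a vector of lot starting times. This is a minimal, generic illustration with hypothetical due dates; the paper's embedded knowledge-based system and its actual MRP data are not reproduced here:

```python
import random

due = [5, 9, 12, 20]   # hypothetical due date (day) of each lot
MIN_GAP = 1            # minimum production time allowed per lot

def fitness(starts):
    """Reward start times that give every lot a balanced production window."""
    times = [d - s for s, d in zip(starts, due)]
    if any(t < MIN_GAP for t in times):
        return float("-inf")          # infeasible: a lot is too short
    mean = sum(times) / len(times)
    return -sum((t - mean) ** 2 for t in times)   # low variance is better

def random_individual():
    return [random.randint(0, d - MIN_GAP) for d in due]

def evolve(pop_size=30, generations=200):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(due))
            child = a[:cut] + b[cut:]             # one-point crossover
            if random.random() < 0.2:             # per-child mutation
                i = random.randrange(len(due))
                child[i] = random.randint(0, due[i] - MIN_GAP)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()   # a feasible, balanced vector of starting times
```

Every gene is generated and mutated within its feasible range, so the search never yields a lot with less than MIN_GAP of production time; the knowledge-based component of the paper would additionally bias crossover and mutation toward promising start times.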

12. CracidMex1: a comprehensive database of global occurrences of cracids (Aves, Galliformes) with distribution in Mexico

    Directory of Open Access Journals (Sweden)

    Gonzalo Pinilla-Buitrago

    2014-06-01

Full Text Available Cracids are among the most vulnerable groups of Neotropical birds. Almost half of the species of this family are included in a conservation risk category. Twelve taxa occur in Mexico, six of which are considered at risk at national level and two are globally endangered. Therefore, it is imperative that high quality, comprehensive, and high-resolution spatial data on the occurrence of these taxa are made available as a valuable tool in the process of defining appropriate management strategies for conservation at a local and global level. We constructed the CracidMex1 database by collating global records of all cracid taxa that occur in Mexico from available electronic databases, museum specimens, publications, “grey literature”, and unpublished records. We generated a database with 23,896 clean, validated, and standardized geographic records. Database quality control was an iterative process that commenced with the consolidation and elimination of duplicate records, followed by the geo-referencing of records when necessary, and their taxonomic and geographic validation using GIS tools and expert knowledge. We followed the geo-referencing protocol proposed by the Mexican National Commission for the Use and Conservation of Biodiversity. We could not estimate the geographic coordinates of 981 records due to inconsistencies or lack of sufficient information in the description of the locality. Given that current records for most of the taxa have some degree of distributional bias, with redundancies at different spatial scales, the CracidMex1 database has allowed us to detect areas where more sampling effort is required to have a better representation of the global spatial occurrence of these cracids. We also found that particular attention needs to be given to taxa identification in those areas where congeners or conspecifics co-occur in order to avoid taxonomic uncertainty. The construction of the CracidMex1 database represents the first

  13. COAP BASED ACUTE PARKING LOT MONITORING SYSTEM USING SENSOR NETWORKS

    Directory of Open Access Journals (Sweden)

    R. Aarthi

    2014-06-01

Full Text Available Vehicle parking is the act of temporarily maneuvering a vehicle into a certain location. To deal with parking monitoring issues such as traffic, this paper proposes a vision of improvements in monitoring the vehicles in parking lots based on sensor networks. Most of the existing work deals with automated parking that is cluster based, where each approach has its own overheads, such as high power consumption, low energy efficiency, and incompatible lot sizes and space. The novel idea in this work is the usage of CoAP (Constrained Application Protocol), recently created by the IETF CoRE group (draft-ietf-core-coap-18, June 28, 2013) to provide a RESTful application layer protocol for communications within embedded wireless networks. This paper presents an enhanced CoAP protocol using a multi-hop flat topology, which makes finding a parking space less stressful for drivers. We aim to minimize the time consumed for finding a free parking lot as well as to increase energy efficiency

  14. OPERA-a human performance database under simulated emergencies of nuclear power plants

    International Nuclear Information System (INIS)

    Park, Jinkyun; Jung, Wondea

    2007-01-01

In complex systems such as the nuclear and chemical industries, the importance of human performance related problems is well recognized. Thus a lot of effort has been spent in this area, and one of the main streams for unraveling human performance related problems is the execution of human reliability analysis (HRA). Unfortunately, a lack of prerequisite information has been pointed out as the most critical problem in conducting HRA. From this necessity, the OPERA database, which can provide operators' performance data obtained under simulated emergencies, has been developed. In this study, typical operators' performance data that are available from the OPERA database are briefly explained. After that, in order to ensure the appropriateness of the OPERA database, operators' performance data from OPERA are compared with those of other studies and real events. As a result, it is believed that operators' performance data of the OPERA database are fairly comparable to those of other studies and real events. Therefore it is meaningful to expect that the OPERA database can be used as a serviceable data source for scrutinizing human performance related problems, including HRA

  15. Determination of supplier-to-supplier and lot-to-lot variability in glycation of recombinant human serum albumin expressed in Oryza sativa.

    Directory of Open Access Journals (Sweden)

    Grant E Frahm

Full Text Available The use of different expression systems to produce the same recombinant human protein can result in expression-dependent chemical modifications (CMs) leading to variability of structure, stability and immunogenicity. Of particular interest are recombinant human proteins expressed in plant-based systems, which have shown particularly high CM variability. In the studies presented here, recombinant human serum albumins (rHSA) produced in Oryza sativa (Asian rice) (OsrHSA) from a number of suppliers have been extensively characterized and compared to plasma-derived HSA (pHSA) and rHSA expressed in yeast (Pichia pastoris and Saccharomyces cerevisiae). The heterogeneity of each sample was evaluated using size exclusion chromatography (SEC), reversed-phase high-performance liquid chromatography (RP-HPLC) and capillary electrophoresis (CE). Modifications of the samples were identified by liquid chromatography-mass spectrometry (LC-MS). The secondary and tertiary structures of the albumin samples were assessed with far U/V circular dichroism spectropolarimetry (far U/V CD) and fluorescence spectroscopy, respectively. Far U/V CD and fluorescence analyses were also used to assess thermal stability and drug binding. High molecular weight aggregates in OsrHSA samples were detected with SEC, and supplier-to-supplier and, more critically, lot-to-lot variability in one manufacturer's supplied products were identified. LC-MS analysis identified a greater number of hexose-glycated arginine and lysine residues on OsrHSA compared to pHSA or rHSA expressed in yeast. This analysis also showed supplier-to-supplier and lot-to-lot variability in the degree of glycation at specific lysine and arginine residues for OsrHSA. Both the number of glycated residues and the degree of glycation correlated positively with the quantity of non-monomeric species and the chromatographic profiles of the samples. Tertiary structural changes were observed for most OsrHSA samples which

  16. 21 CFR 610.1 - Tests prior to release required for each lot.

    Science.gov (United States)

    2010-04-01

    ....1 Section 610.1 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES... release required for each lot. No lot of any licensed product shall be released by the manufacturer prior... considered in determining whether or not the test results meet the test objective, except that a test result...

  17. Report on the database structuring project in fiscal 1996 related to the 'surveys on making databases for energy saving (2)'; 1996 nendo database kochiku jigyo hokokusho. Sho energy database system ka ni kansuru chosa 2

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-03-01

With the objective of supporting the promotion of energy conservation in countries such as Japan, China, Indonesia, the Philippines, Thailand, Malaysia, Taiwan and Korea, primary information on energy conservation in each country was collected, and a database was structured. This paper summarizes the achievements in fiscal 1996. Based on the results of the database project progressed to date, and on the various data collected, this fiscal year discussed structuring the database for its distribution and proliferation. In the discussion, the requirements for the functions to be possessed by the database, the items of data to be recorded in the database, and the processing of the recorded data were put in order with reference to propositions on the database circumstances. Demonstrations of a proliferation version of the database were performed in the Philippines, Indonesia and China. Three hundred CDs for distribution in each country were prepared. Adjustment and confirmation of the operation of the supplied computers were carried out, and operation briefing meetings were held in China and the Philippines. (NEDO)

  18. The Make 2D-DB II package: conversion of federated two-dimensional gel electrophoresis databases into a relational format and interconnection of distributed databases.

    Science.gov (United States)

    Mostaguir, Khaled; Hoogland, Christine; Binz, Pierre-Alain; Appel, Ron D

    2003-08-01

    The Make 2D-DB tool has been previously developed to help build federated two-dimensional gel electrophoresis (2-DE) databases on one's own web site. The purpose of our work is to extend the strength of the first package and to build a more efficient environment. Such an environment should be able to fulfill the different needs and requirements arising from both the growing use of 2-DE techniques and the increasing amount of distributed experimental data.

  19. Improving aggregate behavior in parking lots with appropriate local maneuvers

    KAUST Repository

    Rodriguez, Samuel

    2013-11-01

    In this paper we study the ingress and egress of pedestrians and vehicles in a parking lot. We show how local maneuvers executed by agents permit them to create trajectories in constrained environments, and to resolve the deadlocks between them in mixed-flow scenarios. We utilize a roadmap-based approach which allows us to map complex environments and generate heuristic local paths that are feasible for both pedestrians and vehicles. Finally, we examine the effect that some agent-behavioral parameters have on parking lot ingress and egress. © 2013 IEEE.

  20. A Heuristic Approach for Determining Lot Sizes and Schedules Using Power-of-Two Policy

    Directory of Open Access Journals (Sweden)

    Esra Ekinci

    2007-01-01

Full Text Available We consider the problem of determining realistic and easy-to-schedule lot sizes in a multiproduct, multistage manufacturing environment. We concentrate on a specific type of production, namely flow shop type production. The model developed consists of two parts, a lot sizing problem and a scheduling problem. In the lot sizing problem, we employ binary integer programming and determine reorder intervals for each product using the power-of-two policy. In the second part, using the results obtained from the lot sizing problem, we employ mixed integer programming to determine schedules for a multiproduct, multistage case with multiple machines in each stage. Finally, we provide a numerical example and compare the results with similar methods found in practice.
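For a single product, the power-of-two policy restricts the reorder interval to base * 2^k. A minimal sketch, assuming a simple EOQ-style cost (setup cost K, holding cost h, demand rate d) rather than the paper's binary integer program:

```python
def power_of_two_interval(K, h, d, base=1.0):
    """Pick the reorder interval T = base * 2**k that minimizes the
    EOQ-style average cost C(T) = K/T + h*d*T/2 (convex in T)."""
    cost = lambda T: K / T + h * d * T / 2.0
    k = 0
    # C is convex, so keep doubling while the cost still drops
    while cost(base * 2 ** (k + 1)) < cost(base * 2 ** k):
        k += 1
    T = base * 2 ** k
    return T, cost(T)

T, c = power_of_two_interval(K=100, h=2, d=1)  # unconstrained optimum is T*=10
```

Here the policy picks T = 8 with cost 20.5, against 20.0 at the unconstrained optimum T* = 10, consistent with the classical result that power-of-two intervals lose at most about 6% in cost.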

  1. Large Survey Database: A Distributed Framework for Storage and Analysis of Large Datasets

    Science.gov (United States)

    Juric, Mario

    2011-01-01

The Large Survey Database (LSD) is a Python framework and DBMS for distributed storage, cross-matching and querying of large survey catalogs (>10^9 rows, >1 TB). The primary driver behind its development is the analysis of Pan-STARRS PS1 data. It is specifically optimized for fast queries and parallel sweeps of positionally and temporally indexed datasets. It transparently scales to more than 10^2 nodes, and can be made to function in "shared nothing" architectures. An LSD database consists of a set of vertically and horizontally partitioned tables, physically stored as compressed HDF5 files. Vertically, we partition the tables into groups of related columns ('column groups'), storing together logically related data (e.g., astrometry, photometry). Horizontally, the tables are partitioned into partially overlapping "cells" by position in space (lon, lat) and time (t). This organization allows for fast lookups based on spatial and temporal coordinates, as well as data and task distribution. The design was inspired by the success of Google BigTable (Chang et al., 2006). Our programming model is a pipelined extension of MapReduce (Dean and Ghemawat, 2004). An SQL-like query language is used to access data. For complex tasks, map-reduce "kernels" that operate on query results on a per-cell basis can be written, with the framework taking care of scheduling and execution. The combination leverages users' familiarity with SQL, while offering a fully distributed computing environment. LSD adds little overhead compared to direct Python file I/O. In tests, we swept through 1.1 Grows of Pan-STARRS+SDSS data (220 GB) in less than 15 minutes on a dual CPU machine. In a cluster environment, we achieved bandwidths of 17 Gbit/s (I/O limited). Based on current experience, we believe LSD should scale to be useful for analysis and storage of LSST-scale datasets. It can be downloaded from http://mwscience.net/lsd.
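The per-cell map-reduce "kernel" idea can be illustrated in plain Python. This is a toy stand-in, not LSD's actual API: rows are grouped into (lon, lat) cells, a kernel runs once per cell, and the per-cell results are reduced into a global answer:

```python
from collections import defaultdict

# Toy catalog rows: (lon, lat, magnitude) stand-ins for real columns
rows = [(12.3, 45.1, 19.2), (12.9, 45.8, 18.7), (101.5, -3.2, 20.1)]

def cell_of(lon, lat, size=10.0):
    """Assign a row to a spatial cell, mimicking the (lon, lat) partitioning."""
    return (int(lon // size), int(lat // size))

def map_kernel(cell_rows):
    """Per-cell kernel: count rows and sum their magnitudes."""
    return len(cell_rows), sum(r[2] for r in cell_rows)

# 'Map' phase: group rows into cells and run the kernel on each cell
cells = defaultdict(list)
for r in rows:
    cells[cell_of(r[0], r[1])].append(r)
partials = [map_kernel(v) for v in cells.values()]

# 'Reduce' phase: combine the per-cell partial results
total_rows = sum(n for n, _ in partials)
total_mag = sum(m for _, m in partials)
```

In LSD the map phase runs in parallel across nodes, with the framework scheduling one kernel invocation per cell.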

  2. MOTIVASI PEREMPUAN MEMBUKA USAHA SEKTOR INFORMAL DI DAYA TARIK WISATA TANAH LOT, TABANAN

    Directory of Open Access Journals (Sweden)

    Luh Putu Aritiana Kumala Pratiwi

    2016-08-01

Full Text Available The development of tourism in Tanah Lot has been able to open up opportunities for local women. The businesses most often run by women are the selling of the traditional snack klepon, postcards, and hairpins. Women who participate must weigh their decision to take on a dual role, both as housewives and as sellers in Tanah Lot. This article analyzes the motivation of women to open a business in the Tanah Lot area. The results showed that the motivations of women to open a business in the informal sector in Tanah Lot are to be able to meet physiological needs, safety needs, affiliation, appreciation, and self-actualization, and to add to their work experience. The factors that affect women's motivations are internal factors such as age, educational background, family income, and marital status, and external factors, namely the selling location, the condition of the selling place, and having their own income.

  3. Links in a distributed database: Theory and implementation

    International Nuclear Information System (INIS)

    Karonis, N.T.; Kraimer, M.R.

    1991-12-01

    This document addresses the problem of extending database links across Input/Output Controller (IOC) boundaries. It lays a foundation by reviewing the current system and proposing an implementation specification designed to guide all work in this area. The document also describes an implementation that is less ambitious than our formally stated proposal, one that does not extend the reach of all database links across IOC boundaries. Specifically, it introduces an implementation of input and output links and comments on that overall implementation. We include a set of manual pages describing each of the new functions the implementation provides

  4. Strategic Decision-Making Learning from Label Distributions: An Approach for Facial Age Estimation.

    Science.gov (United States)

    Zhao, Wei; Wang, Han

    2016-06-28

Nowadays, label distribution learning is among the state-of-the-art methodologies in facial age estimation. It takes the age of each facial image instance as a label distribution with a series of age labels rather than the single chronological age label that is commonly used. However, this methodology is deficient in its simple decision-making criterion: the final predicted age is simply the label with the maximum description degree. In many cases, different age labels may have very similar description degrees. Consequently, blindly deciding the estimated age by virtue of the highest description degree would miss or neglect other valuable age labels that may contribute a lot to the final predicted age. In this paper, we propose a strategic decision-making label distribution learning algorithm (SDM-LDL) with a series of strategies specialized for different types of age label distribution. Experimental results from the most popular aging face database, FG-NET, show the superiority and validity of all the proposed strategic decision-making learning algorithms over existing label distribution learning and other single-label learning algorithms for facial age estimation. The inner properties of SDM-LDL are further explored with more advantages.
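The criticized criterion and one alternative can be shown on a toy label distribution (the numbers are hypothetical, and the actual SDM-LDL strategies are more elaborate than the expectation used here):

```python
# A toy label distribution over candidate ages (degrees sum to 1)
ages = [20, 21, 22, 23, 24]
degrees = [0.10, 0.28, 0.30, 0.22, 0.10]

# Conventional criterion: the single age with maximum description degree
max_age = ages[degrees.index(max(degrees))]

# A strategy in the spirit of SDM-LDL: let neighbouring labels with
# similar degrees contribute, here via the distribution's expectation
expected_age = sum(a * d for a, d in zip(ages, degrees))
```

The maximum-degree rule returns 22 and discards the fact that 21 and 23 carry almost as much weight; the expectation (about 21.9) retains that information.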

  5. Parcels and Land Ownership, Square-mile, section-wide, property ownerhip parcel and lot-block boundaries. Includes original platted lot lines. These coverages are maintained interactively by GIS staff. Primary attributes include Parcel IDS (Control, Key, and PIN), platted lot and, Published in 2008, 1:1200 (1in=100ft) scale, Sedgwick County Government.

    Data.gov (United States)

    NSGIC Local Govt | GIS Inventory — Parcels and Land Ownership dataset current as of 2008. Square-mile, section-wide, property ownerhip parcel and lot-block boundaries. Includes original platted lot...

  6. The mining of toxin-like polypeptides from EST database by single residue distribution analysis.

    Science.gov (United States)

    Kozlov, Sergey; Grishin, Eugene

    2011-01-31

Novel high throughput sequencing technologies require continuous development of bioinformatics data processing methods. Among them, rapid and reliable identification of encoded proteins plays a pivotal role. To search for particular protein families, amino acid sequence motifs suitable for selective screening of nucleotide sequence databases may be used. In this work, we suggest a novel method for simplified representation of protein amino acid sequences, named Single Residue Distribution Analysis, which is applicable both to homology search and to database screening. Using the procedure developed, a search for amino acid sequence motifs in sea anemone polypeptides was performed, and 14 different motifs with broad and low specificity were discriminated. The adequacy of the motifs for mining toxin-like sequences was confirmed by their ability to identify 100% of the toxin-like anemone polypeptides in the reference polypeptide database. The employment of the novel motifs for the search for polypeptide toxins in the Anemonia viridis EST dataset allowed us to identify 89 putative toxin precursors. The translated and modified ESTs were scanned using a special algorithm. In addition to direct comparison with the motifs developed, the putative signal peptides were predicted and homology with known structures was examined. The suggested method may be used to retrieve structures of interest from EST databases using simple amino acid sequence motifs as templates. The efficiency of the procedure for the directed search of polypeptides is higher than that of most currently used methods. Analysis of 39939 ESTs of the sea anemone Anemonia viridis resulted in the identification of five protein precursors of earlier described toxins, the discovery of 43 novel polypeptide toxins, and the prediction of 39 putative polypeptide toxin sequences. In addition, two precursors of novel peptides presumably displaying neuronal function were disclosed.
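Screening sequences with a low-specificity amino acid motif can be sketched with a regular expression. The motif and sequences below are illustrative only, not among the 14 motifs derived in the paper:

```python
import re

# Hypothetical low-specificity motif: six cysteines with bounded spacers,
# a scaffold typical of short disulfide-rich polypeptide toxins
motif = re.compile(r"C.{2,8}C.{2,8}C.{2,8}C.{2,8}C.{2,8}C")

sequences = {
    "seq1": "MKTLLVCAAACLLSDPKCGGHCLAKCRPDCSSTCAKK",   # toxin-like
    "seq2": "MSTNPKPQRKTKRNTNRRPQDVKFPGG",             # housekeeping-like
}

# Keep only the sequences matching the motif
hits = {name: bool(motif.search(s)) for name, s in sequences.items()}
```

A real screen would translate the ESTs in all reading frames and combine the motif match with signal-peptide prediction, as described above.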

  7. Shelf life extension for the lot AAE nozzle severance LSCs

    Science.gov (United States)

    Cook, M.

    1990-01-01

    Shelf life extension tests for the remaining lot AAE linear shaped charges for redesigned solid rocket motor nozzle aft exit cone severance were completed in the small motor conditioning and firing bay, T-11. Five linear shaped charge test articles were thermally conditioned and detonated, demonstrating proper end-to-end charge propagation. Penetration depth requirements were exceeded. Results indicate that there was no degradation in performance due to aging or the linear shaped charge curving process. It is recommended that the shelf life of the lot AAE nozzle severance linear shaped charges be extended through January 1992.

  8. Database and Expert Systems Applications

    DEFF Research Database (Denmark)

    Viborg Andersen, Kim; Debenham, John; Wagner, Roland

This book constitutes the refereed proceedings of the 16th International Conference on Database and Expert Systems Applications, DEXA 2005, held in Copenhagen, Denmark, in August 2005. The 92 revised full papers presented together with 2 invited papers were carefully reviewed and selected from 390 submissions. The papers are organized in topical sections on workflow automation, database queries, data classification and recommendation systems, information retrieval in multimedia databases, Web applications, implementational aspects of databases, multimedia databases, XML processing, security, XML schemata, query evaluation, semantic processing, information retrieval, temporal and spatial databases, querying XML, organisational aspects of databases, natural language processing, ontologies, Web data extraction, semantic Web, data stream management, data extraction, and distributed database systems.

  9. A non-permutation flowshop scheduling problem with lot streaming: A Mathematical model

    Directory of Open Access Journals (Sweden)

    Daniel Rossit

    2016-06-01

Full Text Available In this paper we investigate the use of lot streaming in non-permutation flowshop scheduling problems. The objective is to minimize the makespan subject to the standard flowshop constraints, but where it is now permitted to reorder jobs between machines. In addition, the jobs can be divided into manageable sublots, a strategy known as lot streaming. Computational experiments show that lot streaming reduces the makespan by up to 43% for a wide range of instances when compared to the case in which no job splitting is applied. The benefits grow as the number of stages in the production process increases, but reach a limit: beyond a certain point, the division of jobs into additional sublots does not improve the solution.
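The benefit of lot streaming is easy to reproduce with a small permutation flowshop simulation (a toy single-job instance, not the paper's non-permutation model):

```python
def flowshop_makespan(sublots, unit_times):
    """Makespan of a sequence of sublots through a flowshop.
    sublots: sizes in processing order; unit_times: per-unit time per machine."""
    finish = [0.0] * len(unit_times)
    for q in sublots:
        for i, t in enumerate(unit_times):
            # a sublot starts when both the machine and the sublot are free
            start = max(finish[i], finish[i - 1] if i else 0.0)
            finish[i] = start + q * t
    return finish[-1]

# One 100-unit job on 3 machines: unsplit vs. four equal sublots
whole = flowshop_makespan([100], [1.0, 1.0, 1.0])
split = flowshop_makespan([25] * 4, [1.0, 1.0, 1.0])
```

Splitting halves the makespan here (300 vs. 150) because downstream machines start working as soon as the first sublot is done, the same overlap effect behind the up-to-43% reductions reported above.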

  10. Thermo-hydro-geochemical modelling of the bentonite buffer. LOT A2 experiment

    Energy Technology Data Exchange (ETDEWEB)

    Sena, Clara; Salas, Joaquin; Arcos, David (Amphos 21 Consulting S.L., Barcelona (Spain))

    2010-12-15

The Swedish Nuclear Fuel and Waste Management Company (SKB) is conducting a series of long term buffer material (LOT) tests at the Aespoe Hard Rock Laboratory (HRL) to test the behaviour of the bentonite buffer under conditions similar to those expected in a KBS-3 deep geological repository for high level nuclear waste (HLNW). In the present work a numerical model is developed to simulate (i) the thermo-hydraulic, (ii) transport and (iii) geochemical processes that have been observed in the LOT A2 test parcel. The LOT A2 test lasted approximately 6 years and consists of a 4 m long vertical borehole drilled in diorite rock from the floor of the Aespoe HRL tunnel. The borehole is composed of a central heater, maintained at 130 deg C in the lower 2 m of the borehole, a copper tube surrounding the heater, and a 100 mm thick ring of pre-compacted Wyoming MX-80 bentonite around the copper tube /Karnland et al. 2009/. The numerical model developed here is a 1D axisymmetric model that simulates the water saturation of the bentonite under a constant thermal gradient, the transport of solutes, and the geochemical reactions observed in the bentonite blocks. Two cases have been modelled: one considering the highest temperature reached by the bentonite (at 3 m depth in the borehole, where temperatures of 130 and 85 deg C have been recorded near the copper tube and near the granitic host rock, respectively), and the other assuming a constant temperature of 25 deg C, representing the upper part of the borehole, where the bentonite has not been heated. In the LOT A2 test, the initially partially saturated bentonite becomes progressively water saturated, due to the injection of Aespoe granitic groundwater at the granite-bentonite interface.
The transport of solutes during the bentonite water saturation stage is believed to be controlled by water uptake from the surrounding groundwater to the wetting front and, additionally, in the case of heated bentonite, by a cyclic evaporation

  11. Thermo-hydro-geochemical modelling of the bentonite buffer. LOT A2 experiment

    International Nuclear Information System (INIS)

    Sena, Clara; Salas, Joaquin; Arcos, David

    2010-12-01

The Swedish Nuclear Fuel and Waste Management Company (SKB) is conducting a series of long term buffer material (LOT) tests at the Aespoe Hard Rock Laboratory (HRL) to test the behaviour of the bentonite buffer under conditions similar to those expected in a KBS-3 deep geological repository for high level nuclear waste (HLNW). In the present work a numerical model is developed to simulate (i) the thermo-hydraulic, (ii) transport and (iii) geochemical processes that have been observed in the LOT A2 test parcel. The LOT A2 test lasted approximately 6 years and consists of a 4 m long vertical borehole drilled in diorite rock from the floor of the Aespoe HRL tunnel. The borehole is composed of a central heater, maintained at 130 deg C in the lower 2 m of the borehole, a copper tube surrounding the heater, and a 100 mm thick ring of pre-compacted Wyoming MX-80 bentonite around the copper tube /Karnland et al. 2009/. The numerical model developed here is a 1D axisymmetric model that simulates the water saturation of the bentonite under a constant thermal gradient, the transport of solutes, and the geochemical reactions observed in the bentonite blocks. Two cases have been modelled: one considering the highest temperature reached by the bentonite (at 3 m depth in the borehole, where temperatures of 130 and 85 deg C have been recorded near the copper tube and near the granitic host rock, respectively), and the other assuming a constant temperature of 25 deg C, representing the upper part of the borehole, where the bentonite has not been heated. In the LOT A2 test, the initially partially saturated bentonite becomes progressively water saturated, due to the injection of Aespoe granitic groundwater at the granite-bentonite interface.
The transport of solutes during the bentonite water saturation stage is believed to be controlled by water uptake from the surrounding groundwater to the wetting front and, additionally, in the case of heated bentonite, by a cyclic evaporation

  12. Intelligent optimization to integrate a plug-in hybrid electric vehicle smart parking lot with renewable energy resources and enhance grid characteristics

    International Nuclear Information System (INIS)

    Fazelpour, Farivar; Vafaeipour, Majid; Rahbari, Omid; Rosen, Marc A.

    2014-01-01

Highlights: • The proposed algorithms handled the design steps of an efficient parking lot for PHEVs. • Optimizations are performed at 1 h intervals to find optimum charging rates. • Multi-objective optimization is performed to find the optimum size and site of DG. • Optimal sizing of a PV–wind–diesel HRES is attained. • Charging rates are optimized intelligently during peak and off-peak times. - Abstract: Widespread application of plug-in hybrid electric vehicles (PHEVs) as an important part of smart grids requires drivers' and power grid constraints to be satisfied simultaneously. In the current paper we address these two challenges with the presence of renewable energy and charging rate optimization. First, optimal sizing and siting for installation of a distributed generation (DG) system is performed through the grid, considering power loss minimization and voltage enhancement. Due to its benefits, the obtained optimum site is considered as the optimum location for constructing a movie theater complex equipped with a PHEV parking lot. To satisfy the obtained size of DG, an on-grid hybrid renewable energy system (HRES) is chosen. In the next set of optimizations, optimal sizing of the HRES is performed to minimize the energy cost and to find the best values of the decision variables, which are the numbers of the system's components. Eventually, considering demand uncertainties due to the unpredictability of the arrival and departure times of the vehicles, time-dependent charging rate optimizations of the PHEVs are performed at 1 h intervals over the 24 h of a day. All optimization problems are solved using genetic algorithms (GAs). The outcome of the proposed optimization sets can be considered as design steps for an efficient grid-friendly parking lot for PHEVs. The results indicate a reduction in real power losses and an improvement in the voltage profile through the distribution line. They also show the competence of the utilized energy delivery method in

  13. Single product lot-sizing on unrelated parallel machines with non-decreasing processing times

    Science.gov (United States)

    Eremeev, A.; Kovalyov, M.; Kuznetsov, P.

    2018-01-01

We consider a problem in which at least a given quantity of a single product has to be partitioned into lots, and lots have to be assigned to unrelated parallel machines for processing. In one version of the problem, the maximum machine completion time should be minimized; in another version, the sum of machine completion times is to be minimized. Machine-dependent lower and upper bounds on the lot size are given. The product is assumed to be either continuously divisible or discrete. The processing time of each machine is defined by an increasing function of the lot volume, given as an oracle. Setup times and costs are assumed to be negligibly small and are therefore not considered. We derive optimal polynomial time algorithms for several special cases of the problem. An NP-hard case is shown to admit a fully polynomial time approximation scheme. An application of the problem to energy efficient processor scheduling is considered.
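For the continuously divisible, makespan-minimization version, a feasibility check plus binary search gives a simple sketch. Linear processing times p_i * v are assumed here, whereas the paper only requires an increasing oracle function:

```python
def min_makespan(Q, rates, lo, hi, eps=1e-9):
    """Minimum makespan T for splitting quantity Q across machines with
    processing time p_i * v and lot-size bounds [lo_i, hi_i]; a machine
    may also be left without a lot. Assumes the instance is feasible."""
    def capacity(T):
        # total volume the machines can jointly finish by time T
        total = 0.0
        for p, l, h in zip(rates, lo, hi):
            v = min(h, T / p)      # largest lot this machine finishes by T
            if v >= l:             # only usable if the minimum size fits
                total += v
        return total

    left, right = 0.0, max(p * h for p, h in zip(rates, hi))
    while right - left > eps:      # capacity(T) is nondecreasing in T
        mid = (left + right) / 2.0
        if capacity(mid) >= Q:
            right = mid
        else:
            left = mid
    return right

# Two machines needing 1 and 2 time units per volume unit, Q = 10: T = 20/3
T = min_makespan(10, [1.0, 2.0], [0.0, 0.0], [10.0, 10.0])
```

At the optimum both machines finish simultaneously (volumes 20/3 and 10/3), which is why binary search on the common deadline T works.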

  14. Integration of Biodiversity Databases in Taiwan and Linkage to Global Databases

    Directory of Open Access Journals (Sweden)

    Kwang-Tsao Shao

    2007-03-01

Full Text Available The biodiversity databases in Taiwan were dispersed among various institutions and colleges, with a limited amount of data, by 2001. The Natural Resources and Ecology GIS Database sponsored by the Council of Agriculture, which is part of the National Geographic Information System planned by the Ministry of Interior, was the most well established biodiversity database in Taiwan. This database, however, mainly collected the distribution data of terrestrial animals and plants within the Taiwan area. In 2001, GBIF was formed, and Taiwan joined as an Associate Participant, starting the establishment and integration of animal and plant species databases; therefore, TaiBIF was able to co-operate with GBIF. The information of the Catalog of Life, specimens, and alien species was integrated using the Darwin Core metadata standard, allowing the biodiversity information of Taiwan to connect with global databases.

  15. INTEGRATION OF PRODUCTION AND SUPPLY IN THE LEAN MANUFACTURING CONDITIONS ACCORDING TO THE LOT FOR LOT METHOD LOGIC - RESULTS OF RESEARCH

    Directory of Open Access Journals (Sweden)

    Roman Domański

    2015-12-01

Full Text Available Background: The review of the literature and observations of business practice indicate that the integration of production and supply is not a well-developed area of science. The author notes that publications on this integration most often focus on selected detailed aspects and are rather postulative in character. This is accompanied by an absence of specific utilitarian solutions (tools) which could be used in business practice. Methods: The research was conducted between 2009 and 2010 in a company in Wielkopolska which operates in the machining sector. The solution of the research problem is based on the author's own concept, the integration model. The cost concept of the solution was built and verified (case study) on the basis of the conditions of a given enterprise (industrial data). Results: Partial verifiability of results was proved for the entire set of selected material indexes (although in two cases out of three the cost differences to the disadvantage of the lot-for-lot method were small). In the case of the structure of the studied product range, a significant conformity of results, on the order of 67%, was achieved for items typically characteristic of the LfL method (group AX). Conclusions: The formulated research problem and the result of its solution (only 6 material items) are quite demanding (orthodox) in terms of implementation conditions. The concept of the solution has a narrow field of application in the selected organizational conditions (the studied enterprise). It should be verified by independent studies of this kind at other enterprises.

  16. Calculation of Investments for the Distribution of GPON Technology in the village of Bishtazhin through database

    Directory of Open Access Journals (Sweden)

    MSc. Jusuf Qarkaxhija

    2013-12-01

    Full Text Available According to daily reports, income from internet services is decreasing each year. Landline phone services are running at a loss, mobile phone services have become commoditized, and the only bright spot keeping cable operators (ISPs) in positive balance is income from broadband services (fast internet, IPTV). Broadband technology is a term covering multiple methods of distributing information over the internet at high speed. Some of the broadband technologies are: optical fiber, coaxial cable, DSL, wireless, mobile broadband, and satellite connection. The ultimate goal of any broadband service provider is to deliver voice, data and video through a single network, called triple play service. Internet distribution remains an important issue in Kosovo, particularly in rural zones. Considering the immense development of technologies and the different alternatives available, the goal of this paper is to emphasize the necessity of forecasting such an investment and to share experience in this respect. Because this investment involves many factors related to population, geography and several technologies, and because these factors change continuously, the best approach is to store all the data in a database and to use this database for different results. This database allows the previous manual calculations to be replaced with an automatic calculation procedure. This way of working improves the work style, providing all the tools needed to make the right decision about an internet investment, considering all of its aspects.

  17. Conversion and distribution of bibliographic information for further use on microcomputers with database software such as CDS/ISIS

    International Nuclear Information System (INIS)

    Nieuwenhuysen, P.; Besemer, H.

    1990-05-01

    This paper describes methods for working on microcomputers with data obtained from bibliographic and related databases distributed by online data banks, on CD-ROM or on tape. We also mention some user reactions to this technique. We list the different types of software needed to perform these services. Afterwards, we report on our development of software to convert data so that they can be entered into UNESCO's program named CDS/ISIS (Version 2.3) for local database management on IBM microcomputers or compatibles; this software preserves the structure of the source data in records, fields, subfields and field occurrences. (author). 10 refs, 1 fig
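
The conversion requirement — keeping records, fields, subfields and repeated field occurrences intact — can be illustrated with a toy parser. The tag/subfield syntax below ('tag: value' lines, '^x' subfield markers) is a simplified stand-in for illustration, not the actual exchange format the authors used:

```python
def parse_record(lines):
    """Parse 'tag: value' lines; '^x' marks subfields; repeated tags become occurrences."""
    record = {}
    for line in lines:
        tag, _, value = line.partition(": ")
        parts = value.split("^")
        # text before the first subfield marker, if any, is kept under "_"
        field = {"_": parts[0]} if parts[0] else {}
        for sub in parts[1:]:
            field[sub[0]] = sub[1:]        # one-letter subfield code, then its value
        record.setdefault(tag, []).append(field)
    return record

rec = parse_record(["100: ^aSmith^bVUB", "100: ^aJones", "245: A title"])
```

Keeping each tag's values in a list is what preserves field occurrences, the property the paper emphasizes when loading converted data into CDS/ISIS.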

  18. Deflection test evaluation of different lots of the same nickel-titanium wire commercial brand

    Directory of Open Access Journals (Sweden)

    Murilo Gaby Neves

    2016-02-01

    Full Text Available Introduction: The aim of this in vitro study was to compare the elastic properties of the load-deflection ratio of orthodontic wires of different lot numbers and the same commercial brand. Methods: A total of 40 nickel-titanium (NiTi) wire segments (Morelli Ortodontia™ - Sorocaba, SP, Brazil), 0.016-in in diameter, were used. Groups were sorted according to lot numbers (lots 1, 2, 3 and 4). 28-mm length segments from the straight portion (ends) of archwires were used. Deflection tests were performed in an EMIC universal testing machine with a 5-N load cell at 1 mm/minute speed. Force at deactivation was recorded at 0.5, 1, 2 and 3 mm deflection. Analysis of variance (ANOVA) was used to compare differences between group means. Results: When comparing the force of groups at the same deflection (3, 2 and 1 mm) during deactivation, no statistical differences were found. Conclusion: There are no changes in the elastic properties of different lots of the same commercial brand; thus, the use of different lots of the orthodontic wires used in this research does not compromise the final outcomes of the load-deflection ratio.

  19. Deflection test evaluation of different lots of the same nickel-titanium wire commercial brand.

    Science.gov (United States)

    Neves, Murilo Gaby; Lima, Fabrício Viana Pereira; Gurgel, Júlio de Araújo; Pinzan-Vercelino, Célia Regina Maio; Rezende, Fernanda Soares; Brandão, Gustavo Antônio Martins

    2016-01-01

    The aim of this in vitro study was to compare the elastic properties of the load-deflection ratio of orthodontic wires of different lot numbers and the same commercial brand. A total of 40 nickel-titanium (NiTi) wire segments (Morelli Ortodontia™--Sorocaba, SP, Brazil), 0.016-in in diameter were used. Groups were sorted according to lot numbers (lots 1, 2, 3 and 4). 28-mm length segments from the straight portion (ends) of archwires were used. Deflection tests were performed in an EMIC universal testing machine with 5-N load cell at 1 mm/minute speed. Force at deactivation was recorded at 0.5, 1, 2 and 3 mm deflection. Analysis of variance (ANOVA) was used to compare differences between group means. When comparing the force of groups at the same deflection (3, 2 and 1 mm), during deactivation, no statistical differences were found. There are no changes in the elastic properties of different lots of the same commercial brand; thus, the use of different lots of the orthodontic wires used in this research does not compromise the final outcomes of the load-deflection ratio.
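
The one-way ANOVA used in both versions of this study reduces to comparing between-lot and within-lot variance. A stdlib-only sketch of the F statistic (the deactivation forces below are made-up numbers, not the study's data):

```python
from statistics import mean

def one_way_anova_f(groups):
    """F = (between-group mean square) / (within-group mean square)."""
    grand = mean([v for g in groups for v in g])
    k, n = len(groups), sum(len(g) for g in groups)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((v - mean(g)) ** 2 for g in groups for v in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# hypothetical deactivation forces (N) for two wire lots at the same deflection
f_stat = one_way_anova_f([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
```

A large F relative to the critical value for (k−1, n−k) degrees of freedom would indicate a between-lot difference; the study found none.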

  20. Mentha spicata L. infusions as sources of antioxidant phenolic compounds: emerging reserve lots with special harvest requirements.

    Science.gov (United States)

    Rita, Ingride; Pereira, Carla; Barros, Lillian; Santos-Buelga, Celestino; Ferreira, Isabel C F R

    2016-10-12

    Mentha spicata L., commonly known as spearmint, is widely used in both fresh and dry forms, for infusion preparation or in European and Indian cuisines. Recently, with the evolution of the tea market, several novel products with added value are emerging, and the standard lots have evolved to reserve lots, with special harvest requirements that confer them with enhanced organoleptic and sensorial characteristics. The apical leaves of these batches are collected in specific conditions, giving them a different chemical profile. In the present study, standard and reserve lots of M. spicata were assessed in terms of the antioxidants present in infusions prepared from the different lots. The reserve lots presented the highest concentration of all the compounds identified relative to the standard lots, with 326 and 188 μg mL⁻¹ of total phenolic compounds, respectively. Both types of samples presented rosmarinic acid as the most abundant phenolic compound, at concentrations of 169 and 101 μg mL⁻¹ for reserve and standard lots, respectively. The antioxidant activity was higher in the reserve lots, which had the highest total phenolic compounds content, with EC50 values ranging from 152 to 336 μg mL⁻¹. The obtained results provide scientific information that may allow the consumer to make a conscientious choice.

  1. Coal-tar-based parking lot sealcoat: An unrecognized source of PAH to settled house dust

    Science.gov (United States)

    Mahler, B.J.; Van Metre, P.C.; Wilson, J.T.; Musgrove, M.; Burbank, T.L.; Ennis, T.E.; Bashara, T.J.

    2010-01-01

    Despite much speculation, the principal factors controlling concentrations of polycyclic aromatic hydrocarbons (PAH) in settled house dust (SHD) have not yet been identified. In response to recent reports that dust from pavement with coal-tar-based sealcoat contains extremely high concentrations of PAH, we measured PAH in SHD from 23 apartments and in dust from their associated parking lots, one-half of which had coal-tar-based sealcoat (CT). The median concentration of total PAH (T-PAH) in dust from CT parking lots (4760 µg/g, n = 11) was 530 times higher than that from parking lots with other pavement surface types (asphalt-based sealcoat, unsealed asphalt, concrete [median 9.0 µg/g, n = 12]). T-PAH in SHD from apartments with CT parking lots (median 129 µg/g) was 25 times higher than that in SHD from apartments with parking lots with other pavement surface types (median 5.1 µg/g). Presence or absence of CT on a parking lot explained 48% of the variance in log-transformed T-PAH in SHD. Urban land-use intensity near the residence also had a significant but weaker relation to T-PAH. No other variables tested, including carpeting, frequency of vacuuming, and indoor burning, were significant. © 2010 American Chemical Society.

  2. Lot No. 1 of Frit 202 for DWPF cold runs

    International Nuclear Information System (INIS)

    Schumacher, R.F.

    1993-01-01

    This report was prepared at the end of 1992 and summarizes the evaluation of the first lot sample of DWPF Frit 202 from Cataphote Inc. Publication of this report was delayed until the results from the carbon analyses could be included. To avoid confusion the frit specifications presented in this report were those available at the end of 1992. The specifications were slightly modified early in 1993. The frit was received and evaluated for moisture, particle size distribution, organic-inorganic carbon and chemical composition. Moisture content and particle size distribution were determined on a representative sample at SRTC. These properties were within the DWPF specifications for Frit 202. A representative sample was submitted to Corning Engineering Laboratory Services for chemical analyses. The sample was split and two dissolutions prepared. Each dissolution was analyzed on two separate days. The results indicate that there is a high probability (>95%) that the silica content of this frit is below the specification limit of 77.0 ± 1.0 wt %. The average of the four analyzed values was 75.1 wt % with a standard deviation of 0.28 wt %. All other oxides were within the elliptical two sigma limits. Control standard frit samples were submitted and analyzed at the same time and the results were very similar to previous analyses of these materials
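
The ">95% probability" statement about silica can be reproduced with a one-sample t argument on the four reported analyses (mean 75.1 wt %, SD 0.28 wt %). The sketch below tests against the nominal 77.0 wt % value; the critical value is the standard one-sided 95% point for 3 degrees of freedom:

```python
import math

def t_statistic(sample_mean, sample_sd, n, limit):
    """One-sample t statistic for testing whether the true mean lies below `limit`."""
    return (limit - sample_mean) / (sample_sd / math.sqrt(n))

# values reported for the four Frit 202 silica analyses
t = t_statistic(75.1, 0.28, 4, 77.0)
T_CRIT_95_DF3 = 2.353          # one-sided 95% critical value, 3 degrees of freedom
below_spec_95 = t > T_CRIT_95_DF3
```

With t ≈ 13.6 far above 2.353, the mean silica content sits below the 77.0 wt % limit with well over 95% confidence, consistent with the report's claim.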

  3. The mining of toxin-like polypeptides from EST database by single residue distribution analysis

    Directory of Open Access Journals (Sweden)

    Grishin Eugene

    2011-01-01

    Full Text Available Abstract Background Novel high-throughput sequencing technologies require continual development of bioinformatics data processing methods. Among them, rapid and reliable identification of encoded proteins plays a pivotal role. To search for particular protein families, amino acid sequence motifs suitable for selective screening of nucleotide sequence databases may be used. In this work, we suggest a novel method for simplified representation of protein amino acid sequences named Single Residue Distribution Analysis, which is applicable both for homology search and database screening. Results Using the procedure developed, a search for amino acid sequence motifs in sea anemone polypeptides was performed, and 14 different motifs with broad and low specificity were discriminated. The adequacy of the motifs for mining toxin-like sequences was confirmed by their ability to identify 100% of toxin-like anemone polypeptides in the reference polypeptide database. The employment of the novel motifs for the search of polypeptide toxins in the Anemonia viridis EST dataset allowed us to identify 89 putative toxin precursors. The translated and modified ESTs were scanned using a special algorithm. In addition to direct comparison with the motifs developed, the putative signal peptides were predicted and homology with known structures was examined. Conclusions The suggested method may be used to retrieve structures of interest from EST databases using simple amino acid sequence motifs as templates. The efficiency of the procedure for directed search of polypeptides is higher than that of most currently used methods. Analysis of 39939 ESTs of the sea anemone Anemonia viridis resulted in identification of five protein precursors of earlier described toxins, discovery of 43 novel polypeptide toxins, and prediction of 39 putative polypeptide toxin sequences. In addition, two precursors of novel peptides presumably displaying neuronal function were disclosed.
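
Motif-based screening of translated sequences of the kind the paper describes can be sketched with a regular expression. The cysteine-spacing pattern below is a hypothetical low-specificity motif, loosely in the spirit of disulfide-rich toxin scaffolds, and not one of the paper's 14 motifs:

```python
import re

# hypothetical motif: six cysteines separated by short variable spacers
MOTIF = re.compile(r"C.{2,8}C.{2,8}C.{2,8}C.{2,8}C.{2,8}C")

def find_toxin_like(sequences):
    """Return the subset of named protein sequences matching the motif."""
    return {name: seq for name, seq in sequences.items() if MOTIF.search(seq)}

candidates = find_toxin_like({
    "tox":   "C" + "AAAC" * 5,   # six cysteines, three-residue spacers
    "other": "MKTAYIAKQR",       # no cysteine scaffold
})
```

In the actual pipeline such a scan would run over six-frame translations of ESTs, followed by signal-peptide prediction and homology checks.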

  4. Directory of IAEA databases

    International Nuclear Information System (INIS)

    1992-12-01

    This second edition of the Directory of IAEA Databases has been prepared within the Division of Scientific and Technical Information (NESI). Its main objective is to describe the computerized information sources available to staff members. This directory contains all databases produced at the IAEA, including databases stored on the mainframe, LANs and PCs. All IAEA Division Directors have been requested to register the existence of their databases with NESI. For the second edition, database owners were requested to review the existing entries for their databases and answer four additional questions. The four additional questions concerned the type of database (e.g. Bibliographic, Text, Statistical etc.), the category of database (e.g. Administrative, Nuclear Data etc.), the available documentation and the type of media used for distribution. In the individual entries on the following pages the answers to the first two questions (type and category) are always listed, but the answers to the second two questions (documentation and media) are only listed when information has been made available

  5. A new method for assessing judgmental distributions

    NARCIS (Netherlands)

    Moors, J.J.A.; Schuld, M.H.; Mathijssen, A.C.A.

    1995-01-01

    For a number of statistical applications, subjective estimates of some distributional parameters - or even complete densities - are needed. The literature agrees that it is wise behaviour to ask only for some quantiles of the distribution; from these, the desired quantities are extracted. Quite a lot

  6. Design issues of an efficient distributed database scheduler for telecom

    NARCIS (Netherlands)

    Bodlaender, M.P.; Stok, van der P.D.V.

    1998-01-01

    We optimize the speed of real-time databases by optimizing the scheduler. The performance of a database is directly linked to the environment it operates in, and we use environment characteristics as guidelines for the optimization. A typical telecom environment is investigated, and characteristics

  7. Hybrid Discrete Differential Evolution Algorithm for Lot Splitting with Capacity Constraints in Flexible Job Scheduling

    Directory of Open Access Journals (Sweden)

    Xinli Xu

    2013-01-01

    Full Text Available A two-level batch chromosome coding scheme is proposed to solve the lot splitting problem with equipment capacity constraints in flexible job shop scheduling, which includes a lot splitting chromosome and a lot scheduling chromosome. To balance the global search and local exploration of the differential evolution algorithm, a hybrid discrete differential evolution algorithm (HDDE) is presented, in which a local strategy with dynamic random searching based on the critical path and a random mutation operator is developed. The performance of HDDE was evaluated on 14 benchmark problems and a practical dye vat scheduling problem. The simulation results showed that the proposed algorithm has strong global search capability and can effectively solve practical lot splitting problems with equipment capacity constraints.
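
The continuous differential-evolution operators that HDDE discretizes can be shown compactly. The classic DE/rand/1/bin loop below minimizes a toy sphere function and is only meant to illustrate the mutation/crossover/selection structure, not the paper's two-level discrete coding or critical-path local search:

```python
import random

def differential_evolution(f, bounds, pop_size=20, weight=0.8, cr=0.9, gens=100, seed=1):
    """Classic DE/rand/1/bin minimizer over box constraints."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        for i in range(pop_size):
            # mutation: combine three distinct members other than the target
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            j_rand = rng.randrange(dim)  # guarantee at least one mutated gene
            trial = [a[k] + weight * (b[k] - c[k])
                     if (rng.random() < cr or k == j_rand) else pop[i][k]
                     for k in range(dim)]
            trial = [min(max(v, lo), hi) for v, (lo, hi) in zip(trial, bounds)]
            if f(trial) <= f(pop[i]):    # greedy selection
                pop[i] = trial
    return min(pop, key=f)

best = differential_evolution(lambda x: sum(v * v for v in x), [(-5, 5)] * 3)
```

HDDE replaces the real-valued vectors with the batch chromosomes described above and interleaves the dynamic random local search between generations.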

  8. A Data Analysis Expert System For Large Established Distributed Databases

    Science.gov (United States)

    Gnacek, Anne-Marie; An, Y. Kim; Ryan, J. Patrick

    1987-05-01

    The purpose of this work is to analyze the applicability of artificial intelligence techniques for developing a user-friendly, parallel interface to large isolated, incompatible NASA databases for the purpose of assisting the management decision process. To carry out this work, a survey was conducted to establish the data access requirements of several key NASA user groups. In addition, current NASA database access methods were evaluated. The results of this work are presented in the form of a design for a natural language database interface system, called the Deductively Augmented NASA Management Decision Support System (DANMDS). This design is feasible principally because of recently announced commercial hardware and software product developments which allow cross-vendor compatibility. The goal of the DANMDS system is commensurate with the central dilemma confronting most large companies and institutions in America, the retrieval of information from large, established, incompatible database systems. The DANMDS system implementation would represent a significant first step toward this problem's resolution.

  9. Programming a Distributed System Using Shared Objects

    NARCIS (Netherlands)

    Tanenbaum, A.S.; Bal, H.E.; Kaashoek, M.F.

    1993-01-01

    Building the hardware for a high-performance distributed computer system is a lot easier than building its software. The authors describe a model for programming distributed systems based on abstract data types that can be replicated on all machines that need them. Read operations are done locally,

  10. Compressing DNA sequence databases with coil

    Directory of Open Access Journals (Sweden)

    Hendy Michael D

    2008-05-01

    Full Text Available Abstract Background Publicly available DNA sequence databases such as GenBank are large, and are growing at an exponential rate. The sheer volume of data being dealt with presents serious storage and data communications problems. Currently, sequence data is usually kept in large "flat files," which are then compressed using standard Lempel-Ziv (gzip) compression – an approach which rarely achieves good compression ratios. While much research has been done on compressing individual DNA sequences, surprisingly little has focused on the compression of entire databases of such sequences. In this study we introduce the sequence database compression software coil. Results We have designed and implemented a portable software package, coil, for compressing and decompressing DNA sequence databases based on the idea of edit-tree coding. coil is geared towards achieving high compression ratios at the expense of execution time and memory usage during compression – the compression time represents a "one-off investment" whose cost is quickly amortised if the resulting compressed file is transmitted many times. Decompression requires little memory and is extremely fast. We demonstrate a 5% improvement in compression ratio over state-of-the-art general-purpose compression tools for a large GenBank database file containing Expressed Sequence Tag (EST) data. Finally, coil can efficiently encode incremental additions to a sequence database. Conclusion coil presents a compelling alternative to conventional compression of flat files for the storage and distribution of DNA sequence databases having a narrow distribution of sequence lengths, such as EST data. Increasing compression levels for databases having a wide distribution of sequence lengths is a direction for future work.
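
The gap between generic Lempel-Ziv compression and DNA-aware coding is easy to see with a crude baseline: packing A/C/G/T into two bits each fixes the ratio at 0.25, which a general-purpose compressor cannot beat on sequence data close to random. This is only a baseline illustration, not coil's edit-tree coding:

```python
import random
import zlib

CODE = {"A": 0, "C": 1, "G": 2, "T": 3}

def pack_2bit(seq):
    """Pack bases into 2 bits each (length assumed divisible by 4 for simplicity)."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        for base in seq[i:i + 4]:
            byte = (byte << 2) | CODE[base]
        out.append(byte)
    return bytes(out)

rng = random.Random(0)
seq = "".join(rng.choice("ACGT") for _ in range(4000))  # pseudo-random DNA
ratios = {
    "zlib":  len(zlib.compress(seq.encode())) / len(seq),
    "2-bit": len(pack_2bit(seq)) / len(seq),
}
```

Specialized tools such as coil gain their further advantage by exploiting redundancy *between* sequences in a database, which neither baseline above attempts.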

  11. Adaptive data migration scheme with facilitator database and multi-tier distributed storage in LHD

    International Nuclear Information System (INIS)

    Nakanishi, Hideya; Masaki, Ohsuna; Mamoru, Kojima; Setsuo, Imazu; Miki, Nonomura; Kenji, Watanabe; Masayoshi, Moriya; Yoshio, Nagayama; Kazuo, Kawahata

    2008-01-01

    Recent 'data explosion' induces the demand for high flexibility of storage extension and data migration. The data amount of LHD plasma diagnostics has grown to 4.6 times that of three years before. Frequent migration or replication among many distributed storage systems becomes mandatory, and thus increases the human operational costs. To reduce them computationally, a new adaptive migration scheme has been developed on LHD's multi-tier distributed storage. So-called HSM (Hierarchical Storage Management) software usually adopts a low-level cache mechanism or simple watermarks for triggering data stage-in and stage-out between two storage devices. However, the new scheme can deal with a number of distributed storage systems through the facilitator database, which manages all data locations together with their access histories and retrieval priorities. Not only inter-tier migration but also intra-tier replication and moving are manageable, so that it can be a big help in extending or replacing storage equipment. The access history of each data object is also utilized to optimize the volume size of the fast but costly RAID, in addition to the normal cache effect for frequently retrieved data. The new scheme has demonstrated its effectiveness, so that LHD multi-tier distributed storage and other next-generation experiments can obtain such flexible expandability
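
The facilitator-database idea — ranking data objects by retrieval priority and access recency to decide what stays on the fast tier — can be sketched as follows. The object tuples and capacity are invented for illustration and do not reflect the LHD catalog schema:

```python
def stage_out_candidates(objects, fast_capacity):
    """objects: (name, size, last_access, priority) tuples known to the facilitator.

    Keep the highest-priority, most recently accessed objects on the fast tier
    until capacity runs out; everything else becomes a stage-out candidate.
    """
    ranked = sorted(objects, key=lambda o: (o[3], o[2]), reverse=True)
    used, keep = 0, set()
    for name, size, _, _ in ranked:
        if used + size <= fast_capacity:
            used += size
            keep.add(name)
    return [o[0] for o in objects if o[0] not in keep]

objs = [("a", 50, 100, 2), ("b", 60, 90, 1), ("c", 30, 80, 1)]
to_slow_tier = stage_out_candidates(objs, fast_capacity=80)
```

Centralizing these decisions in one catalog is what lets the scheme manage intra-tier moves and hardware replacement, not just two-device watermark triggers.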

  12. Chemical analysis of DC745 Materials: DEV Lot 1 reinvestigation; barcodes P053387, P053388, and P053389

    Energy Technology Data Exchange (ETDEWEB)

    Dirmyer, Matthew R. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-11-09

    This report serves as a follow up to our initial development lot 1 chemical analysis report (LA-UR-16-21970). The purpose of that report was to determine whether or not certain combinations of resin lots and curing agent lots resulted in chemical differences in the final material. One finding of that report suggested that pad P053389 was different from the three other pads analyzed. This report consists of chemical analysis of P053387, P053388, and a reinvestigation of P053389 all of which came from the potentially suspect combination of resin and curing agents lot. The goal of this report is to determine whether the observations relating to P053389 were isolated to that particular pad or systemic to that combination of resin and curing agent lot. The following suite of analyses were performed on the pads: Differential Scanning Calorimetry (DSC), Thermogravimetric Analysis (TGA), Fourier Transform Infrared Spectroscopy (FT-IR), and Solid State Nuclear Magnetic Resonance (NMR). The overall conclusions of the study are that pads P053387 and P053388 behave more consistently with the pads of other resin lot and curing agent lot combinations and that the chemical observations made regarding pad P053389 are isolated to that pad and not representative of an issue with that resin lot and curing agent lot combination.

  13. Statistical validation of reagent lot change in the clinical chemistry laboratory can confer insights on good clinical laboratory practice.

    Science.gov (United States)

    Cho, Min-Chul; Kim, So Young; Jeong, Tae-Dong; Lee, Woochang; Chun, Sail; Min, Won-Ki

    2014-11-01

    Verification of a new reagent lot's suitability is necessary to ensure that results for patients' samples are consistent before and after reagent lot changes. A typical procedure is to measure results of some patients' samples along with quality control (QC) materials. In this study, the results of patients' samples and QC materials in reagent lot changes were analysed. In addition, an opinion regarding QC target range adjustment along with reagent lot changes was proposed. Patients' sample and QC material results of 360 reagent lot change events involving 61 analytes and eight instrument platforms were analysed. The between-lot differences for the patients' samples (ΔP) and the QC materials (ΔQC) were tested by Mann-Whitney U tests. The size of the between-lot differences in the QC data was calculated as multiples of standard deviation (SD). The ΔP and ΔQC values only differed significantly in 7.8% of the reagent lot change events. This frequency was not affected by the assay principle or the QC material source. One SD was proposed as the cutoff for maintaining the pre-existing target range after a reagent lot change. While non-commutable QC material results were infrequent in the present study, our data confirmed that QC materials have limited usefulness when assessing new reagent lots. Also, a 1 SD standard for establishing a new QC target range after a reagent lot change event was proposed. © The Author(s) 2014
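
The between-lot test the authors applied, the Mann-Whitney U, can be computed directly by pairwise comparison for the small samples typical of lot-change verification; the QC values below are invented:

```python
def mann_whitney_u(x, y):
    """U statistic via pairwise comparison; ties count one half (no tie correction)."""
    u = sum((xi > yj) + 0.5 * (xi == yj) for xi in x for yj in y)
    return min(u, len(x) * len(y) - u)

# hypothetical QC results before and after a reagent lot change
u_stat = mann_whitney_u([5.1, 5.3, 5.2, 5.4], [5.2, 5.5, 5.6, 5.4])
```

A small U relative to the tabulated critical value for the two sample sizes flags a significant between-lot shift; the study found agreement between ΔP and ΔQC judgments in over 92% of events.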

  14. Nutrient concentrations in leachate and runoff from dairy cattle lots with different surface materials

    Science.gov (United States)

    Nitrogen (N) and phosphorus (P) loss from agriculture persists as a water quality issue, and outdoor cattle lots can have a high loss potential. We monitored hydrology and nutrient concentrations in leachate and runoff from dairy heifer lots constructed with three surface materials (soil, sand, bark...

  15. Security Research on Engineering Database System

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Engine engineering database system is a CAD-oriented applied database management system that has the capability of managing distributed data. The paper discusses the security issues of the engine engineering database management system (EDBMS). Through studying and analyzing database security, a series of security rules is drawn up, which reach the B1 level security standard. These include discretionary access control (DAC), mandatory access control (MAC) and audit. The EDBMS implements functions of DAC, ...

  16. Pivot/Remote: a distributed database for remote data entry in multi-center clinical trials.

    Science.gov (United States)

    Higgins, S B; Jiang, K; Plummer, W D; Edens, T R; Stroud, M J; Swindell, B B; Wheeler, A P; Bernard, G R

    1995-01-01

    1. INTRODUCTION. Data collection is a critical component of multi-center clinical trials. Clinical trials conducted in intensive care units (ICU) are even more difficult because the acute nature of illnesses in ICU settings requires that masses of data be collected in a short time. More than a thousand data points are routinely collected for each study patient. The majority of clinical trials are still "paper-based," even if a remote data entry (RDE) system is utilized. The typical RDE system consists of a computer housed in the CC office and connected by modem to a centralized data coordinating center (DCC). Study data must first be recorded on a paper case report form (CRF), transcribed into the RDE system, and transmitted to the DCC. This approach requires additional monitoring since both the paper CRF and study database must be verified. The paper-based RDE system cannot take full advantage of automatic data checking routines. Much of the effort (and expense) of a clinical trial is ensuring that study data matches the original patient data. 2. METHODS. We have developed an RDE system, Pivot/Remote, that eliminates the need for paper-based CRFs. It creates an innovative, distributed database. The database resides partially at the study clinical centers (CC) and at the DCC. Pivot/Remote is descended from technology introduced with Pivot [1]. Study data is collected at the bedside with laptop computers. A graphical user interface (GUI) allows the display of electronic CRFs that closely mimic the normal paper-based forms. Data entry time is the same as for paper CRFs. Pull-down menus, displaying the possible responses, simplify the process of entering data. Edit checks are performed on most data items. For example, entered dates must conform to some temporal logic imposed by the study. Data must conform to some acceptable range of values. Calculations, such as computing the subject's age or the APACHE II score, are automatically made as the data is entered. Data
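
The kinds of edit checks described — acceptable value ranges and temporal logic between dates — look roughly like this in code. The field name and range below are hypothetical, not taken from the Pivot/Remote case report forms:

```python
from datetime import date

RANGES = {"heart_rate": (20, 250)}  # hypothetical acceptable range for one CRF item

def check_entry(field, value, enrolled=None, event_date=None):
    """Return the list of edit-check failures for a single data point."""
    errors = []
    if field in RANGES:
        lo, hi = RANGES[field]
        if not lo <= value <= hi:
            errors.append(f"{field}={value} outside [{lo}, {hi}]")
    # temporal logic: a study event cannot precede enrollment
    if enrolled is not None and event_date is not None and event_date < enrolled:
        errors.append("event date precedes enrollment")
    return errors
```

Running such checks at the bedside, as data is keyed in, is what removes the later paper-versus-database verification step that the abstract identifies as the main cost of paper CRFs.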

  17. Brede Tools and Federating Online Neuroinformatics Databases

    DEFF Research Database (Denmark)

    Nielsen, Finn Årup

    2014-01-01

    As open science neuroinformatics databases the Brede Database and Brede Wiki seek to make distribution and federation of their content as easy and transparent as possible. The databases rely on simple formats and allow other online tools to reuse their content. This paper describes the possible i...

  18. Safety, immunogenicity, and lot-to-lot consistency of a quadrivalent inactivated influenza vaccine in children, adolescents, and adults: A randomized, controlled, phase III trial.

    Science.gov (United States)

    Cadorna-Carlos, Josefina B; Nolan, Terry; Borja-Tabora, Charissa Fay; Santos, Jaime; Montalban, M Cecilia; de Looze, Ferdinandus J; Eizenberg, Peter; Hall, Stephen; Dupuy, Martin; Hutagalung, Yanee; Pépin, Stéphanie; Saville, Melanie

    2015-05-15

    Inactivated quadrivalent influenza vaccine (IIV4) containing two influenza A strains and one strain from each B lineage (Yamagata and Victoria) may offer broader protection against seasonal influenza than inactivated trivalent influenza vaccine (IIV3), containing a single B strain. This study examined the safety, immunogenicity, and lot consistency of an IIV4 candidate. This phase III, randomized, controlled, multicenter trial in children/adolescents (9 through 17 years) and adults (18 through 60 years) was conducted in Australia and in the Philippines in 2012. The study was double-blind for IIV4 lots and open-label for IIV4 vs IIV3. Children/adolescents were randomized 2:2:2:1 and adults 10:10:10:1 to receive one of three lots of IIV4 or licensed IIV3. Safety data were collected for up to 6 months post-vaccination. Hemagglutination inhibition and seroneutralization antibody titers were assessed pre-vaccination and 21 days post-vaccination. 1648 adults and 329 children/adolescents received IIV4, and 56 adults and 55 children/adolescents received IIV3. Solicited reactions, unsolicited adverse events, and serious adverse events were similar for IIV3 and IIV4 recipients in both age groups. Injection-site pain, headache, malaise, and myalgia were the most frequently reported solicited reactions, most of which were mild and resolved within 3 days. No vaccine-related serious adverse events or deaths were reported. Post-vaccination antibody responses, seroconversion rates, and seroprotection rates for the 3 strains common to both vaccines were comparable for IIV3 and IIV4 in both age groups. Antibody responses to IIV4 were equivalent among vaccine lots and comparable between age groups for each of the 4 strains. IIV4 met all European Medicines Agency immunogenicity criteria for adults for all 4 strains. In both age groups, IIV4 was well tolerated and caused no safety concerns, induced robust antibody responses to all 4 influenza strains, and met all EMA immunogenicity

  19. Note sur l'histoire démographique de Douelle (Lot) 1676-1914

    OpenAIRE

    Jean Fourastié

    1986-01-01

    Fourastié Jean. - Note on the demographic history of Douelle (Lot) 1676-1914. This article summarizes the demographic data contained in a book about the village of Douelle in the department of the Lot. Both family reconstitution and genealogies have been used to ascertain the major demographic characteristics of this region during the 17th and 18th centuries : a high rate of endogamous marriages, few remarriages, declining birth rates before the Revolution, a very low number of illegitimate b...

  20. Evaluation of coverage of enriched UF6 cylinder storage lots by existing criticality accident alarms

    International Nuclear Information System (INIS)

    Lee, B.L. Jr.; Dobelbower, M.C.; Woollard, J.E.; Sutherland, P.J.; Tayloe, R.W. Jr.

    1995-03-01

    The Portsmouth Gaseous Diffusion Plant (PORTS) is leased from the US Department of Energy (DOE) by the United States Enrichment Corporation (USEC), a government corporation formed in 1993. PORTS is in transition from regulation by DOE to regulation by the Nuclear Regulatory Commission (NRC). One regulation is 10 CFR Part 76.89, which requires that criticality alarm systems be provided for the site. PORTS originally installed criticality accident alarm systems in all buildings for which nuclear criticality accidents were credible. Currently, however, alarm systems are not installed in the enriched uranium hexafluoride (UF6) cylinder storage lots. This report analyzes and documents the extent to which enriched UF6 cylinder storage lots at PORTS are covered by criticality detectors and alarms currently installed in adjacent buildings. Monte Carlo calculations are performed on simplified models of the cylinder storage lots and adjacent buildings. The storage lots modelled are X-745B, X-745C, X-745D, X-745E, and X-745F. The criticality detectors modelled are located in building X-343, the building X-344A/X-342A complex, and portions of building X-330 (see Figures 1 and 2). These criticality detectors are those located closest to the cylinder storage lots. Results of this analysis indicate that the existing criticality detectors currently installed at PORTS are largely ineffective in detecting neutron radiation from criticality accidents in most of the cylinder storage lots at PORTS, except sometimes along portions of their peripheries

  1. The World Bacterial Biogeography and Biodiversity through Databases: A Case Study of NCBI Nucleotide Database and GBIF Database

    Directory of Open Access Journals (Sweden)

    Okba Selama

    2013-01-01

    Full Text Available Databases are an essential tool and resource within the field of bioinformatics. The primary aim of this study was to generate an overview of global bacterial biodiversity and biogeography using available data from the two largest public online databases, NCBI Nucleotide and GBIF. The secondary aim was to highlight the contribution each geographic area makes to each database. The basis for the data analysis in this study was the metadata provided by both databases, mainly the taxonomy and the geographical area of origin of isolation of each microorganism (record). These were obtained directly from GBIF through the online interface, while E-utilities and Python were used in combination with programmatic web service access to obtain data from the NCBI Nucleotide Database. Results indicate that the American continent, and more specifically the USA, is the top contributor, while Africa and Antarctica are less well represented. This highlights the imbalance of exploration within these areas rather than any reduction in biodiversity. This study describes a novel approach to generating global-scale patterns of bacterial biodiversity and biogeography and indicates that the Proteobacteria are the most abundant and widely distributed phylum within both databases.

  2. Indexed University presses: overlap and geographical distribution in five book assessment databases

    Energy Technology Data Exchange (ETDEWEB)

    Mañana-Rodriguez, J.; Gimenez-Toledo, E.

    2016-07-01

    Scholarly books have been a periphery among the objects of study of bibliometrics until recent developments provided tools for assessment purposes. Among scholarly book publishers, University Presses (UPs hereinafter), subject to specific ends and constraints in their publishing activity, might also remain on a second-level periphery despite their relevance as scholarly book publishers. In this study the authors analyze the absolute and relative presence, overlap and uniquely-indexed cases of 503 UPs by country, among five assessment-oriented databases containing data on scholarly book publishers: Book Citation Index, Scopus, Scholarly Publishers Indicators (Spain), the lists of publishers from the Norwegian System (CRISTIN) and the lists of publishers from the Finnish System (JUFO). The comparison between commercial databases and public, national databases points towards a differential pattern: prestigious UPs in the English-speaking world represent larger shares, and there is a higher overall percentage of UPs in the commercial databases, while richness and diversity are higher in the national databases. Explicit or de facto biases towards production in English by commercial databases, as well as diverse indexation criteria, might explain the differences observed. The analysis of the presence of UPs in different numbers of databases by country also provides a general picture of the average degree of diffusion of UPs among information systems. The analysis of ‘endemic’ UPs, those indexed in only one of the five databases, points to strongly different compositions of UPs in commercial and non-commercial databases. A combination of commercial and non-commercial databases seems to be the optimal option for assessment purposes, and the validity and desirability of the ongoing debate on the role of UPs can also be concluded. (Author)

  3. Solving lot-sizing problem with quantity discount and transportation cost

    Science.gov (United States)

    Lee, Amy H. I.; Kang, He-Yau; Lai, Chun-Mei

    2013-04-01

    Owing to today's increasingly competitive market and ever-changing manufacturing environment, the inventory problem is becoming more complicated to solve. The incorporation of heuristic methods has become a new trend for tackling such complex problems in the past decade. This article considers a lot-sizing problem whose objective is to minimise total costs, where the costs include ordering, holding, purchase and transportation costs, under the requirement that no inventory shortage is allowed in the system. We first formulate the lot-sizing problem as a mixed integer programming (MIP) model. Next, an efficient genetic algorithm (GA) model is constructed for solving large-scale lot-sizing problems. An illustrative example with two cases in a touch panel manufacturer is used to demonstrate the practicality of these models, and a sensitivity analysis is applied to understand the impact of changes in the parameters on the outcomes. The results demonstrate that both the MIP model and the GA model are effective and relatively accurate tools for determining multi-period replenishment for touch panel manufacturing with quantity discounts and batch transportation. The contributions of this article are to construct an MIP model that obtains an optimal solution when the problem is not too complicated, and to present a GA model that finds a near-optimal solution efficiently when the problem is complicated.
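As a hedged illustration of the single-item trade-off underlying such models (a fixed ordering cost balanced against holding cost, with no shortages allowed), the classic Wagner-Whitin dynamic program can be sketched in a few lines; the quantity discounts and transportation costs of the article's MIP/GA models are deliberately omitted here.

```python
def wagner_whitin(demand, order_cost, hold_cost):
    """Minimum ordering + holding cost for single-item lot sizing
    with no shortages (Wagner-Whitin dynamic program).

    demand[t]  : demand in period t
    order_cost : fixed cost per order placed
    hold_cost  : cost of carrying one unit for one period
    """
    T = len(demand)
    INF = float("inf")
    # best[t] = minimum cost to satisfy demand for periods 0 .. t-1
    best = [0.0] + [INF] * T
    for t in range(1, T + 1):
        # The last order is placed in some period s and covers s .. t-1.
        for s in range(t):
            holding = sum(hold_cost * (k - s) * demand[k] for k in range(s, t))
            best[t] = min(best[t], best[s] + order_cost + holding)
    return best[T]

# Three periods: ordering everything up front (100 order + 70 holding)
# beats placing an order in every period (3 x 100).
total = wagner_whitin([20, 50, 10], order_cost=100, hold_cost=1)  # 170.0
```

The nested loop makes this O(T^2); for the large multi-period instances the article targets, that is exactly where an MIP solver or a GA heuristic takes over.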

  4. International Ventilation Cooling Application Database

    DEFF Research Database (Denmark)

    Holzer, Peter; Psomas, Theofanis Ch.; O'Sullivan, Paul

    2016-01-01

    The currently running International Energy Agency, Energy and Conservation in Buildings, Annex 62 Ventilative Cooling (VC) project is coordinating research towards extended use of VC. Within this Annex 62, the joint research activity of the International VC Application Database has been carried out, systematically investigating the distribution of technologies and strategies within VC. The database is structured as both a ticking-list-like building-spreadsheet and a collection of building-datasheets. The content of both closely follows the Annex 62 State-Of-The-Art-Report. The database has been filled, based... and locations, using VC as a means of indoor comfort improvement. The building-spreadsheet highlights distributions of technologies and strategies, such as the following. (Numbers in % refer to the sample of the database's 91 buildings.) It may be concluded that Ventilative Cooling is applied in temporary...

  5. PSSRdb: a relational database of polymorphic simple sequence repeats extracted from prokaryotic genomes.

    Science.gov (United States)

    Kumar, Pankaj; Chaitanya, Pasumarthy S; Nagarajaram, Hampapathalu A

    2011-01-01

    PSSRdb (Polymorphic Simple Sequence Repeats database) (http://www.cdfd.org.in/PSSRdb/) is a relational database of polymorphic simple sequence repeats (PSSRs) extracted from 85 different species of prokaryotes. Simple sequence repeats (SSRs) are tandem repeats of nucleotide motifs of sizes 1-6 bp and are highly polymorphic. SSR mutations in and around coding regions affect transcription and translation of genes. Such changes underpin the phase variations and antigenic variations seen in some bacteria. Although SSR-mediated phase variation and antigenic variation have been well studied in some bacteria, many other species of prokaryotes have yet to be investigated for SSR-mediated adaptive and other evolutionary advantages. As part of our ongoing studies on SSR polymorphism in prokaryotes, we compared the genome sequences of the various strains and isolates available for 85 different species of prokaryotes, extracted a number of SSRs showing length variations, and created a relational database called PSSRdb. This database gives useful information such as the location of PSSRs in genomes, length variation across genomes, the regions harboring PSSRs, etc. The information provided in this database is very useful for further research and analysis of SSRs in prokaryotes.

  6. Density Distributions of Cyclotrimethylenetrinitramines (RDX)

    International Nuclear Information System (INIS)

    Hoffman, D M

    2002-01-01

    As part of the US Army Foreign Comparative Testing (FCT) program, the density distributions of six samples of class 1 RDX were measured using the density gradient technique. This technique was used in an attempt to distinguish between RDX crystallized by a French manufacturer (designated insensitive RDX, or IRDX) and RDX manufactured at Holston Army Ammunition Plant (HAAP), the current source of RDX for the Department of Defense (DoD). Two samples from different lots of French IRDX had an average density of 1.7958 ± 0.0008 g/cc. The theoretical density of a perfect RDX crystal is 1.806 g/cc, so this corresponds to 99.43% of the theoretical maximum density (TMD). For two HAAP RDX lots the average density was 1.786 ± 0.002 g/cc, only 98.89% TMD. Several other techniques were used for preliminary characterization of one lot of French IRDX and two lots of HAAP RDX. Light scattering, SEM and polarized optical microscopy (POM) showed that the SNPE and Holston RDX had the appropriate particle size distribution for Class 1 RDX. High performance liquid chromatography showed quantities of HMX in the HAAP RDX. The French IRDX also showed a 1.1 °C higher melting point compared to HAAP RDX in differential scanning calorimetry (DSC), consistent with no melting point depression due to the HMX contaminant. A second part of the program involved characterization of Holston RDX recrystallized using the French process. After reprocessing, the average density of the Holston RDX increased to 1.7907 g/cc. Apparently HMX in RDX can act as a nucleating agent in the French RDX recrystallization process. The French IRDX contained no HMX, which is assumed to account for its higher density and narrower density distribution. Reprocessing of RDX from Holston improved the average density compared to the original Holston RDX, but the resulting HIRDX was not as dense as the original French IRDX. Recrystallized Holston IRDX crystals were much larger (3-500 µm or more) than either the original class 1 HAAP RDX or French
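The %TMD figures quoted in the abstract follow directly from the ratio of measured to theoretical density; a minimal check of that arithmetic:

```python
THEORETICAL_RDX_DENSITY = 1.806  # g/cc, perfect RDX crystal (from the abstract)

def percent_tmd(measured_density, theoretical=THEORETICAL_RDX_DENSITY):
    """Percent of theoretical maximum density (%TMD)."""
    return 100.0 * measured_density / theoretical

french_irdx = percent_tmd(1.7958)  # ~99.43 %TMD, as quoted for French IRDX
haap_rdx = percent_tmd(1.786)      # ~98.89 %TMD, as quoted for HAAP RDX
```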

  7. The Lot Sizing and Scheduling of Sand Casting Operations

    NARCIS (Netherlands)

    Hans, Elias W.; van de Velde, Steef

    2011-01-01

    We describe a real world case study that involves the monthly planning and scheduling of the sand-casting department in a metal foundry. The problem can be characterised as a single-level multi-item capacitated lot-sizing model with a variety of additional process-specific constraints. The main

  8. Development, deployment and operations of ATLAS databases

    International Nuclear Information System (INIS)

    Vaniachine, A. V.; von der Schmitt, J. G.

    2008-01-01

    In preparation for ATLAS data taking, a coordinated shift from development towards operations has occurred in ATLAS database activities. In addition to development and commissioning activities in databases, ATLAS is active in the development and deployment (in collaboration with the WLCG 3D project) of the tools that allow the worldwide distribution and installation of databases and related datasets, as well as the actual operation of this system on ATLAS multi-grid infrastructure. We describe development and commissioning of major ATLAS database applications for online and offline. We present the first scalability test results and ramp-up schedule over the initial LHC years of operations towards the nominal year of ATLAS running, when the database storage volumes are expected to reach 6.1 TB for the Tag DB and 1.0 TB for the Conditions DB. ATLAS database applications require robust operational infrastructure for data replication between online and offline at Tier-0, and for the distribution of the offline data to Tier-1 and Tier-2 computing centers. We describe ATLAS experience with Oracle Streams and other technologies for coordinated replication of databases in the framework of the WLCG 3D services

  9. Transportation and Production Lot-size for Sugarcane under Uncertainty of Machine Capacity

    Directory of Open Access Journals (Sweden)

    Sudtachat Kanchala

    2018-01-01

    Full Text Available The integrated transportation and production lot-size problem has an important effect on the total cost of the operation system for sugar factories. In this research, we formulate a mathematical model that combines these two problems as a two-stage stochastic programming model. In the first stage, we determine the lot size of the transportation problem and allocate a fixed number of vehicles to transport sugarcane to the mill factory. Moreover, we consider uncertainty in the machine (mill) capacities. After the machine (mill) capacities are realized, in the second stage we determine the production lot size and decide how many units of sugarcane to hold in front of the mills, based on discrete random variables for the machine (mill) capacities. We investigate the model using a small-size problem. The results show that the optimal solutions tend to choose the closest fields and the lowest holding cost per unit (at the fields) for transporting sugarcane to the mill factory. We show the results of a comparison of our model with the worst-case model (full capacity). The results show that our model provides better efficiency than the worst-case model.

  10. Towards P2P XML Database Technology

    NARCIS (Netherlands)

    Y. Zhang (Ying)

    2007-01-01

    textabstractTo ease the development of data-intensive P2P applications, we envision a P2P XML Database Management System (P2P XDBMS) that acts as a database middle-ware, providing a uniform database abstraction on top of a dynamic set of distributed data sources. In this PhD work, we research which

  11. Applications of the LOTS computer code to laser fusion systems and other physical optics problems

    International Nuclear Information System (INIS)

    Lawrence, G.; Wolfe, P.N.

    1979-01-01

    The Laser Optical Train Simulation (LOTS) code has been developed at the Optical Sciences Center, University of Arizona, under contract to Los Alamos Scientific Laboratory (LASL). LOTS is a diffraction-based code designed to evaluate the beam quality and energy of the laser fusion system in an end-to-end calculation.

  12. PostGIS-Based Heterogeneous Sensor Database Framework for the Sensor Observation Service

    Directory of Open Access Journals (Sweden)

    Ikechukwu Maduako

    2012-10-01

    Full Text Available Environmental monitoring and management systems in most cases deal with models and spatial analytics that involve the integration of in-situ and remote sensor observations. In-situ sensor observations and those gathered by remote sensors are usually provided by different databases and services in real-time dynamic services such as the Geo-Web Services. Thus, data have to be pulled from different databases and transferred over the network before they are fused and processed on the service middleware. This imposes a heavy and unnecessary communication and processing load on the service: large rasters must be downloaded from flat-file raster data sources each time a request is made, and the integration and geo-processing workload falls on the service middleware when it could be better leveraged at the database level. In this paper, we propose and present a heterogeneous sensor database framework, or model, for the integration, geo-processing and spatial analysis of remote and in-situ sensor observations at the database level. We also show how this can be integrated into the Sensor Observation Service (SOS) to reduce communication and workload on the geospatial web services, as well as to make query requests from the user end much more flexible.

  13. Activity Recognition and Localization on a Truck Parking Lot

    NARCIS (Netherlands)

    Andersson, M.; Patino, L.; Burghouts, G.J.; Flizikowski, A.; Evans, M.; Gustafsson, D.; Petersson, H.; Schutte, K.; Ferryman, J.

    2013-01-01

    In this paper we present a set of activity recognition and localization algorithms that together assemble a large amount of information about activities on a parking lot. The aim is to detect and recognize events that may pose a threat to truck drivers and trucks. The algorithms perform zone-based

  14. Timeliness and Predictability in Real-Time Database Systems

    National Research Council Canada - National Science Library

    Son, Sang H

    1998-01-01

    The confluence of computers, communications, and databases is quickly creating a globally distributed database where many applications require real time access to both temporally accurate and multimedia data...

  15. Model for teaching distributed computing in a distance-based educational environment

    CSIR Research Space (South Africa)

    le Roux, P

    2010-10-01

    Full Text Available Due to the prolific growth in connectivity, the development and implementation of distributed systems receive a lot of attention. Several technologies and languages exist for the development and implementation of such distributed systems; however...

  16. Lessons Learned from resolving massive IPS database change for SPADES+

    International Nuclear Information System (INIS)

    Kim, Jin-Soo

    2016-01-01

    Safety Parameter Display and Evaluation System+ (SPADES+) was implemented to meet the requirements for the Safety Parameter Display System (SPDS), which are related to the TMI Action Plan requirements. SPADES+ continuously monitors the critical safety functions during normal, abnormal, and emergency operation modes and generates an alarm output to the alarm server when the tolerances related to safety functions are not satisfied. The alarm algorithm for the critical safety functions is performed in the NSSS Application Software (NAPS) server of the Information Process System (IPS), and the calculation result is displayed on the flat panel display (FPD) of the IPS. SPADES+ provides the critical variables to the control room operators to aid them in rapidly and reliably determining the safety status of the plant. Many database point ID names (518 points) were changed. POINT_ID is used in the programming source code, in related documents such as the SDS and SRS, and in the graphic database. To reduce human errors, computer programs and office-program macros were used. Although automatic methods were used for changing POINT_IDs, editing the change list still took a lot of time beyond what the computerized solutions covered. In the IPS there are many more programs than SPADES+, and over 30,000 POINT_IDs are in the IPS database, so changing POINT_IDs could be a burden to software engineers. In the case of the Ovation system database, there is an Alias field to prevent this kind of problem: the Alias is a kind of secondary key in the database.
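The kind of automated POINT_ID rewrite described above can be sketched as a whole-word substitution over source text; the rename table and the IDs below are hypothetical, and a real migration would also have to cover the documents and the graphic database:

```python
import re

# Hypothetical POINT_ID rename table; the real change list covered
# 518 points across source code, documents, and the graphic database.
RENAMES = {
    "OLD_POINT_001": "NEW_POINT_001",
    "OLD_POINT_002": "NEW_POINT_002",
}

# Match whole identifiers only, so a longer ID such as OLD_POINT_0010
# is never corrupted by a partial substitution.
_POINT_ID = re.compile(r"\b(" + "|".join(map(re.escape, RENAMES)) + r")\b")

def rename_point_ids(text):
    """Apply the rename table to one chunk of source text."""
    return _POINT_ID.sub(lambda m: RENAMES[m.group(1)], text)
```

The word-boundary anchors are the point of the sketch: a naive string replace over 30,000 IDs is exactly where the human errors the abstract mentions creep in.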

  17. Universal scaling of the distribution of land in urban areas

    Science.gov (United States)

    Riascos, A. P.

    2017-09-01

    In this work, we explore the spatial structure of built zones and green areas in diverse western cities by analyzing the probability distribution of areas and a coefficient that characterizes their respective shapes. From the analysis of diverse datasets describing land lots in urban areas, we found that the distributions of built-up areas and natural zones in cities obey inverse power laws with a similar scaling for all the cities explored. On the other hand, by studying the distribution of lot shapes in urban regions, we are able to detect global differences in the spatial structure of the distribution of land. Our findings introduce information about the spatial patterns that emerge in the structure of urban settlements; this knowledge is useful for understanding urban growth, improving existing models of cities, and informing studies of sustainability and human mobility in urban areas, among other applications.
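The paper's datasets and fitting procedure are not reproduced here, but the kind of inverse power-law (Pareto) tail it reports for lot areas can be fitted with the standard maximum-likelihood (Hill) estimator; the sketch below recovers a known exponent from synthetic data:

```python
import math
import random

def hill_exponent(areas, a_min):
    """Maximum-likelihood (Hill) estimate of alpha for a Pareto tail
    p(a) ~ a**(-alpha), restricted to observations with a >= a_min."""
    tail = [a for a in areas if a >= a_min]
    return 1.0 + len(tail) / sum(math.log(a / a_min) for a in tail)

# Synthetic "lot areas": inverse-transform samples from a Pareto law
# with alpha = 2.5, from which the estimator recovers the exponent.
random.seed(0)
alpha, a_min = 2.5, 1.0
areas = [a_min * (1.0 - random.random()) ** (-1.0 / (alpha - 1.0))
         for _ in range(50_000)]
estimate = hill_exponent(areas, a_min)  # close to 2.5
```

In practice the choice of the cutoff a_min matters as much as the estimator itself, since real area distributions are only power-law-like in the tail.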

  18. Solving a combined cutting-stock and lot-sizing problem with a column generating procedure

    DEFF Research Database (Denmark)

    Nonås, Sigrid Lise; Thorstenson, Anders

    2008-01-01

    In Nonås and Thorstenson [A combined cutting stock and lot sizing problem. European Journal of Operational Research 120(2) (2000) 327-42] a combined cutting-stock and lot-sizing problem is outlined under static and deterministic conditions. In this paper we suggest a new column generating solution... The results indicate that the procedure works well also for the extended cutting-stock problem with only a setup cost for each pattern change.

  19. Brive-la-Gaillarde (Corrèze). Îlot Massénat

    OpenAIRE

    Ollivier, Julien

    2018-01-01

    The archaeological excavation of the îlot Massénat began in the summer of 2016, prior to the construction of housing and commercial premises with a semi-underground car park. It covered an area of about 900 m2 and lasted two months with a team of five archaeologists. The site, evaluated in 2004 (dir. J. Roger, Inrap), is located south of the Puy Saint-Pierre, where all the remains of the ancient occupation of Brive, still poorly characterized, have been discovered. The îlot is, moreover...

  20. Databases for INDUS-1 and INDUS-2

    International Nuclear Information System (INIS)

    Merh, Bhavna N.; Fatnani, Pravin

    2003-01-01

    The databases for Indus are relational databases designed to store various categories of data related to the accelerator. The data archiving and retrieval system in Indus is based on a client/server model. A general-purpose commercial database is used to store parameters and equipment data for the whole machine. The database manages configuration, on-line and historical databases. On-line and off-line applications distributed over several systems can store and retrieve data from the database over the network. This paper describes the structure of the databases for Indus-1 and Indus-2 and their integration within the software architecture. The data analysis, design, resulting data-schema and implementation issues are discussed. (author)

  1. New methodology for dynamic lot dispatching

    Science.gov (United States)

    Tai, Wei-Herng; Wang, Jiann-Kwang; Lin, Kuo-Cheng; Hsu, Yi-Chin

    1994-09-01

    This paper presents a new dynamic dispatching rule to improve delivery. The dynamic dispatching rule, named `SLACK and OTD (on-time delivery)', is developed to focus on due date and target cycle time in an IC manufacturing environment. The idea uses the traditional SLACK policy to control the long-term due date and the new OTD policy to reflect the short-term stage queue time. Through fuzzy theory, these two policies are combined into a dispatching controller that defines the lot priority across the entire production line. In addition, the system automatically updates the lot priority according to the current line situation. Previously, wafer dispatching was controlled by a critical ratio, which resulted in low customer satisfaction. Moreover, the overall slack time in the front end of the process was greater than that in the rear end, which reveals that the machines in the rear end were overloaded by rush orders. Since SLACK and OTD were introduced, due-date control has gradually improved. A wafer with either a long stage queue time or an urgent due date is pushed through the overall production line instead of being jammed in the front end. A demand-pull system is also developed to satisfy not only the due date but also the quantity of monthly demand. The SLACK and OTD rule has been implemented at Taiwan Semiconductor Manufacturing Company for eight months with beneficial results. To clearly monitor the SLACK and OTD policy, a method called the box chart is used to simulate the entire production system. From the box chart, we can not only monitor the result of the decision policy but also display the production situation on a density figure. The production cycle time and delivery situation can also be investigated.
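The abstract does not give the fuzzy membership functions used to combine the two policies, so the sketch below assumes simple linear memberships and a weighted sum; it only illustrates the idea of merging a due-date SLACK signal with a stage-queue-time OTD signal into a single lot priority:

```python
def lot_priority(slack_hours, max_slack, queue_hours, max_queue,
                 w_slack=0.5, w_otd=0.5):
    """Combine a due-date signal (SLACK) and a stage-queue-time signal
    (OTD) into one dispatching score in [0, 1]; higher = dispatch first.
    Linear memberships and a weighted sum are assumed here, since the
    abstract does not specify the fuzzy combination."""
    # Due-date urgency: little remaining slack -> high urgency.
    slack_urgency = 1.0 - min(max(slack_hours / max_slack, 0.0), 1.0)
    # Queue urgency: a long wait at the current stage -> high urgency.
    otd_urgency = min(max(queue_hours / max_queue, 0.0), 1.0)
    return w_slack * slack_urgency + w_otd * otd_urgency

# A lot with no slack and a long stage queue outranks a comfortable one.
urgent = lot_priority(slack_hours=0, max_slack=48, queue_hours=10, max_queue=12)
relaxed = lot_priority(slack_hours=40, max_slack=48, queue_hours=1, max_queue=12)
```

Re-evaluating this score for every lot on each dispatch decision is what the paper describes as automatically updating priorities with the current line situation.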

  2. Further observations on comparison of immunization coverage by lot quality assurance sampling and 30 cluster sampling.

    Science.gov (United States)

    Singh, J; Jain, D C; Sharma, R S; Verghese, T

    1996-06-01

    Lot Quality Assurance Sampling (LQAS) and the standard EPI methodology (30-cluster sampling) were used to evaluate immunization coverage in a Primary Health Center (PHC) where coverage levels were reported to be more than 85%. Of 27 sub-centers (lots) evaluated by LQAS, only 2 were accepted for child coverage, whereas none was accepted for tetanus toxoid (TT) coverage in mothers. LQAS data were combined to obtain an estimate of coverage in the entire population; 41% (95% CI 36-46) of infants were immunized appropriately for their ages, while 42% (95% CI 37-47) of their mothers had received a second/booster dose of TT. TT coverage in 149 contemporary mothers sampled in the EPI survey was also 42% (95% CI 31-52). Although the results of the two sampling methods were consistent with each other, a big gap was evident between reported coverage (in children as well as mothers) and the survey results. LQAS was found to be operationally feasible, but it cost 40% more and required 2.5 times more time than the EPI survey. LQAS, therefore, is not a good substitute for the current EPI methodology to evaluate immunization coverage in a large administrative area. However, LQAS has potential as a method to monitor health programs on a routine basis in small population sub-units, especially in areas with high and heterogeneously distributed immunization coverage.

  3. Choosing a Cluster Sampling Design for Lot Quality Assurance Sampling Surveys.

    Directory of Open Access Journals (Sweden)

    Lauren Hund

    Full Text Available Lot quality assurance sampling (LQAS) surveys are commonly used for monitoring and evaluation in resource-limited settings. Recently several methods have been proposed to combine LQAS with cluster sampling for more timely and cost-effective data collection. For some of these methods, the standard binomial model can be used for constructing decision rules as the clustering can be ignored. For other designs, considered here, clustering is accommodated in the design phase. In this paper, we compare these latter cluster LQAS methodologies and provide recommendations for choosing a cluster LQAS design. We compare technical differences in the three methods and determine situations in which the choice of method results in a substantively different design. We consider two different aspects of the methods: the distributional assumptions and the clustering parameterization. Further, we provide software tools for implementing each method and clarify misconceptions about these designs in the literature. We illustrate the differences in these methods using vaccination and nutrition cluster LQAS surveys as example designs. The cluster methods are not sensitive to the distributional assumptions but can result in substantially different designs (sample sizes) depending on the clustering parameterization. However, none of the clustering parameterizations used in the existing methods appears to be consistent with the observed data, and, consequently, choice between the cluster LQAS methods is not straightforward. Further research should attempt to characterize clustering patterns in specific applications and provide suggestions for best-practice cluster LQAS designs on a setting-specific basis.

  4. Choosing a Cluster Sampling Design for Lot Quality Assurance Sampling Surveys.

    Science.gov (United States)

    Hund, Lauren; Bedrick, Edward J; Pagano, Marcello

    2015-01-01

    Lot quality assurance sampling (LQAS) surveys are commonly used for monitoring and evaluation in resource-limited settings. Recently several methods have been proposed to combine LQAS with cluster sampling for more timely and cost-effective data collection. For some of these methods, the standard binomial model can be used for constructing decision rules as the clustering can be ignored. For other designs, considered here, clustering is accommodated in the design phase. In this paper, we compare these latter cluster LQAS methodologies and provide recommendations for choosing a cluster LQAS design. We compare technical differences in the three methods and determine situations in which the choice of method results in a substantively different design. We consider two different aspects of the methods: the distributional assumptions and the clustering parameterization. Further, we provide software tools for implementing each method and clarify misconceptions about these designs in the literature. We illustrate the differences in these methods using vaccination and nutrition cluster LQAS surveys as example designs. The cluster methods are not sensitive to the distributional assumptions but can result in substantially different designs (sample sizes) depending on the clustering parameterization. However, none of the clustering parameterizations used in the existing methods appears to be consistent with the observed data, and, consequently, choice between the cluster LQAS methods is not straightforward. Further research should attempt to characterize clustering patterns in specific applications and provide suggestions for best-practice cluster LQAS designs on a setting-specific basis.

  5. Multidrug resistance among new tuberculosis cases: detecting local variation through lot quality-assurance sampling.

    Science.gov (United States)

    Hedt, Bethany Lynn; van Leth, Frank; Zignol, Matteo; Cobelens, Frank; van Gemert, Wayne; Nhung, Nguyen Viet; Lyepshina, Svitlana; Egwaga, Saidi; Cohen, Ted

    2012-03-01

    Current methodology for multidrug-resistant tuberculosis (MDR TB) surveys endorsed by the World Health Organization provides estimates of MDR TB prevalence among new cases at the national level. In the aggregate, local variation in the burden of MDR TB may be masked. This paper investigates the utility of applying lot quality-assurance sampling to identify geographic heterogeneity in the proportion of new cases with multidrug resistance. We simulated the performance of lot quality-assurance sampling by applying these classification-based approaches to data collected in the most recent TB drug-resistance surveys in Ukraine, Vietnam, and Tanzania. We explored three classification systems (two-way static, three-way static, and three-way truncated sequential sampling) at two sets of thresholds: low MDR TB = 2%, high MDR TB = 10%; and low MDR TB = 5%, high MDR TB = 20%. The lot quality-assurance sampling systems identified local variability in the prevalence of multidrug resistance in both a high-resistance setting (Ukraine) and a low-resistance setting (Vietnam). In Tanzania, prevalence was uniformly low, and the lot quality-assurance sampling approach did not reveal variability. The three-way classification systems provide additional information, but the required sample sizes may not be obtainable in some settings. New rapid drug-sensitivity testing methods may allow truncated sequential sampling designs and early stopping within static designs, producing even greater efficiency gains. Lot quality-assurance sampling study designs may offer an efficient approach for collecting critical information on local variability in the burden of multidrug-resistant TB. Before this methodology is adopted, programs must determine appropriate classification thresholds, the most useful classification system, and appropriate weighting if unbiased national estimates are also desired.

  6. Flexible interaction of plug-in electric vehicle parking lots for efficient wind integration

    International Nuclear Information System (INIS)

    Heydarian-Forushani, E.; Golshan, M.E.H.; Shafie-khah, M.

    2016-01-01

    Highlights: • Interactive incorporation of plug-in electric vehicle parking lots is investigated. • Flexible energy and reserve services are provided by electric vehicle parking lots. • Uncertain characterization of electric vehicle owners’ behavior is taken into account. • Coordinated operation of parking lots can facilitate wind power integration. - Abstract: The increasing share of uncertain wind generation has changed the traditional operation scheduling of power systems. The challenges of this additional variability raise the need for operational flexibility in providing both energy and reserve. One key solution is an effective incorporation of plug-in electric vehicles (PEVs) into the power system operation process. To this end, this paper proposes a two-stage stochastic programming market-clearing model that considers the network constraints to achieve the optimal scheduling of conventional units as well as PEV parking lots (PLs) in providing both energy and reserve services. Different from existing works, the paper pays more attention to the uncertain characterization of PLs: it takes into account the arrival/departure times of PEVs to/from the PL, the initial state of charge (SOC) of the PEVs, and their battery capacities through a set of scenarios, in addition to wind generation scenarios. The results reveal that although the cost saving from incorporating PLs into the grid is below 1% of the total system cost, flexible interactions of PLs in the energy and reserve markets can promote the integration of wind power by more than 13.5%.

  7. Lot quality assurance sampling for monitoring immunization programmes: cost-efficient or quick and dirty?

    Science.gov (United States)

    Sandiford, P

    1993-09-01

    In recent years Lot quality assurance sampling (LQAS), a method derived from production-line industry, has been advocated as an efficient means to evaluate the coverage rates achieved by child immunization programmes. This paper examines the assumptions on which LQAS is based and the effect that these assumptions have on its utility as a management tool. It shows that the attractively low sample sizes used in LQAS are achieved at the expense of specificity unless unrealistic assumptions are made about the distribution of coverage rates amongst the immunization programmes to which the method is applied. Although it is a very sensitive test and its negative predictive value is probably high in most settings, its specificity and positive predictive value are likely to be low. The implications of these strengths and weaknesses with regard to management decision-making are discussed.

  8. Domain Regeneration for Cross-Database Micro-Expression Recognition

    Science.gov (United States)

    Zong, Yuan; Zheng, Wenming; Huang, Xiaohua; Shi, Jingang; Cui, Zhen; Zhao, Guoying

    2018-05-01

    In this paper, we investigate the cross-database micro-expression recognition problem, where the training and testing samples come from two different micro-expression databases. Under this setting, the training and testing samples would have different feature distributions, and hence the performance of most existing micro-expression recognition methods may decrease greatly. To solve this problem, we propose a simple yet effective method called Target Sample Re-Generator (TSRG). By using TSRG, we are able to re-generate the samples from the target micro-expression database such that the re-generated target samples share the same or similar feature distributions as the original source samples. For this reason, we can then use the classifier learned on the labeled source samples to accurately predict the micro-expression categories of the unlabeled target samples. To evaluate the performance of the proposed TSRG method, extensive cross-database micro-expression recognition experiments based on the SMIC and CASME II databases are conducted. Compared with recent state-of-the-art cross-database emotion recognition methods, the proposed TSRG achieves more promising results.

  9. The 'Thinking a Lot' Idiom of Distress and PTSD: An Examination of Their Relationship among Traumatized Cambodian Refugees Using the 'Thinking a Lot' Questionnaire

    NARCIS (Netherlands)

    Hinton, D.E.; Reis, R.; de Jong, J.

    2015-01-01

    "Thinking a lot" (TAL)—also referred to as "thinking too much"—is a key complaint in many cultural contexts, and the current article profiles this idiom of distress among Cambodian refugees. The article also proposes a general model of how TAL generates various types of distress that then cause

  10. Directory of IAEA databases. 3. ed.

    International Nuclear Information System (INIS)

    1993-12-01

    This second edition of the Directory of IAEA Databases has been prepared within the Division of Scientific and Technical Information. Its main objective is to describe the computerized information sources available to staff members. This directory contains all databases produced at the IAEA, including databases stored on the mainframe, LANs and PCs. All IAEA Division Directors have been requested to register the existence of their databases with NESI. For the second edition, database owners were requested to review the existing entries for their databases and answer four additional questions. The four additional questions concerned the type of database (e.g. bibliographic, text, statistical, etc.), the category of database (e.g. administrative, nuclear data, etc.), the available documentation and the type of media used for distribution. In the individual entries on the following pages, the answers to the first two questions (type and category) are always listed, but the answers to the last two questions (documentation and media) are listed only when the information has been made available.

  11. An improved hierarchical A * algorithm in the optimization of parking lots

    Science.gov (United States)

    Wang, Yong; Wu, Junjuan; Wang, Ying

    2017-08-01

    In parking lot path optimization, the traditional approach takes the shortest distance as the sole evaluation index and does not consider actual road conditions. Introducing a more practical evaluation index can not only simplify the hardware design of the guidance system but also reduce the software overhead. Firstly, we establish a mathematical model of the parking lot network graph (RPCDV) and divide all nodes in the network into two layers, each constructed with a different evaluation function based on the improved hierarchical A* algorithm, which improves both the search efficiency and the search precision for the time-optimal path. The final results show that, across road sections with different attribute parameters, the algorithm consistently finds the time-optimal path faster.
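    The core of any such guidance system is A* search with an admissible heuristic; when the evaluation index is travel time rather than distance, the heuristic must underestimate time. A minimal single-layer sketch (the graph, coordinates, and assumed top speed below are illustrative, not from the paper):

```python
import heapq

def a_star(graph, coords, start, goal, speed=1.0):
    """A* over a weighted graph whose edge weights are travel times.
    The heuristic is straight-line distance divided by an assumed top
    speed, which keeps it an underestimate (admissible)."""
    def h(n):
        (x1, y1), (x2, y2) = coords[n], coords[goal]
        return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5 / speed

    open_set = [(h(start), 0.0, start, [start])]
    best = {start: 0.0}
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return g, path
        for nbr, t in graph[node]:
            ng = g + t
            if ng < best.get(nbr, float("inf")):
                best[nbr] = ng
                heapq.heappush(open_set, (ng + h(nbr), ng, nbr, path + [nbr]))
    return float("inf"), []

# Toy parking-lot graph (hypothetical): edge weights are travel times, so a
# geometrically longer aisle can beat the shortest-distance route.
coords = {"E": (0, 0), "A": (1, 0), "B": (1, 1), "S": (2, 0)}
graph = {"E": [("A", 5.0), ("B", 2.0)], "A": [("S", 5.0)],
         "B": [("S", 2.0)], "S": []}
print(a_star(graph, coords, "E", "S"))  # → (4.0, ['E', 'B', 'S'])
```

    Because the evaluation index is time, the longer but faster aisle E→B→S (4.0 time units) wins over the geometrically shorter E→A→S, which is exactly the behavior the shortest-distance index cannot capture.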

  12. 9 CFR 351.19 - Refusal of certification for specific lots.

    Science.gov (United States)

    2010-01-01

    ... 9 Animals and Animal Products 2 2010-01-01 2010-01-01 false Refusal of certification for specific lots. 351.19 Section 351.19 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE AGENCY ORGANIZATION AND TERMINOLOGY; MANDATORY MEAT AND POULTRY PRODUCTS INSPECTION...

  13. A data analysis expert system for large established distributed databases

    Science.gov (United States)

    Gnacek, Anne-Marie; An, Y. Kim; Ryan, J. Patrick

    1987-01-01

    A design for a natural language database interface system, called the Deductively Augmented NASA Management Decision support System (DANMDS), is presented. The DANMDS system components have been chosen on the basis of the following considerations: maximal employment of the existing NASA IBM-PC computers and supporting software; local structuring and storing of external data via the entity-relationship model; a natural, easy-to-use, error-free database query language; user ability to alter the query language vocabulary and data analysis heuristics; and significant artificial intelligence data analysis heuristic techniques that allow the system to become progressively and automatically more useful.

  14. Design and analysis of stochastic DSS query optimizers in a distributed database system

    Directory of Open Access Journals (Sweden)

    Manik Sharma

    2016-07-01

    Full Text Available Query optimization is a challenging task in any database system. A number of heuristics have been applied in recent times, which proposed new algorithms for substantially improving the performance of a query. The hunt for a better solution still continues. The continual developments in the field of Decision Support System (DSS) databases are presenting data at an exceptional rate. The massive volume of DSS data is consequential only when it can be accessed and analyzed by distinctive researchers. Here, an innovative stochastic framework for a DSS query optimizer is proposed to further optimize the design of existing genetic query optimization approaches. The results of the Entropy Based Restricted Stochastic Query Optimizer (ERSQO) are compared with the results of the Exhaustive Enumeration Query Optimizer (EAQO), the Simple Genetic Query Optimizer (SGQO), the Novel Genetic Query Optimizer (NGQO) and the Restricted Stochastic Query Optimizer (RSQO). In terms of Total Costs, EAQO outperforms SGQO, NGQO, RSQO and ERSQO. However, the stochastic approaches dominate in terms of runtime. The Total Costs produced by ERSQO are better than those of SGQO, NGQO and RSQO by 12%, 8% and 5%, respectively. Moreover, the effect of replicating data on the Total Costs of DSS queries is also examined. In addition, the statistical analysis revealed a 2-tailed significant correlation between the number of join operations and the Total Costs of a distributed DSS query. Finally, in regard to the consistency of the stochastic query optimizers, the results of SGQO, NGQO, RSQO and ERSQO are 96.2%, 97.2%, 97.4% and 97.8% consistent, respectively.
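    Genetic query optimizers of the SGQO/NGQO family evolve a population of join orders under a cost model. A toy sketch of the idea (the cost model, relation sizes, and GA parameters below are illustrative assumptions, not those of the paper):

```python
import random

def query_cost(order, sizes):
    """Toy cost model (assumed, not the paper's): joining left-to-right,
    each intermediate result costs the product of the sizes joined so far."""
    cost, inter = 0, sizes[order[0]]
    for rel in order[1:]:
        inter *= sizes[rel]
        cost += inter
    return cost

def genetic_join_order(sizes, pop=30, gens=50, mut=0.2, seed=1):
    """Evolve join orders: keep the cheaper half (elitism), mutate copies."""
    random.seed(seed)
    rels = list(sizes)
    population = [random.sample(rels, len(rels)) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda o: query_cost(o, sizes))
        survivors = population[: pop // 2]
        children = []
        for parent in survivors:
            child = parent[:]
            if random.random() < mut:      # mutation: swap two relations
                i, j = random.sample(range(len(child)), 2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        population = survivors + children
    return min(population, key=lambda o: query_cost(o, sizes))

# Hypothetical relation cardinalities
sizes = {"customer": 10_000, "orders": 50_000, "region": 5, "nation": 25}
best = genetic_join_order(sizes)
print(best, query_cost(best, sizes))
```

    Elitism keeps the best cost monotonically non-increasing across generations, which is the trade-off the abstract measures: stochastic optimizers give up the guaranteed optimum of exhaustive enumeration (EAQO) for far better runtime.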

  15. License - PLACE | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us PLACE License License to Use This Database Last updated : 2014/07/17 You may use this database in compliance with the license terms regarding the use of this database and the requirements you must follow in using this database. The license for this database is Creative Commons Attribution-Share Alike 2.1 Japan. If you use data from this database, please be sure to attribute this database as follows: ... With regard to this database, you are licensed to: freely access part or whole of this database, and acquire data; freely redistribute part or whole of the data from this database; and freely create and distribute databases or other derivative works based on part or whole of the data from this database.

  16. Database Independent Migration of Objects into an Object-Relational Database

    CERN Document Server

    Ali, A; Munir, K; Waseem-Hassan, M; Willers, I

    2002-01-01

    CERN's (European Organization for Nuclear Research) WISDOM project [1] deals with the replication of data between homogeneous sources in a Wide Area Network (WAN) using the eXtensible Markup Language (XML). The last phase of the WISDOM (Wide-area, database Independent Serialization of Distributed Objects for data Migration) project [2] indicates that the future direction for this work is to incorporate heterogeneous sources, as compared to the homogeneous sources described by [3]. This work will become essential for the CERN community once the need arises to transfer their legacy data to some source other than Objectivity [4]. Oracle 9i, an Object-Relational Database (including support for abstract data types, ADTs), appears to be a potential candidate for the physics event store in the CERN CMS experiment, as suggested by [4] & [5]. Consequently this database has been selected for study. As a result of this work the HEP community will get a tool for migrating their data from Objectivity to Oracle9i.

  17. LHCb Conditions Database Operation Assistance Systems

    CERN Multimedia

    Shapoval, Illya

    2012-01-01

    The Conditions Database of the LHCb experiment (CondDB) provides versioned, time dependent geometry and conditions data for all LHCb data processing applications (simulation, high level trigger, reconstruction, analysis) in a heterogeneous computing environment ranging from user laptops to the HLT farm and the Grid. These different use cases impose front-end support for multiple database technologies (Oracle and SQLite are used). Sophisticated distribution tools are required to ensure timely and robust delivery of updates to all environments. The content of the database has to be managed to ensure that updates are internally consistent and externally compatible with multiple versions of the physics application software. In this paper we describe three systems that we have developed to address these issues: - an extension to the automatic content validation done by the “Oracle Streams” replication technology, to trap cases when the replication was unsuccessful; - an automated distribution process for the S...

  18. Potential impacts of OCS oil and gas activities on fisheries. Volume 1. Annotated bibliography and database descriptions for target-species distribution and abundance studies. Section 1, Part 2. Final report

    International Nuclear Information System (INIS)

    Tear, L.M.

    1989-10-01

    The purpose of the volume is to present an annotated bibliography of unpublished and grey literature related to the distribution and abundance of select species of finfish and shellfish along the coasts of the United States. The volume also includes descriptions of databases that contain information related to target species' distribution and abundance. An index is provided at the end of each section to help the reader locate studies or databases related to a particular species

  19. Potential impacts of OCS oil and gas activities on fisheries. Volume 1. Annotated bibliography and database descriptions for target species distribution and abundance studies. Section 1, Part 1. Final report

    International Nuclear Information System (INIS)

    Tear, L.M.

    1989-10-01

    The purpose of the volume is to present an annotated bibliography of unpublished and grey literature related to the distribution and abundance of select species of finfish and shellfish along the coasts of the United States. The volume also includes descriptions of databases that contain information related to target species' distribution and abundance. An index is provided at the end of each section to help the reader locate studies or databases related to a particular species

  20. Immune responses to a recombinant, four-component, meningococcal serogroup B vaccine (4CMenB) in adolescents: a phase III, randomized, multicentre, lot-to-lot consistency study.

    Science.gov (United States)

    Perrett, Kirsten P; McVernon, Jodie; Richmond, Peter C; Marshall, Helen; Nissen, Michael; August, Allison; Percell, Sandra; Toneatto, Daniela; Nolan, Terry

    2015-09-22

    For decades, a broadly effective vaccine against serogroup B Neisseria meningitidis (MenB) has remained elusive. Recently, a four-component recombinant vaccine (4CMenB) has been developed and is now approved in Europe, Canada, Australia and some Latin American countries. This phase III, randomized study evaluated the lot consistency, early immune responses and the safety profile of 4CMenB in 11 to 17-year-old adolescents in Australia and Canada (NCT01423084). In total, 344 adolescents received two doses of one of 2 lots of 4CMenB, 1 month apart. Immunogenicity was assessed before, 2 weeks and 1 month following the second vaccination. Serum bactericidal activity using human complement (hSBA) was measured against three reference strains 44/76-SL, 5/99 and NZ98/254, selected to express one of the vaccine antigens: Neisseria adhesin A (NadA), factor H binding protein (fHbp) and porin A (PorA) containing outer membrane vesicle (OMV), respectively. Responses to the Neisseria heparin binding antigen (NHBA) were assessed with enzyme linked immunosorbent assay (ELISA). Local and systemic reactions were recorded for 7 days following each vaccination; unsolicited adverse events were monitored throughout the study. Immunological equivalence of the two lots of 4CMenB was established at 1 month. At baseline, ≤7% of participants had hSBA titers ≥5 to all three reference strains. Two weeks following the second dose of 4CMenB, all participants had hSBA titers ≥5 against fHbp and NadA compared with 84-96% against the PorA reference strains. At 1 month, corresponding proportions were 99%, 100% and 70-79%, respectively. Both lots were generally well tolerated and had similar adverse event profiles. Two doses of 4CMenB had an acceptable safety profile and induced a robust immune response in adolescents. Peak antibody responses were observed at 14 days following vaccination. While a substantial non-uniform antigen-dependent early decline in antibody titers was seen thereafter, a

  1. Study of Different Priming Treatments on Germination Traits of Soybean Seed Lots

    Directory of Open Access Journals (Sweden)

    Hossein Reza ROUHI

    2011-03-01

    Full Text Available Oilseeds are more susceptible to deterioration due to membrane disruption, a high free fatty acid level in seeds and free radical production. These factors tend to produce less vigorous seed. Priming treatments have been used to accelerate germination and seedling growth in most crops under normal and stress conditions. For susceptible and low-vigor soybean seed, this technique would be a promising method. At first, in a separate experiment, the effects of hydropriming for 12, 24, 36 and 48 h, together with a control (no priming), were evaluated on germination traits of soybean seed lots cv. 'Sari' (including 2 drying methods and 3 harvest moistures). Then, a second experiment was conducted to determine the best osmopriming combination for soybean seed lots; hence 3 osmotic potential levels (-8, -10 and -12 bar) at 4 durations (12, 24, 36 and 48 h) were compared. Analysis of variance showed that, except for seedling dry weight, the other traits, including standard germination, germination rate, seedling length and vigor index, were influenced by osmopriming. Hydropriming had no effect on these traits and decreased the rate of germination. Finally, the best osmopriming combination was an osmotic potential of -12 bar for 12 hours, which gave acceptable results in all conditions and is recommended for soybean seed lots cv. 'Sari'.

  3. A hybrid adaptive large neighborhood search algorithm applied to a lot-sizing problem

    DEFF Research Database (Denmark)

    Muller, Laurent Flindt; Spoorendonk, Simon

    This paper presents a hybrid of a general heuristic framework that has been successfully applied to vehicle routing problems and a general purpose MIP solver. The framework uses local search and an adaptive procedure which chooses among a set of large neighborhoods to be searched. A mixed integer...... of a solution and to investigate the feasibility of elements in such a neighborhood. The hybrid heuristic framework is applied to the multi-item capacitated lot sizing problem with dynamic lot sizes, where experiments have been conducted on a series of instances from the literature. On average the heuristic
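    The adaptive layer of such a framework is typically a weight-based roulette wheel over the available large neighborhoods, with weights updated by recent success. A minimal sketch (the neighborhood names, reaction factor, and scores below are illustrative assumptions, not from the paper):

```python
import random

class NeighborhoodSelector:
    """Adaptive layer of an ALNS-style heuristic (sketch): each large
    neighborhood keeps a weight; weights grow when searching that
    neighborhood improved the incumbent, and selection is roulette-wheel."""
    def __init__(self, names, reaction=0.3):
        self.weights = {n: 1.0 for n in names}
        self.reaction = reaction

    def pick(self, rng=random):
        # Roulette-wheel selection proportional to current weights
        total = sum(self.weights.values())
        r, acc = rng.uniform(0, total), 0.0
        for name, w in self.weights.items():
            acc += w
            if r <= acc:
                return name
        return name

    def reward(self, name, score):
        # Exponential smoothing toward the neighborhood's observed score
        w = self.weights[name]
        self.weights[name] = (1 - self.reaction) * w + self.reaction * score

sel = NeighborhoodSelector(["period-removal", "item-removal", "random-removal"])
sel.reward("period-removal", 5.0)   # produced a new best solution
sel.reward("random-removal", 0.0)   # produced nothing useful
print(max(sel.weights, key=sel.weights.get))  # → period-removal
```

    Over many iterations the wheel shifts effort toward the neighborhoods that have recently paid off, while the MIP solver is used to repair or evaluate the candidates each neighborhood produces.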

  4. Lessons Learned from resolving massive IPS database change for SPADES+

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jin-Soo [KEPCO Engineering and Construction Co., Deajeon (Korea, Republic of)

    2016-10-15

    Safety Parameter Display and Evaluation System+ (SPADES+) was implemented to meet the requirements for the Safety Parameter Display System (SPDS), which are related to the TMI Action Plan requirements. SPADES+ continuously monitors the critical safety functions during normal, abnormal, and emergency operation modes and generates an alarm output to the alarm server when the tolerances related to safety functions are not satisfied. The alarm algorithm for critical safety functions is performed in the NSSS Application Software (NAPS) server of the Information Process System (IPS), and the calculation result is displayed on the flat panel display (FPD) of the IPS. SPADES+ provides the critical variables to the control room operators to aid them in rapidly and reliably determining the safety status of the plant. Many database point ID names (518 points) were changed. POINT_ID is used in the programming source code, in related documents such as the SDS and SRS, and in the graphic database. To reduce human errors, a computer program and office-program macros were used. Though automatic methods were used for changing POINT_IDs, it still took a lot of time to edit the change list, beyond building the computerized solutions. In the IPS there are many more programs than SPADES+, and over 30,000 POINT_IDs are in the IPS database. Changing POINT_IDs could be a burden to software engineers. In the case of the Ovation system database, there is an Alias field to prevent this kind of problem. The Alias is a kind of secondary key in the database.
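    A bulk rename like this is safest when scripted as a single whole-word substitution pass over every source file and document. A sketch of the idea (the POINT_ID names and mapping below are hypothetical, not the plant's real identifiers):

```python
import re

# Hypothetical excerpt of the 518-entry rename table (old POINT_ID -> new)
RENAMES = {"RCS_P_0417": "RCS_PT_0417", "SG1_L_0032": "SG1_LT_0032"}

# Sort longest-first and match whole identifiers only (\b), so one rename
# can never clobber part of another, longer POINT_ID.
pattern = re.compile(
    r"\b(" + "|".join(re.escape(k) for k in sorted(RENAMES, key=len, reverse=True)) + r")\b"
)

def rename_point_ids(text):
    return pattern.sub(lambda m: RENAMES[m.group(1)], text)

src = "ALARM(RCS_P_0417) OR LIMIT(SG1_L_0032, RCS_P_0417X)"
print(rename_point_ids(src))
# → ALARM(RCS_PT_0417) OR LIMIT(SG1_LT_0032, RCS_P_0417X)
```

    Note that `RCS_P_0417X` is left untouched: the word-boundary match prevents exactly the partial-replacement human error the abstract warns about, while the edited change list itself still has to be reviewed by hand.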

  5. ATLAS DDM/DQ2 & NoSQL databases: Use cases and experiences

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    NoSQL databases. This includes distributed file systems like HDFS that support parallel execution of computational tasks on distributed data, as well as schema-less approaches via key-value/document stores, like HBase, Cassandra or MongoDB. These databases provide solutions to particular types

  6. License - RMG | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us RMG License License to Use This Database Last updated : 2013/08/07 You may use this database in compliance with the license terms regarding the use of this database and the requirements you must follow in using this database. The license for this database is Creative Commons Attribution-Share Alike 2.1 Japan. If you use data from this database, please be sure to attribute this database as follows: Ric... The summary of the Creative Commons Attribution-Share Alike 2.1 Japan is found here. With regard to this database, you are licensed to: freely access part or whole of this database, and acquire data; freely redistribute part or whole of the data from this database; and freely create and distribute databases or other derivative works based on part or whole of the data from this database.

  7. Database for Simulation of Electron Spectra for Surface Analysis (SESSA)

    Science.gov (United States)

    SRD 100 Database for Simulation of Electron Spectra for Surface Analysis (SESSA) (PC database for purchase)   This database has been designed to facilitate quantitative interpretation of Auger-electron and X-ray photoelectron spectra and to improve the accuracy of quantitation in routine analysis. The database contains all physical data needed to perform quantitative interpretation of an electron spectrum for a thin-film specimen of given composition. A simulation module provides an estimate of peak intensities as well as the energy and angular distributions of the emitted electron flux.

  8. A Distributed Database System for Developing Ontological and Lexical Resources in Harmony

    NARCIS (Netherlands)

    Horák, A.; Vossen, P.T.J.M.; Rambousek, A.; Gelbukh, A.

    2010-01-01

    In this article, we present the basic ideas of creating a new information-rich lexical database of Dutch, called Cornetto, that is interconnected with corresponding English synsets and a formal ontology. The Cornetto database is based on two existing electronic dictionaries - the Referentie Bestand

  9. Lot quality assurance sampling for screening communities hyperendemic for Schistosoma mansoni.

    Science.gov (United States)

    Rabarijaona, L P; Boisier, P; Ravaoalimalala, V E; Jeanne, I; Roux, J F; Jutand, M A; Salamon, R

    2003-04-01

    Lot quality assurance sampling (LQAS) was evaluated for rapid, low-cost identification of communities where Schistosoma mansoni infection was hyperendemic in southern Madagascar. In the study area, S. mansoni infection shows a very focal and heterogeneous distribution, requiring a multitude of local surveys. One sampling plan was tested in the field with schoolchildren and several others were simulated in the laboratory. Randomization and stool specimen collection were performed by voluntary teachers under direct supervision of the study staff, and no significant problem occurred. As expected from Receiver Operating Characteristic (ROC) curves, all sampling plans allowed correct identification of hyperendemic communities and of most of the hypoendemic ones. Frequent misclassifications occurred for communities with intermediate prevalence, and the cheapest plans had very low specificity. The study confirmed that LQAS would be a valuable tool for large-scale screening in a country with scarce financial and staff resources. Involving teachers appeared to be quite feasible and should not lower the reliability of surveys. We recommend that the national schistosomiasis control programme systematically use LQAS for identification of communities, provided that sample sizes are adapted to the specific epidemiological patterns of S. mansoni infection in the main regions.

  10. Multi-layer distributed storage of LHD plasma diagnostic database

    International Nuclear Information System (INIS)

    Nakanishi, Hideya; Kojima, Mamoru; Ohsuna, Masaki; Nonomura, Miki; Imazu, Setsuo; Nagayama, Yoshio

    2006-01-01

    At the end of LHD experimental campaign in 2003, the amount of whole plasma diagnostics raw data had reached 3.16 GB in a long-pulse experiment. This is a new world record in fusion plasma experiments, far beyond the previous value of 1.5 GB/shot. The total size of the LHD diagnostic data is about 21.6 TB for the whole six years of experiments, and it continues to grow at an increasing rate. The LHD diagnostic database and storage system, i.e. the LABCOM system, has a completely distributed architecture to be sufficiently flexible and easily expandable to maintain integrity of the total amount of data. It has three categories of the storage layer: OODBMS volumes in data acquisition servers, RAID servers, and mass storage systems, such as MO jukeboxes and DVD-R changers. These are equally accessible through the network. By data migration between them, they can be considered a virtual OODB extension area. Their data contents have been listed in a 'facilitator' PostgreSQL RDBMS, which contains about 6.2 million entries, and informs the optimized priority to clients requesting data. Using the 'glib' compression for all of the binary data and applying the three-tier application model for the OODB data transfer/retrieval, an optimized OODB read-out rate of 1.7 MB/s and effective client access speed of 3-25 MB/s have been achieved. As a result, the LABCOM data system has succeeded in combination of the use of RDBMS, OODBMS, RAID, and MSS to enable a virtual and always expandable storage volume, simultaneously with rapid data access. (author)

  11. Distributed control and data processing system with a centralized database for a BWR power plant

    International Nuclear Information System (INIS)

    Fujii, K.; Neda, T.; Kawamura, A.; Monta, K.; Satoh, K.

    1980-01-01

    Recent digital techniques based on changes in electronics and computer technologies have realized a very wide scale of computer application to BWR Power Plant control and instrumentation. Multifarious computers, from micro to mega, are introduced separately. And to get better control and instrumentation system performance, hierarchical computer complex system architecture has been developed. This paper addresses the hierarchical computer complex system architecture which enables more efficient introduction of computer systems to a Nuclear Power Plant. Distributed control and processing systems, which are the components of the hierarchical computer complex, are described in some detail, and the database for the hierarchical computer complex is also discussed. The hierarchical computer complex system has been developed and is now in the detailed design stage for actual power plant application. (auth)

  12. USBombus, a database of contemporary survey data for North American Bumble Bees (Hymenoptera, Apidae, Bombus) distributed in the United States.

    Science.gov (United States)

    Koch, Jonathan B; Lozier, Jeffrey; Strange, James P; Ikerd, Harold; Griswold, Terry; Cordes, Nils; Solter, Leellen; Stewart, Isaac; Cameron, Sydney A

    2015-01-01

    Bumble bees (Hymenoptera: Apidae, Bombus) are pollinators of wild and economically important flowering plants. However, at least four bumble bee species have declined significantly in population abundance and geographic range relative to historic estimates, and one species is possibly extinct. While a wealth of historic data is now available in online databases for many of the North American species found to be in decline, systematic survey data of stable species is still not publicly available. The availability of contemporary survey data is critically important for the future monitoring of wild bumble bee populations. Without such data, the ability to ascertain the conservation status of bumble bees in the United States will remain challenging. This paper describes USBombus, a large database that represents the outcomes of one of the largest standardized surveys of bumble bee pollinators (Hymenoptera, Apidae, Bombus) globally. The motivation to collect live bumble bees across the United States was to examine the decline and conservation status of Bombus affinis, B. occidentalis, B. pensylvanicus, and B. terricola. Prior to our national survey of bumble bees in the United States from 2007 to 2010, there have only been regional accounts of bumble bee abundance and richness. In addition to surveying declining bumble bees, we also collected and documented a diversity of co-occurring bumble bees. However, their distribution and diversity have not yet been completely reported on a public online platform. Now, for the first time, we report the geographic distribution of bumble bees reported to be in decline (Cameron et al. 2011), as well as bumble bees that appeared to be stable on a large geographic scale in the United States (not in decline). In this database we report a total of 17,930 adult occurrence records across 397 locations and 39 species of Bombus detected in our national survey. We summarize their abundance and distribution across the United States and

  13. Comparison of Firefly algorithm and Artificial Immune System algorithm for lot streaming in m-machine flow shop scheduling

    Directory of Open Access Journals (Sweden)

    G. Vijay Chakaravarthy

    2012-11-01

    Full Text Available Lot streaming is a technique used to split the processing of lots into several sublots (transfer batches) to allow the overlapping of operations in multistage manufacturing systems, thereby shortening the production time (makespan). The objective of this paper is to minimize the makespan and total flow time of the n-job, m-machine lot streaming problem in a flow shop with equal and variable size sublots, and also to determine the optimal sublot size. In recent times researchers have concentrated on applying intelligent heuristics to solve flow shop problems with lot streaming. In this research, the Firefly Algorithm (FA) and Artificial Immune System (AIS) algorithms are used to solve the problem. The results obtained by the proposed algorithms are also compared with the performance of other traditional heuristics worked out in the literature. The computational results show that the identified algorithms are more efficient, effective and better than the algorithms already tested for this problem.
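    The benefit of lot streaming is easy to see in a toy makespan model: splitting a lot into equal sublots lets downstream machines start before the whole lot finishes upstream. A minimal sketch (the per-unit times and lot size below are invented for illustration; no setup or transfer times are modeled):

```python
def lot_streaming_makespan(unit_times, lot_size, sublots):
    """Makespan of one job in a flow shop when its lot is split into equal
    sublots that overlap across machines (toy model, no setup times)."""
    q = lot_size / sublots
    # completion[j] = completion time of the latest sublot on machine j
    completion = [0.0] * len(unit_times)
    for _ in range(sublots):
        prev = 0.0
        for j, t in enumerate(unit_times):
            # A sublot starts on machine j when both the machine is free
            # and the sublot has finished on machine j-1.
            prev = max(prev, completion[j]) + t * q
            completion[j] = prev
    return completion[-1]

unit_times = [2.0, 3.0, 1.0]  # per-unit processing time on each of 3 machines
print(lot_streaming_makespan(unit_times, lot_size=12, sublots=1))  # → 72.0
print(lot_streaming_makespan(unit_times, lot_size=12, sublots=4))  # → 45.0
```

    With four sublots the makespan drops from 72 to 45 time units; metaheuristics such as FA and AIS are then used to search the much larger space of variable-size sublots and multi-job sequences.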

  14. Carbon dioxide and methane emissions from the scale model of open dairy lots.

    Science.gov (United States)

    Ding, Luyu; Cao, Wei; Shi, Zhengxiang; Li, Baoming; Wang, Chaoyuan; Zhang, Guoqiang; Kristensen, Simon

    2016-07-01

    To investigate the impacts of major factors on carbon loss via gaseous emissions, carbon dioxide (CO2) and methane (CH4) emissions from the ground of open dairy lots were tested by a scale model experiment at various air temperatures (15, 25, and 35 °C), surface velocities (0.4, 0.7, 1.0, and 1.2 m sec(-1)), and floor types (unpaved soil floor and brick-paved floor) in controlled laboratory conditions using the wind tunnel method. Generally, CO2 and CH4 emissions were significantly enhanced with the increase of air temperature and velocity (P < 0.05); emissions were also affected by air temperature and soil characteristics of the floor. Although different patterns were observed on CH4 emission from the soil and brick floors at different air temperature-velocity combinations, statistical analysis showed no significant difference in CH4 emissions from different floors (P > 0.05). For CO2, similar emissions were found from the soil and brick floors at 15 and 25 °C, whereas higher rates were detected from the brick floor at 35 °C (P < 0.05). CH4 emission from the scale model was exponentially related to CO2 flux, which might be helpful in CH4 emission estimation from manure management. Gaseous emissions from the open lots are largely dependent on outdoor climate, floor systems, and management practices, which are quite different from those indoors. This study assessed the effects of floor types and air velocities on CO2 and CH4 emissions from the open dairy lots at various temperatures by a wind tunnel. It provided some valuable information for decision-making and further studies on gaseous emissions from open lots.

  15. Column Store for GWAC: A High-cadence, High-density, Large-scale Astronomical Light Curve Pipeline and Distributed Shared-nothing Database

    Science.gov (United States)

    Wan, Meng; Wu, Chao; Wang, Jing; Qiu, Yulei; Xin, Liping; Mullender, Sjoerd; Mühleisen, Hannes; Scheers, Bart; Zhang, Ying; Nes, Niels; Kersten, Martin; Huang, Yongpan; Deng, Jinsong; Wei, Jianyan

    2016-11-01

    The ground-based wide-angle camera array (GWAC), a part of the SVOM space mission, will search for various types of optical transients by continuously imaging a field of view (FOV) of 5000 square degrees every 15 s. Each exposure consists of 36 × 4k × 4k pixels, typically resulting in 36 × ~175,600 extracted sources. For a modern time-domain astronomy project like GWAC, which produces massive amounts of data with a high cadence, it is challenging to search for short timescale transients in both real-time and archived data, and to build long-term light curves for variable sources. Here, we develop a high-cadence, high-density light curve pipeline (HCHDLP) to process the GWAC data in real-time, and design a distributed shared-nothing database to manage the massive amount of archived data which will be used to generate a source catalog with more than 100 billion records during 10 years of operation. First, we develop HCHDLP based on the column-store DBMS of MonetDB, taking advantage of MonetDB's high performance when applied to massive data processing. To realize the real-time functionality of HCHDLP, we optimize the pipeline in its source association function, including both time and space complexity from outside the database (SQL semantic) and inside (RANGE-JOIN implementation), as well as in its strategy of building complex light curves. The optimized source association function is accelerated by three orders of magnitude. Second, we build a distributed database using a two-level time partitioning strategy via the MERGE TABLE and REMOTE TABLE technology of MonetDB. Intensive tests validate that our database architecture is able to achieve both linear scalability in response time and concurrent access by multiple users. In summary, our studies provide guidance for a solution to GWAC in real-time data processing and management of massive data.
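The source-association step that the RANGE-JOIN optimization accelerates pairs each new detection with its catalog counterpart by sky position. A minimal in-memory sketch of that idea (illustrative Python with invented coordinates, not the MonetDB implementation) bins the catalog into cells so each detection is compared only against nearby entries instead of the whole table:

```python
from collections import defaultdict

def build_index(catalog, cell=0.01):
    # bin catalog sources (id, ra, dec in degrees) into square cells
    idx = defaultdict(list)
    for sid, ra, dec in catalog:
        idx[(int(ra / cell), int(dec / cell))].append((sid, ra, dec))
    return idx

def associate(ra, dec, idx, radius=0.003, cell=0.01):
    # nearest catalog source within `radius`, scanning only the 3x3
    # neighboring cells (int() binning assumes non-negative coordinates)
    cx, cy = int(ra / cell), int(dec / cell)
    best, best_d2 = None, radius * radius
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for sid, r, d in idx[(cx + dx, cy + dy)]:
                d2 = (r - ra) ** 2 + (d - dec) ** 2  # small-field approximation
                if d2 <= best_d2:
                    best, best_d2 = sid, d2
    return best

catalog = [(1, 10.0, 20.0), (2, 10.5, 20.0)]
idx = build_index(catalog)
match = associate(10.001, 20.001, idx)  # source 1
no_match = associate(11.0, 20.0, idx)   # None: nothing within radius
```

A database RANGE-JOIN achieves the same pruning declaratively, by joining on bounded coordinate ranges rather than scanning all catalog rows per detection.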

  16. ADANS database specification

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-01-16

    The purpose of the Air Mobility Command (AMC) Deployment Analysis System (ADANS) Database Specification (DS) is to describe the database organization and storage allocation and to provide the detailed data model of the physical design and information necessary for the construction of the parts of the database (e.g., tables, indexes, rules, defaults). The DS includes entity relationship diagrams, table and field definitions, reports on other database objects, and a description of the ADANS data dictionary. ADANS is the automated system used by Headquarters AMC and the Tanker Airlift Control Center (TACC) for airlift planning and scheduling of peacetime and contingency operations as well as for deliberate planning. ADANS also supports planning and scheduling of Air Refueling Events by the TACC and the unit-level tanker schedulers. ADANS receives input in the form of movement requirements and air refueling requests. It provides a suite of tools for planners to manipulate these requirements/requests against mobility assets and to develop, analyze, and distribute schedules. Analysis tools are provided for assessing the products of the scheduling subsystems, and editing capabilities support the refinement of schedules. A reporting capability provides formatted screen, print, and/or file outputs of various standard reports. An interface subsystem handles message traffic to and from external systems. The database is an integral part of the functionality summarized above.

  17. New generalized functions and multiplication of distributions

    International Nuclear Information System (INIS)

    Colombeau, J.F.

    1984-01-01

    Since its conception, Quantum Field Theory is based on 'heuristic' computations (in particular products of distributions) that, despite lots of effort, remained meaningless from a mathematical viewpoint. In this book the author presents a new mathematical theory giving a rigorous mathematical sense to these heuristic computations and, from a mathematical viewpoint, to all products of distributions. This new mathematical theory is a new theory of Generalized Functions defined on any open subset Ω of Rsup(n), which are much more general than the distributions on Ω. (Auth.)

  18. A database of worldwide glacier thickness observations

    DEFF Research Database (Denmark)

    Gärtner-Roer, I.; Naegeli, K.; Huss, M.

    2014-01-01

    One of the grand challenges in glacier research is to assess the total ice volume and its global distribution. Over the past few decades the compilation of a world glacier inventory has been well-advanced both in institutional set-up and in spatial coverage. The inventory is restricted to glacier surface observations. However, although thickness has been observed on many glaciers and ice caps around the globe, it has not yet been published in the shape of a readily available database. Here, we present a standardized database of glacier thickness observations compiled by an extensive literature review and from airborne data extracted from NASA's Operation IceBridge. This database contains ice thickness observations from roughly 1100 glaciers and ice caps including 550 glacier-wide estimates and 750,000 point observations. A comparison of these observational ice thicknesses with results from the different estimation approaches... This initial database of glacier and ice cap thickness will hopefully be further enlarged and intensively used for a better understanding of the global glacier ice volume and its distribution.

  19. Advanced technologies for scalable ATLAS conditions database access on the grid

    CERN Document Server

    Basset, R; Dimitrov, G; Girone, M; Hawkings, R; Nevski, P; Valassi, A; Vaniachine, A; Viegas, F; Walker, R; Wong, A

    2010-01-01

    During massive data reprocessing operations an ATLAS Conditions Database application must support concurrent access from numerous ATLAS data processing jobs running on the Grid. By simulating realistic workflows, ATLAS database scalability tests provided feedback for Conditions DB software optimization and allowed precise determination of the required distributed database resources. In distributed data processing one must take into account the chaotic nature of Grid computing, characterized by peak loads that can be much higher than average access rates. To validate database performance at peak loads, we tested database scalability at very high concurrent job rates. This has been achieved through coordinated database stress tests performed in a series of ATLAS reprocessing exercises at the Tier-1 sites. The goal of the database stress tests is to detect scalability limits of the hardware deployed at the Tier-1 sites, so that server overload conditions can be safely avoided in a production environment. Our analysis...

  20. Optimizing queries in distributed systems

    Directory of Open Access Journals (Sweden)

    Ion LUNGU

    2006-01-01

    Full Text Available This research presents the main elements of query optimization in distributed systems. First, the data architecture, in accordance with the system-level architecture of a distributed environment, is presented. Then the architecture of a distributed database management system (DDBMS) is described at the conceptual level, followed by a presentation of the distributed query execution steps in these information systems. The research ends with a presentation of some aspects of distributed database query optimization and the strategies used for it.

  1. LOT Project long term test of buffer material at the Aespoe HRL

    International Nuclear Information System (INIS)

    Karnland, O.; Olsson, S.; Dueck, A.; Birgersson, M.; Nilsson, U.; Hernan-Haakansson, T.; Pedersen, K.; Eriksson, S.; Eriksen, T.; Eriksson, S.; Rosborg, B.; Muurinen, A.; Rousset, D.; Mosser-Ruck, R.; Cathelineau, M.; Villieras, F.; Pelletier, M.; Kaufold, S.; Dohrmann, R.; Fernandez, R.; Maeder, U.; Koroleva, M.

    2010-01-01

    Document available in extended abstract form only. Bentonite clay has been proposed as a buffer material in several concepts for HLW repositories. The decaying spent fuel in the HLW canisters will increase the temperature of the bentonite buffer. A number of laboratory test series, made by different research groups, have resulted in various bentonite alteration models. According to these models no significant alteration of the buffer is expected to take place at the prevailing physico-chemical conditions in the proposed Swedish KBS-3 repository, neither during, nor after water saturation. The ongoing LOT test series is focused on quantifying the mineralogical alteration of the buffer in a repository-like environment at the Aespoe HRL. Further, buffer-related processes concerning bacterial survival/activity, cation transport, and copper corrosion are studied. In total, the LOT test series includes seven test parcels, of which three are exposed to standard KBS-3 conditions and four test parcels are exposed to adverse conditions. Each test parcel contains a central Cu-tube surrounded by bentonite cylinder rings with a diameter of 30 cm, additional test material (Cu coupons, 60Co tracers, bacteria, etc.) and instruments. Electrical heaters were placed within the copper tube to simulate the effect of decay power from the spent fuel. The entire test parcels were released from the rock after the field exposure by overlapping boring, and the bentonite material was analyzed with respect to: - physical properties (water content, density, swelling pressure, hydraulic conductivity, rheology); - mineralogical alteration in the bentonite; - distribution of added substances (e.g. diffusional transport of 60Co); - copper corrosion; - bacterial survival/activity. Two one-year tests were started in 1996 and terminated in 1998. The results from the tests and analyses are presented in SKB TR-00-22. The remaining four test parcels were installed during the fall of 1999, plus one additional one

  2. Event data collection and database development during plant shutdown and low power operations at domestic and foreign reactors

    International Nuclear Information System (INIS)

    Kim, T. Y.; Park, J. H.; Han, S. J.; Im, H. K.; Jang, S. C.

    2003-01-01

    To reduce conservatism and to achieve completeness for Low Power and ShutDown (LPSD) PSA of nuclear plants, a total of 625 event records were collected for events during shutdown and low power operations over about 30 years at nuclear power plants in the USA and European countries, including 2 domestic events. To utilize these event data efficiently, a database program called LEDB (Low power and shutdown Event Database) was developed and all the collected event data were entered into it. By reviewing and analyzing these event data in various ways, many useful insights and ideas can be obtained for preventing similar events from recurring in domestic nuclear power plants.

  3. When Location-Based Services Meet Databases

    Directory of Open Access Journals (Sweden)

    Dik Lun Lee

    2005-01-01

    Full Text Available As location-based services (LBSs) grow to support a larger and larger user community and to provide more and more intelligent services, they must face a few fundamental challenges, including the ability not only to accept coordinates as location data but also to manipulate high-level semantics of the physical environment. They must also handle a large number of location updates and client requests and be able to scale up as their coverage increases. This paper describes some of our research in location modeling and updates, and techniques for enhancing system performance by caching and batch processing. It can be observed that the challenges facing LBSs share a lot of similarity with traditional database research (i.e., data modeling, indexing, caching, and query optimization), but the fact that LBSs are built into the physical space and the opportunity to exploit spatial locality in system design shed new light on LBS research.

  4. Distributed multimedia database technologies supported by MPEG-7 and MPEG-21

    CERN Document Server

    Kosch, Harald

    2003-01-01

    Contents: Introduction; Multimedia Content: Context; Multimedia Systems and Databases; (Multi)Media Data and Multimedia Metadata; Purpose and Organization of the Book; MPEG-7: The Multimedia Content Description Standard; Introduction; MPEG-7 and Multimedia Database Systems; Principles for Creating MPEG-7 Documents; MPEG-7 Description Definition Language; Step-by-Step Approach for Creating an MPEG-7 Document; Extending the Description Schema of MPEG-7; Encoding and Decoding of MPEG-7 Documents for Delivery (Binary Format for MPEG-7); Audio Part of MPEG-7; MPEG-7 Supporting Tools and Referen...

  5. A Simulation Tool for Distributed Databases.

    Science.gov (United States)

    1981-09-01

    Reed's multiversion system [RE1T8] may also be viewed as updating only copies until the commit is made. The decision to make the changes...distributed voting, and Ellis' ring algorithm. Other, significantly different algorithms not covered in his work include Reed's multiversion algorithm, the

  6. Providing Availability, Performance, and Scalability By Using Cloud Database

    OpenAIRE

    Prof. Dr. Alaa Hussein Al-Hamami; RafalAdeeb Al-Khashab

    2014-01-01

    With the development of the internet, new techniques and concepts have drawn the attention of all internet users, especially in the development of information technology; one such concept is the cloud. Cloud computing includes different components, of which the cloud database has become an important one. A cloud database is a distributed database that delivers computing as a service, or in the form of a virtual machine image instead of a product, via the internet; its advantage is that the database can...

  7. Parking Lot Runoff Quality and Treatment Efficiency of a Stormwater-Filtration Device, Madison, Wisconsin, 2005-07

    Science.gov (United States)

    Horwatich, Judy A.; Bannerman, Roger T.

    2010-01-01

    To evaluate the treatment efficiency of a stormwater-filtration device (SFD) for potential use at Wisconsin Department of Transportation (WisDOT) park-and-ride facilities, a SFD was installed at an employee parking lot in downtown Madison, Wisconsin. This type of parking lot was chosen for the test site because the constituent concentrations and particle-size distributions (PSDs) were expected to be similar to those of a typical park-and-ride lot operated by WisDOT. The objective of this particular installation was to reduce loads of total suspended solids (TSS) in stormwater runoff to Lake Monona. This study also was designed to provide a range of treatment efficiencies expected for a SFD. Samples from the inlet and outlet were analyzed for 33 organic and inorganic constituents, including 18 polycyclic aromatic hydrocarbons (PAHs). Samples were also analyzed for physical properties, including PSD. Water-quality samples were collected for 51 runoff events from November 2005 to August 2007. Samples from all runoff events were analyzed for concentrations of suspended sediment (SS). Samples from 31 runoff events were analyzed for 15 constituents, samples from 15 runoff events were analyzed for PAHs, and samples from 36 events were analyzed for PSD. The treatment efficiency of the SFD was calculated using the summation of loads (SOL) and the efficiency ratio methods. Constituents for which the concentrations and (or) loads were decreased by the SFD include TSS, SS, volatile suspended solids, total phosphorus (TP), total copper, total zinc, and PAHs. The efficiency ratios for these constituents are 45, 37, 38, 55, 22, 5, and 46 percent, respectively. The SOLs for these constituents are 32, 37, 28, 36, 23, 8, and 48 percent, respectively. The SOL for chloride was -21 percent and the efficiency ratio was -18 percent. Six chemical constituents or properties: dissolved phosphorus, chemical oxygen demand, dissolved zinc, total dissolved solids, dissolved chemical oxygen demand, and
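The two treatment-efficiency measures used in such studies aggregate events differently: the efficiency ratio compares average event concentrations, while the summation of loads compares total constituent mass in versus out. A short illustrative Python sketch (the sample values are invented, not the study's data):

```python
def efficiency_ratio(inlet_conc, outlet_conc):
    # percent reduction in the mean event concentration
    mean_in = sum(inlet_conc) / len(inlet_conc)
    mean_out = sum(outlet_conc) / len(outlet_conc)
    return 100.0 * (mean_in - mean_out) / mean_in

def summation_of_loads(inlet_loads, outlet_loads):
    # percent reduction in total constituent mass over all monitored events
    return 100.0 * (sum(inlet_loads) - sum(outlet_loads)) / sum(inlet_loads)

er = efficiency_ratio([100.0, 200.0], [50.0, 100.0])  # 50.0 percent
sol = summation_of_loads([10.0, 30.0], [8.0, 24.0])   # 20.0 percent
```

Because the SOL weights events by their mass, a few large storms can dominate it, which is why the two methods can give different numbers for the same constituent (as in the TSS figures above).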

  8. Lot quality assurance sampling (LQAS) for monitoring a leprosy elimination program.

    Science.gov (United States)

    Gupte, M D; Narasimhamurthy, B

    1999-06-01

    In a statistical sense, prevalences of leprosy in different geographical areas can be called very low or rare. Conventional survey methods to monitor leprosy control programs, therefore, need large sample sizes, are expensive, and are time-consuming. Further, with the lowering of prevalence to the near-desired target level, 1 case per 10,000 population at national or subnational levels, the program administrator's concern will be shifted to smaller areas, e.g., districts, for assessment and, if needed, for necessary interventions. In this paper, Lot Quality Assurance Sampling (LQAS), a quality control tool in industry, is proposed to identify districts/regions having a prevalence of leprosy at or above a certain target level, e.g., 1 in 10,000. This technique can also be considered for identifying districts/regions at or below the target level of 1 per 10,000, i.e., areas where the elimination level is attained. For simulating various situations and strategies, a hypothetical computerized population of 10 million persons was created. This population mimics the actual population in terms of the empirical information on rural/urban distributions and the distribution of households by size for the state of Tamil Nadu, India. Various levels with respect to leprosy prevalence are created using this population. The distribution of the number of cases in the population was expected to follow the Poisson process, and this was also confirmed by examination. Sample sizes and corresponding critical values were computed using Poisson approximation. Initially, villages/towns are selected from the population and from each selected village/town households are selected using systematic sampling. Households instead of individuals are used as sampling units. This sampling procedure was simulated 1000 times in the computer from the base population. The results in four different prevalence situations meet the required limits of Type I error of 5% and 90% Power. It is concluded that
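The LQAS design problem described above can be sketched numerically: pick a sample size n and decision threshold d so that an area is classified as "at or above the target prevalence" with bounded error rates. The Python below is an illustrative grid search using the Poisson approximation the abstract mentions; the prevalence pair, grid step, and classification rule are assumptions for the example, not the authors' household-level simulation.

```python
import math

def poisson_cdf(d, lam):
    # P(X <= d) for X ~ Poisson(lam)
    return sum(math.exp(-lam) * lam ** k / math.factorial(k)
               for k in range(d + 1))

def lqas_design(p_low, p_high, alpha=0.05, beta=0.10, n_max=400000, step=5000):
    """Smallest n (on a coarse grid) and threshold d such that:
       P(X > d | n*p_low)  <= alpha  (Type I error)
       P(X <= d | n*p_high) <= beta  (Type II error, i.e. power >= 1-beta)."""
    for n in range(step, n_max + 1, step):
        for d in range(60):
            if (1 - poisson_cdf(d, n * p_low) <= alpha
                    and poisson_cdf(d, n * p_high) <= beta):
                return n, d
    return None

# e.g., separate prevalence 1 per 10,000 from 2 per 10,000 (illustrative)
n, d = lqas_design(1e-4, 2e-4)
```

The search makes the abstract's point concrete: distinguishing such rare prevalences with 5% Type I error and 90% power requires sampling on the order of a hundred thousand persons, which is why household-based cluster sampling is used in practice.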

  9. Developments in diffraction databases

    International Nuclear Information System (INIS)

    Jenkins, R.

    1999-01-01

    Full text: There are a number of databases available to the diffraction community. Two of the more important of these are the Powder Diffraction File (PDF) maintained by the International Centre for Diffraction Data (ICDD), and the Inorganic Crystal Structure Database (ICSD) maintained by Fachinformationszentrum (FIZ, Karlsruhe). In application, the PDF has been an indispensable tool in phase identification and the identification of unknowns. The ICSD database has extensive and explicit reference to the structures of compounds: atomic coordinates, space group, and even thermal vibration parameters. A similar database, but for organic compounds, is maintained by the Cambridge Crystallographic Data Centre. These databases are often used as independent sources of information. However, little thought has been given to how to exploit the combined properties of structural database tools. A recently completed agreement between ICDD and FIZ, plus ICDD and Cambridge, provides a first step in the complementary use of the PDF and ICSD databases. The focus of this paper (as indicated below) is to examine ways of exploiting the combined properties of both databases. In 1996, there were approximately 76,000 entries in the PDF and approximately 43,000 entries in the ICSD database. The ICSD database has now been used to calculate entries in the PDF. Thus, deriving d-spacing and peak intensity data requires the synthesis of full diffraction patterns, i.e., we use the structural data in the ICSD database and then add instrumental resolution information. The combined data from the PDF and ICSD can be effectively used in many ways. For example, we can calculate PDF data for an ideally random crystal distribution and also in the absence of preferred orientation. Again, we can use systematic studies of intermediate members in solid solution series to help produce reliable quantitative phase analyses. In some cases, we can study how solid solution properties vary with composition and

  10. Water in the Balance: A Parking Lot Story

    Science.gov (United States)

    Haas, N. A.; Vitousek, S.

    2017-12-01

    The greater Chicagoland region has seen a high degree of urbanization since 1970. For example, between 1970 and 1990 the region experienced 4% population growth, a 35% increase in urban land use, and approximately 454 square miles of agricultural land was mostly converted into urban uses. Transformation of land into urban uses in the Chicagoland region has altered the stream and catchment response to rainfall events, specifically an increase in stream flashiness and an increase in urban flooding. Chicago has begun to address these changes through green infrastructure. To understand the impact of green infrastructure at local, city-wide, and watershed scales, individual projects need to be accurately and sufficiently modeled. A traditional parking lot converted into a porous parking lot at the University of Illinois at Chicago was modeled using SWMM and scrutinized using field data to examine stormwater runoff and the water balance before and after reconstruction. SWMM modeling suggested an 87% reduction in peak flow as well as a 100% reduction in flooding for a 24-hour, 1.72-inch storm. For the same storm, field data suggest an 89% reduction in peak flow as well as a 100% reduction in flooding. Modeling suggested 100% reductions in flooding for longer duration storms (24 hours or more) and a smaller reduction in peak flow (~66%). The highly parameterized SWMM model agrees well with the collected data and analysis. Further effort is being made to use data mining to create correlations within the collected datasets that can be integrated into a model that follows a standardized formation process and reduces parameterization.

  11. 7 CFR 56.37 - Lot marking of officially identified shell eggs.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 3 2010-01-01 2010-01-01 false Lot marking of officially identified shell eggs. 56.37... AND STANDARDS UNDER THE AGRICULTURAL MARKETING ACT OF 1946 AND THE EGG PRODUCTS INSPECTION ACT (CONTINUED) VOLUNTARY GRADING OF SHELL EGGS Grading of Shell Eggs Identifying and Marking Products § 56.37...

  12. Geographical Distribution of Biomass Carbon in Tropical Southeast Asian Forests: A Database; TOPICAL

    International Nuclear Information System (INIS)

    Brown, S

    2001-01-01

    A database was generated of estimates of geographically referenced carbon densities of forest vegetation in tropical Southeast Asia for 1980. A geographic information system (GIS) was used to incorporate spatial databases of climatic, edaphic, and geomorphological indices and vegetation to estimate potential (i.e., in the absence of human intervention and natural disturbance) carbon densities of forests. The resulting map was then modified to estimate actual 1980 carbon density as a function of population density and climatic zone. The database covers the following 13 countries: Bangladesh, Brunei, Cambodia (Campuchea), India, Indonesia, Laos, Malaysia, Myanmar (Burma), Nepal, the Philippines, Sri Lanka, Thailand, and Vietnam. The data sets within this database are provided in three file formats: ARC/INFO(trademark) exported integer grids, ASCII (American Standard Code for Information Interchange) files formatted for raster-based GIS software packages, and generic ASCII files with x, y coordinates for use with non-GIS software packages

  13. i-Genome: A database to summarize oligonucleotide data in genomes

    Directory of Open Access Journals (Sweden)

    Chang Yu-Chung

    2004-10-01

    Full Text Available Abstract Background: Information on the occurrence of sequence features in genomes is crucial to comparative genomics, evolutionary analysis, the analyses of regulatory sequences and the quantitative evaluation of sequences. Computing the frequencies and the occurrences of a pattern in complete genomes is time-consuming. Results: The proposed database provides information about sequence features generated by exhaustively computing the sequences of the complete genome. The repetitive elements in the eukaryotic genomes, such as LINEs, SINEs, Alu and LTR, are obtained from Repbase. The database supports various complete genomes including human, yeast, worm, and 128 microbial genomes. Conclusions: This investigation presents and implements an efficient computational approach to accumulate the occurrences of the oligonucleotides or patterns in complete genomes. A database is established to maintain the information of the sequence features, including the distributions of oligonucleotides, the gene distribution, the distribution of repetitive elements in genomes and the occurrences of the oligonucleotides. The database provides a more effective and efficient way to access the repetitive features in genomes.
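The core operation such a database precomputes, counting every occurrence of each oligonucleotide in a genome, can be sketched in a few lines (illustrative Python over a toy sequence, not the i-Genome implementation):

```python
from collections import Counter

def kmer_occurrences(sequence, k):
    # exhaustively count every overlapping k-mer (oligonucleotide of length k)
    return Counter(sequence[i:i + k] for i in range(len(sequence) - k + 1))

counts = kmer_occurrences("ACGTACGTAC", 4)
# 7 overlapping windows in a 10-base sequence; "ACGT" occurs twice
```

Running this once per k over a complete genome and storing the resulting tables is exactly the precomputation that makes later frequency lookups cheap.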

  14. A New Reversible Database Watermarking Approach with Firefly Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Mustafa Bilgehan Imamoglu

    2017-01-01

    Full Text Available Up-to-date information is crucial in many fields such as medicine, science, and the stock market, where data should be distributed to clients from a centralized database. Shared databases are usually stored in data centers where they are distributed over an insecure public-access network, the Internet. Sharing may result in a number of problems such as unauthorized copies, alteration of data, and distribution to unauthorized people for reuse. Researchers have proposed using watermarking to prevent such problems and to claim digital rights. Many methods have been proposed recently to watermark databases to protect the digital rights of owners. In particular, optimization-based watermarking techniques draw attention, as they result in lower distortion and improved watermark capacity. Difference expansion watermarking (DEW) with the Firefly Algorithm (FFA), a bioinspired optimization technique, is proposed to embed a watermark into relational databases in this work. The best attribute values to yield lower distortion and increased watermark capacity are selected efficiently by the FFA. Experimental results indicate that the FFA has reduced complexity and results in less distortion and improved watermark capacity compared to similar works reported in the literature.
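The difference-expansion step at the heart of DEW can be illustrated on a single pair of integer attribute values: the pair's difference is doubled, the watermark bit is hidden in its new least significant bit, and the transform is exactly invertible, so the original values are recoverable on extraction. A minimal Tian-style sketch in Python (the FFA attribute-selection step and overflow checks are omitted):

```python
def embed_pair(a, b, bit):
    # difference expansion on one attribute pair (integers)
    l, h = (a + b) // 2, a - b
    h2 = 2 * h + bit            # expand the difference, hide bit in the LSB
    return l + (h2 + 1) // 2, l - h2 // 2

def extract_pair(a2, b2):
    # recover the bit and restore the original pair
    l, h2 = (a2 + b2) // 2, a2 - b2
    bit, h = h2 & 1, h2 >> 1
    return l + (h + 1) // 2, l - h // 2, bit

marked = embed_pair(10, 7, 1)   # (12, 5)
restored = extract_pair(*marked)  # (10, 7, 1): original pair plus the bit
```

The reversibility is what makes the scheme usable on databases where the data must be restorable bit-for-bit after the watermark is read.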

  15. Processing SPARQL queries with regular expressions in RDF databases

    Science.gov (United States)

    2011-01-01

    Background: As the Resource Description Framework (RDF) data model is widely used for modeling and sharing a lot of online bioinformatics resources such as Uniprot (dev.isb-sib.ch/projects/uniprot-rdf) or Bio2RDF (bio2rdf.org), SPARQL - a W3C recommendation query language for RDF databases - has become an important query language for querying the bioinformatics knowledge bases. Moreover, due to the diversity of users’ requests for extracting information from the RDF data as well as the lack of users’ knowledge about the exact value of each fact in the RDF databases, it is desirable to use the SPARQL query with regular expression patterns for querying the RDF data. To the best of our knowledge, there is currently no work that efficiently supports regular expression processing in SPARQL over RDF databases. Most of the existing techniques for processing regular expressions are designed for querying a text corpus, or only for supporting the matching over the paths in an RDF graph. Results: In this paper, we propose a novel framework for supporting regular expression processing in SPARQL queries. Our contributions can be summarized as follows. 1) We propose an efficient framework for processing SPARQL queries with regular expression patterns in RDF databases. 2) We propose a cost model in order to adapt the proposed framework in the existing query optimizers. 3) We build a prototype for the proposed framework in C++ and conduct extensive experiments demonstrating the efficiency and effectiveness of our technique. Conclusions: Experiments with a full-blown RDF engine show that our framework outperforms the existing ones by up to two orders of magnitude in processing SPARQL queries with regular expression patterns. PMID:21489225
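The semantics of a SPARQL FILTER regex can be mimicked over an in-memory triple list; the plain-Python sketch below uses invented triples and identifiers, whereas a real engine would, as the paper proposes, push the regex into the query plan rather than scan every triple:

```python
import re

triples = [
    ("uniprot:P69905", "rdfs:label", "Hemoglobin subunit alpha"),
    ("uniprot:P68871", "rdfs:label", "Hemoglobin subunit beta"),
    ("uniprot:P02144", "rdfs:label", "Myoglobin"),
]

def filter_regex(triples, predicate, pattern, flags="i"):
    # analogous to:
    #   SELECT ?s WHERE { ?s rdfs:label ?o . FILTER regex(?o, pattern, "i") }
    rx = re.compile(pattern, re.IGNORECASE if "i" in flags else 0)
    return [s for s, p, o in triples if p == predicate and rx.search(o)]

subjects = filter_regex(triples, "rdfs:label", "^hemoglobin")
# ["uniprot:P69905", "uniprot:P68871"]
```

This naive per-triple scan is exactly the cost the paper's framework avoids, by estimating regex selectivity in the cost model and indexing the literal values.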

  16. Processing SPARQL queries with regular expressions in RDF databases.

    Science.gov (United States)

    Lee, Jinsoo; Pham, Minh-Duc; Lee, Jihwan; Han, Wook-Shin; Cho, Hune; Yu, Hwanjo; Lee, Jeong-Hoon

    2011-03-29

    As the Resource Description Framework (RDF) data model is widely used for modeling and sharing a lot of online bioinformatics resources such as Uniprot (dev.isb-sib.ch/projects/uniprot-rdf) or Bio2RDF (bio2rdf.org), SPARQL - a W3C recommendation query language for RDF databases - has become an important query language for querying the bioinformatics knowledge bases. Moreover, due to the diversity of users' requests for extracting information from the RDF data as well as the lack of users' knowledge about the exact value of each fact in the RDF databases, it is desirable to use the SPARQL query with regular expression patterns for querying the RDF data. To the best of our knowledge, there is currently no work that efficiently supports regular expression processing in SPARQL over RDF databases. Most of the existing techniques for processing regular expressions are designed for querying a text corpus, or only for supporting the matching over the paths in an RDF graph. In this paper, we propose a novel framework for supporting regular expression processing in SPARQL queries. Our contributions can be summarized as follows. 1) We propose an efficient framework for processing SPARQL queries with regular expression patterns in RDF databases. 2) We propose a cost model in order to adapt the proposed framework in the existing query optimizers. 3) We build a prototype for the proposed framework in C++ and conduct extensive experiments demonstrating the efficiency and effectiveness of our technique. Experiments with a full-blown RDF engine show that our framework outperforms the existing ones by up to two orders of magnitude in processing SPARQL queries with regular expression patterns.

  17. Statistical analysis of the ASME KIc database

    International Nuclear Information System (INIS)

    Sokolov, M.A.

    1998-01-01

    The American Society of Mechanical Engineers (ASME) K Ic curve is a function of test temperature (T) normalized to a reference nil-ductility temperature, RT NDT , namely, T-RT NDT . It was constructed as the lower boundary to the available K Ic database. Being a lower bound to a unique but limited database, the ASME K Ic curve concept does not address probabilistic considerations. However, the continuing evolution of fracture mechanics has led to the use of the Weibull distribution function to model the scatter of fracture toughness values in the transition range. The Weibull statistic/master curve approach was applied to analyze the current ASME K Ic database. It is shown that the Weibull distribution function models the scatter in K Ic data from different materials very well, while the temperature dependence is described by the master curve. Probabilistic-based tolerance-bound curves are suggested to describe lower-bound K Ic values.
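A minimal sketch of the Weibull/master-curve model described above, using the commonly cited ASTM E1921-style forms: a fixed threshold of 20 MPa*sqrt(m), Weibull shape 4, and the median curve 30 + 70*exp[0.019(T - T0)]. Treat these constants as assumptions for illustration, not as values taken from this paper:

```python
import math

K_MIN = 20.0  # MPa*sqrt(m), fixed lower threshold of the Weibull model (assumed)

def median_toughness(t, t0):
    """Master-curve median K_Jc (MPa*sqrt(m)) at test temperature t (deg C)."""
    return 30.0 + 70.0 * math.exp(0.019 * (t - t0))

def failure_probability(k, t, t0):
    """Three-parameter Weibull CDF with shape 4, anchored to the median curve."""
    k_med = median_toughness(t, t0)
    k0 = K_MIN + (k_med - K_MIN) / math.log(2) ** 0.25  # scale from the median
    return 1.0 - math.exp(-(((k - K_MIN) / (k0 - K_MIN)) ** 4))

# At t == t0 the median is 30 + 70 = 100 MPa*sqrt(m), so the CDF there is 0.5:
print(round(failure_probability(100.0, 25.0, 25.0), 3))  # -> 0.5
```

Tolerance-bound curves of the kind the abstract mentions follow by solving the same CDF for K at a fixed probability (e.g. 0.05) across temperatures.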

  18. AD620SQ/883B Total Ionizing Dose Radiation Lot Acceptance Report for RESTORE-LEO

    Science.gov (United States)

    Burton, Noah; Campola, Michael

    2017-01-01

    A Radiation Lot Acceptance Test was performed on the AD620SQ/883B, Lot 1708D, in accordance with MIL-STD-883, Method 1019, Condition D. Using a Co-60 source, 4 biased parts and 4 unbiased parts were irradiated at 10 mrad/s (0.036 krad/hr), with readings taken at intervals of approximately 1 krad from 3 to 10 krad and intervals of 5 krad from 10 to 25 krad. The parts were then annealed unbiased at 25 degrees Celsius for 2 days and subsequently annealed biased at 25 degrees Celsius for another 7 days.
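As a quick arithmetic check of the schedule above (the step breakdown is an assumption consistent with the stated intervals, not taken from the report), the total dose and beam time at the stated dose rate work out as follows:

```python
# 10 mrad/s expressed in krad/hr: 0.010 rad/s * 3600 s/hr / 1000 rad/krad = 0.036
DOSE_RATE = 0.010 * 3600 / 1000  # krad/hr

# Assumed step breakdown consistent with the stated schedule: first reading at
# 3 krad, then 1-krad steps up to 10 krad, then 5-krad steps up to 25 krad.
doses = [3.0] + [1.0] * 7 + [5.0] * 3  # krad accumulated per step
total = sum(doses)                     # total ionizing dose in krad
hours = total / DOSE_RATE              # total irradiation time at this rate
print(total, round(hours, 1))          # -> 25.0 694.4
```

At such a low dose rate (Condition D), reaching 25 krad takes roughly 29 days of irradiation, which is why the anneal steps dominate only the tail of the test.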

  19. Antibiotic distribution channels in Thailand: results of key-informant interviews, reviews of drug regulations and database searches.

    Science.gov (United States)

    Sommanustweechai, Angkana; Chanvatik, Sunicha; Sermsinsiri, Varavoot; Sivilaikul, Somsajee; Patcharanarumol, Walaiporn; Yeung, Shunmay; Tangcharoensathien, Viroj

    2018-02-01

    To analyse how antibiotics are imported, manufactured, distributed and regulated in Thailand. We gathered information on antibiotic distribution in Thailand through in-depth interviews with 43 key informants from farms, health facilities, the pharmaceutical and animal feed industries, private pharmacies and regulators, and through database and literature searches. In 2016-2017, licensed antibiotic distribution in Thailand involved over 700 importers and about 24 000 distributors, e.g. retail pharmacies and wholesalers. Thailand imports antibiotics and active pharmaceutical ingredients. There is no system for monitoring the distribution of active ingredients, some of which are used directly on farms, without being processed. Most antibiotics can be bought from pharmacies, for home or farm use, without a prescription. Although the 1987 Drug Act classified most antibiotics as "dangerous drugs", it classified only a few of them as prescription-only medicines and placed no restrictions on the quantities of antibiotics that could be sold to any individual. Pharmacists working in pharmacies are covered by some of the Act's regulations, but the quality of their dispensing and prescribing appears to be largely reliant on their competence. In Thailand, most antibiotics are easily and widely available from retail pharmacies, without a prescription. If the inappropriate use of active pharmaceutical ingredients and antibiotics is to be reduced, we need to reclassify and restrict access to certain antibiotics and to develop systems to audit the dispensing of antibiotics in the retail sector and track the movements of active ingredients.

  20. A virtual observatory for photoionized nebulae: the Mexican Million Models database (3MdB).

    Science.gov (United States)

    Morisset, C.; Delgado-Inglada, G.; Flores-Fajardo, N.

    2015-04-01

    Photoionization models obtained with numerical codes are widely used to study the physics of the interstellar medium (planetary nebulae, HII regions, etc.). Grids of models are computed to understand the effects of the different parameters used to describe the regions on the observables (mainly emission line intensities). Most of the time, only a small part of the computed results of such grids is published, and the results are sometimes hard to obtain in a user-friendly format. We present here the Mexican Million Models dataBase (3MdB), an effort to resolve both of these issues in the form of a database of photoionization models, easily accessible through the MySQL protocol, and containing many useful outputs from the models, such as the intensities of 178 emission lines, the ionic fractions of all the ions, etc. Some examples of the use of the 3MdB are also presented.

  1. Database Description - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us Trypanosomes Database Database Description General information of database Database name Trypanosomes Database...stitute of Genetics Research Organization of Information and Systems Yata 1111, Mishima, Shizuoka 411-8540, JAPAN E mail: Database...y Name: Trypanosoma Taxonomy ID: 5690 Taxonomy Name: Homo sapiens Taxonomy ID: 9606 Database description The... Article title: Author name(s): Journal: External Links: Original website information Database maintenance s...DB (Protein Data Bank) KEGG PATHWAY Database DrugPort Entry list Available Query search Available Web servic

  2. Linking the Taiwan Fish Database to the Global Database

    Directory of Open Access Journals (Sweden)

    Kwang-Tsao Shao

    2007-03-01

    Full Text Available Under the support of the National Digital Archive Program (NDAP), basic species information about most Taiwanese fishes, including their morphology, ecology, distribution, specimens with photos, and literature, has been compiled into the "Fish Database of Taiwan" (http://fishdb.sinica.edu.tw). We expect that the databank of all Taiwanese fish species (RSD), with more than 2,800 species, and the digital "Fish Fauna of Taiwan" will be completed in 2007. Underwater ecological photos and video images for all 2,800+ fishes are quite difficult to acquire but will be collected continuously in the future. In the last year of NDAP, we successfully integrated all fish specimen data deposited at 7 different institutes in Taiwan, as well as their collection maps, on Google Map and Google Earth. Further, the database also provides the pronunciation of Latin scientific names and the transliteration of Chinese common names by referring to the Romanization system for all Taiwanese fishes (2,902 species in 292 families so far). The Taiwanese fish species checklist with Chinese common/vernacular names and specimen data has been updated periodically and provided to the global FishBase as well as the Global Biodiversity Information Facility (GBIF) through the national portal of the Taiwan Biodiversity Information Facility (TaiBIF). Thus, Taiwanese fish data can be queried and browsed on the WWW. To contribute to the "Barcode of Life" and "All Fishes" international projects, alcohol-preserved specimens of more than 1,800 species and cryobanked tissues of 800 species have been accumulated at RCBAS in the past two years. Through this close collaboration between local and global databases, "The Fish Database of Taiwan" now attracts more than 250,000 visitors and receives 5 million hits per month. We believe that this local database is becoming an important resource for the education, research, conservation, and sustainable use of fish in Taiwan.

  3. RTDB: A memory resident real-time object database

    International Nuclear Information System (INIS)

    Nogiec, Jerzy M.; Desavouret, Eugene

    2003-01-01

    RTDB is a fast, memory-resident object database with built-in support for distribution. It constitutes an attractive alternative for architecting real-time solutions with multiple, possibly distributed, processes or agents sharing data. RTDB offers both direct and navigational access to stored objects, with local and remote random access by object identifiers, and immediate direct access via object indices. The database supports transparent access to objects stored in multiple collaborating dispersed databases and includes a built-in cache mechanism that allows for keeping local copies of remote objects, with specifiable invalidation deadlines. Additional features of RTDB include a trigger mechanism on objects that allows for issuing events or activating handlers when objects are accessed or modified and a very fast, attribute based search/query mechanism. The overall architecture and application of RTDB in a control and monitoring system is presented
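The trigger and cache-invalidation ideas described above can be sketched as follows. The class and method names are invented for illustration and are not RTDB's actual API:

```python
import time

class ObjectStore:
    """Minimal sketch of a memory-resident object store with modify triggers
    and cached copies of remote objects that expire after a deadline."""
    def __init__(self):
        self.objects = {}   # object id -> object
        self.triggers = {}  # object id -> list of handlers
        self.cache = {}     # remote object id -> (object, expiry time)

    def put(self, oid, obj):
        self.objects[oid] = obj
        for handler in self.triggers.get(oid, []):
            handler(oid, obj)  # fire triggers on modification

    def on_modify(self, oid, handler):
        self.triggers.setdefault(oid, []).append(handler)

    def cache_remote(self, oid, obj, ttl):
        """Keep a local copy of a remote object with an invalidation deadline."""
        self.cache[oid] = (obj, time.monotonic() + ttl)

    def get_cached(self, oid):
        entry = self.cache.get(oid)
        if entry and time.monotonic() < entry[1]:
            return entry[0]  # local copy still valid
        return None          # expired or absent: caller refetches from remote

events = []
db = ObjectStore()
db.on_modify("magnet.current", lambda oid, v: events.append((oid, v)))
db.put("magnet.current", 42.0)
print(events)  # -> [('magnet.current', 42.0)]
```

In a control-and-monitoring setting like the one described, the trigger list is where event issuing would hook in, and `get_cached` returning `None` is the point at which a remote fetch would be issued.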

  4. Database and applications security integrating information security and data management

    CERN Document Server

    Thuraisingham, Bhavani

    2005-01-01

    This is the first book to provide an in-depth coverage of all the developments, issues and challenges in secure databases and applications. It provides directions for data and application security, including securing emerging applications such as bioinformatics, stream information processing and peer-to-peer computing. Divided into eight sections, each of which focuses on a key concept of secure databases and applications, this book deals with all aspects of technology, including secure relational databases, inference problems, secure object databases, secure distributed databases and emerging

  5. Preliminary surficial geologic map database of the Amboy 30 x 60 minute quadrangle, California

    Science.gov (United States)

    Bedford, David R.; Miller, David M.; Phelps, Geoffrey A.

    2006-01-01

    The surficial geologic map database of the Amboy 30x60 minute quadrangle presents characteristics of surficial materials for an area of approximately 5,000 km2 in the eastern Mojave Desert of California. This map consists of new surficial mapping conducted between 2000 and 2005, as well as compilations of previous surficial mapping. Surficial geology units are mapped and described based on depositional process and age categories that reflect the mode of deposition, pedogenic effects occurring post-deposition, and, where appropriate, the lithologic nature of the material. The physical properties recorded in the database focus on those that drive hydrologic, biologic, and physical processes, such as particle size distribution (PSD) and bulk density. This version of the database is distributed with point data representing locations of samples for both laboratory-determined physical properties and semi-quantitative field-based information. Future publications will include the field and laboratory data as well as maps of distributed physical properties across the landscape, tied to physical process models where appropriate. The database is distributed in three parts: documentation, spatial map-based data, and printable map graphics of the database. Documentation includes this file, which provides a discussion of the surficial geology and describes the format and content of the map data; a database 'readme' file, which describes the database contents; and FGDC metadata for the spatial map information. Spatial data are distributed as an Arc/Info coverage in ESRI interchange (e00) format, or as tabular data in DBF3 (.DBF) format. Map graphics files are distributed as PostScript and Adobe Portable Document Format (PDF) files, and are appropriate for representing a view of the spatial database at the mapped scale.

  6. The Erasmus insurance case and a related questionnaire for distributed database management systems

    NARCIS (Netherlands)

    S.C. van der Made-Potuijt

    1990-01-01

    textabstractThis is the third report concerning transaction management in the database environment. In the first report the role of the transaction manager in protecting the integrity of a database has been studied [van der Made-Potuijt 1989]. In the second report a model has been given for a

  7. Why gerontology and geriatrics can teach us a lot about mentoring.

    Science.gov (United States)

    Clark, Phillip G

    2018-05-15

    Gerontology, geriatrics, and mentoring have a lot in common. The prototype of this role was Mentor, an older adult in Homer's The Odyssey, who was enlisted to look after Odysseus' son, Telemachus, while his father was away fighting the Trojan War. Mentor is portrayed as an older man, and his name literally means "a man who thinks," which is not a bad characterization generally for faculty members in gerontology! In particular, gerontological and geriatrics education can teach us a lot about the importance of mentoring and provide some critical insights into this role: (1) the importance of interprofessional leadership and modeling, (2) the application of the concept of "grand-generativity" to mentoring, (3) "it takes a community" to be effective in mentoring others, and (4) the need to tailor mentorship styles to the person and the situation. This discussion explores these topics and argues that gerontological and geriatrics educators have a particularly important role and responsibility in mentoring students, colleagues, and administrators related to the very future of our field.

  8. Database Description - SKIP Stemcell Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us SKIP Stemcell Database Database Description General information of database Database name SKIP Stemcell Database...rsity Journal Search: Contact address http://www.skip.med.keio.ac.jp/en/contact/ Database classification Human Genes and Diseases Dat...abase classification Stemcell Article Organism Taxonomy Name: Homo sapiens Taxonomy ID: 9606 Database...ks: Original website information Database maintenance site Center for Medical Genetics, School of medicine, ...lable Web services Not available URL of Web services - Need for user registration Not available About This Database Database

  9. Extending cluster Lot Quality Assurance Sampling designs for surveillance programs

    OpenAIRE

    Hund, Lauren; Pagano, Marcello

    2014-01-01

    Lot quality assurance sampling (LQAS) has a long history of applications in industrial quality control. LQAS is frequently used for rapid surveillance in global health settings, with areas classified as poor or acceptable performance based on the binary classification of an indicator. Historically, LQAS surveys have relied on simple random samples from the population; however, implementing two-stage cluster designs for surveillance sampling is often more cost-effective than ...
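The binary LQAS classification rule and its binomial operating characteristic can be sketched as below. The (n = 19, d = 6) plan is an illustrative choice commonly seen in health-survey examples, not a design taken from this paper:

```python
from math import comb

def classify(failures, d):
    """LQAS decision rule: the lot is acceptable if observed failures <= d."""
    return "acceptable" if failures <= d else "poor"

def prob_accept(n, d, p):
    """Probability that a lot with true failure rate p is classified acceptable:
    the binomial operating characteristic of the (n, d) sampling plan."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(d + 1))

# Illustrative plan: sample n = 19 subjects, decision value d = 6.
print(classify(4, 6))                      # -> acceptable
print(round(prob_accept(19, 6, 0.5), 3))   # -> 0.084
```

Plotting `prob_accept` against `p` gives the familiar operating-characteristic curve used to choose `n` and `d` so that both misclassification risks stay acceptably low.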

  10. Database Description - Arabidopsis Phenome Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us Arabidopsis Phenome Database Database Description General information of database Database n... BioResource Center Hiroshi Masuya Database classification Plant databases - Arabidopsis thaliana Organism T...axonomy Name: Arabidopsis thaliana Taxonomy ID: 3702 Database description The Arabidopsis thaliana phenome i...heir effective application. We developed the new Arabidopsis Phenome Database integrating two novel database...seful materials for their experimental research. The other, the “Database of Curated Plant Phenome” focusing

  11. Semantics-based publication management using RSS and FOAF

    NARCIS (Netherlands)

    Mika, Peter; Klein, Michel; Serban, Radu

    2005-01-01

    Listing references to scientific publications on personal or group homepages is a common practice. Doing this in a consistent and structured manner either requires a lot of discipline or a centralized database. Scientific publication, however, is a distributed activity by nature. We present a

  12. Three Permeable Pavements Performances for Priority Metal Pollutants and Metals Associated with Deicing Chemicals from Edison Parking Lot, NJ

    Science.gov (United States)

    The U.S. Environmental Protection Agency constructed a 4000-m2 parking lot in Edison, New Jersey in 2009. The parking lot is surfaced with three permeable pavements [permeable interlocking concrete pavers (PICP), pervious concrete (PC), and porous asphalt (PA)]. Samples of each p...

  13. Egg and a lot of science: an interdisciplinary experiment

    OpenAIRE

    Gayer, M. C. (1,2); Rodrigues, D. T. (1,2); Denardin, E. L. G. (2); Roehrs, R. (1,2). Affiliations: (1) Interdisciplinary Research Group on Teaching Practice, Graduate Program in Biochemistry, Unipampa, RS, Brazil; (2) Laboratory of Physicochemical Studies and Natural Products, Post Graduate Program in Biochemistry, Unipampa, RS, Brazil

    2014-01-01

    Egg and a lot of science: an interdisciplinary experiment. Gayer, M.C. (1,2); Rodrigues, D.T. (1,2); Escoto, D.F. (1); Denardin, E.L.G. (2); Roehrs, R. (1,2). (1) Interdisciplinary Research Group on Teaching Practice, Graduate Program in Biochemistry, Unipampa, RS, Brazil; (2) Laboratory of Physicochemical Studies and Natural Products, Post Graduate Program in Biochemistry, Unipampa, RS, Brazil. Introduction: How can you tell if an egg is rotten? How can you calculate the volume of an egg? Why does a rotten egg float? Why has this...

  14. SIRSALE: integrated video database management tools

    Science.gov (United States)

    Brunie, Lionel; Favory, Loic; Gelas, J. P.; Lefevre, Laurent; Mostefaoui, Ahmed; Nait-Abdesselam, F.

    2002-07-01

    Video databases became an active field of research during the last decade. The main objective of such systems is to provide users with capabilities to search, access and play back distributed stored video data in the same way as they do for traditional distributed databases. Hence, such systems need to deal with hard issues: (a) video documents generate huge volumes of data and are time sensitive (streams must be delivered at a specific bitrate), and (b) the contents of video data are very hard to extract automatically and need to be annotated by humans. To cope with these issues, many approaches have been proposed in the literature, including data models, query languages, video indexing, etc. In this paper, we present SIRSALE: a set of video database management tools that allow users to manipulate video documents and streams stored in large distributed repositories. All the proposed tools are based on generic models that can be customized for specific applications using ad-hoc adaptation modules. More precisely, SIRSALE allows users to: (a) browse video documents by structures (sequences, scenes, shots) and (b) query the video database content by using a graphical tool adapted to the nature of the target video documents. This paper also presents an annotation interface which allows archivists to describe the content of video documents. All these tools are coupled to a video player integrating remote VCR functionalities and are based on active network technology. We present how dedicated active services allow optimized transport of video streams (with Tamanoir active nodes). We then describe experiments of using SIRSALE on an archive of news video and soccer matches. The system has been demonstrated to professionals with positive feedback. Finally, we discuss open issues and present some perspectives.

  15. Cluster lot quality assurance sampling: effect of increasing the number of clusters on classification precision and operational feasibility.

    Science.gov (United States)

    Okayasu, Hiromasa; Brown, Alexandra E; Nzioki, Michael M; Gasasira, Alex N; Takane, Marina; Mkanda, Pascal; Wassilak, Steven G F; Sutter, Roland W

    2014-11-01

    To assess the quality of supplementary immunization activities (SIAs), the Global Polio Eradication Initiative (GPEI) has used cluster lot quality assurance sampling (C-LQAS) methods since 2009. However, since the inception of C-LQAS, questions have been raised about the optimal balance between operational feasibility and precision of classification of lots to identify areas with low SIA quality that require corrective programmatic action. To determine if an increased precision in classification would result in differential programmatic decision making, we conducted a pilot evaluation in 4 local government areas (LGAs) in Nigeria with an expanded LQAS sample size of 16 clusters (instead of the standard 6 clusters) of 10 subjects each. The results showed greater heterogeneity between clusters than the assumed standard deviation of 10%, ranging from 12% to 23%. Comparing the distribution of 4-outcome classifications obtained from all possible combinations of 6-cluster subsamples to the observed classification of the 16-cluster sample, we obtained an exact match in classification in 56% to 85% of instances. We concluded that the 6-cluster C-LQAS provides acceptable classification precision for programmatic action. Considering the greater resources required to implement an expanded C-LQAS, the improvement in precision was deemed insufficient to warrant the effort. Published by Oxford University Press on behalf of the Infectious Diseases Society of America 2014. This work is written by (a) US Government employee(s) and is in the public domain in the US.
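The subsample comparison described above, checking how often 6-cluster subsets of a 16-cluster sample reproduce the full-sample classification, can be sketched like this. The coverage values and the simple two-outcome mean rule are invented stand-ins for the study's 4-outcome classification:

```python
from itertools import combinations

def classify_lot(clusters, threshold=0.8):
    """Classify a lot from cluster coverage proportions (illustrative 2-outcome rule)."""
    mean = sum(clusters) / len(clusters)
    return "pass" if mean >= threshold else "fail"

# Hypothetical coverage proportions for 16 clusters of 10 subjects each:
coverage = [0.9, 0.8, 0.7, 1.0, 0.6, 0.9, 0.8, 0.9,
            0.7, 1.0, 0.9, 0.8, 0.6, 0.9, 1.0, 0.8]
full = classify_lot(coverage)

# How often does a 6-cluster subsample reproduce the 16-cluster classification?
subsets = list(combinations(coverage, 6))       # C(16, 6) = 8008 subsamples
matches = sum(classify_lot(s) == full for s in subsets)
print(full, len(subsets), round(matches / len(subsets), 2))
```

The match fraction computed this way is the analogue of the 56% to 85% agreement the study reports between 6-cluster subsamples and the expanded 16-cluster design.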

  16. Development of Database and Lecture Book for Nuclear Water Chemistry

    International Nuclear Information System (INIS)

    Maeng, Wan Young; Kim, U. C.; Na, J. W.; Choi, B. S.; Lee, E. H.; Kim, K. H.; Kim, K. M.; Kim, S. H.; Im, K. S.

    2010-02-01

    In order to establish a systematic and comprehensive knowledge system for nuclear water chemistry, we held meetings of a nuclear water chemistry expert group. We discussed ways of building up and disseminating nuclear water chemistry knowledge with domestic experts, and obtained a wide range of opinions that were put to good use in this research project. The results will be applied to the continuous buildup of the domestic nuclear water chemistry knowledge database. Lessons in the water chemistry of nuclear power plants (NPPs) have been offered at the Nuclear Training and Education Center, KAERI, to educate the new generation who are working, or will be working, in the water chemistry departments of NPPs. There were 17 lessons, held from 12 May through 5 November. Many water chemistry experts were invited to run the program; they lectured to the younger generation once a week for 2 h about the experience they had gained working on the water chemistry of NPPs. Attendance totaled 290. The lessons were very effective, and the lesson materials will be used to build a database for continuous use.

  17. A multi-phase algorithm for a joint lot-sizing and pricing problem with stochastic demands

    DEFF Research Database (Denmark)

    Jenny Li, Hongyan; Thorstenson, Anders

    2014-01-01

    Stochastic lot-sizing problems have been addressed quite extensively, but relatively few studies also consider marketing factors, such as pricing. In this paper, we address a joint stochastic lot-sizing and pricing problem with capacity constraints and backlogging for a firm that produces a single product, leading to a practically viable approach to decision-making. In addition to incorporating market uncertainty and pricing decisions in the traditional production and inventory planning process, our approach also accommodates the complexity of time-varying cost and capacity constraints. Finally, our numerical results show that the multi-phase heuristic algorithm solves the example problems effectively.
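For contrast with the stochastic, price-sensitive problem above, the classic deterministic single-item lot-sizing problem can be solved exactly by dynamic programming in the style of Wagner and Whitin. This is a simplified illustration of the underlying lot-sizing structure, not the paper's multi-phase heuristic:

```python
def wagner_whitin(demand, setup_cost, hold_cost):
    """Deterministic single-item lot-sizing: minimize setup + holding cost.
    best[t] = minimum cost to satisfy demand for periods 0..t-1."""
    T = len(demand)
    INF = float("inf")
    best = [0.0] + [INF] * T
    for t in range(1, T + 1):
        for j in range(t):  # produce in period j to cover periods j..t-1
            holding = sum(hold_cost * (k - j) * demand[k] for k in range(j, t))
            best[t] = min(best[t], best[j] + setup_cost + holding)
    return best[T]

# Four periods, setup cost 100 per production run, holding cost 1 per unit-period:
print(wagner_whitin([20, 50, 10, 50], setup_cost=100, hold_cost=1))  # -> 270.0
```

The stochastic version replaces known demands with distributions (and, here, prices that shift those distributions), which is what makes heuristics necessary in place of this exact recursion.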

  18. The UDEPO database of the International Atomic Energy Agency

    International Nuclear Information System (INIS)

    Bruneton, P.

    2009-01-01

    The author presents the work performed and the data collected by the IAEA on uranium deposits in the world. Several documents have been published: a map called 'World Distribution of Uranium Deposits' and a guidebook to the map with brief descriptions of 582 deposits. These deposits have been classified into 14 different types, which led to the development of a database, UDEPO (World Distribution of Uranium Deposits). As uranium exploration activities resumed, new data were published in 2003, and a web site was created. In 2009, 1176 deposits were present in the database, along with many geographical, geological and technical parameters. Maps, photos, plans and drawings may also be present in the database. However, some data are either absent, at the request of certain countries, or not yet verified.

  19. Relational database hybrid model, of high performance and storage capacity for nuclear engineering applications

    International Nuclear Information System (INIS)

    Gomes Neto, Jose

    2008-01-01

    The objective of this work is to present the relational database named FALCAO. It was created and implemented to support the storage of the monitored variables in the IEA-R1 research reactor, located at the Instituto de Pesquisas Energeticas e Nucleares, IPEN/CNEN-SP. The logical data model and its direct influence on the integrity of the provided information are carefully considered. The concepts and steps of normalization and denormalization, including the entities and relations involved in the logical model, are presented. The effects of the model's rules on the acquisition, loading and availability of the final information are also presented from a performance standpoint, since the acquisition process loads and provides large amounts of information at short intervals. The SACD application, through its functionalities, presents the information stored in the FALCAO database in a practical and optimized form. The implementation of the FALCAO database was successful, and its existence has proved considerably beneficial. It is now essential to the routine of the researchers involved, not only due to the substantial improvement of the process but also due to the reliability associated with it. (author)
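The normalization/denormalization trade-off discussed above can be sketched with SQLite: readings are stored normalized (variable metadata kept once), while a denormalized view gives readers the flat, joined form. The table and column names are invented for illustration; FALCAO's actual schema is not given in this abstract:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    -- Normalized schema: variable metadata stored once, referenced by readings.
    CREATE TABLE variable (id INTEGER PRIMARY KEY, name TEXT, unit TEXT);
    CREATE TABLE reading  (var_id INTEGER REFERENCES variable(id),
                           ts TEXT, value REAL);
    -- Denormalized read path: the join is defined once and reused by readers.
    CREATE VIEW reading_wide AS
        SELECT v.name, v.unit, r.ts, r.value
        FROM reading r JOIN variable v ON v.id = r.var_id;
""")
con.execute("INSERT INTO variable VALUES (1, 'core_temp', 'C')")
con.execute("INSERT INTO reading VALUES (1, '2008-01-01T00:00', 41.5)")
rows = con.execute("SELECT name, unit, value FROM reading_wide").fetchall()
print(rows)  # -> [('core_temp', 'C', 41.5)]
```

For very high read rates, a materialized (physically denormalized) table would replace the view at the cost of redundant storage and update complexity, which is exactly the trade-off the abstract alludes to.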

  20. A coordination language for databases

    DEFF Research Database (Denmark)

    Li, Ximeng; Wu, Xi; Lluch Lafuente, Alberto

    2017-01-01

    We present a coordination language for the modeling of distributed database applications. The language, baptized Klaim-DB, borrows the concepts of localities and nets of the coordination language Klaim but re-incarnates the tuple spaces of Klaim as databases. It provides high-level abstractions and primitives for the access and manipulation of structured data, with integrity and atomicity considerations. We present the formal semantics of Klaim-DB and develop a type system that avoids potential runtime errors such as certain evaluation errors and mismatches of data format in tables, which are monitored in the semantics. The use of the language is illustrated in a scenario where the sales from different branches of a chain of department stores are aggregated from their local databases. Raising the abstraction level and encapsulating integrity checks in the language primitives have benefited the modeling task.

  1. Distributed MDSplus database performance with Linux clusters

    International Nuclear Information System (INIS)

    Minor, D.H.; Burruss, J.R.

    2006-01-01

    The staff at the DIII-D National Fusion Facility, operated for the USDOE by General Atomics, are investigating the use of grid computing and Linux technology to improve performance in our core data management services. We are in the process of converting much of our functionality to cluster-based and grid-enabled software. One of the most important pieces is a new distributed version of the MDSplus scientific data management system that is presently used to support fusion research in over 30 countries worldwide. To improve data handling performance, the staff are investigating the use of Linux clusters for both data clients and servers. The new distributed capability will result in better load balancing between these clients and servers, and in more efficient use of network resources, resulting in improved support of the data analysis needs of the scientific staff.

  2. The Network Configuration of an Object Relational Database Management System

    Science.gov (United States)

    Diaz, Philip; Harris, W. C.

    2000-01-01

    The networking and implementation of the Oracle Database Management System (ODBMS) requires developers to have knowledge of the UNIX operating system as well as all the features of the Oracle Server. The server is an object relational database management system (DBMS). By using distributed processing, processes are split up between the database server and client application programs. The DBMS handles all the responsibilities of the server. The workstations running the database application concentrate on the interpretation and display of data.

  3. Building Vietnamese Herbal Database Towards Big Data Science in Nature-Based Medicine

    Science.gov (United States)

    2018-01-04

    ...online and hard-copied references). Text mining is planned. ...a remedy for many types of diseases. Poor hand-written records and current text-based databases, however, perplex the conventionalizing and evaluating process. (DISTRIBUTION A. Approved for public release: distribution unlimited.)

  4. (title unavailable)

    African Journals Online (AJOL)

    Nowadays there exist quite a lot of Spatial Database Infrastructures (SDIs) that give the Geographic Information Systems (GIS) user community access to distributed spatial data through web technology. However, sometimes users first have to process the available spatial data to obtain the needed information.

  5. Sequencing, lot sizing and scheduling in job shops: the common cycle approach

    NARCIS (Netherlands)

    Ouenniche, J.; Boctor, F.F.

    1998-01-01

    This paper deals with the multi-product, finite horizon, static demand, sequencing, lot sizing and scheduling problem in a job shop environment where the objective is to minimize the sum of setup and inventory holding costs while satisfying the demand with no backlogging. To solve this problem, we

  6. Aligning workload control theory and practice : lot splitting and operation overlapping issues

    NARCIS (Netherlands)

    Fernandes, Nuno O.; Land, Martin J.; Carmo-Silva, S.

    2016-01-01

    This paper addresses the problem of lot splitting in the context of workload control (WLC). Past studies on WLC assumed that jobs released to the shop floor proceed through the different stages of processing without being split. However, in practice, large jobs are often split into smaller transfer
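The effect of splitting a lot into transfer sublots, so that consecutive operations overlap instead of waiting for the whole lot, can be illustrated with a small flow-shop sketch. This is a textbook-style model with equal sublots and no setups, not the workload control setting of the paper:

```python
def makespan_no_split(q, proc_times):
    """Whole lot moves between machines only after all q units finish a stage."""
    return q * sum(proc_times)

def makespan_equal_sublots(q, proc_times, n_sublots):
    """Lot split into n equal transfer sublots so consecutive stages overlap."""
    s = q / n_sublots
    finish = [0.0] * len(proc_times)  # finish time of the last sublot per machine
    for _ in range(n_sublots):
        avail = 0.0  # time this sublot becomes available to the next machine
        for i, p in enumerate(proc_times):
            start = max(avail, finish[i])  # wait for machine and for the sublot
            finish[i] = start + s * p
            avail = finish[i]
    return finish[-1]

# 100 units through two machines, 1 time unit per unit on each machine:
q, times = 100, [1.0, 1.0]
print(makespan_no_split(q, times))          # -> 200.0
print(makespan_equal_sublots(q, times, 4))  # -> 125.0
```

The makespan reduction from 200 to 125 shows why lot splitting matters operationally; the WLC question the paper raises is how such splits interact with job release and workload norms.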

  7. Performance of engineered soil and trees in a parking lot bioswale

    Science.gov (United States)

    Qingfu Xiao; Gregory McPherson

    2011-01-01

    A bioswale integrating an engineered soil and trees was installed in a parking lot to evaluate its ability to reduce storm runoff, pollutant loading, and support tree growth. The adjacent control and treatment sites each received runoff from eight parking spaces and were identical except that there was no bioswale for the control site. A tree was planted at both sites...

  8. Processing SPARQL queries with regular expressions in RDF databases

    Directory of Open Access Journals (Sweden)

    Cho Hune

    2011-03-01

    Background: As the Resource Description Framework (RDF) data model is widely used for modeling and sharing many online bioinformatics resources such as Uniprot (dev.isb-sib.ch/projects/uniprot-rdf) or Bio2RDF (bio2rdf.org), SPARQL, a W3C-recommended query language for RDF databases, has become an important language for querying bioinformatics knowledge bases. Moreover, due to the diversity of users' requests for extracting information from RDF data, as well as users' lack of knowledge about the exact value of each fact in RDF databases, it is desirable to use SPARQL queries with regular expression patterns for querying RDF data. To the best of our knowledge, there is currently no work that efficiently supports regular expression processing in SPARQL over RDF databases. Most existing techniques for processing regular expressions are designed for querying a text corpus, or only for supporting matching over the paths in an RDF graph. Results: In this paper, we propose a novel framework for supporting regular expression processing in SPARQL queries. Our contributions can be summarized as follows. (1) We propose an efficient framework for processing SPARQL queries with regular expression patterns in RDF databases. (2) We propose a cost model in order to adapt the proposed framework to existing query optimizers. (3) We build a prototype of the proposed framework in C++ and conduct extensive experiments demonstrating the efficiency and effectiveness of our technique. Conclusions: Experiments with a full-blown RDF engine show that our framework outperforms existing ones by up to two orders of magnitude in processing SPARQL queries with regular expression patterns.
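The FILTER regex construct that such queries rely on can be illustrated in miniature. Below is a toy sketch in Python over an in-memory triple list; the `filter_regex` helper, the UniProt-style identifiers and the labels are all invented for illustration, not part of the proposed framework:

```python
import re

# Toy triple store: (subject, predicate, object) tuples -- hypothetical data,
# standing in for an RDF graph such as Bio2RDF.
TRIPLES = [
    ("uniprot:P69905", "rdfs:label", "Hemoglobin subunit alpha"),
    ("uniprot:P68871", "rdfs:label", "Hemoglobin subunit beta"),
    ("uniprot:P00533", "rdfs:label", "Epidermal growth factor receptor"),
]

def filter_regex(triples, predicate, pattern, flags="i"):
    """Emulate SPARQL's FILTER regex(?o, pattern, "i") over literal objects."""
    re_flags = re.IGNORECASE if "i" in flags else 0
    compiled = re.compile(pattern, re_flags)
    return [s for s, p, o in triples if p == predicate and compiled.search(o)]

subjects = filter_regex(TRIPLES, "rdfs:label", "^hemoglobin")
print(subjects)  # the two hemoglobin subunit entries
```

A real engine would evaluate the same pattern inside the query plan rather than post-filtering materialized triples, which is exactly the optimization gap the paper addresses.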

  9. Assessing Species Distribution Using Google Street View: A Pilot Study with the Pine Processionary Moth

    Science.gov (United States)

    Dekri, Anissa; Garcia, Jacques; Goussard, Francis; Vincent, Bruno; Denux, Olivier; Robinet, Christelle; Dorkeld, Franck; Roques, Alain; Rossi, Jean-Pierre

    2013-01-01

    Mapping species' spatial distributions using spatial inference and prediction requires a lot of data. Occurrence data are generally not easily available from the literature and are very time-consuming to collect in the field. For that reason, we designed a survey to explore to what extent large-scale databases such as Google Maps and Google Street View could be used to derive valid occurrence data. We worked with the Pine Processionary Moth (PPM) Thaumetopoea pityocampa because the larvae of that moth build silk nests that are easily visible. The presence of the species at one location can therefore be inferred from visual records derived from the panoramic views available from Google Street View. We designed a standardized procedure allowing evaluation of the presence of the PPM on a sampling grid covering the landscape under study. The outputs were compared to field data. We investigated two landscapes using grids of different extent and mesh size. Data derived from Google Street View were highly similar to field data in the large-scale analysis based on a square grid with a mesh of 16 km (96% of matching records). Using a 2 km mesh size led to a strong divergence between field and Google-derived data (46% of matching records). We conclude that the Google database might provide useful occurrence data for mapping the distribution of species whose presence can be visually evaluated, such as the PPM. However, the accuracy of the output strongly depends on the spatial scales considered and on the sampling grid used. Other factors, such as the coverage of the Google Street View network with regard to sampling grid size and the spatial distribution of host trees with regard to the road network, may also be determinant. PMID:24130675

  10. Assessing species distribution using Google Street View: a pilot study with the Pine Processionary Moth.

    Science.gov (United States)

    Rousselet, Jérôme; Imbert, Charles-Edouard; Dekri, Anissa; Garcia, Jacques; Goussard, Francis; Vincent, Bruno; Denux, Olivier; Robinet, Christelle; Dorkeld, Franck; Roques, Alain; Rossi, Jean-Pierre

    2013-01-01

    Mapping species' spatial distributions using spatial inference and prediction requires a lot of data. Occurrence data are generally not easily available from the literature and are very time-consuming to collect in the field. For that reason, we designed a survey to explore to what extent large-scale databases such as Google Maps and Google Street View could be used to derive valid occurrence data. We worked with the Pine Processionary Moth (PPM) Thaumetopoea pityocampa because the larvae of that moth build silk nests that are easily visible. The presence of the species at one location can therefore be inferred from visual records derived from the panoramic views available from Google Street View. We designed a standardized procedure allowing evaluation of the presence of the PPM on a sampling grid covering the landscape under study. The outputs were compared to field data. We investigated two landscapes using grids of different extent and mesh size. Data derived from Google Street View were highly similar to field data in the large-scale analysis based on a square grid with a mesh of 16 km (96% of matching records). Using a 2 km mesh size led to a strong divergence between field and Google-derived data (46% of matching records). We conclude that the Google database might provide useful occurrence data for mapping the distribution of species whose presence can be visually evaluated, such as the PPM. However, the accuracy of the output strongly depends on the spatial scales considered and on the sampling grid used. Other factors, such as the coverage of the Google Street View network with regard to sampling grid size and the spatial distribution of host trees with regard to the road network, may also be determinant.
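The field-versus-Street-View comparison reduces to the share of grid cells where the two surveys agree. A minimal sketch with invented presence/absence records (the study's own grids yielded 96% agreement at 16 km mesh and 46% at 2 km; the values below are illustrative):

```python
# Presence (1) / absence (0) of PPM nests per grid cell, as recorded by a
# field survey and by inspection of Street View panoramas. Data are invented.
field = [1, 1, 0, 0, 1, 0, 1, 1]
street_view = [1, 1, 0, 1, 1, 0, 0, 1]

def match_rate(a, b):
    """Fraction of grid cells where the two surveys give the same record."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

print(match_rate(field, street_view))   # 0.75
```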

  11. Materials data through a bibliographic database INIS

    International Nuclear Information System (INIS)

    Yamamoto, Akira; Itabashi, Keizo; Nakajima, Hidemitsu

    1992-01-01

    INIS (International Nuclear Information System) is a bibliographic database produced through collaboration between the IAEA and its member countries, holding 1,500,000 records as of 1991. Although a bibliographic database does not provide numerical data itself, specific materials information can be obtained through retrieval specifying materials, properties, conditions, measuring methods, etc. 'Data flagging' also facilitates finding records that contain data. INIS additionally has a clearing-house function that provides original documents of scarce distribution: hard copies of technical reports and other non-conventional literature are available. Efficient use of the INIS database for materials data is presented using an on-line terminal. (author)

  12. The Global Terrestrial Network for Permafrost Database: metadata statistics and prospective analysis on future permafrost temperature and active layer depth monitoring site distribution

    Science.gov (United States)

    Biskaborn, B. K.; Lanckman, J.-P.; Lantuit, H.; Elger, K.; Streletskiy, D. A.; Cable, W. L.; Romanovsky, V. E.

    2015-03-01

    The Global Terrestrial Network for Permafrost (GTN-P) provides the first dynamic database associated with the Thermal State of Permafrost (TSP) and the Circumpolar Active Layer Monitoring (CALM) programs, which extensively collect permafrost temperature and active layer thickness data from Arctic, Antarctic and Mountain permafrost regions. The purpose of the database is to establish an "early warning system" for the consequences of climate change in permafrost regions and to provide standardized thermal permafrost data to global models. In this paper we perform statistical analysis of the GTN-P metadata aiming to identify the spatial gaps in the GTN-P site distribution in relation to climate-effective environmental parameters. We describe the concept and structure of the Data Management System in regard to user operability, data transfer and data policy. We outline data sources and data processing including quality control strategies. Assessment of the metadata and data quality reveals 63% metadata completeness at active layer sites and 50% metadata completeness for boreholes. Voronoi Tessellation Analysis on the spatial sample distribution of boreholes and active layer measurement sites quantifies the distribution inhomogeneity and provides potential locations of additional permafrost research sites to improve the representativeness of thermal monitoring across areas underlain by permafrost. The depth distribution of the boreholes reveals that 73% are shallower than 25 m and 27% are deeper, reaching a maximum of 1 km depth. Comparison of the GTN-P site distribution with permafrost zones, soil organic carbon contents and vegetation types exhibits different local to regional monitoring situations on maps. Preferential slope orientation at the sites most likely causes a bias in the temperature monitoring and should be taken into account when using the data for global models. The distribution of GTN-P sites within zones of projected temperature change show a high

  13. A database application for the Naval Command Physical Readiness Testing Program

    OpenAIRE

    Quinones, Frances M.

    1998-01-01

    Approved for public release; distribution is unlimited. IT21 envisions a Navy with standardized, state-of-the-art computer systems. Based on this vision, Naval database management systems will also need to become standardized among Naval commands. Today most commercial off-the-shelf (COTS) database management systems provide a graphical user interface. Among the many Naval database systems currently in use, the Navy's Physical Readiness Program database has continued to exist at the command leve...

  14. Analysis of portfolio optimization with lot of stocks amount constraint: case study index LQ45

    Science.gov (United States)

    Chin, Liem; Chendra, Erwinna; Sukmana, Agus

    2018-01-01

    To form an optimum portfolio (in the sense of minimizing risk and/or maximizing return), the commonly used model is the mean-variance model of Markowitz. However, that model has no constraint on the number of lots of stocks, and retail investors in Indonesia cannot engage in short selling. In this study we therefore extend the existing model by adding lot-amount and short-selling constraints to obtain the minimum-risk portfolio with and without a target return. We analyse the stocks listed in the LQ45 index based on stock market capitalization. To perform this analysis, we use the Solver add-in available in Microsoft Excel.
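Because lot constraints force integer share counts, small instances of this problem can be solved by plain enumeration instead of a continuous optimizer such as Excel's Solver. A toy sketch with two hypothetical stocks (the prices, the covariance matrix and the budget are all invented; lots of 100 shares mirror IDX trading rules, and non-negative lot counts encode the no-short-selling constraint):

```python
from itertools import product

# Minimal sketch of min-variance portfolio selection with a lot constraint
# and no short selling. Prices, covariances and the budget are hypothetical.
LOT = 100                      # shares per lot
prices = [8000.0, 3500.0]      # price per share of two hypothetical LQ45 stocks
cov = [[0.040, 0.006],
       [0.006, 0.090]]         # annualised return covariance matrix
budget = 5_000_000.0

def portfolio_variance(weights):
    return sum(weights[i] * cov[i][j] * weights[j]
               for i in range(2) for j in range(2))

best = None
for lots in product(range(0, 8), repeat=2):   # lots >= 0: no short selling
    cost = sum(n * LOT * p for n, p in zip(lots, prices))
    if cost == 0 or cost > budget:
        continue                              # empty or over-budget portfolio
    w = [n * LOT * p / cost for n, p in zip(lots, prices)]
    var = portfolio_variance(w)
    if best is None or var < best[0]:
        best = (var, lots)

print(best)   # (minimum variance, (lots of stock 1, lots of stock 2))
```

With a target-return constraint one would additionally discard candidates whose expected return falls below the target, which is the "with target return" variant the abstract mentions.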

  15. Three Permeable Pavements Performances for Priority Metal Pollutants and Metals associated with Deicing Chemicals from Edison Parking Lot, NJ - abstract

    Science.gov (United States)

    The U.S. Environmental Protection Agency constructed a 4,000-m² parking lot in Edison, New Jersey in 2009. The parking lot is surfaced with three permeable pavements [permeable interlocking concrete pavers (PICP), pervious concrete (PC), and porous asphalt (PA)]. Samples of each p...

  16. A Methodology for Distributing the Corporate Database.

    Science.gov (United States)

    McFadden, Fred R.

    The trend to distributed processing is being fueled by numerous forces, including advances in technology, corporate downsizing, increasing user sophistication, and acquisitions and mergers. Increasingly, the trend in corporate information systems (IS) departments is toward sharing resources over a network of multiple types of processors, operating…

  17. Improving the analysis, storage and sharing of neuroimaging data using relational databases and distributed computing.

    Science.gov (United States)

    Hasson, Uri; Skipper, Jeremy I; Wilde, Michael J; Nusbaum, Howard C; Small, Steven L

    2008-01-15

    The increasingly complex research questions addressed by neuroimaging research impose substantial demands on computational infrastructures. These infrastructures need to support management of massive amounts of data in a way that affords rapid and precise data analysis, to allow collaborative research, and to achieve these aims securely and with minimum management overhead. Here we present an approach that overcomes many current limitations in data analysis and data sharing. This approach is based on open source database management systems that support complex data queries as an integral part of data analysis, flexible data sharing, and parallel and distributed data processing using cluster computing and Grid computing resources. We assess the strengths of these approaches as compared to current frameworks based on storage of binary or text files. We then describe in detail the implementation of such a system and provide a concrete description of how it was used to enable a complex analysis of fMRI time series data.
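The idea of making queries "an integral part of data analysis" can be sketched with the stdlib sqlite3 module standing in for the open source database systems the paper evaluates; the BOLD-signal schema and values below are hypothetical:

```python
import sqlite3

# Sketch: once time-series data live in a relational store, an analysis step
# such as a per-region average becomes a query. Schema and data are invented.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE bold (
    subject TEXT, region TEXT, t INTEGER, value REAL)""")
conn.executemany("INSERT INTO bold VALUES (?, ?, ?, ?)", [
    ("s01", "STG", 0, 0.8), ("s01", "STG", 1, 1.2),
    ("s01", "IFG", 0, 0.3), ("s02", "STG", 0, 1.0),
])

# Mean signal per region across subjects, expressed directly in SQL.
means = conn.execute(
    "SELECT region, AVG(value) FROM bold GROUP BY region ORDER BY region"
).fetchall()
print(means)
```

In the paper's setting the same pattern is applied at scale, with the query results feeding parallel jobs on cluster and Grid resources.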

  18. A Database Query Processing Model in Peer-To-Peer Network ...

    African Journals Online (AJOL)

    Peer-to-peer databases are becoming more prevalent on the internet for sharing and distributing applications, documents, files, and other digital media. The problem associated with answering large-scale ad hoc analysis queries, aggregation queries, on these databases poses unique challenges. This paper presents an ...

  19. The design and implementation of embedded database in HIRFL-CSR

    International Nuclear Information System (INIS)

    Xu Yang; Liu Wufeng; Long Yindong; Chinese Academy of Sciences, Beijing; Qiao Weimin; Guo Yuhui

    2008-01-01

    This article introduces the design and implementation of the embedded database for the control system in HIRFL-CSR. The control system has a three-level database for centralized management and distributed control. Levels I and II are based on Windows and connect with each other through the ODBC of an Oracle database. Level III, matching the control system, is based on embedded Linux and communicates with the others through the SQLite database engine. The overall control room sends wave-data, event-tables and other data to the front-end embedded database in advance, and the embedded SQLite database relays the data to the wave-generator DSP during the experiment. On the synchronization trigger, the DSP generates wave-data to control the power supply and magnetic field. (authors)

  20. Database Access through Java Technologies

    Directory of Open Access Journals (Sweden)

    Nicolae MERCIOIU

    2010-09-01

    As a high-level development environment, the Java technologies offer support for the development of distributed, platform-independent applications, providing a robust set of methods to access databases and to create software components on the server side as well as on the client side. Analyzing the evolution of the Java tools for data access, we notice that they evolved from simple methods permitting queries, insertion, update and deletion of data to advanced implementations such as distributed transactions, cursors and batch files. The client-server architecture allows, through JDBC (Java Database Connectivity), the execution of SQL (Structured Query Language) statements and the manipulation of the results in an independent and consistent manner. The JDBC API (Application Programming Interface) creates the level of abstraction needed to allow SQL queries to be issued against any DBMS (Database Management System). The native JDBC driver, the ODBC (Open Database Connectivity)-JDBC bridge, and the classes and interfaces of the JDBC API are described. The four steps needed to build a JDBC-driven application are presented briefly, emphasizing the way each step has to be accomplished and the expected results. In each step there are evaluations of the characteristics of the database systems and of the way the JDBC programming interface adapts to each one. The data types provided by the SQL2 and SQL3 standards are analyzed by comparison with the Java data types, emphasizing the discrepancies between them but also the methods that allow conversion between different types of data through the methods of the ResultSet object. Next, starting from the role of metadata and studying the Java programming interfaces that allow querying of result sets, we describe the advanced features of data mining with JDBC. As an alternative to result sets, the RowSets add new functionalities that
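The JDBC workflow (obtain a connection, create a statement, execute the query, walk the result set and its metadata) has a close analogue in Python's DB-API. A sketch using the stdlib sqlite3 driver in place of a JDBC driver, with a hypothetical employee table; `cursor.description` plays the role of JDBC's ResultSetMetaData:

```python
import sqlite3

# DB-API sketch of the four JDBC steps. Table and rows are hypothetical.
conn = sqlite3.connect(":memory:")          # ~ DriverManager.getConnection(...)
cur = conn.cursor()                         # ~ Connection.createStatement()
cur.execute("CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT, salary REAL)")
cur.executemany("INSERT INTO employee VALUES (?, ?, ?)",
                [(1, "Ada", 5200.0), (2, "Grace", 6100.0)])

# Parameterized query, ~ PreparedStatement.executeQuery()
cur.execute("SELECT name, salary FROM employee WHERE salary > ?", (5500.0,))
columns = [d[0] for d in cur.description]   # ~ ResultSetMetaData column names
rows = cur.fetchall()                       # ~ iterating the ResultSet
print(columns, rows)
conn.close()
```

The `?` placeholders correspond to the parameter markers of a JDBC PreparedStatement: values are bound by the driver, not spliced into the SQL string.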

  1. Using Large Diabetes Databases for Research.

    Science.gov (United States)

    Wild, Sarah; Fischbacher, Colin; McKnight, John

    2016-09-01

    There are an increasing number of clinical, administrative and trial databases that can be used for research. These are particularly valuable if there are opportunities for linkage to other databases. This paper describes examples of the use of large diabetes databases for research. It reviews the advantages and disadvantages of using large diabetes databases for research and suggests solutions for some challenges. Large, high-quality databases offer potential sources of information for research at relatively low cost. Fundamental issues for using databases for research are the completeness of capture of cases within the population and time period of interest and accuracy of the diagnosis of diabetes and outcomes of interest. The extent to which people included in the database are representative should be considered if the database is not population based and there is the intention to extrapolate findings to the wider diabetes population. Information on key variables such as date of diagnosis or duration of diabetes may not be available at all, may be inaccurate or may contain a large amount of missing data. Information on key confounding factors is rarely available for the nondiabetic or general population limiting comparisons with the population of people with diabetes. However comparisons that allow for differences in distribution of important demographic factors may be feasible using data for the whole population or a matched cohort study design. In summary, diabetes databases can be used to address important research questions. Understanding the strengths and limitations of this approach is crucial to interpret the findings appropriately. © 2016 Diabetes Technology Society.

  2. Rapid assessment of antimicrobial resistance prevalence using a Lot Quality Assurance sampling approach

    NARCIS (Netherlands)

    van Leth, Frank; den Heijer, Casper; Beerepoot, Marielle; Stobberingh, Ellen; Geerlings, Suzanne; Schultsz, Constance

    2017-01-01

    Increasing antimicrobial resistance (AMR) requires rapid surveillance tools, such as Lot Quality Assurance Sampling (LQAS). LQAS classifies AMR as high or low based on set parameters. We compared classifications with the underlying true AMR prevalence using data on 1335 Escherichia coli isolates
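The LQAS rule the abstract describes (classify a lot as high or low based on set parameters) can be sketched as follows; the sample size, decision threshold and prevalence values are illustrative assumptions, not parameters taken from the study:

```python
from math import comb

# LQAS sketch: sample N isolates from a lot; if more than D are resistant,
# classify AMR prevalence as "high", otherwise "low". N = 20 and D = 3 are
# invented parameters for illustration.
N, D = 20, 3

def classify(resistant_count):
    return "high" if resistant_count > D else "low"

def prob_classified_low(prevalence):
    """P(at most D resistant in N draws) under binomial sampling."""
    return sum(comb(N, k) * prevalence**k * (1 - prevalence)**(N - k)
               for k in range(D + 1))

print(classify(5))                           # "high"
print(round(prob_classified_low(0.10), 3))   # chance a 10%-prevalence lot is called low
```

Comparing `prob_classified_low` at a low and a high candidate prevalence is how the operating characteristics of a chosen (N, D) pair are judged against the true underlying prevalence.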

  3. Intrusion Detection and Marking Transactions in a Cloud of Databases Environment

    OpenAIRE

    Syrine Chatti; Habib Ounelli

    2016-01-01

    Cloud computing is a paradigm for large-scale distributed computing that incorporates several existing technologies. A database management system is a collection of programs that enables you to store, modify and extract information from a database. Databases have now moved to cloud computing, which at the same time introduces a set of threats targeting cloud database systems. The unification of transaction-based applications in these environments also presents a set of vulnerabilities and th...

  4. A lot to look forward to

    CERN Multimedia

    2013-01-01

    CERN moves from momentous year to momentous year, and although 2013 will be very different for us than 2012, there is still a lot to look forward to. As I write, the proton-lead run is just getting under way, giving the LHC experiments a new kind of data to investigate. But the run will be short, and our main activity this year will be the start of the LHC’s first long shutdown.   This is the first year I can remember in which all of CERN’s accelerators will be off. The reason is that there is much to be done: the older machines need maintenance, and the LHC has to be prepared for higher energy running. That involves opening up the interconnections between each of the machine’s 1,695 main magnet cryostats, consolidating all of the 10,170 splices carrying current to the main dipole and quadrupole windings, and a range of other work to improve the machine. The CERN accelerator complex will start to come back to life in 2014, and it’s fair to say that when...

  5. Perception that "everything requires a lot of effort": transcultural SCL-25 item validation.

    Science.gov (United States)

    Moreau, Nicolas; Hassan, Ghayda; Rousseau, Cécile; Chenguiti, Khalid

    2009-09-01

    This brief report illustrates how the migration context can affect the validity of specific items in mental health measures. The SCL-25 was administered to 432 recently settled immigrants (220 Haitians and 212 Arabs). We performed descriptive analyses, as well as Infit and Outfit statistical analyses, using WINSTEPS Rasch Measurement Software based on Item Response Theory. The participants' comments about the SCL-25 item You feel everything requires a lot of effort were also qualitatively analyzed. Results revealed that this item is an outlier and does not adjust in an expected and valid fashion with its cluster items, as it is over-endorsed by healthy Haitian and Arab participants. Our study thus shows that, in transcultural mental health research, the cultural and migratory contexts may interact and significantly influence the meaning of some symptom items and, consequently, the validity of symptom scales.

  6. A review of lot streaming in a flow shop environment with makespan criteria

    Directory of Open Access Journals (Sweden)

    Pedro Gómez-Gasquet

    2013-07-01

    Purpose: This paper reviews the current literature and contributes a set of findings that capture the state of the art on lot streaming in a flow shop. Design/methodology/approach: A literature review to capture, classify and summarize the main body of knowledge on lot streaming in a flow shop with makespan criteria, and to translate this into a form readily accessible to researchers and practitioners in the more mainstream production-scheduling community. Findings: The existing knowledge base is somewhat fragmented. This is a relatively unexplored topic within mainstream operations-management research and one which could provide rich opportunities for further exploration. Originality/value: This paper reviews the current literature from an advanced production-scheduling perspective and contributes a set of findings that capture the current state of the art of this topic.

  7. Hyperdatabase: A schema for browsing multiple databases

    Energy Technology Data Exchange (ETDEWEB)

    Shepherd, M A [Dalhousie Univ., Halifax (Canada). Computer Science Div.; Watters, C R [Waterloo Univ., Waterloo (Canada). Computer Science Dept.

    1990-05-01

    In order to ensure effective information retrieval, a user may need to search multiple databases on multiple systems. Although front-end systems have been developed to assist the user in accessing different systems, they access one retrieval system at a time and the search has to be repeated for each required database on each retrieval system. More importantly, the user interacts with the results as independent sessions. This paper models multiple bibliographic databases distributed over one or more retrieval systems as a hyperdatabase, i.e., a single virtual database. The hyperdatabase is viewed as a hypergraph in which each node represents a bibliographic item and the links among nodes represent relations among the items. In response to a query, bibliographic items are extracted from the hyperdatabase and linked together to form a transient hypergraph. This hypergraph is transient in the sense that it is "created" in response to a query and only "exists" for the duration of the query session. A hypertext interface permits the user to browse the transient hypergraph in a nonlinear manner. The technology to implement a system based on this model is available now, consisting of powerful workstations, distributed processing, high-speed communications, and CD-ROMs. As the technology advances and costs decrease, such systems should be generally available. (author). 13 refs, 5 figs.

  8. Hyperdatabase: A schema for browsing multiple databases

    International Nuclear Information System (INIS)

    Shepherd, M.A.; Watters, C.R.

    1990-05-01

    In order to ensure effective information retrieval, a user may need to search multiple databases on multiple systems. Although front-end systems have been developed to assist the user in accessing different systems, they access one retrieval system at a time and the search has to be repeated for each required database on each retrieval system. More importantly, the user interacts with the results as independent sessions. This paper models multiple bibliographic databases distributed over one or more retrieval systems as a hyperdatabase, i.e., a single virtual database. The hyperdatabase is viewed as a hypergraph in which each node represents a bibliographic item and the links among nodes represent relations among the items. In response to a query, bibliographic items are extracted from the hyperdatabase and linked together to form a transient hypergraph. This hypergraph is transient in the sense that it is "created" in response to a query and only "exists" for the duration of the query session. A hypertext interface permits the user to browse the transient hypergraph in a nonlinear manner. The technology to implement a system based on this model is available now, consisting of powerful workstations, distributed processing, high-speed communications, and CD-ROMs. As the technology advances and costs decrease, such systems should be generally available. (author). 13 refs, 5 figs
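The transient-hypergraph idea described in the two records above can be sketched in a few lines: query results drawn from separate bibliographic databases are merged into one in-memory graph whose links connect related items. The records, the two databases and the shared-author link rule are all hypothetical:

```python
# Toy sketch: merge result sets from two bibliographic "databases" into a
# transient graph, linking items that share an author. Data are invented.
db_a = [{"id": "A1", "title": "Hypertext browsing", "authors": {"Shepherd"}}]
db_b = [{"id": "B7", "title": "Virtual databases", "authors": {"Shepherd", "Watters"}},
        {"id": "B9", "title": "CD-ROM retrieval", "authors": {"Jones"}}]

def transient_graph(*result_sets):
    """Build nodes and author-overlap links; the graph lives only per query."""
    nodes = [r for rs in result_sets for r in rs]
    links = [(a["id"], b["id"])
             for i, a in enumerate(nodes) for b in nodes[i + 1:]
             if a["authors"] & b["authors"]]
    return nodes, links

nodes, links = transient_graph(db_a, db_b)
print(links)   # [('A1', 'B7')]
```

In the paper's model the link types would cover richer bibliographic relations (citations, shared subjects), and a hypertext interface would let the user browse the resulting graph nonlinearly.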

  9. Where the bugs are: analyzing distributions of bacterial phyla by descriptor keyword search in the nucleotide database.

    Science.gov (United States)

    Squartini, Andrea

    2011-07-26

    The associations between bacteria and their environments underlie their preferential interactions with given physical or chemical conditions. Microbial ecology aims at extracting conserved patterns of occurrence of bacterial taxa in relation to defined habitats and contexts. In the present report the NCBI nucleotide sequence database is used as the dataset from which to extract information on the distribution of each of the 24 phyla of the Bacteria superkingdom and of the Archaea. Over two and a half million records are filtered by their cross-association with each of 48 sets of keywords, defined to cover natural or artificial habitats, interactions with plant, animal or human hosts, and physical-chemical conditions. The results are processed to show: (a) how the different descriptors enrich or deplete the proportions at which the phyla occur in the total database; (b) in which order of abundance the different keywords score for each phylum (preferred habitats or conditions), and to what extent phyla are clustered on a few descriptors (specific) or spread across many (cosmopolitan); (c) which keywords identify the communities ranking highest for diversity and evenness. A number of cues emerge from the results, contributing to sharpening the picture of the functional systematic diversity of prokaryotes. Suggestions are given for a future automated service dedicated to refining and updating such analyses via public bioinformatic engines.
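The enrichment/depletion measure in (a) can be sketched as a simple ratio of shares: a phylum's proportion among keyword-matching records divided by its proportion in the whole database. All counts below are invented for illustration:

```python
# Toy sketch of keyword enrichment. Counts are invented; in the paper they
# come from filtering NCBI nucleotide records by descriptor keywords.
total = {"Proteobacteria": 600, "Firmicutes": 300, "Cyanobacteria": 100}
with_keyword = {"Proteobacteria": 30, "Firmicutes": 10, "Cyanobacteria": 60}

def enrichment(phylum):
    db_share = total[phylum] / sum(total.values())
    kw_share = with_keyword[phylum] / sum(with_keyword.values())
    return kw_share / db_share          # > 1 enriched, < 1 depleted

for p in total:
    print(p, round(enrichment(p), 2))
```

Ranking these ratios per phylum across all 48 keyword sets yields the "preferred habitats" ordering of point (b).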

  10. Insertion algorithms for network model database management systems

    Science.gov (United States)

    Mamadolimov, Abdurashid; Khikmat, Saburov

    2017-12-01

    The network model is a database model conceived as a flexible way of representing objects and their relationships. Its distinguishing feature is that the schema, viewed as a graph in which object types are nodes and relationship types are arcs, forms a partial order. When a database is large and query comparisons are expensive, the efficiency requirement for management algorithms is to minimize the number of query comparisons. We consider the update operation for network-model database management systems and develop a new sequential algorithm for it. We also suggest a distributed version of the algorithm.
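The owner/member discipline of the network model, in which the schema forms a partial order, can be sketched with a toy insert routine that admits a record only when all of its owner records already exist. The schema, record types and data below are hypothetical, not from the paper:

```python
# Toy network-model sketch: record types form a partial order (owner -> member)
# and an insert must reference one existing owner per owner type.
SCHEMA = {"department": [], "employee": ["department"],
          "task": ["employee", "department"]}   # type -> list of owner types

db = {t: {} for t in SCHEMA}

def insert(rtype, key, owners=()):
    """Insert a record, checking one owner key per owner type in the schema."""
    owner_types = SCHEMA[rtype]
    if len(owners) != len(owner_types):
        raise ValueError("one owner key required per owner type")
    for otype, okey in zip(owner_types, owners):
        if okey not in db[otype]:
            raise KeyError(f"missing owner {otype}:{okey}")
    db[rtype][key] = list(owners)

insert("department", "d1")
insert("employee", "e1", ("d1",))
insert("task", "t1", ("e1", "d1"))
print(db["task"])   # {'t1': ['e1', 'd1']}
```

An efficient algorithm of the kind the paper develops would order these existence checks so as to minimize the number of comparisons; the sketch only shows the correctness constraint they must preserve.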

  11. Database management system for large container inspection system

    International Nuclear Information System (INIS)

    Gao Wenhuan; Li Zheng; Kang Kejun; Song Binshan; Liu Fang

    1998-01-01

    Large Container Inspection System (LCIS) based on radiation imaging technology is a powerful tool for the Customs to check the contents inside a large container without opening it. The author discusses a database application system, as part of the Signal and Image System (SIS), for the LCIS. The basic requirements analysis was done first. Then the computer hardware, operating system and database management system were selected according to the prevailing technology and market circumstances. Based on the above considerations, a database application system with central management and distributed operation has been implemented.

  12. Joint Economic Lot Sizing Optimization in a Supplier-Buyer Inventory System When the Supplier Offers Decremental Temporary Discounts

    Directory of Open Access Journals (Sweden)

    Diana Puspita Sari

    2012-02-01

    This research discusses mathematical models for joint economic lot size optimization in a supplier-buyer inventory system in a situation where the supplier offers decremental temporary discounts during a sale period. Here, the sale period consists of n phases, and the discounts offered descend over the phases: the highest discount is given when orders are placed in the first phase, while the lowest is given when they are placed in the last phase. In this situation, the supplier attempts to attract the buyer to place orders as early as possible during the sale period, and the buyer responds to these offers by ordering a special quantity in one of the phases. In this paper, we propose such a forward-buying model with discount-proportionally-distributed time phases. To examine the behaviour of the proposed model, we conducted numerical experiments, assuming three phases of discounts during the sale period. We then compared the total joint costs of a special order placed in each phase for two scenarios. The first scenario is the independent case, with no coordination between the buyer and the supplier, while the second is the opposite, coordinated model. Our results showed the coordinated model outperforms the independent model in terms of total joint costs. We finally conducted a sensitivity analysis to examine further behaviour of the proposed model. Keywords: supplier-buyer inventory system, forward buying model, decremental temporary discounts, joint economic lot sizing, optimization.

  13. Meta-Heuristics for Dynamic Lot Sizing: a review and comparison of solution approaches

    NARCIS (Netherlands)

    R.F. Jans (Raf); Z. Degraeve (Zeger)

    2004-01-01

    Proofs from complexity theory as well as computational experiments indicate that most lot sizing problems are hard to solve. Because these problems are so difficult, various solution techniques have been proposed to solve them. In the past decade, meta-heuristics such as tabu search,

  14. Detection and genetic identification of pestiviruses in Brazilian lots of fetal bovine serum collected from 2006 to 2014

    Directory of Open Access Journals (Sweden)

    Francielle L. Monteiro

    Full Text Available ABSTRACT: The present study performed a genetic identification of pestiviruses contaminating batches of fetal bovine serum (FBS) produced in Brazil from 2006 to 2014. Seventy-three FBS lots were screened by a RT-PCR targeting the 5’ untranslated region (UTR) of the pestivirus genome. Thirty-nine lots (53.4%) were positive for pestivirus RNA and one contained infectious virus. Nucleotide sequencing and phylogenetic analysis of the 5’UTR revealed 34 lots (46.6%) containing RNA of bovine viral diarrhea virus type 1 (BVDV-1): 23 BVDV-1a (5’UTR identity 90.8-98.7%), eight BVDV-1b (93.9-96.7%) and three BVDV-1d (96.2-97.6%). Six lots (8.2%) contained BVDV-2 (90.3-100% UTR identity), two being BVDV-2a, three BVDV-2b and one undetermined. Four FBS batches (5.5%) were found contaminated with HoBi-like virus (98.3 to 100%). Five batches (6.8%) contained more than one pestivirus. The high frequency of contamination of FBS with pestivirus RNA reinforces the need for systematic and updated guidelines for monitoring this product to reduce the risk of contamination of biologicals and introduction of contaminating agents into free areas.

  15. Database Description - Yeast Interacting Proteins Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us Yeast Interacting Proteins Database Database Description General information of database Database... name Yeast Interacting Proteins Database Alternative name - DOI 10.18908/lsdba.nbdc00742-000 Creator C...-ken 277-8561 Tel: +81-4-7136-3989 FAX: +81-4-7136-3979 E-mail : Database classif...s cerevisiae Taxonomy ID: 4932 Database description Information on interactions and related information obta...l Acad Sci U S A. 2001 Apr 10;98(8):4569-74. Epub 2001 Mar 13. External Links: Original website information Database

  16. Report on the basic design of the FY 1998 technical information database; 1998 nendo gijutsu joho database no kihon sekkei hokokusho

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1999-03-01

    For the purpose of promoting the effective transfer of technical development results and the distribution of technical information, a concept for the new technical information database to be run by NEDO was studied and its basic design prepared. In the study, the following were conducted to extract the subjects: analysis of examples of project databases such as CADDET, of scientific technology research information (JCLEARING), and an evaluation/survey of the present situation of database users, etc. In the study of the concept of the technical information database, the following were clarified: the classification of information, definition of its details, the relations between the various kinds of information, the mechanism for collection/processing/storage/dispatch/exchange of information, the role of NEDO, etc. In the basic design, as the NEDO technical information database viable for the moment, a rough design was prepared of the project information, result report information, and research institute information. Further, to handle in a unified way databases controlled by different methods, the applicability of the general combined concept to the project database, its advantages, restricted items, etc. were studied. (NEDO)

  17. Update History of This Database - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us Trypanosomes Database Update History of This Database Date Update contents 2014/05/07 The contact information is corrected. The features and manner of utilization of the database are corrected. 2014/02/04 Trypanosomes Database English archive site is opened. 2011/04/04 Trypanosomes Database ( http://www.tanpaku.org/tdb/ ) is opened. About This Database Database Description Download License Update History of This Database Site Policy | Contact Us Update History of This Database - Trypanosomes Database | LSDB Archive ...

  18. Seismic Search Engine: A distributed database for mining large scale seismic data

    Science.gov (United States)

    Liu, Y.; Vaidya, S.; Kuzma, H. A.

    2009-12-01

    The International Monitoring System (IMS) of the CTBTO collects terabytes worth of seismic measurements from many receiver stations situated around the earth with the goal of detecting underground nuclear testing events and distinguishing them from other benign, but more common events such as earthquakes and mine blasts. The International Data Center (IDC) processes and analyzes these measurements, as they are collected by the IMS, to summarize event detections in daily bulletins. Thereafter, the data measurements are archived into a large format database. Our proposed Seismic Search Engine (SSE) will facilitate a framework for data exploration of the seismic database as well as the development of seismic data mining algorithms. Analogous to GenBank, the annotated genetic sequence database maintained by NIH, through SSE, we intend to provide public access to seismic data and a set of processing and analysis tools, along with community-generated annotations and statistical models to help interpret the data. SSE will implement queries as user-defined functions composed from standard tools and models. Each query is compiled and executed over the database internally before reporting results back to the user. Since queries are expressed with standard tools and models, users can easily reproduce published results within this framework for peer-review and making metric comparisons. As an illustration, an example query is “what are the best receiver stations in East Asia for detecting events in the Middle East?” Evaluating this query involves listing all receiver stations in East Asia, characterizing known seismic events in that region, and constructing a profile for each receiver station to determine how effective its measurements are at predicting each event. The results of this query can be used to help prioritize how data is collected, identify defective instruments, and guide future sensor placements.

  19. Programming database tools for the casual user

    International Nuclear Information System (INIS)

    Katz, R.A; Griffiths, C.

    1990-01-01

    The AGS Distributed Control System (AGSDCS) uses a relational database management system (INTERBASE) for the storage of all data associated with the control of the particle accelerator complex. This includes the static data which describes the component devices of the complex, as well as data for application program startup and data records that are used in analysis. Due to licensing restrictions, it was necessary to develop tools that allow programs requiring database access to be unconcerned with whether or not they were running on a licensed node. An in-house database server program was written, using Apollo mailbox communication protocols, allowing application programs to access the INTERBASE database via calls to this server. Initially, the tools used by the server to actually access the database were written using the GDML C host language interface. Through an evolutionary learning process these tools have been converted to Dynamic SQL. Additionally, these tools have been extracted from the exclusive province of the database server and placed in their own library. This enables application programs to use these same tools on a licensed node without using the database server and without having to modify the application code. The syntax of the C calls remains the same

  20. National Radiobiology Archives Distributed Access user's manual

    International Nuclear Information System (INIS)

    Watson, C.; Smith, S.; Prather, J.

    1991-11-01

    This User's Manual describes installation and use of the National Radiobiology Archives (NRA) Distributed Access package. The package consists of a distributed subset of information representative of the NRA databases and database access software which provide an introduction to the scope and style of the NRA Information Systems

  1. Prototyping visual interface for maintenance and supply databases

    OpenAIRE

    Fore, Henry Ray

    1989-01-01

    Approved for public release; distribution is unlimited This research examined the feasibility of providing a visual interface to standard Army Management Information Systems at the unit level. The potential of improving the Human-Machine Interface of unit level maintenance and supply software, such as ULLS (Unit Level Logistics System), is very attractive. A prototype was implemented in GLAD (Graphics Language for Database). GLAD is a graphics object-oriented environment for databases t...

  2. Performance analysis of a real-time database with optimistic concurrency control

    NARCIS (Netherlands)

    Sassen, S.A.E.; Wal, van der J.

    1997-01-01

    For a real-time shared-memory database with Optimistic Concurrency Control (OCC), an approximation for the transaction response-time distribution and thus for the deadline miss probability is obtained. Transactions arrive at the database according to a Poisson process. There is a limited number of
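
    The validation step that OCC relies on can be sketched as follows. This is a generic, minimal illustration of backward validation with invented class and method names (`Store`, `Transaction`), not the queueing model analysed in the paper:

    ```python
    # Minimal sketch of Optimistic Concurrency Control (OCC) with backward
    # validation. Names and structure are illustrative assumptions only.

    class Store:
        def __init__(self):
            self.data = {}      # key -> committed value
            self.version = {}   # key -> version counter, bumped on commit

        def begin(self):
            return Transaction(self)

    class Transaction:
        def __init__(self, store):
            self.store = store
            self.reads = {}    # key -> version observed at read time
            self.writes = {}   # key -> new value, buffered until commit

        def read(self, key):
            # Record the version we saw; reads of our own writes hit the buffer.
            self.reads[key] = self.store.version.get(key, 0)
            return self.writes.get(key, self.store.data.get(key))

        def write(self, key, value):
            self.writes[key] = value

        def commit(self):
            # Validation phase: abort if any item we read changed meanwhile.
            for key, seen in self.reads.items():
                if self.store.version.get(key, 0) != seen:
                    return False  # conflict detected: caller restarts the txn
            # Write phase: install buffered writes and bump versions.
            for key, value in self.writes.items():
                self.store.data[key] = value
                self.store.version[key] = self.store.version.get(key, 0) + 1
            return True
    ```

    Under this scheme a transaction never blocks during execution; the cost of a conflict is a restart, which is what makes the response-time (and hence deadline-miss) analysis above non-trivial.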

  3. Increasing efficiency of job execution with resource co-allocation in distributed computer systems

    OpenAIRE

    Cankar, Matija

    2014-01-01

    The field of distributed computer systems, while not new in computer science, is still the subject of a lot of interest in both industry and academia. More powerful computers, faster and more ubiquitous networks, and complex distributed applications are accelerating the growth of distributed computing. Large numbers of computers interconnected in a single network provide additional computing power to users whenever required. Such systems are, however, expensive and complex to manage, which ca...

  4. Update of the database of photovoltaic installations in the UK

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, D.; Bruhns, H.

    1999-07-01

    The article describes an updated database of photovoltaic (PV) installations in the UK. The database contains more than 300 records representing over 40,000 photovoltaic installations with more than 100 buildings that use photovoltaic arrays. Figures show: (i) a chart of cumulative PV applications to date; (ii) a chart of cumulative installations in the database; (iii) the growth of Building Integrated PV installed to date; (iv) the cumulative growth of peak power of PV for buildings installed every year since 1985; (v) the distribution by application of all PV installations in the database and (vi) the various applications of PV installations.

  5. Study of data I/O performance on distributed disk system in mask data preparation

    Science.gov (United States)

    Ohara, Shuichiro; Odaira, Hiroyuki; Chikanaga, Tomoyuki; Hamaji, Masakazu; Yoshioka, Yasuharu

    2010-09-01

    Data volume is growing every day in Mask Data Preparation (MDP), while faster data handling is always required. An MDP flow typically introduces a Distributed Processing (DP) system to meet this demand, because using hundreds of CPUs is a reasonable solution. However, even if the number of CPUs were increased, throughput might saturate because hard disk I/O and network speeds can become bottlenecks. MDP therefore needs to invest heavily not only in hundreds of CPUs but also in storage and network devices to make the throughput faster. NCS introduces a new distributed processing system called "NDE", a distributed disk system that improves throughput without a large investment because it is designed to use multiple conventional hard drives appropriately over the network. In this paper, NCS studies I/O performance with the OASIS® data format on NDE, which contributes to realizing high throughput.

  6. How well do we understand nitrous oxide emissions from open-lot cattle systems?

    Science.gov (United States)

    Nitrous oxide is an important greenhouse gas that is produced in manure. Open lot beef cattle feedyards emit nitrous oxide but little information is available about exactly how much is produced. This has become an important research topic because of environmental concerns. Only a few methods are ava...

  7. Design of special purpose database for credit cooperation bank business processing network system

    Science.gov (United States)

    Yu, Yongling; Zong, Sisheng; Shi, Jinfa

    2011-12-01

    With the popularization of e-finance in cities, its construction is shifting to the vast rural market and developing rapidly in depth. Developing a business processing network system suitable for rural credit cooperative banks makes business processing convenient and has good application prospects. In this paper, we analyse the necessity of adopting a special purpose distributed database in a credit cooperation bank system, give the corresponding distributed database system structure, and design the special purpose database and interface technology. The application in Tongbai Rural Credit Cooperatives has shown that the system has better performance and higher efficiency.

  8. Bioinformatics Database Tools in Analysis of Genetics of Neurodevelopmental Disorders

    Directory of Open Access Journals (Sweden)

    Dibyashree Mallik

    2017-10-01

    Full Text Available Bioinformatics tools are now used in many sectors of biology. Many questions regarding neurodevelopmental disorders, which have recently emerged as a major health issue, can be addressed using bioinformatics databases. Schizophrenia is one such mental disorder that has become a major threat to young people, as it mostly appears during late adolescence or early adulthood. Databases like DISGENET, GWAS, PHARMGKB, and DRUGBANK hold huge repositories of genes associated with schizophrenia. We found many genes associated with schizophrenia, but approximately 200 genes are present in any of these databases. After a further screening process, 20 genes were found to be highly associated with each other and are also common to many other diseases. All of them also serve as common target genes of many antipsychotic drugs. Analysis of their biological properties and molecular functions shows that these 20 genes are mostly involved in biological regulation processes and have receptor activity, belonging mainly to the receptor protein class. Among these 20 genes, CYP2C9, CYP3A4, DRD2, HTR1A and HTR2A are the main target genes of most antipsychotic drugs and are associated with more than 40% of the diseases. The basic findings of the present study suggest that a suitable combination drug targeting these genes can be designed for better treatment of schizophrenia.

  9. Tracking Traffickers. The IAEA Incident and Trafficking Database

    International Nuclear Information System (INIS)

    Webb, Greg

    2013-01-01

    Radioactive material is missing from a hospital. Contaminated metal is found in a scrap yard. Smugglers try to peddle nuclear-weapon-usable material. These different scenarios illustrate the risks that these materials can pose to human safety and security. To assess those risks and to develop strategies to reduce them, States must understand the implications and the scope of such incidents that are occurring around the world. To better understand and respond to these events, the IAEA maintains an Incident and Trafficking Database (ITDB) which collects information from 122 participating States and some select international organizations. They are asked to share data on a voluntary basis about incidents in which nuclear and other radioactive material has fallen "out of regulatory control." This could mean reporting cases of material that has gone missing, or discoveries of material where none was expected. The cases range from the innocent misplacement of industrial radioactive sources to criminal smuggling efforts which could aid terrorist acts. This information is shared among ITDB participants, and IAEA analysts try to identify trends and characteristics that could help prevent the misuse of these potentially dangerous materials. "The ITDB has become an internationally recognized tool for States to study the extent and nature of these incidents," said John Hilliard, head of the Information Management and Coordination Section that administers the database. "We've learned a lot by studying them, and we hope the information helps us prevent accidents or crimes in the future." The IAEA established the database in 1995 after States became alarmed by a growing number of trafficking incidents in the early 1990s. The service was originally operated by the Department of Safeguards, but later moved to the Department of Nuclear Safety and Security, where the Office of Nuclear Security now administers all the data collection and analysis

  10. Development of reliability database for safety-related I and C component based on operating experience of KSNP

    International Nuclear Information System (INIS)

    Jang, S. C.; Han, S. H.; Min, K. R.

    2001-01-01

    A reliability database for safety-related I and C components has been developed, based on domestic operating experience totalling 8.63 years from four units: Yonggwang Units 3 and 4, and Ulchin Units 3 and 4. This plant-specific data on safety-related I and C components was compared with operating experience for CE-supplied plants in the U.S.A. As a result, we found that on the whole the domestic reliability data was similar to that of CE-supplied plants in the USA, though lots of failures that occurred early in commercial operation were included in our analyses without percolation

  11. Design Schematics for a Sustainable Parking Lot: Building 2-2332, ENRD Classroom, Fort Bragg, NC

    National Research Council Canada - National Science Library

    Stumpf, Annette

    2003-01-01

    ...) was tasked with planning a sustainable design "charrette" to explore and develop alternative parking lot designs that would meet Fort Bragg's parking needs, as well as its need to meet sustainable...

  12. Advanced technologies for scalable ATLAS conditions database access on the grid

    International Nuclear Information System (INIS)

    Basset, R; Canali, L; Girone, M; Hawkings, R; Valassi, A; Viegas, F; Dimitrov, G; Nevski, P; Vaniachine, A; Walker, R; Wong, A

    2010-01-01

    During massive data reprocessing operations an ATLAS Conditions Database application must support concurrent access from numerous ATLAS data processing jobs running on the Grid. By simulating realistic work-flow, ATLAS database scalability tests provided feedback for Conditions Db software optimization and allowed precise determination of required distributed database resources. In distributed data processing one must take into account the chaotic nature of Grid computing characterized by peak loads, which can be much higher than average access rates. To validate database performance at peak loads, we tested database scalability at very high concurrent jobs rates. This has been achieved through coordinated database stress tests performed in series of ATLAS reprocessing exercises at the Tier-1 sites. The goal of database stress tests is to detect scalability limits of the hardware deployed at the Tier-1 sites, so that the server overload conditions can be safely avoided in a production environment. Our analysis of server performance under stress tests indicates that Conditions Db data access is limited by the disk I/O throughput. An unacceptable side-effect of the disk I/O saturation is a degradation of the WLCG 3D Services that update Conditions Db data at all ten ATLAS Tier-1 sites using the technology of Oracle Streams. To avoid such bottlenecks we prototyped and tested a novel approach for database peak load avoidance in Grid computing. Our approach is based upon the proven idea of pilot job submission on the Grid: instead of the actual query, an ATLAS utility library sends to the database server a pilot query first.
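
    The pilot-query idea described above can be illustrated with a small sketch. Everything here (function names, the load threshold, the backoff policy) is an invented illustration of the general pattern, not the ATLAS utility library's actual API:

    ```python
    # Hedged sketch of peak-load avoidance via a pilot query: before submitting
    # the real (expensive) query, a cheap probe checks server load, and the
    # client backs off while the server is busy. All names are hypothetical.
    import time

    def run_with_pilot(query, execute, load_probe, max_load=0.8,
                       retries=3, backoff=0.1):
        """Run `execute(query)` only when `load_probe()` reports load at or
        below `max_load`; otherwise back off exponentially and retry."""
        for attempt in range(retries):
            if load_probe() <= max_load:
                return execute(query)
            time.sleep(backoff * (2 ** attempt))  # wait out the peak
        raise RuntimeError("server overloaded; query deferred")
    ```

    The design choice mirrors pilot-job submission on the Grid: the probe is cheap enough to send unconditionally, so the expensive work only lands on the server when it can absorb it.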

  13. Update History of This Database - Arabidopsis Phenome Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us Arabidopsis Phenome Database Update History of This Database Date Update contents 2017/02/27 Arabidopsis Phenome Database English archive site is opened. - Arabidopsis Phenome Database (http://jphenome.info/?page_id=95) is opened. About This Database Database Description Download License Update History of This Database Site Policy | Contact Us Update History of This Database - Arabidopsis Phenome Database | LSDB Archive ...

  14. Evaluating the Capability of Grass Swale for the Rainfall Runoff Reduction from an Urban Parking Lot, Seoul, Korea

    OpenAIRE

    Muhammad Shafique; Reeho Kim; Kwon Kyung-Ho

    2018-01-01

    This field study elaborates the role of grass swale in the management of stormwater in an urban parking lot. Grass swale was constructed by using different vegetations and local soil media in the parking lot of Mapu-gu Seoul, Korea. In this study, rainfall runoff was first retained in soil and the vegetation layers of the grass swale, and then infiltrated rainwater was collected with the help of underground perforated pipe, and passed to an underground storage trench. In this way, grass swale...

  15. COPEPOD: The Coastal & Oceanic Plankton Ecology, Production, & Observation Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Coastal & Oceanic Plankton Ecology, Production, & Observation Database (COPEPOD) provides NMFS scientists with quality-controlled, globally distributed...

  16. Refactoring databases evolutionary database design

    CERN Document Server

    Ambler, Scott W

    2006-01-01

    Refactoring has proven its value in a wide range of development projects–helping software professionals improve system designs, maintainability, extensibility, and performance. Now, for the first time, leading agile methodologist Scott Ambler and renowned consultant Pramodkumar Sadalage introduce powerful refactoring techniques specifically designed for database systems. Ambler and Sadalage demonstrate how small changes to table structures, data, stored procedures, and triggers can significantly enhance virtually any database design–without changing semantics. You’ll learn how to evolve database schemas in step with source code–and become far more effective in projects relying on iterative, agile methodologies. This comprehensive guide and reference helps you overcome the practical obstacles to refactoring real-world databases by covering every fundamental concept underlying database refactoring. Using start-to-finish examples, the authors walk you through refactoring simple standalone databas...

  17. CERN database services for the LHC computing grid

    International Nuclear Information System (INIS)

    Girone, M

    2008-01-01

    Physics meta-data stored in relational databases play a crucial role in the Large Hadron Collider (LHC) experiments and also in the operation of the Worldwide LHC Computing Grid (WLCG) services. A large proportion of non-event data such as detector conditions, calibration, geometry and production bookkeeping relies heavily on databases. Also, the core Grid services that catalogue and distribute LHC data cannot operate without a reliable database infrastructure at CERN and elsewhere. The Physics Services and Support group at CERN provides database services for the physics community. With an installed base of several TB-sized database clusters, the service is designed to accommodate growth for data processing generated by the LHC experiments and LCG services. During the last year, the physics database services went through a major preparation phase for LHC start-up and are now fully based on Oracle clusters on Intel/Linux. Over 100 database server nodes are deployed today in some 15 clusters serving almost 2 million database sessions per week. This paper will detail the architecture currently deployed in production and the results achieved in the areas of high availability, consolidation and scalability. Service evolution plans for the LHC start-up will also be discussed

  18. CERN database services for the LHC computing grid

    Energy Technology Data Exchange (ETDEWEB)

    Girone, M [CERN IT Department, CH-1211 Geneva 23 (Switzerland)], E-mail: maria.girone@cern.ch

    2008-07-15

    Physics meta-data stored in relational databases play a crucial role in the Large Hadron Collider (LHC) experiments and also in the operation of the Worldwide LHC Computing Grid (WLCG) services. A large proportion of non-event data such as detector conditions, calibration, geometry and production bookkeeping relies heavily on databases. Also, the core Grid services that catalogue and distribute LHC data cannot operate without a reliable database infrastructure at CERN and elsewhere. The Physics Services and Support group at CERN provides database services for the physics community. With an installed base of several TB-sized database clusters, the service is designed to accommodate growth for data processing generated by the LHC experiments and LCG services. During the last year, the physics database services went through a major preparation phase for LHC start-up and are now fully based on Oracle clusters on Intel/Linux. Over 100 database server nodes are deployed today in some 15 clusters serving almost 2 million database sessions per week. This paper will detail the architecture currently deployed in production and the results achieved in the areas of high availability, consolidation and scalability. Service evolution plans for the LHC start-up will also be discussed.

  19. Update History of This Database - SKIP Stemcell Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us SKIP Stemcell Database Update History of This Database Date Update contents 2017/03/13 SKIP Stemcell Database English archive site is opened. 2013/03/29 SKIP Stemcell Database ( https://www.skip.med.keio.ac.jp/SKIPSearch/top?lang=en ) is opened. About This Database Database Description Download License Update History of This Database Site Policy | Contact Us Update History of This Database - SKIP Stemcell Database | LSDB Archive ...

  20. Choosing a Cluster Sampling Design for Lot Quality Assurance Sampling Surveys

    OpenAIRE

    Hund, Lauren; Bedrick, Edward J.; Pagano, Marcello

    2015-01-01

    Lot quality assurance sampling (LQAS) surveys are commonly used for monitoring and evaluation in resource-limited settings. Recently several methods have been proposed to combine LQAS with cluster sampling for more timely and cost-effective data collection. For some of these methods, the standard binomial model can be used for constructing decision rules as the clustering can be ignored. For other designs, considered here, clustering is accommodated in the design phase. In this paper, we comp...
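
    For context, the standard binomial decision rule mentioned above can be sketched as follows; the 19/12 design in the usage note is the classic LQAS example, and all function and parameter names are illustrative:

    ```python
    # Sketch of a classical (non-clustered) LQAS decision rule: sample n
    # subjects, accept the lot if more than d "successes" are observed.
    # Risk calculations use the exact binomial distribution.
    from math import comb

    def binom_cdf(k, n, p):
        """P(X <= k) for X ~ Binomial(n, p)."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

    def lqas_decision(successes, d):
        """Accept the lot when the success count exceeds the decision rule d."""
        return "accept" if successes > d else "reject"

    def misclassification_risks(n, d, p_low, p_high):
        """Alpha: P(accept | true coverage p_low is unacceptably low).
        Beta: P(reject | true coverage p_high is acceptably high)."""
        alpha = 1 - binom_cdf(d, n, p_low)
        beta = binom_cdf(d, n, p_high)
        return alpha, beta
    ```

    With the widely used n = 19, d = 12 design and thresholds of 50% and 80% coverage, both misclassification risks come out below 10%, which is why this design recurs in the LQAS literature; the paper's contribution concerns what changes when clustering must be built into the design phase.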

  1. Native Pig and Chicken Breed Database: NPCDB

    Directory of Open Access Journals (Sweden)

    Hyeon-Soo Jeong

    2014-10-01

    Full Text Available Indigenous (native) breeds of livestock have higher disease resistance and adaptation to the environment due to high genetic diversity. Even though their extinction rate is accelerating due to the increase of commercial breeds, natural disasters, and civil war, well-established databases for the native breeds are lacking. Thus, we constructed the native pig and chicken breed database (NPCDB), which integrates available information on the breeds from around the world. It is a nonprofit public database aimed at providing information on the genetic resources of indigenous pig and chicken breeds for their conservation. The NPCDB (http://npcdb.snu.ac.kr/) provides the phenotypic information and population size of each breed as well as its specific habitat. In addition, it provides information on the distribution of genetic resources across the country. The database will contribute to understanding of each breed’s characteristics, such as disease resistance and adaptation to environmental changes, as well as the conservation of indigenous genetic resources.

  2. 3. A 40-years record of the polymetallic pollution of the Lot River system, France

    Science.gov (United States)

    Audry, S.; Schäfer, J.; Blanc, G.; Veschambre, S.; Jouanneau, J.-M.

    2003-04-01

    The Lot River system (southwest France) is known for historic Zn and Cd pollution that originates from Zn ore treatment in the small Riou-Mort watershed and affects seafood production in the Gironde Estuary. We present a sedimentary record from 2 cores taken in a dam lake downstream of the Riou-Mort watershed covering the evolution of metal inputs into the Lot River over the past 40 years (1960-2001). Depth profiles of Cd, Zn, Cu and Pb concentrations are comparable, indicating common sources and transport. The constant Zn/Cd ratio (˜50) observed in the sediment cores is similar to that in SPM from the Riou-Mort watershed, indicating the dominance of point source pollution over the geochemical background signal. Cadmium, Zn, Cu and Pb concentrations in the studied sediment cores show an important peak at 42-44 cm depth with up to 300 mg.kg-1 (Cd), 10,000 mg.kg-1 (Zn), 150 mg.kg-1 (Cu) and 930 mg.kg-1 (Pb). These concentrations are much higher than geochemical background values; for example, Cd concentrations are more than 350-fold higher than those measured in the same riverbed upstream of the confluence with the Riou-Mort River. This peak coincides with the upper 137Cs peak resulting from the Chernobyl accident (1986). Therefore, this heavy metal peak is attributed to the latest accidental Cd pollution of the Lot River in 1986. Several downward heavy metal peaks reflect varying inputs, probably due to changes in industrial activities within the Riou-Mort watershed. Given a mean sedimentation rate of about 2 cm.yr-1, the record suggests constant and much lower heavy metal concentrations since the early nineties due to restriction of industrial activities and remediation efforts in the Riou-Mort watershed. Nevertheless, Cd, Zn, Cu and Pb concentrations in the upper sediment remain high compared to background values from reference sites in the upper Lot River system.

  3. Database design and database administration for a kindergarten

    OpenAIRE

    Vítek, Daniel

    2009-01-01

    The bachelor thesis deals with creation of database design for a standard kindergarten, installation of the designed database into the database system Oracle Database 10g Express Edition and demonstration of the administration tasks in this database system. The verification of the database was proved by a developed access application.

  4. Demonstration Assessment of Light-Emitting Diode (LED) Parking Lot Lighting in Leavenworth, KS

    Energy Technology Data Exchange (ETDEWEB)

    Myer, Michael; Kinzey, Bruce R.; Curry, Ku'uipo

    2011-05-06

    This report describes the process and results of a demonstration of solid-state lighting (SSL) technology in a commercial parking lot lighting application, under the U.S. Department of Energy (DOE) Solid-State Lighting Technology GATEWAY Demonstration Program. The parking lot is for customers and employees of a Walmart Supercenter in Leavenworth, Kansas and this installation represents the first use of the LED Parking Lot Performance Specification developed by the DOE’s Commercial Building Energy Alliance. The application is a parking lot covering more than a half million square feet, lighted primarily by light-emitting diodes (LEDs). Metal halide wall packs were installed along the building facade. This site is new construction, so the installed baseline(s) were hypothetical designs. It was acknowledged early on that deviating from Walmart’s typical design would reduce the illuminance on the site. Walmart primarily uses 1000W pulse-start metal halide (PMH) lamps. In order to provide a comparison between both typical design and a design using conventional luminaires providing a lower illuminance, a 400W PMH design was also considered. As mentioned already, the illuminance would be reduced by shifting from the PMH system to the LED system. The Illuminating Engineering Society of North America (IES) provides recommended minimum illuminance values for parking lots. All designs exceeded the recommended illuminance values in IES RP-20, some by a wider margin than others. Energy savings from installing the LED system compared to the different PMH systems varied. Compared to the 1000W PMH system, the LED system would save 63 percent of the energy. However, this corresponds to a 68 percent reduction in illuminance as well. In comparison to the 400W PMH system, the LED system would save 44 percent of the energy and provide similar minimum illuminance values at the time of relamping. The LED system cost more than either of the PMH systems when comparing initial costs

  5. Database Perspectives on Blockchains

    OpenAIRE

    Cohen, Sara; Zohar, Aviv

    2018-01-01

    Modern blockchain systems are a fresh look at the paradigm of distributed computing, applied under assumptions of large-scale public networks. They can be used to store and share information without a trusted central party. There has been much effort to develop blockchain systems for a myriad of uses, ranging from cryptocurrencies to identity control, supply chain management, etc. None of this work has directly studied the fundamental database issues that arise when using blockchains as the u...

  6. A Transactional Asynchronous Replication Scheme for Mobile Database Systems

    Institute of Scientific and Technical Information of China (English)

    丁治明; 孟小峰; 王珊

    2002-01-01

    In mobile database systems, the mobility of users has a significant impact on data replication. As a result, the various replica control protocols that exist today in traditional distributed and multidatabase environments are no longer suitable. To solve this problem, a new mobile database replication scheme, the Transaction-Level Result-Set Propagation (TLRSP) model, is put forward in this paper. The conflict detection and resolution strategy based on TLRSP is discussed in detail, and an implementation algorithm is proposed. In order to compare the performance of the TLRSP model with that of other mobile replication schemes, we have developed a detailed simulation model. Experimental results show that the TLRSP model provides efficient support for replicated mobile database systems by reducing reprocessing overhead and maintaining database consistency.
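    The check-versions-then-commit idea behind result-set propagation can be sketched as follows. This is an illustrative reconstruction, not the paper's algorithm; the function name, the `server` mapping, and the version-number scheme are all invented for the example.

    ```python
    # Illustrative sketch (not the TLRSP paper's implementation): a mobile
    # client records the version of each row it read; on reconnection, its
    # result set commits only if the server-side versions are unchanged,
    # otherwise the conflicting keys are reported for resolution.

    def propagate_result_set(server, result_set):
        """Apply a mobile transaction's result set, detecting write conflicts.

        `server` maps key -> (version, value); `result_set` maps key ->
        (version read on the mobile host, new value written there).
        """
        conflicts = []
        for key, (read_version, _) in result_set.items():
            current_version, _value = server.get(key, (0, None))
            if current_version != read_version:
                conflicts.append(key)        # updated meanwhile on the server
        if conflicts:
            return False, conflicts          # caller resolves, e.g. re-runs txn
        for key, (read_version, new_value) in result_set.items():
            version, _value = server.get(key, (0, None))
            server[key] = (version + 1, new_value)   # commit and bump version
        return True, []

    server = {"a": (3, 10), "b": (1, 5)}
    ok, _ = propagate_result_set(server, {"a": (3, 11)})        # versions match
    stale, keys = propagate_result_set(server, {"a": (3, 12)})  # "a" is now v4
    ```

    A real scheme would also have to make the commit loop atomic with respect to other server transactions; the sketch only shows the detection logic.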

  7. Report on the present situation of the FY 1998 technical literature database; 1998 nendo gijutsu bunken database nado genjo chosa

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1999-03-01

    To study databases that will contribute to the future distribution of scientific and technical information, the present status of the service supply side was surveyed and analyzed. The survey of database trends examined the relations between DB producers and distributors. It found an increase in DB producers and an expansion of internet distribution and services, with no change in the U.S.-centered structure. It was also recognized that DB services in the internet age are at a turning point, as seen in existing producers' responses to the internet, the bringing online of primary information sources, and the creation of new online services. Under the impact of the internet, the following are predicted for future DB services: a slump for producers without strong points and for gateway-type distributors, and the appearance of new types of DB service. (NEDO)

  8. Database Description - Open TG-GATEs Pathological Image Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available General information: Database name: Open TG-GATEs Pathological Image Database. Alternative name: -. DOI: 10.18908/lsdba.nbdc00954-0... Contact: National Institute of Biomedical Innovation, 7-6-8 Saito-asagi, Ibaraki-city, Osaka 567-0085, Japan. TEL: 81-72-641-9826. Database classification: Toxicogenomics Database. Organism: Rattus norvegicus.

  9. 78 FR 43753 - Inspection and Weighing of Grain in Combined and Single Lots

    Science.gov (United States)

    2013-07-22

    ... USGSA regulations for shiplots, unit trains, and lash barges. This final rule allows for breaks in... the loading of the lot must be reasonably continuous, with no consecutive break in loading to exceed... superseded; (iii) The location of the grain, if at rest, or the name(s) of the elevator(s) from which or into...

  10. A basic period approach to the economic lot scheduling problem with shelf life considerations

    NARCIS (Netherlands)

    Soman, C.A.; van Donk, D.P.; Gaalman, G.J.C.

    2004-01-01

    Almost all the research on the economic lot scheduling problem (ELSP) that considers the limited shelf life of products has assumed a common cycle approach and the unrealistic assumption that the production rate can be deliberately reduced. In many cases, as in the food processing industry where

  11. An Updating System for the Gridded Population Database of China Based on Remote Sensing, GIS and Spatial Database Technologies

    Directory of Open Access Journals (Sweden)

    Xiaohuan Yang

    2009-02-01

    Full Text Available The spatial distribution of population is closely related to land use and land cover (LULC) patterns on both regional and global scales. Population can be redistributed onto geo-referenced square grids according to this relation. In the past decades, various approaches to monitoring LULC using remote sensing and Geographic Information Systems (GIS) have been developed, which makes it possible for efficient updating of geo-referenced population data. A Spatial Population Updating System (SPUS) is developed for updating the gridded population database of China based on remote sensing, GIS and spatial database technologies, with a spatial resolution of 1 km by 1 km. The SPUS can process standard Moderate Resolution Imaging Spectroradiometer (MODIS L1B) data integrated with a Pattern Decomposition Method (PDM) and an LULC-Conversion Model to obtain patterns of land use and land cover, and provide input parameters for a Population Spatialization Model (PSM). The PSM embedded in SPUS is used for generating 1 km by 1 km gridded population data in each population distribution region based on natural and socio-economic variables. Validation results from finer township-level census data of Yishui County suggest that the gridded population database produced by the SPUS is reliable.

  12. An Updating System for the Gridded Population Database of China Based on Remote Sensing, GIS and Spatial Database Technologies

    Science.gov (United States)

    Yang, Xiaohuan; Huang, Yaohuan; Dong, Pinliang; Jiang, Dong; Liu, Honghui

    2009-01-01

    The spatial distribution of population is closely related to land use and land cover (LULC) patterns on both regional and global scales. Population can be redistributed onto geo-referenced square grids according to this relation. In the past decades, various approaches to monitoring LULC using remote sensing and Geographic Information Systems (GIS) have been developed, which makes it possible for efficient updating of geo-referenced population data. A Spatial Population Updating System (SPUS) is developed for updating the gridded population database of China based on remote sensing, GIS and spatial database technologies, with a spatial resolution of 1 km by 1 km. The SPUS can process standard Moderate Resolution Imaging Spectroradiometer (MODIS L1B) data integrated with a Pattern Decomposition Method (PDM) and an LULC-Conversion Model to obtain patterns of land use and land cover, and provide input parameters for a Population Spatialization Model (PSM). The PSM embedded in SPUS is used for generating 1 km by 1 km gridded population data in each population distribution region based on natural and socio-economic variables. Validation results from finer township-level census data of Yishui County suggest that the gridded population database produced by the SPUS is reliable. PMID:22399959
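    The redistribution step performed by a population spatialization model can be sketched as a weighted allocation of a census total over grid cells. This is a hedged illustration only: the land-use classes and weight values below are invented, not the PSM's calibrated coefficients.

    ```python
    # Hedged sketch of the core idea behind gridded-population updating:
    # redistribute a county's census total onto 1 km cells in proportion
    # to land-use weights. The weights here are invented for illustration;
    # the actual PSM derives them from natural and socio-economic variables.

    LAND_USE_WEIGHT = {"urban": 10.0, "cropland": 2.0, "forest": 0.2, "water": 0.0}

    def spatialize(county_population, cells):
        """cells: list of land-use classes, one per 1 km grid cell."""
        weights = [LAND_USE_WEIGHT[c] for c in cells]
        total = sum(weights)
        if total == 0:
            raise ValueError("no habitable cells in this region")
        return [county_population * w / total for w in weights]

    grid = spatialize(1200, ["urban", "cropland", "forest", "water"])
    # the urban cell receives 10/12.2 of the total; the water cell receives none
    ```

    The allocation is mass-preserving by construction: the gridded values always sum back to the census total for the region.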

  13. Automatic pattern localization across layout database and photolithography mask

    Science.gov (United States)

    Morey, Philippe; Brault, Frederic; Beisser, Eric; Ache, Oliver; Röth, Klaus-Dieter

    2016-03-01

    Advanced-process photolithography masks require more and more controls for registration versus design and for critical dimension uniformity (CDU). The measurement points should be distributed over the whole mask and may be denser in areas critical to wafer overlay requirements. This means that some, if not many, of these controls should be made inside the customer die and may use non-dedicated patterns. It is then mandatory to access the original layout database to select patterns for the metrology process. Finding hundreds of relevant patterns in a database containing billions of polygons may be possible, but in addition the complete metrology job must be created quickly and reliably. Combining, on one hand, software expertise in mask database processing and, on the other hand, advanced skills in control and registration equipment, we have developed a Mask Dataprep Station able to select an appropriate number of measurement targets and their positions in a huge database and to automatically create measurement jobs for the corresponding areas on the mask for the registration metrology system. In addition, the required design clips are generated from the database in order to perform the rendering procedure on the metrology system. This new methodology has been validated on a real production line for the most advanced processes. This paper presents the main challenges we have faced, as well as some results on the global performance.

  14. Multi-period fuzzy mean-semi variance portfolio selection problem with transaction cost and minimum transaction lots using genetic algorithm

    Directory of Open Access Journals (Sweden)

    Mohammad Ali Barati

    2016-04-01

    Full Text Available Multi-period models of portfolio selection have been developed in the literature under various assumptions. In this study, for the first time, the portfolio selection problem is modeled on a mean-semivariance basis with transaction costs and minimum transaction lots, considering functional constraints and fuzzy parameters. Functional constraints such as transaction costs and minimum transaction lots were included, and the asset return parameters were modeled as trapezoidal fuzzy numbers. An efficient genetic algorithm (GA) was designed, results were analyzed using numerical instances, and sensitivity analyses were performed. In the numerical study, the problem was solved both with and without each class of constraint, including transaction costs and minimum transaction lots. In addition, sensitivity analysis was used to present the results of the model under variations of the minimum expected rate of return over the planning periods.
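    Two of the model ingredients named above can be illustrated concretely: defuzzifying a trapezoidal fuzzy return and enforcing a minimum-transaction-lot constraint. This is a hedged sketch; the (a+b+c+d)/4 expected value is one common convention for trapezoidal fuzzy numbers (not necessarily the paper's), and all figures are invented.

    ```python
    # Illustrative sketch, not the paper's model: the expected value of a
    # trapezoidal fuzzy return (a, b, c, d) is commonly taken as
    # (a + b + c + d) / 4, and a minimum-transaction-lot constraint rounds
    # holdings down to whole lots. All numbers below are invented.

    def defuzzify(trap):
        a, b, c, d = trap
        return (a + b + c + d) / 4.0

    def round_to_lots(units, min_lot):
        """Enforce the minimum-lot constraint by rounding down to whole lots."""
        return (units // min_lot) * min_lot

    fuzzy_returns = [(0.01, 0.03, 0.05, 0.07), (-0.02, 0.00, 0.02, 0.04)]
    crisp = [defuzzify(r) for r in fuzzy_returns]            # approx [0.04, 0.01]
    holdings = [round_to_lots(u, 100) for u in (257, 1480)]  # [200, 1400]
    ```

    In a GA such as the one the paper designs, a rounding step like `round_to_lots` is a typical repair operator that maps an infeasible chromosome back into the feasible region.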

  15. A DESIGN STUDY OF AN INNOVATIVE BARRIER SYSTEM FOR PERSONAL PARKING LOTS

    OpenAIRE

    BÖRKLÜ, Hüseyin; KALYON, Sadık

    2018-01-01

    The increase in the number of cars has made it necessary to protect parking areas. This research includes a literature review of commercially available barriers, namely arm barriers, rising bollards, chain barriers, and automatic and manual private barriers, from the point of view of common and side-by-side parking lots. Their advantages and disadvantages are evaluated. After the literature review, a design requirements list for a car park protector, which includes important and strong properties of ...

  16. Transgressions des frontières maritimes. Le cas des îlots du Dodécanèse

    Directory of Open Access Journals (Sweden)

    Evdokia Olympitou

    2009-01-01

    Full Text Available In this text we observe that the islets of the Aegean Sea that lie a short distance from inhabited islands provided complementary support to the inhabitants of the neighboring islands, offering them a few extra acres of land for cultivation or livestock and a few miles of coastline for fishing. On these small pieces of land, settlement never had permanent characteristics that could have led to the formation of lasting communities. Crossing the maritime boundary required know-how and means of navigation that even island societies which never exploited the sea for their survival, that is, societies of farmers and herders who did not travel and were not familiar with the sea, could nevertheless acquire. Given that human intervention and use are what shape the physiognomy of a space, the "large" inhabited island has always been the point of reference for each islet. Islets "attached" to the neighboring island, such as Telendos, Alimnia and Saria in the Dodecanese, followed the fate of their neighbor, unless some particularity, as in the case of the islet of Gyali, gave them a different kind of evolution.

  17. Two parameter-tuned metaheuristic algorithms for the multi-level lot sizing and scheduling problem

    Directory of Open Access Journals (Sweden)

    S.M.T. Fatemi Ghomi

    2012-10-01

    Full Text Available This paper addresses the lot sizing and scheduling problem for n products and m machines in a flow shop environment where setups among machines are sequence-dependent and can be carried over. Many products must be produced under capacity constraints, and backorders are allowed. Since lot sizing and scheduling problems are well known to be strongly NP-hard, much attention has been given to heuristic and metaheuristic methods. This paper presents two metaheuristic algorithms, namely a Genetic Algorithm (GA) and an Imperialist Competitive Algorithm (ICA). Moreover, Taguchi robust design methodology is employed to calibrate the parameters of the algorithms for different problem sizes. In addition, the parameter-tuned algorithms are compared against a presented lower bound on randomly generated problems. Finally, comprehensive numerical examples are presented to demonstrate the effectiveness of the proposed algorithms. The results show that the performance of both GA and ICA is very promising and that ICA statistically outperforms GA.

  18. The Neotoma Paleoecology Database

    Science.gov (United States)

    Grimm, E. C.; Ashworth, A. C.; Barnosky, A. D.; Betancourt, J. L.; Bills, B.; Booth, R.; Blois, J.; Charles, D. F.; Graham, R. W.; Goring, S. J.; Hausmann, S.; Smith, A. J.; Williams, J. W.; Buckland, P.

    2015-12-01

    The Neotoma Paleoecology Database (www.neotomadb.org) is a multiproxy, open-access, relational database that includes fossil data for the past 5 million years (the late Neogene and Quaternary Periods). Modern distributional data for various organisms are also being made available for calibration and paleoecological analyses. The project is a collaborative effort among individuals from more than 20 institutions worldwide, including domain scientists representing a spectrum of Pliocene-Quaternary fossil data types, as well as experts in information technology. Working groups are active for diatoms, insects, ostracodes, pollen and plant macroscopic remains, testate amoebae, rodent middens, vertebrates, age models, geochemistry and taphonomy. Groups are also active in developing online tools for data analyses and modules for teaching at different levels. A key design concept of NeotomaDB is that stewards for various data types are able to remotely upload and manage data. Cooperatives for different kinds of paleo data, or from different regions, can appoint their own stewards. Over the past year, much progress has been made on the development of the steward software interface that will enable this capability. The steward interface uses web services that provide access to the database. More generally, these web services enable remote programmatic access to the database, which both desktop and web applications can use and which provides real-time access to the most current data. Use of these services can alleviate the need to download the entire database, which can be out of date as soon as new data are entered. In general, the Neotoma web services deliver data either from an entire table or from the results of a view. Upon request, new web services can be quickly generated. Future developments will likely expand the spatial and temporal dimensions of the database. NeotomaDB is open to receiving new datasets and stewards from the global Quaternary community.

  19. Update History of This Database - Yeast Interacting Proteins Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Update history of this database: 2010/03/29 - Yeast Interacting Proteins Database English archive site opened. 2000/12/4 - Yeast Interacting Proteins Database ( http://itolab.cb.k.u-tokyo.ac.jp/Y2H/ ) released.

  20. World-wide distribution automation systems

    International Nuclear Information System (INIS)

    Devaney, T.M.

    1994-01-01

    A worldwide power distribution automation system is outlined. Distribution automation is defined and the status of utility automation is discussed. Other topics discussed include distribution management systems; substation, feeder, and customer functions; potential benefits; automation costs; planning and engineering considerations; automation trends; databases; system operation; and computer modeling of systems.

  1. LHCb Conditions database operation assistance systems

    International Nuclear Information System (INIS)

    Clemencic, M; Shapoval, I; Cattaneo, M; Degaudenzi, H; Santinelli, R

    2012-01-01

    The Conditions Database (CondDB) of the LHCb experiment provides versioned, time-dependent geometry and conditions data for all LHCb data processing applications (simulation, high level trigger (HLT), reconstruction, analysis) in a heterogeneous computing environment ranging from user laptops to the HLT farm and the Grid. These different use cases impose front-end support for multiple database technologies (Oracle and SQLite are used). Sophisticated distribution tools are required to ensure timely and robust delivery of updates to all environments, and the content of the database has to be managed to ensure that updates are internally consistent and externally compatible with multiple versions of the physics application software. In this paper we describe three systems that we have developed to address these issues. The first is a CondDB state-tracking extension to the Oracle 3D Streams replication technology, to trap cases where the CondDB replication was corrupted. The second is an automated distribution system for the SQLite-based CondDB, which also provides smart backup and checkout mechanisms for CondDB managers and LHCb users, respectively. The third is a system to verify and monitor internal (CondDB self-consistency) and external (LHCb physics software vs. CondDB) compatibility. The first two systems are used in production in the LHCb experiment and have achieved the desired goal of higher flexibility and robustness for the management and operation of the CondDB. The third has been fully designed and is currently moving into the implementation stage.
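    The core retrieval pattern of a conditions database, a value with an interval of validity, can be sketched with SQLite. The schema, condition name, and payloads below are invented for illustration and are not LHCb's actual CondDB layout.

    ```python
    # Hedged sketch (schema invented for illustration): the essence of a
    # conditions database is a time-dependent lookup. Each value carries an
    # interval of validity [since, until), and a query returns the value
    # valid at a given event time.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("""CREATE TABLE conditions (
        name TEXT, since INTEGER, until INTEGER, payload TEXT)""")
    db.executemany("INSERT INTO conditions VALUES (?, ?, ?, ?)", [
        ("Ecal/Gain", 0,   100, "calib-v1"),
        ("Ecal/Gain", 100, 500, "calib-v2"),
    ])

    def valid_at(name, t):
        """Return the payload whose interval of validity contains time t."""
        row = db.execute(
            "SELECT payload FROM conditions WHERE name=? AND since<=? AND ?<until",
            (name, t, t)).fetchone()
        return row[0] if row else None

    payload = valid_at("Ecal/Gain", 250)
    ```

    Versioning, as in the real CondDB, would add a tag dimension on top of this interval lookup so that several consistent snapshots of the same conditions can coexist.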

  2. Database Description - RMOS | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available General information: Database name: RMOS. Contact: Shoshi Kikuchi. Database classification: Plant databases - Rice; Microarray Data and other Gene Expression Databases. Organism: Oryza sativa (Taxonomy ID: 4530). Referenced databases: Rice Expression Database (RED), Rice full-length cDNA Database (KOME), Rice Genome Integrated Map Database (INE), Rice Mutant Panel Database (Tos17), Rice Genome Annotation Database.

  3. The incidence of Grey Literature in online databases : a quantitative analysis

    OpenAIRE

    Luzi, Daniela (CNR-ISRDS); GreyNet, Grey Literature Network Service

    1994-01-01

    This study aims to verify the diffusion and distribution of Grey Literature (GL) documents in commercially available online databases. It has been undertaken due to the growing importance of GL in the field of information and documentation, on the one hand, and the increasing supply of online databases, on the other hand. The work is divided into two parts. The first provides the results of a previous quantitative analysis of databases containing GL documents. Using a top-down methodology, i....

  4. Quality assurance database for the CBM silicon tracking system

    Energy Technology Data Exchange (ETDEWEB)

    Lymanets, Anton [Physikalisches Institut, Universitaet Tuebingen (Germany); Collaboration: CBM-Collaboration

    2015-07-01

    The Silicon Tracking System is the main tracking device of the CBM experiment at FAIR. Its construction involves the production, quality assurance, and assembly of a large number of components, e.g., 106 carbon fiber support structures, 1300 silicon microstrip sensors, 16.6k readout chips, analog microcables, etc. Detector construction is distributed over several production and assembly sites and calls for a database that is extensible and allows tracing the components, integrating the test data, and monitoring the component statuses and data flow. A possible implementation of the above-mentioned requirements is being developed at GSI (Darmstadt) based on the FAIR DB Virtual Database Library, which provides connectivity to common SQL database engines (PostgreSQL, Oracle, etc.). The data structure, the database architecture, and the status of the implementation are discussed.

  5. The finite horizon economic lot sizing problem in job shops : the multiple cycle approach

    NARCIS (Netherlands)

    Ouenniche, J.; Bertrand, J.W.M.

    2001-01-01

    This paper addresses the multi-product, finite horizon, static demand, sequencing, lot sizing and scheduling problem in a job shop environment where the planning horizon length is finite and fixed by management. The objective pursued is to minimize the sum of setup costs, and work-in-process and

  6. Alternate Methods of Effluent Disposal for On-Lot Home Sewage Systems. Special Circular 214.

    Science.gov (United States)

    Wooding, N. Henry

    This circular provides current information for homeowners who must repair or replace existing on-lot sewage disposal systems. Several alternatives such as elevated sand mounds, sand-lined beds and trenches and oversized absorption areas are discussed. Site characteristics and preparation are outlined. Each alternative is accompanied by a diagram…

  7. A distributed atomic physics database and modeling system for plasma spectroscopy

    International Nuclear Information System (INIS)

    Nash, J.K.; Liedahl, D.; Chen, M.H.; Iglesias, C.A.; Lee, R.W.; Salter, J.M.

    1995-08-01

    We are developing a set of computational capabilities to facilitate the access, manipulation, and understanding of atomic data in calculations for x-ray spectral modeling. In this limited description we emphasize the objectives of the work, the design philosophy, and aspects of the atomic database; a more complete description of this work is available. The project is referred to as the Plasma Spectroscopy Initiative; the computing environment is called PSI, or the ''PSI shell'', since the primary interface resembles a UNIX shell window. The working group consists of researchers in the fields of x-ray plasma spectroscopy, atomic physics, plasma diagnostics, line shape theory, astrophysics, and computer science. To date, our focus has been to develop the software foundations, including the atomic physics database, and to apply the existing capabilities to a range of working problems. These problems have been chosen in part to exercise the overall design and implementation of the shell. For successful implementation the final design must have great flexibility, since our goal is not simply to satisfy our own interests but to provide a tool of general use to the community.

  9. An update of the sorption database. Correction and addition of published literature data

    International Nuclear Information System (INIS)

    Saito, Yoshihiko; Suyama, Tadahiro; Kitamura, Akira; Shibata, Masahiro; Sasamoto, Hiroshi; Ochs, Michael

    2007-07-01

    The Japan Nuclear Cycle Development Institute (JNC) had developed a sorption database (JNC-SDB) containing distribution coefficient (Kd) data of important radioactive elements for bentonite and rocks, in order to define a dataset for evaluating the safety function of retardation by the natural and engineered barriers in the H12 report. JNC subsequently added to the database the sorption data from 1998 to 2003 collected by literature survey. In this report, the Japan Atomic Energy Agency (JAEA) has updated the sorption database: (1) JAEA widely collected sorption data in order to extend the database, adding published data that had not previously been registered; (2) for the convenience of users, the JNC-SDB was partially improved, for example with an automatic graph function; (3) moreover, data-entry errors in part of the JNC-SDB were corrected by reviewing the data according to the guideline 'Evaluating and categorizing the reliability of distribution coefficient values in the sorption database'. The updated JNC-SDB includes 3,205 sorption data for 23 elements that are important for performance assessment. The addition of the sorption data clearly shows the frequency distribution of Kd for some of these elements. (author)
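    A typical use of tabulated Kd values in performance assessment is computing a retardation factor for solute transport through a porous barrier. The formula R = 1 + (rho_b / theta) * Kd is standard, but the parameter values below are illustrative and are not taken from the JNC/JAEA database.

    ```python
    # Standard retardation-factor formula for linear, reversible sorption:
    #   R = 1 + (rho_b / theta) * Kd
    # where rho_b is the dry bulk density, theta the porosity, and Kd the
    # distribution coefficient. The parameter values below are illustrative
    # only; real assessments take Kd from a sorption database such as the SDB.

    def retardation_factor(kd_m3_per_kg, bulk_density_kg_m3, porosity):
        return 1.0 + (bulk_density_kg_m3 / porosity) * kd_m3_per_kg

    # Compacted-bentonite-like illustrative values: rho_b = 1600 kg/m3, theta = 0.4
    r = retardation_factor(kd_m3_per_kg=0.01, bulk_density_kg_m3=1600.0, porosity=0.4)
    # R = 1 + (1600 / 0.4) * 0.01, i.e. approximately 41: the sorbing species
    # migrates roughly 41 times more slowly than a non-sorbing tracer
    ```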

  10. Accessing the quark orbital angular momentum with Wigner distributions

    Energy Technology Data Exchange (ETDEWEB)

    Lorce, Cedric [IPNO, Universite Paris-Sud, CNRS/IN2P3, 91406 Orsay, France and LPT, Universite Paris-Sud, CNRS, 91406 Orsay (France); Pasquini, Barbara [Dipartimento di Fisica, Universita degli Studi di Pavia, Pavia, Italy and Istituto Nazionale di Fisica Nucleare, Sezione di Pavia, Pavia (Italy)

    2013-04-15

    The quark orbital angular momentum (OAM) has been recognized as an important piece of the proton spin puzzle. A lot of effort has been invested in trying to extract it quantitatively from the generalized parton distributions (GPDs) and the transverse-momentum dependent parton distributions (TMDs), which are accessed in high-energy processes and provide three-dimensional pictures of the nucleon. Recently, we have shown that it is more natural to access the quark OAM from the phase-space or Wigner distributions. We discuss the concept of Wigner distributions in the context of quantum field theory and show how they are related to the GPDs and the TMDs. We summarize the different definitions discussed in the literature for the quark OAM and show how they can in principle be extracted from the Wigner distributions.

  11. Accessing the quark orbital angular momentum with Wigner distributions

    International Nuclear Information System (INIS)

    Lorcé, Cédric; Pasquini, Barbara

    2013-01-01

    The quark orbital angular momentum (OAM) has been recognized as an important piece of the proton spin puzzle. A lot of effort has been invested in trying to extract it quantitatively from the generalized parton distributions (GPDs) and the transverse-momentum dependent parton distributions (TMDs), which are accessed in high-energy processes and provide three-dimensional pictures of the nucleon. Recently, we have shown that it is more natural to access the quark OAM from the phase-space or Wigner distributions. We discuss the concept of Wigner distributions in the context of quantum field theory and show how they are related to the GPDs and the TMDs. We summarize the different definitions discussed in the literature for the quark OAM and show how they can in principle be extracted from the Wigner distributions.
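    The phase-space average referred to in these two abstracts can be written explicitly. In the notation common to this literature, with the Wigner distribution \(\rho(x,\vec b_\perp,\vec k_\perp)\) depending on the longitudinal momentum fraction, impact parameter, and transverse momentum, one common form of the quark OAM is the following (normalization conventions vary between papers, so this is a sketch of the structure rather than a unique definition):

    ```latex
    \ell_z \;=\; \int \mathrm{d}x \,\mathrm{d}^2 b_\perp \,\mathrm{d}^2 k_\perp \,
    \left(\vec{b}_\perp \times \vec{k}_\perp\right)_z \,
    \rho\!\left(x, \vec{b}_\perp, \vec{k}_\perp\right)
    ```

    The expression makes the claim of the abstract concrete: the OAM is simply the expectation value of \(\left(\vec b_\perp \times \vec k_\perp\right)_z\) over the phase-space distribution, with no need to pass through GPDs or TMDs separately.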

  12. KALIMER database development (database configuration and design methodology)

    International Nuclear Information System (INIS)

    Jeong, Kwan Seong; Kwon, Young Min; Lee, Young Bum; Chang, Won Pyo; Hahn, Do Hee

    2001-10-01

    The KALIMER database is an advanced database for the integrated management of Liquid Metal Reactor design technology development, using Web applications. It consists of a Results Database, an Inter-Office Communication (IOC) system, a 3D CAD database, a Team Cooperation system, and Reserved Documents. The Results Database holds the research results produced during Phase II of the Liquid Metal Reactor Design Technology Development project of the mid- and long-term nuclear R and D program. IOC is a linkage control system between sub-projects for sharing and integrating research results for KALIMER. The 3D CAD database provides a schematic design overview of KALIMER. The Team Cooperation system informs team members of research cooperation and meetings. Finally, the KALIMER Reserved Documents module was developed to manage the data and documents collected over the course of the project. This report describes the hardware and software features and the database design methodology for KALIMER.

  13. The GIOD Project-Globally Interconnected Object Databases

    CERN Document Server

    Bunn, J J; Newman, H B; Wilkinson, R P

    2001-01-01

    The GIOD (Globally Interconnected Object Databases) Project, a joint effort between Caltech and CERN funded by Hewlett-Packard Corporation, has investigated the use of WAN-distributed object databases and mass storage systems for LHC data. A prototype small-scale LHC data analysis center was constructed using computing resources at Caltech's Center for Advanced Computing Research (CACR). These resources include a 256-CPU HP Exemplar of ~4600 SPECfp95, a 600 TByte High Performance Storage System (HPSS), and local/wide area links based on OC3 ATM. Using the Exemplar, a large number of fully simulated CMS events were produced and used to populate an object database with a complete schema for raw, reconstructed, and analysis objects. The reconstruction software used for this task was based on early codes developed in preparation for the current CMS reconstruction program, ORCA. (6 refs).

  14. WWW scattering matrix database for small mineral particles at 441.6 and 632.8 nm

    International Nuclear Information System (INIS)

    Volten, H.; Munoz, O.; Hovenier, J.W.; Haan, J.F. de; Vassen, W.; Zande, W.J. van der; Waters, L.B.F.M.

    2005-01-01

    We present a new extensive database containing experimental scattering matrix elements as functions of the scattering angle, measured at 441.6 and 632.8 nm for a large collection of micron-sized mineral particles in random orientation. This unique database is accessible through the World Wide Web. Size distribution tables of the particles are also provided, as well as other characteristics relevant to light scattering. The database provides the light scattering community with easily accessible information that is useful for a variety of applications, such as testing theoretical methods and interpreting measurements of scattered radiation. To illustrate the use of the database, we consider cometary observations and compare them with (1) cometary analog data from the database, and (2) results of Mie calculations for homogeneous spheres having the same refractive index and size distribution as the analog data.

  15. Lot-Order Assignment Applying Priority Rules for the Single-Machine Total Tardiness Scheduling with Nonnegative Time-Dependent Processing Times

    Directory of Open Access Journals (Sweden)

    Jae-Gon Kim

    2015-01-01

    Full Text Available Lot-order assignment assigns items in lots being processed to orders so as to fulfill those orders. It is usually performed periodically to meet order due dates, especially in manufacturing industries with long production cycle times such as semiconductor manufacturing. In this paper, we consider the lot-order assignment problem (LOAP) with the objective of minimizing the total tardiness of orders with distinct due dates. We show that the LOAP can be solved optimally by finding an optimal sequence for the single-machine total tardiness scheduling problem with nonnegative time-dependent processing times (SMTTSP-NNTDPT). We also address how priority rules for the SMTTSP can be modified for the SMTTSP-NNTDPT to solve the LOAP. In computational experiments, we discuss the performance of the suggested priority rules and show that the proposed approach outperforms a commercial optimization software package.
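
As a minimal illustration of the priority-rule idea, the classic Earliest Due Date (EDD) rule for the fixed-processing-time single-machine tardiness problem can be sketched as follows. This is a simplified sketch only; the paper's setting involves nonnegative time-dependent processing times and correspondingly modified rules.

```python
# Sketch: Earliest Due Date (EDD) priority rule for single-machine total
# tardiness with FIXED processing times. The paper modifies such rules for
# time-dependent processing times; this version is illustrative only.

def total_tardiness_edd(jobs):
    """jobs: list of (processing_time, due_date) tuples.
    Returns (job sequence as index list, total tardiness)."""
    order = sorted(range(len(jobs)), key=lambda i: jobs[i][1])  # by due date
    t = tardiness = 0
    for i in order:
        p, d = jobs[i]
        t += p                       # completion time after job i
        tardiness += max(0, t - d)   # lateness beyond the due date
    return order, tardiness
```

For instance, jobs [(3, 4), (2, 2), (5, 10)] are sequenced [1, 0, 2] with total tardiness 1.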

  16. Process evaluation distributed system

    Science.gov (United States)

    Moffatt, Christopher L. (Inventor)

    2006-01-01

    The distributed system includes a database server, an administration module, a process evaluation module, and a data display module. The administration module is in communication with the database server for providing observation criteria information to the database server. The process evaluation module is in communication with the database server for obtaining the observation criteria information from the database server and collecting process data based on that information; it utilizes a personal digital assistant (PDA). The data display module is in communication with the database server and includes a website for viewing collected process data in a desired metrics form; it also provides editing and modification of the collected process data. The connectivity established by the database server to the administration, process evaluation, and data display modules minimizes the requirement for manual input of the collected process data.

  17. The Determination of Production and Distribution Policy in Push-Pull Production Chain with Supply Hub as the Junction Point

    Science.gov (United States)

    Sinaga, A. T.; Wangsaputra, R.

    2018-03-01

    The development of technology causes the needs for products and services to become increasingly complex, diverse, and fluctuating, which increases the level of inter-company dependency within production chains. To be able to compete, efficiency improvements need to be made collaboratively across the production chain network. One such effort is to harmonize production and distribution activities in the production chain network. This paper describes the harmonization of production and distribution activities by applying a push-pull system and a supply hub in the production chain between two companies. The research methodology begins with empirical and literature studies, followed by formulating research questions, developing mathematical models, conducting trials and analyses, and drawing conclusions. The relationship between the two companies is described in an MINLP mathematical model with the total cost of the production chain as the objective function. The decisions generated by the model are the size of the production lot, the size of the delivery lot, the number of kanban, the frequency of delivery, and the number of understock and overstock lots.

  18. Database Description - SAHG | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available base Description General information of database Database name SAHG Alternative nam...h: Contact address Chie Motono Tel : +81-3-3599-8067 E-mail : Database classification Structure Databases - ...e databases - Protein properties Organism Taxonomy Name: Homo sapiens Taxonomy ID: 9606 Database description... Links: Original website information Database maintenance site The Molecular Profiling Research Center for D...stration Not available About This Database Database Description Download License Update History of This Database Site Policy | Contact Us Database Description - SAHG | LSDB Archive ...

  19. Interconnecting heterogeneous database management systems

    Science.gov (United States)

    Gligor, V. D.; Luckenbaugh, G. L.

    1984-01-01

    It is pointed out that there is still a great need for the development of improved communication between remote, heterogeneous database management systems (DBMS). Problems regarding the effective communication between distributed DBMSs are primarily related to significant differences between local data managers, local data models and representations, and local transaction managers. A system of interconnected DBMSs which exhibit such differences is called a network of distributed, heterogeneous DBMSs. In order to achieve effective interconnection of remote, heterogeneous DBMSs, the users must have uniform, integrated access to the different DBMSs. The present investigation is mainly concerned with an analysis of the existing approaches to interconnecting heterogeneous DBMSs, taking into account four experimental DBMS projects.

  20. Florabank1: a grid-based database on vascular plant distribution in the northern part of Belgium (Flanders and the Brussels Capital region

    Directory of Open Access Journals (Sweden)

    Wouter Van Landuyt

    2012-05-01

    Full Text Available Florabank1 is a database that contains distributional data on the wild flora (indigenous species, archeophytes and naturalised aliens) of Flanders and the Brussels Capital Region. It holds about 3 million records of vascular plants, dating from 1800 until the present. Furthermore, it includes ecological data on vascular plant species, red list category information, Ellenberg values, legal status, global distribution, seed bank data, etc. The database is an initiative of “Flo.Wer” (www.plantenwerkgroep.be), the Research Institute for Nature and Forest (INBO: www.inbo.be) and the National Botanic Garden of Belgium (www.br.fgov.be). Florabank aims at centralizing botanical distribution data gathered by both professional and amateur botanists and making these data available for the benefit of nature conservation, policy and scientific research. The occurrence data contained in Florabank1 are extracted from checklists, literature and herbarium specimen information. For survey lists, the locality name (verbatimLocality), species name, observation date and IFBL square code (the grid system used for plant mapping in Belgium; Van Rompaey 1943) are recorded. For records dating from the period 1972–2004, all pertinent botanical journals dealing with the Belgian flora were systematically screened. Analysis of herbarium specimens in the collections of the National Botanic Garden of Belgium, the University of Ghent and the University of Liège provided valuable distribution knowledge concerning rare species; this information is also included in Florabank1. The data recorded before 1972 are available through the Belgian GBIF node (http://data.gbif.org/datasets/resource/10969/), not through Florabank1, to avoid duplication of information. A dedicated portal providing access to all published Belgian IFBL records is currently available at http://projects.biodiversity.be/ifbl. All data in Florabank1 are georeferenced. Every record holds the decimal centroid coordinates of the

  1. Negotiation-based Order Lot-Sizing Approach for Two-tier Supply Chain

    Science.gov (United States)

    Chao, Yuan; Lin, Hao Wen; Chen, Xili; Murata, Tomohiro

    This paper focuses on a negotiation based collaborative planning process for the determination of order lot-size over multi-period planning, and confined to a two-tier supply chain scenario. The aim is to study how negotiation based planning processes would be used to refine locally preferred ordering patterns, which would consequently affect the overall performance of the supply chain in terms of costs and service level. Minimal information exchanges in the form of mathematical models are suggested to represent the local preferences and used to support the negotiation processes.

  2. Hydrologic and Pollutant Removal Performance of a Full-Scale, Fully Functional Permeable Pavement Parking Lot

    Science.gov (United States)

    In accordance with the need for full-scale, replicated studies of permeable pavement systems used in their intended application (parking lot, roadway, etc.) across a range of climatic events, daily usage conditions, and maintenance regimes to evaluate these systems, the EPA’s Urb...

  3. An efficient computational method for a stochastic dynamic lot-sizing problem under service-level constraints

    NARCIS (Netherlands)

    Tarim, S.A.; Ozen, U.; Dogru, M.K.; Rossi, R.

    2011-01-01

    We provide an efficient computational approach to solve the mixed integer programming (MIP) model developed by Tarim and Kingsman [8] for solving a stochastic lot-sizing problem with service level constraints under the static–dynamic uncertainty strategy. The effectiveness of the proposed method

  4. ADILE: Architecture of a database-supported learning environment

    NARCIS (Netherlands)

    Hiddink, G.W.

    2001-01-01

    This article proposes an architecture for distributed learning environments that use databases to store learning material. As the layout of learning material can inhibit reuse, the architecture implements the notion of "separation of layout and structure" using XML technology. Also, the

  5. Database Description - PSCDB | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available abase Description General information of database Database name PSCDB Alternative n...rial Science and Technology (AIST) Takayuki Amemiya E-mail: Database classification Structure Databases - Protein structure Database...554-D558. External Links: Original website information Database maintenance site Graduate School of Informat...available URL of Web services - Need for user registration Not available About This Database Database Descri...ption Download License Update History of This Database Site Policy | Contact Us Database Description - PSCDB | LSDB Archive ...

  6. Database Description - ASTRA | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available abase Description General information of database Database name ASTRA Alternative n...tics Journal Search: Contact address Database classification Nucleotide Sequence Databases - Gene structure,...3702 Taxonomy Name: Oryza sativa Taxonomy ID: 4530 Database description The database represents classified p...(10):1211-6. External Links: Original website information Database maintenance site National Institute of Ad... for user registration Not available About This Database Database Description Dow

  7. An integrated supply chain inventory model with imperfect-quality items, controllable lead time and distribution-free demand

    Directory of Open Access Journals (Sweden)

    Lin Hsien-Jen

    2013-01-01

    Full Text Available In this paper, we consider an integrated vendor-buyer inventory policy for a continuous review model with a random number of defective items and a screening process performed at a fixed screening rate on the buyer’s arriving order lot. We assume that shortages are allowed and partially backlogged on the buyer’s side, and that the lead time demand distribution is unknown, except for its first two moments. The objective is to apply the min-max distribution-free approach to determine the optimal order quantity, reorder point, lead time and number of lots delivered in one production run simultaneously, so that the expected total system cost is minimized. Numerical experiments along with sensitivity analysis were performed to illustrate the effects of the parameters on the decisions and the total system cost.
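
The min-max distribution-free step rests on Scarf's bound: with only the mean μ and standard deviation σ of lead time demand known, the expected shortage at reorder point r satisfies E[(X − r)+] ≤ ½(√(σ² + (r − μ)²) − (r − μ)). A minimal sketch of the bound (the numbers in the usage note are illustrative, not from the paper):

```python
import math

def shortage_upper_bound(mu, sigma, r):
    """Scarf's distribution-free upper bound on expected shortage E[(X-r)^+]
    at reorder point r, given only the mean mu and standard deviation sigma
    of lead time demand:  0.5 * (sqrt(sigma^2 + (r - mu)^2) - (r - mu))."""
    return 0.5 * (math.sqrt(sigma**2 + (r - mu)**2) - (r - mu))
```

At r = μ the bound reduces to σ/2, the worst case over all distributions with those two moments; raising r tightens it.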

  8. Database Description - RPD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available ase Description General information of database Database name RPD Alternative name Rice Proteome Database...titute of Crop Science, National Agriculture and Food Research Organization Setsuko Komatsu E-mail: Database... classification Proteomics Resources Plant databases - Rice Organism Taxonomy Name: Oryza sativa Taxonomy ID: 4530 Database... description Rice Proteome Database contains information on protei...and entered in the Rice Proteome Database. The database is searchable by keyword,

  9. Database Description - PLACE | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available abase Description General information of database Database name PLACE Alternative name A Database...Kannondai, Tsukuba, Ibaraki 305-8602, Japan National Institute of Agrobiological Sciences E-mail : Databas...e classification Plant databases Organism Taxonomy Name: Tracheophyta Taxonomy ID: 58023 Database...99, Vol.27, No.1 :297-300 External Links: Original website information Database maintenance site National In...- Need for user registration Not available About This Database Database Descripti

  10. Study on establishing database system for marine geological and geophysical data 1

    Energy Technology Data Exchange (ETDEWEB)

    Han, Hyun Chul; Bahng, Hyo Ky; Lee, Chi Won; Kang, Jung Seock; Chang, Se Won; Lee, Ho Young [Korea Inst. of Geology Mining and Materials, Taejon (Korea, Republic of); Park, Soo Chul [Chungnam National University, Taejon (Korea, Republic of)

    1995-12-01

    Most marine geological and geophysical data collected by Korean institutes are in analogue form and are stored at several data-acquisition institutes according to their own data management systems and input formats. Thus, anyone who wants to use the data must visit the institute(s) to collect them, and must then manipulate the collected data to fit their own data management system because input formats differ among institutes. Consequently, searching, managing and analyzing the data requires a lot of time. The purpose of this study, therefore, is to establish a database system that standardizes the data input formats and to develop output conversion software for commonly used database management systems. Marine geological and geophysical data input formats are set up through detailed analyses of the input formats used domestically as well as in foreign countries. PC-based output conversion software for bathymetry, gravity and magnetic data is also developed. If all institutes use the data input formats introduced in this study, it will be possible to minimize redundancy, maintain consistency, and standardize the data. (author). 6 refs., 22 figs., 10 tabs.
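
A conversion layer of the kind the study develops can be sketched as follows; the institute names and field layouts below are hypothetical illustrations, not the report's actual formats:

```python
import csv, io

# Hypothetical layouts: each institute stores bathymetry records with its own
# delimiter and field order. These layouts are illustrative assumptions.
LAYOUTS = {
    "inst_a": {"delim": ",", "fields": ("lat", "lon", "depth_m")},
    "inst_b": {"delim": "\t", "fields": ("depth_m", "lat", "lon")},
}

def normalize(raw_text, layout_name):
    """Convert one institute's records into a standard CSV (lat, lon, depth_m)."""
    layout = LAYOUTS[layout_name]
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(["lat", "lon", "depth_m"])          # standard header
    for row in csv.reader(io.StringIO(raw_text), delimiter=layout["delim"]):
        rec = dict(zip(layout["fields"], row))          # map fields by layout
        writer.writerow([rec["lat"], rec["lon"], rec["depth_m"]])
    return out.getvalue()
```

Once every source declares its layout, downstream tools only ever parse the one standard format.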

  11. Web-Based Distributed XML Query Processing

    NARCIS (Netherlands)

    Smiljanic, M.; Feng, L.; Jonker, Willem; Blanken, Henk; Grabs, T.; Schek, H-J.; Schenkel, R.; Weikum, G.

    2003-01-01

    Web-based distributed XML query processing has gained in importance in recent years due to the widespread popularity of XML on the Web. Unlike centralized and tightly coupled distributed systems, Web-based distributed database systems are highly unpredictable and uncontrollable, with a rather

  12. East-China Geochemistry Database (ECGD):A New Networking Database for North China Craton

    Science.gov (United States)

    Wang, X.; Ma, W.

    2010-12-01

    North China Craton is one of the best natural laboratories for researching questions of Earth dynamics [1]. Scientists have made much progress in research on this area and have produced vast amounts of geochemistry data, which are essential for answering many fundamental questions about the age, composition, structure, and evolution of the East China area. But the geochemical data have long been accessible only through the scientific literature and theses, where they are widely dispersed, making it difficult for the broad geosciences community to find, access and efficiently use the full range of available data [2]. How can the existing geochemical data for the North China Craton area be effectively stored, managed, shared and reused? East-China Geochemistry Database (ECGD) is a networked geochemical scientific database system designed on the basis of WebGIS and a relational database for the structured storage and retrieval of geochemical data and geological map information. It integrates data retrieval, spatial visualization and online analysis. ECGD focuses on three areas: 1. Storage and retrieval of geochemical data and geological map information. Based on the characteristics of geochemical data, including how its components relate to one another, we designed a relational database, built on a geochemical relational data model, to store a variety of geological sample information such as sampling locality, age, sample characteristics, references, major elements, rare earth elements, trace elements and isotope systems. A web-based, user-friendly interface is provided for constructing queries. 2. Data view. ECGD is committed to online data visualization in different ways, especially viewing data dynamically on a digital map. Because ECGD integrates WebGIS technology, query results can be mapped on a digital map that supports zooming, panning and point selection. Besides viewing and outputting query results in html, txt or xls formats, researchers also can
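
A relational schema along the lines described, with one table for sample metadata and one for element measurements, can be sketched with SQLite; the table and column names here are assumptions for illustration, not ECGD's actual schema:

```python
import sqlite3

# Sketch of a sample/measurement schema in the spirit of the description
# above; names and the example rows are illustrative assumptions.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sample (
    sample_id INTEGER PRIMARY KEY,
    locality  TEXT,               -- sampling locality
    lat REAL, lon REAL,           -- coordinates for WebGIS mapping
    age_ma    REAL,               -- age in millions of years
    reference TEXT                -- literature source
);
CREATE TABLE measurement (
    sample_id INTEGER REFERENCES sample(sample_id),
    element   TEXT,               -- e.g. 'SiO2', 'La', '87Sr/86Sr'
    value     REAL,
    unit      TEXT                -- e.g. 'wt%', 'ppm', 'ratio'
);
""")
conn.execute("INSERT INTO sample VALUES (1, 'example locality', 38.0, 113.5, 125.0, 'ref A')")
conn.execute("INSERT INTO measurement VALUES (1, 'SiO2', 51.2, 'wt%')")
row = conn.execute(
    "SELECT s.locality, m.element, m.value "
    "FROM sample s JOIN measurement m ON s.sample_id = m.sample_id"
).fetchone()
```

Splitting measurements into their own table keeps the schema open-ended: a new element or isotope system is just another row, not a schema change.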

  13. MIPS: a database for protein sequences, homology data and yeast genome information.

    Science.gov (United States)

    Mewes, H W; Albermann, K; Heumann, K; Liebl, S; Pfeiffer, F

    1997-01-01

    The MIPS group (Martinsried Institute for Protein Sequences) at the Max-Planck-Institute for Biochemistry, Martinsried near Munich, Germany, collects, processes and distributes protein sequence data within the framework of the tripartite association of the PIR-International Protein Sequence Database. MIPS contributes nearly 50% of the data input to the PIR-International Protein Sequence Database. The database is distributed on CD-ROM together with PATCHX, an exhaustive supplement of unique, unverified protein sequences from external sources compiled by MIPS. Through its WWW server (http://www.mips.biochem.mpg.de/) MIPS permits internet access to sequence databases, homology data and to yeast genome information. (i) Sequence similarity results from the FASTA program are stored in the FASTA database for all proteins from PIR-International and PATCHX. The database is dynamically maintained and permits instant access to FASTA results. (ii) Starting with FASTA database queries, proteins have been classified into families and superfamilies (PROT-FAM). (iii) The HPT (hashed position tree) data structure developed at MIPS is a new approach for rapid sequence and pattern searching. (iv) MIPS provides access to the sequence and annotation of the complete yeast genome, the functional classification of yeast genes (FunCat) and its graphical display, the 'Genome Browser'. A CD-ROM based on the JAVA programming language providing dynamic interactive access to the yeast genome and the related protein sequences has been compiled and is available on request. PMID:9016498

  14. Exponential Smoothing for Multi-Product Lot-Sizing With Heijunka and Varying Demand

    OpenAIRE

    Grimaud Frédéric; Dolgui Alexandre; Korytkowski Przemyslaw

    2014-01-01

    Here we discuss a multi-product lot-sizing problem for a job shop controlled with a heijunka box. Demand is considered as a random variable with constant variation which must be absorbed somehow by the manufacturing system, either by increased inventory or by flexibility in the production. When a heijunka concept (production leveling) is used, fluctuations in customer orders are not transferred directly to the manufacturing system allowing for a smoother production and better production capac...
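
The exponential smoothing recursion in the title, s_t = α·x_t + (1 − α)·s_{t−1}, can be sketched minimally as follows; note that the paper's heijunka leveling scheme involves more than this single recursion:

```python
def exp_smooth(series, alpha):
    """Simple exponential smoothing of a demand series:
    s_t = alpha * x_t + (1 - alpha) * s_{t-1}, seeded with the first value."""
    s = series[0]
    smoothed = [s]
    for x in series[1:]:
        s = alpha * x + (1 - alpha) * s
        smoothed.append(s)
    return smoothed
```

A smaller alpha absorbs more of the demand fluctuation into the forecast, which is exactly the leveling effect heijunka aims for.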

  15. Exploring Trajectories of Distributed Development

    DEFF Research Database (Denmark)

    Slepniov, Dmitrij; Wæhrens, Brian Vejrum; Niang, Mohamed

    2014-01-01

    While some firms have successfully turned their global operations into a formidable source of competitive advantage, others have failed to do so. A lot depends on which activities are globally distributed and how they are configured and coordinated. An emerging body of literature and practice suggests that not only standardized manufacturing tasks, but also knowledge-intensive and proprietary activities, including research and development (R&D), are increasingly subject to global dispersion. The purpose of this chapter is to explore structural and infrastructural arrangements that take place in industrial firms as they globally disperse their development activities. The study employs qualitative methodology and, on the basis of two case studies of Danish firms, it highlights the challenges of distributed development as well as how these challenges can be dealt with. The chapter outlines a variety

  16. Database Description - JSNP | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available base Description General information of database Database name JSNP Alternative nam...n Science and Technology Agency Creator Affiliation: Contact address E-mail : Database...sapiens Taxonomy ID: 9606 Database description A database of about 197,000 polymorphisms in Japanese populat...1):605-610 External Links: Original website information Database maintenance site Institute of Medical Scien...er registration Not available About This Database Database Description Download License Update History of This Database

  17. Database Description - RED | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available ase Description General information of database Database name RED Alternative name Rice Expression Database...enome Research Unit Shoshi Kikuchi E-mail : Database classification Plant databases - Rice Database classifi...cation Microarray, Gene Expression Organism Taxonomy Name: Oryza sativa Taxonomy ID: 4530 Database descripti... Article title: Rice Expression Database: the gateway to rice functional genomics...nt Science (2002) Dec 7 (12):563-564 External Links: Original website information Database maintenance site

  18. FY1995 transduction method and CAD database systems for integrated design; 1995 nendo transduction ho to CAD database togo sekkei shien system

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    Transduction method, developed by the research coordinator and Prof. Muroga, is one of the most popular methods for designing large-scale integrated circuits, and is thus used by major design tool companies in the USA and Japan. The major objective of the research is to improve its capability and exploit its reusability by combining it with CAD databases. Major results of the project are as follows. (1) Improvement of Transduction method: efficiency, capability and the maximum circuit size are improved; the error compensation method is also improved. (2) Applications to new logic elements: Transduction method is modified to cope with wired logic and FPGAs. (3) CAD databases: one of the major advantages of Transduction method is the 'reusability' of already designed circuits, which makes it suitable for combination with CAD databases; we designed CAD databases suitable for cooperative design using Transduction method. (4) Program development: programs for Windows 95 were developed for distribution. (NEDO)

  20. Database Description - ConfC | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available abase Description General information of database Database name ConfC Alternative name Database...amotsu Noguchi Tel: 042-495-8736 E-mail: Database classification Structure Database...s - Protein structure Structure Databases - Small molecules Structure Databases - Nucleic acid structure Database... services - Need for user registration - About This Database Database Description Download License Update History of This Database... Site Policy | Contact Us Database Description - ConfC | LSDB Archive ...

  1. National Radiobiology Archives Distributed Access user's manual

    Energy Technology Data Exchange (ETDEWEB)

    Watson, C.; Smith, S. (Pacific Northwest Lab., Richland, WA (United States)); Prather, J. (Linfield Coll., McMinnville, OR (United States))

    1991-11-01

    This User's Manual describes installation and use of the National Radiobiology Archives (NRA) Distributed Access package. The package consists of a distributed subset of information representative of the NRA databases and database access software which provide an introduction to the scope and style of the NRA Information Systems.

  2. Validity and representativeness of the "Disease Analyzer" patient database for use in pharmacoepidemiological and pharmacoeconomic studies.

    Science.gov (United States)

    Becher, H; Kostev, K; Schröder-Bernhardi, D

    2009-10-01

    Patient and health care databases are available in many countries. These are often based on routinely collected diagnosis and prescription data. Various research questions, such as those related to pharmacoepidemiological health services or drug supply, can be evaluated on the basis of these databases. In Germany, the Disease Analyzer patient database is the largest database of its kind. Using various validity criteria, the representativeness of this database is examined with respect to variables relevant to pharmacoepidemiological and pharmacoeconomic studies. The Disease Analyzer patient database contains data on diagnoses, prescriptions, risk factors (such as smoking and obesity), and laboratory values for approximately 10 million patients from Germany, the UK, France, and Austria. The database also contains data from various groups of specialist physicians as well as from general practitioners and specialists for internal medicine. Data from physicians' practices in Germany form the basis of this investigation. To check the validity and representativeness of the data, the distributions of several variables are analyzed. These variables refer partly to the physicians' practices participating in the study and partly to the patients in these practices. The factors observed include prescriptions for generic drugs, the distribution of diagnostic groups among participating physicians' practices, the distribution of patients according to health insurance fund, the most frequent products, the distribution of package sizes prescribed, and the age structure of patients with various incident cancer diagnoses. These factors were compared with available reference statistics. The sampling methods for the selection of physicians' practices appear to be appropriate. Prescription statistics for several drugs were very similar to available data from the pharmaceutical prescriptions report (Arzneimittelverordnungsreport). 
The age structures for given diagnoses in Disease Analyzer

  3. Database management systems understanding and applying database technology

    CERN Document Server

    Gorman, Michael M

    1991-01-01

    Database Management Systems: Understanding and Applying Database Technology focuses on the processes, methodologies, techniques, and approaches involved in database management systems (DBMSs). The book first takes a look at ANSI database standards and DBMS applications and components. The discussion focuses on application components and DBMS components, implementing the dynamic relationship application, problems and benefits of dynamic relationship DBMSs, the nature of a dynamic relationship application, ANSI/NDL, and DBMS standards. The manuscript then ponders on logical database, interrogation, and phy

  4. Grid Databases for Shared Image Analysis in the MammoGrid Project

    CERN Document Server

    Amendolia, S R; Hauer, T; Manset, D; McClatchey, R; Odeh, M; Reading, T; Rogulin, D; Schottlander, D; Solomonides, T

    2004-01-01

    The MammoGrid project aims to prove that Grid infrastructures can be used for collaborative clinical analysis of database-resident but geographically distributed medical images. This requires: a) the provision of a clinician-facing front-end workstation and b) the ability to service real-world clinician queries across a distributed and federated database. The MammoGrid project will prove the viability of the Grid by harnessing its power to enable radiologists from geographically dispersed hospitals to share standardized mammograms, to compare diagnoses (with and without computer aided detection of tumours) and to perform sophisticated epidemiological studies across national boundaries. This paper outlines the approach taken in MammoGrid to seamlessly connect radiologist workstations across a Grid using an "information infrastructure" and a DICOM-compliant object model residing in multiple distributed data stores in Italy and the UK

  5. Application of the International Union of Radioecologists soil-to-plant database to Canadian settings.

    Energy Technology Data Exchange (ETDEWEB)

    Sheppard, S C [Atomic Energy of Canada Ltd., Pinawa, MB (Canada). Whiteshell Labs.

    1995-12-01

    The International Union of Radioecologists (IUR) has compiled a very large database of soil-to-plant transfer factors. These factors are ratios of the radionuclide concentrations in dry plants divided by the corresponding concentrations in dry soil to a specified depth or thickness. In this report the factors are called CR values, for concentration ratio. The CR values are empirical and are considered element-specific. The IUR database has a lot of data for Cs, Sr, Co, Pu and Np, and contains records for Am, Ce, Cm, I, La, Mn, Ni, Pb, Po, Ra, Ru, Sb, Tc, Th, U and Zn. Where there was a large amount of data, interpolation for ranges of soil conditions was possible. The tables presented here summarize the data in a way that should be immediately useful to modellers. Values are averaged for a number of crop types and species. Correction factors are developed to facilitate interpolation among soil conditions. The data tables in this report do not substitute for site-specific measurements, but they will provide data where measurement is impossible and give a background against which to check more recent data. (author). 4 refs., 48 tabs.
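
The CR value itself is just a ratio of dry-weight concentrations, and its typical modelling use is predicting plant uptake from a measured soil concentration; a minimal sketch, with illustrative numbers in the assertions rather than IUR data:

```python
def concentration_ratio(c_plant_dry, c_soil_dry):
    """Soil-to-plant transfer factor (CR): radionuclide concentration in the
    dry plant divided by that in the dry soil (same units, e.g. Bq/kg)."""
    return c_plant_dry / c_soil_dry

def predicted_plant_concentration(cr, c_soil_dry):
    """Predict a plant concentration from a tabulated CR and a soil measurement."""
    return cr * c_soil_dry
```

Because CR tables are element-specific and soil-dependent, a tabulated CR should be chosen (or interpolated) for the soil conditions at hand before being applied this way.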

  6. The development of an advanced information management system

    International Nuclear Information System (INIS)

    Kim, Seung Hwan

    2005-01-01

    Performing a PSA requires a lot of data to analyze, to evaluate the risk, to trace the process of results and to verify the results. KAERI is developing a PSA information database system, AIMS (Advanced Information Management System for PSA). The objective of AIMS development is to integrate and computerize all the distributed information of a PSA into a system and to enhance the accessibility to PSA information for all PSA related activities. We designed the PSA information database system for the following purposes: integrated PSA information management software, sensitivity analysis, quality assurance, anchor to another reliability database. The AIMS consists of a PSA Information database, Information browsing (searching) modules, and PSA automatic quantification manager modules

  7. The development of an advanced information management system

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Seung Hwan [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    2005-07-01

    Performing a PSA requires a lot of data to analyze, to evaluate the risk, to trace the process of results and to verify the results. KAERI is developing a PSA information database system, AIMS (Advanced Information Management System for PSA). The objective of AIMS development is to integrate and computerize all the distributed information of a PSA into a system and to enhance the accessibility to PSA information for all PSA related activities. We designed the PSA information database system for the following purposes: integrated PSA information management software, sensitivity analysis, quality assurance, anchor to another reliability database. The AIMS consists of a PSA Information database, Information browsing (searching) modules, and PSA automatic quantification manager modules.

  8. Integration of the ATLAS tag database with data management and analysis components

    Energy Technology Data Exchange (ETDEWEB)

    Cranshaw, J; Malon, D [Argonne National Laboratory, Argonne, IL 60439 (United States); Doyle, A T; Kenyon, M J; McGlone, H; Nicholson, C [Department of Physics and Astronomy, University of Glasgow, Glasgow, G12 8QQ, Scotland (United Kingdom)], E-mail: c.nicholson@physics.gla.ac.uk

    2008-07-15

    The ATLAS Tag Database is an event-level metadata system, designed to allow efficient identification and selection of interesting events for user analysis. By making first-level cuts using queries on a relational database, the size of an analysis input sample could be greatly reduced and thus the time taken for the analysis reduced. Deployment of such a Tag database is underway, but to be most useful it needs to be integrated with the distributed data management (DDM) and distributed analysis (DA) components. This means addressing the issue that the DDM system at ATLAS groups files into datasets for scalability and usability, whereas the Tag Database points to events in files. It also means setting up a system which could prepare a list of input events and use both the DDM and DA systems to run a set of jobs. The ATLAS Tag Navigator Tool (TNT) has been developed to address these issues in an integrated way and provide a tool that the average physicist can use. Here, the current status of this work is presented and areas of future work are highlighted.
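    The "first-level cut via a relational query" idea behind an event-level tag database can be sketched with an in-memory SQLite table. The schema, column names and values below are invented for illustration and are not the actual ATLAS tag schema:

    ```python
    # Sketch of event-level metadata selection: run a cheap SQL cut over
    # per-event tags to shrink the input sample before any data files are
    # opened. Table layout and cut variables are hypothetical.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE event_tags "
        "(run INTEGER, event INTEGER, n_muons INTEGER, missing_et REAL, file_guid TEXT)"
    )
    conn.executemany("INSERT INTO event_tags VALUES (?,?,?,?,?)", [
        (1, 1, 0, 12.0, "f1"),
        (1, 2, 2, 55.0, "f1"),
        (1, 3, 1, 80.0, "f2"),
    ])

    # First-level cut: two or more muons OR large missing transverse energy.
    rows = conn.execute(
        "SELECT run, event, file_guid FROM event_tags "
        "WHERE n_muons >= 2 OR missing_et > 60"
    ).fetchall()
    print(rows)  # reduced event list handed on to the analysis jobs
    ```

    The file identifier carried in each row is what makes the integration problem described above concrete: the query yields events in files, while the data management system deals in datasets of files, so a tool like TNT must translate between the two.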

  9. Integration of the ATLAS tag database with data management and analysis components

    International Nuclear Information System (INIS)

    Cranshaw, J; Malon, D; Doyle, A T; Kenyon, M J; McGlone, H; Nicholson, C

    2008-01-01

    The ATLAS Tag Database is an event-level metadata system, designed to allow efficient identification and selection of interesting events for user analysis. By making first-level cuts using queries on a relational database, the size of an analysis input sample could be greatly reduced and thus the time taken for the analysis reduced. Deployment of such a Tag database is underway, but to be most useful it needs to be integrated with the distributed data management (DDM) and distributed analysis (DA) components. This means addressing the issue that the DDM system at ATLAS groups files into datasets for scalability and usability, whereas the Tag Database points to events in files. It also means setting up a system which could prepare a list of input events and use both the DDM and DA systems to run a set of jobs. The ATLAS Tag Navigator Tool (TNT) has been developed to address these issues in an integrated way and provide a tool that the average physicist can use. Here, the current status of this work is presented and areas of future work are highlighted

  10. A VBA Desktop Database for Proposal Processing at National Optical Astronomy Observatories

    Science.gov (United States)

    Brown, Christa L.

    National Optical Astronomy Observatories (NOAO) has developed a relational Microsoft Windows desktop database using Microsoft Access and the Microsoft Office programming language, Visual Basic for Applications (VBA). The database is used to track data relating to observing proposals from original receipt through the review process, scheduling, observing, and final statistical reporting. The database has automated proposal processing and distribution of information. It allows NOAO to collect and archive data so as to query and analyze information about our science programs in new ways.

  11. Development of an Engineering Soil Database

    Science.gov (United States)

    2017-12-27

    ERDC TR-17-15, Rapid Airfield Damage Recovery (RADR) Program: Development of an Engineering Soil Database. Approved for public release; distribution is unlimited. The US Army Engineer Research and Development Center (ERDC) solves the nation's toughest engineering and environmental challenges. ERDC develops innovative solutions in civil and military engineering, geospatial sciences, water resources, and environmental sciences.

  12. Database Description - RMG | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available ase Description General information of database Database name RMG Alternative name ...raki 305-8602, Japan National Institute of Agrobiological Sciences E-mail : Database... classification Nucleotide Sequence Databases Organism Taxonomy Name: Oryza sativa Japonica Group Taxonomy ID: 39947 Database...rnal: Mol Genet Genomics (2002) 268: 434–445 External Links: Original website information Database...available URL of Web services - Need for user registration Not available About This Database Database Descri

  13. LSD: Large Survey Database framework

    Science.gov (United States)

    Juric, Mario

    2012-09-01

    The Large Survey Database (LSD) is a Python framework and DBMS for distributed storage, cross-matching and querying of large survey catalogs (>10^9 rows, >1 TB). The primary driver behind its development is the analysis of Pan-STARRS PS1 data. It is specifically optimized for fast queries and parallel sweeps of positionally and temporally indexed datasets. It transparently scales to more than 10^2 nodes, and can be made to function in "shared nothing" architectures.

  14. Measurements on cation exchange capacity of bentonite in the long-term test of buffer material (LOT)

    International Nuclear Information System (INIS)

    Muurinen, A.

    2011-01-01

    Determination of the cation exchange capacity (CEC) of bentonite in the LOT experiment was the topic of this study. The measurements were performed using the complex of copper(II) ion with triethylenetetramine, [Cu(trien)]²⁺, as the index cation. Testing of the determination method suggested that (i) drying and wetting of the bentonite, and (ii) exchange time affect the obtained result. The real CEC measurements were carried out with the bentonite samples taken from the A2 parcel of the LOT experiment. The CEC values of the LOT samples were compared with those of the reference samples taken from the same bentonite batch before the compaction of the blocks for the experiment. The conclusions have been drawn on the basis of the results determined with the wet bentonite samples using the direct exchange of two weeks with 0.01 M [Cu(trien)]²⁺ solution, because this method gave the most complete cation exchange in the CEC measurements. The differences between the samples taken from different places of the A2 parcel were quite small and close to the accuracy of the method. However, it seems that the CEC values of the field experiment are somewhat higher than the CEC of the reference samples, and the values of the hot area are higher than those obtained from the low temperature area. It is also obvious that the variation of CEC increases with increasing temperature. (orig.)

  15. Database application research in real-time data access of accelerator control system

    International Nuclear Information System (INIS)

    Chen Guanghua; Chen Jianfeng; Wan Tianmin

    2012-01-01

    The control system of the Shanghai Synchrotron Radiation Facility (SSRF) is a large-scale distributed real-time control system that involves many types and large amounts of real-time data access during operation. Database systems have wide application prospects in large-scale accelerator control systems; replacing differently dedicated data structures with a mature, standardized database system is the future development direction of accelerator control systems. Based on database interface technology, real-time data access testing, and system optimization research, this article discusses the feasibility of applying a database system in accelerators and lays the foundation for the wide-scale application of database systems in the SSRF accelerator control system. (authors)

  16. Database Description - DGBY | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available base Description General information of database Database name DGBY Alternative name Database...EL: +81-29-838-8066 E-mail: Database classification Microarray Data and other Gene Expression Databases Orga...nism Taxonomy Name: Saccharomyces cerevisiae Taxonomy ID: 4932 Database descripti...-called phenomics). We uploaded these data on this website which is designated DGBY(Database for Gene expres...ma J, Ando A, Takagi H. Journal: Yeast. 2008 Mar;25(3):179-90. External Links: Original website information Database

  17. Database Description - KOME | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available base Description General information of database Database name KOME Alternative nam... Sciences Plant Genome Research Unit Shoshi Kikuchi E-mail : Database classification Plant databases - Rice ...Organism Taxonomy Name: Oryza sativa Taxonomy ID: 4530 Database description Information about approximately ...Hayashizaki Y, Kikuchi S. Journal: PLoS One. 2007 Nov 28; 2(11):e1235. External Links: Original website information Database...OS) Rice mutant panel database (Tos17) A Database of Plant Cis-acting Regulatory

  18. Making Marble Tracks Can Involve Lots of Fun as Well as STEM Learning

    Science.gov (United States)

    Nagel, Bert

    2015-01-01

    Marble tracks are a very popular toy and big ones can be found in science centres in many countries. If children want to make a marble track themselves, it is quite a job. It takes a long time, the tracks can take up a lot of space, and most structures are quite fragile, as the materials used can very quickly prove unfit for the task and do not last very…

  19. Optimal Lot Sizing with Scrap and Random Breakdown Occurring in Backorder Replenishing Period

    OpenAIRE

    Ting, Chia-Kuan; Chiu, Yuan-Shyi; Chan, Chu-Chai

    2011-01-01

    This paper is concerned with determination of optimal lot size for an economic production quantity model with scrap and random breakdown occurring in backorder replenishing period. In most real-life manufacturing systems, generation of defective items and random breakdown of production equipment are inevitable. To deal with the stochastic machine failures, production planners practically calculate the mean time between failures (MTBF) and establish the robust plan accordingly, in terms of opt...

  20. SUSTAINABILITY OF ECONOMIC GROWTH AND INEQUALITY IN INCOMES DISTRIBUTION

    Directory of Open Access Journals (Sweden)

    Bogdan Ion Boldea

    2012-07-01

    Full Text Available The problem of inequality in income distribution is a current and much-discussed one. Economic growth is considered an essential force for reducing poverty by increasing labor demand and, ultimately, wages within the economy. But the extent to which poverty is reduced as a result of economic growth depends mostly on the initial inequalities in income and on how the distribution of income changes with economic growth. Much research has focused on the evolution of inequality in income distribution, other studies have explored the relationship between income inequality and economic growth, and still others have tried to identify the main factors affecting inequality in income distribution. The objective of this study is to put into discussion another possible factor affecting the variability of inequality in income distribution: economic growth variability. To our knowledge, no previous studies have investigated this possible relation between inequality of income distribution and economic growth variability. To provide some empirical evidence for a positive impact of social output volatility on inequality of income distribution, we use a small sample of 27 developing countries observed between 1995 and 2006. The values of the Gini coefficient reported in the World Income Inequality Database are used as the dependent variable. As a first step in testing our research hypothesis, we estimate a static panel data model with pooled ordinary least squares (OLS), fixed effects (FE) and random effects (RE) estimators. The F statistic tests the null hypothesis of identical specific effects for all countries; if the null hypothesis is accepted, the OLS estimator can be used. The Hausman test decides which model is better: random effects (RE) versus fixed effects (FE). The FE model was selected because it avoids the inconsistency due to
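    The study's dependent variable is the Gini coefficient. One common discrete formula (mean absolute income difference divided by twice the mean, written here in its sorted-sample form) can be sketched as:

    ```python
    # Gini coefficient from a list of incomes, using the sorted-sample
    # identity G = sum_i (2i - n - 1) * x_(i) / (n * sum_i x_i).
    # Income vectors below are toy data, not values from the study.

    def gini(incomes):
        xs = sorted(incomes)
        n = len(xs)
        weighted = sum((2 * i - n - 1) * x for i, x in enumerate(xs, start=1))
        return weighted / (n * sum(xs))

    print(gini([100, 100, 100, 100]))  # perfect equality -> 0.0
    print(gini([0, 0, 0, 400]))        # one person holds everything -> 0.75
    ```

    Values published in sources like the World Income Inequality Database are often reported on a 0-100 scale; multiplying the result above by 100 gives that convention.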

  1. Effects of ground surface decontamination on the air radiation dose rate. Results of a decontamination trial at a playground lot in a Fukushima residential area

    International Nuclear Information System (INIS)

    Tagawa, Akihiro

    2012-01-01

    The Japan Atomic Energy Agency decontaminated schools, playgrounds, swimming pools, and houses in nonevacuated, less-contaminated areas in Fukushima for environmental restoration. A small, 150 m² playground lot in the residential area was chosen for decontamination demonstration, which used routinely available tools and commodities to carry out the work. The surfaces of playground lot equipment, such as swings, slides, and horizontal iron bars, were completely decontaminated by brushing with water and/or detergent. Side gutters around the playground lot were cleaned by removing the mud and then brushed and washed with a high-pressure water jet (7 MPa). The air dose rate at the playground lot was dominated by radiation from the ground surface and adjacent surroundings, such as apartments and rice fields. Two or three centimeters of the surface soil contaminated with cesium was removed manually with shovels, hoes, and other gardening tools. This significantly reduced the average air dose rate of the entire playground lot from 1.5 μSv/h before decontamination to 0.6 μSv/h. These results showed that ground surface decontamination can contribute measurably to the reduction in air dose rate in relatively small areas in residential areas. (author)
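    The reported effect of the topsoil removal can be restated as a percentage reduction and a rough annualized external dose. The annual figures below assume continuous outdoor occupancy, which overstates any real exposure; they are derived for illustration, not quoted from the source:

    ```python
    # Dose-rate arithmetic from the quoted before/after values.
    before, after = 1.5, 0.6  # average air dose rate, uSv/h

    reduction = (before - after) / before          # fractional reduction
    annual_before = before * 24 * 365 / 1000       # mSv/year, continuous occupancy
    annual_after = after * 24 * 365 / 1000

    print(f"{reduction:.0%} reduction: "
          f"{annual_before:.1f} -> {annual_after:.2f} mSv/year")
    ```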

  2. Where is my car? Examining wayfinding behavior in a parking lot

    Directory of Open Access Journals (Sweden)

    Rodrigo Mora

    2014-08-01

    Full Text Available This article examines wayfinding behavior in an extended parking lot belonging to one of the largest shopping malls in Santiago, Chile. About 500 people were followed while going to the mall and returning from it, and their trajectories were mapped and analyzed. The results indicate that inbound paths were, on average, 10% shorter than outbound paths, and that people stopped three times more frequently when leaving the mall than when accessing it. It is argued that these results are in line with previous research on the subject, which stresses the importance of environmental information in shaping people's behavior.

  3. HIM-herbal ingredients in-vivo metabolism database.

    Science.gov (United States)

    Kang, Hong; Tang, Kailin; Liu, Qi; Sun, Yi; Huang, Qi; Zhu, Ruixin; Gao, Jun; Zhang, Duanfeng; Huang, Chenggang; Cao, Zhiwei

    2013-05-31

    Herbal medicine has long been viewed as a valuable asset for potential new drug discovery, and herbal ingredients' metabolites, especially the in vivo metabolites, were often found to gain better pharmacological, pharmacokinetic and even safety profiles compared to their parent compounds. However, this herbal metabolite information is still scattered and waiting to be collected. The HIM database has manually collected the most comprehensive in-vivo metabolism information available so far for herbal active ingredients, as well as their corresponding bioactivity, organ and/or tissue distribution, toxicity, ADME and clinical research profiles. Currently HIM contains 361 ingredients and 1104 corresponding in-vivo metabolites from 673 reputable herbs. Tools for structural similarity, substructure search and Lipinski's Rule of Five are also provided. Various links were made to PubChem, PubMed, TCM-ID (Traditional Chinese Medicine Information database) and HIT (Herbal Ingredients' Targets database). A curated database, HIM, is set up for the in vivo metabolite information of the active ingredients of Chinese herbs, together with their corresponding bioactivity, toxicity and ADME profiles. HIM is freely accessible to academic researchers at http://www.bioinformatics.org.cn/.
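    Lipinski's Rule of Five, one of the filtering tools the record says the database provides, is easy to state in code. The property values below are illustrative inputs; a real pipeline would compute them from structures with a cheminformatics toolkit such as RDKit:

    ```python
    # Rule of Five: a compound is considered "drug-like" if it violates at
    # most one of the four thresholds below. Inputs are caller-supplied
    # molecular properties (toy values in the examples).

    def lipinski_violations(mol_weight, logp, h_donors, h_acceptors):
        rules = [
            mol_weight > 500,   # molecular weight over 500 Da
            logp > 5,           # octanol-water logP over 5
            h_donors > 5,       # more than 5 hydrogen-bond donors
            h_acceptors > 10,   # more than 10 hydrogen-bond acceptors
        ]
        return sum(rules)

    def passes_rule_of_five(mol_weight, logp, h_donors, h_acceptors):
        return lipinski_violations(mol_weight, logp, h_donors, h_acceptors) <= 1

    print(passes_rule_of_five(318.5, 2.3, 2, 4))   # small aglycone-like compound -> True
    print(passes_rule_of_five(720.9, 6.1, 6, 12))  # large glycoside-like compound -> False
    ```

    The second example illustrates why such a filter matters for herbal chemistry: many parent glycosides fail the rule, while their in vivo metabolites, which the database tracks, are often smaller and pass it.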

  4. Use of Lot Quality Assurance Sampling to Ascertain Levels of Drug Resistant Tuberculosis in Western Kenya.

    Directory of Open Access Journals (Sweden)

    Julia Jezmir

    Full Text Available To classify the prevalence of multi-drug resistant tuberculosis (MDR-TB) in two different geographic settings in western Kenya using the Lot Quality Assurance Sampling (LQAS) methodology. The prevalence of drug resistance was classified among treatment-naïve smear-positive TB patients in two settings, one rural and one urban. These regions were classified as having high or low prevalence of MDR-TB according to a static, two-way LQAS sampling plan selected to classify high resistance regions at greater than 5% resistance and low resistance regions at less than 1% resistance. This study classified both the urban and rural settings as having low levels of TB drug resistance. Out of the 105 patients screened in each setting, two patients were diagnosed with MDR-TB in the urban setting and one patient was diagnosed with MDR-TB in the rural setting. An additional 27 patients were diagnosed with a variety of mono- and poly-resistant strains. Further drug resistance surveillance using LQAS may help identify the levels and geographical distribution of drug resistance in Kenya and may have applications in other countries in the African Region facing similar resource constraints.

  5. Use of Lot Quality Assurance Sampling to Ascertain Levels of Drug Resistant Tuberculosis in Western Kenya.

    Science.gov (United States)

    Jezmir, Julia; Cohen, Ted; Zignol, Matteo; Nyakan, Edwin; Hedt-Gauthier, Bethany L; Gardner, Adrian; Kamle, Lydia; Injera, Wilfred; Carter, E Jane

    2016-01-01

    To classify the prevalence of multi-drug resistant tuberculosis (MDR-TB) in two different geographic settings in western Kenya using the Lot Quality Assurance Sampling (LQAS) methodology. The prevalence of drug resistance was classified among treatment-naïve smear positive TB patients in two settings, one rural and one urban. These regions were classified as having high or low prevalence of MDR-TB according to a static, two-way LQAS sampling plan selected to classify high resistance regions at greater than 5% resistance and low resistance regions at less than 1% resistance. This study classified both the urban and rural settings as having low levels of TB drug resistance. Out of the 105 patients screened in each setting, two patients were diagnosed with MDR-TB in the urban setting and one patient was diagnosed with MDR-TB in the rural setting. An additional 27 patients were diagnosed with a variety of mono- and poly-resistant strains. Further drug resistance surveillance using LQAS may help identify the levels and geographical distribution of drug resistance in Kenya and may have applications in other countries in the African Region facing similar resource constraints.
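    The logic of a two-way LQAS plan like the one described (n = 105 per setting, design prevalences of 5% "high" and 1% "low") can be checked with exact binomial tail probabilities. The decision threshold d below is an assumed value chosen for illustration, not the threshold used in the study:

    ```python
    # Two-way LQAS sketch: sample n patients and classify the region as
    # "high resistance" if at least d resistant cases are observed.
    # n = 105 matches the study; d = 3 is an assumption for illustration.
    from math import comb

    def binom_cdf(k, n, p):
        """P(X <= k) for X ~ Binomial(n, p)."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

    n, d = 105, 3
    # Misclassification risks at the two design points:
    alpha = 1 - binom_cdf(d - 1, n, 0.01)  # P(classify high | true prevalence 1%)
    beta = binom_cdf(d - 1, n, 0.05)       # P(classify low  | true prevalence 5%)
    print(f"alpha = {alpha:.3f}, beta = {beta:.3f}")
    ```

    With these assumed numbers both error risks come out around 10%, which shows why a sample of roughly a hundred patients per lot suffices to separate a 1% region from a 5% region.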

  6. Databases

    Directory of Open Access Journals (Sweden)

    Nick Ryan

    2004-01-01

    Full Text Available Databases are deeply embedded in archaeology, underpinning and supporting many aspects of the subject. However, as well as providing a means for storing, retrieving and modifying data, databases themselves must be a result of a detailed analysis and design process. This article looks at this process, and shows how the characteristics of data models affect the process of database design and implementation. The impact of the Internet on the development of databases is examined, and the article concludes with a discussion of a range of issues associated with the recording and management of archaeological data.

  7. Evaluation of primary immunization coverage of infants under universal immunization programme in an urban area of bangalore city using cluster sampling and lot quality assurance sampling techniques.

    Science.gov (United States)

    K, Punith; K, Lalitha; G, Suman; Bs, Pradeep; Kumar K, Jayanth

    2008-07-01

    Is the LQAS technique better than the cluster sampling technique in terms of resources needed to evaluate immunization coverage in an urban area? To assess and compare lot quality assurance sampling against cluster sampling in the evaluation of primary immunization coverage. Population-based cross-sectional study. Areas under Mathikere Urban Health Center. Children aged 12 months to 23 months. 220 in cluster sampling, 76 in lot quality assurance sampling. Percentages and proportions, chi-square test. (1) Using cluster sampling, the percentages of completely immunized, partially immunized and unimmunized children were 84.09%, 14.09% and 1.82%, respectively. With lot quality assurance sampling, they were 92.11%, 6.58% and 1.31%, respectively. (2) Immunization coverage levels as evaluated by the cluster sampling technique were not statistically different from the coverage values obtained by the lot quality assurance sampling technique. Considering the time and resources required, lot quality assurance sampling was found to be the better technique for evaluating primary immunization coverage in an urban area.
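    The comparison in point (2) can be reproduced approximately by converting the quoted percentages back to counts and computing a chi-square statistic by hand. The counts below are reconstructed from the percentages and sample sizes, so they are approximate:

    ```python
    # Chi-square test of independence on a 2x3 contingency table
    # (sampling technique x immunization status), computed from scratch.

    def chi_square(table):
        row_totals = [sum(row) for row in table]
        col_totals = [sum(col) for col in zip(*table)]
        total = sum(row_totals)
        stat = 0.0
        for i, row in enumerate(table):
            for j, observed in enumerate(row):
                expected = row_totals[i] * col_totals[j] / total
                stat += (observed - expected) ** 2 / expected
        return stat

    # complete / partial / unimmunized counts, reconstructed from
    # 84.09/14.09/1.82% of n=220 and 92.11/6.58/1.31% of n=76:
    observed = [[185, 31, 4], [70, 5, 1]]
    stat = chi_square(observed)
    print(f"chi2 = {stat:.2f} on 2 degrees of freedom")
    ```

    The statistic comes out near 3.1, below the 5.99 critical value for 2 degrees of freedom at the 5% level, consistent with the abstract's conclusion that the two techniques gave statistically indistinguishable coverage estimates.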

  8. Atlas of Iberian water beetles (ESACIB database).

    Science.gov (United States)

    Sánchez-Fernández, David; Millán, Andrés; Abellán, Pedro; Picazo, Félix; Carbonell, José A; Ribera, Ignacio

    2015-01-01

    The ESACIB ('EScarabajos ACuáticos IBéricos') database is provided, including all available distributional data of Iberian and Balearic water beetles from the literature up to 2013, as well as from museum and private collections, PhD theses, and other unpublished sources. The database contains 62,015 records with associated geographic data (10×10 km UTM squares) for 488 species and subspecies of water beetles, 120 of them endemic to the Iberian Peninsula and eight to the Balearic Islands. This database was used for the elaboration of the "Atlas de los Coleópteros Acuáticos de España Peninsular". In this dataset, data for 15 additional species have been added: 11 that occur in the Balearic Islands or mainland Portugal but not in peninsular Spain, and another four with mainly terrestrial habits within the genus Helophorus (for taxonomic coherence). The complete dataset is provided in Darwin Core Archive format.

  9. Atlas of Iberian water beetles (ESACIB database)

    Science.gov (United States)

    Sánchez-Fernández, David; Millán, Andrés; Abellán, Pedro; Picazo, Félix; Carbonell, José A.; Ribera, Ignacio

    2015-01-01

    Abstract The ESACIB (‘EScarabajos ACuáticos IBéricos’) database is provided, including all available distributional data of Iberian and Balearic water beetles from the literature up to 2013, as well as from museum and private collections, PhD theses, and other unpublished sources. The database contains 62,015 records with associated geographic data (10×10 km UTM squares) for 488 species and subspecies of water beetles, 120 of them endemic to the Iberian Peninsula and eight to the Balearic Islands. This database was used for the elaboration of the “Atlas de los Coleópteros Acuáticos de España Peninsular”. In this dataset, data for 15 additional species have been added: 11 that occur in the Balearic Islands or mainland Portugal but not in peninsular Spain, and another four with mainly terrestrial habits within the genus Helophorus (for taxonomic coherence). The complete dataset is provided in Darwin Core Archive format. PMID:26448717

  10. Database Description - SSBD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available base Description General information of database Database name SSBD Alternative nam...ss 2-2-3 Minatojima-minamimachi, Chuo-ku, Kobe 650-0047, Japan, RIKEN Quantitative Biology Center Shuichi Onami E-mail: Database... classification Other Molecular Biology Databases Database classification Dynamic databa...elegans Taxonomy ID: 6239 Taxonomy Name: Escherichia coli Taxonomy ID: 562 Database description Systems Scie...i Onami Journal: Bioinformatics/April, 2015/Volume 31, Issue 7 External Links: Original website information Database

  11. Database Description - GETDB | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available abase Description General information of database Database name GETDB Alternative n...ame Gal4 Enhancer Trap Insertion Database DOI 10.18908/lsdba.nbdc00236-000 Creator Creator Name: Shigeo Haya... Chuo-ku, Kobe 650-0047 Tel: +81-78-306-3185 FAX: +81-78-306-3183 E-mail: Database classification Expression... Invertebrate genome database Organism Taxonomy Name: Drosophila melanogaster Taxonomy ID: 7227 Database des...riginal website information Database maintenance site Drosophila Genetic Resource

  12. JICST Factual DatabaseJICST Chemical Substance Safety Regulation Database

    Science.gov (United States)

    Abe, Atsushi; Sohma, Tohru

    The JICST Chemical Substance Safety Regulation Database is based on the Database of Safety Laws for Chemical Compounds constructed by the Japan Chemical Industry Ecology-Toxicology & Information Center (JETOC), sponsored by the Science and Technology Agency, in 1987. JICST modified the JETOC database system, added data and started the online service through JOIS-F (JICST Online Information Service-Factual database) in January 1990. The JICST database comprises eighty-three laws and fourteen hundred compounds. The authors outline the database, data items, files and search commands. An example of an online session is presented.

  13. Heritage plaza parking lots improvement project- Solar PV installation

    Energy Technology Data Exchange (ETDEWEB)

    Hooks, Todd [Agua Caliente Indian Reservation, Palm Springs, CA (United States)

    2017-03-31

    The Agua Caliente Band of Cahuilla Indians (ACBCI or the "Tribe") installed a 79.95 kW solar photovoltaic (PV) system to offset the energy usage costs of the Tribal Education and Family Services offices located at the Tribe's Heritage Plaza office building, 901 Tahquitz Way, Palm Springs, CA 92262 (the "Project"). The installation of the solar PV system was part of the larger Heritage Plaza Parking Lot Improvements Project, with the panels mounted on the two southern carport shade structures. The solar PV system will offset 99% of the approximately 115,000 kWh in electricity delivered annually by Southern California Edison (SCE) to the Tribal Education and Family Services offices at Heritage Plaza, reducing their annual energy costs from approximately $22,000 to approximately $200. The total cost of the solar PV system is $240,000.
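    The figures quoted in this record can be cross-checked with back-of-envelope arithmetic. The simple payback period and capacity factor below are derived quantities, not values stated in the source, and the capacity factor assumes the offset energy equals the system's annual generation:

    ```python
    # Sanity check on the project numbers: 79.95 kW system, ~115,000 kWh/yr
    # of delivered energy offset at 99%, costs cut from ~$22,000 to ~$200,
    # total installed cost $240,000.

    system_kw = 79.95
    annual_kwh = 115_000 * 0.99           # energy offset per year (assumed = generation)
    annual_savings = 22_000 - 200          # dollars per year
    simple_payback_years = 240_000 / annual_savings
    capacity_factor = annual_kwh / (system_kw * 8760)  # 8760 hours per year

    print(f"payback ~ {simple_payback_years:.1f} years, "
          f"capacity factor ~ {capacity_factor:.0%}")
    ```

    The resulting capacity factor of roughly 16% is plausible for fixed-tilt PV in the Palm Springs desert climate, which suggests the quoted generation and system size are mutually consistent.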

  14. Danish Colorectal Cancer Group Database.

    Science.gov (United States)

    Ingeholm, Peter; Gögenur, Ismail; Iversen, Lene H

    2016-01-01

    The aim of the database, which has existed for registration of all patients with colorectal cancer in Denmark since 2001, is to improve the prognosis for this patient group. It covers all Danish patients with newly diagnosed colorectal cancer who are either diagnosed or treated in a surgical department of a public Danish hospital. The database comprises an array of surgical, radiological, oncological, and pathological variables. The surgeons record data such as diagnostics performed, including type and results of radiological examinations, lifestyle factors, comorbidity and performance, treatment including the surgical procedure, urgency of surgery, and intra- and postoperative complications within 30 days after surgery. The pathologists record data such as tumor type, number of lymph nodes and metastatic lymph nodes, surgical margin status, and other pathological risk factors. The database has had >95% completeness in including patients with colorectal adenocarcinoma, with >54,000 patients registered so far, approximately one-third rectal cancers and two-thirds colon cancers, and an overrepresentation of men among rectal cancer patients. The stage distribution has been more or less constant until 2014, with a tendency toward a lower rate of stage IV and a higher rate of stage I after introduction of the national screening program in 2014. The 30-day mortality rate after elective surgery has been reduced from >7% in 2001-2003. The database is a national population-based clinical database with high patient and data completeness for the perioperative period. The resolution of data is high for description of the patient at the time of diagnosis, including comorbidities, and for characterizing diagnosis, surgical interventions, and short-term outcomes. The database does not have high-resolution oncological data and does not register recurrences after primary surgery.
The Danish Colorectal Cancer Group provides high-quality data and has been documenting an increase in short- and long

  15. Evaluation of sorption distribution coefficient of Cs onto granite using sorption data collected in sorption database and sorption model

    International Nuclear Information System (INIS)

    Nagasaki, S.

    2013-01-01

    Based on the sorption distribution coefficients (K_d) of Cs onto granite collected from the JAERI Sorption Database (SDB), the parameters for a two-site model without the triple-layer structure were optimized. Comparing the experimentally measured K_d values of Cs onto Mizunami granite carried out by JAEA with the K_d values predicted by the model, the effect of the ionic strength on the K_d values of Cs onto granite was evaluated. It was found that K_d values could be determined using the content of biotite in granite at a sodium concentration ([Na]) of 1×10⁻² to 5×10⁻¹ mol/dm³. It was suggested that in high ionic strength solutions, the sorption of Cs onto other minerals such as microcline should also be taken into account. (author)
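    K_d as used here is the standard sorption distribution coefficient: sorbed activity per mass of solid divided by dissolved activity per volume of solution, giving units of dm³/kg. A trivial sketch with invented numbers:

    ```python
    # Sorption distribution coefficient from a hypothetical batch experiment.
    # Measurement values are made up for illustration.

    def k_d(sorbed_bq_per_kg, dissolved_bq_per_dm3):
        """K_d = sorbed concentration / dissolved concentration, in dm^3/kg."""
        return sorbed_bq_per_kg / dissolved_bq_per_dm3

    print(k_d(500.0, 2.5))  # -> 200.0 dm^3/kg
    ```

    The abstract's finding amounts to saying that, over the stated [Na] range, this ratio for Cs on granite can be predicted from the biotite content alone, while at higher ionic strength additional sorbing minerals must enter the model.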

  16. Documentation for the U.S. Geological Survey Public-Supply Database (PSDB): A database of permitted public-supply wells, surface-water intakes, and systems in the United States

    Science.gov (United States)

    Price, Curtis V.; Maupin, Molly A.

    2014-01-01

    The U.S. Geological Survey (USGS) has developed a database containing information about wells, surface-water intakes, and distribution systems that are part of public water systems across the United States, its territories, and possessions. Programs of the USGS such as the National Water Census, the National Water Use Information Program, and the National Water-Quality Assessment Program all require a complete and current inventory of public water systems, the sources of water used by those systems, and the size of populations served by the systems across the Nation. Although the U.S. Environmental Protection Agency’s Safe Drinking Water Information System (SDWIS) database already exists as the primary national Federal database for information on public water systems, the Public-Supply Database (PSDB) was developed to add value to SDWIS data with enhanced location and ancillary information, and to provide links to other databases, including the USGS’s National Water Information System (NWIS) database.

  17. Database Description - KAIKOcDNA | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us KAIKOcDNA Database Description General information of database Database name KAIKOcDNA Alter...National Institute of Agrobiological Sciences Akiya Jouraku E-mail : Database cla...ssification Nucleotide Sequence Databases Organism Taxonomy Name: Bombyx mori Taxonomy ID: 7091 Database des...rnal: G3 (Bethesda) / 2013, Sep / vol.9 External Links: Original website information Database maintenance si...available URL of Web services - Need for user registration Not available About This Database Database

  18. Download - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us Trypanosomes Database Download First of all, please read the license of this database. Data ...1.4 KB) Simple search and download Download via FTP FTP server is sometimes jammed. If it is, access [here]. About This Database Data...base Description Download License Update History of This Database Site Policy | Contact Us Download - Trypanosomes Database | LSDB Archive ...

  19. License - Arabidopsis Phenome Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us Arabidopsis Phenome Database License License to Use This Database Last updated : 2017/02/27 You may use this database...cense specifies the license terms regarding the use of this database and the requirements you must follow in using this database.... The license for this database is specified in the Creative ...Commons Attribution-Share Alike 4.0 International . If you use data from this database, please be sure to attribute this database...ative Commons Attribution-Share Alike 4.0 International is found here . With regard to this database, you ar

  20. Armada: a reference model for an evolving database system

    NARCIS (Netherlands)

    F.E. Groffen (Fabian); M.L. Kersten (Martin); S. Manegold (Stefan)

    2006-01-01

    textabstractThe current database deployment palette ranges from networked sensor-based devices to large data/compute Grids. Both extremes present common challenges for distributed DBMS technology. The local storage per device/node/site is severely limited compared to the total data volume being

  1. Imprecision and Uncertainty in the UFO Database Model.

    Science.gov (United States)

    Van Gyseghem, Nancy; De Caluwe, Rita

    1998-01-01

    Discusses how imprecision and uncertainty are dealt with in the UFO (Uncertainty and Fuzziness in an Object-oriented) database model. Such information is expressed by means of possibility distributions, and modeled by means of the proposed concept of "role objects." The role objects model uncertain, tentative information about objects,…
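
    A possibility distribution of the kind the UFO model uses can be sketched minimally in Python; the triangular shape and the "age about 30" attribute value are illustrative assumptions, not details taken from the paper:

    ```python
    # Possibility distribution for an imprecisely known attribute value,
    # e.g. "the age is about 30" (attribute and shape are illustrative only).
    def triangular(a, b, c):
        """Return a function giving the possibility degree of each candidate value."""
        def pi(x):
            if x <= a or x >= c:
                return 0.0          # values outside [a, c] are impossible
            if x <= b:
                return (x - a) / (b - a)
            return (c - x) / (c - b)
        return pi

    age = triangular(25, 30, 35)        # "about 30"
    print(age(30), age(27.5), age(40))  # fully possible, partly possible, impossible
    ```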

  2. Root Systems of Individual Plants, and the Biotic and Abiotic Factors Controlling Their Depth and Distribution: a Synthesis Using a Global Database.

    Science.gov (United States)

    Tumber-Davila, S. J.; Schenk, H. J.; Jackson, R. B.

    2017-12-01

    This synthesis examines plant rooting distributions globally, by doubling the number of entries in the Root Systems of Individual Plants database (RSIP) created by Schenk and Jackson. Root systems influence many processes, including water and nutrient uptake and soil carbon storage. Root systems also mediate vegetation responses to changing climatic and environmental conditions. Therefore, a collective understanding of the importance of rooting systems to carbon sequestration, soil characteristics, hydrology, and climate, is needed. Current global models are limited by a poor understanding of the mechanisms affecting rooting, carbon stocks, and belowground biomass. This improved database contains an extensive bank of records describing the rooting system of individual plants, as well as detailed information on the climate and environment from which the observations are made. The expanded RSIP database will: 1) increase our understanding of rooting depths, lateral root spreads and above and belowground allometry; 2) improve the representation of plant rooting systems in Earth System Models; 3) enable studies of how climate change will alter and interact with plant species and functional groups in the future. We further focus on how plant rooting behavior responds to variations in climate and the environment, and create a model that can predict rooting behavior given a set of environmental conditions. Preliminary results suggest that high potential evapotranspiration and seasonality of precipitation are indicative of deeper rooting after accounting for plant growth form. When mapping predicted deep rooting by climate, we predict deepest rooting to occur in equatorial South America, Africa, and central India.

  3. The Copenhagen primary care differential count (CopDiff) database

    DEFF Research Database (Denmark)

    Andersen, Christen Bertel L; Siersma, V.; Karlslund, W.

    2014-01-01

    BACKGROUND: The differential blood cell count provides valuable information about a person's state of health. Together with a variety of biochemical variables, these analyses describe important physiological and pathophysiological relations. There is a need for research databases to explore such associations. This paper describes the construction of the Copenhagen Primary Care Differential Count database, the distribution of characteristics of the population it covers, and the variables that are recorded, and it gives examples of its use as an inspiration to peers for collaboration. The Copenhagen General Practitioners' Laboratory has registered all analytical results since July 1, 2000. The Copenhagen Primary Care Differential Count database contains all differential blood cell count results (n=1,308,022) from July 1, 2000 to January 25, 2010 requested by general practitioners, along with results from analysis...

  4. Verification of road databases using multiple road models

    Science.gov (United States)

    Ziems, Marcel; Rottensteiner, Franz; Heipke, Christian

    2017-08-01

    In this paper a new approach for automatic road database verification based on remote sensing images is presented. In contrast to existing methods, the applicability of the new approach is not restricted to specific road types, context areas or geographic regions. This is achieved by combining several state-of-the-art road detection and road verification approaches that work well under different circumstances. Each one serves as an independent module representing a unique road model and a specific processing strategy. All modules provide independent solutions for the verification problem of each road object stored in the database in the form of two probability distributions, the first one for the state of a database object (correct or incorrect), and a second one for the state of the underlying road model (applicable or not applicable). In accordance with the Dempster-Shafer Theory, both distributions are mapped to a new state space comprising the classes correct, incorrect and unknown. Statistical reasoning is applied to obtain the optimal state of a road object. A comparison with state-of-the-art road detection approaches using benchmark datasets shows that in general the proposed approach provides results with higher completeness. Additional experiments reveal that based on the proposed method a highly reliable semi-automatic approach for road database verification can be designed.
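
    The per-module mapping and evidence combination described above can be sketched with Dempster's rule of combination; the numeric probabilities below are invented for illustration and are not values from the paper:

    ```python
    def module_mass(p_applicable, p_correct):
        """Map a module's two distributions to masses on {correct, incorrect, unknown}."""
        return {
            "correct": p_applicable * p_correct,
            "incorrect": p_applicable * (1.0 - p_correct),
            "unknown": 1.0 - p_applicable,   # model not applicable -> no evidence
        }

    def combine(m1, m2):
        """Dempster's rule of combination on the frame {correct, incorrect}."""
        conflict = m1["correct"] * m2["incorrect"] + m1["incorrect"] * m2["correct"]
        norm = 1.0 - conflict
        c = (m1["correct"] * m2["correct"]
             + m1["correct"] * m2["unknown"]
             + m1["unknown"] * m2["correct"]) / norm
        i = (m1["incorrect"] * m2["incorrect"]
             + m1["incorrect"] * m2["unknown"]
             + m1["unknown"] * m2["incorrect"]) / norm
        u = m1["unknown"] * m2["unknown"] / norm
        return {"correct": c, "incorrect": i, "unknown": u}

    # Two hypothetical road-model modules voting on one database object
    m = combine(module_mass(0.9, 0.8), module_mass(0.6, 0.7))
    state = max(m, key=m.get)
    print(state, m)
    ```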

  5. Database Description - AcEST | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available abase Description General information of database Database name AcEST Alternative n...hi, Tokyo-to 192-0397 Tel: +81-42-677-1111(ext.3654) E-mail: Database classificat...eneris Taxonomy ID: 13818 Database description This is a database of EST sequences of Adiantum capillus-vene...(3): 223-227. External Links: Original website information Database maintenance site Plant Environmental Res...base Database Description Download License Update History of This Database Site Policy | Contact Us Database Description - AcEST | LSDB Archive ...

  6. License - SKIP Stemcell Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us SKIP Stemcell Database License License to Use This Database Last updated : 2017/03/13 You may use this database...specifies the license terms regarding the use of this database and the requirements you must follow in using this database.... The license for this database is specified in the Creative Common...s Attribution-Share Alike 4.0 International . If you use data from this database, please be sure to attribute this database...al ... . The summary of the Creative Commons Attribution-Share Alike 4.0 International is found here . With regard to this database

  7. RefDB: The Reference Database for CMS Monte Carlo Production

    CERN Document Server

    Lefébure, V

    2003-01-01

    RefDB is the CMS Monte Carlo Reference Database. It is used for recording and managing all details of physics simulation, reconstruction and analysis requests, for coordinating task assignments to world-wide distributed Regional Centers, Grid-enabled or not, and for tracing their progress. RefDB is also the central database that the workflow planner contacts in order to get task instructions. It is automatically and asynchronously updated with book-keeping run summaries. Finally, it is the end-user interface to data catalogues.

  8. Catalog of databases and reports

    International Nuclear Information System (INIS)

    Burtis, M.D.

    1997-04-01

    This catalog provides information about the many reports and materials made available by the US Department of Energy's (DOE's) Global Change Research Program (GCRP) and the Carbon Dioxide Information Analysis Center (CDIAC). The catalog is divided into nine sections plus the author and title indexes: Section A--US Department of Energy Global Change Research Program Research Plans and Summaries; Section B--US Department of Energy Global Change Research Program Technical Reports; Section C--US Department of Energy Atmospheric Radiation Measurement (ARM) Program Reports; Section D--Other US Department of Energy Reports; Section E--CDIAC Reports; Section F--CDIAC Numeric Data and Computer Model Distribution; Section G--Other Databases Distributed by CDIAC; Section H--US Department of Agriculture Reports on Response of Vegetation to Carbon Dioxide; and Section I--Other Publications

  9. Catalog of databases and reports

    Energy Technology Data Exchange (ETDEWEB)

    Burtis, M.D. [comp.

    1997-04-01

    This catalog provides information about the many reports and materials made available by the US Department of Energy's (DOE's) Global Change Research Program (GCRP) and the Carbon Dioxide Information Analysis Center (CDIAC). The catalog is divided into nine sections plus the author and title indexes: Section A--US Department of Energy Global Change Research Program Research Plans and Summaries; Section B--US Department of Energy Global Change Research Program Technical Reports; Section C--US Department of Energy Atmospheric Radiation Measurement (ARM) Program Reports; Section D--Other US Department of Energy Reports; Section E--CDIAC Reports; Section F--CDIAC Numeric Data and Computer Model Distribution; Section G--Other Databases Distributed by CDIAC; Section H--US Department of Agriculture Reports on Response of Vegetation to Carbon Dioxide; and Section I--Other Publications.

  10. KALIMER database development

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, Kwan Seong; Lee, Yong Bum; Jeong, Hae Yong; Ha, Kwi Seok

    2003-03-01

    The KALIMER database is an advanced, Web-based database for integrated management of liquid metal reactor design technology development. The KALIMER design database is composed of a results database, Inter-Office Communication (IOC), a 3D CAD database, and a reserved documents database. The results database holds the research results from all phases of liquid metal reactor design technology development under mid-term and long-term nuclear R&D. IOC is a linkage control system between subprojects, used to share and integrate the research results for KALIMER. The 3D CAD database gives a schematic overview of the KALIMER design structure. The reserved documents database was developed to manage the various documents and reports produced since project accomplishment.

  11. KALIMER database development

    International Nuclear Information System (INIS)

    Jeong, Kwan Seong; Lee, Yong Bum; Jeong, Hae Yong; Ha, Kwi Seok

    2003-03-01

    The KALIMER database is an advanced, Web-based database for integrated management of liquid metal reactor design technology development. The KALIMER design database is composed of a results database, Inter-Office Communication (IOC), a 3D CAD database, and a reserved documents database. The results database holds the research results from all phases of liquid metal reactor design technology development under mid-term and long-term nuclear R&D. IOC is a linkage control system between subprojects, used to share and integrate the research results for KALIMER. The 3D CAD database gives a schematic overview of the KALIMER design structure. The reserved documents database was developed to manage the various documents and reports produced since project accomplishment.

  12. Database Description - RPSD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available base Description General information of database Database name RPSD Alternative nam...e Rice Protein Structure Database DOI 10.18908/lsdba.nbdc00749-000 Creator Creator Name: Toshimasa Yamazaki ... Ibaraki 305-8602, Japan National Institute of Agrobiological Sciences Toshimasa Yamazaki E-mail : Databas...e classification Structure Databases - Protein structure Organism Taxonomy Name: Or...or name(s): Journal: External Links: Original website information Database maintenance site National Institu

  13. Database Description - FANTOM5 | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us FANTOM5 Database Description General information of database Database name FANTOM5 Alternati...me: Rattus norvegicus Taxonomy ID: 10116 Taxonomy Name: Macaca mulatta Taxonomy ID: 9544 Database descriptio...l Links: Original website information Database maintenance site RIKEN Center for Life Science Technologies, ...ilable Web services Not available URL of Web services - Need for user registration Not available About This Database Database... Description Download License Update History of This Database Site Policy | Contact Us Database Description - FANTOM5 | LSDB Archive ...

  14. NoSQL databases

    OpenAIRE

    Mrozek, Jakub

    2012-01-01

    This thesis deals with database systems referred to as NoSQL databases. In the second chapter, I explain basic terms and the theory of database systems. A short explanation is dedicated to database systems based on the relational data model and the SQL standardized query language. Chapter Three explains the concept and history of the NoSQL databases, and also presents database models, major features and the use of NoSQL databases in comparison with traditional database systems. In the fourth ...

  15. Note on "An efficient approach for solving the lot-sizing problem with time-varying storage capacities"

    NARCIS (Netherlands)

    W. van den Heuvel (Wilco); J.M. Gutierrez (Jose Miguel); H.C. Hwang (Hark-Chin)

    2011-01-01

    textabstractIn a recent paper Gutierrez et al. (2008) show that the lot-sizing problem with inventory bounds can be solved in O(T log T) time. In this note we show that their algorithm does not lead to an optimal solution in general.

  16. TERRITORIALISATION DU RISQUE SANITAIRE : Les "immeubles tuberculeux" de l'îlot insalubre Saint Gervais (1894-1930)

    OpenAIRE

    Fijalkow , Yankel

    1996-01-01

    International audience; A statistical study of the actual dynamics of tuberculosis contagion in unsanitary block no. 16 of Paris, setting statistical data against the political intentions of the Paris municipality before 1945.

  17. Hydrologic and Pollutant Removal Performance of a Full-Scale, Fully Functional Permeable Pavement Parking Lot - paper

    Science.gov (United States)

    To meet the need for long-term, full-scale, replicated studies of permeable pavement systems used in their intended application (parking lot, roadway, etc.) across a range of climatic events, daily usage conditions, and maintenance regimes to evaluate these systems, the EPA’s Urb...

  18. Global review of health care surveys using lot quality assurance sampling (LQAS), 1984-2004.

    Science.gov (United States)

    Robertson, Susan E; Valadez, Joseph J

    2006-09-01

    We conducted a global review on the use of lot quality assurance sampling (LQAS) to assess health care services, health behaviors, and disease burden. Publications and reports on LQAS surveys were sought from Medline and five other electronic databases; the World Health Organization; the World Bank; governments, nongovernmental organizations, and individual scientists. We identified a total of 805 LQAS surveys conducted by different management groups during January 1984 through December 2004. There was a striking increase in the annual number of LQAS surveys conducted in 2000-2004 (128/year) compared with 1984-1999 (10/year). Surveys were conducted in 55 countries, and in 12 of these countries there were 10 or more LQAS surveys. Geographically, 317 surveys (39.4%) were conducted in Africa, 197 (24.5%) in the Americas, 115 (14.3%) in the Eastern Mediterranean, 114 (14.2%) in South-East Asia, 48 (6.0%) in Europe, and 14 (1.8%) in the Western Pacific. Health care parameters varied, and some surveys assessed more than one parameter. There were 320 surveys about risk factors for HIV/AIDS/sexually transmitted infections; 266 surveys on immunization coverage, 240 surveys post-disasters, 224 surveys on women's health, 142 surveys on growth and nutrition, 136 surveys on diarrheal disease control, and 88 surveys on quality management. LQAS surveys to assess disease burden included 23 neonatal tetanus mortality surveys and 12 surveys on other diseases. LQAS is a practical field method which increasingly is being applied in assessment of preventive and curative health services, and may offer new research opportunities to social scientists. When LQAS data are collected recurrently at multiple time points, they can be used to measure the spatial variation in behavior change. Such data provide insight into understanding relationships between various investments in social, human, and physical capital, and into the effectiveness of different public health strategies in achieving
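
    The core LQAS decision rule these surveys rely on can be sketched as a binomial calculation; the design values below (n = 19, decision value 6, 80%/50% coverage thresholds) are a common textbook choice, not figures taken from this review:

    ```python
    from math import comb

    def binom_cdf(k, n, p):
        """P(X <= k) for X ~ Binomial(n, p)."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

    def lqas_risks(n, d, p_high, p_low):
        """Classification errors of the rule 'accept the lot if failures <= d'.

        alpha: chance of rejecting a good lot (true failure rate p_low)
        beta:  chance of accepting a bad lot (true failure rate p_high)
        """
        alpha = 1.0 - binom_cdf(d, n, p_low)
        beta = binom_cdf(d, n, p_high)
        return alpha, beta

    # Hypothetical design: sample 19 children per lot; accept coverage if at most
    # 6 are unvaccinated, distinguishing 80% coverage (good) from 50% (bad).
    alpha, beta = lqas_risks(n=19, d=6, p_high=0.5, p_low=0.2)
    print(round(alpha, 3), round(beta, 3))
    ```

    With this design both misclassification risks come out below 10%, which is why samples as small as 19 per lot are workable in the field.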

  19. tmRDB (tmRNA database)

    DEFF Research Database (Denmark)

    Zwieb, Christian; Gorodkin, Jan; Knudsen, Bjarne

    2003-01-01

    Maintained at the University of Texas Health Science Center at Tyler, Texas, the tmRNA database (tmRDB) is accessible at the URL http://psyche.uthct.edu/dbs/tmRDB/tmRDB.html with mirror sites located at Auburn University, Auburn, Alabama (http://www.ag.auburn.edu/mirror/tmRDB/) and the Bioinformatics Research Center, Aarhus, Denmark (http://www.bioinf.au.dk/tmRDB/). The tmRDB collects and distributes information relevant to the study of tmRNA. In trans-translation, this molecule combines properties of tRNA and mRNA and binds several proteins to form the tmRNP. Related RNPs are likely...

  20. Development of knowledge base system linked to material database

    International Nuclear Information System (INIS)

    Kaji, Yoshiyuki; Tsuji, Hirokazu; Mashiko, Shinichi; Miyakawa, Shunichi; Fujita, Mitsutane; Kinugawa, Junichi; Iwata, Shuichi

    2002-01-01

    The distributed material database system named 'Data-Free-Way' has been developed by four organizations (the National Institute for Materials Science, the Japan Atomic Energy Research Institute, the Japan Nuclear Cycle Development Institute, and the Japan Science and Technology Corporation) under a cooperative agreement in order to share fresh and stimulating information as well as accumulated information for the development of advanced nuclear materials, for the design of structural components, etc. To create additional value in the system, a knowledge base system expressing knowledge extracted from the material database is planned, for more effective utilization of Data-Free-Way. XML (eXtensible Markup Language) has been adopted as the description method for the retrieved results and their meaning. Each knowledge note described with XML is stored as one unit of knowledge composing the knowledge base. Since the knowledge note is described with XML, the user can easily convert the display form of a table or graph into the data format the user usually works with. This paper describes the current status of Data-Free-Way, the method of describing knowledge extracted from the material database with XML, and the distributed material knowledge base system. (author)
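
    A knowledge note of the kind described can be sketched with the Python standard library's XML support; the element and attribute names below are hypothetical, since the paper does not publish its schema:

    ```python
    import xml.etree.ElementTree as ET

    # A hypothetical knowledge note: all element/attribute names are illustrative.
    note_xml = """
    <knowledge-note id="kn-001">
      <source database="Data-Free-Way" relation="creep_rupture"/>
      <query>SELECT stress, time_to_rupture FROM creep WHERE alloy='316SS'</query>
      <finding>Rupture life decreases monotonically with applied stress.</finding>
      <display preferred="graph" x="stress" y="time_to_rupture"/>
    </knowledge-note>
    """

    note = ET.fromstring(note_xml)
    print(note.get("id"))                        # note identifier
    print(note.find("finding").text)             # the extracted knowledge
    print(note.find("display").get("preferred")) # how to render the result
    ```

    Because the note is plain XML, a client can re-render the same content as a table, a graph, or an export format without touching the underlying database.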

  1. Recent developments and object-oriented approach in FTU database

    International Nuclear Information System (INIS)

    Bertocchi, A.; Bracco, G.; Buceti, G.; Centioli, C.; Iannone, F.; Manduchi, G.; Nanni, U.; Panella, M.; Stracuzzi, C.; Vitale, V.

    2001-01-01

    During the last two years, the experimental database of Frascati Tokamak Upgrade (FTU) has been changed from several points of view, particularly: (i) the data and the analysis codes have been moved from the IBM mainframe to Unix platforms, enabling the users to take advantage of the large quantities of commercial and free software available under Unix (Matlab, IDL, etc.); (ii) AFS (Andrew File System) has been chosen as the distributed file system, making the data available on all the nodes and distributing the workload; (iii) a 'one measure/one file' philosophy (vs. the previous 'one pulse/one file') has been adopted, increasing the number of files in the database but, at the same time, allowing the most important data to be available just after the plasma discharge. The client-server architecture has been tested using the signal viewer client jScope. Moreover, an object-oriented data model (OODM) of FTU experimental data has been tried: a generalized model of tokamak experimental data has been developed with typical concepts such as abstraction, encapsulation, inheritance, and polymorphism. The model has been integrated with data coming from different databases, building an Object Warehouse to extract, with data mining techniques, meaningful trends and patterns from huge amounts of data

  2. Evaluation of primary immunization coverage of infants under universal immunization programme in an urban area of Bangalore city using cluster sampling and lot quality assurance sampling techniques

    Directory of Open Access Journals (Sweden)

    Punith K

    2008-01-01

    Full Text Available Research Question: Is the LQAS technique better than the cluster sampling technique in terms of resources to evaluate immunization coverage in an urban area? Objective: To assess and compare lot quality assurance sampling against cluster sampling in the evaluation of primary immunization coverage. Study Design: Population-based cross-sectional study. Study Setting: Areas under Mathikere Urban Health Center. Study Subjects: Children aged 12 months to 23 months. Sample Size: 220 in cluster sampling, 76 in lot quality assurance sampling. Statistical Analysis: Percentages and proportions, chi-square test. Results: (1) Using cluster sampling, the percentages of completely immunized, partially immunized and unimmunized children were 84.09%, 14.09% and 1.82%, respectively. With lot quality assurance sampling, they were 92.11%, 6.58% and 1.31%, respectively. (2) Immunization coverage levels as evaluated by the cluster sampling technique were not statistically different from the coverage values obtained by the lot quality assurance sampling technique. Considering the time and resources required, lot quality assurance sampling was found to be the better technique for evaluating primary immunization coverage in an urban area.

  3. The design and development of RDGSM isotope database based on J2ME/J2EE

    International Nuclear Information System (INIS)

    Zhou Shumin; Gao Yongping; Sun Yamin

    2006-01-01

    RDGSM (the Regional Database for Geothermal Surface Manifestation) is a distributed database designed for the IAEA and used to manage isotope and hydrology data in the Asia-Pacific region. The data structure of RDGSM is introduced in the paper. A mobile database structure based on J2ME/J2EE is proposed. The design and development of the RDGSM isotope database is demonstrated in detail. The data synchronization method, the Model-View-Controller design model, and the data integrity constraint method are discussed to implement the mobile database application. (authors)

  4. An Analysis of Weakly Consistent Replication Systems in an Active Distributed Network

    OpenAIRE

    Amit Chougule; Pravin Ghewari

    2011-01-01

    With the sudden increase in heterogeneity and distribution of data in wide-area networks, more flexible, efficient and autonomous approaches for management and data distribution are needed. In recent years, the proliferation of inter-networks and distributed applications has increased the demand for geographically-distributed replicated databases. The architecture of Bayou provides features that address the needs of database storage of world-wide applications. Key is the use of weak consisten...

  5. A C programmer's view of a relational database

    International Nuclear Information System (INIS)

    Clifford, T.; Katz, R.; Griffiths, C.

    1989-01-01

    The AGS Distributed Control System (AGSDCS) uses a relational database (Interbase) for the storage of all data on the host system network. This includes the static data which describes the components of the accelerator complex, as well as data for application program setup and data records that are used in analysis. By creating a mapping of each relation in the database to a C record and providing general tools for relation (record) access, all the data in the database is available in a natural fashion (in structures) to all the C programs on any of the nodes of the control system. In this paper the correspondence between the Interbase relations and the C structures is detailed with examples of C typedefs and relation definitions. It is also shown how the relations can be put into memory and linked (related) together when fast access is needed by programs. 1 ref., 2 tabs

  6. A C programmer's view of a relational database

    International Nuclear Information System (INIS)

    Clifford, T.; Katz, R.; Griffiths, C.

    1990-01-01

    The AGS Distributed Control System (AGSDCS) uses a relational database (Interbase) for the storage of all data on the host system network. This includes the static data which describes the components of the accelerator complex, as well as data for application-program setup and data records that are used in analysis. By creating a mapping of each relation in the database to a C record and providing general tools for relation (record) access, all the data in the database is available in a natural fashion to all the C programs on any of the nodes of the control system. In this paper the correspondence between the Interbase relations and the C structure is detailed with examples of C 'typedefs' and relation definitions. It is also shown how the relations can be put into memory and linked (related) together when fast access is needed by programs. (orig.)
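
    The relation-to-record mapping idea is language-neutral; here is a minimal sketch in Python rather than C, with an invented table and invented field names standing in for the AGSDCS schema:

    ```python
    from dataclasses import dataclass

    # Illustrative analogue of mapping a database relation to a typed record,
    # as the AGSDCS does with C structs (table and field names are hypothetical).
    @dataclass
    class Magnet:
        name: str
        current_amps: float
        sector: int

    def rows_to_records(rows):
        """Materialize query result rows as typed records, one per tuple."""
        return [Magnet(*row) for row in rows]

    # Rows as they might come back from a SELECT over the hypothetical relation
    rows = [("D1", 1500.0, 3), ("Q7", 820.5, 5)]
    magnets = rows_to_records(rows)
    print(magnets[0].name, magnets[1].current_amps)
    ```

    The payoff is the same as in the C version: application code works with named, typed fields instead of raw tuples, so the schema is visible in one place.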

  7. A Technical Approach on Large Data Distributed Over a Network

    Directory of Open Access Journals (Sweden)

    Suhasini G

    2011-12-01

    Full Text Available Data mining is the nontrivial extraction of implicit, previously unknown and potentially useful information from data. For a database with a number of records and a set of classes such that each record belongs to one of the given classes, the problem of classification is to decide the class to which a given record belongs. The classification problem is also to generate a model for each class from a given data set. We make use of supervised classification, in which we have a training dataset of records, and for each record the class to which it belongs is known. There are many approaches to supervised classification. Decision trees are attractive in the data mining environment as they represent rules. Rules can readily be expressed in natural language and can even be mapped to database access languages. Nowadays, classification based on decision trees is one of the important problems in data mining, with applications in many areas. Database systems have become highly distributed and employ many paradigms. We consider the problem of inducing decision trees in a large network of highly distributed databases. Decision-tree classification is motivated by the existence of distributed databases in healthcare, bioinformatics, and human-computer interaction, and by the view that these databases will soon contain large amounts of data characterized by high dimensionality. Current decision tree algorithms would require high communication bandwidth and memory, and their efficiency and scalability degrade when executed on such large volumes of data. Approaches are therefore being developed to improve scalability and to analyse data distributed over a network. [keywords: data mining, decision tree, decision tree induction, distributed data, classification]
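
    One minimal way to induce decision trees over distributed data is to train a small tree at each site and combine predictions by majority vote, so only models (not records) cross the network. The sketch below uses one-level trees (decision stumps) and invented two-site data; it illustrates the general setting, not necessarily the authors' algorithm:

    ```python
    from collections import Counter

    def train_stump(rows):
        """Learn a one-level decision tree: the best single-feature threshold split."""
        best = None
        for f in range(len(rows[0][0])):
            for x, _ in rows:
                t = x[f]
                left = [y for xi, y in rows if xi[f] <= t]
                right = [y for xi, y in rows if xi[f] > t]
                if not left or not right:
                    continue
                correct = max(Counter(left).values()) + max(Counter(right).values())
                if best is None or correct > best[0]:
                    l = Counter(left).most_common(1)[0][0]
                    r = Counter(right).most_common(1)[0][0]
                    best = (correct, f, t, l, r)
        _, f, t, l, r = best
        return lambda x: l if x[f] <= t else r

    def federated_predict(stumps, x):
        """Majority vote over per-site models; only models cross the network."""
        votes = Counter(s(x) for s in stumps)
        return votes.most_common(1)[0][0]

    # Two hypothetical sites holding disjoint (features, label) rows
    site_a = [((1.0, 0.2), 0), ((1.2, 0.1), 0), ((3.0, 0.9), 1), ((2.8, 1.1), 1)]
    site_b = [((0.9, 0.3), 0), ((3.2, 0.8), 1), ((1.1, 0.4), 0), ((2.9, 1.0), 1)]
    models = [train_stump(site_a), train_stump(site_b)]
    print(federated_predict(models, (3.1, 0.9)))
    ```

    The communication cost is one model per site per round, independent of the number of records, which is exactly the property the abstract argues centralized tree induction lacks.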

  8. Database architectures for Space Telescope Science Institute

    Science.gov (United States)

    Lubow, Stephen

    1993-08-01

    At STScI nearly all large applications require database support. A general purpose architecture has been developed and is in use that relies upon an extended client-server paradigm. Processing is in general distributed across three processes, each of which generally resides on its own processor. Database queries are evaluated on one such process, called the DBMS server. The DBMS server software is provided by a database vendor. The application issues database queries and is called the application client. This client uses a set of generic DBMS application programming calls through our STDB/NET programming interface. Intermediate between the application client and the DBMS server is the STDB/NET server. This server accepts generic query requests from the application and converts them into the specific requirements of the DBMS server. In addition, it accepts query results from the DBMS server and passes them back to the application. Typically the STDB/NET server is local to the DBMS server, while the application client may be remote. The STDB/NET server provides additional capabilities such as database deadlock restart and performance monitoring. This architecture is currently in use for some major STScI applications, including the ground support system. We are currently investigating means of providing ad hoc query support to users through the above architecture. Such support is critical for providing flexible user interface capabilities. The Universal Relation advocated by Ullman, Kernighan, and others appears to be promising. In this approach, the user sees the entire database as a single table, thereby freeing the user from needing to understand the detailed schema. A software layer provides the translation between the user and detailed schema views of the database. However, many subtle issues arise in making this transformation. We are currently exploring this scheme for use in the Hubble Space Telescope user interface to the data archive system (DADS).
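The three-process pattern the abstract describes, an application client issuing generic queries that an intermediate server converts into the specific requirements of a vendor DBMS, can be sketched as follows. The class and method names (GenericQuery, VendorAdapter, to_sql) are hypothetical stand-ins, not the actual STDB/NET interface.

```python
# Hedged sketch of the client / intermediate-server / DBMS-server split
# described above. All names here are hypothetical; the real STDB/NET
# programming interface is not reproduced.

class GenericQuery:
    """A DBMS-independent query: table, columns, equality filters."""
    def __init__(self, table, columns, filters=None):
        self.table, self.columns, self.filters = table, columns, filters or {}

class VendorAdapter:
    """Plays the STDB/NET-server role: converts generic requests into
    the SQL dialect expected by one vendor's DBMS server."""
    def __init__(self, quote_char='"'):
        self.q = quote_char

    def to_sql(self, query):
        cols = ", ".join(f"{self.q}{c}{self.q}" for c in query.columns)
        sql = f"SELECT {cols} FROM {self.q}{query.table}{self.q}"
        if query.filters:
            preds = " AND ".join(f"{self.q}{k}{self.q} = :{k}"
                                 for k in sorted(query.filters))
            sql += f" WHERE {preds}"
        return sql, query.filters

query = GenericQuery("observations", ["id", "target"], {"instrument": "WFPC2"})
sql, params = VendorAdapter().to_sql(query)
print(sql)   # vendor-specific text produced from the generic request
```

Because the application only ever builds GenericQuery objects, swapping the DBMS vendor means swapping the adapter, which is the portability benefit the architecture aims at.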

  9. Stability measures for rolling schedules with applications to capacity expansion planning, master production scheduling, and lot sizing

    OpenAIRE

    Kimms, Alf

    1996-01-01

This contribution discusses the measurement of (in-)stability of finite-horizon production planning when done on a rolling horizon basis. As examples we review strategic capacity expansion planning, tactical master production scheduling, and operational capacitated lot sizing.

  10. The GLIMS Glacier Database

    Science.gov (United States)

    Raup, B. H.; Khalsa, S. S.; Armstrong, R.

    2007-12-01

Info, GML (Geography Markup Language), and GMT (Generic Mapping Tools). This "clip-and-ship" function allows users to download only the data they are interested in. Our flexible web interfaces to the database, which include various support layers (e.g. a layer to help collaborators identify satellite imagery over their region of expertise), will facilitate enhanced analysis of glacier systems, their distribution, and their impacts on other Earth systems.

  11. Geometry of q-Exponential Family of Probability Distributions

    Directory of Open Access Journals (Sweden)

    Shun-ichi Amari

    2011-06-01

Full Text Available The Gibbs distribution of statistical physics is an exponential family of probability distributions, which has a mathematical basis of duality in the form of the Legendre transformation. Recent studies of complex systems have found many distributions obeying a power law rather than the standard Gibbs-type distributions. The Tsallis q-entropy is a typical example capturing such phenomena. We treat the q-Gibbs distribution, or the q-exponential family, by generalizing the exponential function to the q-family of power functions, which is useful for studying various complex or non-standard physical phenomena. We give a new mathematical structure to the q-exponential family, different from those previously given. It has a dually flat geometrical structure derived from the Legendre transformation, and conformal geometry is useful for understanding it. The q-version of the maximum entropy theorem is naturally induced from the q-Pythagorean theorem. We also show that the maximizer of the q-escort distribution is a Bayesian MAP (Maximum A Posteriori Probability) estimator.
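The generalization of the exponential to a q-family of power functions mentioned in the abstract is the standard Tsallis q-exponential, exp_q(x) = [1 + (1 - q)x]_+^{1/(1-q)} for q ≠ 1, which recovers the ordinary exponential as q → 1. A small numerical check, offered as an illustration of that well-known definition rather than of the paper's geometric results:

```python
import math

# The q-exponential that generalizes exp in the Tsallis formalism:
#   exp_q(x) = [1 + (1 - q) x]_+^(1/(1 - q))   for q != 1,
# with the ordinary exponential recovered in the limit q -> 1.

def q_exp(x, q):
    if q == 1.0:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    if base <= 0.0:          # the [.]_+ cutoff: zero outside the support
        return 0.0
    return base ** (1.0 / (1.0 - q))

# As q -> 1 the q-exponential approaches exp:
print(q_exp(1.0, 0.999))     # close to e = 2.71828...
# Away from q = 1 it is a power function: (1 + 0.5*2)^(1/0.5) = 4
print(q_exp(2.0, 0.5))
```

Distributions built from this function decay as power laws rather than exponentially, which is exactly the behaviour the abstract says the Gibbs family fails to capture.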

  12. Enabling distributed petascale science

    International Nuclear Information System (INIS)

    Baranovski, Andrew; Bharathi, Shishir; Bresnahan, John

    2007-01-01

Petascale science is an end-to-end endeavour, involving not only the creation of massive datasets at supercomputers or experimental facilities, but also the subsequent analysis of that data by a user community that may be distributed across many laboratories and universities. The new SciDAC Center for Enabling Distributed Petascale Science (CEDPS) is developing tools to support this end-to-end process. These tools include data placement services for the reliable, high-performance, secure, and policy-driven placement of data within a distributed science environment; tools and techniques for the construction, operation, and provisioning of scalable science services; and tools for the detection and diagnosis of failures in end-to-end data placement and distributed application hosting configurations. In each area, we build on a strong base of existing technology and have made useful progress in the first year of the project. For example, we have recently achieved order-of-magnitude improvements in transfer times (for lots of small files) and implemented asynchronous data staging capabilities; demonstrated dynamic deployment of complex application stacks for the STAR experiment; and designed and deployed end-to-end troubleshooting services. We look forward to working with SciDAC application and technology projects to realize the promise of petascale science.

  13. Joining Distributed Complex Objects: Definition and Performance

    NARCIS (Netherlands)

    Teeuw, W.B.; Teeuw, Wouter B.; Blanken, Henk

    1992-01-01

The performance of a non-standard distributed database system is strongly influenced by complex objects. The effective exploitation of parallelism in querying them and a suitable structure to store them are required in order to obtain acceptable response times in these database environments where

  14. Note on "An efficient approach for solving the lot-sizing problem with time-varying storage capacities"

    NARCIS (Netherlands)

    W.J. van den Heuvel; J.M. Gutierrez (Jose Miguel); H.C. Hwang (Hark-Chin)

    2010-01-01

In a recent paper Gutiérrez et al. (2008) show that the lot-sizing problem with inventory bounds can be solved in O(T log T) time. In this note we show that their algorithm does not lead to an optimal solution in general.
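To make the problem in the note concrete, here is an exact but naive reference solver for small integer instances of single-item lot sizing with time-varying storage (inventory) capacities. It is a brute-force dynamic program over ending-inventory states, an illustrative sketch under an assumed cost model (per-period setup, unit production, and unit holding costs), not the O(T log T) algorithm of Gutiérrez et al.

```python
# Illustrative brute-force DP for lot sizing with time-varying storage
# capacities. Assumed cost model: setup cost K[t] whenever q_t > 0, unit
# production cost p[t], holding cost h[t] per unit of ending inventory,
# and ending inventory bounded by B[t]. Integer demands only.

def lot_size(d, K, p, h, B):
    T = len(d)
    INF = float("inf")
    best = {0: 0.0}                      # ending inventory -> min cost so far
    for t in range(T):
        nxt = {}
        for inv, cost in best.items():
            # choose ending inventory; production q must cover demand
            for end in range(B[t] + 1):
                q = end + d[t] - inv
                if q < 0:
                    continue
                c = cost + (K[t] if q > 0 else 0.0) + p[t] * q + h[t] * end
                if c < nxt.get(end, INF):
                    nxt[end] = c
        best = nxt
    return min(best.values())

# Two periods; the setup cost favors producing early, but the period-1
# storage cap of 1 unit forbids carrying the whole second demand forward:
cost = lot_size(d=[2, 2], K=[10, 10], p=[1, 1], h=[1, 1], B=[1, 0])
print(cost)
```

Exhaustive solvers like this are how counterexamples to a faster algorithm's optimality (the subject of the note) are typically verified on small instances.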

  15. An information integration system for structured documents, Web, and databases

    OpenAIRE

    Morishima, Atsuyuki

    1998-01-01

Rapid advances in computer network technology have changed the style of computer utilization. Distributed computing resources over world-wide computer networks are available from our local computers; they include powerful computers and a variety of information sources. This change raises more advanced requirements, and the integration of distributed information sources is one of them. In addition to conventional databases, structured documents have been widely used, and have increasing...

  16. Database Description - DMPD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

Full Text Available Database Description. General information: Database name: DMPD; Alternative name: Dynamic Macrophage Pathway CSML Database; DOI: 10.18908/lsdba.nbdc00558-000; Creator: Masao Naga..., University of Tokyo, 4-6-1 Shirokanedai, Minato-ku, Tokyo 108-8639, Tel: +81-3-5449-5615, FAX: +83-3-5449-5442; Taxonomy name: Mammalia, Taxonomy ID: 40674; Database description: DMPD collects...

  17. Construction of crystal structure prototype database: methods and applications.

    Science.gov (United States)

    Su, Chuanxun; Lv, Jian; Li, Quan; Wang, Hui; Zhang, Lijun; Wang, Yanchao; Ma, Yanming

    2017-04-26

Crystal structure prototype data have become a useful source of information for materials discovery in the fields of crystallography, chemistry, physics, and materials science. This work reports the development of a robust and efficient method for assessing the similarity of structures on the basis of their interatomic distances. Using this method, we proposed a simple and unambiguous definition of crystal structure prototype based on hierarchical clustering theory, and constructed the crystal structure prototype database (CSPD) by filtering the known crystallographic structures in a database. With a similar method, a structure prototype analysis package (SPAP) was developed to remove similar structures in CALYPSO prediction results and extract predicted low-energy structures for a separate theoretical structure database. A series of statistics describing the distribution of crystal structure prototypes in the CSPD was compiled to provide important insight for structure prediction and high-throughput calculations. Illustrative examples of the application of the proposed database are given, including the generation of initial structures for structure prediction and determination of the prototype structure in databases. These examples demonstrate the CSPD to be a generally applicable and useful tool for materials discovery.
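The core idea, clustering structures by a pairwise dissimilarity and keeping one representative (prototype) per cluster, can be sketched with a tiny single-linkage grouping. The distance matrix below is fabricated toy data; the CSPD's interatomic-distance-based similarity measure is more involved than this stand-in.

```python
# Hedged sketch of prototype extraction: merge structures whose pairwise
# distance falls below a threshold (single linkage via union-find), then
# keep one representative per cluster. The matrix D is toy data.

def single_linkage_clusters(dist, threshold):
    """Group indices i, j whenever dist[i][j] < threshold (transitively)."""
    n = len(dist)
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    for i in range(n):
        for j in range(i + 1, n):
            if dist[i][j] < threshold:
                parent[find(i)] = find(j)
    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return sorted(groups.values())

# Four hypothetical structures: 0 and 1 are near-duplicates, as are 2 and 3.
D = [[0.0, 0.1, 0.9, 0.8],
     [0.1, 0.0, 0.7, 0.9],
     [0.9, 0.7, 0.0, 0.2],
     [0.8, 0.9, 0.2, 0.0]]
clusters = single_linkage_clusters(D, threshold=0.3)
prototypes = [c[0] for c in clusters]   # one representative per cluster
print(clusters, prototypes)
```

Deduplicating prediction results (as SPAP does for CALYPSO output) follows the same pattern: cluster near-identical structures and retain one per group.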

  18. Construction of crystal structure prototype database: methods and applications

    International Nuclear Information System (INIS)

    Su, Chuanxun; Lv, Jian; Wang, Hui; Wang, Yanchao; Ma, Yanming; Li, Quan; Zhang, Lijun

    2017-01-01

Crystal structure prototype data have become a useful source of information for materials discovery in the fields of crystallography, chemistry, physics, and materials science. This work reports the development of a robust and efficient method for assessing the similarity of structures on the basis of their interatomic distances. Using this method, we proposed a simple and unambiguous definition of crystal structure prototype based on hierarchical clustering theory, and constructed the crystal structure prototype database (CSPD) by filtering the known crystallographic structures in a database. With a similar method, a structure prototype analysis package (SPAP) was developed to remove similar structures in CALYPSO prediction results and extract predicted low-energy structures for a separate theoretical structure database. A series of statistics describing the distribution of crystal structure prototypes in the CSPD was compiled to provide important insight for structure prediction and high-throughput calculations. Illustrative examples of the application of the proposed database are given, including the generation of initial structures for structure prediction and determination of the prototype structure in databases. These examples demonstrate the CSPD to be a generally applicable and useful tool for materials discovery. (paper)

  19. Database Dump - fRNAdb | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

Full Text Available fRNAdb Database Dump data detail. Data name: Database Dump; DOI: 10.18908/lsdba.nbdc00452-002; Description: ... data (tab-separated text); Data file: File name: Database_Dump, File URL: ftp://ftp....biosciencedbc.jp/archive/frnadb/LATEST/Database_Dump, File size: 673 MB; Number of data entries: 4 files.

  20. DOMe: A deduplication optimization method for the NewSQL database backups.

    Directory of Open Access Journals (Sweden)

    Longxiang Wang

Full Text Available Reducing duplicated data in database backups is an important application scenario for data deduplication technology. NewSQL is an emerging class of database systems that is now used more and more widely. NewSQL systems improve data reliability by periodically backing up in-memory data, which produces a lot of duplicated data. Traditional deduplication methods are not optimized for NewSQL server systems and cannot take full advantage of hardware resources to optimize deduplication performance. Recent research has pointed out that future NewSQL servers will have thousands of CPU cores, large DRAM, and huge NVRAM; how to utilize these hardware resources to optimize deduplication performance is therefore an important issue. To solve this problem, we propose a deduplication optimization method (DOMe) for NewSQL system backup. To exploit the large number of CPU cores in the NewSQL server, DOMe parallelizes the deduplication method on the fork-join framework. The fingerprint index, the key data structure in the deduplication process, is implemented as a pure in-memory hash table, which makes full use of the large DRAM in the NewSQL system and eliminates the fingerprint-index performance bottleneck of traditional deduplication methods. H-Store is used as a representative NewSQL database system to implement DOMe, which is evaluated experimentally on two representative backup data sets. The experimental results show that: (1) DOMe can reduce duplicated NewSQL backup data; (2) DOMe significantly improves deduplication performance by parallelizing the CDC algorithm: with a theoretical server speedup ratio of 20.8, DOMe achieves a speedup of up to 18; (3) DOMe improves deduplication throughput by 1.5 times through the pure in-memory index optimization.
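The two ingredients of the abstract, parallel chunk fingerprinting and a pure in-memory fingerprint index, can be sketched as follows. A thread pool stands in for the fork-join framework, and fixed-size chunking stands in for the paper's content-defined chunking (CDC); both substitutions are simplifying assumptions.

```python
# Hedged sketch of backup deduplication: fingerprint chunks in parallel,
# then keep only chunks whose fingerprint is not yet in an in-memory
# hash-table index. Fixed-size chunking replaces CDC for simplicity.

import hashlib
from concurrent.futures import ThreadPoolExecutor

def deduplicate(backup: bytes, chunk_size: int = 4096):
    chunks = [backup[i:i + chunk_size] for i in range(0, len(backup), chunk_size)]
    with ThreadPoolExecutor() as pool:            # parallel fingerprinting
        fingerprints = list(pool.map(lambda c: hashlib.sha256(c).digest(), chunks))
    index = {}                                    # in-memory fingerprint index
    stored = []                                   # unique chunks actually kept
    for fp, chunk in zip(fingerprints, chunks):
        if fp not in index:
            index[fp] = len(stored)
            stored.append(chunk)
    return stored, len(backup), sum(len(c) for c in stored)

# A backup image with heavy repetition deduplicates well:
backup = (b"A" * 4096 + b"B" * 4096) * 8          # 16 chunks, only 2 unique
stored, raw_bytes, kept_bytes = deduplicate(backup)
print(len(stored), raw_bytes, kept_bytes)         # 2 unique chunks kept
```

Hashing is embarrassingly parallel, which is why the paper's fork-join parallelization of the chunking/fingerprinting stage scales with core count, while the in-memory index removes the on-disk index lookups that bottleneck traditional deduplication.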