WorldWideScience

Sample records for distributed database system

  1. Distributed Database Management Systems A Practical Approach

    CERN Document Server

    Rahimi, Saeed K

    2010-01-01

This book addresses issues related to managing data across a distributed database system. It is unique because it covers traditional database theory and current research, explaining the difficulties in providing a unified user interface and global data dictionary. The book gives implementers guidance on hiding discrepancies across systems and creating the illusion of a single repository for users. It also includes three sample frameworks (implemented using J2SE with JMS, J2EE, and Microsoft .Net) that readers can use to learn how to implement a distributed database management system. IT and

  2. Concurrency control in distributed database systems

    CERN Document Server

    Cellary, W; Gelenbe, E

    1989-01-01

Distributed Database Systems (DDBS) may be defined as integrated database systems composed of autonomous local databases, geographically distributed and interconnected by a computer network. The purpose of this monograph is to present DDBS concurrency control algorithms and their related performance issues. The most recent results have been taken into consideration. A detailed analysis and selection of these results has been made so as to include those which will promote applications and progress in the field. The application of the methods and algorithms presented is not limited to DDBSs but a

  3. The ATLAS Distributed Data Management System & Databases

    CERN Document Server

    Garonne, V; The ATLAS collaboration; Barisits, M; Beermann, T; Vigne, R; Serfon, C

    2013-01-01

The ATLAS Distributed Data Management (DDM) System is responsible for the global management of petabytes of high energy physics data. The current system, DQ2, has a critical dependency on Relational Database Management Systems (RDBMS), like Oracle. RDBMS are well-suited to enforcing data integrity in online transaction processing applications; however, concerns have been raised about their scalability under data warehouse-like workloads. In particular, analysis of archived data or aggregation of transactional data for summary purposes is problematic. Therefore, we have evaluated new approaches to handle vast amounts of data. We have investigated a class of database technologies commonly referred to as NoSQL databases. This includes distributed filesystems, like HDFS, that support parallel execution of computational tasks on distributed data, as well as schema-less approaches via key-value stores, like HBase. In this talk we will describe our use cases in ATLAS, share our experiences with various databases used ...

  4. Distributed Access View Integrated Database (DAVID) system

    Science.gov (United States)

    Jacobs, Barry E.

    1991-01-01

The Distributed Access View Integrated Database (DAVID) System, which was adopted by the Astrophysics Division for their Astrophysics Data System, is a solution to the system heterogeneity problem. The heterogeneous components of the Astrophysics problem are outlined. The Library and Library Consortium levels of the DAVID approach are described. The 'books' and 'kits' level is discussed. The Universal Object Typer Management System level is described. The relation of the DAVID project with the Small Business Innovative Research (SBIR) program is explained.

  5. DAD - Distributed Adamo Database system at Hermes

    International Nuclear Information System (INIS)

    Wander, W.; Dueren, M.; Ferstl, M.; Green, P.; Potterveld, D.; Welch, P.

    1996-01-01

Software development for the HERMES experiment faces the challenges of many other experiments in modern High Energy Physics: complex data structures and relationships have to be processed at high I/O rates. Experimental control and data analysis are done in a distributed environment of CPUs with various operating systems and require access to different time-dependent databases, such as calibration and geometry. Slow control and experimental control need flexible inter-process communication. Program development is done in different programming languages, where interfaces to the libraries should not restrict the capabilities of the language. The needs of handling complex data structures are fulfilled by the ADAMO entity-relationship model. Mixed-language programming can be provided using the CFORTRAN package. DAD, the Distributed ADAMO Database library, was developed to provide the required I/O and database functionality. (author)

  6. Resident database interfaces to the DAVID system, a heterogeneous distributed database management system

    Science.gov (United States)

    Moroh, Marsha

    1988-01-01

    A methodology for building interfaces of resident database management systems to a heterogeneous distributed database management system under development at NASA, the DAVID system, was developed. The feasibility of that methodology was demonstrated by construction of the software necessary to perform the interface task. The interface terminology developed in the course of this research is presented. The work performed and the results are summarized.

  7. A Propose Model For Distributed Database System On Academic ...

    African Journals Online (AJOL)

    This paper takes a look at distributed database systems and its implementation and suitability to the academic environment of Nigeria tertiary institutions. It also takes cognizance of network operating system since the implementation of distributed database system highly depends upon computer networks. A simplified ...

  8. Interface between astrophysical datasets and distributed database management systems (DAVID)

    Science.gov (United States)

    Iyengar, S. S.

    1988-01-01

    This is a status report on the progress of the DAVID (Distributed Access View Integrated Database Management System) project being carried out at Louisiana State University, Baton Rouge, Louisiana. The objective is to implement an interface between Astrophysical datasets and DAVID. Discussed are design details and implementation specifics between DAVID and astrophysical datasets.

  9. Comparison of the Frontier Distributed Database Caching System with NoSQL Databases

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Non-relational "NoSQL" databases such as Cassandra and CouchDB are best known for their ability to scale to large numbers of clients spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects, is based on traditional SQL databases but also has the same high scalability and wide-area distributability for an important subset of applications. This paper compares the architectures, behavior, performance, and maintainability of the two different approaches and identifies the criteria for choosing which approach to prefer over the other.

  10. Distributed Database Control and Allocation. Volume 3. Distributed Database System Designer’s Handbook.

    Science.gov (United States)

    1983-10-01

Excerpt from the handbook's treatment of multiversion data (section 2.7; related subsections cover multiversion timestamping, multiversion locking, combining the techniques, and database recovery algorithms): in a database system model where each logical data item is stored at one DM, a multiversion database has each Write wi[x] produce a new copy (or version) of x, denoted xi; the value of x is thus a set of versions.
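
    The multiversion idea in this excerpt is easy to sketch in code. The snippet below is a minimal illustration, not taken from the handbook: each write appends a new version of a data item instead of overwriting it, and a read returns the newest version written no later than the reader's timestamp, roughly as a multiversion timestamping scheme would behave.

```python
# Minimal sketch of multiversion data: every write creates a new version of
# item x instead of overwriting it. Hypothetical code, for illustration only.
from collections import defaultdict

class MultiversionStore:
    def __init__(self):
        # item -> list of (timestamp, value), kept sorted by timestamp
        self.versions = defaultdict(list)

    def write(self, item, value, ts):
        """w_i[x]: produce a new version x_i rather than overwriting x."""
        self.versions[item].append((ts, value))
        self.versions[item].sort()

    def read(self, item, ts):
        """Return the newest version of `item` written at or before `ts`."""
        visible = [v for t, v in self.versions[item] if t <= ts]
        return visible[-1] if visible else None

store = MultiversionStore()
store.write("x", 10, ts=1)
store.write("x", 42, ts=5)
print(store.read("x", ts=3))  # -> 10 (version written at ts=1)
print(store.read("x", ts=9))  # -> 42 (latest version)
```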

  11. Comparison of the Frontier Distributed Database Caching System to NoSQL Databases

    International Nuclear Information System (INIS)

    Dykstra, Dave

    2012-01-01

    One of the main attractions of non-relational “NoSQL” databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also adds high scalability and the ability to be distributed over a wide-area for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.

  12. Comparison of the Frontier Distributed Database Caching System to NoSQL Databases

    Science.gov (United States)

    Dykstra, Dave

    2012-12-01

    One of the main attractions of non-relational “NoSQL” databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also adds high scalability and the ability to be distributed over a wide-area for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.

  13. Comparison of the Frontier Distributed Database Caching System to NoSQL Databases

    Energy Technology Data Exchange (ETDEWEB)

    Dykstra, Dave [Fermilab

    2012-07-20

    One of the main attractions of non-relational NoSQL databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also adds high scalability and the ability to be distributed over a wide-area for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.

  14. A data analysis expert system for large established distributed databases

    Science.gov (United States)

    Gnacek, Anne-Marie; An, Y. Kim; Ryan, J. Patrick

    1987-01-01

    A design for a natural language database interface system, called the Deductively Augmented NASA Management Decision support System (DANMDS), is presented. The DANMDS system components have been chosen on the basis of the following considerations: maximal employment of the existing NASA IBM-PC computers and supporting software; local structuring and storing of external data via the entity-relationship model; a natural easy-to-use error-free database query language; user ability to alter query language vocabulary and data analysis heuristic; and significant artificial intelligence data analysis heuristic techniques that allow the system to become progressively and automatically more useful.

  15. Schema architecture and their relationships to transaction processing in distributed database systems

    NARCIS (Netherlands)

    Apers, Peter M.G.; Scheuermann, P.

    1991-01-01

We discuss the different types of schema architectures which could be supported by distributed database systems, making a clear distinction between logical, physical, and federated distribution. We elaborate on the additional mapping information required in architectures based on logical distribution

  16. Checkpointing and Recovery in Distributed and Database Systems

    Science.gov (United States)

    Wu, Jiang

    2011-01-01

    A transaction-consistent global checkpoint of a database records a state of the database which reflects the effect of only completed transactions and not the results of any partially executed transactions. This thesis establishes the necessary and sufficient conditions for a checkpoint of a data item (or the checkpoints of a set of data items) to…

  17. Security in the CernVM File System and the Frontier Distributed Database Caching System

    CERN Document Server

    Dykstra, David

    2014-01-01

Both the CernVM File System (CVMFS) and the Frontier Distributed Database Caching System (Frontier) distribute centrally updated data worldwide for LHC experiments using http proxy caches. Neither system provides privacy or access control on reading the data, but both control access to updates of the data and can guarantee the integrity of the data transferred to clients over the internet. CVMFS has since its early days required digital signatures and secure hashes on all distributed data, and recently both CVMFS and Frontier have added X.509-based integrity checking. In this paper we detail and compare the security models of CVMFS and Frontier.

  18. Security in the CernVM File System and the Frontier Distributed Database Caching System

    Science.gov (United States)

    Dykstra, D.; Blomer, J.

    2014-06-01

    Both the CernVM File System (CVMFS) and the Frontier Distributed Database Caching System (Frontier) distribute centrally updated data worldwide for LHC experiments using http proxy caches. Neither system provides privacy or access control on reading the data, but both control access to updates of the data and can guarantee the authenticity and integrity of the data transferred to clients over the internet. CVMFS has since its early days required digital signatures and secure hashes on all distributed data, and recently Frontier has added X.509-based authenticity and integrity checking. In this paper we detail and compare the security models of CVMFS and Frontier.

  19. Security in the CernVM File System and the Frontier Distributed Database Caching System

    International Nuclear Information System (INIS)

    Dykstra, D; Blomer, J

    2014-01-01

    Both the CernVM File System (CVMFS) and the Frontier Distributed Database Caching System (Frontier) distribute centrally updated data worldwide for LHC experiments using http proxy caches. Neither system provides privacy or access control on reading the data, but both control access to updates of the data and can guarantee the authenticity and integrity of the data transferred to clients over the internet. CVMFS has since its early days required digital signatures and secure hashes on all distributed data, and recently Frontier has added X.509-based authenticity and integrity checking. In this paper we detail and compare the security models of CVMFS and Frontier.
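
    Neither CVMFS nor Frontier internals are shown here, but the basic idea of guaranteeing integrity over untrusted HTTP proxies can be sketched: the publisher signs a secure hash of the content, and the client recomputes the hash and verifies the signature before trusting the data. The snippet below is a generic illustration using Python's standard hashlib and hmac modules; a shared-secret HMAC stands in for the X.509 public-key signatures the real systems use.

```python
# Illustration of integrity checking for data fetched through untrusted caches.
# A keyed HMAC over a SHA-256 digest stands in for the X.509 signatures that
# CVMFS and Frontier actually use; the flow (hash, sign, verify) is the same idea.
import hashlib
import hmac

SECRET = b"publisher-signing-key"  # hypothetical key, for this sketch only

def publish(payload: bytes):
    digest = hashlib.sha256(payload).hexdigest()
    signature = hmac.new(SECRET, digest.encode(), hashlib.sha256).hexdigest()
    return payload, digest, signature     # what a proxy cache would serve

def verify(payload: bytes, digest: str, signature: str) -> bool:
    # Recompute the hash of whatever the cache delivered...
    if hashlib.sha256(payload).hexdigest() != digest:
        return False
    # ...and check that the digest was really signed by the publisher.
    expected = hmac.new(SECRET, digest.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

data, d, sig = publish(b"conditions payload v42")
assert verify(data, d, sig)                      # untampered data passes
assert not verify(b"tampered payload", d, sig)   # modified data is rejected
```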

  20. A distributed database system for sharing geological information using free and open source software

    Science.gov (United States)

    Nemoto, T.; Masumoto, S.; Nonogaki, S.; Raghavan, V.

    2013-12-01

Recently, geological information, such as borehole data and geological maps, and seismic, volcanic or landslide hazard maps are published on the Internet by the national government, local governments, and research institutes in Japan. Most web systems that deliver such geological information consist of a centralized database, which is located and maintained in one place. It is easier to manage a centralized database system because all data resides in a single location. However, if the database breaks, the web service will not be available. In the present study, a distributed database system has been developed to continue delivering geological information even if a database breaks. The distributed database system has the advantage that the system remains available although an individual database is down. All the software used to construct the system is free and open source software. PostgreSQL and pgpool-II are utilized to construct a distributed database. PostgreSQL is a powerful relational database management system. Pgpool-II has a function for management of multiple PostgreSQL servers. OpenLayers is used for the web map clients. The replication and parallel query modes of pgpool-II are utilized for distribution of the database. It is possible to create a real-time backup on 2 or more PostgreSQL clusters in replication mode. If a database breaks, the backup database will work to continue delivering geological information. Data can be split among multiple servers by using parallel query mode. The rules to send partitioned data to an appropriate cluster are contained in the System Database. If large-scale data is searched, the overall execution time will be reduced. A prototype for sharing 1500 borehole records has been successfully implemented by a combination of PostgreSQL and pgpool-II on a Linux server. Further development and improvement of the system are necessary to manage and analyze various spatial data in addition to borehole data. This study was supported
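
    As a rough illustration of the availability argument made above, the sketch below shows a client that tries a list of replicated PostgreSQL servers in turn and falls back to the backup when the primary is unreachable. It uses the psycopg2 driver with invented host names, credentials, and table names; in the system described, pgpool-II would perform this routing transparently in front of the clusters.

```python
# Hedged sketch: fall back to a replica when the primary database is down.
# Host names, credentials, and the boreholes table are hypothetical; in the
# described system, pgpool-II sits in front of PostgreSQL and does the routing.
import psycopg2

SERVERS = [
    {"host": "db-primary.example.org", "dbname": "geodb", "user": "reader"},
    {"host": "db-replica.example.org", "dbname": "geodb", "user": "reader"},
]

def query_boreholes(site_id):
    last_error = None
    for server in SERVERS:
        try:
            with psycopg2.connect(connect_timeout=3, **server) as conn:
                with conn.cursor() as cur:
                    cur.execute(
                        "SELECT borehole_id, depth_m FROM boreholes WHERE site_id = %s",
                        (site_id,),
                    )
                    return cur.fetchall()
        except psycopg2.OperationalError as exc:
            last_error = exc      # this server unreachable: try the next one
    raise RuntimeError("all database servers are down") from last_error
```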

  1. Client-server, distributed database strategies in a healthcare record system for a homeless population.

    Science.gov (United States)

    Chueh, H C; Barnett, G O

    1993-01-01

    A computer-based healthcare record system being developed for Boston's Healthcare for the Homeless Program (BHCHP) uses client-server and distributed database technologies to enhance the delivery of healthcare to patients of this unusual population. The needs of physicians, nurses and social workers are specifically addressed in the application interface so that an integrated approach to healthcare for this population can be facilitated. These patients and their providers have unique medical information needs that are supported by both database and applications technology. To integrate the information capabilities with the actual practice of providers of care to the homeless, this computer-based record system is designed for remote and portable use over regular phone lines. An initial standalone system is being used at one major BHCHP site of care. This project describes methods for creating a secure, accessible, and scalable computer-based medical record using client-server, distributed database design.

  2. ARACHNID: A prototype object-oriented database tool for distributed systems

    Science.gov (United States)

    Younger, Herbert; Oreilly, John; Frogner, Bjorn

    1994-01-01

    This paper discusses the results of a Phase 2 SBIR project sponsored by NASA and performed by MIMD Systems, Inc. A major objective of this project was to develop specific concepts for improved performance in accessing large databases. An object-oriented and distributed approach was used for the general design, while a geographical decomposition was used as a specific solution. The resulting software framework is called ARACHNID. The Faint Source Catalog developed by NASA was the initial database testbed. This is a database of many giga-bytes, where an order of magnitude improvement in query speed is being sought. This database contains faint infrared point sources obtained from telescope measurements of the sky. A geographical decomposition of this database is an attractive approach to dividing it into pieces. Each piece can then be searched on individual processors with only a weak data linkage between the processors being required. As a further demonstration of the concepts implemented in ARACHNID, a tourist information system is discussed. This version of ARACHNID is the commercial result of the project. It is a distributed, networked, database application where speed, maintenance, and reliability are important considerations. This paper focuses on the design concepts and technologies that form the basis for ARACHNID.
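
    The geographical decomposition idea (split the sky catalog into regions and search each piece on its own processor, with only weak linkage between them) can be sketched simply. The example below partitions sources by right-ascension band and searches the bands in parallel; the field layout and banding scheme are invented for illustration and are not ARACHNID's actual decomposition.

```python
# Sketch of geographical decomposition: partition a point-source catalog into
# right-ascension bands and search each band in parallel. The tuple layout
# (ra_deg, dec_deg, flux) and the banding scheme are hypothetical.
from concurrent.futures import ProcessPoolExecutor

def band_of(ra_deg, n_bands=8):
    return int(ra_deg // (360.0 / n_bands))

def partition(catalog, n_bands=8):
    bands = [[] for _ in range(n_bands)]
    for source in catalog:                 # source = (ra_deg, dec_deg, flux)
        bands[band_of(source[0], n_bands)].append(source)
    return bands

def search_band(args):
    band, min_flux = args
    return [s for s in band if s[2] >= min_flux]

def parallel_search(catalog, min_flux):
    bands = partition(catalog)
    with ProcessPoolExecutor() as pool:    # one worker per piece of the sky
        results = pool.map(search_band, [(b, min_flux) for b in bands])
    return [s for band_hits in results for s in band_hits]
```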

  3. Site initialization, recovery, and back-up in a distributed database system

    International Nuclear Information System (INIS)

    Attar, R.; Bernstein, P.A.; Goodman, N.

    1982-01-01

    Site initialization is the problem of integrating a new site into a running distributed database system (DDBS). Site recovery is the problem of integrating an old site into a DDBS when the site recovers from failure. Site backup is the problem of creating a static backup copy of a database for archival or query purposes. We present an algorithm that solves the site initialization problem. By modifying the algorithm slightly, we get solutions to the other two problems as well. Our algorithm exploits the fact that a correct DDBS must run a serializable concurrency control algorithm. Our algorithm relies on the concurrency control algorithm to handle all inter-site synchronization

  4. LHCb distributed conditions database

    International Nuclear Information System (INIS)

    Clemencic, M

    2008-01-01

    The LHCb Conditions Database project provides the necessary tools to handle non-event time-varying data. The main users of conditions are reconstruction and analysis processes, which are running on the Grid. To allow efficient access to the data, we need to use a synchronized replica of the content of the database located at the same site as the event data file, i.e. the LHCb Tier1. The replica to be accessed is selected from information stored on LFC (LCG File Catalog) and managed with the interface provided by the LCG developed library CORAL. The plan to limit the submission of jobs to those sites where the required conditions are available will also be presented. LHCb applications are using the Conditions Database framework on a production basis since March 2007. We have been able to collect statistics on the performance and effectiveness of both the LCG library COOL (the library providing conditions handling functionalities) and the distribution framework itself. Stress tests on the CNAF hosted replica of the Conditions Database have been performed and the results will be summarized here

  5. LHCb Distributed Conditions Database

    CERN Document Server

    Clemencic, Marco

    2007-01-01

The LHCb Conditions Database project provides the necessary tools to handle non-event time-varying data. The main users of conditions are reconstruction and analysis processes, which are running on the Grid. To allow efficient access to the data, we need to use a synchronized replica of the content of the database located at the same site as the event data file, i.e. the LHCb Tier1. The replica to be accessed is selected from information stored on LFC (LCG File Catalog) and managed with the interface provided by the LCG developed library CORAL. The plan to limit the submission of jobs to those sites where the required conditions are available will also be presented. LHCb applications are using the Conditions Database framework on a production basis since March 2007. We have been able to collect statistics on the performance and effectiveness of both the LCG library COOL (the library providing conditions handling functionalities) and the distribution framework itself. Stress tests on the CNAF hosted replica o...

  6. A Multidatabase System as 4-Tiered Client-Server Distributed Heterogeneous Database System

    OpenAIRE

    Ali, Mohammad Ghulam

    2009-01-01

In this paper, we describe a multidatabase system as a 4-tiered Client-Server DBMS architecture. We discuss its functional components and provide an overview of its performance characteristics. The first component of this proposed system is a web-based interface or Graphical User Interface, which resides on top of the Client Application Program; the second component of the system is a client Application Program running in an application server, which resides on top of the Global Database M...

  7. Distributed control and data processing system with a centralized database for a BWR power plant

    International Nuclear Information System (INIS)

    Fujii, K.; Neda, T.; Kawamura, A.; Monta, K.; Satoh, K.

    1980-01-01

    Recent digital techniques based on changes in electronics and computer technologies have realized a very wide scale of computer application to BWR Power Plant control and instrumentation. Multifarious computers, from micro to mega, are introduced separately. And to get better control and instrumentation system performance, hierarchical computer complex system architecture has been developed. This paper addresses the hierarchical computer complex system architecture which enables more efficient introduction of computer systems to a Nuclear Power Plant. Distributed control and processing systems, which are the components of the hierarchical computer complex, are described in some detail, and the database for the hierarchical computer complex is also discussed. The hierarchical computer complex system has been developed and is now in the detailed design stage for actual power plant application. (auth)

  8. Development of a distributed database for advanced nuclear materials (data-free-way system)

    International Nuclear Information System (INIS)

    Fujita, Mitsutane; Kurihara, Yutaka; Nakajima, Hajime; Yokoyama, Norio; Ueno, Fumiyoshi; Nomura, Shigeo; Iwata, Shuichi.

    1992-01-01

The distributed database system, which is the data system for the design or selection of advanced nuclear materials, has been built under the cooperation of the National Research Institute for Metals (NRIM), the Japan Atomic Energy Research Institute (JAERI) and the Power Reactor and Nuclear Fuel Development Corporation (PNC). The system is named 'Data-Free-Way'. The necessity of the data system is discussed, and the outline of the pilot system, including input data, is shown. The method of sharing data and examples of the easily accessible search of materials properties are described. Furthermore, the analysis results of tensile properties in type 316 stainless steels collected for this project are described as an example of a future trial of attractive/sophisticated utilization. (author)

  9. Design and analysis of stochastic DSS query optimizers in a distributed database system

    Directory of Open Access Journals (Sweden)

    Manik Sharma

    2016-07-01

Full Text Available Query optimization is a stimulating task of any database system. A number of heuristics have been applied in recent times, which proposed new algorithms for substantially improving the performance of a query. The hunt for a better solution still continues. The imperishable developments in the field of Decision Support System (DSS) databases are presenting data at an exceptional rate. The massive volume of DSS data is consequential only when it can be accessed and analyzed by distinctive researchers. Here, an innovative stochastic framework of DSS query optimizer is proposed to further optimize the design of existing query optimization genetic approaches. The results of the Entropy Based Restricted Stochastic Query Optimizer (ERSQO) are compared with the results of the Exhaustive Enumeration Query Optimizer (EAQO), Simple Genetic Query Optimizer (SGQO), Novel Genetic Query Optimizer (NGQO) and Restricted Stochastic Query Optimizer (RSQO). In terms of Total Costs, EAQO outperforms SGQO, NGQO, RSQO and ERSQO. However, the stochastic approaches dominate in terms of runtime. The Total Costs produced by ERSQO are better than those of SGQO, NGQO and RSQO by 12%, 8% and 5% respectively. Moreover, the effect of replicating data on the Total Costs of DSS queries is also examined. In addition, the statistical analysis revealed a 2-tailed significant correlation between the number of join operations and the Total Costs of distributed DSS queries. Finally, in regard to the consistency of the stochastic query optimizers, the results of SGQO, NGQO, RSQO and ERSQO are 96.2%, 97.2%, 97.4% and 97.8% consistent respectively.
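
    For readers unfamiliar with genetic query optimization, the sketch below shows the general shape of such an optimizer: candidate join orders are encoded as permutations, scored by a cost function, and evolved through selection and mutation. It is a generic illustration, not the SGQO/NGQO/RSQO/ERSQO algorithms evaluated in the paper, and the cost function is a placeholder.

```python
# Generic genetic join-order optimizer, illustrating the class of approaches
# evaluated in the paper. The cost() function is a placeholder; a real DSS
# optimizer would estimate transmission and processing costs of each plan.
import random

RELATIONS = ["sales", "customer", "product", "region", "time"]

def cost(order):
    # Toy cost model: pretend joining "larger" relations earlier costs more.
    return sum((len(RELATIONS) - i) * (len(name) % 4 + 1)
               for i, name in enumerate(order))

def mutate(order):
    child = order[:]
    i, j = random.sample(range(len(child)), 2)
    child[i], child[j] = child[j], child[i]   # swap two relations in the order
    return child

def genetic_optimize(generations=200, population_size=20):
    population = [random.sample(RELATIONS, len(RELATIONS))
                  for _ in range(population_size)]
    for _ in range(generations):
        population.sort(key=cost)
        survivors = population[: population_size // 2]            # selection
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(population_size - len(survivors))]
    return min(population, key=cost)

print(genetic_optimize())   # a low-cost join order under the toy cost model
```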

  10. An optimized approach for simultaneous horizontal data fragmentation and allocation in Distributed Database Systems (DDBSs).

    Science.gov (United States)

    Amer, Ali A; Sewisy, Adel A; Elgendy, Taha M A

    2017-12-01

With the substantial, ever-upgrading advancement in the data and information management field, the Distributed Database System (DDBS) is still proven to be the most demanded tool to handle the constantly growing volumes of data. However, the efficiency and adequacy of a DDBS are profoundly correlated with the reliability and precision of the process by which the DDBS is designed. For DDBS design, therefore, several strategies have been developed in the literature for the purpose of promoting DDBS performance. Of these strategies, data fragmentation, data allocation and replication, and site clustering are the most widely used and efficacious techniques, without which DDBS design and rendering would be prohibitively expensive. On one hand, an accurate, well-architected data fragmentation and allocation is bound to increase data locality and promote overall DDBS throughput. On the other hand, a practical site clustering process contributes remarkably to reducing the overall Transmission Costs (TC). Consequently, consolidating all these strategies into one single work satisfies a massive growth in DDBS influence. In this paper, therefore, an optimized heuristic horizontal fragmentation and allocation approach is meticulously developed. All the strategies drawn above are combined into a single effective approach, so that an influential solution for promoting DDBS productivity is markedly fulfilled. Most importantly, internal and external evaluations are extensively illustrated. The findings of the conducted experiments are strongly in favor of improved DDBS performance.
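
    A concrete, much-simplified picture of horizontal fragmentation and allocation may help here: rows are split into fragments by a predicate, and each fragment is allocated to the site that queries it most, which is what increases data locality and reduces transmission costs. The predicate, sites, and access counts below are invented for illustration; the paper's heuristic is considerably more elaborate.

```python
# Toy horizontal fragmentation and allocation. Rows are split by a predicate
# on `region`, and each fragment is allocated to the site that accesses it
# most often. Predicate, sites, and access frequencies are hypothetical.
ROWS = [
    {"id": 1, "region": "north", "amount": 120},
    {"id": 2, "region": "south", "amount": 75},
    {"id": 3, "region": "north", "amount": 300},
    {"id": 4, "region": "east",  "amount": 50},
]

# How often each site issues queries touching each region (access matrix).
ACCESS = {
    "site_A": {"north": 40, "south": 2,  "east": 1},
    "site_B": {"north": 3,  "south": 25, "east": 30},
}

def fragment(rows, attribute):
    fragments = {}
    for row in rows:
        fragments.setdefault(row[attribute], []).append(row)
    return fragments

def allocate(fragments, access):
    # Place each fragment at the site with the highest access frequency for it.
    return {value: max(access, key=lambda site: access[site].get(value, 0))
            for value in fragments}

frags = fragment(ROWS, "region")
print(allocate(frags, ACCESS))
# -> {'north': 'site_A', 'south': 'site_B', 'east': 'site_B'}
```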

  11. Centralized vs. Distributed Databases. Case Study

    Directory of Open Access Journals (Sweden)

    Nicoleta Magdalena Iacob

    2015-12-01

Full Text Available Currently, in the information technology domain, and implicitly in the database domain, two apparently contradictory approaches can be noticed: centralization and distribution. Although both aim to produce benefits, it is a known fact that for any advantage a price must be paid. In this paper we also present a case study: e-learning portal performance optimization using distributed database technology. At the stage of development in which institutions have branches distributed over a wide geographic area, distributed database systems become more appropriate to use, because they offer a higher degree of flexibility and adaptability than centralized ones.

  12. Role-Based Access Control for Loosely Coupled Distributed Database Management Systems

    National Research Council Canada - National Science Library

    Nygard, Greg

    2002-01-01

    .... For situations where the need exists to consolidate multiple independent databases, and where the direct integration of the databases is neither practical nor desirable, the application of RBAC...

  13. Database and Expert Systems Applications

    DEFF Research Database (Denmark)

    Viborg Andersen, Kim; Debenham, John; Wagner, Roland

This book constitutes the refereed proceedings of the 16th International Conference on Database and Expert Systems Applications, DEXA 2005, held in Copenhagen, Denmark, in August 2005. The 92 revised full papers presented together with 2 invited papers were carefully reviewed and selected from 390 submissions. The papers are organized in topical sections on workflow automation, database queries, data classification and recommendation systems, information retrieval in multimedia databases, Web applications, implementational aspects of databases, multimedia databases, XML processing, security, XML schemata, query evaluation, semantic processing, information retrieval, temporal and spatial databases, querying XML, organisational aspects of databases, natural language processing, ontologies, Web data extraction, semantic Web, data stream management, data extraction, distributed database systems...

  14. Oracle database systems administration

    OpenAIRE

    Šilhavý, Dominik

    2017-01-01

This master's thesis, Oracle database systems administration, describes problems in databases and how to solve them, which is important for database administrators. It helps them deliver faster solutions without the need to look for or figure out solutions on their own. The thesis describes database backup and recovery methods that are closely related to these problem solutions. The main goal is to provide guidance and recommendations regarding database troubles and how to solve them. It ...

  15. The Erasmus insurance case and a related questionnaire for distributed database management systems

    NARCIS (Netherlands)

    S.C. van der Made-Potuijt

    1990-01-01

This is the third report concerning transaction management in the database environment. In the first report the role of the transaction manager in protecting the integrity of a database has been studied [van der Made-Potuijt 1989]. In the second report a model has been given for a

  16. A Distributed Database System for Developing Ontological and Lexical Resources in Harmony

    NARCIS (Netherlands)

    Horák, A.; Vossen, P.T.J.M.; Rambousek, A.; Gelbukh, A.

    2010-01-01

    In this article, we present the basic ideas of creating a new information-rich lexical database of Dutch, called Cornetto, that is interconnected with corresponding English synsets and a formal ontology. The Cornetto database is based on two existing electronic dictionaries - the Referentie Bestand

  17. Client-server, distributed database strategies in a health-care record system for a homeless population.

    Science.gov (United States)

    Chueh, H C; Barnett, G O

    1994-01-01

    To design and develop a computer-based health-care record system to address the needs of the patients and providers of a homeless population. A computer-based health-care record system being developed for Boston's Healthcare for the Homeless Program (BHCHP) uses client-server technology and distributed database strategies to provide a common medical record for this transient population. The differing information requirements of physicians, nurses, and social workers are specifically addressed in the graphic application interface to facilitate an integrated approach to health care. This computer-based record system is designed for remote and portable use to integrate smoothly into the daily practice of providers of care to the homeless. The system uses remote networking technology and regular phone lines to support multiple concurrent users at remote sites of care. A stand-alone, pilot system is in operation at the BHCHP medical respite unit. Information on 129 patient encounters from 37 unique sites has been entered. A full client-server system has been designed. Benchmarks show that while the relative performance of a communication link based upon a phone line is 0.07 to 0.15 that of a local area network, optimization permits adequate response. Medical records access in a transient population poses special problems. Use of client-server and distributed database strategies can provide a technical foundation that provides a secure, reliable, and accessible computer-based medical record in this environment.

  18. Towards Sensor Database Systems

    DEFF Research Database (Denmark)

    Bonnet, Philippe; Gehrke, Johannes; Seshadri, Praveen

    2001-01-01

Sensor networks are being widely deployed for measurement, detection and surveillance applications. In these new applications, users issue long-running queries over a combination of stored data and sensor data. Most existing applications rely on a centralized system for collecting sensor data. These systems lack flexibility because data is extracted in a predefined way; also, they do not scale to a large number of devices because large volumes of raw data are transferred regardless of the queries that are submitted. In our new concept of sensor database system, queries dictate which data is extracted from the sensors. In this paper, we define the concept of sensor databases mixing stored data represented as relations and sensor data represented as time series. Each long-running query formulated over a sensor database defines a persistent view, which is maintained during a given time interval. We also describe the design and implementation of the COUGAR sensor database system.

  19. Analysis of Java Distributed Architectures in Designing and Implementing a Client/Server Database System

    National Research Council Canada - National Science Library

    Akin, Ramis

    1998-01-01

    ...) is one such class for providing client/server database access. There are many different approaches in using JDBC, ranging from low level socket programming, to a more abstract middleware approach...

  20. Towards Sensor Database Systems

    DEFF Research Database (Denmark)

    Bonnet, Philippe; Gehrke, Johannes; Seshadri, Praveen

    2001-01-01

    Sensor networks are being widely deployed for measurement, detection and surveillance applications. In these new applications, users issue long-running queries over a combination of stored data and sensor data. Most existing applications rely on a centralized system for collecting sensor data....... These systems lack flexibility because data is extracted in a predefined way; also, they do not scale to a large number of devices because large volumes of raw data are transferred regardless of the queries that are submitted. In our new concept of sensor database system, queries dictate which data is extracted...... from the sensors. In this paper, we define the concept of sensor databases mixing stored data represented as relations and sensor data represented as time series. Each long-running query formulated over a sensor database defines a persistent view, which is maintained during a given time interval. We...
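
    The "long-running query as persistent view" idea can be sketched outside of COUGAR itself: the query below joins a stored relation with readings pulled from sensors on demand, and the view is refreshed over the interval for which it is declared. The sensor interface and refresh loop are invented for illustration and are not the COUGAR API.

```python
# Sketch of a sensor database query mixing a stored relation with sensor data
# treated as a time series. The sensor objects and refresh loop are invented
# for illustration; COUGAR's actual query interface differs.
import random
import time

SENSORS = {"s1": "hallway", "s2": "lab"}   # stored relation: sensor -> location

def read_sensor(sensor_id):
    """Stand-in for pulling the latest sample from a physical device."""
    return {"sensor": sensor_id, "ts": time.time(), "temp_c": 20 + random.random() * 5}

def persistent_view(duration_s=3, period_s=1, threshold=22.0):
    """Maintain, for `duration_s`, the view of sensors whose reading exceeds threshold."""
    end = time.time() + duration_s
    while time.time() < end:
        view = []
        for sid, location in SENSORS.items():
            sample = read_sensor(sid)              # one point of the time series
            if sample["temp_c"] > threshold:
                view.append((location, round(sample["temp_c"], 1)))
        print("current view:", view)               # the query dictates what is extracted
        time.sleep(period_s)

persistent_view()
```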

  1. Database Management System

    Science.gov (United States)

    1990-01-01

    In 1981 Wayne Erickson founded Microrim, Inc, a company originally focused on marketing a microcomputer version of RIM (Relational Information Manager). Dennis Comfort joined the firm and is now vice president, development. The team developed an advanced spinoff from the NASA system they had originally created, a microcomputer database management system known as R:BASE 4000. Microrim added many enhancements and developed a series of R:BASE products for various environments. R:BASE is now the second largest selling line of microcomputer database management software in the world.

  2. Teradata Database System Optimization

    OpenAIRE

    Krejčík, Jan

    2008-01-01

The Teradata database system is specially designed for the data warehousing environment. This thesis explores the use of Teradata in this environment and describes its characteristics and potential areas for optimization. The theoretical part is intended to be user study material; it shows the main principles of Teradata system operation and describes factors significantly affecting system performance. The following sections are based on previously acquired information, which is used for analysis and ...

  3. Datamining on distributed medical databases

    DEFF Research Database (Denmark)

    Have, Anna Szynkowiak

    2004-01-01

    This Ph.D. thesis focuses on clustering techniques for Knowledge Discovery in Databases. Various data mining tasks relevant for medical applications are described and discussed. A general framework which combines data projection and data mining and interpretation is presented. An overview...... of various data projection techniques is offered with the main stress on applied Principal Component Analysis. For clustering purposes, various Generalized Gaussian Mixture models are presented. Further the aggregated Markov model, which provides the cluster structure via the probabilistic decomposition...

  4. Interconnecting heterogeneous database management systems

    Science.gov (United States)

    Gligor, V. D.; Luckenbaugh, G. L.

    1984-01-01

    It is pointed out that there is still a great need for the development of improved communication between remote, heterogeneous database management systems (DBMS). Problems regarding the effective communication between distributed DBMSs are primarily related to significant differences between local data managers, local data models and representations, and local transaction managers. A system of interconnected DBMSs which exhibit such differences is called a network of distributed, heterogeneous DBMSs. In order to achieve effective interconnection of remote, heterogeneous DBMSs, the users must have uniform, integrated access to the different DBMs. The present investigation is mainly concerned with an analysis of the existing approaches to interconnecting heterogeneous DBMSs, taking into account four experimental DBMS projects.

  5. Distributed System Control

    National Research Council Canada - National Science Library

    Berea, James

    1997-01-01

    Global control in distributed systems had not been well researched. Control had only been addressed in a limited manner, such as for data-update consistency in distributed, redundant databases or for confidentiality controls...

  6. Development of a PSA information database system

    International Nuclear Information System (INIS)

    Kim, Seung Hwan

    2005-01-01

The need to develop a PSA information database for performing a PSA has been growing rapidly. For example, performing a PSA requires a lot of data to analyze, to evaluate the risk, to trace the process of results, and to verify the results. A PSA information database is a system that stores all PSA-related information in a database and file system, with cross links to jump to the physical documents whenever they are needed. Korea Atomic Energy Research Institute is developing a PSA information database system, AIMS (Advanced Information Management System for PSA). The objective is to integrate and computerize all the distributed information of a PSA into a single system and to enhance the accessibility to PSA information for all PSA-related activities. This paper describes how we implemented such a database-centered application from the viewpoint of two areas: database design and data (document) service

  7. An XCT image database system

    International Nuclear Information System (INIS)

    Komori, Masaru; Minato, Kotaro; Koide, Harutoshi; Hirakawa, Akina; Nakano, Yoshihisa; Itoh, Harumi; Torizuka, Kanji; Yamasaki, Tetsuo; Kuwahara, Michiyoshi.

    1984-01-01

In this paper, an expansion of an X-ray CT (XCT) examination history database into an XCT image database is discussed. The XCT examination history database has been constructed and used for daily examination and investigation in our hospital. This database consists of alpha-numeric information (locations, diagnosis and so on) for more than 15,000 cases, and for some of them we add tree-structured image data, which has the flexibility to accommodate various types of image data. This database system is written in the MUMPS database manipulation language. (author)

  8. Root Systems of Individual Plants, and the Biotic and Abiotic Factors Controlling Their Depth and Distribution: a Synthesis Using a Global Database.

    Science.gov (United States)

    Tumber-Davila, S. J.; Schenk, H. J.; Jackson, R. B.

    2017-12-01

    This synthesis examines plant rooting distributions globally, by doubling the number of entries in the Root Systems of Individual Plants database (RSIP) created by Schenk and Jackson. Root systems influence many processes, including water and nutrient uptake and soil carbon storage. Root systems also mediate vegetation responses to changing climatic and environmental conditions. Therefore, a collective understanding of the importance of rooting systems to carbon sequestration, soil characteristics, hydrology, and climate, is needed. Current global models are limited by a poor understanding of the mechanisms affecting rooting, carbon stocks, and belowground biomass. This improved database contains an extensive bank of records describing the rooting system of individual plants, as well as detailed information on the climate and environment from which the observations are made. The expanded RSIP database will: 1) increase our understanding of rooting depths, lateral root spreads and above and belowground allometry; 2) improve the representation of plant rooting systems in Earth System Models; 3) enable studies of how climate change will alter and interact with plant species and functional groups in the future. We further focus on how plant rooting behavior responds to variations in climate and the environment, and create a model that can predict rooting behavior given a set of environmental conditions. Preliminary results suggest that high potential evapotranspiration and seasonality of precipitation are indicative of deeper rooting after accounting for plant growth form. When mapping predicted deep rooting by climate, we predict deepest rooting to occur in equatorial South America, Africa, and central India.

  9. MammoGrid - a prototype distributed mammographic database for Europe

    International Nuclear Information System (INIS)

    Warren, R.; Solomonides, A.E.; Del Frate, C.; Warsi, I.; Ding, J.; Odeh, M.; McClatchey, R.; Tromans, C.; Brady, M.; Highnam, R.; Cordell, M.; Estrella, F.; Bazzocchi, M.; Amendolia, S.R.

    2007-01-01

    This paper describes the prototype for a Europe-wide distributed database of mammograms entitled MammoGrid, which was developed as part of an EU-funded project. The MammoGrid database appears to the user to be a single database, but the mammograms that comprise it are in fact retained and curated in the centres that generated them. Linked to each image is a potentially large and expandable set of patient information, known as metadata. Transmission of mammograms and metadata is secure, and a data acquisition system has been developed to upload and download mammograms from the distributed database, and then annotate them, rewriting the annotations to the database. The user can be anywhere in the world, but access rights can be applied. The paper aims to raise awareness among radiologists of the potential of emerging 'grid' technology ('the second-generation Internet')

  10. Scalable Database Access Technologies for ATLAS Distributed Computing

    CERN Document Server

    Vaniachine, A

    2009-01-01

    ATLAS event data processing requires access to non-event data (detector conditions, calibrations, etc.) stored in relational databases. The database-resident data are crucial for the event data reconstruction processing steps and often required for user analysis. A main focus of ATLAS database operations is on the worldwide distribution of the Conditions DB data, which are necessary for every ATLAS data processing job. Since Conditions DB access is critical for operations with real data, we have developed the system where a different technology can be used as a redundant backup. Redundant database operations infrastructure fully satisfies the requirements of ATLAS reprocessing, which has been proven on a scale of one billion database queries during two reprocessing campaigns of 0.5 PB of single-beam and cosmics data on the Grid. To collect experience and provide input for a best choice of technologies, several promising options for efficient database access in user analysis were evaluated successfully. We pre...

  11. Process evaluation distributed system

    Science.gov (United States)

    Moffatt, Christopher L. (Inventor)

    2006-01-01

    The distributed system includes a database server, an administration module, a process evaluation module, and a data display module. The administration module is in communication with the database server for providing observation criteria information to the database server. The process evaluation module is in communication with the database server for obtaining the observation criteria information from the database server and collecting process data based on the observation criteria information. The process evaluation module utilizes a personal digital assistant (PDA). A data display module in communication with the database server, including a website for viewing collected process data in a desired metrics form, the data display module also for providing desired editing and modification of the collected process data. The connectivity established by the database server to the administration module, the process evaluation module, and the data display module, minimizes the requirement for manual input of the collected process data.

  12. Viewpoints: a framework for object oriented database modelling and distribution

    Directory of Open Access Journals (Sweden)

    Fouzia Benchikha

    2006-01-01

    Full Text Available The viewpoint concept has received widespread attention recently. Its integration into a data model improves the flexibility of the conventional object-oriented data model and allows one to improve the modelling power of objects. The viewpoint paradigm can be used as a means of providing multiple descriptions of an object and as a means of mastering the complexity of current database systems enabling them to be developed in a distributed manner. The contribution of this paper is twofold: to define an object data model integrating viewpoints in databases and to present a federated database system integrating multiple sources following a local-as-extended-view approach.

  13. Spatial distribution of clinical computer systems in primary care in England in 2016 and implications for primary care electronic medical record databases: a cross-sectional population study.

    Science.gov (United States)

    Kontopantelis, Evangelos; Stevens, Richard John; Helms, Peter J; Edwards, Duncan; Doran, Tim; Ashcroft, Darren M

    2018-02-28

    UK primary care databases (PCDs) are used by researchers worldwide to inform clinical practice. These databases have been primarily tied to single clinical computer systems, but little is known about the adoption of these systems by primary care practices or their geographical representativeness. We explore the spatial distribution of clinical computing systems and discuss the implications for the longevity and regional representativeness of these resources. Cross-sectional study. English primary care clinical computer systems. 7526 general practices in August 2016. Spatial mapping of family practices in England in 2016 by clinical computer system at two geographical levels, the lower Clinical Commissioning Group (CCG, 209 units) and the higher National Health Service regions (14 units). Data for practices included numbers of doctors, nurses and patients, and area deprivation. Of 7526 practices, Egton Medical Information Systems (EMIS) was used in 4199 (56%), SystmOne in 2552 (34%) and Vision in 636 (9%). Great regional variability was observed for all systems, with EMIS having a stronger presence in the West of England, London and the South; SystmOne in the East and some regions in the South; and Vision in London, the South, Greater Manchester and Birmingham. PCDs based on single clinical computer systems are geographically clustered in England. For example, Clinical Practice Research Datalink and The Health Improvement Network, the most popular primary care databases in terms of research outputs, are based on the Vision clinical computer system, used by <10% of practices and heavily concentrated in three major conurbations and the South. Researchers need to be aware of the analytical challenges posed by clustering, and barriers to accessing alternative PCDs need to be removed. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  14. Database management systems understanding and applying database technology

    CERN Document Server

    Gorman, Michael M

    1991-01-01

Database Management Systems: Understanding and Applying Database Technology focuses on the processes, methodologies, techniques, and approaches involved in database management systems (DBMSs). The book first takes a look at ANSI database standards and DBMS applications and components. Discussions focus on application components and DBMS components, implementing the dynamic relationship application, problems and benefits of dynamic relationship DBMSs, the nature of a dynamic relationship application, ANSI/NDL, and DBMS standards. The manuscript then ponders on logical database, interrogation, and phy

  15. A Sandia telephone database system

    Energy Technology Data Exchange (ETDEWEB)

    Nelson, S.D.; Tolendino, L.F.

    1991-08-01

    Sandia National Laboratories, Albuquerque, may soon have more responsibility for the operation of its own telephone system. The processes that constitute providing telephone service can all be improved through the use of a central data information system. We studied these processes, determined the requirements for a database system, then designed the first stages of a system that meets our needs for work order handling, trouble reporting, and ISDN hardware assignments. The design was based on an extensive set of applications that have been used for five years to manage the Sandia secure data network. The system utilizes an Ingres database management system and is programmed using the Application-By-Forms tools.

  16. Human Exposure Database System (HEDS)

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Human Exposure Database System (HEDS) provides public access to data sets, documents, and metadata from EPA on human exposure. It is primarily intended for...

  17. Distributed systems

    CERN Document Server

    Van Steen, Maarten

    2017-01-01

    For this third edition of "Distributed Systems," the material has been thoroughly revised and extended, integrating principles and paradigms into nine chapters: 1. Introduction 2. Architectures 3. Processes 4. Communication 5. Naming 6. Coordination 7. Replication 8. Fault tolerance 9. Security A separation has been made between basic material and more specific subjects. The latter have been organized into boxed sections, which may be skipped on first reading. To assist in understanding the more algorithmic parts, example programs in Python have been included. The examples in the book leave out many details for readability, but the complete code is available through the book's Website, hosted at www.distributed-systems.net.

  18. The magnet components database system

    International Nuclear Information System (INIS)

    Baggett, M.J.; Leedy, R.; Saltmarsh, C.; Tompkins, J.C.

    1990-01-01

    The philosophy, structure, and usage of MagCom, the SSC magnet components database, are described. The database has been implemented in Sybase (a powerful relational database management system) on a UNIX-based workstation at the Superconducting Super Collider Laboratory (SSCL); magnet project collaborators can access the database via network connections. The database was designed to contain the specifications and measured values of important properties for major materials, plus configuration information (specifying which individual items were used in each cable, coil, and magnet) and the test results on completed magnets. The data will facilitate the tracking and control of the production process as well as the correlation of magnet performance with the properties of its constituents. 3 refs., 9 figs

  19. A Simulation Tool for Distributed Databases.

    Science.gov (United States)

    1981-09-01

Reed’s multiversion system [REED78] may also be viewed as updating only copies until the commit is made. The decision to make the changes...distributed voting, and Ellis’ ring algorithm. Other, significantly different algorithms not covered in his work include Reed’s multiversion algorithm, the

  20. Generalized Database Management System Support for Numeric Database Environments.

    Science.gov (United States)

    Dominick, Wayne D.; Weathers, Peggy G.

    1982-01-01

    This overview of potential for utilizing database management systems (DBMS) within numeric database environments highlights: (1) major features, functions, and characteristics of DBMS; (2) applicability to numeric database environment needs and user needs; (3) current applications of DBMS technology; and (4) research-oriented and…

  1. A geographic information system on the potential distribution and abundance of Fasciola hepatica and F. gigantica in east Africa based on Food and Agriculture Organization databases.

    Science.gov (United States)

    Malone, J B; Gommes, R; Hansen, J; Yilma, J M; Slingenberg, J; Snijders, F; Nachtergaele, F; Ataman, E

    1998-07-31

    An adaptation of a previously developed climate forecast computer model and digital agroecologic database resources available from FAO for developing countries were used to develop a geographic information system risk assessment model for fasciolosis in East Africa, a region where both F. hepatica and F. gigantica occur as a cause of major economic losses in livestock. Regional F. hepatica and F. gigantica forecast index maps were created. Results were compared to environmental data parameters, known life cycle micro-environment requirements and to available Fasciola prevalence survey data and distribution patterns reported in the literature for each species (F. hepatica above 1200 m elevation, F. gigantica below 1800 m, both at 1200-1800 m). The greatest risk, for both species, occurred in areas of extended high annual rainfall associated with high soil moisture and surplus water, with risk diminishing in areas of shorter wet season and/or lower temperatures. Arid areas were generally unsuitable (except where irrigation, water bodies or floods occur) due to soil moisture deficit and/or, in the case of F. hepatica, high average annual mean temperature >23 degrees C. Regions in the highlands of Ethiopia and Kenya were identified as unsuitable for F. gigantica due to inadequate thermal regime, below the 600 growing degree days required for completion of the life cycle in a single year. The combined forecast index (F. hepatica+F. gigantica) was significantly correlated to prevalence data available for 260 of the 1220 agroecologic crop production system zones (CPSZ) and to average monthly normalized difference vegetation index (NDVI) values derived from the advanced very high resolution radiometer (AVHRR) sensor on board the NOAA polar-orbiting satellites. For use in Fasciola control programs, results indicate that monthly forecast parameters, developed in a GIS with digital agroecologic zone databases and monthly climate databases, can be used to define the
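
    The 600 growing-degree-day threshold mentioned above is a simple accumulation of heat above a base temperature. The short calculation below illustrates the standard formula with invented monthly mean temperatures and an assumed base temperature of 10 degrees C, chosen purely as an example; the forecast model in the paper uses its own climate inputs and parameters.

```python
# Worked example of a growing-degree-day (GDD) accumulation. Monthly mean
# temperatures and the 10 degree C base temperature are assumed for
# illustration; the paper's forecast model has its own parameters.
MONTHLY_MEAN_C = [14, 15, 17, 19, 21, 22, 22, 21, 20, 18, 16, 14]  # hypothetical site
DAYS_IN_MONTH = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
BASE_TEMP_C = 10.0

annual_gdd = sum(days * max(0.0, mean - BASE_TEMP_C)
                 for mean, days in zip(MONTHLY_MEAN_C, DAYS_IN_MONTH))

print(round(annual_gdd))   # ~3000 degree-days at this hypothetical site
print(annual_gdd >= 600)   # True: above the ~600 GDD needed for the parasite
                           # life cycle to complete within a single year
```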

  2. LHCb Conditions Database Operation Assistance Systems

    CERN Multimedia

    Shapoval, Illya

    2012-01-01

    The Conditions Database of the LHCb experiment (CondDB) provides versioned, time dependent geometry and conditions data for all LHCb data processing applications (simulation, high level trigger, reconstruction, analysis) in a heterogeneous computing environment ranging from user laptops to the HLT farm and the Grid. These different use cases impose front-end support for multiple database technologies (Oracle and SQLite are used). Sophisticated distribution tools are required to ensure timely and robust delivery of updates to all environments. The content of the database has to be managed to ensure that updates are internally consistent and externally compatible with multiple versions of the physics application software. In this paper we describe three systems that we have developed to address these issues: - an extension to the automatic content validation done by the “Oracle Streams” replication technology, to trap cases when the replication was unsuccessful; - an automated distribution process for the S...

  3. The Network Configuration of an Object Relational Database Management System

    Science.gov (United States)

    Diaz, Philip; Harris, W. C.

    2000-01-01

    The networking and implementation of the Oracle Database Management System (ODBMS) requires developers to have knowledge of the UNIX operating system as well as all the features of the Oracle Server. The server is an object relational database management system (DBMS). By using distributed processing, processes are split up between the database server and client application programs. The DBMS handles all the responsibilities of the server. The workstations running the database application concentrate on the interpretation and display of data.

  4. The CMS Condition Database system

    CERN Document Server

    Govi, Giacomo Maria; Ojeda-Sandonis, Miguel; Pfeiffer, Andreas; Sipos, Roland

    2015-01-01

    The Condition Database plays a key role in the CMS computing infrastructure. The complexity of the detector and the variety of the sub-systems involved set tight requirements for handling the Conditions. In the last two years the collaboration has put effort into the re-design of the Condition Database system, with the aim of improving the scalability and the operability for the data taking starting in 2015. The re-design has focused on simplifying the architecture, using the lessons learned during the operation of the previous data-taking period. In the new system the relational features of the database schema are mainly exploited to handle the metadata (Tag and Interval of Validity), allowing for a limited and controlled set of queries. The bulk condition data (Payloads) are stored as unstructured binary data, allowing storage in a single table with a common layout for all of the condition data types. In this presentation, we describe the full architecture of the system, including the serv...
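
    A minimal sketch of the data layout described above, assuming a simplified, illustrative schema (not the actual CMS one): tags and intervals of validity live in relational tables, while payloads are opaque blobs in a single table keyed by a hash.

```python
import sqlite3

# Illustrative Tag / IOV / Payload layout; all table and column names are assumptions.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE payload (hash TEXT PRIMARY KEY, object_type TEXT, data BLOB);
CREATE TABLE tag     (name TEXT PRIMARY KEY, payload_type TEXT);
CREATE TABLE iov     (tag_name TEXT REFERENCES tag(name),
                      since INTEGER,               -- first run/time the payload is valid for
                      payload_hash TEXT REFERENCES payload(hash),
                      PRIMARY KEY (tag_name, since));
""")

def payload_for(conn, tag, target):
    """Return the payload blob whose interval of validity covers `target`."""
    row = conn.execute(
        """SELECT p.data FROM iov i JOIN payload p ON p.hash = i.payload_hash
           WHERE i.tag_name = ? AND i.since <= ? ORDER BY i.since DESC LIMIT 1""",
        (tag, target)).fetchone()
    return row[0] if row else None

# Populate and query a tiny example.
conn.execute("INSERT INTO tag VALUES ('ExampleConditions_v1', 'ExampleConditions')")
conn.execute("INSERT INTO payload VALUES ('abc123', 'ExampleConditions', b'...binary...')")
conn.execute("INSERT INTO iov VALUES ('ExampleConditions_v1', 100, 'abc123')")
print(payload_for(conn, 'ExampleConditions_v1', 150))
```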

  5. Distributed MDSplus database performance with Linux clusters

    International Nuclear Information System (INIS)

    Minor, D.H.; Burruss, J.R.

    2006-01-01

    The staff at the DIII-D National Fusion Facility, operated for the USDOE by General Atomics, are investigating the use of grid computing and Linux technology to improve performance in our core data management services. We are in the process of converting much of our functionality to cluster-based and grid-enabled software. One of the most important pieces is a new distributed version of the MDSplus scientific data management system that is presently used to support fusion research in over 30 countries worldwide. To improve data handling performance, the staff is investigating the use of Linux clusters for both data clients and servers. The new distributed capability will result in better load balancing between these clients and servers, and more efficient use of network resources resulting in improved support of the data analysis needs of the scientific staff

  6. Klaim-DB: A Modeling Language for Distributed Database Applications

    DEFF Research Database (Denmark)

    Wu, Xi; Li, Ximeng; Lluch Lafuente, Alberto

    2015-01-01

    We present the modelling language, Klaim-DB, for distributed database applications. Klaim-DB borrows the distributed nets of the coordination language Klaim but essentially re-incarnates the tuple spaces of Klaim as databases, and provides high-level language abstractions for the access and manipulation...

  7. Database reliability engineering designing and operating resilient database systems

    CERN Document Server

    Campbell, Laine

    2018-01-01

    The infrastructure-as-code revolution in IT is also affecting database administration. With this practical book, developers, system administrators, and junior to mid-level DBAs will learn how the modern practice of site reliability engineering applies to the craft of database architecture and operations. Authors Laine Campbell and Charity Majors provide a framework for professionals looking to join the ranks of today’s database reliability engineers (DBRE). You’ll begin by exploring core operational concepts that DBREs need to master. Then you’ll examine a wide range of database persistence options, including how to implement key technologies to provide resilient, scalable, and performant data storage and retrieval. With a firm foundation in database reliability engineering, you’ll be ready to dive into the architecture and operations of any modern database. This book covers: Service-level requirements and risk management Building and evolving an architecture for operational visibility ...

  8. A distribution management system

    Energy Technology Data Exchange (ETDEWEB)

    Jaerventausta, P.; Verho, P.; Kaerenlampi, M.; Pitkaenen, M. [Tampere Univ. of Technology (Finland); Partanen, J. [Lappeenranta Univ. of Technology (Finland)

    1998-08-01

    The development of new distribution automation applications is considerably broad nowadays. One of the most interesting areas is the development of a distribution management system (DMS) as an expansion to the traditional SCADA system. At the power transmission level such a system is called an energy management system (EMS). The idea of these expansions is to provide supporting tools for control center operators in system analysis and operation planning. Nowadays the SCADA is the main computer system (and often the only one) in the control center. However, the information displayed by the SCADA is often inadequate, and several tasks cannot be solved by a conventional SCADA system. A need for new computer applications in the control center arises from the insufficiency of the SCADA and some other trends. The latter means that the overall importance of the distribution networks is increasing. The slowing down of load-growth has often made network reinforcements unprofitable. Thus the existing network must be operated more efficiently. At the same time larger distribution areas are for economic reasons being monitored at one control center and the size of the operation staff is decreasing. The quality of supply requirements are also becoming stricter. The needed data for new applications is mainly available in some existing systems. Thus the computer systems of utilities must be integrated. The main data sources for the new applications in the control center are the AM/FM/GIS (i.e. the network database system), the SCADA, and the customer information system (CIS). The new functions can be embedded in some existing computer system. This means a strong dependency on the vendor of the existing system. An alternative strategy is to develop an independent system which is integrated with other computer systems using well-defined interfaces. The latter approach makes it possible to use the new applications in various computer environments, having only a weak dependency on the

  9. Development of database on the distribution coefficient. 2. Preparation of database

    Energy Technology Data Exchange (ETDEWEB)

    Takebe, Shinichi; Abe, Masayoshi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2001-03-01

    The distribution coefficient is a very important parameter for environmental impact assessment of the disposal of radioactive waste arising from research institutes. The 'Database on the Distribution Coefficient' was built up from information obtained by a domestic literature survey on items such as the value, measuring method, and measurement conditions of the distribution coefficient, in order to select reasonable distribution coefficient values for use in safety evaluations. This report explains the outline of the preparation of this database and serves as a user's guide to the database. (author)

  10. Development of database on the distribution coefficient. 2. Preparation of database

    International Nuclear Information System (INIS)

    Takebe, Shinichi; Abe, Masayoshi

    2001-03-01

    The distribution coefficient is a very important parameter for environmental impact assessment of the disposal of radioactive waste arising from research institutes. The 'Database on the Distribution Coefficient' was built up from information obtained by a domestic literature survey on items such as the value, measuring method, and measurement conditions of the distribution coefficient, in order to select reasonable distribution coefficient values for use in safety evaluations. This report explains the outline of the preparation of this database and serves as a user's guide to the database. (author)

  11. Integrated spent nuclear fuel database system

    International Nuclear Information System (INIS)

    Henline, S.P.; Klingler, K.G.; Schierman, B.H.

    1994-01-01

    The Distributed Information Systems software Unit at the Idaho National Engineering Laboratory has designed and developed an Integrated Spent Nuclear Fuel Database System (ISNFDS), which maintains a computerized inventory of all US Department of Energy (DOE) spent nuclear fuel (SNF). Commercial SNF is not included in the ISNFDS unless it is owned or stored by DOE. The ISNFDS is an integrated, single data source containing accurate, traceable, and consistent data and provides extensive data for each fuel, extensive facility data for every facility, and numerous data reports and queries

  12. World-wide distribution automation systems

    International Nuclear Information System (INIS)

    Devaney, T.M.

    1994-01-01

    A worldwide power distribution automation system is outlined. Distribution automation is defined and the status of utility automation is discussed. Other topics discussed include a distribution management system, substation feeder, and customer functions, potential benefits, automation costs, planning and engineering considerations, automation trends, databases, system operation, computer modeling of system, and distribution management systems

  13. Content And Multimedia Database Management Systems

    NARCIS (Netherlands)

    de Vries, A.P.

    1999-01-01

    A database management system is a general-purpose software system that facilitates the processes of defining, constructing, and manipulating databases for various applications. The main characteristic of the ‘database approach’ is that it increases the value of data by its emphasis on data

  14. LHCb Conditions database operation assistance systems

    Science.gov (United States)

    Clemencic, M.; Shapoval, I.; Cattaneo, M.; Degaudenzi, H.; Santinelli, R.

    2012-12-01

    The Conditions Database (CondDB) of the LHCb experiment provides versioned, time dependent geometry and conditions data for all LHCb data processing applications (simulation, high level trigger (HLT), reconstruction, analysis) in a heterogeneous computing environment ranging from user laptops to the HLT farm and the Grid. These different use cases impose front-end support for multiple database technologies (Oracle and SQLite are used). Sophisticated distribution tools are required to ensure timely and robust delivery of updates to all environments. The content of the database has to be managed to ensure that updates are internally consistent and externally compatible with multiple versions of the physics application software. In this paper we describe three systems that we have developed to address these issues. The first system is a CondDB state tracking extension to the Oracle 3D Streams replication technology, to trap cases when the CondDB replication was corrupted. Second, an automated distribution system for the SQLite-based CondDB, providing also smart backup and checkout mechanisms for the CondDB managers and LHCb users respectively. And, finally, a system to verify and monitor the internal (CondDB self-consistency) and external (LHCb physics software vs. CondDB) compatibility. The former two systems are used in production in the LHCb experiment and have achieved the desired goal of higher flexibility and robustness for the management and operation of the CondDB. The latter one has been fully designed and is passing currently to the implementation stage.

  15. 2010 United States Automatic Identification System Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The 2010 United States Automatic Identification System Database contains vessel traffic data for planning purposes within the U.S. coastal waters. The database is...

  16. 2014 United States Automatic Identification System Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The 2014 United States Automatic Identification System Database contains vessel traffic data for planning purposes within the U.S. coastal waters. The database is...

  17. 2011 United States Automatic Identification System Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The 2011 United States Automatic Identification System Database contains vessel traffic data for planning purposes within the U.S. coastal waters. The database is...

  18. 2009 United States Automatic Identification System Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The 2009 United States Automatic Identification System Database contains vessel traffic data for planning purposes within the U.S. coastal waters. The database is...

  19. 2012 United States Automatic Identification System Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The 2012 United States Automatic Identification System Database contains vessel traffic data for planning purposes within the U.S. coastal waters. The database is...

  20. An Introduction to Database Management Systems.

    Science.gov (United States)

    Warden, William H., III; Warden, Bette M.

    1984-01-01

    Description of database management systems for microcomputers highlights system features and factors to consider in microcomputer system selection. A method for ranking database management systems is explained and applied to a defined need, i.e., software support for indexing a weekly newspaper. A glossary of terms and 32-item bibliography are…

  1. Airports and Navigation Aids Database System -

    Data.gov (United States)

    Department of Transportation — Airport and Navigation Aids Database System is the repository of aeronautical data related to airports, runways, lighting, NAVAID and their components, obstacles, no...

  2. Microcomputer Database Management Systems for Bibliographic Data.

    Science.gov (United States)

    Pollard, Richard

    1986-01-01

    Discusses criteria for evaluating microcomputer database management systems (DBMS) used for storage and retrieval of bibliographic data. Two popular types of microcomputer DBMS--file management systems and relational database management systems--are evaluated with respect to these criteria. (Author/MBR)

  3. Distributed road assessment system

    Science.gov (United States)

    Beer, N. Reginald; Paglieroni, David W

    2014-03-25

    A system that detects damage on or below the surface of a paved structure or pavement is provided. A distributed road assessment system includes road assessment pods and a road assessment server. Each road assessment pod includes a ground-penetrating radar antenna array and a detection system that detects road damage from the return signals as the vehicle on which the pod is mounted travels down a road. Each road assessment pod transmits to the road assessment server occurrence information describing each occurrence of road damage that is newly detected on a current scan of a road. The road assessment server maintains a road damage database of occurrence information describing the previously detected occurrences of road damage. After the road assessment server receives occurrence information for newly detected occurrences of road damage for a portion of a road, the road assessment server determines which newly detected occurrences correspond to which previously detected occurrences of road damage.

  4. A Survey on Distributed Mobile Database and Data Mining

    Science.gov (United States)

    Goel, Ajay Mohan; Mangla, Neeraj; Patel, R. B.

    2010-11-01

    The anticipated increase in popular use of the Internet has created more opportunities in information dissemination, e-commerce, and multimedia communication. It has also created more challenges in organizing information and facilitating its efficient retrieval. In response to this, new techniques have evolved which facilitate the creation of such applications. Certainly the most promising among the new paradigms is the use of mobile agents. In this paper, mobile agent and distributed database technologies are applied to the banking system. Many approaches have been proposed to schedule data items for broadcasting in a mobile environment. In this paper, an efficient strategy for accessing multiple data items in mobile environments is proposed, and the bottleneck of current banking systems is addressed.

  5. A Support Database System for Integrated System Health Management (ISHM)

    Science.gov (United States)

    Schmalzel, John; Figueroa, Jorge F.; Turowski, Mark; Morris, John

    2007-01-01

    The development, deployment, operation and maintenance of Integrated Systems Health Management (ISHM) applications require the storage and processing of tremendous amounts of low-level data. This data must be shared in a secure and cost-effective manner between developers, and processed within several heterogeneous architectures. Modern database technology allows this data to be organized efficiently, while ensuring the integrity and security of the data. The extensibility and interoperability of the current database technologies also allows for the creation of an associated support database system. A support database system provides additional capabilities by building applications on top of the database structure. These applications can then be used to support the various technologies in an ISHM architecture. This presentation and paper propose a detailed structure and application description for a support database system, called the Health Assessment Database System (HADS). The HADS provides a shared context for organizing and distributing data as well as a definition of the applications that provide the required data-driven support to ISHM. This approach provides another powerful tool for ISHM developers, while also enabling novel functionality. This functionality includes: automated firmware updating and deployment, algorithm development assistance and electronic datasheet generation. The architecture for the HADS has been developed as part of the ISHM toolset at Stennis Space Center for rocket engine testing. A detailed implementation has begun for the Methane Thruster Testbed Project (MTTP) in order to assist in developing health assessment and anomaly detection algorithms for ISHM. The structure of this implementation is shown in Figure 1. The database structure consists of three primary components: the system hierarchy model, the historical data archive and the firmware codebase. The system hierarchy model replicates the physical relationships between

  6. Managing Consistency Anomalies in Distributed Integrated Databases with Relaxed ACID Properties

    DEFF Research Database (Denmark)

    Frank, Lars; Ulslev Pedersen, Rasmus

    2014-01-01

    has to be optimized. Therefore, we will in this paper use so called relaxed ACID properties across different locations. The objective of designing relaxed ACID properties across different database locations is that the users can trust the data they use even if the distributed database temporarily...... distributed consistency property. We will also illustrate how to use the countermeasures against the consistency anomalies in ERP systems integrated with heterogeneous E-commerce systems and the databases of mobile salesman ERP modules. The methods described in this paper may be used in so called CAP...

  7. Distributed event-based systems

    National Research Council Canada - National Science Library

    Mühl, Gero; Fiege, Ludger; Pietzuch, Peter

    2006-01-01

    ... systems cannot be assessed from a database, network, or software engineering perspective alone. In the same sense, commercially available products that could help solving problems of event-based architectures are often bundled and marketed in solutions of a specific domain. In order to channel some of the attention, the Distributed Event...

  8. Column-Oriented Database Systems (Tutorial)

    NARCIS (Netherlands)

    D. Abadi; P.A. Boncz (Peter); S. Harizopoulos

    2009-01-01

    Column-oriented database systems (column-stores) have attracted a lot of attention in the past few years. Column-stores, in a nutshell, store each database table column separately, with attribute values belonging to the same column stored contiguously, compressed, and densely packed, as
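
    A toy illustration of the storage idea described above, with assumed, illustrative data: each attribute is kept in its own contiguous array, so a query that touches one column never reads the others.

```python
# Contrast between a row-oriented and a column-oriented layout for one small table.
rows = [
    {"id": 1, "name": "alice", "balance": 10.0},
    {"id": 2, "name": "bob",   "balance": 25.5},
    {"id": 3, "name": "carol", "balance": 7.25},
]

# Column-store layout: each attribute stored contiguously in its own array.
columns = {
    "id":      [r["id"] for r in rows],
    "name":    [r["name"] for r in rows],
    "balance": [r["balance"] for r in rows],
}

# An aggregate such as SELECT sum(balance) only needs to touch one column, not whole rows.
print(sum(columns["balance"]))
# Positional reconstruction: the second row rebuilt from the columns.
print({c: columns[c][1] for c in columns})
```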

  9. A Database Approach to Distributed State Space Generation

    NARCIS (Netherlands)

    Blom, Stefan; Lisser, Bert; van de Pol, Jan Cornelis; Weber, M.

    2007-01-01

    We study distributed state space generation on a cluster of workstations. It is explained why state space partitioning by a global hash function is problematic when states contain variables from unbounded domains, such as lists or other recursive datatypes. Our solution is to introduce a database
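
    A minimal sketch of one way to read the idea above: a shared table hands out stable integer indices for states, so workers can exchange compact indices instead of full states drawn from unbounded domains. The class and method names are illustrative, not the authors' implementation.

```python
# Assigns stable global indices to states (hashable terms), replacing partitioning
# of raw states by a global hash function with lookups in a shared table.
class StateDatabase:
    def __init__(self):
        self._index = {}   # state (hashable) -> integer id
        self._states = []  # integer id -> state

    def get_or_add(self, state):
        """Return the global index of `state`, registering it if unseen."""
        if state not in self._index:
            self._index[state] = len(self._states)
            self._states.append(state)
        return self._index[state]

db = StateDatabase()
s0 = ("init", ())                 # states as hashable terms, e.g. nested tuples
s1 = ("step", (1, 2, 3))
print(db.get_or_add(s0), db.get_or_add(s1), db.get_or_add(s0))  # 0 1 0
```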

  10. A Database Approach to Distributed State Space Generation

    NARCIS (Netherlands)

    Blom, Stefan; Lisser, Bert; van de Pol, Jan Cornelis; Weber, M.; Cerna, I.; Haverkort, Boudewijn R.H.M.

    2008-01-01

    We study distributed state space generation on a cluster of workstations. It is explained why state space partitioning by a global hash function is problematic when states contain variables from unbounded domains, such as lists or other recursive datatypes. Our solution is to introduce a database

  11. Issues in Big-Data Database Systems

    Science.gov (United States)

    2014-06-01

    that big data will not be manageable using conventional relational database technology, and it is true that alternative paradigms, such as NoSQL systems and search engines, have much to offer...scale well, and because integration with external data sources is so difficult. NoSQL systems are more open to this integration, and provide excellent

  12. The RMS program system and database

    International Nuclear Information System (INIS)

    Fisher, S.M.; Peach, K.J.

    1982-08-01

    This report describes the program system developed for the data reduction and analysis of data obtained with the Rutherford Multiparticle Spectrometer (RMS), with particular emphasis on the utility of a well structured central data-base. (author)

  13. Resource Survey Relational Database Management System

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Mississippi Laboratories employ both enterprise and localized data collection systems for recording data. The databases utilized by these applications range from...

  14. Energy database system of NEDO

    International Nuclear Information System (INIS)

    Kimura, Noburu

    1990-01-01

    Regarding the provision of technical information and other material by Japan to foreign countries, the state of importing more than it exports has been criticized internationally. The NEDO energy database explained in this report is intended as an international contribution of information: based on the Energy Technology Data Exchange Agreement concluded between 13 countries taking part in the IEA and France, the participating countries offer their own technical information on energy, the operating organization collects it and builds the database, and NEDO systematizes it for distribution. The IEA and its information exchange activities, the course of establishing the Energy Technology Data Exchange Agreement and its contents, and the work of NEDO based on the Agreement are described. For literature that is not sold on the market, the full texts are exchanged. As to the composition of the database, according to the example of 1988, about 1/3 of the entries were directly related to energy and the remaining 2/3 were indirectly related to energy technology. The features of the database and the method of its utilization are explained. (K.I.)

  15. Database, expert systems, information retrieval

    International Nuclear Information System (INIS)

    Fedele, P.; Grandoni, G.; Mammarella, M.C.

    1989-12-01

    The great debate concerning the Italian high-school reform has induced a ferment of activity among the most interested and sensitive people. This was clearly demonstrated by the course 'Innovazione metodologico-didattica e tecnologie informatiche' organized for the staff of the 'Istituto Professionale L. Einaudi' of Lamezia Terme. The course was an interesting opportunity for discussion and interaction between the world of schools and the computer technology used in the research field. This three-day course included theoretical and practical lessons, showing computer facilities that could be useful for teaching. During the practical lessons, some computer tools were presented, from very simple electronic spreadsheets to more complicated interactive information retrieval on CD-ROM. The main topics, discussed later, are: Modelling, Databases, Integrated Information Systems, Expert Systems, and Information Retrieval. (author)

  16. Deductive databases and P systems

    Directory of Open Access Journals (Sweden)

    Miguel A. Gutierrez-Naranjo

    2004-06-01

    Full Text Available In computational processes based on backwards chaining, a rule of the type A ← B1, …, Bn is seen as a procedure which indicates that the problem A can be split into the subproblems B1, …, Bn. In classical devices, the subproblems are solved sequentially. In this paper we present some questions that circulated during the Second Brainstorming Week related to the application of the parallelism of P systems to computation based on backwards chaining, using the example of an inferential deductive process.
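
    A hedged toy illustration of the backward-chaining reading of such a rule (ignoring unification and all P-system details): proving a goal is reduced to proving its subgoals, which a classical device tries one after another.

```python
# Ground rules head -> body; proving the head reduces to proving every body atom.
RULES = {
    "grandparent(adam,enos)": ["parent(adam,seth)", "parent(seth,enos)"],
    "parent(adam,seth)": ["father(adam,seth)"],
    "parent(seth,enos)": ["father(seth,enos)"],
}
FACTS = {"father(adam,seth)", "father(seth,enos)"}   # ground facts, no variables

def prove(goal):
    if goal in FACTS:
        return True
    body = RULES.get(goal)
    # sequential resolution of the subproblems, as in classical backward chaining
    return body is not None and all(prove(sub) for sub in body)

print(prove("grandparent(adam,enos)"))  # True
```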

  17. Timeliness and Predictability in Real-Time Database Systems

    National Research Council Canada - National Science Library

    Son, Sang H

    1998-01-01

    The confluence of computers, communications, and databases is quickly creating a globally distributed database where many applications require real time access to both temporally accurate and multimedia data...

  18. Drinking Water Distribution Systems

    Science.gov (United States)

    Learn about an overview of drinking water distribution systems, the factors that degrade water quality in the distribution system, assessments of risk, future research about these risks, and how to reduce cross-connection control risk.

  19. A web-based audiometry database system

    OpenAIRE

    Chung-Hui Yeh; Sung-Tai Wei; Tsung-Wen Chen; Ching-Yuang Wang; Ming-Hsui Tsai; Chia-Der Lin

    2014-01-01

    To establish a real-time, web-based, customized audiometry database system, we worked in cooperation with the departments of medical records, information technology, and otorhinolaryngology at our hospital. This system includes an audiometry data entry system, retrieval and display system, patient information incorporation system, audiometry data transmission program, and audiometry data integration. Compared with commercial audiometry systems and traditional hand-drawn audiometry data, this ...

  20. Insertion algorithms for network model database management systems

    Science.gov (United States)

    Mamadolimov, Abdurashid; Khikmat, Saburov

    2017-12-01

    The network model is a database model conceived as a flexible way of representing objects and their relationships. Its distinguishing feature is that the schema, viewed as a graph in which object types are nodes and relationship types are arcs, forms a partial order. When a database is large and query comparisons are expensive, the efficiency requirement for managing algorithms is to minimize the number of query comparisons. We consider the updating operation for network model database management systems and develop a new sequential algorithm for it. We also suggest a distributed version of the algorithm.
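
    A generic CODASYL-style sketch (not the authors' algorithm) of what an insertion into a network-model database involves: record types are the nodes, owner-member set types the arcs, and inserting a member record links it into its owner's set.

```python
# Record types are nodes, set (owner-member) types are arcs; inserting a member
# record links it into the chain owned by its owner record.
class Record:
    def __init__(self, rtype, data):
        self.rtype, self.data = rtype, data
        self.members = {}          # set name -> list of member records (owner side)

class NetworkDB:
    def __init__(self, set_types):
        self.set_types = set_types  # set name -> (owner type, member type)

    def insert(self, set_name, owner, member):
        owner_type, member_type = self.set_types[set_name]
        assert owner.rtype == owner_type and member.rtype == member_type
        owner.members.setdefault(set_name, []).append(member)

db = NetworkDB({"works_in": ("Dept", "Emp")})
dept = Record("Dept", {"name": "R&D"})
emp  = Record("Emp",  {"name": "Ada"})
db.insert("works_in", dept, emp)
print([m.data["name"] for m in dept.members["works_in"]])  # ['Ada']
```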

  1. Database application research in real-time data access of accelerator control system

    International Nuclear Information System (INIS)

    Chen Guanghua; Chen Jianfeng; Wan Tianmin

    2012-01-01

    The control system of the Shanghai Synchrotron Radiation Facility (SSRF) is a large-scale distributed real-time control system that involves many types and large amounts of real-time data access during operation. Database systems have wide application prospects in large-scale accelerator control systems; replacing differently dedicated data structures with a mature, standardized database system is the future development direction of accelerator control systems. Based on database interface technology, real-time data access testing, and system optimization research, this article discusses the feasibility of applying a database system in accelerators and lays the foundation for the wide-scale application of database systems in the SSRF accelerator control system. (authors)

  2. Implementing a Microcomputer Database Management System.

    Science.gov (United States)

    Manock, John J.; Crater, K. Lynne

    1985-01-01

    Current issues in selecting, structuring, and implementing microcomputer database management systems in research administration offices are discussed, and their capabilities are illustrated with the system used by the University of North Carolina at Wilmington. Trends in microcomputer technology and their likely impact on research administration…

  3. A Database Management System for Interlibrary Loan.

    Science.gov (United States)

    Chang, Amy

    1990-01-01

    Discusses the increasing complexity of dealing with interlibrary loan requests and describes a database management system for interlibrary loans used at Texas Tech University. System functions are described, including file control, records maintenance, and report generation, and the impact on staff productivity is discussed. (CLB)

  4. Development of database on the distribution coefficient. 1. Collection of the distribution coefficient data

    Energy Technology Data Exchange (ETDEWEB)

    Takebe, Shinichi; Abe, Masayoshi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2001-03-01

    The distribution coefficient is a very important parameter for environmental impact assessment of the disposal of radioactive waste arising from research institutes. A domestic literature survey was carried out mainly for the purpose of selecting reasonable distribution coefficient values for use in safety evaluations. This report arranges the extensive information on the distribution coefficient from each literature source for input to the database, and is summarized as literature information data on the distribution coefficient. (author)

  5. Links in a distributed database: Theory and implementation

    International Nuclear Information System (INIS)

    Karonis, N.T.; Kraimer, M.R.

    1991-12-01

    This document addresses the problem of extending database links across Input/Output Controller (IOC) boundaries. It lays a foundation by reviewing the current system and proposing an implementation specification designed to guide all work in this area. The document also describes an implementation that is less ambitious than our formally stated proposal, one that does not extend the reach of all database links across IOC boundaries. Specifically, it introduces an implementation of input and output links and comments on that overall implementation. We include a set of manual pages describing each of the new functions the implementation provides

  6. Similarity joins in relational database systems

    CERN Document Server

    Augsten, Nikolaus

    2013-01-01

    State-of-the-art database systems manage and process a variety of complex objects, including strings and trees. For such objects equality comparisons are often not meaningful and must be replaced by similarity comparisons. This book describes the concepts and techniques to incorporate similarity into database systems. We start out by discussing the properties of strings and trees, and identify the edit distance as the de facto standard for comparing complex objects. Since the edit distance is computationally expensive, token-based distances have been introduced to speed up edit distance comput
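
    A short sketch of a similarity comparison of the kind described above, using the edit distance as the standard measure for strings; the naive nested-loop join and the threshold are illustrative only.

```python
def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def similarity_join(left, right, threshold):
    """Return pairs whose edit distance is within the threshold (naive nested loop)."""
    return [(l, r) for l in left for r in right if edit_distance(l, r) <= threshold]

print(similarity_join(["smith", "jones"], ["smyth", "johns"], 1))
```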

  7. Function and organization of CPC database system

    International Nuclear Information System (INIS)

    Yoshida, Tohru; Tomiyama, Mineyoshi.

    1986-02-01

    Developing computer programs is very time-consuming and expensive work. Therefore, it is desirable to make effective use of existing programs. For this purpose, researchers and technical staff need to obtain the relevant information easily. CPC (Computer Physics Communications) is a journal published to facilitate the exchange of physics programs and of the relevant information about the use of computers in the physics community. There are about 1300 CPC programs in the JAERI computing center, and the number of programs is increasing. A new database system (CPC database) has been developed to manage the CPC programs and their information. Users can obtain information about all the programs stored in the CPC database, and can find and copy the necessary program by inputting the program name, the catalogue number, and the volume number. In this system, each operation is done by menu selection. Every CPC program is compressed and stored in the database; the required storage size is one third of that of the non-compressed format. Programs unused for a long time are moved to magnetic tape. The present report describes the CPC database system and the procedures for its use. (author)

  8. Embedded Systems Programming: Accessing Databases from Esterel

    Directory of Open Access Journals (Sweden)

    2009-03-01

    Full Text Available A current limitation in embedded controller design and programming is the lack of database support in development tools such as Esterel Studio. This article proposes a way of integrating databases and Esterel by providing two application programming interfaces (APIs) which enable the use of relational databases inside Esterel programs. As databases and Esterel programs are often executed on different machines, result sets returned as responses to database queries may be processed either locally and according to Esterel’s synchrony hypothesis, or remotely along several of Esterel’s execution cycles. These different scenarios are reflected in the design and usage rules of the two APIs presented in this article, which rely on Esterel’s facilities for extending the language by external data types, external functions, and procedures, as well as tasks. The APIs’ utility is demonstrated by means of a case study modelling an automated warehouse storage system, which is constructed using Lego Mindstorms robotics kits. The robot’s controller is programmed in Esterel in a way that takes dynamic ordering information and the warehouse’s floor layout into account, both of which are stored in a MySQL database.

  9. Embedded Systems Programming: Accessing Databases from Esterel

    Directory of Open Access Journals (Sweden)

    White David

    2008-01-01

    Full Text Available A current limitation in embedded controller design and programming is the lack of database support in development tools such as Esterel Studio. This article proposes a way of integrating databases and Esterel by providing two application programming interfaces (APIs) which enable the use of relational databases inside Esterel programs. As databases and Esterel programs are often executed on different machines, result sets returned as responses to database queries may be processed either locally and according to Esterel's synchrony hypothesis, or remotely along several of Esterel's execution cycles. These different scenarios are reflected in the design and usage rules of the two APIs presented in this article, which rely on Esterel's facilities for extending the language by external data types, external functions, and procedures, as well as tasks. The APIs' utility is demonstrated by means of a case study modelling an automated warehouse storage system, which is constructed using Lego Mindstorms robotics kits. The robot's controller is programmed in Esterel in a way that takes dynamic ordering information and the warehouse's floor layout into account, both of which are stored in a MySQL database.

  10. Developing a nursing database system in Kenya.

    Science.gov (United States)

    Riley, Patricia L; Vindigni, Stephen M; Arudo, John; Waudo, Agnes N; Kamenju, Andrew; Ngoya, Japheth; Oywer, Elizabeth O; Rakuom, Chris P; Salmon, Marla E; Kelley, Maureen; Rogers, Martha; St Louis, Michael E; Marum, Lawrence H

    2007-06-01

    To describe the development, initial findings, and implications of a national nursing workforce database system in Kenya. Creating a national electronic nursing workforce database provides more reliable information on nurse demographics, migration patterns, and workforce capacity. Data analyses are most useful for human resources for health (HRH) planning when workforce capacity data can be linked to worksite staffing requirements. As a result of establishing this database, the Kenya Ministry of Health has improved capability to assess its nursing workforce and document important workforce trends, such as out-migration. Current data identify the United States as the leading recipient country of Kenyan nurses. The overwhelming majority of Kenyan nurses who elect to out-migrate are among Kenya's most qualified. The Kenya nursing database is a first step toward facilitating evidence-based decision making in HRH. This database is unique to developing countries in sub-Saharan Africa. Establishing an electronic workforce database requires long-term investment and sustained support by national and global stakeholders.

  11. Nuclear integrated database and design advancement system

    Energy Technology Data Exchange (ETDEWEB)

    Ha, Jae Joo; Jeong, Kwang Sub; Kim, Seung Hwan; Choi, Sun Young

    1997-01-01

    The objective of NuIDEAS is to computerize design processes through an integrated database by eliminating the current work style of delivering hardcopy documents and drawings. The major research contents of NuIDEAS are the advancement of design processes by computerization, the establishment of a design database, and the 3-dimensional visualization of design data. KSNP (Korea Standard Nuclear Power Plant) is the target of the legacy database and 3-dimensional model, so that they can be utilized in the next plant design. In the first year, the blueprint of NuIDEAS is proposed, and its prototype is developed by applying rapidly evolving computer technology. The major results of the first year of research were to establish the architecture of the integrated database ensuring data consistency, and to build the design database of the reactor coolant system and heavy components. Also, various software tools were developed to search, share, and utilize the data through networks; detailed 3-dimensional CAD models of nuclear fuel and heavy components were constructed; and a walk-through simulation using the models was developed. This report contains the major additions and modifications to the object-oriented database and associated programs, using methods and JavaScript. (author). 36 refs., 1 tab., 32 figs.

  12. Nuclear integrated database and design advancement system

    International Nuclear Information System (INIS)

    Ha, Jae Joo; Jeong, Kwang Sub; Kim, Seung Hwan; Choi, Sun Young.

    1997-01-01

    The objective of NuIDEAS is to computerize design processes through an integrated database by eliminating the current work style of delivering hardcopy documents and drawings. The major research contents of NuIDEAS are the advancement of design processes by computerization, the establishment of a design database, and the 3-dimensional visualization of design data. KSNP (Korea Standard Nuclear Power Plant) is the target of the legacy database and 3-dimensional model, so that they can be utilized in the next plant design. In the first year, the blueprint of NuIDEAS is proposed, and its prototype is developed by applying rapidly evolving computer technology. The major results of the first year of research were to establish the architecture of the integrated database ensuring data consistency, and to build the design database of the reactor coolant system and heavy components. Also, various software tools were developed to search, share, and utilize the data through networks; detailed 3-dimensional CAD models of nuclear fuel and heavy components were constructed; and a walk-through simulation using the models was developed. This report contains the major additions and modifications to the object-oriented database and associated programs, using methods and JavaScript. (author). 36 refs., 1 tab., 32 figs

  13. Software for Distributed Computation on Medical Databases: A Demonstration Project

    Directory of Open Access Journals (Sweden)

    Balasubramanian Narasimhan

    2017-05-01

    Full Text Available Bringing together the information latent in distributed medical databases promises to personalize medical care by enabling reliable, stable modeling of outcomes with rich feature sets (including patient characteristics and treatments received. However, there are barriers to aggregation of medical data, due to lack of standardization of ontologies, privacy concerns, proprietary attitudes toward data, and a reluctance to give up control over end use. Aggregation of data is not always necessary for model fitting. In models based on maximizing a likelihood, the computations can be distributed, with aggregation limited to the intermediate results of calculations on local data, rather than raw data. Distributed fitting is also possible for singular value decomposition. There has been work on the technical aspects of shared computation for particular applications, but little has been published on the software needed to support the "social networking" aspect of shared computing, to reduce the barriers to collaboration. We describe a set of software tools that allow the rapid assembly of a collaborative computational project, based on the flexible and extensible R statistical software and other open source packages, that can work across a heterogeneous collection of database environments, with full transparency to allow local officials concerned with privacy protections to validate the safety of the method. We describe the principles, architecture, and successful test results for the site-stratified Cox model and rank-k singular value decomposition.
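
    A hedged sketch of the distributed likelihood idea described above (not the package's actual API): each site returns only the value and gradient of its local log-likelihood, here for a simple logistic model, and the coordinator aggregates these intermediate results so that raw records never leave a site.

```python
import numpy as np

def local_loglik_and_grad(beta, X, y):
    """Log-likelihood and gradient contribution of one site's data (logistic model)."""
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    loglik = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    grad = X.T @ (y - p)
    return loglik, grad

def distributed_fit(sites, dim, steps=200, lr=0.1):
    beta = np.zeros(dim)
    for _ in range(steps):
        total_grad = np.zeros(dim)
        for X, y in sites:                       # in practice, a remote call per site
            _, grad = local_loglik_and_grad(beta, X, y)
            total_grad += grad                   # aggregation of intermediate results only
        beta += lr * total_grad / sum(len(y) for _, y in sites)
    return beta

rng = np.random.default_rng(0)
true_beta = np.array([1.0, -2.0])
sites = []
for _ in range(3):                               # three hypothetical hospitals
    X = rng.normal(size=(100, 2))
    y = (rng.random(100) < 1 / (1 + np.exp(-X @ true_beta))).astype(float)
    sites.append((X, y))
print(distributed_fit(sites, dim=2))
```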

  14. Quality assurance database for the CBM silicon tracking system

    Energy Technology Data Exchange (ETDEWEB)

    Lymanets, Anton [Physikalisches Institut, Universitaet Tuebingen (Germany); Collaboration: CBM-Collaboration

    2015-07-01

    The Silicon Tracking System is a main tracking device of the CBM Experiment at FAIR. Its construction includes production, quality assurance and assembly of a large number of components, e.g., 106 carbon fiber support structures, 1300 silicon microstrip sensors, 16.6k readout chips, analog microcables, etc. Detector construction is distributed over several production and assembly sites and calls for a database that would be extensible and allow tracing the components, integrating the test data, and monitoring the component statuses and data flow. A possible implementation of the above-mentioned requirements is being developed at GSI (Darmstadt) based on the FAIR DB Virtual Database Library that provides connectivity to common SQL database engines (PostgreSQL, Oracle, etc.). Data structure, database architecture, as well as the status of implementation are discussed.

  15. Smart Distribution Systems

    Directory of Open Access Journals (Sweden)

    Yazhou Jiang

    2016-04-01

    Full Text Available The increasing importance of system reliability and resilience is changing the way distribution systems are planned and operated. To achieve a distribution system self-healing against power outages, emerging technologies and devices, such as remote-controlled switches (RCSs and smart meters, are being deployed. The higher level of automation is transforming traditional distribution systems into the smart distribution systems (SDSs of the future. The availability of data and remote control capability in SDSs provides distribution operators with an opportunity to optimize system operation and control. In this paper, the development of SDSs and resulting benefits of enhanced system capabilities are discussed. A comprehensive survey is conducted on the state-of-the-art applications of RCSs and smart meters in SDSs. Specifically, a new method, called Temporal Causal Diagram (TCD, is used to incorporate outage notifications from smart meters for enhanced outage management. To fully utilize the fast operation of RCSs, the spanning tree search algorithm is used to develop service restoration strategies. Optimal placement of RCSs and the resulting enhancement of system reliability are discussed. Distribution system resilience with respect to extreme events is presented. Test cases are used to demonstrate the benefit of SDSs. Active management of distributed generators (DGs is introduced. Future research in a smart distribution environment is proposed.
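
    A very simplified sketch of spanning-tree-based service restoration as mentioned above: a tree is grown from the energized sources over switchable lines so that de-energized loads are reconnected radially. Capacity and voltage constraints, which a real strategy must respect, are ignored; node names are illustrative.

```python
from collections import deque

def restoration_tree(edges, sources, faulted):
    """edges: iterable of (node_a, node_b) switchable lines; returns tree edges to close."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    visited, tree, queue = set(sources), [], deque(sources)
    while queue:
        u = queue.popleft()
        for v in adj.get(u, []):
            if v not in visited and v not in faulted:
                visited.add(v)
                tree.append((u, v))          # close this switch in the restoration plan
                queue.append(v)
    return tree                              # a tree, so the restored network stays radial

edges = [("S1", "A"), ("A", "B"), ("B", "C"), ("S2", "C"), ("A", "C")]
print(restoration_tree(edges, sources=["S1", "S2"], faulted={"A"}))
```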

  16. Electric distribution systems

    CERN Document Server

    Sallam, A A

    2010-01-01

    "Electricity distribution is the penultimate stage in the delivery of electricity to end users. The only book that deals with the key topics of interest to distribution system engineers, Electric Distribution Systems presents a comprehensive treatment of the subject with an emphasis on both the practical and academic points of view. Reviewing traditional and cutting-edge topics, the text is useful to practicing engineers working with utility companies and industry, undergraduate graduate and students, and faculty members who wish to increase their skills in distribution system automation and monitoring."--

  17. Layered approach to multimedia database systems

    Science.gov (United States)

    Arndt, Timothy; Guercio, Angela; Maresca, Paolo

    1999-08-01

    Multimedia database systems are becoming increasingly important as organizations accumulate more multimedia data. There are few solutions that allow the information to be stored and managed efficiently. Relational systems provide features that organizations rely on for their alphanumeric data. Unfortunately, these systems lack facilities necessary for the handling of multimedia data - things like media integration, composition and presentation, multimedia interface and interactivity, imprecise query support, and multimedia indexing. One solution suggested for storage of multimedia data is the use of an object-oriented database management system as a layer on top of the relational system. The layer adds required multimedia functionality to the capabilities provided by the relational system. A prototype solution implemented in Java uses the facilities offered by JDBC to provide connection to a large number of databases. The Java Media Framework is used to present the video and audio data. Among the facilities provided are image/video/audio display/playback and extension of SQL to include multimedia operators and functions.
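
    The paper's prototype is a Java/JDBC layer; the following Python sketch only illustrates the layering idea under assumed table and method names: an object layer adds multimedia metadata and a convenience query on top of a plain relational store.

```python
import sqlite3

class MediaStore:
    """Object layer over a relational table holding media blobs plus metadata."""
    def __init__(self, conn):
        self.conn = conn
        conn.execute("""CREATE TABLE IF NOT EXISTS media
                        (id INTEGER PRIMARY KEY, kind TEXT, title TEXT,
                         duration_s REAL, blob BLOB)""")

    def add(self, kind, title, duration_s, data):
        self.conn.execute(
            "INSERT INTO media (kind, title, duration_s, blob) VALUES (?,?,?,?)",
            (kind, title, duration_s, data))

    def find(self, kind=None, max_duration=None):
        """A tiny 'multimedia operator': filter by media kind and playback length."""
        sql, args = "SELECT id, kind, title, duration_s FROM media WHERE 1=1", []
        if kind is not None:
            sql, args = sql + " AND kind = ?", args + [kind]
        if max_duration is not None:
            sql, args = sql + " AND duration_s <= ?", args + [max_duration]
        return self.conn.execute(sql, args).fetchall()

store = MediaStore(sqlite3.connect(":memory:"))
store.add("video", "lecture-1", 3600.0, b"...")
store.add("audio", "intro-jingle", 12.5, b"...")
print(store.find(kind="audio", max_duration=60))
```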

  18. Distributed Operating Systems

    NARCIS (Netherlands)

    Mullender, Sape J.

    1987-01-01

    In the past five years, distributed operating systems research has gone through a consolidation phase. On a large number of design issues there is now considerable consensus between different research groups. In this paper, an overview of recent research in distributed systems is given. In turn, the

  19. Distributed Operating Systems

    NARCIS (Netherlands)

    Tanenbaum, A.S.; van Renesse, R.

    1985-01-01

    Distributed operating systems have many aspects in common with centralized ones, but they also differ in certain ways. This paper is intended as an introduction to distributed operating systems, and especially to current university research about them. After a discussion of what constitutes a

  20. Pervasive Electricity Distribution System

    Directory of Open Access Journals (Sweden)

    Muhammad Usman Tahir

    2017-06-01

    Full Text Available Nowadays a country cannot become economically strong unless it has enough electrical power to fulfil industrial and domestic needs. Electrical power, being the pillar of any country’s economy, needs to be used in an efficient way. The same step is taken here by proposing a new system for energy distribution from the substation to consumer houses; the system also monitors consumer consumption and records the data. Unlike traditional manual electrical systems, the pervasive electricity distribution system (PEDS) introduces a fresh perspective for monitoring the feeder line status at the distribution and consumer levels. In this system an effort is made to address the issues of electricity theft, manual billing, online monitoring of the electrical distribution system, and automatic control of electrical distribution points. The project is designed using a microcontroller and different sensors, and its GUI is designed in LabVIEW software.

  1. Selection of nuclear power information database management system

    International Nuclear Information System (INIS)

    Zhang Shuxin; Wu Jianlei

    1996-01-01

    Given the present state of database technology, in order to build the Chinese nuclear power information database (NPIDB) in the nuclear industry system efficiently and from a high starting point, an important task is to select a proper database management system (DBMS), which is the key to building the database successfully. Therefore, this article explains how to build a practical information database about nuclear power, the functions of different database management systems, the reasons for selecting a relational database management system (RDBMS), the principles of selecting an RDBMS, the recommendation of the ORACLE management system as the software with which to build the database, and so on.

  2. Consistency and Security in Mobile Real Time Distributed Database (MRTDDB): A Combinational Giant Challenge

    Science.gov (United States)

    Gupta, Gyanendra Kr.; Sharma, A. K.; Swaroop, Vishnu

    2010-11-01

    Many types of information systems are widely used in various fields. With the rapid development of computer networks, information system users care more about data sharing in networks. In a traditional relational database, data consistency is controlled by a consistency control mechanism: when a data object is locked in a sharing mode, other transactions can only read it, but cannot update it. If the traditional consistency control method is still used, the system's concurrency will be adversely affected. So there are many new requirements for consistency control and security in MRTDDB. The problem is not limited only to the type of data (e.g. mobile or real-time databases). There are many aspects of data consistency problems in MRTDDB, such as inconsistency between attribute and type of data, and the inconsistency of topological relations after objects have been modified. In this paper, many cases of consistency are discussed. As mobile computing becomes popular and databases grow with information sharing, security is a big issue for researchers. Consistency and security of data are a big challenge for researchers, because whenever the data is not consistent and secure, no operation on the data (e.g. a transaction) is productive. This becomes more and more crucial when transactions are used in non-traditional environments like mobile, distributed, real-time and multimedia databases. In this paper we raise the different aspects and analyze the available solutions for consistency and security of databases. Traditional database security has focused primarily on creating user accounts and managing user privileges to database objects. But the use of these databases in mobile and nomadic computing creates new opportunities for research. The widespread use of databases over the web, heterogeneous client-server architectures, application servers, and networks creates a critical need to amplify this focus. In this paper we also discuss an overview of the new and old
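
    A minimal sketch of the sharing-mode locking behaviour mentioned above: readers may hold the lock together, while an update needs exclusive access. It illustrates only the classical mechanism, not the real-time or mobile aspects discussed in the paper.

```python
import threading

class SharedExclusiveLock:
    """Readers share the lock; a writer waits until it can hold it exclusively."""
    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0
        self._writer = False

    def acquire_shared(self):
        with self._cond:
            while self._writer:
                self._cond.wait()
            self._readers += 1

    def release_shared(self):
        with self._cond:
            self._readers -= 1
            if self._readers == 0:
                self._cond.notify_all()

    def acquire_exclusive(self):
        with self._cond:
            while self._writer or self._readers:
                self._cond.wait()
            self._writer = True

    def release_exclusive(self):
        with self._cond:
            self._writer = False
            self._cond.notify_all()

lock = SharedExclusiveLock()
lock.acquire_shared()      # a transaction reading the object in sharing mode
lock.release_shared()
lock.acquire_exclusive()   # an updating transaction must wait for all readers to finish
lock.release_exclusive()
```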

  3. Database specification for the Worldwide Port System (WPS) Regional Integrated Cargo Database (ICDB)

    Energy Technology Data Exchange (ETDEWEB)

    Faby, E.Z.; Fluker, J.; Hancock, B.R.; Grubb, J.W.; Russell, D.L. [Univ. of Tennessee, Knoxville, TN (United States); Loftis, J.P.; Shipe, P.C.; Truett, L.F. [Oak Ridge National Lab., TN (United States)

    1994-03-01

    This Database Specification for the Worldwide Port System (WPS) Regional Integrated Cargo Database (ICDB) describes the database organization and storage allocation, provides the detailed data model of the logical and physical designs, and provides information for the construction of parts of the database such as tables, data elements, and associated dictionaries and diagrams.

  4. Portable database driven control system for SPEAR

    International Nuclear Information System (INIS)

    Howry, S.; Gromme, T.; King, A.; Sullenberger, M.

    1985-04-01

    The new computer control system software for SPEAR is presented as a transfer from the PEP system. Features of the target ring (SPEAR) such as symmetries, magnet groupings, etc., are all contained in a design file which is read by both people and computer. People use it as documentation; a program reads it to generate the database structure, which becomes the center of communication for all the software. Geometric information, such as element positions and lengths, and CAMAC I/O routing information is entered into the database as it is developed. Since application processes refer only to the database and since they do so only in generic terms, almost all of this software (representing more than fifteen man-years) is transferred with few changes. Operator console menus (touchpanels) are also transferred with only superficial changes for the same reasons. The system is modular: the CAMAC I/O software is all in one process; the menu control software is a process; the ring optics model and the orbit model are separate processes, each of which runs concurrently with about 15 others in the multiprogramming environment of the VAX/VMS operating system. 10 refs., 1 fig

  5. Portable database driven control system for SPEAR

    Energy Technology Data Exchange (ETDEWEB)

    Howry, S.; Gromme, T.; King, A.; Sullenberger, M.

    1985-04-01

    The new computer control system software for SPEAR is presented as a transfer from the PEP system. Features of the target ring (SPEAR) such as symmetries, magnet groupings, etc., are all contained in a design file which is read by both people and computer. People use it as documentation; a program reads it to generate the database structure, which becomes the center of communication for all the software. Geometric information, such as element positions and lengths, and CAMAC I/O routing information is entered into the database as it is developed. Since application processes refer only to the database and since they do so only in generic terms, almost all of this software (representing more than fifteen man-years) is transferred with few changes. Operator console menus (touchpanels) are also transferred with only superficial changes for the same reasons. The system is modular: the CAMAC I/O software is all in one process; the menu control software is a process; the ring optics model and the orbit model are separate processes, each of which runs concurrently with about 15 others in the multiprogramming environment of the VAX/VMS operating system. 10 refs., 1 fig.

  6. Cooling water distribution system

    Science.gov (United States)

    Orr, Richard

    1994-01-01

    A passive containment cooling system for a nuclear reactor containment vessel. Disclosed is a cooling water distribution system for introducing cooling water by gravity uniformly over the outer surface of a steel containment vessel using an interconnected series of radial guide elements, a plurality of circumferential collector elements and collector boxes to collect and feed the cooling water into distribution channels extending along the curved surface of the steel containment vessel. The cooling water is uniformly distributed over the curved surface by a plurality of weirs in the distribution channels.

  7. Development of environment radiation database management system

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Jong Gyu; Chung, Chang Hwa; Ryu, Chan Ho; Lee, Jin Yeong; Kim, Dong Hui; Lee, Hun Sun [Daeduk College, Taejon (Korea, Republic of)

    1999-03-15

    In this development, we constructed a database for efficient processing and operation of radiation-environment related data. We developed the source documents retrieval system and the current status printing system that support radiation environment data collection, pre-processing and analysis. We also designed and implemented the user interfaces and DB access routines based on WWW service policies on the KINS Intranet. It is expected that the developed system, which organizes the information related to environmental radiation data systematically, can be utilized for accurate interpretation, analysis and evaluation.

  8. Development of environment radiation database management system

    International Nuclear Information System (INIS)

    Kang, Jong Gyu; Chung, Chang Hwa; Ryu, Chan Ho; Lee, Jin Yeong; Kim, Dong Hui; Lee, Hun Sun

    1999-03-01

    In this development, we constructed a database for efficient processing and operation of radiation-environment related data. We developed the source documents retrieval system and the current status printing system that support radiation environment data collection, pre-processing and analysis. We also designed and implemented the user interfaces and DB access routines based on WWW service policies on the KINS Intranet. It is expected that the developed system, which organizes the information related to environmental radiation data systematically, can be utilized for accurate interpretation, analysis and evaluation.

  9. HATCHES - a thermodynamic database and management system

    International Nuclear Information System (INIS)

    Cross, J.E.; Ewart, F.T.

    1990-03-01

    The Nirex Safety Assessment Research Programme has been compiling the thermodynamic data necessary to allow simulations of the aqueous behaviour of the elements important to radioactive waste disposal to be made. These data have been obtained from the literature, when available, and validated for the conditions of interest by experiment. In order to maintain these data in an accessible form and to satisfy quality assurance on all data used for assessments, a database has been constructed which resides on a personal computer operating under MS-DOS using the Ashton-Tate dBase III program. This database contains all the input data fields required by the PHREEQE program and, in addition, a body of text which describes the source of the data and the derivation of the PHREEQE input parameters from the source data. The HATCHES system consists of this database, a suite of programs to facilitate the searching and listing of data and a further suite of programs to convert the dBase III files to PHREEQE database format. (Author)

  10. MySQL vs. Cassandra database storage system

    Indian Academy of Sciences (India)

    Danijela Milošević

    database system, MySQL, and a NoSQL key-value store system, Apache Cassandra, on the database layer. The CPU times are compared and discussed. Keywords: generalized inverse; weighted Moore–Penrose inverse; PHP programming; MySQL database; NoSQL Cassandra database storage system.

  11. Distribution System White Papers

    Science.gov (United States)

    EPA worked with stakeholders and developed a series of white papers on distribution system issues ranked of potentially significant public health concern (see list below) to serve as background material for EPA, expert and stakeholder discussions.

  12. Nuclear Criticality Information System. Database examples

    Energy Technology Data Exchange (ETDEWEB)

    Foret, C.A.

    1984-06-01

    The purpose of this publication is to provide our users with a guide to using the Nuclear Criticality Information System (NCIS). It is comprised of an introduction, an information and resources section, a how-to-use section, and several useful appendices. The main objective of this report is to present a clear picture of the NCIS project and its available resources as well as assisting our users in accessing the database and using the TIS computer to process data. The introduction gives a brief description of the NCIS project, the Technology Information System (TIS), online user information, future plans and lists individuals to contact for additional information about the NCIS project. The information and resources section outlines the NCIS database and describes the resources that are available. The how-to-use section illustrates access to the NCIS database as well as searching datafiles for general or specific data. It also shows how to access and read the NCIS news section as well as connecting to other information centers through the TIS computer.

  13. Nuclear Criticality Information System. Database examples

    International Nuclear Information System (INIS)

    Foret, C.A.

    1984-06-01

    The purpose of this publication is to provide our users with a guide to using the Nuclear Criticality Information System (NCIS). It is comprised of an introduction, an information and resources section, a how-to-use section, and several useful appendices. The main objective of this report is to present a clear picture of the NCIS project and its available resources as well as assisting our users in accessing the database and using the TIS computer to process data. The introduction gives a brief description of the NCIS project, the Technology Information System (TIS), online user information, future plans and lists individuals to contact for additional information about the NCIS project. The information and resources section outlines the NCIS database and describes the resources that are available. The how-to-use section illustrates access to the NCIS database as well as searching datafiles for general or specific data. It also shows how to access and read the NCIS news section as well as connecting to other information centers through the TIS computer

  14. Distributed System Contract Monitoring

    Directory of Open Access Journals (Sweden)

    Adrian Francalanza Ph.D

    2011-09-01

    Full Text Available The use of behavioural contracts, to specify, regulate and verify systems, is particularly relevant to runtime monitoring of distributed systems. System distribution poses major challenges to contract monitoring, from monitoring-induced information leaks to computation load balancing, communication overheads and fault-tolerance. We present mDPi, a location-aware process calculus, for reasoning about monitoring of distributed systems. We define a family of Labelled Transition Systems for this calculus, which allow formal reasoning about different monitoring strategies at different levels of abstractions. We also illustrate the expressivity of the calculus by showing how contracts in a simple contract language can be synthesised into different mDPi monitors.

  15. GIS database and discussion for the distribution, composition, and age of Cenozoic volcanic rocks of the Pacific Northwest Volcanic Aquifer System study area

    Science.gov (United States)

    Sherrod, David R.; Keith, Mackenzie K.

    2018-03-30

    A substantial part of the U.S. Pacific Northwest is underlain by Cenozoic volcanic and continental sedimentary rocks and, where widespread, these strata form important aquifers. The legacy geologic mapping presented with this report contains new thematic categorization added to state digital compilations published by the U.S. Geological Survey for Oregon, California, Idaho, Nevada, Utah, and Washington (Ludington and others, 2005). Our additional coding is designed to allow rapid characterization, mainly for hydrogeologic purposes, of similar rocks and deposits within a boundary expanded slightly beyond that of the Pacific Northwest Volcanic Aquifer System study area. To be useful for hydrogeologic analysis and to be more statistically manageable, statewide compilations from Ludington and others (2005) were mosaicked into a regional map and then reinterpreted into four main categories on the basis of (1) age, (2) composition, (3) hydrogeologic grouping, and (4) lithologic pattern. The coding scheme emphasizes Cenozoic volcanic or volcanic-related rocks and deposits, and of primary interest are the codings for composition and age.

  16. Distributed processor systems

    International Nuclear Information System (INIS)

    Zacharov, B.

    1976-01-01

    In recent years, there has been a growing tendency in high-energy physics and in other fields to solve computational problems by distributing tasks among the resources of inter-coupled processing devices and associated system elements. This trend has gained further momentum more recently with the increased availability of low-cost processors and with the development of the means of data distribution. In two lectures, the broad question of distributed computing systems is examined and the historical development of such systems reviewed. An attempt is made to examine the reasons for the existence of these systems and to discern the main trends for the future. The components of distributed systems are discussed in some detail and particular emphasis is placed on the importance of standards and conventions in certain key system components. The ideas and principles of distributed systems are discussed in general terms, but these are illustrated by a number of concrete examples drawn from the context of the high-energy physics environment. (Auth.)

  17. GPCALMA: A Tool For Mammography With A GRID-Connected Distributed Database

    Science.gov (United States)

    Bottigli, U.; Cerello, P.; Cheran, S.; Delogu, P.; Fantacci, M. E.; Fauci, F.; Golosio, B.; Lauria, A.; Lopez Torres, E.; Magro, R.; Masala, G. L.; Oliva, P.; Palmiero, R.; Raso, G.; Retico, A.; Stumbo, S.; Tangaro, S.

    2003-09-01

    The GPCALMA (Grid Platform for Computer Assisted Library for MAmmography) collaboration involves several departments of physics, INFN (National Institute of Nuclear Physics) sections, and Italian hospitals. The aim of this collaboration is to develop a tool that can help radiologists in the early detection of breast cancer. GPCALMA has built a large distributed database of digitised mammographic images (about 5500 images corresponding to 1650 patients) and developed a CAD (Computer Aided Detection) software which is integrated in a station that can also be used to acquire new images, as an archive, and to perform statistical analysis. The images (18×24 cm2, digitised by a CCD linear scanner with an 85 μm pitch and 4096 gray levels) are completely described: pathological ones have a consistent characterization with the radiologist's diagnosis and histological data, non-pathological ones correspond to patients with a follow-up of at least three years. The distributed database is realized through the connection of all the hospitals and research centers using GRID technology. In each hospital, local patients' digital images are stored in the local database. Using the GRID connection, GPCALMA will allow each node to work on distributed database data as well as local database data. Using its database, the GPCALMA tools perform several analyses. A texture analysis, i.e. an automated classification into adipose, dense or glandular texture, can be provided by the system. The GPCALMA software also allows classification of pathological features, in particular massive lesions (both opacities and spiculated lesions) analysis and microcalcification cluster analysis. The detection of pathological features is made using neural network software that provides a selection of areas showing a given "suspicion level" of lesion occurrence. The performance of the GPCALMA system will be presented in terms of ROC (Receiver Operating Characteristic) curves. The results of the GPCALMA system as "second reader" will also

  18. OAP- OFFICE AUTOMATION PILOT GRAPHICS DATABASE SYSTEM

    Science.gov (United States)

    Ackerson, T.

    1994-01-01

    The Office Automation Pilot (OAP) Graphics Database system offers the IBM PC user assistance in producing a wide variety of graphs and charts. OAP uses a convenient database system, called a chartbase, for creating and maintaining data associated with the charts, and twelve different graphics packages are available to the OAP user. Each of the graphics capabilities is accessed in a similar manner. The user chooses creation, revision, or chartbase/slide show maintenance options from an initial menu. The user may then enter or modify data displayed on a graphic chart. The cursor moves through the chart in a "circular" fashion to facilitate data entries and changes. Various "help" functions and on-screen instructions are available to aid the user. The user data is used to generate the graphics portion of the chart. Completed charts may be displayed in monotone or color, printed, plotted, or stored in the chartbase on the IBM PC. Once completed, the charts may be put in a vector format and plotted for color viewgraphs. The twelve graphics capabilities are divided into three groups: Forms, Structured Charts, and Block Diagrams. There are eight Forms available: 1) Bar/Line Charts, 2) Pie Charts, 3) Milestone Charts, 4) Resources Charts, 5) Earned Value Analysis Charts, 6) Progress/Effort Charts, 7) Travel/Training Charts, and 8) Trend Analysis Charts. There are three Structured Charts available: 1) Bullet Charts, 2) Organization Charts, and 3) Work Breakdown Structure (WBS) Charts. The Block Diagram available is an N x N Chart. Each graphics capability supports a chartbase. The OAP graphics database system provides the IBM PC user with an effective means of managing data which is best interpreted as a graphic display. The OAP graphics database system is written in IBM PASCAL 2.0 and assembler for interactive execution on an IBM PC or XT with at least 384K of memory, and a color graphics adapter and monitor. Printed charts require an Epson, IBM, OKIDATA, or HP Laser

  19. Establishment of Database System for Radiation Oncology

    International Nuclear Information System (INIS)

    Kim, Dae Sup; Lee, Chang Ju; Yoo, Soon Mi; Kim, Jong Min; Lee, Woo Seok; Kang, Tae Young; Back, Geum Mun; Hong, Dong Ki; Kwon, Kyung Tae

    2008-01-01

    To increase operational efficiency and establish a foundation for the development of new radiotherapy treatments, a database was established by arranging and indexing radiotherapy-related records in a well-organized manner for easy access by users. In this study, the Access program provided by Microsoft (MS Office Access) was used to operate the database. The radiation oncology data were divided into business logs and maintenance expenditure, in addition to stock management of accessories, with respect to administrative affairs and machinery management. Data for education and research were divided into education material for department duties, user manuals and related theses depending on their properties. Data registration was designed with input forms according to subject, and the stored information was designed to be inspected by generating reports. The number of machine failures and the respective repair hours, taken from the machine maintenance expenditure records for the period January 2008 to April 2009, were analyzed to compare initial system usage with usage one year later. The radiation oncology database system was accomplished by distinguishing work-related and research-related criteria. The data are arranged and collected according to their subjects and classes, and can be accessed by searching for the required data through the descriptions in each criterion. A 32.3% reduction of the total average time was achieved in analyzing repair hours by recording the number and type of machine failures through the machine maintenance expenditure records for the period January 2008 to April 2009. By distinguishing and indexing present and past data according to subject criteria through the database system for radiation oncology, information can be easily accessed to increase operational efficiency and, further, to provide a foundation for improvement of work processes by acquiring in real time the various information required for new radiotherapy treatments.

  20. Integrated remote sensing and visualization (IRSV) system for transportation infrastructure operations and management, phase two, volume 4 : web-based bridge information database--visualization analytics and distributed sensing.

    Science.gov (United States)

    2012-03-01

    This report introduces the design and implementation of a Web-based bridge information visual analytics system. This project integrates Internet, multiple databases, remote sensing, and other visualization technologies. The result combines a GIS ...

  1. Dissipative distributed systems

    NARCIS (Netherlands)

    Willems, JC; Djaferis, TE; Schick, IC

    2000-01-01

    A controllable distributed dynamical system described by a system of linear constant-coefficient partial differential equations is said to be conservative if for compact support trajectories the integral of the supply rate is zero. It is said to be dissipative if this integral is non-negative. The

  2. Distributed Treatment Systems.

    Science.gov (United States)

    Zgonc, David; Plante, Luke

    2017-10-01

    This section presents a review of the literature published in 2016 on topics relating to distributed treatment systems. This review is divided into the following sections with multiple subsections under each: constituent removal; treatment technologies; and planning and treatment system management.

  3. Advanced Traffic Management Systems (ATMS) research analysis database system

    Science.gov (United States)

    2001-06-01

    The ATMS Research Analysis Database Systems (ARADS) consists of a Traffic Software Data Dictionary (TSDD) and a Traffic Software Object Model (TSOM) for application to microscopic traffic simulation and signal optimization domains. The purpose of thi...

  4. Distribution management system

    Energy Technology Data Exchange (ETDEWEB)

    Verho, P.; Kaerenlampi, M.; Pitkaenen, M.; Jaerventausta, P.; Partanen, J.

    1997-12-31

    This report comprises a general description of the results obtained in the research projects 'Information system applications of a distribution control center', 'Event analysis in primary substation', and 'Distribution management system' of the EDISON research program during the years 1993-1997. The different domains of the project are presented in more detail in other reports. An operational state analysis of a distribution network has been made from the control center point of view and the functions which cannot be solved by a conventional SCADA system are determined. The basis for new computer applications is shown to be integration of the computer systems. The main result of the work is a distribution management system (DMS), which is an autonomous system integrated with the existing information systems, SCADA and AM/FM/GIS. The system uses a large number of modelling and computation methods and provides an extensive group of advanced functions to support distribution network monitoring, fault management, operations planning and optimization. The development platform of the system consists of a Visual C++ programming environment, the Windows NT operating system and a PC. During development the DMS has been tested in a pilot utility and it is nowadays in practical use in several Finnish utilities. The use of a DMS improves the quality and economy of power supply in many ways; in particular, outage times can be reduced using the system. Based on the experience gained, some parts of the DMS reached the commercialization phase, too. Initially the commercial products were developed by a software company, Versoft Oy. At present the research results are the basis of a worldwide software product supplied by ABB Transmit Co. (orig.) EDISON Research Programme. 28 refs.

  5. Solvent Handbook Database System user's manual

    International Nuclear Information System (INIS)

    1993-03-01

    Industrial solvents and cleaners are used in maintenance facilities to remove wax, grease, oil, carbon, machining fluids, solder fluxes, mold release, and various other contaminants from parts, and to prepare the surface of various metals. However, because of growing environmental and worker-safety concerns, government regulations have already excluded the use of some chemicals and have restricted the use of halogenated hydrocarbons because they affect the ozone layer and may cause cancer. The Solvent Handbook Database System lets you view information on solvents and cleaners, including test results on cleaning performance, air emissions, recycling and recovery, corrosion, and non-metals compatibility. Company and product safety information is also available

  6. ASEAN Mineral Database and Information System (AMDIS)

    Science.gov (United States)

    Okubo, Y.; Ohno, T.; Bandibas, J. C.; Wakita, K.; Oki, Y.; Takahashi, Y.

    2014-12-01

    AMDIS was launched officially at the Fourth ASEAN Ministerial Meeting on Minerals on 28 November 2013. In cooperation with the Geological Survey of Japan, the web-based GIS was developed using Free and Open Source Software (FOSS) and the Open Geospatial Consortium (OGC) standards. The system is composed of local databases and a centralized GIS. The local databases, created and updated using the centralized GIS, are accessible from the portal site. The system introduces distinct advantages over traditional GIS: a global reach, a large number of users, better cross-platform capability, no charge for users, no charge for the provider, ease of use, and unified updates. By raising the transparency of mineral information to mining companies and to the public, AMDIS shows that mineral resources are abundant throughout the ASEAN region; however, there are many data gaps. We understand that such problems occur because of insufficient governance of mineral resources. The mineral governance we refer to is a concept that enforces and maximizes the capacity and systems of the government institutions that manage the minerals sector. The elements of mineral governance include a) strengthening of the information infrastructure facility, b) the technological and legal capacities of state-owned mining companies to fully engage with mining sponsors, c) government-led management of mining projects by supporting the project implementation units, d) government capacity in mineral management, such as the control and monitoring of mining operations, and e) facilitation of regional and local development plans and their implementation with the private sector.

  7. Development of knowledge base system linked to material database

    International Nuclear Information System (INIS)

    Kaji, Yoshiyuki; Tsuji, Hirokazu; Mashiko, Shinichi; Miyakawa, Shunichi; Fujita, Mitsutane; Kinugawa, Junichi; Iwata, Shuichi

    2002-01-01

    The distributed material database system named 'Data-Free-Way' has been developed by four organizations (the National Institute for Materials Science, the Japan Atomic Energy Research Institute, the Japan Nuclear Cycle Development Institute, and the Japan Science and Technology Corporation) under a cooperative agreement in order to share fresh and stimulating information as well as accumulated information for the development of advanced nuclear materials, for the design of structural components, etc. In order to create additional value from the system, a knowledge base system, in which knowledge extracted from the material database is expressed, is planned to be developed for more effective utilization of Data-Free-Way. XML (eXtensible Markup Language) has been adopted as the method for describing the retrieved results and their meaning. One knowledge note described with XML is stored as one piece of knowledge composing the knowledge base. Since each knowledge note is described with XML, the user can easily convert the display form of the tables and graphs into the data formats the user usually uses. This paper describes the current status of Data-Free-Way, the method for describing knowledge extracted from the material database with XML, and the distributed material knowledge base system. (author)
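
    A minimal sketch of what an XML "knowledge note" of the kind described above might look like when built programmatically; the tag names and content are invented for illustration, since the abstract does not give the actual Data-Free-Way schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical structure of a "knowledge note": the real Data-Free-Way schema
# is not published in this abstract, so all tag names and values are illustrative.
note = ET.Element("knowledge_note")
ET.SubElement(note, "source_query").text = "creep rupture strength, austenitic steel, 823 K"

table = ET.SubElement(note, "retrieved_table")
row = ET.SubElement(table, "row")
row.set("stress_MPa", "120")
row.set("rupture_time_h", "10500")

ET.SubElement(note, "interpretation").text = (
    "Rupture life decreases roughly log-linearly with applied stress at 823 K."
)

# Serialize the note; a client could re-render the table or graph in its own format.
print(ET.tostring(note, encoding="unicode"))
```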

  8. Dynamic graph system for a semantic database

    Science.gov (United States)

    Mizell, David

    2015-01-27

    A method and system in a computer system for dynamically providing a graphical representation of a data store of entries via a matrix interface is disclosed. A dynamic graph system provides a matrix interface that exposes to an application program a graphical representation of data stored in a data store such as a semantic database storing triples. To the application program, the matrix interface represents the graph as a sparse adjacency matrix that is stored in compressed form. Each entry of the data store is considered to represent a link between nodes of the graph. Each entry has a first field and a second field identifying the nodes connected by the link and a third field with a value for the link that connects the identified nodes. The first, second, and third fields represent the rows, column, and elements of the adjacency matrix.
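
    The mapping described above, from a store of (node, node, value) entries to a sparse adjacency matrix, can be illustrated with a small sketch; the triples and the use of SciPy's COO format are assumptions for the example, not details of the patented system.

```python
import numpy as np
from scipy.sparse import coo_matrix

# Hypothetical triples (subject, object, value); node names are made up.
triples = [("alice", "bob", 1.0), ("bob", "carol", 2.0), ("alice", "carol", 0.5)]

# Map node labels to matrix row/column indices.
nodes = sorted({n for s, o, _ in triples for n in (s, o)})
index = {name: i for i, name in enumerate(nodes)}

rows = [index[s] for s, _, _ in triples]
cols = [index[o] for _, o, _ in triples]
vals = [v for _, _, v in triples]

# Sparse adjacency matrix: entry (i, j) holds the value of the link i -> j.
adj = coo_matrix((vals, (rows, cols)), shape=(len(nodes), len(nodes)))
print(adj.toarray())
```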

  9. Design and implementation of typical target image database system

    International Nuclear Information System (INIS)

    Qin Kai; Zhao Yingjun

    2010-01-01

    It is necessary to provide essential background data and thematic data timely in image processing and application. In fact, application is an integrating and analyzing procedure with different kinds of data. In this paper, the authors describe an image database system which classifies, stores, manages and analyzes database of different types, such as image database, vector database, spatial database, spatial target characteristics database, its design and structure. (authors)

  10. Database system selection for marketing strategies support in information systems

    Directory of Open Access Journals (Sweden)

    František Dařena

    2007-01-01

    Full Text Available In today's dynamically changing environment marketing has a significant role. Creating successful marketing strategies requires a large amount of high-quality information of various kinds and data types. A powerful database management system is a necessary condition for supporting the creation of marketing strategies. The paper briefly describes the field of marketing strategies and specifies the features that should be provided by database systems in connection with supporting these strategies. Major commercial (Oracle, DB2, MS SQL, Sybase) and open-source (PostgreSQL, MySQL, Firebird) databases are then examined from the point of view of accordance with these characteristics and a comparison is made. The results are useful for making the decision before acquisition of a database system during specification of an information system's hardware architecture.

  11. Spatial Database Modeling for Indoor Navigation Systems

    Science.gov (United States)

    Gotlib, Dariusz; Gnat, Miłosz

    2013-12-01

    For many years, cartographers have been involved in designing GIS and navigation systems. Most GIS applications use outdoor data. Increasingly, similar applications are used inside buildings. Therefore it is important to find a proper model for an indoor spatial database. The development of indoor navigation systems should utilize advanced teleinformation, geoinformatics, geodetic and cartographical knowledge. The authors present the fundamental requirements for an indoor data model for navigation purposes. Presenting some of the solutions adopted around the world, they emphasize that navigation applications require specific data to present navigation routes in the right way. An original solution for an indoor data model, created by the authors on the basis of the BISDM model, is presented. Its purpose is to expand the opportunities for use in indoor navigation.

  12. Relational databases

    CERN Document Server

    Bell, D A

    1986-01-01

    Relational Databases explores the major advances in relational databases and provides a balanced analysis of the state of the art in relational databases. Topics covered include capture and analysis of data placement requirements; distributed relational database systems; data dependency manipulation in database schemata; and relational database support for computer graphics and computer aided design. This book is divided into three sections and begins with an overview of the theory and practice of distributed systems, using the example of INGRES from Relational Technology as illustration. The

  13. Optimizing electrical distribution systems

    International Nuclear Information System (INIS)

    Scott, W.G.

    1990-01-01

    Electrical utility distribution systems are in the middle of an unprecedented technological revolution in planning, design, maintenance and operation. The prime movers of the revolution are the major economic shifts that affect decision making. The major economic influence on the revolution is the cost of losses (technical and nontechnical). The vehicle of the revolution is the computer, which enables decision makers to examine alternatives in greater depth and detail than their predecessors could. The more important elements of the technological revolution are: system planning, computers, load forecasting, analytical systems (primary systems, transformers and secondary systems), system losses and coming technology. The paper is directed towards the rather unique problems encountered by engineers of utilities in developing countries - problems that are being solved through high technology, such as the recent World Bank-financed engineering computer system for Sri Lanka. This system includes a DEC computer, digitizer, plotter and engineering software to model the distribution system via a digitizer, analyse the system and plot single-line diagrams. (author). 1 ref., 4 tabs., 6 figs

  14. LINGUISTIC DATABASE FOR AUTOMATIC GENERATION SYSTEM OF ENGLISH ADVERTISING TEXTS

    Directory of Open Access Journals (Sweden)

    N. A. Metlitskaya

    2017-01-01

    Full Text Available The article deals with the linguistic database for a system of automatic generation of English advertising texts on cosmetics and perfumery. The database for such a system includes two main blocks: an automatic dictionary (that contains semantic and morphological information for each word) and semantic-syntactical formulas of the texts in a special formal language, SEMSINT. The database is built on the results of the analysis of 30 English advertising texts on cosmetics and perfumery. First, each word was given a unique code. For example, N stands for nouns, A for adjectives, V for verbs, etc. Then all the lexicon of the analyzed texts was distributed into different semantic categories. According to this semantic classification each word was given a special semantic code. For example, the record N01 that is attributed to the word «lip» in the dictionary means that this word refers to nouns of the semantic category «part of a human's body». The second block of the database includes the semantic-syntactical formulas of the analyzed advertising texts written in the special formal language SEMSINT. The author gives a brief description of this language, presenting its essence and structure. Also, an example of one formalized advertising text in SEMSINT is provided.
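
    A toy illustration of the dictionary block described above, in which each word carries a part-of-speech code and a semantic code; the N01 entry for «lip» follows the abstract, while the other entries and the lookup helper are invented for the example.

```python
# Illustrative only: the actual dictionary format and SEMSINT formulas are not
# given in the abstract, so the codes and entries below (except N01 for "lip")
# are invented.
automatic_dictionary = {
    "lip":     {"pos": "N", "semantic_code": "N01"},   # N01 = part of a human's body
    "smooth":  {"pos": "A", "semantic_code": "A03"},   # hypothetical code
    "nourish": {"pos": "V", "semantic_code": "V02"},   # hypothetical code
}

def semantic_category(word: str) -> str:
    """Look up the semantic code recorded for a word, if any."""
    entry = automatic_dictionary.get(word.lower())
    return entry["semantic_code"] if entry else "UNKNOWN"

print(semantic_category("lip"))   # -> N01
```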

  15. Distributed Pseudo-Random Number Generation and Its Application to Cloud Database

    OpenAIRE

    Chen, Jiageng; Miyaji, Atsuko; Su, Chunhua

    2014-01-01

    Cloud databases are now a rapidly growing trend in the cloud computing market. They enable clients to run their computations on out-sourced databases or to access distributed database services on the cloud. At the same time, security and privacy concerns are a major challenge for cloud databases to continue growing. To enhance the security and privacy of cloud database technology, pseudo-random number generation (PRNG) plays an important role in data encryption and privacy-pr...
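
    As a minimal illustration of the role a PRNG plays in this setting, the sketch below draws key material from a cryptographically secure generator; it does not reproduce the distributed construction proposed in the paper, and the function name is an assumption for the example.

```python
import secrets

# A cryptographically secure pseudo-random source; the paper's distributed PRNG
# is not reproduced here, this only shows where random material enters the picture.
def generate_data_key(num_bytes: int = 32) -> bytes:
    """Produce random key material, e.g. for encrypting records before
    they are sent to an out-sourced cloud database."""
    return secrets.token_bytes(num_bytes)

key = generate_data_key()
print(key.hex())
```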

  16. Implementing database system for LHCb publications page

    CERN Document Server

    Abdullayev, Fakhriddin

    2017-01-01

    LHCb is one of the main detectors of the Large Hadron Collider, where physicists and scientists work together on high precision measurements of matter-antimatter asymmetries and searches for rare and forbidden decays, with the aim of discovering new and unexpected forces. The work consists not only of analyzing data collected from experiments but also of publishing the results of those analyses. The LHCb publications are gathered on the LHCb publications page to maximize their availability both to LHCb members and to the high energy community. In this project a new database system was implemented for the LHCb publications page. This will help to improve access to research papers for scientists and provide better integration with the current CERN library website and others.

  17. Distributed operating system for NASA ground stations

    Science.gov (United States)

    Doyle, John F.

    1987-01-01

    NASA ground stations are characterized by ever changing support requirements, so application software is developed and modified on a continuing basis. A distributed operating system was designed to optimize the generation and maintenance of those applications. Unusual features include automatic program generation from detailed design graphs, on-line software modification in the testing phase, and the incorporation of a relational database within a real-time, distributed system.

  18. The Signal Distribution System

    CERN Document Server

    Belohrad, D; CERN. Geneva. AB Department

    2005-01-01

    For the purpose of LHC signal observation and high-frequency signal distribution, the Signal Distribution System (SDS) was built. The SDS can contain up to 5 switching elements, each of which allows the user to switch between up to 8 bi-directional signals. Coaxial relays are used to switch the signals. Depending on the coaxial relay type used, the transfer bandwidth can reach 18 GHz. The SDS is controllable via TCP/IP, parallel port, or locally by a rotary switch.

  19. Development of web database system for JAERI ERL-FEL

    International Nuclear Information System (INIS)

    Kikuzawa, Nobuhiro

    2005-01-01

    The accelerator control system for the JAERI ERL-FEL is a PC-based distributed control system. The accelerator status record is stored automatically through the control system in order to analyze influences on the electron beam. To handle the large volume of stored data effectively, the required data must be easy to search and to visualize. For this reason, a web database (DB) system which can search for the required data and display it visually in a web browser was developed using open source software. With the introduction of this system, accelerator operators can monitor real-time information anytime, anywhere through a web browser. The development of the web DB system is described in this paper. (author)

  20. Distributed Optimization System

    Science.gov (United States)

    Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.

    2004-11-30

    A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agent can be one or more physical agents, such as a robot, and can be software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time dependent sources, time independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, and a multi-processor computer.

  1. EU-ADR healthcare database network vs. spontaneous reporting system database: preliminary comparison of signal detection.

    Science.gov (United States)

    Trifirò, Gianluca; Patadia, Vaishali; Schuemie, Martijn J; Coloma, Preciosa M; Gini, Rosa; Herings, Ron; Hippisley-Cox, Julia; Mazzaglia, Giampiero; Giaquinto, Carlo; Scotti, Lorenza; Pedersen, Lars; Avillach, Paul; Sturkenboom, Miriam C J M; van der Lei, Johan; Eu-Adr Group

    2011-01-01

    The EU-ADR project aims to exploit different European electronic healthcare records (EHR) databases for drug safety signal detection. In this paper we report the preliminary results concerning the comparison of signal detection between EU-ADR network and two spontaneous reporting databases, the Food and Drug Administration and World Health Organization databases. EU-ADR data sources consist of eight databases in four countries (Denmark, Italy, Netherlands, and United Kingdom) that are virtually linked through distributed data network. A custom-built software (Jerboa©) elaborates harmonized input data that are produced locally and generates aggregated data which are then stored in a central repository. Those data are subsequently analyzed through different statistics (i.e. Longitudinal Gamma Poisson Shrinker). As potential signals, all the drugs that are associated to six events of interest (bullous eruptions - BE, acute renal failure - ARF, acute myocardial infarction - AMI, anaphylactic shock - AS, rhabdomyolysis - RHABD, and upper gastrointestinal bleeding - UGIB) have been detected via different data mining techniques in the two systems. Subsequently a comparison concerning the number of drugs that could be investigated and the potential signals detected for each event in the spontaneous reporting systems (SRSs) and EU-ADR network was made. SRSs could explore, as potential signals, a larger number of drugs for the six events, in comparison to EU-ADR (range: 630-3,393 vs. 87-856), particularly for those events commonly thought to be potentially drug-induced (i.e. BE: 3,393 vs. 228). The highest proportion of signals detected in SRSs was found for BE, ARF and AS, while for ARF, and UGIB in EU-ADR. In conclusion, it seems that EU-ADR longitudinal database network may complement traditional spontaneous reporting system for signal detection, especially for those adverse events that are frequent in general population and are not commonly thought to be drug
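
    For illustration only, the sketch below computes a much simpler disproportionality measure (a proportional reporting ratio) than the Longitudinal Gamma Poisson Shrinker used in EU-ADR; the 2x2 counts are invented.

```python
# A deliberately simplified disproportionality measure, NOT the Longitudinal
# Gamma Poisson Shrinker used in EU-ADR; all counts below are invented.
def proportional_reporting_ratio(a, b, c, d):
    """a: reports of the event with the drug, b: other events with the drug,
    c: the event with all other drugs, d: other events with all other drugs."""
    return (a / (a + b)) / (c / (c + d))

# Hypothetical 2x2 counts for one drug-event pair.
prr = proportional_reporting_ratio(a=30, b=970, c=300, d=99700)
print(f"PRR = {prr:.2f}")  # values well above 1 would flag a potential signal
```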

  2. Characterization analysis database system (CADS). A system overview

    International Nuclear Information System (INIS)

    1997-12-01

    The CADS database is a standardized, quality-assured, and configuration-controlled data management system developed to assist in the task of characterizing the DOE surplus HEU material. Characterization of the surplus HEU inventory includes identifying the specific material; gathering existing data about the inventory; defining the processing steps that may be necessary to prepare the material for transfer to a blending site; and, ultimately, developing a range of the preliminary cost estimates for those processing steps. Characterization focuses on producing commercial reactor fuel as the final step in material disposition. Based on the project analysis results, the final determination will be made as to the viability of the disposition path for each particular item of HEU. The purpose of this document is to provide an informational overview of the CADS database, its evolution, and its current capabilities. This document describes the purpose of CADS, the system requirements it fulfills, the database structure, and the operational guidelines of the system

  3. Distributed Data Management and Distributed File Systems

    CERN Document Server

    Girone, Maria

    2015-01-01

    The LHC program has been successful in part due to the globally distributed computing resources used for collecting, serving, processing, and analyzing the large LHC datasets. The introduction of distributed computing early in the LHC program spawned the development of new technologies and techniques to synchronize information and data between physically separated computing centers. Two of the most challenging services are distributed file systems and distributed data management systems. In this paper I will discuss how we have evolved from local site services to more globally independent services in the areas of distributed file systems and data management and how these capabilities may continue to evolve into the future. I will address the design choices, the motivations, and the future evolution of the computing systems used for High Energy Physics.

  4. 16th East-European Conference on Advances in Databases and Information Systems (ADBIS 2012)

    CERN Document Server

    Härder, Theo; Wrembel, Robert; Advances in Databases and Information Systems

    2013-01-01

    This volume is the second one of the 16th East-European Conference on Advances in Databases and Information Systems (ADBIS 2012), held on September 18-21, 2012, in Poznań, Poland. The first one has been published in the LNCS series.   This volume includes 27 research contributions, selected out of 90. The contributions cover a wide spectrum of topics in the database and information systems field, including: database foundation and theory, data modeling and database design, business process modeling, query optimization in relational and object databases, materialized view selection algorithms, index data structures, distributed systems, system and data integration, semi-structured data and databases, semantic data management, information retrieval, data mining techniques, data stream processing, trust and reputation in the Internet, and social networks. Thus, the content of this volume covers the research areas from fundamentals of databases, through still hot topic research problems (e.g., data mining, XML ...

  5. Audit Database and Information Tracking System

    Data.gov (United States)

    Social Security Administration — This database contains information about the Social Security Administration's audits regarding SSA agency performance and compliance. These audits can be requested...

  6. Minority Serving Institutions Reporting System Database

    Data.gov (United States)

    Social Security Administration — The database will be used to track SSA's contributions to Minority Serving Institutions such as Historically Black Colleges and Universities (HBCU), Tribal Colleges...

  7. Distributed Deliberative Recommender Systems

    Science.gov (United States)

    Recio-García, Juan A.; Díaz-Agudo, Belén; González-Sanz, Sergio; Sanchez, Lara Quijano

    Case-Based Reasoning (CBR) is one of the most successful applied AI technologies of recent years. Although many CBR systems reason locally on a previous experience base to solve new problems, in this paper we focus on distributed retrieval processes working on a network of collaborating CBR systems. In such systems, each node in a network of CBR agents collaborates with other nodes, arguing and counter-arguing its local results, to improve the performance of the system's global response. We describe D2ISCO: a framework to design and implement deliberative and collaborative CBR systems that is integrated as a part of jCOLIBRI 2, an established framework in the CBR community. We apply D2ISCO to one particular simplified type of CBR system: recommender systems. We perform a first case study for a collaborative music recommender system and present the results of an experiment on the accuracy of the system's results using a fuzzy version of the argumentation system AMAL and a network topology based on a social network. Besides individual recommendation, we also discuss how D2ISCO can be used to improve recommendations to groups, and we present a second case study based on the movie recommendation domain with heterogeneous groups according to group personality composition and a group topology based on a social network.

  8. Software Application for Supporting the Education of Database Systems

    Science.gov (United States)

    Vágner, Anikó

    2015-01-01

    The article introduces an application which supports the education of database systems, particularly the teaching of SQL and PL/SQL in Oracle Database Management System environment. The application has two parts, one is the database schema and its content, and the other is a C# application. The schema is to administrate and store the tasks and the…

  9. Database Management Systems: New Homes for Migrating Bibliographic Records.

    Science.gov (United States)

    Brooks, Terrence A.; Bierbaum, Esther G.

    1987-01-01

    Assesses bibliographic databases as part of visionary text systems such as hypertext and scholars' workstations. Downloading is discussed in terms of the capability to search records and to maintain unique bibliographic descriptions, and relational database management systems, file managers, and text databases are reviewed as possible hosts for…

  10. Databases

    Digital Repository Service at National Institute of Oceanography (India)

    Kunte, P.D.

    Information on bibliographic as well as numeric/textual databases relevant to coastal geomorphology has been included in a tabular form. Databases cover a broad spectrum of related subjects like coastal environment and population aspects, coastline...

  11. Distributed System Design Checklist

    Science.gov (United States)

    Hall, Brendan; Driscoll, Kevin

    2014-01-01

    This report describes a design checklist targeted to fault-tolerant distributed electronic systems. Many of the questions and discussions in this checklist may be generally applicable to the development of any safety-critical system. However, the primary focus of this report covers the issues relating to distributed electronic system design. The questions that comprise this design checklist were created with the intent to stimulate system designers' thought processes in a way that hopefully helps them to establish a broader perspective from which they can assess the system's dependability and fault-tolerance mechanisms. While best effort was expended to make this checklist as comprehensive as possible, it is not (and cannot be) complete. Instead, we expect that this list of questions and the associated rationale for the questions will continue to evolve as lessons are learned and further knowledge is established. In this regard, it is our intent to post the questions of this checklist on a suitable public web-forum, such as the NASA DASHLink AFCS repository. From there, we hope that it can be updated, extended, and maintained after our initial research has been completed.

  12. Distributed security in closed distributed systems

    DEFF Research Database (Denmark)

    Hernandez, Alejandro Mario

    in their design. There should always exist techniques for ensuring that the required security properties are met. This has been thoroughly investigated through the years, and many varied methodologies have come through. In the case of distributed systems, there are even harder issues to deal with. Many approaches ... have been taken towards solving security problems, yet many questions remain unanswered. Most of these problems are related to some of the following facts: distributed systems do not usually have any central controller providing security to the entire system; the system heterogeneity is usually ... reflected in heterogeneous security aims; the software life cycle entails evolution and this includes security expectations; the distribution is useful if the entire system is “open” to new (a priori unknown) interactions; the distribution itself poses intrinsically more complex security-related problems ...

  13. An Adaptive Database Intrusion Detection System

    Science.gov (United States)

    Barrios, Rita M.

    2011-01-01

    Intrusion detection is difficult to accomplish with current methodologies when the database and the authorized entity are considered. It is commonly understood that current methodologies focus on the network architecture rather than the database, which is not an adequate solution when considering the insider threat. Recent…

  14. Carotenoids Database: structures, chemical fingerprints and distribution among organisms.

    Science.gov (United States)

    Yabuzaki, Junko

    2017-01-01

    To promote understanding of how organisms are related via carotenoids, either evolutionarily or symbiotically, or in food chains through natural histories, we built the Carotenoids Database. This provides chemical information on 1117 natural carotenoids with 683 source organisms. For extracting organisms closely related through the biosynthesis of carotenoids, we offer a new similarity search system 'Search similar carotenoids' using our original chemical fingerprint 'Carotenoid DB Chemical Fingerprints'. These Carotenoid DB Chemical Fingerprints describe the chemical substructure and the modification details based upon International Union of Pure and Applied Chemistry (IUPAC) semi-systematic names of the carotenoids. The fingerprints also allow (i) easier prediction of six biological functions of carotenoids: provitamin A, membrane stabilizers, odorous substances, allelochemicals, antiproliferative activity and reverse MDR activity against cancer cells, (ii) easier classification of carotenoid structures, (iii) partial and exact structure searching and (iv) easier extraction of structural isomers and stereoisomers. We believe this to be the first attempt to establish fingerprints using the IUPAC semi-systematic names. For extracting close profiled organisms, we provide a new tool 'Search similar profiled organisms'. Our current statistics show some insights into natural history: carotenoids seem to have been spread largely by bacteria, as they produce C30, C40, C45 and C50 carotenoids, with the widest range of end groups, and they share a small portion of C40 carotenoids with eukaryotes. Archaea share an even smaller portion with eukaryotes. Eukaryotes then have evolved a considerable variety of C40 carotenoids. Considering carotenoids, eukaryotes seem more closely related to bacteria than to archaea aside from 16S rRNA lineage analysis. : http://carotenoiddb.jp. © The Author(s) 2017. Published by Oxford University Press.
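
    A generic sketch of how a fingerprint-based similarity search can rank related structures, using a Tanimoto (Jaccard) coefficient over feature sets; the feature names are invented, and the real Carotenoid DB Chemical Fingerprints derived from IUPAC semi-systematic names are not reproduced here.

```python
# Illustrative similarity between two structures represented as sets of
# substructure/modification features; the feature labels below are invented.
def tanimoto(fp_a: set, fp_b: set) -> float:
    """Jaccard/Tanimoto coefficient between two feature-set fingerprints."""
    if not fp_a and not fp_b:
        return 1.0
    return len(fp_a & fp_b) / len(fp_a | fp_b)

beta_carotene_like = {"C40", "beta-end-group:2", "conjugated-double-bonds:11"}
zeaxanthin_like    = {"C40", "beta-end-group:2", "conjugated-double-bonds:11", "3-hydroxy:2"}

print(round(tanimoto(beta_carotene_like, zeaxanthin_like), 2))  # -> 0.75
```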

  15. CONQuEST - Menu-selectable database system

    Energy Technology Data Exchange (ETDEWEB)

    Yeko, J.D. (Illinois State Geological Survey, Champaign (USA))

    1989-08-01

    The well database unit of the Illinois State Geological Survey Oil and Gas section began to design and develop a technically advanced oil and gas database system in 1988. The CONQuEST system integrates and replaces the existing oil and gas, water, coal, and geotechnical database systems. CONQuEST uses a distributed relational data model that allows integrated storage and retrieval of different data and well types in an almost unlimited variety of report forms. The software, written in C, consists of five menu-selectable modules that allow a novice computer user to enter, edit, retrieve, and report data. The GeoDES module is used to enter data from paper records and consists of numerous fill-in-the-blank screens. The TIDE module is used to edit or delete any existing data. There are two modules for data retrieval. QuARTz is used for quick, preplanned retrievals and ToPAz is used for self-designed retrievals. ToPAz retrieval designs may be saved and added to the menu systems and then accessed through the QuARTz module. The COReS module consists of numerous predesigned report options. Structured Query Language (SQL) is also available as an option. CONQuEST is currently run on a Digital Equipment Corporation VAX that is interfaced to a series of PCs; however, all software can be run on a PC only. Benefits of the CONQuEST system over the previous system are increased speed, greater flexibility, the ability to run on a PC, and the menu system that allows for successful access and use of the data by novice computer users. Data system use is available to the general public for a fee.

  16. Selecting a Relational Database Management System for Library Automation Systems.

    Science.gov (United States)

    Shekhel, Alex; O'Brien, Mike

    1989-01-01

    Describes the evaluation of four relational database management systems (RDBMSs) (Informix Turbo, Oracle 6.0 TPS, Unify 2000 and Relational Technology's Ingres 5.0) to determine which is best suited for library automation. The evaluation criteria used to develop a benchmark specifically designed to test RDBMSs for libraries are discussed. (CLB)

  17. Web-based material property database system

    Energy Technology Data Exchange (ETDEWEB)

    Lee, W. K.; Huh, Y. H.; Moon, H. G. [Korea Research Institute of Standards and Science, Taejon (Korea, Republic of)

    2000-07-01

    This report describes the test installations established by the Korea Research Institute of Standards and Science and the contents and functions of a database on the creep and fatigue of high-temperature-resistant steel used in petroleum chemical plants. The database can be searched through a commercial web browser, and the data can also be examined by plotting the relationships among data collected at different temperatures for the material's creep rupture, creep deformation, creep crack growth, low cycle fatigue, high cycle fatigue, and fatigue crack growth. (Hong, J. S.)

  18. Research on computer virus database management system

    Science.gov (United States)

    Qi, Guoquan

    2011-12-01

    The growing proliferation of computer viruses has become a lethal threat and a research focus of network information security. As new viruses emerge, the number of viruses grows and virus classification becomes increasingly complex. Virus naming cannot be unified because agencies capture viruses at different times. Although each agency has its own virus database, communication between them is lacking, virus information is incomplete, or only a small number of samples is available. This paper introduces the current construction status of virus databases at home and abroad, analyzes how to standardize and complete the description of virus characteristics, and then gives a design scheme for a computer virus database covering information integrity, storage security and manageability.

  19. Data Mining in Distributed Database of the First Egyptian Thermal Research Reactor (ETRR-1)

    International Nuclear Information System (INIS)

    Abo Elez, R.H.; Ayad, N.M.A.; Ghuname, A.A.A.

    2006-01-01

    Distributed database (DDB) technology application systems are growing to cover many fields and domains at different levels. The aim of this paper is to shed some light on applying distributed database technology to the ETRR-1 operation data logged by the data acquisition system (DACQUS) so that useful knowledge can be extracted. Data mining with scientific methods and specialized tools is used to support the extraction of useful knowledge from rapidly growing volumes of data. Data mining methods take many shapes and forms. Predictive methods furnish models capable of anticipating the future behavior of quantitative or qualitative database variables. When the relationship between the dependent and independent variables is nearly linear, linear regression is the appropriate data mining strategy. Multiple linear regression models have therefore been applied to a set of samples of the ETRR-1 operation data using the least squares method. The results show an accurate analysis of the multiple linear regression models as applied to the ETRR-1 operation data.
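
    The record applies multiple linear regression fitted by least squares to the logged operation data. A minimal sketch of that technique in Python with NumPy; the operation variables and values below are hypothetical placeholders, not ETRR-1 data:

        import numpy as np

        # Hypothetical operation-log samples: each row is one logged record
        # (coolant_flow, inlet_temp, control_rod_pos) -> response variable
        X = np.array([[510.0, 37.2, 0.62],
                      [498.0, 36.8, 0.55],
                      [505.0, 37.9, 0.71],
                      [520.0, 38.4, 0.80],
                      [515.0, 38.0, 0.75]])
        y = np.array([1.82, 1.64, 1.95, 2.10, 2.03])    # illustrative response values

        # Least-squares fit of y = b0 + b1*x1 + b2*x2 + b3*x3
        A = np.hstack([np.ones((X.shape[0], 1)), X])    # prepend an intercept column
        coeffs, residuals, rank, _ = np.linalg.lstsq(A, y, rcond=None)

        print("intercept and coefficients:", coeffs)
        print("residual sum of squares:", residuals)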

  20. Communication Facilities for Distributed Systems

    Directory of Open Access Journals (Sweden)

    V. Barladeanu

    1997-01-01

    Full Text Available The design of physical networks and communication protocols in Distributed Systems can have a direct impact on system efficiency and reliability. This paper tries to identify efficient mechanisms and paradigms for communication in distributed systems.

  1. Distributed Systems Technology Survey.

    Science.gov (United States)

    1987-03-01

    Carnegie-Mellon University, Software Engineering Institute, Pittsburgh, PA; E. C. Cooper; March 1987; CMU/SEI-87-TR-5. ...generalization of single-level atomic transactions, in order to allow them to mesh properly with the concepts of composition and abstraction supported by program...

  2. Database Performance Monitoring for the Photovoltaic Systems

    Energy Technology Data Exchange (ETDEWEB)

    Klise, Katherine A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-10-01

    The Database Performance Monitoring (DPM) software (copyright in processes) is being developed at Sandia National Laboratories to perform quality control analysis on time series data. The software loads time indexed databases (currently csv format), performs a series of quality control tests defined by the user, and creates reports which include summary statistics, tables, and graphics. DPM can be setup to run on an automated schedule defined by the user. For example, the software can be run once per day to analyze data collected on the previous day. HTML formatted reports can be sent via email or hosted on a website. To compare performance of several databases, summary statistics and graphics can be gathered in a dashboard view which links to detailed reporting information for each database. The software can be customized for specific applications.
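
    A minimal sketch of the kind of quality-control pass described above, assuming a time-indexed CSV file; the column names, cadence and thresholds are invented for illustration and this is not the DPM code or its API:

        import pandas as pd

        # Load a time-indexed CSV (file and column names are hypothetical)
        df = pd.read_csv("pv_data.csv", parse_dates=["timestamp"], index_col="timestamp")

        report = {}
        # Test 1: missing records against an expected 1-minute cadence
        expected = pd.date_range(df.index.min(), df.index.max(), freq="1min")
        report["missing_records"] = len(expected.difference(df.index))

        # Test 2: values outside a user-defined valid range
        lo, hi = 0.0, 1500.0            # plausible irradiance bounds, W/m^2
        bad = df[(df["irradiance"] < lo) | (df["irradiance"] > hi)]
        report["out_of_range"] = len(bad)

        # Test 3: flat-lined sensor (no variation over a 30-minute window)
        flat = (df["irradiance"].rolling("30min").std() == 0).sum()
        report["flatline_windows"] = int(flat)

        print(report)   # summary statistics like these could feed an HTML report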

  3. Detecting data anomalies methods in distributed systems

    Science.gov (United States)

    Mosiej, Lukasz

    2009-06-01

    Distributed systems have become the most popular systems in big companies. Nowadays many telecommunications companies want to hold large volumes of data about all of their customers. Obviously, those data cannot be stored in a single database because of many technical difficulties, such as data access efficiency, security, etc. On the other hand there is no need to hold all data in one place, because companies already have dedicated systems to perform specific tasks. In distributed systems there is redundancy of data, and each system holds only the data it needs, in an appropriate form. Data updated in one system should also be updated in the other systems which hold that data, but updating those data in all systems in a transactional way poses technical problems. This article is about data anomalies in distributed systems. Available data anomaly detection methods are shown. Furthermore, an initial concept for new data anomaly detection methods is described in the last section.
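
    As an illustration of the kind of divergence the article targets, a minimal sketch that compares redundant copies of the same customer records held by two systems and flags mismatched or missing fields; the systems, keys and fields are invented:

        # Sketch only: real systems would compare in bulk via keys and checksums.
        billing = {101: {"name": "A. Kowalski", "tariff": "T2", "city": "Warszawa"},
                   102: {"name": "B. Nowak",    "tariff": "T1", "city": "Gdansk"}}
        crm     = {101: {"name": "A. Kowalski", "tariff": "T3", "city": "Warszawa"},
                   103: {"name": "C. Lis",      "tariff": "T1", "city": "Lodz"}}

        def find_anomalies(a, b):
            anomalies = []
            for key in a.keys() & b.keys():                 # records present in both systems
                for field in a[key].keys() & b[key].keys():
                    if a[key][field] != b[key][field]:
                        anomalies.append((key, field, a[key][field], b[key][field]))
            for key in a.keys() ^ b.keys():                 # records present in only one system
                anomalies.append((key, "<missing in one system>", None, None))
            return anomalies

        for item in find_anomalies(billing, crm):
            print(item)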

  4. An Integrated Enterprise Accelerator Database for the SLC Control System

    International Nuclear Information System (INIS)

    2002-01-01

    Since its inception in the early 1980's, the SLC Control System has been driven by a highly structured memory-resident real-time database. While efficient, its rigid structure and file-based sources make it difficult to maintain and extract relevant information. The goal of transforming the sources for this database into a relational form is to enable it to be part of a Control System Enterprise Database that is an integrated central repository for SLC accelerator device and Control System data with links to other associated databases. We have taken the concepts developed for the NLC Enterprise Database and used them to create and load a relational model of the online SLC Control System database. This database contains data and structure to allow querying and reporting on beamline devices, their associations and parameters. In the future this will be extended to allow generation of EPICS and SLC database files, setup of applications and links to other databases such as accelerator maintenance, archive data, financial and personnel records, cabling information, documentation etc. The database is implemented using Oracle 8i. In the short term it will be updated daily in batch from the online SLC database. In the longer term, it will serve as the primary source for Control System static data, an R and D platform for the NLC, and contribute to SLC Control System operations

  5. Report of the SRC working party on databases and database management systems

    International Nuclear Information System (INIS)

    Crennell, K.M.

    1980-10-01

    An SRC working party, set up to consider the subject of support for databases within the SRC, was asked to identify interested individuals and user communities, establish which features of database management systems they felt were desirable, arrange demonstrations of possible systems and then make recommendations for systems, funding and likely manpower requirements. This report describes the activities and lists the recommendations of the working party and contains a list of databases maintained or proposed by those who replied to a questionnaire. (author)

  6. Databases

    Directory of Open Access Journals (Sweden)

    Nick Ryan

    2004-01-01

    Full Text Available Databases are deeply embedded in archaeology, underpinning and supporting many aspects of the subject. However, as well as providing a means for storing, retrieving and modifying data, databases themselves must be a result of a detailed analysis and design process. This article looks at this process, and shows how the characteristics of data models affect the process of database design and implementation. The impact of the Internet on the development of databases is examined, and the article concludes with a discussion of a range of issues associated with the recording and management of archaeological data.

  7. Exploration of a Vision for Actor Database Systems

    DEFF Research Database (Denmark)

    Shah, Vivek

    ...of these services. Existing popular approaches to building these services either use an in-memory database system or an actor runtime. We observe that these approaches have complementary strengths and weaknesses. In this dissertation, we propose the integration of actor programming models in database systems. In doing so, we lay down a vision for a new class of systems called actor database systems. To explore this vision, this dissertation crystallizes the notion of an actor database system by defining its feature set in light of current application and hardware trends. In order to explore the viability of the outlined vision, a new programming model named Reactors has been designed to enrich classic relational database programming models with logical actor programming constructs. To support the reactor programming model, a high-performance in-memory multi-core OLTP database system named REACTDB has been built.

  8. Quality monitored distributed voting system

    Science.gov (United States)

    Skogmo, David

    1997-01-01

    A quality monitoring system can detect certain system faults and fraud attempts in a distributed voting system. The system uses decoy voters to cast predetermined check ballots. Absent check ballots can indicate system faults. Altered check ballots can indicate attempts at counterfeiting votes. The system can also cast check ballots at predetermined times to provide another check on the distributed voting system.
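
    A minimal sketch of the check-ballot idea described above; the ballot identifiers and votes are invented for illustration:

        # Predetermined decoy ("check") ballots are injected at known times and then
        # looked up in the collected tally. Missing ones suggest a system fault;
        # altered ones suggest an attempt at counterfeiting votes.
        expected_checks = {"chk-001": "YES", "chk-002": "NO", "chk-003": "YES"}

        collected = {"chk-001": "YES", "chk-003": "NO", "v-8841": "YES"}   # from the tally

        for ballot_id, expected_vote in expected_checks.items():
            if ballot_id not in collected:
                print(f"{ballot_id}: MISSING -> possible system fault")
            elif collected[ballot_id] != expected_vote:
                print(f"{ballot_id}: ALTERED -> possible counterfeiting attempt")
            else:
                print(f"{ballot_id}: ok")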

  9. Portuguese food composition database quality management system.

    Science.gov (United States)

    Oliveira, L M; Castanheira, I P; Dantas, M A; Porto, A A; Calhau, M A

    2010-11-01

    The harmonisation of food composition databases (FCDB) has been a recognised need among users, producers and stakeholders of food composition data (FCD). To reach harmonisation of FCDBs among the national compiler partners, the European Food Information Resource (EuroFIR) Network of Excellence set up a series of guidelines and quality requirements, together with recommendations to implement quality management systems (QMS) in FCDBs. The Portuguese National Institute of Health (INSA) is the national FCDB compiler in Portugal and is also a EuroFIR partner. INSA's QMS complies with ISO/IEC (International Organization for Standardisation/International Electrotechnical Commission) 17025 requirements. The purpose of this work is to report on the strategy used and progress made for extending INSA's QMS to the Portuguese FCDB in alignment with EuroFIR guidelines. A stepwise approach was used to extend INSA's QMS to the Portuguese FCDB. The approach included selection of reference standards and guides and the collection of relevant quality documents directly or indirectly related to the compilation process; selection of the adequate quality requirements; assessment of adequacy and level of requirement implementation in the current INSA's QMS; implementation of the selected requirements; and EuroFIR's preassessment 'pilot' auditing. The strategy used to design and implement the extension of INSA's QMS to the Portuguese FCDB is reported in this paper. The QMS elements have been established by consensus. ISO/IEC 17025 management requirements (except 4.5) and 5.2 technical requirements, as well as all EuroFIR requirements (including technical guidelines, FCD compilation flowchart and standard operating procedures), have been selected for implementation. The results indicate that the quality management requirements of ISO/IEC 17025 in place in INSA fit the needs for document control, audits, contract review, non-conformity work and corrective actions, and users' (customers

  10. Study on parallel and distributed management of RS data based on spatial database

    Science.gov (United States)

    Chen, Yingbiao; Qian, Qinglan; Wu, Hongqiao; Liu, Shijin

    2009-10-01

    With the rapid development of current earth-observing technology, RS image data storage, management and information publication have become a bottleneck for its application and popularization. There are two prominent problems in RS image data storage and management systems. First, a background server can hardly handle the heavy processing of the great volume of RS data stored at different nodes in a distributed environment; a heavy burden is placed on the background server. Second, there is no unique, standard and rational organization of multi-sensor RS data for its storage and management, and much information is lost or not included at storage time. Facing these two problems, this paper puts forward a framework for a parallel and distributed RS image data management and storage system. The system aims at an RS data information system based on a parallel background server and a distributed data management system. Toward these two goals, this paper studies the following key techniques and draws some instructive conclusions. The paper puts forward a solid index of "Pyramid, Block, Layer, Epoch" according to the properties of RS image data. With this solid index mechanism, a rational organization of multi-sensor RS image data of different resolutions, areas, bands and periods is achieved. For data storage, RS data are not divided into binary large objects stored in a conventional relational database system; instead, they are reconstructed through the above solid index mechanism, and a logical image database for the RS image data files is constructed. In system architecture, the paper sets up a framework based on a parallel server of several commodity computers. Under this framework, the background process is divided into two parts, the common web process and the parallel process.
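
    One possible way to realize the "Pyramid, Block, Layer, Epoch" solid index as a lookup key is sketched below; the key layout, node names and file paths are assumptions for illustration, not the paper's actual encoding:

        from collections import namedtuple

        # One possible encoding: pyramid = resolution level, block = tile row/column,
        # layer = spectral band, epoch = acquisition period.
        TileKey = namedtuple("TileKey", "pyramid block_row block_col layer epoch")

        def make_key(level, row, col, band, period):
            return TileKey(level, row, col, band, period)

        # A logical image database maps each key to the node/file holding that tile.
        tile_index = {
            make_key(3, 120, 87, "B4", "2008-07"): "node02:/rs/l3/120_87_B4_200807.img",
            make_key(3, 120, 88, "B4", "2008-07"): "node05:/rs/l3/120_88_B4_200807.img",
        }

        def lookup(level, row, col, band, period):
            return tile_index.get(make_key(level, row, col, band, period))

        print(lookup(3, 120, 87, "B4", "2008-07"))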

  11. Extended functions of the database machine FREND for interactive systems

    International Nuclear Information System (INIS)

    Hikita, S.; Kawakami, S.; Sano, K.

    1984-01-01

    Well-designed visual interfaces encourage non-expert users to use relational database systems. In systems such as office automation or engineering database systems, non-expert users interactively access the database from visual terminals. Depending on the situation, some users may want exclusive use of the database while other users share it. Because such jobs take a long time to complete, concurrency control must be well designed to enhance concurrency. The extended concurrency control method of FREND is presented in this paper. The authors assume that systems are composed of workstations, a local area network and the database machine FREND. The paper also stresses that the workstations and FREND must cooperate to complete concurrency control for interactive applications

  12. Teaching Database Management System Use in a Library School Curriculum.

    Science.gov (United States)

    Cooper, Michael D.

    1985-01-01

    Description of database management systems course being taught to students at School of Library and Information Studies, University of California, Berkeley, notes course structure, assignments, and course evaluation. Approaches to teaching concepts of three types of database systems are discussed and systems used by students in the course are…

  13. Analysis of Cloud-Based Database Systems

    Science.gov (United States)

    2015-06-01

    Azure [17], Amazon EC2 [18], Rackspace, or others. The user can either upload or utilize a virtual machine template image for their database or they... [17] Microsoft (2015). What is Microsoft Azure [Online]. Available: http://azure.microsoft.com/en-us/overview/what-is-azure/ [18] Amazon (2015). Amazon EC2 [Online

  14. Fossil-Fuel CO2 Emissions Database and Exploration System

    Science.gov (United States)

    Krassovski, M.; Boden, T.; Andres, R. J.; Blasing, T. J.

    2012-12-01

    tabular, national, mass-emissions data and distribute them spatially on a one degree latitude by one degree longitude grid. The within-country spatial distribution is achieved through a fixed population distribution as reported in Andres et al. (1996). This presentation introduces the newly built database and web interface, and reflects the present state and functionality of the Fossil-Fuel CO2 Emissions Database and Exploration System as well as future plans for expansion.

  15. Natural Language Interfaces to Database Systems

    Science.gov (United States)

    1988-10-01

    ...them. They know something the human doesn't know, or he once knew and forgot. Humans have limited ability to store vast specific detail about entities...

  16. Developing of impact and fatigue property test database system

    International Nuclear Information System (INIS)

    Park, S. J.; Jun, I.; Kim, D. H.; Ryu, W. S.

    2003-01-01

    The impact and fatigue characteristics database systems were constructed using data produced from impact and fatigue tests, and were designed to share data and programs with the tensile characteristics database constructed in 2001 and with other characteristics databases to be constructed in the future. Basic data can easily be retrieved from the impact and fatigue characteristics database systems when a new experiment is being prepared, and higher-quality results can be produced by comparison with previous data. Careful analysis and design are required to construct a database that can meet customers' various requirements. This thesis describes the analysis, design and development of the impact and fatigue characteristics database systems, which were developed as web applications using the JSP (Java Server Pages) tool

  17. Developing of corrosion and creep property test database system

    International Nuclear Information System (INIS)

    Park, S. J.; Jun, I.; Kim, J. S.; Ryu, W. S.

    2004-01-01

    The corrosion and creep characteristics database systems were constructed using data produced from corrosion and creep tests, and were designed to share data and programs with the tensile, impact and fatigue characteristics databases constructed since 2001 and with other characteristics databases to be constructed in the future. Basic data can easily be retrieved from the corrosion and creep characteristics database systems when a new experiment is being prepared, and higher-quality results can be produced by comparison with previous test results. Careful analysis and design are required to construct a database that can meet customers' various requirements. This thesis describes the analysis, design and development of the corrosion and creep characteristics database systems, which were developed as web applications using the JSP (Java Server Pages) tool

  18. Distribution System Pricing with Distributed Energy Resources

    Energy Technology Data Exchange (ETDEWEB)

    Hledik, Ryan [The Brattle Group, Cambridge, MA (United States); Lazar, Jim [The Regulatory Assistance Project, Montpelier, VT (United States); Schwartz, Lisa [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2017-08-16

    Technological changes in the electric utility industry bring tremendous opportunities and significant challenges. Customers are installing clean sources of on-site generation such as rooftop solar photovoltaic (PV) systems. At the same time, smart appliances and control systems that can communicate with the grid are entering the retail market. Among the opportunities these changes create are a cleaner and more diverse power system, the ability to improve system reliability and system resilience, and the potential for lower total costs. Challenges include integrating these new resources in a way that maintains system reliability, provides an equitable sharing of system costs, and avoids unbalanced impacts on different groups of customers, including those who install distributed energy resources (DERs) and low-income households who may be the least able to afford the transition.

  19. New database for improving virtual system “body-dress”

    Science.gov (United States)

    Yan, J. Q.; Zhang, S. C.; Kuzmichev, V. E.; Adolphe, D. C.

    2017-10-01

    The aim of this study is to develop a new database of algorithms and relations between dress fit, fabric mechanical properties and pattern block construction, in order to improve the realism of the virtual system “body-dress”. In virtual simulation, the system “body-clothing” sometimes shows results distinct from reality, especially when important changes in pattern block and fabrics are involved. In this research, to enhance the simulation process, several fit parameters were proposed: bottom height of the dress, angle of the front center contours, and the air volume and its distribution between dress and dummy. Measurements were made and optimized with a ruler, a camera, 3D body scanner image processing software and 3D modeling software. In the meantime, pattern block indexes were measured and fabric properties were tested by KES. Finally, the correlation and linear regression equations between the indexes of fabric properties, pattern blocks and fit parameters were investigated. In this manner, the new database could be incorporated into the programming modules of virtual design for more realistic results.

  20. Enabling On-Demand Database Computing with MIT SuperCloud Database Management System

    Science.gov (United States)

    2015-09-15

    databases: PostgreSQL (SQL), Accumulo (NoSQL), and SciDB (NewSQL). In addition, a common interface is provided to these different databases via... cluster has its own data storage. It uses the disks in the node directly for storage of its data. It also uses PostgreSQL for its System Catalog for... storage of metadata, such as table definitions and installed plug-ins. An operational PostgreSQL database is a prerequisite to running SciDB.

  1. A user's manual for managing database system of tensile property

    International Nuclear Information System (INIS)

    Ryu, Woo Seok; Park, S. J.; Kim, D. H.; Jun, I.

    2003-06-01

    This manual is written for the management and maintenance of the tensile database system that manages tensile property test data. The database, built from data produced by tensile property tests, increases the usefulness of the test results. Basic data can easily be retrieved from the database when a new experiment is being prepared, and better results can be produced by comparison with previous data. Developing the database requires careful analysis and design of the application so that it can meet customers' various requirements. The tensile database system was developed as a web application using Java, PL/SQL and the JSP (Java Server Pages) tool

  2. Formalization of Database Systems -- and a Formal Definition of IMS

    DEFF Research Database (Denmark)

    Bjørner, Dines; Løvengreen, Hans Henrik

    1982-01-01

    Drawing upon an analogy between Programming Language Systems and Database Systems we outline the requirements that architectural specifications of database systems must fulfil, and argue that only formal, mathematical definitions may satisfy these. Then we illustrate some aspects and touch upon some uses of formal definitions of data models and database management systems. A formal model of IMS will carry this discussion. Finally we survey some of the existing literature on formal definitions of database systems. The emphasis will be on constructive definitions in the denotational semantics style of the VDM: Vienna Development Method. The role of formal definitions in international standardisation efforts is briefly mentioned.

  4. plantsUPS: a database of plants' Ubiquitin Proteasome System

    Directory of Open Access Journals (Sweden)

    Su Zhen

    2009-05-01

    Full Text Available Abstract Background The ubiquitin 26S/proteasome system (UPS), a serial cascade process of protein ubiquitination and degradation, is the last step for most cellular proteins. There are many genes involved in this system, but they are not identified in many species. The accumulating availability of genomic sequence data is generating more demands in data management and analysis. Genomics data of plants such as Populus trichocarpa, Medicago truncatula, Glycine max and others are now publicly accessible. It is time to integrate information on classes of genes for complex protein systems such as UPS. Results We developed a database of higher plants' UPS, named 'plantsUPS'. Both automated search and manual curation were performed in identifying candidate genes. Extensive annotations referring to each gene were generated, including basic gene characterization, protein features, GO (gene ontology) assignment, microarray probe set annotation and expression data, as well as cross-links among different organisms. A chromosome distribution map, multi-sequence alignment, and phylogenetic trees for each species or gene family were also created. A user-friendly web interface and regular updates make plantsUPS valuable to researchers in related fields. Conclusion The plantsUPS enables the exploration and comparative analysis of UPS in higher plants. It now archives > 8000 genes from seven plant species distributed in 11 UPS-involved gene families. The plantsUPS is freely available now to all users at http://bioinformatics.cau.edu.cn/plantsUPS.

  5. TRENDS: The aeronautical post-test database management system

    Science.gov (United States)

    Bjorkman, W. S.; Bondi, M. J.

    1990-01-01

    TRENDS, an engineering-test database operating system developed by NASA to support rotorcraft flight tests, is described. Capabilities and characteristics of the system are presented, with examples of its use in recalling and analyzing rotorcraft flight-test data from a TRENDS database. The importance of system user-friendliness in gaining users' acceptance is stressed, as is the importance of integrating supporting narrative data with numerical data in engineering-test databases. Considerations relevant to the creation and maintenance of flight-test databases are discussed and TRENDS' solutions to database management problems are described. Requirements, constraints, and other considerations which led to the system's configuration are discussed and some of the lessons learned during TRENDS' development are presented. Potential applications of TRENDS to a wide range of aeronautical and other engineering tests are identified.

  6. Online-Expert: An Expert System for Online Database Selection.

    Science.gov (United States)

    Zahir, Sajjad; Chang, Chew Lik

    1992-01-01

    Describes the design and development of a prototype expert system called ONLINE-EXPERT that helps users select online databases and vendors that meet users' needs. Search strategies are discussed; knowledge acquisition and knowledge bases are described; and the Analytic Hierarchy Process (AHP), a decision analysis technique that ranks databases,…

  7. A59 Drum Activity database (DRUMAC): system documentation

    International Nuclear Information System (INIS)

    Keel, Alan.

    1993-01-01

    This paper sets out the requirements, database design, software module designs and test plans for DRUMAC (the Active handling Building Drum Activity Database) - a computer-based system to record the radiological inventory for LLW/ILW drums dispatched from the Active Handling Building. (author)

  8. System factors influencing utilisation of Research4Life databases by ...

    African Journals Online (AJOL)

    This is a comprehensive investigation of the influence of system factors on utilisation of Research4Life databases. It is part of a doctoral dissertation. Research4Life databases are new innovative technologies being investigated in a new context – utilisation by NARIs scientists for research. The study adopted the descriptive ...

  9. An Architecture for Nested Transaction Support on Standard Database Systems

    NARCIS (Netherlands)

    Boertjes, E.M.; Grefen, P.W.P.J.; Vonk, J.; Apers, Peter M.G.

    Many applications dealing with complex processes require database support for nested transactions. Current commercial database systems lack this kind of support, offering flat, non-nested transactions only. This paper presents a three-layer architecture for implementing nested transaction support on
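
    The record is truncated, but one common way to layer nested (sub)transaction behaviour on top of a flat-transaction engine is with savepoints. A minimal sketch using SQLite; this illustrates the general idea only, not the paper's three-layer architecture:

        import sqlite3

        con = sqlite3.connect(":memory:", isolation_level=None)   # manual transaction control
        con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, item TEXT)")

        con.execute("BEGIN")                        # outer (top-level) transaction
        con.execute("INSERT INTO orders (item) VALUES ('keyboard')")

        con.execute("SAVEPOINT sub1")               # nested sub-transaction starts
        try:
            con.execute("INSERT INTO orders (item) VALUES ('mouse')")
            raise RuntimeError("simulated failure inside the sub-transaction")
        except RuntimeError:
            con.execute("ROLLBACK TO SAVEPOINT sub1")   # undo only the child's work
        con.execute("RELEASE SAVEPOINT sub1")

        con.execute("COMMIT")                       # parent commits its own work
        print(con.execute("SELECT item FROM orders").fetchall())   # [('keyboard',)]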

  10. Distribution system modeling and analysis

    CERN Document Server

    Kersting, William H

    2001-01-01

    For decades, distribution engineers did not have the sophisticated tools developed for analyzing transmission systems-often they had only their instincts. Things have changed, and we now have computer programs that allow engineers to simulate, analyze, and optimize distribution systems. Powerful as these programs are, however, without a real understanding of the operating characteristics of a distribution system, engineers using the programs can easily make serious errors in their designs and operating procedures. Distribution System Modeling and Analysis helps prevent those errors. It gives readers a basic understanding of the modeling and operating characteristics of the major components of a distribution system. One by one, the author develops and analyzes each component as a stand-alone element, then puts them all together to analyze a distribution system comprising the various shunt and series devices for power-flow and short-circuit studies. He includes the derivation of all models and includes many num...

  11. A Grid Architecture for Manufacturing Database System

    Directory of Open Access Journals (Sweden)

    Laurentiu CIOVICĂ

    2011-06-01

    Full Text Available Before the Enterprise Resource Planning concept, business functions within enterprises were supported by small, isolated applications, most of them developed internally. Yet today ERP platforms are not by themselves the answer to all of an organization's needs, especially in times of differentiated and diversified demands among end customers. ERP platforms were integrated with specialized systems for the management of clients (Customer Relationship Management) and vendors (Supplier Relationship Management). They were integrated with Manufacturing Execution Systems for better planning and control of production lines. In order to offer real-time, efficient answers to the management level, ERP systems were integrated with Business Intelligence systems. This paper analyses the advantages of grid computing at this level of integration, communication and interoperability between complex specialized informatics systems, with a focus on the system architecture and database systems.

  12. Performance analysis of different database in new internet mapping system

    Science.gov (United States)

    Yao, Xing; Su, Wei; Gao, Shuai

    2017-03-01

    In the Mapping System of the New Internet, massive numbers of mapping entries between AID and RID need to be stored, added, updated, and deleted. To better handle large volumes of mapping-entry updates and query requests, the Mapping System of the New Internet must use a high-performance database. In this paper, we focus on the performance of three typical databases, Redis, SQLite, and MySQL; the results show that Mapping Systems based on different databases can be adapted to different needs according to the actual situation.
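
    A toy version of the AID-to-RID mapping workload, timed against SQLite (one of the stores compared) and a plain in-process dictionary standing in for a key-value store; Redis and MySQL would need their own client libraries, and the schema and key format here are assumptions:

        import sqlite3, time

        # Toy workload: store and look up AID -> RID mapping entries.
        entries = [(f"aid-{i:06d}", f"rid-{i % 1000:04d}") for i in range(50_000)]

        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE mapping (aid TEXT PRIMARY KEY, rid TEXT)")
        t0 = time.perf_counter()
        con.executemany("INSERT INTO mapping VALUES (?, ?)", entries)
        con.commit()
        t1 = time.perf_counter()
        con.execute("SELECT rid FROM mapping WHERE aid = ?", ("aid-025000",)).fetchone()
        print(f"sqlite insert: {t1 - t0:.3f}s")

        kv = {}                                  # stands in for a key-value store
        t2 = time.perf_counter()
        kv.update(entries)
        t3 = time.perf_counter()
        kv["aid-025000"]
        print(f"dict insert:   {t3 - t2:.3f}s")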

  13. Energy optimization of water distribution system

    Energy Technology Data Exchange (ETDEWEB)

    1993-02-01

    In order to analyze pump operating scenarios for the system with the computer model, information on existing pumping equipment and the distribution system was collected. The information includes the following: component description and design criteria for line booster stations, booster stations with reservoirs, and high lift pumps at the water treatment plants; daily operations data for 1988; annual reports from fiscal year 1987/1988 to fiscal year 1991/1992; and a 1985 calibrated KYPIPE computer model of DWSD's water distribution system which included input data for the maximum hour and average day demands on the system for that year. This information has been used to produce the inventory database of the system and will be used to develop the computer program to analyze the system.

  14. Development of the interconnectivity and enhancement (ICE) module in the Virginia Department of Transportation's Geotechnical Database Management System Framework.

    Science.gov (United States)

    2007-01-01

    An Internet-based, spatiotemporal Geotechnical Database Management System (GDBMS) Framework was implemented at the Virginia Department of Transportation (VDOT) in 2002 to manage geotechnical data using a distributed Geographical Information System (G...

  15. A Multiagent System for Distributed Systems Management

    OpenAIRE

    H. M. Kelash; H. M. Faheem; M. Amoon

    2007-01-01

    The demand for autonomous resource management for distributed systems has increased in recent years. Distributed systems require an efficient and powerful communication mechanism between applications running on different hosts and networks. The use of mobile agent technology to distribute and delegate management tasks promises to overcome the scalability and flexibility limitations of the currently used centralized management approach. This work proposes a multiagent s...

  16. DOE technology information management system database study report

    Energy Technology Data Exchange (ETDEWEB)

    Widing, M.A.; Blodgett, D.W.; Braun, M.D.; Jusko, M.J.; Keisler, J.M.; Love, R.J.; Robinson, G.L. [Argonne National Lab., IL (United States). Decision and Information Sciences Div.

    1994-11-01

    To support the missions of the US Department of Energy (DOE) Special Technologies Program, Argonne National Laboratory is defining the requirements for an automated software system that will search electronic databases on technology. This report examines the work done and results to date. Argonne studied existing commercial and government sources of technology databases in five general areas: on-line services, patent database sources, government sources, aerospace technology sources, and general technology sources. First, it conducted a preliminary investigation of these sources to obtain information on the content, cost, frequency of updates, and other aspects of their databases. The Laboratory then performed detailed examinations of at least one source in each area. On this basis, Argonne recommended which databases should be incorporated in DOE's Technology Information Management System.

  17. An Updating System for the Gridded Population Database of China Based on Remote Sensing, GIS and Spatial Database Technologies

    Science.gov (United States)

    Yang, Xiaohuan; Huang, Yaohuan; Dong, Pinliang; Jiang, Dong; Liu, Honghui

    2009-01-01

    The spatial distribution of population is closely related to land use and land cover (LULC) patterns on both regional and global scales. Population can be redistributed onto geo-referenced square grids according to this relation. In the past decades, various approaches to monitoring LULC using remote sensing and Geographic Information Systems (GIS) have been developed, which makes it possible for efficient updating of geo-referenced population data. A Spatial Population Updating System (SPUS) is developed for updating the gridded population database of China based on remote sensing, GIS and spatial database technologies, with a spatial resolution of 1 km by 1 km. The SPUS can process standard Moderate Resolution Imaging Spectroradiometer (MODIS L1B) data integrated with a Pattern Decomposition Method (PDM) and an LULC-Conversion Model to obtain patterns of land use and land cover, and provide input parameters for a Population Spatialization Model (PSM). The PSM embedded in SPUS is used for generating 1 km by 1 km gridded population data in each population distribution region based on natural and socio-economic variables. Validation results from finer township-level census data of Yishui County suggest that the gridded population database produced by the SPUS is reliable. PMID:22399959
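
    A minimal sketch of the redistribution step: a census total is spread over grid cells in proportion to land-use-derived weights. The real PSM uses natural and socio-economic variables; the weights, cells and total below are invented:

        # cell id -> weight derived from land use / land cover (hypothetical values)
        county_population = 118_000
        cells = {
            (35, 72): 0.9,              # built-up
            (35, 73): 0.6,              # cropland near settlement
            (36, 72): 0.3,              # cropland
            (36, 73): 0.05,             # forest
        }

        total_weight = sum(cells.values())
        gridded = {cell: county_population * w / total_weight for cell, w in cells.items()}

        for cell, pop in gridded.items():
            print(cell, round(pop))     # gridded population preserves the county total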

  18. An Updating System for the Gridded Population Database of China Based on Remote Sensing, GIS and Spatial Database Technologies

    Directory of Open Access Journals (Sweden)

    Xiaohuan Yang

    2009-02-01

    Full Text Available The spatial distribution of population is closely related to land use and land cover (LULC) patterns on both regional and global scales. Population can be redistributed onto geo-referenced square grids according to this relation. In the past decades, various approaches to monitoring LULC using remote sensing and Geographic Information Systems (GIS) have been developed, which makes it possible for efficient updating of geo-referenced population data. A Spatial Population Updating System (SPUS) is developed for updating the gridded population database of China based on remote sensing, GIS and spatial database technologies, with a spatial resolution of 1 km by 1 km. The SPUS can process standard Moderate Resolution Imaging Spectroradiometer (MODIS) L1B data integrated with a Pattern Decomposition Method (PDM) and an LULC-Conversion Model to obtain patterns of land use and land cover, and provide input parameters for a Population Spatialization Model (PSM). The PSM embedded in SPUS is used for generating 1 km by 1 km gridded population data in each population distribution region based on natural and socio-economic variables. Validation results from finer township-level census data of Yishui County suggest that the gridded population database produced by the SPUS is reliable.

  19. Distributed systems status and control

    Science.gov (United States)

    Kreidler, David; Vickers, David

    1990-01-01

    Concepts are investigated for an automated status and control system for a distributed processing environment. System characteristics, data requirements for health assessment, data acquisition methods, system diagnosis methods and control methods were investigated in an attempt to determine the high-level requirements for a system which can be used to assess the health of a distributed processing system and implement control procedures to maintain an accepted level of health for the system. A potential concept for automated status and control includes the use of expert system techniques to assess the health of the system, detect and diagnose faults, and initiate or recommend actions to correct the faults. Therefore, this research included the investigation of methods by which expert systems were developed for real-time environments and distributed systems. The focus is on the features required by real-time expert systems and the tools available to develop real-time expert systems.

  20. Structured Query Translation in Peer to Peer Database Sharing Systems

    Directory of Open Access Journals (Sweden)

    Mehedi Masud

    2009-10-01

    Full Text Available This paper presents a query translation mechanism between heterogeneous peers in Peer to Peer Database Sharing Systems (PDSSs). A PDSS combines a database management system with P2P functionalities. The local databases on peers are called peer databases. In a PDSS, each peer chooses its own data model and schema and maintains data independently without any global coordinator. One of the problems in such a system is translating queries between peers, taking into account both the schema and data heterogeneity. Query translation is the problem of rewriting a query posed in terms of one peer schema to a query in terms of another peer schema. This paper proposes a query translation mechanism between peers that are acquainted in data sharing systems through data-level mappings for sharing data.
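
    A minimal sketch of rewriting a query posed on one peer's schema into another peer's schema through an attribute-level mapping; both schemas and the naive string rewriting are assumptions for illustration (a real PDSS would parse the query):

        # peer A attribute -> peer B attribute (hypothetical schemas)
        mapping = {
            "patients.surname": "person.last_name",
            "patients.dob":     "person.birth_date",
            "patients.ward":    "admission.unit",
        }
        table_map = {"patients": "person JOIN admission USING (person_id)"}

        def translate(query_a):
            # Naive string rewriting, for illustration only.
            query_b = query_a
            for src, dst in mapping.items():
                query_b = query_b.replace(src, dst)
            for src, dst in table_map.items():
                query_b = query_b.replace(f"FROM {src}", f"FROM {dst}")
            return query_b

        q = "SELECT patients.surname FROM patients WHERE patients.ward = 'ICU'"
        print(translate(q))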

  1. Dynamic Action Scheduling in a Parallel Database System

    NARCIS (Netherlands)

    Grefen, P.W.P.J.; Apers, Peter M.G.

    This paper describes a scheduling technique for parallel database systems to obtain high performance, both in terms of response time and throughput. The technique enables both intra- and inter-transaction parallelism while controlling concurrency between transactions correctly. Scheduling is

  2. Improving Recall Using Database Management Systems: A Learning Strategy.

    Science.gov (United States)

    Jonassen, David H.

    1986-01-01

    Describes the use of microcomputer database management systems to facilitate the instructional uses of learning strategies relating to information processing skills, especially recall. Two learning strategies, cross-classification matrixing and node acquisition and integration, are highlighted. (Author/LRW)

  3. Development and trial of the drug interaction database system

    Directory of Open Access Journals (Sweden)

    Virasakdi Chongsuvivatwong

    2003-07-01

    Full Text Available The drug interaction database system was originally developed at Songklanagarind Hospital. Data sets of drugs available in Songklanagarind Hospital comprising standard drug names, trade names, group names, and drug interactions were set up using Microsoft® Access 2000. The computer used was a Pentium III processor running at 450 MHz with 128 MB SDRAM operated by Microsoft® Windows 98. A robust structured query language algorithm was chosen for detecting interactions. The functioning of this database system, including speed and accuracy of detection, was tested at Songklanagarind Hospital and Naratiwatrachanagarind Hospital using hypothetical prescriptions. Its use in determining the incidence of drug interactions was also evaluated using a retrospective prescription data file. This study has shown that the database system correctly detected drug interactions from prescriptions. Speed of detection was approximately 1 to 2 seconds depending on the size of the prescription. The database system was of benefit in determining the incidence rate of drug interactions in a hospital.
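
    A minimal sketch of the SQL-based detection idea: drugs on the same prescription are paired and joined against a pairwise interaction table. The table and column names are assumptions, not the hospital system's actual schema:

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.executescript("""
            CREATE TABLE interaction (drug_a TEXT, drug_b TEXT, severity TEXT);
            CREATE TABLE prescription (rx_id INTEGER, drug TEXT);
            INSERT INTO interaction VALUES ('warfarin', 'aspirin', 'major');
            INSERT INTO interaction VALUES ('simvastatin', 'erythromycin', 'major');
            INSERT INTO prescription VALUES (1, 'warfarin');
            INSERT INTO prescription VALUES (1, 'aspirin');
            INSERT INTO prescription VALUES (1, 'omeprazole');
        """)

        # Pair every two drugs on the same prescription and look the pair up in the
        # interaction table in either order.
        rows = con.execute("""
            SELECT p1.drug, p2.drug, i.severity
            FROM prescription p1
            JOIN prescription p2 ON p1.rx_id = p2.rx_id AND p1.drug < p2.drug
            JOIN interaction i ON (i.drug_a = p1.drug AND i.drug_b = p2.drug)
                               OR (i.drug_a = p2.drug AND i.drug_b = p1.drug)
        """).fetchall()
        print(rows)    # [('aspirin', 'warfarin', 'major')]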

  4. PFTijah: text search in an XML database system

    NARCIS (Netherlands)

    Hiemstra, Djoerd; Rode, H.; van Os, R.; Flokstra, Jan

    2006-01-01

    This paper introduces the PFTijah system, a text search system that is integrated with an XML/XQuery database management system. We present examples of its use, we explain some of the system internals, and discuss plans for future work. PFTijah is part of the open source release of MonetDB/XQuery.

  5. Development of a Relational Database for Learning Management Systems

    Science.gov (United States)

    Deperlioglu, Omer; Sarpkaya, Yilmaz; Ergun, Ertugrul

    2011-01-01

    In today's world, Web-Based Distance Education Systems have a great importance. Web-based Distance Education Systems are usually known as Learning Management Systems (LMS). In this article, a database design, which was developed to create an educational institution as a Learning Management System, is described. In this sense, developed Learning…

  6. An extended database system for handling spatiotemporal data

    Science.gov (United States)

    Yi, Baolin

    2005-11-01

    With the rapid development of geographic information systems (GIS), computer aided design (CAD), mobile computing and multimedia databases, spatiotemporal databases have been the focus of considerable research activity over a significant period. They have become an enabling technology for important applications such as land use, real estate, transportation, environmental information systems and energy resources. In this paper we address research issues in spatiotemporal databases. We first propose an integrated spatiotemporal data model, and then provide a novel three-tier architecture for implementation in which meta tables are extended to handle spatiotemporal data. Finally, experiments confirm the effectiveness of our techniques under realistic settings.

  7. An information integration system for structured documents, Web, and databases

    OpenAIRE

    Morishima, Atsuyuki

    1998-01-01

    Rapid advance in computer network technology has changed the style of computer utilization. Distributed computing resources over world-wide computer networks are available from our local computers. They include powerful computers and a variety of information sources. This change is raising more advanced requirements. Integration of distributed information sources is one of such requirements. In addition to conventional databases, structured documents have been widely used, and have increasing...

  8. The real-time roll-back and recovery of transactions in database systems

    OpenAIRE

    Quantock, David E.

    1989-01-01

    Approved for public release; distribution is unlimited. A modern database transaction may involve a long series of updates, deletions, and insertions of data and a complex mix of these primary database operations. Due to its length and complexity, the transaction requires back-up and recovery procedures. The back-up procedure allows the user to either commit or abort a lengthy and complex transaction without compromising the integrity of the data. The recovery procedure allows the system to ...

  9. Water Treatment Technology - Distribution Systems.

    Science.gov (United States)

    Ross-Harrington, Melinda; Kincaid, G. David

    One of twelve water treatment technology units, this student manual on distribution systems provides instructional materials for six competencies. (The twelve units are designed for a continuing education training course for public water supply operators.) The competencies focus on the following areas: types of pipe for distribution systems, types…

  10. A Functional Framework for Database Management Systems.

    Science.gov (United States)

    1980-02-01

    ...features originated earlier, but came to fruition in the CODASYL systems committee work [CODASYL 1969, 1971]. This work was an attempt at learning the...each other via functions. Objects can be realized only through functions and functions have no meaning without objects. In the algebraic

  11. A Porting Methodology for Parallel Database Systems

    Science.gov (United States)

    1993-09-01

    ...client-server model [Leffler, Fabry, Joy, Lapsley, Miller and Torek, 1987, pp. PS1:8-2 - 8-10]. The client makes the system call, socket()... Joy, William N. and Lapsley, Phil, "An Advanced 4.3BSD Interprocess Communication Tutorial," Integrated Solutions UNIX Programmer's Supplementary

  12. Distribution of the Object Oriented Databases. A Viewpoint of the MVDB Model's Methodology and Architecture

    Directory of Open Access Journals (Sweden)

    Marian Pompiliu CRISTESCU

    2008-01-01

    Full Text Available In databases, much work has been done towards extending models with advanced tools such as view technology, schema evolution support, multiple classification, role modeling and viewpoints. Over the past years, most of the research dealing with the object multiple representation and evolution has proposed to enrich the monolithic vision of the classical object approach in which an object belongs to one hierarchy class. In particular, the integration of the viewpoint mechanism to the conventional object-oriented data model gives it flexibility and allows one to improve the modeling power of objects. The viewpoint paradigm refers to the multiple descriptions, the distribution, and the evolution of object. Also, it can be an undeniable contribution for a distributed design of complex databases. The motivation of this paper is to define an object data model integrating viewpoints in databases and to present a federated database architecture integrating multiple viewpoint sources following a local-as-extended-view data integration approach.

  13. The Database Driven ATLAS Trigger Configuration System

    CERN Document Server

    Martyniuk, Alex; The ATLAS collaboration

    2015-01-01

    This contribution describes the trigger selection configuration system of the ATLAS low- and high-level trigger (HLT) and the upgrades it received in preparation for LHC Run 2. The ATLAS trigger configuration system is responsible for applying the physics selection parameters for the online data taking at both trigger levels and the proper connection of the trigger lines across those levels. Here the low-level trigger consists of the already existing central trigger (CT) and the new Level-1 Topological trigger (L1Topo), which has been added for Run 2. In detail the tasks of the configuration system during the online data taking are Application of the selection criteria, e.g. energy cuts, minimum multiplicities, trigger object correlation, at the three trigger components L1Topo, CT, and HLT On-the-fly, e.g. rate-dependent, generation and application of prescale factors to the CT and HLT to adjust the trigger rates to the data taking conditions, such as falling luminosity or rate spikes in the detector readout ...

  14. A survey of commercial object-oriented database management systems

    Science.gov (United States)

    Atkins, John

    1992-01-01

    The object-oriented data model is the culmination of over thirty years of database research. Initially, database research focused on the need to provide information in a consistent and efficient manner to the business community. Early data models such as the hierarchical model and the network model met the goal of consistent and efficient access to data and were substantial improvements over simple file mechanisms for storing and accessing data. However, these models required highly skilled programmers to provide access to the data. Consequently, in the early 70's E.F. Codd, an IBM research computer scientist, proposed a new data model based on the simple mathematical notion of the relation. This model is known as the Relational Model. In the relational model, data is represented in flat tables (or relations) which have no physical or internal links between them. The simplicity of this model fostered the development of powerful but relatively simple query languages that now made data directly accessible to the general database user. Except for large, multi-user database systems, a database professional was in general no longer necessary. Database professionals found that traditional data in the form of character data, dates, and numeric data were easily represented and managed via the relational model. Commercial relational database management systems proliferated and performance of relational databases improved dramatically. However, there was a growing community of potential database users whose needs were not met by the relational model. These users needed to store data with data types not available in the relational model and required a far richer modelling environment than that provided by the relational model. Indeed, the complexity of the objects to be represented in the model mandated a new approach to database technology. The Object-Oriented Model was the result.

  15. Investigation on Oracle GoldenGate Veridata for Data Consistency in WLCG Distributed Database Environment

    OpenAIRE

    Asko, Anti; Lobato Pardavila, Lorena

    2014-01-01

    Abstract In the distributed database environment, data divergence can be an important problem: if it is not discovered and correctly identified, incorrect data can lead to poor decision making, errors in the service and operational errors. Oracle GoldenGate Veridata is a product to compare two sets of data and identify and report on data that is out of synchronization. IT DB is providing a replication service between databases at CERN and other computer centers worldwide as a par...

  16. Electrical Distribution System Functional Inspection (EDSFI) data base program

    International Nuclear Information System (INIS)

    Gautam, A.

    1993-01-01

    This document describes the organization, installation procedures, and operating instructions for the database computer program containing inspection findings from the US Nuclear Regulatory Commission's (NRC's) Electrical Distribution System Functional Inspections (EDSFIs). The program enables the user to search and sort findings, ascertain trends, and obtain printed reports of the findings. The findings include observations, unresolved issues, or possible deficiencies in the design and implementation of electrical distribution systems in nuclear plants. This database will assist those preparing for electrical inspections, searching for deficiencies in a plant, and determining the corrective actions previously taken for similar deficiencies. This database will be updated as new EDSFIs are completed

  17. World Ocean Database as a dissemination tool for distributed quality controlled ocean profile data

    Science.gov (United States)

    Reagan, J. R.; Boyer, T.; Locarnini, R. A.; Zweng, M.; Paver, C.; Smolyar, I.; Garcia, H. E.; Baranova, O.

    2016-02-01

    The World Ocean Database (WOD) is the largest publicly available uniform format quality controlled database for ocean profile data (temperature, salinity, oxygen, nutrients, carbon variables, biological variables). The WOD is a basis for many oceanographic and climate studies. Climate studies in particular are dependent on high quality data to separate climate change signal from noise. With over 14 million ocean profiles from ship based and autonomous instruments and growing, the task of identifying high quality measurements and applying automatic and manual quality control within WOD is large and ongoing. The International Quality Controlled Oceanographic Database (IQuOD) project aims to make publicly available an internationally agreed upon set of temperature (and eventually salinity) profile data with quality control suitable for climate studies. The IQuOD project will subject data within the WOD (and newly acquired historical data) to rigorous expert quality control following agreed upon standards. The IQuOD dataset will be available through the WOD distribution system with its own set of IQuOD quality flags. Original values, bias corrections, and instrument based uncertainties will be included. This will allow a researcher to request and download a standardized quality dataset for climate research. This will relieve the researcher of the need to perform basic quality control and will allow for comparison of results whereby data and quality control will be a constant rather than a variable. How and in what form IQuOD will be disseminated through WOD and the relation of IQuOD to the overall WOD will be discussed.

  18. Database design for Physical Access Control System for nuclear facilities

    International Nuclear Information System (INIS)

    Sathishkumar, T.; Rao, G. Prabhakara; Arumugam, P.

    2016-01-01

    Highlights: • Database design needs to be optimized and highly efficient for real time operation. • It requires a many-to-many mapping between the Employee table and the Doors table. • This mapping typically contains thousands of records and redundant data. • The proposed novel database design reduces the redundancy and provides abstraction. • This design is incorporated with the access control system developed in-house. - Abstract: An RFID (Radio Frequency IDentification) cum biometric based two-level Access Control System (ACS) was designed and developed for providing access to vital areas of nuclear facilities. The system has both hardware [Access controller] and software components [server application, the database and the web client software]. The proposed database design enables grouping of the employees based on the hierarchy of the organization and grouping of the doors based on Access Zones (AZ). This design also illustrates the mapping between the Employee Groups (EG) and AZ. By following this approach in database design, a higher-level view can be presented to the system administrator, abstracting the inner details of the individual entities and doors. This paper describes the novel approach carried out in designing the database of the ACS.
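
    A minimal sketch of the described grouping: employees belong to Employee Groups, doors belong to Access Zones, and a single EG-to-AZ mapping replaces the per-employee, per-door many-to-many table. The SQL below is an assumed illustration, not the in-house system's actual schema:

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.executescript("""
            CREATE TABLE employee       (emp_id INTEGER PRIMARY KEY, name TEXT, eg_id INTEGER);
            CREATE TABLE employee_group (eg_id INTEGER PRIMARY KEY, label TEXT);
            CREATE TABLE door           (door_id INTEGER PRIMARY KEY, label TEXT, az_id INTEGER);
            CREATE TABLE access_zone    (az_id INTEGER PRIMARY KEY, label TEXT);
            -- one compact mapping between groups and zones instead of a
            -- per-employee / per-door many-to-many table
            CREATE TABLE eg_az          (eg_id INTEGER, az_id INTEGER);

            INSERT INTO employee_group VALUES (10, 'Reactor operations');
            INSERT INTO access_zone    VALUES (20, 'Control room zone');
            INSERT INTO employee VALUES (1, 'A. Kumar', 10);
            INSERT INTO door     VALUES (7, 'Control room east door', 20);
            INSERT INTO eg_az    VALUES (10, 20);
        """)

        # Authorisation check: is this employee's group mapped to this door's zone?
        def is_authorised(emp_id, door_id):
            row = con.execute("""
                SELECT 1 FROM employee e
                JOIN eg_az m ON m.eg_id = e.eg_id
                JOIN door d  ON d.az_id = m.az_id
                WHERE e.emp_id = ? AND d.door_id = ?
            """, (emp_id, door_id)).fetchone()
            return row is not None

        print(is_authorised(1, 7))    # True: the employee's group is mapped to the door's zone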

  19. Database design for Physical Access Control System for nuclear facilities

    Energy Technology Data Exchange (ETDEWEB)

    Sathishkumar, T., E-mail: satishkumart@igcar.gov.in; Rao, G. Prabhakara, E-mail: prg@igcar.gov.in; Arumugam, P., E-mail: aarmu@igcar.gov.in

    2016-08-15

    Highlights: • Database design needs to be optimized and highly efficient for real-time operation. • It requires a many-to-many mapping between the Employee table and the Doors table. • This mapping typically contains thousands of records and redundant data. • The proposed novel database design reduces the redundancy and provides abstraction. • This design is incorporated with the access control system developed in-house. - Abstract: A Radio Frequency IDentification (RFID) cum biometric based two-level Access Control System (ACS) was designed and developed for providing access to vital areas of nuclear facilities. The system has both hardware [access controller] and software [server application, database, and web client] components. The proposed database design enables grouping of the employees based on the hierarchy of the organization and grouping of the doors based on Access Zones (AZ). This design also illustrates the mapping between the Employee Groups (EG) and AZ. By following this approach in database design, a higher-level view can be presented to the system administrator, abstracting the inner details of the individual entities and doors. This paper describes the novel approach carried out in designing the database of the ACS.

  20. A comparison of database systems for XML-type data.

    Science.gov (United States)

    Risse, Judith E; Leunissen, Jack A M

    2010-01-01

    In the field of bioinformatics interchangeable data formats based on XML are widely used. XML-type data is also at the core of most web services. With the increasing amount of data stored in XML comes the need for storing and accessing the data. In this paper we analyse the suitability of different database systems for storing and querying large datasets in general and Medline in particular. All reviewed database systems perform well when tested with small to medium sized datasets, however when the full Medline dataset is queried a large variation in query times is observed. There is not one system that is vastly superior to the others in this comparison and, depending on the database size and the query requirements, different systems are most suitable. The best all-round solution is the Oracle 11g database system using the new binary storage option. Alias-i's Lingpipe is a more lightweight, customizable and sufficiently fast solution. It does however require more initial configuration steps. For data with a changing XML structure Sedna and BaseX as native XML database systems or MySQL with an XML-type column are suitable.

  1. Database system for analysing and managing coiled tubing drilling data

    Science.gov (United States)

    Suh, J.; Choi, Y.; Park, H.; Choe, J.

    2009-05-01

    This study presents a prototype database system for analysing and managing petrophysical data from coiled tubing drilling in the oil and gas industry. The characteristics of coiled tubing drilling data from cores were analyzed and categorized according to the whole drilling process, and data modeling, including object relation and class diagrams, was carried out to design the schema of an effective database system, such as the relationships between tables and the key index fields that create those relationships. The database system, called DrillerGeoDB, consists of 22 tables classified into 4 groups: project information, stratum information, drilling/logging information and operation evaluation information. DrillerGeoDB provides the results of each process in a spreadsheet such as MS Excel by applying various algorithms from logging theory and statistical functions for cost evaluation. This presentation describes the details of the system development and implementation.
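
    The record does not reproduce the DrillerGeoDB schema itself. As a hedged illustration only, the sketch below uses hypothetical table and column names to show how the four table groups (project, stratum, drilling/logging, operation evaluation) could be linked through key index fields of the kind the abstract describes.

```python
import sqlite3

# Minimal illustrative schema sketch; table and column names are hypothetical,
# not the actual 22-table DrillerGeoDB design described in the abstract.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE project (
    project_id   INTEGER PRIMARY KEY,
    name         TEXT,
    operator     TEXT
);
CREATE TABLE stratum (
    stratum_id   INTEGER PRIMARY KEY,
    project_id   INTEGER REFERENCES project(project_id),
    top_depth_m  REAL,
    lithology    TEXT
);
CREATE TABLE drilling_log (
    log_id       INTEGER PRIMARY KEY,
    stratum_id   INTEGER REFERENCES stratum(stratum_id),
    depth_m      REAL,
    rop_m_per_h  REAL,          -- rate of penetration
    gamma_api    REAL           -- example logging measurement
);
CREATE TABLE operation_eval (
    eval_id      INTEGER PRIMARY KEY,
    project_id   INTEGER REFERENCES project(project_id),
    cost_usd     REAL,
    remarks      TEXT
);
""")

# Example query joining the groups: average rate of penetration per lithology
# for one project (returns an empty list here because no rows were inserted).
rows = conn.execute("""
    SELECT s.lithology, AVG(d.rop_m_per_h)
    FROM drilling_log d JOIN stratum s ON d.stratum_id = s.stratum_id
    WHERE s.project_id = ?
    GROUP BY s.lithology
""", (1,)).fetchall()
print(rows)
```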

  2. An Expert System Helps Students Learn Database Design

    Science.gov (United States)

    Post, Gerald V.; Whisenand, Thomas G.

    2005-01-01

    Teaching and learning database design is difficult for both instructors and students. Students need to solve many problems with feedback and corrections. A Web-based specialized expert system was created to enable students to create designs online and receive immediate feedback. An experiment testing the system shows that it significantly enhances…

  3. Multiresource inventories incorporating GIS, GPS, and database management systems

    Science.gov (United States)

    Loukas G. Arvanitis; Balaji Ramachandran; Daniel P. Brackett; Hesham Abd-El Rasol; Xuesong Du

    2000-01-01

    Large-scale natural resource inventories generate enormous data sets. Their effective handling requires a sophisticated database management system. Such a system must be robust enough to efficiently store large amounts of data and flexible enough to allow users to manipulate a wide variety of information. In a pilot project, related to a multiresource inventory of the...

  4. Lbase the database system of the Rijksherbarium/Hortus Botanicus

    NARCIS (Netherlands)

    Welzen, van P.C.; Valen, van R.P.; Valkenburg, J.A.

    1992-01-01

    The database system which is presently implemented in the Rijksherbarium/Hortus Botanicus, called LBASE, will also serve as the future databank for the Flora Malesiana project. The system will contain information about specimens, persons, literature, taxa, and nomenclature. LBASE can serve many

  5. Integration of Information Retrieval and Database Management Systems.

    Science.gov (United States)

    Deogun, Jitender S.; Raghavan, Vijay V.

    1988-01-01

    Discusses the motivation for integrating information retrieval and database management systems, and proposes a probabilistic retrieval model in which records in a file may be composed of attributes (formatted data items) and descriptors (content indicators). The details and resolutions of difficulties involved in integrating such systems are…

  6. A Novel Database Design for Student Information System

    OpenAIRE

    Noraziah Ahmad; Nawsher Khan; Ahmed N.A. Alla; Abul H. Beg

    2010-01-01

    Problem statement: A new system was designed, where necessary and alternative solutions were given to solve the different problems, and the most feasible solution was selected. Approach: This study presents the database design for a student information system. Computerization of a system means changing it from a manual to a computer-based system to automate the work and to provide efficiency, accuracy, timeliness, security and economy. Results: After undertaking an in-depth examination of the Ayub M...

  7. Energy efficient distributed computing systems

    CERN Document Server

    Lee, Young-Choon

    2012-01-01

    The energy consumption issue in distributed computing systems raises various monetary, environmental and system performance concerns. Electricity consumption in the US doubled from 2000 to 2005.  From a financial and environmental standpoint, reducing the consumption of electricity is important, yet these reforms must not lead to performance degradation of the computing systems.  These contradicting constraints create a suite of complex problems that need to be resolved in order to lead to 'greener' distributed computing systems.  This book brings together a group of outsta

  8. Database management in the new GANIL control system

    International Nuclear Information System (INIS)

    Lecorche, E.; Lermine, P.

    1993-01-01

    At the start of the new control system design, a decision was made to manage the huge amount of data by means of a database management system. The first implementations built on the INGRES relational database are described. Real-time and data management domains are shown, and problems induced by Ada/SQL interfacing are briefly discussed. Database management concerns the whole hardware and software configuration for the GANIL pieces of equipment and the alarm system, either for the alarm configuration or for the alarm logs. Another field of application encompasses the beam parameter archiving as a function of the various kinds of beams accelerated at GANIL (ion species, energies, charge states). (author) 3 refs., 4 figs

  9. Indexed University presses: overlap and geographical distribution in five book assessment databases

    Energy Technology Data Exchange (ETDEWEB)

    Mañana-Rodriguez, J.; Gimenez-Toledo, E

    2016-07-01

    Scholarly books have been a periphery among the objects of study of bibliometrics until recent developments provided tools for assessment purposes. Among scholarly book publishers, University Presses (UPs hereinafter), subject to specific ends and constraints in their publishing activity, might also remain on a second-level periphery despite their relevance as scholarly book publishers. In this study the authors analyze the absolute and relative presence, overlap and uniquely-indexed cases of 503 UPs by country, among five assessment-oriented databases containing data on scholarly book publishers: Book Citation Index, Scopus, Scholarly Publishers Indicators (Spain), the lists of publishers from the Norwegian System (CRISTIN) and the lists of publishers from the Finnish System (JUFO). The comparison between commercial databases and public, national databases points towards a differential pattern: prestigious UPs in the English-speaking world represent larger shares and there is a higher overall percentage of UPs in the commercial databases, while the richness and diversity is higher in the case of national databases. Explicit or de facto biases towards production in English by commercial databases, as well as diverse indexation criteria, might explain the differences observed. The analysis of the presence of UPs in different numbers of databases by country also provides a general picture of the average degree of diffusion of UPs among information systems. The analysis of 'endemic' UPs, those indexed only in one of the five databases, points to strongly different compositions of UPs in commercial and non-commercial databases. A combination of commercial and non-commercial databases seems to be the optimal option for assessment purposes, while the validity and desirability of the ongoing debate on the role of UPs can also be concluded. (Author)

  10. Generating Shifting Workloads to Benchmark Adaptability in Relational Database Systems

    Science.gov (United States)

    Rabl, Tilmann; Lang, Andreas; Hackl, Thomas; Sick, Bernhard; Kosch, Harald

    A large body of research concerns the adaptability of database systems. Many commercial systems already contain autonomic processes that adapt configurations as well as data structures and data organization. Yet there is virtually no possibility for a just measurement of the quality of such optimizations. While standard benchmarks have been developed that simulate real-world database applications very precisely, none of them considers variations in workloads produced by human factors. Today's benchmarks test the performance of database systems by measuring peak performance on homogeneous request streams. Nevertheless, in systems with user interaction access patterns are constantly shifting. We present a benchmark that simulates a web information system with interaction of large user groups. It is based on the analysis of a real online eLearning management system with 15,000 users. The benchmark considers the temporal dependency of user interaction. The main focus is to measure the adaptability of a database management system according to shifting workloads. We will give details on our design approach that uses sophisticated pattern analysis and data mining techniques.
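
    As a rough illustration of the idea of a workload whose composition shifts over time, the following sketch generates a request stream whose mix of query classes drifts. The query classes and the drift model are invented for illustration; they are not the benchmark defined in the paper.

```python
import random
from collections import Counter

# Hypothetical query classes for a web information system (eLearning-like).
QUERY_CLASSES = ["browse_course", "search_forum", "submit_quiz", "admin_report"]

def shifting_workload(n_requests, drift=0.01, seed=42):
    """Yield a request stream whose class mix slowly drifts over time."""
    rng = random.Random(seed)
    weights = [1.0] * len(QUERY_CLASSES)
    for _ in range(n_requests):
        # Perturb one class weight per request so the dominant workload drifts.
        i = rng.randrange(len(weights))
        weights[i] = max(0.05, weights[i] + rng.uniform(-drift, drift) * 10)
        yield rng.choices(QUERY_CLASSES, weights=weights, k=1)[0]

if __name__ == "__main__":
    stream = list(shifting_workload(10_000))
    print(Counter(stream[:1000]))   # early mix
    print(Counter(stream[-1000:]))  # late mix, typically different
```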

  11. A distributed clinical decision support system architecture

    Directory of Open Access Journals (Sweden)

    Shaker H. El-Sappagh

    2014-01-01

    Full Text Available This paper proposes an open and distributed clinical decision support system architecture. This technical architecture takes advantage of Electronic Health Record (EHR), data mining techniques, clinical databases, domain expert knowledge bases, available technologies and standards to provide decision-making support for healthcare professionals. The architecture will work extremely well in distributed EHR environments in which each hospital has its own local EHR, and it satisfies the compatibility, interoperability and scalability objectives of an EHR. The system will also have a set of distributed knowledge bases. Each knowledge base will be specialized in a specific domain (i.e., heart disease), and the model achieves cooperation, integration and interoperability between these knowledge bases. Moreover, the model ensures that all knowledge bases are up-to-date by connecting data mining engines to each local knowledge base. These data mining engines continuously mine EHR databases to extract the most recent knowledge, to standardize it and to add it to the knowledge bases. This framework is expected to improve the quality of healthcare, reducing medical errors and guaranteeing the safety of patients by helping clinicians to make correct, accurate, knowledgeable and timely decisions.

  12. Coordination control of distributed systems

    CERN Document Server

    Villa, Tiziano

    2015-01-01

    This book describes how control of distributed systems can be advanced by an integration of control, communication, and computation. The global control objectives are met by judicious combinations of local and nonlocal observations taking advantage of various forms of communication exchanges between distributed controllers. Control architectures are considered according to  increasing degrees of cooperation of local controllers:  fully distributed or decentralized control,  control with communication between controllers,  coordination control, and multilevel control.  The book covers also topics bridging computer science, communication, and control, like communication for control of networks, average consensus for distributed systems, and modeling and verification of discrete and of hybrid systems. Examples and case studies are introduced in the first part of the text and developed throughout the book. They include: control of underwater vehicles, automated-guided vehicles on a container terminal, contro...

  13. A data-base system for seepage characteristics of geomaterials

    Energy Technology Data Exchange (ETDEWEB)

    Aydan, O. [Tokai University, Shimizu (Japan)]

    1998-07-01

    Fluid flow through geomaterials is of great concern as it plays an important role in many engineering applications such as dams, underground storage of oil and water, nuclear waste disposal, etc. Therefore, information on seepage characteristics of geomaterials is necessary for assessing fluid transport and its mechanical effect. In this paper, the author describes an integrated data-base system for seepage characteristics of geomaterials. This data-base system is used to study interrelations between permeability and pore diameter, porosity, discontinuity aperture, RQD and confining pressure, as well as to check the validity of theoretical relations. 5 refs.

  14. Thermodynamic database for the Co-Pr system

    Directory of Open Access Journals (Sweden)

    S.H. Zhou

    2016-03-01

    Full Text Available In this article, we describe data on (1) compositions for both as-cast and heat-treated specimens, summarized in Table 1; (2) the determined enthalpy of mixing of the liquid phase, listed in Table 2; and (3) the thermodynamic database of the Co-Pr system in TDB format for the research article entitled Chemical partitioning for the Co-Pr system: First-principles, experiments and energetic calculations to investigate the hard magnetic phase W. Keywords: Thermodynamic database of Co-Pr, Solution calorimeter measurement, Phase diagram Co-Pr

  15. Database Replication

    CERN Document Server

    Kemme, Bettina

    2010-01-01

    Database replication is widely used for fault-tolerance, scalability and performance. The failure of one database replica does not stop the system from working, as available replicas can take over the tasks of the failed replica. Scalability can be achieved by distributing the load across all replicas, and adding new replicas should the load increase. Finally, database replication can provide fast local access, even if clients are geographically distributed, provided data copies are located close to them. Despite its advantages, replication is not a straightforward technique to apply, and

  16. RBAC administration in distributed systems

    NARCIS (Netherlands)

    Dekker, M.A.C.; Crampton, J.; Etalle, Sandro; Li, N.

    Large and distributed access control systems are increasingly common, for example in health care. In such settings, access control policies may become very complex, thus complicating correct and efficient administration of the access control system. Despite being one of the most widely used access

  17. Automated Power-Distribution System

    Science.gov (United States)

    Ashworth, Barry; Riedesel, Joel; Myers, Chris; Miller, William; Jones, Ellen F.; Freeman, Kenneth; Walsh, Richard; Walls, Bryan K.; Weeks, David J.; Bechtel, Robert T.

    1992-01-01

    Autonomous power-distribution system includes power-control equipment and automation equipment. System automatically schedules connection of power to loads and reconfigures itself when it detects fault. Potential terrestrial applications include optimization of consumption of power in homes, power supplies for autonomous land vehicles and vessels, and power supplies for automated industrial processes.

  18. Database versioning and its implementation in geoscience information systems

    Science.gov (United States)

    Le, Hai Ha; Schaeben, Helmut; Jasper, Heinrich; Görz, Ines

    2014-09-01

    Many different versions of geoscience data concurrently exist in a database for different geological paradigms, source data, and authors. The aim of this study is to manage these versions in a database management system. Our data include geological surfaces, which are triangulated meshes in this study. Unlike revision/version/source control systems, our data are stored in a central database without local copies. The main contributions of this study include (1) a data model with input/output/manage functions, (2) a mesh comparison function, (3) a version merging strategy, and (4) the implementation of all of the concepts in PostgreSQL and gOcad. The software has been tested using synthetic surfaces and a simple tectonic model of a deformed stratigraphic horizon.
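
    The following is a minimal sketch of the kind of mesh comparison and merge step the abstract mentions, under the simplifying assumption that a surface version can be reduced to a set of triangles. The actual system works on gOcad meshes stored in PostgreSQL and is considerably richer than this illustration.

```python
# A surface version is modelled here as a collection of triangles, each a tuple
# of three vertex coordinates. Names and the merge rule are illustrative only.

def mesh_diff(old_mesh, new_mesh):
    """Return (added, removed) triangles between two surface versions."""
    old_set, new_set = set(old_mesh), set(new_mesh)
    return new_set - old_set, old_set - new_set

def merge_versions(base, version_a, version_b):
    """Naive three-way merge: apply both versions' additions and removals to base."""
    added_a, removed_a = mesh_diff(base, version_a)
    added_b, removed_b = mesh_diff(base, version_b)
    return (set(base) - removed_a - removed_b) | added_a | added_b

if __name__ == "__main__":
    t1 = ((0, 0, 0), (1, 0, 0), (0, 1, 0))
    t2 = ((1, 0, 0), (1, 1, 0), (0, 1, 0))
    t3 = ((0, 0, 1), (1, 0, 1), (0, 1, 1))
    base = [t1, t2]
    # Version A adds t3, version B removes t1; the merge keeps t2 and t3.
    print(merge_versions(base, [t1, t2, t3], [t2]))
```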

  19. CEBAF Distributed Data Acquisition System

    CERN Document Server

    Allison, Trent

    2005-01-01

    There are thousands of signals distributed throughout Jefferson Lab's Continuous Electron Beam Accelerator Facility (CEBAF) that are useful for troubleshooting and identifying instabilities. Many of these signals are only available locally or monitored by systems with small bandwidths that cannot identify fast transients. The Distributed Data Acquisition (Dist DAQ) system will sample and record these signals simultaneously at rates up to 40 Msps. Its primary function will be to provide waveform records from signals throughout CEBAF to the Experimental Physics and Industrial Control System (EPICS). The waveforms will be collected after the occurrence of an event trigger. These triggers will be derived from signals such as periodic timers or accelerator faults. The waveform data can then be processed to quickly identify beam transport issues, thus reducing down time and increasing CEBAF performance. The Dist DAQ system will be comprised of multiple standalone chassis distributed throughout CEBAF. They will be i...

  20. Systems Measures of Water Distribution System Resilience

    Energy Technology Data Exchange (ETDEWEB)

    Klise, Katherine A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Murray, Regan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Walker, La Tonya Nicole [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-01-01

    Resilience is a concept that is being used increasingly to refer to the capacity of infrastructure systems to be prepared for and able to respond effectively and rapidly to hazardous events. In Section 2 of this report, drinking water hazards, resilience literature, and available resilience tools are presented. Broader definitions, attributes and methods for measuring resilience are presented in Section 3. In Section 4, quantitative systems performance measures for water distribution systems are presented. Finally, in Section 5, the performance measures and their relevance to measuring the resilience of water systems to hazards is discussed along with needed improvements to water distribution system modeling tools.

  1. YUCSA: A CLIPS expert database system to monitor academic performance

    Science.gov (United States)

    Toptsis, Anestis A.; Ho, Frankie; Leindekar, Milton; Foon, Debra Low; Carbonaro, Mike

    1991-01-01

    The York University CLIPS Student Administrator (YUCSA), an expert database system implemented in C Language Integrated Processing System (CLIPS), for monitoring the academic performance of undergraduate students at York University, is discussed. The expert system component in the system has already been implemented for two major departments, and it is under testing and enhancement for more departments. Also, more elaborate user interfaces are under development. We describe the design and implementation of the system, problems encountered, and immediate future plans. The system has excellent maintainability and it is very efficient, taking less than one minute to complete an assessment of one student.

  2. Design of database management system for 60Co container inspection system

    International Nuclear Information System (INIS)

    Liu Jinhui; Wu Zhifang

    2007-01-01

    The functions of the database management system have been designed according to the features of the cobalt-60 container inspection system, and the corresponding software has been constructed. Database querying and searching are included in the software. The database operation program is built on Microsoft SQL Server and Visual C++ under Windows 2000. The software realizes database querying, image and graph display, statistics, report forms and their printing, interface design, etc. The software is powerful and flexible for operation and information querying, and it has been successfully used in the real database management system of the cobalt-60 container inspection system. (authors)

  3. Should the Air Force Personnel Data System Use Database Machines?

    Science.gov (United States)

    1987-04-01

    concludes that the machine, the Teradata DBC/1012, has the potential to improve the performance of the PDS, and should be considered for purchase. [The remainder of the indexed excerpt is table-of-contents and figure-list residue referring to the prices, configuration, and performance of Teradata DBC/1012 systems and to a hierarchical database model.]

  4. Integrity Control in Relational Database Systems - An Overview

    NARCIS (Netherlands)

    Grefen, P.W.P.J.; Apers, Peter M.G.

    This paper gives an overview of research regarding integrity control or integrity constraint handling in relational database management systems. The topic of constraint handling is discussed from two points of view. First, constraint handling is discussed by identifying a number of important

  5. Windshear database for forward-looking systems certification

    Science.gov (United States)

    Switzer, G. F.; Proctor, F. H.; Hinton, D. A.; Aanstoos, J. V.

    1993-01-01

    This document contains a description of a comprehensive database that is to be used for certification testing of airborne forward-look windshear detection systems. The database was developed by NASA Langley Research Center, at the request of the Federal Aviation Administration (FAA), to support the industry initiative to certify and produce forward-look windshear detection equipment. The database contains high resolution, three dimensional fields for meteorological variables that may be sensed by forward-looking systems. The database is made up of seven case studies which have been generated by the Terminal Area Simulation System, a state-of-the-art numerical system for the realistic modeling of windshear phenomena. The selected cases represent a wide spectrum of windshear events. General descriptions and figures from each of the case studies are included, as well as equations for F-factor, radar-reflectivity factor, and rainfall rate. The document also describes scenarios and paths through the data sets, jointly developed by NASA and the FAA, to meet FAA certification testing objectives. Instructions for reading and verifying the data from tape are included.
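
    The certification equations are given in the document itself; as a hedged illustration only, the sketch below computes the commonly cited form of the windshear hazard index (F-factor) from the rate of change of horizontal wind, the vertical wind, and airspeed. Treat it as an approximation for orientation, not the certified definition.

```python
# Illustrative F-factor computation; the exact certified formulation is in the
# referenced database documentation, so this is an assumption-based sketch.
G = 9.81  # gravitational acceleration, m/s^2

def f_factor(dWx_dt, w_vertical, airspeed):
    """F = (dWx/dt)/g - w/V.

    dWx_dt     : rate of change of horizontal wind along the flight path (m/s^2)
    w_vertical : vertical wind component, positive upward (m/s)
    airspeed   : true airspeed (m/s)
    """
    return dWx_dt / G - w_vertical / airspeed

# Example: a 3 m/s^2 headwind-to-tailwind change with a 6 m/s downdraft at 80 m/s
# airspeed yields roughly 0.38, a value associated with severe shear.
print(round(f_factor(3.0, -6.0, 80.0), 3))
```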

  6. Museum Information System of Serbia recent approach to database modeling

    OpenAIRE

    Gavrilović, Goran

    2007-01-01

    The paper offers an illustration of the main parameters for museum database design (a case study of the Integrated Museum Information System of Serbia). The simple case of museum data model development and implementation was described. The main aim is to present the advantages of the ORM (Object Role Modeling) methodology, using Microsoft Visio as suitable software support in the formalization of museum business rules.

  7. Use of the South African Food Composition Database System ...

    African Journals Online (AJOL)

    Use of the South African Food Composition Database System (SAFOODS) and its products in assessing dietary intake data: Part II. ... It also enables the user to export the data to MS Excel for further analysis and for importing the data into other statistical packages. Coding for the type and quantity of food consumed is ...

  8. Maintaining consistency in distributed systems

    Science.gov (United States)

    Birman, Kenneth P.

    1991-01-01

    In systems designed as assemblies of independently developed components, concurrent access to data or data structures normally arises within individual programs, and is controlled using mutual exclusion constructs, such as semaphores and monitors. Where data is persistent and/or sets of operation are related to one another, transactions or linearizability may be more appropriate. Systems that incorporate cooperative styles of distributed execution often replicate or distribute data within groups of components. In these cases, group oriented consistency properties must be maintained, and tools based on the virtual synchrony execution model greatly simplify the task confronting an application developer. All three styles of distributed computing are likely to be seen in future systems - often, within the same application. This leads us to propose an integrated approach that permits applications that use virtual synchrony with concurrent objects that respect a linearizability constraint, and vice versa. Transactional subsystems are treated as a special case of linearizability.

  9. Distribution network strengthens sales systems

    International Nuclear Information System (INIS)

    Janoska, J.

    2003-01-01

    Liberalisation of the electricity market pushes Slovak distribution companies to upgrade their sales technologies. The first one to invest in a complex electronic sales system will be Stredoslovenska energetika, a.s., Zilina. The system, worth 200 million Sk (4.83 million Euro), will be supplied by the Polish software company Winuel. The company should also supply software that would allow forecasting and planning of sales. The system should be fully operational by 2006. TREND has not managed to obtain information regarding plans that Zapadoslovenska energetika - the largest and most active distribution company - might have in this area. In eastern Slovakia, the distribution company Vychodoslovenska energetika, a.s., Kosice has also started addressing this issue. (Author)

  10. Aerodynamic Tests of the Space Launch System for Database Development

    Science.gov (United States)

    Pritchett, Victor E.; Mayle, Melody N.; Blevins, John A.; Crosby, William A.; Purinton, David C.

    2014-01-01

    The Aerosciences Branch (EV33) at the George C. Marshall Space Flight Center (MSFC) has been responsible for a series of wind tunnel tests on the National Aeronautics and Space Administration's (NASA) Space Launch System (SLS) vehicles. The primary purpose of these tests was to obtain aerodynamic data during the ascent phase and establish databases that can be used by the Guidance, Navigation, and Mission Analysis Branch (EV42) for trajectory simulations. The paper describes the test particulars regarding models and measurements and the facilities used, as well as database preparations.

  11. 8th Asian Conference on Intelligent Information and Database Systems

    CERN Document Server

    Madeyski, Lech; Nguyen, Ngoc

    2016-01-01

    The objective of this book is to contribute to the development of the intelligent information and database systems with the essentials of current knowledge, experience and know-how. The book contains a selection of 40 chapters based on original research presented as posters during the 8th Asian Conference on Intelligent Information and Database Systems (ACIIDS 2016) held on 14–16 March 2016 in Da Nang, Vietnam. The papers to some extent reflect the achievements of scientific teams from 17 countries in five continents. The volume is divided into six parts: (a) Computational Intelligence in Data Mining and Machine Learning, (b) Ontologies, Social Networks and Recommendation Systems, (c) Web Services, Cloud Computing, Security and Intelligent Internet Systems, (d) Knowledge Management and Language Processing, (e) Image, Video, Motion Analysis and Recognition, and (f) Advanced Computing Applications and Technologies. The book is an excellent resource for researchers, those working in artificial intelligence, mu...

  12. 9th Asian Conference on Intelligent Information and Database Systems

    CERN Document Server

    Nguyen, Ngoc; Shirai, Kiyoaki

    2017-01-01

    This book presents recent research in intelligent information and database systems. The carefully selected contributions were initially accepted for presentation as posters at the 9th Asian Conference on Intelligent Information and Database Systems (ACIIDS 2017) held in April 2017 in Kanazawa, Japan. While the contributions are of an advanced scientific level, several are accessible for non-expert readers. The book brings together 47 chapters divided into six main parts: • Part I. From Machine Learning to Data Mining. • Part II. Big Data and Collaborative Decision Support Systems, • Part III. Computer Vision Analysis, Detection, Tracking and Recognition, • Part IV. Data-Intensive Text Processing, • Part V. Innovations in Web and Internet Technologies, and • Part VI. New Methods and Applications in Information and Software Engineering. The book is an excellent resource for researchers and those working in algorithmics, artificial and computational intelligence, collaborative systems, decisio...

  13. Thermodynamic database for the Co-Pr system.

    Science.gov (United States)

    Zhou, S H; Kramer, M J; Meng, F Q; McCallum, R W; Ott, R T

    2016-03-01

    In this article, we describe data on (1) compositions for both as-cast and heat-treated specimens, summarized in Table 1; (2) the determined enthalpy of mixing of the liquid phase, listed in Table 2; and (3) the thermodynamic database of the Co-Pr system in TDB format for the research article entitled Chemical partitioning for the Co-Pr system: First-principles, experiments and energetic calculations to investigate the hard magnetic phase W.

  14. Compressed sensing for distributed systems

    CERN Document Server

    Coluccia, Giulio; Magli, Enrico

    2015-01-01

    This book presents a survey of the state of the art in the exciting and timely topic of compressed sensing for distributed systems. It has to be noted that, while compressed sensing has been studied for some time now, its distributed applications are relatively new. Remarkably, such applications are ideally suited to exploit all the benefits that compressed sensing can provide. The objective of this book is to provide the reader with a comprehensive survey of this topic, from the basic concepts to different classes of centralized and distributed reconstruction algorithms, as well as a comparison of these techniques. This book collects different contributions on these aspects. It presents the underlying theory in a complete and unified way for the first time, presenting various signal models and their use cases. It contains a theoretical part collecting latest results in rate-distortion analysis of distributed compressed sensing, as well as practical implementations of algorithms obtaining performance close to...

  15. Geographical Distribution of Biomass Carbon in Tropical Southeast Asian Forests: A Database; TOPICAL

    International Nuclear Information System (INIS)

    Brown, S

    2001-01-01

    A database was generated of estimates of geographically referenced carbon densities of forest vegetation in tropical Southeast Asia for 1980. A geographic information system (GIS) was used to incorporate spatial databases of climatic, edaphic, and geomorphological indices and vegetation to estimate potential (i.e., in the absence of human intervention and natural disturbance) carbon densities of forests. The resulting map was then modified to estimate actual 1980 carbon density as a function of population density and climatic zone. The database covers the following 13 countries: Bangladesh, Brunei, Cambodia (Campuchea), India, Indonesia, Laos, Malaysia, Myanmar (Burma), Nepal, the Philippines, Sri Lanka, Thailand, and Vietnam. The data sets within this database are provided in three file formats: ARC/INFO(trademark) exported integer grids, ASCII (American Standard Code for Information Interchange) files formatted for raster-based GIS software packages, and generic ASCII files with x, y coordinates for use with non-GIS software packages

  16. Hydronic distribution system computer model

    Energy Technology Data Exchange (ETDEWEB)

    Andrews, J.W.; Strasser, J.J.

    1994-10-01

    A computer model of a hot-water boiler and its associated hydronic thermal distribution loop has been developed at Brookhaven National Laboratory (BNL). It is intended to be incorporated as a submodel in a comprehensive model of residential-scale thermal distribution systems developed at Lawrence Berkeley. This will give the combined model the capability of modeling forced-air and hydronic distribution systems in the same house using the same supporting software. This report describes the development of the BNL hydronics model, initial results and internal consistency checks, and its intended relationship to the LBL model. A method of interacting with the LBL model that does not require physical integration of the two codes is described. This will provide capability now, with reduced up-front cost, as long as the number of runs required is not large.

  17. Enhanced distributed energy resource system

    Science.gov (United States)

    Atcitty, Stanley [Albuquerque, NM]; Clark, Nancy H [Corrales, NM]; Boyes, John D [Albuquerque, NM]; Ranade, Satishkumar J [Las Cruces, NM]

    2007-07-03

    A power transmission system including a direct current power source electrically connected to a conversion device for converting direct current into alternating current, a conversion device connected to a power distribution system through a junction, an energy storage device capable of producing direct current connected to a converter, where the converter, such as an insulated gate bipolar transistor, converts direct current from an energy storage device into alternating current and supplies the current to the junction and subsequently to the power distribution system. A microprocessor controller, connected to a sampling and feedback module and the converter, determines when the current load is higher than a set threshold value, requiring triggering of the converter to supply supplemental current to the power transmission system.
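
    As an illustration of the control behaviour described in the abstract (triggering the converter to supply supplemental current when the sampled load exceeds a set threshold), here is a minimal sketch. The threshold, hysteresis, and class names are assumptions, not the patented controller.

```python
from dataclasses import dataclass

@dataclass
class Converter:
    """Stand-in for the storage-side converter (e.g., an IGBT-based inverter)."""
    enabled: bool = False
    def trigger(self): self.enabled = True
    def release(self): self.enabled = False

def control_step(load_current_a, threshold_a, converter, hysteresis_a=2.0):
    """Enable the storage converter above the threshold; disable with hysteresis."""
    if load_current_a > threshold_a:
        converter.trigger()
    elif load_current_a < threshold_a - hysteresis_a:
        converter.release()
    return converter.enabled

if __name__ == "__main__":
    conv = Converter()
    for i_load in [40, 55, 62, 58, 49, 45]:  # sampled load current, amps
        print(i_load, control_step(i_load, threshold_a=60, converter=conv))
```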

  18. Distributed Systems: The Hard Problems

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    **Nicholas Bellerophon** works as a client services engineer at Basho Technologies, helping customers setup and run distributed systems at scale in the wild. He has also worked in massively multiplayer games, and recently completed a live scalable simulation engine. He is an avid TED-watcher with interests in many areas of the arts, science, and engineering, including of course high-energy physics.

  19. Distribution system analysis and automation

    CERN Document Server

    Gers, Juan

    2013-01-01

    A comprehensive guide to techniques that allow engineers to simulate, analyse and optimise power distribution systems which combined with automation, underpin the emerging concept of the "smart grid". This book is supported by theoretical concepts with real-world applications and MATLAB exercises.

  20. Practical private database queries based on a quantum-key-distribution protocol

    International Nuclear Information System (INIS)

    Jakobi, Markus; Simon, Christoph; Gisin, Nicolas; Bancal, Jean-Daniel; Branciard, Cyril; Walenta, Nino; Zbinden, Hugo

    2011-01-01

    Private queries allow a user, Alice, to learn an element of a database held by a provider, Bob, without revealing which element she is interested in, while limiting her information about the other elements. We propose to implement private queries based on a quantum-key-distribution protocol, with changes only in the classical postprocessing of the key. This approach makes our scheme both easy to implement and loss tolerant. While unconditionally secure private queries are known to be impossible, we argue that an interesting degree of security can be achieved by relying on fundamental physical principles instead of unverifiable security assumptions in order to protect both the user and the database. We think that the scope exists for such practical private queries to become another remarkable application of quantum information in the footsteps of quantum key distribution.

  1. Establishment of database system for management of KAERI wastes

    International Nuclear Information System (INIS)

    Shon, J. S.; Kim, K. J.; Ahn, S. J.

    2004-07-01

    Radioactive wastes generated by KAERI have various types, nuclides and characteristics. To manage and control these kinds of radioactive wastes, systematic management of their records, efficient searching and quick statistics are needed. Getting information about radioactive waste generated and stored by KAERI is the basic factor in constructing a rapid information system for the national cooperative management of radioactive waste. In this study, the Radioactive Waste Management Integration System (RAWMIS) was developed. It is aimed at managing records of radioactive wastes, improving the efficiency of management and supporting WACID (Waste Comprehensive Integration Database System), which is a national radioactive waste integrated safety management system of Korea. The major information of RAWMIS, driven by user requirements, is generation, gathering, transfer, treatment, and storage information for solid waste, liquid waste, gas waste and waste related to spent fuel. RAWMIS is composed of a database, software (interface between user and database), and software for a manager, and it was designed with a client/server structure. RAWMIS will be a useful tool to analyze radioactive waste management and radiation safety management. Also, this system is developed to share information with associated companies. Moreover, it can be expected to support the technology of research and development for radioactive waste treatment.

  2. The ATLAS distributed analysis system

    International Nuclear Information System (INIS)

    Legger, F

    2014-01-01

    In the LHC operations era, analysis of the multi-petabyte ATLAS data sample by globally distributed physicists is a challenging task. To attain the required scale the ATLAS Computing Model was designed around the concept of Grid computing, realized in the Worldwide LHC Computing Grid (WLCG), the largest distributed computational resource existing in the sciences. The ATLAS experiment currently stores over 140 PB of data and runs about 140,000 concurrent jobs continuously at WLCG sites. During the first run of the LHC, the ATLAS Distributed Analysis (DA) service has operated stably and scaled as planned. More than 1600 users submitted jobs in 2012, with 2 million or more analysis jobs per week, peaking at about a million jobs per day. The system dynamically distributes popular data to expedite processing and maximally utilize resources. The reliability of the DA service is high and steadily improving; Grid sites are continually validated against a set of standard tests, and a dedicated team of expert shifters provides user support and communicates user problems to the sites. Both the user support techniques and the direct feedback of users have been effective in improving the success rate and user experience when utilizing the distributed computing environment. In this contribution a description of the main components, activities and achievements of ATLAS distributed analysis is given. Several future improvements being undertaken will be described.

  3. The ATLAS distributed analysis system

    Science.gov (United States)

    Legger, F.; Atlas Collaboration

    2014-06-01

    In the LHC operations era, analysis of the multi-petabyte ATLAS data sample by globally distributed physicists is a challenging task. To attain the required scale the ATLAS Computing Model was designed around the concept of Grid computing, realized in the Worldwide LHC Computing Grid (WLCG), the largest distributed computational resource existing in the sciences. The ATLAS experiment currently stores over 140 PB of data and runs about 140,000 concurrent jobs continuously at WLCG sites. During the first run of the LHC, the ATLAS Distributed Analysis (DA) service has operated stably and scaled as planned. More than 1600 users submitted jobs in 2012, with 2 million or more analysis jobs per week, peaking at about a million jobs per day. The system dynamically distributes popular data to expedite processing and maximally utilize resources. The reliability of the DA service is high and steadily improving; Grid sites are continually validated against a set of standard tests, and a dedicated team of expert shifters provides user support and communicates user problems to the sites. Both the user support techniques and the direct feedback of users have been effective in improving the success rate and user experience when utilizing the distributed computing environment. In this contribution a description of the main components, activities and achievements of ATLAS distributed analysis is given. Several future improvements being undertaken will be described.

  4. Distributed multimedia database technologies supported by MPEG-7 and MPEG-21

    CERN Document Server

    Kosch, Harald

    2003-01-01

    Table of contents (as indexed): Introduction; Multimedia Content: Context; Multimedia Systems and Databases; (Multi)Media Data and Multimedia Metadata; Purpose and Organization of the Book; MPEG-7: The Multimedia Content Description Standard; Introduction; MPEG-7 and Multimedia Database Systems; Principles for Creating MPEG-7 Documents; MPEG-7 Description Definition Language; Step-by-Step Approach for Creating an MPEG-7 Document; Extending the Description Schema of MPEG-7; Encoding and Decoding of MPEG-7 Documents for Delivery - Binary Format for MPEG-7; Audio Part of MPEG-7; MPEG-7 Supporting Tools and Referen

  5. Documentation for the U.S. Geological Survey Public-Supply Database (PSDB): A database of permitted public-supply wells, surface-water intakes, and systems in the United States

    Science.gov (United States)

    Price, Curtis V.; Maupin, Molly A.

    2014-01-01

    The U.S. Geological Survey (USGS) has developed a database containing information about wells, surface-water intakes, and distribution systems that are part of public water systems across the United States, its territories, and possessions. Programs of the USGS such as the National Water Census, the National Water Use Information Program, and the National Water-Quality Assessment Program all require a complete and current inventory of public water systems, the sources of water used by those systems, and the size of populations served by the systems across the Nation. Although the U.S. Environmental Protection Agency’s Safe Drinking Water Information System (SDWIS) database already exists as the primary national Federal database for information on public water systems, the Public-Supply Database (PSDB) was developed to add value to SDWIS data with enhanced location and ancillary information, and to provide links to other databases, including the USGS’s National Water Information System (NWIS) database.

  6. Received Signal Strength Database Interpolation by Kriging for a Wi-Fi Indoor Positioning System.

    Science.gov (United States)

    Jan, Shau-Shiun; Yeh, Shuo-Ju; Liu, Ya-Wen

    2015-08-28

    The main approach for a Wi-Fi indoor positioning system is based on received signal strength (RSS) measurements, and the fingerprinting method is utilized to determine the user position by matching the RSS values with the pre-surveyed RSS database. Building an RSS fingerprint database is essential for an RSS-based indoor positioning system, and building such an RSS fingerprint database requires considerable time and effort. As the range of the indoor environment becomes larger, the labor increases. To provide better indoor positioning services and at the same time to reduce the labor required for the establishment of the positioning system, an indoor positioning system with an appropriate spatial interpolation method is needed. In addition, the advantage of the RSS approach is that the signal strength decays as the transmission distance increases, and this signal propagation characteristic is applied to an interpolated database with the Kriging algorithm in this paper. Using the distribution of reference points (RPs) at measured points, the signal propagation model of the Wi-Fi access point (AP) in the building can be built and expressed as a function. The function, as the spatial structure of the environment, can create the RSS database quickly in different indoor environments. Thus, in this paper, a Wi-Fi indoor positioning system based on the Kriging fingerprinting method is developed. As shown in the experiment results, with a 72.2% probability, the error of the extended RSS database with Kriging is less than 3 dBm compared to the surveyed RSS database. Importantly, the positioning error of the developed Wi-Fi indoor positioning system with Kriging is reduced by 17.9% on average compared to that without Kriging.
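
    The paper fits a propagation-based model per access point; the sketch below instead shows a generic ordinary-kriging interpolation of RSS values at an unsurveyed point from a few reference points, with an assumed exponential covariance model and illustrative parameters. It is an approximation of the idea, not the authors' implementation.

```python
import numpy as np

def krige(rp_xy, rp_rss, query_xy, sill=25.0, corr_range=8.0):
    """Ordinary kriging estimate of RSS (dBm) at query_xy from reference points."""
    n = len(rp_xy)
    d = np.linalg.norm(rp_xy[:, None, :] - rp_xy[None, :, :], axis=2)
    C = sill * np.exp(-d / corr_range)        # assumed covariance between RPs
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = C
    A[:n, n] = 1.0
    A[n, :n] = 1.0                            # unbiasedness (Lagrange) constraint
    d0 = np.linalg.norm(rp_xy - query_xy, axis=1)
    b = np.append(sill * np.exp(-d0 / corr_range), 1.0)
    w = np.linalg.solve(A, b)[:n]             # kriging weights
    return float(w @ rp_rss)

# Four surveyed reference points (metres) and their RSS values in dBm.
rp_xy = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
rp_rss = np.array([-45.0, -60.0, -58.0, -70.0])
print(round(krige(rp_xy, rp_rss, np.array([5.0, 5.0])), 1))
```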

  7. Towards a Component Based Model for Database Systems

    Directory of Open Access Journals (Sweden)

    Octavian Paul ROTARU

    2004-02-01

    Full Text Available Due to their effectiveness in the design and development of software applications and due to their recognized advantages in terms of reusability, Component-Based Software Engineering (CBSE) concepts have been arousing a great deal of interest in recent years. This paper presents and extends a component-based approach to object-oriented database systems (OODB) introduced by us in [1] and [2]. Components are proposed as a new abstraction level for database systems, logical partitions of the schema. In this context, the scope is introduced as an escalated property for transactions. Components are studied from the integrity, consistency, and concurrency control perspective. The main benefits of our proposed component model for OODB are the reusability of the database design, including the access statistics required for a proper query optimization, and a smooth information exchange. The integration of crosscutting concerns into the component database model using aspect-oriented techniques is also discussed. One of the main goals is to define a method for the assessment of component composition capabilities. These capabilities are restricted by the component's interface and measured in terms of adaptability, degree of compose-ability and acceptability level. The above-mentioned metrics are extended from database components to generic software components. This paper extends and consolidates into one common view the ideas previously presented by us in [1, 2, 3].[1] Octavian Paul Rotaru, Marian Dobre, Component Aspects in Object Oriented Databases, Proceedings of the International Conference on Software Engineering Research and Practice (SERP'04), Volume II, ISBN 1-932415-29-7, pages 719-725, Las Vegas, NV, USA, June 2004.[2] Octavian Paul Rotaru, Marian Dobre, Mircea Petrescu, Integrity and Consistency Aspects in Component-Oriented Databases, Proceedings of the International Symposium on Innovation in Information and Communication Technology (ISIICT

  8. State analysis requirements database for engineering complex embedded systems

    Science.gov (United States)

    Bennett, Matthew B.; Rasmussen, Robert D.; Ingham, Michel D.

    2004-01-01

    It has become clear that spacecraft system complexity is reaching a threshold where customary methods of control are no longer affordable or sufficiently reliable. At the heart of this problem are the conventional approaches to systems and software engineering based on subsystem-level functional decomposition, which fail to scale in the tangled web of interactions typically encountered in complex spacecraft designs. Furthermore, there is a fundamental gap between the requirements on software specified by systems engineers and the implementation of these requirements by software engineers. Software engineers must perform the translation of requirements into software code, hoping to accurately capture the systems engineer's understanding of the system behavior, which is not always explicitly specified. This gap opens up the possibility for misinterpretation of the systems engineer's intent, potentially leading to software errors. This problem is addressed by a systems engineering tool called the State Analysis Database, which provides a tool for capturing system and software requirements in the form of explicit models. This paper describes how requirements for complex aerospace systems can be developed using the State Analysis Database.

  9. Distributed software framework and continuous integration in hydroinformatics systems

    Science.gov (United States)

    Zhou, Jianzhong; Zhang, Wei; Xie, Mengfei; Lu, Chengwei; Chen, Xiao

    2017-08-01

    When encountering multiple and complicated models, multisource structured and unstructured data, and complex requirements analysis, the platform design and integration of hydroinformatics systems become a challenge. To properly solve these problems, we describe a distributed software framework and its continuous integration process in hydroinformatics systems. This distributed framework mainly consists of a server cluster for models, a distributed database, GIS (Geographic Information System) servers, a master node and clients. Based on it, a GIS-based decision support system for the joint regulation of water quantity and water quality of group lakes in Wuhan, China, was established.

  10. Implementation of Massive Real-time Database System Using Network Sensors and Sector Operation

    Directory of Open Access Journals (Sweden)

    Q. D. Sun

    2014-07-01

    Full Text Available In order to improve the stability of the massive database system and its capabilities of data processing and services, this paper proposed a distributed information server cluster. The N-level distributed architecture based on the network sensors was used to form a massive database system for real-time collection and query. With the support of the network sensors, all components in this architecture have plug-and-play functionality. To efficiently schedule the tasks of storing gathered data and querying information, a dynamic and self-adaptive scheduling algorithm based on task sensors was introduced in the application server. The task sensor collects the load status of the common processes in the various information servers through the information collection processes in the application server and sends it to the scheduler in the same server, which dispatches the data storage tasks to the most appropriate information server. Furthermore, a dedicated database system based on direct sector read-write was presented to improve the access speed, which is almost 25 times that of a database based on SYBASE or ORACLE. The practice shows that the system developed by this strategy has good flexibility and efficiency.
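
    A minimal sketch of the load-aware dispatch idea, assuming hypothetical server names and load units: the application server keeps the reported load of each information server and routes each storage task to the one currently reporting the lowest load. This is an illustration of the scheduling principle, not the authors' algorithm.

```python
import heapq

class Scheduler:
    """Dispatch storage tasks to the least-loaded information server."""
    def __init__(self, servers):
        # Heap of (reported_load, server_name) pairs.
        self.heap = [(0.0, s) for s in servers]
        heapq.heapify(self.heap)

    def report_load(self, server, load):
        """Update a server's load as collected by the information-collection process."""
        self.heap = [(l, s) if s != server else (load, s) for l, s in self.heap]
        heapq.heapify(self.heap)

    def dispatch(self, task_cost=1.0):
        """Pick the least-loaded server and charge it the estimated task cost."""
        load, server = heapq.heappop(self.heap)
        heapq.heappush(self.heap, (load + task_cost, server))
        return server

sched = Scheduler(["info-srv-1", "info-srv-2", "info-srv-3"])
sched.report_load("info-srv-2", 5.0)
print([sched.dispatch() for _ in range(4)])  # avoids the heavily loaded server
```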

  11. Distributed optimization system and method

    Science.gov (United States)

    Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.

    2003-06-10

    A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agent can be one or more physical agents, such as a robot, and can be software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time dependent sources, time independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, and a multi-processor computer.
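
    Purely as an illustration of cooperative source seeking with shared information, and not the patented method, the sketch below has several agents sample an assumed scalar field (such as a chemical concentration) and drift toward the best reading found so far.

```python
import random

def field(x, y):
    # Unknown "source" placed at (3, -2) for this illustration.
    return -((x - 3.0) ** 2 + (y + 2.0) ** 2)

def cooperative_search(n_agents=5, steps=300, attraction=0.2, noise=0.1, seed=1):
    """Multiple agents share their best reading and move toward it."""
    rng = random.Random(seed)
    agents = [(rng.uniform(-10, 10), rng.uniform(-10, 10)) for _ in range(n_agents)]
    best = max(agents, key=lambda p: field(*p))
    for _ in range(steps):
        moved = []
        for x, y in agents:
            # Each agent steps toward the group's best-known point, plus exploration.
            nx = x + attraction * (best[0] - x) + rng.gauss(0.0, noise)
            ny = y + attraction * (best[1] - y) + rng.gauss(0.0, noise)
            moved.append((nx, ny))
        agents = moved
        best = max(agents + [best], key=lambda p: field(*p))  # best never worsens
    return best

# Prints the best point sampled; its objective value improves monotonically.
print(cooperative_search())
```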

  12. Optical parallel database management system for page oriented holographic memeories

    Science.gov (United States)

    Fu, Jian; Schamschula, Marius P.; Caulfield, H. John

    1999-12-01

    Current data recall rates from page oriented holographic memories far exceed the ability of electronics to even read or transmit the data. For database management, we must not only read those data but also query them - computationally, a far more complex task. That task is very severe for electronics. We show here the rudiments of an optical system that can do most of the query operations in parallel in optics, leaving a significantly smaller burden for electronics. Even here, electronics is the ultimate speed limiter. Nevertheless, we can query data far faster in our optical/electronic system than in any purely electronic system.

  13. Computerized database management system for breast cancer patients.

    Science.gov (United States)

    Sim, Kok Swee; Chong, Sze Siang; Tso, Chih Ping; Nia, Mohsen Esmaeili; Chong, Aun Kee; Abbas, Siti Fathimah

    2014-01-01

    Data analysis based on breast cancer risk factors such as age, race, breastfeeding, hormone replacement therapy, family history, and obesity was conducted on breast cancer patients using a new enhanced computerized database management system. MySQL is selected as the database management system to store the patient data collected from hospitals in Malaysia. An automatic calculation tool is embedded in this system to assist the data analysis. The results are plotted automatically, and a user-friendly graphical user interface is developed that can control the MySQL database. Case studies show that the breast cancer incidence rate is highest among Malay women, followed by Chinese and Indian women. The peak age for breast cancer incidence is from 50 to 59 years old. Results suggest that the chance of developing breast cancer is increased in older women and reduced with breastfeeding practice. Weight status might affect the breast cancer risk differently. Additional studies are needed to confirm these findings.
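
    The schema and figures below are hypothetical; the sketch only illustrates the kind of automatic aggregation (cases by ethnic group and age band) that such a patient database can run, not the actual system described in the abstract.

```python
import sqlite3

# Tiny illustrative patient table with hypothetical columns and sample rows.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE patient (
    id INTEGER PRIMARY KEY, ethnicity TEXT, age INTEGER, breastfed INTEGER)""")
conn.executemany(
    "INSERT INTO patient (ethnicity, age, breastfed) VALUES (?, ?, ?)",
    [("Malay", 54, 0), ("Chinese", 47, 1), ("Malay", 58, 0), ("Indian", 63, 1)],
)

# Aggregate case counts by ethnicity and 10-year age band.
rows = conn.execute("""
    SELECT ethnicity, (age / 10) * 10 AS age_band, COUNT(*) AS cases
    FROM patient
    GROUP BY ethnicity, age_band
    ORDER BY cases DESC
""").fetchall()
for ethnicity, band, cases in rows:
    print(f"{ethnicity:8s} {band}-{band + 9}: {cases}")
```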

  14. Distributed Persistent Identifiers System Design

    Directory of Open Access Journals (Sweden)

    Pavel Golodoniuc

    2017-06-01

    Full Text Available The need to identify both digital and physical objects is ubiquitous in our society. Past and present persistent identifier (PID) systems, of which there is a great variety in terms of technical and social implementation, have evolved with the advent of the Internet, which has allowed for globally unique and globally resolvable identifiers. PID systems have, by and large, catered for identifier uniqueness, integrity, and persistence, regardless of the identifier's application domain. Trustworthiness of these systems has been measured by the criteria first defined by Bütikofer (2009) and further elaborated by Golodoniuc et al. (2016) and Car et al. (2017). Since many PID systems have been largely conceived and developed by a single organisation, they faced challenges for widespread adoption and, most importantly, the ability to survive change of technology. We believe that a cause of once-successful PID systems fading away is the centralisation of support infrastructure, both organisational and computing and data storage systems. In this paper, we propose a PID system design that implements the pillars of a trustworthy system: ensuring identifiers' independence of any particular technology or organisation, implementation of core PID system functions, separation from data delivery, and enabling the system to adapt for future change. We propose decentralisation at all levels (persistent identifier and information object registration, resolution, and data delivery) using Distributed Hash Tables and traditional peer-to-peer networks with information replication and caching mechanisms, thus eliminating the need for a central PID data store. This will increase overall system fault tolerance, thus ensuring its trustworthiness. We also discuss important aspects of the distributed system's governance, such as the notion of the authoritative source and data integrity.
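
    As a toy illustration of resolving identifiers without a central registry, the sketch below maps a persistent identifier to one of several hypothetical resolver nodes with consistent hashing. A real DHT adds replication, routing, and churn handling, as the paper discusses; this is only the core idea.

```python
import bisect
import hashlib

def _h(key: str) -> int:
    """Stable integer hash of a string (SHA-256)."""
    return int(hashlib.sha256(key.encode()).hexdigest(), 16)

class Ring:
    """Minimal consistent-hashing ring over a set of resolver nodes."""
    def __init__(self, nodes):
        self._ring = sorted((_h(n), n) for n in nodes)
        self._keys = [k for k, _ in self._ring]

    def node_for(self, pid: str) -> str:
        """Return the node responsible for resolving a given identifier."""
        i = bisect.bisect(self._keys, _h(pid)) % len(self._ring)
        return self._ring[i][1]

# Hypothetical resolver hosts; any identifier deterministically maps to one node.
ring = Ring(["resolver-a.example.org", "resolver-b.example.org", "resolver-c.example.org"])
print(ring.node_for("ark:/12345/xt98765"))
```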

  15. Structure health monitoring system using internet and database technologies

    International Nuclear Information System (INIS)

    Kwon, Il Bum; Kim, Chi Yeop; Choi, Man Yong; Lee, Seung Seok

    2003-01-01

    A structural health monitoring system should be developed based on Internet and database technologies in order to manage large structures efficiently. The system is operated over the Internet, connected to the site of the structures. The monitoring system provides several functions: self-monitoring, self-diagnosis, and self-control. Self-monitoring is the sensor fault detection function: if some sensors are not working normally, the system can detect the faulty sensors. The self-diagnosis function repairs the abnormal condition of sensors, and self-control is the repair function of the monitoring system itself. In particular, the monitoring system can identify when sensors need replacement. For further study, a real application test will be performed to check for remaining shortcomings.

  16. Central Appalachian basin natural gas database: distribution, composition, and origin of natural gases

    Science.gov (United States)

    Román Colón, Yomayra A.; Ruppert, Leslie F.

    2015-01-01

    The U.S. Geological Survey (USGS) has compiled a database consisting of three worksheets of central Appalachian basin natural gas analyses and isotopic compositions from published and unpublished sources of 1,282 gas samples from Kentucky, Maryland, New York, Ohio, Pennsylvania, Tennessee, Virginia, and West Virginia. The database includes field and reservoir names, well and State identification number, selected geologic reservoir properties, and the composition of natural gases (methane; ethane; propane; butane, iso-butane [i-butane]; normal butane [n-butane]; iso-pentane [i-pentane]; normal pentane [n-pentane]; cyclohexane, and hexanes). In the first worksheet, location and American Petroleum Institute (API) numbers from public or published sources are provided for 1,231 of the 1,282 gas samples. A second worksheet of 186 gas samples was compiled from published sources and augmented with public location information and contains carbon, hydrogen, and nitrogen isotopic measurements of natural gas. The third worksheet is a key for all abbreviations in the database. The database can be used to better constrain the stratigraphic distribution, composition, and origin of natural gas in the central Appalachian basin.

  17. A machine reading system for assembling synthetic paleontological databases.

    Directory of Open Access Journals (Sweden)

    Shanan E Peters

    Full Text Available Many aspects of macroevolutionary theory and our understanding of biotic responses to global environmental change derive from literature-based compilations of paleontological data. Existing manually assembled databases are, however, incomplete and difficult to assess and enhance with new data types. Here, we develop and validate the quality of a machine reading system, PaleoDeepDive, that automatically locates and extracts data from heterogeneous text, tables, and figures in publications. PaleoDeepDive performs comparably to humans in several complex data extraction and inference tasks and generates congruent synthetic results that describe the geological history of taxonomic diversity and genus-level rates of origination and extinction. Unlike traditional databases, PaleoDeepDive produces a probabilistic database that systematically improves as information is added. We show that the system can readily accommodate sophisticated data types, such as morphological data in biological illustrations and associated textual descriptions. Our machine reading approach to scientific data integration and synthesis brings within reach many questions that are currently underdetermined and does so in ways that may stimulate entirely new modes of inquiry.

  18. 14 CFR 25.1355 - Distribution system.

    Science.gov (United States)

    2010-01-01

    ... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Distribution system. 25.1355 Section 25... Distribution system. (a) The distribution system includes the distribution busses, their associated feeders... power for particular equipment or systems are required by this chapter, in the event of the failure of...

  19. A Unified Peer-to-Peer Database Framework for XQueries over Dynamic Distributed Content and its Application for Scalable Service Discovery

    CERN Document Server

    Hoschek, Wolfgang

    In a large distributed system spanning administrative domains such as a Grid, it is desirable to maintain and query dynamic and timely information about active participants such as services, resources and user communities. The web services vision promises that programs are made more flexible and powerful by querying Internet databases (registries) at runtime in order to discover information and network attached third-party building blocks. Services can advertise themselves and related metadata via such databases, enabling the assembly of distributed higher-level components. In support of this vision, this thesis shows how to support expressive general-purpose queries over a view that integrates autonomous dynamic database nodes from a wide range of distributed system topologies. We motivate and justify the assertion that realistic ubiquitous service and resource discovery requires a rich general-purpose query language such as XQuery or SQL. Next, we introduce the Web Service Discovery Architecture (WSDA), wh...

  20. An engineering database management system for spacecraft operations

    Science.gov (United States)

    Cipollone, Gregorio; Mckay, Michael H.; Paris, Joseph

    1993-01-01

    Studies at ESOC have demonstrated the feasibility of a flexible and powerful Engineering Database Management System in support of spacecraft operations documentation. The objectives set out were three-fold: first, an analysis of the problems encountered by the Operations team in obtaining and managing operations documents; secondly, the definition of a concept for operations documentation and the implementation of a prototype to prove the feasibility of the concept; and thirdly, the definition of standards and protocols required for the exchange of data between the top-level partners in a satellite project. The EDMS prototype was populated with ERS-1 satellite design data and has been used by the operations team at ESOC to gather operational experience. An operational EDMS would be implemented at the satellite prime contractor's site as a common database for all technical information surrounding a project and would be accessible by the co-contractors' and ESA teams.

  1. Monitoring the DIRAC distributed system

    CERN Document Server

    Santinelli, R; Nandakumar, R

    2010-01-01

    DIRAC, the LHCb community Grid solution, is intended to reliably run large data mining activities. The DIRAC system consists of various services (which wait to be contacted to perform actions) and agents (which carry out periodic activities) to direct jobs as required. An important part of ensuring the reliability of the infrastructure is the monitoring and logging of these DIRAC distributed systems. The monitoring is done collecting information from two sources – one is from pinging the services or by keeping track of the regular heartbeats of the agents, and the other from the analysis of the error messages generated both by agents and services and collected by a logging system. This allows us to ensure that the components are running properly and to collect useful information regarding their operations. The process status monitoring is displayed using the SLS sensor mechanism that also automatically allows to plot various quantities and keep a history of the system. A dedicated GridMap interface (Service...
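
    The record above describes monitoring built from service pings and agent heartbeats. As a generic, hedged sketch of that idea (not DIRAC code; the component names and timeout are invented for illustration), a heartbeat-staleness check can be as simple as the following.

        # Minimal heartbeat-style status check, in the spirit of the monitoring described
        # above (generic sketch; component names and thresholds are illustrative, not DIRAC's).
        import time

        HEARTBEAT_TIMEOUT = 120  # seconds without a heartbeat before a component is flagged

        def component_status(last_heartbeats, now=None):
            """Map component name -> 'OK' or 'STALE' based on last heartbeat timestamps."""
            now = time.time() if now is None else now
            return {
                name: "OK" if now - ts <= HEARTBEAT_TIMEOUT else "STALE"
                for name, ts in last_heartbeats.items()
            }

        if __name__ == "__main__":
            now = time.time()
            heartbeats = {"JobCleaningAgent": now - 30, "WorkloadManager": now - 600}
            print(component_status(heartbeats, now))  # {'JobCleaningAgent': 'OK', 'WorkloadManager': 'STALE'}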

  2. Parallel and Distributed System Simulation

    Science.gov (United States)

    Dongarra, Jack

    1998-01-01

    This exploratory study initiated our research into the software infrastructure necessary to support the modeling and simulation techniques that are most appropriate for the Information Power Grid. Such computational power grids will use high-performance networking to connect hardware, software, instruments, databases, and people into a seamless web that supports a new generation of computation-rich problem solving environments for scientists and engineers. In this context we looked at evaluating the NetSolve software environment for network computing that leverages the potential of such systems while addressing their complexities. NetSolve's main purpose is to enable the creation of complex applications that harness the immense power of the grid, yet are simple to use and easy to deploy. NetSolve uses a modular, client-agent-server architecture to create a system that is very easy to use. Moreover, it is designed to be highly composable in that it readily permits new resources to be added by anyone willing to do so. In these respects NetSolve is to the Grid what the World Wide Web is to the Internet. But like the Web, the design that makes these wonderful features possible can also impose significant limitations on the performance and robustness of a NetSolve system. This project explored the design innovations that push the performance and robustness of the NetSolve paradigm as far as possible without sacrificing the Web-like ease of use and composability that make it so powerful.

  3. The ATLAS Distributed Analysis System

    CERN Document Server

    Legger, F; The ATLAS collaboration; Pacheco Pages, A; Stradling, A

    2013-01-01

    In the LHC operations era, analysis of the multi-petabyte ATLAS data sample by globally distributed physicists is a challenging task. To attain the required scale the ATLAS Computing Model was designed around the concept of grid computing, realized in the Worldwide LHC Computing Grid (WLCG), the largest distributed computational resource existing in the sciences. The ATLAS experiment currently stores over 140 PB of data and runs about 140,000 concurrent jobs continuously at WLCG sites. During the first run of the LHC, the ATLAS Distributed Analysis (DA) service has operated stably and scaled as planned. More than 1600 users submitted jobs in 2012, with 2 million or more analysis jobs per week, peaking at about a million jobs per day. The system dynamically distributes popular data to expedite processing and maximally utilize resources. The reliability of the DA service is high but steadily improving; grid sites are continually validated against a set of standard tests, and a dedicated team of expert shifters ...

  4. The ATLAS Distributed Analysis System

    CERN Document Server

    Legger, F; The ATLAS collaboration

    2014-01-01

    In the LHC operations era, analysis of the multi-petabyte ATLAS data sample by globally distributed physicists is a challenging task. To attain the required scale the ATLAS Computing Model was designed around the concept of grid computing, realized in the Worldwide LHC Computing Grid (WLCG), the largest distributed computational resource existing in the sciences. The ATLAS experiment currently stores over 140 PB of data and runs about 140,000 concurrent jobs continuously at WLCG sites. During the first run of the LHC, the ATLAS Distributed Analysis (DA) service has operated stably and scaled as planned. More than 1600 users submitted jobs in 2012, with 2 million or more analysis jobs per week, peaking at about a million jobs per day. The system dynamically distributes popular data to expedite processing and maximally utilize resources. The reliability of the DA service is high but steadily improving; grid sites are continually validated against a set of standard tests, and a dedicated team of expert shifters ...

  5. Distributed optimal coordination for distributed energy resources in power systems

    DEFF Research Database (Denmark)

    Wu, Di; Yang, Tao; Stoorvogel, A.

    2017-01-01

    Driven by smart grid technologies, distributed energy resources (DERs) have been rapidly developing in recent years for improving reliability and efficiency of distribution systems. Emerging DERs require effective and efficient coordination in order to reap their potential benefits. In this paper, we consider an optimal DER coordination problem over multiple time periods subject to constraints at both system and device levels. Fully distributed algorithms are proposed to dynamically and automatically coordinate distributed generators with multiple/single storages. With the proposed algorithms...
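
    The record above concerns fully distributed coordination of DERs. The paper's actual algorithm is not reproduced here; as a hedged sketch of the kind of primitive such schemes typically build on, the following average-consensus iteration lets every controller learn a network-wide quantity (here, the average local load) using only neighbour-to-neighbour exchanges. The communication graph, step size and load values are illustrative assumptions.

        # Average-consensus primitive of the kind fully distributed DER coordination
        # schemes are typically built on (illustrative sketch only; the paper's actual
        # algorithm, network and parameters are not reproduced here).
        import numpy as np

        def average_consensus(x0, neighbours, eps=0.2, iters=200):
            """Each node repeatedly nudges its value toward its neighbours' values.

            x0         : initial local measurements (e.g. local load in kW), one per node
            neighbours : adjacency list describing the (connected, undirected) comm graph
            Converges to the network-wide average, which every DER then knows locally.
            """
            x = np.array(x0, dtype=float)
            for _ in range(iters):
                x_new = x.copy()
                for i, nbrs in enumerate(neighbours):
                    x_new[i] += eps * sum(x[j] - x[i] for j in nbrs)
                x = x_new
            return x

        # 4 DER controllers on a line graph, each measuring only its local load:
        local_load = [120.0, 80.0, 150.0, 50.0]
        graph = [[1], [0, 2], [1, 3], [2]]
        print(average_consensus(local_load, graph))  # all entries approach the average, 100.0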

  6. Small Aircraft Data Distribution System

    Science.gov (United States)

    Chazanoff, Seth L.; Dinardo, Steven J.

    2012-01-01

    The CARVE Small Aircraft Data Distribution System acquires the aircraft location and attitude data that is required by the various programs running on a distributed network. This system distributes the data it acquires to the data acquisition programs for inclusion in their data files. It uses UDP (User Datagram Protocol) to broadcast data over a LAN (Local Area Network) to any programs that might have a use for the data. The program is easily adaptable to acquire additional data and log that data to disk. The current version also drives displays using precision pitch and roll information to aid the pilot in maintaining a level-level attitude for radar/radiometer mapping beyond the degree available by flying visually or using a standard gyro-driven attitude indicator. The software is designed to acquire an array of data to help the mission manager make real-time decisions as to the effectiveness of the flight. This data is displayed for the mission manager and broadcast to the other experiments on the aircraft for inclusion in their data files. The program also drives real-time precision pitch and roll displays for the pilot and copilot to aid them in maintaining the desired attitude, when required, during data acquisition on mapping lines.
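
    The record above describes broadcasting aircraft location and attitude data over a LAN with UDP. A minimal Python sketch of that pattern follows; the port number, packet layout and field names are assumptions for illustration, not details of the CARVE software.

        # Rough sketch of UDP broadcast of aircraft attitude data over a LAN, in the
        # spirit of the system described above (port number, packet layout and field
        # names are assumptions, not taken from the CARVE software).
        import json
        import socket
        import time

        BROADCAST_ADDR = ("255.255.255.255", 15000)   # hypothetical port

        def broadcast_attitude(lat, lon, alt_m, pitch_deg, roll_deg):
            packet = json.dumps({
                "t": time.time(), "lat": lat, "lon": lon,
                "alt_m": alt_m, "pitch": pitch_deg, "roll": roll_deg,
            }).encode()
            with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
                sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
                sock.sendto(packet, BROADCAST_ADDR)

        # A listener on the same LAN simply binds the port and reads datagrams:
        def listen_once(port=15000, timeout=5.0):
            with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
                sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
                sock.bind(("", port))
                sock.settimeout(timeout)
                data, addr = sock.recvfrom(4096)
                return json.loads(data), addr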

  7. Study on distributed generation algorithm of variable precision concept lattice based on ontology heterogeneous database

    Science.gov (United States)

    WANG, Qingrong; ZHU, Changfeng

    2017-06-01

    Integration of distributed heterogeneous data sources is a key issue in big data applications. In this paper, the strategy of variable precision is introduced into the concept lattice, and a one-to-one mapping between the variable precision concept lattice and the ontology concept lattice is constructed to produce a local ontology by building the variable precision concept lattice for each subsystem. A distributed generation algorithm for variable precision concept lattices over an ontology-based heterogeneous database is then proposed, drawing on the close relationship between concept lattice and ontology construction. Finally, based on the main concept lattice generated from the existing heterogeneous database, a case study has been carried out to verify the feasibility and validity of the algorithm, and the differences between the main concept lattice and the standard concept lattice are compared. The analysis results show that the proposed algorithm can automatically carry out the construction of a distributed concept lattice over heterogeneous data sources.

  8. Automated Energy Distribution and Reliability System Status Report

    Energy Technology Data Exchange (ETDEWEB)

    Buche, D. L.; Perry, S.

    2007-10-01

    This report describes Northern Indiana Public Service Co. project efforts to develop an automated energy distribution and reliability system. The purpose of this project was to implement a database-driven GIS solution that would manage all of the company's gas, electric, and landbase objects.

  9. Automated Energy Distribution and Reliability System (AEDR): Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Buche, D. L.

    2008-07-01

    This report describes Northern Indiana Public Service Co. project efforts to develop an automated energy distribution and reliability system. The purpose of this project was to implement a database-driven GIS solution that would manage all of the company's gas, electric, and landbase objects.

  10. DC Distribution Systems and Microgrids

    DEFF Research Database (Denmark)

    Dragicevic, Tomislav; Anvari-Moghaddam, Amjad; Quintero, Juan Carlos Vasquez

    2017-01-01

    A qualitative overview of different hardware topologies and control systems for DC MGs has been presented in this chapter. Some challenges and design considerations of DC protection systems have also been discussed. Finally, applications of DC MGs in emerging smart grid applications have been summarized. Due to their attractive characteristics in terms of compliance with modern generation, storage and electronic load technologies, high reliability and current carrying capacity, as well as simple control, DC systems are already an indispensable part of power systems. Moreover, the existing developments in different industries gradually lead to new ways of rethinking the future power distribution philosophies, especially with the emergence of SSTs. Research in DC systems, especially in power electronics-based technologies, will be highly attractive in the future.

  11. Integrated Electronic Health Record Database Management System: A Proposal.

    Science.gov (United States)

    Schiza, Eirini C; Panos, George; David, Christiana; Petkov, Nicolai; Schizas, Christos N

    2015-01-01

    eHealth has attained significant importance as a new mechanism for health management and medical practice. However, the technological growth of eHealth is still limited by the technical expertise needed to develop appropriate products. Researchers are constantly developing and testing new software for building and handling Clinical Medical Records, now renamed Electronic Health Record (EHR) systems; EHRs take full advantage of technological developments and at the same time provide increased diagnostic and treatment capabilities to doctors. A step to be considered for facilitating this aim is to involve the doctor more actively in laying the foundations for the EHR system and database. A global clinical patient record database management system can be created electronically by simulating real-life medical practice health record taking and by utilizing and analyzing the recorded parameters. The proposed approach demonstrates the effective implementation of a universal classic medical record in electronic form, a procedure by which clinicians are led to utilize algorithms and intelligent systems for their differential diagnosis, final diagnosis and treatment strategies.

  12. Fossil-Fuel CO2 Emissions Database and Exploration System

    Science.gov (United States)

    Krassovski, M.; Boden, T.

    2012-04-01

    The Carbon Dioxide Information Analysis Center (CDIAC) at Oak Ridge National Laboratory (ORNL) quantifies the release of carbon from fossil-fuel use and cement production each year at global, regional, and national spatial scales. These estimates are vital to climate change research given the strong evidence suggesting fossil-fuel emissions are responsible for unprecedented levels of carbon dioxide (CO2) in the atmosphere. The CDIAC fossil-fuel emissions time series are based largely on annual energy statistics published for all nations by the United Nations (UN). Publications containing historical energy statistics make it possible to estimate fossil-fuel CO2 emissions back to 1751, before the Industrial Revolution. From these core fossil-fuel CO2 emission time series, CDIAC has developed a number of additional data products to satisfy modeling needs and to address other questions aimed at improving our understanding of the global carbon cycle budget. For example, CDIAC also produces a time series of gridded fossil-fuel CO2 emission estimates and isotopic (e.g., C13) emissions estimates. The gridded data are generated using the methodology described in Andres et al. (2011) and provide monthly and annual estimates for 1751-2008 at 1° latitude by 1° longitude resolution. These gridded emission estimates are being used in the latest IPCC Scientific Assessment (AR4). Isotopic estimates are possible thanks to detailed information for individual nations regarding the carbon content of select fuels (e.g., the carbon signature of natural gas from Russia). CDIAC has recently developed a relational database to house these baseline emissions estimates and associated derived products and a web-based interface to help users worldwide query these data holdings. Users can identify, explore and download desired CDIAC

  13. Development of sorption database (JAEA-SDB). Update of sorption data including soil and cement systems

    International Nuclear Information System (INIS)

    Suyama, Tadahiro; Tachi, Yukio

    2012-03-01

    Sorption of radionuclides in buffer materials (bentonite) and rocks is a key process in the safe geological disposal of radioactive waste, because migration of radionuclides in this barrier is expected to be controlled by sorption processes. The distribution coefficient (Kd) is therefore an important parameter in the performance assessment (PA) of geological disposal. A sorption database containing extensive compilations of Kd data measured by batch sorption experiments plays a key role in PA-related Kd setting and predictive model development under a variety of geochemical conditions. For this purpose, the Japan Atomic Energy Agency (JAEA) has developed a sorption database (JAEA-SDB) as an important basis for the PA of high-level radioactive waste disposal. This sorption database was first developed for the H12 PA, and was improved and updated in view of potential future data needs, focusing on assuring the desired quality level and testing the usefulness of the database for possible applications to PA-related parameter setting. The present report focuses on updating the sorption database (JAEA-SDB) by adding Kd data for various systems, including soil and cement systems, in order to apply JAEA-SDB to PA-related Kd setting for the disposal of low-level radioactive wastes, including TRU wastes, and to the evaluation of radionuclide transport in surface soil systems. The updated data include Kd data for soil and cement systems extracted mainly from previously published databases, and Kd data related to our recent activities on Kd setting and mechanistic model development. As a result, 16,000 Kd data from 334 references are added, bringing the total number of Kd values in the JAEA-SDB to about 46,000. The updated JAEA-SDB is expected to make it possible to obtain a quick overview of the available data and to provide suitable access to the respective data for the performance assessment of various types of radioactive waste. (author)

  14. Comparative Analysis of CTF and Trace Thermal-Hydraulic Codes Using OECD/NRC PSBT Benchmark Void Distribution Database

    Directory of Open Access Journals (Sweden)

    M. Avramova

    2013-01-01

    Full Text Available The international OECD/NRC PSBT benchmark has been established to provide a test bed for assessing the capabilities of thermal-hydraulic codes and to encourage advancement in the analysis of fluid flow in rod bundles. The benchmark was based on one of the most valuable databases identified for the thermal-hydraulics modeling developed by NUPEC, Japan. The database includes void fraction and departure from nucleate boiling measurements in a representative PWR fuel assembly. On behalf of the benchmark team, PSU in collaboration with US NRC has performed supporting calculations using the PSU in-house advanced thermal-hydraulic subchannel code CTF and the US NRC system code TRACE. CTF is a version of COBRA-TF whose models have been continuously improved and validated by the RDFMG group at PSU. TRACE is a reactor systems code developed by US NRC to analyze transient and steady-state thermal-hydraulic behavior in LWRs and it has been designed to perform best-estimate analyses of LOCA, operational transients, and other accident scenarios in PWRs and BWRs. The paper presents CTF and TRACE models for the PSBT void distribution exercises. Code-to-code and code-to-data comparisons are provided along with a discussion of the void generation and void distribution models available in the two codes.

  15. Root-Level Password Reset Method for the MySQL Relational Database Management System (RDBMS)

    Directory of Open Access Journals (Sweden)

    Taqwa Hariguna

    2011-08-01

    Full Text Available A database is an important means of storing data; with a database, an organization gains advantages in several respects, such as faster access and reduced paper use. However, once a database is implemented, it is not uncommon for the database administrator to forget the password in use, which complicates database maintenance. This study aims to explore how to reset the root-level password of the MySQL relational database management system.
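
    The record above concerns recovering a lost MySQL root password. A hedged sketch of the commonly documented recovery flow follows (not the paper's procedure verbatim): an administrator first restarts mysqld with --skip-grant-tables outside this script, then reloads the grant tables and sets a new password. The host and user names are assumptions, and the mysql-connector-python package is required.

        # Hedged sketch of the kind of root-password reset the paper investigates:
        # the server is assumed to have been restarted with --skip-grant-tables by an
        # administrator with OS-level access before this script runs. Statements follow
        # common practice for recent MySQL versions; verify against the documentation
        # for the version in use.
        import mysql.connector

        def reset_root_password(new_password: str):
            # No password needed here because grant checks are skipped at this point.
            conn = mysql.connector.connect(host="localhost", user="root")
            try:
                cur = conn.cursor()
                cur.execute("FLUSH PRIVILEGES")  # reload grant tables so ALTER USER is accepted
                # The connector substitutes %s client-side, producing a quoted literal.
                cur.execute("ALTER USER 'root'@'localhost' IDENTIFIED BY %s", (new_password,))
                cur.execute("FLUSH PRIVILEGES")
                conn.commit()
            finally:
                conn.close()

        # reset_root_password("a-new-strong-password")  # then restart mysqld normally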

  16. How I do it: a practical database management system to assist clinical research teams with data collection, organization, and reporting.

    Science.gov (United States)

    Lee, Howard; Chapiro, Julius; Schernthaner, Rüdiger; Duran, Rafael; Wang, Zhijun; Gorodetski, Boris; Geschwind, Jean-François; Lin, MingDe

    2015-04-01

    The objective of this study was to demonstrate that an intra-arterial liver therapy clinical research database system is a more workflow efficient and robust tool for clinical research than a spreadsheet storage system. The database system could be used to generate clinical research study populations easily with custom search and retrieval criteria. A questionnaire was designed and distributed to 21 board-certified radiologists to assess current data storage problems and clinician reception to a database management system. Based on the questionnaire findings, a customized database and user interface system were created to perform automatic calculations of clinical scores including staging systems such as the Child-Pugh and Barcelona Clinic Liver Cancer, and facilitates data input and output. Questionnaire participants were favorable to a database system. The interface retrieved study-relevant data accurately and effectively. The database effectively produced easy-to-read study-specific patient populations with custom-defined inclusion/exclusion criteria. The database management system is workflow efficient and robust in retrieving, storing, and analyzing data. Copyright © 2015 AUR. Published by Elsevier Inc. All rights reserved.
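
    The record above mentions automatic calculation of clinical scores such as Child-Pugh. As a rough illustration of that kind of embedded calculation (not the authors' code), a Child-Pugh calculator following the commonly published thresholds might look like the sketch below; the cut-off values should be verified against clinical references before any real use.

        # Illustrative Child-Pugh calculation of the sort the database automates
        # (not the authors' implementation; thresholds follow the commonly published
        # criteria and should be verified against clinical references).
        def child_pugh(bilirubin_mg_dl, albumin_g_dl, inr, ascites, encephalopathy_grade):
            """ascites: 'none' | 'mild' | 'moderate'; encephalopathy_grade: 0-4."""
            points = 0
            points += 1 if bilirubin_mg_dl < 2 else 2 if bilirubin_mg_dl <= 3 else 3
            points += 1 if albumin_g_dl > 3.5 else 2 if albumin_g_dl >= 2.8 else 3
            points += 1 if inr < 1.7 else 2 if inr <= 2.3 else 3
            points += {"none": 1, "mild": 2, "moderate": 3}[ascites]
            points += 1 if encephalopathy_grade == 0 else 2 if encephalopathy_grade <= 2 else 3
            grade = "A" if points <= 6 else "B" if points <= 9 else "C"
            return points, grade

        print(child_pugh(1.5, 3.0, 1.9, "mild", 0))   # e.g. (8, 'B')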

  17. User Registration Systems for Distributed Systems

    Science.gov (United States)

    Murphy, K. J.; Cechini, M.; Pilone, D.; Mitchell, A.

    2010-12-01

    As NASA’s Earth Observing System Data and Information System (EOSDIS) systems have evolved over the years, most of the EOSDIS data are now available to users via anonymous on-line access. Although the changes have improved the dissemination efficiency of earth science data, the anonymous access has made it difficult to characterize users, capture metrics on the value of EOSDIS and provide customized services that benefit users. As the number of web-based applications continues to grow, data centers and application providers have implemented their own user registration systems and provided new tools and interfaces for their registered users. This has led to the creation of independent registration systems for accessing data and interacting with online tools and services. The user profile information maintained at each of these registration systems is not consistent and the registration enforcement varies by system as well. This problem is in no way unique to EOSDIS and represents a general challenge to the distributed computing community. In a study done in 2007(http://www2007.org/papers/paper620.pd), the average user has approximately 7 passwords for about 25 accounts and enters a password 8 times a day. These numbers have only increased in the last three years. To try and address this, a number of solutions have been offered including Single Sign-On solutions using a common backend like Microsoft Active Directory or an LDAP server, trust based identity providers like OpenID, and various forms of authorization delegation like OAuth or SAML/XACML. This talk discusses the differences between authentication and authorization, the state of the more popular user registration solutions available for distributed use, and some of the technical and policy drivers that need to be considered when incorporating a user registration system into your application.

  18. The mining of toxin-like polypeptides from EST database by single residue distribution analysis

    Science.gov (United States)

    2011-01-01

    Background Novel high throughput sequencing technologies require permanent development of bioinformatics data processing methods. Among them, rapid and reliable identification of encoded proteins plays a pivotal role. To search for particular protein families, the amino acid sequence motifs suitable for selective screening of nucleotide sequence databases may be used. In this work, we suggest a novel method for simplified representation of protein amino acid sequences named Single Residue Distribution Analysis, which is applicable both for homology search and database screening. Results Using the procedure developed, a search for amino acid sequence motifs in sea anemone polypeptides was performed, and 14 different motifs with broad and low specificity were discriminated. The adequacy of motifs for mining toxin-like sequences was confirmed by their ability to identify 100% toxin-like anemone polypeptides in the reference polypeptide database. The employment of novel motifs for the search of polypeptide toxins in Anemonia viridis EST dataset allowed us to identify 89 putative toxin precursors. The translated and modified ESTs were scanned using a special algorithm. In addition to direct comparison with the motifs developed, the putative signal peptides were predicted and homology with known structures was examined. Conclusions The suggested method may be used to retrieve structures of interest from the EST databases using simple amino acid sequence motifs as templates. The efficiency of the procedure for directed search of polypeptides is higher than that of most currently used methods. Analysis of 39939 ESTs of sea anemone Anemonia viridis resulted in identification of five protein precursors of earlier described toxins, discovery of 43 novel polypeptide toxins, and prediction of 39 putative polypeptide toxin sequences. In addition, two precursors of novel peptides presumably displaying neuronal function were disclosed. PMID:21281459

  19. Generic Natural Systems Evaluation - Thermodynamic Database Development and Data Management

    International Nuclear Information System (INIS)

    Wolery, T.W.; Sutton, M.

    2011-01-01

    they use a large body of thermodynamic data, generally from a supporting database file, to sort out the various important reactions from a wide spectrum of possibilities, given specified inputs. Usually codes of this kind are used to construct models of initial aqueous solutions that represent initial conditions for some process, although sometimes these calculations also represent a desired end point. Such a calculation might be used to determine the major chemical species of a dissolved component, the solubility of a mineral or mineral-like solid, or to quantify deviation from equilibrium in the form of saturation indices. Reactive transport codes such as TOUGHREACT and NUFT generally require the user to determine which chemical species and reactions are important, and to provide the requisite set of information including thermodynamic data in an input file. Usually this information is abstracted from the output of a geochemical modeling code and its supporting thermodynamic data file. The Yucca Mountain Project (YMP) developed two qualified thermodynamic databases to model geochemical processes, including ones involving repository components such as spent fuel. The first of the two (BSC, 2007a) was for systems containing dilute aqueous solutions only, the other (BSC, 2007b) for systems involving concentrated aqueous solutions and incorporating a model for such based on Pitzer's (1991) equations. A 25 C-only database with similarities to the latter was also developed for the Waste Isolation Pilot Plant (WIPP, cf. Xiong, 2005). The NAGRA/PSI database (Hummel et al., 2002) was developed to support repository studies in Europe. The YMP databases are often used in non-repository studies, including studies of geothermal systems (e.g., Wolery and Carroll, 2010) and CO2 sequestration (e.g., Aines et al., 2011).

  20. Generic Natural Systems Evaluation - Thermodynamic Database Development and Data Management

    Energy Technology Data Exchange (ETDEWEB)

    Wolery, T W; Sutton, M

    2011-09-19

    , meaning that they use a large body of thermodynamic data, generally from a supporting database file, to sort out the various important reactions from a wide spectrum of possibilities, given specified inputs. Usually codes of this kind are used to construct models of initial aqueous solutions that represent initial conditions for some process, although sometimes these calculations also represent a desired end point. Such a calculation might be used to determine the major chemical species of a dissolved component, the solubility of a mineral or mineral-like solid, or to quantify deviation from equilibrium in the form of saturation indices. Reactive transport codes such as TOUGHREACT and NUFT generally require the user to determine which chemical species and reactions are important, and to provide the requisite set of information including thermodynamic data in an input file. Usually this information is abstracted from the output of a geochemical modeling code and its supporting thermodynamic data file. The Yucca Mountain Project (YMP) developed two qualified thermodynamic databases to model geochemical processes, including ones involving repository components such as spent fuel. The first of the two (BSC, 2007a) was for systems containing dilute aqueous solutions only, the other (BSC, 2007b) for systems involving concentrated aqueous solutions and incorporating a model for such based on Pitzer's (1991) equations. A 25 C-only database with similarities to the latter was also developed for the Waste Isolation Pilot Plant (WIPP, cf. Xiong, 2005). The NAGRA/PSI database (Hummel et al., 2002) was developed to support repository studies in Europe. The YMP databases are often used in non-repository studies, including studies of geothermal systems (e.g., Wolery and Carroll, 2010) and CO2 sequestration (e.g., Aines et al., 2011).

  1. RAINBIO: a mega-database of tropical African vascular plants distributions

    Directory of Open Access Journals (Sweden)

    Dauby Gilles

    2016-11-01

    Full Text Available The tropical vegetation of Africa is characterized by high levels of species diversity but is undergoing important shifts in response to ongoing climate change and increasing anthropogenic pressures. Although our knowledge of plant species distribution patterns in the African tropics has been improving over the years, it remains limited. Here we present RAINBIO, a unique comprehensive mega-database of georeferenced records for vascular plants in continental tropical Africa. The geographic focus of the database is the region south of the Sahel and north of Southern Africa, and the majority of data originate from tropical forest regions. RAINBIO is a compilation of 13 datasets either publicly available or personal ones. Numerous in depth data quality checks, automatic and manual via several African flora experts, were undertaken for georeferencing, standardization of taxonomic names and identification and merging of duplicated records. The resulting RAINBIO data allows exploration and extraction of distribution data for 25,356 native tropical African vascular plant species, which represents ca. 89% of all known plant species in the area of interest. Habit information is also provided for 91% of these species.

  2. Calculation of Investments for the Distribution of GPON Technology in the village of Bishtazhin through database

    Directory of Open Access Journals (Sweden)

    MSc. Jusuf Qarkaxhija

    2013-12-01

    Full Text Available According to daily reports, income from Internet services is declining each year. Landline phone services are running at a loss, mobile phone services have become commoditized, and the only bright spot keeping cable operators (ISPs) in positive balance is the income from broadband services (fast Internet, IPTV). Broadband technology is a term that covers multiple methods of distributing information over the Internet at high speed. Some of the broadband technologies are optical fiber, coaxial cable, DSL, wireless, mobile broadband, and satellite connections. The ultimate goal of any broadband service provider is to deliver voice, data and video through a single network, the so-called triple-play service. Internet distribution remains an important issue in Kosovo, particularly in rural zones. Considering the rapid development of these technologies and the different alternatives available, the goal of this paper is to emphasize the need to forecast such an investment and to share experience in this respect. Because the investment involves many factors related to population, geography and several technologies, and because these factors change continuously, the best approach is to store all the data in a database and to use this database to derive the various results. The database allows us to replace the previous manual calculations with an automatic calculation procedure. This way of working improves the workflow, providing all the tools needed to take the right decision about an Internet investment while considering all aspects of that investment.

  3. Antibiotic distribution channels in Thailand: results of key-informant interviews, reviews of drug regulations and database searches.

    Science.gov (United States)

    Sommanustweechai, Angkana; Chanvatik, Sunicha; Sermsinsiri, Varavoot; Sivilaikul, Somsajee; Patcharanarumol, Walaiporn; Yeung, Shunmay; Tangcharoensathien, Viroj

    2018-02-01

    To analyse how antibiotics are imported, manufactured, distributed and regulated in Thailand. We gathered information, on antibiotic distribution in Thailand, in in-depth interviews - with 43 key informants from farms, health facilities, pharmaceutical and animal feed industries, private pharmacies and regulators- and in database and literature searches. In 2016-2017, licensed antibiotic distribution in Thailand involves over 700 importers and about 24 000 distributors - e.g. retail pharmacies and wholesalers. Thailand imports antibiotics and active pharmaceutical ingredients. There is no system for monitoring the distribution of active ingredients, some of which are used directly on farms, without being processed. Most antibiotics can be bought from pharmacies, for home or farm use, without a prescription. Although the 1987 Drug Act classified most antibiotics as "dangerous drugs", it only classified a few of them as prescription-only medicines and placed no restrictions on the quantities of antibiotics that could be sold to any individual. Pharmacists working in pharmacies are covered by some of the Act's regulations, but the quality of their dispensing and prescribing appears to be largely reliant on their competences. In Thailand, most antibiotics are easily and widely available from retail pharmacies, without a prescription. If the inappropriate use of active pharmaceutical ingredients and antibiotics is to be reduced, we need to reclassify and restrict access to certain antibiotics and to develop systems to audit the dispensing of antibiotics in the retail sector and track the movements of active ingredients.

  4. Reliability Evaluation of Distribution System with Distributed Generation

    Science.gov (United States)

    Chen, Guoyan; Zhang, Feng; You, Dahai; Wang, Yong; Lu, Guojun; Zou, Qi; Liu, Hengwei; Qian, Junjie; Xu, Heng

    2017-07-01

    Distribution system reliability assessment is an important part of power system reliability assessment. In recent years, distributed generations (DG) are more and more connected to distribution system because of its flexible and friendly environment features, which imposes a great influence on distribution system reliability. Hence, a reliability evaluation method suitable for distribution system with DG is imperative, which is proposed in this paper. First, a probabilistic model of DG output is established based on the generation characteristics of DG. Second, the island operation mode of distribution system with DG is researched, subsequently, the calculation method of the probability of island successful operation is put forward on the basis of DG model and the load model. Third, a reliability assessment methodology of distribution system with DG is proposed by improving the traditional minimal path algorithm for reliability evaluation of distribution system. Finally, some results are obtained by applying the proposed method to the IEEE-RBTS Bus6 system, which are consistent with the well-known facts. In this way, the proposed method is proved to be reasonable and effective.
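
    The record above proposes calculating the probability of successful island operation from a DG model and a load model. As a generic, hedged illustration of that quantity (not the paper's models), a Monte Carlo estimate of P(DG output >= islanded load) can be sketched as follows; the distributions and parameters are invented for demonstration.

        # Generic Monte Carlo sketch of the quantity discussed above: the probability
        # that an island can be supplied successfully, i.e. that stochastic DG output
        # covers the islanded load. Distributions and parameters are illustrative only.
        import random

        def island_success_probability(dg_sample, load_sample, trials=100_000):
            """Estimate P(DG output >= island load) from two sampling functions."""
            success = sum(1 for _ in range(trials) if dg_sample() >= load_sample())
            return success / trials

        # Example: wind DG with a crude two-state availability model, normally
        # distributed load (values in MW, chosen only for demonstration).
        wind_dg = lambda: (2.0 if random.random() < 0.6 else 0.5) * random.uniform(0.3, 1.0)
        island_load = lambda: max(0.0, random.gauss(0.8, 0.2))

        print(island_success_probability(wind_dg, island_load))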

  5. 75 FR 18255 - Passenger Facility Charge Database System for Air Carrier Reporting

    Science.gov (United States)

    2010-04-09

    ... Facility Charge Database System for Air Carrier Reporting AGENCY: Federal Aviation Administration (FAA... the Passenger Facility Charge (PFC) database system to report PFC quarterly report information. In... the FAA's PFC database system, those air carriers and public agencies participating in the system will...

  6. Loss Allocation in a Distribution System with Distributed Generation Units

    DEFF Research Database (Denmark)

    Lund, Torsten; Nielsen, Arne Hejde; Sørensen, Poul Ejnar

    2007-01-01

    In Denmark, a large part of the electricity is produced by wind turbines and combined heat and power plants (CHPs). Most of them are connected to the network through distribution systems. This paper presents a new algorithm for allocation of the losses in a distribution system with distributed generation. The algorithm is based on a reduced impedance matrix of the network and current injections from loads and production units. With the algorithm, the effect of the covariance between production and consumption can be evaluated. To verify the theoretical results, a model of the distribution system...
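
    The record above allocates losses from a reduced impedance matrix and bus current injections. As a hedged sketch of the generic impedance-matrix (Z-bus) loss-allocation idea that this resembles at a high level (the paper's reduced-matrix formulation is not reproduced), per-bus shares that sum to the total network losses can be computed as follows; the 3-bus numbers are made up.

        # Sketch of impedance-matrix-based loss allocation: total network losses
        # Re{ I^H R I } are split among buses as L_k = Re{ conj(I_k) * sum_j R[k,j] I[j] }.
        # This is the generic Z-bus allocation idea; the paper's reduced-matrix
        # formulation and sign conventions for DG injections are not reproduced here.
        import numpy as np

        def allocate_losses(Zbus, I):
            """Zbus: complex bus impedance matrix; I: complex bus current injections."""
            R = Zbus.real
            share = np.real(np.conj(I) * (R @ I))   # per-bus loss allocation
            assert np.isclose(share.sum(), np.real(np.conj(I) @ R @ I))  # shares sum to total losses
            return share

        # Tiny 3-bus illustration with made-up numbers:
        Zbus = np.array([[0.02 + 0.06j, 0.01 + 0.03j, 0.01 + 0.02j],
                         [0.01 + 0.03j, 0.03 + 0.08j, 0.01 + 0.03j],
                         [0.01 + 0.02j, 0.01 + 0.03j, 0.04 + 0.09j]])
        I = np.array([1.0 - 0.3j, -0.6 + 0.1j, -0.4 + 0.2j])   # generation positive, loads negative
        print(allocate_losses(Zbus, I))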

  7. Application of modern reliability database techniques to military system data

    International Nuclear Information System (INIS)

    Bunea, Cornel; Mazzuchi, Thomas A.; Sarkani, Shahram; Chang, H.-C.

    2008-01-01

    This paper focuses on analysis techniques for modern reliability databases, with an application to military system data. The analysis of the military system database consists of the following steps: clean the data and perform operations on it in order to obtain good estimators; present simple plots of the data; and analyze the data with statistical and probabilistic methods. Each step is dealt with separately and the main results are presented. Competing risks theory is advocated as the mathematical support for the analysis. The general framework of competing risks theory is presented together with simple independent and dependent competing risks models available in the literature. These models are used to identify the reliability and maintenance indicators required by operating personnel. Model selection is based on graphical interpretation of the plotted data

  8. Energy Management of Smart Distribution Systems

    Science.gov (United States)

    Ansari, Bananeh

    Electric power distribution systems interface the end-users of electricity with the power grid. Traditional distribution systems are operated in a centralized fashion with the distribution system owner or operator being the only decision maker. The management and control architecture of distribution systems needs to gradually transform to accommodate the emerging smart grid technologies, distributed energy resources, and active electricity end-users or prosumers. The content of this document concerns with developing multi-task multi-objective energy management schemes for: 1) commercial/large residential prosumers, and 2) distribution system operator of a smart distribution system. The first part of this document describes a method of distributed energy management of multiple commercial/ large residential prosumers. These prosumers not only consume electricity, but also generate electricity using their roof-top solar photovoltaics systems. When photovoltaics generation is larger than local consumption, excess electricity will be fed into the distribution system, creating a voltage rise along the feeder. Distribution system operator cannot tolerate a significant voltage rise. ES can help the prosumers manage their electricity exchanges with the distribution system such that minimal voltage fluctuation occurs. The proposed distributed energy management scheme sizes and schedules each prosumer's ES to reduce the electricity bill and mitigate voltage rise along the feeder. The second part of this document focuses on emergency energy management and resilience assessment of a distribution system. The developed emergency energy management system uses available resources and redundancy to restore the distribution system's functionality fully or partially. The success of the restoration maneuver depends on how resilient the distribution system is. Engineering resilience terminology is used to evaluate the resilience of distribution system. The proposed emergency energy

  9. General, database-driven fast-feedback system for the Stanford Linear Collider

    International Nuclear Information System (INIS)

    Rouse, F.; Allison, S.; Castillo, S.; Gromme, T.; Hall, B.; Hendrickson, L.; Himel, T.; Krauter, K.; Sass, B.; Shoaee, H.

    1991-05-01

    A new feedback system has been developed for stabilizing the SLC beams at many locations. The feedback loops are designed to sample and correct at the 60 Hz repetition rate of the accelerator. Each loop can be distributed across several of the standard 80386 microprocessors which control the SLC hardware. A new communications system, KISNet, has been implemented to pass signals between the microprocessors at this rate. The software is written in a general fashion using the state space formalism of digital control theory. This allows a new loop to be implemented by just setting up the online database and perhaps installing a communications link. 3 refs., 4 figs
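
    The record above describes 60 Hz feedback loops written in the state-space formalism of digital control, with the loop configuration taken from the online database. A generic, hedged sketch of one such discrete-time loop iteration follows; the matrices and gain are toy values, not SLC parameters.

        # Generic discrete-time state-space feedback iteration of the kind the text
        # describes (one correction per 60 Hz pulse). The matrices here are toy values;
        # in the real system the loop configuration comes from the online database.
        import numpy as np

        A = np.array([[1.0, 0.1], [0.0, 0.9]])   # plant state transition per pulse
        B = np.array([[0.0], [0.1]])             # actuator influence
        C = np.array([[1.0, 0.0]])               # measurement (e.g. beam position)
        K = np.array([[2.0, 1.0]])               # feedback gain (would be designed offline)

        x = np.array([[1.0], [0.0]])             # initial deviation from the desired orbit
        for pulse in range(10):                  # ten 60 Hz machine pulses
            u = -K @ x                           # correction computed from the current state
            x = A @ x + B @ u                    # plant response to the applied correction
            y = C @ x
            print(f"pulse {pulse}: measurement = {y.item():+.4f}")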

  10. Development of the Plasma Movie Database System for JT-60

    International Nuclear Information System (INIS)

    Sueoka, M.; Kawamata, Y.; Kurihara, K.

    2006-01-01

    A plasma movie is generally regarded as one of the most efficient ways to see what plasma discharge has been conducted in an experiment. With this motivation, we have developed and operated a real-time plasma shape visualization system for over ten years. The current plasma movie is composed of (1) a video camera picture looking at the plasma, (2) a computer graphic (CG) picture, and (3) a magnetic probe signal as a sound channel. (1) The plasma video is provided by a standard video camera mounted at a viewing port of the vacuum vessel, looking at a plasma poloidal cross section. (2) The plasma shape CG movie is provided by the plasma shape visualization system, which calculates the plasma shape in real time using the CCS method [Kurihara, K., Fusion Engineering and Design, 51-52, 1049 (2000)]. Thirty snapshot pictures per second are drawn by the graphics processor. (3) The sound in the movie is the raw signal of a magnetic pick-up coil. This sound reflects the plasma rotation frequency; a smooth, high-pitched tone generally indicates a good plasma. In order to use this movie efficiently, we have developed a new system with the following functions: (a) to store a plasma movie in the movie database system automatically, combined with the plasma shape CG and the sound, according to the discharge sequence; and (b) to make the plasma movie available (downloadable) for experiment data analyses on the Web site. The plasma movie capture system receives the timing signal according to the JT-60 discharge sequence and starts to record a plasma movie automatically. The movie is stored in MPEG2 format on a RAID disk. In addition, the plasma movie capture system transfers a movie file in MPEG4 format to the plasma movie web server at the same time. In response to a user's request, the plasma movie web server transfers the stored movie data immediately. The movie data volume for the MPEG2 format is about 50 Mbyte/shot (65 s discharge), and that for the MPEG4 format is about 7 Mbyte

  11. Protection of Distribution Systems with Distributed Energy Resources

    DEFF Research Database (Denmark)

    Bak-Jensen, Birgitte; Browne, Matthew; Calone, Roberto

    The usage of Distributed Energy Resources (DER) in utilities around the world is expected to increase significantly. The existing distribution systems have been generally designed for unidirectional power flow, and feeders are opened and locked out for any fault within. However, in the future this practice may lead to a loss of significant generation where each feeder may have significant DER penetration. Also, utilities have started to investigate islanding operation of distribution systems with DER as a way to improve the reliability of the power supply to customers. This report is the result of 17 months of work of the Joint Working Group B5/C6.26/CIRED “Protection of Distribution Systems with Distributed Energy Resources”. The working group used the CIGRE report TB421 “The impact of Renewable Energy Sources and Distributed Generation on Substation Protection and Automation”, published...

  12. Multiple brain atlas database and atlas-based neuroimaging system.

    Science.gov (United States)

    Nowinski, W L; Fang, A; Nguyen, B T; Raphel, J K; Jagannathan, L; Raghavan, R; Bryan, R N; Miller, G A

    1997-01-01

    For the purpose of developing multiple, complementary, fully labeled electronic brain atlases and an atlas-based neuroimaging system for analysis, quantification, and real-time manipulation of cerebral structures in two and three dimensions, we have digitized, enhanced, segmented, and labeled the following print brain atlases: Co-Planar Stereotaxic Atlas of the Human Brain by Talairach and Tournoux, Atlas for Stereotaxy of the Human Brain by Schaltenbrand and Wahren, Referentially Oriented Cerebral MRI Anatomy by Talairach and Tournoux, and Atlas of the Cerebral Sulci by Ono, Kubik, and Abernathey. Three-dimensional extensions of these atlases have been developed as well. All two- and three-dimensional atlases are mutually preregistered and may be interactively registered with an actual patient's data. An atlas-based neuroimaging system has been developed that provides support for reformatting, registration, visualization, navigation, image processing, and quantification of clinical data. The anatomical index contains about 1,000 structures and over 400 sulcal patterns. Several new applications of the brain atlas database also have been developed, supported by various technologies such as virtual reality, the Internet, and electronic publishing. Fusion of information from multiple atlases assists the user in comprehensively understanding brain structures and identifying and quantifying anatomical regions in clinical data. The multiple brain atlas database and atlas-based neuroimaging system have substantial potential impact in stereotactic neurosurgery and radiotherapy by assisting in visualization and real-time manipulation in three dimensions of anatomical structures, in quantitative neuroradiology by allowing interactive analysis of clinical data, in three-dimensional neuroeducation, and in brain function studies.

  13. 78 FR 2363 - Notification of Deletion of a System of Records; Automated Trust Funds Database

    Science.gov (United States)

    2013-01-11

    ... Database AGENCY: Animal and Plant Health Inspection Service, USDA. ACTION: Notice of deletion of a system... establishing the Automated Trust Funds (ATF) database system of records. The Federal Information Security... Integrity Act of 1982, Public Law 97-255, provided authority for the system. The ATF database has been...

  14. Visualizing Concurrency Control Algorithms for Real-Time Database Systems

    Directory of Open Access Journals (Sweden)

    Olusegun Folorunso

    2008-11-01

    Full Text Available This paper describes an approach to visualizing concurrency control (CC) algorithms for real-time database systems (RTDBs). The approach is based on the principles of software visualization, which have been applied in related fields. The Model-View-Controller (MVC) architecture is used to alleviate the black-box syndrome associated with studying the behaviour of concurrency control algorithms for RTDBs. We propose an exploratory visualization tool that assists the RTDB designer in understanding the actual behaviour of the chosen concurrency control algorithms and in evaluating their performance. We demonstrate the feasibility of our approach using an optimistic concurrency control model as our case study. The developed tool substantiates earlier simulation-based performance studies by exposing spikes, when visualized dynamically, that are not observed in the usual static graphs. Ultimately, this tool helps address the problem of contradictory assumptions about CC in RTDBs.
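
    The record above uses an optimistic concurrency control model as its case study. As a hedged, textbook-style sketch of that model (not the authors' tool), backward validation of a finishing transaction against transactions that committed while it ran can be written as follows.

        # Minimal backward-validation sketch of optimistic concurrency control (OCC):
        # a transaction is restarted if any transaction that committed while it was
        # running wrote an item that it read. Generic textbook form, for illustration.
        class Transaction:
            def __init__(self, tid, start_tn):
                self.tid = tid
                self.start_tn = start_tn     # transaction number when this txn began
                self.read_set, self.write_set = set(), set()

        committed = []          # list of (commit_tn, write_set) for finished transactions
        next_tn = 0

        def validate_and_commit(txn):
            """Return True and record the writes if validation succeeds, else False (restart)."""
            global next_tn
            for commit_tn, wset in committed:
                if commit_tn > txn.start_tn and wset & txn.read_set:
                    return False                 # conflict: read an item overwritten mid-flight
            next_tn += 1
            committed.append((next_tn, set(txn.write_set)))
            return True

        t1 = Transaction("T1", start_tn=0); t1.read_set = {"x"}; t1.write_set = {"y"}
        t2 = Transaction("T2", start_tn=0); t2.read_set = {"y"}; t2.write_set = {"y"}
        print(validate_and_commit(t1))   # True  -- nothing committed since T1 started
        print(validate_and_commit(t2))   # False -- T1 committed a write to 'y' that T2 read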

  15. Representing clinical communication knowledge through database management system integration.

    Science.gov (United States)

    Khairat, Saif; Craven, Catherine; Gong, Yang

    2012-01-01

    Clinical communication failures are considered the leading cause of medical errors [1]. The complexity of the clinical culture and the significant variance in training and education levels form a challenge to enhancing communication within the clinical team. In order to improve communication, a comprehensive understanding of the overall communication process in health care is required. In an attempt to further understand clinical communication, we conducted a thorough methodology literature review to identify strengths and limitations of previous approaches [2]. Our research proposes a new data collection method to study the clinical communication activities among Intensive Care Unit (ICU) clinical teams with a primary focus on the attending physician. In this paper, we present the first ICU communication instrument, and, we introduce the use of database management system to aid in discovering patterns and associations within our ICU communications data repository.

  16. Seismic Search Engine: A distributed database for mining large scale seismic data

    Science.gov (United States)

    Liu, Y.; Vaidya, S.; Kuzma, H. A.

    2009-12-01

    The International Monitoring System (IMS) of the CTBTO collects terabytes worth of seismic measurements from many receiver stations situated around the earth with the goal of detecting underground nuclear testing events and distinguishing them from other benign, but more common events such as earthquakes and mine blasts. The International Data Center (IDC) processes and analyzes these measurements, as they are collected by the IMS, to summarize event detections in daily bulletins. Thereafter, the data measurements are archived into a large format database. Our proposed Seismic Search Engine (SSE) will facilitate a framework for data exploration of the seismic database as well as the development of seismic data mining algorithms. Analogous to GenBank, the annotated genetic sequence database maintained by NIH, through SSE, we intend to provide public access to seismic data and a set of processing and analysis tools, along with community-generated annotations and statistical models to help interpret the data. SSE will implement queries as user-defined functions composed from standard tools and models. Each query is compiled and executed over the database internally before reporting results back to the user. Since queries are expressed with standard tools and models, users can easily reproduce published results within this framework for peer-review and making metric comparisons. As an illustration, an example query is “what are the best receiver stations in East Asia for detecting events in the Middle East?” Evaluating this query involves listing all receiver stations in East Asia, characterizing known seismic events in that region, and constructing a profile for each receiver station to determine how effective its measurements are at predicting each event. The results of this query can be used to help prioritize how data is collected, identify defective instruments, and guide future sensor placements.

  17. The Impact of Connecting Distributed Generation to the Distribution System

    Directory of Open Access Journals (Sweden)

    E. V. Mgaya

    2007-01-01

    Full Text Available This paper deals with the general problem of utilizing renewable energy sources to generate electric energy. Recent advances in renewable energy power generation technologies, e.g., wind and photovoltaic (PV) technologies, have led to increased interest in the application of these generation devices as distributed generation (DG) units. This paper presents the results of an investigation into possible improvements in the system voltage profile and reduction of system losses when adding wind power DG (wind-DG) to a distribution system. Simulation results are given for a case study, and these show that properly sized wind DGs, placed at carefully selected sites near key distribution substations, could be very effective in improving the distribution system voltage profile and reducing power losses, and hence could improve the effective capacity of the system.

  18. Control and operation of distributed generation in distribution systems

    DEFF Research Database (Denmark)

    Mahat, Pukar; Chen, Zhe; Bak-Jensen, Birgitte

    2011-01-01

    Many distribution systems nowadays have significant penetration of distributed generation (DG) and thus, islanding operation of these distribution systems is becoming a viable option for economical and technical reasons. The DG should operate optimally during both grid-connected and island...... algorithm, which uses average rate of change of frequency (Af5) and real power shift (RPS), in the islanded mode. RPS will increase or decrease the power set point of the generator with increasing or decreasing system frequency, respectively. Simulation results show that the proposed method can operate...

  19. High-precision positioning system of four-quadrant detector based on the database query

    Science.gov (United States)

    Zhang, Xin; Deng, Xiao-guo; Su, Xiu-qin; Zheng, Xiao-qiang

    2015-02-01

    The fine pointing mechanism of the Acquisition, Pointing and Tracking (APT) system in free-space laser communication usually uses a four-quadrant detector (QD) to point and track the laser beam accurately. The positioning precision of the QD is one of the key factors in the pointing accuracy of the APT system. A positioning system based on FPGA and DSP is designed in this paper, which handles A/D sampling, the positioning algorithm and the control of the fast swing mirror. Starting from the working principle of the QD, we analyze the positioning error of the facular (spot) center calculated by the universal algorithm when the facular energy obeys a Gaussian distribution. A database is built by calculation and simulation in MATLAB, in which the facular center calculated by the universal algorithm is mapped to the facular center of the Gaussian beam, and the database is stored in two pieces of E2PROM serving as the external memory of the DSP. The facular center of the Gaussian beam is then looked up in the database by the DSP, using the facular center calculated by the universal algorithm as the key. The experimental results show that the positioning accuracy of the high-precision positioning system is much better than that obtained with the universal algorithm alone.
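    The "universal algorithm" referred to above is the standard four-quadrant centroid estimate; the Python sketch below shows that estimate and the table-lookup correction step. The correction table, its granularity and the mapping function are placeholders, not the values stored in the paper's E2PROM.

      import math

      def qd_center_universal(qa, qb, qc, qd):
          # Standard four-quadrant ('universal') estimate of the spot centre
          # from the four quadrant signals (here A and D taken on the +x side,
          # A and B on the +y side).
          total = qa + qb + qc + qd
          x = ((qa + qd) - (qb + qc)) / total
          y = ((qa + qb) - (qc + qd)) / total
          return x, y

      # Placeholder correction table built offline (the paper builds its table
      # by MATLAB simulation and stores it in E2PROM for the DSP to query).
      STEP = 0.01
      correction = {round(i * STEP, 2): 0.3 * math.atanh(min(max(i * STEP, -0.99), 0.99))
                    for i in range(-100, 101)}

      def qd_center_corrected(xu, yu):
          # Look up the pre-computed true centre for the universal estimate.
          return correction[round(xu, 2)], correction[round(yu, 2)]

      xu, yu = qd_center_universal(1.2, 0.8, 0.7, 1.1)
      print(qd_center_corrected(xu, yu))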

  20. Wind Power in Electrical Distribution Systems

    DEFF Research Database (Denmark)

    Chen, Zhe

    2013-01-01

    In recent years, wind power has experienced rapid growth, and a large number of wind turbines/wind farms have been installed and connected to power systems. In addition to the large centralised wind farms connected to transmission grids, many distributed wind turbines and wind farms are operated...... as distributed generators in distribution systems. This paper discusses the issues of wind turbines in distribution systems. Wind power conversion systems are briefly introduced, the basic features and technical characteristics of distributed wind power systems are described, and the main technical demands...

  1. Islanding Operation of Distribution System with Distributed Generations

    DEFF Research Database (Denmark)

    Mahat, Pukar; Chen, Zhe; Bak-Jensen, Birgitte

    2010-01-01

    The growing interest in distributed generation (DG), due to environmental concerns and various other reasons, has resulted in significant penetration of DGs in many distribution systems worldwide. DGs come with many benefits. One of the benefits is improved reliability by supplying load during power...

  2. The Nuclear Science References (NSR) Database and Web Retrieval System

    OpenAIRE

    Pritychenko, B.; Betak, E.; Kellett, M. A.; Singh, B.; Totans, J.

    2011-01-01

    The Nuclear Science References (NSR) database, together with its associated Web interface, is the world's only comprehensive source of easily accessible low- and intermediate-energy nuclear physics bibliographic information for more than 200,000 articles since the beginning of nuclear science. The weekly-updated NSR database provides essential support for nuclear data evaluation, compilation and research activities. The principles of the database and Web application development and maintenance...

  3. System maintenance test plan for the TWRS controlled baseline database system

    International Nuclear Information System (INIS)

    Spencer, S.G.

    1998-01-01

    The TWRS [Tank Waste Remediation System] Controlled Baseline Database, formerly known as the Performance Measurement Control System, is used to track and monitor TWRS project management baseline information. This document contains the maintenance testing approach for software testing of the TCBD system once SCR/PRs are implemented.

  4. Runtime Verification for Decentralised and Distributed Systems

    NARCIS (Netherlands)

    Francalanza, Adrian; Pérez, Jorge A.; Sánchez, César; Bartocci, Ezio; Falcone, Yliès

    This chapter surveys runtime verification research related to distributed systems. We report solutions that study how to monitor systems with some distributed characteristic, solutions that use a distributed platform for performing a monitoring task, and foundational works that present semantics for

  5. RELIABILITY ANALYSIS OF POWER DISTRIBUTION SYSTEMS

    Directory of Open Access Journals (Sweden)

    Popescu V.S.

    2012-04-01

    Full Text Available Power distribution systems are basic parts of power systems, and the reliability of these systems is at present a key issue for power engineering development and requires special attention. The operation of distribution systems is accompanied by a number of factors that produce random data on a large number of unplanned interruptions. Research has shown that the predominant factors that have a significant influence on the reliability of distribution systems are: weather conditions (39.7%), defects in equipment (25%) and unknown random factors (20.1%). The article studies the influence of this random behaviour and presents reliability estimates for predominantly rural electrical distribution systems.

  6. Vespucci: a system for building annotated databases of nascent transcripts.

    Science.gov (United States)

    Allison, Karmel A; Kaikkonen, Minna U; Gaasterland, Terry; Glass, Christopher K

    2014-02-01

    Global run-on sequencing (GRO-seq) is a recent addition to the series of high-throughput sequencing methods that enables new insights into transcriptional dynamics within a cell. However, GRO-sequencing presents new algorithmic challenges, as existing analysis platforms for ChIP-seq and RNA-seq do not address the unique problem of identifying transcriptional units de novo from short reads located all across the genome. Here, we present a novel algorithm for de novo transcript identification from GRO-sequencing data, along with a system that determines transcript regions, stores them in a relational database and associates them with known reference annotations. We use this method to analyze GRO-sequencing data from primary mouse macrophages and derive novel quantitative insights into the extent and characteristics of non-coding transcription in mammalian cells. In doing so, we demonstrate that Vespucci expands existing annotations for mRNAs and lincRNAs by defining the primary transcript beyond the polyadenylation site. In addition, Vespucci generates assemblies for un-annotated non-coding RNAs such as those transcribed from enhancer-like elements. Vespucci thereby provides a robust system for defining, storing and analyzing diverse classes of primary RNA transcripts that are of increasing biological interest.

  7. Preliminary design of the database and registration system for the national malignant tumor interventional therapy

    International Nuclear Information System (INIS)

    Hu Di; Zeng Jinjin; Wang Jianfeng; Zhai Renyou

    2010-01-01

    Objective: This research is one of the sub-projects of 'The comparative study of the standards of interventional therapies and the evaluation of the long-term and middle-term effects for common malignant tumors', which belongs to the National Key Technologies R and D Program of the eleventh five-year plan. Based on the project, the authors needed to establish an international standard in order to set up the national tumor interventional therapy database and registration system. Methods: The system, comprising programs for downloading, self-management and automatic integration, was written in the Java language. Results: The database and registration system for national tumor interventional therapy was successfully set up, and it could handle both simple and complex queries. The software worked well through the initial debugging. Conclusion: The national tumor interventional therapy database and registration system can not only accurately indicate the nationwide adoption rate of interventional therapy, compare the results of different methods, provide the latest news concerning interventional therapy and thereby promote academic exchange between hospitals, but also provide information about the distribution of interventional physicians and the quantity and variety of interventional materials consumed, so that medical costs can be reduced. (authors)

  8. Protection of Distribution Systems with Distributed Energy Resources

    DEFF Research Database (Denmark)

    Bak-Jensen, Birgitte; Browne, Matthew; Calone, Roberto

    of 17 months of work of the Joint Working Group B5/C6.26/CIRED “Protection of Distribution Systems with Distributed Energy Resources”. The working group used the CIGRE report TB421 “The impact of Renewable Energy Sources and Distributed Generation on Substation Protection and Automation”, published......The usage of Distributed Energy Resources (DER) in utilities around the world is expected to increase significantly. The existing distribution systems have been generally designed for unidirectional power flow, and feeders are opened and locked out for any fault within. However, in the future...... by WG B5.34 as the entry document for the work on this report. In doing so, the group aligned the content and the scope of this report, the network structures considered, possible islanding, standardized communication and adaptive protection, interface protection, connection schemes and protection...

  9. Review on Islanding Operation of Distribution System with Distributed Generation

    DEFF Research Database (Denmark)

    Mahat, Pukar; Chen, Zhe; Bak-Jensen, Birgitte

    2011-01-01

    The growing environmental concern and various benefits of distributed generation (DG) have resulted in significant penetration of DG in many distribution systems worldwide. One of the major expected benefits of DG is the improvement in the reliability of power supply by supplying load during power...... outage by operating in an island mode. However, there are many challenges to overcome before islanding operation of a distribution system with DG can become a viable solution in future. This paper reviews some of the major challenges with islanding operation and explores some possible solutions...

  10. Distributed radiation protection console system

    International Nuclear Information System (INIS)

    Chhokra, R.S.; Deshpande, V.K.; Mishra, H.; Rajeev, K.P.; Thakur, Bipla B.; Munj, Niket

    2004-01-01

    Radiation exposure control is one of the most important aspects in any nuclear facility. It encompasses continuous monitoring of the various areas of the facility to detect any increase in the radiation level and/or the air activity level beyond preset limits and to alarm the O and M personnel working in these areas. Detection and measurement of the radiation level and the air activity level are carried out by a number of monitors installed in the areas. These monitors include Area Gamma Monitors, Continuous Air Monitors, Pu-In-Air Monitors, Criticality Monitors etc. Traditionally, these measurements are displayed and recorded on a Central Radiation Protection Console (CRPC), which is located in the central control room of the facility. This methodology suffers from the shortcoming that any worker required to enter a work area has to inquire about the radiation status of the area from the CRPC, or gets to know it directly from the installed monitors only after entering the area. This shortcoming can lead to avoidable delays in attending to the work or to unwanted exposure. The authors have designed and developed a system called the Distributed Radiation Protection Console (DRPC) to overcome this shortcoming. A DRPC is a console which is located outside the entrance of a given area and displays the radiation status of the area. It presents to the health physicist and the plant operators a graphic overview of the radiation and air activity levels in the particular area of the plant. It also provides audio-visual annunciation of the alarm status. Each radioactive area in a nuclear facility will have its own DRPC, which will receive as its inputs the analog and digital signals from the radiation monitoring instruments installed in the area and will not only show those readings on its video graphic screen but will also provide warning messages and instructions to the personnel entering the active areas. The various DRPCs can be integrated into a Local Area Network, where the

  11. Water sample-collection and distribution system

    Science.gov (United States)

    Brooks, R. R.

    1978-01-01

    The collection and distribution system samples water from six designated stations, filters it if desired, and delivers it to various analytical sensors. The system may be controlled by the Water Monitoring Data Acquisition System or operated manually.

  12. Declarative testing and deployment of distributed systems

    NARCIS (Netherlands)

    Van der Burg, S.; Dolstra, E.

    2010-01-01

    System administrators and developers who deploy distributed systems have to deal with a deployment process that is largely manual and hard to reproduce. This paper describes how networks of computer systems can be reproducibly and automatically deployed from declarative specifications.

  13. Research in Distributed Real-Time Systems

    Science.gov (United States)

    Mukkamala, R.

    1997-01-01

    This document summarizes the progress we have made on our study of issues concerning the schedulability of real-time systems. Our study has produced several results on scalability issues in distributed real-time systems. In particular, we have used our techniques to resolve schedulability issues in distributed systems with end-to-end requirements. During the next year (1997-98), we propose to extend the current work to address the modeling and workload characterization issues in distributed real-time systems. In particular, we propose to investigate the effect of different workload models and component models on the design and the subsequent performance of distributed real-time systems.

  14. DIAMONDS: Engineering Distributed Object Systems

    National Research Council Canada - National Science Library

    Cheng, Evan

    1997-01-01

    This report describes DIAMONDS, a research project at Syracuse University, that is dedicated to producing both a methodology and corresponding tools to assist in the development of heterogeneous distributed software...

  15. Geographical Distribution of Biomass Carbon in Tropical Southeast Asian Forests: A Database

    Energy Technology Data Exchange (ETDEWEB)

    Brown, S.

    2002-02-07

    A database was generated of estimates of geographically referenced carbon densities of forest vegetation in tropical Southeast Asia for 1980. A geographic information system (GIS) was used to incorporate spatial databases of climatic, edaphic, and geomorphological indices and vegetation to estimate potential (i.e., in the absence of human intervention and natural disturbance) carbon densities of forests. The resulting map was then modified to estimate actual 1980 carbon density as a function of population density and climatic zone. The database covers the following 13 countries: Bangladesh, Brunei, Cambodia (Campuchea), India, Indonesia, Laos, Malaysia, Myanmar (Burma), Nepal, the Philippines, Sri Lanka, Thailand, and Vietnam. The data sets within this database are provided in three file formats: ARC/INFO™ exported integer grids, ASCII (American Standard Code for Information Interchange) files formatted for raster-based GIS software packages, and generic ASCII files with x, y coordinates for use with non-GIS software packages. This database includes ten ARC/INFO exported integer grid files (five with the pixel size 3.75 km x 3.75 km and five with the pixel size 0.25 degree longitude x 0.25 degree latitude) and 27 ASCII files. The first ASCII file contains the documentation associated with this database. Twenty-four of the ASCII files were generated by means of the ARC/INFO GRIDASCII command and can be used by most raster-based GIS software packages. The 24 files can be subdivided into two groups of 12 files each. These files contain real data values representing actual carbon and potential carbon density in Mg C/ha (1 megagram = 10⁶ grams) and integer-coded values for country name, Weck's Climatic Index, ecofloristic zone, elevation, forest or non-forest designation, population density, mean annual precipitation, slope, soil texture, and vegetation classification. One set of 12 files contains these data at a spatial resolution of 3.75 km

  16. Geographical Distribution of Biomass Carbon in Tropical Southeast Asian Forests: A Database

    Energy Technology Data Exchange (ETDEWEB)

    Brown, S

    2001-05-22

    A database was generated of estimates of geographically referenced carbon densities of forest vegetation in tropical Southeast Asia for 1980. A geographic information system (GIS) was used to incorporate spatial databases of climatic, edaphic, and geomorphological indices and vegetation to estimate potential (i.e., in the absence of human intervention and natural disturbance) carbon densities of forests. The resulting map was then modified to estimate actual 1980 carbon density as a function of population density and climatic zone. The database covers the following 13 countries: Bangladesh, Brunei, Cambodia (Campuchea), India, Indonesia, Laos, Malaysia, Myanmar (Burma), Nepal, the Philippines, Sri Lanka, Thailand, and Vietnam. The data sets within this database are provided in three file formats: ARC/INFO™ exported integer grids, ASCII (American Standard Code for Information Interchange) files formatted for raster-based GIS software packages, and generic ASCII files with x, y coordinates for use with non-GIS software packages. This database includes ten ARC/INFO exported integer grid files (five with the pixel size 3.75 km x 3.75 km and five with the pixel size 0.25 degree longitude x 0.25 degree latitude) and 27 ASCII files. The first ASCII file contains the documentation associated with this database. Twenty-four of the ASCII files were generated by means of the ARC/INFO GRIDASCII command and can be used by most raster-based GIS software packages. The 24 files can be subdivided into two groups of 12 files each. These files contain real data values representing actual carbon and potential carbon density in Mg C/ha (1 megagram = 10⁶ grams) and integer-coded values for country name, Weck's Climatic Index, ecofloristic zone, elevation, forest or non-forest designation, population density, mean annual precipitation, slope, soil texture, and vegetation classification. One set of 12 files contains these data at a spatial resolution of 3.75 km, whereas the

  17. CancerHSP: anticancer herbs database of systems pharmacology

    Science.gov (United States)

    Tao, Weiyang; Li, Bohui; Gao, Shuo; Bai, Yaofei; Shar, Piar Ali; Zhang, Wenjuan; Guo, Zihu; Sun, Ke; Fu, Yingxue; Huang, Chao; Zheng, Chunli; Mu, Jiexin; Pei, Tianli; Wang, Yuan; Li, Yan; Wang, Yonghua

    2015-06-01

    The numerous natural products and their bioactivity potentially afford an extraordinary resource for new drug discovery and have been employed in cancer treatment. However, the underlying pharmacological mechanisms of most natural anticancer compounds remain elusive, which has become one of the major obstacles in developing novel effective anticancer agents. Here, to address these unmet needs, we developed an anticancer herbs database of systems pharmacology (CancerHSP), which records anticancer herb-related information through manual curation. Currently, CancerHSP contains 2439 anticancer herbal medicines with 3575 anticancer ingredients. For each ingredient, the molecular structure and nine key ADME parameters are provided. We also provide the anticancer activities of these compounds based on 492 different cancer cell lines. Further, the protein targets of the compounds are predicted by state-of-the-art methods or collected from the literature. CancerHSP will help reveal the molecular mechanisms of natural anticancer products and accelerate anticancer drug development, and especially facilitate future investigations on drug repositioning and drug discovery. CancerHSP is freely available on the web at http://lsp.nwsuaf.edu.cn/CancerHSP.php.

  18. Representativeness of the Spinal Cord Injury Model Systems National Database.

    Science.gov (United States)

    Ketchum, Jessica M; Cuthbert, Jeffrey P; Deutsch, Anne; Chen, Yuying; Charlifue, Susan; Chen, David; Dijkers, Marcel P; Graham, James E; Heinemann, Allen W; Lammertse, Daniel P; Whiteneck, Gale G

    2018-02-01

    Secondary analysis of prospectively collected observational data. To assess the representativeness of the Spinal Cord Injury Model Systems National Database (SCIMS-NDB) of all adults aged 18 years or older receiving inpatient rehabilitation in the United States (US) for new onset traumatic spinal cord injury (TSCI). Inpatient rehabilitation centers in the US. We compared demographic, functional status, and injury characteristics (nine categorical variables comprising 46 categories and two continuous variables) between the SCIMS-NDB (N = 5969) and UDS-PRO/eRehabData (N = 99,142) cases discharged from inpatient rehabilitation in 2000-2010. Negligible differences exist for age categories, sex, race/ethnicity, marital status, FIM Motor score, and time from injury to rehabilitation admission. Important differences (>10%) exist in mean age and preinjury occupational status; the SCIMS-NDB sample was younger and included a higher percentage of individuals who were employed (62.7 vs. 41.7%) and fewer who were retired (10.2 vs. 36.1%). Adults in the SCIMS-NDB are largely representative of the population of adults receiving inpatient rehabilitation for new onset TSCI in the US. However, users of the SCIMS-NDB may need to adjust statistically for differences in age and preinjury occupational status to improve generalizability of findings.

  19. The relational database system of KM3NeT

    Science.gov (United States)

    Albert, Arnauld; Bozza, Cristiano

    2016-04-01

    The KM3NeT Collaboration is building a new generation of neutrino telescopes in the Mediterranean Sea. For these telescopes, a relational database is designed and implemented for several purposes, such as the centralised management of accounts, the storage of all documentation about components and the status of the detector and information about slow control and calibration data. It also contains information useful during the construction and the data acquisition phases. Highlights in the database schema, storage and management are discussed along with design choices that have an impact on performance. In most cases, the database is not accessed directly by applications, but via a custom-designed Web application server.

  20. On Distributed Port-Hamiltonian Process Systems

    NARCIS (Netherlands)

    Lopezlena, Ricardo; Scherpen, Jacquelien M.A.

    2004-01-01

    In this paper we use the term distributed port-Hamiltonian Process Systems (DPHPS) to refer to the result of merging the theory of distributed Port-Hamiltonian systems (DPHS) with the theory of process systems (PS). Such a concept is useful for combining the systematic interconnection of PHS with the

  1. A brief introduction to distributed systems

    NARCIS (Netherlands)

    van Steen, Maarten; Tanenbaum, Andrew S.

    2016-01-01

    Distributed systems are by now commonplace, yet remain an often difficult area of research. This is partly explained by the many facets of such systems and the inherent difficulty of isolating these facets from each other. In this paper we provide a brief overview of distributed systems: what they

  2. Active In-Database Processing to Support Ambient Assisted Living Systems

    Directory of Open Access Journals (Sweden)

    Wagner O. de Morais

    2014-08-01

    Full Text Available As an alternative to the existing software architectures that underpin the development of smart homes and ambient assisted living (AAL) systems, this work presents a database-centric architecture that takes advantage of active databases and in-database processing. Current platforms supporting AAL systems use database management systems (DBMSs) exclusively for data storage. Active databases employ database triggers to detect and react to events taking place inside or outside of the database. DBMSs can be extended with stored procedures and functions that enable in-database processing. This means that the data processing is integrated and performed within the DBMS. The feasibility and flexibility of the proposed approach were demonstrated with the implementation of three distinct AAL services. The active database was used to detect bed-exits and to discover common room transitions and deviations during the night. In-database machine learning methods were used to model early night behaviors. Consequently, active in-database processing avoids transferring sensitive data outside the database, and this improves performance, security and privacy. Furthermore, centralizing the computation into the DBMS facilitates code reuse, adaptation and maintenance. These are important system properties that take into account the evolving heterogeneity of users, their needs and the devices that are characteristic of smart homes and AAL systems. Therefore, DBMSs can provide capabilities to address requirements for scalability, security, privacy, dependability and personalization in applications of smart environments in healthcare.
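    As a minimal illustration of the active-database pattern described above (a trigger reacting to events inside the database, with no application-side polling), the following uses SQLite from Python. The table names and the bed-exit rule are placeholders, not the DBMS or schema used in the paper.

      import sqlite3

      db = sqlite3.connect(":memory:")
      db.executescript("""
      CREATE TABLE bed_pressure (ts TEXT, occupied INTEGER);
      CREATE TABLE alerts (ts TEXT, message TEXT);

      -- Active rule: whenever a new reading shows the bed is no longer
      -- occupied, record a bed-exit alert inside the database itself.
      CREATE TRIGGER bed_exit AFTER INSERT ON bed_pressure
      WHEN NEW.occupied = 0
      BEGIN
          INSERT INTO alerts VALUES (NEW.ts, 'bed exit detected');
      END;
      """)

      db.execute("INSERT INTO bed_pressure VALUES ('2014-08-01T02:13', 0)")
      print(db.execute("SELECT * FROM alerts").fetchall())
      # [('2014-08-01T02:13', 'bed exit detected')]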

  3. Intelligent Systems for Power Management and Distribution

    Science.gov (United States)

    Button, Robert M.

    2002-01-01

    The motivation behind an advanced technology program to develop intelligent power management and distribution (PMAD) systems is described. The program concentrates on developing digital control and distributed processing algorithms for PMAD components and systems to improve their size, weight, efficiency, and reliability. Specific areas of research in developing intelligent DC-DC converters and distributed switchgear are described. Results from recent development efforts are presented along with expected future benefits to the overall PMAD system performance.

  4. Distributed Cognition and Distributed Morality: Agency, Artifacts and Systems.

    Science.gov (United States)

    Heersmink, Richard

    2017-04-01

    There are various philosophical approaches and theories describing the intimate relation people have to artifacts. In this paper, I explore the relation between two such theories, namely distributed cognition and distributed morality theory. I point out a number of similarities and differences in these views regarding the ontological status they attribute to artifacts and the larger systems they are part of. Having evaluated and compared these views, I continue by focussing on the way cognitive artifacts are used in moral practice. I specifically conceptualise how such artifacts (a) scaffold and extend moral reasoning and decision-making processes, (b) have a certain moral status which is contingent on their cognitive status, and (c) whether responsibility can be attributed to distributed systems. This paper is primarily written for those interested in the intersection of cognitive and moral theory as it relates to artifacts, but also for those independently interested in philosophical debates in extended and distributed cognition and ethics of (cognitive) technology.

  5. The Nuclear Science References (NSR) database and Web Retrieval System

    International Nuclear Information System (INIS)

    Pritychenko, B.; Betak, E.; Kellett, M.A.; Singh, B.; Totans, J.

    2011-01-01

    The Nuclear Science References (NSR) database together with its associated Web interface is the world's only comprehensive source of easily accessible low- and intermediate-energy nuclear physics bibliographic information for more than 200,000 articles since the beginning of nuclear science. The weekly updated NSR database provides essential support for nuclear data evaluation, compilation and research activities. The principles of the database and Web application development and maintenance are described. Examples of nuclear structure, reaction and decay applications are specifically included. The complete NSR database is freely available at the websites of the National Nuclear Data Center (http://www.nndc.bnl.gov/nsr) and the International Atomic Energy Agency (http://www-nds.iaea.org/nsr).

  6. Jaguar: Extending the Predator Database System with JAVA

    National Research Council Canada - National Science Library

    Bonnet, Philippe

    2001-01-01

    .... Indeed, database applications will soon be accessed by a large number of clients ranging from Web applications to small-scale personal devices and they will in turn access large collections of data...

  7. Computer system for International Reactor Pressure Vessel Materials Database support

    International Nuclear Information System (INIS)

    Arutyunjan, R.; Kabalevsky, S.; Kiselev, V.; Serov, A.

    1997-01-01

    This report presents a description of the computer tools developed at the IAEA to support the International Reactor Pressure Vessel Materials Database. The work focused on search, retrieval, analysis, presentation and export of raw, qualified and processed materials data. The developed software has the following main functions: it provides tools for querying and searching any type of data in the database; provides the capability to update the existing information in the database; provides the capability to present and print selected data; provides the possibility of exporting, on a yearly basis, the run-time IRPVMDB with raw, qualified and processed materials data to Database members; and provides the capability to export any selected sets of raw, qualified and processed materials data.

  8. Distributed systems for protecting nuclear power stations

    International Nuclear Information System (INIS)

    Jover, P.

    1980-05-01

    The advantages of distributed control systems for the control of nuclear power stations are obviously of great interest. Some years ago, EPRI (Electric Power Research Institute) showed that multiplexing the signals is technically feasible and that it enables the availability specifications to be met and costs to be reduced. Since then, many distributed control systems have been proposed by the manufacturers. This note offers some comments on the application of the distribution concept to protection systems (what should be distributed) and ends with a brief description of a protection system based on microprocessors for the pressurized power stations now being built in France [fr

  9. Applying Distributed Object Technology to Distributed Embedded Control Systems

    DEFF Research Database (Denmark)

    Jørgensen, Bo Nørregaard; Dalgaard, Lars

    2012-01-01

    In this paper, we describe our Java RMI inspired Object Request Broker architecture MicroRMI for use with networked embedded devices. MicroRMI relieves the software developer from the tedious and error-prone job of writing communication protocols for interacting with such embedded devices. MicroR...... in developing control systems for distributed embedded platforms possessing severe resource restrictions.......RMI supports easy integration of high-level application specific control logic with low-level device specific control logic. Our experience from applying MicroRMI in the context of a distributed robotics control application, clearly demonstrates that it is feasible to use distributed object technology...

  10. State Electricity Regulatory Policy and Distributed Resources: Distribution System Cost Methodologies for Distributed Generation; Appendices

    Energy Technology Data Exchange (ETDEWEB)

    Shirley, W.; Cowart, R.; Sedano, R.; Weston, F.; Harrington, C.; Moskovitz, D.

    2002-10-01

    Designing and implementing credit-based pilot programs for distributed resources distribution is a low-cost, low-risk opportunity to find out how these resources can help defer or avoid costly electric power system (utility grid) distribution upgrades. This report describes implementation options for deaveraged distribution credits and distributed resource development zones. Developing workable programs implementing these policies can dramatically increase the deployment of distributed resources in ways that benefit distributed resource vendors, users, and distribution utilities. This report is one in the State Electricity Regulatory Policy and Distributed Resources series developed under contract to NREL (see Annual Technical Status Report of the Regulatory Assistance Project: September 2000-September 2001, NREL/SR-560-32733). Other titles in this series are: (1) Accommodating Distributed Resources in Wholesale Markets, NREL/SR-560-32497; (2) Distributed Resources and Electric System Reliability, NREL/SR-560-32498; (3) Distribution System Cost Methodologies for Distributed Generation, NREL/SR-560-32500; (4) Distribution System Cost Methodologies for Distributed Generation Appendices, NREL/SR-560-32501.

  11. State Electricity Regulatory Policy and Distributed Resources: Distribution System Cost Methodologies for Distributed Generation

    Energy Technology Data Exchange (ETDEWEB)

    Shirley, W.; Cowart, R.; Sedano, R.; Weston, F.; Harrington, C.; Moskovitz, D.

    2002-10-01

    Designing and implementing credit-based pilot programs for distributed resources distribution is a low-cost, low-risk opportunity to find out how these resources can help defer or avoid costly electric power system (utility grid) distribution upgrades. This report describes implementation options for deaveraged distribution credits and distributed resource development zones. Developing workable programs implementing these policies can dramatically increase the deployment of distributed resources in ways that benefit distributed resource vendors, users, and distribution utilities. This report is one in the State Electricity Regulatory Policy and Distributed Resources series developed under contract to NREL (see Annual Technical Status Report of the Regulatory Assistance Project: September 2000-September 2001, NREL/SR-560-32733). Other titles in this series are: (1) Accommodating Distributed Resources in Wholesale Markets, NREL/SR-560-32497; (2) Distributed Resources and Electric System Reliability, NREL/SR-560-32498; (3) Distribution System Cost Methodologies for Distributed Generation, NREL/SR-560-32500; (4) Distribution System Cost Methodologies for Distributed Generation Appendices, NREL/SR-560-32501.

  12. Distributed computer systems theory and practice

    CERN Document Server

    Zedan, H S M

    2014-01-01

    Distributed Computer Systems: Theory and Practice is a collection of papers dealing with the design and implementation of operating systems, including distributed systems, such as the Amoeba system, Argus, Andrew, and Grapevine. One paper discusses the concepts and notations for concurrent programming, particularly the language notation used in computer programming and synchronization methods, and also compares three classes of languages. Another paper explains load balancing or load redistribution to improve system performance, namely, static balancing and adaptive load balancing. For program effici

  13. Online Scheduling in Distributed Message Converter Systems

    NARCIS (Netherlands)

    Risse, Thomas; Wombacher, Andreas; Surridge, Mike; Taylor, Steve; Aberer, Karl

    The optimal distribution of jobs among hosts in distributed environments is an important factor in achieving high performance. The optimal strategy depends on the application. In this paper we present a new online scheduling strategy for a distributed EDI converter system. The strategy is based on the

  14. Development of an exposure database and surveillance system for use by practicing OSH professionals.

    Science.gov (United States)

    Van Dyke, M V; LaMontagne, A D; Martyny, J W; Ruttenber, A J

    2001-02-01

    This report summarizes the development of an occupational exposure database and surveillance system for use by health and safety professionals at Rocky Flats Environmental Technology Site (RFETS), a former nuclear weapons production facility. The site itself is currently in the cleanup stage with work expected to continue into 2006. The system was developed with the intent of helping health and safety personnel not only to manage and analyze exposure monitoring data, but also to identify exposure determinants during the highly variable cleanup work. Utilizing a series of focused meetings with health and safety personnel from two of the major contractors at RFETS, core data elements were established. These data elements were selected based on their utility for analysis and identification of exposure determinants. A task-based coding scheme was employed to better define the highly variable work. The coding scheme consisted of a two-tiered hierarchical list with a total of 34 possible combinations of work type and task. The data elements were incorporated into a Microsoft Access database with built-in data entry features to both promote consistency and limit entry choices to enable stratified analyses. In designing the system, emphasis was placed on the ability of end users to perform complex analyses and multiparameter queries to identify trends in their exposure data. A very flexible and user-friendly report generator was built into the system. This report generator allowed users to perform multiparameter queries using an intuitive system with very little training. In addition, a number of automated graphical analyses were built into the system, including exposure levels by any combination of building, date, employee, job classification, type of contaminant, work type or task, exposure levels over time, exposure levels relative to the permissible exposure limits (PELs), and distributions of exposure levels. Both of these interfaces allow the user to "drill down" or
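    The report describes a report generator built on multiparameter, task-stratified queries; the sketch below illustrates that kind of query using SQLite from Python. The table layout, task codes and values are invented for illustration and are not the RFETS schema (the actual system was built in Microsoft Access).

      import sqlite3

      db = sqlite3.connect(":memory:")
      db.execute("""CREATE TABLE exposure (
          building TEXT, sample_date TEXT, job_class TEXT,
          work_type TEXT, task TEXT, contaminant TEXT, result_mg_m3 REAL)""")
      db.executemany("INSERT INTO exposure VALUES (?,?,?,?,?,?,?)", [
          ("771", "2000-03-01", "D&D worker", "decontamination", "surface wipe", "Be", 0.00004),
          ("771", "2000-03-02", "D&D worker", "decontamination", "scabbling",    "Be", 0.00180),
          ("776", "2000-04-10", "pipefitter", "demolition",      "pipe cutting", "Be", 0.00020),
      ])

      # Stratify by work type and task, e.g. to spot high-exposure tasks.
      for row in db.execute("""
              SELECT work_type, task, COUNT(*), AVG(result_mg_m3), MAX(result_mg_m3)
              FROM exposure WHERE contaminant = 'Be'
              GROUP BY work_type, task ORDER BY MAX(result_mg_m3) DESC"""):
          print(row)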

  15. Knowledge-based assistance for science visualization and analysis using large distributed databases

    Science.gov (United States)

    Handley, Thomas H., Jr.; Jacobson, Allan S.; Doyle, Richard J.; Collins, Donald J.

    1993-01-01

    Within this decade, the growth in complexity of exploratory data analysis and the sheer volume of space data require new and innovative approaches to support science investigators in achieving their research objectives. To date, there have been numerous efforts addressing the individual issues involved in inter-disciplinary, multi-instrument investigations. However, while successful on a small scale, these efforts have not proven to be open and scalable. This proposal addresses four areas of significant need: scientific visualization and analysis; science data management; interactions in a distributed, heterogeneous environment; and knowledge-based assistance for these functions. The fundamental innovation embedded in this proposal is the integration of three automation technologies, namely, knowledge-based expert systems, science visualization and science data management. This integration is based on a concept called the DataHub. With the DataHub concept, NASA will be able to apply a more complete solution to all nodes of a distributed system. Both computation nodes and interactive nodes will be able to effectively and efficiently use the data services (address, retrieval, update, etc.) within a distributed, interdisciplinary information system in a uniform and standard way. This will allow the science investigators to concentrate on their scientific endeavors, rather than involve themselves in the intricate technical details of the systems and tools required to accomplish their work. Thus, science investigators need not be programmers. The emphasis will be on the definition and prototyping of system elements with sufficient detail to enable data analysis and interpretation leading to publishable scientific results. In addition, the proposed work includes all the required end-to-end components and interfaces to demonstrate the completed concept.

  16. Harmonised information exchange between decentralised food composition database systems

    DEFF Research Database (Denmark)

    Pakkala, Heikki; Christensen, Tue; Martínez de Victoria, Ignacio

    2010-01-01

    FIR specifications is under development. The data interchange happens through the EuroFIR Web Services interface, allowing the partners to implement their system using methods and software suitable for the local computer environment. The implementation uses common international standards, such as Simple Object...... to those network nodes linked to the EuroFIR Web Services that will most likely have the requested information. The retrieved FCD are compiled into a specifically designed data interchange format (the EuroFIR Food Data Transport Package) in XML, which is sent back to the EuroFIR eSearch facility......Background/Objectives: The main aim of the European Food Information Resource (EuroFIR) project is to develop and disseminate a comprehensive, coherent and validated data bank for the distribution of food composition data (FCD). This can only be accomplished by harmonising food description and data...

  17. DAME: A Distributed Web Based Framework for Knowledge Discovery in Databases

    Science.gov (United States)

    Brescia, M.; Longo, G.; Castellani, M.; Cavuoti, S.; D'Abrusco, R.; Laurino, O.

    Massive data sets explored in many e-science communities, as in the Astrophysics case, are gathered by a very large number of techniques and stored in very diversified and often-incompatible data repositories. Moreover, we need to integrate services across distributed, heterogeneous, dynamic "virtual organizations" formed from the different resources within a single enterprise and/or from external resource sharing and service provider relationships. The DAME project aims at creating a distributed e-infrastructure to guarantee integrated and asynchronous access to data collected by very different experiments and scientific communities in order to correlate them and improve their scientific usability. The project consists of a data mining framework with powerful software instruments capable of working on massive data sets, organized following Virtual Observatory standards, in a distributed computing environment. The integration process can be technically challenging because of the need to achieve a specific quality of service when running on top of different native platforms. In these terms, the result of the DAME project effort is a service-oriented architecture that uses appropriate standards, incorporates Cloud/Grid paradigms and Web services, and has as its main target the integration of interdisciplinary distributed systems within and across organizational domains.

  18. Control and Operation of Islanded Distribution System

    DEFF Research Database (Denmark)

    Mahat, Pukar

    A yearly demand growth of less than 3%, concern about the environment, and various benefits of onsite generation have all resulted in a significant increase in penetration of dispersed and distributed generation (DG) in many distribution systems. This has also resulted in some power system...... operational challenges. But, on the other hand, it has also opened up some opportunities. One opportunity/challenge is an islanded operation of a distribution system with DG unit(s). Islanding is a situation in which a distribution system becomes electrically isolated from the remainder of the power system...... and yet continues to be energized by DG unit(s) connected to it. Currently, it is seen as a challenge and so far all DG units need to shut down when a distribution system is islanded. However, with the DG penetration expected to increase sharply, islanding is an opportunity to improve the reliability...

  19. Functional integration of automated system databases by means of artificial intelligence

    Science.gov (United States)

    Dubovoi, Volodymyr M.; Nikitenko, Olena D.; Kalimoldayev, Maksat; Kotyra, Andrzej; Gromaszek, Konrad; Iskakova, Aigul

    2017-08-01

    The paper presents approaches for the functional integration of automated system databases by means of artificial intelligence. The peculiarities of using databases in systems with a fuzzy implementation of functions are analysed. Requirements for the normalization of such databases are defined. The question of data equivalence under uncertainty, and of collisions arising when databases are functionally integrated, is considered, and a model to reveal their possible occurrence is devised. The paper also presents an evaluation method for the normalization of integrated databases.

  20. Models and analysis for distributed systems

    CERN Document Server

    Haddad, Serge; Pautet, Laurent; Petrucci, Laure

    2013-01-01

    Nowadays, distributed systems are increasingly present, for public software applications as well as critical systems. This title and Distributed Systems: Design and Algorithms - from the same editors - introduce the underlying concepts, the associated design techniques and the related security issues. The objective of this book is to describe the state of the art of the formal methods for the analysis of distributed systems. Numerous issues remain open and are the topics of major research projects. One current research trend consists of pro

  1. Strategy Guideline. Compact Air Distribution Systems

    Energy Technology Data Exchange (ETDEWEB)

    Burdick, Arlan [IBACOS, Inc., Pittsburgh, PA (United States)

    2013-06-01

    This guideline discusses the benefits and challenges of using a compact air distribution system to handle the reduced loads and reduced air volume needed to condition the space within an energy efficient home. The decision criteria for a compact air distribution system must be determined early in the whole-house design process, considering both supply and return air design. However, careful installation of a compact air distribution system can result in lower material costs from smaller equipment, shorter duct runs, and fewer outlets; increased installation efficiencies, including ease of fitting the system into conditioned space; lower loads on a better balanced HVAC system, and overall improved energy efficiency of the home.

  2. Formal Specification of Distributed Information Systems

    NARCIS (Netherlands)

    Vis, J.; Brinksma, Hendrik; de By, R.A.; de By, R.A.

    The design of distributed information systems tends to be complex and therefore error-prone. However, in the field of monolithic, i.e. non-distributed, information systems much has already been achieved, and by now, the principles of their design seem to be fairly well-understood. The past decade

  3. RF phase distribution systems at the SLC

    International Nuclear Information System (INIS)

    Jobe, R.K.; Schwarz, H.D.

    1989-04-01

    Modern large linear accelerators require RF distribution systems with minimal phase drifts and errors. Through the use of existing RF coaxial waveguides, and additional installation of phase reference cables and monitoring equipment, stable RF distribution for the SLC has been achieved. This paper discusses the design and performance of SLAC systems, and some design considerations for future colliders. 6 refs., 4 figs

  4. Hybrid solar lighting distribution systems and components

    Science.gov (United States)

    Muhs, Jeffrey D [Lenoir City, TN; Earl, Dennis D [Knoxville, TN; Beshears, David L [Knoxville, TN; Maxey, Lonnie C [Powell, TN; Jordan, John K [Oak Ridge, TN; Lind, Randall F [Lenoir City, TN

    2011-07-05

    A hybrid solar lighting distribution system and components having at least one hybrid solar concentrator, at least one fiber receiver, at least one hybrid luminaire, and a light distribution system operably connected to each hybrid solar concentrator and each hybrid luminaire. A controller operates all components.

  5. Decentralized Control of Scheduling in Distributed Systems.

    Science.gov (United States)

    1983-03-18

    and Experience, 7, 1, January 1977, 3-35. [WITT80] Wittie, L., and Andre M. van Tilborg, "MICROS, a Distributed Operating System for MICRONET, a Reconfigurable Network Computer," Transactions on Computers, Vol. C-29.

  6. Economic Models and Algorithms for Distributed Systems

    CERN Document Server

    Neumann, Dirk; Altmann, Jorn; Rana, Omer F

    2009-01-01

    Distributed computing models for sharing resources such as Grids, Peer-to-Peer systems, or voluntary computing are becoming increasingly popular. This book intends to discover fresh avenues of research and amendments to existing technologies, aiming at the successful deployment of commercial distributed systems

  7. Simulation model of load balancing in distributed computing systems

    Science.gov (United States)

    Botygin, I. A.; Popov, V. N.; Frolov, S. G.

    2017-02-01

    The availability of high-performance computing, high-speed data transfer over the network and the widespread use of software for design and pre-production in mechanical engineering have led large industrial enterprises and small engineering companies alike to implement complex computer systems for the efficient solution of production and management tasks. Such systems are generally built on the basis of distributed heterogeneous computing resources. The analytical problems solved by such systems are the key research models, but the system-wide problems of efficiently distributing (balancing) the computational load and of accommodating input, intermediate and output databases are no less important. The main tasks of the balancing system are load and condition monitoring of compute nodes, and the selection of a node to which the user's request is forwarded in accordance with a predetermined algorithm. Load balancing is one of the most widely used methods for increasing the productivity of distributed computing systems through the optimal allocation of tasks among the system's nodes. Therefore, the development of methods and algorithms for computing optimal schedules in a distributed system whose infrastructure changes dynamically is an important task.
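    As a minimal Python sketch (not the paper's simulation model) of the node-selection step described above, the following monitors each node's load and forwards every incoming request to the least-loaded node; node names, capacities and the request count are invented.

      class Node:
          def __init__(self, name, capacity):
              self.name, self.capacity, self.active = name, capacity, 0

          def load(self):
              return self.active / self.capacity

      def dispatch(nodes, n_requests):
          for _ in range(n_requests):
              # Predetermined algorithm: pick the currently least-loaded node.
              target = min(nodes, key=lambda n: n.load())
              target.active += 1
              # In a real simulator the request would later complete and decrement.
          return {n.name: n.active for n in nodes}

      nodes = [Node("n1", 4), Node("n2", 8), Node("n3", 2)]
      print(dispatch(nodes, 14))   # requests spread roughly in proportion to capacity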

  8. Information system for administrating and distributing color images through internet

    Directory of Open Access Journals (Sweden)

    2007-01-01

    Full Text Available The information system for administrating and distributing color images through the Internet ensures the consistent replication of color images, their storage - in an on-line database - and predictable distribution, by means of a digitally distributed flow, based on the Windows platform and POD (Print On Demand) technology. The consistent replication of color images, independently of the parameters of the processing equipment and of the features of the programs composing the technological flow, is ensured by the standard color management system defined by the ICC (International Color Consortium), which is integrated in the Windows operating system and in the POD technology. These minimize the noticeable differences between the colors captured, displayed or printed by various replication equipment and/or edited by various graphical applications. The system's integrated web application ensures the uploading of the color images into an on-line database and their administration and distribution among the users via the Internet. For the preservation of the data expressed by the color images during their transfer along a digitally distributed flow, the software application includes an original tool ensuring the accurate replication of colors on computer displays or when printing them by means of various color printers or presses. For development and use, this application employs a hardware platform based on PC support and a competitive software platform based on the Windows operating system, the .NET development environment and the C# programming language. This information system is beneficial for creators and users of color images, the success of printed or on-line (Internet) publications depending on the sizeable, predictable and accurate replication of colors employed for the visual expression of information in all activity fields of modern society. The herein introduced information system enables all interested persons to access the

  9. Reliability assessment of distribution power systems including distributed generations

    International Nuclear Information System (INIS)

    Megdiche, M.

    2004-12-01

    Nowadays, power systems have reached a good level of reliability. Nevertheless, considering the modifications induced by the connection of small independent producers to distribution networks, there is a need to assess the reliability of these new systems. Distribution networks present several functional characteristics, highlighted by the qualitative study of failures, such as loads dispersed over several places, a variable topology and some electrotechnical phenomena, which must be taken into account to model the events that can occur. The adopted reliability calculation method is Monte Carlo simulation, the most powerful and most flexible probabilistic method for modelling the complex operation of the distribution system. The first part is devoted to the case of a 20 kV feeder to which a cogeneration unit is connected; here the method was implemented with stochastic Petri net simulation software. The second part concerns the study of a low-voltage power system supplied by dispersed generation. Here, the complexity of the events required coding the method in a programming environment that allows the use of power system calculations (load flow, short circuit, load shedding, management of unit powers) in order to analyse the system state after each new event. (author)
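    Purely as an illustration of the kind of Monte Carlo reliability calculation described (not the thesis' Petri-net model), the Python sketch below estimates the expected annual interruption time of a feeder whose supply can sometimes be covered by a local cogeneration unit; the failure rate, repair duration and DG availability are invented values.

      import random

      def simulate_year(grid_fail_rate=2.0, repair_h=4.0, dg_available=0.7):
          # One sampled year: grid outages drawn as a Poisson-like count; each
          # outage is ridden through only if the DG unit happens to be available.
          outages = sum(1 for _ in range(1000) if random.random() < grid_fail_rate / 1000)
          return sum(repair_h for _ in range(outages) if random.random() > dg_available)

      years = 20000
      mean_unavailability = sum(simulate_year() for _ in range(years)) / years
      print(f"expected interruption time: {mean_unavailability:.2f} h/year")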

  10. Large scale network-centric distributed systems

    CERN Document Server

    Sarbazi-Azad, Hamid

    2014-01-01

    A highly accessible reference offering a broad range of topics and insights on large scale network-centric distributed systems Evolving from the fields of high-performance computing and networking, large scale network-centric distributed systems continues to grow as one of the most important topics in computing and communication and many interdisciplinary areas. Dealing with both wired and wireless networks, this book focuses on the design and performance issues of such systems. Large Scale Network-Centric Distributed Systems provides in-depth coverage ranging from ground-level hardware issu

  11. National Carbon Sequestration Database and Geographic Information System (NatCarb)

    Energy Technology Data Exchange (ETDEWEB)

    Kenneth Nelson; Timothy Carr

    2009-03-31

    This annual and final report describes the results of the multi-year project entitled 'NATional CARBon Sequestration Database and Geographic Information System (NatCarb)' (http://www.natcarb.org). The original project assembled a consortium of five states (Indiana, Illinois, Kansas, Kentucky and Ohio) in the midcontinent of the United States (MIDCARB) to construct an online distributed Relational Database Management System (RDBMS) and Geographic Information System (GIS) covering aspects of carbon dioxide (CO2) geologic sequestration. The NatCarb system built on the technology developed in the initial MIDCARB effort. The NatCarb project linked the GIS information of the Regional Carbon Sequestration Partnerships (RCSPs) into a coordinated regional database system consisting of datasets useful to industry, regulators and the public. The project integrates access to national databases and GIS layers maintained by the NatCarb group (e.g., brine geochemistry) and publicly accessible servers (e.g., USGS and the Geography Network) into a single system where data are maintained and enhanced at the local level, but are accessed and assembled through a single Web portal to facilitate query, assembly, analysis and display. This project improves the flow of data across servers and increases the amount and quality of available digital data. The purpose of NatCarb is to provide a national view of the carbon capture and storage potential in the U.S. and Canada. The digital spatial database allows users to estimate the amount of CO2 emitted by sources (such as power plants, refineries and other fossil-fuel-consuming industries) in relation to geologic formations that can provide safe, secure storage sites over long periods of time. The NatCarb project worked to provide all stakeholders with improved online tools for the display and analysis of CO2 carbon capture and storage data through a single website portal (http://www.natcarb.org/). While the external

  12. Measuring the mass distribution in stellar systems

    Science.gov (United States)

    Tremaine, Scott

    2018-03-01

    One of the fundamental tasks of dynamical astronomy is to infer the distribution of mass in a stellar system from a snapshot of the positions and velocities of its stars. The usual approach to this task (e.g., Schwarzschild's method) involves fitting parametrized forms of the gravitational potential and the phase-space distribution to the data. We review the practical and conceptual difficulties in this approach and describe a novel statistical method for determining the mass distribution that does not require determining the phase-space distribution of the stars. We show that this new estimator out-performs other distribution-free estimators for the harmonic and Kepler potentials.

  13. Differences Between Distributed and Parallel Systems

    Energy Technology Data Exchange (ETDEWEB)

    Brightwell, R.; Maccabe, A.B.; Rissen, R.

    1998-10-01

    Distributed systems have been studied for twenty years and are now coming into wider use as fast networks and powerful workstations become more readily available. In many respects a massively parallel computer resembles a network of workstations and it is tempting to port a distributed operating system to such a machine. However, there are significant differences between these two environments and a parallel operating system is needed to get the best performance out of a massively parallel system. This report characterizes the differences between distributed systems, networks of workstations, and massively parallel systems and analyzes the impact of these differences on operating system design. In the second part of the report, we introduce Puma, an operating system specifically developed for massively parallel systems. We describe Puma portals, the basic building blocks for message passing paradigms implemented on top of Puma, and show how the differences observed in the first part of the report have influenced the design and implementation of Puma.

  14. Agents-based distributed processes control systems

    Directory of Open Access Journals (Sweden)

    Adrian Gligor

    2011-12-01

    Full Text Available Large industrial distributed systems have seen remarkable development in recent years. We may note an increase in their structural and functional complexity, at the same time as in the requirements placed on them. These are some of the reasons why numerous research efforts, energy and resources are devoted to solving problems related to these types of systems. The paper addresses the issue of industrial distributed systems, with special attention given to distributed industrial process control systems. A solution for a distributed process control system based on mobile intelligent agents is presented. The main objective of the proposed system is to provide an optimal solution in terms of costs, maintenance, reliability and flexibility. The paper focuses on the requirements, architecture, functionality and advantages brought by the proposed solution.

  15. Evaluation of two typical distributed energy systems

    Science.gov (United States)

    Han, Miaomiao; Tan, Xiu

    2018-03-01

    For two typical natural gas distributed energy systems, one driven by a gas engine and the other by a gas turbine, this paper uses the first and second laws of thermodynamics to evaluate the distributed energy system from the two perspectives of "quantity" and "quality". The calculation results show that the internal-combustion-engine-driven distributed energy station has a higher primary energy utilization rate but a lower exergy efficiency, while the gas-turbine-driven distributed energy station has a high exergy efficiency but a relatively low primary energy utilization rate. When configuring the system, the applicable natural gas distributed energy technology plan and unit configuration plan should be determined according to the actual load factors of the project and factors such as its location, background and environmental requirements. As a "quality" measure, a waste-heat utilization energy efficiency index is proposed.

  16. PFS: a distributed and customizable file system

    NARCIS (Netherlands)

    Bosch, H.G.P.; Mullender, Sape J.

    1996-01-01

    In this paper we present our ongoing work on the Pegasus File System (PFS), a distributed and customizable file system that can be used for off-line file system experiments and on-line file system storage. PFS is best described as an object-oriented component library from which either a true file

  17. Using a Materials Database System as the Backbone for a Certified Quality System (AS/NZS ISO 9001:1994) for a Distance Education Centre.

    Science.gov (United States)

    Hughes, Norm

    The Distance Education Center (DEC) of the University of Southern Queensland (Australia) has developed a unique materials database system which is used to monitor pre-production, design and development, production and post-production planning, scheduling, and distribution of all types of materials including courses offered only on the Internet. In…

  18. Design of Integrated Database on Mobile Information System: A Study of Yogyakarta Smart City App

    Science.gov (United States)

    Nurnawati, E. K.; Ermawati, E.

    2018-02-01

    An integration database is a database which acts as the data store for multiple applications and thus integrates data across these applications (in contrast to an application database). An integration database needs a schema that takes all its client applications into account. The benefit of such a schema is that sharing data among applications does not require an extra layer of integration services on the applications. Any changes to data made in a single application are made available to all applications at the time of database commit, thus keeping the applications' use of the data better synchronized. This study aims to design and build an integrated database that can be used by various applications on a mobile-device-based system platform for a smart city system. The resulting database can be used by various applications, whether together or separately. The design and development of the database emphasize flexibility, security, and completeness of attributes that can be shared by the various applications to be built. The method used in this study is to choose the appropriate logical database structure (patterns of data), build the relational database models (database design), test the resulting design with some prototype apps, and analyze system performance with test data. The integrated database can be utilized by both the admin and the user in an integral and comprehensive platform. This system helps admins, managers, and operators manage the application easily and efficiently. The Android-based app is built on a dynamic client-server model where data is extracted from an external MySQL database, so if data changes in the database, the data in the Android applications also changes. The Android app assists users in searching for Yogyakarta (smart city) related information, especially on culture, government, hotels, and transportation.
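
    A minimal sketch of the integration-database idea described above: a single shared schema that more than one client application writes to and reads from. The table and column names are hypothetical, and sqlite3 stands in for the MySQL back end used by the actual app, purely so the example is self-contained.

```python
import sqlite3

# Shared integration schema (illustrative; the real system uses MySQL).
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE place (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL,
    category TEXT NOT NULL,   -- e.g. 'culture', 'hotel', 'transportation'
    latitude REAL, longitude REAL)""")

# One client application writes through the shared schema ...
def add_place(name, category, lat, lon):
    conn.execute("INSERT INTO place (name, category, latitude, longitude) VALUES (?, ?, ?, ?)",
                 (name, category, lat, lon))
    conn.commit()   # in the real shared database, the commit makes the row visible to every client

# ... and any other application can query it with no extra integration layer.
def places_by_category(category):
    cur = conn.execute("SELECT name, latitude, longitude FROM place WHERE category = ?",
                       (category,))
    return cur.fetchall()

add_place("Kraton Yogyakarta", "culture", -7.805, 110.364)
print(places_by_category("culture"))
```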

  19. CRITICAL ASSESSMENT OF AUDITING CONTRIBUTIONS TO EFFECTIVE AND EFFICIENT SECURITY IN DATABASE SYSTEMS

    OpenAIRE

    Olumuyiwa O. Matthew; Carl Dudley

    2015-01-01

    Database auditing has become a very crucial aspect of security as organisations increase their adoption of database management systems (DBMS) as the major asset that keeps, maintains and monitors sensitive information. Database auditing is the group of activities involved in observing a set of stored data in order to be aware of the actions of users. The work presented here outlines the main auditing techniques and methods. Some architecture-based auditing systems were also consider...

  20. Multifunctional Information Distribution System (MIDS)

    Science.gov (United States)

    2015-12-01

    MIDS-LVT) and MIDS Joint Tactical Radio System (MIDS JTRS). The MIDS-LVT is the product of the MIDS International Program Office (IPO), a...for the MIDS International Program Office (IPO) and concurrence on financial procedures for FY 2016. The next MIDS Steering Committee #55 was held

  1. A DISTRIBUTED SMART HOME ARTIFICIAL INTELLIGENCE SYSTEM

    DEFF Research Database (Denmark)

    Lynggaard, Per

    2013-01-01

    A majority of the research performed today explores artificial intelligence in smart homes by using a centralized approach, where a smart home server performs the necessary calculations. This approach has some disadvantages that can be overcome by shifting focus to a distributed approach where...... the artificial intelligence system is implemented in a distributed fashion, as agents running parts of the artificial intelligence system. This paper presents a distributed smart home architecture that distributes artificial intelligence in smart homes and discusses the pros and cons of such a concept. The presented...... distributed model is a layered model. Each layer offers a different complexity level of the embedded distributed artificial intelligence. At the lowest layer smart objects exist; they are small, cheap, embedded microcontroller-based smart devices that are powered by batteries. The next layer contains a more...

  2. Integrating photovoltaics into utility distribution systems

    International Nuclear Information System (INIS)

    Zaininger, H.W.; Barnes, P.R.

    1995-01-01

    Electric utility distribution system impacts associated with the integration of distributed photovoltaic (PV) energy sources vary from site to site and utility to utility. The objective of this paper is to examine several utility- and site-specific conditions which may affect the economic viability of distributed PV applications to utility systems. An assessment methodology compatible with the technical and economic assessment techniques employed by utility engineers and planners is used to determine PV benefits for seven different utility systems. The seven case studies are performed using utility system characteristics and assumptions obtained from appropriate utility personnel. The resulting site-specific distributed PV benefits increase the non-site-specific generation system benefits available to central station PV plants by as much as 46% for one utility located in the Southwest

  3. Cronus: A Distributed Operating System.

    Science.gov (United States)

    1983-11-01

    ability of the system to meet specific organization objectives. 2.7 Substitutability of... extracted with the OriginOfUNO operator. All UNO's generated by the same host are strictly ordered by time of creation, and can be compared using the...of data (16 words) by causing a microinterrupt. The service routine microcode tells the controller to proceed with pseudo-DMA. Then the I/O controller

  4. Advanced smartgrids for distribution system operators

    CERN Document Server

    Boillot, Marc

    2014-01-01

    The dynamic of the Energy Transition is engaged in many regions of the world. This is a real challenge for electric systems and a paradigm shift for existing distribution networks. With the help of "advanced" smart technologies, Distribution System Operators will have a central role in integrating massive amounts of renewable generation, electric vehicles and demand response programs. Many projects are on-going to develop and assess advanced smart grid solutions, with some lessons already learnt. In the end, the Smart Grid is a means for Distribution System Operators to ensure the quality and the secu

  5. 18th East European Conference on Advances in Databases and Information Systems and Associated Satellite Events

    CERN Document Server

    Ivanovic, Mirjana; Kon-Popovska, Margita; Manolopoulos, Yannis; Palpanas, Themis; Trajcevski, Goce; Vakali, Athena

    2015-01-01

    This volume contains the papers of 3 workshops and the doctoral consortium, which are organized in the framework of the 18th East-European Conference on Advances in Databases and Information Systems (ADBIS’2014). The 3rd International Workshop on GPUs in Databases (GID’2014) is devoted to subjects related to utilization of Graphics Processing Units in database environments. The use of GPUs in databases has not yet received enough attention from the database community. The intention of the GID workshop is to provide a discussion on popularizing the GPUs and providing a forum for discussion with respect to the GID’s research ideas and their potential to achieve high speedups in many database applications. The 3rd International Workshop on Ontologies Meet Advanced Information Systems (OAIS’2014) has a twofold objective to present: new and challenging issues in the contribution of ontologies for designing high quality information systems, and new research and technological developments which use ontologie...

  6. EPAUS9R - An Energy Systems Database for use with the Market Allocation (MARKAL) Model

    Science.gov (United States)

    EPA’s MARKAL energy system databases estimate future-year technology dispersals and associated emissions. These databases are valuable tools for exploring a variety of future scenarios for the U.S. energy-production systems that can impact climate change c

  7. An Implementation of a Database System for Book Loan in an ...

    African Journals Online (AJOL)

    This study examined the design and implementation of a database for a book loan system in an academic library, using The Polytechnic Ibadan Library as a case study. The system was developed using a relational database architecture, with Microsoft Access as the storage medium, while Visual Basic is used to query the ...

  8. Distributed PACS using distributed file system with hierarchical meta data servers.

    Science.gov (United States)

    Hiroyasu, Tomoyuki; Minamitani, Yoshiyuki; Miki, Mitsunori; Yokouchi, Hisatake; Yoshimi, Masato

    2012-01-01

    In this research, we propose a new distributed PACS (Picture Archiving and Communication System) which makes it possible to integrate the several PACSs that exist in individual medical institutions. A conventional PACS stores DICOM files in a single database. In the proposed system, on the other hand, each DICOM file is separated into metadata and image data, and these are stored individually. With this mechanism, since the entire file does not always have to be accessed, operations such as finding files or changing titles can be performed at high speed. At the same time, because a distributed file system is utilized, access to image files also achieves high speed and high fault tolerance. The proposed system has a further significant feature: the simplicity of integrating several PACSs. In the proposed system, only the metadata servers need to be integrated to construct an integrated system. The system also scales file access with the number and size of files. On the other hand, because the metadata is centralized, the metadata server is the weak point of this system. To address this defect, hierarchical metadata servers are introduced. With this mechanism, not only is fault tolerance increased, but the scalability of file access is also increased. To evaluate the proposed system, a prototype was implemented using Gfarm, and the file search times of Gfarm and NFS were compared.
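
    A minimal sketch of the metadata/image split described in this record. The store layout, field names and functions are illustrative assumptions, not the authors' implementation: plain Python dictionaries stand in for the metadata servers and for the distributed file system (Gfarm) used in the prototype.

```python
import hashlib

metadata_store = {}   # stands in for the (hierarchical) metadata servers
image_store = {}      # stands in for the distributed file system holding bulk pixel data

def store_dicom(study_id, header, pixel_bytes):
    # The large payload is written once and referenced by content hash;
    # the small searchable header goes to the metadata store.
    image_key = hashlib.sha1(pixel_bytes).hexdigest()
    image_store[image_key] = pixel_bytes
    metadata_store[study_id] = dict(header, image_key=image_key)

def rename_study(study_id, new_title):
    # Metadata-only operations never touch the image payload,
    # which is why they stay fast even for very large files.
    metadata_store[study_id]["title"] = new_title

def find_studies(modality):
    return [sid for sid, md in metadata_store.items() if md.get("modality") == modality]

store_dicom("S001", {"patient": "anon", "modality": "CT", "title": "chest"}, b"\x00" * 1024)
rename_study("S001", "chest follow-up")
print(find_studies("CT"))
```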

  9. Science network resources: Distributed systems

    Science.gov (United States)

    Cline, Neal

    1991-01-01

    The Master Directory, which is overview information about whole data sets, is outlined. The data system environment is depicted. The question of what constitutes a prototype international directory is explored, including its purpose and features. Advantages of on-line directories are listed. Interconnected directory assumptions are given. A description is given of DIF (Directory Interchange Format), an exchange file format for directory information, along with the information content of DIFs and directories. The directory population status is given in a percentage viewgraph. The present and future directory interconnection status at GSFC is also listed.

  10. Distributed context-aware systems

    CERN Document Server

    Ferreira, Paulo

    2014-01-01

    Context-aware systems aim to deliver a rich user experience by taking into account the current user context (location, time, activity, etc.), possibly captured without his intervention. For example, cell phones are now able to continuously update a user's location while, at the same time, users execute an increasing amount of activities online, where their actions may be easily captured (e.g. login in a web application) without user consent. In the last decade, this topic has seen numerous developments that demonstrate its relevance and usefulness. The trend was accelerated with the widespread

  11. PROWAY - a standard for distributed control systems

    International Nuclear Information System (INIS)

    Gellie, R.W.

    1980-01-01

    The availability of cheap and powerful microcomputer and data communications equipment has led to a major revision of instrumentation and control systems. Intelligent devices can now be used and distributed about the control system in a systematic and economic manner. These sub-units are linked by a communications system to provide a total system capable of meeting the required plant objectives. PROWAY, an international standard process data highway for interconnecting processing units in distributed industrial process control systems, is currently being developed. This paper describes the salient features and current status of the PROWAY effort. (auth)

  12. Distributed Web-Based Control System

    Directory of Open Access Journals (Sweden)

    Reinhard Langmann

    2010-08-01

    Full Text Available The paper describes a concept and application examples for a distributed Web-based control system (DWCS). The DWCS is based on two key components: an IEC 61131-programmable Web control and a process data proxy as the process interface. Control functions can be distributed and executed at will in the Intranet/Internet via the DWCS.

  13. Distance Protection for Microgrids in Distribution System

    DEFF Research Database (Denmark)

    Lin, Hengwei; Liu, Chengxi; Guerrero, Josep M.

    2015-01-01

    Owing to the increasing penetration of distributed generation, there are some challenges for conventional protection in distribution systems. Bidirectional power flow and variable fault currents due to the various operation modes may lead to the selectivity and sensitivity of the overcurrent...

  14. Water distribution systems design optimisation using metaheuristics ...

    African Journals Online (AJOL)

    The topic of multi-objective water distribution systems (WDS) design optimisation using metaheuristics is investigated, comparing numerous modern metaheuristics, including several multi-objective evolutionary algorithms, an estimation of distribution algorithm and a recent hyperheuristic named AMALGAM (an evolutionary ...

  15. Programming Languages for Distributed Computing Systems

    NARCIS (Netherlands)

    Bal, H.E.; Steiner, J.G.; Tanenbaum, A.S.

    1989-01-01

    When distributed systems first appeared, they were programmed in traditional sequential languages, usually with the addition of a few library procedures for sending and receiving messages. As distributed applications became more commonplace and more sophisticated, this ad hoc approach became less

  16. Distributed Administrative Management Information System (DAMIS).

    Science.gov (United States)

    Juckiewicz, Robert; Kroculick, Joseph

    Columbia University's major program to distribute its central administrative data processing to its various schools and departments is described. The Distributed Administrative Management Information System (DAMIS) will link every department and school within the university via microcomputers, terminals, and/or minicomputers to the central…

  17. The online database MaarjAM reveals global and ecosystemic distribution patterns in arbuscular mycorrhizal fungi (Glomeromycota).

    Science.gov (United States)

    Opik, M; Vanatoa, A; Vanatoa, E; Moora, M; Davison, J; Kalwij, J M; Reier, U; Zobel, M

    2010-10-01

    • Here, we describe a new database, MaarjAM, that summarizes publicly available Glomeromycota DNA sequence data and associated metadata. The goal of the database is to facilitate the description of distribution and richness patterns in this group of fungi. • Small subunit (SSU) rRNA gene sequences and available metadata were collated from all suitable taxonomic and ecological publications. These data have been made accessible in an open-access database (http://maarjam.botany.ut.ee). • Two hundred and eighty-two SSU rRNA gene virtual taxa (VT) were described based on a comprehensive phylogenetic analysis of all collated Glomeromycota sequences. Two-thirds of VT showed limited distribution ranges, occurring in single current or historic continents or climatic zones. Those VT that associated with a taxonomically wide range of host plants also tended to have a wide geographical distribution, and vice versa. No relationships were detected between VT richness and latitude, elevation or vascular plant richness. • The collated Glomeromycota molecular diversity data suggest limited distribution ranges in most Glomeromycota taxa and a positive relationship between the width of a taxon's geographical range and its host taxonomic range. Inconsistencies between molecular and traditional taxonomy of Glomeromycota, and shortage of data from major continents and ecosystems, are highlighted.

  18. Man-systems distributed system for Space Station Freedom

    Science.gov (United States)

    Lewis, J. L.

    1990-01-01

    Viewgraphs on man-systems distributed system for Space Station Freedom are presented. Topics addressed include: description of man-systems (definition, requirements, scope, subsystems, and topologies); implementation (approach, tools); man-systems interfaces (system to element and system to system); prime/supporting development relationship; selected accomplishments; and technical challenges.

  19. Supervisory Control and Diagnostics System Distributed Operating System

    International Nuclear Information System (INIS)

    McGoldrick, P.R.

    1979-01-01

    This paper contains a description of the Supervisory Control and Diagnostics System (SCDS) Distributed Operating System. The SCDS consists of nine 32-bit minicomputers with shared memory. The system's main purpose is to control a large Mirror Fusion Test Facility

  20. DIstributed VIRtual System (DIVIRS) Project

    Science.gov (United States)

    Schorr, Herbert; Neuman, B. Clifford; Gaines, Stockton R.; Mizell, David

    1996-01-01

    The development of Prospero moved from the University of Washington to ISI and several new versions of the software were released from ISI during the contract period. Changes in the first release from ISI included bug fixes and extensions to support the needs of specific users. Among these changes was a new option for directory queries that allows attributes to be returned for all files in a directory together with the directory listing. This change greatly improves the performance of their server and reduces the number of packets sent across their trans-Pacific connection to the rest of the Internet. Several new access methods were added to the Prospero file system. The Prospero Data Access Protocol was designed to support secure retrieval of data from systems running Prospero.

  1. Computer Systems for Distributed and Distance Learning.

    Science.gov (United States)

    Anderson, M.; Jackson, David

    2000-01-01

    Discussion of network-based learning focuses on a survey of computer systems for distributed and distance learning. Both Web-based systems and non-Web-based systems are reviewed in order to highlight some of the major trends of past projects and to suggest ways in which progress may be made in the future. (Contains 92 references.) (Author/LRW)

  2. A user's manual for the database management system of impact property

    International Nuclear Information System (INIS)

    Ryu, Woo Seok; Park, S. J.; Kong, W. S.; Jun, I.

    2003-06-01

    This manual is written for the management and maintenance of the impact database system for managing impact property test data. The database constructed from the data produced by impact property tests can increase the application of the test results. In addition, basic data can easily be retrieved from the database when preparing a new experiment, and better results can be produced by comparison with previous data. To develop the database, the application must be analyzed and designed carefully; after that, the best quality can be offered for customers' various requirements. The impact database system was developed as a web application using the JSP (Java Server Pages) tool.

  3. AC distribution system for TFTR pulsed loads

    International Nuclear Information System (INIS)

    Carroll, R.F.; Ramakrishnan, S.; Lemmon, G.N.; Moo, W.I.

    1977-01-01

    This paper outlines the AC distribution system associated with the Tokamak Fusion Test Reactor and discusses the significant areas related to design, protection, and equipment selection, particularly where there is a departure from normal utility and industrial applications

  4. A distributed computer system for digitising machines

    International Nuclear Information System (INIS)

    Bairstow, R.; Barlow, J.; Waters, M.; Watson, J.

    1977-07-01

    This paper describes a Distributed Computing System, based on micro computers, for the monitoring and control of digitising tables used by the Rutherford Laboratory Bubble Chamber Research Group in the measurement of bubble chamber photographs. (author)

  5. Distribution system protection with communication technologies

    DEFF Research Database (Denmark)

    Wei, Mu; Chen, Zhe

    2010-01-01

    Due to the involvement of communication technologies in the distribution power system, time-critical protection functions may be implemented more accurately, and therefore the stability, reliability and security of distribution power systems could be improved. This paper presents an active distribution...... power system, including CHPs (Combined Heat and Power) and small-scale WTs (Wind Turbines), as a practical example to examine the possible impacts of communication technologies on the power system. Under some fault scenarios, the power system's responses to the fault are compared between the system...... with communication technologies and that without communication technologies. At the same time, the previously proposed study method of combining the simulations of communication and power systems is adopted in this study. The performance of a communication network adopted for the power system is simulated by OPNET...

  6. Assessment System for Aircraft Noise (ASAN) Citation Database. Volume 2

    Science.gov (United States)

    1989-12-01

    and Technologies Corporation. The animal effects database was compiled and summarized by Ms. Ann E. Bowles of SWRI/Hubbs Marine Research Center and... [remainder of record is a fragment of the report's taxonomic species index: Ictaluridae, Ictalurus, Ictalurus pricei, Icteridae, iguanas, Iguanidae, softshell turtles, amphisbaenians (worm-lizards), lizards, snakes, Squamata, anoles]

  7. CeCaFDB: a curated database for the documentation, visualization and comparative analysis of central carbon metabolic flux distributions explored by 13C-fluxomics.

    Science.gov (United States)

    Zhang, Zhengdong; Shen, Tie; Rui, Bin; Zhou, Wenwei; Zhou, Xiangfei; Shang, Chuanyu; Xin, Chenwei; Liu, Xiaoguang; Li, Gang; Jiang, Jiansi; Li, Chao; Li, Ruiyuan; Han, Mengshu; You, Shanping; Yu, Guojun; Yi, Yin; Wen, Han; Liu, Zhijie; Xie, Xiaoyao

    2015-01-01

    The Central Carbon Metabolic Flux Database (CeCaFDB, available at http://www.cecafdb.org) is a manually curated, multipurpose and open-access database for the documentation, visualization and comparative analysis of the quantitative flux results of central carbon metabolism among microbes and animal cells. It encompasses records for more than 500 flux distributions among 36 organisms and includes information regarding the genotype, culture medium, growth conditions and other specific information gathered from hundreds of journal articles. In addition to its comprehensive literature-derived data, the CeCaFDB supports a common text search function among the data and interactive visualization of the curated flux distributions with compartmentation information based on the Cytoscape Web API, which facilitates data interpretation. The CeCaFDB offers four modules to calculate a similarity score or to perform an alignment between the flux distributions. One of the modules was built using an integer programming algorithm for flux distribution alignment that was specifically designed for this study. Based on these modules, the CeCaFDB also supports an extensive flux distribution comparison function among the curated data. The CeCaFDB is strenuously designed to address the broad demands of biochemists, metabolic engineers, systems biologists and members of the -omics community. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
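
    As a rough illustration of comparing two flux distributions, the sketch below scores similarity as the cosine similarity over the reactions the distributions share. This is only an assumed, simplified stand-in, not the integer-programming alignment module that CeCaFDB provides, and the flux values and reaction names are invented.

```python
import math

def flux_similarity(flux_a, flux_b):
    # Cosine similarity computed over the reactions present in both distributions.
    shared = sorted(set(flux_a) & set(flux_b))
    if not shared:
        return 0.0
    a = [flux_a[r] for r in shared]
    b = [flux_b[r] for r in shared]
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Hypothetical flux values (mmol/gDW/h) keyed by reaction name.
growth_on_glucose = {"PGI": 4.8, "PFK": 7.5, "PYK": 9.1, "PDH": 9.3}
growth_on_acetate = {"PGI": -0.4, "PFK": 0.1, "PYK": 0.9, "PDH": 2.4, "ICL": 3.1}
print(round(flux_similarity(growth_on_glucose, growth_on_acetate), 3))
```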

  8. DCBITS: Distributed Case Base Intelligent Tutoring System

    Science.gov (United States)

    Rishi, O. P.; Govil, Rekha

    2008-05-01

    Online learning with an Intelligent Tutoring System (ITS) is becoming very popular: the system models the student's learning behavior and presents the learning material (content, questions and answers, assignments) to the student accordingly. Student modeling is a key component in any ITS. In today's distributed computing environment, the tutoring system can take advantage of networking to apply the model built for one student to students from other similar groups. In the present paper we present a methodology in which, using Case Based Reasoning (CBR), the ITS provides student modeling for online learning in a distributed environment with the help of agents. The chapter describes the approach, the architecture, and the agent characteristics for student modeling in the ITS. This concept can be deployed to develop an ITS in which the tutor can author and the students can learn locally, while the ITS models the students' learning globally in a distributed environment. The advantage of such an approach is that both the learning material (domain knowledge) and the students' models can be globally distributed, thus enhancing the efficiency of the ITS while reducing the bandwidth requirement and complexity of the system.

  9. Telecommunications issues of intelligent database management for ground processing systems in the EOS era

    Science.gov (United States)

    Touch, Joseph D.

    1994-01-01

    Future NASA earth science missions, including the Earth Observing System (EOS), will be generating vast amounts of data that must be processed and stored at various locations around the world. Here we present a stepwise-refinement of the intelligent database management (IDM) of the distributed active archive center (DAAC - one of seven regionally-located EOSDIS archive sites) architecture, to showcase the telecommunications issues involved. We develop this architecture into a general overall design. We show that the current evolution of protocols is sufficient to support IDM at Gbps rates over large distances. We also show that network design can accommodate a flexible data ingestion storage pipeline and a user extraction and visualization engine, without interference between the two.

  10. Strategy Guideline: Compact Air Distribution Systems

    Energy Technology Data Exchange (ETDEWEB)

    Burdick, A.

    2013-06-01

    This Strategy Guideline discusses the benefits and challenges of using a compact air distribution system to handle the reduced loads and reduced air volume needed to condition the space within an energy efficient home. Traditional systems sized by 'rule of thumb' (i.e., 1 ton of cooling per 400 ft² of floor space) that 'wash' the exterior walls with conditioned air from floor registers cannot provide appropriate air mixing and moisture removal in low-load homes. A compact air distribution system locates the HVAC equipment centrally with shorter ducts run to interior walls, and ceiling supply outlets throw the air toward the exterior walls along the ceiling plane; alternatively, high sidewall supply outlets throw the air toward the exterior walls. Potential drawbacks include resistance from installing contractors or code officials who are unfamiliar with compact air distribution systems, as well as a lack of availability of low-cost high sidewall or ceiling supply outlets to meet the low air volumes with good throw characteristics. The decision criteria for a compact air distribution system must be determined early in the whole-house design process, considering both supply and return air design. However, careful installation of a compact air distribution system can result in lower material costs from smaller equipment, shorter duct runs, and fewer outlets; increased installation efficiencies, including ease of fitting the system into conditioned space; lower loads on a better balanced HVAC system, and overall improved energy efficiency of the home.
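
    The arithmetic behind the sizing discussion can be made concrete with a short calculation; the floor area and design cooling load below are hypothetical illustrative numbers, not figures from the guideline.

```python
# Rule-of-thumb sizing vs. a load-based estimate for a hypothetical low-load home.
floor_area_ft2 = 2000
rule_of_thumb_tons = floor_area_ft2 / 400           # 1 ton per 400 ft² -> 5.0 tons

design_cooling_load_btuh = 18000                     # hypothetical calculated design load
load_based_tons = design_cooling_load_btuh / 12000   # 1 ton = 12,000 Btu/h -> 1.5 tons

print(f"rule of thumb: {rule_of_thumb_tons:.1f} tons, "
      f"load-based: {load_based_tons:.1f} tons")
```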

  11. Design and Implementation of an Enterprise Information System Utilizing a Component Based Three-Tier Client/Server Database System

    National Research Council Canada - National Science Library

    Akbay, Murat

    1999-01-01

    The Naval Security Group currently requires a modern architecture to merge existing command databases into a single Enterprise Information System through which each command may manipulate administrative data...

  12. Reliability Improvement of Power Distribution Systems using Advanced Distribution Automation

    Directory of Open Access Journals (Sweden)

    M. R. Elkadeem

    2017-03-01

    Full Text Available Towards the complete vision of a smarter distribution grid, the advanced distribution automation system (ADAS) is one of the major players in this area. In this scope, this paper introduces a generic strategy for cost-effective implementation and evaluation of ADAS. Along the same lines, fault location, isolation and service restoration (FLISR) is one of the most beneficial and desirable applications of ADAS for self-healing and reliability improvement. Therefore, a local-centralized-based FLISR (LC-FLISR) architecture is implemented on a real, urban, underground medium-voltage distribution network. For the investigated network, the complete procedure and structure of the LC-FLISR are presented. Finally, the level of reliability improvement and customer satisfaction enhancement is evaluated. The results are presented in the form of a comparative study between the proposed automated and non-automated distribution networks. The results show that the automated network with the proposed ADAS has considerable benefits through a significant reduction in reliability indices. In addition, remarkable benefits are observed from increasing customer satisfaction and reducing penalties from industry regulators.
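
    A minimal sketch of the FLISR sequence on a radial feeder is shown below, assuming a simple list of switches with fault-indicator flags. The feeder layout, switch names and restoration rule are invented for illustration and are not the LC-FLISR architecture of the paper.

```python
# Radial feeder from the source circuit breaker to a normally open tie switch.
feeder_switches = ["CB", "S1", "S2", "S3", "TIE"]
fault_seen = {"CB": True, "S1": True, "S2": False, "S3": False}   # fault-indicator flags

def locate_fault(switches, flags):
    # The faulted section lies between the last switch that saw fault current
    # and the first one that did not.
    for upstream, downstream in zip(switches, switches[1:]):
        if flags.get(upstream) and not flags.get(downstream, False):
            return upstream, downstream
    return None

def flisr(switches, flags):
    section = locate_fault(switches, flags)
    if section is None:
        return []
    upstream, downstream = section
    return [f"open {upstream}",            # isolate the faulted section
            f"open {downstream}",
            "reclose CB",                  # restore customers upstream of the fault
            f"close {switches[-1]}"]       # back-feed customers downstream via the tie

print(flisr(feeder_switches, fault_seen))  # ['open S1', 'open S2', 'reclose CB', 'close TIE']
```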

  13. Subsurface interpretation based on geophysical data set using geothermal database system `GEOBASE`; Chinetsu database system `GEOBASE` wo riyoshita Kakkonda chinetsu chiiki no chika kozo kaiseki

    Energy Technology Data Exchange (ETDEWEB)

    Osato, K.; Sato, T.; Miura, Y.; Yamane, K. [Geothermal Energy Research and Development Co. Ltd., Tokyo (Japan); Doi, N. [Japan Metals and Chemicals Co. Ltd., Tokyo (Japan); Uchida, T. [New Energy and Industrial Technology Development Organization, Tokyo, (Japan)

    1996-05-01

    This paper reports the application of a geothermal database system (GEOBASE) to analyzing the subsurface structure of the Kakkonda geothermal area. To analyze the resistivity structure in this area, the following were registered in GEOBASE: depth information (well tracks and electric logs of existing wells), three-dimensional discretized data (two-dimensional MT analysis cross sections and the distribution of micro-earthquake epicenters), and two-dimensional discretized data (altitude, and depth to the top of the Kakkonda granite). GEOBASE is capable of three-dimensional interpolation and three-dimensional display of the three-dimensional discretized data and the depth information table, respectively. The paper presents a compiled plan drawing at a depth of 2,000 m below sea level and a compiled SE-NE cross-sectional drawing. The paper also indicates that the three-dimensional interpolation function of GEOBASE allows spatial data to be compared freely and quickly, thereby demonstrating its power in comprehensive analyses of this kind. 3 refs., 8 figs., 2 tabs.

  14. Database System Design and Implementation for Marine Air-Traffic-Controller Training

    Science.gov (United States)

    2017-06-01

    IMPLEMENTATION FOR MARINE AIR-TRAFFIC-CONTROLLER TRAINING, by Anthony J. Ambriz, June 2017. Thesis Advisor: Neil C. Rowe; Second Reader: Arijit Das. ...electronic database for air-traffic-controller (ATC) training within the Marine Corps. The Marine Corps currently has individual electronic training database

  15. A survey of the use of database management systems in accelerator projects

    OpenAIRE

    Poole, John; Strubin, Pierre M

    1995-01-01

    The International Accelerator Database Group (IADBG) was set up in 1994 to bring together the people who are working with databases in accelerator laboratories so that they can exchange information and experience. The group now has members from more than 20 institutes from all around the world, representing nearly double this number of projects. This paper is based on the information gathered by the IADBG and describes why commercial DataBase Management Systems (DBMS) are being used in accele...

  16. Distributed redundancy and robustness in complex systems

    KAUST Repository

    Randles, Martin

    2011-03-01

    The uptake and increasing prevalence of Web 2.0 applications, promoting new large-scale and complex systems such as Cloud computing and the emerging Internet of Services/Things, requires tools and techniques to analyse and model methods to ensure the robustness of these new systems. This paper reports on assessing and improving complex system resilience using distributed redundancy, termed degeneracy in biological systems, to endow large-scale complicated computer systems with the same robustness that emerges in complex biological and natural systems. However, in order to promote an evolutionary approach, through emergent self-organisation, it is necessary to specify the systems in an 'open-ended' manner where not all states of the system are prescribed at design-time. In particular an observer system is used to select robust topologies, within system components, based on a measurement of the first non-zero Eigen value in the Laplacian spectrum of the components' network graphs; also known as the algebraic connectivity. It is shown, through experimentation on a simulation, that increasing the average algebraic connectivity across the components, in a network, leads to an increase in the variety of individual components termed distributed redundancy; the capacity for structurally distinct components to perform an identical function in a particular context. The results are applied to a specific application where active clustering of like services is used to aid load balancing in a highly distributed network. Using the described procedure is shown to improve performance and distribute redundancy. © 2010 Elsevier Inc.
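
    The robustness measure named here, the algebraic connectivity, can be computed directly from a component's network graph as the second-smallest eigenvalue of the graph Laplacian L = D - A. The sketch below does so for an arbitrary 5-node example graph, not data from the paper.

```python
import numpy as np

# Adjacency matrix of an arbitrary connected 5-node graph (illustrative only).
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0],
              [1, 1, 0, 1, 0],
              [0, 1, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)

L = np.diag(A.sum(axis=1)) - A                 # graph Laplacian L = D - A
eigenvalues = np.sort(np.linalg.eigvalsh(L))   # eigvalsh: symmetric matrix eigenvalues
algebraic_connectivity = eigenvalues[1]        # first non-zero eigenvalue for a connected graph
print(round(algebraic_connectivity, 3))
```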

  17. Support for User Interfaces for Distributed Systems

    Science.gov (United States)

    Eychaner, Glenn; Niessner, Albert

    2005-01-01

    An extensible Java(TradeMark) software framework supports the construction and operation of graphical user interfaces (GUIs) for distributed computing systems typified by ground control systems that send commands to, and receive telemetric data from, spacecraft. Heretofore, such GUIs have been custom built for each new system at considerable expense. In contrast, the present framework affords generic capabilities that can be shared by different distributed systems. Dynamic class loading, reflection, and other run-time capabilities of the Java language and JavaBeans component architecture enable the creation of a GUI for each new distributed computing system with a minimum of custom effort. By use of this framework, GUI components in control panels and menus can send commands to a particular distributed system with a minimum of system-specific code. The framework receives, decodes, processes, and displays telemetry data; custom telemetry data handling can be added for a particular system. The framework supports saving and later restoration of users configurations of control panels and telemetry displays with a minimum of effort in writing system-specific code. GUIs constructed within this framework can be deployed in any operating system with a Java run-time environment, without recompilation or code changes.

  18. Model checking software for phylogenetic trees using distribution and database methods.

    Science.gov (United States)

    Requeno, José Ignacio; Colom, José Manuel

    2013-12-01

    Model checking, a generic and formal paradigm stemming from computer science based on temporal logics, has been proposed for the study of biological properties that emerge from the labeling of the states defined over the phylogenetic tree. This strategy allows us to use generic software tools already present in the industry. However, the performance of traditional model checking is penalized when scaling the system for large phylogenies. To this end, two strategies are presented here. The first one consists of partitioning the phylogenetic tree into a set of subgraphs each one representing a subproblem to be verified so as to speed up the computation time and distribute the memory consumption. The second strategy is based on uncoupling the information associated to each state of the phylogenetic tree (mainly, the DNA sequence) and exporting it to an external tool for the management of large information systems. The integration of all these approaches outperforms the results of monolithic model checking and helps us to execute the verification of properties in a real phylogenetic tree.
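
    A minimal sketch of the first strategy (partitioning the tree into bounded-size subproblems) is shown below; the tree, the size bound and the cutting rule are illustrative assumptions rather than the authors' algorithm.

```python
# Cut a rooted phylogenetic tree into subtrees of bounded size so each piece
# can be model checked independently. Tree and bound are illustrative.
TREE = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2", "b3"],
        "a1": [], "a2": [], "b1": [], "b2": [], "b3": []}

def partition(tree, root, max_size):
    partitions = []

    def visit(node):
        # Returns the size of the part of this subtree not yet emitted.
        size = 1
        for child in tree[node]:
            size += visit(child)
        if size >= max_size:
            partitions.append(node)   # this subtree becomes its own subproblem
            return 1                  # upstream sees it as a single "summary" node
        return size

    visit(root)
    if root not in partitions:
        partitions.append(root)
    return partitions

# Each returned node is the root of one subgraph to verify separately.
print(partition(TREE, "root", max_size=4))   # e.g. ['b', 'root']
```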

  19. Model checking software for phylogenetic trees using distribution and database methods

    Directory of Open Access Journals (Sweden)

    Requeno José Ignacio

    2013-12-01

    Full Text Available Model checking, a generic and formal paradigm stemming from computer science based on temporal logics, has been proposed for the study of biological properties that emerge from the labeling of the states defined over the phylogenetic tree. This strategy allows us to use generic software tools already present in the industry. However, the performance of traditional model checking is penalized when scaling the system for large phylogenies. To this end, two strategies are presented here. The first one consists of partitioning the phylogenetic tree into a set of subgraphs each one representing a subproblem to be verified so as to speed up the computation time and distribute the memory consumption. The second strategy is based on uncoupling the information associated to each state of the phylogenetic tree (mainly, the DNA sequence and exporting it to an external tool for the management of large information systems. The integration of all these approaches outperforms the results of monolithic model checking and helps us to execute the verification of properties in a real phylogenetic tree.

  20. Intelligent Control and Operation of Distribution System

    DEFF Research Database (Denmark)

    Bhattarai, Bishnu Prasad

    in this direction but also benefit distribution system operators in the planning and development of the distribution network. The major contributions of this work are described in the following four stages: In the first stage, an intelligent Demand Response (DR) control architecture is developed for coordinating......, an adaptive overcurrent protection is developed for the future distribution system having high share of RESs and Active Network Management (ANM) activities. In the future grid, the protection is affected not only by bidirectional power flow due to RESs but also due to the ANM activities such as demand....... This not only enables the end consumers to get reliable and cheap electricity but also enables the utility to prevent huge investment in counterpart. Moreover, distribution system operators can implement the findings of the projects in their operational stages to avoid grid bottlenecks and in the planning...

  1. Comparison between Different Air Distribution Systems

    DEFF Research Database (Denmark)

    Nielsen, Peter V.

    The aim of an air conditioning system is to remove excess heat in a room and replace room air with fresh air to obtain a high air quality. It is not sufficient to remove heat and contaminated air; it is also necessary to distribute and control the air movement in the room to create thermal comfort...... in the occupied zone. Most air distribution systems are based on mixing ventilation with ceiling or wall-mounted diffusers or on displacement ventilation with wall-mounted low-velocity diffusers. New principles for room air distribution were introduced during the last decades, such as textile terminals mounted...... in the ceiling and radial diffusers with swirling flow also mounted in the ceiling. This paper addresses five air distribution systems in all, namely mixing ventilation from a wall-mounted terminal, mixing ventilation from a ceiling-mounted diffuser, mixing ventilation from a ceiling-mounted diffuser...

  2. Integrating CLIPS applications into heterogeneous distributed systems

    Science.gov (United States)

    Adler, Richard M.

    1991-01-01

    SOCIAL is an advanced, object-oriented development tool for integrating intelligent and conventional applications across heterogeneous hardware and software platforms. SOCIAL defines a family of 'wrapper' objects called agents, which incorporate predefined capabilities for distributed communication and control. Developers embed applications within agents and establish interactions between distributed agents via non-intrusive message-based interfaces. This paper describes a predefined SOCIAL agent that is specialized for integrating C Language Integrated Production System (CLIPS)-based applications. The agent's high-level Application Programming Interface supports bidirectional flow of data, knowledge, and commands to other agents, enabling CLIPS applications to initiate interactions autonomously, and respond to requests and results from heterogeneous remote systems. The design and operation of CLIPS agents are illustrated with two distributed applications that integrate CLIPS-based expert systems with other intelligent systems for isolating and mapping problems in the Space Shuttle Launch Processing System at the NASA Kennedy Space Center.

  3. The realization of high-speed scalable database software in the nuclear instrumentation and control system

    International Nuclear Information System (INIS)

    Qian Min; Peng Li; Fu Chunxia

    2014-01-01

    As the localization of the DCS for nuclear power plants progresses, the DCS is required to have better properties. The database is the core part of the DCS at LEVEL 2. This article describes the database software in the dedicated nuclear instrumentation and control system belonging to CTEC, which has better performance and usability. The article covers the following aspects: (1) The database contains a data dictionary and an index table; high-speed reads and writes are achieved through the data dictionary. (2) Through a point-subscription and caching mechanism, cyclic data can be read from the database software at high speed. (3) Mapping by point item ensures high-speed data acquisition in the nuclear power plant. (4) Users can define database point types and point item attributes themselves, so the database has very good scalability. (5) The data trigger event is scalable. Finally, this article describes the practical application of the database software in the KDO/KME systems of the Hongyanhe and Li-ao nuclear power stations. This article has important practical significance for the research and development of domestic database software in dedicated nuclear instrumentation and control systems. (authors)
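
    A minimal sketch of points (2) and (3) above, assuming a point data dictionary plus a subscription cache so that cyclic reads avoid repeated name lookups; point names, values and class names are hypothetical and not taken from the KDO/KME software.

```python
# Data dictionary maps point names to indices; the point table holds current values.
data_dictionary = {"RCS_PRESSURE": 0, "RCS_TEMP": 1, "SG_LEVEL": 2}
point_table = [15.5, 292.0, 71.3]

class SubscriptionCache:
    def __init__(self):
        self._indices = {}                         # point name -> resolved index

    def subscribe(self, name):
        self._indices[name] = data_dictionary[name]   # resolve the name once, up front

    def read(self, name):
        # Cyclic read: only an index lookup, no dictionary search per cycle.
        return point_table[self._indices[name]]

cache = SubscriptionCache()
cache.subscribe("RCS_TEMP")
print(cache.read("RCS_TEMP"))
```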

  4. A distributed approach for parameters estimation in System Biology models

    International Nuclear Information System (INIS)

    Mosca, E.; Merelli, I.; Alfieri, R.; Milanesi, L.

    2009-01-01

    Due to the lack of experimental measurements, biological variability and experimental errors, the values of many parameters of systems biology mathematical models are still unknown or uncertain. A possible computational solution is parameter estimation, that is, the identification of the parameter values that give the best model fit with respect to experimental data. We have developed an environment to distribute each run of the parameter estimation algorithm to a different computational resource. The key feature of the implementation is a relational database that allows the candidate solutions to be swapped among the working nodes during the computations. The comparison of the distributed implementation with the parallel one showed that the presented approach enables a faster and better parameter estimation of systems biology models.
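
    A minimal sketch of the candidate-swapping idea, assuming a shared relational table into which each worker publishes its best parameter set; sqlite3 stands in for the shared DBMS, and the toy objective function and parameter are invented for illustration.

```python
import random, sqlite3

# Shared table of candidate solutions (illustrative schema).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE candidate (worker TEXT, score REAL, params TEXT)")

def publish(worker, score, params):
    db.execute("INSERT INTO candidate VALUES (?, ?, ?)", (worker, score, repr(params)))
    db.commit()

def best_candidate():
    # Lowest score = best fit; any worker can pull this to restart its own search.
    return db.execute("SELECT worker, score, params FROM candidate "
                      "ORDER BY score LIMIT 1").fetchone()

def local_search(worker, start):
    # Toy objective: fit a single parameter k to a 'measured' value of 2.0.
    best = min((abs(k - 2.0), k)
               for k in (start + random.uniform(-1, 1) for _ in range(50)))
    publish(worker, best[0], {"k": best[1]})

for worker, start in [("node1", 0.0), ("node2", 5.0)]:
    local_search(worker, start)
print(best_candidate())
```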

  5. The Database and Data Analysis Software of Radiation Monitoring System

    International Nuclear Information System (INIS)

    Wang Weizhen; Li Jianmin; Wang Xiaobing; Hua Zhengdong; Xu Xunjiang

    2009-01-01

    Shanghai Synchrotron Radiation Facility (SSRF for short) is a third-generation light source being built in China, including a 150 MeV injector, a 3.5 GeV booster, a 3.5 GeV storage ring and a number of beamline stations. The data are fetched by the monitoring computer from collecting modules at the front end and saved in a MySQL database on the managing computer. The data analysis software is coded in Python, a scripting language, to query, summarize and plot the data of a given monitoring channel during a given period and export them to an external file. In addition, warning events can be queried separately. The website for historical and real-time data query and plotting is coded in PHP. (authors)
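
    A minimal sketch of the Python analysis path described here, assuming a simple readings table: query one monitoring channel over a time window, summarize it, and export it to a file. sqlite3 stands in for the facility's MySQL database, and the table layout, channel name and values are hypothetical.

```python
import csv, sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE readings (channel TEXT, ts TEXT, dose_rate REAL)")
db.executemany("INSERT INTO readings VALUES (?, ?, ?)",
               [("GAMMA_01", f"2009-01-01 00:0{i}:00", 0.10 + 0.01 * i) for i in range(5)])

def export_channel(channel, start, end, path):
    # Query one channel over a period, export to CSV, and return min/max/mean.
    rows = db.execute(
        "SELECT ts, dose_rate FROM readings "
        "WHERE channel = ? AND ts BETWEEN ? AND ? ORDER BY ts",
        (channel, start, end)).fetchall()
    values = [v for _, v in rows]
    with open(path, "w", newline="") as f:
        csv.writer(f).writerows([("timestamp", "dose_rate"), *rows])
    return min(values), max(values), sum(values) / len(values)

print(export_channel("GAMMA_01", "2009-01-01 00:00:00", "2009-01-01 00:05:00", "gamma01.csv"))
```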

  6. Studies on preparation of the database system for clinical records of atomic bomb survivors

    International Nuclear Information System (INIS)

    Nakamura, Tsuyoshi

    1981-01-01

    Construction of the database system aimed at multipurpose application of data on clinical medicine was studied through the preparation of database system for clinical records of atomic bomb survivors. The present database includes the data about 110,000 atomic bomb survivors in Nagasaki City. This study detailed: (1) Analysis of errors occurring in a period from generation of data in the clinical field to input into the database, and discovery of a highly precise, effective method of input. (2) Development of a multipurpose program for uniform processing of data on physical examinations from many organizations. (3) Development of a record linkage method for voluminous files which are essential in the construction of a large-scale medical information system. (4) A database model suitable for clinical research and a method for designing a segment suitable for physical examination data. (Chiba, N.)

  7. Computer systems and methods for the query and visualization of multidimensional database

    Science.gov (United States)

    Stolte, Chris; Tang, Diane L.; Hanrahan, Patrick

    2010-05-11

    A method and system for producing graphics. A hierarchical structure of a database is determined. A visual table, comprising a plurality of panes, is constructed by providing a specification that is in a language based on the hierarchical structure of the database. In some cases, this language can include fields that are in the database schema. The database is queried to retrieve a set of tuples in accordance with the specification. A subset of the set of tuples is associated with a pane in the plurality of panes.
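
    A minimal sketch of the pane construction described in this record, assuming the specification simply names a row field, a column field and a measure from the schema; the field names and tuples are invented for illustration and this is not the patented implementation.

```python
from collections import defaultdict

# A specification, expressed over (hypothetical) schema fields.
specification = {"row": "region", "column": "year", "measure": "sales"}

# Tuples returned by querying the database in accordance with the specification.
tuples = [
    {"region": "East", "year": 2008, "sales": 120},
    {"region": "East", "year": 2009, "sales": 135},
    {"region": "West", "year": 2008, "sales": 80},
]

# Route each tuple to the pane addressed by its (row, column) values.
panes = defaultdict(list)
for t in tuples:
    key = (t[specification["row"]], t[specification["column"]])
    panes[key].append(t[specification["measure"]])

for (row, col), values in sorted(panes.items()):
    print(f"pane[{row}, {col}] -> {values}")
```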

  8. The evolution of a distributed operating system

    NARCIS (Netherlands)

    van Renesse, Robbert; Tanenbaum, Andrew S.; Mullender, Sape J.; Schröder-Preikschat, Wolfgang; Zimmer, Wolfgang

    AMOEBA is a research project to build a true distributed operating system using the object model. Under the COST11-ter MANDIS project this work was extended to cover wide-area networks. Besides describing the system, this paper discusses the successive versions in the implementation of its model,

  9. Distilled Water Distribution Systems. Laboratory Design Notes.

    Science.gov (United States)

    Sell, J.C.

    Factors concerning water distribution systems, including an evaluation of materials and a recommendation of materials best suited for service in typical facilities are discussed. Several installations are discussed in an effort to bring out typical features in selected applications. The following system types are included--(1) industrial…

  10. Expert System Detects Power-Distribution Faults

    Science.gov (United States)

    Walters, Jerry L.; Quinn, Todd M.

    1994-01-01

    Autonomous Power Expert (APEX) computer program is prototype expert-system program detecting faults in electrical-power-distribution system. Assists human operators in diagnosing faults and deciding what adjustments or repairs needed for immediate recovery from faults or for maintenance to correct initially nonthreatening conditions that could develop into faults. Written in Lisp.

  11. Performance of the Amoeba Distributed Operating System

    NARCIS (Netherlands)

    van Renesse, R.; van Staveren, H.; Tanenbaum, A.S.

    1989-01-01

    Amoeba is a capability‐based distributed operating system designed for high‐performance interactions between clients and servers using the well‐known RPC model. The paper starts out by describing the architecture of the Amoeba system, which is typified by specialized components such as workstations,

  12. Hamiltonian and Variational Linear Distributed Systems

    NARCIS (Netherlands)

    Rapisarda, P.; Trentelman, H.L.

    2002-01-01

    We use the formalism of bilinear- and quadratic differential forms in order to study Hamiltonian and variational linear distributed systems. It was shown that a system described by ordinary linear constant-coefficient differential equations is Hamiltonian if and only if it is variational. In this

  13. A Case Study on Distributed Antenna Systems

    DEFF Research Database (Denmark)

    Sørensen, Troels Bundgaard

    2007-01-01

    Passive distributed antenna systems (DASs) consisting of distributed feeder lines or single point antennas are now often installed in large office buildings where they provide efficient coverage throughout the building. More sophisticated DASs with intelligent reuse and the ability to adapt...... is described in terms of algorithms for power allocation and access port assignment, as well as algorithms for (dynamic) channel assignment. After an outline of simulation assumptions, system capacity comparisons are given between the adaptive DAS and a system with fixed channel and access port assignment...

  14. Distributed expert systems for nuclear reactor control

    International Nuclear Information System (INIS)

    Otaduy, P.J.

    1992-01-01

    A network of distributed expert systems is the heart of a prototype supervisory control architecture developed at the Oak Ridge National Laboratory (ORNL) for an advanced multimodular reactor. Eight expert systems encode knowledge on signal acquisition, diagnostics, safeguards, and control strategies in a hybrid rule-based, multiprocessing and object-oriented distributed computing environment. An interactive simulation of a power block consisting of three reactors and one turbine provides a realistic testbed for performance analysis of the integrated control system in real time. Implementation details and representative reactor transients are discussed

  15. Database retrieval systems for nuclear and astronomical data

    International Nuclear Information System (INIS)

    Suda, Takuma; Korennov, Sergei; Otuka, Naohiko; Yamada, Shimako; Katsuta, Yutaka; Ohnishi, Akira; Kato, Kiyoshi; Fujimoto, Masayuki Y.

    2006-01-01

    Data retrieval and plot systems of nuclear and astronomical data are constructed on a common platform. Web-based systems will soon be opened to the users of both fields of nuclear physics and astronomy. (author)

  16. An Evaluation of a Gateway System for Automated Online Database Selection.

    Science.gov (United States)

    Hu, Chengren

    This paper describes a study that was undertaken at the University of Illinois at Urbana-Champaign to compare the databases selected by 75 inexperienced student online searchers aided by an existing gateway system--INFOMASTER, a version of EASYNET--with databases selected manually by four experienced searchers who were reference librarians from…

  17. PACSY, a relational database management system for protein structure and chemical shift analysis

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Woonghee, E-mail: whlee@nmrfam.wisc.edu [University of Wisconsin-Madison, National Magnetic Resonance Facility at Madison, and Biochemistry Department (United States); Yu, Wookyung [Center for Proteome Biophysics, Pusan National University, Department of Physics (Korea, Republic of); Kim, Suhkmann [Pusan National University, Department of Chemistry and Chemistry Institute for Functional Materials (Korea, Republic of); Chang, Iksoo [Center for Proteome Biophysics, Pusan National University, Department of Physics (Korea, Republic of); Lee, Weontae, E-mail: wlee@spin.yonsei.ac.kr [Yonsei University, Structural Biochemistry and Molecular Biophysics Laboratory, Department of Biochemistry (Korea, Republic of); Markley, John L., E-mail: markley@nmrfam.wisc.edu [University of Wisconsin-Madison, National Magnetic Resonance Facility at Madison, and Biochemistry Department (United States)

    2012-10-15

    PACSY (Protein structure And Chemical Shift NMR spectroscopY) is a relational database management system that integrates information from the Protein Data Bank, the Biological Magnetic Resonance Data Bank, and the Structural Classification of Proteins database. PACSY provides three-dimensional coordinates and chemical shifts of atoms along with derived information such as torsion angles, solvent accessible surface areas, and hydrophobicity scales. PACSY consists of six relational table types linked to one another for coherence by key identification numbers. Database queries are enabled by advanced search functions supported by an RDBMS server such as MySQL or PostgreSQL. PACSY enables users to search for combinations of information from different database sources in support of their research. Two software packages, PACSY Maker for database creation and PACSY Analyzer for database analysis, are available from http://pacsy.nmrfam.wisc.edu.
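
    Since PACSY is described as a set of relational tables linked by key identification numbers and queried through an RDBMS server such as MySQL or PostgreSQL, a query of the following shape is plausible. The table and column names (coord_db, csh_db, key_id, pdb_id) and the psycopg2 connection details are assumptions for illustration, not the published PACSY schema.

      # Hedged sketch of a join across linked PACSY-style tables via PostgreSQL.
      # Table/column names and credentials are illustrative assumptions only.
      import psycopg2

      QUERY = """
          SELECT c.key_id, c.atom_name, c.x, c.y, c.z, s.chemical_shift
          FROM coord_db AS c
          JOIN csh_db  AS s ON s.key_id = c.key_id AND s.atom_name = c.atom_name
          WHERE c.pdb_id = %s
      """

      def fetch_coordinates_and_shifts(pdb_id):
          conn = psycopg2.connect(dbname="pacsy", user="reader", password="secret")
          try:
              with conn.cursor() as cur:
                  cur.execute(QUERY, (pdb_id,))
                  return cur.fetchall()
          finally:
              conn.close()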

  18. PACSY, a relational database management system for protein structure and chemical shift analysis.

    Science.gov (United States)

    Lee, Woonghee; Yu, Wookyung; Kim, Suhkmann; Chang, Iksoo; Lee, Weontae; Markley, John L

    2012-10-01

    PACSY (Protein structure And Chemical Shift NMR spectroscopY) is a relational database management system that integrates information from the Protein Data Bank, the Biological Magnetic Resonance Data Bank, and the Structural Classification of Proteins database. PACSY provides three-dimensional coordinates and chemical shifts of atoms along with derived information such as torsion angles, solvent accessible surface areas, and hydrophobicity scales. PACSY consists of six relational table types linked to one another for coherence by key identification numbers. Database queries are enabled by advanced search functions supported by an RDBMS server such as MySQL or PostgreSQL. PACSY enables users to search for combinations of information from different database sources in support of their research. Two software packages, PACSY Maker for database creation and PACSY Analyzer for database analysis, are available from http://pacsy.nmrfam.wisc.edu.

  19. PACSY, a relational database management system for protein structure and chemical shift analysis

    Science.gov (United States)

    Lee, Woonghee; Yu, Wookyung; Kim, Suhkmann; Chang, Iksoo

    2012-01-01

    PACSY (Protein structure And Chemical Shift NMR spectroscopY) is a relational database management system that integrates information from the Protein Data Bank, the Biological Magnetic Resonance Data Bank, and the Structural Classification of Proteins database. PACSY provides three-dimensional coordinates and chemical shifts of atoms along with derived information such as torsion angles, solvent accessible surface areas, and hydrophobicity scales. PACSY consists of six relational table types linked to one another for coherence by key identification numbers. Database queries are enabled by advanced search functions supported by an RDBMS server such as MySQL or PostgreSQL. PACSY enables users to search for combinations of information from different database sources in support of their research. Two software packages, PACSY Maker for database creation and PACSY Analyzer for database analysis, are available from http://pacsy.nmrfam.wisc.edu. PMID:22903636

  20. PACSY, a relational database management system for protein structure and chemical shift analysis

    International Nuclear Information System (INIS)

    Lee, Woonghee; Yu, Wookyung; Kim, Suhkmann; Chang, Iksoo; Lee, Weontae; Markley, John L.

    2012-01-01

    PACSY (Protein structure And Chemical Shift NMR spectroscopY) is a relational database management system that integrates information from the Protein Data Bank, the Biological Magnetic Resonance Data Bank, and the Structural Classification of Proteins database. PACSY provides three-dimensional coordinates and chemical shifts of atoms along with derived information such as torsion angles, solvent accessible surface areas, and hydrophobicity scales. PACSY consists of six relational table types linked to one another for coherence by key identification numbers. Database queries are enabled by advanced search functions supported by an RDBMS server such as MySQL or PostgreSQL. PACSY enables users to search for combinations of information from different database sources in support of their research. Two software packages, PACSY Maker for database creation and PACSY Analyzer for database analysis, are available from http://pacsy.nmrfam.wisc.edu.

  1. Database Administrator

    Science.gov (United States)

    Moore, Pam

    2010-01-01

    The Internet and electronic commerce (e-commerce) generate lots of data. Data must be stored, organized, and managed. Database administrators, or DBAs, work with database software to find ways to do this. They identify user needs, set up computer databases, and test systems. They ensure that systems perform as they should and add people to the…

  2. Field validation of food service listings: a comparison of commercial and online geographic information system databases.

    Science.gov (United States)

    Seliske, Laura; Pickett, William; Bates, Rebecca; Janssen, Ian

    2012-08-01

    Many studies examining the food retail environment rely on geographic information system (GIS) databases for location information. The purpose of this study was to validate information provided by two GIS databases, comparing the positional accuracy of food service places within a 1 km circular buffer surrounding 34 schools in Ontario, Canada. A commercial database (InfoCanada) and an online database (Yellow Pages) provided the addresses of food service places. Actual locations were measured using a global positioning system (GPS) device. The InfoCanada and Yellow Pages GIS databases provided the locations for 973 and 675 food service places, respectively. Overall, 749 (77.1%) and 595 (88.2%) of these were located in the field. The online database had a higher proportion of food service places found in the field. The GIS locations of 25% of the food service places were located within approximately 15 m of their actual location, 50% were within 25 m, and 75% were within 50 m. This validation study provided a detailed assessment of errors in the measurement of the location of food service places in the two databases. The location information was more accurate for the online database; however, when matching criteria were more conservative, there were no observed differences in error between the databases.
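
    The error percentiles quoted above (roughly 15 m, 25 m and 50 m for the 25th, 50th and 75th percentiles) can be reproduced from coordinate pairs with a simple great-circle calculation. The sketch below is a generic illustration of that computation, not the authors' analysis code.

      # Sketch: great-circle error (metres) between database and GPS coordinates,
      # then the quartiles reported in validation studies of this kind.
      import math
      import statistics

      def haversine_m(lat1, lon1, lat2, lon2):
          r = 6371000.0  # mean Earth radius in metres
          p1, p2 = math.radians(lat1), math.radians(lat2)
          dp = math.radians(lat2 - lat1)
          dl = math.radians(lon2 - lon1)
          a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
          return 2 * r * math.asin(math.sqrt(a))

      def error_quartiles(pairs):
          """pairs: iterable of ((db_lat, db_lon), (gps_lat, gps_lon)) tuples."""
          errors = [haversine_m(a[0], a[1], b[0], b[1]) for a, b in pairs]
          q = statistics.quantiles(errors, n=4)
          return {"q25": q[0], "median": q[1], "q75": q[2]}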

  3. Distributed Monitoring System Based on ICINGA

    CERN Multimedia

    Haen, C; Neufeld, N

    2011-01-01

    The LHCb online system relies on a large and heterogeneous I.T. infrastructure : it comprises more than 2000 servers and embedded systems and more than 200 network devices. While for the control and monitoring of detectors, PLCs, and readout boards an industry standard SCADA system PVSSII has been put in production, we use a low level monitoring system to monitor the control infrastructure itself. While our previous system was based on a single central NAGIOS server, our current system uses a distributed ICINGA infrastructure.

  4. The "Family Tree" of Air Distribution Systems

    DEFF Research Database (Denmark)

    Nielsen, Peter V.

    2011-01-01

    that all the known types of air distribution systems are interconnected in a “family tree”. The influence of supplied momentum flow versus buoyancy forces is discussed, and geometries for high ventilation effectiveness are indicated as well as geometries for fully mixed flow. The paper will also show...... conditions which are not used for air distribution in general. A number of experiments with different air distribution systems are addressed, and they illustrate the behaviour at the different conditions discussed in the paper.......In this paper all total volume air distribution principles are addressed based on discussions of air flow pattern in a room with heat sources giving a cooling load. The supply and exhaust air openings are considered to have different locations and sizes in the room, and it is possible to show...

  5. Advanced operating technique using the VR database system

    International Nuclear Information System (INIS)

    Lee, Il-Suk; Yoon, Sang-Hyuk; Suh, Kune Y.

    2003-01-01

    For a timely and competitive response to the rapidly changing energy environment of the twenty-first century, there is a growing need to build advanced nuclear power plants in the unlimited workspace of virtual reality (VR) prior to commissioning. One can then realistically evaluate their construction time and cost for the varying methods and options available from leading-edge technology. In particular, a great deal of effort has yet to be made for time- and cost-dependent plant simulation and dynamically coupled database construction in the VR space. The present work is proposed in three-dimensional space plus time and cost coordinates, i.e. four-plus-dimensional (4 + D) coordinates. The preliminary VR simulation capability offered by the 4 + D VR technology TM will supply vital information not only for the actual design and construction of the engineered structures but also for on-line design modification. Quite a few companies and research institutions have supplied various information services to the nuclear market; a great deal of this information exists in the form of reports, articles and books, which are simply text and graphic images. But if powerful information transfer methods are developed for nuclear plants by means of the 4 + D technology database, they will greatly benefit designers, manufacturers, users and even the public. Moreover, one can clearly understand the total structure of a nuclear plant if the 4 + D VR technology TM database operates together with a transient analysis simulator. This technique should also be available for public information about the nuclear industry as well as nuclear plant structures and components. By using the 4 + D VR technology TM, one can supply information to users that could not have been expressed with existing technology. Users can not only spin or closely observe the structural elements by simple mouse control, but also know

  6. Synchronization in Quantum Key Distribution Systems

    Directory of Open Access Journals (Sweden)

    Anton Pljonkin

    2017-10-01

    In the description of quantum key distribution systems, much attention is paid to the operation of quantum cryptography protocols. The main problem is the insufficient study of the synchronization process of quantum key distribution systems. This paper contains a general description of quantum cryptography principles. A two-line fiber-optic quantum key distribution system with phase coding of photon states, operating in transceiver and coding station synchronization mode, was examined. The quantum key distribution system was built on the basis of the scheme with automatic compensation of polarization mode distortions. Single-photon avalanche diodes were used as optical radiation detecting devices. It was estimated how the parameters of the optical detectors used in quantum key distribution systems affect the detection of the time frame containing the attenuated optical pulse in synchronization mode, with respect to its probabilistic and time-domain characteristics. A design method was given for the process that detects the time frame that includes an optical pulse during synchronization. This paper describes the main methods of attacking the quantum communication channel by removing a portion of the optical emission. This paper also describes the developed synchronization algorithm, which takes into account the time required to restore the photodetector's operating state after a photon has been registered during synchronization. The computer simulation results of the developed synchronization algorithm were analyzed. The efficiency of the developed algorithm with respect to protecting the synchronization process from unauthorized gathering of optical emission is demonstrated herein.
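
    The frame-detection step can be pictured with a toy simulation: detector clicks are accumulated per time frame over many passes, with a small signal probability in the frame that actually contains the attenuated pulse and a dark-count probability everywhere, and the frame with the most clicks is selected. All numbers below are invented for illustration and do not reflect the parameters studied in the paper.

      # Toy simulation of synchronisation frame detection; all probabilities and
      # counts are illustrative assumptions, not values from the paper.
      import random

      def find_signal_frame(n_frames=1000, true_frame=412, passes=200,
                            p_signal=0.05, p_dark=0.001, seed=1):
          rng = random.Random(seed)
          counts = [0] * n_frames
          for _ in range(passes):
              for frame in range(n_frames):
                  p = p_dark + (p_signal if frame == true_frame else 0.0)
                  if rng.random() < p:
                      counts[frame] += 1
          return max(range(n_frames), key=counts.__getitem__)

      print(find_signal_frame())  # expected to recover frame 412 with these settings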

  7. Driven optomechanical systems for mechanical entanglement distribution

    Science.gov (United States)

    Paternostro, Mauro; Mazzola, Laura; Li, Jie

    2012-08-01

    We consider the distribution of entanglement from a multimode optical driving source to a network of remote and independent optomechanical systems. By focusing on the tripartite case, we analyse the effects that the features of the optical input states have on the degree and sharing structure of the distributed, fully mechanical, entanglement. This study, which is conducted looking at the mechanical steady state, highlights the structure of the entanglement distributed among the nodes and determines the relative efficiency between bipartite and tripartite entanglement transfer. We discuss a few open points, some of which are directed towards the bypassing of such limitations.

  8. Control of distributed systems : tutorial and overview

    Czech Academy of Sciences Publication Activity Database

    van Schuppen, J. H.; Boutin, O.; Kempker, P.L.; Komenda, Jan; Masopust, Tomáš; Pambakian, N.; Ran, A.C.M.

    2011-01-01

    Roč. 17, 5-6 (2011), s. 579-602 ISSN 0947-3580 R&D Projects: GA ČR(CZ) GAP103/11/0517; GA ČR(CZ) GPP202/11/P028 Institutional research plan: CEZ:AV0Z10190503 Keywords : distributed system * coordination control * hierarchical control * distributed control * distributed control with communication Subject RIV: BA - General Mathematics Impact factor: 0.817, year: 2011 http://ejc.revuesonline.com/article.jsp?articleId=16873

  9. Smart travel guide: from internet image database to intelligent system

    Science.gov (United States)

    Chareyron, Gaël; Da Rugna, Jérome; Cousin, Saskia

    2011-02-01

    To help tourists discover a city, a region or a park, many options are provided by public tourism travel centers, by free online guides or by dedicated guide books. Nonetheless, these guides provide only mainstream information, which does not conform to a particular tourist's behavior. On the other hand, several online image databases allow users to upload their images and to localize each image on a map. These websites are representative of tourism practices and constitute a proxy for analyzing tourism flows. This work then intends to answer the question: knowing what I have visited and what other people have visited, where should I go now? This process needs to profile users, sites and photos. Our paper presents the acquired data and the relationships between photographers, sites and photos, and introduces the model designed to correctly estimate the interest of each tourism site. The third part shows an application of our schema: a smart travel guide on geolocated mobile devices. This Android application is a travel guide that truly matches the user's wishes.

  10. 7th Asian Conference on Intelligent Information and Database Systems (ACIIDS 2015)

    CERN Document Server

    Nguyen, Ngoc; Batubara, John; New Trends in Intelligent Information and Database Systems

    2015-01-01

    Intelligent information and database systems are two closely related subfields of modern computer science which have been known for over thirty years. They focus on the integration of artificial intelligence and classic database technologies to create the class of next generation information systems. The book focuses on new trends in intelligent information and database systems and discusses topics addressed to the foundations and principles of data, information, and knowledge models, methodologies for intelligent information and database systems analysis, design, and implementation, their validation, maintenance and evolution. They cover a broad spectrum of research topics discussed both from the practical and theoretical points of view such as: intelligent information retrieval, natural language processing, semantic web, social networks, machine learning, knowledge discovery, data mining, uncertainty management and reasoning under uncertainty, intelligent optimization techniques in information systems, secu...

  11. Application of the British Food Standards Agency nutrient profiling system in a French food composition database.

    Science.gov (United States)

    Julia, Chantal; Kesse-Guyot, Emmanuelle; Touvier, Mathilde; Méjean, Caroline; Fezeu, Léopold; Hercberg, Serge

    2014-11-28

    Nutrient profiling systems are powerful tools for public health initiatives, as they aim at categorising foods according to their nutritional quality. The British Food Standards Agency (FSA) nutrient profiling system (FSA score) has been validated in a British food database, but the application of the model in other contexts has not yet been evaluated. The objective of the present study was to assess the application of the British FSA score in a French food composition database. Foods from the French NutriNet-Santé study food composition table were categorised according to their FSA score using the Office of Communication (OfCom) cut-off value ('healthier' ≤ 4 for foods and ≤ 1 for beverages; 'less healthy' >4 for foods and >1 for beverages) and distribution cut-offs (quintiles for foods, quartiles for beverages). Foods were also categorised according to the food groups used for the French Programme National Nutrition Santé (PNNS) recommendations. Foods were weighted according to their relative consumption in a sample drawn from the NutriNet-Santé study (n 4225), representative of the French population. Classification of foods according to the OfCom cut-offs was consistent with food groups described in the PNNS: 97·8 % of fruit and vegetables, 90·4 % of cereals and potatoes and only 3·8 % of sugary snacks were considered as 'healthier'. Moreover, variability in the FSA score allowed for a discrimination between subcategories in the same food group, confirming the possibility of using the FSA score as a multiple category system, for example as a basis for front-of-pack nutrition labelling. Application of the FSA score in the French context would adequately complement current public health recommendations.
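
    The OfCom cut-offs quoted in the abstract translate directly into a classification rule, assuming the FSA score of each item has already been computed. The items and scores below are invented for illustration.

      # Applying the quoted OfCom cut-offs to precomputed FSA scores.
      # Example items and their scores are invented for illustration.
      def classify(fsa_score, is_beverage):
          threshold = 1 if is_beverage else 4
          return "healthier" if fsa_score <= threshold else "less healthy"

      items = [
          ("apple", -3, False),
          ("chocolate bar", 23, False),
          ("orange juice", 2, True),
      ]
      for name, score, beverage in items:
          print(name, "->", classify(score, beverage))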

  12. A DISTRIBUTED RING ALGORITHM FOR COORDINATOR ELECTION IN DISTRIBUTED SYSTEMS

    Directory of Open Access Journals (Sweden)

    Shaik Naseera

    2016-09-01

    In distributed systems, nodes are connected at different geographical locations. As part of effective resource utilization, data and resources are shared among these nodes. A leader or coordinator is necessary to take care of this resource-sharing process by eliminating conflicts among the nodes. The shared resources must be accessed in a fair and optimal manner by all the nodes in the network, which underlines the importance of electing a leader that can coordinate all the nodes and ensure fair use of the resources. Because nodes are distributed across different geographical locations, and because of factors influencing their operation, a leader may go down temporarily or permanently. In such a case a new leader has to be elected for coordination. The time taken to elect a new leader is one of the crucial factors in the performance of the system. In this paper, we propose a new approach for leader election that optimizes the time taken for the nodes to elect the leader.
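
    The abstract does not spell out the proposed algorithm, so as a reference point the sketch below simulates a classic ring-based election (Chang-Roberts style), in which each node forwards the largest identifier seen so far and the node that receives its own identifier back becomes the coordinator. It is a single-process simulation, not the authors' optimized approach.

      # Simplified simulation of a classic ring election (largest identifier wins).
      # A reference sketch only, not the paper's optimized algorithm.
      def ring_election(node_ids, initiator_index=0):
          n = len(node_ids)
          message = node_ids[initiator_index]
          hops = 0
          i = initiator_index
          while True:
              i = (i + 1) % n                      # pass the token to the next node
              hops += 1
              if message == node_ids[i]:           # a node received its own id back
                  return node_ids[i], hops
              message = max(message, node_ids[i])  # forward the larger identifier

      leader, hops = ring_election([17, 4, 42, 9, 23])
      print(leader, hops)  # node 42 is elected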

  13. A Distributed Intelligent System for Emergency Convoy

    Directory of Open Access Journals (Sweden)

    Mohammed Benalla

    2016-09-01

    The general problem that guides this research is the ability to design a distributed intelligent system for guiding emergency convoys; a solution based on a group of agents and on the analysis of traffic in order to generate a collective functional response. It fits into the broader issue of Distributed Artificial Intelligence (DAI), which aims to make computer agents operate cooperatively within a multi-agent system (MAS). This article conceptually describes two fundamental questions of emergency convoys. The first question is dedicated to finding a response to the traffic situation (i.e. keeping the way fluid), while the second is devoted to convoy orientation, with emphasis on a distributed and cooperative resolution of the general problem.

  14. Survey of standards applicable to a database management system

    Science.gov (United States)

    Urena, J. L.

    1981-01-01

    Industry, government, and NASA standards, and the status of standardization activities of standards setting organizations applicable to the design, implementation and operation of a data base management system for space related applications are identified. The applicability of the standards to a general purpose, multimission data base management system is addressed.

  15. Preliminary investigations on TINI based distributed instrumentation systems

    International Nuclear Information System (INIS)

    Bezboruah, T.; Kalita, M.

    2006-04-01

    A prototype web-enabled distributed instrumentation system is being proposed in the Department of Electronics Science, Gauhati University, Assam, India. The distributed instrumentation system contains sensors, legacy hardware, a TCP/IP protocol converter, a TCP/IP Ethernet network, a database server, a web/application server and client PCs. As part of the proposed work, the Tiny Internet Interface (TINI, TBM390: Dallas Semiconductor) has been deployed as the TCP/IP stack, with the Java programming language as the software tool. A feature supported by Java that is particularly relevant to the distributed system is the applet. An applet is a Java class that can be downloaded from the web server and run in a context application such as a web browser or an applet viewer. TINI has been installed as the TCP/IP stack, as it is the embedded system best suited to the Java programming language and has been uniquely designed for communicating with One Wire Devices (OWD) over the network. Here we will discuss the hardware and software aspects of TINI with OWD for the present system. (author)

  16. Cardea: Dynamic Access Control in Distributed Systems

    Science.gov (United States)

    Lepro, Rebekah

    2004-01-01

    Modern authorization systems span domains of administration, rely on many different authentication sources, and manage complex attributes as part of the authorization process. This paper presents Cardea, a distributed system that facilitates dynamic access control, as a valuable piece of an interoperable authorization framework. First, the authorization model employed in Cardea and its functionality goals are examined. Next, critical features of the system architecture and its handling of the authorization process are examined. Then the SAML and XACML standards, as incorporated into the system, are analyzed. Finally, the future directions of this project are outlined and connection points with general components of an authorization system are highlighted.
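
    The attribute-based decision at the core of such a system can be pictured with a very small sketch. It only illustrates the general shape of an XACML-style permit/deny evaluation over request attributes; it is not the Cardea implementation or the XACML API, and the policies are invented.

      # Minimal attribute-based access decision; an illustration of the general
      # shape of XACML-style evaluation, not the Cardea or XACML API.
      def evaluate(request_attrs, policies):
          """policies: list of (required_attrs, effect); first applicable rule wins."""
          for required, effect in policies:
              if all(request_attrs.get(k) == v for k, v in required.items()):
                  return effect
          return "deny"  # deny by default when no rule applies

      policies = [
          ({"role": "scientist", "project": "aero-sim"}, "permit"),
          ({"role": "guest"}, "deny"),
      ]
      print(evaluate({"role": "scientist", "project": "aero-sim"}, policies))  # permit
      print(evaluate({"role": "guest"}, policies))                             # deny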

  17. Estimating species diversity and distribution in the era of Big Data: to what extent can we trust public databases?

    Science.gov (United States)

    Maldonado, Carla; Molina, Carlos I.; Zizka, Alexander; Persson, Claes; Taylor, Charlotte M.; Albán, Joaquina; Chilquillo, Eder; Antonelli, Alexandre

    2015-01-01

    Aim: Massive digitalization of natural history collections is now leading to a steep accumulation of publicly available species distribution data. However, taxonomic errors and geographical uncertainty of species occurrence records are now acknowledged by the scientific community – putting into question to what extent such data can be used to unveil correct patterns of biodiversity and distribution. We explore this question through quantitative and qualitative analyses of uncleaned versus manually verified datasets of species distribution records across different spatial scales. Location: The American tropics. Methods: As test case we used the plant tribe Cinchoneae (Rubiaceae). We compiled four datasets of species occurrences: one created manually and verified through classical taxonomic work, and the rest derived from GBIF under different cleaning and filling schemes. We used new bioinformatic tools to code species into grids, ecoregions, and biomes following WWF's classification. We analysed species richness and altitudinal ranges of the species. Results: Altitudinal ranges for species and genera were correctly inferred even without manual data cleaning and filling. However, erroneous records affected spatial patterns of species richness. They led to an overestimation of species richness in certain areas outside the centres of diversity in the clade. The location of many of these areas comprised the geographical midpoint of countries and political subdivisions, assigned long after the specimens had been collected. Main conclusion: Open databases and integrative bioinformatic tools allow a rapid approximation of large-scale patterns of biodiversity across space and altitudinal ranges. We found that geographic inaccuracy affects diversity patterns more than taxonomic uncertainties, often leading to false positives, i.e. overestimating species richness in relatively species poor regions. Public databases for species distribution are valuable and should be

  18. VIS - A database on the distribution of fishes in inland and estuarine waters in Flanders, Belgium.

    Science.gov (United States)

    Brosens, Dimitri; Breine, Jan; Van Thuyne, Gerlinde; Belpaire, Claude; Desmet, Peter; Verreycken, Hugo

    2015-01-01

    The Research Institute for Nature and Forest (INBO) has been performing standardized fish stock assessments in Flanders, Belgium. This Flemish Fish Monitoring Network aims to assess fish populations in public waters at regular time intervals in both inland waters and estuaries. This monitoring was set up in support of the Water Framework Directive, the Habitat Directive, the Eel Regulation, the Red List of fishes, fish stock management, biodiversity research, and to assess the colonization and spreading of non-native fish species. The collected data are consolidated in the Fish Information System or VIS. From VIS, the occurrence data are now published at the INBO IPT as two datasets: 'VIS - Fishes in inland waters in Flanders, Belgium' and 'VIS - Fishes in estuarine waters in Flanders, Belgium'. Together these datasets represent a complete overview of the distribution and abundance of fish species pertaining in Flanders from late 1992 to the end of 2012. This data paper discusses both datasets together, as both have a similar methodology and structure. The inland waters dataset contains over 350,000 fish observations, sampled between 1992 and 2012 from over 2,000 locations in inland rivers, streams, canals, and enclosed waters in Flanders. The dataset includes 64 fish species, as well as a number of non-target species (mainly crustaceans). The estuarine waters dataset contains over 44,000 fish observations, sampled between 1995 and 2012 from almost 50 locations in the estuaries of the rivers Yser and Scheldt ("Zeeschelde"), including two sampling sites in the Netherlands. The dataset includes 69 fish species and a number of non-target crustacean species. To foster broad and collaborative use, the data are dedicated to the public domain under a Creative Commons Zero waiver and reference the INBO norms for data use.

  19. Converters for Distributed Power Generation Systems

    DEFF Research Database (Denmark)

    Blaabjerg, Frede; Yang, Yongheng

    2015-01-01

    Power electronics technology has become the enabling technology for the integration of distributed power generation systems (DPGS) such as offshore wind turbine power systems and commercial photovoltaic power plants. Depending on the applications, a vast array of DPGS-based power converter...... topologies has been developed and more are coming into the market in order to achieve an efficient and reliable power conversion from the renewables. In addition, stringent demands from both the distribution system operators and the consumers have been imposed on the renewable-based DPGS. This article...... presents an overview of the power converters for the DPGS, mainly based on wind turbine systems and photovoltaic systems, covering a wide range of applications. Moreover, the modulation schemes and interfacing power filters for the power converters are also exemplified. Finally, the general control...

  20. Self Configurable Intelligent Distributed Antenna System

    DEFF Research Database (Denmark)

    Kumar, Ambuj; Mihovska, Albena Dimitrova; Prasad, Ramjee

    2016-01-01

    to their respective Base Stations (BS). Moreover, in earlier generations of MCC, antennas were deemed collocated with their respective BSs. Later, the concepts like Distributed Antenna Systems (DAS) and Cloud RAN (C-RAN) made it possible to place these antennas distant from their respective BSs. However, being mapped...... with their respective base stations, spectrum pooling and management at antenna end is not efficient. The situation worsens in Heterogeneous and Dense-net conditions in an Area of Interest (AoI). In this paper, we propose a DAS based intelligent architecture referred to as Self Configurable Intelligent Distributed...... Antenna System (SCIDAS) that can simultaneously accommodate multilayer communication environment over a common BS....

  1. DC Home Appliances for DC Distribution System

    Directory of Open Access Journals (Sweden)

    MUHAMMAD KAMRAN

    2017-10-01

    This paper strengthens the idea of a DC distribution system for a DC microgrid consisting of a building of 50 apartments. Since the war of the currents, the AC system has been dominant, partly because of the paucity of research on protection of DC systems. Now, with advanced research in power electronics materials and components, the generation of electricity by solar PV, fuel cells and thermoelectric generators is inherently DC, which eliminates the rectification process. Transformers are replaced by power electronic buck-boost converters. DC circuit breakers have solved the protection problems for both DC transmission and distribution systems. In this paper a 308V DC microgrid is proposed and home appliances (DC internal) are modified to operate on 48V DC from the DC distribution line. Instead of using universal and induction motors in rotary appliances, BLDC (brushless DC) motors are proposed, which are highly efficient with minimal electro-mechanical losses and no commutation losses. The proposed DC system reduces the number of power conversion stages, hence diminishing the associated power losses and standby losses, which boosts the overall system efficiency. In view of all this, a conventional AC system can be replaced by a DC system that has many advantages in both cost and performance.

  2. Semantic-Based Concurrency Control for Object-Oriented Database Systems Supporting Real-Time Applications

    National Research Council Canada - National Science Library

    Lee, Juhnyoung; Son, Sang H

    1994-01-01

    .... This paper investigates major issues in designing semantic-based concurrency control for object-oriented database systems supporting real-time applications, and it describes approaches to solving...

  3. Development of bilateral data transferability in the Virginia Department of Transportation's Geotechnical Database Management System Framework.

    Science.gov (United States)

    2006-01-01

    An Internet-based, spatiotemporal Geotechnical Database Management System (GDBMS) Framework was designed, developed, and implemented at the Virginia Department of Transportation (VDOT) in 2002 to retrieve, manage, archive, and analyze geotechnical da...

  4. Subsurface interpretation based on geophysical data set using geothermal database system `GEOBASE`. 2; Chinetsu database system `GEOBASE` wo riyoshita Kakkonda chinetsu chiiki no chika kozo kaiseki. 2

    Energy Technology Data Exchange (ETDEWEB)

    Osato, K.; Sato, T.; Miura, Y.; Yamane, K. [GERD Geothermal Energy Research and Development Co. Ltd., Tokyo (Japan); Doi, N. [Japan Metals and Chemicals Co. Ltd., Tokyo (Japan); Uchida, T. [New Energy and Industrial Technology Development Organization, Tokyo, (Japan)

    1997-05-27

    Five cross sections were prepared as a result of MT-method investigations, in addition to the results of conventional analyses at the Kakkonda geothermal area; the three-dimensional resistivity distribution was made into a database using the Kriging method, matched to the anisotropy of the hypocenter distribution of micro-earthquakes; and the database was compared with the data derived from surveys on the pilot survey well WD-1a and its side-track well WD-1b. As a result, it was found that the water loss zone encountered by well WD-1b lies in a region of relatively lower resistivity than that of well WD-1a, which did not encounter a water loss zone. The region in which the water loss zone was encountered lies on a very steep gradient going from the high-resistivity region on the west side toward the low-resistivity region on the east side. This fact suggests the possibility that fractures have developed in this region of sharp resistivity gradient. Adding a three-dimensional interpolation function to the GEOBASE database using simple Kriging allowed the direction of anisotropy in spatial data to be decided freely and quickly. It was found that this capability is very powerful for mapping structures, such as geothermal zones, where anisotropy is highly dominant. 5 refs., 8 figs., 2 tabs.

  5. Development of the severe accident risk information database management system SARD

    Energy Technology Data Exchange (ETDEWEB)

    Ahn, Kwang Il; Kim, Dong Ha

    2003-01-01

    The main purpose of this report is to introduce essential features and functions of a severe accident risk information management system, SARD (Severe Accident Risk Database Management System) version 1.0, which has been developed in Korea Atomic Energy Research Institute, and database management and data retrieval procedures through the system. The present database management system has powerful capabilities that can store automatically and manage systematically the plant-specific severe accident analysis results for core damage sequences leading to severe accidents, and search intelligently the related severe accident risk information. For that purpose, the present database system mainly takes into account the plant-specific severe accident sequences obtained from the Level 2 Probabilistic Safety Assessments (PSAs), base case analysis results for various severe accident sequences (such as code responses and summary for key-event timings), and related sensitivity analysis results for key input parameters/models employed in the severe accident codes. Accordingly, the present database system can be effectively applied in supporting the Level 2 PSA of similar plants, for fast prediction and intelligent retrieval of the required severe accident risk information for the specific plant whose information was previously stored in the database system, and development of plant-specific severe accident management strategies.

  6. Acceptance test procedure for the master equipment list (MEL)database system -- phase I

    International Nuclear Information System (INIS)

    Jech, J.B.

    1997-01-01

    The Waste Remediation System/.../Facilities Configuration Management Integration group has requested development of a system to help resolve many of the difficulties associated with management of master equipment list information. This project has been identified as Master Equipment List (MEL) database system. Further definition is contained in the system requirements specification (SRS), reference 7

  7. Acceptance test procedure for the master equipment list (MEL)database system -- phase I

    Energy Technology Data Exchange (ETDEWEB)

    Jech, J.B.

    1997-04-10

    The Waste Remediation System/.../Facilities Configuration Management Integration group has requested development of a system to help resolve many of the difficulties associated with management of master equipment list information. This project has been identified as Master Equipment List (MEL) database system. Further definition is contained in the system requirements specification (SRS), reference 7.

  8. Electric distribution systems and embedded generation capacity

    International Nuclear Information System (INIS)

    Calderaro, V.; Galdi, V.; Piccolo, A.; Siano, P.

    2006-01-01

    The main policy issues of European States are sustainable energy supply promotion and liberalization of energy markets, which introduced market competition in electricity production and created support mechanisms to encourage renewable electricity production and consumption. As a result of liberalization, any generator, including small-scale and renewable energy based units, can sell electricity on the free market. In order to meet future sustainability targets, connection of a higher number of Distributed Generation (DG) units to the electrical power system is expected, requiring changes in the design and operation of distribution electricity systems, as well as changes in electricity network regulation. In order to assist distribution system operators in planning and managing DG connections and in maximizing DG penetration and renewable sources exploitation, this paper proposed a reconfiguration methodology based on a Genetic Algorithm (GA), that was tested on a 70-bus system with DG units. The simulation results confirmed that the methodology represents a suitable tool for distribution system operators when dealing with DG capacity expansion and power loss issues, providing information regarding the potential penetration network-wide and allowing maximum exploitation of renewable generation. 35 refs., 4 tabs., 6 figs

  9. Optimizing and Enhancing Parallel Multi Storage Backup Compression for real Time Database Systems

    OpenAIRE

    Dr.T.Ravichandran; M. Muthukumar

    2012-01-01

    One of the big challenges today is the amount of data being stored, especially in data warehouses. Data stored in databases keep growing as a result of business requirements for more information. A big portion of the cost of keeping large amounts of data lies in the cost of disk systems and the resources utilized in managing the data. The field of backup compression in database systems has changed tremendously over the past few decades. Most existing work presented attribute-level ...

  10. Comparison of scientific and administrative database management systems

    Science.gov (United States)

    Stoltzfus, J. C.

    1983-01-01

    Some characteristics found to be different for scientific and administrative data bases are identified and some of the corresponding generic requirements for data base management systems (DBMS) are discussed. The requirements discussed are especially stringent for either the scientific or administrative data bases. For some, no commercial DBMS is fully satisfactory, and the data base designer must invent a suitable approach. For others, commercial systems are available with elegant solutions, and a wrong choice would mean an expensive work-around to provide the missing features. It is concluded that selection of a DBMS must be based on the requirements for the information system. There is no unique distinction between scientific and administrative data bases or DBMS. The distinction comes from the logical structure of the data, and understanding the data and their relationships is the key to defining the requirements and selecting an appropriate DBMS for a given set of applications.

  11. How to ensure sustainable interoperability in heterogeneous distributed systems through architectural approach.

    Science.gov (United States)

    Pape-Haugaard, Louise; Frank, Lars

    2011-01-01

    A major obstacle in ensuring ubiquitous information is the utilization of heterogeneous systems in eHealth. The objective in this paper is to illustrate how an architecture for distributed eHealth databases can be designed without lacking the characteristic features of traditional sustainable databases. The approach is firstly to explain traditional architecture in central and homogeneous distributed database computing, followed by a possible approach to use an architectural framework to obtain sustainability across disparate systems i.e. heterogeneous databases, concluded with a discussion. It is seen that through a method of using relaxed ACID properties on a service-oriented architecture it is possible to achieve data consistency which is essential when ensuring sustainable interoperability.
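
    One common way to realize relaxed ACID properties across heterogeneous services is to pair each local update with a compensating action that is executed if a later step fails. The sketch below shows that pattern in generic form; the step names are invented and this is not the architecture proposed in the paper, only an illustration of the idea.

      # Generic sketch of a relaxed-ACID update across services: apply local
      # steps in order and run compensating actions if a later step fails.
      def update_across_services(steps):
          """steps: list of (apply, compensate) callables, executed in order."""
          done = []
          try:
              for apply, compensate in steps:
                  apply()
                  done.append(compensate)
          except Exception:
              for compensate in reversed(done):  # undo steps already committed locally
                  compensate()
              raise

      # Usage with placeholder steps (names invented for illustration):
      log = []

      def fail_step():
          raise RuntimeError("service B unavailable")

      steps = [
          (lambda: log.append("service A: record updated"),
           lambda: log.append("service A: update compensated")),
          (fail_step, lambda: None),
      ]
      try:
          update_across_services(steps)
      except RuntimeError:
          pass
      print(log)  # the compensating action for service A has run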

  12. Wireless distributed functional electrical stimulation system.

    Science.gov (United States)

    Jovičić, Nenad S; Saranovac, Lazar V; Popović, Dejan B

    2012-08-09

    The control of movement in humans is hierarchical and distributed and uses feedback. An assistive system could be best integrated into the therapy of a human with a central nervous system lesion if the system is controlled in a similar manner. Here, we present a novel wireless architecture and routing protocol for a distributed functional electrical stimulation system that enables control of movement. The new system comprises a set of miniature battery-powered devices with stimulating and sensing functionality mounted on the body of the subject. The devices communicate wirelessly with one coordinator device, which is connected to a host computer. The control algorithm runs on the computer in open- or closed-loop form. A prototype of the system was designed using commercial, off-the-shelf components. The propagation characteristics of electromagnetic waves and the distributed nature of the system were considered during the development of a two-hop routing protocol, which was implemented in the prototype's software. The outcomes of this research include a novel system architecture and routing protocol and a functional prototype based on commercial, off-the-shelf components. A proof-of-concept study was performed on a hemiplegic subject with paresis of the right arm. The subject was tasked with generating a fully functional palmar grasp (closing of the fingers). One node was used to provide this movement, while a second node controlled the activation of extensor muscles to eliminate undesired wrist flexion. The system was tested with the open- and closed-loop control algorithms. The system fulfilled technical and application requirements. The novel communication protocol enabled reliable real-time use of the system in both closed- and open-loop forms. The testing on a patient showed that the multi-node system could operate effectively to generate functional movement.

  13. Wireless distributed functional electrical stimulation system

    Directory of Open Access Journals (Sweden)

    Jovičić Nenad S

    2012-08-01

    Background: The control of movement in humans is hierarchical and distributed and uses feedback. An assistive system could be best integrated into the therapy of a human with a central nervous system lesion if the system is controlled in a similar manner. Here, we present a novel wireless architecture and routing protocol for a distributed functional electrical stimulation system that enables control of movement. Methods: The new system comprises a set of miniature battery-powered devices with stimulating and sensing functionality mounted on the body of the subject. The devices communicate wirelessly with one coordinator device, which is connected to a host computer. The control algorithm runs on the computer in open- or closed-loop form. A prototype of the system was designed using commercial, off-the-shelf components. The propagation characteristics of electromagnetic waves and the distributed nature of the system were considered during the development of a two-hop routing protocol, which was implemented in the prototype's software. Results: The outcomes of this research include a novel system architecture and routing protocol and a functional prototype based on commercial, off-the-shelf components. A proof-of-concept study was performed on a hemiplegic subject with paresis of the right arm. The subject was tasked with generating a fully functional palmar grasp (closing of the fingers). One node was used to provide this movement, while a second node controlled the activation of extensor muscles to eliminate undesired wrist flexion. The system was tested with the open- and closed-loop control algorithms. Conclusions: The system fulfilled technical and application requirements. The novel communication protocol enabled reliable real-time use of the system in both closed- and open-loop forms. The testing on a patient showed that the multi-node system could operate effectively to generate functional movement.

  14. The database management system: A topic and a tool

    Science.gov (United States)

    Plummer, O. R.

    1984-01-01

    Data structures and data base management systems are common tools employed to deal with the administrative information of a university. An understanding of these topics is needed by a much wider audience, ranging from those interested in computer aided design and manufacturing to those using microcomputers. These tools are becoming increasingly valuable to academic programs as they develop comprehensive computer support systems. The wide use of these tools relies upon the relational data model as a foundation. Experience with the use of the IPAD RIM5.0 program is described.

  15. Light distribution system comprising spectral conversion means

    DEFF Research Database (Denmark)

    2012-01-01

    System (200, 300) for the distribution of white light, having a supply side (201, 301, 401) and a delivery side (202, 302, 402), the system being configured for guiding light with a multitude of visible wavelengths in a propagation direction P from the supply side to the distribution side...... fibre being operationally connected to the spectral conversion fibre having a length extending from an input end (221, 321) to an output end (222, 322), the spectral conversion fibre comprising a photoluminescent agent (511, 611, 711) for converting light of a first wavelength to light of a second......, longer wavelength, a spectral conversion characteristics of the spectral conversion fibre being essentially determined by the spectral absorption and emission properties of the photoluminescent agent, the amount of photoluminescent agent, and the distribution of the photoluminescent agent in the spectral

  16. Distributed Monitoring System Based on ICINGA

    CERN Document Server

    Haen, C; Neufeld, N

    2011-01-01

    The LHCb online system relies on a large and heterogeneous IT infrastructure: it comprises more than 2000 servers and embedded systems and more than 200 network devices. Many of these devices are critical to running the experiment, and it is important to have a monitoring solution that performs well enough for the experts to diagnose and act quickly. While our previous system was based on a central Nagios server, our current system uses a distributed Icinga infrastructure. The LHCb installation schema will be presented here, as well as some performance comparisons and custom tools.

  17. Data-based control trajectory planning for nonlinear systems

    International Nuclear Information System (INIS)

    Rhodes, C.; Morari, M.; Tsimring, L.S.; Rulkov, N.F.

    1997-01-01

    An open-loop trajectory planning algorithm is presented for computing an input sequence that drives an input-output system such that a reference trajectory is tracked. The algorithm utilizes only input-output data from the system to determine the proper control sequence, and does not require a mathematical or identified description of the system dynamics. From the input-output data, the controlled input trajectory is calculated in a "one-step-ahead" fashion using local modeling. Since the algorithm is calculated in this fashion, the output trajectories to be tracked can be nonperiodic. The algorithm is applied to a driven Lorenz system, and an experimental electrical circuit and the results are analyzed. Issues of stability associated with the implementation of this open-loop scheme are also examined using an analytic example of a driven Hénon map, problems associated with inverse controllers are illustrated, and solutions to these problems are proposed. copyright 1997 The American Physical Society

  18. Planning and Optimization Methods for Active Distribution Systems

    DEFF Research Database (Denmark)

    Abbey, Chad; Baitch, Alex; Bak-Jensen, Birgitte

    distribution planning. Active distribution networks (ADNs) have systems in place to control a combination of distributed energy resources (DERs), defined as generators, loads and storage. With these systems in place, the ADN becomes an Active Distribution System (ADS). Distribution system operators (DSOs) have

  19. Access to Emissions Distributions and Related Ancillary Data through the ECCAD database

    Science.gov (United States)

    Darras, Sabine; Granier, Claire; Liousse, Catherine; De Graaf, Erica; Enriquez, Edgar; Boulanger, Damien; Brissebrat, Guillaume

    2017-04-01

    The ECCAD database (Emissions of atmospheric Compounds and Compilation of Ancillary Data) provides user-friendly access to global and regional surface emissions for a large set of chemical compounds and ancillary data (land use, active fires, burned areas, population, etc.). The emission inventories are time-series gridded data at spatial resolutions from 1x1 to 0.1x0.1 degrees. ECCAD is the emissions database of the GEIA (Global Emissions InitiAtive) project and a sub-project of the French Atmospheric Data Center AERIS (http://www.aeris-data.fr). ECCAD currently has more than 2200 users originating from more than 80 countries. The project benefits from this large international community of users to expand the number of emission datasets made available. ECCAD provides detailed metadata for each of the datasets and various tools for data visualization, for computing global and regional totals, and for interactive spatial and temporal analysis. The data can be downloaded as interoperable NetCDF CF-compliant files, i.e. the data are compatible with many other client interfaces. The presentation will provide information on the datasets available within ECCAD, as well as examples of the analysis work that can be done online through the website: http://eccad.aeris-data.fr.
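
    Because the data are distributed as CF-compliant NetCDF files, a gridded inventory can be processed with standard tools. The sketch below computes an approximate global annual total with xarray; the file path, variable name, dimension names and the assumed flux unit (kg m-2 s-1) are all placeholders, not the actual ECCAD conventions.

      # Hedged sketch: approximate global annual total from a gridded NetCDF
      # emission inventory. Variable/dimension names and units are assumptions.
      import numpy as np
      import xarray as xr

      def global_total_kg_per_year(path, var="emission"):
          ds = xr.open_dataset(path)
          flux = ds[var]                  # assumed kg m-2 s-1, dims (time, lat, lon)
          lat, lon = ds["lat"], ds["lon"]
          r = 6.371e6                     # Earth radius in metres
          dlat = np.deg2rad(abs(float(lat[1] - lat[0])))
          dlon = np.deg2rad(abs(float(lon[1] - lon[0])))
          cell_area = r**2 * dlat * dlon * np.cos(np.deg2rad(lat))  # varies with latitude
          seconds_per_year = 365.25 * 24 * 3600.0
          return float((flux.mean("time") * cell_area).sum() * seconds_per_year)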

  20. Reliability Issues in Distributed Operating Systems

    NARCIS (Netherlands)

    Tanenbaum, A.S.; van Renesse, R.

    1987-01-01

    The authors examine the various kinds of distributed systems and discuss some of the reliability issues involved. They first concentrate on the causes of unreliability, illustrating these with some general solutions and examples. Among the issues treated are interprocess communication, machine