WorldWideScience

Sample records for distributed database access

  1. Distributed Access View Integrated Database (DAVID) system

    Science.gov (United States)

    Jacobs, Barry E.

    1991-01-01

    The Distributed Access View Integrated Database (DAVID) System, which was adopted by the Astrophysics Division for their Astrophysics Data System, is a solution to the system heterogeneity problem. The heterogeneous components of the Astrophysics problem are outlined. The Library and Library Consortium levels of the DAVID approach are described. The 'books' and 'kits' level is discussed. The Universal Object Typer Management System level is described. The relation of the DAVID project to the Small Business Innovative Research (SBIR) program is explained.

  2. Scalable Database Access Technologies for ATLAS Distributed Computing

    CERN Document Server

    Vaniachine, A

    2009-01-01

    ATLAS event data processing requires access to non-event data (detector conditions, calibrations, etc.) stored in relational databases. The database-resident data are crucial for the event data reconstruction processing steps and often required for user analysis. A main focus of ATLAS database operations is the worldwide distribution of the Conditions DB data, which are necessary for every ATLAS data processing job. Since Conditions DB access is critical for operations with real data, we have developed a system in which a different technology can be used as a redundant backup. The redundant database operations infrastructure fully satisfies the requirements of ATLAS reprocessing, which has been proven on a scale of one billion database queries during two reprocessing campaigns of 0.5 PB of single-beam and cosmics data on the Grid. To collect experience and provide input for a best choice of technologies, several promising options for efficient database access in user analysis were evaluated successfully. We pre...

  3. Transparent image access in a distributed picture archiving and communications system: The master database broker

    OpenAIRE

    Cox, R D; Henri, C. J.; Rubin, R. K.

    1999-01-01

    A distributed design is the most cost-effective system for small- to medium-scale picture archiving and communications systems (PACS) implementations. However, the design presents an interesting challenge to developers and implementers: to make stored image data, distributed throughout the PACS network, appear to be centralized with a single access point for users. A key component for the distributed system is a central or master database, containing all the studies that have been scanned int...

  4. Fundamental Research of Distributed Database

    Directory of Open Access Journals (Sweden)

    Swati Gupta

    2011-08-01

    Full Text Available The purpose of this paper is to present an introduction to distributed databases, which are becoming very popular nowadays. Today's business environment has an increasing need for distributed database and client/server applications, as the desire for reliable, scalable and accessible information is steadily rising. Distributed database systems provide an improvement in communication and data processing due to the distribution of data throughout different network sites. Not only is data access faster, but a single point of failure is less likely to occur, and local control of data is provided for users.
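    The availability argument in the abstract above (fast local access, no single point of failure) can be sketched as a read with replica failover; the sites, table and failover policy below are invented for illustration:

```python
# Sketch of replica failover in a distributed database: a read is served
# by the local site when possible and falls back to remote replicas
# otherwise. All names here are hypothetical, not from any specific DDBS.

class Site:
    def __init__(self, name, data, up=True):
        self.name = name
        self.data = dict(data)   # full replica of the table
        self.up = up

    def read(self, key):
        if not self.up:
            raise ConnectionError(f"site {self.name} is down")
        return self.data[key]

def read_with_failover(key, local, remotes):
    """Prefer the local replica; try remote sites only on failure."""
    for site in [local] + remotes:
        try:
            return site.read(key), site.name
        except ConnectionError:
            continue
    raise RuntimeError("all replicas unavailable")

rows = {"cust42": "Alice"}
local = Site("paris", rows, up=False)      # local site has failed
remotes = [Site("tokyo", rows), Site("nyc", rows)]
value, served_by = read_with_failover("cust42", local, remotes)
print(value, served_by)   # Alice tokyo
```

    Because every site holds a replica, the failure of the local site degrades latency but not availability, which is the point the abstract makes.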

  5. Accessing and using chemical databases

    DEFF Research Database (Denmark)

    Nikolov, Nikolai Georgiev; Pavlov, Todor; Niemelä, Jay Russell

    2013-01-01

    Computer-based representation of chemicals makes it possible to organize data in chemical databases: collections of chemical structures and associated properties. Databases are widely used wherever efficient processing of chemical information is needed, including search, storage, retrieval, and dissemination. Structure and functionality of chemical databases are considered. The typical kinds of information found in a chemical database are considered: identification, structural, and associated data. Functionality of chemical databases is presented, with examples of search and access types. More details are included about the OASIS database and platform and the Danish (Q)SAR Database online. Various types of chemical database resources are discussed, together with a list of examples.

  6. Role-Based Access Control for Loosely Coupled Distributed Database Management Systems

    Science.gov (United States)

    2002-03-01

    UNDERSTANDING THE RBAC POLICY OF THE APPLICATION......41 C. MAPPING THE APPLICATION POLICY ...............................................42 D. STORAGE OF THE...functionality and implementation options that the Hypersonic database provides. B. UNDERSTANDING THE RBAC POLICY OF THE APPLICATION To fully

  7. Physical Access Control Database -

    Data.gov (United States)

    Department of Transportation — This data set contains the personnel access card data (photo, name, activation/expiration dates, card number, and access level) as well as data about turnstiles and...

  8. Database Access through Java Technologies

    Directory of Open Access Journals (Sweden)

    Nicolae MERCIOIU

    2010-09-01

    Full Text Available As a high-level development environment, the Java technologies offer support for the development of platform-independent distributed applications, providing a robust set of methods to access databases, used to create software components on the server side as well as on the client side. Analyzing the evolution of Java tools for data access, we notice that these tools evolved from simple methods permitting the query, insertion, update and deletion of data to advanced implementations such as distributed transactions, cursors and batch files. Client-server architectures allow, through JDBC (Java Database Connectivity), the execution of SQL (Structured Query Language) instructions and the manipulation of the results in an independent and consistent manner. The JDBC API (Application Programming Interface) creates the level of abstraction needed to allow SQL queries to be issued against any DBMS (Database Management System). The native JDBC driver, the ODBC (Open Database Connectivity)-JDBC bridge, and the classes and interfaces of the JDBC API are described. The four steps needed to build a JDBC-driven application are presented briefly, emphasizing how each step has to be accomplished and the expected results. In each step there are evaluations of the characteristics of the database systems and of the way the JDBC programming interface adapts to each one. The data types provided by the SQL2 and SQL3 standards are analyzed by comparison with the Java data types, emphasizing the discrepancies between them, as well as the methods of the ResultSet object that allow conversion between different data types. Next, starting from the role of metadata and studying the Java programming interfaces that allow querying of result sets, we describe the advanced features of data mining with JDBC. As an alternative to result sets, RowSets add new functionalities that
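    The JDBC workflow the abstract summarizes (connect, create a statement, execute a query, walk the result set with typed values) maps closely onto Python's DB-API; below is a minimal sketch using the standard sqlite3 module as a stand-in for a JDBC driver, with an invented table and data:

```python
import sqlite3

# The JDBC steps described above map onto Python's DB-API as noted in
# the comments; sqlite3 plays the role of a JDBC driver here.

conn = sqlite3.connect(":memory:")        # ~ DriverManager.getConnection(url)
cur = conn.cursor()                       # ~ Connection.createStatement()

cur.execute("CREATE TABLE staff (id INTEGER, name TEXT, salary REAL)")
cur.executemany("INSERT INTO staff VALUES (?, ?, ?)",
                [(1, "Ana", 52000.0), (2, "Bo", 61000.0)])
conn.commit()

# ~ Statement.executeQuery + ResultSet.next()/getInt()/getString()/getDouble()
cur.execute("SELECT id, name, salary FROM staff WHERE salary > ?", (55000,))
rows = cur.fetchall()                     # SQL types arrive as int/str/float
for row_id, name, salary in rows:
    print(row_id, name, salary)           # 2 Bo 61000.0

conn.close()
```

    The typed unpacking in the loop corresponds to the ResultSet getter methods discussed in the abstract: the driver, not the application, performs the SQL-to-host-language type conversion.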

  9. Transparent access to relational, autonomous and distributed databases using semantic web and service oriented technologies

    OpenAIRE

    Caires, Bruno José de Sales Caires

    2007-01-01

    With the constant growth of enterprises, as the need to share information across departments and business areas becomes more critical, companies are turning to integration to provide a method for interconnecting heterogeneous, distributed and autonomous systems. Whether the sales application needs to interface with the inventory application, or the procurement application must connect to an auction site, it seems that any application can be made better by integrating it with other applications...

  10. Security Issues in Distributed Database System Model

    OpenAIRE

    MD.TABREZ QUASIM

    2013-01-01

    This paper reviews the most common as well as emerging security mechanisms used in distributed database systems. As distributed databases became more popular, the need for improvement in distributed database management systems became even more important. The most important issue is security threats that may arise and possibly compromise the access control and the integrity of the system. In this paper, we propose solutions for some security aspects such as multi-level access control, ...

  11. Parallel and Distributed Databases

    NARCIS (Netherlands)

    Hiemstra, Djoerd; Kemper, Alfons; Prieto, Manuel; Szalay, Alex

    2009-01-01

    Euro-Par Topic 5 addresses data management issues in parallel and distributed computing. Advances in data management (storage, access, querying, retrieval, mining) are inherent to current and future information systems. Today, accessing large volumes of information is a reality: Data-intensive appli

  12. Security Issues in Distributed Database System Model

    Directory of Open Access Journals (Sweden)

    MD.TABREZ QUASIM

    2013-12-01

    Full Text Available This paper reviews the most common as well as emerging security mechanisms used in distributed database systems. As distributed databases became more popular, the need for improvement in distributed database management systems became even more important. The most important issue is security threats that may arise and possibly compromise the access control and the integrity of the system. In this paper, we propose solutions for some security aspects such as multi-level access control, confidentiality, reliability, integrity and recovery that pertain to a distributed database system.
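    The multi-level access control this abstract mentions can be illustrated with a minimal Bell-LaPadula-style check ("no read up, no write down"); the levels and rules below are a textbook sketch, not the paper's proposal:

```python
# Toy multi-level access control in the Bell-LaPadula style.
# Level names and ordering are invented for illustration.

LEVELS = {"public": 0, "confidential": 1, "secret": 2}

def can_read(clearance, classification):
    # simple-security property: no read up
    return LEVELS[clearance] >= LEVELS[classification]

def can_write(clearance, classification):
    # *-property: no write down (prevents leaking high data to low objects)
    return LEVELS[clearance] <= LEVELS[classification]

print(can_read("secret", "confidential"))   # True: reading down is allowed
print(can_read("public", "secret"))         # False: no read up
print(can_write("secret", "public"))        # False: no write down
```

    In a distributed database each site would enforce these checks against the classification labels stored with its local fragment of the data.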

  13. Internet Based Open Access Crystallographic Databases

    Science.gov (United States)

    Upreti, Girish; Seipel, Bjoern; Harvey, Morgan; Garrick, Will; Moeck, Peter

    2006-05-01

    Two freely accessible crystallographic databases are discussed: the Crystallography Open Database (COD, http://crystallography.net), which contains over 37,000 crystal structures, and the Nano-Crystallography Database (NCD, http://nanocrystallography.research.pdx.edu), which we recently started in order to support image-based nanocrystallography and (nano)materials science education. Both databases collect crystallographically relevant information in a standardized format, the Crystallographic Information File (CIF). CIF is the standard file format adopted by the International Union of Crystallography (http://iucr.org) for the archiving and distribution of crystallographic information. A subset of the COD, the Predicted Crystallographic Online Database, allows for 3D structural displays of structural polyhedra and wireframes for approximately 2,600 entries. Since electron microscopists are interested in simple, yet technologically important materials, the crystallographic information for those materials will be included in our database. At our NCD site, entries in the COD and the NCD can be visualized in three dimensions (3D) along with two-dimensional (2D) lattice fringe fingerprint plots. The latter support the identification of unknown nanocrystal phases from high-resolution transmission electron microscopy (HRTEM) images. Morphological crystal information from the database "Bestimmungstabellen für Kristalle" (A.K. Boldyrew and W.W. Doliwo-Dobrowolsky, Zentrales Wissenschaftliches Institut der Geologie und Schürfung, Leningrad/Moscow, 1937/1939) will also be included in the NCD to support image-based nanocrystallography in 3D.

  14. Advanced Technologies for Distributed Database Services Hyperinfrastructure

    Science.gov (United States)

    Vaniachine, Alexandre; Malon, David; Vranicar, Matthew

    HEP collaborations are deploying grid technologies to address petabyte-scale data processing challenges. In addition to file-based event data, HEP data processing requires access to terabytes of non-event data (detector conditions, calibrations, etc.) stored in relational databases. Inadequate for non-event data delivery in these amounts, database access control technologies for grid computing are limited to encrypted message transfers. To overcome these database access limitations one must go beyond the existing grid infrastructure. A proposed hyperinfrastructure of distributed database services implements efficient secure data access methods. We introduce several technologies laying a foundation of a new hyperinfrastructure. We present efficient secure data transfer methods and secure grid query engine technologies federating heterogeneous databases. Lessons learned in a production environment of ATLAS Data Challenges are presented.

  15. Challenges in Database Design with Microsoft Access

    Science.gov (United States)

    Letkowski, Jerzy

    2014-01-01

    Design, development and exploration of databases are popular topics covered in introductory courses taught at business schools. Microsoft Access is the most popular software used in those courses. Despite the quite high complexity of Access, it is considered to be one of the most friendly database programs for beginners. A typical Access textbook…

  16. Correlates of Access to Business Research Databases

    Science.gov (United States)

    Gottfried, John C.

    2010-01-01

    This study examines potential correlates of business research database access through academic libraries serving top business programs in the United States. Results indicate that greater access to research databases is related to enrollment in graduate business programs, but not to overall enrollment or status as a public or private institution.…

  17. Efficient Distributed Medium Access

    CERN Document Server

    Shah, Devavrat; Tetali, Prasad

    2011-01-01

    Consider a wireless network of n nodes represented by a graph G=(V, E) where an edge (i,j) models the fact that transmissions of i and j interfere with each other, i.e. simultaneous transmissions of i and j become unsuccessful. Hence it is required that at each time instance a set of non-interfering nodes (corresponding to an independent set in G) access the wireless medium. To utilize wireless resources efficiently, it is required to arbitrate the access of medium among interfering nodes properly. Moreover, to be of practical use, such a mechanism is required to be totally distributed as well as simple. As the main result of this paper, we provide such a medium access algorithm. It is randomized, totally distributed and simple: each node attempts to access medium at each time with probability that is a function of its local information. We establish efficiency of the algorithm by showing that the corresponding network Markov chain is positive recurrent as long as the demand imposed on the network can be supp...
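    A toy simulation of the randomized scheme described above: each node attempts transmission with a probability computed from purely local information (here, its queue length), and a slot succeeds only for nodes with no attempting neighbour, so successful transmitters always form an independent set of the interference graph. The access-probability formula is an invented placeholder, not the paper's:

```python
import random

# Randomized, fully distributed medium access on an interference graph:
# an edge means simultaneous transmissions of its endpoints collide.

def access_probability(queue_len):
    # hypothetical local rule: a longer queue makes a node more aggressive
    return queue_len / (queue_len + 1.0)

def one_slot(graph, queues, rng):
    attempts = {v for v in graph if rng.random() < access_probability(queues[v])}
    # a node succeeds iff none of its neighbours also attempted this slot
    return {v for v in attempts if not attempts & set(graph[v])}

# interference graph of three nodes in a line: a -- b -- c
graph = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
queues = {"a": 3, "b": 1, "c": 2}
rng = random.Random(7)
winners = one_slot(graph, queues, rng)
for v in winners:                 # winners are always an independent set
    assert not winners & set(graph[v])
print(sorted(winners))
```

    Note that no node inspects anything beyond its own queue when deciding to attempt; the independent-set property emerges from the collision rule, which is exactly the "totally distributed and simple" character the abstract claims.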

  18. J2ME access to distributed databases based on mobile agents

    Institute of Scientific and Technical Information of China (English)

    茹蓓; 肖云鹏

    2011-01-01

    Because of their limited computational and memory capacity, J2ME mobile devices do not support a true database system. At present, most systems use a J2ME-J2EE-database solution, but this linear deployment does not suit the characteristics of wireless environments, such as high latency and frequent disconnection. By adopting and extending the mobile agent concept, we propose a new four-layer client/server scheme for J2ME access to distributed databases, and develop a prototype distributed address-book system on the open-source Aglets platform. Evaluation results show that the scheme is practical, fast and robust, effectively improving the efficiency and robustness of distributed database access from J2ME devices.

  19. Accessing and using chemical property databases.

    Science.gov (United States)

    Hastings, Janna; Josephs, Zara; Steinbeck, Christoph

    2012-01-01

    Chemical compounds participate in all the processes of life. Understanding the complex interactions of small molecules such as metabolites and drugs and the biological macromolecules that consume and produce them is key to gaining a wider understanding in a systemic context. Chemical property databases collect information on the biological effects and physicochemical properties of chemical entities. Accessing and using such databases is key to understanding the chemistry of toxic molecules. In this chapter, we present methods to search, understand, download, and manipulate the wealth of information available in public chemical property databases, with particular focus on the database of Chemical Entities of Biological Interest (ChEBI).

  20. Parallel and Distributed Databases: Introduction

    NARCIS (Netherlands)

    Hiemstra, Djoerd; Kemper, Alfons; Prieto, Manuel; Szalay, Alex

    2009-01-01

    Euro-Par Topic 5 addresses data management issues in parallel and distributed computing. Advances in data management (storage, access, querying, retrieval, mining) are inherent to current and future information systems. Today, accessing large volumes of information is a reality: Data-intensive appli

  1. Village Green Project: Web-accessible Database

    Science.gov (United States)

    The purpose of this web-accessible database is for the public to be able to view instantaneous readings from a solar-powered air monitoring station located in a public location (prototype pilot test is outside of a library in Durham County, NC). The data are wirelessly transmitte...

  2. Database Security System for Applying Sophisticated Access Control via Database Firewall Server

    OpenAIRE

    Eun-Ae Cho; Chang-Joo Moon; Dae-Ha Park; Kang-Bin Yim

    2014-01-01

    Keywords: database security, privacy, access control, database firewall, data break masking. Recently, information leakage incidents have occurred due to database security vulnerabilities. In traditional database access control methods, administrators grant simple permissions to users for accessing database objects. Even though stricter permissions have been applied in recent database systems, it has been difficult to properly adopt sophisticated access control policies in commercial databases...

  3. Secure Distributed Databases Using Cryptography

    Directory of Open Access Journals (Sweden)

    Ion IVAN

    2006-01-01

    Full Text Available Computational encryption is used intensively by different database management systems to ensure the privacy and integrity of information physically stored in files. The information is also sent over networks and replicated on different distributed systems. It is shown that a satisfactory level of security is achieved if the rows and columns of tables are encrypted independently of the table or computer that holds the data. It is also very important that SQL (Structured Query Language) query requests and responses be encrypted over the network connection between the client and the database server. All these techniques and methods must be implemented by database administrators, designers and developers within a consistent security policy.
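    The idea of encrypting each cell independently of the hosting table or machine can be sketched as follows; the per-cell key derivation and the XOR keystream cipher below are illustrative toys only and NOT cryptographically sound (a real deployment would use an authenticated cipher such as AES-GCM):

```python
import hashlib

# Toy illustration of per-cell encryption: each cell gets its own key
# derived from a master key plus the cell's coordinates, so ciphertexts
# are independent of which table or machine stores them.
# WARNING: SHA-256-keystream XOR is for demonstration only, not security.

def cell_key(master_key: bytes, table: str, row: int, col: str) -> bytes:
    # derive an independent key per cell from its coordinates
    return hashlib.sha256(master_key + f"{table}/{row}/{col}".encode()).digest()

def xor_cipher(data: bytes, key: bytes) -> bytes:
    stream = (key * (len(data) // len(key) + 1))[:len(data)]
    return bytes(a ^ b for a, b in zip(data, stream))

master = b"demo-master-key"
key = cell_key(master, "accounts", 7, "iban")
ciphertext = xor_cipher(b"RO49AAAA1B31007593840000", key)
plaintext = xor_cipher(ciphertext, key)      # XOR is its own inverse
print(plaintext.decode())   # RO49AAAA1B31007593840000
```

    Because the key depends only on the master secret and the logical cell coordinates, any replica site can decrypt a cell it is authorized for without sharing table-level or machine-level keys, which is the independence property the abstract argues for.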

  4. Klaim-DB: A Modeling Language for Distributed Database Applications

    DEFF Research Database (Denmark)

    Wu, Xi; Li, Ximeng; Lluch Lafuente, Alberto;

    2015-01-01

    We present the modelling language, Klaim-DB, for distributed database applications. Klaim-DB borrows the distributed nets of the coordination language Klaim but essentially re-incarnates the tuple spaces of Klaim as databases, and provides high-level language abstractions for the access and manip...

  5. Filling in the GAPS: evaluating completeness and coverage of open‐access biodiversity databases in the United States

    National Research Council Canada - National Science Library

    Troia, Matthew J; McManamay, Ryan A

    2016-01-01

    ...‐access electronic databases provide unprecedented access to these data, but their usefulness in characterizing species distributions and patterns in biodiversity depends on how complete species...

  6. Paper-based mobile access to databases

    OpenAIRE

    Signer, Beat; Norrie, Moira C.; Grossniklaus, Michael; Belotti, Rudi; Decurtins, Corsin; Weibel, Nadir

    2006-01-01

    Our demonstration is a paper-based interactive guide for visitors to the world's largest international arts festival that was developed as part of a project investigating new forms of context-aware information delivery and interaction in mobile environments. Information stored in a database is accessed from a set of interactive paper documents, including a printed festival brochure, a city map and a bookmark. Active areas are defined within the documents and selection of these using a special...

  7. National Radiobiology Archives Distributed Access user's manual

    Energy Technology Data Exchange (ETDEWEB)

    Watson, C.; Smith, S. (Pacific Northwest Lab., Richland, WA (United States)); Prather, J. (Linfield Coll., McMinnville, OR (United States))

    1991-11-01

    This User's Manual describes installation and use of the National Radiobiology Archives (NRA) Distributed Access package. The package consists of a distributed subset of information representative of the NRA databases and database access software which provide an introduction to the scope and style of the NRA Information Systems.

  8. Optimal access to large databases via networks

    Energy Technology Data Exchange (ETDEWEB)

    Munro, J.K.; Fellows, R.L.; Phifer, D.; Carrick, M.R.; Tarlton, N.

    1997-10-01

    A CRADA with Stephens Engineering was undertaken in order to transfer knowledge and experience about access to information in large text databases, with results of queries and searches provided using the multimedia capabilities of the World Wide Web. Data access is optimized by the use of intelligent agents. Technology Logic Diagram documents published for the DOE facilities in Oak Ridge (K-25, X-10, Y-12) were chosen for this effort because of the large number of technologies identified, described, evaluated, and ranked for possible use in the environmental remediation of these facilities. Fast, convenient access to this information is difficult because of the volume and complexity of the data. WAIS software used to provide full-text, field-based search capability can also be used, through the development of an appropriate hierarchy of menus, to provide tabular summaries of technologies satisfying a wide range of criteria. The menu hierarchy can also be used to regenerate dynamically many of the tables that appeared in the original hardcopy publications, all from a single text database of the technology descriptions. Use of the Web environment permits linking many of the Technology Logic Diagram references to on-line versions of these publications, particularly the DOE Orders and related directives providing the legal requirements that were the basis for undertaking the Technology Logic Diagram studies in the first place.

  9. An open access thyroid ultrasound image database

    Science.gov (United States)

    Pedraza, Lina; Vargas, Carlos; Narváez, Fabián.; Durán, Oscar; Muñoz, Emma; Romero, Eduardo

    2015-01-01

    Computer-aided diagnosis (CAD) systems have been developed to assist radiologists in the detection and diagnosis of abnormalities, and a large number of pattern recognition techniques have been proposed to obtain a second opinion. Most of these strategies have been evaluated using different datasets, making their performance incomparable. In this work, an open access database of thyroid ultrasound images is presented. The dataset consists of a set of B-mode ultrasound images, including a complete annotation and diagnostic description of suspicious thyroid lesions by expert radiologists. Several types of lesions, such as thyroiditis, cystic nodules, adenomas and thyroid cancers, were included, and an accurate lesion delineation is provided in XML format. The diagnostic description of malignant lesions was confirmed by biopsy. The proposed new database is expected to be a resource for the community to assess different CAD systems.

  10. High-Performance Secure Database Access Technologies for HEP Grids

    Energy Technology Data Exchange (ETDEWEB)

    Matthew Vranicar; John Weicher

    2006-04-17

    The Large Hadron Collider (LHC) at the CERN Laboratory will become the largest scientific instrument in the world when it starts operations in 2007. Large Scale Analysis Computer Systems (computational grids) are required to extract rare signals of new physics from petabytes of LHC detector data. In addition to file-based event data, LHC data processing applications require access to large amounts of data in relational databases: detector conditions, calibrations, etc. U.S. high energy physicists demand efficient performance of grid computing applications in LHC physics research where world-wide remote participation is vital to their success. To empower physicists with data-intensive analysis capabilities a whole hyperinfrastructure of distributed databases cross-cuts a multi-tier hierarchy of computational grids. The crosscutting allows separation of concerns across both the global environment of a federation of computational grids and the local environment of a physicist’s computer used for analysis. Very few efforts are on-going in the area of database and grid integration research. Most of these are outside of the U.S. and rely on traditional approaches to secure database access via an extraneous security layer separate from the database system core, preventing efficient data transfers. Our findings are shared by the Database Access and Integration Services Working Group of the Global Grid Forum, who states that "Research and development activities relating to the Grid have generally focused on applications where data is stored in files. However, in many scientific and commercial domains, database management systems have a central role in data storage, access, organization, authorization, etc, for numerous applications.” There is a clear opportunity for a technological breakthrough, requiring innovative steps to provide high-performance secure database access technologies for grid computing. We believe that an innovative database architecture where the

  11. Distributed Structure-Searchable Toxicity Database Network

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Distributed Structure-Searchable Toxicity (DSSTox) Database Network provides a public forum for search and publishing downloadable, structure-searchable,...

  12. FEEDBACK ON A PUBLICLY DISTRIBUTED IMAGE DATABASE: THE MESSIDOR DATABASE

    Directory of Open Access Journals (Sweden)

    Etienne Decencière

    2014-08-01

    Full Text Available The Messidor database, which contains hundreds of eye fundus images, has been publicly distributed since 2008. It was created by the Messidor project in order to evaluate automatic lesion segmentation and diabetic retinopathy grading methods. Designing, producing and maintaining such a database entails significant costs. By publicly sharing it, one hopes to bring a valuable resource to the public research community. However, the real interest and benefit of the research community is not easy to quantify. We analyse here the feedback on the Messidor database, after more than 6 years of diffusion. This analysis should apply to other similar research databases.

  13. Performance related issues in distributed database systems

    Science.gov (United States)

    Mukkamala, Ravi

    1991-01-01

    The key elements of research performed during the year-long effort of this project are: investigate the effects of heterogeneity in distributed real-time systems; study the requirements of TRAC towards building a heterogeneous database system; study the effects of performance modeling on distributed database performance; and experiment with an ORACLE-based heterogeneous system.

  14. Human membrane transporter database: a Web-accessible relational database for drug transport studies and pharmacogenomics.

    Science.gov (United States)

    Yan, Q; Sadée, W

    2000-01-01

    The human genome contains numerous genes that encode membrane transporters and related proteins. For drug discovery, development, and targeting, one needs to know which transporters play a role in drug disposition and effects. Moreover, genetic polymorphisms in human membrane transporters may contribute to interindividual differences in the response to drugs. Pharmacogenetics, and, on a genome-wide basis, pharmacogenomics, address the effect of genetic variants on an individual's response to drugs and xenobiotics. However, our knowledge of the relevant transporters is limited at present. To facilitate the study of drug transporters on a broad scale, including the use of microarray technology, we have constructed a human membrane transporter database (HMTD). Even though it is still largely incomplete, the database contains information on more than 250 human membrane transporters, such as sequence, gene family, structure, function, substrate, tissue distribution, and genetic disorders associated with transporter polymorphisms. Readers are invited to submit additional data. Implemented as a relational database, HMTD supports complex biological queries. Accessible through a Web browser user interface via Common Gateway Interface (CGI) and Java Database Connectivity (JDBC), HMTD also provides useful links and references, allowing interactive searching and downloading of data. Taking advantage of the features of an electronic journal, this paper serves as an interactive tutorial for using the database, which we expect to develop into a research tool.
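    The kind of "complex biological query" a relational transporter database supports can be sketched with an invented miniature schema (the real HMTD schema is not given in the abstract); here, transporters are joined to their tissue distribution:

```python
import sqlite3

# Hypothetical two-table schema in the spirit of a transporter database:
# transporters and the tissues in which they are expressed.

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE transporter (id INTEGER PRIMARY KEY, symbol TEXT, family TEXT);
CREATE TABLE expression  (transporter_id INTEGER, tissue TEXT);
INSERT INTO transporter VALUES (1, 'ABCB1', 'ABC'), (2, 'SLC22A1', 'SLC');
INSERT INTO expression  VALUES (1, 'liver'), (1, 'intestine'), (2, 'liver');
""")

# which transporters are expressed in the liver, ordered by gene family?
rows = conn.execute("""
    SELECT t.family, t.symbol
    FROM transporter t JOIN expression e ON e.transporter_id = t.id
    WHERE e.tissue = 'liver'
    ORDER BY t.family, t.symbol
""").fetchall()
print(rows)   # [('ABC', 'ABCB1'), ('SLC', 'SLC22A1')]
conn.close()
```

    A join like this, filtered by tissue, substrate, or associated disorder, is exactly what a relational design buys over a flat list of transporter records.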

  15. Datamining on distributed medical databases

    DEFF Research Database (Denmark)

    Have, Anna Szynkowiak

    2004-01-01

    This Ph.D. thesis focuses on clustering techniques for Knowledge Discovery in Databases. Various data mining tasks relevant for medical applications are described and discussed. A general framework which combines data projection and data mining and interpretation is presented. An overview of various data projection techniques is offered, with the main stress on applied Principal Component Analysis. For clustering purposes, various Generalized Gaussian Mixture models are presented. Further, the aggregated Markov model, which provides the cluster structure via the probabilistic decomposition ... Gaussian Mixture model. Two models for imputation of the missing data, namely the K-nearest neighbor and a Gaussian model, are suggested. With the purpose of interpreting a cluster structure, two techniques are developed. If cluster labels are available then the cluster understanding via the confusion matrix
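    The K-nearest-neighbor imputation of missing data mentioned in this abstract can be sketched in a few lines; the dataset and the choice k=2 below are invented for illustration:

```python
import math

# K-nearest-neighbour imputation: a missing value (None) is replaced by
# the average of that feature over the k rows closest on the features
# that both rows have observed.

def distance(a, b):
    # Euclidean distance over the dimensions observed in both rows
    dims = [i for i in range(len(a)) if a[i] is not None and b[i] is not None]
    return math.sqrt(sum((a[i] - b[i]) ** 2 for i in dims))

def knn_impute(rows, target, col, k=2):
    donors = [r for r in rows if r is not target and r[col] is not None]
    donors.sort(key=lambda r: distance(target, r))
    return sum(r[col] for r in donors[:k]) / k

data = [
    [1.0,  2.0,  3.0],
    [1.1,  2.1,  3.2],
    [5.0,  5.0,  9.0],
    [1.05, 2.05, None],   # third feature is missing
]
data[3][2] = knn_impute(data, data[3], col=2, k=2)
print(round(data[3][2], 2))   # 3.1  (average over the two nearest rows)
```

    The thesis pairs this with a Gaussian model for imputation; the KNN variant shown here is the simpler of the two and needs no distributional assumptions.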

  16. World Ocean Database 2013 (NCEI Accession 0117075)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The World Ocean Database (WOD) is the World’s largest publicly available uniform format quality controlled ocean profile dataset. Ocean profile data are sets of...

  17. Embedded Systems Programming: Accessing Databases from Esterel

    Directory of Open Access Journals (Sweden)

    White David

    2008-01-01

    Full Text Available A current limitation in embedded controller design and programming is the lack of database support in development tools such as Esterel Studio. This article proposes a way of integrating databases and Esterel by providing two application programming interfaces (APIs which enable the use of relational databases inside Esterel programs. As databases and Esterel programs are often executed on different machines, result sets returned as responses to database queries may be processed either locally and according to Esterel's synchrony hypothesis, or remotely along several of Esterel's execution cycles. These different scenarios are reflected in the design and usage rules of the two APIs presented in this article, which rely on Esterel's facilities for extending the language by external data types, external functions, and procedures, as well as tasks. The APIs' utility is demonstrated by means of a case study modelling an automated warehouse storage system, which is constructed using Lego Mindstorms robotics kits. The robot's controller is programmed in Esterel in a way that takes dynamic ordering information and the warehouse's floor layout into account, both of which are stored in a MySQL database.

  18. Embedded Systems Programming: Accessing Databases from Esterel

    Directory of Open Access Journals (Sweden)

    2009-03-01

    Full Text Available A current limitation in embedded controller design and programming is the lack of database support in development tools such as Esterel Studio. This article proposes a way of integrating databases and Esterel by providing two application programming interfaces (APIs) which enable the use of relational databases inside Esterel programs. As databases and Esterel programs are often executed on different machines, result sets returned as responses to database queries may be processed either locally, according to Esterel’s synchrony hypothesis, or remotely, along several of Esterel’s execution cycles. These different scenarios are reflected in the design and usage rules of the two APIs presented in this article, which rely on Esterel’s facilities for extending the language by external data types, external functions, and procedures, as well as tasks. The APIs’ utility is demonstrated by means of a case study modelling an automated warehouse storage system, which is constructed using Lego Mindstorms robotics kits. The robot’s controller is programmed in Esterel in a way that takes dynamic ordering information and the warehouse’s floor layout into account, both of which are stored in a MySQL database.

  19. Concurrency control in distributed database systems

    CERN Document Server

    Cellary, W; Gelenbe, E

    1989-01-01

    Distributed Database Systems (DDBS) may be defined as integrated database systems composed of autonomous local databases, geographically distributed and interconnected by a computer network.The purpose of this monograph is to present DDBS concurrency control algorithms and their related performance issues. The most recent results have been taken into consideration. A detailed analysis and selection of these results has been made so as to include those which will promote applications and progress in the field. The application of the methods and algorithms presented is not limited to DDBSs but a
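    Two-phase locking is the baseline concurrency control algorithm on which such monographs build; a toy strict-2PL lock-manager sketch (single-site, exclusive locks only, and without the deadlock handling a real distributed lock manager needs):

```python
class LockManager:
    """Strict two-phase locking: a transaction acquires locks as it goes
    (growing phase) and releases them all only at commit or abort."""
    def __init__(self):
        self.owner = {}           # data item -> holding transaction id

    def acquire(self, txn, item):
        holder = self.owner.get(item)
        if holder is not None and holder != txn:
            return False          # conflict: caller must wait or abort
        self.owner[item] = txn
        return True

    def release_all(self, txn):   # the shrinking phase, at commit time
        for item in [i for i, t in self.owner.items() if t == txn]:
            del self.owner[item]

lm = LockManager()
assert lm.acquire("T1", "x")      # T1 locks x
assert not lm.acquire("T2", "x")  # T2 is blocked until T1 commits
lm.release_all("T1")
assert lm.acquire("T2", "x")      # now T2 may proceed
```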

  20. Distributed Database Management Systems A Practical Approach

    CERN Document Server

    Rahimi, Saeed K

    2010-01-01

    This book addresses issues related to managing data across a distributed database system. It is unique because it covers traditional database theory and current research, explaining the difficulties in providing a unified user interface and global data dictionary. The book gives implementers guidance on hiding discrepancies across systems and creating the illusion of a single repository for users. It also includes three sample frameworks (implemented using J2SE with JMS, J2EE, and Microsoft .NET) that readers can use to learn how to implement a distributed database management system. IT and

  1. The ATLAS Distributed Data Management System & Databases

    CERN Document Server

    Garonne, V; The ATLAS collaboration; Barisits, M; Beermann, T; Vigne, R; Serfon, C

    2013-01-01

    The ATLAS Distributed Data Management (DDM) System is responsible for the global management of petabytes of high energy physics data. The current system, DQ2, has a critical dependency on Relational Database Management Systems (RDBMS), like Oracle. RDBMS are well-suited to enforcing data integrity in online transaction processing applications, however, concerns have been raised about the scalability of its data warehouse-like workload. In particular, analysis of archived data or aggregation of transactional data for summary purposes is problematic. Therefore, we have evaluated new approaches to handle vast amounts of data. We have investigated a class of database technologies commonly referred to as NoSQL databases. This includes distributed filesystems, like HDFS, that support parallel execution of computational tasks on distributed data, as well as schema-less approaches via key-value stores, like HBase. In this talk we will describe our use cases in ATLAS, share our experiences with various databases used ...
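    The warehouse-like workload in question is aggregation over many transactional records; a sketch of that access pattern over key-value data in plain Python (record layout and site names are invented, and in production the scan would run in parallel over HBase or HDFS rather than a dict):

```python
from collections import defaultdict

# Stand-in for rows scanned out of a key-value store such as HBase:
# transfer-id -> record of which site moved how many bytes
records = {
    "t1": {"site": "CERN", "bytes": 100},
    "t2": {"site": "BNL",  "bytes": 250},
    "t3": {"site": "CERN", "bytes": 50},
}

def summarize(rows):
    """Warehouse-style aggregation (total bytes per site) done as a full
    scan — the access pattern that NoSQL scans parallelize well and
    that strains an OLTP-tuned RDBMS."""
    totals = defaultdict(int)
    for rec in rows.values():
        totals[rec["site"]] += rec["bytes"]
    return dict(totals)

print(summarize(records))  # {'CERN': 150, 'BNL': 250}
```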

  2. HCUP State Emergency Department Databases (SEDD) - Restricted Access File

    Data.gov (United States)

    U.S. Department of Health & Human Services — The State Emergency Department Databases (SEDD) contain the universe of emergency department visits in participating States. Restricted access data files are...

  3. Migration of MS Access Databases to Mendix Platform

    NARCIS (Netherlands)

    Boudale, T.

    2014-01-01

    This thesis is concerned with the migration of Microsoft Access databases to the Mendix Platform. We investigate similarities and differences between the data models of the two systems, discuss issues regarding data migration, and examine possible options for migrating database queries. A tool was

  4. A database in ACCESS for assessing vaccine serious adverse events

    OpenAIRE

    Thomas RE; Jackson D.

    2015-01-01

    Roger E Thomas,1 Dave Jackson2,3 1Department of Family Medicine, G012 Health Sciences Centre, University of Calgary Medical School, Calgary, AB, Canada; 2Independent Research Consultant, Calgary, AB, Canada; 3Database Consultant, University of Calgary, Calgary, AB, Canada Purpose: To provide a free flexible database for use by any researcher for assessing reports of adverse events after vaccination. Results: A database was developed in Microsoft ACCESS to assess reports of serious adverse ev...

  5. Database design for Physical Access Control System for nuclear facilities

    Energy Technology Data Exchange (ETDEWEB)

    Sathishkumar, T., E-mail: satishkumart@igcar.gov.in; Rao, G. Prabhakara, E-mail: prg@igcar.gov.in; Arumugam, P., E-mail: aarmu@igcar.gov.in

    2016-08-15

    Highlights: • Database design needs to be optimized and highly efficient for real-time operation. • It requires a many-to-many mapping between the Employee table and the Doors table. • This mapping typically contains thousands of records and redundant data. • The proposed novel database design reduces the redundancy and provides abstraction. • This design is incorporated with the access control system developed in-house. - Abstract: A Radio Frequency IDentification (RFID) cum biometric-based two-level Access Control System (ACS) was designed and developed for providing access to vital areas of nuclear facilities. The system has both hardware [access controller] and software components [server application, the database, and the web client software]. The proposed database design enables grouping of the employees based on the hierarchy of the organization and grouping of the doors based on Access Zones (AZ). This design also illustrates the mapping between the Employee Groups (EG) and AZ. By following this approach in database design, a higher-level view can be presented to the system administrator, abstracting the inner details of the individual entities and doors. This paper describes the novel approach carried out in designing the database of the ACS.
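    The grouping idea above can be sketched as a schema: one row per (Employee Group, Access Zone) replaces thousands of (employee, door) rows. Table and column names below are invented, and sqlite3 stands in for the production database so the sketch runs anywhere:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE employee  (id INTEGER PRIMARY KEY, grp TEXT);
CREATE TABLE door      (id INTEGER PRIMARY KEY, zone TEXT);
-- one row per (group, zone) instead of per (employee, door)
CREATE TABLE grant_map (grp TEXT, zone TEXT);
""")
db.executemany("INSERT INTO employee VALUES (?, ?)",
               [(1, "operators"), (2, "visitors")])
db.executemany("INSERT INTO door VALUES (?, ?)",
               [(10, "reactor"), (11, "lobby")])
db.executemany("INSERT INTO grant_map VALUES (?, ?)",
               [("operators", "reactor"), ("operators", "lobby"),
                ("visitors", "lobby")])

def allowed(emp_id, door_id):
    """Access check: the employee's group must be granted the
    access zone the door belongs to."""
    return db.execute("""
        SELECT 1 FROM employee e
        JOIN grant_map g ON g.grp = e.grp
        JOIN door d      ON d.zone = g.zone
        WHERE e.id = ? AND d.id = ?""",
        (emp_id, door_id)).fetchone() is not None

print(allowed(1, 10), allowed(2, 10))  # True False
```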

  6. A database in ACCESS for assessing vaccine serious adverse events

    Directory of Open Access Journals (Sweden)

    Thomas RE

    2015-04-01

    Full Text Available Roger E Thomas,1 Dave Jackson2,3 1Department of Family Medicine, G012 Health Sciences Centre, University of Calgary Medical School, Calgary, AB, Canada; 2Independent Research Consultant, Calgary, AB, Canada; 3Database Consultant, University of Calgary, Calgary, AB, Canada Purpose: To provide a free, flexible database for use by any researcher for assessing reports of adverse events after vaccination. Results: A database was developed in Microsoft ACCESS to assess reports of serious adverse events after yellow fever vaccination using Brighton Collaboration criteria. The database is partly automated (if data panels contain identical data fields, the data are automatically also entered into those fields). The purpose is to provide the database free for developers to add additional panels to assess other vaccines. Keywords: serious adverse events after vaccination, database, process to assess vaccine-associated events

  7. The AAS Working Group on Accessibility and Disability (WGAD) Year 1 Highlights and Database Access

    Science.gov (United States)

    Knierman, Karen A.; Diaz Merced, Wanda; Aarnio, Alicia; Garcia, Beatriz; Monkiewicz, Jacqueline A.; Murphy, Nicholas Arnold

    2017-06-01

    The AAS Working Group on Accessibility and Disability (WGAD) was formed in January of 2016 with the express purpose of seeking equity of opportunity and building inclusive practices for disabled astronomers at all educational and career stages. In this presentation, we will provide a summary of current activities, focusing on developing best practices for accessibility with respect to astronomical databases, publications, and meetings. Due to the reliance of space sciences on databases, it is important to have user centered design systems for data retrieval. The cognitive overload that may be experienced by users of current databases may be mitigated by use of multi-modal interfaces such as xSonify. Such interfaces would be in parallel or outside the original database and would not require additional software efforts from the original database. WGAD is partnering with the IAU Commission C1 WG Astronomy for Equity and Inclusion to develop such accessibility tools for databases and methods for user testing. To collect data on astronomical conference and meeting accessibility considerations, WGAD solicited feedback from January AAS attendees via a web form. These data, together with upcoming input from the community and analysis of accessibility documents of similar conferences, will be used to create a meeting accessibility document. Additionally, we will update the progress of journal access guidelines and our social media presence via Twitter. We recommend that astronomical journals form committees to evaluate the accessibility of their publications by performing user-centered usability studies.

  8. Creating user-friendly databases with Microsoft Access.

    Science.gov (United States)

    Schneider, Joanne Kraenzle; Schneider, Joseph F; Lorenz, Rebecca A

    2005-01-01

    Data entry can be tedious and is fraught with potential for errors that affect study findings. Researchers can minimise entry errors and streamline data entry by using some of the popular software packages on the market. Joanne Kraenzle Schneider and colleagues describe one way to create a user-friendly database that minimises entry errors by using Microsoft (MS) Access.

  9. User walkthrough of multimodal access to multidimensional databases

    NARCIS (Netherlands)

    Esch van-Bussemakers, M.P.; Cremers, A.H.M.

    2004-01-01

    This paper describes a user walkthrough that was conducted with an experimental multimodal dialogue system to access a multidimensional music database using a simulated mobile device (including a technically challenging four-PHANToM-setup). The main objectives of the user walkthrough were to assess

  10. Optimizing Database Architecture for the New Bottleneck: Memory Access

    NARCIS (Netherlands)

    S. Manegold (Stefan); P.A. Boncz (Peter); M.L. Kersten (Martin)

    2000-01-01

    textabstractIn the past decade, advances in speed of commodity CPUs have far out-paced advances in memory latency. Main-memory access is therefore increasingly a performance bottleneck for many computer applications, including database systems. In this article, we use a simple scan test to show the
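    The scan test mentioned above can be imitated, very roughly, in Python: touch the same elements sequentially and with a large stride, then compare wall-clock times. Interpreter overhead dominates in Python, so treat this as a sketch of the methodology rather than a cache measurement:

```python
import array
import time

N = 1 << 19
data = array.array("l", range(N))

def scan(stride):
    """Touch every element, visiting them with the given stride;
    larger strides defeat spatial locality (cache lines, prefetching)."""
    total = 0
    for start in range(stride):
        for i in range(start, N, stride):
            total += data[i]
    return total

t0 = time.perf_counter(); s1 = scan(1);  t1 = time.perf_counter()
s2 = scan(64);                           t2 = time.perf_counter()
# Same work and same result either way; only the access order differs.
print(f"stride 1: {t1 - t0:.3f}s   stride 64: {t2 - t1:.3f}s")
```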

  11. SUBJECTIVE CONTENT ACCESSIBILITY USING DATABASE APPROACH FOR DIGITAL LIBRARY

    Directory of Open Access Journals (Sweden)

    Sachin yele

    2011-03-01

    Full Text Available Today’s digital library is a massive collection of documents of various types and categories. Existing search engines do not provide subjective search over the collection, as no information about context is stored; they mostly use agent-based search rather than database-based search. Database search is simpler and easier, but static, whereas the Web is dynamic. This work shows how the database can be made dynamic and subjective while the search query becomes simpler. Subjective, context-based search is a necessity when searching a Digital Library: users, whether researchers, students, or even lay people, expect subject- or context-specific search and need content access that is precise and subject specific. This paper presents topic-word-specific subjective search using the database approach in a digital library, by means of data mining techniques in a warehouse.
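    A database-backed topic-word search of the kind described can be sketched with a single table keyed by topic; the schema and data are invented, and sqlite3 is used so the sketch is self-contained:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE doc_topic (doc TEXT, topic TEXT, word TEXT)")
db.executemany("INSERT INTO doc_topic VALUES (?, ?, ?)", [
    ("paper1", "databases", "indexing"),
    ("paper2", "databases", "mining"),
    ("paper3", "networks",  "routing"),
])

def subject_search(topic, word):
    """Context-restricted lookup: only documents filed under `topic`
    are considered, unlike a flat keyword search over everything."""
    return [r[0] for r in db.execute(
        "SELECT doc FROM doc_topic WHERE topic = ? AND word LIKE ?",
        (topic, f"%{word}%"))]

print(subject_search("databases", "min"))  # ['paper2']
```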

  12. Database access and problem solving in the basic sciences.

    Science.gov (United States)

    de Bliek, R; Friedman, C P; Wildemuth, B M; Martz, J M; File, D; Twarog, R G; Reich, G M; Hoekstra, L

    1993-01-01

    This study examined the potential contribution that access to a database of biomedical information may offer in support of problem-solving exercises when personal knowledge is inadequate. Thirty-six medical students were assessed over four occasions and three domains in the basic sciences: bacteriology, pharmacology, and toxicology. Each assessment consisted of a two-pass protocol in which students were first assessed for their personal knowledge of a domain with a short-answer problem set. Then, for a sample of problems they had missed, they were asked to use a database, INQUIRER, to respond to questions which they had been unable to address with their personal knowledge. Results indicate that for a domain in which the database is well-integrated in course activities, useful retrieval of information which augmented personal knowledge increased over three assessment occasions, even continuing to increase several months after course exposure and experience with the database. For all domains, even at assessments prior to course exposure, students were able to moderately extend their ability to solve problems through access to the INQUIRER database.

  13. Techniques to Access Databases and Integrate Data for Hydrologic Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Whelan, Gene; Tenney, Nathan D.; Pelton, Mitchell A.; Coleman, Andre M.; Ward, Duane L.; Droppo, James G.; Meyer, Philip D.; Dorow, Kevin E.; Taira, Randal Y.

    2009-06-17

    This document addresses techniques to access and integrate data for defining site-specific conditions and behaviors associated with ground-water and surface-water radionuclide transport applicable to U.S. Nuclear Regulatory Commission reviews. Environmental models typically require input data from multiple internal and external sources that may include, but are not limited to, stream and rainfall gage data, meteorological data, hydrogeological data, habitat data, and biological data. These data may be retrieved from a variety of organizations (e.g., federal, state, and regional) and source types (e.g., HTTP, FTP, and databases). Available data sources relevant to hydrologic analyses for reactor licensing are identified and reviewed. The data sources described can be useful to define model inputs and parameters, including site features (e.g., watershed boundaries, stream locations, reservoirs, site topography), site properties (e.g., surface conditions, subsurface hydraulic properties, water quality), and site boundary conditions, input forcings, and extreme events (e.g., stream discharge, lake levels, precipitation, recharge, flood and drought characteristics). Available software tools for accessing established databases, retrieving the data, and integrating it with models were identified and reviewed. The emphasis in this review was on existing software products with minimal required modifications to enable their use with the FRAMES modeling framework. The ability of four of these tools to access and retrieve the identified data sources was reviewed. These four software tools were the Hydrologic Data Acquisition and Processing System (HDAPS), Integrated Water Resources Modeling System (IWRMS) External Data Harvester, Data for Environmental Modeling Environmental Data Download Tool (D4EM EDDT), and the FRAMES Internet Database Tools. 
The IWRMS External Data Harvester and the D4EM EDDT were identified as the most promising tools based on their ability to access and

  14. NCBI2RDF: Enabling Full RDF-Based Access to NCBI Databases

    OpenAIRE

    Alberto Anguita; Miguel García-Remesal; Diana de la Iglesia; Victor Maojo

    2013-01-01

    RDF has become the standard technology for enabling interoperability among heterogeneous biomedical databases. The NCBI provides access to a large set of life sciences databases through a common interface called Entrez. However, the latter does not provide RDF-based access to such databases, and, therefore, they cannot be integrated with other RDF-compliant databases and accessed via SPARQL query interfaces. This paper presents the NCBI2RDF system, aimed at providing RDF-based access to the c...
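    NCBI2RDF's job is to expose the Entrez databases as RDF triples so SPARQL-style queries become possible. The core operation, triple-pattern matching, can be sketched in plain Python (the gene/chromosome triples are invented examples; a real deployment would use an RDF library and a SPARQL endpoint):

```python
# A tiny in-memory triple store and a one-pattern "SPARQL-style" query
triples = [
    ("gene:BRCA1", "locatedOn", "chr:17"),
    ("gene:TP53",  "locatedOn", "chr:17"),
    ("gene:CFTR",  "locatedOn", "chr:7"),
]

def match(pattern):
    """Match a (subject, predicate, object) pattern against the store;
    None plays the role of a SPARQL variable."""
    s, p, o = pattern
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Roughly: SELECT ?g WHERE { ?g locatedOn chr:17 }
print([t[0] for t in match((None, "locatedOn", "chr:17"))])
```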

  15. Toward an open-access global database for mapping, control, and surveillance of neglected tropical diseases.

    Directory of Open Access Journals (Sweden)

    Eveline Hürlimann

    2011-12-01

    Full Text Available BACKGROUND: After many years of general neglect, interest has grown and efforts came under way for the mapping, control, surveillance, and eventual elimination of neglected tropical diseases (NTDs. Disease risk estimates are a key feature to target control interventions, and serve as a benchmark for monitoring and evaluation. What is currently missing is a georeferenced global database for NTDs providing open-access to the available survey data that is constantly updated and can be utilized by researchers and disease control managers to support other relevant stakeholders. We describe the steps taken toward the development of such a database that can be employed for spatial disease risk modeling and control of NTDs. METHODOLOGY: With an emphasis on schistosomiasis in Africa, we systematically searched the literature (peer-reviewed journals and 'grey literature', contacted Ministries of Health and research institutions in schistosomiasis-endemic countries for location-specific prevalence data and survey details (e.g., study population, year of survey and diagnostic techniques. The data were extracted, georeferenced, and stored in a MySQL database with a web interface allowing free database access and data management. PRINCIPAL FINDINGS: At the beginning of 2011, our database contained more than 12,000 georeferenced schistosomiasis survey locations from 35 African countries available under http://www.gntd.org. Currently, the database is expanded to a global repository, including a host of other NTDs, e.g. soil-transmitted helminthiasis and leishmaniasis. CONCLUSIONS: An open-access, spatially explicit NTD database offers unique opportunities for disease risk modeling, targeting control interventions, disease monitoring, and surveillance. Moreover, it allows for detailed geostatistical analyses of disease distribution in space and time. With an initial focus on schistosomiasis in Africa, we demonstrate the proof-of-concept that the establishment

  16. Organic materials database: An open-access online database for data mining

    Science.gov (United States)

    Geilhufe, R. Matthias; Balatsky, Alexander V.

    2017-01-01

    We present an organic materials database (OMDB) hosting thousands of Kohn-Sham electronic band structures, which is freely accessible online at http://omdb.diracmaterials.org. The OMDB focus lies on electronic structure, density of states and other properties for purely organic and organometallic compounds that are known to date. The electronic band structures are calculated using density functional theory for the crystal structures contained in the Crystallography Open Database. The OMDB web interface allows users to retrieve materials with specified target properties using non-trivial queries about their electronic structure. We illustrate the use of the OMDB and how it can become an organic part of search and prediction of novel functional materials via data mining techniques. As a specific example, we provide data mining results for metals and semiconductors, which are known to be rare in the class of organic materials. PMID:28182744

  17. Organic materials database: An open-access online database for data mining.

    Science.gov (United States)

    Borysov, Stanislav S; Geilhufe, R Matthias; Balatsky, Alexander V

    2017-01-01

    We present an organic materials database (OMDB) hosting thousands of Kohn-Sham electronic band structures, which is freely accessible online at http://omdb.diracmaterials.org. The OMDB focus lies on electronic structure, density of states and other properties for purely organic and organometallic compounds that are known to date. The electronic band structures are calculated using density functional theory for the crystal structures contained in the Crystallography Open Database. The OMDB web interface allows users to retrieve materials with specified target properties using non-trivial queries about their electronic structure. We illustrate the use of the OMDB and how it can become an organic part of search and prediction of novel functional materials via data mining techniques. As a specific example, we provide data mining results for metals and semiconductors, which are known to be rare in the class of organic materials.

  18. A Survey on Distributed Mobile Database and Data Mining

    Science.gov (United States)

    Goel, Ajay Mohan; Mangla, Neeraj; Patel, R. B.

    2010-11-01

    The anticipated increase in popular use of the Internet has created more opportunities in information dissemination, e-commerce, and multimedia communication. It has also created more challenges in organizing information and facilitating its efficient retrieval. In response, new techniques have evolved which facilitate the creation of such applications; certainly the most promising among the new paradigms is the use of mobile agents. In this paper, mobile agent and distributed database technologies are applied to the banking system. Many approaches have been proposed to schedule data items for broadcasting in a mobile environment. This paper proposes an efficient strategy for accessing multiple data items in mobile environments and addresses the bottleneck of current banking systems.

  19. Development of database on the distribution coefficient. 2. Preparation of database

    Energy Technology Data Exchange (ETDEWEB)

    Takebe, Shinichi; Abe, Masayoshi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2001-03-01

    The distribution coefficient is a very important parameter for environmental impact assessment of the disposal of radioactive waste arising from research institutes. The 'Database on the Distribution Coefficient' was built up from information obtained by a domestic literature survey covering items such as the value, measuring method, and measurement conditions of the distribution coefficient, in order to support selection of a reasonable distribution coefficient value for use in safety evaluation. This report explains the outline of the preparation of this database and serves as a user's guide to the database. (author)

  20. Predictive access control for distributed computation

    DEFF Research Database (Denmark)

    Yang, Fan; Hankin, Chris; Nielson, Flemming

    2013-01-01

    We show how to use aspect-oriented programming to separate security and trust issues from the logical design of mobile, distributed systems. The main challenge is how to enforce various types of security policies, in particular predictive access control policies — policies based on the future behavior of a program. A novel feature of our approach is that we can define policies concerning secondary use of data.
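    Aspect-oriented separation of policy from logic can be approximated in Python with a decorator that weaves an access check around a function. The "secondary use" policy below is an invented example, and a runtime check can only approximate the paper's reasoning about future program behavior:

```python
def enforce(policy):
    """Weave an access-control check around a function, keeping the
    policy separate from the program's logical design."""
    def wrap(fn):
        def guarded(ctx, *args, **kwargs):
            if not policy(ctx):
                raise PermissionError(f"{fn.__name__}: access denied")
            return fn(ctx, *args, **kwargs)
        return guarded
    return wrap

def no_secondary_use(ctx):
    # Data fetched for billing must not be forwarded onward.
    return ctx.get("purpose") == "billing" and not ctx.get("forwarded")

@enforce(no_secondary_use)
def read_record(ctx, record_id):
    return f"record-{record_id}"

print(read_record({"purpose": "billing"}, 7))  # record-7
```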

  1. Access control mechanisms for distributed healthcare environments.

    Science.gov (United States)

    Sergl-Pommerening, Marita

    2004-01-01

    Today's IT-infrastructure provides more and more possibilities to share electronic patient data across several healthcare organizations and hospital departments. A strong requirement is sufficient data protection and security measures complying with the medical confidentiality and the data protection laws of each state or country like the European directive on data protection or the U.S. HIPAA privacy rule. In essence, the access control mechanisms and authorization structures of information systems must be able to realize the Need-To-Access principle. This principle can be understood as a set of context-sensitive access rules, regarding the patient's path across the organizations. The access control mechanisms of today's health information systems do not sufficiently satisfy this requirement, because information about participation of persons or organizations is not available within each system in a distributed environment. This problem could be solved by appropriate security services. The CORBA healthcare domain standard contains such a service for obtaining authorization decisions and administrating access decision policies (RAD). At the university hospital of Mainz we have developed an access control system (MACS), which includes the main functionality of the RAD specification and the access control logic that is needed for such a service. The basic design principles of our approach are role-based authorization, user rights with static and dynamic authorization data, context rules and the separation of three cooperating servers that provide up-to-date knowledge about users, roles and responsibilities. This paper introduces the design principles and the system design and critically evaluates the concepts based on practical experience.
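    The Need-To-Access principle described above combines role-based rights with context rules about participation in the patient's care; a toy sketch (role names and rights are invented, and real MACS policies also carry static and dynamic authorization data):

```python
ROLE_RIGHTS = {
    "physician": {"read_chart", "write_chart"},
    "clerk":     {"read_admin"},
}

def may_access(user_role, right, user_unit, patient_units):
    """Grant a right only if the role holds it AND the user's unit
    currently participates in the patient's care (context rule)."""
    return (right in ROLE_RIGHTS.get(user_role, set())
            and user_unit in patient_units)

print(may_access("physician", "read_chart", "cardiology",
                 {"cardiology", "radiology"}))  # True: treating unit
print(may_access("physician", "read_chart", "urology",
                 {"cardiology"}))               # False: not involved
```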

  2. A review of accessibility of administrative healthcare databases in the Asia-Pacific region

    Science.gov (United States)

    Milea, Dominique; Azmi, Soraya; Reginald, Praveen; Verpillat, Patrice; Francois, Clement

    2015-01-01

    Objective We describe and compare the availability and accessibility of administrative healthcare databases (AHDB) in several Asia-Pacific countries: Australia, Japan, South Korea, Taiwan, Singapore, China, Thailand, and Malaysia. Methods The study included hospital records, reimbursement databases, prescription databases, and data linkages. Databases were first identified through PubMed, Google Scholar, and the ISPOR database register. Database custodians were contacted. Six criteria were used to assess the databases and provided the basis for a tool to categorise databases into seven levels ranging from least accessible (Level 1) to most accessible (Level 7). We also categorised overall data accessibility for each country as high, medium, or low based on accessibility of databases as well as the number of academic articles published using the databases. Results Fifty-four administrative databases were identified. Only a limited number of databases allowed access to raw data and were at Level 7 [Medical Data Vision EBM Provider, Japan Medical Data Centre (JMDC) Claims database and Nihon-Chouzai Pharmacy Claims database in Japan, and Medicare, Pharmaceutical Benefits Scheme (PBS), Centre for Health Record Linkage (CHeReL), HealthLinQ, Victorian Data Linkages (VDL), SA-NT DataLink in Australia]. At Levels 3–6 were several databases from Japan [Hamamatsu Medical University Database, Medi-Trend, Nihon University School of Medicine Clinical Data Warehouse (NUSM)], Australia [Western Australia Data Linkage (WADL)], Taiwan [National Health Insurance Research Database (NHIRD)], South Korea [Health Insurance Review and Assessment Service (HIRA)], and Malaysia [United Nations University (UNU)-Casemix]. Countries were categorised as having a high level of data accessibility (Australia, Taiwan, and Japan), medium level of accessibility (South Korea), or a low level of accessibility (Thailand, China, Malaysia, and Singapore). In some countries, data may be available but

  3. A review of accessibility of administrative healthcare databases in the Asia-Pacific region.

    Science.gov (United States)

    Milea, Dominique; Azmi, Soraya; Reginald, Praveen; Verpillat, Patrice; Francois, Clement

    2015-01-01

    We describe and compare the availability and accessibility of administrative healthcare databases (AHDB) in several Asia-Pacific countries: Australia, Japan, South Korea, Taiwan, Singapore, China, Thailand, and Malaysia. The study included hospital records, reimbursement databases, prescription databases, and data linkages. Databases were first identified through PubMed, Google Scholar, and the ISPOR database register. Database custodians were contacted. Six criteria were used to assess the databases and provided the basis for a tool to categorise databases into seven levels ranging from least accessible (Level 1) to most accessible (Level 7). We also categorised overall data accessibility for each country as high, medium, or low based on accessibility of databases as well as the number of academic articles published using the databases. Fifty-four administrative databases were identified. Only a limited number of databases allowed access to raw data and were at Level 7 [Medical Data Vision EBM Provider, Japan Medical Data Centre (JMDC) Claims database and Nihon-Chouzai Pharmacy Claims database in Japan, and Medicare, Pharmaceutical Benefits Scheme (PBS), Centre for Health Record Linkage (CHeReL), HealthLinQ, Victorian Data Linkages (VDL), SA-NT DataLink in Australia]. At Levels 3-6 were several databases from Japan [Hamamatsu Medical University Database, Medi-Trend, Nihon University School of Medicine Clinical Data Warehouse (NUSM)], Australia [Western Australia Data Linkage (WADL)], Taiwan [National Health Insurance Research Database (NHIRD)], South Korea [Health Insurance Review and Assessment Service (HIRA)], and Malaysia [United Nations University (UNU)-Casemix]. Countries were categorised as having a high level of data accessibility (Australia, Taiwan, and Japan), medium level of accessibility (South Korea), or a low level of accessibility (Thailand, China, Malaysia, and Singapore). In some countries, data may be available but accessibility was restricted

  4. PRISMA database machine: A distributed, main-memory approach

    NARCIS (Netherlands)

    Schmidt, J.W.; Apers, Peter M.G.; Ceri, S.; Kersten, Martin L.; Oerlemans, Hans C.M.; Missikoff, M.

    1988-01-01

    The PRISMA project is a large-scale research effort in the design and implementation of a highly parallel machine for data and knowledge processing. The PRISMA database machine is a distributed, main-memory database management system implemented in an object-oriented language that runs on top of a m

  5. PRISMA database machine: A distributed, main-memory approach

    NARCIS (Netherlands)

    Apers, Peter M.G.; Kersten, Martin L.; Oerlemans, Hans C.M.; Schmidt, J.W.; Ceri, S.; Missikoff, M.

    1988-01-01

    The PRISMA project is a large-scale research effort in the design and implementation of a highly parallel machine for data and knowledge processing. The PRISMA database machine is a distributed, main-memory database management system implemented in an object-oriented language that runs on top of a m

  6. ASSOCIATION RULES IN HORIZONTALLY DISTRIBUTED DATABASES WITH ENHANCED SECURE MINING

    Directory of Open Access Journals (Sweden)

    Sonal Patil

    2015-10-01

    Full Text Available Recent developments in information technology have made possible the collection and analysis of millions of transactions containing personal data, including shopping habits, criminal records, medical histories, and credit records. A distributed database is a database in which the storage devices are not all attached to a common processing unit, managed by a distributed database management system (together sometimes called a distributed database system). It may be stored on multiple computers located in the same physical location or dispersed over a network of interconnected computers. A protocol has been proposed for secure mining of association rules in horizontally distributed databases. This protocol is an optimization of the Fast Distributed Mining (FDM) algorithm, an unsecured distributed version of the Apriori algorithm. Its main purpose is to remove the problem of mining generalized association rules that affects the existing system. The protocol offers enhanced privacy with respect to previous protocols; in addition, it is simpler and more efficient in terms of communication rounds, communication cost, and computational cost.
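    In FDM-style mining over horizontally partitioned data, each site counts support locally and only the summed counts decide global frequency; a minimal, non-secure sketch of that combination step (the secure protocols additionally hide each site's addend from the others):

```python
from collections import Counter
from itertools import combinations

# Horizontally partitioned transactions: one list per site
site_a = [{"milk", "bread"}, {"milk", "eggs"}]
site_b = [{"milk", "bread"}, {"bread"}]

def local_support(transactions, size):
    """Each site counts itemsets of the given size in its own data."""
    counts = Counter()
    for t in transactions:
        for itemset in combinations(sorted(t), size):
            counts[itemset] += 1
    return counts

def globally_frequent(sites, size, min_support):
    """An itemset is globally frequent iff the sum of the per-site
    counts meets the threshold."""
    total = Counter()
    for site in sites:
        total += local_support(site, size)
    return {itemset for itemset, n in total.items() if n >= min_support}

print(globally_frequent([site_a, site_b], 2, 2))  # {('bread', 'milk')}
```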

  7. Security in the CernVM File System and the Frontier Distributed Database Caching System

    CERN Document Server

    Dykstra, David

    2014-01-01

Both the CernVM File System (CVMFS) and the Frontier Distributed Database Caching System (Frontier) distribute centrally updated data worldwide for LHC experiments using HTTP proxy caches. Neither system provides privacy or access control on reading the data, but both control access to updates of the data and can guarantee the integrity of the data transferred to clients over the internet. CVMFS has since its early days required digital signatures and secure hashes on all distributed data, and recently both CVMFS and Frontier have added X.509-based integrity checking. In this paper we detail and compare the security models of CVMFS and Frontier.
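The integrity guarantee described in this record rests on a simple pattern: the publisher computes a secure hash of the content centrally, and clients recompute it after fetching through untrusted caches. A minimal sketch of the secure-hash half using Python's standard `hashlib` (the function names are illustrative, not CVMFS or Frontier code):

```python
import hashlib

def publish(content: bytes):
    """Publisher side: derive a secure content hash that is distributed
    out-of-band (in CVMFS, inside a digitally signed catalog)."""
    return content, hashlib.sha256(content).hexdigest()

def verify(content: bytes, expected_digest: str) -> bool:
    """Client side: recompute the hash after fetching through untrusted
    HTTP proxy caches; any tampering en route changes the digest."""
    return hashlib.sha256(content).hexdigest() == expected_digest
```

Because the digest itself arrives via a signed channel, caches can serve the bulk data without being trusted.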

  8. Security in the CernVM File System and the Frontier Distributed Database Caching System

    Energy Technology Data Exchange (ETDEWEB)

    Dykstra, D.; Blomer, J. [CERN

    2014-01-01

Both the CernVM File System (CVMFS) and the Frontier Distributed Database Caching System (Frontier) distribute centrally updated data worldwide for LHC experiments using HTTP proxy caches. Neither system provides privacy or access control on reading the data, but both control access to updates of the data and can guarantee the authenticity and integrity of the data transferred to clients over the internet. CVMFS has since its early days required digital signatures and secure hashes on all distributed data, and recently Frontier has added X.509-based authenticity and integrity checking. In this paper we detail and compare the security models of CVMFS and Frontier.

  9. Automated Agent Ontology Creation for Distributed Databases

    Science.gov (United States)

    2004-03-01

Describes the query agent and database agent implementation: Section 3.3.1 details the leader election procedure, Section 3.3.2 discusses the Jaro method, and Section 3.3.3 details the ontology creation algorithm.
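The "Jaro method" this record refers to is a string-similarity measure commonly used for schema and ontology matching. Below is a self-contained sketch of the textbook Jaro similarity (not code from the thesis itself): characters match within a sliding window, and the score combines match counts with a transposition penalty.

```python
def jaro(s1: str, s2: str) -> float:
    """Textbook Jaro similarity: 1.0 for identical strings, 0.0 for no matches."""
    if s1 == s2:
        return 1.0
    if not s1 or not s2:
        return 0.0
    # Characters match if equal and within half the longer string's length
    window = max(max(len(s1), len(s2)) // 2 - 1, 0)
    match1 = [False] * len(s1)
    match2 = [False] * len(s2)
    matches = 0
    for i, c in enumerate(s1):
        lo, hi = max(0, i - window), min(len(s2), i + window + 1)
        for j in range(lo, hi):
            if not match2[j] and s2[j] == c:
                match1[i] = match2[j] = True
                matches += 1
                break
    if matches == 0:
        return 0.0
    # Count transpositions among matched characters, taken in order
    transpositions = 0
    j = 0
    for i in range(len(s1)):
        if match1[i]:
            while not match2[j]:
                j += 1
            if s1[i] != s2[j]:
                transpositions += 1
            j += 1
    t = transpositions // 2
    m = matches
    return (m / len(s1) + m / len(s2) + (m - t) / m) / 3
```

For example, "MARTHA" and "MARHTA" share all six characters with one transposed pair, giving a similarity of 17/18.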

  10. Distributed medium access control in wireless networks

    CERN Document Server

    Wang, Ping

    2013-01-01

    This brief investigates distributed medium access control (MAC) with QoS provisioning for both single- and multi-hop wireless networks including wireless local area networks (WLANs), wireless ad hoc networks, and wireless mesh networks. For WLANs, an efficient MAC scheme and a call admission control algorithm are presented to provide guaranteed QoS for voice traffic and, at the same time, increase the voice capacity significantly compared with the current WLAN standard. In addition, a novel token-based scheduling scheme is proposed to provide great flexibility and facility to the network servi

  11. A database for on-line event analysis on a distributed memory machine

    CERN Document Server

    Argante, E; Van der Stok, P D V; Willers, Ian Malcolm

    1995-01-01

Parallel in-memory databases can enhance the structuring and parallelization of programs used in High Energy Physics (HEP). Efficient database access routines are used as communication primitives which hide the communication topology, in contrast to more explicit communication libraries like PVM or MPI. A parallel in-memory database, called SPIDER, has been implemented on a 32-node Meiko CS-2 distributed memory machine. The SPIDER primitives generate a lower overhead than that generated by PVM or MPI. The event reconstruction program CPREAD of the CPLEAR experiment has been used as a test case. Performance measurements were made using the event rate generated by CPLEAR.
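To illustrate the idea of database access routines doubling as communication primitives, here is a toy thread-safe in-memory store: producer and consumer tasks interact only through `insert` and `select`, never through explicit message passing, so the communication topology stays hidden. This is a sketch of the concept, not the SPIDER API; the class and table names are invented.

```python
import threading
from collections import defaultdict

class InMemoryEventStore:
    """Toy parallel in-memory store. Access routines serve as the only
    communication channel between tasks, hiding topology from callers."""
    def __init__(self):
        self._lock = threading.Lock()
        self._tables = defaultdict(dict)  # table name -> {key: record}

    def insert(self, table, key, record):
        """Producer side: publish a record under a key."""
        with self._lock:
            self._tables[table][key] = record

    def select(self, table, predicate):
        """Consumer side: retrieve all records matching a predicate."""
        with self._lock:
            return [r for r in self._tables[table].values() if predicate(r)]
```

A reconstruction task would `select` raw events by detector and `insert` reconstructed ones, without knowing which node produced the data.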

  12. Comparison of the Frontier Distributed Database Caching System with NoSQL Databases

    CERN Document Server

    CERN. Geneva

    2012-01-01

    Non-relational "NoSQL" databases such as Cassandra and CouchDB are best known for their ability to scale to large numbers of clients spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects, is based on traditional SQL databases but also has the same high scalability and wide-area distributability for an important subset of applications. This paper compares the architectures, behavior, performance, and maintainability of the two different approaches and identifies the criteria for choosing which approach to prefer over the other.

  13. Teaching Case: Adapting the Access Northwind Database to Support a Database Course

    Science.gov (United States)

    Dyer, John N.; Rogers, Camille

    2015-01-01

    A common problem encountered when teaching database courses is that few large illustrative databases exist to support teaching and learning. Most database textbooks have small "toy" databases that are chapter objective specific, and thus do not support application over the complete domain of design, implementation and management concepts…

  14. Vaccine production, distribution, access and uptake

    Science.gov (United States)

    Smith, Jon; Lipsitch, Marc; Almond, Jeffrey W.

    2011-01-01

    Making human vaccines available on a global scale requires the use of complex production methods, meticulous quality control and reliable distribution channels that ensure the products are potent and effective at their point of use. The technologies involved in manufacturing different types of vaccines may strongly influence vaccine cost, ease of industrial scale-up, stability and ultimately world-wide availability. Manufacturing complexity is compounded by the need for different formulations for different countries and age groups. Reliable vaccine production in appropriate quantities and at affordable prices is the cornerstone of developing global vaccination policies. However, ensuring optimal access and uptake also requires strong partnerships between private manufacturers, regulatory authorities and national and international public health services. For vaccines whose supplies are limited, either due to rapidly emerging diseases or longer-term mismatch of supply and demand, prioritizing target groups can increase vaccine impact. Focusing on influenza vaccines as an example that well illustrates many of the relevant points, this article considers current production, distribution, access and other factors that ultimately impact on vaccine uptake and population-level effectiveness. PMID:21664680

  15. Evaluation of an Online Instructional Database Accessed by QR Codes to Support Biochemistry Practical Laboratory Classes

    Science.gov (United States)

    Yip, Tor; Melling, Louise; Shaw, Kirsty J.

    2016-01-01

    An online instructional database containing information on commonly used pieces of laboratory equipment was created. In order to make the database highly accessible and to promote its use, QR codes were utilized. The instructional materials were available anytime and accessed using QR codes located on the equipment itself and within undergraduate…

  16. Remote access to ACNUC nucleotide and protein sequence databases at PBIL.

    Science.gov (United States)

    Gouy, Manolo; Delmotte, Stéphane

    2008-04-01

    The ACNUC biological sequence database system provides powerful and fast query and extraction capabilities to a variety of nucleotide and protein sequence databases. The collection of ACNUC databases served by the Pôle Bio-Informatique Lyonnais includes the EMBL, GenBank, RefSeq and UniProt nucleotide and protein sequence databases and a series of other sequence databases that support comparative genomics analyses: HOVERGEN and HOGENOM containing families of homologous protein-coding genes from vertebrate and prokaryotic genomes, respectively; Ensembl and Genome Reviews for analyses of prokaryotic and of selected eukaryotic genomes. This report describes the main features of the ACNUC system and the access to ACNUC databases from any internet-connected computer. Such access was made possible by the definition of a remote ACNUC access protocol and the implementation of Application Programming Interfaces between the C, Python and R languages and this communication protocol. Two retrieval programs for ACNUC databases, Query_win, with a graphical user interface and raa_query, with a command line interface, are also described. Altogether, these bioinformatics tools provide users with either ready-to-use means of querying remote sequence databases through a variety of selection criteria, or a simple way to endow application programs with an extensive access to these databases. Remote access to ACNUC databases is open to all and fully documented (http://pbil.univ-lyon1.fr/databases/acnuc/acnuc.html).

  17. Enhancing NTIS Database Access at a Multi-Campus University.

    Science.gov (United States)

    Conkling, Thomas W.; Jordan, Kelly

    1997-01-01

    The Pennsylvania State University Libraries and the National Technical Information Service (NTIS) collaborated to bring the entire NTIS bibliographic database online on the University-wide information system and make it available for searching at all 21 Pennsylvania State campuses. This article also reviews the level of database and technical…

  18. Smart Card Identification Management Over A Distributed Database Model

    Directory of Open Access Journals (Sweden)

    Olatubosun Olabode

    2011-01-01

Full Text Available Problem statement: An effective national identification system is a necessity for any national government in the proper implementation and execution of its policies and duties. Approach: Such data can be held in a database relation in a distributed database environment. To date, the Nigerian government has yet to establish an effective and efficient national identification management system, despite the huge amount of money expended on the project. Results: This article presents a smart card identification management system over a distributed database model. The model was implemented using a client/server architecture between a server and multiple clients. A programmable smart card storing identification details, including biometric features, was proposed. Variables stored in the smart card include the personal identification number, gender, date of birth, place of birth, place of residence, citizenship, continuously updated information on vital status, and the identities of parents and spouses. Conclusion/Recommendations: A conceptualization of the database structures and architecture of the distributed database model is presented. The designed model is intended to solve the lingering problems associated with multiple identities in a society.

  19. Multiple-Access Quantum Key Distribution Networks

    CERN Document Server

    Razavi, Mohsen

    2011-01-01

    This paper addresses multi-user quantum key distribution networks, in which any two users can mutually exchange a secret key without trusting any other nodes. The same network also supports conventional classical communications by assigning two different wavelength bands to quantum and classical signals. Time and code division multiple access (CDMA) techniques, within a passive star network, are considered. In the case of CDMA, it turns out that the optimal performance is achieved at a unity code weight. A listen-before-send protocol is then proposed to improve secret key generation rates in this case. Finally, a hybrid setup with wavelength routers and passive optical networks, which can support a large number of users, is considered and analyzed.

  20. Comparison of the Frontier Distributed Database Caching System with NoSQL Databases

    CERN Document Server

    Dykstra, David

    2012-01-01

    One of the main attractions of non-relational "NoSQL" databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also has high scalability and wide-area distributability for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.

  1. Comparison of the Frontier Distributed Database Caching System to NoSQL Databases

    Science.gov (United States)

    Dykstra, Dave

    2012-12-01

    One of the main attractions of non-relational “NoSQL” databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also adds high scalability and the ability to be distributed over a wide-area for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.

  2. Strategies for the sustainability of online open-access biodiversity databases

    OpenAIRE

    Costello, Mark J.; Appeltans, Ward; Bailly, Nicolas; Berendsohn, Walter G.; de Jong, Yde; Edwards, Martin; Froese, Rainer; Huettmann, Falk; Los, Wouter; Mees, Jan; Segers, Hendrik; Bisby, Frank A.

    2014-01-01

    Highlights: • Open-access online scholarly biodiversity databases are threatened by a lack of funding and institutional support. • Strategic approaches to aid sustainability are summarised. • Issues include database coverage, quality, uniqueness; clarity of Intellectual Property Rights, ownership and governance. • Long-term support from institutions and scientists is easier for high-quality, comprehensive, prestigious global databases. • Larger multi-partner governed databases ...

  3. Viewpoints: a framework for object oriented database modelling and distribution

    Directory of Open Access Journals (Sweden)

    Fouzia Benchikha

    2006-01-01

    Full Text Available The viewpoint concept has received widespread attention recently. Its integration into a data model improves the flexibility of the conventional object-oriented data model and allows one to improve the modelling power of objects. The viewpoint paradigm can be used as a means of providing multiple descriptions of an object and as a means of mastering the complexity of current database systems enabling them to be developed in a distributed manner. The contribution of this paper is twofold: to define an object data model integrating viewpoints in databases and to present a federated database system integrating multiple sources following a local-as-extended-view approach.

  4. Distributed Database Control and Allocation. Volume 3. Distributed Database System Designer’s Handbook.

    Science.gov (United States)

    1983-10-01

Covers multiversion data techniques, including multiversion timestamping and multiversion locking, database recovery algorithms, and ways of combining the techniques. In the multiversion model, each logical data item is stored at one DM, and each write wi[x] produces a new copy (or version) of x, denoted xi; the value of x is thus a set of versions, one per write.
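The multiversion timestamping read rule described in this handbook can be sketched directly: each write of an item x appends a timestamped version xi, and a read at timestamp ts returns the latest version written at or before ts. A minimal sketch under those assumptions (class and method names are invented):

```python
import bisect

class MultiversionStore:
    """Each write produces a new version rather than overwriting; a read at
    timestamp ts sees the latest version with write timestamp <= ts."""
    def __init__(self):
        self._ts = {}    # item -> sorted list of write timestamps
        self._vals = {}  # item -> values aligned with the timestamp list

    def write(self, item, ts, value):
        ts_list = self._ts.setdefault(item, [])
        vals = self._vals.setdefault(item, [])
        i = bisect.bisect_right(ts_list, ts)  # keep timestamps sorted
        ts_list.insert(i, ts)
        vals.insert(i, value)

    def read(self, item, ts):
        ts_list = self._ts.get(item, [])
        i = bisect.bisect_right(ts_list, ts)
        # i == 0 means no version existed at or before ts
        return self._vals[item][i - 1] if i > 0 else None
```

Because old versions are retained, a late-arriving read with an old timestamp can still be served consistently instead of being rejected.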

  5. HCUP State Inpatient Databases (SID) - Restricted Access File

    Data.gov (United States)

    U.S. Department of Health & Human Services — The State Inpatient Databases (SID) contain the universe of hospital inpatient discharge abstracts in States participating in HCUP that release their data through...

  6. NODC Standard Product: World ocean database 2005 (NODC Accession 0099241)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The World Ocean Database 2005 (WOD05) DVD contains data, documentation, programs, and utilities for the latest release of this product. Data include 7.9 million...

  7. HCUP State Ambulatory Surgery Databases (SASD) - Restricted Access Files

    Data.gov (United States)

    U.S. Department of Health & Human Services — The State Ambulatory Surgery Databases (SASD) contain the universe of hospital-based ambulatory surgery encounters in participating States. Some States include...

  8. World-wide ocean optics database WOOD (NODC Accession 0092528)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — WOOD was developed to be a comprehensive publicly-available oceanographic bio-optical database providing global coverage. It includes nearly 250 major data sources...

  9. Global Ocean Currents Database (GOCD) (NCEI Accession 0093183)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Global Ocean Currents Database (GOCD) is a collection of quality controlled ocean current measurements such as observed current direction and speed obtained from...

  10. Dynamic fragmentation and query translation based security framework for distributed databases

    Directory of Open Access Journals (Sweden)

    Arunabha Sengupta

    2015-09-01

    Full Text Available The existing security models for distributed databases suffer from several drawbacks viz. tight coupling with the choice of database; lack of dynamism, granularity and flexibility; non scalability and vulnerability to intrusion attacks. There is a lack of an integrated flexible and interoperable security framework that can dynamically control access to table, row, column and field level data entity. The objective of this proposed framework is to address the issue of security in distributed query processing using the dynamic fragmentation and query translation methodologies based on a parameterized security model which could be tailored based on the business requirements to take care of relational level, record level, column level as well as the atomic data element level security and access requirements. This solution has been implemented and tested for DML operations on distributed relational databases and the execution results are found to be very promising in terms of restricting access to data elements with higher security clearance; blocking queries that return data at/below user’s level but its evaluation requires accessing columns/rows with higher security clearance; and blocking aggregate queries used for inferring classified information.
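The table-, row-, column- and field-level control this record describes boils down to filtering by clearance before results leave the query processor. A toy sketch of that filtering step; the schema, the `level` field, and the clearance numbers are all invented for illustration and are not the paper's model.

```python
def secure_select(rows, columns, user_level, column_levels, row_level_field="level"):
    """Return only the columns and rows whose required clearance does not
    exceed the user's level; everything above it is silently filtered out."""
    visible = [c for c in columns if column_levels.get(c, 0) <= user_level]
    return [{c: row[c] for c in visible}
            for row in rows
            if row.get(row_level_field, 0) <= user_level]
```

The paper's framework goes further by rewriting (translating) the query itself, which also blocks aggregate queries whose evaluation would touch classified rows or columns.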

  11. Design and Implementation of a Heterogeneous Distributed Database System

    Institute of Scientific and Technical Information of China (English)

    金志权; 柳诚飞; 等

    1990-01-01

This paper introduces a heterogeneous distributed database system called the LSZ system, where LSZ is an abbreviation of Li Shizhen, an ancient Chinese medical scientist. The LSZ system adopts the cluster as its distributed database node (or site); each cluster consists of one or several microcomputers and one server. The paper describes the basic architecture and the prototype implementation, which includes query processing and optimization, a transaction manager, and data language translation. The system provides a uniform retrieval and update user interface through the global relational data language GRDL.

  12. An Access Path Model for Physical Database Design.

    Science.gov (United States)

    1979-12-28

For the purposes of implementation-oriented physical design, an algebraic structure over the logical access paths is used; an algorithm is presented for generating a maximal labelling that specifies superior support for the access paths most heavily travelled.

  13. Natural Language Access to Databases: Interpreting Update Requests

    Science.gov (United States)

    1981-09-30


  14. Respiratory cancer database: An open access database of respiratory cancer gene and miRNA.

    Science.gov (United States)

    Choubey, Jyotsna; Choudhari, Jyoti Kant; Patel, Ashish; Verma, Mukesh Kumar

    2017-01-01

The Respiratory Cancer Database (RespCanDB) is a genomic and proteomic database of cancers of the respiratory organs. It also includes information on medicinal plants used for the treatment of various respiratory cancers, with the structures of their active constituents, as well as pharmacological and chemical information on drugs associated with these cancers. Data in RespCanDB have been collected manually from published research articles and from other databases, and integrated using MySQL, a relational database management system. MySQL manages all data in the back end and provides commands to retrieve and store data in the database. The web interface of the database has been built in ASP. RespCanDB is expected to contribute to the scientific community's understanding of respiratory cancer biology as well as to the development of new ways of diagnosing and treating respiratory cancer. Currently, the database contains oncogenomic information on lung cancer, laryngeal cancer, and nasopharyngeal cancer. Data for other cancers, such as oral and tracheal cancers, will be added in the near future. The URL of RespCanDB is http://ridb.subdic-bioinformatics-nitrr.in/.

  15. Enhancing SAMOS Data Access in DOMS via a Neo4j Property Graph Database.

    Science.gov (United States)

    Stallard, A. P.; Smith, S. R.; Elya, J. L.

    2016-12-01

The Shipboard Automated Meteorological and Oceanographic System (SAMOS) initiative provides routine access to high-quality marine meteorological and near-surface oceanographic observations from research vessels. The Distributed Oceanographic Match-Up Service (DOMS) under development is a centralized service that allows researchers to easily match in situ and satellite oceanographic data from distributed sources to facilitate satellite calibration, validation, and retrieval algorithm development. The service currently uses Apache Solr as a backend search engine on each node in the distributed network. While Solr is a high-performance solution that facilitates creation and maintenance of indexed data, it is limited in the sense that its schema is fixed. The property graph model escapes this limitation by creating relationships between data objects. The authors will present the development of the SAMOS Neo4j property graph database, including new search possibilities that take advantage of the property graph model, performance comparisons with Apache Solr, and a vision for graph databases as a storage tool for oceanographic data. The integration of the SAMOS Neo4j graph into DOMS will also be described. Currently, Neo4j contains spatial and temporal records from SAMOS, which are modeled into a time tree and an R-tree using the GraphAware and Spatial plugin tools for Neo4j. These extensions provide callable Java procedures within CYPHER (Neo4j's query language) that generate in-graph structures. Once generated, these structures can be queried using procedures from these libraries, or directly via CYPHER statements. Neo4j excels at performing relationship- and path-based queries, which challenge relational SQL databases because they require memory-intensive joins. Consider a user who wants to find records over several years, but only for specific months: if a traditional database stores only timestamps, this type of query becomes complex.
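The months-across-years query posed at the end of this record is exactly what a time-tree index answers cheaply: index observations by year, then by month, and walk only the requested branches. A small Python sketch of the idea (not Neo4j or CYPHER; the function names and records are invented for illustration):

```python
from collections import defaultdict

def build_time_tree(records):
    """Index (timestamp, value) records by year then month, mimicking the
    in-graph time tree built over the SAMOS data."""
    tree = defaultdict(lambda: defaultdict(list))
    for ts, value in records:
        tree[ts.year][ts.month].append(value)
    return tree

def query_months(tree, years, months):
    """Visit only the requested year/month branches instead of scanning
    every record, the advantage over a flat timestamp index."""
    return [v for y in years for m in months
            for v in tree.get(y, {}).get(m, [])]
```

A flat timestamp index would have to scan or range-split every year; the tree makes "January of every year" a handful of branch lookups.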

  16. Crystallography Open Database (COD): an open-access collection of crystal structures and platform for world-wide collaboration.

    Science.gov (United States)

    Gražulis, Saulius; Daškevič, Adriana; Merkys, Andrius; Chateigner, Daniel; Lutterotti, Luca; Quirós, Miguel; Serebryanaya, Nadezhda R; Moeck, Peter; Downs, Robert T; Le Bail, Armel

    2012-01-01

    Using an open-access distribution model, the Crystallography Open Database (COD, http://www.crystallography.net) collects all known 'small molecule / small to medium sized unit cell' crystal structures and makes them available freely on the Internet. As of today, the COD has aggregated ~150,000 structures, offering basic search capabilities and the possibility to download the whole database, or parts thereof using a variety of standard open communication protocols. A newly developed website provides capabilities for all registered users to deposit published and so far unpublished structures as personal communications or pre-publication depositions. Such a setup enables extension of the COD database by many users simultaneously. This increases the possibilities for growth of the COD database, and is the first step towards establishing a world wide Internet-based collaborative platform dedicated to the collection and curation of structural knowledge.

  17. Open-Access Metabolomics Databases for Natural Product Research: Present Capabilities and Future Potential

    Science.gov (United States)

    Johnson, Sean R.; Lange, Bernd Markus

    2015-01-01

    Various databases have been developed to aid in assigning structures to spectral peaks observed in metabolomics experiments. In this review article, we discuss the utility of currently available open-access spectral and chemical databases for natural products discovery. We also provide recommendations on how the research community can contribute to further improvements. PMID:25789275

  18. JASPAR 2010: the greatly expanded open-access database of transcription factor binding profiles

    DEFF Research Database (Denmark)

    Portales-Casamar, Elodie; Thongjuea, Supat; Kwon, Andrew T

    2009-01-01

    JASPAR (http://jaspar.genereg.net) is the leading open-access database of matrix profiles describing the DNA-binding patterns of transcription factors (TFs) and other proteins interacting with DNA in a sequence-specific manner. Its fourth major release is the largest expansion of the core database...

  19. The Personal Sequence Database: a suite of tools to create and maintain web-accessible sequence databases

    Directory of Open Access Journals (Sweden)

    Sullivan Christopher M

    2007-12-01

Full Text Available Background: Large molecular sequence databases are fundamental resources for modern bioscientists. Whether for project-specific purposes or sharing data with colleagues, it is often advantageous to maintain smaller sequence databases. However, this is usually not an easy task for the average bench scientist. Results: We present the Personal Sequence Database (PSD), a suite of tools to create and maintain small- to medium-sized web-accessible sequence databases. All interactions with PSD tools occur via the internet with a web browser. Users may define sequence groups within their database that can be maintained privately or published to the web for public use. A sequence group can be downloaded, browsed, searched by keyword or searched for sequence similarities using BLAST. Publishing a sequence group extends these capabilities to colleagues and collaborators. In addition to being able to manage their own sequence databases, users can enroll sequences in BLASTAgent, a BLAST hit tracking system, to monitor NCBI databases for new entries displaying a specified level of nucleotide or amino acid similarity. Conclusion: The PSD offers a valuable set of resources unavailable elsewhere. In addition to managing sequence data and BLAST search results, it facilitates data sharing with colleagues, collaborators and public users. The PSD is hosted by the authors and is available at http://bioinfo.cgrb.oregonstate.edu/psd/.

  20. Mars-Learning AN Open Access Educational Database

    Science.gov (United States)

    Kolankowski, S. M.; Fox, P. A.

    2016-12-01

Schools across America have begun focusing more and more on science and technology, giving their students greater opportunities to learn about planetary science and engineering. With the development of rovers and advanced scientific instrumentation, we are learning about Mars' geologic history on a daily basis. These discoveries are crucial to our understanding of Earth and our solar system. By bringing these findings into the classroom, students can learn key concepts about Earth and planetary sciences while focusing on a relevant current event. However, with an influx of readily accessible information, it is difficult for educators and students to find accurate and relevant material. Mars-Learning seeks to unify these discoveries and resources. The site will provide links to educational resources, software, and blogs with a focus on Mars. Activities will be grouped by grade for the middle and high school levels. Programs and software will be labeled open access, free, or paid to ensure users have the proper tools to get the information they need. For new educators or those new to the subject, relevant blogs and pre-made lesson plans will be available so instructors can ensure their success. The expectation of Mars-Learning is to provide stress-free access to learning materials that fall within a wide range of curricula. By providing a thorough and encompassing site, Mars-Learning hopes to further our understanding of the Red Planet and equip students with the knowledge and passion to continue this research.

  1. ARACHNID: A prototype object-oriented database tool for distributed systems

    Science.gov (United States)

    Younger, Herbert; Oreilly, John; Frogner, Bjorn

    1994-01-01

This paper discusses the results of a Phase 2 SBIR project sponsored by NASA and performed by MIMD Systems, Inc. A major objective of this project was to develop specific concepts for improved performance in accessing large databases. An object-oriented and distributed approach was used for the general design, while a geographical decomposition was used as a specific solution. The resulting software framework is called ARACHNID. The Faint Source Catalog developed by NASA was the initial database testbed. This is a database of many gigabytes, where an order-of-magnitude improvement in query speed is being sought. This database contains faint infrared point sources obtained from telescope measurements of the sky. A geographical decomposition of this database is an attractive approach to dividing it into pieces: each piece can then be searched on individual processors, with only weak data linkage required between the processors. As a further demonstration of the concepts implemented in ARACHNID, a tourist information system is discussed. This version of ARACHNID is the commercial result of the project. It is a distributed, networked database application where speed, maintenance, and reliability are important considerations. This paper focuses on the design concepts and technologies that form the basis for ARACHNID.

  2. Displaying bias in sampling effort of data accessed from biodiversity databases using ignorance maps.

    Science.gov (United States)

    Ruete, Alejandro

    2015-01-01

Open-access biodiversity databases, consisting mainly of citizen-science data, make temporally and spatially extensive species observation data available to a wide range of users. Such data have limitations, however, including sampling bias in favour of recorder distribution, lack of survey-effort assessment, and incomplete coverage of the distribution of all organisms. These limitations are not always recorded, yet any technical assessment or scientific research based on such data should include an evaluation of the uncertainty of its source data, and researchers should acknowledge this information in their analyses. The maps of ignorance proposed here are an easy-to-implement tool for visually exploring the quality of the data and for filtering out unreliable results. I present simple algorithms to display ignorance maps that report the spatial distribution of bias and lack of sampling effort across a study region. Ignorance scores are expressed solely from raw data in order to rely on the fewest assumptions possible; no prediction or estimation is involved. The rationale rests on the assumption that a species group can serve as a surrogate for sampling effort, because an entire group of species observed by similar methods is likely to share similar bias. Simple algorithms then transform raw data into ignorance scores scaled 0-1 that are easily comparable and scalable. Because calculations must be performed over big datasets, simplicity is crucial for web-based implementations on infrastructures for biodiversity information. With these algorithms, any infrastructure for biodiversity information can offer a quality report for the observations accessed through it. Users can specify a reference taxonomic group and a time frame according to the research question. The potential of this tool lies in the simplicity of its algorithms and in the few assumptions made.
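One simple raw-data-only scaling consistent with the approach this record describes: treat the per-grid-cell record count of a reference taxonomic group as sampling effort and map it to a 0-1 ignorance score. The half-point pseudocount below is an assumption chosen for illustration, not necessarily the paper's exact algorithm.

```python
def ignorance_score(n_records, half_point=1.0):
    """Map a raw per-cell count of reference-group records to a 0-1
    ignorance score: 1.0 with no sampling, 0.5 at half_point records,
    approaching 0 as effort grows. Raw counts only; no estimation."""
    if n_records < 0:
        raise ValueError("record count cannot be negative")
    return half_point / (n_records + half_point)
```

Mapped over a grid of cells, these scores form the ignorance map: dark cells flag regions where absence of a species record says more about recorder distribution than about the species.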

  3. Semi-Distributed Vacuuming Model on Temporal Database (SDVMT)

    Directory of Open Access Journals (Sweden)

    Mohammad Shabanali FAMI

    2012-12-01

    Full Text Available Temporal databases are among the most common types of databases. Portfolio management, accounting, storage, treatment-management systems, aerology systems and scheduling are applications whose data carry time references. The temporal nature of the data, and the growing size of temporal databases because data are never removed, require a solution to overcome this limitation. In this research, the current model of vacuuming systems is first simulated and analyzed. A proposed model for vacuuming systems is then introduced using distribution concepts and simulated under the same conditions as the current model. Using the experimental results, the advantages and disadvantages of both models were investigated. The proposed model is more capable than the current model in answering temporal queries, and its response time for temporal queries is lower; however, its cost is higher. Given the possibility of using idle resources in organizations, these costs can be ignored alongside optimized usage of facilities.
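    The vacuuming idea, moving tuples whose valid time ended long ago out of the active store and distributing them across archive nodes, can be sketched as a toy function. The row layout and all names here are assumptions for illustration, not the paper's model:

```python
from datetime import date

def vacuum(relation, cutoff, archive_nodes):
    """Toy semi-distributed vacuuming: tuples whose valid-time interval
    ended before `cutoff` are removed from the active relation and
    spread across archive nodes (hashed on a key), so temporal queries
    over old data can still be answered by the archives."""
    kept = []
    for row in relation:
        if row["valid_to"] is not None and row["valid_to"] < cutoff:
            node = hash(row["key"]) % len(archive_nodes)
            archive_nodes[node].append(row)
        else:
            kept.append(row)
    return kept

active = [{"key": i, "valid_to": date(2000, 1, 1) if i < 3 else None}
          for i in range(5)]
archives = [[], []]
remaining = vacuum(active, date(2010, 1, 1), archives)
```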

  4. NCBI2RDF: enabling full RDF-based access to NCBI databases.

    Science.gov (United States)

    Anguita, Alberto; García-Remesal, Miguel; de la Iglesia, Diana; Maojo, Victor

    2013-01-01

    RDF has become the standard technology for enabling interoperability among heterogeneous biomedical databases. The NCBI provides access to a large set of life sciences databases through a common interface called Entrez. However, the latter does not provide RDF-based access to such databases, and, therefore, they cannot be integrated with other RDF-compliant databases and accessed via SPARQL query interfaces. This paper presents the NCBI2RDF system, aimed at providing RDF-based access to the complete NCBI data repository. This API creates a virtual endpoint for servicing SPARQL queries over different NCBI repositories and presenting to users the query results in SPARQL results format, thus enabling this data to be integrated and/or stored with other RDF-compliant repositories. SPARQL queries are dynamically resolved, decomposed, and forwarded to the NCBI-provided E-utilities programmatic interface to access the NCBI data. Furthermore, we show how our approach increases the expressiveness of the native NCBI querying system, allowing several databases to be accessed simultaneously. This feature significantly boosts productivity when working with complex queries and saves time and effort to biomedical researchers. Our approach has been validated with a large number of SPARQL queries, thus proving its reliability and enhanced capabilities in biomedical environments.
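    The mediator pattern described above bottoms out in calls to NCBI's public E-utilities. As a hedged sketch, the snippet below only constructs an ESearch request URL of the kind such a mediator could issue after decomposing a SPARQL query; the endpoint and parameters follow NCBI's E-utilities documentation, and no network request is made here.

```python
from urllib.parse import urlencode

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def esearch_url(db, term, retmax=20):
    """Build an NCBI ESearch URL (E-utilities) for one decomposed
    sub-query against a single NCBI database."""
    params = {"db": db, "term": term, "retmax": retmax, "retmode": "json"}
    return f"{EUTILS}/esearch.fcgi?{urlencode(params)}"

url = esearch_url("pubmed", "asthma[MeSH Terms]")
```

A real client would fetch this URL and translate the returned ID list into SPARQL results rows.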

  5. Electronic asthma action plan database: asthma action plan development using Microsoft Access.

    Science.gov (United States)

    Mangold, Rita A; Salzman, Gary A

    2005-04-01

    We created a user-friendly database for asthma management consistent with the national asthma guidelines. A database was designed using Microsoft Access for the creation of asthma action plans that can be shared between providers caring for patients with asthma. This database and the use of "form entry" improved documentation of asthma action plans, which are increasingly being used to assess appropriateness of care. We currently have 400 asthma action plans in the database. These action plans can be queried to document compliance with accepted best practices. Asthma action plans can be created and stored in an Access database that is user-friendly and can be networked to provide more consistent asthma care.

  6. Open-access databases as unprecedented resources and drivers of cultural change in fisheries science

    Energy Technology Data Exchange (ETDEWEB)

    McManamay, Ryan A [ORNL; Utz, Ryan [National Ecological Observatory Network

    2014-01-01

    Open-access databases with utility in fisheries science have grown exponentially in quantity and scope over the past decade, with profound impacts on our discipline. The management, distillation, and sharing of an exponentially growing stream of open-access data present several fundamental challenges in fisheries science. Many of the currently available open-access resources may not be universally known among fisheries scientists. We therefore introduce many national- and global-scale open-access databases with applications in fisheries science and provide an example of how they can be harnessed to perform valuable analyses without additional field efforts. We also discuss how the development, maintenance, and utilization of open-access data are likely to pose technical, financial, and educational challenges to fisheries scientists. Such cultural implications, which will coincide with the rapidly increasing availability of free data, should compel the American Fisheries Society to actively address these problems now to help ease the forthcoming cultural transition.

  7. Respiratory cancer database: An open access database of respiratory cancer gene and miRNA

    Directory of Open Access Journals (Sweden)

    Jyotsna Choubey

    2017-01-01

    Results and Conclusions: RespCanDB is expected to contribute to the scientific community's understanding of respiratory cancer biology, as well as to the development of new ways of diagnosing and treating respiratory cancer. Currently, the database contains oncogenomic information on lung cancer, laryngeal cancer, and nasopharyngeal cancer. Data for other cancers, such as oral and tracheal cancers, will be added in the near future. The URL of RespCanDB is http://ridb.subdic-bioinformatics-nitrr.in/.

  8. Development of database on the distribution coefficient. 1. Collection of the distribution coefficient data

    Energy Technology Data Exchange (ETDEWEB)

    Takebe, Shinichi; Abe, Masayoshi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2001-03-01

    The distribution coefficient is a very important parameter for environmental impact assessment of the disposal of radioactive waste arising from research institutes. A literature survey within the country was carried out, mainly for the purpose of selecting reasonable distribution coefficient values for use in safety evaluation. This report arranges, for each literature source, the information on the distribution coefficient to be entered into the database, and summarizes it as literature information data on the distribution coefficient. (author)

  9. Cryptographically Enforced Distributed Data Access Control

    NARCIS (Netherlands)

    Ibraimi, L.

    2011-01-01

    Outsourcing data storage reduces the cost of ownership. However, once data is stored on a remote server, users lose control over their sensitive data. There are two approaches to control the access to outsourced data. The first approach assumes that the outsourcee is fully trusted. This approach is

  10. Distributed data collection for a database of radiological image interpretations

    Science.gov (United States)

    Long, L. Rodney; Ostchega, Yechiam; Goh, Gin-Hua; Thoma, George R.

    1997-01-01

    The National Library of Medicine, in collaboration with the National Center for Health Statistics and the National Institute for Arthritis and Musculoskeletal and Skin Diseases, has built a system for collecting radiological interpretations for a large set of x-ray images acquired as part of the data gathered in the second National Health and Nutrition Examination Survey. This system is capable of delivering across the Internet 5- and 10-megabyte x-ray images to Sun workstations equipped with X Window-based 2048 × 2560 image displays, for the purpose of having these images interpreted for the degree of presence of particular osteoarthritic conditions in the cervical and lumbar spines. The collected interpretations can then be stored in a database at the National Library of Medicine, under control of the Illustra DBMS. This system is a client/server database application which integrates (1) distributed server processing of client requests, (2) a customized image transmission method for faster Internet data delivery, (3) distributed client workstations with high resolution displays, image processing functions and an on-line digital atlas, and (4) relational database management of the collected data.

  11. Income distribution patterns from a complete social security database

    CERN Document Server

    Derzsy, N; Santos, M A

    2012-01-01

    We analyze the income distribution of employees for 9 consecutive years (2001-2009) using a complete social security database for an economically important district of Romania. The database contains detailed information on more than half a million taxpayers, including their monthly salaries from all employers where they worked. Besides studying the characteristic distribution functions in the high and low/medium income limits, the database allows us a detailed dynamical study by following the time evolution of taxpayers' income. To our knowledge, this is the first extensive study of this kind (a previous Japanese taxpayer survey was limited to two years). In the high income limit we prove once again the validity of Pareto's law, obtaining a perfect scaling on four orders of magnitude in the rank for all the studied years. The obtained Pareto exponents are quite stable with values around $\alpha \approx 2.5$, in spite of the fact that during this period the economy developed rapidly and also a financial-econ...
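    The rank scaling reported above corresponds to a Pareto (power-law) tail. As a quick illustration of how such an exponent can be checked, the sketch below applies the standard Hill maximum-likelihood estimator to synthetic data drawn with the same exponent; this is a textbook method, not the authors' code.

```python
import math
import random

def hill_alpha(samples, xmin=1.0):
    """Maximum-likelihood (Hill) estimate of a Pareto tail exponent for
    samples above xmin: alpha = n / sum(ln(x_i / xmin))."""
    tail = [x for x in samples if x >= xmin]
    return len(tail) / sum(math.log(x / xmin) for x in tail)

random.seed(42)
# random.paretovariate(a) draws from a Pareto law with xmin = 1.
xs = [random.paretovariate(2.5) for _ in range(20000)]
alpha_hat = hill_alpha(xs)  # should land close to the true 2.5
```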

  12. Access to digital library databases in higher education: design problems and infrastructural gaps.

    Science.gov (United States)

    Oswal, Sushil K

    2014-01-01

    After defining accessibility and usability, the author offers a broad survey of the research studies on digital content databases, which have thus far primarily depended on data drawn from studies conducted by sighted researchers with non-disabled users employing screen readers and low vision devices. This article aims at producing a detailed description of the difficulties confronted by blind screen reader users with online library databases, which now hold most of the academic, peer-reviewed journal and periodical content essential for research and teaching in higher education. The approach taken here is borrowed from descriptive ethnography, which allows the author to create a complete picture of the accessibility and usability problems faced by an experienced academic user of digital library databases and screen readers. The author provides a detailed analysis of the different aspects of accessibility issues in digital databases under several headers, with a special focus on full-text PDF files. The author emphasizes that long-term studies with actual blind screen reader users, employing both qualitative and computerized research tools, can yield meaningful data for designers and developers to improve these databases to the point where they provide equal access to blind users.

  13. Access Control for Agent-based Computing: A Distributed Approach.

    Science.gov (United States)

    Antonopoulos, Nick; Koukoumpetsos, Kyriakos; Shafarenko, Alex

    2001-01-01

    Discusses the mobile software agent paradigm that provides a foundation for the development of high performance distributed applications and presents a simple, distributed access control architecture based on the concept of distributed, active authorization entities (lock cells), any combination of which can be referenced by an agent to provide…

  14. MINEs: open access databases of computationally predicted enzyme promiscuity products for untargeted metabolomics.

    Science.gov (United States)

    Jeffryes, James G; Colastani, Ricardo L; Elbadawi-Sidhu, Mona; Kind, Tobias; Niehaus, Thomas D; Broadbelt, Linda J; Hanson, Andrew D; Fiehn, Oliver; Tyo, Keith E J; Henry, Christopher S

    2015-01-01

    In spite of its great promise, metabolomics has proven difficult to execute in an untargeted and generalizable manner. Liquid chromatography-mass spectrometry (LC-MS) has made it possible to gather data on thousands of cellular metabolites. However, matching metabolites to their spectral features continues to be a bottleneck, meaning that much of the collected information remains uninterpreted and that new metabolites are seldom discovered in untargeted studies. These challenges require new approaches that consider compounds beyond those available in curated biochemistry databases. Here we present Metabolic In silico Network Expansions (MINEs), an extension of known metabolite databases to include molecules that have not been observed, but are likely to occur based on known metabolites and common biochemical reactions. We utilize an algorithm called the Biochemical Network Integrated Computational Explorer (BNICE) and expert-curated reaction rules based on the Enzyme Commission classification system to propose the novel chemical structures and reactions that comprise MINE databases. Starting from the Kyoto Encyclopedia of Genes and Genomes (KEGG) COMPOUND database, the MINE contains over 571,000 compounds, of which 93% are not present in the PubChem database. However, these MINE compounds have on average higher structural similarity to natural products than compounds from KEGG or PubChem. MINE databases were able to propose annotations for 98.6% of a set of 667 MassBank spectra, 14% more than KEGG alone and equivalent to PubChem while returning far fewer candidates per spectra than PubChem (46 vs. 1715 median candidates). Application of MINEs to LC-MS accurate mass data enabled the identity of an unknown peak to be confidently predicted. MINE databases are freely accessible for non-commercial use via user-friendly web-tools at http://minedatabase.mcs.anl.gov and developer-friendly APIs. 
MINEs improve metabolomics peak identification as compared to general chemical
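    The expansion step behind MINE generation can be illustrated schematically: treat compounds as opaque identifiers and "reaction rules" as plain functions, then repeatedly apply every rule to every known compound. The real BNICE rules operate on chemical structures, so this is only a toy analogue:

```python
def expand(seed_compounds, rules, generations=1):
    """Schematic MINE-style network expansion: apply every rule to the
    current frontier, keep only novel products, repeat."""
    known = set(seed_compounds)
    frontier = set(seed_compounds)
    for _ in range(generations):
        products = {rule(c) for c in frontier for rule in rules}
        frontier = products - known   # only compounds not yet seen
        known |= frontier
    return known

# Toy "rules": append a hydroxyl or a methyl marker to a compound name.
rules = [lambda c: c + "-OH", lambda c: c + "-CH3"]
network = expand({"A"}, rules, generations=2)
```

Each generation multiplies the compound set, which is why MINEs dwarf their seed database (571,000 compounds from KEGG COMPOUND).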

  15. Software for Distributed Computation on Medical Databases: A Demonstration Project

    Directory of Open Access Journals (Sweden)

    Balasubramanian Narasimhan

    2017-05-01

    Full Text Available Bringing together the information latent in distributed medical databases promises to personalize medical care by enabling reliable, stable modeling of outcomes with rich feature sets (including patient characteristics and treatments received). However, there are barriers to aggregation of medical data, due to lack of standardization of ontologies, privacy concerns, proprietary attitudes toward data, and a reluctance to give up control over end use. Aggregation of data is not always necessary for model fitting. In models based on maximizing a likelihood, the computations can be distributed, with aggregation limited to the intermediate results of calculations on local data, rather than raw data. Distributed fitting is also possible for singular value decomposition. There has been work on the technical aspects of shared computation for particular applications, but little has been published on the software needed to support the "social networking" aspect of shared computing, to reduce the barriers to collaboration. We describe a set of software tools that allow the rapid assembly of a collaborative computational project, based on the flexible and extensible R statistical software and other open source packages, that can work across a heterogeneous collection of database environments, with full transparency to allow local officials concerned with privacy protections to validate the safety of the method. We describe the principles, architecture, and successful test results for the site-stratified Cox model and rank-k singular value decomposition.
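    The distributed-fitting observation above rests on the fact that, for independent records, a log-likelihood and its gradient are sums over sites, so each site need only ship aggregate statistics rather than raw data. A minimal sketch with a one-parameter exponential model (illustrative only; the paper's implementation covers the site-stratified Cox model and SVD):

```python
def local_stats(times):
    """Each site reports only sufficient statistics, never raw records:
    its event count and total observed time."""
    return len(times), sum(times)

def pooled_mle(site_stats):
    """Exponential-rate MLE from per-site sufficient statistics:
    lambda = total count / total time, identical to fitting on the
    pooled raw data, because the log-likelihood is a sum over sites."""
    n = sum(s[0] for s in site_stats)
    total = sum(s[1] for s in site_stats)
    return n / total

site_a = [1.0, 2.0, 4.0]
site_b = [3.0, 5.0]
lam = pooled_mle([local_stats(site_a), local_stats(site_b)])
```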

  16. Database Access Manager for the Software Engineering Laboratory (DAMSEL) user's guide

    Science.gov (United States)

    1990-01-01

    Operating instructions for the Database Access Manager for the Software Engineering Laboratory (DAMSEL) system are presented. Step-by-step instructions for performing various data entry and report generation activities are included. Sample sessions showing the user interface display screens are also included. Instructions for generating reports are accompanied by sample outputs for each of the reports. The document groups the available software functions by the classes of users that may access them.

  17. Distributed Role-based Access Control for Coalition Application

    Institute of Scientific and Technical Information of China (English)

    HONG Fan; ZHU Xian; XING Guanglin

    2005-01-01

    Access control in multi-domain environments is one of the important questions in building coalitions between domains. On the basis of the RBAC access control model, the concepts of role delegation and role mapping are proposed, which support third-party authorization. Then, a distributed RBAC model is presented. Finally, the implementation issues are discussed.
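    The role-mapping idea can be sketched as a toy in-memory model: a domain grants a visiting user a local role by translating a role the user holds in their home domain. All class and attribute names here are assumptions for illustration, not the paper's model:

```python
class Domain:
    """Toy RBAC domain: roles carry permission sets, users hold roles,
    and `role_maps` translates a foreign domain's role into a local one
    (the cross-domain role-mapping idea)."""
    def __init__(self, name):
        self.name = name
        self.role_perms = {}   # role -> set of permissions
        self.user_roles = {}   # user -> set of local roles
        self.role_maps = {}    # (foreign domain, foreign role) -> local role

    def check(self, user, perm, home=None):
        roles = set(self.user_roles.get(user, set()))
        if home is not None:   # visiting user: map home-domain roles
            for r in home.user_roles.get(user, set()):
                mapped = self.role_maps.get((home.name, r))
                if mapped:
                    roles.add(mapped)
        return any(perm in self.role_perms.get(r, set()) for r in roles)

a, b = Domain("A"), Domain("B")
a.role_perms["clerk"] = {"read"}
a.user_roles["alice"] = {"clerk"}
b.role_perms["guest"] = {"read"}
b.role_maps[("A", "clerk")] = "guest"
# alice is unknown in B locally, but gains B's guest role via the mapping:
ok = b.check("alice", "read", home=a)
```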

  18. SPSmart: adapting population based SNP genotype databases for fast and comprehensive web access

    Directory of Open Access Journals (Sweden)

    Carracedo Ángel

    2008-10-01

    Full Text Available Abstract Background In the last five years large online resources of human variability have appeared, notably HapMap, Perlegen and the CEPH foundation. These databases of genotypes with population information act as catalogues of human diversity, and are widely used as reference sources for population genetics studies. Although many useful conclusions may be extracted by querying databases individually, the lack of flexibility for combining data from within and between each database does not allow the calculation of key population variability statistics. Results We have developed a novel tool for accessing and combining large-scale genomic databases of single nucleotide polymorphisms (SNPs) in widespread use in human population genetics: SPSmart (SNPs for Population Studies). A fast pipeline creates and maintains a data mart from the most commonly accessed databases of genotypes containing population information: data is mined, summarized into the standard statistical reference indices, and stored into a relational database that currently handles as many as 4 × 10⁹ genotypes and that can be easily extended to new database initiatives. We have also built a web interface to the data mart that allows the browsing of underlying data indexed by population and the combining of populations, allowing intuitive and straightforward comparison of population groups. All the information served is optimized for web display, and most of the computations are already pre-processed in the data mart to speed up the data browsing and any computational treatment requested. Conclusion In practice, SPSmart allows populations to be combined into user-defined groups, while multiple databases can be accessed and compared in a few simple steps from a single query. It performs the queries rapidly and gives straightforward graphical summaries of SNP population variability through visual inspection of allele frequencies outlined in standard pie-chart format.
In addition, full
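    One of the pre-computed statistics described above, the allele frequency of a user-defined population group, illustrates why combining populations means pooling allele counts rather than averaging per-population frequencies. A hedged sketch of that combination step (not SPSmart's code):

```python
def combined_frequency(groups):
    """Pool allele counts across populations for one SNP. Each group is
    (allele_count, total_chromosomes); summing counts, not averaging
    frequencies, weights each population by its sample size."""
    alleles = sum(count for count, _ in groups)
    total = sum(n for _, n in groups)
    return alleles / total

# 30/100 and 10/300 pooled give 40/400 = 0.10,
# not the naive mean of frequencies (0.30 + 0.0333) / 2.
freq = combined_frequency([(30, 100), (10, 300)])
```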

  19. Resolving the problem of multiple accessions of the same transcript deposited across various public databases.

    Science.gov (United States)

    Weirick, Tyler; John, David; Uchida, Shizuka

    2017-03-01

    Maintaining the consistency of genomic annotations is an increasingly complex task because of the iterative and dynamic nature of assembly and annotation, growing numbers of biological databases and insufficient integration of annotations across databases. As information exchange among databases is poor, a 'novel' sequence from one reference annotation could be annotated in another. Furthermore, relationships to nearby or overlapping annotated transcripts are even more complicated when using different genome assemblies. To better understand these problems, we surveyed current and previous versions of genomic assemblies and annotations across a number of public databases containing long noncoding RNA. We identified numerous discrepancies of transcripts regarding their genomic locations, transcript lengths and identifiers. Further investigation showed that the positional differences between reference annotations of essentially the same transcript could lead to differences in its measured expression at the RNA level. To aid in resolving these problems, we present the algorithm 'Universal Genomic Accession Hash (UGAHash)' and created an open source web tool to encourage the usage of the UGAHash algorithm. The UGAHash web tool (http://ugahash.uni-frankfurt.de) can be accessed freely without registration. The web tool allows researchers to generate Universal Genomic Accessions for genomic features or to explore annotations deposited in the public databases of the past and present versions. We anticipate that the UGAHash web tool will be a valuable tool to check for the existence of transcripts before judging the newly discovered transcripts as novel. © The Author 2016. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
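    The general idea of a coordinate-derived accession can be sketched as follows. The canonical string format and the hash truncation below are assumptions for illustration only, not the published UGAHash specification:

```python
import hashlib

def accession_hash(chrom, start, end, strand, assembly):
    """Derive a deterministic accession from a feature's genomic
    coordinates, so the same feature yields the same identifier
    regardless of which database deposited it. Illustrative only."""
    canonical = f"{assembly}:{chrom}:{start}-{end}:{strand}"
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:16]

h1 = accession_hash("chr1", 11869, 14409, "+", "GRCh38")
h2 = accession_hash("chr1", 11869, 14409, "+", "GRCh38")
```

Because the identifier is a pure function of the coordinates, two databases can compute it independently and detect that they hold the same transcript.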

  20. SciELO, Scientific Electronic Library Online, a Database of Open Access Journals

    Science.gov (United States)

    Meneghini, Rogerio

    2013-01-01

    This essay discusses SciELO, a scientific journal database operating in 14 countries. It covers over 1000 journals providing open access to full text and table sets of scientometrics data. In Brazil it is responsible for a collection of nearly 300 journals, selected along 15 years as the best Brazilian periodicals in natural and social sciences.…

  1. Data is key: introducing the data-based access control paradigm

    NARCIS (Netherlands)

    Pieters, Wolter; Tang, Qiang

    2009-01-01

    According to the Jericho forum, the trend in information security is moving the security perimeter as close to the data as possible. In this context, we suggest the idea of data-based access control, where decryption of data is made possible by knowing enough of the data. Trust is thus based on what

  2. Toward an open-access global database for mapping, control, and surveillance of neglected tropical diseases

    DEFF Research Database (Denmark)

    Hürlimann, Eveline; Schur, Nadine; Boutsika, Konstantina

    2011-01-01

    for monitoring and evaluation. What is currently missing is a georeferenced global database for NTDs providing open-access to the available survey data that is constantly updated and can be utilized by researchers and disease control managers to support other relevant stakeholders. We describe the steps taken...

  3. For 481 biomedical open access journals, articles are not searchable in the Directory of Open Access Journals nor in conventional biomedical databases

    DEFF Research Database (Denmark)

    Liljekvist, Mads Svane; Andresen, Kristoffer; Pommergaard, Hans-Christian

    2015-01-01

    Background. Open access (OA) journals allow access to research papers free of charge to the reader. Traditionally, biomedical researchers use databases like MEDLINE and EMBASE to discover new advances. However, biomedical OA journals might not fulfill such databases' criteria, hindering...... dissemination. The Directory of Open Access Journals (DOAJ) is a database exclusively listing OA journals. The aim of this study was to investigate DOAJ's coverage of biomedical OA journals compared with the conventional biomedical databases. Methods. Information on all journals listed in four conventional...

  4. How to Access Spectral Line Databases in the IVOA: SLAP Services in VOSpec

    Science.gov (United States)

    Salgado, J.; Osuna, P.; Guainazzi, M.; Barbarisi, I.; Arviset, C.

    2007-10-01

    In an action led by the ESA-VO project and VO-France, the International Virtual Observatory Alliance (IVOA) is defining the access to spectral line databases, both theoretical and observational. Two standards are in development: the SLAP (Simple Line Access Protocol) document and the Atomic and Molecular Spectral Line Data Model document. The first standard defines uniform access to spectral line databases, while the second specifies a common universal language for information interchange. The SLAP and the already existing SSAP (Simple Spectrum Access Protocol), integrated into the same VO application, are a powerful combination for astronomical spectral studies. Some very well known spectral line databases have already implemented SLAP services on their servers, e.g., the NIST Atomic Spectra Database (theoretical), LERMA (observational) or the IASD (Infrared Astronomical Spectral Database) (observational). Other projects, such as ALMA (Atacama Large Millimeter Array), are preparing their databases to be as close as possible to the Spectral Line Data Model and are planning to expose their data in SLAP format. We summarize the content of both the SLAP and AM Line Data Model documents and how these SLAP services have been integrated in VOSpec, the VO reference application for spectral access developed by the ESA-VO team.
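    A SLAP service is queried with a plain HTTP GET carrying a wavelength range. The sketch below only builds such a request URL; the parameter names follow the IVOA SLAP recommendation (wavelengths in metres, range given as min/max), while the endpoint is a placeholder, not a real service address:

```python
from urllib.parse import urlencode

def slap_query_url(base_url, wl_min_m, wl_max_m):
    """Build a Simple Line Access Protocol query for all spectral lines
    in [wl_min_m, wl_max_m] (wavelengths in metres, per the SLAP spec)."""
    params = {"REQUEST": "queryData",
              "WAVELENGTH": f"{wl_min_m}/{wl_max_m}"}
    return f"{base_url}?{urlencode(params)}"

# Placeholder endpoint; a client such as VOSpec would use a registered
# SLAP service URL and parse the VOTable it returns.
url = slap_query_url("http://example.org/slap", 5.0e-7, 6.0e-7)
```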

  5. Open Access This is an Open Access article distributed under the ...

    African Journals Online (AJOL)

    Open Access This is an Open Access article distributed under the terms of the ... the capacity and contribution of Nigerian Nurses to health care research, it is ... Aim: To review the academic and research preparedness of Nigerian nurses in ...

  6. Accessing the SEED Genome Databases via Web Services API: Tools for Programmers

    Directory of Open Access Journals (Sweden)

    Vonstein Veronika

    2010-06-01

    Full Text Available Abstract Background The SEED integrates many publicly available genome sequences into a single resource. The database contains accurate and up-to-date annotations based on the subsystems concept, which leverages clustering between genomes and other clues to accurately and efficiently annotate microbial genomes. The backend is used as the foundation for many genome annotation tools, such as the Rapid Annotation using Subsystems Technology (RAST) server for whole genome annotation, the metagenomics RAST server for random community genome annotations, and the annotation clearinghouse for exchanging annotations from different resources. In addition to a web user interface, the SEED also provides a Web-services-based API for programmatic access to the data in the SEED, allowing the development of third-party tools and mash-ups. Results The currently exposed Web services encompass over forty different methods for accessing data related to microbial genome annotations. The Web services provide comprehensive access to the database back end, allowing any programmer access to the most consistent and accurate genome annotations available. The Web services are deployed using a platform-independent, service-oriented approach that allows the user to choose the most suitable programming platform for their application. Example code demonstrates that the Web services can be used to access the SEED using common bioinformatics programming languages such as Perl, Python, and Java. Conclusions We present a novel approach to access the SEED database. Using Web services, a robust API for access to genomics data is provided, without requiring large volume downloads all at once. The API ensures timely access to the most current datasets available, including new genomes as soon as they come online.

  7. libChEBI: an API for accessing the ChEBI database.

    Science.gov (United States)

    Swainston, Neil; Hastings, Janna; Dekker, Adriano; Muthukrishnan, Venkatesh; May, John; Steinbeck, Christoph; Mendes, Pedro

    2016-01-01

    ChEBI is a database and ontology of chemical entities of biological interest. It is widely used as a source of identifiers to facilitate unambiguous reference to chemical entities within biological models, databases, ontologies and literature. ChEBI contains a wealth of chemical data, covering over 46,500 distinct chemical entities, and related data such as chemical formula, charge, molecular mass, structure, synonyms and links to external databases. Furthermore, ChEBI is an ontology, and thus provides meaningful links between chemical entities. Unlike many other resources, ChEBI is fully human-curated, providing a reliable, non-redundant collection of chemical entities and related data. While ChEBI is supported by a web service for programmatic access and a number of download files, it does not have an API library to facilitate the use of ChEBI and its data in cheminformatics software. To provide this missing functionality, libChEBI, a comprehensive API library for accessing ChEBI data, is introduced. libChEBI is available in Java, Python and MATLAB versions from http://github.com/libChEBI, and provides full programmatic access to all data held within the ChEBI database through a simple and documented API. libChEBI is reliant upon the (automated) download and regular update of flat files that are held locally. As such, libChEBI can be embedded in both on- and off-line software applications. libChEBI allows better support of ChEBI and its data in the development of new cheminformatics software. Covering three key programming languages, it allows for the entirety of the ChEBI database to be accessed easily and quickly through a simple API. All code is open access and freely available.

  8. LBVS: an online platform for ligand-based virtual screening using publicly accessible databases.

    Science.gov (United States)

    Zheng, Minghao; Liu, Zhihong; Yan, Xin; Ding, Qianzhi; Gu, Qiong; Xu, Jun

    2014-11-01

    Abundant data on compound bioactivity and publicly accessible chemical databases increase opportunities for ligand-based drug discovery. In order to make full use of the data, an online platform for ligand-based virtual screening (LBVS) using publicly accessible databases has been developed. LBVS adopts Bayesian learning approach to create virtual screening models because of its noise tolerance, speed, and efficiency in extracting knowledge from data. LBVS currently includes data derived from BindingDB and ChEMBL. Three validation approaches have been employed to evaluate the virtual screening models created from LBVS. The tenfold cross validation results of twenty different LBVS models demonstrate that LBVS achieves an average AUC value of 0.86. Our internal and external testing results indicate that LBVS is predictive for lead identifications. LBVS can be publicly accessed at http://rcdd.sysu.edu.cn/lbvs.
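    The Bayesian learning approach mentioned above can be illustrated with a toy Bernoulli naive Bayes classifier over fingerprint bits with Laplace smoothing. This is a schematic re-implementation to show the idea, not the LBVS code, and the tiny fingerprints below are fabricated for the example:

```python
import math

def train_nb(fps, labels):
    """Bernoulli naive Bayes over fingerprint bits with Laplace
    smoothing: a simple, noise-tolerant model of the kind used for
    ligand-based virtual screening."""
    model = {}
    for cls in set(labels):
        rows = [fp for fp, y in zip(fps, labels) if y == cls]
        n_bits = len(rows[0])
        on = [sum(fp[i] for fp in rows) for i in range(n_bits)]
        model[cls] = (len(rows) / len(fps),
                      [(on[i] + 1) / (len(rows) + 2) for i in range(n_bits)])
    return model

def predict(model, fp):
    """Pick the class with the highest log-posterior for fingerprint fp."""
    def logpost(cls):
        prior, p = model[cls]
        return math.log(prior) + sum(
            math.log(p[i] if bit else 1 - p[i]) for i, bit in enumerate(fp))
    return max(model, key=logpost)

fps = [(1, 1, 0), (1, 0, 0), (0, 1, 1), (0, 0, 1)]
labels = ["active", "active", "inactive", "inactive"]
model = train_nb(fps, labels)
pred = predict(model, (1, 1, 0))
```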

  9. On Access Database Security Policy

    Institute of Scientific and Technical Information of China (English)

    贾鑫

    2014-01-01

    Many kinds of database system software are currently on the market. Access is a small database system from Microsoft; it is easy to use and compact, and is well suited to managing modest volumes of data. One issue that must be considered when using Access is database security: to guarantee the security and integrity of the data in an Access database, effective security policies must be adopted to protect it. Drawing on the author's practical experience and related knowledge, this paper first analyzes the security definition and security risks of the Access database system, and then proposes targeted security policies intended to improve the security of Access databases.

  10. GlycomeDB – integration of open-access carbohydrate structure databases

    Directory of Open Access Journals (Sweden)

    von der Lieth Claus-Wilhelm

    2008-09-01

    Full Text Available Abstract Background Although carbohydrates are the third major class of biological macromolecules, after proteins and DNA, there is neither a comprehensive database for carbohydrate structures nor an established universal structure encoding scheme for computational purposes. Funding for further development of the Complex Carbohydrate Structure Database (CCSD, or CarbBank) ceased in 1997, and since then several initiatives have developed independent databases with partially overlapping foci. For each database, different encoding schemes for residues and sequence topology were designed. Therefore, it is virtually impossible to obtain an overview of all deposited structures or to compare the contents of the various databases. Results We have implemented procedures which download the structures contained in the seven major databases, e.g. GLYCOSCIENCES.de, the Consortium for Functional Glycomics (CFG), the Kyoto Encyclopedia of Genes and Genomes (KEGG), and the Bacterial Carbohydrate Structure Database (BCSDB). We have created a new database called GlycomeDB, containing all structures, their taxonomic annotations and references (IDs) for the original databases. More than 100000 datasets were imported, resulting in more than 33000 unique sequences now encoded in GlycomeDB using the universal format GlycoCT. Inconsistencies were found in all public databases, which were discussed and corrected in multiple feedback rounds with the responsible curators. Conclusion GlycomeDB is a new, publicly available database for carbohydrate sequences with a unified, all-encompassing structure encoding format and NCBI taxonomic referencing. The database is updated weekly and can be downloaded free of charge. The JAVA application GlycoUpdateDB is also available for establishing and updating a local installation of GlycomeDB. With the advent of GlycomeDB, the distributed islands of knowledge in glycomics are now bridged to form a single resource.

  11. Distributed Database Kriging for Adaptive Sampling (D2 KAS)

    Science.gov (United States)

    Roehm, Dominic; Pavel, Robert S.; Barros, Kipton; Rouet-Leduc, Bertrand; McPherson, Allen L.; Germann, Timothy C.; Junghans, Christoph

    2015-07-01

    We present an adaptive sampling method supplemented by a distributed database and a prediction method for multiscale simulations using the Heterogeneous Multiscale Method. A finite-volume scheme integrates the macro-scale conservation laws for elastodynamics, which are closed by momentum and energy fluxes evaluated at the micro-scale. In the original approach, molecular dynamics (MD) simulations are launched for every macro-scale volume element. Our adaptive sampling scheme replaces a large fraction of costly micro-scale MD simulations with fast table lookup and prediction. The cloud database Redis provides the plain table lookup, and with locality aware hashing we gather input data for our prediction scheme. For the latter we use kriging, which estimates an unknown value and its uncertainty (error) at a specific location in parameter space by using weighted averages of the neighboring points. We find that our adaptive scheme significantly improves simulation performance by a factor of 2.5-25, while retaining high accuracy for various choices of the algorithm parameters.
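
    The kriging step can be sketched as follows. This is a minimal one-dimensional ordinary-kriging illustration with an assumed Gaussian covariance model and invented parameters, not the D2KAS implementation (which works in a multi-dimensional parameter space and uses Redis for the table lookup):

```python
# Ordinary kriging in 1-D: estimate a value and its variance at x0 from
# neighboring (x, y) samples, using a Gaussian covariance model (assumed).
import math

def _solve(A, b):
    """Gauss-Jordan elimination with partial pivoting for a small system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col]:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def kriging_predict(xs, ys, x0, length=1.0, sigma2=1.0):
    """Ordinary kriging estimate and variance at x0."""
    cov = lambda a, b: sigma2 * math.exp(-((a - b) ** 2) / (2 * length ** 2))
    n = len(xs)
    # Kriging system: covariance matrix bordered by a Lagrange-multiplier
    # row/column that enforces unbiasedness (weights sum to 1).
    K = [[cov(xs[i], xs[j]) for j in range(n)] + [1.0] for i in range(n)]
    K.append([1.0] * n + [0.0])
    k = [cov(x, x0) for x in xs] + [1.0]
    w = _solve(K, k)
    estimate = sum(w[i] * ys[i] for i in range(n))
    variance = sigma2 - sum(wi * ki for wi, ki in zip(w, k))
    return estimate, variance

est, var = kriging_predict([0.0, 1.0, 2.0], [0.0, 1.0, 4.0], 0.5)
```

    At a sampled point the estimate reproduces the data exactly and the variance collapses to zero, which is what lets the adaptive scheme decide when a lookup-plus-prediction can replace a fresh MD simulation.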

  12. Optimizing joins between two partitioned relations in distributed databases

    Energy Technology Data Exchange (ETDEWEB)

    Ceri, S.; Gottlob, G.

    1986-06-01

    In this paper, the authors analyze the optimization of joins between two partitioned relations in distributed databases. They discuss the "semantic" optimization of joins, semi-join reduction, and join optimization, and present an overall procedure for join optimization that uses semi-join reduction and join optimization as building blocks. Several results of this paper, such as the principles of semantic optimization and Proposition 1, are simple in nature but appear to be very important from an application viewpoint. The join optimization program presented in Section 5 can be realistically applied, because the authors introduce a simplification that reduces the number of decision variables.
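
    The semi-join reduction used as a building block can be illustrated with a small in-memory sketch. The relations, key names, and site roles below are invented for illustration; a real distributed system would ship the key projection and the reduced relation over the network.

```python
# Semi-join reduction sketch: site A ships only R's join-key values to
# site B, which returns just the rows of S that can participate in the join.
def semijoin_reduce(r_rows, s_rows, key):
    """Return S semi-join R: rows of S that will participate in R join S."""
    r_keys = {row[key] for row in r_rows}   # small projection shipped to B
    return [row for row in s_rows if row[key] in r_keys]

def join(r_rows, s_rows, key):
    """Plain hash join of two lists of dict rows on `key`."""
    s_index = {}
    for row in s_rows:
        s_index.setdefault(row[key], []).append(row)
    return [{**r, **s} for r in r_rows for s in s_index.get(r[key], [])]

R = [{"id": 1, "a": "x"}, {"id": 2, "a": "y"}]
S = [{"id": 2, "b": "p"}, {"id": 3, "b": "q"}, {"id": 4, "b": "r"}]
reduced = semijoin_reduce(R, S, "id")   # only one of S's three rows survives
result = join(R, reduced, "id")
```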

  13. WiSPR - A Graphical User Interface for Accessing a Sybase Database

    Science.gov (United States)

    Williamson, Ramon L., II

    WiSPR is a Tcl/Tk script in the X environment that builds a display of the fields of an arbitrary database table in an easy-to-read format on the fly. Each field of the database table has a text widget into which search strings for that field may be entered. When all desired search fields have been filled, an SQL query is constructed and values are returned into the text widgets one row at a time. Subsequent matches to the query are displayed until all matching rows have been retrieved. WiSPR recognizes all SQL wildcards when doing queries, and once values are returned, the results may be saved to a file or printed on a printer. In addition, if write access to the database in question is available, the user can add new rows to the database or update entries in a row already retrieved. On-line help is available at any time using a menu-driven help system. Sybase database access is accomplished by means of the sybtcl library, written as an extension to Tcl/Tk by Tom Poindexter of Denver, Colorado.
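
    The on-the-fly query construction described above can be sketched as follows. This is a generic Python illustration with parameterized SQL, not the actual Tcl/sybtcl code; the table and field names are invented.

```python
# Sketch: build a SELECT from whatever search widgets were filled in.
# Entries containing SQL wildcards (% or _) become LIKE predicates.
def build_query(table, fields):
    """fields: {column: search string}; empty strings are ignored."""
    preds, params = [], []
    for col, text in fields.items():
        if not text:
            continue
        op = "LIKE" if ("%" in text or "_" in text) else "="
        preds.append(f"{col} {op} ?")
        params.append(text)
    sql = f"SELECT * FROM {table}"
    if preds:
        sql += " WHERE " + " AND ".join(preds)
    return sql, params

sql, params = build_query("targets", {"name": "M31%", "ra": "", "dec": "41.2"})
```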

  14. Accessing the quark orbital angular momentum with Wigner distributions

    CERN Document Server

    Lorcé, Cédric

    2012-01-01

    The quark orbital angular momentum (OAM) has been recognized as an important piece of the proton spin puzzle. A lot of effort has been invested in trying to extract it quantitatively from the generalized parton distributions (GPDs) and the transverse-momentum dependent parton distributions (TMDs), which are accessed in high-energy processes and provide three-dimensional pictures of the nucleon. Recently, we have shown that it is more natural to access the quark OAM from the phase-space or Wigner distributions. We discuss the concept of Wigner distributions in the context of quantum field theory and show how they are related to the GPDs and the TMDs. We summarize the different definitions discussed in the literature for the quark OAM and show how they can in principle be extracted from the Wigner distributions.
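
    As a sketch of the relation the abstract alludes to: in this phase-space formulation, the quark OAM is obtained as an average over the Wigner distribution. The normalization conventions below are assumed from the common form of this result and should be checked against the paper itself.

```latex
% Quark OAM as a phase-space average of the Wigner distribution
% (b_perp: impact parameter, k_perp: transverse momentum; conventions assumed)
\ell^q_z = \int \mathrm{d}x \,\mathrm{d}^2 b_\perp \,\mathrm{d}^2 k_\perp
           \left( \vec{b}_\perp \times \vec{k}_\perp \right)_z
           \rho^q\!\left(x, \vec{b}_\perp, \vec{k}_\perp\right)
```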

  15. Multi-Dimensional Bitmap Indices for Optimising Data Access within Object Oriented Databases at CERN

    CERN Document Server

    Stockinger, K

    2001-01-01

    Efficient query processing in high-dimensional search spaces is an important requirement for many analysis tools. In the literature on index data structures one can find a wide range of methods for optimising database access. In particular, bitmap indices have recently gained substantial popularity in data warehouse applications with large amounts of read-mostly data. Bitmap indices are implemented in various commercial database products and are used for querying typical business applications. However, scientific data that is mostly characterised by non-discrete attribute values cannot be queried efficiently by the techniques currently supported. In this thesis we propose a novel access method based on bitmap indices that efficiently handles multi-dimensional queries against typical scientific data. The algorithm is called GenericRangeEval and is an extension of a bitmap index for discrete attribute values. By means of a cost model we study the performance of queries with various selectivities against uniform...
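
    The baseline such an access method extends, an equality-encoded bitmap index answering a range query, can be sketched as follows. This is a toy illustration of the general technique, not the GenericRangeEval algorithm itself.

```python
# Equality-encoded bitmap index: one bitmap per distinct value, one bit per
# row. A range predicate is answered by OR-ing the qualifying bitmaps.
def build_bitmap_index(values):
    """Map each distinct value to a bitmap (bit i set = row i has it)."""
    index = {}
    for row, v in enumerate(values):
        index[v] = index.get(v, 0) | (1 << row)
    return index

def range_query(index, lo, hi):
    """Row numbers whose value satisfies lo <= value <= hi."""
    result = 0
    for v, bitmap in index.items():
        if lo <= v <= hi:
            result |= bitmap
    return [row for row in range(result.bit_length()) if result >> row & 1]

idx = build_bitmap_index([3, 1, 4, 1, 5, 9, 2, 6])
rows = range_query(idx, 2, 5)   # rows holding values in [2, 5]
```

    For non-discrete scientific attributes, the values would first be binned, which is the part that range-encoded schemes like the one in the thesis optimize.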

  16. A High Speed Mobile Courier Data Access System That Processes Database Queries in Real-Time

    Science.gov (United States)

    Gatsheni, Barnabas Ndlovu; Mabizela, Zwelakhe

    A secure high-speed query processing mobile courier data access (MCDA) system for a courier company has been developed. The system combines wireless and wired networks so that an offsite worker (the courier) can update a live database at the courier centre in real time. It is protected by an IPsec-based VPN. To our knowledge, no existing system performs the courier task proposed in this paper.

  17. Characterizing Journal Access at a Canadian University Using the Journal Citation Reports Database

    OpenAIRE

    Alan Gale; Linda Day

    2011-01-01

    This article outlines a simple approach to characterizing the level of access to the scholarly journal literature in the physical sciences and engineering offered by a research library, particularly within the Canadian university system. The method utilizes the “Journal Citation Reports” (JCR) database to produce lists of journals, ranked based on total citations, in the subject areas of interest. Details of the approach are illustrated using data from the University of Guelph. The examp...

  18. The Importance of Access to the SCOPUS Database in the Economic Crisis Condition

    Directory of Open Access Journals (Sweden)

    Atefeh Kalantari

    2014-06-01

    Full Text Available Data acquisition and delivering information resources to clients are among the most vital functions of libraries. However, the current political and economic crisis has imposed unpleasant effects on these functions. In such conditions, selecting information resources becomes an ever more complex task. This research intends to answer the question of whether the purchase of Scopus services in such a crisis, which makes the subscription of credible scientific journals much more difficult, is a beneficial choice for Iranian medical libraries. The problem will be answered by analyzing the accessibility of full-text articles via “view at publisher” image links in the Scopus database. Different studies have already been carried out on the abilities and features of Scopus in scientometric and citation analysis. However, it seems that the current study is the first to examine the value and importance of existing links for accessing full-text articles, such as the "view at publisher" link. This link is one of the remarkable features devised in Scopus and has special importance for end users. Hence, the access ratio to full-text documents cited in articles written by Iranian medical faculties was studied through the "view at publisher" link in Scopus, and also via a link devised in the A-Z list of full-text journals in the Iranian National Medical Digital Library database, available at URL: www.inlm.org, followed by comparing the results obtained through the study. Results showed the ability of Scopus to make full-text articles accessible to users depending on the type and level of individual or institutional subscription. Such ability itself could justify the necessity of subscribing to Scopus by the universities of medical sciences. Regardless of other features of Scopus such as scientometric studies, etc., this ability becomes more important when access to some articles depends on paying a subscription fee either privately or

  19. Quality, language, subdiscipline and promotion were associated with article accesses on Physiotherapy Evidence Database (PEDro).

    Science.gov (United States)

    Yamato, Tiê P; Arora, Mohit; Stevens, Matthew L; Elkins, Mark R; Moseley, Anne M

    2017-08-12

    To quantify the relationship between the number of times articles are accessed on the Physiotherapy Evidence Database (PEDro) and the article characteristics. A secondary aim was to examine the relationship between accesses and the number of citations of articles. The study was conducted to derive prediction models for the number of accesses of articles indexed on PEDro from factors that may influence an article's accesses. All articles available on PEDro from August 2014 to January 2015 were included. We extracted variables relating to the algorithm used to present PEDro search results (research design, year of publication, PEDro score, source of systematic review (Cochrane or non-Cochrane)) plus language, subdiscipline of physiotherapy, and whether articles were promoted to PEDro users. Three predictive models were examined using multiple regression analysis. Citation and journal impact factor were downloaded. There were 29,313 articles indexed in this period. We identified seven factors that predicted the number of accesses. More accesses were noted for factors related to the algorithm used to present PEDro search results (synthesis research (i.e., guidelines and reviews), recent articles, Cochrane reviews, and higher PEDro score) plus publication in English and being promoted to PEDro users. The musculoskeletal, neurology, orthopaedics, sports, and paediatrics subdisciplines were associated with more accesses. We also found that there was no association between number of accesses and citations. The number of times an article is accessed on PEDro is partly predicted by how condensed and high quality the evidence it contains is. Copyright © 2017 Chartered Society of Physiotherapy. Published by Elsevier Ltd. All rights reserved.

  20. Automating testbed documentation and database access using World Wide Web (WWW) tools

    Science.gov (United States)

    Ames, Charles; Auernheimer, Brent; Lee, Young H.

    1994-01-01

    A method for providing uniform transparent access to disparate distributed information systems was demonstrated. A prototype testing interface was developed to access documentation and information using publicly available hypermedia tools. The prototype gives testers a uniform, platform-independent user interface to on-line documentation, user manuals, and mission-specific test and operations data. Mosaic was the common user interface, and HTML (Hypertext Markup Language) provided hypertext capability.

  1. Distributed reservation control protocols for random access broadcasting channels

    Science.gov (United States)

    Greene, E. P.; Ephremides, A.

    1981-05-01

    Attention is given to a communication network consisting of an arbitrary number of nodes which can communicate with each other via a time-division multiple access (TDMA) broadcast channel. The reported investigation is concerned with the development of efficient distributed multiple access protocols for traffic consisting primarily of single packet messages in a datagram mode of operation. The motivation for the design of the protocols came from the consideration of efficient multiple access utilization of moderate to high bandwidth (4-40 Mbit/s capacity) communication satellite channels used for the transmission of short (1000-10,000 bits) fixed length packets. Under these circumstances, the ratio of roundtrip propagation time to packet transmission time is between 100 to 10,000. It is shown how a TDMA channel can be adaptively shared by datagram traffic and constant bandwidth users such as in digital voice applications. The distributed reservation control protocols described are a hybrid between contention and reservation protocols.

  2. The Ruby UCSC API: accessing the UCSC genome database using Ruby

    Directory of Open Access Journals (Sweden)

    Mishima Hiroyuki

    2012-09-01

    Full Text Available Abstract Background The University of California, Santa Cruz (UCSC) genome database is among the most used sources of genomic annotation in human and other organisms. The database offers an excellent web-based graphical user interface (the UCSC genome browser) and several means for programmatic queries. A simple application programming interface (API) in a scripting language aimed at the biologist was, however, not yet available. Here, we present the Ruby UCSC API, a library to access the UCSC genome database using Ruby. Results The API is designed as a BioRuby plug-in and built on the ActiveRecord 3 framework for the object-relational mapping, making writing SQL statements unnecessary. The current version of the API supports databases of all organisms in the UCSC genome database including human, mammals, vertebrates, deuterostomes, insects, nematodes, and yeast. The API uses the bin index—if available—when querying for genomic intervals. The API also supports genomic sequence queries using locally downloaded *.2bit files that are not stored in the official MySQL database. The API is implemented in pure Ruby and is therefore available in different environments and with different Ruby interpreters (including JRuby). Conclusions Assisted by the straightforward object-oriented design of Ruby and ActiveRecord, the Ruby UCSC API will help biologists query the UCSC genome database programmatically. The API is available through the RubyGem system. Source code and documentation are available at https://github.com/misshie/bioruby-ucsc-api/ under the Ruby license. Feedback and help are provided via the website at http://rubyucscapi.userecho.com/.
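
    As an illustration of the bin index mentioned above: the standard UCSC binning scheme maps a genomic interval to the smallest bin that fully contains it, so an interval query only has to examine a handful of bins. This is a Python transcription of the widely documented five-level scheme; the Ruby API's internals may differ.

```python
# UCSC bin calculation for a 0-based half-open interval [start, end).
# Bin offsets for the 5 levels are 512+64+8+1, 64+8+1, 8+1, 1, 0; the finest
# bins cover 128 kb (1 << 17) and each level is 8x coarser than the last.
def ucsc_bin(start, end):
    """Return the UCSC bin number containing the whole interval."""
    offsets = [512 + 64 + 8 + 1, 64 + 8 + 1, 8 + 1, 1, 0]
    start_bin, end_bin = start >> 17, (end - 1) >> 17
    for offset in offsets:
        if start_bin == end_bin:
            return offset + start_bin
        start_bin >>= 3
        end_bin >>= 3
    raise ValueError("interval out of range for the binning scheme")

bin_no = ucsc_bin(10_000, 20_000)   # fits in one smallest-level bin
```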

  3. Filling in the GAPS: evaluating completeness and coverage of open-access biodiversity databases in the United States.

    Science.gov (United States)

    Troia, Matthew J; McManamay, Ryan A

    2016-07-01

    Primary biodiversity data constitute observations of particular species at given points in time and space. Open-access electronic databases provide unprecedented access to these data, but their usefulness in characterizing species distributions and patterns in biodiversity depends on how complete species inventories are at a given survey location and how uniformly distributed survey locations are along dimensions of time, space, and environment. Our aim was to compare completeness and coverage among three open-access databases representing ten taxonomic groups (amphibians, birds, freshwater bivalves, crayfish, freshwater fish, fungi, insects, mammals, plants, and reptiles) in the contiguous United States. We compiled occurrence records from the Global Biodiversity Information Facility (GBIF), the North American Breeding Bird Survey (BBS), and federally administered fish surveys (FFS). We aggregated occurrence records by 0.1° × 0.1° grid cells and computed three completeness metrics to classify each grid cell as well-surveyed or not. Next, we compared frequency distributions of surveyed grid cells to background environmental conditions in a GIS and performed Kolmogorov-Smirnov tests to quantify coverage through time, along two spatial gradients, and along eight environmental gradients. The three databases contributed >13.6 million reliable occurrence records distributed among >190,000 grid cells. The percent of well-surveyed grid cells was substantially lower for GBIF (5.2%) than for systematic surveys (BBS and FFS; 82.5%). Still, the large number of GBIF occurrence records produced at least 250 well-surveyed grid cells for six of nine taxonomic groups. Coverages of systematic surveys were less biased across spatial and environmental dimensions but were more biased in temporal coverage compared to GBIF data. GBIF coverages also varied among taxonomic groups, consistent with commonly recognized geographic, environmental, and institutional sampling biases. This
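
    The aggregation step, binning occurrence records into 0.1° grid cells and applying a completeness criterion per cell, can be sketched as follows. The specific metric (singleton ratio) and threshold here are invented for illustration and are not the paper's three metrics.

```python
# Bin (lat, lon, species) records into 0.1-degree cells and flag cells as
# well-surveyed when few species are seen only once (a crude completeness
# proxy; the paper uses different metrics).
from collections import Counter, defaultdict

def grid_cell(lat, lon, size=0.1):
    """Integer cell coordinates for a 0.1-degree grid."""
    return (int(lat // size), int(lon // size))

def completeness_by_cell(records, threshold=0.2):
    """records: iterable of (lat, lon, species). Returns {cell: bool}."""
    cells = defaultdict(list)
    for lat, lon, species in records:
        cells[grid_cell(lat, lon)].append(species)
    result = {}
    for cell, specs in cells.items():
        counts = Counter(specs)
        singletons = sum(1 for c in counts.values() if c == 1)
        result[cell] = (singletons / len(specs)) <= threshold
    return result

records = [(35.01, -83.02, "A")] * 5 + [(35.01, -83.02, "B")] * 4 \
        + [(36.55, -84.71, "C"), (36.55, -84.71, "D")]
status = completeness_by_cell(records)
```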

  4. BioSYNTHESIS: access to a knowledge network of health sciences databases.

    Science.gov (United States)

    Broering, N C; Hylton, J S; Guttmann, R; Eskridge, D

    1991-04-01

    Users of the IAIMS Knowledge Network at the Georgetown University Medical Center have access to multiple in-house and external databases from a single point of entry through BioSYNTHESIS. The IAIMS project has developed a rich environment of biomedical information resources that represent a medical decision support system for campus physicians and students. The BioSYNTHESIS system is an information navigator that provides transparent access to a Knowledge Network of over a dozen databases. These multiple health sciences databases consist of bibliographic, informational, diagnostic, and research systems which reside on diverse computers such as DEC VAXs, SUN 490, AT&T 3B2s, Macintoshes, IBM PC/PS2s and the AT&T ISN and SYTEK network systems. Ethernet and TCP/IP protocols are used in the network architecture. BioSYNTHESIS also provides network links to the other campus libraries and to external institutions. As additional knowledge resources and technological advances have become available, BioSYNTHESIS has evolved from a two-phase to a three-phase program. Major components of the system, including recent achievements and future plans, are described.

  5. Checkpointing and Recovery in Distributed and Database Systems

    Science.gov (United States)

    Wu, Jiang

    2011-01-01

    A transaction-consistent global checkpoint of a database records a state of the database which reflects the effect of only completed transactions and not the results of any partially executed transactions. This thesis establishes the necessary and sufficient conditions for a checkpoint of a data item (or the checkpoints of a set of data items) to…

  6. A Model-driven Role-based Access Control for SQL Databases

    Directory of Open Access Journals (Sweden)

    Raimundas Matulevičius

    2015-07-01

    Full Text Available Nowadays security has become an important aspect in information systems engineering. A mainstream method for information system security is Role-based Access Control (RBAC), which restricts system access to authorised users. While the benefits of RBAC are widely acknowledged, the implementation and administration of RBAC policies remains a human intensive activity, typically postponed until the implementation and maintenance phases of system development. This deferred security engineering approach makes it difficult for security requirements to be accurately captured and for the system’s implementation to be kept aligned with these requirements as the system evolves. In this paper we propose a model-driven approach to manage SQL database access under the RBAC paradigm. The starting point of the approach is an RBAC model captured in SecureUML. This model is automatically translated to Oracle Database views and instead-of triggers code, which implements the security constraints. The approach has been fully instrumented as a prototype and its effectiveness has been validated by means of a case study.
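
    The view-plus-trigger mechanism that the generated code relies on can be demonstrated in miniature with SQLite, which also supports INSTEAD OF triggers on views. The schema, role, and column names below are invented; the paper targets Oracle, not SQLite.

```python
# Row-level access via a view; an INSTEAD OF trigger routes updates on the
# view back to the base table, so clients never touch the table directly.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE patient (id INTEGER PRIMARY KEY, ward TEXT, note TEXT);
INSERT INTO patient VALUES (1, 'A', 'n1'), (2, 'B', 'n2');

-- View exposed to the hypothetical 'ward A nurse' role.
CREATE VIEW patient_ward_a AS
  SELECT id, note FROM patient WHERE ward = 'A';

-- Writes against the view are rewritten onto the base table,
-- still constrained to ward A rows.
CREATE TRIGGER patient_ward_a_upd INSTEAD OF UPDATE ON patient_ward_a
BEGIN
  UPDATE patient SET note = NEW.note WHERE id = OLD.id AND ward = 'A';
END;
""")

db.execute("UPDATE patient_ward_a SET note = 'seen' WHERE id = 1")
visible = db.execute("SELECT id, note FROM patient_ward_a").fetchall()
```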

  7. Study on Mandatory Access Control in a Secure Database Management System

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    This paper proposes a security policy model for mandatory access control in a class B1 database management system whose labeling granularity is the tuple. The relation-hierarchical data model is extended to a multilevel relation-hierarchical data model. Based on this model, the concept of upper-lower layer relational integrity is presented after the covert channels caused by database integrity are analyzed and eliminated. Two SQL statements are extended to process polyinstantiation in the multilevel secure environment. The system is based on the multilevel relation-hierarchical data model and is capable of integratively storing and manipulating multilevel complicated objects (e.g., multilevel spatial data) and multilevel conventional data (e.g., integers, real numbers, and character strings).

  8. Evolution of grid-wide access to database resident information in ATLAS using Frontier

    CERN Document Server

    Barberis, D; The ATLAS collaboration; de Stefano, J; Dewhurst, A L; Dykstra, D; Front, D

    2012-01-01

    The ATLAS experiment deployed Frontier technology world-wide during the initial year of LHC collision data taking to enable user analysis jobs running on the World-wide LHC Computing Grid to access database resident data. Since that time, the deployment model has evolved to optimize resources, improve performance, and streamline maintenance of Frontier and related infrastructure. In this presentation we focus on the specific changes in the deployment and improvements undertaken, such as the optimization of cache and launchpad location, the use of RPMs for more uniform deployment of underlying Frontier related components, improvements in monitoring, optimization of fail-over, and an increasing use of a centrally managed database containing site specific information (for configuration of services and monitoring). In addition, analysis of Frontier logs has allowed us a deeper understanding of problematic queries and of use cases. Use of the system has grown beyond just user analysis and subsyste...

  9. Toward an open-access global database for mapping, control, and surveillance of neglected tropical diseases

    DEFF Research Database (Denmark)

    Hürlimann, Eveline; Schur, Nadine; Boutsika, Konstantina;

    2011-01-01

    After many years of general neglect, interest has grown and efforts came under way for the mapping, control, surveillance, and eventual elimination of neglected tropical diseases (NTDs). Disease risk estimates are a key feature to target control interventions, and serve as a benchmark...... for monitoring and evaluation. What is currently missing is a georeferenced global database for NTDs providing open-access to the available survey data that is constantly updated and can be utilized by researchers and disease control managers to support other relevant stakeholders. We describe the steps taken...

  10. Design of Nutrition Catering System for Athletes Based on Access Database

    Directory of Open Access Journals (Sweden)

    Hongjiang Wu

    2015-08-01

    Full Text Available In order to monitor and adjust athletes' dietary nutrition scientifically, ActiveX Data Objects (ADO) and Structured Query Language (SQL) were used to produce a program under the development environment of Visual Basic 6.0 and the Access database. A consulting system on food nutrition and diet was developed by combining the two languages and organizing the latest nutrition information. For athletes of different events and levels, the system supports balancing nutrition against physiological characteristics, assessing nutrition intake, querying the nutrition of common foods, and recommending functional nourishing foods.

  11. Databases

    Data.gov (United States)

    National Aeronautics and Space Administration — The databases of computational and experimental data from the first Aeroelastic Prediction Workshop are located here. The databases file names tell their contents by...

  12. Methods of Accessing Databases through the Internet

    Institute of Scientific and Technical Information of China (English)

    李惠欢

    2001-01-01

    This paper introduces several methods of accessing databases through the Internet, including CGI, JDBC, and IDC, and describes how each method works, its characteristics, and the vendors that currently support it. These methods are applied in practice with the Sybase database in the Guangdong Province instrument and equipment system, accessing a Sybase SQL Server through the Internet, and the results are analyzed.

  13. Area and Flux Distributions of Active Regions, Sunspot Groups, and Sunspots: A Multi-Database Study

    CERN Document Server

    Muñoz-Jaramillo, Andrés; Windmueller, John C; Amouzou, Ernest C; Longcope, Dana W; Tlatov, Andrey G; Nagovitsyn, Yury A; Pevtsov, Alexei A; Chapman, Gary A; Cookson, Angela M; Yeates, Anthony R; Watson, Fraser T; Balmaceda, Laura A; DeLuca, Edward E; Martens, Petrus C H

    2014-01-01

    In this work we take advantage of eleven different sunspot group, sunspot, and active region databases to characterize the area and flux distributions of photospheric magnetic structures. We find that, when taken separately, different databases are better fitted by different distributions (as has been reported previously in the literature). However, we find that all our databases can be reconciled by the simple application of a proportionality constant, and that, in reality, different databases are sampling different parts of a composite distribution. This composite distribution is made up by linear combination of Weibull and log-normal distributions -- where a pure Weibull (log-normal) characterizes the distribution of structures with fluxes below (above) $10^{21}$Mx ($10^{22}$Mx). Additionally, we demonstrate that the Weibull distribution shows the expected linear behaviour of a power-law distribution (when extended into smaller fluxes), making our results compatible with the results of Parnell et al. (200...
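
    The composite distribution can be sketched as a two-component mixture density. All parameter values below (mixing fraction, shape, scale) are invented for illustration; the paper fits its own parameters to the databases.

```python
# Linear combination of a Weibull and a log-normal PDF, of the kind used to
# describe flux distributions: Weibull dominating below ~1e21 Mx, log-normal
# above ~1e22 Mx. Parameters here are placeholders, not fitted values.
import math

def weibull_pdf(x, k, lam):
    return (k / lam) * (x / lam) ** (k - 1) * math.exp(-((x / lam) ** k))

def lognormal_pdf(x, mu, sigma):
    return math.exp(-((math.log(x) - mu) ** 2) / (2 * sigma ** 2)) \
           / (x * sigma * math.sqrt(2 * math.pi))

def composite_pdf(x, frac=0.5, k=0.5, lam=1e21, mu=math.log(1e22), sigma=1.0):
    """Mixture: frac * Weibull + (1 - frac) * log-normal."""
    return frac * weibull_pdf(x, k, lam) + (1 - frac) * lognormal_pdf(x, mu, sigma)

density = composite_pdf(5e21)   # flux in the crossover region
```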

  14. Object recognition for autonomous robot utilizing distributed knowledge database

    Science.gov (United States)

    Takatori, Jiro; Suzuki, Kenji; Hartono, Pitoyo; Hashimoto, Shuji

    2003-10-01

    In this paper we present a novel method of object recognition utilizing a remote knowledge database for an autonomous robot. The developed robot has three robot arms with different sensors: two CCD cameras and haptic sensors. It can see, touch and move the target object from different directions. Referring to a remote knowledge database of geometry and materials, the robot observes and handles objects to understand them, including their physical characteristics.

  15. Greening radio access networks using distributed base station architectures

    DEFF Research Database (Denmark)

    Kardaras, Georgios; Soler, José; Dittmann, Lars

    2010-01-01

    However, besides this, increasing energy efficiency represents a key factor for reducing operating expenses and deploying cost-effective mobile networks. This paper presents how distributed base station architectures can contribute to greening radio access networks. More specifically, the advantages...... of introducing remote radio head modules are discussed. Substantial flexibility is provided in terms of power consumption, as a result of combining efficient hardware with intelligent software. Additionally, it is underlined that designing eco-sustainable systems needs to follow a holistic approach towards...

  16. Database Replication

    CERN Document Server

    Kemme, Bettina

    2010-01-01

    Database replication is widely used for fault-tolerance, scalability and performance. The failure of one database replica does not stop the system from working as available replicas can take over the tasks of the failed replica. Scalability can be achieved by distributing the load across all replicas, and adding new replicas should the load increase. Finally, database replication can provide fast local access, even for geographically distributed clients, if data copies are located close to them. Despite its advantages, replication is not a straightforward technique to apply, and

  17. Characterizing Journal Access at a Canadian University Using the Journal Citation Reports Database

    Directory of Open Access Journals (Sweden)

    Alan Gale

    2011-07-01

    Full Text Available This article outlines a simple approach to characterizing the level of access to the scholarly journal literature in the physical sciences and engineering offered by a research library, particularly within the Canadian university system. The method utilizes the “Journal Citation Reports” (JCR) database to produce lists of journals, ranked based on total citations, in the subject areas of interest. Details of the approach are illustrated using data from the University of Guelph. The examples cover chemistry, physics, mathematics and statistics, as well as engineering. In assessing the level of access both the Library’s current journal subscriptions and backfiles are considered. To gain greater perspective, data from both 2003 and 2008 is analyzed. In addition, the number of document delivery requests received from University of Guelph Library users in recent years is also reviewed. The approach taken in characterizing access to the journal literature is found to be simple and easy to implement, but time consuming. The University of Guelph Library is shown to provide excellent access to the current journal literature in the subject areas examined. Access to the historical literature in those areas is also strong. In making these assessments, a broad and comprehensive array of journals is considered in each case. Document delivery traffic (i.e., Guelph requests) is found to have decreased markedly in recent years. This is attributed, at least in part, to improving access to the scholarly literature. For the University of Guelph, collection assessment is an ongoing process that must balance the needs of a diverse group of users. The results of analyses of the kind discussed in this article can be of practical significance and value to that process.

  18. Quasi Serializable Concurrency Control in Distributed Real-Time Database Systems

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    This paper formally defines and analyses the new notion of correctness called quasi serializability, and then outlines the corresponding concurrency control protocol, QDHP, for distributed real-time databases. Finally, through a series of simulation studies, it shows that the new concurrency control protocol considerably improves the performance of distributed real-time databases.

  19. A Distributed Algorithm for Determining Minimal Covers of Acyclic Database Schemes

    Institute of Scientific and Technical Information of China (English)

    叶新铭

    1994-01-01

    Acyclic databases possess several desirable properties for their design and use. A distributed algorithm is proposed for determining a minimal cover of an alpha-, beta-, gamma-, or Berge-acyclic database scheme over a set of attributes in a distributed environment.

  20. Genomics and Public Health Research: Can the State Allow Access to Genomic Databases?

    Directory of Open Access Journals (Sweden)

    M Stanton Jean

    2012-04-01

    Full Text Available Because many diseases are multifactorial disorders, the scientific progress in genomics and genetics should be taken into consideration in public health research. In this context, genomic databases will constitute an important source of information. Consequently, it is important to identify and characterize the State's role and authority on matters related to public health, in order to verify whether it has access to such databases while engaging in public health genomic research. We first consider the evolution of the concept of public health, as well as its core functions, using a comparative approach (e.g., WHO, PAHO, CDC and the Canadian province of Quebec). Following an analysis of relevant Quebec legislation, the precautionary principle is examined as a possible avenue to justify State access to and use of genomic databases for research purposes. Finally, we consider the influenza pandemic plans developed by WHO, Canada, and Quebec, as examples of key tools framing the public health decision-making process. We observed that State powers in public health are not, in Quebec, well adapted to the expansion of genomics research. We propose that the scope of the concept of research in public health should be clear and include the following characteristics: a commitment to the health and well-being of the population and to their determinants; the inclusion of both applied research and basic research; and an appropriate model of governance (authorization, follow-up, consent, etc.). We also suggest that the strategic approach version of the precautionary principle could guide collective choices in these matters.

  1. Identifying unknown nanocrystals by fringe fingerprinting in two dimensions and free-access crystallographic databases

    Science.gov (United States)

    Moeck, Peter; Čertik, Ondřej; Seipel, Bjoern; Groebner, Rebecca; Noice, Lori; Upreti, Girish; Fraundorf, Philip; Erni, Rolf; Browning, Nigel D.; Kiesow, Andreas; Jolivet, Jean-Pierre

    2005-11-01

    New needs to determine the crystallography of nanocrystals arise with the advent of science and engineering on the nanometer scale. Direct space high-resolution phase-contrast transmission electron microscopy (HRTEM) and atomic resolution Z-contrast scanning TEM (Z-STEM), when combined with tools for image-based nanocrystallography possess the capacity to meet these needs. This paper introduces such a tool, i.e. fringe fingerprinting in two dimensions (2D), for the identification of unknown nanocrystal phases and compares this method briefly to qualitative standard powder X-ray diffractometry (i.e. spatial frequency fingerprinting). Free-access crystallographic databases are also discussed because the whole fingerprinting concept is only viable if there are comprehensive databases to support the identification of an unknown nanocrystal phase. This discussion provides the rationale for our ongoing development of a dedicated free-access Nano-Crystallography Database (NCD) that contains comprehensive information on both nanocrystal structures and morphologies. The current status of the NCD project and plans for its future developments are briefly outlined. Although feasible in contemporary HRTEMs and Z-STEMs, fringe fingerprinting in 2D (and image-based nanocrystallography in general) will become much more viable with the increased availability of aberration-corrected transmission electron microscopes. When the image acquisition and interpretation are, in addition, automated in such microscopes, fringe fingerprinting in 2D will be able to compete with powder X-ray diffraction for the identification of unknown nanocrystal phases on a routine basis. Since it possesses a range of advantages over powder X-ray diffractometry, e.g., fringe fingerprint plots contain much more information for the identification of an unknown crystal phase, fringe fingerprinting in 2D may then capture a significant part of the nanocrystal metrology market.

  2. Links in a distributed database: Theory and implementation

    Energy Technology Data Exchange (ETDEWEB)

    Karonis, N.T.; Kraimer, M.R.

    1991-12-01

    This document addresses the problem of extending database links across Input/Output Controller (IOC) boundaries. It lays a foundation by reviewing the current system and proposing an implementation specification designed to guide all work in this area. The document also describes an implementation that is less ambitious than our formally stated proposal, one that does not extend the reach of all database links across IOC boundaries. Specifically, it introduces an implementation of input and output links and comments on that overall implementation. We include a set of manual pages describing each of the new functions the implementation provides.

  3. Access Database Security Analysis and Strategy

    Institute of Scientific and Technical Information of China (English)

    文小林

    2013-01-01

    Access is a database management system released by Microsoft in 1994 and a powerful, open tool for developing MIS systems. Its friendly interface and simple operation have made it widely used in the development of small and medium-sized database application systems. With the widespread adoption of the Access database, its security has become very important, and database security has become one of the biggest challenges in information security. In a network environment, Access databases have many security vulnerabilities, and patching these vulnerabilities to ensure the security of Access database systems has become an important topic in database security research. This paper gives a brief introduction to the Access database, describes the definition of database security and the security architecture, analyzes the security problems of the Access database, and proposes strategies for solving them, in the hope of helping to improve the security of the Access database.

  4. NASA's Astromaterials Database: Enabling Research Through Increased Access to Sample Data, Metadata and Imagery

    Science.gov (United States)

    Evans, Cindy; Todd, Nancy

    2014-01-01

    The Astromaterials Acquisition & Curation Office at NASA's Johnson Space Center (JSC) is the designated facility for curating all of NASA's extraterrestrial samples. Today, the suite of collections includes the lunar samples from the Apollo missions, cosmic dust particles falling into the Earth's atmosphere, meteorites collected in Antarctica, comet and interstellar dust particles from the Stardust mission, asteroid particles from Japan's Hayabusa mission, solar wind atoms collected during the Genesis mission, and space-exposed hardware from several missions. To support planetary science research on these samples, JSC's Astromaterials Curation Office hosts NASA's Astromaterials Curation digital repository and data access portal [http://curator.jsc.nasa.gov/], providing descriptions of the missions and collections, and critical information about each individual sample. Our office is designing and implementing several informatics initiatives to better serve the planetary research community. First, we are re-hosting the basic database framework by consolidating legacy databases for individual collections and providing a uniform access point for information (descriptions, imagery, classification) on all of our samples. Second, we continue to upgrade and host digital compendia that summarize and highlight published findings on the samples (e.g., lunar samples, meteorites from Mars). We host high resolution imagery of samples as it becomes available, including newly scanned images of historical prints from the Apollo missions. Finally we are creating plans to collect and provide new data, including 3D imagery, point cloud data, micro CT data, and external links to other data sets on selected samples. Together, these individual efforts will provide unprecedented digital access to NASA's Astromaterials, enabling preservation of the samples through more specific and targeted requests, and supporting new planetary science research and collaborations on the samples.

  5. Managing Consistency Anomalies in Distributed Integrated Databases with Relaxed ACID Properties

    DEFF Research Database (Denmark)

    Frank, Lars; Ulslev Pedersen, Rasmus

    2014-01-01

    In central databases the consistency of data is normally implemented by using the ACID (Atomicity, Consistency, Isolation and Durability) properties of a DBMS (Data Base Management System). This is not possible if distributed and/or mobile databases are involved and the availability of data also...... has to be optimized. Therefore, we will in this paper use so called relaxed ACID properties across different locations. The objective of designing relaxed ACID properties across different database locations is that the users can trust the data they use even if the distributed database temporarily...... been committed and completed, the execution has the consistency property. The above definition of the consistency property is not useful in distributed databases with relaxed ACID properties because such a database is almost always inconsistent. In the following, we will use the concept Consistency...

  6. Databases

    Directory of Open Access Journals (Sweden)

    Nick Ryan

    2004-01-01

    Full Text Available Databases are deeply embedded in archaeology, underpinning and supporting many aspects of the subject. However, as well as providing a means for storing, retrieving and modifying data, databases themselves must be a result of a detailed analysis and design process. This article looks at this process, and shows how the characteristics of data models affect the process of database design and implementation. The impact of the Internet on the development of databases is examined, and the article concludes with a discussion of a range of issues associated with the recording and management of archaeological data.

  7. Asking new questions with old data: The Centralized Open-Access Rehabilitation database for Stroke (SCOAR.

    Directory of Open Access Journals (Sweden)

    Keith Lohse

    2016-09-01

    Full Text Available Background: This paper introduces a tool for streamlining data integration in rehabilitation science, the Centralized Open-Access Rehabilitation database for Stroke (SCOAR), which allows researchers to quickly visualize relationships among variables, efficiently share data, generate hypotheses, and enhance clinical trial design. Methods: Bibliographic databases were searched according to inclusion criteria, leaving 2,892 titles that were further screened to 514 manuscripts for full-text review, leaving 215 randomized controlled trials in the database (489 independent groups representing 12,847 patients). Demographic, methodological, and statistical data were extracted by independent coders and entered into SCOAR. Results: Trial data came from 114 locations in 27 different countries and represented patients with a wide range of ages, 62 yr (41; 85), shown as median (range), and at various stages of recovery following their stroke, 141 d (1; 3372). There was considerable variation in the dose of therapy that patients received, 20 h (0; 221), over interventions of different durations, 28 d (10; 365). There was also a lack of common data elements (CDEs) across trials, and this lack of CDEs was most pronounced for baseline assessments of patient impairment and severity of stroke. Conclusions: Data integration across hundreds of RCTs allows clinicians and researchers to quickly visualize data from the history of the field and lays the foundation for making SCOAR a living database to which researchers can upload new data as trial results are published. SCOAR is a useful tool for clinicians and researchers that will facilitate data visualization, data sharing, the finding of relevant past studies, and the design of clinical trials by enabling more accurate and comprehensive power analyses. Furthermore, these data speak to the need for CDEs specific to stroke rehabilitation in randomized controlled trials. PROSPERO #: CRD4201409010

  8. PDTD: a web-accessible protein database for drug target identification

    Directory of Open Access Journals (Sweden)

    Gao Zhenting

    2008-02-01

    Full Text Available Abstract Background Target identification is important for modern drug discovery. With the advances in the development of molecular docking, potential binding proteins may be discovered by docking a small molecule to a repository of proteins with three-dimensional (3D) structures. To complete this task, a reverse docking program and a drug target database with 3D structures are necessary. To this end, we have developed a web server tool, TarFisDock (Target Fishing Docking; http://www.dddc.ac.cn/tarfisdock), which has been used widely by others. Recently, we have constructed a protein target database, Potential Drug Target Database (PDTD), and have integrated PDTD with TarFisDock. This combination aims to assist target identification and validation. Description PDTD is a web-accessible protein database for in silico target identification. It currently contains >1100 protein entries with 3D structures presented in the Protein Data Bank. The data are extracted from the literature and several online databases such as TTD, DrugBank and Thomson Pharma. The database covers diverse information on >830 known or potential drug targets, including protein and active site structures in both PDB and mol2 formats, related diseases, biological functions, as well as associated regulating (signaling) pathways. Each target is categorized by both nosology and biochemical function. PDTD supports keyword search functions, such as PDB ID, target name, and disease name. Data sets generated by PDTD can be viewed with plug-in molecular visualization tools and can also be downloaded freely. Remarkably, PDTD is specially designed for target identification. In conjunction with TarFisDock, PDTD can be used to identify binding proteins for small molecules. The results can be downloaded in the form of a mol2 file with the binding pose of the probe compound and a list of potential binding targets according to their ranking scores. Conclusion PDTD serves as a comprehensive and

  9. A Distributed MAC Protocol for Cooperation in Random Access Networks

    CERN Document Server

    Böcherer, Georg

    2008-01-01

    WLAN is one of the most successful applications of wireless communications in daily life because of low cost and ease of deployment. The enabling technique for this success is the use of random access schemes for the wireless channel. Random access requires minimal coordination between the nodes, which considerably reduces the cost of the infrastructure. Recently, cooperative communication in wireless networks has been of increasing interest because it promises higher rates and reliability. An additional MAC overhead is necessary to coordinate the nodes to allow cooperation and this overhead can possibly cancel out the cooperative benefits. In this work, a completely distributed protocol is proposed that allows nodes in the network to cooperate via Two-Hop and Decode-and-Forward for transmitting their data to a common gateway node. It is shown that high throughput gains are obtained in terms of the individual throughput that can be guaranteed to any node in the network. These results are validated by Monte Ca...

  10. Proyecto AVIS: a Spanish open access bird database available for research

    Directory of Open Access Journals (Sweden)

    Sara Varela

    2014-12-01

    Full Text Available Proyecto AVIS is an open access citizen science database that stores information collected by amateur ornithologists about bird occurrences and abundance in Spain. Proyecto AVIS was launched in 2005 and today stores data on 415 species (ca. 90% of bird species in Spain); it covers 30% of the Spanish territory, including the Canary Islands in the Atlantic Ocean and the Balearic Islands in the Mediterranean Sea. Here, we acknowledge the work of all the volunteers who have gathered bird records in the field and uploaded these observations over the last 10 years, and introduce Proyecto AVIS to a broader community of biogeographers and macroecologists to promote its use for research.

  11. The Zebrafish Neurophenome Database (ZND): a dynamic open-access resource for zebrafish neurophenotypic data.

    Science.gov (United States)

    Kyzar, Evan; Zapolsky, Ivan; Green, Jeremy; Gaikwad, Siddharth; Pham, Mimi; Collins, Christopher; Roth, Andrew; Stewart, Adam Michael; St-Pierre, Paul; Hirons, Budd; Kalueff, Allan V

    2012-03-01

    Zebrafish (Danio rerio) are widely used in neuroscience research, where their utility as a model organism is rapidly expanding. Low cost, ease of experimental manipulation, and sufficient behavioral complexity make zebrafish a valuable tool for high-throughput studies in biomedicine. To complement the available repositories for zebrafish genetic information, there is a growing need for the collection of zebrafish neurobehavioral and neurological phenotypes. For this, we are establishing the Zebrafish Neurophenome Database (ZND; www.tulane.edu/~znpindex/search) as a new dynamic online open-access data repository for behavioral and related physiological data. ZND, currently focusing on adult zebrafish, combines zebrafish neurophenotypic data with a simple, easily searchable user interface, which allows scientists to view and compare results obtained by other laboratories using various treatments in different testing paradigms. As a developing community effort, ZND is expected to foster innovative research using zebrafish by federating the growing body of zebrafish neurophenotypic data.

  12. SierraDNA – Demonstrating the Usefulness of Direct ILS Database Access

    Directory of Open Access Journals (Sweden)

    James Padgett

    2015-10-01

    Full Text Available Innovative Interfaces’ Sierra™ Integrated Library System (ILS) brings with it a Database Navigator Application (SierraDNA); in layman’s terms, SierraDNA gives Sierra sites read access to their ILS database. Unlike the closed use cases produced by vendor-supplied APIs, which restrict libraries to limited development opportunities, SierraDNA enables sites to publish their own APIs and scripts based upon custom SQL code to meet their own needs and those of their users and processes. In this article we give examples showing how SierraDNA can be utilized to improve library services. We highlight three example use cases which have benefited our users, enhanced online security and improved our back office processes. In the first use case we employ user access data from our electronic resources proxy server (WAM) to detect hacked user accounts. Three scripts are used in conjunction to flag user accounts which are being hijacked to systematically steal content from our electronic resource providers’ websites. In the second we utilize the reading histories of our users to augment our search experience with an Amazon-style “People who borrowed this book also borrowed…these books” feature. Two scripts are used together to determine which other items were borrowed by borrowers of the item currently of interest. Lastly, we use item holds data to improve our acquisitions workflow through an automated demand-based ordering process. Our explanation and SQL code should be of direct use for adoption or as examples for other Sierra customers willing to exploit their ILS data in similar ways, but the principles may also be useful to non-Sierra sites that wish to enhance security, improve user services or improve back office processes.
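The co-borrowing use case can be illustrated with a small self-contained sketch. The article's actual Sierra SQL is not reproduced here; the schema, table, and column names below are invented for illustration, with SQLite standing in for the Sierra database:

```python
# Hedged sketch of the "people who borrowed this book also borrowed..." idea:
# a self-join on a hypothetical checkout table, ranked by shared co-borrowers.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE checkout (patron_id INTEGER, item_id INTEGER);
INSERT INTO checkout VALUES
  (1, 100), (1, 101),
  (2, 100), (2, 101), (2, 102),
  (3, 100), (3, 102);
""")

# For a given item, find the other items checked out by the same patrons,
# ranked by how many co-borrowers they share with it.
rows = conn.execute("""
SELECT other.item_id, COUNT(*) AS shared
FROM checkout AS this
JOIN checkout AS other
  ON other.patron_id = this.patron_id
 AND other.item_id  <> this.item_id
WHERE this.item_id = 100
GROUP BY other.item_id
ORDER BY shared DESC, other.item_id
""").fetchall()
print(rows)  # -> [(101, 2), (102, 2)]
```

On a real ILS the same query shape would run against the reading-history tables exposed by the read-only database connection, with a LIMIT to keep the recommendation list short.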

  13. MetaboLights: An Open-Access Database Repository for Metabolomics Data.

    Science.gov (United States)

    Kale, Namrata S; Haug, Kenneth; Conesa, Pablo; Jayseelan, Kalaivani; Moreno, Pablo; Rocca-Serra, Philippe; Nainala, Venkata Chandrasekhar; Spicer, Rachel A; Williams, Mark; Li, Xuefei; Salek, Reza M; Griffin, Julian L; Steinbeck, Christoph

    2016-03-24

    MetaboLights is the first general purpose, open-access database repository for cross-platform and cross-species metabolomics research at the European Bioinformatics Institute (EMBL-EBI). Based upon the open-source ISA framework, MetaboLights provides Metabolomics Standard Initiative (MSI) compliant metadata and raw experimental data associated with metabolomics experiments. Users can upload their study datasets into the MetaboLights Repository. These studies are then automatically assigned a stable and unique identifier (e.g., MTBLS1) that can be used for publication reference. The MetaboLights Reference Layer associates metabolites with metabolomics studies in the archive and is extensively annotated with data fields such as structural and chemical information, NMR and MS spectra, target species, metabolic pathways, and reactions. The database is manually curated with no specific release schedules. MetaboLights is also recommended by journals for metabolomics data deposition. This unit provides a guide to using MetaboLights, downloading experimental data, and depositing metabolomics datasets using user-friendly submission tools. Copyright © 2016 John Wiley & Sons, Inc.

  14. The Mouse Genome Database: integration of and access to knowledge about the laboratory mouse.

    Science.gov (United States)

    Blake, Judith A; Bult, Carol J; Eppig, Janan T; Kadin, James A; Richardson, Joel E

    2014-01-01

    The Mouse Genome Database (MGD) (http://www.informatics.jax.org) is the community model organism database resource for the laboratory mouse, a premier animal model for the study of genetic and genomic systems relevant to human biology and disease. MGD maintains a comprehensive catalog of genes, functional RNAs and other genome features as well as heritable phenotypes and quantitative trait loci. The genome feature catalog is generated by the integration of computational and manual genome annotations generated by NCBI, Ensembl and Vega/HAVANA. MGD curates and maintains the comprehensive listing of functional annotations for mouse genes using the Gene Ontology, and MGD curates and integrates comprehensive phenotype annotations including associations of mouse models with human diseases. Recent improvements include integration of the latest mouse genome build (GRCm38), improved access to comparative and functional annotations for mouse genes with expanded representation of comparative vertebrate genomes and new loads of phenotype data from high-throughput phenotyping projects. All MGD resources are freely available to the research community.

  15. Dynamic Real Time Distributed Sensor Network Based Database Management System Using XML, JAVA and PHP Technologies

    Directory of Open Access Journals (Sweden)

    D. Sudharsan

    2012-03-01

    Full Text Available Wireless Sensor Network (WSN) is well known for distributed real-time systems in various applications. In order to handle the increasing functionality and complexity of high-resolution spatio-temporal sensory databases, there is a strong need for a system/tool to analyse real-time data associated with distributed sensor network systems. There are a few packages/systems available to maintain near real-time database systems/management, but they are expensive and require expertise. Hence, there is a need for a cost-effective and easy-to-use dynamic real-time data repository system to provide real-time data (raw as well as in usable units) in a structured format. In the present study, a distributed sensor network system, with Agrisens (AS) and FieldServer (FS) as well as an FS-based Flux Tower and FieldTwitter, is used, which consists of a network of sensors and field images to observe/collect real-time weather, crop and environmental parameters for precision agriculture. The real-time FieldServer-based spatio-temporal high-resolution dynamic sensory data was converted into a Dynamic Real-Time Database Management System (DRTDBMS) in a structured format for both raw and converted (with usable units) data. A web interface has been developed to access the DRTDBMS, and an exclusive domain has been created with the help of open/free Information and Communication Technology (ICT) tools in Extensible Markup Language (XML) using PHP (Hypertext Preprocessor) algorithms and with eXtensible HyperText Markup Language (XHTML) self-scripting. The proposed DRTDBMS prototype, called GeoSense DRTDBMS, which is a part of the ongoing Indo-Japan initiative ‘ICT and Sensor Network based Decision Support Systems in Agriculture and Environment Assessment’, will be integrated with the GeoSense cloud server to provide database services (dynamic real-time weather/soil/crop and environmental parameters) and modeling services (crop water requirement and simulated rice yield modeling). GeoSense-cloud server

  16. Open access database of raw ultrasonic signals acquired from malignant and benign breast lesions.

    Science.gov (United States)

    Piotrzkowska-Wróblewska, Hanna; Dobruch-Sobczak, Katarzyna; Byra, Michał; Nowicki, Andrzej

    2017-08-31

    The aim of this paper is to provide access to a database consisting of the raw radio-frequency ultrasonic echoes acquired from malignant and benign breast lesions. The database is freely available for study and signal analysis. The ultrasonic radio-frequency echoes were recorded from breast focal lesions of patients of the Institute of Oncology in Warsaw. The data were collected between 11/2013 and 10/2015. Patients were examined by a radiologist with 18 years' experience in the ultrasonic examination of breast lesions. The set of data includes scans from 52 malignant and 48 benign breast lesions recorded in a group of 78 women. For each lesion, two individual orthogonal scans from the pathological region were acquired with the Ultrasonix SonixTouch Research ultrasound scanner using the L14-5/38 linear array transducer. All malignant lesions were histologically assessed by core needle biopsy. In the case of benign lesions, part of them was histologically assessed and another part was observed over a 2-year period. The radio-frequency echoes were stored in Matlab file format. For each scan, the region of interest was provided to correctly indicate the lesion area. Moreover, for each lesion, the BI-RADS category and the lesion class were included. Two code examples of data manipulation are presented. The data can be downloaded via the Zenodo repository (https://doi.org/10.5281/zenodo.545928) or the website http://bluebox.ippt.gov.pl/~hpiotrzk. The database can be used to test quantitative ultrasound techniques and ultrasound image processing algorithms, or to develop computer-aided diagnosis systems. © 2017 American Association of Physicists in Medicine.

  17. Development of SRS.php, a Simple Object Access Protocol-based library for data acquisition from integrated biological databases.

    Science.gov (United States)

    Barbosa-Silva, A; Pafilis, E; Ortega, J M; Schneider, R

    2007-12-11

    Data integration has become an important task for biological database providers. The current model for data exchange among different sources simplifies the manner that distinct information is accessed by users. The evolution of data representation from HTML to XML enabled programs, instead of humans, to interact with biological databases. We present here SRS.php, a PHP library that can interact with the data integration Sequence Retrieval System (SRS). The library has been written using SOAP definitions, and permits the programmatic communication through webservices with the SRS. The interactions are possible by invoking the methods described in WSDL by exchanging XML messages. The current functions available in the library have been built to access specific data stored in any of the 90 different databases (such as UNIPROT, KEGG and GO) using the same query syntax format. The inclusion of the described functions in the source of scripts written in PHP enables them as webservice clients to the SRS server. The functions permit one to query the whole content of any SRS database, to list specific records in these databases, to get specific fields from the records, and to link any record among any pair of linked databases. The case study presented exemplifies the library usage to retrieve information regarding registries of a Plant Defense Mechanisms database. The Plant Defense Mechanisms database is currently being developed, and the proposal of SRS.php library usage is to enable the data acquisition for the further warehousing tasks related to its setup and maintenance.
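The SOAP message exchange the library relies on can be sketched as follows. This is a minimal illustration in Python rather than PHP, and the `getEntries` method name and its parameters are hypothetical, not taken from the actual SRS WSDL; a real client would generate its calls from the WSDL definitions:

```python
# Illustrative SOAP request construction (hypothetical method/parameter names):
# the client wraps a call in a SOAP envelope, POSTs the XML to the server's
# endpoint, and parses the XML response in the same way.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def build_request(method, params):
    """Wrap a method call and its parameters in a SOAP 1.1 envelope."""
    env = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(env, f"{{{SOAP_NS}}}Body")
    call = ET.SubElement(body, method)
    for name, value in params.items():
        ET.SubElement(call, name).text = value
    return ET.tostring(env, encoding="unicode")

# Hypothetical query: list UNIPROT records matching a keyword.
xml_request = build_request("getEntries",
                            {"database": "UNIPROT", "query": "kinase"})
print(xml_request)
```

The point of the XML-based exchange, as the abstract notes, is that programs rather than humans interact with the databases: the same envelope shape works for any of the integrated sources, with only the method name and parameters changing.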

  18. Distributed cloud association in downlink multicloud radio access networks

    KAUST Repository

    Dahrouj, Hayssam

    2015-03-01

    This paper considers a multicloud radio access network (M-CRAN), wherein each cloud serves a cluster of base stations (BSs) which are connected to the clouds through high-capacity digital links. The network comprises several remote users, where each user can be connected to one (and only one) cloud. This paper studies the user-to-cloud assignment problem by maximizing a network-wide utility subject to practical cloud connectivity constraints. The paper solves the problem by using an auction-based iterative algorithm, which can be implemented in a distributed fashion through a reasonable exchange of information between the clouds. The paper further proposes a centralized heuristic algorithm with low computational complexity. Simulation results show that the proposed algorithms provide appreciable performance improvements as compared to the conventional cloud-less assignment solutions. © 2015 IEEE.
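As a rough illustration of the assignment problem itself (not the paper's auction-based algorithm), a greedy baseline that assigns each user to one cloud while respecting per-cloud connectivity capacities might look like this; the utility values and capacities below are made up:

```python
# Greedy baseline for user-to-cloud assignment (not the paper's algorithm):
# each user joins exactly one cloud, and each cloud can admit a limited
# number of users. Pairs are considered in decreasing-utility order.
def greedy_assign(utility, capacity):
    """utility[u][c]: utility of user u on cloud c; capacity[c]: max users of cloud c."""
    pairs = sorted(((utility[u][c], u, c)
                    for u in range(len(utility))
                    for c in range(len(utility[0]))), reverse=True)
    load = [0] * len(capacity)
    assign = {}
    for val, u, c in pairs:
        # Admit the pair only if the user is free and the cloud has room.
        if u not in assign and load[c] < capacity[c]:
            assign[u] = c
            load[c] += 1
    return assign

utility = [[5, 1],   # user 0 strongly prefers cloud 0
           [4, 2],   # user 1 also prefers cloud 0
           [3, 3]]   # user 2 is indifferent
print(sorted(greedy_assign(utility, capacity=[1, 2]).items()))  # -> [(0, 0), (1, 1), (2, 1)]
```

The paper's auction approach improves on this kind of baseline by letting clouds iteratively exchange "price" information so the assignment converges toward the network-wide utility maximum rather than a locally greedy choice.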

  19. Greening radio access networks using distributed base station architectures

    DEFF Research Database (Denmark)

    Kardaras, Georgios; Soler, José; Dittmann, Lars

    2010-01-01

    Several actions for developing environmentally friendly technologies have been taken in most industrial fields. Significant resources have also been devoted in the mobile communications industry. Moving towards eco-friendly alternatives is primarily a social responsibility for network operators....... However, besides this, increasing energy efficiency represents a key factor for reducing operating expenses and deploying cost-effective mobile networks. This paper presents how distributed base station architectures can contribute to greening radio access networks. More specifically, the advantages...... of introducing remote radio head modules are discussed. Substantial flexibility is provided in terms of power consumption, as a result of combining efficient hardware with intelligent software. Additionally, it is underlined that designing eco-sustainable systems needs to follow a holistic approach towards...

  20. Scheduling transactions in mobile distributed real-time database systems

    Institute of Scientific and Technical Information of China (English)

    LEI Xiang-dong; ZHAO Yue-long; CHEN Song-qiao; YUAN Xiao-li

    2008-01-01

    A DMVOCC-MVDA (distributed multiversion optimistic concurrency control with multiversion dynamic adjustment) protocol was presented to process mobile distributed real-time transactions in mobile broadcast environments. At the mobile hosts, all transactions perform local pre-validation. The local pre-validation process is carried out against the transactions committed at the server in the last broadcast cycle. Transactions that survive local pre-validation must be submitted to the server for final validation. The new protocol eliminates conflicts between mobile read-only and mobile update transactions, and resolves data conflicts flexibly by using multiversion dynamic adjustment of the serialization order to avoid unnecessary restarts of transactions. Mobile read-only transactions can be committed without blocking, and the response time of mobile read-only transactions is greatly shortened. The tolerance of mobile transactions to disconnections from the broadcast channel is increased. In global validation, mobile distributed transactions have to perform checks to ensure distributed serializability at all participants. The simulation results show that the new concurrency control protocol offers better performance than other protocols in terms of miss rate, restart rate, and commit rate. Under a high workload (think time of 1 s), the miss rate of DMVOCC-MVDA is only 14.6%, significantly lower than that of other protocols. The restart rate of DMVOCC-MVDA is only 32.3%, showing that DMVOCC-MVDA can effectively reduce the restart rate of mobile transactions. And the commit rate of DMVOCC-MVDA is up to 61.2%, which is markedly higher than that of other protocols.
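
    The local pre-validation step can be pictured as backward validation against the write-sets broadcast in the last cycle. This is a toy sketch under assumed names and a set-based representation; the real protocol additionally adjusts the serialization order dynamically instead of always restarting conflicting transactions.

```python
def pre_validate(read_set, committed_write_sets):
    """Backward pre-validation at a mobile host (simplified sketch).

    `committed_write_sets` holds the write-sets of transactions that
    committed at the server during the last broadcast cycle. A local
    transaction survives pre-validation only if nothing it read was
    overwritten by one of those committed writers.
    """
    for write_set in committed_write_sets:
        if read_set & write_set:
            return False  # read-write conflict: restart (or adjust order)
    return True
```

    Surviving transactions are then submitted to the server for final validation, as the abstract describes.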

  1. The NASA ADS Abstract Service and the Distributed Astronomy Digital Library [and] Project Soup: Comparing Evaluations of Digital Collection Efforts [and] Cross-Organizational Access Management: A Digital Library Authentication and Authorization Architecture [and] BibRelEx: Exploring Bibliographic Databases by Visualization of Annotated Content-based Relations [and] Semantics-Sensitive Retrieval for Digital Picture Libraries [and] Encoded Archival Description: An Introduction and Overview.

    Science.gov (United States)

    Kurtz, Michael J.; Eichorn, Guenther; Accomazzi, Alberto; Grant, Carolyn S.; Demleitner, Markus; Murray, Stephen S.; Jones, Michael L. W.; Gay, Geri K.; Rieger, Robert H.; Millman, David; Bruggemann-Klein, Anne; Klein, Rolf; Landgraf, Britta; Wang, James Ze; Li, Jia; Chan, Desmond; Wiederhold, Gio; Pitti, Daniel V.

    1999-01-01

    Includes six articles that discuss a digital library for astronomy; comparing evaluations of digital collection efforts; cross-organizational access management of Web-based resources; searching scientific bibliographic databases based on content-based relations between documents; semantics-sensitive retrieval for digital picture libraries; and…

  2. A method to implement fine-grained access control for personal health records through standard relational database queries.

    Science.gov (United States)

    Sujansky, Walter V; Faus, Sam A; Stone, Ethan; Brennan, Patricia Flatley

    2010-10-01

    Online personal health records (PHRs) enable patients to access, manage, and share portions of their own health information electronically. This capability creates the need for precise access-control mechanisms that restrict the sharing of data to that intended by the patient. The authors describe the design and implementation of an access-control mechanism for PHR repositories that is modeled on the eXtensible Access Control Markup Language (XACML) standard, but intended to reduce the cognitive and computational complexity of XACML. The authors implemented the mechanism entirely in a relational database system using ANSI-standard SQL statements. Based on a set of access-control rules encoded as relational table rows, the mechanism determines via a single SQL query whether a user who accesses patient data from a specific application is authorized to perform a requested operation on a specified data object. Testing of this query on a moderately large database has demonstrated execution times consistently below 100 ms. The authors include the details of the implementation, including algorithms, examples, and a test database, as Supplementary materials.
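
    A minimal sketch of the single-query authorization idea, using SQLite in place of the authors' database and an invented, much-simplified rule schema (the paper's actual rules also model the requesting application and a data-object hierarchy):

```python
import sqlite3

# Hypothetical schema: access-control rules encoded as table rows.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE acl (
    grantee   TEXT,   -- user or role the patient shared with
    operation TEXT,   -- 'read' or 'write'
    datatype  TEXT    -- class of PHR data, e.g. 'medication'
);
INSERT INTO acl VALUES ('dr_smith', 'read', 'medication');
INSERT INTO acl VALUES ('dr_smith', 'read', 'lab_result');
""")

def is_authorized(user, op, datatype):
    """Single-query authorization check, in the spirit of the paper."""
    row = con.execute(
        "SELECT COUNT(*) FROM acl WHERE grantee=? AND operation=? AND datatype=?",
        (user, op, datatype),
    ).fetchone()
    return row[0] > 0
```

    The decision reduces to one parameterized SELECT over the rules table, which is what keeps evaluation times low.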

  3. Harmful algal bloom historical database from Coastal waters of Florida from 01 November 1995 to 09 September 1996 (NODC Accession 0019216)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — In the later part of 1999, a relational Microsoft Access database was created to accommodate a wide range of data on the phytoplankton Karenia brevis. This database,...

  4. A distributed computing tool for generating neural simulation databases.

    Science.gov (United States)

    Calin-Jageman, Robert J; Katz, Paul S

    2006-12-01

    After developing a model neuron or network, it is important to systematically explore its behavior across a wide range of parameter values or experimental conditions, or both. However, compiling a very large set of simulation runs is challenging because it typically requires both access to and expertise with high-performance computing facilities. To lower the barrier for large-scale model analysis, we have developed NeuronPM, a client/server application that creates a "screen-saver" cluster for running simulations in NEURON (Hines & Carnevale, 1997). NeuronPM provides a user-friendly way to use existing computing resources to catalog the performance of a neural simulation across a wide range of parameter values and experimental conditions. The NeuronPM client is a Windows-based screen saver, and the NeuronPM server can be hosted on any Apache/PHP/MySQL server. During idle time, the client retrieves model files and work assignments from the server, invokes NEURON to run the simulation, and returns results to the server. Administrative panels make it simple to upload model files, define the parameters and conditions to vary, and then monitor client status and work progress. NeuronPM is open-source freeware and is available for download at http://neuronpm.homeip.net . It is a useful entry-level tool for systematically analyzing complex neuron and network simulations.
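
    The client/server work-distribution pattern NeuronPM uses can be sketched in-process: idle clients repeatedly pull a parameter set from the server's queue, run the simulation, and return results. All names here are invented, the "server" is an in-process queue rather than Apache/PHP/MySQL, and `simulate` stands in for invoking NEURON.

```python
import queue
import threading

def volunteer_cluster(work_items, simulate, n_clients=3):
    """Sketch of a screen-saver-style cluster: each client thread
    drains jobs from a shared queue and reports results back."""
    jobs = queue.Queue()
    for item in work_items:
        jobs.put(item)
    results = {}
    lock = threading.Lock()

    def client():
        while True:
            try:
                params = jobs.get_nowait()   # ask the "server" for work
            except queue.Empty:
                return                       # no work left: go idle
            out = simulate(params)           # stand-in for a NEURON run
            with lock:
                results[params] = out        # return result to the server

    clients = [threading.Thread(target=client) for _ in range(n_clients)]
    for c in clients:
        c.start()
    for c in clients:
        c.join()
    return results
```

    In the real system the queue lives behind an HTTP server and clients poll it during idle time, but the pull-work/return-result loop is the same.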

  5. For 481 biomedical open access journals, articles are not searchable in the Directory of Open Access Journals nor in conventional biomedical databases.

    Science.gov (United States)

    Liljekvist, Mads Svane; Andresen, Kristoffer; Pommergaard, Hans-Christian; Rosenberg, Jacob

    2015-01-01

    Background. Open access (OA) journals allow access to research papers free of charge to the reader. Traditionally, biomedical researchers use databases like MEDLINE and EMBASE to discover new advances. However, biomedical OA journals might not fulfill such databases' criteria, hindering dissemination. The Directory of Open Access Journals (DOAJ) is a database exclusively listing OA journals. The aim of this study was to investigate DOAJ's coverage of biomedical OA journals compared with the conventional biomedical databases. Methods. Information on all journals listed in four conventional biomedical databases (MEDLINE, PubMed Central, EMBASE and SCOPUS) and DOAJ was gathered. Journals were included if they were (1) actively publishing, (2) full OA, (3) prospectively indexed in one or more databases, and (4) of biomedical subject. Impact factor and journal language were also collected. DOAJ was compared with the conventional databases regarding the proportion of journals covered, along with their impact factor and publishing language. The proportion of journals with articles indexed by DOAJ was determined. Results. In total, 3,236 biomedical OA journals were included in the study. Of the included journals, 86.7% were listed in DOAJ. Combined, the conventional biomedical databases listed 75.0% of the journals; 18.7% in MEDLINE; 36.5% in PubMed Central; 51.5% in SCOPUS and 50.6% in EMBASE. Of the journals in DOAJ, 88.7% published in English and 20.6% had received an impact factor for 2012, compared with 93.5% and 26.0%, respectively, for journals in the conventional biomedical databases. A subset of 51.1% and 48.5% of the journals in DOAJ had articles indexed from 2012 and 2013, respectively. Of journals exclusively listed in DOAJ, one journal had received an impact factor for 2012, and 59.6% of the journals had no content from 2013 indexed in DOAJ. Conclusions. DOAJ is the most complete registry of biomedical OA journals compared with the conventional biomedical databases.

  6. Sociospatial distribution of access to facilities for moderate and vigorous intensity physical activity in Scotland by different modes of transport

    Directory of Open Access Journals (Sweden)

    Lamb Karen E

    2012-07-01

    Background: People living in neighbourhoods of lower socioeconomic status have been shown to have higher rates of obesity and a lower likelihood of meeting physical activity recommendations than their more affluent counterparts. This study examines the sociospatial distribution of access to facilities for moderate or vigorous intensity physical activity in Scotland and whether such access differs by the mode of transport available and by Urban Rural Classification. Methods: A database of all fixed physical activity facilities was obtained from the national agency for sport in Scotland. Facilities were categorised into light, moderate and vigorous intensity activity groupings before being mapped. Transport networks were created to assess the number of each type of facility accessible from the population-weighted centroid of each small area in Scotland on foot, by bicycle, by car and by bus. Multilevel modelling was used to investigate the distribution of the number of accessible facilities by small-area deprivation within urban, small town and rural areas separately, adjusting for population size and local authority. Results: Prior to adjustment for Urban Rural Classification and local authority, the median number of accessible facilities for moderate or vigorous intensity activity increased with deprivation, from the most affluent or second most affluent quintile to the most deprived, for all modes of transport. However, after adjustment, the modelling results suggest that those in more affluent areas have significantly higher access to moderate and vigorous intensity facilities by car than those living in more deprived areas. Conclusions: The sociospatial distributions of access to facilities for both moderate intensity and vigorous intensity physical activity were similar. However, the results suggest that those living in the most affluent neighbourhoods have poorer access to facilities of either type that can be reached on foot.

  7. Distributed Medium Access Control with SDMA Support for WLANs

    Science.gov (United States)

    Zhou, Sheng; Niu, Zhisheng

    With simultaneous multi-user transmissions, spatial division multiple access (SDMA) provides substantial throughput gain over the single user transmission. However, its implementation in WLANs with contention-based IEEE 802.11 MAC remains challenging. Problems such as coordinating and synchronizing the multiple users need to be solved in a distributed way. In this paper, we propose a distributed MAC protocol for WLANs with SDMA support. A dual-mode CTS responding mechanism is designed to accomplish the channel estimation and user synchronization required for SDMA. We analytically study the throughput performance of the proposed MAC, and dynamic parameter adjustment is designed to enhance the protocol efficiency. In addition, the proposed MAC protocol does not rely on specific physical layer realizations, and can work on legacy IEEE 802.11 equipment with slight software updates. Simulation results show that the proposed MAC outperforms IEEE 802.11 significantly, and that the dynamic parameter adjustment can effectively track the load variation in the network.

  8. Microsoft Access Small Business Solutions State-of-the-Art Database Models for Sales, Marketing, Customer Management, and More Key Business Activities

    CERN Document Server

    Hennig, Teresa; Linson, Larry; Purvis, Leigh; Spaulding, Brent

    2010-01-01

    Database models developed by a team of leading Microsoft Access MVPs that provide ready-to-use solutions for sales, marketing, customer management and other key business activities for most small businesses. As the most popular relational database in the world, Microsoft Access is widely used by small business owners. This book responds to the growing need for resources that help business managers and end users design and build effective Access database solutions for specific business functions. Coverage includes: Elements of a Microsoft Access Database; Relational Data Model; Dealing with C

  9. Weak Serializable Concurrency Control in Distributed Real-Time Database Systems

    Institute of Scientific and Technical Information of China (English)

    党德鹏; 刘云生; et al.

    2002-01-01

    Most of the proposed concurrency control protocols for real-time database systems are based on the serializability theorem. Owing to the unique characteristics of real-time database applications and the importance of satisfying the timing constraints of transactions, serializability is too strong as a correctness criterion and not suitable for real-time databases in most cases. On the other hand, relaxed serializability, including epsilon-serializability and similarity-serializability, can allow more real-time transactions to satisfy their timing constraints, but database consistency may be sacrificed to some extent. We thus propose the use of weak serializability (WSR), which is more relaxed than conflict serializability while database consistency is maintained. In this paper, we first formally define the new notion of correctness called weak serializability. After the necessary and sufficient conditions for weak serializability are shown, the corresponding concurrency control protocol WDHP (weak serializable distributed high priority protocol) is outlined for distributed real-time databases, where a new lock mode called mask lock mode is proposed for simplifying the condition of global consistency. Finally, through a series of simulation studies, it is shown that using the new concurrency control protocol the performance of distributed real-time databases can be greatly improved.

  10. Unmasking Outliers in Large Distributed Databases Using Cluster Based Approach: CluBSOLD

    Directory of Open Access Journals (Sweden)

    A. Rama Satish

    2016-04-01

    Outliers are dissimilar or inconsistent data objects with respect to the remaining data objects in the data set, or objects that are far away from their cluster centroids. Detecting outliers in data is a very important concept in the Knowledge Data Discovery process for finding hidden knowledge. The task of detecting outliers has been studied in a large number of research areas like financial data analysis, large distributed systems, biological data analysis, data mining, scientific applications, health monitoring, etc. Existing research on outlier detection shows that density-based outlier detection techniques are robust. Identifying outliers in a distributed environment is not a simple task, because processing with a distributed database raises two major issues: the first is rendering the massive data generated from different databases, and the second is data integration, which may cause data security violations and sensitive information leakage. In this paper, we present a cluster-based outlier detection method to spot outliers in large and dynamically updated distributed databases, in which cell-density-based centralized detection is used to deal with both the massive data rendering problem and the data integration problem. Experiments are conducted on various datasets, and the obtained results clearly show the robustness of the proposed technique for finding outliers in large distributed databases.
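
    As an illustration of the general cluster-centroid idea (not the paper's cell-density algorithm), one can flag points whose distance to their own cluster centroid is unusually large; the function names and the mean-plus-k-sigma threshold rule here are assumptions for the sketch.

```python
import math
import statistics

def centroid(points):
    """Component-wise mean of a list of equal-dimension points."""
    dim = len(points[0])
    return tuple(sum(p[d] for p in points) / len(points) for d in range(dim))

def cluster_outliers(clusters, k=1.5):
    """Flag points far from their own cluster centroid: a simple
    distance-based stand-in for a density-based outlier criterion."""
    outliers = []
    for pts in clusters:
        c = centroid(pts)
        dists = [math.dist(p, c) for p in pts]
        mu = statistics.mean(dists)
        sigma = statistics.pstdev(dists)
        # A point is an outlier if its centroid distance exceeds mu + k*sigma.
        outliers += [p for p, d in zip(pts, dists)
                     if sigma > 0 and d > mu + k * sigma]
    return outliers
```

    In a distributed setting, each site would compute such summaries locally and ship only candidates to a central detector, which is the spirit of the centralized detection step the abstract mentions.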

  11. JASPAR 2016: a major expansion and update of the open-access database of transcription factor binding profiles

    DEFF Research Database (Denmark)

    Mathelier, Anthony; Fornes, Oriol; Arenillas, David J;

    2016-01-01

    JASPAR (http://jaspar.genereg.net) is an open-access database storing curated, non-redundant transcription factor (TF) binding profiles representing transcription factor binding preferences as position frequency matrices for multiple species in six taxonomic groups. For this 2016 release, we...

  12. MetIDB: A Publicly Accessible Database of Predicted and Experimental 1H NMR Spectra of Flavonoids

    NARCIS (Netherlands)

    Mihaleva, V.V.; Beek, te T.A.; Zimmeren, van F.; Moco, S.I.A.; Laatikainen, R.; Niemitz, M.; Korhonen, S.P.; Driel, van M.A.; Vervoort, J.

    2013-01-01

    Identification of natural compounds, especially secondary metabolites, has been hampered by the lack of easy to use and accessible reference databases. Nuclear magnetic resonance (NMR) spectroscopy is the most selective technique for identification of unknown metabolites. High quality 1H NMR (proton

  13. JASPAR, the open access database of transcription factor-binding profiles: new content and tools in the 2008 update

    DEFF Research Database (Denmark)

    Bryne, J.C.; Valen, E.; Tang, M.H.E.

    2008-01-01

    JASPAR is a popular open-access database for matrix models describing DNA-binding preferences for transcription factors and other DNA patterns. With its third major release, JASPAR has been expanded and equipped with additional functions aimed at both casual and power users. The heart of the JASPAR...

  14. Editorial and scientific quality in the parameters for inclusion of journals commercial and open access databases

    Directory of Open Access Journals (Sweden)

    Cecilia Rozemblum

    2015-04-01

    In this article, the parameters used by RedALyC, Catálogo Latindex, SciELO, Scopus and Web of Science for the incorporation of scientific journals into their collections are analyzed, with the goal of establishing their relation to the objectives of each database and of debating the weight that the scientific community gives to those systems as arbiters of "scientific quality". The indicators used are classified into: (1) editorial quality (formal aspects and editorial management); (2) content quality (peer review and originality); and (3) visibility (prestige of editors and editorial board, usage and impact, accessibility, and indexing). It is revealed that: (a) between 9 and 16% of the indicators are related to content quality; (b) the indicators lack specificity in their definition and in the determination of measurement systems; and (c) they match the goals of each database, although a marked trend towards formal aspects and visibility is observed. This makes it clear that these systems pursue their own objectives, building a core of "quality" journals for their readerships. We conclude, therefore, that the presence or absence of a journal in these collections is not a sufficient parameter for determining the scientific quality of a journal or its contents.

  15. Managing Large Scale Project Analysis Teams through a Web Accessible Database

    Science.gov (United States)

    O'Neil, Daniel A.

    2008-01-01

    Large scale space programs analyze thousands of requirements while mitigating safety, performance, schedule, and cost risks. These efforts involve a variety of roles with interdependent use cases and goals. For example, study managers and facilitators identify ground-rules and assumptions for a collection of studies required for a program or project milestone. Task leaders derive product requirements from the ground rules and assumptions and describe activities to produce needed analytical products. Discipline specialists produce the specified products and load results into a file management system. Organizational and project managers provide the personnel and funds to conduct the tasks. Each role has responsibilities to establish information linkages and provide status reports to management. Projects conduct design and analysis cycles to refine designs to meet the requirements and implement risk mitigation plans. At the program level, integrated design and analysis cycle studies are conducted to eliminate every 'to-be-determined' and develop plans to mitigate every risk. At the agency level, strategic studies analyze different approaches to exploration architectures and campaigns. This paper describes a web-accessible database developed by NASA to coordinate and manage tasks at three organizational levels. Other topics in this paper cover integration technologies and techniques for process modeling and enterprise architectures.

  16. Database Selection for Processing k Nearest Neighbors Queries in Distributed Environments.

    Science.gov (United States)

    Yu, Clement; Sharma, Prasoon; Meng, Weiyi; Qin, Yan

    This paper considers the processing of digital library queries, consisting of a text component and a structured component in distributed environments. The paper concentrates on the processing of the structured component of a distributed query. A method is proposed to identify the databases that are likely to be useful for processing any given…

  17. A Permutation Gigantic Issues in Mobile Real Time Distributed Database : Consistency & Security

    Directory of Open Access Journals (Sweden)

    Gyanendra Kr. Gupta

    2011-02-01

    Several forms of information systems are broadly used in a variety of system models. With the rapid development of computer networks, information system users care more about data sharing in networks. In a conventional relational database, data consistency is controlled by a consistency control mechanism: when a data object is locked in sharing mode, other transactions can only read it, but cannot update it. If the traditional consistency control method is still used, the system's concurrency will be unduly affected. So there are many new requirements for consistency control and security in Mobile Real-Time Distributed Databases (MRTDDB). The problem is not limited to one type of data (e.g. mobile or real-time databases). There are many aspects of data consistency problems in MRTDDB, such as inconsistency between the characteristics and type of data, and the inconsistency of topological relations after objects have been modified. In this paper, many cases of consistency are discussed. As mobile computing becomes popular and databases grow with information sharing, security is a big issue for researchers. Both consistency and security of data are a big challenge for researchers, because whenever the data is not consistent and secure, no operation on the data (e.g. a transaction) is productive. It becomes more and more crucial when transactions are used in non-traditional environments like mobile, distributed, real-time and multimedia databases. In this paper we raise the different aspects and analyze the available solutions for consistency and security of databases. Traditional database security has focused primarily on creating user accounts and managing user rights to database objects. But the use of these databases in mobile and nomadic computing creates new prospects for research. The widespread use of databases over the web, heterogeneous client-server architectures, application servers, and networks creates a critical need to

  18. Information access, income distribution, and the Environmental Kuznets Curve

    Energy Technology Data Exchange (ETDEWEB)

    Bimonte, Salvatore [Department of Political Economy, University of Siena, Piazza S. Francesco 7, 53100 Siena (Italy)

    2002-04-01

    Recent empirical studies have tested the hypothesis of an Environmental Kuznets Curve (EKC), focusing primarily on the relationship between per capita income and certain types of pollutant emissions. Given the stock nature of many pollution problems, emissions only partially account for the environmental impacts. Moreover, almost all of the studies have considered little more than income levels as explanatory variables. This paper empirically tests the hypothesis of the EKC's existence for a stock-sensitive indicator, namely the percentage of protected area (PA) within national territory. It theorizes that economic growth is a necessary condition for better addressing environmental issues, but it also stresses that other variables (income distribution, education, information accessibility) may play a fundamental role in determining environmental quality. Contrary to other studies that mainly focus on calculating the income level corresponding to the transition point, this paper is more concerned with calculating the environmental quality corresponding to that transition point, that is, the minimum level of environmental quality that a country is willing to accept. This paper highlights the idea that if the transition point is determined primarily by income level, social policies determine the level of environmental quality corresponding to that point.

  19. Integrity Checking in Database of MS Access

    Institute of Scientific and Technical Information of China (English)

    姚一红

    2009-01-01

    Microsoft Access for Windows is a powerful relational database management system for office automation released by Microsoft. This paper discusses integrity control strategies in Access and presents several practical worked examples.

  20. Consistency and Security in Mobile Real Time Distributed Database (MRTDDB): A Combinational Giant Challenge

    Science.gov (United States)

    Gupta, Gyanendra Kr.; Sharma, A. K.; Swaroop, Vishnu

    2010-11-01

    Many types of information systems are widely used in various fields. With the rapid development of computer networks, information system users care more about data sharing in networks. In a traditional relational database, data consistency is controlled by a consistency control mechanism: when a data object is locked in sharing mode, other transactions can only read it, but cannot update it. If the traditional consistency control method is still used, the system's concurrency will be unduly affected. So there are many new requirements for consistency control and security in MRTDDB. The problem is not limited to one type of data (e.g. mobile or real-time databases). There are many aspects of data consistency problems in MRTDDB, such as inconsistency between attributes and type of data, and the inconsistency of topological relations after objects have been modified. In this paper, many cases of consistency are discussed. As mobile computing becomes popular and databases grow with information sharing, security is a big issue for researchers. Consistency and security of data are a big challenge for researchers, because whenever the data is not consistent and secure, no operation on the data (e.g. a transaction) is productive. It becomes more and more crucial when transactions are used in non-traditional environments like mobile, distributed, real-time and multimedia databases. In this paper we raise the different aspects and analyze the available solutions for consistency and security of databases. Traditional database security has focused primarily on creating user accounts and managing user privileges to database objects. But the use of these databases in mobile and nomadic computing creates new opportunities for research. The widespread use of databases over the web, heterogeneous client-server architectures, application servers, and networks creates a critical need to amplify this focus. In this paper we also discuss an overview of the new and old

  1. New methodology of solar radiation evaluation using free access databases in specific locations

    Energy Technology Data Exchange (ETDEWEB)

    Pagola, Inigo; Gaston, Martin [CENER (National Renewable Energy Centre), Ciudad de la Innovacion 7, Sarriguren 31621 (Navarre) (Spain); Fernandez-Peruchena, Carlos [CENER (National Renewable Energy Centre), Pabellon de Italia, Isaac Newton 4 5 SO, 41092 Sevilla (Spain); Moreno, Sara [AICIA Pabellon de Italia, Isaac Newton 4 5 SO, Sevilla 41092 Sevilla (Spain); Ramirez, Lourdes [CENER (National Renewable Energy Centre), Urbanizacion La Florida, Somera 7-9 1D, 28023 Madrid (Spain)

    2010-12-15

    In this paper, solar radiation obtained from several frequently used databases is compared at a number of different locations. In the analyzed databases, the data come from ground measurement networks or from different models with different resolutions. The proposed methodology assumes the hypothesis that the uncertainty of the databases is approximately the same as the meteorological uncertainty of the location, so that the heterogeneity among the databases reflects genuinely different observations. A weighted average is proposed, taking into account the different temporal and spatial characteristics of each database, together with an estimation of the standard deviation of the weighted observations, from which the expected meteorological variability is derived. (author)
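
    The weighted-average idea can be sketched as follows. The weighting scheme here is generic and the function name is invented; the paper derives its weights from each database's temporal and spatial characteristics rather than taking them as inputs.

```python
import math

def weighted_irradiance(observations, weights):
    """Combine irradiance estimates from several databases.

    observations: per-database values (e.g. annual kWh/m^2)
    weights:      per-database weights (here simply given)
    Returns the weighted mean and the weighted standard deviation,
    the latter serving as a proxy for the site's meteorological
    variability.
    """
    total_w = sum(weights)
    mean = sum(w * x for x, w in zip(observations, weights)) / total_w
    var = sum(w * (x - mean) ** 2 for x, w in zip(observations, weights)) / total_w
    return mean, math.sqrt(var)
```

    With equal weights this reduces to the ordinary mean and population standard deviation of the database values.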

  2. A Framework for Federated Two-Factor Authentication Enabling Cost-Effective Secure Access to Distributed Cyberinfrastructure

    Energy Technology Data Exchange (ETDEWEB)

    Ezell, Matthew A [ORNL; Rogers, Gary L [University of Tennessee, Knoxville (UTK); Peterson, Gregory D. [University of Tennessee, Knoxville (UTK)

    2012-01-01

    As cyber attacks become increasingly sophisticated, the security measures used to mitigate the risks must also increase in sophistication. One time password (OTP) systems provide strong authentication because security credentials are not reusable, thus thwarting credential replay attacks. The credential changes regularly, making brute-force attacks significantly more difficult. In high performance computing, end users may require access to resources housed at several different service provider locations. The ability to share a strong token between multiple computing resources reduces cost and complexity. The National Science Foundation (NSF) Extreme Science and Engineering Discovery Environment (XSEDE) provides access to digital resources, including supercomputers, data resources, and software tools. XSEDE will offer centralized strong authentication for services amongst service providers that leverage their own user databases and security profiles. This work implements a scalable framework built on standards to provide federated secure access to distributed cyberinfrastructure.
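
    One-time passwords of the kind such federated systems build on can be generated with the standard TOTP construction (RFC 6238). The sketch below uses only the Python standard library; it illustrates OTP mechanics in general, not XSEDE's actual implementation, and the function name is mine.

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time=None, step=30, digits=6):
    """RFC 6238 time-based one-time password (HMAC-SHA1 variant).

    The code changes every `step` seconds, so a captured credential
    cannot be replayed later: the property the abstract highlights.
    """
    t = for_time if for_time is not None else time.time()
    counter = int(t // step)
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

    With the RFC 6238 test secret `12345678901234567890` and T = 59 s, the 8-digit code is 94287082, matching the specification's Appendix B test vector.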

  3. H2O: An Autonomic, Resource-Aware Distributed Database System

    CERN Document Server

    Macdonald, Angus; Kirby, Graham

    2010-01-01

    This paper presents the design of an autonomic, resource-aware distributed database which enables data to be backed up and shared without complex manual administration. The database, H2O, is designed to make use of unused resources on workstation machines. Creating and maintaining highly-available, replicated database systems can be difficult for untrained users, and costly for IT departments. H2O reduces the need for manual administration by autonomically replicating data and load-balancing across machines in an enterprise. Provisioning hardware to run a database system can be unnecessarily costly as most organizations already possess large quantities of idle resources in workstation machines. H2O is designed to utilize this unused capacity by using resource availability information to place data and plan queries over workstation machines that are already being used for other tasks. This paper discusses the requirements for such a system and presents the design and implementation of H2O.

  4. GPCALMA: A Tool For Mammography With A GRID-Connected Distributed Database

    CERN Document Server

    Bottigli, U; Cheran, S C; Delogu, P; Fantacci, M E; Fauci, F; Golosio, B; Lauria, A; Torres, E L; Magro, R; Masala, G L; Oliva, P; Palmiero, R; Raso, G; Retico, A; Stumbo, S; Tangaro, S

    2004-01-01

    The GPCALMA (Grid Platform for Computer Assisted Library for MAmmography) collaboration involves several physics departments, INFN sections, and Italian hospitals. The aim of this collaboration is to develop a tool that can help radiologists in the early detection of breast cancer. GPCALMA has built a large distributed database of digitised mammographic images (about 5500 images corresponding to 1650 patients) and developed CAD (Computer Aided Detection) software integrated in a station that can also be used to acquire new images, serve as an archive and perform statistical analysis. The images are completely described: pathological ones carry a consistent characterization with the radiologist's diagnosis and histological data, while non-pathological ones correspond to patients with a follow-up of at least three years. The distributed database is realized through the connection of all the hospitals and research centres using GRID technology. In each hospital, local patients' digital images are stored in the local database....

  5. Security Study of Access XP Database

    Institute of Scientific and Technical Information of China (English)

    李海燕

    2005-01-01

    With the arrival of the information age and the rapid development of the Internet, the economic and social value of information resources has become increasingly evident, and the data managed and stored by database systems has become a valuable information resource for every organization, so its security and confidentiality are urgent problems to be solved. From the perspective of database security and confidentiality, the author describes in detail the security mechanisms of the Microsoft Access XP database, discusses various methods for implementing the different protection levels of Microsoft Access XP databases, and also addresses share-level and user-level security.

  6. Fast decision tree-based method to index large DNA-protein sequence databases using hybrid distributed-shared memory programming model.

    Science.gov (United States)

    Jaber, Khalid Mohammad; Abdullah, Rosni; Rashid, Nur'Aini Abdul

    2014-01-01

    In recent times, the size of biological databases has increased significantly, with continuous growth in the number of users and the rate of queries, such that some databases have reached terabyte size. There is therefore an increasing need to access databases at the fastest possible rates. In this paper, the decision tree indexing model (PDTIM) was parallelised using a hybrid of distributed and shared memory on a resident database, with horizontal and vertical growth through the Message Passing Interface (MPI) and POSIX Threads (PThreads), to accelerate index building time. The PDTIM was implemented using 1, 2, 4 and 5 processors on 1, 2, 3 and 4 threads respectively. The results show that the hybrid technique improved the speedup compared to the sequential version. It can be concluded from the results that the proposed PDTIM is appropriate for large data sets in terms of index building time.
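    The abstract does not reproduce PDTIM's internals; as a rough illustration of the shared-memory half of the idea, the sketch below partitions a small sequence database across worker threads, each building a local k-mer index shard that is then merged (in the paper, MPI would additionally distribute partitions across machines). All names here are illustrative:

```python
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor

def index_partition(seqs, k=3):
    """Build a k-mer -> [(seq_id, pos)] index shard for one partition."""
    shard = defaultdict(list)
    for seq_id, seq in seqs:
        for pos in range(len(seq) - k + 1):
            shard[seq[pos:pos + k]].append((seq_id, pos))
    return shard

def build_index(database, k=3, workers=4):
    """Split the database into partitions, index them in parallel, merge shards."""
    parts = [database[i::workers] for i in range(workers)]
    index = defaultdict(list)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for shard in pool.map(lambda p: index_partition(p, k), parts):
            for kmer, hits in shard.items():
                index[kmer].extend(hits)  # merge each shard into the global index
    return index

db = [(0, "ACGTAC"), (1, "CGTACG")]
idx = build_index(db)
print(sorted(idx["CGT"]))  # → [(0, 1), (1, 0)]
```

Because of Python's global interpreter lock this sketch shows the structure rather than the speedup; a native MPI/PThread implementation, as in the paper, parallelises in earnest.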

  7. Generation and analysis of 29,745 unique Expressed Sequence Tags from the Pacific oyster (Crassostrea gigas) assembled into a publicly accessible database: the GigasDatabase

    Directory of Open Access Journals (Sweden)

    Klopp Christophe

    2009-07-01

    Full Text Available Abstract Background Although bivalves are among the most-studied marine organisms because of their ecological role and economic importance, very little information is available on the genome sequences of oyster species. This report documents three large-scale cDNA sequencing projects for the Pacific oyster Crassostrea gigas initiated to provide a large number of expressed sequence tags that were subsequently compiled in a publicly accessible database. This resource allowed for the identification of a large number of transcripts and provides valuable information for ongoing investigations of tissue-specific and stimulus-dependent gene expression patterns. These data are crucial for constructing comprehensive DNA microarrays, identifying single nucleotide polymorphisms and microsatellites in coding regions, and for identifying genes when the entire genome sequence of C. gigas becomes available. Description In the present paper, we report the production of 40,845 high-quality ESTs that identify 29,745 unique transcribed sequences consisting of 7,940 contigs and 21,805 singletons. All of these new sequences, together with existing public sequence data, have been compiled into a publicly available website http://public-contigbrowser.sigenae.org:9090/Crassostrea_gigas/index.html. Approximately 43% of the unique ESTs had significant matches against the SwissProt database and 27% were annotated using Gene Ontology terms. In addition, we identified a total of 208 in silico microsatellites from the ESTs, with 173 having sufficient flanking sequence for primer design. We also identified a total of 7,530 putative in silico single-nucleotide polymorphisms using existing and newly-generated EST resources for the Pacific oyster. Conclusion A publicly available database has been populated with 29,745 unique sequences for the Pacific oyster Crassostrea gigas. The database provides many tools to search cleaned and assembled ESTs. The user may input and submit

  8. Delaware Bay Database; Delaware Sea Grant College Program, 28 June 1988 (NODC Accession 8900151)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Delaware Bay database contains records of discrete quality observations, collected on 40 oceanographic cruises between May 1978 and October 1985. Each record...

  9. NODC Standard Product: World Ocean Database 1998 version 2 (5 disc set) (NODC Accession 0098461)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Since the first release of WOD98, the staff of the Ocean Climate Laboratory have performed additional quality control on the database. Version 2.0 also includes...

  10. NODC Standard Product: World Ocean Database 2001 (8 disc set) (NODC Accession 0000720)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — World Ocean Database 2001 (WOD01) is comprised of 8 CD-ROMs and contains in situ profile data such as temperature, salinity, nutrients, oxygen, chlorophyll, plankton...

  11. Accessing the SEED genome databases via Web services API: tools for programmers

    National Research Council Canada - National Science Library

    Disz, Terry; Akhter, Sajia; Cuevas, Daniel; Olson, Robert; Overbeek, Ross; Vonstein, Veronika; Stevens, Rick; Edwards, Robert A

    2010-01-01

    .... The database contains accurate and up-to-date annotations based on the subsystems concept that leverages clustering between genomes and other clues to accurately and efficiently annotate microbial genomes...

  12. NODC Standard Product: World Ocean Database 2009 (2 disc set) (NCEI Accession 0094887)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — World Ocean Database 2009 (WOD09) is a collection of scientifically quality-controlled ocean profile and plankton data that includes measurements of temperature,...

  13. Mining The Data From Distributed Database Using An Improved Mining Algorithm

    CERN Document Server

    Renjit, J Arokia

    2010-01-01

    Association rule mining (ARM) is an active data mining research area, and most ARM algorithms cater to a centralized environment. Centralized data mining to discover useful patterns in distributed databases isn't always feasible because merging data sets from different sites incurs huge network communication costs. In this paper, an improved algorithm with a good performance level for data mining is proposed. At local sites, it runs the application based on the improved LMatrix algorithm, which is used to calculate local support counts. The local sites also select a centre site to manage every message exchanged to obtain all globally frequent item sets. The algorithm also reduces the time needed to scan the partitioned database by using LMatrix, which increases performance. Therefore, the research is to develop a distributed algorithm for geographically distributed data sets with reduced communication costs, superior running efficiency, and stronger scalability than direct application of a sequential algorithm in d...
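    The scheme the abstract describes, where sites count support locally and exchange only counts through a centre site, can be sketched generically. The LMatrix structure itself is not described in the excerpt, so the sketch below uses plain counting and illustrative names:

```python
from collections import Counter
from itertools import combinations

def local_support(partition, max_len=2):
    """Each site counts the support of candidate itemsets over its local transactions."""
    counts = Counter()
    for txn in partition:
        items = sorted(set(txn))
        for n in range(1, max_len + 1):
            for itemset in combinations(items, n):
                counts[itemset] += 1
    return counts

def global_frequent(partitions, min_support, max_len=2):
    """Centre site sums the local counts and keeps the globally frequent itemsets."""
    total = Counter()
    for counts in (local_support(p, max_len) for p in partitions):
        total.update(counts)  # only counts cross the network, never raw transactions
    return {s: c for s, c in total.items() if c >= min_support}

site_a = [["bread", "milk"], ["bread"]]
site_b = [["bread", "milk"], ["milk"]]
print(global_frequent([site_a, site_b], min_support=3))  # → {('bread',): 3, ('milk',): 3}
```

The communication saving is the point: each site ships a table of counts rather than its transaction data, which is what makes the distributed approach cheaper than centralized mining.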

  14. New tools and methods for direct programmatic access to the dbSNP relational database.

    Science.gov (United States)

    Saccone, Scott F; Quan, Jiaxi; Mehta, Gaurang; Bolze, Raphael; Thomas, Prasanth; Deelman, Ewa; Tischfield, Jay A; Rice, John P

    2011-01-01

    Genome-wide association studies often incorporate information from public biological databases in order to provide a biological reference for interpreting the results. The dbSNP database is an extensive source of information on single nucleotide polymorphisms (SNPs) for many different organisms, including humans. We have developed free software that will download and install a local MySQL implementation of the dbSNP relational database for a specified organism. We have also designed a system for classifying dbSNP tables in terms of common tasks we wish to accomplish using the database. For each task we have designed a small set of custom tables that facilitate task-related queries and provide entity-relationship diagrams for each task composed from the relevant dbSNP tables. In order to expose these concepts and methods to a wider audience we have developed web tools for querying the database and browsing documentation on the tables and columns to clarify the relevant relational structure. All web tools and software are freely available to the public at http://cgsmd.isi.edu/dbsnpq. Resources such as these for programmatically querying biological databases are essential for viably integrating biological information into genetic association experiments on a genome-wide scale.

  15. Visual Access to Visual Images: The UC Berkeley Image Database Project.

    Science.gov (United States)

    Besser, Howard

    1990-01-01

    Discusses the problem of access in managing image collections and describes a prototype system for the University of California Berkeley which would include the University Art Museum, Architectural Slide Library, Geography Department's Map Library and Lowie Museum of Anthropology photographs. The system combines an online public access catalog…

  17. SymbioGenomesDB: a database for the integration and access to knowledge on host-symbiont relationships.

    Science.gov (United States)

    Reyes-Prieto, Mariana; Vargas-Chávez, Carlos; Latorre, Amparo; Moya, Andrés

    2015-01-01

    Symbiotic relationships occur naturally throughout the tree of life, either in a commensal, mutualistic or pathogenic manner. The genomes of multiple organisms involved in symbiosis are rapidly being sequenced and becoming available, especially those from the microbial world. Currently, there are numerous databases that offer information on specific organisms or models, but none offer a global understanding on relationships between organisms, their interactions and capabilities within their niche, as well as their role as part of a system, in this case, their role in symbiosis. We have developed the SymbioGenomesDB as a community database resource for laboratories which intend to investigate and use information on the genetics and the genomics of organisms involved in these relationships. The ultimate goal of SymbioGenomesDB is to host and support the growing and vast symbiotic-host relationship information, to uncover the genetic basis of such associations. SymbioGenomesDB maintains a comprehensive organization of information on genomes of symbionts from diverse hosts throughout the Tree of Life, including their sequences, their metadata and their genomic features. This catalog of relationships was generated using computational tools, custom R scripts and manual integration of data available in public literature. As a highly curated and comprehensive systems database, SymbioGenomesDB provides web access to all the information of symbiotic organisms, their features and links to the central database NCBI. Three different tools can be found within the database to explore symbiosis-related organisms, their genes and their genomes. Also, we offer an orthology search for one or multiple genes in one or multiple organisms within symbiotic relationships, and every table, graph and output file is downloadable and easy to parse for further analysis. The robust SymbioGenomesDB will be constantly updated to cope with all the data being generated and included in major

  18. Multi-Layer Database Access Optimization Technology Supporting Multi-Mode Recommendation

    Institute of Scientific and Technical Information of China (English)

    李晓东; 魏惠茹

    2015-01-01

    To solve the problem of recommending associated data during database access, multi-layer temporal attribute reconstruction of the database is performed to improve database access capability. Traditional temporal attribute reconstruction techniques use text-feature classification, which cannot effectively satisfy the database access environment of multi-mode data recommendation. This paper proposes an optimized multi-layer temporal attribute reconstruction and access technique for databases that supports multi-mode recommendation. A multi-layer temporal data reconstruction structure model of the database is built, and adaptive threshold optimization is carried out during reconstruction. For each transmission node, the direct and indirect trust values towards other nodes in data distribution are computed, a multi-mode recommendation relationship graph is constructed, and the average mutual information method is used to solve for the adaptive threshold of the database's multi-layer temporal attributes; the behaviour of database access nodes is monitored to optimize database access. Simulation results show that the method effectively analyses the multi-layer temporal attribute data structure of the database, implements multi-mode user recommendation for the database, improves database access performance and data scheduling, and improves universality and accuracy.

  19. A Secure Time-Stamp Based Concurrency Control Protocol For Distributed Databases

    Directory of Open Access Journals (Sweden)

    Shashi Bhushan

    2007-01-01

    Full Text Available In distributed database systems the global database is partitioned into a collection of local databases stored at different sites. In this era of growing technology and fast communication media, security has an important role to play. In this paper we present a secure concurrency control protocol (SCCP) based on timestamp ordering, which provides concurrency control and maintains security. We also implemented SCCP, and a comparison of SCCP is presented for three cases (high, medium and low security levels). In this experiment, it is observed that the throughput of the system decreases as the security level of the transaction increases, i.e., there is a tradeoff between the security level and the throughput of the system.
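    The security-level machinery of SCCP is not detailed in the abstract, but the timestamp-ordering rules it builds on can be sketched; this is a minimal illustration of basic timestamp ordering, not the authors' protocol:

```python
class DataItem:
    """A database item tagged with the timestamps that last touched it."""
    def __init__(self):
        self.read_ts = 0   # largest timestamp of any transaction that read the item
        self.write_ts = 0  # largest timestamp of any transaction that wrote the item

def to_read(item, ts):
    """Timestamp-ordering read rule: a transaction may not read a 'future' write."""
    if ts < item.write_ts:
        return False  # transaction is too old; it must abort and restart
    item.read_ts = max(item.read_ts, ts)
    return True

def to_write(item, ts):
    """Timestamp-ordering write rule: a write must not invalidate later reads/writes."""
    if ts < item.read_ts or ts < item.write_ts:
        return False
    item.write_ts = ts
    return True

x = DataItem()
assert to_write(x, ts=5)      # T5 writes x
assert not to_read(x, ts=3)   # older T3 must not read T5's write: rejected
assert to_read(x, ts=8)       # newer T8 may read
```

Each rejection forces the offending transaction to restart with a fresh timestamp, which is how the protocol serializes transactions in timestamp order without locks.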

  20. A development and integration of database code-system with a compilation of comparator, k0 and absolute methods for INAA using Microsoft Access

    Science.gov (United States)

    Hoh, Siew Sin; Rapie, Nurul Nadiah; Lim, Edwin Suh Wen; Tan, Chun Yuan; Yavar, Alireza; Sarmani, Sukiman; Majid, Amran Ab.; Khoo, Kok Siong

    2013-05-01

    Instrumental Neutron Activation Analysis (INAA) is often used to determine and calculate the elemental concentrations of a sample at The National University of Malaysia (UKM), typically in the Nuclear Science Programme, Faculty of Science and Technology. The objective of this study was to develop a database code-system based on Microsoft Access 2010 which could help INAA users to choose either the comparator method, the k0-method or the absolute method for calculating the elemental concentrations of a sample. This study also integrated k0data, Com-INAA, k0Concent, k0-Westcott and Abs-INAA to execute and complete the ECC-UKM database code-system. After the integration, a study was conducted to test the effectiveness of the ECC-UKM database code-system by comparing the concentrations between the experiments and the code-systems. 'Triple Bare Monitor' Zr-Au and Cr-Mo-Au were used in the k0Concent, k0-Westcott and Abs-INAA code-systems as monitors to determine the thermal to epithermal neutron flux ratio (f). Calculations involved in determining the concentration were net peak area (Np), measurement time (tm), irradiation time (tirr), k-factor (k), thermal to epithermal neutron flux ratio (f), parameter of the epithermal neutron flux distribution (α) and detection efficiency (ɛp). For the Com-INAA code-system, certified reference material IAEA-375 Soil was used to calculate the concentrations of elements in a sample. Other CRMs and SRMs were also used in this database code-system. Later, a verification process to examine the effectiveness of the Abs-INAA code-system was carried out by comparing the sample concentrations between the code-system and the experiment. The experimental concentration values obtained with the ECC-UKM database code-system showed good accuracy.

  1. Government databases and public health research: facilitating access in the public interest.

    Science.gov (United States)

    Adams, Carolyn; Allen, Judy

    2014-06-01

    Access to datasets of personal health information held by government agencies is essential to support public health research and to promote evidence-based public health policy development. Privacy legislation in Australia allows the use and disclosure of such information for public health research. However, access is not always forthcoming in a timely manner and the decision-making process undertaken by government data custodians is not always transparent. Given the public benefit in research using these health information datasets, this article suggests that it is time to recognise a right of access for approved research and that the decisions, and decision-making processes, of government data custodians should be subject to increased scrutiny. The article concludes that researchers should have an avenue of external review where access to information has been denied or unduly delayed.

  2. Security Research on Engineering Database System

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    The engine engineering database system is a CAD-oriented applied database management system with the capability of managing distributed data. This paper discusses the security issues of the engine engineering database management system (EDBMS). Through studying and analyzing database security, a series of security rules is derived that reaches the B1-level security standard, which includes discretionary access control (DAC), mandatory access control (MAC) and audit. The EDBMS implements functions of DAC, ...

  3. An Internet-Accessible DNA Sequence Database for Identifying Fusaria from Human and Animal Infections

    Science.gov (United States)

    Because less than one-third of clinically relevant fusaria can be accurately identified to species level using phenotypic data (i.e., morphological species recognition), we constructed a three-locus DNA sequence database to facilitate molecular identification of the 69 Fusarium species associated wi...

  4. DOPA: GPU-based protein alignment using database and memory access optimizations

    NARCIS (Netherlands)

    Hasan, L.; Kentie, M.; Al-Ars, Z.

    2011-01-01

    Background Smith-Waterman (S-W) algorithm is an optimal sequence alignment method for biological databases, but its computational complexity makes it too slow for practical purposes. Heuristics-based approximate methods like FASTA and BLAST provide faster solutions but at the cost of reduced accuracy

  5. Freely accessible databases of commercial compounds for high- throughput virtual screenings.

    Science.gov (United States)

    Moura Barbosa, Arménio Jorge; Del Rio, Alberto

    2012-01-01

    In the last decades computer-aided drug design techniques have been successfully used to guide the selection of new hit compounds with biological activity. These methods, which include a broad range of chemoinformatic and computational chemistry algorithms, are still disciplines in full bloom. In particular, virtual screening procedures have gained great popularity for the rapid and cost-effective assessment of large chemical libraries of commercial compounds. While the usage of in silico techniques promises an effective speed-up at the early stage of the development of new active compounds, computational projects starting from scratch with raw chemical data are often associated with resource- and time-consuming preparation protocols, almost blunting the advantages of using these techniques. To help face these difficulties, several chemoinformatic projects and tools have emerged in the literature in recent years and have been useful in preparing curated databases of chemical compounds for high-throughput virtual screening purposes. The review will focus on a detailed analysis of free databases of commercial chemical compounds that are currently employed in virtual screening campaigns for drug design. The scope of this review is to compare such databases and advise the reader on how, and under which conditions, their usage can be recommended.

  7. An internet-accessible DNA sequence database for identifying fusaria from human and animal infections

    NARCIS (Netherlands)

    O'Donnell, K.; Sutton, D.A.; Rinaldi, M.G.; Sarver, B.A.J.; Balajee, S.A.; Schroers, H.J.; Summerbell, R.C.; Robert, V.A.R.G.; Crous, P.W.; Zhang, N.; Aoki, T.; Jung, K.; Park, J.; Lee, Y.H.; Kang, S.; Park, B.; Geiser, D.M.

    2010-01-01

    Because less than one-third of clinically relevant fusaria can be accurately identified to species level using phenotypic data (i.e., morphological species recognition), we constructed a three-locus DNA sequence database to facilitate molecular identification of the 69 Fusarium species associated wi

  8. NODC Standard Product: World Ocean Database 1998 version 1 (5 disc set) (NODC Accession 0095340)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The World Ocean Database 1998 (WOD98) is comprised of five CD-ROMs containing profile and plankton/biomass data in compressed format. WOD98-01 through WOD98-04...

  9. Clustering-based fragmentation and data replication for flexible query answering in distributed databases

    OpenAIRE

    Wiese, Lena

    2014-01-01

    One feature of cloud storage systems is data fragmentation (or sharding) so that data can be distributed over multiple servers and subqueries can be run in parallel on the fragments. On the other hand, flexible query answering can enable a database system to find related information for a user whose original query cannot be answered exactly. Query generalization is a way to implement flexible query answering on the syntax level. In this paper we study a clustering-based fragmentat...

  10. A Survey of Concurrency Control Mechanisms for Centralized and Distributed Databases.

    Science.gov (United States)

    1981-02-01

    ...migration checking approach, the transaction migrates from one site to another during execution. At each site, the system checks whether the migrating... [Kan79], employs a method to sequence the execution of transactions on the basis of synchronous counters, known as logical clocks. The algorithm... "Synchronization Method for Duplicated Database Control", Proceedings of the First International Conference on Distributed Computing Systems, pp. 601-611

  11. Design and Implementation of the Digital Engineering Laboratory Distributed Database Management System.

    Science.gov (United States)

    1984-12-01

    ...problems. Dawson of Mitre Corporation (2) discusses using distributed databases for a field-deployable, tactical air control system. The Worldwide Military Command and Control System is heavily dependent on networking capabilities, and in an article Coles of Mitre Corporation (1) discusses current...

  12. Distributed Storage Codes Meet Multiple-Access Wiretap Channels

    CERN Document Server

    Papailiopoulos, Dimitris S

    2010-01-01

    We consider (i) the overhead minimization of maximum-distance separable (MDS) storage codes for the repair of a single failed node and (ii) the total secure degrees-of-freedom (S-DoF) maximization in a multiple-access compound wiretap channel. We show that the two problems are connected. Specifically, the overhead minimization for a single node failure of an optimal MDS code, i.e. one that can achieve the information theoretic overhead minimum, is equivalent to maximizing the S-DoF in a multiple-access compound wiretap channel. Additionally, we show that maximizing the S-DoF in a multiple-access compound wiretap channel is equivalent to minimizing the overhead of an MDS code for the repair of a departed node. An optimal MDS code maps to a full S-DoF channel and a full S-DoF channel maps to an MDS code with minimum repair overhead for one failed node. We also state a general framework for code-to-channel and channel-to-code mappings and performance bounds between the two settings. The underlyin...
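    As a toy illustration of MDS repair (unrelated to the paper's S-DoF construction), a (n=3, k=2) code with one XOR parity node lets any two surviving nodes rebuild a failed one. Note that this naive repair downloads both survivors in full, which is exactly the repair overhead the abstract seeks to minimize:

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Bytewise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def encode(a: bytes, b: bytes) -> dict:
    """Toy (n=3, k=2) MDS code: any 2 of the 3 nodes suffice to recover the data."""
    return {"n1": a, "n2": b, "n3": xor_bytes(a, b)}

def repair(nodes: dict, failed: str) -> bytes:
    """Rebuild a single failed node by XORing the two survivors."""
    survivors = [v for name, v in nodes.items() if name != failed]
    return xor_bytes(*survivors)

nodes = encode(b"\x0f\xf0", b"\xaa\x55")
assert repair(nodes, "n3") == xor_bytes(b"\x0f\xf0", b"\xaa\x55")  # parity rebuilt
assert repair(nodes, "n1") == b"\x0f\xf0"  # data block rebuilt from n2 XOR n3
```

Regenerating codes, the setting the paper studies, reduce this download: the replacement node fetches less than a full block from each helper while keeping the MDS property.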

  13. Improved Integrity Constraints Checking in Distributed Databases by Exploiting Local Checking

    Institute of Scientific and Technical Information of China (English)

    Ali A.Alwan; Hamidah Ibrahim; Nur Izura Udzir

    2009-01-01

    Most previous studies concerning checking integrity constraints in distributed databases derive simplified forms of the initial integrity constraints with the sufficiency property, since the sufficient test is known to be cheaper than the complete test and the initial integrity constraint, as it involves less data to be transferred across the network and can always be evaluated at the target site (a single site). These studies are limited, as they depend strictly on the assumption that an update operation will be executed at the site where the relation specified in the update operation is located, which is not always true. Hence, the sufficient test, which was proven to be a local test by previous study, is no longer appropriate. This paper proposes an approach to checking integrity constraints in a distributed database by utilizing as much as possible the local information stored at the target site. The proposed approach derives support tests as an alternative to the existing complete and sufficient tests proposed by previous researchers, with the intention of increasing the amount of local checking regardless of the location of the submitted update operation. Several analyses have been performed to evaluate the proposed approach, and the results show that support tests can benefit the distributed database, where local constraint checking can be achieved.
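    The distinction between a sufficient (local) test and a complete (global) test can be illustrated with a hypothetical domain constraint; the relation, attribute and limit below are illustrative, not taken from the paper:

```python
# Illustrative global constraint: every employee salary is below LIMIT.
LIMIT = 10_000

def sufficient_local_test(new_tuple: dict) -> bool:
    """Checks only the inserted tuple at the target site. If it passes,
    the global constraint is guaranteed to still hold; no remote data
    needs to be fetched, so the test is fully local."""
    return new_tuple["salary"] < LIMIT

def complete_test(all_fragments: list, new_tuple: dict) -> bool:
    """Evaluates the full constraint over every site's fragment plus the
    new tuple; correct but requires shipping data across the network."""
    rows = [r for frag in all_fragments for r in frag] + [new_tuple]
    return all(r["salary"] < LIMIT for r in rows)

site1 = [{"salary": 4_000}]
site2 = [{"salary": 7_500}]
new = {"salary": 9_000}
assert sufficient_local_test(new)          # cheap local check suffices here
assert complete_test([site1, site2], new)  # the global check agrees, at higher cost
```

For this constraint the sufficient test happens to be evaluable anywhere, which is the favourable case; the paper's support tests target updates submitted at sites other than the one holding the relevant relation, where a plain sufficient test no longer applies.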

  14. Method for Secure Access to Oracle Database Based on Proxy Service

    Institute of Scientific and Technical Information of China (English)

    褚孔统; 宋建宇; 王国强

    2012-01-01

    The network transmission protocol of the Oracle database is analyzed. Then, a method for secure access to the Oracle database using a proxy service is proposed to meet the high-standard confidentiality requirements of specific industries. The proxy service is used to capture access requests to the Oracle database, encrypt messages for transmission, and decode and redirect them to the Oracle database. The method can be used to secure access to the Oracle database.

  15. Research on the Data Access Architecture of ADO.NET

    Institute of Scientific and Technical Information of China (English)

    詹发荣

    2009-01-01

    ADO.NET is a data access technology based on the .NET Framework platform. This paper introduces the working principles of data access with ADO.NET, makes a brief comparison between ADO.NET and ADO, analyzes the two core categories of components for accessing databases under the ADO.NET framework, and finally notes that accessing different databases with ADO.NET requires importing the corresponding namespace.

  16. Large-Scale 1:1 Computing Initiatives: An Open Access Database

    Science.gov (United States)

    Richardson, Jayson W.; McLeod, Scott; Flora, Kevin; Sauers, Nick J.; Kannan, Sathiamoorthy; Sincar, Mehmet

    2013-01-01

    This article details the spread and scope of large-scale 1:1 computing initiatives around the world. What follows is a review of the existing literature around 1:1 programs followed by a description of the large-scale 1:1 database. Main findings include: 1) the XO and the Classmate PC dominate large-scale 1:1 initiatives; 2) if professional…

  17. Editorial and scientific quality in the parameters for inclusion of journals commercial and open access databases

    OpenAIRE

    2015-01-01

    In this article, the parameters used by RedALyC, Catálogo Latindex, SciELO, Scopus and Web of Science for the incorporation of scientific journals into their collections are analyzed, with the goal of examining their relation to the objectives of each database, in addition to debating the value that the scientific community assigns to those systems as determinants of "scientific quality". The indicators used are classified into: 1) editorial quality (formal aspects or editorial management); 2) cont...

  18. Open-access evidence database of controlled trials and systematic reviews in youth mental health.

    Science.gov (United States)

    De Silva, Stefanie; Bailey, Alan P; Parker, Alexandra G; Montague, Alice E; Hetrick, Sarah E

    2017-05-10

    To present an update to an evidence-mapping project that consolidates the evidence base of interventions in youth mental health. To promote dissemination of this resource, the evidence map has been translated into a free online database (https://orygen.org.au/Campus/Expert-Network/Evidence-Finder or https://headspace.org.au/research-database/). Included studies are extensively indexed to facilitate searching. A systematic search for prevention and treatment studies in young people (mean age 6-25 years) is conducted annually using Embase, MEDLINE, PsycINFO and the Cochrane Library. Included studies are restricted to controlled trials and systematic reviews published since 1980. To date, 221 866 publications have been screened, of which 2680 have been included in the database. Updates are conducted annually. This shared resource can be utilized to substantially reduce the amount of time involved with conducting literature searches. It is designed to promote the uptake of evidence-based practice and facilitate research to address gaps in youth mental health. © 2017 John Wiley & Sons Australia, Ltd.

  19. Utilizing Multimedia Database Access: Teaching Strategies Using the iPad in the Dance Classroom

    Science.gov (United States)

    Ostashewski, Nathaniel; Reid, Doug; Ostashewski, Marcia

    2016-01-01

    This article presents action research that identified iPad tablet technology-supported teaching strategies in a dance classroom context. Dance classrooms use instructor-accessed music as a regular element of lessons, but video is both challenging and time-consuming to produce or display. The results of this study highlight how the Apple iPad…

  1. Central Appalachian basin natural gas database: distribution, composition, and origin of natural gases

    Science.gov (United States)

    Román Colón, Yomayra A.; Ruppert, Leslie F.

    2015-01-01

    The U.S. Geological Survey (USGS) has compiled a database consisting of three worksheets of central Appalachian basin natural gas analyses and isotopic compositions from published and unpublished sources of 1,282 gas samples from Kentucky, Maryland, New York, Ohio, Pennsylvania, Tennessee, Virginia, and West Virginia. The database includes field and reservoir names, well and State identification number, selected geologic reservoir properties, and the composition of natural gases (methane; ethane; propane; butane, iso-butane [i-butane]; normal butane [n-butane]; iso-pentane [i-pentane]; normal pentane [n-pentane]; cyclohexane, and hexanes). In the first worksheet, location and American Petroleum Institute (API) numbers from public or published sources are provided for 1,231 of the 1,282 gas samples. A second worksheet of 186 gas samples was compiled from published sources and augmented with public location information and contains carbon, hydrogen, and nitrogen isotopic measurements of natural gas. The third worksheet is a key for all abbreviations in the database. The database can be used to better constrain the stratigraphic distribution, composition, and origin of natural gas in the central Appalachian basin.
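
    A worksheet-style database like this is straightforward to query programmatically. The sketch below groups samples by state and averages methane content; the column names and values are invented placeholders, not the actual USGS field names.

```python
import csv
import io
import statistics

# Hypothetical extract standing in for the composition worksheet;
# the real USGS database uses its own column names and units.
raw = """state,reservoir,methane_pct,ethane_pct
WV,Oriskany,92.1,4.2
WV,Berea,88.5,6.3
KY,Devonian shale,94.0,3.1
"""

def methane_by_state(csv_text):
    """Group samples by state and average the methane fraction."""
    by_state = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        by_state.setdefault(row["state"], []).append(float(row["methane_pct"]))
    return {state: statistics.mean(vals) for state, vals in by_state.items()}

means = methane_by_state(raw)
print(means)  # one average methane percentage per state
```

    The same pattern extends to the isotopic worksheet by joining on the well/API identifier columns.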

  2. Accessing the distribution of linearly polarized gluons in unpolarized hadrons

    NARCIS (Netherlands)

    Boer, Daniël; Brodsky, Stanley J.; Mulders, Piet J.; Pisano, Cristian

    2011-01-01

    Gluons inside unpolarized hadrons can be linearly polarized provided they have a nonzero transverse momentum. The simplest and theoretically safest way to probe this distribution of linearly polarized gluons is through cos(2 phi) asymmetries in heavy quark pair or dijet production in electron-hadron

  3. A Distributed Intranet/Web Solution to Integrated Management of Access Networks

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    In this article, we describe the present situation of access network management, enumerate a few problems during the development of network management systems, then put forward a distributed Intranet/Web solution named iMAN to the integrated management of access networks, present its architecture and protocol stack, and describe its application in practice.

  4. Biofuel Database

    Science.gov (United States)

    Biofuel Database (Web, free access)   This database brings together structural, biological, and thermodynamic data for enzymes that are either in current use or are being considered for use in the production of biofuels.

  5. Application of Optical Disc Databases and Related Technology to Public Access Settings

    Science.gov (United States)

    1992-03-01


  6. Seismic Search Engine: A distributed database for mining large scale seismic data

    Science.gov (United States)

    Liu, Y.; Vaidya, S.; Kuzma, H. A.

    2009-12-01

    The International Monitoring System (IMS) of the CTBTO collects terabytes worth of seismic measurements from many receiver stations situated around the earth with the goal of detecting underground nuclear testing events and distinguishing them from other benign, but more common events such as earthquakes and mine blasts. The International Data Center (IDC) processes and analyzes these measurements, as they are collected by the IMS, to summarize event detections in daily bulletins. Thereafter, the data measurements are archived into a large format database. Our proposed Seismic Search Engine (SSE) will facilitate a framework for data exploration of the seismic database as well as the development of seismic data mining algorithms. Analogous to GenBank, the annotated genetic sequence database maintained by NIH, through SSE, we intend to provide public access to seismic data and a set of processing and analysis tools, along with community-generated annotations and statistical models to help interpret the data. SSE will implement queries as user-defined functions composed from standard tools and models. Each query is compiled and executed over the database internally before reporting results back to the user. Since queries are expressed with standard tools and models, users can easily reproduce published results within this framework for peer-review and making metric comparisons. As an illustration, an example query is “what are the best receiver stations in East Asia for detecting events in the Middle East?” Evaluating this query involves listing all receiver stations in East Asia, characterizing known seismic events in that region, and constructing a profile for each receiver station to determine how effective its measurements are at predicting each event. The results of this query can be used to help prioritize how data is collected, identify defective instruments, and guide future sensor placements.
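
    The "queries as user-defined functions composed from standard tools" idea can be sketched in miniature. Everything below is invented for illustration (station names, regions, detection records, the scoring rule); it only shows the compositional shape of the example query from the abstract.

```python
# Toy database: station -> region, plus archived (station, event_region,
# detection_success) records standing in for seismic measurements.
STATIONS = {"MKAR": "East Asia", "USRK": "East Asia", "ASAR": "Australia"}
DETECTIONS = [
    ("MKAR", "Middle East", True), ("MKAR", "Middle East", True),
    ("USRK", "Middle East", False), ("USRK", "Middle East", True),
    ("ASAR", "Middle East", True),
]

def stations_in(region):
    """Standard tool 1: list receiver stations in a region."""
    return [s for s, r in STATIONS.items() if r == region]

def detection_rate(station, event_region):
    """Standard tool 2: profile a station against events in a region."""
    hits = [ok for s, r, ok in DETECTIONS if s == station and r == event_region]
    return sum(hits) / len(hits) if hits else 0.0

def best_stations(receiver_region, event_region):
    """User-defined query composed from the tools above: rank stations in
    one region by how well they detect events in another."""
    return sorted(stations_in(receiver_region),
                  key=lambda s: detection_rate(s, event_region), reverse=True)

print(best_stations("East Asia", "Middle East"))
```

    Because the query is an ordinary composition of named building blocks, rerunning it for peer review or metric comparison is just re-executing the function.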

  7. A Distributed Metadata Management, Data Discovery and Access System

    CERN Document Server

    Palanisamy, Giriprakash; Green, Jim; Wilson, Bruce

    2010-01-01

    Mercury is a federated metadata harvesting, search and retrieval tool based on both open source and software developed at Oak Ridge National Laboratory. It was originally developed for NASA, and the Mercury development consortium now includes funding from NASA, USGS, and DOE. A major new version of Mercury was developed during 2007. This new version provides orders of magnitude improvements in search speed, support for additional metadata formats, integration with Google Maps for spatial queries, support for RSS delivery of search results, among other features. Mercury provides a single portal to information contained in disparate data management systems. It collects metadata and key data from contributing project servers distributed around the world and builds a centralized index. The Mercury search interfaces then allow the users to perform simple, fielded, spatial and temporal searches across these metadata sources. This centralized repository of metadata with distributed data sources provides extremely fa...
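
    The harvest-then-index pattern described above (distributed metadata sources, one centralized fielded index) can be sketched as follows. The provider names and records are invented; this is not Mercury's actual schema, only a minimal illustration of fielded search over harvested metadata.

```python
# Metadata records harvested from several hypothetical providers.
providers = {
    "ornl": [{"title": "soil carbon flux", "keywords": "carbon soil"}],
    "usgs": [{"title": "stream gauge data", "keywords": "hydrology stream"}],
}

def harvest(provider_map):
    """Build a centralized index: field -> term -> set of record ids."""
    index, records = {}, []
    for source, recs in provider_map.items():
        for rec in recs:
            rid = len(records)
            records.append({**rec, "source": source})
            for field, text in rec.items():
                for term in text.lower().split():
                    index.setdefault(field, {}).setdefault(term, set()).add(rid)
    return index, records

def fielded_search(index, records, field, term):
    """Look up records whose given field contains the given term."""
    return [records[rid] for rid in sorted(index.get(field, {}).get(term, ()))]

index, records = harvest(providers)
hits = fielded_search(index, records, "keywords", "carbon")
print([h["source"] for h in hits])
```

    Spatial and temporal searches follow the same shape, with range predicates over bounding-box and date fields instead of term lookup.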

  8. An Overview of Secure Mining of Association Rules in Horizontally Distributed Databases

    Directory of Open Access Journals (Sweden)

    Sonal Patil

    2015-10-01

    Full Text Available In this paper, we propose a protocol for secure mining of association rules in horizontally distributed databases. The current leading protocol is that of Kantarcioglu and Clifton, which builds on the Fast Distributed Mining (FDM) algorithm, an unsecured distributed version of the Apriori algorithm. The main ingredients of the proposed protocol are two novel secure multi-party algorithms: one computes the union of private subsets held by the interacting players, and the other tests whether an element held by one player is included in a subset held by another. The proposed protocol offers enhanced privacy with respect to the earlier one; it is also simpler and significantly more efficient in terms of communication rounds, communication cost and computational cost [1].
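
    For context, the *unsecured* union step that FDM performs (and that the secure protocols replace with multi-party computation) can be sketched as: each site publishes its locally frequent itemsets, their union forms the global candidate set, and global support is then verified. The sites, itemsets and transactions below are invented toy data.

```python
# Locally frequent itemsets reported by each site (horizontal partition).
SITES = {
    "site1": [{"a"}, {"b"}, {"a", "b"}],
    "site2": [{"a"}, {"c"}],
}
# Pooled transactions, used here only to verify global support.
TRANSACTIONS = [{"a", "b"}, {"a"}, {"a", "b", "c"}, {"c"}]
MIN_SUPPORT = 2  # absolute count across all transactions

def candidate_union(sites):
    """Plain (non-private) union of locally frequent itemsets."""
    return {frozenset(s) for itemsets in sites.values() for s in itemsets}

def globally_frequent(candidates, transactions, min_support):
    """Keep candidates whose support over all transactions is sufficient."""
    return {c for c in candidates
            if sum(c <= t for t in transactions) >= min_support}

cands = candidate_union(SITES)
result = globally_frequent(cands, TRANSACTIONS, MIN_SUPPORT)
print(sorted(sorted(c) for c in result))
```

    The privacy problem is visible even in this sketch: the plain union reveals which itemsets each site considers frequent, which is exactly what the secure union algorithm is designed to hide.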

  9. Accessing Databases with ASP Techniques

    Institute of Scientific and Technical Information of China (English)

    陈万平; 马秀峰; 宁洪涛

    2001-01-01

    By combining ASP with ADO (ActiveX Data Objects), web pages can be built that expose database content: SQL statements are executed from within the page, allowing visitors to query, insert, update and delete information in the site server's database directly from their browsers. Web page designers are thus freed from writing cumbersome CGI programs as before.

  10. Convenience and Medical Patient Database Benefits and Elasticity for Accessibility Therapy in Different Locations

    Directory of Open Access Journals (Sweden)

    Bambang Eka Purnama

    2012-09-01

    Full Text Available When a patient arrives at a hospital, clinic or physician's practice, the registration desk asks whether the patient has been there before. If so, the clerk asks for the patient's medication identification card (KIB), which is used to locate the patient's records. In a conventional workflow, the clerk then uses a tracer to find the paper records in a storage warehouse. With only a few patients this is manageable, but once the number of patients reaches the hundreds of thousands or millions it becomes a serious problem. Moreover, the record databases kept by individual hospitals are barely exploited: they are not exchanged with other hospitals when a patient arrives elsewhere for further treatment, nor used for research. This study aims to produce a computerized model of an inter-hospital medical information system. Its benefits are easier retrieval of medical record information, proper computerized storage of patient histories, and faster lookup of patient data, so that patients coming to a clinic are treated more quickly; when a patient visits another clinic, their medical resume can be retrieved from the database and analyzed immediately.

  11. Optimizing Parallel Access to the BaBar Database System Using CORBA Servers

    Institute of Scientific and Technical Information of China (English)

    JacekBecla; IgorGaponenko

    2001-01-01

    The BaBar experiment collected around 20 TB of data during its first 6 months of running. Now, after 18 months, the data size exceeds 300 TB, and according to prognoses this is a small fraction of the data expected in the coming months. In order to keep up with the data, significant effort was put into tuning the database system. This led to great performance improvements, as well as to inevitable system expansion: 450 simultaneous processing nodes are used for data reconstruction alone, and further growth beyond 600 nodes is expected soon. In such an environment, many complex operations are executed simultaneously on hundreds of machines, putting a huge load on the data servers and increasing network traffic. Introducing two CORBA servers halved startup time and dramatically offloaded the database servers, both data servers and lock servers. The paper describes the design and implementation of the two servers recently introduced in the BaBar system, the conditions OID server and the Clustering Server, and discusses the first experience of using them. A discussion of a Collection Server for data analysis, currently being designed, is included.

  12. [Open access to academic scholarship as a public policy resource: a study of the Capes database on Brazilian theses and dissertations].

    Science.gov (United States)

    da Silva Rosa, Teresa; Carneiro, Maria José

    2010-12-01

    Access to scientific knowledge is a valuable resource that can inform and validate positions taken in formulating public policy. But access to this knowledge can be challenging, given the diversity and breadth of available scholarship. Communication between the fields of science and politics requires both the dissemination of scholarship and access to it. We conducted a study using an open-access search tool in order to map existing knowledge on a specific topic: agricultural contributions to the preservation of biodiversity. The present article offers a critical view of access to the information available through the Capes database of Brazilian theses and dissertations.

  13. Gaussian Approximation for the Wireless Multi-access Interference Distribution and Its Applications

    CERN Document Server

    Inaltekin, Hazer

    2012-01-01

    This paper investigates the problem of Gaussian approximation for the wireless multi-access interference distribution in large spatial wireless networks. First, a principled methodology is presented to establish rates of convergence of the multi-access interference distribution to a Gaussian distribution for general bounded and power-law decaying path-loss functions. The model is general enough to also include various random wireless channel dynamics such as fading and shadowing arising from multipath propagation and obstacles existing in the communication environment. It is shown that the wireless multi-access interference distribution converges to the Gaussian distribution with the same mean and variance at a rate $\frac{1}{\sqrt{\lambda}}$, where $\lambda>0$ is a parameter controlling the intensity of the planar (possibly non-stationary) Poisson point process generating node locations. An explicit expression for the scaling coefficient is obtained as a function of fading statistics and the path-loss functi...
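
    The convergence claim can be probed numerically. The sketch below is a deliberately simplified Monte Carlo (no fading or shadowing, a bounded path loss g(r) = min(1, r^-4), nodes on a disc, illustrative parameter values) that draws aggregate interference from a Poisson field and checks the simulated mean against the exact mean given by Campbell's theorem.

```python
import math
import random
import statistics

def sample_interference(lam, R=5.0, alpha=4.0, rng=random):
    """One draw of aggregate interference from a Poisson field of nodes
    on a disc of radius R, with bounded path loss g(r) = min(1, r**-alpha)."""
    area = math.pi * R * R
    # Knuth's method for the Poisson point count; fine here because
    # lam * area is modest (exp(-lam*area) does not underflow).
    n, p, target = 0, 1.0, math.exp(-lam * area)
    while True:
        p *= rng.random()
        if p <= target:
            break
        n += 1
    total = 0.0
    for _ in range(n):
        r = R * math.sqrt(rng.random())  # uniform point on the disc
        total += min(1.0, r ** -alpha) if r > 0 else 1.0
    return total

random.seed(1)
lam, R = 2.0, 5.0
draws = [sample_interference(lam, R) for _ in range(2000)]
# Campbell's theorem: E[I] = lam * 2*pi * int_0^R r * g(r) dr
#                         = lam * 2*pi * (1/2 + (1/2)*(1 - R**-2))
exact_mean = lam * 2 * math.pi * (0.5 + 0.5 * (1 - R ** -2))
print(statistics.mean(draws), exact_mean)
```

    Repeating this for increasing lam and standardizing the draws would exhibit the 1/sqrt(lam) approach to normality that the paper quantifies.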

  14. WaveNet: A Web-Based Metocean Data Access, Processing and Analysis Tool; Part 5 - WW3 Database

    Science.gov (United States)

    2015-02-01

    ...modeling and planning missions require metocean data (e.g., winds, waves, tides, water levels). WaveNet is a web-based graphical-user-interface (GUI) ... It provides data for project planning, design, and evaluation studies, including how to generate input files for numerical wave models. WaveNet employs a Google ... (ERDC/CHL CHETN-IV-103, February 2015. Approved for public release; distribution is unlimited.)

  15. Study on distributed generation algorithm of variable precision concept lattice based on ontology heterogeneous database

    Science.gov (United States)

    WANG, Qingrong; ZHU, Changfeng

    2017-06-01

    Integration of distributed heterogeneous data sources is a key issue in big data applications. In this paper, the strategy of variable precision is introduced into the concept lattice, and a one-to-one mapping between the variable precision concept lattice and the ontology concept lattice is constructed: a local ontology is produced by building the variable precision concept lattice for each subsystem, and a distributed generation algorithm for the variable precision concept lattice based on an ontology heterogeneous database is proposed, drawing on the special relationship between concept lattices and ontology construction. Finally, taking the main concept lattice generated from the existing heterogeneous database as a standard, a case study was carried out to test the feasibility and validity of the algorithm, and the differences between the main concept lattice and the standard concept lattice were compared. The analysis shows that the algorithm can automatically carry out the construction of a distributed concept lattice over heterogeneous data sources.
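
    The underlying construction, a concept lattice, pairs each set of objects with the attributes they share. The sketch below enumerates the formal concepts of a tiny binary context using the classical (exact-precision) definition; the paper's variable-precision and distributed variants build on this. The context itself is invented.

```python
from itertools import chain, combinations

# A small formal context: object -> set of attributes.
CONTEXT = {
    "o1": {"a", "b"},
    "o2": {"a", "c"},
    "o3": {"a", "b", "c"},
}
ATTRIBUTES = set().union(*CONTEXT.values())

def intent(objects):
    """Attributes shared by all the given objects."""
    return set.intersection(*(CONTEXT[o] for o in objects)) if objects else set(ATTRIBUTES)

def extent(attributes):
    """Objects possessing all the given attributes."""
    return {o for o, attrs in CONTEXT.items() if attributes <= attrs}

def concepts():
    """All formal concepts (extent, intent pairs), found by closing every
    attribute subset: (extent(B), intent(extent(B))) is always a concept."""
    found = set()
    for subset in chain.from_iterable(combinations(sorted(ATTRIBUTES), r)
                                      for r in range(len(ATTRIBUTES) + 1)):
        ext = extent(set(subset))
        found.add((frozenset(ext), frozenset(intent(ext))))
    return found

cs = concepts()
print(len(cs))  # number of formal concepts in this context
```

    A variable-precision variant relaxes the `attributes <= attrs` test to a threshold on the fraction of shared attributes, which is what lets noisy heterogeneous sources map onto one lattice.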

  16. Distributed Storage Algorithm for Geospatial Image Data Based on Data Access Patterns.

    Science.gov (United States)

    Pan, Shaoming; Li, Yongkai; Xu, Zhengquan; Chong, Yanwen

    2015-01-01

    Declustering techniques are widely used in distributed environments to reduce query response time through parallel I/O: large files are split into several small blocks that are distributed among multiple storage nodes. However, many small geospatial image files cannot be further split for distributed storage. In this paper, we propose a complete theoretical system for the distributed storage of small geospatial image files based on mining the access patterns of geospatial image data from their historical access logs. First, an algorithm is developed to construct an access correlation matrix from an analysis of the log information, which reveals the patterns of access to the geospatial image data. Then, a practical heuristic algorithm is developed to determine a reasonable storage solution based on the access correlation matrix. Finally, a number of comparative experiments demonstrate that our algorithm achieves a total parallel access probability approximately 10-15% higher than other algorithms, and that performance can be further improved by more than 20% by simultaneously applying a copy storage strategy. These experiments show that the algorithm can be applied in distributed environments to help realize parallel I/O and thereby improve system performance.

  17. Distributed Storage Algorithm for Geospatial Image Data Based on Data Access Patterns.

    Directory of Open Access Journals (Sweden)

    Shaoming Pan

    Full Text Available Declustering techniques are widely used in distributed environments to reduce query response time through parallel I/O: large files are split into several small blocks that are distributed among multiple storage nodes. However, many small geospatial image files cannot be further split for distributed storage. In this paper, we propose a complete theoretical system for the distributed storage of small geospatial image files based on mining the access patterns of geospatial image data from their historical access logs. First, an algorithm is developed to construct an access correlation matrix from an analysis of the log information, which reveals the patterns of access to the geospatial image data. Then, a practical heuristic algorithm is developed to determine a reasonable storage solution based on the access correlation matrix. Finally, a number of comparative experiments demonstrate that our algorithm achieves a total parallel access probability approximately 10-15% higher than other algorithms, and that performance can be further improved by more than 20% by simultaneously applying a copy storage strategy. These experiments show that the algorithm can be applied in distributed environments to help realize parallel I/O and thereby improve system performance.
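
    The two steps described, building an access correlation matrix from logs and then placing files heuristically, can be sketched as below. The log data and the greedy tie-breaking are illustrative only, not the paper's exact algorithm; the idea is that frequently co-accessed files should land on different nodes so they can be fetched in parallel.

```python
from collections import Counter
from itertools import combinations

# Invented access log: each session lists the image tiles fetched together.
SESSIONS = [["tileA", "tileB"], ["tileA", "tileB"], ["tileC"]]
NODES = 2

def correlation_matrix(sessions):
    """Count how often each pair of files is accessed in the same session."""
    corr = Counter()
    for session in sessions:
        for f1, f2 in combinations(sorted(set(session)), 2):
            corr[(f1, f2)] += 1
    return corr

def co_access(corr, f1, f2):
    return corr[tuple(sorted((f1, f2)))]

def place(sessions, nodes):
    """Greedy placement: most-accessed files first, each onto the node
    where it conflicts least with files already stored there."""
    corr = correlation_matrix(sessions)
    freq = Counter(f for s in sessions for f in s)
    placement = {n: [] for n in range(nodes)}
    for f, _ in freq.most_common():
        best = min(placement, key=lambda n: sum(co_access(corr, f, g)
                                                for g in placement[n]))
        placement[best].append(f)
    return placement

placement = place(SESSIONS, NODES)
print(placement)
```

    With this log, the two strongly correlated tiles end up on different nodes, so a session touching both can be served by parallel I/O.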

  18. Scheduling with Bus Access Optimization for Distributed Embedded Systems

    DEFF Research Database (Denmark)

    Eles, Petru; Doboli, Alex; Pop, Paul;

    2000-01-01

    In this paper, we concentrate on aspects related to the synthesis of distributed embedded systems consisting of programmable processors and application-specific hardware components. The approach is based on an abstract graph representation that captures, at process level, both dataflow and the flow of control. Our goal is to derive a worst-case delay by which the system completes execution, such that this delay is as small as possible; to generate a logically and temporally deterministic schedule; and to optimize parameters of the communication protocol such that this delay is guaranteed. We have ... have to be considered during scheduling, but also the parameters of the communication protocol should be adapted to fit the particular embedded application. The optimization algorithm, which implies both process scheduling and optimization of the parameters related to the communication protocol...

  19. Experimental grid access for dynamic discovery and data transfer in distributed interactive simulation systems

    NARCIS (Netherlands)

    A. Tirado-Ramos; K. Zajac; Z. Zhao; P.M.A. Sloot; G.D. van Albada; M. Bubak

    2003-01-01

    Interactive Problem Solving Environments (PSEs) offer an integrated approach for constructing and running complex systems, such as distributed simulation systems. New distributed infrastructures, like the Grid, support the access to a large variety of core services and resources that can be used by

  20. Modeling the video distribution link in the Next Generation Optical Access Networks

    DEFF Research Database (Denmark)

    Amaya, F.; Cárdenas, A.; Tafur Monroy, Idelfonso

    2011-01-01

    In this work we present a model for the design and optimization of the video distribution link in the next generation optical access network. We analyze the video distribution performance in an SCM-WDM link, including the noise, the distortion and the fiber optic nonlinearities. Additionally, we...

  1. The mining of toxin-like polypeptides from EST database by single residue distribution analysis

    Directory of Open Access Journals (Sweden)

    Grishin Eugene

    2011-01-01

    Full Text Available Background: Novel high-throughput sequencing technologies require permanent development of bioinformatics data processing methods. Among them, rapid and reliable identification of encoded proteins plays a pivotal role. To search for particular protein families, amino acid sequence motifs suitable for selective screening of nucleotide sequence databases may be used. In this work, we suggest a novel method for simplified representation of protein amino acid sequences, named Single Residue Distribution Analysis, which is applicable both for homology search and database screening. Results: Using the procedure developed, a search for amino acid sequence motifs in sea anemone polypeptides was performed, and 14 different motifs with broad and low specificity were discriminated. The adequacy of the motifs for mining toxin-like sequences was confirmed by their ability to identify 100% of toxin-like anemone polypeptides in the reference polypeptide database. The employment of the novel motifs to search for polypeptide toxins in the Anemonia viridis EST dataset allowed us to identify 89 putative toxin precursors. The translated and modified ESTs were scanned using a special algorithm. In addition to direct comparison with the motifs developed, putative signal peptides were predicted and homology with known structures was examined. Conclusions: The suggested method may be used to retrieve structures of interest from EST databases using simple amino acid sequence motifs as templates. The efficiency of the procedure for directed search of polypeptides is higher than that of most currently used methods. Analysis of 39,939 ESTs of the sea anemone Anemonia viridis resulted in the identification of five protein precursors of earlier described toxins, the discovery of 43 novel polypeptide toxins, and the prediction of 39 putative polypeptide toxin sequences. In addition, two precursors of novel peptides presumably displaying neuronal function were disclosed.
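
    Motif-based screening of translated ESTs has a simple computational core: scan each sequence with an amino acid motif expressed as a pattern. The motif below (a generic cysteine spacing pattern) and the sequences are invented illustrations, not the paper's 14 actual motifs.

```python
import re

# A hypothetical toxin-like motif: four cysteines with short spacers.
MOTIF = re.compile(r"C.{2,6}C.{2,6}C.{2,6}C")

# Invented translated EST sequences.
SEQUENCES = {
    "est1": "MKTLLVAVCLLAGCTTTCDEPCRRKH",  # contains the cysteine framework
    "est2": "MKAILSTGGAAVLLNNPQRS",         # no cysteines at all
}

def screen(sequences, motif):
    """Return the ids of sequences containing the motif."""
    return [sid for sid, seq in sequences.items() if motif.search(seq)]

print(screen(SEQUENCES, MOTIF))
```

    In a full pipeline, hits like these would then be filtered by predicted signal peptides and by homology to known toxin structures, as the abstract describes.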

  2. Calculation of Investments for the Distribution of GPON Technology in the village of Bishtazhin through database

    Directory of Open Access Journals (Sweden)

    MSc. Jusuf Qarkaxhija

    2013-12-01

    Full Text Available According to daily reports, income from internet services is decreasing each year. Landline phone services are running at a loss, mobile phone services have become commoditized, and the only bright spot keeping cable operators (ISPs) in positive balance is the income from broadband services (fast internet, IPTV). Broadband technology is a term covering multiple methods of distributing information over the internet at high speed. Some broadband technologies are: optical fiber, coaxial cable, DSL, wireless, mobile broadband, and satellite connections. The ultimate goal of any broadband service provider is to deliver voice, data and video over a single network, the so-called triple play service. Internet distribution remains an important issue in Kosovo, particularly in rural zones. Considering the immense development of technologies and the different alternatives available, the goal of this paper is to emphasize the necessity of forecasting such investments and to share experience in this respect. Because such an investment involves many factors related to population, geography and several technologies, and because these factors change continuously, the best approach is to store all the data in a database and use this database to derive different results. The database replaces the previous manual calculations with an automatic calculation procedure. This improves the way of working, providing all the tools needed to make the right decision about an internet investment while considering all of its aspects.

  3. RAINBIO: a mega-database of tropical African vascular plants distributions

    Science.gov (United States)

    Dauby, Gilles; Zaiss, Rainer; Blach-Overgaard, Anne; Catarino, Luís; Damen, Theo; Deblauwe, Vincent; Dessein, Steven; Dransfield, John; Droissart, Vincent; Duarte, Maria Cristina; Engledow, Henry; Fadeur, Geoffrey; Figueira, Rui; Gereau, Roy E.; Hardy, Olivier J.; Harris, David J.; de Heij, Janneke; Janssens, Steven; Klomberg, Yannick; Ley, Alexandra C.; Mackinder, Barbara A.; Meerts, Pierre; van de Poel, Jeike L.; Sonké, Bonaventure; Sosef, Marc S. M.; Stévart, Tariq; Stoffelen, Piet; Svenning, Jens-Christian; Sepulchre, Pierre; van der Burgt, Xander; Wieringa, Jan J.; Couvreur, Thomas L. P.

    2016-01-01

    The tropical vegetation of Africa is characterized by high levels of species diversity but is undergoing important shifts in response to ongoing climate change and increasing anthropogenic pressures. Although our knowledge of plant species distribution patterns in the African tropics has been improving over the years, it remains limited. Here we present RAINBIO, a unique comprehensive mega-database of georeferenced records for vascular plants in continental tropical Africa. The geographic focus of the database is the region south of the Sahel and north of Southern Africa, and the majority of the data originate from tropical forest regions. RAINBIO is a compilation of 13 datasets, either publicly available or personal ones. Numerous in-depth data quality checks, both automatic and manual (by several African flora experts), were undertaken for georeferencing, standardization of taxonomic names, and identification and merging of duplicated records. The resulting RAINBIO data allow exploration and extraction of distribution data for 25,356 native tropical African vascular plant species, representing ca. 89% of all known plant species in the area of interest. Habit information is also provided for 91% of these species. PMID:28127234

  4. RAINBIO: a mega-database of tropical African vascular plants distributions

    Directory of Open Access Journals (Sweden)

    Dauby Gilles

    2016-11-01

    Full Text Available The tropical vegetation of Africa is characterized by high levels of species diversity but is undergoing important shifts in response to ongoing climate change and increasing anthropogenic pressures. Although our knowledge of plant species distribution patterns in the African tropics has been improving over the years, it remains limited. Here we present RAINBIO, a unique comprehensive mega-database of georeferenced records for vascular plants in continental tropical Africa. The geographic focus of the database is the region south of the Sahel and north of Southern Africa, and the majority of the data originate from tropical forest regions. RAINBIO is a compilation of 13 datasets, either publicly available or personal ones. Numerous in-depth data quality checks, both automatic and manual (by several African flora experts), were undertaken for georeferencing, standardization of taxonomic names, and identification and merging of duplicated records. The resulting RAINBIO data allow exploration and extraction of distribution data for 25,356 native tropical African vascular plant species, representing ca. 89% of all known plant species in the area of interest. Habit information is also provided for 91% of these species.

  5. RAINBIO: a mega-database of tropical African vascular plants distributions.

    Science.gov (United States)

    Dauby, Gilles; Zaiss, Rainer; Blach-Overgaard, Anne; Catarino, Luís; Damen, Theo; Deblauwe, Vincent; Dessein, Steven; Dransfield, John; Droissart, Vincent; Duarte, Maria Cristina; Engledow, Henry; Fadeur, Geoffrey; Figueira, Rui; Gereau, Roy E; Hardy, Olivier J; Harris, David J; de Heij, Janneke; Janssens, Steven; Klomberg, Yannick; Ley, Alexandra C; Mackinder, Barbara A; Meerts, Pierre; van de Poel, Jeike L; Sonké, Bonaventure; Sosef, Marc S M; Stévart, Tariq; Stoffelen, Piet; Svenning, Jens-Christian; Sepulchre, Pierre; van der Burgt, Xander; Wieringa, Jan J; Couvreur, Thomas L P

    2016-01-01

    The tropical vegetation of Africa is characterized by high levels of species diversity but is undergoing important shifts in response to ongoing climate change and increasing anthropogenic pressures. Although our knowledge of plant species distribution patterns in the African tropics has been improving over the years, it remains limited. Here we present RAINBIO, a unique comprehensive mega-database of georeferenced records for vascular plants in continental tropical Africa. The geographic focus of the database is the region south of the Sahel and north of Southern Africa, and the majority of the data originate from tropical forest regions. RAINBIO is a compilation of 13 datasets, either publicly available or personal ones. Numerous in-depth data quality checks, both automatic and manual (by several African flora experts), were undertaken for georeferencing, standardization of taxonomic names, and identification and merging of duplicated records. The resulting RAINBIO data allow exploration and extraction of distribution data for 25,356 native tropical African vascular plant species, representing ca. 89% of all known plant species in the area of interest. Habit information is also provided for 91% of these species.

  6. IMGT/GeneInfo: enhancing V(D)J recombination database accessibility.

    Science.gov (United States)

    Baum, Thierry-Pascal; Pasqual, Nicolas; Thuderoz, Florence; Hierle, Vivien; Chaume, Denys; Lefranc, Marie-Paule; Jouvin-Marche, Evelyne; Marche, Patrice-Noël; Demongeot, Jacques

    2004-01-01

    IMGT/GeneInfo is a user-friendly online information system that provides information on data resulting from the complex mechanisms of immunoglobulin (IG) and T cell receptor (TR) V(D)J recombinations. For the first time, it is possible to visualize all the rearrangement parameters on a single page. IMGT/GeneInfo is part of the international ImMunoGeneTics information system (IMGT), a high-quality integrated knowledge resource specializing in IG, TR, major histocompatibility complex (MHC), and related proteins of the immune system of human and other vertebrate species. The IMGT/GeneInfo system was developed by the TIMC and ICH laboratories (with the collaboration of LIGM), and is the first example of an external system being incorporated into IMGT. In this paper, we report the first part of this work. IMGT/GeneInfo_TR deals with the human and mouse TRA/TRD and TRB loci of the TR. Data handling and visualization are complementary to the current data and tools in IMGT, and will subsequently allow the modelling of V(D)J gene use and thus the prediction of non-standard recombination profiles which may eventually be found in conditions such as leukaemias or lymphomas. Access to IMGT/GeneInfo is free and can be found at http://imgt.cines.fr/GeneInfo.

  7. Free access to INIS database provides a gateway to nuclear energy research results; INIS-tietokanta avaa pääsyn ydinenergia-alan tutkimustuloksiin

    Energy Technology Data Exchange (ETDEWEB)

    Tolonen, E.; Malmgren, M. (Aalto Univ., Espoo (Finland). e-mail:eva.tolonen@tkk.fi; marja.malmgren@tkk.fi)

    2009-07-01

    Free access to the INIS database was opened to all Internet users around the world in May 2009. The article reviews the history of INIS (the International Nuclear Information System), the data acquisition process, database content and search possibilities. INIS is focused on the worldwide literature on the peaceful uses of nuclear energy, and the database is produced in close collaboration with the IEA/ETDE World Energy Base (ETDEWEB), a database covering all aspects of energy. The Nuclear Science Abstracts database (NSA), a comprehensive collection of international nuclear science and technology literature for the period 1948 through 1976, is also briefly discussed in the article. In Finland, the recently formed Aalto University is responsible for collecting and disseminating information (literature) and for the preparation of input to the INIS and IEA/ETDE databases at the national level.

  8. A Traffic Forecasting Method with Function to Control Residual Error Distribution for IP Access Networks

    Science.gov (United States)

    Kitahara, Takeshi; Furuya, Hiroki; Nakamura, Hajime

    Since traffic in IP access networks is less aggregated than in backbone networks, its variance could be significant and its distribution may be long-tailed rather than Gaussian in nature. Such characteristics make it difficult to forecast traffic volume in IP access networks for appropriate capacity planning. This paper proposes a traffic forecasting method that includes a function to control residual error distribution in IP access networks. The objective of the proposed method is to grasp the statistical characteristics of peak traffic variations, while conventional methods focus on average rather than peak values. In the proposed method, a neural network model is built recursively while weighting residual errors around the peaks. This enables network operators to control the trade-off between underestimation and overestimation errors according to their planning policy. Evaluation with a total of 136 daily traffic volume data sequences measured in actual IP access networks demonstrates the performance of the proposed method.
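
    The trade-off the abstract describes, penalizing underestimation around traffic peaks more heavily than overestimation, can be illustrated with a simple iteratively reweighted least-squares fit. This is a hypothetical stand-in for the paper's recursively weighted neural network; the data, weights, and the `fit_asymmetric` helper are all invented for illustration:

```python
import numpy as np

def fit_asymmetric(x, y, under_weight=3.0, n_iter=50):
    """Fit a line to traffic data, penalizing underestimation (positive
    residuals) more than overestimation. Illustrative stand-in for the
    paper's recursively weighted neural network."""
    X = np.column_stack([x, np.ones_like(x)])
    w = np.ones_like(y, dtype=float)
    coef = None
    for _ in range(n_iter):
        W = np.sqrt(w)  # weighted least squares with current weights
        coef, *_ = np.linalg.lstsq(X * W[:, None], y * W, rcond=None)
        resid = y - X @ coef
        # up-weight points the model currently underestimates
        w = np.where(resid > 0, under_weight, 1.0)
    return coef

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200)
y = 10 * x + rng.exponential(2.0, size=x.size)   # long-tailed noise
sym = np.polyfit(x, y, 1)                        # symmetric baseline
asym = fit_asymmetric(x, y, under_weight=5.0)
# fraction of observations each model under-forecasts
print((y > np.polyval(sym, x)).mean(), (y > asym[0] * x + asym[1]).mean())
```

    Raising `under_weight` shifts the fitted curve toward the peaks, trading a larger average overestimate for fewer underestimates, which is the planning-policy knob the paper gives to network operators.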

  9. Bus Access Optimization for Distributed Embedded Systems Based on Schedulability Analysis

    DEFF Research Database (Denmark)

    Pop, Paul; Eles, Petru; Peng, Zebo

    2000-01-01

    We present an approach to bus access optimization and schedulability analysis for the synthesis of hard real-time distributed embedded systems. The communication model is based on a time-triggered protocol. We have developed an analysis for the communication delays, proposing four different message scheduling policies over a time-triggered communication channel. Optimization strategies for the bus access scheme are developed, and the four approaches to message scheduling are compared using extensive experiments.

  10. Application of the Access Database in the University Enrollment Management System

    Institute of Scientific and Technical Information of China (English)

    殷洪杰

    2012-01-01

    With the rapid development of computer technology and the progress of the Access database, colleges and universities should make full use of the Access database in their enrollment management systems. This article briefly analyzes the application of the Access database in an enrollment management system.

  11. Geographical Distribution of Biomass Carbon in Tropical Southeast Asian Forests: A Database

    Energy Technology Data Exchange (ETDEWEB)

    Brown, S.

    2002-02-07

    A database was generated of estimates of geographically referenced carbon densities of forest vegetation in tropical Southeast Asia for 1980. A geographic information system (GIS) was used to incorporate spatial databases of climatic, edaphic, and geomorphological indices and vegetation to estimate potential (i.e., in the absence of human intervention and natural disturbance) carbon densities of forests. The resulting map was then modified to estimate actual 1980 carbon density as a function of population density and climatic zone. The database covers the following 13 countries: Bangladesh, Brunei, Cambodia (Campuchea), India, Indonesia, Laos, Malaysia, Myanmar (Burma), Nepal, the Philippines, Sri Lanka, Thailand, and Vietnam. The data sets within this database are provided in three file formats: ARC/INFO™ exported integer grids, ASCII (American Standard Code for Information Interchange) files formatted for raster-based GIS software packages, and generic ASCII files with x, y coordinates for use with non-GIS software packages. This database includes ten ARC/INFO exported integer grid files (five with the pixel size 3.75 km x 3.75 km and five with the pixel size 0.25 degree longitude x 0.25 degree latitude) and 27 ASCII files. The first ASCII file contains the documentation associated with this database. Twenty-four of the ASCII files were generated by means of the ARC/INFO GRIDASCII command and can be used by most raster-based GIS software packages. The 24 files can be subdivided into two groups of 12 files each. These files contain real data values representing actual carbon and potential carbon density in Mg C/ha (1 megagram = 10⁶ grams) and integer-coded values for country name, Weck's Climatic Index, ecofloristic zone, elevation, forest or non-forest designation, population density, mean annual precipitation, slope, soil texture, and vegetation classification. One set of 12 files contains these data at a spatial resolution of 3.75 km

  12. CeCaFDB: a curated database for the documentation, visualization and comparative analysis of central carbon metabolic flux distributions explored by 13C-fluxomics.

    Science.gov (United States)

    Zhang, Zhengdong; Shen, Tie; Rui, Bin; Zhou, Wenwei; Zhou, Xiangfei; Shang, Chuanyu; Xin, Chenwei; Liu, Xiaoguang; Li, Gang; Jiang, Jiansi; Li, Chao; Li, Ruiyuan; Han, Mengshu; You, Shanping; Yu, Guojun; Yi, Yin; Wen, Han; Liu, Zhijie; Xie, Xiaoyao

    2015-01-01

    The Central Carbon Metabolic Flux Database (CeCaFDB, available at http://www.cecafdb.org) is a manually curated, multipurpose and open-access database for the documentation, visualization and comparative analysis of the quantitative flux results of central carbon metabolism among microbes and animal cells. It encompasses records for more than 500 flux distributions among 36 organisms and includes information regarding the genotype, culture medium, growth conditions and other specific information gathered from hundreds of journal articles. In addition to its comprehensive literature-derived data, the CeCaFDB supports a common text search function among the data and interactive visualization of the curated flux distributions with compartmentation information based on the Cytoscape Web API, which facilitates data interpretation. The CeCaFDB offers four modules to calculate a similarity score or to perform an alignment between the flux distributions. One of the modules was built using an integer programming algorithm for flux distribution alignment that was specifically designed for this study. Based on these modules, the CeCaFDB also supports an extensive flux distribution comparison function among the curated data. The CeCaFDB is carefully designed to address the broad demands of biochemists, metabolic engineers, systems biologists and members of the -omics community.

  13. Discussion on Security Proxy Access Technology in Database Systems

    Institute of Scientific and Technical Information of China (English)

    彭鲁青

    2012-01-01

    Security of network-based database access, i.e., of remote database access, has become a hot research topic. Addressing the security problems raised by wide-area-network database access in information systems, such as illegal access, hacker attacks, and interception and tampering of data, this paper proposes building a security proxy system that mediates all access to the database, and analyzes the architecture of the whole system.

  14. Improving the analysis, storage and sharing of neuroimaging data using relational databases and distributed computing.

    Science.gov (United States)

    Hasson, Uri; Skipper, Jeremy I; Wilde, Michael J; Nusbaum, Howard C; Small, Steven L

    2008-01-15

    The increasingly complex research questions addressed by neuroimaging research impose substantial demands on computational infrastructures. These infrastructures need to support management of massive amounts of data in a way that affords rapid and precise data analysis, to allow collaborative research, and to achieve these aims securely and with minimum management overhead. Here we present an approach that overcomes many current limitations in data analysis and data sharing. This approach is based on open source database management systems that support complex data queries as an integral part of data analysis, flexible data sharing, and parallel and distributed data processing using cluster computing and Grid computing resources. We assess the strengths of these approaches as compared to current frameworks based on storage of binary or text files. We then describe in detail the implementation of such a system and provide a concrete description of how it was used to enable a complex analysis of fMRI time series data.
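
    As a minimal illustration of the approach the abstract describes, expressing an analysis step as a database query rather than parsing binary or text files in application code, here is a sketch using Python's built-in sqlite3. The `bold` table schema and its values are invented for the example; the authors' actual system uses server-class relational DBMSs with cluster and Grid computing resources:

```python
import sqlite3

# Hypothetical table of voxel time series (subject, voxel, time, signal).
con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE bold (subject TEXT, voxel INTEGER, t INTEGER, signal REAL)")
rows = [("s01", v, t, 100.0 + v + 0.5 * t)
        for v in range(3) for t in range(4)]
con.executemany("INSERT INTO bold VALUES (?, ?, ?, ?)", rows)

# Per-voxel mean signal, computed inside the database: the query is
# itself the analysis step, and it parallelizes naturally server-side.
means = con.execute(
    "SELECT voxel, AVG(signal) FROM bold WHERE subject = ? "
    "GROUP BY voxel ORDER BY voxel", ("s01",)).fetchall()
print(means)  # → [(0, 100.75), (1, 101.75), (2, 102.75)]
```
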

  15. Cloud Database Management System (CDBMS

    Directory of Open Access Journals (Sweden)

    Snehal B. Shende

    2015-10-01

    A cloud database management system is a distributed database that delivers computing as a service. It involves sharing web infrastructure for resources, software and information over a network. The cloud is used as a storage location, and the database can be accessed and computed from anywhere. The large number of web applications makes use of distributed storage solutions in order to scale up. It enables users to outsource resources and services to third-party servers. This paper covers the recent trend of cloud services based on database management systems and of offering the DBMS as one of the services in the cloud. The advantages and disadvantages of database as a service will let you decide whether or not to use it. This paper also highlights the architecture of a cloud-based database management system.

  16. Teaching Three-Dimensional Structural Chemistry Using Crystal Structure Databases. 3. The Cambridge Structural Database System: Information Content and Access Software in Educational Applications

    Science.gov (United States)

    Battle, Gary M.; Allen, Frank H.; Ferrence, Gregory M.

    2011-01-01

    Parts 1 and 2 of this series described the educational value of experimental three-dimensional (3D) chemical structures determined by X-ray crystallography and retrieved from the crystallographic databases. In part 1, we described the information content of the Cambridge Structural Database (CSD) and discussed a representative teaching subset of…

  17. Channel access delay and buffer distribution of two-user opportunistic scheduling schemes in wireless networks

    KAUST Repository

    Hossain, Md Jahangir

    2010-07-01

    In our earlier works, we proposed rate adaptive hierarchical modulation-assisted two-best user opportunistic scheduling (TBS) and hybrid two-user scheduling (HTS) schemes. The proposed schemes are innovative in the sense that they include a second user in the transmission opportunistically using hierarchical modulations. As such the frequency of information access of the users increases without any degradation of the system spectral efficiency (SSE) compared to the classical opportunistic scheduling scheme. In this paper, we analyze channel access delay of an incoming packet at the base station (BS) buffer when our proposed TBS and HTS schemes are employed at the BS. Specifically, using a queuing analytic model we derive channel access delay as well as buffer distribution of the packets that wait at BS buffer for down-link (DL) transmission. We compare performance of the TBS and HTS schemes with that of the classical single user opportunistic schemes namely, absolute carrier-to-noise ratio (CNR)-based single user scheduling (ASS) and normalized CNR-based single user scheduling (NSS). For an independent and identically distributed (i.i.d.) fading environment, our proposed scheme can improve packet's access delay performance compared to the ASS. Selected numerical results in an independent but non-identically distributed (i.n.d.) fading environment show that our proposed HTS achieves overall good channel access delay performance. © 2010 IEEE.

  18. Physical and biological data collected off the Florida coast in the North Atlantic Ocean and the Gulf of Mexico as part of the Harmful Algal Bloom Historical Database from February 5, 1954 to December 30, 1998 (NODC Accession 0000585)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — In the later part of 1999, a relational Microsoft Access database was created to accommodate a wide range of data on the phytoplankton Karenia brevis. This database,...

  19. Genelab: Scientific Partnerships and an Open-Access Database to Maximize Usage of Omics Data from Space Biology Experiments

    Science.gov (United States)

    Reinsch, S. S.; Galazka, J..; Berrios, D. C; Chakravarty, K.; Fogle, H.; Lai, S.; Bokyo, V.; Timucin, L. R.; Tran, P.; Skidmore, M.

    2016-01-01

    NASA's mission includes expanding our understanding of biological systems to improve life on Earth and to enable long-duration human exploration of space. The GeneLab Data System (GLDS) is NASA's premier open-access omics data platform for biological experiments. GLDS houses standards-compliant, high-throughput sequencing and other omics data from spaceflight-relevant experiments. The GeneLab project at NASA-Ames Research Center is developing the database, and also partnering with spaceflight projects through sharing or augmentation of experiment samples to expand omics analyses on precious spaceflight samples. The partnerships ensure that the maximum amount of data is garnered from spaceflight experiments and made publicly available as rapidly as possible via the GLDS. GLDS Version 1.0 went online in April 2015. Software updates and new data releases occur at least quarterly. As of October 2016, the GLDS contains 80 datasets and has search and download capabilities. Version 2.0 is slated for release in September of 2017 and will have expanded, integrated search capabilities leveraging other public omics databases (NCBI GEO, PRIDE, MG-RAST). Future versions in this multi-phase project will provide a collaborative platform for omics data analysis. Data from experiments that explore the biological effects of the spaceflight environment on a wide variety of model organisms are housed in the GLDS, including data from rodents, invertebrates, plants and microbes. Human datasets are currently limited to those with anonymized data (e.g., from cultured cell lines). GeneLab ensures prompt release and open access to high-throughput genomics, transcriptomics, proteomics, and metabolomics data from spaceflight and ground-based simulations of microgravity, radiation or other space environment factors. The data are meticulously curated to assure that accurate experimental and sample processing metadata are included with each data set. GLDS download volumes indicate strong

  20. An evaluation of EMBASE within the NHS: findings of the Database Access Project working partnership to extend the knowledge base of healthcare.

    Science.gov (United States)

    Hallam, E; Plaice, C

    1999-09-01

    An earlier article in the Innovations on the Internet Series introduced the Database Access Project (DAPs) at Southmead Health Services NHS Trust, which piloted the introduction and use of EMBASE via the Internet and NHSnet. This follow-up article assesses the results of the Project, and reports on its findings. In particular, it considers the usefulness of EMBASE in terms of coverage and content for different groups of NHS users and aspects of take-up in terms of access arrangements and patterns of usage. It also considers the likely impact on the library and information service in terms of providing training and user support and meeting related demands, for example the acquisition of full-text articles as a result of increased levels of searching. The value of retaining access to EMBASE was recognized by the majority of those who participated in the Project, despite its acknowledged overlap with other databases. The coverage of the database was identified as being relevant by a majority of users; both its expanded European coverage and its coverage of drug-related and mental health literature were identified as important aspects. The project identified a clear preference for remote access to the database, although there was still a need to visit the library for retrieval of full text. Lack of time both for training and for actual database use, and lack of confidence in applying search skills and in appraising research were identified as key challenges to database searching. The authors highlight the special role of information professionals in providing training and support for NHS professionals in the acquisition of search skills and critical appraisal skills in order to encourage effective database use.

  1. Distributed Access Control Based on Proxy Signature in M2M Sensor Networks

    Directory of Open Access Journals (Sweden)

    Lingyu Lee

    2013-05-01

    In this study, we investigate distributed access control based on proxy signatures in M2M sensor networks. As M2M sensor networks are usually deployed in hostile environments, their global communication security is, and will continue to be, a major concern. Although there are many related works on access control in WSNs (Wireless Sensor Networks), ad-hoc networks, MANETs (Mobile Ad-hoc Networks) and so on, they cannot be applied to M2M sensor networks directly. Motivated by this consideration, we develop a secure and distributed access control scheme based on proxy signatures for M2M sensor networks, which provides strong authentication and achieves efficiency. Moreover, the security of the proposed technique does not rely on the availability of a secure channel.

  2. Exposing USGS sample collections for broader discovery and access: collaboration between ScienceBase, IEDA:SESAR, and Paleobiology Database

    Science.gov (United States)

    Hsu, L.; Bristol, S.; Lehnert, K. A.; Arko, R. A.; Peters, S. E.; Uhen, M. D.; Song, L.

    2014-12-01

    The U.S. Geological Survey (USGS) is an exemplar of the need for improved cyberinfrastructure for its vast holdings of invaluable physical geoscience data. Millions of discrete paleobiological and geological specimens lie in USGS warehouses and at the Smithsonian Institution. These specimens serve as the basis for many geologic maps and geochemical databases, and are a potential treasure trove of new scientific knowledge. The extent of this treasure is virtually unknown and inaccessible outside a small group of paleogeoscientists and geochemists. A team from the USGS, the Integrated Earth Data Applications (IEDA) facility, and the Paleobiology Database (PBDB) are working to expose information on paleontological and geochemical specimens for discovery by scientists and citizens. This project uses existing infrastructure of the System for Earth Sample Registration (SESAR) and PBDB, which already contains much of the fundamental data schemas that are necessary to accommodate USGS records. The project is also developing a new Linked Data interface for the USGS National Geochemical Database (NGDB). The International Geo Sample Number (IGSN) is the identifier that links samples between all systems. For paleontological specimens, SESAR and PBDB will be the primary repositories for USGS records, with a data syncing process to archive records within the USGS ScienceBase system. The process began with mapping the metadata fields necessary for USGS collections to the existing SESAR and PBDB data structures, while aligning them with the Observations & Measurements and Darwin Core standards. New functionality needed in SESAR included links to a USGS locality registry, fossil classifications, a spatial qualifier attribution for samples with sensitive locations, and acknowledgement of data and metadata licensing. The team is developing a harvesting mechanism to periodically transfer USGS records from within PBDB and SESAR to ScienceBase. For the NGDB, the samples are being

  3. Glia Open Access Database (GOAD): A comprehensive gene expression encyclopedia of glia cells in health and disease.

    Science.gov (United States)

    Holtman, Inge R; Noback, Michiel; Bijlsma, Marieke; Duong, Kim N; van der Geest, Marije A; Ketelaars, Peer T; Brouwer, Nieske; Vainchtein, Ilia D; Eggen, Bart J L; Boddeke, Hendrikus W G M

    2015-09-01

    Recently, the number of genome-wide transcriptome profiles of pure populations of glia cells has drastically increased, resulting in an unprecedented amount of data that offer opportunities to study glia phenotypes and functions in health and disease. To make genome-wide transcriptome data easily accessible, we developed the Glia Open Access Database (GOAD), available via www.goad.education. GOAD contains a collection of previously published and unpublished transcriptome data, including datasets from isolated microglia, astrocytes and oligodendrocytes both at homeostatic and pathological conditions. It contains an intuitive web-based interface that consists of three features that enable searching, browsing, analyzing, and downloading of the data. The first feature is differential gene expression (DE) analysis that provides genes that are significantly up and down-regulated with the associated fold changes and p-values between two conditions of interest. In addition, an interactive Venn diagram is generated to illustrate the overlap and differences between several DE gene lists. The second feature is quantitative gene expression (QE) analysis, to investigate which genes are expressed in a particular glial cell type and to what degree. The third feature is a search utility, which can be used to find a gene of interest and depict its expression in all available expression data sets by generating a gene card. In addition, quality guidelines and relevant concepts for transcriptome analysis are discussed. Finally, GOAD is discussed in relation to several online transcriptome tools developed in neuroscience and immunology. In conclusion, GOAD is a unique platform to facilitate integration of bioinformatics in glia biology.

  4. Snag distributions in relation to human access in ponderosa pine forests

    Science.gov (United States)

    Jeff P. Hollenbeck; Lisa J. Bate; Victoria A. Saab; John F. Lehmkuhl

    2013-01-01

    Ponderosa pine (Pinus ponderosa) forests in western North America provide habitat for numerous cavity-using wildlife species that often select large-diameter snags for nesting and roosting. Yet large snags are often removed for their commercial and firewood values. Consequently we evaluated effects of human access on snag densities and diameter-class distributions at...

  5. Bus Access Optimisation for FlexRay-based Distributed Embedded Systems

    DEFF Research Database (Denmark)

    Pop, Traian; Pop, Paul; Eles, Petru

    2007-01-01

    …real-time communication in a deterministic manner. In this paper, we propose techniques for optimising the FlexRay bus access mechanism of a distributed system, so that the hard real-time deadlines are met for all the tasks and messages in the system. We have evaluated the proposed techniques using

  6. Methods of Accessing Databases through the CORBA Specification

    Institute of Scientific and Technical Information of China (English)

    鲍剑洋; 吴文清

    2001-01-01

    This paper proposes a way of accessing databases through the CORBA specification and discusses the basic steps of developing CORBA applications.

  7. Design and implementation of a distributed large-scale spatial database system based on J2EE

    Science.gov (United States)

    Gong, Jianya; Chen, Nengcheng; Zhu, Xinyan; Zhang, Xia

    2003-03-01

    With the increasing maturity of distributed object technology, CORBA, .NET and EJB are widely used in the traditional IT field. However, the theory and practice of distributed spatial databases need further improvement, owing to the contradictions between large-scale spatial data and limited network bandwidth, and between short-lived sessions and long transaction processing. Differences and trends among CORBA, .NET and EJB are discussed in detail; afterwards the concept, architecture and characteristics of a distributed large-scale seamless spatial database system based on J2EE are presented, comprising a GIS client application, web server, GIS application server and spatial data server. Moreover, the design and implementation of the GIS client application components based on JavaBeans, the GIS engine based on servlets, and the GIS application server based on GIS enterprise JavaBeans (containing session beans and entity beans) are explained. In addition, experiments on the relation between spatial data volume and response time under different conditions are conducted, which prove that a distributed spatial database system based on J2EE can be used to manage, distribute and share large-scale spatial data on the Internet. Lastly, a distributed large-scale seamless image database based on the Internet is presented.

  8. Equity in Distribution of Health Care Resources; Assessment of Need and Access, Using Three Practical Indicators.

    Science.gov (United States)

    Omrani-Khoo, Habib; Lotfi, Farhad; Safari, Hossein; Zargar Balaye Jame, Sanaz; Moghri, Javad; Shafii, Milad

    2013-11-01

    Equitable distribution of health system resources has long been a serious challenge for health policy makers. Previous studies have mostly emphasized equality rather than equity. In this paper we have attempted to examine both equality and equity in resource distribution. This is an applied, descriptive study in which we plotted Lorenz and concentration curves to describe graphically the distribution of hemodialysis beds and nephrologists, two complementary health care resources, in relation to hemodialysis patients. To this end, inequality and inequity were measured by calculating the Gini coefficient, concentration index and Robin Hood index. We used STATA and EXCEL software to calculate the indicators. The results showed no inequality in hemodialysis beds at the population level. However, the distribution of nephrologists, without considering population needs, was accompanied by some degree of inequality. The Gini coefficient for the distribution of beds and nephrologists at the population level was 0.02 and 0.38, respectively. Calculation of the concentration index with regard to population needs indicated that, unlike the distribution of beds, the equity gap between the distribution of nephrologists and that of patients among the provinces was considerable. Our results imply that although hemodialysis beds in Iran have been distributed in accordance with population need, the distribution of nephrologists is not, and this imbalance in complementary resources can affect both efficiency and equitable access to services.
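
    The Gini coefficients the study reports (0.02 for beds, 0.38 for nephrologists) are derived from the Lorenz curve. A minimal sketch of the computation, with invented per-province counts, since the study's actual data weight regions by population and, for the concentration index, by patient need:

```python
import numpy as np

def gini(shares):
    """Gini coefficient via the Lorenz curve (trapezoid rule).
    Regions are weighted equally here for simplicity."""
    x = np.sort(np.asarray(shares, dtype=float))
    n = x.size
    # Lorenz curve: cumulative share of the resource, starting at 0
    lorenz = np.concatenate([[0.0], np.cumsum(x) / x.sum()])
    area = (lorenz[:-1] + lorenz[1:]).sum() / (2 * n)
    return 1.0 - 2.0 * area  # 0 = perfect equality, 1 = max inequality

beds = [10, 11, 9, 10, 10, 12, 8, 10]   # near-equal across 8 provinces
docs = [1, 1, 2, 2, 3, 5, 12, 24]       # heavily concentrated
print(round(gini(beds), 2), round(gini(docs), 2))
```

    A concentration index follows the same construction, except provinces are ordered by a need variable (e.g., patient counts) instead of by the resource itself.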

  9. Relational databases

    CERN Document Server

    Bell, D A

    1986-01-01

    Relational Databases explores the major advances in relational databases and provides a balanced analysis of the state of the art in relational databases. Topics covered include capture and analysis of data placement requirements; distributed relational database systems; data dependency manipulation in database schemata; and relational database support for computer graphics and computer aided design. This book is divided into three sections and begins with an overview of the theory and practice of distributed systems, using the example of INGRES from Relational Technology as illustration. The

  10. On JDBC-based Database Access Technology

    Institute of Scientific and Technical Information of China (English)

    刘拥

    2012-01-01

    As an effective data storage and management tool, database technology has been widely used. The JDBC API provided by Java not only supports a variety of database driver types, but also provides methods for executing SQL statements to manipulate relational databases, giving Java applications the ability to access different types of databases.

  11. One4All Cooperative Media Access Strategy in Infrastructure Based Distributed Wireless Networks

    DEFF Research Database (Denmark)

    Zhang, Qi; Fitzek, Frank H.P.; Iversen, Villy Bæk

    2008-01-01

    In this paper we propose the one4all cooperative access strategy to introduce a more efficient media access strategy for wireless networks. The one4all scheme is designed for the infrastructure based distributed wireless network architecture. The basic idea is that mobile devices can form a cooperative cluster using their short-range air interface, and one device contends for the channel on behalf of all the devices within the cluster. This strategy reduces the number of mobile devices involved in the collision process for the wireless medium, resulting in larger throughput, smaller access delay, and less energy consumption. Based on an analytical model, the proposed strategy is compared with the two existing strategies RTS/CTS (request to send/clear to send) and packet aggregation. The results show that the proposed cooperative scheme has similar throughput performance as packet aggregation and it has…

  12. Analysis of the 802.11e Enhanced Distributed Channel Access Function

    CERN Document Server

    Inan, Inanc; Ayanoglu, Ender

    2007-01-01

    The IEEE 802.11e standard revises the Medium Access Control (MAC) layer of the former IEEE 802.11 standard for Quality-of-Service (QoS) provision in the Wireless Local Area Networks (WLANs). The Enhanced Distributed Channel Access (EDCA) function of 802.11e defines multiple Access Categories (AC) with AC-specific Contention Window (CW) sizes, Arbitration Interframe Space (AIFS) values, and Transmit Opportunity (TXOP) limits to support MAC-level QoS and prioritization. We propose an analytical model for the EDCA function which incorporates an accurate CW, AIFS, and TXOP differentiation at any traffic load. The proposed model is also shown to capture the effect of MAC layer buffer size on the performance. Analytical and simulation results are compared to demonstrate the accuracy of the proposed approach for varying traffic loads, EDCA parameters, and MAC layer buffer space.

  13. Column Store for GWAC: A High-cadence, High-density, Large-scale Astronomical Light Curve Pipeline and Distributed Shared-nothing Database

    Science.gov (United States)

    Wan, Meng; Wu, Chao; Wang, Jing; Qiu, Yulei; Xin, Liping; Mullender, Sjoerd; Mühleisen, Hannes; Scheers, Bart; Zhang, Ying; Nes, Niels; Kersten, Martin; Huang, Yongpan; Deng, Jinsong; Wei, Jianyan

    2016-11-01

    The ground-based wide-angle camera array (GWAC), a part of the SVOM space mission, will search for various types of optical transients by continuously imaging a field of view (FOV) of 5000 square degrees every 15 s. Each exposure consists of 36 × 4k × 4k pixels, typically resulting in 36 × ∼175,600 extracted sources. For a modern time-domain astronomy project like GWAC, which produces massive amounts of data with a high cadence, it is challenging to search for short timescale transients in both real-time and archived data, and to build long-term light curves for variable sources. Here, we develop a high-cadence, high-density light curve pipeline (HCHDLP) to process the GWAC data in real-time, and design a distributed shared-nothing database to manage the massive amount of archived data which will be used to generate a source catalog with more than 100 billion records during 10 years of operation. First, we develop HCHDLP based on the column-store DBMS of MonetDB, taking advantage of MonetDB’s high performance when applied to massive data processing. To realize the real-time functionality of HCHDLP, we optimize the pipeline in its source association function, including both time and space complexity from outside the database (SQL semantic) and inside (RANGE-JOIN implementation), as well as in its strategy of building complex light curves. The optimized source association function is accelerated by three orders of magnitude. Second, we build a distributed database using a two-level time partitioning strategy via the MERGE TABLE and REMOTE TABLE technology of MonetDB. Intensive tests validate that our database architecture is able to achieve both linear scalability in response time and concurrent access by multiple users. In summary, our studies provide guidance for a solution to GWAC in real-time data processing and management of massive data.
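
    The two-level time partitioning summarized above (a coarse level spread across nodes, a fine level of per-period tables unioned by a MERGE TABLE and reached via REMOTE TABLE) can be illustrated with a small routing sketch. The table-naming scheme, period lengths, and node count below are hypothetical, not taken from the GWAC paper:

```python
from datetime import datetime, timezone

def partition_table(ts: datetime, months_per_node: int = 6) -> tuple[str, str]:
    """Map an observation timestamp to a (node, table) pair under a
    hypothetical two-level time partitioning: coarse partitions are
    distributed round-robin across nodes, fine partitions are monthly tables."""
    month_index = ts.year * 12 + (ts.month - 1)
    node = f"node_{(month_index // months_per_node) % 4}"   # 4 worker nodes assumed
    table = f"lightcurve_{ts.year}_{ts.month:02d}"          # monthly fine partition
    return node, table

# A query planner would union (MERGE) only the monthly tables overlapping
# the requested time range, reaching non-local ones as REMOTE TABLEs.
node, table = partition_table(datetime(2016, 11, 5, tzinfo=timezone.utc))
```

    The point of the two levels is that time-range predicates prune whole monthly tables before any data is scanned, while the coarse level spreads both storage and scan work across the cluster.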

  14. Security Analysis of the Application of the ACCESS Database in Websites

    Institute of Scientific and Technical Information of China (English)

    安晓瑞

    2012-01-01

    The database is the foundation of website operation. The Access database, characterized by a friendly interface, a shallow learning curve, simple development, and flexible interfaces, is a typical tool for data management and information-system development, and most small and medium-sized enterprise websites choose it. However, the Access database has hidden security weaknesses and is extremely vulnerable to attack, which exposes the server and the website to security risks. This paper examines the main factors that affect Access database security, aiming to strengthen site administrators' security awareness and improve their management practices.

  15. Teaching Three-Dimensional Structural Chemistry Using Crystal Structure Databases. 2. Teaching Units that Utilize an Interactive Web-Accessible Subset of the Cambridge Structural Database

    Science.gov (United States)

    Battle, Gary M.; Allen, Frank H.; Ferrence, Gregory M.

    2010-01-01

    A series of online interactive teaching units have been developed that illustrate the use of experimentally measured three-dimensional (3D) structures to teach fundamental chemistry concepts. The units integrate a 500-structure subset of the Cambridge Structural Database specially chosen for their pedagogical value. The units span a number of key…

  16. The Relationship between Searches Performed in Online Databases and the Number of Full-Text Articles Accessed: Measuring the Interaction between Database and E-Journal Collections

    Science.gov (United States)

    Lamothe, Alain R.

    2011-01-01

    The purpose of this paper is to report the results of a quantitative analysis exploring the interaction and relationship between the online database and electronic journal collections at the J. N. Desmarais Library of Laurentian University. A very strong relationship exists between the number of searches and the size of the online database…

  18. National Radiobiology Archives Distributed Access User's Manual, Version 1.1. Revision 1

    Energy Technology Data Exchange (ETDEWEB)

    Smith, S.K.; Prather, J.C.; Ligotke, E.K.; Watson, C.R.

    1992-06-01

    This supplement to the NRA Distributed Access User's manual (PNL-7877), November 1991, describes installation and use of Version 1.1 of the software package; this is not a replacement of the previous manual. Version 1.1 of the NRA Distributed Access Package is a maintenance release. It eliminates several bugs, and includes a few new features which are described in this manual. Although the appearance of some menu screens has changed, we are confident that the Version 1.0 User's Manual will provide an adequate introduction to the system. Users who are unfamiliar with Version 1.0 may wish to experiment with that version before moving on to Version 1.1.

  20. Distributed Fair Access Point Selection for Multi-Rate IEEE 802.11 WLANs

    Science.gov (United States)

    Gong, Huazhi; Nahm, Kitae; Kim, Jongwon

    In IEEE 802.11 networks, the access point (AP) selection based on the strongest signal strength often results in the extremely unfair bandwidth allocation among mobile users (MUs). In this paper, we propose a distributed AP selection algorithm to achieve a fair bandwidth allocation for MUs. The proposed algorithm gradually balances the AP loads based on max-min fairness for the available multiple bit rate choices in a distributed manner. We analyze the stability and overhead of the proposed algorithm, and show the improvement of the fairness via computer simulation.
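
    A load-aware AP selection of the kind summarized above can be sketched as a greedy re-association loop: each user repeatedly moves to the AP that currently offers it the largest equal share of bandwidth. This is an illustrative sketch, not the authors' algorithm; the per-user rate model and equal-airtime-share assumption are made up for the example:

```python
def fair_ap_selection(users, ap_rates, iters=50):
    """Greedy sketch of load-aware AP selection.
    users: {uid: [AP ids in range]};  ap_rates: {(uid, ap): achievable bit rate}.
    Each user in turn joins the AP maximizing its equal time share of bandwidth."""
    assoc = {u: aps[0] for u, aps in users.items()}   # start at first AP in range
    for _ in range(iters):
        changed = False
        for u, aps in users.items():
            def share(ap):
                # equal time share among users on ap (counting u if it joins)
                n = sum(1 for v, a in assoc.items() if a == ap and v != u) + 1
                return ap_rates[(u, ap)] / n
            best = max(aps, key=share)
            if best != assoc[u]:
                assoc[u], changed = best, True
        if not changed:
            break
    return assoc
```

    With identical rates, the loop spreads users evenly over the APs instead of piling them onto the strongest-signal AP; a distributed version would have each MU run the inner step on locally observed AP loads.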

  1. The Pork Consumption and Distribution in Urban Areas of Vietnam before WTO Accession

    OpenAIRE

    Nguyen, Thuy Minh; YUTAKA, Tomoyuki; Fukuda, Susumu; Kai, Satoshi

    2006-01-01

    Pork is the most popular type of meat in the daily meals of Vietnamese people. The pig industry provides jobs to a great number of Vietnamese farmers, most of whom run very small-scale operations. This paper investigates the situation of pig and pork consumption and distribution in Vietnam before the country's entry to the WTO, which was expected by October 2006. The authors also estimate the changes in pig production and distribution after the accession. It...

  2. Distributed joint power and access control algorithm for secondary spectrum sharing

    Science.gov (United States)

    Li, Hongyan; Chen, Enqing; Fu, Hongliang

    2010-08-01

    Based on the interference temperature model, the problem of efficient secondary spectrum sharing is formulated as a power optimization problem with constraints at the physical layer. These constraints and the optimization objective limit the feasible power vector set, which creates a need for access control in addition to power control. In this paper, we consider a decentralized cognitive radio network scenario where short-term data service is required, and study the problem of distributed joint power and access control to maximize total secondary system throughput, subject to Quality of Service (QoS) constraints from individual secondary users and the interference temperature limit (ITL) from the primary system. First, a pricing-based game model is used to solve the distributed power allocation optimization problem in both high and low signal-to-interference-plus-noise ratio (SINR) scenarios. Second, when not all secondary links can be supported under their QoS requirements and the ITL, a distributed joint power and access control algorithm is introduced to find the set of admissible links that maximizes network throughput with all constraints satisfied; its convergence performance is tested by simulations.
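
    The interplay of power control and admission described above can be illustrated with the classical distributed target-SINR power update (Foschini-Miljanic style), a standard building block for such schemes rather than the specific pricing-based algorithm of this paper. The link gains, noise level, and SINR targets below are invented for the example:

```python
def distributed_power_control(G, noise, targets, iters=200):
    """Each link i repeatedly scales its power by (target SINR / current SINR).
    G[i][j] is the gain from transmitter j to receiver i.  When the targets are
    jointly feasible, the iteration converges to the minimal feasible powers."""
    n = len(G)
    p = [1.0] * n
    for _ in range(iters):
        sinr = [G[i][i] * p[i] / (noise + sum(G[i][j] * p[j] for j in range(n) if j != i))
                for i in range(n)]
        p = [targets[i] / sinr[i] * p[i] for i in range(n)]
    return p

# Two interfering links with a common SINR target of 2.0.
p = distributed_power_control(G=[[1.0, 0.1], [0.1, 1.0]], noise=0.1, targets=[2.0, 2.0])
# An admission controller would drop links when the iteration diverges
# (powers grow without bound), i.e. when the target set is infeasible.
```

    The update is fully distributed: each link only needs its own measured SINR, which is why the same skeleton reappears in joint power/admission control schemes.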

  3. Development and Validation of a Preprocedural Risk Score to Predict Access Site Complications After Peripheral Vascular Interventions Based on the Vascular Quality Initiative Database

    Directory of Open Access Journals (Sweden)

    Daniel Ortiz

    2016-01-01

    Purpose: Access site complications following peripheral vascular intervention (PVI) are associated with prolonged hospitalization and increased mortality. Prediction of access site complication risk may optimize PVI care; however, there is no tool designed for this. We aimed to create a clinical scoring tool to stratify patients according to their risk of developing access site complications after PVI. Methods: The Society for Vascular Surgery's Vascular Quality Initiative database yielded 27,997 patients who had undergone PVI at 131 North American centers. Clinically and statistically significant preprocedural risk factors associated with in-hospital, post-PVI access site complications were included in a multivariate logistic regression model, with access site complications as the outcome variable. A predictive model was developed with a random sample of 19,683 (70%) PVI procedures and validated in 8,314 (30%). Results: Access site complications occurred in 939 (3.4%) patients. The risk tool predictors are female gender, age > 70 years, white race, bedridden ambulatory status, insulin-treated diabetes mellitus, prior minor amputation, procedural indication of claudication, and nonfemoral arterial access site (model c-statistic = 0.638). Of these predictors, insulin-treated diabetes mellitus and prior minor amputation were protective of access site complications. The discriminatory power of the risk model was confirmed by the validation dataset (c-statistic = 0.6139). Higher risk scores correlated with increased frequency of access site complications: 1.9% for low risk, 3.4% for moderate risk and 5.1% for high risk. Conclusions: The proposed clinical risk score based on eight preprocedural characteristics is a tool to stratify patients at risk for post-PVI access site complications. The risk score may assist physicians in identifying patients at risk for access site complications and selection of patients who may benefit from bleeding avoidance
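
    A preprocedural risk score of this shape reduces to a weighted sum over the eight predictors, with the score mapped to a low/moderate/high band. The point weights and cut-offs below are hypothetical placeholders, not the coefficients published with the Vascular Quality Initiative model:

```python
def access_site_risk(patient: dict) -> str:
    """Toy risk stratification over the eight predictors named in the abstract.
    Weights are illustrative only; the two protective predictors carry negative
    points, mirroring the reported direction of effect."""
    points = {
        "female": 2, "age_over_70": 2, "white_race": 1, "bedridden": 2,
        "claudication_indication": 1, "nonfemoral_access": 2,
        "insulin_treated_dm": -1, "prior_minor_amputation": -1,  # protective
    }
    score = sum(w for k, w in points.items() if patient.get(k, False))
    if score <= 2:
        return "low"
    return "moderate" if score <= 5 else "high"
```

    In the published tool the weights would come from the fitted logistic regression coefficients; the sketch only shows how such a score is applied at the bedside.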

  4. MAAC: a software tool for user authentication and access control to the electronic patient record in an open distributed environment

    Science.gov (United States)

    Motta, Gustavo H.; Furuie, Sergio S.

    2004-04-01

    Designing proper models for authorization and access control for the electronic patient record (EPR) is essential to wide-scale use of the EPR in large health organizations. This work presents MAAC (Middleware for Authentication and Access Control), a tool that implements a contextual role-based access control (RBAC) authorization model. RBAC regulates users' access to computer resources based on their organizational roles. A contextual authorization uses environmental information available at access-request time, such as the user/patient relationship, to decide whether a user has the right to access an EPR resource. The software architecture in which MAAC is implemented uses the Lightweight Directory Access Protocol, the Java programming language, and the CORBA/OMG standards CORBA Security Service and Resource Access Decision Facility. With those open and distributed standards, heterogeneous EPR components can request user authentication and access authorization services in a unified and consistent fashion across multiple platforms.

  5. The Erasmus insurance case and a related questionnaire for distributed database management systems

    NARCIS (Netherlands)

    S.C. van der Made-Potuijt

    1990-01-01

    This is the third report concerning transaction management in the database environment. In the first report the role of the transaction manager in protecting the integrity of a database has been studied [van der Made-Potuijt 1989]. In the second report a model has been given for a transa

  6. Indexed University presses: overlap and geographical distribution in five book assessment databases

    Energy Technology Data Exchange (ETDEWEB)

    Mañana-Rodriguez, J.; Gimenez-Toledo, E

    2016-07-01

    Scholarly books have been a periphery among the objects of study of bibliometrics until recent developments provided tools for assessment purposes. Among scholarly book publishers, University Presses (UPs hereinafter), subject to specific aims and constraints in their publishing activity, might also remain on a second-level periphery despite their relevance as scholarly book publishers. In this study the authors analyze the absolute and relative presence, overlap, and uniquely indexed cases of 503 UPs by country across five assessment-oriented databases containing data on scholarly book publishers: Book Citation Index, Scopus, Scholarly Publishers Indicators (Spain), the lists of publishers from the Norwegian system (CRISTIN), and the lists of publishers from the Finnish system (JUFO). The comparison between commercial databases and public, national databases points toward a differential pattern: prestigious UPs in the English-speaking world represent larger shares and there is a higher overall percentage of UPs in the commercial databases, while richness and diversity are higher in the national databases. Explicit or de facto biases toward production in English in the commercial databases, as well as diverse indexation criteria, might explain the differences observed. The analysis of the presence of UPs in different numbers of databases by country also provides a general picture of the average degree of diffusion of UPs among information systems. The analysis of ‘endemic’ UPs, those indexed in only one of the five databases, points to strongly different compositions of UPs in commercial and non-commercial databases. A combination of commercial and non-commercial databases seems to be the optimal option for assessment purposes, and the validity and desirability of the ongoing debate on the role of UPs can also be concluded. (Author)

  7. Distributed data discovery, access and visualization services to Improve Data Interoperability across different data holdings

    Science.gov (United States)

    Palanisamy, G.; Krassovski, M.; Devarakonda, R.; Santhana Vannan, S.

    2012-12-01

    The current climate debate is highlighting the importance of free, open, and authoritative sources of high-quality climate data that are available for peer review and for collaborative purposes. It is increasingly important to allow various organizations around the world to share climate data in an open manner, and to enable them to perform dynamic processing of climate data. This advanced access to data can be enabled via Web-based services, using common "community agreed" standards, without organizations having to change the internal structure used to describe their data. The modern scientific community has become diverse and increasingly complex in nature. To meet the demands of such a diverse user community, the modern data supplier has to provide data and related information through searchable, data- and process-oriented tools. This can be accomplished by setting up an online, Web-based system with a relational database as a back end. The following common features of web data access/search systems will be outlined in the proposed presentation: - flexible data discovery - data in commonly used formats (e.g., CSV, NetCDF) - metadata prepared in standard formats (FGDC, ISO 19115, EML, DIF, etc.) - data subsetting capabilities and the ability to narrow down to individual data elements - standards-based data access protocols and mechanisms (SOAP, REST, OPeNDAP, OGC, etc.) - integration of services across different data systems (discovery to access, visualization, and subsetting). This presentation will also include specific examples of the integration of various data systems developed by Oak Ridge National Laboratory's Climate Change Science Institute, and their ability to communicate with each other to enable better data interoperability and data integration. References: [1] Devarakonda, Ranjeet, and Harold Shanafield. "Drupal: Collaborative framework for science research." Collaboration Technologies and Systems (CTS), 2011 International Conference on. IEEE, 2011. [2

  8. Supporting the Construction of Workflows for Biodiversity Problem-Solving Accessing Secure, Distributed Resources

    Directory of Open Access Journals (Sweden)

    J.S. Pahwa

    2006-01-01

    In the Biodiversity World (BDW) project we have created a flexible and extensible Web Services-based Grid environment for biodiversity researchers to solve problems in biodiversity and analyse biodiversity patterns. In this environment, heterogeneous and globally distributed biodiversity-related resources such as data sets and analytical tools are made available to be accessed and assembled by users into workflows to perform complex scientific experiments. One such experiment is bioclimatic modelling of the geographical distribution of individual species using climate variables in order to explain past and future climate-related changes in species distribution. Data sources and analytical tools required for such analysis of species distribution are widely dispersed, available on heterogeneous platforms, present data in different formats and lack inherent interoperability. The present BDW system brings all these disparate units together so that the user can combine tools with little thought as to their original availability, data formats and interoperability. The new prototype BDW system architecture not only brings together heterogeneous resources but also enables utilisation of computational resources and provides secure access to BDW resources via a federated security model. We describe features of the new BDW system and its security model which enable user authentication from a workflow application as part of workflow execution.

  9. Oceans of Data: In what ways can learning research inform the development of electronic interfaces and tools for use by students accessing large scientific databases?

    Science.gov (United States)

    Krumhansl, R. A.; Foster, J.; Peach, C. L.; Busey, A.; Baker, I.

    2012-12-01

    The practice of science and engineering is being revolutionized by the development of cyberinfrastructure for accessing near real-time and archived observatory data. Large cyberinfrastructure projects have the potential to transform the way science is taught in high school classrooms, making enormous quantities of scientific data available, giving students opportunities to analyze and draw conclusions from many kinds of complex data, and providing students with experiences using state-of-the-art resources and techniques for scientific investigations. However, online interfaces to scientific data are built by scientists for scientists, and their design can significantly impede broad use by novices. Knowledge relevant to the design of student interfaces to complex scientific databases is broadly dispersed among disciplines ranging from cognitive science to computer science and cartography, and is not easily accessible to designers of educational interfaces. To inform efforts at bridging scientific cyberinfrastructure to the high school classroom, Education Development Center, Inc. and the Scripps Institution of Oceanography conducted an NSF-funded 2-year interdisciplinary review of literature and expert opinion pertinent to making interfaces to large scientific databases accessible to and usable by precollege learners and their teachers. Project findings are grounded in the fundamentals of Cognitive Load Theory, visual perception, schema formation, and Universal Design for Learning. The Knowledge Status Report (KSR) presents cross-cutting and visualization-specific guidelines that highlight how interface design features can address or ameliorate the challenges novice high school students face as they navigate complex databases to find data, and construct and look for patterns in maps, graphs, animations and other data visualizations. The guidelines present ways to make scientific databases more broadly accessible by: 1) adjusting the cognitive load imposed by the user

  10. Practical Quantum Private Database Queries Based on Passive Round-Robin Differential Phase-shift Quantum Key Distribution

    Science.gov (United States)

    Li, Jian; Yang, Yu-Guang; Chen, Xiu-Bo; Zhou, Yi-Hua; Shi, Wei-Min

    2016-08-01

    A novel quantum private database query protocol is proposed, based on passive round-robin differential phase-shift quantum key distribution. Compared with previous quantum private database query protocols, the present protocol has the following unique merits: (i) the user Alice can obtain one and only one key bit, so that both the efficiency and security of the protocol can be ensured, and (ii) it does not require changing the length difference of the two arms in a Mach-Zehnder interferometer and simply chooses two pulses passively to interfere, so it is much simpler and more practical. The present protocol is also proved to be secure in terms of user security and database security.

  11. CracidMex1: a comprehensive database of global occurrences of cracids (Aves, Galliformes) with distribution in Mexico

    Science.gov (United States)

    Pinilla-Buitrago, Gonzalo; Martínez-Morales, Miguel Angel; González-García, Fernando; Enríquez, Paula L.; Rangel-Salazar, José Luis; Romero, Carlos Alberto Guichard; Navarro-Sigüenza, Adolfo G.; Monterrubio-Rico, Tiberio César; Escalona-Segura, Griselda

    2014-01-01

    Cracids are among the most vulnerable groups of Neotropical birds. Almost half of the species of this family are included in a conservation risk category. Twelve taxa occur in Mexico, six of which are considered at risk at national level and two are globally endangered. Therefore, it is imperative that high quality, comprehensive, and high-resolution spatial data on the occurrence of these taxa are made available as a valuable tool in the process of defining appropriate management strategies for conservation at a local and global level. We constructed the CracidMex1 database by collating global records of all cracid taxa that occur in Mexico from available electronic databases, museum specimens, publications, “grey literature”, and unpublished records. We generated a database with 23,896 clean, validated, and standardized geographic records. Database quality control was an iterative process that commenced with the consolidation and elimination of duplicate records, followed by the geo-referencing of records when necessary, and their taxonomic and geographic validation using GIS tools and expert knowledge. We followed the geo-referencing protocol proposed by the Mexican National Commission for the Use and Conservation of Biodiversity. We could not estimate the geographic coordinates of 981 records due to inconsistencies or lack of sufficient information in the description of the locality. Given that current records for most of the taxa have some degree of distributional bias, with redundancies at different spatial scales, the CracidMex1 database has allowed us to detect areas where more sampling effort is required to have a better representation of the global spatial occurrence of these cracids. We also found that particular attention needs to be given to taxa identification in those areas where congeners or conspecifics co-occur in order to avoid taxonomic uncertainty. The construction of the CracidMex1 database represents the first comprehensive

  12. Fast distributed strategic learning for global optima in queueing access games

    KAUST Repository

    Tembine, Hamidou

    2014-08-24

    In this paper we examine combined fully distributed payoff and strategy learning (CODIPAS) in a queue-aware access game over a graph. The classical strategic learning analysis relies on a vanishing or small learning rate and uses stochastic approximation tools to derive steady states and invariant sets of the underlying learning process. Here, the stochastic approximation framework does not apply due to a non-vanishing learning rate. We propose a direct proof of convergence of the process. Interestingly, the convergence time to one of the global optima is almost surely finite, and we explicitly characterize the convergence time. We show that pursuit-based CODIPAS learning is much faster than the classical learning algorithms in games. We extend the methodology to coalitional learning and prove very fast formation of coalitions for queue-aware access games where the action space changes dynamically depending on the location of the user over a graph.

  13. A distributed Synchronous reservation multiple access control protocol for mobile Ad hoc networks

    Institute of Scientific and Technical Information of China (English)

    ZHANG Yanling; SUN Xianpu; LI Jiandong

    2007-01-01

    This study proposes a new multiple access control protocol named the distributed synchronous reservation multiple access control protocol, in which the hidden- and exposed-terminal problems are solved and the quality of service (QoS) requirements for real-time traffic are guaranteed. The protocol is founded on time-division multiplexing, and each type of traffic is assigned a different priority, according to which a node competes for and reserves free slots in a different manner. Moreover, there is a reservation acknowledgement process before data transmission in each reserved slot, so that the intruded-terminal problem is solved. The throughput and average packet drop probability of this protocol are analyzed and simulated in a fully connected network, the results of which indicate that this protocol is efficient enough to support real-time traffic and is well suited to MANETs.

  14. Distributed Joint Source-Channel Coding on a Multiple Access Channel with Side Information

    CERN Document Server

    Rajesh, R

    2008-01-01

    We consider the problem of transmission of several distributed sources over a multiple access channel (MAC) with side information at the sources and the decoder. Source-channel separation does not hold for this channel. Sufficient conditions are provided for transmission of sources with a given distortion. The source and/or the channel could have continuous alphabets (thus Gaussian sources and Gaussian MACs are special cases). Various previous results are obtained as special cases. We also provide several good joint source-channel coding schemes for a discrete/continuous source and discrete/continuous alphabet channel. Channels with feedback and fading are also considered. Keywords: Multiple access channel, side information, lossy joint source-channel coding, channels with feedback, fading channels.

  15. PhID: an open-access integrated pharmacology interactions database for drugs, targets, diseases, genes, side-effects and pathways.

    Science.gov (United States)

    Deng, Zhe; Tu, Weizhong; Deng, Zixin; Hu, Qian-Nan

    2017-09-14

    The current network pharmacology field has encountered a bottleneck, with a large amount of public data scattered across different databases. There is a lack of an open-access, consolidated platform that integrates this information for systemic research. To address this issue, we have developed PhID, an integrated pharmacology database which integrates >400,000 pharmacology elements (drug, target, disease, gene, side-effect, and pathway) and >200,000 element interactions from branches of public databases. PhID has three major applications: (1) it assists scientists in searching through the overwhelming amount of pharmacology-element interaction data by names, public IDs, molecular structures, or molecular substructures; (2) it helps visualize pharmacology elements and their interactions with a web-based network graph; and (3) it provides prediction of drug-target interactions through two modules, PreDPI-ki and FIM, by which users can predict drug-target interactions of the PhID entities or of drug-target pairs of interest. To get a systems-level understanding of drug action and disease complexity, PhID was established as a network pharmacology tool from the perspective of a data layer, a visualization layer, and a prediction-model layer to present information untapped by current databases. Database URL: http://phid.ditad.org/.

  16. S2RSLDB: a comprehensive manually curated, internet-accessible database of the sigma-2 receptor selective ligands.

    Science.gov (United States)

    Nastasi, Giovanni; Miceli, Carla; Pittalà, Valeria; Modica, Maria N; Prezzavento, Orazio; Romeo, Giuseppe; Rescifina, Antonio; Marrazzo, Agostino; Amata, Emanuele

    2017-01-01

    Sigma (σ) receptors are accepted as a particular receptor class consisting of two subtypes: sigma-1 (σ1) and sigma-2 (σ2). The two receptor subtypes have specific drug actions, pharmacological profiles, and molecular characteristics. The σ2 receptor is overexpressed in several tumor cell lines, and its ligands are currently under investigation for their role in tumor diagnosis and treatment. The σ2 receptor structure has not been disclosed, and researchers rely on σ2 receptor radioligand binding assays to understand the receptor's pharmacological behavior and to design new lead compounds. Here we present the Sigma-2 Receptor Selective Ligands Database (S2RSLDB), a manually curated database of σ2 receptor selective ligands containing more than 650 compounds. The database is built with chemical structure information, radioligand binding affinity data, computed physicochemical properties, and experimental radioligand binding procedures. The S2RSLDB is freely available online without account login, and with its powerful search engine users may build complex queries, sort tabulated results, generate color-coded 2D and 3D graphs, and download the data for additional screening. The collection reported here is extremely useful for the development of new ligands endowed with σ2 receptor affinity, selectivity, and appropriate physicochemical properties. The database will be updated yearly, and in the near future an online submission form will be made available to help keep the database widely known in the research community and continually updated. The database is available at http://www.researchdsf.unict.it/S2RSLDB.

  17. Preparing for open access : distribution rate order application to the Ontario Energy Board 1999-2000

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-12-07

    The Ontario Hydro Services Company (OHSC) Inc. filed an application with the Ontario Energy Board (OEB) requesting an order approving the revenue requirements for the Company's distribution business, including those for distribution in remote communities, for the years 1999 and 2000, up until the point of open access. The revenue requirement for 1999 is $701 million; for the year 2000 it is $640 million. OHSC is a successor company to Ontario Hydro and will become operational in its new incarnation on April 1, 1999. This marks the beginning of regulation of OHSC's distribution business by the OEB, following the restructuring of the electricity industry in Ontario. Restructuring ended the monopoly position of Ontario Hydro, introduced competition to the generation and retailing sectors, and brought regulation to the transmission and distribution sectors of the industry. The document sets out the circumstances leading up to the restructuring of the industry and the unbundling of Ontario Hydro into separate generation, transmission and distribution companies, outlines the new regulatory framework, and provides the justification for the revenue requirements.

  18. Robust Distributed Estimation over Multiple Access Channels with Constant Modulus Signaling

    CERN Document Server

    Tepedelenlioglu, Cihan

    2009-01-01

    A distributed estimation scheme in which the sensors transmit constant modulus signals over a multiple access channel is considered. The proposed estimator is shown to be strongly consistent for any sensing noise distribution in the i.i.d. case, under both a per-sensor power constraint and a total power constraint. When the distributions of the sensing noise are not identical, a bound on the variances is shown to establish strong consistency. The estimator is shown to be asymptotically normal with an asymptotic variance (AsV) that depends on the characteristic function of the sensing noise. Optimization of the AsV is considered with respect to a transmission phase parameter for a variety of noise distributions exhibiting differing levels of impulsive behavior. The robustness of the estimator to impulsive sensing noise distributions, such as those with positive excess kurtosis or those without finite moments, is shown. The proposed estimator is favorably compared with the amplify and forward scheme under an impuls...

  19. Distributed multimedia database technologies supported by MPEG-7 and MPEG-21

    CERN Document Server

    Kosch, Harald

    2003-01-01

    Contents: Introduction (Multimedia Content: Context; Multimedia Systems and Databases; (Multi)Media Data and Multimedia Metadata; Purpose and Organization of the Book); MPEG-7: The Multimedia Content Description Standard (Introduction; MPEG-7 and Multimedia Database Systems; Principles for Creating MPEG-7 Documents; MPEG-7 Description Definition Language; Step-by-Step Approach for Creating an MPEG-7 Document; Extending the Description Schema of MPEG-7; Encoding and Decoding of MPEG-7 Documents for Delivery: Binary Format for MPEG-7; Audio Part of MPEG-7; MPEG-7 Supporting Tools and Referen...)

  20. Modified Distributed Medium Access Control Algorithm Based on Multi-Packets Reception in Ad Hoc Networks

    Institute of Scientific and Technical Information of China (English)

    ZHENG Qing; YANG Zhen

    2005-01-01

    Based on the Multi-Packet Reception (MPR) capability at the physical layer and the Distributed Coordination Function (DCF) of the IEEE 802.11 MAC protocol, we propose a modified WAITING mechanism, named the modified distributed medium access control algorithm, to make full use of the MPR capability. We describe each step of the algorithm after introducing the WAITING mechanism, and then analyze how the waiting time affects the throughput performance of the network. The NS-2 network simulator is used to evaluate the throughput of the new WAITING algorithm, which we compare with the IEEE 802.11 MAC protocol and the old WAITING algorithm. The experimental results show that our new algorithm has the best performance.

  1. LHC Databases on the Grid: Achievements and Open Issues

    CERN Document Server

    Vaniachine, A V

    2010-01-01

    To extract physics results from the recorded data, the LHC experiments use Grid computing infrastructure. Event data processing on the Grid requires scalable access to non-event data (detector conditions, calibrations, etc.) stored in relational databases. The database-resident data are critical for the event data reconstruction processing steps and often required for physics analysis. This paper reviews the LHC experience with database technologies for Grid computing. Topics include: database integration with the Grid computing models of the LHC experiments; choice of database technologies; examples of database interfaces; distributed database applications (data complexity, update frequency, data volumes and access patterns); and scalability of database access in the Grid computing environment of the LHC experiments. The review describes areas in which substantial progress was made and the remaining open issues.

  2. Availability and distribution of, and geographic access to emergency obstetric care in Zambia.

    Science.gov (United States)

    Gabrysch, Sabine; Simushi, Virginia; Campbell, Oona M R

    2011-08-01

    To assess the availability and coverage of emergency obstetric care (EmOC) services in Zambia. Reported provision of EmOC signal functions in the Zambian Health Facility Census and additional criteria on staffing, opening hours, and referral capacity were used to classify all Zambian health facilities as providing comprehensive EmOC, basic EmOC, or more limited care. Geographic accessibility of EmOC services was estimated by linking health facility data with data from the Zambian population census. Few Zambian health facilities provided all basic EmOC signal functions and had qualified health professionals available on a 24-hour basis. Of the 1131 Zambian delivery facilities, 135 (12%) were classified as providing EmOC. Zambia nearly met the UN EmOC density benchmarks nationally, but EmOC facilities and health professionals were unevenly distributed between provinces. Geographic access to EmOC services in rural areas was low; in most provinces, less than 25% of the population lived within 15 km of an EmOC facility. A national Health Facility Census with geographic information is a valuable tool for assessing service availability and coverage at national and subnational levels. Simultaneously assessing health worker density and geographic access adds crucial information. Copyright © 2011 International Federation of Gynecology and Obstetrics. Published by Elsevier Ireland Ltd. All rights reserved.
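    The 15 km accessibility criterion above reduces to a point-to-point distance test between a person's location and the nearest EmOC facility. The sketch below shows that core computation under the simplifying assumption that locations are plain latitude/longitude points; the function names and coordinates are illustrative only, not the study's actual method (which linked the Health Facility Census to population census data).

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two (lat, lon) points in kilometres.
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def within_reach(person, facilities, radius_km=15.0):
    # True if any EmOC facility lies within radius_km of the person's location.
    return any(haversine_km(person[0], person[1], f[0], f[1]) <= radius_km
               for f in facilities)
```

    Straight-line distance typically overstates access; real analyses often substitute road-network or travel-time distance, which is one reason rural coverage figures such as those above can be even lower in practice.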

  3. Analysis of the access patterns at GSFC distributed active archive center

    Science.gov (United States)

    Johnson, Theodore; Bedet, Jean-Jacques

    1996-01-01

    The Goddard Space Flight Center (GSFC) Distributed Active Archive Center (DAAC) has been operational for more than two years. Its mission is to support existing and pre-Earth Observing System (EOS) Earth science datasets, facilitate scientific research, and test Earth Observing System Data and Information System (EOSDIS) concepts. Over 550,000 files and documents have been archived, and more than six terabytes have been distributed to the scientific community. Information about user requests and file access patterns, and their impact on system loading, is needed to optimize current operations and to plan for future archives. To facilitate the management of daily activities, the GSFC DAAC has developed a database system to track correspondence, requests, ingestion and distribution. In addition, several log files which record transactions on Unitree are maintained and periodically examined. This study identifies user request and file access patterns at the GSFC DAAC during 1995. The analysis is limited to the subset of orders for which the data files are under the control of the Hierarchical Storage Management (HSM) system Unitree. The results show that most of the data volume ordered was for two data products. The volume was also mostly made up of level 3 and 4 data, and most of it was distributed on 8 mm and 4 mm tapes. In addition, most of the volume ordered was for delivery in North America, although there was significant worldwide use. There was a wide range of request sizes in terms of volume and number of files ordered; on average, 78.6 files were ordered per request. Using the data managed by Unitree, several caching algorithms have been evaluated for both hit rate and the overhead ('cost') associated with the movement of data from near-line devices to disks. The algorithm called LRU/2 bin was found to be the best for this workload, but the STbin algorithm also worked well.
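    Caching evaluations of this kind replay a recorded request trace against each candidate replacement policy and compare hit rates. The LRU/2 bin and STbin variants from the study are not specified in the abstract, so the sketch below simulates plain LRU as the baseline idea; the function name and trace are illustrative.

```python
from collections import OrderedDict

def lru_hit_rate(requests, cache_slots):
    """Replay a trace of file requests through an LRU cache and
    return the fraction of requests served from cache."""
    cache = OrderedDict()              # file id -> None, ordered by recency
    hits = 0
    for f in requests:
        if f in cache:
            hits += 1
            cache.move_to_end(f)       # mark as most recently used
        else:
            if len(cache) >= cache_slots:
                cache.popitem(last=False)  # evict least recently used
            cache[f] = None
    return hits / len(requests)
```

    Running several such simulations over the same trace with different policies and cache sizes yields exactly the hit-rate comparison described above.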

  4. Data Access Tools And Services At The Goddard Distributed Active Archive Center (GDAAC)

    Science.gov (United States)

    Pham, L. B.; Eng, E.; Sweatman, P.

    2003-12-01

    As one of the largest providers of Earth science data from the Earth Observing System, the GDAAC provides the latest data from the Moderate Resolution Imaging Spectroradiometer (MODIS), Atmospheric Infrared Sounder (AIRS), and Solar Radiation and Climate Experiment (SORCE) data products via the GDAAC's data pool (50 TB of disk cache). To make this huge volume of data more accessible to the public and science communities, the GDAAC offers multiple data access tools and services: the Open Source Project for Network Data Access Protocol (OPeNDAP), the Grid Analysis and Display System (GrADS/DODS) server (GDS), the Live Access Server (LAS), the OpenGIS Web Map Server (WMS) and Near Archive Data Mining (NADM). The objective is to assist users in electronically retrieving a smaller, usable portion of the data for further analysis. The OPeNDAP server, formerly known as the Distributed Oceanographic Data System (DODS), allows the user to retrieve data without worrying about the data format. OPeNDAP is capable of server-side subsetting of HDF, HDF-EOS, netCDF, JGOFS, ASCII, DSP, FITS and binary data formats. The GrADS/DODS server is capable of serving the same data formats as OPeNDAP, with the additional feature of server-side analysis: users can analyze the data on the server, thereby decreasing the computational load on their own systems. The LAS is a flexible server that allows users to graphically visualize data on the fly, to request different file formats and to compare variables from distributed locations. Users of LAS can also use other available graphics viewers such as IDL, Matlab or GrADS. WMS is based on OPeNDAP for serving geospatial information; it supports the OpenGIS protocol to provide data in GIS-friendly formats for analysis and visualization. NADM is another access route to the GDAAC's data pool: it gives users the capability to use a browser to upload their C, FORTRAN or IDL algorithms, test the algorithms, and mine data in the data pool. With NADM, the GDAAC provides an
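    Server-side subsetting in OPeNDAP works by appending a constraint expression to the dataset URL, so the server slices the array and only the requested hyperslab crosses the network. A minimal sketch of building such a URL follows; the host and variable names are hypothetical, and the bracket syntax follows the classic DAP2 `[start:stride:stop]` convention.

```python
def opendap_subset_url(dataset_url, variable, slices, response="ascii"):
    """Build an OPeNDAP constraint-expression URL asking the server
    to subset `variable` before sending anything over the network.
    Each slice is a (start, stride, stop) tuple, one per dimension."""
    ce = variable + "".join(f"[{a}:{b}:{c}]" for a, b, c in slices)
    return f"{dataset_url}.{response}?{ce}"
```

    A client then simply fetches the resulting URL; this server-side slicing is what lets tools like GDS and LAS serve small portions of multi-terabyte archives.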

  5. Development and Field Test of a Real-Time Database in the Korean Smart Distribution Management System

    Directory of Open Access Journals (Sweden)

    Sang-Yun Yun

    2014-03-01

    Recently, distribution management systems (DMS) that can conduct periodic system analysis and control by hosting various application programs have been actively developed. In this paper, we summarize the development and demonstration of a database structure that can perform real-time system analysis and control of the Korean smart distribution management system (KSDMS). The developed database structure consists of a common information model (CIM)-based off-line database (DB), a physical DB (PDB) for DB establishment of the operating server, a real-time DB (RTDB) for real-time server operation and remote terminal unit data interconnection, and an application common model (ACM) DB for running application programs. The ACM DB for real-time system analysis and control of the application programs was developed using a parallel table structure and a linked-list model, thereby providing fast input and output as well as high execution speed of application programs. Furthermore, the ACM DB was configured with hierarchical and non-hierarchical data models to reduce the DB size and increase operation speed by excluding system elements unnecessary for analysis and control. The proposed database model was implemented and tested at the Gochaing and Jeju offices using a real system. Through data measurement of the remote terminal units, and through the operation and control of the application programs using those measurements, the performance, speed, and integrity of the proposed database model were validated, demonstrating that this model can be applied to real systems.

  6. Right time, right place: improving access to health service through effective retention and distribution of health workers.

    Science.gov (United States)

    Crettenden, Ian; Poz, Mario Dal; Buchan, James

    2013-11-25

    This editorial introduces the 'Right time, Right place: improving access to health service through effective retention and distribution of health workers' thematic series. This series draws from studies in a range of countries and provides new insights into what can be done to improve access to health through more effective human resources policies, planning and management. The primary focus is on health workforce distribution and retention.

  7. Spatial analysis of cattle and shoat population in Ethiopia: growth trend, distribution and market access.

    Science.gov (United States)

    Leta, Samson; Mesele, Frehiwot

    2014-01-01

    The livestock subsector makes an enormous contribution to Ethiopia's national economy and to the livelihoods of many Ethiopians. The subsector contributes about 16.5% of the national Gross Domestic Product (GDP) and 35.6% of the agricultural GDP. It also contributes 15% of export earnings and 30% of agricultural employment. The livestock subsector currently supports and sustains livelihoods for 80% of the rural population. The GDP of livestock-related activities is valued at 59 billion birr. Ethiopian livestock population trends, distribution and marketing vary considerably across space and time for a variety of reasons. This study aimed to assess cattle and shoat population growth trends, distribution and market access. Regression analysis was used to assess the cattle and shoat population growth trend, and Geographic Information Systems (GIS) techniques were used to determine the spatial distribution of cattle and shoats and their relative access to market. The data sets used are the agricultural census (2001/02) and the annual CSA agricultural sample surveys (1995/96 to 2012/13). In the past eighteen years, the livestock population, namely cattle, sheep and goats, grew from 54.5 million to over 103.5 million, an average annual increment of 3.4 million. The current average national cattle, sheep and goat populations per km² are estimated to be 71, 33 and 29, respectively (excluding Addis Ababa, Afar and Somali regions). Of the total livestock population, about 46% of cattle, 43% of sheep and 40% of goats are reared within a 10 km radius of major livestock market centres and all-weather roads. On the other hand, three quarters of the country's land mass, which comprises 15% of the cattle, 20% of the sheep and 21% of the goat population, is not accessible to market (greater than 30 km from major livestock market centres). The central highland regions account for the largest share of the livestock population and are also more accessible to market. Defining the

  8. Portable source initiative for distribution of cross-platform compatible multispectral databases

    Science.gov (United States)

    Nichols, William K.

    2003-09-01

    In response to popular demand, The Visual Group has undertaken an effort known as the Portable Source Initiative, a program intended to create cross-platform compatible multi-spectral databases by building a managed source set of data and expending a minimal amount of effort republishing run-time formatted proprietary databases. By building visual and sensor databases using a variety of sources, then feeding all value-added work back into standard, open, widely used source formats, databases can be published from this "refined source data" in a relatively simple, automated, and repeatable fashion. To this end, with the endorsement of the Army's PM Cargo, we have offered a sample set of the source data we are building for the CH-47F TFPS program to the visual simulation industry at large to be republished into runtime formats. The results of this effort were an overwhelming acceptance of the concepts and theories within, and support from both industry and multi-service flight simulation teams.

  9. Distributed Fair Auto Rate Medium Access Control for IEEE 802.11 Based WLANs

    Science.gov (United States)

    Zhu, Yanfeng; Niu, Zhisheng

    Much research has shown that a carefully designed auto rate medium access control can use the underlying physical multi-rate capability to exploit the time-variation of the channel. In this paper, we develop a simple analytical model to elucidate the rule that maximizes the throughput of RTS/CTS based multi-rate wireless local area networks. Based on the discovered rule, we propose two distributed fair auto rate medium access control schemes, called FARM and FARM+, from the viewpoints of throughput fairness and time-share fairness, respectively. With the proposed schemes, after receiving an RTS frame, the receiver selectively returns the CTS frame to inform the transmitter of the maximum feasible rate probed from the signal-to-noise ratio of the received RTS frame. The key feature of the proposed schemes is that they are capable of maintaining throughput/time-share fairness in asymmetric situations where the distribution of SNR varies across stations. Extensive simulation results show that the proposed schemes outperform the existing throughput/time-share fair auto rate schemes in time-varying channel conditions.
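    The receiver-side probing described above amounts to mapping the SNR measured on the incoming RTS frame to the highest transmission rate that SNR can support, which the CTS then reports back. A minimal sketch with made-up SNR thresholds follows; real thresholds depend on modulation and hardware and are not given in the abstract.

```python
# Hypothetical (min SNR in dB, rate in Mbit/s) pairs for 802.11b-style rates;
# deployments calibrate such thresholds per receiver.
RATE_TABLE = [(2.0, 1), (5.0, 2), (9.0, 5.5), (12.0, 11)]

def select_rate(snr_db):
    """Return the highest rate whose SNR threshold is met by the probed
    RTS frame, or None if even the lowest rate is infeasible."""
    feasible = [rate for thresh, rate in RATE_TABLE if snr_db >= thresh]
    return max(feasible) if feasible else None
```

    FARM and FARM+ layer fairness on top of this: the selected rate is further constrained so that stations with persistently poor SNR are not starved of throughput or airtime.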

  10. Distributed Robust Power Minimization for the Downlink of Multi-Cloud Radio Access Networks

    KAUST Repository

    Dhifallah, Oussama Najeeb

    2017-02-07

    Conventional cloud radio access networks assume single-cloud processing and treat inter-cloud interference as background noise. This paper considers the downlink of a multi-cloud radio access network (CRAN) where each cloud is connected to several base stations (BSs) through limited-capacity wireline backhaul links. The set of BSs connected to each cloud, called a cluster, serves a set of pre-known mobile users (MUs). The performance of the system is therefore a function of both inter-cloud and intra-cloud interference, as well as of the compression schemes of the limited-capacity backhaul links. The paper assumes an independent compression scheme and imperfect channel state information (CSI), where the CSI errors belong to an ellipsoidal bounded region. The problem of interest becomes that of minimizing the network's total transmit power subject to BS power and quality-of-service constraints, as well as backhaul capacity and CSI error constraints. The paper suggests solving the problem using the alternating direction method of multipliers (ADMM). One of the highlights of the paper is that the proposed ADMM-based algorithm can be implemented in a distributed fashion across the multi-cloud network by allowing a limited amount of information exchange between the coupled clouds. Simulation results show that the proposed distributed algorithm provides performance similar to the centralized algorithm in a reasonable number of iterations.
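    ADMM lends itself to this kind of distributed implementation because each cloud solves a local subproblem and only a small consensus variable is exchanged per iteration. The paper's constrained power-minimization problem is not reproduced here; purely as an illustration of the mechanism, the sketch below applies consensus ADMM to a separable least-squares problem split across two hypothetical "clouds".

```python
import numpy as np

def consensus_admm(A_parts, b_parts, rho=1.0, iters=300):
    """Consensus ADMM for minimize sum_i ||A_i x - b_i||^2, where each
    node i holds (A_i, b_i) privately and only the shared consensus
    variable z is exchanged between nodes."""
    n = A_parts[0].shape[1]
    k = len(A_parts)
    x = [np.zeros(n) for _ in range(k)]   # local primal variables
    u = [np.zeros(n) for _ in range(k)]   # scaled dual variables
    z = np.zeros(n)                       # consensus variable
    # Pre-factor each local normal-equation matrix (2 A_i^T A_i + rho I).
    lhs = [2 * A.T @ A + rho * np.eye(n) for A in A_parts]
    for _ in range(iters):
        for i in range(k):  # local updates, run in parallel in practice
            rhs = 2 * A_parts[i].T @ b_parts[i] + rho * (z - u[i])
            x[i] = np.linalg.solve(lhs[i], rhs)
        z = sum(x[i] + u[i] for i in range(k)) / k  # averaging (consensus) step
        for i in range(k):
            u[i] += x[i] - z  # dual update penalizes disagreement with z
    return z
```

    The per-iteration exchange is only the n-vector z, mirroring the "limited amount of information exchange between the coupled clouds" highlighted above.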

  11. Grid Data Access on Widely Distributed Worker Nodes Using Scalla and SRM

    Energy Technology Data Exchange (ETDEWEB)

    Jakl, Pavel; /Prague, Inst. Phys.; Lauret, Jerome; /Brookhaven; Hanushevsky, Andrew; /SLAC; Shoshani, Arie; /LBL, Berkeley; Sim, Alex; /LBL, Berkeley; Gu, Junmin; /LBL, Berkeley

    2011-11-10

    Facing the reality of storage economics, NP experiments such as RHIC/STAR have been engaged in a shift of the analysis model, and now heavily rely on cheap disks attached to processing nodes, as such a model is extremely beneficial compared with expensive centralized storage. Additionally, exploiting storage aggregates with enhanced distributed computing capabilities such as dynamic space allocation (lifetime of spaces), file management on shared storage (lifetime of files, file pinning), storage policies, or uniform access to heterogeneous storage solutions is not an easy task. The Xrootd/Scalla system allows for storage aggregation. We will present an overview of the largest deployment of Scalla (Structured Cluster Architecture for Low Latency Access) in the world, spanning over 1000 CPUs co-sharing 350 TB of Storage Elements, and the experience of making such a model work in the RHIC/STAR standard analysis framework. We will explain the key features and the approach used to make access to mass storage (HPSS) possible in such a large deployment context. Furthermore, we will give an overview of a fully 'gridified' solution using the plug-and-play features of the Scalla architecture, replacing standard storage access with grid middleware SRM (Storage Resource Manager) components designed for space management, and will compare this solution with the standard Scalla approach in use in STAR for the past two years. Integration details, future plans and development status will be explained in the areas of best transfer strategy between multiple-choice data pools and best placement with respect to load balancing and interoperability with other SRM-aware tools or implementations.

  12. An open-access database of grape harvest dates for climate research: data description and quality assessment

    Directory of Open Access Journals (Sweden)

    V. Daux

    2012-09-01

    We present an open-access dataset of grape harvest date (GHD) series that has been compiled from international, French and Spanish literature and from unpublished documentary sources from public organizations and wine-growers. As of June 2011, this GHD dataset comprises 380 series, mainly from France (93% of the data), as well as series from Switzerland, Italy, Spain and Luxemburg. The series have variable length (from 1 to 479 data points, mean length of 45) and contain gaps of variable sizes (mean ratio of observations to series length of 0.74). The longest and most complete ones are from Burgundy, Switzerland, the Southern Rhône valley, Jura and Ile-de-France. The most ancient harvest date in the dataset is 1354, in Burgundy.

    The GHD series were grouped into 27 regions according to their location, geomorphological and geological criteria, and past and present grape varieties. The GHD regional composite series (GHD-RCS) were calculated and compared pairwise to assess their reliability, assuming that series close to one another are highly correlated. Most of the pairwise correlations are significant (p-value < 0.001) and strong (mean pairwise correlation coefficient of 0.58). As expected, the correlations tend to be higher when the vineyards are closer. The highest correlation (R = 0.91) is obtained between the High Loire Valley and the Ile-de-France GHD-RCS.

    The strong dependence of the vine cycle on temperature and, therefore, the strong link between the harvest dates and the temperature of the growing season was also used to test the quality of the GHD series. The strongest correlations are obtained between the GHD-RCS and the temperature series of the nearest weather stations. Moreover, the GHD-RCS/temperature correlation maps show spatial patterns similar to temperature correlation maps. The stability of the correlations over time is explored. The most striking feature is their generalised deterioration at the

  13. SeaDataNet - Pan-European infrastructure for marine and ocean data management: Unified access to distributed data sets

    Science.gov (United States)

    Schaap, D. M. A.; Maudire, G.

    2009-04-01

    generated by participating data centres, directly from their databases. CDI partners can make use of dedicated SeaDataNet tools to generate CDI XML files automatically. Approach for SeaDataNet V1 and V2: the approach, which is in line with the INSPIRE Directive, comprises the following services:
        Discovery services = metadata directories
        Security services = authentication, authorization and accounting (AAA)
        Delivery services = data access and downloading of datasets
        Viewing services = visualisation of metadata, data and data products
        Product services = generic and standard products
        Monitoring services = statistics on usage and performance of the system
        Maintenance services = updating of metadata by SeaDataNet partners
    The services will be operated over a distributed network of interconnected Data Centres accessed through a central Portal. In addition to service access, the portal will provide information on data management standards, tools and protocols. The architecture has been designed to provide a coherent system based on V1 services, whilst leaving the pathway open for later extension with V2 services. For the implementation, a range of technical components have been defined. Some are already operational, with the remainder in the final stages of development and testing. These make use of recent web technologies and also comprise Java components to provide multi-platform support and syntactic interoperability. To facilitate sharing of resources and interoperability, SeaDataNet has adopted SOAP Web Service technology. The SeaDataNet architecture and components have been designed to handle all kinds of oceanographic and marine environmental data, including both in-situ measurements and remote sensing observations. The V1 technical development is ready, and the V1 system is now being implemented and adopted by all participating data centres in SeaDataNet.
Interoperability: Interoperability is the key to distributed data management system success and it

  14. Isomerizing olefin metathesis as a strategy to access defined distributions of unsaturated compounds from fatty acids.

    Science.gov (United States)

    Ohlmann, Dominik M; Tschauder, Nicole; Stockis, Jean-Pierre; Goossen, Käthe; Dierker, Markus; Goossen, Lukas J

    2012-08-22

    The dimeric palladium(I) complex [Pd(μ-Br)(tBu3P)]2 was found to possess unique activity for the catalytic double-bond migration within unsaturated compounds. This isomerization catalyst is fully compatible with state-of-the-art olefin metathesis catalysts. In the presence of bifunctional catalyst systems consisting of [Pd(μ-Br)(tBu3P)]2 and NHC-indenylidene ruthenium complexes, unsaturated compounds are continuously converted into equilibrium mixtures of double-bond isomers, which concurrently undergo catalytic olefin metathesis. Using such highly active catalyst systems, isomerizing olefin metathesis becomes an efficient way to access defined distributions of unsaturated compounds from olefinic substrates. Computational models were designed to predict the outcome of such reactions. The synthetic utility of isomerizing metathesis is demonstrated by various new applications. Thus, the isomerizing self-metathesis of oleic and other fatty acids and esters provides olefins along with unsaturated mono- and dicarboxylates in distributions with adjustable widths. The cross-metathesis of two olefins with different chain lengths leads to regular distributions with a mean chain length that depends on the chain lengths of both starting materials and their ratio. The cross-metathesis of oleic acid with ethylene serves to access olefin blends with mean chain lengths below 18 carbons, while its analogous reaction with hex-3-enedioic acid gives unsaturated dicarboxylic acids with adjustable mean chain lengths as major products. Overall, the concept of isomerizing metathesis promises to open up new synthetic opportunities for the incorporation of oleochemicals as renewable feedstocks into the chemical value chain.

  15. Wireless access to a pharmaceutical database: A demonstrator for data driven Wireless Application Prorocol (WAP) applications in medical information processing

    DEFF Research Database (Denmark)

    Hansen, Michael Schacht; Dørup, Jens

    2001-01-01

    catalogue to Wireless Application Protocol using open source freeware at all steps. METHODS: We used Apache 1.3 web software on a Linux server. Data containing the Danish pharmaceutical catalogue were imported from an ASCII file into a MySQL 3.22.32 database using a Practical Extraction and Report Language...
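    The import step described in the methods (a delimited ASCII catalogue loaded into a relational database) can be sketched as follows. The original work used Perl and MySQL 3.22.32; this illustration uses Python and SQLite so it runs without a database server, and the three-column record layout shown is invented, not the Danish pharmaceutical catalogue's actual format.

```python
import sqlite3

# Hypothetical semicolon-delimited catalogue export: id;name;dose.
RAW = """0001;Paracetamol;500 mg
0002;Ibuprofen;200 mg
0003;Amoxicillin;250 mg"""

def load_catalogue(raw_text, conn):
    """Parse a delimited ASCII export and load it into a relational
    table, mirroring the paper's ASCII-to-database import step."""
    conn.execute("CREATE TABLE drug (id TEXT PRIMARY KEY, name TEXT, dose TEXT)")
    rows = [line.split(";") for line in raw_text.strip().splitlines()]
    conn.executemany("INSERT INTO drug VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load_catalogue(RAW, conn)
```

    Once the catalogue is in SQL, a thin server-side layer can render query results as WML or other lightweight markup for the handheld client, which is the shape of the demonstrator described above.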

  16. Matching spatial with ontological brain regions using Java tools for visualization, database access, and integrated data analysis.

    NARCIS (Netherlands)

    Bezgin, G.; Reid, A.T.; Schubert, D.; Kotter, R.

    2009-01-01

    Brain atlases are widely used in experimental neuroscience as tools for locating and targeting specific brain structures. Delineated structures in a given atlas, however, are often difficult to interpret and to interface with database systems that supply additional information using hierarchically o

  17. Mars Global Digital Dune Database (MGD3): North polar region (MC-1) distribution, applications, and volume estimates

    Science.gov (United States)

    Hayward, R.K.

    2011-01-01

    The Mars Global Digital Dune Database (MGD3) now extends from 90°N to 65°S. The recently released north polar portion (MC-1) of MGD3 adds ~844 000 km² of moderate- to large-size dark dunes to the previously released equatorial portion (MC-2 to MC-29) of the database. The database, available in GIS and tabular format in USGS Open-File Reports, makes it possible to examine global dune distribution patterns and to compare dunes with other global data sets (e.g. atmospheric models). MGD3 can also be used by researchers to identify areas suitable for more focused studies. The utility of MGD3 is demonstrated through three example applications. First, the uneven geographic distribution of the dunes is described. Second, dune-derived wind direction and its role as ground truth for atmospheric models is reviewed. Comparisons between dune-derived winds and global and mesoscale atmospheric models suggest that local topography may have an important influence on dune-forming winds. Third, the methods used here to estimate north polar dune volume are presented, and these methods and estimates (1130 km³ to 3250 km³) are compared with those of previous researchers (1158 km³ to 15 000 km³). In the near future, MGD3 will be extended to include the south polar region. © 2011 by John Wiley and Sons, Ltd.

  18. The final COS-B database now publicly available

    Science.gov (United States)

    Mayer-Hasselwander, H. A.; Bennett, K.; Bignami, G. F.; Bloemen, J. B. G. M.; Buccheri, R.; Caraveo, P. A.; Hermsen, W.; Kanbach, G.; Lebrun, F.; Paul, J. A.

    1985-01-01

    The data obtained by the gamma-ray satellite COS-B were processed, condensed, and integrated, together with the relevant mission and experiment parameters, into the Final COS-B Database. The database contents and the access programs supplied with the database are outlined. The final sky coverage and a presentation of the large-scale distribution of the observed Milky Way emission are given. The database is available through the European Space Agency.

  19. Estimating species diversity and distribution in the era of Big Data: to what extent can we trust public databases?

    Science.gov (United States)

    Maldonado, Carla; Molina, Carlos I.; Zizka, Alexander; Persson, Claes; Taylor, Charlotte M.; Albán, Joaquina; Chilquillo, Eder; Antonelli, Alexandre

    2015-01-01

    Abstract Aim Massive digitalization of natural history collections is now leading to a steep accumulation of publicly available species distribution data. However, taxonomic errors and geographical uncertainty of species occurrence records are now acknowledged by the scientific community – putting into question to what extent such data can be used to unveil correct patterns of biodiversity and distribution. We explore this question through quantitative and qualitative analyses of uncleaned versus manually verified datasets of species distribution records across different spatial scales. Location The American tropics. Methods As test case we used the plant tribe Cinchoneae (Rubiaceae). We compiled four datasets of species occurrences: one created manually and verified through classical taxonomic work, and the rest derived from GBIF under different cleaning and filling schemes. We used new bioinformatic tools to code species into grids, ecoregions, and biomes following WWF's classification. We analysed species richness and altitudinal ranges of the species. Results Altitudinal ranges for species and genera were correctly inferred even without manual data cleaning and filling. However, erroneous records affected spatial patterns of species richness. They led to an overestimation of species richness in certain areas outside the centres of diversity in the clade. The location of many of these areas comprised the geographical midpoint of countries and political subdivisions, assigned long after the specimens had been collected. Main conclusion Open databases and integrative bioinformatic tools allow a rapid approximation of large‐scale patterns of biodiversity across space and altitudinal ranges. We found that geographic inaccuracy affects diversity patterns more than taxonomic uncertainties, often leading to false positives, i.e. overestimating species richness in relatively species poor regions. Public databases for species distribution are valuable and should be

  20. Airborne Network Data Availability Using Peer to Peer Database Replication on a Distributed Hash Table

    Science.gov (United States)

    2013-03-01

    …AODV) were used as the three routing protocols. All routing protocols were configured with the default values of their parameters. … Optimized Link State Routing; OSI Open Systems Interconnection; P2P Peer-to-Peer; PDP Peer Database Protocol; SAR Spatially Aware Routing; UAV Unmanned Aerial … concludes that one major aspect is the interaction between two routing systems: the ad-hoc routing protocol and the DHT routing algorithms. Since…

  1. A service-oriented data access control model

    Science.gov (United States)

    Meng, Wei; Li, Fengmin; Pan, Juchen; Song, Song; Bian, Jiali

    2017-01-01

    The development of mobile computing, cloud computing and distributed computing meets growing individual service needs. For complex application systems, ensuring real-time, dynamic, fine-grained data access control is an urgent problem. By analyzing common data access control models and building on the mandatory access control model, the paper proposes a service-oriented access control model. Treating system services as subjects and database data as objects, the model defines access levels and access identifiers for subjects and objects, and ensures that system services access databases securely.
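
    The subject/object level check described above can be sketched as follows; the class names, levels, and the specific read/write rules are illustrative assumptions on our part, not the paper's exact scheme.

```python
from dataclasses import dataclass

# Hypothetical sketch of a mandatory access control check where system
# services are subjects and database data are objects.
@dataclass(frozen=True)
class Subject:          # a system service
    name: str
    level: int          # access level (higher = more privileged)

@dataclass(frozen=True)
class DataObject:       # a database table or record
    name: str
    level: int          # classification level

def can_read(s: Subject, o: DataObject) -> bool:
    # A service may read data at or below its own level ("no read up").
    return s.level >= o.level

def can_write(s: Subject, o: DataObject) -> bool:
    # A service may write only at or above its own level ("no write down").
    return s.level <= o.level

billing = Subject("billing-service", level=2)
salaries = DataObject("salaries", level=3)
print(can_read(billing, salaries))   # False: a level-2 service cannot read level-3 data
print(can_write(billing, salaries))  # True: writing upward is permitted under this rule
```

The point of the sketch is that the access decision depends only on the two levels, so it can be evaluated at service-call time without per-user state.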

  2. Distributed SNR Estimation using Constant Modulus Signaling over Gaussian Multiple-Access Channels

    CERN Document Server

    Banavar, Mahesh K; Spanias, Andreas

    2011-01-01

    A sensor network is used for distributed joint mean and variance estimation in a single time snapshot. Sensors observe a signal embedded in noise; the observations are phase modulated using a constant-modulus scheme and transmitted over a Gaussian multiple-access channel to a fusion center, where the mean and variance are estimated jointly using an asymptotically minimum-variance estimator, which is shown to decouple into simple individual estimators of the mean and the variance. The constant-modulus phase modulation scheme ensures a fixed transmit power, robust estimation across several sensing noise distributions, and an SNR estimate that requires a single set of transmissions from the sensors to the fusion center, unlike the amplify-and-forward approach. The performance of the estimators of the mean and variance is evaluated in terms of asymptotic variance, which is used to evaluate the performance of the SNR estimator in the case of Gaussian, Laplace and Cauchy sensing noise distributions. For each sensing...
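
    For Gaussian sensing noise, the decoupling can be illustrated numerically: the empirical characteristic function of the phase-modulated observations yields the mean from its angle and the variance from its magnitude. This is a simplified sketch with an ideal noiseless channel and parameters of our own choosing, not the paper's exact estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
# Illustrative parameters (assumed, not from the paper).
theta, sigma, omega, N = 2.0, 0.5, 0.3, 100_000

x = theta + sigma * rng.standard_normal(N)   # sensor observations: signal + Gaussian noise
z = np.mean(np.exp(1j * omega * x))          # superposition of constant-modulus transmissions

# For Gaussian noise, E[e^{j*omega*x}] = e^{j*omega*theta - omega^2*sigma^2/2},
# so the angle of z estimates the mean and its magnitude estimates the variance:
mean_hat = np.angle(z) / omega
var_hat = -2.0 * np.log(np.abs(z)) / omega**2
snr_hat = mean_hat**2 / var_hat              # SNR from the same single set of transmissions
```

Note the fixed transmit power: every term `exp(1j*omega*x)` has unit modulus regardless of the observation, which is what makes the scheme robust to heavy-tailed sensing noise.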

  3. Data-mining analysis of the global distribution of soil carbon in observational databases and Earth system models

    Science.gov (United States)

    Hashimoto, Shoji; Nanko, Kazuki; Ťupek, Boris; Lehtonen, Aleksi

    2017-03-01

    Future climate change will dramatically change the carbon balance in the soil, and this change will affect the terrestrial carbon stock and the climate itself. Earth system models (ESMs) are used to understand the current climate and to project future climate conditions, but the soil organic carbon (SOC) stocks simulated by ESMs and those of observational databases are not well correlated when the two are compared at fine grid scales. The specific key processes and factors, as well as the relationships among these factors that govern the SOC stock, remain unclear; the inclusion of such missing information would improve the agreement between modeled and observational data. In this study, we sought to identify the influential factors that govern global SOC distribution in observational databases, as well as those simulated by ESMs. We used a data-mining (machine-learning) scheme, boosted regression trees (BRT), to identify the factors affecting the SOC stock. We applied the BRT scheme to three observational databases and 15 ESM outputs from the fifth phase of the Coupled Model Intercomparison Project (CMIP5) and examined the effects of 13 variables/factors categorized into five groups (climate, soil property, topography, vegetation, and land-use history). Globally, the contributions of mean annual temperature, clay content, carbon-to-nitrogen (CN) ratio, wetland ratio, and land cover were high in observational databases, whereas the contributions of mean annual temperature, land cover, and net primary productivity (NPP) were predominant in the SOC distribution in ESMs. A comparison of the influential factors at a global scale revealed that the most distinct differences between the SOCs from the observational databases and the ESMs were the low clay content and CN ratio contributions, and the high NPP contribution, in the ESMs. The results of this study will aid in identifying the causes of the current mismatches between observational SOC databases and ESM outputs.
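
    The BRT idea — stagewise fitting of small trees to the current residuals, with the explained variance credited to the splitting variable — can be sketched with depth-1 trees (stumps) on synthetic data. The toy data, variable roles, and importance bookkeeping below are our illustration; a real analysis would use a full BRT implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic stand-in: "SOC" driven mainly by feature 0 (e.g. temperature),
# weakly by feature 1 (e.g. clay); feature 2 is irrelevant noise.
X = rng.uniform(0.0, 1.0, size=(500, 3))
y = -3.0 * X[:, 0] + 1.0 * X[:, 1] + 0.1 * rng.standard_normal(500)

def fit_stump(X, r):
    """Best single-feature threshold split minimizing squared error on residuals r."""
    best = None
    for j in range(X.shape[1]):
        for t in np.quantile(X[:, j], np.linspace(0.1, 0.9, 9)):
            left = X[:, j] <= t
            if left.all() or not left.any():
                continue
            pred = np.where(left, r[left].mean(), r[~left].mean())
            sse = np.sum((r - pred) ** 2)
            if best is None or sse < best[0]:
                best = (sse, j, t, r[left].mean(), r[~left].mean())
    return best[1:]

# Boosting: each stage fits the residuals left by the current model F.
lr, F, importance = 0.1, np.zeros(len(y)), np.zeros(X.shape[1])
base = np.mean((y - F) ** 2)
for _ in range(100):
    j, t, lmean, rmean = fit_stump(X, y - F)
    F += lr * np.where(X[:, j] <= t, lmean, rmean)
    new = np.mean((y - F) ** 2)
    importance[j] += base - new   # error reduction credited to the split feature
    base = new

print(importance / importance.sum())  # the dominant driver should rank first
```

Relative importances of this kind are what allow the comparison in the abstract: the same scheme run on observations and on ESM output yields directly comparable factor rankings.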

  4. Web Services-Based Access to Local Clinical Trial Databases: A Standards Initiative of the Association of American Cancer Institutes

    OpenAIRE

    Stahl, Douglas C.; Evans, Richard M.; Afrin, Lawrence B.; DeTeresa, Richard M.; Ko, Dave; Mitchell, Kevin

    2003-01-01

    Electronic discovery of the clinical trials being performed at a specific research center is a challenging task, which presently requires manual review of the center’s locally maintained databases or web pages of protocol listings. Near real-time automated discovery of available trials would increase the efficiency and effectiveness of clinical trial searching, and would facilitate the development of new services for information providers and consumers. Automated discovery efforts to date hav...

  5. Cardea: Providing Support for Dynamic Resource Access in a Distributed Computing Environment

    Science.gov (United States)

    Lepro, Rebekah

    2003-01-01

    The environment framing the modern authorization process spans domains of administration, relies on many different authentication sources, and manages complex attributes as part of the authorization process. Cardea facilitates dynamic access control within this environment as a central function of an inter-operable authorization framework. The system departs from the traditional authorization model by separating the authentication and authorization processes, distributing the responsibility for authorization data and allowing collaborating domains to retain control over their implementation mechanisms. Critical features of the system architecture and its handling of the authorization process differentiate the system from existing authorization components by addressing common needs not adequately addressed by existing systems. Continuing system research seeks to enhance the implementation of the current authorization model employed in Cardea, increase the robustness of current features, further the framework for establishing trust and promote interoperability with existing security mechanisms.

  6. A novel distributed algorithm for media access control address assignment in wireless sensor networks

    Institute of Scientific and Technical Information of China (English)

    TIAN Ye; SHENG Min; LI Jiandong

    2007-01-01

    This paper presents a novel distributed media access control (MAC) address assignment algorithm, namely virtual grid spatial reusing (VGSR), for wireless sensor networks, which efficiently reduces the size of the MAC address on the basis of both spatial reuse of MAC addresses and mapping of geographical position. By adjusting the communication range of sensor nodes, the VGSR algorithm can minimize the size of the MAC address while guaranteeing the connectivity of the sensor network. Theoretical analysis and experimental results show that the VGSR algorithm not only has low energy cost but also scales well with network size, with performance superior to that of other existing algorithms.
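
    The core idea — short addresses reused spatially on a virtual grid, safe as long as the communication range keeps same-address nodes out of each other's reach — can be sketched as follows. The function and its parameters are our illustrative assumptions, not VGSR's exact mapping.

```python
def grid_address(x: float, y: float, cell: float, reuse: int) -> int:
    """Map a node's position to a short MAC address.

    Addresses repeat every `reuse` cells in each direction, which causes no
    collisions as long as the radio range is shorter than the distance
    between two cells that share an address.
    """
    cx = int(x // cell) % reuse          # grid column, wrapped for reuse
    cy = int(y // cell) % reuse          # grid row, wrapped for reuse
    return cx * reuse + cy               # address space is only reuse**2 values

# Two far-apart nodes may legally share an address (spatial reuse)...
assert grid_address(0.0, 0.0, cell=10.0, reuse=4) == grid_address(400.0, 0.0, cell=10.0, reuse=4)
# ...while nearby nodes within the reuse block get distinct addresses.
assert grid_address(0.0, 0.0, cell=10.0, reuse=4) != grid_address(10.0, 0.0, cell=10.0, reuse=4)
```

With `reuse = 4` the address needs only 4 bits (16 values) regardless of network size, which is the kind of saving the abstract attributes to spatial reuse.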

  7. Design of a Distributed Personal Information Access Control Scheme for Secure Integrated Payment in NFC

    Directory of Open Access Journals (Sweden)

    Jungho Kang

    2015-06-01

    Full Text Available At the center of core technologies for a future cyber world, such as the Internet of Things (IoT) or big data, is a context-rich system that offers services by using situational information. The field where context-rich systems were first introduced is near-field communication (NFC)-based electronic payments. NFC-integrated payment services collect the payment information of the credit card and the location information to generate patterns in the user's consumption or movement through big data technology. Based on such pattern information, tailored services, such as advertisements, are offered to users. However, controlling access to personal information is difficult, because the collaborative relationship centered on the trusted service manager (TSM) is closely knit to the shared personal information. Moreover, Hadoop, among the many big data analytical technologies, offers access control functions but no way to authorize the processing of personal information, making it impossible to grant authority between service providers to process information. This paper therefore proposes a key generation and distribution method, as well as a secure communication protocol. The analysis shows greater efficiency in security and performance compared to related work.

  8. [Distribution of drinking water in French Guyana: issues and solutions for improving access].

    Science.gov (United States)

    Mansotte, François; Margueron, Thomas; Maison, Dominique

    2010-01-01

    French Guyana is located in South America, and it is confronted with an endemic situation where waterborne diseases are widespread, especially among the 30,000 people without access to drinking water. In 2007, two notices of the French High Council for Public Health were issued, one concerning vaccination against typhoid and the other on conditions for improving water supply in Guyana. The latter served as a basis for proposing and implementing actions to "improve water quality for those who did not have access to it". Some foundation for further action was provided by measures developed during the 1991 cholera outbreak there, when hand pumps and fountains were installed, and rainwater collection was promoted at the household level. Top priority is given to water supply provided by public facilities, especially through hand pumps. Rainwater harvest and storage is promoted for remote and very isolated households, including tools for purification through the use of a Brazilian-made ceramic filter. Important challenges are identified for the future, such as: conducting an evaluation of the technical choices made, developing a social and cultural understanding of drinking water and sanitation among the users, distribution and training for the use of water quality test kits, data sharing and exchange of good practice with neighbouring countries, and an accurate mapping of enteric disease cases recorded in local health facilities.

  9. Reconfigurable radio access unit to dynamically distribute W-band signals in 5G wireless access networks

    DEFF Research Database (Denmark)

    Rodríguez Páez, Juan Sebastián; Rommel, Simon; Vegas Olmos, Juan José

    2017-01-01

    In this paper a new type of radio access unit is proposed and demonstrated. This unit is composed of only a reduced number of components (compared to conventional unit designs) needed to optically generate wireless signals in the W-band (75–110 GHz), in combination with a switching system. The proposed...... system not only achieves BER values below the FEC limit, but gives an extra level of flexibility to the network by easing the redirection of the signal to different antennas....

  10. Experimental access to transition distribution amplitudes with the PANDA experiment at FAIR

    Energy Technology Data Exchange (ETDEWEB)

    Zambrana, Manuel; Ahmed, Samer; Deiseroth, Malte; Froehlich, Bertold; Khaneft, Dmitry; Lin, Dexu; Noll, Oliver; Valente, Roserio; Zimmermann, Iris [Institut fuer Kernphysik, Johannes Gutenberg Universitaet, Mainz (Germany); Helmholtz-Institut Mainz (Germany); Mora Espi, Maria Carmen; Ahmadi, Heybat; Capozza, Luigi; Dbeyssi, Alaa; Morales, Cristina; Rodriguez Pineiro, David [Helmholtz-Institut Mainz (Germany); Maas, Frank [Institut fuer Kernphysik, Johannes Gutenberg Universitaet, Mainz (Germany); Helmholtz-Institut Mainz (Germany); Prisma Cluster of Excellence, Mainz (Germany)

    2015-07-01

    We address the feasibility of accessing proton to pion Transition Distribution Amplitudes with the future PANDA detector at the FAIR facility. At high center of mass energy and four-momentum transfer, the amplitude of signal channel anti pp → e{sup +}e{sup -}π{sup 0} admits a QCD factorized description in terms of Distribution Amplitudes and Transition Distribution Amplitudes in the forward and backward regions. Assuming a factorized cross section, feasibility studies of measuring anti pp → e{sup +}e{sup -}π{sup 0} with PANDA have been performed at the center of mass energy squared s=5 GeV{sup 2} and s=10 GeV{sup 2}, in the kinematic region of four-momentum transfer 3.0 < q{sup 2} < 4.3 GeV{sup 2} and 5 < q{sup 2} < 9 GeV{sup 2}, respectively, with a neutral pion scattered in the forward or backward cone cosθ{sub π{sup 0}} > 0.5 in the anti pp center of mass frame. These include detailed simulations on signal reconstruction efficiency, rejection of the most severe background channel, i.e. anti pp → π{sup +}π{sup -}π{sup 0}, and the feasibility of the measurement using a sample of 2 fb{sup -1} of integrated luminosity. Results of the simulations show that a background rejection factor from 10{sup 7} at s=5 GeV{sup 2} to 10{sup 8} at s=10 GeV{sup 2} can be achieved, while keeping the signal reconstruction efficiency at the level of 40%, and that a clean lepton signal can be reconstructed with 2 fb{sup -1} of integrated luminosity at both energies. The ''measured'' cross sections with the simulations are used to test QCD factorization at the leading order by measuring scaling laws and fitting angular distributions.

  11. FishTraits Database

    Science.gov (United States)

    Angermeier, Paul L.; Frimpong, Emmanuel A.

    2009-01-01

    The need for integrated and widely accessible sources of species traits data to facilitate studies of ecology, conservation, and management has motivated development of traits databases for various taxa. In spite of the increasing number of traits-based analyses of freshwater fishes in the United States, no consolidated database of traits of this group exists publicly, and much useful information on these species is documented only in obscure sources. The largely inaccessible and unconsolidated traits information makes large-scale analysis involving many fishes and/or traits particularly challenging. FishTraits is a database of >100 traits for 809 (731 native and 78 exotic) fish species found in freshwaters of the conterminous United States, including 37 native families and 145 native genera. The database contains information on four major categories of traits: (1) trophic ecology, (2) body size and reproductive ecology (life history), (3) habitat associations, and (4) salinity and temperature tolerances. Information on geographic distribution and conservation status is also included. Together, we refer to the traits, distribution, and conservation status information as attributes. Descriptions of attributes are available here. Many sources were consulted to compile attributes, including state and regional species accounts and other databases.

  12. The GEISA 2009 Spectroscopic Database System and its CNES/CNRS Ether Products and Services Center Interactive Distribution

    Science.gov (United States)

    Jacquinet-Husson, Nicole; Crépeau, Laurent; Capelle, Virginie; Scott, Noëlle; Armante, Raymond; Chédin, Alain; Boonne, Cathy; Poulet-Crovisier, Nathalie

    2010-05-01

    The GEISA (1) (Gestion et Etude des Informations Spectroscopiques Atmosphériques: Management and Study of Atmospheric Spectroscopic Information) computer-accessible database, initiated in 1976, is developed and maintained at LMD (Laboratoire de Météorologie Dynamique, France). It is a system comprising three independent sub-databases devoted respectively to: line transition parameters, infrared and ultraviolet/visible absorption cross-sections, and microphysical and optical properties of atmospheric aerosols. The updated 2009 edition (GEISA-09) archives, in its line transition parameters sub-section, 50 molecules, corresponding to 111 isotopes, for a total of 3,807,997 entries, in the spectral range from 10-6 to 35,877.031 cm-1. A detailed description of the whole database contents will be documented. GEISA and GEISA/IASI are implemented on the CNES/CNRS Ether Products and Services Centre web site (http://ether.ipsl.jussieu.fr), where all archived spectroscopic data can be handled through general and user-friendly associated management software facilities. These facilities will be described and widely illustrated as well. Interactive demonstrations will be given if technical possibilities are feasible at the time of the Poster Display Session. More than 350 researchers are registered for on-line use of GEISA on Ether. Currently, GEISA is involved in activities (2) related to the remote sensing of the terrestrial atmosphere thanks to the sounding performances of the new generation of hyperspectral Earth atmospheric sounders, like AIRS (Atmospheric Infrared Sounder - http://www-airs.jpl.nasa.gov/), in the USA, and IASI (Infrared Atmospheric Sounding Interferometer - http://earth-sciences.cnes.fr/IASI/) in Europe, using the 4A radiative transfer model (3) (4A/LMD http://ara.lmd.polytechnique.fr; 4A/OP co-developed by LMD and NOVELTIS - http://www.noveltis.fr/) with the support of CNES (2006). Refs: (1) Jacquinet-Husson N., N.A. Scott, A. Chédin, L. Crépeau, R. Armante, V. Capelle

  13. "Utstein style" spreadsheet and database programs based on Microsoft Excel and Microsoft Access software for CPR data management of in-hospital resuscitation.

    Science.gov (United States)

    Adams, Bruce D; Whitlock, Warren L

    2004-04-01

    In 1997, the American Heart Association, in association with representatives of the International Liaison Committee on Resuscitation (ILCOR), published recommended guidelines for reviewing, reporting and conducting in-hospital cardiopulmonary resuscitation (CPR) outcomes using the "Utstein style". Using these guidelines, we developed two Microsoft Office based database management programs that may be useful to the resuscitation community. We developed a user-friendly spreadsheet based on MS Office Excel. The user enters patient variables such as name, age, and diagnosis. Then, event resuscitation variables such as time of collapse and CPR team arrival are entered from a "code flow sheet". Finally, outcome variables such as patient condition at different time points are recorded. The program then makes automatic calculations of average response times, survival rates and other important outcome measurements. Also using the Utstein style, we developed a database program based on MS Office Access. To promote free public access to these programs, we established a website. These programs will help hospitals track, analyze, and present their CPR outcomes data. Clinical CPR researchers might also find the programs useful because they are easily modified and have statistical functions.
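
    The spreadsheet's automatic calculations (average response times, survival rates) amount to simple aggregations over Utstein-style event records, sketched here in Python with hypothetical field names.

```python
from datetime import datetime

# Toy Utstein-style event records; field names are illustrative, not the
# spreadsheet's actual column headings.
events = [
    {"collapse": "10:02", "team_arrival": "10:05", "survived_to_discharge": True},
    {"collapse": "14:30", "team_arrival": "14:32", "survived_to_discharge": False},
    {"collapse": "21:11", "team_arrival": "21:16", "survived_to_discharge": True},
]

def minutes(start: str, end: str) -> float:
    """Elapsed minutes between two HH:MM timestamps on the same day."""
    fmt = "%H:%M"
    return (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).seconds / 60

response_times = [minutes(e["collapse"], e["team_arrival"]) for e in events]
avg_response = sum(response_times) / len(response_times)        # mean team response time
survival_rate = sum(e["survived_to_discharge"] for e in events) / len(events)

print(f"avg response {avg_response:.1f} min, survival {survival_rate:.0%}")
```

The same aggregations are easy to reproduce in Excel formulas or an Access query, which is presumably why the authors chose those tools for wide accessibility.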

  14. Wireless access to a pharmaceutical database: A demonstrator for data driven Wireless Application Prorocol (WAP) applications in medical information processing

    DEFF Research Database (Denmark)

    Hansen, Michael Schacht; Dørup, Jens

    2001-01-01

    catalogue to Wireless Application Protocol using open source freeware at all steps. METHODS: We used Apache 1.3 web software on a Linux server. Data containing the Danish pharmaceutical catalogue were imported from an ASCII file into a MySQL 3.22.32 database using a Practical Extraction and Report Language...... number catalogue and Dictaphone function via answering machine technology. An independent Wireless Application Protocol gateway may be placed within hospital firewalls, which may be an advantage with respect to security. However, if Wireless Application Protocol phones are to become effective tools...

  15. Urate levels predict survival in amyotrophic lateral sclerosis: Analysis of the expanded Pooled Resource Open-Access ALS clinical trials database.

    Science.gov (United States)

    Paganoni, Sabrina; Nicholson, Katharine; Chan, James; Shui, Amy; Schoenfeld, David; Sherman, Alexander; Berry, James; Cudkowicz, Merit; Atassi, Nazem

    2017-08-31

    Urate has been identified as a predictor of amyotrophic lateral sclerosis (ALS) survival in some but not all studies. Here we leverage the recent expansion of the Pooled Resource Open-Access ALS Clinical Trials (PRO-ACT) database to study the association between urate levels and ALS survival. Pooled data of 1,736 ALS participants from the PRO-ACT database were analyzed. Cox proportional hazards regression models were used to evaluate associations between urate levels at trial entry and survival. After adjustment for potential confounders (i.e., creatinine and body mass index), there was an 11% reduction in risk of reaching a survival endpoint during the study with each 1-mg/dL increase in uric acid levels (adjusted hazard ratio 0.89, 95% confidence interval 0.82-0.97, P < 0.01). Our pooled analysis provides further support for urate as a prognostic factor for survival in ALS and confirms the utility of the PRO-ACT database as a powerful resource for ALS epidemiological research. Muscle Nerve 2017. © 2017 Wiley Periodicals, Inc.
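
    Under the proportional hazards model used here, the per-unit hazard ratio compounds multiplicatively across units, so the reported adjusted HR of 0.89 per 1 mg/dL is exactly the quoted 11% reduction per unit. A quick worked check (the multi-unit extrapolation is our illustration, not a result from the study):

```python
# Reported adjusted hazard ratio per 1 mg/dL increase in urate.
hr_per_unit = 0.89

# Cox model: hazard ratios multiply across units of the covariate.
for delta in (1, 2, 3):
    hr = hr_per_unit ** delta
    print(f"+{delta} mg/dL urate: HR = {hr:.4f} ({1 - hr:.1%} lower hazard)")
```

For one unit this reproduces the abstract's 11% figure; larger deltas assume the proportional, log-linear effect holds across the range, which the study does not itself test here.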

  16. TRY 3.0 - a substantial upgrade of the global database of plant traits: more data, more species, largely open access

    Science.gov (United States)

    Kattge, Jens; Díaz, Sandra; Lavorel, Sandra; Prentice, Ian Colin; Leadley, Paul; Boenisch, Gerhard; Wirth, Christian; TRY Consortium, The

    2015-04-01

    Plant traits determine how primary producers respond to environmental factors, affect other trophic levels, influence ecosystem processes and services, and provide a link from species richness to ecosystem functional diversity. Plant traits are thus a key to understanding and predicting the adaptation of ecosystems to environmental changes. At the same time, ground-based measurements of plant trait data are dispersed over a wide range of databases, many of them not publicly available. To overcome this deficiency, IGBP and DIVERSITAS initiated the development of a joint database, called TRY, aiming at constructing a standard resource of ground-based plant trait observations for the ecological community and for the development of global vegetation models. The new version of the global database of plant traits, TRY 3.0, provides substantially improved information on plant traits: 5.6 million trait records for about 100,000 of the world's 350,000 plant species. More than 50% of the trait records are open access. In combination with recent improvements in gap-filling of sparse trait matrices (e.g., Bayesian Hierarchical Probabilistic Matrix Factorization; see abstract 15696 by Farideh Fazayeli), the new version of TRY provides the opportunity to derive a filled matrix of plant trait estimates for an unprecedented number of traits and species. We expect that this data richness will facilitate qualitatively new analyses and applications of plant traits (e.g., abstract 15724 by Oliver Purschke).

  17. Experimental access to Transition Distribution Amplitudes with the \\={P}ANDA experiment at FAIR

    CERN Document Server

    Singh, B P; Keshelashvili, I; Krusche, B; Steinacher, M; Liu, B; Liu, H; Liu, Z; Shen, X; Wang, C; Zhao, J; Albrecht, M; Fink, M; Heinsius, F H; Held, T; Holtmann, T; Koch, H; Kopf, B; Kümmel, M; Kuhl, G; Kuhlmann, M; Leyhe, M; Mikirtychyants, M; Musiol, P; Mustafa, A; Pelizäus, M; Pychy, J; Richter, M; Schnier, C; Schröder, T; Sowa, C; Steinke, M; Triffterer, T; Wiedner, U; Beck, R; Hammann, C; Kaiser, D; Ketzer, B; Kube, M; Mahlberg, P; Rossbach, M; Schmidt, C; Schmitz, R; Thoma, U; Walther, D; Wendel, C; Wilson, A; Bianconi, A; Bragadireanu, M; Caprini, M; Pantea, D; Pietreanu, D; Vasile, M E; Patel, B; Kaplan, D; Brandys, P; Czyzewski, T; Czyzycki, W; Domagala, M; Hawryluk, M; Filo, G; Krawczyk, M; Kwiatkowski, D; Lisowski, E; Lisowski, F; Fiutowski, T; Idzik, M; Mindur, B; Przyborowski, D; Swientek, K; Czech, B; Kliczewski, S; Korcyl, K; Kozela, A; Kulessa, P; Lebiedowicz, P; Malgorzata, K; Pysz, K; Schäfer, W; Siudak, R; Szczurek, A; Biernat, J; Jowzaee, S; Kamys, B; Kistryn, S; Korcyl, G; Krzemien, W; Magiera, A; Moskal, P; Palka, M; Psyzniak, A; Rudy, Z; Salabura, P; Smyrski, J; Strzempek, P; Wrońska, A; Augustin, I; Lehmann, I; Nicmorus, D; Schepers, G; Schmitt, L; Al-Turany, M; Cahit, U; Capozza, L; Dbeyssi, A; Deppe, H; Dzhygadlo, R; Ehret, A; Flemming, H; Gerhardt, A; Götzen, K; Karabowicz, R; Kliemt, R; Kunkel, J; Kurilla, U; Lehmann, D; Lühning, J; Maas, F; Morales, C Morales; Espí, M C Mora; Nerling, F; Orth, H; Peters, K; Piñeiro, D Rodríguez; Saito, N; Saito, T; Lorente, A Sánchez; Schmidt, C J; Schwarz, C; Schwiening, J; Traxler, M; Valente, R; Voss, B; Wieczorek, P; Wilms, A; Zühlsdorf, M; Abazov, V M; Alexeev, G; Arefiev, A; Astakhov, V I; Barabanov, M Yu; Batyunya, B V; Davydov, Yu I; Dodokhov, V Kh; Efremov, A A; Fedunov, A G; Festchenko, A A; Galoyan, A S; Grigoryan, S; Karmokov, A; Koshurnikov, E K; Lobanov, V I; Lobanov, Yu Yu; Makarov, A F; Malinina, L V; Malyshev, V L; Mustafaev, G A; Olshevskiy, A; Pasyuk, M A; Perevalova, E A; 
Piskun, A A; Pocheptsov, T A; Pontecorvo, G; Rodionov, V K; Rogov, Yu N; Salmin, R A; Samartsev, A G; Sapozhnikov, M G; Shabratova, G S; Skachkov, N B; Skachkova, A N; Strokovsky, E A; Suleimanov, M K; Teshev, R Sh; Tokmenin, V V; Uzhinsky, V V; Vodopyanov, A S; Zaporozhets, S A; Zhuravlev, N I; Zorin, A G; Branford, D; Glazier, D; Watts, D; Woods, P; Britting, A; Eyrich, W; Lehmann, A; Uhlig, F; Dobbs, S; Seth, K; Tomaradze, A; Xiao, T; Bettoni, D; Carassiti, V; Ramusino, A Cotta; Dalpiaz, P; Drago, A; Fioravanti, E; Garzia, I; Savriè, M; Stancari, G; Akishina, V; Kisel, I; Kulakov, I; Zyzak, M; Arora, R; Bel, T; Gromliuk, A; Kalicy, G; Krebs, M; Patsyuk, M; Zuehlsdorf, M; Bianchi, N; Gianotti, P; Guaraldo, C; Lucherini, V; Pace, E; Bersani, A; Bracco, G; Macri, M; Parodi, R F; Bianco, S; Bremer, D; Brinkmann, K T; Diehl, S; Dormenev, V; Drexler, P; Düren, M; Eissner, T; Etzelmüller, E; Föhl, K; Galuska, M; Gessler, T; Gutz, E; Hayrapetyan, A; Hu, J; Kröck, B; Kühn, W; Kuske, T; Lange, S; Liang, Y; Merle, O; Metag, V; Mülhheim, D; Münchow, D; Nanova, M; Novotny, R; Pitka, A; Quagli, T; Rieke, J; Rosenbaum, C; Schnell, R; Spruck, B; Stenzel, H; Thöring, U; Ullrich, T; Wasem, T; Werner, M; Zaunick, H G; Ireland, D; Rosner, G; Seitz, B; Deepak, P N; Kulkarni, A V; Apostolou, A; Babai, M; Kavatsyuk, M; Lemmens, P; Lindemulder, M; Löhner, H; Messchendorp, J; Schakel, P; Smit, H; van der Weele, J C; Veenstra, R; Tiemens, M; Vejdani, S; Kalita, K; Mohanta, D P; Kumar, A; Roy, A; Sahoo, R; Sohlbach, H; Büscher, M; Cao, L; Cebulla, A; Deermann, D; Dosdall, R; Esch, S; Georgadze, I; Gillitzer, A; Goerres, A; Goldenbaum, F; Grunwald, D; Herten, A; Hu, Q; Kemmerling, G; Kleines, H; Kozlov, V; Lehrach, A; Leiber, S; Maier, R; Nellen, R; Ohm, H; Orfanitski, S; Prasuhn, D; Prencipe, E; Ritman, J; Schadmand, S; Schumann, J; Sefzick, T; Serdyuk, V; Sterzenbach, G; Stockmanns, T; Wintz, P; Wüstner, P; Xu, H; Li, S; Li, Z; Sun, Z; Rigato, V; Fissum, S; Hansen, K; Isaksson, L; 
Lundin, M; Schröder, B; Achenbach, P; Bleser, S; Cardinali, M; Corell, O; Deiseroth, M; Denig, A; Distler, M; Feldbauer, F; Fritsch, M; Jasinski, P; Hoek, M; Kangh, D; Karavdina, A; Lauth, W; Leithoff, H; Merkel, H; Michel, M; Motzko, C; Müller, U; Noll, O; Plueger, S; Pochodzalla, J; Sanchez, S; Schlimme, S; Sfienti, C; Steinen, M; Thiel, M; Weber, T; Zambrana, M; Dormenev, V I; Fedorov, A A; Korzihik, M V; Missevitch, O V; Balanutsa, P; Balanutsa, V; Chernetsky, V; Demekhin, A; Dolgolenko, A; Fedorets, P; Gerasimov, A; Goryachev, V; Varentsov, V; Boukharov, A; Malyshev, O; Marishev, I; Semenov, A; Konorov, I; Paul, S; Grieser, S; Hergemöller, A K; Khoukaz, A; Köhler, E; Täschner, A; Wessels, J; Dash, S; Jadhav, M; Kumar, S; Sarin, P; Varma, R; Chandratre, V B; Datar, V; Dutta, D; Jha, V; Kumawat, H; Mohanty, A K; Roy, B; Yan, Y; Chinorat, K; Khanchai, K; Ayut, L; Pornrad, S; Barnyakov, A Y; Blinov, A E; Blinov, V E; Bobrovnikov, V S; Kononov, S A; Kravchenko, E A; Kuyanov, I A; Onuchin, A P; Sokolov, A A; Tikhonov, Y A; Atomssa, E; Hennino, T; Imre, M; Kunne, R; Galliard, C Le; Ma, B; Marchand, D; Ong, S; Ramstein, B; Rosier, P; Tomasi-Gustafsson, E; Van de Wiele, J; Boca, G; Costanza, S; Genova, P; Lavezzi, L; Montagna, P; Rotondi, A; Abramov, V; Belikov, N; Bukreeva, S; Davidenko, A; Derevschikov, A; Goncharenko, Y; Grishin, V; Kachanov, V; Kormilitsin, V; Melnik, Y; Levin, A; Minaev, N; Mochalov, V; Morozov, D; Nogach, L; Poslavskiy, S; Ryazantsev, A; Ryzhikov, S; Semenov, P; Shein, I; Uzunian, A; Vasiliev, A; Yakutin, A; Yabsley, B; Bäck, T; Cederwall, B; Makónyi, K; Tegnér, P E; von Würtemberg, K M; Belostotski, S; Gavrilov, G; Izotov, A; Kashchuk, A; Levitskaya, O; Manaenkov, S; Miklukho, O; Naryshkin, Y; Suvorov, K; Veretennikov, D; Zhadanov, A; Rai, A K; Godre, S S; Duchat, R; Amoroso, A; Bussa, M P; Busso, L; De Mori, F; Destefanis, M; Fava, L; Ferrero, L; Greco, M; Maggiora, M; Maniscalco, G; Marcello, S; Sosio, S; Spataro, S; Zotti, L; Calvo, D; Coli, 
S; De Remigis, P; Filippi, A; Giraudo, G; Lusso, S; Mazza, G; Mingnore, M; Rivetti, A; Wheadon, R; Balestra, F; Iazzi, F; Introzzi, R; Lavagno, A; Younis, H; Birsa, R; Bradamante, F; Bressan, A; Martin, A; Clement, H; Gålnander, B; Balkeståhl, L Caldeira; Calén, H; Fransson, K; Johansson, T; Kupsc, A; Marciniewski, P; Pettersson, J; Schönning, K; Wolke, M; Zlomanczuk, J; Díaz, J; Ortiz, A; Vinodkumar, P C; Parmar, A; Chlopik, A; Melnychuk, D; Slowinski, B; Trzcinski, A; Wojciechowski, M; Wronka, S; Zwieglinski, B; Bühler, P; Marton, J; Suzuki, K; Widmann, E; Zmeskal, J; Fröhlich, B; Khaneft, D; Lin, D; Zimmermann, I; Semenov-Tian-Shansky, K

    2014-01-01

    We address the possibility of accessing nucleon-to-pion ($\pi N$) Transition Distribution Amplitudes (TDAs) from the $\bar{p}p \to e^+e^- \pi^0$ reaction with the future \={P}ANDA detector at the FAIR facility. At high center of mass energy and high invariant mass of the lepton pair $q^2$, the amplitude of the signal channel $\bar{p}p \to e^+e^- \pi^0$ admits a QCD factorized description in terms of $\pi N$ TDAs and nucleon Distribution Amplitudes (DAs) in the forward and backward kinematic regimes. Assuming the validity of this factorized description, we perform feasibility studies for measuring $\bar{p}p \to e^+e^- \pi^0$ with the \={P}ANDA detector. Detailed simulations on signal reconstruction efficiency as well as on rejection of the most severe background channel, {\it i.e.} $\bar{p}p \to \pi^+\pi^- \pi^0$, were performed for the center of mass energy squared $s = 5$ GeV$^2$ and $s = 10$ GeV$^2$, in the kinematic regions $3.0 < q^2 < 4.3$ GeV$^2$ and $5$ GeV$^2 < q^2$, respectively, with a neutral pion scattered in the forward or backward cone $|\cos\theta_{\pi^0}| > 0.5$ in the proton-antiproton center of mass frame. Results of the simulation sho...

  18. Optimizing NEURON Simulation Environment Using Remote Memory Access with Recursive Doubling on Distributed Memory Systems

    Directory of Open Access Journals (Sweden)

    Danish Shehzad

    2016-01-01

    Full Text Available The increase in complexity of neuronal network models has escalated efforts to make the NEURON simulation environment efficient. Computational neuroscientists divide the equations into subnets across multiple processors to achieve better hardware performance. On parallel machines, interprocessor spike exchange consumes a large share of the overall simulation time for neuronal networks. NEURON uses the Message Passing Interface (MPI) for communication between processors, and the MPI_Allgather collective is used to exchange spikes after each interval across distributed memory systems. Increasing the number of processors improves concurrency and performance, but it also raises the cost of MPI_Allgather and hence the communication time between processors. This makes it necessary to improve the communication methodology to decrease the spike exchange time over distributed memory systems. This work improves the MPI_Allgather method using Remote Memory Access (RMA), moving from two-sided to one-sided communication; a recursive doubling mechanism achieves efficient communication between the processors in a precise number of steps. The approach enhances communication concurrency and improves overall runtime, making NEURON more efficient for simulating large neuronal network models.
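
    The recursive-doubling exchange pattern referred to above can be sketched in a few lines of plain Python. This is an illustrative single-process simulation of the communication pattern only, not the authors' MPI/RMA implementation; the function name and data layout are invented for the example.

```python
def recursive_doubling_allgather(local_data):
    """Simulate an allgather over p = 2^k ranks using recursive doubling.

    local_data: list indexed by rank, one payload (a list) per rank.
    In each round, rank r exchanges its accumulated buffer with rank
    r XOR step, doubling the data held; after log2(p) rounds every
    rank holds all payloads in rank order.
    """
    p = len(local_data)
    assert p > 0 and p & (p - 1) == 0, "recursive doubling needs a power-of-two rank count"
    buffers = [list(d) for d in local_data]   # per-rank accumulated data
    step, rounds = 1, 0
    while step < p:
        new_buffers = []
        for rank in range(p):
            partner = rank ^ step             # exchange partner this round
            # both partners end up with the rank-ordered concatenation
            if rank < partner:
                new_buffers.append(buffers[rank] + buffers[partner])
            else:
                new_buffers.append(buffers[partner] + buffers[rank])
        buffers = new_buffers
        step *= 2
        rounds += 1
    return buffers, rounds
```

    Each rank reaches the full data set in log2(p) exchange rounds rather than one p-wide collective step, which is why the pattern scales well as processor counts grow.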

  19. Data access and analysis with distributed federated data servers in climateprediction.net

    Directory of Open Access Journals (Sweden)

    N. Massey

    2006-01-01

    Full Text Available climateprediction.net is a large public-resource distributed scientific computing project. Members of the public download and run a full-scale climate model, donating their computing time to a large perturbed-physics ensemble experiment to forecast the climate of the 21st century, and submit their results back to the project. The amount of data generated is large, consisting of tens of thousands of individual runs, each on the order of tens of megabytes; the overall dataset is therefore on the order of terabytes. Access and analysis of the data are further complicated by the reliance on donated, distributed, federated data servers. This paper discusses the problems encountered when the data required for even a simple analysis are spread across several servers and how web-service technology can be used; how different user interfaces with varying levels of complexity and flexibility can be presented to the application scientists; how using existing web technologies such as HTTP, SOAP, XML, HTML and CGI can engender the reuse of code across interfaces; and how application scientists can be notified of their analyses' progress and results in an asynchronous architecture.
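
    The core federated-access problem described above, a single analysis needing runs that are spread over several donated servers, any of which may lack some of the data, can be illustrated with a minimal Python sketch. The function name and in-memory data layout are hypothetical; a real client would issue HTTP/SOAP requests instead of dictionary lookups.

```python
def federated_fetch(servers, run_ids):
    """Gather model runs that are spread across federated data servers.

    servers: dict mapping server name -> {run_id: payload} (stand-in for
    a remote query API); run_ids: the runs an analysis needs.
    Returns (merged, missing): payloads found on any server, and the ids
    that no server currently holds (donated servers may be incomplete).
    """
    merged, missing = {}, []
    for rid in run_ids:
        for name, store in servers.items():
            if rid in store:
                merged[rid] = store[rid]   # first server holding the run wins
                break
        else:
            missing.append(rid)            # no server had this run
    return merged, missing
```

    An asynchronous architecture would issue these per-server queries concurrently and notify the scientist as partial results arrive, rather than blocking on the slowest server.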

  20. Evaluation of Waterloss Impacts on Water Distribution and Accessibility in Akure, Nigeria

    Directory of Open Access Journals (Sweden)

    Olotu Yahaya

    2014-07-01

    Full Text Available Safe drinking water is a necessity for life. Providing quality drinking water is a critical service that generates the revenues water utilities need to sustain their operations, and population growth puts an additional strain on limited resources. The annual volume of water lost is an important indicator of water distribution efficiency, both in individual years and as a trend over a period of years. Application of a deterministic simulation model to public water supply variables reveals that the volume of non-revenue water (NRW) and its cost effects have created a complex system for the availability, distribution and affordability of the utility. A gradual annual increase in public water supply (AWS) from 9.0 × 10⁶ m³ to 14.4 × 10⁶ m³ had a negative effect on annual water accessed (AWA), with R² = 0.096, and was highly significant with annual water loss (AWL), with R² = 0.99. This indicates that water loss, mainly through leakages and bursts, is a function of public water supply. The estimated annual volume and cost of non-revenue water (NRW) in Akure are 6 million m³ and 15.6 million USD respectively. Critical analysis shows that the lost annual revenue could provide education and health services for a period of 6 months in the region.
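
    The arithmetic behind the NRW figures is simple: the volume never billed is supply minus accessed water, and foregone revenue is that volume times the tariff. In the sketch below the accessed volume and unit tariff are back-calculated from the reported totals (6 million m³ and 15.6 million USD) rather than stated in the abstract, so treat them as illustrative inputs.

```python
def non_revenue_water(supplied_m3, accessed_m3, tariff_usd_per_m3):
    """Annual non-revenue water (NRW) volume and the revenue it forgoes.

    NRW here is the supplied volume that is never billed, i.e. losses
    through leakages and bursts plus any unbilled consumption.
    """
    nrw_m3 = supplied_m3 - accessed_m3
    lost_revenue_usd = nrw_m3 * tariff_usd_per_m3
    return nrw_m3, lost_revenue_usd

# Illustrative figures: 14.4e6 m3 supplied; accessed volume (8.4e6 m3)
# and tariff (2.6 USD/m3) are back-derived from the reported totals.
vol, cost = non_revenue_water(14.4e6, 8.4e6, 2.6)
```
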

  1. Immigration, language proficiency, and autobiographical memories: Lifespan distribution and second-language access.

    Science.gov (United States)

    Esposito, Alena G; Baker-Ward, Lynne

    2016-08-01

    This investigation examined two controversies in the autobiographical literature: how cross-language immigration affects the distribution of autobiographical memories across the lifespan and under what circumstances language-dependent recall is observed. Both Spanish/English bilingual immigrants and English monolingual non-immigrants participated in a cue word study, with the bilingual sample taking part in a within-subject language manipulation. The expected bump in the number of memories from early life was observed for non-immigrants but not immigrants, who reported more memories for events surrounding immigration. Aspects of the methodology addressed possible reasons for past discrepant findings. Language-dependent recall was influenced by second-language proficiency. Results were interpreted as evidence that bilinguals with high second-language proficiency, in contrast to those with lower second-language proficiency, access a single conceptual store through either language. The final multi-level model predicting language-dependent recall, including second-language proficiency, age of immigration, internal language, and cue word language, explained three-quarters of the between-person variance and one-fifth of the within-person variance. We arrive at two conclusions. First, major life transitions influence the distribution of memories. Second, concept representation across multiple languages follows a developmental model. In addition, the results underscore the importance of considering language experience in research involving memory reports.

  2. Optimizing NEURON Simulation Environment Using Remote Memory Access with Recursive Doubling on Distributed Memory Systems.

    Science.gov (United States)

    Shehzad, Danish; Bozkuş, Zeki

    2016-01-01

    The increase in complexity of neuronal network models has escalated efforts to make the NEURON simulation environment efficient. Computational neuroscientists divide the equations into subnets across multiple processors to achieve better hardware performance. On parallel machines, interprocessor spike exchange consumes a large share of the overall simulation time for neuronal networks. NEURON uses the Message Passing Interface (MPI) for communication between processors, and the MPI_Allgather collective is used to exchange spikes after each interval across distributed memory systems. Increasing the number of processors improves concurrency and performance, but it also raises the cost of MPI_Allgather and hence the communication time between processors. This makes it necessary to improve the communication methodology to decrease the spike exchange time over distributed memory systems. This work improves the MPI_Allgather method using Remote Memory Access (RMA), moving from two-sided to one-sided communication; a recursive doubling mechanism achieves efficient communication between the processors in a precise number of steps. The approach enhances communication concurrency and improves overall runtime, making NEURON more efficient for simulating large neuronal network models.

  3. Experimental access to Transition Distribution Amplitudes with the P¯ANDA experiment at FAIR

    Science.gov (United States)

    Singh, B. P.; Erni, W.; Keshelashvili, I.; Krusche, B.; Steinacher, M.; Liu, B.; Liu, H.; Liu, Z.; Shen, X.; Wang, C.; Zhao, J.; Albrecht, M.; Fink, M.; Heinsius, F. H.; Held, T.; Holtmann, T.; Koch, H.; Kopf, B.; Kümmel, M.; Kuhl, G.; Kuhlmann, M.; Leyhe, M.; Mikirtychyants, M.; Musiol, P.; Mustafa, A.; Pelizäus, M.; Pychy, J.; Richter, M.; Schnier, C.; Schröder, T.; Sowa, C.; Steinke, M.; Triffterer, T.; Wiedner, U.; Beck, R.; Hammann, C.; Kaiser, D.; Ketzer, B.; Kube, M.; Mahlberg, P.; Rossbach, M.; Schmidt, C.; Schmitz, R.; Thoma, U.; Walther, D.; Wendel, C.; Wilson, A.; Bianconi, A.; Bragadireanu, M.; Caprini, M.; Pantea, D.; Pietreanu, D.; Vasile, M. E.; Patel, B.; Kaplan, D.; Brandys, P.; Czyzewski, T.; Czyzycki, W.; Domagala, M.; Hawryluk, M.; Filo, G.; Krawczyk, M.; Kwiatkowski, D.; Lisowski, E.; Lisowski, F.; Fiutowski, T.; Idzik, M.; Mindur, B.; Przyborowski, D.; Swientek, K.; Czech, B.; Kliczewski, S.; Korcyl, K.; Kozela, A.; Kulessa, P.; Lebiedowicz, P.; Malgorzata, K.; Pysz, K.; Schäfer, W.; Siudak, R.; Szczurek, A.; Biernat, J.; Jowzaee, S.; Kamys, B.; Kistryn, S.; Korcyl, G.; Krzemien, W.; Magiera, A.; Moskal, P.; Palka, M.; Psyzniak, A.; Rudy, Z.; Salabura, P.; Smyrski, J.; Strzempek, P.; Wrońska, A.; Augustin, I.; Lehmann, I.; Nicmorus, D.; Schepers, G.; Schmitt, L.; Al-Turany, M.; Cahit, U.; Capozza, L.; Dbeyssi, A.; Deppe, H.; Dzhygadlo, R.; Ehret, A.; Flemming, H.; Gerhardt, A.; Götzen, K.; Karabowicz, R.; Kliemt, R.; Kunkel, J.; Kurilla, U.; Lehmann, D.; Lühning, J.; Maas, F.; Morales Morales, C.; Mora Espí, M. C.; Nerling, F.; Orth, H.; Peters, K.; Rodríguez Piñeiro, D.; Saito, N.; Saito, T.; Sánchez Lorente, A.; Schmidt, C. J.; Schwarz, C.; Schwiening, J.; Traxler, M.; Valente, R.; Voss, B.; Wieczorek, P.; Wilms, A.; Zühlsdorf, M.; Abazov, V. M.; Alexeev, G.; Arefiev, A.; Astakhov, V. I.; Barabanov, M. Yu.; Batyunya, B. V.; Davydov, Yu. I.; Dodokhov, V. Kh.; Efremov, A. A.; Fedunov, A. G.; Festchenko, A. A.; Galoyan, A. 
S.; Grigoryan, S.; Karmokov, A.; Koshurnikov, E. K.; Lobanov, V. I.; Lobanov, Yu. Yu.; Makarov, A. F.; Malinina, L. V.; Malyshev, V. L.; Mustafaev, G. A.; Olshevskiy, A.; Pasyuk, M. A.; Perevalova, E. A.; Piskun, A. A.; Pocheptsov, T. A.; Pontecorvo, G.; Rodionov, V. K.; Rogov, Yu. N.; Salmin, R. A.; Samartsev, A. G.; Sapozhnikov, M. G.; Shabratova, G. S.; Skachkov, N. B.; Skachkova, A. N.; Strokovsky, E. A.; Suleimanov, M. K.; Teshev, R. Sh.; Tokmenin, V. V.; Uzhinsky, V. V.; Vodopyanov, A. S.; Zaporozhets, S. A.; Zhuravlev, N. I.; Zorin, A. G.; Branford, D.; Glazier, D.; Watts, D.; Woods, P.; Britting, A.; Eyrich, W.; Lehmann, A.; Uhlig, F.; Dobbs, S.; Seth, K.; Tomaradze, A.; Xiao, T.; Bettoni, D.; Carassiti, V.; Cotta Ramusino, A.; Dalpiaz, P.; Drago, A.; Fioravanti, E.; Garzia, I.; Savriè, M.; Stancari, G.; Akishina, V.; Kisel, I.; Kulakov, I.; Zyzak, M.; Arora, R.; Bel, T.; Gromliuk, A.; Kalicy, G.; Krebs, M.; Patsyuk, M.; Zuehlsdorf, M.; Bianchi, N.; Gianotti, P.; Guaraldo, C.; Lucherini, V.; Pace, E.; Bersani, A.; Bracco, G.; Macri, M.; Parodi, R. F.; Bianco, S.; Bremer, D.; Brinkmann, K. T.; Diehl, S.; Dormenev, V.; Drexler, P.; Düren, M.; Eissner, T.; Etzelmüller, E.; Föhl, K.; Galuska, M.; Gessler, T.; Gutz, E.; Hayrapetyan, A.; Hu, J.; Kröck, B.; Kühn, W.; Kuske, T.; Lange, S.; Liang, Y.; Merle, O.; Metag, V.; Mülhheim, D.; Münchow, D.; Nanova, M.; Novotny, R.; Pitka, A.; Quagli, T.; Rieke, J.; Rosenbaum, C.; Schnell, R.; Spruck, B.; Stenzel, H.; Thöring, U.; Ullrich, M.; Wasem, T.; Werner, M.; Zaunick, H. G.; Ireland, D.; Rosner, G.; Seitz, B.; Deepak, P. N.; Kulkarni, A. V.; Apostolou, A.; Babai, M.; Kavatsyuk, M.; Lemmens, P.; Lindemulder, M.; Löhner, H.; Messchendorp, J.; Schakel, P.; Smit, H.; van der Weele, J. C.; Tiemens, M.; Veenstra, R.; Vejdani, S.; Kalita, K.; Mohanta, D. 
P.; Kumar, A.; Roy, A.; Sahoo, R.; Sohlbach, H.; Büscher, M.; Cao, L.; Cebulla, A.; Deermann, D.; Dosdall, R.; Esch, S.; Georgadze, I.; Gillitzer, A.; Goerres, A.; Goldenbaum, F.; Grunwald, D.; Herten, A.; Hu, Q.; Kemmerling, G.; Kleines, H.; Kozlov, V.; Lehrach, A.; Leiber, S.; Maier, R.; Nellen, R.; Ohm, H.; Orfanitski, S.; Prasuhn, D.; Prencipe, E.; Ritman, J.; Schadmand, S.; Schumann, J.; Sefzick, T.; Serdyuk, V.; Sterzenbach, G.; Stockmanns, T.; Wintz, P.; Wüstner, P.; Xu, H.; Li, S.; Li, Z.; Sun, Z.; Xu, H.; Rigato, V.; Fissum, S.; Hansen, K.; Isaksson, L.; Lundin, M.; Schröder, B.; Achenbach, P.; Bleser, S.; Cardinali, M.; Corell, O.; Deiseroth, M.; Denig, A.; Distler, M.; Feldbauer, F.; Fritsch, M.; Jasinski, P.; Hoek, M.; Kangh, D.; Karavdina, A.; Lauth, W.; Leithoff, H.; Merkel, H.; Michel, M.; Motzko, C.; Müller, U.; Noll, O.; Plueger, S.; Pochodzalla, J.; Sanchez, S.; Schlimme, S.; Sfienti, C.; Steinen, M.; Thiel, M.; Weber, T.; Zambrana, M.; Dormenev, V. I.; Fedorov, A. A.; Korzihik, M. V.; Missevitch, O. V.; Balanutsa, P.; Balanutsa, V.; Chernetsky, V.; Demekhin, A.; Dolgolenko, A.; Fedorets, P.; Gerasimov, A.; Goryachev, V.; Varentsov, V.; Boukharov, A.; Malyshev, O.; Marishev, I.; Semenov, A.; Konorov, I.; Paul, S.; Grieser, S.; Hergemöller, A. K.; Khoukaz, A.; Köhler, E.; Täschner, A.; Wessels, J.; Dash, S.; Jadhav, M.; Kumar, S.; Sarin, P.; Varma, R.; Chandratre, V. B.; Datar, V.; Dutta, D.; Jha, V.; Kumawat, H.; Mohanty, A. K.; Roy, B.; Yan, Y.; Chinorat, K.; Khanchai, K.; Ayut, L.; Pornrad, S.; Barnyakov, A. Y.; Blinov, A. E.; Blinov, V. E.; Bobrovnikov, V. S.; Kononov, S. A.; Kravchenko, E. A.; Kuyanov, I. A.; Onuchin, A. P.; Sokolov, A. A.; Tikhonov, Y. 
A.; Atomssa, E.; Hennino, T.; Imre, M.; Kunne, R.; Le Galliard, C.; Ma, B.; Marchand, D.; Ong, S.; Ramstein, B.; Rosier, P.; Tomasi-Gustafsson, E.; Van de Wiele, J.; Boca, G.; Costanza, S.; Genova, P.; Lavezzi, L.; Montagna, P.; Rotondi, A.; Abramov, V.; Belikov, N.; Bukreeva, S.; Davidenko, A.; Derevschikov, A.; Goncharenko, Y.; Grishin, V.; Kachanov, V.; Kormilitsin, V.; Melnik, Y.; Levin, A.; Minaev, N.; Mochalov, V.; Morozov, D.; Nogach, L.; Poslavskiy, S.; Ryazantsev, A.; Ryzhikov, S.; Semenov, P.; Shein, I.; Uzunian, A.; Vasiliev, A.; Yakutin, A.; Yabsley, B.; Bäck, T.; Cederwall, B.; Makónyi, K.; Tegnér, P. E.; von Würtemberg, K. M.; Belostotski, S.; Gavrilov, G.; Izotov, A.; Kashchuk, A.; Levitskaya, O.; Manaenkov, S.; Miklukho, O.; Naryshkin, Y.; Suvorov, K.; Veretennikov, D.; Zhadanov, A.; Rai, A. K.; Godre, S. S.; Duchat, R.; Amoroso, A.; Bussa, M. P.; Busso, L.; De Mori, F.; Destefanis, M.; Fava, L.; Ferrero, L.; Greco, M.; Maggiora, M.; Maniscalco, G.; Marcello, S.; Sosio, S.; Spataro, S.; Zotti, L.; Calvo, D.; Coli, S.; De Remigis, P.; Filippi, A.; Giraudo, G.; Lusso, S.; Mazza, G.; Mingnore, M.; Rivetti, A.; Wheadon, R.; Balestra, F.; Iazzi, F.; Introzzi, R.; Lavagno, A.; Younis, H.; Birsa, R.; Bradamante, F.; Bressan, A.; Martin, A.; Clement, H.; Gålnander, B.; Caldeira Balkeståhl, L.; Calén, H.; Fransson, K.; Johansson, T.; Kupsc, A.; Marciniewski, P.; Pettersson, J.; Schönning, K.; Wolke, M.; Zlomanczuk, J.; Díaz, J.; Ortiz, A.; Vinodkumar, P. C.; Parmar, A.; Chlopik, A.; Melnychuk, D.; Slowinski, B.; Trzcinski, A.; Wojciechowski, M.; Wronka, S.; Zwieglinski, B.; Bühler, P.; Marton, J.; Suzuki, K.; Widmann, E.; Zmeskal, J.; Fröhlich, B.; Khaneft, D.; Lin, D.; Zimmermann, I.; Semenov-Tian-Shansky, K.

    2015-08-01

    Baryon-to-meson Transition Distribution Amplitudes (TDAs) encoding valuable new information on hadron structure appear as building blocks in the collinear factorized description of several types of hard exclusive reactions. In this paper, we address the possibility of accessing nucleon-to-pion (πN) TDAs from the p̄p → e⁺e⁻π⁰ reaction with the future P¯ANDA detector at the FAIR facility. At high center-of-mass energy and high invariant mass squared of the lepton pair q², the amplitude of the signal channel p̄p → e⁺e⁻π⁰ admits a QCD factorized description in terms of πN TDAs and nucleon Distribution Amplitudes (DAs) in the forward and backward kinematic regimes. Assuming the validity of this factorized description, we perform feasibility studies for measuring p̄p → e⁺e⁻π⁰ with the P¯ANDA detector. Detailed simulations on signal reconstruction efficiency as well as on rejection of the most severe background channel, i.e. p̄p → π⁺π⁻π⁰, were performed for the center-of-mass energy squared s = 5 GeV² and s = 10 GeV², in the kinematic regions 3.0 < q² < 4.3 GeV² and 5 GeV² < q², respectively, with a neutral pion scattered in the forward or backward cone |cos θπ0| > 0.5 in the proton-antiproton center-of-mass frame. Results of the simulation show that the particle identification capabilities of the P¯ANDA detector will allow a background rejection factor of 5 · 10⁷ (1 · 10⁷) to be achieved at low (high) q² for s = 5 GeV², and of 1 · 10⁸ (6 · 10⁶) at low (high) q² for s = 10 GeV², while keeping the signal reconstruction efficiency at around 40%. At both energies, a clean lepton signal can be reconstructed with the expected statistics corresponding to 2 fb⁻¹ of integrated luminosity. The cross sections obtained from the simulations are used to show that a test of QCD collinear factorization can be done at the lowest order by measuring scaling laws and angular distributions. The future measurement of the signal channel cross section with P¯ANDA will provide a new test of the perturbative QCD description of a novel class of hard
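
    The "expected statistics corresponding to 2 fb⁻¹" follow from the standard counting relation N = σ · L_int · ε (cross section times integrated luminosity times reconstruction efficiency). A minimal sketch; the example cross-section value below is hypothetical, not a number from the study.

```python
def expected_counts(sigma_pb, lumi_fb_inv, efficiency):
    """Expected event yield N = sigma * L_int * efficiency.

    sigma_pb: cross section in picobarns; lumi_fb_inv: integrated
    luminosity in inverse femtobarns (1 fb^-1 = 1000 pb^-1);
    efficiency: signal reconstruction efficiency (e.g. 0.4 for ~40%).
    """
    return sigma_pb * (lumi_fb_inv * 1000.0) * efficiency

# Hypothetical 0.1 pb cross section, 2 fb^-1, 40% efficiency:
n_signal = expected_counts(0.1, 2.0, 0.4)
```
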

  4. Mars Global Digital Dune Database (MGD3): Global dune distribution and wind pattern observations

    Science.gov (United States)

    Hayward, Rosalyn K.; Fenton, Lori; Titus, Timothy N.

    2014-01-01

    The Mars Global Digital Dune Database (MGD3) is complete and now extends from 90°N to 90°S latitude. The recently released south pole (SP) portion (MC-30) of MGD3 adds ∼60,000 km² of medium- to large-size dark dune fields and ∼15,000 km² of sand deposits and smaller dune fields to the previously released equatorial (EQ, ∼70,000 km²) and north pole (NP, ∼845,000 km²) portions of the database, bringing the global total to ∼975,000 km². Nearly all NP dunes are part of large sand seas, while the majority of EQ and SP dune fields are individual dune fields located in craters. Despite the differences between Mars and Earth, their dune and dune-field morphologies are strikingly similar. Bullseye dune fields, named for their concentric ring pattern, are the exception, possibly owing their distinctive appearance to winds that are unique to the crater environment. Ground-based wind directions are derived from slipface (SF) orientation and dune centroid azimuth (DCA), a measure of the relative location of a dune field inside a crater. SF and DCA often preserve evidence of different wind directions, suggesting the importance of local, topographically influenced winds. In general, however, ground-based wind directions are broadly consistent with expected global patterns, such as polar easterlies. Intriguingly, between 40°S and 80°S latitude both SF and DCA preserve their strongest, though different, dominant wind directions, with transport toward the west and east for SF-derived winds and toward the north and west for DCA-derived winds.

  5. Web services-based access to local clinical trial databases: a standards initiative of the Association of American Cancer Institutes.

    Science.gov (United States)

    Stahl, Douglas C; Evans, Richard M; Afrin, Lawrence B; DeTeresa, Richard M; Ko, Dave; Mitchell, Kevin

    2003-01-01

    Electronic discovery of the clinical trials being performed at a specific research center is a challenging task, which presently requires manual review of the center's locally maintained databases or web pages of protocol listings. Near real-time automated discovery of available trials would increase the efficiency and effectiveness of clinical trial searching, and would facilitate the development of new services for information providers and consumers. Automated discovery efforts to date have been hindered by issues such as disparate database schemas and vocabularies and by insufficient standards for easy intersystem exchange of high-level data, but adequate infrastructure now exists that makes possible the development of applications for near real-time automated discovery of trials. This paper describes the current state (design and implementation) of the Web Services Specification for Publication and Discovery of Clinical Trials as developed by the Technology Task Force of the Association of American Cancer Institutes. The paper then briefly discusses a prototype web-service-based application that implements the specification. Directions for the evolution of this specification are also discussed.

  6. California dragonfly and damselfly (Odonata) database: temporal and spatial distribution of species records collected over the past century.

    Science.gov (United States)

    Ball-Damerow, Joan E; Oboyski, Peter T; Resh, Vincent H

    2015-01-01

    The recently completed Odonata database for California consists of specimen records from the major entomology collections of the state, large Odonata collections outside of the state, previous literature, historical and recent field surveys, and enthusiast group observations. The database includes 32,025 total records and 19,000 unique records for 106 species of dragonflies and damselflies, with records spanning 1879-2013. Records have been geographically referenced using the point-radius method to assign coordinates and an uncertainty radius to specimen locations. In addition to describing techniques used in data acquisition, georeferencing, and quality control, we present assessments of the temporal, spatial, and taxonomic distribution of records. We use this information to identify biases in the data, and to determine changes in species prevalence, latitudinal ranges, and elevation ranges when comparing records from before 1976 and after 1979. The average latitude at which records occurred increased by 78 km between these time periods. While average elevation did not change significantly, the average minimum elevation across species declined by 108 m. Odonata distribution may be shifting northwards generally as temperature warms, and to lower minimum elevations in response to increased summer water availability in low-elevation agricultural regions. The unexpected decline in elevation may also be partially the result of bias in recent collections towards centers of human population, which tend to occur at lower elevations. This study emphasizes the need to address temporal, spatial, and taxonomic biases in museum and observational records in order to produce reliable conclusions from such data.
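
    The before-1976 versus after-1979 range comparison reported above amounts to differencing period means over the record table. A minimal sketch with made-up records (the function name and the (year, latitude) layout are illustrative, not the study's actual pipeline):

```python
from statistics import mean

def latitudinal_shift(records, early_cutoff=1976, late_cutoff=1979):
    """Mean-latitude change between an early and a late record set.

    records: iterable of (year, latitude_deg) pairs. Returns the shift
    in degrees (late-period mean minus early-period mean); one degree
    of latitude is roughly 111 km on the ground.
    """
    early = [lat for year, lat in records if year < early_cutoff]
    late = [lat for year, lat in records if year > late_cutoff]
    return mean(late) - mean(early)

# Synthetic example: records drift ~1.2 degrees north between periods.
shift_deg = latitudinal_shift(
    [(1950, 36.0), (1960, 37.0), (1990, 37.2), (2000, 38.2)])
```
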

  7. Comparative Analysis of CTF and Trace Thermal-Hydraulic Codes Using OECD/NRC PSBT Benchmark Void Distribution Database

    Directory of Open Access Journals (Sweden)

    M. Avramova

    2013-01-01

    Full Text Available The international OECD/NRC PSBT benchmark has been established to provide a test bed for assessing the capabilities of thermal-hydraulic codes and to encourage advancement in the analysis of fluid flow in rod bundles. The benchmark was based on one of the most valuable databases identified for thermal-hydraulics modeling, developed by NUPEC, Japan. The database includes void fraction and departure from nucleate boiling measurements in a representative PWR fuel assembly. On behalf of the benchmark team, PSU, in collaboration with the US NRC, has performed supporting calculations using the PSU in-house advanced thermal-hydraulic subchannel code CTF and the US NRC system code TRACE. CTF is a version of COBRA-TF whose models have been continuously improved and validated by the RDFMG group at PSU. TRACE is a reactor systems code developed by the US NRC to analyze transient and steady-state thermal-hydraulic behavior in LWRs; it has been designed to perform best-estimate analyses of LOCAs, operational transients, and other accident scenarios in PWRs and BWRs. The paper presents the CTF and TRACE models for the PSBT void distribution exercises. Code-to-code and code-to-data comparisons are provided, along with a discussion of the void generation and void distribution models available in the two codes.

  8. Florabank1: a grid-based database on vascular plant distribution in the northern part of Belgium (Flanders and the Brussels Capital region

    Directory of Open Access Journals (Sweden)

    Wouter Van Landuyt

    2012-05-01

    Full Text Available Florabank1 is a database that contains distributional data on the wild flora (indigenous species, archeophytes and naturalised aliens) of Flanders and the Brussels Capital Region. It holds about 3 million records of vascular plants, dating from 1800 till present. Furthermore, it includes ecological data on vascular plant species, red-list category information, Ellenberg values, legal status, global distribution, seed bank, etc. The database is an initiative of "Flo.Wer" (www.plantenwerkgroep.be), the Research Institute for Nature and Forest (INBO: www.inbo.be) and the National Botanic Garden of Belgium (www.br.fgov.be). Florabank aims at centralizing botanical distribution data gathered by both professional and amateur botanists and at making these data available for the benefit of nature conservation, policy and scientific research. The occurrence data contained in Florabank1 are extracted from checklists, literature and herbarium specimen information. For survey lists, the locality name (verbatimLocality), species name, observation date and IFBL square code (the grid system used for plant mapping in Belgium; Van Rompaey 1943) are recorded. For records dating from the period 1972–2004, all pertinent botanical journals dealing with the Belgian flora were systematically screened. Analysis of herbarium specimens in the collections of the National Botanic Garden of Belgium, the University of Ghent and the University of Liège provided valuable distribution knowledge concerning rare species; this information is also included in Florabank1. The data recorded before 1972 are available through the Belgian GBIF node (http://data.gbif.org/datasets/resource/10969/), not through Florabank1, to avoid duplication of information. A dedicated portal providing access to all published Belgian IFBL records is currently available at: http://projects.biodiversity.be/ifbl. All data in Florabank1 are georeferenced. Every record holds the decimal centroid coordinates of the
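
    Assigning a record the decimal centroid of its grid square can be sketched generically as below. This is illustrative only: the real IFBL squares are defined on a projected Belgian grid, not on decimal degrees, and the function name is invented for the example.

```python
def cell_centroid(sw_lon, sw_lat, cell_deg):
    """Decimal centroid of a square grid cell given its south-west corner.

    sw_lon, sw_lat: corner coordinates in decimal degrees;
    cell_deg: cell edge length in degrees. The centroid sits half a
    cell edge north and east of the corner.
    """
    return (sw_lon + cell_deg / 2.0, sw_lat + cell_deg / 2.0)

# A hypothetical 0.1-degree cell near Ghent:
lon, lat = cell_centroid(4.0, 51.0, 0.1)
```

    Storing the centroid plus the cell size lets downstream users recover both a point location and its positional uncertainty, much as the point-radius method does for specimen records.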

  9. Aerosol size distribution and classification. (Latest citations from the NTIS bibliographic database). Published Search

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-11-01

    The bibliography contains citations concerning aerosol particle size distribution and classification pertaining to air pollution detection and health studies. Aerosol size measuring methods, devices, and apparatus are discussed. Studies of atmospheric, industrial, radioactive, and marine aerosols are presented. (Contains 50-250 citations and includes a subject term index and title list.) (Copyright NERAC, Inc. 1995)

  10. RAINBIO : A mega-database of tropical African vascular plants distributions

    NARCIS (Netherlands)

    Dauby, Gilles; Zaiss, Rainer; Blach-Overgaard, Anne; Catarino, Luís; Damen, T.H.J.; Deblauwe, Vincent; Dessein, Steven; Dransfield, John; Droissart, Vincent; Duarte, Maria Cristina; Engledow, Henry; Fadeur, Geoffrey; Figueira, Rui; Gereau, Roy E.; Hardy, Olivier J.; Harris, David J.; Heij, De Janneke; Janssens, Steven; Klomberg, Yannick; Ley, Alexandra C.; Mackinder, Barbara A.; Meerts, Pierre; Poel, van de Jeike; Sonké, Bonaventure; Sosef, M.S.M.; Stévart, Tariq; Stoffelen, Piet; Svenning, Jens Christian; Sepulchre, Pierre; Burgt, Van Der Xander; Wieringa, J.J.; Couvreur, T.L.P.

    2016-01-01

    The tropical vegetation of Africa is characterized by high levels of species diversity but is undergoing important shifts in response to ongoing climate change and increasing anthropogenic pressures. Although our knowledge of plant species distribution patterns in the African tropics has been improv

  11. Database design and database administration for a kindergarten

    OpenAIRE

    Vítek, Daniel

    2009-01-01

    The bachelor thesis deals with the creation of a database design for a standard kindergarten, the installation of the designed database into the database system Oracle Database 10g Express Edition, and a demonstration of administration tasks in this database system. The design was verified by means of a purpose-built access application.

  12. Design and Construction of a Microsoft Access Coronary Heart Disease Clinical Database

    Institute of Scientific and Technical Information of China (English)

    刘晓宇; 姚茹; 吴大方

    2011-01-01

    Based on the clinical diagnostic criteria for coronary heart disease, a clinical case-record database was designed using Microsoft Access. Case records of coronary heart disease patients treated at our hospital (PLA No. 451 Hospital) from January 1993 to November 2011 were used to build the database, with Visual C++ as the development tool. The database supports storing, adding, querying, modifying and deleting case records, and can export data to Excel for analysis. The database is simple and practical: it facilitates computerized management of case records and benefits clinical diagnosis and treatment, teaching and scientific research.

  13. Physiological Information Database (PID)

    Science.gov (United States)

    EPA has developed a physiological information database (created using Microsoft ACCESS) intended to be used in PBPK modeling. The database contains physiological parameter values for humans from early childhood through senescence as well as similar data for laboratory animal spec...

  14. Cell Centred Database (CCDB)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Cell Centered Database (CCDB) is a web accessible database for high resolution 2D, 3D and 4D data from light and electron microscopy, including correlated imaging.

  15. Application of open-access databases to determine functional connectivity between resveratrol-binding protein QR2 and colorectal carcinoma.

    Science.gov (United States)

    Doonan, Barbara B; Schaafsma, Evelien; Pinto, John T; Wu, Joseph M; Hsieh, Tze-Chen

    2017-08-01

    Colorectal cancer (CRC) is a major cause of cancer-associated deaths worldwide. Recently, oral administration of resveratrol (trans-3,5,4'-trihydroxystilbene) has been reported to significantly reduce tumor proliferation in colorectal cancer patients; however, little specific information on the functional connections involved is available. The pathogenesis and development of colorectal cancer is a multistep process that can be categorized into three phenotypic pathways: chromosome instability (CIN), microsatellite instability (MSI), and the CpG island methylator phenotype (CIMP). Targets of resveratrol, including a high-affinity binding protein, quinone reductase 2 (QR2), have been identified, but with little information on disease association. We hypothesize that the relationship between resveratrol and different CRC etiologies might be gleaned using publicly available databases. A web-based microarray gene expression data-mining platform, Oncomine, was selected and used to determine whether QR2 may serve as a mechanistic and functional biotarget within the various CRC etiologies. We found that QR2 messenger RNA (mRNA) is overexpressed in CRC characterized by CIN, particularly in cells showing a positive KRAS (Kirsten rat sarcoma viral oncogene homolog) mutation, as well as by the MSI but not the CIMP phenotype. Mining of Oncomine revealed an excellent correlation between QR2 mRNA expression and certain CRC etiologies. Two resveratrol-associated genes found in CRC, adenomatous polyposis coli (APC) and TP53, were further mined using cBioPortal and the Colorectal Cancer Atlas, which predicted a mechanistic link to exist between resveratrol→QR2/TP53→CIN. Multiple web-based data mining can provide valuable insights which may lead to hypotheses serving to guide clinical trials and the design of therapies for enhanced disease prognosis and patient survival. This approach resembles a BioGPS, a capability for mining web-based databases that can elucidate the potential links between compounds to

  16. A geographic distribution database of Mononychellus mites (Acari, Tetranychidae) on cassava (Manihot esculenta).

    Science.gov (United States)

    Vásquez-Ordóñez, Aymer Andrés; Parsa, Soroush

    2014-01-01

    The genus Mononychellus is represented by 28 herbivorous mites. Some of them are notorious pests of cassava (Manihot esculenta Crantz), a primary food crop in the tropics. With the exception of Mononychellus tanajoa (Bondar), their geographic distribution is not widely known. This article therefore reports observational and specimen-based occurrence data of Mononychellus species associated with cassava. The dataset consists of 1,513 distribution records documented by the International Center for Tropical Agriculture (CIAT) between 1975 and 2012. The specimens are held at CIAT's Arthropod Reference Collection (CIATARC). Most of the records are from the genus' native range in South America and were documented between 1980 and 2000. Approximately 61% of the records belong to M. tanajoa, 25% to M. caribbeanae (McGregor), 10% to M. mcgregori (Flechtmann and Baker) and 2% to M. planki (McGregor). The complete dataset is available in Darwin Core Archive format via the Global Biodiversity Information Facility (GBIF).

  17. A comprehensive analysis of electron conical distributions from multi-satellite databases

    Science.gov (United States)

    Menietti, J. Douglas

    1993-01-01

    This report consists of a copy of a paper that has been submitted to the 'Journal of Geophysical Research', entitled 'DE 1 and Viking Observations Associated With Electron Conical Distributions,' and an abstract of another paper (included as an appendix to the report) that is about to be submitted to the same journal entitled 'Perpendicular Electron Heating by Absorption of Auroral Kilometric Radiation.' A bibliography of other papers that have been published as a result of this project follows. The purpose of this project was to use the DE 1 and Viking particle and wave data to better understand the source mechanism of electron conical distributions. We have shown that electron conics are often associated with upper hybrid waves in the nightside auroral region. We have also shown that electron conics are observed near auroral kilometric radiation (AKR) source regions and may be the result of perpendicular heating due to waves. We have completed a statistical study of electron conics observed by DE-1 and Viking. The study shows the occurrence frequency and location of electron conical distributions; there are some differences between the results of DE and Viking, perhaps due to different regions sampled.

  18. Data Distribution Characteristics and Model Algorithm of a Basic Database

    Institute of Scientific and Technical Information of China (English)

    刘智宾; 李磊磊; 许楠

    2012-01-01

    Distributed databases are the main technology for constructing basic data services, and the soundness of the data distribution directly determines the stability and service efficiency of a database. Starting from the basic strategies of data distribution, this paper systematically analyzes the application characteristics of a foundation database, generalizes the basic principles of database distribution, improves the distribution strategy model for a partitioned database, and, building on a heuristic algorithm, forms a hybrid data distribution model algorithm for a given number of replicas and distribution regions.

  19. Smart Location Database - Download

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Smart Location Database (SLD) summarizes over 80 demographic, built environment, transit service, and destination accessibility attributes for every census block...

  20. Smart Location Database - Service

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Smart Location Database (SLD) summarizes over 80 demographic, built environment, transit service, and destination accessibility attributes for every census block...

  1. Estimating spatial distribution of soil organic carbon for the Midwestern United States using historical database.

    Science.gov (United States)

    Kumar, Sandeep

    2015-05-01

    Soil organic carbon (SOC) is the most important parameter influencing soil health, global climate change, crop productivity, and various ecosystem services. Therefore, estimating SOC at larger scales is important. The present study was conducted to estimate the SOC pool at regional scale using the historical database gathered by the National Soil Survey Staff. Specific objectives of the study were to upscale the SOC density (kg C m⁻²) and total SOC pool (Pg C) across the Midwestern United States using geographically weighted regression kriging (GWRK), and compare the results with those obtained from geographically weighted regression (GWR) using the data for 3485 georeferenced profiles. Results from this study support the conclusion that GWRK produced satisfactory predictions with lower root mean square error (5.60 kg m⁻²), mean estimation error (0.01 kg m⁻²) and mean absolute estimation error (4.30 kg m⁻²), and higher R² (0.58) and goodness-of-prediction statistic (G = 0.59) values. The superiority of this approach is evident through a substantial increase in R² (0.45) compared to that for the global regression (R² = 0.28). Croplands of the region store 16.8 Pg SOC, followed by shrubs (5.85 Pg) and forests (4.45 Pg). The total SOC pool for the Midwestern region ranges from 31.5 to 31.6 Pg. This study illustrates that the GWRK approach explicitly addresses the spatial dependency and spatial non-stationarity issues for interpolating SOC density across the regional scale.
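The comparison above rests on standard prediction-error statistics; as an illustrative sketch (the values below are toy numbers, not data from the study), the three error measures quoted in the abstract can be computed as:

```python
def validation_stats(observed, predicted):
    """RMSE, mean estimation error (ME), and mean absolute
    estimation error (MAE), as quoted in the abstract."""
    n = len(observed)
    errors = [p - o for o, p in zip(observed, predicted)]
    rmse = (sum(e * e for e in errors) / n) ** 0.5
    me = sum(errors) / n
    mae = sum(abs(e) for e in errors) / n
    return rmse, me, mae

# Toy SOC densities (kg C m^-2), purely illustrative:
observed = [10.0, 12.5, 8.0, 15.0]
predicted = [11.0, 12.0, 9.5, 14.0]
rmse, me, mae = validation_stats(observed, predicted)
```

A small ME alongside a larger MAE and RMSE, as reported for GWRK, indicates predictions that scatter around the observations without a systematic bias.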

  2. Open Core Data: Semantic driven data access and distribution for terrestrial and marine scientific drilling data

    Science.gov (United States)

    Fils, D.; Noren, A. J.; Lehnert, K. A.

    2015-12-01

    Open Core Data (OCD) is a science-driven, innovative, efficient, and scalable infrastructure for data generated by scientific drilling and coring projects across all Earth sciences. It is designed to make scientific drilling data semantically discoverable, persistent, citable, and approachable, to maximize their utility to present and future geoscience researchers. Scientific drilling and coring is crucial for the advancement of the Earth sciences, unlocking new frontiers in the geologic record. Open Core Data will utilize and link existing data systems, services, and expertise of the JOIDES Resolution Science Operator (JRSO), the Continental Scientific Drilling Coordination Office (CSDCO), the Interdisciplinary Earth Data Alliance (IEDA) data facility, and the Consortium for Ocean Leadership (OL). Open Core Data will leverage efforts currently taking place under the EarthCube GeoLink Building Block and other previous efforts in Linked Open Data around ocean drilling data coordinated by OL. The OCD architecture for data distribution blends Linked Data Platform approaches with web services and schema.org use. OCD will further enable integration and tool development by assigning and using vocabularies, provenance, and unique IDs (DOIs, IGSN, URIs) in scientific drilling resources. A significant focus of this effort is to enable large-scale automated access to the data by domain-specific communities such as MagIC and Neotoma, providing them a process to integrate the facility data into their data models, workflows, and tools. This aspect will encompass methods to maintain awareness of authority information, enabling users to trace data back to the originating facility. Initial work on OCD is taking place under a supplement awarded to IEDA. This talk gives an overview of the work to date and planned future directions for the distribution of scientific drilling data by this effort.

  3. A new global database to improve predictions of permeability distribution in crystalline rocks at site scale

    Science.gov (United States)

    Achtziger-Zupančič, P.; Loew, S.; Mariéthoz, G.

    2017-05-01

    A comprehensive worldwide permeability data set has been compiled, consisting of 29,000 in situ permeabilities from 221 publications and reports and delineating the permeability distribution in crystalline rocks to depths of 2000 meters below ground surface (mbgs). We analyze the influence of technical factors (measurement method, scale effects, preferential sampling, and hydraulic anisotropy) and geological factors (lithology, current stress regime, current seismotectonic activity, and long-term tectonogeological history) on the permeability distribution with depth, using regression analysis and k-means clustering. The influences of preferential sampling and hydraulic anisotropy are negligible. A scale dependency is observed based on calculated rock test volumes, equaling 0.6 orders of magnitude of permeability change per order of magnitude of rock volume tested. Based on the entire data set, permeability decreases as log(k) = -1.5 × log(z) - 16.3, with permeability k (m²) and depth z (km) taken positive downward, and depth is the main factor driving the permeability distribution. The permeability variance is about 2 orders of magnitude at all depths, presumably representing permeability variations around brittle fault zones. Permeability and specific yield/storage exhibit similar depth trends. While in the upper 200 mbgs fracture flow varies between confined and unconfined, we observe confined fracture and matrix flow below about 600 mbgs depth. The most important geological factors are current seismotectonic activity (determined by peak ground acceleration) and long-term tectonogeological history (determined by geological province). The impact of lithology is less important. Based on the regression coefficients derived for all the geological key factors, permeability ranges of crystalline rocks at site scale can be predicted. First tests with independent data sets are promising.
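The depth trend reported above is a simple power law in depth; a minimal sketch evaluating it, with the coefficients taken directly from the abstract:

```python
import math

def permeability_m2(depth_km):
    """Predict crystalline-rock permeability k (m^2) at depth z (km),
    using the global regression from the abstract:
    log10(k) = -1.5 * log10(z) - 16.3."""
    return 10 ** (-1.5 * math.log10(depth_km) - 16.3)

# Permeability drops 1.5 orders of magnitude per decade of depth:
k_100m = permeability_m2(0.1)  # ~1.6e-15 m^2
k_1km = permeability_m2(1.0)   # ~5.0e-17 m^2
```

Note that the abstract's reported variance of about 2 orders of magnitude at any given depth means such point predictions are central tendencies, not site-specific values.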

  4. Global spatiotemporal distribution of soil respiration modeled using a global database

    Directory of Open Access Journals (Sweden)

    S. Hashimoto

    2015-03-01

    3.3 Pg C yr⁻¹ °C⁻¹, and Q10 = 1.4. Our study scaled up observed soil respiration values from field measurements to provide a data-oriented estimate of global soil respiration. Our results, including the modeled spatiotemporal distribution of global soil respiration, are based on a semi-empirical model parameterized with over one thousand data points. We expect that these spatiotemporal estimates will provide a benchmark for future studies and also help to constrain process-oriented models.
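The quoted Q10 = 1.4 parameterizes the temperature sensitivity of respiration; as a hedged illustration, the conventional Q10 scaling relation (the paper's full semi-empirical model is not reproduced here) can be sketched as:

```python
def q10_respiration(rate_ref, temp_c, temp_ref_c=20.0, q10=1.4):
    """Scale a reference respiration rate by the Q10 relation:
    R(T) = R_ref * Q10 ** ((T - T_ref) / 10).
    rate_ref and temp_ref_c are illustrative assumptions, not values
    from the study; q10 = 1.4 is the value quoted in the abstract."""
    return rate_ref * q10 ** ((temp_c - temp_ref_c) / 10.0)

# With Q10 = 1.4, a 10 degC warming raises respiration by 40%:
print(q10_respiration(1.0, 30.0))  # 1.4
```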

  5. The Earth System Grid Federation : an Open Infrastructure for Access to Distributed Geospatial Data

    Science.gov (United States)

    Cinquini, Luca; Crichton, Daniel; Mattmann, Chris; Harney, John; Shipman, Galen; Wang, Feiyi; Ananthakrishnan, Rachana; Miller, Neill; Denvil, Sebastian; Morgan, Mark; Pobre, Zed; Bell, Gavin M.; Drach, Bob; Williams, Dean; Kershaw, Philip; Pascoe, Stephen; Gonzalez, Estanislao; Fiore, Sandro; Schweitzer, Roland

    2012-01-01

    The Earth System Grid Federation (ESGF) is a multi-agency, international collaboration that aims at developing the software infrastructure needed to facilitate and empower the study of climate change on a global scale. The ESGF's architecture employs a system of geographically distributed peer nodes, which are independently administered yet united by the adoption of common federation protocols and application programming interfaces (APIs). The cornerstones of its interoperability are the peer-to-peer messaging that is continuously exchanged among all nodes in the federation; a shared architecture and API for search and discovery; and a security infrastructure based on industry standards (OpenID, SSL, GSI and SAML). The ESGF software is developed collaboratively across institutional boundaries and made available to the community as open source. It has now been adopted by multiple Earth science projects and allows access to petabytes of geophysical data, including the entire model output used for the next international assessment report on climate change (IPCC-AR5) and a suite of satellite observations (obs4MIPs) and reanalysis data sets (ANA4MIPs).

  6. The Earth System Grid Federation: An Open Infrastructure for Access to Distributed Geo-Spatial Data

    Energy Technology Data Exchange (ETDEWEB)

    Cinquini, Luca [Jet Propulsion Laboratory, Pasadena, CA; Crichton, Daniel [Jet Propulsion Laboratory, Pasadena, CA; Miller, Neill [Argonne National Laboratory (ANL); Mattmann, Chris [Jet Propulsion Laboratory, Pasadena, CA; Harney, John F [ORNL; Shipman, Galen M [ORNL; Wang, Feiyi [ORNL; Bell, Gavin [Lawrence Livermore National Laboratory (LLNL); Drach, Bob [Lawrence Livermore National Laboratory (LLNL); Ananthakrishnan, Rachana [Argonne National Laboratory (ANL); Pascoe, Stephen [STFC Rutherford Appleton Laboratory, NCAS/BADC; Kershaw, Philip [STFC Rutherford Appleton Laboratory, NCAS/BADC; Gonzalez, Estanislao [German Climate Computing Center; Fiore, Sandro [Euro-Mediterranean Center on Climate Change; Schweitzer, Roland [Pacific Marine Environmental Laboratory, National Oceanic and Atmospheric Administration; Danvil, Sebastian [Institut Pierre Simon Laplace (IPSL), Des Sciences de L' Environnement; Morgan, Mark [Institut Pierre Simon Laplace (IPSL), Des Sciences de L' Environnement

    2012-01-01

    The Earth System Grid Federation (ESGF) is a multi-agency, international collaboration that aims at developing the software infrastructure needed to facilitate and empower the study of climate change on a global scale. The ESGF's architecture employs a system of geographically distributed peer nodes, which are independently administered yet united by the adoption of common federation protocols and application programming interfaces (APIs). The cornerstones of its interoperability are the peer-to-peer messaging that is continuously exchanged among all nodes in the federation; a shared architecture and API for search and discovery; and a security infrastructure based on industry standards (OpenID, SSL, GSI and SAML). The ESGF software is developed collaboratively across institutional boundaries and made available to the community as open source. It has now been adopted by multiple Earth science projects and allows access to petabytes of geophysical data, including the entire model output used for the next international assessment report on climate change (IPCC-AR5) and a suite of satellite observations (obs4MIPs) and reanalysis data sets (ANA4MIPs).

  7. The Earth System Grid Federation: An Open Infrastructure for Access to Distributed Geospatial Data

    Energy Technology Data Exchange (ETDEWEB)

    Ananthakrishnan, Rachana [Argonne National Laboratory (ANL); Bell, Gavin [Lawrence Livermore National Laboratory (LLNL); Cinquini, Luca [Jet Propulsion Laboratory, Pasadena, CA; Crichton, Daniel [Jet Propulsion Laboratory, Pasadena, CA; Danvil, Sebastian [Institut Pierre Simon Laplace (IPSL), Des Sciences de L' Environnement; Drach, Bob [Lawrence Livermore National Laboratory (LLNL); Fiore, Sandro [Euro-Mediterranean Center on Climate Change; Gonzalez, Estanislao [German Climate Computing Center; Harney, John F [ORNL; Mattmann, Chris [Jet Propulsion Laboratory, Pasadena, CA; Kershaw, Philip [STFC Rutherford Appleton Laboratory, NCAS/BADC; Miller, Neill [Argonne National Laboratory (ANL); Morgan, Mark [Institut Pierre Simon Laplace (IPSL), Des Sciences de L' Environnement; Pascoe, Stephen [STFC Rutherford Appleton Laboratory, NCAS/BADC; Schweitzer, Roland [Pacific Marine Environmental Laboratory, National Oceanic and Atmospheric Administration; Shipman, Galen M [ORNL; Wang, Feiyi [ORNL

    2013-01-01

    The Earth System Grid Federation (ESGF) is a multi-agency, international collaboration that aims at developing the software infrastructure needed to facilitate and empower the study of climate change on a global scale. The ESGF's architecture employs a system of geographically distributed peer nodes, which are independently administered yet united by the adoption of common federation protocols and application programming interfaces (APIs). The cornerstones of its interoperability are the peer-to-peer messaging that is continuously exchanged among all nodes in the federation; a shared architecture and API for search and discovery; and a security infrastructure based on industry standards (OpenID, SSL, GSI and SAML). The ESGF software is developed collaboratively across institutional boundaries and made available to the community as open source. It has now been adopted by multiple Earth science projects and allows access to petabytes of geophysical data, including the entire model output used for the next international assessment report on climate change (IPCC-AR5) and a suite of satellite observations (obs4MIPs) and reanalysis data sets (ANA4MIPs).

  8. The Earth System Grid Federation : an Open Infrastructure for Access to Distributed Geospatial Data

    Science.gov (United States)

    Cinquini, Luca; Crichton, Daniel; Mattmann, Chris; Harney, John; Shipman, Galen; Wang, Feiyi; Ananthakrishnan, Rachana; Miller, Neill; Denvil, Sebastian; Morgan, Mark; et al.

    2012-01-01

    The Earth System Grid Federation (ESGF) is a multi-agency, international collaboration that aims at developing the software infrastructure needed to facilitate and empower the study of climate change on a global scale. The ESGF's architecture employs a system of geographically distributed peer nodes, which are independently administered yet united by the adoption of common federation protocols and application programming interfaces (APIs). The cornerstones of its interoperability are the peer-to-peer messaging that is continuously exchanged among all nodes in the federation; a shared architecture and API for search and discovery; and a security infrastructure based on industry standards (OpenID, SSL, GSI and SAML). The ESGF software is developed collaboratively across institutional boundaries and made available to the community as open source. It has now been adopted by multiple Earth science projects and allows access to petabytes of geophysical data, including the entire model output used for the next international assessment report on climate change (IPCC-AR5) and a suite of satellite observations (obs4MIPs) and reanalysis data sets (ANA4MIPs).

  9. Joint Distributed Access Point Selection and Power Allocation in Cognitive Radio Networks

    CERN Document Server

    Hong, Mingyi; Alviar, Jorge

    2011-01-01

    Spectrum management has been identified as a crucial step towards enabling the technology of the cognitive radio network (CRN). Most current work dealing with spectrum management in the CRN focuses on a single task of the problem, e.g., spectrum sensing, spectrum decision, spectrum sharing or spectrum mobility. In this work, we argue that for certain network configurations, jointly performing several tasks of spectrum management improves the spectrum efficiency. Specifically, we study the uplink resource management problem in a CRN where there exist multiple cognitive users (CUs) and access points (APs), with each AP operating on a set of non-overlapping channels. The CUs, in order to maximize their uplink transmission rates, have to associate with a suitable AP (spectrum decision), and to share the channels belonging to this AP with other CUs (spectrum sharing). These tasks are clearly interdependent, and the problem of how they should be carried out efficiently and distributedly is still open in the lit...

  10. Records Management Database

    Data.gov (United States)

    US Agency for International Development — The Records Management Database is a tool created in Microsoft Access specifically for USAID use. It contains metadata in order to access and retrieve the information...

  11. Research on text database accessing mechanism in Visual Basic

    Institute of Scientific and Technical Information of China (English)

    崔政

    2001-01-01

    Reading and writing Visual Basic text-file records can be accomplished as follows: use a data-aware class as the data source, bind the form's controls to the fields of a recordset through a BindingCollection object, read the data from the text file into an ADO recordset, and then use ADO's features to operate on the data. This technique has general applicability for accessing non-relational databases.
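The pattern the record describes, reading plain-text records into an in-memory recordset and editing fields through it, can be illustrated outside Visual Basic; a minimal Python sketch in which the standard csv module stands in for the ADO recordset (the file contents and field names are hypothetical):

```python
import csv
import io

# Hypothetical text-file "database": one record per line, comma-delimited.
text_data = io.StringIO("id,name\n1,alpha\n2,beta\n")

# Read the text records into an in-memory list of dicts,
# playing the role the ADO recordset plays in the article.
records = list(csv.DictReader(text_data))

# Update a field through the in-memory records, as a bound
# control would update the recordset in the VB approach.
records[0]["name"] = "gamma"

print([r["name"] for r in records])  # ['gamma', 'beta']
```

Writing the modified records back to the text file (the step ADO handles in the article) would be a symmetric `csv.DictWriter` pass.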

  12. Mining for Murder-Suicide: An Approach to Identifying Cases of Murder-Suicide in the National Violent Death Reporting System Restricted Access Database.

    Science.gov (United States)

    McNally, Matthew R; Patton, Christina L; Fremouw, William J

    2016-01-01

    The National Violent Death Reporting System (NVDRS) is a United States Centers for Disease Control and Prevention (CDC) database of violent deaths from 2003 to the present. The NVDRS collects information from 32 states on several types of violent deaths, including suicides, homicides, homicides followed by suicides, and deaths resulting from child maltreatment or intimate partner violence, as well as legal intervention and accidental firearm deaths. Despite the availability of data from police narratives, medical examiner reports, and other sources, reliably finding the cases of murder-suicide in the NVDRS has proven problematic due to the lack of a unique code for murder-suicide incidents and outdated descriptions of case-finding procedures from previous researchers. By providing a description of the methods used to access the NVDRS and the coding procedures used to decipher these data, the authors seek to assist future researchers in correctly identifying cases of murder-suicide deaths while avoiding false positives.

  13. Designing a Framework to Develop WEB Graphical Interfaces for ORACLE Databases - Web Dialog

    Directory of Open Access Journals (Sweden)

    Georgiana-Petruţa Fîntîneanu

    2009-01-01

    The present article aims to describe a project consisting of designing a framework of applications used to create graphical interfaces with an Oracle distributed database. The development of the project supposed the use of the latest technologies: the Oracle database server, the Tomcat web server, JDBC (a Java library used for accessing a database), and JSP and Tag Library (for the development of graphical interfaces).

  14. Experimental access to Transition Distribution Amplitudes with the PANDA experiment at FAIR

    Energy Technology Data Exchange (ETDEWEB)

    Singh, B.P. [Aligarh Muslim Univ. (India). Physics Dept.; Erni, W.; Keshelashvili, I. [Basel Univ. (Switzerland); Collaboration: The PANDA Collaboration; and others

    2015-08-15

    Baryon-to-meson Transition Distribution Amplitudes (TDAs), encoding valuable new information on hadron structure, appear as building blocks in the collinear factorized description for several types of hard exclusive reactions. In this paper, we address the possibility of accessing nucleon-to-pion (πN) TDAs from the p̄p → e⁺e⁻π⁰ reaction with the future PANDA detector at the FAIR facility. At high center-of-mass energy and high invariant mass squared of the lepton pair q², the amplitude of the signal channel p̄p → e⁺e⁻π⁰ admits a QCD factorized description in terms of πN TDAs and nucleon Distribution Amplitudes (DAs) in the forward and backward kinematic regimes. Assuming the validity of this factorized description, we perform feasibility studies for measuring p̄p → e⁺e⁻π⁰ with the PANDA detector. Detailed simulations of the signal reconstruction efficiency as well as of the rejection of the most severe background channel, i.e. p̄p → π⁺π⁻π⁰, were performed for the center-of-mass energy squared s = 5 GeV² and s = 10 GeV², in the kinematic regions 3.0 < q² < 4.3 GeV² and 5 < q² GeV², respectively, with a neutral pion scattered in the forward or backward cone |cos θ_π⁰| > 0.5 in the proton-antiproton center-of-mass frame. Results of the simulation show that the particle identification capabilities of the PANDA detector will allow a background rejection factor of 5·10⁷ (1·10⁷) at low (high) q² to be achieved for s = 5 GeV², and of 1·10⁸ (6·10⁶) at low (high) q² for s = 10 GeV², while keeping the signal reconstruction efficiency at around 40%. At both energies, a clean lepton signal can be reconstructed with the expected statistics corresponding to 2 fb⁻¹ of integrated luminosity. The cross sections obtained from the simulations are used to

  15. [Public scientific knowledge distribution in health information, communication and information technology indexed in MEDLINE and LILACS databases].

    Science.gov (United States)

    Packer, Abel Laerte; Tardelli, Adalberto Otranto; Castro, Regina Célia Figueiredo

    2007-01-01

    This study explores the distribution of international, regional and national scientific output in health information and communication, indexed in the MEDLINE and LILACS databases, between 1996 and 2005. A selection of articles was based on the hierarchical structure of Information Science in MeSH vocabulary. Four specific domains were determined: health information, medical informatics, scientific communications on healthcare and healthcare communications. The variables analyzed were: most-covered subjects and journals, author affiliation and publication countries and languages, in both databases. The Information Science category is represented in nearly 5% of MEDLINE and LILACS articles. The four domains under analysis showed a relative annual increase in MEDLINE. The Medical Informatics domain showed the highest number of records in MEDLINE, representing about half of all indexed articles. The importance of Information Science as a whole is more visible in publications from developed countries and the findings indicate the predominance of the United States, with significant growth in scientific output from China and South Korea and, to a lesser extent, Brazil.

  16. Genome databases

    Energy Technology Data Exchange (ETDEWEB)

    Courteau, J.

    1991-10-11

    Since the Genome Project began several years ago, a plethora of databases have been developed or are in the works. They range from the massive Genome Data Base at Johns Hopkins University, the central repository of all gene mapping information, to small databases focusing on single chromosomes or organisms. Some are publicly available, others are essentially private electronic lab notebooks. Still others limit access to a consortium of researchers working on, say, a single human chromosome. An increasing number incorporate sophisticated search and analytical software, while others operate as little more than data lists. In consultation with numerous experts in the field, a list has been compiled of some key genome-related databases. The list was not limited to map and sequence databases but also included the tools investigators use to interpret and elucidate genetic data, such as protein sequence and protein structure databases. Because a major goal of the Genome Project is to map and sequence the genomes of several experimental animals, including E. coli, yeast, fruit fly, nematode, and mouse, the available databases for those organisms are listed as well. The author also includes several databases that are still under development - including some ambitious efforts that go beyond data compilation to create what are being called electronic research communities, enabling many users, rather than just one or a few curators, to add or edit the data and tag it as raw or confirmed.

  17. Simultaneous detection of four garlic viruses by multiplex reverse transcription PCR and their distribution in Indian garlic accessions.

    Science.gov (United States)

    Majumder, S; Baranwal, V K

    2014-06-01

    Indian garlic is infected with Onion yellow dwarf virus (OYDV), Shallot latent virus (SLV), Garlic common latent virus (GarCLV) and allexiviruses. Identity and distribution of garlic viruses in various garlic accessions from different geographical regions of India were investigated. OYDV and allexiviruses were observed in all the garlic accessions, while SLV and GarCLV were observed only in a few accessions. A multiplex reverse transcription (RT)-PCR method was developed for the simultaneous detection and identification of OYDV, SLV, GarCLV and Allexivirus infecting garlic accessions in India. This multiplex protocol standardized in this study will be useful in indexing of garlic viruses and production of virus free seed material.

  18. Accessing Generalized Parton Distributions in Exclusive Photoproduction of a $\gamma \rho$ Pair with a Large Invariant Mass

    CERN Document Server

    Boussarie, R; Szymanowski, L; Wallon, S

    2016-01-01

    We propose and study the photoproduction of a $\gamma\,\rho$ pair with a large invariant mass and a small transverse momentum of the final nucleon, as a way to access generalized parton distributions. In the kinematics of JLab at 12 GeV, we demonstrate the feasibility of this measurement.

  19. Madrigal - Lessons Learned from 25 years of Evolution from a Single-Instrument Database to a Distributed Virtual Observatory

    Science.gov (United States)

    Holt, J. M.; Rideout, W.; van Eyken, T.

    2005-12-01

    Madrigal is a distributed, open source virtual observatory which has been operational for 25 years. During that time it has evolved from a simple database system for the Millstone Hill Incoherent Scatter Radar to a full-featured virtual observatory distributed among five major sites. Madrigal is interoperable with the CEDAR Database and, in addition to being the primary data repository for incoherent scatter radar data, contains data from many other ground-based space science instruments. Madrigal features a well-defined metadata standard, real-time capability, an interactive Web interface, provision for linking ancillary information such as html pages and figures to data, interactive plotting and a complete Web-services interface. A number of important lessons have been learned from the Madrigal project: systems such as Madrigal depend critically on robust data and metadata standards; they need to be a community project; they must permit user interface improvements to be shared across the community; they require a standard, robust interface; scientific efforts using systems such as Madrigal can lead directly to improvements in the system. An example of the last has been the development of several climatological models from Madrigal data. Several features of Madrigal, such as a global search capability, were added in response to requests from the model developers. The models have recently been incorporated into Madrigal and provide a powerful basis for event discovery based on deviations of data from the climatological average. Madrigal will never completely solve the VO problem, but it will make life much easier for future VO projects.

  20. [HPA distribution characteristics of platelet donor population in Mudanjiang area of China and establishment of its database].

    Science.gov (United States)

    Liu, Bing-Xian; Gao, Guang-Ping; Wang, Dan; Zhang, Yan; Yu, Xiu-Qing; Xia, Dong-Mei; Zhou, Rui-Hua; Zhang, Hua; Ma, Qiang; Liu, Jie

    2012-06-01

    This study aimed to explore the distribution characteristics of the human platelet antigen (HPA) genes of platelet donors and their polymorphism in the Mudanjiang area of Heilongjiang Province in China, to determine the platelet antigen systems with clinical significance by judging the rate of incompatibility of HPA, and to establish a database of donors' HPA. The genotyping of 154 unrelated platelet donors was performed by means of PCR-SSP. The frequencies of genes and genotypes were calculated and compared with those in other areas. The results showed that the genes 1a-17a of HPA-a were all expressed in the 154 healthy and unrelated platelet donors. Only genes 1b, 2b, 3b, 5b, 6b and 15b of HPA-b were expressed, while genes 4b, 7b-14b and 16b were not expressed. Among the genotypes, aa homozygosity was predominant; HPA15 had the greatest heterozygosity, while HPA3 had lower heterozygosity. There were 23 combined types of HPA, 5 of which had a rate higher than 10%, while the frequencies of the other 18 were lower than 8%. HPA genotype frequencies showed a good consistency with Hardy-Weinberg equilibrium. It is concluded that the distribution of the allele polymorphism of HPA1-HPA17 in the Mudanjiang area has its own characteristics compared with other areas and some countries; the local HPA genotype database of platelet donors established in the Mudanjiang area can provide matching donors for clinical use with immunological significance.
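The Hardy-Weinberg consistency mentioned above compares observed genotype counts with those expected from the allele frequencies; a minimal sketch for one biallelic HPA system (the counts below are hypothetical, not the study's data):

```python
def hardy_weinberg_expected(n_aa, n_ab, n_bb):
    """Expected genotype counts under Hardy-Weinberg equilibrium
    for a biallelic locus, given observed counts (aa, ab, bb)."""
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)   # frequency of allele a
    q = 1 - p                          # frequency of allele b
    return n * p * p, 2 * n * p * q, n * q * q

# Hypothetical counts for one HPA system in 154 donors:
obs = (120, 30, 4)
exp_aa, exp_ab, exp_bb = hardy_weinberg_expected(*obs)

# Chi-square statistic comparing observed with HW-expected counts;
# a small value indicates consistency with equilibrium.
chi2 = sum((o - e) ** 2 / e for o, e in zip(obs, (exp_aa, exp_ab, exp_bb)))
```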

  1. California dragonfly and damselfly (Odonata) database: temporal and spatial distribution of species records collected over the past century

    Directory of Open Access Journals (Sweden)

    Joan E. Ball-Damerow

    2015-02-01

Full Text Available The recently completed Odonata database for California consists of specimen records from the major entomology collections of the state, large Odonata collections outside of the state, previous literature, historical and recent field surveys, and enthusiast group observations. The database includes 32,025 total records and 19,000 unique records for 106 species of dragonflies and damselflies, with records spanning 1879–2013. Records have been geographically referenced using the point-radius method to assign coordinates and an uncertainty radius to specimen locations. In addition to describing techniques used in data acquisition, georeferencing, and quality control, we present assessments of the temporal, spatial, and taxonomic distribution of records. We use this information to identify biases in the data, and to determine changes in species prevalence, latitudinal ranges, and elevation ranges when comparing records before 1976 and after 1979. The average latitude of records increased by 78 km between these two periods. While average elevation did not change significantly, the average minimum elevation across species declined by 108 m. Odonata distributions may be generally shifting northwards as temperatures warm, and to lower minimum elevations in response to increased summer water availability in low-elevation agricultural regions. The unexpected decline in elevation may also be partially the result of bias in recent collections towards centers of human population, which tend to occur at lower elevations. This study emphasizes the need to address temporal, spatial, and taxonomic biases in museum and observational records in order to produce reliable conclusions from such data.
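The before/after comparison described above reduces to splitting georeferenced records at the 1976/1979 boundary and comparing period means. A minimal sketch, using a handful of invented records rather than the actual California Odonata data:

```python
# Sketch of the before-1976 vs after-1979 comparison, with invented records.
records = [
    # (year, latitude_deg, elevation_m) -- hypothetical values
    (1902, 34.1, 520), (1950, 35.0, 300), (1975, 36.2, 410),
    (1980, 36.9, 150), (1995, 37.8, 90),  (2010, 38.4, 200),
]

early = [r for r in records if r[0] < 1976]
late  = [r for r in records if r[0] > 1979]

mean = lambda xs: sum(xs) / len(xs)
lat_shift_deg = mean([r[1] for r in late]) - mean([r[1] for r in early])
# Roughly 111 km per degree of latitude converts the shift to kilometres.
lat_shift_km = lat_shift_deg * 111.0
elev_shift_m = mean([r[2] for r in late]) - mean([r[2] for r in early])
print(round(lat_shift_km, 1), round(elev_shift_m, 1))
```

A northward shift appears as a positive latitude difference and a move toward lower ground as a negative elevation difference.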

  2. Fast Incremental and Personalized PageRank over Distributed Main Memory Databases

    CERN Document Server

    Bahmani, Bahman; Goel, Ashish

    2010-01-01

In this paper, we analyze the efficiency of Monte Carlo methods for incremental computation of PageRank, personalized PageRank, and similar random walk based methods (with focus on SALSA), on large-scale dynamically evolving social networks. We assume that the graph of friendships is stored in distributed shared memory, as is the case for large social networks such as Twitter. For global PageRank, we assume that the social network has $n$ nodes, and $m$ adversarially chosen edges arrive in a random order. We show that with a reset probability of $\epsilon$, the total work needed to maintain an accurate estimate (using the Monte Carlo method) of the PageRank of every node at all times is $O(\frac{n\log m}{\epsilon^{2}})$. This is significantly better than all known bounds for incremental PageRank. For instance, if we naively recompute the PageRanks as each edge arrives, the simple power iteration method needs $\Omega(\frac{m^2}{\log(1/(1-\epsilon))})$ total time and the Monte Carlo method needs $O(mn/\epsilon)...
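The Monte Carlo estimator the abstract refers to can be sketched simply: run a fixed number of random walks from each node, terminating each step with the reset probability $\epsilon$, and estimate each node's PageRank by its share of all visits. The tiny graph and parameters below are illustrative, not from the paper:

```python
# Sketch of Monte Carlo PageRank estimation via reset-terminated random walks.
import random

random.seed(0)

def monte_carlo_pagerank(graph, eps=0.15, walks_per_node=200):
    """graph: dict node -> list of out-neighbors. Returns estimated PageRank."""
    visits = {v: 0 for v in graph}
    total = 0
    for start in graph:
        for _ in range(walks_per_node):
            v = start
            while True:
                visits[v] += 1
                total += 1
                # With probability eps the walk resets (terminates here);
                # dangling nodes also terminate the walk.
                if random.random() < eps or not graph[v]:
                    break
                v = random.choice(graph[v])
    return {v: c / total for v, c in visits.items()}

# Tiny example: node 'c' is pointed to by both 'a' and 'b'.
g = {'a': ['c'], 'b': ['c'], 'c': ['a']}
pr = monte_carlo_pagerank(g)
print({v: round(p, 2) for v, p in pr.items()})
```

The incremental trick analyzed in the paper is that, as edges arrive, only the stored walk segments passing through the changed node need to be re-sampled, rather than recomputing from scratch.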

  3. Model checking software for phylogenetic trees using distribution and database methods.

    Science.gov (United States)

    Requeno, José Ignacio; Colom, José Manuel

    2013-11-14

    Model checking, a generic and formal paradigm stemming from computer science based on temporal logics, has been proposed for the study of biological properties that emerge from the labeling of the states defined over the phylogenetic tree. This strategy allows us to use generic software tools already present in the industry. However, the performance of traditional model checking is penalized when scaling the system for large phylogenies. To this end, two strategies are presented here. The first one consists of partitioning the phylogenetic tree into a set of subgraphs each one representing a subproblem to be verified so as to speed up the computation time and distribute the memory consumption. The second strategy is based on uncoupling the information associated to each state of the phylogenetic tree (mainly, the DNA sequence) and exporting it to an external tool for the management of large information systems. The integration of all these approaches outperforms the results of monolithic model checking and helps us to execute the verification of properties in a real phylogenetic tree.

  4. Access Database Application in Fault Diagnostic Software

    Institute of Scientific and Technical Information of China (English)

    汪荣会

    2016-01-01

Applying Access database technology in fault-diagnosis software makes it possible to flexibly configure the engine control unit (ECU) models, diagnostic protocols and diagnostic functions the software supports. This gives the diagnostic software good scalability and makes software upgrades more convenient. Taking the diagnosis software for a functionally complex diesel-engine electronic control unit as an example, this paper introduces the application of the Access database in fault-diagnosis software.

  5. RTDB: A memory resident real-time object database

    Energy Technology Data Exchange (ETDEWEB)

    Jerzy M. Nogiec; Eugene Desavouret

    2003-06-04

    RTDB is a fast, memory-resident object database with built-in support for distribution. It constitutes an attractive alternative for architecting real-time solutions with multiple, possibly distributed, processes or agents sharing data. RTDB offers both direct and navigational access to stored objects, with local and remote random access by object identifiers, and immediate direct access via object indices. The database supports transparent access to objects stored in multiple collaborating dispersed databases and includes a built-in cache mechanism that allows for keeping local copies of remote objects, with specifiable invalidation deadlines. Additional features of RTDB include a trigger mechanism on objects that allows for issuing events or activating handlers when objects are accessed or modified and a very fast, attribute based search/query mechanism. The overall architecture and application of RTDB in a control and monitoring system is presented.
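Two of the RTDB features described above, indexed access to objects by identifier and a trigger mechanism that fires handlers when objects are modified, can be illustrated with a minimal sketch. This is a Python toy, not RTDB's actual API:

```python
# Minimal sketch (not RTDB's real interface) of an in-memory object store
# with access by object identifier and on-modify triggers.

class ObjectStore:
    def __init__(self):
        self._objects = {}     # object id -> object
        self._triggers = {}    # object id -> list of handlers

    def put(self, oid, obj):
        self._objects[oid] = obj
        for handler in self._triggers.get(oid, []):
            handler(oid, obj)  # fire on-modify triggers

    def get(self, oid):
        return self._objects[oid]

    def on_modify(self, oid, handler):
        self._triggers.setdefault(oid, []).append(handler)

store = ObjectStore()
events = []
store.on_modify("magnet/temp", lambda oid, obj: events.append((oid, obj)))
store.put("magnet/temp", 4.2)   # handler records the update
print(store.get("magnet/temp"), events)  # -> 4.2 [('magnet/temp', 4.2)]
```

In a control and monitoring system, such triggers are what let displays or alarm handlers react immediately when a remote process updates a shared object.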

  6. Research on Security Policy Based on the Development of Access Database

    Institute of Scientific and Technical Information of China (English)

    宋清文

    2014-01-01

To meet the security requirements of Access-based development, the built-in password protection is abandoned in favor of a new security strategy, studied experimentally: direct use of Access's built-in functions is disabled in the delivered product; a user login and authentication system is designed and set to run first at startup; VBA code is used to encrypt and decrypt sensitive information; and keys are enabled or disabled according to user role. The results show that, after authentication, the super user (programmer) can use a key to stop form startup and then modify database objects and code; other authenticated users can only enter their own main forms to use the software; and illegal users cannot snoop on, use or modify the application, thus ensuring the security of products developed with Access. Practice has shown that with this security strategy users are no longer troubled by Access security problems, and Access can play a greater role.

  7. Design of IOT database based on distributed processing technology

    Institute of Scientific and Technical Information of China (English)

    李娜; 刘俊辉

    2012-01-01

In recent years, with the rapid development of Internet of Things (IOT) technology, new challenges have been posed for data storage and access. Focusing on the database management problem in the IOT, and building on a study of IOT technology, a design method for an IOT database based on distributed processing technology (DPT) is proposed, combined with peer-to-peer (P2P) network point-cloud computing techniques. Taking a medical-system IOT as an example, it is verified that the method can, to a certain extent, solve the database management problem in the IOT, and thus provides technical support for combining IOT technology with database technology, network technology and middleware technology.

  8. Research and Implementation of Distributed Database Technology%分布式数据库技术的研究与实现

    Institute of Scientific and Technical Information of China (English)

    杨东; 谢菲; 杨晓刚; 何遵文

    2015-01-01

Distributed database technology is the product of combining database technology with computer network technology. It involves parallel computing, distribution strategies, data partitioning, query optimization, and distributed concurrency control, transaction processing and recovery. This paper studies the key technologies of distributed databases in order to find a new, general-purpose distributed database model suited to a variety of complex application scenarios, to meet the increasingly prominent challenges brought by the big data revolution.

  9. Threshold detection for the generalized Pareto distribution: Review of representative methods and application to the NOAA NCDC daily rainfall database

    Science.gov (United States)

    Langousis, Andreas; Mamalakis, Antonios; Puliga, Michelangelo; Deidda, Roberto

    2016-04-01

In extreme excess modeling, one fits a generalized Pareto (GP) distribution to rainfall excesses above a properly selected threshold u. The latter is generally determined using various approaches, such as nonparametric methods intended to locate the changing point between extreme and nonextreme regions of the data, graphical methods where one studies the dependence of GP-related metrics on the threshold level u, and Goodness-of-Fit (GoF) metrics that, for a certain level of significance, locate the lowest threshold u above which a GP distribution model is applicable. Here we review representative methods for GP threshold detection, discuss fundamental differences in their theoretical bases, and apply them to 1714 overcentennial daily rainfall records from the NOAA-NCDC database. We find that nonparametric methods are generally not reliable, while methods based on GP asymptotic properties lead to unrealistically high estimates of the threshold and shape parameters. The latter is justified by theoretical arguments, and is especially the case in rainfall applications, where the shape parameter of the GP distribution is low, i.e., on the order of 0.1-0.2. Better performance is demonstrated by graphical methods and GoF metrics that rely on preasymptotic properties of the GP distribution. For daily rainfall, we find that GP threshold estimates range between 2 and 12 mm/d with a mean value of 6.5 mm/d, while the existence of quantization in the empirical records, as well as variations in their size, constitute the two most important factors that may significantly affect the accuracy of the obtained results.
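One of the "graphical" diagnostics alluded to above is the mean-residual-life (mean excess) plot: for a GP distribution the mean excess is linear in the threshold u, so one looks for the lowest u beyond which the empirical mean excess becomes approximately linear. A sketch with synthetic rainfall (an exponential tail, i.e. the GP special case of shape 0, whose mean excess is constant in u), not the NOAA-NCDC records:

```python
# Mean-excess diagnostic sketch on synthetic "rainfall" with an exponential
# tail (a GP with shape 0, so the mean excess should be flat in u).
import random

random.seed(1)

def mean_excess(data, u):
    """Average excess (x - u) over observations exceeding u; None if none."""
    exc = [x - u for x in data if x > u]
    return sum(exc) / len(exc) if exc else None

# Synthetic daily rainfall (mm/d) with mean 8: memorylessness of the
# exponential makes the mean excess roughly 8 at every threshold.
rain = [random.expovariate(1 / 8.0) for _ in range(5000)]

for u in (2.0, 6.5, 12.0):
    print(u, round(mean_excess(rain, u), 2))
```

For a heavier-tailed GP (positive shape) the same plot would trend upward with u, and the departure from linearity at low u is what signals that the threshold is set too low.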

  10. SMALL-SCALE AND GLOBAL DYNAMOS AND THE AREA AND FLUX DISTRIBUTIONS OF ACTIVE REGIONS, SUNSPOT GROUPS, AND SUNSPOTS: A MULTI-DATABASE STUDY

    Energy Technology Data Exchange (ETDEWEB)

    Muñoz-Jaramillo, Andrés; Windmueller, John C.; Amouzou, Ernest C.; Longcope, Dana W. [Department of Physics, Montana State University, Bozeman, MT 59717 (United States); Senkpeil, Ryan R. [Department of Physics, Purdue University, West Lafayette, IN 47907 (United States); Tlatov, Andrey G. [Kislovodsk Mountain Astronomical Station of the Pulkovo Observatory, Kislovodsk 357700 (Russian Federation); Nagovitsyn, Yury A. [Pulkovo Astronomical Observatory, Russian Academy of Sciences, St. Petersburg 196140 (Russian Federation); Pevtsov, Alexei A. [National Solar Observatory, Sunspot, NM 88349 (United States); Chapman, Gary A.; Cookson, Angela M. [San Fernando Observatory, Department of Physics and Astronomy, California State University Northridge, Northridge, CA 91330 (United States); Yeates, Anthony R. [Department of Mathematical Sciences, Durham University, South Road, Durham DH1 3LE (United Kingdom); Watson, Fraser T. [National Solar Observatory, Tucson, AZ 85719 (United States); Balmaceda, Laura A. [Institute for Astronomical, Terrestrial and Space Sciences (ICATE-CONICET), San Juan (Argentina); DeLuca, Edward E. [Harvard-Smithsonian Center for Astrophysics, Cambridge, MA 02138 (United States); Martens, Petrus C. H., E-mail: munoz@solar.physics.montana.edu [Department of Physics and Astronomy, Georgia State University, Atlanta, GA 30303 (United States)

    2015-02-10

In this work, we take advantage of 11 different sunspot group, sunspot, and active region databases to characterize the area and flux distributions of photospheric magnetic structures. We find that, when taken separately, different databases are better fitted by different distributions (as has been reported previously in the literature). However, we find that all our databases can be reconciled by the simple application of a proportionality constant, and that, in reality, different databases are sampling different parts of a composite distribution. This composite distribution is made up of a linear combination of Weibull and log-normal distributions, where a pure Weibull (log-normal) characterizes the distribution of structures with fluxes below (above) 10^21 Mx (10^22 Mx). Additionally, we demonstrate that the Weibull distribution shows the expected linear behavior of a power-law distribution (when extended to smaller fluxes), making our results compatible with the results of Parnell et al. We propose that this is evidence of two separate mechanisms giving rise to visible structures on the photosphere: one directly connected to the global component of the dynamo (and the generation of bipolar active regions), and the other to the small-scale component of the dynamo (and the fragmentation of magnetic structures due to their interaction with turbulent convection).
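The composite distribution described above is just a weighted sum of a Weibull and a log-normal density over flux. A sketch with placeholder weights and parameters (not the paper's fitted values), showing the Weibull term dominating at low flux and the log-normal at high flux:

```python
# Sketch of a Weibull + log-normal mixture density over magnetic flux (Mx).
# Weight and shape/scale parameters are illustrative placeholders only.
import math

def weibull_pdf(x, k, lam):
    return (k / lam) * (x / lam) ** (k - 1) * math.exp(-((x / lam) ** k))

def lognormal_pdf(x, mu, sigma):
    return math.exp(-((math.log(x) - mu) ** 2) / (2 * sigma ** 2)) / (
        x * sigma * math.sqrt(2 * math.pi))

def composite_pdf(x, w=0.6, k=0.8, lam=1e21, mu=math.log(5e21), sigma=1.0):
    """Mixture: Weibull dominates well below ~1e21 Mx, log-normal above ~1e22 Mx."""
    return w * weibull_pdf(x, k, lam) + (1 - w) * lognormal_pdf(x, mu, sigma)

# Each component integrates to 1, so the mixture does too (weights sum to 1).
print(composite_pdf(1e20), composite_pdf(1e22))
```

Fitting such a mixture to the merged databases amounts to choosing w and the component parameters so the composite matches the rescaled empirical histograms across the full flux range.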

  11. On the Situational Approach Applied in the Access Database Teaching Process

    Institute of Scientific and Technical Information of China (English)

    李艳红; 苏有邦

    2016-01-01

Aiming at problems in students' current study of the Access database course, such as limited class hours, the strongly practical and operational nature of the material, and abstract, hard-to-understand theory, this paper proposes applying the situational teaching method to everyday Access database teaching, and describes the specific applications and advantages of the situational approach in both the theoretical and practical teaching of the Access database course.

  12. The Method of Accessing Remote Databases and Long Data Types in VBA

    Institute of Scientific and Technical Information of China (English)

    马瑞民; 马永生; 张方舟

    2001-01-01

This paper introduces methods for accessing remote databases, using ADO, and storing and retrieving long data types when developing networked multimedia databases with Word VBA. It also discusses implementing multimedia databases by extending relational databases, using Word VBA and SQL Server respectively as the front-end and back-end programming tools.

  13. Converged wireline and wireless signal distribution in optical fiber access networks

    DEFF Research Database (Denmark)

    Prince, Kamau

This thesis presents results obtained during the course of my doctoral studies into the transport of fixed and wireless signaling over a converged optical access infrastructure. In the formulation, development and assessment of a converged paradigm for multiple-services delivery via optical access...

  14. RNA FRABASE 2.0: an advanced web-accessible database with the capacity to search the three-dimensional fragments within RNA structures

    Directory of Open Access Journals (Sweden)

    Wasik Szymon

    2010-05-01

Full Text Available Abstract Background Recent discoveries concerning novel functions of RNA, such as RNA interference, have contributed towards the growing importance of the field. In this respect, a deeper knowledge of complex three-dimensional RNA structures is essential to understand their new biological functions. A number of bioinformatic tools have been proposed to explore two major structural databases (PDB, NDB in order to analyze various aspects of RNA tertiary structures. One of these tools is RNA FRABASE 1.0, the first web-accessible database with an engine for automatic search of 3D fragments within PDB-derived RNA structures. This search is based upon the user-defined RNA secondary structure pattern. In this paper, we present and discuss RNA FRABASE 2.0. This second version of the system represents a major extension of this tool in terms of providing new data and a wide spectrum of novel functionalities. An intuitively operated web server platform enables very fast user-tailored search of three-dimensional RNA fragments, their multi-parameter conformational analysis and visualization. Description RNA FRABASE 2.0 has stored information on 1565 PDB-deposited RNA structures, including all NMR models. The RNA FRABASE 2.0 search engine algorithms operate on the database of the RNA sequences and the new library of RNA secondary structures, coded in the dot-bracket format extended to hold multi-stranded structures and to cover residues whose coordinates are missing in the PDB files. The library of RNA secondary structures (and their graphics) is made available. A high level of efficiency of the 3D search has been achieved by introducing novel tools to formulate advanced searching patterns and to screen highly populated tertiary structure elements. RNA FRABASE 2.0 also stores data and conformational parameters in order to provide "on the spot" structural filters to explore the three-dimensional RNA structures. An instant visualization of the 3D RNA

  15. Dealing with an information overload of health science data: structured utilisation of libraries, distributed knowledge in databases and Web content.

    Science.gov (United States)

    Hoelzer, Simon; Schweiger, Ralf K; Rieger, Joerg; Meyer, Michael

    2006-01-01

    The organizational structures of web contents and electronic information resources must adapt to the demands of a growing volume of information and user requirements. Otherwise the information society will be threatened by disinformation. The biomedical sciences are especially vulnerable in this regard, since they are strongly oriented toward text-based knowledge sources. Here sustainable improvement can only be achieved by using a comprehensive, integrated approach that not only includes data management but also specifically incorporates the editorial processes, including structuring information sources and publication. The technical resources needed to effectively master these tasks are already available in the form of the data standards and tools of the Semantic Web. They include Rich Site Summaries (RSS), which have become an established means of distributing and syndicating conventional news messages and blogs. They can also provide access to the contents of the previously mentioned information sources, which are conventionally classified as 'deep web' content.

  16. A Unified Peer-to-Peer Database Framework for XQueries over Dynamic Distributed Content and its Application for Scalable Service Discovery

    CERN Document Server

    Hoschek, Wolfgang

    In a large distributed system spanning administrative domains such as a Grid, it is desirable to maintain and query dynamic and timely information about active participants such as services, resources and user communities. The web services vision promises that programs are made more flexible and powerful by querying Internet databases (registries) at runtime in order to discover information and network attached third-party building blocks. Services can advertise themselves and related metadata via such databases, enabling the assembly of distributed higher-level components. In support of this vision, this thesis shows how to support expressive general-purpose queries over a view that integrates autonomous dynamic database nodes from a wide range of distributed system topologies. We motivate and justify the assertion that realistic ubiquitous service and resource discovery requires a rich general-purpose query language such as XQuery or SQL. Next, we introduce the Web Service Discovery Architecture (WSDA), wh...

  17. Multilevel security for relational databases

    CERN Document Server

    Faragallah, Osama S; El-Samie, Fathi E Abd

    2014-01-01

Contents: Concepts of Database Security (database concepts; relational database security concepts); Access Control in Relational Databases (discretionary access control; mandatory access control; role-based access control); Work Objectives; Book Organization; Basic Concepts of Multilevel Database Security (introduction; multilevel database relations; polyinstantiation: invisible polyinstantiation, visible polyinstantiation, types of polyinstantiation; architectural considerations)

  18. Atomic Spectra Database (ASD)

    Science.gov (United States)

    SRD 78 NIST Atomic Spectra Database (ASD) (Web, free access)   This database provides access and search capability for NIST critically evaluated data on atomic energy levels, wavelengths, and transition probabilities that are reasonably up-to-date. The NIST Atomic Spectroscopy Data Center has carried out these critical compilations.

  19. A database on the distribution of butterflies (Lepidoptera) in northern Belgium (Flanders and the Brussels Capital Region).

    Science.gov (United States)

    Maes, Dirk; Vanreusel, Wouter; Herremans, Marc; Vantieghem, Pieter; Brosens, Dimitri; Gielen, Karin; Beck, Olivier; Van Dyck, Hans; Desmet, Peter; Natuurpunt, Vlinderwerkgroep

    2016-01-01

In this data paper, we describe two datasets derived from two sources, which collectively represent the most complete overview of butterflies in Flanders and the Brussels Capital Region (northern Belgium). The first dataset (further referred to as the INBO dataset - http://doi.org/10.15468/njgbmh) contains 761,660 records of 70 species and is compiled by the Research Institute for Nature and Forest (INBO) in cooperation with the Butterfly working group of Natuurpunt (Vlinderwerkgroep). It is derived from the database Vlinderdatabank at the INBO, which consists of (historical) collection and literature data (1830-2001), for which all butterfly specimens in institutional and available personal collections were digitized and all entomological and other relevant publications were checked for butterfly distribution data. It also contains observations and monitoring data for the period 1991-2014. The monitoring data were collected by a (small) butterfly monitoring network in which butterflies were recorded using a standardized protocol. The second dataset (further referred to as the Natuurpunt dataset - http://doi.org/10.15468/ezfbee) contains 612,934 records of 63 species and is derived from the database http://waarnemingen.be, hosted at the nature conservation NGO Natuurpunt in collaboration with Stichting Natuurinformatie. This dataset contains butterfly observations by volunteers (citizen scientists), mainly since 2008. Together, these datasets currently contain a total of 1,374,594 records, which are georeferenced using the centroid of their respective 5 × 5 km² Universal Transverse Mercator (UTM) grid cell. Both datasets are published as open data and are available through the Global Biodiversity Information Facility (GBIF).
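The grid-cell georeferencing convention described above, replacing each record's coordinates with the centroid of its 5 × 5 km UTM cell, is simple integer arithmetic on easting/northing in metres. A sketch with invented coordinates:

```python
# Sketch of snapping a UTM easting/northing (metres) to the centroid of its
# 5 x 5 km grid cell. The coordinates below are invented examples.

CELL = 5000  # 5 km cell size in metres

def utm_cell_centroid(easting, northing, cell=CELL):
    """Return the centroid of the grid cell containing the point."""
    cx = (easting // cell) * cell + cell / 2
    cy = (northing // cell) * cell + cell / 2
    return cx, cy

print(utm_cell_centroid(551234.0, 5663871.0))  # -> (552500.0, 5662500.0)
```

This deliberately coarsens precise locations to a common 5 km resolution, which both harmonizes records of differing accuracy and obscures exact sites of sensitive species.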

  20. The TJ-II Relational Database Access Library: A User's Guide

    Energy Technology Data Exchange (ETDEWEB)

    Sanchez, E.; Portas, A. B.; Vega, J.

    2003-07-01

A relational database has been developed to store data representing physical values from TJ-II discharges. This new database complements the existing TJ-II raw data database. The database resides in a host computer running the Windows 2000 Server operating system and is managed by SQL Server. A function library has been developed that permits remote access to these data from user programs running on computers connected to TJ-II local area networks, via remote procedure call. This document provides a general description of the database and its organization, together with a detailed description of the functions included in the library and examples of how to use these functions in computer programs written in the FORTRAN and C languages. (Author) 8 refs.

  1. The Jungle Database Search Engine

    DEFF Research Database (Denmark)

    Bøhlen, Michael Hanspeter; Bukauskas, Linas; Dyreson, Curtis

    1999-01-01

Information spread across databases cannot be found by current search engines. A database search engine is able to access and advertise databases on the WWW. Jungle is a database search engine prototype developed at Aalborg University. Operating through JDBC connections to remote databases, Jungle...

  2. A Distributed Architecture for Sharing Ecological Data Sets with Access and Usage Control Guarantees

    DEFF Research Database (Denmark)

    Bonnet, Philippe; Gonzalez, Javier; Granados, Joel Andres

    2014-01-01

and usage control is necessary to enforce existing open data policies. We have proposed the vision of trusted cells: a decentralized infrastructure, based on secure hardware running on devices equipped with trusted execution environments at the edges of the Internet. We originally described the utilization...... new insights, there are significant barriers to the realization of this vision. One of the key challenges is to allow scientists to share their data widely while retaining some form of control over who accesses this data (access control) and, more importantly, how it is used (usage control). Access...... data sets with access and usage control guarantees. We rely on examples from terrestrial research and monitoring in the Arctic in the context of the INTERACT project....

  3. License - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

Full Text Available Trypanosomes Database License. License to Use This Database. Last updated: 2014/02/04. This page specifies the license terms regarding the use of this database and the requirements you must follow in using this database. The license for this database is specified in the Creative Commons Attribution-Share Alike 2.1 Japan license. If you use data from this database, please be sure to attribute this database. With regard to this database, you are licensed to freely access part or whole of this database.

  4. Deep Web database selection using topic distribution

    Institute of Scientific and Technical Information of China (English)

    郑东; 施化吉

    2013-01-01

As more and more information is hidden in the Deep Web, finding the Web databases most relevant to a user's query has become a pressing problem. An approach based on the topic distribution of Web databases is proposed for Web database selection in Deep Web data integration. It acquires a content summary of each Web database in the form of topic coverage, obtains the topics of the user query using selected Web databases, and finally selects Web databases according to the query topics and the topic coverage distribution matrix. Experiments on real Web databases show that this approach not only achieves high recall, but also effectively reduces the cost of building database content summaries.
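The final selection step described above can be sketched as a ranking by the dot product of the query's topic vector with each database's topic coverage vector. The topics, coverage values and database names below are invented for illustration:

```python
# Sketch of topic-distribution-based database selection: score each database
# by query-topic / topic-coverage dot product, then rank. All values invented.

topic_coverage = {
    # database -> coverage over topics (movies, health, travel)
    "db_movies": [0.8, 0.1, 0.1],
    "db_health": [0.05, 0.9, 0.05],
    "db_travel": [0.2, 0.1, 0.7],
}

def select_databases(query_topics, coverage, top_k=2):
    """query_topics: weight per topic; return the top_k databases by score."""
    scores = {
        db: sum(q * c for q, c in zip(query_topics, vec))
        for db, vec in coverage.items()
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

# A query mostly about health, somewhat about travel:
print(select_databases([0.1, 0.7, 0.2], topic_coverage))
```

Routing each query to only the top-scoring databases is what lets such systems avoid probing every Deep Web source while keeping recall high.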

  5. Household Health Monitoring System Based on Access Database

    Institute of Scientific and Technical Information of China (English)

    孙晶晶; 吴效明

    2012-01-01

Objective To design a family-oriented intelligent health monitoring system that detects and records the user's health status and provides long-term, continuous health monitoring. Methods The user's physiological information (ECG, pulse, oxygen saturation, respiration, blood pressure, body temperature, height and weight) was acquired periodically with wearable detection technology. Combined with the user's daily habits and information on disease and sub-health conditions, the system evaluates the user's health status and gives medical advice according to the results. An Access database is used to store and manage all the information gathered during monitoring and to maintain each user's personal health file. The application was developed on the Visual C++ 6.0 platform, and ADO database technology was used to access the database. Results The system realizes health monitoring through man-machine interaction with the software, and is adaptable, secure, stable and user-friendly. Conclusion The system records the user's physiological information, daily habits, and disease and sub-health conditions, and advises the user to take precautions when potential danger is found, which will help improve people's health and promote health care reform. [Chinese Medical Equipment Journal, 2011, 33(3): 21-24]

  6. Scopus database: a review.

    Science.gov (United States)

    Burnham, Judy F

    2006-03-08

The Scopus database provides access to STM journal articles and the references included in those articles, allowing the searcher to search both forward and backward in time. The database can be used for collection development as well as for research. This review provides information on the key points of the database and compares it to Web of Science. Neither database is all-inclusive; rather, they complement each other. If a library can only afford one, the choice must be based on institutional needs.

  7. Microsoft Access Database Assists Data Processing for Physical Chemistry Experiments

    Institute of Scientific and Technical Information of China (English)

    王松涛; 任庆云; 李玉环

    2014-01-01

The Microsoft Access database is a component of the Microsoft Office suite. A physical chemistry experimental data processing program, developed in Visual Basic on top of the Microsoft Access database, provides functions for processing experimental data, plotting and printing as well as for storage and management. It ensures that experimental data are authentic and reliable, and facilitates their management, analysis and statistical treatment, thereby improving the quality of experimental teaching.

  8. Assessment of LTE Wireless Access for Monitoring of Energy Distribution in the Smart Grid

    DEFF Research Database (Denmark)

    Madueño, Germán Corrales; Nielsen, Jimmy Jessen; Min Kim, Dong;

    2016-01-01

    ) assigning RAOs more frequently sometimes worsens performance; and 2) the additional signaling that follows the ARP has very large impact on the capacity in terms of the number of supported devices; we observed a reduction in the capacity by almost a factor of 3. This suggests that a lightweight access...

  9. Distributed Algorithms for Learning and Cognitive Medium Access with Logarithmic Regret

    Science.gov (United States)

    2010-06-08

    Sadler, “A Survey of Dynamic Spectrum Access,” IEEE Signal Proc. Mag., vol. 24, no. 3, pp. 79–89, 2007. [3] N. Cesa-Bianchi and G. Lugosi, Prediction...1995. [11] P. Auer, N. Cesa-Bianchi, and P. Fischer, “Finite-time Analysis of the Multiarmed Bandit Problem,” Machine Learning, vol. 47, no. 2, pp

  10. Studies on Multiuser Access to a Library CD-ROM Database via a Campus Network

    Directory of Open Access Journals (Sweden)

    Ruey-shun Chen

    1992-06-01

    Full Text Available The library CD-ROM, with its enormous storage, retrieval capabilities and reasonable price, has been gradually replacing some of its printed counterparts. One of the greatest limitations of a stand-alone CD-ROM workstation, however, is that only one user can access the CD-ROM database at a time. This paper proposes a new method to solve this problem: personal computers, connected through the standard Ethernet network, the high-speed FDDI fiber network and the standard TCP/IP protocol, access the library CD-ROM database, forming a practical campus CD-ROM network system. Its advantages are that it reduces redundant CD-ROM purchases, reduces the damage caused by physical handling, and allows multiple users to access the same CD-ROM disc simultaneously.

  11. A Hybrid Networking Model for the Access Layer of the Communication Network for Distribution in Smart Grid

    Directory of Open Access Journals (Sweden)

    Wang Hao

    2016-01-01

    Full Text Available The access layer of the communication network for distribution is an important link in the automation of the smart distribution power grid. In the current access layer in Chinese power grid systems, several communication methods, such as optical fiber, medium-voltage carrier communication, 1.8 GHz TD-LTE power private wireless network, 230 MHz TD-LTE power private wireless network and public wireless network, are constructed and run simultaneously in the same power supply area. This traditional networking model causes repeated construction as well as operation and maintenance difficulties in the communication network of the power grid. On the basis of a detailed analysis of the radio link budget of the TD-LTE power private wireless network at the two frequencies, this paper presents a hybrid networking model combining multiple communication methods, which draws a clear boundary between the methods based on the isoline of equal signal strength of the TD-LTE power private wireless network and thus optimizes the communication resources for distribution.

  12. Spatial Analysis of the Distribution, Risk Factors and Access to Medical Resources of Patients with Hepatitis B in Shenzhen, China

    Directory of Open Access Journals (Sweden)

    Yuliang Xi

    2014-11-01

    Full Text Available Considering the high morbidity of hepatitis B in China, many epidemiological studies based on classic medical statistical analysis have been started, but they lack spatial information. However, spatial information such as the spatial distribution, autocorrelation and risk factors of the disease is of great help in studying patients with hepatitis B. This study examined 2851 cases of hepatitis B that were hospitalized in Shenzhen in 2010 and studied the spatial distribution, risk factors and spatial access to health services using spatial interpolation, Pearson correlation analysis and the improved two-step floating catchment area method. The results showed that the spatial distribution of hepatitis B, along with risk factors as well as spatial access to the regional medical resources, was uneven and mainly concentrated in the south and southwest of Shenzhen in 2010. In addition, the distribution characteristics of hepatitis B revealed a positive correlation between four types of service establishments and risk factors for the disease. The Pearson correlation coefficients are 0.566, 0.515, 0.626 and 0.538 for bath centres, beauty salons, massage parlours and pedicure parlours, respectively (p < 0.05). Additionally, the allocation of medical resources for hepatitis B is adequate, as most patients could be treated at nearby hospitals.
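
    The improved two-step floating catchment area method used in the study builds on the basic 2SFCA procedure: first compute a supply-to-demand ratio within each provider's catchment, then sum those ratios over all providers reachable from each population site. A minimal sketch of the basic (unweighted) form, with hypothetical hospitals and population sites rather than the paper's data, might look like this:

```python
def two_step_fca(supply, demand, dist, d0):
    """Basic 2SFCA. supply/demand are dicts keyed by site id,
    dist[(i, j)] is the travel cost from demand site i to supply
    site j, and d0 is the catchment radius. Returns an
    accessibility score for each demand site."""
    # Step 1: provider-to-population ratio R_j within each catchment
    R = {}
    for j, s in supply.items():
        pop = sum(p for i, p in demand.items() if dist[(i, j)] <= d0)
        R[j] = s / pop if pop else 0.0
    # Step 2: sum the ratios of all providers reachable from each site
    return {i: sum(R[j] for j in supply if dist[(i, j)] <= d0)
            for i in demand}

# Toy example: two hospitals, three population sites (all hypothetical)
supply = {"H1": 10, "H2": 5}
demand = {"A": 100, "B": 200, "C": 50}
dist = {("A", "H1"): 1, ("A", "H2"): 9, ("B", "H1"): 2, ("B", "H2"): 2,
        ("C", "H1"): 9, ("C", "H2"): 3}
acc = two_step_fca(supply, demand, dist, d0=5)
```

    Site B reaches both hospitals, so its score is the sum of both catchment ratios, while A and C each reach only one.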

  13. Biological Databases

    Directory of Open Access Journals (Sweden)

    Kaviena Baskaran

    2013-12-01

    Full Text Available Biology has entered a new era in which information is distributed through databases, and these collections have become a primary channel for publishing information. Such data publishing is done through Internet Gopher, where information resources are offered easily and affordably by powerful research tools. What matters most now is the development of high-quality, professionally operated electronic data publishing sites. To enhance this service, appropriate editorial policies for electronic data publishing have been established, and the editors of articles shoulder the responsibility.

  14. Distribution of Ankle-Brachial Index among Inpatients with Cardiovascular Disease: Analysis Using the Kumamoto University Hospital Medical Database

    Science.gov (United States)

    Soejima, Hirofumi; Kojima, Sunao; Kaikita, Koichi; Yamamuro, Megumi; Izumiya, Yasuhiro; Tsujita, Kenichi; Yamamoto, Eiichiro; Tanaka, Tomoko; Sugamura, Koichi; Arima, Yuichiro; Sakamoto, Kenji; Akasaka, Tomonori; Tabata, Noriaki; Sueta, Daisuke; Miyoshi, Izuru; Usami, Makiko; Ogawa, Hisao

    2016-01-01

    Objective: To describe the distribution of ankle-brachial index (ABI) among Japanese cardiovascular inpatients and to explore risk factors of peripheral arterial disease (PAD) associated with ABI ≤0.9. Materials and Methods: This study was a retrospective analysis using clinical record databases of patients with cardiovascular disease admitted to the Department of Cardiovascular Medicine, Kumamoto University Hospital between 2007 and 2014. Results: Of 3639 patients included in the analysis, male patients accounted for 62.1% and the mean age of patients was 66.1 years. Ischemic heart disease (IHD) was observed in 49.1%. ABI ≤0.9 was observed in 11.3% of all patients, 14.1% in the IHD group and 8.5% in the non-IHD group. Age of ≥65 years (odds ratio [OR]: 2.93, 95% confidence interval [CI]: 2.22–3.86), current smoking (OR: 2.28, 95%CI:1.71–3.04), diabetes (OR: 2.15, 95%CI:1.71–2.71), hypertension (OR: 1.42, 95%CI:1.12–1.81) and chronic kidney disease (OR: 2.52, 95%CI:1.82–3.48) were significantly associated factors with ABI ≤0.9. Conclusions: This study suggests that PAD is prevalent even in patients without IHD. Active management of risk factors, early detection of PAD based on ABI, and therapeutic intervention could be effective in preventing future cardiovascular events or death. PMID:27087869

  15. MULTI-AGENT CO-ORDINATION IN DISTRIBUTED E-LEARNING ENVIRONMENTS: PROVIDING ACCESS PERMISSIONS

    Directory of Open Access Journals (Sweden)

    Shantha Visalakshi. U

    2013-04-01

    Full Text Available E-Learning, the new technology supporting education and training, has recently been gaining a lot of attention. Content retrieval in e-learning refers to the way learning content is provided through an electronic medium. It is an effective Web-based learning paradigm in which agents can each be assigned unique responsibilities to handle content retrieval by various users. An agent-based system can manage the information stored in the e-learning environment, accessing it and granting access permissions. Each task is carried out by an autonomous agent, and such agents are grouped to form a multi-agent system. Existing architectures do not consider enhanced security measures, treating security as merely one agent among the others. In the proposed architecture, security is enforced at the network level and wraps all the other agents, providing enhanced protection.

  16. Subsidized optimal ART for HIV-positive temporary residents of Australia improves virological outcomes: results from the Australian HIV Observational Database Temporary Residents Access Study

    Directory of Open Access Journals (Sweden)

    Kathy Petoumenos

    2015-02-01

    Full Text Available Introduction: HIV-positive (HIV+) temporary residents living legally in Australia are unable to access government-subsidized antiretroviral treatment (ART), which is provided via Medicare to Australian citizens and permanent residents. Currently, no information is being systematically collected on non-Medicare-eligible HIV+ patients in Australia. The objectives of this study are to describe the population recruited to the Australian HIV Observational Database (AHOD) Temporary Residents Access Study (ATRAS) and to determine the short- and long-term outcomes of receiving (subsidized) optimal ART and the impact on onward HIV transmission. Methods: ATRAS was established in 2011. Eligible patients were recruited via the AHOD network. Key HIV-related characteristics were recorded at baseline and prospectively. Additional visa-related information was also recorded at baseline and updated annually. Descriptive statistics were used to describe the ATRAS cohort in terms of visa status by key demographic characteristics, including sex, region of birth, and HIV disease status. CD4 cell count (mean and SD) and the proportion with undetectable (<50 copies/ml) HIV viral load are reported at baseline and at 6 and 12 months of follow-up. We also estimate the proportional reduction of onward HIV transmission based on the reduction in the proportion of people with detectable HIV viral load. Results: A total of 180 patients were recruited to ATRAS by June 2012, and by July 2013, 39 patients no longer required ART via ATRAS, 35 of whom became eligible for Medicare-funded medication. At enrolment, 63% of ATRAS patients were receiving ART from alternative sources, 47% had an undetectable HIV viral load (<50 copies/ml) and the median CD4 cell count was 343 cells/µl (IQR: 222–479). At 12 months of follow-up, 85% had an undetectable viral load. We estimated a 75% reduction in the risk of onward HIV transmission with the improved rate of undetectable viral load. Conclusions: The

  17. Geographic distribution of need and access to health care in rural population: an ecological study in Iran

    Directory of Open Access Journals (Sweden)

    Najafi Behzad

    2011-09-01

    Full Text Available Introduction: Equity in access to and utilization of health services is a common goal of policy-makers in most countries. The current study aimed to evaluate the distribution of need for and access to health care services among Iran's rural population between 2006 and 2009. Methods: Census data on the population's characteristics in each province were obtained from the Statistical Centre of Iran and the National Organization for Civil Registration. Data about the Rural Health Houses (RHHs) were obtained from the Ministry of Health. The Health House-to-rural-population ratio (RHP), crude birth rate (CBR) and crude mortality rate (CMR) in the rural population were calculated in order to compare their distribution among the provinces. Lorenz curves of RHHs, CMR and CBR were plotted, and their decile ratio, Gini Index and Index of Dissimilarity were calculated. Moreover, Spearman rank-order correlation was used to examine the relation between RHHs and CMR and CBR. Results: There were substantial differences in RHHs, CMR and CBR across the provinces. CMR and CBR moved toward more equal distributions between 2006 and 2009, while the inverse trend was seen for RHHs. Excluding three provinces with marked changes in data between 2006 and 2009 as outliers did not change the observed trends. Moreover, there was a significant positive relationship between CMR and RHP in 2009 and a significant negative association between CBR and RHP in 2006 and 2009. When the three provinces with outliers were excluded, these significant associations disappeared. Conclusion: The results showed significant variations in the distribution of RHHs, CMR and CBR across the country. Moreover, the distribution of RHHs did not reflect the need for health care in terms of CMR and CBR in the study period.
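
    The Gini Index used in the study summarizes how far a Lorenz curve departs from perfect equality (0 means a perfectly equal distribution, values near 1 mean extreme concentration). A minimal sketch of the standard trapezoid computation (a generic formulation, not necessarily the authors' exact procedure) is:

```python
def gini(values):
    """Gini index of a non-negative distribution via the area
    under the Lorenz curve (trapezoid rule)."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if total == 0:
        return 0.0
    cum = 0.0
    lorenz_area = 0.0
    for x in xs:
        prev = cum
        cum += x / total                      # cumulative share of the total
        lorenz_area += (prev + cum) / (2 * n)  # trapezoid under the Lorenz curve
    return 1 - 2 * lorenz_area

equal = gini([1, 1, 1, 1])    # perfectly equal allocation -> 0.0
skewed = gini([0, 0, 0, 4])   # everything held by one province -> 0.75
```

    For a single holder among n units, the index equals (n − 1)/n, which is why the four-unit skewed example yields 0.75.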

  18. Comparative Analysis of CTF and Trace Thermal-Hydraulic Codes Using OECD/NRC PSBT Benchmark Void Distribution Database

    OpenAIRE

    2013-01-01

    The international OECD/NRC PSBT benchmark has been established to provide a test bed for assessing the capabilities of thermal-hydraulic codes and to encourage advancement in the analysis of fluid flow in rod bundles. The benchmark was based on one of the most valuable databases identified for the thermal-hydraulics modeling developed by NUPEC, Japan. The database includes void fraction and departure from nucleate boiling measurements in a representative PWR fuel assembly. On behalf of the be...

  19. CracidMex1: a comprehensive database of global occurrences of cracids (Aves, Galliformes with distribution in Mexico

    Directory of Open Access Journals (Sweden)

    Gonzalo Pinilla-Buitrago

    2014-06-01

    Full Text Available Cracids are among the most vulnerable groups of Neotropical birds. Almost half of the species of this family are included in a conservation risk category. Twelve taxa occur in Mexico, six of which are considered at risk at national level and two are globally endangered. Therefore, it is imperative that high quality, comprehensive, and high-resolution spatial data on the occurrence of these taxa are made available as a valuable tool in the process of defining appropriate management strategies for conservation at a local and global level. We constructed the CracidMex1 database by collating global records of all cracid taxa that occur in Mexico from available electronic databases, museum specimens, publications, “grey literature”, and unpublished records. We generated a database with 23,896 clean, validated, and standardized geographic records. Database quality control was an iterative process that commenced with the consolidation and elimination of duplicate records, followed by the geo-referencing of records when necessary, and their taxonomic and geographic validation using GIS tools and expert knowledge. We followed the geo-referencing protocol proposed by the Mexican National Commission for the Use and Conservation of Biodiversity. We could not estimate the geographic coordinates of 981 records due to inconsistencies or lack of sufficient information in the description of the locality.Given that current records for most of the taxa have some degree of distributional bias, with redundancies at different spatial scales, the CracidMex1 database has allowed us to detect areas where more sampling effort is required to have a better representation of the global spatial occurrence of these cracids. We also found that particular attention needs to be given to taxa identification in those areas where congeners or conspecifics co-occur in order to avoid taxonomic uncertainty. The construction of the CracidMex1 database represents the first

  20. Distributed Multi-Sensor Real-Time Building Environmental Parameters Monitoring System with Remote Data Access

    Directory of Open Access Journals (Sweden)

    Beinarts Ivars

    2014-12-01

    Full Text Available In this paper an advanced monitoring system for multiple environmental parameters is presented. The purpose of the system is the long-term estimation of energy efficiency and sustainability of research test stands built from different building materials. The construction of the test stands and the placement of the main sensors are presented in the first chapter. The data acquisition system includes a real-time interface with the sensors and a data logger that acquires and logs data from all sensors at a fixed rate. The data logging system provides remote access to the processing of the acquired data and periodically saves it to a remote FTP server over an Internet connection. The system architecture and the usage of the sensors are explained in the second chapter. In the third chapter the implementation of the system and the different interfaces to sensors and energy measuring devices are discussed, and several examples of the data logger program are presented. Each data logger reads data from analog and digital channels. Measurements can be displayed directly on a screen using Web access or using data from the FTP server. Graphical results for the measurements and acquired data are presented in selected diagrams in the fourth chapter. The benefits of the developed system are summarized in the conclusion.
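
    The acquire-at-a-fixed-rate-then-push-to-FTP pattern described above can be sketched as follows; the sensor names, sampling period and server details are illustrative assumptions, not the paper's actual configuration:

```python
import csv
import io
import time

def sample_sensors():
    """Placeholder for the real-time sensor interface (hypothetical values)."""
    return {"temp_C": 21.4, "rh_pct": 48.0, "co2_ppm": 610}

def log_session(n_samples, period_s, read=sample_sensors, clock=time.time):
    """Acquire n_samples at a fixed rate and render them as CSV text,
    ready to be pushed to a remote FTP server."""
    buf = io.StringIO()
    writer = csv.writer(buf, lineterminator="\n")
    writer.writerow(["timestamp", "temp_C", "rh_pct", "co2_ppm"])
    for _ in range(n_samples):
        r = read()
        writer.writerow([round(clock(), 3), r["temp_C"], r["rh_pct"], r["co2_ppm"]])
        time.sleep(period_s)  # fixed sampling rate
    return buf.getvalue()

csv_text = log_session(3, period_s=0.01)

# The periodic upload step would use ftplib against the remote server
# (server name and credentials here are stand-ins):
# from ftplib import FTP
# with FTP("ftp.example.org", "user", "password") as ftp:
#     ftp.storbinary("STOR stand1.csv", io.BytesIO(csv_text.encode()))
```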

  1. 40 CFR 1400.5 - Internet access to certain off-site consequence analysis data elements.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 32 2010-07-01 2010-07-01 false Internet access to certain off-site... DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS INFORMATION Public Access § 1400.5 Internet access to certain off... elements in the risk management plan database available on the Internet: (a) The concentration of...

  2. InfoAccess - platform for the distribution of Southern African information

    CSIR Research Space (South Africa)

    McGillivray, R

    1993-01-01

    Full Text Available Local information distribution in electronic form in South Africa is a new and growing industry. Research shows that in any country the needs of users of information are largely for local information and to a lesser extent for overseas or external...

  3. Reputation-based ontology alignment for autonomy and interoperability in distributed access control

    NARCIS (Netherlands)

    Trivellato, Daniel; Spiessens, Fred; Zannone, Nicola; Etalle, Sandro

    2009-01-01

    Vocabulary alignment is a main challenge in distributed access control, as peers should understand each other's policies unambiguously. Ontologies enable mutual understanding among peers by providing a precise semantics to concepts and relationships in a domain. However, due to the distributed nature

  5. hQT*: A Scalable Distributed Data Structure for High-Performance Spatial Access

    NARCIS (Netherlands)

    Karlsson, J.S.

    1998-01-01

    Spatial data storage stresses the capability of conventional DBMSs. We present a scalable distributed data structure, hQT*, which offers support for efficient spatial point and range queries using order-preserving hashing. It is designed to deal with skewed data and extends results obtained with sca

  6. An Algorithm to Compute the Character Access Count Distribution for Pattern Matching Algorithms

    NARCIS (Netherlands)

    Marschall, T.; Rahmann, S.

    2011-01-01

    We propose a framework for the exact probabilistic analysis of window-based pattern matching algorithms, such as Boyer--Moore, Horspool, Backward DAWG Matching, Backward Oracle Matching, and more. In particular, we develop an algorithm that efficiently computes the distribution of a pattern matching
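
    For the window-based algorithms the framework analyzes, the quantity of interest is the number of text characters the algorithm reads. A minimal Horspool implementation instrumented to count those accesses (a simulation sketch; the paper instead computes the exact distribution probabilistically) might look like this:

```python
def horspool_accesses(pattern, text):
    """Horspool search that also counts how many text characters are
    read; over random texts, the distribution of this count is what
    the framework characterizes."""
    m, n = len(pattern), len(text)
    # Shift table: distance from a character's last occurrence
    # (excluding the final position) to the end of the pattern.
    shift = {}
    for k, c in enumerate(pattern[:-1]):
        shift[c] = m - 1 - k
    accesses = 0
    matches = []
    pos = 0
    while pos + m <= n:
        k = m - 1
        while k >= 0:
            accesses += 1                      # one text-character read
            if text[pos + k] != pattern[k]:
                break
            k -= 1
        if k < 0:
            matches.append(pos)
        pos += shift.get(text[pos + m - 1], m)  # default shift: whole pattern
    return matches, accesses

matches, accesses = horspool_accesses("abab", "abracadabrabab")
```

    On this toy input the single match at position 10 is found after 13 character reads; averaging such counts over many random texts approximates the distribution the algorithm in the paper computes exactly.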

  7. A distributed role-based access control model for multi-domain environments

    Institute of Scientific and Technical Information of China (English)

    洪帆; 朱贤; 邢光林

    2006-01-01

    Access control in multi-domain environments is an important question in building coalitions between domains. Based on the RBAC access control model and the concept of secure domains, role delegation and role mapping are proposed, which support third-party authorization. A distributed RBAC model is then presented. Finally, implementation issues are discussed.
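
    The role-mapping idea, in which a foreign domain's roles are translated into local roles so that users from another domain can be authorized without local accounts, can be sketched as follows (a toy model with hypothetical names, not the paper's formalism):

```python
class Domain:
    def __init__(self, name):
        self.name = name
        self.user_roles = {}   # user -> set of local roles
        self.role_perms = {}   # local role -> set of permissions
        self.role_maps = {}    # (foreign domain name, foreign role) -> local role

    def grant(self, user, role):
        self.user_roles.setdefault(user, set()).add(role)

    def allow(self, role, perm):
        self.role_perms.setdefault(role, set()).add(perm)

    def map_role(self, other, foreign_role, local_role):
        self.role_maps[(other.name, foreign_role)] = local_role

    def check(self, user, perm, home=None):
        """Authorize a local user, or a user from another domain whose
        home roles are translated through the role mapping."""
        if home is None:
            roles = self.user_roles.get(user, set())
        else:
            roles = {self.role_maps[(home.name, r)]
                     for r in home.user_roles.get(user, set())
                     if (home.name, r) in self.role_maps}
        return any(perm in self.role_perms.get(r, set()) for r in roles)

a, b = Domain("A"), Domain("B")
a.grant("alice", "researcher")
b.allow("guest_reader", "read_dataset")
b.map_role(a, "researcher", "guest_reader")   # cross-domain role mapping
ok = b.check("alice", "read_dataset", home=a)
denied = b.check("alice", "write_dataset", home=a)
```

    Unmapped foreign roles simply confer no local permissions, so each domain keeps autonomy over what outsiders may do.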

  8. Research on a security access control model for the Web-based database of non-ferrous metal physical & chemical properties

    Institute of Scientific and Technical Information of China (English)

    李尚勇; 谢刚; 俞小花; 周明

    2009-01-01

    According to the access characteristics of the Web-based database of non-ferrous metal physical & chemical properties, this paper analyzes the security issues present in its multi-tier application architecture, such as illegal intrusion, unauthorized access and information replay attacks, and proposes a security access control model adapted to the requirements of this software architecture. The proposed model was tested for both access performance and security; the test results show that it provides good security and stable performance.

  9. A Bayesian Game-Theoretic Approach for Distributed Resource Allocation in Fading Multiple Access Channels

    Directory of Open Access Journals (Sweden)

    Gaoning He

    2010-01-01

    Full Text Available A Bayesian game-theoretic model is developed to design and analyze the resource allocation problem in K-user fading multiple access channels (MACs, where the users are assumed to selfishly maximize their average achievable rates with incomplete information about the fading channel gains. In such a game-theoretic study, the central question is whether a Bayesian equilibrium exists, and if so, whether the network operates efficiently at the equilibrium point. We prove that there exists exactly one Bayesian equilibrium in our game. Furthermore, we study the network sum-rate maximization problem by assuming that the users coordinate according to a symmetric strategy profile. This result also serves as an upper bound for the Bayesian equilibrium. Finally, simulation results are provided to show the network efficiency at the unique Bayesian equilibrium and to compare it with other strategies.

  10. ARTI Refrigerant Database

    Energy Technology Data Exchange (ETDEWEB)

    Calm, J.M. [Calm (James M.), Great Falls, VA (United States)

    1994-05-27

    The Refrigerant Database consolidates and facilitates access to information to assist industry in developing equipment using alternative refrigerants. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern.

  11. Kansas Cartographic Database (KCD)

    Data.gov (United States)

    Kansas Data Access and Support Center — The Kansas Cartographic Database (KCD) is an exact digital representation of selected features from the USGS 7.5 minute topographic map series. Features that are...

  12. Children's Culture Database (CCD)

    DEFF Research Database (Denmark)

    Wanting, Birgit

    a Dialogue-inspired database with documentation, network (individual and institutional profiles) and current news; paper presented at the research seminar "Electronic Access to Fiction", Copenhagen, November 11-13, 1996

  13. Uranium Location Database

    Data.gov (United States)

    U.S. Environmental Protection Agency — A GIS compiled locational database in Microsoft Access of ~15,000 mines with uranium occurrence or production, primarily in the western United States. The metadata...

  14. Medicaid CHIP ESPC Database

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Environmental Scanning and Program Characteristic (ESPC) Database is in a Microsoft (MS) Access format and contains Medicaid and CHIP data, for the 50 states and...

  15. Distributed database storage subsystem design and implementation

    Institute of Scientific and Technical Information of China (English)

    刘晓丹

    2014-01-01

    Through the analysis and study of distributed relational databases, a practical distributed relational database system, CRDB, was developed. The system maximizes the use of network and disk I/O capacity to improve the user experience, and uses the APIs provided by Linux to greatly reduce system overhead.

  16. Database and Expert Systems Applications

    DEFF Research Database (Denmark)

    Viborg Andersen, Kim; Debenham, John; Wagner, Roland

    submissions. The papers are organized in topical sections on workflow automation, database queries, data classification and recommendation systems, information retrieval in multimedia databases, Web applications, implementational aspects of databases, multimedia databases, XML processing, security, XML...... schemata, query evaluation, semantic processing, information retrieval, temporal and spatial databases, querying XML, organisational aspects of databases, natural language processing, ontologies, Web data extraction, semantic Web, data stream management, data extraction, distributed database systems...

  17. Design and Implementation of Secure Access to a Logistics Database Based on Web Services

    Institute of Scientific and Technical Information of China (English)

    高峰

    2014-01-01

    Access security is an important aspect of database system design. Considering the access characteristics of a logistics management database and based on an analysis of its architecture, this paper proposes a security access model for logistics databases based on Web services. The model is divided into a view layer, a business logic layer, an object/relational mapping layer and a data layer, and a corresponding security strategy is designed for the access mechanism of each layer. Finally, the model is evaluated by simulation tests. The test results show that the proposed logistics database security access model provides good security and meets the security requirements of the database system.
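
    One concrete security strategy at the data layer of such a layered model is to bind all user input through parameterized queries, so that a crafted value cannot rewrite the SQL statement. A minimal sketch using Python's sqlite3 module (the shipments table and column names are hypothetical, not taken from the paper):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE shipments (id INTEGER, owner TEXT, status TEXT)")
conn.execute("INSERT INTO shipments VALUES (1, 'acme', 'in transit')")
conn.execute("INSERT INTO shipments VALUES (2, 'globex', 'delivered')")

def shipments_for(owner):
    # Placeholder binding: the driver escapes the value, so a crafted
    # owner string cannot alter the structure of the SQL statement.
    cur = conn.execute("SELECT id, status FROM shipments WHERE owner = ?",
                       (owner,))
    return cur.fetchall()

rows = shipments_for("acme")
hostile = shipments_for("acme' OR '1'='1")  # treated as a literal owner name
```

    The hostile string matches no owner and returns nothing, whereas naive string concatenation would have returned every row.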

  18. Exploring the use of LabSQL database access technology in LabVIEW

    Institute of Scientific and Technical Information of China (English)

    张璐

    2015-01-01

    Compared with traditional programming approaches, LabVIEW is easier to learn and is very widely used. LabVIEW uses a graphical programming language and provides rich library functions and graphical interface components, which effectively shortens the development cycle. However, LabVIEW itself does not have built-in database access capabilities, so auxiliary technologies are needed to access databases. This paper analyzes several common LabVIEW database access methods and describes the LabSQL auxiliary approach in detail, demonstrating the advantages of using LabSQL for database access.

  19. Access 2013 bible

    CERN Document Server

    Alexander, Michael

    2013-01-01

    A comprehensive reference to the updated and new features of Access 2013 As the world's most popular database management tool, Access enables you to organize, present, analyze, and share data as well as build powerful database solutions. However, databases can be complex. That's why you need the expert guidance in this comprehensive reference. Access 2013 Bible helps you gain a solid understanding of database purpose, construction, and application so that whether you're new to Access or looking to upgrade to the 2013 version, this well-rounded resource provides you with a th

  20. HIV Structural Database

    Science.gov (United States)

    SRD 102 HIV Structural Database (Web, free access)   The HIV Protease Structural Database is an archive of experimentally determined 3-D structures of Human Immunodeficiency Virus 1 (HIV-1), Human Immunodeficiency Virus 2 (HIV-2) and Simian Immunodeficiency Virus (SIV) Proteases and their complexes with inhibitors or products of substrate cleavage.

  1. Structural Ceramics Database

    Science.gov (United States)

    SRD 30 NIST Structural Ceramics Database (Web, free access)   The NIST Structural Ceramics Database (WebSCD) provides evaluated materials property data for a wide range of advanced ceramics known variously as structural ceramics, engineering ceramics, and fine ceramics.

  2. World Database of Happiness

    NARCIS (Netherlands)

    R. Veenhoven (Ruut)

    1995-01-01

    textabstractABSTRACT The World Database of Happiness is an ongoing register of research on subjective appreciation of life. Its purpose is to make the wealth of scattered findings accessible, and to create a basis for further meta-analytic studies. The database involves four sections:
    1.

  3. Biological Macromolecule Crystallization Database

    Science.gov (United States)

    SRD 21 Biological Macromolecule Crystallization Database (Web, free access)   The Biological Macromolecule Crystallization Database and NASA Archive for Protein Crystal Growth Data (BMCD) contains the conditions reported for the crystallization of proteins and nucleic acids used in X-ray structure determinations and archives the results of microgravity macromolecule crystallization studies.

  6. ITS-90 Thermocouple Database

    Science.gov (United States)

    SRD 60 NIST ITS-90 Thermocouple Database (Web, free access)   Web version of Standard Reference Database 60 and NIST Monograph 175. The database gives temperature -- electromotive force (emf) reference functions and tables for the letter-designated thermocouple types B, E, J, K, N, R, S and T. These reference functions have been adopted as standards by the American Society for Testing and Materials (ASTM) and the International Electrotechnical Commission (IEC).

  7. Specialist Bibliographic Databases

    OpenAIRE

    Gasparyan, Armen Yuri; Yessirkepov, Marlen; Voronov, Alexander A.; Trukhachev, Vladimir I.; Kostyukova, Elena I.; Gerasimov, Alexey N.; Kitas, George D.

    2016-01-01

    Specialist bibliographic databases offer essential online tools for researchers and authors who work on specific subjects and perform comprehensive and systematic syntheses of evidence. This article presents examples of the established specialist databases, which may be of interest to those engaged in multidisciplinary science communication. Access to most specialist databases is through subscription schemes and membership in professional associations. Several aggregators of information and d...

  8. The Bias-Corrected Taxonomic Distribution of Mission-Accessible Small Near-Earth Objects

    Science.gov (United States)

    Hinkle, Mary L.; Moskovitz, Nicholas; Trilling, David; Binzel, Richard P.; Thomas, Cristina; Christensen, Eric; DeMeo, Francesca; Person, Michael J.; Polishook, David; Willman, Mark

    2015-11-01

    Although they are thought to compose the majority of the Near-Earth object (NEO) population, the small (d < […] GMOS at Gemini North & South observatories as well as the DeVeny spectrograph at Lowell Observatory's Discovery Channel Telescope. Archival data of 43 objects from the MIT-UH-IRTF Joint Campaign for NEO Spectral Reconnaissance (PI R. Binzel) were also used. Taxonomic classifications were obtained by fitting our spectra to the mean reflectance spectra of the Bus asteroid taxonomy (Bus & Binzel 2002). Small NEAs are the likely progenitors of meteorites; an improved understanding of the abundance of meteorite parent body types in the NEO population improves understanding of how the two populations are related, as well as of the biases Earth's atmosphere imposes upon the meteorite collection. We present classifications for these objects, as well as results for the debiased distribution of taxa (as a proxy for composition) as a function of object size, and compare to the observed fractions of ordinary chondrite meteorites and asteroids with d > 1 km. Amongst the smallest NEOs we find an unexpected distribution of taxonomic types that differs from both large NEOs and meteorites. We acknowledge funding support from NASA NEOO grant number NNX14AN82G.

  9. A Web-based multi-database system supporting distributed collaborative management and sharing of microarray experiment information.

    Science.gov (United States)

    Burgarella, Sarah; Cattaneo, Dario; Masseroli, Marco

    2006-01-01

    We developed MicroGen, a multi-database Web-based system for managing all the information characterizing spotted microarray experiments. It supports information gathering and storage according to the Minimum Information About Microarray Experiments (MIAME) standard. It also allows easy sharing of information and data among all the multidisciplinary actors involved in spotted microarray experiments.

  10. High-Resolution Spatial Distribution and Estimation of Access to Improved Sanitation in Kenya.

    Directory of Open Access Journals (Sweden)

    Peng Jia

    Full Text Available Access to sanitation facilities is imperative in reducing the risk of multiple adverse health outcomes. A distinct disparity in sanitation exists among different wealth levels in many low-income countries, which may hinder the progress across each of the Millennium Development Goals. The surveyed households in 397 clusters from 2008-2009 Kenya Demographic and Health Surveys were divided into five wealth quintiles based on their national asset scores. A series of spatial analysis methods including excess risk, local spatial autocorrelation, and spatial interpolation were applied to observe disparities in coverage of improved sanitation among different wealth categories. The total number of the population with improved sanitation was estimated by interpolating, time-adjusting, and multiplying the surveyed coverage rates by high-resolution population grids. A comparison was then made with the annual estimates from the United Nations Population Division and the World Health Organization/United Nations Children's Fund Joint Monitoring Program for Water Supply and Sanitation. The Empirical Bayesian Kriging interpolation produced minimal root mean squared error for all clusters and five quintiles while predicting the raw and spatial coverage rates of improved sanitation. The coverage in southern regions was generally higher than in the north and east, and the coverage in the south decreased from Nairobi in all directions, while Nyanza and North Eastern Province had relatively poor coverage. The general clustering trend of high and low sanitation improvement among surveyed clusters was confirmed after spatial smoothing. There exists an apparent disparity in sanitation among different wealth categories across Kenya, and spatially smoothed coverage rates resulted in a closer estimation of the available statistics than raw coverage rates. Future intervention activities need to be tailored for both different wealth categories and nationally where there are areas of

  11. Access to pharmaceutical products in six European countries – analysis of different pharmaceutical distribution systems

    Directory of Open Access Journals (Sweden)

    Evelyn Walter

    2012-03-01

    Full Text Available OBJECTIVES: The aim of the study was to draw a comprehensive picture of the pharmaceutical wholesale sector, outlining its socio-economic importance compared to different distribution systems such as short-line wholesaling, direct sales from manufacturers, Reduced Wholesale Arrangements (RWA) and Direct to Pharmacy (DTP) arrangements. Its role is considered from an economic, effectiveness and, most importantly, a public health viewpoint with qualitative and quantitative methods, focusing on France, Germany, Italy, the Netherlands, Spain and the UK. METHODS: First, data were sourced from annual GIRP and IMS-Health statistics; second, a systematic literature search verified the empirical findings; third, an online questionnaire was directed to pharmacies. Further data were sourced from a questionnaire addressed to GIRP full-member associations and wholesale companies (return rate 86%). RESULTS: On a weighted average, pharmaceutical full-line wholesalers in the observed countries alone pre-finance the entire medicine market with € 10.2 bn over a period of 41 days and secure the cash-flow of the social insurers (Germany: € 2.60 bn for 38 days; Italy: € 2.27 bn for 68 days; the UK: € 1.48 bn for 36 days; France: € 1.28 bn for 22 days; Spain: € 969.76 m for 27 days; the Netherlands: € 399.09 m for 30 days on average). On average, pharmaceutical full-line wholesalers bundle the products of 18.28 manufacturers per delivery. Process costs would increase by € 164,922.43 to € 171,510.06 per year if there were no pharmaceutical full-line wholesalers. These additional costs would have to be paid by manufacturers, pharmacies and, finally, by patients. Regarding satisfaction with the different distribution models, the results of the online questionnaire show that pharmacists in the observed countries are very satisfied with distribution through their pharmaceutical full-line wholesalers. CONCLUSIONS: The study showed that

  12. Optimal Configuration of Fault-Tolerance Parameters for Distributed Server Access

    DEFF Research Database (Denmark)

    Daidone, Alessandro; Renier, Thibault; Bondavalli, Andrea

    2013-01-01

    Server replication is a common fault-tolerance strategy to improve transaction dependability for services in communications networks. In distributed architectures, fault-diagnosis and recovery are implemented via the interaction of the server replicas with the clients and other entities such as enhanced name servers. Such architectures provide an increased number of redundancy configuration choices. The influence of a (wide area) network connection can be quite significant and induce trade-offs between dependability and user-perceived performance. This paper develops a quantitative stochastic […] in replicated server architectures. In order to obtain insight into the system behaviour, a set of relevant environment parameters and controllable fault-tolerance parameters are chosen and the dependability/performance trade-off is evaluated.

  13. The Coral Reef Temperature Anomaly Database (CoRTAD) - Global, 4 km, Sea Surface Temperature and Related Thermal Stress Metrics for 1985-2005 (NCEI Accession 0044419)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Coral Reef Temperature Anomaly Database (CoRTAD) is a collection of sea surface temperature (SST) and related thermal stress metrics, developed specifically for...

  14. A database on flash flood events in Campania, southern Italy, with an evaluation of their spatial and temporal distribution

    Science.gov (United States)

    Vennari, Carmela; Parise, Mario; Santangelo, Nicoletta; Santo, Antonio

    2016-11-01

    This study presents an historical database of flash flood events in the Campania region of southern Italy. The study focuses on small catchments characterized by intermittent flow, generally occurring during and after heavy rainstorms, which can be hydrologically defined as small Mediterranean catchments. As the outlet zones of these catchments (consisting mainly of alluvial fans or fan deltas) are highly urbanized in Campania, the population living in the delivery areas is exposed to high risk. Detailed scrutiny and critical analysis of the existing literature, and of the data inventory available, allowed us to build a robust database consisting of about 500 events from 1540 to 2015, which is continuously updated. Since this study is the first step of a longer project to perform a hazard analysis, information about time and site of occurrence is known for all events. As for the hazard analysis envisaged, collecting information about past events could provide information on future events, in terms of damage and also spatial and temporal occurrence. After introducing the issue of flash floods in Italy we then describe the geological and geomorphological settings of the study area. The database is then presented, illustrating the methodology used in collecting information and its general structure. The collected data are then discussed and the statistical data analysis presented.

  15. KAPPA: A Package for Synthesis of optically thin spectra for the non-Maxwellian kappa-distributions based on the CHIANTI database

    CERN Document Server

    Dzifčáková, Elena; Kotrč, Pavel; Fárník, František; Zemanová, Alena

    2015-01-01

    The non-Maxwellian $\kappa$-distributions have been detected in the solar transition region and flares. These distributions are characterized by a high-energy tail and a near-Maxwellian core and are known to have significant impact on the resulting optically thin spectra arising from collisionally dominated astrophysical plasmas. We developed the KAPPA package (http://kappa.asu.cas.cz) for synthesis of such line and continuum spectra. The package is based on the freely available CHIANTI database and software, and can be used in a similar manner. Ionization and recombination rates together with the ionization equilibria are provided for a range of $\kappa$ values. Distribution-averaged collision strengths for excitation are obtained by an approximate method for all transitions in all ions available within CHIANTI. The validity of this approximate method is tested by comparison with direct calculations. Typical precisions of better than 5% are found, with all cases being within 10%. Tools for calculation of syn...
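
For reference, the $\kappa$-distribution of electron energies $E$ named in this record is commonly written in the following standard form (transcribed here for context, not quoted from the paper; $k_\mathrm{B}$ is the Boltzmann constant and $T$ the temperature):

```latex
f(E)\,\mathrm{d}E
  = A_\kappa\,\frac{2}{\sqrt{\pi}\,(k_\mathrm{B}T)^{3/2}}\,
    \sqrt{E}\,\left(1+\frac{E}{(\kappa-3/2)\,k_\mathrm{B}T}\right)^{-(\kappa+1)}\mathrm{d}E,
\qquad
A_\kappa = \frac{\Gamma(\kappa+1)}{\Gamma(\kappa-1/2)\,(\kappa-3/2)^{3/2}}
```

In the limit $\kappa\to\infty$ this reduces to the Maxwellian distribution, while small $\kappa$ strengthens the high-energy tail.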

  16. Framework for Deploying Client/Server Distributed Database System for effective Human Resource Information Management Systems in Imo State Civil Service of Nigeria

    Directory of Open Access Journals (Sweden)

    Josiah Ahaiwe

    2012-08-01

    Full Text Available The information system is an integrated system that holds the financial and personnel records of persons working in the various branches of the Imo state civil service. The purpose is to harmonize operations, reduce or if possible eliminate redundancy, and control the introduction of “ghost workers” and fraud in pension management. In this research work, an attempt is made to design a framework for deploying a client/server distributed database system for a human resource information management system, with a scope on the Imo state civil service in Nigeria. The system consists of a relational database of personnel variables which could be shared by various levels of management in all the ministries and their branches located all over the state. The server is expected to be hosted in the accountant general's office. The system is capable of handling recruitment and promotion issues, training, monthly remunerations, pension and gratuity issues, employment history, etc.

  17. The Database State Machine Approach

    OpenAIRE

    1999-01-01

    Database replication protocols have historically been built on top of distributed database systems, and have consequently been designed and implemented using distributed transactional mechanisms, such as atomic commitment. We present the Database State Machine approach, a new way to deal with database replication in a cluster of servers. This approach relies on a powerful atomic broadcast primitive to propagate transactions between database servers, and alleviates the need for atomic comm...
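
The core mechanism this record describes can be illustrated with a minimal sketch (in Python, with invented names; a real implementation must also agree on delivery order despite failures): an atomic broadcast hands every replica the same totally ordered stream of transactions, and each replica applies them deterministically, so replicas converge without an atomic commitment protocol.

```python
# Sketch of the state-machine replication idea behind the Database State
# Machine approach: an atomic broadcast delivers transactions to every
# replica in the same total order, and each replica applies them
# deterministically. All names here are illustrative, not from the paper.

class Replica:
    def __init__(self, name):
        self.name = name
        self.db = {}          # simplistic key-value "database"

    def apply(self, txn):
        # txn is a list of (key, value) writes; applying the same
        # ordered log on every replica yields identical states.
        for key, value in txn:
            self.db[key] = value

def atomic_broadcast(replicas, txn_log):
    # A real protocol must establish this total order despite failures;
    # here the shared list already fixes it.
    for txn in txn_log:
        for r in replicas:
            r.apply(txn)

replicas = [Replica("r1"), Replica("r2"), Replica("r3")]
log = [[("x", 1)], [("y", 2)], [("x", 3)]]
atomic_broadcast(replicas, log)
assert all(r.db == {"x": 3, "y": 2} for r in replicas)
```

Because every replica sees the same ordered log, no per-transaction voting round (atomic commitment) is needed; ordering is the only agreement problem left.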

  18. Popular Content Distribution in CR-VANETs with Joint Spectrum Sensing and Channel Access using Coalitional Games

    Directory of Open Access Journals (Sweden)

    Tianyu Wang

    2014-07-01

    Full Text Available Driven by both safety concerns and commercial interests, popular content distribution (PCD), as one of the key services offered by vehicular networks, has recently received considerable attention. In this paper, we address the PCD problem in highway scenarios, in which a popular file is distributed to a group of onboard units (OBUs) driving through an area of interest (AoI). Due to high speeds of vehicles and deep fading of vehicle-to-roadside (V2R) channels, the OBUs may not finish downloading the entire file. Consequently, a peer-to-peer (p2p) network should be constructed among the OBUs for completing the file delivery process. Here, we apply the cognitive radio technique for vehicle-to-vehicle communications and propose a cooperative approach based on coalition formation games, which jointly considers the spectrum sensing and channel access performance. Simulation results show that our approach presents a considerable performance improvement compared with the non-cooperative case.

  19. Governance and oversight of researcher access to electronic health data: the role of the Independent Scientific Advisory Committee for MHRA database research, 2006-2015.

    Science.gov (United States)

    Waller, P; Cassell, J A; Saunders, M H; Stevens, R

    2017-03-01

    In order to promote understanding of UK governance and assurance relating to electronic health records research, we present and discuss the role of the Independent Scientific Advisory Committee (ISAC) for MHRA database research in evaluating protocols proposing the use of the Clinical Practice Research Datalink. We describe the development of the Committee's activities between 2006 and 2015, alongside growth in data linkage and wider national electronic health records programmes, including the application and assessment processes, and our approach to undertaking this work. Our model can provide independence, challenge and support to data providers such as the Clinical Practice Research Datalink database, which has been used for well over 1,000 medical research projects. ISAC's role in scientific oversight ensures that feasible and scientifically acceptable plans are in place, while its combined lay and professional membership addresses governance issues in order to protect the integrity of the database and ensure that public confidence is maintained.

  20. Realization Method of Data Flow Among Objects in Access Database

    Institute of Scientific and Technical Information of China (English)

    张未未

    2016-01-01

    At present, Access courses are mostly taught with case-based or project-driven methods. In most of the course projects that students complete after class, however, the result does not truly function as a database application system: the various Access objects are merely listed and displayed, rather than organized according to the business logic that connects them. To address this problem, this paper summarizes methods for realizing data association and data flow among Access objects. These methods are relatively simple and easy to master, and can better help students understand the basic functions of, and relationships among, the Access objects, so that they can design and implement database application systems that are more logical, functional and complete, further improving the teaching effect of case-based and project-driven methods.

  1. NESDIS OSPO Data Access Policy and CRM

    Science.gov (United States)

    Seybold, M. G.; Donoho, N. A.; McNamara, D.; Paquette, J.; Renkevens, T.

    2012-12-01

    The Office of Satellite and Product Operations (OSPO) is the NESDIS office responsible for satellite operations, product generation, and product distribution. Access to and distribution of OSPO data was formally established in a Data Access Policy dated February, 2011. An extension of the data access policy is the OSPO Customer Relationship Management (CRM) Database, which has been in development since 2008 and is reaching a critical level of maturity. This presentation will provide a summary of the data access policy and standard operating procedure (SOP) for handling data access requests. The tangential CRM database will be highlighted including the incident tracking system, reporting and notification capabilities, and the first comprehensive portfolio of NESDIS satellites, instruments, servers, applications, products, user organizations, and user contacts. Select examples of CRM data exploitation will show how OSPO is utilizing the CRM database to more closely satisfy the user community's satellite data needs with new product promotions, as well as new data and imagery distribution methods in OSPO's Environmental Satellite Processing Center (ESPC). In addition, user services and outreach initiatives from the Satellite Products and Services Division will be highlighted.

  2. Access 2010 for dummies

    CERN Document Server

    Ulrich Fuller, Laurie

    2010-01-01

    A friendly, step-by-step guide to the Microsoft Office database application Access may be the least understood and most challenging application in the Microsoft Office suite. This guide is designed to help anyone who lacks experience in creating and managing a database learn to use Access 2010 quickly and easily. In the classic For Dummies tradition, the book provides an education in Access, the interface, and the architecture of a database. It explains the process of building a database, linking information, sharing data, generating reports, and much more.As the Micr

  3. FRED, a Front End for Databases.

    Science.gov (United States)

    Crystal, Maurice I.; Jakobson, Gabriel E.

    1982-01-01

    FRED (a Front End for Databases) was conceived to alleviate data access difficulties posed by the heterogeneous nature of online databases. A hardware/software layer interposed between users and databases, it consists of three subsystems: user-interface, database-interface, and knowledge base. Architectural alternatives for this database machine…

  4. End-User Information-Seeking in the Energy Field: Implications for End-User Access to DOE/RECON Databases.

    Science.gov (United States)

    Case, Donald; And Others

    1986-01-01

    The U.S. Department of Energy (DOE) software research and development project described explores information seeking behavior of end-user energy experts and develops software offering tutorials on front-end use of DOE/RECON databases and active help in performing online searches. Software development is described and results of prototype testing…

  5. Observations of Deep-Sea Coral and Sponge Occurrences from the NOAA National Deep-Sea Coral and Sponge Database, 1842-Present (NCEI Accession 0145037)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NOAA’s Deep-Sea Coral Research and Technology Program (DSC-RTP) compiles a national database of the known locations of deep-sea corals and sponges in U.S....

  6. Design and Implementation of a Unified Access Model for Relational and Non-Relational Databases Based on Hibernate OGM

    Institute of Scientific and Technical Information of China (English)

    李东奎; 鄂海红

    2016-01-01

    Existing relational and non-relational databases each expose their own special-purpose APIs. With the help of the open-source framework Hibernate OGM, a unified framework for storing data in both SQL and NoSQL databases was built, so that reads and writes against both kinds of database follow the same rules within one framework, without consulting additional APIs. JAVA annotations distinguish the database type, JAVA objects carry the data, the Hibernate OGM framework automatically parses the JAVA objects into a dotted format, and the underlying engine writes to the database through wrapped native APIs. The unified access model was tested and validated in a scenario mixing HBase and MySQL. Experiments show that distinguishing database types by JAVA annotation, populating data with JAVA objects, and unifying data storage through Hibernate OGM is a feasible scheme for accessing SQL and NoSQL databases.
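
The pattern this abstract describes (one storage API fronting both a SQL and a NoSQL engine, with the backend chosen declaratively per entity type) can be sketched in Python; the class and method names below are invented for illustration and are not the Hibernate OGM API:

```python
# A loose Python analogue of the unified-access idea: a registry maps
# each entity kind to a backend, so application code saves and loads
# through one facade whether a SQL or a NoSQL store sits underneath.

class SQLBackend:                      # stand-in for e.g. MySQL
    def __init__(self):
        self.rows = {}
    def save(self, key, doc):
        self.rows[key] = dict(doc)
    def load(self, key):
        return self.rows[key]

class NoSQLBackend:                    # stand-in for e.g. HBase
    def __init__(self):
        self.docs = {}
    def save(self, key, doc):
        self.docs[key] = dict(doc)
    def load(self, key):
        return self.docs[key]

class UnifiedStore:
    """One facade; the backend choice is declared once per entity kind,
    playing the role of the JAVA annotation in the paper's design."""
    def __init__(self):
        self.backends = {}
    def register(self, kind, backend):
        self.backends[kind] = backend
    def save(self, kind, key, doc):
        self.backends[kind].save(key, doc)
    def load(self, kind, key):
        return self.backends[kind].load(key)

store = UnifiedStore()
store.register("user", SQLBackend())      # relational data
store.register("event", NoSQLBackend())   # document/column data

store.save("user", 1, {"name": "Li"})
store.save("event", "e9", {"type": "login"})
assert store.load("user", 1) == {"name": "Li"}
assert store.load("event", "e9") == {"type": "login"}
```

Callers never branch on the storage engine; adding a new backend only requires one more `register` call, which is the property the abstract claims for the annotation-driven design.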

  7. Creating a database with Microsoft Access and its clinical application for patients with breast cancer

    Institute of Scientific and Technical Information of China (English)

    张仕义; 吴智勇; 林瑄; 章克毅; 郑海波; 郑春鹏; 李卓毅

    2008-01-01

    Objective: To develop a database program based on Microsoft Access 2003 to save and manage the clinical data of patients with breast cancer, making the data easier to compile and analyze. Methods: Clinical data, including regular follow-up records after therapy, were collected for 1177 breast cancer patients treated surgically between December 1998 and June 2007. The database was designed with Microsoft Access 2003, following the diagnosis and treatment guidelines for breast cancer, and included the creation of tables, forms and queries. Results: The Access database for breast cancer provides a user-friendly interface, reliable data management and network sharing, and supports regular updating of the clinical data. Conclusion: The database not only makes it convenient for clinicians to compile and analyze clinical data, but also minimizes data-entry errors and shortens data-query time.
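
A tables-plus-queries design of this kind can be sketched with Python's built-in sqlite3 module in place of Access; the schema below (patient and follow-up tables joined on a patient id) is a hypothetical simplification for illustration, not the authors' actual schema:

```python
import sqlite3

# Minimal clinical-records schema: one table of patients, one table of
# follow-up visits, and a query joining them (the role a saved Access
# query would play).
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE patient (
                   id INTEGER PRIMARY KEY,
                   name TEXT,
                   surgery_date TEXT)""")
con.execute("""CREATE TABLE follow_up (
                   patient_id INTEGER REFERENCES patient(id),
                   visit_date TEXT,
                   status TEXT)""")

con.execute("INSERT INTO patient VALUES (1, 'A. B.', '2005-03-14')")
con.execute("INSERT INTO follow_up VALUES (1, '2006-03-10', 'disease-free')")

rows = con.execute("""SELECT p.name, f.visit_date, f.status
                      FROM patient p
                      JOIN follow_up f ON f.patient_id = p.id""").fetchall()
print(rows)  # [('A. B.', '2006-03-10', 'disease-free')]
```

Storing follow-up visits in their own table (rather than as extra columns on the patient record) is what lets the database hold an open-ended number of regular updates per patient.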

  8. Watershed Modeling Applications with the Open-Access Modular Distributed Watershed Educational Toolbox (MOD-WET) and Introductory Hydrology Textbook

    Science.gov (United States)

    Huning, L. S.; Margulis, S. A.

    2014-12-01

    Traditionally, introductory hydrology courses focus on hydrologic processes as independent or semi-independent concepts that are ultimately integrated into a watershed model near the end of the term. When an "off-the-shelf" watershed model is introduced in the curriculum, this approach can result in a potential disconnect between process-based hydrology and the inherent interconnectivity of processes within the water cycle. In order to curb this and reduce the learning curve associated with applying hydrologic concepts to complex real-world problems, we developed the open-access Modular Distributed Watershed Educational Toolbox (MOD-WET). The user-friendly, MATLAB-based toolbox contains the same physical equations for hydrological processes (i.e. precipitation, snow, radiation, evaporation, unsaturated flow, infiltration, groundwater, and runoff) that are presented in the companion e-textbook (http://aqua.seas.ucla.edu/margulis_intro_to_hydro_textbook.html) and taught in the classroom. The modular toolbox functions can be used by students to study individual hydrologic processes. These functions are integrated together to form a simple spatially-distributed watershed model, which reinforces a holistic understanding of how hydrologic processes are interconnected and modeled. Therefore when watershed modeling is introduced, students are already familiar with the fundamental building blocks that have been unified in the MOD-WET model. Extensive effort has been placed on the development of a highly modular and well-documented code that can be run on a personal computer within the commonly-used MATLAB environment. 
MOD-WET was designed to: 1) increase the qualitative and quantitative understanding of hydrological processes at the basin-scale and demonstrate how they vary with watershed properties, 2) emphasize applications of hydrologic concepts rather than computer programming, 3) elucidate the underlying physical processes that can often be obscured with a complicated

  9. A database of frequency distributions of energy depositions in small-size targets by electrons and ions.

    Science.gov (United States)

    Nikjoo, H; Uehara, S; Emfietzoglou, D; Pinsky, L

    2011-02-01

    Linear energy transfer (LET) is an average quantity, which cannot display the stochastics of the interactions of radiation tracks in the target volume. For this reason, microdosimetry distributions have been defined to overcome the LET shortcomings. In this paper, model calculations of frequency distributions for energy depositions in nanometre-size targets, diameters 1-100 nm, and for a 1 μm diameter wall-less TEPC, for electrons, protons, alpha particles and carbon ions are reported. Frequency distributions for energy depositions in small-size targets with dimensions similar to those of biological molecules are useful for modelling and calculations of DNA damage. Monte Carlo track structure codes KURBUC and PITS99 were used to generate tracks of primary electrons with energies of 10 eV to 1 MeV, and of ions of 1 keV µm(-1) to 300 MeV µm(-1). The distribution of absolute frequencies of energy depositions in volumes with diameters of 1-100 nm, randomly positioned in unit density water irradiated with 1 Gy of the given radiation, was obtained. Data are presented for the frequency of energy depositions and for microdosimetry quantities including mean lineal energy, dose mean lineal energy, frequency mean specific energy and dose mean specific energy. The modelling and calculations presented in this work are useful for characterisation of the quality of the radiation beam in biophysical studies and in radiation therapy.

  10. Efficient Partitioning of Large Databases without Query Statistics

    Directory of Open Access Journals (Sweden)

    Shahidul Islam KHAN

    2016-11-01

    Full Text Available An efficient way of improving the performance of a database management system is distributed processing. Distribution of data involves fragmentation or partitioning, replication, and allocation processes. Previous research works provided partitioning based on empirical data about the type and frequency of the queries. These solutions are not suitable at the initial stage of a distributed database, as query statistics are not available then. In this paper, I have presented a fragmentation technique, Matrix based Fragmentation (MMF), which can be applied at the initial stage as well as at later stages of distributed databases. Instead of using empirical data, I have developed a matrix, Modified Create, Read, Update and Delete (MCRUD), to partition a large database properly. Allocation of fragments is done simultaneously in my proposed technique. So using MMF, no additional complexity is added for allocating the fragments to the sites of a distributed database, as fragmentation is synchronized with allocation. The performance of a DDBMS can be improved significantly by avoiding frequent remote access and high data transfer among the sites. Results show that the proposed technique can solve the initial partitioning problem of large distributed databases.
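
The matrix-driven idea can be sketched as follows: a loose Python illustration of CRUD-weighted allocation in the spirit of MCRUD, where the weights, site names and attributes are made up for illustration and the paper's actual matrix is more elaborate.

```python
# Hypothetical sketch: weight each (site, attribute) pair by expected
# Create/Read/Update/Delete activity and allocate each attribute
# (fragment) to the site with the highest weighted score, so
# fragmentation and allocation happen in one pass.

WEIGHTS = {"C": 3, "R": 1, "U": 2, "D": 4}   # assumed operation weights

# mcrud[site][attribute] -> expected operation types at that site
mcrud = {
    "site_A": {"salary": "RU", "address": "R"},
    "site_B": {"salary": "R",  "address": "CRUD"},
}

def score(ops):
    # total weighted cost of the operations a site performs on an attribute
    return sum(WEIGHTS[op] for op in ops)

def allocate(mcrud):
    # for every attribute, pick the site with the largest weighted score
    attrs = {a for row in mcrud.values() for a in row}
    return {a: max(mcrud, key=lambda s: score(mcrud[s].get(a, "")))
            for a in attrs}

assert allocate(mcrud) == {"salary": "site_A", "address": "site_B"}
```

Placing each fragment where it is accessed most heavily is what avoids the frequent remote access and data transfer the abstract identifies as the main cost in a DDBMS.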

  11. Cloud Databases: A Paradigm Shift in Databases

    Directory of Open Access Journals (Sweden)

    Indu Arora

    2012-07-01

    Full Text Available Relational databases ruled the Information Technology (IT) industry for almost 40 years. But the last few years have seen sea changes in the way IT is being used and viewed. Stand-alone applications have been replaced with web-based applications, dedicated servers with multiple distributed servers, and dedicated storage with network storage. Cloud computing has become a reality due to its lower cost, scalability and pay-as-you-go model. It is one of the biggest changes in IT after the rise of the World Wide Web. Cloud databases such as Big Table, Sherpa and SimpleDB are becoming popular. They address the limitations of existing relational databases related to scalability, ease of use and dynamic provisioning. Cloud databases are mainly used for data-intensive applications such as data warehousing, data mining and business intelligence. These applications are read-intensive, scalable and elastic in nature. Transactional data management applications such as banking, airline reservation, online e-commerce and supply chain management applications are write-intensive. Databases supporting such applications require ACID (Atomicity, Consistency, Isolation and Durability) properties, but these databases are difficult to deploy in the cloud. The goal of this paper is to review the state of the art in cloud databases and various architectures. It further assesses the challenges in developing cloud databases that meet the user requirements and discusses popularly used cloud databases.

  12. Database of the South China granites: A useful desktop database based on Microsoft Access

    Institute of Scientific and Technical Information of China (English)

    陈希清; 路远发

    2005-01-01

    The database of South China granites, built in Microsoft Access 2000, consists of 24 tables with corresponding forms and a number of queries. It provides convenient management of granite data and allows the data to be referenced and processed by software such as GeoKit and Mapgis. With a friendly interface and simple operation, it is a useful tool for the collection, management and comprehensive analysis of granitoid data.

  13. Health insurance coverage, care accessibility and affordability for adult survivors of childhood cancer: a cross-sectional study of a nationally representative database.

    Science.gov (United States)

    Kuhlthau, Karen A; Nipp, Ryan D; Shui, Amy; Srichankij, Sean; Kirchhoff, Anne C; Galbraith, Alison A; Park, Elyse R

    2016-12-01

    We describe national patterns of health insurance coverage and care accessibility and affordability in a national sample of adult childhood cancer survivors (CCS) compared to adults without cancer. Using data from the 2010-2014 National Health Interview Survey (NHIS), we selected a sample of all CCS age 21 to 65 years old and a 1:3 matched sample of controls without a history of cancer. We examined insurance coverage, care accessibility and affordability in CCS and controls. We tested for differences in the groups in bivariate analyses and multivariable logistic regression models. Of all respondents age 21-65 in the full NHIS sample, 443 (0.35 %) were CCS. Fewer CCS were insured (76.4 %) compared to controls (81.4 %, p = 0.067). Significantly more CCS reported delaying medical care (24.7 vs 13.0 %), needing but not getting medical care in the previous 12 months (20.0 vs 10.0 %), and having trouble paying medical bills (40.3 vs 19.7 %) compared to controls (p < […]) […] health care accessibility and affordability. These analyses support the development of policies to assure that CCS have access to affordable services. Efforts to improve access to high-quality and affordable insurance for CCS may help reduce the gaps in getting medical care and problems with affordability. Health care providers should be aware that such problems exist and should discuss affordability and ability to obtain care with patients.

  14. The Oral Language Archive (OLA): A Digital Audio Database for Foreign Language Study.

    Science.gov (United States)

    Jones, Christopher M.

    1996-01-01

    Describes a project to create a centralized database of digitized sound accessible from distributed personal computers for the study of foreign languages. The system is also comprised of the tools to manage and distribute the sounds, a combination constituting a unique attempt to provide a universal structure to describe, categorize, and listen to…

  15. The distribution of late-Quaternary woody taxa in northern Eurasia: evidence from a new macrofossil database

    Science.gov (United States)

    Binney, Heather A.; Willis, Katherine J.; Edwards, Mary E.; Bhagwat, Shonil A.; Anderson, Patricia M.; Andreev, Andrei A.; Blaauw, Maarten; Damblon, Freddy; Haesaerts, Paul; Kienast, Frank; Kremenetski, Konstantin V.; Krivonogov, Sergey K.; Lozhkin, Anatoly V.; MacDonald, Glen M.; Novenko, Elena Y.; Oksanen, Pirita; Sapelko, Tatiana V.; Väliranta, Minna; Vazhenina, Ludmila

    2009-11-01

    We present a database of late-Quaternary plant macrofossil records for northern Eurasia (from 23° to 180°E and 46° to 76°N) comprising 281 localities, over 2300 samples and over 13,000 individual records. Samples are individually radiocarbon dated or are assigned ages via age models fitted to sequences of calibrated radiocarbon dates within a section. Tree species characteristic of modern northern forests (e.g. Picea, Larix, tree-Betula) are recorded at least intermittently from prior to the last glacial maximum (LGM), through the LGM and Lateglacial, to the Holocene, and some records locate trees close to the limits of the Scandinavian ice sheet, supporting the hypothesis that some taxa persisted in northern refugia during the last glacial cycle. Northern trees show differing spatio-temporal patterns across Siberia: deciduous trees were widespread in the Lateglacial, with individuals occurring across much of their contemporary ranges, while evergreen conifers expanded northwards to their range limits in the Holocene.
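The age-model step mentioned above — assigning ages to undated samples via models fitted to sequences of calibrated radiocarbon dates — can be sketched with the simplest such model, linear interpolation between dated horizons. The depths and ages below are invented; real age models for this database would use more sophisticated fits.

```python
# A minimal age-depth model: linearly interpolate a sample's age from
# bracketing calibrated radiocarbon dates. Tie points are hypothetical
# (depth_cm, age_cal_yr_BP) pairs, not values from the database.
def interpolate_age(depth, tie_points):
    """Return the modeled age at `depth`, given sorted tie points."""
    pts = sorted(tie_points)
    for (d0, a0), (d1, a1) in zip(pts, pts[1:]):
        if d0 <= depth <= d1:
            frac = (depth - d0) / (d1 - d0)
            return a0 + frac * (a1 - a0)
    raise ValueError("depth outside the dated interval")

dates = [(10, 1200), (60, 5200), (110, 11700)]  # invented section
print(interpolate_age(35, dates))  # 3200.0
```

A macrofossil sample from 35 cm in this invented section would thus be assigned a modeled age midway between its two bracketing dates.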

  16. A geographic distribution database of the Neotropical cassava whitefly complex (Hemiptera, Aleyrodidae) and their associated parasitoids and hyperparasitoids (Hymenoptera).

    Science.gov (United States)

    Vásquez-Ordóñez, Aymer Andrés; Hazzi, Nicolas A; Escobar-Prieto, David; Paz-Jojoa, Dario; Parsa, Soroush

    2015-01-01

    Whiteflies (Hemiptera, Aleyrodidae) are represented by more than 1,500 herbivorous species around the world. Some of them are notorious pests of cassava (Manihot esculenta), a primary food crop in the tropics. Particularly destructive is a complex of Neotropical cassava whiteflies whose distribution remains restricted to their native range. Despite their importance, neither their distribution, nor that of their associated parasitoids, is well documented. This paper therefore reports observational and specimen-based occurrence records of Neotropical cassava whiteflies and their associated parasitoids and hyperparasitoids. The dataset consists of 1,311 distribution records documented by the International Center for Tropical Agriculture (CIAT) between 1975 and 2012. The specimens are held at CIAT's Arthropod Reference Collection (CIATARC, Cali, Colombia). Eleven species of whiteflies, 14 species of parasitoids and one species of hyperparasitoids are reported. Approximately 66% of the whitefly records belong to Aleurotrachelus socialis and 16% to Bemisia tuberculata. The parasitoids with most records are Encarsia hispida, Amitus macgowni and Encarsia bellottii for Aleurotrachelus socialis; and Encarsia sophia for Bemisia tuberculata. The complete dataset is available in Darwin Core Archive format via the Global Biodiversity Information Facility (GBIF).
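Since the dataset is published in Darwin Core Archive format, its occurrence core is a delimited text file keyed by standard Darwin Core terms. The sketch below tallies records per species from such a file; the column names are genuine Darwin Core terms, but the rows are invented, not records from the CIAT dataset.

```python
import csv
import io
from collections import Counter

# Invented rows in the shape of a Darwin Core Archive occurrence file;
# scientificName, country, and eventDate are standard Darwin Core terms.
occurrence_txt = """\
scientificName\tcountry\teventDate
Aleurotrachelus socialis\tColombia\t1978-03-04
Aleurotrachelus socialis\tEcuador\t1990-07-19
Bemisia tuberculata\tBrazil\t1985-11-02
"""

reader = csv.DictReader(io.StringIO(occurrence_txt), delimiter="\t")
counts = Counter(row["scientificName"] for row in reader)
print(counts.most_common())
# [('Aleurotrachelus socialis', 2), ('Bemisia tuberculata', 1)]
```

Against the real archive downloaded from GBIF, the same loop would reproduce the proportions the abstract reports (about 66% Aleurotrachelus socialis, 16% Bemisia tuberculata).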

  17. Pro Access 2010 Development

    CERN Document Server

    Collins, Mark

    2011-01-01

Pro Access 2010 Development is a fundamental resource for developing business applications that take advantage of the features of Access 2010 and the many sources of data available to your business. In this book, you'll learn how to build database applications, create Web-based databases, develop macros and Visual Basic for Applications (VBA) tools for Access applications, integrate Access with SharePoint and other business systems, and much more. Using a practical, hands-on approach, this book will take you through all the facets of developing Access-based solutions, such as data modeling, co…

  18. Access 2013 for dummies

    CERN Document Server

    Ulrich Fuller, Laurie

    2013-01-01

The easy guide to Microsoft Access returns with updates on the latest version! Microsoft Access allows you to store, organize, view, analyze, and share data; the new Access 2013 release enables you to build even more powerful, custom database solutions that integrate with the web and enterprise data sources. Access 2013 For Dummies covers all the new features of the latest version of Access and serves as an ideal reference, combining the latest Access features with the basics of building usable databases. You'll learn how to create an app from the Welcome screen, get support…

  19. Querying genomic databases

    Energy Technology Data Exchange (ETDEWEB)

    Baehr, A.; Hagstrom, R.; Joerg, D.; Overbeek, R.

    1991-09-01

    A natural-language interface has been developed that retrieves genomic information by using a simple subset of English. The interface spares the biologist from the task of learning database-specific query languages and computer programming. Currently, the interface deals with the E. coli genome. It can, however, be readily extended and shows promise as a means of easy access to other sequenced genomic databases as well.
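The interface itself is not reproduced in this record, so the following is only a toy sketch of the underlying idea: a restricted English subset is matched against patterns and rewritten into database queries, sparing the biologist from a query language. The patterns, table, and column names are all hypothetical.

```python
import re

# Toy translator from a tiny English subset to SQL. The supported
# sentences, and the genes(name, strand, length) schema they target,
# are invented for illustration; the real interface covered far more.
PATTERNS = [
    (re.compile(r"^show genes on strand (\+|-)$", re.I),
     "SELECT name FROM genes WHERE strand = '{0}'"),
    (re.compile(r"^show genes longer than (\d+)$", re.I),
     "SELECT name FROM genes WHERE length > {0}"),
]

def to_sql(question):
    """Map one sentence of the supported English subset onto SQL."""
    for pattern, template in PATTERNS:
        match = pattern.match(question.strip())
        if match:
            return template.format(*match.groups())
    raise ValueError("question not in the supported English subset")

print(to_sql("show genes longer than 900"))
# SELECT name FROM genes WHERE length > 900
```

Extending such an interface to another sequenced genome, as the abstract suggests, amounts to adding patterns and templates for the new database's schema.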

  20. The Global Terrestrial Network for Permafrost Database: metadata statistics and prospective analysis on future permafrost temperature and active layer depth monitoring site distribution

    Science.gov (United States)

    Biskaborn, B. K.; Lanckman, J.-P.; Lantuit, H.; Elger, K.; Streletskiy, D. A.; Cable, W. L.; Romanovsky, V. E.

    2015-03-01

The Global Terrestrial Network for Permafrost (GTN-P) provides the first dynamic database associated with the Thermal State of Permafrost (TSP) and the Circumpolar Active Layer Monitoring (CALM) programs, which extensively collect permafrost temperature and active layer thickness data from Arctic, Antarctic and Mountain permafrost regions. The purpose of the database is to establish an "early warning system" for the consequences of climate change in permafrost regions and to provide standardized thermal permafrost data to global models. In this paper we perform statistical analysis of the GTN-P metadata aiming to identify the spatial gaps in the GTN-P site distribution in relation to climate-effective environmental parameters. We describe the concept and structure of the Data Management System in regard to user operability, data transfer and data policy. We outline data sources and data processing including quality control strategies. Assessment of the metadata and data quality reveals 63% metadata completeness at active layer sites and 50% metadata completeness for boreholes. Voronoi Tessellation Analysis on the spatial sample distribution of boreholes and active layer measurement sites quantifies the distribution inhomogeneity and provides potential locations of additional permafrost research sites to improve the representativeness of thermal monitoring across areas underlain by permafrost. The depth distribution of the boreholes reveals that 73% are shallower than 25 m and 27% are deeper, reaching a maximum of 1 km depth. Comparison of the GTN-P site distribution with permafrost zones, soil organic carbon contents and vegetation types exhibits different local to regional monitoring situations on maps. Preferential slope orientation at the sites most likely causes a bias in the temperature monitoring and should be taken into account when using the data for global models. The distribution of GTN-P sites within zones of projected temperature change show a high…
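The gap-finding idea behind the Voronoi Tessellation Analysis in this abstract can be approximated in a few lines: scan a candidate grid for the point farthest from every existing monitoring site. This sketch uses invented planar coordinates and plain Euclidean distance; the actual analysis works on the globe with proper geographic distances and true Voronoi cells.

```python
import math

# Approximate spatial-gap search: among candidate grid points, pick the
# one whose nearest monitoring site is farthest away. Site coordinates
# are invented planar positions (km), not real GTN-P locations.
sites = [(0, 0), (10, 0), (0, 10), (10, 10)]

def farthest_gap(sites, xs, ys):
    """Return (point, distance) for the grid point farthest from all sites."""
    best = max(
        ((x, y) for x in xs for y in ys),
        key=lambda p: min(math.dist(p, s) for s in sites),
    )
    return best, min(math.dist(best, s) for s in sites)

grid = range(0, 11)
point, dist = farthest_gap(sites, grid, grid)
print(point, round(dist, 2))  # (5, 5) 7.07
```

The returned point plays the role of a Voronoi vertex: a candidate location where adding a borehole or active layer site would most improve spatial coverage.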