WorldWideScience

Sample records for building stronger databases

  1. Advanced information technology: Building stronger databases

    Energy Technology Data Exchange (ETDEWEB)

    Price, D. [Lawrence Livermore National Lab., CA (United States)]

    1994-12-01

    This paper discusses the attributes of the Advanced Information Technology (AIT) tool set, a database application builder designed at the Lawrence Livermore National Laboratory. AIT consists of a C library and several utilities that provide referential integrity across a database, interactive menu and field level help, and a code generator for building tightly controlled data entry support. AIT also provides for dynamic menu trees, report generation support, and creation of user groups. Composition of the library and utilities is discussed, along with relative strengths and weaknesses. In addition, an instantiation of the AIT tool set is presented using a specific application. Conclusions about the future and value of the tool set are then drawn based on the use of the tool set with that specific application.
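    The referential integrity that the AIT library enforces can be illustrated in miniature. The sketch below uses SQLite foreign keys rather than AIT's C library; the tables and names are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite leaves enforcement off by default
conn.execute("CREATE TABLE user_group (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("""CREATE TABLE app_user (
    id INTEGER PRIMARY KEY,
    name TEXT,
    group_id INTEGER REFERENCES user_group(id))""")
conn.execute("INSERT INTO user_group VALUES (1, 'operators')")
conn.execute("INSERT INTO app_user VALUES (1, 'alice', 1)")   # valid reference

# A row pointing at a nonexistent group is rejected by the database itself.
try:
    conn.execute("INSERT INTO app_user VALUES (2, 'ghost', 99)")
    orphan_rejected = False
except sqlite3.IntegrityError:
    orphan_rejected = True
```

    Declaring the constraint in the schema, rather than checking it in application code, is what keeps integrity guarantees uniform across every utility that touches the database.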

  2. BUILDING STRONGER STATE ENERGY PARTNERSHIPS

    Energy Technology Data Exchange (ETDEWEB)

    David Terry

    2002-04-22

    program and building greater support among State Energy Office Directors. Second, NASEO would work to improve the efficiency of America's schools by assisting states and DOE in promoting projects that result in more energy efficient (and clean energy) schools and a better learning environment. Third, NASEO was to identify opportunities, needs, and priorities related to the emerging public benefit funds/programs operated by many states emerging from utility restructuring. This third activity, while still a high priority for the state energy offices, has not been funded under this agreement. Thus, no activity will be reported. The results of the two funded efforts described above are a significant increase in the awareness of RBA resources and assistance, as well as a better understanding of successful approaches to implementing RBA activities. This technical progress report includes an update of the progress during the first year of cooperative agreement DE-FC26-00NT40802, Building Stronger State Energy Partnerships with the U.S. Department of Energy. The report also describes the barriers encountered in the conduct of the effort, and our assessment of future progress and activities.

  3. BUILDING STRONGER STATE ENERGY PARTNERSHIPS WITH THE U.S. DEPARTMENT OF ENERGY

    Energy Technology Data Exchange (ETDEWEB)

    Kate Burke

    2002-11-01

    This technical progress report includes an update of the progress during the second year of cooperative agreement DE-FC26-00NT40802, Building Stronger State Energy Partnerships with the U.S. Department of Energy. The report also describes the barriers encountered in the conduct of the effort, and our assessment of future progress and activities.

  4. Building Stronger State Energy Partnerships with the U.S. Department of Energy

    Energy Technology Data Exchange (ETDEWEB)

    Marks, Kate

    2011-09-30

    This final technical report details the results of the work efforts and progress made from October 2007 – September 2011 under the National Association of State Energy Officials (NASEO) cooperative agreement DE-FC26-07NT43264, Building Stronger State Energy Partnerships with the U.S. Department of Energy. Major topical project areas in this final report include: Energy Assurance and Critical Infrastructure, State and Regional Technical Assistance, Regional Initiative, Regional Coordination and Technical Assistance, and International Activities in China. All required deliverables have been provided to the National Energy Technology Laboratory and DOE program officials.

  5. Building Stronger State Energy Partnerships with the U.S. Department of Energy

    Energy Technology Data Exchange (ETDEWEB)

    David Terry

    2008-09-30

    This final technical report details the results of the work efforts and progress made from July 2000 – July 2008 under the National Association of State Energy Officials (NASEO) cooperative agreement DE-FC26-00NT40802, Building Stronger State Energy Partnerships with the U.S. Department of Energy. Major topical project areas in this final report include: Rebuild America/Energy Smart Schools, Higher Education Initiative, Winter/Summer Fuels Outlook Conferences, Energy Emergency, Clean Energy Integration, Energy Star, and Office of Electricity Delivery and Energy Reliability. All required deliverables have been provided to the National Energy Technology Laboratory and DOE program officials.

  6. Geospatial database for heritage building conservation

    International Nuclear Information System (INIS)

    Heritage buildings are icons from the past that exist in the present. Through heritage architecture, we can learn about the economic issues and social activities of the past. Nowadays, heritage buildings are under threat from natural disasters, uncertain weather, pollution and other factors. In order to preserve this heritage for future generations, recording and documenting of heritage buildings are required. With the development of information systems and data collection techniques, it is possible to create a 3D digital model. This 3D information plays an important role in recording and documenting heritage buildings. 3D modelling and virtual reality techniques have demonstrated the ability to visualize the real world in 3D, and can provide a better platform for communication and understanding of heritage buildings. Combining 3D modelling with Geographic Information System (GIS) technology will create a database that can support various analyses of spatial data in the form of a 3D model. The objectives of this research are to determine the reliability of the Terrestrial Laser Scanning (TLS) technique for data acquisition of heritage buildings and to develop a geospatial database for heritage building conservation purposes. The result of data acquisition will become a guideline for 3D model development. This 3D model will be exported to the GIS format in order to develop a database for heritage building conservation. This database includes the requirements of the heritage building conservation process. Through this research, a proper database for storing and documenting heritage building conservation data will be developed.

  7. BUILDING STRONGER STATE ENERGY PARTNERSHIPS WITH THE U.S. DEPARTMENT OF ENERGY

    Energy Technology Data Exchange (ETDEWEB)

    Kate Burke

    2003-09-01

    This technical progress report includes an update of the progress during the third year of cooperative agreement DE-FC26-00NT40802, Building Stronger State Energy Partnerships with the U.S. Department of Energy. The report also describes the barriers encountered in the conduct of the effort, and our assessment of future progress and activities. The approach of the project included three tasks during year three. First, NASEO and its Buildings Committee were to focus on raising awareness and coordination of Rebuild activities. Through education, one-on-one communications, and presentations at NASEO meetings and other events, staff and the committee will assist Rebuild officials in stimulating interest in the program and building greater support among State Energy Office Directors. The most recent subtasks added to the project, though not directly related to Rebuild America, fall under this initial task, and support: (a) state plans to implement integrated energy and environmental initiatives, including distributed generation technologies, and (b) initiation of a state collaborative on advanced turbines and hybrid systems. The advanced turbine piece was completed during this year. During the year, a new workplan was accepted by Rebuild America's Dan Sze to supplement the work in this task. This workplan is outlined below. Second, NASEO would work to improve the efficiency of America's schools by assisting states and DOE in promoting projects that result in more energy efficient and clean energy schools and a better learning environment. This task was fully completed during this year. The third task involves energy security issues, which NASEO addressed by way of a Summer Fuels Outlook Conference held Tuesday, April 8, 2003. The purpose of this educational event was to inform state, federal, local, and other energy officials about the most recent transportation fuels data and trends. The public benefits part of this task was not funded for Year 3, thus no activity occurred.

  8. A Generative Approach for Building Database Federations

    Directory of Open Access Journals (Sweden)

    Uwe Hohenstein

    1999-11-01

    Full Text Available A comprehensive, specification-based approach for building database federations is introduced that supports integrated, ODMG 2.0-conforming access to heterogeneous data sources seamlessly in C++. The approach is centered around several generators. A first set of generators produces ODMG adapters for local sources in order to homogenize them. Each adapter represents an ODMG view and supports ODMG manipulation and querying. The adapters can be plugged into a federation framework. Another generator produces a homogeneous and uniform view by putting an ODMG-conforming federation layer on top of the adapters. Inputs to these generators are schema specifications. Schemata are defined in corresponding specification languages. There are languages to homogenize relational and object-oriented databases, as well as ordinary file systems. Each specification defines an ODMG schema and relates it to an existing data source. An integration language is then used to integrate the schemata and to build system-spanning federated views upon them. The generative nature provides flexibility with respect to schema modification of component databases. Any time a schema changes, only the specification has to be adapted; new adapters are generated automatically.
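    The adapter-and-federation idea can be sketched as follows. This is a hypothetical Python illustration, not the ODMG C++ interface from the paper; every class and source name here is invented.

```python
import sqlite3

class RelationalAdapter:
    """Homogenizes a relational source behind one uniform interface."""
    def __init__(self, conn, table):
        self.conn, self.table = conn, table
    def objects(self):
        for (name,) in self.conn.execute(f"SELECT name FROM {self.table}"):
            yield {"name": name}

class FileAdapter:
    """Homogenizes an ordinary file source (here, lines of text)."""
    def __init__(self, lines):
        self.lines = lines
    def objects(self):
        for line in self.lines:
            yield {"name": line.strip()}

class Federation:
    """Uniform federated view over all plugged-in adapters."""
    def __init__(self, *adapters):
        self.adapters = adapters
    def query(self):
        return [obj for a in self.adapters for obj in a.objects()]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE parts (name TEXT)")
conn.execute("INSERT INTO parts VALUES ('bolt'), ('nut')")
fed = Federation(RelationalAdapter(conn, "parts"), FileAdapter(["washer\n"]))
names = sorted(o["name"] for o in fed.query())
```

    In the generative approach described above, adapter classes like these would be emitted from schema specifications rather than written by hand, so a schema change only requires regenerating the adapter.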

  9. Application of OCR in Building Bibliographic Databases

    Directory of Open Access Journals (Sweden)

    A.R.D. Prasad

    1997-07-01

    Full Text Available Bibliographic databases tend to be very verbose and pose a problem to libraries due to the huge amount of data entry involved. In this situation, the two technologies that offer solutions are retro-conversion and optical character recognition (OCR). The building of an intelligent system for automatic identification of bibliographic elements such as title, author, publisher, etc. is discussed here. This paper also discusses the heuristics for identifying the elements and resolving conflicts that arise in situations where more than one bibliographic element satisfies the criteria specified for identifying the various elements. This work is being carried out at the DRTC with the financial assistance of NISSAT.
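    Such element-identification heuristics might look like the following sketch. The rules and their ordering (which also resolves conflicts: the earlier, more specific rule wins) are invented for illustration and are not the DRTC system's actual heuristics.

```python
import re

def label_line(line):
    """Assign a bibliographic element label to one OCR'd line.

    Rule order doubles as conflict resolution: when several rules
    could fire, the earlier (more specific) one wins.
    """
    line = line.strip()
    if re.search(r"\b(Press|Publishers?|Verlag)\b", line):
        return "publisher"
    if re.fullmatch(r"\d{4}", line):          # a bare four-digit year
        return "year"
    if "," in line and line.count(" ") <= 4:  # short 'Surname, Initials' shape
        return "author"
    return "title"                            # fallback when nothing else fires

record = ["Advanced Database Systems", "Prasad, A.R.D.",
          "Oxford University Press", "1997"]
labels = [label_line(l) for l in record]
```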

  10. Stronger synergies

    CERN Multimedia

    Antonella Del Rosso

    2012-01-01

    CERN was founded 58 years ago under the auspices of UNESCO. Since then, both organisations have grown to become world leaders in their respective fields. The links between the two have always existed but today they are even stronger, with new projects under way to develop a more efficient way of exchanging information and devise a common strategy on topics of mutual interest.   CERN and UNESCO are a perfect example of natural partners: their common field is science and education is one of the pillars on which both are built. Historically, they share a common heritage. Both UNESCO and CERN were born of the desire to use scientific cooperation to rebuild peace and security in the aftermath of the Second World War. "Recently, building on our common roots and in close collaboration with UNESCO, we have been developing more structured links to ensure the continuity of the actions taken over the years," says Maurizio Bona, who is in charge of CERN relations with international orga...

  11. Building Database-Powered Mobile Applications

    OpenAIRE

    Paul POCATILU

    2012-01-01

    Almost all mobile applications use persistency for their data. A common way for complex mobile applications is to store data in local relational databases. Almost all major mobile platforms include a relational database engine. These database engines expose a specific API (Application Programming Interface) to be used by mobile application developers for data definition and manipulation. This paper focuses on database-based application models for several mobile platforms (Android, Symbian, Wind...

  12. Building a dynamic Web/database interface

    OpenAIRE

    Cornell, Julie.

    1996-01-01

    Computer Science This thesis examines methods for accessing information stored in a relational database from a Web Page. The stateless and connectionless nature of the Web's Hypertext Transport Protocol as well as the open nature of the Internet Protocol pose problems in the areas of database concurrency, security, speed, and performance. We examined the Common Gateway Interface, Server API, Oracle's Web/database architecture, and the Java Database Connectivity interface in terms of p...

  13. Building Database-Powered Mobile Applications

    Directory of Open Access Journals (Sweden)

    Paul POCATILU

    2012-01-01

    Full Text Available Almost all mobile applications use persistency for their data. A common way for complex mobile applications is to store data in local relational databases. Almost all major mobile platforms include a relational database engine. These database engines expose a specific API (Application Programming Interface) to be used by mobile application developers for data definition and manipulation. This paper focuses on database-based application models for several mobile platforms (Android, Symbian, Windows CE/Mobile and Windows Phone). For each selected platform, the API and specific database operations are presented.
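    The definition/manipulation cycle such platform APIs expose can be illustrated with Python's sqlite3 module as a platform-neutral stand-in (mobile platforms wrap their own engines, commonly SQLite; this is not any particular platform's API, and the schema is invented).

```python
import sqlite3

db = sqlite3.connect(":memory:")  # on a device this would be a local file
# Data definition: create the local schema.
db.execute("CREATE TABLE note (id INTEGER PRIMARY KEY, body TEXT)")
# Data manipulation: insert and read back, with parameter binding.
db.execute("INSERT INTO note (body) VALUES (?)", ("offline draft",))
db.commit()
rows = db.execute("SELECT body FROM note").fetchall()
```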

  14. An Algorithm for Building an Electronic Database

    OpenAIRE

    Cohen, Wess A.; Gayle, Lloyd B.; Patel, Nima P.

    2016-01-01

    Objective: We propose an algorithm on how to create a prospectively maintained database, which can then be used to analyze prospective data in a retrospective fashion. Our algorithm provides future researchers a road map on how to set up, maintain, and use an electronic database to improve evidence-based care and future clinical outcomes. Methods: The database was created using Microsoft Access and included demographic information, socioeconomic information, and intraoperative and postoperati...

  15. Building a Database for a Quantitative Model

    Science.gov (United States)

    Kahn, C. Joseph; Kleinhammer, Roger

    2014-01-01

    A database can greatly benefit a quantitative analysis. The defining characteristic of a quantitative risk, or reliability, model is the use of failure estimate data. Models can easily contain a thousand Basic Events, relying on hundreds of individual data sources. Obviously, entering so much data by hand will eventually lead to errors. Less obviously, entering data this way does not aid in linking the Basic Events to the data sources. The best way to organize large amounts of data on a computer is with a database. But a model does not require a large, enterprise-level database with dedicated developers and administrators. A database built in Excel can be quite sufficient. A simple spreadsheet database can link every Basic Event to the individual data source selected for it. This database can also contain the manipulations appropriate for how the data is used in the model. These manipulations include stressing factors based on use and maintenance cycles, dormancy, unique failure modes, the modeling of multiple items as a single "Super component" Basic Event, and Bayesian updating based on flight and testing experience. A simple, unique metadata field in both the model and the database provides a link from any Basic Event in the model to its data source and all relevant calculations. The credibility of the entire model often rests on the credibility and traceability of the data.
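    The linking-plus-manipulation idea can be sketched as follows. The metadata key, prior values, and the choice of a conjugate gamma-Poisson update are illustrative assumptions, not the authors' actual spreadsheet.

```python
# Invented metadata key and prior; the 'manipulation' shown is a
# conjugate gamma-Poisson Bayesian update of a failure rate.
data_sources = {
    "BE-0042": {"source": "vendor handbook", "alpha": 1.0, "beta": 1.0e6},
}

def bayes_update(key, failures, hours):
    """Posterior mean failure rate after observed test/flight experience."""
    d = data_sources[key]
    alpha = d["alpha"] + failures   # prior pseudo-failures + observed failures
    beta = d["beta"] + hours        # prior pseudo-hours + observed hours
    return alpha / beta             # failures per hour

rate = bayes_update("BE-0042", failures=1, hours=1.0e6)
```

    The unique key ("BE-0042" here) is what lets any Basic Event in the model be traced back to both its source and the calculation applied to it.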

  16. Development of a California commercial building benchmarking database

    Energy Technology Data Exchange (ETDEWEB)

    Kinney, Satkartar; Piette, Mary Ann

    2002-05-17

    Building energy benchmarking is a useful starting point for commercial building owners and operators to target energy savings opportunities. There are a number of tools and methods for benchmarking energy use. Benchmarking based on regional data can provide more relevant information for California buildings than national tools such as Energy Star. This paper discusses issues related to benchmarking commercial building energy use and the development of Cal-Arch, a building energy benchmarking database for California. Currently, Cal-Arch uses existing survey data from California's Commercial End Use Survey (CEUS), a largely underutilized wealth of information collected by California's major utilities. DOE's Commercial Building Energy Consumption Survey (CBECS) is used by a similar tool, Arch, and by a number of other benchmarking tools. Future versions of Arch/Cal-Arch will utilize additional data sources, including modeled data and individual buildings, to expand the database.
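    At its core, regional benchmarking reduces to ranking a building against a survey sample. A minimal sketch, with invented energy use intensity (EUI) values standing in for CEUS data:

```python
def percentile_rank(sample, value):
    """Percent of surveyed buildings using less energy than this one."""
    below = sum(1 for s in sample if s < value)
    return 100.0 * below / len(sample)

# Invented regional EUI sample, kBtu/sqft/yr (not CEUS data).
regional_eui = [45, 52, 60, 68, 75, 80, 90, 110]
rank = percentile_rank(regional_eui, 72)  # this building vs. its peers
```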

  17. Development of a California commercial building benchmarking database

    International Nuclear Information System (INIS)

    Building energy benchmarking is a useful starting point for commercial building owners and operators to target energy savings opportunities. There are a number of tools and methods for benchmarking energy use. Benchmarking based on regional data can provide more relevant information for California buildings than national tools such as Energy Star. This paper discusses issues related to benchmarking commercial building energy use and the development of Cal-Arch, a building energy benchmarking database for California. Currently, Cal-Arch uses existing survey data from California's Commercial End Use Survey (CEUS), a largely underutilized wealth of information collected by California's major utilities. DOE's Commercial Building Energy Consumption Survey (CBECS) is used by a similar tool, Arch, and by a number of other benchmarking tools. Future versions of Arch/Cal-Arch will utilize additional data sources, including modeled data and individual buildings, to expand the database.

  18. Data Preparation Process for the Buildings Performance Database

    Energy Technology Data Exchange (ETDEWEB)

    Walter, Travis; Dunn, Laurel; Mercado, Andrea; Brown, Richard E.; Mathew, Paul

    2014-06-30

    The Buildings Performance Database (BPD) includes empirically measured data from a variety of data sources with varying degrees of data quality and data availability. The purpose of the data preparation process is to maintain data quality within the database and to ensure that all database entries have sufficient data for meaningful analysis and for the database API. Data preparation is a systematic process of mapping data into the Building Energy Data Exchange Specification (BEDES), cleansing data using a set of criteria and rules of thumb, and deriving values such as energy totals and dominant asset types. The data preparation process requires the most effort and time; therefore, most of the cleansing process has been automated. The process also needs to adapt as more data is contributed to the BPD and as building technologies change over time. The data preparation process is an essential step between data contributed by providers and data published to the public in the BPD.
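    The map/cleanse/derive pipeline can be sketched as below. The field names, validity threshold, and unit conversions are illustrative assumptions, not the actual BEDES mapping.

```python
# Invented provider-field -> target-field mapping (not real BEDES terms).
FIELD_MAP = {"sqft": "gross_floor_area",
             "elec_kwh": "electricity_use",
             "gas_therm": "gas_use"}

def prepare(record):
    # Map: rename provider fields into the common vocabulary.
    mapped = {FIELD_MAP[k]: v for k, v in record.items() if k in FIELD_MAP}
    # Cleanse: reject entries without enough data for meaningful analysis.
    if mapped.get("gross_floor_area", 0) <= 0:
        return None
    # Derive: an energy total in one unit (1 kWh = 3.412 kBtu, 1 therm = 100 kBtu).
    mapped["total_kbtu"] = (mapped.get("electricity_use", 0) * 3.412 +
                            mapped.get("gas_use", 0) * 100)
    return mapped

good = prepare({"sqft": 10000, "elec_kwh": 1000, "gas_therm": 10})
bad = prepare({"sqft": 0, "elec_kwh": 500})  # fails the cleansing rule
```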

  19. Crowdsource project on Building Indian Journals Database of Access Policies

    OpenAIRE

    Gutam, Sridhar; Gupta, C; Munigal, Achala; Adepu, Madhava Rao

    2016-01-01

    The aim of this project is to develop a database covering all ISSN journals from India. NISCAIR assigns ISSN numbers to all journals in India, and the list uploaded on its site at http://nsl.niscair.res.in/issn.jsp would be used for the database development. All the information related to the journals, viz., year of start, language, place (city), publishers, print only, closed acce...

  20. Research on methods of designing and building digital seabed database

    Institute of Scientific and Technical Information of China (English)

    Su Tianyun; Liu Baohua; Zhai Shikui; Liang Ruicai; Zheng Yanpeng; Fu Qiang

    2007-01-01

    With a review of recent developments in the digitalization and application of seabed data, this paper systematically proposes methods for integrating seabed data by analyzing its features, based on the ORACLE database management system and advanced techniques of spatial data management. We researched the storage structure of seabed data, distributed-integrated database systems, standardized spatial databases and a seabed metadata management system in order to effectively manage and use seabed information in practical applications. Finally, we applied the methods researched and proposed in this paper to build the Bohai Sea engineering geology database, which stores engineering geology data and other seabed information from the Bohai Sea area. As a result, the Bohai Sea engineering geology database can effectively integrate huge amounts of distributed and complicated seabed data to meet the practical requirements of Bohai Sea engineering geology environment exploration and exploitation.

  1. Strategies of Building a Stronger Sense of Community for Sustainable Neighborhoods: Comparing Neighborhood Accessibility with Community Empowerment Programs

    Directory of Open Access Journals (Sweden)

    Te-I Albert Tsai

    2014-05-01

    Full Text Available New Urbanist development in the U.S. aims at enhancing a sense of community and seeks to return to the design of early transitional neighborhoods, which have pedestrian-oriented environments with retail shops and services within walking distance of housing. Meanwhile, 6000 of Taiwan’s community associations have been running community empowerment programs supported by the Council for Cultural Affairs that have helped many neighborhoods to rebuild so-called community cohesion. This research attempts to evaluate whether neighborhoods with facilities near housing and shorter travel distances within a neighborhood would promote stronger social interactions and form a better community attachment than neighborhoods that have various opportunities for residents to participate in either formal or informal social gatherings. After interviewing and surveying residents from 19 neighborhoods in Taipei’s Beitou District, and correlating the psychological sense of community with inner-neighborhood daily travel distances and the number of participatory activities held by community organizations under empowerment programs, together with the frequencies of regular individual visits and casual meetings, the statistical evidence showed that placing public facilities near residential locations is more effective than providing various programs for elevating a sense of community.

  2. Building a comprehensive serials decision database at Virginia Tech

    OpenAIRE

    Metz, P.; Cosgriff, J.

    2000-01-01

    Although for many years academic libraries have relied on data on cost, library use, or citations to inform collection development decisions respecting serials, they have not fully exploited the possibilities for compiling numerous measures into comprehensive databases for decision support. The authors discuss the procedures used and the advantages realized from an effort to build such a resource at Virginia Polytechnic Institute and State University (Virginia Tech), where the available data ...

  3. Illinois hospital using Web to build database for relationship marketing.

    Science.gov (United States)

    Rees, T

    2000-01-01

    Silver Cross Hospital and Medical Centers, Joliet, Ill., is promoting its Web site as a tool for gathering health information about patients and prospective patients in order to build a relationship marketing database. The database will enable the hospital to identify health care needs of consumers in Joliet, Will County and many southwestern suburbs of Chicago. The Web site is promoted in a multimedia advertising campaign that invites residents to participate in a Healthy Living Quiz that rewards respondents with free health screenings. The effort is part of a growing planning and marketing strategy in the health care industry called customer relationship management (CRM). Not only does a total CRM plan offer health care organizations the chance to discover the potential for meeting consumers' needs; it also helps find any marketplace gaps that may exist. PMID:11184485

  4. Stronger Municipalities for Stronger Cities in Argentina

    OpenAIRE

    Rémy Prud'homme; Hervé Huntzinger; Pierre Kopp

    2004-01-01

    In recent years a number of studies have been devoted to the twin issues of economic development and of decentralization in Argentina. Many papers have tried to understand the complex system of intergovernmental relations. Most of them, however, have focussed on the role of provinces, and neglected the problems raised by municipalities. This paper tries to bridge this gap, and to suggest that stronger municipalities could contribute to produce stronger cities that would in turn foster economi...

  5. Building, Testing and Evaluating Database Clusters : OSA project

    OpenAIRE

    Kushanova, Olga

    2014-01-01

    The purpose of this study was to research the idea and functionality of clustered database systems. Since relational databases started to lose effectiveness at modern data sizes and manipulation demands, a new solution had to be found to overcome the limitations. On one side, relational databases started to support clustered implementations, which made them more reliable and helped achieve better performance. On the other side, a totally new data store structure came with the NoSQL movement...

  6. Lighter and stronger planes

    OpenAIRE

    Attard, Bonnie; Duca, Edward

    2015-01-01

    The price of fuel is a large cost burden on the aerospace industry. A lighter plane means cheaper flights, increased aircraft range, and less environmental pollution. http://www.um.edu.mt/think/lighter-and-stronger-planes/

  7. Keyless Entry: Building a Text Database Using OCR Technology.

    Science.gov (United States)

    Grotophorst, Clyde W.

    1989-01-01

    Discusses the use of optical character recognition (OCR) technology to produce an ASCII text database. A tutorial on digital scanning and OCR is provided, and a systems integration project which used the Calera CDP-3000XF scanner and text retrieval software to construct a database of dissertations at George Mason University is described. (four…

  8. Building Controls Software around an Object-Oriented Database

    CERN Document Server

    Kostro, K

    1997-01-01

    The use of Object-Oriented (OO) techniques has become popular in all areas of software technology and HEP control systems have not been excluded from this trend. In the course of modernisation of the CERN SPS Experimental Areas control software we designed and implemented an OO database to hold the configuration data for equipment and beams. With the beam lines and equipment defined in the new database, control facilities are being added by incrementally enhancing the classes and adding new methods to the database schema. Using the OO database helps to design the new system in a transparent way. Real-world objects such as beam lines or crates are uniquely mapped to the corresponding objects in the database. The new database allows seamless integration of data into programs written in OO languages such as C++ and Java. The WWW interface to the database gives a familiar look and feel and has been provided with relatively little effort. In this paper we present an overview of the project and the employed methods...

  9. Building a genome database using an object-oriented approach.

    Science.gov (United States)

    Barbasiewicz, Anna; Liu, Lin; Lang, B Franz; Burger, Gertraud

    2002-01-01

    GOBASE is a relational database that integrates data associated with mitochondria and chloroplasts. The most important data in GOBASE, i.e., molecular sequences and taxonomic information, are obtained from the public sequence data repository at the National Center for Biotechnology Information (NCBI), and are validated by our experts. Maintaining a curated genomic database comes with a towering labor cost, due to the sheer volume of available genomic sequences and the plethora of annotation errors and omissions in records retrieved from public repositories. Here we describe our approach to increasing automation of the database population process, thereby reducing manual intervention. As a first step, we used the Unified Modeling Language (UML) to construct a list of potential errors. Each case was evaluated independently, and an expert solution was devised and represented as a diagram. Subsequently, the UML diagrams were used as templates for writing object-oriented automation programs in the Java programming language. PMID:12542407

  10. Summary of Adsorption/Desorption Experiments for the European Database on Indoor Air Pollution Sources in Buildings

    DEFF Research Database (Denmark)

    Kjær, Ulla Dorte; Tirkkonen, T.

    1996-01-01

    Experimental data for adsorption/desorption in building materials. Contribution to the European Database on Indoor Air Pollution Sources in buildings.

  11. Building a Database from the SDSS Imaging Data

    Science.gov (United States)

    Dobos, L.; Csabai, I.; Budavári, T.; Szalay, A. S.

    2009-09-01

    We present our solution for organizing high volumes of astronomical imaging data in a relational database, focusing on fast data retrieval and Virtual Observatory-style meta-data representation. Previous work by the authors showed that using relational databases for astronomical data can be very useful not only for reduced catalogs like SkyServer (Szalay et al. 2001) but also for high-dimensional binary data like spectra (Spectrum Services; Dobos et al. 2004), where meta-data naturally fits into the relational model, while storing the binary blobs in the database yields a significant performance gain over ordinary flat files when batch data processing is an objective. We used a special high-bandwidth data transport protocol for downloading the whole SDSS dataset over the Internet, or about 8 TB of corrected imaging data. All necessary meta-data is stored along with the images. For sky coverage representation we used the Spherical Geometry Library of Budavári et al. (2007), which supports fast lookup of images based on a reference celestial region. We are also working on a convenient user interface for the database as well as a web service that will support programmatic client access to the imaging data with standard Virtual Observatory protocols.

  12. Building high dimensional imaging database for content based image search

    Science.gov (United States)

    Sun, Qinpei; Sun, Jianyong; Ling, Tonghui; Wang, Mingqing; Yang, Yuanyuan; Zhang, Jianguo

    2016-03-01

    In medical imaging informatics, content-based image retrieval (CBIR) techniques are employed to aid radiologists in the retrieval of images with similar image contents. CBIR uses visual contents, normally called image features, to search images from large-scale image databases according to users' requests in the form of a query image. However, most current CBIR systems require a distance computation over image feature vectors to perform a query, and these distance computations can be time consuming when the number of image features grows large, which limits the usability of the systems. In this presentation, we propose a novel framework which uses a high dimensional database to index the image features to improve the accuracy and retrieval speed of CBIR in an integrated RIS/PACS.
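    The query step that such an index accelerates is a nearest-neighbour scan over feature vectors. A brute-force sketch, with invented image ids and toy 3-D features (real CBIR features are far higher-dimensional, which is why indexing matters):

```python
import math

def euclidean(a, b):
    """Distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest(features, query):
    """Return the image id whose feature vector is closest to the query."""
    return min(features, key=lambda img_id: euclidean(features[img_id], query))

# Invented ids and toy 3-D feature vectors.
features = {"img_001": [0.1, 0.9, 0.3],
            "img_002": [0.8, 0.2, 0.5],
            "img_003": [0.2, 0.8, 0.4]}
best = nearest(features, [0.12, 0.88, 0.3])
```

    This scan is linear in the database size per query; an index over the feature space replaces it with a sublinear lookup.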

  13. Building an Organic Market Database - OrganicDataNetwork Training

    OpenAIRE

    Willer, Helga; Schaack, Diana

    2014-01-01

    About this training > The OrganicDataNetwork manual shows how a database, and the necessary tools for processing organic market data, can be built. > The target group is collectors of organic market data. > The manual is a product of the OrganicDataNetwork project, which aims to improve European organic market data. > Further details are available in the manual on the OrganicDataNetwork website at www.organicdatanetwork.net.

  14. Research and application of ORACLE performance optimizing technologies for building airplane environment resource database

    Science.gov (United States)

    Zhang, Jianjun; Sun, Jianyong; Cheng, Conggao

    2013-03-01

    Many problems arise in processing experimental aircraft vibration (temperature, humidity) data and generating intermediate calculations during the construction of an airplane environment resource database, such as the need to handle both structured and unstructured data, the weak data-processing capacity of the client browser, and massive network data transfers. To solve these problems, several database tuning and optimization strategies are employed based on Oracle 11g, including data storage structure tuning, server memory configuration, disk I/O tuning, and SQL statement tuning. The experimental results show that the performance of the airplane environment resource database is improved by about 80% compared with the database developed in the initial demonstration and validation phase. Applying these optimization strategies to database construction lays a sound foundation for completing the airplane environment resource database.
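    Of the tuning strategies listed above, SQL statement tuning is the easiest to demonstrate: add an index so a frequent filter becomes an index search instead of a full table scan, and confirm the change in the query plan. SQLite stands in for Oracle 11g here, and the table, column, and index names are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE vibration (sensor_id INTEGER, flight TEXT, value REAL)")
conn.executemany("INSERT INTO vibration VALUES (?, ?, ?)",
                 [(i % 50, f"F{i % 10}", i * 0.1) for i in range(1000)])

def plan(sql):
    """Return the query plan as one string (last column holds the detail)."""
    return " ".join(row[-1] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT avg(value) FROM vibration WHERE sensor_id = 7"
print(plan(query))          # full table scan before tuning
conn.execute("CREATE INDEX idx_vib_sensor ON vibration (sensor_id)")
print(plan(query))          # now searches via idx_vib_sensor
```

    The same before-and-after inspection (via `EXPLAIN PLAN` in Oracle) is how each tuning step's effect would be verified in practice.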

  15. An approach in building a chemical compound search engine in oracle database.

    Science.gov (United States)

    Wang, H; Volarath, P; Harrison, R

    2005-01-01

    Searching for and identifying chemical compounds is an important process in drug design and chemistry research. An efficient search engine requires a close coupling of the search algorithm and the database implementation. The database must process chemical structures, which demands approaches for representing, storing, and retrieving structures in a database system. In this paper, a general database framework for a chemical compound search engine in an Oracle database is described. The framework is designed to eliminate data type constraints for potential search algorithms, which is a crucial step toward building a domain-specific query language on top of SQL. A search engine implementation based on the database framework is also demonstrated. The convenience of the implementation underscores the efficiency and simplicity of the framework. PMID:17282834

  16. SAPE Database Building for a Security System Test Bed

    Energy Technology Data Exchange (ETDEWEB)

    Jo, Kwangho; Kim, Woojin [Korea Institute of Nuclear Nonproliferation and Control, Daejeon (Korea, Republic of)

    2013-05-15

    Physical protection to prevent radiological sabotage and the unauthorized removal of nuclear material is an important activity. The physical protection system (PPS) of a nuclear facility requires an effectiveness analysis, which evaluates the probability of blocking an attack along the most vulnerable path. Systematic Analysis of Physical Protection Effectiveness (SAPE) is a computer code developed for this vulnerable path analysis. SAPE's analysis is based on experimental data that can be obtained through a test bed. In order to utilize the SAPE code, we conducted field tests on several sensors and acquired data. This paper describes how the DB (database) was established.

  17. Object-Oriented Database for Managing Building Modeling Components and Metadata: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Long, N.; Fleming, K.; Brackney, L.

    2011-12-01

    Building simulation enables users to explore and evaluate multiple building designs. When tools for optimization, parametrics, and uncertainty analysis are combined with analysis engines, the sheer number of discrete simulation datasets makes it difficult to keep track of the inputs. The integrity of the input data is critical to designers, engineers, and researchers for code compliance, validation, and building commissioning long after the simulations are finished. This paper discusses an application that stores inputs needed for building energy modeling in a searchable, indexable, flexible, and scalable database to help address the problem of managing simulation input data.

  18. Vespucci: a system for building annotated databases of nascent transcripts.

    Science.gov (United States)

    Allison, Karmel A; Kaikkonen, Minna U; Gaasterland, Terry; Glass, Christopher K

    2014-02-01

    Global run-on sequencing (GRO-seq) is a recent addition to the series of high-throughput sequencing methods that enables new insights into transcriptional dynamics within a cell. However, GRO-sequencing presents new algorithmic challenges, as existing analysis platforms for ChIP-seq and RNA-seq do not address the unique problem of identifying transcriptional units de novo from short reads located all across the genome. Here, we present a novel algorithm for de novo transcript identification from GRO-sequencing data, along with a system that determines transcript regions, stores them in a relational database and associates them with known reference annotations. We use this method to analyze GRO-sequencing data from primary mouse macrophages and derive novel quantitative insights into the extent and characteristics of non-coding transcription in mammalian cells. In doing so, we demonstrate that Vespucci expands existing annotations for mRNAs and lincRNAs by defining the primary transcript beyond the polyadenylation site. In addition, Vespucci generates assemblies for un-annotated non-coding RNAs such as those transcribed from enhancer-like elements. Vespucci thereby provides a robust system for defining, storing and analyzing diverse classes of primary RNA transcripts that are of increasing biological interest. PMID:24304890
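    A core step in identifying transcriptional units de novo from short reads, as described above, is merging overlapping or nearly adjacent mapped reads into candidate transcript regions. The sketch below shows only that interval-merging idea; Vespucci's actual algorithm is more involved, and the `max_gap` parameter is an invented stand-in for its real merging criteria.

```python
def merge_reads(intervals, max_gap=0):
    """Merge (start, end) read intervals whose gap is <= max_gap."""
    merged = []
    for start, end in sorted(intervals):
        if merged and start - merged[-1][1] <= max_gap:
            # Read overlaps or nearly abuts the current unit: extend it.
            merged[-1][1] = max(merged[-1][1], end)
        else:
            # Gap too large: start a new candidate transcript unit.
            merged.append([start, end])
    return [tuple(unit) for unit in merged]

reads = [(100, 150), (140, 200), (205, 260), (500, 560)]
print(merge_reads(reads, max_gap=10))  # [(100, 260), (500, 560)]
```

    The merged units would then be stored in the relational database and matched against reference annotations, which is where extensions beyond the polyadenylation site become visible.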

  19. Effective Generation and Update of a Building Map Database Through Automatic Building Change Detection from LiDAR Point Cloud Data

    Directory of Open Access Journals (Sweden)

    Mohammad Awrangjeb

    2015-10-01

    Periodic building change detection is important for many applications, including disaster management. Building map databases need to be updated based on detected changes so as to ensure their currency and usefulness. This paper first presents a graphical user interface (GUI) developed to support the creation of a building database from building footprints automatically extracted from LiDAR (light detection and ranging) point cloud data. An automatic building change detection technique, by which buildings are automatically extracted from newly-available LiDAR point cloud data and compared to those within an existing building database, is then presented. Buildings identified as totally new or demolished are directly added to the change detection output. However, for part-building demolition or extension, a connected component analysis algorithm is applied, and for each connected building component, the area, width and height are estimated in order to ascertain whether it can be considered a demolished or new building part. Using the developed GUI, a user can quickly examine each suggested change and indicate his/her decision to update the database with a minimum number of mouse clicks. In experimental tests, the proposed change detection technique produced almost no omission errors, and when compared to the number of reference building corners, it reduced the human interaction to 14% for initial building map generation and to 3% for map updating. The proposed approach can thus be exploited for enhanced automated updating of building information within a topographic database.
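    The connected component step mentioned above can be illustrated with a generic 4-connected flood fill over a binary change mask: label each component, then keep only components large enough to count as a demolished or new building part. The grid, connectivity choice, and area threshold are illustrative, not the paper's actual parameters (which also use width and height).

```python
from collections import deque

def components(grid, min_area=2):
    """Return 4-connected components of a binary grid with area >= min_area."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    keep = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] and not seen[r][c]:
                cells, queue = [], deque([(r, c)])
                seen[r][c] = True
                while queue:  # breadth-first flood fill
                    y, x = queue.popleft()
                    cells.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and grid[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if len(cells) >= min_area:  # area filter
                    keep.append(cells)
    return keep

mask = [[1, 1, 0, 0],
        [0, 1, 0, 1],   # lone pixel at (1, 3) is filtered out as noise
        [0, 0, 0, 0]]
print([len(c) for c in components(mask, min_area=2)])  # [3]
```

    In the real pipeline the mask comes from differencing the newly extracted footprints against the database, and each surviving component is offered to the user through the GUI for a keep/reject decision.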

  20. Building a GIS database in the Eastern Tennessee Seismic Zone

    Science.gov (United States)

    Akinpelu, M. O.; Vlahovic, G.; Arroucau, P.; Malhotra, R.; Powell, C. A.

    2010-12-01

    Eastern Tennessee contains one of the most seismically active regions in eastern North America. The Eastern Tennessee Seismic Zone (ETSZ) is about 300 kilometers long and extends from northwestern Georgia through eastern Tennessee [Study Area: 34°N to 37°N; 86°W to 82.5°W]. It is the second most active earthquake zone in the United States east of the Rocky Mountains; only the New Madrid Seismic Zone releases more seismic strain energy. Unlike the New Madrid Seismic Zone, the ETSZ has not experienced a destructive earthquake in historical time; however, its seismogenic potential is not well understood. The spatial dimensions of the ETSZ and its association with potential field anomalies suggest that collecting and organizing all the relevant data into a GIS geodatabase could increase our understanding of the region. Geographic Information System (GIS) software can be used to acquire, share, maintain and modify geospatial data sets. In this work, ArcGIS 9.3.2 is used to build a geodatabase that includes topography, earthquake information such as locations, magnitudes and focal mechanisms, potential field data, P and S wave velocity anomalies inferred from tomographic inversions of local events, seismic transects, digital geological maps and other relevant datasets. Raw datasets were downloaded from several earth science institutions and edited before being imported into ArcGIS. Various geoprocessing techniques, such as geo-referencing, digitizing, and surface interpolation, were used to manipulate and analyze these data. We show how this compilation can be used to analyze the spatial relationships between earthquake locations and other data layers. The long-term idea behind this project is to build an information resource that will be continuously updated and will eventually encompass data related to intraplate seismicity in the entire central and eastern United States. It will be made available to researchers, students, the general public and

  1. Lessons learned while building the Deepwater Horizon Database: Toward improved data sharing in coastal science

    Science.gov (United States)

    Thessen, Anne E.; McGinnis, Sean; North, Elizabeth W.

    2016-02-01

    Process studies and coupled-model validation efforts in geosciences often require integration of multiple data types across time and space. For example, improved prediction of hydrocarbon fate and transport is an important societal need which fundamentally relies upon synthesis of oceanography and hydrocarbon chemistry. Yet, there are no publicly accessible databases which integrate these diverse data types in a georeferenced format, nor are there guidelines for developing such a database. The objective of this research was to analyze the process of building one such database to provide baseline information on data sources and data sharing and to document the challenges and solutions that arose during this major undertaking. The resulting Deepwater Horizon Database was approximately 2.4 GB in size and contained over 8 million georeferenced data points collected from industry, government databases, volunteer networks, and individual researchers. The major technical challenges that were overcome were reconciliation of terms, units, and quality flags, which was necessary to effectively integrate the disparate data sets. Assembling this database required the development of relationships with individual researchers and data managers, which often involved extensive e-mail contacts. The average number of e-mails exchanged per data set was 7.8. Of the 95 relevant data sets that were discovered, 38 (40%) were obtained, either in whole or in part. Over one third (36%) of the requests for data went unanswered. The majority of responses were received after the first request (64%) and within the first week of the first request (67%). Although fewer than half of the potentially relevant datasets were incorporated into the database, the level of sharing (40%) was high compared to some other disciplines where sharing can be as low as 10%. Our suggestions for building integrated databases include budgeting significant time for e-mail exchanges, being cognizant of the cost versus
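    The reconciliation of terms and units that the abstract identifies as the main technical hurdle is, at its smallest, a lookup table mapping each source's field names and units onto one canonical scheme. The field aliases and unit conversions below are invented examples to show the shape of such a normalization layer, not the project's actual vocabulary.

```python
# Canonical target: temperatures in degrees Celsius under one field name.
UNIT_TO_CELSIUS = {
    "degC": lambda v: v,
    "degF": lambda v: (v - 32.0) * 5.0 / 9.0,
    "K":    lambda v: v - 273.15,
}
FIELD_ALIASES = {"temp": "temperature",
                 "water_temp": "temperature",
                 "temperature": "temperature"}

def normalize(record):
    """Map one source record onto the canonical field name and unit."""
    field = FIELD_ALIASES[record["field"]]
    value = UNIT_TO_CELSIUS[record["unit"]](record["value"])
    return {"field": field, "value": round(value, 2), "unit": "degC"}

rows = [{"field": "water_temp", "unit": "degF", "value": 68.0},
        {"field": "temp", "unit": "K", "value": 293.15}]
print([normalize(r) for r in rows])  # both become 20.0 degC "temperature"
```

    Quality flags can be handled the same way, with each contributor's flag vocabulary mapped onto a shared ordinal scale before the data sets are merged.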

  2. Attributes of the Federal Energy Management Program's Federal Site Building Characteristics Database

    Energy Technology Data Exchange (ETDEWEB)

    Loper, Susan A.; Sandusky, William F.

    2010-12-31

    Typically, the Federal building stock is described as a group of about one-half million buildings throughout the United States. Information beyond this level is generally limited to the distribution of that total by agency, and perhaps by state. However, additional characterization of the Federal building stock is required as the Federal sector seeks ways to implement efficiency projects to reduce energy and water use intensity as mandated by legislation and Executive Order. Using a Federal facility database that was assembled for use in a geographic information system tool, additional characterization of the Federal building stock is provided, including the geographical distribution of sites, building counts and percentage of the total by agency, distribution of site and building totals by agency, distribution of building count and floor space by Federal building type classification by agency, and rank ordering of sites, buildings, and floor space by state. A case study shows how the building stock changed for the Department of Energy from 2000 through 2008.

  3. DEEP: A Database of Energy Efficiency Performance to Accelerate Energy Retrofitting of Commercial Buildings

    Energy Technology Data Exchange (ETDEWEB)

    Hoon Lee, Sang; Hong, Tianzhen; Sawaya, Geof; Chen, Yixing; Piette, Mary Ann

    2015-05-01

    The paper presents a method and process to establish a database of energy efficiency performance (DEEP) to enable quick and accurate assessment of energy retrofits of commercial buildings. DEEP was compiled from the results of about 35 million EnergyPlus simulations. DEEP provides energy savings for screening and evaluation of retrofit measures targeting small and medium-sized office and retail buildings in California. The prototype building models are developed for a comprehensive assessment of building energy performance based on the DOE commercial reference buildings and the California DEER prototype buildings. The prototype buildings represent seven building types across six construction vintages and 16 California climate zones. DEEP uses these prototypes to evaluate the energy performance of about 100 energy conservation measures covering envelope, lighting, heating, ventilation, air-conditioning, plug-loads, and domestic hot water. DEEP contains the energy simulation results for individual retrofit measures as well as packages of measures, to account for interactive effects between multiple measures. The large-scale EnergyPlus simulations are being conducted on the supercomputers at the National Energy Research Scientific Computing Center of Lawrence Berkeley National Laboratory. The pre-simulated database is part of an on-going project to develop a web-based retrofit toolkit for small and medium-sized commercial buildings in California, which provides real-time energy retrofit feedback by querying DEEP with recommended measures, estimated energy savings, and financial payback period based on users' decision criteria of maximizing energy savings, energy cost savings, carbon reduction, or payback of investment. The pre-simulated database and associated comprehensive measure analysis enhance the ability to perform assessments of retrofits to reduce energy use for small and medium buildings and business owners who typically do not have resources to conduct
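    Screening against a pre-simulated database of this kind reduces, on the client side, to looking up stored savings for a building type/vintage/climate combination and ranking measures by the user's criterion, such as simple payback. The measure records, prices, and numbers below are fabricated to show the shape of the query, not actual DEEP results.

```python
# Hypothetical pre-simulated records for one building type/vintage/climate.
measures = [
    {"measure": "LED lighting",     "kwh_saved": 12000, "cost": 9000},
    {"measure": "Roof insulation",  "kwh_saved": 5000,  "cost": 11000},
    {"measure": "Smart thermostat", "kwh_saved": 4000,  "cost": 1200},
]

def rank_by_payback(records, price_per_kwh=0.18):
    """Sort measures by simple payback in years (cost / annual savings)."""
    def payback(rec):
        return rec["cost"] / (rec["kwh_saved"] * price_per_kwh)
    return sorted(records, key=payback)

for rec in rank_by_payback(measures):
    print(rec["measure"])
# Smart thermostat, LED lighting, Roof insulation
```

    Swapping the sort key for energy savings, cost savings, or carbon reduction implements the other decision criteria the abstract lists.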

  4. Building up a collaborative article database out of Open Source components

    Directory of Open Access Journals (Sweden)

    Stefan Kandera

    2010-12-01

    Members of a Swiss, Austrian and German network of health care libraries planned to build a collaborative article reference database. Since different libraries were cataloging articles on their own, and many national health care journals cannot be found in other repositories (free or commercial), the goal was to merge existing collections and to enable participants to catalog articles on their own. As of November 2010, the database http://bibnet.org contains 45,000 article references from 17 libraries. In this paper we discuss how the software concept evolved and the problems we encountered during this process.

  5. Databases

    Data.gov (United States)

    National Aeronautics and Space Administration — The databases of computational and experimental data from the first Aeroelastic Prediction Workshop are located here. The database file names indicate their contents...

  6. Databases

    Digital Repository Service at National Institute of Oceanography (India)

    Kunte, P.D.

    Information on bibliographic as well as numeric/textual databases relevant to coastal geomorphology is presented in tabular form. The databases cover a broad spectrum of related subjects, such as coastal environment and population aspects, coastline...

  7. The Wenchuan earthquake creation of a rich database of building performance

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    After the Wenchuan earthquake, the Institute of Engineering Mechanics (IEM) performed an extensive and comprehensive damage survey of the large area affected by the earthquake. Seismic codes in China were revised and updated after the catastrophic 1976 Tangshan earthquake; however, until the Wenchuan earthquake the seismic code provisions had not been tested by a large earthquake. Some 5000 buildings, exposed to intensities VI to XI, were investigated in great detail immediately after the earthquake. The investigation and the surveys covered both seismically designed (fortified) buildings and non-code-compliant buildings. In the process a comprehensive and documented database of building performance was compiled, which will be very valuable for further research, improvement of the seismic code, improvement of construction practices, and disaster mitigation planning and management. The database predominantly contains the most prevalent structural types in the region: 1) reinforced and un-reinforced masonry structures; 2) masonry buildings with reinforced frames in the lower stories; and 3) reinforced concrete frame structures. Observed damage characteristics of the various structural types were studied and documented, damage patterns were analyzed, and corresponding damage probability matrices were derived from the data collected during this survey. It is our hope that this investigation and the published material will be utilized in the revision of the seismic codes, leading to a higher level of life safety and damage reduction in future earthquakes.
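    A damage probability matrix of the kind derived above gives, for each shaking intensity, the observed fraction of surveyed buildings in each damage state; it is built by row-normalizing the survey counts. The counts, intensities, and damage-state categories below are invented, but the normalization is the standard construction.

```python
# Hypothetical survey counts: intensity -> buildings observed in each
# damage state [none, moderate, severe].
counts = {
    "VII": [60, 30, 10],
    "IX":  [10, 40, 50],
}

def damage_probability_matrix(counts):
    """Normalize each intensity row of counts into damage-state probabilities."""
    dpm = {}
    for intensity, row in counts.items():
        total = sum(row)
        dpm[intensity] = [round(c / total, 2) for c in row]
    return dpm

print(damage_probability_matrix(counts))
# {'VII': [0.6, 0.3, 0.1], 'IX': [0.1, 0.4, 0.5]}
```

    In practice one such matrix is estimated per structural type, which is why the survey's coverage of masonry, mixed, and reinforced concrete buildings matters.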

  8. Privacy protection and public goods: building a genetic database for health research in Newfoundland and Labrador

    OpenAIRE

    Kosseim, Patricia; Pullman, Daryl; Perrot-Daley, Astrid; Hodgkinson, Kathy; Street, Catherine; Rahman, Proton

    2013-01-01

    Objective To provide a legal and ethical analysis of some of the implementation challenges faced by the Population Therapeutics Research Group (PTRG) at Memorial University (Canada), in using genealogical information offered by individuals for its genetics research database. Materials and methods This paper describes the unique historical and genetic characteristics of the Newfoundland and Labrador founder population, which gave rise to the opportunity for PTRG to build the Newfoundland Genea...

  9. Design and Building of the New Countryside Construction Database Based on ArcSDE and SQL Server

    Institute of Scientific and Technical Information of China (English)

    Hongji ZHANG; Xuping LI; Yong LUO; Lianze TENG; Aiqun DAI

    2013-01-01

    Building the new countryside construction database plays an important role in improving construction efficiency and enhancing the level of major project management. On the basis of a detailed analysis of the features of new countryside construction data, we give an overview of the database design based on ArcSDE and SQL Server, and elaborate the associations between data classification and organization, database conceptual design, logical design, spatial data, and thematic attribute data. Finally, taking the provincial new countryside demonstration zone in Yanjiang District of Sichuan Province as an example, we build the new countryside construction database.

  10. Databases

    Directory of Open Access Journals (Sweden)

    Nick Ryan

    2004-01-01

    Databases are deeply embedded in archaeology, underpinning and supporting many aspects of the subject. However, as well as providing a means for storing, retrieving and modifying data, databases themselves must be a result of a detailed analysis and design process. This article looks at this process, and shows how the characteristics of data models affect the process of database design and implementation. The impact of the Internet on the development of databases is examined, and the article concludes with a discussion of a range of issues associated with the recording and management of archaeological data.

  11. Energy Performance Database of Building Heritage in the Region of Umbria, Central Italy

    Directory of Open Access Journals (Sweden)

    Cinzia Buratti

    2015-07-01

    Household energy consumption has been increasing in recent decades; the residential sector is responsible for about 40% of total final energy use in Europe. Energy efficiency measures can reduce both the energy needs of buildings and energy-related CO2 emissions. For this reason, in recent years the European Union has been making efforts to enhance energy saving in buildings by introducing various policies and strategies; in this context, a common methodology was developed to assess and certify the energy performance of buildings. The positive effects of energy efficiency measures need to be verified, but measuring and monitoring building energy performance is time consuming and financially demanding. Alternatively, energy efficiency can be evaluated by specific indicators based on energy consumption. In this work, a methodology to investigate the level of energy efficiency reached in the Umbria Region (Central Italy) is described, based on data collected from energy certificates. Energy certificates, which are the outcomes of simulation models, represent a useful and available tool for collecting data related to the energy use of dwellings. A database of building energy performance was developed, into which about 6500 energy certificates of residential buildings supplied by the Umbria region were inserted. On the basis of this data collection, average energy and CO2 indicators for the building heritage in Umbria were estimated and compared to national and international indicators derived from official sources. Results showed that the methodology adopted in this work can be an alternative method for the evaluation of energy indicators: the indicators calculated from simulation data were similar to those reported in national and international sources. This allowed us to validate the adopted methodology and the efficiency of European policies.

  12. TECHNIQUES OF 3D COMPONENT DATABASE ESTABLISHING AND QUALITY CONTROL FOR WOODEN BUILDING

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Based on the architectural survey project of "the Chi Lin Nunnery Redevelopment" in Hong Kong, this paper investigates techniques for building a 3D digital document of a large-scale timber structure and for quality control during construction through computer-based 3D simulation of the whole project. Several key issues are addressed, including primary data acquisition, 3D modeling and display, pre-assembling the total building, and quality examination. Some useful experiments, such as new applications of CCD digital cameras and of image and graph processing software packages (CAD, Photoshop, Photomodeler, Vexcel, etc.) to architecture, are also presented. The methods introduced in this paper are suitable for building integrated image and graph databases of complicated architecture, and useful for conveniently maintaining and reconstructing ancient architecture.

  13. THE SCHEME FOR THE DATABASE BUILDING AND UPDATING OF 1:10 000 DIGITAL ELEVATION MODELS

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    The National Bureau of Surveying and Mapping of China has planned to speed up the development of a spatial data infrastructure (SDI) in the coming few years. This SDI consists of four types of digital products, i.e., digital orthophotos, digital elevation models, digital line graphs and digital raster graphs. For the DEM, a scheme for the database building and updating of 1:10 000 digital elevation models has been proposed and some experimental tests have been carried out. This paper describes the theoretical and/or technical background and reports some of the experimental results to support the scheme. Various aspects of the scheme, such as accuracy, data sources, data sampling, spatial resolution, terrain modeling, and data organization, are discussed.

  14. Greater Than The Sum of Its Parts:Building Up A Co-operative Database of Pearl River Delta Collection

    Institute of Scientific and Technical Information of China (English)

    Paul W. T. Poon, Ph.D., F.L.A.

    1994-01-01

    This paper starts with a brief description of what a database is, followed by a short history of the development of database systems and their use; it also notes the proliferation of various kinds of databases in the 1990s. It then outlines the background of establishing a Pearl River Delta Collection at the City University of Hong Kong and Hong Kong Lingnan College. One of the tasks in this project is to build up a database of Pearl River Delta-related materials available in all the UPGC (University and Polytechnic Grant Committee) libraries in Hong Kong. The database design and structure are described, and the problems associated with data collection, source data, and updating, together with their solutions, are explained.

  15. Database on Demand: insight how to build your own DBaaS

    CERN Document Server

    Aparicio, Ruben Gaspar

    2015-01-01

    At CERN, a number of key database applications run on user-managed MySQL, PostgreSQL and Oracle database services. The Database on Demand (DBoD) project was born out of an idea to provide the CERN user community with an environment to develop and run database services as a complement to the central Oracle-based database service. Database on Demand empowers users to perform certain actions that have traditionally been done by database administrators, providing an enterprise platform for database applications. It also allows the CERN user community to run different database engines; presently, offerings from three major RDBMS (relational database management system) vendors are available. In this article we show the status of the service after almost three years of operations, some insight into our software redesign, and its near-future evolution.

  16. Database on Demand: insight how to build your own DBaaS

    Science.gov (United States)

    Gaspar Aparicio, Ruben; Coterillo Coz, Ignacio

    2015-12-01

    At CERN, a number of key database applications run on user-managed MySQL, PostgreSQL and Oracle database services. The Database on Demand (DBoD) project was born out of an idea to provide the CERN user community with an environment to develop and run database services as a complement to the central Oracle-based database service. Database on Demand empowers users to perform certain actions that have traditionally been done by database administrators, providing an enterprise platform for database applications. It also allows the CERN user community to run different database engines; presently, offerings from three major RDBMS (relational database management system) vendors are available. In this article we show the status of the service after almost three years of operations, some insight into our software redesign, and its near-future evolution.

  17. Building a national perinatal database without the use of unique personal identifiers

    OpenAIRE

    Schnell, R.; Borgs, C

    2015-01-01

    To assess the quality of hospital care, national databases of standard medical procedures are common. A widely known example is national birth databases. If unique personal identification numbers are available (as in Scandinavian countries), the construction of such databases is trivial from a computational point of view. However, due to privacy legislation, such identifiers are not available in all countries. Given such constraints, the construction of a national perinatal database has ...

  18. Building Database Coordination in p2p Systems Using Eca Rules

    OpenAIRE

    Roshelova, Albena

    2004-01-01

    Recently, data integration systems and peer database management systems that attempt to model and integrate data in a peer-to-peer (p2p) environment have attracted the attention of researchers. Such systems allow a local relational database management system to exchange data with other nodes in a p2p environment. The database systems in p2p are completely autonomous, heterogeneous and independent, each maintaining its own data. We would like to use these databases to answer...

  19. How to Build a Standardized Country-Specific Environmental Food Database for Nutritional Epidemiology Studies.

    Science.gov (United States)

    Bertoluci, Gwenola; Masset, Gabriel; Gomy, Catherine; Mottet, Julien; Darmon, Nicole

    2016-01-01

    There is a lack of standardized country-specific environmental data to combine with nutritional and dietary data for assessing the environmental impact of individual diets in epidemiology surveys, which are consequently reliant on environmental food datasets based on values retrieved from a heterogeneous literature. The aim of this study was to compare and assess the relative strengths and limits of a database of food greenhouse gas emissions (GHGE) values estimated with a hybrid method combining input/output and LCA approaches, against a dataset of GHGE values retrieved from the literature. France is the geographical perimeter considered in this study, but the methodology could be applied to other countries. The GHGE of 402 foodstuffs, representative of the French diet, were estimated using the hybrid method. In parallel, the GHGE of individual foods were collected from the existing literature. Median per-food-category GHGE values from the hybrid method and the reviewed literature were found to correlate strongly (Spearman correlation of 0.83), showing similar rankings of food categories. Median values were significantly different for only 5 (out of 29) food categories, including the ruminant meats category, for which the hybrid method gave lower estimates than the existing literature. Analysis also revealed that literature values came from heterogeneous studies that were not always sourced and that were conducted under different LCA modeling hypotheses. In contrast, the hybrid method helps build reliably-sourced, representative national standards for product-based datasets. We anticipate that this hybrid method will be a starting point for better environmental impact assessments of diets. PMID:27054565
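    The agreement statistic quoted above is a Spearman rank correlation: rank each source's per-category median GHGE values, then take the Pearson correlation of the ranks. Below is a pure-Python sketch without tie handling; the two value lists are invented stand-ins, not the study's 29 food-category medians.

```python
def ranks(values):
    """1-based ranks of values (assumes no ties)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, idx in enumerate(order, start=1):
        r[idx] = rank
    return r

def spearman(x, y):
    """Spearman correlation = Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mean = (n + 1) / 2  # mean rank when there are no ties
    num = sum((a - mean) * (b - mean) for a, b in zip(rx, ry))
    den = (sum((a - mean) ** 2 for a in rx) *
           sum((b - mean) ** 2 for b in ry)) ** 0.5
    return num / den

hybrid = [2.1, 5.4, 1.3, 9.8, 3.0]       # made-up per-category medians
literature = [2.5, 4.9, 1.1, 14.0, 3.3]  # made-up literature medians
print(round(spearman(hybrid, literature), 2))  # 1.0: identical rankings
```

    Using ranks rather than raw values is what makes the comparison robust to the systematic level differences (e.g. in ruminant meats) that the study reports between the two sources.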

  20. Strategy and your stronger hand.

    Science.gov (United States)

    Moore, Geoffrey A

    2005-12-01

    There are two kinds of businesses in the world, says the author. Knowing what they are--and which one your company is--will guide you to the right strategic moves. One kind includes businesses that compete on a complex-systems model. These companies have large enterprises as their primary customers. They seek to grow a customer base in the thousands, with no more than a handful of transactions per customer per year (indeed, in some years there may be none), and the average price per transaction ranges from six to seven figures. In this model, 1,000 enterprises each paying $1 million per year would generate $1 billion in annual revenue. The other kind of business competes on a volume-operations model. Here, vendors seek to acquire millions of customers, with tens or even hundreds of transactions per customer per year, at an average price of relatively few dollars per transaction. Under this model, it would take 10 million customers each spending $8 per month to generate nearly $1 billion in revenue. An examination of both models shows that they could not be further apart in their approach to every step along the classic value chain. The problem, though, is that companies in one camp often attempt to create new value by venturing into the other. In doing so, they fail to realize how their managerial habits have been shaped by the model they've grown up with. By analogy, they have a "handedness"--the equivalent of a person's right- or left-hand dominance--that makes them as adroit in one mode as they are awkward in the other. Unless you are in an industry whose structure forces you to attempt ambidexterity (in which case, special efforts are required to manage the inevitable dropped balls), you'll be far more successful making moves that favor your stronger hand. PMID:16334582

  1. A Stronger Reason for the Right to Sign Languages

    Science.gov (United States)

    Trovato, Sara

    2013-01-01

    Is the right to sign language only the right to a minority language? Adopting a capability (not a disability) approach, and building on the psycholinguistic literature on sign language acquisition, I make the point that this right is of a stronger nature, since only sign languages can guarantee that each deaf child will properly develop the…

  2. EuroWordNet : building a multilingual WordNet database with semantic relations between words

    OpenAIRE

    Verdejo Maillo, María Felisa

    1996-01-01

    The project aims at developing a multilingual database with basic semantic relations between words for several European languages (Dutch, Italian and Spanish). The wordnets will be linked to the American WordNet for English and a shared top-ontology will be derived, while language specific properties are maintained in the individual wordnets. The database can be used for multilingual information retrieval which will be demonstrated by Novell Linguistic Development.

  3. Natural radioactivity in building materials in the European Union: a database and an estimate of radiological significance

    International Nuclear Information System (INIS)

    The authors set up a database of activity concentration measurements of natural radionuclides (226Ra, 232Th and 40K) in building material. It contains about 10,000 samples of both bulk material (bricks, concrete, cement, natural gypsum and phosphogypsum, sedimentary and igneous bulk stones) and superficial material (igneous and metamorphic stones) used in the construction industry in most European Union Member States. The database allowed the authors to calculate the activity concentration index I - suggested by a European technical guidance document and recently used as a basis for elaborating the draft Euratom Basic Safety Standards Directive - for bricks, concrete and phosphogypsum used in the European Union. Moreover, the percentage could be assessed of materials possibly subject to restrictions, if either of the two dose criteria proposed by the technical guidance were to be adopted. - Highlights: ► A database of natural radioactivity in building material was set up. ► It contains data related to 10,000 samples of both products and materials in the EU. ► The activity concentration index I, suggested by the EU RP 112, was computed. ► The adoption of the 0.3 mSv/y dose criterion of the EU RP 112 is too ambitious. ► A health goal of 1 mSv/y appears more realistic.
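
    The activity concentration index I referred to above is, in the EU technical guidance (RP 112), a weighted sum of the three activity concentrations with reference levels of 300, 200 and 3000 Bq/kg; materials with I ≤ 1 are taken to meet the 1 mSv/y criterion for bulk use (I ≤ 0.5 for 0.3 mSv/y). A minimal sketch, with illustrative sample values (not from the database):

```python
# Activity concentration index I per EU technical guidance RP 112:
#   I = C_Ra226/300 + C_Th232/200 + C_K40/3000, activities in Bq/kg.
# The concrete sample below uses illustrative values, not database entries.

def activity_index(c_ra: float, c_th: float, c_k: float) -> float:
    return c_ra / 300 + c_th / 200 + c_k / 3000

i = activity_index(c_ra=40, c_th=30, c_k=400)  # a typical-looking concrete
bulk_use_ok = i <= 1.0  # screening against the 1 mSv/y criterion
```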

  4. Energy Performance Database of Building Heritage in the Region of Umbria, Central Italy

    OpenAIRE

    Cinzia Buratti; Francesco Asdrubali; Domenico Palladino; Antonella Rotili

    2015-01-01

    Household energy consumption has been increasing in the last decades; the residential sector is responsible for about 40% of the total final energy use in Europe. Energy efficiency measures can reduce both the energy needs of buildings and energy-related CO2 emissions. For this reason, in recent years, the European Union has been making efforts to enhance energy saving in buildings by introducing various policies and strategies; in this context, a common methodology was developed to assess and t...

  5. Building a Nationwide Bibliographic Database: The Role of Local Shared Automated Systems.

    Science.gov (United States)

    Wetherbee, Louella V.

    1992-01-01

    Discusses the actual and potential impact of local shared automated library systems on the development of a comprehensive nationwide bibliographic database (NBD). Shared local automated systems are described; four local shared automated system models are compared; and the current interface between local shared automated library systems and the NBD…

  6. A bioinformatics pipeline to build a knowledge database for in silico antibody engineering.

    Science.gov (United States)

    Zhao, Shanrong; Lu, Jin

    2011-04-01

    A challenge in antibody engineering is the large number of candidate positions and possible variations, set against the opposing concern of introducing unfavorable biochemical properties. While large libraries are quite successful in identifying antibodies with improved binding or activity, only a fraction of the possibilities can be explored, and doing so requires considerable effort. The vast array of natural antibody sequences provides a potential wealth of information for (1) selecting hotspots for variation, and (2) designing mutants that mimic natural variations seen in hotspots. The human immune system can generate an enormous diversity of immunoglobulins against an almost unlimited range of antigens by gene rearrangement of a limited number of germline variable, diversity and joining genes, followed by somatic hypermutation and antigen selection. All the antibody sequences in the NCBI database can be assigned to different germline genes. As a result, a position-specific scoring matrix for each germline gene can be constructed by aligning all its member sequences and calculating the amino acid frequencies at each position. The position-specific scoring matrix for each germline gene characterizes "hotspots" and the nature of variations, and thus reduces the sequence space to explore in antibody engineering. We have developed a bioinformatics pipeline to conduct analysis of human antibody sequences, and generated a comprehensive knowledge database for in silico antibody engineering. The pipeline is fully automatic, and the knowledge database can be refreshed at any time by re-running the pipeline. The refresh process is fast, typically taking 1 min on a Lenovo ThinkPad T60 laptop with 3 GB of memory. Our knowledge database consists of (1) the individual germline gene usage in the generation of natural antibodies; (2) the CDR length distributions; and (3) the position-specific scoring matrix for each germline gene. The knowledge database provides comprehensive support for antibody engineering.
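
    The core step described above - align member sequences of a germline gene and record per-position amino acid frequencies - can be sketched in a few lines. The toy sequences below are illustrative, not real germline alignments:

```python
# Position-specific scoring matrix (PSSM) construction: per alignment
# position, the frequency of each amino acid across member sequences.

from collections import Counter

def build_pssm(aligned_seqs):
    """Return one dict per position mapping amino acid -> frequency."""
    length = len(aligned_seqs[0])
    assert all(len(s) == length for s in aligned_seqs)
    pssm = []
    for pos in range(length):
        counts = Counter(s[pos] for s in aligned_seqs)
        total = sum(counts.values())
        pssm.append({aa: n / total for aa, n in counts.items()})
    return pssm

seqs = ["QVQLVQ", "QVQLVE", "QVKLVQ", "QVQLLQ"]  # toy aligned sequences
pssm = build_pssm(seqs)
# Position 2 (0-based) reads as a "hotspot": Q dominates but K appears.
```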

  7. Building an industry-wide occupational exposure database for respirable mineral dust - experiences from the IMA dust monitoring programme

    Science.gov (United States)

    Houba, Remko; Vlaanderen, Jelle; Jongen, Richard; Kromhout, Hans

    2009-02-01

    Building an industry-wide database with exposure measurements of respirable mineral dust is a challenging operation. The Industrial Minerals Association (IMA-Europe) took the initiative to create an exposure database filled with data from a prospective and ongoing dust monitoring programme launched in 2000. More than 20 industrial mineral companies have been collecting exposure data following a common protocol since then. In 2007, ArboUnie and IRAS evaluated the quality of the exposure data collected up to winter 2005/2006. The data evaluated were collected in 11 sampling campaigns by 24 companies at 84 different worksites and comprised about 8,500 respirable dust measurements and 7,500 respirable crystalline silica measurements. In the quality assurance exercise, four criteria were used to evaluate the existing measurement data: personal exposure measurement, unique worker identity, sampling duration not longer than one shift, and availability of a limit of detection. Review of the existing exposure data in the IMA dust monitoring programme database showed that 58% of the collected respirable dust measurements and 62% of the collected respirable quartz measurements could be regarded as 'good quality data' meeting the four criteria mentioned above. Only one third of the measurement data included repeated measurements (within a sampling campaign) that would allow advanced statistical analysis incorporating estimates of within- and between-worker variability in exposure to respirable mineral dust. These data came from 7 companies, comprising measurements from 23 sites. Problematic data were collected in some specific countries, and to a large extent this was due to local practices and legislation (e.g. allowing 40-h time weighted averages). It was concluded that the potential of this unique industry-wide exposure database is very high, but that considerable improvements can be made. At the end of 2006 relatively small but essential changes were made in the dust monitoring
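
    The four quality-assurance criteria described above amount to a simple record filter: keep a measurement only if it is personal, tied to a unique worker, no longer than one shift, and accompanied by a limit of detection. A sketch of such a filter; the field names, shift limit, and records are hypothetical, not the IMA database schema:

```python
# Quality filter for exposure measurements, modeled on the four criteria
# described in the abstract. All field names and records are invented.

SHIFT_MAX_MIN = 12 * 60  # assume one shift is at most 12 h

def good_quality(m: dict) -> bool:
    return (m.get("personal") is True                 # personal, not static, sample
            and bool(m.get("worker_id"))              # unique worker identity
            and 0 < m.get("duration_min", 0) <= SHIFT_MAX_MIN
            and m.get("lod_mg_m3") is not None)       # limit of detection known

measurements = [
    {"personal": True,  "worker_id": "W1", "duration_min": 480,  "lod_mg_m3": 0.05},
    {"personal": False, "worker_id": "W2", "duration_min": 480,  "lod_mg_m3": 0.05},  # static sample
    {"personal": True,  "worker_id": "W3", "duration_min": 2400, "lod_mg_m3": 0.05},  # 40-h TWA
    {"personal": True,  "worker_id": "",   "duration_min": 450,  "lod_mg_m3": None},
]

good = [m for m in measurements if good_quality(m)]
```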

  8. Visual Localization in Urban Area Using Orthogonal Building Boundaries and a GIS Database

    Institute of Scientific and Technical Information of China (English)

    LI Haifeng; LIU Jingtai; LU Xiang

    2012-01-01

    A framework is presented for robustly estimating the location of a mobile robot in urban areas based on images extracted from a monocular onboard camera, given a 2D map with building outlines but neither 3D geometric information nor appearance data. The proposed method first reconstructs a set of vertical planes by sampling and clustering vertical lines from the image with random sample consensus (RANSAC), using the derived 1D homographies to inform the planar model. Then, an optimal autonomous localization algorithm based on the 2D building boundary map is proposed. Physical experiments were carried out to validate the robustness and accuracy of our localization approach.
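
    The RANSAC step above follows the usual hypothesize-and-verify loop: fit a model to a random minimal sample and keep the hypothesis with the most inliers. A minimal sketch fitting a 2D line rather than the paper's vertical planes; the data, tolerance, and iteration count are toy values:

```python
# Minimal RANSAC: repeatedly fit a 2D line y = a*x + b to two random
# points and keep the hypothesis that gathers the most inliers.

import random

def ransac_line(points, iters=200, tol=0.1, rng=None):
    rng = rng or random.Random(0)
    model, best_inliers = None, []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # degenerate sample for this parameterization
        a = (y2 - y1) / (x2 - x1)        # slope
        b = y1 - a * x1                  # intercept
        inliers = [(x, y) for x, y in points if abs(y - (a * x + b)) < tol]
        if len(inliers) > len(best_inliers):
            model, best_inliers = (a, b), inliers
    return model, best_inliers

# 20 points on y = 2x + 1 plus 5 gross outliers
points = [(x * 0.1, 2 * (x * 0.1) + 1) for x in range(20)]
points += [(0.5, 9.0), (1.2, -4.0), (0.3, 7.5), (1.8, 0.0), (0.9, 12.0)]
model, inliers = ransac_line(points)
```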

  9. Building a disease-specific research database - longitudinal follow-up of thyroid cancer patients

    Czech Academy of Sciences Publication Activity Database

    Vejvalka, J.; Pejcha, T.; Varga, F.; Křenek, M.; Vlček, P.; Jirsa, Ladislav

    Bari : Politecnico di Bari, 2007 - (Sicurello, F.; Mastronardi, G.), s. 111-111 ISBN 978-88-95614-02-1. [Italian Association of Telemedicine and Medical Informatics --- 8th Congress. Bari (IT), 13.12.2007-15.12.2007] R&D Projects: GA AV ČR(CZ) 1ET100750404 Institutional research plan: CEZ:AV0Z10750506 Keywords : research database * thyroid cancer * longitudinal follow-up Subject RIV: IN - Informatics, Computer Science

  10. How to build a standardized country-specific environmental food database for nutritional epidemiology studies

    OpenAIRE

    Masset, Gabriel; Gomy, Catherine; Mottet, Julien; Darmon, Nicole

    2016-01-01

    There is a lack of standardized country-specific environmental data to combine with nutritional and dietary data for assessing the environmental impact of individual diets in epidemiology surveys, which are consequently reliant on environmental food datasets based on values retrieved from a heterogeneous literature. The aim of this study was to compare and assess the relative strengths and limits of a database of food greenhouse gas emissions (GHGE) values estimated with a hybrid method combi...

  11. Building Viewpoints in an Object-Based Representation System for Knowledge Discovery in Databases

    OpenAIRE

    Simon, Arnaud; Napoli, Amedeo

    1999-01-01

    In this paper, we present an approach to knowledge discovery in databases in the context of object-based representation systems. The goal of this approach is to extract viewpoints and association rules from data represented by objects. A viewpoint is a hierarchy of classes (a kind of partial lattice), and an association rule can be defined within a viewpoint or between two classes lying in different viewpoints. The viewpoint construction algorithm makes it possible to manipulate objects which are indiff...

  12. Building a highly available and intrusion tolerant database security and protection system ( DSPS)

    Institute of Scientific and Technical Information of China (English)

    蔡亮; 杨小虎; 董金祥

    2003-01-01

    Database Security and Protection System (DSPS) is a security platform for defending a DBMS against malicious attacks. Security and performance are critical to DSPS. The authors suggest a key management scheme that combines the server group structure, to improve availability, with the key distribution structure needed for proactive security. This paper details the implementation of proactive security in DSPS. After thorough performance analysis, the authors conclude that the performance difference between the replicated mechanism and the proactive mechanism becomes smaller and smaller with an increasing number of concurrent connections, and that proactive security is very useful and practical for large, critical applications.

  13. The Fluka Linebuilder and Element Database: Tools for Building Complex Models of Accelerators Beam Lines

    CERN Document Server

    Mereghetti, A; Cerutti, F; Versaci, R; Vlachoudis, V

    2012-01-01

    Extended FLUKA models of accelerator beam lines can be extremely complex: heavy to manipulate, poorly versatile and prone to mismatched positioning. We developed a framework capable of creating the FLUKA model of an arbitrary portion of a given accelerator, starting from the optics configuration and a small amount of other information provided by the user. The framework includes a builder (LineBuilder), an element database and a series of configuration and analysis scripts. The LineBuilder is a Python program aimed at dynamically assembling complex FLUKA models of accelerator beam lines: positions, magnetic fields and scorings are automatically set up, and geometry details such as apertures of collimators, tilting and misalignment of elements, beam pipes and tunnel geometries can be entered at the user's will. The element database (FEDB) is a collection of detailed FLUKA geometry models of machine elements. This framework has been widely used for recent LHC and SPS beam-machine interaction studies at CERN, and led to a dra...
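
    The LineBuilder idea - walk an optics configuration and place geometry templates from an element database at the right longitudinal positions - can be caricatured in a few lines. Everything here is hypothetical for illustration: the element names, fields, and lengths are invented and bear no relation to the real FEDB or FLUKA input format:

```python
# Illustrative "line builder": place prototypes from an element database
# along the beam line at positions given by an optics table.

ELEMENT_DB = {  # hypothetical element database
    "MB":  {"length": 14.3, "kind": "dipole"},
    "MQ":  {"length": 3.1,  "kind": "quadrupole"},
    "TCP": {"length": 1.0,  "kind": "collimator"},
}

def build_line(optics):
    """optics: list of (name, s_start) pairs; returns placed elements."""
    line = []
    for name, s_start in optics:
        proto = ELEMENT_DB[name]
        line.append({"name": name, "kind": proto["kind"],
                     "s_start": s_start, "s_end": s_start + proto["length"]})
    return line

line = build_line([("MB", 0.0), ("MQ", 20.0), ("TCP", 30.0)])
```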

  14. Building a fingerprint database for modern art materials: PIXE analysis of commercial painting and drawing media

    Science.gov (United States)

    Zucchiatti, A.; Climent-Font, A.; Gómez-Tejedor, J. García; Martina, S.; Muro García, C.; Gimeno, E.; Hernández, P.; Canelo, N.

    2015-11-01

    We have examined by PIXE (and by RBS in parallel) about 180 samples of commercial painting and drawing media including pencils, pastels, waxes, inks, paints and paper. Given the high PIXE sensitivity we produced X-ray spectra at low collected charges and currents, operating in good conservation conditions. For drawing media containing inorganic components or a unique marker element, we have defined colouring agent fingerprints which correspond, when applicable, to the composition declared by the manufacturer. For thin layers, the ratios of areal densities of elements are close to those expected given the declared composition, which is promising from the perspective of compiling the database. The quantitative PIXE and RBS analysis of part of the set of samples is provided.

  15. Improving the thermal integrity of new single-family detached residential buildings: Documentation for a regional database of capital costs and space conditioning load savings

    International Nuclear Information System (INIS)

    This report summarizes the costs and space-conditioning load savings from improving new single-family building shells. It relies on survey data from the National Association of Home Builders (NAHB) to assess current insulation practices for these new buildings, and NAHB cost data (aggregated to the Federal region level) to estimate the costs of improving new single-family buildings beyond current practice. Space-conditioning load savings are estimated using a database of loads for prototype buildings developed at Lawrence Berkeley Laboratory, adjusted to reflect population-weighted average weather in each of the ten federal regions and for the nation as a whole.

  16. Brain Gym[R]: Building Stronger Brains or Wishful Thinking?

    Science.gov (United States)

    Hyatt, Keith J.

    2007-01-01

    As part of the accountability movement, schools are increasingly called upon to provide interventions that are based on sound scientific research and that provide measurable outcomes for children. Brain Gym[R] is a popular commercial program claiming that adherence to its regimen will result in more efficient learning in an almost miraculous…

  17. Building a Patient-Specific Risk Score with a Large Database of Discharge Summary Reports.

    Science.gov (United States)

    Qu, Zhi; Zhao, Lue Ping; Ma, Xiemin; Zhan, Siyan

    2016-01-01

    BACKGROUND There is increasing interest in clinical research with electronic medical data, but such research often faces the challenge of heterogeneity between hospitals. Our objective was to develop a single numerical score characterizing such heterogeneity, by modeling inpatient mortality among patients treated for acute myocardial infarction (AMI) based on diagnostic information recorded in a database of Discharge Summary Reports (DSRs). MATERIAL AND METHODS Using 4 216 135 DSRs from 49 tertiary hospitals from 2006 to 2010 in Beijing, more than 200 secondary diagnoses were identified to develop a risk score for AMI (n=50 531). This risk score was independently validated with 21 571 DSRs from 65 tertiary hospitals in 2012. The c-statistic of the new risk score was computed as a measure of discrimination and was compared with the Charlson comorbidity index (CCI) and its adaptations for further validation. RESULTS We identified and weighted 22 secondary diagnoses using a logistic regression model. In the external validation, the novel risk score performed better than the widely used CCI in predicting in-hospital mortality of AMI patients (c-statistics: 0.829, 0.832, and 0.824 vs. 0.775, 0.773, and 0.710 in the training, testing, and validation datasets, respectively). CONCLUSIONS The new risk score developed from DSRs outperforms the existing administrative-data-based indices when applied to healthcare data from China. This risk score can be used to adjust for heterogeneity between hospitals when clinical data from multiple hospitals are included. PMID:27318825
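
    The validation step above scores each patient from weighted secondary diagnoses and measures discrimination with the c-statistic (the probability that a patient who died outranks one who survived). A sketch of both pieces; the diagnosis weights and patient records are invented for illustration, not the study's fitted model:

```python
# Risk scoring from weighted secondary diagnoses, plus the c-statistic
# as a pairwise concordance probability. Weights and records are made up.

WEIGHTS = {"shock": 1.9, "renal_failure": 1.1, "pneumonia": 0.7}  # hypothetical

def risk_score(diagnoses):
    return sum(WEIGHTS.get(d, 0.0) for d in diagnoses)

def c_statistic(scores_died, scores_survived):
    """Fraction of (died, survived) pairs where the death scored higher;
    ties count half."""
    pairs = concordant = 0
    for d in scores_died:
        for s in scores_survived:
            pairs += 1
            if d > s:
                concordant += 1
            elif d == s:
                concordant += 0.5
    return concordant / pairs

died     = [risk_score(x) for x in (["shock", "renal_failure"], ["shock"])]
survived = [risk_score(x) for x in (["pneumonia"], [], ["renal_failure"])]
c = c_statistic(died, survived)
```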

  18. Towards Global QSAR Model Building for Acute Toxicity: Munro Database Case Study

    Directory of Open Access Journals (Sweden)

    Swapnil Chavan

    2014-10-01

    Full Text Available A series of 436 Munro database chemicals were studied with respect to their corresponding experimental LD50 values to investigate the possibility of establishing a global QSAR model for acute toxicity. Dragon molecular descriptors were used for the QSAR model development, and genetic algorithms were used to select the descriptors best correlated with the toxicity data. Toxicity values were discretized into qualitative classes on the basis of the Globally Harmonized Scheme: the 436 chemicals were divided into 3 classes based on their experimental LD50 values: highly toxic, intermediately toxic, and low to non-toxic. The k-nearest neighbor (k-NN) classification method was calibrated on 25 molecular descriptors and gave a non-error rate (NER) equal to 0.66 and 0.57 for the internal and external prediction sets, respectively. Although the classification performance is not optimal, the subsequent analysis of the selected descriptors and their relationship with toxicity levels constitutes a step towards the development of a global QSAR model for acute toxicity.
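
    The two metrics above fit in a few lines: k-NN assigns the majority class of the k nearest training points, and the non-error rate is the mean of the per-class accuracies (which, unlike plain accuracy, is not dominated by the largest class). A sketch on toy descriptor vectors, not the Munro set:

```python
# k-nearest-neighbour classification and the non-error rate (NER).

def knn_predict(train, query, k=3):
    """train: list of (vector, label); return majority label of k nearest."""
    nearest = sorted(train,
                     key=lambda t: sum((a - b) ** 2 for a, b in zip(t[0], query)))
    votes = {}
    for _, label in nearest[:k]:
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)

def non_error_rate(truth, pred):
    """Mean of per-class accuracies."""
    per_class = []
    for c in set(truth):
        idx = [i for i, t in enumerate(truth) if t == c]
        per_class.append(sum(pred[i] == truth[i] for i in idx) / len(idx))
    return sum(per_class) / len(per_class)

train = [((0.0, 0.1), "low"), ((0.2, 0.0), "low"), ((0.1, 0.2), "low"),
         ((1.0, 1.1), "high"), ((0.9, 1.0), "high"), ((1.1, 0.9), "high")]
tests = [((0.1, 0.1), "low"), ((1.0, 1.0), "high"), ((0.8, 0.9), "high")]

pred = [knn_predict(train, v) for v, _ in tests]
ner = non_error_rate([t for _, t in tests], pred)
```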

  19. The IMPEx Protocol - building bridges between scientific databases and online tools

    Science.gov (United States)

    Al-Ubaidi, T.; Khodachenko, M. L.; Kallio, E. J.; Génot, V.; Modolo, R.; Hess, S.; Schmidt, W.; Scherf, M.; Topf, F.; Alexeev, I. I.; Gangloff, M.; Budnik, E.; Bouchemit, M.; Renard, B.; Bourrel, N.; Penou, E.; André, N.; Belenkaya, E. S.

    2014-04-01

    The FP7-SPACE project IMPEx (http://impex-fp7.oeaw.ac.at) was established as a result of scientific collaboration between research teams from Austria, Finland, France, and Russia, working on the integration of a set of data mining, analysis and modeling tools in the field of space plasma and planetary physics. The primary goal of the project is to bridge the gap between spacecraft measurements and up-to-date computational models of planetary environments, enabling their joint operation for a better understanding of related physical phenomena. The IMPEx Protocol constitutes one of the cornerstones of the integration effort. While the IMPEx Data Model assures that the information exchanged can be 'understood' and hence processed by every participating tool or database system, the protocol provides the means to leverage specific functionalities of the respective host system in conjunction with the data provided. Examples thereof would be services for calculating field lines and particle trajectories, on-the-fly modeling runs with specific parameters, and so forth. Additionally, there are utility methods available that allow one to, e.g., access specific data files or support search interfaces by providing ranked lists of stored modeling runs for a given set of (upstream) parameters. The presentation offers an overview of the IMPEx Protocol and addresses the motivation for some of the technical design decisions taken during the development process. Further, the resulting SOAP-based web service interface is discussed, and individual services and their applications are addressed specifically. Last but not least, the first available implementations of the protocol are presented and a brief overview of tools already leveraging the IMPEx Protocol is provided. The presentation closes with an outlook on possible future applications as well as extensions of the IMPEx Protocol, including information on how to get started when implementing the IMPEx Protocol, in order to join the

  20. Chemical reaction due to stronger Ramachandran interaction

    Indian Academy of Sciences (India)

    Andrew Das Arulsamy

    2014-05-01

    The origin of a chemical reaction between two reactant atoms is associated with the activation energy, on the assumption that high-energy collisions between these atoms are the ones that overcome the activation energy. Here, we show that stronger attractive van der Waals (vdW) and electron-ion Coulomb interactions between two polarized atoms are responsible for initiating a chemical reaction, either before or after the collision. We derive this stronger vdW attraction formula exactly using the quasi one-dimensional Drude model within the ionization energy theory and the energy-level spacing renormalization group method. Along the way, we expose the precise physical mechanism responsible for the existence of a stronger vdW interaction at both long and short distances, and also show how to technically avoid the electron-electron Coulomb repulsion between polarized electrons from these two reactant atoms. Finally, we properly and correctly associate the existence of this stronger attraction with Ramachandran's 'normal limits' (distances shorter than what is allowed by the standard vdW bond) between chemically nonbonded atoms.

  1. Building

    OpenAIRE

    Seavy, Ryan

    2014-01-01

    Building for concrete is temporary. The building of wood and steel stands against the concrete to give form and then gives way, leaving a trace of its existence behind. Concrete is not a building material. One does not build with concrete. One builds for concrete.

  2. Building a recruitment database for asthma trials:A conceptual framework for the creation of the UK Database of Asthma Research Volunteers

    OpenAIRE

    Nwaru, Bright I; Soyiri, Ireneous N; Simpson, Colin R; Griffiths, Chris; Sheikh, Aziz

    2016-01-01

    Background: Randomised clinical trials are the 'gold standard' for evaluating the effectiveness of healthcare interventions. However, successful recruitment of participants remains a key challenge for many trialists. In this paper, we present a conceptual framework for creating a digital, population-based database for the recruitment of asthma patients into future asthma trials in the UK. Having set up the database, the goal is to then make it available to support investigators planning asthm...

  3. Bibliometric analysis of Spanish scientific publications in the subject Construction & Building Technology in Web of Science database (1997-2008

    Directory of Open Access Journals (Sweden)

    Rojas-Sola, J. I.

    2010-12-01

    Full Text Available In this paper, the publications from Spanish institutions appearing in journals of the Construction & Building Technology subject category of the Web of Science database for the period 1997-2008 are analyzed. The number of journals involved is 35, and the number of articles published was 760 (Article or Review). A bibliometric assessment has been carried out, and we propose two new parameters: Weighted Impact Factor and Relative Impact Factor; the number of citations and the number of documents at the institutional level are also included. Among the institutions with the greatest scientific production stands out, as expected, the Eduardo Torroja Institute of Construction Sciences (CSIC), while by Weighted Impact Factor the University of Vigo ranks first. On the other hand, only two journals, Cement and Concrete Materials and Materiales de Construcción, account for 45.26% of the Spanish scientific production published in the Construction & Building Technology category, with 172 papers each. Regarding international cooperation, countries such as England, Mexico, the United States, Italy, Argentina and France stand out.


  4. Aggregation functions with stronger types of monotonicity

    Czech Academy of Sciences Publication Activity Database

    Klement, E.P.; Manzi, M.; Mesiar, Radko

    Berlin: Springer, 2010 - (Hüllermeier, E.; Kruse, R.; Hoffmann, F.), s. 218-224. (Lecture Notes in Artificial Intelligence. 6178). ISBN 978-3-642-14048-8. ISSN 0302-9743. [IPMU 2010 /13./. Dortmund (DE), 28.06.2010-02.07.2010] Institutional research plan: CEZ:AV0Z10750506 Keywords: ultramodularity * 2-increasingness * k-monotonicity Subject RIV: BA - General Mathematics http://library.utia.cas.cz/separaty/2010/E/mesiar-aggregation functions with stronger types of monotonicity.pdf

  5. Development of application program and building database to increase facilities for using the radiation effect assessment computer codes

    International Nuclear Information System (INIS)

    The current radiation effect assessment system requires skilled technique in applying the various codes and a high level of specialized knowledge in each field. Therefore, it is very difficult for radiation users who lack such specialized knowledge to assess or interpret radiation effects properly. For this reason, we previously developed five Windows-based computer codes constituting the radiation effect assessment system for radiation-utilizing fields, including nuclear power generation. A computer program is needed so that non-specialists can easily use these five codes. We therefore implemented an AI-based expert system that can infer the appropriate assessment by itself, according to the characteristics of a given problem. The expert program can guide users, search data, and query the administrator directly. Conceptually, with the circumstances a user of the five computer codes may actually encounter in mind, we considered the following aspects. First, the accessibility of the necessary concepts and data must be improved. Second, the acquisition of the underlying theory and the use of the corresponding computer code must be easy. Third, a Q&A function is needed to resolve users' questions not considered previously. Finally, the database must be renewed continuously. To meet these needs, we developed a client program to organize reference data, built the access methodology (queries) for the organized data, and added a visualization function for the retrieved data. An instruction method (an effective theory acquisition procedure and methodology) for learning the theory behind the five computer codes was implemented. A data structure access program (DBMS) was developed to renew data continuously with ease. For the Q&A function, a Q&A board was implemented within the client program so that users can search the content of questions and answers. (authors)

  6. LHC Season 2: A stronger machine

    CERN Multimedia

    Dominguez, Daniel

    2015-01-01

    1) New magnets 2) Stronger electrical connections 3) Safer magnets 4) Higher energy beams 5) Narrower beams 6) Smaller but closer proton packets 7) Higher voltage 8) Superior cryogenics 9) Radiation-resistant electronics 10) More secure vacuum

  7. States agree on stronger physical protection regime

    International Nuclear Information System (INIS)

    Full text: Delegates from 89 countries agreed on 8 July to fundamental changes that will substantially strengthen the Convention on the Physical Protection of Nuclear Material (CPPNM). IAEA Director General Mohamed ElBaradei welcomed the agreement, saying: 'This new and stronger treaty is an important step towards greater nuclear security by combating, preventing, and ultimately punishing those who would engage in nuclear theft, sabotage or even terrorism. It demonstrates that there is indeed a global commitment to remedy weaknesses in our nuclear security regime.' The amended CPPNM makes it legally binding for States Parties to protect nuclear facilities and material in peaceful domestic use and storage as well as transport. It will also provide for expanded cooperation between and among States regarding rapid measures to locate and recover stolen or smuggled nuclear material, mitigate any radiological consequences of sabotage, and prevent and combat related offences. The original CPPNM applied only to nuclear material in international transport. Conference President Dr. Alec Baer said: 'All 89 delegations demonstrated real unity of purpose. They put aside some very genuine national concerns in favour of the global interest, and the result is a much improved convention that is better suited to addressing the nuclear security challenges we currently face.' The new rules will come into effect once they have been ratified by two-thirds of the 112 States Parties to the Convention, which is expected to take several years. 'But concrete actions are already taking place around the world. For more than 3 years, the IAEA has been implementing a systematic Nuclear Security Plan, including physical protection activities designed to prevent, detect and respond to malicious acts,' said Anita Nillson, Director of the IAEA's Office of Nuclear Security. The Agency's Nuclear Security Fund, set up after the events of 9/11, has delivered $19.5 million in practical assistance to 121 countries.

  8. A Human Capital Framework for a Stronger Teacher Workforce. Advancing Teaching--Improving Learning. White Paper

    Science.gov (United States)

    Myung, Jeannie; Martinez, Krissia; Nordstrum, Lee

    2013-01-01

    Building a stronger teacher workforce requires the thoughtful orchestration of multiple processes working together in a human capital system. This white paper presents a framework that can be used to take stock of current efforts to enhance the teacher workforce in school districts or educational organizations, as well as their underlying theories…

  9. THE BUILDING OF THE SPATIAL DATABASE OF THE SATCHINEZ ORNITHOLOGICAL RESERVE AS A PREMISE OF MODERN ECOLOGICAL RESEARCH

    Directory of Open Access Journals (Sweden)

    M. Török-Oance

    2005-01-01

    Full Text Available The creation of a database for the Ornithological Reserve “The Satchinez Marshes” was a necessity for modern and complex ecological research. The database offers the possibility of precisely localizing the species of plants and animals identified in the field, and it is a genuine basis for identifying the main types of habitats and ecosystems in the reserve. With the help of the Terrain Numerical Model, it became possible to analyse, at the level of individual pixels, the abiotic factors involved in the distribution of ecosystems, and to produce three-dimensional visualizations of all the results. With the help of aerophotograms taken in 1963 and 1973, we reconstructed the situation of the reserve at that time by creating a database of land use for that period. The same was done for 2004, using more diverse sources: cadastral plans, satellite images, aerial photographs taken in the same year and, last but not least, data collected in the field in 2003-2004. This database can therefore be considered a benchmark (for the year 2004) both for identifying the changes that occurred between 1963 and 2004 and for future research.

  10. Essentially stronger - 1999 EPCOR annual report

    International Nuclear Information System (INIS)

    The year 1999 was a year of consolidation for EPCOR Utilities, uniting the former brands of Edmonton Power, Aqualta and Eltec under a single new brand, EPCOR, to provide Edmontonians with a safe, high-quality and reliable essential service at competitive prices. The company is building for growth by augmenting its product line with natural gas and green power, accessing new capital, proceeding with new projects at various sites, and creating the EPCOR Power Development Corporation with an ambitious mandate to grow beyond the Utility's traditional service areas. In proof of that, EPCOR Water Services won a strategically important contract in Port Hardy, BC, and EPCOR Technologies has also been involved in projects beyond Alberta. As a sign of confidence in the company, the City of Edmonton voted in July to retain ownership. The utility also won national awards for both safety and environmental practices and is the first utility company to have all its generating plants meet ISO 14001 standards. During 2000 the company will tackle the evolution of industry restructuring and will explore more diverse financial structures to accommodate growth and the increase in demand for services, to make sure that EPCOR will be a leading provider of electric power and natural gas services as the era of deregulated, competitive electrical services in Alberta begins in 2001. This report provides details of the achievements of the company's business units in 1999, accompanied by a consolidated financial statement.

  11. Data on publications, structural analyses, and queries used to build and utilize the AlloRep database.

    Science.gov (United States)

    Sousa, Filipa L; Parente, Daniel J; Hessman, Jacob A; Chazelle, Allen; Teichmann, Sarah A; Swint-Kruse, Liskin

    2016-09-01

    The AlloRep database (www.AlloRep.org) (Sousa et al., 2016) [1] compiles extensive sequence, mutagenesis, and structural information for the LacI/GalR family of transcription regulators. Sequence alignments are presented for >3000 proteins in 45 paralog subfamilies and as a subsampled alignment of the whole family. Phenotypic and biochemical data on almost 6000 mutants have been compiled from an exhaustive search of the literature; citations for these data are included herein. These data include information about oligomerization state, stability, DNA binding and allosteric regulation. Protein structural data for 65 proteins are presented as easily-accessible, residue-contact networks. Finally, this article includes example queries to enable the use of the AlloRep database. See the related article, "AlloRep: a repository of sequence, structural and mutagenesis data for the LacI/GalR transcription regulators" (Sousa et al., 2016) [1]. PMID:27508249
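The example queries mentioned in the abstract are specific to the AlloRep schema and are not reproduced here. As a loose sketch of the kind of mutant-phenotype lookup such a repository supports, the following uses an invented miniature table; all table and column names are hypothetical and do not reflect AlloRep's actual schema.

```python
import sqlite3

# Hypothetical miniature schema for illustration only; the real
# AlloRep schema differs.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE mutants (
    protein TEXT, position INTEGER, substitution TEXT, phenotype TEXT)""")
conn.executemany(
    "INSERT INTO mutants VALUES (?, ?, ?, ?)",
    [("LacI", 52, "A", "loss of DNA binding"),
     ("LacI", 74, "V", "wild-type"),
     ("GalR", 103, "T", "reduced repression")])

# Example query: all recorded LacI mutants with a non-wild-type phenotype.
rows = conn.execute(
    "SELECT position, substitution, phenotype FROM mutants "
    "WHERE protein = 'LacI' AND phenotype != 'wild-type' "
    "ORDER BY position").fetchall()
print(rows)
```

The point of such compilations is exactly this kind of filtered retrieval over thousands of literature-derived mutant records.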

  12. Building strategies for tsunami scenarios databases to be used in a tsunami early warning decision support system: an application to western Iberia

    Science.gov (United States)

    Tinti, S.; Armigliato, A.; Pagnoni, G.; Zaniboni, F.

    2012-04-01

    One of the most challenging goals that the geo-scientific community has faced since the catastrophic tsunami of December 2004 in the Indian Ocean is to develop the so-called "next generation" Tsunami Early Warning Systems (TEWS). Indeed, "next generation" does not refer to the aim of a TEWS, which obviously remains to detect whether or not a tsunami has been generated by a given source and, if so, to send proper warnings and/or alerts in suitable time to all the countries and communities that may be affected. Instead, "next generation" refers to the development of a Decision Support System (DSS) that, in general terms, relies on 1) an integrated set of seismic, geodetic and marine sensors whose objective is to detect and characterise possible tsunamigenic sources and to monitor instrumentally the time and space evolution of the generated tsunami, 2) databases of pre-computed numerical tsunami scenarios, to be suitably combined based on the information coming from the sensor environment and used to forecast the degree of exposure of different coastal places both in the near- and in the far-field, and 3) a proper overall (software) system architecture. The EU-FP7 TRIDEC Project aims at developing such a DSS and has selected two test areas in the Euro-Mediterranean region, namely the western Iberian margin and the eastern Mediterranean (Turkish coasts). In this study, we discuss the strategies being adopted in TRIDEC to build the databases of pre-computed tsunami scenarios and we show some applications to the western Iberian margin. In particular, two different databases are being populated, called the "Virtual Scenario Database" (VSDB) and the "Matching Scenario Database" (MSDB). The VSDB contains detailed simulations of a few selected earthquake-generated tsunamis. The cases provided by the members of the VSDB are computed "real events"; in other words, they represent the unknowns that the TRIDEC

  13. Building confidence in the disposal safety case through scientific excellence: The OECD Nuclear Energy Agency Thermochemical Database

    International Nuclear Information System (INIS)

    The OECD Nuclear Energy Agency (NEA) Thermochemical Database (TDB) is the product of an ongoing cooperative project to assemble a comprehensive, internally consistent and quality-assured database of chemical elements selected for their relevance to the assessment of disposal safety. Major selection criteria for the inclusion of elements are mobility, radiotoxicity, inventory and half-life. The project is now in its 20th year, and arose from the realization that existing databases lacked internal consistency or were not sufficiently documented to allow the tracing of the original data sources. This resulted in inconsistent results, e.g., from the same code, when using different databases for the same condition. Thus, increased confidence in the data was needed in order to take advantage, unequivocally, of the powerful insights provided by chemical thermodynamics in performing safety analyses. Confidence in the quality and applicability of the selected data is built, in the first place, through adherence to procedures that are firmly established in the scientific community: formalized and traceable expert judgment, critical review by peers, and open publication of both the data and the process for their selection, with the possibility of feedback of experience and new insights. All procedures are specified in the Project guidelines, which have remained essentially unchanged since the early stages of the Project. The effort so far has resulted in the publication of thermochemical data for eight elements comprising major actinides and fission and activation products. Added values of the Project are that (a) the thermochemical data are applicable to a variety of disposal systems and also to potential applications beyond disposal; (b) it forms qualified personnel for the purpose of supporting safety assessment, which is an additional gauge of confidence in the latter; and (c) it makes efficient use of resources. For these reasons, the NEA TDB has evolved into a reference tool.

  14. IP Geolocation Databases: Unreliable?

    OpenAIRE

    Poese, Ingmar; Uhlig, Steve; Kaafar, Mohamed Ali; Donnet, Benoît; Gueye, Bamba

    2011-01-01

    The most widely used technique for IP geolocation consists in building a database to keep the mapping between IP blocks and a geographic location. Several databases are available and are frequently used by many services and web sites in the Internet. Contrary to widespread belief, geolocation databases are far from being as reliable as they claim. In this paper, we conduct a comparison of several current geolocation databases -both commercial and free- to have an insight of the limitation...

  15. Method of Cadastral Database Building Based on Python and SQL

    Institute of Scientific and Technical Information of China (English)

    陈秀萍; 郭忠明; 吕翠华

    2013-01-01

    Taking a town in Yunnan Province as an example, this article describes how several public software platforms, combined with Python programming and SQL query statements, were used to complete the construction of a cadastral database. The approach reduced production costs and increased efficiency, while also considerably improving the operators' comprehensive professional skills.
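The article's actual scripts are not given in the abstract. As a minimal sketch of the general approach it describes, driving SQL from Python, the following uses the standard sqlite3 module with invented field names; the production work used other platforms and a different schema.

```python
import sqlite3

# Minimal sketch of building a cadastral table from Python via SQL;
# field names and values are illustrative, not the article's schema.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE parcels (
    parcel_id TEXT PRIMARY KEY,
    owner     TEXT,
    land_use  TEXT,
    area_m2   REAL)""")
conn.executemany("INSERT INTO parcels VALUES (?, ?, ?, ?)", [
    ("P-001", "Li",   "residential",  240.5),
    ("P-002", "Wang", "agricultural", 1820.0),
    ("P-003", "Zhao", "residential",  310.0)])

# The kind of consistency check batch scripts automate: total area by use.
totals = dict(conn.execute(
    "SELECT land_use, SUM(area_m2) FROM parcels GROUP BY land_use"))
print(totals)
```

Scripting such checks is what lets one person validate a whole town's parcels instead of auditing records by hand.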

  16. CoCoTools: open-source software for building connectomes using the CoCoMac anatomical database.

    Science.gov (United States)

    Blumenfeld, Robert S; Bliss, Daniel P; Perez, Fernando; D'Esposito, Mark

    2014-04-01

    Neuroanatomical tracer studies in the nonhuman primate macaque monkey are a valuable resource for cognitive neuroscience research. These data ground theories of cognitive function in anatomy, and with the emergence of graph theoretical analyses in neuroscience, there is high demand for these data to be consolidated into large-scale connection matrices ("macroconnectomes"). Because manual review of the anatomical literature is time consuming and error prone, computational solutions are needed to accomplish this task. Here we describe the "CoCoTools" open-source Python library, which automates collection and integration of macaque connectivity data for visualization and graph theory analysis. CoCoTools both interfaces with the CoCoMac database, which houses a vast amount of annotated tracer results from 100 years (1905-2005) of neuroanatomical research, and implements coordinate-free registration algorithms, which allow studies that use different parcellations of the brain to be translated into a single graph. We show that using CoCoTools to translate all of the data stored in CoCoMac produces graphs with properties consistent with what is known about global brain organization. Moreover, in addition to describing CoCoTools' processing pipeline, we provide worked examples, tutorials, links to on-line documentation, and detailed appendices to aid scientists interested in using CoCoTools to gather and analyze CoCoMac data. PMID:24116839

  17. Building a Learning Database for the Neural Network Retrieval of Sea Surface Salinity from SMOS Brightness Temperatures

    CERN Document Server

    Ammar, Adel; Obligis, Estelle; Crépon, Michel; Thiria, Sylvie

    2016-01-01

    This article deals with an important aspect of the neural network retrieval of sea surface salinity (SSS) from SMOS brightness temperatures (TBs). The neural network retrieval method is an empirical approach that offers the possibility of being independent of any theoretical emissivity model during the in-flight phase. A previous study [1] has proven that this approach is applicable to all pixels over the ocean, by designing a set of neural networks with different inputs. The present study focuses on the choice of the learning database and demonstrates that a judicious distribution of the geophysical parameters markedly reduces the systematic regional biases of the retrieved SSS, which are due to the high noise on the TBs. An equalization of the distribution of the geophysical parameters, followed by a new technique for boosting the learning process, makes the regional biases almost disappear for latitudes between 40°S and 40°N, while the global standard deviation remains between 0.6 psu (at t...
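The paper's equalization procedure is not detailed in the abstract. One simple way to equalize a training distribution, sketched below with invented bin width, cap, and synthetic data, is to cap the number of examples drawn from each histogram bin of the target parameter so that over-represented values no longer dominate the learning set; this is only an illustration of the idea, not the authors' method.

```python
import random

# Synthetic "salinity" samples, heavily concentrated around 35 psu,
# standing in for an unbalanced learning database.
random.seed(0)
samples = [random.gauss(35.0, 1.0) for _ in range(10000)]

def equalize(values, bin_width=0.5, cap=200):
    """Cap the number of examples kept per histogram bin."""
    bins = {}
    for v in values:
        bins.setdefault(int(v // bin_width), []).append(v)
    out = []
    for members in bins.values():
        out.extend(members[:cap])  # keep at most `cap` examples per bin
    return out

balanced = equalize(samples)
print(len(samples), len(balanced))
```

After capping, rare parameter values carry relatively more weight in training, which is the mechanism by which such equalization can reduce systematic biases in a regression network.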

  18. Building a medical multimedia database system to integrate clinical information: an application of high-performance computing and communications technology.

    Science.gov (United States)

    Lowe, H J; Buchanan, B G; Cooper, G F; Vries, J K

    1995-01-01

    The rapid growth of diagnostic-imaging technologies over the past two decades has dramatically increased the amount of nontextual data generated in clinical medicine. The architecture of traditional, text-oriented, clinical information systems has made the integration of digitized clinical images with the patient record problematic. Systems for the classification, retrieval, and integration of clinical images are in their infancy. Recent advances in high-performance computing, imaging, and networking technology now make it technologically and economically feasible to develop an integrated, multimedia, electronic patient record. As part of The National Library of Medicine's Biomedical Applications of High-Performance Computing and Communications program, we plan to develop Image Engine, a prototype microcomputer-based system for the storage, retrieval, integration, and sharing of a wide range of clinically important digital images. Images stored in the Image Engine database will be indexed and organized using the Unified Medical Language System Metathesaurus and will be dynamically linked to data in a text-based, clinical information system. We will evaluate Image Engine by initially implementing it in three clinical domains (oncology, gastroenterology, and clinical pathology) at the University of Pittsburgh Medical Center. PMID:7703940

  19. The Grid: Stronger, Bigger, Smarter? Presenting a conceptual framework of power system resilience

    OpenAIRE

    M. Panteli and P. Mancarella

    2015-01-01

    Increasing the resilience of critical power infrastructures to high-impact low-probability events, such as extreme weather phenomena driven by climate change, is of key importance for keeping the lights on. However, what does resilience really mean? Should we build a stronger and bigger grid, or a smarter one? This article discusses a conceptual framework of power system resilience, its key features, and potential enhancement measures.

  20. Axion Cosmology with a Stronger QCD in the Early Universe

    OpenAIRE

    Choi, Kiwoon; Kim, Hang Bae; Kim, Jihn E.

    1996-01-01

    We examine in the context of supersymmetric models whether the usual cosmological upper bound on the axion decay constant can be relaxed by assuming a period of stronger QCD in the early universe. By evaluating the axion potential in the early universe and also taking into account the dilaton potential energy, it is argued that a stronger QCD is not useful for raising the bound.

  1. Axion cosmology with a stronger QCD in the early universe

    Energy Technology Data Exchange (ETDEWEB)

    Choi Kiwoon [Korea Adv. Inst. of Sci. and Technol., Taejon (Korea, Republic of). Phys. Dept.; Kim, H.B. [Universidad Autonoma de Madrid (Spain). Dept. de Fisica Teorica; Kim, J.E. [Seoul National Univ. (Korea, Republic of). Dept. of Physics

    1997-04-14

    We examine in the context of supersymmetric models whether the usual cosmological upper bound on the axion decay constant can be relaxed by assuming a period of stronger QCD in the early universe. By evaluating the axion potential in the early universe and also taking into account the dilaton potential energy, it is argued that a stronger QCD is not useful for raising the bound. (orig.).

  2. The Jungle Database Search Engine

    DEFF Research Database (Denmark)

    Bøhlen, Michael Hanspeter; Bukauskas, Linas; Dyreson, Curtis

    1999-01-01

    Information spread across databases cannot be found by current search engines. A database search engine is capable of accessing and advertising databases on the WWW. Jungle is a database search engine prototype developed at Aalborg University. Operating through JDBC connections to remote databases, Jungle extracts and indexes database data and meta-data, building a data store of database information. This information is used to evaluate and optimize queries in the AQUA query language. AQUA is a natural and intuitive database query language that helps users to search for information without knowing how...
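Jungle itself harvests metadata over JDBC from Java. As a rough Python analogue of the metadata-extraction step, the sketch below reads table and column names out of a SQLite catalog and builds an inverted index from names to tables, the kind of data store a database search engine could consult; the schema and code are invented for illustration.

```python
import sqlite3
from collections import defaultdict

# Two toy tables standing in for remote databases to be indexed.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, salary REAL)")
conn.execute("CREATE TABLE projects (name TEXT, budget REAL)")

# Harvest metadata from the catalog and build a name -> tables index.
index = defaultdict(set)
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")]
for table in tables:
    index[table].add(table)
    for col in conn.execute(f"PRAGMA table_info({table})"):
        index[col[1]].add(table)   # col[1] is the column name

# A search for 'name' now advertises both tables.
print(sorted(index["name"]))
```

Queries against such an index never touch the remote databases themselves, which is what makes search over many databases feasible.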

  3. International Ventilation Cooling Application Database

    DEFF Research Database (Denmark)

    Holzer, Peter; Psomas, Theofanis Ch.; OSullivan, Paul

    2016-01-01

    , systematically investigating the distribution of technologies and strategies within VC. The database is structured as both a ticking-list-like building-spreadsheet and a collection of building-datasheets. The content of both closely follows the Annex 62 State-Of-The-Art Report. The database has been filled, based on desktop research, by Annex 62 participants, namely by the authors. So far the VC database contains 91 buildings, located in Denmark, Ireland and Austria. Further contributions from other countries are expected. The building-datasheets offer illustrative descriptions of buildings of different...

  4. Building a comprehensive mill-level database for the Industrial Sectors Integrated Solutions (ISIS) model of the U.S. pulp and paper sector.

    Science.gov (United States)

    Modak, Nabanita; Spence, Kelley; Sood, Saloni; Rosati, Jacky Ann

    2015-01-01

    Air emissions from the U.S. pulp and paper sector have been federally regulated since 1978; however, regulations are periodically reviewed and revised to improve efficiency and effectiveness of existing emission standards. The Industrial Sectors Integrated Solutions (ISIS) model for the pulp and paper sector is currently under development at the U.S. Environmental Protection Agency (EPA), and can be utilized to facilitate multi-pollutant, sector-based analyses that are performed in conjunction with regulatory development. The model utilizes a multi-sector, multi-product dynamic linear modeling framework that evaluates the economic impact of emission reduction strategies for multiple air pollutants. The ISIS model considers facility-level economic, environmental, and technical parameters, as well as sector-level market data, to estimate the impacts of environmental regulations on the pulp and paper industry. Specifically, the model can be used to estimate U.S. and global market impacts of new or more stringent air regulations, such as impacts on product price, exports and imports, market demands, capital investment, and mill closures. One major challenge to developing a representative model is the need for an extensive amount of data. This article discusses the collection and processing of data for use in the model, as well as the methods used for building the ISIS pulp and paper database that facilitates the required analyses to support the air quality management of the pulp and paper sector. PMID:25806516

  5. Stanford Rock Physics database

    Energy Technology Data Exchange (ETDEWEB)

    Nolen-Hoeksema, R. (Stanford Univ., CA (United States)); Hart, C. (Envision Systems, Inc., Fremont, CA (United States))

    The authors have developed a relational database for the Stanford Rock Physics (SRP) Laboratory. The database is a flexible tool for helping researchers find relevant data. It significantly speeds retrieval of data and facilitates new organizations of rock physics information to get answers to research questions. The motivation for a database was to have a computer data storage, search, and display capability to explore the sensitivity of acoustic velocities to changes in the properties and states of rocks. Benefits include data exchange among researchers, discovery of new relations in existing data, and identification of new areas of research. The authors' goal was to build a database flexible enough for the dynamic and multidisciplinary research environment of rock physics. Databases are based on data models. A flexible data model must: (1) Not impose strong, prior constraints on the data; (2) not require a steep learning curve of the database architecture; and (3) be easy to modify. The authors' choice of the relational data model reflects these considerations. The database and some hardware and software considerations were influenced by their choice of data model, and their desire to provide a user-friendly interface for the database and build a distributed database system.
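The SRP schema itself is not given in the abstract. The sketch below illustrates, with invented tables and values, the relational layout the authors describe: samples and measurements in separate tables linked by a key, so that queries can relate acoustic velocities to rock properties and states.

```python
import sqlite3

# Illustrative two-table relational layout (invented, not the SRP schema):
# rock samples and velocity measurements linked by sample_id.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE samples (
    sample_id INTEGER PRIMARY KEY,
    lithology TEXT,
    porosity  REAL);
CREATE TABLE velocities (
    sample_id    INTEGER REFERENCES samples(sample_id),
    pressure_mpa REAL,
    vp_km_s      REAL);
""")
conn.execute("INSERT INTO samples VALUES (1, 'sandstone', 0.18)")
conn.executemany("INSERT INTO velocities VALUES (?, ?, ?)",
                 [(1, 10.0, 3.2), (1, 40.0, 3.6)])

# A typical research question: how does Vp vary with pressure
# for a given lithology?
rows = conn.execute("""
    SELECT v.pressure_mpa, v.vp_km_s
    FROM velocities v JOIN samples s ON v.sample_id = s.sample_id
    WHERE s.lithology = 'sandstone' ORDER BY v.pressure_mpa""").fetchall()
print(rows)
```

Keeping measurements and sample descriptions in separate joined tables is what allows new organizations of the data without re-entering it, the flexibility the authors cite as the motivation for a relational model.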

  6. Building Geological Knowledge Database Based on Google Earth Software

    Institute of Scientific and Technical Information of China (English)

    石书缘; 胡素云; 冯文杰; 刘伟

    2012-01-01

    Existing methods for establishing geological knowledge databases include dense well pattern anatomy, outcrop anatomy, modern sedimentation anatomy, and sedimentary simulation experiments. After analyzing the advantages and disadvantages of these methods, we measured a series of basic data on meandering channels using the Google Earth software and, combining this with the geological principle that "the present is the key to the past", propose a method for building a geological knowledge database of meandering rivers based on Google Earth. Google Earth is a virtual globe, map and geographic information program created by Keyhole, Inc., a company acquired by Google in 2004; it maps the Earth by superimposing satellite imagery, aerial photography and GIS data on a 3D globe. We introduce the steps and basic principles for measuring meandering channels in Google Earth, and measured the channel width, point-bar length and arc length of channels of different curvature in different areas. The resulting measurements were compiled into data tables and analysed together with existing empirical formulas and formulas fitted to the measured data. The results show that meandering rivers in different environments exhibit different correlations between channel width and point-bar length, so a geological knowledge database of meandering rivers cannot be built to a single uniform standard. The correlation between channel width and point-bar length also differs with curvature, weakening as curvature decreases, which reflects that the development of point bars is controlled by the degree of channel sinuosity. Finally, we suggest building a quantitative geological knowledge database of meandering rivers on the idea of a geological pattern library, to be used as an effective constraint on reservoir modelling.
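Fitting an empirical relation between channel width and point-bar length, as the abstract describes, is commonly done as a power law L = a * W**b estimated by least squares in log-log space. The sketch below shows that computation on invented sample numbers; the study's actual Google Earth measurements are not reproduced here.

```python
import math

# Invented sample measurements: channel width W (m), point-bar length L (m).
W = [55.0, 120.0, 260.0, 480.0]
L = [300.0, 620.0, 1250.0, 2200.0]

# Fit L = a * W**b by least squares on the log-transformed data.
x = [math.log(w) for w in W]
y = [math.log(l) for l in L]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) \
    / sum((xi - xbar) ** 2 for xi in x)
a = math.exp(ybar - b * xbar)
print(round(b, 2))
```

With these sample numbers the fitted exponent b comes out near 0.92; refitting separate (a, b) pairs per curvature class is one way to capture the curvature-dependent correlations the study reports.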

  7. Old genes experience stronger translational selection than young genes.

    Science.gov (United States)

    Yin, Hongyan; Ma, Lina; Wang, Guangyu; Li, Mengwei; Zhang, Zhang

    2016-09-15

    Selection on synonymous codon usage for translation efficiency and/or accuracy has been identified as a widespread mechanism in many living organisms. However, it remains unknown whether translational selection associates closely with gene age and acts differentially on genes with different evolutionary ages. To address this issue, here we investigate the strength of translational selection acting on different aged genes in human. Our results show that old genes present stronger translational selection than young genes, demonstrating that translational selection correlates positively with gene age. We further explore the difference of translational selection in duplicates vs. singletons and in housekeeping vs. tissue-specific genes. We find that translational selection acts comparably in old singletons and old duplicates and stronger translational selection in old genes is contributed primarily by housekeeping genes. For young genes, contrastingly, singletons experience stronger translational selection than duplicates, presumably due to redundant function of duplicated genes during their early evolutionary stage. Taken together, our results indicate that translational selection acting on a gene would not be constant during all stages of evolution, associating closely with gene age. PMID:27259662

  8. The right of the stronger: The play Sisyphus and Critias

    Directory of Open Access Journals (Sweden)

    Jordović Ivan

    2004-01-01

    Full Text Available The focus of this study is the standpoint towards the right of the stronger taken in the play Sisyphus, attributed to Critias, the leader of the Thirty. This is a question of constant interest in scholarly circles, since its answer can serve as an indicator of the influence this famous theory has had; the interest is heightened by the fact that Critias' authorship of the play is questionable. The question of authorship, however, is not of primary importance for this article, because there are some arguments, alongside some well known ones, which were not previously considered and which show that in this satyr play, regardless of its author and the purpose of the fragment, the right of the stronger is actually non-existent. The first argument in support of this view is that the nomos-physis antithesis is nowhere explicitly mentioned, although it is the crucial element of the right of the stronger. In addition, there is no claim in the play that exploitation of the weak by the strong, or by law, occurred. The second argument is that, despite the incapability of laws to prevent secret injustice, the laws and their importance for human society are depicted in a positive light. It should also be noted that, unlike in Callicles and Glaucon, the laws are created to stop the bad, not the good. The third argument is that the invention of religion is accepted as a positive achievement, which finally enables the overcoming of primeval lawlessness. A reflection of this argument is the positive characterization of the individual who invented the fear of the gods. The fourth argument, which has not been taken into consideration so far, is the way the supporters and opponents of lawlessness are described and marked as κακοί and ἐσθλοί. In the satyr play only the physically strong are considered strong, as opposed to Callicles, where they are also spiritually superior; the intellectually superior figure in Sisyphus is the inventor of the fear of the gods, who is also in favour of law and order. The fact

  9. Stronger misdirection in curved than in straight motion

    Directory of Open Access Journals (Sweden)

    Jorge eOtero-Millan

    2011-11-01

    Full Text Available Illusions developed by magicians are a rich and largely untapped source of insight into perception and cognition. Here we show that curved motion, as employed by the magician in a classic sleight of hand trick, generates stronger misdirection than rectilinear motion, and that this difference can be explained by the differential engagement of the smooth pursuit and the saccadic oculomotor systems. This research moreover exemplifies how the magician’s intuitive understanding of the spectator’s mindset can surpass that of the cognitive scientist in specific instances, and that observation-based behavioral insights developed by magicians are worthy of quantitative investigation in the neuroscience laboratory.

  10. Database replication

    OpenAIRE

    Popov, P. T.; Stankovic, V.

    2014-01-01

    A fault-tolerant node for synchronous heterogeneous database replication and a method for performing synchronous heterogeneous database replication at such a node are provided. A processor executes a computer program to generate a series of database transactions to be carried out at the fault-tolerant node. The fault-tolerant node comprises at least two relational database management systems, each of which is a different relational database management system product, each implementing snapsh...

  11. Communicative Databases

    OpenAIRE

    Yu, Kwang-I

    1981-01-01

    A hierarchical organization stores its information in a large number of databases. These databases are interrelated, forming a closely-coupled database system. Traditional information systems and current database management systems do not have a means of expressing these relationships. This thesis describes a model of the information structure of the hierarchical organization that identifies the nature of database relationships. It also describes the design and implementatio...

  12. Database Driven Web Systems for Education.

    Science.gov (United States)

    Garrison, Steve; Fenton, Ray

    1999-01-01

    Provides technical information on publishing to the Web. Demonstrates some new applications in database publishing. Discusses the difference between static and database-driven Web pages. Reviews failures and successes of a Web database system. Addresses the question of how to build a database-driven Web site, discussing connectivity software, Web…

  13. Building a stronger framework of nuclear law. The IAEA's legislative assistance services

    International Nuclear Information System (INIS)

    The IAEA is publishing a Handbook on Nuclear Law which will provide IAEA Member States with a new resource for assessing the adequacy of their national legal frameworks governing the peaceful uses of nuclear energy, and practical guidance for governments in efforts to enhance their laws and regulations, in harmonizing them with internationally recognized standards, and in meeting their obligations under relevant international instruments. The Handbook responds to the growing demand from many national governments for assistance in the development of nuclear legislation and the need to harmonize their own legal and institutional arrangements with international standards. It also presents concise and authoritative instructional materials for teaching professionals (lawyers, scientists, engineers, health and radiation protection workers, government administrators) on the basic elements of a sound framework for managing and regulating nuclear energy. The Handbook is organized into five general parts: Part I provides a general overview of key concepts in the field: nuclear energy law and the legislative process; the regulatory authority; and the fundamental regulatory activities of licensing, inspection and enforcement. Part II deals with radiation protection. Part III covers various subjects arising from nuclear and radiation safety: radiation sources, nuclear installations, emergency preparedness and response, mining and milling, transportation, and waste and spent fuel. Part IV addresses the topic of nuclear liability and coverage. Part V moves to non-proliferation and security related subjects: safeguards, export and import controls, and physical protection. The Handbook also reflects and refers to the extensive range of IAEA Safety Standards covering all fields relevant to peaceful nuclear technology

  14. A race we can win. The world can - and must - build a stronger security framework

    International Nuclear Information System (INIS)

    Nuclear proliferation and terrorism represent the single most important threat to global security. Yet fundamental differences of opinion remain on how to deal with this ever growing menace to our survival. Should we opt for diplomacy or force? What are the relative merits of collective versus unilateral action? Is it more effective to pursue a policy of containment or one based on inclusiveness? These are not new questions, by any measure. But they have taken on renewed urgency as nations struggle, both regionally and globally, to cope with an extended array of conflicts, highly sophisticated forms of terrorism, and a growing threat of weapons of mass destruction. In a real sense, we are in a race against time - but it's a race we can win if we work together. The Treaty on the Non-Proliferation of Nuclear Weapons (NPT) remains the global anchor for humanity's efforts to curb nuclear proliferation and move towards nuclear disarmament. There is no doubt that the implementation of the NPT continues to provide important security benefits - by providing assurance that, in the great majority of non-nuclear-weapon States, nuclear energy is not being misused for weapon purposes. The NPT is also the only binding agreement in which all five of the nuclear-weapon States have committed themselves to move forward towards nuclear disarmament. Still, it is clear that recent events have placed the NPT and the regime supporting it under unprecedented stress, exposing some of its inherent limitations and pointing to areas that need to be adjusted. The question is how do we best move ahead to achieve the security we seek

  15. Corporate social responsibility: building stronger stakeholder relationships through corporate social responsibility programs

    OpenAIRE

    Andersen, Hanne

    2008-01-01

    The success of a business in today’s market is not only driven by financial values, but also by the business’s behavior. The triple bottom line (TBL) is an expansion of the traditional financial reporting framework of accounting to include social and environmental issues. The TBL was first phrased by John Elkington in 1994. It has later been rephrased as the three P’s, profit, people and planet, which are meant to express the triple bottom line and the aim for sustainab...

  16. Relational databases

    CERN Document Server

    Bell, D A

    1986-01-01

    Relational Databases explores the major advances in relational databases and provides a balanced analysis of the state of the art in relational databases. Topics covered include capture and analysis of data placement requirements; distributed relational database systems; data dependency manipulation in database schemata; and relational database support for computer graphics and computer aided design. This book is divided into three sections and begins with an overview of the theory and practice of distributed systems, using the example of INGRES from Relational Technology as illustration. The

  17. Provincial Land Utilization Database Construction Analysis

    OpenAIRE

    Shiwu XU; Wenfeng LIAO

    2009-01-01

    According to the requirements of provincial land management operations, this paper introduces a method for building a municipal land-use database on the basis of county-level land-use database building. It realizes provincial land-use data updating, transfer and application management. The paper analyzes the requirements for provincial land-use database construction in terms of data, the software and hardware environment, the database, etc. It also designs the mathematical base, land-use data features, data dictionary and me...

  18. Database Technologies for RDF

    Science.gov (United States)

    Das, Souripriya; Srinivasan, Jagannathan

    Efficient and scalable support for RDF/OWL data storage, loading, inferencing and querying, in conjunction with already available support for enterprise-level data and operations reliability requirements, can make databases suitable to act as enterprise-level RDF/OWL repositories and hence a viable platform for building semantic applications in enterprise environments.
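
    As a toy illustration of the general idea (this is not Oracle's actual RDF schema; the table layout and data are invented for illustration), RDF triples can be kept in a single relational table and simple pattern queries answered with ordinary SQL:

```python
# Hypothetical sketch: RDF triples in one relational table, with a
# basic-graph-pattern query expressed as a SQL WHERE clause.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE triples (subj TEXT, pred TEXT, obj TEXT)")
conn.executemany(
    "INSERT INTO triples VALUES (?, ?, ?)",
    [
        ("ex:Alice", "rdf:type", "ex:Employee"),
        ("ex:Alice", "ex:worksFor", "ex:Acme"),
        ("ex:Bob", "rdf:type", "ex:Employee"),
    ],
)

# Pattern query: ?s rdf:type ex:Employee
rows = conn.execute(
    "SELECT subj FROM triples WHERE pred = 'rdf:type' "
    "AND obj = 'ex:Employee' ORDER BY subj"
).fetchall()
print([r[0] for r in rows])  # ['ex:Alice', 'ex:Bob']
```

    Production RDF stores add indexing, dictionary-encoding of terms and inference on top of this basic layout.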

  19. Community Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This excel spreadsheet is the result of merging at the port level of several of the in-house fisheries databases in combination with other demographic databases...

  20. Biofuel Database

    Science.gov (United States)

    Biofuel Database (Web, free access)   This database brings together structural, biological, and thermodynamic data for enzymes that are either in current use or are being considered for use in the production of biofuels.

  1. Building Permits, Building permits pulled from appraisal database and geocoded to locations for all types of permits issued, Published in unknown, Johnson County AIMS.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Building Permits dataset was produced all or in part from Published Reports/Deeds information as of unknown. It is described as 'Building permits pulled from...

  2. Database Administrator

    Science.gov (United States)

    Moore, Pam

    2010-01-01

    The Internet and electronic commerce (e-commerce) generate lots of data. Data must be stored, organized, and managed. Database administrators, or DBAs, work with database software to find ways to do this. They identify user needs, set up computer databases, and test systems. They ensure that systems perform as they should and add people to the…

  3. Memory Storage Issues of Temporal Database Applications on Relational Database Management Systems

    OpenAIRE

    Sami M. Halawani; Nashwan A.A Romema

    2010-01-01

    Problem statement: Many existing database applications manage time-varying data. These database applications are referred to as temporal databases or time-oriented database applications that are considered as repositories of time-dependent data. Many proposals have been introduced for developing time-oriented database applications, some of which suggest building support for Temporal Database Management Systems (TDBMS) on top of existing non-temporal DBMSs, while others suggest modifying the m...
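
    One common way to layer temporal support on top of a non-temporal DBMS, sketched here with invented table and column names, is to attach an explicit valid-time period to each row and phrase "time-slice" questions as range predicates:

```python
# Sketch of a valid-time table on an ordinary (non-temporal) RDBMS:
# each row carries a [valid_from, valid_to) period.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE salary (emp TEXT, amount INTEGER, "
    "valid_from TEXT, valid_to TEXT)"
)
conn.executemany(
    "INSERT INTO salary VALUES (?, ?, ?, ?)",
    [
        ("alice", 50000, "2019-01-01", "2020-01-01"),
        ("alice", 55000, "2020-01-01", "9999-12-31"),  # open-ended current row
    ],
)

# Time-slice query: what was alice's salary on 2019-06-15?
row = conn.execute(
    "SELECT amount FROM salary "
    "WHERE emp = ? AND valid_from <= ? AND ? < valid_to",
    ("alice", "2019-06-15", "2019-06-15"),
).fetchone()
print(row[0])  # 50000
```

    A true TDBMS would hide these period columns behind temporal operators; the sketch shows the storage burden the abstract refers to when such support is built "on top of" a non-temporal system.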

  4. Mechanisms for stronger warming over drier ecoregions observed since 1979

    Science.gov (United States)

    Zhou, Liming; Chen, Haishan; Hua, Wenjian; Dai, Yongjiu; Wei, Nan

    2016-02-01

    Previous research found that the warming rate observed for the period 1979-2012 increases dramatically with decreasing vegetation greenness over land between 50°S and 50°N, with the strongest warming rate seen over the driest regions such as the Sahara desert and the Arabian Peninsula, suggesting warming amplification over deserts. To further this finding, this paper explores possible mechanisms for this amplification by analyzing observations, reanalysis data and historical simulations of global coupled atmosphere-ocean general circulation models. We examine various variables, related to surface radiative forcing, land surface properties, and surface energy and radiation budget, that control the warming patterns in terms of large-scale ecoregions. Our results indicate that desert amplification is likely attributable primarily to enhanced longwave radiative forcing associated with a stronger water vapor feedback over drier ecoregions in response to the positive global-scale greenhouse gas forcing. This warming amplification and associated downward longwave radiation at the surface are reproduced by historical simulations with anthropogenic and natural forcings, but are absent if only natural forcings are considered, pointing to new potential fingerprints of anthropogenic warming. These results suggest a fundamental pattern of global warming over land that depends on the dryness of ecosystems in mid- and low-latitudes, likely reflecting primarily the first-order large-scale thermodynamic component of global warming linked to changes in the water and energy cycles over different ecosystems. This finding may have important implications in interpreting global warming patterns and assessing climate change impacts.

  5. FunctSNP: an R package to link SNPs to functional knowledge and dbAutoMaker: a suite of Perl scripts to build SNP databases

    Directory of Open Access Journals (Sweden)

    Watson-Haigh Nathan S

    2010-06-01

    Full Text Available Abstract Background Whole genome association studies using highly dense single nucleotide polymorphisms (SNPs) are a set of methods to identify DNA markers associated with variation in a particular complex trait of interest. One of the main outcomes from these studies is a subset of statistically significant SNPs. Finding the potential biological functions of such SNPs can be an important step towards further use in human and agricultural populations (e.g., for identifying genes related to susceptibility to complex diseases or genes playing key roles in development or performance). The current challenge is that the information holding the clues to SNP functions is distributed across many different databases. Efficient bioinformatics tools are therefore needed to seamlessly integrate up-to-date functional information on SNPs. Many web services have arisen to meet the challenge but most work only within the framework of human medical research. Although we acknowledge the importance of human research, we identify there is a need for SNP annotation tools for other organisms. Description We introduce an R package called FunctSNP, which is the user interface to custom built species-specific databases. The local relational databases contain SNP data together with functional annotations extracted from online resources. FunctSNP provides a unified bioinformatics resource to link SNPs with functional knowledge (e.g., genes, pathways, ontologies). We also introduce dbAutoMaker, a suite of Perl scripts, which can be scheduled to run periodically to automatically create/update the customised SNP databases. We illustrate the use of FunctSNP with a livestock example, but the approach and software tools presented here can be applied also to human and other organisms. Conclusions Finding the potential functional significance of SNPs is important when further using the outcomes from whole genome association studies. FunctSNP is unique in that it is the only R
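
    The general shape of such a local annotation store can be sketched as a relational join from SNP identifiers to functional annotations. This is a hedged illustration only: the schema, identifiers and annotations below are invented and do not reflect FunctSNP's actual database layout.

```python
# Toy local SNP-annotation store: significant SNPs from an association
# study are joined to gene/pathway annotations kept in a local database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE snp  (snp_id TEXT PRIMARY KEY, gene_id TEXT);
CREATE TABLE gene (gene_id TEXT PRIMARY KEY, pathway TEXT);
INSERT INTO snp  VALUES ('rs0001', 'G1'), ('rs0002', 'G2');
INSERT INTO gene VALUES ('G1', 'lipid metabolism'), ('G2', 'immune response');
""")

# Annotate the statistically significant SNPs from a study.
hits = ["rs0001"]
rows = conn.execute(
    "SELECT s.snp_id, g.pathway FROM snp s JOIN gene g USING (gene_id) "
    "WHERE s.snp_id IN (%s)" % ",".join("?" * len(hits)),
    hits,
).fetchall()
print(rows)  # [('rs0001', 'lipid metabolism')]
```

    Keeping the annotations local, as the abstract describes, avoids repeated web-service calls and makes the lookup a single indexed join.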

  6. The IAEA international database on irradiated nuclear graphite properties: A success story for both new build and life extension of commercial power plants

    International Nuclear Information System (INIS)

    Full text: Graphite has enjoyed extensive use as a nuclear moderator (and de facto construction material) since the late 1940s. In commercial plant it has seen the greatest use in gas-cooled reactors of the Magnox and AGR types (UK designs) and the French UNGG reactors. It has also been employed in water-tube reactors such as RBMKs. These reactor types may be thought of as 'Generations I and II', and many (Magnox, UNGG) are approaching the end of their commercial operating lives or are already in decommissioning. However, thoughts have turned in a number of countries to the 'Generation III' high-temperature helium-cooled reactor, employing graphite as a reflector and as part of the fuel matrix, either in the form of prismatic replaceable structures (current Japanese prototype HTTR, US Peach Bottom and Fort St. Vrain, and the early European experimental 'Dragon' reactor) or in the form of spherical fuel elements (former German design for AVR and THTR, used in the Chinese HTR-10 prototype and the basis of South Africa's PBMR design). 'Generation IV', which may include Very High Temperature Plant containing graphite structures, is now also under active consideration. Concern was expressed in the mid-1990s by the authors that the huge body of expertise and the knowledge base on the irradiation behaviour of graphite could be lost to future designers through both loss of experienced personnel with time and the generally poor quality of record keeping which characterised the mid-to-late twentieth-century industry. Graphite undergoes complex changes in its physical, mechanical and chemical properties under irradiation which need to be fully understood in terms of the microstructure of particular graphite formulations and types in order to best specify future materials for the more arduous environments. The issue remains important too for existing reactors where the fluences experienced by the graphite in commercial plant are now moving ahead of databases created using

  7. Database Manager

    Science.gov (United States)

    Martin, Andrew

    2010-01-01

    It is normal practice today for organizations to store large quantities of records of related information as computer-based files or databases. Purposeful information is retrieved by performing queries on the data sets. The purpose of DATABASE MANAGER is to communicate to students the method by which the computer performs these queries. This…

  8. Maize databases

    Science.gov (United States)

    This chapter is a succinct overview of maize data held in the species-specific database MaizeGDB (the Maize Genomics and Genetics Database), and selected multi-species data repositories, such as Gramene/Ensembl Plants, Phytozome, UniProt and the National Center for Biotechnology Information (NCBI), ...

  9. Negative density dependence is stronger in resource-rich environments and diversifies communities when stronger for common but not rare species.

    Science.gov (United States)

    LaManna, Joseph A; Walton, Maranda L; Turner, Benjamin L; Myers, Jonathan A

    2016-06-01

    Conspecific negative density dependence is thought to maintain diversity by limiting abundances of common species. Yet the extent to which this mechanism can explain patterns of species diversity across environmental gradients is largely unknown. We examined density-dependent recruitment of seedlings and saplings and changes in local species diversity across a soil-resource gradient for 38 woody-plant species in a temperate forest. At both life stages, the strength of negative density dependence increased with resource availability, becoming relatively stronger for rare species during seedling recruitment, but stronger for common species during sapling recruitment. Moreover, negative density dependence appeared to reduce diversity when stronger for rare than common species, but increase diversity when stronger for common species. Our results suggest that negative density dependence is stronger in resource-rich environments and can either decrease or maintain diversity depending on its relative strength among common and rare species. PMID:27111545

  10. An Approach to Building an OWL Ontology Relational Database

    Institute of Scientific and Technical Information of China (English)

    王岁花; 张晓丹; 王越

    2011-01-01

    As ontology types and resources increase, ontology structures become more and more complicated. To store ontologies of various structural types properly and support efficient ontology queries, this paper proposes an ontology storage method based on relational databases. After careful consideration of the basic elements of OWL, the method uses a word-formation classification approach, different from traditional decomposition storage, to store the classes, properties, instances, property characteristics and property restrictions of an OWL ontology each in a two-dimensional table. This resolves the complex relationships between resources and attribute values and preserves the integrity of the semantic information once the ontology is stored in the relational database. Finally, exploiting the efficient retrieval and matching speed of relational database management systems and the highly non-procedural SQL language, OWL ontology retrieval and matching are translated into relational database queries, compensating for the low query efficiency of OWL ontologies.
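
    A minimal sketch of this storage style, with invented table layouts rather than the paper's exact design: classes, properties and instances each live in their own two-dimensional table, and ontology questions become SQL joins.

```python
# Toy OWL-in-tables layout: one table per kind of OWL feature.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE class    (name TEXT, parent TEXT);
CREATE TABLE property (name TEXT, domain TEXT, range TEXT);
CREATE TABLE instance (name TEXT, class TEXT);
INSERT INTO class    VALUES ('Dog', 'Animal'), ('Cat', 'Animal');
INSERT INTO property VALUES ('hasOwner', 'Animal', 'Person');
INSERT INTO instance VALUES ('rex', 'Dog');
""")

# "Which instances belong to a direct subclass of Animal?" becomes a join.
rows = conn.execute(
    "SELECT i.name FROM instance i JOIN class c ON i.class = c.name "
    "WHERE c.parent = 'Animal'"
).fetchall()
print([r[0] for r in rows])  # ['rex']
```

    Deeper subclass chains would need a recursive query, which is where the storage design choices the paper discusses start to matter.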

  11. Database theory and SQL practice using Access

    International Nuclear Information System (INIS)

    This book introduces database theory and SQL practice using Access. It comprises seven chapters: understanding databases, with basic concepts and DBMSs; understanding relational databases, with examples; building database tables and entering data using Access 2000; Structured Query Language, with an introduction to managing and building complex queries in SQL; advanced SQL commands, with the concepts of joins and virtual tables; database design for an online bookstore in six steps; and building an application, covering its function, structure, components, underlying principles, operation, and the programming source for the application menu.

  12. Database Vs Data Warehouse

    Directory of Open Access Journals (Sweden)

    2007-01-01

    Full Text Available Data warehouse technology includes a set of concepts and methods that offer users useful information for decision making. The necessity to build a data warehouse arises from the need to improve the quality of information in the organization. The data coming from different sources, having a variety of forms, both structured and unstructured, are filtered according to business rules and integrated into a single large data collection. Using informatics solutions, managers have understood that data stored in operational systems, including databases, are an informational gold mine that must be exploited. Data warehouses have been developed to answer the increasing demands for complex analysis, which could not be properly achieved with operational databases. The present paper emphasizes some of the criteria that information application developers can use in order to choose between a database solution and a data warehouse one.

  13. Database Replication

    CERN Document Server

    Kemme, Bettina

    2010-01-01

    Database replication is widely used for fault-tolerance, scalability and performance. The failure of one database replica does not stop the system from working, as available replicas can take over the tasks of the failed replica. Scalability can be achieved by distributing the load across all replicas, and adding new replicas should the load increase. Finally, database replication can provide fast local access, even if clients are geographically distributed, provided data copies are located close to clients. Despite its advantages, replication is not a straightforward technique to apply, and

  14. Probabilistic Databases

    CERN Document Server

    Suciu, Dan; Koch, Christoph

    2011-01-01

    Probabilistic databases are databases where the value of some attributes or the presence of some records are uncertain and known only with some probability. Applications in many areas such as information extraction, RFID and scientific data management, data cleaning, data integration, and financial risk assessment produce large volumes of uncertain data, which are best modeled and processed by a probabilistic database. This book presents the state of the art in representation formalisms and query processing techniques for probabilistic data. It starts by discussing the basic principles for rep
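
    Under the simplest representation, the tuple-independent model, each tuple carries a marginal probability and query answers are derived by probability arithmetic. A toy sketch (the data and helper are invented for illustration; real probabilistic query processors handle correlations and far larger query classes):

```python
# Tuple-independent model: each tuple is present independently with
# probability p. P(some tuple for a person exists) = 1 - prod(1 - p_i).
tuples = [
    ("alice", "paris", 0.9),   # (person, city, probability)
    ("alice", "london", 0.3),
]

def prob_exists(rows, person):
    """Probability that at least one location tuple for `person` is present."""
    p_none = 1.0
    for who, _city, p in rows:
        if who == person:
            p_none *= (1.0 - p)  # independence assumption
    return 1.0 - p_none

print(round(prob_exists(tuples, "alice"), 3))  # 0.93
```

    Here 1 - (1 - 0.9)(1 - 0.3) = 0.93: uncertain evidence from two sources combines into a higher overall confidence.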

  15. RDD Databases

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This database was established to oversee documents issued in support of fishery research activities including experimental fishing permits (EFP), letters of...

  16. Dealer Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The dealer reporting databases contain the primary data reported by federally permitted seafood dealers in the northeast. Electronic reporting was implemented May...

  17. National database

    DEFF Research Database (Denmark)

    Kristensen, Helen Grundtvig; Stjernø, Henrik

    1995-01-01

    Article about a national database for nursing research established at the Danish Institute for Health and Nursing Research. The aim of the database is to gather knowledge about research and development activities within nursing.

  18. Ministers at IAEA Conference Call for Stronger Nuclear Security

    International Nuclear Information System (INIS)

    Declaration says. The Declaration recognizes the threat to international security posed by theft and smuggling of nuclear material and affirms the responsibility of States to keep all nuclear material secure. It also encourages all States to join and participate in the IAEA Incident and Trafficking Database, the international repository of information about nuclear and other radioactive material that has fallen out of regulatory control. It invites States that have not yet done so to become party to, and fully implement, the Convention on the Physical Protection of Nuclear Material (CPPNM) and its 2005 Amendment, which broadens the scope of that Convention. Many ministers at the Conference stated that entry into force of the Amendment would make a big difference. Among a number of other issues that are addressed, the Declaration also encourages States to use, on a voluntary basis, the IAEA's nuclear security advisory services and peer reviews such as International Physical Protection Advisory Service (IPPAS) missions, which are based on internationally accepted guidance and tailored to national needs. The Ministers welcomed the IAEA's work in nuclear forensics, and recognized its efforts to raise awareness of the growing threat of cyber-attacks and their potential impact on nuclear security. The work of the Conference will contribute to the IAEA's Nuclear Security Plan for 2014 to 2017. Consultations on the Declaration among IAEA Member States were coordinated by Ambassador Balazs Csuday, Resident Representative of Hungary, and Ambassador Laercio Antonio Vinhas, Resident Representative of Brazil. (IAEA)

  19. The VVV Templates Project. Towards an Automated Classification of VVV Light-Curves. I. Building a database of stellar variability in the near-infrared

    CERN Document Server

    Angeloni, R; Catelan, M; Dékány, I; Gran, F; Alonso-García, J; Hempel, M; Navarrete, C; Andrews, H; Aparicio, A; Beamín, J C; Berger, C; Borissova, J; Peña, C Contreras; Cunial, A; de Grijs, R; Espinoza, N; Eyheramendy, S; Lopes, C E Ferreira; Fiaschi, M; Hajdu, G; Han, J; Hełminiak, K G; Hempel, A; Hidalgo, S L; Ita, Y; Jeon, Y -B; Jordán, A; Kwon, J; Lee, J T; Martín, E L; Masetti, N; Matsunaga, N; Milone, A P; Minniti, D; Morelli, L; Murgas, F; Nagayama, T; Navarro, C; Ochner, P; Pérez, P; Pichara, K; Rojas-Arriagada, A; Roquette, J; Saito, R K; Siviero, A; Sohn, J; Sung, H -I; Tamura, M; Tata, R; Tomasella, L; Townsend, B; Whitelock, P

    2014-01-01

    Context. The Vista Variables in the Vía Láctea (VVV) ESO Public Survey is a variability survey of the Milky Way bulge and an adjacent section of the disk carried out from 2010 on the ESO Visible and Infrared Survey Telescope for Astronomy (VISTA). VVV will eventually deliver a deep near-IR atlas with photometry and positions in five passbands (ZYJHK_S) and a catalogue of 1-10 million variable point sources - mostly unknown - which require classifications. Aims. The main goal of the VVV Templates Project, that we introduce in this work, is to develop and test the machine-learning algorithms for the automated classification of the VVV light-curves. As VVV is the first massive, multi-epoch survey of stellar variability in the near-infrared, the template light-curves that are required for training the classification algorithms are not available. In the first paper of the series we describe the construction of this comprehensive database of infrared stellar variability. Methods. First we performed a systematic sea...

  20. A Language for Fuzzy Statistical Database

    OpenAIRE

    Katti, C. P; S.Guglani

    2013-01-01

    A fuzzy statistical database is a database used for fuzzy statistical analysis purposes. A fuzzy statistical table is a tabular representation of fuzzy statistics and is a useful data structure for fuzzy statistical databases. Primitive fuzzy statistical tables are a building block of fuzzy statistical tables. In this paper we define the fuzzy statistical join operator in the framework of fuzzy statistical databases. The fuzzy statistical dependency preservation property will be discussed for the fuzz...
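
    As a very loose illustration of the flavor of such an operator (the paper's definition is over fuzzy statistical tables; this toy version, with invented data, simply combines the membership degrees of matching rows with the min t-norm, a common choice in fuzzy set theory):

```python
# Toy fuzzy join: rows carry membership degrees in [0, 1]; joined rows
# combine degrees with min (the standard t-norm for fuzzy intersection).
left = {("young", "dept_a"): 0.8}          # (category, join key) -> membership
right = {("dept_a", "high_income"): 0.6}   # (join key, category) -> membership

def fuzzy_join(l, r):
    out = {}
    for (cat1, k1), m1 in l.items():
        for (k2, cat2), m2 in r.items():
            if k1 == k2:
                out[(cat1, cat2)] = min(m1, m2)  # min t-norm
    return out

print(fuzzy_join(left, right))  # {('young', 'high_income'): 0.6}
```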

  1. Biological Databases

    Directory of Open Access Journals (Sweden)

    Kaviena Baskaran

    2013-12-01

    Full Text Available Biology has entered a new era of distributing information through databases, and these collections of databases have become a primary channel for publishing information. This data publishing is done through Internet Gopher, where information resources are offered easily and affordably by powerful research tools. The more important thing now is the development of high-quality, professionally operated electronic data publishing sites. To enhance the service, appropriate editorial policies for electronic data publishing have been established, and the editors of articles shoulder the responsibility.

  2. Database on wind characteristics

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, K.S. [The Technical Univ. of Denmark (Denmark); Courtney, M.S. [Risoe National Lab., (Denmark)

    1999-08-01

    The organisations that participated in the project consist of five research organisations: MIUU (Sweden), ECN (The Netherlands), CRES (Greece), DTU (Denmark), Risoe (Denmark), and one wind turbine manufacturer: Vestas Wind Systems A/S (Denmark). The overall goal was to build a database consisting of a large number of wind speed time series and create tools for efficiently searching through the data to select interesting data. The project resulted in a database located at DTU, Denmark with online access through the Internet. The database contains more than 50,000 hours of wind speed measurements. A wide range of wind climates and terrain types are represented with significant amounts of time series. Data have been chosen selectively with a deliberate over-representation of high wind and complex terrain cases. This makes the database ideal for wind turbine design needs but completely unsuitable for resource studies. Diversity has also been an important aim and this is realised with data from a large range of terrain types; everything from offshore to mountain, from Norway to Greece. (EHS)

  3. Database design using entity-relationship diagrams

    CERN Document Server

    Bagui, Sikha

    2011-01-01

    Contents outline: Data, Databases, and the Software Engineering Process; Data; Building a Database; What is the Software Engineering Process?; Entity Relationship Diagrams and the Software Engineering Life Cycle; Phase 1: Get the Requirements for the Database; Phase 2: Specify the Database; Phase 3: Design the Database; Data and Data Models; Files, Records, and Data Items; Moving from 3 × 5 Cards to Computers; Database Models; The Hierarchical Model; The Network Model; The Relational Model; The Relational Model and Functional Dependencies; Fundamental Relational Database; Relational Database and Sets; Functional
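
    One concept from the outline, functional dependencies, can be illustrated with a short check of whether X -> Y holds in a relation instance (any two rows that agree on X must agree on Y). The relation and attribute names below are invented for illustration:

```python
# Check a functional dependency X -> Y against a concrete relation
# instance given as a list of row dictionaries.
def fd_holds(rows, x, y):
    seen = {}
    for row in rows:
        key = tuple(row[a] for a in x)
        val = tuple(row[a] for a in y)
        if key in seen and seen[key] != val:
            return False  # same X-value, different Y-value: FD violated
        seen[key] = val
    return True

emp = [
    {"id": 1, "dept": "toys", "floor": 2},
    {"id": 2, "dept": "toys", "floor": 2},
]
print(fd_holds(emp, ["dept"], ["floor"]))  # True
```

    Note that an instance can only refute a dependency, not prove it holds in the schema; that distinction is central to database design with functional dependencies.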

  4. The AMMA database

    Science.gov (United States)

    Boichard, Jean-Luc; Brissebrat, Guillaume; Cloche, Sophie; Eymard, Laurence; Fleury, Laurence; Mastrorillo, Laurence; Moulaye, Oumarou; Ramage, Karim

    2010-05-01

    The AMMA project includes aircraft, ground-based and ocean measurements, an intensive use of satellite data and diverse modelling studies. Therefore, the AMMA database aims at storing a great amount and a large variety of data, and at providing the data as rapidly and safely as possible to the AMMA research community. In order to stimulate the exchange of information and collaboration between researchers from different disciplines or using different tools, the database provides a detailed description of the products and uses standardized formats. The AMMA database contains: - AMMA field campaigns datasets; - historical data in West Africa from 1850 (operational networks and previous scientific programs); - satellite products from past and future satellites, (re-)mapped on a regular latitude/longitude grid and stored in NetCDF format (CF Convention); - model outputs from atmosphere or ocean operational (re-)analysis and forecasts, and from research simulations. The outputs are processed as the satellite products are. Before accessing the data, any user has to sign the AMMA data and publication policy. This chart only covers the use of data in the framework of scientific objectives and categorically excludes the redistribution of data to third parties and the usage for commercial applications. Some collaboration between data producers and users, and the mention of the AMMA project in any publication is also required. The AMMA database and the associated on-line tools have been fully developed and are managed by two teams in France (IPSL Database Centre, Paris and OMP, Toulouse). Users can access data of both data centres using an unique web portal. This website is composed of different modules : - Registration: forms to register, read and sign the data use chart when an user visits for the first time - Data access interface: friendly tool allowing to build a data extraction request by selecting various criteria like location, time, parameters... The request can

  5. Performance related issues in distributed database systems

    Science.gov (United States)

    Mukkamala, Ravi

    1991-01-01

    The key elements of research performed during the year long effort of this project are: Investigate the effects of heterogeneity in distributed real time systems; Study the requirements to TRAC towards building a heterogeneous database system; Study the effects of performance modeling on distributed database performance; and Experiment with an ORACLE based heterogeneous system.

  6. Metadata queries for complex database systems

    OpenAIRE

    O'Connor, Gerald

    2004-01-01

    Federated Database Management Systems (FDBS) are very complex. Component databases can be heterogeneous, autonomous and distributed; accounting for these different characteristics in building an FDBS is a difficult engineering problem. The Common Data Model (CDM) is what is used to represent the data in the FDBS. It must be semantically rich to correctly represent the data from diverse component databases which differ in structure, data model, semantics and content. In this research project we ...

  7. Search Algorithms for Conceptual Graph Databases

    OpenAIRE

    Abdurashid Mamadolimov

    2012-01-01

    We consider a database composed of a set of conceptual graphs. Using conceptual graphs and graph homomorphism it is possible to build a basic query-answering mechanism based on semantic search. Graph homomorphism defines a partial order over conceptual graphs. Since graph homomorphism checking is an NP-Complete problem, the main requirement for database organizing and managing algorithms is to reduce the number of homomorphism checks. Searching is a basic operation for database manipulating p...
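
    A toy backtracking search for a graph homomorphism from a query graph into a data graph (node labels must match and every query edge must map to a data edge) illustrates why reducing the number of checks matters, since the general problem is NP-complete. The graphs, labels and function below are invented for illustration:

```python
# Backtracking search for a homomorphism between small labeled digraphs.
# q_nodes/d_nodes: {node: label}; q_edges/d_edges: set of (src, dst).
def homomorphism(q_nodes, q_edges, d_nodes, d_edges, mapping=None):
    mapping = mapping or {}
    if len(mapping) == len(q_nodes):
        return mapping  # every query node mapped consistently
    qn = next(n for n in q_nodes if n not in mapping)
    for dn, dlabel in d_nodes.items():
        if dlabel != q_nodes[qn]:
            continue  # label mismatch prunes this branch early
        mapping[qn] = dn
        ok = all((mapping[a], mapping[b]) in d_edges
                 for a, b in q_edges if a in mapping and b in mapping)
        if ok:
            result = homomorphism(q_nodes, q_edges, d_nodes, d_edges, mapping)
            if result:
                return result
        del mapping[qn]  # backtrack
    return None

q = ({"x": "Person", "y": "City"}, {("x", "y")})
d = ({"p1": "Person", "c1": "City"}, {("p1", "c1")})
print(homomorphism(q[0], q[1], d[0], d[1]))  # {'x': 'p1', 'y': 'c1'}
```

    The label check before recursing is the kind of pruning the abstract alludes to: good database organization exists to cut the number of expensive homomorphism checks.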

  8. Trade Capacity Building Database Data Set

    Data.gov (United States)

    US Agency for International Development — Since 2001, the U.S. Agency for International Development (USAID) has conducted an annual survey on behalf of the Office of the U.S. Trade Representative (USTR) to...

  9. Beyond anti-Muslim sentiment: opposing the Ground Zero mosque as a means to pursuing a stronger America.

    Science.gov (United States)

    Jia, Lile; Karpen, Samuel C; Hirt, Edward R

    2011-10-01

    Americans' opposition toward building an Islamic community center at Ground Zero has been attributed solely to a general anti-Muslim sentiment. We hypothesized that some Americans' negative reaction was also due to their motivation to symbolically pursue a positive U.S. group identity, which had suffered from a concurrent economic and political downturn. Indeed, when participants perceived that the United States was suffering from lowered international status, those who identified strongly with the country, as evidenced especially by a high respect or deference for group symbols, reported a stronger opposition to the "Ground Zero mosque" than participants who identified weakly with the country did. Furthermore, participants who identified strongly with the country also showed a greater preference for buildings that were symbolically congruent than for buildings that were symbolically incongruent with the significance of Ground Zero, and they represented Ground Zero with a larger symbolic size. These findings suggest that identifying group members' underlying motivations provides unusual insights for understanding intergroup conflict. PMID:21903874

  10. Asbestos Exposure Assessment Database

    Science.gov (United States)

    Arcot, Divya K.

    2010-01-01

    Exposure to particular hazardous materials in a work environment is dangerous to the employees who work directly with or around the materials as well as those who come in contact with them indirectly. In order to maintain a national standard for safe working environments and protect worker health, the Occupational Safety and Health Administration (OSHA) has set forth numerous precautionary regulations. NASA has been proactive in adhering to these regulations by implementing standards which are often stricter than regulation limits and administering frequent health risk assessments. The primary objective of this project is to create the infrastructure for an Asbestos Exposure Assessment Database specific to NASA Johnson Space Center (JSC) which will compile all of the exposure assessment data into a well-organized, navigable format. The data includes Sample Types, Sample Durations, Crafts of those from whom samples were collected, Job Performance Requirements (JPR) numbers, Phased Contrast Microscopy (PCM) and Transmission Electron Microscopy (TEM) results and qualifiers, Personal Protective Equipment (PPE), and names of industrial hygienists who performed the monitoring. This database will allow NASA to provide OSHA with specific information demonstrating that JSC's work procedures are protective enough to minimize the risk of future disease from the exposures. The data has been collected by the NASA contractors Computer Sciences Corporation (CSC) and Wyle Laboratories. The personal exposure samples were collected from devices worn by laborers working at JSC and by building occupants located in asbestos-containing buildings.

  11. How a Planet with Earth's size can have a Gravitational Field Much Stronger than the Neptune

    OpenAIRE

    De Aquino, Fran

    2015-01-01

    In this paper we show how the gravitational field can be amplified under certain circumstances, and how a planet with Earth's size can have a gravitational field much stronger than Neptune's. A new interpretation for quasars is formulated here.

  12. Database tomography for commercial application

    Science.gov (United States)

    Kostoff, Ronald N.; Eberhart, Henry J.

    1994-01-01

    Database tomography is a method for extracting themes and their relationships from text. The algorithms employed begin with word frequency and word proximity analysis and build upon these results. When the word 'database' is used, think of medical or police records, patents, journals, or papers, etc. (any text information that can be computer stored). Database tomography features a full-text, user-interactive technique enabling the user to identify areas of interest, establish relationships, and map trends for a deeper understanding of an area of interest. Database tomography concepts and applications have been reported in journals and presented at conferences. One important feature of the database tomography algorithm is that it can be used on a database of any size, and will facilitate the user's ability to understand the volume of content therein. While employing the process to identify research opportunities, it became obvious that this promising technology has potential applications for business, science, engineering, law, and academe. Examples include evaluating marketing trends, strategies, relationships and associations. Also, the database tomography process would be a powerful component in the areas of competitive intelligence, national security intelligence and patent analysis. User interest and involvement cannot be overemphasized.
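    The word frequency and proximity analysis the record describes can be sketched in a few lines. This is a minimal illustration of the general technique, not the actual Database Tomography implementation; the function name and window size are assumptions.

```python
from collections import Counter

def frequency_and_proximity(text, window=5):
    """Count word frequencies and co-occurrences of word pairs
    that appear within `window` words of each other."""
    words = text.lower().split()
    freq = Counter(words)
    pairs = Counter()
    for i, w in enumerate(words):
        for v in words[i + 1:i + window]:
            pairs[tuple(sorted((w, v)))] += 1  # unordered pair
    return freq, pairs

freq, pairs = frequency_and_proximity(
    "the database stores text the database indexes text")
```

    Themes would then emerge by ranking the entries of `pairs` and clustering words that repeatedly co-occur.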

  13. Method discussion for quick response grey prediction of stronger aftershocks of an earthquake sequence

    Institute of Scientific and Technical Information of China (English)

    1999-01-01

    In this paper, we treat the occurrence process of early strong aftershocks of a main shock-aftershock type earthquake sequence as a complex grey system, and introduce a method for predicting its stronger aftershocks based on grey prediction theory. A test prediction for the 1998 Zhangbei MS=6.2 earthquake sequence shows that the grey prediction method may be of practical significance for the quick-response prediction of stronger aftershocks of an earthquake sequence.
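    The abstract does not give the model's equations, but grey prediction is typically built on the GM(1,1) model. Below is a minimal GM(1,1) sketch using the standard accumulated-generating formulation; it is illustrative only and is not the authors' actual aftershock procedure.

```python
import math

def gm11_forecast(x0, steps=1):
    """Fit a GM(1,1) grey model to a short positive series x0 and
    forecast `steps` values ahead."""
    n = len(x0)
    # 1-AGO: accumulated generating operation smooths the raw series
    x1 = [x0[0]]
    for v in x0[1:]:
        x1.append(x1[-1] + v)
    # background values: means of consecutive AGO terms
    z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]
    y = x0[1:]
    # least-squares fit of the grey equation x0(k) = -a*z(k) + b
    m = n - 1
    sz, sy = sum(z), sum(y)
    szz = sum(v * v for v in z)
    szy = sum(zi * yi for zi, yi in zip(z, y))
    slope = (m * szy - sz * sy) / (m * szz - sz * sz)
    b = (sy - slope * sz) / m
    a = -slope  # development coefficient
    c = x0[0] - b / a

    def x1_hat(k):  # fitted AGO curve, k is a 0-based index
        return c * math.exp(-a * k) + b / a

    return [x1_hat(n + s) - x1_hat(n + s - 1) for s in range(steps)]

# e.g. a roughly exponentially decaying aftershock-rate series
forecast = gm11_forecast([10.0, 7.8, 6.1, 4.8, 3.7], steps=2)
```

    GM(1,1) fits an exponential trend, which is why it suits small samples such as the first days of an aftershock sequence.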

  14. Stackfile Database

    Science.gov (United States)

    deVarvalho, Robert; Desai, Shailen D.; Haines, Bruce J.; Kruizinga, Gerhard L.; Gilmer, Christopher

    2013-01-01

    This software provides storage, retrieval, and analysis functionality for managing satellite altimetry data. It improves the efficiency and analysis capabilities of existing database software with improved flexibility and documentation. It offers flexibility in the type of data that can be stored, and efficient retrieval across either the spatial domain or the time domain. Built-in analysis tools are provided for frequently performed altimetry tasks. This software package is used for storing and manipulating satellite measurement data. It was developed with a focus on handling the requirements of repeat-track altimetry missions such as Topex and Jason. It was, however, designed to work with a wide variety of satellite measurement data (e.g., the Gravity Recovery And Climate Experiment -- GRACE). The software consists of several command-line tools for importing, retrieving, and analyzing satellite measurement data.
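    The retrieval pattern described, across either the spatial domain or the time domain, can be illustrated with a toy in-memory store. All names here are hypothetical; the actual Stackfile command-line tools and interfaces are not reproduced in this record.

```python
from dataclasses import dataclass

@dataclass
class Measurement:
    time: float   # seconds since some epoch
    lat: float    # degrees
    lon: float    # degrees
    value: float  # e.g. sea-surface height in metres

class MeasurementStore:
    def __init__(self):
        self._records = []

    def add(self, rec):
        self._records.append(rec)

    def query_time(self, t0, t1):
        """Retrieve across the time domain: all records in [t0, t1)."""
        return [r for r in self._records if t0 <= r.time < t1]

    def query_region(self, lat0, lat1, lon0, lon1):
        """Retrieve across the spatial domain: all records in a lat/lon box."""
        return [r for r in self._records
                if lat0 <= r.lat < lat1 and lon0 <= r.lon < lon1]

store = MeasurementStore()
store.add(Measurement(time=0.0, lat=10.0, lon=20.0, value=0.12))
store.add(Measurement(time=3600.0, lat=-5.0, lon=140.0, value=-0.03))
```

    A real repeat-track store would index by pass and cycle rather than scan linearly, but the two query axes are the same.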

  15. Informatics derived materials databases for multifunctional properties

    International Nuclear Information System (INIS)

    In this review, we provide an overview of the development of quantitative structure–property relationships that incorporate the impact of data uncertainty from small, limited-knowledge data sets, from which we rapidly develop new and larger databases. Unlike traditional database development, this informatics-based approach is concurrent with the identification and discovery of the key metrics controlling structure–property relationships; even more importantly, we are now in a position to build materials databases based on design 'intent' and not just design parameters. This permits, for example, establishing materials databases that can be used for targeted multifunctional properties and not just one characteristic at a time, as is presently done. This review provides a summary of the computational logic of building such virtual databases and gives some examples in the field of complex inorganic solids for scintillator applications. (review)

  16. Database development and management

    CERN Document Server

    Chao, Lee

    2006-01-01

    Introduction to Database Systems: Functions of a Database; Database Management System; Database Components; Database Development Process. Conceptual Design and Data Modeling: Introduction to Database Design Process; Understanding Business Process; Entity-Relationship Data Model; Representing Business Process with Entity-Relationship Model. Table Structure and Normalization: Introduction to Tables; Table Normalization. Transforming Data Models to Relational Databases: DBMS Selection; Transforming Data Models to Relational Databases; Enforcing Constraints; Creating Database for Business Process. Physical Design and Database

  17. The NAGRA/PSI thermochemical database: new developments

    Energy Technology Data Exchange (ETDEWEB)

    Hummel, W.; Berner, U.; Thoenen, T. [Paul Scherrer Inst. (PSI), Villigen (Switzerland); Pearson, F.J.Jr. [Ground-Water Geochemistry, New Bern, NC (United States)

    2000-07-01

    The development of a high quality thermochemical database for performance assessment is a scientifically fascinating and demanding task, and is not simply collecting and recording numbers. The final product can be visualised as a complex building with different storeys representing different levels of complexity. The present status report illustrates the various building blocks which we believe are integral to such a database structure. (authors)

  18. The NAGRA/PSI thermochemical database: new developments

    International Nuclear Information System (INIS)

    The development of a high quality thermochemical database for performance assessment is a scientifically fascinating and demanding task, and is not simply collecting and recording numbers. The final product can be visualised as a complex building with different storeys representing different levels of complexity. The present status report illustrates the various building blocks which we believe are integral to such a database structure. (authors)

  19. Cloud Database Database as a Service

    Directory of Open Access Journals (Sweden)

    Waleed Al Shehri

    2013-05-01

    Full Text Available Cloud computing has been the most widely adopted technology in recent times, and databases have now also moved to cloud computing, so we will look into the details of database as a service and its functioning. This paper includes all the basic information about database as a service. The working of database as a service and the challenges it faces are discussed with appropriate examples. The structure of the database in cloud computing and its working in collaboration with nodes is observed under database as a service. This paper will also highlight the important things to note before adopting the database-as-a-service provider that is best among the others. The advantages and disadvantages of database as a service will let you decide whether or not to use database as a service. Database as a service has already been adopted by many e-commerce companies, and those companies are getting benefits from this service.

  20. Beyond Bradley and Behrendt: Building a stronger evidence-base about Indigenous pathways and transitions into higher education

    OpenAIRE

    Jack Frawley; Smith, James A.; Steve Larkin

    2015-01-01

    Successive Australian governments have addressed the issue of social inclusion and equity in higher education in a number of policies and reviews, the most recent being the Review of Australian Higher Education, the Bradley Review (Bradley et al. 2008); and the Review of Higher Education Access and Outcomes for Aboriginal and Torres Strait Islander People, the Behrendt Review (Behrendt et al. 2012). The Bradley Review noted that although there had been success in areas of gender inequity ...

  1. Strategies of Building a Stronger Sense of Community for Sustainable Neighborhoods: Comparing Neighborhood Accessibility with Community Empowerment Programs

    OpenAIRE

    Te-I Albert Tsai

    2014-01-01

    New Urbanist development in the U.S. aims at enhancing a sense of community and seeks to return to the design of early traditional neighborhoods which have pedestrian-oriented environments with retail shops and services within walking distances of housing. Meanwhile, 6000 of Taiwan’s community associations have been running community empowerment programs supported by the Council for Cultural Affairs that have helped many neighborhoods to rebuild so-called community cohesion. This research ...

  2. Beyond Bradley and Behrendt: Building a stronger evidence-base about Indigenous pathways and transitions into higher education

    Directory of Open Access Journals (Sweden)

    Jack Frawley

    2015-10-01

    Full Text Available Successive Australian governments have addressed the issue of social inclusion and equity in higher education in a number of policies and reviews, the most recent being the Review of Australian Higher Education, the Bradley Review (Bradley et al. 2008); and the Review of Higher Education Access and Outcomes for Aboriginal and Torres Strait Islander People, the Behrendt Review (Behrendt et al. 2012). The Bradley Review noted that although there had been success in areas of gender inequity in higher education, students from regional and remote areas, Indigenous students and those from low SES backgrounds were still seriously under-represented. The Bradley Review also found that the major barriers to the participation of students from low SES backgrounds were educational attainment, lower awareness of the long term benefits of higher education, less aspiration to participate, and the potential need for extra financial, academic or personal support once enrolled. As a result of the Bradley Review the Australian Government’s policy Transforming Australia’s Higher Education System announced two targets for the higher education sector: that by 2020, 20% of undergraduate university students should be from low socio-economic backgrounds; and, that by 2025, 40% of 25-34 year olds should hold a bachelor degree. To support this policy, the Higher Education Participation and Partnerships Program (now rebadged the Higher Education Participation Program, HEPP) initiative came into being, with the participation component offering universities financial incentives to enroll and retain students from low SES backgrounds; and the partnerships component providing funding to raise student aspirations for higher education and working in partnership with other education institutions to do this (Gale & Parker 2013).

  3. The SIMBAD astronomical database

    CERN Document Server

    Wenger, M; Egret, D; Dubois, P; Bonnarel, F; Borde, S; Genova, F; Jasniewicz, G; Laloe, S; Lesteven, S; Monier, R; Wenger, Marc; Ochsenbein, Francois; Egret, Daniel; Dubois, Pascal; Bonnarel, Francois; Borde, Suzanne; Genova, Francoise; Jasniewicz, Gerard; Laloe, Suzanne; Lesteven, Soizick; Monier, Richard

    2000-01-01

    Simbad is the reference database for identification and bibliography of astronomical objects. It contains identifications, `basic data', bibliography, and selected observational measurements for several million astronomical objects. Simbad is developed and maintained by CDS, Strasbourg. Building the database contents is achieved with the help of several contributing institutes. Scanning the bibliography is the result of the collaboration of CDS with bibliographers in Observatoire de Paris (DASGAL), Institut d'Astrophysique de Paris, and Observatoire de Bordeaux. When selecting catalogues and tables for inclusion, priority is given to optimal multi-wavelength coverage of the database, and to support of research developments linked to large projects. In parallel, the systematic scanning of the bibliography reflects the diversity and general trends of astronomical research. A WWW interface to Simbad is available at: http://simbad.u-strasbg.fr/Simbad

  4. Resident database interfaces to the DAVID system, a heterogeneous distributed database management system

    Science.gov (United States)

    Moroh, Marsha

    1988-01-01

    A methodology for building interfaces of resident database management systems to a heterogeneous distributed database management system under development at NASA, the DAVID system, was developed. The feasibility of that methodology was demonstrated by construction of the software necessary to perform the interface task. The interface terminology developed in the course of this research is presented. The work performed and the results are summarized.

  5. NIST Databases on Atomic Spectra

    Science.gov (United States)

    Reader, J.; Wiese, W. L.; Martin, W. C.; Musgrove, A.; Fuhr, J. R.

    2002-11-01

    The NIST atomic and molecular spectroscopic databases now available on the World Wide Web through the NIST Physics Laboratory homepage include Atomic Spectra Database, Ground Levels and Ionization Energies for the Neutral Atoms, Spectrum of Platinum Lamp for Ultraviolet Spectrograph Calibration, Bibliographic Database on Atomic Transition Probabilities, Bibliographic Database on Atomic Spectral Line Broadening, and Electron-Impact Ionization Cross Section Database. The Atomic Spectra Database (ASD) [1] offers evaluated data on energy levels, wavelengths, and transition probabilities for atoms and atomic ions. Data are given for some 950 spectra and 70,000 energy levels. About 91,000 spectral lines are included, with transition probabilities for about half of these. Additional data resulting from our ongoing critical compilations will be included in successive new versions of ASD. We plan to include, for example, our recently published data for some 16,000 transitions covering most ions of the iron-group elements, as well as Cu, Kr, and Mo [2]. Our compilations benefit greatly from experimental and theoretical atomic-data research being carried out in the NIST Atomic Physics Division. A new compilation covering spectra of the rare gases in all stages of ionization, for example, revealed a need for improved data in the infrared. We have thus measured these needed data with our high-resolution Fourier transform spectrometer [3]. An upcoming new database will give wavelengths and intensities for the stronger lines of all neutral and singly-ionized atoms, along with energy levels and transition probabilities for the persistent lines [4]. A critical compilation of the transition probabilities of Ba I and Ba II [5] has been completed and several other compilations of atomic transition probabilities are nearing completion. These include data for all spectra of Na, Mg, Al, and Si [6]. 
Newly compiled data for selected ions of Ne, Mg, Si, and S will form the basis for a new

  6. The Life Support Database system

    Science.gov (United States)

    Likens, William C.

    1991-01-01

    The design and implementation of the database system are described with specific reference to data available from the Build-1 version and techniques for its utilization. The initial documents for the Life Support Database are reviewed in terms of title format and sequencing, and the users are defined as participants in NASA-sponsored life-support research. The software and hardware selections are based on referential integrity and compatibility, respectively, and the user interface is implemented by means of an applications-programming tool. The current beta-test implementation of the system includes several thousand acronyms and bibliographic references as well as chemical properties and exposure limits, equipment, construction materials, and mission data. In spite of modifications in the database, the system is found to be effective and a potentially significant resource for the aerospace community.

  7. Databases and their application

    NARCIS (Netherlands)

    E.C. Grimm; R.H.W Bradshaw; S. Brewer; S. Flantua; T. Giesecke; A.M. Lézine; H. Takahara; J.W.,Jr Williams

    2013-01-01

    During the past 20 years, several pollen database cooperatives have been established. These databases are now constituent databases of the Neotoma Paleoecology Database, a public domain, multiproxy, relational database designed for Quaternary-Pliocene fossil data and modern surface samples. The poll

  8. Unit 66 - Database Creation

    OpenAIRE

    Unit 61, CC in GIS; National Center for Geographic Information and Analysis (UC Santa Barbara, SUNY at Buffalo, University of Maine)

    1990-01-01

    This unit examines the planning and management issues involved in the physical creation of the database. It describes some issues in database creation, key hardware parameters of the system, partitioning the database for tiles and layers and converting data for the database. It illustrates these through an example from the Flathead National Forest in northwestern Montana, where a resource management database was required.

  9. Visual Attention Modelling for Subjective Image Quality Databases

    OpenAIRE

    Engelke, Ulrich; Maeder, Anthony; Zepernick, Hans-Jürgen

    2009-01-01

    The modelling of perceptual image quality metrics has experienced increased effort in recent years. In order to allow for model design, validation, and comparison, a number of subjective image quality databases have been made available to the research community. Most metrics that were designed using these databases assess the quality uniformly over the whole image, not taking into account stronger attention to salient regions of an image. In order to facilitate incorporation of visual attentio...
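    Incorporating visual attention into a quality metric typically means weighting local errors by a saliency map. A minimal sketch follows (a plain saliency-weighted MSE over flattened pixel lists; the function and weighting scheme are illustrative assumptions, not the models from the paper):

```python
def saliency_weighted_mse(ref, dist, saliency):
    """Mean squared error where each pixel's error is weighted by its
    saliency, so distortions in attended regions count more."""
    assert len(ref) == len(dist) == len(saliency)
    num = sum(s * (a - b) ** 2 for a, b, s in zip(ref, dist, saliency))
    return num / sum(saliency)

ref  = [10, 10, 10, 10]
dist = [10, 10, 14, 10]  # error only in the third pixel
flat  = saliency_weighted_mse(ref, dist, [1, 1, 1, 1])  # uniform attention
focus = saliency_weighted_mse(ref, dist, [1, 1, 4, 1])  # error is salient
```

    With uniform saliency the measure reduces to ordinary MSE; concentrating saliency on the distorted region increases the predicted annoyance, which is the behaviour the abstract motivates.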

  10. Peptide-MHC class I stability is a stronger predictor of CTL immunogenicity than peptide affinity

    DEFF Research Database (Denmark)

    Harndahl, Mikkel Nors; Rasmussen, Michael; Nielsen, Morten;

    2012-01-01

    Peptide-MHC class I stability is a stronger predictor of CTL immunogenicity than peptide affinity Mikkel Harndahla, Michael Rasmussena, Morten Nielsenb, Soren Buusa,∗ a Laboratory of Experimental Immunology, Faculty of Health Sciences, University of Copenhagen, Denmark b Center for Biological Seq...... al., 2007. J. Immunol. 178, 7890–7901. doi:10.1016/j.molimm.2012.02.025...

  11. A stronger patch test elicitation reaction to the allergen hydroxycitronellal plus the irritant sodium lauryl sulfate

    DEFF Research Database (Denmark)

    Heydorn, S; Andersen, Klaus Ejner; Johansen, Jeanne Duus;

    2003-01-01

    Household and cleaning products often contain both allergens and irritants. The aim of this double-blinded, randomized, paired study was to determine whether patch testing with an allergen (hydroxycitronellal) combined with an irritant [sodium lauryl sulfate (SLS)] causes a stronger patch test...

  12. World Religion Database

    OpenAIRE

    Dekker, Jennifer

    2009-01-01

    This article reviews the new database released by Brill entitled World Religion Database (WRD). It compares WRD to other religious demography tools available and rates the database on a 5-point scale.

  13. Main-memory database VS Traditional database

    OpenAIRE

    Rehn, Marcus; Sunesson, Emil

    2013-01-01

    There has been a surge of new databases in recent years. Applications today create a higher demand on database performance than ever before. Main-memory databases have come into the market quite recently and they are just now catching a lot of interest from many different directions. Main-memory databases are a type of database that stores all of its data in the primary memory. They provide a big increase in performance to a lot of different applications. This work evaluates the difference in...

  14. Collecting Taxes Database

    Data.gov (United States)

    US Agency for International Development — The Collecting Taxes Database contains performance and structural indicators about national tax systems. The database contains quantitative revenue performance...

  15. USAID Anticorruption Projects Database

    Data.gov (United States)

    US Agency for International Development — The Anticorruption Projects Database (Database) includes information about USAID projects with anticorruption interventions implemented worldwide between 2007 and...

  16. Memory Storage Issues of Temporal Database Applications on Relational Database Management Systems

    Directory of Open Access Journals (Sweden)

    Sami M. Halawani

    2010-01-01

    Full Text Available Problem statement: Many existing database applications manage time-varying data. These database applications are referred to as temporal databases or time-oriented database applications, and are considered repositories of time-dependent data. Many proposals have been introduced for developing time-oriented database applications, some of which suggest building support for Temporal Database Management Systems (TDBMS) on top of existing non-temporal DBMSs, while others suggest modifying the models of existing DBMSs or building a TDBMS from scratch. Approach: This study addressed several issues in developing a technique that enables database designers to understand how time-varying behavior can be modeled and mapped into tabular form. Results: Conventional DBMSs do not have the capability to record and process time-varying aspects of the real world. With the growing sophistication of DBMS applications, the lack of temporal support in conventional DBMSs raises serious problems when they are used to develop temporal databases. Understanding how to think about time and represent it in formal systems is the topic of this study. We examined how to implement time-varying applications in the SQL structured query language by introducing temporal data concepts that need to be simulated in DBMSs that lack temporal support. We proposed a temporal data model that combines the features of previous temporal models and reduces the cost of memory storage. Conclusion: We proposed a technique for implementing a temporal database on top of an existing non-temporal DBMS. This technique covers five main areas: temporal database conceptual design, temporal database logical design, enforcing integrity constraints in temporal databases, and modifying and querying temporal databases. We proposed a data model for the temporal database based on the data models discussed in the literature.
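    A common way to simulate valid-time support on a non-temporal DBMS, in the spirit of the approach the record describes, is to add period columns and query with an "as of" predicate. The schema and helper below are illustrative assumptions, not the authors' proposed model:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE salary (
        emp        TEXT    NOT NULL,
        amount     INTEGER NOT NULL,
        valid_from TEXT    NOT NULL,  -- inclusive
        valid_to   TEXT    NOT NULL   -- exclusive; '9999-12-31' = current
    )
""")
conn.executemany("INSERT INTO salary VALUES (?, ?, ?, ?)", [
    ("alice", 50000, "2019-01-01", "2021-01-01"),
    ("alice", 60000, "2021-01-01", "9999-12-31"),
])

def salary_as_of(conn, emp, date):
    """Time-slice query: the row whose validity period contains `date`."""
    row = conn.execute(
        "SELECT amount FROM salary"
        " WHERE emp = ? AND valid_from <= ? AND ? < valid_to",
        (emp, date, date)).fetchone()
    return row[0] if row else None
```

    An update then becomes "close the current period and insert a new row" rather than an in-place overwrite, which is what keeps the history queryable.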

  17. DATABASES FOR RECOGNITION OF HANDWRITTEN ARABIC CHEQUES

    OpenAIRE

    Alohali, Y.; Cheriet, M.; Suen, C.Y.

    2004-01-01

    This paper describes an effort toward building Arabic cheque databases for research in the recognition of handwritten Arabic cheques. Databases of Arabic legal amounts, Arabic subwords, courtesy amounts, Indian digits, and Arabic cheques are provided. This paper highlights the characteristics of the Arabic language and presents the various steps that have been completed to achieve this goal, including segmentation, binarization, tagging, and validation.
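    Of the preprocessing steps named (segmentation, binarization, tagging, validation), binarization is the most self-contained. A minimal global-threshold sketch follows; the mean-intensity threshold is a simplifying assumption, as production systems typically use Otsu's method or adaptive thresholds:

```python
def binarize(gray):
    """Binarize a grayscale image (rows of 0-255 values) with a global
    threshold set to the mean intensity; 1 marks ink, 0 background."""
    pixels = [p for row in gray for p in row]
    threshold = sum(pixels) / len(pixels)
    return [[1 if p < threshold else 0 for p in row] for row in gray]

# dark handwriting strokes on a light background
image = [[200, 200, 30],
         [200, 10, 200]]
binary = binarize(image)
```

    The binary map then feeds segmentation, which splits the cheque into legal-amount, courtesy-amount, and digit fields.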

  18. Case Studies on Sustainable Buildings

    OpenAIRE

    Hui, Sam CM

    2005-01-01

    This web site is developed with the aim to promote sustainable design and planning of buildings. A knowledge base of case studies and resources has been established to illustrate the sustainable design strategies and features in realistic building projects all over the world. The database of case studies can be searched by project names, locations, design strategies and design features.

  19. Diluting the inflationary axion fluctuation by a stronger QCD in the early Universe

    Directory of Open Access Journals (Sweden)

    Kiwoon Choi

    2015-11-01

    Full Text Available We propose a new mechanism to suppress the axion isocurvature perturbation, while producing the right amount of axion dark matter, within the framework of supersymmetric axion models with the axion scale induced by supersymmetry breaking. The mechanism involves an intermediate phase transition that generates the Higgs μ-parameter, before which the weak scale is comparable to the axion scale and the resulting stronger QCD yields an axion mass heavier than the Hubble scale over a certain period. Combined with the fact that the Hubble-induced axion scale during primordial inflation is well above the present intermediate axion scale, the stronger QCD in the early Universe suppresses the axion fluctuation enough even when the inflationary Hubble scale saturates the current upper bound, while generating an axion misalignment angle of order unity.

  20. Stronger constraints on axion from measuring the Casimir interaction by means of dynamic atomic force microscope

    CERN Document Server

    Bezerra, V B; Mostepanenko, V M; Romero, C

    2014-01-01

    We calculate the additional force due to two-axion exchange acting in a sphere-disc geometry, as used in experiments measuring the gradient of the Casimir force. With this result, stronger constraints on the pseudoscalar coupling constants of an axion and axion-like particles to a proton and a neutron are obtained over the wide range of axion masses from 0.03mV to 1eV. Among the three experiments with Au-Au, Au-Ni and Ni-Ni boundary surfaces performed by means of a dynamic atomic force microscope, the greatest improvement is achieved for the experiment with Au-Au test bodies. Here, the constraints obtained are stronger by up to a factor of 170, as compared to the previously known ones. The largest strengthening holds for an axion mass of 0.3eV.

  1. A stronger entanglement monogamy inequality in a 2x2x3 system

    International Nuclear Information System (INIS)

    In this paper, we prove a stronger entanglement monogamy inequality for a pure state |Ψ⟩ABC of a 2x2x3 system. Specifically, we show that the linear entropy of ρA, which quantifies the entanglement between A and BC, is always larger than the sum of the square of the concurrence between A and B and the square of the concurrence of assistance between A and C. Our proof is based on direct generalizations of the qubit system's results. Our inequality is stronger than the known monogamy inequality of concurrence and shows that the entanglement of assistance always comes from the existing entanglement. However, our inequality also shows that, unlike the three-qubit case, in higher dimensional systems the entanglement between A and BC cannot be completely transformed into bipartite entanglement with assistance. Through our proof, we also give some cases in which the inequality reduces to an equality.
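    In standard notation (the symbols below are assumed from the entanglement literature, since the abstract states the result only in words, and the linear-entropy normalization shown is one common convention), the claimed inequality for a pure state of a 2x2x3 system reads:

```latex
S_L(\rho_A) \;\ge\; C^2_{AB} + \left(C^a_{AC}\right)^2,
\qquad
S_L(\rho_A) = 2\left(1 - \operatorname{Tr}\rho_A^2\right),
```

    where C_AB is the concurrence between A and B, C^a_AC is the concurrence of assistance between A and C, and S_L(ρA) quantifies the entanglement between A and BC.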

  2. Bots, #StrongerIn, and #Brexit: Computational Propaganda during the UK-EU Referendum

    OpenAIRE

    Howard, Philip N.; Kollanyi, Bence

    2016-01-01

    Bots are social media accounts that automate interaction with other users, and they are active on the StrongerIn-Brexit conversation happening over Twitter. These automated scripts generate content through these platforms and then interact with people. Political bots are automated accounts that are particularly active on public policy issues, elections, and political crises. In this preliminary study on the use of political bots during the UK referendum on EU membership, we analyze the tweeti...

  3. KALIMER database development

    International Nuclear Information System (INIS)

    The KALIMER database is an advanced database supporting integrated management of liquid metal reactor design technology development through Web applications. The KALIMER design database is composed of a results database, Inter-Office Communication (IOC), a 3D CAD database, and a reserved documents database. The results database holds the research results from all phases of liquid metal reactor design technology development under the mid-term and long-term nuclear R and D program. IOC is a linkage control system between sub-projects for sharing and integrating the research results for KALIMER. The 3D CAD database provides a schematic overview of the KALIMER design structure. The reserved documents database was developed to manage the documents and reports produced over the course of the project.

  4. Lessons Learned From Developing Reactor Pressure Vessel Steel Embrittlement Database

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Jy-An John [ORNL

    2010-08-01

    Materials behavior caused by neutron irradiation under fission and/or fusion environments can hardly be understood without practical examination. An easily accessible material information system with a large material database using effective computers is necessary for the design of nuclear materials and for analyses or simulations of the phenomena. The Embrittlement Data Base (EDB) developed at ORNL is such a comprehensive collection of data. The EDB contains power reactor pressure vessel surveillance data, material test reactor data, foreign reactor data (obtained through bilateral agreements authorized by the NRC), and fracture toughness data. The lessons learned from building the EDB program and the associated database management activity, regarding Material Database Design Methodology, Architecture, and the Embedded QA Protocol, are described in this report. The development of the IAEA International Database on Reactor Pressure Vessel Materials (IDRPVM) and a comparison of the EDB and IAEA IDRPVM databases are provided in the report. The recommended database QA protocol and database infrastructure are also stated in the report.

  5. Lessons Learned From Developing Reactor Pressure Vessel Steel Embrittlement Database

    International Nuclear Information System (INIS)

    Materials behavior caused by neutron irradiation under fission and/or fusion environments can hardly be understood without practical examination. An easily accessible material information system with a large material database using effective computers is necessary for the design of nuclear materials and for analyses or simulations of the phenomena. The Embrittlement Data Base (EDB) developed at ORNL is such a comprehensive collection of data. The EDB contains power reactor pressure vessel surveillance data, material test reactor data, foreign reactor data (obtained through bilateral agreements authorized by the NRC), and fracture toughness data. The lessons learned from building the EDB program and the associated database management activity, regarding Material Database Design Methodology, Architecture, and the Embedded QA Protocol, are described in this report. The development of the IAEA International Database on Reactor Pressure Vessel Materials (IDRPVM) and a comparison of the EDB and IAEA IDRPVM databases are provided in the report. The recommended database QA protocol and database infrastructure are also stated in the report.

  6. Logical database design principles

    CERN Document Server

    Garmany, John; Clark, Terry

    2005-01-01

    INTRODUCTION TO LOGICAL DATABASE DESIGN: Understanding a Database; Database Architectures; Relational Databases; Creating the Database; System Development Life Cycle (SDLC); Systems Planning: Assessment and Feasibility; System Analysis: Requirements; System Analysis: Requirements Checklist; Models Tracking and Schedules; Design Modeling; Functional Decomposition Diagram; Data Flow Diagrams; Data Dictionary; Logical Structures and Decision Trees; System Design: Logical. SYSTEM DESIGN AND IMPLEMENTATION: The ER Approach; Entities and Entity Types; Attribute Domains; Attributes; Set-Valued Attributes; Weak Entities; Constraint

  7. Cloud Databases: A Paradigm Shift in Databases

    Directory of Open Access Journals (Sweden)

    Indu Arora

    2012-07-01

    Full Text Available Relational databases ruled the Information Technology (IT) industry for almost 40 years. But the last few years have seen sea changes in the way IT is used and viewed. Standalone applications have been replaced with web-based applications, dedicated servers with multiple distributed servers, and dedicated storage with network storage. Cloud computing has become a reality due to its lower cost, scalability and pay-as-you-go model. It is one of the biggest changes in IT since the rise of the World Wide Web. Cloud databases such as BigTable, Sherpa and SimpleDB are becoming popular. They address the limitations of existing relational databases related to scalability, ease of use and dynamic provisioning. Cloud databases are mainly used for data-intensive applications such as data warehousing, data mining and business intelligence. These applications are read-intensive, scalable and elastic in nature. Transactional data management applications such as banking, airline reservation, online e-commerce and supply chain management applications are write-intensive. Databases supporting such applications require ACID (Atomicity, Consistency, Isolation and Durability) properties, but these databases are difficult to deploy in the cloud. The goal of this paper is to review the state of the art in cloud databases and their various architectures. It further assesses the challenges in developing cloud databases that meet user requirements and discusses popularly used cloud databases.
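
    The ACID properties mentioned above are easy to demonstrate with an embedded relational database. The sketch below uses Python's sqlite3 (not one of the cloud databases discussed in the paper) to show atomicity: a transfer that fails midway leaves both balances untouched.

```python
import sqlite3

# Two hypothetical accounts; a transfer that fails midway must leave
# both balances unchanged (atomicity, the A in ACID).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO account VALUES (?, ?)", [(1, 100), (2, 0)])
conn.commit()
try:
    with conn:  # transaction: commits on success, rolls back on error
        conn.execute("UPDATE account SET balance = balance - 30 WHERE id = 1")
        raise RuntimeError("simulated crash before the credit side runs")
except RuntimeError:
    pass
rows = list(conn.execute("SELECT id, balance FROM account ORDER BY id"))
print(rows)
```

    Write-intensive applications like the banking example in the abstract depend on exactly this rollback guarantee, which is what makes them hard to distribute across cloud nodes.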

  8. The CATDAT damaging earthquakes database

    Science.gov (United States)

    Daniell, J. E.; Khazai, B.; Wenzel, F.; Vervaeck, A.

    2011-08-01

    The global CATDAT damaging earthquakes and secondary effects (tsunami, fire, landslides, liquefaction and fault rupture) database was developed to validate, remove discrepancies from, and greatly expand upon existing global databases, and to better understand the trends in vulnerability, exposure, and possible future impacts of historic earthquakes. In the authors' view, the lack of consistency and the errors in other frequently cited and analysed earthquake loss databases were major shortcomings that needed to be improved upon. Over 17 000 sources of information have been utilised, primarily in the last few years, to present data from over 12 200 damaging historical earthquakes, with over 7000 earthquakes since 1900 examined and validated before insertion into the database. Each validated earthquake includes seismological information, building damage, ranges of social losses to account for varying sources (deaths, injuries, homeless, and affected), and economic losses (direct, indirect, aid, and insured). Globally, a slightly increasing trend in economic damage due to earthquakes is not consistent with the greatly increasing exposure. The 1923 Great Kanto earthquake (214 billion USD damage; 2011 HNDECI-adjusted dollars), compared to the 2011 Tohoku (>300 billion USD at the time of writing), 2008 Sichuan and 1995 Kobe earthquakes, illustrates the growing concern over economic loss in urban areas, a trend that should be expected to continue. Many economic and social loss values not reported in existing databases have been collected. Historical GDP (Gross Domestic Product), exchange rate, wage information, population, HDI (Human Development Index), and insurance information have been collected globally to form comparisons. This catalogue is the largest known cross-checked global historic damaging earthquake database and should have far-reaching consequences for earthquake loss estimation, socio-economic analysis, and the global reinsurance field.

  9. Building Satisfaction Model for Information Users Based on Perceived Quality of Academic Database Websites

    Institute of Scientific and Technical Information of China (English)

    李莉; 甘利人; 谢兆霞

    2009-01-01

    In the network era, how to evaluate the effectiveness and acceptability of a website from the user's point of view has drawn attention from both industry and academia. For academic database websites, which provide information products and services, research on information user satisfaction is still in its early stages. This paper explores the perceived quality of information users, identifies the key dimensions of the quality of information products and services provided by academic database websites, and develops an assessment model of information user satisfaction. Following several hypotheses proposed on the basis of this framework, structural equation models are built using the Partial Least Squares method and the hypotheses are tested against the sample data. As a result, the key dimensions of perceived quality and other factors are examined. The validated information user satisfaction model can be widely applied to academic database websites to guide management decisions.

  10. Routing Protocols for Transmitting Large Databases or Multi-databases Systems

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Most knowledgeable people agree that networking and routing technologies have been around for about 25 years. Routing is simultaneously the most complicated function of a network and the most important. Likewise, more than 70% of computer application fields are MIS applications. So the challenge in building and using an MIS on a network is developing the means to find, access, and communicate large databases or multi-database systems. Because general databases are not time-continuous, they cannot be streamed, so we cannot obtain reliable and secure quality of service by deleting some unimportant datagrams during database transmission. In this article, we discuss which kind of routing protocol is best suited for the transmission of large databases or multi-database systems over networks.

  11. Big Data Analytics of City Wide Building Energy Declarations

    OpenAIRE

    Ma, Yixiao

    2015-01-01

    This thesis explores the building energy performance of the domestic sector in the city of Stockholm based on the building energy declaration database. The aims of this master thesis are to analyze the big data sets of around 20,000 buildings in the Stockholm region and to explore the correlation between building energy performance and different internal and external factors affecting building energy consumption, such as building energy systems, building vintages, etc. By using clustering method,...
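
    Clustering of the kind the thesis mentions can be illustrated with a toy one-dimensional k-means over invented specific-energy figures; the numbers below are hypothetical, not Stockholm declaration data.

```python
import random

def kmeans_1d(values, k, iters=20, seed=0):
    """Tiny 1-D k-means, enough to group buildings by energy intensity."""
    rng = random.Random(seed)
    centers = rng.sample(values, k)
    for _ in range(iters):
        # Assign each value to its nearest center.
        clusters = [[] for _ in range(k)]
        for v in values:
            i = min(range(k), key=lambda c: abs(v - centers[c]))
            clusters[i].append(v)
        # Recompute centers (keep the old one if a cluster went empty).
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# Hypothetical specific-energy figures (kWh/m2/year): efficient new
# builds near 60, older vintages near 150.
usage = [55, 58, 62, 64, 140, 148, 152, 160]
centers = kmeans_1d(usage, k=2)
print(centers)
```

    On well-separated data like this, the two centers converge to the group means, which is how clustering can separate building vintages by energy intensity.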

  12. Scaling up ATLAS Database Release Technology for the LHC Long Run

    Science.gov (United States)

    Borodin, M.; Nevski, P.; Vaniachine, A.; ATLAS Collaboration

    2011-12-01

    To overcome scalability limitations in database access on the Grid, ATLAS introduced the Database Release technology, replicating databases in files. For years, Database Release technology assured scalable database access for Monte Carlo production on the Grid. Since the previous CHEP, Database Release technology was used successfully in ATLAS data reprocessing on the Grid. A frozen Conditions DB snapshot guarantees reproducibility and transactional consistency, isolating Grid data processing tasks from continuous conditions updates at the "live" Oracle server. Database Release technology fully satisfies the requirements of ATLAS data reprocessing and Monte Carlo production. We parallelized the Database Release build workflow to avoid a linear dependency of the build time on the length of the LHC data-taking period. In recent data reprocessing campaigns the build time was reduced by an order of magnitude thanks to the proven master-worker architecture used in Google MapReduce. We describe further Database Release optimizations scaling up the technology for the LHC long run.
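
    The master-worker parallelization described above can be sketched with a thread pool: a master farms out run ranges to workers that each build one slice, so total build time depends on the slowest slice rather than the sum of all slices. This is an illustrative sketch with invented run ranges, not the ATLAS build code.

```python
from concurrent.futures import ThreadPoolExecutor

def build_slice(run_range):
    """Worker: stand-in for building one conditions-DB slice of a run range."""
    start, end = run_range
    return (f"slice_{start}_{end}", end - start)

# Hypothetical run ranges; the master hands one task to each worker.
ranges = [(0, 100), (100, 250), (250, 400)]
with ThreadPoolExecutor(max_workers=3) as pool:
    slices = list(pool.map(build_slice, ranges))  # results keep input order
print(slices)
```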

  13. Scaling up ATLAS Database Release Technology for the LHC Long Run

    International Nuclear Information System (INIS)

    To overcome scalability limitations in database access on the Grid, ATLAS introduced the Database Release technology, replicating databases in files. For years, Database Release technology assured scalable database access for Monte Carlo production on the Grid. Since the previous CHEP, Database Release technology was used successfully in ATLAS data reprocessing on the Grid. A frozen Conditions DB snapshot guarantees reproducibility and transactional consistency, isolating Grid data processing tasks from continuous conditions updates at the 'live' Oracle server. Database Release technology fully satisfies the requirements of ATLAS data reprocessing and Monte Carlo production. We parallelized the Database Release build workflow to avoid a linear dependency of the build time on the length of the LHC data-taking period. In recent data reprocessing campaigns the build time was reduced by an order of magnitude thanks to the proven master-worker architecture used in Google MapReduce. We describe further Database Release optimizations scaling up the technology for the LHC long run.

  14. Extracting Schema from an OEM Database

    Institute of Scientific and Technical Information of China (English)

    沈一栋

    1998-01-01

    While the schema-less feature of the OEM (Object Exchange Model) gives flexibility in representing semi-structured data, it brings difficulty in formulating database queries. Extracting a schema from an OEM database has therefore become an important research topic. This paper presents a new approach to this topic with the following features. (1) In addition to representing the nested label structure of an OEM database, the proposed OEM schema keeps up-to-date information about instance objects of the database; this object-level information is useful in speeding up query evaluation. (2) The OEM schema is explicitly represented as a label-set, which is easy to construct and update. (3) The OEM schema of a database is statically built and dynamically updated; the time complexity of building the OEM schema is linear in the size of the OEM database. (4) The approach is applicable to a wide range of areas where the underlying schema is much smaller than the database itself (e.g. data warehouses built from a set of heterogeneous databases).
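
    The label-set idea in feature (2) can be illustrated with a small sketch: one linear pass over a nested object collects every label path, giving a schema much smaller than the data. The function and data below are invented for illustration, not the paper's algorithm.

```python
def extract_label_set(obj, prefix=""):
    """Collect the set of label paths in a nested (OEM-like) object.
    One pass over the data, so the cost is linear in its size."""
    labels = set()
    if isinstance(obj, dict):
        for label, child in obj.items():
            path = f"{prefix}.{label}" if prefix else label
            labels.add(path)
            labels |= extract_label_set(child, path)
    elif isinstance(obj, list):
        for child in obj:
            labels |= extract_label_set(child, prefix)
    return labels

# Hypothetical semi-structured data: two heterogeneous "person" objects
# (one has a phone, the other an email) share a single schema.
db = {"person": [{"name": "Ann", "phone": "555"},
                 {"name": "Bob", "email": "b@x"}]}
labels = sorted(extract_label_set(db))
print(labels)
```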

  15. Directory of IAEA databases

    International Nuclear Information System (INIS)

    The first edition of the Directory of IAEA Databases is intended to describe the computerized information sources available to IAEA staff members. It contains a listing of all databases produced at the IAEA, together with information on their availability.

  16. Assessment Database (ADB)

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Assessment Database (ADB) is a relational database application for tracking water quality assessment data, including use attainment, and causes and sources of...

  17. Native Health Research Database

    Science.gov (United States)


  18. AIDSinfo Drug Database

    Science.gov (United States)


  19. E3 Staff Database

    Data.gov (United States)

    US Agency for International Development — E3 Staff database is maintained by E3 PDMS (Professional Development & Management Services) office. The database is Mysql. It is manually updated by E3 staff as...

  20. Database Description - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database name: Trypanosomes Database. Database classification: protein sequence databases. Organism taxonomy: Trypanosoma (Taxonomy ID: 5690); Homo sapiens (Taxonomy ID: 9606). Contact: ...rmation and Systems, Yata 1111, Mishima, Shizuoka 411-8540, JAPAN. Database description: The Trypanosomes database is a database providing th

  1. Aviation Safety Issues Database

    Science.gov (United States)

    Morello, Samuel A.; Ricks, Wendell R.

    2009-01-01

    The aviation safety issues database was instrumental in the refinement and substantiation of the National Aviation Safety Strategic Plan (NASSP). The issues database is a comprehensive set of issues from an extremely broad base of aviation functions, personnel, and vehicle categories, both nationally and internationally. Several aviation safety stakeholders, such as the Commercial Aviation Safety Team (CAST), have already used the database. This broader interest was the genesis for making the database publicly accessible and for writing this report.

  2. Web database development

    OpenAIRE

    Tsardas, Nikolaos A.

    2001-01-01

    This thesis explores the concept of web database development using Active Server Pages (ASP) and Java Server Pages (JSP), which are among the leading technologies in web database development. The focus of this thesis was to analyze and compare the ASP and JSP technologies, exposing their capabilities, limitations, and the differences between them. Specifically, issues related to back-end connectivity using Open Database Connectivity (ODBC) and Java Database Connectivity (JDBC), application ar...

  3. Refactoring of a Database

    OpenAIRE

    Dsousa, Ayeesha; Bhatia, Shalini

    2009-01-01

    The technique of database refactoring is about applying disciplined and controlled techniques to change an existing database schema. The problem is to successfully create a database refactoring framework. This paper concentrates on the feasibility of adapting this concept to work as a generic template. To retain the constraints regardless of the modifications to the metadata, the paper proposes a MetaData Manipulation Tool to facilitate change. The tool adopts a Template Des...

  4. Scopus database: a review

    OpenAIRE

    Burnham, Judy F.

    2006-01-01

    The Scopus database provides access to STM journal articles and the references included in those articles, allowing the searcher to search both forward and backward in time. The database can be used for collection development as well as for research. This review provides information on the key points of the database and compares it to Web of Science. Neither database is all-inclusive; rather, the two complement each other. If a library can only afford one, the choice must be based on institutional needs.

  5. Future database machine architectures

    OpenAIRE

    Hsiao, David K.

    1984-01-01

    There are many software database management systems available on many general-purpose computers, ranging from micros to super-mainframes. Database machines as backend computers can offload the database management work from the mainframe so that the same mainframe can be retained longer. However, the database backend must also demonstrate lower cost, higher performance, and newer functionality. Some of the fundamental architecture issues in the design of high-performance and great-capacity datab...

  6. Library-Generated Databases

    OpenAIRE

    Brattli, Tore

    1999-01-01

    The development of the Internet and the World Wide Web has given libraries many new opportunities to disseminate organized information about internal and external collections to users. One of these possibilities is to make separate databases for information or services not sufficiently covered by the online public access catalog (OPAC) or other available databases. What’s new is that librarians can now create and maintain these databases and make them user-friendly. Library-generated database...

  7. Plant and animal communities along the Swedish Baltic Sea coast - the building of a database of quantitative data collected by SCUBA divers, its use and some GIS applications in the Graesoe area

    International Nuclear Information System (INIS)

    The aim of the project was to compile a single database with quantitative data collected by SCUBA divers from the whole Swedish Baltic Sea coast. Data of plant and animal biomass, together with position, depth and type of substrate from 19 areas along the Swedish coast from the county of Blekinge to Kalix in the Bothnian Bay were compiled in a single database. In all, the database contains 2,170 records (samples) from 179 different stations where in total 161 plant and 145 animal species have been found. The data were then illustrated by the geographical distribution of plant and animal biomass and by constructing a model to estimate future changes of the plant and animal communities in the Graesoe area in the Aaland Sea, applying GIS techniques. To illustrate the opportunities of the database, the change of the composition of benthic plant and animal biomass with salinity was calculated. The proportion of marine species increased with increasing salinity and the benthic biomass was at its highest in the southern Baltic proper. Quantitative data from Grepen and the Graesoe-Singoe area were used to calculate present biomass in the Graesoe area. A scenario of the change in biomass distribution and total biomass caused by shore displacement was created using data from Raaneaa and Kalix in the Bothnian Bay. To map the biomass distribution, the material was divided into different depth intervals. The change of biomass with time was calculated as a function of salinity change and reduction of the available area, caused by shore displacement. The total biomass for all plants and animals in the investigated area was 50,500 tonnes at present. In 2,000 years the total biomass will be 25,000 tonnes and in 4,000 years 3,600 tonnes due to shore displacement causing a decrease in both salinity and available substrate. To make an estimate of the species distribution and a rough estimate of their biomass in an unknown geographic area, the type of substrate, the depth and the wave

  8. Plant and animal communities along the Swedish Baltic Sea coast - the building of a database of quantitative data collected by SCUBA divers, its use and some GIS applications in the Graesoe area

    Energy Technology Data Exchange (ETDEWEB)

    Sandman, Antonia; Kautsky, Hans [Stockholm Univ. (Sweden). Dept. of Systems Ecology

    2005-03-01

    The aim of the project was to compile a single database with quantitative data collected by SCUBA divers from the whole Swedish Baltic Sea coast. Data of plant and animal biomass, together with position, depth and type of substrate from 19 areas along the Swedish coast from the county of Blekinge to Kalix in the Bothnian Bay were compiled in a single database. In all, the database contains 2,170 records (samples) from 179 different stations where in total 161 plant and 145 animal species have been found. The data were then illustrated by the geographical distribution of plant and animal biomass and by constructing a model to estimate future changes of the plant and animal communities in the Graesoe area in the Aaland Sea, applying GIS techniques. To illustrate the opportunities of the database, the change of the composition of benthic plant and animal biomass with salinity was calculated. The proportion of marine species increased with increasing salinity and the benthic biomass was at its highest in the southern Baltic proper. Quantitative data from Grepen and the Graesoe-Singoe area were used to calculate present biomass in the Graesoe area. A scenario of the change in biomass distribution and total biomass caused by shore displacement was created using data from Raaneaa and Kalix in the Bothnian Bay. To map the biomass distribution, the material was divided into different depth intervals. The change of biomass with time was calculated as a function of salinity change and reduction of the available area, caused by shore displacement. The total biomass for all plants and animals in the investigated area was 50,500 tonnes at present. In 2,000 years the total biomass will be 25,000 tonnes and in 4,000 years 3,600 tonnes due to shore displacement causing a decrease in both salinity and available substrate. To make an estimate of the species distribution and a rough estimate of their biomass in an unknown geographic area, the type of substrate, the depth and the wave

  9. Automated Oracle database testing

    CERN Document Server

    CERN. Geneva

    2014-01-01

    Ensuring database stability and steady performance in the modern world of agile computing is a major challenge. Changes at any level of the computing infrastructure (OS parameters and packages, kernel versions, database parameters and patches, or even schema changes) can all potentially harm production services. This presentation shows how automatic and regular testing of Oracle databases can be achieved in such an agile environment.

  10. Mission and Assets Database

    Science.gov (United States)

    Baldwin, John; Zendejas, Silvino; Gutheinz, Sandy; Borden, Chester; Wang, Yeou-Fang

    2009-01-01

    Mission and Assets Database (MADB) Version 1.0 is an SQL database system with a Web user interface to centralize information. The database stores flight project support resource requirements, view periods, antenna information, schedule, and forecast results for use in mid-range and long-term planning of Deep Space Network (DSN) assets.

  11. CTD_DATABASE - Cascadia tsunami deposit database

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — The Cascadia Tsunami Deposit Database contains data on the location and sedimentological properties of tsunami deposits found along the Cascadia margin. Data have...

  12. Keyword Search in Databases

    CERN Document Server

    Yu, Jeffrey Xu; Chang, Lijun

    2009-01-01

    It has become highly desirable to provide users with flexible ways to query and search information over databases, as simple as Google-style keyword search. This book surveys recent developments in keyword search over databases and focuses on finding structural information among objects in a database using a set of keywords. Such structural information can be returned either as trees or as subgraphs representing how the objects that contain the required keywords are interconnected in a relational database or in an XML database. The structural keyword search is completely different from
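
    The core idea, connecting the tuples that contain the keywords through their link structure, can be sketched with a breadth-first search over a toy tuple graph. All names and data below are invented, not taken from the book.

```python
from collections import deque

def connect_keywords(graph, texts, keywords):
    """BFS from the node matching the first keyword to a node matching the
    second; the shortest path is a minimal structural answer tree."""
    hits = {kw: [n for n, t in texts.items() if kw in t] for kw in keywords}
    start, goal = hits[keywords[0]][0], hits[keywords[1]][0]
    queue, parent = deque([start]), {start: None}
    while queue:
        node = queue.popleft()
        if node == goal:  # reconstruct path back to the start node
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nxt in graph[node]:
            if nxt not in parent:
                parent[nxt] = node
                queue.append(nxt)
    return None

# Hypothetical tuple graph: foreign-key edges between rows a, b, c, d.
graph = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
texts = {"a": "smith database", "b": "orders", "c": "invoice",
         "d": "keyword search"}
path = connect_keywords(graph, texts, ["smith", "keyword"])
print(path)
```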

  13. Nuclear power economic database

    International Nuclear Information System (INIS)

    The nuclear power economic database (NPEDB), based on ORACLE V6.0, consists of three parts: an economic database of nuclear power stations, an economic database of the nuclear fuel cycle, and an economic database of nuclear power planning and the nuclear environment. The economic database of nuclear power stations includes data on general economics, technique, capital cost, benefit, etc. The economic database of the nuclear fuel cycle includes data on technique and nuclear fuel prices. The economic database of nuclear power planning and the nuclear environment includes data on energy history, forecasts, energy balance, electric power, and energy facilities.

  14. An Interoperable Cartographic Database

    Directory of Open Access Journals (Sweden)

    Slobodanka Ključanin

    2007-05-01

    Full Text Available The concept of producing a prototype of an interoperable cartographic database is explored in this paper, including the possibilities of integrating different geospatial data into the database management system and visualizing them on the Internet. The implementation includes vectorization based on the concept of a single map page, creation of the cartographic database in an object-relational database, spatial analysis, and definition and visualization of the database content in the form of a map on the Internet.

  15. The Influence of Building a Distributed Archival Database System on Archival Compilation

    Institute of Scientific and Technical Information of China (English)

    郑慧; 覃筱媚

    2014-01-01

    At the beginning of the twenty-first century, archival compilation in the network environment became one of the hot issues. However, people have paid little attention to applying a distributed archival database system to archival compilation. By setting up an audience module, an archival original-document database module, an experts module, and an editing module, a distributed archival database system can have a positive impact on archival compilation work, on the dissemination of archival compilation products, and on their open range, and it will play an important role in the future of archival compilation.

  16. Generic Proxies - Supporting Data Integration Inside the Database

    OpenAIRE

    Vancea, Andrei; Grossniklaus, Michael; Norrie, Moira C.

    2007-01-01

    Existing approaches to data integration generally propose building a layer on top of database systems to perform the necessary data transformations and manage data consistency. We show how support for the integration of heterogeneous data sources can instead be built into a database system through the introduction of a generic proxy concept.

  17. Analysis of string-searching algorithms on biological sequence databases

    OpenAIRE

    Sheik, SS; Aggarwal, Sumit K; Poddar, A; Sathiyabhama, B; Balakrishna, N; Sekar, K

    2005-01-01

    String-searching algorithms are used to find the occurrences of a search string in a given text. The advent of digital computers has stimulated the development of string-searching algorithms for various applications. Here, we report the performance of all string-searching algorithms on widely used biological sequence databases containing the building blocks of nucleotides (in the case of nucleic acid sequence database) and amino acids (in the case of protein sequence database). The biological...
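
    One classic member of this algorithm family is Boyer-Moore-Horspool. Below is a minimal sketch applied to a toy nucleotide string; it is an illustration of the technique, not one of the benchmarked implementations from the paper.

```python
def horspool_count(text, pattern):
    """Boyer-Moore-Horspool: count occurrences of pattern in text."""
    m, n = len(pattern), len(text)
    # Bad-character table: shift by distance from a character's last
    # occurrence (excluding the final position) to the pattern's end.
    shift = {c: m - i - 1 for i, c in enumerate(pattern[:-1])}
    count, pos = 0, 0
    while pos <= n - m:
        if text[pos:pos + m] == pattern:
            count += 1
        # Shift by the table entry for the character under the last
        # pattern position, or by the full pattern length if unseen.
        pos += shift.get(text[pos + m - 1], m)
    return count

# Toy nucleotide sequence, not a real database entry.
seq = "ACGTACGTGACGA"
count = horspool_count(seq, "ACG")
print(count)
```

    The small alphabet of nucleotide data (A, C, G, T) limits the shifts such bad-character heuristics can make, which is one reason algorithm performance on biological databases differs from performance on ordinary text.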

  18. Representations built from a true geographic database

    DEFF Research Database (Denmark)

    Bodum, Lars

    2005-01-01

    The development of a system for geovisualisation under the Centre for 3D GeoInformation at Aalborg University, Denmark, has exposed the need for a rethinking of the representation of virtual environments. Now that almost everything is possible (due to technological advances in computer graphics...... a representation based on geographic and geospatial principles. The system GRIFINOR, developed at 3DGI, Aalborg University, DK, is capable of creating this object-orientation and furthermore does this on top of a true Geographic database. A true Geographic database can be characterized as a database...... that can cover the whole world in 3d and with a spatial reference given by geographic coordinates. Built on top of this is a customised viewer, based on the Xith(Java) scenegraph. The viewer reads the objects directly from the database and solves the question about Level-Of-Detail on buildings...

  19. Organizing a breast cancer database: data management.

    Science.gov (United States)

    Yi, Min; Hunt, Kelly K

    2016-06-01

    Developing and organizing a breast cancer database can provide data and serve as valuable research tools for those interested in the etiology, diagnosis, and treatment of cancer. Depending on the research setting, the quality of the data can be a major issue. Assuring that the data collection process does not contribute inaccuracies can help to assure the overall quality of subsequent analyses. Data management is work that involves the planning, development, implementation, and administration of systems for the acquisition, storage, and retrieval of data while protecting it by implementing high security levels. A properly designed database provides you with access to up-to-date, accurate information. Database design is an important component of application design. If you take the time to design your databases properly, you'll be rewarded with a solid application foundation on which you can build the rest of your application. PMID:27197511
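
    The point above about keeping inaccuracies out of the collection process can be illustrated with declarative constraints that reject bad rows at entry time. The tables, fields, and stage codes below are hypothetical, invented for illustration, not the authors' schema.

```python
import sqlite3

# Hypothetical clinical tables: CHECK and FOREIGN KEY constraints stop
# invalid values at insert time instead of polluting later analyses.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("""CREATE TABLE patient (
    patient_id INTEGER PRIMARY KEY,
    birth_year INTEGER NOT NULL CHECK (birth_year BETWEEN 1900 AND 2016))""")
conn.execute("""CREATE TABLE tumor (
    tumor_id   INTEGER PRIMARY KEY,
    patient_id INTEGER NOT NULL REFERENCES patient(patient_id),
    stage      TEXT NOT NULL CHECK (stage IN ('0', 'I', 'II', 'III', 'IV')))""")
conn.execute("INSERT INTO patient VALUES (1, 1960)")
conn.execute("INSERT INTO tumor VALUES (1, 1, 'II')")
rejected = False
try:
    conn.execute("INSERT INTO tumor VALUES (2, 1, 'V')")  # invalid stage code
except sqlite3.IntegrityError:
    rejected = True
n_rows = conn.execute("SELECT COUNT(*) FROM tumor").fetchone()[0]
print(rejected, n_rows)
```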

  20. Which is a stronger indicator of dental caries: oral hygiene, food, or beverage? A clinical study.

    Science.gov (United States)

    Jain, Poonam; Gary, Julie J

    2014-01-01

    Dental caries is a multifactorial disease with various risk factors. Oral hygiene and dietary factors--specifically, the consumption of snacks and beverages with added sugars--have been shown to be risk indicators for this disease. It is critical for dental professionals to understand the relative roles of each of these food categories in the dental caries process. This article presents a cross-sectional study of 76 people living in a Southern Illinois fluoridated community. The amount of sugar-sweetened beverages, snack food consumption, plaque index, and age showed statistically significant relationships with the outcome variable--dental caries (P < 0.05). The results indicated that dietary factors and oral hygiene both contribute equally to dental caries in young adults living in a fluoridated community. Sugar-sweetened beverage consumption was a much stronger indicator of dental caries than snack food consumption in our study population. PMID:24784517

  1. Bots, #StrongerIn, and #Brexit: Computational Propaganda during the UK-EU Referendum

    CERN Document Server

    Howard, Philip N

    2016-01-01

    Bots are social media accounts that automate interaction with other users, and they are active in the StrongerIn-Brexit conversation taking place on Twitter. These automated scripts generate content through these platforms and then interact with people. Political bots are automated accounts that are particularly active on public policy issues, elections, and political crises. In this preliminary study of the use of political bots during the UK referendum on EU membership, we analyze the tweeting patterns of both human users and bots. We find that political bots have a small but strategic role in the referendum conversations: (1) the family of hashtags associated with the argument for leaving the EU dominates; (2) different perspectives on the issue utilize different levels of automation; and (3) less than 1 percent of sampled accounts generate almost a third of all the messages.

  2. Database for foundry engineers – simulationDB – a modern database storing simulation results

    Directory of Open Access Journals (Sweden)

    P. Malinowski

    2010-11-01

    Full Text Available Purpose: The main aim of this paper is to build a specific database system for collecting, analysing and searching simulation results. Design/methodology/approach: The system was prepared using a client-server architecture, and a graphical user interface (GUI) was then developed. Findings: A new database system for foundries was developed. Practical implications: System development is in progress, and practical implementation will take place in an iron foundry next year. Originality/value: The original value of this paper is an innovative database system for storing and analysing simulation results.
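
    A store for simulation results of this kind can be sketched with an embedded SQL database rather than the paper's client-server setup; the schema, fields, and alloy names below are invented for illustration, not simulationDB's actual design.

```python
import sqlite3

# Hypothetical table of casting-simulation runs: collect, then search.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE simulation (
    sim_id     INTEGER PRIMARY KEY,
    alloy      TEXT NOT NULL,
    mesh_size  REAL NOT NULL,
    max_temp_c REAL NOT NULL)""")
runs = [(1, "GJL-250", 2.0, 1420.5),
        (2, "GJL-250", 1.0, 1431.2),
        (3, "GJS-400", 2.0, 1389.9)]
conn.executemany("INSERT INTO simulation VALUES (?, ?, ?, ?)", runs)
# Search: all runs for one alloy, finest mesh first.
rows = conn.execute("""SELECT sim_id, mesh_size FROM simulation
                       WHERE alloy = ? ORDER BY mesh_size""",
                    ("GJL-250",)).fetchall()
print(rows)
```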

  3. Stronger pharmacological cortisol suppression and anticipatory cortisol stress response in transient global amnesia

    Directory of Open Access Journals (Sweden)

    Martin eGriebe

    2015-03-01

Transient global amnesia (TGA) is a disorder characterized by a sudden attack of severe anterograde memory disturbance that is frequently preceded by emotional or physical stress and resolves within 24 hours. MRI performed after the acute episode in TGA patients has revealed small lesions in the hippocampus. Hence it has been hypothesized that the disorder is caused by a stress-related transient inhibition of memory formation in the hippocampus. To study the factors that may link stress and TGA, we measured the cortisol day-profile, dexamethasone feedback inhibition and the effect of experimental stress exposure on cortisol levels (using the socially evaluated cold pressor test and a control procedure) in 20 patients with a recent history of TGA and in 20 healthy controls. We used self-report scales of depression, anxiety and stress and a detailed neuropsychological assessment to characterize our sample. We did not observe differences in mean cortisol levels in the cortisol day-profile between the two groups. After administration of low-dose dexamethasone, TGA patients showed significantly stronger cortisol suppression in the daytime profile compared to the control group (p = 0.027). The mean salivary cortisol level was significantly higher in the TGA group prior to and after the experimental stress exposure (p = 0.008 and p = 0.010, respectively), as well as prior to and after the control condition (p = 0.022 and p = 0.024, respectively). The TGA group had higher scores of depressive symptomatology (p = 0.021) and anxiety (p = 0.007), but the groups did not differ in the neuropsychological assessment. Our findings of stronger pharmacological suppression and higher cortisol levels in anticipation of experimental stress in participants with a previous TGA indicate a hypersensitivity of the HPA axis. This suggests that individual stress sensitivity might play a role in the pathophysiology of TGA.

  4. Bone mineral content has stronger association with lean mass than fat mass among Indian urban adolescents

    Directory of Open Access Journals (Sweden)

    Raman K Marwaha

    2015-01-01

Introduction: There are conflicting reports on the relationship of lean mass (LM) and fat mass (FM) with bone mineral content (BMC). Given the high prevalence of vitamin D deficiency in India, we planned this study to evaluate the relationship of BMC with LM and FM in Indian children and adolescents. Materials and Methods: Total and regional BMC, LM and FM (by dual-energy X-ray absorptiometry) and pubertal staging were assessed in 1403 children and adolescents (boys [B]: 826; girls [G]: 577). The BMC index and the BMC/LM and BMC/FM ratios were calculated. Results: The age ranged from 5 to 18 years, with a mean of 13.2 ± 2.7 years. BMC adjusted for height (BMC index) and the BMC/height ratio were comparable in both genders. There was no difference in total BMC between genders in the prepubertal group, but values were higher at more advanced stages of pubertal maturation. The correlation of total as well as regional BMC was stronger for LM (B: total BMC 0.880, trunk 0.715, leg 0.894, arm 0.891; G: total BMC 0.827, leg 0.846, arm 0.815; all values are r2, P < 0.0001) than for FM (B: total BMC 0.776, trunk 0.676, leg 0.772, arm 0.728; G: total BMC 0.781, leg 0.741, arm 0.689; all P < 0.0001), except for trunk BMC (LM 0.682 vs. FM 0.721; P < 0.0001), even after controlling for age, height, pubertal stage and biochemical parameters. Conclusions: BMC had a stronger positive correlation with LM than with FM.

  5. Measurement database build-up and its application in the A1 project in the North China exploration area

    Institute of Scientific and Technical Information of China (English)

    刘素花; 满雪峰; 于九申; 刘先玲; 庞福建

    2012-01-01

Scientific, standardized and visualized management of surveying data makes it possible to provide management departments and exploration designers with the distribution and status of previous exploration-area data in a timely, accurate and intuitive way. This paper introduces in detail the data sources of the geophysical surveying database for an exploration area and the method used to build it, and describes the database's application in exploration deployment, engineering design and joint processing after its completion in the North China exploration area.

  6. Database Description - RMG | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

RMG Database Description. General information: Database name: RMG; Alternative name: Rice Mitochondrial Genome. Contact: National Institute of Agrobiological Sciences. Database classification: Nucleotide Sequence Databases. Organism: Oryza sativa Japonica Group (Taxonomy ID: 39947). Database description: This database ... of the rice mitochondrial genome and information on the analysis results. Features and manner of utilization of database ...

  7. Bioinformatics glossary based Database of Biological Databases: DBD

    OpenAIRE

    Siva Kiran RR; Setty MVN; Hanumatha Rao G

    2009-01-01

The Database of Biological/Bioinformatics Databases (DBD) is a collection of 1669 databases and online resources gathered from NAR Database Summary Papers (http://www.oxfordjournals.org/nar/database/a/) and Internet search engines. The database has been developed on the basis of 437 keywords (a glossary) available at http://falcon.roswellpark.org/labweb/glossary.html. Keywords with their relevant databases are arranged in alphabetical order, which enables quick access to databases by researchers.

  8. Database design and database administration for a kindergarten

    OpenAIRE

    Vítek, Daniel

    2009-01-01

This bachelor thesis deals with the creation of a database design for a standard kindergarten, the installation of the designed database in the database system Oracle Database 10g Express Edition, and a demonstration of administration tasks in this database system. The design was verified by a specially developed access application.

  9. Genome Statute and Legislation Database

    Science.gov (United States)

Welcome to the Genome Statute and Legislation Database. The Genome Statute and Legislation Database is comprised ... the National Society of Genetic Counselors.

  10. Database Optimizing Services

    Directory of Open Access Journals (Sweden)

    Adrian GHENCEA

    2010-12-01

Almost every organization has a database at its centre. The database supports different activities, whether production, sales and marketing or internal operations, and is accessed every day to inform strategic decisions. Satisfying such needs therefore requires high-quality security and availability, which can be achieved with a DBMS (Database Management System), the software that runs a database. Technically speaking, a DBMS uses standard methods for cataloguing, recovering and running different data queries. It manages the input data, organizes it, and provides ways for its users or other programs to modify or extract the data. Managing a database is an operation that requires periodic updates, optimization and monitoring.
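As a concrete sketch of the DBMS workflow described above (cataloguing data, then modifying and extracting it through queries), the following Python snippet uses the standard-library sqlite3 module; the table and figures are invented for illustration:

```python
import sqlite3

# In-memory database: the DBMS catalogues the table, stores rows,
# and answers declarative queries (table name and data are made up).
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE sales (region TEXT, amount REAL)")
cur.executemany("INSERT INTO sales VALUES (?, ?)",
                [("north", 120.0), ("south", 80.0), ("north", 40.0)])
conn.commit()

# Extracting data: aggregate per region, as an application user might.
cur.execute("SELECT region, SUM(amount) FROM sales "
            "GROUP BY region ORDER BY region")
totals = cur.fetchall()
print(totals)  # → [('north', 160.0), ('south', 80.0)]
```

Any relational DBMS would accept the same SQL; sqlite3 is used here only because it ships with Python.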

  11. Conditioning Probabilistic Databases

    CERN Document Server

    Koch, Christoph

    2008-01-01

Past research on probabilistic databases has studied the problem of answering queries on a static database. Application scenarios of probabilistic databases however often involve the conditioning of a database using additional information in the form of new evidence. The conditioning problem is thus to transform a probabilistic database of priors into a posterior probabilistic database which is materialized for subsequent query processing or further refinement. It turns out that the conditioning problem is closely related to the problem of computing exact tuple confidence values. It is known that exact confidence computation is an NP-hard problem. This has led researchers to consider approximation techniques for confidence computation. However, neither conditioning nor exact confidence computation can be solved using such techniques. In this paper we present efficient techniques for both problems. We study several problem decomposition methods and heuristics that are based on the most successful search techn...
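The exact confidence computation the abstract identifies as NP-hard can be illustrated at toy scale by brute-force enumeration of possible worlds. The Python sketch below assumes a hypothetical tuple-independent database of three tuples and a made-up Boolean query; it shows why exact computation blows up (2^n worlds), not the paper's decomposition techniques:

```python
from itertools import product

# Toy tuple-independent probabilistic database: each tuple exists
# independently with the given marginal probability (invented data).
tuples = {"t1": 0.6, "t2": 0.5, "t3": 0.9}

def query(world):
    # Hypothetical Boolean query: true if t1 and (t2 or t3) are present.
    return "t1" in world and ("t2" in world or "t3" in world)

# Exact confidence: sum the probabilities of all 2^n possible worlds
# in which the query holds -- the exponential cost noted in the paper.
names = list(tuples)
conf = 0.0
for bits in product([False, True], repeat=len(names)):
    world = {n for n, present in zip(names, bits) if present}
    p = 1.0
    for n, present in zip(names, bits):
        p *= tuples[n] if present else 1.0 - tuples[n]
    if query(world):
        conf += p

print(round(conf, 4))  # → 0.57  (= 0.6 * (1 - 0.5 * 0.1))
```

Conditioning on evidence would correspond to discarding the worlds that violate the evidence and renormalizing the remaining probabilities.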

  12. 《中国近现代史纲要》课试题库建设十大关系研究%Ten Relationships in Building Exam Database in Teaching Chinese Contemporary History

    Institute of Scientific and Technical Information of China (English)

    刘永春; 郑亚男; 高京平

    2011-01-01

Because the exam-database model has the advantages of low cost, quick results and easy operation, it is becoming an important and effective vehicle for carrying and advancing the reform of the examination system for the Chinese Contemporary History course. However, since examinations connect universities with society, teaching with administration, teachers with students, and teaching with feedback, the construction of the exam database must start from the nature and characteristics of the course and use systems thinking to reveal, grasp and embody the internal relations between examinations and the needs of society, the country, education, students and individuals. Practice has shown that straightening out the ten relationships involved is the premise of, and the driving force behind, the reform of the examination system.

  13. MODELING A GEO-SPATIAL DATABASE FOR MANAGING TRAVELERS’ DEMAND

    Directory of Open Access Journals (Sweden)

    Sunil Pratap Singh

    2014-04-01

The geo-spatial database is a recent database technology that allows spatial data to be stored, retrieved and maintained. In this paper, we design and implement a geo-spatial database for managing travelers' demand with the aid of open-source tools and an object-relational database package. Building the geo-spatial database starts with the design of the data model (conceptual, logical and physical), which is then implemented in an object-relational database. The geo-spatial database combines geographic information (where things are) with descriptive information (what things are like) in the vector model. The resulting vector geo-spatial data can be accessed and rendered as maps to make prospective travelers and visitors aware of the various services and facilities available.
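A minimal illustration of the vector model described above, combining "where things are" with "what things are like": the Python sketch below stores hypothetical points of interest with coordinates and attributes and filters them with a bounding-box query (a real geo-spatial database would delegate this to spatial types and indexes):

```python
# Hypothetical points of interest: coordinates (where things are)
# alongside descriptive attributes (what things are like).
pois = [
    {"name": "Hotel A",   "lon": 77.21, "lat": 28.61, "type": "lodging"},
    {"name": "Museum B",  "lon": 77.23, "lat": 28.65, "type": "attraction"},
    {"name": "Airport C", "lon": 77.10, "lat": 28.55, "type": "transport"},
]

def in_bbox(poi, min_lon, min_lat, max_lon, max_lat):
    """True if the point falls inside the given bounding box."""
    return (min_lon <= poi["lon"] <= max_lon
            and min_lat <= poi["lat"] <= max_lat)

# Spatial filter over the vector data, e.g. for a map of a city centre.
central = [p["name"] for p in pois if in_bbox(p, 77.15, 28.60, 77.30, 28.70)]
print(central)  # → ['Hotel A', 'Museum B']
```

In an object-relational package the same filter would be expressed with a spatial predicate over a geometry column rather than a Python loop.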

  14. Supply Chain Initiatives Database

    Energy Technology Data Exchange (ETDEWEB)

    None

    2012-11-01

    The Supply Chain Initiatives Database (SCID) presents innovative approaches to engaging industrial suppliers in efforts to save energy, increase productivity and improve environmental performance. This comprehensive and freely-accessible database was developed by the Institute for Industrial Productivity (IIP). IIP acknowledges Ecofys for their valuable contributions. The database contains case studies searchable according to the types of activities buyers are undertaking to motivate suppliers, target sector, organization leading the initiative, and program or partnership linkages.

  15. Database management systems

    CERN Document Server

    Pallaw, Vijay Krishna

    2010-01-01

The text covers the fundamental concepts of, and serves as a complete guide to, the practical implementation of database management systems, including SQL and PL/SQL and aspects of database design, database languages, and database system implementation. The book is divided into five units to ensure a smooth flow of the subject, and its methodology makes it very useful for students as well as teachers.

  16. An organic database system

    OpenAIRE

    Kersten, Martin; Siebes, Arno

    1999-01-01

The pervasive penetration of database technology may suggest that we have reached the end of the database research era. The contrary is true. Emerging technology in hardware, software, and connectivity brings a wealth of opportunities to push the technology to a new level of maturity. Furthermore, groundbreaking results are being obtained in quantum and DNA computing, using nature as inspiration for their computational models. This paper provides a vision of a new brand of database architectures, i.e....

  17. Database Application Schema Forensics

    OpenAIRE

    Hector Quintus Beyers; Olivier, Martin S; Hancke, Gerhard P.

    2014-01-01

    The application schema layer of a Database Management System (DBMS) can be modified to deliver results that may warrant a forensic investigation. Table structures can be corrupted by changing the metadata of a database or operators of the database can be altered to deliver incorrect results when used in queries. This paper will discuss categories of possibilities that exist to alter the application schema with some practical examples. Two forensic environments are introduced where a forensic ...

  18. Web Technologies And Databases

    OpenAIRE

    Irina-Nicoleta Odoraba

    2011-01-01

A database is a collection of many types of occurrences of logical records, containing relationships between records and elementary data aggregates. A database management system (DBMS) is a set of programs for creating and operating a database. Theoretically, any relational DBMS can be used to store the data needed by a Web server. In practice, it has been observed that simple DBMSs such as FoxPro or Access are not suitable for Web sites that are used intensively. For large-scale Web applications...

  19. Fingerprint databases for theorems

    OpenAIRE

    Billey, Sara C.; Tenner, Bridget E.

    2013-01-01

    We discuss the advantages of searchable, collaborative, language-independent databases of mathematical results, indexed by "fingerprints" of small and canonical data. Our motivating example is Neil Sloane's massively influential On-Line Encyclopedia of Integer Sequences. We hope to encourage the greater mathematical community to search for the appropriate fingerprints within each discipline, and to compile fingerprint databases of results wherever possible. The benefits of these databases are...

  20. Categorical Database Generalization

    Institute of Scientific and Technical Information of China (English)

    LIU Yaolin; Martin Molenaar; AI Tinghua; LIU Yanfang

    2003-01-01

This paper focuses on the issues of categorical database generalization and emphasizes the roles of the supporting data model, the integrated data model, spatial analysis and semantic analysis in database generalization. The framework contents of categorical database generalization transformation are defined. The paper presents an integrated spatial supporting data structure, a semantic supporting model and a similarity model for categorical database generalization, and proposes the concept of the transformation unit in generalization.

  1. Nuclear Science References Database

    OpenAIRE

    PRITYCHENKO B.; Běták, E.; B. Singh; Totans, J.

    2013-01-01

    The Nuclear Science References (NSR) database together with its associated Web interface, is the world's only comprehensive source of easily accessible low- and intermediate-energy nuclear physics bibliographic information for more than 210,000 articles since the beginning of nuclear science. The weekly-updated NSR database provides essential support for nuclear data evaluation, compilation and research activities. The principles of the database and Web application development and maintenance...

  2. Searching Databases with Keywords

    Institute of Scientific and Technical Information of China (English)

    Shan Wang; Kun-Long Zhang

    2005-01-01

Traditionally, the SQL query language is used to search the data in databases. However, it is inappropriate for end-users, since it is complex and hard to learn. What end-users need is to search databases with keywords, as in web search engines. This paper presents a survey of work on keyword search in databases. It also includes a brief introduction to the SEEKER system which has been developed.
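The kind of keyword search surveyed here can be sketched naively: scan every table and return the rows whose values contain the keyword. The Python snippet below uses made-up tables and rows; real systems such as the SEEKER prototype rely on indexes and join discovery rather than a full scan:

```python
# Made-up relational data: two tables, each a list of row dictionaries.
tables = {
    "authors": [{"id": 1, "name": "Shan Wang"}],
    "papers":  [{"id": 7, "title": "Keyword Search in Databases"}],
}

def keyword_search(keyword):
    """Return (table, row) pairs where any column value contains the keyword."""
    kw = keyword.lower()
    hits = []
    for table, rows in tables.items():
        for row in rows:
            if any(kw in str(value).lower() for value in row.values()):
                hits.append((table, row))
    return hits

print(keyword_search("keyword"))
# → [('papers', {'id': 7, 'title': 'Keyword Search in Databases'})]
```

Unlike SQL, the user names no table or column; the system decides where the keyword matches, which is exactly what makes this style accessible to end-users.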

  3. On Intelligent Database Systems

    OpenAIRE

    Dennis McLeod; Paul Yanover

    1992-01-01

In response to the limitations of contemporary database management systems in addressing the requirements of many potential application environments, and in view of the characteristics of emerging interconnected systems, we examine research directions involving adding more ‘intelligence’ to database systems. Three major thrusts in the intelligent database systems area are discussed. The first involves increasing the modeling power to represent an application environment. The second emphasis c...

  4. 76 FR 74050 - Measured Building Energy Performance Data Taxonomy

    Science.gov (United States)

    2011-11-30

    ... Office of Energy Efficiency and Renewable Energy Measured Building Energy Performance Data Taxonomy... related to a measured building energy performance data taxonomy. DOE has created this measured building energy performance data taxonomy as part of its DOE Buildings Performance Database project....

  5. Rethinking the global secondary organic aerosol (SOA) budget: stronger production, faster removal, shorter lifetime

    Science.gov (United States)

    Hodzic, Alma; Kasibhatla, Prasad S.; Jo, Duseong S.; Cappa, Christopher D.; Jimenez, Jose L.; Madronich, Sasha; Park, Rokjin J.

    2016-06-01

Recent laboratory studies suggest that secondary organic aerosol (SOA) formation rates are higher than assumed in current models. There is also evidence that SOA removal by dry and wet deposition occurs more efficiently than some current models suggest and that photolysis and heterogeneous oxidation may be important (but currently ignored) SOA sinks. Here, we have updated the global GEOS-Chem model to include this new information on formation (i.e., wall-corrected yields and emissions of semi-volatile and intermediate volatility organic compounds) and on removal processes (photolysis and heterogeneous oxidation). We compare simulated SOA from various model configurations against ground, aircraft and satellite measurements to assess the extent to which these improved representations of SOA formation and removal processes are consistent with observed characteristics of the SOA distribution. The updated model presents a more dynamic picture of the life cycle of atmospheric SOA, with production rates 3.9 times higher and sinks a factor of 3.6 more efficient than in the base model. In particular, the updated model predicts larger SOA concentrations in the boundary layer and lower concentrations in the upper troposphere, leading to better agreement with surface and aircraft measurements of organic aerosol compared to the base model. Our analysis thus suggests that the long-standing discrepancy in model predictions of the vertical SOA distribution can now be resolved, at least in part, by a stronger source and stronger sinks leading to a shorter lifetime. The predicted global SOA burden in the updated model is 0.88 Tg and the corresponding direct radiative effect at top of the atmosphere is -0.33 W m-2, which is comparable to recent model estimates constrained by observations. The updated model predicts a population-weighted global mean surface SOA concentration that is a factor of 2 higher than in the base model, suggesting the need for a reanalysis of the contribution of

  6. Can a bog drained for forestry be a stronger carbon sink than a natural bog forest?

    Directory of Open Access Journals (Sweden)

    J. Hommeltenberg

    2014-02-01

This study compares the CO2 exchange of a natural bog forest and of a bog drained for forestry in the pre-alpine region of southern Germany. The sites are separated by only ten kilometers, share the same formation history, and are exposed to the same climate and weather conditions. In contrast, they differ in land-use history: at the Schechenfilz site a natural bog-pine forest (Pinus mugo rotundata) grows on an undisturbed, about 5 m thick peat layer; at Mooseurach a planted spruce forest (Picea abies) grows on drained and degraded peat (3.4 m). The net ecosystem exchange of CO2 (NEE) at both sites has been investigated for two years (July 2010 to June 2012) using the eddy covariance technique. Our results indicate that the drained, forested bog at Mooseurach is a much stronger carbon dioxide sink (−130 ± 31 and −300 ± 66 g C m−2 a−1 in the first and second year, respectively) than the natural bog forest at Schechenfilz (−53 ± 28 and −73 ± 38 g C m−2 a−1). The strong net CO2 uptake can be explained by the high gross primary productivity of the spruces, which over-compensates the two-times stronger ecosystem respiration at the drained site. The larger productivity of the spruces can be clearly attributed to the larger LAI of the spruce site. However, even though current flux measurements indicate strong CO2 uptake of the drained spruce forest, the site is a strong net CO2 source if the whole life cycle since forest planting is considered. We determined the difference between carbon fixation by the spruces and the carbon loss from the peat due to drainage since forest planting. The estimate indicates a strong carbon release of +156 t C ha−1 within the last 44 years, meaning the spruces would need to grow for another 100 years at the current rate to compensate for the peat loss of the former years. In contrast, the natural bog-pine ecosystem has likely been a small but consistent carbon sink for decades, which our results suggest is very

  7. Can a bog drained for forestry be a stronger carbon sink than a natural bog forest?

    Science.gov (United States)

    Hommeltenberg, J.; Schmid, H. P.; Drösler, M.; Werle, P.

    2014-07-01

This study compares the CO2 exchange of a natural bog forest, and of a bog drained for forestry in the pre-Alpine region of southern Germany. The sites are separated by only 10 km, they share the same soil formation history and are exposed to the same climate and weather conditions. In contrast, they differ in land use history: at the Schechenfilz site a natural bog-pine forest (Pinus mugo ssp. rotundata) grows on an undisturbed, about 5 m thick peat layer; at Mooseurach a planted spruce forest (Picea abies) grows on drained and degraded peat (3.4 m). The net ecosystem exchange of CO2 (NEE) at both sites has been investigated for 2 years (July 2010-June 2012), using the eddy covariance technique. Our results indicate that the drained, forested bog at Mooseurach is a much stronger carbon dioxide sink (-130 ± 31 and -300 ± 66 g C m-2 a-1 in the first and second year, respectively) than the natural bog forest at Schechenfilz (-53 ± 28 and -73 ± 38 g C m-2 a-1). The strong net CO2 uptake can be explained by the high gross primary productivity of the 44-year old spruces that over-compensates the two-times stronger ecosystem respiration at the drained site. The larger productivity of the spruces can be clearly attributed to the larger plant area index (PAI) of the spruce site. However, even though current flux measurements indicate strong CO2 uptake of the drained spruce forest, the site is a strong net CO2 source when the whole life-cycle since forest planting is considered. It is important to assess this result in terms of the long-term biome balance. To do so, we used historical data to estimate the difference between carbon fixation by the spruces and the carbon loss from the peat due to drainage since forest planting. This rough estimate indicates a strong carbon release of +134 t C ha-1 within the last 44 years. Thus, the spruces would need to grow for another 100 years at about the current rate, to compensate the potential peat loss of the former years. In

  8. Veterans Administration Databases

    Science.gov (United States)

    The Veterans Administration Information Resource Center provides database and informatics experts, customer service, expert advice, information products, and web technology to VA researchers and others.

  9. Smart Location Database - Download

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Smart Location Database (SLD) summarizes over 80 demographic, built environment, transit service, and destination accessibility attributes for every census...

  10. Database Publication Practices

    DEFF Research Database (Denmark)

    Bernstein, P.A.; DeWitt, D.; Heuer, A.;

    2005-01-01

There has been a growing interest in improving the publication processes for database research papers. This panel reports on recent changes in those processes and presents an initial cut at historical data for the VLDB Journal and ACM Transactions on Database Systems.

  11. Residency Allocation Database

    Data.gov (United States)

    Department of Veterans Affairs — The Residency Allocation Database is used to determine allocation of funds for residency programs offered by Veterans Affairs Medical Centers (VAMCs). Information...

  12. IVR EFP Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This database contains trip-level reports submitted by vessels participating in Exempted Fishery projects with IVR reporting requirements.

  13. Database principles programming performance

    CERN Document Server

    O'Neil, Patrick

    2014-01-01

    Database: Principles Programming Performance provides an introduction to the fundamental principles of database systems. This book focuses on database programming and the relationships between principles, programming, and performance.Organized into 10 chapters, this book begins with an overview of database design principles and presents a comprehensive introduction to the concepts used by a DBA. This text then provides grounding in many abstract concepts of the relational model. Other chapters introduce SQL, describing its capabilities and covering the statements and functions of the programmi

  14. Transporter Classification Database (TCDB)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Transporter Classification Database details a comprehensive classification system for membrane transport proteins known as the Transporter Classification (TC)...

  15. Smart Location Database - Service

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Smart Location Database (SLD) summarizes over 80 demographic, built environment, transit service, and destination accessibility attributes for every census...

  16. Brain Potentials Highlight Stronger Implicit Food Memory for Taste than Health and Context Associations.

    Directory of Open Access Journals (Sweden)

    Heleen R Hoogeveen

Increasingly, consumption of healthy foods is advised to improve population health. Reasons people give for choosing one food over another suggest that non-sensory features like health aspects are considered of lower importance than taste. However, many food choices are made in the absence of the actual perception of a food's sensory properties, and therefore rely heavily on previous experiences of similar consumptions stored in memory. In this study we assessed the differential strength of food associations implicitly stored in memory, using an associative priming paradigm. Participants (N = 30) were exposed to a forced-choice picture-categorization task, in which the food or non-food target images were primed with either non-sensory or sensory related words. We observed a smaller N400 amplitude at the parietal electrodes when categorizing food as compared to non-food images. While this effect was enhanced by the presentation of a food-related word prime during food trials, the primes had no effect in the non-food trials. More specifically, we found that sensory associations are more strongly implicitly represented in memory than non-sensory associations. Thus, this study highlights the neuronal mechanisms underlying previous observations that sensory associations are important features of food memory, and therefore a primary motive in food choice.

  17. Brain Potentials Highlight Stronger Implicit Food Memory for Taste than Health and Context Associations.

    Science.gov (United States)

    Hoogeveen, Heleen R; Jolij, Jacob; Ter Horst, Gert J; Lorist, Monicque M

    2016-01-01

Increasingly, consumption of healthy foods is advised to improve population health. Reasons people give for choosing one food over another suggest that non-sensory features like health aspects are considered of lower importance than taste. However, many food choices are made in the absence of the actual perception of a food's sensory properties, and therefore rely heavily on previous experiences of similar consumptions stored in memory. In this study we assessed the differential strength of food associations implicitly stored in memory, using an associative priming paradigm. Participants (N = 30) were exposed to a forced-choice picture-categorization task, in which the food or non-food target images were primed with either non-sensory or sensory related words. We observed a smaller N400 amplitude at the parietal electrodes when categorizing food as compared to non-food images. While this effect was enhanced by the presentation of a food-related word prime during food trials, the primes had no effect in the non-food trials. More specifically, we found that sensory associations are more strongly implicitly represented in memory than non-sensory associations. Thus, this study highlights the neuronal mechanisms underlying previous observations that sensory associations are important features of food memory, and therefore a primary motive in food choice. PMID:27213567

  18. Stronger activation of SREBP-1a by nucleus-localized HBx

    International Nuclear Information System (INIS)

    We previously showed that hepatitis B virus (HBV) X protein activates the sterol regulatory element-binding protein-1a (SREBP-1a). Here we examined the role of nuclear localization of HBx in this process. In comparison to the wild-type and cytoplasmic HBx, nuclear HBx had stronger effects on SREBP-1a and fatty acid synthase transcription activation, intracellular lipid accumulation and cell proliferation. Furthermore, nuclear HBx could activate HBV enhancer I/X promoter and was more effective on up-regulating HBV mRNA level in the context of HBV replication than the wild-type HBx, while the cytoplasmic HBx had no effect. Our results demonstrate the functional significance of the nucleus-localized HBx in regulating host lipogenic pathway and HBV replication. - Highlights: • Nuclear HBx is more effective on activating SREBP-1a and FASN transcription. • Nuclear HBx is more effective on enhancing intracellular lipid accumulation. • Nuclear HBx is more effective on enhancing cell proliferation. • Nuclear HBx up-regulates HBV enhancer I/X promoter activity. • Nuclear HBx increases HBV mRNA level in the context of HBV replication

  19. Stronger activation of SREBP-1a by nucleus-localized HBx

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Qi [VIDO-InterVac, Veterinary Microbiology, University of Saskatchewan, Saskatoon (Canada); Qiao, Ling [VIDO-InterVac, University of Saskatchewan, Saskatoon, Saskatchewan (Canada); Yang, Jian [Drug Discovery Group, University of Saskatchewan, Saskatoon, Saskatchewan (Canada); Zhou, Yan [VIDO-InterVac, Veterinary Microbiology, Vaccinology and Immunotherapeutics, University of Saskatchewan, Saskatoon, Saskatchewan (Canada); Liu, Qiang, E-mail: qiang.liu@usask.ca [VIDO-InterVac, Veterinary Microbiology, Vaccinology and Immunotherapeutics, University of Saskatchewan, Saskatoon, Saskatchewan (Canada)

    2015-05-08

    We previously showed that the hepatitis B virus (HBV) X protein (HBx) activates the sterol regulatory element-binding protein-1a (SREBP-1a). Here we examined the role of nuclear localization of HBx in this process. In comparison to the wild-type and cytoplasmic HBx, nuclear HBx had stronger effects on SREBP-1a and fatty acid synthase transcription activation, intracellular lipid accumulation and cell proliferation. Furthermore, nuclear HBx could activate the HBV enhancer I/X promoter and was more effective at up-regulating the HBV mRNA level in the context of HBV replication than the wild-type HBx, while the cytoplasmic HBx had no effect. Our results demonstrate the functional significance of the nucleus-localized HBx in regulating the host lipogenic pathway and HBV replication. - Highlights: • Nuclear HBx is more effective at activating SREBP-1a and FASN transcription. • Nuclear HBx is more effective at enhancing intracellular lipid accumulation. • Nuclear HBx is more effective at enhancing cell proliferation. • Nuclear HBx up-regulates HBV enhancer I/X promoter activity. • Nuclear HBx increases the HBV mRNA level in the context of HBV replication.

  20. Recommendations on Formative Assessment and Feedback Practices for stronger engagement in MOOCs

    Directory of Open Access Journals (Sweden)

    Nikolaos Floratos

    2015-04-01

    Full Text Available Many publications and surveys refer to the high dropout rate in Massive Open Online Courses (MOOCs), which is around 90% if we compare the number of students who register against those who finish. Working towards improving student engagement in MOOCs, we focus on providing specific research-based recommendations on formative assessment and feedback practices that can advance student activity. In this respect, we analysed some significant research papers on formative assessment and feedback methods applicable to face-to-face teaching environments that advance student engagement, and derived related requirements and conditions that can also be applied to MOOCs. We also analysed 4050 comments and reviews of the seven most active and highly rated MOOCs (6 from Coursera and 1 from EdX) provided by the students who have mainly completed those courses via CourseTalk. Based on this content analysis, we have formulated fourteen recommendations that also support the requirements/conditions of our conceptual and theoretical framework analysis. The results obtained shed some light on a rather unexplored research area: research on formative assessment and feedback practices specifically for stronger engagement in MOOCs. http://dx.doi.org/10.5944/openpraxis.7.2.194

  1. Stronger controls needed to prevent terrorist 'dirty bombs'. Vienna conference urges better security, surveillance and regulation

    International Nuclear Information System (INIS)

    The International Conference on Security of Radioactive Sources was held from 10 to 13 March 2003 at the Hofburg Palace in Vienna, Austria. U.S. Secretary of Energy Spencer Abraham presided over the Conference, which was co-sponsored by the Government of the Russian Federation and the Government of the United States of America and hosted by the Government of Austria. It was organized by the IAEA in co-operation with the European Commission, the World Customs Organization, the International Criminal Police Organization (ICPO-Interpol) and the European Police Office (Europol). Over seven hundred delegates from more than 120 countries gathered in Vienna and called for stronger national and international security over radioactive sources, especially those that could be used to produce a terrorist 'dirty bomb'. 'High-risk radioactive sources that are not under secure and regulated control, including so-called orphan sources, raise serious security and safety concerns', the Conference concluded. 'Effective national infrastructures for the safe and secure management of vulnerable and dangerous radioactive sources are essential for ensuring the long-term security and control of such sources'

  2. Reactor building

    International Nuclear Information System (INIS)

    The whole reactor building is accommodated in a shaft and is sealed level with the earth's surface by a building ceiling, which provides protection against penetration due to external effects. The building ceiling is supported on walls of the reactor building, which line the shaft and transfer the vertical components of forces to the foundations. The thickness of the walls is designed to withstand horizontal pressure waves in the floor. The building ceiling has an opening above the reactor, which must be closed by cover plates. Operating equipment for the reactor can be situated above the building ceiling. (orig./HP)

  3. Database Description - RPSD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available [ Credits ] BLAST Search Image Search Home About Archive Update History Contact us RPSD Database... Description General information of database Database name RPSD Alternative name Summary inform...n National Institute of Agrobiological Sciences Toshimasa Yamazaki E-mail : Database classification Structure Database...idopsis thaliana Taxonomy ID: 3702 Taxonomy Name: Glycine max Taxonomy ID: 3847 Database description We have...nts such as rice, and have put together the result and related informations. This database contains the basi

  4. GRAPH DATABASES AND GRAPH VIZUALIZATION

    OpenAIRE

    Klančar, Jure

    2013-01-01

    The thesis presents graph databases. Graph databases are a part of NoSQL databases, which is why this thesis presents the basics of NoSQL databases as well. We have focused on the advantages of graph databases compared to relational databases. We have used one of the native graph databases (Neo4j) to present graph database processing in more detail. To get more acquainted with graph databases and their principles, we developed a simple application that uses a Neo4j graph database to...
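The core advantage the record describes, traversing relationships directly instead of joining tables, can be illustrated with a minimal property-graph sketch. This is illustrative Python only, not the Neo4j API; all names and data are invented:

```python
# Minimal property-graph sketch: nodes carry properties, edges are
# first-class and typed, and a traversal follows edges in one hop
# instead of performing a relational join.

class Graph:
    def __init__(self):
        self.nodes = {}   # node id -> properties dict
        self.edges = []   # (source id, relationship type, target id)

    def add_node(self, node_id, **props):
        self.nodes[node_id] = props

    def add_edge(self, src, rel, dst):
        self.edges.append((src, rel, dst))

    def neighbours(self, node_id, rel):
        """Follow all `rel` edges out of node_id."""
        return [dst for (src, r, dst) in self.edges
                if src == node_id and r == rel]

g = Graph()
g.add_node("alice", kind="Person")
g.add_node("bob", kind="Person")
g.add_node("neo4j", kind="Database")
g.add_edge("alice", "KNOWS", "bob")
g.add_edge("alice", "USES", "neo4j")
g.add_edge("bob", "USES", "neo4j")

print(g.neighbours("alice", "KNOWS"))   # ['bob']
```

In a relational schema the same query would join a person table against an edge table; here the edge list is the data model itself, which is the point the thesis makes about native graph storage.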

  5. CDS - Database Administrator's Guide

    Science.gov (United States)

    Day, J. P.

    This guide aims to instruct the CDS database administrator in the CDS file system, the CDS index files, and the procedure for assimilating a new CDS tape into the database. It is assumed that the administrator has read SUN/79.

  6. Directory of IAEA databases

    International Nuclear Information System (INIS)

    This second edition of the Directory of IAEA Databases has been prepared within the Division of Scientific and Technical Information (NESI). Its main objective is to describe the computerized information sources available to staff members. This directory contains all databases produced at the IAEA, including databases stored on the mainframe, LANs and PCs. All IAEA Division Directors have been requested to register the existence of their databases with NESI. For the second edition, database owners were requested to review the existing entries for their databases and answer four additional questions. The four additional questions concerned the type of database (e.g. Bibliographic, Text, Statistical etc.), the category of database (e.g. Administrative, Nuclear Data etc.), the available documentation and the type of media used for distribution. In the individual entries on the following pages the answers to the first two questions (type and category) are always listed, but the answers to the second two questions (documentation and media) are only listed when information has been made available

  7. A Quality System Database

    Science.gov (United States)

    Snell, William H.; Turner, Anne M.; Gifford, Luther; Stites, William

    2010-01-01

    A quality system database (QSD), and software to administer the database, were developed to support recording of administrative nonconformance activities that involve requirements for documentation of corrective and/or preventive actions, which can include ISO 9000 internal quality audits and customer complaints.

  8. An organic database system

    NARCIS (Netherlands)

    Kersten, M.L.; Siebes, A.P.J.M.

    1999-01-01

    The pervasive penetration of database technology may suggest that we have reached the end of the database research era. The contrary is true. Emerging technology, in hardware, software, and connectivity, brings a wealth of opportunities to push technology to a new level of maturity. Furthermore, gro

  9. Atomic Spectra Database (ASD)

    Science.gov (United States)

    SRD 78 NIST Atomic Spectra Database (ASD) (Web, free access)   This database provides access and search capability for NIST critically evaluated data on atomic energy levels, wavelengths, and transition probabilities that are reasonably up-to-date. The NIST Atomic Spectroscopy Data Center has carried out these critical compilations.

  10. Children's Culture Database (CCD)

    DEFF Research Database (Denmark)

    Wanting, Birgit

    a Dialogue inspired database with documentation, network (individual and institutional profiles) and current news, paper presented at the research seminar: Electronic access to fiction, Copenhagen, November 11-13, 1996

  11. Dictionary as Database.

    Science.gov (United States)

    Painter, Derrick

    1996-01-01

    Discussion of dictionaries as databases focuses on the digitizing of The Oxford English dictionary (OED) and the use of Standard Generalized Mark-Up Language (SGML). Topics include the creation of a consortium to digitize the OED, document structure, relational databases, text forms, sequence, and discourse. (LRW)
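The idea of treating a dictionary entry as a structured record rather than flat text can be sketched with modern XML (the successor to the SGML the OED project used). The element names and sample entry below are invented for illustration, not the OED's actual document structure:

```python
# A dictionary entry marked up as structured data: once the structure
# is explicit, the entry can be queried like a database record.
import xml.etree.ElementTree as ET

entry_xml = """
<entry headword="database">
  <sense n="1">
    <def>A structured collection of data held in a computer.</def>
    <quote year="1962">sample quotation text</quote>
  </sense>
</entry>
"""

entry = ET.fromstring(entry_xml)
print(entry.get("headword"))                 # database
print(entry.find("./sense/def").text)
for q in entry.iterfind(".//quote"):
    print(q.get("year"))                     # 1962
```

This is exactly the shift the article describes: markup turns discourse structure (headword, sense, definition, quotation) into fields that a relational database or search tool can address.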

  12. Biological Macromolecule Crystallization Database

    Science.gov (United States)

    SRD 21 Biological Macromolecule Crystallization Database (Web, free access)   The Biological Macromolecule Crystallization Database and NASA Archive for Protein Crystal Growth Data (BMCD) contains the conditions reported for the crystallization of proteins and nucleic acids used in X-ray structure determinations and archives the results of microgravity macromolecule crystallization studies.

  13. Neutrosophic Relational Database Decomposition

    OpenAIRE

    Meena Arora; Ranjit Biswas; Dr. U.S.Pandey

    2011-01-01

    In this paper we present a method of decomposing a neutrosophic database relation with neutrosophic attributes into basic relational form. Our objective is a method capable of manipulating incomplete as well as inconsistent information. A fuzzy relation or vague relation can only handle incomplete information. The authors use the neutrosophic relational database [8],[2] to show how imprecise data can be handled in a relational schema.
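The distinguishing feature of the neutrosophic model, attribute values that carry a (truth, indeterminacy, falsity) triple so that incomplete and inconsistent facts can coexist, can be sketched as follows. This is an illustrative assumption of mine, not the paper's decomposition algorithm, and the selection thresholds are invented:

```python
# Illustrative sketch: each attribute value carries a neutrosophic
# membership triple (t, i, f). Unlike a fuzzy relation, f is independent
# of t, so a row can be both supported and denied (inconsistent data).

facts = [
    {"name": "Pump A", "status": ("running", (0.9, 0.1, 0.1))},
    {"name": "Pump B", "status": ("running", (0.4, 0.5, 0.6))},  # conflicting reports
]

def select(rows, attr, value, t_min=0.5, f_max=0.3):
    """Keep rows where the claim is well supported and weakly denied
    (thresholds are arbitrary illustration values)."""
    out = []
    for row in rows:
        v, (t, i, f) = row[attr]
        if v == value and t >= t_min and f <= f_max:
            out.append(row["name"])
    return out

print(select(facts, "status", "running"))   # ['Pump A']
```

A fuzzy relation would collapse each value to a single membership degree and could not record that Pump B's status is simultaneously asserted and contradicted.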

  14. Building America

    Energy Technology Data Exchange (ETDEWEB)

    Brad Oberg

    2010-12-31

    IBACOS researched the constructability and viability issues of using high performance windows as one component of a larger approach to building houses that achieve the Building America 70% energy savings target.

  15. Protein sequence databases.

    Science.gov (United States)

    Apweiler, Rolf; Bairoch, Amos; Wu, Cathy H

    2004-02-01

    A variety of protein sequence databases exist, ranging from simple sequence repositories, which store data with little or no manual intervention in the creation of the records, to expertly curated universal databases that cover all species and in which the original sequence data are enhanced by the manual addition of further information in each sequence record. As the focus of researchers moves from the genome to the proteins encoded by it, these databases will play an even more important role as central comprehensive resources of protein information. Several of the leading protein sequence databases are discussed here, with special emphasis on the databases now provided by the Universal Protein Knowledgebase (UniProt) consortium. PMID:15036160

  16. Nuclear integrated database and design advancement system

    Energy Technology Data Exchange (ETDEWEB)

    Ha, Jae Joo; Jeong, Kwang Sub; Kim, Seung Hwan; Choi, Sun Young

    1997-01-01

    The objective of NuIDEAS is to computerize design processes through an integrated database, eliminating the current work style of delivering hardcopy documents and drawings. The major research contents of NuIDEAS are the advancement of design processes by computerization, the establishment of a design database, and 3 dimensional visualization of design data. KSNP (Korea Standard Nuclear Power Plant) is the target of the legacy database and 3 dimensional model, so that it can be utilized in the next plant design. In the first year, the blueprint of NuIDEAS is proposed, and its prototype is developed by applying rapidly evolving computer technology. The major results of the first year of research were to establish the architecture of the integrated database ensuring data consistency, and to build a design database of the reactor coolant system and heavy components. Also, various software tools were developed to search, share and utilize the data through networks; detailed 3 dimensional CAD models of nuclear fuel and heavy components were constructed; and a walk-through simulation using the models was developed. This report contains the major additions and modifications to the object oriented database and associated program, using methods and JavaScript. (author). 36 refs., 1 tab., 32 figs.

  17. Nuclear integrated database and design advancement system

    International Nuclear Information System (INIS)

    The objective of NuIDEAS is to computerize design processes through an integrated database, eliminating the current work style of delivering hardcopy documents and drawings. The major research contents of NuIDEAS are the advancement of design processes by computerization, the establishment of a design database, and 3 dimensional visualization of design data. KSNP (Korea Standard Nuclear Power Plant) is the target of the legacy database and 3 dimensional model, so that it can be utilized in the next plant design. In the first year, the blueprint of NuIDEAS is proposed, and its prototype is developed by applying rapidly evolving computer technology. The major results of the first year of research were to establish the architecture of the integrated database ensuring data consistency, and to build a design database of the reactor coolant system and heavy components. Also, various software tools were developed to search, share and utilize the data through networks; detailed 3 dimensional CAD models of nuclear fuel and heavy components were constructed; and a walk-through simulation using the models was developed. This report contains the major additions and modifications to the object oriented database and associated program, using methods and JavaScript. (author). 36 refs., 1 tab., 32 figs

  18. Solar building

    OpenAIRE

    Zhang, Luxin

    2014-01-01

    In my thesis I describe the utilization of solar energy and its integration into buildings. The introduction also explains how a solar building works, aiming to help more people understand and accept solar buildings. The thesis introduces different types of solar heat collectors. I compared the two operation modes of a solar water heating system and created examples of solar water system selection. I also introduced other solar building applications. It is conv...

  19. Database and Expert Systems Applications

    DEFF Research Database (Denmark)

    Viborg Andersen, Kim; Debenham, John; Wagner, Roland

    submissions. The papers are organized in topical sections on workflow automation, database queries, data classification and recommendation systems, information retrieval in multimedia databases, Web applications, implementational aspects of databases, multimedia databases, XML processing, security, XML schemata, query evaluation, semantic processing, information retrieval, temporal and spatial databases, querying XML, organisational aspects of databases, natural language processing, ontologies, Web data extraction, semantic Web, data stream management, data extraction, distributed database systems...

  20. Sharper, Stronger, Faster Upper Visual Field Representation in Primate Superior Colliculus.

    Science.gov (United States)

    Hafed, Ziad M; Chen, Chih-Yang

    2016-07-11

    Visually guided behavior in three-dimensional environments entails handling immensely different sensory and motor conditions across retinotopic visual field locations: peri-personal ("near") space is predominantly viewed through the lower retinotopic visual field (LVF), whereas extra-personal ("far") space encompasses the upper visual field (UVF). Thus, when, say, driving a car, orienting toward the instrument cluster below eye level is different from scanning an upcoming intersection, even with similarly sized eye movements. However, an overwhelming assumption about visuomotor circuits for eye-movement exploration, like those in the primate superior colliculus (SC), is that they represent visual space in a purely symmetric fashion across the horizontal meridian. Motivated by ecological constraints on visual exploration of far space, containing small UVF retinal-image features, here we found a large, multi-faceted difference in the SC's representation of the UVF versus LVF. Receptive fields are smaller, more finely tuned to image spatial structure, and more sensitive to image contrast for neurons representing the UVF. Stronger UVF responses also occur faster. Analysis of putative synaptic activity revealed a particularly categorical change when the horizontal meridian is crossed, and our observations correctly predicted novel eye-movement effects. Despite its appearance as a continuous layered sheet of neural tissue, the SC contains functional discontinuities between UVF and LVF representations, paralleling a physical discontinuity present in cortical visual areas. Our results motivate the recasting of structure-function relationships in the visual system from an ecological perspective, and also exemplify strong coherence between brain-circuit organization for visually guided exploration and the nature of the three-dimensional environment in which we function. PMID:27291052

  1. Stronger signal of recent selection for lactase persistence in Maasai than in Europeans.

    Science.gov (United States)

    Schlebusch, Carina M; Sjödin, Per; Skoglund, Pontus; Jakobsson, Mattias

    2013-05-01

    Continued ability to digest lactose after weaning provides a possible selective advantage to individuals who have access to milk as a food source. The lactase persistence (LP) phenotype exists at varying frequencies in different populations and SNPs that modulate the regulation of the LCT gene have been identified in many of these populations. Very strong positive selection for LP has been illustrated for a single SNP (rs4988235) in northwestern European populations, which has become a textbook example of the effect of recent selective sweeps on genetic variation and linkage disequilibrium. In this study, we employed two different methods to detect signatures of positive selection in an East African pastoralist population in the HapMap collection, the Maasai from Kenya, and compared results with other HapMap populations. We found that signatures of recent selection coinciding with the LCT gene are the strongest across the genome in the Maasai population. Furthermore, the genome-wide signal of recent positive selection on haplotypic variation and population differentiation around the LCT gene is greater in the Maasai than in the CEU population (northwestern European descent), possibly due to stronger selection pressure, but it could also be an indication of more recent selection in Maasai compared with the Central European group or more efficient selection in the Maasai due to less genetic drift for their larger effective population size. This signal of recent selection is driven by a putative East African LP haplotype that is different from the haplotype that contributes to the LP phenotype in northwestern Europe. PMID:22948027

  2. Protective effect and mechanism of stronger neo-minophagen C against fulminant hepatic failure

    Institute of Scientific and Technical Information of China (English)

    Bao-Shan Yang; Ying-Ji Ma; Yan Wang; Li-Yan Chen; Man-Ru Bi; Bing-Zhu Yan; Lu Bai; Hui Zhou; Fu-Xiang Wang

    2007-01-01

    AIM: To investigate the protective effect of stronger neo-minophagen C (SNMC) on fulminant hepatic failure (FHF) and its underlying mechanism. METHODS: A mouse model of FHF was established by intraperitoneal injection of galactosamine (D-Gal N) and lipopolysaccharide (LPS). The survival rate, liver function, inflammatory factors and liver pathological changes were assessed with and without SNMC treatment. Hepatocyte survival was estimated by observing the stained mitochondrial structure with the terminal deoxynucleotidyl transferase-mediated deoxyuridine triphosphate fluorescence nick end labeling (TUNEL) method and antibodies against cytochrome C (Cyt-C) and caspase-3. RESULTS: The levels of plasma tumor necrosis factor alpha (TNF-α), nitric oxide (NO), ET-1 and interleukin-6 (IL-6), and the degree of hepatic tissue injury, were decreased in the SNMC-treated groups compared with those in the model group (P < 0.01). However, there were no differences between the different dosages administered at different time points. There was a significant difference in survival rates between the SNMC-treated groups and the model group (P < 0.01). The apoptosis index was 32.3% ± 4.7% at 6 h after a low dose of SNMC and decreased considerably to 5% ± 2.83% on day 7 (P < 0.05). The expression of Cyt-C and caspase-3 decreased with the prolongation of therapeutic time. Typical hepatocyte apoptosis was obviously ameliorated under the electron microscope with the prolongation of therapeutic time. CONCLUSION: SNMC can effectively protect the liver against FHF induced by LPS/D-Gal N. SNMC can prevent hepatocyte apoptosis by inhibiting the inflammatory reaction and stabilizing the mitochondrial membrane to suppress the release of Cyt-C and the subsequent activation of caspase-3.

  3. Building 2000

    International Nuclear Information System (INIS)

    This is the first volume of Building 2000, a pilot project of the Commission's R and D programme 'Solar Energy Applications to Buildings', with the purpose of encouraging the adoption of solar architecture in large buildings. In this first richly illustrated volume, the results of the design studies illustrating passive solar architecture in buildings in the European Community are presented, in particular for the building categories mentioned in the subtitle. In a second volume, a similar series of studies is presented for the building categories: office buildings, public buildings, and hotels and holiday complexes. Several Design Support Workshops were organized during the Building 2000 programme, during which Building 2000 design teams could directly exchange ideas with the various design advice experts represented at these workshops. In the second part of the Building 2000 final report, a summary of a selection of the many reports produced by Design Support experts is presented (15 papers). Most of the design support activities resulted in changes to the various designs, as reported by the design teams in the brochures presented in the first part of this book. It is to be expected that design aids and simulation tools for passive solar options, daylighting concepts, comfort criteria etc. will be utilized more frequently in the future. This will result in a better exchange of information between the actual design practitioners and the European R and D community. This technology transfer will result in buildings of higher quality with respect to energy and environmental issues

  4. The integrated web service and genome database for agricultural plants with biotechnology information

    OpenAIRE

    Kim, ChangKug; Park, DongSuk; Seol, YoungJoo; Hahn, JangHo

    2011-01-01

    The National Agricultural Biotechnology Information Center (NABIC) constructed an agricultural biology-based infrastructure and developed a Web-based relational database for agricultural plants with biotechnology information. The NABIC has concentrated on functional genomics of major agricultural plants, building an integrated biotechnology database for agro-biotech information that focuses on genomics of major agricultural resources. This genome database provides annotated genome information...

  5. Database development for the potential evaluation of national resource in China and its significance

    International Nuclear Information System (INIS)

    This paper introduces the development process of the database for the evaluation of national uranium resource potential, covering database structure, map editing, data entry and data checking. The database has been built with integrated technology for organization, storage and management, which is significant for future data management and dynamic evaluation of national uranium resource potential. (authors)

  6. Database Access through Java Technologies

    Directory of Open Access Journals (Sweden)

    Nicolae MERCIOIU

    2010-09-01

    Full Text Available As a high-level development environment, the Java technologies offer support for the development of distributed, platform-independent applications, providing a robust set of methods to access databases, used to create software components on the server side as well as on the client side. Analyzing the evolution of Java data-access tools, we notice that they evolved from simple methods permitting queries, insertion, update and deletion of data to advanced implementations such as distributed transactions, cursors and batch files. The client-server architecture allows, through JDBC (Java Database Connectivity), the execution of SQL (Structured Query Language) statements and the manipulation of the results in an independent and consistent manner. The JDBC API (Application Programming Interface) creates the level of abstraction needed to allow SQL queries against any DBMS (Database Management System). The native driver, the ODBC (Open Database Connectivity)-JDBC bridge, and the classes and interfaces of the JDBC API are described. The four steps needed to build a JDBC-driven application are presented briefly, emphasizing how each step is accomplished and the expected results. In each step, the characteristics of the database systems and the way the JDBC programming interface adapts to each one are evaluated. The data types provided by the SQL2 and SQL3 standards are analyzed by comparison with the Java data types, emphasizing the discrepancies between them, but also the conversion between different types of data through the methods of the ResultSet object. Next, starting from the role of metadata and studying the Java programming interfaces that allow querying result sets, we describe the advanced features of data mining with JDBC. As an alternative to result sets, RowSets add new functionalities that
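The connect / prepare / execute / iterate pattern the article attributes to JDBC is the common shape of most database APIs. As a hedged analogy only (JDBC itself is a Java API in `java.sql`), the same pattern can be sketched with Python's stdlib sqlite3 module, whose DB-API cursors play the role of JDBC's PreparedStatement and ResultSet; the table and data are invented:

```python
# The JDBC-style access pattern, sketched with Python's sqlite3 module.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

# Parameterized statements (the role of JDBC's PreparedStatement):
# values stay separate from the SQL text, preventing injection.
cur.executemany("INSERT INTO users (name) VALUES (?)",
                [("Ada",), ("Grace",)])
conn.commit()

# Executing a query and iterating the rows (the role of a ResultSet):
cur.execute("SELECT id, name FROM users WHERE name = ?", ("Ada",))
result_rows = cur.fetchall()
print(result_rows)                           # [(1, 'Ada')]

# Column metadata (the role of ResultSetMetaData):
columns = [col[0] for col in cur.description]
print(columns)                               # ['id', 'name']
conn.close()
```

The abstraction the article credits to JDBC is visible here too: the application code addresses a generic driver interface, and the engine behind it (SQLite in this sketch) can be swapped without changing the query logic.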

  7. Hazard Analysis Database Report

    International Nuclear Information System (INIS)

    The Hazard Analysis Database was developed in conjunction with the hazard analysis activities conducted in accordance with DOE-STD-3009-94, Preparation Guide for U.S. Department of Energy Nonreactor Nuclear Facility Safety Analysis Reports, for HNF-SD-WM-SAR-067, Tank Farms Final Safety Analysis Report (FSAR). The FSAR is part of the approved Authorization Basis (AB) for the River Protection Project (RPP). This document describes, identifies, and defines the contents and structure of the Tank Farms FSAR Hazard Analysis Database and documents the configuration control changes made to the database. The Hazard Analysis Database contains the collection of information generated during the initial hazard evaluations and the subsequent hazard and accident analysis activities. The Hazard Analysis Database supports the preparation of Chapters 3, 4, and 5 of the Tank Farms FSAR and the Unreviewed Safety Question (USQ) process and consists of two major, interrelated data sets: (1) Hazard Analysis Database: Data from the results of the hazard evaluations, and (2) Hazard Topography Database: Data from the system familiarization and hazard identification

  8. Hazard Analysis Database Report

    Energy Technology Data Exchange (ETDEWEB)

    GRAMS, W.H.

    2000-12-28

    The Hazard Analysis Database was developed in conjunction with the hazard analysis activities conducted in accordance with DOE-STD-3009-94, Preparation Guide for U.S. Department of Energy Nonreactor Nuclear Facility Safety Analysis Reports, for HNF-SD-WM-SAR-067, Tank Farms Final Safety Analysis Report (FSAR). The FSAR is part of the approved Authorization Basis (AB) for the River Protection Project (RPP). This document describes, identifies, and defines the contents and structure of the Tank Farms FSAR Hazard Analysis Database and documents the configuration control changes made to the database. The Hazard Analysis Database contains the collection of information generated during the initial hazard evaluations and the subsequent hazard and accident analysis activities. The Hazard Analysis Database supports the preparation of Chapters 3, 4, and 5 of the Tank Farms FSAR and the Unreviewed Safety Question (USQ) process and consists of two major, interrelated data sets: (1) Hazard Analysis Database: Data from the results of the hazard evaluations, and (2) Hazard Topography Database: Data from the system familiarization and hazard identification.

  9. Search Algorithms for Conceptual Graph Databases

    Directory of Open Access Journals (Sweden)

    Abdurashid Mamadolimov

    2013-03-01

    Full Text Available We consider a database composed of a set of conceptual graphs. Using conceptual graphs and graph homomorphism it is possible to build a basic query-answering mechanism based on semantic search. Graph homomorphism defines a partial order over conceptual graphs. Since graph homomorphism checking is an NP-complete problem, the main requirement for database organizing and managing algorithms is to reduce the number of homomorphism checks. Searching is a basic operation for database manipulating problems. We consider the problem of searching for an element in a partially ordered set. The goal is to minimize the number of queries required to find a target element in the worst case. First we analyse conceptual graph database operations. Then we propose a new algorithm for a subclass of lattices. Finally, we suggest a parallel search algorithm for a general poset.
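The expensive primitive the record is built around, graph homomorphism checking, is NP-complete in general, which is exactly why the proposed algorithms try to minimize how often it runs. For tiny graphs a brute-force check is still feasible; the following is a minimal sketch over plain directed graphs rather than full conceptual graphs:

```python
# Brute-force graph homomorphism check: try every map from the nodes of
# g into the nodes of h and test whether it preserves all edges of g.
# Exponential in len(g_nodes) -- fine for illustration, not for scale.
from itertools import product

def homomorphism_exists(g_nodes, g_edges, h_nodes, h_edges):
    """Is there a map f: g_nodes -> h_nodes with (f(u), f(v)) an edge
    of h for every edge (u, v) of g?"""
    h_edge_set = set(h_edges)
    for mapping in product(h_nodes, repeat=len(g_nodes)):
        f = dict(zip(g_nodes, mapping))
        if all((f[u], f[v]) in h_edge_set for (u, v) in g_edges):
            return True
    return False

# A directed 2-cycle maps into a graph containing a 2-cycle...
print(homomorphism_exists(["a", "b"], [("a", "b"), ("b", "a")],
                          [1, 2, 3], [(1, 2), (2, 1), (2, 3)]))  # True
# ...but not into a single directed edge.
print(homomorphism_exists(["a", "b"], [("a", "b"), ("b", "a")],
                          [1, 2], [(1, 2)]))                     # False
```

Since every such check costs this much, organizing the stored graphs by the partial order that homomorphism induces, so that one comparison can rule out whole branches, is the optimization the paper pursues.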

  10. Cooperative answers in database systems

    Science.gov (United States)

    Gaasterland, Terry; Godfrey, Parke; Minker, Jack; Novik, Lev

    1993-01-01

    A major concern of researchers who seek to improve human-computer communication involves how to move beyond literal interpretations of queries to a level of responsiveness that takes the user's misconceptions, expectations, desires, and interests into consideration. At Maryland, we are investigating how to better meet a user's needs within the framework of the cooperative answering system of Gal and Minker. We have been exploring how to use semantic information about the database to formulate coherent and informative answers. The work has two main thrusts: (1) the construction of a logic formula which embodies the content of a cooperative answer; and (2) the presentation of the logic formula to the user in a natural language form. The information that is available in a deductive database system for building cooperative answers includes integrity constraints, user constraints, the search tree for answers to the query, and false presuppositions that are present in the query. The basic cooperative answering theory of Gal and Minker forms the foundation of a cooperative answering system that integrates the new construction and presentation methods. This paper provides an overview of the cooperative answering strategies used in the CARMIN cooperative answering system, an ongoing research effort at Maryland. Section 2 gives some useful background definitions. Section 3 describes techniques for collecting cooperative logical formulae. Section 4 discusses which natural language generation techniques are useful for presenting the logic formula in natural language text. Section 5 presents a diagram of the system.
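One cooperative-answering idea the overview mentions, detecting false presuppositions instead of returning a bare empty answer, can be sketched as follows. This is an invented toy example (names, data, and the two-step rule are mine), not the CARMIN system's actual logic-formula construction:

```python
# Cooperative answering sketch: when a query's literal answer is empty,
# check the query's presuppositions against the database and explain
# the failure instead of just returning "no results".

courses = {"CS101", "CS320"}                 # courses that exist
sections = {("CS101", "A"), ("CS101", "B")}  # (course, section) pairs

def sections_of(course):
    answer = sorted(s for (c, s) in sections if c == course)
    if answer:
        return answer
    # Literal answer is empty -- diagnose why.
    if course not in courses:
        return f"False presupposition: course {course} does not exist."
    return f"Course {course} exists but has no sections scheduled."

print(sections_of("CS101"))   # ['A', 'B']
print(sections_of("CS999"))   # names the false presupposition
print(sections_of("CS320"))   # distinguishes "exists, but empty"
```

The two failure messages correspond to the distinction the paper draws: one query fails because it presupposes something false about the database, the other fails even though all its presuppositions hold, and a cooperative system should answer the two cases differently.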

  11. Hellenic Woodland Database

    OpenAIRE

    Fotiadis, Georgios; Tsiripidis, Ioannis; Bergmeier, Erwin; Dimopolous, Panayotis

    2012-01-01

    The Hellenic Woodland Database (GIVD ID EU-GR-006) includes relevés from approximately 59 sources, as well as unpublished relevés. In total, 4,571 relevés have already been entered into the database, and the database will continue to grow in the near future. Species abundances are recorded according to the 7-grade Braun-Blanquet scale. The oldest relevés date back to 1963. For the majority of relevés (more than 90%), environmental data (e.g. altitude, slope aspect, inclination) exis...

  12. MySQL Database

    OpenAIRE

    Jimoh, Morufu

    2010-01-01

    The main objectives of this thesis were to show how much easier and faster it is to find required information in a computer database than in other data storage systems or by old-fashioned means. We will be able to add, retrieve and update data in a computer database easily. Using a computer database to keep a company's customer information is the objective of my thesis. It is faster, which makes it economically a better solution. The project has six tables, which are branch, st...

  13. Product Licenses Database Application

    CERN Document Server

    Tonkovikj, Petar

    2016-01-01

    The goal of this project is to organize and centralize the data about software tools available to CERN employees, as well as provide a system that simplifies the license management process by providing information about the available licenses and their expiry dates. The project development process consists of two steps: modeling the products (software tools), product licenses, legal agreements and other data related to these entities in a relational database, and developing the front-end user interface so that the user can interact with the database. The result is an ASP.NET MVC web application with interactive views for displaying and managing the data in the underlying database.

  14. Unit 43 - Database Concepts I

    OpenAIRE

    Unit 61, CC in GIS; White, Gerald (ACER)

    1990-01-01

    This unit outlines fundamental concepts in database systems and their integration with GIS, including advantages of a database approach, views of a database, database management systems (DBMS), and alternative database models. Three models—hierarchical, network and relational—are discussed in greater detail.

  15. Some operations on database universes

    OpenAIRE

    Brock, E.O. de

    1997-01-01

    Operations such as integration or modularization of databases can be considered as operations on database universes. This paper describes some operations on database universes. Formally, a database universe is a special kind of table. It turns out that various operations on tables constitute interesting operations on database universes as well.

  16. APPLICATION OF GEOGRAPHICAL PARAMETER DATABASE TO ESTABLISHMENT OF UNIT POPULATION DATABASE

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    GIS is becoming a good tool for handling geographical, economic, and population data, so we can obtain more and more information from these data. On the other hand, in some cases, such as a calamity (hurricane, earthquake, flood, drought etc.) or a decision-making problem (setting up a broadcasting transmitter, building a chemical plant etc.), we have to evaluate the total population in the region influenced by the calamity or project. In this paper, a method is put forward to evaluate the population in such a special region. By exploring the correlation between geographical parameters and the distribution of people in the same region by means of quantitative and qualitative analysis, a unit population database (1 km × 1 km) is established. In this way, the number of people in a special region can be estimated by adding up the population in every grid cell falling within the region boundary. The geographical parameters are obtained from the topographic database and the DEM database at the scale of 1:250,000. The fundamental geographical parameter database covering county administrative boundaries and the 1 km × 1 km grid is set up, and the population database at county level is set up as well. Both the geographical parameter database and the unit population database offer sufficient conditions for quantitative analysis. They will have an important role in the research fields of data mining (DM), decision support systems (DSS), and regional sustainable development.
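The grid-summation step this abstract describes can be sketched in a few lines; the cell sizes, populations, and region below are made-up illustrations, not data from the study:

```python
# Hypothetical illustration of the 1 km x 1 km unit-population method:
# sum the population of every grid cell whose centre falls inside a region.
def estimate_population(cells, contains):
    """cells: iterable of ((x_km, y_km), population); contains: point -> bool."""
    return sum(pop for (x, y), pop in cells if contains((x, y)))

# A uniform 10 km x 10 km grid, 100 people per cell (made-up numbers).
cells = [((x + 0.5, y + 0.5), 100) for x in range(10) for y in range(10)]

# A 3 km x 3 km square "affected region": 9 cell centres fall inside it,
# so the estimate is 900 people.
in_region = lambda p: 2.0 <= p[0] <= 5.0 and 2.0 <= p[1] <= 5.0
```

A real implementation would test cell centres against an arbitrary region polygon (e.g. a flood extent) rather than a rectangle, but the aggregation logic is the same.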

  17. Data management for biofied building

    Science.gov (United States)

    Matsuura, Kohta; Mita, Akira

    2015-03-01

    Recently, smart houses have been studied by many researchers to satisfy the individual demands of residents. However, they are not yet feasible, as they are very costly and require many sensors to be embedded in houses. Therefore, we suggest the "Biofied Building". In the Biofied Building, sensor agent robots conduct sensing, actuation, and control in the house. The robots continuously monitor many parameters of residents' lives, such as walking posture and emotion. In this paper, a prototype network system and a data model for practical application in the Biofied Building are proposed. In the system, the functions of robots and servers are divided according to service flows in Biofied Buildings. The data model is designed to accumulate both building data and residents' data. Data sent from the robots and data analyzed in the servers are automatically registered in the database. Lastly, the feasibility of this system is verified through a lighting control simulation performed in an office space.

  18. Navigating public microarray databases.

    Science.gov (United States)

    Penkett, Christopher J; Bähler, Jürg

    2004-01-01

    With the ever-escalating amount of data being produced by genome-wide microarray studies, it is of increasing importance that these data are captured in public databases so that researchers can use this information to complement and enhance their own studies. Many groups have set up databases of expression data, ranging from large repositories, which are designed to comprehensively capture all published data, through to more specialized databases. The public repositories, such as ArrayExpress at the European Bioinformatics Institute, contain complete datasets in raw format in addition to processed data, whilst the specialist databases tend to provide downstream analysis of normalized data from more focused studies and data sources. Here we provide a guide to the use of these public microarray resources. PMID:18629145

  19. Dietary Supplement Label Database

    Data.gov (United States)

    U.S. Department of Health & Human Services — The database is designed to help both the general public and health care providers find information about ingredients in brand-name products, including name, form,...

  20. ARTI Refrigerant Database

    Energy Technology Data Exchange (ETDEWEB)

    Calm, J.M. [Calm (James M.), Great Falls, VA (United States)

    1994-05-27

    The Refrigerant Database consolidates and facilitates access to information to assist industry in developing equipment using alternative refrigerants. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern.

  1. Mouse Phenome Database (MPD)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Mouse Phenome Database (MPD) has characterizations of hundreds of strains of laboratory mice to facilitate translational discoveries and to assist in selection...

  2. Nuclear Science References Database

    International Nuclear Information System (INIS)

    The Nuclear Science References (NSR) database, together with its associated Web interface, is the world's only comprehensive source of easily accessible low- and intermediate-energy nuclear physics bibliographic information for more than 210,000 articles since the beginning of nuclear science. The weekly-updated NSR database provides essential support for nuclear data evaluation, compilation and research activities. The principles of the database and Web application development and maintenance are described. Examples of nuclear structure, reaction and decay applications are specifically included. The complete NSR database is freely available at the websites of the National Nuclear Data Center (http://www.nndc.bnl.gov/nsr) and the International Atomic Energy Agency (http://www-nds.iaea.org/nsr)

  3. Records Management Database

    Data.gov (United States)

    US Agency for International Development — The Records Management Database is tool created in Microsoft Access specifically for USAID use. It contains metadata in order to access and retrieve the information...

  4. Food Habits Database (FHDBS)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The NEFSC Food Habits Database has two major sources of data. The first, and most extensive, is the standard NEFSC Bottom Trawl Surveys Program. During these...

  5. OTI Activity Database

    Data.gov (United States)

    US Agency for International Development — OTI's worldwide activity database is a simple and effective information system that serves as a program management, tracking, and reporting tool. In each country,...

  6. Global Volcano Locations Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NGDC maintains a database of over 1,500 volcano locations obtained from the Smithsonian Institution Global Volcanism Program, Volcanoes of the World publication....

  7. Uranium Location Database

    Data.gov (United States)

    U.S. Environmental Protection Agency — A GIS compiled locational database in Microsoft Access of ~15,000 mines with uranium occurrence or production, primarily in the western United States. The metadata...

  8. Medicaid CHIP ESPC Database

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Environmental Scanning and Program Characteristic (ESPC) Database is in a Microsoft (MS) Access format and contains Medicaid and CHIP data, for the 50 states...

  9. IVR RSA Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This database contains trip-level reports submitted by vessels participating in Research Set-Aside projects with IVR reporting requirements.

  10. Reach Address Database (RAD)

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Reach Address Database (RAD) stores the reach address of each Water Program feature that has been linked to the underlying surface water features (streams,...

  11. Kansas Cartographic Database (KCD)

    Data.gov (United States)

    Kansas Data Access and Support Center — The Kansas Cartographic Database (KCD) is an exact digital representation of selected features from the USGS 7.5 minute topographic map series. Features that are...

  12. Dissolution Methods Database

    Data.gov (United States)

    U.S. Department of Health & Human Services — For a drug product that does not have a dissolution test method in the United States Pharmacopeia (USP), the FDA Dissolution Methods Database provides information...

  13. Consumer Product Category Database

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Chemical and Product Categories database (CPCat) catalogs the use of over 40,000 chemicals and their presence in different consumer products. The chemical use...

  14. 1988 Spitak Earthquake Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The 1988 Spitak Earthquake database is an extensive collection of geophysical and geological data, maps, charts, images and descriptive text pertaining to the...

  15. Eldercare Locator Database

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Eldercare Locator is a searchable database that allows a user to search via zip code or city/ state for agencies at the State and local levels that provide...

  16. National Assessment Database

    Data.gov (United States)

    U.S. Environmental Protection Agency — The National Assessment Database stores and tracks state water quality assessment decisions, Total Maximum Daily Loads (TMDLs) and other watershed plans designed to...

  17. Household Products Database

    Data.gov (United States)

    U.S. Department of Health & Human Services — This database links over 4,000 consumer brands to health effects from Material Safety Data Sheets (MSDS) provided by the manufacturers and allows scientists and...

  18. Disaster Debris Recovery Database

    Data.gov (United States)

    U.S. Environmental Protection Agency — The US EPA Region 5 Disaster Debris Recovery Database includes public datasets of over 3,500 composting facilities, demolition contractors, haulers, transfer...

  19. Toxicity Reference Database

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Toxicity Reference Database (ToxRefDB) contains approximately 30 years and $2 billion worth of animal studies. ToxRefDB allows scientists and the interested...

  20. Drycleaner Database - Region 7

    Data.gov (United States)

    U.S. Environmental Protection Agency — THIS DATA ASSET NO LONGER ACTIVE: This is metadata documentation for the Region 7 Drycleaner Database (R7DryClnDB) which tracks all Region7 drycleaners who notify...

  1. The Danish Urogynaecological Database

    DEFF Research Database (Denmark)

    Guldberg, Rikke; Brostrøm, Søren; Hansen, Jesper Kjær;

    2013-01-01

    INTRODUCTION AND HYPOTHESIS: The Danish Urogynaecological Database (DugaBase) is a nationwide clinical database established in 2006 to monitor, ensure and improve the quality of urogynaecological surgery. We aimed to describe its establishment and completeness and to validate selected variables....... This is the first study based on data from the DugaBase. METHODS: The database completeness was calculated as a comparison between urogynaecological procedures reported to the Danish National Patient Registry and to the DugaBase. Validity was assessed for selected variables from a random sample of 200...... women in the DugaBase from 1 January 2009 to 31 October 2010, using medical records as a reference. RESULTS: A total of 16,509 urogynaecological procedures were registered in the DugaBase by 31 December 2010. The database completeness has increased by calendar time, from 38.2 % in 2007 to 93.2 % in 2010...

  2. National Geochemical Database: Sediment

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — Geochemical analysis of sediment samples from the National Geochemical Database. Primarily inorganic elemental concentrations, most samples are of stream sediment...

  3. Fine Arts Database (FAD)

    Data.gov (United States)

    General Services Administration — The Fine Arts Database records information on federally owned art in the control of the GSA; this includes the location, current condition and information on artists.

  4. Medicare Coverage Database

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Medicare Coverage Database (MCD) contains all National Coverage Determinations (NCDs) and Local Coverage Determinations (LCDs), local articles, and proposed NCD...

  5. Venus Crater Database

    Data.gov (United States)

    National Aeronautics and Space Administration — This web page leads to a database of images and information about the 900 or so impact craters on the surface of Venus by diameter, latitude, and name.

  6. Rat Genome Database (RGD)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Rat Genome Database (RGD) is a collaborative effort between leading research institutions involved in rat genetic and genomic research to collect, consolidate,...

  7. The CAPEC Database

    DEFF Research Database (Denmark)

    Nielsen, Thomas Lund; Abildskov, Jens; Harper, Peter Mathias; Papaeconomou, Eirini; Gani, Rafiqul

    2001-01-01

    The Computer-Aided Process Engineering Center (CAPEC) database of measured data was established with the aim to promote greater data exchange in the chemical engineering community. The target properties are pure component properties, mixture properties, and special drug solubility data. The...... database divides pure component properties into primary, secondary, and functional properties. Mixture properties are categorized in terms of the number of components in the mixture and the number of phases present. The compounds in the database have been classified on the basis of the functional groups in...... the compound. This classification makes the CAPEC database a very useful tool, for example, in the development of new property models, since properties of chemically similar compounds are easily obtained. A program with efficient search and retrieval functions of properties has been developed....

  8. Building 2000

    International Nuclear Information System (INIS)

    This is the second volume of Building 2000, a pilot project of the Commission's R and D programme 'Solar Energy Applications to Buildings', whose purpose is to encourage the adoption of solar architecture in large buildings. In this second, richly illustrated volume, the results of the design studies illustrating passive solar architecture in buildings in the European Community are presented, in particular for the building categories mentioned in the subtitle. In the first volume, a similar series of studies is presented for the building categories: schools, laboratories and universities, and sports and educational centres. Several Design Support Workshops were organized during the Building 2000 programme, during which Building 2000 design teams could directly exchange ideas with the various design advice experts represented at these workshops. In the second part of the Building 2000 final report, a summary of a selection of many reports is presented (11 papers), as produced by Design Support experts. Most of the design support activities resulted in changes to the various designs, as reported by the design teams in the brochures presented in the first part of this book. It is to be expected that design aids and simulation tools for passive solar options, daylighting concepts, comfort criteria etc. will be utilized more frequently in the future. This will result in a better exchange of information between design practitioners and the European R and D community. This technology transfer will result in buildings of higher quality with respect to energy and environmental issues.

  9. Clinical Genomic Database

    OpenAIRE

    Solomon, Benjamin D.; Nguyen, Anh-Dao; Bear, Kelly A.; Wolfsberg, Tyra G.

    2013-01-01

    Technological advances have greatly increased the availability of human genomic sequencing. However, the capacity to analyze genomic data in a clinically meaningful way lags behind the ability to generate such data. To help address this obstacle, we reviewed all conditions with genetic causes and constructed the Clinical Genomic Database (CGD) (http://research.nhgri.nih.gov/CGD/), a searchable, freely Web-accessible database of conditions based on the clinical utility of genetic diagnosis and...

  10. ORACLE DATABASE SECURITY

    OpenAIRE

    Cristina-Maria Titrade

    2011-01-01

    This paper presents some security issues, namely database system-level security, data-level security, user-level security, user management, resource management and password management. Security is a constant concern in database design and development. Usually, the question is not whether security should exist, but rather how extensive it should be. A typical DBMS has several levels of security, in addition to those offered by the operating system or network. Typically, a DBMS has user a...

  11. Fashion Information Database

    Institute of Scientific and Technical Information of China (English)

    LI Jun; WU Hai-yan; WANG Yun-yi

    2002-01-01

    In the fashion industry, controlling and applying information in the fashion merchandising process is a bottleneck. With the aid of digital technology, a complete and practical fashion information database could be established so that a high-quality, efficient, low-cost and distinctive fashion merchandising system could be realized. The basic structure of such a fashion information database is discussed.

  12. Database computing in HEP

    International Nuclear Information System (INIS)

    The major SSC experiments are expected to produce up to 1 petabyte of data per year each. Once the primary reconstruction is completed by farms of inexpensive processors, I/O becomes a major factor in further analysis of the data. We believe that the application of database techniques can significantly reduce the I/O performed in these analyses. We present examples of such I/O reductions in prototypes based on relational and object-oriented databases of CDF data samples.

  13. Taxes in Europe Database

    OpenAIRE

    European Commission DG Taxation and Customs Union

    2009-01-01

    The Taxes in Europe database is the European Commission's on-line information tool covering the main taxes in force in the EU Member States. Access is free for all users. The system contains information on around 650 taxes, as provided to the European Commission by the national authorities. The "Taxes in Europe" database contains, for each individual tax, information on its legal basis, assessment base, main exemptions, applicable rate(s), economic and statistical classification, as well as t...

  14. Database on Wind Characteristics

    DEFF Research Database (Denmark)

    Højstrup, J.; Ejsing Jørgensen, Hans; Lundtang Petersen, Erik;

    1999-01-01

    This report describes the work and results of the project Database on Wind Characteristics, which was sponsored partly by the European Commission within the framework of the JOULE III programme under contract JOR3-CT95-0061.

  15. Automatic Detection of Buildings and Changes in Buildings for Updating of Maps

    Directory of Open Access Journals (Sweden)

    Harri Kaartinen

    2010-04-01

    Full Text Available There is currently high interest in developing automated methods to assist the updating of map databases. This study presents methods for automatic detection of buildings and of changes in buildings from airborne laser scanner and digital aerial image data, and shows the potential usefulness of the methods with thorough experiments in a 5 km2 suburban study area. In the building detection, 96% of buildings larger than 60 m2 were correctly detected. The completeness and correctness of the change detection for buildings larger than 60 m2 were about 85% (over five classes). Most of the errors occurred in small or otherwise problematic buildings.
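Completeness and correctness as quoted above are the standard detection-quality ratios; a minimal sketch follows, with counts that are invented for illustration rather than taken from the study:

```python
# Standard detection-quality ratios for object detection against a reference map.
def completeness(tp: int, fn: int) -> float:
    """Share of reference buildings actually detected: TP / (TP + FN)."""
    return tp / (tp + fn)

def correctness(tp: int, fp: int) -> float:
    """Share of detections that are real buildings: TP / (TP + FP)."""
    return tp / (tp + fp)

# Illustrative counts: 96 true detections, 4 missed buildings, 4 false detections.
tp, fn, fp = 96, 4, 4
```

In the information-retrieval literature the same two ratios are called recall and precision.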

  16. Update History of This Database - Yeast Interacting Proteins Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Yeast Interacting Proteins Database: Update History of This Database. Update contents: 2010/03/29 Yeast In...

  17. Specialist Bibliographic Databases.

    Science.gov (United States)

    Gasparyan, Armen Yuri; Yessirkepov, Marlen; Voronov, Alexander A; Trukhachev, Vladimir I; Kostyukova, Elena I; Gerasimov, Alexey N; Kitas, George D

    2016-05-01

    Specialist bibliographic databases offer essential online tools for researchers and authors who work on specific subjects and perform comprehensive and systematic syntheses of evidence. This article presents examples of the established specialist databases, which may be of interest to those engaged in multidisciplinary science communication. Access to most specialist databases is through subscription schemes and membership in professional associations. Several aggregators of information and database vendors, such as EBSCOhost and ProQuest, facilitate advanced searches supported by specialist keyword thesauri. Searches of items through specialist databases are complementary to those through multidisciplinary research platforms, such as PubMed, Web of Science, and Google Scholar. Familiarity with the functional characteristics of biomedical and nonbiomedical bibliographic search tools is mandatory for researchers, authors, editors, and publishers. The database users are offered updates of the indexed journal lists, abstracts, author profiles, and links to other metadata. Editors and publishers may find the source selection criteria particularly useful and may apply for coverage of their peer-reviewed journals and grey literature sources. These criteria are aimed at accepting relevant sources with established editorial policies and quality controls. PMID:27134485

  18. Cloud Databases: A Paradigm Shift in Databases

    OpenAIRE

    Indu Arora; Anu Gupta

    2012-01-01

    Relational databases ruled the Information Technology (IT) industry for almost 40 years. But the last few years have seen sea changes in the way IT is used and viewed. Standalone applications have been replaced with web-based applications, dedicated servers with multiple distributed servers, and dedicated storage with network storage. Cloud computing has become a reality due to its lower cost, scalability and pay-as-you-go model. It is one of the biggest changes in IT after the rise of Wor...

  19. A Secure Database Encryption Scheme

    OpenAIRE

    Zongkai Yang; Samba Sesay; Jingwen Chen; Du Xu

    2004-01-01

    The need to protect databases is an ever-growing one, especially in this age of e-commerce. Many conventional database security systems are riddled with holes that attackers can use to penetrate the database. No matter what degree of security is put in place, sensitive data in databases are still vulnerable to attack. To avoid the risk posed by this threat, database encryption has been recommended. However, encrypting every item in the database will greatly degrade ...
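The trade-off this abstract raises (encrypting everything degrades performance, so encrypt selectively) can be sketched as column-level encryption; the one-time-pad cipher and the row layout below are toy illustrations only, not the scheme the paper proposes and not production cryptography:

```python
import secrets

def otp_encrypt(key: bytes, plaintext: str) -> bytes:
    """Toy one-time pad: XOR the data with a fresh random key (illustration ONLY)."""
    data = plaintext.encode()
    assert len(key) >= len(data), "key must be at least as long as the data"
    return bytes(k ^ b for k, b in zip(key, data))

def otp_decrypt(key: bytes, ciphertext: bytes) -> str:
    """Invert the XOR with the same key to recover the plaintext."""
    return bytes(k ^ b for k, b in zip(key, ciphertext)).decode()

# Encrypt only the sensitive column and leave searchable columns in clear
# text, preserving query performance while protecting the highest-risk item.
key = secrets.token_bytes(64)
row = {"name": "Alice", "ssn": otp_encrypt(key, "123-45-6789")}
```

A real deployment would use an authenticated cipher (e.g. AES-GCM) with managed keys; the point here is only the selective-encryption structure.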

  20. 600 MW nuclear power database

    International Nuclear Information System (INIS)

    The 600 MW nuclear power database, based on ORACLE 6.0, consists of three parts: a nuclear power plant database, a nuclear power position database and a nuclear power equipment database. The database contains a great deal of technical data and pictures relating to nuclear power, provided by engineering design units and individuals. The database can give help to the designers of nuclear power plants.

  1. DCC Briefing Paper: Database archiving

    OpenAIRE

    Müller, Heiko

    2009-01-01

    In a computational context, data archiving refers to the storage of electronic documents, data sets, multimedia files, and so on, for a defined period of time. Database archiving is usually seen as a subset of data archiving. Database archiving focuses on archiving data that are maintained under the control of a database management system and structured under a database schema, e.g., a relational database. The primary goal of database archiving is to maintain access to data in case it is late...

  2. Laboratory Building.

    Energy Technology Data Exchange (ETDEWEB)

    Herrera, Joshua M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-03-01

    This report is an analysis of the means of egress and life safety requirements for the laboratory building. The building is located at Sandia National Laboratories (SNL) in Albuquerque, NM. The report includes a prescriptive-based analysis as well as a performance-based analysis. Following the analysis are appendices which contain maps of the laboratory building used throughout the analysis. The top of all the maps is assumed to be north.

  3. Beyond 1-Safety and 2-Safety for Replicated Databases: Group-Safety

    OpenAIRE

    Wiesmann, M.; Schiper, A.

    2003-01-01

    In this paper, we study the safety guarantees of group communication-based database replication techniques. We show that there is a model mismatch between group communication and databases, and because of this, classical group communication systems cannot be used to build 2-safe database replication. We propose a new group communication primitive called end-to-end atomic broadcast that solves the problem, i.e., it can be used to implement 2-safe database replication. We also introduce...

  4. REXUS/BEXUS: launching student experiments -a step towards a stronger space science community

    Science.gov (United States)

    Fittock, Mark; Stamminger, Andreas; Maria, Roth; Dannenberg, Kristine; Page, Helen

    The REXUS/BEXUS (Rocket/Balloon Experiments for University Students) programme provides opportunities for teams of European student scientists and engineers to fly experiments on sounding rockets and high-altitude balloons. This is an opportunity for students and the scientific community to benefit from encouragement and support for experiments. An important feature of the programme is that the students experience a full project life-cycle, which is typically not part of their university education and which helps to prepare them for further scientific work. They have to plan, organize, and control their project in order to develop and build an experiment, but must also work on the scientific aspects. Many of the students continue to work in the field on which they focused in the programme and can often build upon both the experience and the results from flight. Within the REXUS/BEXUS project cycle, they are encouraged to write and present papers about their experiments and results; increasing amounts of scientific output are seen from the students who participate. Not only do the students learn and develop from REXUS/BEXUS, but the scientific community also reaps significant benefits. Another major benefit of the programme is the promotion that the students bring to the whole space community. Not only is the public made more aware of advanced scientific and technical concepts, but there is also the advantage of the contact that participating students have with other university-level students. Students are less restricted in their publicity and attract large public followings online, as well as presenting themselves in more traditional media outlets. Many teams' creative approach to outreach is astonishing. The benefits are not only for the space science community as a whole; institutes, universities and departments can see increased interest following the support of participating students in the programme.
The programme is realized under a bilateral Agency

  5. Surgery Risk Assessment (SRA) Database

    Data.gov (United States)

    Department of Veterans Affairs — The Surgery Risk Assessment (SRA) database is part of the VA Surgical Quality Improvement Program (VASQIP). This database contains assessments of selected surgical...

  6. Accessing and using chemical databases

    DEFF Research Database (Denmark)

    Nikolov, Nikolai Georgiev; Pavlov, Todor; Niemelä, Jay Russell;

    2013-01-01

    Computer-based representation of chemicals makes it possible to organize data in chemical databases-collections of chemical structures and associated properties. Databases are widely used wherever efficient processing of chemical information is needed, including search, storage, retrieval, and...... dissemination. Structure and functionality of chemical databases are considered. The typical kinds of information found in a chemical database are considered-identification, structural, and associated data. Functionality of chemical databases is presented, with examples of search and access types. More details...... are included about the OASIS database and platform and the Danish (Q)SAR Database online. Various types of chemical database resources are discussed, together with a list of examples....

  7. Chemical Explosion Database

    Science.gov (United States)

    Johansson, Peder; Brachet, Nicolas

    2010-05-01

    A database containing information on chemical explosions, recorded and located by the International Data Center (IDC) of the CTBTO, should be established in the IDC prior to entry into force of the CTBT. Nearly all of the large chemical explosions occur in connection with mining activity. As a first step towards the establishment of this database, a survey has been carried out of presumed mining areas where sufficiently large explosions are conducted. The list is dominated by the large coal mining areas such as the Powder River (U.S.), Kuznetsk (Russia), Bowen (Australia) and Ekibastuz (Kazakhstan) basins. There are also several smaller mining areas, e.g. in Scandinavia, Poland, Kazakhstan and Australia, with explosions large enough for detection. Events in the Reviewed Event Bulletin (REB) of the IDC that are located in or close to these mining areas, and which are therefore candidates for inclusion in the database, have been investigated. A comparison with a database of infrasound events has been made, as many mining blasts generate strong infrasound signals and are therefore also included in the infrasound database. Currently there are 66 such REB events in 18 mining areas in the infrasound database. On a yearly basis, several hundred events in mining areas have been recorded and included in the REB. Establishment of the database of chemical explosions requires confirmation and ground-truth information from the States Parties regarding these events. For an explosion reported in the REB, the appropriate authority in whose country the explosion occurred is encouraged, on a voluntary basis, to seek out information on the explosion and communicate it to the IDC.

  8. The Chandra Bibliography Database

    Science.gov (United States)

    Rots, A. H.; Winkelman, S. L.; Paltani, S.; Blecksmith, S. E.; Bright, J. D.

    2004-07-01

    Early in the mission, the Chandra Data Archive started the development of a bibliography database, tracking publications in refereed journals and on-line conference proceedings that are based on Chandra observations, allowing our users to link directly to articles in the ADS from our archive and to link to the relevant data in the archive from the ADS entries. Subsequently, we have been working closely with the ADS and other data centers, in the context of the ADEC-ITWG, on standardizing literature-data linking. We have also extended our bibliography database to include all Chandra-related articles, and we keep track of the number of citations of each paper. In addition to providing valuable services to our users, this database allows us to extract a wide variety of statistical information. The project comprises five components: the bibliography database proper, a maintenance database, an interactive maintenance tool, a user browsing interface, and a web services component for exchanging information with the ADS. All of these elements are nearly mission-independent, and we intend to make the package as a whole available for use by other data centers. The capabilities thus provided support an essential component of the Virtual Observatory.

  9. ADANS database specification

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-01-16

    The purpose of the Air Mobility Command (AMC) Deployment Analysis System (ADANS) Database Specification (DS) is to describe the database organization and storage allocation and to provide the detailed data model of the physical design and information necessary for the construction of the parts of the database (e.g., tables, indexes, rules, defaults). The DS includes entity relationship diagrams, table and field definitions, reports on other database objects, and a description of the ADANS data dictionary. ADANS is the automated system used by Headquarters AMC and the Tanker Airlift Control Center (TACC) for airlift planning and scheduling of peacetime and contingency operations as well as for deliberate planning. ADANS also supports planning and scheduling of Air Refueling Events by the TACC and the unit-level tanker schedulers. ADANS receives input in the form of movement requirements and air refueling requests. It provides a suite of tools for planners to manipulate these requirements/requests against mobility assets and to develop, analyze, and distribute schedules. Analysis tools are provided for assessing the products of the scheduling subsystems, and editing capabilities support the refinement of schedules. A reporting capability provides formatted screen, print, and/or file outputs of various standard reports. An interface subsystem handles message traffic to and from external systems. The database is an integral part of the functionality summarized above.

  10. Integrating Paleoecological Databases

    Science.gov (United States)

    Blois, Jessica; Goring, Simon; Smith, Alison

    2011-02-01

    Neotoma Consortium Workshop; Madison, Wisconsin, 23-26 September 2010; Paleoecology can contribute much to global change science, as paleontological records provide rich information about species range shifts, changes in vegetation composition and productivity, aquatic and terrestrial ecosystem responses to abrupt climate change, and paleoclimate reconstruction, for example. However, while paleoecology is increasingly a multidisciplinary, multiproxy field focused on biotic responses to global change, most paleo databases focus on single-proxy groups. The Neotoma Paleoecology Database (http://www.neotomadb.org) aims to remedy this limitation by integrating discipline-specific databases to facilitate cross-community queries and analyses. In September, Neotoma consortium members and representatives from other databases and data communities met at the University of Wisconsin-Madison to launch the second development phase of Neotoma. The workshop brought together 54 international specialists, including Neotoma data stewards, users, and developers. Goals for the meeting were fourfold: (1) develop working plans for existing data communities; (2) identify new data types and sources; (3) enhance data access, visualization, and analysis on the Neotoma Web site; and (4) coordinate with other databases and cooperate in tool development and sharing.

  11. The CUTLASS database facilities

    International Nuclear Information System (INIS)

    The enhancement of the CUTLASS database management system to provide improved facilities for data handling is seen as a prerequisite to its effective use for future power station data processing and control applications. This particularly applies to the larger projects such as AGR data processing system refurbishments, and the data processing systems required for the new Coal Fired Reference Design stations. In anticipation of the need for improved data handling facilities in CUTLASS, the CEGB established a User Sub-Group in the early 1980s to define the database facilities required by users. Following the endorsement of the resulting specification and a detailed design study, the database facilities have been implemented as an integral part of the CUTLASS system. This paper provides an introduction to the range of CUTLASS Database facilities, and emphasises the role of the Database as the central facility around which future Kit 1 and (particularly) Kit 6 CUTLASS based data processing and control systems will be designed and implemented. (author)

  12. Human Performance Event Database

    International Nuclear Information System (INIS)

    The purpose of this paper is to describe several aspects of a Human Performance Event Database (HPED) that is being developed by the Nuclear Regulatory Commission. These include the background, the database structure and basis for the structure, the process for coding and entering event records, the results of preliminary analyses of information in the database, and plans for the future. In 1992, the Office for Analysis and Evaluation of Operational Data (AEOD) within the NRC decided to develop a database for information on human performance during operating events. The database was needed to classify and categorize the information and to feed operating experience back to licensees and others. An NRC interoffice working group prepared a list of human performance information that should be reported for events; the list was based on the Human Performance Investigation Process (HPIP) that had been developed by the NRC as an aid in investigating events. The structure of the HPED was based on that list. The HPED currently includes data on events described in augmented inspection team (AIT) and incident investigation team (IIT) reports from 1990 through 1996, AEOD human performance studies from 1990 through 1993, recent NRR special team inspections, and licensee event reports (LERs) that were prepared for the events. (author)

  13. Database Description - GETDB | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available General information: Database name: GETDB; alternative name: Gal4 Enhancer Trap Insertion Database. Contact: +81-78-306-3183. Database classification: expression / invertebrate genome database. Organism: Drosophila melanogaster (Taxonomy ID: 7227). Database description: About 4,600 insertion lines of enhancer trap lines based on the Gal4-UAS method were generated in Drosophila, and all of ...

  14. SURFACE: a database of protein surface regions for functional annotation

    OpenAIRE

    Ferrè, Fabrizio; Ausiello, Gabriele; Zanzoni, Andreas; Helmer-Citterich, Manuela

    2004-01-01

    The SURFACE (SUrface Residues and Functions Annotated, Compared and Evaluated, URL http://cbm.bio.uniroma2.it/surface/) database is a repository of annotated and compared protein surface regions. SURFACE contains the results of a large-scale protein annotation and local structural comparison project. A non-redundant set of protein chains is used to build a database of protein surface patches, defined as putative surface functional sites. Each patch is annotated with sequence and structure-der...

  15. A database devoted to the insects of the cultural heritage

    OpenAIRE

    Fabien Fohrer; Michel Martinez; Franck Dorkeld

    2011-01-01

    This database, implemented by both the CICRP and the INRA, gathers the most important pests affecting the cultural heritage. These insects represent a serious threat to the preservation of cultural property such as museum collections, libraries and archives, and movable and immovable objects in historical buildings. The database is an easy tool for identifying the species of interest and permits very prompt undertaking of the required actions against infestations. This database is of int...

  16. Constraint-based pattern mining in multi-relational databases

    OpenAIRE

    Nijssen, Siegfried; Jimenez, Aida; Guns, Tias

    2011-01-01

    We propose a new framework for constraint-based pattern mining in multi-relational databases. Distinguishing features of the framework are that (1) it allows finding patterns not only under anti-monotonic constraints, but also under monotonic constraints and closedness constraints, among others, expressed over complex aggregates over multiple relations; (2) it builds on a declarative graphical representation of constraints that links closely to data models of multi-relational databases and co...

  17. Database management systems understanding and applying database technology

    CERN Document Server

    Gorman, Michael M

    1991-01-01

    Database Management Systems: Understanding and Applying Database Technology focuses on the processes, methodologies, techniques, and approaches involved in database management systems (DBMSs). The book first takes a look at ANSI database standards and DBMS applications and components. Discussions focus on application components and DBMS components, implementing the dynamic relationship application, problems and benefits of dynamic relationship DBMSs, nature of a dynamic relationship application, ANSI/NDL, and DBMS standards. The manuscript then ponders on logical database, interrogation, and phy

  18. Web Technologies And Databases

    Directory of Open Access Journals (Sweden)

    Irina-Nicoleta Odoraba

    2011-04-01

    Full Text Available A database is a collection of many types of occurrences of logical records containing relationships between records and elementary data aggregates. A database management system (DBMS) is a set of programs for creating and operating a database. Theoretically, any relational DBMS can be used to store the data needed by a Web server. In practice, it has been observed that simple DBMSs such as FoxPro or Access are not suitable for Web sites that are used intensively. Large-scale Web applications need high-performance DBMSs able to run multiple applications simultaneously. HyperText Markup Language (HTML) is used to create hypertext documents for web pages. The purpose of HTML is the presentation of information (paragraphs, fonts, tables) rather than the semantic description of the document.

  19. DistiLD Database

    DEFF Research Database (Denmark)

    Palleja, Albert; Horn, Heiko; Eliasson, Sabrina;

    2012-01-01

    Genome-wide association studies (GWAS) have identified thousands of single nucleotide polymorphisms (SNPs) associated with the risk of hundreds of diseases. However, there is currently no database that enables non-specialists to answer the following simple questions: which SNPs associated with ... blocks, so that SNPs in LD with each other are preferentially in the same block, whereas SNPs not in LD are in different blocks. By projecting SNPs and genes onto LD blocks, the DistiLD database aims to increase usage of existing GWAS results by making it easy to query and visualize disease-associated SNPs and genes in their chromosomal context. The database is available at http://distild.jensenlab.org/.
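The projection of SNPs and genes onto LD blocks described in the abstract amounts to an interval lookup. The block coordinates below are invented for illustration and are not DistiLD's actual LD blocks; the sketch only shows the mapping step.

```python
import bisect

# Hypothetical LD blocks on one chromosome, as sorted, non-overlapping
# (start, end) base-pair intervals. As the abstract describes, SNPs in LD
# with each other land in the same block; SNPs not in LD land in different blocks.
ld_blocks = [(1, 10_000), (10_001, 45_000), (45_001, 90_000)]
block_starts = [start for start, _ in ld_blocks]

def project_to_block(position):
    """Map a SNP (or gene midpoint) position to the index of its LD block."""
    i = bisect.bisect_right(block_starts, position) - 1
    if i >= 0 and ld_blocks[i][0] <= position <= ld_blocks[i][1]:
        return i
    return None  # position falls outside all blocks

# Two GWAS hits projected onto the same block would be reported together.
print(project_to_block(12_000), project_to_block(30_000), project_to_block(70_000))
```

With sorted block starts, each lookup is a binary search, so projecting millions of SNPs stays cheap.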

  20. LandIT Database

    DEFF Research Database (Denmark)

    Iftikhar, Nadeem; Pedersen, Torben Bach

    2010-01-01

    Many of today's farming systems are composed of purpose-built computerized farming devices such as spraying equipment, harvesters, fertilizer spreaders and so on. These devices produce large amounts of data. In most cases, it is essential to store data for longer time periods for analysis and reporting purposes. This paper presents the LandIT database, a result of the LandIT project, an industrial collaboration that developed technologies for communication and data integration between farming devices and systems. The LandIT database is based in principle on the ISOBUS standard; however, the standard is extended with additional requirements, such as gradual data aggregation and flexible exchange of farming data. The paper describes the conceptual and logical schemas of the proposed database based on a real-life farming case study.

  1. Cytochrome P450 database.

    Science.gov (United States)

    Lisitsa, A V; Gusev, S A; Karuzina, I I; Archakov, A I; Koymans, L

    2001-01-01

    This paper describes a specialized database dedicated exclusively to the cytochrome P450 superfamily. The system presents the superfamily's nomenclature and describes the structure and function of different P450 enzymes. Information on P450-catalyzed reactions, substrate preferences, and peculiarities of induction and inhibition is available through the database management system. The source genes and the corresponding translated proteins can also be retrieved, together with literature references. The programming solution developed provides a flexible interface for browsing, searching, grouping and reporting the information. A local version of the database manager and the required data files are distributed on a compact disc. In addition, a network version of the software is available on the Internet; it implements an original mechanism for permanent online extension of the data scope. PMID:11769119

  2. ARTI Refrigerant Database

    Energy Technology Data Exchange (ETDEWEB)

    Calm, J.M.

    1992-04-30

    The Refrigerant Database consolidates and facilitates access to information to assist industry in developing equipment using alternative refrigerants. The underlying purpose is to accelerate the phase-out of chemical compounds of environmental concern. The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air-conditioning and refrigeration equipment. The complete documents are not included, though some may be added at a later date. The database identifies sources of specific information on R-32, R-123, R-124, R-125, R-134a, R-141b, R-142b, R-143a, R-152a, R-290 (propane), R-717 (ammonia), ethers, and others, as well as azeotropic and zeotropic blends of these fluids. It addresses polyalkylene glycol (PAG), ester, and other lubricants. It also references documents addressing compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits.

  3. Optimizing Spatial Databases

    Directory of Open Access Journals (Sweden)

    Anda VELICANU

    2010-01-01

    Full Text Available This paper describes the best way to improve the optimization of spatial databases: through spatial indexes. The most common and widely used spatial indexes, the R-tree and the Quadtree, are presented, analyzed and compared in this paper. A few examples are also given of queries that run in Oracle Spatial and are supported by an R-tree spatial index. Spatial databases offer special features that can be very helpful when such data must be represented, but in terms of storage and time costs, spatial data can require a lot of resources. This is why optimizing the database is one of the most important aspects when working with large volumes of data.
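A minimal point quadtree illustrates how such an index speeds up region queries: each node covers a square region and a query descends only into subtrees whose regions intersect the query window. This is a didactic sketch, not Oracle Spatial's implementation (which, as the abstract notes, uses R-trees over bounding rectangles rather than a point quadtree).

```python
class Quadtree:
    """Minimal point quadtree: a node covers a square [x, x+size) x [y, y+size)
    and splits into four children when it holds more than `capacity` points."""
    def __init__(self, x, y, size, capacity=4):
        self.x, self.y, self.size = x, y, size   # lower-left corner and side length
        self.capacity = capacity
        self.points = []
        self.children = None

    def _contains(self, px, py):
        return self.x <= px < self.x + self.size and self.y <= py < self.y + self.size

    def insert(self, px, py):
        if not self._contains(px, py):
            return False
        if self.children is None:
            if len(self.points) < self.capacity:
                self.points.append((px, py))
                return True
            self._split()
        return any(c.insert(px, py) for c in self.children)

    def _split(self):
        h = self.size / 2
        self.children = [Quadtree(self.x + dx, self.y + dy, h, self.capacity)
                         for dx in (0, h) for dy in (0, h)]
        for p in self.points:            # push existing points down
            any(c.insert(*p) for c in self.children)
        self.points = []

    def query(self, qx, qy, qsize):
        """Return all points inside the query square [qx, qx+qsize) x [qy, qy+qsize)."""
        out = []
        # Prune: skip subtrees whose region does not intersect the query window.
        if qx + qsize <= self.x or self.x + self.size <= qx or \
           qy + qsize <= self.y or self.y + self.size <= qy:
            return out
        for (px, py) in self.points:
            if qx <= px < qx + qsize and qy <= py < qy + qsize:
                out.append((px, py))
        if self.children:
            for c in self.children:
                out.extend(c.query(qx, qy, qsize))
        return out

qt = Quadtree(0, 0, 100)
for p in [(10, 10), (20, 80), (55, 55), (90, 5), (60, 60)]:
    qt.insert(*p)
print(qt.query(50, 50, 20))   # points inside [50, 70) x [50, 70)
```

The pruning test in `query` is exactly what makes a spatial index pay off: regions that cannot contain matches are never visited.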

  4. Additive Pattern Database Heuristics

    CERN Document Server

    Felner, A; Korf, R E; 10.1613/jair.1480

    2011-01-01

    We explore a method for computing admissible heuristic evaluation functions for search problems. It utilizes pattern databases, which are precomputed tables of the exact cost of solving various subproblems of an existing problem. Unlike standard pattern database heuristics, however, we partition our problems into disjoint subproblems, so that the costs of solving the different subproblems can be added together without overestimating the cost of solving the original problem. Previously, we showed how to statically partition the sliding-tile puzzles into disjoint groups of tiles to compute an admissible heuristic, using the same partition for each state and problem instance. Here we extend the method and show that it applies to other domains as well. We also present another method for additive heuristics which we call dynamically partitioned pattern databases. Here we partition the problem into disjoint subproblems for each state of the search dynamically. We discuss the pros and cons of each of these methods a...
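The additivity idea above can be shown with a deliberately tiny instance. If each "pattern" contains a single tile of the 3x3 sliding-tile puzzle, each pattern database reduces to that tile's table of Manhattan distances, and summing the lookups over disjoint tile groups cannot overestimate because no move is counted twice. This is only a sketch of the additive lookup; real pattern databases, as the abstract describes, precompute multi-tile tables by exhaustive search.

```python
# A trivial "pattern database" for the 3x3 sliding-tile puzzle: each pattern
# holds a single tile, so its table is that tile's Manhattan distance from
# every board position to its goal position. Disjoint groups add admissibly.
N = 3
GOAL = {tile: divmod(tile, N) for tile in range(1, N * N)}  # tile -> (row, col)

def build_single_tile_db(tile):
    gr, gc = GOAL[tile]
    return {pos: abs(pos // N - gr) + abs(pos % N - gc) for pos in range(N * N)}

PATTERN_GROUPS = [(1, 2, 3, 4), (5, 6, 7, 8)]   # a disjoint partition of the tiles
DBS = {tile: build_single_tile_db(tile)
       for group in PATTERN_GROUPS for tile in group}

def additive_heuristic(state):
    """state[pos] = tile at board position pos (0 for the blank).
    Sum the disjoint per-group database lookups."""
    return sum(DBS[tile][pos] for pos, tile in enumerate(state) if tile != 0)

goal_state = (0, 1, 2, 3, 4, 5, 6, 7, 8)
print(additive_heuristic(goal_state))                      # 0 at the goal
print(additive_heuristic((1, 0, 2, 3, 4, 5, 6, 7, 8)))     # one misplaced tile
```

With multi-tile patterns the same lookup structure applies; only the table construction (counting moves of pattern tiles alone) becomes more involved.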

  5. The PROSITE database.

    Science.gov (United States)

    Hulo, Nicolas; Bairoch, Amos; Bulliard, Virginie; Cerutti, Lorenzo; De Castro, Edouard; Langendijk-Genevaux, Petra S; Pagni, Marco; Sigrist, Christian J A

    2006-01-01

    The PROSITE database consists of a large collection of biologically meaningful signatures that are described as patterns or profiles. Each signature is linked to a documentation that provides useful biological information on the protein family, domain or functional site identified by the signature. The PROSITE database is now complemented by a series of rules that can give more precise information about specific residues. During the last 2 years, the documentation and the ScanProsite web pages were redesigned to add more functionalities. The latest version of PROSITE (release 19.11 of September 27, 2005) contains 1329 patterns and 552 profile entries. Over the past 2 years more than 200 domains have been added, and now 52% of UniProtKB/Swiss-Prot entries (release 48.1 of September 27, 2005) have a cross-reference to a PROSITE entry. The database is accessible at http://www.expasy.org/prosite/. PMID:16381852
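PROSITE patterns have a documented syntax: x matches any residue, [ABC] a set of alternatives, {ABC} an exclusion set, and (n) or (n,m) a repetition count. A simplified converter to Python regular expressions (ignoring the < and > anchors and other corner cases of the full syntax) might look like this; the example motif is illustrative, not an official PROSITE entry.

```python
import re

def prosite_to_regex(pattern):
    """Convert a (simplified) PROSITE pattern to a Python regex.
    Handles: x (any residue), [ABC] (alternatives), {ABC} (exclusions),
    elem(n) / elem(n,m) (repetition), and the '-' element separator."""
    regex = []
    for element in pattern.rstrip(".").split("-"):
        m = re.fullmatch(r"(.+?)(?:\((\d+)(?:,(\d+))?\))?", element)
        core, lo, hi = m.group(1), m.group(2), m.group(3)
        if core == "x":
            core = "."                       # any residue
        elif core.startswith("{"):
            core = "[^" + core[1:-1] + "]"   # exclusion set
        # '[...]' alternative sets pass through unchanged
        if lo:
            core += "{%s}" % lo if hi is None else "{%s,%s}" % (lo, hi)
        regex.append(core)
    return "".join(regex)

# An invented zinc-finger-like motif, for illustration only:
pattern = "C-x(2,4)-C-x(3)-[LIVMFYWC]"
rx = re.compile(prosite_to_regex(pattern))
print(prosite_to_regex(pattern))
print(bool(rx.search("AACAACAAALAA")))
```

Scanning a sequence database then reduces to running the compiled regex over each sequence, which is essentially what pattern-based signature scanning does.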

  6. The Neotoma Paleoecology Database

    Science.gov (United States)

    Grimm, E. C.; Ashworth, A. C.; Barnosky, A. D.; Betancourt, J. L.; Bills, B.; Booth, R.; Blois, J.; Charles, D. F.; Graham, R. W.; Goring, S. J.; Hausmann, S.; Smith, A. J.; Williams, J. W.; Buckland, P.

    2015-12-01

    The Neotoma Paleoecology Database (www.neotomadb.org) is a multiproxy, open-access, relational database that includes fossil data for the past 5 million years (the late Neogene and Quaternary Periods). Modern distributional data for various organisms are also being made available for calibration and paleoecological analyses. The project is a collaborative effort among individuals from more than 20 institutions worldwide, including domain scientists representing a spectrum of Pliocene-Quaternary fossil data types, as well as experts in information technology. Working groups are active for diatoms, insects, ostracodes, pollen and plant macroscopic remains, testate amoebae, rodent middens, vertebrates, age models, geochemistry and taphonomy. Groups are also active in developing online tools for data analyses and for developing modules for teaching at different levels. A key design concept of NeotomaDB is that stewards for various data types are able to remotely upload and manage data. Cooperatives for different kinds of paleo data, or from different regions, can appoint their own stewards. Over the past year, much progress has been made on development of the steward software-interface that will enable this capability. The steward interface uses web services that provide access to the database. More generally, these web services enable remote programmatic access to the database, which both desktop and web applications can use and which provide real-time access to the most current data. Use of these services can alleviate the need to download the entire database, which can be out-of-date as soon as new data are entered. In general, the Neotoma web services deliver data either from an entire table or from the results of a view. Upon request, new web services can be quickly generated. Future developments will likely expand the spatial and temporal dimensions of the database. NeotomaDB is open to receiving new datasets and stewards from the global Quaternary community

  7. Database Management System

    Science.gov (United States)

    1990-01-01

    In 1981 Wayne Erickson founded Microrim, Inc, a company originally focused on marketing a microcomputer version of RIM (Relational Information Manager). Dennis Comfort joined the firm and is now vice president, development. The team developed an advanced spinoff from the NASA system they had originally created, a microcomputer database management system known as R:BASE 4000. Microrim added many enhancements and developed a series of R:BASE products for various environments. R:BASE is now the second largest selling line of microcomputer database management software in the world.

  8. Community-Built Databases

    CERN Document Server

    Pardede, Eric

    2011-01-01

    Wikipedia, Flickr, YouTube, Facebook and LinkedIn are all examples of large community-built databases, albeit with quite diverse purposes and collaboration patterns. Their usage and dissemination will grow further, introducing e.g. new semantics, personalization, or interactive media. Pardede delivers the first comprehensive research reference on community-built databases. The contributions discuss various technical and social aspects of research and development in areas such as Web science, social networks, and collaborative information systems.

  9. The CHIANTI atomic database

    CERN Document Server

    Young, Peter R; Landi, Enrico; Del Zanna, Giulio; Mason, Helen

    2015-01-01

    The CHIANTI atomic database was first released in 1996 and has had a huge impact on the analysis and modeling of emissions from astrophysical plasmas. The database has continued to be updated, with version 8 released in 2015. Atomic data for modeling the emissivities of 246 ions and neutrals are contained in CHIANTI, together with data for deriving the ionization fractions of all elements up to zinc. The different types of atomic data are summarized here and their formats discussed. Statistics on the impact of CHIANTI to the astrophysical community are given and examples of the diverse range of applications are presented.

  10. Search Databases and Statistics

    DEFF Research Database (Denmark)

    Refsgaard, Jan C; Munk, Stephanie; Jensen, Lars J

    2016-01-01

    ... vast amounts of raw data. This task is tackled by computational tools implementing algorithms that match the experimental data to databases, providing the user with lists for downstream analysis. Several platforms for such automated interpretation of mass spectrometric data have been developed, each having strengths and weaknesses that must be considered for the individual needs. These are reviewed in this chapter. Equally critical for generating highly confident output datasets is the application of sound statistical criteria to limit the inclusion of incorrect peptide identifications from database...

  11. Pressen, personoplysninger og databaser

    DEFF Research Database (Denmark)

    Schaumburg-Müller, Sten

    2006-01-01

    It is examined to what extent the at times very restrictive and not particularly media-suited rules of the Danish Personal Data Act (persondataloven) cover journalistic activity, and an account is given of the special regulation of media databases and its interplay with the Personal Data Act and the Media Liability Act (medieansvarsloven).

  12. Rett networked database

    DEFF Research Database (Denmark)

    Grillo, Elisa; Villard, Laurent; Clarke, Angus;

    2012-01-01

    ... underlie some (usually variant) cases. There is only limited correlation between genotype and phenotype. The Rett Networked Database (http://www.rettdatabasenetwork.org/) has been established to share clinical and genetic information. Through an "adaptor" process of data harmonization, a set of 293 clinical items and 16 genetic items was generated; 62 clinical and 7 genetic items constitute the core dataset; 23 clinical items contain longitudinal information. The database contains information on 1838 patients from 11 countries (December 2011), with or without mutations in known genes. These numbers...

  13. LogiQL a query language for smart databases

    CERN Document Server

    Halpin, Terry

    2014-01-01

    LogiQL is a new state-of-the-art programming language based on Datalog. It can be used to build applications that combine transactional, analytical, graph, probabilistic, and mathematical programming. LogiQL makes it possible to build hybrid applications that previously required multiple programming languages and databases. In this first book to cover LogiQL, the authors explain how to design, implement, and query deductive databases using this new programming language. LogiQL's declarative approach enables complex data structures and business rules to be simply specified and then automaticall
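LogiQL's Datalog heritage can be illustrated with the classic ancestor example. The sketch below is plain Python emulating Datalog's rule-to-fixpoint evaluation, not LogiQL syntax; the relations and names are invented for illustration.

```python
# Deductive-database style: derive an `ancestor` relation from a `parent`
# relation by applying rules until a fixpoint is reached.
parent = {("alice", "bob"), ("bob", "carol"), ("carol", "dave")}

def ancestors(parent):
    # Rule 1: ancestor(X, Y) :- parent(X, Y).
    ancestor = set(parent)
    while True:
        # Rule 2: ancestor(X, Z) :- parent(X, Y), ancestor(Y, Z).
        derived = {(x, z)
                   for (x, y) in parent
                   for (y2, z) in ancestor if y == y2}
        if derived <= ancestor:
            return ancestor   # fixpoint: nothing new is derivable
        ancestor |= derived

print(sorted(ancestors(parent)))
```

The declarative flavor is the point: the programmer states the two rules, and the engine (here, a naive fixpoint loop; real systems use semi-naive evaluation and indexing) decides how to compute the result.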

  14. Long-term perspective underscores need for stronger near-term policies on climate change

    Science.gov (United States)

    Marcott, S. A.; Shakun, J. D.; Clark, P. U.; Mix, A. C.; Pierrehumbert, R.; Goldner, A. P.

    2014-12-01

    Despite scientific consensus that substantial anthropogenic climate change will occur during the 21st century and beyond, the social, economic and political will to address this global challenge remains mired in uncertainty and indecisiveness. One contributor to this situation may be that scientific findings are often couched in technical detail focusing on near-term changes and uncertainties and often lack a relatable long-term context. We argue that viewing near-term changes from a long-term perspective provides a clear demonstration that policy decisions made in the next few decades will affect the Earth's climate, and with it our socio-economic well-being, for the next ten millennia or more. To provide a broader perspective, we present a graphical representation of Earth's long-term climate history that clearly identifies the connection between near-term policy options and the geological scale of future climate change. This long view is based on a combination of recently developed global proxy temperature reconstructions of the last 20,000 years and model projections of surface temperature for the next 10,000 years. Our synthesis places the 20th and 21st centuries, when most emissions are likely to occur, into the context of the last twenty millennia over which time the last Ice Age ended and human civilization developed, and the next ten millennia, over which time the projected impacts will occur. This long-term perspective raises important questions about the most effective adaptation and mitigation policies. For example, although some consider it economically viable to raise seawalls and dikes in response to 21st century sea level change, such a strategy does not account for the need for continuously building much higher defenses in the 22nd century and beyond. Likewise, avoiding tipping points in the climate system in the short term does not necessarily imply that such thresholds will not still be crossed in the more distant future as slower components

  15. DataBase on demand

    CERN Document Server

    Aparicio, Ruben Gaspar; Coterillo Coz, I

    2012-01-01

    At CERN a number of key database applications are running on user-managed MySQL database services. The database on demand project was born out of an idea to provide the CERN user community with an environment to develop and run database services outside of the actual centralised Oracle based database services. The Database on Demand (DBoD) empowers the user to perform certain actions that had been traditionally done by database administrators, DBA's, providing an enterprise platform for database applications. It also allows the CERN user community to run different database engines, e.g. presently open community version of MySQL and single instance Oracle database server. This article describes a technology approach to face this challenge, a service level agreement, the SLA that the project provides, and an evolution of possible scenarios.

  16. DataBase on Demand

    Science.gov (United States)

    Gaspar Aparicio, R.; Gomez, D.; Coterillo Coz, I.; Wojcik, D.

    2012-12-01

    At CERN a number of key database applications are running on user-managed MySQL database services. The database on demand project was born out of an idea to provide the CERN user community with an environment to develop and run database services outside of the actual centralised Oracle based database services. The Database on Demand (DBoD) empowers the user to perform certain actions that had been traditionally done by database administrators, DBA's, providing an enterprise platform for database applications. It also allows the CERN user community to run different database engines, e.g. presently open community version of MySQL and single instance Oracle database server. This article describes a technology approach to face this challenge, a service level agreement, the SLA that the project provides, and an evolution of possible scenarios.

  17. DataBase on Demand

    International Nuclear Information System (INIS)

At CERN a number of key database applications run on user-managed MySQL database services. The Database on Demand project was born out of an idea to provide the CERN user community with an environment to develop and run database services outside of the centralised Oracle-based database services. Database on Demand (DBoD) empowers users to perform certain actions traditionally done by database administrators (DBAs), providing an enterprise platform for database applications. It also allows the CERN user community to run different database engines: at present the open community version of MySQL and single-instance Oracle database servers. This article describes the technology approach taken to face this challenge, the service level agreement (SLA) that the project provides, and an evolution of possible scenarios.
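The self-service model described in these records can be sketched in a few lines: users own instances of a supported engine and may run only a whitelisted set of administrative actions themselves. This is a hypothetical illustration of the idea, not DBoD's actual API; the class names, engine list, and action list are all invented.

```python
# Hypothetical sketch of a database-on-demand service model: users request
# instances of supported engines and may run only a whitelisted set of
# administrative actions. Engine and action names are illustrative.
SUPPORTED_ENGINES = {"mysql", "oracle-single-instance"}
USER_ACTIONS = {"start", "stop", "backup", "restore", "change_config"}

class InstanceRequestError(Exception):
    pass

class DBInstance:
    def __init__(self, owner, engine):
        if engine not in SUPPORTED_ENGINES:
            raise InstanceRequestError(f"unsupported engine: {engine}")
        self.owner = owner
        self.engine = engine
        self.state = "stopped"
        self.audit_log = []

    def perform(self, user, action):
        # Self-service: the owner runs routine DBA tasks directly,
        # but only actions on the approved list.
        if user != self.owner:
            raise PermissionError("only the owning user may manage this instance")
        if action not in USER_ACTIONS:
            raise PermissionError(f"reserved for service administrators: {action}")
        if action == "start":
            self.state = "running"
        elif action == "stop":
            self.state = "stopped"
        self.audit_log.append((user, action))
        return self.state

db = DBInstance("alice", "mysql")
db.perform("alice", "start")
```

The point of the sketch is the separation of roles: the platform enforces which engines exist and which actions are user-facing, while everything else stays with the central service.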

  18. The relational database system of KM3NeT

    Science.gov (United States)

    Albert, Arnauld; Bozza, Cristiano

    2016-04-01

The KM3NeT Collaboration is building a new generation of neutrino telescopes in the Mediterranean Sea. For these telescopes, a relational database has been designed and implemented for several purposes, such as the centralised management of accounts, the storage of all documentation about components, the status of the detector, and information about slow control and calibration data. It also contains information useful during the construction and data acquisition phases. Highlights of the database schema, storage and management are discussed, along with design choices that have an impact on performance. In most cases, the database is not accessed directly by applications but via a custom-designed Web application server.

  19. Oracle TimesTen in-memory Database Integration

    OpenAIRE

    Žitný, Jakub; Potocký, Miroslav

    2014-01-01

Project Specification: The objective is to build an rpm/script/puppet module that will easily deploy a TimesTen in-memory database on an existing server/cluster, create a script configuring the TimesTen in-memory database for use with a specific database/RAC, and produce a step-by-step document (Twiki+Snow KB) on how to get the required data cached in a simple way. The ultimate outcome will be a new service to deploy TT caching easily on any puppetized DB server. Abstract: TimesTen is in-memory...

  20. A Peer-to-Peer Database Management System

    OpenAIRE

    Roshelova, Albena

    2004-01-01

Peer-to-Peer Database Management Systems (PDBMS) are still at the beginning of their evolution. They build on p2p technology to exploit the power of available distributed database management technologies. The proposed PDBMS will be completely autonomous, without any notion of centralization such as a central server or a cost-based global schema. In this paper a number of potential research issues in the overlap between database and p2p systems is identified, and a vision for build...

  1. Database design: Community discussion board

    OpenAIRE

    Klepetko, Radim

    2009-01-01

The goal of this thesis is designing a database for a discussion board application that provides classic discussion board functionality plus web 2.0 features. The emphasis lies on a precise description of the application requirements, which are then used to design an optimal database model independent of any technological implementation (the chosen database system). At the end of the thesis, the database design is tested using the MySQL database system.
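A minimal discussion-board schema of the kind this thesis designs might look as follows. This is a sketch using SQLite rather than MySQL, and all table and column names are illustrative assumptions, not taken from the thesis.

```python
import sqlite3

# Minimal sketch of a discussion-board schema: users, threads, and posts,
# with a nullable self-reference on posts for nested ("web 2.0") replies.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (
    user_id   INTEGER PRIMARY KEY,
    username  TEXT NOT NULL UNIQUE
);
CREATE TABLE threads (
    thread_id INTEGER PRIMARY KEY,
    title     TEXT NOT NULL,
    author_id INTEGER NOT NULL REFERENCES users(user_id)
);
CREATE TABLE posts (
    post_id   INTEGER PRIMARY KEY,
    thread_id INTEGER NOT NULL REFERENCES threads(thread_id),
    author_id INTEGER NOT NULL REFERENCES users(user_id),
    parent_id INTEGER REFERENCES posts(post_id),  -- NULL for top-level posts
    body      TEXT NOT NULL
);
""")
conn.execute("INSERT INTO users (username) VALUES ('alice'), ('bob')")
conn.execute("INSERT INTO threads (title, author_id) VALUES ('Welcome', 1)")
conn.execute("INSERT INTO posts (thread_id, author_id, parent_id, body) "
             "VALUES (1, 1, NULL, 'First!')")
conn.execute("INSERT INTO posts (thread_id, author_id, parent_id, body) "
             "VALUES (1, 2, 1, 'A reply')")
replies = conn.execute("SELECT body FROM posts WHERE parent_id = 1").fetchall()
```

The self-referencing `parent_id` column is the usual way to keep threaded replies in a single table while remaining portable across database systems.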

  2. The Database State Machine Approach

    OpenAIRE

    Pedone, Fernando; Guerraoui, Rachid; Schiper, Andre

    1999-01-01

    Database replication protocols have historically been built on top of distributed database systems, and have consequently been designed and implemented using distributed transactional mechanisms, such as atomic commitment. We present the Database State Machine approach, a new way to deal with database replication in a cluster of servers. This approach relies on a powerful atomic broadcast primitive to propagate transactions between database servers, and alleviates the need for atomic comm...
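The state-machine idea summarized above can be illustrated in a few lines: if every replica receives the same transactions in the same total order (the job of the atomic broadcast primitive) and applies them deterministically, all replicas converge without a distributed atomic-commit round. This is a toy illustration of the principle, not the paper's protocol; the class and function names are invented.

```python
# Sketch of the state-machine replication principle: identical inputs,
# identical order, deterministic application => identical replica states.
class Replica:
    def __init__(self):
        self.state = {}

    def apply(self, txn):
        # txn is a deterministic function of the current state
        for key, value in txn(self.state).items():
            self.state[key] = value

def broadcast(replicas, txns):
    # Stand-in for the atomic broadcast primitive: every replica sees
    # the same transactions in the same order.
    for txn in txns:
        for r in replicas:
            r.apply(txn)

def deposit(amount):
    return lambda s: {"balance": s.get("balance", 0) + amount}

replicas = [Replica(), Replica(), Replica()]
broadcast(replicas, [deposit(100), deposit(-30)])
```

The real difficulty, which the paper addresses, is implementing the broadcast so that the ordering guarantee holds across failures; once it does, no atomic commitment among replicas is needed.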

  3. GRAD: On Graph Database Modeling

    OpenAIRE

    Ghrab, Amine; Romero, Oscar; Skhiri, Sabri; Vaisman, Alejandro; Zimányi, Esteban

    2016-01-01

    Graph databases have emerged as the fundamental technology underpinning trendy application domains where traditional databases are not well-equipped to handle complex graph data. However, current graph databases support basic graph structures and integrity constraints with no standard algebra. In this paper, we introduce GRAD, a native and generic graph database model. GRAD goes beyond traditional graph database models, which support simple graph structures and constraints. Instead, GRAD pres...

  4. Clinical databases in physical therapy.

    OpenAIRE

    Swinkels, I.C.S.; Ende, C.H.M. van den; Bakker, D.; Wees, Ph.J van der; Hart, D.L.; Deutscher, D.; Bosch, W.J.H. van den; Dekker, J.

    2007-01-01

    Clinical databases in physical therapy provide increasing opportunities for research into physical therapy theory and practice. At present, information on the characteristics of existing databases is lacking. The purpose of this study was to identify clinical databases in which physical therapists record data on their patients and treatments and to investigate the basic aspects, data sets, output, management, and data quality of the databases. Identification of the databases was performed by ...

  5. Building Procurement

    DEFF Research Database (Denmark)

    Andersson, Niclas

    2007-01-01

    ‘The procurement of construction work is complex, and a successful outcome frequently elusive’. With this opening phrase of the book, the authors take on the challenging job of explaining the complexity of building procurement. Even though building procurement systems are, and will remain, complex...... despite this excellent book, the knowledge, expertise, well-articulated argument and collection of recent research efforts that are provided by the three authors will help to make project success less elusive. The book constitutes a thorough and comprehensive investigation of building procurement, which......, which gives the book a challenging contribution to the existing body of knowledge....

  6. KALIMER database development (database configuration and design methodology)

    International Nuclear Information System (INIS)

KALIMER Database is an advanced database supporting integrated management for Liquid Metal Reactor Design Technology Development using Web applications. The KALIMER design database consists of a Results Database, Inter-Office Communication (IOC), a 3D CAD database, a Team Cooperation system, and Reserved Documents. The Results Database holds research results from phase II of Liquid Metal Reactor Design Technology Development under the mid- and long-term nuclear R and D programme. IOC is a linkage control system between sub-projects to share and integrate research results for KALIMER. The 3D CAD Database is a schematic design overview for KALIMER. The Team Cooperation System informs team members of research cooperation and meetings. Finally, KALIMER Reserved Documents manages collected data and several documents accumulated since the project's accomplishment. This report describes the features of the hardware and software and the database design methodology for KALIMER

  7. From database to normbase

    NARCIS (Netherlands)

    Stamper, R.; Liu, K.; Kolkman, M.; Klarenberg, P.; Slooten, van F.; Ades, Y.; Slooten, van C.

    1991-01-01

    After the database concept, we are ready for the normbase concept. The object is to decouple organizational and technical knowledge that are now mixed inextricably together in the application programs we write today. The underlying principle is to find a way of specifying a social system as a system

  8. Mathematics & database (open) access

    OpenAIRE

    Guillopé, Laurent

    2003-01-01

    The textual version of this presentation at the Conference "Open Access to Scientific and Technical Information: State of the Art and Future Trends" was published with the title 'Mathematics and databases: Open Access' in "Information Services and Use", vol. 23 (2003), issue 2-3, p. 127-131.

  9. Hydrocarbon Spectral Database

    Science.gov (United States)

    SRD 115 Hydrocarbon Spectral Database (Web, free access)   All of the rotational spectral lines observed and reported in the open literature for 91 hydrocarbon molecules have been tabulated. The isotopic molecular species, assigned quantum numbers, observed frequency, estimated measurement uncertainty and reference are given for each transition reported.

  10. LHCb distributed conditions database

    International Nuclear Information System (INIS)

    The LHCb Conditions Database project provides the necessary tools to handle non-event time-varying data. The main users of conditions are reconstruction and analysis processes, which are running on the Grid. To allow efficient access to the data, we need to use a synchronized replica of the content of the database located at the same site as the event data file, i.e. the LHCb Tier1. The replica to be accessed is selected from information stored on LFC (LCG File Catalog) and managed with the interface provided by the LCG developed library CORAL. The plan to limit the submission of jobs to those sites where the required conditions are available will also be presented. LHCb applications are using the Conditions Database framework on a production basis since March 2007. We have been able to collect statistics on the performance and effectiveness of both the LCG library COOL (the library providing conditions handling functionalities) and the distribution framework itself. Stress tests on the CNAF hosted replica of the Conditions Database have been performed and the results will be summarized here

  11. LHCb distributed conditions database

    Science.gov (United States)

    Clemencic, M.

    2008-07-01

    The LHCb Conditions Database project provides the necessary tools to handle non-event time-varying data. The main users of conditions are reconstruction and analysis processes, which are running on the Grid. To allow efficient access to the data, we need to use a synchronized replica of the content of the database located at the same site as the event data file, i.e. the LHCb Tier1. The replica to be accessed is selected from information stored on LFC (LCG File Catalog) and managed with the interface provided by the LCG developed library CORAL. The plan to limit the submission of jobs to those sites where the required conditions are available will also be presented. LHCb applications are using the Conditions Database framework on a production basis since March 2007. We have been able to collect statistics on the performance and effectiveness of both the LCG library COOL (the library providing conditions handling functionalities) and the distribution framework itself. Stress tests on the CNAF hosted replica of the Conditions Database have been performed and the results will be summarized here.
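The site-local replica selection described in this record can be sketched simply: given the site holding the event data, pick the conditions-database replica at that site, and only consider sites where one exists when submitting jobs. This is an illustrative sketch only; the catalogue contents and hostnames below are made up, not real LFC data.

```python
# Illustrative replica-selection logic: jobs read conditions from the
# replica co-located with the event data, and are only submitted to
# sites that host such a replica. All entries below are invented.
REPLICA_CATALOG = {
    "CNAF": "conditions-replica.cnaf.example",
    "PIC": "conditions-replica.pic.example",
}

def select_replica(data_site):
    # Returns the conditions replica at the site of the event data,
    # or None if that site hosts no replica.
    return REPLICA_CATALOG.get(data_site)

def eligible_sites(candidate_sites):
    # Limit job submission to sites where the required conditions exist.
    return [s for s in candidate_sites if s in REPLICA_CATALOG]
```

The design choice is to treat replica location as a scheduling constraint rather than fetching conditions remotely, which keeps conditions access local and fast for Grid jobs.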

  12. Databases and data mining

    Science.gov (United States)

    Over the course of the past decade, the breadth of information that is made available through online resources for plant biology has increased astronomically, as have the interconnectedness among databases, online tools, and methods of data acquisition and analysis. For maize researchers, the numbe...

  13. Enhancing navigation in biomedical databases by community voting and database-driven text classification

    Directory of Open Access Journals (Sweden)

    Guettler Daniel

    2009-10-01

    Full Text Available Abstract Background The breadth of biological databases and their information content continues to increase exponentially. Unfortunately, our ability to query such sources is still often suboptimal. Here, we introduce and apply community voting, database-driven text classification, and visual aids as a means to incorporate distributed expert knowledge, to automatically classify database entries and to efficiently retrieve them. Results Using a previously developed peptide database as an example, we compared several machine learning algorithms in their ability to classify abstracts of published literature results into categories relevant to peptide research, such as related or not related to cancer, angiogenesis, molecular imaging, etc. Ensembles of bagged decision trees met the requirements of our application best. No other algorithm consistently performed better in comparative testing. Moreover, we show that the algorithm produces meaningful class probability estimates, which can be used to visualize the confidence of automatic classification during the retrieval process. To allow viewing long lists of search results enriched by automatic classifications, we added a dynamic heat map to the web interface. We take advantage of community knowledge by enabling users to cast votes in Web 2.0 style in order to correct automated classification errors, which triggers reclassification of all entries. We used a novel framework in which the database "drives" the entire vote aggregation and reclassification process to increase speed while conserving computational resources and keeping the method scalable. In our experiments, we simulate community voting by adding various levels of noise to nearly perfectly labelled instances, and show that, under such conditions, classification can be improved significantly. Conclusion Using PepBank as a model database, we show how to build a classification-aided retrieval system that gathers training data from the
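The classification approach the paper evaluates, an ensemble of bagged trees whose vote fractions serve as class-probability estimates, can be shown in miniature. In this toy sketch, decision stumps on one numeric feature stand in for bagged trees over text features; it illustrates bagging and vote-based confidence, not the paper's actual pipeline.

```python
import random
from collections import Counter

# Toy bagging sketch: an ensemble of decision stumps trained on bootstrap
# resamples; the fraction of ensemble votes for class 1 is used as a
# class-probability estimate, as in the paper's confidence visualization.
def train_stump(data):
    # data: list of (x, label) pairs with labels 0/1; pick the threshold
    # and orientation with the best training accuracy.
    best = (-1.0, None)
    for t in sorted({x for x, _ in data}):
        for low, high in ((0, 1), (1, 0)):
            acc = sum((low if x <= t else high) == y for x, y in data) / len(data)
            if acc > best[0]:
                best = (acc, (t, low, high))
    t, low, high = best[1]
    return lambda x: low if x <= t else high

def bagged_classifier(data, n_models=25, seed=0):
    rng = random.Random(seed)
    models = [train_stump([rng.choice(data) for _ in data])
              for _ in range(n_models)]
    def prob_positive(x):
        # Vote fraction for class 1: a usable confidence score.
        votes = Counter(m(x) for m in models)
        return votes[1] / n_models
    return prob_positive

train = [(i, 0) for i in range(5)] + [(i, 1) for i in range(5, 10)]
predict = bagged_classifier(train)
```

The vote fraction is what makes the retrieval interface possible: entries classified with low confidence can be highlighted for community voting and later reclassification.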

  14. JDD, Inc. Database

    Science.gov (United States)

    Miller, David A., Jr.

    2004-01-01

JDD, Inc. is a maintenance and custodial contracting company whose mission is to provide its clients in the private and government sectors "quality construction, construction management and cleaning services in the most efficient and cost effective manners" (JDD, Inc. Mission Statement). The company provides facilities support for Fort Riley in Fort Riley, Kansas and the NASA John H. Glenn Research Center at Lewis Field here in Cleveland, Ohio. JDD, Inc. is owned and operated by James Vaughn, who started as a painter at NASA Glenn and has been working here for the past seventeen years. This summer I worked under Devan Anderson, who is the safety manager for JDD, Inc. in the Logistics and Technical Information Division at Glenn Research Center. The LTID provides all transportation, secretarial, and security needs and contract management of these various services for the center. As safety manager, my mentor provides Occupational Safety and Health Administration (OSHA) compliance for all JDD, Inc. employees and handles all other issues (Environmental Protection Agency issues, workers compensation, safety and health training) relating to job safety. My summer assignment was not considered "groundbreaking research" like many other summer interns have done in the past, but it is just as important and beneficial to JDD, Inc. I initially created a database using Microsoft Excel to classify and categorize data pertaining to numerous safety training certification courses instructed by our safety manager during the course of the fiscal year. This early portion of the database consisted of only data (training field index, employees who were present at these training courses and who was absent) from the training certification courses. Once I completed this phase of the database, I decided to expand it and add as many dimensions to it as possible. Throughout the last seven weeks, I have been compiling more data from day to day operations and been adding the

  15. Update History of This Database - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

Full Text Available Update History of This Database. 2014/05/07: The contact information is corrected; the features and manner of utilization of the database are corrected. 2014/02/04: Trypanosomes Database English archive site is opened. 2011/04/04: Trypanosomes Database ( http://www.tanpaku.org/tdb/ ) is opened.

  16. Reactor building

    International Nuclear Information System (INIS)

The present invention concerns a structure for ABWR-type reactor buildings which can increase the capacity of the spent fuel storage area at low cost and with improved earthquake resistance. In the reactor building, the floor of the spent fuel pool is made flat, and a pool water depth satisfying the shielding requirements is ensured. In addition, a water depth is also maintained in the provisional equipment storage pool used for storing spent fuels, and the capacity of the spent fuel storage area is increased by utilizing surplus space of that pool. Since the flattened floor of the spent fuel pool is flush with the floor of the provisional equipment storage pool, transfer of the horizontal loads applied to the building during earthquakes is made smooth, improving the earthquake resistance of the building. (T.M.)

  17. Building Languages

    Science.gov (United States)

    ... family's native language) is taught as the child's second language through reading, writing, speech, and use of residual ... that parents can use to help their child learn language. There are many types of building blocks, and ...

  18. MammoGrid: a mammography database

    CERN Multimedia

    2002-01-01

    What would be the advantages if physicians around the world could gain access to a unique mammography database? The answer may come from MammoGrid, a three-year project under the Fifth Framework Programme of the EC. Led by CERN, MammoGrid involves the UK (the Universities of Oxford, Cambridge and the West of England, Bristol, plus the company Mirada Solutions of Oxford), and Italy (the Universities of Pisa and Sassari and the Hospitals in Udine and Torino). The aim of the project is, in light of emerging GRID technology, to develop a Europe-wide database of mammograms. The database will be used to investigate a set of important healthcare applications as well as the potential of the GRID to enable healthcare professionals throughout the EU to work together effectively. The contributions of the partners include building the GRID-database infrastructure, developing image processing and Computer Aided Detection techniques, and making the clinical evaluation. The first project meeting took place at CERN in Sept...

  19. What is a lexicographical database?

    DEFF Research Database (Denmark)

    Bergenholtz, Henning; Skovgård Nielsen, Jesper

    2013-01-01

    50 years ago, no lexicographer used a database in the work process. Today, almost all dictionary projects incorporate databases. In our opinion, the optimal lexicographical database should be planned in cooperation between a lexicographer and a database specialist in each specific lexicographic...... project. Such cooperation will reach the highest level of success if the lexicographer has at least a basic knowledge of the topic presented in this paper: What is a database? This type of knowledge is also needed when the lexicographer describes an ongoing or a finished project. In this article, we...... provide the description of this type of cooperation, using the most important theoretical terms relevant in the planning of a database. It will be made clear that a lexicographical database is like any other database. The only difference is that an optimal lexicographical database is constructed to fulfil...

  20. CASE STORAGE BASED ON RELATIONAL DATABASE

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

This paper focuses on the integration of a case base with a relational database management system (RDBMS). The organizational and commercial impact will be far greater if the case-based reasoning (CBR) system is integrated with mainstream information systems, exemplified by the RDBMS. The scalability, security and robustness provided by a commercial RDBMS help the CBR system manage the case base. The virtual table in a relational database (RDB) is important for CBR systems to implement a flexible case template. The paper discusses how to implement a flexible and succinct case template and proposes a mapping model between the case template and the RDB. The key idea is to build the case as a virtual view of the underlying data.
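The idea of exposing a case template as a virtual view over relational data can be shown concretely. This sketch uses SQLite for brevity, and the table and column names are illustrative assumptions, not the paper's schema.

```python
import sqlite3

# Sketch of the case-template-as-virtual-view idea: the CBR system reads
# one row per case through a view, while the RDBMS keeps the real tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE problems  (case_id INTEGER PRIMARY KEY, symptom TEXT);
CREATE TABLE solutions (case_id INTEGER, remedy TEXT, outcome TEXT);
-- The case template: one row per case, assembled from the base tables.
CREATE VIEW case_base AS
    SELECT p.case_id, p.symptom, s.remedy, s.outcome
    FROM problems p JOIN solutions s USING (case_id);
""")
conn.execute("INSERT INTO problems VALUES (1, 'overheating')")
conn.execute("INSERT INTO solutions VALUES (1, 'replace fan', 'success')")
case = conn.execute(
    "SELECT symptom, remedy FROM case_base WHERE case_id = 1").fetchone()
```

Because the view is recomputed from the base tables on each query, changing the template means redefining the view, not migrating stored cases, which is the flexibility the paper argues for.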

  1. Open geochemical database

    Science.gov (United States)

    Zhilin, Denis; Ilyin, Vladimir; Bashev, Anton

    2010-05-01

We regard "geochemical data" as data on chemical parameters of the environment, linked with the geographical position of the corresponding point. Rapid development of the global positioning system (GPS) and measuring instruments allows fast collection of huge amounts of geochemical data. Presently these data are published in scientific journals in text format, which hampers searching for information about particular places and meta-analysis of data collected by different researchers. Part of the information is never published. To make the data available and easy to find, it seems reasonable to elaborate an open database of geochemical information, accessible via the Internet. It also seems reasonable to link the data with maps or space images, for example from the GoogleEarth service. For this purpose an open geochemical database is being elaborated (http://maps.sch192.ru). Any user, after registration, can upload geochemical data (position, type of parameter and value of the parameter) and edit them. Every user (including unregistered ones) can (a) extract the values of parameters fulfilling desired conditions and (b) see the points, linked to the GoogleEarth space image, colored according to the value of a selected parameter. He can then treat the extracted values any way he likes. The database holds the following data types: authors, points, seasons and parameters. An author is a person who publishes the data; every author can declare his own profile. A point is characterized by its geographical position and the type of the object (i.e. river, lake etc.). Values of parameters are linked to a point, an author and the season when they were obtained. A user can choose a parameter to place on the GoogleEarth space image and a scale to color the points on the image according to the value of the parameter. Currently (December, 2009) the database is under construction, but several functions (uploading data on pH and electrical conductivity and placing colored points onto GoogleEarth space image) are
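The data model described above (authors, points, seasons, parameters, with each value linked to a point, an author and a season) maps naturally onto a relational schema. The sketch below uses SQLite, and all names are illustrative, not the site's actual schema.

```python
import sqlite3

# Sketch of the described data model: a measurement links a point, an
# author and a season to one parameter value, so queries can filter by
# parameter and value and return positions for plotting.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE authors (author_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE points  (point_id  INTEGER PRIMARY KEY,
                      lat REAL, lon REAL, object_type TEXT);
CREATE TABLE seasons (season_id INTEGER PRIMARY KEY, label TEXT);
CREATE TABLE measurements (
    point_id  INTEGER REFERENCES points(point_id),
    author_id INTEGER REFERENCES authors(author_id),
    season_id INTEGER REFERENCES seasons(season_id),
    parameter TEXT,    -- e.g. 'pH', 'conductivity'
    value     REAL
);
""")
conn.execute("INSERT INTO authors VALUES (1, 'example author')")
conn.execute("INSERT INTO points VALUES (1, 55.75, 37.61, 'river')")
conn.execute("INSERT INTO seasons VALUES (1, 'spring 2009')")
conn.execute("INSERT INTO measurements VALUES (1, 1, 1, 'pH', 7.4)")
rows = conn.execute(
    "SELECT lat, lon, value FROM measurements JOIN points USING (point_id) "
    "WHERE parameter = 'pH' AND value BETWEEN 7 AND 8").fetchall()
```

The final query is the shape of both described use cases: extracting values that fulfil conditions, and fetching coordinates plus values to color points on a map overlay.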

  2. Building and Maintaining Halls of Fame Over a Database

    OpenAIRE

    Alvanaki, F.; Michel, S; Stupar, A.

    2012-01-01

    Halls of Fame are fascinating constructs. They represent the elite of an often very large amount of entities---persons, companies, products, countries etc. Beyond their practical use as static rankings, changes to them are particularly interesting---for decision making processes, as input to common media or novel narrative science applications, or simply consumed by users. In this work, we aim at detecting events that can be characterized by changes to a Hall of Fame ranking in an automated w...

  3. Multilevel security for relational databases

    CERN Document Server

    Faragallah, Osama S; El-Samie, Fathi E Abd

    2014-01-01

Concepts of Database Security: Database Concepts; Relational Database Security Concepts; Access Control in Relational Databases (Discretionary Access Control, Mandatory Access Control, Role-Based Access Control); Work Objectives; Book Organization. Basic Concepts of Multilevel Database Security: Introduction; Multilevel Database Relations; Polyinstantiation (Invisible Polyinstantiation, Visible Polyinstantiation, Types of Polyinstantiation, Architectural Consideration

  4. An XCT image database system

    International Nuclear Information System (INIS)

In this paper, an expansion of an X-ray CT (XCT) examination history database into an XCT image database is discussed. The XCT examination history database has been constructed and used for daily examination and investigation in our hospital. This database consists of alphanumeric information (locations, diagnoses and so on) for more than 15,000 cases, and for some of them we add tree-structured image data, which offers flexibility for various types of image data. This database system is written in the MUMPS database manipulation language. (author)

  5. Download - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

Full Text Available Search and download: Download via FTP. About This Database: Database Description, Download License, Update History

  6. The magnet database system

    International Nuclear Information System (INIS)

The Test Department of the Magnet Systems Division of the Superconducting Super Collider Laboratory (SSCL) is developing a central database of SSC magnet information that will be available to all magnet scientists at the SSCL or elsewhere, via network connections. The database contains information on the magnets' major components, configuration information (specifying which individual items were used in each cable, coil, and magnet), measurements made at major fabrication stages, and the test results on completed magnets. These data will facilitate the correlation of magnet performance with the properties of its constituents. Recent efforts have focused on the development of procedures for user-friendly access to the data, including displays in the format of the production "traveler" data sheets, standard summary reports, and a graphical interface for ad hoc queries and plots

  7. Database on aircraft accidents

    International Nuclear Information System (INIS)

The Reactor Safety Subcommittee of the Nuclear Safety and Preservation Committee published the report 'The criteria on assessment of probability of aircraft crash into light water reactor facilities' as the standard method for evaluating the probability of an aircraft crash into nuclear reactor facilities in July 2002. In response to the report, the Japan Nuclear Energy Safety Organization has been collecting open information on aircraft accidents of commercial airplanes, self-defense force (SDF) airplanes and US force airplanes every year since 2003, sorting it out and developing a database of aircraft accidents for the latest 20 years to evaluate the probability of an aircraft crash into nuclear reactor facilities. This year, the database was revised by adding the aircraft accidents of 2010 to the existing database and deleting the aircraft accidents of 1991 from it, resulting in the revised 2011 database for the latest 20 years, from 1991 to 2010. Furthermore, flight information on commercial aircraft was also collected to develop a flight database for the same 20 years, 1991 to 2010, to evaluate the probability of an aircraft crash into reactor facilities. The method for developing the database of aircraft accidents follows the report 'The criteria on assessment of probability of aircraft crash into light water reactor facilities' described above. The 2011 revised database for the 20 years from 1991 to 2010 shows the following; the trends in the 2011 database change little compared with the previous year's. (1) The data on commercial aircraft accidents is based on the 'Aircraft accident investigation reports of Japan transport safety board' of the Ministry of Land, Infrastructure, Transport and Tourism. 4 large fixed-wing aircraft accidents, 58 small fixed-wing aircraft accidents, 5 large bladed aircraft accidents and 114 small bladed aircraft accidents occurred. The relevant accidents for evaluating
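The annual revision procedure described in this record is a rolling 20-year window: each update adds the newest year's accidents and drops the oldest year. A minimal sketch of that bookkeeping follows; the record contents and year range used below are invented for illustration.

```python
# Sketch of a rolling-window database revision: add the newest year,
# drop every year that falls outside the 20-year span.
def revise_window(db, new_year, new_records, span=20):
    db = dict(db)                     # keep the previous edition intact
    db[new_year] = new_records
    for year in [y for y in db if y <= new_year - span]:
        del db[year]
    return db

# Previous edition covers 1990-2009; the 2011 revision adds 2010 and
# drops the oldest year, leaving the window 1991-2010.
db_prev = {year: [] for year in range(1990, 2010)}
db_2011 = revise_window(db_prev, 2010, ["accident records for 2010"])
```

Copying the mapping before mutating it means each annual edition remains available for comparison, matching the report's practice of comparing trends against the previous year's database.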

  8. ARTI Refrigerant Database

    Energy Technology Data Exchange (ETDEWEB)

    Calm, J.M.

    1992-11-09

The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air-conditioning and refrigeration equipment. The database identifies sources of specific information on R-32, R-123, R-124, R-125, R-134, R-134a, R-141b, R-142b, R-143a, R-152a, R-245ca, R-290 (propane), R-717 (ammonia), ethers, and others as well as azeotropic and zeotropic blends of these fluids. It addresses lubricants including alkylbenzene, polyalkylene glycol, ester, and other synthetics as well as mineral oils. It also references documents on compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits. A computerized version is available that includes retrieval software.

  9. Physical database design for an object-oriented database system

    OpenAIRE

    Scholl, Marc H.

    1994-01-01

Object-oriented database systems typically offer a variety of structuring capabilities to model complex objects. This flexibility, together with type (or class) hierarchies and computed "attributes" (methods), poses a high demand on the physical design of object-oriented databases. As in traditional databases, it is hardly ever true that the conceptual structure of the database is also a good, that is, efficient, internal one. Rather, data representing the conceptual objects may be stru...

  10. MMI Face Database

    OpenAIRE

    Maat, L.M.; Sondak, R.C.; Valstar, M.F.; Pantic, M.; Gaia, P.

    2005-01-01

    The automatic recognition of human facial expressions is an interesting research area in AI with a growing number of projects and researchers. In spite of repeated references to the need for a reference set of images that could provide a basis for benchmarking various techniques in automatic facial expression analysis, a readily accessible and complete enough database of face images does not exist yet. This lack represented our main incentive to develop a web-based, easily accessible, and eas...

  11. Formal aspects in databases

    International Nuclear Information System (INIS)

    From the beginning of the relational data models special attention has been paid to the theory of relations through the concepts of decomposition and dependency constraints. The initial goal of these works was devoted to the scheme design process. Most of the results are used in this area but serve as a basis for improvements of the model in several directions: incomplete information, universal relations, deductive databases, etc... (orig.)

  12. Teradata Database System Optimization

    OpenAIRE

    Krejčík, Jan

    2008-01-01

The Teradata database system is specially designed for the data warehousing environment. This thesis explores the use of Teradata in this environment and describes its characteristics and potential areas for optimization. The theoretical part is intended to be user study material; it shows the main principles of Teradata system operation and describes factors significantly affecting system performance. The following sections are based on previously acquired information, which is used for analysis and ...

  13. Modeling Digital Video Database

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

The main purpose of the model is to present how the Unified Modeling Language (UML) can be used for modeling a digital video database system (VDBS). It demonstrates the modeling process that can be followed during the analysis phase of complex applications. In order to guarantee the continuity mapping of the models, the authors propose some suggestions to transform the use case diagrams into an object diagram, which is one of the main diagrams for the next development phases.

  14. SEDA (SEed DAtabase)

    Czech Academy of Sciences Publication Activity Database

    Šerá, Božena

    Praha : Botanická zahrada hl. m. Prahy, 2005 - (Sekerka, P.), s. 64-65 ISBN 80-903697-0-7. [Introdukce a genetické zdroje rostlin. Botanické zahrady v novém ticíciletí. Praha (CZ), 05.09.2005] R&D Projects: GA MŠk(CZ) 1P05OC049 Institutional research plan: CEZ:AV0Z60870520 Keywords : database, seed, diaspore, fruit, Subject RIV: EH - Ecology, Behaviour

  15. ARTI refrigerant database

    Energy Technology Data Exchange (ETDEWEB)

    Calm, J.M.

    1996-11-15

    The Refrigerant Database is an information system on alternative refrigerants, associated lubricants, and their use in air conditioning and refrigeration. It consolidates and facilitates access to property, compatibility, environmental, safety, application and other information. It provides corresponding information on older refrigerants, to assist manufacturers and those using alternative refrigerants, to make comparisons and determine differences. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern.

  16. ARTI refrigerant database

    Energy Technology Data Exchange (ETDEWEB)

    Calm, J.M.

    1996-07-01

    The Refrigerant Database is an information system on alternative refrigerants, associated lubricants, and their use in air conditioning and refrigeration. It consolidates and facilitates access to property, compatibility, environmental, safety, application and other information. It provides corresponding information on older refrigerants, to assist manufacturers and those using alternative refrigerants, to make comparisons and determine differences. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern.

  17. ARTI refrigerant database

    Energy Technology Data Exchange (ETDEWEB)

    Calm, J.M. [Calm (James M.), Great Falls, VA (United States)

    1999-01-01

    The Refrigerant Database is an information system on alternative refrigerants, associated lubricants, and their use in air conditioning and refrigeration. It consolidates and facilitates access to property, compatibility, environmental, safety, application and other information. It provides corresponding information on older refrigerants, to assist manufacturers and those using alternative refrigerants, to make comparisons and determine differences. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern.

  18. Real Time Baseball Database

    Science.gov (United States)

    Fukue, Yasuhiro

    The author describes the system outline, features and operations of the "Nikkan Sports Realtime Baseball Database", which was developed and operated by Nikkan Sports Shimbun, K. K. The system enables numerical data from professional baseball games to be input as the games proceed, with data updated in real time, just in time. Besides serving as a supporting tool for preparing newspapers, it is also available to broadcasting media and to general users through NTT Dial Q2 and other services.

  19. LHCb Distributed Conditions Database

    CERN Document Server

    Clemencic, Marco

    2007-01-01

    The LHCb Conditions Database project provides the necessary tools to handle non-event time-varying data. The main users of conditions are reconstruction and analysis processes, which are running on the Grid. To allow efficient access to the data, we need to use a synchronized replica of the content of the database located at the same site as the event data file, i.e. the LHCb Tier1. The replica to be accessed is selected from information stored on LFC (LCG File Catalog) and managed with the interface provided by the LCG developed library CORAL. The plan to limit the submission of jobs to those sites where the required conditions are available will also be presented. LHCb applications have been using the Conditions Database framework on a production basis since March 2007. We have been able to collect statistics on the performance and effectiveness of both the LCG library COOL (the library providing conditions handling functionalities) and the distribution framework itself. Stress tests on the CNAF hosted replica o...

  20. NNDC database migration project

    International Nuclear Information System (INIS)

    NNDC Database Migration was necessary to replace obsolete hardware and software, to be compatible with the industry standard in relational databases (mature software, a large base of supporting software for administration and dissemination, and replication and synchronization tools), and to improve user access in terms of interface and speed. The Relational Database Management System (RDBMS) consists of a Sybase Adaptive Server Enterprise (ASE), which is relatively easy to move between different RDB systems (e.g., MySQL, MS SQL-Server, or MS Access), the Structured Query Language (SQL), and administrative tools written in Java. Linux or UNIX platforms can be used. The existing ENSDF datasets are often very large and will need to be reworked. Both the CRP (adopted) and CRP (Budapest) datasets give elemental cross sections (not relative Iγ) in the RI field, so it is not immediately obvious which of the old values has been changed. But primary and secondary intensities are now available on the same scale; the intensity normalization has been done for us. We will gain access to a large volume of data from Budapest, and some of those gamma-ray intensity and energy data will be superior to what we already have.

  1. Human cancer databases (review).

    Science.gov (United States)

    Pavlopoulou, Athanasia; Spandidos, Demetrios A; Michalopoulos, Ioannis

    2015-01-01

    Cancer is one of the four major non‑communicable diseases (NCD), responsible for ~14.6% of all human deaths. Currently, there are >100 different known types of cancer and >500 genes involved in cancer. Ongoing research efforts have been focused on cancer etiology and therapy. As a result, there is an exponential growth of cancer‑associated data from diverse resources, such as scientific publications, genome‑wide association studies, gene expression experiments, gene‑gene or protein‑protein interaction data, enzymatic assays, epigenomics, immunomics and cytogenetics, stored in relevant repositories. These data are complex and heterogeneous, ranging from unprocessed, unstructured data in the form of raw sequences and polymorphisms to well‑annotated, structured data. Consequently, the storage, mining, retrieval and analysis of these data in an efficient and meaningful manner pose a major challenge to biomedical investigators. In the current review, we present the central, publicly accessible databases that contain data pertinent to cancer, the resources available for delivering and analyzing information from these databases, as well as databases dedicated to specific types of cancer. Examples for this wealth of cancer‑related information and bioinformatic tools have also been provided. PMID:25369839

  2. ARTI Refrigerant Database

    Energy Technology Data Exchange (ETDEWEB)

    Cain, J.M. (Calm (James M.), Great Falls, VA (United States))

    1993-04-30

    The Refrigerant Database consolidates and facilitates access to information to assist industry in developing equipment using alternative refrigerants. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern. The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air-conditioning and refrigeration equipment. The complete documents are not included. The database identifies sources of specific information on R-32, R-123, R-124, R-125, R-134, R-134a, R-141b, R-142b, R-143a, R-152a, R-245ca, R-290 (propane), R-717 (ammonia), ethers, and others as well as azeotropic and zeotropic blends of these fluids. It addresses lubricants including alkylbenzene, polyalkylene glycol, ester, and other synthetics as well as mineral oils. It also references documents addressing compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits. Incomplete citations or abstracts are provided for some documents to accelerate availability of the information and will be completed or replaced in future updates.

  3. ARTI refrigerant database

    Energy Technology Data Exchange (ETDEWEB)

    Calm, J.M. [Calm (James M.), Great Falls, VA (United States)

    1998-08-01

    The Refrigerant Database is an information system on alternative refrigerants, associated lubricants, and their use in air conditioning and refrigeration. It consolidates and facilitates access to property, compatibility, environmental, safety, application and other information. It provides corresponding information on older refrigerants, to assist manufacturers and those using alternative refrigerants, to make comparisons and determine differences. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern. The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air-conditioning and refrigeration equipment. The complete documents are not included, though some may be added at a later date. The database identifies sources of specific information on many refrigerants including propane, ammonia, water, carbon dioxide, propylene, ethers, and others as well as azeotropic and zeotropic blends of these fluids. It addresses lubricants including alkylbenzene, polyalkylene glycol, polyolester, and other synthetics as well as mineral oils. It also references documents addressing compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits. Incomplete citations or abstracts are provided for some documents. They are included to accelerate availability of the information and will be completed or replaced in future updates.

  4. ARTI refrigerant database

    Energy Technology Data Exchange (ETDEWEB)

    Calm, J.M.

    1997-02-01

    The Refrigerant Database is an information system on alternative refrigerants, associated lubricants, and their use in air conditioning and refrigeration. It consolidates and facilitates access to property, compatibility, environmental, safety, application and other information. It provides corresponding information on older refrigerants, to assist manufacturers and those using alternative refrigerants, to make comparisons and determine differences. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern. The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air-conditioning and refrigeration equipment. The complete documents are not included, though some may be added at a later date. The database identifies sources of specific information on various refrigerants. It addresses lubricants including alkylbenzene, polyalkylene glycol, polyolester, and other synthetics as well as mineral oils. It also references documents addressing compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits. Incomplete citations or abstracts are provided for some documents. They are included to accelerate availability of the information and will be completed or replaced in future updates.

  5. The Cambridge Structural Database.

    Science.gov (United States)

    Groom, Colin R; Bruno, Ian J; Lightfoot, Matthew P; Ward, Suzanna C

    2016-04-01

    The Cambridge Structural Database (CSD) contains a complete record of all published organic and metal-organic small-molecule crystal structures. The database has been in operation for over 50 years and continues to be the primary means of sharing structural chemistry data and knowledge across disciplines. As well as structures that are made public to support scientific articles, it includes many structures published directly as CSD Communications. All structures are processed both computationally and by expert structural chemistry editors prior to entering the database. A key component of this processing is the reliable association of the chemical identity of the structure studied with the experimental data. This important step helps ensure that data is widely discoverable and readily reusable. Content is further enriched through selective inclusion of additional experimental data. Entries are available to anyone through free CSD community web services. Linking services developed and maintained by the CCDC, combined with the use of standard identifiers, facilitate discovery from other resources. Data can also be accessed through CCDC and third party software applications and through an application programming interface. PMID:27048719

  6. Reshaping Smart Businesses with Cloud Database Solutions

    OpenAIRE

    Bogdan NEDELCU; Andreea Maria IONESCU; Ionescu, Ana Maria; Alexandru George VASILE

    2015-01-01

    The aim of this article is to show the importance of Big Data and its growing influence on companies. We can also see how much companies are willing to invest in big data and how much they are currently gaining from it. In this big data era, there is fierce competition between companies and the technologies they use when building their strategies. There are almost no boundaries when it comes to the possibilities and facilities some databases can offer. However, the mos...

  7. Generalized Database Management System Support for Numeric Database Environments.

    Science.gov (United States)

    Dominick, Wayne D.; Weathers, Peggy G.

    1982-01-01

    This overview of potential for utilizing database management systems (DBMS) within numeric database environments highlights: (1) major features, functions, and characteristics of DBMS; (2) applicability to numeric database environment needs and user needs; (3) current applications of DBMS technology; and (4) research-oriented and…

  8. Building Procurement

    DEFF Research Database (Denmark)

    Andersson, Niclas

    2007-01-01

    ‘The procurement of construction work is complex, and a successful outcome frequently elusive’. With this opening phrase of the book, the authors take on the challenging job of explaining the complexity of building procurement. Even though building procurement systems are, and will remain, complex...... despite this excellent book, the knowledge, expertise, well-articulated argument and collection of recent research efforts that are provided by the three authors will help to make project success less elusive. The book constitutes a thorough and comprehensive investigation of building procurement, which...... evolves from a simple establishment of a contractual relationship to a central and strategic part of construction. The authors relate to cultural, ethical and social and behavioural sciences as the fundamental basis for analysis and understanding of the complexity and dynamics of the procurement system...

  9. Competence Building

    DEFF Research Database (Denmark)

    Borrás, Susana; Edquist, Charles

    The main question that guides this paper is how governments are focusing (and must focus) on competence building (education and training) when designing and implementing innovation policies. With this approach, the paper aims at filling the gap between the existing literature on competences on the...... one hand, and the real world of innovation policy-making on the other, typically not speaking to each other. With this purpose in mind, this paper discusses the role of competences and competence-building in the innovation process from a perspective of innovation systems; it examines how governments...... and public agencies in different countries and different times have actually approached the issue of building, maintaining and using competences in their innovation systems; it examines what are the critical and most important issues at stake from the point of view of innovation policy, looking...

  10. Towards a Portuguese database of food microbiological occurrence

    OpenAIRE

    Viegas, Silvia; Machado, Claudia; Dantas, M.Ascenção; Oliveira, Luísa

    2011-01-01

    Aims: To expand the Portuguese Food Information Resource Programme (PortFIR) by building the Portuguese Food Microbiological Information Network (RPIMA), including users, stakeholders and food microbiological data producers that will provide data and information from research, monitoring, epidemiological investigation and disease surveillance. The integration of food data in a national database will improve foodborne risk management. Methods and results: Potential members were identified and...

  11. Mobile Source Observation Database (MSOD)

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Mobile Source Observation Database (MSOD) is a relational database being developed by the Assessment and Standards Division (ASD) of the US Environmental...

  12. A Case for Database Filesystems

    Energy Technology Data Exchange (ETDEWEB)

    Adams, P A; Hax, J C

    2009-05-13

    Data intensive science is offering new challenges and opportunities for Information Technology and traditional relational databases in particular. Database filesystems offer the potential to store Level Zero data and analyze Level 1 and Level 3 data within the same database system [2]. Scientific data is typically composed of both unstructured files and scalar data. Oracle SecureFiles is a new database filesystem feature in Oracle Database 11g that is specifically engineered to deliver high performance and scalability for storing unstructured or file data inside the Oracle database. SecureFiles presents the best of both the filesystem and the database worlds for unstructured content. Data stored inside SecureFiles can be queried or written at performance levels comparable to that of traditional filesystems while retaining the advantages of the Oracle database.
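    The database-filesystem idea above (scalar data and unstructured file content living in one queryable store) can be illustrated generically. The sketch below uses Python's built-in sqlite3, not Oracle SecureFiles, and the table and column names are invented for illustration:

    ```python
    import sqlite3

    # Minimal database-filesystem sketch (NOT Oracle SecureFiles): keep
    # unstructured file content as a BLOB next to scalar metadata, so one
    # SQL query can reach both. Schema is hypothetical.
    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE observations (
            id        INTEGER PRIMARY KEY,
            run_name  TEXT NOT NULL,
            max_value REAL,          -- scalar (derived) data
            raw_file  BLOB           -- unstructured (raw) file data
        )
    """)

    raw = b"\x00\x01\x02\x03"  # stand-in for a raw instrument file
    conn.execute(
        "INSERT INTO observations (run_name, max_value, raw_file) VALUES (?, ?, ?)",
        ("run-42", 3.14, raw),
    )

    # A single query retrieves scalar results and the associated file content.
    name, blob = conn.execute(
        "SELECT run_name, raw_file FROM observations WHERE max_value > 1.0"
    ).fetchone()
    print(name, len(blob))  # run-42 4
    ```

    The point of the design, as the abstract notes, is avoiding a separate filesystem path for "file" data while keeping database-level querying over the scalar columns.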

  13. Shark Mark Recapture Database (MRDBS)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Shark Mark Recapture Database is a Cooperative Research Program database system used to keep multispecies mark-recapture information in a common format for...

  14. SmallSat Database

    Science.gov (United States)

    Petropulos, Dolores; Bittner, David; Murawski, Robert; Golden, Bert

    2015-01-01

    The SmallSat has an unrealized potential in both private industry and the federal government. Currently over 70 companies, 50 universities and 17 governmental agencies are involved in SmallSat research and development. In 1994, the U.S. Army Missile and Defense mapped the moon using smallSat imagery. Since then Smart Phones have introduced this imagery to the people of the world as diverse industries watched this trend. The deployment cost of smallSats is also greatly reduced compared to traditional satellites due to the fact that multiple units can be deployed in a single mission. Imaging payloads have become more sophisticated, smaller and lighter. In addition, the growth of small technology obtained from private industries has led to the more widespread use of smallSats. This includes greater revisit rates in imagery, significantly lower costs, the ability to update technology more frequently and the ability to decrease vulnerability to enemy attacks. The popularity of smallSats shows a changing mentality in this fast paced world of tomorrow. What impact has this created on the NASA communication networks now and in future years? In this project, we are developing the SmallSat Relational Database which can support a simulation of smallSats within the NASA SCaN Compatibility Environment for Networks and Integrated Communications (SCENIC) Modeling and Simulation Lab. The NASA Space Communications and Networks (SCaN) Program can use this modeling to project required network support needs in the next 10 to 15 years. The SmallSat Relational Database could model smallSats just as the other SCaN databases model the more traditional larger satellites, with a few exceptions. One is that the smallSat database is designed to be built-to-order: it holds various hardware configurations that can be used to model a smallSat. It will require significant effort to develop, as the research material can only be populated by hand to obtain the unique data.

  15. Building Bridges

    DEFF Research Database (Denmark)

    The report Building Bridges adresses the questions why, how and for whom academic audience research has public value, from the different points of view of the four working groups in the COST Action IS0906 Transforming Audiences, Transforming Societies – “New Media Genres, Media Literacy and Trust...... in the Media”, “Audience Interactivity and Participation”, “The Role of Media and ICT Use for Evolving Social Relationships” and “Audience Transformations and Social Integration”. Building Bridges is the result of an ongoing dialogue between the Action and non-academic stakeholders in the field of audience...

  16. Database Systems - Present and Future

    OpenAIRE

    Ion LUNGU; Manole VELICANU; Iuliana BOTHA

    2009-01-01

    Database systems nowadays have an increasingly important role in the knowledge-based society, in which computers have penetrated all fields of activity and the Internet tends to develop worldwide. In the current informatics context, the development of applications with databases is the work of specialists. Using databases, accessing a database from various applications, and related concepts have become accessible to all categories of IT users. This paper aims to summariz...

  17. Hydrogen Leak Detection Sensor Database

    Science.gov (United States)

    Baker, Barton D.

    2010-01-01

    This slide presentation reviews the characteristics of the Hydrogen Sensor database. The database is the result of NASA's continuing interest in and improvement of its ability to detect and assess gas leaks in space applications. The database specifics and a snapshot of an entry in the database are reviewed. Attempts were made to determine the applicability of each of the 65 sensors for ground and/or vehicle use.

  18. Techniques for multiple database integration

    OpenAIRE

    Whitaker, Barron D

    1997-01-01

    Approved for public release; distribution is unlimited There are several graphic client/server application development tools which can be used to easily develop powerful relational database applications. However, they do not provide a direct means of performing queries which require relational joins across multiple database boundaries. This thesis studies ways to access multiple databases. Specifically, it examines how a 'cross-database join' can be performed. A case study of techniques us...
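    The "cross-database join" problem described in this thesis can be sketched concretely. One well-known mechanism is SQLite's ATTACH DATABASE, which makes two physically separate databases visible to one connection so a relational join can span the boundary. The schemas below are invented for illustration and are not from the thesis:

    ```python
    import sqlite3

    # Two separate databases in one connection: the main database plus an
    # attached one. Attaching ':memory:' creates a second, independent
    # in-memory database. Table names and data are hypothetical.
    conn = sqlite3.connect(":memory:")
    conn.execute("ATTACH DATABASE ':memory:' AS payroll")

    conn.execute("CREATE TABLE staff (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("CREATE TABLE payroll.salary (staff_id INTEGER, amount REAL)")
    conn.executemany("INSERT INTO staff VALUES (?, ?)",
                     [(1, "Ada"), (2, "Grace")])
    conn.executemany("INSERT INTO payroll.salary VALUES (?, ?)",
                     [(1, 100.0), (2, 200.0)])

    # The join crosses the boundary between the two databases.
    rows = conn.execute("""
        SELECT s.name, p.amount
        FROM staff AS s
        JOIN payroll.salary AS p ON p.staff_id = s.id
        ORDER BY s.name
    """).fetchall()
    print(rows)  # [('Ada', 100.0), ('Grace', 200.0)]
    ```

    Tools without such a mechanism must instead fetch rows from each source separately and join them in application code, which is the harder case the thesis studies.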

  19. Conceptual considerations for CBM databases

    International Nuclear Information System (INIS)

    We consider a concept of databases for the CBM experiment. For this purpose, an analysis of the databases for large experiments at the LHC at CERN has been performed. Special features of various DBMS utilized in physical experiments, including relational and object-oriented DBMS as the most applicable ones for the tasks of these experiments, were analyzed. A set of databases for the CBM experiment, DBMS for their development, as well as use cases for the considered databases are suggested.

  20. Information technology of database integration

    OpenAIRE

    Черненко, Николай Владимирович

    2012-01-01

    The article considers a problem that has developed in the course of decentralized organization automation, and its possible solutions. To eliminate the defects, it was suggested to integrate the databases of heterogeneous information systems into a single organizational database. Existing methodologies, approaches and practices of database integration were studied. The article suggests a formal description of the information technology of database integration, which allow...

  1. DRAM BASED PARAMETER DATABASE OPTIMIZATION

    OpenAIRE

    Marcinkevicius, Tadas

    2012-01-01

    This thesis suggests an improved parameter database implementation for one of Ericsson products. The parameter database is used during the initialization of the system as well as during the later operation. The database size is constantly growing because the parameter database is intended to be used with different hardware configurations. When a new technology platform is released, multiple revisions with additional features and functionalities are later created, resulting in introduction of ...

  2. Content independence in multimedia databases

    OpenAIRE

    Vries, de, P.M.

    2001-01-01

    A database management system is a general-purpose software system that facilitates the processes of defining, constructing, and manipulating databases for various applications. This article investigates the role of data management in multimedia digital libraries, and its implications for the design of database management systems. The notions of content abstraction and content independence are introduced, which clearly expose the unique challenges (for database architecture) of applications in...

  3. Palaeo sea-level and ice-sheet databases: problems, strategies and perspectives

    Science.gov (United States)

    Düsterhus, A.; Rovere, A.; Carlson, A. E.; Barlow, N. L. M.; Bradwell, T.; Dutton, A.; Gehrels, R.; Hibbert, F. D.; Hijma, M. P.; Horton, B. P.; Klemann, V.; Kopp, R. E.; Sivan, D.; Tarasov, L.; Törnqvist, T. E.

    2015-06-01

    Sea-level and ice-sheet databases are essential tools for evaluating palaeoclimatic changes. However, database creation poses considerable challenges and problems related to the composition and needs of the scientific communities creating raw data, the compilation of the database, and finally using it. There are also issues with data standardisation and database infrastructure, which should make the database easy to understand and use with different layers of complexity. Other challenges are correctly assigning credit to original authors, and creation of databases that are centralised and maintained in long-term digital archives. Here, we build on the experience of the PALeo constraints on SEA level rise (PALSEA) community by outlining strategies for designing a self-consistent and standardised database of changes in sea level and ice sheets, identifying key points that need attention when undertaking the task of database creation.

  4. Palaeo sea-level and ice-sheet databases: problems, strategies and perspectives

    Directory of Open Access Journals (Sweden)

    A. Düsterhus

    2015-06-01

    Full Text Available Sea-level and ice-sheet databases are essential tools for evaluating palaeoclimatic changes. However, database creation poses considerable challenges and problems related to the composition and needs of the scientific communities creating raw data, the compilation of the database, and finally using it. There are also issues with data standardisation and database infrastructure, which should make the database easy to understand and use with different layers of complexity. Other challenges are correctly assigning credit to original authors, and creation of databases that are centralised and maintained in long-term digital archives. Here, we build on the experience of the PALeo constraints on SEA level rise (PALSEA) community by outlining strategies for designing a self-consistent and standardised database of changes in sea level and ice sheets, identifying key points that need attention when undertaking the task of database creation.

  5. Databases in Indian biology: The state of the art and prospects

    Digital Repository Service at National Institute of Oceanography (India)

    Chavan, V.S.; Chandramohan, D.

    the Indian biology and biotechnology databases and their relation to international databases on the subject. It highlights their limitations and throws more light on their potential for subject experts and information managers in the country to build...

  6. On Simplifying Features in OpenStreetMap database

    Science.gov (United States)

    Qian, Xinlin; Tao, Kunwang; Wang, Liang

    2015-04-01

    Currently the visualization of OpenStreetMap data uses a tile server that stores map tiles rendered from vector data in advance. However, tiled maps lack functionality such as data editing and customized styling. To enable this advanced functionality, client-side processing and rendering of geospatial data is needed. Considering the voluminous size of the OpenStreetMap data, simply sending the results of region queries against the OSM database to the client is prohibitive. To make the OSM data retrieved from the database suitable for the client to receive and render, it must be filtered and simplified at the server side to limit its volume. We propose a database extension for the OSM database that makes it possible to simplify geospatial objects such as ways and relations during data queries. Several auxiliary tables and PL/pgSQL functions are presented so that geospatial features can be simplified by omitting unimportant vertices. There are five components in the database extension: vertex weight computation by polyline and polygon simplification algorithms; vertex weight storage in auxiliary tables; filtering and selection of vertices using a specific threshold value during spatial queries; assembly of simplified geospatial objects from the filtered vertices; and vertex weight updating after geospatial objects are edited. The database extension is implemented on an OSM APIDB using PL/pgSQL. The database contains a subset of the OSM database: geographic data of the United Kingdom, comprising about 100 million vertices and occupying roughly 100 GB of disk. JOSM is used to retrieve the data from the database through a revised data-access API and render the geospatial objects in real time. When serving simplified data to the client, the database allows the user to set a bound on the simplification error or on the response time of each data query. Experimental results show the effectiveness and efficiency of the proposed methods in building a
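    The vertex-weighting step the abstract describes can be sketched in plain Python. The sketch below assigns each interior vertex the triangle area it spans with its neighbours (the effective-area measure used by Visvalingam-Whyatt simplification) and drops vertices below a threshold; the real extension stores such weights in auxiliary tables via PL/pgSQL, and the specific algorithm and data are assumptions here:

    ```python
    # Vertex-weight simplification sketch (Visvalingam-Whyatt-style weights,
    # assumed for illustration; the paper's exact algorithm is not specified).

    def vertex_weights(points):
        """Weight per vertex; endpoints get infinite weight so they survive."""
        n = len(points)
        w = [float("inf")] * n
        for i in range(1, n - 1):
            (x0, y0), (x1, y1), (x2, y2) = points[i - 1], points[i], points[i + 1]
            # Area of the triangle spanned by the vertex and its neighbours.
            w[i] = abs((x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)) / 2.0
        return w

    def simplify(points, threshold):
        """Keep only vertices whose weight reaches the threshold."""
        return [p for p, wi in zip(points, vertex_weights(points)) if wi >= threshold]

    line = [(0, 0), (1, 0.01), (2, 0), (3, 2), (4, 0)]
    print(simplify(line, 0.5))  # the nearly-collinear vertex (1, 0.01) is dropped
    ```

    Precomputing and storing these weights, as the extension does, lets a spatial query filter vertices with a simple threshold comparison instead of re-running the simplification algorithm per request.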

  7. Database Description - DMPD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available [ Credits ] BLAST Search Image Search Home About Archive Update History Contact us DMPD Database... Description General information of database Database name DMPD Alternative name Dynamic Macrophage Pathway CSML Databas...108-8639 Tel: +81-3-5449-5615 FAX: +83-3-5449-5442 E-mail: Database classification Metabolic and Signaling P...malia Taxonomy ID: 40674 Database description DMPD collects pathway models of transcriptional regulation and... signal transduction in CSML format for dynamic simulation based on the curation of descriptions about LPS a

  8. The Database Query Support Processor (QSP)

    Science.gov (United States)

    1993-01-01

    The number and diversity of databases available to users continues to increase dramatically. Currently, the trend is towards decentralized, client-server architectures that (on the surface) are less expensive to acquire, operate, and maintain than information architectures based on centralized, monolithic mainframes. The database query support processor (QSP) effort evaluates the performance of a network-level, heterogeneous database access capability. Air Force Materiel Command's Rome Laboratory has developed an approach, based on ANSI standard X3.138 - 1988, 'The Information Resource Dictionary System (IRDS)', to seamless access to heterogeneous databases based on extensions to data dictionary technology. To successfully query a decentralized information system, users must know what data are available from which source, or have the knowledge and system privileges necessary to find out this information. Privacy and security considerations prohibit free and open access to every information system in every network. Even in completely open systems, the time required to locate relevant data (in systems of any appreciable size) would be better spent analyzing the data, assuming the original question was not forgotten. Extensions to data dictionary technology have the potential to more fully automate the search and retrieval of relevant data in a decentralized environment. Substantial amounts of time and money could be saved by not having to teach users what data reside in which systems and how to access each of those systems. Information describing data and how to get it could be removed from the application and placed in a dedicated repository where it belongs. The result is simplified applications that are less brittle and less expensive to build and maintain. Software technology providing the required functionality is off the shelf. The key difficulty is in defining the metadata required to support the process. The database query support processor effort will provide
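    The data-dictionary idea above (move "where does this data live and how do I reach it" out of applications and into a dedicated repository) can be sketched minimally. All names, DSNs and the registry layout below are invented for illustration; they are not the IRDS metadata model:

    ```python
    # Hedged sketch of a data-dictionary lookup: applications resolve a
    # logical dataset name through a central registry instead of hard-coding
    # source systems. Entries here are hypothetical.
    REGISTRY = {
        "refrigerant_properties": {"system": "arti_db", "dsn": "oracle://host-a/arti"},
        "nuclear_levels":         {"system": "nndc",    "dsn": "sybase://host-b/ensdf"},
    }

    def locate(dataset):
        """Return (source system, connection string) for a logical dataset name."""
        try:
            entry = REGISTRY[dataset]
        except KeyError:
            raise LookupError(f"no source registered for {dataset!r}")
        return entry["system"], entry["dsn"]

    print(locate("nuclear_levels"))  # ('nndc', 'sybase://host-b/ensdf')
    ```

    With such a repository, adding or moving a data source means editing one registry entry rather than every application, which is the "less brittle" property the abstract claims.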

  9. Design and Performance of a Xenobiotic Metabolism Database Manager for Building Metabolic Pathway Databases

    Science.gov (United States)

    A major challenge for scientists and regulators is accounting for the metabolic activation of chemicals that may lead to increased toxicity. Reliable forecasting of chemical metabolism is a critical factor in estimating a chemical’s toxic potential. Research is underway to develo...

  10. California commercial building energy benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    Kinney, Satkartar; Piette, Mary Ann

    2003-07-01

    Building energy benchmarking is the comparison of whole-building energy use relative to a set of similar buildings. It provides a useful starting point for individual energy audits and for targeting buildings for energy-saving measures in multiple-site audits. Benchmarking is of interest and practical use to a number of groups. Energy service companies and performance contractors communicate energy savings potential with "typical" and "best-practice" benchmarks, while control companies and utilities can provide direct tracking of energy use and combine data from multiple buildings. Benchmarking is also useful in the design stage of a new building or retrofit to determine if a design is relatively efficient. Energy managers and building owners have an ongoing interest in comparing energy performance to others. Large corporations, schools, and government agencies with numerous facilities also use benchmarking methods to compare their buildings to each other. The primary goal of Task 2.1.1 Web-based Benchmarking was the development of a web-based benchmarking tool, dubbed Cal-Arch, for benchmarking energy use in California commercial buildings. While there were several other benchmarking tools available to California consumers prior to the development of Cal-Arch, there were none based solely on California data. Most available benchmarking information, including the Energy Star performance rating, was developed using DOE's Commercial Building Energy Consumption Survey (CBECS), which does not provide state-level data. Each database and tool has advantages as well as limitations, such as the number of buildings and the coverage by type, climate regions and end uses. There is considerable commercial interest in benchmarking because it provides an inexpensive method of screening buildings for tune-ups and retrofits. However, private companies who collect and manage consumption data are concerned that the
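The core comparison described above can be illustrated with a toy calculation (all EUI figures below are invented, not from Cal-Arch or CBECS): rank one building's energy use intensity against a peer group by percentile.

```python
def percentile_rank(value: float, peers: list[float]) -> float:
    """Percent of peer buildings using no more energy than `value`."""
    below = sum(1 for p in peers if p <= value)
    return 100.0 * below / len(peers)

# Hypothetical peer-group EUIs for offices, in kWh/m2/yr.
peer_euis = [85, 92, 110, 74, 130, 98, 105, 88, 120, 101]
my_eui = 95.0

rank = percentile_rank(my_eui, peer_euis)
print(f"EUI {my_eui} kWh/m2/yr is at the {rank:.0f}th percentile of peers")
# -> EUI 95.0 kWh/m2/yr is at the 40th percentile of peers
```

A real tool would stratify the peer group by building type, size, and climate region before ranking, which is exactly the coverage limitation the abstract notes for each database.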

  11. Sustainable Buildings

    DEFF Research Database (Denmark)

    Tommerup, Henrik M.; Elle, Morten

    The scientific community agrees that all countries must drastically and rapidly reduce their CO2 emissions and that energy-efficient houses play a decisive role in this. The general attitude at the workshop on Sustainable Buildings was that we face large and serious climate change problems that...

  12. Databases for Data Mining

    OpenAIRE

    LANGOF, LADO

    2015-01-01

    This work is about looking for synergies between data mining tools and database management systems (DBMS). Imagine a situation where we need to solve an analytical problem using data that are too large to be processed solely inside the main physical memory and at the same time too small to justify putting a data warehouse or distributed analytical system in place. The target area is therefore a single personal computer that is used to solve data mining problems. We are looking for tools that allow us to...

  13. EMU Lessons Learned Database

    Science.gov (United States)

    Matthews, Kevin M., Jr.; Crocker, Lori; Cupples, J. Scott

    2011-01-01

    As manned space exploration takes on the task of traveling beyond low Earth orbit, many problems arise that must be solved in order to make the journey possible. One major task is protecting humans from the harsh space environment. The current method of protecting astronauts during Extravehicular Activity (EVA) is through use of the specially designed Extravehicular Mobility Unit (EMU). As more rigorous EVA conditions need to be endured at new destinations, the suit will need to be tailored and improved in order to accommodate the astronaut. The objective behind the EMU Lessons Learned Database (LLD) is to create a tool which will assist in the development of next-generation EMUs, along with maintenance and improvement of the current EMU, by compiling data from Failure Investigation and Analysis Reports (FIARs), which contain information on past suit failures. FIARs use a system of codes that give more information on the aspects of the failure, but anyone unfamiliar with the EMU will be unable to decipher the information. A goal of the EMU LLD is not only to compile the information, but to present it in a user-friendly, organized, searchable database accessible to users at all levels of familiarity with the EMU, newcomers and veterans alike. The EMU LLD originally started as an Excel database, which allowed easy navigation and analysis of the data through pivot charts. Creating an entry requires access to the Problem Reporting And Corrective Action database (PRACA), which contains the original FIAR data for all hardware. FIAR data are then transferred to, defined, and formatted in the LLD. Work is being done to create a web-based version of the LLD in order to increase accessibility to all of Johnson Space Center (JSC), which includes converting entries from Excel to the HTML format.
FIARs related to the EMU have been completed in the Excel version, and now focus has shifted to expanding FIAR data in the LLD to include EVA tools and support hardware such as

  14. Usability in Scientific Databases

    Directory of Open Access Journals (Sweden)

    Ana-Maria Suduc

    2012-07-01

    Full Text Available Usability, most often defined as the ease of use and acceptability of a system, affects the users' performance and their job satisfaction when working with a machine. Therefore, usability is a very important aspect which must be considered in the process of system development. The paper presents several numerical data related to the history of scientific research on the usability of information systems, as reflected in the information provided by three important scientific databases, Science Direct, ACM Digital Library and IEEE Xplore Digital Library, for different queries related to this field.

  15. Nuclear database management systems

    International Nuclear Information System (INIS)

    The authors are developing software tools for accessing and visualizing nuclear data. MacNuclide was the first software application produced by their group. This application incorporates novel database management and visualization tools into an intuitive interface. The nuclide chart is used to access properties and to display results of searches. Selecting a nuclide in the chart displays a level scheme with tables of basic, radioactive decay, and other properties. All level schemes are interactive, allowing the user to modify the display, move between nuclides, and display entire daughter decay chains

  16. Social Capital Database

    DEFF Research Database (Denmark)

    Paldam, Martin; Svendsen, Gert Tinggaard

    2005-01-01

      This report has two purposes: The first purpose is to present our 4-page questionnaire, which measures social capital. It is close to the main definitions of social capital and contains the most successful measures from the literature. Also it is easy to apply as discussed. The second purpose ...... to present the social capital database we have collected for 21 countries using the questionnaire. We do this by comparing the level of social capital in the countries covered. That is, the report compares the marginals from the 21 surveys....

  17. Dansk kolorektal Cancer Database

    DEFF Research Database (Denmark)

    Harling, Henrik; Nickelsen, Thomas

    2005-01-01

    The Danish Colorectal Cancer Database was established in 1994 with the purpose of monitoring whether diagnostic and surgical principles specified in the evidence-based national guidelines of good clinical practice were followed. Twelve clinical indicators have been listed by the Danish Colorectal...... Cancer Group, and the performance of each hospital surgical department with respect to these indicators is reported annually. In addition, the register contains a large collection of data that provide valuable information on the influence of comorbidity and lifestyle factors on disease outcome...

  18. Caching in Multidimensional Databases

    OpenAIRE

    Szépkúti, István

    2011-01-01

    One utilisation of multidimensional databases is the field of On-line Analytical Processing (OLAP). The applications in this area are designed to make the analysis of shared multidimensional information fast [9]. On one hand, speed can be achieved by specially devised data structures and algorithms. On the other hand, the analytical process is cyclic. In other words, the user of the OLAP application runs his or her queries one after the other. The output of the last query may be there (at lea...

  19. ATLAS Nightly Build System Upgrade

    CERN Document Server

    Dimitrov, G; The ATLAS collaboration; Simmons, B; Undrus, A

    2014-01-01

    The ATLAS Nightly Build System is a facility for automatic production of software releases. Being the major component of ATLAS software infrastructure, it supports more than 50 multi-platform branches of nightly releases and provides ample opportunities for testing new packages, for verifying patches to existing software, and for migrating to new platforms and compilers. The Nightly System testing framework runs several hundred integration tests of different granularity and purpose. The nightly releases are distributed and validated, and some are transformed into stable releases used for data processing worldwide. The first LHC long shutdown (2013-2015) activities will elicit increased load on the Nightly System as additional releases and builds are needed to exploit new programming techniques, languages, and profiling tools. This paper describes the plan of the ATLAS Nightly Build System Long Shutdown upgrade. It brings modern database and web technologies into the Nightly System, improves monitoring of nigh...

  20. ATLAS Nightly Build System Upgrade

    CERN Document Server

    Dimitrov, G; The ATLAS collaboration; Simmons, B; Undrus, A

    2013-01-01

    The ATLAS Nightly Build System is a facility for automatic production of software releases. Being the major component of ATLAS software infrastructure, it supports more than 50 multi-platform branches of nightly releases and provides ample opportunities for testing new packages, for verifying patches to existing software, and for migrating to new platforms and compilers. The Nightly System testing framework runs several hundred integration tests of different granularity and purpose. The nightly releases are distributed and validated, and some are transformed into stable releases used for data processing worldwide. The first LHC long shutdown (2013-2015) activities will elicit increased load on the Nightly System as additional releases and builds are needed to exploit new programming techniques, languages, and profiling tools. This paper describes the plan of the ATLAS Nightly Build System Long Shutdown upgrade. It brings modern database and web technologies into the Nightly System, improves monitoring of nigh...

  1. Secrets of the Oracle Database

    CERN Document Server

    Debes, Norbert

    2009-01-01

    Secrets of the Oracle Database is the definitive guide to undocumented and partially documented features of the Oracle database server. Covering useful but little-known features from Oracle9i Database through Oracle Database 11g, this book will improve your efficiency as an Oracle database administrator or developer. Norbert Debes shines the light of day on features that help you master more difficult administrative, tuning, and troubleshooting tasks than you ever thought possible. Finally, in one place, you have at your fingertips knowledge that previously had to be acquired through years of

  2. Federated Spatial Databases and Interoperability

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    It is a period of information explosion. Especially for spatial information science, information can be acquired in many ways, such as from man-made satellites, aeroplanes, laser scanning, digital photogrammetry and so on. Spatial data sources are usually distributed and heterogeneous. A federated database is the best resolution for the sharing and interoperation of spatial databases. In this paper, the concepts of federated database and interoperability are introduced. Three heterogeneous kinds of spatial data, vector, image and DEM, are used to create an integrated database. A data model of federated spatial databases is given

  3. Genomic Databases for Crop Improvement

    Directory of Open Access Journals (Sweden)

    David Edwards

    2012-03-01

    Full Text Available Genomics is playing an increasing role in plant breeding and this is accelerating with the rapid advances in genome technology. Translating the vast abundance of data being produced by genome technologies requires the development of custom bioinformatics tools and advanced databases. These range from large generic databases which hold specific data types for a broad range of species, to carefully integrated and curated databases which act as a resource for the improvement of specific crops. In this review, we outline some of the features of plant genome databases, identify specific resources for the improvement of individual crops and comment on the potential future direction of crop genome databases.

  4. Glass Stronger than Steel

    Science.gov (United States)

    Yarris, Lynn

    2011-03-28

    A new type of damage-tolerant metallic glass, demonstrating a strength and toughness beyond that of steel or any other known material, has been developed and tested by a collaboration of researchers from Berkeley Lab and Caltech.

  5. Working Stronger Together

    Science.gov (United States)

    DeLong, Douglas J.

    2008-01-01

    Having professional learning communities in place enables teachers and staff members to provide targeted assistance to students who need it. This article discusses Chardon (Ohio) High School's professional learning community program which provides time and support for students and teachers. Through the program, the teachers meet every week during…

  6. Which global stock indices trigger stronger contagion risk in the Vietnamese stock market? Evidence using a bivariate analysis

    Directory of Open Access Journals (Sweden)

    Wang Kuan-Min

    2013-01-01

    Full Text Available This paper extends recent investigations into risk contagion effects on stock markets to the Vietnamese stock market. Daily data spanning October 9, 2006 to May 3, 2012 are sourced to empirically validate the contagion effects between stock markets in Vietnam, and China, Japan, Singapore, and the US. To facilitate the validation of contagion effects with market-related coefficients, this paper constructs a bivariate EGARCH model of dynamic conditional correlation coefficients. Using the correlation contagion test and Dungey et al.’s (2005) contagion test, we find contagion effects between the Vietnamese and four other stock markets, namely Japan, Singapore, China, and the US. Second, we show that the Japanese stock market causes stronger contagion risk in the Vietnamese stock market compared to the stock markets of China, Singapore, and the US. Finally, we show that the Chinese and US stock markets cause weaker contagion effects in the Vietnamese stock market because of stronger interdependence effects between the former two markets.

  7. Acceptance of evidence-supported hypotheses generates a stronger signal from an underlying functionally-connected network.

    Science.gov (United States)

    Whitman, J C; Takane, Y; Cheung, T P L; Moiseev, A; Ribary, U; Ward, L M; Woodward, T S

    2016-02-15

    Choosing one's preferred hypothesis requires multiple brain regions to work in concert as a functionally connected network. We predicted that a stronger network signal would underlie cognitive coherence between a hypothesis and the available evidence. In order to identify such functionally connected networks in magnetoencephalography (MEG) data, we first localized the generators of changes in oscillatory power within three frequency bands, namely alpha (7-13Hz), beta (18-24Hz), and theta (3-7Hz), with a spatial resolution of 5mm and temporal resolution of 50ms. We then used principal component analysis (PCA) to identify functionally connected networks reflecting co-varying post-stimulus changes in power. As predicted, PCA revealed a functionally connected network with a stronger signal when the evidence supported accepting the hypothesis being judged. This difference was driven by beta-band power decreases in the left dorsolateral prefrontal cortex (DLPFC), ventromedial prefrontal cortex (VMPFC), posterior cingulate cortex (PCC), and midline occipital cortex. PMID:26702776

  8. An improved piezoelectric harvester available in scavenging-energy from the operating environment with either weaker or stronger vibration levels

    Institute of Scientific and Technical Information of China (English)

    XUE Huan; HU HongPing; HU YuanTai; CHEN XueDong

    2009-01-01

    An improved harvester available in scavenging energy from the operating environment with either weaker or stronger vibration levels is studied. To ensure the optimal harvester performance, a Cuk dc-dc converter is employed into the modulating circuit. This paper reports how this harvester scavenges maximal energy from varying-level vibrations and store energy into an electrochemical battery. Dependence of the duty cycle upon the external vibration level is calculated, and it is found that: 1) for weaker vibrations, the charging current into the battery is smaller than the allowable current, and thus all the optimal output power of the harvesting structure can be absorbed by the battery. In this case, the duty cycle should be fixed at 1.86%; 2) for stronger external forcing, the allowable charging current of the battery is smaller than the optimal harvested current. This indicates that just a portion of the scavenged energy can be accepted by the battery. Thus, the duty cycle should be decreased gradually with the increase of the vibration level. Finally the energy transmission process and the roles of each electronic element are analyzed. It is shown that a Cuk converter can greatly raise the efficiency of such a harvester, particularly when subjected to a weaker ambient vibration.
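The control rule summarized in this abstract can be sketched in a few lines. This is a toy illustration, not the paper's model: the 1.86% duty cycle is taken from the abstract, but the allowable-current value and the simple proportional throttling law are invented stand-ins for the calculated dependence on vibration level.

```python
OPTIMAL_DUTY = 1.86   # percent; the abstract's weak-vibration optimum
I_ALLOWED = 0.050     # amperes; hypothetical allowable battery charging current

def duty_cycle(i_optimal: float) -> float:
    """Converter duty cycle (%) given the optimal harvested current (A)."""
    if i_optimal <= I_ALLOWED:
        # Weak vibration: the battery can absorb all harvested power,
        # so hold the duty cycle at its fixed optimum.
        return OPTIMAL_DUTY
    # Strong vibration: throttle so the charging current stays at the
    # battery's limit (proportional scaling is an assumption here).
    return OPTIMAL_DUTY * I_ALLOWED / i_optimal

print(duty_cycle(0.02))  # weak vibration: stays at 1.86
print(duty_cycle(0.10))  # strong vibration: duty cycle reduced
```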

  9. KIR2DL2/2DL3-E35 alleles are functionally stronger than -Q35 alleles

    Science.gov (United States)

    Bari, Rafijul; Thapa, Rajoo; Bao, Ju; Li, Ying; Zheng, Jie; Leung, Wing

    2016-03-01

    KIR2DL2 and KIR2DL3 segregate as alleles of a single locus in the centromeric motif of the killer cell immunoglobulin-like receptor (KIR) gene family. Although KIR2DL2/L3 polymorphism is known to be associated with many human diseases and is an important factor for donor selection in allogeneic hematopoietic stem cell transplantation, the molecular determinant of functional diversity among various alleles is unclear. In this study we found that KIR2DL2/L3 with glutamic acid at position 35 (E35) are functionally stronger than those with glutamine at the same position (Q35). Cytotoxicity assay showed that NK cells from HLA-C1 positive donors with KIR2DL2/L3-E35 could kill more target cells lacking their ligands than NK cells with the weaker -Q35 alleles, indicating better licensing of KIR2DL2/L3+ NK cells with the stronger alleles. Molecular modeling analysis reveals that the glutamic acid, which is negatively charged, interacts with positively charged histidine located at position 55, thereby stabilizing KIR2DL2/L3 dimer and reducing entropy loss when KIR2DL2/3 binds to HLA-C ligand. The results of this study will be important for future studies of KIR2DL2/L3-associated diseases as well as for donor selection in allogeneic stem cell transplantation.

  10. An improved piezoelectric harvester available in scavenging-energy from the operating environment with either weaker or stronger vibration levels

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    An improved harvester available in scavenging energy from the operating environment with either weaker or stronger vibration levels is studied. To ensure the optimal harvester performance, a Cuk dc-dc converter is employed into the modulating circuit. This paper reports how this harvester scavenges maximal energy from varying-level vibrations and store energy into an electrochemical battery. Dependence of the duty cycle upon the external vibration level is calculated, and it is found that: 1) for weaker vibrations, the charging current into the battery is smaller than the allowable current, and thus all the optimal output power of the harvesting structure can be absorbed by the battery. In this case, the duty cycle should be fixed at 1.86%; 2) for stronger external forcing, the allowable charging current of the battery is smaller than the optimal harvested current. This indicates that just a portion of the scavenged energy can be accepted by the battery. Thus, the duty cycle should be decreased gradually with the increase of the vibration level. Finally the energy transmission process and the roles of each electronic element are analyzed. It is shown that a Cuk converter can greatly raise the efficiency of such a harvester, particularly when subjected to a weaker ambient vibration.

  11. Indexing, learning and content-based retrieval for special purpose image databases

    OpenAIRE

    Huiskes, Mark; Pauwels, Eric

    2004-01-01

    This chapter deals with content-based image retrieval in special purpose image databases. As image data is amassed ever more effortlessly, building efficient systems for searching and browsing of image databases becomes increasingly urgent. We provide an overview of the current state of the art by taking a tour along the entire

  12. An approach to the Optimization of menu-based Natural Language Interfaces to Databases

    OpenAIRE

    Fiaz Majeed; Shoaib, M.; Fasiha Ashraf

    2011-01-01

    Natural language interfaces to databases (NLIDB) allow the user to state a query to the database in natural language. The NLIDB then interprets the natural language query into Structured Query Language (SQL) to perform an action on the target database. A menu-based NLIDB provides a restricted set of elements on screen that are utilized to build the natural language query. The latest menu-based NLIDBs use WYSIWYM interfaces that focus on automatic formation of popup menus relevant to the typed word in the editor. The auto...
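The translation step described above can be illustrated with a minimal sketch (the schema, table, and column names are invented): because a menu-based interface only ever offers valid elements, the user's selections can be mapped mechanically to an SQL string.

```python
# Hypothetical menu schema: table -> columns the menu offers.
MENU_SCHEMA = {
    "employees": ["name", "salary", "dept"],
}

def build_sql(table, columns, where=None):
    """Translate menu selections into an SQL query string."""
    allowed = MENU_SCHEMA[table]
    bad = [c for c in columns if c not in allowed]
    if bad:
        # A real menu-based interface would never reach this branch,
        # because invalid columns are simply not presented.
        raise ValueError(f"columns not offered by the menu: {bad}")
    sql = f"SELECT {', '.join(columns)} FROM {table}"
    if where:
        sql += f" WHERE {where}"
    return sql

print(build_sql("employees", ["name", "salary"], "dept = 'R&D'"))
# -> SELECT name, salary FROM employees WHERE dept = 'R&D'
```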

  13. IPD: the Immuno Polymorphism Database.

    Science.gov (United States)

    Robinson, James; Marsh, Steven G E

    2007-01-01

    The Immuno Polymorphism Database (IPD) (http://www.ebi.ac.uk/ipd/) is a set of specialist databases related to the study of polymorphic genes in the immune system. IPD currently consists of four databases: IPD-KIR, which contains the allelic sequences of killer cell immunoglobulin-like receptors (KIRs); IPD-MHC, a database of sequences of the major histocompatibility complex (MHC) of different species; IPD-HPA, alloantigens expressed only on platelets; and IPD-ESTAB, which provides access to the European Searchable Tumour Cell Line Database, a cell bank of immunologically characterized melanoma cell lines. The IPD project works with specialist groups or nomenclature committees who provide and curate individual sections before they are submitted to IPD for online publication. The IPD project stores all the data in a set of related databases. Those sections with similar data, such as IPD-KIR and IPD-MHC, share the same database structure. PMID:18449992

  14. Moving Observer Support for Databases

    DEFF Research Database (Denmark)

    Bukauskas, Linas

    Interactive visual data explorations impose rigid requirements on database and visualization systems. Systems that visualize huge amounts of data tend to request large amounts of memory resources and heavily use the CPU to process and visualize data. Current systems employ a loosely coupled...... database and visualization systems. The thesis describes other techniques that extend the functionality of an observer aware database to support the extraction of the N most visible objects. This functionality is particularly useful if the number of newly visible objects is still too large. The thesis...... architecture to exchange data between database and visualization. Thus, the interaction of the visualizer and the database is kept to the minimum, which most often leads to superfluous data being passed from database to visualizer. This Ph.D. thesis presents a novel tight coupling of database and visualizer...

  15. SHORT SURVEY ON GRAPHICAL DATABASE

    Directory of Open Access Journals (Sweden)

    Harsha R Vyavahare

    2015-08-01

    Full Text Available This paper explores the features of graph databases and data models. Work with graph models and datasets has grown in popularity over recent decades. A graph database has a number of advantages over the relational database. The paper takes a short review of graph and hypergraph concepts from mathematics so that we can understand the existing difficulties in the implementation of the graph model. The past few decades saw hundreds of research contributions in the DBS field with graph databases. However, research on general-purpose DBS management and mining that suits a variety of applications is still very much active. The review is based on the application of graph model techniques in the database within the framework of graph-based approaches, with the aim of implementation of different graphical and tabular databases.
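The advantage claimed for graph models can be made concrete with a small sketch (toy data, invented node names): a reachability query that would require recursive joins over a relational edge table is a direct traversal over an adjacency list.

```python
# Tiny adjacency-list graph: node -> list of neighbours.
FOLLOWS = {
    "ann": ["bob", "cara"],
    "bob": ["dave"],
    "cara": ["dave", "eve"],
    "dave": [],
    "eve": ["ann"],
}

def reachable(start: str) -> set:
    """All nodes reachable from `start`, via breadth-first traversal."""
    seen, queue = {start}, [start]
    while queue:
        node = queue.pop(0)
        for nxt in FOLLOWS.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen - {start}

print(sorted(reachable("bob")))  # -> ['dave']
print(sorted(reachable("ann")))  # -> ['bob', 'cara', 'dave', 'eve']
```

In SQL over an `edges(src, dst)` table, the same query needs a recursive common table expression; in a graph store the traversal is the native access path.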

  16. Designing a Multi-Petabyte Database for LSST

    Energy Technology Data Exchange (ETDEWEB)

    Becla, Jacek; Hanushevsky, Andrew; Nikolaev, Sergei; Abdulla, Ghaleb; Szalay, Alex; Nieto-Santisteban, Maria; Thakar, Ani; Gray, Jim; /SLAC

    2007-01-10

    The 3.2 giga-pixel LSST camera will produce approximately half a petabyte of archive images every month. These data need to be reduced in under a minute to produce real-time transient alerts, and then added to the cumulative catalog for further analysis. The catalog is expected to grow by about three hundred terabytes per year. The data volume, the real-time transient alerting requirements of the LSST, and its spatio-temporal aspects require innovative techniques to build an efficient data access system at reasonable cost. As currently envisioned, the system will rely on a database for catalogs and metadata. Several database systems are being evaluated to understand how they perform at these data rates, data volumes, and access patterns. This paper describes the LSST requirements, the challenges they impose, the data access philosophy, results to date from evaluating available database technologies against LSST requirements, and the proposed database architecture to meet the data challenges.
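The volumes quoted above can be checked with back-of-envelope arithmetic (decimal units assumed, 1 PB = 1000 TB):

```python
TB_PER_PB = 1000

image_tb_per_month = 0.5 * TB_PER_PB            # half a petabyte of images/month
image_tb_per_year = image_tb_per_month * 12     # -> 6000 TB (~6 PB) per year
catalog_tb_per_year = 300                       # stated catalog growth

print(image_tb_per_year)                        # 6000.0
print(image_tb_per_year / catalog_tb_per_year)  # 20.0: images outweigh catalog ~20x
```

The ratio illustrates why the archive and the catalog call for different storage strategies: the image store is bulk sequential data, while the catalog must support indexed spatio-temporal queries.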

  17. A Components Database Design and Implementation for Accelerators and Detectors.

    Science.gov (United States)

    Chan, A.; Meyer, S.

    1997-05-01

    Many accelerator and detector systems being fabricated for the PEP-II Accelerator and BaBar Detector needed configuration control and calibration measurements tracked for their components. Instead of building a database for each distinct system, a Components Database was designed and implemented that can encompass any type of component and any type of measurement. In this paper we describe this database design which is especially suited for the engineering and fabrication processes of the accelerator and detector environments where there are thousands of unique component types. We give examples of information stored in the Components Database, which includes accelerator configuration, calibration measurements, fabrication history, design specifications, inventory, etc. The World Wide Web interface is used to access the data, and templates are available for international collaborations to collect data off-line.

  18. Caching in Multidimensional Databases

    CERN Document Server

    Szépkúti, István

    2011-01-01

    One utilisation of multidimensional databases is the field of On-line Analytical Processing (OLAP). The applications in this area are designed to make the analysis of shared multidimensional information fast [9]. On one hand, speed can be achieved by specially devised data structures and algorithms. On the other hand, the analytical process is cyclic. In other words, the user of the OLAP application runs his or her queries one after the other. The output of the last query may be there (at least partly) in one of the previous results. Therefore caching also plays an important role in the operation of these systems. However, caching itself may not be enough to ensure acceptable performance. Size does matter: The more memory is available, the more we gain by loading and keeping information in there. Oftentimes, the cache size is fixed. This limits the performance of the multidimensional database, as well, unless we compress the data in order to move a greater proportion of them into the memory. Caching combined ...

  19. Commercial building energy use in six cities in Southern China

    International Nuclear Information System (INIS)

    With China’s continuing economic growth, the percentage of government offices and large commercial buildings has increased tremendously; thus, the impact of their energy usage has grown drastically. In this survey, a database with more than 400 buildings was created and analyzed. We researched energy consumption by region, building type, building size and vintage, and we determined the total energy use and performed end use breakdowns of typical buildings in six cities in southern China. The statistical analysis shows that, on average, the annual building electricity use ranged from 50 to 100 kW h/m2 for office buildings, 120 to 250 kW h/m2 for shopping malls and hotels, and below 40 kW h/m2 for education facilities. Building size has no direct correlation with building energy intensity. Although modern commercial buildings built in the 1990s and 2000s did not use more energy on average than buildings built previously, the most electricity-intensive modern buildings used much more energy than those built prior to 1990. Commercial buildings in China used less energy than buildings in equivalent weather locations in the US and about the same amount of energy as buildings in India. However, commercial buildings in China provide comparatively less thermal comfort than buildings in comparable US climates. - Highlights: ► The worst modern buildings use more energy than the worst old buildings. ► Government office buildings did not use more energy than private office buildings. ► Commercial buildings in China use less energy than buildings in the US. ► Modern commercial buildings don't use more energy than old buildings.

  20. Design Buildings Optimally: A Lifecycle Assessment Approach

    KAUST Repository

    Hosny, Ossama

    2013-01-01

    This paper structures a generic framework to support optimum design for multi-buildings in a desert environment. The framework targets an environmentally friendly design with minimum lifecycle cost, using Genetic Algorithms (GAs). The GAs function through a set of success measures which evaluate the design, formulate a proper objective, and reflect possible tangible/intangible constraints. The framework optimizes the design and categorizes it under a certain environmental category at minimum Life Cycle Cost (LCC). It consists of three main modules: (1) a custom Building Information Model (BIM) for desert buildings with a compatibility checker as a central interactive database; (2) a system evaluator module to evaluate the proposed success measures for the design; and (3) a GA optimization module to ensure optimum design. The framework functions through three levels: the building component, integrated building, and multi-building levels. At the component level the design team should be able to select components in a designed sequence to ensure compatibility among various components, while at the building level the team can relatively locate and orient each individual building. Finally, at the multi-building (compound) level the whole design can be evaluated using success measures of natural light, site capacity, shading impact on natural lighting, thermal change, visual access and energy saving. The framework, through genetic algorithms, optimizes the design by determining proper types of building components and relative building locations and orientations which ensure categorizing the design under a specific category or meeting certain preferences at minimum lifecycle cost.
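The GA loop described in this abstract can be sketched in miniature. Everything below is hypothetical: the component options, costs, and scores are invented, and a single "energy score" stands in for the paper's success measures. It shows only the shape of the approach: a chromosome selects one option per building component, and fitness is lifecycle cost plus a penalty whenever the success measure misses its target.

```python
import random

random.seed(1)

# Hypothetical catalogue: component -> [(lifecycle cost, energy score), ...]
OPTIONS = {
    "glazing":    [(100, 1), (160, 3), (220, 5)],
    "insulation": [(80, 2), (120, 4)],
    "shading":    [(40, 1), (90, 3)],
}
COMPONENTS = list(OPTIONS)
TARGET_SCORE = 9  # minimum combined energy score for the design to qualify

def fitness(design):
    """Lifecycle cost plus a large penalty if the success measure fails."""
    cost = sum(OPTIONS[c][i][0] for c, i in zip(COMPONENTS, design))
    score = sum(OPTIONS[c][i][1] for c, i in zip(COMPONENTS, design))
    return cost + 1000 * max(0, TARGET_SCORE - score)

def evolve(generations=60, pop_size=20):
    """Elitist GA with one-point crossover and random mutation."""
    pop = [[random.randrange(len(OPTIONS[c])) for c in COMPONENTS]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]          # keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(COMPONENTS))   # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:                    # occasional mutation
                g = random.randrange(len(COMPONENTS))
                child[g] = random.randrange(len(OPTIONS[COMPONENTS[g]]))
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

best = evolve()
print(dict(zip(COMPONENTS, best)), fitness(best))
```

The paper's multi-level framework would extend the chromosome with building locations and orientations and replace the single score with the full set of success measures, but the selection-crossover-mutation loop is the same.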

  1. Administrative building

    OpenAIRE

    Vokatá, Kateřina

    2015-01-01

    The task of my master's thesis was to design and check the load-bearing steel structure of a multi-storey office building with a garage in Brno. The building is composed of a five-storey office section and a two-storey garage. The ground dimensions of the administrative part are 38.8 m x 35 m, with pillar spacings of 7 m, 6 m and 6.4 m. The structural height of each floor is 3.5 m. The garage is designed with dimensions of 36 m x 24.8 m and a structural floor height of 3.5 m. Pillar spacing is 5.6 m, 6.4 m and 7,...

  2. Building economics

    DEFF Research Database (Denmark)

    Pedersen, D.O.(red.)

    The publication is in English. It comprises all contributions to the fourth international symposium on building economics, organized by SBI for the international building research council CIB. The five volumes cover: Methods of Economic Evaluation, Design Optimization, Resource Utilization, The Building Market, and Economics and Technological Forecasting in Construction. An introductory volume presents status reports for the five research areas, and the final volume summarizes the discussion at the symposium.

  3. Model Building

    OpenAIRE

    Frampton, Paul H.

    1997-01-01

    In this talk I begin with some general discussion of model building in particle theory, emphasizing the need for motivation and testability. Three illustrative examples are then described. The first is the Left-Right model which provides an explanation for the chirality of quarks and leptons. The second is the 331-model which offers a first step to understanding the three generations of quarks and leptons. Third and last is the SU(15) model which can accommodate the light leptoquarks possibly...

  4. Building Inclusion

    OpenAIRE

    Jeanet Kullberg; Isik Kulu-Glasgow

    2009-01-01

    The social inclusion of immigrants and ethnic minorities is a central issue in many European countries. Governments face challenges in ensuring housing for immigrants, delivering public services, promoting neighbourhood coexistence and addressing residential segregation. The Building Inclusion project, sponsored by the European Commission, enables EU member states to exchange experiences relating to the social inclusion of vulnerable groups. Its special focus is on housing access and housing ...

  5. National Geochronological Database

    Science.gov (United States)

    Revised by Sloan, Jan; Henry, Christopher D.; Hopkins, Melanie; Ludington, Steve; Original database by Zartman, Robert E.; Bush, Charles A.; Abston, Carl

    2003-01-01

    The National Geochronological Data Base (NGDB) was established by the United States Geological Survey (USGS) to collect and organize published isotopic (also known as radiometric) ages of rocks in the United States. The NGDB (originally known as the Radioactive Age Data Base, RADB) was started in 1974. A committee appointed by the Director of the USGS was given the mission to investigate the feasibility of compiling the published radiometric ages for the United States into a computerized data bank for ready access by the user community. A successful pilot program, which was conducted in 1975 and 1976 for the State of Wyoming, led to a decision to proceed with the compilation of the entire United States. For each dated rock sample reported in published literature, a record containing information on sample location, rock description, analytical data, age, interpretation, and literature citation was constructed and included in the NGDB. The NGDB was originally constructed and maintained on a mainframe computer, and later converted to a Helix Express relational database maintained on an Apple Macintosh desktop computer. The NGDB and a program to search the data files were published and distributed on Compact Disc-Read Only Memory (CD-ROM) in standard ISO 9660 format as USGS Digital Data Series DDS-14 (Zartman and others, 1995). As of May 1994, the NGDB consisted of more than 18,000 records containing over 30,000 individual ages, which is believed to represent approximately one-half the number of ages published for the United States through 1991. Because the organizational unit responsible for maintaining the database was abolished in 1996, and because we wanted to provide the data in more usable formats, we have reformatted the data, checked and edited the information in some records, and provided this online version of the NGDB. This report describes the changes made to the data and formats, and provides instructions for the use of the database in geographic
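    The record structure described above (sample location, rock description, analytical data, age, citation) lends itself to a straightforward query-by-field search of the kind the CD-ROM program offered. A minimal sketch with invented sample records and assumed field names, not the NGDB's actual schema:

```python
from dataclasses import dataclass

@dataclass
class AgeRecord:
    # Fields follow the abstract's description of an NGDB record; names are assumed.
    location: str
    rock_description: str
    method: str
    age_ma: float
    citation: str

records = [
    AgeRecord("Wyoming", "granite", "K-Ar", 2600.0, "Smith 1975"),
    AgeRecord("Wyoming", "basalt", "Ar-Ar", 45.2, "Jones 1988"),
    AgeRecord("Nevada", "rhyolite", "K-Ar", 16.1, "Doe 1990"),
]

def search(recs, location=None, min_age=None, max_age=None):
    """Simple filter mimicking query-by-field search over dated-sample records."""
    out = recs
    if location is not None:
        out = [r for r in out if r.location == location]
    if min_age is not None:
        out = [r for r in out if r.age_ma >= min_age]
    if max_age is not None:
        out = [r for r in out if r.age_ma <= max_age]
    return out

print([r.citation for r in search(records, location="Wyoming", min_age=100)])
```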

  6. Measuring Database Objects Relatedness in Peer-to-Peer Databases

    OpenAIRE

    M. Basel Almourad

    2013-01-01

    Peer-to-peer database management systems have become an important topic in the last few years. They build on p2p technology to exploit the power of available distributed database management system technologies. Identifying relationships between different peer schema objects is one of the main activities, so that semantically relevant peer database management systems can become acquainted and move closer together in the overlay. In this paper we present our approach, which measures the similarity of peer schema obj...

  7. Ageing Management Program Database

    International Nuclear Information System (INIS)

    The aspects of plant ageing management (AM) have gained increasing attention over the last ten years. Numerous technical studies have been performed to study the impact of ageing mechanisms on the safe and reliable operation of nuclear power plants. National research activities have been initiated or are in progress to provide the technical basis for decision-making processes. The long-term operation of nuclear power plants is influenced by economic considerations; the socio-economic environment, including public acceptance; developments in research and the regulatory framework; the availability of the technical infrastructure to maintain and service the systems, structures and components; and the availability of qualified personnel. Besides national activities there are a number of international activities, in particular under the umbrella of the IAEA, the OECD and the EU. The paper discusses the process, procedure and database developed for the Slovenian Nuclear Safety Administration (SNSA) surveillance of the ageing process of Nuclear Power Plant Krško. (author)

  8. The CHIANTI atomic database

    Science.gov (United States)

    Young, P. R.; Dere, K. P.; Landi, E.; Del Zanna, G.; Mason, H. E.

    2016-04-01

    The freely available CHIANTI atomic database was first released in 1996 and has had a huge impact on the analysis and modeling of emissions from astrophysical plasmas. It contains data and software for modeling optically thin atom and positive ion emission from low density (≲1013 cm-3) plasmas from x-ray to infrared wavelengths. A key feature is that the data are assessed and regularly updated, with version 8 released in 2015. Atomic data for modeling the emissivities of 246 ions and neutrals are contained in CHIANTI, together with data for deriving the ionization fractions of all elements up to zinc. The different types of atomic data are summarized here and their formats discussed. Statistics on the impact of CHIANTI to the astrophysical community are given and examples of the diverse range of applications are presented.

  9. Database Programming Languages

    DEFF Research Database (Denmark)

    This volume contains the proceedings of the 11th International Symposium on Database Programming Languages (DBPL 2007), held in Vienna, Austria, on September 23-24, 2007. DBPL 2007 was one of 15 meetings co-located with VLDB (the International Conference on Very Large Data Bases). DBPL continues to present the very best work at the intersection of database and programming language research. The proceedings include a paper based on the invited talk by Wenfei Fan and the 16 contributed papers that were selected by at least three members of the program committee. In addition, the program committee sought ... for their assistance and sound counsel, and the organizers of VLDB 2007 for taking care of the local organization of DBPL....

  10. Database design for a kindergarten Pastelka

    OpenAIRE

    Grombíř, Tomáš

    2010-01-01

    This bachelor thesis deals with analysis, creation of database for a kindergarten and installation of the designed database into the database system MySQL. Functionality of the proposed database was verified through an application written in PHP.

  11. The case for stronger regulation of private practitioners to control tuberculosis in low- and middle-income countries.

    Science.gov (United States)

    Mahendradhata, Yodi

    2015-01-01

    Tuberculosis case management practices of private practitioners in low- and middle-income countries commonly fail to comply with treatment guidelines, increasing the risk of drug resistance. National tuberculosis control programs have long been encouraged to collaborate with private providers to improve compliance, but there is as yet no example of a sustained, large-scale collaboration with private practitioners in these settings. Regulation has long been recognized as a potential response to poor-quality care; however, international actors have shown little interest in investing in stronger regulation of private providers in these countries because of limited evidence and many implementation challenges. Regulatory strategies have now evolved beyond the costly conventional form of command and control. These new strategies need to be tested as responses to the challenge of poor-quality care among private providers. Multilateral and bilateral funding agencies committed to tuberculosis control need to invest in strengthening governments' capacity to effectively regulate private providers. PMID:26499482

  12. Emotional reactions to standardized stimuli in women with borderline personality disorder: stronger negative affect, but no differences in reactivity.

    Science.gov (United States)

    Jacob, Gitta A; Hellstern, Kathrin; Ower, Nicole; Pillmann, Mona; Scheel, Corinna N; Rüsch, Nicolas; Lieb, Klaus

    2009-11-01

    Emotional dysregulation is hypothesized to be a core feature of borderline personality disorder (BPD). In this study, we investigated the course of emotions in response to standardized emotion inductions in BPD. A total of 26 female BPD patients, 28 matched healthy control subjects, and 15 female patients with major depressive disorder listened to short stories inducing an angry, joyful, or neutral mood. Before and immediately after each story, as well as 3 and 6 minutes later, participants rated their current anger, joy, anxiety, shame, and sadness. All 3 groups showed the same increase and decrease of emotions. However, strong group differences in the general level of all negative emotions occurred. While sadness was stronger in both BPD and major depressive disorder as compared with healthy controls, all other negative emotions were significantly increased only in BPD, independent of comorbid depression. Extreme negative affectivity may be a more appropriate description of BPD-related emotional problems than emotional hyperreactivity. PMID:19996718

  13. Strategic Factors Influencing National and Regional Systems of Innovation: A Case of Weaker NSI with Stronger RSI

    Directory of Open Access Journals (Sweden)

    Pir Roshanuddin Shah Rashdi

    2015-04-01

    The relationship between the NSI (National System of Innovation) and the RSI (Regional System of Innovation) is not well reported in innovation policy research: that is, whether the NSI is the system on top of the RSI, or whether the importance of regions makes NSIs stronger. This raises concerns regarding the development of a strategic relationship between the two. Two cases, Catalonia (Spain) and Northern Ireland (the UK), were selected based on theoretical sampling. Key economic indicators were identified and quantitatively analyzed. The evidence suggests that a strong NSI has a positive influence on the RSI. In addition, the concentration of knowledge and the promotion of institutions may be strategically established, and the needed resources may then be injected to produce high-quality human resources. More comprehensive studies are, however, needed to validate the results of this research.

  14. Is there a stronger graft-versus-leukemia effect using HLA-haploidentical donors compared with HLA-identical siblings?

    Science.gov (United States)

    Ringdén, O; Labopin, M; Ciceri, F; Velardi, A; Bacigalupo, A; Arcese, W; Ghavamzadeh, A; Hamladji, R M; Schmid, C; Nagler, A; Mohty, M

    2016-02-01

    Haploidentical hematopoietic stem cell transplants (HSCTs) are increasingly used, but it is unknown whether they have a stronger graft-versus-leukemia (GVL) effect. We analyzed 10 679 acute leukemia patients who underwent HSCT from an HLA-matched sibling donor (MSD, n=9815) or a haploidentical donor (⩾2 HLA-antigen disparity, n=864) between 2007 and 2012, reported to the European Group for Blood and Marrow Transplantation. In a Cox regression model, acute and chronic graft-versus-host disease (GVHD) was added as time-dependent variables. There was no difference in probability of relapse between recipients of haploidentical and MSD grafts. Factors of importance for relapse after T-cell-replete grafts included remission status at HSCT, Karnofsky score ⩽80, acute GVHD of grade II or higher and chronic GVHD (PGVL effect. PMID:26293645

  15. Stronger constraints for nanometer scale Yukawa-type hypothetical interactions from the new measurement of the Casimir force

    CERN Document Server

    Bordag, M; Klimchitskaya, G L; Mostepanenko, V M

    1999-01-01

    We consider the Casimir force, including all important corrections to it, for the configuration used in a recent experiment employing an atomic force microscope. We calculate the long-range hypothetical forces due to the exchange of light and massless elementary particles between the atoms constituting the bodies used in the experiment, a dielectric plate and a sphere, both covered by two thin metallic layers. The corrections to these forces caused by small surface distortions were found to be essential for nanometer Compton wavelengths of hypothetical particles. New constraints on the constants of Yukawa-type interactions are obtained from the fact that such interactions were not observed within the limits of experimental accuracy. In some ranges they are up to 140 times stronger than the best constraints known to date. Different possibilities are also discussed for strengthening the obtained constraints several-fold without principal changes to the experimental setup.

  16. Application of graph database for analytical tasks

    OpenAIRE

    Günzl, Richard

    2014-01-01

    This diploma thesis is about graph databases, which belong to the category of database systems known as NoSQL databases, although graph databases go beyond NoSQL databases. Graph databases are useful in many cases thanks to their native storage of the interconnections between data, which brings advantageous properties in comparison with traditional relational database systems, especially in querying. The main goals of the thesis are: to describe the principles, properties and advantages of graph databases; to desi...

  17. Negative Database for Data Security

    CERN Document Server

    Patel, Anup; Eirinaki, Magdalini

    2011-01-01

    Data security is a major issue in any web-based application. There have been approaches to handle intruders in any system; however, these approaches are not fully trustworthy, and evidently data is not totally protected. Real-world databases hold information that needs to be securely stored. The approach of generating a negative database could help solve this problem. A negative database can be defined as a database that contains a huge amount of data consisting of counterfeit data along with the real data. Intruders may be able to get access to such databases, but, as they try to extract information, they will retrieve data sets that include both the actual and the negative data. In this paper we present our approach toward implementing the concept of a negative database to help prevent data theft by malicious users and to provide efficient data retrieval for all valid users.
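    One way to make the negative-data idea concrete, not necessarily the authors' construction, is to mark real records with a keyed tag so that valid users can filter out the counterfeit entries while an intruder without the key sees an undifferentiated mix of real and fake data. A small sketch under that assumption:

```python
import hmac, hashlib, random

random.seed(7)
KEY = b"shared-secret"  # held only by valid users (assumption for this sketch)

def tag(record: str) -> str:
    """Keyed tag marking a real record; unforgeable without KEY."""
    return hmac.new(KEY, record.encode(), hashlib.sha256).hexdigest()[:8]

def build_negative_db(real_records, n_decoys=5):
    """Mix real records (keyed tag) with counterfeit ones (random tag)."""
    db = [(r, tag(r)) for r in real_records]
    for _ in range(n_decoys):
        fake = f"user{random.randint(1000, 9999)}:{random.randint(100, 999)}"
        db.append((fake, "%08x" % random.getrandbits(32)))
    random.shuffle(db)
    return db

def retrieve(db):
    """A valid user keeps only records whose tag verifies under the key."""
    return [r for r, t in db if hmac.compare_digest(t, tag(r))]

real = ["alice:4711", "bob:1234"]
db = build_negative_db(real)
print(sorted(retrieve(db)))
```

    An intruder who dumps `db` cannot distinguish decoys from real entries without `KEY`, which is the retrieval asymmetry the abstract describes.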

  18. Table manipulation in simplicial databases

    CERN Document Server

    Spivak, David I

    2010-01-01

    In \\cite{Spi}, we developed a category of databases in which the schema of a database is represented as a simplicial set. Each simplex corresponds to a table in the database. There, our main concern was to find a categorical formulation of databases; the simplicial nature of the schemas was to some degree unexpected and unexploited. In the present note, we show how to use this geometric formulation effectively on a computer. If we think of each simplex as a polygonal tile, we can imagine assembling custom databases by mixing and matching tiles. Queries on this database can be performed by drawing paths through the resulting tile formations, selecting records at the start-point of this path and retrieving corresponding records at its end-point.

  19. Constructing a global noise correlation database

    Science.gov (United States)

    Ermert, L. A.; Fichtner, A.; Sleeman, R.

    2013-12-01

    We report on the ongoing construction of an extensive global-scale database of ambient noise cross-correlation functions spanning a frequency range from seismic hum to oceanic microseisms (roughly 2 mHz to 0.2 Hz). The database, ultimately to be hosted by ORFEUS, will be used to study the distribution of microseismic and hum sources and to perform multiscale full-waveform inversion for crustal and mantle structure. To build the database, we acquire continuous time-series data from permanent and temporary networks hosted mostly at IRIS and ORFEUS. We process and correlate the time series using a fully parallelized tool based on the Python package ObsPy. Processing follows two main flows: we obtain both classical cross-correlation functions and phase cross-correlation functions. Phase cross-correlation is an amplitude-independent measure of waveform similarity. Either type of correlation can be used for the inversions. We stack individual time windows linearly. Additionally, we calculate the stack of instantaneous phases of the analytic cross-correlation signal, which can be included as an optional processing step. Multiplying the linear stack by the phase stack downweights those parts of the linear stack that show little phase coherency. Thus, it accelerates the emergence of weak coherent signals, which is of particular importance for the processing of data from recently deployed or temporary stations that have only been recording for a short time. Obtaining and processing data for such a massive database requires considerable computational resources, offered by the Swiss National Supercomputing Centre (CSCS) in the form of HPC clusters specifically designed for large-scale data analysis. The data set will be made available to the scientific community via ORFEUS. By separately providing the classical cross-correlation, the phase cross-correlation and the instantaneous-phase stack, the database will offer relative flexibility for application in further studies. Many current
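    The phase-weighted combination described, multiplying the linear stack by the instantaneous-phase coherency, can be sketched with NumPy alone. The actual pipeline uses ObsPy; the analytic-signal construction, synthetic data and parameters below are illustrative assumptions, not the project's code:

```python
import numpy as np

rng = np.random.default_rng(0)

def analytic(x):
    """Analytic signal via FFT (one-sided spectrum doubling), NumPy only."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1
    h[1:(n + 1) // 2] = 2
    if n % 2 == 0:
        h[n // 2] = 1
    return np.fft.ifft(X * h)

def phase_weighted_stack(windows, nu=1.0):
    """Linear stack of correlation windows, down-weighted by phase coherency."""
    linear = np.mean(windows, axis=0)
    # Instantaneous-phase stack: mean unit phasor of each window's analytic signal.
    phasors = [np.exp(1j * np.angle(analytic(w))) for w in windows]
    coherency = np.abs(np.mean(phasors, axis=0)) ** nu
    return linear * coherency

# Synthetic example: a weak coherent wavelet buried in noise across 50 windows.
t = np.linspace(-1, 1, 200)
wavelet = np.exp(-(t * 10) ** 2) * np.cos(40 * t)
windows = np.array([0.5 * wavelet + rng.standard_normal(200) for _ in range(50)])
pws = phase_weighted_stack(windows)
```

    Incoherent noise yields coherency near 1/sqrt(N) while the coherent arrival keeps a higher value, which is why this accelerates signal emergence for short recording spans.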

  20. The eNanoMapper database for nanomaterial safety information

    Directory of Open Access Journals (Sweden)

    Nina Jeliazkova

    2015-07-01

    Background: The NanoSafety Cluster, a cluster of projects funded by the European Commission, identified the need for a computational infrastructure for toxicological data management of engineered nanomaterials (ENMs). Ontologies, open standards, and interoperable designs were envisioned to empower a harmonized approach to European research in nanotechnology. This setting provides a number of opportunities and challenges in the representation of nanomaterials data and the integration of ENM information originating from diverse systems. Within this cluster, eNanoMapper works towards supporting the collaborative safety assessment of ENMs by creating a modular and extensible infrastructure for data sharing, data analysis, and building computational toxicology models for ENMs. Results: The eNanoMapper database solution builds on the previous experience of the consortium partners in supporting diverse data through flexible data storage, open-source components and web services. We have recently described the design of the eNanoMapper prototype database, along with a summary of challenges in the representation of ENM data and an extensive review of existing nano-related data models, databases, and nanomaterials-related entries in chemical and toxicogenomic databases. This paper continues with a focus on the database functionality exposed through its application programming interface (API), and its use in visualisation and modelling. Considering the preferred community practice of using spreadsheet templates, we developed a configurable spreadsheet parser facilitating user-friendly data preparation and data upload. We further present a web application able to retrieve the experimental data via the API and analyze it with multiple data preprocessing and machine learning algorithms. Conclusion: We demonstrate how the eNanoMapper database is used to import and publish online ENM and assay data from several data sources, how the “representational state