WorldWideScience

Sample records for building stronger databases

  1. Advanced information technology: Building stronger databases

    Energy Technology Data Exchange (ETDEWEB)

    Price, D. [Lawrence Livermore National Lab., CA (United States)]

    1994-12-01

    This paper discusses the attributes of the Advanced Information Technology (AIT) tool set, a database application builder designed at the Lawrence Livermore National Laboratory. AIT consists of a C library and several utilities that provide referential integrity across a database, interactive menu and field level help, and a code generator for building tightly controlled data entry support. AIT also provides for dynamic menu trees, report generation support, and creation of user groups. Composition of the library and utilities is discussed, along with relative strengths and weaknesses. In addition, an instantiation of the AIT tool set is presented using a specific application. Conclusions about the future and value of the tool set are then drawn based on the use of the tool set with that specific application.
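
    AIT's C library itself is not reproduced in the abstract; the sketch below only illustrates the general idea of database-enforced referential integrity, using SQLite foreign keys and a hypothetical project/task schema:

```python
import sqlite3

# Illustrative schema; AIT's actual C API and tables are not shown in the abstract.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # turn on referential-integrity checks
conn.execute("CREATE TABLE project (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE task (id INTEGER PRIMARY KEY, "
             "project_id INTEGER NOT NULL REFERENCES project(id))")
conn.execute("INSERT INTO project VALUES (1, 'demo')")
conn.execute("INSERT INTO task VALUES (10, 1)")       # parent row exists: accepted

rejected = False
try:
    conn.execute("INSERT INTO task VALUES (11, 99)")  # no project 99: rejected
except sqlite3.IntegrityError:
    rejected = True
print("orphan insert rejected:", rejected)
```

    With the pragma enabled, the engine itself guarantees that no task can reference a nonexistent project, which is the property a code-generated data-entry layer would otherwise have to re-check by hand.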

  2. BUILDING STRONGER STATE ENERGY PARTNERSHIPS

    Energy Technology Data Exchange (ETDEWEB)

    David Terry

    2002-04-22

    program and building greater support among State Energy Office Directors. Second, NASEO would work to improve the efficiency of America's schools by assisting states and DOE in promoting projects that result in more energy-efficient (and clean energy) schools and a better learning environment. Third, NASEO was to identify opportunities, needs, and priorities related to the emerging public benefit funds/programs operated by many states emerging from utility restructuring. This third activity, while still a high priority for the state energy offices, has not been funded under this agreement; thus, no activity will be reported. The results of the two funded efforts described above are a significant increase in the awareness of RBA resources and assistance, as well as a better understanding of successful approaches to implementing RBA activities. This technical progress report includes an update of the progress during the first year of cooperative agreement DE-FC26-00NT40802, Building Stronger State Energy Partnerships with the U.S. Department of Energy. The report also describes the barriers encountered in conducting the effort and our assessment of future progress and activities.

  3. BUILDING STRONGER STATE ENERGY PARTNERSHIPS WITH THE U.S. DEPARTMENT OF ENERGY

    Energy Technology Data Exchange (ETDEWEB)

    Kate Burke

    2002-11-01

    This technical progress report includes an update of the progress during the second year of cooperative agreement DE-FC26-00NT40802, Building Stronger State Energy Partnerships with the U.S. Department of Energy. The report also describes the barriers encountered in conducting the effort and our assessment of future progress and activities.

  4. Building Stronger State Energy Partnerships with the U.S. Department of Energy

    Energy Technology Data Exchange (ETDEWEB)

    Marks, Kate

    2011-09-30

    This final technical report details the results of total work efforts and progress made from October 2007 – September 2011 under the National Association of State Energy Officials (NASEO) cooperative agreement DE-FC26-07NT43264, Building Stronger State Energy Partnerships with the U.S. Department of Energy. Major topical project areas in this final report include work efforts in the following areas: Energy Assurance and Critical Infrastructure, State and Regional Technical Assistance, Regional Initiative, Regional Coordination and Technical Assistance, and International Activities in China. All required deliverables have been provided to the National Energy Technology Laboratory and DOE program officials.

  5. Building Stronger State Energy Partnerships with the U.S. Department of Energy

    Energy Technology Data Exchange (ETDEWEB)

    David Terry

    2008-09-30

    This final technical report details the results of total work efforts and progress made from July 2000 - July 2008 under the National Association of State Energy Officials (NASEO) cooperative agreement DE-FC26-00NT40802, Building Stronger State Energy Partnerships with the U.S. Department of Energy. Major topical project areas in this final report include work efforts in the following areas: Rebuild America/Energy Smart Schools, Higher Education Initiative, Winter/Summer Fuels Outlook Conferences, Energy Emergency, Clean Energy Integration, Energy Star, and Office of Electricity Delivery and Energy Reliability. All required deliverables have been provided to the National Energy Technology Laboratory and DOE program officials.

  6. Geospatial database for heritage building conservation

    Science.gov (United States)

    Basir, W. N. F. W. A.; Setan, H.; Majid, Z.; Chong, A.

    2014-02-01

    Heritage buildings are icons from the past that survive into the present. Through heritage architecture, we can learn about the economic issues and social activities of the past. Nowadays, heritage buildings are under threat from natural disasters, uncertain weather, pollution and other hazards. In order to preserve this heritage for future generations, recording and documentation of heritage buildings are required. With the development of information systems and data collection techniques, it is possible to create a 3D digital model. This 3D information plays an important role in recording and documenting heritage buildings. 3D modelling and virtual reality techniques have demonstrated the ability to visualize the real world in 3D and can provide a better platform for communication and understanding of heritage buildings. Combining 3D modelling with Geographic Information System (GIS) technology will create a database that supports various spatial analyses of the 3D model. The objectives of this research are to determine the reliability of the Terrestrial Laser Scanning (TLS) technique for data acquisition of heritage buildings and to develop a geospatial database for heritage building conservation purposes. The result of data acquisition will serve as a guideline for 3D model development. This 3D model will be exported to a GIS format in order to develop a database for heritage building conservation, in which the requirements of the conservation process are included. Through this research, a proper database for storing and documenting heritage building conservation data will be developed.

  7. BUILDING STRONGER STATE ENERGY PARTNERSHIPS WITH THE U.S. DEPARTMENT OF ENERGY

    Energy Technology Data Exchange (ETDEWEB)

    Kate Burke

    2003-09-01

    This technical progress report includes an update of the progress during the third year of cooperative agreement DE-FC26-00NT40802, Building Stronger State Energy Partnerships with the U.S. Department of Energy. The report also describes the barriers encountered in conducting the effort and our assessment of future progress and activities. The approach of the project included three tasks during year three. First, NASEO and its Buildings Committee were to focus on raising awareness and coordination of Rebuild activities. Through education, one-on-one communications, and presentations at NASEO meetings and other events, staff and the committee would assist Rebuild officials in stimulating interest in the program and building greater support among State Energy Office Directors. The most recent subtasks added to the project, though not directly related to Rebuild America, fall under this initial task and support: (a) state plans to implement integrated energy and environmental initiatives, including distributed generation technologies, and (b) initiation of a state collaborative on advanced turbines and hybrid systems. The advanced turbine piece was completed during this year. During the year, a new workplan was accepted by Rebuild America's Dan Sze to supplement the work in this task. This workplan is outlined below. Second, NASEO would work to improve the efficiency of America's schools by assisting states and DOE in promoting projects that result in more energy-efficient and clean energy schools and a better learning environment. This task was fully completed during this year. The third task involves energy security issues, which NASEO addressed by way of a Summer Fuels Outlook Conference held on Tuesday, April 8, 2003. The purpose of this educational event was to inform state, federal, local, and other energy officials about the most recent transportation fuels data and trends. The public benefits part of this task was not funded for Year 3; thus, no activity occurred.

  8. A Generative Approach for Building Database Federations

    Directory of Open Access Journals (Sweden)

    Uwe Hohenstein

    1999-11-01

    A comprehensive, specification-based approach for building database federations is introduced that supports integrated, ODMG 2.0-conforming access to heterogeneous data sources seamlessly in C++. The approach is centered around several generators. A first set of generators produces ODMG adapters for local sources in order to homogenize them. Each adapter represents an ODMG view and supports ODMG manipulation and querying. The adapters can be plugged into a federation framework. Another generator produces a homogeneous and uniform view by putting an ODMG-conforming federation layer on top of the adapters. Input to these generators are schema specifications. Schemata are defined in corresponding specification languages. There are languages to homogenize relational and object-oriented databases, as well as ordinary file systems. Any specification defines an ODMG schema and relates it to an existing data source. An integration language is then used to integrate the schemata and to build system-spanning federated views thereupon. The generative nature provides flexibility with respect to schema modification of component databases. Any time a schema changes, only the specification has to be adapted; new adapters are generated automatically.
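
    The paper's generated ODMG C++ adapters are not shown in the abstract; as a loose sketch of the adapter-plus-federation idea (all class and field names hypothetical), in Python:

```python
class ListAdapter:
    """Homogenizing adapter: exposes one uniform query() over a local source.
    (Hypothetical stand-in for the paper's generated ODMG adapters.)"""
    def __init__(self, rows):
        self.rows = rows

    def query(self, predicate):
        return [row for row in self.rows if predicate(row)]


class Federation:
    """Uniform federated view plugged on top of several adapters."""
    def __init__(self, *adapters):
        self.adapters = adapters

    def query(self, predicate):
        hits = []
        for adapter in self.adapters:   # fan the query out to every source
            hits.extend(adapter.query(predicate))
        return hits


# Two heterogeneous sources wrapped behind the same interface
relational = ListAdapter([{"name": "alice", "age": 30}])
filesystem = ListAdapter([{"name": "bob", "age": 45}])
fed = Federation(relational, filesystem)
print(fed.query(lambda r: r["age"] > 25))  # rows drawn from both sources
```

    The point of the generative approach is that each `ListAdapter`-like wrapper would be emitted from a schema specification rather than written by hand, so a schema change only requires regenerating the adapter.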

  9. Stronger synergies

    CERN Multimedia

    Antonella Del Rosso

    2012-01-01

    CERN was founded 58 years ago under the auspices of UNESCO. Since then, both organisations have grown to become world leaders in their respective fields. The links between the two have always existed but today they are even stronger, with new projects under way to develop a more efficient way of exchanging information and devise a common strategy on topics of mutual interest.   CERN and UNESCO are a perfect example of natural partners: their common field is science and education is one of the pillars on which both are built. Historically, they share a common heritage. Both UNESCO and CERN were born of the desire to use scientific cooperation to rebuild peace and security in the aftermath of the Second World War. "Recently, building on our common roots and in close collaboration with UNESCO, we have been developing more structured links to ensure the continuity of the actions taken over the years," says Maurizio Bona, who is in charge of CERN relations with international orga...

  10. Building Database-Powered Mobile Applications

    OpenAIRE

    Paul POCATILU

    2012-01-01

    Almost all mobile applications use persistency for their data. A common approach for complex mobile applications is to store data in local relational databases. Almost all major mobile platforms include a relational database engine. These database engines expose specific APIs (Application Programming Interfaces) to be used by mobile application developers for data definition and manipulation. This paper focuses on database-based application models for several mobile platforms (Android, Symbian, Wind...
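
    On the major platforms the bundled engine is typically SQLite; the data-definition and manipulation calls such an embedded engine exposes can be sketched with Python's sqlite3 module (the note-taking schema is invented for illustration):

```python
import sqlite3

db = sqlite3.connect(":memory:")   # local, embedded store, as on a device
# data definition
db.execute("CREATE TABLE note (id INTEGER PRIMARY KEY, body TEXT, done INTEGER)")
# data manipulation through parameterized statements
db.executemany("INSERT INTO note (body, done) VALUES (?, ?)",
               [("buy milk", 0), ("file report", 1)])
db.commit()
open_notes = db.execute("SELECT body FROM note WHERE done = 0").fetchall()
print(open_notes)
```

    Platform APIs such as Android's differ in surface syntax but expose the same trio of operations: create schema, run parameterized writes, and query back rows.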

  11. Building a dynamic Web/database interface

    OpenAIRE

    Cornell, Julie.

    1996-01-01

    Computer Science This thesis examines methods for accessing information stored in a relational database from a Web page. The stateless and connectionless nature of the Web's Hypertext Transfer Protocol, as well as the open nature of the Internet Protocol, poses problems in the areas of database concurrency, security, speed, and performance. We examined the Common Gateway Interface, Server API, Oracle's Web/database architecture, and the Java Database Connectivity interface in terms of p...

  12. An Algorithm for Building an Electronic Database

    OpenAIRE

    Cohen, Wess A.; Gayle, Lloyd B.; Patel, Nima P.

    2016-01-01

    Objective: We propose an algorithm on how to create a prospectively maintained database, which can then be used to analyze prospective data in a retrospective fashion. Our algorithm provides future researchers a road map on how to set up, maintain, and use an electronic database to improve evidence-based care and future clinical outcomes. Methods: The database was created using Microsoft Access and included demographic information, socioeconomic information, and intraoperative and postoperati...

  13. Development of a California commercial building benchmarking database

    Energy Technology Data Exchange (ETDEWEB)

    Kinney, Satkartar; Piette, Mary Ann

    2002-05-17

    Building energy benchmarking is a useful starting point for commercial building owners and operators to target energy-savings opportunities. There are a number of tools and methods for benchmarking energy use. Benchmarking based on regional data can provide more relevant information for California buildings than national tools such as Energy Star. This paper discusses issues related to benchmarking commercial building energy use and the development of Cal-Arch, a building energy benchmarking database for California. Currently Cal-Arch uses existing survey data from California's Commercial End Use Survey (CEUS), a largely underutilized wealth of information collected by California's major utilities. DOE's Commercial Building Energy Consumption Survey (CBECS) is used by a similar tool, Arch, and by a number of other benchmarking tools. Future versions of Arch/Cal-Arch will utilize additional data sources, including modeled data and individual buildings, to expand the database.
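
    At its core, benchmarking of the Cal-Arch kind amounts to ranking a building's energy use intensity (EUI) against a peer distribution drawn from survey data; a toy sketch with invented numbers:

```python
def percentile_rank(value, peers):
    """Percentage of peer buildings whose EUI is at or below `value`."""
    below = sum(1 for p in peers if p <= value)
    return 100.0 * below / len(peers)

# Hypothetical peer EUIs (kBtu/sqft/yr), as might come from a survey like CEUS
peer_euis = [45, 52, 60, 63, 71, 78, 85, 90, 102, 120]
my_eui = 63
print(f"EUI {my_eui} is at the {percentile_rank(my_eui, peer_euis):.0f}th percentile")
```

    A lower percentile here means the building uses less energy than most of its peers, which is the kind of screening signal an owner uses to decide whether deeper analysis is worthwhile.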

  14. Data Preparation Process for the Buildings Performance Database

    Energy Technology Data Exchange (ETDEWEB)

    Walter, Travis; Dunn, Laurel; Mercado, Andrea; Brown, Richard E.; Mathew, Paul

    2014-06-30

    The Buildings Performance Database (BPD) includes empirically measured data from a variety of data sources with varying degrees of data quality and data availability. The purpose of the data preparation process is to maintain data quality within the database and to ensure that all database entries have sufficient data for meaningful analysis and for the database API. Data preparation is a systematic process of mapping data into the Building Energy Data Exchange Specification (BEDES), cleansing data using a set of criteria and rules of thumb, and deriving values such as energy totals and dominant asset types. The data preparation process takes the greatest amount of effort and time; therefore, most of the cleansing process has been automated. The process also needs to adapt as more data is contributed to the BPD and as building technologies evolve over time. The data preparation process is an essential step between data contributed by providers and data published to the public in the BPD.
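
    The map/cleanse/derive pipeline described above can be sketched as follows (the field names and records are illustrative, not actual BEDES terms; the unit factors assume 1 kWh = 3.412 kBtu and 1 therm = 100 kBtu):

```python
# Hypothetical raw record from one data provider
raw = {"Bldg Type": "office", "Elec (kWh)": "125000", "Gas (therms)": "3000"}

FIELD_MAP = {  # provider field -> common-spec field (BEDES-style, illustrative)
    "Bldg Type": "building_type",
    "Elec (kWh)": "electricity_kwh",
    "Gas (therms)": "gas_therms",
}

def prepare(record):
    # 1. map into the common specification
    mapped = {FIELD_MAP[k]: v for k, v in record.items() if k in FIELD_MAP}
    # 2. cleanse: coerce numerics, drop records failing a sanity rule
    for field in ("electricity_kwh", "gas_therms"):
        mapped[field] = float(mapped[field])
        if mapped[field] < 0:
            return None  # fails cleansing criteria
    # 3. derive: site energy total in a single unit (kBtu)
    mapped["total_kbtu"] = mapped["electricity_kwh"] * 3.412 \
                         + mapped["gas_therms"] * 100.0
    return mapped

print(prepare(raw))
```

    Automating steps like these is what lets the cleansing keep pace as new providers contribute data in their own formats.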

  15. Research on methods of designing and building digital seabed database

    Institute of Scientific and Technical Information of China (English)

    Su Tianyun; Liu Baohua; Zhai Shikui; Liang Ruicai; Zheng Yanpeng; Fu Qiang

    2007-01-01

    With a review of recent developments in the digitalization and application of seabed data, this paper systematically proposes methods for integrating seabed data by analyzing its features, based on the ORACLE database management system and advanced techniques of spatial data management. We researched the storage structure of seabed data, a distributed-integrated database system, a standardized spatial database, and a seabed metadata management system in order to effectively manage and use seabed information in practical applications. Finally, we applied the methods proposed in this paper to build the Bohai Sea engineering geology database, which stores engineering geology data and other seabed information from the Bohai Sea area. As a result, the Bohai Sea engineering geology database can effectively integrate huge amounts of distributed and complicated seabed data to meet the practical requirements of Bohai Sea engineering geology environment exploration and exploitation.

  16. An overview of building morphological characteristics derived from 3D building databases.

    Energy Technology Data Exchange (ETDEWEB)

    Brown, M. J. (Michael J.); Burian, S. J. (Steven J.); Linger, S. P. (Steve P.); Velugubantla, S. P. (Srinivas, P.); Ratti, Carlo

    2002-01-01

    Varying levels of urban canopy parameterizations are frequently employed in atmospheric transport and dispersion codes in order to better account for the urban effect on the meteorology and diffusion. Many of these urban parameterizations need building-related parameters as input. Derivation of these building parameters has often relied on in situ 'measurements', a time-consuming and expensive process. Recently, 3D building databases have become more common for major cities worldwide and provide the hope of a more efficient route to obtaining building statistics. In this paper, we give an overview of computations we have performed for obtaining building morphological characteristics from 3D building databases for several southwestern US cities, including Los Angeles, Salt Lake City, and Phoenix.

  17. Strategies of Building a Stronger Sense of Community for Sustainable Neighborhoods: Comparing Neighborhood Accessibility with Community Empowerment Programs

    Directory of Open Access Journals (Sweden)

    Te-I Albert Tsai

    2014-05-01

    New Urbanist development in the U.S. aims at enhancing a sense of community and seeks to return to the design of early traditional neighborhoods, which have pedestrian-oriented environments with retail shops and services within walking distance of housing. Meanwhile, 6,000 of Taiwan's community associations have been running community empowerment programs supported by the Council for Cultural Affairs that have helped many neighborhoods to rebuild so-called community cohesion. This research attempts to evaluate whether neighborhoods with facilities near housing and shorter travel distances within a neighborhood promote stronger social interactions and form a better community attachment than neighborhoods that offer various opportunities for residents to participate in either formal or informal social gatherings. After interviewing and surveying residents from 19 neighborhoods in Taipei's Beitou District, and correlating the psychological sense of community with daily travel distances within each neighborhood, the number of participatory activities held by community organizations under empowerment programs, and the frequencies of regular individual visits and casual meetings, the statistical evidence showed that placing public facilities near residential locations is more effective than providing various programs for elevating a sense of community.

  18. Building a comprehensive serials decision database at Virginia Tech

    OpenAIRE

    Metz, P.; Cosgriff, J.

    2000-01-01

    Although for many years academic libraries have relied on data on cost, library use, or citations to inform collection development decisions regarding serials, they have not fully exploited the possibilities of compiling numerous measures into comprehensive databases for decision support. The authors discuss the procedures used and the advantages realized from an effort to build such a resource at Virginia Polytechnic Institute and State University (Virginia Tech), where the available data ...

  19. Illinois hospital using Web to build database for relationship marketing.

    Science.gov (United States)

    Rees, T

    2000-01-01

    Silver Cross Hospital and Medical Centers, Joliet, Ill., is promoting its Web site as a tool for gathering health information about patients and prospective patients in order to build a relationship marketing database. The database will enable the hospital to identify health care needs of consumers in Joliet, Will County and many southwestern suburbs of Chicago. The Web site is promoted in a multimedia advertising campaign that invites residents to participate in a Healthy Living Quiz that rewards respondents with free health screenings. The effort is part of a growing planning and marketing strategy in the health care industry called customer relationship management (CRM). Not only does a total CRM plan offer health care organizations the chance to discover the potential for meeting consumers' needs; it also helps find any marketplace gaps that may exist. PMID:11184485

  20. Stronger Municipalities for Stronger Cities in Argentina

    OpenAIRE

    Rémy Prud'homme; Hervé Huntzinger; Pierre Kopp

    2004-01-01

    In recent years a number of studies have been devoted to the twin issues of economic development and of decentralization in Argentina. Many papers have tried to understand the complex system of intergovernmental relations. Most of them, however, have focussed on the role of provinces, and neglected the problems raised by municipalities. This paper tries to bridge this gap, and to suggest that stronger municipalities could contribute to produce stronger cities that would in turn foster economi...

  1. Building, Testing and Evaluating Database Clusters : OSA project

    OpenAIRE

    Kushanova, Olga

    2014-01-01

    The purpose of this study was to research the idea and functionality of clustered database systems. As relational databases began to lose effectiveness at modern data sizes and manipulation demands, a new solution had to be found to overcome these limitations. On one side, relational databases started to support clustered implementations, which made the database more reliable and helped to achieve better performance. On the other side, a totally new data store structure came with the NoSQL movement...

  2. Lighter and stronger planes

    OpenAIRE

    Attard, Bonnie; Duca, Edward

    2015-01-01

    The price of fuel is a large cost burden on the aerospace industry. A lighter plane means cheaper flights, increased aircraft range, and less environmental pollution. http://www.um.edu.mt/think/lighter-and-stronger-planes/

  3. Summary of Adsorption/Desorption Experiments for the European Database on Indoor Air Pollution Sources in Buildings

    DEFF Research Database (Denmark)

    Kjær, Ulla Dorte; Tirkkonen, T.

    1996-01-01

    Experimental data for adsorption/desorption in building materials. Contribution to the European Database on Indoor Air Pollution Sources in Buildings.

  4. AQSIQ Builds an Import & Export Commodity Inspection & Quarantine Information Database

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    To comprehensively strengthen the quality of work and ensure the safe processing of import and export commodities, the General Administration of Quality Supervision, Inspection and Quarantine of China (AQSIQ) recently initiated the establishment of a China import and export commodity inspection and quarantine information database.

  5. Building high dimensional imaging database for content based image search

    Science.gov (United States)

    Sun, Qinpei; Sun, Jianyong; Ling, Tonghui; Wang, Mingqing; Yang, Yuanyuan; Zhang, Jianguo

    2016-03-01

    In medical imaging informatics, content-based image retrieval (CBIR) techniques are employed to aid radiologists in the retrieval of images with similar image contents. CBIR uses visual contents, normally called image features, to search images from large-scale image databases according to users' requests in the form of a query image. However, most current CBIR systems require a distance computation over image feature vectors to perform a query, and these distance computations can be time-consuming as the number of image features grows large, which limits the usability of such systems. In this presentation, we propose a novel framework which uses a high-dimensional database to index the image features to improve the accuracy and retrieval speed of CBIR in an integrated RIS/PACS.
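
    The query bottleneck the authors describe comes from exhaustive distance computation over feature vectors; a minimal sketch of that baseline (vectors invented), which a high-dimensional index would replace with sub-linear lookup:

```python
import math

# Hypothetical image feature vectors, already extracted and stored
index = {
    "img_001": [0.12, 0.80, 0.30],
    "img_002": [0.90, 0.10, 0.55],
    "img_003": [0.15, 0.75, 0.28],
}

def retrieve(query_vec, k=2):
    """Rank every indexed image by Euclidean distance to the query features."""
    ranked = sorted(index, key=lambda name: math.dist(index[name], query_vec))
    return ranked[:k]

print(retrieve([0.14, 0.78, 0.29]))
```

    With millions of images and hundreds of feature dimensions, this full scan is exactly what becomes too slow, motivating the indexed approach the abstract proposes.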

  6. Building an Organic Market Database - OrganicDataNetwork Training

    OpenAIRE

    Willer, Helga; Schaack, Diana

    2014-01-01

    About this training > The OrganicDataNetwork manual shows how a database and the necessary tools for processing organic market data can be built. > The target group is collectors of organic market data. > The manual is a product of the OrganicDataNetwork project, which aims to improve European organic market data. > Further details are available in the manual on the OrganicDataNetwork website at www.organicdatanetwork.net.

  7. Research and application of ORACLE performance optimizing technologies for building airplane environment resource database

    Science.gov (United States)

    Zhang, Jianjun; Sun, Jianyong; Cheng, Conggao

    2013-03-01

    Many problems exist in processing experimental aircraft vibration (temperature, humidity) data and generating the intermediate calculations during the construction of an airplane environment resource database, such as the need to deal with both structured and non-structured data, the weak data-processing capacity of the client browser, and massive network data transfers. To solve the above problems, several strategies for tuning and optimizing database performance are employed based on Oracle 11g, including data storage structure tuning, server memory configuration, disk I/O tuning, and SQL statement tuning. The experimental results show that the performance of the airplane environment resource database is enhanced by about 80% compared with the database developed in the initial demonstration and validation phase. The application of the new optimization strategies to database construction lays a sound foundation for completing the airplane environment resource database.

  8. An approach in building a chemical compound search engine in oracle database.

    Science.gov (United States)

    Wang, H; Volarath, P; Harrison, R

    2005-01-01

    Searching for or identifying chemical compounds is an important process in drug design and in chemistry research. An efficient search engine involves a close coupling of the search algorithm and the database implementation. The database must process chemical structures, which demands approaches to represent, store, and retrieve structures in a database system. In this paper, a general database framework for a chemical compound search engine in an Oracle database is described. The framework is devoted to eliminating data-type constraints for potential search algorithms, which is a crucial step toward building a domain-specific query language on top of SQL. A search engine implementation based on the database framework is also demonstrated. The convenience of the implementation emphasizes the efficiency and simplicity of the framework. PMID:17282834
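
    The paper's Oracle framework is not reproduced here; purely to illustrate what a compound query looks like, the sketch below stores SMILES strings and uses naive substring containment as a stand-in for true substructure (subgraph) matching:

```python
# Hypothetical compound table: name -> SMILES string
compounds = {
    "ethanol": "CCO",
    "acetic acid": "CC(=O)O",
    "benzene": "c1ccccc1",
}

def search_substructure(fragment):
    """Naive text containment as a placeholder for real subgraph matching."""
    return [name for name, smiles in compounds.items() if fragment in smiles]

print(search_substructure("CC"))
```

    Real engines match the molecular graph (often via fingerprints plus subgraph isomorphism), since string containment misses equivalent structures written with different SMILES.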

  9. Object-Oriented Database for Managing Building Modeling Components and Metadata: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Long, N.; Fleming, K.; Brackney, L.

    2011-12-01

    Building simulation enables users to explore and evaluate multiple building designs. When tools for optimization, parametrics, and uncertainty analysis are combined with analysis engines, the sheer number of discrete simulation datasets makes it difficult to keep track of the inputs. The integrity of the input data is critical to designers, engineers, and researchers for code compliance, validation, and building commissioning long after the simulations are finished. This paper discusses an application that stores inputs needed for building energy modeling in a searchable, indexable, flexible, and scalable database to help address the problem of managing simulation input data.
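
    A minimal sketch of such a searchable store of modeling inputs (the fields and values are invented; the paper's actual schema is not given in the abstract):

```python
# Hypothetical component library: each entry is one building-model input
components = [
    {"id": 1, "type": "wall_construction", "r_value": 13, "tags": ["ashrae", "climate_5A"]},
    {"id": 2, "type": "wall_construction", "r_value": 21, "tags": ["climate_5A"]},
    {"id": 3, "type": "window", "u_factor": 0.32, "tags": ["ashrae"]},
]

def find(**criteria):
    """Return components whose fields match every given criterion exactly."""
    return [c for c in components
            if all(c.get(k) == v for k, v in criteria.items())]

print(find(type="wall_construction", r_value=21))
```

    Keeping inputs in a queryable store like this, rather than scattered across simulation folders, is what preserves their integrity for later code-compliance and commissioning checks.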

  10. Building a GIS database in the Eastern Tennessee Seismic Zone

    Science.gov (United States)

    Akinpelu, M. O.; Vlahovic, G.; Arroucau, P.; Malhotra, R.; Powell, C. A.

    2010-12-01

    Eastern Tennessee contains one of the most seismically active regions in eastern North America. The Eastern Tennessee Seismic Zone (ETSZ) is about 300 kilometers long and extends from northwestern Georgia through eastern Tennessee [Study Area: 34°N to 37°N; 86°W to 82.5°W]. It is the second most active earthquake zone in the United States east of the Rocky Mountains; only the New Madrid Seismic Zone releases more seismic strain energy. Unlike the New Madrid Seismic Zone, the ETSZ has not experienced any destructive earthquake in historical time; however, its seismogenic potential is not well understood. The spatial dimensions of the ETSZ and its association with potential-field anomalies suggest that collecting and organizing all the relevant data into a GIS geodatabase could increase our understanding of the region. Geographic Information System (GIS) software can be used to acquire, share, maintain, and modify geospatial data sets. In this work, ArcGIS 9.3.2 is used to build a geodatabase which includes topography; earthquake information such as locations, magnitudes, and focal mechanisms; potential-field data; P- and S-wave velocity anomalies inferred from tomographic inversions of local events; seismic transects; digital geological maps; and other relevant datasets. Raw datasets were downloaded from several earth science institutions and were edited before being imported into ArcGIS. Various geoprocessing techniques, such as georeferencing, digitizing, and surface interpolation, were used to manipulate and analyze these data. We show how this compilation can be used to analyze the spatial relationships between earthquake locations and other data layers. The long-term idea behind this project is to build an information resource that will be continuously updated and will eventually encompass data related to intraplate seismicity in the entire central and eastern United States. It will be made available to researchers, students, the general public and

  11. Lessons learned while building the Deepwater Horizon Database: Toward improved data sharing in coastal science

    Science.gov (United States)

    Thessen, Anne E.; McGinnis, Sean; North, Elizabeth W.

    2016-02-01

    Process studies and coupled-model validation efforts in geosciences often require integration of multiple data types across time and space. For example, improved prediction of hydrocarbon fate and transport is an important societal need which fundamentally relies upon synthesis of oceanography and hydrocarbon chemistry. Yet, there are no publicly accessible databases which integrate these diverse data types in a georeferenced format, nor are there guidelines for developing such a database. The objective of this research was to analyze the process of building one such database to provide baseline information on data sources and data sharing and to document the challenges and solutions that arose during this major undertaking. The resulting Deepwater Horizon Database was approximately 2.4 GB in size and contained over 8 million georeferenced data points collected from industry, government databases, volunteer networks, and individual researchers. The major technical challenges that were overcome were the reconciliation of terms, units, and quality flags, which was necessary to effectively integrate the disparate data sets. Assembling this database required the development of relationships with individual researchers and data managers, which often involved extensive e-mail contact. The average number of e-mails exchanged per data set was 7.8. Of the 95 relevant data sets that were discovered, 38 (40%) were obtained, either in whole or in part. Over one third (36%) of the requests for data went unanswered. The majority of responses were received after the first request (64%) and within the first week of the first request (67%). Although fewer than half of the potentially relevant datasets were incorporated into the database, the level of sharing (40%) was high compared to some other disciplines where sharing can be as low as 10%. Our suggestions for building integrated databases include budgeting significant time for e-mail exchanges, being cognizant of the cost versus

  12. Attributes of the Federal Energy Management Program's Federal Site Building Characteristics Database

    Energy Technology Data Exchange (ETDEWEB)

    Loper, Susan A.; Sandusky, William F.

    2010-12-31

    Typically, the Federal building stock is referred to as a group of about one-half million buildings throughout the United States. Additional information beyond this level is generally limited to distribution of that total by agency and perhaps distribution of the total by state. However, additional characterization of the Federal building stock is required as the Federal sector seeks ways to implement efficiency projects to reduce energy and water use intensity as mandated by legislation and Executive Order. Using a Federal facility database that was assembled for use in a geographic information system tool, additional characterization of the Federal building stock is provided, including information regarding the geographical distribution of sites, building counts and percentage of total by agency, distribution of sites and building totals by agency, distribution of building count and floor space by Federal building type classification by agency, and rank ordering of sites, buildings, and floor space by state. A case study is provided regarding how the building stock changed for the Department of Energy from 2000 through 2008.

  13. DEEP: A Database of Energy Efficiency Performance to Accelerate Energy Retrofitting of Commercial Buildings

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Sang Hoon; Hong, Tianzhen; Sawaya, Geof; Chen, Yixing; Piette, Mary Ann

    2015-05-01

    The paper presents a method and process to establish a database of energy efficiency performance (DEEP) to enable quick and accurate assessment of energy retrofits of commercial buildings. DEEP was compiled from the results of about 35 million EnergyPlus simulations. DEEP provides energy savings for screening and evaluation of retrofit measures targeting small and medium-sized office and retail buildings in California. The prototype building models are developed for a comprehensive assessment of building energy performance based on the DOE commercial reference buildings and the California DEER prototype buildings. The prototype buildings represent seven building types across six construction vintages and 16 California climate zones. DEEP uses these prototypes to evaluate the energy performance of about 100 energy conservation measures covering envelope, lighting, heating, ventilation, air-conditioning, plug loads, and domestic hot water. DEEP contains the energy simulation results for individual retrofit measures as well as packages of measures, to account for interactive effects between multiple measures. The large-scale EnergyPlus simulations are being conducted on the supercomputers at the National Energy Research Scientific Computing Center of Lawrence Berkeley National Laboratory. The pre-simulated database is part of an ongoing project to develop a web-based retrofit toolkit for small and medium-sized commercial buildings in California, which provides real-time energy retrofit feedback by querying DEEP, returning recommended measures, estimated energy savings, and financial payback period based on users’ decision criteria of maximizing energy savings, energy cost savings, carbon reduction, or payback of investment. The pre-simulated database and associated comprehensive measure analysis enhance the ability to perform retrofit assessments to reduce energy use for small and medium buildings and business owners who typically do not have resources to conduct
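The query-based retrofit feedback described above can be sketched as a lookup against a pre-simulated results table. A minimal sketch using Python's built-in sqlite3; the schema, climate-zone codes, measure names, and savings figures below are all hypothetical, not DEEP's actual contents:

```python
import sqlite3

# Hypothetical pre-simulated lookup table keyed by prototype building type,
# climate zone, and retrofit measure.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE deep (
    building_type TEXT, climate_zone TEXT, measure TEXT,
    site_energy_savings_pct REAL)""")
conn.executemany("INSERT INTO deep VALUES (?, ?, ?, ?)", [
    ("small_office",  "CZ03", "LED lighting", 8.2),
    ("small_office",  "CZ03", "cool roof",    2.1),
    ("medium_retail", "CZ12", "LED lighting", 10.5),
])

# Real-time feedback = rank pre-simulated measures for the user's
# prototype/climate pair, no new simulation required.
best = conn.execute(
    """SELECT measure, site_energy_savings_pct FROM deep
       WHERE building_type = ? AND climate_zone = ?
       ORDER BY site_energy_savings_pct DESC""",
    ("small_office", "CZ03")).fetchall()
print(best)  # [('LED lighting', 8.2), ('cool roof', 2.1)]
```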

  14. Databases

    Data.gov (United States)

    National Aeronautics and Space Administration — The databases of computational and experimental data from the first Aeroelastic Prediction Workshop are located here. The database file names indicate their contents...

  15. Databases

    Digital Repository Service at National Institute of Oceanography (India)

    Kunte, P.D.

    Information on bibliographic as well as numeric/textual databases relevant to coastal geomorphology has been included in a tabular form. Databases cover a broad spectrum of related subjects like coastal environment and population aspects, coastline...

  16. Building up a collaborative article database out of Open Source components

    Directory of Open Access Journals (Sweden)

    Stefan Kandera

    2010-12-01

    Full Text Available Members of a Swiss, Austrian and German network of health care libraries planned to build a collaborative article reference database. Since different libraries were cataloging articles on their own, and many national health care journals cannot be found in other repositories (free or commercial), the goal was to merge existing collections and to enable participants to catalog articles on their own. As of November 2010, the database http://bibnet.org contains 45,000 article references from 17 libraries. In this paper we discuss how the software concept evolved and the problems we encountered during this process.

  17. The Wenchuan earthquake creation of a rich database of building performance

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    After the Wenchuan earthquake, the Institute of Engineering Mechanics (IEM) performed an extensive and comprehensive damage survey of the large area affected by the earthquake. Seismic codes in China were revised and updated after the catastrophic 1976 Tangshan earthquake. However, until the Wenchuan earthquake the seismic code provisions had not been tested by a large earthquake. Some 5000 buildings, exposed to intensities VI to XI, were investigated in great detail immediately after the earthquake. The investigation and the surveys covered both seismically designed (fortified) buildings and non-code-compliant buildings. In the process a comprehensive and documented database of building performance was compiled, which would be very valuable for further research, improvement of the seismic code, improvement of construction practices, and disaster mitigation planning and management. The database predominantly contains the most prevalent structural types in the region: 1) reinforced and un-reinforced masonry structures; 2) masonry buildings with reinforced frames in the lower stories; and 3) reinforced concrete frame structures. Observed damage characteristics of various structural types were studied and documented, damage patterns analyzed, and corresponding damage probability matrices derived from the data collected during this survey. It is our hope that this investigation and the published material will be utilized for the revision of the seismic codes, leading to a higher level of life safety and damage reduction in future earthquakes.

  18. Privacy protection and public goods: building a genetic database for health research in Newfoundland and Labrador

    OpenAIRE

    Kosseim, Patricia; Pullman, Daryl; Perrot-Daley, Astrid; Hodgkinson, Kathy; Street, Catherine; Rahman, Proton

    2013-01-01

    Objective To provide a legal and ethical analysis of some of the implementation challenges faced by the Population Therapeutics Research Group (PTRG) at Memorial University (Canada), in using genealogical information offered by individuals for its genetics research database. Materials and methods This paper describes the unique historical and genetic characteristics of the Newfoundland and Labrador founder population, which gave rise to the opportunity for PTRG to build the Newfoundland Genea...

  19. Databases

    Directory of Open Access Journals (Sweden)

    Nick Ryan

    2004-01-01

    Full Text Available Databases are deeply embedded in archaeology, underpinning and supporting many aspects of the subject. However, as well as providing a means for storing, retrieving and modifying data, databases themselves must be a result of a detailed analysis and design process. This article looks at this process, and shows how the characteristics of data models affect the process of database design and implementation. The impact of the Internet on the development of databases is examined, and the article concludes with a discussion of a range of issues associated with the recording and management of archaeological data.

  20. Design and Building of the New Countryside Construction Database Based on ArcSDE and SQL Server

    Institute of Scientific and Technical Information of China (English)

    Hongji ZHANG; Xuping LI; Yong LUO; Lianze TENG; Aiqun DAI

    2013-01-01

    Building the new countryside construction database plays an important role in improving construction efficiency and enhancing the level of major project management. On the basis of a detailed analysis of the features of new countryside construction data, we give an overview of the database design based on ArcSDE and SQL Server, and elaborate the associations between data classification organization, database conceptual design, logical design, spatial data, and thematic attribute data. Finally, taking the provincial new countryside demonstration zone in Yanjiang District of Sichuan Province as an example, we build the new countryside construction database.

  1. Energy Performance Database of Building Heritage in the Region of Umbria, Central Italy

    Directory of Open Access Journals (Sweden)

    Cinzia Buratti

    2015-07-01

    Full Text Available Household energy consumption has been increasing in the last decades; the residential sector is responsible for about 40% of the total final energy use in Europe. Energy efficiency measures can both reduce energy needs of buildings and energy-related CO2 emissions. For this reason, in recent years, the European Union has been making efforts to enhance energy saving in buildings by introducing various policies and strategies; in this context, a common methodology was developed to assess and to certify energy performance of buildings. The positive effects obtained by energy efficiency measures need to be verified, but measuring and monitoring building energy performance is time consuming and financially demanding. Alternatively, energy efficiency can also be evaluated by specific indicators based on energy consumption. In this work, a methodology to investigate the level of energy efficiency reached in the Umbria Region (Central Italy) is described, based on data collected from energy certificates. In fact, energy certificates, which are the outcomes of simulation models, represent a useful and available tool to collect data related to the energy use of dwellings. A database of building energy performance was developed, in which about 6500 energy certificates of residential buildings supplied by the Umbria region were inserted. On the basis of this data collection, average energy and CO2 indicators related to the building heritage in Umbria were estimated and compared to national and international indicators derived from official sources. Results showed that the methodology adopted in this work can be an alternative method for the evaluation of energy indicators; in fact, the ones calculated from simulation data were similar to the ones reported in national and international sources. This allowed the authors to validate the adopted methodology and the efficiency of European policies.

  2. TECHNIQUES OF 3D COMPONENT DATABASE ESTABLISHING AND QUALITY CONTROL FOR WOODEN BUILDING

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Based on the architectural survey project of “the Chi Lin Nunnery Redevelopment” in Hong Kong, this paper attempts to investigate the techniques of building a 3D digital document of a large-scale timber structure and of quality control during construction by computer-based 3D simulation for the whole project. There were several key issues, including primary data acquisition, 3D modeling and display, pre-assembling the total building, and quality examination. In this paper, some useful experiments, such as new applications of CCD digital cameras and image and graph processing software packages (CAD, Photoshop, Photomodeler, Vexcel, etc.) to the architecture, are also presented. The methods introduced in this paper are suitable for building integrated image and graph databases of complicated architectures, and are useful for conveniently maintaining and reconstructing ancient architectures.

  3. A method for building and evaluating formal specifications of object-oriented conceptual models of database systems

    NARCIS (Netherlands)

    Wieringa, R.J.

    1993-01-01

    This report describes a method called MCM (Method for Conceptual Modeling) for building and evaluating formal specifications of object-oriented models of database system behavior. An important aim of MCM is to bridge the gap between formal specification and informal understanding. Building a MCM mod

  4. Clearinghouse-type system and database for post‑seismic buildings assessment

    Directory of Open Access Journals (Sweden)

    Claudiu-Sorin DRAGOMIR

    2015-07-01

    Full Text Available The paper presents the outline and function of a system for field assessment and the structure of a database for post-earthquake inspections, to be used with the Romanian Methodology for rapid buildings assessment, indicative ME003/2007, and for other research purposes. The functioning mode of the system for assessment in emergencies caused by earthquakes and the structure of the data collected by the field teams that participated in post-earthquake inspections are presented in this paper. A system for interdisciplinary assessment was designed with strong Vrancea intermediate-depth earthquakes in mind, but it will also be useful for crustal earthquakes of local or regional extent. The proposed system envisages a coordination centre and a clearinghouse at Bucharest, in NIRD “URBAN-INCERC”, that coordinates branches in the territory (Iaşi, Timişoara and Cluj Napoca). Given the large area affected by Vrancea earthquakes, the system will pursue the establishment of regional centers also in the cities of Braşov and Constanţa, where there are universities in the field. To demonstrate the relevance of the system, the paper presents a case study of two buildings located in downtown Bucharest. By exploiting a software application created at the INCERC Bucharest Branch, numerical values and charts can be obtained for all buildings investigated by the INCERC inspectors.

  5. THE SCHEME FOR THE DATABASE BUILDING AND UPDATING OF 1:10 000 DIGITAL ELEVATION MODELS

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    The National Bureau of Surveying and Mapping of China has planned to speed up the development of spatial data infrastructure (SDI) in the coming few years. This SDI consists of four types of digital products, i.e., digital orthophotos, digital elevation models, digital line graphs and digital raster graphs. For the DEM, a scheme for the database building and updating of 1:10 000 digital elevation models has been proposed and some experimental tests have also been accomplished. This paper describes the theoretical (and/or technical) background and reports some of the experimental results to support the scheme. Various aspects of the scheme such as accuracy, data sources, data sampling, spatial resolution, terrain modeling, data organization, etc. are discussed.

  6. Greater Than The Sum of Its Parts:Building Up A Co-operative Database of Pearl River Delta Collection

    Institute of Scientific and Technical Information of China (English)

    Paul W.T. Poon, Ph.D., F.L.A.

    1994-01-01

    This paper starts with a brief description of what a database is, followed by a short history of the development of the database system and its use; it also notes the proliferation of various kinds of databases in the 1990s. It then goes on to outline the background of establishing a Pearl River Delta Collection at the City University of Hong Kong and Hong Kong Lingnan College. One of the tasks in this project is to build up a database of Pearl River Delta-related materials available in all the UPGC (University and Polytechnic Grant Committee) libraries in Hong Kong. The database design and structure are described, and the problems associated with data collection, source data, and updating, together with their solutions, are explained.

  7. Database on Demand: insight how to build your own DBaaS

    Science.gov (United States)

    Gaspar Aparicio, Ruben; Coterillo Coz, Ignacio

    2015-12-01

    At CERN, a number of key database applications are running on user-managed MySQL, PostgreSQL and Oracle database services. The Database on Demand (DBoD) project was born out of an idea to provide the CERN user community with an environment to develop and run database services as a complement to the central Oracle-based database service. Database on Demand empowers users to perform certain actions that had traditionally been done by database administrators, providing an enterprise platform for database applications. It also allows the CERN user community to run different database engines; presently three major RDBMS (relational database management system) vendors are offered. In this article we show the current status of the service after almost three years of operation, give some insight into our redesigned software engineering, and outline its near-future evolution.

  8. Database on Demand: insight how to build your own DBaaS

    CERN Document Server

    Aparicio, Ruben Gaspar

    2015-01-01

    At CERN, a number of key database applications are running on user-managed MySQL, PostgreSQL and Oracle database services. The Database on Demand (DBoD) project was born out of an idea to provide the CERN user community with an environment to develop and run database services as a complement to the central Oracle-based database service. Database on Demand empowers users to perform certain actions that had traditionally been done by database administrators, providing an enterprise platform for database applications. It also allows the CERN user community to run different database engines; presently three major RDBMS (relational database management system) vendors are offered. In this article we show the current status of the service after almost three years of operation, give some insight into our redesigned software engineering, and outline its near-future evolution.

  9. Negative weights makes adversaries stronger

    CERN Document Server

    Hoyer, Peter; Lee, Troy; Spalek, Robert

    2006-01-01

    The quantum adversary method is one of the most successful techniques for proving lower bounds on quantum query complexity. It gives optimal lower bounds for many problems, has application to classical complexity in formula size lower bounds, and is versatile with equivalent formulations in terms of weight schemes, eigenvalues, and Kolmogorov complexity. All these formulations are information-theoretic and rely on the principle that if an algorithm successfully computes a function then, in particular, it is able to distinguish between inputs which map to different values. We present a stronger version of the adversary method which goes beyond this principle to make explicit use of the existence of a measurement in a successful algorithm which gives the correct answer, with high probability. We show that this new method, which we call ADV+-, has all the advantages of the old: it is a lower bound on bounded-error quantum query complexity, its square is a lower bound on formula size, and it behaves well with res...

  10. Strategy and your stronger hand.

    Science.gov (United States)

    Moore, Geoffrey A

    2005-12-01

    There are two kinds of businesses in the world, says the author. Knowing what they are--and which one your company is--will guide you to the right strategic moves. One kind includes businesses that compete on a complex-systems model. These companies have large enterprises as their primary customers. They seek to grow a customer base in the thousands, with no more than a handful of transactions per customer per year (indeed, in some years there may be none), and the average price per transaction ranges from six to seven figures. In this model, 1,000 enterprises each paying $1 million per year would generate $1 billion in annual revenue. The other kind of business competes on a volume-operations model. Here, vendors seek to acquire millions of customers, with tens or even hundreds of transactions per customer per year, at an average price of relatively few dollars per transaction. Under this model, it would take 10 million customers each spending $8 per month to generate nearly $1 billion in revenue. An examination of both models shows that they could not be further apart in their approach to every step along the classic value chain. The problem, though, is that companies in one camp often attempt to create new value by venturing into the other. In doing so, they fail to realize how their managerial habits have been shaped by the model they've grown up with. By analogy, they have a "handedness"--the equivalent of a person's right- or left-hand dominance--that makes them as adroit in one mode as they are awkward in the other. Unless you are in an industry whose structure forces you to attempt ambidexterity (in which case, special efforts are required to manage the inevitable dropped balls), you'll be far more successful making moves that favor your stronger hand. PMID:16334582

  11. A Stronger Reason for the Right to Sign Languages

    Science.gov (United States)

    Trovato, Sara

    2013-01-01

    Is the right to sign language only the right to a minority language? Holding a capability (not a disability) approach, and building on the psycholinguistic literature on sign language acquisition, I make the point that this right is of a stronger nature, since only sign languages can guarantee that each deaf child will properly develop the…

  12. Natural radioactivity in building materials in the European Union: a database and an estimate of radiological significance.

    Science.gov (United States)

    Trevisi, R; Risica, S; D'Alessandro, M; Paradiso, D; Nuccetelli, C

    2012-02-01

    The authors set up a database of activity concentration measurements of natural radionuclides (²²⁶Ra, ²³²Th and ⁴⁰K) in building material. It contains about 10,000 samples of both bulk material (bricks, concrete, cement, natural- and phosphogypsum, sedimentary and igneous bulk stones) and superficial material (igneous and metamorphic stones) used in the construction industry in most European Union Member States. The database allowed the authors to calculate the activity concentration index I--suggested by a European technical guidance document and recently used as a basis for elaborating the draft Euratom Basic Safety Standards Directive--for bricks, concrete and phosphogypsum used in the European Union. Moreover, the percentage could be assessed of materials possibly subject to restrictions, if either of the two dose criteria proposed by the technical guidance were to be adopted.
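The activity concentration index I mentioned above can be computed directly from the three radionuclide concentrations. A minimal sketch, assuming the index formula of the EU technical guidance document Radiation Protection 112 (the abstract cites the guidance but does not spell out the formula, and the concentrations below are made up for illustration):

```python
# Index I as given in EU technical guidance RP-112 (assumed here):
#   I = C_Ra226/300 + C_Th232/200 + C_K40/3000, concentrations in Bq/kg.
def activity_concentration_index(c_ra226, c_th232, c_k40):
    return c_ra226 / 300 + c_th232 / 200 + c_k40 / 3000

# Made-up concentrations for an illustrative brick sample (Bq/kg):
i = activity_concentration_index(50, 40, 600)
print(round(i, 3))  # 0.567
```

Materials whose index exceeds the relevant screening level would be the ones "possibly subject to restrictions" counted in the study.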

  13. Building a multi-scaled geospatial temporal ecology database from disparate data sources: fostering open science and data reuse.

    Science.gov (United States)

    Soranno, Patricia A; Bissell, Edward G; Cheruvelil, Kendra S; Christel, Samuel T; Collins, Sarah M; Fergus, C Emi; Filstrup, Christopher T; Lapierre, Jean-Francois; Lottig, Noah R; Oliver, Samantha K; Scott, Caren E; Smith, Nicole J; Stopyak, Scott; Yuan, Shuai; Bremigan, Mary Tate; Downing, John A; Gries, Corinna; Henry, Emily N; Skaff, Nick K; Stanley, Emily H; Stow, Craig A; Tan, Pang-Ning; Wagner, Tyler; Webster, Katherine E

    2015-01-01

    Although there are considerable site-based data for individual or groups of ecosystems, these datasets are widely scattered, have different data formats and conventions, and often have limited accessibility. At the broader scale, national datasets exist for a large number of geospatial features of land, water, and air that are needed to fully understand variation among these ecosystems. However, such datasets originate from different sources and have different spatial and temporal resolutions. By taking an open-science perspective and by combining site-based ecosystem datasets and national geospatial datasets, science gains the ability to ask important research questions related to grand environmental challenges that operate at broad scales. Documentation of such complicated database integration efforts, through peer-reviewed papers, is recommended to foster reproducibility and future use of the integrated database. Here, we describe the major steps, challenges, and considerations in building an integrated database of lake ecosystems, called LAGOS (LAke multi-scaled GeOSpatial and temporal database), that was developed at the sub-continental study extent of 17 US states (1,800,000 km²). LAGOS includes two modules: LAGOSGEO, with geospatial data on every lake with surface area larger than 4 ha in the study extent (~50,000 lakes), including climate, atmospheric deposition, land use/cover, hydrology, geology, and topography measured across a range of spatial and temporal extents; and LAGOSLIMNO, with lake water quality data compiled from ~100 individual datasets for a subset of lakes in the study extent (~10,000 lakes). Procedures for the integration of datasets included: creating a flexible database design; authoring and integrating metadata; documenting data provenance; quantifying spatial measures of geographic data; quality-controlling integrated and derived data; and extensively documenting the database. Our procedures make a large, complex, and integrated
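The two-module design described above can be illustrated with a toy join keyed on a lake identifier, attaching geospatial context to each water-quality record while keeping provenance. All field names and values below are hypothetical, not LAGOS's actual schema:

```python
# Hypothetical geospatial module: one row per lake, keyed by lake id.
lagos_geo = {
    1: {"area_ha": 120.0, "state": "MI"},
    2: {"area_ha": 45.5,  "state": "WI"},
}

# Hypothetical limnology module: water-quality observations compiled from
# separate source datasets, with provenance recorded per observation.
lagos_limno = [
    {"lake_id": 1, "source": "ds_a", "tp_ugL": 18.0},
    {"lake_id": 2, "source": "ds_b", "tp_ugL": 32.5},
    {"lake_id": 1, "source": "ds_c", "tp_ugL": 21.0},
]

# Integrate: join each observation to its lake's geospatial attributes,
# dropping observations whose lake id has no geospatial record.
integrated = [
    {**obs, **lagos_geo[obs["lake_id"]]}
    for obs in lagos_limno
    if obs["lake_id"] in lagos_geo
]
print(len(integrated))  # 3
```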

  14. Energy Performance Database of Building Heritage in the Region of Umbria, Central Italy

    OpenAIRE

    Cinzia Buratti; Francesco Asdrubali; Domenico Palladino; Antonella Rotili

    2015-01-01

    Household energy consumption has been increasing in the last decades; the residential sector is responsible for about 40% of the total final energy use in Europe. Energy efficiency measures can both reduce energy needs of buildings and energy-related CO2 emissions. For this reason, in recent years, the European Union has been making efforts to enhance energy saving in buildings by introducing various policies and strategies; in this context, a common methodology was developed to assess and t...

  15. Building-Up a comprehensive database of flavonoids based on nuclear magnetic resonance data.

    NARCIS (Netherlands)

    Moco, S.I.A.; Tseng, L.; Spraul, M.; Chen, Z.; Vervoort, J.J.M.

    2006-01-01

    The improvements in separation and analysis of complex mixtures by LC-NMR during the last decade have shifted its emphasis from data acquisition to data analysis. For correct data analysis, not only high quality datasets are necessary, but adequate software and adequate databases for semi (or fully)

  16. Building a Nationwide Bibliographic Database: The Role of Local Shared Automated Systems.

    Science.gov (United States)

    Wetherbee, Louella V.

    1992-01-01

    Discusses the actual and potential impact of local shared automated library systems on the development of a comprehensive nationwide bibliographic database (NBD). Shared local automated systems are described; four local shared automated system models are compared; and the current interface between local shared automated library systems and the NBD…

  17. A bioinformatics pipeline to build a knowledge database for in silico antibody engineering.

    Science.gov (United States)

    Zhao, Shanrong; Lu, Jin

    2011-04-01

    A challenge to antibody engineering is the large number of positions and nature of variation and opposing concerns of introducing unfavorable biochemical properties. While large libraries are quite successful in identifying antibodies with improved binding or activity, still only a fraction of possibilities can be explored and that would require considerable effort. The vast array of natural antibody sequences provides a potential wealth of information on (1) selecting hotspots for variation, and (2) designing mutants to mimic natural variations seen in hotspots. The human immune system can generate an enormous diversity of immunoglobulins against an almost unlimited range of antigens by gene rearrangement of a limited number of germline variable, diversity and joining genes followed by somatic hypermutation and antigen selection. All the antibody sequences in the NCBI database can be assigned to different germline genes. As a result, a position specific scoring matrix for each germline gene can be constructed by aligning all its member sequences and calculating the amino acid frequencies for each position. The position specific scoring matrix for each germline gene characterizes "hotspots" and the nature of variations, and thus reduces the sequence space of exploration in antibody engineering. We have developed a bioinformatics pipeline to conduct analysis of human antibody sequences, and generated a comprehensive knowledge database for in silico antibody engineering. The pipeline is fully automatic and the knowledge database can be refreshed at any time by re-running the pipeline. The refresh process is fast, typically taking 1 min on a Lenovo ThinkPad T60 laptop with 3 GB of memory. Our knowledge database consists of (1) the individual germline gene usage in generation of natural antibodies; (2) the CDR length distributions; and (3) the position specific scoring matrix for each germline gene. The knowledge database provides comprehensive support for antibody engineering
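The matrix construction described above (align member sequences, then compute per-position amino acid frequencies) can be sketched as column-wise counting. The sequences below are toy data, not real germline-assigned antibody sequences:

```python
from collections import Counter

# Toy aligned member sequences of one hypothetical germline gene
# (equal length, one character per alignment position).
aligned = ["QVQLV", "QVKLV", "EVQLV", "QVQLL"]

def pssm(seqs):
    """Position specific scoring matrix in frequency form: one dict of
    amino acid -> relative frequency per alignment column."""
    n = len(seqs)
    return [
        {aa: count / n for aa, count in Counter(col).items()}
        for col in zip(*seqs)  # iterate over alignment columns
    ]

matrix = pssm(aligned)
# Sharply peaked columns are conserved; flatter columns are candidate
# hotspots for variation, and their frequencies suggest which mutants
# mimic natural variation.
print(matrix[0])  # {'Q': 0.75, 'E': 0.25}
```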

  19. Visual Localization in Urban Area Using Orthogonal Building Boundaries and a GIS Database

    Institute of Scientific and Technical Information of China (English)

    LI Haifeng; LIU Jingtai; LU Xiang

    2012-01-01

    A framework is presented for robustly estimating the location of a mobile robot in urban areas from images taken by a monocular onboard camera, given only a 2D map of building outlines with neither 3D geometric information nor appearance data. The proposed method first reconstructs a set of vertical planes by sampling and clustering vertical lines from the image with random sample consensus (RANSAC), using the derived 1D homographies to inform the planar model. An optimal autonomous localization algorithm based on the 2D building boundary map is then proposed. Physical experiments validate the robustness and accuracy of the localization approach.
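
The consensus step can be illustrated with a minimal RANSAC sketch. Fitting a 2D line stands in for the paper's plane model (the actual method clusters vertical image lines via 1D homographies); the tolerance, iteration count, and sample points are arbitrary choices for illustration:

```python
import random

def ransac_line(points, iters=200, tol=0.1, seed=0):
    """Minimal RANSAC: repeatedly fit y = a*x + b to a random 2-point sample
    and keep the model with the most inliers (points within tol of the line)."""
    rng = random.Random(seed)
    best, best_inliers = None, []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:  # vertical sample: skip, slope undefined
            continue
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = [p for p in points if abs(p[1] - (a * p[0] + b)) < tol]
        if len(inliers) > len(best_inliers):
            best, best_inliers = (a, b), inliers
    return best, best_inliers

# Ten points on y = 2x + 1 plus two gross outliers.
pts = [(x, 2 * x + 1) for x in range(10)] + [(3, 40), (7, -5)]
(a, b), inliers = ransac_line(pts)
print(a, b, len(inliers))
```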

  20. Building a highly available and intrusion tolerant database security and protection system (DSPS)

    Institute of Scientific and Technical Information of China (English)

    蔡亮; 杨小虎; 董金祥

    2003-01-01

    The Database Security and Protection System (DSPS) is a security platform for defending a DBMS against malicious attacks; both security and performance are critical to it. The authors propose a key management scheme that combines a server-group structure, to improve availability, with the key distribution structure needed for proactive security, and this paper details the implementation of proactive security in DSPS. A thorough performance analysis leads to the conclusion that the performance difference between the replicated mechanism and the proactive mechanism becomes smaller and smaller as the number of concurrent connections increases, and that proactive security is useful and practical for large, critical applications.

  2. The FLUKA LineBuilder and Element Database: Tools for Building Complex Models of Accelerator Beam Lines

    CERN Document Server

    Mereghetti, A; Cerutti, F; Versaci, R; Vlachoudis, V

    2012-01-01

    Extended FLUKA models of accelerator beam lines can be extremely complex: heavy to manipulate, poorly versatile and prone to mismatched positioning. We developed a framework capable of creating the FLUKA model of an arbitrary portion of a given accelerator, starting from the optics configuration and a few other pieces of information provided by the user. The framework includes a builder (LineBuilder), an element database and a series of configuration and analysis scripts. The LineBuilder is a Python program aimed at dynamically assembling complex FLUKA models of accelerator beam lines: positions, magnetic fields and scorings are automatically set up, and geometry details such as apertures of collimators, tilting and misalignment of elements, beam pipes and tunnel geometries can be entered at the user's will. The element database (FEDB) is a collection of detailed FLUKA geometry models of machine elements. This framework has been widely used for recent LHC and SPS beam-machine interaction studies at CERN, and led to a dra...

  3. Building a fingerprint database for modern art materials: PIXE analysis of commercial painting and drawing media

    Science.gov (United States)

    Zucchiatti, A.; Climent-Font, A.; Gómez-Tejedor, J. García; Martina, S.; Muro García, C.; Gimeno, E.; Hernández, P.; Canelo, N.

    2015-11-01

    We have examined by PIXE (and in parallel by RBS) about 180 samples of commercial painting and drawing media, including pencils, pastels, waxes, inks, paints and paper. Given the high sensitivity of PIXE, we produced X-ray spectra at low collected charges and currents, operating in good conservation conditions. For drawing media containing inorganic components or a unique marker element, we have defined colouring-agent fingerprints which correspond, when applicable, to the composition declared by the manufacturer. For thin layers, the ratios of areal densities of elements are close to those expected from the declared composition, which is promising from the perspective of compiling the database. The quantitative PIXE and RBS analysis of part of the sample set is provided.

  4. Isotopic ratio analysis of cattle tail hair: A potential tool in building the database for cow milk geographical traceability.

    Science.gov (United States)

    Behkami, Shima; Zain, Sharifuddin Md; Gholami, Mehrdad; Bakirdere, Sezgin

    2017-02-15

    The potential of isotopic ratio analysis of cattle tail hair for determining the geographical origin of raw cow milk in Peninsular Malaysia was investigated using exploratory visualization. A significant positive correlation was observed between the isotopic ratios of milk and those of hair, indicating that these matrices could be used in tracing the geographical origin of animal produce and tissues, and that hair could possibly serve as a substitute in building the database for the geographical traceability of milk. It was also observed that both the hair and milk isotopic ratio correlations exhibited a separation between the northern and southern regions. The accuracy of using isotopic ratios for geographical discrimination was clearly demonstrated when several commercial milk samples from the regions under study were correctly assigned to the appropriate geographical clusters. PMID:27664656

  5. Building a Patient-Specific Risk Score with a Large Database of Discharge Summary Reports.

    Science.gov (United States)

    Qu, Zhi; Zhao, Lue Ping; Ma, Xiemin; Zhan, Siyan

    2016-01-01

    BACKGROUND There is increasing interest in clinical research with electronic medical data, but it often faces the challenge of heterogeneity between hospitals. Our objective was to develop a single numerical score characterizing such heterogeneity by computing inpatient mortality for acute myocardial infarction (AMI) patients, based on diagnostic information recorded in a database of Discharge Summary Reports (DSRs). MATERIAL AND METHODS Using 4 216 135 DSRs from 49 tertiary hospitals in Beijing from 2006 to 2010, more than 200 secondary diagnoses were identified to develop a risk score for AMI (n=50 531). This risk score was independently validated with 21 571 DSRs from 65 tertiary hospitals in 2012. The c-statistic of the new risk score was computed as a measure of discrimination and was compared with the Charlson comorbidity index (CCI) and its adaptations for further validation. RESULTS We identified and weighted 22 secondary diagnoses using a logistic regression model. In the external validation, the novel risk score performed better than the widely used CCI in predicting in-hospital mortality of AMI patients (c-statistics: 0.829, 0.832, 0.824 vs. 0.775, 0.773, and 0.710 in the training, testing, and validation datasets, respectively). CONCLUSIONS The new risk score developed from DSRs outperforms existing administrative-data-based scores when applied to healthcare data from China. This risk score can be used to adjust for heterogeneity between hospitals when clinical data from multiple hospitals are included. PMID:27318825
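
The c-statistic used to compare the scores is simply the probability that a randomly chosen patient who died received a higher risk score than one who survived. A pure-Python sketch, with hypothetical scores and outcomes rather than the study's data:

```python
def c_statistic(scores, outcomes):
    """Concordance (c-statistic / AUC): fraction of (death, survival) pairs
    in which the patient who died has the higher risk score; ties count 0.5."""
    pos = [s for s, y in zip(scores, outcomes) if y == 1]
    neg = [s for s, y in zip(scores, outcomes) if y == 0]
    concordant = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                concordant += 1.0
            elif p == n:
                concordant += 0.5
    return concordant / (len(pos) * len(neg))

# Hypothetical risk scores (e.g. sums of weights for the secondary
# diagnoses present) and in-hospital mortality outcomes (1 = died).
scores   = [0.2, 1.4, 0.7, 2.1, 1.0, 0.3]
outcomes = [0,   1,   0,   1,   0,   1  ]
print(c_statistic(scores, outcomes))
```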

  6. Towards Global QSAR Model Building for Acute Toxicity: Munro Database Case Study

    Directory of Open Access Journals (Sweden)

    Swapnil Chavan

    2014-10-01

    Full Text Available A series of 436 Munro database chemicals were studied with respect to their experimental LD50 values to investigate the possibility of establishing a global QSAR model for acute toxicity. Dragon molecular descriptors were used for QSAR model development, and genetic algorithms were used to select the descriptors best correlated with the toxicity data. Toxicity values were discretized into qualitative classes on the basis of the Globally Harmonized Scheme: the 436 chemicals were divided into three classes based on their experimental LD50 values: highly toxic, intermediately toxic, and low- to non-toxic. The k-nearest neighbor (k-NN) classification method was calibrated on 25 molecular descriptors and gave a non-error rate (NER) equal to 0.66 and 0.57 for the internal and external prediction sets, respectively. Even if the classification performance is not optimal, the subsequent analysis of the selected descriptors and their relationship with toxicity levels constitutes a step towards the development of a global QSAR model for acute toxicity.
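
A minimal k-NN classifier with the non-error rate used as the performance metric can be sketched as follows. The two-descriptor toy data and k=3 are purely illustrative, not the calibrated 25-descriptor model from the study:

```python
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """Classify by majority vote among the k nearest training points
    (Euclidean distance over the molecular-descriptor vector)."""
    neighbors = sorted(train, key=lambda t: math.dist(t[0], query))[:k]
    return Counter(label for _, label in neighbors).most_common(1)[0][0]

def non_error_rate(pred_true_pairs):
    """Non-error rate (NER): the mean of the per-class accuracies."""
    per_class = {}
    for pred, true in pred_true_pairs:
        ok, n = per_class.get(true, (0, 0))
        per_class[true] = (ok + (pred == true), n + 1)
    return sum(ok / n for ok, n in per_class.values()) / len(per_class)

# Hypothetical two-descriptor data with GHS-style toxicity classes.
train = [
    ((0.1, 0.2), "highly toxic"), ((0.2, 0.1), "highly toxic"),
    ((0.5, 0.5), "intermediate"), ((0.6, 0.4), "intermediate"),
    ((0.9, 0.9), "low/non-toxic"), ((0.8, 1.0), "low/non-toxic"),
]
test_set = [((0.15, 0.15), "highly toxic"), ((0.55, 0.45), "intermediate"),
            ((0.85, 0.95), "low/non-toxic")]
pairs = [(knn_predict(train, x), y) for x, y in test_set]
print(non_error_rate(pairs))
```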

  7. The IMPEx Protocol - building bridges between scientific databases and online tools

    Science.gov (United States)

    Al-Ubaidi, T.; Khodachenko, M. L.; Kallio, E. J.; Génot, V.; Modolo, R.; Hess, S.; Schmidt, W.; Scherf, M.; Topf, F.; Alexeev, I. I.; Gangloff, M.; Budnik, E.; Bouchemit, M.; Renard, B.; Bourrel, N.; Penou, E.; André, N.; Belenkaya, E. S.

    2014-04-01

    The FP7-SPACE project IMPEx (http://impex-fp7.oeaw.ac.at) was established as a result of scientific collaboration between research teams from Austria, Finland, France, and Russia, working on the integration of a set of data mining, analysis and modeling tools in the field of space plasma and planetary physics. The primary goal of the project is to bridge the gap between spacecraft measurements and up-to-date computational models of planetary environments, enabling their joint operation for a better understanding of related physical phenomena. The IMPEx Protocol constitutes one of the cornerstones of the integration effort. While the IMPEx Data Model assures that the information exchanged can be 'understood' and hence processed by every participating tool or database system, the protocol provides the means to leverage specific functionalities of the respective host system in conjunction with the data provided. Examples thereof would be services for calculating field lines and particle trajectories, on-the-fly modeling runs with specific parameters and so forth. Additionally there are also utility methods available that allow to e.g. access specific data files or support search interfaces by providing ranked lists of stored modeling runs for a given set of (upstream) parameters. The presentation offers an overview of the IMPEx protocol and addresses the motivation for some of the (technical)design decisions taken during the development process. Further the resulting SOAP based web service interface is discussed and individual services and their applications are addressed specifically. Last but not least the first available implementations of the protocol are presented and a brief overview of tools already leveraging the IMPEx protocol is provided. The presentation closes with an outlook on possible future applications as well as extensions of the IMPEx protocol, including information on how to get started when implementing the IMPEx protocol, in order to join the

  8. Building and Analyzing SURGEDAT: The World's Most Comprehensive Storm Surge Database

    Science.gov (United States)

    Needham, H.; Keim, B.

    2012-12-01

    SURGEDAT, the world's most comprehensive tropical storm surge database, has identified and mapped the location and height of hundreds of global storm surge events. This project originated with a study that identified more than 200 tropical surge events along the U.S. Gulf Coast. Spatial analysis of these data reveals that the central and western Gulf Coast observe more frequent and higher-magnitude surges, whereas much of the eastern Gulf Coast, including the west coast of Florida, experiences less storm surge activity. Basin-wide return period analysis of these data estimates a 100-year return level of 8.20 m and a 10-year return level of 4.95 m. Return period analysis of 10 sub-regions within the basin reveals that the highest surge levels occur in the Southeast Louisiana/Mississippi zone, which includes the New Orleans metropolitan area; the 100-year surge level in this zone is estimated to be 7.67 m. The Southeast Texas/Southwest Louisiana zone, which includes the Houston metropolitan area, has the second highest surge levels, with a 100-year storm surge estimate of 6.30 m. Surge levels are lower on the west coast of Florida, where the 100-year surge level is estimated between three and four meters. Expansion of this work includes mapping all high water marks for each surge event and the creation of a search-by-location web tool, which enables users to see the entire storm surge history for specific locations. In addition, the dataset has expanded internationally and now includes more than 500 surge events, as surges have been identified in all the major ocean basins that experience tropical cyclones. International partnerships are sought to further expand this work, particularly in Australia, China, Japan, the Philippines, India, Bangladesh, Myanmar, Mexico and various countries in Oceania and the Caribbean.
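
A rank-based (Weibull plotting position) return-period estimate of the kind underlying such analyses can be sketched as follows; the surge heights and 49-year record length below are invented, not SURGEDAT values:

```python
def return_levels(surges, record_years):
    """Assign empirical return periods via the Weibull plotting position:
    the m-th largest surge in an N-year record gets T = (N + 1) / m years."""
    ranked = sorted(surges, reverse=True)
    return [((record_years + 1) / m, h) for m, h in enumerate(ranked, start=1)]

# Hypothetical peak surge heights (meters) observed over a 49-year record.
surges = [7.5, 6.1, 5.2, 4.8, 4.4, 3.9, 3.6, 3.1, 2.8, 2.5]
for T, h in return_levels(surges, 49):
    if T >= 10:
        print(f"{T:.1f}-yr return level: {h} m")
```

With a Gumbel or other extreme-value fit, levels beyond the observed record (such as a 100-year level) can be extrapolated; the rank-based estimate alone is limited to return periods up to N+1 years.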

  9. Brain Gym[R]: Building Stronger Brains or Wishful Thinking?

    Science.gov (United States)

    Hyatt, Keith J.

    2007-01-01

    As part of the accountability movement, schools are increasingly called upon to provide interventions that are based on sound scientific research and that provide measurable outcomes for children. Brain Gym[R] is a popular commercial program claiming that adherence to its regimen will result in more efficient learning in an almost miraculous…

  10. The Freight Analysis Framework Version 4 (FAF4) - Building the FAF4 Regional Database: Data Sources and Estimation Methodologies

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Ho-Ling [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Hargrove, Stephanie [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Chin, Shih-Miao [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Wilson, Daniel W. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Lim, Hyeonsup [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Chen, Jiaoli [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Taylor, Rob [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Peterson, Bruce [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Davidson, Diane [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-09-09

    The Freight Analysis Framework (FAF) integrates data from a variety of sources to create a comprehensive national picture of freight movements among states and major metropolitan areas by all modes of transportation. It provides a national picture of current freight flows to, from, and within the United States, assigns the flows to the transportation network, and projects freight flow patterns into the future. The FAF4 is the fourth database of its kind: FAF1 provided estimates for truck, rail, and water tonnage for calendar year 1998; FAF2 provided a more complete picture based on the 2002 Commodity Flow Survey (CFS); and FAF3 made further improvements building on the 2007 CFS. Since the first FAF effort, a number of changes in both data sources and products have taken place. The FAF4 flow matrix described in this report is used as the base-year data to forecast future freight activities, projecting shipment weights and values from year 2020 to 2045 in five-year intervals. It also provides the basis for annual estimates of the FAF4 flow matrix, aiming to provide users with the timeliest data. Furthermore, FAF4 truck freight is routed on the national highway network to produce the FAF4 network database and flow assignments for trucks. This report details the data sources and methodologies applied to develop the base-year 2012 FAF4 database. An overview of the FAF4 components is briefly discussed in Section 2. Effects on FAF4 from the changes in the 2012 CFS are highlighted in Section 3. Section 4 provides a general discussion of the process used in filling data gaps within the domestic CFS matrix, specifically the estimation of CFS suppressed/unpublished cells. Over a dozen CFS out-of-scope (OOS) components of FAF4 are then addressed in Section 5 through Section 11 of this report. This includes discussions of farm-based agricultural shipments in Section 5, shipments from the fishery and logging sectors in Section 6, and shipments of municipal solid wastes and debris from construction
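
One generic way to estimate suppressed cells in a flow matrix, when row and column totals are published, is iterative proportional fitting. This is only an illustration of that idea, not the FAF4 estimation methodology, and all numbers are hypothetical:

```python
def ipf(matrix, row_totals, col_totals, iters=500):
    """Iterative proportional fitting: alternately rescale rows and columns
    of a seed origin-destination matrix until its margins match the known
    row/column totals."""
    m = [row[:] for row in matrix]
    for _ in range(iters):
        for i, rt in enumerate(row_totals):
            s = sum(m[i])
            if s:
                m[i] = [v * rt / s for v in m[i]]
        for j, ct in enumerate(col_totals):
            s = sum(m[i][j] for i in range(len(m)))
            if s:
                for i in range(len(m)):
                    m[i][j] *= ct / s
    return m

# Seed matrix: published cells kept, suppressed cells set to a neutral 1.0;
# margins are the published row/column tonnage totals (hypothetical numbers).
seed = [[10.0, 1.0], [1.0, 20.0]]
balanced = ipf(seed, row_totals=[30.0, 70.0], col_totals=[40.0, 60.0])
print([[round(v, 1) for v in row] for row in balanced])
```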

  11. The Freight Analysis Framework Version 4 (FAF4) - Building the FAF4 Regional Database: Data Sources and Estimation Methodologies

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Ho-Ling [ORNL; Hargrove, Stephanie [ORNL; Chin, Shih-Miao [ORNL; Wilson, Daniel W [ORNL; Taylor, Rob D [ORNL; Davidson, Diane [ORNL

    2016-09-01

    The Freight Analysis Framework (FAF) integrates data from a variety of sources to create a comprehensive national picture of freight movements among states and major metropolitan areas by all modes of transportation. It provides a national picture of current freight flows to, from, and within the United States, assigns the flows to the transportation network, and projects freight flow patterns into the future. The FAF4 is the fourth database of its kind: FAF1 provided estimates for truck, rail, and water tonnage for calendar year 1998; FAF2 provided a more complete picture based on the 2002 Commodity Flow Survey (CFS); and FAF3 made further improvements building on the 2007 CFS. Since the first FAF effort, a number of changes in both data sources and products have taken place. The FAF4 flow matrix described in this report is used as the base-year data to forecast future freight activities, projecting shipment weights and values from year 2020 to 2045 in five-year intervals. It also provides the basis for annual estimates of the FAF4 flow matrix, aiming to provide users with the timeliest data. Furthermore, FAF4 truck freight is routed on the national highway network to produce the FAF4 network database and flow assignments for trucks. This report details the data sources and methodologies applied to develop the base-year 2012 FAF4 database. An overview of the FAF4 components is briefly discussed in Section 2. Effects on FAF4 from the changes in the 2012 CFS are highlighted in Section 3. Section 4 provides a general discussion of the process used in filling data gaps within the domestic CFS matrix, specifically the estimation of CFS suppressed/unpublished cells. Over a dozen CFS out-of-scope (OOS) components of FAF4 are then addressed in Section 5 through Section 11 of this report. This includes discussions of farm-based agricultural shipments in Section 5, shipments from the fishery and logging sectors in Section 6, and shipments of municipal solid wastes and debris from construction

  12. Bibliometric analysis of Spanish scientific publications in the subject Construction & Building Technology in Web of Science database (1997-2008

    Directory of Open Access Journals (Sweden)

    Rojas-Sola, J. I.

    2010-12-01

    Full Text Available In this paper the publications from Spanish institutions listed in the journals of the Construction & Building Technology subject category of the Web of Science database for the period 1997-2008 are analyzed. The number of journals involved is 35, and the number of articles published (of type Article or Review) was 760. A bibliometric assessment has been carried out, and two new parameters are proposed: the Weighted Impact Factor and the Relative Impact Factor; the number of citations and the number of documents at the institutional level are also included. Among the institutions with the greatest scientific production stands, as expected, the Eduardo Torroja Institute of Construction Sciences (CSIC), while by Weighted Impact Factor the University of Vigo ranks first. On the other hand, only two journals, Cement and Concrete Materials and Materiales de Construcción, account for 45.26% of the Spanish scientific production published in the Construction & Building Technology category, with 172 papers each. Regarding international cooperation, the main partner countries include England, Mexico, the United States, Italy, Argentina and France.

  13. Chemical reaction due to stronger Ramachandran interaction

    Indian Academy of Sciences (India)

    Andrew Das Arulsamy

    2014-05-01

    The origin of a chemical reaction between two reactant atoms is usually associated with the activation energy, on the assumption that high-energy collisions between these atoms are the ones that overcome it. Here, we show that stronger attractive van der Waals (vdW) and electron-ion Coulomb interactions between two polarized atoms are responsible for initiating a chemical reaction, either before or after the collision. We derive this stronger vdW attraction formula exactly using the quasi-one-dimensional Drude model within the ionization energy theory and the energy-level-spacing renormalization group method. Along the way, we expose the precise physical mechanism responsible for the existence of a stronger vdW interaction at both long and short distances, and also show how to technically avoid the electron-electron Coulomb repulsion between polarized electrons from the two reactant atoms. Finally, we properly associate the existence of this stronger attraction with Ramachandran's 'normal limits' (distances shorter than allowed by the standard vdW bond) between chemically non-bonded atoms.

  14. LHC Season 2: A stronger machine

    CERN Multimedia

    Dominguez, Daniel

    2015-01-01

    1) New magnets 2) Stronger connections 3) Safer magnets 4) Higher energy beams 5) Narrower beams 6) Smaller but closer proton packets 7) Higher voltage 8) Superior cryogenics 9) Radiation-resistant electronics 10) More secure vacuum

  15. THE BUILDING OF THE SPATIAL DATABASE OF THE SATCHINEZ ORNITHOLOGICAL RESERVE AS A PREMISE OF MODERN ECOLOGICAL RESEARCH

    Directory of Open Access Journals (Sweden)

    M. Török-Oance

    2005-01-01

    Full Text Available The creation of a database for the Ornithological Reserve "The Satchinez Marshes" was a necessity for modern, complex ecological research. The database makes possible the precise localization of the plant and animal species identified in the field, and it is a genuine basis for identifying the main types of habitats and ecosystems in the reserve. With the help of the Digital Terrain Model, pixel-level analysis of the abiotic factors involved in the distribution of ecosystems became possible, as well as three-dimensional visualization of all the results. Using aerial photographs taken in 1963 and 1973, we reconstructed the state of the reserve at those dates by creating a database of land use at the time. The same was done for 2004, using more diverse sources: cadastral plans, satellite images, aerial photos taken in the same year and, last but not least, data collected in the field in 2003-2004. This database can also be considered a benchmark (for the year 2004) for identifying the changes that occurred between 1963 and 2004, as well as a basis for future research.

  16. States agree on stronger physical protection regime

    International Nuclear Information System (INIS)

    Full text: Delegates from 89 countries agreed on 8 July to fundamental changes that will substantially strengthen the Convention on the Physical Protection of Nuclear Material (CPPNM). IAEA Director General Mohamed ElBaradei welcomed the agreement, saying: 'This new and stronger treaty is an important step towards greater nuclear security by combating, preventing, and ultimately punishing those who would engage in nuclear theft, sabotage or even terrorism. It demonstrates that there is indeed a global commitment to remedy weaknesses in our nuclear security regime.' The amended CPPNM makes it legally binding for States Parties to protect nuclear facilities and material in peaceful domestic use and storage as well as in transport. It will also provide for expanded cooperation between and among States regarding rapid measures to locate and recover stolen or smuggled nuclear material, mitigate any radiological consequences of sabotage, and prevent and combat related offences. The original CPPNM applied only to nuclear material in international transport. Conference President Dr. Alec Baer said: 'All 89 delegations demonstrated real unity of purpose. They put aside some very genuine national concerns in favour of the global interest, and the result is a much improved convention that is better suited to addressing the nuclear security challenges we currently face.' The new rules will come into effect once they have been ratified by two-thirds of the 112 States Parties to the Convention, which is expected to take several years. 'But concrete actions are already taking place around the world. For more than three years, the IAEA has been implementing a systematic Nuclear Security Plan, including physical protection activities designed to prevent, detect and respond to malicious acts,' said Anita Nillson, Director of the IAEA's Office of Nuclear Security. The Agency's Nuclear Security Fund, set up after the events of 9/11, has delivered $19.5 million in practical assistance to 121 countries.

  17. Data on publications, structural analyses, and queries used to build and utilize the AlloRep database.

    Science.gov (United States)

    Sousa, Filipa L; Parente, Daniel J; Hessman, Jacob A; Chazelle, Allen; Teichmann, Sarah A; Swint-Kruse, Liskin

    2016-09-01

    The AlloRep database (www.AlloRep.org) (Sousa et al., 2016) [1] compiles extensive sequence, mutagenesis, and structural information for the LacI/GalR family of transcription regulators. Sequence alignments are presented for >3000 proteins in 45 paralog subfamilies and as a subsampled alignment of the whole family. Phenotypic and biochemical data on almost 6000 mutants have been compiled from an exhaustive search of the literature; citations for these data are included herein. These data include information about oligomerization state, stability, DNA binding and allosteric regulation. Protein structural data for 65 proteins are presented as easily-accessible, residue-contact networks. Finally, this article includes example queries to enable the use of the AlloRep database. See the related article, "AlloRep: a repository of sequence, structural and mutagenesis data for the LacI/GalR transcription regulators" (Sousa et al., 2016) [1]. PMID:27508249
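
As an illustration of the kind of example queries the article describes, here is a sketch against a toy, invented schema; the real AlloRep tables, field names, and query interface may differ:

```python
import sqlite3

# Hypothetical stand-in schema for a mutant-phenotype table.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE mutants (protein TEXT, position INTEGER, "
            "substitution TEXT, phenotype TEXT)")
con.executemany("INSERT INTO mutants VALUES (?, ?, ?, ?)", [
    ("LacI", 52, "A", "reduced DNA binding"),
    ("LacI", 52, "V", "wild-type"),
    ("GalR", 103, "D", "loss of allostery"),
])

# Example query: all recorded substitutions at one LacI position.
rows = con.execute("SELECT substitution, phenotype FROM mutants "
                   "WHERE protein = 'LacI' AND position = 52").fetchall()
print(rows)
```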

  18. Essentially stronger - 1999 EPCOR annual report

    International Nuclear Information System (INIS)

    The year 1999 was a year of consolidation for EPCOR Utilities, uniting the former brands of Edmonton Power, Aquaalta and Eltec under a new single brand, EPCOR, to provide Edmontonians with a safe, high-quality and reliable essential service at competitive prices. The company is building for growth by augmenting its product line with natural gas and green power, accessing new capital, proceeding with new projects at various sites, and creating the EPCOR Power Development Corporation with an ambitious mandate to grow beyond the Utilities' traditional service areas. In proof of that, EPCOR Water Services won a strategically important contract in Port Hardy, BC, and EPCOR Technologies has been involved in projects beyond Alberta. As a sign of confidence in the company, the City of Edmonton voted in July to retain ownership. The utility also won national awards for both safety and environmental practices and is the first utility company to have all its generating plants meet ISO 14001 standards. During 2000 the company will tackle the evolution of industry restructuring and will explore more diverse financial structures to accommodate growth and the increase in demand for services, to ensure that EPCOR will be a leading provider of electric power and natural gas services as the era of deregulated, competitive electrical services begins in Alberta in 2001. This report provides details of the achievements of the company's business units in 1999, accompanied by a consolidated financial statement.

  19. A Human Capital Framework for a Stronger Teacher Workforce. Advancing Teaching--Improving Learning. White Paper

    Science.gov (United States)

    Myung, Jeannie; Martinez, Krissia; Nordstrum, Lee

    2013-01-01

    Building a stronger teacher workforce requires the thoughtful orchestration of multiple processes working together in a human capital system. This white paper presents a framework that can be used to take stock of current efforts to enhance the teacher workforce in school districts or educational organizations, as well as their underlying theories…

  20. Building a Semantic Retrieval Model for Local Cultural Characteristics Databases

    Institute of Scientific and Technical Information of China (English)

    王智刚

    2015-01-01

    Local cultural characteristics databases collect the cultural information resources unique to a particular region, and many information and intelligence departments have established such characteristic resource databases. Taking the construction of the CALIS Phase III characteristic database "Wudang Culture Database" as an example, this paper applies Semantic Web technology: by analyzing the key technologies of the semantic gateway and constructing domain ontology instances, a retrieval model for characteristic databases based on semantic retrieval technology is proposed. Finally, the key technologies needed to implement the model are discussed in detail.

  1. Building strategies for tsunami scenarios databases to be used in a tsunami early warning decision support system: an application to western Iberia

    Science.gov (United States)

    Tinti, S.; Armigliato, A.; Pagnoni, G.; Zaniboni, F.

    2012-04-01

    One of the most challenging goals that the geoscientific community is facing after the catastrophic tsunami of December 2004 in the Indian Ocean is to develop the so-called "next generation" Tsunami Early Warning Systems (TEWS). The term "next generation" does not refer to the aim of a TEWS, which obviously remains to detect whether a tsunami has been generated by a given source and, if so, to send proper warnings and/or alerts in suitable time to all the countries and communities that can be affected. Rather, "next generation" refers to the development of a Decision Support System (DSS) that, in general terms, relies on 1) an integrated set of seismic, geodetic and marine sensors whose objective is to detect and characterise possible tsunamigenic sources and to monitor instrumentally the time and space evolution of the generated tsunami, 2) databases of pre-computed numerical tsunami scenarios, to be suitably combined based on the information coming from the sensor environment and used to forecast the degree of exposure of different coastal places in both the near- and the far-field, and 3) a proper overall (software) system architecture. The EU-FP7 TRIDEC Project aims at developing such a DSS and has selected two test areas in the Euro-Mediterranean region, namely the western Iberian margin and the eastern Mediterranean (Turkish coasts). In this study, we discuss the strategies being adopted in TRIDEC to build the databases of pre-computed tsunami scenarios, and we show some applications to the western Iberian margin. In particular, two different databases are being populated, called the "Virtual Scenario Database" (VSDB) and the "Matching Scenario Database" (MSDB). The VSDB contains detailed simulations of a few selected earthquake-generated tsunamis. The cases provided by the members of the VSDB are computed "real events"; in other words, they represent the unknowns that the TRIDEC
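
The role of a matching-scenario database can be illustrated with a toy nearest-scenario lookup: given source parameters estimated from the sensors, retrieve the closest pre-computed simulation. The scenario keys, query values and magnitude weighting below are invented for illustration and are not TRIDEC parameters:

```python
import math

# Hypothetical pre-computed scenarios keyed by source parameters
# (lon, lat, magnitude); values would point at stored simulation results.
scenarios = {
    (-10.0, 36.5, 8.0): "scenario_A",
    (-9.5, 36.0, 8.5): "scenario_B",
    (-11.0, 37.0, 7.5): "scenario_C",
}

def closest_scenario(lon, lat, mag, mag_weight=50.0):
    """Pick the pre-computed scenario nearest to the detected source;
    magnitude mismatch is weighted far more heavily than epicentral offset
    (the weight is an illustrative choice)."""
    def dist(key):
        klon, klat, kmag = key
        return math.hypot(klon - lon, klat - lat) + mag_weight * abs(kmag - mag)
    return scenarios[min(scenarios, key=dist)]

print(closest_scenario(-9.7, 36.2, 8.4))
```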

  2. Building confidence in the disposal safety case through scientific excellence: The OECD Nuclear Energy Agency Thermochemical Database

    International Nuclear Information System (INIS)

    The OECD Nuclear Energy Agency (NEA) Thermochemical Database (TDB) is the product of an ongoing cooperative project to assemble a comprehensive, internally consistent and quality-assured thermochemical database for chemical elements selected for their relevance to the assessment of disposal safety. Major selection criteria for the inclusion of elements are mobility, radiotoxicity, inventory and half-life. The project is now in its 20th year, and arose from the realization that existing databases lacked internal consistency or were not sufficiently documented to allow tracing of the original data sources. This resulted in inconsistent results, e.g., from the same code when using different databases for the same conditions. Thus, increased confidence in the data was needed in order to take advantage, unequivocally, of the powerful insights provided by chemical thermodynamics in performing safety analyses. Confidence in the quality and applicability of the selected data is built, in the first place, through adherence to procedures that are firmly established in the scientific community: formalized and traceable expert judgment, critical review by peers, and open publication of both the data and the process for their selection, with the possibility of feedback from experience and new insights. All procedures are specified in the project guidelines, which have remained essentially unchanged since the early stages of the project. The effort so far has resulted in the publication of thermochemical data for eight elements comprising major actinides and fission and activation products. Added values of the project are (a) that the thermochemical data are applicable to a variety of disposal systems and to potential applications beyond disposal; (b) the training of qualified personnel to support safety assessment, which is an additional gauge of confidence in the latter; and (c) an efficient use of resources.
For these reasons, the NEA TDB has evolved into a reference tool

  3. IP Geolocation Databases: Unreliable?

    OpenAIRE

    Poese, Ingmar; Uhlig, Steve; Kaafar, Mohamed Ali; Donnet, Benoît; Gueye, Bamba

    2011-01-01

    The most widely used technique for IP geolocation consists in building a database to keep the mapping between IP blocks and a geographic location. Several databases are available and are frequently used by many services and web sites on the Internet. Contrary to widespread belief, geolocation databases are far from being as reliable as they claim. In this paper, we conduct a comparison of several current geolocation databases (both commercial and free) to gain insight into the limitation...
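    The core mechanism described above, mapping IP blocks to locations and answering lookups with the most specific matching block, can be sketched in a few lines. The tiny GEO_DB table below is invented for illustration; real databases hold millions of entries of varying (and, as the paper argues, often questionable) accuracy.

```python
import ipaddress

# Hypothetical mini "geolocation database": IP blocks mapped to locations.
# Entries are made up; a real database would be loaded from vendor data.
GEO_DB = {
    ipaddress.ip_network("192.0.2.0/24"): "Berlin, DE",
    ipaddress.ip_network("198.51.100.0/25"): "Paris, FR",
    ipaddress.ip_network("198.51.100.0/24"): "France (country-level)",
}

def geolocate(ip):
    """Return the location of the most specific block containing `ip`."""
    addr = ipaddress.ip_address(ip)
    matches = [net for net in GEO_DB if addr in net]
    if not matches:
        return None
    # Longest prefix wins: the most specific block is assumed most accurate.
    best = max(matches, key=lambda net: net.prefixlen)
    return GEO_DB[best]
```

Comparing databases, as the paper does, then amounts to running such lookups against each database and measuring how far the answers disagree.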

  4. Method of Cadastral Database Building Based on Python and SQL

    Institute of Scientific and Technical Information of China (English)

    陈秀萍; 郭忠明; 吕翠华

    2013-01-01

    Taking a town in Yunnan province as an example, this article describes how several public software platforms were used together with Python programming and SQL queries to build a cadastral database. The approach reduced production costs and increased efficiency, while also considerably improving the operators' comprehensive professional skills.
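    The workflow described, driving SQL from Python to populate and check a cadastral database, might look like the following minimal sketch. It uses SQLite for self-containment; the table layout and records are invented, not taken from the project.

```python
import sqlite3

# Create an in-memory cadastral database; table and column names
# (parcels, parcel_id, owner, land_use, area_m2) are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE parcels (
        parcel_id TEXT PRIMARY KEY,
        owner     TEXT NOT NULL,
        land_use  TEXT,
        area_m2   REAL CHECK (area_m2 > 0)
    )
""")

# Bulk-load survey records, as a script would after reading exported files.
records = [
    ("P-001", "Zhang", "residential", 420.5),
    ("P-002", "Li",    "commercial",  1310.0),
]
conn.executemany("INSERT INTO parcels VALUES (?, ?, ?, ?)", records)
conn.commit()

# A SQL query then checks the loaded data, e.g. total area per land-use class.
rows = conn.execute(
    "SELECT land_use, SUM(area_m2) FROM parcels GROUP BY land_use ORDER BY land_use"
).fetchall()
```

Scripted loading plus SQL validation queries is what replaces repetitive manual work in such a data-building workflow.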

  5. Building a medical multimedia database system to integrate clinical information: an application of high-performance computing and communications technology.

    Science.gov (United States)

    Lowe, H J; Buchanan, B G; Cooper, G F; Vries, J K

    1995-01-01

    The rapid growth of diagnostic-imaging technologies over the past two decades has dramatically increased the amount of nontextual data generated in clinical medicine. The architecture of traditional, text-oriented, clinical information systems has made the integration of digitized clinical images with the patient record problematic. Systems for the classification, retrieval, and integration of clinical images are in their infancy. Recent advances in high-performance computing, imaging, and networking technology now make it technologically and economically feasible to develop an integrated, multimedia, electronic patient record. As part of The National Library of Medicine's Biomedical Applications of High-Performance Computing and Communications program, we plan to develop Image Engine, a prototype microcomputer-based system for the storage, retrieval, integration, and sharing of a wide range of clinically important digital images. Images stored in the Image Engine database will be indexed and organized using the Unified Medical Language System Metathesaurus and will be dynamically linked to data in a text-based, clinical information system. We will evaluate Image Engine by initially implementing it in three clinical domains (oncology, gastroenterology, and clinical pathology) at the University of Pittsburgh Medical Center.

  6. CoCoTools: open-source software for building connectomes using the CoCoMac anatomical database.

    Science.gov (United States)

    Blumenfeld, Robert S; Bliss, Daniel P; Perez, Fernando; D'Esposito, Mark

    2014-04-01

    Neuroanatomical tracer studies in the nonhuman primate macaque monkey are a valuable resource for cognitive neuroscience research. These data ground theories of cognitive function in anatomy, and with the emergence of graph theoretical analyses in neuroscience, there is high demand for these data to be consolidated into large-scale connection matrices ("macroconnectomes"). Because manual review of the anatomical literature is time consuming and error prone, computational solutions are needed to accomplish this task. Here we describe the "CoCoTools" open-source Python library, which automates collection and integration of macaque connectivity data for visualization and graph theory analysis. CoCoTools both interfaces with the CoCoMac database, which houses a vast amount of annotated tracer results from 100 years (1905-2005) of neuroanatomical research, and implements coordinate-free registration algorithms, which allow studies that use different parcellations of the brain to be translated into a single graph. We show that using CoCoTools to translate all of the data stored in CoCoMac produces graphs with properties consistent with what is known about global brain organization. Moreover, in addition to describing CoCoTools' processing pipeline, we provide worked examples, tutorials, links to on-line documentation, and detailed appendices to aid scientists interested in using CoCoTools to gather and analyze CoCoMac data. PMID:24116839

  8. Building a Learning Database for the Neural Network Retrieval of Sea Surface Salinity from SMOS Brightness Temperatures

    CERN Document Server

    Ammar, Adel; Obligis, Estelle; Crépon, Michel; Thiria, Sylvie

    2016-01-01

    This article deals with an important aspect of the neural network retrieval of sea surface salinity (SSS) from SMOS brightness temperatures (TBs). The neural network retrieval method is an empirical approach that offers the possibility of being independent from any theoretical emissivity model during the in-flight phase. A previous study [1] showed that this approach is applicable to all pixels over the ocean, by designing a set of neural networks with different inputs. The present study focuses on the choice of the learning database and demonstrates that a judicious distribution of the geophysical parameters makes it possible to markedly reduce the systematic regional biases of the retrieved SSS, which are due to the high noise on the TBs. An equalization of the distribution of the geophysical parameters, followed by a new technique for boosting the learning process, makes the regional biases almost disappear for latitudes between 40{\deg}S and 40{\deg}N, while the global standard deviation remains between 0.6 psu (at t...
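    The "equalization of the distribution of the geophysical parameters" can be pictured as binned resampling of the learning database: over-represented parameter ranges are down-sampled and under-represented ones up-sampled, so the network sees the whole range equally often. The sketch below illustrates the general idea only, not the authors' actual procedure; all names and values are invented.

```python
import random

def equalize_by_bins(samples, key, n_bins, lo, hi, per_bin, seed=0):
    """Resample `samples` so each bin of key(s) over [lo, hi) holds `per_bin` items.

    Over-populated bins are down-sampled, sparse ones up-sampled with
    replacement; empty bins cannot be filled and are skipped.
    """
    rng = random.Random(seed)
    width = (hi - lo) / n_bins
    bins = [[] for _ in range(n_bins)]
    for s in samples:
        i = min(int((key(s) - lo) / width), n_bins - 1)
        bins[i].append(s)
    out = []
    for b in bins:
        if b:
            out.extend(rng.choices(b, k=per_bin))
    return out

# Toy learning set: salinity values heavily skewed toward ~35 psu.
train = [{"sss": 35.0 + d} for d in [0.0] * 80 + [0.5] * 15 + [1.5] * 5]
balanced = equalize_by_bins(train, key=lambda s: s["sss"], n_bins=4,
                            lo=35.0, hi=37.0, per_bin=20)
```

After equalization each populated salinity bin contributes the same number of training examples, which is the property that suppresses regional biases tied to rare parameter values.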

  9. The Grid: Stronger, Bigger, Smarter? Presenting a conceptual framework of power system resilience

    OpenAIRE

    M. Panteli and P. Mancarella

    2015-01-01

    Increasing the resilience of critical power infrastructures to high-impact low-probability events, such as extreme weather phenomena driven by climate change, is of key importance for keeping the lights on. However, what does resilience really mean? Should we build a stronger and bigger grid, or a smarter one? This article discusses a conceptual framework of power system resilience, its key features, and potential enhancement measures.

  10. Challenges Building Online GIS Services to Support Global Biodiversity Mapping and Analysis: Lessons from the Mountain and Plains Database and Informatics project

    Directory of Open Access Journals (Sweden)

    Robert P Guralnick

    2005-01-01

    We argue that distributed mapping and analysis of the biodiversity information becoming available on global distributed networks is a lynchpin activity linking together research and development challenges in biodiversity informatics. Online mapping is a core activity because it allows users to visually explore the spatial context of biodiversity information and quickly assemble the datasets needed to ask and answer biodiversity research and management questions. We make the case that a free, online global biodiversity mapping tool utilizing distributed species occurrence records is now within reach and discuss how such a system can be built using existing technology. We also discuss additional challenges and solutions in the light of experience building a regional distributed GIS tool called MaPSTeDI (Mountain and Plains Spatio-Temporal Database and Informatics Initiative). We focus on solutions to three challenges in particular: returning query results in a reasonable amount of time given network limitations; accessing multiple data sources using different transmission mechanisms; and scaling from a solution for a handful of data providers to hundreds or thousands of providers. We close by discussing the future challenges and potential solutions for integrating analysis tools into distributed mapping applications.

  11. Axion Cosmology with a Stronger QCD in the Early Universe

    OpenAIRE

    Choi, Kiwoon; Kim, Hang Bae; Kim, Jihn E.

    1996-01-01

    We examine in the context of supersymmetric models whether the usual cosmological upper bound on the axion decay constant can be relaxed by assuming a period of stronger QCD in the early universe. By evaluating the axion potential in the early universe and also taking into account the dilaton potential energy, it is argued that a stronger QCD is not useful for raising the bound.

  12. Axion cosmology with a stronger QCD in the early universe

    Energy Technology Data Exchange (ETDEWEB)

    Choi Kiwoon [Korea Adv. Inst. of Sci. and Technol., Taejon (Korea, Republic of). Phys. Dept.; Kim, H.B. [Universidad Autonoma de Madrid (Spain). Dept. de Fisica Teorica; Kim, J.E. [Seoul National Univ. (Korea, Republic of). Dept. of Physics

    1997-04-14

    We examine in the context of supersymmetric models whether the usual cosmological upper bound on the axion decay constant can be relaxed by assuming a period of stronger QCD in the early universe. By evaluating the axion potential in the early universe and also taking into account the dilaton potential energy, it is argued that a stronger QCD is not useful for raising the bound. (orig.).

  13. Building a Meandering-River Geological Knowledge Database Based on Google Earth Software

    Institute of Scientific and Technical Information of China (English)

    石书缘; 胡素云; 冯文杰; 刘伟

    2012-01-01

    The existing methods for establishing geological knowledge databases include dense well pattern anatomy, outcrop anatomy, modern sedimentation anatomy, and sedimentary simulation experiments. The advantages and disadvantages of these methods are first analyzed. A method of building a meandering-river geological knowledge database using the Google Earth software is then proposed, combined with the basic geological idea of "the present is the key to the past". Google Earth is a virtual globe, map and geographic information program created by Keyhole, Inc., a company acquired by Google in 2004; it maps the Earth by superimposing satellite imagery, aerial photography and GIS data on a 3D globe. The steps and basic principles for measuring meandering channels with Google Earth are introduced. Channel widths, point-bar lengths and arc lengths of channels with different curvatures in different regions were measured, a table of measured data was compiled, and the data were analyzed by combining fitted formulas with existing empirical formulas. The results show that channel width and point-bar length correlate differently in different sedimentary environments, so a meandering-river geological knowledge database cannot be built to a single uniform standard. The correlation between channel width and point-bar length also varies with curvature, weakening as curvature decreases, which indicates that point-bar development is controlled by the degree of channel sinuosity. Finally, it is suggested that a quantitative meandering-river geological knowledge database be built on the idea of a geological pattern library, to provide effective constraints for reservoir modeling.

  14. Construction and Application of Criminal DNA Databases in Britain and the United States

    Institute of Scientific and Technical Information of China (English)

    焦文慧; 宋辉

    2013-01-01

    In the forensic sciences, DNA databases have been in use for over ten years and have already achieved remarkable results in practice. With advances in DNA analysis technologies, the more complete a DNA database becomes, the more prominent the value of its information, and genetic analysis is now widely used in paternity testing and personal identification. DNA technology plays an important role in criminal investigation, crime fighting and court trials. In recent years, the USA and several Western European countries have established criminal DNA databases based on PCR-STR typing technology and have enacted laws and regulations to govern the entire operation of these databases. Practical experience in these countries shows that national criminal DNA databases have become a core component of forensic-evidence information systems. Britain was the first country in the world to build a DNA database: the Forensic Science Service of England launched a study on a criminal DNA database model and completed this work in 1995. The American CODIS system is the world's most successful criminal DNA database in both construction and application, and has so far been adopted by more than ten countries. The design and construction, technologies and future trends of the American DNA database provide useful lessons for improving DNA database construction.

  15. Old genes experience stronger translational selection than young genes.

    Science.gov (United States)

    Yin, Hongyan; Ma, Lina; Wang, Guangyu; Li, Mengwei; Zhang, Zhang

    2016-09-15

    Selection on synonymous codon usage for translation efficiency and/or accuracy has been identified as a widespread mechanism in many living organisms. However, it remains unknown whether translational selection associates closely with gene age and acts differentially on genes of different evolutionary ages. To address this issue, here we investigate the strength of translational selection acting on genes of different ages in human. Our results show that old genes present stronger translational selection than young genes, demonstrating that translational selection correlates positively with gene age. We further explore the difference in translational selection between duplicates and singletons and between housekeeping and tissue-specific genes. We find that translational selection acts comparably on old singletons and old duplicates, and that the stronger translational selection in old genes is contributed primarily by housekeeping genes. For young genes, contrastingly, singletons experience stronger translational selection than duplicates, presumably due to the redundant function of duplicated genes during their early evolutionary stage. Taken together, our results indicate that the translational selection acting on a gene is not constant during all stages of evolution, but associates closely with gene age. PMID:27259662

  16. Relational databases

    CERN Document Server

    Bell, D A

    1986-01-01

    Relational Databases explores the major advances in relational databases and provides a balanced analysis of the state of the art in relational databases. Topics covered include capture and analysis of data placement requirements; distributed relational database systems; data dependency manipulation in database schemata; and relational database support for computer graphics and computer aided design. This book is divided into three sections and begins with an overview of the theory and practice of distributed systems, using the example of INGRES from Relational Technology as illustration. The

  17. Database Driven Web Systems for Education.

    Science.gov (United States)

    Garrison, Steve; Fenton, Ray

    1999-01-01

    Provides technical information on publishing to the Web. Demonstrates some new applications in database publishing. Discusses the difference between static and database-driven Web pages. Reviews failures and successes of a Web database system. Addresses the question of how to build a database-driven Web site, discussing connectivity software, Web…

  18. Biofuel Database

    Science.gov (United States)

    Biofuel Database (Web, free access)   This database brings together structural, biological, and thermodynamic data for enzymes that are either in current use or are being considered for use in the production of biofuels.

  19. Community Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This excel spreadsheet is the result of merging at the port level of several of the in-house fisheries databases in combination with other demographic databases...

  20. The right of the stronger: The play Sisyphus and Critias

    Directory of Open Access Journals (Sweden)

    Jordović Ivan

    2004-01-01

    The focus of this study is the standpoint of the play Sisyphus, attributed to Critias, the leader of the Thirty, towards the right of the stronger. This question has been of constant interest in scholarly circles, since its answer can serve as an indicator of the influence this famous theory has had. Interest has been further encouraged by the fact that Critias' authorship of the play is questionable. However, the question of the author is not of primary importance for this article, because there are some arguments, alongside well-known ones, which have not previously been considered and which show that in this satyr play, regardless of its author and purpose, the right of the stronger is actually non-existent. The first argument to support this view is that the nomos-physis antithesis is nowhere explicitly mentioned, although it is the crucial element of the right of the stronger. In addition, there is no claim in the play that the strong are exploited by the weak or by the law. The second argument is that, despite the inability of laws to prevent secret injustice, the laws and their importance for human society are depicted in a positive light. It should also be noted that, unlike in Callicles and Glaucon, the laws are created to stop the bad, not the good. The third argument is that the invention of religion is presented as a positive achievement, which finally enables the overcoming of primeval lawlessness. A reflection of this argument is the positive characterization of the individual who invented the fear of the gods. The fourth argument, which has not been taken into consideration so far, is the way the supporters and opponents of lawlessness are described and marked as κακοί and ἐσθλοί. In the satyr play only the physically strong are considered strong, as opposed to Callicles, for whom they are also spiritually superior. The intellectually superior figure in Sisyphus is the inventor of the fear of the gods, who is also in favor of law and order. The fact

  1. Stronger misdirection in curved than in straight motion

    Directory of Open Access Journals (Sweden)

    Jorge eOtero-Millan

    2011-11-01

    Illusions developed by magicians are a rich and largely untapped source of insight into perception and cognition. Here we show that curved motion, as employed by the magician in a classic sleight of hand trick, generates stronger misdirection than rectilinear motion, and that this difference can be explained by the differential engagement of the smooth pursuit and the saccadic oculomotor systems. This research moreover exemplifies how the magician’s intuitive understanding of the spectator’s mindset can surpass that of the cognitive scientist in specific instances, and that observation-based behavioral insights developed by magicians are worthy of quantitative investigation in the neuroscience laboratory.

  2. Database Technologies for RDF

    Science.gov (United States)

    Das, Souripriya; Srinivasan, Jagannathan

    Efficient and scalable support for RDF/OWL data storage, loading, inferencing and querying, in conjunction with already available support for enterprise-level data and operations reliability requirements, can make databases suitable to act as an enterprise-level RDF/OWL repository and hence a viable platform for building semantic applications for enterprise environments.
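    One common way a database takes on this role is to keep RDF triples in an ordinary relational table and translate graph patterns into SQL, so that indexing, transactions and recovery come for free. The sketch below illustrates that principle with SQLite; the schema and data are invented for illustration and are not the design of any particular product.

```python
import sqlite3

# A single subject/predicate/object table serves as the RDF repository;
# an index on (p, o) supports the typical pattern-matching access path.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE triples (s TEXT, p TEXT, o TEXT)")
db.execute("CREATE INDEX idx_po ON triples (p, o)")
db.executemany("INSERT INTO triples VALUES (?, ?, ?)", [
    ("ex:alice", "rdf:type",    "ex:Employee"),
    ("ex:bob",   "rdf:type",    "ex:Employee"),
    ("ex:alice", "ex:worksFor", "ex:acme"),
])

# The SPARQL pattern  { ?x rdf:type ex:Employee }  translated to SQL:
employees = [r[0] for r in db.execute(
    "SELECT s FROM triples WHERE p = 'rdf:type' AND o = 'ex:Employee' ORDER BY s"
)]
```

Inference can then be layered on top, e.g. by materializing entailed triples into the same table so queries need no special handling.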

  3. Database Administrator

    Science.gov (United States)

    Moore, Pam

    2010-01-01

    The Internet and electronic commerce (e-commerce) generate lots of data. Data must be stored, organized, and managed. Database administrators, or DBAs, work with database software to find ways to do this. They identify user needs, set up computer databases, and test systems. They ensure that systems perform as they should and add people to the…

  4. Building Permits, Building permits pulled from appraisal database and geocoded to locations for all types of permits issued, Published in unknown, Johnson County AIMS.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Building Permits dataset, was produced all or in part from Published Reports/Deeds information as of unknown. It is described as 'Buidling permits pulled from...

  5. Building a stronger framework of nuclear law. The IAEA's legislative assistance services

    International Nuclear Information System (INIS)

    The IAEA is publishing a Handbook on Nuclear Law which will provide IAEA Member States with a new resource for assessing the adequacy of their national legal frameworks governing the peaceful uses of nuclear energy; and practical guidance for governments in efforts to enhance their laws and regulations, in harmonizing them with internationally recognized standards, and in meeting their obligations under relevant international instruments. The Handbook responds to the growing demand from many national governments for assistance in the development of nuclear legislation and the need to harmonize their own legal and institutional arrangements with international standards. It also presents concise and authoritative instructional materials for teaching professionals (lawyers, scientists, engineers, health and radiation protection workers, government administrators) on the basic elements of a sound framework for managing and regulating nuclear energy. The Handbook is organized into five general parts: Part I provides a general overview of key concepts in the field: nuclear energy law and the legislative process; the regulatory authority; and the fundamental regulatory activities of licensing, inspection and enforcement. Part II deals with radiation protection. Part III covers various subjects arising from nuclear and radiation safety: radiation sources, nuclear installations, emergency preparedness and response, mining and milling, transportation, and waste and spent fuel. Part IV addresses the topic of nuclear liability and coverage. Part V moves to non-proliferation and security-related subjects: safeguards, export and import controls, and physical protection. The Handbook also reflects and refers to the extensive range of IAEA Safety Standards covering all fields relevant to peaceful nuclear technology

  6. A race we can win. The world can - and must - build a stronger security framework

    International Nuclear Information System (INIS)

    Nuclear proliferation and terrorism represent the single most important threat to global security. Yet fundamental differences of opinion remain on how to deal with this ever growing menace to our survival. Should we opt for diplomacy or force? What are the relative merits of collective versus unilateral action? Is it more effective to pursue a policy of containment or one based on inclusiveness? These are not new questions, by any measure. But they have taken on renewed urgency as nations struggle, both regionally and globally, to cope with an extended array of conflicts, highly sophisticated forms of terrorism, and a growing threat of weapons of mass destruction. In a real sense, we are in a race against time - but it's a race we can win if we work together. The Treaty on the Non-Proliferation of Nuclear Weapons (NPT) remains the global anchor for humanity's efforts to curb nuclear proliferation and move towards nuclear disarmament. There is no doubt that the implementation of the NPT continues to provide important security benefits - by providing assurance that, in the great majority of non-nuclear-weapon States, nuclear energy is not being misused for weapon purposes. The NPT is also the only binding agreement in which all five of the nuclear-weapon States have committed themselves to move forward towards nuclear disarmament. Still, it is clear that recent events have placed the NPT and the regime supporting it under unprecedented stress, exposing some of its inherent limitations and pointing to areas that need to be adjusted. The question is how do we best move ahead to achieve the security we seek

  7. FunctSNP: an R package to link SNPs to functional knowledge and dbAutoMaker: a suite of Perl scripts to build SNP databases

    Directory of Open Access Journals (Sweden)

    Watson-Haigh Nathan S

    2010-06-01

    Background: Whole genome association studies using highly dense single nucleotide polymorphisms (SNPs) are a set of methods to identify DNA markers associated with variation in a particular complex trait of interest. One of the main outcomes from these studies is a subset of statistically significant SNPs. Finding the potential biological functions of such SNPs can be an important step towards further use in human and agricultural populations (e.g., for identifying genes related to susceptibility to complex diseases or genes playing key roles in development or performance). The current challenge is that the information holding the clues to SNP functions is distributed across many different databases. Efficient bioinformatics tools are therefore needed to seamlessly integrate up-to-date functional information on SNPs. Many web services have arisen to meet the challenge, but most work only within the framework of human medical research. Although we acknowledge the importance of human research, we identify a need for SNP annotation tools for other organisms. Description: We introduce an R package called FunctSNP, which is the user interface to custom-built species-specific databases. The local relational databases contain SNP data together with functional annotations extracted from online resources. FunctSNP provides a unified bioinformatics resource to link SNPs with functional knowledge (e.g., genes, pathways, ontologies). We also introduce dbAutoMaker, a suite of Perl scripts, which can be scheduled to run periodically to automatically create/update the customised SNP databases. We illustrate the use of FunctSNP with a livestock example, but the approach and software tools presented here can be applied also to human and other organisms. Conclusions: Finding the potential functional significance of SNPs is important when further using the outcomes from whole genome association studies. FunctSNP is unique in that it is the only R

  8. Optimized method of building an underwater terrain navigation database based on a triangular irregular network

    Institute of Scientific and Technical Information of China (English)

    王立辉; 高贤志; 梁冰冰; 余乐; 祝雪芬

    2015-01-01

    When a regular grid model is used to build a terrain navigation database, both accuracy and efficiency are low. To optimize the construction method, a method of building an underwater terrain navigation database based on a triangular irregular network (TIN) is proposed. The source data points are partitioned by latitude and longitude using a divide-and-conquer algorithm, and a convex hull is computed for each block of data points. Then, following an improved convex-hull algorithm, sub-triangular networks are formed by inserting the non-hull data points one by one. Adjacent blocks are merged with an improved triangulation-merging algorithm, and the complete terrain navigation database is obtained by optimizing and merging the sub-triangulations. Simulation results show that building a terrain navigation database with the TIN-based method offers high efficiency, high accuracy, and adjustable resolution.
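    The per-block convex-hull step can be illustrated with a standard algorithm. The sketch below uses Andrew's monotone chain, a textbook method standing in for the paper's improved hull algorithm (which the abstract does not spell out), applied to a toy block of (lon, lat) soundings.

```python
def convex_hull(points):
    """Andrew's monotone-chain convex hull, returned in counter-clockwise order.

    A standard algorithm used here as a stand-in for the improved hull step
    that the paper applies to each latitude/longitude block of soundings.
    """
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # Positive if o->a->b turns counter-clockwise.
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    # Last point of each chain duplicates the start of the other chain.
    return lower[:-1] + upper[:-1]

# Toy block of (lon, lat) soundings: interior points drop out of the hull.
block = [(0, 0), (2, 0), (2, 2), (0, 2), (1, 1), (1, 0)]
hull = convex_hull(block)
```

The remaining interior points would then be inserted one by one to triangulate the block, and adjacent block triangulations merged, as the abstract describes.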

  9. Database Manager

    Science.gov (United States)

    Martin, Andrew

    2010-01-01

    It is normal practice today for organizations to store large quantities of records of related information as computer-based files or databases. Purposeful information is retrieved by performing queries on the data sets. The purpose of DATABASE MANAGER is to communicate to students the method by which the computer performs these queries. This…

  10. Mechanisms for stronger warming over drier ecoregions observed since 1979

    Science.gov (United States)

    Zhou, Liming; Chen, Haishan; Hua, Wenjian; Dai, Yongjiu; Wei, Nan

    2016-02-01

    Previous research found that the warming rate observed for the period 1979-2012 increases dramatically with decreasing vegetation greenness over land between 50°S and 50°N, with the strongest warming rate seen over the driest regions such as the Sahara desert and the Arabian Peninsula, suggesting warming amplification over deserts. To further this finding, this paper explores possible mechanisms for this amplification by analyzing observations, reanalysis data and historical simulations of global coupled atmosphere-ocean general circulation models. We examine various variables related to surface radiative forcing, land surface properties, and the surface energy and radiation budget that control the warming patterns in terms of large-scale ecoregions. Our results indicate that desert amplification is likely attributable primarily to enhanced longwave radiative forcing associated with a stronger water vapor feedback over drier ecoregions in response to the positive global-scale greenhouse gas forcing. This warming amplification and the associated downward longwave radiation at the surface are reproduced by historical simulations with anthropogenic and natural forcings, but are absent if only natural forcings are considered, pointing to new potential fingerprints of anthropogenic warming. These results suggest a fundamental pattern of global warming over land that depends on the dryness of ecosystems in mid- and low-latitudes, likely reflecting primarily the first-order, large-scale thermodynamic component of global warming linked to changes in the water and energy cycles over different ecosystems. This finding may have important implications in interpreting global warming patterns and assessing climate change impacts.

  11. An Approach to Building an OWL Ontology Relational Database

    Institute of Scientific and Technical Information of China (English)

    王岁花; 张晓丹; 王越

    2011-01-01

    Along with the increase in ontology types and resources, ontology structures have become more and more complicated. In order to store ontologies of various structural types properly and to support efficient ontology queries, this paper puts forward an ontology storage method based on relational databases. Departing from the traditional decomposition-based storage model, the method uses a word-formation classification scheme that stores OWL classes, properties, instances, property characteristics, and property restrictions each in its own two-dimensional table, thereby resolving the complex relationships between resources and attribute values and preserving the integrity of the semantic information. Queries against the OWL ontology are then translated into queries against the relational database, exploiting the efficient retrieval and matching of relational database management systems and the highly non-procedural SQL language, which compensates for the low query efficiency of OWL ontologies.
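One possible shape of the storage scheme the abstract outlines, with each kind of OWL element kept in its own two-dimensional table and an ontology query rewritten as SQL, can be sketched with sqlite3 (the exact table layout is my illustration, not the authors'):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# One two-dimensional table per kind of OWL element.
conn.executescript("""
CREATE TABLE classes   (class_id TEXT PRIMARY KEY);
CREATE TABLE properties(prop_id  TEXT PRIMARY KEY);
CREATE TABLE instances (inst_id  TEXT, class_id TEXT);
CREATE TABLE triples   (subject  TEXT, prop_id TEXT, object TEXT);
INSERT INTO classes    VALUES ('Person');
INSERT INTO properties VALUES ('hasAge');
INSERT INTO instances  VALUES ('alice', 'Person');
INSERT INTO triples    VALUES ('alice', 'hasAge', '30');
""")

# An ontology query becomes a relational query: ages of all Person instances.
rows = conn.execute("""
    SELECT t.subject, t.object
    FROM triples t JOIN instances i ON i.inst_id = t.subject
    WHERE i.class_id = 'Person' AND t.prop_id = 'hasAge'
""").fetchall()
print(rows)  # [('alice', '30')]
```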

  12. Database Replication

    CERN Document Server

    Kemme, Bettina

    2010-01-01

    Database replication is widely used for fault-tolerance, scalability and performance. The failure of one database replica does not stop the system from working, as available replicas can take over the tasks of the failed replica. Scalability can be achieved by distributing the load across all replicas, and adding new replicas should the load increase. Finally, database replication can provide fast local access, even for geographically distributed clients, if data copies are located close to them. Despite its advantages, replication is not a straightforward technique to apply, and
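The fault-tolerance property described above can be sketched as a toy write-to-all, read-from-any scheme (an illustrative sketch only; real replication protocols must also handle write ordering, replica recovery, and network faults):

```python
# Toy replication: every write is propagated to all live replicas,
# reads are served by any live replica, so a failed replica's work
# is transparently taken over by the survivors.

class Replica:
    def __init__(self, name):
        self.name, self.data, self.alive = name, {}, True

class ReplicatedDB:
    def __init__(self, names):
        self.replicas = [Replica(n) for n in names]

    def write(self, key, value):
        for r in self.replicas:
            if r.alive:
                r.data[key] = value          # propagate to every live copy

    def read(self, key):
        for r in self.replicas:              # any available replica can serve
            if r.alive:
                return r.data.get(key)
        raise RuntimeError("no replica available")

db = ReplicatedDB(["r1", "r2", "r3"])
db.write("x", 42)
db.replicas[0].alive = False                 # r1 fails ...
print(db.read("x"))                          # ... r2 takes over and returns 42
```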

  13. Probabilistic Databases

    CERN Document Server

    Suciu, Dan; Koch, Christoph

    2011-01-01

    Probabilistic databases are databases where the value of some attributes or the presence of some records are uncertain and known only with some probability. Applications in many areas such as information extraction, RFID and scientific data management, data cleaning, data integration, and financial risk assessment produce large volumes of uncertain data, which are best modeled and processed by a probabilistic database. This book presents the state of the art in representation formalisms and query processing techniques for probabilistic data. It starts by discussing the basic principles for rep
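A minimal sketch of the representation the abstract describes: in a tuple-independent probabilistic table each record is present with its own probability, and the probability that a selection query is non-empty is one minus the product of the qualifying tuples' absence probabilities (the table below is invented for illustration):

```python
# A tuple-independent probabilistic table: each record exists with
# its own probability, independently of the others.
table = [  # (name, city, probability the record is correct)
    ("Alice", "Oslo",   0.9),
    ("Bob",   "Oslo",   0.5),
    ("Carol", "Bergen", 0.8),
]

def prob_nonempty(rows, predicate):
    """Probability that at least one tuple satisfying the predicate exists."""
    p_none = 1.0
    for name, city, p in rows:
        if predicate(name, city):
            p_none *= (1.0 - p)      # all qualifying tuples simultaneously absent
    return 1.0 - p_none

# P(someone lives in Oslo) = 1 - (1-0.9)*(1-0.5) = 0.95
print(prob_nonempty(table, lambda n, c: c == "Oslo"))
```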

  14. Database Vs Data Warehouse

    Directory of Open Access Journals (Sweden)

    2007-01-01

    Full Text Available Data warehouse technology includes a set of concepts and methods that offer the users useful information for decision making. The necessity to build a data warehouse arises from the necessity to improve the quality of information in the organization. The data, originating from different sources and having a variety of forms - both structured and unstructured - are filtered according to business rules and integrated in a single large data collection. Using informatics solutions, managers have understood that data stored in operational systems - including databases - are an informational gold mine that must be exploited. Data warehouses have been developed to answer the increasing demands for complex analysis, which could not be properly achieved with operational databases. The present paper emphasizes some of the criteria that information application developers can use in order to choose between a database solution and a data warehouse one.
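The loading step the article describes, with rows from differently shaped sources filtered by a business rule and integrated into one uniform collection, can be sketched as follows (source shapes and field names are invented for illustration):

```python
# Two heterogeneous sources: one structured as dicts, one as plain tuples.
crm_rows  = [{"customer": "Acme", "revenue": 1200},
             {"customer": "Null Co", "revenue": 0}]
shop_rows = [("Beta Ltd", 450.0)]

def load_warehouse(dict_source, tuple_source, min_revenue=1):
    """Filter by a business rule and integrate into one collection."""
    warehouse = []
    for row in dict_source:                      # first source's shape
        if row["revenue"] >= min_revenue:        # business-rule filter
            warehouse.append((row["customer"], float(row["revenue"])))
    for name, revenue in tuple_source:           # second source's shape
        if revenue >= min_revenue:
            warehouse.append((name, float(revenue)))
    return warehouse

print(load_warehouse(crm_rows, shop_rows))
# [('Acme', 1200.0), ('Beta Ltd', 450.0)]
```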

  15. Database theory and SQL practice using Access

    International Nuclear Information System (INIS)

    This book introduces database theory and SQL practice using Access. It comprises seven chapters, covering: understanding databases, with basic concepts and DBMSs; understanding relational databases, with examples; building database tables and entering data using Access 2000; Structured Query Language, with an introduction to managing data and building complex queries in SQL; advanced SQL commands, with the concepts of joins and virtual tables; the design of a database for an online bookstore in six steps; and the building of an application, covering its functions, structure and components, understanding the principles of its operation, and checking the programming source for the application menu.
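The two advanced-SQL topics named above, joins and virtual tables (views), can be shown in a minimal sqlite3 session (the bookstore tables are my illustration, not the book's):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE books  (book_id INTEGER PRIMARY KEY, title TEXT, price REAL);
CREATE TABLE orders (order_id INTEGER PRIMARY KEY, book_id INTEGER);
INSERT INTO books  VALUES (1, 'SQL Basics', 20.0), (2, 'OWL in Depth', 35.0);
INSERT INTO orders VALUES (10, 1), (11, 1), (12, 2);
-- A view is a stored query that behaves like a virtual table.
CREATE VIEW sales AS
  SELECT b.title, COUNT(*) AS copies
  FROM orders o JOIN books b ON b.book_id = o.book_id
  GROUP BY b.title;
""")
print(conn.execute("SELECT * FROM sales ORDER BY copies DESC").fetchall())
# [('SQL Basics', 2), ('OWL in Depth', 1)]
```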

  16. Dealer Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The dealer reporting databases contain the primary data reported by federally permitted seafood dealers in the northeast. Electronic reporting was implemented May...

  17. RDD Databases

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This database was established to oversee documents issued in support of fishery research activities including experimental fishing permits (EFP), letters of...

  18. National database

    DEFF Research Database (Denmark)

    Kristensen, Helen Grundtvig; Stjernø, Henrik

    1995-01-01

    Artikel om national database for sygeplejeforskning oprettet på Dansk Institut for Sundheds- og Sygeplejeforskning. Det er målet med databasen at samle viden om forsknings- og udviklingsaktiviteter inden for sygeplejen.......Artikel om national database for sygeplejeforskning oprettet på Dansk Institut for Sundheds- og Sygeplejeforskning. Det er målet med databasen at samle viden om forsknings- og udviklingsaktiviteter inden for sygeplejen....

  19. The VVV Templates Project. Towards an Automated Classification of VVV Light-Curves. I. Building a database of stellar variability in the near-infrared

    CERN Document Server

    Angeloni, R; Catelan, M; Dékány, I; Gran, F; Alonso-García, J; Hempel, M; Navarrete, C; Andrews, H; Aparicio, A; Beamín, J C; Berger, C; Borissova, J; Peña, C Contreras; Cunial, A; de Grijs, R; Espinoza, N; Eyheramendy, S; Lopes, C E Ferreira; Fiaschi, M; Hajdu, G; Han, J; Hełminiak, K G; Hempel, A; Hidalgo, S L; Ita, Y; Jeon, Y -B; Jordán, A; Kwon, J; Lee, J T; Martín, E L; Masetti, N; Matsunaga, N; Milone, A P; Minniti, D; Morelli, L; Murgas, F; Nagayama, T; Navarro, C; Ochner, P; Pérez, P; Pichara, K; Rojas-Arriagada, A; Roquette, J; Saito, R K; Siviero, A; Sohn, J; Sung, H -I; Tamura, M; Tata, R; Tomasella, L; Townsend, B; Whitelock, P

    2014-01-01

    Context. The Vista Variables in the V\'ia L\'actea (VVV) ESO Public Survey is a variability survey of the Milky Way bulge and an adjacent section of the disk, carried out from 2010 onwards on the ESO Visible and Infrared Survey Telescope for Astronomy (VISTA). VVV will eventually deliver a deep near-IR atlas with photometry and positions in five passbands (ZYJHK_S) and a catalogue of 1-10 million variable point sources - mostly unknown - which require classification. Aims. The main goal of the VVV Templates Project, which we introduce in this work, is to develop and test machine-learning algorithms for the automated classification of the VVV light-curves. As VVV is the first massive, multi-epoch survey of stellar variability in the near-infrared, the template light-curves that are required for training the classification algorithms are not available. In the first paper of the series we describe the construction of this comprehensive database of infrared stellar variability. Methods. First we performed a systematic sea...

  20. Negative density dependence is stronger in resource-rich environments and diversifies communities when stronger for common but not rare species.

    Science.gov (United States)

    LaManna, Joseph A; Walton, Maranda L; Turner, Benjamin L; Myers, Jonathan A

    2016-06-01

    Conspecific negative density dependence is thought to maintain diversity by limiting abundances of common species. Yet the extent to which this mechanism can explain patterns of species diversity across environmental gradients is largely unknown. We examined density-dependent recruitment of seedlings and saplings and changes in local species diversity across a soil-resource gradient for 38 woody-plant species in a temperate forest. At both life stages, the strength of negative density dependence increased with resource availability, becoming relatively stronger for rare species during seedling recruitment, but stronger for common species during sapling recruitment. Moreover, negative density dependence appeared to reduce diversity when stronger for rare than common species, but increase diversity when stronger for common species. Our results suggest that negative density dependence is stronger in resource-rich environments and can either decrease or maintain diversity depending on its relative strength among common and rare species.

  2. A Language for Fuzzy Statistical Database

    OpenAIRE

    Katti, C. P; S.Guglani

    2013-01-01

    Fuzzy statistical database is a database used for fuzzy statistical analysis purpose. A fuzzy statistical table is a tabular representation of fuzzy statistics and is a useful data structure for fuzzy statistical database. Primitive fuzzy statistical tables are a building block of fuzzy statistical table. In this paper we defined the fuzzy statistical join operator in the framework of fuzzy statistical database. The fuzzy statistical dependency preservation property will be discussed for the fuzz...
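A common convention in fuzzy relational models, used here purely for illustration (the paper defines its own fuzzy statistical join operator), is for tuples to carry a membership degree in [0, 1] and for a joined tuple to receive the minimum of the two input degrees:

```python
# Fuzzy tables: each tuple carries a membership degree in [0, 1].
R = [("Alice", "young", 0.8)]          # (name, age_class, membership)
S = [("young", "low_risk", 0.6)]       # (age_class, risk, membership)

def fuzzy_join(r_rows, s_rows):
    """Join on age_class; combine membership degrees with min()."""
    out = []
    for name, age_class, mu_r in r_rows:
        for age2, risk, mu_s in s_rows:
            if age_class == age2:                  # join condition
                out.append((name, age_class, risk, min(mu_r, mu_s)))
    return out

print(fuzzy_join(R, S))   # [('Alice', 'young', 'low_risk', 0.6)]
```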

  3. Biological Databases

    Directory of Open Access Journals (Sweden)

    Kaviena Baskaran

    2013-12-01

    Full Text Available Biology has entered a new era in which information is distributed through databases, and these collections of databases have become a primary channel for publishing information. Much of this data publishing is done through Internet Gopher, where information resources are offered easily and affordably alongside powerful research tools. What is more important now is the development of high-quality, professionally operated electronic data publishing sites. To enhance the service, appropriate editorial policies for electronic data publishing have been established, and the editors of articles shoulder the responsibility.

  4. Ministers at IAEA Conference Call for Stronger Nuclear Security

    International Nuclear Information System (INIS)

    Declaration says. The Declaration recognizes the threat to international security posed by theft and smuggling of nuclear material and affirms the responsibility of States to keep all nuclear material secure. It also encourages all States to join and participate in the IAEA Incident and Trafficking Database, the international repository of information about nuclear and other radioactive material that has fallen out of regulatory control. It invites States that have not yet done so to become party to, and fully implement, the Convention on the Physical Protection of Nuclear Material (CPPNM) and its 2005 Amendment, which broadens the scope of that Convention. Many ministers at the Conference stated that entry into force of the Amendment would make a big difference. Among a number of other issues that are addressed, the Declaration also encourages States to use, on a voluntary basis, the IAEA's nuclear security advisory services and peer reviews such as International Physical Protection Advisory Service (IPPAS) missions, which are based on internationally accepted guidance and tailored to national needs. The Ministers welcomed the IAEA's work in nuclear forensics, and recognized its efforts to raise awareness of the growing threat of cyber-attacks and their potential impact on nuclear security. The work of the Conference will contribute to the IAEA's Nuclear Security Plan for 2014 to 2017. Consultations on the Declaration among IAEA Member States were coordinated by Ambassador Balazs Csuday, Resident Representative of Hungary, and Ambassador Laercio Antonio Vinhas, Resident Representative of Brazil. (IAEA)

  5. Database on wind characteristics

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, K.S. [The Technical Univ. of Denmark (Denmark); Courtney, M.S. [Risoe National Lab., (Denmark)

    1999-08-01

    The organisations that participated in the project consist of five research organisations: MIUU (Sweden), ECN (The Netherlands), CRES (Greece), DTU (Denmark), Risoe (Denmark) and one wind turbine manufacturer: Vestas Wind System A/S (Denmark). The overall goal was to build a database consisting of a large number of wind speed time series and create tools for efficiently searching through the data to select interesting data. The project resulted in a database located at DTU, Denmark with online access through the Internet. The database contains more than 50,000 hours of wind speed measurements. A wide range of wind climates and terrain types are represented with significant amounts of time series. Data have been chosen selectively with a deliberate over-representation of high wind and complex terrain cases. This makes the database ideal for wind turbine design needs but completely unsuitable for resource studies. Diversity has also been an important aim and this is realised with data from a large range of terrain types; everything from offshore to mountain, from Norway to Greece. (EHS)

  6. International Ventilation Cooling Application Database

    DEFF Research Database (Denmark)

    Holzer, Peter; Psomas, Theofanis Ch.; OSullivan, Paul

    2016-01-01

    The currently running International Energy Agency Energy and Conservation in Buildings Annex 62 Ventilative Cooling (VC) project is coordinating research towards extended use of VC. Within this Annex 62, the joint research activity of the International VC Application Database has been carried out, ...

  7. Database design using entity-relationship diagrams

    CERN Document Server

    Bagui, Sikha

    2011-01-01

    Data, Databases, and the Software Engineering Process; Data; Building a Database; What is the Software Engineering Process?; Entity Relationship Diagrams and the Software Engineering Life Cycle; Phase 1: Get the Requirements for the Database; Phase 2: Specify the Database; Phase 3: Design the Database; Data and Data Models; Files, Records, and Data Items; Moving from 3 × 5 Cards to Computers; Database Models; The Hierarchical Model; The Network Model; The Relational Model; The Relational Model and Functional Dependencies; Fundamental Relational Database; Relational Database and Sets; Functional
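The outline ends on functional dependencies, the formal tool behind relational design. A dependency X → Y holds in a table when no two rows agree on X but disagree on Y; a small checker (table and attribute names are invented for illustration):

```python
def holds(rows, x, y):
    """Return True if the functional dependency x -> y holds in rows."""
    seen = {}
    for row in rows:
        key = tuple(row[a] for a in x)
        val = tuple(row[a] for a in y)
        if seen.setdefault(key, val) != val:   # same X, different Y: violated
            return False
    return True

emp = [
    {"emp_id": 1, "dept": "IT",    "dept_floor": 3},
    {"emp_id": 2, "dept": "IT",    "dept_floor": 3},
    {"emp_id": 3, "dept": "Sales", "dept_floor": 1},
]
print(holds(emp, ["dept"], ["dept_floor"]))   # True: dept determines floor
print(holds(emp, ["dept_floor"], ["emp_id"])) # False: floor 3 has two emp_ids
```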

  8. Asbestos Exposure Assessment Database

    Science.gov (United States)

    Arcot, Divya K.

    2010-01-01

    Exposure to particular hazardous materials in a work environment is dangerous to the employees who work directly with or around the materials as well as those who come in contact with them indirectly. In order to maintain a national standard for safe working environments and protect worker health, the Occupational Safety and Health Administration (OSHA) has set forth numerous precautionary regulations. NASA has been proactive in adhering to these regulations by implementing standards which are often stricter than regulation limits and administering frequent health risk assessments. The primary objective of this project is to create the infrastructure for an Asbestos Exposure Assessment Database specific to NASA Johnson Space Center (JSC) which will compile all of the exposure assessment data into a well-organized, navigable format. The data includes Sample Types, Sample Durations, Crafts of those from whom samples were collected, Job Performance Requirements (JPR) numbers, Phased Contrast Microscopy (PCM) and Transmission Electron Microscopy (TEM) results and qualifiers, Personal Protective Equipment (PPE), and names of industrial hygienists who performed the monitoring. This database will allow NASA to provide OSHA with specific information demonstrating that JSC's work procedures are protective enough to minimize the risk of future disease from the exposures. The data has been collected by the NASA contractors Computer Sciences Corporation (CSC) and Wyle Laboratories. The personal exposure samples were collected from devices worn by laborers working at JSC and by building occupants located in asbestos-containing buildings.

  9. Trade Capacity Building Database Data Set

    Data.gov (United States)

    US Agency for International Development — Since 2001, the U.S. Agency for International Development (USAID) has conducted an annual survey on behalf of the Office of the U.S. Trade Representative (USTR) to...

  10. Performance related issues in distributed database systems

    Science.gov (United States)

    Mukkamala, Ravi

    1991-01-01

    The key elements of research performed during the year-long effort of this project are: Investigate the effects of heterogeneity in distributed real-time systems; Study the requirements of TRAC towards building a heterogeneous database system; Study the effects of performance modeling on distributed database performance; and Experiment with an ORACLE-based heterogeneous system.

  11. Metadata queries for complex database systems

    OpenAIRE

    O'Connor, Gerald

    2004-01-01

    Federated Database Management Systems (FDBS) are very complex. Component databases can be heterogeneous, autonomous and distributed; accounting for these different characteristics when building a FDBS is a difficult engineering problem. The Common Data Model (CDM) is used to represent the data in the FDBS. It must be semantically rich to correctly represent the data from diverse component databases, which differ in structure, data model, semantics and content. In this research project we ...

  12. Search Algorithms for Conceptual Graph Databases

    OpenAIRE

    Abdurashid Mamadolimov

    2012-01-01

    We consider a database composed of a set of conceptual graphs. Using conceptual graphs and graph homomorphism it is possible to build a basic query-answering mechanism based on semantic search. Graph homomorphism defines a partial order over conceptual graphs. Since graph homomorphism checking is an NP-Complete problem, the main requirement for database organizing and managing algorithms is to reduce the number of homomorphism checks. Searching is a basic operation for database manipulating p...
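The basic (NP-complete) operation the abstract describes can be sketched as a backtracking homomorphism check: every query node is mapped to a target node with a compatible label so that every query edge is preserved (the graph encoding and the Person/Company example are my illustration):

```python
def homomorphic(q_labels, q_edges, t_labels, t_edges, partial=None):
    """Backtracking search for a label-preserving graph homomorphism."""
    partial = partial or {}
    if len(partial) == len(q_labels):            # all query nodes mapped
        return True
    node = next(n for n in q_labels if n not in partial)
    for cand, lab in t_labels.items():
        if lab != q_labels[node]:
            continue                             # label must be compatible
        partial[node] = cand
        # Every query edge whose endpoints are both mapped must exist.
        ok = all((partial[a], partial[b]) in t_edges
                 for a, b in q_edges
                 if a in partial and b in partial)
        if ok and homomorphic(q_labels, q_edges, t_labels, t_edges, partial):
            return True
        del partial[node]                        # backtrack
    return False

# Query: a Person related to a Company. Target contains such a pattern.
q = ({"p": "Person", "c": "Company"}, {("p", "c")})
t = ({"x": "Person", "y": "Company", "z": "Person"}, {("x", "y")})
print(homomorphic(q[0], q[1], t[0], t[1]))   # True: p->x, c->y
```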

  13. The impact of gambling advertising: Problem gamblers report stronger impacts on involvement, knowledge, and awareness than recreational gamblers.

    Science.gov (United States)

    Hanss, Daniel; Mentzoni, Rune A; Griffiths, Mark D; Pallesen, Ståle

    2015-06-01

    Although there is a general lack of empirical evidence that advertising influences gambling participation, the regulation of gambling advertising is hotly debated among academic researchers, treatment specialists, lobby groups, regulators, and policymakers. This study contributes to the ongoing debate by investigating perceived impacts of gambling advertising in a sample of gamblers drawn from the general population in Norway (n = 6,034). Three dimensions of advertising impacts were identified, representing perceived impacts on (a) gambling-related attitudes, interest, and behavior ("involvement"); (b) knowledge about gambling options and providers ("knowledge"); and (c) the degree to which people are aware of gambling advertising ("awareness"). Overall, impacts were strongest for the knowledge dimension, and, for all 3 dimensions, the impact increased with level of advertising exposure. Those identified as problem gamblers in the sample (n = 57) reported advertising impacts concerning involvement more than recreational gamblers, and this finding was not attributable to differences in advertising exposure. Additionally, younger gamblers reported stronger impacts on involvement and knowledge but were less likely to agree that they were aware of gambling advertising than older gamblers. Male gamblers were more likely than female gamblers to report stronger impacts on both involvement and knowledge. These findings are discussed with regard to existing research on gambling advertising as well as their implications for future research and policy-making. (PsycINFO Database Record

  15. Stackfile Database

    Science.gov (United States)

    deVarvalho, Robert; Desai, Shailen D.; Haines, Bruce J.; Kruizinga, Gerhard L.; Gilmer, Christopher

    2013-01-01

    This software provides storage retrieval and analysis functionality for managing satellite altimetry data. It improves the efficiency and analysis capabilities of existing database software with improved flexibility and documentation. It offers flexibility in the type of data that can be stored. There is efficient retrieval either across the spatial domain or the time domain. Built-in analysis tools are provided for frequently performed altimetry tasks. This software package is used for storing and manipulating satellite measurement data. It was developed with a focus on handling the requirements of repeat-track altimetry missions such as Topex and Jason. It was, however, designed to work with a wide variety of satellite measurement data [e.g., Gravity Recovery And Climate Experiment -- GRACE). The software consists of several command-line tools for importing, retrieving, and analyzing satellite measurement data.
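The two retrieval paths the abstract mentions, across the time domain and across the spatial domain, can be sketched over a list of measurement records (the record layout and values are invented for illustration, not the Stackfile format):

```python
# Satellite measurements stored once, queried two ways.
measurements = [  # (time, lat, lon, sea-surface height in m)
    (0.0, 10.0, 100.0, 1.2),
    (1.0, 10.5,  20.0, 1.3),
    (2.0, 60.0, 100.5, 0.9),
]

def by_time(rows, t0, t1):
    """Retrieval across the time domain."""
    return [r for r in rows if t0 <= r[0] <= t1]

def by_region(rows, lat0, lat1, lon0, lon1):
    """Retrieval across the spatial domain."""
    return [r for r in rows
            if lat0 <= r[1] <= lat1 and lon0 <= r[2] <= lon1]

print(len(by_time(measurements, 0.0, 1.0)))                 # 2
print(len(by_region(measurements, 0.0, 20.0, 0.0, 200.0)))  # 2
```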

  16. Classification of Building Object Types

    DEFF Research Database (Denmark)

    Jørgensen, Kaj Asbjørn

    2011-01-01

    be managed by software applications and on the basis of building models. Classification systems with taxonomies of building object types have many application opportunities but can still be beneficial in data exchange between building construction partners. However, this will be performed by new methods...... and in strong connection with databases holding a wide range of object types....

  17. Database development and management

    CERN Document Server

    Chao, Lee

    2006-01-01

    Introduction to Database Systems; Functions of a Database; Database Management System; Database Components; Database Development Process; Conceptual Design and Data Modeling; Introduction to Database Design Process; Understanding Business Process; Entity-Relationship Data Model; Representing Business Process with Entity-Relationship Model; Table Structure and Normalization; Introduction to Tables; Table Normalization; Transforming Data Models to Relational Databases; DBMS Selection; Transforming Data Models to Relational Databases; Enforcing Constraints; Creating Database for Business Process; Physical Design and Database
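"Enforcing Constraints" from the outline above can be demonstrated with a foreign key in sqlite3, which keeps child rows from referencing a parent that does not exist (the department/employee schema is my illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")    # off by default in SQLite
conn.executescript("""
CREATE TABLE departments (dept_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE employees (
    emp_id  INTEGER PRIMARY KEY,
    dept_id INTEGER NOT NULL REFERENCES departments(dept_id)
);
INSERT INTO departments VALUES (1, 'IT');
INSERT INTO employees VALUES (100, 1);      -- valid reference
""")
try:
    conn.execute("INSERT INTO employees VALUES (101, 99)")  # no dept 99
    violated = False
except sqlite3.IntegrityError:
    violated = True
print(violated)  # True: the constraint rejected the orphan row
```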

  18. Removal of proprioception by BCI raises a stronger body ownership illusion in control of a humanlike robot

    Science.gov (United States)

    Alimardani, Maryam; Nishio, Shuichi; Ishiguro, Hiroshi

    2016-01-01

    Body ownership illusions provide evidence that our sense of self is not coherent and can be extended to non-body objects. Studying these illusions gives us practical tools to understand the brain mechanisms that underlie body recognition and the experience of self. We previously introduced an illusion of body ownership transfer (BOT) for operators of a very humanlike robot. This sensation of owning the robot’s body was confirmed when operators controlled the robot either by performing the desired motion with their body (motion-control) or by employing a brain-computer interface (BCI) that translated motor imagery commands to robot movement (BCI-control). The interesting observation during BCI-control was that the illusion could be induced even with a noticeable delay in the BCI system. Temporal discrepancy has always shown critical weakening effects on body ownership illusions. However the delay-robustness of BOT during BCI-control raised a question about the interaction between the proprioceptive inputs and delayed visual feedback in agency-driven illusions. In this work, we compared the intensity of BOT illusion for operators in two conditions; motion-control and BCI-control. Our results revealed a significantly stronger BOT illusion for the case of BCI-control. This finding highlights BCI’s potential in inducing stronger agency-driven illusions by building a direct communication between the brain and controlled body, and therefore removing awareness from the subject’s own body. PMID:27654174

  19. Method discussion for quick response grey prediction of stronger aftershocks of an earthquake sequence

    Institute of Scientific and Technical Information of China (English)

    1999-01-01

    In this paper, we treat the occurrence process of early strong aftershocks of a mainshock-aftershock type earthquake sequence as a complex grey system, and introduce a method for predicting its stronger aftershocks based on grey prediction theory. A retrospective prediction test on the 1998 Zhangbei MS=6.2 earthquake sequence shows that the grey prediction method may have practical significance for quick-response prediction of stronger aftershocks of an earthquake sequence.
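Grey prediction is usually carried out with the classical GM(1,1) model: accumulate the series, fit the whitened equation x⁰(k) + a·z¹(k) = b by least squares, and extrapolate. A minimal sketch (the magnitude series below is invented for illustration; the paper's exact model for the Zhangbei sequence may differ):

```python
import math

def gm11_next(x0):
    """Predict the next value of a short positive series with GM(1,1)."""
    n = len(x0)
    x1 = [sum(x0[:i + 1]) for i in range(n)]               # accumulated series
    z1 = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]  # mean sequence
    y, m = x0[1:], n - 1
    # Least squares for x0(k) = -a*z1(k) + b (normal equations for a line).
    s_z, s_y = sum(z1), sum(y)
    s_zz = sum(z * z for z in z1)
    s_zy = sum(z * v for z, v in zip(z1, y))
    slope = (m * s_zy - s_z * s_y) / (m * s_zz - s_z * s_z)
    a, b = -slope, (s_y - slope * s_z) / m
    # Time-response function of the accumulated series, then de-accumulate.
    x1_hat = lambda k: (x0[0] - b / a) * math.exp(-a * (k - 1)) + b / a
    return x1_hat(n + 1) - x1_hat(n)

mags = [5.8, 5.2, 4.9, 4.6, 4.4]     # invented aftershock magnitudes
print(round(gm11_next(mags), 2))     # forecast for the next event
```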

  20. The SIMBAD astronomical database

    CERN Document Server

    Wenger, M; Egret, D; Dubois, P; Bonnarel, F; Borde, S; Genova, F; Jasniewicz, G; Laloe, S; Lesteven, S; Monier, R; Wenger, Marc; Ochsenbein, Francois; Egret, Daniel; Dubois, Pascal; Bonnarel, Francois; Borde, Suzanne; Genova, Francoise; Jasniewicz, Gerard; Laloe, Suzanne; Lesteven, Soizick; Monier, Richard

    2000-01-01

    Simbad is the reference database for identification and bibliography of astronomical objects. It contains identifications, `basic data', bibliography, and selected observational measurements for several million astronomical objects. Simbad is developed and maintained by CDS, Strasbourg. Building the database contents is achieved with the help of several contributing institutes. Scanning the bibliography is the result of the collaboration of CDS with bibliographers in Observatoire de Paris (DASGAL), Institut d'Astrophysique de Paris, and Observatoire de Bordeaux. When selecting catalogues and tables for inclusion, priority is given to optimal multi-wavelength coverage of the database, and to support of research developments linked to large projects. In parallel, the systematic scanning of the bibliography reflects the diversity and general trends of astronomical research. A WWW interface to Simbad is available at: http://simbad.u-strasbg.fr/Simbad

  1. The Life Support Database system

    Science.gov (United States)

    Likens, William C.

    1991-01-01

    The design and implementation of the database system are described with specific reference to data available from the Build-1 version and techniques for its utilization. The review of the initial documents for the Life Support Database is described in terms of title format and sequencing, and the users are defined as participants in NASA-sponsored life-support research. The software and hardware selections are based respectively on referential integrity and compatibility, and the implementation of the user interface is achieved by means of an applications-programming tool. The current Beta-Test implementation of the system includes several thousand acronyms and bibliographic references as well as chemical properties and exposure limits, equipment, construction materials, and mission data. In spite of modifications in the database the system is found to be effective and a potentially significant resource for the aerospace community.

  2. Databases and their application

    NARCIS (Netherlands)

    E.C. Grimm; R.H.W Bradshaw; S. Brewer; S. Flantua; T. Giesecke; A.M. Lézine; H. Takahara; J.W.,Jr Williams

    2013-01-01

    During the past 20 years, several pollen database cooperatives have been established. These databases are now constituent databases of the Neotoma Paleoecology Database, a public domain, multiproxy, relational database designed for Quaternary-Pliocene fossil data and modern surface samples. The poll

  3. Unit 66 - Database Creation

    OpenAIRE

    Unit 61, CC in GIS; National Center for Geographic Information and Analysis (UC Santa Barbara, SUNY at Buffalo, University of Maine)

    1990-01-01

    This unit examines the planning and management issues involved in the physical creation of the database. It describes some issues in database creation, key hardware parameters of the system, partitioning the database for tiles and layers and converting data for the database. It illustrates these through an example from the Flathead National Forest in northwestern Montana, where a resource management database was required.
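Partitioning a database into tiles, as in the unit's resource-management example, amounts to assigning each feature to a grid cell so that one map sheet can be loaded without scanning the whole database. A minimal sketch (coordinates and feature names are invented, not from the Flathead data):

```python
TILE = 10.0   # tile width in map units

def tile_of(x, y):
    """Grid cell containing the point (x, y)."""
    return (int(x // TILE), int(y // TILE))

features = [("stand_1", 3.0, 4.0),
            ("stand_2", 14.0, 4.0),
            ("road_7",  3.5, 4.2)]

# Partition: group features by the tile they fall in.
tiles = {}
for name, x, y in features:
    tiles.setdefault(tile_of(x, y), []).append(name)

print(sorted(tiles[(0, 0)]))   # ['road_7', 'stand_1']
```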

  4. NIST Databases on Atomic Spectra

    Science.gov (United States)

    Reader, J.; Wiese, W. L.; Martin, W. C.; Musgrove, A.; Fuhr, J. R.

    2002-11-01

    The NIST atomic and molecular spectroscopic databases now available on the World Wide Web through the NIST Physics Laboratory homepage include Atomic Spectra Database, Ground Levels and Ionization Energies for the Neutral Atoms, Spectrum of Platinum Lamp for Ultraviolet Spectrograph Calibration, Bibliographic Database on Atomic Transition Probabilities, Bibliographic Database on Atomic Spectral Line Broadening, and Electron-Impact Ionization Cross Section Database. The Atomic Spectra Database (ASD) [1] offers evaluated data on energy levels, wavelengths, and transition probabilities for atoms and atomic ions. Data are given for some 950 spectra and 70,000 energy levels. About 91,000 spectral lines are included, with transition probabilities for about half of these. Additional data resulting from our ongoing critical compilations will be included in successive new versions of ASD. We plan to include, for example, our recently published data for some 16,000 transitions covering most ions of the iron-group elements, as well as Cu, Kr, and Mo [2]. Our compilations benefit greatly from experimental and theoretical atomic-data research being carried out in the NIST Atomic Physics Division. A new compilation covering spectra of the rare gases in all stages of ionization, for example, revealed a need for improved data in the infrared. We have thus measured these needed data with our high-resolution Fourier transform spectrometer [3]. An upcoming new database will give wavelengths and intensities for the stronger lines of all neutral and singly-ionized atoms, along with energy levels and transition probabilities for the persistent lines [4]. A critical compilation of the transition probabilities of Ba I and Ba II [5] has been completed and several other compilations of atomic transition probabilities are nearing completion. These include data for all spectra of Na, Mg, Al, and Si [6]. 
Newly compiled data for selected ions of Ne, Mg, Si and S, will form the basis for a new

  5. World Religion Database

    OpenAIRE

    Dekker, Jennifer

    2009-01-01

    This article reviews the new database released by Brill entitled World Religion Database (WRD). It compares WRD to other religious demography tools available and rates the database on a 5 point scale.

  6. Development of two Danish building typologies for residential buildings

    DEFF Research Database (Denmark)

    Kragh, Jesper; Wittchen, Kim Bjarne

    2014-01-01

    performance of the residential building stock. Overall, the typologies consist of two types of building models—real example models and average designed models. The main purpose of developing the building typologies was to establish a tool able to calculate different energy-saving scenarios for the entire...... residential building stock. To make such calculations of scenarios, similar average designed building models were established based on extracted average values from the Danish Energy Performance Certification Scheme database. The two building typologies had the same overall composition, i.e., three main...... building types: single-family houses, terraced houses and blocks of flats. Each main building type is presented for nine periods representing age, typical building tradition and insulation levels. Finally, an energy balance model of the residential building stock was devised to validate the average...

  7. Beyond Bradley and Behrendt: Building a stronger evidence-base about Indigenous pathways and transitions into higher education

    Directory of Open Access Journals (Sweden)

    Jack Frawley

    2015-10-01

    Full Text Available Successive Australian governments have addressed the issue of social inclusion and equity in higher education in a number of policies and reviews, the most recent being the Review of Australian Higher Education, the Bradley Review (Bradley et al. 2008), and the Review of Higher Education Access and Outcomes for Aboriginal and Torres Strait Islander People, the Behrendt Review (Behrendt et al. 2012). The Bradley Review noted that although there had been success in areas of gender inequity in higher education, students from regional and remote areas, Indigenous students and those from low SES backgrounds were still seriously under-represented. The Bradley Review also found that the major barriers to the participation of students from low SES backgrounds were educational attainment, lower awareness of the long term benefits of higher education, less aspiration to participate, and the potential need for extra financial, academic or personal support once enrolled. As a result of the Bradley Review the Australian Government’s policy Transforming Australia’s Higher Education System announced two targets for the higher education sector: that by 2020, 20% of undergraduate university students should be from low socio-economic backgrounds; and, that by 2025, 40% of 25-34 year olds should hold a bachelor degree. To support this policy, the Higher Education Participation and Partnerships Program (now rebadged the Higher Education Participation Program (HEPP)) initiative came into being, with the participation component offering universities financial incentives to enroll and retain students from low SES backgrounds; and the partnerships component providing funding to raise student aspirations for higher education and working in partnership with other education institutions to do this (Gale & Parker 2013).

  8. Collecting Taxes Database

    Data.gov (United States)

    US Agency for International Development — The Collecting Taxes Database contains performance and structural indicators about national tax systems. The database contains quantitative revenue performance...

  9. USAID Anticorruption Projects Database

    Data.gov (United States)

    US Agency for International Development — The Anticorruption Projects Database (Database) includes information about USAID projects with anticorruption interventions implemented worldwide between 2007 and...

  10. Visual Attention Modelling for Subjective Image Quality Databases

    OpenAIRE

    Engelke, Ulrich; Maeder, Anthony; Zepernick, Hans-Jürgen

    2009-01-01

    The modelling of perceptual image quality metrics has experienced increased effort in recent years. In order to allow for model design, validation, and comparison, a number of subjective image quality databases have been made available to the research community. Most metrics that were designed using these databases assess the quality uniformly over the whole image, not taking into account stronger attention to salient regions of an image. In order to facilitate incorporation of visual attentio...

  11. Stronger Accent Following a Stroke: The Case of a Trilingual with Aphasia

    Science.gov (United States)

    Levy, Erika S.; Goral, Mira; De Diesbach, Catharine Castelluccio; Law, Franzo, II

    2011-01-01

    This study documents patterns of change in speech production in a multilingual with aphasia following a cerebrovascular accident (CVA). EC, a right-handed Hebrew-English-French trilingual man, had a left fronto-temporo-parietal CVA, after which he reported that his (native) Hebrew accent became stronger in his (second language) English. Recordings…

  12. Peptide-MHC class I stability is a stronger predictor of CTL immunogenicity than peptide affinity

    DEFF Research Database (Denmark)

    Harndahl, Mikkel Nors; Rasmussen, Michael; Nielsen, Morten;

    2012-01-01

    Peptide-MHC class I stability is a stronger predictor of CTL immunogenicity than peptide affinity. Mikkel Harndahl (a), Michael Rasmussen (a), Morten Nielsen (b), Søren Buus (a,*). (a) Laboratory of Experimental Immunology, Faculty of Health Sciences, University of Copenhagen, Denmark; (b) Center for Biological Seq...... al., 2007. J. Immunol. 178, 7890–7901. doi:10.1016/j.molimm.2012.02.025...

  13. A stronger patch test elicitation reaction to the allergen hydroxycitronellal plus the irritant sodium lauryl sulfate

    DEFF Research Database (Denmark)

    Heydorn, S; Andersen, Klaus Ejner; Johansen, Jeanne Duus;

    2003-01-01

    Household and cleaning products often contain both allergens and irritants. The aim of this double-blinded, randomized, paired study was to determine whether patch testing with an allergen (hydroxycitronellal) combined with an irritant [sodium lauryl sulfate (SLS)] cause a stronger patch test...

  14. KALIMER database development

    International Nuclear Information System (INIS)

    The KALIMER database is an advanced database for integrated management of liquid metal reactor design technology development, using Web applications. The KALIMER design database is composed of a results database, Inter-Office Communication (IOC), a 3D CAD database, and a reserved documents database. The results database holds research results from all phases of liquid metal reactor design technology development under the mid-term and long-term nuclear R and D programs. IOC is a linkage control system between sub-projects, used to share and integrate research results for KALIMER. The 3D CAD database provides a schematic overview of the KALIMER design structure. The reserved documents database was developed to manage documents and reports since project completion.

  15. KALIMER database development

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, Kwan Seong; Lee, Yong Bum; Jeong, Hae Yong; Ha, Kwi Seok

    2003-03-01

    The KALIMER database is an advanced database for integrated management of liquid metal reactor design technology development, using Web applications. The KALIMER design database is composed of a results database, Inter-Office Communication (IOC), a 3D CAD database, and a reserved documents database. The results database holds research results from all phases of liquid metal reactor design technology development under the mid-term and long-term nuclear R and D programs. IOC is a linkage control system between sub-projects, used to share and integrate research results for KALIMER. The 3D CAD database provides a schematic overview of the KALIMER design structure. The reserved documents database was developed to manage documents and reports since project completion.

  16. Cloud Databases: A Paradigm Shift in Databases

    Directory of Open Access Journals (Sweden)

    Indu Arora

    2012-07-01

    Full Text Available Relational databases ruled the Information Technology (IT) industry for almost 40 years. But the last few years have seen sea changes in the way IT is used and viewed. Stand-alone applications have been replaced with web-based applications, dedicated servers with multiple distributed servers, and dedicated storage with network storage. Cloud computing has become a reality due to its lower cost, scalability, and pay-as-you-go model. It is one of the biggest changes in IT since the rise of the World Wide Web. Cloud databases such as BigTable, Sherpa, and SimpleDB are becoming popular. They address the limitations of existing relational databases related to scalability, ease of use, and dynamic provisioning. Cloud databases are mainly used for data-intensive applications such as data warehousing, data mining, and business intelligence. These applications are read-intensive, scalable, and elastic in nature. Transactional data management applications such as banking, airline reservation, online e-commerce, and supply chain management applications are write-intensive. Databases supporting such applications require ACID (Atomicity, Consistency, Isolation, and Durability) properties, but these databases are difficult to deploy in the cloud. The goal of this paper is to review the state of the art in cloud databases and various architectures. It further assesses the challenges of developing cloud databases that meet user requirements and discusses popularly used cloud databases.
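    The ACID requirement mentioned above can be made concrete with a small sketch (ours, not the paper's; sqlite3 stands in for a transactional RDBMS, and the account table is invented): atomicity means a failed step undoes the whole transfer.

```python
import sqlite3

# A toy write-intensive workload: transferring funds between accounts.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (id TEXT PRIMARY KEY,"
             " balance INTEGER CHECK (balance >= 0))")
conn.executemany("INSERT INTO account VALUES (?, ?)",
                 [("alice", 100), ("bob", 50)])
conn.commit()

def transfer(conn, src, dst, amount):
    """Move funds atomically: either both updates apply, or neither does."""
    try:
        with conn:  # context manager commits on success, rolls back on error
            conn.execute("UPDATE account SET balance = balance - ?"
                         " WHERE id = ?", (amount, src))
            conn.execute("UPDATE account SET balance = balance + ?"
                         " WHERE id = ?", (amount, dst))
    except sqlite3.IntegrityError:
        pass  # overdraft violates the CHECK constraint; rollback keeps balances intact

transfer(conn, "alice", "bob", 500)  # would overdraw alice -> rolled back
```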

  17. Logical database design principles

    CERN Document Server

    Garmany, John; Clark, Terry

    2005-01-01

    INTRODUCTION TO LOGICAL DATABASE DESIGN: Understanding a Database; Database Architectures; Relational Databases; Creating the Database; System Development Life Cycle (SDLC); Systems Planning: Assessment and Feasibility; System Analysis: Requirements; System Analysis: Requirements Checklist; Models Tracking and Schedules; Design Modeling; Functional Decomposition Diagram; Data Flow Diagrams; Data Dictionary; Logical Structures and Decision Trees; System Design: Logical. SYSTEM DESIGN AND IMPLEMENTATION: The ER Approach; Entities and Entity Types; Attribute Domains; Attributes; Set-Valued Attributes; Weak Entities; Constraint

  18. Lessons Learned From Developing Reactor Pressure Vessel Steel Embrittlement Database

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Jy-An John [ORNL]

    2010-08-01

    Materials behavior caused by neutron irradiation in fission and/or fusion environments cannot be well understood without practical examination. An easily accessible material information system with a large material database and effective computers is necessary for the design of nuclear materials and for analyses or simulations of the phenomena. The Embrittlement Data Base (EDB) developed at ORNL is such a comprehensive collection of data. The EDB contains power reactor pressure vessel surveillance data, material test reactor data, foreign reactor data (through bilateral agreements authorized by the NRC), and fracture toughness data. The lessons learned from building the EDB program and the associated database management activity regarding material database design methodology, architecture, and the embedded QA protocol are described in this report. The development of the IAEA International Database on Reactor Pressure Vessel Materials (IDRPVM) and a comparison of the EDB and IAEA IDRPVM databases are provided in the report. The recommended database QA protocol and database infrastructure are also stated.

  19. The CATDAT damaging earthquakes database

    Science.gov (United States)

    Daniell, J. E.; Khazai, B.; Wenzel, F.; Vervaeck, A.

    2011-08-01

    The global CATDAT damaging earthquakes and secondary effects (tsunami, fire, landslides, liquefaction and fault rupture) database was developed to validate, remove discrepancies, and expand greatly upon existing global databases; and to better understand the trends in vulnerability, exposure, and possible future impacts of such historic earthquakes. Lack of consistency and errors in other earthquake loss databases frequently cited and used in analyses was a major shortcoming in the view of the authors which needed to be improved upon. Over 17 000 sources of information have been utilised, primarily in the last few years, to present data from over 12 200 damaging earthquakes historically, with over 7000 earthquakes since 1900 examined and validated before insertion into the database. Each validated earthquake includes seismological information, building damage, ranges of social losses to account for varying sources (deaths, injuries, homeless, and affected), and economic losses (direct, indirect, aid, and insured). Globally, a slightly increasing trend in economic damage due to earthquakes is not consistent with the greatly increasing exposure. The 1923 Great Kanto (214 billion USD damage; 2011 HNDECI-adjusted dollars) compared to the 2011 Tohoku (>300 billion USD at time of writing), 2008 Sichuan and 1995 Kobe earthquakes show the increasing concern for economic loss in urban areas as the trend should be expected to increase. Many economic and social loss values not reported in existing databases have been collected. Historical GDP (Gross Domestic Product), exchange rate, wage information, population, HDI (Human Development Index), and insurance information have been collected globally to form comparisons. This catalogue is the largest known cross-checked global historic damaging earthquake database and should have far-reaching consequences for earthquake loss estimation, socio-economic analysis, and the global reinsurance field.

  20. The CATDAT damaging earthquakes database

    Directory of Open Access Journals (Sweden)

    J. E. Daniell

    2011-08-01

    Full Text Available The global CATDAT damaging earthquakes and secondary effects (tsunami, fire, landslides, liquefaction and fault rupture) database was developed to validate, remove discrepancies, and expand greatly upon existing global databases; and to better understand the trends in vulnerability, exposure, and possible future impacts of such historic earthquakes.

    Lack of consistency and errors in other earthquake loss databases frequently cited and used in analyses was a major shortcoming in the view of the authors which needed to be improved upon.

    Over 17 000 sources of information have been utilised, primarily in the last few years, to present data from over 12 200 damaging earthquakes historically, with over 7000 earthquakes since 1900 examined and validated before insertion into the database. Each validated earthquake includes seismological information, building damage, ranges of social losses to account for varying sources (deaths, injuries, homeless, and affected), and economic losses (direct, indirect, aid, and insured).

    Globally, a slightly increasing trend in economic damage due to earthquakes is not consistent with the greatly increasing exposure. The 1923 Great Kanto ($214 billion USD damage; 2011 HNDECI-adjusted dollars) compared to the 2011 Tohoku (>$300 billion USD at time of writing), 2008 Sichuan and 1995 Kobe earthquakes show the increasing concern for economic loss in urban areas as the trend should be expected to increase. Many economic and social loss values not reported in existing databases have been collected. Historical GDP (Gross Domestic Product), exchange rate, wage information, population, HDI (Human Development Index), and insurance information have been collected globally to form comparisons.

    This catalogue is the largest known cross-checked global historic damaging earthquake database and should have far-reaching consequences for earthquake loss estimation, socio-economic analysis, and the global

  1. Routing Protocols for Transmitting Large Databases or Multi-databases Systems

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Most knowledgeable people agree that networking and routing technologies have been around for about 25 years. Routing is simultaneously the most complicated function of a network and the most important. Likewise, more than 70% of computer application fields are MIS applications. So the challenge in building and using an MIS in the network is developing the means to find, access, and communicate with large databases or multi-database systems. Because general databases are not time-continuous (in fact, they cannot be streamed), we cannot obtain reliable and secure quality of service by deleting some unimportant datagrams during database transmission. In this article, we discuss which kind of routing protocol is best suited for the transmission of large databases or multi-database systems in networks.

  2. Building Satisfaction Model for Information Users Based on Perceived Quality of Academic Database Websites

    Institute of Scientific and Technical Information of China (English)

    李莉; 甘利人; 谢兆霞

    2009-01-01

    In the network era, how to examine the effectiveness and acceptability of a website from the user's point of view has drawn attention from both industry and theoretical circles. But for academic database websites, which provide information products and services, research on information user satisfaction is only beginning to develop. This paper explores the perceived quality of information users, identifies the key dimensions of the quality of information products and services provided by academic database websites, and develops an assessment model of information user satisfaction. Following several hypotheses proposed on the basis of this framework, attention is given to building structural equation models using the Partial Least Squares method and to hypothesis testing on the basis of the samples. As a result, the key dimensions of perceived quality and other factors are examined. The validated information user satisfaction model can be widely applied to academic database websites, with a view to providing guidance for their management decisions.

  3. A stronger entanglement monogamy inequality in a 2×2×3 system

    International Nuclear Information System (INIS)

    In this paper, we prove a stronger entanglement monogamy inequality for a 2×2×3 system's pure state |Ψ⟩_ABC. Specifically, we show that the linear entropy of ρ_A, which is the entanglement between A and BC, is always larger than the sum of the square of the concurrence between A and B and the square of the concurrence of assistance between A and C. Our proof is based on direct generalizations of the qubit system's results. Our inequality is stronger than the known monogamy inequality of concurrence and shows that the entanglement of assistance always comes from the existing entanglement. However, our inequality also shows that, unlike the three-qubit case, in higher-dimensional systems the entanglement between A and BC cannot be completely transformed into bipartite entanglement with assistance. Through our proof, we also give some cases in which the inequality reduces to an equality.

  4. Bots, #StrongerIn, and #Brexit: Computational Propaganda during the UK-EU Referendum

    OpenAIRE

    Howard, Philip N.; Kollanyi, Bence

    2016-01-01

    Bots are social media accounts that automate interaction with other users, and they are active on the StrongerIn-Brexit conversation happening over Twitter. These automated scripts generate content through these platforms and then interact with people. Political bots are automated accounts that are particularly active on public policy issues, elections, and political crises. In this preliminary study on the use of political bots during the UK referendum on EU membership, we analyze the tweeti...

  5. Cell Centred Database (CCDB)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Cell Centered Database (CCDB) is a web accessible database for high resolution 2D, 3D and 4D data from light and electron microscopy, including correlated...

  6. Native Health Research Database

    Science.gov (United States)


  7. AIDSinfo Drug Database

    Science.gov (United States)

    The AIDSinfo Drug Database provides drug information for health care providers and patients, searchable by drug name.

  8. Database Urban Europe

    NARCIS (Netherlands)

    Sleutjes, B.; de Valk, H.A.G.

    2016-01-01

    Database Urban Europe: ResSegr database on segregation in The Netherlands. Collaborative research on residential segregation in Europe 2014–2016 funded by JPI Urban Europe (Joint Programming Initiative Urban Europe).

  9. Scopus database: a review.

    Science.gov (United States)

    Burnham, Judy F

    2006-03-08

    The Scopus database provides access to STM journal articles and the references included in those articles, allowing the searcher to search both forward and backward in time. The database can be used for collection development as well as for research. This review provides information on the key points of the database and compares it to Web of Science. Neither database is all-inclusive; rather, the two complement each other. If a library can afford only one, the choice must be based on institutional needs.

  10. Web database development

    OpenAIRE

    Tsardas, Nikolaos A.

    2001-01-01

    This thesis explores the concept of Web Database Development using Active Server Pages (ASP) and Java Server Pages (JSP). These are among the leading technologies in web database development. The focus of this thesis was to analyze and compare the ASP and JSP technologies, exposing their capabilities, limitations, and the differences between them. Specifically, issues related to back-end connectivity using Open Database Connectivity (ODBC) and Java Database Connectivity (JDBC), application ar...

  11. Refactoring of a Database

    OpenAIRE

    Dsousa, Ayeesha; Bhatia, Shalini

    2009-01-01

    The technique of database refactoring is all about applying disciplined and controlled techniques to change an existing database schema. The problem is to successfully create a Database Refactoring Framework for databases. This paper concentrates on the feasibility of adapting this concept to work as a generic template. To retain the constraints regardless of the modifications to the metadata, the paper proposes a MetaData Manipulation Tool to facilitate change. The tool adopts a Template Des...
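    A minimal sketch of one such schema refactoring (illustrative only, using sqlite3; this is not the paper's MetaData Manipulation Tool, and the table is invented): renaming a column by rebuilding the table, so that the data and constraints survive the metadata change.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (id INTEGER PRIMARY KEY, fname TEXT NOT NULL)")
conn.execute("INSERT INTO person (fname) VALUES ('Ada')")

def rename_column(conn, table, old, new, new_schema):
    """Rebuild `table` under `new_schema`, copying column `old` into `new`."""
    with conn:  # one transaction: the refactoring is all-or-nothing
        conn.execute(f"CREATE TABLE {table}_new {new_schema}")
        conn.execute(f"INSERT INTO {table}_new (id, {new})"
                     f" SELECT id, {old} FROM {table}")
        conn.execute(f"DROP TABLE {table}")
        conn.execute(f"ALTER TABLE {table}_new RENAME TO {table}")

rename_column(conn, "person", "fname", "first_name",
              "(id INTEGER PRIMARY KEY, first_name TEXT NOT NULL)")
print(conn.execute("SELECT first_name FROM person").fetchone())  # ('Ada',)
```

The NOT NULL constraint is carried over in the new schema, so the rule survives even though the metadata changed.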

  12. Scopus database: a review

    OpenAIRE

    Burnham, Judy F.

    2006-01-01

    The Scopus database provides access to STM journal articles and the references included in those articles, allowing the searcher to search both forward and backward in time. The database can be used for collection development as well as for research. This review provides information on the key points of the database and compares it to Web of Science. Neither database is all-inclusive; rather, the two complement each other. If a library can afford only one, the choice must be based on institutional needs.

  13. Extracting Schema from an OEM Database

    Institute of Scientific and Technical Information of China (English)

    沈一栋

    1998-01-01

    While the schema-less feature of the OEM (Object Exchange Model) gives flexibility in representing semi-structured data, it brings difficulty in formulating database queries. Extracting schema from an OEM database has therefore become an important research topic. This paper presents a new approach to this topic with the following features. (1) In addition to representing the nested label structure of an OEM database, the proposed OEM schema keeps up-to-date information about instance objects of the database. The object-level information is useful in speeding up query evaluation. (2) The OEM schema is explicitly represented as a label-set, which is easy to construct and update. (3) The OEM schema of a database is statically built and dynamically updated. The time complexity of building the OEM schema is linear in the size of the OEM database. (4) The approach is applicable to a wide range of areas where the underlying schema is much smaller than the database itself (e.g. data warehouses that are made from a set of heterogeneous databases).
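    The label-set idea can be sketched in a toy encoding (ours, not the paper's OEM implementation): the schema is the set of label paths occurring in the database, computed in one linear pass over the nested objects.

```python
def label_set(obj, prefix=()):
    """Collect every label path present in a nested, labelled object."""
    paths = set()
    if isinstance(obj, dict):        # a complex OEM object: labelled sub-objects
        for label, child in obj.items():
            paths.add(prefix + (label,))
            paths |= label_set(child, prefix + (label,))
    elif isinstance(obj, list):      # repeated sub-objects under the same label
        for child in obj:
            paths |= label_set(child, prefix)
    return paths                     # atomic values contribute no further labels

db = {"library": {"book": [{"title": "A", "author": "X"},
                           {"title": "B", "year": 1998}]}}
schema = label_set(db)
print(sorted(schema))
```

Because the two book objects have different labels, the label-set schema records the union of their paths, which is exactly why it is usually much smaller than the database itself.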

  14. Automated Oracle database testing

    CERN Document Server

    CERN. Geneva

    2014-01-01

    Ensuring database stability and steady performance in the modern world of agile computing is a major challenge. Various changes happening at any level of the computing infrastructure: OS parameters & packages, kernel versions, database parameters & patches, or even schema changes, all can potentially harm production services. This presentation shows how an automatic and regular testing of Oracle databases can be achieved in such agile environment.
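    The kind of automated, regular check described above can be sketched as follows (a stand-in using sqlite3, not the presenter's Oracle tooling; the tables and invariants are invented): after any change to the environment, rerun a fixed set of invariants against the database.

```python
import sqlite3

def smoke_test(conn):
    """Run a fixed battery of invariants; return the pass/fail map."""
    checks = {
        "connectivity": lambda: conn.execute("SELECT 1").fetchone() == (1,),
        "schema_has_users": lambda: conn.execute(
            "SELECT name FROM sqlite_master WHERE type='table' AND name='users'"
        ).fetchone() is not None,
        "no_orphan_rows": lambda: conn.execute(
            "SELECT COUNT(*) FROM orders o LEFT JOIN users u"
            " ON o.user_id = u.id WHERE u.id IS NULL"
        ).fetchone() == (0,),
    }
    return {name: check() for name, check in checks.items()}

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY)")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER)")
conn.execute("INSERT INTO users VALUES (1)")
conn.execute("INSERT INTO orders VALUES (1, 1)")
print(smoke_test(conn))  # all checks pass on a healthy database
```

Scheduling such a battery after every parameter, patch, or schema change is what turns ad-hoc verification into the regular testing the talk advocates.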

  15. CTD_DATABASE - Cascadia tsunami deposit database

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — The Cascadia Tsunami Deposit Database contains data on the location and sedimentological properties of tsunami deposits found along the Cascadia margin. Data have...

  16. Scaling up ATLAS Database Release Technology for the LHC Long Run

    Science.gov (United States)

    Borodin, M.; Nevski, P.; Vaniachine, A.; ATLAS Collaboration

    2011-12-01

    To overcome scalability limitations in database access on the Grid, ATLAS introduced the Database Release technology replicating databases in files. For years Database Release technology assured scalable database access for Monte Carlo production on the Grid. Since previous CHEP, Database Release technology was used successfully in ATLAS data reprocessing on the Grid. Frozen Conditions DB snapshot guarantees reproducibility and transactional consistency isolating Grid data processing tasks from continuous conditions updates at the "live" Oracle server. Database Release technology fully satisfies the requirements of ATLAS data reprocessing and Monte Carlo production. We parallelized the Database Release build workflow to avoid linear dependency of the build time on the length of LHC data-taking period. In recent data reprocessing campaigns the build time was reduced by an order of magnitude thanks to a proven master-worker architecture used in the Google MapReduce. We describe further Database Release optimizations scaling up the technology for the LHC long run.
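    The master-worker parallelization described can be sketched generically (illustrative only, not the ATLAS code; the run identifiers and payloads are invented): the master splits the build into independent chunks, workers process them in parallel, and the master merges the partial results.

```python
from multiprocessing import Pool

def build_chunk(runs):
    """Worker: 'package' one chunk of data-taking runs (a toy transform here)."""
    return {run: f"payload-{run}" for run in runs}

def build_release(all_runs, n_workers=4, chunk=8):
    """Master: partition the work, farm it out, merge the partial builds."""
    chunks = [all_runs[i:i + chunk] for i in range(0, len(all_runs), chunk)]
    with Pool(n_workers) as pool:
        partials = pool.map(build_chunk, chunks)  # parallel, order-preserving
    release = {}
    for part in partials:                         # master merges partial builds
        release.update(part)
    return release

if __name__ == "__main__":
    release = build_release(list(range(32)))
    print(len(release))  # 32
```

Because each chunk is independent, the build time stops growing linearly with the length of the data-taking period and instead scales with the number of workers.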

  17. Scaling up ATLAS Database Release Technology for the LHC Long Run

    International Nuclear Information System (INIS)

    To overcome scalability limitations in database access on the Grid, ATLAS introduced the Database Release technology replicating databases in files. For years Database Release technology assured scalable database access for Monte Carlo production on the Grid. Since previous CHEP, Database Release technology was used successfully in ATLAS data reprocessing on the Grid. Frozen Conditions DB snapshot guarantees reproducibility and transactional consistency isolating Grid data processing tasks from continuous conditions updates at the 'live' Oracle server. Database Release technology fully satisfies the requirements of ATLAS data reprocessing and Monte Carlo production. We parallelized the Database Release build workflow to avoid linear dependency of the build time on the length of LHC data-taking period. In recent data reprocessing campaigns the build time was reduced by an order of magnitude thanks to a proven master-worker architecture used in the Google MapReduce. We describe further Database Release optimizations scaling up the technology for the LHC long run.

  18. Plant and animal communities along the Swedish Baltic Sea coast - the building of a database of quantitative data collected by SCUBA divers, its use and some GIS applications in the Graesoe area

    Energy Technology Data Exchange (ETDEWEB)

    Sandman, Antonia; Kautsky, Hans [Stockholm Univ. (Sweden). Dept. of Systems Ecology

    2005-03-01

    The aim of the project was to compile a single database with quantitative data collected by SCUBA divers from the whole Swedish Baltic Sea coast. Data of plant and animal biomass, together with position, depth and type of substrate from 19 areas along the Swedish coast from the county of Blekinge to Kalix in the Bothnian Bay were compiled in a single database. In all, the database contains 2,170 records (samples) from 179 different stations where in total 161 plant and 145 animal species have been found. The data were then illustrated by the geographical distribution of plant and animal biomass and by constructing a model to estimate future changes of the plant and animal communities in the Graesoe area in the Aaland Sea applying GIS techniques. To illustrate the opportunities of the database the change of the composition of benthic plant and animal biomass with salinity was calculated. The proportion of marine species increased with increasing salinity and the benthic biomass was at its highest in the southern Baltic proper. Quantitative data from Grepen and the Graesoe-Singoe area were used to calculate present biomass in the Graesoe area. A scenario of the change in biomass distribution and total biomass caused by shore displacement was created using data from Raaneaa and Kalix in the Bothnian Bay. To map the biomass distribution the material was divided into different depth intervals. The change of biomass with time was calculated as a function of salinity change and reduction of the available area, caused by shore displacement. The total biomass for all plants and animals in the investigated area was 50,500 tonnes at present. In 2,000 years the total biomass will be 25,000 tonnes and in 4,000 years 3,600 tonnes due to shore displacement causing a decrease in both salinity and available substrate. To make an estimate of the species distribution and a rough estimate of their biomass in an unknown geographic area, the type of substrate, the depth and the wave

  19. A technique on automatic land-use database reconstruction based on scanning land-use map

    Institute of Scientific and Technical Information of China (English)

    LI; Xiaojuan; GONG; Huili; YIN; Lianwang; SUN; Yonghua; YANG; Lingli; WANG; Yanggang

    2006-01-01

    Although a land-cover database is very important to national land use including urban planning and land-use management, it is very laborious and time-consuming to build through digitization of paper land-use maps (1:10000) and data input by hand. Here we propose a new, high-level, automatic technique to build a land-use database, which has proved useful and practical in building a land-use database of Baotou City.

  20. Nuclear power economic database

    International Nuclear Information System (INIS)

    The nuclear power economic database (NPEDB), based on ORACLE V6.0, consists of three parts: an economic database of nuclear power stations, an economic database of the nuclear fuel cycle, and an economic database of nuclear power planning and the nuclear environment. The economic database of nuclear power stations includes data on general economics, technique, capital cost and benefit, etc. The economic database of the nuclear fuel cycle includes data on technique and nuclear fuel prices. The economic database of nuclear power planning and the nuclear environment includes data on energy history, forecasts, energy balance, electric power and energy facilities.

  1. An Interoperable Cartographic Database

    Directory of Open Access Journals (Sweden)

    Slobodanka Ključanin

    2007-05-01

    Full Text Available The concept of producing a prototype of an interoperable cartographic database is explored in this paper, including the possibilities of integrating different geospatial data into the database management system and visualizing them on the Internet. The implementation includes vectorization of the concept of a single map page, creation of the cartographic database in an object-relational database, spatial analysis, and definition and visualization of the database content in the form of a map on the Internet.

  2. Big Data Analytics of City Wide Building Energy Declarations

    OpenAIRE

    Ma, Yixiao

    2015-01-01

    This thesis explores the building energy performance of the domestic sector in the city of Stockholm based on the building energy declaration database. The aims of this master thesis are to analyze the big data sets of around 20,000 buildings in the Stockholm region and to explore the correlation between building energy performance and the different internal and external factors affecting building energy consumption, such as building energy systems, building vintages, etc. By using clustering method,...

  3. The Influence of Building a Distributed Archival Database System on Archival Compilation

    Institute of Scientific and Technical Information of China (English)

    郑慧; 覃筱媚

    2014-01-01

    At the beginning of the twenty-first century, archival compilation in the network environment became one of the hot issues. However, people have paid little attention to applying the distributed archival database system to archival compilation. By setting up an audience module, an archival original database module, an experts module and an editing module, the distributed archival database system can have a positive impact on archival compilation work, on the dissemination of archival compilation products and on their open range, and it will play an important role in the future of archival compilation.

  4. Organizing a breast cancer database: data management.

    Science.gov (United States)

    Yi, Min; Hunt, Kelly K

    2016-06-01

    Developing and organizing a breast cancer database can provide data and serve as a valuable research tool for those interested in the etiology, diagnosis, and treatment of cancer. Depending on the research setting, the quality of the data can be a major issue. Ensuring that the data collection process does not introduce inaccuracies helps assure the overall quality of subsequent analyses. Data management is work that involves the planning, development, implementation, and administration of systems for the acquisition, storage, and retrieval of data while protecting it by implementing high security levels. A properly designed database provides you with access to up-to-date, accurate information. Database design is an important component of application design. If you take the time to design your databases properly, you'll be rewarded with a solid application foundation on which you can build the rest of your application.
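The design principles described above (accuracy enforced at collection time, a solid schema foundation) can be sketched in a few lines. This is a hypothetical, minimal example using SQLite; the table and column names are illustrative assumptions, not the authors' actual research schema.

```python
import sqlite3

# In-memory database with referential-integrity checks enabled.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.executescript("""
CREATE TABLE patient (
    patient_id INTEGER PRIMARY KEY,
    birth_year INTEGER NOT NULL CHECK (birth_year BETWEEN 1900 AND 2100)
);
CREATE TABLE diagnosis (
    diagnosis_id INTEGER PRIMARY KEY,
    patient_id   INTEGER NOT NULL REFERENCES patient(patient_id),
    diagnosed_on TEXT NOT NULL,          -- ISO-8601 date
    stage        TEXT CHECK (stage IN ('0','I','II','III','IV'))
);
""")

# Valid data passes the constraints...
conn.execute("INSERT INTO patient VALUES (1, 1960)")
conn.execute("INSERT INTO diagnosis VALUES (1, 1, '2015-04-02', 'II')")

# ...while a record for an unknown patient is rejected at entry time,
# so the inaccuracy never reaches later analyses.
try:
    conn.execute("INSERT INTO diagnosis VALUES (2, 99, '2015-05-01', 'I')")
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```

The point is that constraints in the schema, rather than discipline in the data-entry staff, carry the burden of keeping the collection process from contributing inaccuracies.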

  5. Representations built from a true geographic database

    DEFF Research Database (Denmark)

    Bodum, Lars

    2005-01-01

    The development of a system for geovisualisation under the Centre for 3D GeoInformation at Aalborg University, Denmark, has exposed the need for a rethinking of the representation of virtual environments. Now that almost everything is possible (due to technological advances in computer graphics...... a representation based on geographic and geospatial principles. The system GRIFINOR, developed at 3DGI, Aalborg University, DK, is capable of creating this object-orientation and furthermore does this on top of a true Geographic database. A true Geographic database can be characterized as a database that can cover...... the whole world in 3d and with a spatial reference given by geographic coordinates. Built on top of this is a customised viewer, based on the Xith(Java) scenegraph. The viewer reads the objects directly from the database and solves the question about Level-Of-Detail on buildings, orientation in relation...

  7. Database for foundry engineers – simulationDB – a modern database storing simulation results

    Directory of Open Access Journals (Sweden)

    P. Malinowski

    2010-11-01

    Full Text Available Purpose: The main aim of this paper is to build a specific database system for collecting, analysing and searching simulation results. Design/methodology/approach: The system was prepared using a client-server architecture; a GUI (Graphical User Interface) was then prepared. Findings: A new database system for foundries was developed. Practical implications: System development is in progress, and practical implementation will take place in an iron foundry next year. Originality/value: The original value of this paper is an innovative database system for storing and analysing simulation results.

  8. Measurement database build-up and its application in the A1 project in the North China exploration area

    Institute of Scientific and Technical Information of China (English)

    刘素花; 满雪峰; 于九申; 刘先玲; 庞福建

    2012-01-01

    Scientific, standardized and visualized management of surveying data makes it possible to provide management departments and exploration designers with timely, accurate and intuitive information on the distribution and status of data from previous exploration areas. This paper introduces in detail the data sources of the geophysical surveying database for the exploration area and the method used to build it, and describes the database's practical application in exploration deployment, engineering design and cooperative processing in the North China exploration area.

  9. [Biomechanical characteristics of human fetal membranes. Preterm fetal membranes are stronger than term fetal membranes].

    Science.gov (United States)

    Rangaswamy, N; Abdelrahim, A; Moore, R M; Uyen, L; Mercer, B M; Mansour, J M; Kumar, D; Sawady, J; Moore, J J

    2011-06-01

    The purpose of this study was to determine the biomechanical characteristics of human fetal membranes (FM) throughout gestation. Biomechanical properties were determined for 115 FM of 23-41 weeks gestation using our previously described methodology. The areas of membrane immediately adjacent to the strongest and weakest tested spots were sampled for histomorphometric analysis. Clinical data on the patients whose FM were examined were also collected. FM less than 28 weeks gestation were associated with higher incidence of abruption and chorioamnionitis. Topographically FM at all gestations had heterogeneous biomechanical characteristics over their surfaces with distinct weak areas. The most premature membranes were the strongest. FM strength represented by rupture force and work to rupture decreased with increasing gestation in both weak and strong regions of FM. This decrease in FM strength was most dramatic at more than 38 weeks gestation. The FM component amnion-chorion sublayers were thinner in the weak areas compared to strong areas. Compared to term FM, preterm FM are stronger but have similar heterogeneous weak and strong areas. Following a gradual increase in FM weakness with increasing gestation, there is a major drop-off at term 38 weeks gestation. The FM weak areas are thinner than the stronger areas. Whether the difference in thickness is enough to account for the strength differences is unknown.

  10. Database design and database administration for a kindergarten

    OpenAIRE

    Vítek, Daniel

    2009-01-01

    The bachelor thesis deals with creation of database design for a standard kindergarten, installation of the designed database into the database system Oracle Database 10g Express Edition and demonstration of the administration tasks in this database system. The verification of the database was proved by a developed access application.

  11. Genome Statute and Legislation Database

    Science.gov (United States)

    The Genome Statute and Legislation Database is comprised ... the National Society of Genetic Counselors. You may select one or more ...

  12. Hazard Analysis Database Report

    CERN Document Server

    Grams, W H

    2000-01-01

    The Hazard Analysis Database was developed in conjunction with the hazard analysis activities conducted in accordance with DOE-STD-3009-94, Preparation Guide for U.S. Department of Energy Nonreactor Nuclear Facility Safety Analysis Reports, for HNF-SD-WM-SAR-067, Tank Farms Final Safety Analysis Report (FSAR). The FSAR is part of the approved Authorization Basis (AB) for the River Protection Project (RPP). This document describes, identifies, and defines the contents and structure of the Tank Farms FSAR Hazard Analysis Database and documents the configuration control changes made to the database. The Hazard Analysis Database contains the collection of information generated during the initial hazard evaluations and the subsequent hazard and accident analysis activities. The Hazard Analysis Database supports the preparation of Chapters 3, 4, and 5 of the Tank Farms FSAR and the Unreviewed Safety Question (USQ) process and consists of two major, interrelated data sets: (1) Hazard Analysis Database: Data from t...

  13. Conditioning Probabilistic Databases

    CERN Document Server

    Koch, Christoph

    2008-01-01

    Past research on probabilistic databases has studied the problem of answering queries on a static database. Application scenarios of probabilistic databases, however, often involve the conditioning of a database using additional information in the form of new evidence. The conditioning problem is thus to transform a probabilistic database of priors into a posterior probabilistic database, which is materialized for subsequent query processing or further refinement. It turns out that the conditioning problem is closely related to the problem of computing exact tuple confidence values. It is known that exact confidence computation is an NP-hard problem. This has led researchers to consider approximation techniques for confidence computation. However, neither conditioning nor exact confidence computation can be solved using such techniques. In this paper we present efficient techniques for both problems. We study several problem decomposition methods and heuristics that are based on the most successful search techn...
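The conditioning semantics described in this abstract can be illustrated with a toy brute-force sketch: for a tuple-independent probabilistic database, enumerate all possible worlds, keep those consistent with the evidence, and renormalize. This is an illustration of the semantics only, under assumed tuple names; the paper's contribution is precisely to avoid this exponential enumeration.

```python
from itertools import product

# Independent prior probabilities for three hypothetical tuples.
priors = {"t1": 0.6, "t2": 0.5, "t3": 0.3}

def posterior(priors, evidence):
    """Posterior tuple marginals given evidence(world) -> bool."""
    names = list(priors)
    z = 0.0                             # total probability of the evidence
    marg = {n: 0.0 for n in names}
    for bits in product([0, 1], repeat=len(names)):
        world = dict(zip(names, bits))
        p = 1.0                         # probability of this possible world
        for n in names:
            p *= priors[n] if world[n] else 1 - priors[n]
        if evidence(world):
            z += p
            for n in names:
                if world[n]:
                    marg[n] += p
    return {n: marg[n] / z for n in names}

# Condition on the evidence that t1 or t2 is present.
post = posterior(priors, lambda w: w["t1"] or w["t2"])
# t3 is independent of the evidence, so its marginal stays 0.3
# (up to float rounding); t1 rises from 0.6 to 0.6/0.8 = 0.75.
print(post)
```

The posterior database materialized here could then be queried like any probabilistic database, which is the workflow the abstract refers to.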

  14. Database Optimizing Services

    Directory of Open Access Journals (Sweden)

    Adrian GHENCEA

    2010-12-01

    Full Text Available Almost every organization has a database at its centre. The database provides support for conducting different activities, whether production, sales and marketing, or internal operations. Every day, a database is accessed for help in strategic decisions. Meeting such needs therefore requires high-quality security and availability. Those needs can be met using a DBMS (Database Management System), which is, in fact, the software for a database. Technically speaking, it is software which uses a standard method of cataloguing, recovery, and running different data queries. A DBMS manages the input data, organizes it, and provides ways for its users or other programs to modify or extract the data. Managing the database is an operation that requires periodic updates, optimization and monitoring.

  15. Supply Chain Initiatives Database

    Energy Technology Data Exchange (ETDEWEB)

    None

    2012-11-01

    The Supply Chain Initiatives Database (SCID) presents innovative approaches to engaging industrial suppliers in efforts to save energy, increase productivity and improve environmental performance. This comprehensive and freely-accessible database was developed by the Institute for Industrial Productivity (IIP). IIP acknowledges Ecofys for their valuable contributions. The database contains case studies searchable according to the types of activities buyers are undertaking to motivate suppliers, target sector, organization leading the initiative, and program or partnership linkages.

  16. Database management systems

    CERN Document Server

    Pallaw, Vijay Krishna

    2010-01-01

    The text covers the fundamental concepts of Database Management Systems and is a complete guide to their practical implementation. These concepts include SQL and PL/SQL, as well as aspects of database design, database languages, and database system implementation. The entire book is divided into five units to ensure the smooth flow of the subject. This methodology makes it very useful for students as well as teachers.

  17. Web Technologies And Databases

    OpenAIRE

    Irina-Nicoleta Odoraba

    2011-01-01

    A database is a collection of many types of occurrences of logical records, containing relationships between records and elementary data aggregates. A database management system (DBMS) is a set of programs for creating and operating a database. Theoretically, any relational DBMS can be used to store the data needed by a Web server. In practice, it has been observed that simple DBMSs such as FoxPro or Access are not suitable for Web sites that are used intensively. For large-scale Web applications...

  18. Database Application Schema Forensics

    OpenAIRE

    Hector Quintus Beyers; Olivier, Martin S; Hancke, Gerhard P.

    2014-01-01

    The application schema layer of a Database Management System (DBMS) can be modified to deliver results that may warrant a forensic investigation. Table structures can be corrupted by changing the metadata of a database or operators of the database can be altered to deliver incorrect results when used in queries. This paper will discuss categories of possibilities that exist to alter the application schema with some practical examples. Two forensic environments are introduced where a forensic ...

  19. An organic database system

    OpenAIRE

    Kersten, Martin; Siebes, Arno

    1999-01-01

    The pervasive penetration of database technology may suggest that we have reached the end of the database research era. The contrary is true. Emerging technology, in hardware, software, and connectivity, brings a wealth of opportunities to push technology to a new level of maturity. Furthermore, ground breaking results are obtained in Quantum- and DNA-computing using nature as inspiration for its computational models. This paper provides a vision on a new brand of database architectures, i.e....

  20. Categorical Database Generalization

    Institute of Scientific and Technical Information of China (English)

    LIU Yaolin; Martin Molenaar; AI Tinghua; LIU Yanfang

    2003-01-01

    This paper focuses on the issues of categorical database generalization and emphasizes the roles of the supporting data model, integrated data model, spatial analysis and semantic analysis in database generalization. The framework contents of categorical database generalization transformation are defined. This paper presents an integrated spatial supporting data structure, a semantic supporting model and a similarity model for categorical database generalization. The concept of the transformation unit is proposed in generalization.

  1. Nuclear Science References Database

    OpenAIRE

    PRITYCHENKO B.; Běták, E.; B. Singh; Totans, J.

    2013-01-01

    The Nuclear Science References (NSR) database, together with its associated Web interface, is the world's only comprehensive source of easily accessible low- and intermediate-energy nuclear physics bibliographic information for more than 210,000 articles since the beginning of nuclear science. The weekly-updated NSR database provides essential support for nuclear data evaluation, compilation and research activities. The principles of the database and Web application development and maintenance...

  2. Fingerprint databases for theorems

    OpenAIRE

    Billey, Sara C.; Tenner, Bridget E.

    2013-01-01

    We discuss the advantages of searchable, collaborative, language-independent databases of mathematical results, indexed by "fingerprints" of small and canonical data. Our motivating example is Neil Sloane's massively influential On-Line Encyclopedia of Integer Sequences. We hope to encourage the greater mathematical community to search for the appropriate fingerprints within each discipline, and to compile fingerprint databases of results wherever possible. The benefits of these databases are...

  3. Searching Databases with Keywords

    Institute of Scientific and Technical Information of China (English)

    Shan Wang; Kun-Long Zhang

    2005-01-01

    Traditionally, the SQL query language is used to search the data in databases. However, it is inappropriate for end-users, since it is complex and hard to learn. End-users need to search databases with keywords, as in web search engines. This paper presents a survey of work on keyword search in databases. It also includes a brief introduction to the SEEKER system, which has been developed.
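The baseline idea behind keyword search in databases can be sketched as follows: scan the text columns of every table for the keyword and return matching rows, with no SQL knowledge required of the user. This is a naive illustration with an assumed toy schema, not the SEEKER system's actual algorithm, which also joins and ranks results.

```python
import sqlite3

# Toy relational database with two tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE author(name TEXT);
CREATE TABLE paper(title TEXT);
INSERT INTO author VALUES ('Shan Wang');
INSERT INTO paper VALUES ('Searching Databases with Keywords');
""")

def keyword_search(conn, keyword):
    """Return (table, column, value) for every cell containing keyword."""
    hits = []
    tables = [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table'")]
    for t in tables:
        cols = [c[1] for c in conn.execute(f"PRAGMA table_info({t})")]
        for c in cols:
            for (val,) in conn.execute(
                    f"SELECT {c} FROM {t} WHERE {c} LIKE ?",
                    (f"%{keyword}%",)):
                hits.append((t, c, val))
    return hits

print(keyword_search(conn, "Databases"))
```

Real systems go further by connecting matches across tables through foreign keys and ranking the joined results, but the user-facing contract is the same: a keyword in, relevant rows out.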

  4. Smart Location Database - Download

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Smart Location Database (SLD) summarizes over 80 demographic, built environment, transit service, and destination accessibility attributes for every census...

  5. Database principles programming performance

    CERN Document Server

    O'Neil, Patrick

    2014-01-01

    Database: Principles Programming Performance provides an introduction to the fundamental principles of database systems. This book focuses on database programming and the relationships between principles, programming, and performance.Organized into 10 chapters, this book begins with an overview of database design principles and presents a comprehensive introduction to the concepts used by a DBA. This text then provides grounding in many abstract concepts of the relational model. Other chapters introduce SQL, describing its capabilities and covering the statements and functions of the programmi

  6. Smart Location Database - Service

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Smart Location Database (SLD) summarizes over 80 demographic, built environment, transit service, and destination accessibility attributes for every census...

  7. IVR EFP Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This database contains trip-level reports submitted by vessels participating in Exempted Fishery projects with IVR reporting requirements.

  8. Residency Allocation Database

    Data.gov (United States)

    Department of Veterans Affairs — The Residency Allocation Database is used to determine allocation of funds for residency programs offered by Veterans Affairs Medical Centers (VAMCs). Information...

  9. Veterans Administration Databases

    Science.gov (United States)

    The Veterans Administration Information Resource Center provides database and informatics experts, customer service, expert advice, information products, and web technology to VA researchers and others.

  10. Database Publication Practices

    DEFF Research Database (Denmark)

    Bernstein, P.A.; DeWitt, D.; Heuer, A.;

    2005-01-01

    There has been a growing interest in improving the publication processes for database research papers. This panel reports on recent changes in those processes and presents an initial cut at historical data for the VLDB Journal and ACM Transactions on Database Systems.

  11. Transporter Classification Database (TCDB)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Transporter Classification Database details a comprehensive classification system for membrane transport proteins known as the Transporter Classification (TC)...

  12. 《中国近现代史纲要》课试题库建设十大关系研究%Ten Relationships in Building Exam Database in Teaching Chinese Contemporary History

    Institute of Scientific and Technical Information of China (English)

    刘永春; 郑亚男; 高京平

    2011-01-01

    With the advantages of low cost, quick results and easy operation, the exam-database model is becoming an important and effective vehicle for carrying out and promoting the reform of the examination system for the course Chinese Contemporary History. Since examinations connect universities and society, teaching and administration, teachers and students, and teaching and feedback, the building of the exam database for this course must start from the nature and characteristics of the course and adopt systemic thinking to reveal, grasp and reflect the internal relationships between examinations and the needs of society, the country, education, students and the individual. Practice has proved that straightening out the ten relationships involved is the premise of, and driving force behind, the reform of the examination system.

  13. License - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Trypanosomes Database License License to Use This Database Last updated: 2014/02/04 You may use this database...ense terms regarding the use of this database and the requirements you must follow in using this database.... The license for this database is specified in the Creative Commons Attribution-Share Alike 2.1 Japan. If you use data from this database, please be sure to attribute this database as follows: ...e. With regard to this database, you are licensed to: freely access part or whole of this database, and acq

  14. MODELING A GEO-SPATIAL DATABASE FOR MANAGING TRAVELERS’ DEMAND

    Directory of Open Access Journals (Sweden)

    Sunil Pratap Singh

    2014-04-01

    Full Text Available The geo-spatial database is a new technology in database systems which allows storing, retrieving and maintaining spatial data. In this paper, we seek to design and implement a geo-spatial database for managing travelers' demand with the aid of open-source tools and an object-relational database package. The building of the geo-spatial database starts with the design of the data model, in terms of conceptual, logical and physical data models, and the design is then implemented in an object-relational database. The geo-spatial database is developed to facilitate the storage of geographic information (where things are) together with descriptive information (what things are like) in the vector model. The developed vector geo-spatial data can be accessed and rendered in the form of a map to create awareness of the existence of various services and facilities for prospective travelers and visitors.
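The vector model described above, geometry ("where things are") stored alongside descriptive attributes ("what things are like"), can be sketched minimally. The table, place names and coordinates below are illustrative assumptions; a real deployment of this kind would use a spatially-enabled object-relational package (e.g. PostGIS or SpatiaLite) rather than plain columns.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Each row pairs a point geometry (lon/lat) with descriptive attributes.
conn.execute("""
CREATE TABLE facility(
    name TEXT, category TEXT,
    lon REAL, lat REAL          -- point geometry, WGS84 coordinates
)""")
conn.executemany("INSERT INTO facility VALUES (?,?,?,?)", [
    ("Central Station", "transport", 77.21, 28.64),
    ("City Museum",     "culture",   77.24, 28.61),
])

def within_bbox(conn, min_lon, min_lat, max_lon, max_lat):
    """Traveler-facing query: which facilities fall inside a map window?"""
    return conn.execute(
        "SELECT name, category FROM facility "
        "WHERE lon BETWEEN ? AND ? AND lat BETWEEN ? AND ?",
        (min_lon, max_lon, min_lat, max_lat)).fetchall()

print(within_bbox(conn, 77.20, 28.63, 77.23, 28.66))
```

Rendering the query result as markers on a web map is then a presentation step on top of the same spatial query.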

  15. Which is a stronger indicator of dental caries: oral hygiene, food, or beverage? A clinical study.

    Science.gov (United States)

    Jain, Poonam; Gary, Julie J

    2014-01-01

    Dental caries is a multifactorial disease with various risk factors. Oral hygiene and dietary factors--specifically, the consumption of snacks and beverages with added sugars--have been shown to be risk indicators for this disease. It is critical for dental professionals to understand the relative roles of each of these food categories in the dental caries process. This article presents a cross-sectional study of 76 people living in a Southern Illinois fluoridated community. The amount of sugar-sweetened beverages, snack food consumption, plaque index, and age showed statistically significant relationships with the outcome variable--dental caries (P < 0.05). The results indicated that dietary factors and oral hygiene both contribute equally to dental caries in young adults living in a fluoridated community. Sugar-sweetened beverage consumption was a much stronger indicator of dental caries than snack food consumption in our study population. PMID:24784517

  16. Bots, #StrongerIn, and #Brexit: Computational Propaganda during the UK-EU Referendum

    CERN Document Server

    Howard, Philip N

    2016-01-01

    Bots are social media accounts that automate interaction with other users, and they are active on the StrongerIn-Brexit conversation happening over Twitter. These automated scripts generate content through these platforms and then interact with people. Political bots are automated accounts that are particularly active on public policy issues, elections, and political crises. In this preliminary study on the use of political bots during the UK referendum on EU membership, we analyze the tweeting patterns for both human users and bots. We find that political bots have a small but strategic role in the referendum conversations: (1) the family of hashtags associated with the argument for leaving the EU dominates, (2) different perspectives on the issue utilize different levels of automation, and (3) less than 1 percent of sampled accounts generate almost a third of all the messages.

  17. Bone mineral content has stronger association with lean mass than fat mass among Indian urban adolescents

    Directory of Open Access Journals (Sweden)

    Raman K Marwaha

    2015-01-01

    Full Text Available Introduction: There are conflicting reports on the relationship of lean mass (LM) and fat mass (FM) with bone mineral content (BMC). Given the high prevalence of Vitamin D deficiency in India, we planned the study to evaluate the relationship between LM and FM with BMC in Indian children and adolescents. The objective of the study was to evaluate the relationship of BMC with LM and FM. Materials and Methods: Total and regional BMC, LM, and FM using dual energy X-ray absorptiometry and pubertal staging were assessed in 1403 children and adolescents (boys [B]: 826; girls [G]: 577). The BMC index and the BMC/LM and BMC/FM ratios were calculated. Results: The age ranged from 5 to 18 years, with a mean age of 13.2 ± 2.7 years. BMC adjusted for height (BMC index) and the BMC/height ratio were comparable in both genders. There was no difference in total BMC between genders in the prepubertal group, but values were higher in more advanced stages of pubertal maturation. The correlation of total as well as regional BMC was stronger for LM (B: total BMC - 0.880, trunk - 0.715, leg - 0.894, arm - 0.891; G: total BMC - 0.827, leg - 0.846, arm - 0.815; all values indicate r2, P < 0.0001) than for FM (B: total BMC - 0.776, trunk - 0.676, leg - 0.772, arm - 0.728; G: total BMC - 0.781, leg - 0.741, arm - 0.689; all P < 0.0001), except at trunk BMC (LM - 0.682 vs. FM - 0.721; all P < 0.0001), even after controlling for age, height, pubertal stage, and biochemical parameters. Conclusions: BMC had a stronger positive correlation with LM than with FM.

  18. Stronger pharmacological cortisol suppression and anticipatory cortisol stress response in transient global amnesia

    Directory of Open Access Journals (Sweden)

    Martin eGriebe

    2015-03-01

    Full Text Available Transient global amnesia (TGA) is a disorder characterized by a sudden attack of severe anterograde memory disturbance that is frequently preceded by emotional or physical stress and resolves within 24 hours. Using MRI following the acute episode in TGA patients, small lesions in the hippocampus have been observed. Hence it has been hypothesized that the disorder is caused by a stress-related transient inhibition of memory formation in the hippocampus. To study the factors that may link stress and TGA, we measured the cortisol day-profile, the dexamethasone feedback inhibition and the effect of experimental exposure to stress on cortisol levels (using the socially evaluated cold pressor test and a control procedure) in 20 patients with a recent history of TGA and in 20 healthy controls. We used self-report scales of depression, anxiety and stress and a detailed neuropsychological assessment to characterize our collective. We did not observe differences in mean cortisol levels in the cortisol day-profile between the two groups. After administration of low-dose dexamethasone, TGA patients showed significantly stronger cortisol suppression in the daytime profile compared to the control group (p = 0.027). The mean salivary cortisol level was significantly higher in the TGA group prior to and after the experimental stress exposure (p = 0.008; p = 0.010, respectively), as well as prior to and after the control condition (p = 0.022; p = 0.024, respectively). The TGA group had higher scores of depressive symptomatology (p = 0.021) and anxiety (p = 0.007), but the groups did not differ in the neuropsychological assessment. Our findings of stronger pharmacological suppression and higher cortisol levels in anticipation of experimental stress in participants with a previous TGA indicate a hypersensitivity of the HPA axis. This suggests that individual stress sensitivity might play a role in the pathophysiology of TGA.

  19. GRAPH DATABASES AND GRAPH VISUALIZATION

    OpenAIRE

    Klančar, Jure

    2013-01-01

    The thesis presents graph databases. Graph databases are a part of NoSQL databases, which is why this thesis presents the basics of NoSQL databases as well. We have focused on the advantages of graph databases compared to relational databases. We have used one of the native graph databases (Neo4j) to present the processing of graph databases in more detail. To get more acquainted with graph databases and their principles, we developed a simple application that uses a Neo4j graph database to...
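
    The core advantage noted above (relationships traversed directly rather than reconstructed through joins) can be illustrated with a minimal plain-Python sketch; this is illustrative only, not Neo4j code, and the names are invented:

    ```python
    # Minimal illustration of the graph-database idea: relationships are
    # first-class, so traversal follows direct references instead of joins.
    # Plain-Python sketch, not Neo4j; all names are invented.

    friends = {               # adjacency map: node -> set of neighbours
        "alice": {"bob", "carol"},
        "bob": {"dave"},
        "carol": {"dave"},
        "dave": set(),
    }

    def friends_of_friends(person):
        """Second-degree contacts, excluding the person and direct friends."""
        direct = friends.get(person, set())
        second = set()
        for f in direct:
            second |= friends.get(f, set())
        return second - direct - {person}

    print(sorted(friends_of_friends("alice")))  # ['dave']
    ```

    In a relational schema the same query would need a self-join of a friendship table; in a graph store the neighbours are reachable in constant time per hop.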

  20. CDS - Database Administrator's Guide

    Science.gov (United States)

    Day, J. P.

    This guide aims to instruct the CDS database administrator in: o The CDS file system. o The CDS index files. o The procedure for assimilating a new CDS tape into the database. It is assumed that the administrator has read SUN/79.

  1. An organic database system

    NARCIS (Netherlands)

    Kersten, M.L.; Siebes, A.P.J.M.

    1999-01-01

    The pervasive penetration of database technology may suggest that we have reached the end of the database research era. The contrary is true. Emerging technology, in hardware, software, and connectivity, brings a wealth of opportunities to push technology to a new level of maturity. Furthermore, gro

  2. Structural Ceramics Database

    Science.gov (United States)

    SRD 30 NIST Structural Ceramics Database (Web, free access)   The NIST Structural Ceramics Database (WebSCD) provides evaluated materials property data for a wide range of advanced ceramics known variously as structural ceramics, engineering ceramics, and fine ceramics.

  3. Children's Culture Database (CCD)

    DEFF Research Database (Denmark)

    Wanting, Birgit

    a Dialogue inspired database with documentation, network (individual and institutional profiles) and current news, paper presented at the research seminar: Electronic access to fiction, Copenhagen, November 11-13, 1996

  4. Atomic Spectra Database (ASD)

    Science.gov (United States)

    SRD 78 NIST Atomic Spectra Database (ASD) (Web, free access)   This database provides access and search capability for NIST critically evaluated data on atomic energy levels, wavelengths, and transition probabilities that are reasonably up-to-date. The NIST Atomic Spectroscopy Data Center has carried out these critical compilations.

  5. Biological Macromolecule Crystallization Database

    Science.gov (United States)

    SRD 21 Biological Macromolecule Crystallization Database (Web, free access)   The Biological Macromolecule Crystallization Database and NASA Archive for Protein Crystal Growth Data (BMCD) contains the conditions reported for the crystallization of proteins and nucleic acids used in X-ray structure determinations and archives the results of microgravity macromolecule crystallization studies.

  6. Neutrosophic Relational Database Decomposition

    OpenAIRE

    Meena Arora; Ranjit Biswas; U. S. Pandey

    2011-01-01

    In this paper we present a method of decomposing a neutrosophic database relation with neutrosophic attributes into basic relational form. Our objective is to be able to manipulate incomplete as well as inconsistent information. Fuzzy relations or vague relations can only handle incomplete information. The authors use the neutrosophic relational database [8],[2] to show how imprecise data can be handled in a relational schema.
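
    As a rough illustration of the idea (an assumed representation, not the authors' exact decomposition from [8],[2]), a neutrosophic relation can be flattened into an ordinary relation by storing each tuple's truth, indeterminacy and falsity grades as plain columns:

    ```python
    # Each tuple of a neutrosophic relation carries a (truth, indeterminacy,
    # falsity) triple; one simple way to flatten it into basic relational
    # form is to store the three grades as ordinary columns.
    # Assumed representation for illustration only.
    neutrosophic_rel = [
        # (name, age, truth, indeterminacy, falsity)
        ("ana", 30, 0.9, 0.1, 0.0),   # almost certainly correct
        ("bob", 41, 0.4, 0.5, 0.3),   # largely unknown / inconsistent
    ]

    def credible(rows, t_min=0.8, f_max=0.1):
        """Select tuples whose truth grade dominates."""
        return [r for r in rows if r[2] >= t_min and r[4] <= f_max]

    print([r[0] for r in credible(neutrosophic_rel)])  # ['ana']
    ```

    Queries can then filter on the grades, so both incomplete tuples (high indeterminacy) and inconsistent ones (high truth and falsity together) remain representable.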

  7. A Quality System Database

    Science.gov (United States)

    Snell, William H.; Turner, Anne M.; Gifford, Luther; Stites, William

    2010-01-01

    A quality system database (QSD), and software to administer the database, were developed to support recording of administrative nonconformance activities that involve requirements for documentation of corrective and/or preventive actions, which can include ISO 9000 internal quality audits and customer complaints.

  8. The LHCb configuration database

    CERN Document Server

    Abadie, L; Van Herwijnen, Eric; Jacobsson, R; Jost, B; Neufeld, N

    2005-01-01

    The aim of the LHCb configuration database is to store information about all the controllable devices of the detector. The experiment's control system (which uses PVSS) will configure, start up and monitor the detector from the information in the configuration database. The database will contain devices with their properties, connectivity and hierarchy. The ability to store and rapidly retrieve huge amounts of data, and the navigability between devices, are important requirements. We have collected use cases to ensure the completeness of the design. Using the entity-relationship modelling technique we describe the use cases as classes with attributes and links. We designed the schema for the tables using relational diagrams. This methodology has been applied to the TFC (switches) and DAQ system. Other parts of the detector will follow later. The database has been implemented using Oracle to benefit from central CERN database support. The project also foresees the creation of tools to populate, maintain, and co...
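
    The entities the abstract lists (devices with properties, connectivity and hierarchy) might map to a relational schema roughly like the following sketch; this is a hypothetical simplification in SQLite, not the actual Oracle schema used by LHCb:

    ```python
    import sqlite3

    # Hypothetical simplification of a device-configuration schema:
    # devices carry properties, a parent link (hierarchy) and links
    # to other devices (connectivity).  Not the actual LHCb schema.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE device (
        id        INTEGER PRIMARY KEY,
        name      TEXT UNIQUE NOT NULL,
        parent_id INTEGER REFERENCES device(id)   -- hierarchy
    );
    CREATE TABLE property (
        device_id INTEGER REFERENCES device(id),
        key TEXT, value TEXT
    );
    CREATE TABLE link (                            -- connectivity
        from_id INTEGER REFERENCES device(id),
        to_id   INTEGER REFERENCES device(id)
    );
    """)
    conn.execute("INSERT INTO device VALUES (1, 'tfc_switch', NULL)")
    conn.execute("INSERT INTO device VALUES (2, 'daq_node', 1)")
    conn.execute("INSERT INTO property VALUES (2, 'state', 'ready')")
    conn.execute("INSERT INTO link VALUES (1, 2)")

    # Navigate the hierarchy: children of the switch with their properties.
    rows = conn.execute("""
        SELECT d.name, p.key, p.value
        FROM device d JOIN property p ON p.device_id = d.id
        WHERE d.parent_id = 1
    """).fetchall()
    print(rows)  # [('daq_node', 'state', 'ready')]
    ```

    The self-referencing `parent_id` column gives the hierarchy, while the separate `link` table keeps connectivity many-to-many, which matches the navigability requirement stated above.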

  9. 76 FR 74050 - Measured Building Energy Performance Data Taxonomy

    Science.gov (United States)

    2011-11-30

    ... Office of Energy Efficiency and Renewable Energy Measured Building Energy Performance Data Taxonomy... related to a measured building energy performance data taxonomy. DOE has created this measured building energy performance data taxonomy as part of its DOE Buildings Performance Database project....

  10. 汽车焊装车间信息管理数据库的构建%Building a database of information management system for automotive assembly and welding workshop

    Institute of Scientific and Technical Information of China (English)

    梅冬胜; 张忠典; 李冬青; 魏艳红

    2011-01-01

    Based on a thorough understanding of the process flow and characteristics of the manual welding production line in an automotive assembly and welding workshop, the overall structure of a production-process information management system was designed, and the system's data models and their mutual relationships were defined. The graphical programming language LabVIEW was adopted to develop the operation interfaces of the information management system, and ODBC technology was used to seamlessly connect the operation interfaces to the database. The implementation of the system can improve production efficiency and product quality while reducing production costs.

  11. Nuclear integrated database and design advancement system

    Energy Technology Data Exchange (ETDEWEB)

    Ha, Jae Joo; Jeong, Kwang Sub; Kim, Seung Hwan; Choi, Sun Young

    1997-01-01

    The objective of NuIDEAS is to computerize design processes through an integrated database by eliminating the current work style of delivering hardcopy documents and drawings. The major research contents of NuIDEAS are the advancement of design processes by computerization, the establishment of a design database, and 3-dimensional visualization of design data. KSNP (Korea Standard Nuclear Power Plant) is the target of the legacy database and the 3-dimensional model, so that they can be utilized in the next plant design. In the first year, the blueprint of NuIDEAS was proposed, and its prototype was developed by applying rapidly evolving computer technology. The major results of the first-year research were to establish the architecture of the integrated database ensuring data consistency, and to build a design database of the reactor coolant system and heavy components. Various software tools were also developed to search, share and utilize the data through networks; detailed 3-dimensional CAD models of nuclear fuel and heavy components were constructed; and walk-through simulations using the models were developed. This report contains the major additions and modifications to the object-oriented database and associated programs, using methods and JavaScript. (author). 36 refs., 1 tab., 32 figs.

  12. Nuclear integrated database and design advancement system

    International Nuclear Information System (INIS)

    The objective of NuIDEAS is to computerize design processes through an integrated database by eliminating the current work style of delivering hardcopy documents and drawings. The major research contents of NuIDEAS are the advancement of design processes by computerization, the establishment of a design database, and 3-dimensional visualization of design data. KSNP (Korea Standard Nuclear Power Plant) is the target of the legacy database and the 3-dimensional model, so that they can be utilized in the next plant design. In the first year, the blueprint of NuIDEAS was proposed, and its prototype was developed by applying rapidly evolving computer technology. The major results of the first-year research were to establish the architecture of the integrated database ensuring data consistency, and to build a design database of the reactor coolant system and heavy components. Various software tools were also developed to search, share and utilize the data through networks; detailed 3-dimensional CAD models of nuclear fuel and heavy components were constructed; and walk-through simulations using the models were developed. This report contains the major additions and modifications to the object-oriented database and associated programs, using methods and JavaScript. (author). 36 refs., 1 tab., 32 figs.

  13. Reactor building

    International Nuclear Information System (INIS)

    The whole reactor building is accommodated in a shaft and is sealed level with the earth's surface by a building ceiling, which provides protection against penetration due to external effects. The building ceiling is supported on walls of the reactor building, which line the shaft and transfer the vertical components of forces to the foundations. The thickness of the walls is designed to withstand horizontal pressure waves in the floor. The building ceiling has an opening above the reactor, which must be closed by cover plates. Operating equipment for the reactor can be situated above the building ceiling. (orig./HP)

  14. Database Access through Java Technologies

    Directory of Open Access Journals (Sweden)

    Nicolae MERCIOIU

    2010-09-01

    Full Text Available As a high-level development environment, the Java technologies offer support for the development of platform-independent distributed applications, providing a robust set of methods for accessing databases and for creating software components on the server side as well as on the client side. Analyzing the evolution of Java data-access tools, we notice that they evolved from simple methods permitting queries, insertion, update and deletion of data to advanced implementations such as distributed transactions, cursors and batch files. Client-server architectures allow, through JDBC (Java Database Connectivity), the execution of SQL (Structured Query Language) instructions and the manipulation of the results in an independent and consistent manner. The JDBC API (Application Programming Interface) creates the level of abstraction needed to issue SQL queries against any DBMS (Database Management System). The native JDBC driver, the ODBC (Open Database Connectivity)-JDBC bridge, and the classes and interfaces of the JDBC API are described. The four steps needed to build a JDBC-driven application are presented briefly, emphasizing how each step is accomplished and the expected results. At each step, the characteristics of the database systems and the way the JDBC programming interface adapts to each of them are evaluated. The data types provided by the SQL2 and SQL3 standards are analyzed by comparison with the Java data types, emphasizing the discrepancies between them, as well as the methods of the ResultSet object that allow conversion between different data types. Next, starting from the role of metadata and studying the Java programming interfaces that allow querying result sets, we describe the advanced features of data mining with JDBC. As an alternative to result sets, the RowSets add new functionalities that
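
    The JDBC workflow described above (connect, issue a parameterized statement, iterate the result set) has a close analogue in Python's DB-API; the following sqlite3 sketch mirrors those steps for readers without a Java environment (table and column names are invented):

    ```python
    import sqlite3

    # The JDBC flow (connect -> prepared statement -> result set) mirrored
    # with Python's DB-API.  Illustrative analogue, not JDBC itself.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

    # Parameterized statements: the '?' placeholder plays the role of
    # PreparedStatement parameters in JDBC.
    conn.executemany("INSERT INTO users (name) VALUES (?)",
                     [("ana",), ("bob",)])

    # Query and iterate the result set (compare ResultSet.next()/getString()).
    cur = conn.execute("SELECT id, name FROM users ORDER BY id")
    for row in cur:
        print(row)          # (1, 'ana') then (2, 'bob')

    # Update and delete follow the same execute() pattern.
    conn.execute("UPDATE users SET name = ? WHERE id = ?", ("anna", 1))
    conn.execute("DELETE FROM users WHERE id = ?", (2,))
    ```

    In JDBC the same four steps would use `DriverManager.getConnection`, `PreparedStatement`, `executeQuery`/`executeUpdate`, and iteration over a `ResultSet`.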

  15. Hazard Analysis Database Report

    Energy Technology Data Exchange (ETDEWEB)

    GRAMS, W.H.

    2000-12-28

    The Hazard Analysis Database was developed in conjunction with the hazard analysis activities conducted in accordance with DOE-STD-3009-94, Preparation Guide for U.S. Department of Energy Nonreactor Nuclear Facility Safety Analysis Reports, for HNF-SD-WM-SAR-067, Tank Farms Final Safety Analysis Report (FSAR). The FSAR is part of the approved Authorization Basis (AB) for the River Protection Project (RPP). This document describes, identifies, and defines the contents and structure of the Tank Farms FSAR Hazard Analysis Database and documents the configuration control changes made to the database. The Hazard Analysis Database contains the collection of information generated during the initial hazard evaluations and the subsequent hazard and accident analysis activities. The Hazard Analysis Database supports the preparation of Chapters 3, 4, and 5 of the Tank Farms FSAR and the Unreviewed Safety Question (USQ) process, and consists of two major, interrelated data sets: (1) Hazard Analysis Database: data from the results of the hazard evaluations, and (2) Hazard Topography Database: data from the system familiarization and hazard identification.

  16. Can a bog drained for forestry be a stronger carbon sink than a natural bog forest?

    Science.gov (United States)

    Hommeltenberg, J.; Schmid, H. P.; Drösler, M.; Werle, P.

    2014-07-01

    This study compares the CO2 exchange of a natural bog forest and of a bog drained for forestry in the pre-Alpine region of southern Germany. The sites are separated by only 10 km; they share the same soil formation history and are exposed to the same climate and weather conditions. In contrast, they differ in land use history: at the Schechenfilz site a natural bog-pine forest (Pinus mugo ssp. rotundata) grows on an undisturbed, about 5 m thick peat layer; at Mooseurach a planted spruce forest (Picea abies) grows on drained and degraded peat (3.4 m). The net ecosystem exchange of CO2 (NEE) at both sites has been investigated for 2 years (July 2010-June 2012), using the eddy covariance technique. Our results indicate that the drained, forested bog at Mooseurach is a much stronger carbon dioxide sink (-130 ± 31 and -300 ± 66 g C m-2 a-1 in the first and second year, respectively) than the natural bog forest at Schechenfilz (-53 ± 28 and -73 ± 38 g C m-2 a-1). The strong net CO2 uptake can be explained by the high gross primary productivity of the 44-year-old spruces, which over-compensates the two-times stronger ecosystem respiration at the drained site. The larger productivity of the spruces can be clearly attributed to the larger plant area index (PAI) of the spruce site. However, even though current flux measurements indicate strong CO2 uptake of the drained spruce forest, the site is a strong net CO2 source when the whole life cycle since forest planting is considered. It is important to assess this result in terms of the long-term biome balance. To do so, we used historical data to estimate the difference between carbon fixation by the spruces and the carbon loss from the peat due to drainage since forest planting. This rough estimate indicates a strong carbon release of +134 t C ha-1 within the last 44 years. Thus, the spruces would need to grow for another 100 years at about the current rate to compensate the potential peat loss of the former years. In

  17. Rethinking the global secondary organic aerosol (SOA) budget: stronger production, faster removal, shorter lifetime

    Science.gov (United States)

    Hodzic, Alma; Kasibhatla, Prasad S.; Jo, Duseong S.; Cappa, Christopher D.; Jimenez, Jose L.; Madronich, Sasha; Park, Rokjin J.

    2016-06-01

    Recent laboratory studies suggest that secondary organic aerosol (SOA) formation rates are higher than assumed in current models. There is also evidence that SOA removal by dry and wet deposition occurs more efficiently than some current models suggest and that photolysis and heterogeneous oxidation may be important (but currently ignored) SOA sinks. Here, we have updated the global GEOS-Chem model to include this new information on formation (i.e., wall-corrected yields and emissions of semi-volatile and intermediate volatility organic compounds) and on removal processes (photolysis and heterogeneous oxidation). We compare simulated SOA from various model configurations against ground, aircraft and satellite measurements to assess the extent to which these improved representations of SOA formation and removal processes are consistent with observed characteristics of the SOA distribution. The updated model presents a more dynamic picture of the life cycle of atmospheric SOA, with production rates 3.9 times higher and sinks a factor of 3.6 more efficient than in the base model. In particular, the updated model predicts larger SOA concentrations in the boundary layer and lower concentrations in the upper troposphere, leading to better agreement with surface and aircraft measurements of organic aerosol compared to the base model. Our analysis thus suggests that the long-standing discrepancy in model predictions of the vertical SOA distribution can now be resolved, at least in part, by a stronger source and stronger sinks leading to a shorter lifetime. The predicted global SOA burden in the updated model is 0.88 Tg and the corresponding direct radiative effect at top of the atmosphere is -0.33 W m-2, which is comparable to recent model estimates constrained by observations. 
The updated model predicts a population-weighted global mean surface SOA concentration that is a factor of 2 higher than in the base model, suggesting the need for a reanalysis of the contribution of

  18. 浅析普通高校数据库应用基础教学质量探索与能力培养%Exploring Teaching Quality and Ability Cultivation in the Fundamentals of Database Applications Course at Ordinary Universities

    Institute of Scientific and Technical Information of China (English)

    覃宝灵

    2013-01-01

    This paper analyzes problems that students at ordinary universities face when studying the fundamentals of database applications course: low learning motivation, limited flexibility, insufficient interest, a vague grasp of concepts, and weak abilities in innovative and exploratory practice and in comprehensive application. It proposes reforming the current traditional teaching methods and teaching model, taking improved teaching quality as the goal, adopting a teaching method centered on "students first, teachers as support", using a project- and case-oriented teaching model, unifying theoretical teaching with practical teaching, and strengthening the cultivation of innovative and exploratory practice and comprehensive application abilities.

  19. Hellenic Woodland Database

    OpenAIRE

    Fotiadis, Georgios; Tsiripidis, Ioannis; Bergmeier, Erwin; Dimopolous, Panayotis

    2012-01-01

    The Hellenic Woodland Database (GIVD ID EU-GR-006) includes relevés from approximately 59 sources, as well as unpublished relevés. In total, 4,571 relevés have already been entered in the database, and the database will continue to grow in the near future. Species abundances are recorded according to the 7-grade Braun-Blanquet scale. The oldest relevés date back to 1963. For the majority of relevés (more than 90%), environmental data (e.g. altitude, slope aspect, inclination) exis...

  20. LandIT Database

    DEFF Research Database (Denmark)

    Iftikhar, Nadeem; Pedersen, Torben Bach

    2010-01-01

    and reporting purposes. This paper presents the LandIT database, the result of the LandIT project, an industrial collaboration that developed technologies for communication and data integration between farming devices and systems. The LandIT database is in principle based on the ISOBUS standard; however, the standard is extended with additional requirements, such as gradual data aggregation and flexible exchange of farming data. This paper describes the conceptual and logical schemas of the proposed database, based on a real-life farming case study.
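
    One of the extensions mentioned, gradual data aggregation, generally means that older readings are kept only at coarser granularities; a minimal sketch of the idea (illustrative only, not the LandIT schema, and the field names are invented):

    ```python
    from collections import defaultdict

    # Gradual aggregation sketch: recent readings stay raw, while older
    # readings are collapsed to per-hour averages.  Illustrative only;
    # the reading fields are invented, not taken from LandIT.
    readings = [  # (hour, minute, fuel_rate)
        (9, 0, 4.0), (9, 30, 6.0), (10, 0, 5.0),
    ]

    def aggregate_hourly(rows):
        """Collapse raw readings into one average value per hour."""
        buckets = defaultdict(list)
        for hour, _minute, value in rows:
            buckets[hour].append(value)
        return {hour: sum(v) / len(v) for hour, v in buckets.items()}

    print(aggregate_hourly(readings))  # {9: 5.0, 10: 5.0}
    ```

    Applying such a step periodically keeps storage bounded while preserving trends for reporting.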

  1. ALICE Geometry Database

    CERN Document Server

    Santo, J

    1999-01-01

    The ALICE Geometry Database project consists of the development of a set of data structures to store the geometrical information of the ALICE Detector. This database will be used in Simulation, Reconstruction and Visualisation and will interface with existing CAD systems and Geometrical Modellers. At the present time, we are able to read a complete GEANT3 geometry, to store it in our database and to visualise it. On disk, we store different geometry files in hierarchical fashion, with all the nodes, materials, shapes, configurations and transformations distributed in this tree structure. The present status of the prototype and its future evolution will be presented.

  2. Product Licenses Database Application

    CERN Document Server

    Tonkovikj, Petar

    2016-01-01

    The goal of this project is to organize and centralize the data about software tools available to CERN employees, as well as provide a system that simplifies the license management process by providing information about the available licenses and their expiry dates. The project development process consists of two steps: modeling the products (software tools), product licenses, legal agreements and other data related to these entities in a relational database, and developing the front-end user interface so that the user can interact with the database. The result is an ASP.NET MVC web application with interactive views for displaying and managing the data in the underlying database.

  3. Building America

    Energy Technology Data Exchange (ETDEWEB)

    Brad Oberg

    2010-12-31

    IBACOS researched the constructability and viability issues of using high performance windows as one component of a larger approach to building houses that achieve the Building America 70% energy savings target.

  4. Update History of This Database - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Update history of the Trypanosomes Database. 2014/05/07: the contact information was updated. The database ( http://www.tanpaku.org/tdb/ ) is opened.

  5. The integrated web service and genome database for agricultural plants with biotechnology information

    OpenAIRE

    Kim, ChangKug; Park, DongSuk; Seol, YoungJoo; Hahn, JangHo

    2011-01-01

    The National Agricultural Biotechnology Information Center (NABIC) constructed an agricultural biology-based infrastructure and developed a Web based relational database for agricultural plants with biotechnology information. The NABIC has concentrated on functional genomics of major agricultural plants, building an integrated biotechnology database for agro-biotech information that focuses on genomics of major agricultural resources. This genome database provides annotated genome information...

  6. Search Algorithms for Conceptual Graph Databases

    Directory of Open Access Journals (Sweden)

    Abdurashid Mamadolimov

    2013-03-01

    Full Text Available We consider a database composed of a set of conceptual graphs. Using conceptual graphs and graph homomorphism it is possible to build a basic query-answering mechanism based on semantic search. Graph homomorphism defines a partial order over conceptual graphs. Since graph homomorphism checking is an NP-complete problem, the main requirement for database organizing and managing algorithms is to reduce the number of homomorphism checks. Searching is a basic operation for database manipulation problems. We consider the problem of searching for an element in a partially ordered set. The goal is to minimize the number of queries required to find a target element in the worst case. First we analyse conceptual graph database operations. Then we propose a new algorithm for a subclass of lattices. Finally, we suggest a parallel search algorithm for a general poset.
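
    The role of the partial order in cutting down homomorphism checks can be sketched with a toy model in which conceptual graphs are reduced to label sets and homomorphism to set inclusion (an assumption made for illustration; this is not the paper's algorithm):

    ```python
    from collections import deque

    # Toy sketch of order-based pruning.  Conceptual graphs are reduced to
    # label sets, and "there is a homomorphism from h into the query q" is
    # approximated by h <= q.  Children are specializations (supersets) of
    # their parent, so once h fails the check, every descendant fails too
    # and the whole branch is skipped -- fewer expensive checks overall.
    children = {
        frozenset({"Person"}): [frozenset({"Person", "works"}),
                                frozenset({"Person", "owns"})],
        frozenset({"Person", "works"}): [],
        frozenset({"Person", "owns"}): [frozenset({"Person", "owns", "Car"})],
        frozenset({"Person", "owns", "Car"}): [],
    }

    def generalizations_of(query, roots):
        """All stored graphs h with h <= query, counting subset checks."""
        found, checks = [], 0
        queue = deque(roots)
        while queue:
            h = queue.popleft()
            checks += 1
            if h <= query:                 # h maps into the query
                found.append(h)
                queue.extend(children.get(h, []))
            # else: prune the entire branch below h
        return found, checks

    hits, n = generalizations_of(frozenset({"Person", "works"}),
                                 [frozenset({"Person"})])
    print(len(hits), n)  # 2 hits with 3 checks instead of 4
    ```

    With real conceptual graphs each check is an NP-complete homomorphism test, so every pruned branch matters far more than in this toy version.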

  7. Cooperative answers in database systems

    Science.gov (United States)

    Gaasterland, Terry; Godfrey, Parke; Minker, Jack; Novik, Lev

    1993-01-01

    A major concern of researchers who seek to improve human-computer communication involves how to move beyond literal interpretations of queries to a level of responsiveness that takes the user's misconceptions, expectations, desires, and interests into consideration. At Maryland, we are investigating how to better meet a user's needs within the framework of the cooperative answering system of Gal and Minker. We have been exploring how to use semantic information about the database to formulate coherent and informative answers. The work has two main thrusts: (1) the construction of a logic formula which embodies the content of a cooperative answer; and (2) the presentation of the logic formula to the user in a natural language form. The information that is available in a deductive database system for building cooperative answers includes integrity constraints, user constraints, the search tree for answers to the query, and false presuppositions that are present in the query. The basic cooperative answering theory of Gal and Minker forms the foundation of a cooperative answering system that integrates the new construction and presentation methods. This paper provides an overview of the cooperative answering strategies used in the CARMIN cooperative answering system, an ongoing research effort at Maryland. Section 2 gives some useful background definitions. Section 3 describes techniques for collecting cooperative logical formulae. Section 4 discusses which natural language generation techniques are useful for presenting the logic formula in natural language text. Section 5 presents a diagram of the system.

  8. Unit 43 - Database Concepts I

    OpenAIRE

    Unit 61, CC in GIS; White, Gerald (ACER)

    1990-01-01

    This unit outlines fundamental concepts in database systems and their integration with GIS, including advantages of a database approach, views of a database, database management systems (DBMS), and alternative database models. Three models—hierarchical, network and relational—are discussed in greater detail.
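
    The three models can be contrasted on one tiny dataset (a purely illustrative sketch; the names are invented):

    ```python
    # One tiny dataset (a department with two employees) in the three
    # database models discussed in the unit.  Purely illustrative.

    # Hierarchical: a strict parent-child tree, navigated from the root.
    hierarchical = {"dept": "GIS",
                    "employees": [{"name": "ana"}, {"name": "bob"}]}

    # Network: records linked by explicit pointers; a member record may
    # belong to several owners (here 'ana' is in a department and a project).
    ana = {"name": "ana"}
    network = {"dept:GIS": [ana, {"name": "bob"}], "proj:Atlas": [ana]}

    # Relational: flat tables related by key values, queried declaratively.
    departments = [("d1", "GIS")]
    employees = [("e1", "ana", "d1"), ("e2", "bob", "d1")]
    gis_staff = [name for (_id, name, dept) in employees if dept == "d1"]
    print(gis_staff)  # ['ana', 'bob']
    ```

    The relational version needs no navigation paths: any relationship expressible through key values can be queried, which is the flexibility that made it the dominant model for GIS attribute data.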

  9. The Jungle Database Search Engine

    DEFF Research Database (Denmark)

    Bøhlen, Michael Hanspeter; Bukauskas, Linas; Dyreson, Curtis

    1999-01-01

    Information spread in databases cannot be found by current search engines. A database search engine is capable of accessing and advertising databases on the WWW. Jungle is a database search engine prototype developed at Aalborg University. Operating through JDBC connections to remote databases, Jung...

  10. Some operations on database universes

    OpenAIRE

    Brock, E.O. de

    1997-01-01

    Operations such as integration or modularization of databases can be considered as operations on database universes. This paper describes some operations on database universes. Formally, a database universe is a special kind of table. It turns out that various operations on tables constitute interesting operations on database universes as well.

  11. Solar building

    OpenAIRE

    Zhang, Luxin

    2014-01-01

    In my thesis I describe the utilization of solar energy and its integration into buildings. The introduction also explains how a solar building works, to help more people understand and accept solar buildings. The thesis introduces different types of solar heat collectors. I compare two operation modes of solar water heating systems and create examples of solar water system selection. I also introduce other solar building applications. It is conv...

  12. Brain Potentials Highlight Stronger Implicit Food Memory for Taste than Health and Context Associations.

    Science.gov (United States)

    Hoogeveen, Heleen R; Jolij, Jacob; Ter Horst, Gert J; Lorist, Monicque M

    2016-01-01

    Consumption of healthy foods is increasingly advised to improve population health. Reasons people give for choosing one food over another suggest that non-sensory features like health aspects are considered of lower importance than taste. However, many food choices are made in the absence of the actual perception of a food's sensory properties, and therefore rely strongly on previous experiences of similar consumption stored in memory. In this study we assessed the differential strength of food associations implicitly stored in memory, using an associative priming paradigm. Participants (N = 30) performed a forced-choice picture-categorization task in which the food or non-food target images were primed with either non-sensory or sensory related words. We observed a smaller N400 amplitude at the parietal electrodes when categorizing food as compared to non-food images. While this effect was enhanced by the presentation of a food-related word prime during food trials, the primes had no effect in the non-food trials. More specifically, we found that sensory associations are more strongly represented in implicit memory than non-sensory associations. Thus, this study highlights the neuronal mechanisms underlying previous observations that sensory associations are important features of food memory, and therefore a primary motive in food choice. PMID:27213567

  13. Stronger activation of SREBP-1a by nucleus-localized HBx

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Qi [VIDO-InterVac, Veterinary Microbiology, University of Saskatchewan, Saskatoon (Canada); Qiao, Ling [VIDO-InterVac, University of Saskatchewan, Saskatoon, Saskatchewan (Canada); Yang, Jian [Drug Discovery Group, University of Saskatchewan, Saskatoon, Saskatchewan (Canada); Zhou, Yan [VIDO-InterVac, Veterinary Microbiology, Vaccinology and Immunotherapeutics, University of Saskatchewan, Saskatoon, Saskatchewan (Canada); Liu, Qiang, E-mail: qiang.liu@usask.ca [VIDO-InterVac, Veterinary Microbiology, Vaccinology and Immunotherapeutics, University of Saskatchewan, Saskatoon, Saskatchewan (Canada)

    2015-05-08

    We previously showed that hepatitis B virus (HBV) X protein activates the sterol regulatory element-binding protein-1a (SREBP-1a). Here we examined the role of nuclear localization of HBx in this process. In comparison to the wild-type and cytoplasmic HBx, nuclear HBx had stronger effects on SREBP-1a and fatty acid synthase transcription activation, intracellular lipid accumulation and cell proliferation. Furthermore, nuclear HBx could activate HBV enhancer I/X promoter and was more effective on up-regulating HBV mRNA level in the context of HBV replication than the wild-type HBx, while the cytoplasmic HBx had no effect. Our results demonstrate the functional significance of the nucleus-localized HBx in regulating host lipogenic pathway and HBV replication. - Highlights: • Nuclear HBx is more effective on activating SREBP-1a and FASN transcription. • Nuclear HBx is more effective on enhancing intracellular lipid accumulation. • Nuclear HBx is more effective on enhancing cell proliferation. • Nuclear HBx up-regulates HBV enhancer I/X promoter activity. • Nuclear HBx increases HBV mRNA level in the context of HBV replication.

  14. Plant Identity Exerts Stronger Effect than Fertilization on Soil Arbuscular Mycorrhizal Fungi in a Sown Pasture.

    Science.gov (United States)

    Zheng, Yong; Chen, Liang; Luo, Cai-Yun; Zhang, Zhen-Hua; Wang, Shi-Ping; Guo, Liang-Dong

    2016-10-01

    Arbuscular mycorrhizal (AM) fungi play key roles in plant nutrition and plant productivity. AM fungal responses to either plant identity or fertilization have been investigated. However, the interactive effects of different plant species and fertilizer types on these symbiotic fungi remain poorly understood. We evaluated the effects of factorial combinations of plant identity (the grasses Avena sativa and Elymus nutans and the legume Vicia sativa) and fertilization (urea and sheep manure) on AM fungi following 2-year monocultures in a sown pasture field study. AM fungal extraradical hyphal density was significantly higher in E. nutans than in A. sativa and V. sativa in the unfertilized control, and was significantly increased by urea and manure in A. sativa and by manure only in E. nutans, but by neither fertilizer in V. sativa. AM fungal spore density was not significantly affected by plant identity or fertilization. Forty-eight operational taxonomic units (OTUs) of AM fungi were obtained through 454 pyrosequencing of 18S rDNA. The OTU richness and Shannon diversity index of AM fungi were significantly higher in E. nutans than in V. sativa and/or A. sativa, but were not significantly affected by any fertilizer in any of the three plant species. AM fungal community composition was significantly structured directly by plant identity only, and indirectly by both urea addition and plant identity through soil total nitrogen content. Our findings highlight that plant identity has a stronger influence than fertilization on the belowground AM fungal community in this pastureland converted from an alpine meadow.

  15. AMOC sensitivity to surface buoyancy fluxes: Stronger ocean meridional heat transport with a weaker volume transport?

    Science.gov (United States)

    Sévellec, Florian; Fedorov, Alexey V.

    2016-09-01

    Oceanic northward heat transport is commonly assumed to be positively correlated with the Atlantic meridional overturning circulation (AMOC). For example, in numerical "water-hosing" experiments, imposing anomalous freshwater fluxes in the northern Atlantic leads to a slow-down of the AMOC and a corresponding reduction of oceanic northward heat transport. Here, we study the sensitivity of the ocean heat and volume transports to surface heat and freshwater fluxes using a generalized stability analysis. For the sensitivity to surface freshwater fluxes, we find that, while the direct relationship between the AMOC volume and heat transports holds on shorter timescales, it can reverse on timescales longer than about 500 years. That is, depending on the model surface boundary conditions, a reduction in the AMOC volume transport can potentially lead to a stronger heat transport on long timescales, resulting from the gradual increase in ocean thermal stratification. We discuss the implications of these results for the problem of steady state (statistical equilibrium) in ocean and climate GCMs, as well as for paleoclimate problems including millennial climate variability.

  16. Recommendations on Formative Assessment and Feedback Practices for stronger engagement in MOOCs

    Directory of Open Access Journals (Sweden)

    Nikolaos Floratos

    2015-04-01

    Full Text Available Many publications and surveys refer to the high dropout rate in Massive Open Online Courses (MOOCs), which is around 90% if we compare the number of students who register against those who finish. Working towards improving student engagement in MOOCs, we focus on providing specific research-based recommendations on formative assessment and feedback practices that can advance student activity. In this respect, we analysed some significant research papers on formative assessment and feedback methods applicable to face-to-face teaching environments that advance student engagement, and concluded with related requirements and conditions that can also be applied to MOOCs. We also analysed 4050 comments and reviews, provided via CourseTalk by students who had mainly completed those courses, of the seven most active and highly rated MOOCs (six from Coursera and one from EdX). Based on this content analysis, we have formulated fourteen recommendations that also support the requirements/conditions of our conceptual and theoretical framework analysis. The results obtained shed some light on a rather unexplored research area: research on formative assessment and feedback practices specifically for stronger engagement in MOOCs. http://dx.doi.org/10.5944/openpraxis.7.2.194

  17. Brain Potentials Highlight Stronger Implicit Food Memory for Taste than Health and Context Associations.

    Directory of Open Access Journals (Sweden)

    Heleen R Hoogeveen

    Full Text Available Increasingly, consumption of healthy foods is advised to improve population health. The reasons people give for choosing one food over another suggest that non-sensory features such as health aspects are regarded as less important than taste. However, many food choices are made in the absence of actual perception of a food's sensory properties, and therefore rely heavily on previous experiences of similar consumption stored in memory. In this study we assessed the differential strength of food associations implicitly stored in memory, using an associative priming paradigm. Participants (N = 30) performed a forced-choice picture-categorization task in which food or non-food target images were primed with either non-sensory or sensory related words. We observed a smaller N400 amplitude at the parietal electrodes when categorizing food as compared to non-food images. While this effect was enhanced by the presentation of a food-related word prime during food trials, the primes had no effect in the non-food trials. More specifically, we found that sensory associations are more strongly represented implicitly in memory than non-sensory associations. Thus, this study highlights the neuronal mechanisms underlying previous observations that sensory associations are important features of food memory, and therefore a primary motive in food choice.

  18. Brain Potentials Highlight Stronger Implicit Food Memory for Taste than Health and Context Associations.

    Science.gov (United States)

    Hoogeveen, Heleen R; Jolij, Jacob; Ter Horst, Gert J; Lorist, Monicque M

    2016-01-01

    Increasingly, consumption of healthy foods is advised to improve population health. The reasons people give for choosing one food over another suggest that non-sensory features such as health aspects are regarded as less important than taste. However, many food choices are made in the absence of actual perception of a food's sensory properties, and therefore rely heavily on previous experiences of similar consumption stored in memory. In this study we assessed the differential strength of food associations implicitly stored in memory, using an associative priming paradigm. Participants (N = 30) performed a forced-choice picture-categorization task in which food or non-food target images were primed with either non-sensory or sensory related words. We observed a smaller N400 amplitude at the parietal electrodes when categorizing food as compared to non-food images. While this effect was enhanced by the presentation of a food-related word prime during food trials, the primes had no effect in the non-food trials. More specifically, we found that sensory associations are more strongly represented implicitly in memory than non-sensory associations. Thus, this study highlights the neuronal mechanisms underlying previous observations that sensory associations are important features of food memory, and therefore a primary motive in food choice.

  19. Database Description - RPSD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available [ Credits ] BLAST Search Image Search Home About Archive Update History Contact us RPSD Datab...ase Description General information of database Database name RPSD Alternative name Summary inform...n National Institute of Agrobiological Sciences Toshimasa Yamazaki E-mail : Database classification Structure Datab...idopsis thaliana Taxonomy ID: 3702 Taxonomy Name: Glycine max Taxonomy ID: 3847 Database description We have...nts such as rice, and have put together the result and related informations. This database contains the basi

  20. Dissolution Methods Database

    Data.gov (United States)

    U.S. Department of Health & Human Services — For a drug product that does not have a dissolution test method in the United States Pharmacopeia (USP), the FDA Dissolution Methods Database provides information...

  1. Navigating public microarray databases.

    Science.gov (United States)

    Penkett, Christopher J; Bähler, Jürg

    2004-01-01

    With the ever-escalating amount of data being produced by genome-wide microarray studies, it is of increasing importance that these data are captured in public databases so that researchers can use this information to complement and enhance their own studies. Many groups have set up databases of expression data, ranging from large repositories, which are designed to comprehensively capture all published data, through to more specialized databases. The public repositories, such as ArrayExpress at the European Bioinformatics Institute, contain complete datasets in raw format in addition to processed data, whilst the specialist databases tend to provide downstream analysis of normalized data from more focused studies and data sources. Here we provide a guide to the use of these public microarray resources. PMID:18629145

  2. Dietary Supplement Label Database

    Data.gov (United States)

    U.S. Department of Health & Human Services — The database is designed to help both the general public and health care providers find information about ingredients in brand-name products, including name, form,...

  3. 1988 Spitak Earthquake Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The 1988 Spitak Earthquake database is an extensive collection of geophysical and geological data, maps, charts, images and descriptive text pertaining to the...

  4. OTI Activity Database

    Data.gov (United States)

    US Agency for International Development — OTI's worldwide activity database is a simple and effective information system that serves as a program management, tracking, and reporting tool. In each country,...

  5. Consumer Product Category Database

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Chemical and Product Categories database (CPCat) catalogs the use of over 40,000 chemicals and their presence in different consumer products. The chemical use...

  6. Household Products Database

    Data.gov (United States)

    U.S. Department of Health & Human Services — This database links over 4,000 consumer brands to health effects from Material Safety Data Sheets (MSDS) provided by the manufacturers and allows scientists and...

  7. NLCD 2011 database

    Data.gov (United States)

    U.S. Environmental Protection Agency — National Land Cover Database 2011 (NLCD 2011) is the most recent national land cover product created by the Multi-Resolution Land Characteristics (MRLC) Consortium....

  8. ARTI Refrigerant Database

    Energy Technology Data Exchange (ETDEWEB)

    Calm, J.M. [Calm (James M.), Great Falls, VA (United States)

    1994-05-27

    The Refrigerant Database consolidates and facilitates access to information to assist industry in developing equipment using alternative refrigerants. The underlying purpose is to accelerate the phase-out of chemical compounds of environmental concern.

  9. The Danish Urogynaecological Database

    DEFF Research Database (Denmark)

    Guldberg, Rikke; Brostrøm, Søren; Hansen, Jesper Kjær;

    2013-01-01

    INTRODUCTION AND HYPOTHESIS: The Danish Urogynaecological Database (DugaBase) is a nationwide clinical database established in 2006 to monitor, ensure and improve the quality of urogynaecological surgery. We aimed to describe its establishment and completeness and to validate selected variables. This is the first study based on data from the DugaBase. METHODS: The database completeness was calculated as a comparison between urogynaecological procedures reported to the Danish National Patient Registry and to the DugaBase. Validity was assessed for selected variables from a random sample of 200 women in the DugaBase from 1 January 2009 to 31 October 2010, using medical records as a reference. RESULTS: A total of 16,509 urogynaecological procedures were registered in the DugaBase by 31 December 2010. The database completeness has increased by calendar time, from 38.2% in 2007 to 93.2% in 2010 for public...

  10. IVR RSA Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This database contains trip-level reports submitted by vessels participating in Research Set-Aside projects with IVR reporting requirements.

  11. Mouse Phenome Database (MPD)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Mouse Phenome Database (MPD) has characterizations of hundreds of strains of laboratory mice to facilitate translational discoveries and to assist in selection...

  12. Disaster Debris Recovery Database

    Data.gov (United States)

    U.S. Environmental Protection Agency — The US EPA Region 5 Disaster Debris Recovery Database includes public datasets of over 3,500 composting facilities, demolition contractors, haulers, transfer...

  13. National Geochemical Database: Sediment

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — Geochemical analysis of sediment samples from the National Geochemical Database. Primarily inorganic elemental concentrations, most samples are of stream sediment...

  14. Reach Address Database (RAD)

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Reach Address Database (RAD) stores the reach address of each Water Program feature that has been linked to the underlying surface water features (streams,...

  15. Drycleaner Database - Region 7

    Data.gov (United States)

    U.S. Environmental Protection Agency — THIS DATA ASSET NO LONGER ACTIVE: This is metadata documentation for the Region 7 Drycleaner Database (R7DryClnDB) which tracks all Region7 drycleaners who notify...

  16. ATLAS DAQ Configuration Databases

    Institute of Scientific and Technical Information of China (English)

    I.Alexandrov; A.Amorim; 等

    2001-01-01

    The configuration databases are an important part of the Trigger/DAQ system of the future ATLAS experiment. This paper describes their current status, giving details of architecture, implementation, test results and plans for future work.

  17. Records Management Database

    Data.gov (United States)

    US Agency for International Development — The Records Management Database is tool created in Microsoft Access specifically for USAID use. It contains metadata in order to access and retrieve the information...

  18. Venus Crater Database

    Data.gov (United States)

    National Aeronautics and Space Administration — This web page leads to a database of images and information about the 900 or so impact craters on the surface of Venus by diameter, latitude, and name.

  19. Eldercare Locator Database

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Eldercare Locator is a searchable database that allows a user to search via zip code or city/ state for agencies at the State and local levels that provide...

  20. Medicaid CHIP ESPC Database

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Environmental Scanning and Program Characteristic (ESPC) Database is in a Microsoft (MS) Access format and contains Medicaid and CHIP data, for the 50 states...

  1. Global Volcano Locations Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NGDC maintains a database of over 1,500 volcano locations obtained from the Smithsonian Institution Global Volcanism Program, Volcanoes of the World publication....

  2. Medicare Coverage Database

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Medicare Coverage Database (MCD) contains all National Coverage Determinations (NCDs) and Local Coverage Determinations (LCDs), local articles, and proposed NCD...

  3. Fine Arts Database (FAD)

    Data.gov (United States)

    General Services Administration — The Fine Arts Database records information on federally owned art in the control of the GSA; this includes the location, current condition and information on artists.

  4. Uranium Location Database

    Data.gov (United States)

    U.S. Environmental Protection Agency — A GIS compiled locational database in Microsoft Access of ~15,000 mines with uranium occurrence or production, primarily in the western United States. The metadata...

  5. Food Habits Database (FHDBS)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The NEFSC Food Habits Database has two major sources of data. The first, and most extensive, is the standard NEFSC Bottom Trawl Surveys Program. During these...

  6. National Assessment Database

    Data.gov (United States)

    U.S. Environmental Protection Agency — The National Assessment Database stores and tracks state water quality assessment decisions, Total Maximum Daily Loads (TMDLs) and other watershed plans designed to...

  7. Kansas Cartographic Database (KCD)

    Data.gov (United States)

    Kansas Data Access and Support Center — The Kansas Cartographic Database (KCD) is an exact digital representation of selected features from the USGS 7.5 minute topographic map series. Features that are...

  8. Chemical Kinetics Database

    Science.gov (United States)

    SRD 17 NIST Chemical Kinetics Database (Web, free access)   The NIST Chemical Kinetics Database includes essentially all reported kinetics results for thermal gas-phase chemical reactions. The database is designed to be searched for kinetics data based on the specific reactants involved, for reactions resulting in specified products, for all the reactions of a particular species, or for various combinations of these. In addition, the bibliography can be searched by author name or combination of names. The database contains in excess of 38,000 separate reaction records for over 11,700 distinct reactant pairs. These data have been abstracted from over 12,000 papers with literature coverage through early 2000.

  9. Toxicity Reference Database

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Toxicity Reference Database (ToxRefDB) contains approximately 30 years and $2 billion worth of animal studies. ToxRefDB allows scientists and the interested...

  10. Rat Genome Database (RGD)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Rat Genome Database (RGD) is a collaborative effort between leading research institutions involved in rat genetic and genomic research to collect, consolidate,...

  11. Querying genomic databases

    Energy Technology Data Exchange (ETDEWEB)

    Baehr, A.; Hagstrom, R.; Joerg, D.; Overbeek, R.

    1991-09-01

    A natural-language interface has been developed that retrieves genomic information by using a simple subset of English. The interface spares the biologist from the task of learning database-specific query languages and computer programming. Currently, the interface deals with the E. coli genome. It can, however, be readily extended and shows promise as a means of easy access to other sequenced genomic databases as well.
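
    The approach described above, translating a restricted English subset into database queries, can be illustrated with a deliberately tiny sketch. The patterns, table, and gene names below are all invented for illustration; the actual E. coli interface is far richer than this.

```python
import re
import sqlite3

# Hypothetical miniature natural-language front end: a few fixed English
# patterns are translated into SQL against a toy "genes" table.
PATTERNS = [
    (re.compile(r"show all genes"), "SELECT name FROM genes"),
    (re.compile(r"show genes on strand (\S+)"),
     "SELECT name FROM genes WHERE strand = ?"),
]

def answer(question, conn):
    q = question.lower().strip()
    for pattern, sql in PATTERNS:
        m = pattern.fullmatch(q)
        if m:
            return [row[0] for row in conn.execute(sql, m.groups())]
    raise ValueError("question not understood: " + question)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE genes (name TEXT, strand TEXT)")
conn.executemany("INSERT INTO genes VALUES (?, ?)",
                 [("thrA", "+"), ("lacZ", "-"), ("recA", "+")])

print(answer("show genes on strand +", conn))  # ['thrA', 'recA']
```

    The point of such an interface is exactly the one the abstract makes: the biologist asks in (restricted) English and never sees the query language underneath.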

  12. Database computing in HEP

    International Nuclear Information System (INIS)

    The major SSC experiments are expected to produce up to 1 Petabyte of data per year each. Once the primary reconstruction is completed by farms of inexpensive processors, I/O becomes a major factor in further analysis of the data. We believe that the application of database techniques can significantly reduce the I/O performed in these analyses. We present examples of such I/O reductions in prototypes based on relational and object-oriented databases of CDF data samples.

  13. Fashion Information Database

    Institute of Scientific and Technical Information of China (English)

    LI Jun; WU Hai-yan; WANG Yun-yi

    2002-01-01

    In the fashion industry, controlling and applying information during fashion merchandising is a bottleneck. With the aid of digital technology, a complete and practical fashion information database could be established so that a high-quality, efficient, low-cost and distinctive fashion merchandising system could be realized. The basic structure of such a fashion information database is discussed.

  14. Database on Wind Characteristics

    DEFF Research Database (Denmark)

    Højstrup, J.; Ejsing Jørgensen, Hans; Lundtang Petersen, Erik;

    1999-01-01

    This report describes the work and results of the project Database on Wind Characteristics, which was sponsored partly by the European Commission within the framework of the JOULE III programme under contract JOR3-CT95-0061.

  15. APPLICATION OF GEOGRAPHICAL PARAMETER DATABASE TO ESTABLISHMENT OF UNIT POPULATION DATABASE

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    GIS has become a useful tool for handling geographical, economic, and population data, allowing more and more information to be obtained from these data. On the other hand, in some cases we have to evaluate the total population in a region affected by a calamity, such as a hurricane, earthquake, flood or drought, or by a decision-making task, such as setting up a broadcasting transmitter or building a chemical plant. In this paper, a method is put forward to evaluate the population in such a region. By exploring the correlation between geographical parameters and the distribution of people in the same region, by means of quantitative and qualitative analysis, a unit population database (1 km × 1 km) is established. In this way, the number of people in a given region can be estimated by adding up the population in every grid cell falling within the region boundary. The geographical parameters are obtained from the topographic database and the DEM database at a scale of 1:250,000. The fundamental geographical parameter database covering county administrative boundaries and the 1 km × 1 km grid is set up, as is the population database at county level. Both the geographical parameter database and the unit population database offer sufficient conditions for quantitative analysis. They will play an important role in the research fields of data mining (DM), decision-making support systems (DSS), and regional sustainable development.
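
    The grid-summation idea described above can be sketched in a few lines: population is stored per 1 km × 1 km cell, and the total for an arbitrary region is the sum over cells whose centres fall inside the region boundary. The cell values and the rectangular "region" below are invented for illustration; a real implementation would test cells against a polygon boundary.

```python
# Minimal sketch of unit-population aggregation (toy data, not the paper's).
def population_in_region(grid, contains):
    """grid: dict mapping (x_km, y_km) cell centres to population counts;
    contains: predicate deciding whether a cell centre lies in the region."""
    return sum(pop for cell, pop in grid.items() if contains(cell))

# Four 1 km x 1 km cells with invented populations.
grid = {(0, 0): 120, (1, 0): 80, (0, 1): 200, (5, 5): 999}

# Toy rectangular region boundary covering the first three cells.
in_region = lambda c: 0 <= c[0] <= 1 and 0 <= c[1] <= 1

print(population_in_region(grid, in_region))  # 400
```

    Swapping the predicate for a point-in-polygon test against an administrative boundary gives the county-level estimates the paper describes.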

  16. Specialist Bibliographic Databases.

    Science.gov (United States)

    Gasparyan, Armen Yuri; Yessirkepov, Marlen; Voronov, Alexander A; Trukhachev, Vladimir I; Kostyukova, Elena I; Gerasimov, Alexey N; Kitas, George D

    2016-05-01

    Specialist bibliographic databases offer essential online tools for researchers and authors who work on specific subjects and perform comprehensive and systematic syntheses of evidence. This article presents examples of the established specialist databases, which may be of interest to those engaged in multidisciplinary science communication. Access to most specialist databases is through subscription schemes and membership in professional associations. Several aggregators of information and database vendors, such as EBSCOhost and ProQuest, facilitate advanced searches supported by specialist keyword thesauri. Searches of items through specialist databases are complementary to those through multidisciplinary research platforms, such as PubMed, Web of Science, and Google Scholar. Familiarity with the functional characteristics of biomedical and nonbiomedical bibliographic search tools is mandatory for researchers, authors, editors, and publishers. The database users are offered updates of the indexed journal lists, abstracts, author profiles, and links to other metadata. Editors and publishers may find the source selection criteria particularly useful and may apply for coverage of their peer-reviewed journals and grey literature sources. These criteria are aimed at accepting relevant sources with established editorial policies and quality controls. PMID:27134485

  17. Cloud Databases: A Paradigm Shift in Databases

    OpenAIRE

    Indu Arora; Anu Gupta

    2012-01-01

    Relational databases ruled the Information Technology (IT) industry for almost 40 years. But the last few years have seen sea changes in the way IT is used and viewed. Stand-alone applications have been replaced with web-based applications, dedicated servers with multiple distributed servers, and dedicated storage with network storage. Cloud computing has become a reality due to its lower cost, scalability and pay-as-you-go model. It is one of the biggest changes in IT after the rise of Wor...

  18. Update History of This Database - Yeast Interacting Proteins Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available [ Credits ] BLAST Search Image Search Home About Archive Update History Contact us. Yeast Interacting Proteins Database: Update History of This Database. 2010/03/29: Yeast Interacting Proteins Database updated. About This Database | Database Description | Download | License | Update History of This Database | Site Policy | Contact Us

  19. A Secure Database Encryption Scheme

    OpenAIRE

    Zongkai Yang; Samba Sesay; Jingwen Chen; Du Xu

    2004-01-01

    The need to protect databases is an ever-growing one, especially in this age of e-commerce. Many conventional database security systems are riddled with holes that attackers can use to penetrate the database. No matter what degree of security is put in place, sensitive data in databases are still vulnerable to attack. To avoid the risk posed by this threat, database encryption has been recommended. However, encrypting every database item will greatly degrade ...
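
    The performance concern raised above is why schemes often encrypt selectively, transforming only sensitive columns so that queries on the remaining columns keep full speed. A minimal sketch of that idea follows; the table, key, and the XOR keystream "cipher" are invented stand-ins for illustration only and are NOT secure or part of the cited scheme.

```python
import hashlib
import sqlite3

# Toy column-level encryption: only the sensitive column is transformed
# before storage. The SHA-256-based XOR keystream below is a placeholder
# for a real cipher and must not be used for actual security.
def keystream(key, n):
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor_crypt(key, data):
    # XOR is symmetric: applying it twice with the same key recovers the data.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

key = b"demo-key"
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT, ssn BLOB)")
conn.execute("INSERT INTO accounts VALUES (?, ?)",
             ("alice", xor_crypt(key, b"123-45-6789")))

# Lookups on the non-sensitive column never touch the cipher;
# decryption happens only on read of the protected column.
stored = conn.execute("SELECT ssn FROM accounts WHERE name = 'alice'").fetchone()[0]
print(xor_crypt(key, stored).decode())  # 123-45-6789
```

    The design trade-off is exactly the one the abstract names: full-database encryption protects everything but slows every query, while selective encryption confines the cost to reads of the protected columns.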

  20. 600 MW nuclear power database

    International Nuclear Information System (INIS)

    The 600 MW nuclear power database, based on ORACLE 6.0, consists of three parts: a nuclear power plant database, a nuclear power position database and a nuclear power equipment database. The database holds a great deal of technical data and pictures of nuclear power, provided by engineering design units and individuals. The database can provide help to the designers of nuclear power.

  1. DCC Briefing Paper: Database archiving

    OpenAIRE

    Müller, Heiko

    2009-01-01

    In a computational context, data archiving refers to the storage of electronic documents, data sets, multimedia files, and so on, for a defined period of time. Database archiving is usually seen as a subset of data archiving. Database archiving focuses on archiving data that are maintained under the control of a database management system and structured under a database schema, e.g., a relational database. The primary goal of database archiving is to maintain access to data in case it is late...

  2. Sharper, Stronger, Faster Upper Visual Field Representation in Primate Superior Colliculus.

    Science.gov (United States)

    Hafed, Ziad M; Chen, Chih-Yang

    2016-07-11

    Visually guided behavior in three-dimensional environments entails handling immensely different sensory and motor conditions across retinotopic visual field locations: peri-personal ("near") space is predominantly viewed through the lower retinotopic visual field (LVF), whereas extra-personal ("far") space encompasses the upper visual field (UVF). Thus, when, say, driving a car, orienting toward the instrument cluster below eye level is different from scanning an upcoming intersection, even with similarly sized eye movements. However, an overwhelming assumption about visuomotor circuits for eye-movement exploration, like those in the primate superior colliculus (SC), is that they represent visual space in a purely symmetric fashion across the horizontal meridian. Motivated by ecological constraints on visual exploration of far space, containing small UVF retinal-image features, here we found a large, multi-faceted difference in the SC's representation of the UVF versus LVF. Receptive fields are smaller, more finely tuned to image spatial structure, and more sensitive to image contrast for neurons representing the UVF. Stronger UVF responses also occur faster. Analysis of putative synaptic activity revealed a particularly categorical change when the horizontal meridian is crossed, and our observations correctly predicted novel eye-movement effects. Despite its appearance as a continuous layered sheet of neural tissue, the SC contains functional discontinuities between UVF and LVF representations, paralleling a physical discontinuity present in cortical visual areas. Our results motivate the recasting of structure-function relationships in the visual system from an ecological perspective, and also exemplify strong coherence between brain-circuit organization for visually guided exploration and the nature of the three-dimensional environment in which we function. PMID:27291052

  3. Neural correlates of superior intelligence: stronger recruitment of posterior parietal cortex.

    Science.gov (United States)

    Lee, Kun Ho; Choi, Yu Yong; Gray, Jeremy R; Cho, Sun Hee; Chae, Jeong-Ho; Lee, Seungheun; Kim, Kyungjin

    2006-01-15

    General intelligence (g) is a common factor in diverse cognitive abilities and a major influence on life outcomes. Neuroimaging studies in adults suggest that the lateral prefrontal and parietal cortices play a crucial role in related cognitive activities including fluid reasoning, the control of attention, and working memory. Here, we investigated the neural bases for intellectual giftedness (superior-g) in adolescents, using fMRI. The participants consisted of a superior-g group (n = 18, mean RAPM = 33.9 +/- 0.8, >99%) from the national academy for gifted adolescents and the control group (n = 18, mean RAPM = 22.8 +/- 1.6, 60%) from local high schools in Korea (mean age = 16.5 +/- 0.8). fMRI data were acquired while they performed two reasoning tasks with high and low g-loadings. In both groups, the high g-loaded tasks specifically increased regional activity in the bilateral fronto-parietal network including the lateral prefrontal, anterior cingulate, and posterior parietal cortices. However, the regional activations of the superior-g group were significantly stronger than those of the control group, especially in the posterior parietal cortex. Moreover, regression analysis revealed that activity of the superior and intraparietal cortices (BA 7/40) strongly covaried with individual differences in g (r = 0.71 to 0.81). A correlated vectors analysis implicated bilateral posterior parietal areas in g. These results suggest that superior-g may not be due to the recruitment of additional brain regions but to the functional facilitation of the fronto-parietal network particularly driven by the posterior parietal activation.

  4. Protective effect and mechanism of stronger neo-minophagen C against fulminant hepatic failure

    Institute of Scientific and Technical Information of China (English)

    Bao-Shan Yang; Ying-Ji Ma; Yan Wang; Li-Yan Chen; Man-Ru Bi; Bing-Zhu Yan; Lu Bai; Hui Zhou; Fu-Xiang Wang

    2007-01-01

AIM: To investigate the protective effect of stronger neo-minophagen C (SNMC) on fulminant hepatic failure (FHF) and its underlying mechanism. METHODS: A mouse model of FHF was established by intraperitoneal injection of galactosamine (D-GalN) and lipopolysaccharide (LPS). The survival rate, liver function, inflammatory factors and liver pathological changes were assessed with and without SNMC treatment. Hepatocyte apoptosis was estimated by observing the stained mitochondrial structure with the terminal deoxynucleotidyl transferase-mediated deoxyuridine triphosphate fluorescence nick end labeling (TUNEL) method and with antibodies against cytochrome C (Cyt-C) and caspase-3. RESULTS: The levels of plasma tumor necrosis factor alpha (TNF-α), nitric oxide (NO), ET-1 and interleukin-6 (IL-6), and the degree of hepatic tissue injury, were decreased in the SNMC-treated groups compared with the model group (P < 0.01); no differences were found among the different dosages administered at different time points. There was a significant difference in survival rates between the SNMC-treated groups and the model group (P < 0.01). The apoptosis index decreased from 32.3% ± 4.7% at 6 h after a low dose of SNMC to 5% ± 2.83% on d 7 (P < 0.05). The expression of Cyt-C and caspase-3 decreased with the prolongation of therapeutic time, and typical hepatocyte apoptosis was markedly ameliorated under the electron microscope. CONCLUSION: SNMC can effectively protect the liver against FHF induced by LPS/D-GalN. SNMC prevents hepatocyte apoptosis by inhibiting the inflammatory reaction and stabilizing the mitochondrial membrane, thereby suppressing the release of Cyt-C and the subsequent activation of caspase-3.

  5. Surgery Risk Assessment (SRA) Database

    Data.gov (United States)

    Department of Veterans Affairs — The Surgery Risk Assessment (SRA) database is part of the VA Surgical Quality Improvement Program (VASQIP). This database contains assessments of selected surgical...

  6. Chemical Explosion Database

    Science.gov (United States)

    Johansson, Peder; Brachet, Nicolas

    2010-05-01

    A database containing information on chemical explosions, recorded and located by the International Data Center (IDC) of the CTBTO, should be established in the IDC prior to entry into force of the CTBT. Nearly all of the large chemical explosions occur in connection with mining activity. As a first step towards the establishment of this database, a survey of presumed mining areas where sufficiently large explosions are conducted has been done. This is dominated by the large coal mining areas like the Powder River (U.S.), Kuznetsk (Russia), Bowen (Australia) and Ekibastuz (Kazakhstan) basins. There are also several other smaller mining areas, in e.g. Scandinavia, Poland, Kazakhstan and Australia, with large enough explosions for detection. Events in the Reviewed Event Bulletin (REB) of the IDC that are located in or close to these mining areas, and which therefore are candidates for inclusion in the database, have been investigated. Comparison with a database of infrasound events has been done as many mining blasts generate strong infrasound signals and therefore also are included in the infrasound database. Currently there are 66 such REB events in 18 mining areas in the infrasound database. On a yearly basis several hundreds of events in mining areas have been recorded and included in the REB. Establishment of the database of chemical explosions requires confirmation and ground truth information from the States Parties regarding these events. For an explosion reported in the REB, the appropriate authority in whose country the explosion occurred is encouraged, on a voluntary basis, to seek out information on the explosion and communicate this information to the IDC.

  7. Data management for biofied building

    Science.gov (United States)

    Matsuura, Kohta; Mita, Akira

    2015-03-01

Recently, smart houses have been studied by many researchers to satisfy the individual demands of residents. However, they are not yet feasible, as they are very costly and require many sensors to be embedded into houses. Therefore, we suggest the "Biofied Building", in which sensor agent robots conduct sensing, actuation, and control in the house. The robots continuously monitor many parameters of residents' lives, such as walking posture and emotion. In this paper, a prototype network system and a data model for practical application of the Biofied Building are proposed. In the system, the functions of robots and servers are divided according to service flows in Biofied Buildings. The data model is designed to accumulate both building data and residents' data. Data sent from the robots and data analyzed in the servers are automatically registered in the database. Lastly, the feasibility of this system is verified through a lighting control simulation performed in an office space.

  8. The initial building and application of the DNA specimen database in Uygur Chinese population with depression%新疆维吾尔族抑郁症基因样本库的初步建立及应用

    Institute of Scientific and Technical Information of China (English)

    张丽丽; 韩书贤; 安治国; 钟衔江; 罗晓; 伊琦忠

    2014-01-01

Objective: To build up an initial normative DNA specimen bank of the Uyghur population with depression, and to analyze the polymorphism of the TPH-2 gene. Methods: Demographic data, clinical data and DNA specimens were collected from 800 Uyghur patients with depression between January 2010 and January 2013. After quality control, the DNA specimens were deposited in the vital diseases repository of the First Affiliated Hospital of Xinjiang Medical University, a corresponding clinical data repository was set up, and possible risk factors of the disease were assessed. Using direct DNA sequencing, we investigated the association between TPH-2 gene polymorphism and suicidal tendencies in depression. Results: (1) Preliminary application showed that 99.6% of samples were genotyped successfully for rs7305115. (2) Significant differences in HAMD-17 scores were found between groups with different educational backgrounds or professions (P < 0.05). (4) rs7305115 showed no genotypic or allelic association with suicidal tendencies between cases with and without suicide intention (P > 0.05). Conclusion: A normative DNA specimen database of the Uyghur population with depression was built up initially. Literacy level and social status were associated with the condition of the disease, but no correlation between the TPH-2 gene and depression was found.

  9. Beyond 1-Safety and 2-Safety for Replicated Databases: Group-Safety

    OpenAIRE

    Wiesmann, M.; Schiper, A.

    2003-01-01

In this paper, we study the safety guarantees of group communication-based database replication techniques. We show that there is a model mismatch between group communication and databases, and that because of this, classical group communication systems cannot be used to build 2-safe database replication. We propose a new group communication primitive called end-to-end atomic broadcast that solves the problem, i.e., can be used to implement 2-safe database replication. We also introduce...

  10. ADANS database specification

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-01-16

    The purpose of the Air Mobility Command (AMC) Deployment Analysis System (ADANS) Database Specification (DS) is to describe the database organization and storage allocation and to provide the detailed data model of the physical design and information necessary for the construction of the parts of the database (e.g., tables, indexes, rules, defaults). The DS includes entity relationship diagrams, table and field definitions, reports on other database objects, and a description of the ADANS data dictionary. ADANS is the automated system used by Headquarters AMC and the Tanker Airlift Control Center (TACC) for airlift planning and scheduling of peacetime and contingency operations as well as for deliberate planning. ADANS also supports planning and scheduling of Air Refueling Events by the TACC and the unit-level tanker schedulers. ADANS receives input in the form of movement requirements and air refueling requests. It provides a suite of tools for planners to manipulate these requirements/requests against mobility assets and to develop, analyze, and distribute schedules. Analysis tools are provided for assessing the products of the scheduling subsystems, and editing capabilities support the refinement of schedules. A reporting capability provides formatted screen, print, and/or file outputs of various standard reports. An interface subsystem handles message traffic to and from external systems. The database is an integral part of the functionality summarized above.

  11. Integrating Paleoecological Databases

    Science.gov (United States)

    Blois, Jessica; Goring, Simon; Smith, Alison

    2011-02-01

Neotoma Consortium Workshop; Madison, Wisconsin, 23-26 September 2010; Paleoecology can contribute much to global change science, as paleontological records provide rich information about species range shifts, changes in vegetation composition and productivity, aquatic and terrestrial ecosystem responses to abrupt climate change, and paleoclimate reconstruction, for example. However, while paleoecology is increasingly a multidisciplinary, multiproxy field focused on biotic responses to global change, most paleo databases focus on single-proxy groups. The Neotoma Paleoecology Database (http://www.neotomadb.org) aims to remedy this limitation by integrating discipline-specific databases to facilitate cross-community queries and analyses. In September, Neotoma consortium members and representatives from other databases and data communities met at the University of Wisconsin-Madison to launch the second development phase of Neotoma. The workshop brought together 54 international specialists, including Neotoma data stewards, users, and developers. Goals for the meeting were fourfold: (1) develop working plans for existing data communities; (2) identify new data types and sources; (3) enhance data access, visualization, and analysis on the Neotoma Web site; and (4) coordinate with other databases and cooperate in tool development and sharing.

  12. The CUTLASS database facilities

    International Nuclear Information System (INIS)

    The enhancement of the CUTLASS database management system to provide improved facilities for data handling is seen as a prerequisite to its effective use for future power station data processing and control applications. This particularly applies to the larger projects such as AGR data processing system refurbishments, and the data processing systems required for the new Coal Fired Reference Design stations. In anticipation of the need for improved data handling facilities in CUTLASS, the CEGB established a User Sub-Group in the early 1980's to define the database facilities required by users. Following the endorsement of the resulting specification and a detailed design study, the database facilities have been implemented as an integral part of the CUTLASS system. This paper provides an introduction to the range of CUTLASS Database facilities, and emphasises the role of Database as the central facility around which future Kit 1 and (particularly) Kit 6 CUTLASS based data processing and control systems will be designed and implemented. (author)

  13. FishTraits Database

    Science.gov (United States)

    Angermeier, Paul L.; Frimpong, Emmanuel A.

    2009-01-01

    The need for integrated and widely accessible sources of species traits data to facilitate studies of ecology, conservation, and management has motivated development of traits databases for various taxa. In spite of the increasing number of traits-based analyses of freshwater fishes in the United States, no consolidated database of traits of this group exists publicly, and much useful information on these species is documented only in obscure sources. The largely inaccessible and unconsolidated traits information makes large-scale analysis involving many fishes and/or traits particularly challenging. FishTraits is a database of >100 traits for 809 (731 native and 78 exotic) fish species found in freshwaters of the conterminous United States, including 37 native families and 145 native genera. The database contains information on four major categories of traits: (1) trophic ecology, (2) body size and reproductive ecology (life history), (3) habitat associations, and (4) salinity and temperature tolerances. Information on geographic distribution and conservation status is also included. Together, we refer to the traits, distribution, and conservation status information as attributes. Descriptions of attributes are available here. Many sources were consulted to compile attributes, including state and regional species accounts and other databases.

  14. Automatic Detection of Buildings and Changes in Buildings for Updating of Maps

    Directory of Open Access Journals (Sweden)

    Harri Kaartinen

    2010-04-01

Full Text Available There is currently high interest in developing automated methods to assist the updating of map databases. This study presents methods for the automatic detection of buildings and of changes in buildings from airborne laser scanner and digital aerial image data, and shows their potential usefulness with thorough experiments in a 5 km2 suburban study area. In building detection, 96% of buildings larger than 60 m2 were correctly detected. The completeness and correctness of change detection for buildings larger than 60 m2 were about 85% (including five classes). Most of the errors occurred in small or otherwise problematic buildings.
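The completeness and correctness figures above follow the detection-evaluation definitions common in the building-extraction literature; as a hedged illustration (the paper's exact matching rules may differ, and the counts below are invented for the example), they can be computed from true positives, false positives, and false negatives:

```python
# Hedged sketch of the two quality measures; function names are our own.
def completeness(tp, fn):
    """Share of reference buildings that were detected: TP / (TP + FN)."""
    return tp / (tp + fn)

def correctness(tp, fp):
    """Share of detections that correspond to real buildings: TP / (TP + FP)."""
    return tp / (tp + fp)

# Example: 96 of 100 reference buildings detected, with 4 false detections.
print(round(completeness(96, 4), 2), round(correctness(96, 4), 2))  # 0.96 0.96
```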

  15. Database Description - GETDB | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

Full Text Available Database name: GETDB. Alternative name: Gal4 Enhancer Trap Insertion Datab... Contact: +81-78-306-3183. Database classification: Expression; Invertebrate genome database. Organism: Drosophila melanogaster (Taxonomy ID: 7227). Database description: About 4,600 i... relationship with gene was identified for 2,157 independent sites. This database is available to the public as the datab...

  16. Laboratory Building.

    Energy Technology Data Exchange (ETDEWEB)

    Herrera, Joshua M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-03-01

    This report is an analysis of the means of egress and life safety requirements for the laboratory building. The building is located at Sandia National Laboratories (SNL) in Albuquerque, NM. The report includes a prescriptive-based analysis as well as a performance-based analysis. Following the analysis are appendices which contain maps of the laboratory building used throughout the analysis. The top of all the maps is assumed to be north.

  17. Optimizing Spatial Databases

    Directory of Open Access Journals (Sweden)

    Anda VELICANU

    2010-01-01

Full Text Available This paper describes one of the best ways to improve the optimization of spatial databases: spatial indexes. The most common and widely used spatial indexes, R-tree and Quadtree, are presented, analyzed, and compared in this paper. A few example queries are also given that run in Oracle Spatial and are supported by an R-tree spatial index. Spatial databases offer special features that can be very helpful when representing such data, but in terms of storage and time costs spatial data can require a lot of resources. This is why optimizing the database is one of the most important aspects when working with large volumes of data.
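As an illustrative sketch of why such indexes speed up spatial queries, the toy point quadtree below (our own minimal implementation, not Oracle Spatial's or the paper's) shows the key mechanism: a range query prunes every quadrant whose region does not overlap the query rectangle.

```python
# Minimal point quadtree: each node covers a square region and splits into
# four children once it holds more than `capacity` points.
class QuadTree:
    def __init__(self, x, y, size, capacity=4):
        self.x, self.y, self.size = x, y, size  # lower-left corner, side length
        self.capacity = capacity
        self.points = []
        self.children = None  # four sub-quadrants after splitting

    def insert(self, px, py):
        if not (self.x <= px < self.x + self.size and
                self.y <= py < self.y + self.size):
            return False  # point lies outside this node's region
        if self.children is None:
            if len(self.points) < self.capacity:
                self.points.append((px, py))
                return True
            self._split()
        return any(c.insert(px, py) for c in self.children)

    def _split(self):
        h = self.size / 2
        self.children = [QuadTree(self.x + dx, self.y + dy, h, self.capacity)
                         for dx in (0, h) for dy in (0, h)]
        for p in self.points:  # push existing points down into the children
            any(c.insert(*p) for c in self.children)
        self.points = []

    def query(self, qx, qy, qw, qh, found=None):
        """Collect all points inside the rectangle [qx, qx+qw) x [qy, qy+qh)."""
        if found is None:
            found = []
        # Prune whole subtrees whose region misses the query rectangle.
        if (qx >= self.x + self.size or qx + qw <= self.x or
                qy >= self.y + self.size or qy + qh <= self.y):
            return found
        for (px, py) in self.points:
            if qx <= px < qx + qw and qy <= py < qy + qh:
                found.append((px, py))
        if self.children:
            for c in self.children:
                c.query(qx, qy, qw, qh, found)
        return found

tree = QuadTree(0, 0, 100)
for pt in [(10, 10), (20, 80), (55, 55), (90, 5), (52, 60)]:
    tree.insert(*pt)
print(sorted(tree.query(50, 50, 20, 20)))  # points in the 20x20 box at (50, 50)
```

An R-tree applies the same pruning idea to bounding rectangles of arbitrary geometries rather than to a fixed quadrant grid.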

  18. DistiLD Database

    DEFF Research Database (Denmark)

    Palleja, Albert; Horn, Heiko; Eliasson, Sabrina;

    2012-01-01

Genome-wide association studies (GWAS) have identified thousands of single nucleotide polymorphisms (SNPs) associated with the risk of hundreds of diseases. However, there is currently no database that enables non-specialists to answer the following simple questions: which SNPs associated ... blocks, so that SNPs in LD with each other are preferentially in the same block, whereas SNPs not in LD are in different blocks. By projecting SNPs and genes onto LD blocks, the DistiLD database aims to increase usage of existing GWAS results by making it easy to query and visualize disease-associated SNPs and genes in their chromosomal context. The database is available at http://distild.jensenlab.org/.
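The projection step described above amounts to an interval lookup: each SNP position is mapped to the LD block whose genomic interval contains it. A minimal sketch (toy block coordinates and helper names of our own; DistiLD's real blocks are derived from population LD data):

```python
import bisect

# LD blocks on one chromosome as sorted, half-open intervals [start, end).
blocks = [(0, 5000, "block1"), (5000, 12000, "block2"), (12000, 20000, "block3")]
starts = [b[0] for b in blocks]

def block_of(pos):
    """Find the LD block containing a SNP position via binary search."""
    i = bisect.bisect_right(starts, pos) - 1
    if i >= 0 and pos < blocks[i][1]:
        return blocks[i][2]
    return None  # position falls outside every block

# SNPs mapping to the same block are taken to be in LD with each other.
print(block_of(4800), block_of(5001), block_of(25000))  # block1 block2 None
```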

  19. Database Application Schema Forensics

    Directory of Open Access Journals (Sweden)

    Hector Quintus Beyers

    2014-12-01

    Full Text Available The application schema layer of a Database Management System (DBMS can be modified to deliver results that may warrant a forensic investigation. Table structures can be corrupted by changing the metadata of a database or operators of the database can be altered to deliver incorrect results when used in queries. This paper will discuss categories of possibilities that exist to alter the application schema with some practical examples. Two forensic environments are introduced where a forensic investigation can take place in. Arguments are provided why these environments are important. Methods are presented how these environments can be achieved for the application schema layer of a DBMS. A process is proposed on how forensic evidence should be extracted from the application schema layer of a DBMS. The application schema forensic evidence identification process can be applied to a wide range of forensic settings.

  20. Additive Pattern Database Heuristics

    CERN Document Server

    Felner, A; Korf, R E; 10.1613/jair.1480

    2011-01-01

    We explore a method for computing admissible heuristic evaluation functions for search problems. It utilizes pattern databases, which are precomputed tables of the exact cost of solving various subproblems of an existing problem. Unlike standard pattern database heuristics, however, we partition our problems into disjoint subproblems, so that the costs of solving the different subproblems can be added together without overestimating the cost of solving the original problem. Previously, we showed how to statically partition the sliding-tile puzzles into disjoint groups of tiles to compute an admissible heuristic, using the same partition for each state and problem instance. Here we extend the method and show that it applies to other domains as well. We also present another method for additive heuristics which we call dynamically partitioned pattern databases. Here we partition the problem into disjoint subproblems for each state of the search dynamically. We discuss the pros and cons of each of these methods a...
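The statically partitioned idea can be sketched on the 8-puzzle: build one lookup table per disjoint group of tiles by breadth-first search over abstract states, then sum the lookups. The sum never overestimates because each real move advances exactly one tile, which is counted in exactly one group. The abstraction below (a pattern tile may slide into any adjacent cell not held by another pattern tile; the blank and all other tiles are ignored) is a simplification of our own for illustration, not the authors' implementation.

```python
from collections import deque

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)  # board read row by row; 0 is the blank

def neighbors(cell):
    """Cells adjacent to `cell` on the 3x3 board."""
    r, c = divmod(cell, 3)
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            yield nr * 3 + nc

def build_pdb(pattern):
    """BFS backwards from the goal over abstract states. An abstract state is
    the tuple of cells occupied by the pattern tiles; a tile may slide into any
    adjacent cell not held by another pattern tile (everything else ignored)."""
    goal = tuple(GOAL.index(t) for t in pattern)
    dist = {goal: 0}
    q = deque([goal])
    while q:
        state = q.popleft()
        for i, cell in enumerate(state):
            for nb in neighbors(cell):
                if nb in state:
                    continue
                nxt = state[:i] + (nb,) + state[i + 1:]
                if nxt not in dist:
                    dist[nxt] = dist[state] + 1
                    q.append(nxt)
    return dist

def heuristic(board, pdbs):
    """Sum the lookups over disjoint patterns (additive, hence admissible)."""
    return sum(dist[tuple(board.index(t) for t in pattern)]
               for pattern, dist in pdbs)

pdbs = [(p, build_pdb(p)) for p in ((1, 2, 3, 4), (5, 6, 7, 8))]
start = (1, 2, 3, 4, 5, 6, 0, 7, 8)  # tiles 7 and 8 each one slide from home
print(heuristic(start, pdbs))  # -> 2
```

Here both lookups land in a table of 9 x 8 x 7 x 6 = 3024 abstract states; real solvers use larger patterns and compact ranking functions instead of a dictionary.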

  1. Mouse genome database 2016.

    Science.gov (United States)

    Bult, Carol J; Eppig, Janan T; Blake, Judith A; Kadin, James A; Richardson, Joel E

    2016-01-01

    The Mouse Genome Database (MGD; http://www.informatics.jax.org) is the primary community model organism database for the laboratory mouse and serves as the source for key biological reference data related to mouse genes, gene functions, phenotypes and disease models with a strong emphasis on the relationship of these data to human biology and disease. As the cost of genome-scale sequencing continues to decrease and new technologies for genome editing become widely adopted, the laboratory mouse is more important than ever as a model system for understanding the biological significance of human genetic variation and for advancing the basic research needed to support the emergence of genome-guided precision medicine. Recent enhancements to MGD include new graphical summaries of biological annotations for mouse genes, support for mobile access to the database, tools to support the annotation and analysis of sets of genes, and expanded support for comparative biology through the expansion of homology data.

  2. The Neotoma Paleoecology Database

    Science.gov (United States)

    Grimm, E. C.; Ashworth, A. C.; Barnosky, A. D.; Betancourt, J. L.; Bills, B.; Booth, R.; Blois, J.; Charles, D. F.; Graham, R. W.; Goring, S. J.; Hausmann, S.; Smith, A. J.; Williams, J. W.; Buckland, P.

    2015-12-01

    The Neotoma Paleoecology Database (www.neotomadb.org) is a multiproxy, open-access, relational database that includes fossil data for the past 5 million years (the late Neogene and Quaternary Periods). Modern distributional data for various organisms are also being made available for calibration and paleoecological analyses. The project is a collaborative effort among individuals from more than 20 institutions worldwide, including domain scientists representing a spectrum of Pliocene-Quaternary fossil data types, as well as experts in information technology. Working groups are active for diatoms, insects, ostracodes, pollen and plant macroscopic remains, testate amoebae, rodent middens, vertebrates, age models, geochemistry and taphonomy. Groups are also active in developing online tools for data analyses and for developing modules for teaching at different levels. A key design concept of NeotomaDB is that stewards for various data types are able to remotely upload and manage data. Cooperatives for different kinds of paleo data, or from different regions, can appoint their own stewards. Over the past year, much progress has been made on development of the steward software-interface that will enable this capability. The steward interface uses web services that provide access to the database. More generally, these web services enable remote programmatic access to the database, which both desktop and web applications can use and which provide real-time access to the most current data. Use of these services can alleviate the need to download the entire database, which can be out-of-date as soon as new data are entered. In general, the Neotoma web services deliver data either from an entire table or from the results of a view. Upon request, new web services can be quickly generated. Future developments will likely expand the spatial and temporal dimensions of the database. NeotomaDB is open to receiving new datasets and stewards from the global Quaternary community

  3. Database management systems understanding and applying database technology

    CERN Document Server

    Gorman, Michael M

    1991-01-01

Database Management Systems: Understanding and Applying Database Technology focuses on the processes, methodologies, techniques, and approaches involved in database management systems (DBMSs). The book first takes a look at ANSI database standards and DBMS applications and components. Discussions focus on application components and DBMS components, implementing the dynamic relationship application, problems and benefits of dynamic relationship DBMSs, the nature of a dynamic relationship application, ANSI/NDL, and DBMS standards. The manuscript then ponders on logical database, interrogation, and phy

  4. Database Management System

    Science.gov (United States)

    1990-01-01

    In 1981 Wayne Erickson founded Microrim, Inc, a company originally focused on marketing a microcomputer version of RIM (Relational Information Manager). Dennis Comfort joined the firm and is now vice president, development. The team developed an advanced spinoff from the NASA system they had originally created, a microcomputer database management system known as R:BASE 4000. Microrim added many enhancements and developed a series of R:BASE products for various environments. R:BASE is now the second largest selling line of microcomputer database management software in the world.

  5. The CATH database

    Directory of Open Access Journals (Sweden)

    Knudsen Michael

    2010-02-01

    Full Text Available Abstract The CATH database provides hierarchical classification of protein domains based on their folding patterns. Domains are obtained from protein structures deposited in the Protein Data Bank and both domain identification and subsequent classification use manual as well as automated procedures. The accompanying website http://www.cathdb.info provides an easy-to-use entry to the classification, allowing for both browsing and downloading of data. Here, we give a brief review of the database, its corresponding website and some related tools.

  6. Pressen, personoplysninger og databaser

    DEFF Research Database (Denmark)

    Schaumburg-Müller, Sten

    2006-01-01

It is examined to what extent the Personal Data Act's at times very restrictive and not particularly media-suited rules cover journalistic activity, and the special regulation of media databases and its interplay with the Personal Data Act and the Media Liability Act is described.

  7. The CHIANTI atomic database

    CERN Document Server

    Young, Peter R; Landi, Enrico; Del Zanna, Giulio; Mason, Helen

    2015-01-01

    The CHIANTI atomic database was first released in 1996 and has had a huge impact on the analysis and modeling of emissions from astrophysical plasmas. The database has continued to be updated, with version 8 released in 2015. Atomic data for modeling the emissivities of 246 ions and neutrals are contained in CHIANTI, together with data for deriving the ionization fractions of all elements up to zinc. The different types of atomic data are summarized here and their formats discussed. Statistics on the impact of CHIANTI to the astrophysical community are given and examples of the diverse range of applications are presented.

  8. A database devoted to the insects of the cultural heritage

    OpenAIRE

    Fabien Fohrer; Michel Martinez; Franck Dorkeld

    2011-01-01

This database, implemented jointly by the CICRP and the INRA, gathers the most important pests affecting the cultural heritage. These insects represent a serious threat to the preservation of cultural properties such as museum collections, libraries and archives, movable objects, and immovable elements of historical buildings. It is an easy tool for identifying the species of interest and permits very prompt action against infestations. This database is of int...

  9. SURFACE: a database of protein surface regions for functional annotation

    OpenAIRE

    Ferrè, Fabrizio; Ausiello, Gabriele; Zanzoni, Andreas; Helmer-Citterich, Manuela

    2004-01-01

    The SURFACE (SUrface Residues and Functions Annotated, Compared and Evaluated, URL http://cbm.bio.uniroma2.it/surface/) database is a repository of annotated and compared protein surface regions. SURFACE contains the results of a large-scale protein annotation and local structural comparison project. A non-redundant set of protein chains is used to build a database of protein surface patches, defined as putative surface functional sites. Each patch is annotated with sequence and structure-der...

  10. DataBase on Demand

    Science.gov (United States)

    Gaspar Aparicio, R.; Gomez, D.; Coterillo Coz, I.; Wojcik, D.

    2012-12-01

    At CERN a number of key database applications are running on user-managed MySQL database services. The database on demand project was born out of an idea to provide the CERN user community with an environment to develop and run database services outside of the actual centralised Oracle based database services. The Database on Demand (DBoD) empowers the user to perform certain actions that had been traditionally done by database administrators, DBA's, providing an enterprise platform for database applications. It also allows the CERN user community to run different database engines, e.g. presently open community version of MySQL and single instance Oracle database server. This article describes a technology approach to face this challenge, a service level agreement, the SLA that the project provides, and an evolution of possible scenarios.

  11. DataBase on Demand

    International Nuclear Information System (INIS)

    At CERN a number of key database applications are running on user-managed MySQL database services. The database on demand project was born out of an idea to provide the CERN user community with an environment to develop and run database services outside of the actual centralised Oracle based database services. The Database on Demand (DBoD) empowers the user to perform certain actions that had been traditionally done by database administrators, DBA's, providing an enterprise platform for database applications. It also allows the CERN user community to run different database engines, e.g. presently open community version of MySQL and single instance Oracle database server. This article describes a technology approach to face this challenge, a service level agreement, the SLA that the project provides, and an evolution of possible scenarios.

  12. DataBase on demand

    CERN Document Server

    Aparicio, Ruben Gaspar; Coterillo Coz, I

    2012-01-01

    At CERN a number of key database applications are running on user-managed MySQL database services. The database on demand project was born out of an idea to provide the CERN user community with an environment to develop and run database services outside of the actual centralised Oracle based database services. The Database on Demand (DBoD) empowers the user to perform certain actions that had been traditionally done by database administrators, DBA's, providing an enterprise platform for database applications. It also allows the CERN user community to run different database engines, e.g. presently open community version of MySQL and single instance Oracle database server. This article describes a technology approach to face this challenge, a service level agreement, the SLA that the project provides, and an evolution of possible scenarios.

  13. REXUS/BEXUS: launching student experiments -a step towards a stronger space science community

    Science.gov (United States)

    Fittock, Mark; Stamminger, Andreas; Maria, Roth; Dannenberg, Kristine; Page, Helen

The REXUS/BEXUS (Rocket/Balloon Experiments for University Students) programme provides opportunities for teams of European student scientists and engineers to fly experiments on sounding rockets and high-altitude balloons, offering both the students and the scientific community encouragement and support for experiments. An important feature of the programme is that the students experience a full project life-cycle, which is typically not part of their university education and which helps prepare them for further scientific work. They have to plan, organize, and control their project in order to develop and build an experiment, but must also work on the scientific aspects. Many of the students continue to work in the field on which they focused in the programme and can often build upon both the experience and the results from flight. Within the REXUS/BEXUS project cycle, they are encouraged to write and present papers about their experiments and results; increasing amounts of scientific output are seen from the students who participate. Not only do the students learn and develop from REXUS/BEXUS, but the scientific community also reaps significant benefits. Another major benefit of the programme is the promotion that the students bring to the whole space community. Not only is the public made more aware of advanced scientific and technical concepts, but participating students are also in contact with other university-level students. Students are less restricted in their publicity and attract large public followings online, as well as presenting themselves in more traditional media outlets. Many teams' creative approach to outreach is astonishing. The benefits extend beyond the space science community as a whole: institutes, universities, and departments can see increased interest after supporting students participating in the programme.
The programme is realized under a bilateral Agency

  14. LogiQL a query language for smart databases

    CERN Document Server

    Halpin, Terry

    2014-01-01

    LogiQL is a new state-of-the-art programming language based on Datalog. It can be used to build applications that combine transactional, analytical, graph, probabilistic, and mathematical programming. LogiQL makes it possible to build hybrid applications that previously required multiple programming languages and databases. In this first book to cover LogiQL, the authors explain how to design, implement, and query deductive databases using this new programming language. LogiQL's declarative approach enables complex data structures and business rules to be simply specified and then automaticall
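    LogiQL itself is not shown in this record, but it builds on Datalog, whose declarative flavour can be illustrated with a small sketch: naive bottom-up evaluation of the classic ancestor rules, written in plain Python (the relation names and facts are purely illustrative, not LogiQL syntax).

    ```python
    # Facts: parent(alice, bob), parent(bob, carol).
    parent = {("alice", "bob"), ("bob", "carol")}

    def ancestors(parent_facts):
        # Naive bottom-up evaluation of the Datalog rules:
        #   ancestor(X, Z) <- parent(X, Z).
        #   ancestor(X, Z) <- parent(X, Y), ancestor(Y, Z).
        anc = set(parent_facts)
        while True:
            derived = {(x, z) for (x, y) in parent_facts
                              for (y2, z) in anc if y == y2}
            if derived <= anc:  # fixpoint: no new facts derivable
                return anc
            anc |= derived

    print(sorted(ancestors(parent)))
    # [('alice', 'bob'), ('alice', 'carol'), ('bob', 'carol')]
    ```

    The rules state *what* the ancestor relation is; the evaluation loop, not the rule author, decides how to compute it, which is the declarative property the abstract attributes to LogiQL.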

  15. Database design: Community discussion board

    OpenAIRE

    Klepetko, Radim

    2009-01-01

    The goal of this thesis is to design a database for a discussion board application that provides classic discussion board functionality plus web 2.0 features. The emphasis lies on a precise description of the application requirements, which is then used to design an optimal database model independent of the technological implementation (the chosen database system). At the end of the thesis, the database design is tested using the MySQL database system.

  16. The Database State Machine Approach

    OpenAIRE

    Pedone, Fernando; Guerraoui, Rachid; Schiper, Andre

    1999-01-01

    Database replication protocols have historically been built on top of distributed database systems, and have consequently been designed and implemented using distributed transactional mechanisms, such as atomic commitment. We present the Database State Machine approach, a new way to deal with database replication in a cluster of servers. This approach relies on a powerful atomic broadcast primitive to propagate transactions between database servers, and alleviates the need for atomic comm...

  17. Accessing and using chemical databases

    DEFF Research Database (Denmark)

    Nikolov, Nikolai Georgiev; Pavlov, Todor; Niemelä, Jay Russell;

    2013-01-01

    Computer-based representation of chemicals makes it possible to organize data in chemical databases-collections of chemical structures and associated properties. Databases are widely used wherever efficient processing of chemical information is needed, including search, storage, retrieval, and di...... are included about the OASIS database and platform and the Danish (Q)SAR Database online. Various types of chemical database resources are discussed, together with a list of examples....

  18. GRAD: On Graph Database Modeling

    OpenAIRE

    Ghrab, Amine; Romero, Oscar; Skhiri, Sabri; Vaisman, Alejandro; Zimányi, Esteban

    2016-01-01

    Graph databases have emerged as the fundamental technology underpinning trendy application domains where traditional databases are not well-equipped to handle complex graph data. However, current graph databases support basic graph structures and integrity constraints with no standard algebra. In this paper, we introduce GRAD, a native and generic graph database model. GRAD goes beyond traditional graph database models, which support simple graph structures and constraints. Instead, GRAD pres...

  19. The CAPEC Database

    DEFF Research Database (Denmark)

    Nielsen, Thomas Lund; Abildskov, Jens; Harper, Peter Mathias;

    2001-01-01

    The Computer-Aided Process Engineering Center (CAPEC) database of measured data was established with the aim to promote greater data exchange in the chemical engineering community. The target properties are pure component properties, mixture properties, and special drug solubility data. The datab...

  20. Databases and data mining

    Science.gov (United States)

    Over the course of the past decade, the breadth of information that is made available through online resources for plant biology has increased astronomically, as have the interconnectedness among databases, online tools, and methods of data acquisition and analysis. For maize researchers, the numbe...

  1. From database to normbase

    NARCIS (Netherlands)

    Stamper, R.; Liu, K.; Kolkman, M.; Klarenberg, P.; Slooten, van F.; Ades, Y.; Slooten, van C.

    1991-01-01

    After the database concept, we are ready for the normbase concept. The object is to decouple organizational and technical knowledge that are now mixed inextricably together in the application programs we write today. The underlying principle is to find a way of specifying a social system as a system

  2. Dutch Vegetation Database (LVD)

    NARCIS (Netherlands)

    Hennekens, S.M.

    2011-01-01

    The Dutch Vegetation Database (LVD) hosts information on all plant communities in the Netherlands. This substantial archive consists of over 600.000 recent and historic vegetation descriptions. The data provide information on more than 85 years of vegetation recording in various habitats covering te

  3. KALIMER database development (database configuration and design methodology)

    International Nuclear Information System (INIS)

    The KALIMER Database is an advanced database for the integrated management of Liquid Metal Reactor Design Technology Development using web applications. The KALIMER design database consists of a Results Database, Inter-Office Communication (IOC), a 3D CAD database, a Team Cooperation system, and Reserved Documents. The Results Database holds research results from phase II of Liquid Metal Reactor Design Technology Development under the mid- and long-term nuclear R and D programme. IOC is a linkage control system between sub-projects to share and integrate the research results for KALIMER. The 3D CAD database is a schematic design overview for KALIMER. The Team Cooperation System informs team members of research cooperation and meetings. Finally, KALIMER Reserved Documents was developed to manage collected data and various documents accumulated since the project's accomplishment. This report describes the hardware and software features and the database design methodology for KALIMER

  4. On the road to a stronger public health workforce: visual tools to address complex challenges.

    Science.gov (United States)

    Drehobl, Patricia; Stover, Beth H; Koo, Denise

    2014-11-01

    The public health workforce is vital to protecting the health and safety of the public, yet for years, state and local governmental public health agencies have reported substantial workforce losses and other challenges to the workforce that threaten the public's health. These challenges are complex, often involve multiple influencing or related causal factors, and demand comprehensive solutions. However, proposed solutions often focus on selected factors and might be fragmented rather than comprehensive. This paper describes approaches to characterizing the situation more comprehensively and includes two visual tools: (1) a fishbone, or Ishikawa, diagram that depicts multiple factors affecting the public health workforce; and (2) a roadmap that displays key elements-goals and strategies-to strengthen the public health workforce, thus moving from the problems depicted in the fishbone toward solutions. The visual tools aid thinking about ways to strengthen the public health workforce through collective solutions and to help leverage resources and build on each other's work. The strategic roadmap is intended to serve as a dynamic tool for partnership, prioritization, and gap assessment. These tools reflect and support CDC's commitment to working with partners on the highest priorities for strengthening the workforce to improve the public's health.

  6. The relational database system of KM3NeT

    Science.gov (United States)

    Albert, Arnauld; Bozza, Cristiano

    2016-04-01

    The KM3NeT Collaboration is building a new generation of neutrino telescopes in the Mediterranean Sea. For these telescopes, a relational database has been designed and implemented for several purposes, such as the centralised management of accounts, the storage of all documentation about components, the status of the detector, and information about slow control and calibration data. It also contains information useful during the construction and data acquisition phases. Highlights of the database schema, storage and management are discussed, along with design choices that have an impact on performance. In most cases, the database is not accessed directly by applications but via a custom-designed web application server.

  7. Oracle TimesTen in-memory Database Integration

    OpenAIRE

    Žitný, Jakub; Potocký, Miroslav

    2014-01-01

    Project Specification The objective is to build an rpm/script/puppet module that will easily deploy TimesTen in-memory database on existing server/cluster. Create script configuring TimesTen in-memory database for usage with specific database/RAC and creating step-by-step document (Twiki+Snow KB) on how to get required data cached in a simple way. Ultimate outcome will be to have a new service to deploy TT caching easily on any puppetized DB server. Abstract TimesTen is in-memory...

  8. Enhancing navigation in biomedical databases by community voting and database-driven text classification

    Directory of Open Access Journals (Sweden)

    Guettler Daniel

    2009-10-01

    Full Text Available Abstract Background The breadth of biological databases and their information content continues to increase exponentially. Unfortunately, our ability to query such sources is still often suboptimal. Here, we introduce and apply community voting, database-driven text classification, and visual aids as a means to incorporate distributed expert knowledge, to automatically classify database entries and to efficiently retrieve them. Results Using a previously developed peptide database as an example, we compared several machine learning algorithms in their ability to classify abstracts of published literature results into categories relevant to peptide research, such as related or not related to cancer, angiogenesis, molecular imaging, etc. Ensembles of bagged decision trees met the requirements of our application best. No other algorithm consistently performed better in comparative testing. Moreover, we show that the algorithm produces meaningful class probability estimates, which can be used to visualize the confidence of automatic classification during the retrieval process. To allow viewing long lists of search results enriched by automatic classifications, we added a dynamic heat map to the web interface. We take advantage of community knowledge by enabling users to cast votes in Web 2.0 style in order to correct automated classification errors, which triggers reclassification of all entries. We used a novel framework in which the database "drives" the entire vote aggregation and reclassification process to increase speed while conserving computational resources and keeping the method scalable. In our experiments, we simulate community voting by adding various levels of noise to nearly perfectly labelled instances, and show that, under such conditions, classification can be improved significantly. Conclusion Using PepBank as a model database, we show how to build a classification-aided retrieval system that gathers training data from the
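    The bagging idea behind the abstract's classifier, with vote fractions serving as class probability estimates, can be sketched from scratch. This is a toy illustration, not the paper's actual system: decision stumps trained on bootstrap resamples of hypothetical keyword-count features, with the ensemble's vote fractions returned as probabilities.

    ```python
    import random
    from collections import Counter

    def train_stump(data):
        # data: list of (features, label); exhaustively pick the best
        # single-feature threshold split, scored by training accuracy.
        best = None
        for f in range(len(data[0][0])):
            for x, _ in data:
                t = x[f]
                left = [lab for feat, lab in data if feat[f] <= t]
                right = [lab for feat, lab in data if feat[f] > t]
                if not left or not right:
                    continue
                lmaj = Counter(left).most_common(1)[0][0]
                rmaj = Counter(right).most_common(1)[0][0]
                acc = (sum(l == lmaj for l in left)
                       + sum(r == rmaj for r in right)) / len(data)
                if best is None or acc > best[0]:
                    best = (acc, f, t, lmaj, rmaj)
        if best is None:  # degenerate sample: fall back to the majority label
            maj = Counter(lab for _, lab in data).most_common(1)[0][0]
            return lambda x: maj
        _, f, t, lmaj, rmaj = best
        return lambda x: lmaj if x[f] <= t else rmaj

    def bagged_ensemble(data, n_models=25, seed=0):
        rng = random.Random(seed)
        models = []
        for _ in range(n_models):
            sample = [rng.choice(data) for _ in data]  # bootstrap resample
            models.append(train_stump(sample))
        def predict_proba(x):
            # Vote fractions across the ensemble act as class probabilities,
            # usable to visualize classification confidence.
            votes = Counter(m(x) for m in models)
            return {lab: c / len(models) for lab, c in votes.items()}
        return predict_proba

    # Toy data: (keyword-count features, topic label) -- purely illustrative.
    train = [((3, 0), "cancer"), ((4, 1), "cancer"), ((5, 0), "cancer"),
             ((0, 3), "imaging"), ((1, 4), "imaging"), ((0, 5), "imaging")]
    proba = bagged_ensemble(train)
    print(proba((4, 0)))  # vote fractions, heavily favouring "cancer"
    ```

    A production system would use richer text features and full decision trees, but the confidence estimate the abstract describes is exactly this vote-fraction mechanism.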

  9. Chasm grows between rich and poor. Top hospital performers get stronger as weaker competitors stumble, new research indicates.

    Science.gov (United States)

    Moore, J D

    1999-06-01

    Although recent studies show hospital operating margins dropped 45% in the fourth quarter of 1998, compared with the year-ago period, a new analysis of industrywide financial ratios reveals that the "average" hospital may be less representative of the whole. Instead, a chasm is widening around the median, with strong hospitals getting stronger and weak hospitals showing further decline.

  10. Download - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Trypanosomes Database Download. First of all, please read the license of this database. The data names and data descriptions below refer to the downloadable data on this page; they might not correspond to the contents of the original database. Data: 1. README (README_e.html); 2. Protein (trypanosome.zip, 1.4 KB), via simple search and download, or via FTP.

  11. What is a lexicographical database?

    DEFF Research Database (Denmark)

    Bergenholtz, Henning; Skovgård Nielsen, Jesper

    2013-01-01

    50 years ago, no lexicographer used a database in the work process. Today, almost all dictionary projects incorporate databases. In our opinion, the optimal lexicographical database should be planned in cooperation between a lexicographer and a database specialist in each specific lexicographic project. Such cooperation will reach the highest level of success if the lexicographer has at least a basic knowledge of the topic presented in this paper: What is a database? This type of knowledge is also needed when the lexicographer describes an ongoing or a finished project. In this article, we provide the description of this type of cooperation, using the most important theoretical terms relevant in the planning of a database. It will be made clear that a lexicographical database is like any other database. The only difference is that an optimal lexicographical database is constructed to fulfil...

  12. Open geochemical database

    Science.gov (United States)

    Zhilin, Denis; Ilyin, Vladimir; Bashev, Anton

    2010-05-01

    We regard "geochemical data" as data on chemical parameters of the environment, linked with the geographical position of the corresponding point. The rapid development of the global positioning system (GPS) and measuring instruments allows fast collection of huge amounts of geochemical data. Presently these data are published in scientific journals in text format, which hampers searching for information about particular places and meta-analysis of data collected by different researchers. Part of the information is never published. To make the data available and easy to find, it seems reasonable to develop an open database of geochemical information, accessible via the Internet, and to link the data with maps or space images, for example from the GoogleEarth service. For this purpose an open geochemical database is being developed (http://maps.sch192.ru). Any user, after registration, can upload geochemical data (position, type of parameter and value of the parameter) and edit them. Every user (including unregistered users) can (a) extract the values of parameters fulfilling desired conditions and (b) see the points, linked to a GoogleEarth space image, colored according to the value of a selected parameter. He can then treat the extracted values any way he likes. The database contains the following data types: authors, points, seasons and parameters. An author is a person who publishes the data; every author can maintain his own profile. A point is characterized by its geographical position and the type of object (i.e. river, lake, etc.). Values of parameters are linked to a point, an author and the season in which they were obtained. A user can choose a parameter to place on the GoogleEarth space image and a scale for coloring the points according to the value of that parameter. Currently (December 2009) the database is under construction, but several functions (uploading data on pH and electrical conductivity and placing colored points onto the GoogleEarth space image) are
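    The entities the abstract names (authors, points, seasons, parameter values) map naturally onto a small relational schema. A minimal sketch using Python's sqlite3; all table and column names are illustrative assumptions, not the project's actual schema.

    ```python
    import sqlite3

    # Hypothetical minimal schema mirroring the entities named in the abstract:
    # authors, points (position + object type), seasons, and parameter values
    # linked to all three.
    con = sqlite3.connect(":memory:")
    con.executescript("""
    CREATE TABLE author (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE point  (id INTEGER PRIMARY KEY, lat REAL, lon REAL,
                         object_type TEXT);
    CREATE TABLE season (id INTEGER PRIMARY KEY, label TEXT);
    CREATE TABLE value  (id INTEGER PRIMARY KEY,
                         point_id  INTEGER REFERENCES point(id),
                         author_id INTEGER REFERENCES author(id),
                         season_id INTEGER REFERENCES season(id),
                         parameter TEXT, value REAL);
    """)
    con.execute("INSERT INTO author VALUES (1, 'example author')")
    con.execute("INSERT INTO point  VALUES (1, 55.75, 37.62, 'river')")
    con.execute("INSERT INTO season VALUES (1, 'summer 2009')")
    con.execute("INSERT INTO value  VALUES (1, 1, 1, 1, 'pH', 7.4)")

    # (a) extract values fulfilling a condition, joined with the position
    # needed to place colored points on a map image.
    rows = con.execute("""
        SELECT p.lat, p.lon, v.value
        FROM value v JOIN point p ON p.id = v.point_id
        WHERE v.parameter = 'pH' AND v.value BETWEEN 6.5 AND 8.5
    """).fetchall()
    print(rows)  # [(55.75, 37.62, 7.4)]
    ```

    Keeping values in a single table keyed by point, author and season is what lets both query modes the abstract describes (filtered extraction and per-parameter map coloring) be served by simple joins.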

  13. MammoGrid: a mammography database

    CERN Multimedia

    2002-01-01

    What would be the advantages if physicians around the world could gain access to a unique mammography database? The answer may come from MammoGrid, a three-year project under the Fifth Framework Programme of the EC. Led by CERN, MammoGrid involves the UK (the Universities of Oxford, Cambridge and the West of England, Bristol, plus the company Mirada Solutions of Oxford), and Italy (the Universities of Pisa and Sassari and the Hospitals in Udine and Torino). The aim of the project is, in light of emerging GRID technology, to develop a Europe-wide database of mammograms. The database will be used to investigate a set of important healthcare applications as well as the potential of the GRID to enable healthcare professionals throughout the EU to work together effectively. The contributions of the partners include building the GRID-database infrastructure, developing image processing and Computer Aided Detection techniques, and making the clinical evaluation. The first project meeting took place at CERN in Sept...

  14. CASE STORAGE BASED ON RELATIONAL DATABASE

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    This paper focuses on the integration of a case base with a relational database management system (RDBMS). The organizational and commercial impact will be far greater if the case-based reasoning (CBR) system is integrated with mainstream information systems, exemplified by the RDBMS. The scalability, security and robustness provided by a commercial RDBMS help the CBR system manage the case base. The virtual table in a relational database (RDB) is important for CBR systems to implement flexible case templates. The paper discusses how to implement a flexible and succinct case template, and proposes a mapping model between the case template and the RDB. The key idea is to build the case as a virtual view of the underlying data.
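    The "case as a virtual view" idea can be sketched with a generic attribute-value table pivoted into a per-template SQL view. This is a hypothetical illustration using sqlite3; the names and the pivot layout are assumptions, not the paper's actual mapping model.

    ```python
    import sqlite3

    # Case attributes are stored generically as (case_id, slot, value) rows;
    # a case template is then exposed as a virtual table (SQL view) that
    # pivots those rows into one column per slot.
    con = sqlite3.connect(":memory:")
    con.executescript("""
    CREATE TABLE case_attr (case_id INTEGER, slot TEXT, value TEXT);
    -- One view per case template: the case is a virtual view of the data.
    CREATE VIEW diagnosis_case AS
    SELECT case_id,
           MAX(CASE WHEN slot = 'symptom'  THEN value END) AS symptom,
           MAX(CASE WHEN slot = 'solution' THEN value END) AS solution
    FROM case_attr GROUP BY case_id;
    """)
    con.executemany("INSERT INTO case_attr VALUES (?, ?, ?)",
                    [(1, 'symptom', 'no power'), (1, 'solution', 'replace fuse')])
    print(con.execute("SELECT * FROM diagnosis_case").fetchall())
    # [(1, 'no power', 'replace fuse')]
    ```

    Changing a case template then means redefining a view rather than migrating tables, which is one plausible reading of the flexibility the abstract claims.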

  15. ARTI Refrigerant Database

    Energy Technology Data Exchange (ETDEWEB)

    Calm, J.M.

    1992-11-09

    The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air-conditioning and refrigeration equipment. The database identifies sources of specific information on R-32, R-123, R-124, R-125, R-134, R-134a, R-141b, R-142b, R-143a, R-152a, R-245ca, R-290 (propane), R-717 (ammonia), ethers, and others, as well as azeotropic and zeotropic blends of these fluids. It addresses lubricants including alkylbenzene, polyalkylene glycol, ester, and other synthetics as well as mineral oils. It also references documents on compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits. A computerized version is available that includes retrieval software.

  16. Geologic Field Database

    Directory of Open Access Journals (Sweden)

    Katarina Hribernik

    2002-12-01

    Full Text Available The purpose of the paper is to present the field data relational database, which was compiled from data gathered during thirty years of fieldwork on the Basic Geologic Map of Slovenia at scale 1:100,000. The database was created using MS Access software. The MS Access environment ensures its stability and effective operation despite changing, searching, and updating the data. It also enables faster, easier, user-friendly access to the field data. Last but not least, in the long term, with the data transferred into the GIS environment, it will provide the basis for a sound geologic information system that will satisfy a broad spectrum of geologists' needs.

  17. Multilevel security for relational databases

    CERN Document Server

    Faragallah, Osama S; El-Samie, Fathi E Abd

    2014-01-01

    Contents: Concepts of Database Security (Database Concepts; Relational Database Security Concepts; Access Control in Relational Databases: Discretionary Access Control, Mandatory Access Control, Role-Based Access Control); Work Objectives; Book Organization; Basic Concepts of Multilevel Database Security (Introduction; Multilevel Database Relations; Polyinstantiation: Invisible Polyinstantiation, Visible Polyinstantiation, Types of Polyinstantiation; Architectural Considerations)

  18. An XCT image database system

    International Nuclear Information System (INIS)

    In this paper, the expansion of an X-ray CT (XCT) examination history database into an XCT image database is discussed. The XCT examination history database has been constructed and used for daily examination and investigation in our hospital. This database consists of alphanumeric information (locations, diagnoses, and so on) for more than 15,000 cases, and to some of these we add tree-structured image data, which offers flexibility for various types of image data. The database system is written in the MUMPS database manipulation language. (author)

  19. MMI Face Database

    OpenAIRE

    Maat, L.M.; Sondak, R.C.; Valstar, M.F.; Pantic, M.; Gaia, P.

    2005-01-01

    The automatic recognition of human facial expressions is an interesting research area in AI with a growing number of projects and researchers. In spite of repeated references to the need for a reference set of images that could provide a basis for benchmarking various techniques in automatic facial expression analysis, a readily accessible and complete enough database of face images does not exist yet. This lack represented our main incentive to develop a web-based, easily accessible, and eas...

  20. ARTI refrigerant database

    Energy Technology Data Exchange (ETDEWEB)

    Calm, J.M.

    1996-07-01

    The Refrigerant Database is an information system on alternative refrigerants, associated lubricants, and their use in air conditioning and refrigeration. It consolidates and facilitates access to property, compatibility, environmental, safety, application and other information. It provides corresponding information on older refrigerants, to assist manufacturers and those using alternative refrigerants, to make comparisons and determine differences. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern.

  1. ARTI refrigerant database

    Energy Technology Data Exchange (ETDEWEB)

    Calm, J.M. [Calm (James M.), Great Falls, VA (United States)

    1999-01-01

    The Refrigerant Database is an information system on alternative refrigerants, associated lubricants, and their use in air conditioning and refrigeration. It consolidates and facilitates access to property, compatibility, environmental, safety, application and other information. It provides corresponding information on older refrigerants, to assist manufacturers and those using alternative refrigerants, to make comparisons and determine differences. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern.

  2. ARTI refrigerant database

    Energy Technology Data Exchange (ETDEWEB)

    Calm, J.M.

    1996-11-15

    The Refrigerant Database is an information system on alternative refrigerants, associated lubricants, and their use in air conditioning and refrigeration. It consolidates and facilitates access to property, compatibility, environmental, safety, application and other information. It provides corresponding information on older refrigerants, to assist manufacturers and those using alternative refrigerants, to make comparisons and determine differences. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern.

  3. Real Time Baseball Database

    Science.gov (United States)

    Fukue, Yasuhiro

    The author describes the system outline, features, and operation of the "Nikkan Sports Realtime Baseball Database", which was developed and operated by Nikkan Sports Shimbun, K.K. The system enables numerical data from professional baseball games to be input as the games proceed, with updates executed in real time, just in time. Besides serving as a support tool for preparing newspapers, it is also available to broadcast media and to general users through NTT Dial Q2 and other services.

  4. Teradata Database System Optimization

    OpenAIRE

    Krejčík, Jan

    2008-01-01

    The Teradata database system is specially designed for the data warehousing environment. This thesis explores the use of Teradata in this environment and describes its characteristics and potential areas for optimization. The theoretical part is intended to be user study material; it shows the main principles of Teradata system operation and describes the factors that significantly affect system performance. The following sections are based on previously acquired information which is used for analysis and ...

  5. Modeling Digital Video Database

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    The main purpose of the model is to present how the Unified Modeling Language (UML) can be used for modeling a digital video database system (VDBS). It demonstrates the modeling process that can be followed during the analysis phase of complex applications. In order to guarantee the continuity mapping of the models, the authors propose some suggestions for transforming the use case diagrams into an object diagram, which is one of the main diagrams for the next development phases.

  6. Building and Maintaining Halls of Fame Over a Database

    OpenAIRE

    Alvanaki, F.; Michel, S; Stupar, A.

    2012-01-01

    Halls of Fame are fascinating constructs. They represent the elite of an often very large amount of entities---persons, companies, products, countries etc. Beyond their practical use as static rankings, changes to them are particularly interesting---for decision making processes, as input to common media or novel narrative science applications, or simply consumed by users. In this work, we aim at detecting events that can be characterized by changes to a Hall of Fame ranking in an automated w...

  7. Long-term perspective underscores need for stronger near-term policies on climate change

    Science.gov (United States)

    Marcott, S. A.; Shakun, J. D.; Clark, P. U.; Mix, A. C.; Pierrehumbert, R.; Goldner, A. P.

    2014-12-01

    Despite scientific consensus that substantial anthropogenic climate change will occur during the 21st century and beyond, the social, economic and political will to address this global challenge remains mired in uncertainty and indecisiveness. One contributor to this situation may be that scientific findings are often couched in technical detail focusing on near-term changes and uncertainties and often lack a relatable long-term context. We argue that viewing near-term changes from a long-term perspective provides a clear demonstration that policy decisions made in the next few decades will affect the Earth's climate, and with it our socio-economic well-being, for the next ten millennia or more. To provide a broader perspective, we present a graphical representation of Earth's long-term climate history that clearly identifies the connection between near-term policy options and the geological scale of future climate change. This long view is based on a combination of recently developed global proxy temperature reconstructions of the last 20,000 years and model projections of surface temperature for the next 10,000 years. Our synthesis places the 20th and 21st centuries, when most emissions are likely to occur, into the context of the last twenty millennia over which time the last Ice Age ended and human civilization developed, and the next ten millennia, over which time the projected impacts will occur. This long-term perspective raises important questions about the most effective adaptation and mitigation policies. For example, although some consider it economically viable to raise seawalls and dikes in response to 21st century sea level change, such a strategy does not account for the need for continuously building much higher defenses in the 22nd century and beyond. Likewise, avoiding tipping points in the climate system in the short term does not necessarily imply that such thresholds will not still be crossed in the more distant future as slower components

  8. Building Inclusion

    NARCIS (Netherlands)

    Jeanet Kullberg; Isik Kulu-Glasgow

    2009-01-01

    The social inclusion of immigrants and ethnic minorities is a central issue in many European countries. Governments face challenges in ensuring housing for immigrants, delivering public services, promoting neighbourhood coexistence and addressing residential segregation. The Building Inclusion proje

  9. Building Languages

    Science.gov (United States)

    ... family's native language) is taught as the child's second language through reading, writing, speech, and use of residual ... that parents can use to help their child learn language. There are many types of building blocks, and ...

  10. The Cambridge Structural Database.

    Science.gov (United States)

    Groom, Colin R; Bruno, Ian J; Lightfoot, Matthew P; Ward, Suzanna C

    2016-04-01

    The Cambridge Structural Database (CSD) contains a complete record of all published organic and metal-organic small-molecule crystal structures. The database has been in operation for over 50 years and continues to be the primary means of sharing structural chemistry data and knowledge across disciplines. As well as structures that are made public to support scientific articles, it includes many structures published directly as CSD Communications. All structures are processed both computationally and by expert structural chemistry editors prior to entering the database. A key component of this processing is the reliable association of the chemical identity of the structure studied with the experimental data. This important step helps ensure that data is widely discoverable and readily reusable. Content is further enriched through selective inclusion of additional experimental data. Entries are available to anyone through free CSD community web services. Linking services developed and maintained by the CCDC, combined with the use of standard identifiers, facilitate discovery from other resources. Data can also be accessed through CCDC and third party software applications and through an application programming interface.

  11. Human cancer databases (review).

    Science.gov (United States)

    Pavlopoulou, Athanasia; Spandidos, Demetrios A; Michalopoulos, Ioannis

    2015-01-01

    Cancer is one of the four major non‑communicable diseases (NCD), responsible for ~14.6% of all human deaths. Currently, there are >100 different known types of cancer and >500 genes involved in cancer. Ongoing research efforts have been focused on cancer etiology and therapy. As a result, there is an exponential growth of cancer‑associated data from diverse resources, such as scientific publications, genome‑wide association studies, gene expression experiments, gene‑gene or protein‑protein interaction data, enzymatic assays, epigenomics, immunomics and cytogenetics, stored in relevant repositories. These data are complex and heterogeneous, ranging from unprocessed, unstructured data in the form of raw sequences and polymorphisms to well‑annotated, structured data. Consequently, the storage, mining, retrieval and analysis of these data in an efficient and meaningful manner pose a major challenge to biomedical investigators. In the current review, we present the central, publicly accessible databases that contain data pertinent to cancer, the resources available for delivering and analyzing information from these databases, as well as databases dedicated to specific types of cancer. Examples for this wealth of cancer‑related information and bioinformatic tools have also been provided. PMID:25369839

  12. ARTI Refrigerant Database

    Energy Technology Data Exchange (ETDEWEB)

    Cain, J.M. [Calm (James M.), Great Falls, VA (United States)

    1993-04-30

    The Refrigerant Database consolidates and facilitates access to information to assist industry in developing equipment using alternative refrigerants. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern. The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air-conditioning and refrigeration equipment. The complete documents are not included. The database identifies sources of specific information on R-32, R-123, R-124, R-125, R-134, R-134a, R-141b, R-142b, R-143a, R-152a, R-245ca, R-290 (propane), R-717 (ammonia), ethers, and others as well as azeotropic and zeotropic blends of these fluids. It addresses lubricants including alkylbenzene, polyalkylene glycol, ester, and other synthetics as well as mineral oils. It also references documents addressing compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits. Incomplete citations or abstracts are provided for some documents to accelerate availability of the information and will be completed or replaced in future updates.

  13. ARTI refrigerant database

    Energy Technology Data Exchange (ETDEWEB)

    Calm, J.M. [Calm (James M.), Great Falls, VA (United States)

    1998-08-01

    The Refrigerant Database is an information system on alternative refrigerants, associated lubricants, and their use in air conditioning and refrigeration. It consolidates and facilitates access to property, compatibility, environmental, safety, application and other information. It provides corresponding information on older refrigerants, to assist manufacturers and those using alternative refrigerants in making comparisons and determining differences. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern. The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air-conditioning and refrigeration equipment. The complete documents are not included, though some may be added at a later date. The database identifies sources of specific information on many refrigerants including propane, ammonia, water, carbon dioxide, propylene, ethers, and others as well as azeotropic and zeotropic blends of these fluids. It addresses lubricants including alkylbenzene, polyalkylene glycol, polyolester, and other synthetics as well as mineral oils. It also references documents addressing compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits. Incomplete citations or abstracts are provided for some documents. They are included to accelerate availability of the information and will be completed or replaced in future updates.

  14. ARTI refrigerant database

    Energy Technology Data Exchange (ETDEWEB)

    Calm, J.M.

    1997-02-01

    The Refrigerant Database is an information system on alternative refrigerants, associated lubricants, and their use in air conditioning and refrigeration. It consolidates and facilitates access to property, compatibility, environmental, safety, application and other information. It provides corresponding information on older refrigerants, to assist manufacturers and those using alternative refrigerants in making comparisons and determining differences. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern. The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air-conditioning and refrigeration equipment. The complete documents are not included, though some may be added at a later date. The database identifies sources of specific information on various refrigerants. It addresses lubricants including alkylbenzene, polyalkylene glycol, polyolester, and other synthetics as well as mineral oils. It also references documents addressing compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits. Incomplete citations or abstracts are provided for some documents. They are included to accelerate availability of the information and will be completed or replaced in future updates.

  15. LHCb Distributed Conditions Database

    CERN Document Server

    Clemencic, Marco

    2007-01-01

    The LHCb Conditions Database project provides the necessary tools to handle non-event time-varying data. The main users of conditions are reconstruction and analysis processes, which are running on the Grid. To allow efficient access to the data, we need to use a synchronized replica of the content of the database located at the same site as the event data file, i.e. the LHCb Tier1. The replica to be accessed is selected from information stored on LFC (LCG File Catalog) and managed with the interface provided by the LCG-developed library CORAL. The plan to limit the submission of jobs to those sites where the required conditions are available will also be presented. LHCb applications have been using the Conditions Database framework on a production basis since March 2007. We have been able to collect statistics on the performance and effectiveness of both the LCG library COOL (the library providing conditions handling functionalities) and the distribution framework itself. Stress tests on the CNAF hosted replica o...

  16. The Cambridge Structural Database.

    Science.gov (United States)

    Groom, Colin R; Bruno, Ian J; Lightfoot, Matthew P; Ward, Suzanna C

    2016-04-01

    The Cambridge Structural Database (CSD) contains a complete record of all published organic and metal-organic small-molecule crystal structures. The database has been in operation for over 50 years and continues to be the primary means of sharing structural chemistry data and knowledge across disciplines. As well as structures that are made public to support scientific articles, it includes many structures published directly as CSD Communications. All structures are processed both computationally and by expert structural chemistry editors prior to entering the database. A key component of this processing is the reliable association of the chemical identity of the structure studied with the experimental data. This important step helps ensure that data is widely discoverable and readily reusable. Content is further enriched through selective inclusion of additional experimental data. Entries are available to anyone through free CSD community web services. Linking services developed and maintained by the CCDC, combined with the use of standard identifiers, facilitate discovery from other resources. Data can also be accessed through CCDC and third party software applications and through an application programming interface. PMID:27048719

  17. Database Description - RMG | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database maintenance site: National Institute of Agrobiological Sciences. Database classification: Nucleotide Sequence Databases. Background and funding: This database was constructed using the research results of the National Institute of Agrobiological Sciences and the University of Tokyo, under a Pioneer Research Project (2001-2003) of the National Institute of Agrobiological Sciences and a Grant-in-Aid for Scientific Research (2001-2003) of the University of Tokyo.

  18. Physical database design for an object-oriented database system

    OpenAIRE

    Scholl, Marc H.

    1994-01-01

    Object-oriented database systems typically offer a variety of structuring capabilities to model complex objects. This flexibility, together with type (or class) hierarchies and computed "attributes" (methods), poses a high demand on the physical design of object-oriented databases. Similar to traditional databases, it is hardly ever true that the conceptual structure of the database is also a good, that is, efficient, internal one. Rather, data representing the conceptual objects may be stru...

  19. The YH database: the first Asian diploid genome database

    DEFF Research Database (Denmark)

    Li, Guoqing; Ma, Lijia; Song, Chao;

    2009-01-01

    genome consensus. The YH database is currently one of the three personal genome databases, organizing the original data and analysis results in a user-friendly interface, in an endeavor to achieve the fundamental goals of establishing personal medicine. The database is available at http://yh.genomics.org.cn....

  20. A Case for Database Filesystems

    Energy Technology Data Exchange (ETDEWEB)

    Adams, P A; Hax, J C

    2009-05-13

    Data intensive science is offering new challenges and opportunities for Information Technology and traditional relational databases in particular. Database filesystems offer the potential to store Level Zero data and analyze Level 1 and Level 3 data within the same database system [2]. Scientific data is typically composed of both unstructured files and scalar data. Oracle SecureFiles is a new database filesystem feature in Oracle Database 11g that is specifically engineered to deliver high performance and scalability for storing unstructured or file data inside the Oracle database. SecureFiles presents the best of both the filesystem and the database worlds for unstructured content. Data stored inside SecureFiles can be queried or written at performance levels comparable to those of traditional filesystems while retaining the advantages of the Oracle database.
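
    The core idea of a database filesystem, storing file bytes as BLOBs next to scalar metadata so both can be queried in one system, can be sketched in a much simpler setting than Oracle SecureFiles. The following sketch uses SQLite; the table layout and function names are invented for illustration and do not reflect the SecureFiles implementation.

```python
import sqlite3

# Minimal sketch of a "database filesystem": unstructured file bytes are
# stored as BLOBs next to scalar metadata, so both can be queried together.
# (Illustrative only -- SecureFiles adds caching, compression, and
# filesystem-level performance that this sketch does not attempt.)

def make_store():
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE files (path TEXT PRIMARY KEY, size INTEGER, data BLOB)"
    )
    return conn

def put_file(conn, path, data):
    conn.execute(
        "INSERT OR REPLACE INTO files VALUES (?, ?, ?)", (path, len(data), data)
    )

def get_file(conn, path):
    row = conn.execute("SELECT data FROM files WHERE path = ?", (path,)).fetchone()
    return row[0] if row else None

conn = make_store()
put_file(conn, "/raw/run001.dat", b"\x00" * 1024)
put_file(conn, "/raw/run002.dat", b"\x01" * 2048)

# Metadata and content live in one system: query file sizes relationally...
big = [p for (p,) in conn.execute("SELECT path FROM files WHERE size > 1500")]
# ...and read content back as a filesystem would.
payload = get_file(conn, "/raw/run001.dat")
```

    The benefit mirrored here is the one the abstract claims: a single query surface covers both the relational metadata and the unstructured content.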

  1. Categorical database generalization in GIS

    NARCIS (Netherlands)

    Liu, Y.

    2002-01-01

    Key words: Categorical database, categorical database generalization, Formal data structure, constraints, transformation unit, classification hierarchy, aggregation hierarchy, semantic similarity, data model, Delaunay triangulation

  2. Shark Mark Recapture Database (MRDBS)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Shark Mark Recapture Database is a Cooperative Research Program database system used to keep multispecies mark-recapture information in a common format for...

  3. Mobile Source Observation Database (MSOD)

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Mobile Source Observation Database (MSOD) is a relational database being developed by the Assessment and Standards Division (ASD) of the US Environmental...

  4. Techniques for multiple database integration

    OpenAIRE

    Whitaker, Barron D

    1997-01-01

    Approved for public release; distribution is unlimited There are several graphic client/server application development tools which can be used to easily develop powerful relational database applications. However, they do not provide a direct means of performing queries which require relational joins across multiple database boundaries. This thesis studies ways to access multiple databases. Specifically, it examines how a 'cross-database join' can be performed. A case study of techniques us...

  5. Conceptual considerations for CBM databases

    International Nuclear Information System (INIS)

    We consider a concept of databases for the CBM experiment. For this purpose, an analysis of the databases for large experiments at the LHC at CERN has been performed. Special features of the various DBMS utilized in physics experiments, including relational and object-oriented DBMS as the most applicable ones for the tasks of these experiments, were analyzed. A set of databases for the CBM experiment, DBMS for their development, as well as use cases for the considered databases are suggested.

  6. DRAM BASED PARAMETER DATABASE OPTIMIZATION

    OpenAIRE

    Marcinkevicius, Tadas

    2012-01-01

    This thesis suggests an improved parameter database implementation for one of Ericsson products. The parameter database is used during the initialization of the system as well as during the later operation. The database size is constantly growing because the parameter database is intended to be used with different hardware configurations. When a new technology platform is released, multiple revisions with additional features and functionalities are later created, resulting in introduction of ...

  7. Hydrogen Leak Detection Sensor Database

    Science.gov (United States)

    Baker, Barton D.

    2010-01-01

    This slide presentation reviews the characteristics of the Hydrogen Sensor database. The database is the result of NASA's continuing interest in and improvement of its ability to detect and assess gas leaks in space applications. The database specifics and a snapshot of an entry in the database are reviewed. Attempts were made to determine the applicability of each of the 65 sensors for ground and/or vehicle use.

  8. A database of major breakwaters around the world

    NARCIS (Netherlands)

    Allsop, N.W.H.; Cork, R.S.; Verhagen, H.J.

    2009-01-01

    This paper introduces a co-operative project between HR Wallingford UK (HRW) and Delft University of Technology, Netherlands, (TUD) to develop, populate, and then to apply a database on all major breakwaters around the world. It builds on, and revives, similar initiatives that originate in the late

  9. The Database Query Support Processor (QSP)

    Science.gov (United States)

    1993-01-01

    The number and diversity of databases available to users continues to increase dramatically. Currently, the trend is towards decentralized, client/server architectures that (on the surface) are less expensive to acquire, operate, and maintain than information architectures based on centralized, monolithic mainframes. The database query support processor (QSP) effort evaluates the performance of a network-level, heterogeneous database access capability. Air Force Materiel Command's Rome Laboratory has developed an approach to seamless access to heterogeneous databases, based on ANSI standard X3.138-1988, 'The Information Resource Dictionary System (IRDS)', and on extensions to data dictionary technology. To successfully query a decentralized information system, users must know what data are available from which source, or have the knowledge and system privileges necessary to find out this information. Privacy and security considerations prohibit free and open access to every information system in every network. Even in completely open systems, the time required to locate relevant data (in systems of any appreciable size) would be better spent analyzing the data, assuming the original question was not forgotten. Extensions to data dictionary technology have the potential to more fully automate the search for and retrieval of relevant data in a decentralized environment. Substantial amounts of time and money could be saved by not having to teach users what data resides in which systems and how to access each of those systems. Information describing data and how to get it could be removed from the application and placed in a dedicated repository where it belongs. The result is simplified applications that are less brittle and less expensive to build and maintain. Software technology providing the required functionality is off the shelf. The key difficulty is in defining the metadata required to support the process. The database query support processor effort will provide
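
    The data-dictionary approach described above, a central repository recording which system holds which data element so users need not know the sources, can be sketched with a toy registry. All class, system, and element names below are invented for illustration; a real IRDS-style dictionary holds far richer metadata (access methods, privileges, formats).

```python
# Toy sketch of a data-dictionary repository for decentralized query routing:
# a registry maps data elements to the source systems that can serve them,
# so a query planner (not the user) decides where to look.

class DataDictionary:
    def __init__(self):
        self._elements = {}  # element name -> set of source system names

    def register(self, system, elements):
        """A source system advertises the data elements it holds."""
        for name in elements:
            self._elements.setdefault(name, set()).add(system)

    def locate(self, element):
        """Return (sorted) the systems that can serve a given data element."""
        return sorted(self._elements.get(element, set()))

registry = DataDictionary()
registry.register("inventory_db", ["part_number", "stock_level"])
registry.register("finance_db", ["part_number", "unit_cost"])

# A cross-database query on part_number can now be routed automatically:
sources = registry.locate("part_number")
```

    The point the abstract makes survives even in this sketch: the knowledge of "which data lives where" moves out of each application and into one maintained repository.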

  10. On Simplifying Features in OpenStreetMap database

    Science.gov (United States)

    Qian, Xinlin; Tao, Kunwang; Wang, Liang

    2015-04-01

    Currently the visualization of OpenStreetMap data uses a tile server which stores map tiles that have been rendered from vector data in advance. However, tiled maps lack functionality such as data editing and customized styling. To enable such advanced functionality, client-side processing and rendering of geospatial data is needed. Considering the voluminous size of the OpenStreetMap data, simply sending the results of region queries on the OSM database to the client is prohibitive. To make the OSM data retrieved from the database suitable for the client to receive and render, it must be filtered and simplified at the server side to limit its volume. We propose a database extension for the OSM database that makes it possible to simplify geospatial objects such as ways and relations during data queries. Several auxiliary tables and PL/pgSQL functions are presented so that geospatial features can be simplified by omitting unimportant vertices. There are five components in the database extension: vertex weight computation by a polyline and polygon simplification algorithm; vertex weight storage in auxiliary tables; filtering and selection of vertices using a specific threshold value during spatial queries; assembly of simplified geospatial objects from the filtered vertices; and vertex weight updating after geospatial objects are edited. The database extension is implemented on an OSM APIDB using PL/pgSQL. The database contains a subset of the OSM database: the experimental database contains geographic data of the United Kingdom, which is about 100 million vertices and roughly occupies 100 GB of disk. JOSM is used to retrieve the data from the database using a revised data-access API and render the geospatial objects in real time. When serving simplified data to the client, the database allows the user to set a bound on the error of simplification or a bound on the response time for each data query. Experimental results show the effectiveness and efficiency of the proposed methods in building a
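
    The weight-then-filter scheme in the abstract can be sketched as follows. The abstract does not name its simplification algorithm, so this sketch assumes a Visvalingam-Whyatt-style weighting (each interior vertex weighted by the area of the triangle it forms with its neighbours) in a single pass; the real algorithm and all names here are assumptions.

```python
# Sketch of weight-based polyline simplification: weight each interior
# vertex once, store the weights, then filter by threshold at query time.
# One-pass Visvalingam-Whyatt-style areas are used as a plausible weighting;
# the full algorithm recomputes areas iteratively after each removal.

def triangle_area(a, b, c):
    # Twice the signed area via the cross product, halved and made positive.
    return abs((b[0]-a[0])*(c[1]-a[1]) - (c[0]-a[0])*(b[1]-a[1])) / 2.0

def vertex_weights(points):
    # Endpoints get infinite weight so they always survive filtering.
    w = [float("inf")] * len(points)
    for i in range(1, len(points) - 1):
        w[i] = triangle_area(points[i-1], points[i], points[i+1])
    return w

def simplify(points, threshold):
    # Query-time step: keep only vertices whose stored weight is "important".
    w = vertex_weights(points)
    return [p for p, wi in zip(points, w) if wi >= threshold]

# A way with one nearly-collinear vertex at (1, 0.05):
way = [(0, 0), (1, 0.05), (2, 0), (3, 1), (4, 0)]
simplified = simplify(way, threshold=0.5)
```

    Raising the threshold trades geometric error for smaller payloads, which is exactly the error-bound knob the abstract exposes to clients.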

  11. Caching in Multidimensional Databases

    OpenAIRE

    Szépkúti, István

    2011-01-01

    One utilisation of multidimensional databases is the field of On-line Analytical Processing (OLAP). The applications in this area are designed to make the analysis of shared multidimensional information fast [9]. On one hand, speed can be achieved by specially devised data structures and algorithms. On the other hand, the analytical process is cyclic. In other words, the user of the OLAP application runs his or her queries one after the other. The output of the last query may be there (at lea...

  12. Rett networked database

    DEFF Research Database (Denmark)

    Grillo, Elisa; Villard, Laurent; Clarke, Angus;

    2012-01-01

    Rett syndrome (RTT) is a neurodevelopmental disorder with one principal phenotype and several distinct, atypical variants (Zappella, early seizure onset and congenital variants). Mutations in MECP2 are found in most cases of classic RTT but at least two additional genes, CDKL5 and FOXG1, can...... underlie some (usually variant) cases. There is only limited correlation between genotype and phenotype. The Rett Networked Database (http://www.rettdatabasenetwork.org/) has been established to share clinical and genetic information. Through an "adaptor" process of data harmonization, a set of 293...

  13. Usability in Scientific Databases

    Directory of Open Access Journals (Sweden)

    Ana-Maria Suduc

    2012-07-01

    Full Text Available Usability, most often defined as the ease of use and acceptability of a system, affects the users' performance and their job satisfaction when working with a machine. Therefore, usability is a very important aspect which must be considered in the process of a system's development. The paper presents numerical data on the history of scientific research into the usability of information systems, as reflected in the information provided by three important scientific databases, Science Direct, ACM Digital Library and IEEE Xplore Digital Library, for different queries related to this field.

  14. Dansk kolorektal Cancer Database

    DEFF Research Database (Denmark)

    Harling, Henrik; Nickelsen, Thomas

    2005-01-01

    The Danish Colorectal Cancer Database was established in 1994 with the purpose of monitoring whether diagnostic and surgical principles specified in the evidence-based national guidelines of good clinical practice were followed. Twelve clinical indicators have been listed by the Danish Colorectal...... Cancer Group, and the performance of each hospital surgical department with respect to these indicators is reported annually. In addition, the register contains a large collection of data that provide valuable information on the influence of comorbidity and lifestyle factors on disease outcome...

  15. Databases for Data Mining

    OpenAIRE

    LANGOF, LADO

    2015-01-01

    This work is about looking for synergies between data mining tools and database management systems (DBMS). Imagine a situation where we need to solve an analytical problem using data that are too large to be processed solely inside the main physical memory and at the same time too small to justify putting a data warehouse or distributed analytical system in place. The target area is therefore a single personal computer that is used to solve data mining problems. We are looking for tools that allow us to...

  16. EMU Lessons Learned Database

    Science.gov (United States)

    Matthews, Kevin M., Jr.; Crocker, Lori; Cupples, J. Scott

    2011-01-01

    As manned space exploration takes on the task of traveling beyond low Earth orbit, many problems arise that must be solved in order to make the journey possible. One major task is protecting humans from the harsh space environment. The current method of protecting astronauts during Extravehicular Activity (EVA) is through use of the specially designed Extravehicular Mobility Unit (EMU). As more rigorous EVA conditions need to be endured at new destinations, the suit will need to be tailored and improved in order to accommodate the astronaut. The objective behind the EMU Lessons Learned Database (LLD) is to create a tool which will assist in the development of next-generation EMUs, along with maintenance and improvement of the current EMU, by compiling data from Failure Investigation and Analysis Reports (FIARs) which have information on past suit failures. FIARs use a system of codes that give more information on the aspects of the failure, but a reader unfamiliar with the EMU will be unable to decipher the information. A goal of the EMU LLD is to not only compile the information, but to present it in a user-friendly, organized, searchable database accessible at all levels of familiarity with the EMU, newcomers and veterans alike. The EMU LLD originally started as an Excel database, which allowed easy navigation and analysis of the data through pivot charts. Creating an entry requires access to the Problem Reporting And Corrective Action database (PRACA), which contains the original FIAR data for all hardware. FIAR data are then transferred to, defined, and formatted in the LLD. Work is being done to create a web-based version of the LLD in order to increase accessibility to all of Johnson Space Center (JSC), which includes converting entries from Excel to the HTML format.
FIARs related to the EMU have been completed in the Excel version, and now focus has shifted to expanding FIAR data in the LLD to include EVA tools and support hardware such as

  17. Social Capital Database

    DEFF Research Database (Denmark)

    Paldam, Martin; Svendsen, Gert Tinggaard

    2005-01-01

    This report has two purposes: The first purpose is to present our 4-page questionnaire, which measures social capital. It is close to the main definitions of social capital and contains the most successful measures from the literature. Also it is easy to apply, as discussed. The second purpose...... is to present the social capital database we have collected for 21 countries using the questionnaire. We do this by comparing the level of social capital in the countries covered. That is, the report compares the marginals from the 21 surveys....

  18. Palaeo sea-level and ice-sheet databases: problems, strategies and perspectives

    Directory of Open Access Journals (Sweden)

    A. Düsterhus

    2015-06-01

    Full Text Available Sea-level and ice-sheet databases are essential tools for evaluating palaeoclimatic changes. However, database creation poses considerable challenges and problems related to the composition and needs of the scientific communities creating the raw data, the compilation of the database, and finally its use. There are also issues with data standardisation and database infrastructure, which should make the database easy to understand and use, with different layers of complexity. Other challenges are correctly assigning credit to original authors, and creating databases that are centralised and maintained in long-term digital archives. Here, we build on the experience of the PALeo constraints on SEA level rise (PALSEA) community by outlining strategies for designing a self-consistent and standardised database of changes in sea level and ice sheets, identifying key points that need attention when undertaking the task of database creation.

  19. Databases in Indian biology: The state of the art and prospects

    Digital Repository Service at National Institute of Oceanography (India)

    Chavan, V.S.; Chandramohan, D.

    the Indian biology and biotechnology databases and their relation to international databases on the subject. It highlights their limitations and sheds light on their potential for subject experts and information managers in the country to build...

  20. Competence Building

    DEFF Research Database (Denmark)

    Borrás, Susana; Edquist, Charles

    The main question that guides this paper is how governments are focusing (and must focus) on competence building (education and training) when designing and implementing innovation policies. With this approach, the paper aims at filling the gap between the existing literature on competences...... on the one hand, and the real world of innovation policy-making on the other, typically not speaking to each other. With this purpose in mind, this paper discusses the role of competences and competence-building in the innovation process from a perspective of innovation systems; it examines how governments...... and public agencies in different countries and different times have actually approached the issue of building, maintaining and using competences in their innovation systems; it examines what are the critical and most important issues at stake from the point of view of innovation policy, looking particularly...

  1. Building Procurement

    DEFF Research Database (Denmark)

    Andersson, Niclas

    2007-01-01

    ‘The procurement of construction work is complex, and a successful outcome frequently elusive’. With this opening phrase of the book, the authors take on the challenging job of explaining the complexity of building procurement. Even though building procurement systems are, and will remain, complex...... despite this excellent book, the knowledge, expertise, well-articulated argument and collection of recent research efforts that are provided by the three authors will help to make project success less elusive. The book constitutes a thorough and comprehensive investigation of building procurement, which...... evolves from a simple establishment of a contractual relationship to a central and strategic part of construction. The authors relate to cultural, ethical and social and behavioural sciences as the fundamental basis for analysis and understanding of the complexity and dynamics of the procurement system...

  2. Database and Expert Systems Applications

    DEFF Research Database (Denmark)

    Viborg Andersen, Kim; Debenham, John; Wagner, Roland

    This book constitutes the refereed proceedings of the 16th International Conference on Database and Expert Systems Applications, DEXA 2005, held in Copenhagen, Denmark, in August 2005. The 92 revised full papers presented together with 2 invited papers were carefully reviewed and selected from 390 submissions. The papers are organized in topical sections on workflow automation, database queries, data classification and recommendation systems, information retrieval in multimedia databases, Web applications, implementational aspects of databases, multimedia databases, XML processing, security, XML..., reasoning and learning, network management and mobile systems, expert systems and decision support, and information modelling.

  3. Federated Spatial Databases and Interoperability

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    It is a period of information explosion. Especially for spatial information science, information can be acquired in many ways, such as by man-made satellites, aeroplanes, laser scanning, digital photogrammetry and so on. Spatial data sources are usually distributed and heterogeneous. A federated database is the best solution for the sharing and interoperation of spatial databases. In this paper, the concepts of federated database and interoperability are introduced. Three heterogeneous kinds of spatial data, vector, image and DEM, are used to create an integrated database. A data model of federated spatial databases is given

  4. Databases as an information service

    Science.gov (United States)

    Vincent, D. A.

    1983-01-01

    The relationship of databases to information services, and the range of information services users and their needs for information is explored and discussed. It is argued that for database information to be valuable to a broad range of users, it is essential that access methods be provided that are relatively unstructured and natural to information services users who are interested in the information contained in databases, but who are not willing to learn and use traditional structured query languages. Unless this ease of use of databases is considered in the design and application process, the potential benefits from using database systems may not be realized.

  5. Building Bridges

    DEFF Research Database (Denmark)

    The report Building Bridges adresses the questions why, how and for whom academic audience research has public value, from the different points of view of the four working groups in the COST Action IS0906 Transforming Audiences, Transforming Societies – “New Media Genres, Media Literacy and Trust...... in the Media”, “Audience Interactivity and Participation”, “The Role of Media and ICT Use for Evolving Social Relationships” and “Audience Transformations and Social Integration”. Building Bridges is the result of an ongoing dialogue between the Action and non-academic stakeholders in the field of audience...

  6. Database Description - Open TG-GATEs Pathological Image Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available athological image database, over 53,000 high-resolution whole slide digital pathological images of liver and...o the pathological findings registered on Open TG-GATEs database, with the pathological images. For further ...bc.jp/en/open-tggates/desc.html . Features and manner of utilization of database The pathological images

  7. Caching in Multidimensional Databases

    CERN Document Server

    Szépkúti, István

    2011-01-01

    One utilisation of multidimensional databases is the field of On-line Analytical Processing (OLAP). The applications in this area are designed to make the analysis of shared multidimensional information fast [9]. On one hand, speed can be achieved by specially devised data structures and algorithms. On the other hand, the analytical process is cyclic. In other words, the user of the OLAP application runs his or her queries one after the other. The output of the last query may be there (at least partly) in one of the previous results. Therefore caching also plays an important role in the operation of these systems. However, caching itself may not be enough to ensure acceptable performance. Size does matter: The more memory is available, the more we gain by loading and keeping information in there. Oftentimes, the cache size is fixed. This limits the performance of the multidimensional database, as well, unless we compress the data in order to move a greater proportion of them into the memory. Caching combined ...
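
    The trade-off the abstract describes, a fixed cache size that compression can stretch, can be sketched with a toy result cache (all names and the no-eviction policy are illustrative, not from the paper):

```python
import pickle
import zlib

class CompressedCache:
    """Toy fixed-budget cache for OLAP query results.

    Storing results compressed lets more of them fit into the same
    memory budget, which is the effect the abstract describes.
    """
    def __init__(self, max_bytes=1 << 20):
        self.max_bytes = max_bytes
        self.used = 0
        self.store = {}                      # query text -> compressed blob

    def put(self, query, result):
        blob = zlib.compress(pickle.dumps(result))
        if self.used + len(blob) > self.max_bytes:
            return False                     # full; a real cache would evict
        self.store[query] = blob
        self.used += len(blob)
        return True

    def get(self, query):
        blob = self.store.get(query)
        return None if blob is None else pickle.loads(zlib.decompress(blob))

cache = CompressedCache()
cache.put("sales by region, 2023", [("north", 120), ("south", 95)])
```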

  8. IPD: the Immuno Polymorphism Database.

    Science.gov (United States)

    Robinson, James; Marsh, Steven G E

    2007-01-01

    The Immuno Polymorphism Database (IPD) (http://www.ebi.ac.uk/ipd/) is a set of specialist databases related to the study of polymorphic genes in the immune system. IPD currently consists of four databases: IPD-KIR, which contains the allelic sequences of killer cell immunoglobulin-like receptors (KIRs); IPD-MHC, a database of sequences of the major histocompatibility complex (MHC) of different species; IPD-HPA, alloantigens expressed only on platelets; and IPD-ESTDAB, which provides access to the European Searchable Tumour Cell Line Database, a cell bank of immunologically characterized melanoma cell lines. The IPD project works with specialist groups or nomenclature committees who provide and curate individual sections before they are submitted to IPD for online publication. The IPD project stores all the data in a set of related databases. Those sections with similar data, such as IPD-KIR and IPD-MHC, share the same database structure. PMID:18449992

  9. SHORT SURVEY ON GRAPHICAL DATABASE

    Directory of Open Access Journals (Sweden)

    Harsha R Vyavahare

    2015-08-01

    Full Text Available This paper explores the features of graph databases and data models. The popularity of work with graph models and datasets has increased in recent decades. Graph databases have a number of advantages over relational databases. The paper briefly reviews graph and hypergraph concepts from mathematics so that the existing difficulties in the implementation of graph models can be understood. The past few decades saw hundreds of research contributions in the database field related to graph databases; however, research on general-purpose database management and mining that suits a variety of applications is still very much active. The review covers the application of graph-model techniques in databases within the framework of graph-based approaches, with the aim of comparing the implementation of graphical and tabular databases

  10. Designing a Multi-Petabyte Database for LSST

    Energy Technology Data Exchange (ETDEWEB)

    Becla, Jacek; Hanushevsky, Andrew; Nikolaev, Sergei; Abdulla, Ghaleb; Szalay, Alex; Nieto-Santisteban, Maria; Thakar, Ani; Gray, Jim; /SLAC

    2007-01-10

    The 3.2 giga-pixel LSST camera will produce approximately half a petabyte of archive images every month. These data need to be reduced in under a minute to produce real-time transient alerts, and then added to the cumulative catalog for further analysis. The catalog is expected to grow about three hundred terabytes per year. The data volume, the real-time transient alerting requirements of the LSST, and its spatio-temporal aspects require innovative techniques to build an efficient data access system at reasonable cost. As currently envisioned, the system will rely on a database for catalogs and metadata. Several database systems are being evaluated to understand how they perform at these data rates, data volumes, and access patterns. This paper describes the LSST requirements, the challenges they impose, the data access philosophy, results to date from evaluating available database technologies against LSST requirements, and the proposed database architecture to meet the data challenges.

  11. A Components Database Design and Implementation for Accelerators and Detectors.

    Science.gov (United States)

    Chan, A.; Meyer, S.

    1997-05-01

    Many accelerator and detector systems being fabricated for the PEP-II Accelerator and BaBar Detector needed configuration control and calibration measurements tracked for their components. Instead of building a database for each distinct system, a Components Database was designed and implemented that can encompass any type of component and any type of measurement. In this paper we describe this database design which is especially suited for the engineering and fabrication processes of the accelerator and detector environments where there are thousands of unique component types. We give examples of information stored in the Components Database, which includes accelerator configuration, calibration measurements, fabrication history, design specifications, inventory, etc. The World Wide Web interface is used to access the data, and templates are available for international collaborations to collect data off-line.
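
    The "any component, any measurement" design the paper describes reduces to a generic two-table schema; a minimal sketch of the idea (table and column names are hypothetical, not the actual PEP-II/BaBar schema) could look like:

```python
import sqlite3

# Generic schema: one table of typed components, one of measurements
# keyed to them, so new component types need no new tables.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE component (
        id        INTEGER PRIMARY KEY,
        type      TEXT NOT NULL,        -- e.g. 'magnet', 'photomultiplier'
        serial_no TEXT UNIQUE
    );
    CREATE TABLE measurement (
        id           INTEGER PRIMARY KEY,
        component_id INTEGER REFERENCES component(id),
        quantity     TEXT,              -- what was measured
        value        REAL,
        taken_on     TEXT
    );
""")
con.execute("INSERT INTO component (type, serial_no) VALUES ('magnet', 'QD-0042')")
con.execute(
    "INSERT INTO measurement (component_id, quantity, value, taken_on) "
    "VALUES (1, 'field_strength_T', 1.27, '1997-05-01')")
rows = con.execute(
    "SELECT c.serial_no, m.quantity, m.value FROM measurement m "
    "JOIN component c ON c.id = m.component_id").fetchall()
```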

  12. Building Bridges

    DEFF Research Database (Denmark)

    The report Building Bridges addresses the questions why, how and for whom academic audience research has public value, from the different points of view of the four working groups in the COST Action IS0906 Transforming Audiences, Transforming Societies – “New Media Genres, Media Literacy and Trust...

  13. Sustainable Buildings

    DEFF Research Database (Denmark)

    Tommerup, Henrik M.; Elle, Morten

    The scientific community agrees that: all countries must drastically and rapidly reduce their CO2 emissions and that energy efficient houses play a decisive role in this. The general attitude at the workshop on Sustainable Buildings was that we face large and serious climate change problems...

  14. California commercial building energy benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    Kinney, Satkartar; Piette, Mary Ann

    2003-07-01

    Building energy benchmarking is the comparison of whole-building energy use relative to a set of similar buildings. It provides a useful starting point for individual energy audits and for targeting buildings for energy-saving measures in multiple-site audits. Benchmarking is of interest and practical use to a number of groups. Energy service companies and performance contractors communicate energy savings potential with "typical" and "best-practice" benchmarks while control companies and utilities can provide direct tracking of energy use and combine data from multiple buildings. Benchmarking is also useful in the design stage of a new building or retrofit to determine if a design is relatively efficient. Energy managers and building owners have an ongoing interest in comparing energy performance to others. Large corporations, schools, and government agencies with numerous facilities also use benchmarking methods to compare their buildings to each other. The primary goal of Task 2.1.1 Web-based Benchmarking was the development of a web-based benchmarking tool, dubbed Cal-Arch, for benchmarking energy use in California commercial buildings. While there were several other benchmarking tools available to California consumers prior to the development of Cal-Arch, there were none that were based solely on California data. Most available benchmarking information, including the Energy Star performance rating, were developed using DOE's Commercial Building Energy Consumption Survey (CBECS), which does not provide state-level data. Each database and tool has advantages as well as limitations, such as the number of buildings and the coverage by type, climate regions and end uses. There is considerable commercial interest in benchmarking because it provides an inexpensive method of screening buildings for tune-ups and retrofits. However, private companies who collect and manage consumption data are concerned that the
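
    The core benchmarking operation, ranking one building's energy use against a peer distribution, takes only a few lines; the EUI figures below are invented for illustration and are not Cal-Arch data:

```python
# Energy use intensity (EUI), e.g. in kWh/m^2/yr; lower is better.
def benchmark_percentile(eui, peer_euis):
    """Fraction of peer buildings using at least as much energy."""
    worse_or_equal = sum(1 for p in peer_euis if p >= eui)
    return worse_or_equal / len(peer_euis)

peers = [180, 210, 150, 300, 260, 170, 220, 190, 240, 205]
score = benchmark_percentile(200, peers)   # 6 of 10 peers use >= 200
```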

  15. Indexing, learning and content-based retrieval for special purpose image databases

    OpenAIRE

    Huiskes, Mark; Pauwels, Eric

    2004-01-01

    This chapter deals with content-based image retrieval in special purpose image databases. As image data is amassed ever more effortlessly, building efficient systems for searching and browsing of image databases becomes increasingly urgent. We provide an overview of the current state-of-the art by taking a tour along the entire

  16. CarbBank: A structural and bibliographic database for complex carbohydrates

    Energy Technology Data Exchange (ETDEWEB)

    Albersheim, P.

    1992-06-01

    The CarbBank project has several key facets: building a computer database that presents carbohydrate sequence information derived from the published literature, programming to create computer applications that use the information in the database, creating software for multiple computer platforms, and distributing software to end users.

  17. Indexing, learning and content-based retrieval for special purpose image databases

    NARCIS (Netherlands)

    Huiskes, M.J.; Pauwels, E.J.

    2005-01-01

    This chapter deals with content-based image retrieval in special purpose image databases. As image data is amassed ever more effortlessly, building efficient systems for searching and browsing of image databases becomes increasingly urgent. We provide an overview of the current state-of-the art by t

  18. Indexing, learning and content-based retrieval for special purpose image databases

    NARCIS (Netherlands)

    Huiskes, M.J.; Pauwels, E.J.

    2004-01-01

    This chapter deals with content-based image retrieval in special purpose image databases. As image data is amassed ever more effortlessly, building efficient systems for searching and browsing of image databases becomes increasingly urgent. We provide an overview of the current state-of-the art by t

  19. The integrated web service and genome database for agricultural plants with biotechnology information

    Science.gov (United States)

    Kim, ChangKug; Park, DongSuk; Seol, YoungJoo; Hahn, JangHo

    2011-01-01

    The National Agricultural Biotechnology Information Center (NABIC) constructed an agricultural biology-based infrastructure and developed a Web based relational database for agricultural plants with biotechnology information. The NABIC has concentrated on functional genomics of major agricultural plants, building an integrated biotechnology database for agro-biotech information that focuses on genomics of major agricultural resources. This genome database provides annotated genome information from 1,039,823 records mapped to rice, Arabidopsis, and Chinese cabbage. PMID:21887015

  20. An approach to the Optimization of menu-based Natural Language Interfaces to Databases

    OpenAIRE

    Fiaz Majeed; Shoaib, M.; Fasiha Ashraf

    2011-01-01

    Natural language interfaces to databases (NLIDB) allow the user to state a query to a database in natural language. The NLIDB then interprets the natural language query into Structured Query Language (SQL) to perform the action on the target database. A menu-based NLIDB provides a restricted set of elements on screen that are utilized to build the natural language query. The latest menu-based NLIDBs use WYSIWYM interfaces that focus on the automatic formation of popup menus relevant to the word typed in the editor. The auto...

  1. Reshaping Smart Businesses with Cloud Database Solutions

    Directory of Open Access Journals (Sweden)

    Bogdan NEDELCU

    2015-03-01

    Full Text Available The aim of this article is to show the importance of Big Data and its growing influence on companies. We can also see how much companies are willing to invest in big data and how much they are currently gaining from it. In this big data era, there is fierce competition between companies and the technologies they use when building their strategies. There are almost no boundaries when it comes to the possibilities and facilities some databases can offer. However, the most challenging part lies in the development of efficient solutions: where and when to take the right decision, which cloud service is the most appropriate in a given scenario, and which database is suitable for the business, taking the data types into consideration. These are just a few of the aspects dealt with in the following chapters, along with examples of the most appropriate cloud services (e.g., NoSQL databases) used by business leaders nowadays.

  2. ATLAS Nightly Build System Upgrade

    CERN Document Server

    Dimitrov, G; The ATLAS collaboration; Simmons, B; Undrus, A

    2014-01-01

    The ATLAS Nightly Build System is a facility for automatic production of software releases. Being the major component of ATLAS software infrastructure, it supports more than 50 multi-platform branches of nightly releases and provides ample opportunities for testing new packages, for verifying patches to existing software, and for migrating to new platforms and compilers. The Nightly System testing framework runs several hundred integration tests of different granularity and purpose. The nightly releases are distributed and validated, and some are transformed into stable releases used for data processing worldwide. The first LHC long shutdown (2013-2015) activities will elicit increased load on the Nightly System as additional releases and builds are needed to exploit new programming techniques, languages, and profiling tools. This paper describes the plan of the ATLAS Nightly Build System Long Shutdown upgrade. It brings modern database and web technologies into the Nightly System, improves monitoring of nigh...

  3. ATLAS Nightly Build System Upgrade

    CERN Document Server

    Dimitrov, G; The ATLAS collaboration; Simmons, B; Undrus, A

    2013-01-01

    The ATLAS Nightly Build System is a facility for automatic production of software releases. Being the major component of ATLAS software infrastructure, it supports more than 50 multi-platform branches of nightly releases and provides ample opportunities for testing new packages, for verifying patches to existing software, and for migrating to new platforms and compilers. The Nightly System testing framework runs several hundred integration tests of different granularity and purpose. The nightly releases are distributed and validated, and some are transformed into stable releases used for data processing worldwide. The first LHC long shutdown (2013-2015) activities will elicit increased load on the Nightly System as additional releases and builds are needed to exploit new programming techniques, languages, and profiling tools. This paper describes the plan of the ATLAS Nightly Build System Long Shutdown upgrade. It brings modern database and web technologies into the Nightly System, improves monitoring of nigh...

  4. Ageing Management Program Database

    International Nuclear Information System (INIS)

    The aspects of plant ageing management (AM) gained increasing attention over the last ten years. Numerous technical studies have been performed to study the impact of ageing mechanisms on the safe and reliable operation of nuclear power plants. National research activities have been initiated or are in progress to provide the technical basis for decision making processes. The long-term operation of nuclear power plants is influenced by economic considerations, the socio-economic environment including public acceptance, developments in research and the regulatory framework, the availability of technical infrastructure to maintain and service the systems, structures and components as well as qualified personnel. Besides national activities there are a number of international activities in particular under the umbrella of the IAEA, the OECD and the EU. The paper discusses the process, procedure and database developed for Slovenian Nuclear Safety Administration (SNSA) surveillance of the ageing process of the Krško Nuclear Power Plant. (author)

  5. Database Programming Languages

    DEFF Research Database (Denmark)

    This volume contains the proceedings of the 11th International Symposium on Database Programming Languages (DBPL 2007), held in Vienna, Austria, on September 23-24, 2007. DBPL 2007 was one of 15 meetings co-located with VLDB (the International Conference on Very Large Data Bases). DBPL continues...... to present the very best work at the intersection of database and programming language research. The proceedings include a paper based on the invited talk by Wenfei Fan and the 16 contributed papers that were selected by at least three members of the program committee. In addition, the program committee sought...... the opinions of additional referees selected because of their expertise on particular topics. The final selection of papers was made during the last week of July. We would like to thank all of the authors who submitted papers to the conference, and the members of the program committee for their excellent work...

  6. The Danish Anaesthesia Database

    Directory of Open Access Journals (Sweden)

    Antonsen K

    2016-10-01

    Full Text Available Kristian Antonsen,1 Charlotte Vallentin Rosenstock,2 Lars Hyldborg Lundstrøm2 1Board of Directors, Copenhagen University Hospital, Bispebjerg and Frederiksberg Hospital, Capital Region of Denmark, Denmark; 2Department of Anesthesiology, Copenhagen University Hospital, Nordsjællands Hospital-Hillerød, Capital Region of Denmark, Denmark Aim of database: The aim of the Danish Anaesthesia Database (DAD) is the nationwide collection of data on all patients undergoing anesthesia. Collected data are used for quality assurance, quality development, and serve as a basis for research projects. Study population: The DAD was founded in 2004 as a part of the Danish Clinical Registries (Regionernes Kliniske Kvalitetsudviklings Program [RKKP]). Patients undergoing general anesthesia, regional anesthesia with or without combined general anesthesia, as well as patients under sedation are registered. Data are retrieved from public and private anesthesia clinics, single centers as well as multihospital corporations across Denmark. In 2014 a total of 278,679 unique entries representing a national coverage of ~70% were recorded; data completeness is steadily increasing. Main variable: Records are aggregated for determining 13 defined quality indicators and eleven defined complications, all covering the anesthetic process from the preoperative assessment through anesthesia and surgery until the end of the postoperative recovery period. Descriptive data: Registered variables include patients' individual social security number (assigned to all Danes) and both direct patient-related lifestyle factors enabling a quantification of patients' comorbidity and variables that are strictly related to the type, duration, and safety of the anesthesia. Data and specific data combinations can be extracted within each department in order to monitor patient treatment. In addition, an annual DAD report is a benchmark for departments nationwide. Conclusion: The DAD is covering the

  7. Database design for a kindergarten Pastelka

    OpenAIRE

    Grombíř, Tomáš

    2010-01-01

    This bachelor thesis deals with analysis, creation of database for a kindergarten and installation of the designed database into the database system MySQL. Functionality of the proposed database was verified through an application written in PHP.

  8. Developing policies for green buildings: what can the United States learn from the Netherlands?

    Directory of Open Access Journals (Sweden)

    Rebecca Retzlaff

    2010-07-01

    Full Text Available Political jurisdictions in the United States have begun to develop plans that address green buildings, a topic on which the Netherlands has extensive experience. This article analyzes the literature on Dutch green buildings to look for lessons that might be relevant for the development of policies in the United States. Through a metasynthesis of seventeen studies on green building policies in the Netherlands, the study identifies patterns in the literature and creates a holistic interpretation. These data are compared with the literature on green building policies in the United States. The article concludes that guidance from the federal government, including a stronger research agenda for green building policy issues, could help spur innovation. Reliance on voluntary green building certification has very limited potential, and stronger regulations are needed in the United States to minimize the environmental impacts of buildings. A flexible, broad policy system is also required.

  9. Design Buildings Optimally: A Lifecycle Assessment Approach

    KAUST Repository

    Hosny, Ossama

    2013-01-01

    This paper structures a generic framework to support optimum design for multi-buildings in a desert environment. The framework targets an environmentally friendly design with minimum lifecycle cost, using Genetic Algorithms (GAs). GAs function through a set of success measures which evaluate the design, formulate a proper objective, and reflect possible tangible/intangible constraints. The framework optimizes the design and categorizes it under a certain environmental category at minimum Life Cycle Cost (LCC). It consists of three main modules: (1) a custom Building Information Model (BIM) for desert buildings with a compatibility checker as a central interactive database; (2) a system evaluator module to evaluate the proposed success measures for the design; and (3) a GA optimization module to ensure optimum design. The framework functions through three levels: the building-component, integrated-building, and multi-building levels. At the component level, the design team should be able to select components in a designed sequence to ensure compatibility among various components, while at the building level, the team can relatively locate and orient each individual building. Finally, at the multi-building (compound) level the whole design can be evaluated using success measures of natural light, site capacity, shading impact on natural lighting, thermal change, visual access and energy saving. Through genetic algorithms, the framework optimizes the design by determining proper types of building components and relative building locations and orientations which ensure categorizing the design under a specific category, or meeting certain preferences, at minimum lifecycle cost.
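
    As a rough illustration of the GA module's job, choosing one option per building component so that an environmental threshold is met at minimum lifecycle cost, here is a toy genetic algorithm; the components, costs, and scores are invented and are not the paper's model:

```python
import random

OPTIONS = {                    # component -> [(lifecycle cost, env score), ...]
    "glazing":    [(10, 1), (18, 3), (25, 4)],
    "insulation": [(8, 1), (14, 2), (22, 4)],
    "shading":    [(5, 0), (9, 2), (15, 3)],
}
NAMES = list(OPTIONS)
MIN_ENV = 7                    # required environmental score

def fitness(design):           # lower is better
    cost = sum(OPTIONS[n][i][0] for n, i in zip(NAMES, design))
    env = sum(OPTIONS[n][i][1] for n, i in zip(NAMES, design))
    return cost + (1000 if env < MIN_ENV else 0)   # penalty for infeasibility

def evolve(pop=30, gens=60, seed=1):
    rng = random.Random(seed)
    population = [[rng.randrange(3) for _ in NAMES] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness)
        parents = population[:pop // 2]            # elitist selection
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, len(NAMES))     # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:                 # mutation
                child[rng.randrange(len(NAMES))] = rng.randrange(3)
            children.append(child)
        population = parents + children
    return min(population, key=fitness)

best = evolve()
```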

  10. Building Letters

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    Cabinet is an attractive yet simple serif typeface, designed by an anonymous type designer specifically for Building Letters' latest fundraising campaign. The Building Letters package includes a CD-ROM with 32 typefaces, along with a specially designed magazine and two posters designed by Eboy and Emigre. The typeface samples on the CD were created by the world's leading type designers.

  11. Application of graph database for analytical tasks

    OpenAIRE

    Günzl, Richard

    2014-01-01

    This diploma thesis is about graph databases, which belong to the category of database systems known as NoSQL databases, although graph databases go beyond NoSQL. Graph databases are useful in many cases thanks to their native storage of interconnections between data, which brings advantageous properties in comparison with traditional relational database systems, especially in querying. The main goals of the thesis are: to describe the principles, properties and advantages of graph databases; to desi...

  12. Glass Stronger than Steel

    Science.gov (United States)

    Yarris, Lynn

    2011-03-28

    A new type of damage-tolerant metallic glass, demonstrating a strength and toughness beyond that of steel or any other known material, has been developed and tested by a collaboration of researchers from Berkeley Lab and Caltech.

  13. Biological Databases for Behavioral Neurobiology

    OpenAIRE

    Baker, Erich J

    2012-01-01

    Databases are, at their core, abstractions of data and their intentionally derived relationships. They serve as a central organizing metaphor and repository, supporting or augmenting nearly all bioinformatics. Behavioral domains provide a unique stage for contemporary databases, as research in this area spans diverse data types, locations, and data relationships. This chapter provides foundational information on the diversity and prevalence of databases, how data structures support the variou...

  14. Database Preservation: The DBPreserve Approach

    OpenAIRE

    Arif Ur Rahman; Muhammad Muzammal; Gabriel David; Cristina Ribeiro

    2015-01-01

    In many institutions relational databases are used as a tool for managing information related to day to day activities. Institutions may be required to keep the information stored in relational databases accessible because of many reasons including legal requirements and institutional policies. However, the evolution in technology and change in users with the passage of time put the information stored in relational databases in danger. In the long term the information may become inaccessible ...

  15. Performance Introspection of Graph Databases

    OpenAIRE

    Macko, Peter; Margo, Daniel Wyatt; Seltzer, Margo I.

    2013-01-01

    The explosion of graph data in social and biological networks, recommendation systems, provenance databases, etc. makes graph storage and processing of paramount importance. We present a performance introspection framework for graph databases, PIG, which provides both a toolset and methodology for understanding graph database performance. PIG consists of a hierarchical collection of benchmarks that compose to produce performance models; the models provide a way to illuminate the strengths and...

  16. SHORT SURVEY ON GRAPHICAL DATABASE

    OpenAIRE

    Harsha R Vyavahare; Dr.P.P.Karde

    2015-01-01

    This paper explores the features of graph databases and data models. The popularity of work with graph models and datasets has increased in recent decades. Graph databases have a number of advantages over relational databases. The paper briefly reviews graph and hypergraph concepts from mathematics so that the existing difficulties in the implementation of graph models can be understood. The past few decades saw hundreds of research contributions the...

  17. Inorganic Crystal Structure Database (ICSD)

    Science.gov (United States)

    SRD 84 FIZ/NIST Inorganic Crystal Structure Database (ICSD) (PC database for purchase)   The Inorganic Crystal Structure Database (ICSD) is produced cooperatively by the Fachinformationszentrum Karlsruhe (FIZ) and the National Institute of Standards and Technology (NIST). The ICSD is a comprehensive collection of crystal structure data of inorganic compounds containing more than 140,000 entries and covering the literature from 1915 to the present.

  18. Model Building

    OpenAIRE

    Frampton, Paul H.

    1997-01-01

    In this talk I begin with some general discussion of model building in particle theory, emphasizing the need for motivation and testability. Three illustrative examples are then described. The first is the Left-Right model which provides an explanation for the chirality of quarks and leptons. The second is the 331-model which offers a first step to understanding the three generations of quarks and leptons. Third and last is the SU(15) model which can accommodate the light leptoquarks possibly...

  19. Building economics

    DEFF Research Database (Denmark)

    Pedersen, D.O.(red.)

    The publication is in English. It comprises all contributions to the fourth international symposium on building economics, organized by SBI for the international building research council CIB. The five volumes cover: Methods of Economic Evaluation, Design Optimization, Resource Utilization, The Building...... Market and Economics and Technological Forecasting in Construction. An introductory volume presents status reports for the five research areas, and the final volume summarizes the discussion at the symposium....

  20. Geomagnetic Observatory Database February 2004

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The NOAA National Centers for Environmental Information (formerly National Geophysical Data Center) maintains an active database of worldwide geomagnetic...

  1. Cloud database development and management

    CERN Document Server

    Chao, Lee

    2013-01-01

    Nowadays, cloud computing is almost everywhere. However, one can hardly find a textbook that utilizes cloud computing for teaching database and application development. This cloud-based database development book teaches both the theory and practice with step-by-step instructions and examples. The book helps readers to set up a cloud computing environment for teaching and learning database systems, and covers adequate conceptual content for students and IT professionals to gain the necessary knowledge and hands-on skills to set up cloud-based database systems.

  2. Working with Documents in Databases

    Directory of Open Access Journals (Sweden)

    Marian DARDALA

    2008-01-01

    Full Text Available The increasingly widespread use of electronic documents within organizations and public institutions requires their storage and unified exploitation by means of databases. The purpose of this article is to present the loading, exploitation and visualization of documents in a database, taking the DBMS MS SQL Server as an example. The modules for loading documents into the database and for their visualization are presented through code sequences written in C#. The interoperability between environments is achieved by means of the ADO.NET database access technology.
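
    The article's code sequences are C# with ADO.NET; the same load-and-retrieve pattern, sketched portably here with Python's sqlite3 (table and column names are illustrative, not the article's), looks like this:

```python
import sqlite3

# Documents stored as BLOBs alongside their metadata.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE document (
                   id        INTEGER PRIMARY KEY,
                   name      TEXT UNIQUE,
                   mime_type TEXT,
                   content   BLOB)""")

def store_document(con, name, mime_type, data):
    con.execute(
        "INSERT INTO document (name, mime_type, content) VALUES (?, ?, ?)",
        (name, mime_type, data))

def load_document(con, name):
    # Returns (mime_type, bytes) or None if the document is absent.
    return con.execute(
        "SELECT mime_type, content FROM document WHERE name = ?",
        (name,)).fetchone()

store_document(con, "report.pdf", "application/pdf", b"%PDF-1.4 example content")
mime, blob = load_document(con, "report.pdf")
```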

  3. Network-based Database Course

    DEFF Research Database (Denmark)

    Nielsen, J.N.; Knudsen, Morten; Nielsen, Jens Frederik Dalsgaard;

    A course in database design and implementation has been designed, utilizing existing network facilities. The course is an elementary course for students of computer engineering. Its purpose is to give the students a theoretical database knowledge as well as practical experience with design...... and implementation. A tutorial relational database and the students' self-designed databases are implemented on the UNIX system of Aalborg University, thus giving the teacher the possibility of live demonstrations in the lecture room, and the students the possibility of interactive learning in their working rooms...

  4. Constructing a global noise correlation database

    Science.gov (United States)

    Ermert, L. A.; Fichtner, A.; Sleeman, R.

    2013-12-01

    We report on the ongoing construction of an extensive global-scale database of ambient noise cross-correlation functions spanning a frequency range from seismic hum to oceanic microseisms (roughly 2 mHz to 0.2 Hz). The database - ultimately to be hosted by ORFEUS - will be used to study the distribution of microseismic and hum sources, and to perform multiscale full waveform inversion for crustal and mantle structure. To build the database, we acquire continuous time series data from permanent and temporary networks hosted mostly at IRIS and ORFEUS. We process and correlate the time series using a fully parallelised tool based on the Python package ObsPy. Processing follows two main flows: we obtain both classical cross-correlation functions and phase cross-correlation functions. Phase cross-correlation is an amplitude-independent measure of waveform similarity. Either type of correlation can be used for the inversions. We stack individual time windows linearly. Additionally, we calculate the stack of instantaneous phases of the analytic cross-correlation signal, which can be included as an optional processing step. Multiplying the linear stack by the phase stack downweights those parts of the linear stack that show little phase coherency. Thus, it accelerates the emergence of weak coherent signals, which is of particular importance for the processing of data from recently deployed or temporary stations that have only been recording for a short time. Obtaining and processing data for such a massive database requires considerable computational resources, offered by the Swiss National Supercomputing Centre (CSCS) in the form of HPC clusters specifically designed for large-scale data analysis. The data set will be made available to the scientific community via ORFEUS. By separately providing classical cross-correlation, phase cross-correlation and instantaneous phase stack, the database will offer relative flexibility for application in further studies. Many current
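
    The stacking scheme described above, a linear stack downweighted by the coherence of instantaneous phases, can be sketched as follows (a simplified illustration of phase-weighted stacking, not the authors' production tool):

```python
import numpy as np

def analytic(x):
    """Analytic signal of a real series via FFT (one-sided spectrum doubling)."""
    n = len(x)
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.fft.ifft(spec * h)

def phase_weighted_stack(windows, nu=2.0):
    """Linear stack of correlation windows, weighted by phase coherence.

    The phase stack is the magnitude of the mean unit phasor of the
    analytic signals: near 1 where the windows agree in phase, near 0
    where they do not, so incoherent parts of the stack are suppressed.
    """
    phasors = np.array([np.exp(1j * np.angle(analytic(w))) for w in windows])
    phase_stack = np.abs(phasors.mean(axis=0))
    linear_stack = windows.mean(axis=0)
    return linear_stack * phase_stack ** nu

t = np.linspace(0.0, 1.0, 256, endpoint=False)
coherent = np.tile(np.sin(2 * np.pi * 5.0 * t), (10, 1))
stacked = phase_weighted_stack(coherent)   # identical windows survive intact
```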

  5. Where Two Are Fighting, the Third Wins: Stronger Selection Facilitates Greater Polymorphism in Traits Conferring Competition-Dispersal Tradeoffs.

    Directory of Open Access Journals (Sweden)

    Adam Lampert

    Full Text Available A major conundrum in evolution is that, despite natural selection, polymorphism is still omnipresent in nature: Numerous species exhibit multiple morphs, namely several abundant values of an important trait. Polymorphism is particularly prevalent in asymmetric traits, which are beneficial to their carrier in disruptive competitive interference but at the same time bear disadvantages in other aspects, such as greater mortality or lower fecundity. Here we focus on asymmetric traits in which a better competitor disperses fewer offspring in the absence of competition. We report a general pattern in which polymorphic populations emerge when disruptive selection increases: The stronger the selection, the greater the number of morphs that evolve. This pattern is general and is insensitive to the form of the fitness function. The pattern is somewhat counterintuitive since directional selection is expected to sharpen the trait distribution and thereby reduce its diversity (but note that similar patterns were suggested in studies that demonstrated increased biodiversity as local selection increases in ecological communities). We explain the underlying mechanism in which stronger selection drives the population towards more competitive values of the trait, which in turn reduces the population density, thereby enabling lesser competitors to stably persist with reduced need to directly compete. Thus, we believe that the pattern is more general and may apply to asymmetric traits more broadly. This robust pattern suggests a comparative, unified explanation for a variety of polymorphic traits in nature.

  6. Where Two Are Fighting, the Third Wins: Stronger Selection Facilitates Greater Polymorphism in Traits Conferring Competition-Dispersal Tradeoffs.

    Science.gov (United States)

    Lampert, Adam; Tlusty, Tsvi

    2016-01-01

    A major conundrum in evolution is that, despite natural selection, polymorphism is still omnipresent in nature: Numerous species exhibit multiple morphs, namely several abundant values of an important trait. Polymorphism is particularly prevalent in asymmetric traits, which are beneficial to their carrier in disruptive competitive interference but at the same time bear disadvantages in other aspects, such as greater mortality or lower fecundity. Here we focus on asymmetric traits in which a better competitor disperses fewer offspring in the absence of competition. We report a general pattern in which polymorphic populations emerge when disruptive selection increases: The stronger the selection, the greater the number of morphs that evolve. This pattern is general and is insensitive to the form of the fitness function. The pattern is somewhat counterintuitive since directional selection is expected to sharpen the trait distribution and thereby reduce its diversity (but note that similar patterns were suggested in studies that demonstrated increased biodiversity as local selection increases in ecological communities). We explain the underlying mechanism in which stronger selection drives the population towards more competitive values of the trait, which in turn reduces the population density, thereby enabling lesser competitors to stably persist with reduced need to directly compete. Thus, we believe that the pattern is more general and may apply to asymmetric traits more broadly. This robust pattern suggests a comparative, unified explanation for a variety of polymorphic traits in nature.

  7. An improved piezoelectric harvester available in scavenging-energy from the operating environment with either weaker or stronger vibration levels

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    An improved harvester capable of scavenging energy from operating environments with either weaker or stronger vibration levels is studied. To ensure optimal harvester performance, a Cuk dc-dc converter is employed in the modulating circuit. This paper reports how this harvester scavenges maximal energy from varying-level vibrations and stores the energy in an electrochemical battery. The dependence of the duty cycle upon the external vibration level is calculated, and it is found that: 1) for weaker vibrations, the charging current into the battery is smaller than the allowable current, and thus all the optimal output power of the harvesting structure can be absorbed by the battery; in this case, the duty cycle should be fixed at 1.86%; 2) for stronger external forcing, the allowable charging current of the battery is smaller than the optimal harvested current, indicating that just a portion of the scavenged energy can be accepted by the battery; thus, the duty cycle should be decreased gradually with the increase of the vibration level. Finally, the energy transmission process and the roles of each electronic element are analyzed. It is shown that a Cuk converter can greatly raise the efficiency of such a harvester, particularly when subjected to weaker ambient vibration.
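
    The two operating regimes can be illustrated with a small duty-cycle selection sketch. Only the 1.86% figure comes from the abstract; the linear scaling of harvested current with duty cycle is an assumption made for illustration, not the paper's actual control law.

```python
def duty_cycle(i_opt_mA, i_allow_mA, d_fixed=0.0186):
    """Choose the Cuk-converter duty cycle for a given vibration level.

    i_opt_mA   : optimal harvested current at this vibration level
    i_allow_mA : maximum charging current the battery accepts
    d_fixed    : fixed duty cycle used when the battery can absorb
                 all harvested power (1.86% per the abstract)

    Assumption (not from the paper): harvested current scales
    linearly with duty cycle, so we scale d_fixed down by the
    ratio of allowable to optimal current when the battery limits.
    """
    if i_opt_mA <= i_allow_mA:
        return d_fixed                       # weak vibration: battery absorbs all
    return d_fixed * i_allow_mA / i_opt_mA   # strong vibration: throttle down
```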

  8. An improved piezoelectric harvester available in scavenging-energy from the operating environment with either weaker or stronger vibration levels

    Institute of Scientific and Technical Information of China (English)

    XUE Huan; HU HongPing; HU YuanTai; CHEN XueDong

    2009-01-01

    An improved harvester available in scavenging energy from the operating environment with either weaker or stronger vibration levels is studied. To ensure the optimal harvester performance, a Cuk dc-dc converter is employed into the modulating circuit. This paper reports how this harvester scav-enges maximal energy from varying-level vibrations and store energy into an electrochemical battery. Dependence of the duty cycle upon the external vibration level is calculated, and it is found that: 1) for weaker vibrations, the charging current into the battery is smaller than the allowable current, and thus all the optimal output power of the harvesting structure can be absorbed by the battery. In this case, the duty cycle should be fixed at 1.86%; 2) for stronger external forcing, the allowable charging current of the battery is smaller than the optimal harvested current. This indicates that just a portion of the scav-enged energy can be accepted by the battery. Thus, the duty cycle should be decreased gradually with the increase of the vibration level. Finally the energy transmission process and the roles of each electronic element are analyzed. It is shown that a Cuk converter can greatly raise the efficiency of such a harvester, particularly when subjected to a weaker ambient vibration.

  9. KIR2DL2/2DL3-E35 alleles are functionally stronger than -Q35 alleles

    Science.gov (United States)

    Bari, Rafijul; Thapa, Rajoo; Bao, Ju; Li, Ying; Zheng, Jie; Leung, Wing

    2016-03-01

    KIR2DL2 and KIR2DL3 segregate as alleles of a single locus in the centromeric motif of the killer cell immunoglobulin-like receptor (KIR) gene family. Although KIR2DL2/L3 polymorphism is known to be associated with many human diseases and is an important factor for donor selection in allogeneic hematopoietic stem cell transplantation, the molecular determinant of functional diversity among various alleles is unclear. In this study we found that KIR2DL2/L3 with glutamic acid at position 35 (E35) are functionally stronger than those with glutamine at the same position (Q35). Cytotoxicity assay showed that NK cells from HLA-C1 positive donors with KIR2DL2/L3-E35 could kill more target cells lacking their ligands than NK cells with the weaker -Q35 alleles, indicating better licensing of KIR2DL2/L3+ NK cells with the stronger alleles. Molecular modeling analysis reveals that the glutamic acid, which is negatively charged, interacts with positively charged histidine located at position 55, thereby stabilizing KIR2DL2/L3 dimer and reducing entropy loss when KIR2DL2/3 binds to HLA-C ligand. The results of this study will be important for future studies of KIR2DL2/L3-associated diseases as well as for donor selection in allogeneic stem cell transplantation.

  10. Which global stock indices trigger stronger contagion risk in the Vietnamese stock market? Evidence using a bivariate analysis

    Directory of Open Access Journals (Sweden)

    Wang Kuan-Min

    2013-01-01

    Full Text Available This paper extends recent investigations into risk contagion effects on stock markets to the Vietnamese stock market. Daily data spanning October 9, 2006 to May 3, 2012 are sourced to empirically validate the contagion effects between the stock markets of Vietnam and those of China, Japan, Singapore, and the US. To facilitate the validation of contagion effects with market-related coefficients, this paper constructs a bivariate EGARCH model of dynamic conditional correlation coefficients. Using the correlation contagion test and Dungey et al.'s (2005) contagion test, we find contagion effects between the Vietnamese and the four other stock markets, namely Japan, Singapore, China, and the US. Second, we show that the Japanese stock market causes stronger contagion risk in the Vietnamese stock market compared to the stock markets of China, Singapore, and the US. Finally, we show that the Chinese and US stock markets cause weaker contagion effects in the Vietnamese stock market because of stronger interdependence effects between the former two markets.
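
    The paper's bivariate EGARCH model of dynamic conditional correlations is involved; as a hypothetical stand-in, an exponentially weighted (RiskMetrics-style) moving correlation conveys the core idea of a time-varying cross-market correlation. This is NOT the paper's model, only a sketch of the concept.

```python
import numpy as np

def ewma_corr(r1, r2, lam=0.94):
    """Exponentially weighted moving correlation between two return series.

    A simple stand-in for dynamic conditional correlation: variances and
    covariance are updated recursively with decay factor lam (0.94 is the
    classic RiskMetrics value), and their ratio gives the correlation path.
    """
    v1 = v2 = c12 = None
    out = []
    for a, b in zip(r1, r2):
        if v1 is None:                       # initialize with first observation
            v1, v2, c12 = a * a, b * b, a * b
        else:
            v1 = lam * v1 + (1 - lam) * a * a
            v2 = lam * v2 + (1 - lam) * b * b
            c12 = lam * c12 + (1 - lam) * a * b
        out.append(c12 / np.sqrt(v1 * v2))
    return np.array(out)
```

    A contagion analysis would then test whether this correlation path shifts significantly during a crisis window relative to a tranquil baseline.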

  11. The eNanoMapper database for nanomaterial safety information

    Directory of Open Access Journals (Sweden)

    Nina Jeliazkova

    2015-07-01

    Full Text Available Background: The NanoSafety Cluster, a cluster of projects funded by the European Commission, identified the need for a computational infrastructure for toxicological data management of engineered nanomaterials (ENMs). Ontologies, open standards, and interoperable designs were envisioned to empower a harmonized approach to European research in nanotechnology. This setting provides a number of opportunities and challenges in the representation of nanomaterials data and the integration of ENM information originating from diverse systems. Within this cluster, eNanoMapper works towards supporting the collaborative safety assessment for ENMs by creating a modular and extensible infrastructure for data sharing, data analysis, and building computational toxicology models for ENMs. Results: The eNanoMapper database solution builds on the previous experience of the consortium partners in supporting diverse data through flexible data storage, open source components and web services. We have recently described the design of the eNanoMapper prototype database along with a summary of challenges in the representation of ENM data and an extensive review of existing nano-related data models, databases, and nanomaterials-related entries in chemical and toxicogenomic databases. This paper continues with a focus on the database functionality exposed through its application programming interface (API), and its use in visualisation and modelling. Considering the preferred community practice of using spreadsheet templates, we developed a configurable spreadsheet parser facilitating user-friendly data preparation and data upload. We further present a web application able to retrieve the experimental data via the API and analyze it with multiple data preprocessing and machine learning algorithms. Conclusion: We demonstrate how the eNanoMapper database is used to import and publish online ENM and assay data from several data sources, how the “representational state
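
    Retrieving substance data through such an API typically means flattening nested JSON into tabular rows. The sketch below uses illustrative field names (`substance`, `composition`, `role`) that are assumptions for this example, not the actual eNanoMapper schema, and parses an embedded sample rather than making a live request.

```python
import json

# Hypothetical excerpt of a JSON response such as a nanomaterials API
# might return; names and values are illustrative only.
SAMPLE = json.loads("""
{
  "substance": [
    {"name": "NM-101", "publicname": "TiO2 NM-101",
     "composition": [{"component": "TiO2", "role": "CORE"}]},
    {"name": "NM-300K", "publicname": "Ag NM-300K",
     "composition": [{"component": "Ag", "role": "CORE"}]}
  ]
}
""")

def core_components(payload):
    """Flatten an API payload into (substance, core material) pairs."""
    rows = []
    for s in payload.get("substance", []):
        for c in s.get("composition", []):
            if c.get("role") == "CORE":
                rows.append((s["publicname"], c["component"]))
    return rows
```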

  12. Abstract databases in nuclear medicine; New database for articles not indexed in PubMed

    International Nuclear Information System (INIS)

    large number of abstracts and servicing a larger user-community. The database is placed at the URL: http://www.nucmediex.net. We hope that nuclear medicine professionals will contribute to building this database and that it will be a valuable source of information. (author)

  13. Croatian Cadastre Database Modelling

    Directory of Open Access Journals (Sweden)

    Zvonko Biljecki

    2013-04-01

    Full Text Available The Cadastral Data Model has been developed as a part of a larger programme to improve the products and production environment of the Croatian Cadastral Service of the State Geodetic Administration (SGA). The goal of the project was to create a cadastral data model conforming to relevant standards and specifications in the field of geoinformation (GI) adopted by international organisations for standardisation under the competence of GI (ISO TC211 and OpenGIS) and their implementations. The main guidelines during the project have been object-oriented conceptual modelling of the updated users' requests and a "new" cadastral data model designed by the SGA - Faculty of Geodesy - Geofoto LLC project team. The UML of the conceptual model is given for all feature categories and is described only at class level. The next step was the UML technical model, which was developed from the UML conceptual model. The technical model integrates different UML schemas into one unified schema. XML (eXtensible Markup Language) was applied for the XML description of the UML models, and the XML schema was then transferred into a GML (Geography Markup Language) application schema. With this procedure we have completely described the behaviour of each cadastral feature and the rules for the transfer and storage of cadastral features into the database.

  14. A database devoted to the insects of the cultural heritage

    Directory of Open Access Journals (Sweden)

    Fabien Fohrer

    2011-08-01

    Full Text Available This database, implemented by both the CICRP and the INRA, gathers the most important pests affecting the cultural heritage. These insects represent a serious threat to the preservation of cultural properties such as museum collections, libraries and archives, movable objects and immovable objects in historical buildings. It is an easy tool for identifying the species of interest. It also permits very prompt undertaking of the required actions against the infestations. This database is of interest to any professional in charge of the conservation of the cultural heritage along with any other professional or scientist interested in these subjects.

  15. A thermodynamic database for geophysical applications

    Science.gov (United States)

    Saxena, S. K.

    2013-12-01

    Several thermodynamic databases are available for calculation of the equilibrium reference state of the model earth. Prominent among these are the databases of (a) SLB (1), (b) HP (2) and (c) FSPW (3). The two major problems, as discussed in a meeting of the database scientists (4), lie in the formulation of solid solutions and equations of state. The models adopted in databases (1) and (2) do not account for multi-components in natural solids, and the sub-lattice or compound-energy models used in (3) require a lot of fictive compound and mixing energy data for which there is no ongoing effort at present. The EOS formulation in (1) is based on the Mie-Gruneisen equation of state and in (2) on a modification of the Tait EOS with limited parameters. Database (3) adopted the Birch-Murnaghan EOS and used it at high temperature by making compressibility a function of temperature. The (2) and (3) models lead to physically unacceptable values of entropy and heat capacity at extreme conditions. The problem is as much associated with the EOS formulation as with the adoption of a heat capacity change with temperature at 1 bar, as discussed by Brosh (5). None of the databases (1), (2) or (3) includes data on multicomponent fluid at extreme conditions. These problems have been addressed in the new database modified after (3). It retains the solution models for solids as in (3) and adds the Brosh model (5) for solid solutions and the Belonoshko et al. (6) model for a 13-component C-H-O-S fluid. The superfluid model builds on the combination of experimental data on pure and mixed fluids at temperatures lower than 1000 K over several kilobars and molecular-dynamics-generated data at extreme conditions, and has been found to be consistent with all the recent experimental data. New high-pressure experiments on dissociation of volatile-containing solids using laser- and externally-heated DAC are being conducted to obtain new pressure-volume-temperature data on fluids to extend the current kb
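
    For reference, the third-order Birch-Murnaghan EOS mentioned above can be evaluated directly. This is the standard textbook form of the equation, not code from any of the cited databases; the parameter values in the test are illustrative.

```python
def birch_murnaghan(V, V0, K0, K0p):
    """Third-order Birch-Murnaghan pressure P(V), in the units of K0.

    V   : volume
    V0  : zero-pressure volume
    K0  : isothermal bulk modulus at P = 0
    K0p : pressure derivative of the bulk modulus (K0' = 4 reduces
          the expression to the second-order form)
    """
    f = (V0 / V) ** (1.0 / 3.0)  # linear compression ratio
    return 1.5 * K0 * (f**7 - f**5) * (1.0 + 0.75 * (K0p - 4.0) * (f**2 - 1.0))
```

    The criticism in the abstract concerns extending such isothermal forms to high temperature by making compressibility temperature-dependent, which is where unphysical entropies and heat capacities can arise.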

  16. Geospatial Database for Strata Objects Based on Land Administration Domain Model (ladm)

    Science.gov (United States)

    Nasorudin, N. N.; Hassan, M. I.; Zulkifli, N. A.; Rahman, A. Abdul

    2016-09-01

    Recently in our country, the construction of buildings has become more complex, and a strata objects database is becoming more important for registering the real world as people now own and use multiple levels of space. Furthermore, strata titles are increasingly important and need to be well managed. LADM, also known as ISO 19152, is a standard model for land administration that allows integrated 2D and 3D representation of spatial units. The aim of this paper is to develop a strata objects database using LADM. The paper discusses the current 2D geospatial database and the need for a 3D geospatial database in the future, develops a strata objects database using the standard data model (LADM), and analyzes the developed database against the LADM data model. The current cadastre system in Malaysia, including strata titles, is discussed; the problems in the 2D geospatial database are listed. The processes to design a strata objects database are conceptual, logical and physical database design. The strata objects database will allow us to find both non-spatial and spatial strata title information and thus show the location of a strata unit. This development may help in handling strata titles and their information.
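
    The conceptual-to-physical design step can be sketched as a toy relational schema loosely borrowing LADM class names (LA_SpatialUnit, LA_BAUnit). Table and column names here are illustrative assumptions, not the paper's actual physical design.

```python
import sqlite3

# In-memory toy schema: a strata spatial unit (with its storey) and the
# basic administrative unit (strata title) registered against it.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE la_spatialunit (
    su_id    INTEGER PRIMARY KEY,
    level    INTEGER NOT NULL,          -- storey of the strata unit
    geometry TEXT                       -- e.g. a WKT footprint polygon
);
CREATE TABLE la_baunit (
    ba_id    INTEGER PRIMARY KEY,
    su_id    INTEGER NOT NULL REFERENCES la_spatialunit(su_id),
    title_no TEXT NOT NULL              -- strata title reference
);
""")
conn.execute("INSERT INTO la_spatialunit VALUES (1, 3, 'POLYGON((...))')")
conn.execute("INSERT INTO la_baunit VALUES (10, 1, 'HSD-12345')")

# Joining title to spatial unit answers the query described in the
# abstract: find a title together with the location (here, storey).
row = conn.execute("""
    SELECT b.title_no, s.level
    FROM la_baunit b JOIN la_spatialunit s ON s.su_id = b.su_id
""").fetchone()
```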

  17. Wind turbine reliability database update.

    Energy Technology Data Exchange (ETDEWEB)

    Peters, Valerie A.; Hill, Roger Ray; Stinebaugh, Jennifer A.; Veers, Paul S.

    2009-03-01

    This report documents the status of the Sandia National Laboratories' Wind Plant Reliability Database. Included in this report are updates on the form and contents of the Database, which stems from a five-step process of data partnerships, data definition and transfer, data formatting and normalization, analysis, and reporting. Selected observations are also reported.

  18. Hanford Site technical baseline database

    Energy Technology Data Exchange (ETDEWEB)

    Porter, P.E., Westinghouse Hanford

    1996-05-10

    This document includes a cassette tape that contains the Hanford specific files that make up the Hanford Site Technical Baseline Database as of May 10, 1996. The cassette tape also includes the delta files that delineate the differences between this revision and revision 3 (April 10, 1996) of the Hanford Site Technical Baseline Database.

  19. Hanford Site technical baseline database

    Energy Technology Data Exchange (ETDEWEB)

    Porter, P.E.

    1996-09-30

    This document includes a cassette tape that contains the Hanford specific files that make up the Hanford Site Technical Baseline Database as of September 30, 1996. The cassette tape also includes the delta files that delineate the differences between this revision and revision 4 (May 10, 1996) of the Hanford Site Technical Baseline Database.

  20. EXPERIMENTAL EVALUATION OF NOSQL DATABASES

    Directory of Open Access Journals (Sweden)

    Veronika Abramova

    2014-10-01

    Full Text Available Relational databases are a universally used technology that enables storage, management and retrieval of varied data schemas. However, execution of requests can become a lengthy and inefficient process for some large databases. Moreover, storing large amounts of data requires servers with larger capacities and scalability capabilities. Relational databases have limitations in dealing with scalability for large volumes of data. On the other hand, non-relational database technologies, also known as NoSQL, were developed to better meet the needs of key-value storage of large numbers of records. But there is a large number of NoSQL candidates, and most have not been compared thoroughly yet. The purpose of this paper is to compare different NoSQL databases, evaluating their performance according to typical use for storing and retrieving data. We tested 10 NoSQL databases with the Yahoo! Cloud Serving Benchmark using a mix of operations to better understand the capability of non-relational databases for handling different requests, and to understand how performance is affected by each database type and its internal mechanisms.
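
    A YCSB-style run pairs a load phase with a mixed read/update phase. The sketch below times such a workload against any dict-like store; a real benchmark would target an actual NoSQL client library, and the workload parameters here (1000 keys, 95% reads) are illustrative, not YCSB's exact defaults.

```python
import random
import time

def run_workload(store, n_ops=10000, read_ratio=0.95, seed=42):
    """YCSB-style mixed read/update workload against a key-value store.

    `store` is anything with dict-like __getitem__/__setitem__.
    Returns (elapsed_seconds, reads, updates).
    """
    rng = random.Random(seed)
    keys = [f"user{i}" for i in range(1000)]
    for k in keys:                       # load phase
        store[k] = "x" * 100
    reads = updates = 0
    t0 = time.perf_counter()
    for _ in range(n_ops):               # run phase: mixed operations
        k = rng.choice(keys)
        if rng.random() < read_ratio:
            _ = store[k]
            reads += 1
        else:
            store[k] = "y" * 100
            updates += 1
    return time.perf_counter() - t0, reads, updates
```

    Running the same workload against several stores and comparing elapsed times is the essence of the comparison the paper performs, albeit at much larger scale.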

  1. One Database, Four Monofunctional Dictionaries

    DEFF Research Database (Denmark)

    Bergenholtz, Inger; Bergenholtz, Henning

    2013-01-01

    and for a certain user group for certain needs. This paper will argue for the need of dictionary designs for monofunctional dictionaries. Doing that, we need to be aware of the fact that a lexicographical database is not a dictionary. A database contains data which can be presented in one or more monofunctional...

  2. XCOM: Photon Cross Sections Database

    Science.gov (United States)

    SRD 8 XCOM: Photon Cross Sections Database (Web, free access)   A web database is provided which can be used to calculate photon cross sections for scattering, photoelectric absorption and pair production, as well as total attenuation coefficients, for any element, compound or mixture (Z from 1 to 100), at energies from 1 keV to 100 GeV.
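
    A typical use of XCOM output is computing transmission through an absorber via the Beer-Lambert law, I/I0 = exp(-(mu/rho) * rho * x). The coefficient value below is made up for illustration, not an actual XCOM lookup.

```python
import math

def transmitted_fraction(mu_over_rho_cm2_g, density_g_cm3, thickness_cm):
    """Fraction of photons transmitted through a uniform absorber.

    mu_over_rho_cm2_g : mass attenuation coefficient (the quantity one
                        would look up in XCOM), in cm^2/g
    density_g_cm3     : absorber density, g/cm^3
    thickness_cm      : absorber thickness, cm
    """
    return math.exp(-mu_over_rho_cm2_g * density_g_cm3 * thickness_cm)

# Illustrative only: a made-up coefficient of 0.2 cm^2/g through 1 cm
# of a unit-density material.
frac = transmitted_fraction(0.2, 1.0, 1.0)
```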

  3. Numerical databases in marine biology

    Digital Repository Service at National Institute of Oceanography (India)

    Sarupria, J.S.; Bhargava, R.M.S.


  4. A Molecular Biology Database Digest

    OpenAIRE

    Bry, François; Kröger, Peer

    2000-01-01

    Computational Biology or Bioinformatics has been defined as the application of mathematical and Computer Science methods to solving problems in Molecular Biology that require large-scale data, computation, and analysis [18]. As expected, Molecular Biology databases play an essential role in Computational Biology research and development. This paper introduces current Molecular Biology databases, stressing data modeling, data acquisition, data retrieval, and the integration...

  5. Mathematical Notation in Bibliographic Databases.

    Science.gov (United States)

    Pasterczyk, Catherine E.

    1990-01-01

    Discusses ways in which using mathematical symbols to search online bibliographic databases in scientific and technical areas can improve search results. The representations used for Greek letters, relations, binary operators, arrows, and miscellaneous special symbols in the MathSci, Inspec, Compendex, and Chemical Abstracts databases are…

  6. Content independence in multimedia databases

    NARCIS (Netherlands)

    Vries, A.P. de

    2001-01-01

    A database management system is a general-purpose software system that facilitates the processes of defining, constructing, and manipulating databases for various applications. This article investigates the role of data management in multimedia digital libraries, and its implications for the design

  7. The magnet components database system

    Energy Technology Data Exchange (ETDEWEB)

    Baggett, M.J. (Brookhaven National Lab., Upton, NY (USA)); Leedy, R.; Saltmarsh, C.; Tompkins, J.C. (Superconducting Supercollider Lab., Dallas, TX (USA))

    1990-01-01

    The philosophy, structure, and usage of MagCom, the SSC magnet components database, are described. The database has been implemented in Sybase (a powerful relational database management system) on a UNIX-based workstation at the Superconducting Super Collider Laboratory (SSCL); magnet project collaborators can access the database via network connections. The database was designed to contain the specifications and measured values of important properties for major materials, plus configuration information (specifying which individual items were used in each cable, coil, and magnet) and the test results on completed magnets. These data will facilitate the tracking and control of the production process as well as the correlation of magnet performance with the properties of its constituents. 3 refs., 10 figs.

  8. The Danish Fetal Medicine Database

    DEFF Research Database (Denmark)

    Ekelund, Charlotte K; Petersen, Olav B; Jørgensen, Finn S;

    2015-01-01

    OBJECTIVE: To describe the establishment and organization of the Danish Fetal Medicine Database and to report national results of first-trimester combined screening for trisomy 21 in the 5-year period 2008-2012. DESIGN: National register study using prospectively collected first-trimester screening … data from the Danish Fetal Medicine Database. POPULATION: Pregnant women in Denmark undergoing first-trimester screening for trisomy 21. METHODS: Data on maternal characteristics, biochemical and ultrasonic markers are continuously sent electronically from local fetal medicine databases (Astraia Gmbh … %. The national screen-positive rate increased from 3.6% in 2008 to 4.7% in 2012. The national detection rate of trisomy 21 was reported to be between 82 and 90% in the 5-year period. CONCLUSION: A national fetal medicine database has been successfully established in Denmark. Results from the database have shown …

  9. Moving Observer Support for Databases

    DEFF Research Database (Denmark)

    Bukauskas, Linas

    architecture to exchange data between database and visualization. Thus, the interaction of the visualizer and the database is kept to the minimum, which most often leads to superfluous data being passed from database to visualizer. This Ph.D. thesis presents a novel tight coupling of database and visualizer. … The thesis discusses the VR-tree, an extension of the R-tree that enables observer relative data extraction. To support incremental observer position relative data extraction the thesis proposes the Volatile Access Structure (VAST). VAST is a main memory structure that caches nodes of the VR-tree. VAST … and visualization systems. The thesis describes other techniques that extend the functionality of an observer aware database to support the extraction of the N most visible objects. This functionality is particularly useful if the number of newly visible objects is still too large. The thesis investigates how …
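
    The "N most visible objects" extraction can be illustrated with a simple heap-based ranking. The visibility measure used here (apparent angular size, approximated as radius/distance) is an assumption for illustration, not necessarily the measure the VR-tree/VAST structures actually rank by.

```python
import heapq
import math

def n_most_visible(objects, observer, n):
    """Return the n objects with the greatest apparent size.

    objects  : iterable of (id, (x, y), radius)
    observer : (x, y) observer position
    """
    ox, oy = observer

    def visibility(obj):
        _, (x, y), r = obj
        d = math.hypot(x - ox, y - oy)
        return r / d if d > 0 else math.inf  # co-located object dominates

    # nlargest keeps only n candidates in a heap rather than sorting all.
    return heapq.nlargest(n, objects, key=visibility)
```

    An observer-aware index would avoid even enumerating most objects; this linear scan only shows the ranking criterion.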

  10. The experimental database

    International Nuclear Information System (INIS)

    A large number of new standards measurements have been carried out since the completion of the ENDF/B-VI standards evaluation. Furthermore, some measurements used in that evaluation have undergone changes that also need to be incorporated into the new evaluation of the standards. Measurements now exist for certain standards up to 200 MeV. These measurements, as well as those used in the ENDF/B-VI evaluation of the standards, have been included in the database for the new international evaluation of the neutron cross-section standards. Many of the experiments agree well with the ENDF/B-VI evaluations. However, some problems have been observed: There was conflict with the H(n,n) differential cross-section around 14 MeV and at about 190 MeV. New measurements of the 10B branching ratio suggested a problem, although additional experimental work indicated that the ENDF/B-VI values are generally reasonable. Differences were observed for the 10B total cross-section and the 10B(n,α1γ) cross-section. Except for possible differences near 270 keV, the 197Au(n,γ) cross-section measurements are generally in agreement with the ENDF/B-VI evaluation. New measurements of the 235U(n,f) cross-section indicate higher values above 15 MeV. There is concern with some new absolute 238U(n,f) cross-section measurements since they indicate larger values than supportive 238U(n,f)/235U(n,f) cross-section ratio measurements in the 5-20 MeV energy region. At very high energies there are significant differences in the 238U(n,f)/235U(n,f) cross-section ratio - the maximum difference exceeds 5% at 200 MeV.

  11. Technical Potential for Photovoltaics on Buildings in the EU-27

    NARCIS (Netherlands)

    Defaix, P.R.; van Sark, W.G.J.H.M.; Worrell, E.; de Visser, Erika

    2012-01-01

    Accurate knowledge on the technical potential for Building Integrated PhotoVoltaics (BIPV) in the various member states of the European Union is unavailable. To estimate the potential for BIPV we developed a method using readily available statistical data on buildings from European databases. Based

  12. The case for stronger regulation of private practitioners to control tuberculosis in low- and middle-income countries.

    Science.gov (United States)

    Mahendradhata, Yodi

    2015-01-01

    Tuberculosis case management practices of private practitioners in low- and middle-income countries are commonly not in compliance with treatment guidelines, thus increasing the risk of drug resistance. National tuberculosis control programs have long been encouraged to collaborate with private providers to improve compliance, but there is no example yet of a sustained, large-scale collaboration with private practitioners in these settings. Regulation has long been recognized as a potential response to poor-quality care; however, there has been a lack of interest from international actors in investing in stronger regulation of private providers in these countries, due to limited evidence and many implementation challenges. Regulatory strategies have now evolved beyond the costly conventional form of command and control. These new strategies need to be tested for addressing the challenge of poor-quality care among private providers. Multilateral and bilateral funding agencies committed to tuberculosis control need to invest in strengthening governments' capacity to effectively regulate private providers. PMID:26499482

  13. Strategic Factors Influencing National and Regional Systems of Innovation: A Case of Weaker NSI with Stronger RSI

    Directory of Open Access Journals (Sweden)

    Pir Roshanuddin Shah Rashdi

    2015-04-01

    Full Text Available The issues of the relationship between the NSI (National System of Innovation) and the RSI (Regional System of Innovation) are not well reported in innovation policy research: that is, whether the NSI sits on top of the RSI, or whether the importance of regions makes for stronger NSIs. This raises concerns regarding the development of a strategic relationship between the two. For this, two cases, Catalonia (Spain) and N Ireland (the UK), have been selected based on theoretical sampling. Key economic indicators have been identified and quantitatively analyzed. The evidence suggests that a strong NSI has a positive influence on the RSI. In addition, the concentration of knowledge and the promotion of institutions may be strategically established, and the needed resources may then be injected to produce high-quality human resources. There is, however, a need for more comprehensive studies to be conducted in order to validate the results of this research.

  14. Emotional reactions to standardized stimuli in women with borderline personality disorder: stronger negative affect, but no differences in reactivity.

    Science.gov (United States)

    Jacob, Gitta A; Hellstern, Kathrin; Ower, Nicole; Pillmann, Mona; Scheel, Corinna N; Rüsch, Nicolas; Lieb, Klaus

    2009-11-01

    Emotional dysregulation is hypothesized to be a core feature of borderline personality disorder (BPD). In this study, we investigated the course of emotions in response to standardized emotion inductions in BPD. A total of 26 female BPD patients, 28 matched healthy control subjects, and 15 female patients with major depressive disorder listened to short stories inducing an angry, joyful, or neutral mood. Before and immediately after each story, as well as 3 and 6 minutes later, participants rated their current anger, joy, anxiety, shame, and sadness. All 3 groups showed the same increase and decrease of emotions. However, strong group differences in the general level of all negative emotions occurred. While sadness was stronger in both BPD and major depressive disorder as compared with healthy controls, all other negative emotions were significantly increased in BPD only, independent of comorbid depression. Extreme negative affectivity may be a more appropriate description of BPD-related emotional problems than emotional hyperreactivity. PMID:19996718

  15. Database on wind characteristics. Contents of database bank

    DEFF Research Database (Denmark)

    Larsen, Gunner Chr.; Hansen, K.S.

    2001-01-01

    The main objective of IEA R&D Wind Annex XVII - Database on Wind Characteristics - is to provide wind energy planners and designers, as well as the international wind engineering community in general, with easy access to quality controlled measured windfield time series observed in a wide range...... for the available data in the established database bank and part three is the Users Manual describing the various ways to access and analyse the data. The present report constitutes the second part of the Annex XVII reporting. Basically, the database bank contains three categories of data, i.e. i) high sampled wind...

  16. Database Description - KOME | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available KOME Database Description General information of database Database name Knowledge-based Oryza Molecular biological...baraki 305-8602, Japan National Institute of Agrobiological Sciences Plant Genome Research Unit Shoshi Kikuc...ca rice Author name(s): Rice Full-Length cDNA Consortium; National Institute of Agrobiological Sciences Rice...base maintenance site National Institute of Agrobiological Sciences URL of the original website http://cdna0... Encyclopedia Alternative name KOME Creator Creator Name: Shoshi Kikuchi Creator Affiliation: National Institute of Agrobiologi

  17. Detailed weather data generator for building simulations

    CERN Document Server

    Adelard, L; Garde, F; Gatina, J -C

    2012-01-01

    Thermal building simulation software needs meteorological files for thermal comfort and energy evaluation studies. Few tools can make significant meteorological data available, such as generated typical years, representative days, or artificial meteorological databases. This paper presents a new software tool, RUNEOLE, used to provide weather data for building applications with a method suited to all kinds of climates. RUNEOLE associates three modules for the description, modelling, and generation of weather data. The statistical description of an existing meteorological database makes typical representative days available and leads to the creation of model libraries. The generation module leads to the generation of non-existing sequences. The software aims to be usable by researchers and designers through interactivity, ease of use, and easy communication. The conceptual basis of this tool is exposed, and we propose two examples of applications in building physics for tropical hu...
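    The describe-then-generate idea can be illustrated in miniature. The sketch below is a hypothetical simplification, not RUNEOLE's actual algorithm: it fits per-hour mean and standard deviation from observed daily temperature profiles, then samples a synthetic day from a Gaussian model per hour.

```python
import random
import statistics

def describe(observed_days):
    """Per-hour mean and standard deviation over a set of observed days."""
    hours = len(observed_days[0])
    stats = []
    for h in range(hours):
        values = [day[h] for day in observed_days]
        stats.append((statistics.mean(values), statistics.pstdev(values)))
    return stats

def generate_day(stats, rng=random):
    """Sample a synthetic (non-existing) day, hour by hour, from the fit."""
    return [rng.gauss(mean, sd) for mean, sd in stats]

# Two toy "observed" days of hourly temperature (24 values each)
obs = [[20 + h * 0.2 for h in range(24)],
       [21 + h * 0.2 for h in range(24)]]
profile = describe(obs)
synthetic = generate_day(profile, random.Random(0))
```

    A real generator would preserve inter-hour correlation and cross-variable dependencies (radiation, humidity, wind), which this per-hour sketch deliberately ignores.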

  18. Building renovation with interior insulation on solid masonry walls in Denmark - A study of the building segment and possible solutions

    DEFF Research Database (Denmark)

    Odgaard, Tommy; Bjarløv, Søren Peter; Rode, Carsten;

    2015-01-01

    The segment size of the Danish multi-story building stock from the period 1851-1930 is established through a unique major database managed by the Danish authorities. The outcome illustrates a large segment with 219,202 apartment units distributed over 14,832 unique buildings, all sharing characte...

  19. Update History of This Database - Open TG-GATEs Pathological Image Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Open TG-GATEs Pathological Image Database Update History of This Database Date Update contents 2012/05/24 Op...tio About This Database Database Description Download License Update History of This Database Update

  20. Global Building Inventory for Earthquake Loss Estimation and Risk Management

    Science.gov (United States)

    Jaiswal, Kishor; Wald, David; Porter, Keith

    2010-01-01

    We develop a global database of building inventories using a taxonomy of global building types for use in near-real-time post-earthquake loss estimation and pre-earthquake risk analysis, for the U.S. Geological Survey’s Prompt Assessment of Global Earthquakes for Response (PAGER) program. The database is available for public use, subject to peer review, scrutiny, and open enhancement. On a country-by-country level, it contains estimates of the distribution of building types categorized by material, lateral force resisting system, and occupancy type (residential or nonresidential, urban or rural). The database draws on and harmonizes numerous sources: (1) UN statistics, (2) UN Habitat’s demographic and health survey (DHS) database, (3) national housing censuses, (4) the World Housing Encyclopedia and (5) other literature.
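    The country-level structure described above can be sketched as a nested mapping. The fractions below are invented for illustration and are not actual PAGER values; `exposure_by_type` is a hypothetical helper showing how such a table supports loss estimation by splitting an exposed population across building types.

```python
# Hypothetical inventory: per country and setting, the share of buildings
# by construction type (shares sum to 1.0). Not actual PAGER data.
inventory = {
    "CountryA": {
        "urban_residential": {
            "reinforced_concrete": 0.40,
            "unreinforced_masonry": 0.35,
            "wood": 0.25,
        },
    },
}

def exposure_by_type(country, setting, population):
    """Split an exposed population across building types by inventory share."""
    shares = inventory[country][setting]
    return {btype: population * share for btype, share in shares.items()}

exposed = exposure_by_type("CountryA", "urban_residential", 100_000)
```

    Per-type exposure like this would then be combined with type-specific fragility to estimate losses.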

  1. Cloudsat tropical cyclone database

    Science.gov (United States)

    Tourville, Natalie D.

    CloudSat (CS), the first 94 GHz spaceborne cloud profiling radar (CPR), launched in 2006 to study the vertical distribution of clouds. Not only are CS observations revealing inner vertical cloud details of water and ice globally but CS overpasses of tropical cyclones (TC's) are providing a new and exciting opportunity to study the vertical structure of these storm systems. CS TC observations are providing first time vertical views of TC's and demonstrate a unique way to observe TC structure remotely from space. Since December 2009, CS has intersected every globally named TC (within 1000 km of storm center) for a total of 5,278 unique overpasses of tropical systems (disturbance, tropical depression, tropical storm and hurricane/typhoon/cyclone (HTC)). In conjunction with the Naval Research Laboratory (NRL), each CS TC overpass is processed into a data file containing observational data from the afternoon constellation of satellites (A-TRAIN), Navy's Operational Global Atmospheric Prediction System Model (NOGAPS), European Center for Medium range Weather Forecasting (ECMWF) model and best track storm data. This study will describe the components and statistics of the CS TC database, present case studies of CS TC overpasses with complementary A-TRAIN observations and compare average reflectivity stratifications of TC's across different atmospheric regimes (wind shear, SST, latitude, maximum wind speed and basin). Average reflectivity stratifications reveal that characteristics in each basin vary from year to year and are dependent upon eye overpasses of HTC strength storms and ENSO phase. West Pacific (WPAC) basin storms are generally larger in size (horizontally and vertically) and have greater values of reflectivity at a predefined height than all other basins. Storm structure at higher latitudes expands horizontally. Higher vertical wind shear (≥ 9.5 m/s) reduces cloud top height (CTH) and the intensity of precipitation cores, especially in HTC strength storms
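    The "within 1000 km of storm center" criterion above amounts to a great-circle distance test on the radar ground track. A minimal sketch (not the NRL processing code; the track points are invented):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two points (haversine)."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def near_storm(track, storm_lat, storm_lon, radius_km=1000.0):
    """Keep only (lat, lon) ground-track points within radius of storm centre."""
    return [p for p in track
            if haversine_km(p[0], p[1], storm_lat, storm_lon) <= radius_km]

track = [(20.0, -60.0), (25.0, -40.0)]  # hypothetical CPR ground-track points
hits = near_storm(track, 21.0, -61.0)   # best-track storm centre (invented)
```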

  2. The Geophysical Database Management System in Taiwan

    Directory of Open Access Journals (Sweden)

    Tzay-Chyn Shin

    2013-01-01

    Full Text Available The Geophysical Database Management System (GDMS is an integrated and web-based open data service which has been developed by the Central Weather Bureau (CWB, Taiwan, ROC since 2005. This service went online on August 1, 2008. The GDMS provides six types of geophysical data acquired from the Short-period Seismographic System, Broadband Seismographic System, Free-field Strong-motion Station, Strong-motion Building Array, Global Positioning System, and Groundwater Observation System. When utilizing the GDMS website, users can download seismic event data and continuous geophysical data. At present, many researchers have accessed this public platform to obtain geophysical data. Clearly, the establishment of GDMS is a significant improvement in data sorting for interested researchers.

  3. User Guidelines for the Brassica Database: BRAD.

    Science.gov (United States)

    Wang, Xiaobo; Cheng, Feng; Wang, Xiaowu

    2016-01-01

    The genome sequence of Brassica rapa was first released in 2011. Since then, further Brassica genomes have been sequenced or are undergoing sequencing. It is therefore necessary to develop tools that help users to mine information from genomic data efficiently. This will greatly aid scientific exploration and breeding application, especially for those with low levels of bioinformatic training. Therefore, the Brassica database (BRAD) was built to collect, integrate, illustrate, and visualize Brassica genomic datasets. BRAD provides useful searching and data mining tools, and facilitates the search of gene annotation datasets, syntenic or non-syntenic orthologs, and flanking regions of functional genomic elements. It also includes genome-analysis tools such as BLAST and GBrowse. One of the important aims of BRAD is to build a bridge between Brassica crop genomes with the genome of the model species Arabidopsis thaliana, thus transferring the bulk of A. thaliana gene study information for use with newly sequenced Brassica crops. PMID:26519408

  4. Spatial Database Modeling for Indoor Navigation Systems

    Science.gov (United States)

    Gotlib, Dariusz; Gnat, Miłosz

    2013-12-01

    For many years, cartographers have been involved in designing GIS and navigation systems. Most GIS applications use outdoor data. Increasingly, similar applications are used inside buildings, so it is important to find a proper model for an indoor spatial database. The development of indoor navigation systems should draw on advanced teleinformation, geoinformatics, geodetic and cartographic knowledge. The authors present the fundamental requirements for an indoor data model for navigation purposes. Presenting some of the solutions adopted around the world, they emphasize that navigation applications require specific data to present navigation routes in the right way. An original solution for an indoor data model, created by the authors on the basis of the BISDM model, is presented. Its purpose is to expand the opportunities for use in indoor navigation.
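    As an illustration of the kind of data an indoor navigation model must carry, the sketch below represents rooms and corridor segments as graph nodes with door-to-door walking distances, and routes over them with Dijkstra's algorithm. The node names and distances are hypothetical, and this is far simpler than the BISDM-based model the authors propose.

```python
import heapq

# Rooms and corridor segments as nodes; doors/passages as weighted edges
# (walking distance in metres). All values are invented for illustration.
graph = {
    "room_101": {"corridor_1": 3.0},
    "corridor_1": {"room_101": 3.0, "corridor_2": 12.0, "stairs": 8.0},
    "corridor_2": {"corridor_1": 12.0, "stairs": 6.0, "room_215": 4.0},
    "stairs": {"corridor_1": 8.0, "corridor_2": 6.0},
    "room_215": {"corridor_2": 4.0},
}

def shortest_route(graph, start, goal):
    """Dijkstra over the indoor graph; returns (distance, node list)."""
    queue = [(0.0, start, [start])]
    seen = set()
    while queue:
        dist, node, path = heapq.heappop(queue)
        if node == goal:
            return dist, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, weight in graph[node].items():
            if nxt not in seen:
                heapq.heappush(queue, (dist + weight, nxt, path + [nxt]))
    return float("inf"), []

dist, route = shortest_route(graph, "room_101", "room_215")
```

    A production model would add floor membership, accessibility attributes, and geometry for rendering the route, which is precisely where a richer spatial database schema comes in.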

  5. Database Systems - Present and Future

    Directory of Open Access Journals (Sweden)

    2009-01-01

    Full Text Available Database systems nowadays play an increasingly important role in the knowledge-based society, in which computers have penetrated all fields of activity and the Internet tends to develop worldwide. In the current informatics context, developing applications with databases is the work of specialists; however, using databases, accessing a database from various applications, and some related concepts have become accessible to all categories of IT users. This paper aims to summarize the curricular area regarding fundamental database systems issues, which are necessary in order to train specialists in economic informatics higher education. Database systems integrate and interact with several informatics technologies and are therefore more difficult to understand and use. Thus, students should already know a set of minimum, mandatory concepts and their practical implementation: computer systems, programming techniques, programming languages, data structures. The article also presents current trends in the evolution of database systems, in the context of economic informatics.

  6. Unifying Memory and Database Transactions

    Science.gov (United States)

    Dias, Ricardo J.; Lourenço, João M.

    Software Transactional Memory is a concurrency control technique gaining increasing popularity, as it provides high-level concurrency control constructs and eases the development of highly multi-threaded applications. But this ease comes at the expense of restricting the operations that can be executed within a memory transaction, and operations such as terminal and file I/O are either not allowed or incur serious performance penalties. Database I/O is another example of an operation that is usually not allowed within a memory transaction. This paper proposes to combine memory and database transactions in a single unified model, benefiting from the ACID properties of database transactions and from the speed of main-memory data processing. The new unified model covers, without differentiating, both memory and database operations. Thus, users are allowed to freely intertwine memory and database accesses within the same transaction, knowing that the memory and database contents will always remain consistent and that the transaction will atomically abort or commit the operations in both memory and database. This approach allows the granularity of in-memory atomic actions to be increased and hence simplifies reasoning about them.
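    The unified commit/abort behaviour can be sketched in miniature with SQLite and a transaction-local copy of the in-memory state. This is an illustrative sketch of the idea only, not the authors' STM-based implementation; `unified_transaction` is a hypothetical helper.

```python
import sqlite3
from contextlib import contextmanager

@contextmanager
def unified_transaction(store, conn):
    """Buffer in-memory writes alongside a database transaction; on success
    commit both, on failure roll both back (sketch of the unified model)."""
    shadow = dict(store)            # transaction-local copy of the memory state
    try:
        yield shadow, conn
        conn.commit()               # database commit succeeds first...
        store.clear()
        store.update(shadow)        # ...then the memory writes become visible
    except Exception:
        conn.rollback()             # abort leaves both memory and DB untouched
        raise

store = {"balance": 100}
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log (entry TEXT)")

with unified_transaction(store, conn) as (mem, db):
    mem["balance"] -= 30
    db.execute("INSERT INTO log VALUES ('debit 30')")
```

    A real unified model would also detect conflicts between concurrent transactions; this single-threaded sketch only shows the atomic commit/abort coupling.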

  7. The YH database: the first Asian diploid genome database.

    Science.gov (United States)

    Li, Guoqing; Ma, Lijia; Song, Chao; Yang, Zhentao; Wang, Xiulan; Huang, Hui; Li, Yingrui; Li, Ruiqiang; Zhang, Xiuqing; Yang, Huanming; Wang, Jian; Wang, Jun

    2009-01-01

    The YH database is a server that allows the user to easily browse and download data from the first Asian diploid genome. The aim of this platform is to facilitate the study of this Asian genome and to enable improved organization and presentation of large-scale personal genome data. Powered by GBrowse, we illustrate here the genome sequences, SNPs, and sequencing reads in the MapView. The relationships between phenotype and genotype can be searched by location, dbSNP ID, HGMD ID, gene symbol and disease name. A BLAST web service is also provided for the purpose of aligning query sequences against the YH genome consensus. The YH database is currently one of only three personal genome databases, organizing the original data and analysis results in a user-friendly interface, an endeavor toward the fundamental goals of establishing personalized medicine. The database is available at http://yh.genomics.org.cn.

  8. Databases of the marine metagenomics

    KAUST Repository

    Mineta, Katsuhiko

    2015-10-28

    The metagenomic data obtained from marine environments is highly useful for understanding marine microbial communities. Compared with the conventional amplicon-based approach to metagenomics, the recent shotgun sequencing-based approach has become a powerful tool that provides an efficient way of grasping the diversity of an entire microbial community at a sampling point in the sea. However, this approach accelerates the accumulation of metagenome data as well as the increase of data complexity. Moreover, when the metagenomic approach is used to monitor temporal change of marine environments at multiple seawater locations, metagenomics data will accumulate at an enormous speed. Because this situation has started to become a reality at many marine research institutions and stations all over the world, data management and analysis will clearly be confronted by the so-called Big Data issues, such as how a database can be constructed efficiently and how useful knowledge should be extracted from a vast amount of data. In this review, we summarize all the major databases of marine metagenome data that are currently publicly available, noting that no database is devoted exclusively to marine metagenomes and that the number of metagenome databases including marine metagenome data is six, unexpectedly still small. We also discuss what we call reference databases, which are useful for constructing a marine metagenome database as well as for complementing it with important information. Finally, we point out a number of challenges to be conquered in constructing the marine metagenome database.

  9. Databases of the marine metagenomics.

    Science.gov (United States)

    Mineta, Katsuhiko; Gojobori, Takashi

    2016-02-01

    The metagenomic data obtained from marine environments is highly useful for understanding marine microbial communities. Compared with the conventional amplicon-based approach to metagenomics, the recent shotgun sequencing-based approach has become a powerful tool that provides an efficient way of grasping the diversity of an entire microbial community at a sampling point in the sea. However, this approach accelerates the accumulation of metagenome data as well as the increase of data complexity. Moreover, when the metagenomic approach is used to monitor temporal change of marine environments at multiple seawater locations, metagenomics data will accumulate at an enormous speed. Because this situation has started to become a reality at many marine research institutions and stations all over the world, data management and analysis will clearly be confronted by the so-called Big Data issues, such as how a database can be constructed efficiently and how useful knowledge should be extracted from a vast amount of data. In this review, we summarize all the major databases of marine metagenome data that are currently publicly available, noting that no database is devoted exclusively to marine metagenomes and that the number of metagenome databases including marine metagenome data is six, unexpectedly still small. We also discuss what we call reference databases, which are useful for constructing a marine metagenome database as well as for complementing it with important information. Finally, we point out a number of challenges to be conquered in constructing the marine metagenome database.

  10. Physical database design using Oracle

    CERN Document Server

    Burleson, Donald K

    2004-01-01

    INTRODUCTION TO ORACLE PHYSICAL DESIGN: Preface; Relational Databases and Physical Design; Systems Analysis and Physical Database Design; Introduction to Logical Database Design; Entity/Relation Modeling; Bridging between Logical and Physical Models; Physical Design Requirements Validation. PHYSICAL ENTITY DESIGN FOR ORACLE: Data Relationships and Physical Design; Massive De-Normalization: STAR Schema Design; Designing Class Hierarchies; Materialized Views and De-Normalization; Referential Integrity; Conclusion. ORACLE HARDWARE DESIGN: Planning the Server Environment; Designing the Network Infrastructure for Oracle; Oracle Netw...

  11. Practical database programming with Java

    CERN Document Server

    Bai, Ying

    2011-01-01

    "This important resource offers a detailed description about the practical considerations and applications in database programming using Java NetBeans 6.8 with authentic examples and detailed explanations. This book provides readers with a clear picture as to how to handle the database programming issues in the Java NetBeans environment. The book is ideal for classroom and professional training material. It includes a wealth of supplemental material that is available for download including Powerpoint slides, solution manuals, and sample databases"--

  12. Database characterisation of HEP applications

    Science.gov (United States)

    Piorkowski, Mariusz; Grancher, Eric; Topurov, Anton

    2012-12-01

    Oracle-based database applications underpin many key aspects of operations for both the LHC accelerator and the LHC experiments. In addition to the overall performance, the predictability of the response is a key requirement to ensure smooth operations, and delivering predictability requires understanding the applications from the ground up. Fortunately, database management systems provide several tools to check, measure, analyse and gather useful information. We present our experiences characterising the performance of several typical HEP database applications; these performance characterisations were used to deliver improved predictability and scalability, as well as to optimise the hardware platform choice, as we migrated to new hardware and Oracle 11g.

  13. Database characterisation of HEP applications

    International Nuclear Information System (INIS)

    Oracle-based database applications underpin many key aspects of operations for both the LHC accelerator and the LHC experiments. In addition to the overall performance, the predictability of the response is a key requirement to ensure smooth operations, and delivering predictability requires understanding the applications from the ground up. Fortunately, database management systems provide several tools to check, measure, analyse and gather useful information. We present our experiences characterising the performance of several typical HEP database applications; these performance characterisations were used to deliver improved predictability and scalability, as well as to optimise the hardware platform choice, as we migrated to new hardware and Oracle 11g.

  14. OECD/NEA thermochemical database

    International Nuclear Information System (INIS)

    This state-of-the-art report introduces the contents of the Chemical Data Service of the OECD/NEA and the results of an OECD/NEA survey of the thermodynamic and kinetic databases currently in use. It also summarizes the results of the Thermochemical Database Projects of the OECD/NEA. This report will be a guide for researchers to easily obtain validated thermodynamic and kinetic data for all substances from the available OECD/NEA database. (author). 75 refs

  15. Biological Databases for Human Research

    Institute of Scientific and Technical Information of China (English)

    Dong Zou; Lina Ma; Jun Yu; Zhang Zhang

    2015-01-01

    The completion of the Human Genome Project lays a foundation for systematically studying the human genome, from evolutionary history to precision medicine against diseases. With the explosive growth of biological data, an increasing number of biological databases have been developed in aid of human-related research. Here we present a collection of human-related biological databases and provide a mini-review by classifying them into different categories according to their data types. As human-related databases continue to grow not only in number but also in volume, challenges lie ahead in big data storage, processing, exchange and curation.

  16. Intelligent high-speed cutting database system development

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    In this paper, the components of a high-speed cutting system are analyzed first. The component variables of the high-speed cutting system are classified into four types: uncontrolled variables, process variables, control variables, and output variables. The relationships and interactions of these variables are discussed. Then, by analyzing and comparing frequently used intelligent reasoning methods, hybrid reasoning is employed to build the high-speed cutting database system, and the data structures of the high-speed cutting case base and databases are determined. Finally, the component parts and working process of the high-speed cutting database system based on hybrid reasoning are presented.
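    The case-retrieval half of such hybrid reasoning can be sketched as a nearest-case lookup over the uncontrolled variables. The case values, variable names, and distance weighting below are invented for illustration, not taken from the paper.

```python
# Hypothetical case base: uncontrolled variables (workpiece hardness in HB,
# tool diameter in mm) stored with previously successful control variables.
cases = [
    {"hardness": 180, "diameter": 10, "speed_m_min": 300, "feed_mm_rev": 0.10},
    {"hardness": 320, "diameter": 12, "speed_m_min": 180, "feed_mm_rev": 0.06},
]

def retrieve_case(hardness, diameter):
    """Case-based step of hybrid reasoning: return the nearest stored case
    by a (crudely normalised) distance over the uncontrolled variables."""
    def dist(case):
        return (((case["hardness"] - hardness) / 100.0) ** 2
                + (case["diameter"] - diameter) ** 2)
    return min(cases, key=dist)

best = retrieve_case(hardness=200, diameter=10)
```

    In a full hybrid system, rule-based reasoning would then adapt the retrieved control variables to the new situation rather than using them verbatim.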

  17. Space Launch System Ascent Static Aerodynamic Database Development

    Science.gov (United States)

    Pinier, Jeremy T.; Bennett, David W.; Blevins, John A.; Erickson, Gary E.; Favaregh, Noah M.; Houlden, Heather P.; Tomek, William G.

    2014-01-01

    This paper describes the wind tunnel testing work and data analysis required to characterize the static aerodynamic environment of the ascent portion of flight of NASA's Space Launch System (SLS). Scaled models of the SLS have been tested in transonic and supersonic wind tunnels to gather the high-fidelity data that are used to build aerodynamic databases. A detailed description of the wind tunnel test that was conducted to produce the latest version of the database is presented, and a representative set of aerodynamic data is shown. The wind tunnel data quality remains very high; however, some concerns with wall interference effects through transonic Mach numbers are also discussed. Post-processing and analysis of the wind tunnel dataset are crucial for the development of a formal ascent aerodynamics database.

  18. Event-based incremental updating of spatio-temporal database

    Institute of Scientific and Technical Information of China (English)

    周晓光; 陈军; 蒋捷; 朱建军; 李志林

    2004-01-01

    Based on the relationship among geographic events, spatial changes and database operations, a new automatic (semi-automatic) incremental updating approach for spatio-temporal databases (STDB), named event-based incremental updating (E-BIU), is proposed in this paper. First, the relationship among events, spatial changes and database operations is analyzed; then a total architecture for E-BIU implementation is designed, which includes an event queue, three managers and two sets of rules, and each component is presented in detail. The process of E-BIU of the master STDB is then described. An example of a building's incremental updating is given to illustrate the approach at the end. The result shows that E-BIU is an efficient automatic updating approach for a master STDB.
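    The event-queue-plus-rules architecture can be sketched as a mapping from event types to database operations, drained in order. The event types and rule bodies below are hypothetical simplifications of the E-BIU rule sets, with a plain dictionary standing in for the master STDB.

```python
from collections import deque

# Hypothetical rule set: each geographic event type maps to the database
# operation that keeps the master STDB current.
RULES = {
    "building_demolished": lambda db, oid: db.pop(oid, None),
    "building_built": lambda db, oid: db.__setitem__(oid, {"status": "new"}),
}

def process_events(db, queue):
    """Drain the event queue, applying the matching update rule per event."""
    while queue:
        event, object_id = queue.popleft()
        RULES[event](db, object_id)

stdb = {"b1": {"status": "old"}}
events = deque([("building_built", "b2"), ("building_demolished", "b1")])
process_events(stdb, events)
```

    The real approach also versions the changed objects over time, which a plain dictionary cannot show; the sketch only captures the event-to-operation dispatch.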

  19. The IACOB spectroscopic database: recent updates and first data release

    CERN Document Server

    Simón-Díaz, S; Apellániz, J Maíz; Castro, N; Herrero, A; Garcia, M; Pérez-Prieto, J A; Caon, N; Alacid, J M; Camacho, I; Dorda, R; Godart, M; González-Fernández, C; Holgado, G; Rübke, K

    2015-01-01

    The IACOB project is an ambitious long-term project contributing to a step forward in our knowledge of the physical properties and evolution of Galactic massive stars. The project aims at building a large database of high-resolution, multi-epoch spectra of Galactic OB stars, and at the scientific exploitation of that database using state-of-the-art models and techniques. In this proceeding, we summarize the latest updates to the IACOB spectroscopic database and highlight some of the first scientific results from the IACOB project; we also announce the first data release and the first public version of the iacob-broad tool for the line-broadening characterization of OB-type spectra.

  20. A database of immunoglobulins with integrated tools: DIGIT.

    KAUST Repository

    Chailyan, Anna

    2011-11-10

    The DIGIT (Database of ImmunoGlobulins with Integrated Tools) database (http://biocomputing.it/digit) is an integrated resource storing sequences of annotated immunoglobulin variable domains and enriched with tools for searching and analyzing them. The annotations in the database include information on the type of antigen, the respective germline sequences and on pairing information between light and heavy chains. Other annotations, such as the identification of the complementarity determining regions, assignment of their structural class and identification of mutations with respect to the germline, are computed on the fly and can also be obtained for user-submitted sequences. The system allows customized BLAST searches and automatic building of 3D models of the domains to be performed.