WorldWideScience

Sample records for advanced web-accessible database

  1. Village Green Project: Web-accessible Database

    Science.gov (United States)

    The purpose of this web-accessible database is for the public to be able to view instantaneous readings from a solar-powered air monitoring station located in a public location (prototype pilot test is outside of a library in Durham County, NC). The data are wirelessly transmitte...

  2. RNA FRABASE 2.0: an advanced web-accessible database with the capacity to search the three-dimensional fragments within RNA structures

    Directory of Open Access Journals (Sweden)

    Wasik Szymon

    2010-05-01

    Full Text Available Abstract Background Recent discoveries concerning novel functions of RNA, such as RNA interference, have contributed towards the growing importance of the field. In this respect, a deeper knowledge of complex three-dimensional RNA structures is essential to understand their new biological functions. A number of bioinformatic tools have been proposed to explore two major structural databases (PDB, NDB) in order to analyze various aspects of RNA tertiary structures. One of these tools is RNA FRABASE 1.0, the first web-accessible database with an engine for automatic search of 3D fragments within PDB-derived RNA structures. This search is based upon the user-defined RNA secondary structure pattern. In this paper, we present and discuss RNA FRABASE 2.0. This second version of the system represents a major extension of this tool in terms of providing new data and a wide spectrum of novel functionalities. An intuitively operated web server platform enables very fast user-tailored search of three-dimensional RNA fragments, their multi-parameter conformational analysis and visualization. Description RNA FRABASE 2.0 has stored information on 1565 PDB-deposited RNA structures, including all NMR models. The RNA FRABASE 2.0 search engine algorithms operate on the database of the RNA sequences and the new library of RNA secondary structures, coded in the dot-bracket format extended to hold multi-stranded structures and to cover residues whose coordinates are missing in the PDB files. The library of RNA secondary structures (and their graphics) is made available. A high level of efficiency of the 3D search has been achieved by introducing novel tools to formulate advanced searching patterns and to screen highly populated tertiary structure elements. RNA FRABASE 2.0 also stores data and conformational parameters in order to provide "on the spot" structural filters to explore the three-dimensional RNA structures. An instant visualization of the 3D RNA
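
    As a rough illustration of the kind of search the abstract describes, the sketch below scans a small library of dot-bracket strings for a user-defined secondary-structure pattern. It is a naive substring search, not RNA FRABASE's actual engine, and the structure strings and IDs are invented.

    ```python
    # Illustrative sketch: locate all occurrences of a user-defined
    # dot-bracket pattern inside a library of RNA secondary structures,
    # returning (structure_id, offset) hits.

    def find_fragments(library, pattern):
        """library: dict mapping structure id -> dot-bracket string."""
        hits = []
        for sid, dotbracket in library.items():
            start = dotbracket.find(pattern)
            while start != -1:
                hits.append((sid, start))
                start = dotbracket.find(pattern, start + 1)
        return hits

    # Hypothetical entries; real data would come from PDB-derived structures.
    library = {
        "1ABC": "(((....)))..((....))",
        "2XYZ": "..((....))(((...)))",
    }

    print(find_fragments(library, "((....))"))
    # [('1ABC', 1), ('1ABC', 12), ('2XYZ', 2)]
    ```

    A production engine would additionally handle multi-stranded patterns and missing residues, as the abstract notes.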

  3. Web-Accessible Database of hsp65 Sequences from Mycobacterium Reference Strains

    OpenAIRE

    Dai, Jianli; Chen, Yuansha; Lauzardo, Michael

    2011-01-01

    Mycobacteria include a large number of pathogens. Identification to species level is important for diagnoses and treatments. Here, we report the development of a Web-accessible database of the hsp65 locus sequences (http://msis.mycobacteria.info) from 149 out of 150 Mycobacterium species/subspecies. This database can serve as a reference for identifying Mycobacterium species.

  4. Federated web-accessible clinical data management within an extensible neuroimaging database.

    Science.gov (United States)

    Ozyurt, I Burak; Keator, David B; Wei, Dingying; Fennema-Notestine, Christine; Pease, Karen R; Bockholt, Jeremy; Grethe, Jeffrey S

    2010-12-01

    Managing vast datasets collected throughout multiple clinical imaging communities has become critical with the ever increasing and diverse nature of datasets. Development of data management infrastructure is further complicated by technical and experimental advances that drive modifications to existing protocols and acquisition of new types of research data to be incorporated into existing data management systems. In this paper, an extensible data management system for clinical neuroimaging studies is introduced: The Human Clinical Imaging Database (HID) and Toolkit. The database schema is constructed to support the storage of new data types without changes to the underlying schema. The complex infrastructure allows management of experiment data, such as image protocol and behavioral task parameters, as well as subject-specific data, including demographics, clinical assessments, and behavioral task performance metrics. Of significant interest, embedded clinical data entry and management tools enhance both consistency of data reporting and automatic entry of data into the database. The Clinical Assessment Layout Manager (CALM) allows users to create on-line data entry forms for use within and across sites, through which data is pulled into the underlying database via the generic clinical assessment management engine (GAME). Importantly, the system is designed to operate in a distributed environment, serving both human users and client applications in a service-oriented manner. Querying capabilities use a built-in multi-database parallel query builder/result combiner, allowing web-accessible queries within and across multiple federated databases. The system along with its documentation is open-source and available from the Neuroimaging Informatics Tools and Resource Clearinghouse (NITRC) site. PMID:20567938
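
    The federated, parallel query-and-combine step can be sketched as follows. This is an illustrative toy, not HID's implementation; `sites` and `query_fn` are hypothetical stand-ins for per-site database backends.

    ```python
    # Sketch of a federated query: the same request runs against several
    # databases in parallel and the partial results are combined.
    from concurrent.futures import ThreadPoolExecutor

    # Hypothetical per-site result sets keyed by site name.
    sites = {
        "site_a": [{"subject": "S001", "age": 34}],
        "site_b": [{"subject": "S107", "age": 29}],
        "site_c": [{"subject": "S220", "age": 41}],
    }

    def query_fn(site):
        # A real system would issue a query to the site's database here.
        return sites[site]

    def federated_query(site_names):
        with ThreadPoolExecutor(max_workers=len(site_names)) as pool:
            partials = pool.map(query_fn, site_names)
        return [row for part in partials for row in part]  # result combiner

    rows = federated_query(["site_a", "site_b", "site_c"])
    print(len(rows))  # 3
    ```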

  5. The bovine QTL viewer: a web accessible database of bovine Quantitative Trait Loci

    Directory of Open Access Journals (Sweden)

    Xavier Suresh R

    2006-06-01

    Full Text Available Abstract Background Many important agricultural traits such as weight gain, milk fat content and intramuscular fat (marbling) in cattle are quantitative traits. Most of the information on these traits has not previously been integrated into a genomic context. Without such integration, application of these data to agricultural enterprises will remain slow and inefficient. Our goal was to populate a genomic database with data mined from the bovine quantitative trait literature and to make these data available in a genomic context to researchers via a user-friendly query interface. Description The QTL (Quantitative Trait Locus) data and related information for bovine QTL are gathered from published work and from existing databases. An integrated database schema was designed and the database (MySQL) populated with the gathered data. The bovine QTL Viewer was developed for the integration of QTL data available for cattle. The tool consists of an integrated database of bovine QTL and the QTL viewer to display QTL and their chromosomal position. Conclusion We present a web accessible, integrated database of bovine (dairy and beef cattle) QTL for use by animal geneticists. The viewer and database are of general applicability to any livestock species for which there are public QTL data. The viewer can be accessed at http://bovineqtl.tamu.edu.
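
    A minimal sketch of the kind of integrated schema and positional query such a viewer performs, using SQLite in place of the paper's MySQL; the column names and rows are illustrative, not the actual schema.

    ```python
    # Toy QTL table plus a positional query: find QTL overlapping a
    # region of interest on a given chromosome.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("""CREATE TABLE qtl (
        id INTEGER PRIMARY KEY,
        trait TEXT,          -- e.g. 'marbling', 'milk fat content'
        chromosome TEXT,
        start_cm REAL,       -- map position in centimorgans
        end_cm REAL)""")
    db.executemany(
        "INSERT INTO qtl (trait, chromosome, start_cm, end_cm) VALUES (?,?,?,?)",
        [("marbling", "BTA2", 10.0, 25.0),
         ("milk fat content", "BTA14", 0.0, 12.5)])

    # All QTL overlapping the interval [5, 30] cM on chromosome BTA2:
    rows = db.execute(
        "SELECT trait FROM qtl WHERE chromosome = ? AND start_cm <= ? AND end_cm >= ?",
        ("BTA2", 30.0, 5.0)).fetchall()
    print(rows)  # [('marbling',)]
    ```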

  6. Managing Large Scale Project Analysis Teams through a Web Accessible Database

    Science.gov (United States)

    O'Neil, Daniel A.

    2008-01-01

    Large-scale space programs analyze thousands of requirements while mitigating safety, performance, schedule, and cost risks. These efforts involve a variety of roles with interdependent use cases and goals. For example, study managers and facilitators identify ground rules and assumptions for a collection of studies required for a program or project milestone. Task leaders derive product requirements from the ground rules and assumptions and describe activities to produce needed analytical products. Discipline specialists produce the specified products and load results into a file management system. Organizational and project managers provide the personnel and funds to conduct the tasks. Each role has responsibilities to establish information linkages and provide status reports to management. Projects conduct design and analysis cycles to refine designs to meet the requirements and implement risk mitigation plans. At the program level, integrated design and analysis cycle studies are conducted to eliminate every 'to-be-determined' and develop plans to mitigate every risk. At the agency level, strategic studies analyze different approaches to exploration architectures and campaigns. This paper describes a web-accessible database developed by NASA to coordinate and manage tasks at three organizational levels. Other topics in this paper cover integration technologies and techniques for process modeling and enterprise architectures.

  7. The bovine QTL viewer: a web accessible database of bovine Quantitative Trait Loci

    OpenAIRE

    Xavier Suresh R; Aragonda Prathyusha; Polineni Pavana; Furuta Richard; Adelson David L

    2006-01-01

    Abstract Background Many important agricultural traits such as weight gain, milk fat content and intramuscular fat (marbling) in cattle are quantitative traits. Most of the information on these traits has not previously been integrated into a genomic context. Without such integration application of these data to agricultural enterprises will remain slow and inefficient. Our goal was to populate a genomic database with data mined from the bovine quantitative trait literature and to make these ...

  8. Federated Web-accessible Clinical Data Management within an Extensible NeuroImaging Database

    OpenAIRE

    Ozyurt, I. Burak; Keator, David B.; Wei, Dingying; Fennema-Notestine, Christine; Pease, Karen R.; Bockholt, Jeremy; Grethe, Jeffrey S.

    2010-01-01

    Managing vast datasets collected throughout multiple clinical imaging communities has become critical with the ever increasing and diverse nature of datasets. Development of data management infrastructure is further complicated by technical and experimental advances that drive modifications to existing protocols and acquisition of new types of research data to be incorporated into existing data management systems. In this paper, an extensible data management system for clinical neuroimaging s...

  9. Evaluating Web Accessibility Metrics for Jordanian Universities

    Directory of Open Access Journals (Sweden)

    Israa Wahbi Kamal

    2016-07-01

    Full Text Available University web portals are considered one of the main access gateways for universities. Typically, they have a large candidate audience among the current students, employees, and faculty members, aside from previous and future students, employees, and faculty members. Web accessibility is the concept of making web content universally accessible to different machines and to people of different ages, skills, education levels, and abilities. Several web accessibility metrics have been proposed in previous years to measure web accessibility. We integrated and extracted common web accessibility metrics from the different accessibility tools used in this study. This study evaluates web accessibility metrics for 36 Jordanian university and educational institute websites. We analyze the level of web accessibility using a number of available evaluation tools against the standard guidelines for web accessibility. Receiver operating characteristic quality measurements are used to evaluate the effectiveness of the integrated accessibility metrics.
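
    The ROC-style quality measurement mentioned at the end of the abstract can be illustrated with a small sketch: given ground-truth labels for pages (1 = the page genuinely has accessibility violations) and a tool's flags, compute the true- and false-positive rates. The data below are made up for illustration.

    ```python
    # Compute true-positive rate and false-positive rate for a binary
    # accessibility-violation detector against ground truth.

    def tpr_fpr(truth, flagged):
        tp = sum(1 for t, f in zip(truth, flagged) if t == 1 and f == 1)
        fn = sum(1 for t, f in zip(truth, flagged) if t == 1 and f == 0)
        fp = sum(1 for t, f in zip(truth, flagged) if t == 0 and f == 1)
        tn = sum(1 for t, f in zip(truth, flagged) if t == 0 and f == 0)
        return tp / (tp + fn), fp / (fp + tn)

    # Hypothetical results for 8 pages checked by an automated tool.
    truth   = [1, 1, 1, 1, 0, 0, 0, 0]
    flagged = [1, 1, 1, 0, 1, 0, 0, 0]
    print(tpr_fpr(truth, flagged))  # (0.75, 0.25)
    ```

    Sweeping a tool's sensitivity and plotting these pairs yields the ROC curve used to compare metrics.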

  10. Binary Coded Web Access Pattern Tree in Education Domain

    Science.gov (United States)

    Gomathi, C.; Moorthi, M.; Duraiswamy, K.

    2008-01-01

    Web Access Pattern (WAP), which is the sequence of accesses pursued by users frequently, is a kind of interesting and useful knowledge in practice. Sequential Pattern mining is the process of applying data mining techniques to a sequential database for the purposes of discovering the correlation relationships that exist among an ordered list of…

  11. Improving Web Accessibility in a University Setting

    Science.gov (United States)

    Olive, Geoffrey C.

    2010-01-01

    Improving Web accessibility for disabled users visiting a university's Web site is explored following the World Wide Web Consortium (W3C) guidelines and Section 508 of the Rehabilitation Act rules for Web page designers to ensure accessibility. The literature supports the view that accessibility is sorely lacking, not only in the USA, but also…

  12. Advances in knowledge discovery in databases

    CERN Document Server

    Adhikari, Animesh

    2015-01-01

    This book presents recent advances in knowledge discovery in databases (KDD) with a focus on the areas of market basket databases, time-stamped databases and multiple related databases. Various interesting and intelligent algorithms for data mining tasks are reported. A large number of association measures are presented, which play significant roles in decision support applications. This book presents, discusses and contrasts new developments in mining time-stamped data, time-based data analyses, the identification of temporal patterns, the mining of multiple related databases, as well as local pattern analysis.

  13. Performance Enhancements for Advanced Database Management Systems

    OpenAIRE

    Helmer, Sven

    2000-01-01

    New applications have emerged, demanding database management systems with enhanced functionality. However, high performance is a necessary precondition for the acceptance of such systems by end users. In this context we developed, implemented, and tested algorithms and index structures for improving the performance of advanced database management systems. We focused on index structures and join algorithms for set-valued attributes.

  14. Clinical Genomic Database

    OpenAIRE

    Solomon, Benjamin D.; Nguyen, Anh-Dao; Bear, Kelly A.; Wolfsberg, Tyra G.

    2013-01-01

    Technological advances have greatly increased the availability of human genomic sequencing. However, the capacity to analyze genomic data in a clinically meaningful way lags behind the ability to generate such data. To help address this obstacle, we reviewed all conditions with genetic causes and constructed the Clinical Genomic Database (CGD) (http://research.nhgri.nih.gov/CGD/), a searchable, freely Web-accessible database of conditions based on the clinical utility of genetic diagnosis and...

  15. Web-accessible cervigram automatic segmentation tool

    Science.gov (United States)

    Xue, Zhiyun; Antani, Sameer; Long, L. Rodney; Thoma, George R.

    2010-03-01

    Uterine cervix image analysis is of great importance to the study of uterine cervix cancer, which is among the leading cancers affecting women worldwide. In this paper, we describe our proof-of-concept, Web-accessible system for automated segmentation of significant tissue regions in uterine cervix images, which also demonstrates our research efforts toward promoting collaboration between engineers and physicians for medical image analysis projects. Our design and implementation unifies the merits of two commonly used languages, MATLAB and Java. It circumvents the heavy workload of recoding the sophisticated segmentation algorithms originally developed in MATLAB into Java, while allowing remote users who are not experienced programmers or algorithm developers to apply those processing methods to their own cervicographic images and evaluate the algorithms. Several other practical issues of the system are also discussed, such as the compression of images and the format of the segmentation results.

  16. AtlasDcsWebViewer: a web access to the Atlas DCS data An introduction manual to the AtlasDcsWebViewer, a web-based tool to query the PVSS Oracle database

    CERN Document Server

    Bitenc, U; The ATLAS collaboration; Ferrari, P; Luisa, L

    2009-01-01

    This note describes how to access the DCS data from a web browser. The DCS data from the various Atlas subdetectors are stored to the Oracle database using the RDB manager in PVSS. All subdetectors use either the same, or a very similar schema. The effort coordinated within the Inner Detector has produced a web-based tool to search the Atlas DCS Oracle database and to display the results. This tool has been easily extended to access the data from other Atlas subdetectors. In this note we describe the structure of the AtlasDcsWebViewer and its use.

  17. Nuclear integrated database and design advancement system

    Energy Technology Data Exchange (ETDEWEB)

    Ha, Jae Joo; Jeong, Kwang Sub; Kim, Seung Hwan; Choi, Sun Young

    1997-01-01

    The objective of NuIDEAS is to computerize design processes through an integrated database by eliminating the current work style of delivering hardcopy documents and drawings. The major research contents of NuIDEAS are the advancement of design processes by computerization, the establishment of a design database, and 3-dimensional visualization of design data. KSNP (Korea Standard Nuclear Power Plant) is the target of the legacy database and 3-dimensional model, so that it can be utilized in the next plant design. In the first year, the blueprint of NuIDEAS was proposed, and its prototype was developed by applying rapidly evolving computer technology. The major results of the first year of research were to establish the architecture of the integrated database ensuring data consistency, and to build a design database of the reactor coolant system and heavy components. Also, various software tools were developed to search, share and utilize the data through networks, and detailed 3-dimensional CAD models of nuclear fuel and heavy components were constructed, and walk-through simulations using the models were developed. This report contains the major additions and modifications to the object-oriented database and associated programs, using methods and JavaScript. (author). 36 refs., 1 tab., 32 figs.

  18. Nuclear integrated database and design advancement system

    International Nuclear Information System (INIS)

    The objective of NuIDEAS is to computerize design processes through an integrated database by eliminating the current work style of delivering hardcopy documents and drawings. The major research contents of NuIDEAS are the advancement of design processes by computerization, the establishment of a design database, and 3-dimensional visualization of design data. KSNP (Korea Standard Nuclear Power Plant) is the target of the legacy database and 3-dimensional model, so that it can be utilized in the next plant design. In the first year, the blueprint of NuIDEAS was proposed, and its prototype was developed by applying rapidly evolving computer technology. The major results of the first year of research were to establish the architecture of the integrated database ensuring data consistency, and to build a design database of the reactor coolant system and heavy components. Also, various software tools were developed to search, share and utilize the data through networks, and detailed 3-dimensional CAD models of nuclear fuel and heavy components were constructed, and walk-through simulations using the models were developed. This report contains the major additions and modifications to the object-oriented database and associated programs, using methods and JavaScript. (author). 36 refs., 1 tab., 32 figs

  19. Advanced information technology: Building stronger databases

    Energy Technology Data Exchange (ETDEWEB)

    Price, D. [Lawrence Livermore National Lab., CA (United States)

    1994-12-01

    This paper discusses the attributes of the Advanced Information Technology (AIT) tool set, a database application builder designed at the Lawrence Livermore National Laboratory. AIT consists of a C library and several utilities that provide referential integrity across a database, interactive menu and field level help, and a code generator for building tightly controlled data entry support. AIT also provides for dynamic menu trees, report generation support, and creation of user groups. Composition of the library and utilities is discussed, along with relative strengths and weaknesses. In addition, an instantiation of the AIT tool set is presented using a specific application. Conclusions about the future and value of the tool set are then drawn based on the use of the tool set with that specific application.

  20. Current state of web accessibility of Malaysian ministries websites

    Science.gov (United States)

    Ahmi, Aidi; Mohamad, Rosli

    2016-08-01

    Despite the fact that Malaysian public institutions have progressed considerably on website and portal usage, web accessibility has been reported as one of the issues that deserve special attention. Consistent with the government's moves to promote effective use of the web and portals, it is essential for government institutions to ensure compliance with established standards and guidelines on web accessibility. This paper evaluates the accessibility of 25 Malaysian ministry websites using automated tools, i.e. WAVE and AChecker. Both tools are designed to objectively evaluate web accessibility in conformance with the Web Content Accessibility Guidelines 2.0 (WCAG 2.0) and the United States Rehabilitation Act 1973 (Section 508). The findings reported somewhat low compliance with web accessibility standards amongst the ministries. Further enhancement is needed for input elements such as labels and checkboxes to be associated with text, as well as for image-related elements. These findings could be used as a mechanism for webmasters to locate and rectify errors pertaining to web accessibility and to ensure equal access to web information and services for all citizens.
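
    The label/checkbox and image findings correspond to checks that tools such as WAVE and AChecker automate. The sketch below is a much-simplified illustration of two such checks (images missing alt text; inputs with no id for a label to reference), not either tool's actual rule set.

    ```python
    # Minimal WCAG-style checker: flag <img> without an alt attribute and
    # <input> without an id (needed for a matching <label for="...">).
    from html.parser import HTMLParser

    class AccessibilityChecker(HTMLParser):
        def __init__(self):
            super().__init__()
            self.issues = []

        def handle_starttag(self, tag, attrs):
            attrs = dict(attrs)
            if tag == "img" and "alt" not in attrs:
                self.issues.append("img missing alt text")
            if tag == "input" and "id" not in attrs:
                self.issues.append("input cannot be referenced by a label")

    checker = AccessibilityChecker()
    checker.feed('<img src="logo.png"><input type="checkbox"><img src="x" alt="x">')
    print(checker.issues)
    # ['img missing alt text', 'input cannot be referenced by a label']
    ```

    Real evaluators apply hundreds of such success-criterion checks and aggregate the results into the metrics discussed above.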

  1. Construction of databases: advances and significance in clinical research.

    Science.gov (United States)

    Long, Erping; Huang, Bingjie; Wang, Liming; Lin, Xiaoyu; Lin, Haotian

    2015-12-01

    Widely used in clinical research, the database is a new type of data management automation technology and the most efficient tool for data management. In this article, we first explain some basic concepts, such as the definition, classification, and establishment of databases. Afterward, the workflow for establishing databases, inputting data, verifying data, and managing databases is presented. Meanwhile, by discussing the application of databases in clinical research, we illuminate the important role of databases in clinical research practice. Lastly, we introduce the reanalysis of randomized controlled trials (RCTs) and cloud computing techniques, showing the most recent advancements of databases in clinical research. PMID:27215009

  2. Web Accessibility Theory and Practice: An Introduction for University Faculty

    Science.gov (United States)

    Bradbard, David A.; Peters, Cara

    2010-01-01

    Web accessibility is the practice of making Web sites accessible to all, particularly those with disabilities. As the Internet becomes a central part of post-secondary instruction, it is imperative that instructional Web sites be designed for accessibility to meet the needs of disabled students. The purpose of this article is to introduce Web…

  3. Web Accessibility Policies at Land-Grant Universities

    Science.gov (United States)

    Bradbard, David A.; Peters, Cara; Caneva, Yoana

    2010-01-01

    The Web has become an integral part of postsecondary education within the United States. There are specific laws that legally mandate postsecondary institutions to have Web sites that are accessible for students with disabilities (e.g., the Americans with Disabilities Act (ADA)). Web accessibility policies are a way for universities to provide a…

  4. Web accessibility practical advice for the library and information professional

    CERN Document Server

    Craven, Jenny

    2008-01-01

    Offers an introduction to web accessibility and usability for information professionals, offering advice on the concerns relevant to library and information organizations. This book can be used as a resource for developing staff training and awareness activities. It will also be of value to website managers involved in web design and development.

  5. User Experience (UX) and the Web Accessibility Standards

    Directory of Open Access Journals (Sweden)

    Osama Sohaib

    2011-05-01

    Full Text Available The success of web-based applications depends on how well they are perceived by end-users. Various web accessibility guidelines have been promoted to help improve access to, and understanding of, the content of web pages. Designing for the total User Experience (UX) is an evolving discipline of the World Wide Web mainstream that focuses on how end users work to achieve their target goals. To satisfy end-users, web-based applications must fulfill some common needs like clarity, accessibility and availability. The aim of this study is to evaluate how the User Experience characteristics of web-based applications relate to web accessibility guidelines (WCAG 2.0, ISO 9241-151 and Section 508).

  6. A Study on Web Accessibility Improvement Using QR-Code

    OpenAIRE

    Dae-Jea Cho

    2016-01-01

    Web accessibility makes it possible for the disabled to get equal access to information provided on the web, as non-disabled users do. Therefore, to enable the disabled to use the web, web pages need to be constructed in accordance with accessibility guidelines. The text on a web site is output as sound using a screen reader, so that the visually impaired can recognize the meaning of the text. However, screen readers cannot recognize images. This paper studies a method for explaining images included in web pages using QR-...

  7. A Study on Web Accessibility Improvement Using QR-Code

    Directory of Open Access Journals (Sweden)

    Dae-Jea Cho

    2016-07-01

    Full Text Available Web accessibility makes it possible for the disabled to get equal access to information provided on the web, as non-disabled users do. Therefore, to enable the disabled to use the web, web pages need to be constructed in accordance with accessibility guidelines. The text on a web site is output as sound using a screen reader, so that the visually impaired can recognize the meaning of the text. However, screen readers cannot recognize images. This paper studies a method for explaining images included in web pages using QR-Codes. When producing web pages following the method provided in this paper, it will help the visually impaired to understand the contents of the web page.

  8. Architecture for large-scale automatic web accessibility evaluation based on the UWEM methodology

    DEFF Research Database (Denmark)

    Ulltveit-Moe, Nils; Olsen, Morten Goodwin; Pillai, Anand B.;

    2008-01-01

    The European Internet Accessibility project (EIAO) has developed an Observatory for performing large-scale automatic web accessibility evaluations of public sector web sites in Europe. The architecture includes a distributed web crawler that crawls web sites for links until either a given budget ... of web pages have been identified or the web site has been crawled exhaustively. Subsequently, a uniform random subset of the crawled web pages is sampled and sent for accessibility evaluation, and the evaluation results are stored in a Resource Description Format (RDF) database that is later loaded ... challenges that the project faced and the solutions developed towards building a system capable of regular large-scale accessibility evaluations with sufficient capacity and stability. It also outlines some possible future architectural improvements.
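
    The uniform random sampling step described above can be sketched in a few lines; the URL list is a placeholder, not EIAO's crawl data.

    ```python
    # After crawling, draw a uniform random subset of the discovered
    # pages to send for accessibility evaluation.
    import random

    crawled = [f"https://example.org/page{i}" for i in range(1000)]

    random.seed(42)                       # reproducible for the example
    sample = random.sample(crawled, 50)   # uniform, without replacement

    print(len(sample), len(set(sample)))  # 50 50  (all pages distinct)
    ```

    Sampling uniformly keeps the per-site evaluation cost bounded while preserving an unbiased estimate of the site's overall accessibility.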

  9. Advanced approaches to intelligent information and database systems

    CERN Document Server

    Boonjing, Veera; Chittayasothorn, Suphamit

    2014-01-01

    This book consists of 35 chapters presenting different theoretical and practical aspects of Intelligent Information and Database Systems. Nowadays both Intelligent and Database Systems are applied in most areas of human activity, which necessitates further research in these areas. In this book, various interesting issues related to intelligent information models and methods as well as their advanced applications, database systems applications, data models and their analysis, and digital multimedia methods and applications are presented and discussed from both practical and theoretical points of view. The book is organized in four parts devoted to intelligent systems models and methods, intelligent systems advanced applications, database systems methods and applications, and multimedia systems methods and applications. The book will be interesting for both practitioners and researchers, especially graduate and PhD students of information technology and computer science, as well as more experienced ...

  10. Advanced transportation system studies. Alternate propulsion subsystem concepts: Propulsion database

    Science.gov (United States)

    Levack, Daniel

    1993-01-01

    The Advanced Transportation System Studies alternate propulsion subsystem concepts propulsion database interim report is presented. The objective of the database development task is to produce a propulsion database which is easy to use and modify while also being comprehensive in the level of detail available. The database is to be available on the Macintosh computer system. The task is to extend across all three years of the contract. Consequently, a significant fraction of the effort in this first year of the task was devoted to the development of the database structure to ensure a robust base for the following years' efforts. Nonetheless, significant point design propulsion system descriptions and parametric models were also produced. Each of the two propulsion databases, the parametric propulsion database and the propulsion system database, is described. The descriptions include a user's guide to each code, write-ups for the models used, and sample output. The parametric database has models for LOX/H2 and LOX/RP liquid engines, solid rocket boosters using three different propellants, a hybrid rocket booster, and a NERVA-derived nuclear thermal rocket engine.

  11. Database requirements for the Advanced Test Accelerator project

    International Nuclear Information System (INIS)

    The database requirements for the Advanced Test Accelerator (ATA) project are outlined. ATA is a state-of-the-art electron accelerator capable of producing energetic (50 million electron volt), high-current (10,000 ampere), short-pulse (70 billionths of a second) beams of electrons for a wide variety of applications. Databasing is required for two applications. First, the description of the configuration of the facility itself requires an extended database. Second, experimental data gathered from the facility must be organized and managed to ensure its full utilization. The two applications are intimately related, since the acquisition and analysis of experimental data requires knowledge of the system configuration. This report reviews the needs of the ATA program and current implementation, intentions, and desires. These database applications have several unique aspects which are of interest and will be highlighted. The features desired in an ultimate database system are outlined. 3 references, 5 figures

  12. Prototype and Evaluation of AutoHelp: A Case-based, Web-accessible Help Desk System for EOSDIS

    Science.gov (United States)

    Mitchell, Christine M.; Thurman, David A.

    1999-01-01

    AutoHelp is a case-based, Web-accessible help desk for users of the EOSDIS. Its uses a combination of advanced computer and Web technologies, knowledge-based systems tools, and cognitive engineering to offload the current, person-intensive, help desk facilities at the DAACs. As a case-based system, AutoHelp starts with an organized database of previous help requests (questions and answers) indexed by a hierarchical category structure that facilitates recognition by persons seeking assistance. As an initial proof-of-concept demonstration, a month of email help requests to the Goddard DAAC were analyzed and partially organized into help request cases. These cases were then categorized to create a preliminary case indexing system, or category structure. This category structure allows potential users to identify or recognize categories of questions, responses, and sample cases similar to their needs. Year one of this research project focused on the development of a technology demonstration. User assistance 'cases' are stored in an Oracle database in a combination of tables linking prototypical questions with responses and detailed examples from the email help requests analyzed to date. When a potential user accesses the AutoHelp system, a Web server provides a Java applet that displays the category structure of the help case base organized by the needs of previous users. When the user identifies or requests a particular type of assistance, the applet uses Java database connectivity (JDBC) software to access the database and extract the relevant cases. The demonstration will include an on-line presentation of how AutoHelp is currently structured. We will show how a user might request assistance via the Web interface and how the AutoHelp case base provides assistance. The presentation will describe the DAAC data collection, case definition, and organization to date, as well as the AutoHelp architecture. 
It will conclude with the year 2 proposal to more fully develop the
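The record above describes a category-indexed case base queried through a database connectivity layer; the retrieval idea can be sketched with Python's standard sqlite3 module standing in for the Oracle database and JDBC layer (all table, column, and sample-case names here are invented for illustration):

```python
import sqlite3

# In-memory stand-in for the case base: categories index prototypical
# question/response cases, mirroring the hierarchical structure described.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE categories (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE cases (
        id INTEGER PRIMARY KEY,
        category_id INTEGER REFERENCES categories(id),
        question TEXT,
        response TEXT
    );
""")
conn.executemany("INSERT INTO categories VALUES (?, ?)",
                 [(1, "Data ordering"), (2, "File formats")])
conn.executemany("INSERT INTO cases VALUES (?, ?, ?, ?)", [
    (1, 1, "How do I order a data granule?", "Submit an order through the data gateway."),
    (2, 2, "What is HDF-EOS?", "HDF-EOS is the standard EOSDIS data format."),
])

def cases_for_category(name):
    """Return (question, response) pairs filed under one help category."""
    return conn.execute(
        "SELECT c.question, c.response FROM cases c "
        "JOIN categories k ON c.category_id = k.id WHERE k.name = ?",
        (name,)).fetchall()

print(cases_for_category("File formats")[0][0])  # What is HDF-EOS?
```

The category table mirrors the index structure the abstract describes: the interface lists categories first, then pulls only the cases filed under the one the user picks.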

  13. Postsecondary Web Accessibility for Students with Disabilities: A Collective Case Study

    Science.gov (United States)

    Forgione-Barkas, Elizabeth

    2012-01-01

    This collective case study reviewed the current state of Web accessibility at 102 postsecondary colleges and universities in North Carolina. The study examined themes within Web-accessibility compliance and identified which disability subgroups were most and least affected, why the common errors were occurring, and how the errors could be fixed.…

  14. Learning Task Knowledge from Dialog and Web Access

    Directory of Open Access Journals (Sweden)

    Vittorio Perera

    2015-06-01

Full Text Available We present KnoWDiaL, an approach for Learning and using task-relevant Knowledge from human-robot Dialog and access to the Web. KnoWDiaL assumes that there is an autonomous agent that performs tasks, as requested by humans through speech. The agent needs to “understand” the request (i.e., to fully ground the task until it can proceed to plan for and execute it). KnoWDiaL contributes such understanding by using and updating a Knowledge Base, by dialoguing with the user, and by accessing the web. We believe that KnoWDiaL, as we present it, can be applied to general autonomous agents. However, we focus on our work with our autonomous collaborative robot, CoBot, which executes service tasks in a building, moving around and transporting objects between locations. Hence, the knowledge acquired and accessed consists of groundings of language to robot actions, and building locations, persons, and objects. KnoWDiaL handles the interpretation of voice commands, is robust regarding speech recognition errors, and is able to learn commands involving referring expressions in an open domain (i.e., without requiring a lexicon). We present in detail the multiple components of KnoWDiaL, namely a frame-semantic parser, a probabilistic grounding model, a web-based predicate evaluator, a dialog manager, and the weighted predicate-based Knowledge Base. We illustrate the knowledge access and updates from the dialog and Web access, through detailed and complete examples. We further evaluate the correctness of the predicate instances learned into the Knowledge Base, and show the increase in dialog efficiency as a function of the number of interactions. We have extensively and successfully used KnoWDiaL in CoBot dialoguing and accessing the Web, and extract a few corresponding example sequences from captured videos.
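The weighted predicate-based Knowledge Base mentioned above can be illustrated with a minimal sketch, assuming a simple additive weighting scheme in which each user confirmation during dialog reinforces a grounding (the predicate names, arguments, and update rule here are invented for illustration, not taken from the paper):

```python
from collections import defaultdict

class WeightedKB:
    """Toy weighted predicate knowledge base: groundings accumulate weight
    as they are confirmed, and queries return the strongest grounding."""

    def __init__(self):
        self.weights = defaultdict(float)  # (predicate, args) -> weight

    def reinforce(self, predicate, args, amount=1.0):
        # A confirmation during dialog strengthens this grounding.
        self.weights[(predicate, args)] += amount

    def best_grounding(self, predicate):
        # Return the highest-weight instance of a predicate, or None.
        candidates = [(key[1], w) for key, w in self.weights.items()
                      if key[0] == predicate]
        if not candidates:
            return None
        return max(candidates, key=lambda c: c[1])[0]

kb = WeightedKB()
kb.reinforce("location_of", ("the lab", "office 7412"))
kb.reinforce("location_of", ("the lab", "office 7412"))
kb.reinforce("location_of", ("the lab", "office 3201"))
print(kb.best_grounding("location_of"))  # ('the lab', 'office 7412')
```

Two confirmations outweigh one, so repeated dialog interactions gradually settle the knowledge base on the grounding users actually mean, which is the efficiency gain the abstract measures.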

  15. Hera-FFX: a Firefox add-on for Semi-automatic Web Accessibility Evaluation

    OpenAIRE

    Fuertes Castro, José Luis; González, Ricardo; Gutiérrez, Emmanuelle; Martínez Normand, Loïc

    2009-01-01

    Website accessibility evaluation is a complex task requiring a combination of human expertise and software support. There are several online and offline tools to support the manual web accessibility evaluation process. However, they all have some weaknesses because none of them includes all the desired features. In this paper we present Hera-FFX, an add-on for the Firefox web browser that supports semi-automatic web accessibility evaluation.

  16. Advancements in web-database applications for rabies surveillance

    Directory of Open Access Journals (Sweden)

    Bélanger Denise

    2011-08-01

Full Text Available Abstract Background Protection of public health from rabies is informed by the analysis of surveillance data from human and animal populations. In Canada, public health, agricultural and wildlife agencies at the provincial and federal level are responsible for rabies disease control, and this has led to multiple agency-specific data repositories. Aggregation of agency-specific data into one database application would enable more comprehensive data analyses and effective communication among participating agencies. In Québec, RageDB was developed to house surveillance data for the raccoon rabies variant, representing the next generation in web-based database applications that provide a key resource for the protection of public health. Results RageDB incorporates data from, and grants access to, all agencies responsible for the surveillance of raccoon rabies in Québec. Technological advancements of RageDB to rabies surveillance databases include (1) automatic integration of multi-agency data and diagnostic results on a daily basis; (2) a web-based data editing interface that enables authorized users to add, edit and extract data; and (3) an interactive dashboard to help visualize data simply and efficiently, in table, chart, and cartographic formats. Furthermore, RageDB stores data from citizens who voluntarily report sightings of rabies suspect animals. We also discuss how sightings data can indicate public perception of the risk of raccoon rabies and thus aid in directing the allocation of disease control resources for protecting public health. Conclusions RageDB provides an example in the evolution of spatio-temporal database applications for the storage, analysis and communication of disease surveillance data. The database was fast and inexpensive to develop by using open-source technologies, simple and efficient design strategies, and shared web hosting. The database increases communication among agencies collaborating to protect human health from

  17. The Saccharomyces Genome Database: Advanced Searching Methods and Data Mining.

    Science.gov (United States)

    Cherry, J Michael

    2015-12-01

    At the core of the Saccharomyces Genome Database (SGD) are chromosomal features that encode a product. These include protein-coding genes and major noncoding RNA genes, such as tRNA and rRNA genes. The basic entry point into SGD is a gene or open-reading frame name that leads directly to the locus summary information page. A keyword describing function, phenotype, selective condition, or text from abstracts will also provide a door into the SGD. A DNA or protein sequence can be used to identify a gene or a chromosomal region using BLAST. Protein and DNA sequence identifiers, PubMed and NCBI IDs, author names, and function terms are also valid entry points. The information in SGD has been gathered and is maintained by a group of scientific biocurators and software developers who are devoted to providing researchers with up-to-date information from the published literature, connections to all the major research resources, and tools that allow the data to be explored. All the collected information cannot be represented or summarized for every possible question; therefore, it is necessary to be able to search the structured data in the database. This protocol describes the YeastMine tool, which provides an advanced search capability via an interactive tool. The SGD also archives results from microarray expression experiments, and a strategy designed to explore these data using the SPELL (Serial Pattern of Expression Levels Locator) tool is provided. PMID:26631124

  18. A manufacturing database of advanced materials used in spacecraft structures

    Science.gov (United States)

    Bao, Han P.

    1994-01-01

    Cost savings opportunities over the life cycle of a product are highest in the early exploratory phase when different design alternatives are evaluated not only for their performance characteristics but also their methods of fabrication which really control the ultimate manufacturing costs of the product. In the past, Design-To-Cost methodologies for spacecraft design concentrated on the sizing and weight issues more than anything else at the early so-called 'Vehicle Level' (Ref: DOD/NASA Advanced Composites Design Guide). Given the impact of manufacturing cost, the objective of this study is to identify the principal cost drivers for each materials technology and propose a quantitative approach to incorporating these cost drivers into the family of optimization tools used by the Vehicle Analysis Branch of NASA LaRC to assess various conceptual vehicle designs. The advanced materials being considered include aluminum-lithium alloys, thermoplastic graphite-polyether etherketone composites, graphite-bismaleimide composites, graphite- polyimide composites, and carbon-carbon composites. Two conventional materials are added to the study to serve as baseline materials against which the other materials are compared. These two conventional materials are aircraft aluminum alloys series 2000 and series 7000, and graphite-epoxy composites T-300/934. The following information is available in the database. For each material type, the mechanical, physical, thermal, and environmental properties are first listed. Next the principal manufacturing processes are described. Whenever possible, guidelines for optimum processing conditions for specific applications are provided. Finally, six categories of cost drivers are discussed. They include, design features affecting processing, tooling, materials, fabrication, joining/assembly, and quality assurance issues. It should be emphasized that this database is not an exhaustive database. Its primary use is to make the vehicle designer

  19. 网络无障碍的发展:政策、理论和方法%Development of Web Accessibility: Policies, Theories and Approaches

    Institute of Scientific and Technical Information of China (English)

    Xiaoming Zeng

    2006-01-01

    The article is intended to introduce the readers to the concept and background of Web accessibility in the United States. I will first discuss different definitions of Web accessibility. The beneficiaries of accessible Web or the sufferers from inaccessible Web will be discussed based on the type of disability. The importance of Web accessibility will be introduced from the perspectives of ethical, demographic, legal, and financial importance. Web accessibility related standards and legislations will be discussed in great detail. Previous research on evaluating Web accessibility will be presented. Lastly, a system for automated Web accessibility transformation will be introduced as an alternative approach for enhancing Web accessibility.

  20. Global Web Accessibility Analysis of National Government Portals and Ministry Web Sites

    DEFF Research Database (Denmark)

    Goodwin, Morten; Susar, Deniz; Nietzio, Annika;

    2011-01-01

    Equal access to public information and services for all is an essential part of the United Nations (UN) Declaration of Human Rights. Today, the Web plays an important role in providing information and services to citizens. Unfortunately, many government Web sites are poorly designed and have...... accessibility barriers that prevent people with disabilities from using them. This article combines current Web accessibility benchmarking methodologies with a sound strategy for comparing Web accessibility among countries and continents. Furthermore, the article presents the first global analysis of the Web...... accessibility of 192 United Nations Member States made publicly available. The article also identifies common properties of Member States that have accessible and inaccessible Web sites and shows that implementing antidisability discrimination laws is highly beneficial for the accessibility of Web sites, while...

  1. Advanced type placement and geonames database: comprehensive coordination plan

    Energy Technology Data Exchange (ETDEWEB)

    Barnes, A.E.; Gough, E.C.; Brown, R.M.; Zied, A.

    1983-01-01

    This paper presents a preliminary comprehensive coordination plan for the technical development and integration of an automated names-database capture and management capability into an overall automated cartographic system for the Defense Mapping Agency. It broadly covers the technical issues associated with system and subsystem functional requirements, inter-subsystem interaction, and common technologies in hardware, software, databases, and artificial intelligence. The three-phase R&D cycles for each of the subsystems are also outlined.

  2. 18th East European Conference on Advances in Databases and Information Systems and Associated Satellite Events

    CERN Document Server

    Ivanovic, Mirjana; Kon-Popovska, Margita; Manolopoulos, Yannis; Palpanas, Themis; Trajcevski, Goce; Vakali, Athena

    2015-01-01

    This volume contains the papers of 3 workshops and the doctoral consortium, which are organized in the framework of the 18th East-European Conference on Advances in Databases and Information Systems (ADBIS’2014). The 3rd International Workshop on GPUs in Databases (GID’2014) is devoted to subjects related to utilization of Graphics Processing Units in database environments. The use of GPUs in databases has not yet received enough attention from the database community. The intention of the GID workshop is to provide a discussion on popularizing the GPUs and providing a forum for discussion with respect to the GID’s research ideas and their potential to achieve high speedups in many database applications. The 3rd International Workshop on Ontologies Meet Advanced Information Systems (OAIS’2014) has a twofold objective to present: new and challenging issues in the contribution of ontologies for designing high quality information systems, and new research and technological developments which use ontologie...

  3. Combining Social Networks and Semantic Web Technologies for Personalizing Web Access

    Science.gov (United States)

    Carminati, Barbara; Ferrari, Elena; Perego, Andrea

    The original purpose of Web metadata was to protect end-users from possibly harmful content and to simplify search and retrieval. However, metadata can also be exploited in more advanced applications, such as Web access personalization on the basis of end-users’ preferences. To achieve this, several issues must be addressed. One of the most relevant is how to assess the trustworthiness of Web metadata. In this paper, we discuss how this issue can be addressed through the use of collaborative and Semantic Web technologies. The system we propose is based on a Web-based Social Network, where members are able not only to specify labels, but also to rate existing labels. Both labels and ratings are then used to assess the trustworthiness of resources’ descriptions and to enforce Web access personalization.
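The rating-based trust assessment the abstract outlines can be sketched as a reputation-weighted mean over members' ratings of a label; this aggregation formula is an assumption for illustration, not necessarily the paper's actual model:

```python
def label_trust(ratings):
    """Aggregate member ratings of a Web label into a trust score in [0, 1].

    ratings: list of (rating, rater_reputation) pairs, both in [0, 1].
    A reputation-weighted mean is one simple choice: highly reputed
    members move the score more than unknown ones.
    """
    total_reputation = sum(rep for _, rep in ratings)
    if total_reputation == 0:
        return 0.0  # no trusted opinion available
    return sum(r * rep for r, rep in ratings) / total_reputation

# Two well-reputed raters endorse the label; one low-reputation rater rejects it.
score = label_trust([(1.0, 0.9), (1.0, 0.8), (0.0, 0.1)])
print(round(score, 3))  # 0.944
```

Because the dissenting rater carries little reputation, the label is still considered trustworthy, which is the behaviour a collaborative labelling system generally wants.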

  4. Guide on Project Web Access of SFR R&D and Technology Monitoring System

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Dong Uk; Won, Byung Chool; Lee, Yong Bum; Kim, Young In; Hahn, Do Hee

    2008-09-15

    The SFR R&D and technology monitoring system, based on MS enterprise project management, was developed for the systematic and effective management of the 'Development of Basic Key Technologies for Gen IV SFR' project, which was performed under the Mid- and Long-term Nuclear R&D Program sponsored by the Ministry of Education, Science and Technology. This system is a tool for project management based on web access, and this manual is therefore a detailed guide for Project Web Access (PWA). Section 1 describes the common guide to system functions such as Project Server 2007 client connection settings and additional Outlook function settings. Section 2 describes the guide for the system administrator, and Sections 3 and 4 describe the guide for project management.

  5. Guide on Project Web Access of SFR R&D and Technology Monitoring System

    International Nuclear Information System (INIS)

    The SFR R&D and technology monitoring system, based on MS enterprise project management, was developed for the systematic and effective management of the 'Development of Basic Key Technologies for Gen IV SFR' project, which was performed under the Mid- and Long-term Nuclear R&D Program sponsored by the Ministry of Education, Science and Technology. This system is a tool for project management based on web access, and this manual is therefore a detailed guide for Project Web Access (PWA). Section 1 describes the common guide to system functions such as Project Server 2007 client connection settings and additional Outlook function settings. Section 2 describes the guide for the system administrator, and Sections 3 and 4 describe the guide for project management.

  6. The Advanced REACH Tool (ART) : Incorporation of an Exposure Measurement Database

    NARCIS (Netherlands)

    Schinkel, J.; Richie, P.; Goede, H.; Fransman, W.; Tongeren, M. van; Cherrie, J.W.; Tielemans, E.; Kromhout, H.; Warren, N.

    2013-01-01

    This article describes the structure, functionalities, and content of the Advanced REACH Tool (ART) exposure database (version 1.5). The incorporation of the exposure database into ART allows users who do not have their own measurement data for their exposure scenario, to update the exposure estimat

  7. Assessment the web accessibility of e-shops of selected Polish e-commerce companies

    OpenAIRE

    Anna Michalczyk

    2015-01-01

    The article attempts to answer the question: how do the e-shop websites operated by selected Polish e-commerce companies fare in terms of web accessibility? It discusses the essence of web accessibility in the context of the WCAG 2.0 standard and the business benefits that owning an accessible website fulfilling the WCAG 2.0 recommendations brings to companies, and then assesses the level of web accessibility of the e-shops of the selected Polish e-commerce companies.

  8. DB2 9 for Linux, UNIX, and Windows Advanced Database Administration Certification Certification Study Guide

    CERN Document Server

    Sanders, Roger E

    2008-01-01

    Database administrators versed in DB2 wanting to learn more about advanced database administration activities and students wishing to gain knowledge to help them pass the DB2 9 UDB Advanced DBA certification exam will find this exhaustive reference invaluable. Written by two individuals who were part of the team that developed the certification exam, this comprehensive study guide prepares the student for challenging questions on database design; data partitioning and clustering; high availability diagnostics; performance and scalability; security and encryption; connectivity and networking; a

  9. GENERATION OF AN ADVANCED HELICOPTER EXPERIMENTAL AERODYNAMIC DATABASE

    OpenAIRE

    Raffel, Markus; de Gegorio, Fabrizio; Sheng, W; GIBERTINI G.; Seraudie, A.; Groot, Klaus de; van der Wall, Berend G.

    2009-01-01

    The GOAHEAD consortium was created within the framework of an EU project in order to create an experimental database for the validation of 3D-CFD and comprehensive aeromechanics methods for the prediction of unsteady viscous flows including rotor dynamics for complete helicopter configurations, i.e. main rotor – fuselage – tail rotor configurations, with emphasis on viscous phenomena like flow separation and transition from laminar to turbulent flow. The wind tunnel experiments have been p...

  10. Advancements in web-database applications for rabies surveillance

    OpenAIRE

    Bélanger Denise; Coté Nathalie; Gendron Bruno; Lelièvre Frédérick; Rees Erin E

    2011-01-01

    Abstract Background Protection of public health from rabies is informed by the analysis of surveillance data from human and animal populations. In Canada, public health, agricultural and wildlife agencies at the provincial and federal level are responsible for rabies disease control, and this has led to multiple agency-specific data repositories. Aggregation of agency-specific data into one database application would enable more comprehensive data analyses and effective communication among pa...

  11. Development of Remote Monitoring and a Control System Based on PLC and WebAccess for Learning Mechatronics

    OpenAIRE

    Wen-Jye Shyr; Te-Jen Su; Chia-Ming Lin

    2013-01-01

    This study develops a novel method for learning mechatronics using remote monitoring and control, based on a programmable logic controller (PLC) and WebAccess. A mechatronics module, a Web‐CAM and a PLC were integrated with WebAccess software to organize a remote laboratory. The proposed system enables users to access the Internet for remote monitoring and control of the mechatronics module via a web browser, thereby enhancing work flexibility by enabling personnel to control mechatronics equ...

  12. Databases

    Data.gov (United States)

    National Aeronautics and Space Administration — The databases of computational and experimental data from the first Aeroelastic Prediction Workshop are located here. The databases file names tell their contents...

  13. Databases

    Digital Repository Service at National Institute of Oceanography (India)

    Kunte, P.D.

    Information on bibliographic as well as numeric/textual databases relevant to coastal geomorphology has been included in a tabular form. Databases cover a broad spectrum of related subjects like coastal environment and population aspects, coastline...

  14. Assessment the web accessibility of e-shops of selected Polish e-commerce companies

    Directory of Open Access Journals (Sweden)

    Anna Michalczyk

    2015-11-01

    Full Text Available The article attempts to answer the question: how do the e-shop websites operated by selected Polish e-commerce companies fare in terms of web accessibility? It discusses the essence of web accessibility in the context of the WCAG 2.0 standard and the business benefits that owning an accessible website fulfilling the WCAG 2.0 recommendations brings to companies, and then assesses the level of web accessibility of the e-shops of the selected Polish e-commerce companies.

  15. Databases

    Directory of Open Access Journals (Sweden)

    Nick Ryan

    2004-01-01

    Full Text Available Databases are deeply embedded in archaeology, underpinning and supporting many aspects of the subject. However, as well as providing a means for storing, retrieving and modifying data, databases themselves must be a result of a detailed analysis and design process. This article looks at this process, and shows how the characteristics of data models affect the process of database design and implementation. The impact of the Internet on the development of databases is examined, and the article concludes with a discussion of a range of issues associated with the recording and management of archaeological data.

  16. Using Large-Scale Databases in Evaluation: Advances, Opportunities, and Challenges

    Science.gov (United States)

    Penuel, William R.; Means, Barbara

    2011-01-01

    Major advances in the number, capabilities, and quality of state, national, and transnational databases have opened up new opportunities for evaluators. Both large-scale data sets collected for administrative purposes and those collected by other researchers can provide data for a variety of evaluation-related activities. These include (a)…

  17. The development of technical database of advanced spent fuel management process

    International Nuclear Information System (INIS)

    The purpose of this study is to develop a technical database system that provides useful information to researchers studying the back end of the nuclear fuel cycle. A prototype technical database of the advanced spent fuel management process was developed in 1997. In 1998, this database system was upgraded into a multi-user system, and a special database composed of thermochemical formation data and reaction data was appended. In this report, the detailed specification of the system design is described and the operating methods are illustrated as a user's manual. This report is also useful as a reference when expanding the current system or interfacing it with other systems. (Author). 10 refs., 18 tabs., 46 fig

  18. 17th East European Conference on Advances in Databases and Information Systems and Associated Satellite Events

    CERN Document Server

    Cerquitelli, Tania; Chiusano, Silvia; Guerrini, Giovanna; Kämpf, Mirko; Kemper, Alfons; Novikov, Boris; Palpanas, Themis; Pokorný, Jaroslav; Vakali, Athena

    2014-01-01

    This book reports on state-of-the-art research and applications in the field of databases and information systems. It includes both fourteen selected short contributions, presented at the East-European Conference on Advances in Databases and Information Systems (ADBIS 2013, September 1-4, Genova, Italy), and twenty-six papers from ADBIS 2013 satellite events. The short contributions from the main conference are collected in the first part of the book, which covers a wide range of topics, like data management, similarity searches, spatio-temporal and social network data, data mining, data warehousing, and data management on novel architectures, such as graphics processing units, parallel database management systems, cloud and MapReduce environments. In contrast, the contributions from the satellite events are organized in five different parts, according to their respective ADBIS satellite event: BiDaTA 2013 – Special Session on Big Data: New Trends and Applications; GID 2013 – The Second International Workshop ...

  19. Object-Oriented Database Model For Effective Mining Of Advanced Engineering Materials Data Sets

    OpenAIRE

    Doreswamy; Manohar M G; Hemanth K S

    2012-01-01

    Materials have become a very important aspect of our daily life, and the search for better and new kinds of engineered materials has created opportunities for the information science and technology fraternity to investigate the world of materials. This combination of materials science and information science is nowadays known as Materials Informatics. An Object-Oriented Database Model has been proposed for organizing advanced engineering materials datasets.

  20. Search, Read and Write: An Inquiry into Web Accessibility for People with Dyslexia.

    Science.gov (United States)

    Berget, Gerd; Herstad, Jo; Sandnes, Frode Eika

    2016-01-01

    Universal design in the context of digitalisation has become an integrated part of international conventions and national legislation. A goal is to make the Web accessible for people of different genders, ages, backgrounds, cultures and physical, sensory and cognitive abilities. Political demands for universally designed solutions have raised questions about how it is achieved in practice. Developers, designers and legislators have looked towards the Web Content Accessibility Guidelines (WCAG) for answers. WCAG 2.0 has become the de facto standard for universal design on the Web. Some of the guidelines are directed at the general population, while others are targeted at more specific user groups, such as the visually impaired or hearing impaired. Issues related to cognitive impairments such as dyslexia receive less attention, although dyslexia is prevalent in at least 5-10% of the population. Navigation and search are two common ways of using the Web. However, while navigation has received a fair amount of attention, search systems are not explicitly included, although search has become an important part of people's daily routines. This paper discusses WCAG in the context of dyslexia for the Web in general and search user interfaces specifically. Although certain guidelines address topics that affect dyslexia, WCAG does not seem to fully accommodate users with dyslexia. PMID:27534340

  1. Development of 'Data-Free-Way' distributed database system for advanced nuclear materials

    International Nuclear Information System (INIS)

    A distributed database system named 'Data-Free-Way (DFW)' is under development through a cooperation among three Japanese national research organizations to support the creation of advanced nuclear materials. The development of DFW started in 1990 as a five-year program with support from the Science and Technology Agency of Japan. Before starting the program, a preliminary survey of both domestic and foreign databases of nuclear materials had been made for two years, and subjects for the construction of DFW were extracted from it. To address these subjects, development and construction programs were established. DFW is built on a computer network that connects engineering workstations in the separate organizations. A relational database management system is used, and a distributed materials database with a specially designed common data structure runs on this hardware. Data storage has been carried out continuously in each organization. User-friendly interface systems, such as retrieval, data entry, process support, and image data handling systems, have also been constructed. The construction of DFW has made it possible to collect nuclear materials data from the three research organizations and to share them mutually. (author)

  2. Global search tool for the Advanced Photon Source Integrated Relational Model of Installed Systems (IRMIS) database

    International Nuclear Information System (INIS)

    The Integrated Relational Model of Installed Systems (IRMIS) is a relational database tool that has been implemented at the Advanced Photon Source to maintain an updated account of approximately 600 control system software applications, 400,000 process variables, and 30,000 control system hardware components. To effectively display this large amount of control system information to operators and engineers, IRMIS was initially built with nine Web-based viewers: Applications Organizing Index, IOC, PLC, Component Type, Installed Components, Network, Controls Spares, Process Variables, and Cables. However, since each viewer is designed to provide details from only one major category of the control system, the necessity for a one-stop global search tool for the entire database became apparent. The user requirements for extremely fast database search time and ease of navigation through search results led to the choice of Asynchronous JavaScript and XML (AJAX) technology in the implementation of the IRMIS global search tool. Unique features of the global search tool include a two-tier level of displayed search results, and a database data integrity validation and reporting mechanism.
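The one-stop search behaviour described above, querying many category tables and returning results grouped for a two-tier display, can be sketched with sqlite3 (the tables, names, and naming conventions below are invented stand-ins, not the real IRMIS schema):

```python
import sqlite3

# Minimal stand-in for a multi-viewer controls database.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE pvs (name TEXT);
    CREATE TABLE iocs (name TEXT);
""")
db.executemany("INSERT INTO pvs VALUES (?)",
               [("S01:BPM:X",), ("S01:BPM:Y",), ("S02:QUAD:CUR",)])
db.executemany("INSERT INTO iocs VALUES (?)", [("iocbpm01",), ("iocquad02",)])

def global_search(term):
    """Two-tier result: {category: [matches]}, so a UI can show
    per-category hit counts first, then expand the details."""
    results = {}
    for table in ("pvs", "iocs"):  # fixed table list, so f-string is safe here
        rows = db.execute(
            f"SELECT name FROM {table} WHERE name LIKE ?", (f"%{term}%",)
        ).fetchall()
        if rows:
            results[table] = [r[0] for r in rows]
    return results

# SQLite's LIKE is case-insensitive for ASCII, so 'iocbpm01' matches 'BPM'.
print(global_search("BPM"))
```

Grouping hits by source table is what makes the two-tier display possible: the first tier shows which categories matched and how often, and the second tier drills into the rows, without the user having to know which of the nine viewers holds the answer.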

  3. Molecular Identification and Databases in Fusarium

    Science.gov (United States)

    DNA sequence-based methods for identifying pathogenic and mycotoxigenic Fusarium isolates have become the gold standard worldwide. Moreover, fusarial DNA sequence data are increasing rapidly in several web-accessible databases for comparative purposes. Unfortunately, the use of Basic Alignment Sea...

  4. 16th East-European Conference on Advances in Databases and Information Systems (ADBIS 2012)

    CERN Document Server

    Wojciechowski, Marek; New Trends in Databases and Information Systems

    2013-01-01

    Database and information systems technologies have been rapidly evolving in several directions over the past years. New types and kinds of data, and new types of applications and information systems to support them, raise diverse challenges to be addressed. The so-called big data challenge, streaming data management and processing, social network and other complex data analysis, and semantic reasoning in information systems supporting, for instance, trading, negotiations, and bidding mechanisms are just some of the emerging research topics. This volume contains papers contributed by six workshops: ADBIS Workshop on GPUs in Databases (GID 2012), Mining Complex and Stream Data (MCSD'12), International Workshop on Ontologies meet Advanced Information Systems (OAIS'2012), Second Workshop on Modeling Multi-commodity Trade: Data models and processing (MMT'12), 1st ADBIS Workshop on Social Data Processing (SDP'12), 1st ADBIS Workshop on Social and Algorithmic Issues in Business Support (SAIBS), and the Ph.D. Conso...

  5. Alarm Reduction Processing of Advanced Nuclear Power Plant Using Data Mining and Active Database Technologies

    International Nuclear Information System (INIS)

    The purpose of Advanced Alarm Processing (AAP) is to extract only the most important and most relevant data from the large amount of available information. It should be noted that the integrity of the knowledge base is the most critical factor in developing a reliable AAP. This paper proposes a new approach to AAP using Event-Condition-Action (ECA) rules that can be automatically triggered by an active database. It also proposes a knowledge acquisition method that uses data mining techniques to ensure the integrity of the alarm knowledge
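A minimal sketch of the ECA idea follows. The alarm names and the suppression rule are invented for illustration; in the paper the rules live in an active database and the knowledge base is mined from plant data.

```python
# Event-Condition-Action (ECA) sketch for alarm reduction.
# Alarm names and rules are illustrative assumptions.
def make_rules():
    return [
        # After a reactor trip, a consequential low-flow alarm carries no
        # new information, so suppress it.
        {"event": "ALARM",
         "condition": lambda state, a: a == "LOW_FLOW" and "REACTOR_TRIP" in state,
         "action": lambda out, a: None},           # suppress
        {"event": "ALARM",
         "condition": lambda state, a: True,
         "action": lambda out, a: out.append(a)},  # present to operator
    ]

def process(alarms, rules):
    state, displayed = set(), []
    for alarm in alarms:
        state.add(alarm)
        for rule in rules:                         # first matching rule fires
            if rule["condition"](state, alarm):
                rule["action"](displayed, alarm)
                break
    return displayed

shown = process(["REACTOR_TRIP", "LOW_FLOW", "HIGH_TEMP"], make_rules())
```

Only the trip and the independent high-temperature alarm reach the operator; the consequential low-flow alarm is filtered out.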

  6. A web accessible scientific workflow system for vadoze zone performance monitoring: design and implementation examples

    Science.gov (United States)

    Mattson, E.; Versteeg, R.; Ankeny, M.; Stormberg, G.

    2005-12-01

    Long term performance monitoring has been identified by DOE, DOD and EPA as one of the most challenging and costly elements of contaminated site remedial efforts. Such monitoring should provide timely and actionable information relevant to a multitude of stakeholder needs. This information should be obtained in a manner which is auditable, cost effective and transparent. Over the last several years INL staff have designed and implemented a web accessible scientific workflow system for environmental monitoring. This workflow environment integrates distributed, automated data acquisition from diverse sensors (geophysical, geochemical and hydrological) with server-side data management and information visualization through flexible browser-based data access tools. Component technologies include a rich browser-based client (using dynamic JavaScript and HTML/CSS) for data selection, a back-end server which uses PHP for data processing, user management, and result delivery, and third-party applications which are invoked by the back end using web services. This system has been implemented and is operational for several sites, including the Ruby Gulch Waste Rock Repository (a capped mine waste rock dump on the Gilt Edge Mine Superfund Site), the INL Vadose Zone Research Park and an alternative cover landfill. Implementations for other vadose zone sites are currently in progress. These systems allow for autonomous performance monitoring through automated data analysis and report generation. This performance monitoring has allowed users to obtain insights into system dynamics, regulatory compliance and residence times of water. Our system uses modular components for data selection and graphing, and WSDL-compliant web services for external functions such as statistical analyses and model invocations. Thus, implementing this system for novel sites and extending its functionality (e.g. adding novel models) is relatively straightforward, and system access requires only a standard web browser

  7. Development of Remote Monitoring and a Control System Based on PLC and WebAccess for Learning Mechatronics

    Directory of Open Access Journals (Sweden)

    Wen-Jye Shyr

    2013-02-01

    Full Text Available This study develops a novel method for learning mechatronics using remote monitoring and control, based on a programmable logic controller (PLC) and WebAccess. A mechatronics module, a Web-CAM and a PLC were integrated with WebAccess software to organize a remote laboratory. The proposed system enables users to access the Internet for remote monitoring and control of the mechatronics module via a web browser, thereby enhancing work flexibility by enabling personnel to control mechatronics equipment from a remote location. Mechatronics control and long-distance monitoring were realized by establishing communication between the PLC and WebAccess. Analytical results indicate that the proposed system is feasible. The suitability of this system is demonstrated in the department of industrial education and technology at National Changhua University of Education, Taiwan. Preliminary evaluation of the system was encouraging and has shown that it has achieved success in helping students understand concepts and master remote monitoring and control techniques.

  8. Hydroponics Database and Handbook for the Advanced Life Support Test Bed

    Science.gov (United States)

    Nash, Allen J.

    1999-01-01

    During the summer of 1998, I worked as a student assistant to Dr. Daniel J. Barta, chief plant growth expert at NASA Johnson Space Center. We established the preliminary stages of a hydroponic crop growth database for the Advanced Life Support Systems Integration Test Bed, otherwise referred to as BIO-Plex (Biological Planetary Life Support Systems Test Complex). The database summarizes information from published technical papers by plant growth experts, and it includes bibliographical, environmental and harvest information based on plant growth under varying environmental conditions. I collected 84 lettuce entries, 14 soybean, 49 sweet potato, 16 wheat, 237 white potato, and 26 mixed crop entries. The list will grow with the publication of new research. This database will be integrated with a search and systems analysis computer program that will cross-reference multiple parameters to determine optimum edible yield under varying conditions. We have also made a preliminary effort to put together a crop handbook for BIO-Plex plant growth management. It will be a collection of information obtained from experts who provided recommendations on a particular crop's growing conditions. It includes bibliographic, environmental, nutrient solution, potential yield, harvest nutritional, and propagation procedure information. This handbook will define the baseline growth conditions for the first set of experiments in the BIO-Plex facility.

  9. Optimized CANDU-6 cell and reactivity device supercell models for advanced fuels reactor database generation

    International Nuclear Information System (INIS)

    Highlights: • Propose an optimized 2-D model for the CANDU lattice cell. • Propose a new 3-D simulation model for CANDU reactivity devices. • Implement other acceleration techniques for reactivity device simulations. • Reactivity device incremental cross sections for advanced CANDU fuels with thorium. - Abstract: Several 2D cell and 3D supercell models for reactivity device simulation have been proposed over the years for CANDU-6 reactors to generate 2-group cross section databases for finite-core diffusion calculations. Although these models are appropriate for natural uranium fuel, they are either too approximate or too expensive in terms of computer time to be used for optimization studies of advanced fuel cycles. Here we present a method to optimize the 2D spatial mesh used in a collision probability solution of the transport equation for CANDU cells. We also propose a technique to improve the modeling and accelerate the evaluation, in deterministic transport theory, of the incremental cross sections and diffusion coefficients associated with the reactivity devices required for reactor calculations.

  10. Utilization techniques for the advanced nuclear materials database system 'Data-Free-Way'

    International Nuclear Information System (INIS)

    Four organizations, the National Research Institute for Metals (NRIM), the Japan Atomic Energy Research Institute (JAERI), the Japan Nuclear Fuel Cycle Development Institute (JNC) and the Japan Science and Technology Corporation (JST), conducted a second-period joint research effort from FY 1995 to FY 1999 to develop utilization techniques for the advanced nuclear materials database system 'Data-Free-Way' (DFW) and to make the system more useful in supporting research and development on nuclear materials. NRIM set out to build up a data system on diffusion and nuclear data by developing utilization techniques for diffusion information on steels and aluminum, and for nuclear data on materials, within its own subsystem while participating in the expansion of the DFW. In addition, NRIM joined a project on wide-area broadband network applications agreed upon at the G7, using the technologies it had cultivated, to investigate network application technology with Michigan State University under a JST cooperative assistance program, to apply its long-accumulated results on CCT diagrams for welding and the forecasting of welding heat history, and to develop a simulator that assists in determining optimum welding conditions. (G.K.)

  11. Ranking Query Results in Web Databases using Advanced User and Query Dependent Approach

    Directory of Open Access Journals (Sweden)

    R. Madhukanth, K. Durga Prasad, Betam Suresh

    2013-08-01

    Full Text Available - Data mining is a process used by companies to turn raw data into useful information. It can be a cause for concern when only selected information, not representative of the overall sample group, is used to prove a certain hypothesis. Many algorithms are applied to retrieve exactly the data the user needs. The most common everyday example is the Google search engine: Google owes much of its popularity over other search engines to the efficient data mining algorithms behind it. Search comes in two forms, user-dependent and query-dependent. We discuss these two techniques and present an advanced search approach that we show to be more efficient than the existing techniques. Data mining is employed here, and a ranking algorithm associated with the workload is used to order the results of queries the user issues against the raw database, so that the output the user expects is delivered efficiently.
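The workload idea can be sketched roughly as follows. The weighting scheme here (attribute frequency in past queries) is an assumption chosen for illustration; the paper's actual user- and query-dependent ranking function is richer.

```python
# Toy workload-based ranking: attributes that appear often in past user
# queries (the workload) get higher weights, so results matching the
# current query on heavily-weighted attributes rank first.
from collections import Counter

workload = [["make", "price"], ["make"], ["price"], ["make", "year"]]
weights = Counter(attr for q in workload for attr in q)  # make:3, price:2, year:1

def score(record, query):
    """Sum the workload weights of attributes on which record matches query."""
    return sum(weights[a] for a, v in query.items() if record.get(a) == v)

records = [
    {"make": "ford", "price": "low", "year": 2010},
    {"make": "ford", "price": "high", "year": 2012},
]
query = {"make": "ford", "price": "low"}
ranked = sorted(records, key=lambda r: score(r, query), reverse=True)
```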

  12. Advanced technologies for scalable ATLAS conditions database access on the grid

    CERN Document Server

    Basset, R; Dimitrov, G; Girone, M; Hawkings, R; Nevski, P; Valassi, A; Vaniachine, A; Viegas, F; Walker, R; Wong, A

    2010-01-01

    During massive data reprocessing operations an ATLAS Conditions Database application must support concurrent access from numerous ATLAS data processing jobs running on the Grid. By simulating realistic work-flow, ATLAS database scalability tests provided feedback for Conditions Db software optimization and allowed precise determination of required distributed database resources. In distributed data processing one must take into account the chaotic nature of Grid computing characterized by peak loads, which can be much higher than average access rates. To validate database performance at peak loads, we tested database scalability at very high concurrent jobs rates. This has been achieved through coordinated database stress tests performed in series of ATLAS reprocessing exercises at the Tier-1 sites. The goal of database stress tests is to detect scalability limits of the hardware deployed at the Tier-1 sites, so that the server overload conditions can be safely avoided in a production environment. Our analysi...

  13. A web-accessible content-based cervicographic image retrieval system

    Science.gov (United States)

    Xue, Zhiyun; Long, L. Rodney; Antani, Sameer; Jeronimo, Jose; Thoma, George R.

    2008-03-01

    Content-based image retrieval (CBIR) is the process of retrieving images by directly using image visual characteristics. In this paper, we present a prototype system implemented for CBIR for a uterine cervix image (cervigram) database. This cervigram database is a part of data collected in a multi-year longitudinal effort by the National Cancer Institute (NCI), and archived by the National Library of Medicine (NLM), for the study of the origins of, and factors related to, cervical precancer/cancer. Users may access the system with any Web browser. The system is built with a distributed architecture which is modular and expandable; the user interface is decoupled from the core indexing and retrieving algorithms, and uses open communication standards and open source software. The system tries to bridge the gap between a user's semantic understanding and image feature representation, by incorporating the user's knowledge. Given a user-specified query region, the system returns the most similar regions from the database, with respect to attributes of color, texture, and size. Experimental evaluation of the retrieval performance of the system on "groundtruth" test data illustrates its feasibility to serve as a possible research tool to aid the study of the visual characteristics of cervical neoplasia.
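The retrieval step can be sketched with a much simpler feature than the system actually uses: below, each region is reduced to a toy normalized color histogram and compared by histogram intersection. The feature vectors and region names are invented; the real system also matches on texture and size.

```python
# Toy CBIR matching: rank stored regions by histogram-intersection
# similarity to a user-specified query region. Features are invented.
def histogram_intersection(h1, h2):
    """Similarity in [0, 1] for two normalized histograms."""
    return sum(min(a, b) for a, b in zip(h1, h2))

def most_similar(query, regions):
    return max(regions, key=lambda r: histogram_intersection(query["hist"], r["hist"]))

query = {"hist": [0.6, 0.3, 0.1]}
regions = [
    {"id": "r1", "hist": [0.5, 0.4, 0.1]},
    {"id": "r2", "hist": [0.1, 0.2, 0.7]},
]
best = most_similar(query, regions)
```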

  14. Dynamic Grouping of Web Users Based on Their Web Access Patterns using ART1 Neural Network Clustering Algorithm

    CERN Document Server

    Ramya, C; Shreedhara, K S

    2012-01-01

    In this paper, we propose ART1 neural network clustering algorithm to group users according to their Web access patterns. We compare the quality of clustering of our ART1 based clustering technique with that of the K-Means and SOM clustering algorithms in terms of inter-cluster and intra-cluster distances. The results show the average inter-cluster distance of ART1 is high compared to K-Means and SOM when there are fewer clusters. As the number of clusters increases, average inter-cluster distance of ART1 is low compared to K-Means and SOM which indicates the high quality of clusters formed by our approach.
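The two quality measures used in this comparison can be computed as follows (plain-Python sketch; the `clusters` value is a toy example, not the paper's Web-log data). Intra-cluster distance measures compactness (lower is better); inter-cluster distance measures separation (higher is better).

```python
# Average intra-cluster distance (points to their centroid) and average
# inter-cluster distance (between cluster centroids).
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def centroid(points):
    return [sum(c) / len(points) for c in zip(*points)]

def intra_cluster(clusters):
    total, n = 0.0, 0
    for pts in clusters:
        c = centroid(pts)
        total += sum(dist(p, c) for p in pts)
        n += len(pts)
    return total / n

def inter_cluster(clusters):
    cents = [centroid(pts) for pts in clusters]
    pairs = [(i, j) for i in range(len(cents)) for j in range(i + 1, len(cents))]
    return sum(dist(cents[i], cents[j]) for i, j in pairs) / len(pairs)

clusters = [[(0, 0), (0, 2)], [(10, 0), (10, 2)]]
```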

  15. Design and advancement of component reliability database management system for NPP

    International Nuclear Information System (INIS)

    KAERI is constructing a component reliability database for the YGN4 nuclear power plant. This paper describes the development of the data management, data statistics, and expert query tools that operate on the component reliability database. The system runs in an intranet environment and is used to analyze failure modes and failure severity in order to compute component failure rates. We have calculated YGN4 component failure rates with these tools and are analyzing YGN3 component data using this database management tool

  16. Factors Influencing Webmasters and the Level of Web Accessibility and Section 508 Compliance at SACS Accredited Postsecondary Institutions: A Study Using the Theory of Planned Behavior

    Science.gov (United States)

    Freeman, Misty Danielle

    2013-01-01

    The purpose of this research was to explore Webmasters' behaviors and factors that influence Web accessibility at postsecondary institutions. Postsecondary institutions that were accredited by the Southern Association of Colleges and Schools were used as the population. The study was based on the theory of planned behavior, and Webmasters'…

  17. Using Web Ontology Language to Integrate Heterogeneous Databases in the Neurosciences

    OpenAIRE

    Lam, Hugo Y.K.; Marenco, Luis; Shepherd, Gordon M.; Miller, Perry L.; Cheung, Kei-Hoi

    2006-01-01

    Integrative neuroscience involves the integration and analysis of diverse types of neuroscience data involving many different experimental techniques. This data will increasingly be distributed across many heterogeneous databases that are web-accessible. Currently, these databases do not expose their schemas (database structures) and their contents to web applications/agents in a standardized, machine-friendly way. This limits database interoperation. To address this problem, we describe a pi...

  18. Advanced technologies for scalable ATLAS conditions database access on the grid

    Science.gov (United States)

    Basset, R.; Canali, L.; Dimitrov, G.; Girone, M.; Hawkings, R.; Nevski, P.; Valassi, A.; Vaniachine, A.; Viegas, F.; Walker, R.; Wong, A.

    2010-04-01

    During massive data reprocessing operations an ATLAS Conditions Database application must support concurrent access from numerous ATLAS data processing jobs running on the Grid. By simulating realistic work-flow, ATLAS database scalability tests provided feedback for Conditions Db software optimization and allowed precise determination of required distributed database resources. In distributed data processing one must take into account the chaotic nature of Grid computing characterized by peak loads, which can be much higher than average access rates. To validate database performance at peak loads, we tested database scalability at very high concurrent jobs rates. This has been achieved through coordinated database stress tests performed in series of ATLAS reprocessing exercises at the Tier-1 sites. The goal of database stress tests is to detect scalability limits of the hardware deployed at the Tier-1 sites, so that the server overload conditions can be safely avoided in a production environment. Our analysis of server performance under stress tests indicates that Conditions Db data access is limited by the disk I/O throughput. An unacceptable side-effect of the disk I/O saturation is a degradation of the WLCG 3D Services that update Conditions Db data at all ten ATLAS Tier-1 sites using the technology of Oracle Streams. To avoid such bottlenecks we prototyped and tested a novel approach for database peak load avoidance in Grid computing. Our approach is based upon the proven idea of pilot job submission on the Grid: instead of the actual query, an ATLAS utility library sends to the database server a pilot query first.

  19. Advanced technologies for scalable ATLAS conditions database access on the grid

    International Nuclear Information System (INIS)

    During massive data reprocessing operations an ATLAS Conditions Database application must support concurrent access from numerous ATLAS data processing jobs running on the Grid. By simulating realistic work-flow, ATLAS database scalability tests provided feedback for Conditions Db software optimization and allowed precise determination of required distributed database resources. In distributed data processing one must take into account the chaotic nature of Grid computing characterized by peak loads, which can be much higher than average access rates. To validate database performance at peak loads, we tested database scalability at very high concurrent jobs rates. This has been achieved through coordinated database stress tests performed in series of ATLAS reprocessing exercises at the Tier-1 sites. The goal of database stress tests is to detect scalability limits of the hardware deployed at the Tier-1 sites, so that the server overload conditions can be safely avoided in a production environment. Our analysis of server performance under stress tests indicates that Conditions Db data access is limited by the disk I/O throughput. An unacceptable side-effect of the disk I/O saturation is a degradation of the WLCG 3D Services that update Conditions Db data at all ten ATLAS Tier-1 sites using the technology of Oracle Streams. To avoid such bottlenecks we prototyped and tested a novel approach for database peak load avoidance in Grid computing. Our approach is based upon the proven idea of pilot job submission on the Grid: instead of the actual query, an ATLAS utility library sends to the database server a pilot query first.
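The pilot-query idea can be sketched as follows. The `Server` class, its capacity check, and the retry loop are simplifications invented for illustration; in ATLAS the pilot query is a lightweight request issued by a utility library against the real Oracle server.

```python
# Peak-load avoidance via pilot queries: before running the expensive
# real query, the client issues a cheap pilot probe; if the server
# reports overload, the client backs off instead of piling on.
class Server:
    def __init__(self, capacity):
        self.capacity = capacity
        self.active = 0

    def pilot(self):
        """Cheap probe: is there headroom for another real query?"""
        return self.active < self.capacity

    def run_query(self):
        self.active += 1
        # ... execute the actual conditions-data query here ...

def submit(server, retries=3):
    for _ in range(retries):
        if server.pilot():           # only the pilot touches the server first
            server.run_query()
            return True
        # back off before probing again (real jitter/sleep omitted)
    return False

srv = Server(capacity=2)
results = [submit(srv) for _ in range(3)]
```

The third job never issues its heavy query: the pilot keeps reporting overload, so the server stays within capacity.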

  20. The new ALICE DQM client: a web access to ROOT-based objects

    Science.gov (United States)

    von Haller, B.; Carena, F.; Carena, W.; Chapeland, S.; Chibante Barroso, V.; Costa, F.; Delort, C.; Dénes, E.; Diviá, R.; Fuchs, U.; Niedziela, J.; Simonetti, G.; Soós, C.; Telesca, A.; Vande Vyvre, P.; Wegrzynek, A.

    2015-12-01

    A Large Ion Collider Experiment (ALICE) is the heavy-ion detector designed to study the physics of strongly interacting matter and the quark-gluon plasma at the CERN Large Hadron Collider (LHC). The online Data Quality Monitoring (DQM) plays an essential role in the experiment operation by providing shifters with immediate feedback on the data being recorded in order to quickly identify and overcome problems. Immediate access to the DQM results is needed not only by shifters in the control room but also by detector experts worldwide. As a consequence, a new web application has been developed to dynamically display and manipulate the ROOT-based objects produced by the DQM system in a flexible and user-friendly interface. The architecture and design of the tool, its main features and the technologies that were used, both on the server and the client side, are described. In particular, we detail how we took advantage of the most recent ROOT JavaScript I/O and web server library to give interactive access to ROOT objects stored in a database. We describe as well the use of modern web techniques and packages such as AJAX, DHTMLX and jQuery, which have been instrumental in the successful implementation of a reactive and efficient application. We finally present the resulting application and how code quality was ensured. We conclude with a roadmap for future technical and functional developments.

  1. Relational databases

    CERN Document Server

    Bell, D A

    1986-01-01

    Relational Databases explores the major advances in relational databases and provides a balanced analysis of the state of the art in relational databases. Topics covered include capture and analysis of data placement requirements; distributed relational database systems; data dependency manipulation in database schemata; and relational database support for computer graphics and computer aided design. This book is divided into three sections and begins with an overview of the theory and practice of distributed systems, using the example of INGRES from Relational Technology as illustration. The

  2. Recent Advances in the MagIC Online Database: Rock- and Paleomagnetic Data Archiving, Analysis, and Visualization

    Science.gov (United States)

    Minnett, R.; Koppers, A. A.; Tauxe, L.; Constable, C.

    2010-12-01

    The Magnetics Information Consortium (MagIC) is deeply committed to empowering the paleomagnetic, rock magnetic, and affiliated scientific communities with an invaluable wealth of peer-reviewed published raw data and interpretations, along with online analytics and visualization tools. The MagIC Online Database (http://earthref.org/MAGIC/) has been designed and implemented with the goal of rapidly advancing science by providing the scientific community with a free and easily accessible means for attacking some of the most challenging research problems in Earth sciences. Such a database must not only allow for data to be contributed and indefinitely archived, but also provide a powerful suite of highly integrated tools for data retrieval, analysis, and visualization. MagIC has already successfully addressed many of the issues of contributing and archiving vast quantities of heterogeneous data by creating a flexible and comprehensive Oracle Database schema professionally maintained at the San Diego Supercomputer Center (SDSC) with off-site back-ups at the College of Oceanic and Atmospheric Sciences (COAS) in Oregon. MagIC is now focused on developing and improving the tools for retrieving and visualizing the data from over four thousand published rock- and paleomagnetic studies. New features include a much more responsive online interface, result set filtering, integrated and asynchronous plotting and mapping, advanced saving options, and a rich personalized tabular layout. The MagIC Database is continuously striving to enrich and promote Rock- and Paleomagnetic research by providing the scientific community with the tools for retrieving and analyzing previous studies, and for organizing and collaborating on new activities. The MagIC Database Search Interface

  3. Recent advances in the compilation of holocene relative Sea-level database in North America

    Science.gov (United States)

    Horton, B.; Vacchi, M.; Engelhart, S. E.; Nikitina, D.

    2015-12-01

    Reconstruction of relative sea level (RSL) has implications for the investigation of crustal movements, the calibration of earth rheology models and the reconstruction of ice sheets. In recent years, efforts have been made to create RSL databases following a standardized methodology. These regional databases provided a framework for developing our understanding of the primary mechanisms of RSL change since the Last Glacial Maximum, and a long-term baseline against which to gauge changes in sea level during the 20th century and forecasts for the 21st. Here we present two quality-controlled Holocene RSL databases compiled for North America. Along the Pacific coast of North America (British Columbia, Canada to California, USA), our re-evaluation of sea-level indicators from geological and archaeological investigations yields 841 RSL data points, mainly from salt and freshwater wetlands or adjacent estuarine sediment as well as from isolation basins. Along the Atlantic coast of North America (Hudson Bay, Canada to South Carolina, USA), we are currently compiling a database including more than 2000 RSL data points from isolation basins, salt and freshwater wetlands, beach ridges and intertidal deposits. We outline the difficulties encountered, and the solutions we adopted, in compiling databases across such different depositional environments. We address complex tectonics and present a framework for comparing such a large variety of RSL data points. We discuss the implications of our results for glacio-isostatic adjustment (GIA) models in the two studied regions.

  4. mlstdbNet – distributed multi-locus sequence typing (MLST) databases

    OpenAIRE

    Maiden Martin CJ; Chan Man-Suen; Jolley Keith A

    2004-01-01

    Abstract Background Multi-locus sequence typing (MLST) is a method of typing that facilitates the discrimination of microbial isolates by comparing the sequences of housekeeping gene fragments. The mlstdbNet software enables the implementation of distributed web-accessible MLST databases that can be linked widely over the Internet. Results The software enables multiple isolate databases to query a single profiles database that contains allelic profile and sequence definitions. This separation...

  5. Neuron-Miner: An Advanced Tool for Morphological Search and Retrieval in Neuroscientific Image Databases.

    Science.gov (United States)

    Conjeti, Sailesh; Mesbah, Sepideh; Negahdar, Mohammadreza; Rautenberg, Philipp L; Zhang, Shaoting; Navab, Nassir; Katouzian, Amin

    2016-10-01

    The steadily growing amount of digital neuroscientific data demands a reliable, systematic, and computationally effective retrieval algorithm. In this paper, we present Neuron-Miner, a tool for fast and accurate reference-based retrieval within neuron image databases. The proposed algorithm is built upon a hashing (search and retrieval) technique employing multiple unsupervised random trees, collectively called Hashing Forests (HF). The HF are trained to parse the neuromorphological space hierarchically and to preserve the inherent neuron neighborhoods while encoding them with compact binary codewords. We further introduce an inverse-coding formulation within HF to effectively mitigate pairwise neuron similarity comparisons, thus allowing scalability to massive databases with little additional time overhead. The proposed hashing tool approximates the true neuromorphological neighborhood more closely than existing generalized hashing methods, with better retrieval and ranking performance. This is exhaustively validated by quantifying the results over 31266 neuron reconstructions from the Neuromorpho.org dataset, curated from 147 different archives. We envisage that finding and ranking similar neurons through reference-based querying via Neuron-Miner will assist neuroscientists in objectively understanding the relationship between neuronal structure and function, with applications in comparative anatomy and diagnosis. PMID:27155864
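The hashing-based retrieval scheme can be sketched in miniature: encode each item as a compact binary codeword, then rank candidates by Hamming distance to the query's codeword. Note the substitution: Hashing Forests learn unsupervised random trees, whereas this sketch uses a few fixed hyperplanes on toy 3-D vectors purely to keep the example small and deterministic.

```python
# Binary-code retrieval sketch: encode vectors to codewords via fixed
# separating hyperplanes (an assumption standing in for the learned
# Hashing Forests), then rank database items by Hamming distance.
PLANES = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 0),
          (1, 0, 1), (0, 1, 1), (1, -1, 0), (0, 1, -1)]

def encode(vec):
    """One bit per hyperplane: which side of the plane the vector lies on."""
    return tuple(int(sum(p * v for p, v in zip(plane, vec)) >= 0)
                 for plane in PLANES)

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def retrieve(query, database, k=2):
    q = encode(query)
    return sorted(database, key=lambda item: hamming(q, encode(item)))[:k]

db = [(1.0, 0.0, 0.0), (0.9, 0.1, 0.0), (0.0, 0.0, 1.0)]
nearest = retrieve((1.0, 0.05, 0.0), db, k=1)
```

Because codewords are short bit tuples, comparing a query against a large database is cheap, which is the scalability argument the abstract makes.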

  6. Principles and techniques in the design of ADMS+. [advanced data-base management system

    Science.gov (United States)

    Roussopoulos, Nick; Kang, Hyunchul

    1986-01-01

    'ADMS+/-' is an advanced database management system whose architecture integrates the ADMS+ mainframe database system with a large number of workstation database systems, designated ADMS-; no communications exist between these workstations. The use of this system radically decreases the response time of locally processed queries, since each workstation runs in single-user mode and no dynamic security checking is required for the downloaded portion of the database. The deferred update strategy used reduces the overhead due to update synchronization in message traffic.

  7. The Spanish National Reference Database for Ionizing Radiations (BANDRRI)

    International Nuclear Information System (INIS)

    The Spanish National Reference Database for Ionizing Radiations (BANDRRI) is being implemented by a research team in the frame of a joint project between CIEMAT (Unidad de Metrologia de Radiaciones Ionizantes and Direccion de Informatica) and the Universidad Nacional de Educacion a Distancia (UNED, Departamento de Mecanica y Departamento de Fisica de Materiales). This paper presents the main objectives of BANDRRI, its dynamic and relational data base structure, interactive Web accessibility and its main radionuclide-related contents at this moment

  8. The Spanish National Reference Database for Ionizing Radiations (BANDRRI)

    Energy Technology Data Exchange (ETDEWEB)

    Los Arcos, J.M. E-mail: arcos@ciemat.es; Bailador, A.; Gonzalez, A.; Gonzalez, C.; Gorostiza, C.; Ortiz, F.; Sanchez, E.; Shaw, M.; Williart, A

    2000-03-01

    The Spanish National Reference Database for Ionizing Radiations (BANDRRI) is being implemented by a research team in the frame of a joint project between CIEMAT (Unidad de Metrologia de Radiaciones Ionizantes and Direccion de Informatica) and the Universidad Nacional de Educacion a Distancia (UNED, Departamento de Mecanica y Departamento de Fisica de Materiales). This paper presents the main objectives of BANDRRI, its dynamic and relational data base structure, interactive Web accessibility and its main radionuclide-related contents at this moment.

  9. Heart research advances using database search engines, Human Protein Atlas and the Sydney Heart Bank.

    Science.gov (United States)

    Li, Amy; Estigoy, Colleen; Raftery, Mark; Cameron, Darryl; Odeberg, Jacob; Pontén, Fredrik; Lal, Sean; Dos Remedios, Cristobal G

    2013-10-01

    This Methodological Review is intended as a guide for research students who may have just discovered a human "novel" cardiac protein, but it may also help hard-pressed reviewers of journal submissions on a "novel" protein reported in an animal model of human heart failure. Whether you are an expert or not, you may know little or nothing about this particular protein of interest. In this review we provide a strategic guide on how to proceed. We ask: How do you discover what has been published (even in an abstract or research report) about this protein? Everyone knows how to undertake literature searches using PubMed and Medline but these are usually encyclopaedic, often producing long lists of papers, most of which are either irrelevant or only vaguely relevant to your query. Relatively few will be aware of more advanced search engines such as Google Scholar and even fewer will know about Quertle. Next, we provide a strategy for discovering if your "novel" protein is expressed in the normal, healthy human heart, and if it is, we show you how to investigate its subcellular location. This can usually be achieved by visiting the website "Human Protein Atlas" without doing a single experiment. Finally, we provide a pathway to discovering if your protein of interest changes its expression level with heart failure/disease or with ageing. PMID:23856366

  10. Mining Sequential Patterns in Dense Databases

    Directory of Open Access Journals (Sweden)

    Karam Gouda

    2011-03-01

    Full Text Available Sequential pattern mining is an important data mining problem with broad applications, including the analysis of customer purchase patterns, Web access patterns, DNA analysis, and so on. We show that on dense databases, a typical algorithm like Spade tends to lose its efficiency. Spade is based on the use of lists containing the localization of the occurrences of a pattern in the sequences, and these lists are not appropriate in the case of dense databases. In this paper we present an adaptation of the well-known diffset data representation [12] to the Spade algorithm. The new version is called dSpade. As diffsets have shown high performance for mining frequent itemsets in dense transactional databases, experimental evaluation confirms that dSpade is likewise suitable for mining dense sequence databases.
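The diffset idea that dSpade adapts to sequences can be illustrated on itemsets. Instead of storing each pattern's full tidset (the set of transaction ids containing it), store only the difference from its prefix's tidset, which is small precisely when the database is dense. The tiny tidsets below are invented for illustration.

```python
# Diffset illustration (dEclat-style): support is propagated via small
# set differences rather than full tidset intersections.
tidsets = {
    "A": {1, 2, 3, 4, 5},
    "B": {1, 2, 3, 4},
    "C": {2, 3, 4, 5},
}

# d(PX) = t(P) - t(X); support(PX) = support(P) - |d(PX)|
d_AB = tidsets["A"] - tidsets["B"]            # {5}
sup_AB = len(tidsets["A"]) - len(d_AB)        # support of AB
d_AC = tidsets["A"] - tidsets["C"]            # {1}

# Extending AB by C: d(ABC) = d(AC) - d(AB); support drops by |d(ABC)|
d_ABC = d_AC - d_AB
sup_ABC = sup_AB - len(d_ABC)

# Cross-check against the direct tidset computation
t_ABC = tidsets["A"] & tidsets["B"] & tidsets["C"]
```

On a dense database most transactions contain most items, so diffsets like `d_AB` stay tiny while full tidsets stay large, which is the efficiency argument behind dSpade.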

  11. What do you see in a digital color dot picture such as the Ishihara pseudo-isochromatic plates? Web Accessibility Palette (WAP)

    Science.gov (United States)

    Ichihara, Yasuyo G.

    2000-12-01

    Internet imaging is used as interactive visual communication. It differs from other electronic imaging fields in that an image is transported from one client to many others. If you and I each had different color vision, we might see the same Internet image differently. So what do you see in a digital color dot picture such as the Ishihara pseudoisochromatic plates? The Ishihara pseudoisochromatic test is the most widely used screening test for red-green color deficiency. The full version contains 38 plates. Plates 18-21 are hidden digit designs; for example, plate 20 carries a hidden digit design (the number 45) that cannot be seen by normal trichromats but can be distinguished by most color-deficient observers. In this study, we present a new digital color palette: the Web Accessibility Palette, with which the same information in Internet imaging can be seen correctly by persons of any color vision type. For this study, we measured the Ishihara pseudoisochromatic test using the new Minolta 2D colorimeter system, CL1040i, which can define all pixels in a 4 cm x 4 cm square. From the results, color groups of 8 to 10 colors in the Ishihara plates can be seen to lie on isochromatic lines of the CIE-xy color space. On each plate, the form of a number is composed of 4 colors and the background is composed of the remaining 5 colors. For normal trichromats, it is difficult to separate the 4-color group that makes up the form of the number from the 5-color group of the background. We also found that, for normal trichromats, highly salient colors like orange and red fall in the warm color group and are distinguished from the cool color group of blue, green and gray. From the results of our analysis of the Ishihara pseudoisochromatic test, we suggest that the Web Accessibility Palette consist of 4 colors.

  12. Generation of pedigree diagrams for web display using scalable vector graphics from a clinical trials database.

    OpenAIRE

    Fernando, S. K.; Brandt, C.; Nadkarni, P.

    2001-01-01

    The standard method of studying inherited disease is to observe its pattern of distribution in families, that is, its pattern in a pedigree. For clinical studies focused on inherited disease, a pedigree diagram is a valuable visual tool for the display of inheritance patterns. We describe the creation of a web-based pedigree display module for Trial/DB, a Web accessible database developed at the Yale Center for Medical Informatics (YCMI) to support clinical research studies. The pedigree diag...

  13. GeneLink: a database to facilitate genetic studies of complex traits

    OpenAIRE

    Wolfsberg Tyra G; Trout Ken; Ibay Grace; Freas-Lutz Diana; Klein Alison P; Jones Mary; Duggal Priya; Umayam Lowell; Gildea Derek; Masiello Anthony; Gillanders Elizabeth M; Trent Jeffrey M; Bailey-Wilson Joan E; Baxevanis Andreas D

    2004-01-01

    Abstract Background In contrast to gene-mapping studies of simple Mendelian disorders, genetic analyses of complex traits are far more challenging, and high quality data management systems are often critical to the success of these projects. To minimize the difficulties inherent in complex trait studies, we have developed GeneLink, a Web-accessible, password-protected Sybase database. Results GeneLink is a powerful tool for complex trait mapping, enabling genotypic data to be easily merged wi...

  14. KALIMER database development

    International Nuclear Information System (INIS)

    KALIMER database is an advanced database for the integrated management of liquid metal reactor design technology development, using Web applications. The KALIMER design database is composed of a results database, Inter-Office Communication (IOC), a 3D CAD database, and a reserved documents database. The results database holds research results from all phases of liquid metal reactor design technology development under the mid-term and long-term nuclear R and D program. IOC is a linkage control system between sub-projects to share and integrate the research results for KALIMER. The 3D CAD database gives a schematic overview of the KALIMER design structure. And the reserved documents database was developed to manage the various documents and reports produced since project accomplishment

  15. An Enhanced Framework with Advanced Study to Incorporate the Searching of E-Commerce Products Using Modernization of Database Queries

    Directory of Open Access Journals (Sweden)

    Mohd Muntjir

    2016-05-01

    Full Text Available This study aims to inspect and evaluate the integration of database queries and their use in e-commerce product searches. E-commerce has been one of the most prominent trends to emerge in the business world over the past decade, and it has gained tremendous popularity because it offers greater flexibility, cost efficiency, effectiveness, and convenience to both consumers and businesses. A large number of retail companies have adopted this technology in order to expand their operations across the globe; hence they need highly responsive and integrated databases. In this regard, the approach of database queries is found to be the most appropriate and adequate technique, as it simplifies searches for e-commerce products.

  16. NATIONAL CARBON SEQUESTRATION DATABASE AND GEOGRAPHIC INFORMATION SYSTEM (NATCARB) FORMER TITLE-MIDCONTINENT INTERACTIVE DIGITAL CARBON ATLAS AND RELATIONAL DATABASE (MIDCARB)

    Energy Technology Data Exchange (ETDEWEB)

    Timothy R. Carr

    2004-07-16

    This annual report describes progress in the third year of the three-year project entitled ''Midcontinent Interactive Digital Carbon Atlas and Relational Database (MIDCARB)''. The project assembled a consortium of five states (Indiana, Illinois, Kansas, Kentucky and Ohio) to construct an online distributed Relational Database Management System (RDBMS) and Geographic Information System (GIS) covering aspects of carbon dioxide (CO{sub 2}) geologic sequestration (http://www.midcarb.org). The system links the five states in the consortium into a coordinated regional database system consisting of datasets useful to industry, regulators and the public. The project has been extended and expanded as a ''NATional CARBon Sequestration Database and Geographic Information System (NATCARB)'' to provide national coverage across the Regional CO{sub 2} Partnerships, which currently cover 40 states (http://www.natcarb.org). Advanced distributed computing solutions link database servers across the five states and other publicly accessible servers (e.g., USGS) into a single system where data is maintained and enhanced at the local level but is accessed and assembled through a single Web portal and can be queried, assembled, analyzed and displayed. This project has improved the flow of data across servers and increased the amount and quality of available digital data. The online tools used in the project have improved in stability and speed in order to provide real-time display and analysis of CO{sub 2} sequestration data. The move away from direct database access to web access through eXtensible Markup Language (XML) has increased stability and security while decreasing management overhead. The MIDCARB viewer has been simplified to provide improved display and organization of the more than 125 layers and data tables that have been generated as part of the project. The MIDCARB project is a functional demonstration of distributed management of

  17. An Enhanced Framework with Advanced Study to Incorporate the Searching of E-Commerce Products Using Modernization of Database Queries

    OpenAIRE

    Mohd Muntjir; Ahmad Tasnim Siddiqui

    2016-01-01

    This study aims to inspect and evaluate the integration of database queries and their use in e-commerce product searches. E-commerce has been one of the most prominent trends to emerge in the business world over the past decade, and it has gained tremendous popularity because it offers greater flexibility, cost efficiency, effectiveness, and convenience to both consumers and businesses. A large number of retail companies have adopted this technology in ord...

  18. Persistence of Hyperinvasive Meningococcal Strain Types during Global Spread as Recorded in the PubMLST Database

    OpenAIRE

    Eleanor R. Watkins; Maiden, Martin C. J.

    2012-01-01

    Neisseria meningitidis is a major cause of septicaemia and meningitis worldwide. Most disease in Europe, the Americas and Australasia is caused by meningococci expressing serogroup B capsules, but no vaccine against this polysaccharide exists. Potential candidates for ‘serogroup B substitute’ vaccines are outer membrane protein antigens including the typing antigens PorA and FetA. The web-accessible PubMLST database (www.pubmlst.org) was used to investigate the temporal and geographical patte...

  19. A Mode Propagation Database Suitable for Code Validation Utilizing the NASA Glenn Advanced Noise Control Fan and Artificial Sources

    Science.gov (United States)

    Sutliff, Daniel L.

    2014-01-01

    The NASA Glenn Research Center's Advanced Noise Control Fan (ANCF) was developed in the early 1990s to provide a convenient test bed to measure and understand fan-generated acoustics, duct propagation, and radiation to the farfield. A series of tests were performed primarily for the use of code validation and tool validation. Rotating Rake mode measurements were acquired for parametric sets of: (i) mode blockage, (ii) liner insertion loss, (iii) short ducts, and (iv) mode reflection.

  20. The Salmonella In Silico Typing Resource (SISTR): An Open Web-Accessible Tool for Rapidly Typing and Subtyping Draft Salmonella Genome Assemblies.

    Directory of Open Access Journals (Sweden)

    Catherine E Yoshida

    Full Text Available For nearly 100 years serotyping has been the gold standard for the identification of Salmonella serovars. Despite the increasing adoption of DNA-based subtyping approaches, serotype information remains a cornerstone in food safety and public health activities aimed at reducing the burden of salmonellosis. At the same time, recent advances in whole-genome sequencing (WGS) promise to revolutionize our ability to perform advanced pathogen characterization in support of improved source attribution and outbreak analysis. We present the Salmonella In Silico Typing Resource (SISTR), a bioinformatics platform for rapidly performing simultaneous in silico analyses for several leading subtyping methods on draft Salmonella genome assemblies. In addition to performing serovar prediction by genoserotyping, this resource integrates sequence-based typing analyses for: Multi-Locus Sequence Typing (MLST), ribosomal MLST (rMLST), and core genome MLST (cgMLST). We show how phylogenetic context from cgMLST analysis can supplement the genoserotyping analysis and increase the accuracy of in silico serovar prediction to over 94.6% on a dataset comprised of 4,188 finished genomes and WGS draft assemblies. In addition to allowing analysis of user-uploaded whole-genome assemblies, the SISTR platform incorporates a database comprising over 4,000 publicly available genomes, allowing users to place their isolates in a broader phylogenetic and epidemiological context. The resource incorporates several metadata driven visualizations to examine the phylogenetic, geospatial and temporal distribution of genome-sequenced isolates. As sequencing of Salmonella isolates at public health laboratories around the world becomes increasingly common, rapid in silico analysis of minimally processed draft genome assemblies provides a powerful approach for molecular epidemiology in support of public health investigations. Moreover, this type of integrated analysis using multiple sequence

  1. The Salmonella In Silico Typing Resource (SISTR): An Open Web-Accessible Tool for Rapidly Typing and Subtyping Draft Salmonella Genome Assemblies.

    Science.gov (United States)

    Yoshida, Catherine E; Kruczkiewicz, Peter; Laing, Chad R; Lingohr, Erika J; Gannon, Victor P J; Nash, John H E; Taboada, Eduardo N

    2016-01-01

    For nearly 100 years serotyping has been the gold standard for the identification of Salmonella serovars. Despite the increasing adoption of DNA-based subtyping approaches, serotype information remains a cornerstone in food safety and public health activities aimed at reducing the burden of salmonellosis. At the same time, recent advances in whole-genome sequencing (WGS) promise to revolutionize our ability to perform advanced pathogen characterization in support of improved source attribution and outbreak analysis. We present the Salmonella In Silico Typing Resource (SISTR), a bioinformatics platform for rapidly performing simultaneous in silico analyses for several leading subtyping methods on draft Salmonella genome assemblies. In addition to performing serovar prediction by genoserotyping, this resource integrates sequence-based typing analyses for: Multi-Locus Sequence Typing (MLST), ribosomal MLST (rMLST), and core genome MLST (cgMLST). We show how phylogenetic context from cgMLST analysis can supplement the genoserotyping analysis and increase the accuracy of in silico serovar prediction to over 94.6% on a dataset comprised of 4,188 finished genomes and WGS draft assemblies. In addition to allowing analysis of user-uploaded whole-genome assemblies, the SISTR platform incorporates a database comprising over 4,000 publicly available genomes, allowing users to place their isolates in a broader phylogenetic and epidemiological context. The resource incorporates several metadata driven visualizations to examine the phylogenetic, geospatial and temporal distribution of genome-sequenced isolates. As sequencing of Salmonella isolates at public health laboratories around the world becomes increasingly common, rapid in silico analysis of minimally processed draft genome assemblies provides a powerful approach for molecular epidemiology in support of public health investigations. Moreover, this type of integrated analysis using multiple sequence-based methods of sub
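The MLST-style analyses listed above all reduce to comparing allele profiles between isolates. The sketch below (hypothetical loci and allele numbers, not SISTR's actual schema or code) shows how such a profile distance could be computed:

```python
# Toy MLST-style comparison: each isolate is a profile mapping locus -> allele
# number. Distance = number of shared loci with differing alleles; loci missing
# from either profile are ignored. Locus names and allele numbers are invented.

def profile_distance(p1, p2):
    shared = set(p1) & set(p2)
    return sum(1 for locus in shared if p1[locus] != p2[locus])

iso_a = {"aroC": 130, "dnaN": 97, "hemD": 25, "hisD": 125}
iso_b = {"aroC": 130, "dnaN": 97, "hemD": 30, "hisD": 125}

print(profile_distance(iso_a, iso_b))  # 1 locus differs
```

In a cgMLST scheme the same comparison simply runs over thousands of core-genome loci instead of a handful, which is what lets allele profiles place an isolate in a phylogenetic context.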

  2. GALT protein database: querying structural and functional features of GALT enzyme.

    Science.gov (United States)

    d'Acierno, Antonio; Facchiano, Angelo; Marabotti, Anna

    2014-09-01

    Knowledge of the impact of variations on protein structure can enhance the comprehension of the mechanisms of genetic diseases related to that protein. Here, we present a new version of GALT Protein Database, a Web-accessible data repository for the storage and interrogation of structural effects of variations of the enzyme galactose-1-phosphate uridylyltransferase (GALT), the impairment of which leads to classic Galactosemia, a rare genetic disease. This new version of this database now contains the models of 201 missense variants of GALT enzyme, including heterozygous variants, and it allows users not only to retrieve information about the missense variations affecting this protein, but also to investigate their impact on substrate binding, intersubunit interactions, stability, and other structural features. In addition, it allows the interactive visualization of the models of variants collected into the database. We have developed additional tools to improve the use of the database by nonspecialized users. This Web-accessible database (http://bioinformatica.isa.cnr.it/GALT/GALT2.0) represents a model of tools potentially suitable for application to other proteins that are involved in human pathologies and that are subjected to genetic variations. PMID:24990533

  3. CircuitsDB: a database of mixed microRNA/transcription factor feed-forward regulatory circuits in human and mouse

    OpenAIRE

    Friard Olivier; Re Angela; Taverna Daniela; De Bortoli Michele; Corá Davide

    2010-01-01

    Abstract Background Transcription Factors (TFs) and microRNAs (miRNAs) are key players for gene expression regulation in higher eukaryotes. In the last years, a large amount of bioinformatic studies were devoted to the elucidation of transcriptional and post-transcriptional (mostly miRNA-mediated) regulatory interactions, but little is known about the interplay between them. Description Here we describe a dynamic web-accessible database, CircuitsDB, supporting a genome-wide transcriptional an...

  4. Genomic Databases for Crop Improvement

    Directory of Open Access Journals (Sweden)

    David Edwards

    2012-03-01

    Full Text Available Genomics is playing an increasing role in plant breeding and this is accelerating with the rapid advances in genome technology. Translating the vast abundance of data being produced by genome technologies requires the development of custom bioinformatics tools and advanced databases. These range from large generic databases which hold specific data types for a broad range of species, to carefully integrated and curated databases which act as a resource for the improvement of specific crops. In this review, we outline some of the features of plant genome databases, identify specific resources for the improvement of individual crops and comment on the potential future direction of crop genome databases.

  5. Database replication

    OpenAIRE

    Popov, P. T.; Stankovic, V.

    2014-01-01

    A fault-tolerant node for synchronous heterogeneous database replication and a method for performing synchronous heterogeneous database replication at such a node are provided. A processor executes a computer program to generate a series of database transactions to be carried out at the fault-tolerant node. The fault-tolerant node comprises at least two relational database management systems, each of which is a different relational database management system product, each implementing snapsh...

  6. Communicative Databases

    OpenAIRE

    Yu, Kwang-I

    1981-01-01

    A hierarchical organization stores its information in a large number of databases. These databases are interrelated, forming a closely-coupled database system. Traditional information systems and current database management systems do not have a means of expressing these relationships. This thesis describes a model of the information structure of the hierarchical organization that identifies the nature of database relationships. It also describes the design and implementatio...

  7. KALIMER database development (database configuration and design methodology)

    International Nuclear Information System (INIS)

    KALIMER Database is an advanced database for the integrated management of Liquid Metal Reactor design technology development using Web applications. The KALIMER design database consists of a Results Database, Inter-Office Communication (IOC), a 3D CAD Database, a Team Cooperation System, and Reserved Documents. The Results Database holds research results from phase II of Liquid Metal Reactor design technology development under the mid-term and long-term nuclear R and D program. IOC is a linkage control system between sub-projects to share and integrate the research results for KALIMER. The 3D CAD Database is a schematic design overview for KALIMER. The Team Cooperation System informs team members about research cooperation and meetings. Finally, KALIMER Reserved Documents was developed to manage collected data and various documents since project accomplishment. This report describes the features of the hardware and software and the database design methodology for KALIMER

  8. Pathbase: A new reference resource and database for laboratory mouse pathology

    International Nuclear Information System (INIS)

    Pathbase (http://www.pathbase.net) is a web accessible database of histopathological images of laboratory mice, developed as a resource for the coding and archiving of data derived from the analysis of mutant or genetically engineered mice and their background strains. The metadata for the images, which allows retrieval and inter-operability with other databases, is derived from a series of orthogonal ontologies and controlled vocabularies. One of these controlled vocabularies, MPATH, was developed by the Pathbase Consortium as a formal description of the content of mouse histopathological images. The database currently has over 1000 images on-line with 2000 more under curation and presents a paradigm for the development of future databases dedicated to aspects of experimental biology. (authors)

  9. Community Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This excel spreadsheet is the result of merging at the port level of several of the in-house fisheries databases in combination with other demographic databases...

  10. Biofuel Database

    Science.gov (United States)

    Biofuel Database (Web, free access)   This database brings together structural, biological, and thermodynamic data for enzymes that are either in current use or are being considered for use in the production of biofuels.

  11. IAEA/NDS requirements related to database software

    International Nuclear Information System (INIS)

    Full text: The Nuclear Data Section of the IAEA disseminates data to the NDS users through the Internet or on CD-ROMs and diskettes. The OSU Web server on DEC Alpha with OpenVMS and Oracle/DEC DBMS provides, via CGI scripts and FORTRAN retrieval programs, access to the main nuclear databases supported by the networks of Nuclear Reaction Data Centres and Nuclear Structure and Decay Data Centres (CINDA, EXFOR, ENDF, NSR, ENSDF). For Web access to data from other libraries and files, hyper-links to the files stored in ASCII text or other formats are used. Databases on CD-ROM are usually provided with some retrieval system. They are distributed in run-time mode and comply with all license requirements for software used in their development. Although major development work is now done on the PC with MS-Windows and Linux, NDS may not at present, due to some institutional conditions, use these platforms to organize Web access to the data. Starting at the end of 1999, the NDS, in co-operation with other data centres, began to work out a strategy for migrating the main network nuclear databases onto platforms other than DEC Alpha/OpenVMS/DBMS. Because the different co-operating centres have their own preferences for hardware and software, the requirement to provide maximum platform independence for nuclear databases is the most important and desirable feature. This requirement determined some standards for nuclear database software development. Taking into account the present state and future development, these standards can be formulated as follows: 1. All numerical data (experimental, evaluated, recommended values and their uncertainties) prepared for inclusion in the IAEA/NDS nuclear database should be submitted in the form of ASCII text files and will be kept at NDS as master files. 2. Databases with complex structure should be submitted in the form of files with standard SQL statements describing all their components. All extensions of standard SQL
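The requirement that complex databases be submitted as files of standard SQL statements can be sketched as follows. The table name and values here are purely illustrative; the point is only that such a file can be replayed on any SQL engine (here, SQLite in memory):

```python
import sqlite3

# Sketch of a platform-independent submission: the database is shipped as a
# script of standard SQL statements that any engine can replay. The table
# name and numbers below are invented for illustration.
script = """
CREATE TABLE cross_section (nuclide TEXT, energy_ev REAL, barns REAL);
INSERT INTO cross_section VALUES ('U-235', 0.0253, 584.3);
INSERT INTO cross_section VALUES ('U-238', 0.0253, 2.68);
"""

con = sqlite3.connect(":memory:")
con.executescript(script)
rows = con.execute(
    "SELECT nuclide, barns FROM cross_section ORDER BY nuclide"
).fetchall()
print(rows)  # [('U-235', 584.3), ('U-238', 2.68)]
```

Because only standard DDL and INSERT statements appear in the script, the same file loads unchanged under Oracle, PostgreSQL or any other conforming DBMS, which is exactly the platform independence the requirement is after.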

  12. Database Administrator

    Science.gov (United States)

    Moore, Pam

    2010-01-01

    The Internet and electronic commerce (e-commerce) generate lots of data. Data must be stored, organized, and managed. Database administrators, or DBAs, work with database software to find ways to do this. They identify user needs, set up computer databases, and test systems. They ensure that systems perform as they should and add people to the…

  13. Constraint Databases and Geographic Information Systems

    OpenAIRE

    Revesz, Peter

    2007-01-01

    Constraint databases and geographic information systems share many applications. However, constraint databases can go beyond geographic information systems in efficient spatial and spatiotemporal data handling methods and in advanced applications. This survey mainly describes ways that constraint databases go beyond geographic information systems. However, the survey points out that in some areas constraint databases can also learn from geographic information systems.

  14. GIS for the Gulf: A reference database for hurricane-affected areas: Chapter 4C in Science and the storms-the USGS response to the hurricanes of 2005

    Science.gov (United States)

    Greenlee, Dave

    2007-01-01

    A week after Hurricane Katrina made landfall in Louisiana, a collaboration among multiple organizations began building a database called the Geographic Information System for the Gulf, shortened to "GIS for the Gulf," to support the geospatial data needs of people in the hurricane-affected area. Data were gathered from diverse sources and entered into a consistent and standardized data model in a manner that is Web accessible.

  15. RNA FRABASE version 1.0: an engine with a database to search for the three-dimensional fragments within RNA structures

    OpenAIRE

    Popenda, Mariusz; Błażewicz, Marek; Szachniuk, Marta; Adamiak, Ryszard W.

    2007-01-01

    The RNA FRABASE is a web-accessible engine with a relational database, which allows for the automatic search of user-defined, 3D RNA fragments within a set of RNA structures. This is a new tool to search and analyse RNA structures, directed at the 3D structure modelling. The user needs to input either RNA sequence(s) and/or secondary structure(s) given in a ‘dot-bracket’ notation. The algorithm searching for the requested 3D RNA fragments is very efficient. As of August 2007, the database con...
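The 'dot-bracket' notation mentioned above encodes RNA secondary structure as matched parentheses. As a minimal illustration of the notation (not RNA FRABASE's internal search code), the base pairs can be recovered with a stack:

```python
# Parse a dot-bracket secondary-structure string into base-pair index tuples:
# '(' opens a pair, ')' closes the most recently opened one, '.' is unpaired.

def dot_bracket_pairs(structure):
    stack, pairs = [], []
    for i, ch in enumerate(structure):
        if ch == "(":
            stack.append(i)
        elif ch == ")":
            pairs.append((stack.pop(), i))
    return sorted(pairs)

# A small hairpin: a 4-base-pair stem enclosing a 4-nucleotide loop.
print(dot_bracket_pairs("((((....))))"))
# -> [(0, 11), (1, 10), (2, 9), (3, 8)]
```

A pattern-matching engine like FRABASE's works over strings of exactly this kind, which is why the user can query with a sequence and/or a dot-bracket secondary structure.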

  16. A new Web-access database model based on information security (一种新的基于信息安全的Web访问数据库模型)

    Institute of Scientific and Technical Information of China (English)

    周文

    2008-01-01

    Starting from the need to strengthen the security of Web access, this paper reviews the traditional C/S and B/S Web access models and proposes a new Web access model that improves on the traditional ones and raises the security of network access; the design, workflow and access process of the new model are described.

  17. NetMHC-3.0: accurate web accessible predictions of human, mouse and monkey MHC class I affinities for peptides of length 8-11

    DEFF Research Database (Denmark)

    Lundegaard, Claus; Lamberth, K; Harndahl, M;

    2008-01-01

    NetMHC-3.0 is trained on a large number of quantitative peptide data using both affinity data from the Immune Epitope Database and Analysis Resource (IEDB) and elution data from SYFPEITHI. The method generates high-accuracy predictions of major histocompatibility complex (MHC): peptide binding. The predictions are based on artificial neural networks trained on data from 55 MHC alleles (43 human and 12 non-human), and position-specific scoring matrices (PSSMs) for an additional 67 HLA alleles. As only the MHC class I prediction server is available, predictions are possible for peptides of length 8-11 for all 122 alleles. Artificial neural network predictions are given as actual IC50 values whereas PSSM predictions are given as log-odds likelihood scores. The output is optionally available as a download for easy post-processing. The training method underlying the server is the best available, and has...
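A PSSM prediction of the kind described above scores a peptide by summing position-specific log-odds entries. The sketch below uses invented scores, not NetMHC's actual matrices:

```python
# Score a peptide against a position-specific scoring matrix (PSSM):
# the total log-odds score is the sum of the per-position entries for the
# residues in the peptide. All matrix values here are made up for illustration.

pssm = [  # one dict of residue -> log-odds score per peptide position
    {"A": 0.5, "K": 1.2, "Y": -0.3},
    {"A": -0.1, "K": 0.4, "Y": 0.9},
    {"A": 0.2, "K": -0.6, "Y": 1.5},
]

def pssm_score(peptide, matrix):
    return sum(matrix[i][aa] for i, aa in enumerate(peptide))

print(round(pssm_score("KYY", pssm), 2))  # 1.2 + 0.9 + 1.5 = 3.6
```

A real class I matrix would have one column per amino acid and one row per peptide position (8-11 rows), but the scoring rule is the same additive sum.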

  18. DATABASE AND ANALYTICAL TOOL DEVELOPMENT FOR THE MANAGEMENT OF DATA DERIVED FROM US DOE (NETL) FUNDED FINE PARTICULATE (PM2.5) RESEARCH

    Energy Technology Data Exchange (ETDEWEB)

    Robinson P. Khosah; Charles G. Crawford

    2003-03-13

    Advanced Technology Systems, Inc. (ATS) was contracted by the U. S. Department of Energy's National Energy Technology Laboratory (DOE-NETL) to develop a state-of-the-art, scalable and robust web-accessible database application to manage the extensive data sets resulting from the DOE-NETL-sponsored ambient air monitoring programs in the upper Ohio River valley region. The data management system was designed to include a web-based user interface that will allow easy access to the data by the scientific community, policy- and decision-makers, and other interested stakeholders, while providing detailed information on sampling, analytical and quality control parameters. In addition, the system will provide graphical analytical tools for displaying, analyzing and interpreting the air quality data. The system will also provide multiple report generation capabilities and easy-to-understand visualization formats that can be utilized by the media and public outreach/educational institutions. The project is being conducted in two phases. Phase 1, which is currently in progress and will take twelve months to complete, will include the following tasks: (1) data inventory/benchmarking, including the establishment of an external stakeholder group; (2) development of a data management system; (3) population of the database; (4) development of a web-based data retrieval system, and (5) establishment of an internal quality assurance/quality control system on data management. In Phase 2, which will be completed in the second year of the project, a platform for on-line data analysis will be developed. Phase 2 will include the following tasks: (1) development of a sponsor and stakeholder/user website with extensive online analytical tools; (2) development of a public website; (3) incorporation of an extensive online help system into each website; and (4) incorporation of a graphical representation (mapping) system into each website. The project is now into its sixth month of Phase

  19. Database Manager

    Science.gov (United States)

    Martin, Andrew

    2010-01-01

    It is normal practice today for organizations to store large quantities of records of related information as computer-based files or databases. Purposeful information is retrieved by performing queries on the data sets. The purpose of DATABASE MANAGER is to communicate to students the method by which the computer performs these queries. This…

  20. Maize databases

    Science.gov (United States)

    This chapter is a succinct overview of maize data held in the species-specific database MaizeGDB (the Maize Genomics and Genetics Database), and selected multi-species data repositories, such as Gramene/Ensembl Plants, Phytozome, UniProt and the National Center for Biotechnology Information (NCBI), ...

  1. OxDBase: a database of oxygenases involved in biodegradation

    Directory of Open Access Journals (Sweden)

    Raghava Gajendra PS

    2009-04-01

    Full Text Available Abstract Background Oxygenases belong to the oxidoreductive group of enzymes (E.C. Class 1), which oxidize substrates by transferring oxygen from molecular oxygen (O2) and utilize FAD/NADH/NADPH as the co-substrate. Oxygenases can further be grouped into two categories, i.e. monooxygenases and dioxygenases, on the basis of the number of oxygen atoms used for oxidation. They play a key role in the metabolism of organic compounds by increasing their reactivity or water solubility or bringing about cleavage of the aromatic ring. Findings We compiled a database of biodegradative oxygenases (OxDBase) which provides a compilation of oxygenase data sourced from the primary literature in the form of a web-accessible database. There are two separate search engines for searching the database, i.e. the monooxygenase and dioxygenase databases respectively. Each enzyme entry contains its common name and synonym, the reaction in which the enzyme is involved, family and subfamily, structure and gene links, and literature citations. The entries are also linked to several external databases including BRENDA, KEGG, ENZYME and UM-BBD, providing wide background information. At present the database contains information on over 235 oxygenases, including both dioxygenases and monooxygenases. This database is freely available online at http://www.imtech.res.in/raghava/oxdbase/. Conclusion OxDBase is the first database that is dedicated only to oxygenases and provides comprehensive information about them. Due to the importance of oxygenases in the chemical synthesis of drug intermediates and the oxidation of xenobiotic compounds, the OxDBase database would be a very useful tool in the fields of synthetic chemistry as well as bioremediation.

  2. Database Management System

    Science.gov (United States)

    1990-01-01

    In 1981 Wayne Erickson founded Microrim, Inc, a company originally focused on marketing a microcomputer version of RIM (Relational Information Manager). Dennis Comfort joined the firm and is now vice president, development. The team developed an advanced spinoff from the NASA system they had originally created, a microcomputer database management system known as R:BASE 4000. Microrim added many enhancements and developed a series of R:BASE products for various environments. R:BASE is now the second largest selling line of microcomputer database management software in the world.

  3. Database Replication

    CERN Document Server

    Kemme, Bettina

    2010-01-01

    Database replication is widely used for fault tolerance, scalability and performance. The failure of one database replica does not stop the system from working, as the available replicas can take over the tasks of the failed replica. Scalability can be achieved by distributing the load across all replicas, and by adding new replicas should the load increase. Finally, database replication can provide fast local access, even to geographically distributed clients, if data copies are located close to them. Despite its advantages, replication is not a straightforward technique to apply, and
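
    The fail-over and load-distribution benefits described above can be sketched minimally; the replica names and the round-robin routing policy below are illustrative assumptions, not how any particular replicated DBMS works.

```python
import itertools

class ReplicatedDB:
    """Toy router over a set of replicas."""

    def __init__(self, replicas):
        self.replicas = list(replicas)
        self._cycle = itertools.cycle(self.replicas)

    def route(self):
        # Load distribution: spread requests round-robin across replicas.
        if not self.replicas:
            raise RuntimeError("no replicas available")
        return next(self._cycle)

    def fail(self, replica):
        # Fail-over: drop the failed replica; the survivors take over.
        self.replicas.remove(replica)
        self._cycle = itertools.cycle(self.replicas)

db = ReplicatedDB(["replica-eu", "replica-us", "replica-asia"])
first = db.route()                              # "replica-eu"
db.fail("replica-us")
survivors = {db.route() for _ in range(4)}      # only the live replicas
```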

  4. Probabilistic Databases

    CERN Document Server

    Suciu, Dan; Koch, Christoph

    2011-01-01

    Probabilistic databases are databases where the value of some attributes or the presence of some records are uncertain and known only with some probability. Applications in many areas such as information extraction, RFID and scientific data management, data cleaning, data integration, and financial risk assessment produce large volumes of uncertain data, which are best modeled and processed by a probabilistic database. This book presents the state of the art in representation formalisms and query processing techniques for probabilistic data. It starts by discussing the basic principles for rep
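
    As a small worked example of the model described above: in a tuple-independent probabilistic table, each row exists with its own probability, and the probability that a selection returns at least one row is 1 minus the product of the non-existence probabilities of the matching rows. The RFID-style rows below are invented for illustration.

```python
# Toy tuple-independent probabilistic table: each row carries its own
# existence probability p.
readings = [
    {"tag": "rfid-17", "location": "dock-A", "p": 0.9},
    {"tag": "rfid-17", "location": "dock-B", "p": 0.3},
    {"tag": "rfid-42", "location": "dock-A", "p": 0.5},
]

def prob_exists(rows, predicate):
    """P(at least one matching row exists) = 1 - prod(1 - p_i)."""
    q = 1.0
    for r in rows:
        if predicate(r):
            q *= 1.0 - r["p"]
    return 1.0 - q

# P(tag rfid-17 was seen somewhere) = 1 - (1-0.9)(1-0.3) = 0.93
p = prob_exists(readings, lambda r: r["tag"] == "rfid-17")
```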

  5. Specialist Bibliographic Databases.

    Science.gov (United States)

    Gasparyan, Armen Yuri; Yessirkepov, Marlen; Voronov, Alexander A; Trukhachev, Vladimir I; Kostyukova, Elena I; Gerasimov, Alexey N; Kitas, George D

    2016-05-01

    Specialist bibliographic databases offer essential online tools for researchers and authors who work on specific subjects and perform comprehensive and systematic syntheses of evidence. This article presents examples of established specialist databases, which may be of interest to those engaged in multidisciplinary science communication. Access to most specialist databases is through subscription schemes and membership in professional associations. Several aggregators of information and database vendors, such as EBSCOhost and ProQuest, facilitate advanced searches supported by specialist keyword thesauri. Searches of items through specialist databases are complementary to those through multidisciplinary research platforms, such as PubMed, Web of Science, and Google Scholar. Familiarity with the functional characteristics of biomedical and nonbiomedical bibliographic search tools is mandatory for researchers, authors, editors, and publishers. Database users are offered updates of the indexed journal lists, abstracts, author profiles, and links to other metadata. Editors and publishers may find the source selection criteria particularly useful and can apply for coverage of their peer-reviewed journals and grey literature sources. These criteria are aimed at accepting relevant sources with established editorial policies and quality controls. PMID:27134485

  6. RDD Databases

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This database was established to oversee documents issued in support of fishery research activities including experimental fishing permits (EFP), letters of...

  7. Dealer Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The dealer reporting databases contain the primary data reported by federally permitted seafood dealers in the northeast. Electronic reporting was implemented May...

  8. National database

    DEFF Research Database (Denmark)

    Kristensen, Helen Grundtvig; Stjernø, Henrik

    1995-01-01

    Article on the national database for nursing research established at the Danish Institute for Health and Nursing Research. The aim of the database is to gather knowledge about research and development activities within nursing.

  9. The STARK-B database as a resource for "Stark" widths and shifts data: State of advancement and program of development

    CERN Document Server

    Sahal-Bréchot, Sylvie; Moreau, Nicolas; Nessib, Nabil Ben

    2013-01-01

    "Stark" broadening theories and calculations have been extensively developed for about 50 years and can now be applied to many needs, especially for accurate spectroscopic diagnostics and modeling. This requires the knowledge of numerous collisional line profiles. Nowadays, access to such data via an online database has become essential. STARK-B is a collaborative project between the Astronomical Observatory of Belgrade and the Laboratoire d'Étude du Rayonnement et de la Matière en Astrophysique (LERMA). It is a database of calculated widths and shifts of isolated lines of atoms and ions due to electron and ion collisions (impacts). It is devoted to modeling and spectroscopic diagnostics of stellar atmospheres and envelopes, laboratory plasmas, laser equipment and technological plasmas. Hence, the domain of temperatures and densities covered by the tables is wide and depends on the ionization degree of the considered ion. STARK-B has been fully open since September 2008 and is in free...

  10. Biological Databases

    Directory of Open Access Journals (Sweden)

    Kaviena Baskaran

    2013-12-01

    Full Text Available Biology has entered a new era of database-based information distribution, and these collections have become a primary channel for publishing information. This data publishing is carried out over the Internet (including Gopher), where powerful research tools make information resources easy and affordable to access. What matters most now is the development of high-quality, professionally operated electronic data publishing sites. To enhance this service, appropriate editorial policies for electronic data publishing have been established, and article editors shoulder the responsibility for upholding them.

  11. Revolutionary Database Technology for Data Intensive Research

    OpenAIRE

    2012-01-01

    The ability to explore huge digital resources assembled in data warehouses, databases and files, at unprecedented speed, is becoming the driver of progress in science. However, existing database management systems (DBMS) are far from capable of meeting the scientists' requirements. The Database Architectures group at CWI in Amsterdam cooperates with astronomers, seismologists and other domain experts to tackle this challenge by advancing all aspects of database technology. The group’s researc...

  12. mlstdbNet – distributed multi-locus sequence typing (MLST) databases

    Directory of Open Access Journals (Sweden)

    Maiden Martin CJ

    2004-07-01

    Full Text Available Abstract Background Multi-locus sequence typing (MLST) is a method of typing that facilitates the discrimination of microbial isolates by comparing the sequences of housekeeping gene fragments. The mlstdbNet software enables the implementation of distributed web-accessible MLST databases that can be linked widely over the Internet. Results The software enables multiple isolate databases to query a single profiles database that contains allelic profile and sequence definitions. This separation enables isolate databases to be established by individual laboratories, each customised to the needs of the particular project and with appropriate access restrictions, while maintaining the benefits of a single definitive source of profile and sequence information. Databases are described by an XML file that is parsed by a Perl CGI script. The software offers a large number of ways to query the databases and to further break down and export the results generated. Additional features can be enabled by installing third-party (freely available) tools. Conclusion Development of a distributed structure for MLST databases offers scalability and flexibility, allowing participating centres to maintain ownership of their own data, without introducing duplication and data integrity issues.
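
    The XML database description mentioned above (parsed by a Perl CGI script in mlstdbNet) can be sketched as follows; this toy version uses Python rather than Perl, and the element names and URL are invented, not the actual mlstdbNet schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical database-description file: an isolate database declares
# its name, the central profiles database it queries, and its loci.
description = """
<database>
  <name>neisseria_isolates</name>
  <profiles_db url="http://example.org/profiles"/>
  <loci>
    <locus>abcZ</locus>
    <locus>adk</locus>
  </loci>
</database>
"""

root = ET.fromstring(description)
config = {
    "name": root.findtext("name"),
    "profiles_url": root.find("profiles_db").get("url"),
    "loci": [locus.text for locus in root.find("loci")],
}
```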

  13. PEP725 Pan European Phenological Database

    Science.gov (United States)

    Koch, E.; Lipa, W.; Ungersböck, M.; Zach-Hermann, S.

    2012-04-01

    PEP725 is a 5-year project with the main objective of promoting and facilitating phenological research by delivering a pan-European phenological database with open, unrestricted data access for science, research and education. PEP725 is funded by EUMETNET (the network of European meteorological services), ZAMG and the Austrian ministry for science & research bm:w_f. So far 16 European national meteorological services and 7 partners from different national phenological network operators have joined PEP725. Data access is very easy via the homepage www.pep725.eu. Having accepted the PEP725 data policy and registered, users can download data by different criteria, for instance by selecting a specific plant or all data from one country. At present more than 300 000 new records are available in the PEP725 database, coming from 31 European countries and from 8150 stations. For 154 further stations, META data (location and data holder) are provided. Links to the network operators and data owners are also on the webpage in case you have more sophisticated questions about the data. Another objective of PEP725 is to bring together network operators and scientists by organizing workshops. In April 2012 the second of these workshops will take place on the premises of ZAMG. Invited speakers will give presentations spanning the whole study area of phenology, from observations to modelling. Quality checking is also a big issue; at the moment we are studying the literature to find appropriate methods.

  14. Cytoreductive Surgery plus Hyperthermic Intraperitoneal Chemotherapy to Treat Advanced/Recurrent Epithelial Ovarian Cancer: Results from a Retrospective Study on Prospectively Established Database

    Directory of Open Access Journals (Sweden)

    Jian-Hua Sun

    2016-04-01

    Full Text Available BACKGROUND: Despite the best standard treatment, optimal cytoreductive surgery (CRS) and platinum/taxane-based chemotherapy, the prognosis of advanced epithelial ovarian carcinoma (EOC) remains poor. Recently, CRS plus hyperthermic intraperitoneal chemotherapy (HIPEC) has been developed to treat peritoneal carcinomatosis (PC). This study evaluated the efficacy and safety of CRS+HIPEC for treating PC from advanced/recurrent EOC. METHODS: Forty-six PC patients with advanced EOC (group A) or recurrent EOC (group B) were treated by 50 CRS+HIPEC procedures. The primary endpoints were progression-free survival (PFS) and overall survival (OS); the secondary endpoints were safety profiles. RESULTS: The median OS was 74.0 months [95% confidence interval (CI) 8.5-139.5] for group A versus 57.5 months (95% CI 29.8-85.2) for group B (P = .68). The median PFS was not reached for group A versus 8.5 months (95% CI 0-17.5) for group B (P = .034). Better median OS correlated with a lower peritoneal cancer index (PCI) (< 20 group, P = .01), complete cytoreduction (residual disease ≤ 2.5 mm) [79.5 months for completeness of cytoreduction (CC) score 0-1 vs 24.3 months for CC 2-3, P = .00], and sensitivity to platinum (65.3 months for the platinum-sensitive group vs 20.0 months for the platinum-resistant group, P = .05). Serious adverse events occurred in five patients (10.0%). Multivariate analysis identified CC score as the only independent factor for better survival. CONCLUSION: For advanced/recurrent EOC, CRS+HIPEC could improve OS with acceptable safety.

  15. Architecture and Functionality of the Advanced Life Support On-Line Project Information System

    Science.gov (United States)

    Hogan, John A.; Levri, Julie A.; Morrow, Rich; Cavazzoni, Jim; Rodriguez, Luis F.; Riano, Rebecca; Whitaker, Dawn R.

    2004-01-01

    An ongoing effort is underway at NASA Ames Research Center (ARC) to develop an On-line Project Information System (OPIS) for the Advanced Life Support (ALS) Program. The objective of this three-year project is to develop, test, revise and deploy OPIS to enhance the quality of decision-making metrics and attainment of Program goals through improved knowledge sharing. OPIS will centrally locate detailed project information solicited from investigators on an annual basis and make it readily accessible to the ALS Community via a Web-accessible interface. The data will be stored in an object-oriented relational database (created in MySQL) located on a secure server at NASA ARC. OPIS will simultaneously serve several functions, including acting as a research and technology development (R&TD) status information hub that can potentially serve as the primary annual reporting mechanism for ALS-funded projects. Using OPIS, ALS managers and element leads will be able to carry out informed R&TD investment decisions, and analysts will be able to perform accurate systems evaluations. Additionally, the range and specificity of information solicited will serve to educate technology developers about programmatic needs. OPIS will collect comprehensive information from all ALS projects as well as highly detailed information specific to technology development in each ALS area (Waste, Water, Air, Biomass, Food, Thermal, Controls and Systems Analysis). Because the scope of needed information can vary dramatically between areas, element-specific technology information is being compiled with the aid of multiple specialized working groups. This paper presents the current development status in terms of the architecture and functionality of OPIS. Possible implementation approaches for OPIS are also discussed.

  16. Architecture and Functionality of the Advanced Life Support On-Line Project Information System (OPIS)

    Science.gov (United States)

    Hogan, John A.; Levri, Julie A.; Morrow, Rich; Cavazzoni, Jim; Rodriquez, Luis F.; Riano, Rebecca; Whitaker, Dawn R.

    2004-01-01

    An ongoing effort is underway at NASA Ames Research Center (ARC) to develop an On-line Project Information System (OPIS) for the Advanced Life Support (ALS) Program. The objective of this three-year project is to develop, test, revise and deploy OPIS to enhance the quality of decision-making metrics and attainment of Program goals through improved knowledge sharing. OPIS will centrally locate detailed project information solicited from investigators on an annual basis and make it readily accessible to the ALS Community via a web-accessible interface. The data will be stored in an object-oriented relational database (created in MySQL(Trademark)) located on a secure server at NASA ARC. OPIS will simultaneously serve several functions, including acting as an R&TD status information hub that can potentially serve as the primary annual reporting mechanism. Using OPIS, ALS managers and element leads will be able to carry out informed research and technology development investment decisions, and analysts will be able to perform accurate systems evaluations. Additionally, the range and specificity of information solicited will serve to educate technology developers about programmatic needs. OPIS will collect comprehensive information from all ALS projects as well as highly detailed information specific to technology development in each ALS area (Waste, Water, Air, Biomass, Food, Thermal, and Control). Because the scope of needed information can vary dramatically between areas, element-specific technology information is being compiled with the aid of multiple specialized working groups. This paper presents the current development status in terms of the architecture and functionality of OPIS. Possible implementation approaches for OPIS are also discussed.

  17. The essential nature of healthcare databases in critical care medicine

    OpenAIRE

    Martin, Greg S.

    2008-01-01

    Medical databases serve a critical function in healthcare, including the areas of patient care, administration, research and education. The quality and breadth of information collected into existing databases varies tremendously, between databases, between institutions and between national boundaries. The field of critical care medicine could be advanced substantially by the development of comprehensive and accurate databases.

  18. Database theory and SQL practice using Access

    International Nuclear Information System (INIS)

    This book introduces database theory and SQL practice using Access. It comprises seven chapters, covering: understanding databases, with basic concepts and DBMSs; understanding relational databases, with examples; building database tables and entering data using Access 2000; Structured Query Language, with an introduction to managing data and building complex queries in SQL; commands for advanced SQL, covering the concepts of joins and virtual tables; designing a database for an online bookstore in six steps; and building the application, covering its function, structure and components, understanding the principles of its operation, and reviewing the program source for the application menu.
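
    Two of the topics listed above, joins and virtual tables (views), can be illustrated with a minimal SQLite sketch in Python; the bookstore schema and data below are invented for illustration, loosely echoing the book's online-bookstore design example, and are not taken from the book.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE author(id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE book(id INTEGER PRIMARY KEY, title TEXT, author_id INTEGER);
INSERT INTO author VALUES (1, 'Ullman'), (2, 'Date');
INSERT INTO book VALUES (10, 'First Course in Database Systems', 1);

-- A view is a named, virtual table defined by a query over base tables.
CREATE VIEW book_listing AS
SELECT b.title, a.name AS author
FROM book b JOIN author a ON b.author_id = a.id;
""")

# Query the view as if it were an ordinary table.
rows = con.execute("SELECT title, author FROM book_listing").fetchall()
```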

  19. Teaching Advanced SQL Skills: Text Bulk Loading

    Science.gov (United States)

    Olsen, David; Hauser, Karina

    2007-01-01

    Studies show that advanced database skills are important for students to be prepared for today's highly competitive job market. A common task for database administrators is to insert a large amount of data into a database. This paper illustrates how an up-to-date, advanced database topic, namely bulk insert, can be incorporated into a database…
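
    A minimal sketch of the text bulk-loading task described above, using Python's sqlite3 and executemany as a stand-in for a server-side bulk-insert command; the CSV data and table are invented, and this is not the paper's own exercise.

```python
import csv, io, sqlite3

# Text source to be bulk-loaded (in practice this would be a large file).
raw = "id,name\n1,alpha\n2,beta\n3,gamma\n"
rows = list(csv.reader(io.StringIO(raw)))[1:]  # parse, skip header row

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE item(id INTEGER, name TEXT)")

# Load all rows in one call instead of one INSERT per row.
con.executemany("INSERT INTO item VALUES (?, ?)", rows)

count = con.execute("SELECT COUNT(*) FROM item").fetchone()[0]
```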

  20. Stackfile Database

    Science.gov (United States)

    deVarvalho, Robert; Desai, Shailen D.; Haines, Bruce J.; Kruizinga, Gerhard L.; Gilmer, Christopher

    2013-01-01

    This software provides storage, retrieval and analysis functionality for managing satellite altimetry data. It improves the efficiency and analysis capabilities of existing database software with improved flexibility and documentation. It offers flexibility in the type of data that can be stored, and efficient retrieval across either the spatial domain or the time domain. Built-in analysis tools are provided for frequently performed altimetry tasks. This software package is used for storing and manipulating satellite measurement data. It was developed with a focus on handling the requirements of repeat-track altimetry missions such as Topex and Jason. It was, however, designed to work with a wide variety of satellite measurement data (e.g., the Gravity Recovery And Climate Experiment, GRACE). The software consists of several command-line tools for importing, retrieving, and analyzing satellite measurement data.

  1. Web Accessibility in American Ivy League Schools for People with Disabilities (A WAVE-Based Evaluation of the Accessibility of Ivy League Library Websites)

    Institute of Scientific and Technical Information of China (English)

    景帅; 王颖纯; 刘燕权

    2016-01-01

    A survey of research on website accessibility for people with disabilities shows that theoretical articles outnumber empirical investigations, and empirical data analyses of top university library websites are lacking. This study therefore evaluates the websites of the US Ivy League schools for accessibility by users with disabilities, to determine whether they comply with the accessibility standards established by the Americans with Disabilities Act (ADA) of 1990. Using the web accessibility evaluator WAVE and an email survey, the author found that among the sixteen selected websites at the eight universities, each site has good availability and operability, and all the libraries' websites offer services to people with disabilities and links to disability services, especially screen readers and other assistive technologies to enhance access for people with visual impairment. However, errors are present at all the websites: each has one or more of the six categories of guideline violations identified by WAVE. The most common problems are missing document language (44%), redundant links (69%), suspicious links (50%), and skipped navigation (44%).

  2. Object Oriented Databases: a reality

    Directory of Open Access Journals (Sweden)

    GALANTE, A. C.

    2009-06-01

    Full Text Available This article aims to demonstrate that object-oriented database technology is now a reality, fully available to developers who wish to enter the "entirely OO world". It presents the basic concepts of object orientation and the main types of databases, in order to bring a more precise focus to object-oriented databases. Furthermore, it shows the use of object databases, explaining the types of queries that can be performed and how to conduct them, showing that OQL syntax is not far from SQL syntax, but offers more advanced features and makes data retrieval easier. All is illustrated with practical examples that are easy to understand.

  3. Fundamentals of Object Databases Object-Oriented and Object-Relational Design

    CERN Document Server

    Dietrich, Suzanne

    2010-01-01

    Object-oriented databases were originally developed as an alternative to relational database technology for the representation, storage, and access of non-traditional data forms that were increasingly found in advanced applications of database technology. After much debate regarding object-oriented versus relational database technology, object-oriented extensions were eventually incorporated into relational technology to create object-relational databases. Both object-oriented databases and object-relational databases, collectively known as object databases, provide inherent support for object

  4. QIS: A Framework for Biomedical Database Federation

    OpenAIRE

    Marenco, Luis; Wang, Tzuu-Yi; Shepherd, Gordon; Miller, Perry L.; Nadkarni, Prakash

    2004-01-01

    Query Integrator System (QIS) is a database mediator framework intended to address robust data integration from continuously changing heterogeneous data sources in the biosciences. Currently in the advanced prototype stage, it is being used on a production basis to integrate data from neuroscience databases developed for the SenseLab project at Yale University with external neuroscience and genomics databases. The QIS framework uses standard technologies and is intended to be deployable by ad...

  5. Database development and management

    CERN Document Server

    Chao, Lee

    2006-01-01

    Introduction to Database Systems: Functions of a Database; Database Management System; Database Components; Database Development Process. Conceptual Design and Data Modeling: Introduction to the Database Design Process; Understanding the Business Process; Entity-Relationship Data Model; Representing the Business Process with the Entity-Relationship Model. Table Structure and Normalization: Introduction to Tables; Table Normalization; Transforming Data Models to Relational Databases. DBMS Selection: Transforming Data Models to Relational Databases; Enforcing Constraints; Creating the Database for the Business Process. Physical Design and Database

  6. Cloud Database Database as a Service

    Directory of Open Access Journals (Sweden)

    Waleed Al Shehri

    2013-05-01

    Full Text Available Cloud computing has been the most widely adopted technology in recent times, and databases have now moved to cloud computing as well, so we look into the details of database as a service and its functioning. This paper includes basic information about database as a service. The working of database as a service and the challenges it faces are discussed with appropriate examples. The structure of a database in cloud computing and its working in collaboration with nodes are examined under database as a service. The paper also highlights the important things to consider before choosing the database-as-a-service provider that is best among the others. The advantages and disadvantages of database as a service will let you decide whether or not to use it. Database as a service has already been adopted by many e-commerce companies, and those companies are benefiting from this service.

  7. Databases and their application

    NARCIS (Netherlands)

    E.C. Grimm; R.H.W Bradshaw; S. Brewer; S. Flantua; T. Giesecke; A.M. Lézine; H. Takahara; J.W.,Jr Williams

    2013-01-01

    During the past 20 years, several pollen database cooperatives have been established. These databases are now constituent databases of the Neotoma Paleoecology Database, a public domain, multiproxy, relational database designed for Quaternary-Pliocene fossil data and modern surface samples. The poll

  8. Unit 66 - Database Creation

    OpenAIRE

    Unit 61, CC in GIS; National Center for Geographic Information and Analysis (UC Santa Barbara, SUNY at Buffalo, University of Maine)

    1990-01-01

    This unit examines the planning and management issues involved in the physical creation of the database. It describes some issues in database creation, key hardware parameters of the system, partitioning the database for tiles and layers and converting data for the database. It illustrates these through an example from the Flathead National Forest in northwestern Montana, where a resource management database was required.

  9. DMTB: the magnetotactic bacteria database

    Science.gov (United States)

    Pan, Y.; Lin, W.

    2012-12-01

    Magnetotactic bacteria (MTB) are of interest in biogeomagnetism, rock magnetism, microbiology, biomineralization, and advanced magnetic materials because of their ability to synthesize highly ordered intracellular nano-sized magnetic minerals, magnetite or greigite. Great strides in MTB studies have been made in the past few decades. More than 600 articles concerning MTB have been published. These rapidly growing data are stimulating cross-disciplinary studies in such fields as biogeomagnetism. We have compiled the first online database for MTB, i.e., the Database of Magnetotactic Bacteria (DMTB, http://database.biomnsl.com). It contains useful information on 16S rRNA gene sequences, oligonucleotides, and magnetic properties of MTB, and the corresponding ecological metadata of sampling sites. The 16S rRNA gene sequences are collected from the GenBank database, while all other data are collected from the scientific literature. Rock magnetic properties for both uncultivated and cultivated MTB species are also included. In the DMTB database, data are accessible through four main interfaces: Site Sort, Phylo Sort, Oligonucleotides, and Magnetic Properties. References in each entry serve as links to specific pages within public databases. The comprehensive online DMTB will provide a very useful data resource for researchers from various disciplines, e.g., microbiology, rock magnetism and paleomagnetism, biogeomagnetism, magnetic material sciences and others.

  10. Database Description - TMBETA-GENOME | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available [ Credits ] BLAST Search Image Search Home About Archive Update History Contact us TM...BETA-GENOME Database Description General information of database Database name TMBETA-GENOME Alternative n...Advanced Industrial Science and Technology Contact address Dr. M. MICHAEL GROMIHA Associate Professor Departm...ent of Biotechnology IIT Madras Chennai - 600 036 Tel: +91-44-2257-4138(O) E-mail: http://www.iitm.ac.in/bi...: Eukaryota Taxonomy ID: 2759 Database description TMBETA-GENOME is a database for transmembrane β-barrel pr

  11. World Religion Database

    OpenAIRE

    Dekker, Jennifer

    2009-01-01

    This article reviews the new database released by Brill entitled World Religion Database (WRD). It compares WRD to other religious demography tools available and rates the database on a 5 point scale.

  12. Main-memory database VS Traditional database

    OpenAIRE

    Rehn, Marcus; Sunesson, Emil

    2013-01-01

    There has been a surge of new databases in recent years. Applications today create a higher demand on database performance than ever before. Main-memory databases have come into the market quite recently and they are just now catching a lot of interest from many different directions. Main-memory databases are a type of database that stores all of its data in the primary memory. They provide a big increase in performance to a lot of different applications. This work evaluates the difference in...

  13. Diaretinopathy database –A Gene database for diabetic retinopathy

    OpenAIRE

    Vidhya, Gopalakrishnan; Anusha, Bhaskar

    2014-01-01

    Diabetic retinopathy is a microvascular complication of diabetes mellitus and a major cause of adult blindness. Despite advances in diagnosis and treatment, the pathogenesis of diabetic retinopathy is not well understood. Results from epidemiological studies of diabetic patients suggest that there are familial predispositions to diabetes and to diabetic retinopathy. Therefore, the main purpose of this database is to help both scientists and doctors in studying the candidate genes responsibl...

  14. Collecting Taxes Database

    Data.gov (United States)

    US Agency for International Development — The Collecting Taxes Database contains performance and structural indicators about national tax systems. The database contains quantitative revenue performance...

  15. USAID Anticorruption Projects Database

    Data.gov (United States)

    US Agency for International Development — The Anticorruption Projects Database (Database) includes information about USAID projects with anticorruption interventions implemented worldwide between 2007 and...

  16. PvTFDB: a Phaseolus vulgaris transcription factors database for expediting functional genomics in legumes

    Science.gov (United States)

    Bhawna; Bonthala, V.S.; Gajula, MNV Prasad

    2016-01-01

    The common bean [Phaseolus vulgaris (L.)] is one of the essential proteinaceous vegetables grown in developing countries. However, its production is challenged by low yields caused by numerous biotic and abiotic stress conditions. Regulatory transcription factors (TFs) represent a key component of the genome and are the most significant targets for producing stress-tolerant crops; hence, functional genomic studies of these TFs are important. Therefore, we have constructed a web-accessible TF database for P. vulgaris, called PvTFDB, which contains 2370 putative TF gene models in 49 TF families. This database provides comprehensive information for each identified TF, including sequence data, functional annotation, SSRs with their primer sets, protein physical properties, chromosomal location, phylogeny, tissue-specific gene expression data, orthologues, cis-regulatory elements and gene ontology (GO) assignments. Altogether, this information will help expedite functional genomic studies of specific TFs of interest. The objectives of this database are to support functional genomics studies of common bean TFs and to help uncover the regulatory mechanisms underlying various stress responses, easing breeding strategies for variety production, through search interfaces (gene ID, functional annotation) and browsing interfaces (by family and by chromosome). This database will also serve as a promising central repository for researchers as well as breeders who are working towards crop improvement of legume crops. In addition, the database provides unrestricted public access, and users can freely download all of the data it contains. Database URL: http://www.multiomics.in/PvTFDB/ PMID:27465131

  18. Creating Large Scale Database Servers

    International Nuclear Information System (INIS)

    The BaBar experiment at the Stanford Linear Accelerator Center (SLAC) is designed to perform a high precision investigation of the decays of the B-meson produced from electron-positron interactions. The experiment, started in May 1999, will generate approximately 300TB/year of data for 10 years. All of the data will reside in Objectivity databases accessible via the Advanced Multi-threaded Server (AMS). To date, over 70TB of data have been placed in Objectivity/DB, making it one of the largest databases in the world. Providing access to such a large quantity of data through a database server is a daunting task. A full-scale testbed environment had to be developed to tune various software parameters and a fundamental change had to occur in the AMS architecture to allow it to scale past several hundred terabytes of data. Additionally, several protocol extensions had to be implemented to provide practical access to large quantities of data. This paper will describe the design of the database and the changes that we needed to make in the AMS for scalability reasons and how the lessons we learned would be applicable to virtually any kind of database server seeking to operate in the Petabyte region

  19. LDL (Landscape Digital Library) a Digital Photographic Database of a Case Study Area in the River Po Valley, Northern Italy

    CERN Document Server

    Papotti, D

    2001-01-01

    Landscapes are both a synthesis and an expression of national, regional and local cultural heritages. It is therefore very important to develop techniques aimed at cataloguing and archiving their forms. This paper discusses the LDL (Landscape Digital Library) project, a Web accessible database that can present the landscapes of a territory with documentary evidence in a new format and from a new perspective. The method was tested in a case study area of the river Po valley (Northern Italy). The LDL is based on a collection of photographs taken following a systematic grid of survey points identified through topographic cartography; the camera level is that of the human eye. This methodology leads to an innovative landscape archive that differs from surveys carried out through aerial photographs or campaigns aimed at selecting "relevant" points of interest. Further developments and possible uses of the LDL are also discussed.

  20. Logical database design principles

    CERN Document Server

    Garmany, John; Clark, Terry

    2005-01-01

    INTRODUCTION TO LOGICAL DATABASE DESIGN: Understanding a Database; Database Architectures; Relational Databases; Creating the Database; System Development Life Cycle (SDLC); Systems Planning: Assessment and Feasibility; System Analysis: Requirements; System Analysis: Requirements Checklist; Models; Tracking and Schedules; Design Modeling; Functional Decomposition Diagram; Data Flow Diagrams; Data Dictionary; Logical Structures and Decision Trees; System Design: Logical. SYSTEM DESIGN AND IMPLEMENTATION: The ER Approach; Entities and Entity Types; Attribute Domains; Attributes; Set-Valued Attributes; Weak Entities; Constraint

  1. Cloud Databases: A Paradigm Shift in Databases

    Directory of Open Access Journals (Sweden)

    Indu Arora

    2012-07-01

    Full Text Available Relational databases ruled the Information Technology (IT) industry for almost 40 years. But the last few years have seen sea changes in the way IT is used and viewed. Stand-alone applications have been replaced with web-based applications, dedicated servers with multiple distributed servers, and dedicated storage with network storage. Cloud computing has become a reality due to its lower cost, scalability and pay-as-you-go model. It is one of the biggest changes in IT since the rise of the World Wide Web. Cloud databases such as Big Table, Sherpa and SimpleDB are becoming popular. They address the limitations of existing relational databases related to scalability, ease of use and dynamic provisioning. Cloud databases are mainly used for data-intensive applications such as data warehousing, data mining and business intelligence. These applications are read-intensive, scalable and elastic in nature. Transactional data management applications such as banking, airline reservation, online e-commerce and supply chain management applications are write-intensive. Databases supporting such applications require ACID (Atomicity, Consistency, Isolation and Durability) properties, but these databases are difficult to deploy in the cloud. The goal of this paper is to review the state of the art in cloud databases and their various architectures. It further assesses the challenges in developing cloud databases that meet user requirements and discusses popularly used cloud databases.

  2. AN ENCRYPTION ALGORITHM FOR IMPROVING DATABASE SECURITY USING ROT & REA

    Directory of Open Access Journals (Sweden)

    M. Sujitha

    2015-06-01

    Full Text Available A database is an organized collection of data, and many users want to store their personal and confidential data in such databases. Unauthorized persons may try to obtain data from the database and misuse it without the owner's knowledge. To overcome this problem, an advanced control mechanism, known as database security, was introduced. Encryption algorithms are one way to protect a database from the various threats and hackers that target confidential information. This paper discusses the proposed encryption algorithm for securing such databases.
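
    The abstract does not define the ROT and REA algorithms, so as a minimal sketch, assuming "ROT" denotes the usual rotation (Caesar-style) cipher, the following shows how such a transformation could be applied to a field value before it is stored. The function names and the default shift of 13 are illustrative assumptions, not the paper's design.

```python
# Illustrative sketch only: the paper does not specify its ROT and REA
# algorithms, so this shows a generic rotation (Caesar-style) cipher, the
# transformation "ROT" conventionally denotes, applied before storage.

def rot_encrypt(plaintext: str, shift: int = 13) -> str:
    """Rotate alphabetic characters by `shift` positions; leave others as-is."""
    out = []
    for ch in plaintext:
        if ch.isalpha():
            base = ord("A") if ch.isupper() else ord("a")
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return "".join(out)

def rot_decrypt(ciphertext: str, shift: int = 13) -> str:
    # Decryption is rotation by the inverse shift.
    return rot_encrypt(ciphertext, -shift % 26)

# Store the encrypted value rather than the plaintext.
record = {"name": rot_encrypt("Alice")}
assert rot_decrypt(record["name"]) == "Alice"
```

    A rotation cipher like this is trivially breakable and only illustrates the mechanism; production database encryption would use a vetted primitive such as AES.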

  3. A structure to support queries in object-oriented database using fuzzy XML tags

    OpenAIRE

    Bahareh Fahimian; Ali Harounabadi

    2014-01-01

    Every day we deal with information that is subject to uncertainty, and managing this type of information with classical database systems entails a disadvantageous loss of data semantics. Therefore, advanced database modeling techniques are necessary. The entrance of object-oriented concepts into databases has helped relational databases gradually give way to object-oriented databases in various fields. With the advances in this field, simple objects alone are not able to include complex data types and comp...

  4. Searching and Indexing Genomic Databases via Kernelization

    Directory of Open Access Journals (Sweden)

    Travis eGagie

    2015-02-01

    Full Text Available The rapid advance of DNA sequencing technologies has yielded databases of thousands of genomes. To search and index these databases effectively, it is important that we take advantage of the similarity between those genomes. Several authors have recently suggested searching or indexing only one reference genome and the parts of the other genomes where they differ. In this paper we survey the twenty-year history of this idea and discuss its relation to kernelization in parameterized complexity.

  5. Searching and Indexing Genomic Databases via Kernelization

    Science.gov (United States)

    Gagie, Travis; Puglisi, Simon J.

    2015-01-01

    The rapid advance of DNA sequencing technologies has yielded databases of thousands of genomes. To search and index these databases effectively, it is important that we take advantage of the similarity between those genomes. Several authors have recently suggested searching or indexing only one reference genome and the parts of the other genomes where they differ. In this paper, we survey the 20-year history of this idea and discuss its relation to kernelization in parameterized complexity. PMID:25710001
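
    A minimal sketch of the idea the survey covers (not any specific published index): store one reference genome explicitly and represent every other genome only by its differences against it, reconstructing or searching on demand. The sequences and diff format below are invented for illustration.

```python
# Sketch of the surveyed idea, not of any particular published index:
# one reference genome is stored explicitly; every other genome is kept
# only as its single-character substitutions against that reference.

REFERENCE = "ACGTACGTACGT"

# genome name -> {position: substituted base}; all other positions match
DIFFS = {
    "sample1": {3: "G", 7: "C"},
    "sample2": {0: "T"},
}

def reconstruct(name: str) -> str:
    """Materialize a genome from the reference plus its differences."""
    seq = list(REFERENCE)
    for pos, base in DIFFS[name].items():
        seq[pos] = base
    return "".join(seq)

def search(pattern: str) -> list:
    """Naive scan for illustration; real kernelized indexes avoid
    re-searching the parts shared with the reference."""
    return [name for name in DIFFS if pattern in reconstruct(name)]

print(search("ACG"))
```

    The storage saving is the point: each additional genome costs space proportional to its differences, not to its full length.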

  6. Electronic database of arterial aneurysms

    Directory of Open Access Journals (Sweden)

    Fabiano Luiz Erzinger

    2014-12-01

    Full Text Available Background: The creation of an electronic database facilitates the storage of information and streamlines the exchange of data, easing the exchange of knowledge for future research. Objective: To construct an electronic database containing comprehensive and up-to-date clinical and surgical data on the most common arterial aneurysms, to help advance scientific research. Methods: The most important specialist textbooks and articles found in journals and on internet databases were reviewed in order to define the basic structure of the protocol. Data were computerized using the SINPE© system for integrated electronic protocols and tested in a pilot study. Results: The data entered into the system were first used to create a Master protocol, organized into a structure of top-level directories covering a large proportion of the content on vascular diseases as follows: patient history; physical examination; supplementary tests and examinations; diagnosis; treatment; and clinical course. By selecting items from the Master protocol, Specific protocols were then created for the 22 arterial sites most often involved by aneurysms. The program provides a method for collecting data on patients, including clinical characteristics (patient history and physical examination), supplementary tests and examinations, treatments received and follow-up care after treatment. Any information of interest on these patients that is contained in the protocol can then be used to query the database and select data for studies. Conclusions: It proved possible to construct a database of clinical and surgical data on the arterial aneurysms of greatest interest and, by adapting the data to specific software, the database was integrated into the SINPE© system, thereby providing a standardized method for collecting data on these patients and tools for retrieving this information in an organized manner for use in scientific studies.

  7. Directory of IAEA databases

    International Nuclear Information System (INIS)

    The first edition of the Directory of IAEA Databases is intended to describe the computerized information sources available to IAEA staff members. It contains a listing of all databases produced at the IAEA, together with information on their availability

  8. Assessment Database (ADB)

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Assessment Database (ADB) is a relational database application for tracking water quality assessment data, including use attainment, and causes and sources of...

  9. Native Health Research Database

    Science.gov (United States)

  10. AIDSinfo Drug Database

    Science.gov (United States)

  11. E3 Staff Database

    Data.gov (United States)

    US Agency for International Development — E3 Staff database is maintained by E3 PDMS (Professional Development & Management Services) office. The database is Mysql. It is manually updated by E3 staff as...

  12. DMPD: Translational mini-review series on Toll-like receptors: recent advances in understanding the role of Toll-like receptors in anti-viral immunity. [Dynamic Macrophage Pathway CSML Database

    Lifescience Database Archive (English)

    Full Text Available 17223961 Translational mini-review series on Toll-like receptors: recent advances i...147(2):217-26. (.png) (.svg) (.html) (.csml) Show Translational mini-review series on Toll-like receptors: recent advances...nity. PubmedID 17223961 Title Translational mini-review series on Toll-like receptors: recent advances inund

  13. Database Description - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available [ Credits ] BLAST Search Image Search Home About Archive Update History Contact us Trypanosomes Database... Database Description General information of database Database name Trypanosomes Database...rmation and Systems Yata 1111, Mishima, Shizuoka 411-8540, JAPAN E mail: Database... classification Protein sequence databases Organism Taxonomy Name: Trypanosoma Taxonomy ID: 5690 Taxonomy Na...me: Homo sapiens Taxonomy ID: 9606 Database description The Trypanosomes database is a database providing th

  14. Aviation Safety Issues Database

    Science.gov (United States)

    Morello, Samuel A.; Ricks, Wendell R.

    2009-01-01

    The aviation safety issues database was instrumental in the refinement and substantiation of the National Aviation Safety Strategic Plan (NASSP). The issues database is a comprehensive set of issues from an extremely broad base of aviation functions, personnel, and vehicle categories, both nationally and internationally. Several aviation safety stakeholders such as the Commercial Aviation Safety Team (CAST) have already used the database. This broader interest was the genesis of making the database publicly accessible and of writing this report.

  15. Web database development

    OpenAIRE

    Tsardas, Nikolaos A.

    2001-01-01

    This thesis explores the concept of web database development using Active Server Pages (ASP) and Java Server Pages (JSP), which are among the leading technologies in web database development. The focus of this thesis was to analyze and compare the ASP and JSP technologies, exposing their capabilities, limitations, and the differences between them. Specifically, issues related to back-end connectivity using Open Database Connectivity (ODBC) and Java Database Connectivity (JDBC), application ar...

  16. IP Geolocation Databases: Unreliable?

    OpenAIRE

    Poese, Ingmar; Uhlig, Steve; Kaafar, Mohamed Ali; Donnet, Benoît; Gueye, Bamba

    2011-01-01

    The most widely used technique for IP geolocation consists in building a database to keep the mapping between IP blocks and a geographic location. Several databases are available and are frequently used by many services and web sites on the Internet. Contrary to widespread belief, geolocation databases are far from being as reliable as they claim. In this paper, we conduct a comparison of several current geolocation databases, both commercial and free, to gain an insight into the limitation...

  17. Refactoring of a Database

    OpenAIRE

    Dsousa, Ayeesha; Bhatia, Shalini

    2009-01-01

    The technique of database refactoring is all about applying disciplined and controlled techniques to change an existing database schema. The problem is to successfully create a Database Refactoring Framework for databases. This paper concentrates on the feasibility of adapting this concept to work as a generic template. To retain the constraints regardless of the modifications to the metadata, the paper proposes a MetaData Manipulation Tool to facilitate change. The tool adopts a Template Des...
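
    The disciplined, controlled schema change the abstract describes can be illustrated with a common refactoring, renaming a column while preserving rows and constraints. The sketch below uses SQLite's create-copy-swap pattern; the table and column names are hypothetical and this is not the paper's framework.

```python
# Sketch of one database refactoring (rename a column) performed the
# controlled way: create the new schema, migrate the data, then swap,
# so the NOT NULL constraint and all rows survive. Illustrative only.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE person (id INTEGER PRIMARY KEY, fname TEXT NOT NULL);
    INSERT INTO person VALUES (1, 'Ada'), (2, 'Alan');
""")

# Refactor: rename fname -> first_name without losing rows or constraints.
con.executescript("""
    CREATE TABLE person_new (id INTEGER PRIMARY KEY, first_name TEXT NOT NULL);
    INSERT INTO person_new (id, first_name) SELECT id, fname FROM person;
    DROP TABLE person;
    ALTER TABLE person_new RENAME TO person;
""")

rows = con.execute("SELECT id, first_name FROM person ORDER BY id").fetchall()
print(rows)  # [(1, 'Ada'), (2, 'Alan')]
```

    The create-copy-swap sequence is what makes the change safe to script and repeat, which is the essence of treating schema changes as refactorings.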

  18. Scopus database: a review

    OpenAIRE

    Burnham, Judy F.

    2006-01-01

    The Scopus database provides access to STM journal articles and the references included in those articles, allowing the searcher to search both forward and backward in time. The database can be used for collection development as well as for research. This review provides information on the key points of the database and compares it to Web of Science. Neither database is all-inclusive; rather, they complement each other. If a library can only afford one, the choice must be based on institutional needs.

  19. Future database machine architectures

    OpenAIRE

    Hsiao, David K.

    1984-01-01

    There are many software database management systems available on many general-purpose computers ranging from micros to super-mainframes. Database machines as backend computers can offload the database management work from the mainframe so that the same mainframe can be retained longer. However, the database backend must also demonstrate lower cost, higher performance, and newer functionality. Some of the fundamental architecture issues in the design of high-performance and great-capacity datab...

  20. Library-Generated Databases

    OpenAIRE

    Brattli, Tore

    1999-01-01

    The development of the Internet and the World Wide Web has given libraries many new opportunities to disseminate organized information about internal and external collections to users. One of these possibilities is to make separate databases for information or services not sufficiently covered by the online public access catalog (OPAC) or other available databases. What’s new is that librarians can now create and maintain these databases and make them user-friendly. Library-generated database...

  1. Development and implementation of a custom integrated database with dashboards to assist with hematopathology specimen triage and traffic

    Directory of Open Access Journals (Sweden)

    Elizabeth M Azzato

    2014-01-01

    Full Text Available Background: At some institutions, including ours, bone marrow aspirate specimen triage is complex, with hematopathology triage decisions that need to be communicated to downstream ancillary testing laboratories and many specimen aliquot transfers that are handled outside of the laboratory information system (LIS). We developed a custom integrated database with dashboards to facilitate and streamline this workflow. Methods: We developed user-specific dashboards that allow entry of specimen information by technologists in the hematology laboratory, have custom scripting to present relevant information for the hematopathology service and ancillary laboratories, and allow communication of triage decisions from the hematopathology service to other laboratories. These dashboards are web-accessible on the local intranet and accessible from behind the hospital firewall on a computer or tablet. Secure user access and group rights ensure that relevant users can edit or access appropriate records. Results: After database and dashboard design, two-stage beta-testing and user education were performed, with the first focusing on technologist specimen entry and the second on downstream users. Commonly encountered issues and user functionality requests were resolved with database and dashboard redesign. Final implementation occurred within 6 months of initial design; users report improved triage efficiency and reduced need for interlaboratory communications. Conclusions: We successfully developed and implemented a custom database with dashboards that facilitates and streamlines our hematopathology bone marrow aspirate triage. This provides an example of a possible solution to specimen communications and traffic that are outside the purview of a standard LIS.

  2. Automated Oracle database testing

    CERN Document Server

    CERN. Geneva

    2014-01-01

    Ensuring database stability and steady performance in the modern world of agile computing is a major challenge. Various changes happening at any level of the computing infrastructure: OS parameters & packages, kernel versions, database parameters & patches, or even schema changes, all can potentially harm production services. This presentation shows how an automatic and regular testing of Oracle databases can be achieved in such agile environment.

  3. Mission and Assets Database

    Science.gov (United States)

    Baldwin, John; Zendejas, Silvino; Gutheinz, Sandy; Borden, Chester; Wang, Yeou-Fang

    2009-01-01

    Mission and Assets Database (MADB) Version 1.0 is an SQL database system with a Web user interface to centralize information. The database stores flight project support resource requirements, view periods, antenna information, schedule, and forecast results for use in mid-range and long-term planning of Deep Space Network (DSN) assets.

  4. Multidimensional Databases and Data Warehousing

    DEFF Research Database (Denmark)

    Jensen, Christian S.; Pedersen, Torben Bach; Thomsen, Christian

    The present book's subject is multidimensional data models and data modeling concepts as they are applied in real data warehouses. The book aims to present the most important concepts within this subject in a precise and understandable manner. The book's coverage of fundamental concepts includes data cubes and their elements, such as dimensions, facts, and measures and their representation in a relational setting; it includes architecture-related concepts; and it includes the querying of multidimensional databases. The book also covers advanced multidimensional concepts, as well as implementation techniques that are particularly important to multidimensional databases, including materialized views, bitmap indices, join indices, and star join processing. The book ends with a chapter that presents the literature on which the book is based and offers further readings for those readers who wish to delve deeper.

  5. CTD_DATABASE - Cascadia tsunami deposit database

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — The Cascadia Tsunami Deposit Database contains data on the location and sedimentological properties of tsunami deposits found along the Cascadia margin. Data have...

  6. Keyword Search in Databases

    CERN Document Server

    Yu, Jeffrey Xu; Chang, Lijun

    2009-01-01

    It has become highly desirable to provide users with flexible ways to query and search information over databases as simply as with a keyword search, like Google search. This book surveys the recent developments in keyword search over databases, and focuses on finding structural information among objects in a database using a set of keywords. Such structural information to be returned can be either trees or subgraphs representing how the objects that contain the required keywords are interconnected in a relational database or in an XML database. The structural keyword search is completely different from
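
    A toy illustration of the problem setting, not of the book's algorithms: given a set of keywords, return joined tuples from two related tables that together contain every keyword, i.e. a minimal interconnected answer. The schema and matching strategy are invented for illustration.

```python
# Toy structural keyword search: an answer is a pair of tuples connected
# by a foreign key whose combined text covers all query keywords.
# Invented schema; not the algorithms surveyed in the book.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE papers  (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    INSERT INTO authors VALUES (1, 'Garcia'), (2, 'Chen');
    INSERT INTO papers  VALUES (1, 1, 'keyword search in graphs'),
                               (2, 2, 'indexing relational data');
""")

def keyword_search(keywords):
    """Return (author, title) pairs that together contain every keyword."""
    hits = []
    pairs = con.execute("""SELECT a.name, p.title FROM authors a
                           JOIN papers p ON p.author_id = a.id""")
    for name, title in pairs:
        text = (name + " " + title).lower()
        if all(k.lower() in text for k in keywords):
            hits.append((name, title))
    return hits

print(keyword_search(["garcia", "graphs"]))
```

    Note that neither tuple alone contains both keywords; the answer only exists because the join interconnects them, which is what distinguishes structural keyword search from per-table text search.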

  7. Nuclear power economic database

    International Nuclear Information System (INIS)

    The nuclear power economic database (NPEDB), based on ORACLE V6.0, consists of three parts: an economic database of nuclear power stations, an economic database of the nuclear fuel cycle, and an economic database of nuclear power planning and the nuclear environment. The economic database of nuclear power stations includes data on general economics, technique, capital cost and benefit, etc. The economic database of the nuclear fuel cycle includes data on technique and nuclear fuel prices. The economic database of nuclear power planning and the nuclear environment includes data on energy history, forecasts, energy balance, and electric power and energy facilities

  8. An Interoperable Cartographic Database

    Directory of Open Access Journals (Sweden)

    Slobodanka Ključanin

    2007-05-01

    Full Text Available The concept of producing a prototype of an interoperable cartographic database is explored in this paper, including the possibilities of integrating different geospatial data into the database management system and visualizing them on the Internet. The implementation includes vectorization of the concept of a single map page, creation of the cartographic database in an object-relational database, spatial analysis, and definition and visualization of the database content in the form of a map on the Internet.

  9. Recent advancements on the development of web-based applications for the implementation of seismic analysis and surveillance systems

    Science.gov (United States)

    Friberg, P. A.; Luis, R. S.; Quintiliani, M.; Lisowski, S.; Hunter, S.

    2014-12-01

    Recently, a novel set of modules has been included in the Open Source Earthworm seismic data processing system, supporting the use of web applications. These include the Mole sub-system, for storing relevant event data in a MySQL database (see M. Quintiliani and S. Pintore, SRL, 2013), and an embedded webserver, Moleserv, for serving such data to web clients in QuakeML format. These modules have enabled, for the first time using Earthworm, the use of web applications for seismic data processing. These can greatly simplify the operation and maintenance of seismic data processing centers by having one or more servers provide the relevant data, as well as the data processing applications themselves, to client machines running arbitrary operating systems. Web applications with secure online web access allow operators to work anywhere, without the often cumbersome and bandwidth-hungry use of secure shell or virtual private networks. Furthermore, web applications can seamlessly access third-party data repositories to acquire additional information, such as maps. Finally, the usage of HTML email brought the possibility of specialized web applications to be used in email clients. This is the case of EWHTMLEmail, which produces event notification emails that are in fact simple web applications for plotting relevant seismic data. Providing web services as part of Earthworm has enabled a number of other tools as well. One is ISTI's EZ Earthworm, a web-based command and control system for an otherwise command-line-driven system; another is a waveform web service. The waveform web service serves Earthworm data to additional web clients for plotting, picking, and other web-based processing tools. The current Earthworm waveform web service hosts an advanced plotting capability for providing views of event-based waveforms from a Mole database served by Moleserv. The current trend towards the usage of cloud services supported by web applications is driving improvements in Java
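
    The serving step described above, turning a stored event record into an XML payload a web client can fetch, can be sketched as follows. This is a simplified toy serializer; it is not Moleserv and does not follow the full QuakeML schema, and all field names are illustrative.

```python
# Toy sketch of serving a stored seismic event as an XML payload.
# Simplified stand-in for the QuakeML serving described in the abstract;
# the element names and event fields are illustrative assumptions.
import xml.etree.ElementTree as ET

def event_to_xml(event: dict) -> str:
    """Serialize one event record into a small XML document string."""
    root = ET.Element("event", id=event["id"])
    for field in ("time", "latitude", "longitude", "magnitude"):
        ET.SubElement(root, field).text = str(event[field])
    return ET.tostring(root, encoding="unicode")

payload = event_to_xml({
    "id": "ev001", "time": "2014-12-01T00:00:00Z",
    "latitude": 41.9, "longitude": 12.5, "magnitude": 4.2,
})
print(payload)
```

    A web client fetching this payload needs only an HTTP library and an XML parser, which is what makes the browser-based tooling described above portable across operating systems.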

  10. Multidimensional Databases and Data Warehousing

    CERN Document Server

    Jensen, Christian

    2010-01-01

    The present book's subject is multidimensional data models and data modeling concepts as they are applied in real data warehouses. The book aims to present the most important concepts within this subject in a precise and understandable manner. The book's coverage of fundamental concepts includes data cubes and their elements, such as dimensions, facts, and measures and their representation in a relational setting; it includes architecture-related concepts; and it includes the querying of multidimensional databases.The book also covers advanced multidimensional concepts that are considered to b

  12. Preliminary risk assessment database and risk ranking of pharmaceuticals in the environment

    International Nuclear Information System (INIS)

    There is increasing concern about pharmaceuticals entering surface waters and the impacts these compounds may have on aquatic organisms. Many contaminants, including pharmaceuticals, are not completely removed by wastewater treatment. Discharge of effluent into surface waters results in chronic low-concentration exposure of aquatic organisms to these compounds, with unknown impacts. Exposure of virulent bacteria in wastewater to antibiotic residues may also induce resistance, which could threaten human health. The purpose of this study was to provide information on pharmaceutical threats to the environment. A preliminary risk assessment database for common pharmaceuticals was created and put into a web-accessible database named 'Pharmaceuticals in the Environment, Information for Assessing Risk' (PEIAR) to help others evaluate potential risks of pharmaceutical contaminants in the environment. Information from PEIAR was used to prioritize compounds that may threaten the environment, with a focus on marine and estuarine environments. The pharmaceuticals were ranked using five different combinations of physical-chemical and toxicological data, which emphasized different risks. The results of the ranking methods differed in the compounds identified as high risk; however, drugs from the central nervous system, cardiovascular, and anti-infective classes were heavily represented within the top 100 drugs in all rankings. Anti-infectives may pose the greatest overall risk based upon our results using a combination of factors that measure environmental transport, fate, and aquatic toxicity. The dataset is also useful for highlighting information that is still needed to assuredly assess risk

  13. Preliminary risk assessment database and risk ranking of pharmaceuticals in the environment

    Energy Technology Data Exchange (ETDEWEB)

    Cooper, Emily R. [Center for Coastal Environmental Health and Biomolecular Research, National Ocean Service, NOAA, 219 Fort Johnson Road, Charleston, SC 29412-9110 (United States)], E-mail: emily.cooper@noaa.gov; Siewicki, Thomas C.; Phillips, Karl [Center for Coastal Environmental Health and Biomolecular Research, National Ocean Service, NOAA, 219 Fort Johnson Road, Charleston, SC 29412-9110 (United States)

    2008-07-15

    There is increasing concern about pharmaceuticals entering surface waters and the impacts these compounds may have on aquatic organisms. Many contaminants, including pharmaceuticals, are not completely removed by wastewater treatment. Discharge of effluent into surface waters results in chronic low-concentration exposure of aquatic organisms to these compounds, with unknown impacts. Exposure of virulent bacteria in wastewater to antibiotic residues may also induce resistance, which could threaten human health. The purpose of this study was to provide information on pharmaceutical threats to the environment. A preliminary risk assessment database for common pharmaceuticals was created and put into a web-accessible database named 'Pharmaceuticals in the Environment, Information for Assessing Risk' (PEIAR) to help others evaluate potential risks of pharmaceutical contaminants in the environment. Information from PEIAR was used to prioritize compounds that may threaten the environment, with a focus on marine and estuarine environments. The pharmaceuticals were ranked using five different combinations of physical-chemical and toxicological data, which emphasized different risks. The results of the ranking methods differed in the compounds identified as high risk; however, drugs from the central nervous system, cardiovascular, and anti-infective classes were heavily represented within the top 100 drugs in all rankings. Anti-infectives may pose the greatest overall risk based upon our results using a combination of factors that measure environmental transport, fate, and aquatic toxicity. The dataset is also useful for highlighting information that is still needed to assuredly assess risk.

  14. Database Description - RMG | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available RMG Database... Description General information of database Database name RMG Alternative name Rice Mitochondri...ational Institute of Agrobiological Sciences E-mail : Database classification Nucleotide Sequence Databases ...Organism Taxonomy Name: Oryza sativa Japonica Group Taxonomy ID: 39947 Database description This database co...e of rice mitochondrial genome and information on the analysis results. Features and manner of utilization of database

  15. Bioinformatics glossary based Database of Biological Databases: DBD

    OpenAIRE

    Siva Kiran RR; Setty MVN; Hanumatha Rao G

    2009-01-01

    Database of Biological/Bioinformatics Databases (DBD) is a collection of 1669 databases and online resources collected from NAR Database Summary Papers (http://www.oxfordjournals.org/nar/database/a/) and Internet search engines. The database has been developed based on 437 keywords (glossary) available at http://falcon.roswellpark.org/labweb/glossary.html. Keywords with their relevant databases are arranged in alphabetical order, which enables quick access to databases by researchers. Dat...

  16. Database design and database administration for a kindergarten

    OpenAIRE

    Vítek, Daniel

    2009-01-01

    The bachelor thesis deals with the creation of a database design for a standard kindergarten, the installation of the designed database into the database system Oracle Database 10g Express Edition, and a demonstration of the administration tasks in this database system. The database was verified by means of an access application developed for this purpose.

  17. Genome Statute and Legislation Database

    Science.gov (United States)

    ... Database Welcome to the Genome Statute and Legislation Database The Genome Statute and Legislation Database is comprised ... the National Society of Genetic Counselors . Search the Database Search Tips You may select one or more ...

  18. The European Declarative System, Database, and Languages

    OpenAIRE

    Haworth, Guy McCrossan; Leunig, Steve; Hammer, Carsten; Reeve, Mike

    1990-01-01

    The EP2025 EDS project develops a highly parallel information server that supports established high-value interfaces. We describe the motivation for the project, the architecture of the system, and the design and application of its database and language subsystems. The Elipsys logic programming language, its advanced applications, EDS Lisp, and the Metal machine translation system are examined.

  19. Database Optimizing Services

    Directory of Open Access Journals (Sweden)

    Adrian GHENCEA

    2010-12-01

    Full Text Available Almost every organization has a database at its centre. The database supports different activities, whether production, sales and marketing, or internal operations. Every day, a database is accessed for help in strategic decisions. Meeting these needs therefore requires high-quality security and availability. These needs can be met using a DBMS (Database Management System), which is, in fact, the software for a database. Technically speaking, it is software that uses a standard method of cataloguing, recovering, and running different data queries. A DBMS manages the input data, organizes it, and provides ways for its users or other programs to modify or extract the data. Managing the database is an operation that requires periodic updates, optimization, and monitoring.

  20. Conditioning Probabilistic Databases

    CERN Document Server

    Koch, Christoph

    2008-01-01

    Past research on probabilistic databases has studied the problem of answering queries on a static database. Application scenarios of probabilistic databases however often involve the conditioning of a database using additional information in the form of new evidence. The conditioning problem is thus to transform a probabilistic database of priors into a posterior probabilistic database which is materialized for subsequent query processing or further refinement. It turns out that the conditioning problem is closely related to the problem of computing exact tuple confidence values. It is known that exact confidence computation is an NP-hard problem. This has led researchers to consider approximation techniques for confidence computation. However, neither conditioning nor exact confidence computation can be solved using such techniques. In this paper we present efficient techniques for both problems. We study several problem decomposition methods and heuristics that are based on the most successful search techn...
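
    The hardness the abstract mentions comes from exact tuple-confidence computation over possible worlds. A minimal illustrative sketch, assuming a tuple-independent probabilistic database and a hypothetical three-tuple lineage formula (this shows the brute-force enumeration that motivates the paper's decomposition techniques, not those techniques themselves):

    ```java
    public class TupleConfidence {
        // Probabilities of independent tuples t0..t2 being present (hypothetical values).
        static final double[] P = {0.6, 0.5, 0.9};

        // Hypothetical lineage of a query answer: true if t0 is present,
        // or both t1 and t2 are present.
        static boolean lineage(boolean[] world) {
            return world[0] || (world[1] && world[2]);
        }

        // Exact confidence: sum the probability of every possible world in which
        // the lineage is satisfied. This enumerates 2^n worlds, which is why
        // exact confidence computation is NP-hard in general.
        static double confidence() {
            int n = P.length;
            double conf = 0.0;
            for (int mask = 0; mask < (1 << n); mask++) {
                boolean[] world = new boolean[n];
                double pWorld = 1.0;
                for (int i = 0; i < n; i++) {
                    world[i] = ((mask >> i) & 1) == 1;
                    pWorld *= world[i] ? P[i] : 1.0 - P[i];
                }
                if (lineage(world)) conf += pWorld;
            }
            return conf;
        }

        public static void main(String[] args) {
            // Analytically: 0.6 + 0.4 * 0.5 * 0.9 = 0.78
            System.out.println(confidence());
        }
    }
    ```

    Conditioning on new evidence would amount to zeroing out the worlds that contradict the evidence and renormalizing the remaining world probabilities.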

  1. Supply Chain Initiatives Database

    Energy Technology Data Exchange (ETDEWEB)

    None

    2012-11-01

    The Supply Chain Initiatives Database (SCID) presents innovative approaches to engaging industrial suppliers in efforts to save energy, increase productivity and improve environmental performance. This comprehensive and freely-accessible database was developed by the Institute for Industrial Productivity (IIP). IIP acknowledges Ecofys for their valuable contributions. The database contains case studies searchable according to the types of activities buyers are undertaking to motivate suppliers, target sector, organization leading the initiative, and program or partnership linkages.

  2. Database management systems

    CERN Document Server

    Pallaw, Vijay Krishna

    2010-01-01

    The text covers the fundamental concepts and serves as a complete guide to the practical implementation of Database Management Systems. Concepts include SQL and PL/SQL, as well as aspects of database design, database languages, and database system implementation. The entire book is divided into five units to ensure the smooth flow of the subject. The extra methodology makes it very useful for students as well as teachers.

  3. An organic database system

    OpenAIRE

    Kersten, Martin; Siebes, Arno

    1999-01-01

    The pervasive penetration of database technology may suggest that we have reached the end of the database research era. The contrary is true. Emerging technology, in hardware, software, and connectivity, brings a wealth of opportunities to push technology to a new level of maturity. Furthermore, ground breaking results are obtained in Quantum- and DNA-computing using nature as inspiration for its computational models. This paper provides a vision on a new brand of database architectures, i.e....

  4. Database Application Schema Forensics

    OpenAIRE

    Hector Quintus Beyers; Olivier, Martin S; Hancke, Gerhard P.

    2014-01-01

    The application schema layer of a Database Management System (DBMS) can be modified to deliver results that may warrant a forensic investigation. Table structures can be corrupted by changing the metadata of a database or operators of the database can be altered to deliver incorrect results when used in queries. This paper will discuss categories of possibilities that exist to alter the application schema with some practical examples. Two forensic environments are introduced where a forensic ...

  5. Web Technologies And Databases

    OpenAIRE

    Irina-Nicoleta Odoraba

    2011-01-01

    A database is a collection of many types of occurrences of logical records, containing relationships between records and elementary data aggregates. A database management system (DBMS) is a set of programs for creating and operating a database. Theoretically, any relational DBMS can be used to store the data needed by a Web server. In practice, it has been observed that simple DBMSs such as FoxPro or Access are not suitable for Web sites that are used intensively. For large-scale Web applications...

  6. Fingerprint databases for theorems

    OpenAIRE

    Billey, Sara C.; Tenner, Bridget E.

    2013-01-01

    We discuss the advantages of searchable, collaborative, language-independent databases of mathematical results, indexed by "fingerprints" of small and canonical data. Our motivating example is Neil Sloane's massively influential On-Line Encyclopedia of Integer Sequences. We hope to encourage the greater mathematical community to search for the appropriate fingerprints within each discipline, and to compile fingerprint databases of results wherever possible. The benefits of these databases are...

  7. Categorical Database Generalization

    Institute of Scientific and Technical Information of China (English)

    LIU Yaolin; Martin Molenaar; AI Tinghua; LIU Yanfang

    2003-01-01

    This paper focuses on the issues of categorical database generalization and emphasizes the roles of supporting data model, integrated data model, spatial analysis and semantic analysis in database generalization. The framework contents of categorical database generalization transformation are defined. This paper presents an integrated spatial supporting data structure, a semantic supporting model and a similarity model for categorical database generalization. The concept of transformation unit is proposed in generalization.

  8. Nuclear Science References Database

    OpenAIRE

    PRITYCHENKO B.; Běták, E.; B. Singh; Totans, J.

    2013-01-01

    The Nuclear Science References (NSR) database, together with its associated Web interface, is the world's only comprehensive source of easily accessible low- and intermediate-energy nuclear physics bibliographic information, covering more than 210,000 articles since the beginning of nuclear science. The weekly updated NSR database provides essential support for nuclear data evaluation, compilation and research activities. The principles of the database and Web application development and maintenance...

  9. Searching Databases with Keywords

    Institute of Scientific and Technical Information of China (English)

    Shan Wang; Kun-Long Zhang

    2005-01-01

    Traditionally, the SQL query language is used to search the data in databases. However, it is inappropriate for end-users, since it is complex and hard to learn. End-users need to search databases with keywords, as in web search engines. This paper presents a survey of work on keyword search in databases. It also includes a brief introduction to the SEEKER system, which has been developed.
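
    As background for the survey's premise, the core idea behind keyword search over tuples can be sketched as an inverted index from keywords to tuple identifiers, intersected with AND semantics. This is a generic illustration under assumed names, not the actual SEEKER implementation:

    ```java
    import java.util.*;

    public class KeywordIndex {
        // Inverted index: keyword -> ids of tuples containing it.
        private final Map<String, Set<Integer>> index = new HashMap<>();
        private final List<String> tuples = new ArrayList<>();

        void add(String tuple) {
            int id = tuples.size();
            tuples.add(tuple);
            for (String word : tuple.toLowerCase().split("\\W+")) {
                index.computeIfAbsent(word, k -> new HashSet<>()).add(id);
            }
        }

        // Return tuples containing every keyword (AND semantics),
        // by intersecting the posting sets of the keywords.
        List<String> search(String... keywords) {
            Set<Integer> hits = null;
            for (String kw : keywords) {
                Set<Integer> ids = index.getOrDefault(kw.toLowerCase(), Collections.emptySet());
                if (hits == null) hits = new HashSet<>(ids);
                else hits.retainAll(ids);
            }
            List<String> out = new ArrayList<>();
            if (hits != null) for (int id : hits) out.add(tuples.get(id));
            return out;
        }

        public static void main(String[] args) {
            KeywordIndex idx = new KeywordIndex();
            idx.add("Smith, Database Systems, 2005");
            idx.add("Jones, Web Search Engines, 2004");
            idx.add("Smith, Keyword Search in Databases, 2005");
            System.out.println(idx.search("smith", "search"));
        }
    }
    ```

    Real keyword search in databases goes further, joining tuples from different tables whose combination covers all keywords; the index above only handles single-tuple matches.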

  10. On Intelligent Database Systems

    OpenAIRE

    Dennis McLeod; Paul Yanover

    1992-01-01

    In response to the limitations of contemporary database management systems in addressing the requirements of many potential application environments, and in view of the characteristics of emerging interconnected systems, we examine research directions involving adding more ‘intelligence’ to database systems. Three major thrusts in the intelligent database systems area are discussed. The first involves increasing the modeling power to represent an application environment. The second emphasis c...

  11. Representations built from a true geographic database

    DEFF Research Database (Denmark)

    Bodum, Lars

    2005-01-01

    The development of a system for geovisualisation under the Centre for 3D GeoInformation at Aalborg University, Denmark, has exposed the need for a rethinking of the representation of virtual environments. Now that almost everything is possible (due to technological advances in computer graphics... a representation based on geographic and geospatial principles. The system GRIFINOR, developed at 3DGI, Aalborg University, DK, is capable of creating this object-orientation and furthermore does this on top of a true Geographic database. A true Geographic database can be characterized as a database that can cover the whole world in 3D and with a spatial reference given by geographic coordinates. Built on top of this is a customised viewer, based on the Xith(Java) scenegraph. The viewer reads the objects directly from the database and solves the question about Level-Of-Detail on buildings...

  12. Indexing and Searching a Mass Spectrometry Database

    Science.gov (United States)

    Besenbacher, Søren; Schwikowski, Benno; Stoye, Jens

    Database preprocessing in order to create an index often permits considerable speedup in search compared to the iterated query of an unprocessed database. In this paper we apply index-based database lookup to a range search problem that arises in mass spectrometry-based proteomics: given a large collection of sparse integer sets and a sparse query set, find all the sets from the collection that have at least k integers in common with the query set. This problem arises when searching for a mass spectrum in a database of theoretical mass spectra using the shared peaks count as similarity measure. The algorithms can easily be modified to use the more advanced shared peaks intensity measure instead of the shared peaks count. We introduce three different algorithms solving these problems. We conclude by presenting some experiments using the algorithms on realistic data showing the advantages and disadvantages of the algorithms.
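
    The range-search problem the authors describe (find all sets sharing at least k integers with a sparse query set) can be sketched with a simple inverted index over peak masses. This is a generic illustration of shared-peaks-count lookup under assumed names, not the paper's specific algorithms:

    ```java
    import java.util.*;

    public class SharedPeaks {
        // Inverted index: integer peak mass -> ids of spectra containing that peak.
        private final Map<Integer, List<Integer>> index = new HashMap<>();
        private int numSpectra = 0;

        void addSpectrum(Set<Integer> peaks) {
            int id = numSpectra++;
            for (int p : peaks)
                index.computeIfAbsent(p, k -> new ArrayList<>()).add(id);
        }

        // Ids of all stored spectra sharing at least k peaks with the query:
        // walk the posting list of each query peak and count hits per spectrum.
        List<Integer> query(Set<Integer> queryPeaks, int k) {
            int[] counts = new int[numSpectra];
            for (int p : queryPeaks)
                for (int id : index.getOrDefault(p, Collections.emptyList()))
                    counts[id]++;
            List<Integer> result = new ArrayList<>();
            for (int id = 0; id < numSpectra; id++)
                if (counts[id] >= k) result.add(id);
            return result;
        }

        public static void main(String[] args) {
            SharedPeaks db = new SharedPeaks();
            db.addSpectrum(new HashSet<>(Arrays.asList(100, 250, 391, 520))); // id 0
            db.addSpectrum(new HashSet<>(Arrays.asList(100, 250, 390)));      // id 1
            db.addSpectrum(new HashSet<>(Arrays.asList(700, 800)));           // id 2
            System.out.println(db.query(new HashSet<>(Arrays.asList(100, 250, 391)), 2)); // prints [0, 1]
        }
    }
    ```

    Replacing the unit increments with per-peak intensity weights would turn the same index walk into the shared peaks intensity measure the abstract mentions.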

  13. Veterans Administration Databases

    Science.gov (United States)

    The Veterans Administration Information Resource Center provides database and informatics experts, customer service, expert advice, information products, and web technology to VA researchers and others.

  14. Smart Location Database - Download

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Smart Location Database (SLD) summarizes over 80 demographic, built environment, transit service, and destination accessibility attributes for every census...

  15. Database Publication Practices

    DEFF Research Database (Denmark)

    Bernstein, P.A.; DeWitt, D.; Heuer, A.;

    2005-01-01

    There has been a growing interest in improving the publication processes for database research papers. This panel reports on recent changes in those processes and presents an initial cut at historical data for the VLDB Journal and ACM Transactions on Database Systems.

  16. Residency Allocation Database

    Data.gov (United States)

    Department of Veterans Affairs — The Residency Allocation Database is used to determine allocation of funds for residency programs offered by Veterans Affairs Medical Centers (VAMCs). Information...

  17. Stanford Rock Physics database

    Energy Technology Data Exchange (ETDEWEB)

    Nolen-Hoeksema, R. (Stanford Univ., CA (United States)); Hart, C. (Envision Systems, Inc., Fremont, CA (United States))

    The authors have developed a relational database for the Stanford Rock Physics (SRP) Laboratory. The database is a flexible tool for helping researchers find relevant data. It significantly speeds retrieval of data and facilitates new organizations of rock physics information to get answers to research questions. The motivation for a database was to have a computer data storage, search, and display capability to explore the sensitivity of acoustic velocities to changes in the properties and states of rocks. Benefits include data exchange among researchers, discovery of new relations in existing data, and identification of new areas of research. The authors' goal was to build a database flexible enough for the dynamic and multidisciplinary research environment of rock physics. Databases are based on data models. A flexible data model must: (1) Not impose strong, prior constraints on the data; (2) not require a steep learning curve of the database architecture; and (3) be easy to modify. The authors' choice of the relational data model reflects these considerations. The database and some hardware and software considerations were influenced by their choice of data model, and their desire to provide a user-friendly interface for the database and build a distributed database system.

  18. IVR EFP Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This database contains trip-level reports submitted by vessels participating in Exempted Fishery projects with IVR reporting requirements.

  19. Database principles programming performance

    CERN Document Server

    O'Neil, Patrick

    2014-01-01

    Database: Principles Programming Performance provides an introduction to the fundamental principles of database systems. This book focuses on database programming and the relationships between principles, programming, and performance.Organized into 10 chapters, this book begins with an overview of database design principles and presents a comprehensive introduction to the concepts used by a DBA. This text then provides grounding in many abstract concepts of the relational model. Other chapters introduce SQL, describing its capabilities and covering the statements and functions of the programmi

  20. Transporter Classification Database (TCDB)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Transporter Classification Database details a comprehensive classification system for membrane transport proteins known as the Transporter Classification (TC)...

  1. Smart Location Database - Service

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Smart Location Database (SLD) summarizes over 80 demographic, built environment, transit service, and destination accessibility attributes for every census...

  2. Database Access through Java Technologies

    Directory of Open Access Journals (Sweden)

    Nicolae MERCIOIU

    2010-09-01

    Full Text Available As a high-level development environment, the Java technologies offer support for the development of platform-independent distributed applications, providing a robust set of methods to access databases and to create software components on the server side as well as on the client side. Analyzing the evolution of Java tools for data access, we notice that these tools evolved from simple methods permitting queries, insertion, update and deletion of data to advanced implementations such as distributed transactions, cursors and batch files. The client-server architecture allows, through JDBC (Java Database Connectivity), the execution of SQL (Structured Query Language) instructions and the manipulation of the results in an independent and consistent manner. The JDBC API (Application Programming Interface) creates the level of abstraction needed to allow SQL queries to be issued against any DBMS (Database Management System). The native JDBC driver, the ODBC (Open Database Connectivity)-JDBC bridge, and the classes and interfaces of the JDBC API are described. The four steps needed to build a JDBC-driven application are presented briefly, emphasizing how each step has to be accomplished and the expected results. In each step there are evaluations of the characteristics of the database systems and of the way the JDBC programming interface adapts to each one. The data types provided by the SQL2 and SQL3 standards are analyzed by comparison with the Java data types, emphasizing the discrepancies between them as well as the methods of the ResultSet object that allow conversion between different types of data. Next, starting from the role of metadata and studying the Java programming interfaces that allow querying result sets, we describe the advanced features of data mining with JDBC. As an alternative to result sets, RowSets add new functionalities that

  3. Tomato genomic resources database: an integrated repository of useful tomato genomic information for basic and applied research.

    Directory of Open Access Journals (Sweden)

    B Venkata Suresh

    Full Text Available Tomato Genomic Resources Database (TGRD) allows interactive browsing of tomato genes, microRNAs, simple sequence repeats (SSRs), important quantitative trait loci and the Tomato-EXPEN 2000 genetic map, altogether or separately, along the twelve chromosomes of tomato in a single window. The database is created using the sequence of the cultivar Heinz 1706. High-quality single nucleotide polymorphic (SNP) sites between the genes of Heinz 1706 and the wild tomato S. pimpinellifolium LA1589 are also included. Genes are classified into different families. 5'-upstream sequences (5'-US) of all the genes and their tissue-specific expression profiles are provided. Sequences of the microRNA loci and their putative target genes are catalogued. Genes and 5'-US show presence of SSRs and SNPs. SSRs located in the genomic, genic and 5'-US regions can be analysed separately for the presence of any particular motif. Primer sequences for all the SSRs and flanking sequences for all the genic SNPs are provided. TGRD is a user-friendly web-accessible relational database and uses the CMAP viewer for graphical scanning of all the features. Integration and graphical presentation of important genomic information will facilitate better and easier use of the tomato genome. TGRD can be accessed as an open-source repository at http://59.163.192.91/tomato2/.

  4. EXPLORATION ON SCALABILITY OF DATABASE BULK INSERTION WITH MULTITHREADING

    OpenAIRE

    Boon-Wee Low; Boon-Yaik Ooi; and Chee-Siang Wong

    2011-01-01

    The advancement of database engine and multi-core processor technologies has enabled database insertion to be implemented concurrently via multithreaded programming. The objective of this work is to evaluate the performance of using multithreading to perform database insertion of a large data set of known size, to enhance the performance of the data access layer (DAL), particularly the bulk-insertion operation. The performance evaluation includes techniques such as using single datab...
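
    The bulk-insertion strategy described above (partitioning a known-size data set across worker threads) can be sketched as follows. This is a minimal illustration under assumed names; the in-memory queue stands in for a real database table, where each batch would instead go through a JDBC batched INSERT:

    ```java
    import java.util.*;
    import java.util.concurrent.*;

    public class ParallelBulkInsert {
        // Stand-in for a database table; a real DAL would issue JDBC batch inserts.
        static final Queue<String> table = new ConcurrentLinkedQueue<>();

        static void insertBatch(List<String> batch) {
            // In a real system: one multi-row INSERT or Statement.executeBatch().
            table.addAll(batch);
        }

        // Partition the records into one contiguous slice per thread and insert
        // the slices concurrently; returns the number of rows in the table.
        static int bulkInsert(List<String> records, int threads) throws InterruptedException {
            ExecutorService pool = Executors.newFixedThreadPool(threads);
            int chunk = (records.size() + threads - 1) / threads;
            for (int t = 0; t < threads; t++) {
                int from = t * chunk, to = Math.min(records.size(), from + chunk);
                if (from >= to) break;
                List<String> slice = new ArrayList<>(records.subList(from, to));
                pool.submit(() -> insertBatch(slice));
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.MINUTES);
            return table.size();
        }

        public static void main(String[] args) throws InterruptedException {
            List<String> rows = new ArrayList<>();
            for (int i = 0; i < 10_000; i++) rows.add("row-" + i);
            System.out.println(bulkInsert(rows, 4)); // prints 10000
        }
    }
    ```

    Whether this scales in practice depends on the database engine: with a real DBMS the threads contend on locks, logging and I/O, which is precisely what an evaluation like the one above measures.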

  5. Database Description - RPSD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available RPSD Database... Description General information of database Database name RPSD Alternative name Summary inform...n National Institute of Agrobiological Sciences Toshimasa Yamazaki E-mail : Database classification Structure Database...idopsis thaliana Taxonomy ID: 3702 Taxonomy Name: Glycine max Taxonomy ID: 3847 Database description We have...nts such as rice, and have put together the result and related informations. This database contains the basi

  6. Nomenclature and databases - The past, the present, and the future

    NARCIS (Netherlands)

    Jacobs, Jeffrey Phillip; Mavroudis, Constantine; Jacobs, Marshall Lewis; Maruszewski, Bohdan; Tchervenkov, Christo I.; Lacour-Gayet, Francois G.; Clarke, David Robinson; Gaynor, J. William; Spray, Thomas L.; Kurosawa, Hiromi; Stellin, Giovanni; Ebels, Tjark; Bacha, Emile A.; Walters, Henry L.; Elliott, Martin J.

    2007-01-01

    This review discusses the historical aspects, current state of the art, and potential future advances in the areas of nomenclature and databases for congenital heart disease. Five areas will be reviewed: (1) common language = nomenclature, (2) mechanism of data collection (database or registry) with

  7. GRAPH DATABASES AND GRAPH VIZUALIZATION

    OpenAIRE

    Klančar, Jure

    2013-01-01

    The thesis presents graph databases. Graph databases are a part of NoSQL databases, which is why this thesis presents the basics of NoSQL databases as well. We have focused on the advantages of graph databases compared to relational databases. We have used one of the native graph databases (Neo4j) to present the processing of graph databases in more detail. To get more acquainted with graph databases and their principles, we developed a simple application that uses a Neo4j graph database to...

  8. Optimizing Database Architecture for the New Bottleneck: Memory Access

    OpenAIRE

    Manegold, Stefan; Boncz, Peter; Kersten, Martin

    2000-01-01

    In the past decade, advances in the speed of commodity CPUs have far outpaced advances in memory latency. Main-memory access is therefore increasingly a performance bottleneck for many computer applications, including database systems. In this article, we use a simple scan test to show the severe impact of this bottleneck. The insights gained are translated into guidelines for database architecture, in terms of both data structures and algorithms. We discuss how vertically fragmented data struct...
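
    The "simple scan test" the abstract refers to can be approximated as follows. This is a rough illustrative benchmark, not the authors' actual experiment: a sequential traversal versus a strided traversal of an array larger than cache, doing identical arithmetic work but with very different memory-access patterns:

    ```java
    public class ScanTest {
        // Sequential scan: consecutive addresses, friendly to caches and prefetchers.
        static long sequentialSum(int[] data) {
            long sum = 0;
            for (int i = 0; i < data.length; i++) sum += data[i];
            return sum;
        }

        // Strided scan: jumps `stride` elements at a time, defeating spatial
        // locality; on large arrays this is typically several times slower per
        // element than the sequential scan, despite summing the same values.
        static long stridedSum(int[] data, int stride) {
            long sum = 0;
            for (int s = 0; s < stride; s++)
                for (int i = s; i < data.length; i += stride) sum += data[i];
            return sum;
        }

        public static void main(String[] args) {
            int n = 1 << 24;                       // 64 MB of ints, far larger than cache
            int[] data = new int[n];
            for (int i = 0; i < n; i++) data[i] = i & 0xFF;

            long t0 = System.nanoTime();
            long a = sequentialSum(data);
            long t1 = System.nanoTime();
            long b = stridedSum(data, 4096);
            long t2 = System.nanoTime();

            System.out.println("equal sums: " + (a == b));
            System.out.println("sequential ns: " + (t1 - t0) + ", strided ns: " + (t2 - t1));
        }
    }
    ```

    The gap between the two timings is the memory-access bottleneck; vertical fragmentation narrows it by keeping a scan's working set to the columns it actually touches.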

  9. Database architecture optimized for the new bottleneck: Memory access

    OpenAIRE

    Boncz, Peter; Manegold, Stefan; Kersten, Martin

    1999-01-01

    In the past decade, advances in the speed of commodity CPUs have far outpaced advances in memory latency. Main-memory access is therefore increasingly a performance bottleneck for many computer applications, including database systems. In this article, we use a simple scan test to show the severe impact of this bottleneck. The insights gained are translated into guidelines for database architecture, in terms of both data structures and algorithms. We discuss how vertically fragmented data struct...

  10. CDS - Database Administrator's Guide

    Science.gov (United States)

    Day, J. P.

    This guide aims to instruct the CDS database administrator in: the CDS file system; the CDS index files; and the procedure for assimilating a new CDS tape into the database. It is assumed that the administrator has read SUN/79.

  11. Directory of IAEA databases

    International Nuclear Information System (INIS)

    This second edition of the Directory of IAEA Databases has been prepared within the Division of Scientific and Technical Information (NESI). Its main objective is to describe the computerized information sources available to staff members. This directory contains all databases produced at the IAEA, including databases stored on the mainframe, LANs and PCs. All IAEA Division Directors have been requested to register the existence of their databases with NESI. For the second edition, database owners were requested to review the existing entries for their databases and answer four additional questions. The four additional questions concerned the type of database (e.g. bibliographic, text, statistical, etc.), the category of database (e.g. administrative, nuclear data, etc.), the available documentation and the type of media used for distribution. In the individual entries on the following pages the answers to the first two questions (type and category) are always listed, but the answers to the second two questions (documentation and media) are listed only when information has been made available.

  12. A Quality System Database

    Science.gov (United States)

    Snell, William H.; Turner, Anne M.; Gifford, Luther; Stites, William

    2010-01-01

    A quality system database (QSD), and software to administer the database, were developed to support recording of administrative nonconformance activities that involve requirements for documentation of corrective and/or preventive actions, which can include ISO 9000 internal quality audits and customer complaints.

  13. An organic database system

    NARCIS (Netherlands)

    Kersten, M.L.; Siebes, A.P.J.M.

    1999-01-01

    The pervasive penetration of database technology may suggest that we have reached the end of the database research era. The contrary is true. Emerging technology, in hardware, software, and connectivity, brings a wealth of opportunities to push technology to a new level of maturity. Furthermore, gro

  14. Atomic Spectra Database (ASD)

    Science.gov (United States)

    SRD 78 NIST Atomic Spectra Database (ASD) (Web, free access)   This database provides access and search capability for NIST critically evaluated data on atomic energy levels, wavelengths, and transition probabilities that are reasonably up-to-date. The NIST Atomic Spectroscopy Data Center has carried out these critical compilations.

  15. Children's Culture Database (CCD)

    DEFF Research Database (Denmark)

    Wanting, Birgit

    A Dialogue-inspired database with documentation, network (individual and institutional profiles) and current news, paper presented at the research seminar: Electronic access to fiction, Copenhagen, November 11-13, 1996.

  16. Dictionary as Database.

    Science.gov (United States)

    Painter, Derrick

    1996-01-01

    Discussion of dictionaries as databases focuses on the digitizing of The Oxford English dictionary (OED) and the use of Standard Generalized Mark-Up Language (SGML). Topics include the creation of a consortium to digitize the OED, document structure, relational databases, text forms, sequence, and discourse. (LRW)

  17. Biological Macromolecule Crystallization Database

    Science.gov (United States)

    SRD 21 Biological Macromolecule Crystallization Database (Web, free access)   The Biological Macromolecule Crystallization Database and NASA Archive for Protein Crystal Growth Data (BMCD) contains the conditions reported for the crystallization of proteins and nucleic acids used in X-ray structure determinations and archives the results of microgravity macromolecule crystallization studies.

  18. Neutrosophic Relational Database Decomposition

    OpenAIRE

    Meena Arora; Ranjit Biswas; Dr. U.S.Pandey

    2011-01-01

    In this paper we present a method of decomposing a neutrosophic database relation with neutrosophic attributes into basic relational form. Our objective is to be capable of manipulating incomplete as well as inconsistent information. Fuzzy or vague relations can only handle incomplete information. The authors use the neutrosophic relational database [8],[2] to show how imprecise data can be handled in a relational schema.

  19. Protein sequence databases.

    Science.gov (United States)

    Apweiler, Rolf; Bairoch, Amos; Wu, Cathy H

    2004-02-01

    A variety of protein sequence databases exist, ranging from simple sequence repositories, which store data with little or no manual intervention in the creation of the records, to expertly curated universal databases that cover all species and in which the original sequence data are enhanced by the manual addition of further information in each sequence record. As the focus of researchers moves from the genome to the proteins encoded by it, these databases will play an even more important role as central comprehensive resources of protein information. Several of the leading protein sequence databases are discussed here, with special emphasis on the databases now provided by the Universal Protein Knowledgebase (UniProt) consortium. PMID:15036160

  20. Database and Expert Systems Applications

    DEFF Research Database (Denmark)

    Viborg Andersen, Kim; Debenham, John; Wagner, Roland

    submissions. The papers are organized in topical sections on workflow automation, database queries, data classification and recommendation systems, information retrieval in multimedia databases, Web applications, implementational aspects of databases, multimedia databases, XML processing, security, XML schemata, query evaluation, semantic processing, information retrieval, temporal and spatial databases, querying XML, organisational aspects of databases, natural language processing, ontologies, Web data extraction, semantic Web, data stream management, data extraction, distributed database systems...

  1. Study of developing a database of energy statistics

    Energy Technology Data Exchange (ETDEWEB)

    Park, T.S. [Korea Energy Economics Institute, Euiwang (Korea, Republic of)

    1997-08-01

    An integrated energy database should be prepared in advance for managing energy statistics comprehensively. However, since considerable manpower and budget are required for developing an integrated energy database, it is difficult to establish one within a short period of time. Therefore, as first-stage work for the energy database, this study aims to derive methods for analyzing existing statistical data lists and consolidating insufficient data, and at the same time to analyze general concepts and the data structure of the database. I also studied the data content and items of energy databases operated by international energy-related organizations such as the IEA and APEC and in Japan and the USA as overseas cases, as well as domestic conditions in energy databases and the hardware operating systems of Japanese databases. I analyzed the compilation system of Korean energy databases, discussed the KEDB system, which is representative of total energy databases, and present design concepts for new energy databases. In addition, by analyzing Korean energy statistical data and comparing them with the OECD/IEA system, I present directions for establishing future Korean energy databases and their contents, the data that should be collected as supply and demand statistics, and the establishment of a data collection organization. 26 refs., 15 figs., 11 tabs.

  2. Hazard Analysis Database Report

    International Nuclear Information System (INIS)

    The Hazard Analysis Database was developed in conjunction with the hazard analysis activities conducted in accordance with DOE-STD-3009-94, Preparation Guide for U.S. Department of Energy Nonreactor Nuclear Facility Safety Analysis Reports, for HNF-SD-WM-SAR-067, Tank Farms Final Safety Analysis Report (FSAR). The FSAR is part of the approved Authorization Basis (AB) for the River Protection Project (RPP). This document describes, identifies, and defines the contents and structure of the Tank Farms FSAR Hazard Analysis Database and documents the configuration control changes made to the database. The Hazard Analysis Database contains the collection of information generated during the initial hazard evaluations and the subsequent hazard and accident analysis activities. The Hazard Analysis Database supports the preparation of Chapters 3, 4, and 5 of the Tank Farms FSAR and the Unreviewed Safety Question (USQ) process and consists of two major, interrelated data sets: (1) Hazard Analysis Database: Data from the results of the hazard evaluations, and (2) Hazard Topography Database: Data from the system familiarization and hazard identification.

  3. Hazard Analysis Database Report

    Energy Technology Data Exchange (ETDEWEB)

    GRAMS, W.H.

    2000-12-28

    The Hazard Analysis Database was developed in conjunction with the hazard analysis activities conducted in accordance with DOE-STD-3009-94, Preparation Guide for U.S. Department of Energy Nonreactor Nuclear Facility Safety Analysis Reports, for HNF-SD-WM-SAR-067, Tank Farms Final Safety Analysis Report (FSAR). The FSAR is part of the approved Authorization Basis (AB) for the River Protection Project (RPP). This document describes, identifies, and defines the contents and structure of the Tank Farms FSAR Hazard Analysis Database and documents the configuration control changes made to the database. The Hazard Analysis Database contains the collection of information generated during the initial hazard evaluations and the subsequent hazard and accident analysis activities. The Hazard Analysis Database supports the preparation of Chapters 3, 4, and 5 of the Tank Farms FSAR and the Unreviewed Safety Question (USQ) process and consists of two major, interrelated data sets: (1) Hazard Analysis Database: Data from the results of the hazard evaluations, and (2) Hazard Topography Database: Data from the system familiarization and hazard identification.

  4. Hellenic Woodland Database

    OpenAIRE

    Fotiadis, Georgios; Tsiripidis, Ioannis; Bergmeier, Erwin; Dimopolous, Panayotis

    2012-01-01

    The Hellenic Woodland Database (GIVD ID EU-GR-006) includes relevés from approximately 59 sources, as well as unpublished relevés. In total 4,571 relevés have already been entered in the database, but the database is going to continue growing in the near future. Species abundances are recorded according to the 7-grade Braun-Blanquet scale. The oldest relevés date back to 1963. For the majority of relevés (more than 90%) environmental data (e.g. altitude, slope aspect, inclination) exis...

  5. MySQL Database

    OpenAIRE

    Jimoh, Morufu

    2010-01-01

    The main objectives of this thesis were to show how much easier and faster it is to find required information from a computer database than from other data storage systems or old-fashioned methods. We are able to add, retrieve and update data in a computer database easily. Using a computer database for a company to keep information about its customers is the objective of my thesis; it is faster, which makes it economically a better solution. The project has six tables, which are branch, st...
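    The add/retrieve/update workflow the thesis describes can be sketched briefly; Python's built-in sqlite3 stands in for MySQL here, and the "branch" table with a hypothetical `city` column is only a stand-in for the thesis's actual schema:

```python
# Minimal sketch of add / update / retrieve against a relational database.
# sqlite3 replaces MySQL for a self-contained example; table and column
# names are invented stand-ins for the thesis's schema.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE branch (id INTEGER PRIMARY KEY, city TEXT)")

# add a row
conn.execute("INSERT INTO branch (city) VALUES (?)", ("Helsinki",))
# update it
conn.execute("UPDATE branch SET city = ? WHERE id = ?", ("Espoo", 1))
# retrieve it
city = conn.execute("SELECT city FROM branch WHERE id = 1").fetchone()[0]
print(city)  # → Espoo
```

    The parameterized `?` placeholders are the idiomatic way to pass values; the same statements would run against MySQL with only a driver change.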

  6. Product Licenses Database Application

    CERN Document Server

    Tonkovikj, Petar

    2016-01-01

    The goal of this project is to organize and centralize the data about software tools available to CERN employees, as well as provide a system that would simplify the license management process by providing information about the available licenses and their expiry dates. The project development process consists of two steps: modeling the products (software tools), product licenses, legal agreements and other data related to these entities in a relational database and developing the front-end user interface so that the user can interact with the database. The result is an ASP.NET MVC web application with interactive views for displaying and managing the data in the underlying database.

  7. Unit 43 - Database Concepts I

    OpenAIRE

    Unit 61, CC in GIS; White, Gerald (ACER)

    1990-01-01

    This unit outlines fundamental concepts in database systems and their integration with GIS, including advantages of a database approach, views of a database, database management systems (DBMS), and alternative database models. Three models—hierarchical, network and relational—are discussed in greater detail.
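    The three models can be contrasted with a toy example (not taken from the unit; the warehouse data is invented). Hierarchical databases store records in a tree with one parent per record, network databases allow multiple parents, and relational databases keep flat tables linked by keys:

```python
# Toy illustration of the same parts data under the three database models.

# Hierarchical: a tree, each record has exactly one parent.
hierarchical = {"warehouse": {"shelf1": ["bolt"], "shelf2": ["nut"]}}

# Network: a graph, a part may be owned by several parents.
network = {"bolt": ["shelf1", "shelf2"], "nut": ["shelf2"]}

# Relational: separate tables, rows linked through foreign keys.
shelves = [("s1", "shelf1"), ("s2", "shelf2")]
parts = [("p1", "bolt", "s1"), ("p2", "nut", "s2")]

# A join on the foreign key recovers the part-to-shelf relationship:
joined = [(p, s) for (_, p, sid) in parts for (k, s) in shelves if k == sid]
print(joined)  # → [('bolt', 'shelf1'), ('nut', 'shelf2')]
```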

  8. Some operations on database universes

    OpenAIRE

    Brock, E.O. de

    1997-01-01

    Operations such as integration or modularization of databases can be considered as operations on database universes. This paper describes some operations on database universes. Formally, a database universe is a special kind of table. It turns out that various operations on tables constitute interesting operations on database universes as well.

  9. The LOCUS interface to the MFE database

    International Nuclear Information System (INIS)

    The MFE database now consists of over 900 shots from TFTR, PDX, PLT, T-10, JT-60, TEXT, JET and ASDEX. A variety of discharge conditions is represented, ranging from single time-slice Ohmic discharges to multiple time-slice auxiliary heated discharges. Included with most datasets is a reference that describes the experiment being performed when the data were taken. The MFE database is currently implemented under INGRES on a VAX that is on the Internet. LOCUS, a database utility developed at the Princeton Plasma Physics Laboratory, is now available as an interface to the database. The LOCUS front end provides a graphic interface to the database from any generic graphics terminal that supports Tektronix 4010 emulation. It provides a variety of procedures for extracting, manipulating and graphing data from the MFE database. In order to demonstrate the capabilities of the LOCUS interface, the authors examine, in detail, one of the recently added JET H-mode discharges. In this example, they address some new concepts, such as monitor functions, which have been introduced in order to help users more fully understand the multiple time-slice datasets. They also describe some of the more advanced techniques available in LOCUS for data access and manipulation. Specific areas of interest that are discussed are searching for and retrieving datasets, graphics, data fitting, and linear regression analysis.

  10. Status Report for Remediation Decision Support Project, Task 1, Activity 1.B – Physical and Hydraulic Properties Database and Interpretation

    Energy Technology Data Exchange (ETDEWEB)

    Rockhold, Mark L.

    2008-09-26

    The objective of Activity 1.B of the Remediation Decision Support (RDS) Project is to compile all available physical and hydraulic property data for sediments from the Hanford Site, to port these data into the Hanford Environmental Information System (HEIS), and to make the data web-accessible to anyone on the Hanford Local Area Network via the so-called Virtual Library. In past years efforts were made by RDS project staff to compile all available physical and hydraulic property data for Hanford sediments and to transfer these data into SoilVision®, a commercial geotechnical software package designed for storing, analyzing, and manipulating soils data. Although SoilVision® has proven to be useful, its access and use restrictions have been recognized as a limitation to the effective use of the physical and hydraulic property databases by the broader group of potential users involved in Hanford waste site issues. In order to make these data more widely available and usable, a decision was made to port them to HEIS and to make them web-accessible via a Virtual Library module. In FY08 the objectives of Activity 1.B of the RDS Project were to: (1) ensure traceability and defensibility of all physical and hydraulic property data currently residing in the SoilVision® database maintained by PNNL, (2) transfer the physical and hydraulic property data from the Microsoft Access database files used by SoilVision® into HEIS, which has most recently been maintained by Fluor-Hanford, Inc., (3) develop a Virtual Library module for accessing these data from HEIS, and (4) write a User's Manual for the Virtual Library module. The development of the Virtual Library module was to be performed by a third party under subcontract to Fluor. The intent of these activities is to make the available physical and hydraulic property data more readily accessible and usable by technical staff and operable unit managers involved in waste site assessments.

  11. Navigating public microarray databases.

    Science.gov (United States)

    Penkett, Christopher J; Bähler, Jürg

    2004-01-01

    With the ever-escalating amount of data being produced by genome-wide microarray studies, it is of increasing importance that these data are captured in public databases so that researchers can use this information to complement and enhance their own studies. Many groups have set up databases of expression data, ranging from large repositories, which are designed to comprehensively capture all published data, through to more specialized databases. The public repositories, such as ArrayExpress at the European Bioinformatics Institute contain complete datasets in raw format in addition to processed data, whilst the specialist databases tend to provide downstream analysis of normalized data from more focused studies and data sources. Here we provide a guide to the use of these public microarray resources. PMID:18629145

  12. Dietary Supplement Label Database

    Data.gov (United States)

    U.S. Department of Health & Human Services — The database is designed to help both the general public and health care providers find information about ingredients in brand-name products, including name, form,...

  13. ARTI Refrigerant Database

    Energy Technology Data Exchange (ETDEWEB)

    Calm, J.M. [Calm (James M.), Great Falls, VA (United States)

    1994-05-27

    The Refrigerant Database consolidates and facilitates access to information to assist industry in developing equipment using alternative refrigerants. The underlying purpose is to accelerate the phase-out of chemical compounds of environmental concern.

  14. Mouse Phenome Database (MPD)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Mouse Phenome Database (MPD) has characterizations of hundreds of strains of laboratory mice to facilitate translational discoveries and to assist in selection...

  15. Nuclear Science References Database

    International Nuclear Information System (INIS)

    The Nuclear Science References (NSR) database, together with its associated Web interface, is the world's only comprehensive source of easily accessible low- and intermediate-energy nuclear physics bibliographic information for more than 210,000 articles since the beginning of nuclear science. The weekly-updated NSR database provides essential support for nuclear data evaluation, compilation and research activities. The principles of the database and Web application development and maintenance are described. Examples of nuclear structure, reaction and decay applications are specifically included. The complete NSR database is freely available at the websites of the National Nuclear Data Center (http://www.nndc.bnl.gov/nsr) and the International Atomic Energy Agency (http://www-nds.iaea.org/nsr).

  16. Records Management Database

    Data.gov (United States)

    US Agency for International Development — The Records Management Database is tool created in Microsoft Access specifically for USAID use. It contains metadata in order to access and retrieve the information...

  17. Food Habits Database (FHDBS)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The NEFSC Food Habits Database has two major sources of data. The first, and most extensive, is the standard NEFSC Bottom Trawl Surveys Program. During these...

  18. OTI Activity Database

    Data.gov (United States)

    US Agency for International Development — OTI's worldwide activity database is a simple and effective information system that serves as a program management, tracking, and reporting tool. In each country,...

  19. Global Volcano Locations Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NGDC maintains a database of over 1,500 volcano locations obtained from the Smithsonian Institution Global Volcanism Program, Volcanoes of the World publication....

  20. Uranium Location Database

    Data.gov (United States)

    U.S. Environmental Protection Agency — A GIS compiled locational database in Microsoft Access of ~15,000 mines with uranium occurrence or production, primarily in the western United States. The metadata...

  1. Medicaid CHIP ESPC Database

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Environmental Scanning and Program Characteristic (ESPC) Database is in a Microsoft (MS) Access format and contains Medicaid and CHIP data, for the 50 states...

  2. IVR RSA Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This database contains trip-level reports submitted by vessels participating in Research Set-Aside projects with IVR reporting requirements.

  3. Reach Address Database (RAD)

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Reach Address Database (RAD) stores the reach address of each Water Program feature that has been linked to the underlying surface water features (streams,...

  4. Kansas Cartographic Database (KCD)

    Data.gov (United States)

    Kansas Data Access and Support Center — The Kansas Cartographic Database (KCD) is an exact digital representation of selected features from the USGS 7.5 minute topographic map series. Features that are...

  5. Dissolution Methods Database

    Data.gov (United States)

    U.S. Department of Health & Human Services — For a drug product that does not have a dissolution test method in the United States Pharmacopeia (USP), the FDA Dissolution Methods Database provides information...

  6. Consumer Product Category Database

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Chemical and Product Categories database (CPCat) catalogs the use of over 40,000 chemicals and their presence in different consumer products. The chemical use...

  7. 1988 Spitak Earthquake Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The 1988 Spitak Earthquake database is an extensive collection of geophysical and geological data, maps, charts, images and descriptive text pertaining to the...

  8. Eldercare Locator Database

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Eldercare Locator is a searchable database that allows a user to search via zip code or city/ state for agencies at the State and local levels that provide...

  9. National Assessment Database

    Data.gov (United States)

    U.S. Environmental Protection Agency — The National Assessment Database stores and tracks state water quality assessment decisions, Total Maximum Daily Loads (TMDLs) and other watershed plans designed to...

  10. Household Products Database

    Data.gov (United States)

    U.S. Department of Health & Human Services — This database links over 4,000 consumer brands to health effects from Material Safety Data Sheets (MSDS) provided by the manufacturers and allows scientists and...

  11. Disaster Debris Recovery Database

    Data.gov (United States)

    U.S. Environmental Protection Agency — The US EPA Region 5 Disaster Debris Recovery Database includes public datasets of over 3,500 composting facilities, demolition contractors, haulers, transfer...

  12. Toxicity Reference Database

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Toxicity Reference Database (ToxRefDB) contains approximately 30 years and $2 billion worth of animal studies. ToxRefDB allows scientists and the interested...

  13. Drycleaner Database - Region 7

    Data.gov (United States)

    U.S. Environmental Protection Agency — THIS DATA ASSET NO LONGER ACTIVE: This is metadata documentation for the Region 7 Drycleaner Database (R7DryClnDB) which tracks all Region7 drycleaners who notify...

  14. The Danish Urogynaecological Database

    DEFF Research Database (Denmark)

    Guldberg, Rikke; Brostrøm, Søren; Hansen, Jesper Kjær;

    2013-01-01

    INTRODUCTION AND HYPOTHESIS: The Danish Urogynaecological Database (DugaBase) is a nationwide clinical database established in 2006 to monitor, ensure and improve the quality of urogynaecological surgery. We aimed to describe its establishment and completeness and to validate selected variables....... This is the first study based on data from the DugaBase. METHODS: The database completeness was calculated as a comparison between urogynaecological procedures reported to the Danish National Patient Registry and to the DugaBase. Validity was assessed for selected variables from a random sample of 200...... women in the DugaBase from 1 January 2009 to 31 October 2010, using medical records as a reference. RESULTS: A total of 16,509 urogynaecological procedures were registered in the DugaBase by 31 December 2010. The database completeness has increased by calendar time, from 38.2 % in 2007 to 93.2 % in 2010...
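    The completeness measure described above is a simple ratio: procedures captured in the clinical database as a share of those reported to the national registry. A minimal sketch, with hypothetical counts chosen to reproduce the reported 2007 and 2010 figures:

```python
# Sketch of the database completeness calculation: the percentage of
# registry-reported procedures that were also registered in the clinical
# database. The counts are hypothetical, matching the reported percentages.

def completeness(db_count: int, registry_count: int) -> float:
    """Percent of registry procedures also found in the database."""
    return 100.0 * db_count / registry_count

print(round(completeness(382, 1000), 1))  # → 38.2  (as in 2007)
print(round(completeness(932, 1000), 1))  # → 93.2  (as in 2010)
```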

  15. National Geochemical Database: Sediment

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — Geochemical analysis of sediment samples from the National Geochemical Database. Primarily inorganic elemental concentrations, most samples are of stream sediment...

  16. Fine Arts Database (FAD)

    Data.gov (United States)

    General Services Administration — The Fine Arts Database records information on federally owned art in the control of the GSA; this includes the location, current condition and information on artists.

  17. Medicare Coverage Database

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Medicare Coverage Database (MCD) contains all National Coverage Determinations (NCDs) and Local Coverage Determinations (LCDs), local articles, and proposed NCD...

  18. Venus Crater Database

    Data.gov (United States)

    National Aeronautics and Space Administration — This web page leads to a database of images and information about the 900 or so impact craters on the surface of Venus by diameter, latitude, and name.

  19. Rat Genome Database (RGD)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Rat Genome Database (RGD) is a collaborative effort between leading research institutions involved in rat genetic and genomic research to collect, consolidate,...

  20. The CAPEC Database

    DEFF Research Database (Denmark)

    Nielsen, Thomas Lund; Abildskov, Jens; Harper, Peter Mathias; Papaeconomou, Eirini; Gani, Rafiqul

    2001-01-01

    The Computer-Aided Process Engineering Center (CAPEC) database of measured data was established with the aim to promote greater data exchange in the chemical engineering community. The target properties are pure component properties, mixture properties, and special drug solubility data. The...... database divides pure component properties into primary, secondary, and functional properties. Mixture properties are categorized in terms of the number of components in the mixture and the number of phases present. The compounds in the database have been classified on the basis of the functional groups in...... the compound. This classification makes the CAPEC database a very useful tool, for example, in the development of new property models, since properties of chemically similar compounds are easily obtained. A program with efficient search and retrieval functions of properties has been developed....

  1. ORACLE DATABASE SECURITY

    OpenAIRE

    Cristina-Maria Titrade

    2011-01-01

    This paper presents some security issues, namely database system-level security, data-level security, user-level security, user management, resource management and password management. Security is a constant concern in database design and development. Usually, the question is not whether security should exist, but rather how extensive it should be. A typical DBMS has several levels of security, in addition to those offered by the operating system or network. Typically, a DBMS has user a...
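    The user-level layer mentioned above can be sketched as a privilege lookup; the users, tables and privilege sets here are invented, and a real DBMS such as Oracle expresses the same idea declaratively with GRANT/REVOKE statements rather than application code:

```python
# Hedged sketch (all names hypothetical) of user-level security: the DBMS
# checks a user's granted privileges before permitting an operation.
GRANTS = {
    "alice": {"orders": {"SELECT", "INSERT"}},
    "bob":   {"orders": {"SELECT"}},
}

def is_allowed(user: str, table: str, op: str) -> bool:
    """Return True if `user` holds privilege `op` on `table`."""
    return op in GRANTS.get(user, {}).get(table, set())

print(is_allowed("alice", "orders", "INSERT"))  # → True
print(is_allowed("bob", "orders", "INSERT"))    # → False
```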

  2. Fashion Information Database

    Institute of Scientific and Technical Information of China (English)

    LI Jun; WU Hai-yan; WANG Yun-yi

    2002-01-01

    In the fashion industry, controlling and applying information throughout the merchandising process is a bottleneck. With the aid of digital technology, a complete and practical fashion information database could be established so that a high-quality, efficient, low-cost and distinctive fashion merchandising system could be realized. The basic structure of a fashion information database is discussed.

  3. Database computing in HEP

    International Nuclear Information System (INIS)

    The major SSC experiments are expected to produce up to 1 Petabyte of data per year each. Once the primary reconstruction is completed by farms of inexpensive processors, I/O becomes a major factor in further analysis of the data. We believe that the application of database techniques can significantly reduce the I/O performed in these analyses. We present examples of such I/O reductions in prototypes based on relational and object-oriented databases of CDF data samples.
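    The I/O-reduction idea can be illustrated with any indexed database: a selective query reads only the matching entries instead of scanning every event record. The sketch below uses sqlite3 with invented table and column names; nothing here comes from the CDF prototypes themselves:

```python
# Illustrative sketch: an index lets a selective query avoid a full scan
# of the event table. Table and column names are invented for the example.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER, energy REAL)")
conn.executemany("INSERT INTO events VALUES (?, ?)",
                 [(i, i * 0.5) for i in range(10000)])
conn.execute("CREATE INDEX idx_energy ON events(energy)")

# The planner can satisfy this range query from the index alone:
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT energy FROM events WHERE energy > 4999.0"
).fetchall()
print(plan[0][3])  # planner detail mentions idx_energy, not a full scan
```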

  4. Taxes in Europe Database

    OpenAIRE

    European Commission DG Taxation and Customs Union

    2009-01-01

    The Taxes in Europe database is the European Commission's on-line information tool covering the main taxes in force in the EU Member States. Access is free for all users. The system contains information on around 650 taxes, as provided to the European Commission by the national authorities. The "Taxes in Europe" database contains, for each individual tax, information on its legal basis, assessment base, main exemptions, applicable rate(s), economic and statistical classification, as well as t...

  5. Database on Wind Characteristics

    DEFF Research Database (Denmark)

    Højstrup, J.; Ejsing Jørgensen, Hans; Lundtang Petersen, Erik;

    1999-01-01

    This report describes the work and results of the project Database on Wind Characteristics, which was sponsored partly by the European Commission within the framework of the JOULE III programme under contract JOR3-CT95-0061.

  6. Update History of This Database - Yeast Interacting Proteins Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Update history of the Yeast Interacting Proteins Database; the most recent update recorded in the archive is dated 2010/03/29.

  7. Cloud Databases: A Paradigm Shift in Databases

    OpenAIRE

    Indu Arora; Anu Gupta

    2012-01-01

    Relational databases ruled the Information Technology (IT) industry for almost 40 years, but the last few years have seen sea changes in the way IT is used and viewed. Stand-alone applications have been replaced with web-based applications, dedicated servers with multiple distributed servers, and dedicated storage with network storage. Cloud computing has become a reality due to its lower cost, scalability and pay-as-you-go model. It is one of the biggest changes in IT after the rise of Wor...

  8. GeneLink: a database to facilitate genetic studies of complex traits

    Directory of Open Access Journals (Sweden)

    Wolfsberg Tyra G

    2004-10-01

    Abstract Background In contrast to gene-mapping studies of simple Mendelian disorders, genetic analyses of complex traits are far more challenging, and high-quality data management systems are often critical to the success of these projects. To minimize the difficulties inherent in complex trait studies, we have developed GeneLink, a Web-accessible, password-protected Sybase database. Results GeneLink is a powerful tool for complex trait mapping, enabling genotypic data to be easily merged with pedigree and extensive phenotypic data. Specifically designed to facilitate large-scale (multi-center) genetic linkage or association studies, GeneLink securely and efficiently handles large amounts of data and provides additional features to facilitate data analysis by existing software packages and quality control. These include the ability to download chromosome-specific data files containing marker data in map order in various formats appropriate for downstream analyses (e.g., GAS and LINKAGE). Furthermore, an unlimited number of phenotypes (either qualitative or quantitative) can be stored and analyzed. Finally, GeneLink generates several quality assurance reports, including genotyping success rates of specified DNA samples or success and heterozygosity rates for specified markers. Conclusions GeneLink has already proven an invaluable tool for complex trait mapping studies and is discussed primarily in the context of our large, multi-center study of hereditary prostate cancer (HPC). GeneLink is freely available at http://research.nhgri.nih.gov/genelink.
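    One of the quality-assurance reports mentioned above, the genotyping success rate, is simply the fraction of samples with a non-missing genotype call for a marker. A minimal sketch with invented call data:

```python
# Sketch of a genotyping-success-rate QA report: the percentage of
# genotype calls for a marker that are not missing. Data values invented.

def success_rate(calls) -> float:
    """Percent of genotype calls that are not missing (None)."""
    typed = [c for c in calls if c is not None]
    return 100.0 * len(typed) / len(calls)

marker_calls = ["AA", "AB", None, "BB", "AB", None, "AA", "AA"]
print(success_rate(marker_calls))  # → 75.0
```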

  9. A Secure Database Encryption Scheme

    OpenAIRE

    Zongkai Yang; Samba Sesay; Jingwen Chen; Du Xu

    2004-01-01

    The need to protect databases is an ever-growing one, especially in this age of e-commerce. Many conventional database security systems are riddled with holes that attackers can use to penetrate the database. No matter what degree of security is put in place, sensitive data in a database are still vulnerable to attack. To avoid the risk posed by this threat, database encryption has been recommended. However, encrypting every database item will greatly degrade ...
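    The encrypt-before-store / decrypt-after-fetch pattern that field-level database encryption relies on can be sketched as follows. This is a pedagogical toy only: the XOR keystream below is NOT secure and is not the paper's scheme; a real system would use an authenticated cipher such as AES-GCM from a vetted library:

```python
# Pedagogical sketch of field-level encryption for a sensitive column.
# The SHA-256-derived XOR keystream is NOT secure; it only illustrates
# that the database stores ciphertext and the application decrypts it.
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor_crypt(key: bytes, data: bytes) -> bytes:
    # XOR is its own inverse, so the same call encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

key = b"demo-key"
plaintext = b"4111-1111-1111-1111"
stored = xor_crypt(key, plaintext)       # ciphertext kept in the DB
print(xor_crypt(key, stored).decode())   # → 4111-1111-1111-1111
```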

  10. 600 MW nuclear power database

    International Nuclear Information System (INIS)

    The 600 MW nuclear power database, based on ORACLE 6.0, consists of three parts: a nuclear power plant database, a nuclear power position database and a nuclear power equipment database. The database contains a great deal of technical data and pictures of nuclear power, provided by engineering design units and individuals, and can assist the designers of nuclear power.

  11. DCC Briefing Paper: Database archiving

    OpenAIRE

    Müller, Heiko

    2009-01-01

    In a computational context, data archiving refers to the storage of electronic documents, data sets, multimedia files, and so on, for a defined period of time. Database archiving is usually seen as a subset of data archiving. Database archiving focuses on archiving data that are maintained under the control of a database management system and structured under a database schema, e.g., a relational database. The primary goal of database archiving is to maintain access to data in case it is late...

  12. Surgery Risk Assessment (SRA) Database

    Data.gov (United States)

    Department of Veterans Affairs — The Surgery Risk Assessment (SRA) database is part of the VA Surgical Quality Improvement Program (VASQIP). This database contains assessments of selected surgical...

  13. The Jungle Database Search Engine

    DEFF Research Database (Denmark)

    Bøhlen, Michael Hanspeter; Bukauskas, Linas; Dyreson, Curtis

    1999-01-01

    Information spread across databases cannot be found by current search engines. A database search engine is able to access and advertise databases on the WWW. Jungle is a database search engine prototype developed at Aalborg University. Operating through JDBC connections to remote databases, Jungle...... extracts and indexes database data and meta-data, building a data store of database information. This information is used to evaluate and optimize queries in the AQUA query language. AQUA is a natural and intuitive database query language that helps users to search for information without knowing how...
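    The metadata-harvesting step Jungle performs over JDBC has a direct analogue in any DBMS's catalogue: enumerate the tables, then the columns of each, and index the result. A sketch using sqlite3 as a stand-in for a remote database (table and column names invented):

```python
# Analogous sketch of database metadata extraction: read the catalogue
# to build an index of tables and their columns, as a search engine
# like Jungle would. sqlite3 stands in for a JDBC-reachable DBMS.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE papers (id INTEGER, title TEXT)")

index = {}
for (table,) in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'"):
    # PRAGMA table_info yields (cid, name, type, notnull, default, pk)
    cols = [row[1] for row in conn.execute(f"PRAGMA table_info({table})")]
    index[table] = cols

print(index)  # → {'papers': ['id', 'title']}
```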

  14. Accessing and using chemical databases

    DEFF Research Database (Denmark)

    Nikolov, Nikolai Georgiev; Pavlov, Todor; Niemelä, Jay Russell;

    2013-01-01

    Computer-based representation of chemicals makes it possible to organize data in chemical databases-collections of chemical structures and associated properties. Databases are widely used wherever efficient processing of chemical information is needed, including search, storage, retrieval, and...... dissemination. Structure and functionality of chemical databases are considered. The typical kinds of information found in a chemical database are considered-identification, structural, and associated data. Functionality of chemical databases is presented, with examples of search and access types. More details...... are included about the OASIS database and platform and the Danish (Q)SAR Database online. Various types of chemical database resources are discussed, together with a list of examples....

  15. A database of worldwide glacier thickness observations

    DEFF Research Database (Denmark)

    Gärtner-Roer, I.; Naegeli, K.; Huss, M.;

    2014-01-01

    One of the grand challenges in glacier research is to assess the total ice volume and its global distribution. Over the past few decades the compilation of a world glacier inventory has been well-advanced both in institutional set-up and in spatial coverage. The inventory is restricted to glacier...... surface observations. However, although thickness has been observed on many glaciers and ice caps around the globe, it has not yet been published in the form of a readily available database. Here, we present a standardized database of glacier thickness observations compiled by an extensive literature...... review and from airborne data extracted from NASA's Operation IceBridge. This database contains ice thickness observations from roughly 1100 glaciers and ice caps including 550 glacier-wide estimates and 750,000 point observations. A comparison of these observational ice thicknesses with results from...

  16. Chemical Explosion Database

    Science.gov (United States)

    Johansson, Peder; Brachet, Nicolas

    2010-05-01

    A database containing information on chemical explosions, recorded and located by the International Data Center (IDC) of the CTBTO, should be established in the IDC prior to entry into force of the CTBT. Nearly all of the large chemical explosions occur in connection with mining activity. As a first step towards the establishment of this database, a survey has been carried out of presumed mining areas where sufficiently large explosions are conducted. The list is dominated by the large coal mining areas such as the Powder River (U.S.), Kuznetsk (Russia), Bowen (Australia) and Ekibastuz (Kazakhstan) basins. There are also several other smaller mining areas, in e.g. Scandinavia, Poland, Kazakhstan and Australia, with large enough explosions for detection. Events in the Reviewed Event Bulletin (REB) of the IDC that are located in or close to these mining areas, and which therefore are candidates for inclusion in the database, have been investigated. A comparison with a database of infrasound events has been made, as many mining blasts generate strong infrasound signals and are therefore also included in the infrasound database. Currently there are 66 such REB events in 18 mining areas in the infrasound database. On a yearly basis, several hundred events in mining areas have been recorded and included in the REB. Establishment of the database of chemical explosions requires confirmation and ground-truth information from the States Parties regarding these events. For an explosion reported in the REB, the appropriate authority in whose country the explosion occurred is encouraged, on a voluntary basis, to seek out information on the explosion and communicate this information to the IDC.

  17. Knowledge Discovery in Databases for Competitive Advantage

    OpenAIRE

    Mark Gilchrist; Deana Lehmann Mooers; Glenn Skrubbeltrang; Francine Vachon

    2012-01-01

    In today's increasingly competitive business world, organizations are using ICT to advance their business strategies and increase their competitive advantage. One technological element that is growing in popularity is knowledge discovery in databases (KDD). In this paper, we propose an analytic framework which is applied to two cases concerning KDD. The first case presents an organization at the analysis stage of a KDD project. The second one shows how a multinational company leverages its d...

  18. The Chandra Bibliography Database

    Science.gov (United States)

    Rots, A. H.; Winkelman, S. L.; Paltani, S.; Blecksmith, S. E.; Bright, J. D.

    2004-07-01

    Early in the mission, the Chandra Data Archive started the development of a bibliography database, tracking publications in refereed journals and on-line conference proceedings that are based on Chandra observations, allowing our users to link directly to articles in the ADS from our archive, and to link to the relevant data in the archive from the ADS entries. Subsequently, we have been working closely with the ADS and other data centers, in the context of the ADEC-ITWG, on standardizing the literature-data linking. We have also extended our bibliography database to include all Chandra-related articles and we are also keeping track of the number of citations of each paper. Obviously, in addition to providing valuable services to our users, this database allows us to extract a wide variety of statistical information. The project comprises five components: the bibliography database proper, a maintenance database, an interactive maintenance tool, a user browsing interface, and a web services component for exchanging information with the ADS. All of these elements are nearly mission-independent and we intend to make the package as a whole available for use by other data centers. The capabilities thus provided represent support for an essential component of the Virtual Observatory.

  19. ADANS database specification

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-01-16

    The purpose of the Air Mobility Command (AMC) Deployment Analysis System (ADANS) Database Specification (DS) is to describe the database organization and storage allocation and to provide the detailed data model of the physical design and information necessary for the construction of the parts of the database (e.g., tables, indexes, rules, defaults). The DS includes entity relationship diagrams, table and field definitions, reports on other database objects, and a description of the ADANS data dictionary. ADANS is the automated system used by Headquarters AMC and the Tanker Airlift Control Center (TACC) for airlift planning and scheduling of peacetime and contingency operations as well as for deliberate planning. ADANS also supports planning and scheduling of Air Refueling Events by the TACC and the unit-level tanker schedulers. ADANS receives input in the form of movement requirements and air refueling requests. It provides a suite of tools for planners to manipulate these requirements/requests against mobility assets and to develop, analyze, and distribute schedules. Analysis tools are provided for assessing the products of the scheduling subsystems, and editing capabilities support the refinement of schedules. A reporting capability provides formatted screen, print, and/or file outputs of various standard reports. An interface subsystem handles message traffic to and from external systems. The database is an integral part of the functionality summarized above.

  20. Integrating Paleoecological Databases

    Science.gov (United States)

    Blois, Jessica; Goring, Simon; Smith, Alison

    2011-02-01

    Neotoma Consortium Workshop; Madison, Wisconsin, 23-26 September 2010; Paleoecology can contribute much to global change science, as paleontological records provide rich information about species range shifts, changes in vegetation composition and productivity, aquatic and terrestrial ecosystem responses to abrupt climate change, and paleoclimate reconstruction, for example. However, while paleoecology is increasingly a multidisciplinary, multiproxy field focused on biotic responses to global change, most paleo databases focus on single-proxy groups. The Neotoma Paleoecology Database (http://www.neotomadb.org) aims to remedy this limitation by integrating discipline-specific databases to facilitate cross-community queries and analyses. In September, Neotoma consortium members and representatives from other databases and data communities met at the University of Wisconsin-Madison to launch the second development phase of Neotoma. The workshop brought together 54 international specialists, including Neotoma data stewards, users, and developers. Goals for the meeting were fourfold: (1) develop working plans for existing data communities; (2) identify new data types and sources; (3) enhance data access, visualization, and analysis on the Neotoma Web site; and (4) coordinate with other databases and cooperate in tool development and sharing.

  1. The CUTLASS database facilities

    International Nuclear Information System (INIS)

    The enhancement of the CUTLASS database management system to provide improved facilities for data handling is seen as a prerequisite to its effective use for future power station data processing and control applications. This particularly applies to the larger projects such as AGR data processing system refurbishments, and the data processing systems required for the new Coal Fired Reference Design stations. In anticipation of the need for improved data handling facilities in CUTLASS, the CEGB established a User Sub-Group in the early 1980s to define the database facilities required by users. Following the endorsement of the resulting specification and a detailed design study, the database facilities have been implemented as an integral part of the CUTLASS system. This paper provides an introduction to the range of CUTLASS Database facilities, and emphasises the role of Database as the central facility around which future Kit 1 and (particularly) Kit 6 CUTLASS based data processing and control systems will be designed and implemented. (author)

  2. Human Performance Event Database

    International Nuclear Information System (INIS)

    The purpose of this paper is to describe several aspects of a Human Performance Event Database (HPED) that is being developed by the Nuclear Regulatory Commission. These include the background, the database structure and basis for the structure, the process for coding and entering event records, the results of preliminary analyses of information in the database, and plans for the future. In 1992, the Office for Analysis and Evaluation of Operational Data (AEOD) within the NRC decided to develop a database for information on human performance during operating events. The database was needed to help classify and categorize the information and to feed back operating experience information to licensees and others. An NRC interoffice working group prepared a list of human performance information that should be reported for events and the list was based on the Human Performance Investigation Process (HPIP) that had been developed by the NRC as an aid in investigating events. The structure of the HPED was based on that list. The HPED currently includes data on events described in augmented inspection team (AIT) and incident investigation team (IIT) reports from 1990 through 1996, AEOD human performance studies from 1990 through 1993, recent NRR special team inspections, and licensee event reports (LERs) that were prepared for the events. (author)

  3. Database Description - GETDB | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available GETDB Database Description General information of database Database name: GETDB Alternative name: Gal4 Enhancer Trap Insertion Database ... +81-78-306-3183 E-mail: Database classification: Expression, Invertebrate genome database Organism Taxonomy Name: Drosophila melanogaster Taxonomy ID: 7227 Database description: About 4,600 insertion lines of enhancer trap lines based on the Gal4-UAS method were generated in Drosophila, and all of

  4. Database management systems understanding and applying database technology

    CERN Document Server

    Gorman, Michael M

    1991-01-01

    Database Management Systems: Understanding and Applying Database Technology focuses on the processes, methodologies, techniques, and approaches involved in database management systems (DBMSs).The book first takes a look at ANSI database standards and DBMS applications and components. Discussion focus on application components and DBMS components, implementing the dynamic relationship application, problems and benefits of dynamic relationship DBMSs, nature of a dynamic relationship application, ANSI/NDL, and DBMS standards. The manuscript then ponders on logical database, interrogation, and phy

  5. Web Technologies And Databases

    Directory of Open Access Journals (Sweden)

    Irina-Nicoleta Odoraba

    2011-04-01

    Full Text Available A database is a collection of many types of occurrences of logical records, containing relationships between records and elementary data aggregates. A database management system (DBMS) is a set of programs for creating and operating a database. Theoretically, any relational DBMS can be used to store the data needed by a Web server. In practice, it has been observed that simple DBMSs such as FoxPro or Access are not suitable for Web sites that are used intensively. Large-scale Web applications need high-performance DBMSs able to run multiple applications simultaneously. The Hyper Text Markup Language (HTML) is used to create hypertext documents for web pages. The purpose of HTML is the presentation of information (paragraphs, fonts, tables) rather than the semantic description of the document.

  6. DistiLD Database

    DEFF Research Database (Denmark)

    Palleja, Albert; Horn, Heiko; Eliasson, Sabrina;

    2012-01-01

    Genome-wide association studies (GWAS) have identified thousands of single nucleotide polymorphisms (SNPs) associated with the risk of hundreds of diseases. However, there is currently no database that enables non-specialists to answer the following simple questions: which SNPs associated with...... blocks, so that SNPs in LD with each other are preferentially in the same block, whereas SNPs not in LD are in different blocks. By projecting SNPs and genes onto LD blocks, the DistiLD database aims to increase usage of existing GWAS results by making it easy to query and visualize disease......-associated SNPs and genes in their chromosomal context. The database is available at http://distild.jensenlab.org/....
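
The projection of SNPs and genes onto LD blocks that the abstract describes amounts to an interval lookup: each SNP is assigned to the block whose genomic interval contains its position. The block boundaries, SNP names, and positions below are invented for illustration; only the assignment logic is the point.

```python
import bisect

def assign_to_blocks(snps, block_starts):
    """Project SNP positions onto LD blocks: a SNP belongs to the
    block whose half-open interval [start_i, start_{i+1}) contains
    its position. block_starts are the sorted left edges of blocks."""
    return {name: bisect.bisect_right(block_starts, pos) - 1
            for name, pos in snps.items()}

# Hypothetical block boundaries and SNP positions on one chromosome:
block_starts = [0, 15000, 42000, 90000]
snps = {"rs1": 1200, "rs2": 16000, "rs3": 16500, "rs4": 95000}

blocks = assign_to_blocks(snps, block_starts)
# rs2 and rs3 land in the same block, mirroring the design goal that
# SNPs in LD with each other are preferentially in the same block.
```

Genes can be projected onto the same intervals by their coordinates, which is what makes joint SNP-gene queries per block straightforward.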

  7. LandIT Database

    DEFF Research Database (Denmark)

    Iftikhar, Nadeem; Pedersen, Torben Bach

    2010-01-01

    and reporting purposes. This paper presents the LandIT database, which is the result of the LandIT project, an industrial collaboration project that developed technologies for communication and data integration between farming devices and systems. The LandIT database in principle is based......Many of today’s farming systems are composed of purpose-built computerized farming devices such as spraying equipment, harvesters, fertilizer spreaders and so on. These devices produce large amounts of data. In most cases, it is essential to store data for longer time periods for analysis...... on the ISOBUS standard; however the standard is extended with additional requirements, such as gradual data aggregation and flexible exchange of farming data. This paper describes the conceptual and logical schemas of the proposed database based on a real-life farming case study....

  8. Database Vs Data Warehouse

    Directory of Open Access Journals (Sweden)

    2007-01-01

    Full Text Available Data warehouse technology includes a set of concepts and methods that offer users useful information for decision making. The necessity to build a data warehouse arises from the necessity to improve the quality of information in the organization. The data, coming from different sources in a variety of forms, both structured and unstructured, are filtered according to business rules and integrated into a single large data collection. Using informatics solutions, managers have understood that the data stored in operational systems, including databases, are an informational gold mine that must be exploited. Data warehouses have been developed to answer the increasing demands for complex analysis, which could not be properly met with operational databases. The present paper emphasizes some of the criteria that application developers can use in order to choose between a database solution and a data warehouse one.

  9. Cytochrome P450 database.

    Science.gov (United States)

    Lisitsa, A V; Gusev, S A; Karuzina, I I; Archakov, A I; Koymans, L

    2001-01-01

    This paper describes a specialized database dedicated exclusively to the cytochrome P450 superfamily. The system provides an overview of the superfamily's nomenclature and describes the structure and function of different P450 enzymes. Information on P450-catalyzed reactions, substrate preferences, and peculiarities of induction and inhibition is available through the database management system. The source genes and the corresponding translated proteins can also be retrieved, together with the relevant literature references. The software provides a flexible interface for browsing, searching, grouping and reporting the information. A local version of the database manager and the required data files are distributed on a compact disc; a network version of the software is also available on the Internet. The network version implements an original mechanism for the permanent online extension of the data scope. PMID:11769119

  10. ARTI Refrigerant Database

    Energy Technology Data Exchange (ETDEWEB)

    Calm, J.M.

    1992-04-30

    The Refrigerant Database consolidates and facilitates access to information to assist industry in developing equipment using alternative refrigerants. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern. The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air-conditioning and refrigeration equipment. The complete documents are not included, though some may be added at a later date. The database identifies sources of specific information on R-32, R-123, R-124, R-125, R-134a, R-141b, R-142b, R-143a, R-152a, R-290 (propane), R-717 (ammonia), ethers, and others as well as azeotropic and zeotropic blends of these fluids. It addresses polyalkylene glycol (PAG), ester, and other lubricants. It also references documents addressing compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits.

  11. Optimizing Spatial Databases

    Directory of Open Access Journals (Sweden)

    Anda VELICANU

    2010-01-01

    Full Text Available This paper describes the best way to improve the optimization of spatial databases: through spatial indexes. The most common and widely used spatial indexes, the R-tree and the Quadtree, are presented, analyzed and compared in this paper. A few examples are also given of queries that run in Oracle Spatial and are supported by an R-tree spatial index. Spatial databases offer special features that can be very helpful when such data needs to be represented. But in terms of storage and time costs, spatial data can require a lot of resources. This is why optimizing the database is one of the most important aspects when working with large volumes of data.
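
The pruning step that makes R-tree and Quadtree indexes effective is a cheap bounding-box test that runs before any exact geometry comparison. Below is a minimal filter-step sketch of that idea, not Oracle Spatial's implementation; the object names and coordinates are invented for illustration.

```python
def mbr_intersects(a, b):
    """True if two axis-aligned minimum bounding rectangles overlap.
    Rectangles are (xmin, ymin, xmax, ymax)."""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

def range_query(objects, window):
    """Filter step: keep only objects whose MBR intersects the window.
    An R-tree performs this same test hierarchically over nested
    bounding boxes to avoid scanning every object."""
    return [name for name, mbr in objects.items()
            if mbr_intersects(mbr, window)]

# Hypothetical object MBRs:
objects = {
    "road":   (0, 0, 10, 1),
    "lake":   (20, 20, 30, 30),
    "parcel": (5, 0, 8, 6),
}
hits = range_query(objects, (4, 0, 9, 2))   # "lake" is pruned
```

Only the surviving candidates would then go through the exact (and far more expensive) geometry test.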

  12. Additive Pattern Database Heuristics

    CERN Document Server

    Felner, A; Korf, R E; 10.1613/jair.1480

    2011-01-01

    We explore a method for computing admissible heuristic evaluation functions for search problems. It utilizes pattern databases, which are precomputed tables of the exact cost of solving various subproblems of an existing problem. Unlike standard pattern database heuristics, however, we partition our problems into disjoint subproblems, so that the costs of solving the different subproblems can be added together without overestimating the cost of solving the original problem. Previously, we showed how to statically partition the sliding-tile puzzles into disjoint groups of tiles to compute an admissible heuristic, using the same partition for each state and problem instance. Here we extend the method and show that it applies to other domains as well. We also present another method for additive heuristics which we call dynamically partitioned pattern databases. Here we partition the problem into disjoint subproblems for each state of the search dynamically. We discuss the pros and cons of each of these methods a...
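
The disjoint-partition idea can be illustrated on a toy 2x2 sliding-tile puzzle: each pattern database charges only for moves of its own tiles, so costs from disjoint patterns can be summed without overestimating. This is a sketch of the statically partitioned variant under that assumption, not the paper's code.

```python
from collections import deque

N = 2                     # toy 2x2 sliding-tile puzzle: tiles 1..3, blank 0
GOAL = (0, 1, 2, 3)

def neighbors(state):
    """Yield (successor, moved_tile) for every legal blank move."""
    b = state.index(0)
    r, c = divmod(b, N)
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < N and 0 <= nc < N:
            j = nr * N + nc
            s = list(state)
            s[b], s[j] = s[j], s[b]
            yield tuple(s), state[j]

def abstract(state, pattern):
    """Make tiles outside the pattern indistinguishable (-1)."""
    return tuple(t if t == 0 or t in pattern else -1 for t in state)

def build_pdb(pattern):
    """Search backward from the goal in the abstract space, charging
    cost 1 only when a pattern tile moves. Because each tile belongs
    to exactly one pattern of a disjoint partition, the per-pattern
    costs can be added without overestimating."""
    start = abstract(GOAL, pattern)
    dist = {start: 0}
    dq = deque([start])
    while dq:
        s = dq.popleft()
        for t, moved in neighbors(s):
            cost = 1 if moved in pattern else 0
            if t not in dist or dist[s] + cost < dist[t]:
                dist[t] = dist[s] + cost
                if cost:
                    dq.append(t)      # 0-1 BFS: 1-cost edges to the back
                else:
                    dq.appendleft(t)  # 0-cost edges to the front
    return dist

def additive_h(state, pdbs):
    """Admissible heuristic: sum over the disjoint pattern databases."""
    return sum(pdb[abstract(state, pat)] for pat, pdb in pdbs.items())

partition = [frozenset({1}), frozenset({2, 3})]
pdbs = {pat: build_pdb(pat) for pat in partition}
h = additive_h((1, 0, 2, 3), pdbs)   # state one blank-swap from the goal
```

The same structure scales to the 15-puzzle with, e.g., a 7-8 tile partition; only the state encoding and table storage need to change.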

  13. The PROSITE database.

    Science.gov (United States)

    Hulo, Nicolas; Bairoch, Amos; Bulliard, Virginie; Cerutti, Lorenzo; De Castro, Edouard; Langendijk-Genevaux, Petra S; Pagni, Marco; Sigrist, Christian J A

    2006-01-01

    The PROSITE database consists of a large collection of biologically meaningful signatures that are described as patterns or profiles. Each signature is linked to a documentation that provides useful biological information on the protein family, domain or functional site identified by the signature. The PROSITE database is now complemented by a series of rules that can give more precise information about specific residues. During the last 2 years, the documentation and the ScanProsite web pages were redesigned to add more functionalities. The latest version of PROSITE (release 19.11 of September 27, 2005) contains 1329 patterns and 552 profile entries. Over the past 2 years more than 200 domains have been added, and now 52% of UniProtKB/Swiss-Prot entries (release 48.1 of September 27, 2005) have a cross-reference to a PROSITE entry. The database is accessible at http://www.expasy.org/prosite/. PMID:16381852
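
PROSITE signatures described as patterns use a compact syntax (x for any residue, [..] for alternatives, {..} for exclusions, (n) for repetition, < and > for anchors) that maps directly onto regular expressions. A sketch of that translation, using the well-known N-glycosylation pattern N-{P}-[ST]-{P} (PS00001) as the worked example; this covers the common syntax only, not every corner case of the format.

```python
import re

def prosite_to_regex(pattern):
    """Translate a PROSITE-style pattern into a Python regex.
    Covers the common syntax: x = any residue, [..] = allowed set,
    {..} = excluded set, (n) or (n,m) = repetition, < / > = anchors."""
    out = []
    for elem in pattern.rstrip('.').split('-'):
        rep = ''
        m = re.search(r'\((\d+(?:,\d+)?)\)$', elem)
        if m:                          # repetition suffix, e.g. x(2)
            rep = '{%s}' % m.group(1)
            elem = elem[:m.start()]
        if elem.startswith('<'):       # anchored at the N-terminus
            out.append('^')
            elem = elem[1:]
        end = elem.endswith('>')       # anchored at the C-terminus
        if end:
            elem = elem[:-1]
        if elem == 'x':
            out.append('.' + rep)
        elif elem.startswith('{'):
            out.append('[^' + elem[1:-1] + ']' + rep)
        else:                          # literal residue or [..] set
            out.append(elem + rep)
        if end:
            out.append('$')
    return ''.join(out)

# PS00001, the N-glycosylation site, as a worked example:
rx = prosite_to_regex('N-{P}-[ST]-{P}')
```

ScanProsite performs this kind of matching server-side; the translation above just makes the pattern semantics concrete.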

  14. The Neotoma Paleoecology Database

    Science.gov (United States)

    Grimm, E. C.; Ashworth, A. C.; Barnosky, A. D.; Betancourt, J. L.; Bills, B.; Booth, R.; Blois, J.; Charles, D. F.; Graham, R. W.; Goring, S. J.; Hausmann, S.; Smith, A. J.; Williams, J. W.; Buckland, P.

    2015-12-01

    The Neotoma Paleoecology Database (www.neotomadb.org) is a multiproxy, open-access, relational database that includes fossil data for the past 5 million years (the late Neogene and Quaternary Periods). Modern distributional data for various organisms are also being made available for calibration and paleoecological analyses. The project is a collaborative effort among individuals from more than 20 institutions worldwide, including domain scientists representing a spectrum of Pliocene-Quaternary fossil data types, as well as experts in information technology. Working groups are active for diatoms, insects, ostracodes, pollen and plant macroscopic remains, testate amoebae, rodent middens, vertebrates, age models, geochemistry and taphonomy. Groups are also active in developing online tools for data analyses and for developing modules for teaching at different levels. A key design concept of NeotomaDB is that stewards for various data types are able to remotely upload and manage data. Cooperatives for different kinds of paleo data, or from different regions, can appoint their own stewards. Over the past year, much progress has been made on development of the steward software-interface that will enable this capability. The steward interface uses web services that provide access to the database. More generally, these web services enable remote programmatic access to the database, which both desktop and web applications can use and which provide real-time access to the most current data. Use of these services can alleviate the need to download the entire database, which can be out-of-date as soon as new data are entered. In general, the Neotoma web services deliver data either from an entire table or from the results of a view. Upon request, new web services can be quickly generated. Future developments will likely expand the spatial and temporal dimensions of the database. NeotomaDB is open to receiving new datasets and stewards from the global Quaternary community

  15. MINING TOPOLOGICAL RELATIONSHIP PATTERNS FROM SPATIOTEMPORAL DATABASES

    Directory of Open Access Journals (Sweden)

    K.Venkateswara Rao

    2012-01-01

    Full Text Available Mining topological relationship patterns involves three aspects. The first is the discovery of geometric relationships like disjoint, cover, intersection and overlap between every pair of spatiotemporal objects. The second is tracking the change of such relationships with time in spatiotemporal databases. The third is mining the topological relationship patterns themselves. Spatiotemporal databases deal with changes to spatial objects with time. The applications in this domain process spatial, temporal and attribute data elements to find the evolution of spatial objects and changes in their topological relationships with time. These advanced database applications require the storage, management and processing of complex spatiotemporal data. In this paper we discuss a model-view-controller based architecture of the system, the design of the spatiotemporal database and a methodology for mining spatiotemporal topological relationship patterns. A prototype implementation of the system is carried out on top of the open source object-relational spatial database management system PostgreSQL with PostGIS. The algorithms are experimented on historical cadastral datasets that are created using OpenJump. The resulting topological relationship patterns are presented.
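
The pairwise relationships listed above (disjoint, cover, overlap) can be illustrated with axis-aligned rectangles standing in for spatial objects; real systems delegate these predicates to PostGIS, but the classification logic looks roughly like this. The coordinates and years below are invented.

```python
def topo_relation(a, b):
    """Classify the relationship between two axis-aligned rectangles
    (xmin, ymin, xmax, ymax): a simplified stand-in for the pairwise
    disjoint / cover / overlap predicates on spatial objects."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    if ax2 < bx1 or bx2 < ax1 or ay2 < by1 or by2 < ay1:
        return "disjoint"
    if a == b:
        return "equal"
    if ax1 <= bx1 and ay1 <= by1 and ax2 >= bx2 and ay2 >= by2:
        return "covers"
    if bx1 <= ax1 and by1 <= ay1 and bx2 >= ax2 and by2 >= ay2:
        return "covered_by"
    return "overlap"

# Two invented parcels observed at three times: tracking how their
# relation evolves is the second aspect the abstract lists.
snapshots = {
    2001: ((0, 0, 4, 4), (6, 6, 9, 9)),
    2005: ((0, 0, 4, 4), (3, 3, 9, 9)),
    2010: ((0, 0, 9, 9), (3, 3, 5, 5)),
}
history = {year: topo_relation(a, b) for year, (a, b) in snapshots.items()}
```

A sequence such as disjoint → overlap → covers is exactly the kind of temporal relationship pattern the mining step would then look for across many object pairs.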

  16. BNDB – The Biochemical Network Database

    Directory of Open Access Journals (Sweden)

    Kaufmann Michael

    2007-10-01

    Full Text Available Abstract Background Technological advances in high-throughput techniques and efficient data acquisition methods have resulted in a massive amount of life science data. The data is stored in numerous databases that have been established over the last decades and are essential resources for scientists nowadays. However, the diversity of the databases and the underlying data models make it difficult to combine this information for solving complex problems in systems biology. Currently, researchers typically have to browse several, often highly focused, databases to obtain the required information. Hence, there is a pressing need for more efficient systems for integrating, analyzing, and interpreting these data. The standardization and virtual consolidation of the databases is a major challenge resulting in a unified access to a variety of data sources. Description We present the Biochemical Network Database (BNDB), a powerful relational database platform allowing a complete semantic integration of an extensive collection of external databases. BNDB is built upon a comprehensive and extensible object model called BioCore, which is powerful enough to model most known biochemical processes and at the same time easily extensible to be adapted to new biological concepts. Besides a web interface for the search and curation of the data, a Java-based viewer (BiNA) provides a powerful platform-independent visualization and navigation of the data. BiNA uses sophisticated graph layout algorithms for an interactive visualization and navigation of BNDB. Conclusion BNDB allows a simple, unified access to a variety of external data sources. Its tight integration with the biochemical network library BN++ offers the possibility for import, integration, analysis, and visualization of the data. BNDB is freely accessible at http://www.bndb.org.

  17. Community-Built Databases

    CERN Document Server

    Pardede, Eric

    2011-01-01

    Wikipedia, Flickr, YouTube, Facebook, and LinkedIn are all examples of large community-built databases, although with quite diverse purposes and collaboration patterns. Their usage and dissemination will grow further, introducing e.g. new semantics, personalization, or interactive media. Pardede delivers the first comprehensive research reference on community-built databases. The contributions discuss various technical and social aspects of research and development in areas like Web science, social networks, and collaborative information systems.

  18. The CHIANTI atomic database

    CERN Document Server

    Young, Peter R; Landi, Enrico; Del Zanna, Giulio; Mason, Helen

    2015-01-01

    The CHIANTI atomic database was first released in 1996 and has had a huge impact on the analysis and modeling of emissions from astrophysical plasmas. The database has continued to be updated, with version 8 released in 2015. Atomic data for modeling the emissivities of 246 ions and neutrals are contained in CHIANTI, together with data for deriving the ionization fractions of all elements up to zinc. The different types of atomic data are summarized here and their formats discussed. Statistics on the impact of CHIANTI to the astrophysical community are given and examples of the diverse range of applications are presented.

  19. Search Databases and Statistics

    DEFF Research Database (Denmark)

    Refsgaard, Jan C; Munk, Stephanie; Jensen, Lars J

    2016-01-01

    vast amounts of raw data. This task is tackled by computational tools implementing algorithms that match the experimental data to databases, providing the user with lists for downstream analysis. Several platforms for such automated interpretation of mass spectrometric data have been developed, each...... having strengths and weaknesses that must be considered for the individual needs. These are reviewed in this chapter. Equally critical for generating highly confident output datasets is the application of sound statistical criteria to limit the inclusion of incorrect peptide identifications from database...
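
One widely used statistical criterion of the kind mentioned here is target-decoy filtering: matches against a reversed or shuffled (decoy) database estimate the fraction of false identifications among the accepted hits. A minimal sketch with made-up scores; this illustrates the general technique, not any specific platform's implementation.

```python
def fdr_filter(psms, threshold=0.01):
    """Accept the best-scoring peptide-spectrum matches while the
    estimated FDR (#decoy hits / #target hits) stays at or below the
    threshold. psms is a list of (score, is_decoy); higher is better."""
    accepted, targets, decoys = [], 0, 0
    for score, is_decoy in sorted(psms, key=lambda p: -p[0]):
        if is_decoy:
            decoys += 1
        else:
            targets += 1
        if targets and decoys / targets > threshold:
            break                      # estimated FDR exceeded: stop
        if not is_decoy:
            accepted.append(score)
    return accepted

# Made-up search scores; True marks a decoy-database match:
psms = [(10, False), (9, False), (7.5, True), (8, False),
        (7, False), (5, True), (6, False), (4, False)]
hits = fdr_filter(psms, threshold=0.25)
```

Production pipelines refine this with q-values and score recalibration, but the accept/reject logic above is the core of the criterion.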

  20. Pressen, personoplysninger og databaser

    DEFF Research Database (Denmark)

    Schaumburg-Müller, Sten

    2006-01-01

    It is examined to what extent the at times very restrictive and not particularly media-suited rules of the Danish Personal Data Act cover journalistic activity, and an account is given of the special regulation of media databases and its interplay with the Personal Data Act and the Media Liability Act.

  1. Rett networked database

    DEFF Research Database (Denmark)

    Grillo, Elisa; Villard, Laurent; Clarke, Angus;

    2012-01-01

    underlie some (usually variant) cases. There is only limited correlation between genotype and phenotype. The Rett Networked Database (http://www.rettdatabasenetwork.org/) has been established to share clinical and genetic information. Through an "adaptor" process of data harmonization, a set of 293...... clinical items and 16 genetic items was generated; 62 clinical and 7 genetic items constitute the core dataset; 23 clinical items contain longitudinal information. The database contains information on 1838 patients from 11 countries (December 2011), with or without mutations in known genes. These numbers...

  2. DataBase on demand

    CERN Document Server

    Aparicio, Ruben Gaspar; Coterillo Coz, I

    2012-01-01

    At CERN a number of key database applications are running on user-managed MySQL database services. The Database on Demand project was born out of an idea to provide the CERN user community with an environment to develop and run database services outside of the actual centralised Oracle-based database services. The Database on Demand (DBoD) platform empowers the user to perform certain actions that had traditionally been done by database administrators (DBAs), providing an enterprise platform for database applications. It also allows the CERN user community to run different database engines, e.g. presently the open community version of MySQL and a single-instance Oracle database server. This article describes a technology approach to face this challenge, the service level agreement (SLA) that the project provides, and an evolution of possible scenarios.

  3. DataBase on Demand

    Science.gov (United States)

    Gaspar Aparicio, R.; Gomez, D.; Coterillo Coz, I.; Wojcik, D.

    2012-12-01

    At CERN a number of key database applications are running on user-managed MySQL database services. The Database on Demand project was born out of an idea to provide the CERN user community with an environment to develop and run database services outside of the actual centralised Oracle-based database services. The Database on Demand (DBoD) platform empowers the user to perform certain actions that had traditionally been done by database administrators (DBAs), providing an enterprise platform for database applications. It also allows the CERN user community to run different database engines, e.g. presently the open community version of MySQL and a single-instance Oracle database server. This article describes a technology approach to face this challenge, the service level agreement (SLA) that the project provides, and an evolution of possible scenarios.

  4. DataBase on Demand

    International Nuclear Information System (INIS)

    At CERN a number of key database applications are running on user-managed MySQL database services. The Database on Demand project was born out of an idea to provide the CERN user community with an environment to develop and run database services outside of the actual centralised Oracle-based database services. Database on Demand (DBoD) empowers the user to perform certain actions that had traditionally been done by database administrators (DBAs), providing an enterprise platform for database applications. It also allows the CERN user community to run different database engines, at present the open community version of MySQL and a single-instance Oracle database server. This article describes the technology approach taken to face this challenge, the service level agreement (SLA) that the project provides, and possible evolution scenarios.

  5. Data Vault: providing simple web access to NRAO data archives

    Science.gov (United States)

    DuPlain, Ron; Benson, John; Sessoms, Eric

    2008-08-01

    In late 2007, the National Radio Astronomy Observatory (NRAO) launched Data Vault, a feature-rich web application for simplified access to NRAO data archives. This application allows users to submit a Google-like free-text search, and browse, download, and view further information on matching telescope data. Data Vault uses the model-view-controller design pattern with web.py, a minimalist open-source web framework built with the Python programming language. Data Vault implements an Ajax client built on the Google Web Toolkit (GWT), which creates structured JavaScript applications. This application supports plug-ins for linking data to additional web tools and services, including Google Sky. NRAO sought the inspiration of Google's remarkably elegant user interface and notable performance to create a modern search tool for the NRAO science data archive, taking advantage of the rapid development frameworks of web.py and GWT to create a web application on a short timeline, while providing modular, easily maintainable code. Data Vault provides users with an NRAO-focused data archive while linking to and providing more information wherever possible. Free-text search capabilities are possible (and even simple) with an innovative query parser. NRAO develops all software under an open-source license; Data Vault is available to developers and users alike.
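    The abstract mentions an "innovative query parser" for the Google-like free-text search but gives no implementation details. The following is a hypothetical sketch (not the NRAO code) of how such a parser might split a query string into field filters (`key:value` tokens) and plain keywords, with quoted phrases kept intact:

```python
import shlex

def parse_query(query: str):
    """Split a free-text query into field filters and plain keywords.

    Tokens of the form key:value become filters; everything else is
    treated as a search keyword. Quoted phrases stay intact.
    """
    filters, keywords = {}, []
    for token in shlex.split(query):
        if ":" in token and not token.startswith(":"):
            key, _, value = token.partition(":")
            filters[key.lower()] = value
        else:
            keywords.append(token)
    return filters, keywords

# Hypothetical field names; any archive-specific vocabulary would differ.
filters, keywords = parse_query('telescope:VLA "orion nebula" 2007 band:K')
```

    A real archive search would then translate the filters into column predicates and the keywords into a text index lookup.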

  6. Database design: Community discussion board

    OpenAIRE

    Klepetko, Radim

    2009-01-01

    The goal of this thesis is to design a database for a discussion board application that provides classic discussion board functionality together with Web 2.0 features. The emphasis lies on a precise description of the application requirements, which are then used to design an optimal database model independent of any technological implementation (chosen database system). At the end of the thesis the database design is tested using the MySQL database system.
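    The thesis's actual schema is not reproduced in the abstract; as a minimal illustration of the kind of model a discussion board needs, here is a hypothetical three-table sketch (users, threads, posts) using SQLite in place of MySQL. All table and column names are invented for the example:

```python
import sqlite3

# Hypothetical core tables for a discussion board; not the thesis's schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users   (id INTEGER PRIMARY KEY, name TEXT UNIQUE NOT NULL);
CREATE TABLE threads (id INTEGER PRIMARY KEY, title TEXT NOT NULL,
                      author_id INTEGER REFERENCES users(id));
CREATE TABLE posts   (id INTEGER PRIMARY KEY,
                      thread_id INTEGER NOT NULL REFERENCES threads(id),
                      author_id INTEGER REFERENCES users(id),
                      body TEXT NOT NULL,
                      created TEXT DEFAULT CURRENT_TIMESTAMP);
""")
conn.execute("INSERT INTO users (name) VALUES ('alice')")
conn.execute("INSERT INTO threads (title, author_id) VALUES ('Welcome', 1)")
conn.execute("INSERT INTO posts (thread_id, author_id, body) "
             "VALUES (1, 1, 'First post')")

# List posts in a thread together with their authors.
posts_in_thread = conn.execute(
    "SELECT u.name, p.body FROM posts p JOIN users u ON u.id = p.author_id "
    "WHERE p.thread_id = 1").fetchall()
```

    Web 2.0 features such as tagging or voting would add further tables referencing `posts`, but the normalization idea stays the same.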

  7. The Database State Machine Approach

    OpenAIRE

    Pedone, Fernando; Guerraoui, Rachid; Schiper, Andre

    1999-01-01

    Database replication protocols have historically been built on top of distributed database systems, and have consequently been designed and implemented using distributed transactional mechanisms, such as atomic commitment. We present the Database State Machine approach, a new way to deal with database replication in a cluster of servers. This approach relies on a powerful atomic broadcast primitive to propagate transactions between database servers, and alleviates the need for atomic comm...
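    The core of the state machine approach is that an atomic broadcast primitive delivers transactions to every replica in the same total order, so deterministic application of that order yields identical states everywhere. A toy illustration of that invariant (not the paper's protocol, which additionally certifies conflicting transactions):

```python
# Toy sketch: replicas that apply the same totally ordered sequence of
# write operations deterministically converge to identical states.
def apply_in_order(ordered_writes):
    state = {}
    for key, value in ordered_writes:
        state[key] = value  # deterministic: last write in the order wins
    return state

# The atomic broadcast primitive guarantees every replica sees this order.
delivered = [("x", 1), ("y", 2), ("x", 3)]
replica_a = apply_in_order(delivered)
replica_b = apply_in_order(delivered)
```

    Because no replica needs to coordinate after delivery, the approach avoids the per-transaction atomic commitment rounds of classical replication.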

  8. GRAD: On Graph Database Modeling

    OpenAIRE

    Ghrab, Amine; Romero, Oscar; Skhiri, Sabri; Vaisman, Alejandro; Zimányi, Esteban

    2016-01-01

    Graph databases have emerged as the fundamental technology underpinning trendy application domains where traditional databases are not well-equipped to handle complex graph data. However, current graph databases support basic graph structures and integrity constraints with no standard algebra. In this paper, we introduce GRAD, a native and generic graph database model. GRAD goes beyond traditional graph database models, which support simple graph structures and constraints. Instead, GRAD pres...

  9. Clinical databases in physical therapy.

    OpenAIRE

    Swinkels, I.C.S.; Ende, C.H.M. van den; Bakker, D.; Wees, Ph.J van der; Hart, D.L.; Deutscher, D.; Bosch, W.J.H. van den; Dekker, J.

    2007-01-01

    Clinical databases in physical therapy provide increasing opportunities for research into physical therapy theory and practice. At present, information on the characteristics of existing databases is lacking. The purpose of this study was to identify clinical databases in which physical therapists record data on their patients and treatments and to investigate the basic aspects, data sets, output, management, and data quality of the databases. Identification of the databases was performed by ...

  10. From database to normbase

    NARCIS (Netherlands)

    Stamper, R.; Liu, K.; Kolkman, M.; Klarenberg, P.; Slooten, van F.; Ades, Y.; Slooten, van C.

    1991-01-01

    After the database concept, we are ready for the normbase concept. The object is to decouple organizational and technical knowledge that are now mixed inextricably together in the application programs we write today. The underlying principle is to find a way of specifying a social system as a system

  11. Database on wind characteristics

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, K.S. [The Technical Univ. of Denmark (Denmark); Courtney, M.S. [Risoe National Lab., (Denmark)

    1999-08-01

    The organisations that participated in the project consist of five research organisations: MIUU (Sweden), ECN (The Netherlands), CRES (Greece), DTU (Denmark), Risoe (Denmark), and one wind turbine manufacturer: Vestas Wind System A/S (Denmark). The overall goal was to build a database consisting of a large number of wind speed time series and to create tools for efficiently searching through the data to select interesting records. The project resulted in a database located at DTU, Denmark, with online access through the Internet. The database contains more than 50,000 hours of measured wind speed data. A wide range of wind climates and terrain types are represented with significant amounts of time series. Data have been chosen selectively with a deliberate over-representation of high-wind and complex-terrain cases. This makes the database ideal for wind turbine design needs but completely unsuitable for resource studies. Diversity has also been an important aim, and this is realised with data from a large range of terrain types: everything from offshore to mountain, from Norway to Greece. (EHS)

  12. Database Technologies for RDF

    Science.gov (United States)

    Das, Souripriya; Srinivasan, Jagannathan

    Efficient and scalable support for RDF/OWL data storage, loading, inferencing and querying, combined with already available support for enterprise-level data and operations reliability requirements, can make databases suitable to act as enterprise-level RDF/OWL repositories, and hence a viable platform for building semantic applications in enterprise environments.
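    The querying model the record refers to is pattern matching over subject-predicate-object triples. As a minimal, assumption-laden sketch (an in-memory set standing in for a database's indexed triple table; a real store adds indexing, inference and SPARQL):

```python
# Minimal in-memory triple pattern matcher illustrating the kind of
# subject/predicate/object querying an RDF-enabled database supports.
# The triples themselves are invented example data.
triples = {
    ("alice", "knows", "bob"),
    ("bob", "knows", "carol"),
    ("alice", "worksAt", "acme"),
}

def match(s=None, p=None, o=None):
    """Return triples matching a pattern; None acts as a wildcard."""
    return sorted(t for t in triples
                  if (s is None or t[0] == s)
                  and (p is None or t[1] == p)
                  and (o is None or t[2] == o))

alice_edges = match(s="alice")
```

    An enterprise-grade repository would answer the same patterns from disk-backed indexes (e.g. SPO/POS/OSP orderings) rather than a linear scan.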

  13. Mathematics & database (open) access

    OpenAIRE

    Guillopé, Laurent

    2003-01-01

    The textual version of this presentation at the Conference "Open Access to Scientific and Technical Information: State of the Art and Future Trends" was published with the title 'Mathematics and databases: Open Access' in "Information Services and Use", vol. 23 (2003), issue 2-3, p. 127-131.

  14. Hydrocarbon Spectral Database

    Science.gov (United States)

    SRD 115 Hydrocarbon Spectral Database (Web, free access)   All of the rotational spectral lines observed and reported in the open literature for 91 hydrocarbon molecules have been tabulated. The isotopic molecular species, assigned quantum numbers, observed frequency, estimated measurement uncertainty and reference are given for each transition reported.

  15. LHCb distributed conditions database

    International Nuclear Information System (INIS)

    The LHCb Conditions Database project provides the necessary tools to handle non-event time-varying data. The main users of conditions are reconstruction and analysis processes, which are running on the Grid. To allow efficient access to the data, we need to use a synchronized replica of the content of the database located at the same site as the event data file, i.e. the LHCb Tier1. The replica to be accessed is selected from information stored on LFC (LCG File Catalog) and managed with the interface provided by the LCG developed library CORAL. The plan to limit the submission of jobs to those sites where the required conditions are available will also be presented. LHCb applications are using the Conditions Database framework on a production basis since March 2007. We have been able to collect statistics on the performance and effectiveness of both the LCG library COOL (the library providing conditions handling functionalities) and the distribution framework itself. Stress tests on the CNAF hosted replica of the Conditions Database have been performed and the results will be summarized here
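    The replica-selection rule described above (use the synchronized conditions replica co-located with the event data file) can be sketched as a simple site lookup. The site names and catalogue dictionary below are invented for illustration; the real system resolves them through the LFC and CORAL:

```python
# Toy sketch of the site-matching idea: prefer the conditions-DB replica
# hosted at the same Tier1 site as the event data file, falling back to
# any available replica. Sites and connection strings are invented.
replicas = {"CNAF": "oracle://cnaf/conditions",
            "CERN": "oracle://cern/conditions",
            "GRIDKA": "oracle://gridka/conditions"}

def select_replica(file_site, replicas):
    """Prefer the replica co-located with the data; else pick any
    deterministically (here: first in sorted order)."""
    if file_site in replicas:
        return replicas[file_site]
    return sorted(replicas.values())[0]

url = select_replica("CNAF", replicas)
```

    The planned job-submission restriction mentioned in the abstract amounts to the same check performed at scheduling time: only sites where `select_replica` finds a local match accept the job.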

  16. LHCb distributed conditions database

    Science.gov (United States)

    Clemencic, M.

    2008-07-01

    The LHCb Conditions Database project provides the necessary tools to handle non-event time-varying data. The main users of conditions are reconstruction and analysis processes, which are running on the Grid. To allow efficient access to the data, we need to use a synchronized replica of the content of the database located at the same site as the event data file, i.e. the LHCb Tier1. The replica to be accessed is selected from information stored on LFC (LCG File Catalog) and managed with the interface provided by the LCG developed library CORAL. The plan to limit the submission of jobs to those sites where the required conditions are available will also be presented. LHCb applications are using the Conditions Database framework on a production basis since March 2007. We have been able to collect statistics on the performance and effectiveness of both the LCG library COOL (the library providing conditions handling functionalities) and the distribution framework itself. Stress tests on the CNAF hosted replica of the Conditions Database have been performed and the results will be summarized here.

  17. Databases and data mining

    Science.gov (United States)

    Over the course of the past decade, the breadth of information that is made available through online resources for plant biology has increased astronomically, as have the interconnectedness among databases, online tools, and methods of data acquisition and analysis. For maize researchers, the numbe...

  18. The AMMA database

    Science.gov (United States)

    Boichard, Jean-Luc; Brissebrat, Guillaume; Cloche, Sophie; Eymard, Laurence; Fleury, Laurence; Mastrorillo, Laurence; Moulaye, Oumarou; Ramage, Karim

    2010-05-01

    The AMMA project includes aircraft, ground-based and ocean measurements, an intensive use of satellite data and diverse modelling studies. Therefore, the AMMA database aims at storing a great amount and a large variety of data, and at providing the data as rapidly and safely as possible to the AMMA research community. In order to stimulate the exchange of information and collaboration between researchers from different disciplines or using different tools, the database provides a detailed description of the products and uses standardized formats. The AMMA database contains: - AMMA field campaign datasets; - historical data in West Africa from 1850 (operational networks and previous scientific programs); - satellite products from past and future satellites, (re-)mapped on a regular latitude/longitude grid and stored in NetCDF format (CF Convention); - model outputs from atmosphere or ocean operational (re-)analysis and forecasts, and from research simulations. The outputs are processed in the same way as the satellite products. Before accessing the data, any user has to sign the AMMA data and publication policy. This charter only covers the use of data in the framework of scientific objectives and categorically excludes the redistribution of data to third parties and usage for commercial applications. Some collaboration between data producers and users, and the mention of the AMMA project in any publication, is also required. The AMMA database and the associated on-line tools have been fully developed and are managed by two teams in France (IPSL Database Centre, Paris, and OMP, Toulouse). Users can access data of both data centres using a unique web portal. This website is composed of different modules: - Registration: forms to register, read and sign the data use charter when a user visits for the first time - Data access interface: friendly tool allowing the user to build a data extraction request by selecting various criteria like location, time, parameters... The request can

  19. JDD, Inc. Database

    Science.gov (United States)

    Miller, David A., Jr.

    2004-01-01

    JDD Inc. is a maintenance and custodial contracting company whose mission is to provide its clients in the private and government sectors "quality construction, construction management and cleaning services in the most efficient and cost effective manners" (JDD, Inc. Mission Statement). This company provides facilities support for Fort Riley in Fort Riley, Kansas and the NASA John H. Glenn Research Center at Lewis Field here in Cleveland, Ohio. JDD, Inc. is owned and operated by James Vaughn, who started as a painter at NASA Glenn and has been working here for the past seventeen years. This summer I worked under Devan Anderson, who is the safety manager for JDD Inc. in the Logistics and Technical Information Division at Glenn Research Center. The LTID provides all transportation, secretarial, and security needs, and contract management of these various services for the center. As a safety manager, my mentor provides Occupational Safety and Health Administration (OSHA) compliance for all JDD, Inc. employees and handles all other issues (Environmental Protection Agency issues, workers compensation, safety and health training) relating to job safety. My summer assignment was not considered "groundbreaking research" like many other summer interns have done in the past, but it is just as important and beneficial to JDD, Inc. I initially created a database using a Microsoft Excel program to classify and categorize data pertaining to numerous safety training certification courses instructed by our safety manager during the course of the fiscal year. This early portion of the database consisted of only data (training field index, employees who were present at these training courses and who was absent) from the training certification courses. Once I completed this phase of the database, I decided to expand the database and add as many dimensions to it as possible. Throughout the last seven weeks, I have been compiling more data from day to day operations and been adding the

  20. Update History of This Database - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Update History of This Database: 2014/05/07 The contact information is corrected; the features and manner of utilization of the database are corrected. 2014/02/04 Trypanosomes Database English archive site is opened. 2011/04/04 Trypanosomes Database ( http://www.tanpaku.org/tdb/ ) is opened.

  1. On Simplifying Features in OpenStreetMap database

    Science.gov (United States)

    Qian, Xinlin; Tao, Kunwang; Wang, Liang

    2015-04-01

    Currently the visualization of OpenStreetMap data uses a tile server which stores map tiles that have been rendered from vector data in advance. However, tiled maps lack functionality such as data editing and customized styling. To enable such advanced functionality, client-side processing and rendering of geospatial data is needed. Considering the voluminous size of the OpenStreetMap data, simply sending the results of region queries on the OSM database to the client is prohibitive. To make the OSM data retrieved from the database suitable for the client to receive and render, it must be filtered and simplified at the server side to limit its volume. We propose a database extension for the OSM database that makes it possible to simplify geospatial objects such as ways and relations during data queries. Several auxiliary tables and PL/pgSQL functions are presented so that geospatial features can be simplified by omitting unimportant vertices. There are five components in the database extension: computation of vertex weights by a polyline and polygon simplification algorithm; storage of vertex weights in auxiliary tables; filtering and selection of vertices using a specific threshold value during spatial queries; assembly of simplified geospatial objects from the filtered vertices; updating of vertex weights after geospatial objects are edited. The database extension is implemented on an OSM APIDB using PL/pgSQL. The experimental database contains a subset of the OSM database: geographic data of the United Kingdom, comprising about 100 million vertices and roughly occupying 100 GB of disk. JOSM is used to retrieve the data from the database through a revised data-access API and render the geospatial objects in real time. When serving simplified data to the client, the database allows the user to bound either the simplification error or the response time of each data query. Experimental results show the effectiveness and efficiency of the proposed methods in building a
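    The abstract does not name the simplification algorithm used to weight vertices. One common choice that fits the description (a per-vertex weight that can be precomputed, stored, and thresholded at query time) is Visvalingam-Whyatt effective-area weighting; the sketch below is a simplified one-pass variant of it, not the paper's actual PL/pgSQL code:

```python
def triangle_area(a, b, c):
    """Area of the triangle a-b-c via the shoelace formula."""
    return abs((b[0]-a[0])*(c[1]-a[1]) - (c[0]-a[0])*(b[1]-a[1])) / 2.0

def vertex_weights(points):
    """Visvalingam-Whyatt style weights: endpoints get infinite weight,
    interior vertices the area of the triangle with their neighbours.
    (The full algorithm recomputes areas iteratively; this is one pass.)"""
    w = [float("inf")] * len(points)
    for i in range(1, len(points) - 1):
        w[i] = triangle_area(points[i-1], points[i], points[i+1])
    return w

def simplify(points, min_weight):
    """Keep only vertices whose weight is at least min_weight,
    mimicking the threshold filter applied during spatial queries."""
    weights = vertex_weights(points)
    return [p for p, w in zip(points, weights) if w >= min_weight]

line = [(0, 0), (1, 0.05), (2, 0), (3, 2), (4, 0)]
simplified = simplify(line, min_weight=0.5)
```

    Precomputing and storing these weights in an auxiliary table is what lets the server filter vertices cheaply inside each query instead of simplifying on the fly.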

  2. What is a lexicographical database?

    DEFF Research Database (Denmark)

    Bergenholtz, Henning; Skovgård Nielsen, Jesper

    2013-01-01

    50 years ago, no lexicographer used a database in the work process. Today, almost all dictionary projects incorporate databases. In our opinion, the optimal lexicographical database should be planned in cooperation between a lexicographer and a database specialist in each specific lexicographic... project. Such cooperation will reach the highest level of success if the lexicographer has at least a basic knowledge of the topic presented in this paper: What is a database? This type of knowledge is also needed when the lexicographer describes an ongoing or a finished project. In this article, we... provide the description of this type of cooperation, using the most important theoretical terms relevant in the planning of a database. It will be made clear that a lexicographical database is like any other database. The only difference is that an optimal lexicographical database is constructed to fulfil...

  3. Open geochemical database

    Science.gov (United States)

    Zhilin, Denis; Ilyin, Vladimir; Bashev, Anton

    2010-05-01

    We regard "geochemical data" as data on chemical parameters of the environment, linked with the geographical position of the corresponding point. The rapid development of the global positioning system (GPS) and of measuring instruments allows fast collection of huge amounts of geochemical data. Presently they are published in scientific journals in text format, which hampers searching for information about particular places and meta-analysis of data collected by different researchers. Part of the information is never published. To make the data available and easy to find, it seems reasonable to elaborate an open database of geochemical information, accessible via the Internet. It also seems reasonable to link the data with maps or space images, for example, from the GoogleEarth service. For this purpose an open geochemical database is being elaborated (http://maps.sch192.ru). Any user, after registration, can upload geochemical data (position, type of parameter and value of the parameter) and edit them. Every user (including unregistered ones) can (a) extract the values of parameters fulfilling desired conditions and (b) see the points, linked to the GoogleEarth space image, colored according to the value of a selected parameter. Then he can treat the extracted values any way he likes. There are the following data types in the database: authors, points, seasons and parameters. An author is a person who publishes the data. Every author can declare his own profile. A point is characterized by its geographical position and the type of the object (i.e. river, lake etc). Values of parameters are linked to a point, an author and a season, when they were obtained. A user can choose a parameter to place on the GoogleEarth space image and a scale to color the points on the image according to the value of the parameter. Currently (December 2009) the database is under construction, but several functions (uploading data on pH and electrical conductivity and placing colored points onto the GoogleEarth space image) are
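    The extraction function described in point (a), returning parameter values that fulfil user-chosen conditions, can be sketched over the entities the abstract names (points, authors, seasons, parameters). The record layout and field names below are illustrative, not the database's actual schema:

```python
# Hypothetical records mirroring the database's entities: each value is
# tied to a point (lat, lon, object type), an author, a season and a
# parameter. All field names and sample values are invented.
records = [
    {"author": "ivanov", "season": "2009-autumn", "param": "pH",
     "value": 6.8, "lat": 55.7, "lon": 37.6, "objtype": "river"},
    {"author": "ivanov", "season": "2009-autumn", "param": "pH",
     "value": 8.1, "lat": 55.9, "lon": 37.5, "objtype": "lake"},
    {"author": "petrov", "season": "2009-autumn", "param": "EC",
     "value": 340.0, "lat": 55.8, "lon": 37.7, "objtype": "river"},
]

def extract(records, **conditions):
    """Return the values of records whose fields match all conditions."""
    return [r["value"] for r in records
            if all(r.get(k) == v for k, v in conditions.items())]

river_ph = extract(records, param="pH", objtype="river")
```

    Coloring points on a space image (point b) would then map each returned value onto a color scale before plotting.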

  4. CGMD: An integrated database of cancer genes and markers

    OpenAIRE

    Jangampalli Adi Pradeepkiran; Sri Bhashyam Sainath; Konidala Kramthi Kumar; Lokanada Balasubramanyam; Kodali Vidya Prabhakar; Matcha Bhaskar

    2015-01-01

    Integrating cancer genes and markers with experimental evidence might provide valuable information for the further investigation of crosstalk between tumor genes and markers in cancer biology. To achieve this objective, we developed a database known as the Cancer Gene Marker Database (CGMD), which integrates data on tumor genes and markers based on experimental evidence. The major goal of CGMD is to provide the following: 1) current systematic treatment approaches and recent advances in diffe...

  5. MICROCOMPUTER BASED DATABASE MANAGEMENT SYSTEMS IN SUPPORT OF OFFICE AUTOMATION

    OpenAIRE

    Egyhazy, Csaba J.

    1983-01-01

    The evolutionary advancements in microprocessor technology as it relates to database management systems (DBMSs) are discussed. Practice and experience with five commercially available database management systems are reported, based mostly on data gathered from a series of interviews focusing on comparison among systems. Several prototype systems specifically designed to meet the needs of office information systems are identified, their conceptual framework ascertained and capabilities de...

  6. Multilevel security for relational databases

    CERN Document Server

    Faragallah, Osama S; El-Samie, Fathi E Abd

    2014-01-01

    Concepts of Database Security: Database Concepts; Relational Database Security Concepts; Access Control in Relational Databases (Discretionary Access Control; Mandatory Access Control; Role-Based Access Control); Work Objectives; Book Organization. Basic Concept of Multilevel Database Security: Introduction; Multilevel Database Relations; Polyinstantiation (Invisible Polyinstantiation; Visible Polyinstantiation; Types of Polyinstantiation); Architectural Consideration
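    The mandatory access control covered in the book's outline is classically formulated as Bell-LaPadula "no read up / no write down" checks over ordered classification levels. The sketch below is an illustrative minimal version of those two rules, not code from the book:

```python
# Classification levels in ascending order of sensitivity (illustrative).
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top_secret": 3}

def can_read(subject_level, object_level):
    """Simple security property: a subject may read only objects
    classified at or below its own level (no read up)."""
    return LEVELS[subject_level] >= LEVELS[object_level]

def can_write(subject_level, object_level):
    """Star property: a subject may write only at or above its own
    level, so information never flows downward (no write down)."""
    return LEVELS[subject_level] <= LEVELS[object_level]

ok = can_read("secret", "confidential")
```

    Polyinstantiation arises when these rules force the database to keep multiple tuples with the same key at different classification levels, so that low-cleared users never learn that a higher-classified version exists.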

  7. An XCT image database system

    International Nuclear Information System (INIS)

    In this paper, an expansion of the X-ray CT (XCT) examination history database to an XCT image database is discussed. The XCT examination history database has been constructed and used for daily examination and investigation in our hospital. This database consists of alpha-numeric information (locations, diagnosis and so on) of more than 15,000 cases, and for some of them, we add tree-structured image data, which has the flexibility to accommodate various types of image data. This database system is written in the MUMPS database manipulation language. (author)

  8. Download - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Search and download; download via FTP.

  9. The magnet database system

    International Nuclear Information System (INIS)

    The Test Department of the Magnet Systems Division of the Superconducting Super Collider Laboratory (SSCL) is developing a central database of SSC magnet information that will be available to all magnet scientists at the SSCL or elsewhere, via network connections. The database contains information on the magnets' major components, configuration information (specifying which individual items were used in each cable, coil, and magnet), measurements made at major fabrication stages, and the test results on completed magnets. These data will facilitate the correlation of magnet performance with the properties of its constituents. Recent efforts have focused on the development of procedures for user-friendly access to the data, including displays in the format of the production "traveler" data sheets, standard summary reports, and a graphical interface for ad hoc queries and plots

  10. Database on aircraft accidents

    International Nuclear Information System (INIS)

    The Reactor Safety Subcommittee in the Nuclear Safety and Preservation Committee published the report 'The criteria on assessment of probability of aircraft crash into light water reactor facilities' as the standard method for evaluating the probability of an aircraft crash into nuclear reactor facilities in July 2002. In response to the report, the Japan Nuclear Energy Safety Organization has been collecting open information on aircraft accidents of commercial airplanes, self-defense force (SDF) airplanes and US force airplanes every year since 2003, sorting them out and developing a database of aircraft accidents covering the latest 20 years to evaluate the probability of an aircraft crash into nuclear reactor facilities. This year, the database was revised by adding aircraft accidents in 2010 to the existing database and deleting aircraft accidents in 1991 from it, resulting in the revised 2011 database covering the latest 20 years, from 1991 to 2010. Furthermore, flight information on commercial aircraft was also collected to develop a flight database for the same 20-year period, 1991 to 2010, to evaluate the probability of an aircraft crash into reactor facilities. The method for developing the database of aircraft accidents is based on the report 'The criteria on assessment of probability of aircraft crash into light water reactor facilities' described above. The 2011 revised database for 1991 to 2010 shows the following. The trend of the 2011 database changes little compared to last year's. (1) The data on commercial aircraft accidents is based on 'Aircraft accident investigation reports of Japan Transport Safety Board' of the Ministry of Land, Infrastructure, Transport and Tourism. 4 large fixed-wing aircraft accidents, 58 small fixed-wing aircraft accidents, 5 large bladed aircraft accidents and 114 small bladed aircraft accidents occurred. The relevant accidents for evaluating

  11. ARTI Refrigerant Database

    Energy Technology Data Exchange (ETDEWEB)

    Calm, J.M.

    1992-11-09

    The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air-conditioning and refrigeration equipment. The database identifies sources of specific information on R-32, R-123, R-124, R-125, R-134, R-134a, R-141b, R-142b, R-143a, R-152a, R-245ca, R-290 (propane), R-717 (ammonia), ethers, and others, as well as azeotropic and zeotropic blends of these fluids. It addresses lubricants including alkylbenzene, polyalkylene glycol, ester, and other synthetics as well as mineral oils. It also references documents on compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits. A computerized version is available that includes retrieval software.

  12. Physical database design for an object-oriented database system

    OpenAIRE

    Scholl, Marc H.

    1994-01-01

    Object-oriented database systems typically offer a variety of structuring capabilities to model complex objects. This flexibility, together with type (or class) hierarchies and computed "attributes" (methods), poses a high demand on the physical design of object-oriented databases. Similar to traditional databases, it is hardly ever true that the conceptual structure of the database is also a good, that is, efficient, internal one. Rather, data representing the conceptual objects may be stru...

  13. MMI Face Database

    OpenAIRE

    Maat, L.M.; Sondak, R.C.; Valstar, M.F.; Pantic, M.; Gaia, P.

    2005-01-01

    The automatic recognition of human facial expressions is an interesting research area in AI with a growing number of projects and researchers. In spite of repeated references to the need for a reference set of images that could provide a basis for benchmarking various techniques in automatic facial expression analysis, a readily accessible and complete enough database of face images does not exist yet. This lack represented our main incentive to develop a web-based, easily accessible, and eas...

  14. Formal aspects in databases

    International Nuclear Information System (INIS)

    From the beginning of the relational data models special attention has been paid to the theory of relations through the concepts of decomposition and dependency constraints. The initial goal of these works was devoted to the scheme design process. Most of the results are used in this area but serve as a basis for improvements of the model in several directions: incomplete information, universal relations, deductive databases, etc... (orig.)

  15. Teradata Database System Optimization

    OpenAIRE

    Krejčík, Jan

    2008-01-01

    The Teradata database system is specially designed for the data warehousing environment. This thesis explores the use of Teradata in this environment and describes its characteristics and potential areas for optimization. The theoretical part is intended to be user study material; it shows the main principles of Teradata system operation and describes factors significantly affecting system performance. The following sections are based on previously acquired information, which is used for analysis and ...

  16. Modeling Digital Video Database

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    The main purpose of the model is to present how the Unified Modeling Language (UML) can be used for modeling a digital video database system (VDBS). It demonstrates the modeling process that can be followed during the analysis phase of complex applications. In order to guarantee the continuity mapping of the models, the authors propose some suggestions to transform the use case diagrams into an object diagram, which is one of the main diagrams for the next development phases.

  17. SEDA (SEed DAtabase)

    Czech Academy of Sciences Publication Activity Database

    Šerá, Božena

    Praha: Botanická zahrada hl. m. Prahy, 2005 - (Sekerka, P.), pp. 64-65. ISBN 80-903697-0-7. [Introduction and plant genetic resources. Botanical gardens in the new millennium. Praha (CZ), 05.09.2005] R&D Projects: GA MŠk(CZ) 1P05OC049 Institutional research plan: CEZ:AV0Z60870520 Keywords: database; seed; diaspore; fruit. Subject RIV: EH - Ecology, Behaviour

  18. ARTI refrigerant database

    Energy Technology Data Exchange (ETDEWEB)

    Calm, J.M.

    1996-11-15

    The Refrigerant Database is an information system on alternative refrigerants, associated lubricants, and their use in air conditioning and refrigeration. It consolidates and facilitates access to property, compatibility, environmental, safety, application and other information. It provides corresponding information on older refrigerants, to assist manufacturers and those using alternative refrigerants, to make comparisons and determine differences. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern.

  19. ARTI refrigerant database

    Energy Technology Data Exchange (ETDEWEB)

    Calm, J.M.

    1996-07-01

    The Refrigerant Database is an information system on alternative refrigerants, associated lubricants, and their use in air conditioning and refrigeration. It consolidates and facilitates access to property, compatibility, environmental, safety, application and other information. It provides corresponding information on older refrigerants, to assist manufacturers and those using alternative refrigerants, to make comparisons and determine differences. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern.

  20. ARTI refrigerant database

    Energy Technology Data Exchange (ETDEWEB)

    Calm, J.M. [Calm (James M.), Great Falls, VA (United States)

    1999-01-01

    The Refrigerant Database is an information system on alternative refrigerants, associated lubricants, and their use in air conditioning and refrigeration. It consolidates and facilitates access to property, compatibility, environmental, safety, application and other information. It provides corresponding information on older refrigerants, to assist manufacturers and those using alternative refrigerants, to make comparisons and determine differences. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern.

  1. Real Time Baseball Database

    Science.gov (United States)

    Fukue, Yasuhiro

    The author describes the system outline, features and operations of the "Nikkan Sports Realtime Baseball Database", which was developed and operated by Nikkan Sports Shimbun, K. K. The system enables numerical data from professional baseball games to be input as the games proceed, with the data updated in real time. Besides serving as a supporting tool for preparing newspapers, it is also available to broadcasting media and to general users through NTT Dial Q2 and other services.

  2. Database usage and performance for the Fermilab Run II experiments

    Energy Technology Data Exchange (ETDEWEB)

    Bonham, D.; Box, D.; Gallas, E.; Guo, Y.; Jetton, R.; Kovich, S.; Kowalkowski, J.; Kumar, A.; Litvintsev, D.; Lueking, L.; Stanfield, N.; Trumbo, J.; Vittone-Wiersma, M.; White, S.P.; Wicklund, E.; Yasuda, T.; /Fermilab; Maksimovic, P.; /Johns Hopkins U.

    2004-12-01

    The Run II experiments at Fermilab, CDF and D0, have extensive database needs covering many areas of their online and offline operations. Delivering data to users and processing farms worldwide has presented major challenges to both experiments. The range of applications employing databases includes calibration (conditions), trigger information, run configuration, run quality, luminosity, data management, and others. Oracle is the primary database product being used for these applications at Fermilab, and some of its advanced features have been employed, such as table partitioning and replication. There is also experience with open source database products such as MySQL for secondary databases used, for example, in monitoring. Tools employed for monitoring the operation and diagnosing problems are also described.

  3. Research on methods of designing and building digital seabed database

    Institute of Scientific and Technical Information of China (English)

    Su Tianyun; Liu Baohua; Zhai Shikui; Liang Ruicai; Zheng Yanpeng; Fu Qiang

    2007-01-01

    With a review of recent developments in the digitalization and application of seabed data, this paper systematically proposes methods for integrating seabed data by analyzing its features, based on the ORACLE database management system and advanced techniques of spatial data management. We researched the storage structure of seabed data, a distributed-integrated database system, a standardized spatial database and a seabed metadata management system in order to effectively manage and use seabed information in practical applications. Finally, we applied the methods proposed in this paper to build the Bohai Sea engineering geology database, which stores engineering geology data and other seabed information from the Bohai Sea area. As a result, the Bohai Sea engineering geology database can effectively integrate a huge amount of distributed and complicated seabed data to meet the practical requirements of Bohai Sea engineering geology environment exploration and exploitation.

  4. LHCb Distributed Conditions Database

    CERN Document Server

    Clemencic, Marco

    2007-01-01

    The LHCb Conditions Database project provides the necessary tools to handle non-event time-varying data. The main users of conditions are reconstruction and analysis processes, which are running on the Grid. To allow efficient access to the data, we need to use a synchronized replica of the content of the database located at the same site as the event data file, i.e. the LHCb Tier1. The replica to be accessed is selected from information stored in the LFC (LCG File Catalog) and managed with the interface provided by the LCG-developed library CORAL. The plan to limit the submission of jobs to those sites where the required conditions are available will also be presented. LHCb applications have been using the Conditions Database framework on a production basis since March 2007. We have been able to collect statistics on the performance and effectiveness of both the LCG library COOL (the library providing conditions-handling functionality) and the distribution framework itself. Stress tests on the CNAF hosted replica o...

  5. NNDC database migration project

    International Nuclear Information System (INIS)

    The NNDC Database Migration was necessary to replace obsolete hardware and software, to become compatible with the industry standard in relational databases (mature software, a large base of supporting software for administration and dissemination, and replication and synchronization tools) and to improve user access in terms of interface and speed. The Relational Database Management System (RDBMS) consists of a Sybase Adaptive Server Enterprise (ASE), which is relatively easy to move between different RDB systems (e.g., MySQL, MS SQL-Server, or MS Access), the Structured Query Language (SQL) and administrative tools written in Java. Linux or UNIX platforms can be used. The existing ENSDF datasets are often very large and will need to be reworked. Both the CRP (adopted) and CRP (Budapest) datasets give elemental cross sections (not relative Iγ) in the RI field, so it is not immediately obvious which of the old values has been changed; but primary and secondary intensities are now available on the same scale, and the intensity normalization has been done for us. We will gain access to a large volume of data from Budapest, and some of those gamma-ray intensity and energy data will be superior to what we already have

  6. Human cancer databases (review).

    Science.gov (United States)

    Pavlopoulou, Athanasia; Spandidos, Demetrios A; Michalopoulos, Ioannis

    2015-01-01

    Cancer is one of the four major non‑communicable diseases (NCD), responsible for ~14.6% of all human deaths. Currently, there are >100 different known types of cancer and >500 genes involved in cancer. Ongoing research efforts have been focused on cancer etiology and therapy. As a result, there is an exponential growth of cancer‑associated data from diverse resources, such as scientific publications, genome‑wide association studies, gene expression experiments, gene‑gene or protein‑protein interaction data, enzymatic assays, epigenomics, immunomics and cytogenetics, stored in relevant repositories. These data are complex and heterogeneous, ranging from unprocessed, unstructured data in the form of raw sequences and polymorphisms to well‑annotated, structured data. Consequently, the storage, mining, retrieval and analysis of these data in an efficient and meaningful manner pose a major challenge to biomedical investigators. In the current review, we present the central, publicly accessible databases that contain data pertinent to cancer, the resources available for delivering and analyzing information from these databases, as well as databases dedicated to specific types of cancer. Examples for this wealth of cancer‑related information and bioinformatic tools have also been provided. PMID:25369839

  7. ARTI Refrigerant Database

    Energy Technology Data Exchange (ETDEWEB)

    Cain, J.M. (Calm (James M.), Great Falls, VA (United States))

    1993-04-30

    The Refrigerant Database consolidates and facilitates access to information to assist industry in developing equipment using alternative refrigerants. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern. The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air-conditioning and refrigeration equipment. The complete documents are not included. The database identifies sources of specific information on R-32, R-123, R-124, R-125, R-134, R-134a, R-141b, R-142b, R-143a, R-152a, R-245ca, R-290 (propane), R-717 (ammonia), ethers, and others as well as azeotropic and zeotropic blends of these fluids. It addresses lubricants including alkylbenzene, polyalkylene glycol, ester, and other synthetics as well as mineral oils. It also references documents addressing compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits. Incomplete citations or abstracts are provided for some documents to accelerate availability of the information and will be completed or replaced in future updates.

  8. ARTI refrigerant database

    Energy Technology Data Exchange (ETDEWEB)

    Calm, J.M. [Calm (James M.), Great Falls, VA (United States)

    1998-08-01

    The Refrigerant Database is an information system on alternative refrigerants, associated lubricants, and their use in air conditioning and refrigeration. It consolidates and facilitates access to property, compatibility, environmental, safety, application and other information. It provides corresponding information on older refrigerants, to assist manufacturers and those using alternative refrigerants, to make comparisons and determine differences. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern. The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air-conditioning and refrigeration equipment. The complete documents are not included, though some may be added at a later date. The database identifies sources of specific information on many refrigerants including propane, ammonia, water, carbon dioxide, propylene, ethers, and others as well as azeotropic and zeotropic blends of these fluids. It addresses lubricants including alkylbenzene, polyalkylene glycol, polyolester, and other synthetics as well as mineral oils. It also references documents addressing compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits. Incomplete citations or abstracts are provided for some documents. They are included to accelerate availability of the information and will be completed or replaced in future updates.

  9. ARTI refrigerant database

    Energy Technology Data Exchange (ETDEWEB)

    Calm, J.M.

    1997-02-01

    The Refrigerant Database is an information system on alternative refrigerants, associated lubricants, and their use in air conditioning and refrigeration. It consolidates and facilitates access to property, compatibility, environmental, safety, application and other information. It provides corresponding information on older refrigerants, to assist manufacturers and those using alternative refrigerants, to make comparisons and determine differences. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern. The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air-conditioning and refrigeration equipment. The complete documents are not included, though some may be added at a later date. The database identifies sources of specific information on various refrigerants. It addresses lubricants including alkylbenzene, polyalkylene glycol, polyolester, and other synthetics as well as mineral oils. It also references documents addressing compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits. Incomplete citations or abstracts are provided for some documents. They are included to accelerate availability of the information and will be completed or replaced in future updates.

  10. The Cambridge Structural Database.

    Science.gov (United States)

    Groom, Colin R; Bruno, Ian J; Lightfoot, Matthew P; Ward, Suzanna C

    2016-04-01

    The Cambridge Structural Database (CSD) contains a complete record of all published organic and metal-organic small-molecule crystal structures. The database has been in operation for over 50 years and continues to be the primary means of sharing structural chemistry data and knowledge across disciplines. As well as structures that are made public to support scientific articles, it includes many structures published directly as CSD Communications. All structures are processed both computationally and by expert structural chemistry editors prior to entering the database. A key component of this processing is the reliable association of the chemical identity of the structure studied with the experimental data. This important step helps ensure that data is widely discoverable and readily reusable. Content is further enriched through selective inclusion of additional experimental data. Entries are available to anyone through free CSD community web services. Linking services developed and maintained by the CCDC, combined with the use of standard identifiers, facilitate discovery from other resources. Data can also be accessed through CCDC and third party software applications and through an application programming interface. PMID:27048719

  11. Generalized Database Management System Support for Numeric Database Environments.

    Science.gov (United States)

    Dominick, Wayne D.; Weathers, Peggy G.

    1982-01-01

    This overview of potential for utilizing database management systems (DBMS) within numeric database environments highlights: (1) major features, functions, and characteristics of DBMS; (2) applicability to numeric database environment needs and user needs; (3) current applications of DBMS technology; and (4) research-oriented and…

  12. Perspective: Interactive material property databases through aggregation of literature data

    Science.gov (United States)

    Seshadri, Ram; Sparks, Taylor D.

    2016-05-01

    Searchable, interactive databases of material properties, particularly those relating to functional materials (magnetics, thermoelectrics, photovoltaics, etc.), are curiously missing from discussions of machine-learning and other data-driven methods for advancing new materials discovery. Here we discuss the manual aggregation of experimental data from the published literature for the creation of interactive databases that allow the original experimental data as well as additional metadata to be visualized in an interactive manner. The databases described involve materials for thermoelectric energy conversion, and for the electrodes of Li-ion batteries. The data can be subjected to machine learning, accelerating the discovery of new materials.

  13. Database for environmental monitoring at nuclear facilities

    International Nuclear Information System (INIS)

    To ensure that an assessment can be made of the impact of nuclear facilities on the local environment, a program of environmental monitoring must be established well in advance of a nuclear facility's operation. Enormous amounts of data must be stored and correlated, starting with location, meteorology, and sample-type characterization from water to different kinds of food, through radioactivity measurement and isotopic measurement (e.g., for C-14 determination, C-13 isotopic correction is a must). Data modelling is a well-known mechanism for describing data structures at a high level of abstraction. Such models are often used to automatically create database structures and to generate code structures used to access databases. This has the disadvantage of losing data constraints that might be specified in data models for data checking. An embodiment of the system of the present application includes a computer-readable memory for storing a definitional data table for defining variable symbols representing respective measurable physical phenomena. The definitional data table uniquely defines the variable symbols by relating them to respective data domains for the respective phenomena represented by the symbols. Well-established rules for how data should be stored and accessed are given in relational database theory. The theory comprises guidelines such as avoiding duplicated data using a technique called normalization, and how to identify the unique identifier for a database record. (author)
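
The normalization and unique-identifier guidelines cited in the abstract can be illustrated with a minimal sketch. The schema, table names and readings below are hypothetical, and SQLite stands in for whatever RDBMS such a monitoring program would actually use:

```python
import sqlite3

# Normalized sketch: sampling locations are stored once and referenced by
# measurements via a foreign key, instead of duplicating location details
# in every measurement row.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE location (
    location_id INTEGER PRIMARY KEY,   -- unique identifier for each record
    name TEXT NOT NULL,
    latitude REAL, longitude REAL
);
CREATE TABLE measurement (
    measurement_id INTEGER PRIMARY KEY,
    location_id INTEGER NOT NULL REFERENCES location(location_id),
    sample_type TEXT NOT NULL,         -- e.g. 'water', 'food'
    activity_bq REAL NOT NULL
);
""")
con.execute("INSERT INTO location VALUES (1, 'Site A', 44.4, 26.1)")
con.executemany("INSERT INTO measurement VALUES (?, ?, ?, ?)",
                [(1, 1, 'water', 0.12), (2, 1, 'food', 0.05)])
rows = con.execute("""
    SELECT l.name, m.sample_type, m.activity_bq
    FROM measurement m JOIN location l USING (location_id)
    ORDER BY m.measurement_id
""").fetchall()
print(rows)  # location details are recovered via the join, yet stored once
```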

  14. Hyperdatabase: A schema for browsing multiple databases

    International Nuclear Information System (INIS)

    In order to ensure effective information retrieval, a user may need to search multiple databases on multiple systems. Although front-end systems have been developed to assist the user in accessing different systems, they access one retrieval system at a time, and the search has to be repeated for each required database on each retrieval system. More importantly, the user interacts with the results as independent sessions. This paper models multiple bibliographic databases distributed over one or more retrieval systems as a hyperdatabase, i.e., a single virtual database. The hyperdatabase is viewed as a hypergraph in which each node represents a bibliographic item and the links among nodes represent relations among the items. In response to a query, bibliographic items are extracted from the hyperdatabase and linked together to form a transient hypergraph. This hypergraph is transient in the sense that it is ''created'' in response to a query and only ''exists'' for the duration of the query session. A hypertext interface permits the user to browse the transient hypergraph in a nonlinear manner. The technology to implement a system based on this model is available now, consisting of powerful workstations, distributed processing, high-speed communications, and CD-ROMs. As the technology advances and costs decrease, such systems should become generally available. (author). 13 refs, 5 figs
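
The transient-hypergraph idea lends itself to a small sketch. The sample records and the choice of a shared author as the linking relation are illustrative assumptions, not the paper's actual implementation:

```python
from collections import defaultdict

# Hypothetical query results drawn from two different retrieval systems;
# each item becomes a node of the transient hypergraph.
results = [
    {"id": "db1:17", "title": "Relational theory", "author": "Codd"},
    {"id": "db2:42", "title": "Normal forms", "author": "Codd"},
    {"id": "db2:99", "title": "Hypertext browsing", "author": "Nelson"},
]

def build_transient_hypergraph(items):
    """Group item IDs into hyperedges keyed by a shared attribute (author).

    The structure is 'transient': it is built from one query's results and
    discarded when the session ends.
    """
    edges = defaultdict(set)
    for item in items:
        edges[item["author"]].add(item["id"])
    # keep only hyperedges that actually link two or more nodes
    return {author: ids for author, ids in edges.items() if len(ids) > 1}

graph = build_transient_hypergraph(results)
print(graph)  # {'Codd': {'db1:17', 'db2:42'}}
```

Items from different source databases end up in the same hyperedge, which is what lets a hypertext interface browse them as one virtual collection.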

  15. SOME ASPECTS REGARDING THE INTERNATIONAL DATABASES NOWADAYS

    Directory of Open Access Journals (Sweden)

    Emilian M. DOBRESCU

    2015-01-01

    Full Text Available A national database (NDB) or an international one (IDB), often also called a "data bank", is a means of storing information and data on an external storage device, with the possibility of easy extension and of quickly finding the information. Therefore, by IDB we understand not only a bibliometric or bibliographic index, which is a collection of references and normally represents the "soft" part, but also the respective IDB "hard" part, i.e. the support and storage technology. Usually, a database, a very comprehensive notion in computer science, is a bibliographic index compiled with a specific purpose, objectives and means. In reality, national and international databases are operated through management systems, usually electronic and informational, based on advanced manipulation technologies in the virtual space. Online encyclopedias can also be considered important international databases (IDB). WorldCat, for example, is a world catalogue that includes the identification data for books within circa 71,000 libraries in 112 countries, classified through the Online Computer Library Center (OCLC), with the participation of the libraries in the respective countries, especially the national libraries.

  16. Advanced Accelerator Concepts

    International Nuclear Information System (INIS)

    These conference proceedings represent the results of the Third Advanced Accelerator Concepts Workshop held in Port Jefferson, New York. The workshop was sponsored by the U.S. Department of Energy, the Office of Naval Research and Brookhaven National Laboratory. The purpose was to assess new techniques for the production of ultra-high-gradient acceleration and to address engineering issues in achieving this goal. There are eighty-one papers collected in the proceedings and all have been abstracted for the database

  17. Mobile Source Observation Database (MSOD)

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Mobile Source Observation Database (MSOD) is a relational database being developed by the Assessment and Standards Division (ASD) of the US Environmental...

  18. A Case for Database Filesystems

    Energy Technology Data Exchange (ETDEWEB)

    Adams, P A; Hax, J C

    2009-05-13

    Data-intensive science is offering new challenges and opportunities for Information Technology, and for traditional relational databases in particular. Database filesystems offer the potential to store Level Zero data and analyze Level 1 and Level 3 data within the same database system [2]. Scientific data is typically composed of both unstructured files and scalar data. Oracle SecureFiles is a new database filesystem feature in Oracle Database 11g that is specifically engineered to deliver high performance and scalability for storing unstructured or file data inside the Oracle database. SecureFiles presents the best of both the filesystem and the database worlds for unstructured content. Data stored inside SecureFiles can be queried or written at performance levels comparable to those of traditional filesystems while retaining the advantages of the Oracle database.
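
Oracle SecureFiles itself is proprietary, but the core idea of a database filesystem, keeping unstructured file bytes and queryable scalar metadata in one store, can be sketched with SQLite BLOBs. This is a conceptual illustration only; the paths and column names are made up:

```python
import sqlite3

# Concept sketch: raw file content and structured metadata live in the same
# database, so both can be queried, backed up and managed together.
con = sqlite3.connect(":memory:")
con.execute("""
CREATE TABLE datafile (
    path TEXT PRIMARY KEY,       -- filesystem-style name
    instrument TEXT NOT NULL,    -- scalar metadata, queryable with SQL
    content BLOB NOT NULL        -- raw, unstructured bytes
)
""")
raw = b"\x00\x01binary payload"
con.execute("INSERT INTO datafile VALUES (?, ?, ?)",
            ("/run42/det0.dat", "det0", raw))

# One SQL query returns both the metadata match and the file bytes.
path, content = con.execute(
    "SELECT path, content FROM datafile WHERE instrument = 'det0'"
).fetchone()
print(path)  # /run42/det0.dat
```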

  19. Shark Mark Recapture Database (MRDBS)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Shark Mark Recapture Database is a Cooperative Research Program database system used to keep multispecies mark-recapture information in a common format for...

  20. SmallSat Database

    Science.gov (United States)

    Petropulos, Dolores; Bittner, David; Murawski, Robert; Golden, Bert

    2015-01-01

    The SmallSat has unrealized potential in both private industry and the federal government. Currently over 70 companies, 50 universities and 17 governmental agencies are involved in SmallSat research and development. In 1994, the U.S. Army Missile and Defense mapped the moon using smallSat imagery. Since then, smart phones have introduced this imagery to the people of the world as diverse industries watched this trend. The deployment cost of smallSats is also greatly reduced compared to traditional satellites, because multiple units can be deployed in a single mission. Imaging payloads have become more sophisticated, smaller and lighter. In addition, the growth of small technology obtained from private industries has led to the more widespread use of smallSats. This includes greater revisit rates in imagery, significantly lower costs, the ability to update technology more frequently and the ability to decrease vulnerability to enemy attacks. The popularity of smallSats shows a changing mentality in this fast-paced world of tomorrow. What impact has this created on the NASA communication networks, now and in future years? In this project, we are developing the SmallSat Relational Database, which can support a simulation of smallSats within the NASA SCaN Compatibility Environment for Networks and Integrated Communications (SCENIC) Modeling and Simulation Lab. The NASA Space Communications and Networks (SCaN) Program can use this modeling to project required network support needs in the next 10 to 15 years. The SmallSat Relational Database could model smallSats just as the other SCaN databases model the more traditional larger satellites, with a few exceptions, one being that the SmallSat database is designed to be built-to-order. The SmallSat database holds various hardware configurations that can be used to model a smallSat. It will require significant effort to develop, as the research material can only be populated by hand to obtain the unique data

  1. Development of the Nuclear Ship Database. 1. Outline of the Nuclear Ship Experimental Database

    Energy Technology Data Exchange (ETDEWEB)

    Kyouya, Masahiko; Ochiai, Masa-aki [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Hashidate, Kouji

    1995-03-01

    Through the power-up tests and experimental voyages of the Nuclear Ship MUTSU, we obtained experimental data on the effects of ship motions, and of the load changes caused by ship operations, waves, winds, etc., on the behavior of the nuclear power plant. Moreover, we accumulated techniques, knowledge and other experience on nuclear ship development at each stage of the N.S. MUTSU research and development program, such as the design stage, the construction stage and the operation stage. These data, techniques and knowledge are the assembly of the experimental data and the experience gained through the design, construction and operation of the first nuclear ship in Japan. It is important to keep and organize these products of the N.S. MUTSU program in order to utilize them effectively in the research and development of advanced marine reactors, since there is at present no plan to construct a nuclear ship in Japan. We have been developing the Nuclear Ship Database System since 1991 for the purpose of effective utilization of the N.S. MUTSU products in design studies of advanced marine reactors. The part of the Nuclear Ship Database System covering the experimental data, called the Nuclear Ship Experimental Database, was completed and has been in use since 1993. This report describes the outline and the use of the Nuclear Ship Experimental Database. The remaining part of the database system, covering the documentary data and called the Nuclear Ship Documentary Database, is now under development. (author).

  2. A Selection of Data Structure for SMART Alarm System Database

    International Nuclear Information System (INIS)

    A design goal of the SMART Alarm System is to provide intelligent alarm information to the operator in the main control room. To achieve this, we apply advanced alarm-processing logic and manage the alarm data sets that this logic requires. The SMART Alarm System must analyze a large number of alarms every cycle to determine which alarms to raise. For this, performance optimization of the database is essential; in particular, high performance of the search function is required. In this paper, we propose the most suitable search method for the database by comparing several search methods
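
The abstract does not name the search methods that were compared; as one plausible illustration, a hash-based lookup beats a linear scan when a large alarm set must be checked every processing cycle (the alarm IDs below are invented):

```python
# Two simple lookup strategies for a per-cycle alarm membership check:
# a linear scan over a list versus a hash-based (set) lookup.
alarm_list = [f"ALM-{i:04d}" for i in range(10_000)]   # hypothetical IDs
alarm_set = set(alarm_list)                            # hash-based index

def linear_lookup(alarm_id):
    return alarm_id in alarm_list      # O(n): scans the list each query

def hashed_lookup(alarm_id):
    return alarm_id in alarm_set       # O(1) expected: one hash probe

# Both agree on the answers; only the cost per query differs, which is
# what matters when thousands of alarms are re-checked every cycle.
assert linear_lookup("ALM-9999") and hashed_lookup("ALM-9999")
assert not linear_lookup("ALM-XXXX") and not hashed_lookup("ALM-XXXX")
```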

  3. Database Systems - Present and Future

    OpenAIRE

    Ion LUNGU; Manole VELICANU; Iuliana BOTHA

    2009-01-01

    Database systems nowadays have an increasingly important role in the knowledge-based society, in which computers have penetrated all fields of activity and the Internet tends to develop worldwide. In the current informatics context, the development of applications with databases is the work of specialists. Using databases, reaching a database from various applications, and some related concepts have become accessible to all categories of IT users. This paper aims to summariz...

  4. Hydrogen Leak Detection Sensor Database

    Science.gov (United States)

    Baker, Barton D.

    2010-01-01

    This slide presentation reviews the characteristics of the Hydrogen Sensor database. The database is the result of NASA's continuing interest in and improvement of its ability to detect and assess gas leaks in space applications. The database specifics and a snapshot of an entry in the database are reviewed. Attempts were made to determine the applicability of each of the 65 sensors for ground and/or vehicle use.

  5. Techniques for multiple database integration

    OpenAIRE

    Whitaker, Barron D

    1997-01-01

    Approved for public release; distribution is unlimited. There are several graphical client/server application development tools which can be used to easily develop powerful relational database applications. However, they do not provide a direct means of performing queries which require relational joins across multiple database boundaries. This thesis studies ways to access multiple databases. Specifically, it examines how a 'cross-database join' can be performed. A case study of techniques us...
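
The kind of "cross-database join" the thesis examines can be demonstrated in SQLite, whose ATTACH DATABASE statement makes a second database visible in the same session. The schema and data here are made up for illustration:

```python
import sqlite3

# Primary database (schema 'main') with one table.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE ship (hull TEXT PRIMARY KEY, name TEXT)")
con.execute("INSERT INTO ship VALUES ('CVN-65', 'Enterprise')")

# Attach a second, independent database under the schema name 'fleet'.
con.execute("ATTACH DATABASE ':memory:' AS fleet")
con.execute("CREATE TABLE fleet.port (hull TEXT, home_port TEXT)")
con.execute("INSERT INTO fleet.port VALUES ('CVN-65', 'Norfolk')")

# A single SELECT joins across the database boundary; each table is
# qualified by its database (schema) name.
row = con.execute("""
    SELECT s.name, p.home_port
    FROM main.ship AS s
    JOIN fleet.port AS p ON s.hull = p.hull
""").fetchone()
print(row)  # ('Enterprise', 'Norfolk')
```

Tools without such a mechanism must instead query each database separately and merge the result sets in application code, which is the gap the thesis investigates.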

  6. Conceptual considerations for CBM databases

    International Nuclear Information System (INIS)

    We consider a concept of databases for the CBM experiment. For this purpose, an analysis of the databases for large experiments at the LHC at CERN has been performed. Special features of the various DBMS utilized in physics experiments, including relational and object-oriented DBMS as the most applicable ones for the tasks of these experiments, were analyzed. A set of databases for the CBM experiment, DBMS for their development, as well as use cases for the considered databases are suggested.

  7. Information technology of database integration

    OpenAIRE

    Черненко, Николай Владимирович

    2012-01-01

    The article considers a problem that has developed in the course of decentralized organization automation, and its possible solutions. To eliminate the defects, it was suggested to integrate the databases of heterogeneous information systems into a single database of the organization. A study of existing methodologies, approaches and practices of database integration took place. The article suggests a formal description of the information technology of database integration, which allow...

  8. DRAM BASED PARAMETER DATABASE OPTIMIZATION

    OpenAIRE

    Marcinkevicius, Tadas

    2012-01-01

    This thesis suggests an improved parameter database implementation for one of Ericsson's products. The parameter database is used during the initialization of the system as well as during later operation. The database size is constantly growing because the parameter database is intended to be used with different hardware configurations. When a new technology platform is released, multiple revisions with additional features and functionalities are later created, resulting in introduction of ...

  9. Content independence in multimedia databases

    OpenAIRE

    Vries, de, P.M.

    2001-01-01

    A database management system is a general-purpose software system that facilitates the processes of defining, constructing, and manipulating databases for various applications. This article investigates the role of data management in multimedia digital libraries, and its implications for the design of database management systems. The notions of content abstraction and content independence are introduced, which clearly expose the unique challenges (for database architecture) of applications in...

  10. Database Description - DMPD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available DMPD Database... Description General information of database Database name DMPD Alternative name Dynamic Macrophage Pathway CSML Databas...108-8639 Tel: +81-3-5449-5615 FAX: +83-3-5449-5442 E-mail: Database classification Metabolic and Signaling P...malia Taxonomy ID: 40674 Database description DMPD collects pathway models of transcriptional regulation and... signal transduction in CSML format for dynamic simulation based on the curation of descriptions about LPS a

  11. Knowledge Discovery in Databases and Libraries

    Directory of Open Access Journals (Sweden)

    Anil Kumar Dhiman

    2011-11-01

    Full Text Available The advancement of information and communication technology (ICT) has outpaced our ability to analyse, summarise, and extract knowledge from data. Database technology now provides the basic tools for the efficient storage and lookup of large data sets, but how to help human beings understand and analyse large bodies of data remains a difficult and unsolved problem. Intelligent tools for automated data mining and knowledge discovery are therefore needed to deal with such enormous volumes of data. As library and information centres are considered the backbone of knowledge organisation, knowledge discovery in databases (KDD) is also attracting the attention of library and information scientists. This paper highlights the basics of the KDD process and its importance in digital libraries.

  12. Web interfaces to relational databases

    Science.gov (United States)

    Carlisle, W. H.

    1996-01-01

    This report describes a project to extend the capabilities of a Virtual Research Center (VRC) for NASA's Advanced Concepts Office. The work was performed as part of NASA's 1995 Summer Faculty Fellowship program and involved the development of a prototype component of the VRC: a database system that provides data creation and access services within a room of the VRC. In support of VRC development, NASA has assembled a laboratory containing the variety of equipment expected to be used by scientists within the VRC. This laboratory contains the major hardware platforms (SUN, Intel, and Motorola processors) and their most common operating systems (UNIX, Windows NT, Windows for Workgroups, and Macintosh). The SPARC 20 runs SUN Solaris 2.4; an Intel Pentium runs Windows NT and is installed on a different network from the other machines in the laboratory; a Pentium PC runs Windows for Workgroups; two Intel 386 machines run Windows 3.1; and finally, a PowerMacintosh and a Macintosh IIsi run MacOS.

  13. Zebrafish Database: Customizable, Free, and Open-Source Solution for Facility Management.

    Science.gov (United States)

    Yakulov, Toma Antonov; Walz, Gerd

    2015-12-01

    Zebrafish Database is a web-based customizable database solution, which can be easily adapted to serve both single laboratories and facilities housing thousands of zebrafish lines. The database allows the users to keep track of details regarding the various genomic features, zebrafish lines, zebrafish batches, and their respective locations. Advanced search and reporting options are available. Unique features are the ability to upload files and images that are associated with the respective records and an integrated calendar component that supports multiple calendars and categories. Built on the basis of the Joomla content management system, the Zebrafish Database is easily extendable without the need for advanced programming skills. PMID:26421518

  14. The Astrobiology Habitable Environments Database (AHED)

    Science.gov (United States)

    Lafuente, B.; Stone, N.; Downs, R. T.; Blake, D. F.; Bristow, T.; Fonda, M.; Pires, A.

    2015-12-01

    The Astrobiology Habitable Environments Database (AHED) is a central, high-quality, long-term searchable repository for archiving and collaborative sharing of astrobiologically relevant data, including morphological, textural and contextual images, and chemical, biochemical, isotopic, sequencing, and mineralogical information. The aim of AHED is to foster long-term innovative research by supporting integration and analysis of diverse datasets in order to: 1) help understand and interpret planetary geology; 2) identify and characterize habitable environments and pre-biotic/biotic processes; 3) interpret returned data from present and past missions; 4) provide a citable database of NASA-funded published and unpublished data (after an agreed-upon embargo period). AHED uses the online open-source software "The Open Data Repository's Data Publisher" (ODR - http://www.opendatarepository.org) [1], which provides a user-friendly interface that research teams or individual scientists can use to design, populate and manage their own database according to the characteristics of their data and the need to share data with collaborators or the broader scientific community. This platform can also be used as a laboratory notebook. The database will have the capability to import and export in a variety of standard formats. Advanced graphics will be implemented, including 3D graphing, multi-axis graphs, error bars, and similar scientific data functions, together with advanced online tools for data analysis (e.g., the statistical package R). A permissions system will be put in place so that as data are being actively collected and interpreted, they will remain proprietary. A citation system will allow research data to be used and appropriately referenced by other researchers after the data are made public. This project is supported by the Science-Enabling Research Activity (SERA) and NASA NNX11AP82A, Mars Science Laboratory Investigations. [1] Nate et al. (2015) AGU, submitted.

  15. Databases for Data Mining

    OpenAIRE

    LANGOF, LADO

    2015-01-01

    This work is about looking for synergies between data mining tools and database management systems (DBMS). Imagine a situation where we need to solve an analytical problem using data that are too large to be processed solely inside main physical memory, yet too small to justify putting a data warehouse or distributed analytical system in place. The target environment is therefore a single personal computer used to solve data mining problems. We are looking for tools that allow us to...
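The scenario this abstract describes, data too large for main memory in one piece but small enough for a single machine, is typically handled by streaming rows from a DBMS cursor in bounded chunks and keeping only running aggregates in memory. A minimal sketch using the standard-library sqlite3 module (the table name and sizes are invented for illustration):

```python
import sqlite3

# In-memory stand-in for a dataset that would not fit in RAM at once.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE measurements (value REAL)")
conn.executemany("INSERT INTO measurements VALUES (?)",
                 [(float(i),) for i in range(10_000)])

# Stream rows in fixed-size chunks; memory use is bounded by the
# chunk size, not by the size of the table.
cur = conn.execute("SELECT value FROM measurements")
count, total = 0, 0.0
while True:
    chunk = cur.fetchmany(1024)
    if not chunk:
        break
    count += len(chunk)
    total += sum(v for (v,) in chunk)

print(count, total / count)  # 10000 4999.5
```

The same pattern generalizes to any aggregate (or model update) that can be computed incrementally over row batches.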

  16. EMU Lessons Learned Database

    Science.gov (United States)

    Matthews, Kevin M., Jr.; Crocker, Lori; Cupples, J. Scott

    2011-01-01

    As manned space exploration takes on the task of traveling beyond low Earth orbit, many problems arise that must be solved in order to make the journey possible. One major task is protecting humans from the harsh space environment. The current method of protecting astronauts during Extravehicular Activity (EVA) is the specially designed Extravehicular Mobility Unit (EMU). As more rigorous EVA conditions must be endured at new destinations, the suit will need to be tailored and improved to accommodate the astronaut. The objective behind the EMU Lessons Learned Database (LLD) is to create a tool that will assist in the development of next-generation EMUs, along with maintenance and improvement of the current EMU, by compiling data from Failure Investigation and Analysis Reports (FIARs), which contain information on past suit failures. FIARs use a system of codes that give more information on aspects of a failure, but anyone unfamiliar with the EMU will be unable to decipher the information. A goal of the EMU LLD is not only to compile the information, but to present it in a user-friendly, organized, searchable database accessible to users at all levels of familiarity with the EMU, newcomers and veterans alike. The EMU LLD originally started as an Excel database, which allowed easy navigation and analysis of the data through pivot charts. Creating an entry requires access to the Problem Reporting And Corrective Action database (PRACA), which contains the original FIAR data for all hardware. FIAR data are then transferred to, defined, and formatted in the LLD. Work is being done to create a web-based version of the LLD in order to increase accessibility across all of Johnson Space Center (JSC), which includes converting entries from Excel to HTML format.
FIARs related to the EMU have been completed in the Excel version, and now focus has shifted to expanding FIAR data in the LLD to include EVA tools and support hardware such as

  17. Usability in Scientific Databases

    Directory of Open Access Journals (Sweden)

    Ana-Maria Suduc

    2012-07-01

    Full Text Available Usability, most often defined as the ease of use and acceptability of a system, affects the users' performance and their job satisfaction when working with a machine. Therefore, usability is a very important aspect which must be considered in the process of system development. The paper presents numerical data related to the history of scientific research on the usability of information systems, as reflected in the information provided by three important scientific databases, Science Direct, ACM Digital Library and IEEE Xplore Digital Library, for different queries related to this field.

  18. Nuclear database management systems

    International Nuclear Information System (INIS)

    The authors are developing software tools for accessing and visualizing nuclear data. MacNuclide was the first software application produced by their group. This application incorporates novel database management and visualization tools into an intuitive interface. The nuclide chart is used to access properties and to display results of searches. Selecting a nuclide in the chart displays a level scheme with tables of basic, radioactive decay, and other properties. All level schemes are interactive, allowing the user to modify the display, move between nuclides, and display entire daughter decay chains

  19. Social Capital Database

    DEFF Research Database (Denmark)

    Paldam, Martin; Svendsen, Gert Tinggaard

    2005-01-01

      This report has two purposes: The first purpose is to present our 4-page questionnaire, which measures social capital. It is close to the main definitions of social capital and contains the most successful measures from the literature. It is also easy to apply, as discussed. The second purpose is to present the social capital database we have collected for 21 countries using the questionnaire. We do this by comparing the level of social capital in the countries covered. That is, the report compares the marginals from the 21 surveys.

  20. Dansk kolorektal Cancer Database

    DEFF Research Database (Denmark)

    Harling, Henrik; Nickelsen, Thomas

    2005-01-01

    The Danish Colorectal Cancer Database was established in 1994 with the purpose of monitoring whether diagnostic and surgical principles specified in the evidence-based national guidelines of good clinical practice were followed. Twelve clinical indicators have been listed by the Danish Colorectal Cancer Group, and the performance of each hospital surgical department with respect to these indicators is reported annually. In addition, the register contains a large collection of data that provide valuable information on the influence of comorbidity and lifestyle factors on disease outcome...

  1. Caching in Multidimensional Databases

    OpenAIRE

    Szépkúti, István

    2011-01-01

    One utilisation of multidimensional databases is the field of On-line Analytical Processing (OLAP). The applications in this area are designed to make the analysis of shared multidimensional information fast [9]. On one hand, speed can be achieved by specially devised data structures and algorithms. On the other hand, the analytical process is cyclic. In other words, the user of the OLAP application runs his or her queries one after the other. The output of the last query may be there (at lea...

  2. Secrets of the Oracle Database

    CERN Document Server

    Debes, Norbert

    2009-01-01

    Secrets of the Oracle Database is the definitive guide to undocumented and partially documented features of the Oracle database server. Covering useful but little-known features from Oracle9i Database through Oracle Database 11g, this book will improve your efficiency as an Oracle database administrator or developer. Norbert Debes shines the light of day on features that help you master more difficult administrative, tuning, and troubleshooting tasks than you ever thought possible. Finally, in one place, you have at your fingertips knowledge that previously had to be acquired through years of

  3. Federated Spatial Databases and Interoperability

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    It is a period of information explosion. Especially for spatial information science, information can be acquired in many ways, such as from man-made satellites, aeroplanes, lasers, digital photogrammetry and so on. Spatial data sources are usually distributed and heterogeneous. A federated database is the best solution for the sharing and interoperation of spatial databases. In this paper, the concepts of federated databases and interoperability are introduced. Three heterogeneous kinds of spatial data (vector, image and DEM) are used to create an integrated database. A data model of federated spatial databases is given.

  4. Large Science Databases – Are Cloud Services Ready for Them?

    Directory of Open Access Journals (Sweden)

    Ani Thakar

    2011-01-01

    Full Text Available We report on attempts to put an astronomical database, the Sloan Digital Sky Survey science archive, in the cloud. We find that it is very frustrating to impossible at this time to migrate a complex SQL Server database into current cloud service offerings such as Amazon (EC2) and Microsoft (SQL Azure). Certainly it is impossible to migrate a large database in excess of a TB, but even with (much) smaller databases, the limitations of cloud services make it very difficult to migrate the data to the cloud without making changes to the schema and settings that would degrade performance and/or make the data unusable. Preliminary performance comparisons show a large performance discrepancy with the Amazon cloud version of the SDSS database. These difficulties suggest that much work and coordination needs to occur between cloud service providers and their potential clients before science databases, not just large ones but even smaller databases that make extensive use of advanced database features for performance and usability, can successfully and effectively be deployed in the cloud. We describe a powerful new computational instrument that we are developing in the interim, the Data-Scope, that will enable fast and efficient analysis of the largest (petabyte scale) scientific datasets.

  5. IPD: the Immuno Polymorphism Database.

    Science.gov (United States)

    Robinson, James; Marsh, Steven G E

    2007-01-01

    The Immuno Polymorphism Database (IPD) (http://www.ebi.ac.uk/ipd/) is a set of specialist databases related to the study of polymorphic genes in the immune system. IPD currently consists of four databases: IPD-KIR, which contains the allelic sequences of killer-cell immunoglobulin-like receptors (KIRs); IPD-MHC, a database of sequences of the major histocompatibility complex (MHC) of different species; IPD-HPA, alloantigens expressed only on platelets; and IPD-ESTDAB, which provides access to the European Searchable Tumour Cell Line Database, a cell bank of immunologically characterized melanoma cell lines. The IPD project works with specialist groups or nomenclature committees who provide and curate individual sections before they are submitted to IPD for online publication. The IPD project stores all the data in a set of related databases. Those sections with similar data, such as IPD-KIR and IPD-MHC, share the same database structure. PMID:18449992

  6. Moving Observer Support for Databases

    DEFF Research Database (Denmark)

    Bukauskas, Linas

    Interactive visual data explorations impose rigid requirements on database and visualization systems. Systems that visualize huge amounts of data tend to request large amounts of memory resources and heavily use the CPU to process and visualize data. Current systems employ a loosely coupled architecture to exchange data between database and visualization. Thus, the interaction of the visualizer and the database is kept to the minimum, which most often leads to superfluous data being passed from database to visualizer. This Ph.D. thesis presents a novel tight coupling of database and visualization systems. The thesis describes other techniques that extend the functionality of an observer-aware database to support the extraction of the N most visible objects. This functionality is particularly useful if the number of newly visible objects is still too large.

  7. SHORT SURVEY ON GRAPHICAL DATABASE

    Directory of Open Access Journals (Sweden)

    Harsha R Vyavahare

    2015-08-01

    Full Text Available This paper explores the features of graph databases and graph data models. Interest in graph models and datasets has increased markedly in recent decades, and graph databases offer a number of advantages over relational databases. The paper briefly reviews graph and hypergraph concepts from mathematics so that the existing difficulties in implementing graph models can be understood. The past few decades have seen hundreds of research contributions on graph databases in the database systems field; nevertheless, research on general-purpose database management and mining systems that suit a variety of applications is still very active. The review covers the application of graph-model techniques in databases, within the framework of graph-based approaches, with the aim of comparing graph-oriented and tabular (relational) databases.
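The advantage usually claimed for graph databases, namely that relationships are stored natively and traversed by direct lookup rather than reconstructed through joins over link tables, can be illustrated with a toy sketch. The class and method names below are invented for illustration and are not any particular product's API:

```python
from collections import defaultdict

class TinyGraphDB:
    """Minimal adjacency-list store: following a labeled edge is a
    direct dictionary lookup, not a join over a separate link table."""

    def __init__(self):
        self.edges = defaultdict(set)  # (source, label) -> set of targets

    def add_edge(self, src, label, dst):
        self.edges[(src, label)].add(dst)

    def neighbors(self, node, label):
        return self.edges[(node, label)]

    def friends_of_friends(self, node):
        # Two-hop traversal: in a relational schema this would be a
        # self-join of the edge table; here it is two lookups.
        result = set()
        for friend in self.neighbors(node, "knows"):
            result |= self.neighbors(friend, "knows")
        return result - {node}

g = TinyGraphDB()
g.add_edge("ann", "knows", "bob")
g.add_edge("bob", "knows", "cid")
print(g.friends_of_friends("ann"))  # {'cid'}
```

Deeper traversals (three hops, variable-length paths) extend the same pattern, which is where the gap to join-based query plans widens.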

  8. Caching in Multidimensional Databases

    CERN Document Server

    Szépkúti, István

    2011-01-01

    One utilisation of multidimensional databases is the field of On-line Analytical Processing (OLAP). The applications in this area are designed to make the analysis of shared multidimensional information fast [9]. On one hand, speed can be achieved by specially devised data structures and algorithms. On the other hand, the analytical process is cyclic. In other words, the user of the OLAP application runs his or her queries one after the other. The output of the last query may be there (at least partly) in one of the previous results. Therefore caching also plays an important role in the operation of these systems. However, caching itself may not be enough to ensure acceptable performance. Size does matter: The more memory is available, the more we gain by loading and keeping information in there. Oftentimes, the cache size is fixed. This limits the performance of the multidimensional database, as well, unless we compress the data in order to move a greater proportion of them into the memory. Caching combined ...
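The abstract's closing point, that compressing cached data lets a fixed memory budget hold a greater proportion of results, can be sketched in a few lines. The class below is an illustrative stand-in (not the paper's implementation): an LRU query-result cache that stores values zlib-compressed under a byte budget.

```python
import zlib
from collections import OrderedDict

class CompressedQueryCache:
    """LRU cache that stores query results zlib-compressed, so more
    results fit within a fixed memory budget (illustrative sketch)."""

    def __init__(self, max_bytes=4096):
        self.max_bytes = max_bytes
        self.used = 0
        self._store = OrderedDict()  # query text -> compressed result

    def put(self, query, result_text):
        blob = zlib.compress(result_text.encode("utf-8"))
        if query in self._store:
            self.used -= len(self._store.pop(query))
        self._store[query] = blob
        self.used += len(blob)
        # Evict least-recently-used entries until within budget
        # (always keep at least the entry just inserted).
        while self.used > self.max_bytes and len(self._store) > 1:
            _, old = self._store.popitem(last=False)
            self.used -= len(old)

    def get(self, query):
        blob = self._store.get(query)
        if blob is None:
            return None
        self._store.move_to_end(query)  # mark as recently used
        return zlib.decompress(blob).decode("utf-8")

cache = CompressedQueryCache(max_bytes=10_000)
cache.put("SELECT region, SUM(sales) ...", "north,100\nsouth,200\n" * 50)
print(cache.get("SELECT region, SUM(sales) ...") is not None)  # True
```

The trade-off is exactly the one the abstract names: decompression costs CPU on every hit, but repetitive OLAP result sets compress well, so far more of the analyst's query cycle stays in memory.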

  9. Asbestos Exposure Assessment Database

    Science.gov (United States)

    Arcot, Divya K.

    2010-01-01

    Exposure to particular hazardous materials in a work environment is dangerous to the employees who work directly with or around the materials as well as those who come in contact with them indirectly. In order to maintain a national standard for safe working environments and protect worker health, the Occupational Safety and Health Administration (OSHA) has set forth numerous precautionary regulations. NASA has been proactive in adhering to these regulations by implementing standards which are often stricter than regulation limits and administering frequent health risk assessments. The primary objective of this project is to create the infrastructure for an Asbestos Exposure Assessment Database specific to NASA Johnson Space Center (JSC) which will compile all of the exposure assessment data into a well-organized, navigable format. The data includes Sample Types, Sample Durations, Crafts of those from whom samples were collected, Job Performance Requirements (JPR) numbers, Phase Contrast Microscopy (PCM) and Transmission Electron Microscopy (TEM) results and qualifiers, Personal Protective Equipment (PPE), and names of industrial hygienists who performed the monitoring. This database will allow NASA to provide OSHA with specific information demonstrating that JSC's work procedures are protective enough to minimize the risk of future disease from the exposures. The data has been collected by the NASA contractors Computer Sciences Corporation (CSC) and Wyle Laboratories. The personal exposure samples were collected from devices worn by laborers working at JSC and by building occupants located in asbestos-containing buildings.

  10. Certifying the interoperability of RDF database systems

    OpenAIRE

    Rafes, Karima; Nauroy, Julien; Germain, Cécile

    2015-01-01

    In March 2013, the W3C recommended SPARQL 1.1 to retrieve and manipulate decentralized RDF data. Real-world usage requires the advanced features of the SPARQL 1.1 recommendations. As these are not consistently implemented, we propose a test framework named TFT (Tests for Triple stores) to test the interoperability of the SPARQL endpoints of RDF database systems. This framework can execute the W3C's SPARQL 1.1 test suite and also its own interoperability tests. To help the...
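As an illustration of the kind of check such an interoperability suite performs (this is a sketch, not TFT's actual code), two endpoints' answers to the same SPARQL SELECT query can be compared as multisets of variable-to-value bindings, since the SPARQL specification does not fix the order of solutions:

```python
from collections import Counter

def same_solutions(expected, actual):
    """Compare two SPARQL SELECT result sets order-insensitively.
    Each result set is a list of dicts mapping variable -> value;
    duplicate solutions are significant, so use a multiset."""
    canon = lambda rows: Counter(frozenset(r.items()) for r in rows)
    return canon(expected) == canon(actual)

# Hypothetical answers from two endpoints to the same query:
endpoint_a = [{"s": "ex:alice", "age": "30"}, {"s": "ex:bob", "age": "25"}]
endpoint_b = [{"s": "ex:bob", "age": "25"}, {"s": "ex:alice", "age": "30"}]
print(same_solutions(endpoint_a, endpoint_b))  # True: same bindings, different order
```

A real harness would additionally normalize literal datatypes and blank-node labels before comparing, which is where most cross-implementation differences show up.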

  11. A database of worldwide glacier thickness observations

    OpenAIRE

    Gärtner-Roer, I.; Naegeli, K.; M. Huss; Knecht, T.; Machguth, Horst; Zemp, M.

    2014-01-01

    One of the grand challenges in glacier research is to assess the total ice volume and its global distribution. Over the past few decades the compilation of a world glacier inventory has been well-advanced both in institutional set-up and in spatial coverage. The inventory is restricted to glacier surface observations. However, although thickness has been observed on many glaciers and ice caps around the globe, it has not yet been published in the shape of a readily available database. Here, w...

  12. Model organism databases: essential resources that need the support of both funders and users.

    Science.gov (United States)

    Oliver, Stephen G; Lock, Antonia; Harris, Midori A; Nurse, Paul; Wood, Valerie

    2016-01-01

    Modern biomedical research depends critically on access to databases that house and disseminate genetic, genomic, molecular, and cell biological knowledge. Even as the explosion of available genome sequences and associated genome-scale data continues apace, the sustainability of professionally maintained biological databases is under threat due to policy changes by major funding agencies. Here, we focus on model organism databases to demonstrate the myriad ways in which biological databases not only act as repositories but actively facilitate advances in research. We present data that show that reducing financial support to model organism databases could prove to be not just scientifically, but also economically, unsound. PMID:27334346

  13. Automatic pattern localization across layout database and photolithography mask

    Science.gov (United States)

    Morey, Philippe; Brault, Frederic; Beisser, Eric; Ache, Oliver; Röth, Klaus-Dieter

    2016-03-01

    Advanced process photolithography masks require more and more controls for registration versus design and critical dimension uniformity (CDU). The measurement points should be distributed over the whole mask and may be denser in areas critical to wafer overlay requirements. This means that some, if not many, of these controls should be made inside the customer die and may use non-dedicated patterns. It is then mandatory to access the original layout database to select patterns for the metrology process. Finding hundreds of relevant patterns in a database containing billions of polygons may be possible, but in addition, the complete metrology job must be created quickly and reliably. Combining, on one hand, software expertise in mask database processing and, on the other hand, advanced skills in control and registration equipment, we have developed a Mask Dataprep Station able to select an appropriate number of measurement targets and their positions in a huge database and automatically create measurement jobs on the corresponding areas on the mask for the registration metrology system. In addition, the required design clips are generated from the database in order to perform the rendering procedure on the metrology system. This new methodology has been validated on a real production line for the most advanced processes. This paper presents the main challenges that we have faced, as well as some results on the global performance.

  14. National Geochronological Database

    Science.gov (United States)

    Revised by Sloan, Jan; Henry, Christopher D.; Hopkins, Melanie; Ludington, Steve; Original database by Zartman, Robert E.; Bush, Charles A.; Abston, Carl

    2003-01-01

    The National Geochronological Data Base (NGDB) was established by the United States Geological Survey (USGS) to collect and organize published isotopic (also known as radiometric) ages of rocks in the United States. The NGDB (originally known as the Radioactive Age Data Base, RADB) was started in 1974. A committee appointed by the Director of the USGS was given the mission to investigate the feasibility of compiling the published radiometric ages for the United States into a computerized data bank for ready access by the user community. A successful pilot program, which was conducted in 1975 and 1976 for the State of Wyoming, led to a decision to proceed with the compilation of the entire United States. For each dated rock sample reported in published literature, a record containing information on sample location, rock description, analytical data, age, interpretation, and literature citation was constructed and included in the NGDB. The NGDB was originally constructed and maintained on a mainframe computer, and later converted to a Helix Express relational database maintained on an Apple Macintosh desktop computer. The NGDB and a program to search the data files were published and distributed on Compact Disc-Read Only Memory (CD-ROM) in standard ISO 9660 format as USGS Digital Data Series DDS-14 (Zartman and others, 1995). As of May 1994, the NGDB consisted of more than 18,000 records containing over 30,000 individual ages, which is believed to represent approximately one-half the number of ages published for the United States through 1991. Because the organizational unit responsible for maintaining the database was abolished in 1996, and because we wanted to provide the data in more usable formats, we have reformatted the data, checked and edited the information in some records, and provided this online version of the NGDB. This report describes the changes made to the data and formats, and provides instructions for the use of the database in geographic

  15. Measuring Database Objects Relatedness in Peer-to-Peer Databases

    OpenAIRE

    M. Basel Almourad

    2013-01-01

    Peer-to-peer database management systems have become an important topic in the last few years. They build on p2p technology to exploit the power of available distributed database management system technologies. Identifying relationships between the schema objects of different peers is one of the main activities, so that semantically related peer database management systems can become acquainted and move closer together in the overlay. In this paper we present our approach, which measures the similarity of peer schema obj...

  16. The SIMBAD astronomical database

    CERN Document Server

    Wenger, M; Egret, D; Dubois, P; Bonnarel, F; Borde, S; Genova, F; Jasniewicz, G; Laloe, S; Lesteven, S; Monier, R; Wenger, Marc; Ochsenbein, Francois; Egret, Daniel; Dubois, Pascal; Bonnarel, Francois; Borde, Suzanne; Genova, Francoise; Jasniewicz, Gerard; Laloe, Suzanne; Lesteven, Soizick; Monier, Richard

    2000-01-01

    Simbad is the reference database for identification and bibliography of astronomical objects. It contains identifications, `basic data', bibliography, and selected observational measurements for several million astronomical objects. Simbad is developed and maintained by CDS, Strasbourg. Building the database contents is achieved with the help of several contributing institutes. Scanning the bibliography is the result of the collaboration of CDS with bibliographers in Observatoire de Paris (DASGAL), Institut d'Astrophysique de Paris, and Observatoire de Bordeaux. When selecting catalogues and tables for inclusion, priority is given to optimal multi-wavelength coverage of the database, and to support of research developments linked to large projects. In parallel, the systematic scanning of the bibliography reflects the diversity and general trends of astronomical research. A WWW interface to Simbad is available at: http://simbad.u-strasbg.fr/Simbad

  17. Ageing Management Program Database

    International Nuclear Information System (INIS)

    The aspects of plant ageing management (AM) have gained increasing attention over the last ten years. Numerous technical studies have been performed to study the impact of ageing mechanisms on the safe and reliable operation of nuclear power plants. National research activities have been initiated or are in progress to provide the technical basis for decision-making processes. The long-term operation of nuclear power plants is influenced by economic considerations, the socio-economic environment including public acceptance, developments in research and the regulatory framework, and the availability of technical infrastructure to maintain and service the systems, structures and components, as well as qualified personnel. Besides national activities, there are a number of international activities, in particular under the umbrella of the IAEA, the OECD and the EU. The paper discusses the process, procedure and database developed for the Slovenian Nuclear Safety Administration (SNSA) surveillance of the ageing process of the Krško Nuclear Power Plant. (author)

  18. The CHIANTI atomic database

    Science.gov (United States)

    Young, P. R.; Dere, K. P.; Landi, E.; Del Zanna, G.; Mason, H. E.

    2016-04-01

    The freely available CHIANTI atomic database was first released in 1996 and has had a huge impact on the analysis and modeling of emissions from astrophysical plasmas. It contains data and software for modeling optically thin atom and positive ion emission from low density (≲1013 cm-3) plasmas from x-ray to infrared wavelengths. A key feature is that the data are assessed and regularly updated, with version 8 released in 2015. Atomic data for modeling the emissivities of 246 ions and neutrals are contained in CHIANTI, together with data for deriving the ionization fractions of all elements up to zinc. The different types of atomic data are summarized here and their formats discussed. Statistics on the impact of CHIANTI to the astrophysical community are given and examples of the diverse range of applications are presented.

  19. Database Programming Languages

    DEFF Research Database (Denmark)

    This volume contains the proceedings of the 11th International Symposium on Database Programming Languages (DBPL 2007), held in Vienna, Austria, on September 23-24, 2007. DBPL 2007 was one of 15 meetings co-located with VLDB (the International Conference on Very Large Data Bases). DBPL continues...... to present the very best work at the intersection of database and programming language research. The proceedings include a paper based on the invited talk by Wenfei Fan and the 16 contributed papers that were selected by at least three members of the program committee. In addition, the program committee sought...... for their assistance and sound counsel, and the organizers of VLDB 2007 for taking care of the local organization of DBPL.

  20. Database design for a kindergarten Pastelka

    OpenAIRE

    Grombíř, Tomáš

    2010-01-01

    This bachelor thesis deals with analysis, creation of database for a kindergarten and installation of the designed database into the database system MySQL. Functionality of the proposed database was verified through an application written in PHP.

  1. Higher Education Database Created by Family and Consumer Sciences Taskforce

    Science.gov (United States)

    Vincenti, Virginia B.; Stewart, Barbara L.

    2007-01-01

    The Taskforce for Higher Education Program Advancement (TFPA) represents five family and consumer sciences organizations, boards, and administrative groups. Its mission "to proactively and collaboratively strengthen FCS higher education programs" has yielded multiple initiatives including a new database. The TFPA Web Interface System (TFPA-WIS)…

  2. Application of graph database for analytical tasks

    OpenAIRE

    Günzl, Richard

    2014-01-01

    This diploma thesis is about graph databases, which belong to the category of database systems known as NoSQL databases, although graph databases go beyond the NoSQL label. Graph databases are useful in many cases thanks to their native storage of the interconnections between data, which brings advantageous properties in comparison with traditional relational database systems, especially in querying. The main goals of the thesis are: to describe the principles, properties and advantages of graph databases; to desi...

  3. Negative Database for Data Security

    CERN Document Server

    Patel, Anup; Eirinaki, Magdalini

    2011-01-01

    Data security is a major issue in any web-based application. There have been approaches to handle intruders in a system; however, these approaches are not fully trustworthy, and evidently data is not totally protected. Real-world databases hold information that needs to be securely stored. Generating a negative database can help solve this problem. A negative database can be defined as a database that contains a huge amount of counterfeit data alongside the real data. Intruders may be able to get access to such a database, but as they try to extract information, they will retrieve data sets that include both the actual and the negative data. In this paper we present our approach to implementing the concept of a negative database to help prevent data theft by malicious users while providing efficient data retrieval for all valid users.
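    The retrieval idea above can be sketched as a toy model (not the authors' implementation): real records carry a tag derived from a secret key, decoys carry random tags, and only key-holders can filter out the counterfeit rows. The tagging scheme, key and record names are all illustrative assumptions.

```python
import hashlib
import hmac
import secrets

SECRET_KEY = b"demo-key"  # hypothetical key shared with valid users

def tag(record: str, key: bytes = SECRET_KEY) -> str:
    """Authentication tag that lets valid users recognize real records."""
    return hmac.new(key, record.encode(), hashlib.sha256).hexdigest()[:16]

def build_negative_db(real_records, n_decoys=3):
    """Mix real records (valid tags) with counterfeit ones (random tags)."""
    db = [(r, tag(r)) for r in real_records]
    for _ in range(n_decoys * len(real_records)):
        fake = "user-" + secrets.token_hex(4)
        db.append((fake, secrets.token_hex(8)))  # random tag will not verify
    secrets.SystemRandom().shuffle(db)
    return db

def retrieve(db, key: bytes = SECRET_KEY):
    """Valid users filter out counterfeit rows by re-checking the tag."""
    return sorted(r for r, t in db if hmac.compare_digest(t, tag(r, key)))

db = build_negative_db(["alice", "bob"])
assert retrieve(db) == ["alice", "bob"]
```

    An intruder who dumps `db` sees real and counterfeit rows intermixed; without the key there is no cheap way to tell them apart.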

  4. Table manipulation in simplicial databases

    CERN Document Server

    Spivak, David I

    2010-01-01

    In \\cite{Spi}, we developed a category of databases in which the schema of a database is represented as a simplicial set. Each simplex corresponds to a table in the database. There, our main concern was to find a categorical formulation of databases; the simplicial nature of the schemas was to some degree unexpected and unexploited. In the present note, we show how to use this geometric formulation effectively on a computer. If we think of each simplex as a polygonal tile, we can imagine assembling custom databases by mixing and matching tiles. Queries on this database can be performed by drawing paths through the resulting tile formations, selecting records at the start-point of this path and retrieving corresponding records at its end-point.

  5. Biological Databases for Behavioral Neurobiology

    OpenAIRE

    Baker, Erich J

    2012-01-01

    Databases are, at their core, abstractions of data and their intentionally derived relationships. They serve as a central organizing metaphor and repository, supporting or augmenting nearly all bioinformatics. Behavioral domains provide a unique stage for contemporary databases, as research in this area spans diverse data types, locations, and data relationships. This chapter provides foundational information on the diversity and prevalence of databases, how data structures support the variou...

  6. HINDI LANGUAGE INTERFACE TO DATABASES

    OpenAIRE

    Himani Jain

    2011-01-01

    The need for a Hindi language interface has become increasingly acute as native speakers use databases for storing data. A large number of e-governance applications, such as agriculture, weather forecasting, railways and legacy matters, use databases. To use such database applications with ease, people who are more comfortable with the Hindi language require these applications to accept a simple sentence in Hindi and process it to generate an SQL query, which is further executed on the da...
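    The general idea can be sketched as a toy keyword mapping (not the system described in the paper): a few romanized Hindi tokens map to SQL fragments. The lexicon, glosses and table names below are illustrative assumptions only.

```python
# Illustrative lexicon: romanized Hindi keywords mapped to SQL parts.
VERBS = {"dikhao": "SELECT"}                    # "dikhao" ~ "show" (assumed gloss)
TABLES = {"kisan": "farmers", "gaadi": "trains"}  # hypothetical table names

def to_sql(sentence: str) -> str:
    """Map a simple Hindi sentence to a SELECT query via keyword lookup."""
    words = sentence.lower().split()
    if not any(w in VERBS for w in words):
        raise ValueError("no recognized query verb")
    table = next((TABLES[w] for w in words if w in TABLES), None)
    if table is None:
        raise ValueError("no recognized table")
    return f"SELECT * FROM {table};"

assert to_sql("sabhi kisan dikhao") == "SELECT * FROM farmers;"
```

    A real interface would need morphological analysis, attribute mapping and WHERE-clause construction; this sketch only shows the sentence-to-query pipeline shape.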

  7. Working with Documents in Databases

    OpenAIRE

    Marian DARDALA; Cristian IONITA

    2008-01-01

    Using electronic documents on a larger and larger scale within organizations and public institutions requires their storage and unitary exploitation by means of databases. The purpose of this article is to present the way of loading, exploiting and visualizing documents in a database, taking as an example the DBMS MS SQL Server. The modules for loading documents into the database and for their visualization are presented through code sequences written in C#...

  8. Database Preservation: The DBPreserve Approach

    OpenAIRE

    Arif Ur Rahman; Muhammad Muzammal; Gabriel David; Cristina Ribeiro

    2015-01-01

    In many institutions relational databases are used as a tool for managing information related to day to day activities. Institutions may be required to keep the information stored in relational databases accessible because of many reasons including legal requirements and institutional policies. However, the evolution in technology and change in users with the passage of time put the information stored in relational databases in danger. In the long term the information may become inaccessible ...

  9. Performance Introspection of Graph Databases

    OpenAIRE

    Macko, Peter; Margo, Daniel Wyatt; Seltzer, Margo I.

    2013-01-01

    The explosion of graph data in social and biological networks, recommendation systems, provenance databases, etc. makes graph storage and processing of paramount importance. We present a performance introspection framework for graph databases, PIG, which provides both a toolset and methodology for understanding graph database performance. PIG consists of a hierarchical collection of benchmarks that compose to produce performance models; the models provide a way to illuminate the strengths and...

  10. SHORT SURVEY ON GRAPHICAL DATABASE

    OpenAIRE

    Harsha R Vyavahare; Dr.P.P.Karde

    2015-01-01

    This paper explores the features of graph databases and data models. The popularity of working with graph models and datasets has increased in recent decades. Graph databases have a number of advantages over relational databases. This paper gives a short review of graph and hypergraph concepts from mathematics so that we can understand the existing difficulties in the implementation of the graph model. The past few decades saw hundreds of research contributions the...

  11. Cloud Database Management System (CDBMS)

    OpenAIRE

    Snehal B. Shende; Prajakta P. Chapke

    2015-01-01

    A cloud database management system is a distributed database that delivers computing as a service. It involves sharing web infrastructure for resources, software and information over a network. The cloud is used as a storage location, and the database can be accessed and computed from anywhere. The large number of web applications makes use of distributed storage solutions in order to scale up. It enables users to outsource resources and services to a third-party server. This paper includes the r...

  12. Inorganic Crystal Structure Database (ICSD)

    Science.gov (United States)

    SRD 84 FIZ/NIST Inorganic Crystal Structure Database (ICSD) (PC database for purchase)   The Inorganic Crystal Structure Database (ICSD) is produced cooperatively by the Fachinformationszentrum Karlsruhe (FIZ) and the National Institute of Standards and Technology (NIST). The ICSD is a comprehensive collection of crystal structure data of inorganic compounds containing more than 140,000 entries and covering the literature from 1915 to the present.

  13. Data-mining chess databases

    OpenAIRE

    Bleicher, Eiko; Haworth, Guy McCrossan; Van Der Heijden, Harold M.J.F.

    2010-01-01

    This is a report on the data-mining of two chess databases, the objective being to compare their sub-7-man content with perfect play as documented in Nalimov endgame tables. Van der Heijden’s ENDGAME STUDY DATABASE IV is a definitive collection of 76,132 studies in which White should have an essentially unique route to the stipulated goal. Chessbase’s BIG DATABASE 2010 holds some 4.5 million games. Insight gained into both database content and data-mining has led to some delightful surprises ...

  14. Working with Documents in Databases

    Directory of Open Access Journals (Sweden)

    Marian DARDALA

    2008-01-01

    Full Text Available Using electronic documents on a larger and larger scale within organizations and public institutions requires their storage and unitary exploitation by means of databases. The purpose of this article is to present the way of loading, exploiting and visualizing documents in a database, taking as an example the DBMS MS SQL Server. The modules for loading documents into the database and for their visualization are presented through code sequences written in C#. The interoperability between environments is achieved by means of the ADO.NET database access technology.

  15. The flux database concerted action

    International Nuclear Information System (INIS)

    This paper summarizes the background to the UIR action on the development of a flux database for radionuclide transfer in soil-plant systems. The action is discussed in terms of the objectives, the deliverables and the progress achieved so far by the flux database working group. The paper describes the background to the current initiative and outlines specific features of the database and supporting documentation. Particular emphasis is placed on the proforma used for data entry, on the database help file and on the approach adopted to indicate data quality. Refs. 3 (author)

  16. Geomagnetic Observatory Database February 2004

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The NOAA National Centers for Environmental Information (formerly National Geophysical Data Center) maintains an active database of worldwide geomagnetic...

  17. Network-based Database Course

    DEFF Research Database (Denmark)

    Nielsen, J.N.; Knudsen, Morten; Nielsen, Jens Frederik Dalsgaard;

    A course in database design and implementation has been designed, utilizing existing network facilities. The course is an elementary course for students of computer engineering. Its purpose is to give the students theoretical database knowledge as well as practical experience with design and...... implementation. A tutorial relational database and the students' self-designed databases are implemented on the UNIX system of Aalborg University, thus giving the teacher the possibility of live demonstrations in the lecture room, and the students the possibility of interactive learning in their working rooms...

  18. License - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available The license for this database is specified as the Creative Commons Attribution-Share Alike 2.1 Japan. Trypanosomes DB © Targeted Proteins Research Program, licensed under CC Attribution-Share Alike 2.1 Japan. The summary of the Creative Commons Attribution-Share Alike 2.1 Japan is found here.

  19. Similarity search and data mining techniques for advanced database systems.

    OpenAIRE

    Pryakhin, Alexey

    2006-01-01

    Modern automated methods for measurement, collection, and analysis of data in industry and science are providing more and more data with drastically increasing structure complexity. On the one hand, this growing complexity is justified by the need for a richer and more precise description of real-world objects, on the other hand it is justified by the rapid progress in measurement and analysis techniques that allow the user a versatile exploration of objects. In order to manage the huge volum...

  20. TRIP Database 2.0: A Manually Curated Information Hub for Accessing TRP Channel Interaction Network

    OpenAIRE

    Young-Cheul Shin; Soo-Yong Shin; Jung Nyeo Chun; Hyeon Sung Cho; Jin Muk Lim; Hong-Gee Kim; Insuk So; Dongseop Kwon; Ju-Hong Jeon

    2012-01-01

    Transient receptor potential (TRP) channels are a family of Ca(2+)-permeable cation channels that play a crucial role in biological and disease processes. To advance TRP channel research, we previously created the TRIP (TRansient receptor potential channel-Interacting Protein) Database, a manually curated database that compiles scattered information on TRP channel protein-protein interactions (PPIs). However, the database needs to be improved for information accessibility and data utilization...

  1. Comparative study on Authenticated Sub Graph Similarity Search in Outsourced Graph Database

    OpenAIRE

    N. D. Dhamale; Prof. S. R. Durugkar

    2015-01-01

    Today, security is very important in database systems. Advanced database systems face a great challenge raised by the emergence of massive, complex structural data in bioinformatics, chem-informatics, and many other applications. Since exact matching is often too restrictive, similarity search of complex structures becomes a vital operation that must be supported efficiently. Subgraph similarity search is used in graph databases to retrieve graphs whose subgraphs...

  2. Croatian Cadastre Database Modelling

    Directory of Open Access Journals (Sweden)

    Zvonko Biljecki

    2013-04-01

    Full Text Available The Cadastral Data Model has been developed as part of a larger programme to improve the products and production environment of the Croatian Cadastral Service of the State Geodetic Administration (SGA). The goal of the project was to create a cadastral data model conforming to the relevant standards and specifications in the field of geoinformation (GI) adopted by the international organisations for standardisation with competence over GI (ISO TC211 and OpenGIS) and their implementations. The main guidelines during the project have been object-oriented conceptual modelling of the updated users' requests and a "new" cadastral data model designed by the SGA - Faculty of Geodesy - Geofoto LLC project team. The UML of the conceptual model is given for all feature categories and is described only at class level. The next step was the UML technical model, which was developed from the UML conceptual model. The technical model integrates the different UML schemas in one united schema. XML (eXtensible Markup Language) was applied for the XML description of the UML models, and then the XML schema was transferred into a GML (Geography Markup Language) application schema. With this procedure we have completely described the behaviour of each cadastral feature and the rules for the transfer and storage of cadastral features into the database.

  3. Use of Genomic Databases for Inquiry-Based Learning about Influenza

    Science.gov (United States)

    Ledley, Fred; Ndung'u, Eric

    2011-01-01

    The genome projects of the past decades have created extensive databases of biological information with applications in both research and education. We describe an inquiry-based exercise that uses one such database, the National Center for Biotechnology Information Influenza Virus Resource, to advance learning about influenza. This database…

  4. Fuel element database: developer handbook

    International Nuclear Information System (INIS)

    The fuel elements database which was developed for Atomic Institute of the Austrian Universities is described. The software uses standards like HTML, PHP and SQL. For the standard installation freely available software packages such as MySQL database or the PHP interpreter from Apache Software Foundation and Java Script were used. (nevyjel)

  5. XCOM: Photon Cross Sections Database

    Science.gov (United States)

    SRD 8 XCOM: Photon Cross Sections Database (Web, free access)   A web database is provided which can be used to calculate photon cross sections for scattering, photoelectric absorption and pair production, as well as total attenuation coefficients, for any element, compound or mixture (Z <= 100) at energies from 1 keV to 100 GeV.
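    The attenuation coefficients XCOM tabulates plug into the Beer-Lambert law, I/I0 = exp(-(μ/ρ)·ρ·x). A minimal sketch follows; the lead coefficient below is an approximate, assumed figure, and real values should be taken from the XCOM tables.

```python
import math

def transmitted_fraction(mu_over_rho, density, thickness_cm):
    """Beer-Lambert attenuation: I/I0 = exp(-(mu/rho) * rho * x)."""
    return math.exp(-mu_over_rho * density * thickness_cm)

# Illustrative numbers only; authoritative coefficients come from XCOM.
mu_rho_pb = 5.55   # cm^2/g, roughly lead near 100 keV (assumed value)
rho_pb = 11.35     # g/cm^3, density of lead
frac = transmitted_fraction(mu_rho_pb, rho_pb, 0.1)
assert 0.0 < frac < 1.0  # 1 mm of lead transmits only a small fraction
```

    The same expression applies to any element, compound or mixture once the mass attenuation coefficient at the photon energy of interest is looked up.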

  6. Numerical databases in marine biology

    Digital Repository Service at National Institute of Oceanography (India)

    Sarupria, J.S.; Bhargava, R.M.S.


  7. The ENZYME database in 2000.

    Science.gov (United States)

    Bairoch, A

    2000-01-01

    The ENZYME database is a repository of information related to the nomenclature of enzymes. In recent years it has become an indispensable resource for the development of metabolic databases. The current version contains information on 3705 enzymes. It is available through the ExPASy WWW server (http://www.expasy.ch/enzyme/). PMID:10592255

  8. A Molecular Biology Database Digest

    OpenAIRE

    Bry, François; Kröger, Peer

    2000-01-01

    Computational Biology or Bioinformatics has been defined as the application of mathematical and Computer Science methods to solving problems in Molecular Biology that require large-scale data, computation, and analysis [18]. As expected, Molecular Biology databases play an essential role in Computational Biology research and development. This paper introduces current Molecular Biology databases, stressing data modeling, data acquisition, data retrieval, and the integration

  9. EXPERIMENTAL EVALUATION OF NOSQL DATABASES

    Directory of Open Access Journals (Sweden)

    Veronika Abramova

    2014-10-01

    Full Text Available Relational databases are a universally used technology that enables the storage, management and retrieval of varied data schemas. However, the execution of requests can become a lengthy and inefficient process for some large databases. Moreover, storing large amounts of data requires servers with larger capacities and scalability capabilities. Relational databases have limitations in dealing with scalability for large volumes of data. On the other hand, non-relational database technologies, also known as NoSQL, were developed to better meet the needs of key-value storage of large amounts of records. But there is a large number of NoSQL candidates, and most have not been compared thoroughly yet. The purpose of this paper is to compare different NoSQL databases, evaluating their performance according to the typical use for storing and retrieving data. We tested 10 NoSQL databases with the Yahoo! Cloud Serving Benchmark using a mix of operations to better understand the capability of non-relational databases for handling different requests, and to understand how performance is affected by each database type and its internal mechanisms.
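    The benchmarking approach above - driving a store with a fixed mix of reads and updates over a bounded key space - can be sketched against a trivial in-memory stand-in. This is not YCSB itself; the store, key format and parameters are illustrative.

```python
import random
import string

class KVStore:
    """Trivial in-memory stand-in for a NoSQL key-value store."""
    def __init__(self):
        self.data = {}
    def read(self, key):
        return self.data.get(key)
    def update(self, key, value):
        self.data[key] = value

def run_workload(store, n_ops=1000, read_fraction=0.5, n_keys=100, seed=7):
    """YCSB-style mix: a fixed fraction of reads vs. updates over hot keys."""
    rng = random.Random(seed)
    counts = {"read": 0, "update": 0}
    for _ in range(n_ops):
        key = f"user{rng.randrange(n_keys)}"
        if rng.random() < read_fraction:
            store.read(key)
            counts["read"] += 1
        else:
            store.update(key, rng.choice(string.ascii_letters))
            counts["update"] += 1
    return counts

counts = run_workload(KVStore())
assert counts["read"] + counts["update"] == 1000
```

    Timing the loop for each backend under the same mix is what makes such runs comparable across database types.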

  10. Content independence in multimedia databases

    NARCIS (Netherlands)

    Vries, A.P. de

    2001-01-01

    A database management system is a general-purpose software system that facilitates the processes of defining, constructing, and manipulating databases for various applications. This article investigates the role of data management in multimedia digital libraries, and its implications for the design

  11. Wind turbine reliability database update.

    Energy Technology Data Exchange (ETDEWEB)

    Peters, Valerie A.; Hill, Roger Ray; Stinebaugh, Jennifer A.; Veers, Paul S.

    2009-03-01

    This report documents the status of the Sandia National Laboratories Wind Plant Reliability Database. Included in this report are updates on the form and contents of the Database, which stems from a five-step process of data partnerships, data definition and transfer, data formatting and normalization, analysis, and reporting. Selected observations are also reported.

  12. Neural Networks and Database Systems

    CERN Document Server

    Schikuta, Erich

    2008-01-01

    Object-oriented database systems have proved very valuable at handling and administering complex objects. In the following, guidelines for embedding neural networks into such systems are presented. Our goal is to treat networks as normal data in the database system. From a logical point of view, a neural network is a complex data value and can be stored as a normal data object. It is generally accepted that rule-based reasoning will play an important role in future database applications. The knowledge base consists of facts and rules, which are both stored and handled by the underlying database system. Neural networks can be seen as representations of intensional knowledge in intelligent database systems. They are thus part of a rule-based knowledge pool and can be used like conventional rules. The user has a unified view of his knowledge base regardless of the origin of the individual rules.
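    Treating a network as a complex data value can be sketched with a relational table holding serialized parameters; the schema, serialization format and tiny one-neuron "network" below are illustrative assumptions, not the system the paper describes.

```python
import json
import sqlite3

def predict(net, x):
    """Evaluate a single linear neuron with a step activation."""
    s = sum(w * xi for w, xi in zip(net["weights"], x)) + net["bias"]
    return 1 if s > 0 else 0

# Store the network as a normal data object in a relational table.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE networks (name TEXT PRIMARY KEY, params TEXT)")
net = {"weights": [0.8, -0.5], "bias": 0.1}
con.execute("INSERT INTO networks VALUES (?, ?)", ("demo", json.dumps(net)))

# Retrieve it later and use it like any other stored rule.
row = con.execute("SELECT params FROM networks WHERE name = ?",
                  ("demo",)).fetchone()
loaded = json.loads(row[0])
assert predict(loaded, [1.0, 1.0]) == 1  # 0.8 - 0.5 + 0.1 = 0.4 > 0
```

    A rule engine could then invoke `predict` on the loaded object exactly as it would fire a conventional stored rule.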

  13. The Danish Fetal Medicine Database

    DEFF Research Database (Denmark)

    Ekelund, Charlotte K; Petersen, Olav B; Jørgensen, Finn S;

    2015-01-01

    OBJECTIVE: To describe the establishment and organization of the Danish Fetal Medicine Database and to report national results of first-trimester combined screening for trisomy 21 in the 5-year period 2008-2012. DESIGN: National register study using prospectively collected first-trimester screening...... data from the Danish Fetal Medicine Database. POPULATION: Pregnant women in Denmark undergoing first-trimester screening for trisomy 21. METHODS: Data on maternal characteristics, biochemical and ultrasonic markers are continuously sent electronically from local fetal medicine databases (Astraia Gmbh......%. The national screen-positive rate increased from 3.6% in 2008 to 4.7% in 2012. The national detection rate of trisomy 21 was reported to be between 82 and 90% in the 5-year period. CONCLUSION: A national fetal medicine database has been successfully established in Denmark. Results from the database have shown...

  14. The experimental database

    International Nuclear Information System (INIS)

    A large number of new standards measurements have been carried out since the completion of the ENDF/B-VI standards evaluation. Furthermore, some measurements used in that evaluation have undergone changes that also need to be incorporated into the new evaluation of the standards. Measurements now exist for certain standards up to 200 MeV. These measurements, as well as those used in the ENDF/B-VI evaluation of the standards, have been included in the database for the new international evaluation of the neutron cross-section standards. Many of the experiments agree well with the ENDF/B-VI evaluations. However, some problems have been observed: There was conflict with the H(n,n) differential cross-section around 14 MeV and at about 190 MeV. New measurements of the 10B branching ratio suggested a problem, although additional experimental work indicated that the ENDF/B-VI values are generally reasonable. Differences were observed for the 10B total cross-section and the 10B(n,α1γ) cross-section. Except for possible differences near 270 keV, the 197Au(n,g) cross-section measurements are generally in agreement with the ENDF/B-VI evaluation. New measurements of the 235U(n,f) cross section indicate higher values above 15 MeV. There is concern with some new absolute 238U(n,f) cross-section measurements since they indicate larger values than supportive 238U(n,f)/235U(n,f) cross-section ratio measurements in the 5-20 MeV energy region. At very high energies there are significant differences in the 238U(n,f)/235U(n,f) cross section ratio - the maximum difference exceeds 5% at 200 MeV

  15. The Eruption Forecasting Information System (EFIS) database project

    Science.gov (United States)

    Ogburn, Sarah; Harpel, Chris; Pesicek, Jeremy; Wellik, Jay; Pallister, John; Wright, Heather

    2016-04-01

    The Eruption Forecasting Information System (EFIS) project is a new initiative of the U.S. Geological Survey-USAID Volcano Disaster Assistance Program (VDAP) with the goal of enhancing VDAP's ability to forecast the outcome of volcanic unrest. The EFIS project seeks to: (1) move away from relying on collective memory toward probability estimation using databases; (2) create databases useful for pattern recognition and for answering common VDAP questions, e.g. how commonly does unrest lead to eruption? How commonly do phreatic eruptions portend magmatic eruptions, and what is the range of antecedence times? (3) create generic probabilistic event trees using global data for different volcano 'types'; (4) create background, volcano-specific probabilistic event trees for frequently active or particularly hazardous volcanoes in advance of a crisis; and (5) quantify and communicate uncertainty in probabilities. A major component of the project is the global EFIS relational database, which contains multiple modules designed to aid in the construction of probabilistic event trees and to answer common questions that arise during volcanic crises. The primary module contains chronologies of volcanic unrest, including the timing of phreatic eruptions, column heights, eruptive products, etc., and will be initially populated using chronicles of eruptive activity from Alaskan volcanic eruptions in the GeoDIVA database (Cameron et al. 2013). This database module allows us to query across other global databases such as the WOVOdat database of monitoring data and the Smithsonian Institution's Global Volcanism Program (GVP) database of eruptive histories and volcano information. The EFIS database is in the early stages of development and population; thus, this contribution also serves as a request for feedback from the community.
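    The first question above ("how commonly does unrest lead to eruption?") reduces to a count over episode chronologies of the kind the EFIS database stores. A minimal sketch follows; the episode records and field names are invented for illustration, not real EFIS data.

```python
# Hypothetical episode chronology (invented records, not EFIS data).
episodes = [
    {"volcano": "A", "unrest": True,  "eruption": True},
    {"volcano": "A", "unrest": True,  "eruption": False},
    {"volcano": "B", "unrest": True,  "eruption": True},
    {"volcano": "B", "unrest": False, "eruption": False},
]

def p_eruption_given_unrest(records):
    """Empirical P(eruption | unrest) from episode chronologies."""
    unrest = [r for r in records if r["unrest"]]
    if not unrest:
        return None  # no unrest episodes recorded
    return sum(r["eruption"] for r in unrest) / len(unrest)

assert p_eruption_given_unrest(episodes) == 2 / 3
```

    Branch probabilities of this empirical kind are what populate the nodes of a probabilistic event tree; the database simply makes the counts queryable instead of remembered.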

  16. Update History of This Database - Open TG-GATEs Pathological Image Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Open TG-GATEs Pathological Image Database - Update History of This Database. Date / update contents: 2012/05/24 Op...tio

  17. Advanced Ceramics

    International Nuclear Information System (INIS)

    The First Florida-Brazil Seminar on Materials and the Second State Meeting on new materials in Rio de Janeiro State show the specific technical contribution in the advanced ceramics sector. The other main topics discussed for the development of the country are the advanced ceramics programs, the market, the national technical-scientific capacity, the advanced ceramics patents, etc. (C.G.C.)

  18. Database tomography for commercial application

    Science.gov (United States)

    Kostoff, Ronald N.; Eberhart, Henry J.

    1994-01-01

    Database tomography is a method for extracting themes and their relationships from text. The algorithms employed begin with word frequency and word proximity analysis and build upon these results. When the word 'database' is used, think of medical or police records, patents, journals, or papers, etc. (any text information that can be stored on a computer). Database tomography features a full-text, user-interactive technique enabling the user to identify areas of interest, establish relationships, and map trends for a deeper understanding of an area of interest. Database tomography concepts and applications have been reported in journals and presented at conferences. One important feature of the database tomography algorithm is that it can be used on a database of any size and will facilitate the user's ability to understand the volume of content therein. While employing the process to identify research opportunities it became obvious that this promising technology has potential applications for business, science, engineering, law, and academe. Examples include evaluating marketing trends, strategies, relationships and associations. The database tomography process would also be a powerful component in the areas of competitive intelligence, national-security intelligence and patent analysis. User interest and involvement cannot be overemphasized.
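    The word-frequency and word-proximity starting point can be sketched as follows; this is a minimal illustration of those two analyses, not the published algorithm, and the sample documents are invented.

```python
import re
from collections import Counter

def word_frequency(texts):
    """Count word occurrences across a collection of documents."""
    words = [w for t in texts for w in re.findall(r"[a-z]+", t.lower())]
    return Counter(words)

def proximity_pairs(texts, window=3):
    """Count co-occurrences of word pairs within a sliding window."""
    pairs = Counter()
    for t in texts:
        words = re.findall(r"[a-z]+", t.lower())
        for i, w in enumerate(words):
            for v in words[i + 1 : i + window]:
                pairs[tuple(sorted((w, v)))] += 1
    return pairs

docs = ["graph database performance", "graph database model"]
assert word_frequency(docs)["database"] == 2
assert proximity_pairs(docs)[("database", "graph")] == 2
```

    Themes then emerge as high-frequency terms, and relationships between themes as frequently co-occurring pairs.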

  19. Database Systems - Present and Future

    Directory of Open Access Journals (Sweden)

    2009-01-01

    Full Text Available Database systems nowadays play an increasingly important role in the knowledge-based society, in which computers have penetrated all fields of activity and the Internet tends to develop worldwide. In the current informatics context, developing applications with databases is the work of specialists. Using databases, accessing a database from various applications, and related concepts have become accessible to all categories of IT users. This paper aims to summarize the curricular area regarding the fundamental database systems issues that are necessary in order to train specialists in economic informatics higher education. Database systems integrate and interfere with several informatics technologies and are therefore more difficult to understand and use. Thus, students should already know a set of minimum, mandatory concepts and their practical implementation: computer systems, programming techniques, programming languages, data structures. The article also presents current trends in the evolution of database systems, in the context of economic informatics.

  20. Unifying Memory and Database Transactions

    Science.gov (United States)

    Dias, Ricardo J.; Lourenço, João M.

    Software transactional memory is a concurrency control technique gaining increasing popularity, as it provides high-level concurrency control constructs and eases the development of highly multi-threaded applications. But this ease comes at the expense of restricting the operations that can be executed within a memory transaction: operations such as terminal and file I/O are either not allowed or incur serious performance penalties. Database I/O is another example of an operation that usually is not allowed within a memory transaction. This paper proposes to combine memory and database transactions in a single unified model, benefiting from the ACID properties of database transactions and from the speed of main-memory data processing. The new unified model covers, without differentiating, both memory and database operations. Thus, users are allowed to freely intertwine memory and database accesses within the same transaction, knowing that the memory and database contents will always remain consistent and that the transaction will atomically abort or commit the operations in both memory and database. This approach allows increasing the granularity of the in-memory atomic actions and hence simplifies reasoning about them.
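    The unified commit discipline, where memory changes become visible only if the database transaction commits, can be sketched with a staged-write wrapper. This is a toy model, not the paper's STM-based system; class and column names are illustrative.

```python
import sqlite3

class UnifiedTxn:
    """Toy unified transaction: stage in-memory writes and SQL statements,
    then commit both atomically (memory applies only if the DB commits)."""
    def __init__(self, mem, con):
        self.mem, self.con = mem, con
        self.mem_writes, self.sql = {}, []
    def write_mem(self, key, value):
        self.mem_writes[key] = value           # staged, not yet visible
    def execute(self, stmt, params=()):
        self.sql.append((stmt, params))        # staged SQL
    def commit(self):
        try:
            with self.con:                     # DB transaction: all or nothing
                for stmt, params in self.sql:
                    self.con.execute(stmt, params)
        except sqlite3.Error:
            return False                       # DB aborted -> memory untouched
        self.mem.update(self.mem_writes)       # DB committed -> publish memory
        return True

mem = {}
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE accounts (id TEXT PRIMARY KEY, balance INTEGER)")

txn = UnifiedTxn(mem, con)
txn.write_mem("cached_balance", 90)
txn.execute("INSERT INTO accounts VALUES (?, ?)", ("a1", 90))
assert txn.commit() and mem["cached_balance"] == 90

bad = UnifiedTxn(mem, con)
bad.write_mem("cached_balance", 0)
bad.execute("INSERT INTO accounts VALUES (?, ?)", ("a1", 0))  # PK violation
assert not bad.commit() and mem["cached_balance"] == 90
```

    The real system must additionally detect conflicts between concurrent transactions; the sketch only shows the atomic publish-on-commit rule that keeps memory and database consistent.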

  1. Database on wind characteristics. Contents of database bank

    DEFF Research Database (Denmark)

    Larsen, Gunner Chr.; Hansen, K.S.

    2001-01-01

    The main objective of IEA R&D Wind Annex XVII - Database on Wind Characteristics - is to provide wind energy planners and designers, as well as the international wind engineering community in general, with easy access to quality-controlled measured wind-field time series observed in a wide range of...... environments. The project partners are Sweden, Norway, U.S.A., The Netherlands, Japan and Denmark, with Denmark as the Operating Agent. The reporting of IEA R&D Annex XVII falls into three separate parts. Part one deals with the overall structure and philosophy behind the database, part two accounts in detail...... for the available data in the established database bank and part three is the Users' Manual describing the various ways to access and analyse the data. The present report constitutes the second part of the Annex XVII reporting. Basically, the database bank contains three categories of data, i.e. i...

  2. The Berlin Emissivity Database

    Science.gov (United States)

    Helbert, Jorn

    Remote sensing infrared spectroscopy is the principal method for investigating planetary surface composition. Past, present and future missions to solar system bodies include in their payloads instruments measuring the emerging radiation in the infrared range. TES on Mars Global Surveyor and THEMIS on Mars Odyssey have in many ways changed our views of Mars. The PFS instrument on the ESA Mars Express mission has collected spectra since the beginning of 2004. In spring 2006 the VIRTIS experiment started its operation on the ESA Venus Express mission, allowing for the first time the surface of Venus to be mapped using the 1 µm emission from the surface. The MERTIS spectrometer is included in the payload of the ESA BepiColombo mission to Mercury, scheduled for 2013. For the interpretation of the measured data, an emissivity spectral library of planetary analogue materials is needed. The Berlin Emissivity Database (BED) presented here is focused on relatively fine-grained size separates, providing a realistic basis for the interpretation of thermal emission spectra of planetary regoliths. The BED is therefore complementary to existing thermal emission libraries, like the ASU library for example. The BED currently contains entries for plagioclase and potassium feldspars, low-Ca and high-Ca pyroxenes, olivine, elemental sulphur, common Martian analogues (JSC Mars-1, Salten Skov, palagonites, montmorillonite) and a lunar highland soil sample, measured in the wavelength range from 3 to 50 µm as a function of particle size. For each sample, the spectra of four well-defined particle size separates (<25 µm, 25-63 µm, 63-125 µm, 125-250 µm) are measured with a 4 cm-1 spectral resolution. These size separates have been selected as typical representations of most planetary surfaces. Following an ongoing upgrade of the Planetary Emissivity Laboratory (PEL) at DLR in Berlin, measurements can be obtained at temperatures up to 500 °C, realistic for dayside conditions

  3. Cloudsat tropical cyclone database

    Science.gov (United States)

    Tourville, Natalie D.

    CloudSat (CS), carrying the first 94 GHz spaceborne cloud profiling radar (CPR), launched in 2006 to study the vertical distribution of clouds. Not only are CS observations revealing the inner vertical cloud details of water and ice globally, but CS overpasses of tropical cyclones (TCs) are also providing a new and exciting opportunity to study the vertical structure of these storm systems. CS TC observations provide first-time vertical views of TCs and demonstrate a unique way to observe TC structure remotely from space. Since December 2009, CS has intersected every globally named TC (within 1000 km of storm center) for a total of 5,278 unique overpasses of tropical systems (disturbance, tropical depression, tropical storm and hurricane/typhoon/cyclone (HTC)). In conjunction with the Naval Research Laboratory (NRL), each CS TC overpass is processed into a data file containing observational data from the afternoon constellation of satellites (A-TRAIN), the Navy's Operational Global Atmospheric Prediction System (NOGAPS) model, the European Centre for Medium-Range Weather Forecasts (ECMWF) model and best-track storm data. This study will describe the components and statistics of the CS TC database, present case studies of CS TC overpasses with complementary A-TRAIN observations and compare average reflectivity stratifications of TCs across different atmospheric regimes (wind shear, SST, latitude, maximum wind speed and basin). Average reflectivity stratifications reveal that characteristics in each basin vary from year to year and are dependent upon eye overpasses of HTC-strength storms and ENSO phase. West Pacific (WPAC) basin storms are generally larger in size (horizontally and vertically) and have greater values of reflectivity at a predefined height than storms in all other basins. Storm structure at higher latitudes expands horizontally. Higher vertical wind shear (≥ 9.5 m/s) reduces cloud top height (CTH) and the intensity of precipitation cores, especially in HTC-strength storms

  4. On-Change Publishing of Database Resident Control System Data

    CERN Document Server

    Kostro, K; Roderick, C

    2009-01-01

    The CERN accelerator control system is largely data driven, based on a distributed Oracle® database architecture. Many application programs depend on the latest values of key pieces of information such as beam mode and accelerator mode. Rather than taking the non-scalable approach of polling the database for the latest values, the CERN control system addresses this requirement by making use of Oracle Advanced Queuing – which provides a JMS (Java Message Service) implementation – to publish data changes throughout the control system via the CERN Controls Middleware (CMW). This paper describes the architecture of the system, the implementation choices and the experience so far.
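
    The CMW and Oracle Advanced Queuing machinery itself cannot be reproduced here, but the contrast with polling, delivering a value to subscribers only when it changes, can be sketched with a minimal in-process broker (all names below are invented for illustration):

```python
# Toy sketch of on-change publishing (observer pattern); Oracle AQ / JMS /
# CMW specifics are replaced by an in-process broker for illustration.
class SettingBroker:
    def __init__(self):
        self._values = {}        # latest value per key (e.g. "beamMode")
        self._subscribers = {}   # key -> list of callbacks

    def subscribe(self, key, callback):
        self._subscribers.setdefault(key, []).append(callback)
        if key in self._values:              # deliver last-known value
            callback(self._values[key])

    def publish(self, key, value):
        """Would be driven by the database-side change notification."""
        if self._values.get(key) != value:   # publish on change only
            self._values[key] = value
            for cb in self._subscribers.get(key, []):
                cb(value)

broker = SettingBroker()
seen = []
broker.subscribe("beamMode", seen.append)
broker.publish("beamMode", "INJECTION")
broker.publish("beamMode", "INJECTION")   # duplicate: suppressed
broker.publish("beamMode", "STABLE")
print(seen)   # ['INJECTION', 'STABLE']
```

Clients receive each new value exactly once instead of repeatedly querying the database, which is what makes the approach scale with the number of applications.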

  5. Databases of the marine metagenomics

    KAUST Repository

    Mineta, Katsuhiko

    2015-10-28

    The metagenomic data obtained from marine environments is significantly useful for understanding marine microbial communities. In comparison with the conventional amplicon-based approach to metagenomics, the recent shotgun sequencing-based approach has become a powerful tool that provides an efficient way of grasping the diversity of an entire microbial community at a sampling point in the sea. However, this approach accelerates the accumulation of metagenome data as well as the growth of data complexity. Moreover, when the metagenomic approach is used for monitoring temporal changes of marine environments at multiple locations, metagenomic data will accumulate at a tremendous speed. Because this kind of situation has started to become a reality at many marine research institutions and stations all over the world, it seems obvious that data management and analysis will be confronted by so-called Big Data issues, such as how the database can be constructed in an efficient way and how useful knowledge should be extracted from a vast amount of data. In this review, we summarize the outline of all the major databases of marine metagenomes that are currently publicly available, noting that no database is devoted exclusively to marine metagenomes, while the number of metagenome databases that include marine metagenome data is six, unexpectedly still small. We also extend our explanation to databases, which we call reference databases, that will be useful for constructing a marine metagenome database as well as for complementing it with important information. Finally, we point out a number of challenges to be overcome in constructing the marine metagenome database.

  6. Database characterisation of HEP applications

    Science.gov (United States)

    Piorkowski, Mariusz; Grancher, Eric; Topurov, Anton

    2012-12-01

    Oracle-based database applications underpin many key aspects of operations for both the LHC accelerator and the LHC experiments. In addition to the overall performance, the predictability of the response is a key requirement to ensure smooth operations, and delivering predictability requires understanding the applications from the ground up. Fortunately, database management systems provide several tools to check, measure, analyse and gather useful information. We present our experiences characterising the performance of several typical HEP database applications, performance characterisations that were used to deliver improved predictability and scalability as well as to optimise the hardware platform choice as we migrated to new hardware and Oracle 11g.

  7. Database characterisation of HEP applications

    International Nuclear Information System (INIS)

    Oracle-based database applications underpin many key aspects of operations for both the LHC accelerator and the LHC experiments. In addition to the overall performance, the predictability of the response is a key requirement to ensure smooth operations, and delivering predictability requires understanding the applications from the ground up. Fortunately, database management systems provide several tools to check, measure, analyse and gather useful information. We present our experiences characterising the performance of several typical HEP database applications, performance characterisations that were used to deliver improved predictability and scalability as well as to optimise the hardware platform choice as we migrated to new hardware and Oracle 11g.

  8. International Ventilation Cooling Application Database

    DEFF Research Database (Denmark)

    Holzer, Peter; Psomas, Theofanis Ch.; OSullivan, Paul

    2016-01-01

    , systematically investigating the distribution of technologies and strategies within VC. The database is structured as both a ticking-list-like building-spreadsheet and a collection of building-datasheets. The content of both closely follows Annex 62 State-Of-The-Art-Report. The database has been filled, based...... on desktop research, by Annex 62 participants, namely by the authors. So far the VC database contains 91 buildings, located in Denmark, Ireland and Austria. Further contributions from other countries are expected. The building-datasheets offer illustrative descriptions of buildings of different...

  9. Physical database design using Oracle

    CERN Document Server

    Burleson, Donald K

    2004-01-01

    INTRODUCTION TO ORACLE PHYSICAL DESIGN: Preface; Relational Databases and Physical Design; Systems Analysis and Physical Database Design; Introduction to Logical Database Design; Entity/Relation Modeling; Bridging between Logical and Physical Models; Physical Design Requirements Validation. PHYSICAL ENTITY DESIGN FOR ORACLE: Data Relationships and Physical Design; Massive De-Normalization: STAR Schema Design; Designing Class Hierarchies; Materialized Views and De-Normalization; Referential Integrity; Conclusion. ORACLE HARDWARE DESIGN: Planning the Server Environment; Designing the Network Infrastructure for Oracle; Oracle Netw

  10. Practical database programming with Java

    CERN Document Server

    Bai, Ying

    2011-01-01

    "This important resource offers a detailed description about the practical considerations and applications in database programming using Java NetBeans 6.8 with authentic examples and detailed explanations. This book provides readers with a clear picture as to how to handle the database programming issues in the Java NetBeans environment. The book is ideal for classroom and professional training material. It includes a wealth of supplemental material that is available for download including Powerpoint slides, solution manuals, and sample databases"--

  11. OECD/NEA thermochemical database

    International Nuclear Information System (INIS)

    This state-of-the-art report introduces the contents of the Chemical Data-Service, OECD/NEA, and the results of an OECD/NEA survey of the thermodynamic and kinetic databases currently in use. It also summarizes the results of the Thermochemical Database Projects of OECD/NEA. This report will serve as a guide for researchers to easily obtain validated thermodynamic and kinetic data for all substances from the available OECD/NEA databases. (author). 75 refs

  12. Biological Databases for Human Research

    Institute of Scientific and Technical Information of China (English)

    Dong Zou; Lina Ma; Jun Yu; Zhang Zhang

    2015-01-01

    The completion of the Human Genome Project lays a foundation for systematically studying the human genome from evolutionary history to precision medicine against diseases. With the explosive growth of biological data, there is an increasing number of biological databases that have been developed in aid of human-related research. Here we present a collection of human-related biological databases and provide a mini-review by classifying them into different categories according to their data types. As human-related databases continue to grow not only in count but also in volume, challenges are ahead in big data storage, processing, exchange and curation.

  13. Development of a safeguards database

    International Nuclear Information System (INIS)

    1. Introduction to Database. An effort has been made to computerize the information regarding safeguards inspections conducted at our facilities, the IAEA inspectors designated for Pakistan, the material inventory at our safeguarded facilities and our safeguards agreements with the Agency. Information on these components, such as the inspectors of a particular nationality ever designated to Pakistan, the inventory of a certain kind of nuclear material present at any safeguarded facility, or the type of safeguards agreement covering any facility, can be easily traced using the database, which provides flexible search options. The relational database management system Access has been used for the development of the database. All the information is managed from a single database file. Within the file, the safeguards data is divided into separate tables, one for each type of information. Relationships are defined between the tables to bring the data from multiple tables together in a query, form, or report. Various forms have been created to edit/add data indirectly to the tables. To find and retrieve the data that is frequently required, including data from multiple tables, various queries have been made. Reports have been created using these queries to present data in a certain format. To create an application for navigating around the Safeguards Database, the Switchboard Manager has been used, which automatically creates switchboard forms that help to navigate around the database. The switchboard form has buttons that can be clicked to open various forms and reports or to open other switchboards that can further open additional forms and reports. The Safeguards Database also employs various macros and event procedures to automate common tasks. To secure the Safeguards Database, the simplest method, setting a password for opening the database, has been used. 2. Components of Safeguards Database (DB). There are
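
    As a rough illustration of the layout described (separate tables per type of information, relationships between them, and queries joining multiple tables), here is a hypothetical sketch using sqlite3 in place of Microsoft Access; all table and column names are invented:

```python
# Hypothetical relational layout: one table per type of information,
# linked by foreign keys, queried with a join (sqlite3 stands in for Access).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE inspectors (
    id INTEGER PRIMARY KEY,
    name TEXT,
    nationality TEXT
);
CREATE TABLE facilities (
    id INTEGER PRIMARY KEY,
    name TEXT,
    agreement_type TEXT
);
CREATE TABLE inspections (
    id INTEGER PRIMARY KEY,
    inspector_id INTEGER REFERENCES inspectors(id),
    facility_id INTEGER REFERENCES facilities(id),
    inspection_date TEXT
);
""")
conn.execute("INSERT INTO inspectors VALUES (1, 'A. Example', 'French')")
conn.execute("INSERT INTO facilities VALUES (1, 'Facility X', 'INFCIRC/66')")
conn.execute("INSERT INTO inspections VALUES (1, 1, 1, '2005-03-14')")

# One of the flexible searches mentioned above: inspectors of a given
# nationality ever designated, joining two tables.
rows = conn.execute("""
    SELECT DISTINCT i.name
    FROM inspectors i
    JOIN inspections n ON n.inspector_id = i.id
    WHERE i.nationality = ?
""", ("French",)).fetchall()
print(rows)   # [('A. Example',)]
```

Forms, reports, and the switchboard in Access are front-ends over exactly this kind of schema-plus-query core.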

  14. VariVis: a visualisation toolkit for variation databases

    Directory of Open Access Journals (Sweden)

    Smith Timothy D

    2008-04-01

    Full Text Available Abstract Background With the completion of the Human Genome Project and recent advancements in mutation detection technologies, the volume of data available on genetic variations has risen considerably. These data are stored in online variation databases and provide important clues to the cause of diseases and potential side effects or resistance to drugs. However, the data presentation techniques employed by most of these databases make them difficult to use and understand. Results Here we present a visualisation toolkit that can be employed by online variation databases to generate graphical models of gene sequence with corresponding variations and their consequences. The VariVis software package can run on any web server capable of executing Perl CGI scripts and can interface with numerous Database Management Systems and "flat-file" data files. VariVis produces two easily understandable graphical depictions of any gene sequence and matches these with variant data. While developed with the goal of improving the utility of human variation databases, the VariVis package can be used in any variation database to enhance utilisation of, and access to, critical information.

  15. "TPSX: Thermal Protection System Expert and Material Property Database"

    Science.gov (United States)

    Squire, Thomas H.; Milos, Frank S.; Rasky, Daniel J. (Technical Monitor)

    1997-01-01

    The Thermal Protection Branch at NASA Ames Research Center has developed a computer program for storing, organizing, and accessing information about thermal protection materials. The program, called Thermal Protection Systems Expert and Material Property Database, or TPSX, is available for the Microsoft Windows operating system. An "on-line" version is also accessible on the World Wide Web. TPSX is designed to be a high-quality source for TPS material properties presented in a convenient, easily accessible form for use by engineers and researchers in the field of high-speed vehicle design. Data can be displayed and printed in several formats. An information window displays a brief description of the material with properties at standard pressure and temperature. A spreadsheet window displays complete, detailed property information. Properties which are a function of temperature and/or pressure can be displayed as graphs. In any display the data can be converted from English to SI units with the click of a button. Two material databases included with TPSX are: 1) materials used and/or developed by the Thermal Protection Branch at NASA Ames Research Center, and 2) a database compiled by NASA Johnson Space Center (JSC). The Ames database contains over 60 advanced TPS materials including flexible blankets, rigid ceramic tiles, and ultra-high temperature ceramics. The JSC database contains over 130 insulative and structural materials. The Ames database is periodically updated and expanded as required to include newly developed materials and material property refinements.

  16. Gas Hydrate Research Database and Web Dissemination Channel

    Energy Technology Data Exchange (ETDEWEB)

    Micheal Frenkel; Kenneth Kroenlein; V Diky; R.D. Chirico; A. Kazakow; C.D. Muzny; M. Frenkel

    2009-09-30

    To facilitate advances in application of technologies pertaining to gas hydrates, a United States database containing experimentally-derived information about those materials was developed. The Clathrate Hydrate Physical Property Database (NIST Standard Reference Database #156) was developed by the TRC Group at NIST in Boulder, Colorado paralleling a highly-successful database of thermodynamic properties of molecular pure compounds and their mixtures and in association with an international effort on the part of CODATA to aid in international data sharing. Development and population of this database relied on the development of three components of information-processing infrastructure: (1) guided data capture (GDC) software designed to convert data and metadata into a well-organized, electronic format, (2) a relational data storage facility to accommodate all types of numerical and metadata within the scope of the project, and (3) a gas hydrate markup language (GHML) developed to standardize data communications between 'data producers' and 'data users'. Having developed the appropriate data storage and communication technologies, a web-based interface for both the new Clathrate Hydrate Physical Property Database, as well as Scientific Results from the Mallik 2002 Gas Hydrate Production Research Well Program was developed and deployed at http://gashydrates.nist.gov.

  17. Nuclear Energy Infrastructure Database Description and User’s Manual

    Energy Technology Data Exchange (ETDEWEB)

    Heidrich, Brenden [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-11-01

    In 2014, the Deputy Assistant Secretary for Science and Technology Innovation initiated the Nuclear Energy (NE)–Infrastructure Management Project by tasking the Nuclear Science User Facilities, formerly the Advanced Test Reactor National Scientific User Facility, to create a searchable and interactive database of all pertinent NE-supported and -related infrastructure. This database, known as the Nuclear Energy Infrastructure Database (NEID), is used for analyses to establish needs, redundancies, efficiencies, distributions, etc., to best understand the utility of NE’s infrastructure and inform the content of infrastructure calls. The Nuclear Science User Facilities developed the database by utilizing data and policy direction from a variety of reports from the U.S. Department of Energy, the National Research Council, the International Atomic Energy Agency, and various other federal and civilian resources. The NEID currently contains data on 802 research and development instruments housed in 377 facilities at 84 institutions in the United States and abroad. The effort to maintain and expand the database is ongoing. Detailed information on many facilities must be gathered from associated institutions and added to complete the database. The data must be validated and kept current to capture facility and instrumentation status as well as to cover new acquisitions and retirements. This document provides a short tutorial on the navigation of the NEID web portal at NSUF-Infrastructure.INL.gov.

  18. Tidal Creek Sentinel Habitat Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Ecological Research, Assessment and Prediction's Tidal Creeks: Sentinel Habitat Database was developed to support the National Oceanic and Atmospheric...

  19. Database Technology in Digital Libraries.

    Science.gov (United States)

    Preston, Carole; Lin, Binshan

    2002-01-01

    Reviews issues relating to database technology and digital libraries. Highlights include resource sharing; ongoing digital library projects; metadata; user interfaces; query processing; interoperability; data quality; copyright infringement; and managerial implications, including electronic versus printed materials, accessibility,…

  20. Human Exposure Database System (HEDS)

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Human Exposure Database System (HEDS) provides public access to data sets, documents, and metadata from EPA on human exposure. It is primarily intended for...

  1. JPL Small Body Database Browser

    Data.gov (United States)

    National Aeronautics and Space Administration — The JPL Small-Body Database Browser provides data for all known asteroids and many comets. Newly discovered objects and their orbits are added on a daily basis....

  2. DATABASE REPLICATION IN HETEROGENOUS PLATFORM

    Directory of Open Access Journals (Sweden)

    Hendro Nindito

    2014-01-01

    Full Text Available The application of diverse database technologies in enterprises today is increasingly common practice. To provide high availability and survivability of real-time information, a database replication technology that is capable of replicating databases across heterogeneous platforms is required. The purpose of this research is to find a technology with such capability. In this research, the data source is stored in an MSSQL database server running on Windows. The data is replicated to MySQL running on Linux as the destination. The method applied in this research is prototyping, in which the processes of development and testing can be done interactively and repeatedly. The key result of this research is that the replication technology applied, Oracle GoldenGate, successfully manages to replicate data in real time across heterogeneous platforms.
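
    Oracle GoldenGate captures changes from the source database's transaction log and applies them to the destination. Purely to illustrate that idea of one-way, log-based replication between heterogeneous stores, here is a toy Python sketch (all names invented; real products read the redo/binary log rather than an application-level list):

```python
# Toy sketch of one-way, log-based replication: the source appends every
# change to an ordered log, and the replicator applies unseen entries
# to the destination, regardless of what store the destination is.
source_log = []          # ordered change log on the source side
destination = {}         # destination "database"
applied = 0              # how many log entries the destination has seen

def source_write(key, value):
    source_log.append(("upsert", key, value))

def replicate():
    """Apply all change-log entries not yet applied to the destination."""
    global applied
    while applied < len(source_log):
        op, key, value = source_log[applied]
        if op == "upsert":
            destination[key] = value
        applied += 1

source_write("row1", "a")
source_write("row2", "b")
replicate()
source_write("row1", "c")   # update after the first sync
replicate()
print(destination)   # {'row1': 'c', 'row2': 'b'}
```

Because replication is driven by the ordered log rather than by the source's storage format, the destination can be a completely different DBMS, which is the property the research above relies on.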

  3. Great Lakes Environmental Database (GLENDA)

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Great Lakes Environmental Database (GLENDA) houses environmental data on a wide variety of constituents in water, biota, sediment, and air in the Great Lakes...

  4. The Nucleic Acid Database (NDB)

    Czech Academy of Sciences Publication Activity Database

    Berman, H. M.; Feng, Z.; Schneider, Bohdan; Westbrook, J.; Zardecki, C.

    Vol. F. Dordrecht : Kluwer Academic, 2001 - (Rossmann, M.; Arnold, E.), s. 657-682 Institutional research plan: CEZ:AV0Z4040901 Keywords : database * nucleic acid Subject RIV: CF - Physical ; Theoretical Chemistry

  5. DATABASE, AIKEN COUNTY, SOUTH CAROLINA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — The Digital Flood Insurance Rate Map (DFIRM) Database depicts flood risk information and supporting data used to develop the risk data. The primary risk...

  6. Drinking Water Treatability Database (TDB)

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Drinking Water Treatability Database (TDB) presents referenced information on the control of contaminants in drinking water. It allows drinking water utilities,...

  7. Consolidated Human Activities Database (CHAD)

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Consolidated Human Activity Database (CHAD) contains data obtained from human activity studies that were collected at city, state, and national levels. CHAD is...

  8. The Nuclear Science References Database

    CERN Document Server

    Pritychenko, B; Singh, B; Totans, J

    2013-01-01

    The Nuclear Science References (NSR) database, together with its associated Web interface, is the world's only comprehensive source of easily accessible low- and intermediate-energy nuclear physics bibliographic information for more than 210,000 articles since the beginning of nuclear science. The weekly-updated NSR database provides essential support for nuclear data evaluation, compilation and research activities. The principles of the database and of the Web application development and maintenance are described. Examples of nuclear structure, reaction and decay applications are specifically included. The complete NSR database is freely available at the websites of the National Nuclear Data Center http://www.nndc.bnl.gov/nsr and the International Atomic Energy Agency http://www-nds.iaea.org/nsr.

  9. Biological Sample Monitoring Database (BSMDBS)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Biological Sample Monitoring Database System (BSMDBS) was developed for the Northeast Fisheries Regional Office and Science Center (NER/NEFSC) to record and...

  10. E3 Portfolio Review Database

    Data.gov (United States)

    US Agency for International Development — The E3 Portfolio Review Database houses operational and performance data for all activities that the Bureau funds and/or manages. Activity-level data is collected...

  11. National Patient Care Database (NPCD)

    Data.gov (United States)

    Department of Veterans Affairs — The National Patient Care Database (NPCD), located at the Austin Information Technology Center, is part of the National Medical Information Systems (NMIS). The NPCD...

  12. FAKE FACE DATABASE AND PREPROCESSING

    Directory of Open Access Journals (Sweden)

    Aruni Singh

    2013-02-01

    Full Text Available Face plays a vital role in human interaction compared to other biometrics. It is the most popular non-intrusive and non-invasive biometric, whose image can easily be snapped without user co-operation. That is why criminals and imposters always try to tamper with their facial identity. Therefore, face tampering detection is one of the most important bottlenecks for the security, commercial and industrial orbits. In particular, few researchers have addressed the challenges of disguise detection, and a key inadequacy is the lack of a benchmark database in the public domain. This paper addresses these problems by preparing three categories of tampered face databases within the framework of FRT (Facial Recognition Technology) and evaluates the performance of these databases on face recognition algorithms. The categories are dummy, colour-imposed and masked faces.

  13. National Benthic Infaunal Database (NBID)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The NBID is a quantitative database on abundances of individual benthic species by sample and study region, along with other synoptically measured environmental...

  14. Air Compliance Complaint Database (ACCD)

    Data.gov (United States)

    U.S. Environmental Protection Agency — THIS DATA ASSET NO LONGER ACTIVE: This is metadata documentation for the Region 7 Air Compliance Complaint Database (ACCD) which logs all air pollution complaints...

  15. InterAction Database (IADB)

    Science.gov (United States)

    The InterAction Database includes demographic and prescription information for more than 500,000 patients in the northern and middle Netherlands and has been integrated with other systems to enhance data collection and analysis.

  16. Faculty expertise database updated, improved

    OpenAIRE

    Trulove, Susan

    2007-01-01

    To help business, industry, government, and media representatives find Virginia Tech faculty members and graduate students with specific expertise, the Office of the Vice President for Research has expanded the Virginia Tech Expertise Database and made it easier to use.

  17. Database of Interacting Proteins (DIP)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The DIP database catalogs experimentally determined interactions between proteins. It combines information from a variety of sources to create a single, consistent...

  18. Update History of This Database - KOME | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)


  19. Update History of This Database - Taxonomy Icon | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)


  20. Update History of This Database - Plabrain DB | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)
