WorldWideScience

Sample records for database management research

  1. Concierge: Personal database software for managing digital research resources

    Directory of Open Access Journals (Sweden)

    Hiroyuki Sakai

    2007-11-01

Full Text Available This article introduces a desktop application, named Concierge, for managing personal digital research resources. Using simple operations, it enables storage of various types of files and indexes them based on content descriptions. A key feature of the software is a high level of extensibility. By installing optional plug-ins, users can customize and extend the usability of the software based on their needs. In this paper, we also introduce a few optional plug-ins: literature management, electronic laboratory notebook, and XooNIps client plug-ins. XooNIps is a content management system developed to share digital research resources among neuroscience communities. It has been adopted as the standard database system in Japanese neuroinformatics projects. Concierge therefore offers comprehensive support, from management of personal digital research resources to their sharing in open-access neuroinformatics databases such as XooNIps. This interaction between personal and open-access neuroinformatics databases is expected to enhance the dissemination of digital research resources. Concierge is developed as an open source project; Mac OS X and Windows XP versions have been released at the official site (http://concierge.sourceforge.jp).

  2. Development of operation management database for research reactors

    International Nuclear Information System (INIS)

    Zhang Xinjun; Chen Wei; Yang Jun

    2005-01-01

An Operation Database for a Pulsed Reactor has been developed on the platform of Microsoft Visual C++ 6.0. The database includes four function modules: fuel element management, incident management, experiment management and file management. It is essential for reactor security and information management. (authors)

  3. DOG-SPOT database for comprehensive management of dog genetic research data

    Directory of Open Access Journals (Sweden)

    Sutter Nathan B

    2010-12-01

    Full Text Available Abstract Research laboratories studying the genetics of companion animals have no database tools specifically designed to aid in the management of the many kinds of data that are generated, stored and analyzed. We have developed a relational database, "DOG-SPOT," to provide such a tool. Implemented in MS-Access, the database is easy to extend or customize to suit a lab's particular needs. With DOG-SPOT a lab can manage data relating to dogs, breeds, samples, biomaterials, phenotypes, owners, communications, amplicons, sequences, markers, genotypes and personnel. Such an integrated data structure helps ensure high quality data entry and makes it easy to track physical stocks of biomaterials and oligonucleotides.

  4. Generalized Database Management System Support for Numeric Database Environments.

    Science.gov (United States)

    Dominick, Wayne D.; Weathers, Peggy G.

    1982-01-01

    This overview of potential for utilizing database management systems (DBMS) within numeric database environments highlights: (1) major features, functions, and characteristics of DBMS; (2) applicability to numeric database environment needs and user needs; (3) current applications of DBMS technology; and (4) research-oriented and…

  5. Applications of GIS and database technologies to manage a Karst Feature Database

    Science.gov (United States)

    Gao, Y.; Tipping, R.G.; Alexander, E.C.

    2006-01-01

    This paper describes the management of a Karst Feature Database (KFD) in Minnesota. Two sets of applications in both GIS and Database Management System (DBMS) have been developed for the KFD of Minnesota. These applications were used to manage and to enhance the usability of the KFD. Structured Query Language (SQL) was used to manipulate transactions of the database and to facilitate the functionality of the user interfaces. The Database Administrator (DBA) authorized users with different access permissions to enhance the security of the database. Database consistency and recovery are accomplished by creating data logs and maintaining backups on a regular basis. The working database provides guidelines and management tools for future studies of karst features in Minnesota. The methodology of designing this DBMS is applicable to develop GIS-based databases to analyze and manage geomorphic and hydrologic datasets at both regional and local scales. The short-term goal of this research is to develop a regional KFD for the Upper Mississippi Valley Karst and the long-term goal is to expand this database to manage and study karst features at national and global scales.
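The abstract above mentions using SQL both to manipulate database transactions and to back the user interfaces. As a minimal sketch of that pattern (the table and column names here are hypothetical illustrations, not the actual KFD schema), related inserts can be wrapped in a single transaction so a failure leaves the database consistent:

```python
import sqlite3

# Hypothetical karst-feature table; names are illustrative, not from the KFD.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE karst_feature (
    feature_id   INTEGER PRIMARY KEY,
    feature_type TEXT NOT NULL,   -- e.g. 'sinkhole', 'spring'
    county       TEXT,
    utm_easting  REAL,
    utm_northing REAL)""")

# One transaction for a batch of related inserts: commits on success,
# rolls back on any exception -- the consistency/recovery behaviour
# the abstract attributes to the DBMS.
try:
    with conn:
        conn.execute("INSERT INTO karst_feature VALUES (1, 'sinkhole', 'Fillmore', 567100.0, 4840200.0)")
        conn.execute("INSERT INTO karst_feature VALUES (2, 'spring',   'Olmsted',  551300.0, 4872900.0)")
except sqlite3.Error:
    pass  # in a real system: log and alert the DBA

# The same SQL layer then serves the user-interface queries.
rows = conn.execute(
    "SELECT feature_type, COUNT(*) FROM karst_feature "
    "GROUP BY feature_type ORDER BY feature_type").fetchall()
print(rows)  # [('sinkhole', 1), ('spring', 1)]
```

The `with conn:` block is the transactional unit; access permissions and backup scheduling, also mentioned in the abstract, sit outside the SQL layer in the DBMS configuration.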

  6. Research reactor records in the INIS database

    International Nuclear Information System (INIS)

    Marinkovic, N.

    2001-01-01

This report presents a statistical analysis of more than 13,000 records of publications concerned with research and technology in the field of research and experimental reactors, included in the INIS Bibliographic Database for the period from 1970 to 2001. The main objectives of this bibliometric study were: to make an inventory of research reactor related records in the INIS Database; to provide statistics and scientific indicators for INIS users, namely science managers, researchers, engineers, operators, scientific editors and publishers, and decision-makers in fields related to research reactors; and to extract other useful information from the INIS Bibliographic Database about articles published on research reactor research and technology. (author)

  7. [Research and development of medical case database: a novel medical case information system integrating with biospecimen management].

    Science.gov (United States)

    Pan, Shiyang; Mu, Yuan; Wang, Hong; Wang, Tong; Huang, Peijun; Ma, Jianfeng; Jiang, Li; Zhang, Jie; Gu, Bing; Yi, Lujiang

    2010-04-01

To meet the need to manage medical case information and biospecimens simultaneously, we developed a novel medical case information system integrated with biospecimen management. The database, established with MS SQL Server 2000, covered the basic information, clinical diagnosis, imaging diagnosis, pathological diagnosis and clinical treatment of patients; the physicochemical properties, inventory management and laboratory analysis of biospecimens; and user logs and data maintenance. The client application, developed in Visual C++ 6.0 on a Client/Server model, was used to implement medical case and biospecimen management. The system can perform input, browsing, querying and summarizing of cases and related biospecimen information, and can automatically synthesize case records from the database. It supports management both of long-term follow-up of individuals and of cases grouped according to research aims. The system can improve the efficiency and quality of clinical research in which biospecimens are used in a coordinated way. It realizes integrated and dynamic management of medical cases and biospecimens, and may be considered a new management platform.
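The core of such a system is the one-to-many link between a medical case and its biospecimens. A minimal relational sketch of that link, with entirely hypothetical table and column names (the paper's actual MS SQL Server schema is not given), might look like:

```python
import sqlite3

# Illustrative schema: one medical case, many biospecimens.
# All names are assumptions for the example, not the paper's schema.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE medical_case (
    case_id            INTEGER PRIMARY KEY,
    patient_ref        TEXT,
    clinical_diagnosis TEXT);
CREATE TABLE biospecimen (
    specimen_id   INTEGER PRIMARY KEY,
    case_id       INTEGER REFERENCES medical_case(case_id),
    specimen_type TEXT,
    freezer_slot  TEXT);   -- inventory management field
""")
db.execute("INSERT INTO medical_case VALUES (1, 'anonymised', 'lymphoma')")
db.execute("INSERT INTO biospecimen VALUES (10, 1, 'serum',  'F2-A07')")
db.execute("INSERT INTO biospecimen VALUES (11, 1, 'tissue', 'F2-A08')")

# A join answers 'which specimens belong to this case?' -- the kind of
# coordinated lookup needed when synthesizing a case record.
specimens = db.execute("""
    SELECT b.specimen_type, b.freezer_slot
    FROM biospecimen b JOIN medical_case c ON b.case_id = c.case_id
    WHERE c.clinical_diagnosis = 'lymphoma'
    ORDER BY b.specimen_id""").fetchall()
print(specimens)  # [('serum', 'F2-A07'), ('tissue', 'F2-A08')]
```

The foreign key from `biospecimen` to `medical_case` is what keeps case data and specimen inventory synchronized in one system rather than two.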

  8. Wireless Sensor Networks Database: Data Management and Implementation

    Directory of Open Access Journals (Sweden)

    Ping Liu

    2014-04-01

Full Text Available As the core application of wireless sensor network technology, data management and processing have become a research hotspot in new databases. This article mainly studied data management in wireless sensor networks: in view of the characteristics of data in such networks, it discussed wireless sensor network data query and data integration technology in depth, proposed a mobile database structure based on wireless sensor networks, and carried out the overall design and implementation of the data management system. To realize the communication rules of the routing trees, the network manager uses a simple routing-tree maintenance algorithm. The implementation covers the ordinary node end, the server end of the mobile database at the gathering nodes, and the mobile client end, with the design focused on the query manager, the storage module and the synchronization module at the server end of the mobile database at the gathering nodes.

  9. Database management systems understanding and applying database technology

    CERN Document Server

    Gorman, Michael M

    1991-01-01

Database Management Systems: Understanding and Applying Database Technology focuses on the processes, methodologies, techniques, and approaches involved in database management systems (DBMSs). The book first takes a look at ANSI database standards and DBMS applications and components. Discussion focuses on application components and DBMS components, implementing the dynamic relationship application, problems and benefits of dynamic relationship DBMSs, the nature of a dynamic relationship application, ANSI/NDL, and DBMS standards. The manuscript then ponders on logical database, interrogation, and phy

  10. Distributed Database Management Systems A Practical Approach

    CERN Document Server

    Rahimi, Saeed K

    2010-01-01

This book addresses issues related to managing data across a distributed database system. It is unique because it covers traditional database theory and current research, explaining the difficulties in providing a unified user interface and global data dictionary. The book gives implementers guidance on hiding discrepancies across systems and creating the illusion of a single repository for users. It also includes three sample frameworks, implemented using J2SE with JMS, J2EE, and Microsoft .NET, that readers can use to learn how to implement a distributed database management system. IT and

  11. Database development and management

    CERN Document Server

    Chao, Lee

    2006-01-01

Introduction to Database Systems: Functions of a Database; Database Management System; Database Components; Database Development Process. Conceptual Design and Data Modeling: Introduction to Database Design Process; Understanding Business Process; Entity-Relationship Data Model; Representing Business Process with Entity-Relationship Model. Table Structure and Normalization: Introduction to Tables; Table Normalization; Transforming Data Models to Relational Databases; DBMS Selection; Enforcing Constraints; Creating Database for Business Process. Physical Design and Database

  12. Column-oriented database management systems

    OpenAIRE

    Možina, David

    2013-01-01

In the following thesis I present column-oriented databases and, among other things, answer the question of why there is a need for them. In recent years column-oriented databases have received a lot of attention, even though columnar database management systems date back to the early seventies of the last century. I compare both systems for database management, the column-oriented database system and the row-oriented database system ...
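The row-versus-column contrast the thesis draws can be illustrated with a toy example (not taken from the thesis): in a row store every record is kept together, so an aggregate over one attribute must touch every record, while a column store keeps each attribute in its own contiguous array.

```python
# Row-oriented layout: each record stored together.
rows = [
    {"id": 1, "city": "Ljubljana", "sales": 120},
    {"id": 2, "city": "Maribor",   "sales": 80},
    {"id": 3, "city": "Ljubljana", "sales": 50},
]

# Column-oriented layout: each attribute stored together.
columns = {
    "id":    [1, 2, 3],
    "city":  ["Ljubljana", "Maribor", "Ljubljana"],
    "sales": [120, 80, 50],
}

# Row store: the scan pulls in whole records just to sum one field.
row_total = sum(r["sales"] for r in rows)

# Column store: only the 'sales' array is read -- less I/O per query,
# and the homogeneous array compresses well, a second classic advantage
# for analytical workloads.
col_total = sum(columns["sales"])

print(row_total, col_total)  # 250 250
```

Both layouts hold the same data; the difference is purely which values are adjacent on disk, which is why columnar systems favour analytics and row stores favour record-at-a-time transactions.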

  13. Establishment of database system for management of KAERI wastes

    International Nuclear Information System (INIS)

    Shon, J. S.; Kim, K. J.; Ahn, S. J.

    2004-07-01

Radioactive wastes generated by KAERI have various types, nuclides and characteristics. Managing and controlling these wastes requires systematic record management, efficient searching and quick statistics. Information about the radioactive waste generated and stored by KAERI is the basic element in constructing a rapid information system for national cooperative management of radioactive waste. In this study, the Radioactive Waste Management Integration System (RAWMIS) was developed. It is aimed at managing records of radioactive wastes, improving the efficiency of management, and supporting WACID (Waste Comprehensive Integration Database System), the national radioactive waste integrated safety management system of Korea. The major information in RAWMIS, driven by user requirements, covers generation, gathering, transfer, treatment and storage for solid waste, liquid waste, gaseous waste and waste related to spent fuel. RAWMIS is composed of a database, software interfacing users with the database, and software for a manager, and was designed with a Client/Server structure. RAWMIS will be a useful tool for analyzing radioactive waste management and radiation safety management. The system is also developed to share information with associated companies, and it can be expected to support research and development of radioactive waste treatment technology

  14. Design and utilization of a Flight Test Engineering Database Management System at the NASA Dryden Flight Research Facility

    Science.gov (United States)

    Knighton, Donna L.

    1992-01-01

    A Flight Test Engineering Database Management System (FTE DBMS) was designed and implemented at the NASA Dryden Flight Research Facility. The X-29 Forward Swept Wing Advanced Technology Demonstrator flight research program was chosen for the initial system development and implementation. The FTE DBMS greatly assisted in planning and 'mass production' card preparation for an accelerated X-29 research program. Improved Test Plan tracking and maneuver management for a high flight-rate program were proven, and flight rates of up to three flights per day, two times per week were maintained.

  15. MonetDB: Two Decades of Research in Column-oriented Database Architectures

    NARCIS (Netherlands)

    Idreos, S.; Groffen, F.; Nes, N.; Manegold, S.; Mullender, S.; Kersten, M.

    2012-01-01

    MonetDB is a state-of-the-art open-source column-store database management system targeting applications in need for analytics over large collections of data. MonetDB is actively used nowadays in health care, in telecommunications as well as in scientific databases and in data management research,

  16. Security Research on Engineering Database System

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

The engine engineering database system is a CAD-oriented applied database management system with the capability of managing distributed data. The paper discusses the security issues of the engine engineering database management system (EDBMS). Through study and analysis of database security, a series of security rules is drawn up that reaches the B1-level security standard, including discretionary access control (DAC), mandatory access control (MAC) and audit. The EDBMS implements functions of DAC, ...

  17. PlantDB – a versatile database for managing plant research

    Directory of Open Access Journals (Sweden)

    Gruissem Wilhelm

    2008-01-01

    Full Text Available Abstract Background Research in plant science laboratories often involves usage of many different species, cultivars, ecotypes, mutants, alleles or transgenic lines. This creates a great challenge to keep track of the identity of experimental plants and stored samples or seeds. Results Here, we describe PlantDB – a Microsoft® Office Access database – with a user-friendly front-end for managing information relevant for experimental plants. PlantDB can hold information about plants of different species, cultivars or genetic composition. Introduction of a concise identifier system allows easy generation of pedigree trees. In addition, all information about any experimental plant – from growth conditions and dates over extracted samples such as RNA to files containing images of the plants – can be linked unequivocally. Conclusion We have been using PlantDB for several years in our laboratory and found that it greatly facilitates access to relevant information.
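PlantDB itself is a Microsoft Access database, but the concise identifier system it describes, where each plant records its parent, is what makes pedigree trees easy to generate. A hedged sketch of that idea with hypothetical identifiers and a self-referencing table (here in SQLite for illustration):

```python
import sqlite3

# Hypothetical plant table: parent_id points back into the same table,
# so a pedigree is a chain of identifiers. Names and IDs are invented
# for the example, not PlantDB's actual Access schema.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE plant (
    plant_id  TEXT PRIMARY KEY,
    parent_id TEXT REFERENCES plant(plant_id),
    genotype  TEXT)""")
db.executemany("INSERT INTO plant VALUES (?, ?, ?)", [
    ("P001", None,   "Col-0 wild type"),
    ("P002", "P001", "T-DNA insertion line"),
    ("P003", "P002", "homozygous mutant"),
])

# Walk up the pedigree of P003 with a recursive query.
ancestry = db.execute("""
    WITH RECURSIVE pedigree(plant_id, parent_id) AS (
        SELECT plant_id, parent_id FROM plant WHERE plant_id = 'P003'
        UNION ALL
        SELECT p.plant_id, p.parent_id
        FROM plant p JOIN pedigree g ON p.plant_id = g.parent_id)
    SELECT plant_id FROM pedigree""").fetchall()
print([r[0] for r in ancestry])  # ['P003', 'P002', 'P001']
```

The same parent link lets samples, images or growth records attached to any plant be traced unambiguously through the whole lineage.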

  18. NIRS database of the original research database

    International Nuclear Information System (INIS)

    Morita, Kyoko

    1991-01-01

Recently, library staff arranged and compiled the original research papers that have been written by researchers in the 33 years since the National Institute of Radiological Sciences (NIRS) was established. This paper describes how the internal database of original research papers was created. It is a small example of a hand-made database, accumulated by staff members drawing on whatever knowledge of computers or computer programming they have. (author)

  19. Negative Effects of Learning Spreadsheet Management on Learning Database Management

    Science.gov (United States)

    Vágner, Anikó; Zsakó, László

    2015-01-01

A lot of students learn spreadsheet management before database management. The similarities between the two can cause many negative effects when learning database management. In this article, we consider these similarities and explain what can cause problems. First, we analyse the basic concepts such as table, database, row, cell, reference, etc. Then, we…

  20. Geo-scientific database for research and development purposes

    International Nuclear Information System (INIS)

    Tabani, P.; Mangeot, A.; Crabol, V.; Delage, P.; Dewonck, S.; Auriere, C.

    2012-01-01

Document available in extended abstract form only. The Research and Development Division must manage, in a secure and reliable manner, a large number of data from diverse scientific disciplines and means of acquisition (observations, measurements, experiments, etc.). This management is particularly important for the Underground Research Laboratory, the source of many continuously recorded measurements. Thus, from its conception, Andra has implemented two management tools for scientific information: the 'Acquisition System and Data Management' (SAGD) and the GEO database with its associated applications. Beyond its own needs, Andra wants to share its achievements with the scientific community, and it therefore provides the data stored in its databases, or samples of rock or water, when they are available. SAGD manages data from sensors installed at several sites. Some sites are on the surface (piezometric, atmospheric and environmental stations); the others are in the Underground Research Laboratory. The system also incorporates data from experiments in which Andra participates at the Mont Terri Laboratory in Switzerland. SAGD fulfils its objectives by: making all experimental data from measurement points available in real time on a single system, to Andra scientists but also to the partners and providers who need them; displaying the recorded data over temporal windows with specific time steps; allowing remote control of the experiments; ensuring the traceability of all recorded information; and ensuring data storage in a database. SAGD was deployed in the first experimental drift at -445 m in November 2004. It was subsequently extended to the underground Mont Terri laboratory in Switzerland in 2005, to the entire surface logging network of the Meuse/Haute-Marne Center in 2008, and to the environmental network in 2011. All information is acquired, stored and managed by software called Geoscope. This software

  1. Djeen (Database for Joomla!'s Extensible Engine): a research information management system for flexible multi-technology project administration.

    Science.gov (United States)

    Stahl, Olivier; Duvergey, Hugo; Guille, Arnaud; Blondin, Fanny; Vecchio, Alexandre Del; Finetti, Pascal; Granjeaud, Samuel; Vigy, Oana; Bidaut, Ghislain

    2013-06-06

With the advance of post-genomic technologies, the need for tools to manage large-scale data in biology becomes more pressing. This involves annotating and storing data securely, as well as granting permissions flexibly across several technologies (all array types, flow cytometry, proteomics) for collaborative work and data sharing. This task is not easily achieved with most systems available today. We developed Djeen (Database for Joomla!'s Extensible Engine), a new Research Information Management System (RIMS) for collaborative projects. Djeen is a user-friendly application designed to streamline data storage and annotation collaboratively. Its database model, kept simple, is compliant with most technologies and allows storing and managing heterogeneous data with the same system. Advanced permissions are managed through different roles. Templates allow Minimum Information (MI) compliance. Djeen allows managing projects associated with heterogeneous data types while enforcing annotation integrity and minimum information. Projects are managed within a hierarchy and user permissions are finely grained for each project, user and group. Djeen Component source code (version 1.5.1) and installation documentation are available under the CeCILL license from http://sourceforge.net/projects/djeen/files and supplementary material.

  2. TWRS technical baseline database manager definition document

    International Nuclear Information System (INIS)

    Acree, C.D.

    1997-01-01

    This document serves as a guide for using the TWRS Technical Baseline Database Management Systems Engineering (SE) support tool in performing SE activities for the Tank Waste Remediation System (TWRS). This document will provide a consistent interpretation of the relationships between the TWRS Technical Baseline Database Management software and the present TWRS SE practices. The Database Manager currently utilized is the RDD-1000 System manufactured by the Ascent Logic Corporation. In other documents, the term RDD-1000 may be used interchangeably with TWRS Technical Baseline Database Manager

  3. GiSAO.db: a database for ageing research

    Directory of Open Access Journals (Sweden)

    Grillari Johannes

    2011-05-01

Full Text Available Abstract Background Age-related gene expression patterns of Homo sapiens as well as of model organisms such as Mus musculus, Saccharomyces cerevisiae, Caenorhabditis elegans and Drosophila melanogaster are a basis for understanding the genetic mechanisms of ageing. For effective analysis and interpretation of expression profiles it is necessary to store and manage huge amounts of data in an organized way, so that these data can be accessed and processed easily. Description GiSAO.db (Genes involved in Senescence, Apoptosis and Oxidative stress database) is a web-based database system for storing and retrieving ageing-related experimental data. Expression data of genes and miRNAs, annotation data like gene identifiers and GO terms, ortholog data and data from follow-up experiments are stored in the database. A user-friendly web application provides access to the stored data. KEGG pathways were incorporated and links to external databases augment the information in GiSAO.db. Search functions facilitate retrieval of data, which can also be exported for further processing. Conclusions We have developed a centralized database that is very well suited for the management of data for ageing research. The database can be accessed at https://gisao.genome.tugraz.at and all the stored data can be viewed with a guest account.

  4. Development of the severe accident risk information database management system SARD

    International Nuclear Information System (INIS)

    Ahn, Kwang Il; Kim, Dong Ha

    2003-01-01

    The main purpose of this report is to introduce essential features and functions of a severe accident risk information management system, SARD (Severe Accident Risk Database Management System) version 1.0, which has been developed in Korea Atomic Energy Research Institute, and database management and data retrieval procedures through the system. The present database management system has powerful capabilities that can store automatically and manage systematically the plant-specific severe accident analysis results for core damage sequences leading to severe accidents, and search intelligently the related severe accident risk information. For that purpose, the present database system mainly takes into account the plant-specific severe accident sequences obtained from the Level 2 Probabilistic Safety Assessments (PSAs), base case analysis results for various severe accident sequences (such as code responses and summary for key-event timings), and related sensitivity analysis results for key input parameters/models employed in the severe accident codes. Accordingly, the present database system can be effectively applied in supporting the Level 2 PSA of similar plants, for fast prediction and intelligent retrieval of the required severe accident risk information for the specific plant whose information was previously stored in the database system, and development of plant-specific severe accident management strategies


  6. Selection of nuclear power information database management system

    International Nuclear Information System (INIS)

    Zhang Shuxin; Wu Jianlei

    1996-01-01

Given the state of present database technology, in order to build the Chinese nuclear power information database (NPIDB) for the nuclear industry efficiently and from a high starting point, an important task is to select a proper database management system (DBMS), which is key to building the database successfully. This article therefore explains how to build a practical nuclear power information database, covering the functions of different database management systems, the reasons for selecting a relational database management system (RDBMS), the principles for selecting an RDBMS, the recommendation of the ORACLE management system as the software with which to build the database, and so on

  7. Scheme of database structure on decommissioning of the research reactor

    International Nuclear Information System (INIS)

    Park, H. S.; Park, S. K.; Kim, H. R.; Lee, D. K.; Jung, K. J.

    2001-01-01

ISP (Information Strategy Planning), the first step of the whole database development, has been studied in order to manage effectively the information and data related to the decommissioning activities of the Korea Research Reactors 1 and 2 (KRR-1 and 2). Since Korea has not yet acquired decommissioning database management technology, the record management systems (RMS) of large nuclear facilities in countries with national experience, such as the U.S.A., Japan, Belgium and Russia, were reviewed. To construct the database structure, information on the whole range of decommissioning activities, such as working information, radioactive waste treatment, and radiological surveying and analysis, has been extracted from the whole dismantling process. These information and data will be used as the basic data for analyzing the matrix to find the entity relationship diagram, and will contribute to the establishment of a business system design and the development of a decommissioning database system as well

  8. Design research of uranium mine borehole database

    International Nuclear Information System (INIS)

    Xie Huaming; Hu Guangdao; Zhu Xianglin; Chen Dehua; Chen Miaoshun

    2008-01-01

With the short supply of energy sources, exploration for uranium has been stepped up, but the storage, analysis and usage of uranium exploration data are currently not highly computerized in China; the data are poorly shared and used, and cannot meet the needs of production and research. This would be remedied if the data were stored and managed in a database system. The conceptual structure design, logical structure design and data integrity checks are discussed according to the demands of applications and the analysis of uranium exploration data. An application of the database is illustrated finally. (authors)

  9. Phynx: an open source software solution supporting data management and web-based patient-level data review for drug safety studies in the general practice research database and other health care databases.

    Science.gov (United States)

    Egbring, Marco; Kullak-Ublick, Gerd A; Russmann, Stefan

    2010-01-01

Our aim was to develop a software solution that supports management and clinical review of patient data from electronic medical records databases or claims databases for pharmacoepidemiological drug safety studies. We used open source software to build a data management system and an internet application with a Flex client on a Java application server with a MySQL database backend. The application is hosted on Amazon Elastic Compute Cloud. This solution, named Phynx, supports data management, Web-based display of electronic patient information, and interactive review of patient-level information in the individual clinical context. The system was applied to a dataset from the UK General Practice Research Database (GPRD). Our solution can be set up and customized with limited programming resources, and there is almost no extra cost for software. Access times are short, the displayed information is structured in chronological order and visually attractive, and selected information such as drug exposure can be blinded. External experts can review patient profiles and save evaluations and comments via a common Web browser. Phynx provides a flexible and economical solution for patient-level review of electronic medical information from databases considering the individual clinical context. It can therefore make an important contribution to efficient validation of outcome assessment in drug safety database studies.
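The blinding step mentioned above, masking selected fields such as drug exposure before an external reviewer sees a patient profile, can be sketched in a few lines. The field names and masking token here are purely hypothetical, not Phynx's actual data model:

```python
# Hedged sketch of field-level blinding for patient-level review.
# Field names are invented for illustration.
def blind_profile(profile, blinded_fields):
    """Return a copy of the record with the given fields masked."""
    return {k: ("***blinded***" if k in blinded_fields else v)
            for k, v in profile.items()}

profile = {
    "patient_id":    "GPRD-0042",
    "event_date":    "2008-03-14",
    "diagnosis":     "acute liver injury",
    "drug_exposure": "drug A, 50 mg daily",
}

# The reviewer sees the clinical context but not the exposure,
# preserving blinded outcome adjudication.
for_review = blind_profile(profile, {"drug_exposure"})
print(for_review["drug_exposure"])  # ***blinded***
print(for_review["diagnosis"])      # acute liver injury
```

Because the function returns a copy, the original record stays intact for the unblinded analysis dataset.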

  10. Records Management Database

    Data.gov (United States)

US Agency for International Development — The Records Management Database is a tool created in Microsoft Access specifically for USAID use. It contains metadata in order to access and retrieve the information...

  11. Database Support for Workflow Management: The WIDE Project

    NARCIS (Netherlands)

    Grefen, P.W.P.J.; Pernici, B; Sánchez, G.; Unknown, [Unknown

    1999-01-01

    Database Support for Workflow Management: The WIDE Project presents the results of the ESPRIT WIDE project on advanced database support for workflow management. The book discusses the state of the art in combining database management and workflow management technology, especially in the areas of

  12. Waste management research abstracts. Information on radioactive waste management research in progress or planned. Vol. 30

    International Nuclear Information System (INIS)

    2005-11-01

    This issue contains 90 abstracts that describe research in progress in the field of radioactive waste management. The abstracts present ongoing work in various countries and international organizations. Although the abstracts are indexed by country, some programmes are actually the result of co-operation among several countries. Indeed, a primary reason for providing this compilation of programmes, institutions and scientists engaged in research into radioactive waste management is to increase international co-operation and facilitate communications. Data provided by researchers for publication in WMRA 30 were entered into a research in progress database named IRAIS (International Research Abstracts Information System). The IRAIS database is available via the Internet at the following URL: http://www.iaea.org/programmes/irais/ This database will continue to be updated as new abstracts are submitted by researchers world-wide. The abstracts are listed by country (full name) in alphabetical order. All abstracts are in English. The volume includes six indexes: principal investigator, title, performing organization, descriptors (key words), topic codes and country

  13. Management of virtualized infrastructure for physics databases

    International Nuclear Information System (INIS)

    Topurov, Anton; Gallerani, Luigi; Chatal, Francois; Piorkowski, Mariusz

    2012-01-01

    Demands for information storage of physics metadata are rapidly increasing together with the requirements for its high availability. Most of the HEP laboratories are struggling to squeeze more from their computer centers, thus focus on virtualizing available resources. CERN started investigating database virtualization in early 2006, first by testing database performance and stability on native Xen. Since then we have been closely evaluating the constantly evolving functionality of virtualisation solutions for database and middle tier together with the associated management applications – Oracle's Enterprise Manager and VM Manager. This session will detail our long experience in dealing with virtualized environments, focusing on newest Oracle OVM 3.0 for x86 and Oracle Enterprise Manager functionality for efficiently managing your virtualized database infrastructure.

  14. Djeen (Database for Joomla!’s Extensible Engine): a research information management system for flexible multi-technology project administration

    Science.gov (United States)

    2013-01-01

    Background With the advance of post-genomic technologies, the need for tools to manage large scale data in biology becomes more pressing. This involves annotating and storing data securely, as well as granting permissions flexibly with several technologies (all array types, flow cytometry, proteomics) for collaborative work and data sharing. This task is not easily achieved with most systems available today. Findings We developed Djeen (Database for Joomla!’s Extensible Engine), a new Research Information Management System (RIMS) for collaborative projects. Djeen is a user-friendly application, designed to streamline data storage and annotation collaboratively. Its database model, kept simple, is compliant with most technologies and allows storing and managing heterogeneous data with the same system. Advanced permissions are managed through different roles. Templates allow Minimum Information (MI) compliance. Conclusion Djeen allows managing projects associated with heterogeneous data types while enforcing annotation integrity and minimum information. Projects are managed within a hierarchy and user permissions are finely-grained for each project, user and group. Djeen Component source code (version 1.5.1) and installation documentation are available under CeCILL license from http://sourceforge.net/projects/djeen/files and supplementary material. PMID:23742665

  15. Microcomputer Database Management Systems for Bibliographic Data.

    Science.gov (United States)

    Pollard, Richard

    1986-01-01

    Discusses criteria for evaluating microcomputer database management systems (DBMS) used for storage and retrieval of bibliographic data. Two popular types of microcomputer DBMS--file management systems and relational database management systems--are evaluated with respect to these criteria. (Author/MBR)

  16. KALIMER database development (database configuration and design methodology)

    International Nuclear Information System (INIS)

    Jeong, Kwan Seong; Kwon, Young Min; Lee, Young Bum; Chang, Won Pyo; Hahn, Do Hee

    2001-10-01

    KALIMER Database is an advanced database to utilize the integration management for Liquid Metal Reactor Design Technology Development using Web applications. The KALIMER design database consists of a Results Database, Inter-Office Communication (IOC), a 3D CAD database, a Team Cooperation system, and Reserved Documents. The Results Database holds the research results produced during phase II of Liquid Metal Reactor Design Technology Development under the mid-term and long-term nuclear R and D programme. IOC is a linkage control system between sub-projects to share and integrate the research results for KALIMER. The 3D CAD Database is a schematic design overview for KALIMER. The Team Cooperation System informs team members of research cooperation and meetings. Finally, KALIMER Reserved Documents was developed to manage collected data and several documents since project accomplishment. This report describes the features of the hardware and software and the database design methodology for KALIMER.

  17. MonetDB: Two Decades of Research in Column-oriented Database Architectures

    OpenAIRE

    Idreos, Stratos; Groffen, Fabian; Nes, Niels; Manegold, Stefan; Mullender, Sjoerd; Kersten, Martin

    2012-01-01

    MonetDB is a state-of-the-art open-source column-store database management system targeting applications in need of analytics over large collections of data. MonetDB is actively used nowadays in health care, in telecommunications as well as in scientific databases and in data management research, accumulating on average more than 10,000 downloads on a monthly basis. This paper gives a brief overview of the MonetDB technology as it developed over the past two decades and the main r...

  18. Managing the BABAR Object Oriented Database

    International Nuclear Information System (INIS)

    Hasan, Adil

    2002-01-01

    The BaBar experiment stores its data in an Object Oriented federated database supplied by Objectivity/DB(tm). This database is currently 350TB in size and is expected to increase considerably as the experiment matures. Management of this database requires careful planning and specialized tools in order to make the data available to physicists in an efficient and timely manner. We discuss the operational issues and management tools that were developed during the previous run to deal with this vast quantity of data at SLAC

  19. Ageing Management Program Database

    International Nuclear Information System (INIS)

    Basic, I.; Vrbanic, I.; Zabric, I.; Savli, S.

    2008-01-01

    The aspects of plant ageing management (AM) have gained increasing attention over the last ten years. Numerous technical studies have been performed to study the impact of ageing mechanisms on the safe and reliable operation of nuclear power plants. National research activities have been initiated or are in progress to provide the technical basis for decision making processes. The long-term operation of nuclear power plants is influenced by economic considerations, the socio-economic environment including public acceptance, developments in research and the regulatory framework, and the availability of technical infrastructure to maintain and service the systems, structures and components, as well as of qualified personnel. Besides national activities there are a number of international activities, in particular under the umbrella of the IAEA, the OECD and the EU. The paper discusses the process, procedure and database developed for Slovenian Nuclear Safety Administration (SNSA) surveillance of the ageing process of Nuclear Power Plant Krsko. (author)

  20. Content And Multimedia Database Management Systems

    NARCIS (Netherlands)

    de Vries, A.P.

    1999-01-01

    A database management system is a general-purpose software system that facilitates the processes of defining, constructing, and manipulating databases for various applications. The main characteristic of the ‘database approach’ is that it increases the value of data by its emphasis on data

  1. [The future of clinical laboratory database management system].

    Science.gov (United States)

    Kambe, M; Imidy, D; Matsubara, A; Sugimoto, Y

    1999-09-01

    To assess the present status of the clinical laboratory database management system, the difference between the Clinical Laboratory Information System and Clinical Laboratory System was explained in this study. Although three kinds of database management systems (DBMS) were shown including the relational model, tree model and network model, the relational model was found to be the best DBMS for the clinical laboratory database based on our experience and developments of some clinical laboratory expert systems. As a future clinical laboratory database management system, the IC card system connected to an automatic chemical analyzer was proposed for personal health data management and a microscope/video system was proposed for dynamic data management of leukocytes or bacteria.
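
The abstract's conclusion that the relational model suits a clinical laboratory database can be illustrated with a minimal normalized schema. The tables and values below are invented for illustration, and SQLite stands in for a production DBMS.

```python
import sqlite3

# A minimal relational sketch: patients, test definitions, and results
# kept in separate tables, linked by keys rather than nested records.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE patient (patient_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE test    (test_code TEXT PRIMARY KEY, name TEXT, unit TEXT);
CREATE TABLE result  (patient_id INTEGER REFERENCES patient,
                      test_code TEXT REFERENCES test,
                      drawn_at TEXT, value REAL);
""")
db.execute("INSERT INTO patient VALUES (1, 'A. Example')")
db.execute("INSERT INTO test VALUES ('GLU', 'Glucose', 'mg/dL')")
db.execute("INSERT INTO result VALUES (1, 'GLU', '1999-09-01', 105.0)")

# Unlike the tree or network models the paper compares, ad hoc questions
# become one declarative join instead of bespoke traversal code.
row = db.execute("""
    SELECT p.name, t.name, r.value, t.unit
    FROM result r JOIN patient p USING (patient_id)
                  JOIN test t USING (test_code)
""").fetchone()
print(row)  # ('A. Example', 'Glucose', 105.0, 'mg/dL')
```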

  2. Using Large Diabetes Databases for Research.

    Science.gov (United States)

    Wild, Sarah; Fischbacher, Colin; McKnight, John

    2016-09-01

    There are an increasing number of clinical, administrative and trial databases that can be used for research. These are particularly valuable if there are opportunities for linkage to other databases. This paper describes examples of the use of large diabetes databases for research. It reviews the advantages and disadvantages of using large diabetes databases for research and suggests solutions for some challenges. Large, high-quality databases offer potential sources of information for research at relatively low cost. Fundamental issues for using databases for research are the completeness of capture of cases within the population and time period of interest and accuracy of the diagnosis of diabetes and outcomes of interest. The extent to which people included in the database are representative should be considered if the database is not population based and there is the intention to extrapolate findings to the wider diabetes population. Information on key variables such as date of diagnosis or duration of diabetes may not be available at all, may be inaccurate or may contain a large amount of missing data. Information on key confounding factors is rarely available for the nondiabetic or general population limiting comparisons with the population of people with diabetes. However comparisons that allow for differences in distribution of important demographic factors may be feasible using data for the whole population or a matched cohort study design. In summary, diabetes databases can be used to address important research questions. Understanding the strengths and limitations of this approach is crucial to interpret the findings appropriately. © 2016 Diabetes Technology Society.

  3. The Use of a Relational Database in Qualitative Research on Educational Computing.

    Science.gov (United States)

    Winer, Laura R.; Carriere, Mario

    1990-01-01

    Discusses the use of a relational database as a data management and analysis tool for nonexperimental qualitative research, and describes the use of the Reflex Plus database in the Vitrine 2001 project in Quebec to study computer-based learning environments. Information systems are also discussed, and the use of a conceptual model is explained.

  4. Radiation safety research information database

    International Nuclear Information System (INIS)

    Yukawa, Masae; Miyamoto, Kiriko; Takeda, Hiroshi; Kuroda, Noriko; Yamamoto, Kazuhiko

    2004-01-01

    The National Institute of Radiological Sciences in Japan began to construct the 'Radiation Safety Research Information Database' in 2001. The research information database is of great service in evaluating the effects of radiation on people, by estimating exposure doses from determinations of radiation and radioactive matter in the environment. The above database (DB) consists of seven DBs: the Nirs Air Borne Dust Survey DB, Nirs Environmental Tritium Survey DB, Nirs Environmental Carbon Survey DB, Environmental Radiation Levels, Abe, Metabolic Database for Assessment of Internal Dose, Graphs of Predicted Monitoring Data, and Nirs nuclear installation environment water tritium survey DB. An outline of the overall DB and of each component DB is given. (S.Y.)

  5. KALIMER database development

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, Kwan Seong; Lee, Yong Bum; Jeong, Hae Yong; Ha, Kwi Seok

    2003-03-01

    KALIMER database is an advanced database to utilize the integration management for liquid metal reactor design technology development using Web applications. The KALIMER design database is composed of a results database, Inter-Office Communication (IOC), a 3D CAD database, and a reserved documents database. The results database holds the research results produced during all phases of liquid metal reactor design technology development under the mid-term and long-term nuclear R and D programme. IOC is a linkage control system between sub-projects to share and integrate the research results for KALIMER. The 3D CAD database is a schematic overview of the KALIMER design structure. The reserved documents database is developed to manage several documents and reports since project accomplishment.

  6. KALIMER database development

    International Nuclear Information System (INIS)

    Jeong, Kwan Seong; Lee, Yong Bum; Jeong, Hae Yong; Ha, Kwi Seok

    2003-03-01

    KALIMER database is an advanced database to utilize the integration management for liquid metal reactor design technology development using Web applications. The KALIMER design database is composed of a results database, Inter-Office Communication (IOC), a 3D CAD database, and a reserved documents database. The results database holds the research results produced during all phases of liquid metal reactor design technology development under the mid-term and long-term nuclear R and D programme. IOC is a linkage control system between sub-projects to share and integrate the research results for KALIMER. The 3D CAD database is a schematic overview of the KALIMER design structure. The reserved documents database is developed to manage several documents and reports since project accomplishment.

  7. Reexamining Operating System Support for Database Management

    OpenAIRE

    Vasil, Tim

    2003-01-01

    In 1981, Michael Stonebraker [21] observed that database management systems written for commodity operating systems could not effectively take advantage of key operating system services, such as buffer pool management and process scheduling, due to expensive overhead and lack of customizability. The “not quite right” fit between these kernel services and the demands of database systems forced database designers to work around such limitations or re-implement some kernel functionality in user ...

  8. Study on managing EPICS database using ORACLE

    International Nuclear Information System (INIS)

    Liu Shu; Wang Chunhong; Zhao Jijiu

    2007-01-01

    EPICS is used as a development toolkit for the BEPCII control system. The core of EPICS is a distributed database residing in front-end machines. The distributed database is usually created by tools such as VDCT or a text editor on the host, then loaded to front-end target IOCs through the network. In the BEPCII control system there are about 20,000 signals, distributed over more than 20 IOCs. All the databases are developed by device control engineers using VDCT or a text editor; there are no uniform tools providing transparent management. The paper first presents the current status of EPICS database management in many labs. Secondly, it studies the EPICS database and the interface between ORACLE and the EPICS database. Finally, it introduces the software development and its application in the BEPCII control system. (authors)
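
The bridge studied here amounts to flattening EPICS record definitions (the text `.db` format loaded into IOCs) into rows that a relational database such as ORACLE can manage centrally. The parser below is illustrative, not the BEPCII tool itself; it handles only the simple record/field syntax shown.

```python
import re

# A toy fragment in the EPICS record-definition text format; the record
# and field names are invented for illustration.
DB_TEXT = '''
record(ai, "BPM01:X") {
    field(DESC, "beam position x")
    field(EGU, "mm")
}
record(ao, "PS01:SET") {
    field(DESC, "power supply setpoint")
}
'''

def parse_records(text):
    """Flatten record(type, "name") { field(...) } blocks into dict rows
    suitable for insertion into a relational table."""
    rows = []
    for rtype, name, body in re.findall(
            r'record\((\w+),\s*"([^"]+)"\)\s*\{(.*?)\}', text, re.S):
        fields = dict(re.findall(r'field\((\w+),\s*"([^"]*)"\)', body))
        rows.append({"record": name, "type": rtype, **fields})
    return rows

for row in parse_records(DB_TEXT):
    print(row)
```

Once in row form, the 20,000-signal configuration can be queried, versioned, and regenerated as IOC load files from one central database.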

  9. Land and Waste Management Research Publications

    Science.gov (United States)

    Resources from the Science Inventory database of EPA's Office of Research and Development, as well as EPA's Science Matters journal, include research on managing contaminated sites and ground water modeling and decontamination technologies.

  10. Plant operation data collection and database management using NIC system

    International Nuclear Information System (INIS)

    Inase, S.

    1990-01-01

    The Nuclear Information Center (NIC), a division of the Central Research Institute of Electric Power Industry, collects nuclear power plant operation and maintenance information both in Japan and abroad and transmits the information to all domestic utilities so that it can be effectively utilized for safe plant operation and reliability enhancement. The collected information is entered into the database system after being keyworded by NIC. The database system, the Nuclear Information database/Communication System (NICS), has been developed by NIC for storage and management of the collected information. The objectives of keywording are retrieval and classification by keyword category.

  11. Database on veterinary clinical research in homeopathy.

    Science.gov (United States)

    Clausen, Jürgen; Albrecht, Henning

    2010-07-01

    The aim of the present report is to provide an overview of the first database on clinical research in veterinary homeopathy, based on detailed searches in the database 'Veterinary Clinical Research-Database in Homeopathy' (http://www.carstens-stiftung.de/clinresvet/index.php). The database contains about 200 entries of randomised clinical trials, non-randomised clinical trials, observational studies, drug provings, case reports and case series. Twenty-two clinical fields are covered and eight different groups of species are included. The database is free of charge and open to all interested veterinarians and researchers. It enables researchers and veterinarians, sceptics and supporters, to get a quick overview of the status of veterinary clinical research in homeopathy, and it eases the preparation of systematic reviews and may stimulate replications or even new studies. 2010 Elsevier Ltd. All rights reserved.

  12. Development of a computational database for application in Probabilistic Safety Analysis of nuclear research reactors

    International Nuclear Information System (INIS)

    Macedo, Vagner dos Santos

    2016-01-01

    The objective of this work is to present the computational database that was developed to store technical information and process data on component operation, failure and maintenance for the nuclear research reactors located at the Nuclear and Energy Research Institute (Instituto de Pesquisas Energéticas e Nucleares, IPEN), in São Paulo, Brazil. Data extracted from this database may be applied in the Probabilistic Safety Analysis of these research reactors or in less complex quantitative assessments related to safety, reliability, availability and maintainability of these facilities. This database may be accessed by users of the corporate network, named IPEN intranet. Professionals who require access to the database must be duly registered by the system administrator, so that they will be able to consult and handle the information. The logical model adopted to represent the database structure is an entity-relationship model, which is in accordance with the protocols installed in IPEN intranet. The open-source relational database management system called MySQL, which is based on the Structured Query Language (SQL), was used in the development of this work. The PHP programming language was adopted to allow users to handle the database. Finally, the main result of this work was the creation of a web application for the component reliability database named PSADB, specifically developed for the research reactors of IPEN; furthermore, the database management system provides relevant information efficiently. (author)
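
An entity-relationship design of this kind, components on one side and failure events on the other, supports exactly the small quantitative assessments the abstract mentions. The sketch below is in the spirit of PSADB (which uses MySQL and PHP), but all names and figures are hypothetical and SQLite stands in so the example is self-contained.

```python
import sqlite3

# Illustrative two-entity schema: components and their failure events.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE component (comp_id INTEGER PRIMARY KEY, tag TEXT, system TEXT);
CREATE TABLE failure   (comp_id INTEGER REFERENCES component,
                        failed_on TEXT, mode TEXT);
""")
db.execute("INSERT INTO component VALUES (1, 'P-101', 'primary cooling')")
db.executemany("INSERT INTO failure VALUES (?, ?, ?)", [
    (1, "2014-03-02", "fails to start"),
    (1, "2015-07-19", "fails to run"),
])

# A simple query of the kind a PSA analyst might run: count failures of
# one component, then form a crude point estimate of its failure rate.
n = db.execute("""SELECT COUNT(*) FROM failure f
                  JOIN component c USING (comp_id)
                  WHERE c.tag = 'P-101'""").fetchone()[0]
operating_hours = 2 * 8760   # assumed exposure time, for illustration only
print(n, n / operating_hours)
```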

  13. Managing the BaBar object oriented database

    International Nuclear Information System (INIS)

    Hasan, A.; Trunov, A.

    2001-01-01

    The BaBar experiment stores its data in an Object Oriented federated database supplied by Objectivity/DB(tm). This database is currently 350TB in size and is expected to increase considerably as the experiment matures. Management of this database requires careful planning and specialized tools in order to make the data available to physicists in an efficient and timely manner. The authors discuss the operational issues and management tools that were developed during the previous run to deal with this vast quantity of data at SLAC

  14. A plant resource and experiment management system based on the Golm Plant Database as a basic tool for omics research

    Directory of Open Access Journals (Sweden)

    Selbig Joachim

    2008-05-01

    names generated by the system and barcode labels facilitate identification and management of the material. Web pages are provided as user interfaces to facilitate maintaining the system in an environment with many desktop computers and a rapidly changing user community. Web based search tools are the basis for joint use of the material by all researchers of the institute. Conclusion The Golm Plant Database system, which is based on a relational database, collects the genetic and environmental information on plant material during its production or experimental use at the Max-Planck-Institute of Molecular Plant Physiology. It thus provides information according to the MIAME standard for the component 'Sample' in a highly standardised format. The Plant Database system thus facilitates collaborative work and allows efficient queries in data analysis for systems biology research.

  15. Use of a Relational Database to Support Clinical Research: Application in a Diabetes Program

    Science.gov (United States)

    Lomatch, Diane; Truax, Terry; Savage, Peter

    1981-01-01

    A database has been established to support conduct of clinical research and monitor delivery of medical care for 1200 diabetic patients as part of the Michigan Diabetes Research and Training Center (MDRTC). Use of an intelligent microcomputer to enter and retrieve the data and use of a relational database management system (DBMS) to store and manage data have provided a flexible, efficient method of achieving both support of small projects and monitoring overall activity of the Diabetes Center Unit (DCU). Simplicity of access to data, efficiency in providing data for unanticipated requests, ease of manipulation of relations, security and “logical data independence” were important factors in choosing a relational DBMS. The ability to interface with an interactive statistical program and a graphics program is a major advantage of this system. Our database currently provides support for the operation and analysis of several ongoing research projects.

  16. Access database application in medical treatment management platform

    International Nuclear Information System (INIS)

    Wu Qingming

    2014-01-01

    For timely, accurate and flexible access to medical expenses data, we applied Microsoft Access 2003 database management software, and we finished the establishment of a management platform for medical expenses. By developing management platform for medical expenses, overall hospital costs for medical expenses can be controlled to achieve a real-time monitoring of medical expenses. Using the Access database management platform for medical expenses not only changes the management model, but also promotes a sound management system for medical expenses. (authors)

  17. Database basic design for safe management radioactive waste

    International Nuclear Information System (INIS)

    Son, D. C.; Ahn, K. I.; Jung, D. J.; Cho, Y. B.

    2003-01-01

    As the amount of radioactive waste and the related information to be managed are increasing, some organizations are implementing or planning computerized management of radioactive waste. Considering that information on the safe management of radioactive waste should be used in association with the national radioactive waste management project, standardization of data formats and protocols is required. The Korea Institute of Nuclear Safety (KINS) will establish and operate a nationwide integrated database in order to effectively manage the large amount of information on national radioactive waste. This database allows not only tracing and managing the trend of radioactive waste generation and storage but also producing reliable analysis results for the accumulated quantity. Consequently, it can provide the information necessary for national radioactive waste management policy and for related industry planning. This study explains the database design, which is the essential element for information management.
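
The trend-tracking and accumulation analysis described above reduces, at its simplest, to aggregation over standardized waste records. The schema and figures below are invented for illustration, with SQLite standing in for the actual system.

```python
import sqlite3

# Hypothetical standardized record: waste receipts by year and category.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE waste_receipt (
    year INTEGER, category TEXT, drums INTEGER)""")
db.executemany("INSERT INTO waste_receipt VALUES (?, ?, ?)", [
    (2001, "LLW", 120), (2002, "LLW", 140), (2002, "ILW", 15),
])

# Cumulative quantity in storage per category, the kind of figure a
# national policy report would draw from the integrated database.
for cat, total in db.execute(
        "SELECT category, SUM(drums) FROM waste_receipt "
        "GROUP BY category ORDER BY category"):
    print(cat, total)
```

With every organization reporting in one agreed format, the same query answers both the per-year trend and the accumulated total.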

  18. Performance Enhancements for Advanced Database Management Systems

    OpenAIRE

    Helmer, Sven

    2000-01-01

    New applications have emerged, demanding database management systems with enhanced functionality. However, high performance is a necessary precondition for the acceptance of such systems by end users. In this context we developed, implemented, and tested algorithms and index structures for improving the performance of advanced database management systems. We focused on index structures and join algorithms for set-valued attributes.

  19. The ATLAS Distributed Data Management System & Databases

    CERN Document Server

    Garonne, V; The ATLAS collaboration; Barisits, M; Beermann, T; Vigne, R; Serfon, C

    2013-01-01

    The ATLAS Distributed Data Management (DDM) System is responsible for the global management of petabytes of high energy physics data. The current system, DQ2, has a critical dependency on Relational Database Management Systems (RDBMS), like Oracle. RDBMS are well-suited to enforcing data integrity in online transaction processing applications, however, concerns have been raised about the scalability of its data warehouse-like workload. In particular, analysis of archived data or aggregation of transactional data for summary purposes is problematic. Therefore, we have evaluated new approaches to handle vast amounts of data. We have investigated a class of database technologies commonly referred to as NoSQL databases. This includes distributed filesystems, like HDFS, that support parallel execution of computational tasks on distributed data, as well as schema-less approaches via key-value stores, like HBase. In this talk we will describe our use cases in ATLAS, share our experiences with various databases used ...
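
The trade-off being evaluated can be sketched by contrasting the two access models: a relational row with integrity enforced by the schema versus a schema-less key-value record of the kind stores like HBase provide. All names below are illustrative, not the actual DQ2 schema, and SQLite and a plain dict stand in for Oracle and HBase.

```python
import sqlite3
import json

# Relational side: the schema itself enforces integrity (NOT NULL,
# primary key uniqueness), which suits transactional bookkeeping.
rdb = sqlite3.connect(":memory:")
rdb.execute("""CREATE TABLE replica (
    dataset TEXT NOT NULL, site TEXT NOT NULL, bytes INTEGER,
    PRIMARY KEY (dataset, site))""")
rdb.execute("INSERT INTO replica VALUES (?, ?, ?)",
            ("data12_8TeV.A", "CERN-PROD", 10**9))

# Key-value side: one opaque value per key; the store scales out easily,
# but aggregation and integrity checks move into application code.
kv = {}
kv[("data12_8TeV.A", "CERN-PROD")] = json.dumps({"bytes": 10**9})

row = rdb.execute("SELECT bytes FROM replica").fetchone()[0]
doc = json.loads(kv[("data12_8TeV.A", "CERN-PROD")])
print(row, doc["bytes"])
```

The warehouse-like workloads mentioned in the abstract (summaries over archived data) are exactly where pushing aggregation out of the RDBMS and onto distributed key-value data becomes attractive.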

  20. Database management in the new GANIL control system

    International Nuclear Information System (INIS)

    Lecorche, E.; Lermine, P.

    1993-01-01

    At the start of the new control system design, a decision was made to manage the huge amount of data by means of a database management system. The first implementations built on the INGRES relational database are described. Real-time and data management domains are shown, and problems induced by Ada/SQL interfacing are briefly discussed. Database management covers the whole hardware and software configuration of the GANIL equipment and the alarm system, both for the alarm configuration and for the alarm logs. Another field of application encompasses the archiving of beam parameters as a function of the various kinds of beams accelerated at GANIL (ion species, energies, charge states). (author) 3 refs., 4 figs

  1. The development of technical database of advanced spent fuel management process

    Energy Technology Data Exchange (ETDEWEB)

    Ro, Seung Gy; Byeon, Kee Hoh; Song, Dae Yong; Park, Seong Won; Shin, Young Jun

    1999-03-01

    The purpose of this study is to develop a technical database system that provides useful information to researchers who study the back end of the nuclear fuel cycle. A technical database of the advanced spent fuel management process was developed as a prototype system in 1997. In 1998, this database system was improved into a multi-user system, and a special database composed of thermochemical formation data and reaction data was appended. In this report, the detailed specification of our system design is described and the operating methods are illustrated as a user's manual. This report is also a useful reference for expanding the current system or interfacing it with other systems. (Author). 10 refs., 18 tabs., 46 figs.

  2. The development of technical database of advanced spent fuel management process

    International Nuclear Information System (INIS)

    Ro, Seung Gy; Byeon, Kee Hoh; Song, Dae Yong; Park, Seong Won; Shin, Young Jun

    1999-03-01

    The purpose of this study is to develop a technical database system that provides useful information to researchers who study the back end of the nuclear fuel cycle. A technical database of the advanced spent fuel management process was developed as a prototype system in 1997. In 1998, this database system was improved into a multi-user system, and a special database composed of thermochemical formation data and reaction data was appended. In this report, the detailed specification of our system design is described and the operating methods are illustrated as a user's manual. This report is also a useful reference for expanding the current system or interfacing it with other systems. (Author). 10 refs., 18 tabs., 46 figs.

  3. [Role and management of cancer clinical database in the application of gastric cancer precision medicine].

    Science.gov (United States)

    Li, Yuanfang; Zhou, Zhiwei

    2016-02-01

    Precision medicine is a new medical concept and medical model, which is based on personalized medicine, the rapid progress of genome sequencing technology, and the cross application of biological information and big data science. Precision medicine improves the diagnosis and treatment of gastric cancer through deeper analyses of its characteristics, pathogenesis and other core issues. A cancer clinical database is important to promote the development of precision medicine; therefore, it is necessary to pay close attention to its construction and management. The clinical database of Sun Yat-sen University Cancer Center is composed of a medical record database, a blood specimen bank, a tissue bank and a medical imaging database. In order to ensure the good quality of the database, its design and management should follow a strict standard operating procedure (SOP) model. Data sharing is an important way to improve medical research in the era of medical big data. The construction and management of clinical databases must also be strengthened and innovated.

  4. NGNP Risk Management Database: A Model for Managing Risk

    International Nuclear Information System (INIS)

    Collins, John

    2009-01-01

    To facilitate the implementation of the Risk Management Plan, the Next Generation Nuclear Plant (NGNP) Project has developed and employed an analytical software tool called the NGNP Risk Management System (RMS). A relational database developed in Microsoft® Access, the RMS provides conventional database utility including data maintenance, archiving, configuration control, and query ability. Additionally, the tool's design provides a number of unique capabilities specifically designed to facilitate the development and execution of activities outlined in the Risk Management Plan. Specifically, the RMS provides the capability to establish the risk baseline, document and analyze the risk reduction plan, track the current risk reduction status, organize risks by reference configuration system, subsystem, and component (SSC) and Area, and increase the level of NGNP decision making.

  5. NGNP Risk Management Database: A Model for Managing Risk

    Energy Technology Data Exchange (ETDEWEB)

    John Collins

    2009-09-01

    To facilitate the implementation of the Risk Management Plan, the Next Generation Nuclear Plant (NGNP) Project has developed and employed an analytical software tool called the NGNP Risk Management System (RMS). A relational database developed in Microsoft® Access, the RMS provides conventional database utility including data maintenance, archiving, configuration control, and query ability. Additionally, the tool’s design provides a number of unique capabilities specifically designed to facilitate the development and execution of activities outlined in the Risk Management Plan. Specifically, the RMS provides the capability to establish the risk baseline, document and analyze the risk reduction plan, track the current risk reduction status, organize risks by reference configuration system, subsystem, and component (SSC) and Area, and increase the level of NGNP decision making.

  6. Computerized nuclear material database management system for power reactors

    International Nuclear Information System (INIS)

    Cheng Binghao; Zhu Rongbao; Liu Daming; Cao Bin; Liu Ling; Tan Yajun; Jiang Jincai

    1994-01-01

    The software packages for nuclear material database management for power reactors are described. The database structure, data flow and model for management of the database are analysed. Also mentioned are the main functions and characteristics of the software packages, which have been successfully installed and used at both the Daya Bay Nuclear Power Plant and the Qinshan Nuclear Power Plant for the purpose of handling the nuclear material database automatically.

  7. A database system for the management of severe accident risk information, SARD

    International Nuclear Information System (INIS)

    Ahn, K. I.; Kim, D. H.

    2003-01-01

    The purpose of this paper is to introduce the main features and functions of a PC Windows-based database management system, SARD, which has been developed at the Korea Atomic Energy Research Institute for the automatic management and retrieval of severe accident risk information. The main functions of the system are implemented by three closely related but distinct modules: (1) fixing an initial environment for data storage and retrieval, (2) automatic loading and management of accident information, and (3) automatic search and retrieval of accident information. For this, the system manipulates various forms of plant-specific severe accident risk information, such as dominant severe accident sequences identified from the plant-specific Level 2 Probabilistic Safety Assessment (PSA) and accident sequence-specific information obtained from representative severe accident codes (e.g., base case and sensitivity analysis results, and summaries of key plant responses). The system makes possible fast prediction and intelligent retrieval of the required severe accident risk information for various accident sequences, and in turn it can be used to support the Level 2 PSA of similar plants and the development of plant-specific severe accident management strategies.

  8. A database system for the management of severe accident risk information, SARD

    Energy Technology Data Exchange (ETDEWEB)

    Ahn, K. I.; Kim, D. H. [KAERI, Taejon (Korea, Republic of)

    2003-10-01

    The purpose of this paper is to introduce the main features and functions of a PC Windows-based database management system, SARD, which has been developed at the Korea Atomic Energy Research Institute for the automatic management and retrieval of severe accident risk information. The main functions of the system are implemented by three closely related but distinct modules: (1) fixing an initial environment for data storage and retrieval, (2) automatic loading and management of accident information, and (3) automatic search and retrieval of accident information. For this, the system manipulates various forms of plant-specific severe accident risk information, such as dominant severe accident sequences identified from the plant-specific Level 2 Probabilistic Safety Assessment (PSA) and accident sequence-specific information obtained from representative severe accident codes (e.g., base case and sensitivity analysis results, and summaries of key plant responses). The system makes possible fast prediction and intelligent retrieval of the required severe accident risk information for various accident sequences, and in turn it can be used to support the Level 2 PSA of similar plants and the development of plant-specific severe accident management strategies.

  9. A multidisciplinary database for geophysical time series management

    Science.gov (United States)

    Montalto, P.; Aliotta, M.; Cassisi, C.; Prestifilippo, M.; Cannata, A.

    2013-12-01

    The variables collected by a sensor network constitute a heterogeneous data source that needs to be properly organized in order to be used in research and geophysical monitoring. The term time series refers to a set of observations of a given phenomenon acquired sequentially in time; when the time intervals are equally spaced, one speaks of a sampling period or sampling frequency. Our work describes in detail a possible methodology for the storage and management of time series using a specific data structure. We designed a framework, hereinafter called TSDSystem (Time Series Database System), to acquire time series from different data sources and standardize them within a relational database. This standardization provides the ability to perform operations, such as query and visualization, on many measures, synchronizing them on a common time scale. The proposed architecture follows a multiple-layer paradigm (Loaders layer, Database layer and Business Logic layer). Each layer is specialized in particular operations for the reorganization and archiving of data from different sources, such as ASCII, Excel, ODBC (Open DataBase Connectivity), and files accessible from the Internet (web pages, XML). In particular, the loader layer checks the working status of each running acquisition program through a heartbeat system, in order to automate the discovery of acquisition issues and other warning conditions. Although the system has to manage huge amounts of data, performance is guaranteed by a smart table-partitioning strategy that keeps the percentage of data stored in each database table balanced. TSDSystem also contains modules for the visualization of acquired data, allowing users to query different time series over a specified time range or follow real-time signal acquisition, according to a per-user data access policy.
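    The central TSDSystem idea of synchronizing heterogeneous series on a common time scale can be illustrated with a toy alignment function. All names and sample data below are invented for the example; the real framework performs this kind of operation inside a partitioned relational database.

```python
# Two series sampled at different rates are aligned on the timestamps
# they share, so they can be queried and visualized together.
def align(series_a, series_b):
    """Return rows (t, a, b) for timestamps present in both series."""
    b_index = dict(series_b)  # timestamp -> value lookup for series_b
    return [(t, v, b_index[t]) for t, v in series_a if t in b_index]

# Toy data: seismic amplitude every 10 s, gas flux every 20 s.
seismic = [(0, 1.2), (10, 1.5), (20, 1.1), (30, 1.8)]
gas_flux = [(0, 40.0), (20, 42.5)]

print(align(seismic, gas_flux))  # [(0, 1.2, 40.0), (20, 1.1, 42.5)]
```

In a database-backed system the same alignment is usually an inner join on a timestamp column, which is what makes the "common time scale" queryable.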

  10. Research on reliability management systems for Nuclear Power Plant

    International Nuclear Information System (INIS)

    Maki, Nobuo

    2000-01-01

    A survey of reliability management systems for Nuclear Power Plants (NPPs) was performed, covering national and international archived documents as well as the current status of studies at the Idaho National Engineering and Environmental Laboratory (INEEL), US NPPs (McGuire, Seabrook), a French NPP (St. Laurent-des-Eaux), the Japan Atomic Energy Research Institute (JAERI), the Central Research Institute of Electric Power Industry (CRIEPI), and power plant manufacturers in Japan. As a result of the investigation, the following points were identified: (i) A reliability management system is composed of a maintenance management system that inclusively manages maintenance data, and an anomaly-information and reliability-data management system that extracts data from maintenance results stored in the maintenance management system and constructs a reliability database. (ii) The maintenance management system, which is widely used among NPPs in the US and Europe, is indispensable for increasing maintenance reliability. (iii) Maintenance management methods utilizing reliability data, such as Reliability Centered Maintenance, are applied to NPP maintenance in the US and Europe and contribute to cost saving; maintenance templates are effective in the application process. In addition, the following points were proposed on the design of the system: (i) A detailed database on the specifications of facilities and components is necessary for effective use of the system. (ii) A demand database is indispensable for the application of these methods. (iii) Full-time database managers are important for maintaining the quality of the reliability data. (author)

  11. Down syndrome: issues to consider in a national registry, research database and biobank.

    Science.gov (United States)

    McCabe, Linda L; McCabe, Edward R B

    2011-01-01

    As the quality of life for individuals with Down syndrome continues to improve due to anticipatory healthcare, early intervention, mainstreaming in schools, and increased expectations, the lack of basic information regarding individuals with Down syndrome is being recognized, and the need to facilitate research through a national registry, research database and biobank is being discussed. We believe that there should not be ownership of the samples and information, but instead prefer stewardship of the samples and information to benefit the participants who provided them. We endorse a model with data and sample managers and a research review board to interface between the investigators and participants. Information and samples would be coded, and only a few data managers would know the relationship between the codes and identifying information. Research results once published should be included in an online newsletter. If appropriate, individual results should be shared with participants. A Down syndrome registry, research database and biobank should be accountable to participants, families, medical care providers, government, and funding sources. Copyright © 2011 Elsevier Inc. All rights reserved.

  12. Growing dimensions. Spent fuel management at research reactors

    International Nuclear Information System (INIS)

    Ritchie, I.G.

    1998-01-01

    More than 550 nuclear research reactors are operating or shut down around the world. At many of these reactors, spent fuel from their operations is stored, pending decisions on its final disposition. In recent years, problems associated with this spent fuel storage have loomed larger in the international nuclear community. In an effort to determine the overall scope of the problems and to develop a database on the subject, the IAEA has surveyed research reactor operators in its Member States. Information for the Research Reactor Spent Fuel Database (RRSFDB) has so far been obtained from a limited but representative number of research reactors; it supplements data already on hand in the Agency's more established Research Reactor Database (RRDB). Drawing upon these database resources, this article presents an overall picture of spent fuel management and storage at the world's research reactors, in the context of associated national and international programmes in the field.

  13. NETMARK: A Schema-less Extension for Relational Databases for Managing Semi-structured Data Dynamically

    Science.gov (United States)

    Maluf, David A.; Tran, Peter B.

    2003-01-01

    An object-relational database management system is an integrated hybrid approach that combines the best practices of the relational model, with its SQL queries, and the object-oriented, semantic paradigm for supporting complex data creation. In this paper, a highly scalable, information-on-demand database framework, called NETMARK, is introduced. NETMARK takes advantage of the Oracle 8i object-relational database, using physical-address data types for very efficient keyword searches of records spanning both context and content. NETMARK was originally developed in early 2000 as a research and development prototype to manage the vast amounts of unstructured and semi-structured documents existing within NASA enterprises. Today, NETMARK is a flexible, high-throughput open database framework for managing, storing, and searching unstructured or semi-structured arbitrary hierarchical models, such as XML and HTML.
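    The distinction NETMARK draws between "context" (where a node sits in a document hierarchy) and "content" (its text) can be sketched with a plain keyword search over both. This is a hypothetical stand-in for illustration only, not the Oracle object-relational implementation the abstract describes; the sample documents and function names are invented.

```python
# Each record pairs a hierarchical context path with its text content,
# standing in for nodes of parsed XML/HTML documents.
documents = [
    ("/report/title", "Mars lander thermal analysis"),
    ("/report/body/section", "thermal shielding test results"),
    ("/memo/title", "budget review"),
]

def keyword_search(docs, keyword):
    """Match the keyword in either the context path or the content text."""
    kw = keyword.lower()
    return [(path, text) for path, text in docs
            if kw in path.lower() or kw in text.lower()]

# "thermal" matches two records on content; "title" would match on context.
print(keyword_search(documents, "thermal"))
```

Searching context and content in one pass is what lets a schema-less store answer queries over arbitrary hierarchies without a fixed schema per document type.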

  14. Ultra-Structure database design methodology for managing systems biology data and analyses

    Directory of Open Access Journals (Sweden)

    Hemminger Bradley M

    2009-08-01

    Full Text Available Abstract Background Modern, high-throughput biological experiments generate copious, heterogeneous, interconnected data sets. Research is dynamic, with frequently changing protocols, techniques, instruments, and file formats. Because of these factors, systems designed to manage and integrate modern biological data sets often end up as large, unwieldy databases that become difficult to maintain or evolve. The novel rule-based approach of the Ultra-Structure design methodology presents a potential solution to this problem. By representing both data and processes as formal rules within a database, an Ultra-Structure system constitutes a flexible framework that enables users to explicitly store domain knowledge in both a machine- and human-readable form. End users themselves can change the system's capabilities without programmer intervention, simply by altering database contents; no computer code or schemas need be modified. This provides flexibility in adapting to change and allows integration of disparate, heterogeneous data sets within a small core set of database tables, facilitating joint analysis and visualization without becoming unwieldy. Here, we examine the application of Ultra-Structure to our ongoing research program for the integration of large proteomic and genomic data sets (proteogenomic mapping). Results We transitioned our proteogenomic mapping information system from a traditional entity-relationship design to one based on Ultra-Structure. Our system integrates tandem mass spectrum data, genomic annotation sets, and spectrum/peptide mappings, all within a small, general framework implemented within a standard relational database system. General software procedures driven by user-modifiable rules can perform tasks such as logical deduction and location-based computations. The system is not tied specifically to proteogenomic research, but is rather designed to accommodate virtually any kind of biological research. 
    Conclusion We find
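    The Ultra-Structure principle, behaviour stored as rules in data rather than in code, can be illustrated with a minimal forward-chaining deduction engine. The rule format, rule names, and facts below are invented for the example and are not taken from the paper's schema.

```python
# Rules live in data (here a list standing in for a database table), so
# capabilities change by editing rows, not by editing program code.
rules = [
    # (if_fact, then_fact): a tiny logical-deduction rule table
    ("peptide_maps_to_orf", "orf_is_expressed"),
    ("orf_is_expressed", "gene_model_supported"),
]

def deduce(facts, rule_table):
    """Apply the rules to a fact set until nothing new can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for condition, conclusion in rule_table:
            if condition in facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(sorted(deduce({"peptide_maps_to_orf"}, rules)))
```

Adding a new inference here means inserting one more rule row; the generic `deduce` procedure never changes, which is the flexibility the methodology claims.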

  15. Status of research reactor spent fuel world-wide: Database summary

    International Nuclear Information System (INIS)

    Ritchie, I.G.

    1996-01-01

    Results compiled in the research reactor spent fuel database are used to assess the status of research reactor spent fuel world-wide. Fuel assemblies, their types, enrichment, origin of enrichment and geographical distribution among the industrialized and developing countries of the world are discussed. Fuel management practices in wet and dry storage facilities and the concerns of reactor operators about long-term storage of their spent fuel are presented, and some of the activities carried out by the International Atomic Energy Agency to address the issues associated with research reactor spent fuel are outlined. (author). 4 refs, 17 figs, 4 tabs

  16. Application of cloud database in the management of clinical data of patients with skin diseases.

    Science.gov (United States)

    Mao, Xiao-fei; Liu, Rui; DU, Wei; Fan, Xue; Chen, Dian; Zuo, Ya-gang; Sun, Qiu-ning

    2015-04-01

    To evaluate the needs for and applications of a cloud database in the daily practice of a dermatology department, a cloud database was established for systemic scleroderma and localized scleroderma. Paper forms were used to record the original data, including personal information, pictures, specimens, blood biochemical indicators, skin lesions, and scores on self-rating scales; the results were then input into the cloud database. The applications of the cloud database in the dermatology department were summarized and analyzed. The personal and clinical information of 215 systemic scleroderma patients and 522 localized scleroderma patients was included and analyzed using the cloud database, and the disease status, quality of life, and prognosis were obtained by statistical calculation. The cloud database can efficiently and rapidly store and manage the data of patients with skin diseases. As a simple, prompt, safe, and convenient tool, it can be used in patient information management, clinical decision-making, and scientific research.

  17. Insertion algorithms for network model database management systems

    Science.gov (United States)

    Mamadolimov, Abdurashid; Khikmat, Saburov

    2017-12-01

    The network model is a database model conceived as a flexible way of representing objects and their relationships. Its distinguishing feature is that the schema, viewed as a graph in which object types are nodes and relationship types are arcs, forms a partial order. When a database is large and query comparisons are expensive, efficient management algorithms must minimize the number of query comparisons. We consider the updating operation for network model database management systems and develop a new sequential algorithm for it. We also suggest a distributed version of the algorithm.
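    The abstract's efficiency goal, minimizing query comparisons during an update, can be illustrated in the degenerate case where the schema graph reduces to a chain (a total order): binary insertion then finds the insertion point in O(log n) comparisons instead of the O(n) of a linear scan. The general partial-order case treated in the paper is more involved; this sketch and its data are only an illustration.

```python
import bisect

def insert_sorted(chain, record):
    """Insert into a sorted chain using O(log n) comparisons."""
    bisect.insort(chain, record)  # binary search locates the slot
    return chain

chain = [10, 20, 40, 80]
print(insert_sorted(chain, 25))  # [10, 20, 25, 40, 80]
```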

  18. Database management system for large container inspection system

    International Nuclear Information System (INIS)

    Gao Wenhuan; Li Zheng; Kang Kejun; Song Binshan; Liu Fang

    1998-01-01

    The Large Container Inspection System (LCIS), based on radiation imaging technology, is a powerful tool for Customs to check the contents of a large container without opening it. The author discusses a database application system, part of the Signal and Image System (SIS), for the LCIS. A basic requirements analysis was done first; then the computer hardware, operating system, and database management system were selected according to the state of the technology and available market products. Based on these considerations, a database application system featuring central management and distributed operation has been implemented.

  19. Management system of instrument database

    International Nuclear Information System (INIS)

    Zhang Xin

    1997-01-01

    The author introduces a management system for an instrument database. The system was developed using FoxPro on a network. It features a clear structure, easy operation, and flexible, convenient querying, as well as data safety and reliability.

  20. The use of modern databases in managing nuclear material inventories

    International Nuclear Information System (INIS)

    Behrens, R.G.

    1994-01-01

    The need for a useful nuclear materials database to assist in the management of nuclear materials within the Department of Energy (DOE) Weapons Complex is becoming significantly more important as the mission of the DOE Complex changes and both international safeguards and storage issues become drivers in determining how these materials are managed. A well designed nuclear material inventory database can provide the Nuclear Materials Manager with an essential, cost-effective tool for timely analysis and reporting of inventories. This paper discusses the use of databases as a management tool to meet increasing requirements for accurate and timely information on nuclear material inventories and related information. From the end-user perspective, it discusses the rationale, philosophy, and technical requirements for an integrated database to meet the needs of a variety of users, such as those working in the areas of Safeguards, Materials Control and Accountability (MC&A), Nuclear Materials Management, Waste Management, materials processing, packaging and inspection, and interim/long-term storage.

  1. A research on the enhancement of research management efficiency for the division of research, Korea cancer center hospital

    International Nuclear Information System (INIS)

    Lee, S. W.; Ma, K. H.; Kim, J. R.; Lee, D. C.; Lee, J. H.

    1999-06-01

    The research activities of Korea Cancer Center Hospital have increased over the past few years in proportion to the increase in the research budget, but the assisting manpower of the office of research management has not been increased, and the indications are that internal and external circumstances will not allow recruitment for a fairly long time. It has therefore become inevitable to enhance the work efficiency of the office by analyzing the administrative research assistance system, identifying problems and inefficiency factors, and suggesting possible answers to them. The office of research management and international cooperation conducted this research to suggest possible ways to facilitate administrative support for the research activities of Korea Cancer Center Hospital. By analyzing changes in the research budget, the organization of the division of research and administrative support, manpower, and the administrative research support systems of other institutes, we suggested possible ways to enhance work efficiency for administrative research support and developed a related database program. The research report will serve as a reference when the research support division is organized upon establishment of the Radiation Medicine Research Center. The database program has already been used for research budget management.

  2. Correlates of Access to Business Research Databases

    Science.gov (United States)

    Gottfried, John C.

    2010-01-01

    This study examines potential correlates of business research database access through academic libraries serving top business programs in the United States. Results indicate that greater access to research databases is related to enrollment in graduate business programs, but not to overall enrollment or status as a public or private institution.…

  3. The Network Configuration of an Object Relational Database Management System

    Science.gov (United States)

    Diaz, Philip; Harris, W. C.

    2000-01-01

    The networking and implementation of the Oracle Database Management System (ODBMS) require developers to have knowledge of the UNIX operating system as well as all the features of the Oracle Server. The server is an object-relational database management system (DBMS). By using distributed processing, work is split between the database server and client application programs. The DBMS handles all the responsibilities of the server, while the workstations running the database application concentrate on the interpretation and display of data.

  4. From document to database: modernizing requirements management

    International Nuclear Information System (INIS)

    Giajnorio, J.; Hamilton, S.

    2007-01-01

    The creation, communication, and management of design requirements are central to the successful completion of any large engineering project, both technically and commercially. Design requirements in the Canadian nuclear industry are typically numbered lists in multiple documents created using word processing software. As an alternative, GE Nuclear Products implemented a central requirements management database for a major project at Bruce Power, configuring the off-the-shelf software product Telelogic DOORS to GE's requirements structure. This paper describes the advantages realized by this scheme, including traceability from customer requirements through to test procedures, concurrent engineering, and automated change history. (author)

  5. A framework for cross-observatory volcanological database management

    Science.gov (United States)

    Aliotta, Marco Antonio; Amore, Mauro; Cannavò, Flavio; Cassisi, Carmelo; D'Agostino, Marcello; Dolce, Mario; Mastrolia, Andrea; Mangiagli, Salvatore; Messina, Giuseppe; Montalto, Placido; Fabio Pisciotta, Antonino; Prestifilippo, Michele; Rossi, Massimo; Scarpato, Giovanni; Torrisi, Orazio

    2017-04-01

    In recent years, it has been clearly shown that the multiparametric approach is the winning strategy for investigating the complex dynamics of volcanic systems. This involves the use of different sensor networks, each one dedicated to the acquisition of particular data useful for research and monitoring. The increasing interest devoted to the study of volcanological phenomena has led to the constitution of different research organizations or observatories, sometimes covering the same volcanoes, which acquire large amounts of data from sensor networks for multiparametric monitoring. At INGV we developed a framework, hereinafter called TSDSystem (Time Series Database System), which acquires data streams from several geophysical and geochemical permanent sensor networks (also represented by different data sources such as ASCII, ODBC, URL etc.), located on the main volcanic areas of Southern Italy, and relates them within a relational database management system. Furthermore, spatial data related to the different datasets are managed using a GIS module for sharing and visualization purposes. The standardization provides the ability to perform operations, such as query and visualization, on many measures, synchronizing them on a common space and time scale. In order to share data between INGV observatories, and also with the Civil Protection, whose activity covers the same volcanic districts, we designed a "Master View" system that, starting from a number of instances of the TSDSystem framework (one for each observatory), makes possible the joint interrogation of data, both temporal and spatial, on instances located in different observatories, through the use of web services technology (RESTful, SOAP). Similarly, it provides metadata for equipment using standard schemas (such as FDSN StationXML). 
    The "Master View" is also responsible for managing the data policy through a "who owns what" system, which allows you to associate viewing/download of
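    The "Master View" federation pattern, sending the same query to each observatory's TSDSystem instance and merging the results on the shared time axis, can be sketched as follows. The function names and toy data are hypothetical; in the real system the per-instance calls are RESTful/SOAP web services.

```python
def query_instance(instance_data, signal, t_start, t_end):
    """Stand-in for a web-service call to one observatory instance."""
    return [(t, v) for t, v in instance_data.get(signal, [])
            if t_start <= t <= t_end]

def master_view(instances, signal, t_start, t_end):
    """Fan the query out to every instance, then merge on the time axis."""
    merged = []
    for data in instances.values():
        merged.extend(query_instance(data, signal, t_start, t_end))
    return sorted(merged)  # common time scale orders the joint result

# Toy per-observatory data: (timestamp_s, tremor amplitude).
observatories = {
    "catania": {"tremor": [(0, 1.0), (60, 1.4)]},
    "napoli":  {"tremor": [(30, 0.9), (90, 1.1)]},
}
print(master_view(observatories, "tremor", 0, 60))
# [(0, 1.0), (30, 0.9), (60, 1.4)]
```

A "who owns what" data policy would then filter which instances each user's fan-out is allowed to reach before the merge.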

  6. PACSY, a relational database management system for protein structure and chemical shift analysis.

    Science.gov (United States)

    Lee, Woonghee; Yu, Wookyung; Kim, Suhkmann; Chang, Iksoo; Lee, Weontae; Markley, John L

    2012-10-01

    PACSY (Protein structure And Chemical Shift NMR spectroscopY) is a relational database management system that integrates information from the Protein Data Bank, the Biological Magnetic Resonance Data Bank, and the Structural Classification of Proteins database. PACSY provides three-dimensional coordinates and chemical shifts of atoms along with derived information such as torsion angles, solvent accessible surface areas, and hydrophobicity scales. PACSY consists of six relational table types linked to one another for coherence by key identification numbers. Database queries are enabled by advanced search functions supported by an RDBMS server such as MySQL or PostgreSQL. PACSY enables users to search for combinations of information from different database sources in support of their research. Two software packages, PACSY Maker for database creation and PACSY Analyzer for database analysis, are available from http://pacsy.nmrfam.wisc.edu.
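    The PACSY design of multiple table types linked by shared key identification numbers is what lets one query combine, say, coordinates with chemical shifts. The sketch below shows the idea with SQLite and invented table and column names; the real schema has six table types and runs on MySQL or PostgreSQL.

```python
import sqlite3

# Two of the linked table types, joined on a shared atom identifier.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE coordinates (
        atom_id INTEGER PRIMARY KEY, x REAL, y REAL, z REAL);
    CREATE TABLE chemical_shift (
        atom_id INTEGER, shift_ppm REAL);
""")
conn.execute("INSERT INTO coordinates VALUES (1, 0.1, 0.2, 0.3)")
conn.execute("INSERT INTO chemical_shift VALUES (1, 8.25)")

# One query combines information from both sources via the key.
row = conn.execute("""
    SELECT c.x, c.y, c.z, s.shift_ppm
    FROM coordinates c
    JOIN chemical_shift s ON c.atom_id = s.atom_id
""").fetchone()
print(row)  # (0.1, 0.2, 0.3, 8.25)
```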

  7. PACSY, a relational database management system for protein structure and chemical shift analysis

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Woonghee, E-mail: whlee@nmrfam.wisc.edu [University of Wisconsin-Madison, National Magnetic Resonance Facility at Madison, and Biochemistry Department (United States); Yu, Wookyung [Center for Proteome Biophysics, Pusan National University, Department of Physics (Korea, Republic of); Kim, Suhkmann [Pusan National University, Department of Chemistry and Chemistry Institute for Functional Materials (Korea, Republic of); Chang, Iksoo [Center for Proteome Biophysics, Pusan National University, Department of Physics (Korea, Republic of); Lee, Weontae, E-mail: wlee@spin.yonsei.ac.kr [Yonsei University, Structural Biochemistry and Molecular Biophysics Laboratory, Department of Biochemistry (Korea, Republic of); Markley, John L., E-mail: markley@nmrfam.wisc.edu [University of Wisconsin-Madison, National Magnetic Resonance Facility at Madison, and Biochemistry Department (United States)

    2012-10-15

    PACSY (Protein structure And Chemical Shift NMR spectroscopY) is a relational database management system that integrates information from the Protein Data Bank, the Biological Magnetic Resonance Data Bank, and the Structural Classification of Proteins database. PACSY provides three-dimensional coordinates and chemical shifts of atoms along with derived information such as torsion angles, solvent accessible surface areas, and hydrophobicity scales. PACSY consists of six relational table types linked to one another for coherence by key identification numbers. Database queries are enabled by advanced search functions supported by an RDBMS server such as MySQL or PostgreSQL. PACSY enables users to search for combinations of information from different database sources in support of their research. Two software packages, PACSY Maker for database creation and PACSY Analyzer for database analysis, are available from http://pacsy.nmrfam.wisc.edu.

  8. PACSY, a relational database management system for protein structure and chemical shift analysis

    Science.gov (United States)

    Lee, Woonghee; Yu, Wookyung; Kim, Suhkmann; Chang, Iksoo

    2012-01-01

    PACSY (Protein structure And Chemical Shift NMR spectroscopY) is a relational database management system that integrates information from the Protein Data Bank, the Biological Magnetic Resonance Data Bank, and the Structural Classification of Proteins database. PACSY provides three-dimensional coordinates and chemical shifts of atoms along with derived information such as torsion angles, solvent accessible surface areas, and hydrophobicity scales. PACSY consists of six relational table types linked to one another for coherence by key identification numbers. Database queries are enabled by advanced search functions supported by an RDBMS server such as MySQL or PostgreSQL. PACSY enables users to search for combinations of information from different database sources in support of their research. Two software packages, PACSY Maker for database creation and PACSY Analyzer for database analysis, are available from http://pacsy.nmrfam.wisc.edu. PMID:22903636

  9. PACSY, a relational database management system for protein structure and chemical shift analysis

    International Nuclear Information System (INIS)

    Lee, Woonghee; Yu, Wookyung; Kim, Suhkmann; Chang, Iksoo; Lee, Weontae; Markley, John L.

    2012-01-01

    PACSY (Protein structure And Chemical Shift NMR spectroscopY) is a relational database management system that integrates information from the Protein Data Bank, the Biological Magnetic Resonance Data Bank, and the Structural Classification of Proteins database. PACSY provides three-dimensional coordinates and chemical shifts of atoms along with derived information such as torsion angles, solvent accessible surface areas, and hydrophobicity scales. PACSY consists of six relational table types linked to one another for coherence by key identification numbers. Database queries are enabled by advanced search functions supported by an RDBMS server such as MySQL or PostgreSQL. PACSY enables users to search for combinations of information from different database sources in support of their research. Two software packages, PACSY Maker for database creation and PACSY Analyzer for database analysis, are available from http://pacsy.nmrfam.wisc.edu.

  10. A Philosophy Research Database to Share Data Resources

    Directory of Open Access Journals (Sweden)

    Jili Cheng

    2007-12-01

    Full Text Available Philosophy research used to rely mainly on traditional published journals and newspapers for collecting and communicating data. However, because of financial limits or a limited capability to collect data, the required published materials, and even restricted materials and developing information from research projects, often could not be obtained. The rise of digital techniques and Internet opportunities has made it possible to share the data resources of philosophy research. Although there are several ICPs with large-scale comprehensive commercial databases in the field in China, no real non-profit professional database for philosophy researchers exists. Therefore, in 2002, the Philosophy Institute of the Chinese Academy of Social Sciences began a project to build "The Database of Philosophy Research." By Mar. 2006 the number of subsets had reached 30 with more than 30,000 records, retrieval services had reached 6,000, and article readings had reached 30,000. Because of intellectual property concerns, the service of the database is currently limited to the information held in CASS. Nevertheless, this is the first academic database for philosophy research, so its orientation is towards resource sharing, leading users to data, and serving a large number of demands from other provinces and departments.

  11. SIRSALE: integrated video database management tools

    Science.gov (United States)

    Brunie, Lionel; Favory, Loic; Gelas, J. P.; Lefevre, Laurent; Mostefaoui, Ahmed; Nait-Abdesselam, F.

    2002-07-01

    Video databases became an active field of research during the last decade. The main objective of such systems is to provide users with capabilities to search, access, and play back distributed stored video data in the same friendly way as they do for traditional distributed databases. Hence, such systems need to deal with hard issues: (a) video documents generate huge volumes of data and are time-sensitive (streams must be delivered at a specific bitrate), and (b) the content of video data is very hard to extract automatically and needs to be annotated by humans. To cope with these issues, many approaches have been proposed in the literature, including data models, query languages, video indexing, etc. In this paper, we present SIRSALE: a set of video database management tools that allow users to manipulate video documents and streams stored in large distributed repositories. All the proposed tools are based on generic models that can be customized for specific applications using ad-hoc adaptation modules. More precisely, SIRSALE allows users to: (a) browse video documents by structure (sequences, scenes, shots) and (b) query the video database content by using a graphical tool adapted to the nature of the target video documents. This paper also presents an annotation interface that allows archivists to describe the content of video documents. All these tools are coupled to a video player integrating remote VCR functionalities and are based on active network technology. We present how dedicated active services allow optimized transport of video streams (with Tamanoir active nodes). We then describe experiments using SIRSALE on an archive of news videos and soccer matches. The system has been demonstrated to professionals with positive feedback. Finally, we discuss open issues and present some perspectives.

  12. Information Management Tools for Classrooms: Exploring Database Management Systems. Technical Report No. 28.

    Science.gov (United States)

    Freeman, Carla; And Others

    In order to understand how database software or online databases functioned in the overall curricula, the use of database management systems (DBMSs) was studied at eight elementary and middle schools through classroom observation and interviews with teachers and administrators, librarians, and students. Three overall areas were addressed:…

  13. Research on Construction of Road Network Database Based on Video Retrieval Technology

    Directory of Open Access Journals (Sweden)

    Wang Fengling

    2017-01-01

    Full Text Available Based on the characteristics and basic structure of video databases and several typical video data models, a segmentation-based multi-level data model is used to describe the landscape information video database, the network database model, and the road network management database system. The detailed design and implementation of the landscape information management system are also presented.

  14. A Database Management Assessment Instrument

    Science.gov (United States)

    Landry, Jeffrey P.; Pardue, J. Harold; Daigle, Roy; Longenecker, Herbert E., Jr.

    2013-01-01

    This paper describes an instrument designed for assessing learning outcomes in data management. In addition to assessment of student learning and ABET outcomes, we have also found the instrument to be effective for determining database placement of incoming information systems (IS) graduate students. Each of these three uses is discussed in this…

  15. KALIMER design database development and operation manual

    International Nuclear Information System (INIS)

    Jeong, Kwan Seong; Hahn, Do Hee; Lee, Yong Bum; Chang, Won Pyo

    2000-12-01

    The KALIMER Design Database is developed to support integrated management of Liquid Metal Reactor design technology development using web applications. The KALIMER Design Database consists of a Results Database, Inter-Office Communication (IOC), a 3D CAD database, a Team Cooperation System, and Reserved Documents. The Results Database is a database of research results for mid-term and long-term nuclear R and D. IOC is a linkage control system between subprojects to share and integrate the research results for KALIMER. The 3D CAD Database is a schematic design overview for KALIMER. The Team Cooperation System informs team members of research cooperation and meetings. Finally, KALIMER Reserved Documents is developed to manage collected data and several documents since project accomplishment

  16. KALIMER design database development and operation manual

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, Kwan Seong; Hahn, Do Hee; Lee, Yong Bum; Chang, Won Pyo

    2000-12-01

    The KALIMER Design Database is developed to support integrated management of Liquid Metal Reactor design technology development using web applications. The KALIMER Design Database consists of a Results Database, Inter-Office Communication (IOC), a 3D CAD database, a Team Cooperation System, and Reserved Documents. The Results Database is a database of research results for mid-term and long-term nuclear R and D. IOC is a linkage control system between subprojects to share and integrate the research results for KALIMER. The 3D CAD Database is a schematic design overview for KALIMER. The Team Cooperation System informs team members of research cooperation and meetings. Finally, KALIMER Reserved Documents is developed to manage collected data and several documents since project accomplishment.

  17. Database searches for qualitative research

    OpenAIRE

    Evans, David

    2002-01-01

    Interest in the role of qualitative research in evidence-based health care is growing. However, the methods currently used to identify quantitative research do not translate easily to qualitative research. This paper highlights some of the difficulties during searches of electronic databases for qualitative research. These difficulties relate to the descriptive nature of the titles used in some qualitative studies, the variable information provided in abstracts, and the differences in the ind...

  18. Development of database management system for monitoring of radiation workers for actinides

    International Nuclear Information System (INIS)

    Kalyane, G.N.; Mishra, L.; Nadar, M.Y.; Singh, I.S.; Rao, D.D.

    2012-01-01

    Annually, around 500 radiation workers are monitored for estimation of lung activities and internal dose due to Pu/Am and U from various divisions of Bhabha Atomic Research Centre (Trombay) and from the PREFRE and A3F facilities (Tarapur) in the lung counting laboratory located at the Bhabha Atomic Research Centre hospital, under routine and special monitoring programs. A 20 cm diameter phoswich and an array of HPGe detectors were used for this purpose. In case of positive contamination, workers are followed up and monitored using both detection systems in different geometries. Management of this huge amount of data becomes difficult; therefore, an easily retrievable database system containing all the relevant data of the monitored radiation workers was developed. Materials and methods: The database management system comprises three main modules integrated together: 1) an Apache server installed on a Windows (XP) platform (Apache version 2.2.17), 2) the MySQL database management system (MySQL version 5.5.8), and 3) the PHP (Hypertext Preprocessor) programming language (PHP version 5.3.5). All three modules work together seamlessly as a single software program. Front-end user interaction is through a user-friendly and interactive local web page for which an internet connection is not required. This front page has hyperlinks to many other pages, which have different utilities for the user. The user has to log in using a username and password. Results and Conclusions: The database management system is used for entering, updating, and managing the lung monitoring data of radiation workers. The program has the following utilities: bio-data entry of new subjects, editing of bio-data of old subjects (only one subject at a time), entry of counting data of that day's lung monitoring, retrieval of old records based on a number of parameters and filters like date of counting, employee number, division, counts fulfilling a given criterion, etc., and calculation of MEQ CWT (Muscle Equivalent Chest Wall Thickness), energy
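
The retrieval filters the record lists (date of counting, employee number, division, counts fulfilling a criterion) can be sketched as composable predicates; the record layout and field names below are assumptions for illustration, not the laboratory's actual schema:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical record structure mirroring the retrieval parameters
# described in the abstract (employee number, division, date, counts).
@dataclass
class LungCount:
    employee_no: int
    division: str
    count_date: date
    counts: float

records = [
    LungCount(101, "RSSD", date(2011, 5, 2), 120.0),
    LungCount(102, "RSSD", date(2011, 6, 9), 310.0),
    LungCount(103, "HPD",  date(2012, 1, 15), 95.0),
]

def retrieve(records, division=None, min_counts=None, after=None):
    """Apply any combination of filters, as the abstract describes
    retrieval of old records by multiple parameters."""
    out = records
    if division is not None:
        out = [r for r in out if r.division == division]
    if min_counts is not None:
        out = [r for r in out if r.counts >= min_counts]
    if after is not None:
        out = [r for r in out if r.count_date > after]
    return out

hits = retrieve(records, division="RSSD", min_counts=200.0)
```

In the deployed system these filters would be WHERE clauses issued by PHP against the MySQL tables; the in-memory version above only shows the filtering logic.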

  19. Interconnecting heterogeneous database management systems

    Science.gov (United States)

    Gligor, V. D.; Luckenbaugh, G. L.

    1984-01-01

    It is pointed out that there is still a great need for the development of improved communication between remote, heterogeneous database management systems (DBMSs). Problems regarding effective communication between distributed DBMSs are primarily related to significant differences between local data managers, local data models and representations, and local transaction managers. A system of interconnected DBMSs that exhibit such differences is called a network of distributed, heterogeneous DBMSs. In order to achieve effective interconnection of remote, heterogeneous DBMSs, users must have uniform, integrated access to the different DBMSs. The present investigation is mainly concerned with an analysis of the existing approaches to interconnecting heterogeneous DBMSs, taking into account four experimental DBMS projects.
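
The notion of uniform, integrated access over differing local data models can be sketched with a thin adapter layer that normalizes each backend's representation onto one query interface; the backend classes and method names below are hypothetical, not drawn from the surveyed projects:

```python
# Two toy backends with deliberately different local data models and
# access interfaces (names are illustrative only).
class SqlLikeDbms:
    def run_sql(self, table):
        return [{"id": 1, "name": "alpha"}]

class DocLikeDbms:
    def find_documents(self, collection):
        return [{"_id": 2, "title": "beta"}]

class UniformAdapter:
    """Present both backends through one integrated interface."""
    def __init__(self, backend):
        self.backend = backend

    def fetch_all(self, name):
        if isinstance(self.backend, SqlLikeDbms):
            return self.backend.run_sql(name)
        docs = self.backend.find_documents(name)
        # Normalize the second model's field names to the uniform schema.
        return [{"id": d["_id"], "name": d["title"]} for d in docs]

rows = [UniformAdapter(b).fetch_all("records")
        for b in (SqlLikeDbms(), DocLikeDbms())]
```

Real heterogeneous-DBMS interconnection must also reconcile transaction semantics, which this sketch deliberately omits.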

  20. Database Quality and Access Issues Relevant to Research Using Anesthesia Information Management System Data.

    Science.gov (United States)

    Epstein, Richard H; Dexter, Franklin

    2018-07-01

    For this special article, we reviewed the computer code used to extract the data and the text of all 47 studies published between January 2006 and August 2017 using anesthesia information management system (AIMS) data from Thomas Jefferson University Hospital (TJUH). Data from this institution were used in the largest number (P = .0007) of papers describing the use of AIMS published in this time frame. The AIMS was replaced in April 2017, making this sample finite. The objective of the current article was to identify factors that made TJUH successful in publishing anesthesia informatics studies. We examined the structured query language used for each study to determine the extent to which databases outside of the AIMS were used. We examined data quality from the perspectives of completeness, correctness, concordance, plausibility, and currency. Our results were that most studies could not have been completed without external database sources (36/47, 76.6%; P = .0003 compared with 50%). The operating room management system was linked to the AIMS and was used significantly more frequently (26/36, 72%) than other external sources. Access to these external data sources was provided, allowing exploration of data quality. The TJUH AIMS used high-resolution timestamps (to the nearest 3 milliseconds) and created audit tables to track changes to clinical documentation. Automatic data were recorded at 1-minute intervals and were not editable; data cleaning occurred during analysis. Few paired events with an expected order were out of sequence. Although most data elements were of high quality, there were notable exceptions, such as frequent missing values for estimated blood loss, height, and weight. Some values were duplicated with different units, and others were stored in varying locations. Our conclusions are that linking the TJUH AIMS to the operating room management system was a critical step in enabling publication of multiple studies using AIMS data.
Access to this and

  1. Kingfisher: a system for remote sensing image database management

    Science.gov (United States)

    Bruzzo, Michele; Giordano, Ferdinando; Dellepiane, Silvana G.

    2003-04-01

    At present, retrieval methods in remote sensing image databases are mainly based on spatial-temporal information. The increasing amount of images collected by the ground stations of earth observing systems emphasizes the need for database management with intelligent data retrieval capabilities. The purpose of the proposed method is to realize a new content-based retrieval system for remote sensing image databases with an innovative search tool based on image similarity. This methodology is quite innovative for this application; at present many systems exist for photographic images, for example QBIC and IKONA, but they are not able to properly extract and describe remote sensing image content. The target database is set by an archive of images originated from an X-SAR sensor (spaceborne mission, 1994). The best content descriptors, mainly texture parameters, guarantee high retrieval performance and can be extracted without losses independently of image resolution. The latter property allows the DBMS (Database Management System) to process a low amount of information, as in the case of quick-look images, improving time performance and memory access without reducing retrieval accuracy. The matching technique has been designed to enable image management (database population and retrieval) independently of dimensions (width and height). Local and global content descriptors are compared with the query image during the retrieval phase, and the results seem to be very encouraging.
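
The similarity search described above amounts to ranking archived images by descriptor distance to a query image. A minimal sketch follows; the descriptor values and image identifiers are invented for illustration, and a Euclidean metric stands in for whatever distance the real system uses:

```python
import math

# Hypothetical texture-descriptor vectors for archived images; the real
# system extracts such descriptors from X-SAR quick-look images.
archive = {
    "img_001": [0.12, 0.80, 0.33],
    "img_002": [0.90, 0.10, 0.50],
    "img_003": [0.15, 0.75, 0.30],
}

def rank_by_similarity(query, archive):
    """Return archive image ids ordered by Euclidean distance to the
    query descriptor (smaller distance = more similar)."""
    def dist(vec):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(query, vec)))
    return sorted(archive, key=lambda k: dist(archive[k]))

ranking = rank_by_similarity([0.14, 0.78, 0.31], archive)
```

Because descriptors are resolution-independent, the same ranking computed over quick-look images carries over to the full-resolution archive.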

  2. International database on ageing management and life extension

    International Nuclear Information System (INIS)

    Ianko, L.; Lyssakov, V.; McLachlan, D.; Russell, J.; Mukhametshin, V.

    1995-01-01

    International database on ageing management and life extension for reactor pressure vessel materials (RPVM) is described with the emphasis on the following issues: requirements of the system; design concepts for RPVM database system; data collection, processing and storage; information retrieval and dissemination; RPVM information assessment and evaluation. 1 fig

  3. Nuclear data processing using a database management system

    International Nuclear Information System (INIS)

    Castilla, V.; Gonzalez, L.

    1991-01-01

    A database management system that permits the design of relational models was used to create an integrated database of experimental and evaluated nuclear data. A system that reduces the time and cost of processing was created for EC-type computers and compatibles. A set of programs for converting calculated nuclear data output into the EXFOR format was developed. A dictionary to perform retrospective searches in the ENDF database was also created

  4. Report of the SRC working party on databases and database management systems

    International Nuclear Information System (INIS)

    Crennell, K.M.

    1980-10-01

    An SRC working party, set up to consider the subject of support for databases within the SRC, was asked to identify interested individuals and user communities, establish which features of database management systems they felt were desirable, arrange demonstrations of possible systems, and then make recommendations for systems, funding, and likely manpower requirements. This report describes the activities and lists the recommendations of the working party, and contains a list of databases maintained or proposed by those who replied to a questionnaire. (author)

  5. Managing Multiuser Database Buffers Using Data Mining Techniques

    NARCIS (Netherlands)

    Feng, L.; Lu, H.J.

    2004-01-01

    In this paper, we propose a data-mining-based approach to public buffer management for a multiuser database system, where database buffers are organized into two areas – public and private. While the private buffer areas contain pages to be updated by particular users, the public

  6. THE EVOLUTION OF RISK MANAGEMENT RESEARCH: CHANGES IN KNOWLEDGE MAPS

    OpenAIRE

    Iwona Gorzeń-Mitka

    2017-01-01

    One of the leading trends in modern academic research is risk management. Over the years, the approach to risk management has changed and affected many different areas. This study aims to investigate changes and trends in risk management over the past 20 years. Risk-management-related publications from 1990 to 2016 were retrieved from the Web of Science and Scopus databases. VOSviewer software was used to analyse the research trend. Literature growth related to risk manageme...

  7. A user's manual for managing database system of tensile property

    International Nuclear Information System (INIS)

    Ryu, Woo Seok; Park, S. J.; Kim, D. H.; Jun, I.

    2003-06-01

    This manual is written for the management and maintenance of the tensile database system for managing tensile property test data. The database, constructed from the data produced by tensile property tests, can increase the application of the test results. Also, basic data can easily be retrieved from the database when preparing a new experiment, and better results can be produced by comparison with previous data. To develop the database, the application must be carefully analyzed and designed; after that, the best quality can be offered to meet customers' various requirements. The tensile database system was developed as an internet application using Java, PL/SQL, and JSP (Java Server Pages) tools

  8. Research on the establishment of the database system for R and D on the innovative technology for the earth; Chikyu kankyo sangyo gijutsu kenkyu kaihatsuyo database system ni kansuru chosa

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1994-03-01

    For the purpose of structuring a database system of technical information about earth environmental issues, the `database system for R and D of the earth environmental industrial technology` was operationally evaluated, and a study was made to open it to users and to structure a prototype database. In the present state, as pointed out in the operational evaluation, the utilization frequency is low due to lack of UNIX experience, the absence of system managers, and a shortage of usable articles listed, so that updating of the database does not progress as intended. Therefore, a study was made to introduce tools usable by the registrants and to open an information access terminal to the researchers at headquarters via the internet. In order for earth environment-related researchers to obtain information easily, a database was prototypically structured to support research exchange. The tasks to be addressed in selecting the fields of research and compiling common thesauri in Japanese, Western, and other languages were made clear. 28 figs., 16 tabs.

  9. An Introduction to the DB Relational Database Management System

    OpenAIRE

    Ward, J.R.

    1982-01-01

    This paper is an introductory guide to using the Db programs to maintain and query a relational database on the UNIX operating system. In the past decade, increasing interest has been shown in the development of relational database management systems. Db is an attempt to incorporate a flexible and powerful relational database system within the user environment presented by the UNIX operating system. The family of Db programs is useful for maintaining a database of information that i...

  10. Development of Krsko Severe Accident Management Database (SAMD)

    International Nuclear Information System (INIS)

    Basic, I.; Kocnar, R.

    1996-01-01

    Severe Accident Management is a framework to identify and implement the Emergency Response Capabilities that can be used to prevent or mitigate severe accidents and their consequences. The Krsko Severe Accident Management Database documents the severe accident management activities developed at the NPP Krsko, based on insights from the Krsko IPE (Individual Plant Examination) and the generic WOG SAMGs (Westinghouse Owners Group Severe Accident Management Guidance). (author)

  11. Managing Database Services: An Approach Based in Information Technology Services Availabilty and Continuity Management

    Directory of Open Access Journals (Sweden)

    Leonardo Bastos Pontes

    2017-01-01

    Full Text Available This paper is set in the environment of information technology service management, with some ideas from information technology governance, and proposes a hybrid model to manage the services of a database in a supplementary health operator, based on the principles of information technology service management. The approach draws on fundamental nuances of service management guides such as CMMI for Services, COBIT, ISO 20000, ITIL, and MPS.BR for Services; it jointly studies Availability and Continuity Management, as most of the guides also do. This work is important because it keeps a good flow in the database and improves the agility of the systems in the clinics accredited by the health plan.

  12. SPIRE Data-Base Management System

    Science.gov (United States)

    Fuechsel, C. F.

    1984-01-01

    Spacelab Payload Integration and Rocket Experiment (SPIRE) data-base management system (DBMS) based on relational model of data bases. Data bases typically used for engineering and mission analysis tasks and, unlike most commercially available systems, allow data items and data structures stored in forms suitable for direct analytical computation. SPIRE DBMS designed to support data requests from interactive users as well as applications programs.

  13. Design of database management system for 60Co container inspection system

    International Nuclear Information System (INIS)

    Liu Jinhui; Wu Zhifang

    2007-01-01

    The functions of the database management system have been designed according to the features of the cobalt-60 container inspection system, and the software implementing these functions has been constructed. Database querying and searching are included in the software. The database operation program is built on Microsoft SQL Server and Visual C++ under Windows 2000. The software realizes database querying, image and graph display, statistics, report forms and their printing, interface design, etc. The software is powerful and flexible in operation and information querying, and it has been successfully used in the real database management system of the cobalt-60 container inspection system. (authors)

  14. Content Based Retrieval Database Management System with Support for Similarity Searching and Query Refinement

    Science.gov (United States)

    2002-01-01

    ...to the OODBMS approach. The ORDBMS approach produced such research prototypes as Postgres [155] and Starburst [67], and commercial products such as...
    [155] ...Kemnitz. The POSTGRES Next-Generation Database Management System. Communications of the ACM, 34(10):78–92, 1991. [156] Michael Stonebreaker and Dorothy...

  15. MonetDB: Two Decades of Research in Column-oriented Database Architectures

    NARCIS (Netherlands)

    S. Idreos (Stratos); F.E. Groffen (Fabian); N.J. Nes (Niels); S. Manegold (Stefan); K.S. Mullender (Sjoerd); M.L. Kersten (Martin)

    2012-01-01

    MonetDB is a state-of-the-art open-source column-store database management system targeting applications in need of analytics over large collections of data. MonetDB is actively used nowadays in health care, in telecommunications, as well as in scientific databases and in data management

  16. Moving to Google Cloud: Renovation of Global Borehole Temperature Database for Climate Research

    Science.gov (United States)

    Xiong, Y.; Huang, S.

    2013-12-01

    Borehole temperature comprises an independent archive of information on climate change which is complementary to the instrumental and other proxy climate records. With support from the international geothermal community, a global database of borehole temperatures has been constructed for the specific purpose of the study on climate change. Although this database has become an important data source in climate research, there are certain limitations partially because the framework of the existing borehole temperature database was hand-coded some twenty years ago. A database renovation work is now underway to take the advantages of the contemporary online database technologies. The major intended improvements include 1) dynamically linking a borehole site to Google Earth to allow for inspection of site specific geographical information; 2) dynamically linking an original key reference of a given borehole site to Google Scholar to allow for a complete list of related publications; and 3) enabling site selection and data download based on country, coordinate range, and contributor. There appears to be a good match between the enhancement requirements for this database and the functionalities of the newly released Google Fusion Tables application. Google Fusion Tables is a cloud-based service for data management, integration, and visualization. This experimental application can consolidate related online resources such as Google Earth, Google Scholar, and Google Drive for sharing and enriching an online database. It is user friendly, allowing users to apply filters and to further explore the internet for additional information regarding the selected data. The users also have ways to map, to chart, and to calculate on the selected data, and to download just the subset needed. The figure below is a snapshot of the database currently under Google Fusion Tables renovation. We invite contribution and feedback from the geothermal and climate research community to make the

  17. Improving Care And Research Electronic Data Trust Antwerp (iCAREdata): a research database of linked data on out-of-hours primary care.

    Science.gov (United States)

    Colliers, Annelies; Bartholomeeusen, Stefaan; Remmen, Roy; Coenen, Samuel; Michiels, Barbara; Bastiaens, Hilde; Van Royen, Paul; Verhoeven, Veronique; Holmgren, Philip; De Ruyck, Bernard; Philips, Hilde

    2016-05-04

    Primary out-of-hours care is developing throughout Europe. High-quality databases with linked data from primary health services can help to improve research and future health services. In 2014, a central clinical research database infrastructure was established (iCAREdata: Improving Care And Research Electronic Data Trust Antwerp, www.icaredata.eu ) for primary and interdisciplinary health care at the University of Antwerp, linking data from General Practice Cooperatives, Emergency Departments and Pharmacies during out-of-hours care. Medical data are pseudonymised using the services of a Trusted Third Party, which encodes private information about patients and physicians before data is sent to iCAREdata. iCAREdata provides many new research opportunities in the fields of clinical epidemiology, health care management and quality of care. A key aspect will be to ensure the quality of data registration by all health care providers. This article describes the establishment of a research database and the possibilities of linking data from different primary out-of-hours care providers, with the potential to help to improve research and the quality of health care services.

  18. Development of a computational database for probabilistic safety assessment of nuclear research reactors

    Energy Technology Data Exchange (ETDEWEB)

    Macedo, Vagner S.; Oliveira, Patricia S. Pagetti de; Andrade, Delvonei Alves de, E-mail: vagner.macedo@usp.br, E-mail: patricia@ipen.br, E-mail: delvonei@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2015-07-01

    The objective of this work is to describe the database being developed at IPEN - CNEN / SP for application in the Probabilistic Safety Assessment of nuclear research reactors. The database can be accessed by means of a computational program installed in the corporate computer network, named IPEN Intranet, and this access will be allowed only to professionals previously registered. Data updating, editing and searching tasks will be controlled by a system administrator according to IPEN Intranet security rules. The logical model and the physical structure of the database can be represented by an Entity Relationship Model, which is based on the operational routines performed by IPEN - CNEN / SP users. The web application designed for the management of the database is named PSADB. It is being developed with the MySQL database management system and the PHP programming language. Data stored in this database are divided into modules that refer to technical specifications, operating history, maintenance history, and failure events associated with the main components of the nuclear facilities. (author)

  19. Development of a computational database for probabilistic safety assessment of nuclear research reactors

    International Nuclear Information System (INIS)

    Macedo, Vagner S.; Oliveira, Patricia S. Pagetti de; Andrade, Delvonei Alves de

    2015-01-01

    The objective of this work is to describe the database being developed at IPEN - CNEN / SP for application in the Probabilistic Safety Assessment of nuclear research reactors. The database can be accessed by means of a computational program installed in the corporate computer network, named IPEN Intranet, and this access will be allowed only to professionals previously registered. Data updating, editing and searching tasks will be controlled by a system administrator according to IPEN Intranet security rules. The logical model and the physical structure of the database can be represented by an Entity Relationship Model, which is based on the operational routines performed by IPEN - CNEN / SP users. The web application designed for the management of the database is named PSADB. It is being developed with the MySQL database management system and the PHP programming language. Data stored in this database are divided into modules that refer to technical specifications, operating history, maintenance history, and failure events associated with the main components of the nuclear facilities. (author)

  20. Nuclear database management systems

    International Nuclear Information System (INIS)

    Stone, C.; Sutton, R.

    1996-01-01

    The authors are developing software tools for accessing and visualizing nuclear data. MacNuclide was the first software application produced by their group. This application incorporates novel database management and visualization tools into an intuitive interface. The nuclide chart is used to access properties and to display results of searches. Selecting a nuclide in the chart displays a level scheme with tables of basic, radioactive decay, and other properties. All level schemes are interactive, allowing the user to modify the display, move between nuclides, and display entire daughter decay chains

  1. Features of TMR for a Successful Clinical and Research Database

    OpenAIRE

    Pryor, David B.; Stead, William W.; Hammond, W. Edward; Califf, Robert M.; Rosati, Robert A.

    1982-01-01

    A database can be used for clinical practice and for research. The design of the database is important if both uses are to succeed. A clinical database must be efficient and flexible. A research database requires consistent observations recorded in a format which permits complete recall of the experience. In addition, the database should be designed to distinguish between missing data and negative responses, and to minimize transcription errors during the recording process.

  2. A lake-centric geospatial database to guide research and inform management decisions in an Arctic watershed in northern Alaska experiencing climate and land-use changes

    Science.gov (United States)

    Jones, Benjamin M.; Arp, Christopher D.; Whitman, Matthew S.; Nigro, Debora A.; Nitze, Ingmar; Beaver, John; Gadeke, Anne; Zuck, Callie; Liljedahl, Anna K.; Daanen, Ronald; Torvinen, Eric; Fritz, Stacey; Grosse, Guido

    2017-01-01

    Lakes are dominant and diverse landscape features in the Arctic, but conventional land cover classification schemes typically map them as a single uniform class. Here, we present a detailed lake-centric geospatial database for an Arctic watershed in northern Alaska. We developed a GIS dataset consisting of 4362 lakes that provides information on lake morphometry, hydrologic connectivity, surface area dynamics, surrounding terrestrial ecotypes, and other important conditions describing Arctic lakes. Analyzing the geospatial database relative to fish and bird survey data shows relations to lake depth and hydrologic connectivity, which are being used to guide research and aid in the management of aquatic resources in the National Petroleum Reserve in Alaska. Further development of similar geospatial databases is needed to better understand and plan for the impacts of ongoing climate and land-use changes occurring across lake-rich landscapes in the Arctic.

  3. TRENDS: The aeronautical post-test database management system

    Science.gov (United States)

    Bjorkman, W. S.; Bondi, M. J.

    1990-01-01

    TRENDS, an engineering-test database operating system developed by NASA to support rotorcraft flight tests, is described. Capabilities and characteristics of the system are presented, with examples of its use in recalling and analyzing rotorcraft flight-test data from a TRENDS database. The importance of system user-friendliness in gaining users' acceptance is stressed, as is the importance of integrating supporting narrative data with numerical data in engineering-test databases. Considerations relevant to the creation and maintenance of flight-test databases are discussed, and TRENDS' solutions to database management problems are described. Requirements, constraints, and other considerations which led to the system's configuration are discussed, and some of the lessons learned during TRENDS' development are presented. Potential applications of TRENDS to a wide range of aeronautical and other engineering tests are identified.

  4. DOE technology information management system database study report

    Energy Technology Data Exchange (ETDEWEB)

    Widing, M.A.; Blodgett, D.W.; Braun, M.D.; Jusko, M.J.; Keisler, J.M.; Love, R.J.; Robinson, G.L. [Argonne National Lab., IL (United States). Decision and Information Sciences Div.

    1994-11-01

    To support the missions of the US Department of Energy (DOE) Special Technologies Program, Argonne National Laboratory is defining the requirements for an automated software system that will search electronic databases on technology. This report examines the work done and results to date. Argonne studied existing commercial and government sources of technology databases in five general areas: on-line services, patent database sources, government sources, aerospace technology sources, and general technology sources. First, it conducted a preliminary investigation of these sources to obtain information on the content, cost, frequency of updates, and other aspects of their databases. The Laboratory then performed detailed examinations of at least one source in each area. On this basis, Argonne recommended which databases should be incorporated in DOE's Technology Information Management System.

  5. Database Support for Research in Public Administration

    Science.gov (United States)

    Tucker, James Cory

    2005-01-01

    This study examines the extent to which databases support student and faculty research in the area of public administration. A list of journals in public administration, public policy, political science, public budgeting and finance, and other related areas was compared to the journal content list of six business databases. These databases…

  6. Protocol for developing a Database of Zoonotic disease Research in India (DoZooRI).

    Science.gov (United States)

    Chatterjee, Pranab; Bhaumik, Soumyadeep; Chauhan, Abhimanyu Singh; Kakkar, Manish

    2017-12-10

    Zoonotic and emerging infectious diseases (EIDs) represent a public health threat that has been acknowledged only recently, although they have been on the rise for the past several decades. On average, one pathogen has emerged or re-emerged on a global scale every year since the Second World War. Low/middle-income countries such as India bear a significant burden of zoonotic and EIDs. We propose that the creation of a database of published, peer-reviewed research will open up avenues for evidence-based policymaking for targeted prevention and control of zoonoses. A large-scale systematic mapping of the published peer-reviewed research conducted in India will be undertaken. All published research will be included in the database, without any quality screening, to broaden the scope of included studies. Structured search strategies will be developed for priority zoonotic diseases (leptospirosis, rabies, anthrax, brucellosis, cysticercosis, salmonellosis, bovine tuberculosis, Japanese encephalitis and rickettsial infections), and multiple databases will be searched for studies conducted in India. The database will be managed and hosted on a cloud-based platform called Rayyan. Individual studies will be tagged based on key preidentified parameters (disease, study design, study type, location, randomisation status and interventions, host involvement and others, as applicable). The database will incorporate already published studies, obviating the need for additional ethical clearances. The database will be made available online and, in collaboration with multisectoral teams, domains of enquiry will be identified and subsequent research questions will be raised. The database will be queried for these, and the resulting evidence will be analysed and published in peer-reviewed journals. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise

  7. Computerized database management system for breast cancer patients.

    Science.gov (United States)

    Sim, Kok Swee; Chong, Sze Siang; Tso, Chih Ping; Nia, Mohsen Esmaeili; Chong, Aun Kee; Abbas, Siti Fathimah

    2014-01-01

    Data analysis based on breast cancer risk factors such as age, race, breastfeeding, hormone replacement therapy, family history, and obesity was conducted on breast cancer patients using a new enhanced computerized database management system. MySQL (My Structured Query Language) is selected as the database management system to store the patient data collected from hospitals in Malaysia. An automatic calculation tool is embedded in this system to assist the data analysis. The results are plotted automatically, and a user-friendly graphical user interface that can control the MySQL database is developed. Case studies show that the breast cancer incidence rate is highest among Malay women, followed by Chinese and Indian women. The peak age for breast cancer incidence is from 50 to 59 years old. Results suggest that the chance of developing breast cancer is increased in older women and reduced with breastfeeding practice. Weight status might affect breast cancer risk differently. Additional studies are needed to confirm these findings.
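As a hedged illustration of the kind of automatic tabulation such a system might embed, the sketch below bins patients into 10-year age bands with a single GROUP BY query; it uses sqlite3 and made-up rows, not the authors' MySQL schema or data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patient (id INTEGER PRIMARY KEY, race TEXT, age INTEGER)")
conn.executemany("INSERT INTO patient (race, age) VALUES (?, ?)", [
    ("Malay", 52), ("Malay", 57), ("Chinese", 61), ("Indian", 45), ("Malay", 38),
])

# Counts per 10-year age band; SQLite integer division truncates,
# so (52 / 10) * 10 yields the band label 50.
bands = conn.execute("""
    SELECT (age / 10) * 10 AS band, COUNT(*) AS n
    FROM patient GROUP BY band ORDER BY n DESC
""").fetchall()
```

With these invented rows the 50-59 band comes out on top, mirroring the peak-age pattern the abstract reports.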

  8. Heterogeneous Biomedical Database Integration Using a Hybrid Strategy: A p53 Cancer Research Database

    Directory of Open Access Journals (Sweden)

    Vadim Y. Bichutskiy

    2006-01-01

    Full Text Available Complex problems in life science research give rise to multidisciplinary collaboration, and hence, to the need for heterogeneous database integration. The tumor suppressor p53 is mutated in close to 50% of human cancers, and a small drug-like molecule with the ability to restore native function to cancerous p53 mutants is a long-held medical goal of cancer treatment. The Cancer Research DataBase (CRDB) was designed in support of a project to find such small molecules. As a cancer informatics project, the CRDB involved small molecule data, computational docking results, functional assays, and protein structure data. As an example of the hybrid strategy for data integration, it combined the mediation and data warehousing approaches. This paper uses the CRDB to illustrate the hybrid strategy as a viable approach to heterogeneous data integration in biomedicine, and provides a design method for those considering similar systems. More efficient data sharing implies increased productivity and, hopefully, improved chances of success in cancer research. (Code and database schemas are freely downloadable, http://www.igb.uci.edu/research/research.html.)
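A minimal sketch of the hybrid idea, with invented names and in-memory stand-ins for the real sources: stable docking scores are warehoused locally (the warehousing half), while a volatile assay source is queried on demand and merged at query time (the mediation half).

```python
# Materialized local copy: small-molecule docking scores (invented data).
warehouse = {
    "mol-1": {"dock_score": -7.2},
    "mol-2": {"dock_score": -5.1},
}

def live_assay_source(mol_id):
    """Stand-in for a remote functional-assay database queried on demand."""
    return {"mol-1": {"activity": 0.81}, "mol-2": {"activity": 0.12}}[mol_id]

def mediated_query(mol_id):
    # The mediator merges warehoused and live attributes into one record,
    # so callers see a single integrated view of both sources.
    record = dict(warehouse[mol_id])
    record.update(live_assay_source(mol_id))
    return record

hit = mediated_query("mol-1")
```

The design choice mirrors the paper's rationale: copy what is stable and joined often, mediate what changes frequently.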

  9. Computer-Aided Systems Engineering for Flight Research Projects Using a Workgroup Database

    Science.gov (United States)

    Mizukami, Masahi

    2004-01-01

    An online systems engineering tool for flight research projects has been developed through the use of a workgroup database. Capabilities are implemented for typical flight research systems engineering needs in document library, configuration control, hazard analysis, hardware database, requirements management, action item tracking, project team information, and technical performance metrics. Repetitive tasks are automated to reduce workload and errors. Current data and documents are instantly available online and can be worked on collaboratively. Existing forms and conventional processes are used, rather than inventing or changing processes to fit the tool. An integrated tool set offers advantages by automatically cross-referencing data, minimizing redundant data entry, and reducing the number of programs that must be learned. With a simplified approach, significant improvements are attained over existing capabilities for minimal cost. By using a workgroup-level database platform, personnel most directly involved in the project can develop, modify, and maintain the system, thereby saving time and money. As a pilot project, the system has been used to support an in-house flight experiment. Options are proposed for developing and deploying this type of tool on a more extensive basis.

  10. Nuclear power plant reliability database management

    International Nuclear Information System (INIS)

    Meslin, Th.; Aufort, P.

    1996-04-01

    In the framework of the development of a probabilistic safety project on site (notion of living PSA), Saint Laurent des Eaux NPP implements a specific EDF reliability database. The main goals of this project at Saint Laurent des Eaux are: to expand risk analysis and to constitute an effective local basis for thinking about operating safety by requiring the participation of all departments of the power plant: analysis of all potential operating transients, unavailability consequences... that means going further than a simple culture of applying operating rules; to involve nuclear power plant operators in experience feedback and its analysis, especially by following up the behaviour of components and of safety functions; to allow plant safety managers to justify their decisions to the safety authorities regarding waivers, the preventive maintenance programme and operating incident evaluation. Hitting these goals requires feedback data, tools, techniques and the development of skills. The first step is to obtain specific reliability data on the site. Raw data come from the plant maintenance management system, which processes all maintenance activities and keeps records of all component failures and maintenance activities. Plant-specific reliability data are estimated with a Bayesian model which combines these validated raw data with corporate generic data. This approach allows providing reliability data for the main components modelled in the PSA, checking the consistency of the maintenance programme (RCM), and verifying hypotheses made at design time about component reliability. A number of studies related to component reliability, as well as to the decision-making process for specific incident risk evaluation, have been carried out. This paper also provides an overview of the process management set up on site, from raw database to specific reliability database, in compliance with established corporate objectives. (authors). 4 figs
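The Bayesian combination of site experience with generic fleet data is commonly done with a conjugate Gamma-Poisson model; the sketch below shows that standard update with illustrative numbers, not EDF's actual model or parameters.

```python
# Conjugate Gamma-Poisson update: with a Gamma(a, b) prior on the
# failure rate (failures per hour) and n failures observed in T hours,
# the posterior is Gamma(a + n, b + T); its mean is (a + n) / (b + T).
def posterior_failure_rate(a_prior, b_prior, n_failures, hours):
    """Posterior mean of the failure rate under a Gamma(a, b) prior."""
    a_post = a_prior + n_failures
    b_post = b_prior + hours
    return a_post / b_post

# Generic data suggest ~2 failures per 10,000 h (prior mean a/b = 2e-4).
prior_mean = posterior_failure_rate(2.0, 10_000.0, 0, 0.0)
# The site then observes 1 failure in 20,000 h; the estimate shifts
# toward the plant-specific experience: (2+1)/(10000+20000) = 1e-4.
site_mean = posterior_failure_rate(2.0, 10_000.0, 1, 20_000.0)
```

The generic prior keeps the estimate stable when site data are sparse, while accumulating plant hours gradually dominate it, which is the behaviour the abstract describes.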

  12. Atlantic Canada's energy research and development website and database

    International Nuclear Information System (INIS)

    2005-01-01

    Petroleum Research Atlantic Canada maintains a website devoted to energy research and development in Atlantic Canada. The site can be viewed on the world wide web at www.energyresearch.ca. It includes a searchable database with information about researchers in Nova Scotia, their projects and published materials on issues related to hydrocarbons, alternative energy technologies, energy efficiency, climate change, environmental impacts and policy. The website also includes links to research funding agencies, external related databases and related energy organizations around the world. Nova Scotia-based users are invited to submit their academic, private or public research to the site. Before being uploaded into the database, all new information is reviewed and processed by a site administrator. Users are asked to identify their areas of interest according to the following research categories: alternative or renewable energy technologies; climate change; coal; computer applications; economics; energy efficiency; environmental impacts; geology; geomatics; geophysics; health and safety; human factors; hydrocarbons; meteorology and oceanology (metocean) activities; petroleum operations in deep and shallow waters; policy; and power generation and supply. The database can be searched in five ways: by topic, researcher, publication, project or funding agency. refs., tabs., figs

  13. YPED: an integrated bioinformatics suite and database for mass spectrometry-based proteomics research.

    Science.gov (United States)

    Colangelo, Christopher M; Shifman, Mark; Cheung, Kei-Hoi; Stone, Kathryn L; Carriero, Nicholas J; Gulcicek, Erol E; Lam, TuKiet T; Wu, Terence; Bjornson, Robert D; Bruce, Can; Nairn, Angus C; Rinehart, Jesse; Miller, Perry L; Williams, Kenneth R

    2015-02-01

    We report a significantly enhanced bioinformatics suite and database for proteomics research called the Yale Protein Expression Database (YPED) that is used by investigators at more than 300 institutions worldwide. YPED meets the data management, archival, and analysis needs of high-throughput mass spectrometry-based proteomics research, ranging from a single laboratory, to groups of laboratories within and beyond an institution, to the entire proteomics community. The current version is a significant improvement over the first version in that it contains new modules for liquid chromatography-tandem mass spectrometry (LC-MS/MS) database search results, label-based and label-free quantitative proteomic analysis, and several scoring outputs for phosphopeptide site localization. In addition, we have added both peptide and protein comparative analysis tools to enable pairwise analysis of distinct peptides/proteins in each sample and of overlapping peptides/proteins between all samples in multiple datasets. We have also implemented a targeted proteomics module for automated multiple reaction monitoring (MRM)/selective reaction monitoring (SRM) assay development. We have linked YPED's database search results and both label-based and label-free fold-change analysis to the Skyline Panorama repository for online spectra visualization. In addition, we have built enhanced functionality to curate peptide identifications into an MS/MS peptide spectral library for all of our protein database search identification results. Copyright © 2015 The Authors. Production and hosting by Elsevier Ltd. All rights reserved.

  14. A user's manual for the database management system of impact property

    International Nuclear Information System (INIS)

    Ryu, Woo Seok; Park, S. J.; Kong, W. S.; Jun, I.

    2003-06-01

    This manual is written for the management and maintenance of the impact database system, which manages impact property test data. The database, built from data produced by impact property tests, can broaden the application of test results. We can also easily retrieve baseline data from the database when preparing a new experiment, and produce better results by comparing against previous data. To develop the database we must analyze and design the application carefully; after that, we can meet customers' various requirements with the best quality. The impact database system was developed as a web application using JSP (Java Server Pages)

  15. Clinical Databases for Chest Physicians.

    Science.gov (United States)

    Courtwright, Andrew M; Gabriel, Peter E

    2018-04-01

    A clinical database is a repository of patient medical and sociodemographic information focused on one or more specific health conditions or exposures. Although clinical databases may be used for research purposes, their primary goal is to collect and track patient data for quality improvement, quality assurance, and/or actual clinical management. This article aims to provide an introduction and practical advice on the development of small-scale clinical databases for chest physicians and practice groups. Through example projects, we discuss the pros and cons of available technical platforms, including Microsoft Excel and Access, relational database management systems such as Oracle and PostgreSQL, and Research Electronic Data Capture. We consider approaches to deciding the base unit of data collection, creating consensus around variable definitions, and structuring routine clinical care to complement database aims. We conclude with an overview of regulatory and security considerations for clinical databases. Copyright © 2018 American College of Chest Physicians. Published by Elsevier Inc. All rights reserved.

  16. Managing XML Data to optimize Performance into Object-Relational Databases

    Directory of Open Access Journals (Sweden)

    Iuliana BOTHA

    2011-06-01

    Full Text Available This paper proposes some possibilities for managing XML data in order to optimize performance in object-relational databases. It details the storage of XML data in such databases, using an Oracle database for exemplification, and tests some techniques for optimizing queries over XMLType tables, such as indexing and table partitioning.
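A hedged sketch of the indexing idea, transposed to Python and SQLite since Oracle's XMLType is not available there: the XML document is stored whole, while one frequently queried value is extracted into its own indexed column at insert time, so searches avoid re-parsing every stored document. Table and element names are invented.

```python
import sqlite3
import xml.etree.ElementTree as ET

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE xml_doc (
        id      INTEGER PRIMARY KEY,
        author  TEXT,          -- extracted from the XML at insert time
        body    TEXT NOT NULL  -- full XML document, stored intact
    )
""")
conn.execute("CREATE INDEX idx_author ON xml_doc(author)")

def insert_doc(xml_text):
    # Pull the hot field out of the document once, at write time.
    author = ET.fromstring(xml_text).findtext("author")
    conn.execute("INSERT INTO xml_doc (author, body) VALUES (?, ?)",
                 (author, xml_text))

insert_doc("<doc><author>Botha</author><title>XML tuning</title></doc>")
insert_doc("<doc><author>Smith</author><title>Other</title></doc>")

# The search hits the B-tree index instead of parsing stored XML.
found = conn.execute(
    "SELECT COUNT(*) FROM xml_doc WHERE author = ?", ("Botha",)
).fetchone()[0]
```

This is the same trade Oracle's XMLIndex makes: extra work and storage at insert time in exchange for relational-speed lookups over XML content.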

  17. Using Online Databases in Corporate Issues Management.

    Science.gov (United States)

    Thomsen, Steven R.

    1995-01-01

    Finds that corporate public relations practitioners felt they were able, using online database and information services, to intercept issues earlier in the "issue cycle" and thus enable their organizations to develop more "proactionary" or "catalytic" issues management response strategies. (SR)

  18. National Database for Autism Research (NDAR)

    Data.gov (United States)

    U.S. Department of Health & Human Services — National Database for Autism Research (NDAR) is an extensible, scalable informatics platform for austism spectrum disorder-relevant data at all levels of biological...

  19. Influenza research database: an integrated bioinformatics resource for influenza virus research

    Science.gov (United States)

    The Influenza Research Database (IRD) is a U.S. National Institute of Allergy and Infectious Diseases (NIAID)-sponsored Bioinformatics Resource Center dedicated to providing bioinformatics support for influenza virus research. IRD facilitates the research and development of vaccines, diagnostics, an...

  20. Outcomes research in amyotrophic lateral sclerosis: lessons learned from the amyotrophic lateral sclerosis clinical assessment, research, and education database.

    Science.gov (United States)

    Miller, Robert G; Anderson, Fred; Brooks, Benjamin Rix; Mitsumoto, Hiroshi; Bradley, Walter G; Ringel, Steven P

    2009-01-01

    To examine the care of patients with ALS following the publication of the standardized recommendations for the management of patients with amyotrophic lateral sclerosis (ALS) published in 1999 by the American Academy of Neurology. Specific aspects of ALS patient management have been evaluated serially using a national Amyotrophic Lateral Sclerosis Clinical Assessment, Research, and Education (ALS CARE) database to encourage compliance with these recommendations and to assure continuing quality improvement. The most recent analysis of 5,600 patients shows interesting epidemiological observations and treatment trends. Proper management of many ALS symptoms has increased substantially since the first publication of the guidelines, and awareness of pseudobulbar affect has increased. Other recommendations are underutilized: Only 9% undergo percutaneous endoscopic gastrostomy, although this procedure was recommended in 22% of patients; and noninvasive positive pressure ventilation was used by only 21% of patients despite being associated with improved 5-year survival rates. This observational database has been a useful tool in monitoring compliance with the standard of care for patients with ALS and may have resulted in greater adherence to guidelines.

  1. Keeping Track of Our Treasures: Managing Historical Data with Relational Database Software.

    Science.gov (United States)

    Gutmann, Myron P.; And Others

    1989-01-01

    Describes the way a relational database management system manages a large historical data collection project. Shows that such databases are practical to construct. States that the programming tasks involved are not for beginners, but the rewards of having data organized are worthwhile. (GG)

  2. Geoscientific (GEO) database of the Andra Meuse / Haute-Marne research center

    International Nuclear Information System (INIS)

    Tabani, P.; Hemet, P.; Hermand, G.; Delay, J.; Auriere, C.

    2010-01-01

    Document available in extended abstract form only. The GEO database (geo-scientific database of the Meuse/Haute-Marne Center) is a tool developed by Andra with a view to grouping, in a secure computer form, all data related to the acquisition of in situ and laboratory measurements made on solid and fluid samples. This database has three main functions: - Acquisition and management of data and computer files related to geological, geomechanical, hydrogeological and geochemical measurements on solid and fluid samples and in situ measurements (logging, on-sample measurements, geological logs, etc). - Consultation by staff, via Andra's intranet network, for selective viewing of data linked to a borehole and/or a sample and for making computations and graphs on sets of laboratory measurements related to a sample. - Physical management of fluid and solid samples stored in a 'core library', in order to localize a sample, follow up its movement out of the 'core library' to an organization, and carry out regular inventories. The GEO database is a relational Oracle database. It is installed on a data server which stores the information and manages the users' transactions. Users can consult, download and exploit data from any computer connected to the Andra network or the Internet. Access rights are managed through a login/password. Four geoscientific applications are linked to the GEO database; among them is: - The Geosciences portal: The Geosciences portal is a web intranet application accessible from the Andra network. It does not require a particular installation on the client side and is accessible through an Internet browser. A SQL Server Express database manages the users and access rights to the application. This application is used for the acquisition of hydrogeological and geochemical data collected in the field and on fluid samples, as well as data related to scientific work carried out at surface level or in drifts

  3. JDD, Inc. Database

    Science.gov (United States)

    Miller, David A., Jr.

    2004-01-01

    JDD, Inc. is a maintenance and custodial contracting company whose mission is to provide their clients in the private and government sectors "quality construction, construction management and cleaning services in the most efficient and cost effective manners" (JDD, Inc. Mission Statement). This company provides facilities support for Fort Riley in Fort Riley, Kansas and the NASA John H. Glenn Research Center at Lewis Field here in Cleveland, Ohio. JDD, Inc. is owned and operated by James Vaughn, who started as a painter at NASA Glenn and has been working here for the past seventeen years. This summer I worked under Devan Anderson, who is the safety manager for JDD, Inc. in the Logistics and Technical Information Division at Glenn Research Center. The LTID provides all transportation, secretarial, and security needs and contract management of these various services for the center. As a safety manager, my mentor provides Occupational Safety and Health Administration (OSHA) compliance for all JDD, Inc. employees and handles all other issues involving job safety (Environmental Protection Agency issues, workers' compensation, safety and health training). My summer assignment was not considered "groundbreaking research" like that of many other summer interns in the past, but it is just as important and beneficial to JDD, Inc. I initially created a database using Microsoft Excel to classify and categorize data pertaining to the numerous safety training certification courses instructed by our safety manager during the course of the fiscal year. This early portion of the database consisted only of data from the training certification courses (training field index, employees who were present at these training courses, and who was absent). Once I completed this phase of the database, I decided to expand it and add as many dimensions to it as possible. Throughout the last seven weeks, I have been compiling more data from day-to-day operations and been adding the

  4. Evaluation of relational and NoSQL database architectures to manage genomic annotations.

    Science.gov (United States)

    Schulz, Wade L; Nelson, Brent G; Felker, Donn K; Durant, Thomas J S; Torres, Richard

    2016-12-01

    While the adoption of next generation sequencing has rapidly expanded, the informatics infrastructure used to manage the data generated by this technology has not kept pace. Historically, relational databases have provided much of the framework for data storage and retrieval. Newer technologies based on NoSQL architectures may provide significant advantages in storage and query efficiency, thereby reducing the cost of data management. But their relative advantage when applied to biomedical data sets, such as genetic data, has not been characterized. To this end, we compared the storage, indexing, and query efficiency of a common relational database (MySQL), a document-oriented NoSQL database (MongoDB), and a relational database with NoSQL support (PostgreSQL). When used to store genomic annotations from the dbSNP database, we found the NoSQL architectures to outperform traditional, relational models for speed of data storage, indexing, and query retrieval in nearly every operation. These findings strongly support the use of novel database technologies to improve the efficiency of data management within the biological sciences. Copyright © 2016 Elsevier Inc. All rights reserved.
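The relational-versus-document contrast the authors measured can be sketched in miniature; here SQLite and a plain JSON dict stand in for MySQL and MongoDB, and the rsID and fields are invented rather than taken from dbSNP.

```python
import json
import sqlite3

# (a) Normalized relational storage: one row per variant, one row per
# allele, reassembled with a join at query time.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE variant (rsid TEXT PRIMARY KEY, chrom TEXT, pos INTEGER);
CREATE TABLE variant_allele (rsid TEXT, allele TEXT);
""")
conn.execute("INSERT INTO variant VALUES ('rs123', '1', 101)")
conn.executemany("INSERT INTO variant_allele VALUES (?, ?)",
                 [("rs123", "A"), ("rs123", "G")])
alleles = [a for (a,) in conn.execute(
    "SELECT allele FROM variant_allele WHERE rsid = 'rs123'")]

# (b) Document-oriented storage: the nested annotation stays together
# as one record, retrieved in a single keyed lookup.
doc_store = {"rs123": json.dumps(
    {"chrom": "1", "pos": 101, "alleles": ["A", "G"]})}
doc = json.loads(doc_store["rs123"])
```

Real engines differ chiefly in indexing, concurrency, and scale; what the toy shows is the structural difference that drives the paper's storage and query-efficiency comparison.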

  5. Human health risk assessment database, "the NHSRC toxicity value database": supporting the risk assessment process at US EPA's National Homeland Security Research Center.

    Science.gov (United States)

    Moudgal, Chandrika J; Garrahan, Kevin; Brady-Roberts, Eletha; Gavrelis, Naida; Arbogast, Michelle; Dun, Sarah

    2008-11-15

    The toxicity value database of the United States Environmental Protection Agency's (EPA) National Homeland Security Research Center has been in development since 2004. The toxicity value database includes a compilation of agent property, toxicity, dose-response, and health effects data for 96 agents: 84 chemical and radiological agents and 12 biotoxins. The database is populated with multiple toxicity benchmark values and agent property information from secondary sources, with web links to the secondary sources, where available. A selected set of primary literature citations and associated dose-response data are also included. The toxicity value database offers a powerful means to quickly and efficiently gather pertinent toxicity and dose-response data for a number of agents that are of concern to the nation's security. This database, in conjunction with other tools, will play an important role in understanding human health risks, and will provide a means for risk assessors and managers to make quick and informed decisions on the potential health risks and determine appropriate responses (e.g., cleanup) to agent release. A final, stand-alone MS Access working version of the toxicity value database was completed in November 2007.

  6. Human health risk assessment database, 'the NHSRC toxicity value database': Supporting the risk assessment process at US EPA's National Homeland Security Research Center

    International Nuclear Information System (INIS)

    Moudgal, Chandrika J.; Garrahan, Kevin; Brady-Roberts, Eletha; Gavrelis, Naida; Arbogast, Michelle; Dun, Sarah

    2008-01-01

    The toxicity value database of the United States Environmental Protection Agency's (EPA) National Homeland Security Research Center has been in development since 2004. The toxicity value database includes a compilation of agent property, toxicity, dose-response, and health effects data for 96 agents: 84 chemical and radiological agents and 12 biotoxins. The database is populated with multiple toxicity benchmark values and agent property information from secondary sources, with web links to the secondary sources, where available. A selected set of primary literature citations and associated dose-response data are also included. The toxicity value database offers a powerful means to quickly and efficiently gather pertinent toxicity and dose-response data for a number of agents that are of concern to the nation's security. This database, in conjunction with other tools, will play an important role in understanding human health risks, and will provide a means for risk assessors and managers to make quick and informed decisions on the potential health risks and determine appropriate responses (e.g., cleanup) to agent release. A final, stand-alone MS Access working version of the toxicity value database was completed in November 2007.

  7. Toward public volume database management: a case study of NOVA, the National Online Volumetric Archive

    Science.gov (United States)

    Fletcher, Alex; Yoo, Terry S.

    2004-04-01

    Public databases today can be constructed with a wide variety of authoring and management structures. The widespread appeal of Internet search engines suggests that public information be made open and available to common search strategies, making accessible information that would otherwise be hidden by the infrastructure and software interfaces of a traditional database management system. We present the construction and organizational details for managing NOVA, the National Online Volumetric Archive. As an archival effort of the Visible Human Project for supporting medical visualization research, archiving 3D multimodal radiological teaching files, and enhancing medical education with volumetric data, our overall database structure is simplified; archives grow by accruing information, but seldom have to modify, delete, or overwrite stored records. NOVA is being constructed and populated so that it is transparent to the Internet; that is, much of its internal structure is mirrored in HTML allowing internet search engines to investigate, catalog, and link directly to the deep relational structure of the collection index. The key organizational concept for NOVA is the Image Content Group (ICG), an indexing strategy for cataloging incoming data as a set structure rather than by keyword management. These groups are managed through a series of XML files and authoring scripts. We cover the motivation for Image Content Groups, their overall construction, authorship, and management in XML, and the pilot results for creating public data repositories using this strategy.

  8. Customized laboratory information management system for a clinical and research leukemia cytogenetics laboratory.

    Science.gov (United States)

    Bakshi, Sonal R; Shukla, Shilin N; Shah, Pankaj M

    2009-01-01

    We developed a Microsoft Access-based laboratory management system to facilitate database management for leukemia patients referred for cytogenetic tests, with regard to karyotyping and fluorescence in situ hybridization (FISH). The database is custom-made for entry of patient data, clinical details, sample details, and cytogenetic test results, and for data mining in various ongoing research areas. A number of clinical research laboratory-related tasks are carried out faster using specific "queries." The tasks include tracking the clinical progression of a particular patient over multiple visits, treatment response, morphological and cytogenetic response, survival time, automatic grouping of patients by inclusion criteria in a research project, tracking of the various sample-processing steps, turn-around time, and revenue generated. Since 2005 we have collected over 5,000 samples. The database is easily updated and is being adapted for various data maintenance and mining needs.

  9. Development of the ageing management database of PUSPATI TRIGA reactor

    Energy Technology Data Exchange (ETDEWEB)

    Ramli, Nurhayati, E-mail: nurhayati@nm.gov.my; Tom, Phongsakorn Prak; Husain, Nurfazila; Farid, Mohd Fairus Abd; Ramli, Shaharum [Reactor Technology Centre, Malaysian Nuclear Agency, MOSTI, Bangi, 43000 Kajang, Selangor (Malaysia); Maskin, Mazleha [Science Program, Faculty of Science and Technology, Universiti Kebangsaan Malaysia, Selangor (Malaysia); Adnan, Amirul Syazwan; Abidin, Nurul Husna Zainal [Faculty of Petroleum and Renewable Energy Engineering, Universiti Teknologi Malaysia (Malaysia)

    2016-01-22

    Since its first criticality in 1982, the PUSPATI TRIGA Reactor (RTP) has been operated for more than 30 years. As RTP becomes older, ageing problems have emerged as prominent issues. To address them, an Ageing Management (AgeM) database for managing related ageing matters was systematically developed. This paper presents the development of the AgeM database, taking into account all major RTP systems, structures and components (SSCs) and the ageing mechanisms of these SSCs as tracked through the system surveillance program.

  10. Land, Oil Spill, and Waste Management Research Publications in the Science Inventory

    Science.gov (United States)

    Resources from the Science Inventory database of EPA's Office of Research and Development, as well as EPA's Science Matters journal, include research on managing contaminated sites and ground water modeling and decontamination technologies.

  11. Obstetrical ultrasound data-base management system by using personal computer

    International Nuclear Information System (INIS)

    Jeon, Hae Jeong; Park, Jeong Hee; Kim, Soo Nyung

    1993-01-01

    A computer program that performs obstetric calculations, written in the Clipper language and using data from ultrasonography, was developed for the personal computer. It was designed for fast assessment of fetal development and prediction of gestational age and weight from ultrasonographic measurements, which included biparietal diameter, femur length, gestational sac, occipito-frontal diameter, abdominal diameter, etc. The Obstetrical-Ultrasound Data-Base Management System was tested for its performance and proved very useful in patient management, with its convenient data filing, easy retrieval of previous reports, prompt but accurate estimation of fetal growth and skeletal anomaly, and production of equations and growth curves for pregnant women.

  12. The ATLAS TAGS database distribution and management - Operational challenges of a multi-terabyte distributed database

    International Nuclear Information System (INIS)

    Viegas, F; Nairz, A; Goossens, L; Malon, D; Cranshaw, J; Dimitrov, G; Nowak, M; Gamboa, C; Gallas, E; Wong, A; Vinek, E

    2010-01-01

    The TAG files store summary event quantities that allow a quick selection of interesting events. These data are produced at a nominal rate of 200 Hz and uploaded into a relational database for access from websites and other tools. The estimated database volume is 6 TB per year, making it the largest application running on the ATLAS relational databases, at CERN and at other voluntary sites. The sheer volume and high rate of production make this application a challenge to data and resource management in many aspects. This paper will focus on the operational challenges of this system. These include: uploading the data from files to the CERN and remote-site databases; distributing the TAG metadata that is essential to guide the user through event selection; and controlling resource usage of the database, from the user query load to the strategy for cleaning and archiving old TAG data.
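The role of a TAG table, one row of summary quantities per event queried to pre-select interesting events, can be sketched as follows. The column names and selection cuts below are invented for illustration; they are not the actual ATLAS TAG schema.

```python
import sqlite3

# Schematic TAG-style table: one summary row per event, queried to
# pre-select interesting events (columns and cuts invented, not the
# actual ATLAS TAG schema).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE tag (run INTEGER, event INTEGER, "
           "n_muons INTEGER, missing_et REAL)")
db.executemany("INSERT INTO tag VALUES (?, ?, ?, ?)", [
    (2001, 1, 0, 12.5),
    (2001, 2, 2, 55.0),
    (2002, 3, 1, 80.3),
])
# Pre-selection: events with at least one muon and large missing energy.
selected = db.execute(
    "SELECT run, event FROM tag "
    "WHERE n_muons >= 1 AND missing_et > 50 ORDER BY run, event"
).fetchall()
print(selected)  # [(2001, 2), (2002, 3)]
```

Because only the small summary rows are scanned, such a query avoids touching the full event data, which is what makes TAG-based selection quick.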

  13. Privacy protection and public goods: building a genetic database for health research in Newfoundland and Labrador.

    Science.gov (United States)

    Kosseim, Patricia; Pullman, Daryl; Perrot-Daley, Astrid; Hodgkinson, Kathy; Street, Catherine; Rahman, Proton

    2013-01-01

    To provide a legal and ethical analysis of some of the implementation challenges faced by the Population Therapeutics Research Group (PTRG) at Memorial University (Canada), in using genealogical information offered by individuals for its genetics research database. This paper describes the unique historical and genetic characteristics of the Newfoundland and Labrador founder population, which gave rise to the opportunity for PTRG to build the Newfoundland Genealogy Database containing digitized records of all pre-confederation (1949) census records of the Newfoundland founder population. In addition to building the database, PTRG has developed the Heritability Analytics Infrastructure, a data management structure that stores genotype, phenotype, and pedigree information in a single database, and custom linkage software (KINNECT) to perform pedigree linkages on the genealogy database. A newly adopted legal regime in Newfoundland and Labrador is discussed. It incorporates health privacy legislation with a unique research ethics statute governing the composition and activities of research ethics boards and, for the first time in Canada, elevating the status of national research ethics guidelines into law. The discussion looks at this integration of legal and ethical principles which provides a flexible and seamless framework for balancing the privacy rights and welfare interests of individuals, families, and larger societies in the creation and use of research data infrastructures as public goods. The complementary legal and ethical frameworks that now coexist in Newfoundland and Labrador provide the legislative authority, ethical legitimacy, and practical flexibility needed to find a workable balance between privacy interests and public goods. Such an approach may also be instructive for other jurisdictions as they seek to construct and use biobanks and related research platforms for genetic research.

  14. The Vocational Guidance Research Database: A Scientometric Approach

    Science.gov (United States)

    Flores-Buils, Raquel; Gil-Beltran, Jose Manuel; Caballer-Miedes, Antonio; Martinez-Martinez, Miguel Angel

    2012-01-01

    The scientometric study of scientific output through publications in specialized journals cannot be undertaken exclusively with the databases available today. For this reason, the objective of this article is to introduce the "Base de Datos de Investigacion en Orientacion Vocacional" [Vocational Guidance Research Database], based on the…

  15. Military Personnel: DOD Has Processes for Operating and Managing Its Sexual Assault Incident Database

    Science.gov (United States)

    2017-01-01

    MILITARY PERSONNEL: DOD Has Processes for Operating and Managing Its Sexual Assault Incident Database. Report to... to DSAID's system speed and ease of use; interfaces with MCIO databases; utility as a case management tool; and users' ability to query data and... Managing Its Sexual Assault Incident Database. What GAO Found: As of October 2013, the Department of Defense's (DOD) Defense Sexual Assault Incident

  16. System factors influencing utilisation of Research4Life databases by ...

    African Journals Online (AJOL)

    This is a comprehensive investigation of the influence of system factors on the utilisation of Research4Life databases. It is part of a doctoral dissertation. Research4Life databases are innovative technologies being investigated in a new context: utilisation by NARI scientists for research. The study adopted the descriptive ...

  17. A new Volcanic managEment Risk Database desIgn (VERDI): Application to El Hierro Island (Canary Islands)

    Science.gov (United States)

    Bartolini, S.; Becerril, L.; Martí, J.

    2014-11-01

    One of the most important issues in modern volcanology is the assessment of volcanic risk, which will depend - among other factors - on both the quantity and quality of the available data and an optimum storage mechanism. This will require the design of purpose-built databases that take into account data format and availability and afford easy data storage and sharing, and will provide for a more complete risk assessment that combines different analyses but avoids any duplication of information. Data contained in any such database should facilitate spatial and temporal analysis that will (1) produce probabilistic hazard models for future vent opening, (2) simulate volcanic hazards and (3) assess their socio-economic impact. We describe the design of a new spatial database structure, VERDI (Volcanic managEment Risk Database desIgn), which allows different types of data, including geological, volcanological, meteorological, monitoring and socio-economic information, to be manipulated, organized and managed. A central aim is to ensure that VERDI serves as a tool for connecting different kinds of data sources, GIS platforms and modeling applications. We present an overview of the database design, its components and the attributes that play an important role in the database model. The potential of the VERDI structure and the possibilities it offers for data organization are shown through its application to El Hierro (Canary Islands). The VERDI database will provide scientists and decision makers with a useful tool to assist in volcanic risk assessment and management.

  18. Using a database to manage resolution of comments on standards

    International Nuclear Information System (INIS)

    Holloran, R.W.; Kelley, R.P.

    1995-01-01

    Features of production systems that would enhance the development and implementation of procedures and other standards were first suggested in 1988; later work described how a database could provide the features sought for managing the content of structured documents such as standards and procedures. This paper describes enhancements to the database that manage the more complex links associated with the resolution of comments. Displaying the linked information on a computer display aids comment resolvers. A hardcopy report generated by the database permits others to independently evaluate the resolution of comments in context with the original text of the standard, the comment, and the revised text of the standard. Because the links are maintained by the database, consistency between the agreed-upon resolutions and the text of the standard can be maintained throughout subsequent reviews of the standard. Each of the links is bidirectional; i.e., the relationship between any two documents can be viewed from the perspective of either document.
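The bidirectional links described above can be modeled as a pair of mutually consistent mappings, one per direction. The sketch below is a generic illustration with invented class and identifier names, not the paper's actual database schema.

```python
# Generic sketch of bidirectional comment-to-section links, navigable
# from either side (identifiers invented; not the paper's schema).
class LinkStore:
    def __init__(self):
        self.comment_to_section = {}
        self.section_to_comments = {}

    def link(self, comment_id, section_id):
        # Record the relationship in both directions at once so the two
        # mappings can never fall out of step.
        self.comment_to_section[comment_id] = section_id
        self.section_to_comments.setdefault(section_id, []).append(comment_id)

links = LinkStore()
links.link("C-12", "4.2.1")
links.link("C-13", "4.2.1")
print(links.section_to_comments["4.2.1"])  # ['C-12', 'C-13']
print(links.comment_to_section["C-13"])    # 4.2.1
```

Updating both mappings in a single operation is what lets the database guarantee that a resolution viewed from the comment side and from the standard side always agree.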

  19. Guide to Research Databases at IDRC

    International Development Research Centre (IDRC) Digital Library (Canada)

    Mélanie Brunet

    ...responsibility of each user to ensure that he or she uses ... a collection of documents and research outputs generated as part of projects ... Although the commercial databases also have a French or Spanish interface, the content is mainly in English.

  20. The ATLAS TAGS database distribution and management - Operational challenges of a multi-terabyte distributed database

    Energy Technology Data Exchange (ETDEWEB)

    Viegas, F; Nairz, A; Goossens, L [CERN, CH-1211 Geneve 23 (Switzerland); Malon, D; Cranshaw, J [Argonne National Laboratory, 9700 S. Cass Avenue, Argonne, IL 60439 (United States); Dimitrov, G [DESY, D-22603 Hamburg (Germany); Nowak, M; Gamboa, C [Brookhaven National Laboratory, PO Box 5000 Upton, NY 11973-5000 (United States); Gallas, E [University of Oxford, Denys Wilkinson Building, Keble Road, Oxford OX1 3RH (United Kingdom); Wong, A [Triumf, 4004 Wesbrook Mall, Vancouver, BC, V6T 2A3 (Canada); Vinek, E [University of Vienna, Dr.-Karl-Lueger-Ring 1, 1010 Vienna (Austria)

    2010-04-01

    The TAG files store summary event quantities that allow a quick selection of interesting events. These data are produced at a nominal rate of 200 Hz and uploaded into a relational database for access from websites and other tools. The estimated database volume is 6 TB per year, making it the largest application running on the ATLAS relational databases, at CERN and at other voluntary sites. The sheer volume and high rate of production make this application a challenge to data and resource management in many aspects. This paper will focus on the operational challenges of this system. These include: uploading the data from files to the CERN and remote-site databases; distributing the TAG metadata that is essential to guide the user through event selection; and controlling resource usage of the database, from the user query load to the strategy for cleaning and archiving old TAG data.

  1. An Institutional Approach to Developing Research Data Management Infrastructure

    Directory of Open Access Journals (Sweden)

    James A. J. Wilson

    2011-10-01

    Full Text Available This article outlines the work that the University of Oxford is undertaking to implement a coordinated data management infrastructure. The rationale for the approach being taken by Oxford is presented, with particular attention paid to the role of each service division. This is followed by a consideration of the relative advantages and disadvantages of institutional data repositories, as opposed to national or international data centres. The article then focuses on two ongoing JISC-funded projects, ‘Embedding Institutional Data Curation Services in Research’ (Eidcsr) and ‘Supporting Data Management Infrastructure for the Humanities’ (Sudamih). Both projects are intra-institutional collaborations and involve working with researchers to develop particular aspects of infrastructure, including: University policy, systems for the preservation and documentation of research data, training and support, software tools for the visualisation of large images, and creating and sharing databases via the Web (Database as a Service).

  2. Functionally Graded Materials Database

    Science.gov (United States)

    Kisara, Katsuto; Konno, Tomomi; Niino, Masayuki

    2008-02-01

    The Functionally Graded Materials Database (hereinafter referred to as the FGMs Database) was opened to the public via the Internet in October 2002, and since then it has been managed by the Japan Aerospace Exploration Agency (JAXA). As of October 2006, the database included 1,703 research information entries, with data on 2,429 researchers, 509 institutions and so on. Reading materials such as "Applicability of FGMs Technology to Space Plane" and "FGMs Application to Space Solar Power System (SSPS)" were prepared in FY 2004 and 2005, respectively. The English version of "FGMs Application to Space Solar Power System (SSPS)" is now under preparation. This paper explains the FGMs Database, describing the research information data, the sitemap and how to use it. From the access analysis, user access results and users' interests are discussed.

  3. Integrated Space Asset Management Database and Modeling

    Science.gov (United States)

    MacLeod, Todd; Gagliano, Larry; Percy, Thomas; Mason, Shane

    2015-01-01

    Effective Space Asset Management is one key to addressing the ever-growing issue of space congestion. It is imperative that agencies around the world have access to data regarding the numerous active assets and pieces of space junk currently tracked in orbit around the Earth. At the center of this issue is the effective management of many types of data related to orbiting objects. As the population of tracked objects grows, so too should the data management structure used to catalog technical specifications, orbital information, and metadata related to those populations. Marshall Space Flight Center's Space Asset Management Database (SAM-D) was implemented in order to effectively catalog a broad set of data related to known objects in space by ingesting information from a variety of databases and processing that data into useful technical information. Using the universal NORAD number as a unique identifier, SAM-D processes two-line element data into orbital characteristics and cross-references this technical data with metadata related to functional status, country of ownership, and application category. SAM-D began as an Excel spreadsheet and was later upgraded to an Access database. While SAM-D performs its task very well, it is limited by its current platform and is not available outside of the local user base. Further, while modeling and simulation can be powerful tools to exploit the information contained in SAM-D, the current system does not allow proper integration options for combining the data with both legacy and new M&S tools. This paper provides a summary of SAM-D development efforts to date and outlines a proposed data management infrastructure that extends SAM-D to support the larger data sets to be generated. A service-oriented architecture model using an information sharing platform named SIMON will allow it to easily expand to incorporate new capabilities, including advanced analytics, M&S tools, fusion techniques and user interface for...
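The processing of two-line element data into orbital characteristics mentioned above rests on Kepler's third law. As a hedged sketch (standard textbook astrodynamics, not the actual SAM-D code), the mean motion read from a TLE can be converted to an orbital period and semi-major axis:

```python
import math

# Standard Kepler relation converting a TLE mean motion (rev/day) to an
# orbital period and semi-major axis (textbook astrodynamics, not the
# actual SAM-D code).
MU_EARTH = 398600.4418  # km^3/s^2, Earth's gravitational parameter

def orbit_from_mean_motion(revs_per_day):
    n = revs_per_day * 2.0 * math.pi / 86400.0      # rad/s
    period_s = 2.0 * math.pi / n                    # orbital period
    semi_major_km = (MU_EARTH / n ** 2) ** (1.0 / 3.0)
    return period_s, semi_major_km

period, a = orbit_from_mean_motion(15.5)  # a typical LEO mean motion
print(round(period), round(a))            # ~93-minute orbit near 6800 km
```

A cross-reference step like SAM-D's would then attach metadata (ownership, status) to the NORAD number whose TLE produced these values.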

  4. Development of a Relational Database for Learning Management Systems

    Science.gov (United States)

    Deperlioglu, Omer; Sarpkaya, Yilmaz; Ergun, Ertugrul

    2011-01-01

    In today's world, Web-Based Distance Education Systems have a great importance. Web-based Distance Education Systems are usually known as Learning Management Systems (LMS). In this article, a database design, which was developed to create an educational institution as a Learning Management System, is described. In this sense, developed Learning…

  5. Computer application for database management and networking of service radio physics

    International Nuclear Information System (INIS)

    Ferrando Sanchez, A.; Cabello Murillo, E.; Diaz Fuentes, R.; Castro Novais, J.; Clemente Gutierrez, F.; Casa de Juan, M. A. de la; Adaimi Hernandez, P.

    2011-01-01

    Databases in quality control prove to be a powerful tool for recording, management and statistical process control. Developed in a Windows environment under Access (Microsoft Office), our service implements this philosophy on the centre's computer network. A computer acting as the server provides the database to the treatment units, which record daily quality control measurements and incidents. To avoid the common problems of shortcuts that stop working after data migration, possible duplication of data, and erroneous data or data loss caused by errors in network connections, we proceeded to manage connections and database access centrally, easing maintenance and use for all service personnel.

  6. Metabolonote: A wiki-based database for managing hierarchical metadata of metabolome analyses

    Directory of Open Access Journals (Sweden)

    Takeshi eAra

    2015-04-01

    Full Text Available Metabolomics—technology for comprehensive detection of small molecules in an organism—lags behind the other omics in terms of publication and dissemination of experimental data. Among the reasons for this are the difficulty of precisely recording information about complicated analytical experiments (metadata), the existence of various databases with their own metadata descriptions, and the low reusability of published data, with the result that submitters (the researchers who generate the data) are insufficiently motivated. To tackle these issues, we developed Metabolonote, a Semantic MediaWiki-based database designed specifically for managing metabolomic metadata. We also defined a metadata and data description format, called TogoMD, with an ID system that is required for unique access to each level of the tree-structured metadata, such as study purpose, sample, analytical method, and data analysis. Separation of the management of metadata from that of data, and permission to attach related information to the metadata, provide advantages for submitters, readers, and database developers. The metadata are enriched with information such as links to comparable data, thereby functioning as a hub of related data resources. They also enhance not only readers' understanding and use of data, but also submitters' motivation to publish the data. The metadata are computationally shared among other systems via APIs, which facilitates the construction of novel databases by database developers. A permission system that allows publication of immature metadata, together with feedback from readers, also helps submitters to improve their metadata. Hence, this aspect of Metabolonote, as a metadata preparation tool, is complementary to high-quality and persistent data repositories such as MetaboLights. A total of 808 metadata records for analyzed data obtained from 35 biological species are currently published. Metabolonote and related tools are available free of cost at http://metabolonote.kazusa.or.jp/.
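The tree-structured, uniquely addressable metadata described above can be sketched with hierarchical IDs. The ID format and field names below are invented for illustration and may differ from the real TogoMD scheme.

```python
# Sketch of hierarchical metadata IDs: every level of the
# study -> sample -> method -> data tree gets a unique, addressable ID
# (the ID format and field names are invented and may differ from the
# real TogoMD scheme).
metadata = {}

def register(parent_id, suffix, record):
    node_id = f"{parent_id}_{suffix}" if parent_id else suffix
    metadata[node_id] = record
    return node_id

study = register(None, "SE1", {"title": "Leaf metabolome survey"})
sample = register(study, "S01", {"organism": "Arabidopsis thaliana"})
method = register(sample, "M01", {"instrument": "LC-MS"})
data = register(method, "D01", {"file": "run01.mzML"})

print(data)  # SE1_S01_M01_D01 -- uniquely addresses one analyzed dataset
```

Because each ID embeds its ancestors' IDs, any level of the tree can be reached or linked to directly, which is the property the TogoMD ID system provides.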

  7. The Cocoa Shop: A Database Management Case

    Science.gov (United States)

    Pratt, Renée M. E.; Smatt, Cindi T.

    2015-01-01

    This is an example of a real-world applicable case study, which includes background information on a small local business (i.e., TCS), description of functional business requirements, and sample data. Students are asked to design and develop a database to improve the management of the company's customers, products, and purchases by emphasizing…

  8. Privacy protection and public goods: building a genetic database for health research in Newfoundland and Labrador

    Science.gov (United States)

    Pullman, Daryl; Perrot-Daley, Astrid; Hodgkinson, Kathy; Street, Catherine; Rahman, Proton

    2013-01-01

    Objective To provide a legal and ethical analysis of some of the implementation challenges faced by the Population Therapeutics Research Group (PTRG) at Memorial University (Canada), in using genealogical information offered by individuals for its genetics research database. Materials and methods This paper describes the unique historical and genetic characteristics of the Newfoundland and Labrador founder population, which gave rise to the opportunity for PTRG to build the Newfoundland Genealogy Database containing digitized records of all pre-confederation (1949) census records of the Newfoundland founder population. In addition to building the database, PTRG has developed the Heritability Analytics Infrastructure, a data management structure that stores genotype, phenotype, and pedigree information in a single database, and custom linkage software (KINNECT) to perform pedigree linkages on the genealogy database. Discussion A newly adopted legal regime in Newfoundland and Labrador is discussed. It incorporates health privacy legislation with a unique research ethics statute governing the composition and activities of research ethics boards and, for the first time in Canada, elevating the status of national research ethics guidelines into law. The discussion looks at this integration of legal and ethical principles which provides a flexible and seamless framework for balancing the privacy rights and welfare interests of individuals, families, and larger societies in the creation and use of research data infrastructures as public goods. Conclusion The complementary legal and ethical frameworks that now coexist in Newfoundland and Labrador provide the legislative authority, ethical legitimacy, and practical flexibility needed to find a workable balance between privacy interests and public goods. Such an approach may also be instructive for other jurisdictions as they seek to construct and use biobanks and related research platforms for genetic research.

  9. Design and implementation of component reliability database management system for NPP

    International Nuclear Information System (INIS)

    Kim, S. H.; Jung, J. K.; Choi, S. Y.; Lee, Y. H.; Han, S. H.

    1999-01-01

    KAERI is constructing a component reliability database for Korean nuclear power plants. This paper describes the development of the data management tool that operates on the component reliability database. The tool runs in an intranet environment and is used to analyze failure modes and failure severity in order to compute component failure rates. We are now developing additional modules to manage operation history and test history, along with algorithms for the calculation of component failure history and reliability.
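The abstract does not give the system's actual algorithms, but the standard point estimate of a component failure rate, observed failures divided by cumulative operating time, can be sketched as:

```python
# Standard point estimate of a component failure rate: observed failures
# divided by cumulative operating time (the abstract does not give the
# system's actual algorithms; this is the textbook estimate).
def failure_rate(n_failures, operating_hours):
    if operating_hours <= 0:
        raise ValueError("operating time must be positive")
    return n_failures / operating_hours

# e.g. 3 recorded failures over 150,000 component-hours
rate = failure_rate(3, 150_000)
print(rate)  # 2e-05 failures per hour
```

A reliability database's value lies in supplying the two inputs reliably: failure counts from incident records and exposure time from operation history.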

  10. The research of network database security technology based on web service

    Science.gov (United States)

    Meng, Fanxing; Wen, Xiumei; Gao, Liting; Pang, Hui; Wang, Qinglin

    2013-03-01

    Database technology is one of the most widely applied computer technologies, and its security is becoming more and more important. This paper introduces database security and network database security levels, studies the security technology of the network database, analyzes a sub-key encryption algorithm in detail, and applies this algorithm successfully to a campus one-card system. The realization process of the encryption algorithm is discussed; the method is widely used as a reference in many fields, particularly in management information system security and e-commerce.
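The paper's specific sub-key algorithm is not given in the abstract. A generic illustration of the sub-key idea is an XOR split, a common textbook construction (not necessarily the authors'), in which the field key can be recovered only by combining every sub-key:

```python
import secrets

# Generic XOR sub-key split (a common textbook construction, not
# necessarily the paper's algorithm): the field key is recoverable only
# when every sub-key is combined.
def split_key(key, n_parts):
    parts = [secrets.token_bytes(len(key)) for _ in range(n_parts - 1)]
    final = key
    for p in parts:                      # fold the random parts into the key
        final = bytes(a ^ b for a, b in zip(final, p))
    parts.append(final)
    return parts

def combine_key(parts):
    key = bytes(len(parts[0]))           # all-zero accumulator
    for p in parts:
        key = bytes(a ^ b for a, b in zip(key, p))
    return key

master = secrets.token_bytes(16)         # per-field encryption key
subkeys = split_key(master, 3)
print(combine_key(subkeys) == master)    # True: all sub-keys recover it
```

Distributing the sub-keys across roles or systems means no single party holds the key to an encrypted database field, which is the security property sub-key schemes aim for.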

  11. USING THE INTERNATIONAL SCIENTOMETRIC DATABASES OF OPEN ACCESS IN SCIENTIFIC RESEARCH

    Directory of Open Access Journals (Sweden)

    O. Galchevska

    2015-05-01

    Full Text Available The article considers the use of international scientometric databases in research activities as web-oriented resources and services that serve as means of publishing and disseminating research results. Selection criteria for open-access scientometric platforms for conducting scientific research are emphasized: coverage of Ukrainian scientific periodicals and publications, data accuracy, general characteristics of the international scientometric database, and technical and functional characteristics and their indexes. A review of the most popular open-access scientometric databases is given: Google Scholar, the Russian Science Citation Index (RSCI), Scholarometer, Index Copernicus (IC), and Microsoft Academic Search. The advantages of using the international scientometric database Google Scholar in conducting scientific research are determined, together with prospects for research into separating out the cloud-based information and analytical services of the system.

  12. INIST: databases reorientation

    International Nuclear Information System (INIS)

    Bidet, J.C.

    1995-01-01

    INIST is a CNRS (Centre National de la Recherche Scientifique) laboratory devoted to the treatment of scientific and technical information and to the management of this information compiled in a database. A reorientation of the database content was proposed in 1994 to increase the transfer of research towards enterprises and services, to develop more automated access to the information, and to create a quality assurance plan. The catalog of publications comprises 5,800 periodical titles (1,300 for fundamental research and 4,500 for applied research). A science and technology multi-thematic database will be created in 1995 for the retrieval of applied and technical information. ''Grey literature'' (reports, theses, proceedings...) and human and social sciences data will be added to the base through the use of information selected from the existing GRISELI and Francis databases. Strong modifications are also planned in the thematic coverage of the Earth sciences and will considerably reduce the geological information content. (J.S.). 1 tab

  13. Research on spatio-temporal database techniques for spatial information service

    Science.gov (United States)

    Zhao, Rong; Wang, Liang; Li, Yuxiang; Fan, Rongshuang; Liu, Ping; Li, Qingyuan

    2007-06-01

    Geographic data should be described by spatial, temporal and attribute components, but spatio-temporal queries are difficult to answer within current GIS. This paper describes research into the development and application of a spatio-temporal data management system based upon the GeoWindows GIS software platform developed by the Chinese Academy of Surveying and Mapping (CASM). Facing the current practical requirements of spatial information applications, and building on the existing GIS platform, a spatio-temporal data model that integrates vector and grid data was first established. Secondly, we solved the key technique of building temporal data topology and developed a suite of spatio-temporal database management software using object-oriented methods. The system provides temporal data collection, data storage, data management, and data display and query functions. Finally, as a case study, we explored the application of the spatio-temporal data management system with the administrative region data of multiple historical periods of China as the basic data. With all the efforts above, the capacity of GIS to manage and manipulate data in the aspects of time and attributes has been enhanced, and a technical reference has been provided for the further development of temporal geographic information systems (TGIS).
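As a minimal illustration of the valid-time idea behind such a model, a relational table can carry a validity period per record so that "as of" queries become ordinary SQL. The schema and data below are invented for illustration and are not CASM's actual design:

```python
import sqlite3

# Minimal sketch of a valid-time table: each version of an
# administrative region carries the period during which it was valid.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE region_history (
        region_id   INTEGER,
        name        TEXT,
        area_km2    REAL,
        valid_from  TEXT,   -- ISO date, inclusive
        valid_to    TEXT    -- ISO date, exclusive; '9999-12-31' = current
    )""")
rows = [
    (1, "District A", 1200.0, "1950-01-01", "1983-06-30"),
    (1, "District A", 1150.5, "1983-06-30", "9999-12-31"),  # boundary revised
    (2, "District B",  800.0, "1950-01-01", "9999-12-31"),
]
conn.executemany("INSERT INTO region_history VALUES (?,?,?,?,?)", rows)

def snapshot(conn, as_of):
    """Answer a spatio-temporal query: the state of all regions on a date."""
    cur = conn.execute(
        "SELECT region_id, name, area_km2 FROM region_history "
        "WHERE valid_from <= ? AND ? < valid_to ORDER BY region_id",
        (as_of, as_of))
    return cur.fetchall()

old = snapshot(conn, "1960-01-01")   # pre-revision boundary
new = snapshot(conn, "2000-01-01")   # post-revision boundary
```

Because ISO dates compare correctly as strings, the same query answers any historical snapshot without special temporal operators.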

  14. A DICOM based radiotherapy plan database for research collaboration and reporting

    International Nuclear Information System (INIS)

    Westberg, J; Krogh, S; Brink, C; Vogelius, I R

    2014-01-01

    Purpose: To create a central radiotherapy (RT) plan database for dose analysis and reporting, capable of calculating and presenting statistics on user defined patient groups. The goal is to facilitate multi-center research studies with easy and secure access to RT plans and statistics on protocol compliance. Methods: RT institutions are able to send data to the central database using DICOM communications on a secure computer network. The central system is composed of a number of DICOM servers, an SQL database and in-house developed software services to process the incoming data. A web site within the secure network allows the user to manage their submitted data. Results: The RT plan database has been developed in Microsoft .NET and users are able to send DICOM data between RT centers in Denmark. Dose-volume histogram (DVH) calculations performed by the system are comparable to those of conventional RT software. A permission system was implemented to ensure access control and easy, yet secure, data sharing across centers. The reports contain DVH statistics for structures in user defined patient groups. The system currently contains over 2200 patients in 14 collaborations. Conclusions: A central RT plan repository for use in multi-center trials and quality assurance was created. The system provides an attractive alternative to dummy runs by enabling continuous monitoring of protocol conformity and plan metrics in a trial.

  15. A DICOM based radiotherapy plan database for research collaboration and reporting

    Science.gov (United States)

    Westberg, J.; Krogh, S.; Brink, C.; Vogelius, I. R.

    2014-03-01

    Purpose: To create a central radiotherapy (RT) plan database for dose analysis and reporting, capable of calculating and presenting statistics on user defined patient groups. The goal is to facilitate multi-center research studies with easy and secure access to RT plans and statistics on protocol compliance. Methods: RT institutions are able to send data to the central database using DICOM communications on a secure computer network. The central system is composed of a number of DICOM servers, an SQL database and in-house developed software services to process the incoming data. A web site within the secure network allows the user to manage their submitted data. Results: The RT plan database has been developed in Microsoft .NET and users are able to send DICOM data between RT centers in Denmark. Dose-volume histogram (DVH) calculations performed by the system are comparable to those of conventional RT software. A permission system was implemented to ensure access control and easy, yet secure, data sharing across centers. The reports contain DVH statistics for structures in user defined patient groups. The system currently contains over 2200 patients in 14 collaborations. Conclusions: A central RT plan repository for use in multi-center trials and quality assurance was created. The system provides an attractive alternative to dummy runs by enabling continuous monitoring of protocol conformity and plan metrics in a trial.
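The DVH statistics these records describe can be sketched in a few lines of plain Python. This is a toy cumulative-DVH computation with illustrative names, not the system's actual .NET implementation:

```python
# A cumulative dose-volume histogram (DVH) reduces a per-voxel dose grid
# for one structure to "fraction of volume receiving at least D Gy".

def cumulative_dvh(voxel_doses, bin_width=1.0):
    """Return a list of (dose_threshold, volume_fraction) pairs."""
    n = len(voxel_doses)
    max_dose = max(voxel_doses)
    dvh = []
    threshold = 0.0
    while threshold <= max_dose:
        fraction = sum(1 for d in voxel_doses if d >= threshold) / n
        dvh.append((threshold, fraction))
        threshold += bin_width
    return dvh

def dose_at_volume(dvh, fraction):
    """D_x: highest threshold still covering at least `fraction` of the volume."""
    return max(t for t, f in dvh if f >= fraction)

doses = [10.0, 20.0, 30.0, 40.0, 50.0, 60.0]   # toy structure, 6 voxels
dvh = cumulative_dvh(doses, bin_width=10.0)
d50 = dose_at_volume(dvh, 0.5)                 # dose covering half the volume
```

Protocol-compliance checks of the kind the paper reports then reduce to comparing quantities like `d50` against per-structure limits.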

  16. National information network and database system of hazardous waste management in China

    Energy Technology Data Exchange (ETDEWEB)

    Ma Hongchang [National Environmental Protection Agency, Beijing (China)]

    1996-12-31

    Industries in China generate large volumes of hazardous waste, which makes it essential for the nation to pay more attention to hazardous waste management. National laws and regulations, waste surveys, and manifest tracking and permission systems have been initiated. Some centralized hazardous waste disposal facilities are under construction. China's National Environmental Protection Agency (NEPA) has also obtained valuable information on hazardous waste management from developed countries. To effectively share this information with local environmental protection bureaus, NEPA developed a national information network and database system for hazardous waste management. This information network will have such functions as information collection, inquiry, and connection. The long-term objective is to establish and develop a national and local hazardous waste management information network. This network will significantly help decision makers and researchers because it will be easy to obtain information (e.g., experiences of developed countries in hazardous waste management) to enhance hazardous waste management in China. The information network consists of five parts: technology consulting, import-export management, regulation inquiry, waste survey, and literature inquiry.

  17. SeedStor: A Germplasm Information Management System and Public Database.

    Science.gov (United States)

    Horler, R S P; Turner, A S; Fretter, P; Ambrose, M

    2018-01-01

    SeedStor (https://www.seedstor.ac.uk) acts as the publicly available database for the seed collections held by the Germplasm Resources Unit (GRU) based at the John Innes Centre, Norwich, UK. The GRU is a national capability supported by the Biotechnology and Biological Sciences Research Council (BBSRC). The GRU curates germplasm collections of a range of temperate cereal, legume and Brassica crops and their associated wild relatives, as well as precise genetic stocks, near-isogenic lines and mapping populations. With >35,000 accessions, the GRU forms part of the UK's plant conservation contribution to the Multilateral System (MLS) of the International Treaty for Plant Genetic Resources for Food and Agriculture (ITPGRFA) for wheat, barley, oat and pea. SeedStor is a fully searchable system that allows our various collections to be browsed species by species through to complicated multipart phenotype criteria-driven queries. The results from these searches can be downloaded for later analysis or used to order germplasm via our shopping cart. The user community for SeedStor is the plant science research community, plant breeders, specialist growers, hobby farmers and amateur gardeners, and educationalists. Furthermore, SeedStor is much more than a database; it has been developed to act internally as a Germplasm Information Management System that allows team members to track and process germplasm requests, determine regeneration priorities, handle cost recovery and Material Transfer Agreement paperwork, manage the Seed Store holdings and easily report on a wide range of the aforementioned tasks. © The Author(s) 2017. Published by Oxford University Press on behalf of Japanese Society of Plant Physiologists.

  18. Academic impact of a public electronic health database: bibliometric analysis of studies using the general practice research database.

    Directory of Open Access Journals (Sweden)

    Yu-Chun Chen

    Full Text Available BACKGROUND: Studies that use electronic health databases as research material are becoming popular, but the influence of a single electronic health database has not been well investigated yet. The United Kingdom's General Practice Research Database (GPRD) is one of the few electronic health databases publicly available to academic researchers. This study analyzed studies that used the GPRD to demonstrate the scientific production and academic impact of a single public health database. METHODOLOGY AND FINDINGS: A total of 749 studies published between 1995 and 2009 with 'General Practice Research Database' as their topics, defined as GPRD studies, were extracted from Web of Science. By the end of 2009, the GPRD had attracted 1251 authors from 22 countries and been used extensively in 749 studies published in 193 journals across 58 study fields. Each GPRD study was cited 2.7 times by successive studies. Moreover, the total number of GPRD studies increased rapidly, and it is expected to reach 1500 by 2015, twice the number accumulated by the end of 2009. Since 17 of the most prolific authors (1.4% of all authors) contributed nearly half (47.9%) of GPRD studies, success in conducting GPRD studies may accumulate. The GPRD was used mainly in, but not limited to, the three study fields of "Pharmacology and Pharmacy", "General and Internal Medicine", and "Public, Environmental and Occupational Health". The UK and United States were the two most active regions of GPRD studies. One-third of GPRD studies were internationally co-authored. CONCLUSIONS: A public electronic health database such as the GPRD will promote scientific production in many ways. Data owners of electronic health databases at a national level should consider how to reduce access barriers and to make data more available for research.

  19. Migration from relational to NoSQL database

    Science.gov (United States)

    Ghotiya, Sunita; Mandal, Juhi; Kandasamy, Saravanakumar

    2017-11-01

    Data generated by various real-time applications, social networking sites and sensor devices is huge in volume and unstructured, which makes it difficult for relational database management systems to handle. Data is a precious component of any application and needs to be analysed after arranging it in some structure. Relational databases are only able to deal with structured data, so there is a need for NoSQL database management systems, which can also deal with semi-structured data. Relational databases provide the easiest way to manage data, but as the use of NoSQL increases it is becoming necessary to migrate data from relational to NoSQL databases. Various frameworks have been proposed previously that provide mechanisms for migrating data stored in SQL warehouses, as well as middle-layer solutions that allow data to be stored in NoSQL databases to handle data which is not structured. This paper provides a literature review of some of the recent approaches proposed by various researchers to migrate data from relational to NoSQL databases. Some researchers proposed mechanisms for the co-existence of NoSQL and relational databases together. This paper provides a summary of mechanisms which can be used for mapping data stored in relational databases to NoSQL databases. Various techniques for data transformation and middle-layer solutions are summarised in the paper.
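The core mapping such migrations perform can be shown with a small sketch: a one-to-many relation (author to posts) is denormalized into nested JSON documents of the kind a document store would hold. The schema is invented for illustration and is not from any of the surveyed frameworks:

```python
import json
import sqlite3

# Source relational data: two tables joined by a foreign key.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE author (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE post (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    INSERT INTO author VALUES (1, 'Ada'), (2, 'Lin');
    INSERT INTO post VALUES (10, 1, 'Intro'), (11, 1, 'Followup'), (12, 2, 'Notes');
""")

def migrate(conn):
    """Emit one JSON-ready document per author, with its posts embedded."""
    docs = []
    for aid, name in conn.execute("SELECT id, name FROM author ORDER BY id"):
        posts = [{"id": pid, "title": title}
                 for pid, title in conn.execute(
                     "SELECT id, title FROM post WHERE author_id=? ORDER BY id",
                     (aid,))]
        docs.append({"_id": aid, "name": name, "posts": posts})
    return docs

documents = migrate(conn)
payload = json.dumps(documents)   # ready for a document store's bulk insert
```

The join disappears: reads that needed two tables in SQL become a single document fetch in the target store, at the cost of duplicating data if a post could belong to several authors.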

  20. Fine-grained policy control in U.S. Army Research Laboratory (ARL) multimodal signatures database

    Science.gov (United States)

    Bennett, Kelly; Grueneberg, Keith; Wood, David; Calo, Seraphin

    2014-06-01

    The U.S. Army Research Laboratory (ARL) Multimodal Signatures Database (MMSDB) consists of a number of colocated relational databases representing a collection of data from various sensors. Role-based access to this data is granted to external organizations such as DoD contractors and other government agencies through a client Web portal. In the current MMSDB system, access control is only at the database and firewall level. In order to offer finer grained security, changes to existing user profile schemas and authentication mechanisms are usually needed. In this paper, we describe a software middleware architecture and implementation that allows fine-grained access control to the MMSDB at a dataset, table, and row level. Result sets from MMSDB queries issued in the client portal are filtered with the use of a policy enforcement proxy, with minimal changes to the existing client software and database. Before resulting data is returned to the client, policies are evaluated to determine if the user or role is authorized to access the data. Policies can be authored to filter data at the row, table or column level of a result set. The system uses various technologies developed in the International Technology Alliance in Network and Information Science (ITA) for policy-controlled information sharing and dissemination. Use of the Policy Management Library provides a mechanism for the management and evaluation of policies to support finer grained access to the data in the MMSDB system. The GaianDB is a policy-enabled, federated database that acts as a proxy between the client application and the MMSDB system.
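The proxy pattern the abstract describes can be sketched compactly: a result set is filtered row by row and column by column according to the requesting role, before anything reaches the client. The roles, predicates and column names below are illustrative; the actual system evaluates policies through the ITA Policy Management Library:

```python
# Minimal sketch of a policy enforcement proxy filtering a result set.

ROW_POLICIES = {
    # role -> predicate deciding whether the role may see a row
    "contractor": lambda row: row["classification"] == "unclassified",
    "analyst":    lambda row: row["classification"] in ("unclassified", "restricted"),
}

COLUMN_POLICIES = {
    # role -> columns that must be withheld from that role
    "contractor": {"sensor_location"},
    "analyst":    set(),
}

def enforce(role, rows):
    """Apply row- and column-level policies to a query result."""
    allow = ROW_POLICIES[role]
    hidden = COLUMN_POLICIES[role]
    return [{k: v for k, v in row.items() if k not in hidden}
            for row in rows if allow(row)]

result_set = [
    {"id": 1, "classification": "unclassified", "sensor_location": "site-A"},
    {"id": 2, "classification": "restricted",   "sensor_location": "site-B"},
    {"id": 3, "classification": "secret",       "sensor_location": "site-C"},
]
for_contractor = enforce("contractor", result_set)
for_analyst = enforce("analyst", result_set)
```

Because the filtering happens in the proxy, neither the client portal nor the underlying databases need schema or authentication changes, which is the point the paper makes.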

  1. Intra-disciplinary differences in database coverage and the consequences for bibliometric research

    DEFF Research Database (Denmark)

    Faber Frandsen, Tove; Nicolaisen, Jeppe

    2008-01-01

    Bibliographic databases (including databases based on open access) are routinely used for bibliometric research. The value of a specific database depends to a large extent on the coverage of the discipline(s) under study. A number of studies have determined the coverage of databases in specific d...... and psychology). The point extends to include both the uneven coverage of specialties and research traditions. The implications for bibliometric research are discussed, and precautions which need to be taken are outlined. ...

  2. METODE RESET PASSWORD LEVEL ROOT PADA RELATIONAL DATABASE MANAGEMENT SYSTEM (RDBMS MySQL

    Directory of Open Access Journals (Sweden)

    Taqwa Hariguna

    2011-08-01

    Full Text Available A database is an important means of storing data; with a database, an organization gains advantages in several respects, such as access speed and reduced paper use. However, when a database is implemented, it is not uncommon for the database administrator to forget the password in use, which complicates database administration. This study aims to explore how to reset the root-level password in the MySQL relational database management system.

  3. The Erasmus insurance case and a related questionnaire for distributed database management systems

    NARCIS (Netherlands)

    S.C. van der Made-Potuijt

    1990-01-01

    textabstractThis is the third report concerning transaction management in the database environment. In the first report the role of the transaction manager in protecting the integrity of a database has been studied [van der Made-Potuijt 1989]. In the second report a model has been given for a

  4. Enhanced DIII-D Data Management Through a Relational Database

    Science.gov (United States)

    Burruss, J. R.; Peng, Q.; Schachter, J.; Schissel, D. P.; Terpstra, T. B.

    2000-10-01

    A relational database is being used to serve data about DIII-D experiments. The database is optimized for queries across multiple shots, allowing for rapid data mining by SQL-literate researchers. The relational database relates different experiments and datasets, thus providing a big picture of DIII-D operations. Users are encouraged to add their own tables to the database. Summary physics quantities about DIII-D discharges are collected and stored in the database automatically. Meta-data about code runs, MDSplus usage, and visualization tool usage are collected, stored in the database, and later analyzed to improve computing. Data in the database may be accessed through programming languages such as C, Java, and IDL, or through ODBC compliant applications such as Excel and Access. A database-driven web page also provides a convenient means for viewing database quantities through the World Wide Web. Demonstrations will be given at the poster.
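The kind of cross-shot mining the abstract describes amounts to keeping one summary row per discharge and querying across them. A small sketch, using an in-memory SQLite database and invented column names rather than the actual DIII-D schema:

```python
import sqlite3

# One row of summary physics quantities per shot.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE shot_summary (
    shot INTEGER PRIMARY KEY, date TEXT, ip_ma REAL, beta_n REAL)""")
conn.executemany("INSERT INTO shot_summary VALUES (?,?,?,?)", [
    (100001, "2000-03-01", 1.2, 1.8),
    (100002, "2000-03-01", 1.5, 2.4),
    (100003, "2000-03-02", 0.9, 1.1),
    (100004, "2000-03-02", 1.6, 2.9),
])

# One query across many shots: high-performance discharges, newest first.
high_beta = conn.execute(
    "SELECT shot, beta_n FROM shot_summary WHERE beta_n > 2.0 "
    "ORDER BY shot DESC").fetchall()

# Aggregate statistics for a campaign day.
(avg_ip,) = conn.execute(
    "SELECT AVG(ip_ma) FROM shot_summary WHERE date='2000-03-02'").fetchone()
```

The same queries run unchanged from C, Java, IDL or any ODBC client, which is why a relational summary table makes the data minable by tools the researchers already use.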

  5. Preliminary Study on Management of Agricultural Scientific Research Projects in the New Situation

    Institute of Scientific and Technical Information of China (English)

    Haiyan LUO; Qingqun YAO; Lizhen CHEN; Yu ZHENG

    2015-01-01

    Project management in agricultural scientific research institutions is an important part of agricultural research plan management and is of great significance for the sustainable development of the research work of these institutions. Based on a series of state opinions and notices on reform of the science and technology system, and considering the current situation of research project management in research institutions, this paper presents a preliminary study on the management of agricultural scientific research projects under the new trend. Finally, on the basis of the current situation of agricultural research project management, it offers pertinent recommendations, including strengthening communication and cooperation and actively applying for projects; strengthening preliminary project planning and establishing a project information database; reinforcing project process management to ensure on-time, high-quality completion of projects; and strengthening learning to improve the quality of management personnel.

  6. Customizable Electronic Laboratory Online (CELO): A Web-based Data Management System Builder for Biomedical Research Laboratories

    Science.gov (United States)

    Fong, Christine; Brinkley, James F.

    2006-01-01

    A common challenge among today’s biomedical research labs is managing growing amounts of research data. In order to reduce the time and resource costs of building data management tools, we designed the Customizable Electronic Laboratory Online (CELO) system. CELO automatically creates a generic database and web interface for laboratories that submit a simple web registration form. Laboratories can then use a collection of predefined XML templates to assist with the design of a database schema. Users can immediately utilize the web-based system to query data, manage multimedia files, and securely share data remotely over the internet. PMID:17238541
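The template-driven idea behind CELO, generating a lab's database schema from a declarative template rather than hand-written DDL, can be sketched briefly. The XML template format below is invented for illustration and is not CELO's own:

```python
import sqlite3
import xml.etree.ElementTree as ET

# A simple XML template describing one kind of experiment record.
TEMPLATE = """
<record name="specimen">
  <field name="label" type="TEXT"/>
  <field name="species" type="TEXT"/>
  <field name="weight_g" type="REAL"/>
</record>
"""

def create_table_from_template(conn, template_xml):
    """Generate and execute a CREATE TABLE statement from the template."""
    root = ET.fromstring(template_xml)
    cols = ", ".join(f'{f.get("name")} {f.get("type")}'
                     for f in root.findall("field"))
    ddl = f'CREATE TABLE {root.get("name")} (id INTEGER PRIMARY KEY, {cols})'
    conn.execute(ddl)
    return ddl

conn = sqlite3.connect(":memory:")
ddl = create_table_from_template(conn, TEMPLATE)
conn.execute("INSERT INTO specimen (label, species, weight_g) VALUES (?,?,?)",
             ("S-001", "Mus musculus", 24.5))
row = conn.execute("SELECT label, weight_g FROM specimen").fetchone()
```

A lab edits only the template; the database and, in the real system, the web forms follow from it, which is what makes the approach cheap to deploy per laboratory.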

  7. Object-Oriented Database for Managing Building Modeling Components and Metadata: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Long, N.; Fleming, K.; Brackney, L.

    2011-12-01

    Building simulation enables users to explore and evaluate multiple building designs. When tools for optimization, parametrics, and uncertainty analysis are combined with analysis engines, the sheer number of discrete simulation datasets makes it difficult to keep track of the inputs. The integrity of the input data is critical to designers, engineers, and researchers for code compliance, validation, and building commissioning long after the simulations are finished. This paper discusses an application that stores inputs needed for building energy modeling in a searchable, indexable, flexible, and scalable database to help address the problem of managing simulation input data.
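One way to preserve the integrity of simulation inputs long after the runs finish, in the spirit of the application described, is to store each input with a content hash so later validation can detect drift. The schema and names below are illustrative, not the paper's actual design:

```python
import hashlib
import sqlite3

# Each simulation input blob is stored alongside its SHA-256 digest.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE sim_input (
    run_id TEXT, name TEXT, content BLOB, sha256 TEXT)""")

def store_input(conn, run_id, name, content: bytes):
    """Store an input file's bytes and record its content hash."""
    digest = hashlib.sha256(content).hexdigest()
    conn.execute("INSERT INTO sim_input VALUES (?,?,?,?)",
                 (run_id, name, content, digest))
    return digest

def verify_input(conn, run_id, name):
    """Re-hash the stored blob and compare with the recorded digest."""
    content, digest = conn.execute(
        "SELECT content, sha256 FROM sim_input WHERE run_id=? AND name=?",
        (run_id, name)).fetchone()
    return hashlib.sha256(content).hexdigest() == digest

d = store_input(conn, "run-42", "weather.epw", b"DRYBULB,21.5\n")
ok = verify_input(conn, "run-42", "weather.epw")
```

The digest also gives a natural index for deduplicating identical inputs across the thousands of parametric runs the paragraph mentions.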

  8. Academic Impact of a Public Electronic Health Database: Bibliometric Analysis of Studies Using the General Practice Research Database

    Science.gov (United States)

    Chen, Yu-Chun; Wu, Jau-Ching; Haschler, Ingo; Majeed, Azeem; Chen, Tzeng-Ji; Wetter, Thomas

    2011-01-01

    Background Studies that use electronic health databases as research material are getting popular but the influence of a single electronic health database had not been well investigated yet. The United Kingdom's General Practice Research Database (GPRD) is one of the few electronic health databases publicly available to academic researchers. This study analyzed studies that used GPRD to demonstrate the scientific production and academic impact by a single public health database. Methodology and Findings A total of 749 studies published between 1995 and 2009 with ‘General Practice Research Database’ as their topics, defined as GPRD studies, were extracted from Web of Science. By the end of 2009, the GPRD had attracted 1251 authors from 22 countries and been used extensively in 749 studies published in 193 journals across 58 study fields. Each GPRD study was cited 2.7 times by successive studies. Moreover, the total number of GPRD studies increased rapidly, and it is expected to reach 1500 by 2015, twice the number accumulated till the end of 2009. Since 17 of the most prolific authors (1.4% of all authors) contributed nearly half (47.9%) of GPRD studies, success in conducting GPRD studies may accumulate. The GPRD was used mainly in, but not limited to, the three study fields of “Pharmacology and Pharmacy”, “General and Internal Medicine”, and “Public, Environmental and Occupational Health”. The UK and United States were the two most active regions of GPRD studies. One-third of GRPD studies were internationally co-authored. Conclusions A public electronic health database such as the GPRD will promote scientific production in many ways. Data owners of electronic health databases at a national level should consider how to reduce access barriers and to make data more available for research. PMID:21731733

  9. Biomedical databases: protecting privacy and promoting research.

    Science.gov (United States)

    Wylie, Jean E; Mineau, Geraldine P

    2003-03-01

    When combined with medical information, large electronic databases of information that identify individuals provide superlative resources for genetic, epidemiology and other biomedical research. Such research resources increasingly need to balance the protection of privacy and confidentiality with the promotion of research. Models that do not allow the use of such individual-identifying information constrain research; models that involve commercial interests raise concerns about what type of access is acceptable. Researchers, individuals representing the public interest and those developing regulatory guidelines must be involved in an ongoing dialogue to identify practical models.

  10. Legacy2Drupal - Conversion of an existing oceanographic relational database to a semantically enabled Drupal content management system

    Science.gov (United States)

    Maffei, A. R.; Chandler, C. L.; Work, T.; Allen, J.; Groman, R. C.; Fox, P. A.

    2009-12-01

    Content Management Systems (CMSs) provide powerful features that can be of use to oceanographic (and other geo-science) data managers. However, in many instances, geo-science data management offices have previously designed customized schemas for their metadata. The WHOI Ocean Informatics initiative and the NSF funded Biological and Chemical Oceanography Data Management Office (BCO-DMO) have jointly sponsored a project to port an existing relational database containing oceanographic metadata, along with an existing interface coded in Cold Fusion middleware, to a Drupal6 Content Management System. The goal was to translate all the existing database tables, input forms, website reports, and other features present in the existing system to employ Drupal CMS features. The replacement features include Drupal content types, CCK node-reference fields, themes, RDB, SPARQL, workflow, and a number of other supporting modules. Strategic use of some Drupal6 CMS features enables three separate but complementary interfaces that provide access to oceanographic research metadata via the MySQL database: 1) a Drupal6-powered front-end; 2) a standard SQL port (used to provide a Mapserver interface to the metadata and data); and 3) a SPARQL port (feeding a new faceted search capability being developed). Future plans include the creation of science ontologies, by scientist/technologist teams, that will drive semantically-enabled faceted search capabilities planned for the site. Incorporation of semantic technologies included in the future Drupal 7 core release is also anticipated. Using a public domain CMS as opposed to proprietary middleware, and taking advantage of the many features of Drupal 6 that are designed to support semantically-enabled interfaces, will help prepare the BCO-DMO database for interoperability with other ecosystem databases.

  11. Analysis and Design of Web-Based Database Application for Culinary Community

    Directory of Open Access Journals (Sweden)

    Choirul Huda

    2017-03-01

    Full Text Available This research is motivated by the rapid development of the culinary field and of information technology. Difficulties in communicating with culinary experts and in documenting recipes make proper media support very important. Therefore, a web-based database application for the public is important to help the culinary community with communication, searching and recipe management. The aim of the research was to design a web-based database application that could be used as social media for the culinary community. This research used a literature review, user interviews, and questionnaires. Moreover, the database system development life cycle was used as a guide for designing the database, especially for conceptual database design, logical database design, and physical database design. The web-based application design used the eight golden rules of user interface design. The result of this research is the availability of a web-based database application that can fulfill the needs of users in the culinary field related to communication and recipe management.

  12. Relational databases for SSC design and control

    International Nuclear Information System (INIS)

    Barr, E.; Peggs, S.; Saltmarsh, C.

    1989-01-01

    Most people agree that a database is A Good Thing, but there is much confusion in the jargon used, and in what jobs a database management system and its peripheral software can and cannot do. During the life cycle of an enormous project like the SSC, from conceptual and theoretical design, through research and development, to construction, commissioning and operation, an enormous amount of data will be generated. Some of these data, originating in the early parts of the project, will be needed during commissioning or operation, many years in the future. Two of these pressing data management needs, from the magnet research and industrialization programs and the lattice design, have prompted work on understanding and adapting commercial database practices for scientific projects. Modern relational database management systems (rDBMS's) cope naturally with a large proportion of the requirements of data structures, like the SSC database structure built for the superconducting cable supplies, uses, and properties. This application is similar to the commercial applications for which these database systems were developed. The SSC application has further requirements not immediately satisfied by the commercial systems. These derive from the diversity of the data structures to be managed, the changing emphases and uses during the project lifetime, and the large amount of scientific data processing to be expected. 4 refs., 5 figs

  13. CALCOM Database for managing California Commercial Groundfish sample data

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The CALCOM database is used by the California Cooperative Groundfish Survey to store and manage Commercial market sample data. This data is ultimately used to...

  14. Fusion research and technology records in INIS database

    International Nuclear Information System (INIS)

    Hillebrand, C.D.

    1998-01-01

    This article is a summary of the survey study "A survey on publications in Fusion Research and Technology. Science and Technology Indicators in Fusion R and T" by the same author on Fusion R and T records in the International Nuclear Information System (INIS) bibliographic database. In that study, for the first time, all scientometric and bibliometric information contained in a bibliographic database, using INIS records, is analyzed and quantified, specific to a selected field of science and technology. A variety of new science and technology indicators which can be used for evaluating research and development activities is also presented in that study.

  15. The use of database management systems in particle physics

    CERN Document Server

    Stevens, P H; Read, B J; Rittenberg, Alan

    1979-01-01

    Examines data-handling needs and problems in particle physics and looks at three very different efforts by the Particle Data Group (PDG), the CERN-HERA Group in Geneva, and groups cooperating with ZAED in Germany at resolving these problems. The ZAED effort does not use a database management system (DBMS), the CERN-HERA Group uses an existing, limited-capability DBMS, and PDG uses the Berkeley Database Management System (BDMS), which PDG itself designed and implemented with scientific data-handling needs in mind. The range of problems each group tried to resolve was influenced by whether or not a DBMS was available and by what capabilities it had. Only PDG has been able to systematically address all the problems. The authors discuss the BDMS-centered system PDG is now building in some detail. (12 refs).

  16. Development of a database system for the management of non-treated radioactive waste

    Energy Technology Data Exchange (ETDEWEB)

    Pinto, Antônio Juscelino; Freire, Carolina Braccini; Cuccia, Valeria; Santos, Paulo de Oliveira; Seles, Sandro Rogério Novaes; Haucz, Maria Judite Afonso, E-mail: ajp@cdtn.br, E-mail: cbf@cdtn.br, E-mail: vc@cdtn.br, E-mail: pos@cdtn.br, E-mail: seless@cdtn.br, E-mail: hauczmj@cdtn.br [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil)]

    2017-07-01

    The radioactive waste produced by the research laboratories at CDTN/CNEN, Belo Horizonte, is stored in the Non-Treated Radwaste Storage (DRNT) until treatment is performed. Information about the waste is registered, and the data must be easily retrievable and useful for all the staff involved. However, it has been kept in an old Paradox database, which is now becoming outdated. To address this, a new Database System for the Non-treated Waste will be developed using the Access® platform, improving the control and management of the solid and liquid radioactive wastes stored at CDTN. The Database System consists of relational tables, forms and reports, preserving all available information. It must ensure control of the waste records and inventory. In addition, it will be possible to carry out queries and reports to facilitate retrieval of the waste history, localization, and package contents. The database will also be useful for grouping waste with similar characteristics to identify the best type of treatment. Routine problems that may occur due to a change of operators will be avoided. (author)

  17. Development of a database system for the management of non-treated radioactive waste

    International Nuclear Information System (INIS)

    Pinto, Antônio Juscelino; Freire, Carolina Braccini; Cuccia, Valeria; Santos, Paulo de Oliveira; Seles, Sandro Rogério Novaes; Haucz, Maria Judite Afonso

    2017-01-01

    The radioactive waste produced by the research laboratories at CDTN/CNEN, Belo Horizonte, is stored in the Non-Treated Radwaste Storage (DRNT) until treatment is performed. The information about the waste is registered, and the data must be easily retrievable and useful for all the staff involved. Nevertheless, it has been kept in an old Paradox database, which is now becoming outdated. Thus, a new Database System for the Non-treated Waste will be developed using the Access® platform, improving the control and management of solid and liquid radioactive wastes stored in CDTN. The Database System consists of relational tables, forms and reports, preserving all available information. It must ensure the control of the waste records and inventory. In addition, it will be possible to carry out queries and reports to facilitate the retrieval of the waste history and location and the contents of the waste packages. The database will also be useful for grouping waste with similar characteristics to identify the best type of treatment. The routine problems that may occur due to change of operators will be avoided. (author)

  18. Meta-analysis constrained by data: Recommendations to improve relevance of nutrient management research

    Science.gov (United States)

    Five research teams received funding through the North American 4R Research Fund to conduct meta-analyses of the air and water quality impacts of on-farm 4R nutrient management practices. In compiling or expanding databases for these analyses on environmental and crop production effects, researchers...

  19. Applying the archetype approach to the database of a biobank information management system.

    Science.gov (United States)

    Späth, Melanie Bettina; Grimson, Jane

    2011-03-01

    The purpose of this study is to investigate the feasibility of applying the openEHR archetype approach to modelling the data in the database of an existing proprietary biobank information management system. A biobank information management system stores the clinical/phenotypic data of the sample donor and sample related information. The clinical/phenotypic data is potentially sourced from the donor's electronic health record (EHR). The study evaluates the reuse of openEHR archetypes that have been developed for the creation of an interoperable EHR in the context of biobanking, and proposes a new set of archetypes specifically for biobanks. The ultimate goal of the research is the development of an interoperable electronic biomedical research record (eBMRR) to support biomedical knowledge discovery. The database of the prostate cancer biobank of the Irish Prostate Cancer Research Consortium (PCRC), which supports the identification of novel biomarkers for prostate cancer, was taken as the basis for the modelling effort. First the database schema of the biobank was analyzed and reorganized into archetype-friendly concepts. Then, archetype repositories were searched for matching archetypes. Some existing archetypes were reused without change, some were modified or specialized, and new archetypes were developed where needed. The fields of the biobank database schema were then mapped to the elements in the archetypes. Finally, the archetypes were arranged into templates specifically to meet the requirements of the PCRC biobank. A set of 47 archetypes was found to cover all the concepts used in the biobank. Of these, 29 (62%) were reused without change, 6 were modified and/or extended, 1 was specialized, and 11 were newly defined. These archetypes were arranged into 8 templates specifically required for this biobank. A number of issues were encountered in this research. Some arose from the immaturity of the archetype approach, such as immature modelling support tools
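    The central step described above, mapping fields of the biobank database schema onto elements of reused or newly defined archetypes and then grouping archetypes into templates, can be sketched as a simple lookup table. The archetype IDs below follow the openEHR naming convention, but these particular IDs and mappings are invented for illustration, not the actual PCRC models.

```python
# Hypothetical field-to-archetype mapping; IDs and element names are
# illustrative, not the archetypes actually used in the study.
field_to_archetype = {
    "donor_date_of_birth": ("openEHR-EHR-EVALUATION.birth_summary.v1", "date_of_birth"),
    "gleason_score":       ("openEHR-EHR-OBSERVATION.gleason_grading.v1", "score"),
    "sample_freezer_slot": ("openEHR-EHR-CLUSTER.specimen_storage.v1", "location"),
}

def build_template(name, fields):
    """Collect the distinct archetypes a set of database fields maps to,
    mimicking how archetypes are arranged into a biobank-specific template."""
    archetypes = sorted({field_to_archetype[f][0] for f in fields})
    return {"template": name, "archetypes": archetypes}

tpl = build_template("sample_registration",
                     ["donor_date_of_birth", "sample_freezer_slot"])
print(tpl)
```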

  20. A database system for enhancing fuel records management capabilities

    International Nuclear Information System (INIS)

    Rieke, Phil; Razvi, Junaid

    1994-01-01

    The need to modernize the system of managing a large variety of fuel related data at the TRIGA Reactors Facility at General Atomics, as well as the need to improve NRC nuclear material reporting requirements, prompted the development of a database to cover all aspects of fuel records management. The TRIGA Fuel Database replaces (a) an index card system used for recording fuel movements, (b) hand calculations for uranium burnup, and (c) a somewhat aged and cumbersome system of recording fuel inspection results. It was developed using Microsoft Access, a relational database system for Windows. Instead of relying on various sources for element information, users may now review individual element statistics, record inspection results, calculate element burnup and more, all from within a single application. Taking full advantage of the ease-of-use features designed into Windows and Access, the user can enter and extract information easily through a number of customized on screen forms, with a wide variety of reporting options available. All forms are accessed through a main 'Options' screen, with the options broken down by categories, including 'Elements', 'Special Elements/Devices', 'Control Rods' and 'Areas'. Relational integrity and data validation rules are enforced to help ensure that accurate and meaningful data are entered. Among other items, the database lets the user define: element types (such as FLIP or standard) and subtypes (such as fuel follower, instrumented, etc.), various inspection codes for standardizing inspection results, areas within the facility where elements are located, and the power factors associated with element positions within a reactor. Using fuel moves, power history, power factors and element types, the database tracks uranium burnup and plutonium buildup on a quarterly basis. 
The Fuel Database was designed with end-users in mind and does not force an operations oriented user to learn any programming or relational database theory in
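    The burnup bookkeeping described above, combining fuel moves with reactor power history and position-dependent power factors, can be sketched as follows. The data layout and the U-235 consumption constant are illustrative assumptions for the sketch, not facility data or the database's actual algorithm.

```python
# Rough figure for grams of U-235 consumed per megawatt-day; an
# assumption for illustration only.
GRAMS_U235_PER_MWD = 1.24

def element_burnup(moves, power_history):
    """moves: list of (start_day, end_day, power_factor) for one element,
    where the power factor reflects the core position occupied.
    power_history: dict day -> average reactor power in MW for that day.
    Returns grams of U-235 burned."""
    burned = 0.0
    for start, end, factor in moves:
        for day in range(start, end):
            burned += power_history.get(day, 0.0) * factor * GRAMS_U235_PER_MWD
    return burned

# Element sat 10 days in a high-worth position, then 10 in a lower one,
# during 20 days of steady 1 MW operation.
history = {d: 1.0 for d in range(20)}
moves = [(0, 10, 0.9), (10, 20, 0.6)]
print(round(element_burnup(moves, history), 2))  # grams burned
```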

  1. Decision Support Systems for Research and Management in Advanced Life Support

    Science.gov (United States)

    Rodriquez, Luis F.

    2004-01-01

    Decision support systems have been implemented in many applications, including strategic planning for battlefield scenarios, corporate decision making for business planning, production planning and control systems, and recommendation generators like those on Amazon.com®. Such tools are reviewed for developing a similar tool for NASA's ALS Program. DSS are considered concurrently with the development of the OPIS system, a database designed for chronicling research and development in ALS. By utilizing the OPIS database, it is anticipated that decision support can be provided to increase the quality of decisions by ALS managers and researchers.

  2. Reactor pressure vessel embrittlement management through EPRI-Developed material property databases

    International Nuclear Information System (INIS)

    Rosinski, S.T.; Server, W.L.; Griesbach, T.J.

    1997-01-01

    Uncertainties and variability in U.S. reactor pressure vessel (RPV) material properties have caused the U.S. Nuclear Regulatory Commission (NRC) to request information from all nuclear utilities in order to assess the impact of data scatter and uncertainties on compliance with existing regulatory criteria. Resolving the vessel material uncertainty issues requires compiling all available data into a single integrated database to develop a better understanding of irradiated material property behavior. EPRI has developed two comprehensive databases for utility implementation to compile and evaluate available material property and surveillance data. RPVDATA is a comprehensive reactor vessel materials database and data management program that combines data from many different sources into one common database. Searches of the data can be easily performed to identify plants with similar materials, sort through measured test results, compare the "best-estimates" for reported chemistries with licensing basis values, quantify variability in measured weld qualification and test data, identify relevant surveillance results for characterizing embrittlement trends, and resolve uncertainties in vessel material properties. PREP4 has been developed to assist utilities in evaluating existing unirradiated and irradiated data for plant surveillance materials; PREP4 evaluations can be used to assess the accuracy of new trend curve predictions. In addition, searches of the data can be easily performed to identify available Charpy shift and upper shelf data, review surveillance material chemistry and fabrication information, review general capsule irradiation information, and identify applicable source reference information. In support of utility evaluations to consider thermal annealing as a viable embrittlement management option, EPRI is also developing a database to evaluate material response to thermal annealing. Efforts are underway to develop an irradiation

  3. Review of radioactive waste management research in the Agency

    International Nuclear Information System (INIS)

    2002-01-01

    The report presents a concise summary of the Programme of Radioactive Waste Management Research carried out by the Agency in the period 1996 to 2001. It not only provides information relevant to the Agency's responsibilities, but also offers an input to the government's development of a policy for managing solid radioactive waste in the UK. The research projects have included laboratory and field scientific studies, reviews of existing scientific data and understanding, development of assessment methodologies, and development of technical support software and databases. The Agency has participated widely in internationally-supported projects and on jointly-funded projects amongst UK regulators, advisory bodies and industry

  4. Knowledge base technology for CT-DIMS: Report 1. [CT-DIMS (Cutting Tool - Database and Information Management System)

    Energy Technology Data Exchange (ETDEWEB)

    Kelley, E.E.

    1993-05-01

    This report discusses progress on the Cutting Tool-Database and Information Management System (CT-DIMS) project being conducted by the University of Illinois Urbana-Champaign (UIUC) under contract to the Department of Energy. This project was initiated in October 1991 by UIUC. The Knowledge-Based Engineering Systems Research Laboratory (KBESRL) at UIUC is developing knowledge base technology and prototype software for the presentation and manipulation of the cutting tool databases at Allied-Signal Inc., Kansas City Division (KCD). The graphical tool selection capability being developed for CT-DIMS in the Intelligent Design Environment for Engineering Automation (IDEEA) will provide a concurrent environment for simultaneous access to tool databases, tool standard libraries, and cutting tool knowledge.

  5. Data management for community research projects: A JGOFS case study

    Science.gov (United States)

    Lowry, Roy K.

    1992-01-01

    Since the mid 1980s, much of the marine science research effort in the United Kingdom has been focused into large scale collaborative projects involving public sector laboratories and university departments, termed Community Research Projects. Two of these, the Biogeochemical Ocean Flux Study (BOFS) and the North Sea Project incorporated large scale data collection to underpin multidisciplinary modeling efforts. The challenge of providing project data sets to support the science was met by a small team within the British Oceanographic Data Centre (BODC) operating as a topical data center. The role of the data center was to both work up the data from the ship's sensors and to combine these data with sample measurements into online databases. The working up of the data was achieved by a unique symbiosis between data center staff and project scientists. The project management, programming and data processing skills of the data center were combined with the oceanographic experience of the project communities to develop a system which has produced quality controlled, calibrated data sets from 49 research cruises in 3.5 years of operation. The data center resources required to achieve this were modest and far outweighed by the time liberated in the scientific community by the removal of the data processing burden. Two online project databases have been assembled containing a very high proportion of the data collected. As these are under the control of BODC their long term availability as part of the UK national data archive is assured. The success of the topical data center model for UK Community Research Project data management has been founded upon the strong working relationships forged between the data center and project scientists. These can only be established by frequent personal contact and hence the relatively small size of the UK has been a critical factor. However, projects covering a larger, even international scale could be successfully supported by a

  6. The IAEA's Net Enabled Waste Management Database: Overview and current status

    International Nuclear Information System (INIS)

    Csullog, G.W.; Bell, M.J.; Pozdniakov, I.; Petison, G.; Kostitsin, V.

    2002-01-01

    The IAEA's Net Enabled Waste Management Database (NEWMDB) contains information on national radioactive waste management programmes and organizations, plans and activities, relevant laws and regulations, policies and radioactive waste inventories. The NEWMDB, which was launched on the Internet on 6 July 2001, is the successor to the IAEA's Waste Management Database (WMDB), which was in use during the 1990's. The NEWMDB's first data collection cycle took place from July 2001 to March 2002. This paper provides an overview of the NEWMDB, it describes the results of the first data collection cycle, and it discusses the way forward for additional data collection cycles. Three companion papers describe (1) the role of the NEWMDB as an international source of information about radioactive waste management, (2) issues related to the variety of waste classification schemes used by IAEA Member States, and (3) the NEWMDB in the context of an indicator of sustainable development for radioactive waste management. (author)

  7. Research Groups & Research Subjects - RED | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Research Groups & Research Subjects data from the RED database in the LSDB Archive. The dataset contains 174 entries; data items include Research ID (subject number), institute, organization, section (department), user name, and experimental title.

  8. HATCHES - a thermodynamic database and management system

    International Nuclear Information System (INIS)

    Cross, J.E.; Ewart, F.T.

    1990-03-01

    The Nirex Safety Assessment Research Programme has been compiling the thermodynamic data necessary to allow simulations of the aqueous behaviour of the elements important to radioactive waste disposal to be made. These data have been obtained from the literature, when available, and validated for the conditions of interest by experiment. In order to maintain these data in an accessible form and to satisfy quality assurance on all data used for assessments, a database has been constructed which resides on a personal computer operating under MS-DOS using the Ashton-Tate dBase III program. This database contains all the input data fields required by the PHREEQE program and, in addition, a body of text which describes the source of the data and the derivation of the PHREEQE input parameters from the source data. The HATCHES system consists of this database, a suite of programs to facilitate the searching and listing of data and a further suite of programs to convert the dBase III files to PHREEQE database format. (Author)
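    The conversion step HATCHES performs, turning a validated thermodynamic record into PHREEQE-format input, can be sketched as a small text-generation function. The record fields, numeric values, and output layout below are illustrative assumptions, not the actual HATCHES or PHREEQE database formats.

```python
def to_phreeqe(record):
    """Render one thermodynamic record as PHREEQE-style input lines;
    the layout here is a simplified stand-in for the real format."""
    lines = [record["reaction"],
             f"        log_k   {record['log_k']:.3f}"]
    if record.get("delta_h") is not None:
        lines.append(f"        delta_h {record['delta_h']:.2f} kJ/mol")
    return "\n".join(lines)

# Hypothetical uranium hydrolysis record; values are for illustration only.
uranyl = {
    "reaction": "UO2+2 + H2O = UO2OH+ + H+",
    "log_k": -5.25,
    "delta_h": 45.0,
    "source": "literature value, validated by experiment",
}
print(to_phreeqe(uranyl))
```

    Keeping the source text alongside each record, as HATCHES does, is what lets the derived PHREEQE parameters be audited for quality assurance.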

  9. Towards P2P XML Database Technology

    NARCIS (Netherlands)

    Y. Zhang (Ying)

    2007-01-01

    textabstractTo ease the development of data-intensive P2P applications, we envision a P2P XML Database Management System (P2P XDBMS) that acts as a database middle-ware, providing a uniform database abstraction on top of a dynamic set of distributed data sources. In this PhD work, we research which

  10. High-performance Negative Database for Massive Data Management System of The Mingantu Spectral Radioheliograph

    Science.gov (United States)

    Shi, Congming; Wang, Feng; Deng, Hui; Liu, Yingbo; Liu, Cuiyin; Wei, Shoulin

    2017-08-01

    As a dedicated synthetic aperture radio interferometer in China, the MingantU SpEctral Radioheliograph (MUSER), initially known as the Chinese Spectral RadioHeliograph (CSRH), has entered the stage of routine observation. More than 23 million data records per day need to be effectively managed to provide high-performance data query and retrieval for scientific data reduction. In light of these massive amounts of data generated by the MUSER, in this paper, a novel data management technique called the negative database (ND) is proposed and used to implement a data management system for the MUSER. Built on a key-value database, the ND technique makes full use of the complement set of observational data to derive the requisite information. Experimental results showed that the proposed ND can significantly reduce storage volume in comparison with a relational database management system (RDBMS). Even when considering the time needed to derive absent records, its overall performance, including querying and deriving ND data, is comparable with that of an RDBMS. The ND technique effectively solves the problem of massive data storage for the MUSER and is a valuable reference for the massive data management required by next-generation telescopes.
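    The negative-database idea, storing only the complement of an otherwise regular data set and deriving the rest, can be illustrated with a toy version: if observations form a dense sequence of frame numbers, it suffices to record which frames are missing. This is a conceptual sketch of the idea, not the MUSER implementation.

```python
class NegativeDB:
    """Toy negative database: stores only the *missing* frames of a
    regular sequence and derives the stored frames from that complement."""

    def __init__(self, first_frame, last_frame):
        self.first, self.last = first_frame, last_frame
        self.missing = set()            # the complement set

    def mark_missing(self, frame):
        self.missing.add(frame)

    def present(self, frame):
        return self.first <= frame <= self.last and frame not in self.missing

    def derive_present(self):
        # Reconstruct the full list of stored frames from the complement.
        return [f for f in range(self.first, self.last + 1)
                if f not in self.missing]

nd = NegativeDB(0, 9)
nd.mark_missing(3)
nd.mark_missing(7)
print(nd.present(5), nd.present(3))   # True False
print(nd.derive_present())
```

    When most expected records are actually present, the complement set is far smaller than the data itself, which is the storage saving the abstract reports.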

  11. Zebrafish Database: Customizable, Free, and Open-Source Solution for Facility Management.

    Science.gov (United States)

    Yakulov, Toma Antonov; Walz, Gerd

    2015-12-01

    Zebrafish Database is a web-based customizable database solution, which can be easily adapted to serve both single laboratories and facilities housing thousands of zebrafish lines. The database allows the users to keep track of details regarding the various genomic features, zebrafish lines, zebrafish batches, and their respective locations. Advanced search and reporting options are available. Unique features are the ability to upload files and images that are associated with the respective records and an integrated calendar component that supports multiple calendars and categories. Built on the basis of the Joomla content management system, the Zebrafish Database is easily extendable without the need for advanced programming skills.

  12. CPU and cache efficient management of memory-resident databases

    NARCIS (Netherlands)

    Pirk, H.; Funke, F.; Grund, M.; Neumann, T.; Leser, U.; Manegold, S.; Kemper, A.; Kersten, M.L.

    2013-01-01

    Memory-Resident Database Management Systems (MRDBMS) have to be optimized for two resources: CPU cycles and memory bandwidth. To optimize for bandwidth in mixed OLTP/OLAP scenarios, the hybrid or Partially Decomposed Storage Model (PDSM) has been proposed. However, in current implementations,

  13. CPU and Cache Efficient Management of Memory-Resident Databases

    NARCIS (Netherlands)

    H. Pirk (Holger); F. Funke; M. Grund; T. Neumann (Thomas); U. Leser; S. Manegold (Stefan); A. Kemper (Alfons); M.L. Kersten (Martin)

    2013-01-01

    Memory-Resident Database Management Systems (MRDBMS) have to be optimized for two resources: CPU cycles and memory bandwidth. To optimize for bandwidth in mixed OLTP/OLAP scenarios, the hybrid or Partially Decomposed Storage Model (PDSM) has been proposed. However, in current
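    The Partially Decomposed Storage Model named in these two abstracts can be illustrated with a toy table: columns a workload reads together are stored as one group, the remaining columns separately, as a middle ground between a row store (OLTP-friendly) and a column store (OLAP-friendly). The class and names below are invented for the sketch.

```python
class PDSMTable:
    """Toy partially decomposed storage: each column group holds its own
    list of tuples, so a scan touches only the group it needs."""

    def __init__(self, column_groups):
        self.layout = column_groups                   # e.g. (("id","name"),("qty",))
        self.groups = {g: [] for g in column_groups}  # group -> list of tuples

    def insert(self, row):
        # An OLTP-style insert touches every group once.
        for group in self.layout:
            self.groups[group].append(tuple(row[c] for c in group))

    def scan(self, column):
        # An OLAP-style scan reads only the group containing the column.
        for group in self.layout:
            if column in group:
                idx = group.index(column)
                return [t[idx] for t in self.groups[group]]

t = PDSMTable((("id", "name"), ("qty",)))
t.insert({"id": 1, "name": "bolt", "qty": 40})
t.insert({"id": 2, "name": "nut",  "qty": 75})
print(t.scan("qty"), sum(t.scan("qty")))
```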

  14. Representing clinical communication knowledge through database management system integration.

    Science.gov (United States)

    Khairat, Saif; Craven, Catherine; Gong, Yang

    2012-01-01

    Clinical communication failures are considered the leading cause of medical errors [1]. The complexity of the clinical culture and the significant variance in training and education levels form a challenge to enhancing communication within the clinical team. In order to improve communication, a comprehensive understanding of the overall communication process in health care is required. In an attempt to further understand clinical communication, we conducted a thorough methodology literature review to identify strengths and limitations of previous approaches [2]. Our research proposes a new data collection method to study the clinical communication activities among Intensive Care Unit (ICU) clinical teams, with a primary focus on the attending physician. In this paper, we present the first ICU communication instrument and introduce the use of a database management system to aid in discovering patterns and associations within our ICU communications data repository.

  15. Managing Consistency Anomalies in Distributed Integrated Databases with Relaxed ACID Properties

    DEFF Research Database (Denmark)

    Frank, Lars; Ulslev Pedersen, Rasmus

    2014-01-01

    In central databases the consistency of data is normally implemented by using the ACID (Atomicity, Consistency, Isolation and Durability) properties of a DBMS (Data Base Management System). This is not possible if distributed and/or mobile databases are involved and the availability of data also has to be optimized. Therefore, we will in this paper use so-called relaxed ACID properties across different locations. The objective of designing relaxed ACID properties across different database locations is that the users can trust the data they use even if the distributed database is temporarily inconsistent. It is also important that disconnected locations can operate in a meaningful way in so-called disconnected mode. A database is DBMS consistent if its data complies with the consistency rules of the DBMS's metadata. If the database is DBMS consistent both when a transaction starts and when it has...
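    Operating in disconnected mode and restoring a trustworthy state afterwards can be illustrated with a minimal reconciliation sketch: each location applies updates locally while disconnected, and the locations are later merged, here with a simple newest-timestamp-wins rule. This illustrates the general relaxed-consistency idea only; it is not the specific countermeasures the paper proposes.

```python
def reconcile(local_a, local_b):
    """Each input: dict key -> (value, timestamp).
    Merge two disconnected locations; the newest write wins."""
    merged = dict(local_a)
    for key, (value, ts) in local_b.items():
        if key not in merged or ts > merged[key][1]:
            merged[key] = (value, ts)
    return merged

# Two locations updated stock levels independently while disconnected.
a = {"stock:bolt": (40, 100), "stock:nut": (75, 110)}
b = {"stock:bolt": (38, 120)}          # later update at the other site
merged = reconcile(a, b)
print(merged)
```

    Between the disconnection and the merge the distributed database is temporarily inconsistent, yet each location can keep operating meaningfully, which is precisely the trade-off relaxed ACID properties formalize.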

  16. Storage and Database Management for Big Data

    Science.gov (United States)

    2015-07-27

    Enterprise big data platforms are characterized as interactive, on-demand, and virtualized, with cloud models that satisfy different problems. With triple replication, data loss can only occur if three drives fail before any one of the failures is corrected. Hadoop is written in Java. A database management system provides a visible view into a dataset; popular systems include MySQL [4], PostgreSQL [63], and Oracle [5].

  17. Gas Hydrate Research Database and Web Dissemination Channel

    Energy Technology Data Exchange (ETDEWEB)

    Micheal Frenkel; Kenneth Kroenlein; V Diky; R.D. Chirico; A. Kazakow; C.D. Muzny; M. Frenkel

    2009-09-30

    To facilitate advances in application of technologies pertaining to gas hydrates, a United States database containing experimentally-derived information about those materials was developed. The Clathrate Hydrate Physical Property Database (NIST Standard Reference Database #156) was developed by the TRC Group at NIST in Boulder, Colorado paralleling a highly-successful database of thermodynamic properties of molecular pure compounds and their mixtures and in association with an international effort on the part of CODATA to aid in international data sharing. Development and population of this database relied on the development of three components of information-processing infrastructure: (1) guided data capture (GDC) software designed to convert data and metadata into a well-organized, electronic format, (2) a relational data storage facility to accommodate all types of numerical and metadata within the scope of the project, and (3) a gas hydrate markup language (GHML) developed to standardize data communications between 'data producers' and 'data users'. Having developed the appropriate data storage and communication technologies, a web-based interface for both the new Clathrate Hydrate Physical Property Database, as well as Scientific Results from the Mallik 2002 Gas Hydrate Production Research Well Program was developed and deployed at http://gashydrates.nist.gov.
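    The role of a markup language like GHML, giving data producers and data users one agreed serialization of a measurement and its metadata, can be sketched with Python's standard XML tools. The element names below are invented for illustration; they are not the actual GHML schema.

```python
import xml.etree.ElementTree as ET

def to_ghml(measurement):
    """Serialize one hydrate measurement as XML; a stand-in for the kind of
    exchange format GHML standardizes, with made-up element names."""
    root = ET.Element("HydrateMeasurement")
    for tag in ("guest", "temperature_K", "pressure_MPa"):
        ET.SubElement(root, tag).text = str(measurement[tag])
    return ET.tostring(root, encoding="unicode")

sample = {"guest": "CH4", "temperature_K": 277.2, "pressure_MPa": 4.3}
xml_text = to_ghml(sample)
print(xml_text)
```

    Because both sides parse the same schema, a producer's guided data capture output can feed a consumer's database loader without ad hoc format negotiation.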

  18. Database application research in real-time data access of accelerator control system

    International Nuclear Information System (INIS)

    Chen Guanghua; Chen Jianfeng; Wan Tianmin

    2012-01-01

    The control system of Shanghai Synchrotron Radiation Facility (SSRF) is a large-scale distributed real-time control system involving many types and large amounts of real-time data access during operation. Database systems have wide application prospects in large-scale accelerator control systems; replacing differently dedicated data structures with a mature, standardized database system is the future development direction of accelerator control. Based on database interface technology, real-time data access testing, and system optimization research, this article discusses the feasibility of applying database systems in accelerators and lays the foundation for wide-scale application of database systems in the SSRF accelerator control system. (authors)
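    A minimal version of the real-time data access testing mentioned above is simply timing bulk writes and point queries against a database. The sketch below uses an in-memory SQLite stand-in and invented process-variable names rather than the SSRF production system and its actual interfaces.

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pv (name TEXT, stamp REAL, value REAL)")

# Time a bulk insert of 10,000 samples of a hypothetical beam-current PV.
t0 = time.perf_counter()
conn.executemany("INSERT INTO pv VALUES (?, ?, ?)",
                 [("beam:current", float(i), 200.0 - i * 1e-3)
                  for i in range(10_000)])
insert_s = time.perf_counter() - t0

# Time a point query against the stored history.
t0 = time.perf_counter()
row = conn.execute("SELECT value FROM pv WHERE stamp = ?", (42.0,)).fetchone()
query_s = time.perf_counter() - t0
print(f"insert: {insert_s:.4f}s  query: {query_s:.4f}s  value: {row[0]}")
```

    Repeating such measurements under realistic load is one way to judge whether a standardized database can keep up with the control system's real-time data rates.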

  19. Respiratory cancer database: An open access database of respiratory cancer gene and miRNA.

    Science.gov (United States)

    Choubey, Jyotsna; Choudhari, Jyoti Kant; Patel, Ashish; Verma, Mukesh Kumar

    2017-01-01

    Respiratory cancer database (RespCanDB) is a genomic and proteomic database of cancers of the respiratory organs. It also includes information on medicinal plants used for the treatment of various respiratory cancers, with the structures of their active constituents, as well as pharmacological and chemical information on drugs associated with various respiratory cancers. Data in RespCanDB have been manually collected from published research articles and from other databases. Data have been integrated using MySQL, a relational database management system. MySQL manages all data in the back-end and provides commands to retrieve and store the data in the database. The web interface of the database has been built in ASP. RespCanDB is expected to contribute to the scientific community's understanding of respiratory cancer biology, as well as to the development of new ways of diagnosing and treating respiratory cancer. Currently, the database contains oncogenomic information on lung cancer, laryngeal cancer, and nasopharyngeal cancer. Data for other cancers, such as oral and tracheal cancers, will be added in the near future. The URL of RespCanDB is http://ridb.subdic-bioinformatics-nitrr.in/.

  20. Technical Aspects of Interfacing MUMPS to an External SQL Relational Database Management System

    Science.gov (United States)

    Kuzmak, Peter M.; Walters, Richard F.; Penrod, Gail

    1988-01-01

    This paper describes an interface connecting InterSystems MUMPS (M/VX) to an external relational DBMS, the SYBASE Database Management System. The interface enables MUMPS to operate in a relational environment and gives the MUMPS language full access to a complete set of SQL commands. MUMPS generates SQL statements as ASCII text and sends them to the RDBMS. The RDBMS executes the statements and returns ASCII results to MUMPS. The interface suggests that the language features of MUMPS make it an attractive tool for use in the relational database environment. The approach described in this paper separates MUMPS from the relational database. Positioning the relational database outside of MUMPS promotes data sharing and permits a number of different options to be used for working with the data. Other languages like C, FORTRAN, and COBOL can access the RDBMS database. Advanced tools provided by the relational database vendor can also be used. SYBASE is an advanced high-performance transaction-oriented relational database management system for the VAX/VMS and UNIX operating systems. SYBASE is designed using a distributed open-systems architecture, and is relatively easy to interface with MUMPS.
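    The interface described above exchanges plain ASCII in both directions: MUMPS builds an SQL statement as text, sends it to the RDBMS, and parses the text that comes back. The Python sketch below mimics that round trip with a fake server; the delimiter, helper names, and sample data are assumptions for illustration, not the actual MUMPS/SYBASE interface.

```python
def build_sql(table, columns, where):
    """Assemble an SQL statement as plain text, as the MUMPS side does."""
    return f"SELECT {', '.join(columns)} FROM {table} WHERE {where}"

def fake_rdbms(sql_text):
    """Stand-in for the external RDBMS: executes nothing, just returns
    results as delimiter-separated ASCII rows."""
    return "10001|SMITH,JOHN\n10002|DOE,JANE"

def parse_results(ascii_text, delimiter="|"):
    """Split the ASCII reply back into rows and fields on the client side."""
    return [line.split(delimiter) for line in ascii_text.splitlines()]

sql = build_sql("patient", ["id", "name"], "ward = 'ICU'")
rows = parse_results(fake_rdbms(sql))
print(sql)
print(rows)
```

    Because only ASCII text crosses the boundary, the same RDBMS can serve MUMPS alongside C, FORTRAN, or COBOL clients, which is the data-sharing point the paper makes.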

  1. THE KNOWLEDGE MANAGEMENT FOR BEST PRACTICES SHARING IN A DATABASE AT THE TRIBUNAL REGIONAL FEDERAL DA PRIMEIRA REGIÃO

    Directory of Open Access Journals (Sweden)

    Márcia Mazo Santos de Miranda

    2010-08-01

    Full Text Available A quick, effective and powerful alternative for knowledge management is the systematic sharing of best practices. This study identified in the literature recommendations for structuring a best-practices database and summarized the benefits of its deployment to the Tribunal Regional Federal da Primeira Região (TRF - 1ª Região). A quantitative study was then carried out, in which questionnaires were distributed to federal judges of the TRF - 1ª Região; the questionnaire was divided into 4 parts: magistrate profile, flow of knowledge/information, internal environment, and organizational facilitators. As a result, we identified the need for a best-practices database in the institution for the identification, transfer and sharing of organizational knowledge. The conclusion presents recommendations for the development of the database and highlights its importance for knowledge management in an organization.

  2. Native Health Research Database

    Science.gov (United States)

    The Native Health Database (Indian Health Board) offers basic and advanced search of Native health research records; a tutorial video explains how to search the database.

  3. Development of subsurface drainage database system for use in environmental management issues

    International Nuclear Information System (INIS)

    Azhar, A.H.; Rafiq, M.; Alam, M.M.

    2007-01-01

    A simple, user-friendly, menu-driven system for database management pertinent to the Impact of Subsurface Drainage Systems on Land and Water Conditions (ISLaW) has been developed for use in environment-management issues of the drainage areas. This database has been developed by integrating four software packages, viz. Microsoft Excel, MS Word, Acrobat and MS Access. The information, in the form of tables and figures, with respect to various drainage projects has been presented in MS Word files. The major data-sets of the various subsurface drainage projects included in the ISLaW database are: i) technical aspects, ii) groundwater and soil-salinity aspects, iii) socio-technical aspects, iv) agro-economic aspects, and v) operation and maintenance aspects. The various ISLaW files can be accessed just by clicking the menu buttons of the database system. This database not only gives feedback on the functioning of different subsurface drainage projects with respect to the above-mentioned aspects, but also serves as a resource document for future studies on other drainage projects. The developed database system is useful for planners, designers and Farmers Organisations for improved operation of existing drainage projects as well as development of future ones. (author)

  4. IAEA Coordinated Research Project on the Establishment of a Material Properties Database for Irradiated Core Structural Components for Continued Safe Operation and Lifetime Extension of Ageing Research Reactors

    Energy Technology Data Exchange (ETDEWEB)

    Borio Di Tigliole, A.; Schaaf, Van Der; Barnea, Y.; Bradley, E.; Morris, C.; Rao, D. V. H. [Research Reactor Section, Vienna (Austria); Shokr, A. [Research Reactor Safety Section, Vienna (Austria); Zeman, A. [International Atomic Energy Agency, Vienna (Austria)

    2013-07-01

    Today more than 50% of operating Research Reactors (RRs) are over 45 years old. Thus, ageing management is one of the most important issues to face in order to ensure availability (including life extension), reliability and safe operation of these facilities for the future. Management of the ageing process requires, amongst others, predictions for the behavior of structural materials of primary components subjected to irradiation, such as the reactor vessel and core support structures, many of which are extremely difficult or impossible to replace. In fact, age-related material degradation mechanisms have resulted in high-profile, unplanned and lengthy shutdowns and unique regulatory processes of relicensing the facilities in recent years. These could likely have been prevented by utilizing available data for the implementation of appropriate maintenance and surveillance programmes. This IAEA Coordinated Research Project (CRP) will provide an international forum to establish a material properties Database for irradiated core structural materials and components. It is expected that this Database will be used by research reactor operators and regulators to help predict ageing-related degradation. This would be useful to minimize unpredicted outages due to ageing processes of primary components and to mitigate lengthy and costly shutdowns. The Database will be a compilation of data from RR operators' inputs, comprehensive literature reviews and experimental data from RRs. Moreover, the CRP will specify further activities needed to bridge the gaps in the newly created Database, for potential follow-on activities. As of today, 13 Member States (MS) have confirmed their agreement to contribute to the development of the Database, covering a wide number of materials and properties. The present publication incorporates two parts: the first part includes details on the pre-CRP Questionnaire, including the conclusions drawn from the answers received from the MS.

  5. IAEA Coordinated Research Project on the Establishment of a Material Properties Database for Irradiated Core Structural Components for Continued Safe Operation and Lifetime Extension of Ageing Research Reactors

    International Nuclear Information System (INIS)

    Borio Di Tigliole, A.; Schaaf, Van Der; Barnea, Y.; Bradley, E.; Morris, C.; Rao, D. V. H.; Shokr, A.; Zeman, A.

    2013-01-01

    Today more than 50% of operating Research Reactors (RRs) are over 45 years old. Thus, ageing management is one of the most important issues to face in order to ensure availability (including life extension), reliability and safe operation of these facilities for the future. Management of the ageing process requires, amongst others, predictions for the behavior of structural materials of primary components subjected to irradiation, such as the reactor vessel and core support structures, many of which are extremely difficult or impossible to replace. In fact, age-related material degradation mechanisms have resulted in high-profile, unplanned and lengthy shutdowns and unique regulatory processes of relicensing the facilities in recent years. These could likely have been prevented by utilizing available data for the implementation of appropriate maintenance and surveillance programmes. This IAEA Coordinated Research Project (CRP) will provide an international forum to establish a material properties Database for irradiated core structural materials and components. It is expected that this Database will be used by research reactor operators and regulators to help predict ageing-related degradation. This would be useful to minimize unpredicted outages due to ageing processes of primary components and to mitigate lengthy and costly shutdowns. The Database will be a compilation of data from RR operators' inputs, comprehensive literature reviews and experimental data from RRs. Moreover, the CRP will specify further activities needed to bridge the gaps in the newly created Database, for potential follow-on activities. As of today, 13 Member States (MS) have confirmed their agreement to contribute to the development of the Database, covering a wide number of materials and properties.
The present publication incorporates two parts: the first part includes details on the pre-CRP Questionnaire, including the conclusions drawn from the answers received from the MS

  6. Organizing, exploring, and analyzing antibody sequence data: the case for relational-database managers.

    Science.gov (United States)

    Owens, John

    2009-01-01

    Technological advances in the acquisition of DNA and protein sequence information and the resulting onrush of data can quickly overwhelm the scientist unprepared for the volume of information that must be evaluated and carefully dissected to discover its significance. Few laboratories have the luxury of dedicated personnel to organize, analyze, or consistently record a mix of arriving sequence data. A methodology based on a modern relational-database manager is presented that is both a natural storage vessel for antibody sequence information and a conduit for organizing and exploring sequence data and accompanying annotation text. The expertise necessary to implement such a plan is equal to that required by electronic word processors or spreadsheet applications. Antibody sequence projects maintained as independent databases are selectively unified by the relational-database manager into larger database families that contribute to local analyses, reports, and interactive HTML pages, or are exported to facilities dedicated to sophisticated sequence analysis techniques. Database files are transposable among current versions of Microsoft, Macintosh, and UNIX operating systems.
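
    The "database families" idea above can be illustrated with a minimal sketch: each project lives in its own database file, and a relational manager attaches them for cross-project queries. This uses SQLite's ATTACH rather than the (unnamed) manager in the record, and the table and column names (sequences, clone_id, chain, aa_seq, note) are illustrative assumptions, not from the paper.

```python
import os
import sqlite3

# Each antibody project is an independent database file.
for f in ("proj_a.db", "proj_b.db"):
    if os.path.exists(f):
        os.remove(f)

def make_project(path, rows):
    """Create a hypothetical per-project sequence database."""
    con = sqlite3.connect(path)
    con.execute("CREATE TABLE sequences (clone_id TEXT, chain TEXT, aa_seq TEXT, note TEXT)")
    con.executemany("INSERT INTO sequences VALUES (?, ?, ?, ?)", rows)
    con.commit()
    con.close()

make_project("proj_a.db", [("mAb-1", "heavy", "EVQLVESGGG", "anti-X")])
make_project("proj_b.db", [("mAb-7", "light", "DIQMTQSPSS", "anti-Y")])

# Unify the independent project databases into one "family" for analysis.
family = sqlite3.connect(":memory:")
family.execute("ATTACH 'proj_a.db' AS a")
family.execute("ATTACH 'proj_b.db' AS b")
rows = family.execute(
    "SELECT clone_id, chain FROM a.sequences "
    "UNION ALL SELECT clone_id, chain FROM b.sequences"
).fetchall()
print(rows)  # cross-project view of all clones
```

    Queries against the attached family can then feed reports or exports without disturbing the individual project files.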

  7. Research in Institutional Economics in Management Science

    DEFF Research Database (Denmark)

    Foss, Kirsten; Foss, Nicolai Juul

    This report maps research in institutional economics in management science in the European Union for the 1995 to 2002 period. The report applies Internet search based on a university listing, search on journal databases, key informants and an internet-based survey. 195 researchers are identified....... In (sub-)disciplinary terms, organization, strategy, corporate governance, and international business are the major areas of application of institutional economics ideas. In terms of countries, the EU strongholds are Holland, Denmark, UK, and Germany. There is apparently no or very little relevant...... research in Ireland, Portugal, Luxembourg and Greece. Based on the findings of the report, it seems warranted to characterize the EU research effort in the field as being rather dispersed and uncoordinated. Thus, there are no specialized journals, associations or PhD courses. This state of affairs...

  8. Reldata - a tool for reliability database management

    International Nuclear Information System (INIS)

    Vinod, Gopika; Saraf, R.K.; Babar, A.K.; Sanyasi Rao, V.V.S.; Tharani, Rajiv

    2000-01-01

    Component failure, repair and maintenance data is a very important element of any Probabilistic Safety Assessment study. The credibility of the results of such a study is enhanced if the data used is generated from the operating experience of similar power plants. Towards this objective, a computerised database is designed, with fields such as date and time of failure, component name, failure mode, failure cause, ways of failure detection, reactor operating power status, repair times, down time, etc. This leads to evaluation of the plant-specific failure rate, and on-demand failure probability/unavailability for all components. Systematic data updation can provide real-time component reliability parameter statistics and trend analysis, and this helps in planning maintenance strategies. A software package, RELDATA, has been developed which incorporates the database management and data analysis methods. This report describes the software features and underlying methodology in detail. (author)
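
    The two estimates mentioned above can be sketched in a few lines: the plant-specific failure rate as failures per cumulative operating time, and a simple unavailability as down time over operating time. This is a generic illustration of the calculation, not RELDATA itself; the record fields and the operating-hours figure are assumptions.

```python
from dataclasses import dataclass

@dataclass
class FailureRecord:
    component: str
    failure_mode: str
    down_time_h: float  # repair + restoration time in hours

# Hypothetical records for one component, as a RELDATA-style database might hold.
records = [
    FailureRecord("pump-A", "fail-to-run", 12.0),
    FailureRecord("pump-A", "fail-to-run", 8.0),
    FailureRecord("pump-A", "fail-to-start", 4.0),
]

operating_hours = 26280.0  # ~3 years of operating experience (assumed)

# Plant-specific failure rate: failures per operating hour.
n_failures = sum(1 for r in records if r.component == "pump-A")
failure_rate = n_failures / operating_hours

# Mean unavailability: fraction of time the component was down.
unavailability = sum(r.down_time_h for r in records) / operating_hours

print(f"lambda = {failure_rate:.2e}/h, q = {unavailability:.2e}")
```

    With systematic updates to the records, re-running such a calculation gives the trend statistics the abstract mentions.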

  9. Building the Science of Research Management: What Can Research Management Learn from Education Research?

    Science.gov (United States)

    Huang, Jun Song; Hung, Wei Loong

    2018-01-01

    Research management is an emerging field of study and its development is significant to the advancement of research enterprise. Developing the science of research management requires investigating social mechanisms involved in research management. Yet, studies on social mechanisms of research management is lacking in the literature. To address…

  10. Nuclear plant operations, maintenance, and configuration management using three-dimensional computer graphics and databases

    International Nuclear Information System (INIS)

    Tutos, N.C.; Reinschmidt, K.F.

    1987-01-01

    Stone and Webster Engineering Corporation has developed the Plant Digital Model concept as a new approach to Configuration Management of nuclear power plants. The Plant Digital Model development is a step-by-step process, based on existing manual procedures and computer applications, and is fully controllable by the plant managers and engineers. The Plant Digital Model is based on IBM computer graphics and relational database management systems, and therefore can be easily integrated with existing plant databases and corporate management-information systems

  11. Delivering research output to the user using ICT services: Marine contamination database web interface

    International Nuclear Information System (INIS)

    Abdul Muin Abdul Rahman; Abdul Khalik Wood; Zaleha Hashim; Burhanuddin Ahmad; Saaidi Ismail; Mohamad Safuan Sulaiman; Md Suhaimi Elias

    2010-01-01

    This project is about developing a web-based interface for accessing the Marine Contamination database records. The system contains information pertaining to the occurrence of contaminants and natural elements in the marine ecosystem, based on samples taken at various locations within the shores of Malaysia in the form of sediment, seawater and marine biota. It represents a systematic approach for recording, storing and managing the vast amount of marine environmental data collected as output of the Marine Contamination and Transport Phenomena Research Project since 1990. The resultant collection of data is to form the background information (or baseline data) which could later be used to monitor the level of marine environmental pollution around the country. Data collected from the various sampling and related laboratory activities were previously kept in conventional forms such as Excel worksheets and other documents, in digital and/or paper form. With the help of modern database storage and retrieval techniques, the task of storage and retrieval of data has been made easier and more manageable. It can also provide easy access to other parties who are interested in the data. (author)

  12. A survey of the use of database management systems in accelerator projects

    OpenAIRE

    Poole, John; Strubin, Pierre M

    1995-01-01

    The International Accelerator Database Group (IADBG) was set up in 1994 to bring together the people who are working with databases in accelerator laboratories so that they can exchange information and experience. The group now has members from more than 20 institutes from all around the world, representing nearly double this number of projects. This paper is based on the information gathered by the IADBG and describes why commercial DataBase Management Systems (DBMS) are being used in accele...

  13. Adding Hierarchical Objects to Relational Database General-Purpose XML-Based Information Management

    Science.gov (United States)

    Lin, Shu-Chun; Knight, Chris; La, Tracy; Maluf, David; Bell, David; Tran, Khai Peter; Gawdiak, Yuri

    2006-01-01

    NETMARK is a flexible, high-throughput software system for managing, storing, and rapid searching of unstructured and semi-structured documents. NETMARK transforms such documents from their original highly complex, constantly changing, heterogeneous data formats into well-structured, common data formats using Hypertext Markup Language (HTML) and/or Extensible Markup Language (XML). The software implements an object-relational database system that combines the best practices of the relational model utilizing Structured Query Language (SQL) with those of the object-oriented, semantic database model for creating complex data. In particular, NETMARK takes advantage of the Oracle 8i object-relational database model using physical-address data types for very efficient keyword searches of records across both context and content. NETMARK also supports multiple international standards such as WebDAV for drag-and-drop file management and SOAP for integrated information management using Web services. The document-organization and -searching capabilities afforded by NETMARK are likely to make this software attractive for use in disciplines as diverse as science, auditing, and law enforcement.
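
    The core idea of searching "across both context and content" can be illustrated with a much-simplified sketch: shred an XML document into (context path, content) rows so that a single SQL query can match on either. This is not NETMARK's actual implementation (which uses Oracle 8i physical-address types); the schema and document are assumptions for illustration.

```python
import sqlite3
import xml.etree.ElementTree as ET

doc = "<report><title>Audit 2006</title><body><p>keyword search demo</p></body></report>"

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE node (path TEXT, content TEXT)")

def shred(elem, prefix=""):
    """Store each element's text alongside its context path."""
    path = f"{prefix}/{elem.tag}"
    if elem.text and elem.text.strip():
        con.execute("INSERT INTO node VALUES (?, ?)", (path, elem.text.strip()))
    for child in elem:
        shred(child, path)

shred(ET.fromstring(doc))

# A keyword search returns the context (path) in which the content matched.
hit = con.execute("SELECT path FROM node WHERE content LIKE '%keyword%'").fetchone()
print(hit)  # ('/report/body/p',)
```

    A production system would index the content column and preserve far more structure, but the context/content split is the same.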

  14. Nordic research in logistics and supply chain management

    DEFF Research Database (Denmark)

    Arlbjørn, Jan Stentoft; Jonsson, Patrik; Johansen, John

    2008-01-01

    Purpose - The purpose of this data-based analysis is to report and reflect on the characteristics of the academic discipline concerned with logistics and supply chain management (SCM) as it is conducted in the Nordic countries (Denmark, Finland, Iceland, Norway and Sweden). The paper further seeks...... returned, the response rate was 41 per cent. Findings - The study did not provide a clear picture of a distinct Nordic research paradigm applying to the study of logistics and SCM. The analysis shows as characteristic of research issues pursued by Nordic researchers the focus on supply chains and networks...... with research in the field and external funding. Research limitations/implications - The research reported here may help individual researchers raise their consciousness about their own research. Originality/value - This is the first empirical study to analyze research paradigms within logistics and SCM...

  15. Process variables in organizational stress management intervention evaluation research: a systematic review

    NARCIS (Netherlands)

    Havermans, B.M.; Schelvis, R.M.C.; Boot, C.R.L.; Brouwers, E.P.M.; Anema, J.R.; Beek, A.J. van der

    2016-01-01

    Objectives This systematic review aimed to explore which process variables are used in stress management intervention (SMI) evaluation research. Methods A systematic review was conducted using seven electronic databases. Studies were included if they reported on an SMI aimed at primary or secondary

  16. Process variables in organizational stress management intervention evaluation research : A systematic review

    NARCIS (Netherlands)

    Havermans, B.M.; Schlevis, Roosmarijn Mc; Boot, Cécile Rl; Brouwers, E.P.M.; Anema, Johannes R; van der Beek, Allard J

    2016-01-01

    OBJECTIVES: This systematic review aimed to explore which process variables are used in stress management intervention (SMI) evaluation research. METHODS: A systematic review was conducted using seven electronic databases. Studies were included if they reported on an SMI aimed at primary or

  17. A Middle-Range Explanatory Theory of Self-Management Behavior for Collaborative Research and Practice.

    Science.gov (United States)

    Blok, Amanda C

    2017-04-01

    To report an analysis of the concept of self-management behaviors. Self-management behaviors are typically associated with disease management, with frequent use by nurse researchers related to chronic illness management and by international health organizations for development of disease management interventions. A concept analysis was conducted within the context of Orem's self-care framework. Walker and Avant's eight-step concept analysis approach guided the analysis. Academic databases were searched for relevant literature including CINAHL, Cochrane Databases of Systematic Reviews and Register of Controlled Trials, MEDLINE, PsycARTICLES and PsycINFO, and SocINDEX. Literature using the term "self-management behavior" and published between April 2001 and March 2015 was analyzed for attributes, antecedents, and consequences. A total of 189 journal articles were reviewed. Self-management behaviors are defined as proactive actions related to lifestyle, a problem, planning, collaborating, and mental support, as well as reactive actions related to a circumstantial change, to achieve a goal influenced by the antecedents of physical, psychological, socioeconomic, and cultural characteristics, as well as collaborative and received support. The theoretical definition and middle-range explanatory theory of self-management behaviors will guide future collaborative research and clinical practice for disease management. © 2016 Wiley Periodicals, Inc.

  18. Benefits of a relational database for computerized management

    International Nuclear Information System (INIS)

    Shepherd, W.W.

    1991-01-01

    This paper reports on a computerized relational database which is the basis for a hazardous materials information management system which is comprehensive, effective, flexible and efficient. The system includes product information for Material Safety Data Sheets (MSDSs), labels, shipping, and the environment and is used in Dowell Schlumberger (DS) operations worldwide for a number of programs including planning, training, emergency response and regulatory compliance

  19. Brain Tumor Database, a free relational database for collection and analysis of brain tumor patient information.

    Science.gov (United States)

    Bergamino, Maurizio; Hamilton, David J; Castelletti, Lara; Barletta, Laura; Castellan, Lucio

    2015-03-01

    In this study, we describe the development and utilization of a relational database designed to manage the clinical and radiological data of patients with brain tumors. The Brain Tumor Database was implemented using MySQL v.5.0, while the graphical user interface was created using PHP and HTML, thus making it easily accessible through a web browser. This web-based approach allows for multiple institutions to potentially access the database. The BT Database can record brain tumor patient information (e.g. clinical features, anatomical attributes, and radiological characteristics) and be used for clinical and research purposes. Analytic tools to automatically generate statistics and different plots are provided. The BT Database is a free and powerful user-friendly tool with a wide range of possible clinical and research applications in neurology and neurosurgery. The BT Database graphical user interface source code and manual are freely available at http://tumorsdatabase.altervista.org. © The Author(s) 2013.

  20. The Government Finance Database: A Common Resource for Quantitative Research in Public Financial Analysis.

    Science.gov (United States)

    Pierson, Kawika; Hand, Michael L; Thompson, Fred

    2015-01-01

    Quantitative public financial management research focused on local governments is limited by the absence of a common database for empirical analysis. While the U.S. Census Bureau distributes government finance data that some scholars have utilized, the arduous process of collecting, interpreting, and organizing the data has led its adoption to be prohibitive and inconsistent. In this article we offer a single, coherent resource that contains all of the government financial data from 1967-2012, uses easy to understand natural-language variable names, and will be extended when new data is available.

  1. DANBIO-powerful research database and electronic patient record

    DEFF Research Database (Denmark)

    Hetland, Merete Lund

    2011-01-01

    an overview of the research outcome and presents the cohorts of RA patients. The registry, which is approved as a national quality registry, includes patients with RA, PsA and AS, who are followed longitudinally. Data are captured electronically from the source (patients and health personnel). The IT platform...... as an electronic patient 'chronicle' in routine care, and at the same time provides a powerful research database....

  2. European Vegetation Archive (EVA): an integrated database of European vegetation plots

    DEFF Research Database (Denmark)

    Chytrý, M; Hennekens, S M; Jiménez-Alfaro, B

    2015-01-01

vegetation-plot databases on a single software platform. Data storage in EVA does not affect on-going independent development of the contributing databases, which remain the property of the data contributors. EVA uses a prototype of the database management software TURBOVEG 3 developed for joint management......The European Vegetation Archive (EVA) is a centralized database of European vegetation plots developed by the IAVS Working Group European Vegetation Survey. It has been in development since 2012 and first made available for use in research projects in 2014. It stores copies of national and regional...... data source for large-scale analyses of European vegetation diversity both for fundamental research and nature conservation applications. Updated information on EVA is available online at http://euroveg.org/eva-database....

  3. The FoodCast Research Image Database (FRIDa

    Directory of Open Access Journals (Sweden)

    Francesco eForoni

    2013-03-01

    Full Text Available In recent years we have witnessed an increasing interest in food processing and eating behaviors. This is probably due to several reasons: the biological relevance of food choices, the complexity of the food-rich environment in which we presently live (making food-intake regulation difficult), and the increasing health care costs due to illness associated with food (food hazards, food contamination, and aberrant food-intake). Despite the importance of the issues and the relevance of this research, comprehensive and validated databases of stimuli are rather limited, outdated, or not available for noncommercial purposes to independent researchers who aim at developing their own research program. The FoodCast Research Image Database (FRIDa) we present here is comprised of 877 images from eight different categories: natural food (e.g., strawberry), transformed food (e.g., French fries), rotten food (e.g., moldy banana), natural non-food items (e.g., pinecone), artificial food-related objects (e.g., teacup), artificial objects (e.g., guitar), animals (e.g., camel), and scenes (e.g., airport). FRIDa has been validated on a sample of healthy participants (N=73) on standard variables (e.g., valence, familiarity, etc.) as well as on other variables specifically related to food items (e.g., perceived calorie content); it also includes data on the visual features of the stimuli (e.g., brightness, high-frequency power, etc.). FRIDa is a well-controlled, flexible, validated, and freely available (http://foodcast.sissa.it/neuroscience/) tool for researchers in a wide range of academic fields and industry.

  4. Data management in the TJ-II multi-layer database

    International Nuclear Information System (INIS)

    Vega, J.; Cremy, C.; Sanchez, E.; Portas, A.; Fabregas, J.A.; Herrera, R.

    2000-01-01

    The handling of TJ-II experimental data is performed by means of several software modules. These modules provide the resources for data capture, data storage and management, data access as well as general-purpose data visualisation. Here we describe the module related to data storage and management. We begin by introducing the categories in which data can be classified. Then, we describe the TJ-II data flow through the several file systems involved, before discussing the architecture of the TJ-II database. We review the concept of the 'discharge file' and identify the drawbacks that would result from a direct application of this idea to the TJ-II data. In order to overcome these drawbacks, we propose alternatives based on our concepts of signal family, user work-group and data priority. Finally, we present a model for signal storage. This model is in accordance with the database architecture and provides a proper framework for managing the TJ-II experimental data. In the model, the information is organised in layers and is distributed according to the generality of the information, from the common fields of all signals (first layer), passing through the specific records of signal families (second layer) and reaching the particular information of individual signals (third layer)
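
    The layered model described above can be sketched as a small relational schema: one table for the fields common to all signals, one for the per-family records, and one for the particulars of individual signals. The table and column names here are assumptions for illustration, not the actual TJ-II schema.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE family (          -- layer 2: records specific to a signal family
    family_id INTEGER PRIMARY KEY,
    name TEXT, sampling_rate_hz REAL);
CREATE TABLE signal (          -- layer 1: fields common to every signal
    signal_id INTEGER PRIMARY KEY,
    shot INTEGER, name TEXT, acquired TEXT,
    family_id INTEGER REFERENCES family(family_id));
CREATE TABLE signal_detail (   -- layer 3: particulars of an individual signal
    signal_id INTEGER REFERENCES signal(signal_id),
    key TEXT, value TEXT);
""")
con.execute("INSERT INTO family VALUES (1, 'bolometry', 1e4)")
con.execute("INSERT INTO signal VALUES (10, 1234, 'BOL01', '2000-01-01', 1)")
con.execute("INSERT INTO signal_detail VALUES (10, 'gain', '2.5')")

# One query gathers information from all three layers for a given discharge.
row = con.execute("""
SELECT s.name, f.name, d.value
FROM signal s JOIN family f USING (family_id)
JOIN signal_detail d USING (signal_id)
WHERE s.shot = 1234""").fetchone()
print(row)  # ('BOL01', 'bolometry', '2.5')
```

    Distributing the information by generality in this way avoids repeating family-level metadata on every individual signal record.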

  5. Use of an INGRES database to implement the beam parameter management at GANIL

    International Nuclear Information System (INIS)

    Gillette, P.; Lecorche, E.; Lermine, P.; Maugeais, C.; Leboucher, Ch.; Moscatello, M.H.; Pain, P.

    1995-01-01

    Since the beginning of operation driven by the new Ganil control system in February 1993, the relational database management system (RDBMS) Ingres has been more and more widely used. The most significant application relying on the RDBMS is the new beam parameter management, which has been entirely redesigned. It has been operational since the end of the machine shutdown in July this year. After a short recall of the use of Ingres inside the control system, the organization of the parameter management is presented. Then the database implementation is shown, including the way the physical aspects of the Ganil tuning have been integrated in such an environment. (author)

  6. Applying AN Object-Oriented Database Model to a Scientific Database Problem: Managing Experimental Data at Cebaf.

    Science.gov (United States)

    Ehlmann, Bryon K.

    Current scientific experiments are often characterized by massive amounts of very complex data and the need for complex data analysis software. Object-oriented database (OODB) systems have the potential of improving the description of the structure and semantics of this data and of integrating the analysis software with the data. This dissertation results from research to enhance OODB functionality and methodology to support scientific databases (SDBs) and, more specifically, to support a nuclear physics experiments database for the Continuous Electron Beam Accelerator Facility (CEBAF). This research to date has identified a number of problems related to the practical application of OODB technology to the conceptual design of the CEBAF experiments database and other SDBs: the lack of a generally accepted OODB design methodology, the lack of a standard OODB model, the lack of a clear conceptual level in existing OODB models, and the limited support in existing OODB systems for many common object relationships inherent in SDBs. To address these problems, the dissertation describes an Object-Relationship Diagram (ORD) and an Object-oriented Database Definition Language (ODDL) that provide tools that allow SDB design and development to proceed systematically and independently of existing OODB systems. These tools define multi-level, conceptual data models for SDB design, which incorporate a simple notation for describing common types of relationships that occur in SDBs. ODDL allows these relationships and other desirable SDB capabilities to be supported by an extended OODB system. A conceptual model of the CEBAF experiments database is presented in terms of ORDs and the ODDL to demonstrate their functionality and use and provide a foundation for future development of experimental nuclear physics software using an OODB approach.

  7. StreetTiVo: Using a P2P XML Database System to Manage Multimedia Data in Your Living Room

    NARCIS (Netherlands)

    Zhang, Ying; de Vries, A.P.; Boncz, P.; Hiemstra, Djoerd; Ordelman, Roeland J.F.; Li, Qing; Feng, Ling; Pei, Jian; Wang, Sean X.

    StreetTiVo is a project that aims at bringing research results into the living room; in particular, a mix of current results in the areas of Peer-to-Peer XML Database Management System (P2P XDBMS), advanced multimedia analysis techniques, and advanced information retrieval techniques. The project

  8. Use of Knowledge Bases in Education of Database Management

    Science.gov (United States)

    Radványi, Tibor; Kovács, Emod

    2008-01-01

    In this article we present a segment of the Sulinet Digital Knowledgebase curriculum system in which you can find the sections of subject matter that aid in teaching database management. You can follow the order of the course from the beginning, when some topics first appear in elementary school, through the topics covered in secondary…

  9. Discussions about acceptance of the free software for management and creation of referencial database for papers

    Directory of Open Access Journals (Sweden)

    Flavio Ribeiro Córdula

    2016-03-01

    Full Text Available Objective. This research aimed to determine the degree of acceptance, using the Technology Acceptance Model (TAM), of the developed software, which allows the construction and management of databases of scientific articles, aimed at assisting in the dissemination and retrieval of scientific production stored in digital media. Method. The research is characterized as quantitative, since the TAM, which guided this study, is essentially quantitative. A questionnaire developed according to TAM guidelines was used as the tool for data collection. Results. It was possible to verify that this software, despite the need for fixes and improvements inherent to this type of tool, obtained a relevant degree of acceptance by the sample studied. Considerations. It should also be noted that although this research has been directed to scholars in the field of information science, the idea that justified the creation of the software used in this study might contribute to the development of science in any field of knowledge, by optimizing the results that a search conducted in a specialized database can provide.

  10. Nuclear Energy Infrastructure Database Fitness and Suitability Review

    Energy Technology Data Exchange (ETDEWEB)

    Heidrich, Brenden [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-03-01

    In 2014, the Deputy Assistant Secretary for Science and Technology Innovation (NE-4) initiated the Nuclear Energy-Infrastructure Management Project by tasking the Nuclear Science User Facilities (NSUF) to create a searchable and interactive database of all pertinent NE-supported or related infrastructure. This database will be used for analyses to establish needs, redundancies, efficiencies, distributions, etc., in order to best understand the utility of NE’s infrastructure and inform the content of the infrastructure calls. The NSUF developed the database by utilizing data and policy direction from a wide variety of reports from the Department of Energy, the National Research Council, the International Atomic Energy Agency and various other federal and civilian resources. The NEID contains data on 802 R&D instruments housed in 377 facilities at 84 institutions in the US and abroad. A Database Review Panel (DRP) was formed to review and provide advice on the development, implementation and utilization of the NEID. The panel is comprised of five members with expertise in nuclear energy-associated research. It was intended that they represent the major constituencies associated with nuclear energy research: academia, industry, research reactors, national laboratories, and Department of Energy program management. The Nuclear Energy Infrastructure Database Review Panel concludes that the NSUF has succeeded in creating a capability and infrastructure database that identifies and documents the major nuclear energy research and development capabilities across the DOE complex. The effort to maintain and expand the database will be ongoing. Detailed information on many facilities must be gathered from associated institutions and added to complete the database. The data must be validated and kept current to capture facility and instrumentation status as well as to cover new acquisitions and retirements.

  11. Implementation of a database for the management of radioactive sources

    International Nuclear Information System (INIS)

    MOHAMAD, M.

    2012-01-01

    In Madagascar, the application of nuclear technology continues to develop. In order to protect human health and the environment against the harmful effects of ionizing radiation, each user of radioactive sources has to implement a nuclear safety and security programme and to declare its sources to the Regulatory Authority. This Authority must have access to all the information relating to the sources and their uses. This work is based on the development of software using Python as the programming language and SQLite as the database engine, which makes it possible to computerize radioactive source management. The application unifies the various existing databases and centralizes radioactive source management activities. The objective is to follow the movement of each source within Malagasy territory in order to avoid the risks related to the use of radioactive sources and to illicit trafficking. [fr
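    The abstract names Python and SQLite as the implementation stack. A minimal sketch of what such a source-tracking schema might look like (table names, columns, and sample values are invented for illustration, not the actual Malagasy schema):

    ```python
    import sqlite3

    # Hypothetical registry: one table of sources, one of movements,
    # so the regulator can query the latest known location of a source.
    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE source (
            id INTEGER PRIMARY KEY,
            nuclide TEXT NOT NULL,
            activity_bq REAL NOT NULL,
            holder TEXT NOT NULL
        )""")
    conn.execute("""
        CREATE TABLE movement (
            id INTEGER PRIMARY KEY,
            source_id INTEGER REFERENCES source(id),
            moved_on TEXT NOT NULL,   -- ISO date, sorts chronologically
            location TEXT NOT NULL
        )""")

    conn.execute("INSERT INTO source VALUES (1, 'Co-60', 3.7e10, 'Hospital A')")
    conn.execute("INSERT INTO movement VALUES (1, 1, '2012-03-01', 'Antananarivo')")
    conn.execute("INSERT INTO movement VALUES (2, 1, '2012-06-15', 'Toamasina')")

    # Latest known location of the source, for regulatory follow-up.
    row = conn.execute("""
        SELECT s.nuclide, m.location
        FROM source s JOIN movement m ON m.source_id = s.id
        ORDER BY m.moved_on DESC LIMIT 1
        """).fetchone()
    print(row)  # ('Co-60', 'Toamasina')
    ```

    Keeping movements in a separate table (rather than overwriting a location column) preserves the full custody history, which is what makes following each source's movement possible.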

  12. Knowledge production status of Iranian researchers in the gastric cancer area: based on the medline database.

    Science.gov (United States)

    Ghojazadeh, Morteza; Naghavi-Behzad, Mohammad; Nasrolah-Zadeh, Raheleh; Bayat-Khajeh, Parvaneh; Piri, Reza; Mirnia, Keyvan; Azami-Aghdash, Saber

    2014-01-01

    Scientometrics is a useful method for the management of financial and human resources and has been applied many times in the medical sciences during recent years. The aim of this study was to investigate the status of science production by Iranian scientists in the gastric cancer field based on the Medline database. In this descriptive cross-sectional study, Iranian science production concerning gastric cancer during 2000-2011 was investigated based on Medline. After two stages of searching, 121 articles were found; we then reviewed publication date, authors' names, journal title, impact factor (IF), and the cooperation coefficient between researchers. SPSS 19 was used for statistical analysis. There was a significant increase in articles about gastric cancer published by Iranian researchers in the Medline database during 2006-2011. The mean cooperation coefficient between researchers was 6.14±3.29 persons per article. Articles in this field were published in 19 countries and 56 journals. Journals based in Thailand, England, and America published the most Iranian articles. Tehran University of Medical Sciences and Mohammadreza Zali had the most outstanding roles in publishing scientific articles. According to the results of this study, improving the cooperation of researchers in conducting research and scientometric studies in other fields may have an important role in increasing both the quality and quantity of published studies.

  13. Demonstration of SLUMIS: a clinical database and management information system for a multi organ transplant program.

    OpenAIRE

    Kurtz, M.; Bennett, T.; Garvin, P.; Manuel, F.; Williams, M.; Langreder, S.

    1991-01-01

    Because of the rapid evolution of the heart, heart/lung, liver, kidney and kidney/pancreas transplant programs at our institution, and because of a lack of an existing comprehensive database, we were required to develop a computerized management information system capable of supporting both clinical and research requirements of a multifaceted transplant program. SLUMIS (ST. LOUIS UNIVERSITY MULTI-ORGAN INFORMATION SYSTEM) was developed for the following reasons: 1) to comply with the reportin...

  14. FmMDb: a versatile database of foxtail millet markers for millets and bioenergy grasses research.

    Directory of Open Access Journals (Sweden)

    Venkata Suresh B

    Full Text Available The prominent attributes of foxtail millet (Setaria italica L.), including its small genome size, short life cycle, inbreeding nature, and phylogenetic proximity to various biofuel crops, have made this crop an excellent model system to investigate various aspects of architectural, evolutionary and physiological significance in Panicoid bioenergy grasses. After the release of its whole-genome sequence, large-scale genomic resources in terms of molecular markers were generated for the improvement of both foxtail millet and its related species. Hence it is now essential to congregate, curate and make available these genomic resources for the benefit of researchers and breeders working towards crop improvement. In view of this, we have constructed the Foxtail millet Marker Database (FmMDb; http://www.nipgr.res.in/foxtail.html), a comprehensive online database for information retrieval, visualization and management of large-scale marker datasets with unrestricted public access. FmMDb is the first database which provides complete marker information to the plant science community attempting to produce elite cultivars of millet and bioenergy grass species, thus addressing global food insecurity.

  15. Use of an INGRES database to implement the beam parameter management at GANIL

    Energy Technology Data Exchange (ETDEWEB)

    Gillette, P.; Lecorche, E.; Lermine, P.; Maugeais, C.; Leboucher, Ch.; Moscatello, M.H.; Pain, P.

    1995-12-31

    Since the beginning of operation under the new Ganil control system in February 1993, the relational database management system (RDBMS) Ingres has been more and more widely used. The most significant application relying on the RDBMS is the new beam parameter management, which has been entirely redesigned and has been operational since the end of the machine shutdown in July this year. After a short recall of the use of Ingres inside the control system, the organization of the parameter management is presented. The database implementation is then shown, including how the physical aspects of the Ganil tuning have been integrated in such an environment. (author). 2 refs.

  17. Congestion Quantification Using the National Performance Management Research Data Set

    Directory of Open Access Journals (Sweden)

    Virginia P. Sisiopiku

    2017-11-01

    Full Text Available Monitoring of transportation system performance is a key element of any transportation operation and planning strategy. Estimation of dependable performance measures relies on analysis of large amounts of traffic data, which are often expensive and difficult to gather. National databases can assist in this regard, but challenges still remain with respect to data management, accuracy, storage, and use for performance monitoring. In an effort to address such challenges, this paper showcases a process that utilizes the National Performance Management Research Data Set (NPMRDS) for generating performance measures for congestion monitoring applications in the Birmingham region. The capabilities of the relational database management system (RDBMS) are employed to manage the large amounts of NPMRDS data. Powerful visual maps are developed using GIS software and used to illustrate congestion location, extent and severity. Travel time reliability indices are calculated and utilized to quantify congestion, and congestion intensity measures are developed and employed to rank and prioritize congested segments in the study area. The process for managing and using big traffic data described in the Birmingham case study is a great example that can be replicated by small and mid-size Metropolitan Planning Organizations to generate performance-based measures and monitor congestion in their jurisdictions.
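    The travel time reliability indices mentioned above can be illustrated with the planning time index, one commonly used measure: the 95th-percentile travel time divided by the free-flow travel time. The observations and free-flow time below are hypothetical, and the nearest-rank percentile is just one of several conventions agencies use:

    ```python
    import math

    def planning_time_index(travel_times_min, free_flow_min):
        """95th-percentile travel time divided by free-flow travel time;
        values near 1.0 indicate a reliable, uncongested segment."""
        ordered = sorted(travel_times_min)
        # Nearest-rank percentile (one simple convention; agencies differ).
        k = min(len(ordered) - 1, math.ceil(0.95 * len(ordered)) - 1)
        return ordered[k] / free_flow_min

    # Hypothetical travel-time observations for one segment, in minutes.
    obs = [4.0, 4.2, 4.1, 4.3, 4.0, 6.5, 8.0, 4.1, 4.2, 4.0]
    pti = planning_time_index(obs, free_flow_min=4.0)
    print(round(pti, 2))  # 2.0 - a traveler must budget twice free-flow time
    ```

    Segments can then be ranked by this index to prioritize the most unreliable ones, as the study does with its congestion intensity measures.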

  18. Global capacity, potentials and trends of solid waste research and management.

    Science.gov (United States)

    Nwachukwu, Michael A; Ronald, Mersky; Feng, Huan

    2017-09-01

    In this study, the United States, China, India, United Kingdom, Nigeria, Egypt, Brazil, Italy, Germany, Taiwan, Australia, Canada and Mexico were selected to represent the global community. This enabled an overview of solid waste management worldwide and a comparison between developed and developing countries. These are the countries that feature most in the International Conference on Solid Waste Technology and Management (ICSW) over the past 20 years. A total of 1452 articles directly on solid waste management and technology were reviewed and credited to their original country of research. Results show significant solid waste research potential globally, with the United States leading with 373 articles, followed by India with 230 articles. The rest of the countries are ranked in the order: UK > Taiwan > Brazil > Nigeria > Italy > Japan > China > Canada > Germany > Mexico > Egypt > Australia. Global capacity in solid waste management options is in the order: waste characterisation/management > waste biotech/composting > waste to landfill > waste recovery/reduction > waste in construction > waste recycling > waste treatment-reuse-storage > waste to energy > waste dumping > waste education/public participation/policy. It is observed that solid waste research potential is not a measure of solid waste management capacity. The results show more significant research impacts on solid waste management in developed countries than in developing countries, where economic, technological and societal factors are not strong. This article is intended to motivate similar studies in each country, using solid waste research articles from other streamed databases to measure research impacts on solid waste management.

  19. SNPpy--database management for SNP data from genome wide association studies.

    Directory of Open Access Journals (Sweden)

    Faheem Mitha

    Full Text Available BACKGROUND: We describe SNPpy, a hybrid script database system using the Python SQLAlchemy library coupled with the PostgreSQL database to manage genotype data from Genome-Wide Association Studies (GWAS). This system makes it possible to merge study data with HapMap data and to merge across studies for meta-analyses, including data filtering based on the values of phenotype and Single-Nucleotide Polymorphism (SNP) data. SNPpy and its dependencies are open source software. RESULTS: The current version of SNPpy offers utility functions to import genotype and annotation data from two commercial platforms. We use these to import data from two GWAS studies and the HapMap Project. We then export these individual datasets to standard data format files that can be imported into statistical software for downstream analyses. CONCLUSIONS: By leveraging the power of relational databases, SNPpy offers integrated management and manipulation of genotype and phenotype data from GWAS studies. The analysis of these studies requires merging across GWAS datasets as well as patient and marker selection. To this end, SNPpy enables the user to filter the data and output the results as standardized GWAS file formats. It does low-level and flexible data validation, including validation of patient data. SNPpy is a practical and extensible solution for investigators who seek to deploy central management of their GWAS data.
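    The filtering described above, selecting SNP calls by phenotype, boils down to a relational join. A minimal sketch of that idea using the stdlib sqlite3 module (SNPpy itself uses SQLAlchemy with PostgreSQL; the schema, SNP identifier, and values here are invented for illustration):

    ```python
    import sqlite3

    # Toy patient/genotype schema mimicking the join-and-filter pattern.
    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE patient (id INTEGER PRIMARY KEY, phenotype TEXT);
        CREATE TABLE genotype (
            patient_id INTEGER REFERENCES patient(id),
            snp TEXT, call TEXT);
        INSERT INTO patient VALUES (1, 'case'), (2, 'control'), (3, 'case');
        INSERT INTO genotype VALUES (1, 'rs123', 'AA'), (2, 'rs123', 'AG'),
                                    (3, 'rs123', 'GG');
    """)

    # Genotype calls for one SNP, restricted to case patients only.
    calls = db.execute("""
        SELECT g.call FROM genotype g
        JOIN patient p ON p.id = g.patient_id
        WHERE g.snp = 'rs123' AND p.phenotype = 'case'
        ORDER BY p.id
    """).fetchall()
    print(calls)  # [('AA',), ('GG',)]
    ```

    The same query shape, applied across merged study tables, is what enables patient and marker selection before export to standard GWAS file formats.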

  20. DESIGN AND CONSTRUCTION OF A FOREST SPATIAL DATABASE: AN APPLICATION

    Directory of Open Access Journals (Sweden)

    Turan Sönmez

    2006-11-01

    Full Text Available The General Directorate of Forests (GDF) has not yet created a spatial forest database with which to manage forests and catch up with the developed countries in forestry. The lack of a spatial forest database results in redundant collection of spatial data and communication problems among the forestry organizations, and it causes Turkish forestry to lag behind the informatics era. To solve these problems, the GDF should establish a spatial forest database supported by a Geographic Information System (GIS). Designing a GIS-supported spatial database that provides accurate, timely and current data/information for decision makers and operators in forestry, and developing a sample interface program to apply and monitor classical forest management plans, are paramount in the contemporary forest management planning process. This research is composed of three major stages: (i) design of a prototype spatial database considering the requirements of the three hierarchical organizations of the GDF (regional directorate of forests, forest enterprise, and territorial division); (ii) a user interface program developed to apply and monitor classical management plans based on the designed database; (iii) the implementation of the designed database and its user interface in the Artvin Central Planning Unit.

  1. Research data management in academic institutions: A scoping review.

    Directory of Open Access Journals (Sweden)

    Laure Perrier

    Full Text Available The purpose of this study is to describe the volume, topics, and methodological nature of the existing research literature on research data management in academic institutions. We conducted a scoping review by searching forty literature databases encompassing a broad range of disciplines from inception to April 2016. We included all study types and extracted data on study design, discipline, data collection tools, and phase of the research data lifecycle. We included 301 articles plus 10 companion reports after screening 13,002 titles and abstracts and 654 full-text articles. Most articles (85%) were published from 2010 onwards and conducted within the sciences (86%). More than three-quarters of the articles (78%) reported methods that included interviews, cross-sectional designs, or case studies. Most articles (68%) included the Giving Access to Data phase of the UK Data Archive Research Data Lifecycle, which examines activities such as sharing data. When studies were grouped into five dominant groupings (Stakeholder, Data, Library, Tool/Device, and Publication), data quality emerged as an integral element. Most studies relied on self-reports (interviews, surveys) or accounts from an observer (case studies), and we found few studies that collected empirical evidence on activities amongst data producers, particularly those examining the impact of research data management interventions. As well, fewer studies examined research data management at the early phases of research projects. The quality of all research outputs needs attention, from the application of best practices in research data management studies to data producers depositing data in repositories for long-term use.

  2. The MANAGE database: nutrient load and site characteristic updates and runoff concentration data.

    Science.gov (United States)

    Harmel, Daren; Qian, Song; Reckhow, Ken; Casebolt, Pamela

    2008-01-01

    The "Measured Annual Nutrient loads from AGricultural Environments" (MANAGE) database was developed to be a readily accessible, easily queried database of site characteristic and field-scale nutrient export data. The original version of MANAGE, which drew heavily from an early 1980s compilation of nutrient export data, created an electronic database with nutrient load data and corresponding site characteristics from 40 studies on agricultural (cultivated and pasture/range) land uses. In the current update, N and P load data from 15 additional studies of agricultural runoff were included along with N and P concentration data for all 55 studies. The database now contains 1677 watershed years of data for various agricultural land uses (703 for pasture/rangeland; 333 for corn; 291 for various crop rotations; 177 for wheat/oats; and 4-33 yr for barley, citrus, vegetables, sorghum, soybeans, cotton, fallow, and peanuts). Across all land uses, annual runoff loads averaged 14.2 kg ha(-1) for total N and 2.2 kg ha(-1) for total P. On average, these losses represented 10 to 25% of applied fertilizer N and 4 to 9% of applied fertilizer P. Although such statistics produce interesting generalities across a wide range of land use, management, and climatic conditions, regional crop-specific analyses should be conducted to guide regulatory and programmatic decisions. With this update, MANAGE contains data from a vast majority of published peer-reviewed N and P export studies on homogeneous agricultural land uses in the USA under natural rainfall-runoff conditions and thus provides necessary data for modeling and decision-making related to agricultural runoff. The current version can be downloaded at http://www.ars.usda.gov/spa/manage-nutrient.

  3. Health technology management: a database analysis as support of technology managers in hospitals.

    Science.gov (United States)

    Miniati, Roberto; Dori, Fabrizio; Iadanza, Ernesto; Fregonara, Mario M; Gentili, Guido Biffi

    2011-01-01

    Technology management in healthcare must continually respond and adapt itself to new improvements in medical equipment. Multidisciplinary approaches which consider the interaction of different technologies, their use and user skills, are necessary in order to improve safety and quality. An easy and sustainable methodology is vital to Clinical Engineering (CE) services in healthcare organizations in order to define criteria regarding technology acquisition and replacement. This article underlines the critical aspects of technology management in hospitals by providing appropriate indicators for benchmarking CE services exclusively referring to the maintenance database from the CE department at the Careggi Hospital in Florence, Italy.

  4. 100 years of applied psychology research on individual careers: From career management to retirement.

    Science.gov (United States)

    Wang, Mo; Wanberg, Connie R

    2017-03-01

    This article surveys 100 years of research on career management and retirement, with a primary focus on work published in the Journal of Applied Psychology. Research on career management took off in the 1920s, with most attention devoted to the development and validation of career interest inventories. Over time, research expanded to attend to broader issues such as the predictors and outcomes of career interests and choice; the nature of career success and who achieves it; career transitions and adaptability to change; retirement decision making and adjustment; and bridge employment. In this article, we provide a timeline for the evolution of the career management and retirement literature, review major theoretical perspectives and findings on career management and retirement, and discuss important future research directions. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  5. Cloud database development and management

    CERN Document Server

    Chao, Lee

    2013-01-01

    Nowadays, cloud computing is almost everywhere. However, one can hardly find a textbook that utilizes cloud computing for teaching database and application development. This cloud-based database development book teaches both the theory and practice with step-by-step instructions and examples. This book helps readers to set up a cloud computing environment for teaching and learning database systems. The book will cover adequate conceptual content for students and IT professionals to gain necessary knowledge and hands-on skills to set up cloud based database systems.

  6. Establishing the user requirements for the research reactor decommissioning database system

    International Nuclear Information System (INIS)

    Park, S. K.; Park, H. S.; Lee, G. W.; Park, J. H.

    2002-01-01

    In general, a great deal of information and data is generated during decommissioning activities, and a systematic electronic system is needed to manage it. A database system for managing the decommissioning information and data from the KRR-1 and 2 decommissioning project is now being developed. All information and data will be entered into this database system and will also be retrievable from it. To develop the DB system, the basic concept and user requirements were established, and a system for categorizing the information and data was then set up. The entities of the tables for data input were defined, categorized, and converted into codes. An ERD (Entity Relationship Diagram) was also drawn up to show their relations. To develop the user interface system for data retrieval, the relation between the input and output data must be analyzed. Through this study, the items of the output tables were established and categorized according to the requirements of the user interface system for the decommissioning information and data. These tables will be used for designing the prototype and will be refined through several feedback cycles to establish the decommissioning database system

  7. WikiPathways: a multifaceted pathway database bridging metabolomics to other omics research.

    Science.gov (United States)

    Slenter, Denise N; Kutmon, Martina; Hanspers, Kristina; Riutta, Anders; Windsor, Jacob; Nunes, Nuno; Mélius, Jonathan; Cirillo, Elisa; Coort, Susan L; Digles, Daniela; Ehrhart, Friederike; Giesbertz, Pieter; Kalafati, Marianthi; Martens, Marvin; Miller, Ryan; Nishida, Kozo; Rieswijk, Linda; Waagmeester, Andra; Eijssen, Lars M T; Evelo, Chris T; Pico, Alexander R; Willighagen, Egon L

    2018-01-04

    WikiPathways (wikipathways.org) captures the collective knowledge represented in biological pathways. By providing a database in a curated, machine readable way, omics data analysis and visualization is enabled. WikiPathways and other pathway databases are used to analyze experimental data by research groups in many fields. Due to the open and collaborative nature of the WikiPathways platform, our content keeps growing and is getting more accurate, making WikiPathways a reliable and rich pathway database. Previously, however, the focus was primarily on genes and proteins, leaving many metabolites with only limited annotation. Recent curation efforts focused on improving the annotation of metabolism and metabolic pathways by associating unmapped metabolites with database identifiers and providing more detailed interaction knowledge. Here, we report the outcomes of the continued growth and curation efforts, such as a doubling of the number of annotated metabolite nodes in WikiPathways. Furthermore, we introduce an OpenAPI documentation of our web services and the FAIR (Findable, Accessible, Interoperable and Reusable) annotation of resources to increase the interoperability of the knowledge encoded in these pathways and experimental omics data. New search options, monthly downloads, more links to metabolite databases, and new portals make pathway knowledge more effortlessly accessible to individual researchers and research communities. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  8. Brasilia’s Database Administrators

    Directory of Open Access Journals (Sweden)

    Jane Adriana

    2016-06-01

    Full Text Available Database administration has gained an essential role in the management of new database technologies. Different data models are being created to support enormous data volumes, beyond the traditional relational database. These new models are called NoSQL (Not only SQL) databases. The adoption of best practices and procedures has become essential for the operation of database management systems. Thus, this paper investigates some of the techniques and tools used by database administrators. The study highlights features and particularities of databases within the area of Brasilia, the capital of Brazil. The results point to which new technologies regarding database management are currently the most relevant, as well as the central issues in this area.

  9. Building a recruitment database for asthma trials: a conceptual framework for the creation of the UK Database of Asthma Research Volunteers.

    Science.gov (United States)

    Nwaru, Bright I; Soyiri, Ireneous N; Simpson, Colin R; Griffiths, Chris; Sheikh, Aziz

    2016-05-26

    Randomised clinical trials are the 'gold standard' for evaluating the effectiveness of healthcare interventions. However, successful recruitment of participants remains a key challenge for many trialists. In this paper, we present a conceptual framework for creating a digital, population-based database for the recruitment of asthma patients into future asthma trials in the UK. Having set up the database, the goal is to then make it available to support investigators planning asthma clinical trials. The UK Database of Asthma Research Volunteers will comprise a web-based front-end that interactively allows participant registration, and a back-end that houses the database containing participants' key relevant data. The database will be hosted and maintained at a secure server at the Asthma UK Centre for Applied Research based at The University of Edinburgh. Using a range of invitation strategies, key demographic and clinical data will be collected from those pre-consenting to consider participation in clinical trials. These data will, with consent, in due course, be linkable to other healthcare, social, economic, and genetic datasets. To use the database, asthma investigators will send their eligibility criteria for participant recruitment; eligible participants will then be informed about the new trial and asked if they wish to participate. A steering committee will oversee the running of the database, including approval of usage access. Novel communication strategies will be utilised to engage participants who are recruited into the database in order to avoid attrition as a result of waiting time to participation in a suitable trial, and to minimise the risk of their being approached when already enrolled in a trial. The value of this database will be whether it proves useful and usable to researchers in facilitating recruitment into clinical trials on asthma and whether patient privacy and data security are protected in meeting this aim. 
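    The matching step described above, where investigators submit eligibility criteria and the register returns candidate volunteers who are not already in a trial, can be sketched as a simple filter. The field names, thresholds, and data model here are invented for illustration:

    ```python
    # Hypothetical volunteer records from the register.
    volunteers = [
        {"id": 1, "age": 34, "severity": "moderate", "enrolled_in_trial": False},
        {"id": 2, "age": 52, "severity": "severe", "enrolled_in_trial": True},
        {"id": 3, "age": 47, "severity": "severe", "enrolled_in_trial": False},
    ]

    def eligible(volunteer, min_age, severities):
        # Volunteers already enrolled in a trial are never approached,
        # mirroring the register's stated policy.
        return (not volunteer["enrolled_in_trial"]
                and volunteer["age"] >= min_age
                and volunteer["severity"] in severities)

    # An investigator's submitted criteria: severe asthma, age 40 or over.
    matches = [v["id"] for v in volunteers
               if eligible(v, min_age=40, severities={"severe"})]
    print(matches)  # [3]
    ```

    In the real system this filtering would run server-side against the secure database, with the steering committee approving each access request before any matching occurs.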

  10. PRISMA database machine: A distributed, main-memory approach

    NARCIS (Netherlands)

    Schmidt, J.W.; Apers, Peter M.G.; Ceri, S.; Kersten, Martin L.; Oerlemans, Hans C.M.; Missikoff, M.

    1988-01-01

    The PRISMA project is a large-scale research effort in the design and implementation of a highly parallel machine for data and knowledge processing. The PRISMA database machine is a distributed, main-memory database management system implemented in an object-oriented language that runs on top of a

  11. CANGS DB: a stand-alone web-based database tool for processing, managing and analyzing 454 data in biodiversity studies

    Directory of Open Access Journals (Sweden)

    Schlötterer Christian

    2011-06-01

    Full Text Available Abstract Background Next generation sequencing (NGS) is widely used in metagenomic and transcriptomic analyses in biodiversity. The ease of data generation provided by NGS platforms has allowed researchers to perform these analyses on their particular study systems. In particular, the 454 platform has become the preferred choice for PCR amplicon based biodiversity surveys because it generates the longest sequence reads. Nevertheless, the handling and organization of massive amounts of sequencing data poses a major problem for the research community, particularly when multiple researchers are involved in data acquisition and analysis. An integrated and user-friendly tool, which performs quality control, read trimming, PCR primer removal, and data organization, is therefore desperately needed to make data interpretation fast and manageable. Findings We developed CANGS DB (Cleaning and Analyzing Next Generation Sequences DataBase), a flexible, stand-alone and user-friendly integrated database tool. CANGS DB is specifically designed to organize and manage the massive amount of sequencing data arising from various NGS projects. CANGS DB also provides an intuitive user interface for sequence trimming and quality control, taxonomy analysis and rarefaction analysis. Our database tool can be easily adapted to handle multiple sequencing projects in parallel with different sample information, amplicon sizes, primer sequences, and quality thresholds, which makes this software especially useful for non-bioinformaticians. Furthermore, CANGS DB is especially suited for projects where multiple users need to access the data. CANGS DB is available at http://code.google.com/p/cangsdb/. Conclusion CANGS DB provides a simple and user-friendly solution to process, store and analyze 454 sequencing data. Being a local database that is accessible through a user-friendly interface, CANGS DB provides the perfect tool for collaborative amplicon based biodiversity surveys.
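    The primer-removal and quality-trimming steps that CANGS DB automates can be sketched as follows. The primer sequence, quality threshold, and discard rule below are illustrative assumptions, not the tool's actual algorithm:

    ```python
    def clean_read(seq, quals, primer, min_qual=20):
        """Strip a 5' PCR primer, then trim the read at the first
        low-quality base (Phred score below min_qual)."""
        if not seq.startswith(primer):
            return None  # no primer match: discard the read
        seq, quals = seq[len(primer):], quals[len(primer):]
        for i, q in enumerate(quals):
            if q < min_qual:
                return seq[:i]  # keep only the high-quality prefix
        return seq

    # Toy 454-style read: 8-base primer, then a quality drop at base 5.
    read = "ACGTACGTTTGGCCAA"
    quals = [30] * 12 + [10, 30, 30, 30]
    cleaned = clean_read(read, quals, primer="ACGTACGT")
    print(cleaned)  # 'TTGG'
    ```

    Per-project primer sequences and quality thresholds, as the abstract notes, would be stored alongside the sample metadata so that each amplicon project is cleaned with its own parameters.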

  12. Updated Palaeotsunami Database for Aotearoa/New Zealand

    Science.gov (United States)

    Gadsby, M. R.; Goff, J. R.; King, D. N.; Robbins, J.; Duesing, U.; Franz, T.; Borrero, J. C.; Watkins, A.

    2016-12-01

    The updated configuration, design, and implementation of a national palaeotsunami (pre-historic tsunami) database for Aotearoa/New Zealand (A/NZ) is near completion. This tool enables correlation of events along different stretches of the NZ coastline, provides information on frequency and extent of local, regional and distant-source tsunamis, and delivers detailed information on the science and proxies used to identify the deposits. In A/NZ a plethora of data, scientific research and experience surrounds palaeotsunami deposits, but much of this information has been difficult to locate, has variable reporting standards, and lacked quality assurance. The original database was created by Professor James Goff while working at the National Institute of Water & Atmospheric Research in A/NZ, but has subsequently been updated during his tenure at the University of New South Wales. The updating and establishment of the national database was funded by the Ministry of Civil Defence and Emergency Management (MCDEM), led by Environment Canterbury Regional Council, and supported by all 16 regions of A/NZ's local government. Creation of a single database has consolidated a wide range of published and unpublished research contributions from many science providers on palaeotsunamis in A/NZ. The information is now easily accessible and quality assured and allows examination of frequency, extent and correlation of events. This provides authoritative scientific support for coastal-marine planning and risk management. The database will complement the GNS New Zealand Historical Database, and contributes to a heightened public awareness of tsunami by being a "one-stop-shop" for information on past tsunami impacts. There is scope for this to become an international database, enabling the pacific-wide correlation of large events, as well as identifying smaller regional ones. The Australian research community has already expressed an interest, and the database is also compatible with a

  13. Establishment of database and network for research of steam generator and state of the art technology review

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Jae Bong; Hur, Nam Su; Moon, Seong In; Seo, Hyeong Won; Park, Bo Kyu; Park, Sung Ho; Kim, Hyung Geun [Sungkyunkwan Univ., Seoul (Korea, Republic of)

    2004-02-15

    A significant number of steam generator tubes worldwide are defective and have been removed from service or repaired. This widespread damage has been caused by diverse degradation mechanisms, some of which are difficult to detect and predict. In domestic nuclear power plants as well, the growing number of operating plants and their increasing operating periods may lead to more steam generator tube failures. It is therefore important to carry out an integrity evaluation process to prevent steam generator tube damage. This research has two objectives. The first is to build a database for steam generator research at domestic research institutions. It will increase the efficiency and capability of limited domestic research resources by sharing data and information through an organized network, and it will enhance the current integrity evaluation procedure, which is considerably conservative and could be made more reasonable. The second objective is to establish a standard integrity evaluation procedure for steam generator tubes by reviewing state-of-the-art technology. The research resources related to steam generator tubes are managed by the established web-based database system. The following topics are covered in this project: development of a web-based network for research on steam generator tubes, and a review of state-of-the-art technology.

  14. Establishment of a database and network for steam generator research and state-of-the-art technology review

    International Nuclear Information System (INIS)

    Choi, Jae Bong; Hur, Nam Su; Moon, Seong In; Seo, Hyeong Won; Park, Bo Kyu; Park, Sung Ho; Kim, Hyung Geun

    2004-02-01

    A significant number of steam generator tubes worldwide are defective and have been removed from service or repaired. This widespread damage has been caused by diverse degradation mechanisms, some of which are difficult to detect and predict. In domestic nuclear power plants as well, the growing number of operating plants and their increasing operating periods may lead to more steam generator tube failures. It is therefore important to carry out an integrity evaluation process to prevent steam generator tube damage. This research has two objectives. The first is to build a database for steam generator research at domestic research institutions. It will increase the efficiency and capability of limited domestic research resources by sharing data and information through an organized network, and it will enhance the current integrity evaluation procedure, which is considerably conservative and could be made more reasonable. The second objective is to establish a standard integrity evaluation procedure for steam generator tubes by reviewing state-of-the-art technology. The research resources related to steam generator tubes are managed by the established web-based database system. The following topics are covered in this project: development of a web-based network for research on steam generator tubes, and a review of state-of-the-art technology.

  15. MIPS PlantsDB: a database framework for comparative plant genome research.

    Science.gov (United States)

    Nussbaumer, Thomas; Martis, Mihaela M; Roessner, Stephan K; Pfeifer, Matthias; Bader, Kai C; Sharma, Sapna; Gundlach, Heidrun; Spannagl, Manuel

    2013-01-01

    The rapidly increasing amount of plant genome (sequence) data enables powerful comparative analyses and integrative approaches and also requires structured and comprehensive information resources. Databases are needed for both model and crop plant organisms, and both intuitive search/browse views and comparative genomics tools should communicate the data to researchers and help them interpret it. MIPS PlantsDB (http://mips.helmholtz-muenchen.de/plant/genomes.jsp) was initially described in NAR in 2007 [Spannagl,M., Noubibou,O., Haase,D., Yang,L., Gundlach,H., Hindemitt,T., Klee,K., Haberer,G., Schoof,H. and Mayer,K.F. (2007) MIPS PlantsDB-plant database resource for integrative and comparative plant genome research. Nucleic Acids Res., 35, D834-D840] and was set up from the start to provide data and information resources for individual plant species as well as a framework for integrative and comparative plant genome research. PlantsDB comprises database instances for tomato, Medicago, Arabidopsis, Brachypodium, Sorghum, maize, rice, barley and wheat. Building on that, state-of-the-art comparative genomics tools such as CrowsNest are integrated to visualize and investigate syntenic relationships between monocot genomes. Results from novel genome analysis strategies targeting the complex and repetitive genomes of Triticeae species (wheat and barley) are provided and cross-linked with model species. The MIPS Repeat Element Database (mips-REdat) and Catalog (mips-REcat), as well as tight connections to other databases, e.g. via web services, are further important components of PlantsDB.

  16. Database Optimizing Services

    Directory of Open Access Journals (Sweden)

    Adrian GHENCEA

    2010-12-01

    Full Text Available Almost every organization has a database at its centre. The database supports different activities, whether production, sales and marketing, or internal operations. Every day, a database is accessed for help in strategic decisions. Meeting such needs therefore requires high-quality security and availability. Those needs can be met using a DBMS (Database Management System), which is, in fact, the software for a database. Technically speaking, it is software which uses a standard method of cataloguing, recovering, and running different data queries. A DBMS manages the input data, organizes it, and provides ways for its users or other programs to modify or extract the data. Managing a database is an operation that requires periodic updates, optimization and monitoring.

  17. Hazardous waste database: Waste management policy implications for the US Department of Energy's Environmental Restoration and Waste Management Programmatic Environmental Impact Statement

    International Nuclear Information System (INIS)

    Lazaro, M.A.; Policastro, A.J.; Antonopoulos, A.A.; Hartmann, H.M.; Koebnick, B.; Dovel, M.; Stoll, P.W.

    1994-01-01

    The hazardous waste risk assessment modeling (HaWRAM) database is being developed to analyze the risk from treatment technology operations and potential transportation accidents associated with the hazardous waste management alternatives. These alternatives are being assessed in the Department of Energy's Environmental Restoration and Waste Management Programmatic Environmental Impact Statement (EM PEIS). To support the risk analysis, the current database contains complex-wide, detailed information on hazardous waste shipments from 45 Department of Energy installations during FY 1992. The database is currently being supplemented with newly acquired data. This enhancement will improve database information on operational hazardous waste generation rates, and the level and type of current on-site treatment at Department of Energy installations.

  18. Role of Database Management Systems in Selected Engineering Institutions of Andhra Pradesh: An Analytical Survey

    Directory of Open Access Journals (Sweden)

    Kutty Kumar

    2016-06-01

    Full Text Available This paper aims to analyze the function of database management systems from the perspective of librarians working in engineering institutions in Andhra Pradesh. Ninety-eight librarians from one hundred thirty engineering institutions participated in the study. The paper reveals that training by computer suppliers and software packages is the most significant mode by which librarians acquire DBMS skills; three-fourths of the librarians are postgraduate degree holders. Most colleges use database applications for automation purposes and content value. Electrical problems and untrained staff seem to be the major constraints respondents face in managing library databases.

  19. Standardizing terminology and definitions of medication adherence and persistence in research employing electronic databases.

    Science.gov (United States)

    Raebel, Marsha A; Schmittdiel, Julie; Karter, Andrew J; Konieczny, Jennifer L; Steiner, John F

    2013-08-01

    To propose a unifying set of definitions for prescription adherence research utilizing electronic health record prescribing databases, prescription dispensing databases, and pharmacy claims databases, and to provide a conceptual framework to operationalize these definitions consistently across studies. We reviewed recent literature to identify definitions in electronic database studies of prescription-filling patterns for chronic oral medications. We then developed a conceptual model and propose standardized terminology and definitions to describe prescription-filling behavior from electronic databases. The conceptual model we propose defines 2 separate constructs: medication adherence and persistence. We define primary and secondary adherence as distinct subtypes of adherence. Metrics for estimating secondary adherence are discussed and critiqued, including a newer metric (the New Prescription Medication Gap measure) that enables estimation of both primary and secondary adherence. Terminology currently used in prescription adherence research employing electronic databases lacks consistency. We propose a clear, consistent, broadly applicable conceptual model and terminology for such studies. The model and definitions facilitate research utilizing electronic medication prescribing, dispensing, and/or claims databases and encompass the entire continuum of prescription-filling behavior. Employing conceptually clear and consistent terminology to define medication adherence and persistence will facilitate future comparative effectiveness research and meta-analytic studies that utilize electronic prescription and dispensing records.
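    The distinction drawn above between adherence constructs can be made concrete with a small calculation. The sketch below computes the proportion of days covered (PDC), a widely used secondary-adherence metric, from hypothetical dispensing records; it illustrates the general idea only and is not the New Prescription Medication Gap measure proposed by the authors.

```python
from datetime import date, timedelta

def proportion_of_days_covered(fills, start, end):
    """Fraction of days in [start, end] covered by dispensed supply.

    fills: list of (fill_date, days_supply) tuples, as might be drawn
    from a dispensing or claims database. Overlapping supplies are not
    stockpiled; each calendar day counts at most once.
    """
    covered = set()
    for fill_date, days_supply in fills:
        for offset in range(days_supply):
            day = fill_date + timedelta(days=offset)
            if start <= day <= end:
                covered.add(day)
    total_days = (end - start).days + 1
    return len(covered) / total_days

# Two 30-day fills with a gap between them, inside a 90-day window.
fills = [(date(2023, 1, 1), 30), (date(2023, 2, 15), 30)]
pdc = proportion_of_days_covered(fills, date(2023, 1, 1), date(2023, 3, 31))
print(round(pdc, 2))
```

    A persistence analysis, by contrast, would look at the length of the gap between fills rather than the fraction of days covered.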

  20. Databases of the marine metagenomics

    KAUST Repository

    Mineta, Katsuhiko; Gojobori, Takashi

    2015-01-01

    has started becoming a reality at many marine research institutions and stations all over the world, it looks obvious that data management and analysis will be confronted by so-called Big Data issues, such as how the database can be constructed

  1. Database to manage personal dosimetry Hospital Universitario de La Ribera

    International Nuclear Information System (INIS)

    Melchor, M.; Martinez, D.; Asensio, M.; Candela, F.; Camara, A.

    2011-01-01

    For the management of the dosimetry of occupationally exposed personnel, records of the use and return of dosimeters are required. The Department of Radio Physics and Radiation Protection has designed and implemented a database for managing the personnel dosimetry of the Hospital and its Area Health Centres. The specific objectives were to easily import dosimetric data from the National Dosimetry Centre, to query dosimetry records simply, to handle rotating dosimeters, and to produce reports for different time periods on dosimeter return data by user, service, etc.

  2. Landslide databases for applied landslide impact research: the example of the landslide database for the Federal Republic of Germany

    Science.gov (United States)

    Damm, Bodo; Klose, Martin

    2014-05-01

    This contribution presents an initiative to develop a national landslide database for the Federal Republic of Germany. It highlights the structure and contents of the landslide database and outlines its major data sources and the strategy of information retrieval. Furthermore, the contribution exemplifies the database's potential in applied landslide impact research, including statistics of landslide damage, repair, and mitigation. Thanks to systematic regional data compilation, the landslide database offers a differentiated data pool of more than 5,000 data sets and over 13,000 single data files. It dates back to 1137 AD and covers landslide sites throughout Germany. In seven main data blocks, the landslide database stores, besides information on landslide types, dimensions, and processes, additional data on soil and bedrock properties, geomorphometry, and climatic or other major triggering events. A peculiarity of this landslide database is its storage of data sets on land use effects, damage impacts, hazard mitigation, and landslide costs. Compilation of landslide data is based on a two-tier strategy of data collection. The first step of information retrieval includes systematic web content mining and exploration of online archives of emergency agencies, fire and police departments, and news organizations. Using web and RSS feeds, and soon also a focused web crawler, this enables effective nationwide data collection for recent landslides. On the basis of this information, in-depth data mining is performed to deepen and diversify the data pool in key landslide areas. This enables gathering detailed landslide information from, amongst others, agency records, geotechnical reports, climate statistics, maps, and satellite imagery. Landslide data are extracted from these information sources using a mix of methods, including statistical techniques, imagery analysis, and qualitative text interpretation. The landslide database is currently being migrated to a spatial database system

  3. Development of an integrated database management system to evaluate integrity of flawed components of nuclear power plant

    International Nuclear Information System (INIS)

    Mun, H. L.; Choi, S. N.; Jang, K. S.; Hong, S. Y.; Choi, J. B.; Kim, Y. J.

    2001-01-01

    The objective of this paper is to develop an NPP-IDBMS (Integrated DataBase Management System for Nuclear Power Plants) for evaluating the integrity of components of nuclear power plants using a relational data model. This paper describes the relational data model, structure and development strategy for the proposed NPP-IDBMS. The NPP-IDBMS consists of database, database management system and interface parts. The database part consists of plant, shape, operating condition, material properties and stress databases, which are required for the integrity evaluation of each component in nuclear power plants. For the development of the stress database, an extensive finite element analysis was performed for various components considering operational transients. The developed NPP-IDBMS will provide an efficient and accurate way to evaluate the integrity of flawed components.

  4. Name Authority Challenges for Indexing and Abstracting Databases

    OpenAIRE

    Denise Beaubien Bennett; Priscilla Williams

    2006-01-01

    Objective - This analysis explores alternative methods for managing author name changes in Indexing and Abstracting (I&A) databases. A searcher may retrieve incomplete or inaccurate results when the database provides no or faulty assistance in linking author name variations. Methods - The article includes an analysis of current name authority practices in I&A databases and of selected research into name disambiguation models applied to authorship of articles. Results - Several potential...

  5. Fedora Content Modelling for Improved Services for Research Databases

    DEFF Research Database (Denmark)

    Elbæk, Mikael Karstensen; Heller, Alfred; Pedersen, Gert Schmeltz

    A re-implementation of the research database of the Technical University of Denmark, DTU, is based on Fedora. The backbone consists of content models for primary and secondary entities and their relationships, giving flexible and powerful extraction capabilities for interoperability and reporting. By adopting such an abstract data model, the platform enables new and improved services for researchers, librarians and administrators.

  6. The Net Enabled Waste Management Database in the context of an indicator of sustainable development for radioactive waste management

    International Nuclear Information System (INIS)

    Csullog, G.W.; Selling, H.; Holmes, R.; Benitez, J.C.

    2002-01-01

    The IAEA was selected by the UN to be the lead agency for the development and implementation of indicators of sustainable development for radioactive waste management (ISD-RW). Starting in late 1999, the UN initiated a program to consolidate a large number of indicators into a smaller set and advised the IAEA that a single ISD-RW was needed. In September 2001, a single indicator was developed by the IAEA and subsequently revised in February 2002. In parallel with its work on the ISD-RW, the IAEA developed and implemented the Net Enabled Waste Management Database (NEWMDB). The NEWMDB is an international database to collect, compile and disseminate information about nationally-based radioactive waste management programmes and waste inventories. The first data collection cycle with the NEWMDB (July 2001 to March 2002) demonstrated that much of the information needed to calculate the ISD-RW could be collected by the IAEA for its international database. However, the first data collection cycle indicated that capacity building, in the area of identifying waste classification schemes used in countries, is required. (author)

  7. The Net Enabled Waste Management Database as an international source of radioactive waste management information

    International Nuclear Information System (INIS)

    Csullog, G.W.; Friedrich, V.; Miaw, S.T.W.; Tonkay, D.; Petoe, A.

    2002-01-01

    The IAEA's Net Enabled Waste Management Database (NEWMDB) is an integral part of the IAEA's policies and strategy related to the collection and dissemination of information, both internal to the IAEA in support of its activities and external to the IAEA (publicly available). The paper highlights the NEWMDB's role in relation to the routine reporting of status and trends in radioactive waste management, in assessing the development and implementation of national systems for radioactive waste management, in support of a newly developed indicator of sustainable development for radioactive waste management, in support of reporting requirements for the Joint Convention on the Safety of Spent Fuel Management and on the Safety of Radioactive Waste Management, in support of IAEA activities related to the harmonization of waste management information at the national and international levels and in relation to the management of spent/disused sealed radioactive sources. (author)

  8. Some Considerations about Modern Database Machines

    Directory of Open Access Journals (Sweden)

    Manole VELICANU

    2010-01-01

    Full Text Available Optimizing the two computing resources of any computing system - time and space - has always been one of the priority objectives of any database. A current and effective solution in this respect is the database machine. Optimizing computer applications by means of database machines has been a steady preoccupation of researchers since the late seventies. Several information technologies have revolutionized the present information framework. Of these, those which have brought a major contribution to the optimization of databases are: efficient handling of large volumes of data (Data Warehouse, Data Mining, OLAP - On-Line Analytical Processing), the improvement of DBMS (Database Management System) facilities through the integration of new technologies, the dramatic increase in computing power, and the efficient use of it (computer networks, massive parallel computing, Grid Computing and so on). All these information technologies, and others, have favored the resumption of research on database machines and the obtaining in the last few years of some very good practical results regarding the optimization of computing resources.

  9. Selecting a Relational Database Management System for Library Automation Systems.

    Science.gov (United States)

    Shekhel, Alex; O'Brien, Mike

    1989-01-01

    Describes the evaluation of four relational database management systems (RDBMSs) (Informix Turbo, Oracle 6.0 TPS, Unify 2000 and Relational Technology's Ingres 5.0) to determine which is best suited for library automation. The evaluation criteria used to develop a benchmark specifically designed to test RDBMSs for libraries are discussed. (CLB)

  10. Data management with a landslide inventory of the Franconian Alb (Germany) using a spatial database and GIS tools

    Science.gov (United States)

    Bemm, Stefan; Sandmeier, Christine; Wilde, Martina; Jaeger, Daniel; Schwindt, Daniel; Terhorst, Birgit

    2014-05-01

    The area of the Swabian-Franconian cuesta landscape (Southern Germany) is highly prone to landslides. This was apparent in the late spring of 2013, when numerous landslides occurred as a consequence of heavy and long-lasting rainfalls. The specific climatic situation caused extensive damage with serious impacts on settlements and infrastructure. Knowledge of the spatial distribution, processes and characteristics of landslides is important to evaluate the potential risk arising from mass movements in those areas. Within the framework of two projects, about 400 landslides were mapped and detailed data sets were compiled between 2011 and 2014 at the Franconian Alb. The studies are related to the project "Slope stability and hazard zones in the northern Bavarian cuesta" (DFG, German Research Foundation) as well as to the LfU (Bavarian Environment Agency) project "Georisks and climate change - hazard indication map Jura". The central goal of the present study is to create a spatial database for landslides. The database should contain all fundamental parameters to characterize the mass movements and should provide the potential for secure data storage and data management, as well as statistical evaluations. The spatial database was created with PostgreSQL, an object-relational database management system, and PostGIS, a spatial database extender for PostgreSQL, which provides the possibility to store spatial and geographic objects and to connect to several GIS applications, such as GRASS GIS, SAGA GIS, QGIS and GDAL, a geospatial library (Obe et al. 2011). Database access for querying, importing, and exporting spatial and non-spatial data is ensured by using GUI or non-GUI connections. The database allows the use of procedural languages for writing advanced functions in the R, Python or Perl programming languages. It is possible to work directly with the (spatial) data entirety of the database in R. The inventory of the database includes (amongst others
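    The storage-and-query pattern described above can be sketched with Python's built-in sqlite3 module standing in for the PostgreSQL/PostGIS system. The schema, site names and coordinates below are hypothetical; a real PostGIS setup would use a geometry column and spatial functions rather than plain lon/lat columns.

```python
import sqlite3

# In-memory stand-in for the PostgreSQL/PostGIS inventory; schema is invented.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE landslides (
        id INTEGER PRIMARY KEY,
        name TEXT,
        lon REAL, lat REAL,      -- PostGIS would use a geometry column here
        slide_type TEXT,
        mapped_year INTEGER
    )
""")
conn.executemany(
    "INSERT INTO landslides (name, lon, lat, slide_type, mapped_year) "
    "VALUES (?, ?, ?, ?, ?)",
    [
        ("Site A", 11.08, 49.57, "rotational slide", 2012),
        ("Site B", 11.21, 49.44, "translational slide", 2013),
        ("Site C", 11.15, 49.61, "rotational slide", 2013),
    ],
)

# Count mapped landslides per type, as a GIS client or R session might.
rows = conn.execute(
    "SELECT slide_type, COUNT(*) FROM landslides "
    "GROUP BY slide_type ORDER BY slide_type"
).fetchall()
print(rows)
```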

  11. Management of radiological related equipments. Creating the equipment management database and analysis of the repair and maintenance records

    International Nuclear Information System (INIS)

    Eguchi, Megumu; Taguchi, Keiichi; Oota, Takashi; Kajiwara, Hiroki; Ono, Kiyotune; Hagio, Kiyofumi; Uesugi, Ekizo; Kajishima, Tetuo; Ueda, Kenji

    2002-01-01

    In 1997, we established a committee for equipment maintenance and management in our department. We designed a database to classify and register all the radiological-related equipment using Microsoft Access. Managing the condition and cost of each piece of equipment has become easier by keeping the database as an equipment management ledger and by filing the history of repairs and maintenance for each modality. We then tallied the number of repairs, repair costs and downtimes from four years of repair and maintenance records, and re-examined the causal analysis of failures and the content of regular maintenance for the CT and MRI equipment, which had shown the highest numbers of repairs. Consequently, we found ways to improve the data registration method and to use the repair budget more economically. (author)

  12. Towards the management of the databases founded on descriptions ...

    African Journals Online (AJOL)

    The canonical model is defined in the concept language, developed in our research ... the notion of classes to produce descriptions which are, also, used in the reasoning process. ... Key-Words: Description logic / Databases / Semantics.

  13. Research Outputs of England's Hospital Episode Statistics (HES) Database: Bibliometric Analysis.

    Science.gov (United States)

    Chaudhry, Zain; Mannan, Fahmida; Gibson-White, Angela; Syed, Usama; Ahmed, Shirin; Majeed, Azeem

    2017-12-06

    Hospital administrative data, such as those provided by the Hospital Episode Statistics (HES) database in England, are increasingly being used for research and quality improvement. To date, no study has tried to quantify and examine trends in the use of HES for research purposes. To examine trends in the use of HES data for research, publications generated from the use of HES data were extracted from PubMed and analysed. Publications from 1996 to 2014 were then examined further in the Science Citation Index (SCI) of the Thomson Scientific Institute for Scientific Information (Web of Science) for details of research specialty area. 520 studies, categorised into 44 specialty areas, were extracted from PubMed. The review showed an increase in publications over the 18-year period, with an average of 27 publications per year and the majority of output observed in the latter part of the study period. The highest number of publications was in the Health Statistics specialty area. The use of HES data for research is becoming more common, and the increase in publications over time shows that researchers are beginning to take advantage of the potential of HES data. Although HES is a valuable database, concerns exist over the accuracy and completeness of the data entered. Clinicians need to be more engaged with HES for the full potential of this database to be harnessed.

  14. Accessing the public MIMIC-II intensive care relational database for clinical research.

    Science.gov (United States)

    Scott, Daniel J; Lee, Joon; Silva, Ikaro; Park, Shinhyuk; Moody, George B; Celi, Leo A; Mark, Roger G

    2013-01-10

    The Multiparameter Intelligent Monitoring in Intensive Care II (MIMIC-II) database is a free, public resource for intensive care research. The database was officially released in 2006, and has attracted a growing number of researchers in academia and industry. We present the two major software tools that facilitate accessing the relational database: the web-based QueryBuilder and a downloadable virtual machine (VM) image. QueryBuilder and the MIMIC-II VM have been developed successfully and are freely available to MIMIC-II users. Simple example SQL queries and the resulting data are presented. Clinical studies pertaining to acute kidney injury and prediction of fluid requirements in the intensive care unit are shown as typical examples of research performed with MIMIC-II. In addition, MIMIC-II has also provided data for annual PhysioNet/Computing in Cardiology Challenges, including the 2012 Challenge "Predicting mortality of ICU Patients". QueryBuilder is a web-based tool that provides easy access to MIMIC-II. For more computationally intensive queries, one can locally install a complete copy of MIMIC-II in a VM. Both publicly available tools provide the MIMIC-II research community with convenient querying interfaces and complement the value of the MIMIC-II relational database.
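    The kind of simple SQL query mentioned above can be illustrated with Python's sqlite3 against a toy stay table. The schema and values below are invented for illustration and do not reflect MIMIC-II's actual tables.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE icustays (
        subject_id INTEGER,
        intime TEXT,
        outtime TEXT
    );
    INSERT INTO icustays VALUES
        (1, '2012-01-01 08:00:00', '2012-01-04 08:00:00'),
        (2, '2012-01-02 12:00:00', '2012-01-03 00:00:00');
""")

# Length of stay in days for each ICU admission, via SQLite's julianday().
los = conn.execute("""
    SELECT subject_id,
           julianday(outtime) - julianday(intime) AS los_days
    FROM icustays
    ORDER BY subject_id
""").fetchall()
print(los)
```

    Running the same query through a web interface such as QueryBuilder, or against a local copy of the full database in a VM, is the pattern the paper describes.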

  15. A Relational Database of WHO Mortality Data Prepared to Facilitate Global Mortality Research

    Directory of Open Access Journals (Sweden)

    Albert de Roos

    2015-09-01

    Full Text Available Detailed world mortality data, such as those collected by the World Health Organization, give a wealth of information about causes of death worldwide over a time span of 60 years. However, the raw mortality data in text format as provided by the WHO are not directly suitable for systematic research and data mining. In this Data Paper, a relational database is presented that is created from the raw WHO mortality data set and includes mortality rates, an ICD-code table and country reference data. This enriched database, as a corpus of global mortality data, can be readily imported into relational databases but can also function as the data source for other types of databases. The use of this database can therefore greatly facilitate global epidemiological research that may provide new clues to genetic or environmental factors in the origins of diseases.
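    The enrichment step described above - turning raw text records into rows joined against ICD-code and country reference tables - can be sketched in Python. The record layout, codes and figures below are purely illustrative and do not follow the actual WHO file format.

```python
import csv
import io

# Illustrative raw rows: country code, year, ICD cause code, deaths, population.
raw = """\
country,year,cause,deaths,pop
4020,2010,C34,1500,5000000
4020,2010,I21,2200,5000000
"""

# Reference tables the relational database would hold alongside the fact rows.
icd_codes = {"C34": "Malignant neoplasm of bronchus and lung",
             "I21": "Acute myocardial infarction"}
countries = {"4020": "Exampleland"}  # hypothetical numeric country code

rows = []
for rec in csv.DictReader(io.StringIO(raw)):
    rows.append({
        "country": countries[rec["country"]],          # join with country table
        "year": int(rec["year"]),
        "cause": icd_codes[rec["cause"]],              # join with ICD-code table
        "rate_per_100k": 100000 * int(rec["deaths"]) / int(rec["pop"]),
    })

for r in rows:
    print(r["cause"], round(r["rate_per_100k"], 1))
```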

  16. Information flow in the DAMA project beyond database managers: information flow managers

    Science.gov (United States)

    Russell, Lucian; Wolfson, Ouri; Yu, Clement

    1996-12-01

    To meet the demands of commercial data traffic on the information highway, a new look at managing data is necessary. One projected activity, the sharing of point-of-sale information, is being considered in the Demand Activated Manufacturing Project (DAMA) of the American Textile Partnership (AMTEX) project. A scenario is examined in which 100 000 retail outlets communicate over a period of days. They provide the latest estimates of demand for sewn products across a chain of 26 000 suppliers through the use of bill-of-materials explosions at four levels of detail. Enabling this communication requires an approach that shares common features with both workflows and database management. A new paradigm, the information flow manager, is developed to handle this situation, including the case where members of the supply chain fail to communicate and go out of business. Techniques for approximation are introduced so as to keep estimates of demand as current as possible.
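    The bill-of-materials explosion underlying the scenario above - translating demand for finished goods into component demand down a supplier chain - can be sketched as a recursive traversal. The products, components and quantities here are hypothetical.

```python
# Hypothetical bill of materials: product -> list of (component, qty per unit).
bom = {
    "shirt": [("fabric_m2", 2), ("buttons", 8), ("thread_m", 30)],
    "fabric_m2": [("yarn_kg", 0.2)],
}

def explode(product, quantity, demand=None):
    """Accumulate raw demand for every component below `product`."""
    if demand is None:
        demand = {}
    for component, per_unit in bom.get(product, []):
        needed = quantity * per_unit
        demand[component] = demand.get(component, 0) + needed
        explode(component, needed, demand)  # recurse one level of detail deeper
    return demand

# Point-of-sale demand of 1000 shirts, exploded across the supplier chain.
demand_1000 = explode("shirt", 1000)
print(demand_1000)
```

    An information flow manager would propagate such exploded estimates through the chain, approximating where a supplier fails to report.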

  17. Security and Health Research Databases: The Stakeholders and Questions to Be Addressed

    OpenAIRE

    Stewart, Sara

    2006-01-01

    Health research database security issues abound. Issues include subject confidentiality, data ownership, data integrity and data accessibility. There are also various stakeholders in database security. Each of these stakeholders has a different set of concerns and responsibilities when dealing with security issues. There is an obvious need for training in security issues, so that these issues may be addressed and health research will move on without added obstacles based on misunderstanding security methods and technologies.

  18. Security and health research databases: the stakeholders and questions to be addressed.

    Science.gov (United States)

    Stewart, Sara

    2006-01-01

    Health research database security issues abound. Issues include subject confidentiality, data ownership, data integrity and data accessibility. There are also various stakeholders in database security. Each of these stakeholders has a different set of concerns and responsibilities when dealing with security issues. There is an obvious need for training in security issues, so that these issues may be addressed and health research will move on without added obstacles based on misunderstanding security methods and technologies.

  19. Report on the first Twente Data Management Workshop on XML Databases and Information Retrieval

    NARCIS (Netherlands)

    Hiemstra, Djoerd; Mihajlovic, V.

    2004-01-01

    The Database Group of the University of Twente initiated a new series of workshops called Twente Data Management workshops (TDM), starting with one on XML Databases and Information Retrieval which took place on 21 June 2004 at the University of Twente. We have set ourselves two goals for the

  20. Database Foundation For The Configuration Management Of The CERN Accelerator Controls Systems

    CERN Document Server

    Zaharieva, Z; Peryt, M

    2011-01-01

    The Controls Configuration Database (CCDB) and its interfaces have been developed over the last 25 years in order to become nowadays the basis for the Configuration Management of the Controls System for all accelerators at CERN. The CCDB contains data for all configuration items and their relationships, required for the correct functioning of the Controls System. The configuration items are quite heterogeneous, depicting different areas of the Controls System – ranging from 3000 Front-End Computers, 75 000 software devices allowing remote control of the accelerators, to valid states of the Accelerators Timing System. The article will describe the different areas of the CCDB, their interdependencies and the challenges to establish the data model for such a diverse configuration management database, serving a multitude of clients. The CCDB tracks the life of the configuration items by allowing their clear identification, triggering of change management processes as well as providing status accounting and aud...

  1. An attempt to develop a database for epidemiological research in Semipalatinsk

    International Nuclear Information System (INIS)

    Katayama, Hiroaki; Apsalikov, K.N.; Gusev, B.I.; Galich, B.; Madieva, M.; Koshpessova, G.; Abdikarimova, A.; Hoshi, Masaharu

    2006-01-01

The present paper reports progress and problems in our development of a database for comprehensive epidemiological research in Semipalatinsk, whose ultimate aim is to examine the effects of low-dose radiation exposure on the human body. The database was constructed and set up at the Scientific Research Institute of Radiation Medicine and Ecology in 2003, and the number of data entries reached 110,000 on 31 January 2005. However, we face some problems concerning the size, accuracy and reliability of the data which hinder full epidemiological analysis. Firstly, we need fuller, bias-free data. The second task is to establish a committee, composed of statisticians and epidemiologists, to discuss the analysis, conduct the research project from a long-term perspective, and carry out the collection of data effectively along the lines of the project. Due to the insufficiency of the data collected so far, our analysis is limited to showing the trends in mortality rates in the high- and low-dose areas. (author)

  2. Development of a Framework for Multimodal Research: Creation of a Bibliographic Database

    National Research Council Canada - National Science Library

    Coovert, Michael D; Gray, Ashley A; Elliott, Linda R; Redden, Elizabeth S

    2007-01-01

    .... The results of the overall effort, the multimodal framework and article tracking sheet, bibliographic database, and searchable multimodal database make substantial and valuable contributions to the accumulation and interpretation of multimodal research. References collected in this effort are listed in the appendix.

  3. Linking international trademark databases to inform IP research and policy

    Energy Technology Data Exchange (ETDEWEB)

    Petrie, P.

    2016-07-01

Researchers and policy makers are concerned with many international issues regarding trademarks, such as trademark squatting, cluttering, and dilution. Trademark application data can provide an evidence base to inform government policy regarding these issues, and can also produce quantitative insights into economic trends and brand dynamics. Currently, national trademark databases can provide insight into economic and brand dynamics at the national level, but gaining such insight at an international level is more difficult due to a lack of internationally linked trademark data. We are in the process of building a harmonised international trademark database (the “Patstat of trademarks”), in which equivalent trademarks have been identified across national offices. We have developed a pilot database that incorporates 6.4 million U.S., 1.3 million Australian, and 0.5 million New Zealand trademark applications, spanning over 100 years. The database will be extended to incorporate trademark data from other participating intellectual property (IP) offices as they join the project. Confirmed partners include the United Kingdom, WIPO, and OHIM. We will continue to expand the scope of the project, and intend to include many more IP offices from around the world. In addition to building the pilot database, we have developed a linking algorithm that identifies equivalent trademarks (TMs) across the three jurisdictions. The algorithm can currently be applied to all applications that contain TM text, i.e. around 96% of all applications. In its current state, the algorithm successfully identifies ~97% of equivalent TMs that are known to be linked a priori, as they share an international registration number through the Madrid Protocol. When complete, the internationally linked trademark database will be a valuable resource for researchers and policy-makers in fields such as econometrics, intellectual property rights, and brand policy. (Author)
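The abstract does not publish the linking algorithm itself, only that it matches equivalent trademarks across offices via their application text. A minimal sketch of that idea, assuming invented record fields (`mark_text`, `office`) and simple text normalisation as the matching key; the actual algorithm is certainly more sophisticated:

```python
# Hypothetical sketch of cross-office trademark linking by normalized mark text.
# Field names ("mark_text", "office") are illustrative assumptions, not the
# schema of the database described in the abstract.
import re
from collections import defaultdict

def normalize(mark_text):
    """Lowercase and strip punctuation/whitespace so that, e.g.,
    'Acme-Corp' (one office) and 'ACME CORP' (another) collide."""
    return re.sub(r"[^a-z0-9]", "", mark_text.lower())

def link_equivalents(applications):
    """Group applications whose normalized mark text matches;
    keep only groups spanning more than one office."""
    buckets = defaultdict(list)
    for app in applications:
        buckets[normalize(app["mark_text"])].append(app)
    return [
        group for group in buckets.values()
        if len({app["office"] for app in group}) > 1
    ]

apps = [
    {"office": "US", "mark_text": "Acme-Corp"},
    {"office": "AU", "mark_text": "ACME CORP"},
    {"office": "NZ", "mark_text": "Kiwi Brands"},
]
groups = link_equivalents(apps)
print(groups)  # one cross-office group (US/AU), NZ application unlinked
```

In practice such exact-match blocking would be followed by fuzzy comparison and validation against known Madrid-protocol links, as the ~97% recall figure in the abstract suggests.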

  4. MPD3: a useful medicinal plants database for drug designing.

    Science.gov (United States)

    Mumtaz, Arooj; Ashfaq, Usman Ali; Ul Qamar, Muhammad Tahir; Anwar, Farooq; Gulzar, Faisal; Ali, Muhammad Amjad; Saari, Nazamid; Pervez, Muhammad Tariq

    2017-06-01

Medicinal plants are the main natural pools for the discovery and development of new drugs. In the modern era of computer-aided drug designing (CADD), prompt efforts are needed to design and construct useful database management systems that allow proper data storage, retrieval and management through a user-friendly interface. An inclusive database holding information about the classification, activity and ready-to-dock library of medicinal plants' phytochemicals is therefore required to assist researchers in the field of CADD. The present work was designed to merge the activities of phytochemicals from medicinal plants, their targets and literature references into a single comprehensive database named the Medicinal Plants Database for Drug Designing (MPD3). The newly designed online and downloadable MPD3 contains information about more than 5000 phytochemicals from around 1000 medicinal plants with 80 different activities, more than 900 literature references and over 200 targets. The database is deemed to be very useful for researchers engaged in medicinal plants research, CADD and drug discovery/development, with ease of operation and increased efficiency. MPD3 is a comprehensive database which provides most of the information related to medicinal plants on a single platform, and is freely available at: http://bioinform.info .

  5. Towards efficient use of research resources: a nationwide database of ongoing primary care research projects in the Netherlands.

    Science.gov (United States)

    Kortekaas, Marlous F; van de Pol, Alma C; van der Horst, Henriëtte E; Burgers, Jako S; Slort, Willemjan; de Wit, Niek J

    2014-04-01

PURPOSE. Although primary care research has evolved with great success over the last decades, there is a growing need to prioritize topics given the limited resources available. We therefore constructed a nationwide database of ongoing primary care research projects in the Netherlands, and assessed whether the distribution of research topics matched primary care practice. We conducted a survey among the main primary care research centres in the Netherlands and gathered details of all ongoing primary care research projects. We classified the projects according to research topic, relation to professional guidelines and knowledge deficits, collaborative partners and funding source. Subsequently, we compared the frequency distribution of clinical topics of the research projects with the prevalence of problems in primary care practice. We identified 296 ongoing primary care research projects from 11 research centres. Most projects were designed as randomized controlled trials (35%) or observational cohorts (34%), and most were government funded (60%). Thematically, most research projects addressed chronic diseases, mainly cardiovascular risk management (8%), depressive disorders (8%) and diabetes mellitus (7%). One-fifth of the projects were related to defined knowledge deficits in primary care guidelines. From a clinical primary care perspective, research projects on dermatological problems were significantly underrepresented (P = 0.01). This survey of ongoing projects demonstrates that primary care research has a firm basis in the Netherlands, with a strong focus on chronic disease. The fit with primary care practice can still improve, and future research should address knowledge deficits in professional guidelines more directly.

  6. Database Description - Arabidopsis Phenome Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

Full Text Available Arabidopsis Phenome Database. General information: contact Hiroshi Masuya (BioResource Center); database classification: Plant databases - Arabidopsis thaliana. Organism: Arabidopsis thaliana (Taxonomy ID: 3702). Database description (truncated in the archive record): the new Arabidopsis Phenome Database integrates two novel databases; one provides useful materials for experimental research, the other, the “Database of Curated Plant Phenome”, focusing …

  7. Use of SQL Databases to Support Human Resource Management

    OpenAIRE

    Zeman, Jan

    2011-01-01

This bachelor's thesis focuses on the design of an SQL database to support human resource management and its subsequent implementation in MS SQL Server.

  8. The AMMA database

    Science.gov (United States)

    Boichard, Jean-Luc; Brissebrat, Guillaume; Cloche, Sophie; Eymard, Laurence; Fleury, Laurence; Mastrorillo, Laurence; Moulaye, Oumarou; Ramage, Karim

    2010-05-01

The AMMA project includes aircraft, ground-based and ocean measurements, an intensive use of satellite data and diverse modelling studies. Therefore, the AMMA database aims at storing a great amount and a large variety of data, and at providing the data as rapidly and safely as possible to the AMMA research community. In order to stimulate the exchange of information and collaboration between researchers from different disciplines or using different tools, the database provides a detailed description of the products and uses standardized formats. The AMMA database contains: - AMMA field campaign datasets; - historical data in West Africa from 1850 (operational networks and previous scientific programs); - satellite products from past and future satellites, (re-)mapped on a regular latitude/longitude grid and stored in NetCDF format (CF Convention); - model outputs from atmosphere or ocean operational (re-)analyses and forecasts, and from research simulations, processed in the same way as the satellite products. Before accessing the data, every user has to sign the AMMA data and publication policy. This chart only covers the use of data in the framework of scientific objectives and categorically excludes the redistribution of data to third parties and usage for commercial applications. Some collaboration between data producers and users, and the mention of the AMMA project in any publication, is also required. The AMMA database and the associated on-line tools have been fully developed and are managed by two teams in France (IPSL Database Centre, Paris and OMP, Toulouse). Users can access data of both data centres through a unique web portal. This website is composed of different modules: - Registration: forms to register, and to read and sign the data use chart when a user visits for the first time; - Data access interface: a user-friendly tool for building a data extraction request by selecting various criteria like location, time, parameters... The request can
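The remapping step mentioned above ("(re-)mapped on a regular latitude/longitude grid") can be sketched in miniature. This is not the AMMA processing chain, only an illustrative cell-averaging remap with invented station data and grid step:

```python
# Hedged sketch: binning scattered observations onto a regular lat/lon grid
# by cell averaging. Grid origin, step and the sample observations are
# invented for illustration; real products use far finer grids and proper
# interpolation.
def remap_to_grid(obs, lat0, lon0, step):
    """obs: list of (lat, lon, value). Returns {(cell_lat, cell_lon): mean},
    averaging all observations that fall within each grid cell."""
    cells = {}
    for lat, lon, val in obs:
        glat = lat0 + step * int((lat - lat0) / step)  # cell lower-left corner
        glon = lon0 + step * int((lon - lon0) / step)
        cells.setdefault((glat, glon), []).append(val)
    return {cell: sum(vals) / len(vals) for cell, vals in cells.items()}

# Two nearby stations fall in the same 1-degree cell and are averaged.
obs = [(13.2, 2.1, 301.5), (13.4, 2.3, 302.5), (9.7, 1.1, 299.0)]
grid = remap_to_grid(obs, lat0=0.0, lon0=0.0, step=1.0)
print(grid)  # {(13.0, 2.0): 302.0, (9.0, 1.0): 299.0}
```

The gridded result would then typically be written out as a CF-compliant NetCDF variable with `latitude`/`longitude` dimensions, as the abstract describes.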

  9. Database And Interface Modifications: Change Management Without Affecting The Clients

    CERN Document Server

    Peryt, M; Martin Marquez, M; Zaharieva, Z

    2011-01-01

    The first Oracle®-based Controls Configuration Database (CCDB) was developed in 1986, by which the controls system of CERN’s Proton Synchrotron became data-driven. Since then, this mission-critical system has evolved tremendously going through several generational changes in terms of the increasing complexity of the control system, software technologies and data models. Today, the CCDB covers the whole CERN accelerator complex and satisfies a much wider range of functional requirements. Despite its online usage, everyday operations of the machines must not be disrupted. This paper describes our approach with respect to dealing with change while ensuring continuity. How do we manage the database schema changes? How do we take advantage of the latest web deployed application development frameworks without alienating the users? How do we minimize impact on the dependent systems connected to databases through various APIs? In this paper we will provide our answers to these questions, and to many more.

  10. Are Managed Futures Indices Telling Truth? Biases in CTA Databases and Proposals of Potential Enhancements

    Directory of Open Access Journals (Sweden)

    Adam Zaremba

    2011-07-01

Full Text Available Managed futures are an alternative asset class that has recently become considerably popular in the investment industry. However, due to its characteristics, access to historical performance statistics for managed futures is relatively confined. All available information originates from commercial and academic databases, reporting to which is entirely voluntary. This situation results in a series of biases which distort managed futures performance in the eyes of investors. The paper consists of two parts. First, the author reviews and describes the various biases that influence the reliability of managed futures indices and databases. The second section encompasses the author's proposals for potential enhancements, which aim to reduce the impact of the biases in order to derive a benchmark that better reflects the characteristics of managed futures investment from the point of view of a potential investor.

  11. Training Database Technology in DBMS MS Access

    Directory of Open Access Journals (Sweden)

    Nataliya Evgenievna Surkova

    2015-05-01

Full Text Available The article describes the methodological issues of teaching relational database technology and relational database management systems. DBMS Microsoft Access serves as the primer for learning DBMS. This methodology helps develop general cultural competences, such as command of the main methods, ways and means of producing, storing and processing information, and computer skills as a means of managing information. It also develops professional competences, such as the ability to collect, analyze and process the data necessary for solving professional tasks, and the ability to use modern technology and information technology for analytical and research tasks.

  12. Database Description - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

Full Text Available Trypanosomes Database. General information: database name: Trypanosomes Database; maintainer: National Institute of Genetics, Research Organization of Information and Systems, Yata 1111, Mishima, Shizuoka 411-8540, Japan. Taxonomy: Trypanosoma (Taxonomy ID: 5690); Homo sapiens (Taxonomy ID: 9606). External links: PDB (Protein Data Bank), KEGG PATHWAY Database, DrugPort. Entry list: available; query search: available; web services: … (record truncated in the archive).

  13. TR32DB - Management of Research Data in a Collaborative, Interdisciplinary Research Project

    Science.gov (United States)

    Curdt, Constanze; Hoffmeister, Dirk; Waldhoff, Guido; Lang, Ulrich; Bareth, Georg

    2015-04-01

The management of research data in a well-structured and documented manner is essential in the context of collaborative, interdisciplinary research environments (e.g. across various institutions). Consequently, the set-up and use of a research data management (RDM) system like a data repository or project database is necessary. These systems should accompany and support scientists during the entire research life cycle (e.g. data collection, documentation, storage, archiving, sharing, publishing) and operate across disciplines in interdisciplinary research projects. The challenges and problems of RDM are well-known. Consequently, the set-up of a user-friendly, well-documented, sustainable RDM system is essential, as well as user support and further assistance. In the framework of the Transregio Collaborative Research Centre 32 'Patterns in Soil-Vegetation-Atmosphere Systems: Monitoring, Modelling, and Data Assimilation' (CRC/TR32), funded by the German Research Foundation (DFG), a RDM system was self-designed and implemented. The CRC/TR32 project database (TR32DB, www.tr32db.de) has been operating online since early 2008. The TR32DB handles all data created by the involved project participants from several institutions (e.g. the Universities of Cologne, Bonn, Aachen, and the Research Centre Jülich) and research fields (e.g. soil and plant sciences, hydrology, geography, geophysics, meteorology, remote sensing). Very heterogeneous research data are considered, resulting from field measurement campaigns, meteorological monitoring, remote sensing, laboratory studies and modelling approaches. Furthermore, outcomes like publications, conference contributions, PhD reports and corresponding images are regarded. The TR32DB project database is set up in cooperation with the Regional Computing Centre of the University of Cologne (RRZK) and also located in this hardware environment. The TR32DB system architecture is composed of three main components: (i) a file-based data

  14. A Spatio-Temporal Building Exposure Database and Information Life-Cycle Management Solution

    Directory of Open Access Journals (Sweden)

    Marc Wieland

    2017-04-01

    Full Text Available With an ever-increasing volume and complexity of data collected from a variety of sources, the efficient management of geospatial information becomes a key topic in disaster risk management. For example, the representation of assets exposed to natural disasters is subjected to changes throughout the different phases of risk management reaching from pre-disaster mitigation to the response after an event and the long-term recovery of affected assets. Spatio-temporal changes need to be integrated into a sound conceptual and technological framework able to deal with data coming from different sources, at varying scales, and changing in space and time. Especially managing the information life-cycle, the integration of heterogeneous information and the distributed versioning and release of geospatial information are important topics that need to become essential parts of modern exposure modelling solutions. The main purpose of this study is to provide a conceptual and technological framework to tackle the requirements implied by disaster risk management for describing exposed assets in space and time. An information life-cycle management solution is proposed, based on a relational spatio-temporal database model coupled with Git and GeoGig repositories for distributed versioning. Two application scenarios focusing on the modelling of residential building stocks are presented to show the capabilities of the implemented solution. A prototype database model is shared on GitHub along with the necessary scenario data.
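The core of the relational spatio-temporal model described above is tracking how an exposed asset changes through the phases of risk management. A minimal sketch of such a table with validity intervals and an "as-of" query; the column names and the SQLite backend are illustrative assumptions, not the study's actual schema, and the Git/GeoGig versioning layer is not reproduced here:

```python
# Minimal temporal building-exposure table with validity intervals.
# Schema and data are invented for illustration.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE building (
        building_id  INTEGER,
        occupancy    TEXT,     -- e.g. residential
        storeys      INTEGER,
        valid_from   TEXT,     -- ISO date when this version became true
        valid_to     TEXT      -- NULL = still current
    )""")
# A building surveyed pre-disaster, then re-surveyed after an extension:
con.execute("INSERT INTO building VALUES (1, 'residential', 2, '2015-01-01', '2017-06-01')")
con.execute("INSERT INTO building VALUES (1, 'residential', 3, '2017-06-01', NULL)")

# "As-of" query: the state of building 1 on a given date.
row = con.execute("""
    SELECT storeys FROM building
    WHERE building_id = 1
      AND valid_from <= :d
      AND (valid_to IS NULL OR valid_to > :d)
""", {"d": "2016-03-15"}).fetchone()
print(row[0])  # 2 (the pre-extension version is valid on that date)
```

Closing the old row's `valid_to` and inserting a new row on every change preserves the full history, which is what makes pre-disaster, response and recovery states queryable side by side.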

  15. A Support Database System for Integrated System Health Management (ISHM)

    Science.gov (United States)

    Schmalzel, John; Figueroa, Jorge F.; Turowski, Mark; Morris, John

    2007-01-01

    The development, deployment, operation and maintenance of Integrated Systems Health Management (ISHM) applications require the storage and processing of tremendous amounts of low-level data. This data must be shared in a secure and cost-effective manner between developers, and processed within several heterogeneous architectures. Modern database technology allows this data to be organized efficiently, while ensuring the integrity and security of the data. The extensibility and interoperability of the current database technologies also allows for the creation of an associated support database system. A support database system provides additional capabilities by building applications on top of the database structure. These applications can then be used to support the various technologies in an ISHM architecture. This presentation and paper propose a detailed structure and application description for a support database system, called the Health Assessment Database System (HADS). The HADS provides a shared context for organizing and distributing data as well as a definition of the applications that provide the required data-driven support to ISHM. This approach provides another powerful tool for ISHM developers, while also enabling novel functionality. This functionality includes: automated firmware updating and deployment, algorithm development assistance and electronic datasheet generation. The architecture for the HADS has been developed as part of the ISHM toolset at Stennis Space Center for rocket engine testing. A detailed implementation has begun for the Methane Thruster Testbed Project (MTTP) in order to assist in developing health assessment and anomaly detection algorithms for ISHM. The structure of this implementation is shown in Figure 1. The database structure consists of three primary components: the system hierarchy model, the historical data archive and the firmware codebase. The system hierarchy model replicates the physical relationships between

  16. Optimized Database of Higher Education Management Using Data Warehouse

    Directory of Open Access Journals (Sweden)

    Spits Warnars

    2010-04-01

Full Text Available The emergence of new higher education institutions has created competition in the higher education market, and a data warehouse can be used as an effective technology tool for increasing competitiveness in that market. A data warehouse produces reliable reports for the institution's high-level management in a short time, enabling faster and better decision making, not only on increasing the number of admitted students, but also on the possibility of finding extraordinary, unconventional funds for the institution. The efficiency comparison was based on the length and number of processed records, total processed bytes, number of processed tables, query run time and records produced on the OLTP database and the data warehouse. Efficiency percentages were measured using the formula for percentage increase, and the average efficiency percentage of 461,801.04% shows that using the data warehouse is far more powerful and efficient than using the OLTP database. The data warehouse was modelled as a hypercube built from a limited set of high-demand reports usually used by high-level management. Fields are inserted into every fact and dimension table to support constructive-merge loading, in which the ETL (Extraction, Transformation and Loading) process runs on the old and new files.
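The hypercube idea above reduces, in relational terms, to a star schema: a fact table keyed by dimension tables, pre-shaped for the management reports. A toy sketch with invented table and column names (the paper's actual schema is not reproduced):

```python
# Toy star schema for admissions reporting, in the spirit of the hypercube
# model described above. Tables, columns and figures are invented.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE dim_faculty (faculty_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE dim_year    (year_id INTEGER PRIMARY KEY, year INTEGER);
    CREATE TABLE fact_admission (
        faculty_id INTEGER, year_id INTEGER, admitted INTEGER);
    INSERT INTO dim_faculty VALUES (1, 'Engineering'), (2, 'Economics');
    INSERT INTO dim_year VALUES (1, 2009), (2, 2010);
    INSERT INTO fact_admission VALUES (1,1,120),(1,2,150),(2,1,80),(2,2,95);
""")
# One "slice" of the cube: admitted students per faculty per year.
rows = con.execute("""
    SELECT f.name, y.year, SUM(a.admitted)
    FROM fact_admission a
    JOIN dim_faculty f ON f.faculty_id = a.faculty_id
    JOIN dim_year y    ON y.year_id    = a.year_id
    GROUP BY f.name, y.year
    ORDER BY f.name, y.year
""").fetchall()
for name, year, total in rows:
    print(name, year, total)
```

Because the fact table stores pre-joined, report-oriented measures, such queries avoid the many-table joins an equivalent report would need against the normalized OLTP schema, which is the source of the efficiency gap the paper measures.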

  17. The Protein Information Management System (PiMS): a generic tool for any structural biology research laboratory.

    Science.gov (United States)

    Morris, Chris; Pajon, Anne; Griffiths, Susanne L; Daniel, Ed; Savitsky, Marc; Lin, Bill; Diprose, Jonathan M; da Silva, Alan Wilter; Pilicheva, Katya; Troshin, Peter; van Niekerk, Johannes; Isaacs, Neil; Naismith, James; Nave, Colin; Blake, Richard; Wilson, Keith S; Stuart, David I; Henrick, Kim; Esnouf, Robert M

    2011-04-01

    The techniques used in protein production and structural biology have been developing rapidly, but techniques for recording the laboratory information produced have not kept pace. One approach is the development of laboratory information-management systems (LIMS), which typically use a relational database schema to model and store results from a laboratory workflow. The underlying philosophy and implementation of the Protein Information Management System (PiMS), a LIMS development specifically targeted at the flexible and unpredictable workflows of protein-production research laboratories of all scales, is described. PiMS is a web-based Java application that uses either Postgres or Oracle as the underlying relational database-management system. PiMS is available under a free licence to all academic laboratories either for local installation or for use as a managed service.

  18. Towards an international authoritative system for coordination and management of a unique recommended k0-NAA database

    International Nuclear Information System (INIS)

    De Corte, F.

    2010-01-01

This paper describes the evolution of the database in k0-standardized neutron activation analysis (k0-NAA), ranging from its full supervision by the founders of the k0-method at the Institute for Nuclear Sciences (INW)/Gent and the Central Research Institute for Physics (KFKI)/Budapest (from about the mid 1970s up to the early 1990s), to the present situation (roughly speaking, starting with the first k0 Users Workshop in 1992) where an increasing number of researchers from institutes all over the world are reporting on their experimental work aiming at the improvement and extension of the existing database. Although these individual contributions are undoubtedly commendable, the resulting fragmentary data sets leave behind important questions with respect to interpretation, evaluation, integration and recommendation, as illustrated with the (extreme) example of 131Ba. This situation urgently calls for establishing and managing an international authoritative system for the coordination and quality control of a unique database with recommended data for k0-NAA, considering such parameters as accuracy, traceability and consistency. In the present paper, it is proposed to entrust this task to a standing 'Reference k0-Data Subcommittee' of the k0-ISC (k0 International Scientific Committee).

  19. Management research

    International Nuclear Information System (INIS)

    Berry, M.

    1988-01-01

The 1988 progress report of the Management Research Center (Polytechnic School, France) is presented. The Center's research programs concern the management of different organizations, such as industry, administrative systems, hospitals and cultural systems. The investigations aim at improving and deepening knowledge of new methods of analysis: the role of discourse, conflicts of logic; the development, symptoms and effects of crises; and the relationship between management practices and prevailing ideas or theories. The approach adopted involves accurate analysis of the essential management activities. The investigations carried out in 1988 are summarized, and the published papers, congress communications and theses are listed. [fr]

  20. USDA food and nutrient databases provide the infrastructure for food and nutrition research, policy, and practice.

    Science.gov (United States)

    Ahuja, Jaspreet K C; Moshfegh, Alanna J; Holden, Joanne M; Harris, Ellen

    2013-02-01

    The USDA food and nutrient databases provide the basic infrastructure for food and nutrition research, nutrition monitoring, policy, and dietary practice. They have had a long history that goes back to 1892 and are unique, as they are the only databases available in the public domain that perform these functions. There are 4 major food and nutrient databases released by the Beltsville Human Nutrition Research Center (BHNRC), part of the USDA's Agricultural Research Service. These include the USDA National Nutrient Database for Standard Reference, the Dietary Supplement Ingredient Database, the Food and Nutrient Database for Dietary Studies, and the USDA Food Patterns Equivalents Database. The users of the databases are diverse and include federal agencies, the food industry, health professionals, restaurants, software application developers, academia and research organizations, international organizations, and foreign governments, among others. Many of these users have partnered with BHNRC to leverage funds and/or scientific expertise to work toward common goals. The use of the databases has increased tremendously in the past few years, especially the breadth of uses. These new uses of the data are bound to increase with the increased availability of technology and public health emphasis on diet-related measures such as sodium and energy reduction. Hence, continued improvement of the databases is important, so that they can better address these challenges and provide reliable and accurate data.

  1. Databases and coordinated research projects at the IAEA on atomic processes in plasmas

    Energy Technology Data Exchange (ETDEWEB)

    Braams, Bastiaan J.; Chung, Hyun-Kyung [Nuclear Data Section, NAPC Division, International Atomic Energy Agency P. O. Box 100, Vienna International Centre, AT-1400 Vienna (Austria)

    2012-05-25

    The Atomic and Molecular Data Unit at the IAEA works with a network of national data centres to encourage and coordinate production and dissemination of fundamental data for atomic, molecular and plasma-material interaction (A+M/PMI) processes that are relevant to the realization of fusion energy. The Unit maintains numerical and bibliographical databases and has started a Wiki-style knowledge base. The Unit also contributes to A+M database interface standards and provides a search engine that offers a common interface to multiple numerical A+M/PMI databases. Coordinated Research Projects (CRPs) bring together fusion energy researchers and atomic, molecular and surface physicists for joint work towards the development of new data and new methods. The databases and current CRPs on A+M/PMI processes are briefly described here.

  2. Databases and coordinated research projects at the IAEA on atomic processes in plasmas

    Science.gov (United States)

    Braams, Bastiaan J.; Chung, Hyun-Kyung

    2012-05-01

    The Atomic and Molecular Data Unit at the IAEA works with a network of national data centres to encourage and coordinate production and dissemination of fundamental data for atomic, molecular and plasma-material interaction (A+M/PMI) processes that are relevant to the realization of fusion energy. The Unit maintains numerical and bibliographical databases and has started a Wiki-style knowledge base. The Unit also contributes to A+M database interface standards and provides a search engine that offers a common interface to multiple numerical A+M/PMI databases. Coordinated Research Projects (CRPs) bring together fusion energy researchers and atomic, molecular and surface physicists for joint work towards the development of new data and new methods. The databases and current CRPs on A+M/PMI processes are briefly described here.

  3. Databases and coordinated research projects at the IAEA on atomic processes in plasmas

    International Nuclear Information System (INIS)

    Braams, Bastiaan J.; Chung, Hyun-Kyung

    2012-01-01

    The Atomic and Molecular Data Unit at the IAEA works with a network of national data centres to encourage and coordinate production and dissemination of fundamental data for atomic, molecular and plasma-material interaction (A+M/PMI) processes that are relevant to the realization of fusion energy. The Unit maintains numerical and bibliographical databases and has started a Wiki-style knowledge base. The Unit also contributes to A+M database interface standards and provides a search engine that offers a common interface to multiple numerical A+M/PMI databases. Coordinated Research Projects (CRPs) bring together fusion energy researchers and atomic, molecular and surface physicists for joint work towards the development of new data and new methods. The databases and current CRPs on A+M/PMI processes are briefly described here.

  4. The Camden & Islington Research Database: Using electronic mental health records for research.

    Science.gov (United States)

    Werbeloff, Nomi; Osborn, David P J; Patel, Rashmi; Taylor, Matthew; Stewart, Robert; Broadbent, Matthew; Hayes, Joseph F

    2018-01-01

    Electronic health records (EHRs) are widely used in mental health services. Case registers using EHRs from secondary mental healthcare have the potential to deliver large-scale projects evaluating mental health outcomes in real-world clinical populations. We describe the Camden and Islington NHS Foundation Trust (C&I) Research Database which uses the Clinical Record Interactive Search (CRIS) tool to extract and de-identify routinely collected clinical information from a large UK provider of secondary mental healthcare, and demonstrate its capabilities to answer a clinical research question regarding time to diagnosis and treatment of bipolar disorder. The C&I Research Database contains records from 108,168 mental health patients, of which 23,538 were receiving active care. The characteristics of the patient population are compared to those of the catchment area, of London, and of England as a whole. The median time to diagnosis of bipolar disorder was 76 days (interquartile range: 17-391) and median time to treatment was 37 days (interquartile range: 5-194). Compulsory admission under the UK Mental Health Act was associated with shorter intervals to diagnosis and treatment. Prior diagnoses of other psychiatric disorders were associated with longer intervals to diagnosis, though prior diagnoses of schizophrenia and related disorders were associated with decreased time to treatment. The CRIS tool, developed by the South London and Maudsley NHS Foundation Trust (SLaM) Biomedical Research Centre (BRC), functioned very well at C&I. It is reassuring that data from different organizations deliver similar results, and that applications developed in one Trust can then be successfully deployed in another. The information can be retrieved in a quicker and more efficient fashion than more traditional methods of health research. The findings support the secondary use of EHRs for large-scale mental health research in naturalistic samples and settings investigated across large
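The interval statistics reported above (median and interquartile range of days to diagnosis) can be reproduced in a few lines on any extract of per-patient intervals. A sketch on invented data, not the C&I cohort:

```python
# Median and interquartile range of days-to-diagnosis, on invented values.
# The real study derives these intervals from de-identified EHR records.
import statistics

days_to_diagnosis = [5, 17, 40, 76, 120, 391, 800]
median = statistics.median(days_to_diagnosis)
# quantiles(n=4) returns the three quartile cut points (Q1, Q2, Q3).
q1, _, q3 = statistics.quantiles(days_to_diagnosis, n=4)
print(median, q1, q3)  # 76 17.0 391.0
```

Reporting the median with the interquartile range, as the abstract does, is the usual choice here because time-to-event data are heavily right-skewed.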

  5. ENHANCING SEISMIC CALIBRATION RESEARCH THROUGH SOFTWARE AUTOMATION AND SCIENTIFIC INFORMATION MANAGEMENT

    Energy Technology Data Exchange (ETDEWEB)

    Ruppert, S; Dodge, D A; Ganzberger, M D; Hauk, T F; Matzel, E M

    2008-07-03

    The National Nuclear Security Administration (NNSA) Ground-Based Nuclear Explosion Monitoring Research and Development (GNEMRD) Program at LLNL continues to make significant progress enhancing the process of deriving seismic calibrations and performing scientific integration, analysis, and information management with software automation tools. Our tools address the problems posed by the very large datasets and varied formats encountered during seismic calibration research. New information management and analysis tools have resulted in demonstrated gains in efficiency of producing scientific data products and improved accuracy of derived seismic calibrations. The foundation of a robust, efficient data development and processing environment comprises many components built upon engineered, versatile libraries. We incorporate proven industry 'best practices' throughout our code and apply source code and bug tracking management as well as automatic generation and execution of unit tests for our experimental, development and production lines. Significant software engineering and development efforts have produced an object-oriented framework that provides database-centric coordination between scientific tools, users, and data. Over half a billion parameters, signals, measurements, and metadata entries are stored in a relational database accessed by an extensive object-oriented, multi-technology software framework that includes stored procedures, real-time transactional database triggers and constraints, as well as coupled Java and C++ software libraries to handle the information interchange and validation requirements. Significant resources were applied to schema design to enable management of processing methods and station parameters, responses and metadata. This allowed for the development of merged ground-truth (GT) data sets compiled by the NNSA labs and AFTAC that include hundreds of thousands of events and tens of millions of arrivals. The
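The abstract mentions database-side validation via triggers and constraints. The following is a minimal, hypothetical sketch of that pattern (the actual GNEMRD schema is not published; table, column, and phase names here are invented for illustration), using SQLite so it is self-contained:

```python
import sqlite3

# Hypothetical sketch of database-side validation in the style the abstract
# describes: a CHECK constraint and a BEFORE INSERT trigger reject bad rows
# before they reach downstream calibration tools. Schema names are invented.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE arrival (
    arrival_id INTEGER PRIMARY KEY,
    station    TEXT NOT NULL,
    phase      TEXT NOT NULL,
    time_sec   REAL NOT NULL CHECK (time_sec >= 0)  -- constraint-level check
);
-- Trigger-level check: only known seismic phase labels are accepted.
CREATE TRIGGER arrival_phase_check
BEFORE INSERT ON arrival
WHEN NEW.phase NOT IN ('P', 'S', 'Pn', 'Sn', 'Lg')
BEGIN
    SELECT RAISE(ABORT, 'unknown phase label');
END;
""")
conn.execute("INSERT INTO arrival VALUES (1, 'MK31', 'Pn', 812.4)")  # accepted
try:
    conn.execute("INSERT INTO arrival VALUES (2, 'MK31', 'XX', 512.0)")
except sqlite3.IntegrityError as err:
    print("rejected:", err)  # trigger fires before the row is stored
print(conn.execute("SELECT COUNT(*) FROM arrival").fetchone()[0])
```

Pushing validation into the database, rather than into each client tool, is one way a multi-language (Java/C++) framework can share a single set of integrity rules.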

  6. Respiratory infections research in afghanistan: bibliometric analysis with the database pubmed

    International Nuclear Information System (INIS)

    Pilsczek, F.H.

    2015-01-01

    Infectious diseases research in a low-income country like Afghanistan is important. Methods: In this study the internet-based database PubMed was used for bibliometric analysis of infectious diseases research activity. Research publication entries in PubMed were analysed according to number of publications, topic, publication type, and country of investigators. Results: Between 2002 and 2011, 226 (77.7%) publications with the following research topics were identified: respiratory infections 3 (1.3%); parasites 8 (3.5%); diarrhoea 10 (4.4%); tuberculosis 10 (4.4%); human immunodeficiency virus (HIV) 11 (4.9%); multi-drug resistant bacteria (MDR) 18 (8.0%); polio 31 (13.7%); leishmania 31 (13.7%); malaria 46 (20.4%). Over the same period, 11 (4.9%) publications were basic science laboratory-based research studies, and 8 (3.5%) publications from Afghan institutions were identified. Conclusion: In conclusion, the internet-based database PubMed can be consulted to collect data for guiding infectious diseases research activity in low-income countries. The presented data suggest that infectious diseases research in Afghanistan is limited for respiratory infections, has few studies conducted by Afghan institutions, and has limited laboratory-based research contributions. (author)

  7. RESPIRATORY INFECTIONS RESEARCH IN AFGHANISTAN: BIBLIOMETRIC ANALYSIS WITH THE DATABASE PUBMED.

    Science.gov (United States)

    Pilsczek, Florian H

    2015-01-01

    Infectious diseases research in a low-income country like Afghanistan is important. In this study the internet-based database PubMed was used for bibliometric analysis of infectious diseases research activity. Research publication entries in PubMed were analysed according to number of publications, topic, publication type, and country of investigators. Between 2002 and 2011, 226 (77.7%) publications with the following research topics were identified: respiratory infections 3 (1.3%); parasites 8 (3.5%); diarrhoea 10 (4.4%); tuberculosis 10 (4.4%); human immunodeficiency virus (HIV) 11 (4.9%); multi-drug resistant bacteria (MDR) 18 (8.0%); polio 31 (13.7%); leishmania 31 (13.7%); malaria 46 (20.4%). Over the same period, 11 (4.9%) publications were basic science laboratory-based research studies, and 8 (3.5%) publications from Afghan institutions were identified. In conclusion, the internet-based database PubMed can be consulted to collect data for guiding infectious diseases research activity in low-income countries. The presented data suggest that infectious diseases research in Afghanistan is limited for respiratory infections, has few studies conducted by Afghan institutions, and has limited laboratory-based research contributions.

  8. How Database Management Systems Can Be Used To Evaluate Program Effectiveness in Small School Districts.

    Science.gov (United States)

    Hoffman, Tony

    Sophisticated database management systems (DBMS) for microcomputers are becoming increasingly easy to use, allowing small school districts to develop their own autonomous databases for tracking enrollment and student progress in special education. DBMS applications can be designed for maintenance by district personnel with little technical…

  9. A European Flood Database: facilitating comprehensive flood research beyond administrative boundaries

    Directory of Open Access Journals (Sweden)

    J. Hall

    2015-06-01

    Full Text Available The current work addresses one of the key building blocks towards an improved understanding of flood processes and associated changes in flood characteristics and regimes in Europe: the development of a comprehensive, extensive European flood database. The presented work results from ongoing cross-border research collaborations initiated with data collection and joint interpretation in mind. A detailed account of the current state, characteristics, and spatial and temporal coverage of the European Flood Database is presented. The hydrological data collection is still growing and currently consists of annual maximum and daily mean discharge series, of various record lengths, from over 7000 hydrometric stations and over 50 different data sources. The time series have been obtained from national and regional data sources in a collaborative effort under a joint European flood research agreement based on the exchange of data, models and expertise, and from existing international data collections and open-source websites. These ongoing efforts are contributing to advancing the understanding of regional flood processes beyond individual country boundaries and to more coherent flood research in Europe.
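The abstract mentions that the database holds both annual maximum and daily mean discharge series. As an illustrative sketch only (the station data and field names here are invented, not taken from the European Flood Database), an annual-maximum series can be derived from daily means like this:

```python
from collections import defaultdict
from datetime import date

# Invented daily mean discharge record (m^3/s) for one hypothetical station.
daily_mean = {
    date(2001, 3, 14): 182.0, date(2001, 8, 2): 415.5,
    date(2002, 1, 9): 530.2,  date(2002, 6, 21): 88.7,
}

def annual_maxima(series):
    """Collapse a {date: discharge} record into a {year: max discharge} series."""
    maxima = defaultdict(float)
    for day, q in series.items():
        maxima[day.year] = max(maxima[day.year], q)
    return dict(maxima)

print(annual_maxima(daily_mean))  # {2001: 415.5, 2002: 530.2}
```

In practice an annual maximum taken from daily means can underestimate the instantaneous flood peak, which is one reason such databases record the series type alongside the values.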

  10. Microcomputer Database Management Systems that Interface with Online Public Access Catalogs.

    Science.gov (United States)

    Rice, James

    1988-01-01

    Describes a study that assessed the availability and use of microcomputer database management interfaces to online public access catalogs. The software capabilities needed to effect such an interface are identified, and available software packages are evaluated by these criteria. A directory of software vendors is provided. (4 notes with…

  11. A survey of the use of database management systems in accelerator projects

    CERN Document Server

    Poole, John

    1995-01-01

    The International Accelerator Database Group (IADBG) was set up in 1994 to bring together the people who are working with databases in accelerator laboratories so that they can exchange information and experience. The group now has members from more than 20 institutes from all around the world, representing nearly double this number of projects. This paper is based on the information gathered by the IADBG and describes why commercial DataBase Management Systems (DBMS) are being used in accelerator projects and what they are being used for. Initially introduced to handle equipment builders' data, commercial DBMS are now being used in almost all areas of accelerators from on-line control to personnel data. A variety of commercial systems are being used in conjunction with a diverse selection of application software for data maintenance/manipulation and controls. This paper reviews the database activities known to IADBG.

  12. Database and Analytical Tool Development for the Management of Data Derived from US DOE (NETL) Funded Fine Particulate (PM2.5) Research

    Energy Technology Data Exchange (ETDEWEB)

    Robinson Khosah

    2007-07-31

    Advanced Technology Systems, Inc. (ATS) was contracted by the U. S. Department of Energy's National Energy Technology Laboratory (DOE-NETL) to develop a state-of-the-art, scalable and robust web-accessible database application to manage the extensive data sets resulting from the DOE-NETL-sponsored ambient air monitoring programs in the upper Ohio River valley region. The data management system was designed to include a web-based user interface that will allow easy access to the data by the scientific community, policy- and decision-makers, and other interested stakeholders, while providing detailed information on sampling, analytical and quality control parameters. In addition, the system will provide graphical analytical tools for displaying, analyzing and interpreting the air quality data. The system will also provide multiple report generation capabilities and easy-to-understand visualization formats that can be utilized by the media and public outreach/educational institutions. The project was conducted in two phases. Phase One included the following tasks: (1) data inventory/benchmarking, including the establishment of an external stakeholder group; (2) development of a data management system; (3) population of the database; (4) development of a web-based data retrieval system, and (5) establishment of an internal quality assurance/quality control system on data management. Phase Two involved the development of a platform for on-line data analysis. Phase Two included the following tasks: (1) development of a sponsor and stakeholder/user website with extensive online analytical tools; (2) development of a public website; (3) incorporation of an extensive online help system into each website; and (4) incorporation of a graphical representation (mapping) system into each website. The project is now technically completed.

  13. Modelling a critical infrastructure-driven spatial database for proactive disaster management: A developing country context

    Directory of Open Access Journals (Sweden)

    David O. Baloye

    2016-04-01

    Full Text Available The understanding and institutionalisation of the seamless link between urban critical infrastructure and disaster management has greatly helped the developed world to establish effective disaster management processes. However, this link is conspicuously missing in developing countries, where disaster management has been more reactive than proactive. The consequence of this is typified by poor response times and the uncoordinated way in which disasters and emergency situations are handled. As is the case with many Nigerian cities, the challenges of urban development in the city of Abeokuta have limited the effectiveness of disaster and emergency first responders and managers. Using geospatial techniques, the study attempted to design and deploy a spatial database running a web-based information system to track the characteristics and distribution of critical infrastructure for effective use during disasters and emergencies, with the purpose of proactively improving disaster and emergency management processes in Abeokuta. Keywords: Disaster Management; Emergency; Critical Infrastructure; Geospatial Database; Developing Countries; Nigeria

  14. National Levee Database: monitoring, vulnerability assessment and management in Italy

    Science.gov (United States)

    Barbetta, Silvia; Camici, Stefania; Maccioni, Pamela; Moramarco, Tommaso

    2015-04-01

    A properly designed and constructed levee system can often be an effective means of repelling floodwaters, providing a barrier against inundation that protects urbanized and industrial areas. However, the delineation of flood-prone areas and the related hydraulic hazard mapping accounting for uncertainty (Apel et al., 2008) are usually developed with scant consideration of possible levee failures along river channels (Mazzoleni et al., 2014). Indeed, it is well known that flooding is frequently the result of levee failures, which can be triggered by several factors: (1) overtopping, (2) scouring of the foundation, (3) seepage/piping through the levee body or foundation, and (4) sliding of the foundation. Among these failure mechanisms, which are influenced by the levee's geometrical configuration, hydraulic conditions (e.g. river level and seepage), and material properties (e.g. permeability, cohesion, porosity, compaction), piping caused by seepage (ICOLD, http://www.icold-cigb.org) is considered one of the most dominant (Colleselli F., 1994; Wallingford H. R., 2003). The difficulty of estimating the hydraulic parameters needed to properly describe the seepage line within the levee body and foundation means that the routing of a critical flood wave is typically studied by assuming that the levee system remains undamaged during the flood event. In this context, implementing and making operational a National Levee Database (NLD), effectively structured and continuously updated, becomes fundamental to having a searchable inventory of levee information available as a key resource supporting decisions and actions affecting levee safety. The ItaliaN LEvee Database (INLED) has recently been developed by the Research Institute for Geo-Hydrological Protection (IRPI) for the Civil Protection Department of the Presidency of the Council of Ministers. INLED has the main focus of collecting comprehensive information about

  15. Open-access MIMIC-II database for intensive care research.

    Science.gov (United States)

    Lee, Joon; Scott, Daniel J; Villarroel, Mauricio; Clifford, Gari D; Saeed, Mohammed; Mark, Roger G

    2011-01-01

    The critical state of intensive care unit (ICU) patients demands close monitoring, and as a result a large volume of multi-parameter data is collected continuously. This represents a unique opportunity for researchers interested in clinical data mining. We sought to foster a more transparent and efficient intensive care research community by building a publicly available ICU database, namely Multiparameter Intelligent Monitoring in Intensive Care II (MIMIC-II). The data harnessed in MIMIC-II were collected from the ICUs of Beth Israel Deaconess Medical Center from 2001 to 2008 and represent 26,870 adult hospital admissions (version 2.6). MIMIC-II consists of two major components: clinical data and physiological waveforms. The clinical data, which include patient demographics, intravenous medication drip rates, and laboratory test results, were organized into a relational database. The physiological waveforms, including 125 Hz signals recorded at bedside and corresponding vital signs, were stored in an open-source format. MIMIC-II data were also deidentified in order to remove protected health information. Any interested researcher can gain access to MIMIC-II free of charge after signing a data use agreement and completing human subjects training. MIMIC-II can support a wide variety of research studies, ranging from the development of clinical decision support algorithms to retrospective clinical studies. We anticipate that MIMIC-II will be an invaluable resource for intensive care research by stimulating fair comparisons among different studies.
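The abstract describes MIMIC-II's clinical data as a relational database supporting retrospective studies. The following is a deliberately simplified sketch of the kind of cross-table query that design enables; the two-table schema and values here are invented for illustration and are not the actual MIMIC-II schema:

```python
import sqlite3

# Toy stand-in for a clinical relational database: admissions joined to
# repeated laboratory measurements. Table and column names are illustrative.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE admissions (hadm_id INTEGER PRIMARY KEY, age INTEGER);
CREATE TABLE labevents  (hadm_id INTEGER, lab TEXT, value REAL);
INSERT INTO admissions VALUES (101, 67), (102, 54);
INSERT INTO labevents  VALUES (101, 'lactate', 4.5), (101, 'lactate', 2.0),
                              (102, 'lactate', 1.25);
""")
# Mean lactate per admission: a typical building block for a retrospective
# outcome study or a clinical decision support feature.
rows = db.execute("""
    SELECT a.hadm_id, a.age, AVG(l.value)
    FROM admissions a JOIN labevents l ON a.hadm_id = l.hadm_id
    WHERE l.lab = 'lactate'
    GROUP BY a.hadm_id
    ORDER BY a.hadm_id
""").fetchall()
print(rows)  # [(101, 67, 3.25), (102, 54, 1.25)]
```

The value of the relational organization is precisely this: demographics, medications, and laboratory results collected by different hospital systems can be joined on shared identifiers in a single query.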

  16. Database for landscape-scale carbon monitoring sites

    Science.gov (United States)

    Jason A. Cole; Kristopher D. Johnson; Richard A. Birdsey; Yude Pan; Craig A. Wayson; Kevin McCullough; Coeli M. Hoover; David Y. Hollinger; John B. Bradford; Michael G. Ryan; Randall K. Kolka; Peter Wieshampel; Kenneth L. Clark; Nicholas S. Skowronski; John Hom; Scott V. Ollinger; Steven G. McNulty; Michael J. Gavazzi

    2013-01-01

    This report describes the database used to compile, store, and manage intensive ground-based biometric data collected at research sites in Colorado, Minnesota, New Hampshire, New Jersey, North Carolina, and Wyoming, supporting research activities of the U.S. North American Carbon Program (NACP). This report also provides details of each site, the sampling design and...

  17. Solving Relational Database Problems with ORDBMS in an Advanced Database Course

    Science.gov (United States)

    Wang, Ming

    2011-01-01

    This paper introduces how to use the object-relational database management system (ORDBMS) to solve relational database (RDB) problems in an advanced database course. The purpose of the paper is to provide a guideline for database instructors who desire to incorporate the ORDB technology in their traditional database courses. The paper presents…

  18. ALARA database value in future outage work planning and dose management

    International Nuclear Information System (INIS)

    Miller, D.W.; Green, W.H.

    1995-01-01

    An ALARA database encompassing plant-specific, job-level duration and man-rem information over three refueling outages represents an invaluable tool for the outage work planner and ALARA engineer. This paper describes dose-management trends that emerge from analysis of three refueling outages at Clinton Power Station. The conclusions reached show that hard data from a relational database dose-tracking system are a valuable tool for planning future outage work. The system's ability to identify key problem areas during a refueling outage is improving as more comparative outage data become available. Trends over the three-outage period are identified in the categories of number and type of radiation work permits implemented, duration of jobs, projected vs. actual dose rates in work areas, and accuracy of outage person-rem projections. The value of the database in projecting one- and five-year station person-rem estimates is discussed.
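One of the trend categories the paper tracks is projected vs. actual dose. A minimal sketch of that comparison follows; the job categories and person-rem figures are invented, not taken from the Clinton Power Station data:

```python
# Invented dose-tracking records in the spirit of the paper's comparison of
# projected vs. actual person-rem per refueling outage.
records = [
    {"outage": "RF-1", "job": "valve maintenance", "projected_rem": 1.80, "actual_rem": 2.10},
    {"outage": "RF-1", "job": "scaffolding",       "projected_rem": 0.90, "actual_rem": 0.75},
    {"outage": "RF-2", "job": "valve maintenance", "projected_rem": 2.00, "actual_rem": 1.95},
]

def projection_accuracy(rows):
    """Ratio of actual to projected person-rem per outage (1.0 = on target)."""
    totals = {}
    for r in rows:
        proj, act = totals.setdefault(r["outage"], [0.0, 0.0])
        totals[r["outage"]] = [proj + r["projected_rem"], act + r["actual_rem"]]
    return {k: round(act / proj, 3) for k, (proj, act) in totals.items()}

print(projection_accuracy(records))  # {'RF-1': 1.056, 'RF-2': 0.975}
```

Tracking this ratio per job category across successive outages is what lets a planner see whether projections are systematically optimistic for particular kinds of work.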

  19. ALARA database value in future outage work planning and dose management

    Energy Technology Data Exchange (ETDEWEB)

    Miller, D.W.; Green, W.H. [Clinton Power Station Illinois Power Co., IL (United States)

    1995-03-01

    An ALARA database encompassing plant-specific, job-level duration and man-rem information over three refueling outages represents an invaluable tool for the outage work planner and ALARA engineer. This paper describes dose-management trends that emerge from analysis of three refueling outages at Clinton Power Station. The conclusions reached show that hard data from a relational database dose-tracking system are a valuable tool for planning future outage work. The system's ability to identify key problem areas during a refueling outage is improving as more comparative outage data become available. Trends over the three-outage period are identified in the categories of number and type of radiation work permits implemented, duration of jobs, projected vs. actual dose rates in work areas, and accuracy of outage person-rem projections. The value of the database in projecting one- and five-year station person-rem estimates is discussed.

  20. Research on technological assessment for ageing management of commercial reprocessing plant

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2013-08-15

    The research program was carried out from FY2006 to FY2011 to provide a technical database and review manuals for evaluating the adequacy of the Tokai plant's Ageing Management report. In the near future, the regulator will carry out the evaluation of the Rokkasho plant, a commercial-size plant designed mainly with technologies from Britain and France. This plant differs from the Tokai plant in its corrosion-prevention techniques. The purpose of the present research program is to provide a supplementary database and improve the review manual in order to evaluate the adequacy of the Rokkasho plant's Ageing Management report. On the basis of the operational experience of foreign plants and related previous studies, we selected the three experimental subjects on ageing phenomena listed below: the effect of solid deposits adhering to the metal, Np(VI) and nitrous ions in solution on the corrosion of stainless steel reduced-pressure evaporators in boiling nitric acid solutions; the conditions of initiation of stress corrosion cracking of zirconium components in boiling nitric acid solutions including highly concentrated plutonium; and the conditions of initiation of hydrogen embrittlement stress corrosion cracking of pipe fittings composed of zirconium, tantalum and stainless steel in highly radioactive nitric acid solutions. (author)

  1. DEVELOPMENT OF A METADATA MANAGEMENT SYSTEM FOR AN INTERDISCIPLINARY RESEARCH PROJECT

    Directory of Open Access Journals (Sweden)

    C. Curdt

    2012-07-01

    Full Text Available In every interdisciplinary, long-term research project it is essential to manage and archive all heterogeneous research data produced by the project participants over the funding period. This has to include sustainable storage, description with metadata, easy and secure provision, backup, and visualisation of all data. To ensure the accurate description of all project data with corresponding metadata, the design and implementation of a metadata management system is a significant duty. The sustainable use and searchability of all research results during and after the project therefore depend on the implementation of a metadata management system. This paper describes the practical experiences gained during the development of a scientific research data management system (called the TR32DB), including the corresponding metadata management system, for the multidisciplinary research project Transregional Collaborative Research Centre 32 (CRC/TR32), 'Patterns in Soil-Vegetation-Atmosphere Systems'. The entire system was developed according to the requirements of the funding agency, the user and project requirements, and recent standards and principles. The TR32DB is basically a combination of data storage, database, and web interface. The metadata management system was designed, realized, and implemented to describe and access all project data via accurate metadata. Since the quantity and kind of descriptive metadata depend on the kind of data, a user-friendly multi-level approach was chosen to cover these requirements: the self-developed CRC/TR32 metadata framework, a combination of general, CRC/TR32-specific, and data-type-specific properties.
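The multi-level idea (general properties for every dataset, plus project-specific and data-type-specific properties layered on top) can be sketched as a simple template composition. The property names below are invented for illustration; the actual TR32DB metadata schema is not reproduced here:

```python
# Sketch of a multi-level metadata framework: each level contributes fields,
# and the data type selects its extra, type-specific properties.
GENERAL = {"creator": None, "date": None, "title": None, "rights": "CRC/TR32"}
PROJECT = {"crc_subproject": None, "funding_phase": None}
BY_TYPE = {
    "raster":  {"resolution_m": None, "crs": None},
    "tabular": {"column_units": None},
}

def metadata_template(data_type):
    """Compose the metadata fields a curator must fill for one dataset."""
    template = dict(GENERAL)                 # level 1: general properties
    template.update(PROJECT)                 # level 2: project-specific
    template.update(BY_TYPE.get(data_type, {}))  # level 3: data-type-specific
    return template

print(sorted(metadata_template("raster")))
```

The benefit of layering is that a curator entering, say, tabular data is never asked for raster-only fields, while the general level still guarantees a common searchable core across all datasets.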

  2. Software configuration management plan for the TWRS controlled baseline database system [TCBD

    International Nuclear Information System (INIS)

    Spencer, S.G.

    1998-01-01

    LHMC, TWRS Business Management Organization (BMO) is designated as system owner, operator, and maintenance authority. The TWRS BMO identified the need for the TCBD. The TWRS BMO users have established all requirements for the database and are responsible for maintaining database integrity and control (after the interface data has been received). Initial interface data control and integrity are maintained through functional and administrative processes and are the responsibility of the database owners who provide the data. The specific groups within the TWRS BMO affected by this plan are the Financial Management and TWRS Management Support Project, Master Planning, and Financial Control Integration and Reporting. The interfaces between these organizations follow the normal line-management chain of command. The Master Planning Group is assigned responsibility for continuing development and maintenance of the TCBD. This group maintains information that includes identification of requirements and changes to those requirements in a TCBD project file. It is responsible for the issuance, maintenance, and change authority of this SCMP. LHMC, TWRS TCBD users are designated as providing the project's requirement changes for implementation and also testing of the TCBD during development. The Master Planning Group coordinates and monitors the users' requests for system requirements (new and existing) as well as beta and acceptance testing. Users are those individuals and organizations needing data or information from the TCBD and having both a need-to-know and the proper training and authority to access the database. Each user or user organization is required to comply with the established requirements and procedures governing the TCBD. Lockheed Martin Services, Inc. (LMSI) is designated the TCBD developer, maintainer, and custodian until acceptance and process testing of the system has been completed via the TWRS BMO. Once this occurs, the TCBD will be completed and

  3. Software configuration management plan for the Hanford site technical database

    International Nuclear Information System (INIS)

    GRAVES, N.J.

    1999-01-01

    The Hanford Site Technical Database (HSTD) is used as the repository/source for the technical requirements baseline and programmatic data input via the Hanford Site and major Hanford Project Systems Engineering (SE) activities. The Hanford Site SE effort has created an integrated technical baseline for the Hanford Site that supports SE processes at the Site and project levels, which is captured in the HSTD. The HSTD has been implemented in the Ascent Logic Corporation (ALC) Commercial Off-The-Shelf (COTS) package referred to as the Requirements Driven Design (RDD) software. This Software Configuration Management Plan (SCMP) provides a process and means to control and manage software upgrades to the HSTD system

  4. Supporting Telecom Business Processes by means of Workflow Management and Federated Databases

    NARCIS (Netherlands)

    Nijenhuis, Wim; Jonker, Willem; Grefen, P.W.P.J.

    This report addresses the issues related to the use of workflow management systems and federated databases to support business processes that operate on large and heterogeneous collections of autonomous information systems. We discuss how they can enhance the overall IT-architecture. Starting from

  5. A geospatial database model for the management of remote sensing datasets at multiple spectral, spatial, and temporal scales

    Science.gov (United States)

    Ifimov, Gabriela; Pigeau, Grace; Arroyo-Mora, J. Pablo; Soffer, Raymond; Leblanc, George

    2017-10-01

    In this study the development and implementation of a geospatial database model for the management of multiscale datasets encompassing airborne imagery and associated metadata is presented. To develop the multi-source geospatial database we have used a Relational Database Management System (RDBMS) on a Structured Query Language (SQL) server, which was then integrated into ArcGIS and implemented as a geodatabase. The acquired datasets were compiled, standardized, and integrated into the RDBMS, where logical associations link different types of information (e.g. location, date, and instrument). Airborne data at different processing levels (digital numbers through geocorrected reflectance) were implemented in the geospatial database, where the datasets are linked spatially and temporally. An example dataset consisting of airborne hyperspectral imagery, collected for inter- and intra-annual vegetation characterization and detection of potential hydrocarbon seepage events over pipeline areas, is presented. Our work provides a model for the management of airborne imagery, which is a challenging aspect of data management in remote sensing, especially when large volumes of data are collected.
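The relational layout the abstract describes (instrument, acquisition, and imagery records linked by shared keys so datasets can be queried spatially and temporally) can be sketched as follows. This is a hypothetical, reduced schema for illustration, using SQLite rather than the SQL Server/ArcGIS stack the study used:

```python
import sqlite3

# Reduced, invented version of the relational layout: imagery rows reference
# an acquisition (date + site), which in turn references an instrument.
gdb = sqlite3.connect(":memory:")
gdb.executescript("""
CREATE TABLE instrument  (instr_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE acquisition (acq_id INTEGER PRIMARY KEY,
                          instr_id INTEGER REFERENCES instrument,
                          acq_date TEXT, site TEXT);
CREATE TABLE imagery     (img_id INTEGER PRIMARY KEY,
                          acq_id INTEGER REFERENCES acquisition,
                          level TEXT);
INSERT INTO instrument  VALUES (1, 'hyperspectral sensor');
INSERT INTO acquisition VALUES (10, 1, '2016-07-04', 'pipeline corridor A');
INSERT INTO imagery     VALUES (100, 10, 'L1 radiance'),
                               (101, 10, 'L2 reflectance');
""")
# All processing levels held for one site and date, found via the links:
levels = [r[0] for r in gdb.execute("""
    SELECT i.level FROM imagery i
    JOIN acquisition a ON i.acq_id = a.acq_id
    WHERE a.site = 'pipeline corridor A' AND a.acq_date = '2016-07-04'
    ORDER BY i.img_id
""")]
print(levels)  # ['L1 radiance', 'L2 reflectance']
```

Because every processing level of an image points back to the same acquisition record, a temporal query ("all flights over this site") retrieves raw and geocorrected products together without duplicating the flight metadata.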

  6. Database and Analytical Tool Development for the Management of Data Derived from US DOE (NETL) Funded Fine Particulate (PM2.5) Research

    Energy Technology Data Exchange (ETDEWEB)

    Robinson P. Khosah; Frank T. Alex

    2007-02-11

    Advanced Technology Systems, Inc. (ATS) was contracted by the U. S. Department of Energy's National Energy Technology Laboratory (DOE-NETL) to develop a state-of-the-art, scalable and robust web-accessible database application to manage the extensive data sets resulting from the DOE-NETL-sponsored ambient air monitoring programs in the upper Ohio River valley region. The data management system was designed to include a web-based user interface that will allow easy access to the data by the scientific community, policy- and decision-makers, and other interested stakeholders, while providing detailed information on sampling, analytical and quality control parameters. In addition, the system will provide graphical analytical tools for displaying, analyzing and interpreting the air quality data. The system will also provide multiple report generation capabilities and easy-to-understand visualization formats that can be utilized by the media and public outreach/educational institutions. The project is being conducted in two phases. Phase One includes the following tasks: (1) data inventory/benchmarking, including the establishment of an external stakeholder group; (2) development of a data management system; (3) population of the database; (4) development of a web-based data retrieval system, and (5) establishment of an internal quality assurance/quality control system on data management. Phase Two, which is currently underway, involves the development of a platform for on-line data analysis. Phase Two includes the following tasks: (1) development of a sponsor and stakeholder/user website with extensive online analytical tools; (2) development of a public website; (3) incorporation of an extensive online help system into each website; and (4) incorporation of a graphical representation (mapping) system into each website. The project is now into its forty-eighth month of development activities.

  7. DATABASE AND ANALYTICAL TOOL DEVELOPMENT FOR THE MANAGEMENT OF DATA DERIVED FROM US DOE (NETL) FUNDED FINE PARTICULATE (PM2.5) RESEARCH

    Energy Technology Data Exchange (ETDEWEB)

    Robinson P. Khosah; Charles G. Crawford

    2003-03-13

    Advanced Technology Systems, Inc. (ATS) was contracted by the U. S. Department of Energy's National Energy Technology Laboratory (DOE-NETL) to develop a state-of-the-art, scalable and robust web-accessible database application to manage the extensive data sets resulting from the DOE-NETL-sponsored ambient air monitoring programs in the upper Ohio River valley region. The data management system was designed to include a web-based user interface that will allow easy access to the data by the scientific community, policy- and decision-makers, and other interested stakeholders, while providing detailed information on sampling, analytical and quality control parameters. In addition, the system will provide graphical analytical tools for displaying, analyzing and interpreting the air quality data. The system will also provide multiple report generation capabilities and easy-to-understand visualization formats that can be utilized by the media and public outreach/educational institutions. The project is being conducted in two phases. Phase 1, which is currently in progress and will take twelve months to complete, will include the following tasks: (1) data inventory/benchmarking, including the establishment of an external stakeholder group; (2) development of a data management system; (3) population of the database; (4) development of a web-based data retrieval system, and (5) establishment of an internal quality assurance/quality control system on data management. In Phase 2, which will be completed in the second year of the project, a platform for on-line data analysis will be developed. Phase 2 will include the following tasks: (1) development of a sponsor and stakeholder/user website with extensive online analytical tools; (2) development of a public website; (3) incorporation of an extensive online help system into each website; and (4) incorporation of a graphical representation (mapping) system into each website. The project is now into its sixth month of Phase

  8. The Protein Information Management System (PiMS): a generic tool for any structural biology research laboratory

    International Nuclear Information System (INIS)

    Morris, Chris; Pajon, Anne; Griffiths, Susanne L.; Daniel, Ed; Savitsky, Marc; Lin, Bill; Diprose, Jonathan M.; Wilter da Silva, Alan; Pilicheva, Katya; Troshin, Peter; Niekerk, Johannes van; Isaacs, Neil; Naismith, James; Nave, Colin; Blake, Richard; Wilson, Keith S.; Stuart, David I.; Henrick, Kim; Esnouf, Robert M.

    2011-01-01

    The Protein Information Management System (PiMS) is described together with a discussion of how its features make it well suited to laboratories of all sizes. The techniques used in protein production and structural biology have been developing rapidly, but techniques for recording the laboratory information produced have not kept pace. One approach is the development of laboratory information-management systems (LIMS), which typically use a relational database schema to model and store results from a laboratory workflow. The underlying philosophy and implementation of the Protein Information Management System (PiMS), a LIMS development specifically targeted at the flexible and unpredictable workflows of protein-production research laboratories of all scales, is described. PiMS is a web-based Java application that uses either Postgres or Oracle as the underlying relational database-management system. PiMS is available under a free licence to all academic laboratories either for local installation or for use as a managed service

  9. The Protein Information Management System (PiMS): a generic tool for any structural biology research laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Morris, Chris [STFC Daresbury Laboratory, Warrington WA4 4AD (United Kingdom); Pajon, Anne [Wellcome Trust Genome Campus, Hinxton CB10 1SD (United Kingdom); Griffiths, Susanne L. [University of York, Heslington, York YO10 5DD (United Kingdom); Daniel, Ed [STFC Daresbury Laboratory, Warrington WA4 4AD (United Kingdom); Savitsky, Marc [University of Oxford, Roosevelt Drive, Oxford OX3 7BN (United Kingdom); Lin, Bill [STFC Daresbury Laboratory, Warrington WA4 4AD (United Kingdom); Diprose, Jonathan M. [University of Oxford, Roosevelt Drive, Oxford OX3 7BN (United Kingdom); Wilter da Silva, Alan [Wellcome Trust Genome Campus, Hinxton CB10 1SD (United Kingdom); Pilicheva, Katya [University of Oxford, Roosevelt Drive, Oxford OX3 7BN (United Kingdom); Troshin, Peter [STFC Daresbury Laboratory, Warrington WA4 4AD (United Kingdom); Niekerk, Johannes van [University of Dundee, Dundee DD1 5EH, Scotland (United Kingdom); Isaacs, Neil [University of Glasgow, Glasgow G12 8QQ, Scotland (United Kingdom); Naismith, James [University of St Andrews, St Andrews, Fife KY16 9ST, Scotland (United Kingdom); Nave, Colin; Blake, Richard [STFC Daresbury Laboratory, Warrington WA4 4AD (United Kingdom); Wilson, Keith S. [University of York, Heslington, York YO10 5DD (United Kingdom); Stuart, David I. [University of Oxford, Roosevelt Drive, Oxford OX3 7BN (United Kingdom); Henrick, Kim [Wellcome Trust Genome Campus, Hinxton CB10 1SD (United Kingdom); Esnouf, Robert M., E-mail: robert@strubi.ox.ac.uk [University of Oxford, Roosevelt Drive, Oxford OX3 7BN (United Kingdom); STFC Daresbury Laboratory, Warrington WA4 4AD (United Kingdom)

    2011-04-01

    The Protein Information Management System (PiMS) is described together with a discussion of how its features make it well suited to laboratories of all sizes. The techniques used in protein production and structural biology have been developing rapidly, but techniques for recording the laboratory information produced have not kept pace. One approach is the development of laboratory information-management systems (LIMS), which typically use a relational database schema to model and store results from a laboratory workflow. The underlying philosophy and implementation of the Protein Information Management System (PiMS), a LIMS development specifically targeted at the flexible and unpredictable workflows of protein-production research laboratories of all scales, is described. PiMS is a web-based Java application that uses either Postgres or Oracle as the underlying relational database-management system. PiMS is available under a free licence to all academic laboratories either for local installation or for use as a managed service.

  10. Design and Implementation of a Research Data Management System: The CRC/TR32 Project Database (TR32DB)

    OpenAIRE

    Curdt, Constanze

    2014-01-01

    Research data management (RDM) includes all processes and measures which ensure that research data are well-organised, documented, preserved, stored, backed up, accessible, available, and re-usable. Corresponding RDM systems or repositories form the technical framework to support the collection, accurate documentation, storage, back-up, sharing, and provision of research data, which are created in a specific environment, like a research group or institution. The required measures for the impl...

  11. Database Objects vs Files: Evaluation of alternative strategies for managing large remote sensing data

    Science.gov (United States)

    Baru, Chaitan; Nandigam, Viswanath; Krishnan, Sriram

    2010-05-01

Increasingly, the geoscience user community expects modern IT capabilities to be available in service of their research and education activities, including the ability to easily access and process large remote sensing datasets via online portals such as GEON (www.geongrid.org) and OpenTopography (opentopography.org). However, serving such datasets via online data portals presents a number of challenges. In this talk, we will evaluate the pros and cons of alternative storage strategies for management and processing of such datasets using binary large object implementations (BLOBs) in database systems versus implementation in Hadoop files using the Hadoop Distributed File System (HDFS). The storage and I/O requirements for providing online access to large datasets dictate the need for declustering data across multiple disks, for capacity as well as bandwidth and response-time performance. This requires partitioning larger files into a set of smaller files, and is accompanied by the concomitant requirement of managing large numbers of files. Storing these sub-files as BLOBs in a shared-nothing database implemented across a cluster provides the advantage that all the distributed storage management is done by the DBMS. Furthermore, subsetting and processing routines can be implemented as user-defined functions (UDFs) on these BLOBs and would run in parallel across the set of nodes in the cluster. On the other hand, there are both storage overheads and constraints, and software licensing dependencies created by such an implementation. Another approach is to store the files in an external filesystem with pointers to them from within database tables. The filesystem may be a regular UNIX filesystem, a parallel filesystem, or HDFS. In the HDFS case, HDFS would provide the file management capability, while the subsetting and processing routines would be implemented as Hadoop programs using the MapReduce model. Hadoop and its related software libraries are freely available
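
    The BLOB-based strategy the abstract describes — declustering a large object into fixed-size chunks stored as database rows, with subsetting expressed as a query — can be sketched with SQLite standing in for the shared-nothing DBMS. This is a simplification under stated assumptions: the table layout, function names, and tiny chunk size are hypothetical, not taken from the systems discussed.

    ```python
    import sqlite3

    CHUNK = 4  # bytes per chunk; a real deployment would use multi-megabyte chunks

    def store_chunks(con, name, data):
        """Decluster one large object into fixed-size chunks stored as BLOB rows."""
        con.execute("CREATE TABLE IF NOT EXISTS chunks (name TEXT, seq INTEGER, data BLOB)")
        for seq, off in enumerate(range(0, len(data), CHUNK)):
            con.execute("INSERT INTO chunks VALUES (?, ?, ?)",
                        (name, seq, data[off:off + CHUNK]))

    def read_subset(con, name, first_byte, last_byte):
        """Subsetting as a query: fetch only the chunks that cover a byte range."""
        lo, hi = first_byte // CHUNK, last_byte // CHUNK
        rows = con.execute(
            "SELECT data FROM chunks WHERE name = ? AND seq BETWEEN ? AND ? ORDER BY seq",
            (name, lo, hi)).fetchall()
        joined = b"".join(r[0] for r in rows)
        base = lo * CHUNK
        return joined[first_byte - base:last_byte - base + 1]

    con = sqlite3.connect(":memory:")
    store_chunks(con, "dem.tif", b"ABCDEFGHIJ")
    subset = read_subset(con, "dem.tif", 3, 6)  # touches only chunks 0 and 1
    ```

    In the file-based alternative, the `chunks` table would hold paths instead of BLOBs and the join would happen in MapReduce code rather than in SQL.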

  12. Planning the future of JPL's management and administrative support systems around an integrated database

    Science.gov (United States)

    Ebersole, M. M.

    1983-01-01

JPL's management and administrative support systems have been developed piecemeal and without consistency in design approach over the past twenty years. These systems are now proving inadequate to support effective management of tasks and administration of the Laboratory. New approaches are needed. Modern database management technology has the potential to provide the foundation for more effective administrative tools for JPL managers and administrators. Plans for upgrading JPL's management and administrative systems over a six-year period, revolving around the development of an integrated management and administrative database, are discussed.

  13. Preparation of Database for Land use Management in North East of Cairo

    International Nuclear Information System (INIS)

    El-Ghawaby, A.M.

    2012-01-01

Environmental management in urban areas is difficult due to the amount and diversity of the data needed for decision making. This amount of data is unmanageable without adequate database systems and modern methodologies. A geo-database for the East Cairo City Area (ECCA) was built to be used in the process of urban land-use suitability assessment, to achieve better performance compared with the usual methods. This geo-database required the availability of detailed, accurate, updated and geographically referenced data on the terrain's physical characteristics and the environmental hazards that may occur. A smart environmental suitability model for ECCA was developed and implemented using ERDAS IMAGINE 9.2. This model is capable of suggesting the most appropriate urban land use, based on the existing spatial and non-spatial potentials and constraints.

  14. Databases

    Directory of Open Access Journals (Sweden)

    Nick Ryan

    2004-01-01

    Full Text Available Databases are deeply embedded in archaeology, underpinning and supporting many aspects of the subject. However, as well as providing a means for storing, retrieving and modifying data, databases themselves must be a result of a detailed analysis and design process. This article looks at this process, and shows how the characteristics of data models affect the process of database design and implementation. The impact of the Internet on the development of databases is examined, and the article concludes with a discussion of a range of issues associated with the recording and management of archaeological data.

  15. Database Administrator

    Science.gov (United States)

    Moore, Pam

    2010-01-01

    The Internet and electronic commerce (e-commerce) generate lots of data. Data must be stored, organized, and managed. Database administrators, or DBAs, work with database software to find ways to do this. They identify user needs, set up computer databases, and test systems. They ensure that systems perform as they should and add people to the…

  16. Information Management in Creative Engineering Design and Capabilities of Database Transactions

    DEFF Research Database (Denmark)

    Jacobsen, Kim; Eastman, C. A.; Jeng, T. S.

    1997-01-01

This paper examines the information management requirements and sets forth the general criteria for collaboration and concurrency control in creative engineering design. Our work attempts to recognize the full range of concurrency, collaboration and complex transaction structures now practiced in manual and semi-automated design and the range of capabilities needed as the demand for enhanced but flexible electronic information management unfolds. The objective of this paper is to identify new issues that may advance the use of databases to support creative engineering design. We start with a generalized description of the structure of design tasks and how information management in design is dealt with today. After this review, we identify extensions to current information management capabilities that have been realized and/or proposed to support/augment what designers can do now. Given...

  17. GSIMF: a web service based software and database management system for the next generation grids

    International Nuclear Information System (INIS)

    Wang, N; Ananthan, B; Gieraltowski, G; May, E; Vaniachine, A

    2008-01-01

To process the vast amount of data from high energy physics experiments, physicists rely on Computational and Data Grids; yet, the distribution, installation, and updating of a myriad of different versions of different programs over the Grid environment is complicated, time-consuming, and error-prone. Our Grid Software Installation Management Framework (GSIMF) is a set of Grid Services that has been developed for managing versioned and interdependent software applications and file-based databases over the Grid infrastructure. This set of Grid services provides a mechanism to install software packages on distributed Grid computing elements, thus automating the software and database installation management process on behalf of the users. This enables users to remotely install programs and tap into the computing power provided by Grids

  18. Coordinating Mobile Databases: A System Demonstration

    OpenAIRE

    Zaihrayeu, Ilya; Giunchiglia, Fausto

    2004-01-01

In this paper we present the Peer Database Management System (PDBMS). This system runs on top of a standard database management system and allows that system to connect its database with other (peer) databases on the network. A particularity of our solution is that PDBMS allows conventional database technology to be effectively operational in mobile settings. We think of database mobility as a database network, where databases appear and disappear spontaneously and their network access point...

  19. BioWarehouse: a bioinformatics database warehouse toolkit.

    Science.gov (United States)

    Lee, Thomas J; Pouliot, Yannick; Wagner, Valerie; Gupta, Priyanka; Stringer-Calvert, David W J; Tenenbaum, Jessica D; Karp, Peter D

    2006-03-23

    This article addresses the problem of interoperation of heterogeneous bioinformatics databases. We introduce BioWarehouse, an open source toolkit for constructing bioinformatics database warehouses using the MySQL and Oracle relational database managers. BioWarehouse integrates its component databases into a common representational framework within a single database management system, thus enabling multi-database queries using the Structured Query Language (SQL) but also facilitating a variety of database integration tasks such as comparative analysis and data mining. BioWarehouse currently supports the integration of a pathway-centric set of databases including ENZYME, KEGG, and BioCyc, and in addition the UniProt, GenBank, NCBI Taxonomy, and CMR databases, and the Gene Ontology. Loader tools, written in the C and JAVA languages, parse and load these databases into a relational database schema. The loaders also apply a degree of semantic normalization to their respective source data, decreasing semantic heterogeneity. The schema supports the following bioinformatics datatypes: chemical compounds, biochemical reactions, metabolic pathways, proteins, genes, nucleic acid sequences, features on protein and nucleic-acid sequences, organisms, organism taxonomies, and controlled vocabularies. As an application example, we applied BioWarehouse to determine the fraction of biochemically characterized enzyme activities for which no sequences exist in the public sequence databases. The answer is that no sequence exists for 36% of enzyme activities for which EC numbers have been assigned. These gaps in sequence data significantly limit the accuracy of genome annotation and metabolic pathway prediction, and are a barrier for metabolic engineering. Complex queries of this type provide examples of the value of the data warehousing approach to bioinformatics research. BioWarehouse embodies significant progress on the database integration problem for bioinformatics.
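
    The warehouse idea at the core of this record — loaders normalizing heterogeneous sources into one relational schema so a single SQL statement can span them — can be illustrated with sqlite3. The two tables and their columns below are a hypothetical toy, far simpler than the actual BioWarehouse schema; the query mirrors the kind of cross-source gap analysis described in the abstract.

    ```python
    import sqlite3

    # Minimal stand-in for a warehouse schema: every source database is
    # normalized into shared tables, with provenance kept in source_db.
    con = sqlite3.connect(":memory:")
    con.executescript("""
    CREATE TABLE protein (id INTEGER PRIMARY KEY, name TEXT, source_db TEXT);
    CREATE TABLE enzyme_activity (ec_number TEXT, protein_id INTEGER);
    """)
    con.executemany("INSERT INTO protein VALUES (?, ?, ?)",
                    [(1, "trpA", "UniProt"), (2, "trpB", "GenBank")])
    con.executemany("INSERT INTO enzyme_activity VALUES (?, ?)",
                    [("4.2.1.20", 1), ("1.1.1.1", None)])

    # One multi-source query: count EC activities for which no sequence
    # record exists in any of the loaded databases.
    missing = con.execute("""
        SELECT COUNT(*) FROM enzyme_activity e
        LEFT JOIN protein p ON p.id = e.protein_id
        WHERE p.id IS NULL""").fetchone()[0]
    ```

    Because all sources share one schema, the same LEFT JOIN works no matter which loader contributed each row.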

  20. CERN pushes the envelope with Oracle9i database

    CERN Multimedia

    2001-01-01

    Oracle Corp. today announced that unique capabilities in Oracle9i Database are helping CERN, the European Organization for Nuclear Research in Geneva. The LHC project will generate petabytes of data - an amount well beyond the capability of any relational database technology today. CERN is developing a new route in data management and analysis using Oracle9i Real Application Cluster technology.

  1. Development of a combined database for meta-epidemiological research

    DEFF Research Database (Denmark)

    Savović, Jelena; Harris, Ross J; Wood, Lesley

    2010-01-01

Collections of meta-analyses assembled in meta-epidemiological studies are used to study associations of trial characteristics with intervention effect estimates. However, methods and findings are not consistent across studies. To combine data from 10 meta-epidemiological studies into a single... or review. Unique identifiers were assigned to each reference and used to identify duplicate trials. Sets of meta-analyses with overlapping trials were identified and duplicates removed. Overlapping trials were used to examine agreement between assessments of trial characteristics. The combined database... will be used to examine the combined evidence on sources of bias in randomized controlled trials. The strategy used to remove overlap between meta-analyses may be of use for future empirical research. Copyright © 2010 John Wiley & Sons, Ltd.

  2. The image database management system of teaching file using personal computer

    International Nuclear Information System (INIS)

    Shin, M. J.; Kim, G. W.; Chun, T. J.; Ahn, W. H.; Baik, S. K.; Choi, H. Y.; Kim, B. G.

    1995-01-01

For the systematic management and easy use of the teaching file in a radiology department, the authors set up a database management system for the teaching file using a personal computer. We used a personal computer (IBM PC compatible, 486DX2) including an image capture card (Window vision, Dooin Elect, Seoul, Korea) and a video camera recorder (8 mm, CCD-TR105, Sony, Tokyo, Japan) for the acquisition and storage of images. We developed the database program using FoxPro for Windows 2.6 (Microsoft, Seattle, USA) running under Windows 3.1 (Microsoft, Seattle, USA). Each record consisted of hospital number, name, sex, age, examination date, keyword, radiologic examination modalities, final diagnosis, radiologic findings, references and representative images. The images were acquired and stored in bitmap format (8-bit, 540 X 390 ∼ 545 X 414, 256 gray scale) and displayed on a 17-inch flat monitor (1024 X 768, Samtron, Seoul, Korea). Image acquisition and storage could be done simply on the reading viewbox, without special devices. The image quality on the computer monitor was lower than that of the original film on the viewbox, but in general the characteristics of each lesion could be differentiated. Easy retrieval of data was possible for the purpose of a teaching file system. Without high-cost appliances, we could implement an image database system for teaching files using a personal computer with a relatively inexpensive method
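
    The record layout listed in this abstract — patient identifiers, exam metadata, and one bitmap image per case — maps naturally onto a small relational table. The sqlite3 sketch below uses hypothetical column names and sample values chosen only to match the fields the authors enumerate:

    ```python
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("""
    CREATE TABLE teaching_file (
        hospital_no TEXT PRIMARY KEY,
        name        TEXT,
        sex         TEXT,
        age         INTEGER,
        exam_date   TEXT,
        keyword     TEXT,
        modality    TEXT,   -- e.g. plain film, CT, MR
        diagnosis   TEXT,
        findings    TEXT,
        refs        TEXT,
        image       BLOB    -- 8-bit, 256-gray-scale bitmap
    )""")
    con.execute("INSERT INTO teaching_file VALUES (?,?,?,?,?,?,?,?,?,?,?)",
                ("95-0001", "anon", "M", 52, "1995-03-14", "pneumothorax",
                 "plain film", "spontaneous pneumothorax",
                 "left apical lucency", "none", b"\x00" * 16))

    # Keyword-based retrieval, the teaching-file use case:
    hit = con.execute("SELECT hospital_no FROM teaching_file WHERE keyword LIKE ?",
                      ("%pneumo%",)).fetchone()
    ```

    Storing images as BLOBs alongside the metadata keeps a case self-contained, at the cost of database size — the same trade-off any small teaching archive faces.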

  3. Computer networks for financial activity management, control and statistics of databases of economic administration at the Joint Institute for Nuclear Research

    International Nuclear Information System (INIS)

    Tyupikova, T.V.; Samoilov, V.N.

    2003-01-01

Modern information technologies spur the natural sciences to further development. This development, however, goes together with the evolution of infrastructures, which must create favorable conditions for the development of science and its financial base in order to legally document and protect new research. Any scientific development entails accounting and legal protection. In the report, we consider a new direction in the software, organization and control of common databases, using the example of the electronic document handling system that functions in several departments of the Joint Institute for Nuclear Research

  4. RODOS database adapter

    International Nuclear Information System (INIS)

    Xie Gang

    1995-11-01

Integrated data management is an essential aspect of many automatic information systems such as RODOS, a real-time on-line decision support system for nuclear emergency management. In particular, the application software must provide access management to different commercial database systems. This report presents the tools necessary for adapting embedded SQL applications to both HP-ALLBASE/SQL and CA-Ingres/SQL databases. The design of the database adapter and the concept of the RODOS embedded SQL syntax are discussed by considering some of the most important features of SQL functions and by identifying significant differences between SQL implementations. Finally, the developed software and the administrator's and installation guides are described. (orig.) [de

  5. The Knowledge Management Research of Agricultural Scientific Research Institution

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

Based on the perception of knowledge management by experts specializing in different fields, at home and abroad, the knowledge management of an agricultural scientific research institution can build a new platform, offer a new approach for making explicit or tacit knowledge available, and promote the resilience and innovative ability of the institution. The thesis introduces the functions of knowledge management research in agricultural science. First, it can transform tacit knowledge into explicit knowledge. Second, it can make all scientific personnel share knowledge. Third, it is beneficial to the development of a prototype knowledge management system. Fourth, it mainly researches the realization of a knowledge management system. Fifth, it can manage external knowledge via competitive intelligence. Sixth, it can foster knowledge management talent for the agricultural scientific research institution. Seventh, it offers a decision-making service for leaders managing scientific programs. The thesis also discusses the content of knowledge management of an agricultural scientific research institution as follows: production and innovation of knowledge; attainment and organizing of knowledge; dissemination and sharing of knowledge; management of human resources; and the construction and management of infrastructure. We put forward corresponding countermeasures to further reinforce the knowledge management research of agricultural scientific research institutions.

  6. FJET Database Project: Extract, Transform, and Load

    Science.gov (United States)

    Samms, Kevin O.

    2015-01-01

    The Data Mining & Knowledge Management team at Kennedy Space Center is providing data management services to the Frangible Joint Empirical Test (FJET) project at Langley Research Center (LARC). FJET is a project under the NASA Engineering and Safety Center (NESC). The purpose of FJET is to conduct an assessment of mild detonating fuse (MDF) frangible joints (FJs) for human spacecraft separation tasks in support of the NASA Commercial Crew Program. The Data Mining & Knowledge Management team has been tasked with creating and managing a database for the efficient storage and retrieval of FJET test data. This paper details the Extract, Transform, and Load (ETL) process as it is related to gathering FJET test data into a Microsoft SQL relational database, and making that data available to the data users. Lessons learned, procedures implemented, and programming code samples are discussed to help detail the learning experienced as the Data Mining & Knowledge Management team adapted to changing requirements and new technology while maintaining flexibility of design in various aspects of the data management project.
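
    The Extract, Transform, and Load process described here can be sketched end to end in a few lines. The sketch below is a minimal illustration, not the FJET pipeline itself: the CSV columns, the psi-to-kPa unit conversion, and the table name are all hypothetical, and sqlite3 stands in for the Microsoft SQL relational database.

    ```python
    import csv, io, sqlite3

    RAW = """test_id,joint_type,peak_pressure_psi
    FJ-001,MDF,1520
    FJ-002,MDF,1610
    """

    def extract(text):
        """Extract: read raw delimited test data into dict rows."""
        return list(csv.DictReader(io.StringIO(text)))

    def transform(rows):
        """Transform: cast strings to native types; convert psi to kPa."""
        return [(r["test_id"], r["joint_type"],
                 float(r["peak_pressure_psi"]) * 6.894757)
                for r in rows]

    def load(con, records):
        """Load: insert the normalized records into the relational store."""
        con.execute("CREATE TABLE IF NOT EXISTS fjet_test "
                    "(test_id TEXT PRIMARY KEY, joint_type TEXT, peak_pressure_kpa REAL)")
        con.executemany("INSERT INTO fjet_test VALUES (?, ?, ?)", records)
        con.commit()

    con = sqlite3.connect(":memory:")
    load(con, transform(extract(RAW)))
    ```

    Keeping the three stages as separate functions is what gives an ETL pipeline the "flexibility of design" the paper mentions: a changed source format touches only `extract`, a new unit convention only `transform`.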

  7. ORACLE DATABASE SECURITY

    OpenAIRE

    Cristina-Maria Titrade

    2011-01-01

This paper presents some security issues, namely database system-level security, data-level security, user-level security, user management, resource management and password management. Security is a constant concern in database design and development. Usually, the concern is not whether security should exist, but rather how extensive it should be. A typical DBMS has several levels of security, in addition to those offered by the operating system or network. Typically, a DBMS has user a...

  8. Fiscal 1997 research report. Basic research project on improving energy consumption efficiency in developing countries (Database construction); 1998 nendo hatten tojokoku energy shohi koritsuka kiso chosa jigyo hokokusho. Database kochiku jigyo

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1999-09-01

The New Energy and Industrial Technology Development Organization (NEDO) in fiscal 1993 started a database construction project, involving energy conservation related primary information on the countries concerned, to encourage 11 Asian countries, namely Japan, China, Indonesia, the Philippines, Thailand, Malaysia, Taiwan, Korea, Vietnam, Myanmar, and Pakistan, to promote their energy conservation endeavors. As part of the database construction effort under this research project, the collection and analysis of energy-related information about these countries, surveys of the utilization and popularization of databases, and the development of database systems have been carried out so far. On the basis of these efforts to improve the database systems for enhanced operability, a program was formulated for database diffusion, under which data are collected and updated for storage in the databases. Also pursued under the program are endeavors to make use of the above database systems and to disseminate the constructed databases to the 11 countries for effective utilization. In the future, it is desired that the NEDO database will win popularity in the 11 countries and be utilized in their formulation of domestic energy conservation policies. (NEDO)

  9. The Coral Trait Database, a curated database of trait information for coral species from the global oceans

    Science.gov (United States)

    Madin, Joshua S.; Anderson, Kristen D.; Andreasen, Magnus Heide; Bridge, Tom C. L.; Cairns, Stephen D.; Connolly, Sean R.; Darling, Emily S.; Diaz, Marcela; Falster, Daniel S.; Franklin, Erik C.; Gates, Ruth D.; Hoogenboom, Mia O.; Huang, Danwei; Keith, Sally A.; Kosnik, Matthew A.; Kuo, Chao-Yang; Lough, Janice M.; Lovelock, Catherine E.; Luiz, Osmar; Martinelli, Julieta; Mizerek, Toni; Pandolfi, John M.; Pochon, Xavier; Pratchett, Morgan S.; Putnam, Hollie M.; Roberts, T. Edward; Stat, Michael; Wallace, Carden C.; Widman, Elizabeth; Baird, Andrew H.

    2016-03-01

    Trait-based approaches advance ecological and evolutionary research because traits provide a strong link to an organism’s function and fitness. Trait-based research might lead to a deeper understanding of the functions of, and services provided by, ecosystems, thereby improving management, which is vital in the current era of rapid environmental change. Coral reef scientists have long collected trait data for corals; however, these are difficult to access and often under-utilized in addressing large-scale questions. We present the Coral Trait Database initiative that aims to bring together physiological, morphological, ecological, phylogenetic and biogeographic trait information into a single repository. The database houses species- and individual-level data from published field and experimental studies alongside contextual data that provide important framing for analyses. In this data descriptor, we release data for 56 traits for 1547 species, and present a collaborative platform on which other trait data are being actively federated. Our overall goal is for the Coral Trait Database to become an open-source, community-led data clearinghouse that accelerates coral reef research.

  10. The Coral Trait Database, a curated database of trait information for coral species from the global oceans.

    Science.gov (United States)

    Madin, Joshua S; Anderson, Kristen D; Andreasen, Magnus Heide; Bridge, Tom C L; Cairns, Stephen D; Connolly, Sean R; Darling, Emily S; Diaz, Marcela; Falster, Daniel S; Franklin, Erik C; Gates, Ruth D; Harmer, Aaron; Hoogenboom, Mia O; Huang, Danwei; Keith, Sally A; Kosnik, Matthew A; Kuo, Chao-Yang; Lough, Janice M; Lovelock, Catherine E; Luiz, Osmar; Martinelli, Julieta; Mizerek, Toni; Pandolfi, John M; Pochon, Xavier; Pratchett, Morgan S; Putnam, Hollie M; Roberts, T Edward; Stat, Michael; Wallace, Carden C; Widman, Elizabeth; Baird, Andrew H

    2016-03-29

    Trait-based approaches advance ecological and evolutionary research because traits provide a strong link to an organism's function and fitness. Trait-based research might lead to a deeper understanding of the functions of, and services provided by, ecosystems, thereby improving management, which is vital in the current era of rapid environmental change. Coral reef scientists have long collected trait data for corals; however, these are difficult to access and often under-utilized in addressing large-scale questions. We present the Coral Trait Database initiative that aims to bring together physiological, morphological, ecological, phylogenetic and biogeographic trait information into a single repository. The database houses species- and individual-level data from published field and experimental studies alongside contextual data that provide important framing for analyses. In this data descriptor, we release data for 56 traits for 1547 species, and present a collaborative platform on which other trait data are being actively federated. Our overall goal is for the Coral Trait Database to become an open-source, community-led data clearinghouse that accelerates coral reef research.

  11. Integration of the ATLAS tag database with data management and analysis components

    Energy Technology Data Exchange (ETDEWEB)

    Cranshaw, J; Malon, D [Argonne National Laboratory, Argonne, IL 60439 (United States); Doyle, A T; Kenyon, M J; McGlone, H; Nicholson, C [Department of Physics and Astronomy, University of Glasgow, Glasgow, G12 8QQ, Scotland (United Kingdom)], E-mail: c.nicholson@physics.gla.ac.uk

    2008-07-15

    The ATLAS Tag Database is an event-level metadata system, designed to allow efficient identification and selection of interesting events for user analysis. By making first-level cuts using queries on a relational database, the size of an analysis input sample could be greatly reduced and thus the time taken for the analysis reduced. Deployment of such a Tag database is underway, but to be most useful it needs to be integrated with the distributed data management (DDM) and distributed analysis (DA) components. This means addressing the issue that the DDM system at ATLAS groups files into datasets for scalability and usability, whereas the Tag Database points to events in files. It also means setting up a system which could prepare a list of input events and use both the DDM and DA systems to run a set of jobs. The ATLAS Tag Navigator Tool (TNT) has been developed to address these issues in an integrated way and provide a tool that the average physicist can use. Here, the current status of this work is presented and areas of future work are highlighted.
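
    The two-step pattern this abstract describes — a first-level cut as a relational query over event-level metadata, then regrouping the selected events by file so jobs can be dispatched through the file-oriented data management system — can be sketched as follows. The tag columns (`n_muons`, `missing_et`), cut values, and file GUIDs are hypothetical illustrations, not the actual ATLAS tag schema:

    ```python
    import sqlite3
    from collections import defaultdict

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE tag "
                "(event_id INTEGER, file_guid TEXT, n_muons INTEGER, missing_et REAL)")
    con.executemany("INSERT INTO tag VALUES (?, ?, ?, ?)", [
        (1, "f1", 2, 55.0), (2, "f1", 0, 10.0),
        (3, "f2", 1, 80.0), (4, "f2", 0, 5.0)])

    # First-level cut: a query on event-level metadata shrinks the input sample.
    selected = con.execute(
        "SELECT file_guid, event_id FROM tag "
        "WHERE n_muons >= 1 AND missing_et > 40 ORDER BY event_id").fetchall()

    # Regroup the event list by file, bridging event-level tags and the
    # file/dataset granularity of the distributed data management system.
    jobs = defaultdict(list)
    for guid, event in selected:
        jobs[guid].append(event)
    ```

    The per-file event lists in `jobs` are exactly what a navigator tool would hand to the distributed analysis system, one job per file or dataset.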

  12. Integration of the ATLAS tag database with data management and analysis components

    International Nuclear Information System (INIS)

    Cranshaw, J; Malon, D; Doyle, A T; Kenyon, M J; McGlone, H; Nicholson, C

    2008-01-01

    The ATLAS Tag Database is an event-level metadata system, designed to allow efficient identification and selection of interesting events for user analysis. By making first-level cuts using queries on a relational database, the size of an analysis input sample could be greatly reduced and thus the time taken for the analysis reduced. Deployment of such a Tag database is underway, but to be most useful it needs to be integrated with the distributed data management (DDM) and distributed analysis (DA) components. This means addressing the issue that the DDM system at ATLAS groups files into datasets for scalability and usability, whereas the Tag Database points to events in files. It also means setting up a system which could prepare a list of input events and use both the DDM and DA systems to run a set of jobs. The ATLAS Tag Navigator Tool (TNT) has been developed to address these issues in an integrated way and provide a tool that the average physicist can use. Here, the current status of this work is presented and areas of future work are highlighted

  13. Preliminary study for unified management of CANDU safety codes and construction of database system

    International Nuclear Information System (INIS)

    Min, Byung Joo; Kim, Hyoung Tae

    2003-03-01

    It is necessary to develop a Graphical User Interface (GUI) for the unified management of the CANDU safety codes and to construct a database system for the validation of the safety codes; a preliminary study of both is done in the first stage of the present work. The input and output structures and data flow of CATHENA and PRESCON2 are investigated, and the interaction of the variables between CATHENA and PRESCON2 is identified. Furthermore, PC versions of the CATHENA and PRESCON2 codes are developed for the interaction of these codes with the GUI. The PC versions are assessed by comparing their calculation results with those from an HP workstation or from the FSAR (Final Safety Analysis Report). A preliminary study on the GUI for the safety codes in the unified management system is done, and a sample of GUI programming is demonstrated. Visual C++ is selected as the programming language for the development of the GUI system. Data for the Wolsong plants, the reactor core, and thermal-hydraulic experiments executed inside and outside the country are collected and classified following the structure of the database system, of which two types are considered for the final web-based database system. The preliminary GUI programming for the database system is demonstrated, and will be updated in future work.

  14. Drug residues in urban water: A database for ecotoxicological risk management.

    Science.gov (United States)

    Destrieux, Doriane; Laurent, François; Budzinski, Hélène; Pedelucq, Julie; Vervier, Philippe; Gerino, Magali

    2017-12-31

    Human-use drug residues (DR) are only partially eliminated by waste water treatment plants (WWTPs), so that residual amounts can reach natural waters and cause environmental hazards. In order to properly manage these hazards in the aquatic environment, a database is made available that integrates the concentration ranges of DR which cause adverse effects in aquatic organisms, together with the temporal variations of the ecotoxicological risks. To implement this database for ecotoxicological risk assessment (ERA database), the required information for each DR is the predicted no effect concentration (PNEC), along with the predicted environmental concentration (PEC). The risk assessment is based on the ratio between the PNECs and the PECs. Adverse effect data or PNECs have been found in the publicly available literature for 45 substances. These ecotoxicity test data have been extracted from 125 different sources. This ERA database contains 1157 adverse effect data and 287 PNECs. The efficiency of this ERA database was tested with a data set coming from a simultaneous survey of WWTPs and the natural environment. In this data set, 26 DR were searched for in two WWTPs and in the river. On five sampling dates, concentrations measured in the river for 10 DR could pose environmental problems, of which 7 were measured only downstream of WWTP outlets. Consolidating literature and measurement data, with unit homogenisation, in a single database facilitates the actual ecotoxicological risk assessment, and may be useful for assessing further risks from data arising from future field surveys. Moreover, the accumulation of a large ecotoxicity data set in a single database should not only improve knowledge of higher-risk molecules but also supply an objective tool to help the rapid and efficient evaluation of the risk. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Routine health insurance data for scientific research: potential and limitations of the Agis Health Database.

    Science.gov (United States)

    Smeets, Hugo M; de Wit, Niek J; Hoes, Arno W

    2011-04-01

    Observational studies performed within routine health care databases have the advantage of their large size and, when the aim is to assess the effect of interventions, can complement randomized controlled trials, which usually have small samples from experimental situations. Institutional Health Insurance Databases (HIDs) are attractive for research because of their large size, their longitudinal perspective, and their practice-based information. As they are based on financial reimbursement, the information is generally reliable. The database of one of the major insurance companies in the Netherlands, the Agis Health Database (AHD), is described in detail. Whether the AHD data sets meet the specific requirements to conduct several types of clinical studies is discussed according to the classification of the four different types of clinical research; that is, diagnostic, etiologic, prognostic, and intervention research. The potential of the AHD for these various types of research is illustrated using examples of studies recently conducted in the AHD. HIDs such as the AHD offer large potential for several types of clinical research, in particular etiologic and intervention studies, but at present the lack of detailed clinical information is an important limitation. Copyright © 2011 Elsevier Inc. All rights reserved.

  16. Development of a PSA information database system

    International Nuclear Information System (INIS)

    Kim, Seung Hwan

    2005-01-01

    The need to develop a PSA information database for performing a PSA has been growing rapidly. For example, performing a PSA requires a lot of data to analyze, to evaluate the risk, to trace the process of results and to verify the results. A PSA information database is a system that stores all PSA-related information in a database and file system, with cross links to jump to the physical documents whenever they are needed. The Korea Atomic Energy Research Institute is developing a PSA information database system, AIMS (Advanced Information Management System for PSA). The objective is to integrate and computerize all the distributed information of a PSA into one system and to enhance the accessibility of PSA information for all PSA-related activities. This paper describes how we implemented such a database-centered application in two areas: database design and data (document) services.

  17. Teaching Case: Adapting the Access Northwind Database to Support a Database Course

    Science.gov (United States)

    Dyer, John N.; Rogers, Camille

    2015-01-01

    A common problem encountered when teaching database courses is that few large illustrative databases exist to support teaching and learning. Most database textbooks have small "toy" databases that are chapter objective specific, and thus do not support application over the complete domain of design, implementation and management concepts…

  18. Prototyping visual interface for maintenance and supply databases

    OpenAIRE

    Fore, Henry Ray

    1989-01-01

    Approved for public release; distribution is unlimited This research examined the feasibility of providing a visual interface to standard Army Management Information Systems at the unit level. The potential of improving the Human-Machine Interface of unit level maintenance and supply software, such as ULLS (Unit Level Logistics System), is very attractive. A prototype was implemented in GLAD (Graphics Language for Database). GLAD is a graphics object-oriented environment for databases t...

  19. BioWarehouse: a bioinformatics database warehouse toolkit

    Directory of Open Access Journals (Sweden)

    Stringer-Calvert David WJ

    2006-03-01

    Full Text Available Abstract Background This article addresses the problem of interoperation of heterogeneous bioinformatics databases. Results We introduce BioWarehouse, an open source toolkit for constructing bioinformatics database warehouses using the MySQL and Oracle relational database managers. BioWarehouse integrates its component databases into a common representational framework within a single database management system, thus enabling multi-database queries using the Structured Query Language (SQL), but also facilitating a variety of database integration tasks such as comparative analysis and data mining. BioWarehouse currently supports the integration of a pathway-centric set of databases including ENZYME, KEGG, and BioCyc, and in addition the UniProt, GenBank, NCBI Taxonomy, and CMR databases, and the Gene Ontology. Loader tools, written in the C and JAVA languages, parse and load these databases into a relational database schema. The loaders also apply a degree of semantic normalization to their respective source data, decreasing semantic heterogeneity. The schema supports the following bioinformatics datatypes: chemical compounds, biochemical reactions, metabolic pathways, proteins, genes, nucleic acid sequences, features on protein and nucleic-acid sequences, organisms, organism taxonomies, and controlled vocabularies. As an application example, we applied BioWarehouse to determine the fraction of biochemically characterized enzyme activities for which no sequences exist in the public sequence databases. The answer is that no sequence exists for 36% of enzyme activities for which EC numbers have been assigned. These gaps in sequence data significantly limit the accuracy of genome annotation and metabolic pathway prediction, and are a barrier for metabolic engineering. Complex queries of this type provide examples of the value of the data warehousing approach to bioinformatics research. Conclusion BioWarehouse embodies significant progress on the
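
As a hedged illustration of the warehousing idea, the sketch below loads two "component databases" into one DBMS so that a single SQL statement can span both, mirroring the abstract's orphan-enzyme query. The table and column names are invented for the example and are not BioWarehouse's real schema:

```python
import sqlite3

# Two component datasets loaded into one warehouse schema (invented tables).
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE enzyme_activity (ec_number TEXT PRIMARY KEY, name TEXT);
    CREATE TABLE protein_seq (id INTEGER PRIMARY KEY, ec_number TEXT);
    INSERT INTO enzyme_activity VALUES ('1.1.1.1', 'alcohol dehydrogenase');
    INSERT INTO enzyme_activity VALUES ('9.9.9.9', 'orphan activity');
    INSERT INTO protein_seq VALUES (1, '1.1.1.1');
""")

# The kind of multi-database query the abstract describes: which assigned
# EC numbers have no sequence anywhere in the sequence tables?
orphans = [r[0] for r in db.execute("""
    SELECT ec_number FROM enzyme_activity
    WHERE ec_number NOT IN (SELECT ec_number FROM protein_seq)""")]
print(orphans)
```

Because both datasets share one representational framework, the gap analysis is a single subquery rather than a cross-system export-and-compare exercise.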

  20. PARPs database: A LIMS systems for protein-protein interaction data mining or laboratory information management system

    Directory of Open Access Journals (Sweden)

    Picard-Cloutier Aude

    2007-12-01

    Full Text Available Abstract Background In the "post-genome" era, mass spectrometry (MS) has become an important method for the analysis of proteins, and the rapid advancement of this technique, in combination with other proteomics methods, results in an increasing amount of proteome data. This data must be archived and analysed using specialized bioinformatics tools. Description We herein describe "PARPs database," a data analysis and management pipeline for liquid chromatography tandem mass spectrometry (LC-MS/MS) proteomics. PARPs database is a web-based tool whose features include experiment annotation, protein database searching, protein sequence management, as well as data-mining of the peptides and proteins identified. Conclusion Using this pipeline, we have successfully identified several interactions of biological significance between PARP-1 and other proteins, namely RFC-1, 2, 3, 4 and 5.

  1. The Kepler DB, a Database Management System for Arrays, Sparse Arrays and Binary Data

    Science.gov (United States)

    McCauliff, Sean; Cote, Miles T.; Girouard, Forrest R.; Middour, Christopher; Klaus, Todd C.; Wohler, Bill

    2010-01-01

    The Kepler Science Operations Center stores pixel values on approximately six million pixels collected every 30 minutes, as well as data products that are generated as a result of running the Kepler science processing pipeline. The Kepler Database (Kepler DB) management system was created to act as the repository of this information. After one year of flight usage, Kepler DB is managing 3 TiB of data and is expected to grow to over 10 TiB over the course of the mission. Kepler DB is a non-relational, transactional database where data are represented as one-dimensional arrays, sparse arrays or binary large objects. We will discuss Kepler DB's APIs, implementation, usage and deployment at the Kepler Science Operations Center.

  2. The Kepler DB: a database management system for arrays, sparse arrays, and binary data

    Science.gov (United States)

    McCauliff, Sean; Cote, Miles T.; Girouard, Forrest R.; Middour, Christopher; Klaus, Todd C.; Wohler, Bill

    2010-07-01

    The Kepler Science Operations Center stores pixel values on approximately six million pixels collected every 30 minutes, as well as data products that are generated as a result of running the Kepler science processing pipeline. The Kepler Database management system (Kepler DB) was created to act as the repository of this information. After one year of flight usage, Kepler DB is managing 3 TiB of data and is expected to grow to over 10 TiB over the course of the mission. Kepler DB is a non-relational, transactional database where data are represented as one-dimensional arrays, sparse arrays or binary large objects. We will discuss Kepler DB's APIs, implementation, usage and deployment at the Kepler Science Operations Center.
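
The array-as-binary-large-object storage model described above can be sketched as follows. This is a minimal illustration using SQLite and Python's `array` module, not Kepler DB's actual API; the series key and values are invented:

```python
import sqlite3
import array

# A non-relational key/value store: each value is a one-dimensional array
# serialized into a binary large object (BLOB).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE ts (series_id TEXT PRIMARY KEY, data BLOB)")

# One pixel's time series, packed as raw doubles.
pixels = array.array("d", [101.5, 99.8, 102.3])
db.execute("INSERT INTO ts VALUES (?, ?)", ("pixel:42", pixels.tobytes()))

# Reading it back: fetch the BLOB and deserialize into an array again.
blob = db.execute("SELECT data FROM ts WHERE series_id = ?",
                  ("pixel:42",)).fetchone()[0]
restored = array.array("d")
restored.frombytes(blob)
print(list(restored))
```

Storing each series as an opaque packed array keeps per-row overhead constant regardless of series length, which matters when millions of pixels are appended every half hour.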

  3. Study on Mandatory Access Control in a Secure Database Management System

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    This paper proposes a security policy model for mandatory access control in a class B1 database management system whose level of labeling is the tuple. The relation-hierarchical data model is extended to a multilevel relation-hierarchical data model. Based on the multilevel relation-hierarchical data model, the concept of upper-lower layer relational integrity is presented after we analyze and eliminate the covert channels caused by the database integrity. Two SQL statements are extended to process polyinstantiation in the multilevel secure environment. The system is based on the multilevel relation-hierarchical data model and is capable of integratively storing and manipulating multilevel complicated objects (e.g., multilevel spatial data) and multilevel conventional data (e.g., integer, real number, and character string).
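
The tuple-level labeling idea can be illustrated with a small sketch of the classic "no read up" (Bell-LaPadula style) check that mandatory access control enforces: every tuple carries a sensitivity label, and a session sees only tuples at or below its clearance. The labels and data are invented, and a real multilevel-secure DBMS enforces this inside the engine rather than in application code:

```python
# Ordered sensitivity levels (illustrative, not from the paper).
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2}

# Each tuple carries its own label, as in tuple-level labeling.
tuples = [
    ("site-A coordinates", "secret"),
    ("staff roster", "confidential"),
    ("cafeteria menu", "unclassified"),
]

def readable(rows, clearance):
    """Return the tuples a subject with this clearance may read
    under the mandatory 'no read up' rule."""
    return [data for data, label in rows
            if LEVELS[label] <= LEVELS[clearance]]

print(readable(tuples, "confidential"))
```

Polyinstantiation, which the abstract's extended SQL statements handle, arises when the same key must hold different values at different labels so that low-clearance sessions cannot infer the existence of high-clearance rows.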

  4. Healthcare Databases in Thailand and Japan: Potential Sources for Health Technology Assessment Research.

    Directory of Open Access Journals (Sweden)

    Surasak Saokaew

    Full Text Available Health technology assessment (HTA) has been continuously used for value-based healthcare decisions over the last decade. Healthcare databases represent an important source of information for HTA, which has seen a surge in use in Western countries. Although HTA agencies have been established in the Asia-Pacific region, application and understanding of healthcare databases for HTA is rather limited. Thus, we reviewed existing databases to assess their potential for HTA in Thailand, where HTA has been used officially, and Japan, where HTA is going to be officially introduced. Existing healthcare databases in Thailand and Japan were compiled and reviewed. Databases' characteristics, e.g. name of database, host, scope/objective, time/sample size, design, data collection method, population/sample, and variables, were described. Databases were assessed for their potential HTA use in terms of safety/efficacy/effectiveness, social/ethical, organization/professional, economic, and epidemiological domains. The request route for each database was also provided. Forty databases (20 from Thailand and 20 from Japan) were included. These comprised national censuses, surveys, registries, administrative data, and claims databases. All databases could potentially be used for epidemiological studies. In addition, data on mortality, morbidity, disability, adverse events, quality of life, service/technology utilization, length of stay, and economics were also found in some databases. However, access to patient-level data was limited, since information about the databases was not available from public sources. Our findings show that existing databases provide valuable information for HTA research, with limitations on accessibility. Mutual dialogue on healthcare database development and usage for HTA within the Asia-Pacific region is needed.

  5. Healthcare Databases in Thailand and Japan: Potential Sources for Health Technology Assessment Research.

    Science.gov (United States)

    Saokaew, Surasak; Sugimoto, Takashi; Kamae, Isao; Pratoomsoot, Chayanin; Chaiyakunapruk, Nathorn

    2015-01-01

    Health technology assessment (HTA) has been continuously used for value-based healthcare decisions over the last decade. Healthcare databases represent an important source of information for HTA, which has seen a surge in use in Western countries. Although HTA agencies have been established in the Asia-Pacific region, application and understanding of healthcare databases for HTA is rather limited. Thus, we reviewed existing databases to assess their potential for HTA in Thailand, where HTA has been used officially, and Japan, where HTA is going to be officially introduced. Existing healthcare databases in Thailand and Japan were compiled and reviewed. Databases' characteristics, e.g. name of database, host, scope/objective, time/sample size, design, data collection method, population/sample, and variables, were described. Databases were assessed for their potential HTA use in terms of safety/efficacy/effectiveness, social/ethical, organization/professional, economic, and epidemiological domains. The request route for each database was also provided. Forty databases (20 from Thailand and 20 from Japan) were included. These comprised national censuses, surveys, registries, administrative data, and claims databases. All databases could potentially be used for epidemiological studies. In addition, data on mortality, morbidity, disability, adverse events, quality of life, service/technology utilization, length of stay, and economics were also found in some databases. However, access to patient-level data was limited, since information about the databases was not available from public sources. Our findings show that existing databases provide valuable information for HTA research, with limitations on accessibility. Mutual dialogue on healthcare database development and usage for HTA within the Asia-Pacific region is needed.

  6. Astronomy Education Research Observations from the iSTAR international Study of Astronomical Reasoning Database

    Science.gov (United States)

    Tatge, C. B.; Slater, S. J.; Slater, T. F.; Schleigh, S.; McKinnon, D.

    2016-12-01

    Historically, an important part of the scientific research cycle is to situate any research project within the landscape of the existing scientific literature. In the field of discipline-based astronomy education research, grappling with the existing literature base has proven difficult because of the difficulty in obtaining research reports from around the world, particularly early ones. In order to better survey and efficiently utilize the wide and fractured range and domain of astronomy education research methods and results, the iSTAR international Study of Astronomical Reasoning database project was initiated. The project aims to host a living, online repository of dissertations, theses, journal articles, and grey literature resources to serve the world's discipline-based astronomy education research community. The first domain of research artifacts ingested into the iSTAR database were doctoral dissertations. To the authors' great surprise, nearly 300 astronomy education research dissertations were found from the last 100 years. Few, if any, of the literature reviews from recent astronomy education dissertations surveyed even come close to summarizing this many dissertations, most of which have not been published in traditional journals, as re-publishing one's dissertation research as a journal article was not a widespread custom in the education research community until recently. A survey of the iSTAR database dissertations reveals that the vast majority of work has been largely quantitative in nature until the last decade. We also observe that modern-era astronomy education research writing reaches as far back as 1923 and that the majority of dissertations come from the same eight institutions. Moreover, most of the astronomy education research work has been done covering learners' grasp of broad knowledge of astronomy rather than delving into specific learning targets, which has been more in vogue during the last two decades. The surprisingly wide breadth

  7. Dynamic tables: an architecture for managing evolving, heterogeneous biomedical data in relational database management systems.

    Science.gov (United States)

    Corwin, John; Silberschatz, Avi; Miller, Perry L; Marenco, Luis

    2007-01-01

    Data sparsity and schema evolution issues affecting clinical informatics and bioinformatics communities have led to the adoption of vertical or object-attribute-value-based database schemas to overcome limitations posed when using conventional relational database technology. This paper explores these issues and discusses why biomedical data are difficult to model using conventional relational techniques. The authors propose a solution to these obstacles based on a relational database engine using a sparse, column-store architecture. The authors provide benchmarks comparing the performance of queries and schema-modification operations using three different strategies: (1) the standard conventional relational design; (2) past approaches used by biomedical informatics researchers; and (3) their sparse, column-store architecture. The performance results show that their architecture is a promising technique for storing and processing many types of data that are not handled well by the other two semantic data models.
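
A minimal sketch of the vertical, object-attribute-value (EAV) layout that the abstract contrasts with conventional wide relational tables: sparse, evolving attributes become rows, so adding a new clinical attribute requires no ALTER TABLE and absent attributes cost no storage. The attribute names are invented for illustration:

```python
import sqlite3

# Entity-attribute-value layout: one row per (entity, attribute) pair.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE eav (entity INTEGER, attribute TEXT, value TEXT)")
db.executemany("INSERT INTO eav VALUES (?, ?, ?)", [
    (1, "diagnosis", "hypertension"),
    (1, "systolic_bp", "150"),
    (2, "diagnosis", "asthma"),   # patient 2 simply has no blood-pressure row
])

# Pivoting the vertical rows back into a per-entity record at query time.
record = dict(db.execute(
    "SELECT attribute, value FROM eav WHERE entity = 1"))
print(record)
```

The paper's point is that this flexibility has a query-performance cost, which a sparse column-store engine can recover while keeping the schema-evolution benefits.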

  8. Management Information Systems Research.

    Science.gov (United States)

    Research on management information systems is elusive in many respects. Part of the basic research problem in MIS stems from the absence of standard...decision making. But the transition from these results to the realization of 'satisfactory' management information systems remains difficult indeed. The...paper discusses several aspects of research on management information systems and reviews a selection of efforts that appear significant for future progress. (Author)

  9. Management Guidelines for Database Developers' Teams in Software Development Projects

    Science.gov (United States)

    Rusu, Lazar; Lin, Yifeng; Hodosi, Georg

    The worldwide job market for database developers (DBDs) has been continually growing over the last several years. In some companies, DBDs are organized as a special team (DBDs team) to support other projects and roles. As a new role, the DBDs team faces a major problem: there are no management guidelines for it. The team manager does not know which kinds of tasks should be assigned to this team and what practices should be used during DBDs' work. Therefore, in this paper we have developed a set of management guidelines, which includes 8 fundamental tasks and 17 practices from the software development process, by using two methodologies, the Capability Maturity Model (CMM) and agile software development (in particular Scrum), in order to improve the DBDs team's work. Moreover, the management guidelines developed here have been complemented with practices from the authors' experience in this area and have been evaluated in the case of a software company. The management guidelines for DBD teams presented in this paper could be very useful for other companies that use a DBDs team and could contribute towards an increase in the efficiency of these teams in their work on software development projects.

  10. The Future of Asset Management for Human Space Exploration: Supply Classification and an Integrated Database

    Science.gov (United States)

    Shull, Sarah A.; Gralla, Erica L.; deWeck, Olivier L.; Shishko, Robert

    2006-01-01

    One of the major logistical challenges in human space exploration is asset management. This paper presents observations on the practice of asset management in support of human space flight to date and discusses a functional-based supply classification and a framework for an integrated database that could be used to improve asset management and logistics for human missions to the Moon, Mars and beyond.

  11. Database system for management of health physics and industrial hygiene records

    International Nuclear Information System (INIS)

    Murdoch, B. T.; Blomquist, J. A.; Cooke, R. H.; Davis, J. T.; Davis, T. M.; Dolecek, E. H.; Halka-Peel, L.; Johnson, D.; Keto, D. N.; Reyes, L. R.; Schlenker, R. A.; Woodring; J. L.

    1999-01-01

    This paper provides an overview of the Worker Protection System (WPS), a client/server, Windows-based database management system for essential radiological protection and industrial hygiene. Seven operational modules handle records for external dosimetry, bioassay/internal dosimetry, sealed sources, routine radiological surveys, lasers, workplace exposure, and respirators. WPS utilizes the latest hardware and software technologies to provide ready electronic access to a consolidated source of worker protection

  12. The HyMeX database

    Science.gov (United States)

    Brissebrat, Guillaume; Mastrorillo, Laurence; Ramage, Karim; Boichard, Jean-Luc; Cloché, Sophie; Fleury, Laurence; Klenov, Ludmila; Labatut, Laurent; Mière, Arnaud

    2013-04-01

    The international HyMeX (HYdrological cycle in the Mediterranean EXperiment) project aims at a better understanding and quantification of the hydrological cycle and related processes in the Mediterranean, with emphasis on high-impact weather events, inter-annual to decadal variability of the Mediterranean coupled system, and associated trends in the context of global change. The project includes long-term monitoring of environmental parameters, intensive field campaigns, use of satellite data, modelling studies, as well as post-event field surveys and value-added products processing. The HyMeX database therefore incorporates various dataset types from different disciplines, either operational or research. The database relies on a strong collaboration between the OMP and IPSL data centres. Field data, which are 1D time series, maps or pictures, are managed by the OMP team, while gridded data (satellite products, model outputs, radar data...) are managed by the IPSL team. At present, the HyMeX database contains about 150 datasets, including 80 hydrological, meteorological, ocean and soil in situ datasets, 30 radar datasets, 15 satellite products, 15 atmosphere, ocean and land surface model outputs from operational (re-)analysis or forecasts and from research simulations, and 5 post-event survey datasets. The data catalogue complies with international standards (ISO 19115; INSPIRE; Directory Interchange Format; Global Change Master Directory Thesaurus). It includes all the datasets stored in the HyMeX database, as well as external datasets relevant for the project. All the data, whatever the type, are accessible through a single gateway. The database website http://mistrals.sedoo.fr/HyMeX offers different tools: - A registration procedure which enables any scientist to accept the data policy and apply for a user database account. - A search tool to browse the catalogue using thematic, geographic and/or temporal criteria. - Sorted lists of the datasets by thematic keywords, by

  13. Enhancements to the Redmine Database Metrics Plug in

    Science.gov (United States)

    2017-08-01

    management web application has been adopted within the US Army Research Laboratory's Computational and Information Sciences Directorate as a database...project management web application. The Redmine plug-in enabled the use of the numerous, powerful features of the web application. The many...distribution is unlimited. • Selectable export of citations/references by type, writing style, and FY • Enhanced naming convention options for

  14. 2008 Availability and Utilization of Electronic Information Databases ...

    African Journals Online (AJOL)

    Gbaje E.S

    electronic information databases include research work, to update knowledge in their field of interest, and current awareness. ... be read by a computer device. CD ROMs are ... business and government innovation. Its ... technologies, ideas and management practices ..... sources of information and storage devices bring.

  15. Analysis and Design of Web-Based Database Application for Culinary Community

    OpenAIRE

    Huda, Choirul; Awang, Osel Dharmawan; Raymond, Raymond; Raynaldi, Raynaldi

    2017-01-01

    This research is motivated by the rapid development of the culinary field and of information technology. Difficulties in communicating with culinary experts and in documenting recipes make proper media support very important. Therefore, a web-based database application for the public is important to help the culinary community with communication, searching, and recipe management. The aim of the research was to design a web-based database application that could be used as social media for the cu...

  16. An integrated photogrammetric and spatial database management system for producing fully structured data using aerial and remote sensing images.

    Science.gov (United States)

    Ahmadi, Farshid Farnood; Ebadi, Hamid

    2009-01-01

    3D spatial data acquired from aerial and remote sensing images by photogrammetric techniques is one of the most accurate and economic data sources for GIS, map production, and spatial data updating. However, there are still many problems concerning the storage, structuring and appropriate management of spatial data obtained using these techniques. Given the capabilities of spatial database management systems (SDBMSs), direct integration of photogrammetric and spatial database management systems can save time and cost in producing and updating digital maps. This integration is accomplished by replacing digital maps with a single spatial database. Applying spatial databases overcomes the problem of managing spatial and attribute data in a coupled approach. This management approach is one of the main problems in GISs when using map products of photogrammetric workstations. Also, by means of these integrated systems, providing structured spatial data, based on OGC (Open GIS Consortium) standards and topological relations between different feature classes, is possible at the time of the feature digitizing process. In this paper, the integration of photogrammetric systems and SDBMSs is evaluated. Then, different levels of integration are described. Finally, the design, implementation and test of a software package called Integrated Photogrammetric and Oracle Spatial Systems (IPOSS) is presented.

  17. An Integrated Photogrammetric and Spatial Database Management System for Producing Fully Structured Data Using Aerial and Remote Sensing Images

    Directory of Open Access Journals (Sweden)

    Farshid Farnood Ahmadi

    2009-03-01

    Full Text Available 3D spatial data acquired from aerial and remote sensing images by photogrammetric techniques is one of the most accurate and economic data sources for GIS, map production, and spatial data updating. However, there are still many problems concerning the storage, structuring and appropriate management of spatial data obtained using these techniques. Given the capabilities of spatial database management systems (SDBMSs), direct integration of photogrammetric and spatial database management systems can save time and cost in producing and updating digital maps. This integration is accomplished by replacing digital maps with a single spatial database. Applying spatial databases overcomes the problem of managing spatial and attribute data in a coupled approach. This management approach is one of the main problems in GISs when using map products of photogrammetric workstations. Also, by means of these integrated systems, providing structured spatial data, based on OGC (Open GIS Consortium) standards and topological relations between different feature classes, is possible at the time of the feature digitizing process. In this paper, the integration of photogrammetric systems and SDBMSs is evaluated. Then, different levels of integration are described. Finally, the design, implementation and test of a software package called Integrated Photogrammetric and Oracle Spatial Systems (IPOSS) is presented.

  18. A database for reproducible manipulation research: CapriDB – Capture, Print, Innovate

    Directory of Open Access Journals (Sweden)

    Florian T. Pokorny

    2017-04-01

    Full Text Available We present a novel approach and database which combines the inexpensive generation of 3D object models via monocular or RGB-D camera images with 3D printing and a state of the art object tracking algorithm. Unlike recent efforts towards the creation of 3D object databases for robotics, our approach does not require expensive and controlled 3D scanning setups and aims to enable anyone with a camera to scan, print and track complex objects for manipulation research. The proposed approach results in detailed textured mesh models whose 3D printed replicas provide close approximations of the originals. A key motivation for utilizing 3D printed objects is the ability to precisely control and vary object properties such as the size, material properties and mass distribution in the 3D printing process to obtain reproducible conditions for robotic manipulation research. We present CapriDB – an extensible database resulting from this approach containing initially 40 textured and 3D printable mesh models together with tracking features to facilitate the adoption of the proposed approach.

  19. Databases and bookkeeping for HEP experiments

    International Nuclear Information System (INIS)

    Blobel, V.; Cnops, A.-M.; Fisher, S.M.

    1983-09-01

    The term database is explained, as well as the requirements for databases in high energy physics (HEP). Also covered are the packages used in HEP, a summary of user experience, database management systems, relational database management systems for HEP use, and observations. (U.K.)

  20. Microsoft Access Small Business Solutions State-of-the-Art Database Models for Sales, Marketing, Customer Management, and More Key Business Activities

    CERN Document Server

    Hennig, Teresa; Linson, Larry; Purvis, Leigh; Spaulding, Brent

    2010-01-01

    Database models developed by a team of leading Microsoft Access MVPs that provide ready-to-use solutions for sales, marketing, customer management and other key business activities for most small businesses. As the most popular relational database in the world, Microsoft Access is widely used by small business owners. This book responds to the growing need for resources that help business managers and end users design and build effective Access database solutions for specific business functions. Coverage includes: Elements of a Microsoft Access Database; Relational Data Model; Dealing with C

  1. Development of a marketing strategy for the Coal Research Establishment's emissions monitoring database

    Energy Technology Data Exchange (ETDEWEB)

    Beer, A.D.; Hughes, I.S.C. [British Coal Corporation, Stoke Orchard (United Kingdom). Coal Research Establishment

    1995-06-01

    A summary is presented of the results of work conducted by the UK's Coal Research Establishment (CRE) between April 1994 and December 1994 following the completion of a project on the utilisation and publication of an emissions monitoring database. The database contains emissions data for most UK combustion plant, gathered over the past 10 years. The aim of this further work was to identify the strengths and weaknesses of CRE's database, to investigate potential additional sources of data, and to develop a strategy for marketing the information contained within the database to interested parties. 3 figs.

  2. International Nuclear Safety Center (INSC) database

    International Nuclear Information System (INIS)

    Sofu, T.; Ley, H.; Turski, R.B.

    1997-01-01

    As an integral part of DOE's International Nuclear Safety Center (INSC) at Argonne National Laboratory, the INSC Database has been established to provide an interactively accessible information resource for the world's nuclear facilities and to promote free and open exchange of nuclear safety information among nations. The INSC Database is a comprehensive resource database aimed at a scope and level of detail suitable for safety analysis and risk evaluation for the world's nuclear power plants and facilities. It also provides an electronic forum for international collaborative safety research for the Department of Energy and its international partners. The database is intended to provide plant design information, material properties, computational tools, and results of safety analysis. Initial emphasis in data gathering is given to Soviet-designed reactors in Russia, the former Soviet Union, and Eastern Europe. The implementation is performed under the Oracle database management system, and the World Wide Web is used to serve as the access path for remote users. An interface between the Oracle database and the Web server is established through a custom designed Web-Oracle gateway which is used mainly to perform queries on the stored data in the database tables
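
    The record describes queries against an Oracle database served to remote users through a custom Web-Oracle gateway. A minimal sketch of that gateway pattern is shown below, with user input bound as a query parameter rather than interpolated into the SQL string; the `plants` table, its columns, and the sample rows are hypothetical, and sqlite3 stands in for the Oracle DBMS described in the record.

```python
import sqlite3

# Hypothetical schema standing in for the INSC plant-design tables.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE plants (name TEXT, reactor_type TEXT, country TEXT)")
conn.executemany("INSERT INTO plants VALUES (?, ?, ?)", [
    ("Kursk-1", "RBMK-1000", "Russia"),
    ("Balakovo-1", "VVER-1000", "Russia"),
    ("Kozloduy-5", "VVER-1000", "Bulgaria"),
])

def gateway_query(reactor_type):
    """Web-gateway style lookup: the user's input is bound as a
    parameter, never concatenated into the SQL text."""
    rows = conn.execute(
        "SELECT name, country FROM plants "
        "WHERE reactor_type = ? ORDER BY name",
        (reactor_type,))
    return rows.fetchall()

print(gateway_query("VVER-1000"))
# → [('Balakovo-1', 'Russia'), ('Kozloduy-5', 'Bulgaria')]
```

    Parameter binding is what lets such a gateway safely pass remote users' query strings through to the stored tables.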

  3. Development of Human Face Literature Database Using Text Mining Approach: Phase I.

    Science.gov (United States)

    Kaur, Paramjit; Krishan, Kewal; Sharma, Suresh K

    2018-06-01

    The face is an important part of the human body by which an individual communicates in society. Its importance is highlighted by the fact that a person deprived of a face cannot sustain in the living world. The number of experiments being performed and the number of research papers being published in the domain of the human face have surged in the past few decades. Several scientific disciplines conduct research on the human face, including Medical Science, Anthropology, Information Technology (Biometrics, Robotics, Artificial Intelligence, etc.), Psychology, Forensic Science and Neuroscience. This signals the need to collect and manage the data concerning the human face so that public and free access to it can be provided to the scientific community. This can be attained by developing databases and tools on the human face using a bioinformatics approach. The current research emphasizes creating a database of the literature on the human face. The database can be accessed on the basis of specific keywords, journal name, date of publication, author's name, etc. The collected research papers will be stored in the form of a database. Hence, the database will be beneficial to the research community, as comprehensive information dedicated to the human face can be found in one place. Information related to facial morphologic features, facial disorders, facial asymmetry, facial abnormalities and many other parameters can be extracted from this database. The front end has been developed using Hypertext Markup Language (HTML) and Cascading Style Sheets (CSS). The back end has been developed using PHP (Hypertext Preprocessor), with JavaScript as the scripting language. MySQL is used for database development, as it is the most widely used relational database management system.
    XAMPP (X (cross platform), Apache, MySQL, PHP, Perl) open source web application software has been used as the server. The database is still under the
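
    The keyword/journal/author/date search the record describes can be sketched as a query builder that combines whichever filters the user supplied. The `papers` table, its columns, and the sample rows below are illustrative assumptions, and sqlite3 stands in for the MySQL back end named in the abstract.

```python
import sqlite3

# Hypothetical literature table for the face-research database.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE papers (
    title TEXT, authors TEXT, journal TEXT, year INTEGER, keywords TEXT)""")
db.executemany("INSERT INTO papers VALUES (?, ?, ?, ?, ?)", [
    ("Facial asymmetry in adults", "A. Author", "J. Anat.", 2014,
     "facial asymmetry; morphology"),
    ("Face recognition survey", "B. Author", "IEEE TPAMI", 2016,
     "biometrics; recognition"),
])

def search(keyword=None, journal=None, year=None):
    """Combine only the filters the user filled in, as a web form would."""
    clauses, params = [], []
    if keyword:
        clauses.append("keywords LIKE ?"); params.append(f"%{keyword}%")
    if journal:
        clauses.append("journal = ?"); params.append(journal)
    if year:
        clauses.append("year = ?"); params.append(year)
    where = " AND ".join(clauses) or "1=1"   # no filters → return everything
    return [r[0] for r in db.execute(
        f"SELECT title FROM papers WHERE {where}", params)]

print(search(keyword="asymmetry"))   # → ['Facial asymmetry in adults']
```

    Only the column names come from user-independent code; every user-supplied value travels through the parameter list.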

  4. JAERI Material Performance Database (JMPD); outline of the system

    International Nuclear Information System (INIS)

    Yokoyama, Norio; Tsukada, Takashi; Nakajima, Hajime.

    1991-01-01

    The JAERI Material Performance Database (JMPD) has been developed since 1986 at JAERI with a view to utilizing the various kinds of characteristic data of nuclear materials efficiently. The relational database management system PLANNER was employed, and supporting systems for data retrieval and output were expanded. JMPD currently serves the following data: (1) data yielded from the research activities of JAERI, including fatigue crack growth data of LWR pressure vessel materials as well as creep and fatigue data of Hastelloy XR, the alloy developed for the High Temperature Gas-cooled Reactor (HTGR); (2) data on environmentally assisted cracking of LWR materials arranged by the Electric Power Research Institute (EPRI), including fatigue crack growth data (3000 tests), stress corrosion data (500 tests) and Slow Strain Rate Technique (SSRT) data (1000 tests). In order to improve the user-friendliness of the retrieval system, menu-selection procedures have been developed so that knowledge of the system and data structures is not required of end-users. In addition, retrieval via database commands in the Structured Query Language (SQL) is supported by the relational database management system. In JMPD the retrieved data can be processed readily through supporting systems for graphical and statistical analyses. The present report outlines JMPD and describes procedures for data retrieval and analyses utilizing JMPD. (author)
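
    The workflow the record describes - SQL retrieval followed by a supporting statistical analysis - can be sketched as follows. The `creep_tests` table, its columns, and the sample values are illustrative assumptions (JMPD's actual schema is not published in this abstract), and sqlite3 stands in for the PLANNER system.

```python
import sqlite3
import statistics

# Hypothetical materials-test table in the spirit of JMPD.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE creep_tests (
    alloy TEXT, temperature_c REAL, rupture_time_h REAL)""")
db.executemany("INSERT INTO creep_tests VALUES (?, ?, ?)", [
    ("Hastelloy XR", 900.0, 1200.0),
    ("Hastelloy XR", 900.0, 1400.0),
    ("Hastelloy XR", 950.0, 600.0),
])

# Retrieval via an SQL query, as the record describes ...
rows = db.execute(
    "SELECT rupture_time_h FROM creep_tests "
    "WHERE alloy = ? AND temperature_c = ?",
    ("Hastelloy XR", 900.0)).fetchall()

# ... then a supporting statistical analysis of the retrieved data.
times = [r[0] for r in rows]
print(statistics.mean(times))   # → 1300.0
```

    The same retrieved list could equally feed a plotting routine, mirroring the graphical analysis systems the abstract mentions.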

  5. Product- and Process Units in the CRITT Translation Process Research Database

    DEFF Research Database (Denmark)

    Carl, Michael

    The first version of the "Translation Process Research Database" (TPR-DB v1.0) was released in August 2012, containing logging data of more than 400 translation and text production sessions. The current version of the TPR-DB (v1.4) contains data from more than 940 sessions, which represents more than 300 hours of text production. The database provides the raw logging data, as well as tables of pre-processed product and processing units. The TPR-DB includes various types of simple and composed product and process units that are intended to support the analysis and modelling of human text production.

  6. Scientific Research Database of the 2008 Ms8.0 Wenchuan Earthquake

    Science.gov (United States)

    Liang, C.; Yang, Y.; Yu, Y.

    2013-12-01

    Nearly 5 years after the 2008 Ms8.0 Wenchuan earthquake, the Ms7.0 Lushan earthquake struck 70 km away along the same fault system. Given the tremendous loss of life and property damage, as well as the short time and distance intervals between the two large-magnitude events, scientific probing into their causative factors and into future seismic activity in the nearby region will remain at the center of earthquake research in China, and even the world, for years to come. In the past five years, scientists have made significant efforts to study the Wenchuan earthquake from various aspects using different datasets and methods. Their studies cover a variety of topics, including the seismogenic environment, earthquake precursors, the rupture process, co-seismic phenomena, hazard relief, reservoir-induced seismicity and more. These studies have been published in numerous journals in Chinese, English and many other languages. In addition, 54 books on this earthquake have been published. The extremely diversified nature of all these publications makes it very difficult and time-consuming, if not impossible, for individual researchers to sort out the information they need in an efficient way. An information platform that collects the relevant scientific information and makes it accessible in various ways can be very handy. With this mission in mind, the Earthquake Research Group at the Chengdu University of Technology has developed a website, www.wceq.org, toward this goal: (1) articles published in major journals, as well as books, are recorded in a database, so researchers can find articles by topic, journal, publication date, author, keyword, etc. with a few clicks; (2) to fast-track the latest developments, researchers can also follow updates in the current month and in the last 90, 180 and 365 days by clicking on the corresponding links; (3) modern communication tools such as Facebook, Twitter and their Chinese counterparts are accommodated on the site to share

  7. Chemistry research for the Canadian nuclear fuel waste management program

    International Nuclear Information System (INIS)

    Vikis, A.C.; Garisto, F.; Lemire, R.J.; Paquette, J.; Sagert, N.H.; Saluja, P.P.S.; Sunder, S.; Taylor, P.

    1988-01-01

    This publication reviews chemical research in support of the Canadian Nuclear Fuel Waste Management Program. The overall objective of this research is to develop the fundamental understanding required to demonstrate the suitability of waste immobilization media and processes, and to develop the chemical information required to predict the long-term behaviour of radionuclides in the geosphere after the waste form and the various engineered barriers containing it have failed. Key studies towards the above objective include experimental and theoretical studies of uranium dioxide oxidation/dissolution; compilation of thermodynamic databases and an experimental program to determine unavailable thermodynamic data; studies of hydrothermal alteration of minerals and radionuclide interactions with such minerals; and a study examining actinide colloid formation, as well as sorption of actinides on groundwater colloids

  8. A database for compliance with land disposal restrictions

    International Nuclear Information System (INIS)

    McCoy, M.W.

    1990-09-01

    The new restrictions on land disposal introduce additional challenges to hazardous waste managers. Laboratory waste streams consisting of small volumes of diverse waste types will be particularly difficult to manage due to the large number of possible treatment standards that could be applied. To help remedy this management problem, a user-friendly database has been developed to provide the regulatory information required for each of the hazardous wastes present in the waste stream of a large research laboratory. 3 figs., 1 tab

  9. Radioactive waste management profiles. A compilation of data from the Net Enabled Waste Management Database (NEWMDB). No. 5

    International Nuclear Information System (INIS)

    2003-05-01

    The document consists of two parts: Overview and Country Waste Profile Reports for Reporting Year 2000. The first section contains overview reports that provide assessments of the achievements and shortcomings of the Net Enabled Waste Management Database (NEWMDB) during the first two data collection cycles (July 2001 to March 2002 and July 2002 to February 2003). The second part of the report includes a summary and compilation of waste management data submitted by Agency Member States in both the first and second data collection cycles

  10. The Human Communication Research Centre dialogue database.

    Science.gov (United States)

    Anderson, A H; Garrod, S C; Clark, A; Boyle, E; Mullin, J

    1992-10-01

    The HCRC dialogue database consists of over 700 transcribed and coded dialogues from pairs of speakers aged from seven to fourteen. The speakers are recorded while tackling co-operative problem-solving tasks and the same pairs of speakers are recorded over two years tackling 10 different versions of our two tasks. In addition there are over 200 dialogues recorded between pairs of undergraduate speakers engaged on versions of the same tasks. Access to the database, and to its accompanying custom-built search software, is available electronically over the JANET system by contacting liz@psy.glasgow.ac.uk, from whom further information about the database and a user's guide to the database can be obtained.

  11. Database Description - RPD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database Description. General information of database: Database name: RPD; alternative name: Rice Proteome Database. ...titute of Crop Science, National Agriculture and Food Research Organization; contact: Setsuko Komatsu (E-mail). Database classification: Proteomics Resources; Plant databases - Rice. Organism: Oryza sativa (Taxonomy ID: 4530). Database description: the Rice Proteome Database contains information on protei... and entered in the Rice Proteome Database. The database is searchable by keyword,

  12. 16th East-European Conference on Advances in Databases and Information Systems (ADBIS 2012)

    CERN Document Server

    Härder, Theo; Wrembel, Robert; Advances in Databases and Information Systems

    2013-01-01

    This volume is the second one of the 16th East-European Conference on Advances in Databases and Information Systems (ADBIS 2012), held on September 18-21, 2012, in Poznań, Poland. The first one has been published in the LNCS series.   This volume includes 27 research contributions, selected out of 90. The contributions cover a wide spectrum of topics in the database and information systems field, including: database foundation and theory, data modeling and database design, business process modeling, query optimization in relational and object databases, materialized view selection algorithms, index data structures, distributed systems, system and data integration, semi-structured data and databases, semantic data management, information retrieval, data mining techniques, data stream processing, trust and reputation in the Internet, and social networks. Thus, the content of this volume covers the research areas from fundamentals of databases, through still hot topic research problems (e.g., data mining, XML ...

  13. The web-enabled database of JRC-EC: a useful tool for managing european gen 4 materials data

    International Nuclear Information System (INIS)

    Over, H.H.; Dietz, W.

    2008-01-01

    Materials and document databases are important tools for conserving the knowledge and experimental materials data of European R and D projects. A web-enabled application guarantees fast access to these data. In combination with analysis tools, the experimental data are used for, e.g., mechanical design, construction and lifetime predictions of complex components. The effective and efficient handling of large amounts of generic and detailed materials data with regard to properties related to, e.g., fabrication processes, joining techniques, irradiation or aging is one of the basic elements of data management within ongoing nuclear safety and design related European research projects and networks. The paper describes the structure and functionality of Mat-DB and gives examples of how these tools can be used for the management and evaluation of materials data for EURATOM FP7 Generation IV reactor types. (authors)

  14. A Systematic Review of Coding Systems Used in Pharmacoepidemiology and Database Research.

    Science.gov (United States)

    Chen, Yong; Zivkovic, Marko; Wang, Tongtong; Su, Su; Lee, Jianyi; Bortnichak, Edward A

    2018-02-01

    Clinical coding systems have been developed to translate real-world healthcare information such as prescriptions, diagnoses and procedures into standardized codes appropriate for use in large healthcare datasets. Due to the lack of information on coding system characteristics and insufficient uniformity in coding practices, there is a growing need for better understanding of coding systems and their use in pharmacoepidemiology and observational real world data research. To determine: 1) the number of available coding systems and their characteristics, 2) which pharmacoepidemiology databases are they adopted in, 3) what outcomes and exposures can be identified from each coding system, and 4) how robust they are with respect to consistency and validity in pharmacoepidemiology and observational database studies. Electronic literature database and unpublished literature searches, as well as hand searching of relevant journals were conducted to identify eligible articles discussing characteristics and applications of coding systems in use and published in the English language between 1986 and 2016. Characteristics considered included type of information captured by codes, clinical setting(s) of use, adoption by a pharmacoepidemiology database, region, and available mappings. Applications articles describing the use and validity of specific codes, code lists, or algorithms were also included. Data extraction was performed independently by two reviewers and a narrative synthesis was performed. A total of 897 unique articles and 57 coding systems were identified, 17% of which included country-specific modifications or multiple versions. Procedures (55%), diagnoses (36%), drugs (38%), and site of disease (39%) were most commonly and directly captured by these coding systems. The systems were used to capture information from the following clinical settings: inpatient (63%), ambulatory (55%), emergency department (ED, 34%), and pharmacy (13%). More than half of all coding

  15. Content independence in multimedia databases

    NARCIS (Netherlands)

    A.P. de Vries (Arjen)

    2001-01-01

    A database management system is a general-purpose software system that facilitates the processes of defining, constructing, and manipulating databases for various applications. This article investigates the role of data management in multimedia digital libraries, and its implications for

  16. Survey on utilization of database for research and development of global environmental industry technology; Chikyu kankyo sangyo gijutsu kenkyu kaihatsu no tame no database nado no riyo ni kansuru chosa

    Energy Technology Data Exchange (ETDEWEB)

    1993-03-01

    To optimize networks and database systems for promotion of the industry technology development contributing to the solution of the global environmental problem, studies are made on reusable information resources and their utilization methods. Reusable information resources include external databases and network systems for researchers' information exchange and for computer use. The external databases include commercial databases and academic databases. As commercial databases, 6 agents and 13 service systems are selected. As academic databases, there are NACSIS-IR and the databases connected with INTERNET in the U.S. These are used in connection with the UNIX academic research network called INTERNET. For connection with INTERNET, a commercial UNIX network service called IIJ, which starts service in April 1993, can be used. However, personal computer communication networks are used for the time being. 6 figs., 4 tabs.

  17. Research and development strategy and maintenance engineering to strengthen the basis of ageing management. International activities

    International Nuclear Information System (INIS)

    Sekimura, Naoto

    2009-01-01

    In addition to the development of codes and standards by academic societies, regulatory bodies and industries, an information basis of databases and knowledge bases has been systematically developed through intensive domestic safety research collaborations and international collaboration, and through the continuous revision of the Strategy Maps for Ageing Management and Safe Long Term Operation. Important international activities at the IAEA and OECD/NEA, especially on knowledge bases and the extraction of commendable practices in ageing management, are discussed. (author)

  18. Database Description - SAHG | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database Description. General information of database: Database name: SAHG; alternative nam... Contact address: Chie Motono, Tel: +81-3-3599-8067, E-mail. Database classification: Structure Databases; ...e databases - Protein properties. Organism: Homo sapiens (Taxonomy ID: 9606). Database description... External links: original website information. Database maintenance site: The Molecular Profiling Research Center for D... Registration: not available.

  19. Database foundation for the configuration management of the CERN accelerator controls systems

    International Nuclear Information System (INIS)

    Zaharieva, Z.; Martin Marquez, M.; Peryt, M.

    2012-01-01

    The Controls Configuration Database (CCDB) and its interfaces have been developed over the last 25 years and have become the basis for the configuration management of the control system for all accelerators at CERN. The CCDB contains data for all configuration items and their relationships, required for the correct functioning of the control system. The configuration items are quite heterogeneous, spanning different areas of the control system - ranging from 3000 front-end computers and 75000 software devices allowing remote control of the accelerators, to valid states of the accelerator timing system. The article describes the different areas of the CCDB, their inter-dependencies, and the challenges of establishing the data model for such a diverse configuration management database serving a multitude of clients. The CCDB tracks the life of configuration items by allowing their clear identification, triggering change management processes, and providing status accounting and audits. This required the development and implementation of a combination of tailored processes and tools. The control system is data-driven - the data stored in the CCDB is extracted and propagated to the controls hardware in order to configure it remotely. Therefore special attention is placed on data security and data integrity, as an incorrectly configured item can have a direct impact on the operation of the accelerators. (authors)
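
    The two ideas the record stresses - configuration items linked by relationships, and configuration extracted from the database to drive the hardware - can be sketched with a pair of related tables and a foreign-key constraint. All table, column, and device names below are illustrative assumptions, not the actual CCDB data model; sqlite3 stands in for the production DBMS.

```python
import sqlite3

# Two related kinds of configuration items: computers and the software
# devices each one hosts; the foreign key encodes the relationship.
db = sqlite3.connect(":memory:")
db.execute("PRAGMA foreign_keys = ON")
db.execute("CREATE TABLE computers (name TEXT PRIMARY KEY)")
db.execute("""CREATE TABLE devices (
    name TEXT PRIMARY KEY,
    host TEXT NOT NULL REFERENCES computers(name))""")
db.execute("INSERT INTO computers VALUES ('fec-psb-01')")
db.execute("INSERT INTO devices VALUES ('magnet.current', 'fec-psb-01')")

def extract_config(computer):
    """Data-driven configuration: list the devices a computer must serve."""
    return [r[0] for r in db.execute(
        "SELECT name FROM devices WHERE host = ? ORDER BY name", (computer,))]

print(extract_config("fec-psb-01"))   # → ['magnet.current']

# Referential integrity rejects a device pointing at a mis-typed host,
# illustrating the data-integrity concern the record raises.
try:
    db.execute("INSERT INTO devices VALUES ('bpm.pos', 'no-such-fec')")
except sqlite3.IntegrityError as exc:
    print("rejected:", exc)
```

    Enforcing the relationship in the database, rather than in every client, is what makes the extracted configuration trustworthy enough to push to hardware.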

  20. The Gulf of Mexico Research Initiative: Managing a Multidisciplinary Data Explosion

    Science.gov (United States)

    Howard, M. K.; Gibeaut, J. C.; Reed, D.

    2011-12-01

    collaborative relationship GRIIDC needed with the data originators. GRI founders only required that the AU provide a research database and data be shared openly with a minimum of delay and archived at national repositories. GRIIDC has three functions, to manage the data in cooperation with the originators, to facilitate public access to the data through the research database and to support management's need to track the project's progress. GRIIDC needed to rapidly assess the volume of data and specific parameters being collected, to quickly hire information technology and subject matter experts, to establish data policies for reporting, metadata content, submittal form and formats, and to establish and maintain mutually agreeable divisions of labor with the data originators. This presentation will describe the challenges and lessons learned by GRIIDC while building a collaborative end-to-end data management system designed to absorb, organize, share, and curate the massively heterogeneous data set produced by nearly $200M of GRI research.

  1. Database Description - RED | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database Description. General information of database: Database name: RED; alternative name: Rice Expression Database. ...enome Research Unit; contact: Shoshi Kikuchi (E-mail). Database classification: Plant databases - Rice; Microarray, Gene Expression. Organism: Oryza sativa (Taxonomy ID: 4530). Database descripti... Article title: "Rice Expression Database: the gateway to rice functional genomics", ...nt Science (2002) Dec; 7(12):563-564. External links: original website information; database maintenance site.

  2. Interacting with the National Database for Autism Research (NDAR) via the LONI Pipeline workflow environment.

    Science.gov (United States)

    Torgerson, Carinna M; Quinn, Catherine; Dinov, Ivo; Liu, Zhizhong; Petrosyan, Petros; Pelphrey, Kevin; Haselgrove, Christian; Kennedy, David N; Toga, Arthur W; Van Horn, John Darrell

    2015-03-01

    Under the umbrella of the National Database for Clinical Trials (NDCT) related to mental illnesses, the National Database for Autism Research (NDAR) seeks to gather, curate, and make openly available neuroimaging data from NIH-funded studies of autism spectrum disorder (ASD). NDAR has recently made its database accessible through the LONI Pipeline workflow design and execution environment to enable large-scale analyses of cortical architecture and function via local, cluster, or "cloud"-based computing resources. This presents a unique opportunity to overcome many of the customary limitations to fostering biomedical neuroimaging as a science of discovery. Providing open access to primary neuroimaging data, workflow methods, and high-performance computing will increase uniformity in data collection protocols, encourage greater reliability of published data, results replication, and broaden the range of researchers now able to perform larger studies than ever before. To illustrate the use of NDAR and LONI Pipeline for performing several commonly performed neuroimaging processing steps and analyses, this paper presents example workflows useful for ASD neuroimaging researchers seeking to begin using this valuable combination of online data and computational resources. We discuss the utility of such database and workflow processing interactivity as a motivation for the sharing of additional primary data in ASD research and elsewhere.

  3. Practice databases and their uses in clinical research.

    Science.gov (United States)

    Tierney, W M; McDonald, C J

    1991-04-01

    A few large clinical information databases have been established within larger medical information systems. Although they are smaller than claims databases, these clinical databases offer several advantages: accurate and timely data, rich clinical detail, and continuous parameters (for example, vital signs and laboratory results). However, the nature of the data vary considerably, which affects the kinds of secondary analyses that can be performed. These databases have been used to investigate clinical epidemiology, risk assessment, post-marketing surveillance of drugs, practice variation, resource use, quality assurance, and decision analysis. In addition, practice databases can be used to identify subjects for prospective studies. Further methodologic developments are necessary to deal with the prevalent problems of missing data and various forms of bias if such databases are to grow and contribute valuable clinical information.

  4. PeDaB - the personal dosimetry database at the research centre Juelich

    International Nuclear Information System (INIS)

    Geisse, C.; Hill, P.; Paschke, M.; Hille, R.; Schlaeger, M.

    1998-01-01

    In May 1997, the mainframe-based registration, processing and archiving of personal monitoring data at the research centre Juelich (FZJ) was transferred to a client-server system, and a complex database application was developed. The client user interface is a Windows-based Microsoft ACCESS application connected to an ORACLE database via ODBC and TCP/IP. The conversion covered all areas of personal dosimetry, including internal and external exposure, as well as administrative areas. A higher degree of flexibility, data security and integrity was achieved. (orig.)

  5. Sports medicine clinical trial research publications in academic medical journals between 1996 and 2005: an audit of the PubMed MEDLINE database.

    Science.gov (United States)

    Nichols, A W

    2008-11-01

    To identify sports medicine-related clinical trial research articles in the PubMed MEDLINE database published between 1996 and 2005 and conduct a review and analysis of topics of research, experimental designs, journals of publication and the internationality of authorships. Sports medicine research is international in scope with improving study methodology and an evolution of topics. Structured review of articles identified in a search of a large electronic medical database. PubMed MEDLINE database. Sports medicine-related clinical research trials published between 1996 and 2005. Review and analysis of articles that meet inclusion criteria. Articles were examined for study topics, research methods, experimental subject characteristics, journal of publication, lead authors and journal countries of origin and language of publication. The search retrieved 414 articles, of which 379 (345 English language and 34 non-English language) met the inclusion criteria. The number of publications increased steadily during the study period. Randomised clinical trials were the most common study type and the "diagnosis, management and treatment of sports-related injuries and conditions" was the most popular study topic. The knee, ankle/foot and shoulder were the most frequent anatomical sites of study. Soccer players and runners were the favourite study subjects. The American Journal of Sports Medicine had the highest number of publications and shared the greatest international diversity of authorships with the British Journal of Sports Medicine. The USA, Australia, Germany and the UK produced a good number of the lead authorships. In all, 91% of articles and 88% of journals were published in English. Sports medicine-related research is internationally diverse, clinical trial publications are increasing and the sophistication of research design may be improving.

  6. Database for propagation models

    Science.gov (United States)

    Kantak, Anil V.

    1991-07-01

A propagation researcher or a systems engineer who intends to use the results of a propagation experiment is generally faced with various database tasks such as the selection of the computer software, the hardware, and the writing of the programs to pass the data through the models of interest. This task is repeated every time a new experiment is conducted or the same experiment is carried out at a different location, generating different data. Thus the users of these data have to spend a considerable portion of their time learning how to implement the computer hardware and the software towards the desired end. This situation may be alleviated considerably if an easily accessible propagation database is created that has all the accepted (standardized) propagation phenomena models approved by the propagation research community. Also, the handling of data will become easier for the user. Such a database can only stimulate the growth of propagation research if it is available to all researchers, so that the results of an experiment conducted by one researcher can be examined independently by another, without different hardware and software being used. The database may be made flexible so that researchers need not be confined only to its contents. Another way in which the database may help researchers is that they will not have to document the software and hardware tools used in their research, since the propagation research community will already know the database. The following sections show a possible database construction, as well as properties of the database for propagation research.

  7. Data-Based Decision Making at the Policy, Research, and Practice Levels

    NARCIS (Netherlands)

    Schildkamp, Kim; Ebbeler, J.

    2015-01-01

    Data-based decision making (DBDM) can lead to school improvement. However, schools struggle with the implementation of DBDM. In this symposium, we will discuss research and the implementation of DBDM at the national and regional policy level and the classroom level. We will discuss policy issues

  8. Research Directions in Database Security IV

    Science.gov (United States)

    1993-07-01

second algorithm, which is based on multiversion timestamp ordering, is that high level transactions can be forced to read arbitrarily old data values... system. The first, the single version model, stores only the latest version of each data item, while the second, the multiversion model, stores... Multiversion Database Model In the standard database model, where there is only one version of each data item, all transactions compete for the most recent
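The excerpt contrasts a single-version store, where all transactions compete for the one current value, with a multiversion store, where each write keeps a new timestamped version so a transaction with an old read timestamp still sees the data that was current when it started. A minimal sketch of the multiversion idea (the class name and API are illustrative, not from the report):

```python
# Minimal multiversion store: writes append timestamped versions instead of
# overwriting, and reads return the newest version at or before the reader's
# timestamp, so old transactions can still read arbitrarily old values.

class MultiversionStore:
    def __init__(self):
        self._versions = {}  # key -> list of (write_ts, value), kept sorted by ts

    def write(self, key, value, ts):
        versions = self._versions.setdefault(key, [])
        versions.append((ts, value))
        versions.sort(key=lambda v: v[0])

    def read(self, key, ts):
        """Return the newest version written at or before timestamp ts."""
        visible = [v for wts, v in self._versions.get(key, []) if wts <= ts]
        return visible[-1] if visible else None

store = MultiversionStore()
store.write("x", "v1", ts=1)
store.write("x", "v2", ts=5)
print(store.read("x", ts=3))  # a reader stamped at ts=3 still sees "v1"
print(store.read("x", ts=9))  # a current reader sees "v2"
```

In a single-version model, the second write would have destroyed "v1" and the older reader would have had to abort or read inconsistent data.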

  9. WGDB: Wood Gene Database with search interface.

    Science.gov (United States)

    Goyal, Neha; Ginwal, H S

    2014-01-01

Wood quality can be defined in terms of a particular end use and involves several traits. Over the last fifteen years researchers have assessed wood quality traits in forest trees. Wood quality has been categorized in terms of cell wall biochemical traits and fibre properties, including microfibril angle, density and stiffness, in loblolly pine [1]. A user-friendly, open-access database named Wood Gene Database (WGDB) has been developed for describing wood genes together with protein information and published research articles. It contains 720 wood genes from species of Pinus and deodar as well as the fast-growing trees poplar and eucalyptus. WGDB is designed to encompass the majority of publicly accessible genes coding for cellulose, hemicellulose and lignin in tree species that are responsive to wood formation and quality. It is an interactive platform for collecting, managing and searching specific wood genes; it also enables data mining of genomic information, specifically in Arabidopsis thaliana, Populus trichocarpa, Eucalyptus grandis, Pinus taeda, Pinus radiata, Cedrus deodara and Cedrus atlantica. For user convenience, the database is cross-linked with the public databases NCBI, EMBL and Dendrome and with the Google search engine to make it more informative, and it provides the bioinformatics tools BLAST and COBALT. The database is freely available at www.wgdb.in.

  10. Appropriateness of the food-pics image database for experimental eating and appetite research with adolescents.

    Science.gov (United States)

    Jensen, Chad D; Duraccio, Kara M; Barnett, Kimberly A; Stevens, Kimberly S

    2016-12-01

Research examining effects of visual food cues on appetite-related brain processes and eating behavior has proliferated. Recently investigators have developed food image databases for use across experimental studies examining appetite and eating behavior. The food-pics image database represents a standardized, freely available image library originally validated in a large sample composed primarily of adults. The suitability of the images for use with adolescents has not been investigated. The aim of the present study was to evaluate the appropriateness of the food-pics image library for appetite and eating research with adolescents. Three hundred and seven adolescents (ages 12-17) provided ratings of recognizability, palatability, and desire to eat for images from the food-pics database. Moreover, participants rated the caloric content (high vs. low) and healthiness (healthy vs. unhealthy) of each image. Adolescents rated approximately 75% of the food images as recognizable. Approximately 65% of recognizable images were correctly categorized as high vs. low calorie and 63% were correctly classified as healthy vs. unhealthy in 80% or more of image ratings. These results suggest that a smaller subset of the food-pics image database is appropriate for use with adolescents. With some modifications to included images, the food-pics image database appears to be appropriate for use in experimental appetite and eating-related research conducted with adolescents. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Network and Database Security: Regulatory Compliance, Network, and Database Security - A Unified Process and Goal

    Directory of Open Access Journals (Sweden)

    Errol A. Blake

    2007-12-01

Full Text Available Database security has evolved; data security professionals have developed numerous techniques and approaches to assure data confidentiality, integrity, and availability. This paper will show that traditional database security, which has focused primarily on creating user accounts and managing user privileges to database objects, is not enough to protect data confidentiality, integrity, and availability. This paper, a compilation of different journals, articles and classroom discussions, will focus on unifying the process of securing data or information whether it is in use, in storage or being transmitted. Promoting a change in database curriculum development trends may also play a role in helping secure databases. This paper will take the approach that a conscientious effort to unify the database security process, which includes the database management system (DBMS) selection process, following regulatory compliance requirements, analyzing and learning from the mistakes of others, implementing network security technologies, and securing the database itself, may prevent database breaches.
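One concrete application-level control that a unified database security process of the kind described would include is parameterized queries, which keep untrusted input out of the SQL text. A minimal sketch using Python's built-in sqlite3 module (table and data are hypothetical, not from the paper):

```python
# Parameterized queries: the driver binds user input as data, never as SQL,
# so an injection payload cannot alter the query's structure.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('alice')")

malicious = "alice' OR '1'='1"  # classic injection payload

# With string concatenation this payload would match every row;
# with a bound parameter it is just a (non-matching) literal string.
rows = conn.execute("SELECT id FROM users WHERE name = ?", (malicious,)).fetchall()
print(rows)  # -> [] : the payload matched no user instead of all users
```

The same principle applies regardless of DBMS; only the placeholder syntax (`?`, `%s`, `:name`) varies by driver.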

  12. Description of geological data in SKBs database GEOTAB

    International Nuclear Information System (INIS)

    Sehlstedt, S.; Stark, T.

    1991-01-01

Since 1977 the Swedish Nuclear Fuel and Waste Management Co, SKB, has been performing a research and development programme for the final disposal of spent nuclear fuel. The purpose of the programme is to acquire knowledge and data on radioactive waste. Measurements for the characterisation of geological, geophysical, hydrogeological and hydrochemical conditions are performed in specific site investigations as well as for geoscientific projects. Large data volumes, both raw data and results, have been produced since the start of the programme. Over the years these data were stored in various formats by the different institutions and companies that performed the investigations. It was therefore decided that all data from the research and development programme should be gathered in a database. The database, called GEOTAB, is a relational database. It comprises six main groups of data: background information, geological data, geophysical data, hydrological and meteorological data, hydrochemical data, and tracer tests. This report deals with geological data and describes the dataflow from the measurements at the sites to the result tables in the database. The geological investigations have been divided into three categories, each stored separately in the database: surface fractures, core mapping, and chemical analyses. (authors)
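The report's separation of geological results into three categories linked to common background information is a standard relational layout. A small sketch of what such a schema could look like, using sqlite3 (all table and column names are hypothetical, not SKB's actual GEOTAB schema):

```python
# Hypothetical relational layout: one background table plus one result
# table per geological category, joined through a foreign key.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE borehole (              -- background information
    id INTEGER PRIMARY KEY,
    site TEXT,
    depth_m REAL
);
CREATE TABLE surface_fracture (      -- category 1
    id INTEGER PRIMARY KEY,
    borehole_id INTEGER REFERENCES borehole(id),
    orientation_deg REAL
);
CREATE TABLE core_mapping (          -- category 2
    id INTEGER PRIMARY KEY,
    borehole_id INTEGER REFERENCES borehole(id),
    rock_type TEXT
);
CREATE TABLE chemical_analysis (     -- category 3
    id INTEGER PRIMARY KEY,
    borehole_id INTEGER REFERENCES borehole(id),
    element TEXT,
    concentration_ppm REAL
);
""")
conn.execute("INSERT INTO borehole VALUES (1, 'KLX01', 700.0)")
conn.execute("INSERT INTO core_mapping VALUES (1, 1, 'granite')")

# Result tables stay separate but join back to the background record.
row = conn.execute("""
    SELECT b.site, c.rock_type
    FROM borehole b JOIN core_mapping c ON c.borehole_id = b.id
""").fetchone()
print(row)  # -> ('KLX01', 'granite')
```

Keeping each category in its own table lets each keep category-specific columns while queries still reach the shared site metadata through the join.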

  13. Experience in radioactive waste management of research centre-CIAE

    International Nuclear Information System (INIS)

    Luo Shanggeng

    2001-01-01

security and database. The spent fuel of research reactors is safely stored in the water pool and corrosion damage is carefully controlled. An emergency response organization and emergency preparedness have been established. An environmental impact assessment report and a safety assessment are required for each new nuclear facility or project. Under the strict management described above, radioactive waste is safely controlled and continually minimized at CIAE. In support of sound waste management, R and D on waste management is conducted at CIAE. Lessons learned regarding waste management at CIAE are also mentioned in this paper. (author)

  14. Development of a computational database for application in Probabilistic Safety Analysis of nuclear research reactors; Desenvolvimento de uma base de dados computacional para aplicação em Análise Probabilística de Segurança de reatores nucleares de pesquisa

    Energy Technology Data Exchange (ETDEWEB)

    Macedo, Vagner dos Santos

    2016-07-01

The objective of this work is to present the computational database that was developed to store technical information and process data on component operation, failure and maintenance for the nuclear research reactors located at the Nuclear and Energy Research Institute (Instituto de Pesquisas Energéticas e Nucleares, IPEN), in São Paulo, Brazil. Data extracted from this database may be applied in the Probabilistic Safety Analysis of these research reactors or in less complex quantitative assessments related to safety, reliability, availability and maintainability of these facilities. This database may be accessed by users of the corporate network, named IPEN intranet. Professionals who require access to the database must be duly registered by the system administrator, so that they will be able to consult and handle the information. The logical model adopted to represent the database structure is an entity-relationship model, which is in accordance with the protocols installed in the IPEN intranet. The open-source relational database management system called MySQL, which is based on the Structured Query Language (SQL), was used in the development of this work. The PHP programming language was adopted to allow users to handle the database. Finally, the main result of this work was the creation of a web application for the component reliability database, named PSADB, specifically developed for the research reactors of IPEN; furthermore, the database management system provides relevant information efficiently. (author)
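The entity-relationship core of a component reliability database like the one described is small: components with accumulated operating time, plus their recorded failure events, from which a crude failure rate for PSA input can be queried. A sketch with sqlite3 standing in for MySQL (all names and figures are hypothetical):

```python
# Hypothetical component-reliability schema: a component entity, a failure
# entity, and a query deriving a point-estimate failure rate (failures per
# operating hour) for use in probabilistic safety analysis.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE component (
    id INTEGER PRIMARY KEY,
    name TEXT,
    operating_hours REAL
);
CREATE TABLE failure (
    id INTEGER PRIMARY KEY,
    component_id INTEGER REFERENCES component(id),
    failure_mode TEXT
);
""")
conn.execute("INSERT INTO component VALUES (1, 'primary pump', 20000.0)")
conn.executemany(
    "INSERT INTO failure (component_id, failure_mode) VALUES (?, ?)",
    [(1, "fails to start"), (1, "fails to run")],
)

name, rate = conn.execute("""
    SELECT c.name, COUNT(f.id) * 1.0 / c.operating_hours
    FROM component c JOIN failure f ON f.component_id = c.id
    GROUP BY c.id
""").fetchone()
print(name, rate)  # -> primary pump 0.0001 (failures per hour)
```

Maintenance records would be a third entity keyed the same way; the PHP front end described in the record would issue equivalent queries over MySQL.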

  15. Database Description - KOME | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available base Description General information of database Database name KOME Alternative nam... Sciences Plant Genome Research Unit Shoshi Kikuchi E-mail : Database classification Plant databases - Rice ...Organism Taxonomy Name: Oryza sativa Taxonomy ID: 4530 Database description Information about approximately ...Hayashizaki Y, Kikuchi S. Journal: PLoS One. 2007 Nov 28; 2(11):e1235. External Links: Original website information Database...OS) Rice mutant panel database (Tos17) A Database of Plant Cis-acting Regulatory

  16. "Mr. Database" : Jim Gray and the History of Database Technologies.

    Science.gov (United States)

    Hanwahr, Nils C

    2017-12-01

    Although the widespread use of the term "Big Data" is comparatively recent, it invokes a phenomenon in the developments of database technology with distinct historical contexts. The database engineer Jim Gray, known as "Mr. Database" in Silicon Valley before his disappearance at sea in 2007, was involved in many of the crucial developments since the 1970s that constitute the foundation of exceedingly large and distributed databases. Jim Gray was involved in the development of relational database systems based on the concepts of Edgar F. Codd at IBM in the 1970s before he went on to develop principles of Transaction Processing that enable the parallel and highly distributed performance of databases today. He was also involved in creating forums for discourse between academia and industry, which influenced industry performance standards as well as database research agendas. As a co-founder of the San Francisco branch of Microsoft Research, Gray increasingly turned toward scientific applications of database technologies, e. g. leading the TerraServer project, an online database of satellite images. Inspired by Vannevar Bush's idea of the memex, Gray laid out his vision of a Personal Memex as well as a World Memex, eventually postulating a new era of data-based scientific discovery termed "Fourth Paradigm Science". This article gives an overview of Gray's contributions to the development of database technology as well as his research agendas and shows that central notions of Big Data have been occupying database engineers for much longer than the actual term has been in use.

  17. Time management strategies for research productivity.

    Science.gov (United States)

    Chase, Jo-Ana D; Topp, Robert; Smith, Carol E; Cohen, Marlene Z; Fahrenwald, Nancy; Zerwic, Julie J; Benefield, Lazelle E; Anderson, Cindy M; Conn, Vicki S

    2013-02-01

    Researchers function in a complex environment and carry multiple role responsibilities. This environment is prone to various distractions that can derail productivity and decrease efficiency. Effective time management allows researchers to maintain focus on their work, contributing to research productivity. Thus, improving time management skills is essential to developing and sustaining a successful program of research. This article presents time management strategies addressing behaviors surrounding time assessment, planning, and monitoring. Herein, the Western Journal of Nursing Research editorial board recommends strategies to enhance time management, including setting realistic goals, prioritizing, and optimizing planning. Involving a team, problem-solving barriers, and early management of potential distractions can facilitate maintaining focus on a research program. Continually evaluating the effectiveness of time management strategies allows researchers to identify areas of improvement and recognize progress.

  18. Research in Mobile Database Query Optimization and Processing

    Directory of Open Access Journals (Sweden)

    Agustinus Borgy Waluyo

    2005-01-01

Full Text Available The emergence of mobile computing provides the ability to access information at any time and place. However, as mobile computing environments have inherent factors like power, storage, asymmetric communication cost, and bandwidth limitations, efficient query processing and minimum query response time are definitely of great interest. This survey groups a variety of query optimization and processing mechanisms in mobile databases into two main categories, namely: (i) query processing strategy, and (ii) caching management strategy. Query processing includes both pull and push operations (broadcast mechanisms). We further classify push operation into on-demand broadcast and periodic broadcast. Push operation (on-demand broadcast) relates to designing techniques that enable the server to accommodate multiple requests so that the request can be processed efficiently. Push operation (periodic broadcast) corresponds to data dissemination strategies. In this scheme, several techniques to improve the query performance by broadcasting data to a population of mobile users are described. A caching management strategy defines a number of methods for maintaining cached data items in clients' local storage. This strategy considers critical caching issues such as caching granularity, caching coherence strategy and caching replacement policy. Finally, this survey concludes with several open issues relating to mobile query optimization and processing strategy.
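The "caching replacement policy" mentioned above decides which cached item a mobile client evicts when its limited local storage fills up. A common baseline policy is least-recently-used (LRU); a minimal sketch (not a scheme from the survey itself):

```python
# Least-recently-used replacement: on overflow, evict the item that was
# accessed longest ago. OrderedDict keeps items in access order.
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self._items = OrderedDict()

    def get(self, key):
        if key not in self._items:
            return None                  # cache miss -> fetch from server
        self._items.move_to_end(key)     # mark as most recently used
        return self._items[key]

    def put(self, key, value):
        if key in self._items:
            self._items.move_to_end(key)
        self._items[key] = value
        if len(self._items) > self.capacity:
            self._items.popitem(last=False)  # evict least recently used

cache = LRUCache(capacity=2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")       # touching "a" makes "b" the eviction candidate
cache.put("c", 3)    # over capacity: "b" is evicted
print(cache.get("b"))  # -> None
print(cache.get("a"))  # -> 1
```

Mobile-specific policies extend this by weighting eviction decisions with access cost (e.g. broadcast wait time) and update frequency, not just recency.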

  19. De-MA: a web Database for electron Microprobe Analyses to assist EMP lab manager and users

    Science.gov (United States)

    Allaz, J. M.

    2012-12-01

    Lab managers and users of electron microprobe (EMP) facilities require comprehensive, yet flexible documentation structures, as well as an efficient scheduling mechanism. A single on-line database system for managing reservations, and providing information on standards, quantitative and qualitative setups (element mapping, etc.), and X-ray data has been developed for this purpose. This system is particularly useful in multi-user facilities where experience ranges from beginners to the highly experienced. New users and occasional facility users will find these tools extremely useful in developing and maintaining high quality, reproducible, and efficient analyses. This user-friendly database is available through the web, and uses MySQL as a database and PHP/HTML as script language (dynamic website). The database includes several tables for standards information, X-ray lines, X-ray element mapping, PHA, element setups, and agenda. It is configurable for up to five different EMPs in a single lab, each of them having up to five spectrometers and as many diffraction crystals as required. The installation should be done on a web server supporting PHP/MySQL, although installation on a personal computer is possible using third-party freeware to create a local Apache server, and to enable PHP/MySQL. Since it is web-based, any user outside the EMP lab can access this database anytime through any web browser and on any operating system. The access can be secured using a general password protection (e.g. htaccess). The web interface consists of 6 main menus. (1) "Standards" lists standards defined in the database, and displays detailed information on each (e.g. material type, name, reference, comments, and analyses). Images such as EDS spectra or BSE can be associated with a standard. (2) "Analyses" lists typical setups to use for quantitative analyses, allows calculation of mineral composition based on a mineral formula, or calculation of mineral formula based on a fixed

  20. Development of intelligent database program for PSI/ISI data management of nuclear power plant

    International Nuclear Information System (INIS)

    Um, Byong Guk; Park, Un Su; Park, Ik Keun; Park, Yun Won; Kang, Suk Chul

    1998-01-01

An intelligent database program, fully compatible with Windows 95, has been developed for the construction of a total support system and the effective management of pre-/in-service inspection data. Using the database program, analysis and multi-dimensional evaluation of the defects detected during PSI/ISI in the pipes and pressure vessels of nuclear power plants can be carried out. It can also be used to investigate repetitively inspected NDE data and the contents of treatment, and to offer fundamental data for the application of evaluation data related to fracture mechanics analysis (FMA). Furthermore, the PSI/ISI database, loads, and material properties can be utilized to secure a higher degree of safety, integrity, reliability, and life-prediction of components and systems in nuclear power plants.

  1. A database paradigm for the management of DICOM-RT structure sets using a geographic information system

    International Nuclear Information System (INIS)

    Shao, Weber; Kupelian, Patrick A; Wang, Jason; Low, Daniel A; Ruan, Dan

    2014-01-01

We devise a paradigm for representing the DICOM-RT structure sets in a database management system, in such a way that secondary calculations of geometric information can be performed quickly from the existing contour definitions. The implementation of this paradigm is achieved using the PostgreSQL database system and the PostGIS extension, a geographic information system commonly used for encoding geographical map data. The proposed paradigm eliminates the overhead of retrieving large data records from the database, as well as the need to implement various numerical and data parsing routines, when additional information related to the geometry of the anatomy is desired.
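The "secondary calculations of geometric information" the paradigm delegates to the database are quantities like contour areas derived directly from stored vertex lists (in PostGIS, a call such as ST_Area over the stored geometry). A pure-Python stand-in using the shoelace formula on a made-up contour:

```python
# Shoelace formula: area of a closed planar polygon from its vertex list,
# the kind of quantity PostGIS computes in-database from stored contours.

def contour_area(points):
    """Area of a closed polygon given as a list of (x, y) vertex pairs."""
    n = len(points)
    total = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]  # wrap around to close the contour
        total += x1 * y2 - x2 * y1
    return abs(total) / 2.0

# Hypothetical 10 mm x 10 mm axial contour.
square = [(0, 0), (10, 0), (10, 10), (0, 10)]
print(contour_area(square))  # -> 100.0
```

Doing this inside the database avoids shipping the full structure set to the client and re-parsing it just to answer a geometric question.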

  2. A database paradigm for the management of DICOM-RT structure sets using a geographic information system

    Science.gov (United States)

    Shao, Weber; Kupelian, Patrick A.; Wang, Jason; Low, Daniel A.; Ruan, Dan

    2014-03-01

We devise a paradigm for representing the DICOM-RT structure sets in a database management system, in such a way that secondary calculations of geometric information can be performed quickly from the existing contour definitions. The implementation of this paradigm is achieved using the PostgreSQL database system and the PostGIS extension, a geographic information system commonly used for encoding geographical map data. The proposed paradigm eliminates the overhead of retrieving large data records from the database, as well as the need to implement various numerical and data parsing routines, when additional information related to the geometry of the anatomy is desired.

  3. River Basin Information System: Open Environmental Data Management for Research and Decision Making

    Directory of Open Access Journals (Sweden)

    Franziska Zander

    2016-07-01

Full Text Available An open, standardized data management and related service infrastructure is a crucial requirement for seamless storage and exchange of data and information within research projects, for the dissemination of project results and for their application in decision making processes. However, typical project databases often refer to only one research project and are limited to specific purposes. Once implemented, those systems are often not further maintained and updated, rendering the stored information useless once the system stops operating. The River Basin Information System (RBIS) presented here is designed to fit not only the requirements of one research project, but focuses on the generic functions, extensibility and standards compliance typically needed in interdisciplinary environmental research. Developed throughout more than 10 years of research cooperation worldwide, RBIS is designed to manage different types of environmental data, with and without spatial context, together with a rich set of metadata. Besides data management and storage, RBIS provides functions for the visualization, linking, analysis and processing of different types of data to support research, decision making, result dissemination and information discovery for all kinds of users. The focus of this paper is on the description of the technical implementation and the presentation of functions. This is complemented by an overview of example applications and experiences during RBIS development and operation.

  4. An Interoperable Cartographic Database

    OpenAIRE

    Slobodanka Ključanin; Zdravko Galić

    2007-01-01

The concept of producing a prototype of an interoperable cartographic database is explored in this paper, including the possibilities of integration of different geospatial data into the database management system and their visualization on the Internet. The implementation includes vectorization of the concept of a single map page, creation of the cartographic database in an object-relational database, spatial analysis, definition and visualization of the database content in the form of a map on t...

  5. Research of Manufacture Time Management System Based on PLM

    Science.gov (United States)

    Jing, Ni; Juan, Zhu; Liangwei, Zhong

This system targets the machine shops of manufacturing enterprises: it analyzes their business needs and builds a plant management information system for manufacture-time data and its management throughout the manufacturing process. Combining web technology with development methods based on Excel VBA, it constructs a hybrid-model, PLM-based framework for a workshop manufacture-time management information system, and discusses the functionality of the system architecture and the database structure.

  6. The Development of a Benchmark Tool for NoSQL Databases

    Directory of Open Access Journals (Sweden)

    Ion LUNGU

    2013-07-01

Full Text Available The aim of this article is to describe a proposed benchmark methodology and software application targeted at measuring the performance of both SQL and NoSQL databases. These represent the results obtained during PhD research (being actually a part of a larger application intended for NoSQL database management). A reason for aiming at this particular subject is the near-complete lack of benchmarking tools for NoSQL databases, except for YCSB [1] and a benchmark tool made specifically to compare Redis to RavenDB. While there are several well-known benchmarking systems for classical relational databases (starting with the canonical TPC-C, TPC-E and TPC-H), on the other side of the database world such tools are mostly missing and seriously needed.
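The core of any such benchmark is the same loop regardless of engine: run a fixed workload, time it, and report throughput. A minimal sketch of that shape, with sqlite3 standing in for the SQL/NoSQL engines under test (workload size and schema are arbitrary, not from the article):

```python
# Shape of a micro-benchmark loop: fixed workload, wall-clock timing,
# throughput in operations per second.
import sqlite3
import time

def bench_inserts(n):
    """Insert n rows into an in-memory table and return inserts/second."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE kv (k INTEGER PRIMARY KEY, v TEXT)")
    start = time.perf_counter()
    with conn:  # one transaction, so we measure the engine, not fsync-per-row
        conn.executemany("INSERT INTO kv VALUES (?, ?)",
                         ((i, f"value-{i}") for i in range(n)))
    elapsed = time.perf_counter() - start
    return n / elapsed

print(f"{bench_inserts(10_000):.0f} inserts/s")
```

A real tool like the one described would parameterize the workload mix (reads vs. writes, record sizes, concurrency) and swap the connection layer per target database, which is exactly what makes cross-engine comparison hard to do fairly.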

  7. Understanding the patient perspective on research access to national health records databases for conduct of randomized registry trials.

    Science.gov (United States)

    Avram, Robert; Marquis-Gravel, Guillaume; Simard, François; Pacheco, Christine; Couture, Étienne; Tremblay-Gravel, Maxime; Desplantie, Olivier; Malhamé, Isabelle; Bibas, Lior; Mansour, Samer; Parent, Marie-Claude; Farand, Paul; Harvey, Luc; Lessard, Marie-Gabrielle; Ly, Hung; Liu, Geoffrey; Hay, Annette E; Marc Jolicoeur, E

    2018-07-01

Use of health administrative databases is proposed for screening and monitoring of participants in randomized registry trials. However, access to these databases raises privacy concerns. We assessed patients' preferences regarding use of personal information to link their research records with national health databases, as part of a hypothetical randomized registry trial. Cardiology patients were invited to complete an anonymous self-reported survey that ascertained preferences related to the concept of accessing government health databases for research, the type of personal identifiers to be shared and the type of follow-up preferred as participants in a hypothetical trial. A total of 590 responders completed the survey (90% response rate), the majority of whom were Caucasian (90.4%) and male (70.0%), with a median age of 65 years (interquartile range, 8). The majority of responders (80.3%) would grant researchers access to health administrative databases for screening and follow-up. To this end, responders endorsed the recording of their personal identifiers by researchers for future record linkage, including their name (90%) and health insurance number (83.9%), but fewer responders agreed with the recording of their social security number (61.4%, p ...) ... granting researchers access to the administrative databases (OR: 1.69, 95% confidence interval: 1.03-2.90; p=0.04). The majority of cardiology patients surveyed were supportive of the use of their personal identifiers to access administrative health databases and conduct long-term monitoring in the context of a randomized registry trial. Copyright © 2018 Elsevier Ireland Ltd. All rights reserved.

  8. Down syndrome: national conference on patient registries, research databases, and biobanks.

    Science.gov (United States)

    Oster-Granite, Mary Lou; Parisi, Melissa A; Abbeduto, Leonard; Berlin, Dorit S; Bodine, Cathy; Bynum, Dana; Capone, George; Collier, Elaine; Hall, Dan; Kaeser, Lisa; Kaufmann, Petra; Krischer, Jeffrey; Livingston, Michelle; McCabe, Linda L; Pace, Jill; Pfenninger, Karl; Rasmussen, Sonja A; Reeves, Roger H; Rubinstein, Yaffa; Sherman, Stephanie; Terry, Sharon F; Whitten, Michelle Sie; Williams, Stephen; McCabe, Edward R B; Maddox, Yvonne T

    2011-01-01

    A December 2010 meeting, "Down Syndrome: National Conference on Patient Registries, Research Databases, and Biobanks," was jointly sponsored by the Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD) at the National Institutes of Health (NIH) in Bethesda, MD, and the Global Down Syndrome Foundation (GDSF)/Linda Crnic Institute for Down Syndrome based in Denver, CO. Approximately 70 attendees and organizers from various advocacy groups, federal agencies (Centers for Disease Control and Prevention, and various NIH Institutes, Centers, and Offices), members of industry, clinicians, and researchers from various academic institutions were greeted by Drs. Yvonne Maddox, Deputy Director of NICHD, and Edward McCabe, Executive Director of the Linda Crnic Institute for Down Syndrome. They charged the participants to focus on the separate issues of contact registries, research databases, and biobanks through both podium presentations and breakout session discussions. Among the breakout groups for each of the major sessions, participants were asked to generate responses to questions posed by the organizers concerning these three research resources as they related to Down syndrome and then to report back to the group at large with a summary of their discussions. This report represents a synthesis of the discussions and suggested approaches formulated by the group as a whole. Copyright © 2011. Published by Elsevier Inc. All rights reserved.

  9. The FP4026 Research Database on the fundamental period of RC infilled frame structures.

    Science.gov (United States)

    Asteris, Panagiotis G

    2016-12-01

The fundamental period of vibration appears to be one of the most critical parameters for the seismic design of buildings because it strongly affects the destructive impact of the seismic forces. In this article, important research data (entitled FP4026 Research Database (Fundamental Period-4026 cases of infilled frames)) based on a detailed and in-depth analytical research on the fundamental period of reinforced concrete structures is presented. In particular, the values of the fundamental period which have been analytically determined are presented, taking into account the majority of the involved parameters. This database can be extremely valuable for the development of new code proposals for the estimation of the fundamental period of reinforced concrete structures fully or partially infilled with masonry walls.
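The code formulas such a database helps calibrate typically relate the period to building height as T = Ct * H^0.75. A sketch using the Eurocode 8 coefficient for reinforced-concrete moment frames (Ct = 0.075); note this bare-frame coefficient is used here only for illustration, since infilled frames such as those in FP4026 are stiffer and therefore shorter-period:

```python
# Empirical code-type estimate of a building's fundamental period:
# T = Ct * H**0.75, with Ct = 0.075 (Eurocode 8, RC moment frames).

def fundamental_period(height_m, ct=0.075):
    """Approximate fundamental period in seconds for a building of given height."""
    return ct * height_m ** 0.75

for h in (9.0, 21.0, 30.0):  # roughly 3-, 7-, and 10-storey buildings
    print(f"H = {h:4.1f} m  ->  T ~ {fundamental_period(h):.2f} s")
```

Databases of analytically determined periods allow such coefficients, and the exponent itself, to be refitted for fully or partially infilled frames rather than bare ones.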

  10. The FP4026 Research Database on the fundamental period of RC infilled frame structures

    Directory of Open Access Journals (Sweden)

    Panagiotis G. Asteris

    2016-12-01

Full Text Available The fundamental period of vibration appears to be one of the most critical parameters for the seismic design of buildings because it strongly affects the destructive impact of the seismic forces. In this article, important research data (entitled FP4026 Research Database (Fundamental Period-4026 cases of infilled frames)) based on a detailed and in-depth analytical research on the fundamental period of reinforced concrete structures is presented. In particular, the values of the fundamental period which have been analytically determined are presented, taking into account the majority of the involved parameters. This database can be extremely valuable for the development of new code proposals for the estimation of the fundamental period of reinforced concrete structures fully or partially infilled with masonry walls.

  11. Federal databases

    International Nuclear Information System (INIS)

    Welch, M.J.; Welles, B.W.

    1988-01-01

Accident statistics on all modes of transportation are available as risk assessment analytical tools through several federal agencies. This paper reports on an examination of the accident databases conducted through personal contact with the federal staff responsible for administering the database programs. This activity, sponsored by the Department of Energy through Sandia National Laboratories, is an overview of the national accident data on highway, rail, air, and marine shipping. For each mode, the definition or reporting requirements of an accident are determined and the method of entering the accident data into the database is established. Availability of the database to others, ease of access, costs, and whom to contact were prime questions put to each of the database program managers. Additionally, how each agency uses the accident data was of major interest

  12. The Ark: a customizable web-based data management tool for health and medical research.

    Science.gov (United States)

    Bickerstaffe, Adrian; Ranaweera, Thilina; Endersby, Travis; Ellis, Christopher; Maddumarachchi, Sanjaya; Gooden, George E; White, Paul; Moses, Eric K; Hewitt, Alex W; Hopper, John L

    2017-02-15

The Ark is an open-source web-based tool that allows researchers to manage health and medical research data for humans and animals without specialized database skills or programming expertise. The system provides data management for core research information including demographic, phenotype, biospecimen and pedigree data, in addition to supporting typical investigator requirements such as tracking participant consent and correspondence, whilst also being able to generate custom data exports and reports. The Ark is 'study generic' by design and highly configurable via its web interface, allowing researchers to tailor the system to the specific data management requirements of their study. Source code for The Ark can be obtained freely from the website https://github.com/The-Ark-Informatics/ark/ . The source code can be modified and redistributed under the terms of the GNU GPL v3 license. Documentation and a pre-configured virtual appliance can be found at the website http://sphinx.org.au/the-ark/ . adrianb@unimelb.edu.au. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.

  13. On the advancement of highly cited research in China: An analysis of the Highly Cited database.

    Science.gov (United States)

    Li, John Tianci

    2018-01-01

This study investigates the progress of highly cited research in China from 2001 to 2016 through an analysis of the Highly Cited database. The Highly Cited database, compiled by Clarivate Analytics, comprises the world's most influential researchers in the 22 Essential Science Indicator fields as catalogued by the Web of Science, and is considered an international standard for measuring national and institutional highly cited research output. Overall, we found a consistent and substantial increase in Highly Cited Researchers from China during this timespan. The Chinese institutions with the most Highly Cited Researchers (the Chinese Academy of Sciences, Tsinghua University, Peking University, Zhejiang University, the University of Science and Technology of China, and BGI Shenzhen) are all top-ten universities or primary government research institutions. Further evaluation of separate fields of research and of government funding data from the National Natural Science Foundation of China revealed disproportionate growth efficiencies among the separate divisions of the National Natural Science Foundation. The most development occurred in the fields of Chemistry, Materials Sciences, and Engineering, whereas the least development occurred in Economics and Business, Health Sciences, and Life Sciences.

  14. Operational Research Techniques Used for Addressing Biodiversity Objectives into Forest Management: An Overview

    Directory of Open Access Journals (Sweden)

    Marta Ezquerro

    2016-10-01

Full Text Available The integration of biodiversity into forest management has traditionally been a challenge for many researchers and practitioners. In this paper, we provide a survey of forest management papers that use different Operations Research (OR) methods to integrate biodiversity objectives into their planning models. One hundred and seventy-nine references appearing in the ISI Web of Science database in the last 30 years have been categorized and evaluated according to different attributes such as model components, forest management elements, and biodiversity issues. The results show that many OR methods have been applied to deal with this challenging objective: up to 18 OR techniques, divided into four large groups, have each been employed in four or more articles. However, the number of such papers appears to have grown only until 2008. Finally, two clear trends in this set of papers should be highlighted: first, the incorporation of spatial analysis tools into these operational research models and, second, the setting up of hybrid models, which combine different techniques to solve this type of problem.

  15. An Interoperable Cartographic Database

    Directory of Open Access Journals (Sweden)

    Slobodanka Ključanin

    2007-05-01

    Full Text Available The concept of producing a prototype of interoperable cartographic database is explored in this paper, including the possibilities of integration of different geospatial data into the database management system and their visualization on the Internet. The implementation includes vectorization of the concept of a single map page, creation of the cartographic database in an object-relation database, spatial analysis, definition and visualization of the database content in the form of a map on the Internet. 

  16. A Study of the Unified Theory of Acceptance and Use of Technology in the Use of Open Source Database Management Systems

    Directory of Open Access Journals (Sweden)

    Michael Sonny

    2016-06-01

Full Text Available Computer software is currently developing at a remarkable pace, and this development is not limited to software under proprietary licenses; open source software is advancing just as rapidly. This is very encouraging for computer users, particularly in education and among students, because users have several application options to choose from. Open source software also offers products that are generally free, ship with their source code, and grant the freedom to modify and extend them. Open source applications are diverse, including programming tools (PHP, Gambas), database management systems (MySQL, SQLite) and browsers (Mozilla Firefox, Opera). This study examines the acceptance of DBMS (Database Management System) applications such as MySQL and SQLite using UTAUT (Unified Theory of Acceptance and Use of Technology), a model developed by Venkatesh (2003). Certain factors, known as moderating factors, can also influence the effectiveness and efficiency of learning these open source applications. The results are thus expected to support smoother learning of such open-source-based applications.   Keywords: open source, Database Management System (DBMS), moderating

  17. Database Publication Practices

    DEFF Research Database (Denmark)

    Bernstein, P.A.; DeWitt, D.; Heuer, A.

    2005-01-01

There has been a growing interest in improving the publication processes for database research papers. This panel reports on recent changes in those processes and presents an initial cut at historical data for the VLDB Journal and ACM Transactions on Database Systems.

  18. The database system for the management of technical documentations of PWR fuel design project using CD-ROM

    International Nuclear Information System (INIS)

    Park, Bong Sik; Lee, Won Jae; Ryu, Jae Kwon; Jo, In Hang; Chang, Jong Hwa.

    1996-12-01

In this report, the database system developed for the management of technical documentation of the PWR fuel design project using CD-ROM (compact disc read-only memory) is described. The database system, KIRDOCM (KAERI Initial and Reload fuel project technical DOCumentation Management), was developed and installed on a PC using Visual FoxPro 3.0. Descriptions are focused on the user interface of KIRDOCM. The introduction addresses the background and concept of the development. The main chapters describe the user requirements, the analysis of the computing environment, the design and implementation of KIRDOCM, the user's manual, and the maintenance of KIRDOCM for future improvement. The implementation of the KIRDOCM system provides efficiency in the management, maintenance and indexing of technical documents. It is also expected that KIRDOCM may serve as a good reference for applying Visual FoxPro to the development of information management systems. (author). 2 tabs., 13 figs., 8 refs

  19. E3 Staff Database

    Data.gov (United States)

US Agency for International Development — E3 Staff database is maintained by the E3 PDMS (Professional Development & Management Services) office. The database is MySQL. It is manually updated by E3 staff as...

  20. Environmental Sustainability and Energy-Efficient Supply Chain Management: A Review of Research Trends and Proposed Guidelines

    Directory of Open Access Journals (Sweden)

    Piera Centobelli

    2018-01-01

Full Text Available This paper conducts a structured review of the topic of energy efficiency and environmental sustainability in the supply chain management context in order to define research trends and identify research gaps. The review is carried out using the largest databases of peer-reviewed literature (Scopus and Web of Science). A sample of 122 papers focusing on energy-efficient and sustainable supply chain management was selected and analyzed through descriptive and content analysis. The review highlights that, although there is a growing research trend on the topic, several research gaps remain to be covered. These gaps concern the factors influencing energy efficiency and environmental sustainability initiatives, the classification of such initiatives, their impact on supply chain performance, the customer perspective in sustainable and energy-efficient supply chains, and the different technologies supporting these initiatives. The research gaps and research questions identified offer the opportunity to define future research directions and propose guidelines in the field of supply chain management.

  1. Creating databases for biological information: an introduction.

    Science.gov (United States)

    Stein, Lincoln

    2013-06-01

The essence of bioinformatics is dealing with large quantities of information. Whether it be sequencing data, microarray data files, mass spectrometric data (e.g., fingerprints), the catalog of strains arising from an insertional mutagenesis project, or even large numbers of PDF files, there inevitably comes a time when the information can simply no longer be managed with files and directories. This is where databases come into play. This unit briefly reviews the characteristics of several database management systems, including flat file, indexed file, relational, and NoSQL databases. It compares their strengths and weaknesses and offers some general guidelines for selecting an appropriate database management system. Copyright 2013 by John Wiley & Sons, Inc.
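
The flat-file versus relational trade-off the unit reviews can be sketched in a few lines. The strain records and field names below are invented for illustration, and SQLite stands in for whatever relational system a project might choose; this is a minimal sketch, not a recommendation from the unit itself:

```python
import json
import sqlite3

# Flat-file approach: one JSON record per line; every lookup is a full scan.
records = [
    {"strain": "YB-210", "gene": "ADH1", "phenotype": "ethanol tolerant"},
    {"strain": "YB-332", "gene": "SNF1", "phenotype": "glucose derepressed"},
]
flat_file = "\n".join(json.dumps(r) for r in records)

hits = []
for line in flat_file.splitlines():
    rec = json.loads(line)
    if rec["gene"] == "SNF1":
        hits.append(rec)

# Relational approach: the same data in SQLite, retrieved via an index.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE strains (strain TEXT, gene TEXT, phenotype TEXT)")
db.execute("CREATE INDEX idx_gene ON strains (gene)")
db.executemany("INSERT INTO strains VALUES (?, ?, ?)",
               [(r["strain"], r["gene"], r["phenotype"]) for r in records])
rows = db.execute(
    "SELECT strain, phenotype FROM strains WHERE gene = ?", ("SNF1",)
).fetchall()

print(hits[0]["strain"])  # flat-file scan result
print(rows[0])            # indexed relational query result
```

Both approaches return the same record here; the difference only matters once the catalog outgrows what a line-by-line scan can serve comfortably.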

  2. Management of the database originated from individual and environment monitoring carried out in the UNIFESP/HSP complex, SP, Brazil

    International Nuclear Information System (INIS)

    Medeiros, Regina Bitelli; Daros, Kellen Adriana Curci; Almeida, Natalia Correia de; Pires, Silvio Ricardo; Jorge, Luiz Tadeu

    2005-01-01

The Radiological Protection Sector of the Sao Paulo Hospital/Federal University of Sao Paulo, SP, Brazil manages the records of 457 dosemeters. Because users must be informed of their absorbed doses monthly, and individual records must be kept until the individual reaches 75 years of age and for at least 30 years after the end of the individual's occupational activity, it became necessary to build a database and a computerized control to manage the accumulated doses. Between 1991 and 1999, this control was implemented by means of a relational database (Cobol 85, Operating System GCOS 7, ABC Telematic Bull). After this period, when the company responsible for dosimetry began to provide computerized results, the data were stored in a Paradox database (Borland). In 2004, the databases were integrated into a third database developed in Oracle, together with a system that allows institutional Intranet users to consult their annually accumulated doses and the total effective dose accumulated over their working life

  3. Database reliability engineering designing and operating resilient database systems

    CERN Document Server

    Campbell, Laine

    2018-01-01

The infrastructure-as-code revolution in IT is also affecting database administration. With this practical book, developers, system administrators, and junior to mid-level DBAs will learn how the modern practice of site reliability engineering applies to the craft of database architecture and operations. Authors Laine Campbell and Charity Majors provide a framework for professionals looking to join the ranks of today's database reliability engineers (DBRE). You'll begin by exploring core operational concepts that DBREs need to master. Then you'll examine a wide range of database persistence options, including how to implement key technologies to provide resilient, scalable, and performant data storage and retrieval. With a firm foundation in database reliability engineering, you'll be ready to dive into the architecture and operations of any modern database. This book covers: service-level requirements and risk management; building and evolving an architecture for operational visibility; ...

  4. FY 1998 survey report. Examinational research on the construction of body function database; 1998 nendo chosa hokokusho. Shintai kino database no kochiku ni kansuru chosa kenkyu

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1999-03-01

The body function database is intended to help companies supply products and environments friendly to elderly people by providing data on the body functions of the elderly for use in planning, design and production. As the survey method, group measurement was conducted for visual characteristics. For the measurement of action characteristics, moving actions including posture changes were studied, an experimental plan was carried out, and items and methods for group measurement were finally proposed. The database structure was made public at the end of this fiscal year, following pre-publication evaluation carried out with a pilot database. In the study of action characteristics, a verification test was conducted on a small group, on the basis of which the measurement of action characteristics was finally proposed. For the body function database system, operational issues were identified and resolved through trial evaluation of the pilot database, rights issues related to publication were settled, and management methods were prepared. An evaluation version was produced in anticipation of publication. (NEDO)

  5. Solid Waste Projection Model: Database (Version 1.4)

    International Nuclear Information System (INIS)

    Blackburn, C.; Cillan, T.

    1993-09-01

    The Solid Waste Projection Model (SWPM) system is an analytical tool developed by Pacific Northwest Laboratory (PNL) for Westinghouse Hanford Company (WHC). The SWPM system provides a modeling and analysis environment that supports decisions in the process of evaluating various solid waste management alternatives. This document, one of a series describing the SWPM system, contains detailed information regarding the software and data structures utilized in developing the SWPM Version 1.4 Database. This document is intended for use by experienced database specialists and supports database maintenance, utility development, and database enhancement. Those interested in using the SWPM database should refer to the SWPM Database User's Guide. This document is available from the PNL Task M Project Manager (D. L. Stiles, 509-372-4358), the PNL Task L Project Manager (L. L. Armacost, 509-372-4304), the WHC Restoration Projects Section Manager (509-372-1443), or the WHC Waste Characterization Manager (509-372-1193)

  6. A dedicated database system for handling multi-level data in systems biology.

    Science.gov (United States)

    Pornputtapong, Natapol; Wanichthanarak, Kwanjeera; Nilsson, Avlant; Nookaew, Intawat; Nielsen, Jens

    2014-01-01

Advances in high-throughput technologies have enabled extensive generation of multi-level omics data. These data are crucial for systems biology research, though they are complex, heterogeneous, highly dynamic, incomplete and distributed among public databases. This leads to difficulties in data accessibility and often results in errors when data are merged and integrated from varied resources. Therefore, integration and management of systems biological data remain very challenging. To overcome this, we designed and developed a dedicated database system that can serve and solve the vital issues in data management and thereby facilitate data integration, modeling and analysis in systems biology within a single database. In addition, a yeast data repository was implemented as an integrated database environment operated by the database system. Two applications were implemented to demonstrate the extensibility and utilization of the system. Both illustrate how the user can access the database via the web query function and implemented scripts. These scripts are specific to two sample cases: 1) detecting the pheromone pathway in protein interaction networks; and 2) finding metabolic reactions regulated by Snf1 kinase. In this study we present the design of a database system which offers an extensible environment to efficiently capture the majority of biological entities and relations encountered in systems biology. Critical functions and control processes were designed and implemented to ensure consistent, efficient, secure and reliable transactions. The two sample cases on the integrated yeast data clearly demonstrate the value of a single database environment for systems biology research.
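
The first sample case, detecting the pheromone pathway in a protein interaction network, amounts to path-finding over a graph. The sketch below is not the authors' actual script: the yeast gene names are real pheromone-pathway members, but the edge list is hand-written for illustration rather than drawn from the repository:

```python
from collections import deque

# Toy protein-interaction network; the edges are illustrative, not curated data.
interactions = {
    "STE2": ["GPA1", "STE4"],
    "STE4": ["STE5", "STE20"],
    "STE5": ["STE11", "FUS3"],
    "STE11": ["STE7"],
    "STE7": ["FUS3"],
    "FUS3": ["STE12"],
    "STE12": [],
    "GPA1": [],
    "STE20": ["STE11"],
}

def find_path(graph, source, target):
    """Breadth-first search for a shortest interaction path."""
    queue = deque([[source]])
    seen = {source}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no path between the two proteins

print(find_path(interactions, "STE2", "STE12"))
# → ['STE2', 'STE4', 'STE5', 'FUS3', 'STE12']
```

In a real deployment the adjacency map would be populated from a database query rather than a literal, but the traversal logic stays the same.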

  7. Management of research and development project

    International Nuclear Information System (INIS)

    Go, Seok Hwa; Hong Jeong Yu; Hyun, Byeong Hwan

    2010-12-01

This book introduces the management of research and development projects: an overview of R and D project management; preparation of R and D through investigation and analysis of papers, patents and technology trends; project structure; management models; and the management of project scope, time, cost, quality, manpower, communication, risk and procurement. It also covers management of R and D outcomes, management of patent application and registration, and management of technology transfer.

  8. Chess databases as a research vehicle in psychology: Modeling large data.

    Science.gov (United States)

    Vaci, Nemanja; Bilalić, Merim

    2017-08-01

The game of chess has often been used for psychological investigations, particularly in cognitive science. The clear-cut rules and well-defined environment of chess provide a model for investigations of basic cognitive processes, such as perception, memory, and problem solving, while the precise rating system for the measurement of skill has enabled investigations of individual differences and expertise-related effects. In the present study, we focus on another appealing feature of chess, namely the large archival databases associated with the game. The German national chess database presented in this study represents fruitful ground for the investigation of multiple longitudinal research questions, since it collects the data of over 130,000 players and spans over 25 years. The German chess database collects the data of all players, including hobby players, and all tournaments played. This results in a rich and complete collection of the skill, age, and activity of the whole population of chess players in Germany. The database therefore complements the commonly used expertise approach in cognitive science by opening up new possibilities for the investigation of multiple factors that underlie expertise and skill acquisition. Since large datasets are not common in psychology, their introduction also raises the question of optimal and efficient statistical analysis. We offer the database for download and illustrate how it can be used by providing concrete examples and a step-by-step tutorial using different statistical analyses on a range of topics, including skill development over the lifetime, birth cohort effects, effects of activity and inactivity on skill, and gender differences.
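
One of the tutorial topics, skill development over the lifetime, can be approximated by grouping ratings by age bracket. The sketch below uses synthetic records rather than the database itself (which must be downloaded from the authors), with an invented skill curve that rises to a peak in the mid-30s and then declines slowly:

```python
import random
from statistics import mean

random.seed(1)

def rating_at(age):
    # Toy skill curve: linear rise to a peak at 35, slow decline after, plus noise.
    peak = 35
    base = 1400 + 600 * min(age, peak) / peak - 4 * max(0, age - peak)
    return base + random.gauss(0, 50)

# Synthetic (player id, age, rating) rows standing in for database records:
# 200 players, each observed every 7 years from age 18 to 53.
records = [(pid, age, rating_at(age))
           for pid in range(200)
           for age in range(18, 60, 7)]

# Longitudinal summary: mean rating per age bracket.
by_age = {}
for _, age, rating in records:
    by_age.setdefault(age, []).append(rating)

for age in sorted(by_age):
    print(f"age {age}: mean rating {mean(by_age[age]):.0f}")
```

With real data the same grouping would be complicated by cohort and activity effects, which is exactly why the tutorial pairs such summaries with proper statistical models.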

  9. The Astrobiology Habitable Environments Database (AHED)

    Science.gov (United States)

    Lafuente, B.; Stone, N.; Downs, R. T.; Blake, D. F.; Bristow, T.; Fonda, M.; Pires, A.

    2015-12-01

The Astrobiology Habitable Environments Database (AHED) is a central, high-quality, long-term searchable repository for archiving and collaborative sharing of astrobiologically relevant data, including morphological, textural and contextual images and chemical, biochemical, isotopic, sequencing, and mineralogical information. The aim of AHED is to foster long-term innovative research by supporting integration and analysis of diverse datasets in order to: 1) help understand and interpret planetary geology; 2) identify and characterize habitable environments and pre-biotic/biotic processes; 3) interpret returned data from present and past missions; 4) provide a citable database of NASA-funded published and unpublished data (after an agreed-upon embargo period). AHED uses the online open-source software "The Open Data Repository's Data Publisher" (ODR - http://www.opendatarepository.org) [1], which provides a user-friendly interface that research teams or individual scientists can use to design, populate and manage their own databases according to the characteristics of their data and the need to share data with collaborators or the broader scientific community. This platform can also be used as a laboratory notebook. The database will have the capability to import and export in a variety of standard formats. Advanced graphics will be implemented, including 3D graphing, multi-axis graphs, error bars, and similar scientific data functions, together with advanced online tools for data analysis (e.g., the statistical package R). A permissions system will be put in place so that, while data are being actively collected and interpreted, they will remain proprietary. A citation system will allow research data to be used and appropriately referenced by other researchers after the data are made public. This project is supported by the Science-Enabling Research Activity (SERA) and NASA NNX11AP82A, Mars Science Laboratory Investigations. [1] Nate et al. (2015) AGU, submitted.

  10. Product Licenses Database Application

    CERN Document Server

    Tonkovikj, Petar

    2016-01-01

The goal of this project is to organize and centralize the data about software tools available to CERN employees, as well as to provide a system that simplifies the license management process by providing information about the available licenses and their expiry dates. The project development process consists of two steps: modeling the products (software tools), product licenses, legal agreements and other data related to these entities in a relational database, and developing the front-end user interface through which the user interacts with the database. The result is an ASP.NET MVC web application with interactive views for displaying and managing the data in the underlying database.
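
Although the application itself is ASP.NET MVC, the underlying idea, a relational model linking products to licenses with expiry dates, can be sketched in a few lines of SQL. The table and column names below are assumptions for illustration, not the project's actual schema, and Python's sqlite3 is used only to keep the example self-contained:

```python
import sqlite3
from datetime import date

# Hypothetical product/license schema; names are illustrative assumptions.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE product (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE license (
        id INTEGER PRIMARY KEY,
        product_id INTEGER REFERENCES product(id),
        seats INTEGER,
        expires TEXT  -- ISO date string, so lexical order = date order
    );
""")
db.execute("INSERT INTO product VALUES (1, 'AnalysisSuite')")
db.execute("INSERT INTO license VALUES (1, 1, 25, '2016-09-30')")
db.execute("INSERT INTO license VALUES (2, 1, 10, '2017-03-31')")

# Core query of a license manager: which licenses expire by a cutoff date?
cutoff = date(2016, 12, 31).isoformat()
rows = db.execute("""
    SELECT p.name, l.seats, l.expires
    FROM license l JOIN product p ON p.id = l.product_id
    WHERE l.expires <= ?
    ORDER BY l.expires
""", (cutoff,)).fetchall()
print(rows)  # [('AnalysisSuite', 25, '2016-09-30')]
```

Storing dates in ISO 8601 text form is a common SQLite idiom because string comparison then matches chronological order; a server-grade database would use a native date type instead.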

  11. Governance and oversight of researcher access to electronic health data: the role of the Independent Scientific Advisory Committee for MHRA database research, 2006-2015.

    Science.gov (United States)

    Waller, P; Cassell, J A; Saunders, M H; Stevens, R

    2017-03-01

In order to promote understanding of UK governance and assurance relating to electronic health records research, we present and discuss the role of the Independent Scientific Advisory Committee (ISAC) for MHRA database research in evaluating protocols proposing the use of the Clinical Practice Research Datalink. We describe the development of the Committee's activities between 2006 and 2015, alongside the growth in data linkage and wider national electronic health records programmes, including the application and assessment processes and our approach to undertaking this work. Our model can provide independence, challenge and support to data providers such as the Clinical Practice Research Datalink database, which has been used for well over 1,000 medical research projects. ISAC's role in scientific oversight ensures that feasible and scientifically acceptable plans are in place, while its combination of lay and professional membership addresses governance issues, protecting the integrity of the database and maintaining public confidence.

  12. The web-enabled database of JRC-EC, a useful tool for managing European Gen IV materials data

    International Nuclear Information System (INIS)

    Over, H.H.; Dietz, W.

    2008-01-01

Materials and document databases are important tools for conserving the knowledge and experimental materials data of European R and D projects. A web-enabled application guarantees fast access to these data. In combination with analysis tools, the experimental data are used for, e.g., mechanical design, construction and lifetime predictions of complex components. The effective and efficient handling of large amounts of generic and detailed materials data, with regard to properties related to, e.g., fabrication processes, joining techniques, irradiation or aging, is one of the basic elements of data management within ongoing nuclear safety and design related European research projects and networks. The paper describes the structure and functionality of Mat-DB and gives examples of how these tools can be used for the management and evaluation of materials data from European (national or multi-national) R and D activities or for future reactor types such as the EURATOM FP7 Generation IV reactor types or heavy liquid metal cooled reactors

  13. Translation from the collaborative OSM database to cartography

    Science.gov (United States)

    Hayat, Flora

    2018-05-01

The OpenStreetMap (OSM) database includes original items that are very useful for geographical analysis and for creating thematic maps. Contributors record in the open database various themes regarding amenities, leisure, transport, buildings and boundaries. The Michelin mapping department develops map prototypes to test the feasibility of mapping based on OSM. A research project is under development to translate the OSM database structure into a database structure fitted to Michelin graphic guidelines; it aims at defining the right structure for Michelin's uses. The project relies on the analysis of semantic and geometric heterogeneities in OSM data. To that end, Michelin implements methods to transform the input geographical database into a cartographic image dedicated to specific uses (routing and tourist maps). The paper focuses on the mapping tools available to produce a personalised spatial database. Based on the processed data, paper and Web maps can be displayed. Two prototypes are described in this article: a vector-tile web map and a mapping method for producing paper maps at a regional scale. The vector-tile mapping method offers easy navigation within the map and within graphic and thematic guidelines. Paper maps can be partly drawn automatically; drawing automation and data management are part of the map creation, as is the final hand-drawing phase.
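
Translating OSM tags into an in-house cartographic style can be sketched as a rule table that maps tag combinations to style identifiers. The rules and style names below are illustrative assumptions, not Michelin's actual style chart:

```python
# Hypothetical mapping from OSM key/value tags to style identifiers.
# First matching rule wins; the style names are invented for illustration.
STYLE_RULES = [
    ({"highway": "motorway"}, "road-major-red"),
    ({"highway": "residential"}, "road-minor-white"),
    ({"waterway": "river"}, "water-line-blue"),
    ({"amenity": "restaurant"}, "poi-fork-icon"),
]

def style_for(tags):
    """Return the first style whose rule tags all match the feature's tags."""
    for rule, style in STYLE_RULES:
        if all(tags.get(k) == v for k, v in rule.items()):
            return style
    return "default"  # fallback for untagged or unrecognized features

feature = {"highway": "motorway", "ref": "A6", "lanes": "3"}
print(style_for(feature))  # road-major-red
```

A production renderer would also handle the semantic heterogeneity the paper mentions, e.g. contributors tagging the same real-world object in different ways, typically by normalizing tags before the style lookup.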

  14. Database and Registry Research in Orthopaedic Surgery: Part I: Claims-Based Data.

    Science.gov (United States)

    Pugely, Andrew J; Martin, Christopher T; Harwood, Jared; Ong, Kevin L; Bozic, Kevin J; Callaghan, John J

    2015-08-05

The use of large-scale national databases for observational research in orthopaedic surgery has grown substantially in the last decade, and the data sets can be broadly categorized as either administrative claims or clinical registries. Administrative claims data comprise the billing records associated with the delivery of health-care services. Orthopaedic researchers have used both government and private claims to describe temporal trends, geographic variation, disparities, complications, outcomes, and resource utilization associated with both musculoskeletal disease and treatment. Medicare claims comprise one of the most robust data sets used to perform orthopaedic research, with >45 million beneficiaries. The U.S. government, through the Centers for Medicare & Medicaid Services, often uses these data to drive changes in health policy. Private claims data used in orthopaedic research often comprise more heterogeneous patient demographic samples, but allow longitudinal analysis similar to that offered by Medicare claims. Discharge databases, such as the U.S. National Inpatient Sample, provide a wide national sampling of inpatient hospital stays from all payers and allow analysis of associated adverse events and resource utilization. Administrative claims data benefit from the high patient numbers obtained through a majority of hospitals. Using claims, it is possible to follow patients longitudinally through encounters irrespective of the location of the institution delivering health care. Disadvantages include the lack of precision of ICD-9 (International Classification of Diseases, Ninth Revision) coding schemes, and much of these data are expensive to purchase, complicated to organize, and labor-intensive to manipulate, often requiring trained specialists for analysis. Given the changing health-care environment, it is likely that databases will provide valuable information that has the potential to influence clinical practice improvement and health policy for

  15. A data model and database for high-resolution pathology analytical image informatics.

    Science.gov (United States)

    Wang, Fusheng; Kong, Jun; Cooper, Lee; Pan, Tony; Kurc, Tahsin; Chen, Wenjin; Sharma, Ashish; Niedermayr, Cristobal; Oh, Tae W; Brat, Daniel; Farris, Alton B; Foran, David J; Saltz, Joel

    2011-01-01

    The systematic analysis of imaged pathology specimens often results in a vast amount of morphological information at both the cellular and sub-cellular scales. While microscopy scanners and computerized analysis are capable of capturing and analyzing data rapidly, microscopy image data remain underutilized in research and clinical settings. One major obstacle which tends to reduce wider adoption of these new technologies throughout the clinical and scientific communities is the challenge of managing, querying, and integrating the vast amounts of data resulting from the analysis of large digital pathology datasets. This paper presents a data model, which addresses these challenges, and demonstrates its implementation in a relational database system. This paper describes a data model, referred to as Pathology Analytic Imaging Standards (PAIS), and a database implementation, which are designed to support the data management and query requirements of detailed characterization of micro-anatomic morphology through many interrelated analysis pipelines on whole-slide images and tissue microarrays (TMAs). (1) Development of a data model capable of efficiently representing and storing virtual slide related image, annotation, markup, and feature information. (2) Development of a database, based on the data model, capable of supporting queries for data retrieval based on analysis and image metadata, queries for comparison of results from different analyses, and spatial queries on segmented regions, features, and classified objects. The work described in this paper is motivated by the challenges associated with characterization of micro-scale features for comparative and correlative analyses involving whole-slides tissue images and TMAs. Technologies for digitizing tissues have advanced significantly in the past decade. Slide scanners are capable of producing high-magnification, high-resolution images from whole slides and TMAs within several minutes. Hence, it is becoming

  16. A data model and database for high-resolution pathology analytical image informatics

    Directory of Open Access Journals (Sweden)

    Fusheng Wang

    2011-01-01

    Full Text Available Background: The systematic analysis of imaged pathology specimens often results in a vast amount of morphological information at both the cellular and sub-cellular scales. While microscopy scanners and computerized analysis are capable of capturing and analyzing data rapidly, microscopy image data remain underutilized in research and clinical settings. One major obstacle which tends to reduce wider adoption of these new technologies throughout the clinical and scientific communities is the challenge of managing, querying, and integrating the vast amounts of data resulting from the analysis of large digital pathology datasets. This paper presents a data model, which addresses these challenges, and demonstrates its implementation in a relational database system. Context: This paper describes a data model, referred to as Pathology Analytic Imaging Standards (PAIS, and a database implementation, which are designed to support the data management and query requirements of detailed characterization of micro-anatomic morphology through many interrelated analysis pipelines on whole-slide images and tissue microarrays (TMAs. Aims: (1 Development of a data model capable of efficiently representing and storing virtual slide related image, annotation, markup, and feature information. (2 Development of a database, based on the data model, capable of supporting queries for data retrieval based on analysis and image metadata, queries for comparison of results from different analyses, and spatial queries on segmented regions, features, and classified objects. Settings and Design: The work described in this paper is motivated by the challenges associated with characterization of micro-scale features for comparative and correlative analyses involving whole-slides tissue images and TMAs. Technologies for digitizing tissues have advanced significantly in the past decade. Slide scanners are capable of producing high-magnification, high-resolution images from whole

  17. Development of a functional, internet-accessible department of surgery outcomes database.

    Science.gov (United States)

    Newcomb, William L; Lincourt, Amy E; Gersin, Keith; Kercher, Kent; Iannitti, David; Kuwada, Tim; Lyons, Cynthia; Sing, Ronald F; Hadzikadic, Mirsad; Heniford, B Todd; Rucho, Susan

    2008-06-01

    The need for surgical outcomes data is increasing due to pressure from insurance companies, patients, and the need for surgeons to keep their own "report card". Current data management systems are limited by inability to stratify outcomes based on patients, surgeons, and differences in surgical technique. Surgeons along with research and informatics personnel from an academic, hospital-based Department of Surgery and a state university's Department of Information Technology formed a partnership to develop a dynamic, internet-based, clinical data warehouse. A five-component model was used: data dictionary development, web application creation, participating center education and management, statistics applications, and data interpretation. A data dictionary was developed from a list of data elements to address needs of research, quality assurance, industry, and centers of excellence. A user-friendly web interface was developed with menu-driven check boxes, multiple electronic data entry points, direct downloads from hospital billing information, and web-based patient portals. Data were collected on a Health Insurance Portability and Accountability Act-compliant server with a secure firewall. Protected health information was de-identified. Data management strategies included automated auditing, on-site training, a trouble-shooting hotline, and Institutional Review Board oversight. Real-time, daily, monthly, and quarterly data reports were generated. Fifty-eight publications and 109 abstracts have been generated from the database during its development and implementation. Seven national academic departments now use the database to track patient outcomes. The development of a robust surgical outcomes database requires a combination of clinical, informatics, and research expertise. 
Benefits of surgeon involvement in outcomes research include: tracking individual performance, patient safety, surgical research, legal defense, and the ability to provide accurate information

  18. Draft secure medical database standard.

    Science.gov (United States)

    Pangalos, George

    2002-01-01

    Medical database security is a particularly important issue for all Healthcare establishments. Medical information systems are intended to support a wide range of pertinent health issues today, for example: assure the quality of care, support effective management of the health services institutions, monitor and contain the cost of care, implement technology into care without violating social values, ensure the equity and availability of care, preserve humanity despite the proliferation of technology etc.. In this context, medical database security aims primarily to support: high availability, accuracy and consistency of the stored data, the medical professional secrecy and confidentiality, and the protection of the privacy of the patient. These properties, though of technical nature, basically require that the system is actually helpful for medical care and not harmful to patients. These later properties require in turn not only that fundamental ethical principles are not violated by employing database systems, but instead, are effectively enforced by technical means. This document reviews the existing and emerging work on the security of medical database systems. It presents in detail the related problems and requirements related to medical database security. It addresses the problems of medical database security policies, secure design methodologies and implementation techniques. It also describes the current legal framework and regulatory requirements for medical database security. The issue of medical database security guidelines is also examined in detailed. The current national and international efforts in the area are studied. It also gives an overview of the research work in the area. The document also presents in detail the most complete to our knowledge set of security guidelines for the development and operation of medical database systems.

  19. Databases of the marine metagenomics

    KAUST Repository

    Mineta, Katsuhiko

    2015-10-28

    The metagenomic data obtained from marine environments is significantly useful for understanding marine microbial communities. In comparison with the conventional amplicon-based approach of metagenomics, the recent shotgun sequencing-based approach has become a powerful tool that provides an efficient way of grasping a diversity of the entire microbial community at a sampling point in the sea. However, this approach accelerates accumulation of the metagenome data as well as increase of data complexity. Moreover, when metagenomic approach is used for monitoring a time change of marine environments at multiple locations of the seawater, accumulation of metagenomics data will become tremendous with an enormous speed. Because this kind of situation has started becoming of reality at many marine research institutions and stations all over the world, it looks obvious that the data management and analysis will be confronted by the so-called Big Data issues such as how the database can be constructed in an efficient way and how useful knowledge should be extracted from a vast amount of the data. In this review, we summarize the outline of all the major databases of marine metagenome that are currently publically available, noting that database exclusively on marine metagenome is none but the number of metagenome databases including marine metagenome data are six, unexpectedly still small. We also extend our explanation to the databases, as reference database we call, that will be useful for constructing a marine metagenome database as well as complementing important information with the database. Then, we would point out a number of challenges to be conquered in constructing the marine metagenome database.

  20. Managing Large Scale Project Analysis Teams through a Web Accessible Database

    Science.gov (United States)

    O'Neil, Daniel A.

    2008-01-01

    Large scale space programs analyze thousands of requirements while mitigating safety, performance, schedule, and cost risks. These efforts involve a variety of roles with interdependent use cases and goals. For example, study managers and facilitators identify ground-rules and assumptions for a collection of studies required for a program or project milestone. Task leaders derive product requirements from the ground rules and assumptions and describe activities to produce needed analytical products. Disciplined specialists produce the specified products and load results into a file management system. Organizational and project managers provide the personnel and funds to conduct the tasks. Each role has responsibilities to establish information linkages and provide status reports to management. Projects conduct design and analysis cycles to refine designs to meet the requirements and implement risk mitigation plans. At the program level, integrated design and analysis cycles studies are conducted to eliminate every 'to-be-determined' and develop plans to mitigate every risk. At the agency level, strategic studies analyze different approaches to exploration architectures and campaigns. This paper describes a web-accessible database developed by NASA to coordinate and manage tasks at three organizational levels. Other topics in this paper cover integration technologies and techniques for process modeling and enterprise architectures.

  1. CMO: Cruise Metadata Organizer for JAMSTEC Research Cruises

    Science.gov (United States)

    Fukuda, K.; Saito, H.; Hanafusa, Y.; Vanroosebeke, A.; Kitayama, T.

    2011-12-01

    JAMSTEC's Data Research Center for Marine-Earth Sciences manages and distributes a wide variety of observational data and samples obtained from JAMSTEC research vessels and deep sea submersibles. Generally, metadata are essential to identify data and samples were obtained. In JAMSTEC, cruise metadata include cruise information such as cruise ID, name of vessel, research theme, and diving information such as dive number, name of submersible and position of diving point. They are submitted by chief scientists of research cruises in the Microsoft Excel° spreadsheet format, and registered into a data management database to confirm receipt of observational data files, cruise summaries, and cruise reports. The cruise metadata are also published via "JAMSTEC Data Site for Research Cruises" within two months after end of cruise. Furthermore, these metadata are distributed with observational data, images and samples via several data and sample distribution websites after a publication moratorium period. However, there are two operational issues in the metadata publishing process. One is that duplication efforts and asynchronous metadata across multiple distribution websites due to manual metadata entry into individual websites by administrators. The other is that differential data types or representation of metadata in each website. To solve those problems, we have developed a cruise metadata organizer (CMO) which allows cruise metadata to be connected from the data management database to several distribution websites. CMO is comprised of three components: an Extensible Markup Language (XML) database, an Enterprise Application Integration (EAI) software, and a web-based interface. The XML database is used because of its flexibility for any change of metadata. Daily differential uptake of metadata from the data management database to the XML database is automatically processed via the EAI software. Some metadata are entered into the XML database using the web

  2. Benchmarking database performance for genomic data.

    Science.gov (United States)

    Khushi, Matloob

    2015-06-01

    Genomic regions represent features such as gene annotations, transcription factor binding sites and epigenetic modifications. Performing various genomic operations such as identifying overlapping/non-overlapping regions or nearest gene annotations are common research needs. The data can be saved in a database system for easy management, however, there is no comprehensive database built-in algorithm at present to identify overlapping regions. Therefore I have developed a novel region-mapping (RegMap) SQL-based algorithm to perform genomic operations and have benchmarked the performance of different databases. Benchmarking identified that PostgreSQL extracts overlapping regions much faster than MySQL. Insertion and data uploads in PostgreSQL were also better, although general searching capability of both databases was almost equivalent. In addition, using the algorithm pair-wise, overlaps of >1000 datasets of transcription factor binding sites and histone marks, collected from previous publications, were reported and it was found that HNF4G significantly co-locates with cohesin subunit STAG1 (SA1).Inc. © 2015 Wiley Periodicals, Inc.

  3. DataBase on Demand

    International Nuclear Information System (INIS)

    Aparicio, R Gaspar; Gomez, D; Wojcik, D; Coz, I Coterillo

    2012-01-01

    At CERN a number of key database applications are running on user-managed MySQL database services. The database on demand project was born out of an idea to provide the CERN user community with an environment to develop and run database services outside of the actual centralised Oracle based database services. The Database on Demand (DBoD) empowers the user to perform certain actions that had been traditionally done by database administrators, DBA's, providing an enterprise platform for database applications. It also allows the CERN user community to run different database engines, e.g. presently open community version of MySQL and single instance Oracle database server. This article describes a technology approach to face this challenge, a service level agreement, the SLA that the project provides, and an evolution of possible scenarios.

  4. Waste management research abstracts. Information on radioactive waste management research in progress or planned. Vol. 28

    International Nuclear Information System (INIS)

    2003-11-01

    This issue contains 184 abstracts that describe research in progress in the field of radioactive waste management. The research abstracts contained in the Waste Management Research Abstracts Volume 28 (WMRA 28) were collected between October 1, 2002 and September 30, 2003. The abstracts reflect research in progress, or planned, in the field of radioactive waste management. They present ongoing work in various countries and international organizations. Although the abstracts are indexed by country, some programmes are actually the result of cooperation among several countries. Indeed, a primary reason for providing this compilation of programmes, institutions and scientists engaged in research into radioactive waste management is to increase international co-operation and facilitate communications

  5. Specialist Bibliographic Databases.

    Science.gov (United States)

    Gasparyan, Armen Yuri; Yessirkepov, Marlen; Voronov, Alexander A; Trukhachev, Vladimir I; Kostyukova, Elena I; Gerasimov, Alexey N; Kitas, George D

    2016-05-01

    Specialist bibliographic databases offer essential online tools for researchers and authors who work on specific subjects and perform comprehensive and systematic syntheses of evidence. This article presents examples of the established specialist databases, which may be of interest to those engaged in multidisciplinary science communication. Access to most specialist databases is through subscription schemes and membership in professional associations. Several aggregators of information and database vendors, such as EBSCOhost and ProQuest, facilitate advanced searches supported by specialist keyword thesauri. Searches of items through specialist databases are complementary to those through multidisciplinary research platforms, such as PubMed, Web of Science, and Google Scholar. Familiarizing with the functional characteristics of biomedical and nonbiomedical bibliographic search tools is mandatory for researchers, authors, editors, and publishers. The database users are offered updates of the indexed journal lists, abstracts, author profiles, and links to other metadata. Editors and publishers may find particularly useful source selection criteria and apply for coverage of their peer-reviewed journals and grey literature sources. These criteria are aimed at accepting relevant sources with established editorial policies and quality controls.

  6. Specialist Bibliographic Databases

    Science.gov (United States)

    2016-01-01

    Specialist bibliographic databases offer essential online tools for researchers and authors who work on specific subjects and perform comprehensive and systematic syntheses of evidence. This article presents examples of the established specialist databases, which may be of interest to those engaged in multidisciplinary science communication. Access to most specialist databases is through subscription schemes and membership in professional associations. Several aggregators of information and database vendors, such as EBSCOhost and ProQuest, facilitate advanced searches supported by specialist keyword thesauri. Searches of items through specialist databases are complementary to those through multidisciplinary research platforms, such as PubMed, Web of Science, and Google Scholar. Familiarizing with the functional characteristics of biomedical and nonbiomedical bibliographic search tools is mandatory for researchers, authors, editors, and publishers. The database users are offered updates of the indexed journal lists, abstracts, author profiles, and links to other metadata. Editors and publishers may find particularly useful source selection criteria and apply for coverage of their peer-reviewed journals and grey literature sources. These criteria are aimed at accepting relevant sources with established editorial policies and quality controls. PMID:27134485

  7. Data Management: New Tools, New Organization, and New Skills in a French Research Institute

    Directory of Open Access Journals (Sweden)

    Caroline Martin

    2017-04-01

    Full Text Available In the context of E-science and open access, visibility and impact of scientific results and data have become important aspects for spreading information to users and to the society in general. The objective of this general trend of the economy is to feed the innovation process and create economic value. In our institute, the French National Research Institute of Science and Technology for Environment and Agriculture, Irstea, the department in charge of scientific and technical information, with the help of other professionals (Scientists, IT professionals, ethics advisors…, has recently developed suitable services for the researchers and for their needs concerning the data management in order to answer European recommendations for open data. This situation has demanded to review the different workflows between databases, to question the organizational aspects between skills, occupations, and departments in the institute. In fact, the data management involves all professionals and researchers to asset their working ways together.

  8. Study on parallel and distributed management of RS data based on spatial database

    Science.gov (United States)

    Chen, Yingbiao; Qian, Qinglan; Wu, Hongqiao; Liu, Shijin

    2009-10-01

    With the rapid development of current earth-observing technology, RS image data storage, management and information publication become a bottle-neck for its appliance and popularization. There are two prominent problems in RS image data storage and management system. First, background server hardly handle the heavy process of great capacity of RS data which stored at different nodes in a distributing environment. A tough burden has put on the background server. Second, there is no unique, standard and rational organization of Multi-sensor RS data for its storage and management. And lots of information is lost or not included at storage. Faced at the above two problems, the paper has put forward a framework for RS image data parallel and distributed management and storage system. This system aims at RS data information system based on parallel background server and a distributed data management system. Aiming at the above two goals, this paper has studied the following key techniques and elicited some revelatory conclusions. The paper has put forward a solid index of "Pyramid, Block, Layer, Epoch" according to the properties of RS image data. With the solid index mechanism, a rational organization for different resolution, different area, different band and different period of Multi-sensor RS image data is completed. In data storage, RS data is not divided into binary large objects to be stored at current relational database system, while it is reconstructed through the above solid index mechanism. A logical image database for the RS image data file is constructed. In system architecture, this paper has set up a framework based on a parallel server of several common computers. Under the framework, the background process is divided into two parts, the common WEB process and parallel process.

  9. Development and Field Test of a Real-Time Database in the Korean Smart Distribution Management System

    Directory of Open Access Journals (Sweden)

    Sang-Yun Yun

    2014-03-01

    Full Text Available Recently, a distribution management system (DMS that can conduct periodical system analysis and control by mounting various applications programs has been actively developed. In this paper, we summarize the development and demonstration of a database structure that can perform real-time system analysis and control of the Korean smart distribution management system (KSDMS. The developed database structure consists of a common information model (CIM-based off-line database (DB, a physical DB (PDB for DB establishment of the operating server, a real-time DB (RTDB for real-time server operation and remote terminal unit data interconnection, and an application common model (ACM DB for running application programs. The ACM DB for real-time system analysis and control of the application programs was developed by using a parallel table structure and a link list model, thereby providing fast input and output as well as high execution speed of application programs. Furthermore, the ACM DB was configured with hierarchical and non-hierarchical data models to reflect the system models that increase the DB size and operation speed through the reduction of the system, of which elements were unnecessary for analysis and control. The proposed database model was implemented and tested at the Gochaing and Jeju offices using a real system. Through data measurement of the remote terminal units, and through the operation and control of the application programs using the measurement, the performance, speed, and integrity of the proposed database model were validated, thereby demonstrating that this model can be applied to real systems.

  10. Knowledge Exchange and Management Research

    DEFF Research Database (Denmark)

    Bager, Torben

    2018-01-01

    for ‘interesting’ discoveries has a potential to lift off papers with a high level of scientific rigor as well as a high level of relevance for practice. Originality: An outcome focus on the relationship between knowledge exchange activities and management research is to our knowledge new in the debate about......Purpose: The growing involvement of management researchers in knowledge exchange activities and collaborative research does not seem to be reflected in a growing academic output. The purpose of this paper is to explore barriers for academic output from these activities as well as the potential...... derived from knowledge exchange activities and Mode 2 research into academic papers such as low priority of case study research in leading management journals, a growing practice orientation in the research funding systems, methodological challenges due to limited researcher control, and disincentives...

  11. An object-oriented framework for managing cooperating legacy databases

    NARCIS (Netherlands)

    Balsters, H; de Brock, EO

    2003-01-01

    We describe a general semantic framework for precise specification of so-called database federations. A database federation provides for tight coupling of a collection of heterogeneous legacy databases into a global integrated system. Our approach to database federation is based on the UML/OCL data

  12. The use of database management systems and artificial intelligence in automating the planning of optical navigation pictures

    Science.gov (United States)

    Davis, Robert P.; Underwood, Ian M.

    1987-01-01

    The use of database management systems (DBMS) and AI to minimize human involvement in the planning of optical navigation pictures for interplanetary space probes is discussed, with application to the Galileo mission. Parameters characterizing the desirability of candidate pictures, and the program generating them, are described. How these parameters automatically build picture records in a database, and the definition of the database structure, are then discussed. The various rules, priorities, and constraints used in selecting pictures are also described. An example is provided of an expert system, written in Prolog, for automatically performing the selection process.

  13. Database security - how can developers and DBAs do it together and what can other Service Managers learn from it

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    This talk gives an overview of security threats affecting databases, preventive measures that we are taking at CERN and best practices in the industry. The presentation will describe how generic the threats are and how can other service managers profit from the database experience to protect other systems.

  14. Developing Plugin e-DDC as an Additional Application for Senayan Library Management System with PHP Language Programming and MySQL Database

    Directory of Open Access Journals (Sweden)

    Mohamad Rotmianto

    2018-01-01

    Full Text Available Between Senayan Library Management System or usually called SLiMS and e-DDC (electronic Dewey Decimal Classification now is the most popular library application software which has a lot of user, because it is simple to use, has an updating guarantee from its developers and off course both of them are free of charge. Although SLiMS and e-DDC are different application at all, as practically they are recommended to be used togather for library management. SLiMS is used for library automation and e-DDC is to find collection’s classification. Many users of SLiMS and e-DDC ever give suggestions about developing SLiMS with e-DDC include in its database, and then librarians will be easier to manage their libraries. Because of that suggestion, finally a plugin as an additional application for SLiMS has been created and developed. That plugin was build with PHP language programming and MySQL database. The purpose of this paper is to enrich about reference of development of library application, especially those based on Free Open Source Software (FOSS. This paper use Research and Development Methods. And the result of this paper is Plugin e-DDC for SLiMS which has released on May, 2nd 2015, in order to celebrate “National Education Day”.

  15. Research in Hospitality Management

    African Journals Online (AJOL)

    Research in Hospitality Management (RHM) is a peer-reviewed journal ... to the quintessential managerial areas of Finance, Human Resources, Operations, ... competency and career development of hospitality management students · EMAIL ...

  16. An online database for informing ecological network models: http://kelpforest.ucsc.edu.

    Science.gov (United States)

    Beas-Luna, Rodrigo; Novak, Mark; Carr, Mark H; Tinker, Martin T; Black, August; Caselle, Jennifer E; Hoban, Michael; Malone, Dan; Iles, Alison

    2014-01-01

    Ecological network models and analyses are recognized as valuable tools for understanding the dynamics and resiliency of ecosystems, and for informing ecosystem-based approaches to management. However, few databases exist that can provide the life history, demographic and species interaction information necessary to parameterize ecological network models. Faced with the difficulty of synthesizing the information required to construct models for kelp forest ecosystems along the West Coast of North America, we developed an online database (http://kelpforest.ucsc.edu/) to facilitate the collation and dissemination of such information. Many of the database's attributes are novel yet the structure is applicable and adaptable to other ecosystem modeling efforts. Information for each taxonomic unit includes stage-specific life history, demography, and body-size allometries. Species interactions include trophic, competitive, facilitative, and parasitic forms. Each data entry is temporally and spatially explicit. The online data entry interface allows researchers anywhere to contribute and access information. Quality control is facilitated by attributing each entry to unique contributor identities and source citations. The database has proven useful as an archive of species and ecosystem-specific information in the development of several ecological network models, for informing management actions, and for education purposes (e.g., undergraduate and graduate training). To facilitate adaptation of the database by other researches for other ecosystems, the code and technical details on how to customize this database and apply it to other ecosystems are freely available and located at the following link (https://github.com/kelpforest-cameo/databaseui).

  17. REALIZING BUSINESS PROCESS MANAGEMENT BY HELP OF A PROCESS MAPPING DATABASE TOOL

    CERN Document Server

    Vergili, Ceren

    2016-01-01

    In a typical business sector, processes are the building blocks of achievement, and a considerable share of them are business processes; managing them therefore calls for a dedicated management discipline. Business Process Management (BPM) is a discipline that combines the modelling, automation, execution, control, measurement, and optimization of processes in light of enterprise goals, spanning systems, employees, customers, and partners. CERN's EN-HE-HM section wishes to apply the BPM discipline to improve the technical, administrative and managerial actions needed to supply and maintain appropriate CERN industrial transport, handling and lifting equipment. For this reason, a Process Mapping Database Tool was created to develop a common understanding of how section members can visualize their processes, agree on quality standards, and identify improvements. It provides management support by establishing Process Charts...
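    The record describes a tool that maps processes into charts with agreed quality standards. A minimal sketch of the kind of record such a process-mapping database might hold is shown below; the class and field names are assumptions for illustration only, not the actual data model of the CERN tool.

    ```python
    from dataclasses import dataclass, field
    from typing import List

    # Hypothetical process-mapping records: each chart groups named steps,
    # each with an owner and an agreed quality standard. All names here
    # are illustrative assumptions.
    @dataclass
    class ProcessStep:
        name: str
        owner: str
        quality_standard: str

    @dataclass
    class ProcessChart:
        process_name: str
        steps: List[ProcessStep] = field(default_factory=list)

    chart = ProcessChart("handling-equipment maintenance")
    chart.steps.append(ProcessStep("inspection", "section member", "agreed standard A"))
    chart.steps.append(ProcessStep("repair", "section member", "agreed standard B"))
    assert len(chart.steps) == 2
    ```

    Keeping each step's quality standard alongside the step itself is one way a shared chart lets a section "agree on quality standards and on how to improve".
    
    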

  18. Database Description - eSOL | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available: Database Description. General information of database: Database name: eSOL. Alternative nam… [truncated]. Creator affiliation: The Research and Development of Biological Databases Project, National Institute of Genet… …nology, 4259 Nagatsuta-cho, Midori-ku, Yokohama, Kanagawa 226-8501, Japan. Tel.: +81-45-924-5785. Database classification: Protein sequence databases - Protein properties. Organism: Escherichia coli (Taxonomy ID: 562). Reference: … U S A. 2009 Mar 17;106(11):4201-6. External links: original website information, database maintenance site.

  19. Research on Design Information Management System for Leather Goods

    Science.gov (United States)

    Lu, Lei; Peng, Wen-li

    The idea of setting up a design information management system for leather goods was put forward to solve the problems existing in the current information management of leather goods. The working principles of the system were analyzed in detail. First, the approach for acquiring design information on leather goods was introduced. Second, the methods for processing that design information were introduced. Third, the management of design information in a database was studied. Finally, the application of the system was discussed, taking shoe products as an example.
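    The acquire, process, and store steps outlined above can be sketched as a small pipeline against an embedded database. This is a minimal illustration under assumed names (the table layout, column names, and `attribute=value` input format are not from the paper).

    ```python
    import sqlite3

    # In-memory database standing in for the design-information store.
    # Table and column names are illustrative assumptions.
    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE design_info (
            id INTEGER PRIMARY KEY,
            product_type TEXT,  -- e.g. "shoes", the paper's example product
            attribute TEXT,
            value TEXT
        )
    """)

    def acquire(raw):
        """Acquiring step: collect and tidy raw design descriptions."""
        return [r.strip() for r in raw]

    def process(records):
        """Processing step: split each 'attribute=value' description."""
        return [tuple(r.split("=", 1)) for r in records]

    def store(pairs, product_type):
        """Management step: persist the processed information."""
        conn.executemany(
            "INSERT INTO design_info (product_type, attribute, value) VALUES (?, ?, ?)",
            [(product_type, a, v) for a, v in pairs],
        )

    store(process(acquire([" sole=rubber", "upper=leather "])), "shoes")
    rows = conn.execute("SELECT attribute, value FROM design_info").fetchall()
    assert rows == [("sole", "rubber"), ("upper", "leather")]
    ```

    Separating the three steps into distinct functions mirrors the paper's decomposition of the system into acquisition, processing, and database management.
    
    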

  20. Fiscal 1998 research report. Construction model project of the human sensory database; 1998 nendo ningen kankaku database kochiku model jigyo seika hokokusho

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2000-03-01

    This report summarizes the fiscal 1998 research results on construction of the human sensory database. A human sensory database for evaluating working environments was constructed from measurements of human sensory data (stress and fatigue) on 400 subjects in the field (transport settings, control rooms, and offices) and in a laboratory. Using a newly developed standard measurement protocol for evaluating summer clothing (shirts, slacks, and underwear), a database was built from the evaluation-experiment results together with comparative measurements of physiological and sensory data from elderly and young subjects. The database features easy retrieval of related information according to task requirements and intended use. To support evaluation of the large, highly time-variable data sets retrieved for each use scene, a data-detection support technique was adopted that attends to physical and psychological phases and to mind-and-body events; for each phase and event, the meaning of the reaction and a hint for the necessary measures are shown. (NEDO)
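    The data-detection support idea described above, tagging measurements with phases and events and attaching a meaning and a hint to each tag, can be sketched as follows. All record contents, tag names, and hints here are illustrative assumptions, not data from the project.

    ```python
    # Illustrative time-series records tagged with a phase and an event,
    # as the report's data-detection support technique describes.
    records = [
        {"t": 0, "measure": 72, "phase": "physiological", "event": "rest"},
        {"t": 1, "measure": 95, "phase": "psychological", "event": "stress"},
        {"t": 2, "measure": 98, "phase": "psychological", "event": "stress"},
    ]

    # Each (phase, event) pairing maps to a meaning of the reaction and a
    # hint for necessary measures. Entries here are assumptions.
    hints = {
        ("psychological", "stress"): ("elevated arousal", "review task load"),
    }

    def detect(records, phase, event):
        """Retrieve all measurements carrying the given phase and event tags."""
        return [r for r in records if r["phase"] == phase and r["event"] == event]

    hits = detect(records, "psychological", "stress")
    meaning, measure_hint = hints[("psychological", "stress")]
    assert len(hits) == 2 and meaning == "elevated arousal"
    ```

    Tagging by phase and event is what lets a large, time-variable data set be narrowed to the scenes relevant to a given use purpose.
    
    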