WorldWideScience

Sample records for clinical database relating

  1. Danish clinical databases: An overview

    DEFF Research Database (Denmark)

    Green, Anders

    2011-01-01

Clinical databases contain data related to diagnostic procedures, treatments and outcomes. In 2001, a scheme was introduced for the approval, supervision and support to clinical databases in Denmark. ...

  2. Alternatives to relational database: comparison of NoSQL and XML approaches for clinical data storage.

    Science.gov (United States)

    Lee, Ken Ka-Yin; Tang, Wai-Choi; Choi, Kup-Sze

    2013-04-01

    Clinical data are dynamic in nature, often arranged hierarchically and stored as free text and numbers. Effective management of clinical data and the transformation of the data into structured format for data analysis are therefore challenging issues in electronic health records development. Despite the popularity of relational databases, the scalability of the NoSQL database model and the document-centric data structure of XML databases appear to be promising features for effective clinical data management. In this paper, three database approaches--NoSQL, XML-enabled and native XML--are investigated to evaluate their suitability for structured clinical data. The database query performance is reported, together with our experience in the databases development. The results show that NoSQL database is the best choice for query speed, whereas XML databases are advantageous in terms of scalability, flexibility and extensibility, which are essential to cope with the characteristics of clinical data. While NoSQL and XML technologies are relatively new compared to the conventional relational database, both of them demonstrate potential to become a key database technology for clinical data management as the technology further advances. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
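The trade-off the abstract describes can be made concrete with a toy record. The schema and field names below are illustrative, not taken from the paper; the sketch contrasts a normalized relational layout (SQLite) with the same record held as one nested document, as a document store or XML database would keep it:

```python
import json
import sqlite3

# Relational form: the hierarchy is flattened into linked tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE patient (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE encounter (id INTEGER PRIMARY KEY,
                            patient_id INTEGER REFERENCES patient(id),
                            date TEXT);
    CREATE TABLE observation (id INTEGER PRIMARY KEY,
                              encounter_id INTEGER REFERENCES encounter(id),
                              code TEXT, value REAL);
""")
conn.execute("INSERT INTO patient VALUES (1, 'Jane Doe')")
conn.execute("INSERT INTO encounter VALUES (10, 1, '2013-04-01')")
conn.execute("INSERT INTO observation VALUES (100, 10, 'glucose', 5.4)")

# Document form: the same record kept as one nested document.
doc = {
    "patient": "Jane Doe",
    "encounters": [
        {"date": "2013-04-01",
         "observations": [{"code": "glucose", "value": 5.4}]}
    ],
}

# Reassembling the hierarchy relationally needs a two-way join ...
row = conn.execute("""
    SELECT o.code, o.value FROM observation o
    JOIN encounter e ON o.encounter_id = e.id
    JOIN patient p ON e.patient_id = p.id
    WHERE p.name = 'Jane Doe'
""").fetchone()

# ... while the document yields it by simple navigation.
obs = doc["encounters"][0]["observations"][0]
print(row, (obs["code"], obs["value"]))
```

The join cost grows with the depth of the hierarchy, which is one reason the document-centric models compared in the paper can be attractive for deeply nested clinical records.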

  3. National Database for Clinical Trials Related to Mental Illness (NDCT)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The National Database for Clinical Trials Related to Mental Illness (NDCT) is an extensible informatics platform for relevant data at all levels of biological and...

  4. Clinical Databases for Chest Physicians.

    Science.gov (United States)

    Courtwright, Andrew M; Gabriel, Peter E

    2018-04-01

    A clinical database is a repository of patient medical and sociodemographic information focused on one or more specific health condition or exposure. Although clinical databases may be used for research purposes, their primary goal is to collect and track patient data for quality improvement, quality assurance, and/or actual clinical management. This article aims to provide an introduction and practical advice on the development of small-scale clinical databases for chest physicians and practice groups. Through example projects, we discuss the pros and cons of available technical platforms, including Microsoft Excel and Access, relational database management systems such as Oracle and PostgreSQL, and Research Electronic Data Capture. We consider approaches to deciding the base unit of data collection, creating consensus around variable definitions, and structuring routine clinical care to complement database aims. We conclude with an overview of regulatory and security considerations for clinical databases. Copyright © 2018 American College of Chest Physicians. Published by Elsevier Inc. All rights reserved.
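As a hedged illustration of the "consensus around variable definitions" point, the sketch below encodes an agreed definition directly in a small SQLite schema; the table and variable names are invented for this example, not taken from the article:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# One row per procedure (the chosen "base unit" of collection), with
# CHECK constraints encoding the variable definitions the group agreed on.
conn.executescript("""
    CREATE TABLE bronchoscopy (
        id INTEGER PRIMARY KEY,
        mrn TEXT NOT NULL,
        performed_on TEXT NOT NULL,          -- ISO-8601 date
        indication TEXT NOT NULL
            CHECK (indication IN ('nodule', 'infection', 'other')),
        complication INTEGER NOT NULL
            CHECK (complication IN (0, 1))   -- agreed yes/no definition
    );
""")
conn.execute(
    "INSERT INTO bronchoscopy (mrn, performed_on, indication, complication) "
    "VALUES (?, ?, ?, ?)", ("A1234", "2018-04-01", "nodule", 0))
try:
    # A value outside the consensus definition is rejected at entry time.
    conn.execute(
        "INSERT INTO bronchoscopy (mrn, performed_on, indication, complication) "
        "VALUES (?, ?, ?, ?)", ("A1234", "2018-04-02", "screening", 0))
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```

Platforms such as REDCap express the same idea through data dictionaries and validation rules rather than SQL constraints, but the principle, defining the variable once and enforcing it at entry, is the same.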

  5. The relational clinical database: a possible solution to the star wars in registry systems.

    Science.gov (United States)

    Michels, D K; Zamieroski, M

    1990-12-01

    In summary, having data from other service areas available in a relational clinical database could resolve many of the problems existing in today's registry systems. Uniting sophisticated information systems into a centralized database system could definitely be a corporate asset in managing the bottom line.

  6. [The future of clinical laboratory database management system].

    Science.gov (United States)

    Kambe, M; Imidy, D; Matsubara, A; Sugimoto, Y

    1999-09-01

    To assess the present status of the clinical laboratory database management system, the difference between the Clinical Laboratory Information System and Clinical Laboratory System was explained in this study. Although three kinds of database management systems (DBMS) were shown including the relational model, tree model and network model, the relational model was found to be the best DBMS for the clinical laboratory database based on our experience and developments of some clinical laboratory expert systems. As a future clinical laboratory database management system, the IC card system connected to an automatic chemical analyzer was proposed for personal health data management and a microscope/video system was proposed for dynamic data management of leukocytes or bacteria.

  7. Relational databases

    CERN Document Server

    Bell, D A

    1986-01-01

Relational Databases explores the major advances in relational databases and provides a balanced analysis of the state of the art in relational databases. Topics covered include capture and analysis of data placement requirements; distributed relational database systems; data dependency manipulation in database schemata; and relational database support for computer graphics and computer aided design. This book is divided into three sections and begins with an overview of the theory and practice of distributed systems, using the example of INGRES from Relational Technology as illustration. ...

  8. Accessing the public MIMIC-II intensive care relational database for clinical research.

    Science.gov (United States)

    Scott, Daniel J; Lee, Joon; Silva, Ikaro; Park, Shinhyuk; Moody, George B; Celi, Leo A; Mark, Roger G

    2013-01-10

    The Multiparameter Intelligent Monitoring in Intensive Care II (MIMIC-II) database is a free, public resource for intensive care research. The database was officially released in 2006, and has attracted a growing number of researchers in academia and industry. We present the two major software tools that facilitate accessing the relational database: the web-based QueryBuilder and a downloadable virtual machine (VM) image. QueryBuilder and the MIMIC-II VM have been developed successfully and are freely available to MIMIC-II users. Simple example SQL queries and the resulting data are presented. Clinical studies pertaining to acute kidney injury and prediction of fluid requirements in the intensive care unit are shown as typical examples of research performed with MIMIC-II. In addition, MIMIC-II has also provided data for annual PhysioNet/Computing in Cardiology Challenges, including the 2012 Challenge "Predicting mortality of ICU Patients". QueryBuilder is a web-based tool that provides easy access to MIMIC-II. For more computationally intensive queries, one can locally install a complete copy of MIMIC-II in a VM. Both publicly available tools provide the MIMIC-II research community with convenient querying interfaces and complement the value of the MIMIC-II relational database.
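The MIMIC-II schema itself is not reproduced here; the sketch below uses invented table and column names to show the shape of a "simple example SQL query" that a study of acute kidney injury might start from, with Python's built-in SQLite standing in for the relational backend:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Hypothetical, simplified stand-ins for ICU-stay and chart-event tables.
conn.executescript("""
    CREATE TABLE icustay (icustay_id INTEGER PRIMARY KEY,
                          subject_id INTEGER, los_hours REAL);
    CREATE TABLE chartevents (icustay_id INTEGER, item TEXT, value REAL);
""")
conn.executemany("INSERT INTO icustay VALUES (?, ?, ?)",
                 [(1, 100, 52.0), (2, 101, 18.5)])
conn.executemany("INSERT INTO chartevents VALUES (?, ?, ?)",
                 [(1, 'creatinine', 2.4), (1, 'creatinine', 3.1),
                  (2, 'creatinine', 0.9)])

# Peak creatinine per ICU stay: aggregate the event table,
# grouped by stay, after joining on the stay identifier.
rows = conn.execute("""
    SELECT i.icustay_id, MAX(c.value) AS peak_creatinine
    FROM icustay i JOIN chartevents c ON c.icustay_id = i.icustay_id
    WHERE c.item = 'creatinine'
    GROUP BY i.icustay_id
    ORDER BY i.icustay_id
""").fetchall()
print(rows)  # [(1, 3.1), (2, 0.9)]
```

Against the real database, the same query shape would run through QueryBuilder or inside the downloadable VM, where the full relational schema is available.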

  9. Development of a relational database to capture and merge clinical history with the quantitative results of radionuclide renography.

    Science.gov (United States)

    Folks, Russell D; Savir-Baruch, Bital; Garcia, Ernest V; Verdes, Liudmila; Taylor, Andrew T

    2012-12-01

    Our objective was to design and implement a clinical history database capable of linking to our database of quantitative results from (99m)Tc-mercaptoacetyltriglycine (MAG3) renal scans and export a data summary for physicians or our software decision support system. For database development, we used a commercial program. Additional software was developed in Interactive Data Language. MAG3 studies were processed using an in-house enhancement of a commercial program. The relational database has 3 parts: a list of all renal scans (the RENAL database), a set of patients with quantitative processing results (the Q2 database), and a subset of patients from Q2 containing clinical data manually transcribed from the hospital information system (the CLINICAL database). To test interobserver variability, a second physician transcriber reviewed 50 randomly selected patients in the hospital information system and tabulated 2 clinical data items: hydronephrosis and presence of a current stent. The CLINICAL database was developed in stages and contains 342 fields comprising demographic information, clinical history, and findings from up to 11 radiologic procedures. A scripted algorithm is used to reliably match records present in both Q2 and CLINICAL. An Interactive Data Language program then combines data from the 2 databases into an XML (extensible markup language) file for use by the decision support system. A text file is constructed and saved for review by physicians. RENAL contains 2,222 records, Q2 contains 456 records, and CLINICAL contains 152 records. The interobserver variability testing found a 95% match between the 2 observers for presence or absence of ureteral stent (κ = 0.52), a 75% match for hydronephrosis based on narrative summaries of hospitalizations and clinical visits (κ = 0.41), and a 92% match for hydronephrosis based on the imaging report (κ = 0.84). We have developed a relational database system to integrate the quantitative results of MAG3 image
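A minimal sketch of the match-and-export step, assuming simplified stand-ins for the Q2 and CLINICAL tables (the real CLINICAL database has 342 fields): records are joined on a shared study identifier and the merged result is serialized as XML for a downstream decision support system.

```python
import sqlite3
import xml.etree.ElementTree as ET

conn = sqlite3.connect(":memory:")
# Hypothetical miniature versions of the two databases.
conn.executescript("""
    CREATE TABLE q2 (study_id TEXT PRIMARY KEY, relative_function REAL);
    CREATE TABLE clinical (study_id TEXT PRIMARY KEY, hydronephrosis TEXT);
""")
conn.execute("INSERT INTO q2 VALUES ('S001', 47.5)")
conn.execute("INSERT INTO clinical VALUES ('S001', 'yes')")

# Match records present in both tables on the shared key, then
# serialize each merged record as XML.
root = ET.Element("renal_study")
for sid, rf, hydro in conn.execute("""
        SELECT q.study_id, q.relative_function, c.hydronephrosis
        FROM q2 q JOIN clinical c ON c.study_id = q.study_id"""):
    rec = ET.SubElement(root, "patient", id=sid)
    ET.SubElement(rec, "relative_function").text = str(rf)
    ET.SubElement(rec, "hydronephrosis").text = hydro

xml_out = ET.tostring(root, encoding="unicode")
print(xml_out)
```

The paper's actual pipeline uses a scripted matching algorithm and an Interactive Data Language program; this snippet only illustrates the join-then-serialize pattern.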

  10. Use of a Relational Database to Support Clinical Research: Application in a Diabetes Program

    Science.gov (United States)

    Lomatch, Diane; Truax, Terry; Savage, Peter

    1981-01-01

A database has been established to support conduct of clinical research and monitor delivery of medical care for 1200 diabetic patients as part of the Michigan Diabetes Research and Training Center (MDRTC). Use of an intelligent microcomputer to enter and retrieve the data and use of a relational database management system (DBMS) to store and manage data have provided a flexible, efficient method of achieving both support of small projects and monitoring overall activity of the Diabetes Center Unit (DCU). Simplicity of access to data, efficiency in providing data for unanticipated requests, ease of manipulations of relations, security and “logical data independence” were important factors in choosing a relational DBMS. The ability to interface with an interactive statistical program and a graphics program is a major advantage of this system. Our database currently provides support for the operation and analysis of several ongoing research projects.

  11. Clinical databases in physical therapy.

    NARCIS (Netherlands)

    Swinkels, I.C.S.; Ende, C.H.M. van den; Bakker, D. de; Wees, Ph.J van der; Hart, D.L.; Deutscher, D.; Bosch, W.J.H. van den; Dekker, J.

    2007-01-01

    Clinical databases in physical therapy provide increasing opportunities for research into physical therapy theory and practice. At present, information on the characteristics of existing databases is lacking. The purpose of this study was to identify clinical databases in which physical therapists

  12. Brain Tumor Database, a free relational database for collection and analysis of brain tumor patient information.

    Science.gov (United States)

    Bergamino, Maurizio; Hamilton, David J; Castelletti, Lara; Barletta, Laura; Castellan, Lucio

    2015-03-01

    In this study, we describe the development and utilization of a relational database designed to manage the clinical and radiological data of patients with brain tumors. The Brain Tumor Database was implemented using MySQL v.5.0, while the graphical user interface was created using PHP and HTML, thus making it easily accessible through a web browser. This web-based approach allows for multiple institutions to potentially access the database. The BT Database can record brain tumor patient information (e.g. clinical features, anatomical attributes, and radiological characteristics) and be used for clinical and research purposes. Analytic tools to automatically generate statistics and different plots are provided. The BT Database is a free and powerful user-friendly tool with a wide range of possible clinical and research applications in neurology and neurosurgery. The BT Database graphical user interface source code and manual are freely available at http://tumorsdatabase.altervista.org. © The Author(s) 2013.

  13. Database on veterinary clinical research in homeopathy.

    Science.gov (United States)

    Clausen, Jürgen; Albrecht, Henning

    2010-07-01

The aim of the present report is to provide an overview of the first database on clinical research in veterinary homeopathy. Detailed searches were performed in the database 'Veterinary Clinical Research-Database in Homeopathy' (http://www.carstens-stiftung.de/clinresvet/index.php). The database contains about 200 entries of randomised clinical trials, non-randomised clinical trials, observational studies, drug provings, case reports and case series. Twenty-two clinical fields are covered and eight different groups of species are included. The database is free of charge and open to all interested veterinarians and researchers. It enables researchers and veterinarians, sceptics and supporters alike, to get a quick overview of the status of veterinary clinical research in homeopathy; it facilitates the preparation of systematic reviews and may stimulate replications or even new studies. 2010 Elsevier Ltd. All rights reserved.

  14. Practice databases and their uses in clinical research.

    Science.gov (United States)

    Tierney, W M; McDonald, C J

    1991-04-01

    A few large clinical information databases have been established within larger medical information systems. Although they are smaller than claims databases, these clinical databases offer several advantages: accurate and timely data, rich clinical detail, and continuous parameters (for example, vital signs and laboratory results). However, the nature of the data vary considerably, which affects the kinds of secondary analyses that can be performed. These databases have been used to investigate clinical epidemiology, risk assessment, post-marketing surveillance of drugs, practice variation, resource use, quality assurance, and decision analysis. In addition, practice databases can be used to identify subjects for prospective studies. Further methodologic developments are necessary to deal with the prevalent problems of missing data and various forms of bias if such databases are to grow and contribute valuable clinical information.

  15. Relational databases for rare disease study: application to vascular anomalies.

    Science.gov (United States)

    Perkins, Jonathan A; Coltrera, Marc D

    2008-01-01

    To design a relational database integrating clinical and basic science data needed for multidisciplinary treatment and research in the field of vascular anomalies. Based on data points agreed on by the American Society of Pediatric Otolaryngology (ASPO) Vascular Anomalies Task Force. The database design enables sharing of data subsets in a Health Insurance Portability and Accountability Act (HIPAA)-compliant manner for multisite collaborative trials. Vascular anomalies pose diagnostic and therapeutic challenges. Our understanding of these lesions and treatment improvement is limited by nonstandard terminology, severity assessment, and measures of treatment efficacy. The rarity of these lesions places a premium on coordinated studies among multiple participant sites. The relational database design is conceptually centered on subjects having 1 or more lesions. Each anomaly can be tracked individually along with their treatment outcomes. This design allows for differentiation between treatment responses and untreated lesions' natural course. The relational database design eliminates data entry redundancy and results in extremely flexible search and data export functionality. Vascular anomaly programs in the United States. A relational database correlating clinical findings and photographic, radiologic, histologic, and treatment data for vascular anomalies was created for stand-alone and multiuser networked systems. Proof of concept for independent site data gathering and HIPAA-compliant sharing of data subsets was demonstrated. The collaborative effort by the ASPO Vascular Anomalies Task Force to create the database helped define a common vascular anomaly data set. The resulting relational database software is a powerful tool to further the study of vascular anomalies and the development of evidence-based treatment innovation.
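The subject-centered, one-or-more-lesions design can be sketched as a one-to-many schema. The tables below are a hypothetical simplification of the Task Force data set, showing how per-lesion treatment tracking keeps untreated lesions (natural course) distinguishable from treated ones (treatment response):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE subject (id INTEGER PRIMARY KEY, dob TEXT);
    CREATE TABLE lesion (id INTEGER PRIMARY KEY,
                         subject_id INTEGER REFERENCES subject(id),
                         site TEXT, diagnosis TEXT);
    CREATE TABLE treatment (id INTEGER PRIMARY KEY,
                            lesion_id INTEGER REFERENCES lesion(id),
                            modality TEXT, outcome TEXT);
""")
conn.execute("INSERT INTO subject VALUES (1, '2004-06-01')")
conn.executemany("INSERT INTO lesion VALUES (?, 1, ?, ?)",
                 [(1, 'cheek', 'infantile hemangioma'),
                  (2, 'neck', 'lymphatic malformation')])
# Only lesion 2 is treated; lesion 1 is observed, so its natural
# course stays separable from any treatment response.
conn.execute("INSERT INTO treatment VALUES (1, 2, 'sclerotherapy', 'reduced')")

# Lesions followed without intervention: an anti-join on treatment.
untreated = conn.execute("""
    SELECT l.site FROM lesion l
    LEFT JOIN treatment t ON t.lesion_id = l.id
    WHERE t.id IS NULL
""").fetchall()
print(untreated)  # [('cheek',)]
```

Because each fact is stored once at its natural level (subject, lesion, or treatment), the design eliminates entry redundancy and makes flexible export queries like the one above straightforward.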

  16. [A web-based integrated clinical database for laryngeal cancer].

    Science.gov (United States)

    E, Qimin; Liu, Jialin; Li, Yong; Liang, Chuanyu

    2014-08-01

To establish an integrated database for laryngeal cancer and to provide an information platform for clinical and fundamental research on laryngeal cancer that meets the needs of both clinical and scientific use. Under the guidance of clinical experts, we constructed a web-based integrated clinical database for laryngeal carcinoma on the basis of clinical data standards and Apache+PHP+MySQL technology, incorporating laryngeal cancer specialist characteristics and tumor genetic information. A web-based integrated clinical database for laryngeal carcinoma was developed. The database has a user-friendly interface, and data can be entered and queried conveniently. In addition, the system uses clinical data standards and exchanges information with the existing electronic medical records system to avoid information silos. Furthermore, the database forms integrate laryngeal cancer specialist characteristics and tumor genetic information. The database offers comprehensive specialist information, strong expandability and high technical feasibility, and conforms to the clinical characteristics of the laryngeal cancer specialty. By using clinical data standards and handling clinical data in a structured way, it can better meet the needs of scientific research and facilitate information exchange, and the information collected about tumor patients is highly informative. Users can also access and manipulate the database conveniently and swiftly over the Internet.

  17. Multilevel security for relational databases

    CERN Document Server

    Faragallah, Osama S; El-Samie, Fathi E Abd

    2014-01-01

Contents: Concepts of Database Security (database concepts; relational database security concepts); Access Control in Relational Databases (discretionary access control; mandatory access control; role-based access control); Work Objectives; Book Organization; Basic Concepts of Multilevel Database Security (introduction; multilevel database relations; polyinstantiation: invisible polyinstantiation, visible polyinstantiation, types of polyinstantiation, architectural considerations).

  18. Integrating query of relational and textual data in clinical databases: a case study.

    Science.gov (United States)

    Fisk, John M; Mutalik, Pradeep; Levin, Forrest W; Erdos, Joseph; Taylor, Caroline; Nadkarni, Prakash

    2003-01-01

    The authors designed and implemented a clinical data mart composed of an integrated information retrieval (IR) and relational database management system (RDBMS). Using commodity software, which supports interactive, attribute-centric text and relational searches, the mart houses 2.8 million documents that span a five-year period and supports basic IR features such as Boolean searches, stemming, and proximity and fuzzy searching. Results are relevance-ranked using either "total documents per patient" or "report type weighting." Non-curated medical text has a significant degree of malformation with respect to spelling and punctuation, which creates difficulties for text indexing and searching. Presently, the IR facilities of RDBMS packages lack the features necessary to handle such malformed text adequately. A robust IR+RDBMS system can be developed, but it requires integrating RDBMSs with third-party IR software. RDBMS vendors need to make their IR offerings more accessible to non-programmers.
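A rough sketch of an integrated relational-plus-text query, using SQLite's FTS5 extension as a stand-in for the commodity IR software the authors used; the document corpus and report types are invented, and the snippet assumes the local SQLite build includes FTS5:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Structured attributes live in a plain relational table ...
conn.execute("CREATE TABLE report (id INTEGER PRIMARY KEY, "
             "patient_id INTEGER, report_type TEXT)")
# ... while the narrative text lives in a full-text index,
# linked to the relational row by rowid.
conn.execute("CREATE VIRTUAL TABLE report_text USING fts5(body)")

docs = [(1, 100, 'radiology', 'small pleural effusion noted'),
        (2, 100, 'discharge', 'effusion resolved at follow-up'),
        (3, 101, 'radiology', 'no acute findings')]
for rid, pid, rtype, body in docs:
    conn.execute("INSERT INTO report VALUES (?, ?, ?)", (rid, pid, rtype))
    conn.execute("INSERT INTO report_text (rowid, body) VALUES (?, ?)",
                 (rid, body))

# Attribute-centric search: a full-text match restricted by a
# relational predicate on report type.
hits = conn.execute("""
    SELECT r.id FROM report r
    JOIN report_text t ON t.rowid = r.id
    WHERE t.body MATCH 'effusion' AND r.report_type = 'radiology'
""").fetchall()
print(hits)  # [(1,)]
```

Features the paper relies on, such as stemming, proximity, fuzzy matching of malformed text, and relevance ranking, go beyond this minimal join, which is exactly why the authors ended up pairing the RDBMS with dedicated IR software.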

  19. Bluetooth wireless database for scoliosis clinics.

    Science.gov (United States)

    Lou, E; Fedorak, M V; Hill, D L; Raso, J V; Moreau, M J; Mahood, J K

    2003-05-01

    A database system with Bluetooth wireless connectivity has been developed so that scoliosis clinics can be run more efficiently and data can be mined for research studies without significant increases in equipment cost. The wireless database system consists of a Bluetooth-enabled laptop or PC and a Bluetooth-enabled handheld personal data assistant (PDA). Each patient has a profile in the database, which has all of his or her clinical history. Immediately prior to the examination, the orthopaedic surgeon selects a patient's profile from the database and uploads that data to the PDA over a Bluetooth wireless connection. The surgeon can view the entire clinical history of the patient while in the examination room and, at the same time, enter in any new measurements and comments from the current examination. After seeing the patient, the surgeon synchronises the newly entered information with the database wirelessly and prints a record for the chart. This combination of the database and the PDA both improves efficiency and accuracy and can save significant time, as there is less duplication of work, and no dictation is required. The equipment required to implement this solution is a Bluetooth-enabled PDA and a Bluetooth wireless transceiver for the PC or laptop.

  20. Developing a stone database for clinical practice.

    Science.gov (United States)

    Turney, Benjamin W; Noble, Jeremy G; Reynard, John M

    2011-09-01

    Our objective was to design an intranet-based database to streamline stone patient management and data collection. The system developers used a rapid development approach that removed the need for laborious and unnecessary documentation, instead focusing on producing a rapid prototype that could then be altered iteratively. By using open source development software and website best practice, the development cost was kept very low in comparison with traditional clinical applications. Information about each patient episode can be entered via a user-friendly interface. The bespoke electronic stone database removes the need for handwritten notes, dictation, and typing. From the database, files may be automatically generated for clinic letters, operation notes. and letters to family doctors. These may be printed or e-mailed from the database. Data may be easily exported for audits, coding, and research. Data collection remains central to medical practice, to improve patient safety, to analyze medical and surgical outcomes, and to evaluate emerging treatments. Establishing prospective data collection is crucial to this process. In the current era, we have the opportunity to embrace available technology to facilitate this process. The database template could be modified for use in other clinics. The database that we have designed helps to provide a modern and efficient clinical stone service.

  1. Solving Relational Database Problems with ORDBMS in an Advanced Database Course

    Science.gov (United States)

    Wang, Ming

    2011-01-01

    This paper introduces how to use the object-relational database management system (ORDBMS) to solve relational database (RDB) problems in an advanced database course. The purpose of the paper is to provide a guideline for database instructors who desire to incorporate the ORDB technology in their traditional database courses. The paper presents…

  2. Thoughts toward a clinical database of architecture: evidence, complexity, and impact

    Directory of Open Access Journals (Sweden)

    Leonard R. Bachman

    2012-10-01

This paper examines how architecture is building a clinical database similar to that of law and medicine and is developing this database for the purposes of acquiring complex design insight. This emerging clinical branch of architectural knowledge exceeds the scope of everyday experience of physical form and can thus be shown to enable a more satisfying scale of design thinking. It is argued that significant transformational kinds of professional transparency and accountability are thus intensifying. The tactics and methods of this paper are to connect previously disparate historical and contemporary events that mark the evolution of this database and then to fold those events into an explanatory narrative concerning clinical design practice. Beginning with architecture’s use of precedent (Collins 1971), the formulation of design as complex problems (Rittel and Webber 1973), high performance buildings to meet the crisis of climate change, social mandates of postindustrial society (Bell 1973), and other roots of evidence, the paper then elaborates the themes in which this database is evolving. Such themes include post-occupancy evaluation (Bordass and Leaman 2005), continuous commissioning, performance simulation, digital instrumentation, automation, and other modes of data collection in buildings. Finally, the paper concludes with some anticipated impacts that such a clinical database might have on design practice and how their benefits can be achieved through new interdisciplinary relations between academia and practice.

  3. Relational Databases and Biomedical Big Data.

    Science.gov (United States)

    de Silva, N H Nisansa D

    2017-01-01

In various biomedical applications that collect, handle, and manipulate data, the amounts of data tend to build up and venture into the range identified as big data. In such occurrences, a design decision has to be taken as to what type of database will be used to handle this data. More often than not, the default and classical solution in the biomedical domain, according to past research, is the relational database. While this was the norm for a long while, there is an evident trend toward moving away from relational databases in favor of other types and paradigms of databases. However, it remains of paramount importance to understand the interrelation that exists between biomedical big data and relational databases. This chapter reviews the pros and cons of using relational databases to store biomedical big data that previous research has discussed and used.

  4. Features of TMR for a Successful Clinical and Research Database

    OpenAIRE

    Pryor, David B.; Stead, William W.; Hammond, W. Edward; Califf, Robert M.; Rosati, Robert A.

    1982-01-01

    A database can be used for clinical practice and for research. The design of the database is important if both uses are to succeed. A clinical database must be efficient and flexible. A research database requires consistent observations recorded in a format which permits complete recall of the experience. In addition, the database should be designed to distinguish between missing data and negative responses, and to minimize transcription errors during the recording process.

  5. A national drug related problems database: evaluation of use in practice, reliability and reproducibility

    DEFF Research Database (Denmark)

    Kjeldsen, Lene Juel; Birkholm, Trine; Fischer, Hanne Lis

    2014-01-01

Background: A drug related problems database (DRP-database) was developed at the request of clinical pharmacists. The information from the DRP-database has only been used locally, e.g. to identify focus areas and to communicate identified DRPs to the hospital wards. Hence the quality of the data ... by clinical pharmacists, with categorization performed by the project group. Reproducibility was explored by re-categorization of a sample of existing records in the DRP-database by two project group members individually. Main outcome measures: observed proportion of agreement and Fleiss' kappa as measures ... The reliability study of 34 clinical pharmacists showed high inter-rater reliability with the project group (Fleiss' kappa = 0.79 with 95% CI (0.70; 0.88)), and the reproducibility study also documented high inter-rater reliability in a sample of 379 records from the DRP-database re-categorized by two project group members ...

  6. Migration from relational to NoSQL database

    Science.gov (United States)

    Ghotiya, Sunita; Mandal, Juhi; Kandasamy, Saravanakumar

    2017-11-01

Data generated by real-time applications, social networking sites and sensor devices is huge in volume and unstructured, which makes it difficult for relational database management systems to handle. Data is a very precious component of any application and needs to be analysed after being arranged in some structure. Relational databases can only deal with structured data, so there is a need for NoSQL database management systems, which can also deal with semi-structured data. Relational databases provide the easiest way to manage data, but as the use of NoSQL increases it is becoming necessary to migrate data from relational to NoSQL databases. Various frameworks have been proposed that provide mechanisms for migrating data stored in SQL warehouses, as well as middle-layer solutions that allow data to be stored in NoSQL databases to handle unstructured data. This paper provides a literature review of some of the recent approaches proposed by various researchers to migrate data from relational to NoSQL databases. Some researchers have proposed mechanisms for the co-existence of NoSQL and relational databases. The paper summarises mechanisms that can be used for mapping data stored in relational databases to NoSQL databases, along with techniques for data transformation and middle-layer solutions.
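A common shape of such a mapping, embedding child rows as nested arrays inside parent documents, can be sketched as follows; the customer/orders schema is an invented example, not one drawn from the reviewed papers:

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY,
                         customer_id INTEGER REFERENCES customer(id),
                         total REAL);
""")
conn.execute("INSERT INTO customer VALUES (1, 'Ada')")
conn.executemany("INSERT INTO orders VALUES (?, 1, ?)",
                 [(10, 25.0), (11, 40.0)])

# Denormalize: each parent row becomes a document, and its child
# rows are embedded as a nested array instead of a foreign key.
documents = []
for cid, name in conn.execute("SELECT id, name FROM customer ORDER BY id"):
    child = conn.execute(
        "SELECT id, total FROM orders WHERE customer_id = ? ORDER BY id",
        (cid,))
    documents.append({
        "_id": cid,
        "name": name,
        "orders": [{"order_id": oid, "total": t} for oid, t in child],
    })

# The resulting JSON documents are ready for a document store.
print(json.dumps(documents))
```

Real migration frameworks must also decide when to embed versus reference, how to preserve constraints, and how to migrate incrementally, which is where the surveyed approaches differ.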

  7. The evolution of a clinical database: from local to standardized clinical languages.

    OpenAIRE

    Prophet, C. M.

    2000-01-01

    For more than twenty years, the University of Iowa Hospitals and Clinics Nursing Informatics (UIHC NI) has been developing a clinical database to support patient care planning and documentation in the INFORMM NIS (Information Network for Online Retrieval & Medical Management Nursing Information System). Beginning in 1992, the database content was revised to standardize orders and to incorporate the Standardized Nursing Languages (SNLs) of the North American Nursing Diagnosis Association (NAND...

  8. Online database for documenting clinical pathology resident education.

    Science.gov (United States)

    Hoofnagle, Andrew N; Chou, David; Astion, Michael L

    2007-01-01

    Training of clinical pathologists is evolving and must now address the 6 core competencies described by the Accreditation Council for Graduate Medical Education (ACGME), which include patient care. A substantial portion of the patient care performed by the clinical pathology resident takes place while the resident is on call for the laboratory, a practice that provides the resident with clinical experience and assists the laboratory in providing quality service to clinicians in the hospital and surrounding community. Documenting the educational value of these on-call experiences and providing evidence of competence is difficult for residency directors. An online database of these calls, entered by residents and reviewed by faculty, would provide a mechanism for documenting and improving the education of clinical pathology residents. With Microsoft Access we developed an online database that uses active server pages and secure sockets layer encryption to document calls to the clinical pathology resident. Using the data collected, we evaluated the efficacy of 3 interventions aimed at improving resident education. The database facilitated the documentation of more than 4 700 calls in the first 21 months it was online, provided archived resident-generated data to assist in serving clients, and demonstrated that 2 interventions aimed at improving resident education were successful. We have developed a secure online database, accessible from any computer with Internet access, that can be used to easily document clinical pathology resident education and competency.

  9. Data Migration between Document-Oriented and Relational Databases

    OpenAIRE

    Bogdan Walek; Cyril Klimes

    2012-01-01

    Current tools for data migration between document-oriented and relational databases have several disadvantages. We propose a new approach for data migration between document-oriented and relational databases. During data migration, the relational schema of the target (relational database) is automatically created from a collection of XML documents. The proposed approach is verified on data migration between the document-oriented database IBM Lotus Notes/Domino and a relational database...
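
    The schema-inference step the abstract describes can be illustrated in miniature: take the union of element names across a document collection as the column set of the target table. The sample documents, table name, and uniform TEXT typing below are invented for illustration and are not taken from the authors' tool:

```python
import sqlite3
import xml.etree.ElementTree as ET

# Hypothetical document collection; real migrations read these from the source DB.
DOCS = [
    "<patient><name>A</name><age>34</age></patient>",
    "<patient><name>B</name><age>51</age><ward>3</ward></patient>",
]

def infer_columns(docs):
    """Union of child-element tags across all documents becomes the column set."""
    cols = []
    for doc in docs:
        for child in ET.fromstring(doc):
            if child.tag not in cols:
                cols.append(child.tag)
    return cols

def migrate(docs, table="patient"):
    """Create the inferred relational schema and load one row per document."""
    cols = infer_columns(docs)
    con = sqlite3.connect(":memory:")
    con.execute(f"CREATE TABLE {table} ({', '.join(c + ' TEXT' for c in cols)})")
    for doc in docs:
        row = {c.tag: c.text for c in ET.fromstring(doc)}
        con.execute(
            f"INSERT INTO {table} ({', '.join(cols)}) VALUES ({', '.join('?' * len(cols))})",
            [row.get(c) for c in cols],  # missing elements become NULL
        )
    return con, cols

con, cols = migrate(DOCS)
print(cols)   # ['name', 'age', 'ward']
```

    Documents lacking an element simply yield NULL in that column, which is one way a flexible document schema maps onto a fixed relational one.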

  10. A Relational Algebra Query Language for Programming Relational Databases

    Science.gov (United States)

    McMaster, Kirby; Sambasivam, Samuel; Anderson, Nicole

    2011-01-01

    In this paper, we describe a Relational Algebra Query Language (RAQL) and Relational Algebra Query (RAQ) software product we have developed that allows database instructors to teach relational algebra through programming. Instead of defining query operations using mathematical notation (the approach commonly taken in database textbooks), students…
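
    The operations such a query language exposes correspond to the standard relational algebra. A minimal sketch of selection, projection, and natural join, modelling relations as lists of dicts (the sample relations are invented for illustration):

```python
def select(rel, pred):
    """Selection: keep tuples satisfying the predicate."""
    return [t for t in rel if pred(t)]

def project(rel, attrs):
    """Projection: restrict to the given attributes, removing duplicates."""
    seen, out = set(), []
    for t in rel:
        row = tuple((a, t[a]) for a in attrs)
        if row not in seen:
            seen.add(row)
            out.append(dict(row))
    return out

def natural_join(r, s):
    """Natural join: combine tuples that agree on all shared attributes."""
    common = set(r[0]) & set(s[0])
    return [{**t, **u} for t in r for u in s
            if all(t[a] == u[a] for a in common)]

emp = [{"eid": 1, "dept": "A"}, {"eid": 2, "dept": "B"}]
dept = [{"dept": "A", "floor": 2}, {"dept": "B", "floor": 3}]
print(project(select(emp, lambda t: t["dept"] == "A"), ["eid"]))  # [{'eid': 1}]
```

    Composing the three functions, as in the last line, is exactly the programming-style exercise a relational algebra query language gives students instead of mathematical notation.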

  11. Automating Relational Database Design for Microcomputer Users.

    Science.gov (United States)

    Pu, Hao-Che

    1991-01-01

    Discusses issues involved in automating the relational database design process for microcomputer users and presents a prototype of a microcomputer-based system (RA, Relation Assistant) that is based on expert systems technology and helps avoid database maintenance problems. Relational database design is explained and the importance of easy input…

  12. CDKD: a clinical database of kidney diseases

    Directory of Open Access Journals (Sweden)

    Singh Sanjay

    2012-04-01

    Full Text Available Abstract Background The main function of the kidneys is to remove waste products and excess water from the blood. Loss of kidney function leads to various health issues, such as anemia, high blood pressure, bone disease, and cholesterol disorders. The main objective of this database system is to store the personal and laboratory investigation details of patients with kidney disease. The emphasis is on experimental results relevant to quantitative renal physiology, with a particular focus on data relevant for evaluation of parameters in statistical models of renal function. Description Clinical database of kidney diseases (CDKD) has been developed with patient confidentiality and data security as a top priority. It supports comparative analysis of one or more parameters of a patient's record and covers a whole range of data, including demographics, medical history, laboratory test results, vital signs, and personal statistics such as age and weight. Conclusions The goal of this database is to make kidney-related physiological data easily available to the scientific community and to maintain and retain patients' records. As a web-based application it permits a physician to see, edit and annotate a patient record from anywhere and at any time while maintaining the confidentiality of the personal record. It also allows statistical analysis of all data.

  13. An automated database case definition for serious bleeding related to oral anticoagulant use.

    Science.gov (United States)

    Cunningham, Andrew; Stein, C Michael; Chung, Cecilia P; Daugherty, James R; Smalley, Walter E; Ray, Wayne A

    2011-06-01

    Bleeding complications are a serious adverse effect of medications that prevent abnormal blood clotting. To facilitate epidemiologic investigations of bleeding complications, we developed and validated an automated database case definition for bleeding-related hospitalizations. The case definition utilized information from an in-progress retrospective cohort study of warfarin-related bleeding in Tennessee Medicaid enrollees 30 years of age or older. It identified inpatient stays during the study period of January 1990 to December 2005 with diagnoses and/or procedures that indicated a current episode of bleeding. The definition was validated by medical record review for a sample of 236 hospitalizations. We reviewed 186 hospitalizations that had medical records with sufficient information for adjudication. Of these, 165 (89%, 95%CI: 83-92%) were clinically confirmed bleeding-related hospitalizations. An additional 19 hospitalizations (10%, 7-15%) were adjudicated as possibly bleeding-related. Of the 165 clinically confirmed bleeding-related hospitalizations, the automated database and clinical definitions had concordant anatomical sites (gastrointestinal, cerebral, genitourinary, other) for 163 (99%, 96-100%). For those hospitalizations with sufficient information to distinguish between upper/lower gastrointestinal bleeding, the concordance was 89% (76-96%) for upper gastrointestinal sites and 91% (77-97%) for lower gastrointestinal sites. A case definition for bleeding-related hospitalizations suitable for automated databases had a positive predictive value of between 89% and 99% and could distinguish specific bleeding sites. Copyright © 2011 John Wiley & Sons, Ltd.
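
    The abstract does not state which interval method underlies its confidence limits, but a Wilson score interval on the reported review counts (165 clinically confirmed of 186 adjudicated hospitalizations) reproduces the published figures:

```python
from math import sqrt

def wilson_ci(k, n, z=1.96):
    """Wilson score interval for a binomial proportion (95% by default)."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

confirmed, reviewed = 165, 186   # counts reported in the abstract
ppv = confirmed / reviewed
lo, hi = wilson_ci(confirmed, reviewed)
print(f"PPV = {ppv:.0%}, 95% CI {lo:.0%}-{hi:.0%}")  # PPV = 89%, 95% CI 83%-92%
```

    The Wilson interval is preferred over the simple normal approximation for proportions near 0 or 1, which is common in validation studies of database case definitions.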

  14. Class dependency of fuzzy relational database using relational calculus and conditional probability

    Science.gov (United States)

    Deni Akbar, Mohammad; Mizoguchi, Yoshihiro; Adiwijaya

    2018-03-01

    In this paper, we propose a design of a fuzzy relational database that deals with a conditional probability relation using fuzzy relational calculus. Previously, there have been several studies of equivalence classes in fuzzy databases using similarity or approximate relations. It is an interesting topic to investigate fuzzy dependency using equivalence classes. Our goal is to introduce a formulation of a fuzzy relational database model using the relational calculus on the category of fuzzy relations. We also introduce general formulas of the relational calculus for database operations such as 'projection', 'selection', 'injection' and 'natural join'. Using the fuzzy relational calculus and conditional probabilities, we introduce notions of equivalence class, redundancy, and dependency in the theory of fuzzy relational databases.
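
    One basic building block of such a calculus is the composition of graded relations. A minimal sketch of max-min composition over fuzzy relations, with the relations and membership grades invented for illustration:

```python
# A fuzzy relation maps each tuple to a membership grade in [0, 1].
R = {("a", "x"): 0.9, ("a", "y"): 0.4, ("b", "x"): 0.7}
S = {("x", "p"): 0.8, ("y", "p"): 1.0}

def compose(r, s):
    """Max-min composition: grade of (i, k) is max over j of min(r(i,j), s(j,k))."""
    out = {}
    for (i, j), m1 in r.items():
        for (j2, k), m2 in s.items():
            if j == j2:
                out[(i, k)] = max(out.get((i, k), 0.0), min(m1, m2))
    return out

print(compose(R, S))   # {('a', 'p'): 0.8, ('b', 'p'): 0.7}
```

    A crisp relation is the special case where every grade is 0 or 1, in which case max-min composition reduces to ordinary relational composition.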

  15. [Establishment of a comprehensive database for laryngeal cancer related genes and the miRNAs].

    Science.gov (United States)

    Li, Mengjiao; E, Qimin; Liu, Jialin; Huang, Tingting; Liang, Chuanyu

    2015-09-01

    By collecting and analyzing laryngeal cancer related genes and miRNAs, we aimed to build a comprehensive laryngeal cancer-related gene database which, unlike current biological information databases with complex and unwieldy structures, focuses on the theme of genes and miRNAs, making research and teaching more convenient and efficient. Based on the B/S architecture, using Apache as the web server, MySQL as the database management system and PHP as the web coding language, a comprehensive database for laryngeal cancer-related genes was established, providing gene tables, protein tables, miRNA tables and clinical information tables for patients with laryngeal cancer. The established database contained 207 laryngeal cancer related genes, 243 proteins, 26 miRNAs, and their particular information such as mutations, methylations, differential expression, and the empirical references of laryngeal cancer relevant molecules. The database can be accessed and operated via the Internet, through which browsing and retrieval of the information are performed. The database is maintained and updated regularly. The database for laryngeal cancer related genes is resource-integrated and user-friendly, providing a genetic information query tool for the study of laryngeal cancer.

  16. A new relational database structure and online interface for the HITRAN database

    International Nuclear Information System (INIS)

    Hill, Christian; Gordon, Iouli E.; Rothman, Laurence S.; Tennyson, Jonathan

    2013-01-01

    A new format for the HITRAN database is proposed. By storing the line-transition data in a number of linked tables described by a relational database schema, it is possible to overcome the limitations of the existing format, which have become increasingly apparent over the last few years as new and more varied data are being used by radiative-transfer models. Although the database in the new format can be searched using the well-established Structured Query Language (SQL), a web service, HITRANonline, has been deployed to allow users to make most common queries of the database using a graphical user interface in a web page. The advantages of the relational form of the database to ensuring data integrity and consistency are explored, and the compatibility of the online interface with the emerging standards of the Virtual Atomic and Molecular Data Centre (VAMDC) project is discussed. In particular, the ability to access HITRAN data using a standard query language from other websites, command line tools and from within computer programs is described. -- Highlights: • A new, interactive version of the HITRAN database is presented. • The data is stored in a structured fashion in a relational database. • The new HITRANonline interface offers increased functionality and easier error correction

  17. Study of relational nuclear databases and online services

    International Nuclear Information System (INIS)

    Fan Tieshuan; Guo Zhiyu; Liu Wenlong; Ye Weiguo; Feng Yuqing; Song Xiangxiang; Huang Gang; Hong Yingjue; Liu Tinjin; Chen Jinxiang; Tang Guoyou; Shi Zhaoming; Liu Chi; Chen Jiaer; Huang Xiaolong

    2004-01-01

    A relational nuclear database management and web-based services software system has been developed. Its objective is to allow users to access numerical and graphical representations of nuclear data and to easily reconstruct nuclear data in the original standardized formats from the relational databases. It presents 9 relational nuclear libraries: 5 ENDF-format neutron reaction databases (BROND, CENDL, ENDF, JEF and JENDL), the ENSDF database, the EXFOR database, the IAEA Photonuclear Data Library and the charged-particle reaction data from the FENDL database. The computer programs providing support for database management and data retrieval are based on the Linux implementation of PHP and the MySQL software, and are platform-independent. The first version of this software was officially released in September 2001

  18. BIOSPIDA: A Relational Database Translator for NCBI.

    Science.gov (United States)

    Hagen, Matthew S; Lee, Eva K

    2010-11-13

    As the volume and availability of biological databases continue widespread growth, it has become increasingly difficult for research scientists to identify all relevant information for biological entities of interest. Details of nucleotide sequences, gene expression, molecular interactions, and three-dimensional structures are maintained across many different databases. To retrieve all necessary information requires an integrated system that can query multiple databases with minimized overhead. This paper introduces a universal parser and relational schema translator that can be utilized for all NCBI databases in Abstract Syntax Notation (ASN.1). The data models for OMIM, Entrez-Gene, Pubmed, MMDB and GenBank have been successfully converted into relational databases and all are easily linkable helping to answer complex biological questions. These tools facilitate research scientists to locally integrate databases from NCBI without significant workload or development time.

  19. O-ODM Framework for Object-Relational Databases

    Directory of Open Access Journals (Sweden)

    Carlos Alberto Rombaldo Jr

    2012-09-01

    Full Text Available Object-Relational Databases introduce new features which allow manipulating objects in databases. At present, many DBMSs offer resources to manipulate objects in the database, but most application developers just map classes to relational tables, failing to exploit the O-R model's strength. The lack of tools that aid the database project contributes to this situation. This work presents O-ODM (Object-Object Database Mapping), a persistence framework that maps objects from OO applications to database objects. Persistence frameworks have been used to aid developers, managing all access to the DBMS. This kind of tool allows developers to persist objects without solid knowledge about DBMSs and their specific languages, improving the developers' productivity, mainly when a different DBMS is used. The results of some experiments using O-ODM are shown.

  20. A Relational Database System for Student Use.

    Science.gov (United States)

    Fertuck, Len

    1982-01-01

    Describes an APL implementation of a relational database system suitable for use in a teaching environment in which database development and database administration are studied, and discusses the functions of the user and the database administrator. An appendix illustrating system operation and an eight-item reference list are attached. (Author/JL)

  1. Glycemic control and diabetes-related health care costs in type 2 diabetes; retrospective analysis based on clinical and administrative databases.

    Science.gov (United States)

    Degli Esposti, Luca; Saragoni, Stefania; Buda, Stefano; Sturani, Alessandra; Degli Esposti, Ezio

    2013-01-01

    Diabetes is one of the most prevalent chronic diseases, and its prevalence is predicted to increase in the next two decades. Diabetes imposes a staggering financial burden on the health care system, so information about the costs and experiences of collecting and reporting quality measures of data is vital for practices deciding whether to adopt quality improvements or monitor existing initiatives. The aim of this study was to quantify the association between health care costs and level of glycemic control in patients with type 2 diabetes using clinical and administrative databases. A retrospective analysis using a large administrative database and a clinical registry containing laboratory results was performed. Patients were subdivided according to their glycated hemoglobin level. Multivariate analyses were used to control for differences in potential confounding factors, including age, gender, Charlson comorbidity index, presence of dyslipidemia, hypertension, or cardiovascular disease, and degree of adherence with antidiabetic drugs among the study groups. Of the total population of 700,000 subjects, 31,022 were identified as being diabetic (4.4% of the entire population). Of these, 21,586 met the study inclusion criteria. In total, 31.5% of patients had very poor glycemic control and 25.7% had excellent control. Over 2 years, the mean diabetes-related cost per person was: €1291.56 in patients with excellent control; €1545.99 in those with good control; €1584.07 in those with fair control; €1839.42 in those with poor control; and €1894.80 in those with very poor control. After adjustment, compared with the group having excellent control, the estimated excess cost per person associated with the groups with good control, fair control, poor control, and very poor control was €219.28, €264.65, €513.18, and €564.79, respectively. Many patients showed suboptimal glycemic control. Lower levels of glycated hemoglobin were associated with lower diabetes-related

  2. Generic Entity Resolution in Relational Databases

    Science.gov (United States)

    Sidló, Csaba István

    Entity Resolution (ER) covers the problem of identifying distinct representations of real-world entities in heterogeneous databases. We consider the generic formulation of ER problems (GER) with exact outcome. In practice, input data usually resides in relational databases and can grow to huge volumes. Yet, typical solutions described in the literature employ standalone memory resident algorithms. In this paper we utilize facilities of standard, unmodified relational database management systems (RDBMS) to enhance the efficiency of GER algorithms. We study and revise the problem formulation, and propose practical and efficient algorithms optimized for RDBMS external memory processing. We outline a real-world scenario and demonstrate the advantage of algorithms by performing experiments on insurance customer data.
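
    The RDBMS-resident approach can be sketched with a blocking self-join: a cheap key (here, postal code) restricts the candidate pairs before a match rule is applied, so the work stays inside the database engine. The table, columns, and the crude shared-surname rule are invented for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE customer (id INTEGER, name TEXT, zip TEXT);
INSERT INTO customer VALUES
    (1, 'J. Smith', '10001'),
    (2, 'John Smith', '10001'),
    (3, 'A. Jones', '90210');
""")
# Blocking on zip keeps the self-join small; the surname comparison stands in
# for a real similarity function.
pairs = con.execute("""
    SELECT a.id, b.id FROM customer a
    JOIN customer b ON a.zip = b.zip AND a.id < b.id
    WHERE substr(a.name, instr(a.name, ' ') + 1)
        = substr(b.name, instr(b.name, ' ') + 1)
""").fetchall()
print(pairs)   # [(1, 2)]
```

    The `a.id < b.id` condition is the standard trick for emitting each candidate pair once, and letting the RDBMS evaluate the join is what allows the method to scale past memory-resident algorithms.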

  3. Relational Database Design in Information Science Education.

    Science.gov (United States)

    Brooks, Terrence A.

    1985-01-01

    Reports on database management system (dbms) applications designed by library school students for university community at University of Iowa. Three dbms design issues are examined: synthesis of relations, analysis of relations (normalization procedure), and data dictionary usage. Database planning prior to automation using data dictionary approach…

  4. Completeness and validity in a national clinical thyroid cancer database

    DEFF Research Database (Denmark)

    Londero, Stefano Christian; Mathiesen, Jes Sloth; Krogdahl, Annelise

    2014-01-01

    BACKGROUND: Although a prospective national clinical thyroid cancer database (DATHYRCA) has been active in Denmark since January 1, 1996, no assessment of data quality has been performed. The purpose of the study was to evaluate completeness and data validity in the Danish national clinical thyroid cancer database: DATHYRCA. STUDY DESIGN AND SETTING: National prospective cohort. Denmark; population 5.5 million. Completeness of case ascertainment was estimated by the independent case ascertainment method using three governmental registries as a reference. The reabstracted record method was used to appraise the validity. For validity assessment 100 cases were randomly selected from the DATHYRCA database; medical records were used as a reference. RESULT: The database held 1934 cases of thyroid carcinoma and completeness of case ascertainment was estimated to 90.9%. Completeness of registration...

  5. Assessment of COPD-related outcomes via a national electronic medical record database.

    Science.gov (United States)

    Asche, Carl; Said, Quayyim; Joish, Vijay; Hall, Charles Oaxaca; Brixner, Diana

    2008-01-01

    The technology and sophistication of healthcare utilization databases have expanded over the last decade to include results of lab tests, vital signs, and other clinical information. This review provides an assessment of the methodological and analytical challenges of conducting chronic obstructive pulmonary disease (COPD) outcomes research in a national electronic medical records (EMR) dataset and its potential application towards the assessment of national health policy issues, as well as a description of the challenges or limitations. An EMR database and its application to measuring outcomes for COPD are described. The ability to measure adherence to the COPD evidence-based practice guidelines, generated by the NIH and HEDIS quality indicators, in this database was examined. Case studies, before and after their publication, were used to assess the adherence to guidelines and gauge the conformity to quality indicators. EMR was the only source of information for pulmonary function tests, but low frequency in ordering by primary care was an issue. The EMR data can be used to explore impact of variation in healthcare provision on clinical outcomes. The EMR database permits access to specific lab data and biometric information. The richness and depth of information on "real world" use of health services for large population-based analytical studies at relatively low cost render such databases an attractive resource for outcomes research. Various sources of information exist to perform outcomes research. It is important to understand the desired endpoints of such research and choose the appropriate database source.

  6. Performance assessment of EMR systems based on post-relational database.

    Science.gov (United States)

    Yu, Hai-Yan; Li, Jing-Song; Zhang, Xiao-Guang; Tian, Yu; Suzuki, Muneou; Araki, Kenji

    2012-08-01

    Post-relational databases provide high performance and are currently widely used in American hospitals. As few hospital information systems (HIS) in either China or Japan are based on post-relational databases, here we introduce a new-generation electronic medical records (EMR) system called Hygeia, which was developed with the post-relational database Caché and the latest platform Ensemble. Utilizing the benefits of a post-relational database, Hygeia is equipped with an "integration" feature that allows all the system users to access data-with a fast response time-anywhere and at anytime. Performance tests of databases in EMR systems were implemented in both China and Japan. First, a comparison test was conducted between a post-relational database, Caché, and a relational database, Oracle, embedded in the EMR systems of a medium-sized first-class hospital in China. Second, a user terminal test was done on the EMR system Izanami, which is based on the identical database Caché and operates efficiently at the Miyazaki University Hospital in Japan. The results proved that the post-relational database Caché works faster than the relational database Oracle and showed perfect performance in the real-time EMR system.

  7. Enhanced DIII-D Data Management Through a Relational Database

    Science.gov (United States)

    Burruss, J. R.; Peng, Q.; Schachter, J.; Schissel, D. P.; Terpstra, T. B.

    2000-10-01

    A relational database is being used to serve data about DIII-D experiments. The database is optimized for queries across multiple shots, allowing for rapid data mining by SQL-literate researchers. The relational database relates different experiments and datasets, thus providing a big picture of DIII-D operations. Users are encouraged to add their own tables to the database. Summary physics quantities about DIII-D discharges are collected and stored in the database automatically. Meta-data about code runs, MDSplus usage, and visualization tool usage are collected, stored in the database, and later analyzed to improve computing. Documentation on the database may be accessed through programming languages such as C, Java, and IDL, or through ODBC compliant applications such as Excel and Access. A database-driven web page also provides a convenient means for viewing database quantities through the World Wide Web. Demonstrations will be given at the poster.

  8. A new relational database structure and online interface for the HITRAN database

    Science.gov (United States)

    Hill, Christian; Gordon, Iouli E.; Rothman, Laurence S.; Tennyson, Jonathan

    2013-11-01

    A new format for the HITRAN database is proposed. By storing the line-transition data in a number of linked tables described by a relational database schema, it is possible to overcome the limitations of the existing format, which have become increasingly apparent over the last few years as new and more varied data are being used by radiative-transfer models. Although the database in the new format can be searched using the well-established Structured Query Language (SQL), a web service, HITRANonline, has been deployed to allow users to make most common queries of the database using a graphical user interface in a web page. The advantages of the relational form of the database to ensuring data integrity and consistency are explored, and the compatibility of the online interface with the emerging standards of the Virtual Atomic and Molecular Data Centre (VAMDC) project is discussed. In particular, the ability to access HITRAN data using a standard query language from other websites, command line tools and from within computer programs is described.

  9. MARC and Relational Databases.

    Science.gov (United States)

    Llorens, Jose; Trenor, Asuncion

    1993-01-01

    Discusses the use of MARC format in relational databases and addresses problems of incompatibilities. A solution is presented that is in accordance with Open Systems Interconnection (OSI) standards and is based on experiences at the library of the Universidad Politecnica de Valencia (Spain). (four references) (EA)

  10. Glycemic control and diabetes-related health care costs in type 2 diabetes; retrospective analysis based on clinical and administrative databases

    Directory of Open Access Journals (Sweden)

    Degli Esposti L

    2013-05-01

    Full Text Available Luca Degli Esposti,1 Stefania Saragoni,1 Stefano Buda,1 Alessandra Sturani,2 Ezio Degli Esposti1 (1CliCon Srl, Health, Economics and Outcomes Research, Ravenna, Italy; 2Nephrology and Dialysis Unit, Santa Maria delle Croci Hospital, Ravenna, Italy). Background: Diabetes is one of the most prevalent chronic diseases, and its prevalence is predicted to increase in the next two decades. Diabetes imposes a staggering financial burden on the health care system, so information about the costs and experiences of collecting and reporting quality measures of data is vital for practices deciding whether to adopt quality improvements or monitor existing initiatives. The aim of this study was to quantify the association between health care costs and level of glycemic control in patients with type 2 diabetes using clinical and administrative databases. Methods: A retrospective analysis using a large administrative database and a clinical registry containing laboratory results was performed. Patients were subdivided according to their glycated hemoglobin level. Multivariate analyses were used to control for differences in potential confounding factors, including age, gender, Charlson comorbidity index, presence of dyslipidemia, hypertension, or cardiovascular disease, and degree of adherence with antidiabetic drugs among the study groups. Results: Of the total population of 700,000 subjects, 31,022 were identified as being diabetic (4.4% of the entire population). Of these, 21,586 met the study inclusion criteria. In total, 31.5% of patients had very poor glycemic control and 25.7% had excellent control. Over 2 years, the mean diabetes-related cost per person was: €1291.56 in patients with excellent control; €1545.99 in those with good control; €1584.07 in those with fair control; €1839.42 in those with poor control; and €1894.80 in those with very poor control. After adjustment, compared with the group having excellent control, the estimated excess cost

  11. Long-Term Collaboration Network Based on ClinicalTrials.gov Database in the Pharmaceutical Industry

    Directory of Open Access Journals (Sweden)

    Heyoung Yang

    2018-01-01

    Full Text Available Increasing costs, risks, and productivity problems in the pharmaceutical industry are important recent issues in the biomedical field. Open innovation is proposed as a solution to these issues. However, little statistical analysis related to collaboration in the pharmaceutical industry has been conducted so far. Meanwhile, not many cases have analyzed the clinical trials database, even though it is the information source with the widest coverage for the pharmaceutical industry. The purpose of this study is to test the clinical trials information as a probe for observing the status of the collaboration network and open innovation in the pharmaceutical industry. This study applied the social network analysis method to clinical trials data from 1980 to 2016 in ClinicalTrials.gov. Data were divided into four time periods—1980s, 1990s, 2000s, and 2010s—and the collaboration network was constructed for each time period. The characteristic of each network was investigated. The types of agencies participating in the clinical trials were classified as a university, national institute, company, or other, and the major players in the collaboration networks were identified. This study showed some phenomena related to the pharmaceutical industry that could provide clues to policymakers about open innovation. If follow-up studies were conducted, the utilization of the clinical trial database could be further expanded, which is expected to help open innovation in the pharmaceutical industry.

  12. An Introduction to the DB Relational Database Management System

    OpenAIRE

    Ward, J.R.

    1982-01-01

    This paper is an introductory guide to using the Db programs to maintain and query a relational database on the UNIX operating system. In the past decade, increasing interest has been shown in the development of relational database management systems. Db is an attempt to incorporate a flexible and powerful relational database system within the user environment presented by the UNIX operating system. The family of Db programs is useful for maintaining a database of information that i...

  13. Existing data sources for clinical epidemiology: Aarhus University Clinical Trial Candidate Database, Denmark.

    Science.gov (United States)

    Nørrelund, Helene; Mazin, Wiktor; Pedersen, Lars

    2014-01-01

    Denmark is facing a reduction in clinical trial activity as the pharmaceutical industry has moved trials to low-cost emerging economies. Competitiveness in industry-sponsored clinical research depends on speed, quality, and cost. Because Denmark is widely recognized as a region that generates high quality data, an enhanced ability to attract future trials could be achieved if speed can be improved by taking advantage of the comprehensive national and regional registries. A "single point-of-entry" system has been established to support collaboration between hospitals and industry. When assisting industry in early-stage feasibility assessments, potential trial participants are identified by use of registries to shorten the clinical trial startup times. The Aarhus University Clinical Trial Candidate Database consists of encrypted data from the Danish National Registry of Patients allowing an immediate estimation of the number of patients with a specific discharge diagnosis in each hospital department or outpatient specialist clinic in the Central Denmark Region. The free access to health care, thorough monitoring of patients who are in contact with the health service, completeness of registration at the hospital level, and ability to link all databases are competitive advantages in an increasingly complex clinical trial environment.

  14. Modification Semantics in Now-Relative Databases

    DEFF Research Database (Denmark)

    Torp, Kristian; Jensen, Christian Søndergaard; Snodgrass, R. T.

    2004-01-01

    Most real-world databases record time-varying information. In such databases, the notion of "the current time," or NOW, occurs naturally and prominently. For example, when capturing the past states of a relation using begin and end time columns, tuples that are part of the current state have some past time as their begin time and NOW as their end time. While the semantics of such variable databases has been described in detail and is well understood, the modification of variable databases remains unexplored. This paper defines the semantics of modifications involving the variable NOW. More specifically, the problems with modifications in the presence of NOW are explored, illustrating that the main problems are with modifications of tuples that reach into the future. The paper defines the semantics of modifications (including insertions, deletions, and updates) of databases without NOW, with NOW...
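
    The close-then-reopen behaviour for updating a current (NOW-terminated) tuple can be sketched as follows; using `date.max` as a stand-in for the variable NOW is an assumption for illustration, not the paper's formalism:

```python
from datetime import date

NOW = date.max   # sentinel standing in for the variable NOW

# Each version of a tuple carries [begin, end) timestamps; current versions end at NOW.
history = [{"id": 7, "dept": "A", "begin": date(2020, 1, 1), "end": NOW}]

def update(history, key, new_values, today):
    """Close the current version at `today` and open a new one ending at NOW."""
    for row in history:
        if row["id"] == key and row["end"] == NOW:
            row["end"] = today                      # old version becomes history
            history.append({**row, **new_values,
                            "begin": today, "end": NOW})  # new current version
            return

update(history, 7, {"dept": "B"}, date(2023, 6, 1))
current = [r for r in history if r["end"] == NOW]
print(current)   # one row: dept 'B', begin 2023-06-01
```

    The old version survives with a closed interval, so queries "as of" any past date still see the state that held then, which is the point of retaining NOW-terminated tuples.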

  15. amamutdb.no: A relational database for MAN2B1 allelic variants that compiles genotypes, clinical phenotypes, and biochemical and structural data of mutant MAN2B1 in α-mannosidosis.

    Science.gov (United States)

    Riise Stensland, Hilde Monica Frostad; Frantzen, Gabrio; Kuokkanen, Elina; Buvang, Elisabeth Kjeldsen; Klenow, Helle Bagterp; Heikinheimo, Pirkko; Malm, Dag; Nilssen, Øivind

    2015-06-01

    α-Mannosidosis is an autosomal recessive lysosomal storage disorder caused by mutations in the MAN2B1 gene, encoding lysosomal α-mannosidase. The disorder is characterized by a range of clinical phenotypes of which the major manifestations are mental impairment, hearing impairment, skeletal changes, and immunodeficiency. Here, we report an α-mannosidosis mutation database, amamutdb.no, which has been constructed as a publicly accessible online resource for recording and analyzing MAN2B1 variants (http://amamutdb.no). Our aim has been to offer structured and relational information on MAN2B1 mutations and genotypes along with associated clinical phenotypes. Classifying missense mutations as pathogenic or benign is a challenge. Therefore, they have been given special attention as we have compiled all available data relating to their biochemical, functional, and structural properties. The α-mannosidosis mutation database is comprehensive and relational in the sense that information can be retrieved and compiled across datasets; hence, it will facilitate diagnostics and increase our understanding of the clinical and molecular aspects of α-mannosidosis. We believe that the amamutdb.no structure and architecture will be applicable to the development of databases for any monogenic disorder. © 2015 WILEY PERIODICALS, INC.

  16. Constructing a Geology Ontology Using a Relational Database

    Science.gov (United States)

    Hou, W.; Yang, L.; Yin, S.; Ye, J.; Clarke, K.

    2013-12-01

    In the geology community, the creation of a common geology ontology has become a useful means to solve problems of data integration, knowledge transformation and the interoperation of multi-source, heterogeneous and multiple scale geological data. Currently, human-computer interaction methods and relational database-based methods are the primary ontology construction methods. Some human-computer interaction methods such as the Geo-rule based method, the ontology life cycle method and the module design method have been proposed for applied geological ontologies. Essentially, the relational database-based method is a reverse engineering of abstracted semantic information from an existing database. The key is to construct rules for the transformation of database entities into the ontology. Relative to the human-computer interaction method, relational database-based methods can use existing resources and the stated semantic relationships among geological entities. However, two problems challenge their development and application. One is the transformation of multiple inheritances and nested relationships and their representation in an ontology. The other is that most of these methods do not measure the semantic retention of the transformation process. In this study, we focused on constructing a rule set to convert the semantics in a geological database into a geological ontology. According to the relational schema of a geological database, a conversion approach is presented to convert a geological spatial database to an OWL-based geological ontology, which is based on identifying semantics such as entities, relationships, inheritance relationships, nested relationships and cluster relationships. The semantic integrity of the transformation was verified using an inverse mapping process. In the geological ontology, inheritance and union operations between superclass and subclass were used to represent the nested relationship in a geochronology and the multiple inheritances
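
    The transformation-rule idea can be illustrated with a toy rule set, assuming (plausibly, but as an assumption rather than the authors' published rules) that tables map to OWL classes, plain columns to datatype properties, and foreign keys to object properties. The schema below is invented.

```python
# Toy relational-schema-to-ontology conversion. The mapping rules and the
# geological schema are illustrative assumptions, not the paper's rule set.
schema = {
    "Formation": {"columns": ["name", "age_ma"], "foreign_keys": {}},
    "Sample":    {"columns": ["depth_m"], "foreign_keys": {"formation_id": "Formation"}},
}

def schema_to_triples(schema):
    triples = []
    for table, spec in schema.items():
        # Rule 1: each table becomes an OWL class.
        triples.append((table, "rdf:type", "owl:Class"))
        # Rule 2: plain columns become datatype properties.
        for col in spec["columns"]:
            triples.append((f"{table}.{col}", "rdf:type", "owl:DatatypeProperty"))
        # Rule 3: foreign keys become object properties linking the two classes.
        for fk, target in spec["foreign_keys"].items():
            triples.append((f"{table}.{fk}", "rdf:type", "owl:ObjectProperty"))
            triples.append((f"{table}.{fk}", "rdfs:range", target))
    return triples

triples = schema_to_triples(schema)
```

    A real converter must additionally handle the inheritance, nested and cluster relationships the abstract highlights, which is where the hard cases lie.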

  17. Dynamic tables: an architecture for managing evolving, heterogeneous biomedical data in relational database management systems.

    Science.gov (United States)

    Corwin, John; Silberschatz, Avi; Miller, Perry L; Marenco, Luis

    2007-01-01

    Data sparsity and schema evolution issues affecting clinical informatics and bioinformatics communities have led to the adoption of vertical or object-attribute-value-based database schemas to overcome limitations posed when using conventional relational database technology. This paper explores these issues and discusses why biomedical data are difficult to model using conventional relational techniques. The authors propose a solution to these obstacles based on a relational database engine using a sparse, column-store architecture. The authors provide benchmarks comparing the performance of queries and schema-modification operations using three different strategies: (1) the standard conventional relational design; (2) past approaches used by biomedical informatics researchers; and (3) their sparse, column-store architecture. The performance results show that their architecture is a promising technique for storing and processing many types of data that are not handled well by the other two semantic data models.
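
    The vertical, object-attribute-value (EAV) alternative that the paper benchmarks against can be sketched as follows: sparse attributes become rows, so a new attribute needs no ALTER TABLE and no NULL-heavy wide table, at the price of a pivot at query time. The attribute names are invented.

```python
import sqlite3

# Minimal EAV sketch of the "vertical" schema the abstract refers to.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE eav (entity INTEGER, attribute TEXT, value TEXT)")
rows = [
    (1, "diagnosis", "asthma"),
    (1, "fev1_l", "2.9"),
    (2, "diagnosis", "copd"),   # patient 2 has no fev1_l: no row at all, not a NULL
]
conn.executemany("INSERT INTO eav VALUES (?, ?, ?)", rows)

# Pivoting back to one row per patient is the query-time cost of this design.
pivot = conn.execute("""
    SELECT entity,
           MAX(CASE WHEN attribute = 'diagnosis' THEN value END) AS diagnosis,
           MAX(CASE WHEN attribute = 'fev1_l' THEN value END) AS fev1_l
    FROM eav GROUP BY entity ORDER BY entity
""").fetchall()
```

    The sparse column-store architecture the authors propose aims to keep this schema flexibility while avoiding the pivot overhead.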

  18. ALGORITHM FOR DYNAMIC SCALING RELATIONAL DATABASE IN CLOUDS

    Directory of Open Access Journals (Sweden)

    Alexander V. Boichenko

    2014-01-01

    Full Text Available This article analyzes the main methods of scaling databases (replication, sharding) and their support in the popular relational databases and NoSQL solutions with different data models: document-oriented, key-value, column-oriented and graph. The article presents an algorithm for the dynamic scaling of a relational database (DB) that takes into account the specifics of the different types of logical database model. This article was prepared with the support of RFBR (grant № 13-07-00749).
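
    One building block of any such scaling scheme is a deterministic routing function for sharding. The sketch below is a generic illustration of hash sharding, not the article's algorithm; the shard count and key format are assumptions.

```python
import hashlib

# Hash-based shard routing sketch. NUM_SHARDS and the key scheme are
# illustrative assumptions.
NUM_SHARDS = 4

def shard_for(key: str) -> int:
    # Use a stable hash (md5, not Python's randomized hash()) so that
    # routing decisions survive process restarts.
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % NUM_SHARDS

# Every row with the same key is always routed to the same shard,
# so reads and writes for one entity never fan out.
shards_hit = {shard_for(f"patient:{i}") for i in range(1000)}
```

    Dynamic scaling then reduces to changing NUM_SHARDS without rerouting everything, which is why real systems prefer consistent hashing or range-based directories over a bare modulus.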

  19. Characterization of drug-related problems identified by clinical pharmacy staff at Danish hospitals

    DEFF Research Database (Denmark)

    Kjeldsen, Lene Juel; Birkholm, Trine; Fischer, Hanne

    2014-01-01

    Background In 2010, a database of drug-related problems (DRPs) was implemented to assist clinical pharmacy staff in documenting clinical pharmacy activities locally. A study of quality, reliability and generalisability showed that national analyses of the data could be conducted. Analyses at the national level may help identify and prevent DRPs by performing national interventions. Objective The aim of the study was to explore the DRP characteristics as documented by clinical pharmacy staff at hospital pharmacies in the Danish DRP-database during a 3-year period. Setting Danish hospital pharmacies. Method Data documented in the DRP-database during the initial 3 years after implementation were analyzed retrospectively. The DRP-database contains DRPs reported at hospitals by clinical pharmacy staff. The analyses focused on DRP categories, implementation rates and drugs associated with the DRPs. Main...

  20. Building spatio-temporal database model based on ontological approach using relational database environment

    International Nuclear Information System (INIS)

    Mahmood, N.; Burney, S.M.A.

    2017-01-01

    Everything in this world is encapsulated by the fence of space and time. Our daily life activities are closely linked and related with other objects in our vicinity. Therefore, a strong relationship exists with our current location and time (including past, present and future), and the events through which we move as objects also affect our activities in life. Ontology development and its integration with databases are vital for the true understanding of complex systems involving both spatial and temporal dimensions. In this paper we propose a conceptual framework for building a spatio-temporal database model based on an ontological approach. We have used the relational data model for modelling spatio-temporal data content, and present our methodology with spatio-temporal ontological aspects and their transformation into a spatio-temporal database model. We illustrate the implementation of our conceptual model through a case study related to a cultivated land parcel used for agriculture, to exhibit the spatio-temporal behaviour of agricultural land and related entities. Moreover, it provides a generic approach for designing spatio-temporal databases based on ontology. The proposed model is capable of capturing the ontological and, to some extent, epistemological commitments, and of building a spatio-temporal ontology and transforming it into a spatio-temporal data model. Finally, we highlight the existing and future research challenges. (author)
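
    The land-parcel case study suggests a simple valid-time pattern in a relational schema: a spatial attribute stored alongside valid-time columns, supporting point-in-time queries. The sketch below uses assumed table and column names and a trivially simplified geometry.

```python
import sqlite3

# Spatio-temporal sketch: one land parcel whose crop changes over time.
# Geometry is stored as plain WKT text for illustration (a real system
# would use a spatial extension). All names are assumptions.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE parcel_use (
    parcel_id INTEGER, wkt TEXT, crop TEXT, valid_from TEXT, valid_to TEXT)""")
conn.executemany("INSERT INTO parcel_use VALUES (?, ?, ?, ?, ?)", [
    (7, "POLYGON((0 0,1 0,1 1,0 1,0 0))", "wheat", "2015-01-01", "2017-12-31"),
    (7, "POLYGON((0 0,1 0,1 1,0 1,0 0))", "maize", "2018-01-01", "2020-12-31"),
])

def crop_at(parcel_id, day):
    # ISO-8601 date strings compare correctly as text, so BETWEEN works.
    row = conn.execute(
        "SELECT crop FROM parcel_use "
        "WHERE parcel_id = ? AND ? BETWEEN valid_from AND valid_to",
        (parcel_id, day)).fetchone()
    return row[0] if row else None
```

    The ontology layer the paper proposes would sit above such tables, making the valid-time and spatial semantics explicit rather than implicit in column conventions.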

  1. Studies on preparation of the database system for clinical records of atomic bomb survivors

    International Nuclear Information System (INIS)

    Nakamura, Tsuyoshi

    1981-01-01

    Construction of the database system aimed at multipurpose application of data on clinical medicine was studied through the preparation of database system for clinical records of atomic bomb survivors. The present database includes the data about 110,000 atomic bomb survivors in Nagasaki City. This study detailed: (1) Analysis of errors occurring in a period from generation of data in the clinical field to input into the database, and discovery of a highly precise, effective method of input. (2) Development of a multipurpose program for uniform processing of data on physical examinations from many organizations. (3) Development of a record linkage method for voluminous files which are essential in the construction of a large-scale medical information system. (4) A database model suitable for clinical research and a method for designing a segment suitable for physical examination data. (Chiba, N.)

  2. The database for aggregate analysis of ClinicalTrials.gov (AACT) and subsequent regrouping by clinical specialty.

    Directory of Open Access Journals (Sweden)

    Asba Tasneem

    Full Text Available BACKGROUND: The ClinicalTrials.gov registry provides information regarding characteristics of past, current, and planned clinical studies to patients, clinicians, and researchers; in addition, registry data are available for bulk download. However, issues related to data structure, nomenclature, and changes in data collection over time present challenges to the aggregate analysis and interpretation of these data in general and to the analysis of trials according to clinical specialty in particular. Improving usability of these data could enhance the utility of ClinicalTrials.gov as a research resource. METHODS/PRINCIPAL RESULTS: The purpose of our project was twofold. First, we sought to extend the usability of ClinicalTrials.gov for research purposes by developing a database for aggregate analysis of ClinicalTrials.gov (AACT) that contains data from the 96,346 clinical trials registered as of September 27, 2010. Second, we developed and validated a methodology for annotating studies by clinical specialty, using a custom taxonomy employing Medical Subject Heading (MeSH) terms applied by an NLM algorithm, as well as MeSH terms and other disease condition terms provided by study sponsors. Clinical specialists reviewed and annotated MeSH and non-MeSH disease condition terms, and an algorithm was created to classify studies into clinical specialties based on both MeSH and non-MeSH annotations. False positives and false negatives were evaluated by comparing algorithmic classification with manual classification for three specialties. CONCLUSIONS/SIGNIFICANCE: The resulting AACT database features study design attributes parsed into discrete fields, integrated metadata, and an integrated MeSH thesaurus, and is available for download as Oracle extracts (.dmp file) and text format. This publicly-accessible dataset will facilitate analysis of studies and permit detailed characterization and analysis of the U.S. clinical trials enterprise as a whole. In
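
    The term-based specialty classification can be caricatured in a few lines: map annotated condition terms to specialties, then label a study by every specialty its terms hit. The term lists below are invented stand-ins for AACT's curated MeSH and non-MeSH annotations.

```python
# Toy specialty classifier in the spirit of the AACT methodology.
# The specialty vocabulary is an invented illustration, not AACT's taxonomy.
SPECIALTY_TERMS = {
    "cardiology": {"myocardial infarction", "heart failure"},
    "oncology": {"breast neoplasms", "lymphoma"},
}

def classify(condition_terms):
    # A study may hit several specialties; normalize case before matching.
    terms = {t.lower() for t in condition_terms}
    return sorted(s for s, vocab in SPECIALTY_TERMS.items() if terms & vocab)
```

    Evaluating such a classifier against manual labels, as the authors did for three specialties, surfaces both false positives (over-broad terms) and false negatives (sponsor terms missing from the vocabulary).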

  3. A C programmer's view of a relational database

    International Nuclear Information System (INIS)

    Clifford, T.; Katz, R.; Griffiths, C.

    1990-01-01

    The AGS Distributed Control System (AGSDCS) uses a relational database (Interbase) for the storage of all data on the host system network. This includes the static data which describes the components of the accelerator complex, as well as data for application-program setup and data records that are used in analysis. By creating a mapping of each relation in the database to a C record and providing general tools for relation (record) access, all the data in the database is available in a natural fashion to all the C programs on any of the nodes of the control system. In this paper the correspondence between the Interbase relations and the C structure is detailed with examples of C 'typedefs' and relation definitions. It is also shown how the relations can be put into memory and linked (related) together when fast access is needed by programs. (orig.)
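
    A loose analogue of the relation-to-record mapping, in Python rather than C: a generic row factory builds a typed record from any relation's column names, much as the AGSDCS tools generate C structs from relation definitions. The table and its contents are invented for illustration.

```python
import sqlite3
from collections import namedtuple

# Sketch of mapping each relation to a record type. The magnet table is an
# invented stand-in for an accelerator-component relation.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE magnet (name TEXT, current_a REAL)")
conn.execute("INSERT INTO magnet VALUES ('D1', 1500.0)")

def record_factory(cursor, row):
    # Build a record type from the cursor's column names, analogous to a
    # C typedef generated from the relation definition.
    Rec = namedtuple("Rec", [col[0] for col in cursor.description])
    return Rec(*row)

conn.row_factory = record_factory
rec = conn.execute("SELECT name, current_a FROM magnet").fetchone()
# Fields are now accessed by name, as with members of a C struct.
```

    The benefit is the one described in the abstract: every program sees database rows "in a natural fashion" as typed records rather than anonymous tuples.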

  4. Clinical and forensic signs related to opioids abuse.

    Science.gov (United States)

    Dinis-Oliveira, Ricardo Jorge; Carvalho, Felix; Moreira, Roxana; Duarte, Jose Alberto; Proenca, Jorge Brandao; Santos, Agostinho; Magalhaes, Teresa

    2012-12-01

    For good performance in Clinical and Forensic Toxicology it is important to be aware of the biological and non-biological signs and symptoms related to xenobiotic exposure. This manuscript critically highlights and analyzes clinical and forensic imaging related to opioid abuse. In particular, respiratory depression, track marks and hemorrhages, skin "popping", practices of phlebotomy, tissue necrosis and ulceration, dermatitis, tongue hyperpigmentation, "coma blisters", intra-arterial administration, candidiasis, wounds associated with anthrax- or clostridium-contaminated heroin, desomorphine-related lesions and characteristic non-biological evidence are some commonly reported findings in opioid abuse, which will be discussed. For this purpose, clinical and forensic cases from our database (National Institute of Legal Medicine and Forensic Sciences, North Branch, Portugal), in addition to literature data, are reviewed.

  5. Why Save Your Course as a Relational Database?

    Science.gov (United States)

    Hamilton, Gregory C.; Katz, David L.; Davis, James E.

    2000-01-01

    Describes a system that stores course materials for computer-based training programs in a relational database called Of Course! Outlines the basic structure of the databases; explains distinctions between Of Course! and other authoring languages; and describes how data is retrieved from the database and presented to the student. (Author/LRW)

  6. Jelly Views : Extending Relational Database Systems Toward Deductive Database Systems

    Directory of Open Access Journals (Sweden)

    Igor Wojnicki

    2004-01-01

    Full Text Available This paper regards the Jelly View technology, which provides a new, practical methodology for knowledge decomposition, storage, and retrieval within Relational Database Management Systems (RDBMS). Intensional Knowledge clauses (rules) are decomposed and stored in the RDBMS, forming reusable components. The results of the rule-based processing are visible as regular views, accessible through SQL. From the end-user point of view the processing capability becomes unlimited (arbitrarily complex queries can be constructed using Intensional Knowledge), while the most external queries are expressed with standard SQL. The RDBMS functionality thus becomes extended toward that of Deductive Databases.
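
    The "rules as views" idea can be sketched with a classic deductive example: an intensional ancestor relation defined over an extensional parent relation and exposed as an ordinary SQL view. This is a sketch of the concept, not the Jelly View implementation.

```python
import sqlite3

# Deductive rule ancestor(A, D) :- parent(A, D).
#                ancestor(A, D) :- ancestor(A, X), parent(X, D).
# encoded as a recursive SQL view, so end users just query "ancestor".
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE parent (p TEXT, c TEXT)")
conn.executemany("INSERT INTO parent VALUES (?, ?)",
                 [("ann", "bob"), ("bob", "cid")])
conn.execute("""
    CREATE VIEW ancestor AS
    WITH RECURSIVE anc(a, d) AS (
        SELECT p, c FROM parent
        UNION
        SELECT anc.a, parent.c FROM anc JOIN parent ON anc.d = parent.p
    )
    SELECT a, d FROM anc
""")
ancestors = set(conn.execute("SELECT a, d FROM ancestor").fetchall())
```

    The rule-based processing stays hidden behind the view, which is exactly the user-facing property the abstract emphasizes: arbitrarily deep derivations, plain SQL on the outside.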

  7. Advantages and disadvantages of relational and non-relational (NoSQL) databases for analytical tasks

    OpenAIRE

    Klapač, Milan

    2015-01-01

    This work focuses on NoSQL databases, their use for analytical tasks and on comparison of NoSQL databases with relational and OLAP databases. The aim is to analyse the benefits of NoSQL databases and their use for analytical purposes. The first part presents the basic principles of Business Intelligence, Data Warehousing, and Big Data. The second part deals with the key features of relational and NoSQL databases. The last part of the thesis describes the properties of four basic types of NoSQ...

  8. Relational Database Technology: An Overview.

    Science.gov (United States)

    Melander, Nicole

    1987-01-01

    Describes the development of relational database technology as it applies to educational settings. Discusses some of the new tools and models being implemented in an effort to provide educators with technologically advanced ways of answering questions about education programs and data. (TW)

  9. Object-relational database design-exploiting object orientation at the ...

    African Journals Online (AJOL)

    This paper applies the object-relational database paradigm in the design of a Health Management Information System. The class design, mapping of object classes to relational tables, the representation of inheritance hierarchies, and the appropriate database schema are all examined. Keywords: object relational ...

  10. The Steward Observatory asteroid relational database

    Science.gov (United States)

    Sykes, Mark V.; Alvarezdelcastillo, Elizabeth M.

    1991-01-01

    The Steward Observatory Asteroid Relational Database (SOARD) was created as a flexible tool for undertaking studies of asteroid populations and sub-populations, to probe the biases intrinsic to asteroid databases, to ascertain the completeness of data pertaining to specific problems, to aid in the development of observational programs, and to develop pedagogical materials. To date, SOARD has compiled an extensive list of data available on asteroids and made it accessible through a single menu-driven database program. Users may obtain tailored lists of asteroid properties for any subset of asteroids or output files which are suitable for plotting spectral data on individual asteroids. The program has online help as well as user and programmer documentation manuals. The SOARD already has provided data to fulfill requests by members of the astronomical community. The SOARD continues to grow as data is added to the database and new features are added to the program.

  11. Genealogical databases as a tool for extending follow-up in clinical reviews.

    Science.gov (United States)

    Ho, Thuy-Van; Chowdhury, Naweed; Kandl, Christopher; Hoover, Cindy; Robinson, Ann; Hoover, Larry

    2016-08-01

    Long-term follow-up in clinical reviews often presents significant difficulty with conventional medical records alone. Publicly accessible genealogical databases such as Ancestry.com provide another avenue for obtaining extended follow-up and added outcome information. No previous studies have described the use of genealogical databases in the follow-up of individual patients. Ancestry.com, the largest genealogical database in the United States, houses extensive demographic data on an increasing number of Americans. In a recent retrospective review of esthesioneuroblastoma patients treated at our institution, we used this resource to ascertain the outcomes of patients otherwise lost to follow-up. Additional information such as quality of life and supplemental treatments the patient may have received at home was obtained through direct contact with living relatives. The use of Ancestry.com resulted in a 25% increase (20 months) in follow-up duration as well as incorporation of an additional 7 patients in our study (18%) who would otherwise not have had adequate hospital chart data for inclusion. Many patients within this subset had more advanced disease or were remotely located from our institution. As such, exclusion of these outliers can impact the quality of subsequent outcome analysis. Online genealogical databases provide a unique resource of public information that is acceptable to institutional review boards for patient follow-up in clinical reviews. Utilization of Ancestry.com data led to significant improvement in follow-up duration and increased the number of patients with sufficient data that could be included in our retrospective study. © 2016 ARS-AAOA, LLC.

  12. Pro SQL Server 2012 relational database design and implementation

    CERN Document Server

    Davidson, Louis

    2012-01-01

    Learn effective and scalable database design techniques in a SQL Server environment. Pro SQL Server 2012 Relational Database Design and Implementation covers everything from design logic that business users will understand, all the way to the physical implementation of design in a SQL Server database. Grounded in best practices and a solid understanding of the underlying theory, Louis Davidson shows how to "get it right" in SQL Server database design and lay a solid groundwork for the future use of valuable business data. Gives a solid foundation in best practices and relational theory Covers

  13. A C programmer's view of a relational database

    International Nuclear Information System (INIS)

    Clifford, T.; Katz, R.; Griffiths, C.

    1989-01-01

    The AGS Distributed Control System (AGSDCS) uses a relational database (Interbase) for the storage of all data on the host system network. This includes the static data which describes the components of the accelerator complex, as well as data for application program setup and data records that are used in analysis. By creating a mapping of each relation in the database to a C record and providing general tools for relation (record) access, all the data in the database is available in a natural fashion (in structures) to all the C programs on any of the nodes of the control system. In this paper the correspondence between the Interbase relations and the C structures is detailed with examples of C typedefs and relation definitions. It is also shown how the relations can be put into memory and linked (related) together when fast access is needed by programs. 1 ref., 2 tabs

  14. Database Independent Migration of Objects into an Object-Relational Database

    CERN Document Server

    Ali, A; Munir, K; Waseem-Hassan, M; Willers, I

    2002-01-01

    CERN's (European Organization for Nuclear Research) WISDOM project [1] deals with the replication of data between homogeneous sources in a Wide Area Network (WAN) using the eXtensible Markup Language (XML). The last phase of the WISDOM (Wide-area, database Independent Serialization of Distributed Objects for data Migration) project [2] indicates that the future direction for this work is to incorporate heterogeneous sources, as compared to the homogeneous sources described by [3]. This work will become essential for the CERN community once the need arises to transfer their legacy data to some source other than Objectivity [4]. Oracle 9i, an Object-Relational Database (including support for abstract data types, ADTs), appears to be a potential candidate for the physics event store in the CERN CMS experiment, as suggested by [4] & [5]. Consequently this database has been selected for study. As a result of this work the HEP community will get a tool for migrating their data from Objectivity to Oracle9i.

  15. The Danish Testicular Cancer database

    Directory of Open Access Journals (Sweden)

    Daugaard G

    2016-10-01

    Full Text Available Gedske Daugaard,1 Maria Gry Gundgaard Kier,1 Mikkel Bandak,1 Mette Saksø Mortensen,1 Heidi Larsson,2 Mette Søgaard,2 Birgitte Groenkaer Toft,3 Birte Engvad,4 Mads Agerbæk,5 Niels Vilstrup Holm,6 Jakob Lauritsen1 1Department of Oncology 5073, Copenhagen University Hospital, Rigshospitalet, Copenhagen, 2Department of Clinical Epidemiology, Aarhus University Hospital, Aarhus, 3Department of Pathology, Copenhagen University Hospital, Rigshospitalet, Copenhagen, 4Department of Pathology, Odense University Hospital, Odense, 5Department of Oncology, Aarhus University Hospital, Aarhus, 6Department of Oncology, Odense University Hospital, Odense, Denmark Aim: The nationwide Danish Testicular Cancer database consists of a retrospective research database (DaTeCa database) and a prospective clinical database (Danish Multidisciplinary Cancer Group [DMCG] DaTeCa database). The aim is to improve the quality of care for patients with testicular cancer (TC) in Denmark, that is, by identifying risk factors for relapse, toxicity related to treatment, and focusing on late effects. Study population: All Danish male patients with a histologically verified germ cell cancer diagnosis in the Danish Pathology Registry are included in the DaTeCa databases. Data collection has been performed from 1984 to 2007 and from 2013 onward, respectively. Main variables and descriptive data: The retrospective DaTeCa database contains detailed information with more than 300 variables related to histology, stage, treatment, relapses, pathology, tumor markers, kidney function, lung function, etc. A questionnaire related to late effects has been conducted, which includes questions regarding social relationships, life situation, general health status, family background, diseases, symptoms, use of medication, marital status, psychosocial issues, fertility, and sexuality. TC survivors alive on October 2014 were invited to fill in this questionnaire including 160 validated questions

  16. A study on relational ENSDF databases and online services

    International Nuclear Information System (INIS)

    Fan Tieshuan; Song Xiangxiang; Ye Weiguo; Liu Wenlong; Feng Yuqing; Chen Jinxiang; Tang Guoyou; Shi Zhaoming; Guo Zhiyu; Huang Xiaolong; Liu Tingjin; China Inst. of Atomic Energy, Beijing

    2007-01-01

    A relational ENSDF library software is designed and released. Using relational databases, object-oriented programming and web-based technology, this software offers online data services from a centralized repository of data, including international ENSDF files for nuclear structure and decay data. The software can easily reconstruct nuclear data in the original ENSDF format from the relational database. The computer programs providing support for database management and online data services via the Internet are based on the Linux implementation of PHP and the MySQL software, and are platform independent in a wider sense. (authors)

  17. Index Selection in Relational Databases

    NARCIS (Netherlands)

    Choenni, R.S.; Blanken, Henk; Chang, S.C.

    Developing a tool that aims to support the physical design of relational databases cannot be done without considering the problem of index selection. Generally the problem is split into a primary and a secondary index selection problem, and the selection is done per table. Whereas much

  18. “NaKnowBase”: A Nanomaterials Relational Database

    Science.gov (United States)

    NaKnowBase is an internal relational database populated with data from peer-reviewed ORD nanomaterials research publications. The database focuses on papers describing the actions of nanomaterials in environmental or biological media including their interactions, transformations...

  19. DianaHealth.com, an On-Line Database Containing Appraisals of the Clinical Value and Appropriateness of Healthcare Interventions: Database Development and Retrospective Analysis.

    Science.gov (United States)

    Bonfill, Xavier; Osorio, Dimelza; Solà, Ivan; Pijoan, Jose Ignacio; Balasso, Valentina; Quintana, Maria Jesús; Puig, Teresa; Bolibar, Ignasi; Urrútia, Gerard; Zamora, Javier; Emparanza, José Ignacio; Gómez de la Cámara, Agustín; Ferreira-González, Ignacio

    2016-01-01

    To describe the development of a novel on-line database aimed to serve as a source of information concerning healthcare interventions appraised for their clinical value and appropriateness by several initiatives worldwide, and to present a retrospective analysis of the appraisals already included in the database. Database development and a retrospective analysis. The database DianaHealth.com is already on-line and it is regularly updated, independent, open access and available in English and Spanish. Initiatives are identified in medical news, in article references, and by contacting experts in the field. We include appraisals in the form of clinical recommendations, expert analyses, conclusions from systematic reviews, and original research that label any health care intervention as low-value or inappropriate. We obtain the information necessary to classify the appraisals according to type of intervention, specialties involved, publication year, authoring initiative, and key words. The database is accessible through a search engine which retrieves a list of appraisals and a link to the website where they were published. DianaHealth.com also provides a brief description of the initiatives and a section where users can report new appraisals or suggest new initiatives. From January 2014 to July 2015, the on-line database included 2940 appraisals from 22 initiatives: eleven campaigns gathering clinical recommendations from scientific societies, five sets of conclusions from literature review, three sets of recommendations from guidelines, two collections of articles on low clinical value in medical journals, and an initiative of our own. We have developed an open access on-line database of appraisals about healthcare interventions considered of low clinical value or inappropriate. DianaHealth.com could help physicians and other stakeholders make better decisions concerning patient care and healthcare systems sustainability. Future efforts should be focused on

  20. DianaHealth.com, an On-Line Database Containing Appraisals of the Clinical Value and Appropriateness of Healthcare Interventions: Database Development and Retrospective Analysis.

    Directory of Open Access Journals (Sweden)

    Xavier Bonfill

    Full Text Available To describe the development of a novel on-line database aimed to serve as a source of information concerning healthcare interventions appraised for their clinical value and appropriateness by several initiatives worldwide, and to present a retrospective analysis of the appraisals already included in the database. Database development and a retrospective analysis. The database DianaHealth.com is already on-line and it is regularly updated, independent, open access and available in English and Spanish. Initiatives are identified in medical news, in article references, and by contacting experts in the field. We include appraisals in the form of clinical recommendations, expert analyses, conclusions from systematic reviews, and original research that label any health care intervention as low-value or inappropriate. We obtain the information necessary to classify the appraisals according to type of intervention, specialties involved, publication year, authoring initiative, and key words. The database is accessible through a search engine which retrieves a list of appraisals and a link to the website where they were published. DianaHealth.com also provides a brief description of the initiatives and a section where users can report new appraisals or suggest new initiatives. From January 2014 to July 2015, the on-line database included 2940 appraisals from 22 initiatives: eleven campaigns gathering clinical recommendations from scientific societies, five sets of conclusions from literature review, three sets of recommendations from guidelines, two collections of articles on low clinical value in medical journals, and an initiative of our own. We have developed an open access on-line database of appraisals about healthcare interventions considered of low clinical value or inappropriate. DianaHealth.com could help physicians and other stakeholders make better decisions concerning patient care and healthcare systems sustainability. Future efforts should be

  1. “NaKnowBase”: A Nanomaterials Relational Database

    Science.gov (United States)

    NaKnowBase is a relational database populated with data from peer-reviewed ORD nanomaterials research publications. The database focuses on papers describing the actions of nanomaterials in environmental or biological media including their interactions, transformations and poten...

  2. The Danish Testicular Cancer database.

    Science.gov (United States)

    Daugaard, Gedske; Kier, Maria Gry Gundgaard; Bandak, Mikkel; Mortensen, Mette Saksø; Larsson, Heidi; Søgaard, Mette; Toft, Birgitte Groenkaer; Engvad, Birte; Agerbæk, Mads; Holm, Niels Vilstrup; Lauritsen, Jakob

    2016-01-01

    The nationwide Danish Testicular Cancer database consists of a retrospective research database (DaTeCa database) and a prospective clinical database (Danish Multidisciplinary Cancer Group [DMCG] DaTeCa database). The aim is to improve the quality of care for patients with testicular cancer (TC) in Denmark, that is, by identifying risk factors for relapse, toxicity related to treatment, and focusing on late effects. All Danish male patients with a histologically verified germ cell cancer diagnosis in the Danish Pathology Registry are included in the DaTeCa databases. Data collection has been performed from 1984 to 2007 and from 2013 onward, respectively. The retrospective DaTeCa database contains detailed information with more than 300 variables related to histology, stage, treatment, relapses, pathology, tumor markers, kidney function, lung function, etc. A questionnaire related to late effects has been conducted, which includes questions regarding social relationships, life situation, general health status, family background, diseases, symptoms, use of medication, marital status, psychosocial issues, fertility, and sexuality. TC survivors alive on October 2014 were invited to fill in this questionnaire including 160 validated questions. Collection of questionnaires is still ongoing. A biobank including blood/sputum samples for future genetic analyses has been established. Both samples related to DaTeCa and DMCG DaTeCa database are included. The prospective DMCG DaTeCa database includes variables regarding histology, stage, prognostic group, and treatment. The DMCG DaTeCa database has existed since 2013 and is a young clinical database. It is necessary to extend the data collection in the prospective database in order to answer quality-related questions. Data from the retrospective database will be added to the prospective data. This will result in a large and very comprehensive database for future studies on TC patients.

  3. Exploiting relational database technology in a GIS

    Science.gov (United States)

    Batty, Peter

    1992-05-01

    All systems for managing data face common problems such as backup, recovery, auditing, security, data integrity, and concurrent update. Other challenges include the ability to share data easily between applications and to distribute data across several computers, while continuing to manage the problems already mentioned. Geographic information systems are no exception and need to tackle all these issues. Standard relational database-management systems (RDBMSs) provide many features to help solve them. This paper describes how the IBM geoManager product approaches these issues by storing all its geographic data in a standard RDBMS in order to take advantage of such features. Areas in which standard RDBMS functions need to be extended are highlighted, and the way in which geoManager does this is explained. The performance implications of storing all data in the relational database are discussed. An important distinction, which needs to be made when considering the applicability of relational database technology to GIS, is drawn between the storage and management of geographic data on the one hand and the manipulation and analysis of geographic data on the other.
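The storage approach the abstract describes can be sketched with ordinary SQL: geographic features live in a plain relational table, so backup, security, and concurrency come from the RDBMS for free. The table, layer names, and point-only geometry below are hypothetical illustrations, not geoManager's actual schema.

```python
import sqlite3

# Hypothetical schema: point features stored as plain relational columns.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE feature (
    id    INTEGER PRIMARY KEY,
    layer TEXT NOT NULL,   -- thematic layer the feature belongs to
    x     REAL NOT NULL,   -- point geometry stored as ordinary columns
    y     REAL NOT NULL)""")
conn.executemany(
    "INSERT INTO feature (layer, x, y) VALUES (?, ?, ?)",
    [("hydrant", 1.0, 1.0), ("valve", 5.0, 5.0), ("hydrant", 2.0, 3.0)])

def features_in_window(conn, xmin, ymin, xmax, ymax):
    """A spatial window query expressed as an ordinary SQL predicate."""
    return conn.execute(
        "SELECT layer, x, y FROM feature"
        " WHERE x BETWEEN ? AND ? AND y BETWEEN ? AND ? ORDER BY id",
        (xmin, xmax, ymin, ymax)).fetchall()

print(features_in_window(conn, 0, 0, 3, 3))
# [('hydrant', 1.0, 1.0), ('hydrant', 2.0, 3.0)]
```

The manipulation and analysis functions the paper distinguishes (buffering, overlay, network tracing) are exactly what such a plain predicate cannot express, which is where the RDBMS extensions come in.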

  4. FORWARD: A Registry and Longitudinal Clinical Database to Study Fragile X Syndrome.

    Science.gov (United States)

    Sherman, Stephanie L; Kidd, Sharon A; Riley, Catharine; Berry-Kravis, Elizabeth; Andrews, Howard F; Miller, Robert M; Lincoln, Sharyn; Swanson, Mark; Kaufmann, Walter E; Brown, W Ted

    2017-06-01

    Advances in the care of patients with fragile X syndrome (FXS) have been hampered by lack of data. This deficiency has produced fragmentary knowledge regarding the natural history of this condition, healthcare needs, and the effects of the disease on caregivers. To remedy this deficiency, the Fragile X Clinic and Research Consortium was established to facilitate research. Through a collective effort, the Fragile X Clinic and Research Consortium developed the Fragile X Online Registry With Accessible Research Database (FORWARD) to facilitate multisite data collection. This report describes FORWARD and the way it can be used to improve health and quality of life of FXS patients and their relatives and caregivers. FORWARD collects demographic information on individuals with FXS and their family members (affected and unaffected) through a 1-time registry form. The longitudinal database collects clinician- and parent-reported data on individuals diagnosed with FXS, focused on those who are 0 to 24 years of age, although individuals of any age can participate. The registry includes >2300 registrants (data collected September 7, 2009 to August 31, 2014). The longitudinal database includes data on 713 individuals diagnosed with FXS (data collected September 7, 2012 to August 31, 2014). Longitudinal data continue to be collected on enrolled patients along with baseline data on new patients. FORWARD represents the largest resource of clinical and demographic data for the FXS population in the United States. These data can be used to advance our understanding of FXS: the impact of co-occurring conditions, the impact on the day-to-day lives of individuals living with FXS and their families, and short-term and long-term outcomes. Copyright © 2017 by the American Academy of Pediatrics.

  5. [Role and management of cancer clinical database in the application of gastric cancer precision medicine].

    Science.gov (United States)

    Li, Yuanfang; Zhou, Zhiwei

    2016-02-01

    Precision medicine is a new medical concept and medical model, based on personalized medicine, the rapid progress of genome sequencing technology, and the cross application of biological information and big data science. Precision medicine improves the diagnosis and treatment of gastric cancer through more profound analyses of its characteristics, pathogenesis, and other core issues. A cancer clinical database is important for promoting the development of precision medicine; therefore, it is necessary to pay close attention to its construction and management. The clinical database of Sun Yat-sen University Cancer Center is composed of a medical record database, a blood specimen bank, a tissue bank, and a medical imaging database. To ensure the good quality of the database, its design and management should follow a strict standard operating procedure (SOP) model. Data sharing is an important way to improve medical research in the era of medical big data. The construction and management of clinical databases must also be strengthened and innovated.

  6. [Relational database for urinary stone ambulatory consultation. Assessment of initial outcomes].

    Science.gov (United States)

    Sáenz Medina, J; Páez Borda, A; Crespo Martinez, L; Gómez Dos Santos, V; Barrado, C; Durán Poveda, M

    2010-05-01

    Our objective was to create a relational database for monitoring lithiasic patients; we describe its architecture and the initial results of the statistical analysis. Microsoft Access 2002 was used as the template. Four different tables were constructed to gather demographic data (table 1), clinical and laboratory findings (table 2), stone features (table 3) and therapeutic approach (table 4). For a reliability analysis of the database, the number of correctly stored data items was gathered. To evaluate the performance of the database, a prospective analysis was conducted, from May 2004 to August 2009, on 171 stone-free patients after treatment (ESWL, surgery or medical) out of a total of 511 patients stored in the database. Lithiasic status (stone free or stone relapse) was used as the primary end point, while demographic factors (age, gender), lithiasic history, upper urinary tract alterations and characteristics of the stone (side, location, composition and size) were considered as predictive factors. A univariate analysis was conducted initially by chi-square test and supplemented by Kaplan-Meier estimates for time to stone recurrence. A multiple Cox proportional hazards regression model was generated to jointly assess the prognostic value of the demographic factors and the predictive value of stone characteristics. For the reliability analysis, 22,084 data items were available, corresponding to 702 consultations on 511 patients. Analysis of the data showed a recurrence rate of 85.4% (146/171; median time to recurrence 608 days, range 70-1758). In the univariate and multivariate analyses, none of the factors under consideration had a significant effect on the recurrence rate (p=ns). The relational database is useful for monitoring patients with urolithiasis. It allows easy control and update, as well as data storage for later use. The analysis conducted for its evaluation showed no influence of demographic factors or stone features on stone recurrence.
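The four-table design the abstract describes can be sketched as a normalized relational schema keyed on the patient. All table and column names below are hypothetical, since the paper does not list them; only the four described groupings are taken from the abstract.

```python
import sqlite3

# Hypothetical sketch of the four-table design described in the abstract.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE patient (          -- table 1: demographic data
    patient_id INTEGER PRIMARY KEY,
    birth_year INTEGER,
    gender     TEXT CHECK (gender IN ('M', 'F')));
CREATE TABLE visit (            -- table 2: clinical and laboratory findings
    visit_id   INTEGER PRIMARY KEY,
    patient_id INTEGER NOT NULL REFERENCES patient(patient_id),
    visit_date TEXT,
    creatinine REAL);
CREATE TABLE stone (            -- table 3: stone features
    stone_id    INTEGER PRIMARY KEY,
    patient_id  INTEGER NOT NULL REFERENCES patient(patient_id),
    side        TEXT,
    location    TEXT,
    composition TEXT,
    size_mm     REAL);
CREATE TABLE treatment (        -- table 4: therapeutic approach
    treatment_id INTEGER PRIMARY KEY,
    stone_id     INTEGER NOT NULL REFERENCES stone(stone_id),
    modality     TEXT CHECK (modality IN ('ESWL', 'surgery', 'medical')),
    stone_free   INTEGER);      -- 1 = stone free after treatment
""")
conn.execute("INSERT INTO patient VALUES (1, 1960, 'M')")
conn.execute("INSERT INTO stone VALUES "
             "(1, 1, 'left', 'renal pelvis', 'calcium oxalate', 8.0)")
conn.execute("INSERT INTO treatment VALUES (1, 1, 'ESWL', 1)")

# One row per stone-free treatment, joined back through stone and patient,
# is the flat layout the follow-up analysis would start from.
row = conn.execute("""SELECT p.gender, s.composition, t.modality
                      FROM treatment t
                      JOIN stone s   ON s.stone_id = t.stone_id
                      JOIN patient p ON p.patient_id = s.patient_id
                      WHERE t.stone_free = 1""").fetchone()
print(row)  # ('M', 'calcium oxalate', 'ESWL')
```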

  7. A Relational Database Model and Tools for Environmental Sound Recognition

    Directory of Open Access Journals (Sweden)

    Yuksel Arslan

    2017-12-01

    Environmental sound recognition (ESR) has become a hot topic in recent years. ESR is mainly based on machine learning (ML), and ML algorithms first require a training database. This database must comprise the sounds to be recognized and other related sounds. An ESR system needs the database during training, testing and in the production stage. In this paper, we present the design and pilot establishment of a database that will assist all researchers who want to establish an ESR system. This database employs the relational database model, which has not been used for this task before. We explain the design and implementation details of the database and the data collection and load process, and we describe the tools and the graphical user interfaces developed for a desktop application and for the Web.
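A minimal version of such a training database can be expressed as two related tables, one of sound classes and one of labelled recordings; the names and columns below are assumptions for illustration, not the paper's actual schema.

```python
import sqlite3

# Hypothetical minimal schema for an ESR training database.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sound_class (
    class_id INTEGER PRIMARY KEY,
    name     TEXT UNIQUE NOT NULL);
CREATE TABLE recording (
    recording_id INTEGER PRIMARY KEY,
    file_path    TEXT NOT NULL,
    sample_rate  INTEGER NOT NULL,
    split        TEXT CHECK (split IN ('train', 'test')),
    class_id     INTEGER NOT NULL REFERENCES sound_class(class_id));
""")
conn.executemany("INSERT INTO sound_class (name) VALUES (?)",
                 [("siren",), ("dog_bark",)])
conn.executemany(
    "INSERT INTO recording (file_path, sample_rate, split, class_id)"
    " VALUES (?, ?, ?, ?)",
    [("s1.wav", 16000, "train", 1), ("s2.wav", 16000, "test", 1),
     ("d1.wav", 16000, "train", 2)])

def class_balance(conn, split):
    """Examples per class in a split -- typical bookkeeping an ML
    training pipeline asks of the database."""
    return conn.execute(
        "SELECT c.name, COUNT(*) FROM recording r"
        " JOIN sound_class c ON c.class_id = r.class_id"
        " WHERE r.split = ? GROUP BY c.name ORDER BY c.name",
        (split,)).fetchall()

print(class_balance(conn, "train"))  # [('dog_bark', 1), ('siren', 1)]
```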

  8. Validating the extract, transform, load process used to populate a large clinical research database.

    Science.gov (United States)

    Denney, Michael J; Long, Dustin M; Armistead, Matthew G; Anderson, Jamie L; Conway, Baqiyyah N

    2016-10-01

    Informaticians at any institution developing clinical research support infrastructure are tasked with populating research databases with data extracted and transformed from their institution's operational databases, such as electronic health records (EHRs). These data must be properly extracted from the source systems, transformed into a standard data structure, and then loaded into the data warehouse while maintaining their integrity. We validated the correctness of the extract, transform, load (ETL) process applied to the extracted data of West Virginia Clinical and Translational Science Institute's Integrated Data Repository, a clinical data warehouse that includes data extracted from two EHR systems. Four hundred ninety-eight observations were randomly selected from the integrated data repository and compared with the two source EHR systems. Of the 498 observations, there were 479 concordant and 19 discordant observations. The discordant observations fell into three general categories: a) design decision differences between the IDR and source EHRs, b) timing differences, and c) user interface settings. After resolving apparent discordances, our integrated data repository was found to be 100% accurate relative to its source EHR systems. Any institution that uses a clinical data warehouse developed from extraction processes over operational databases, such as EHRs, employs some form of ETL process. As secondary use of EHR data begins to transform the research landscape, the importance of basic validation of the extracted EHR data cannot be overstated, and it should start with validation of the extraction process itself. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
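The sampling-and-comparison step described above can be sketched in a few lines. This is a toy stand-in, with dicts keyed on an observation id playing the role of warehouse and source-system queries, mirroring the paper's concordant/discordant tally.

```python
import random

def validate_sample(warehouse, source, n, seed=42):
    """Compare a random sample of warehouse observations against the
    source system and partition the ids into concordant/discordant."""
    rng = random.Random(seed)
    sample = rng.sample(sorted(warehouse), n)
    concordant = [oid for oid in sample if warehouse[oid] == source.get(oid)]
    discordant = [oid for oid in sample if warehouse[oid] != source.get(oid)]
    return concordant, discordant

# Toy data: observation 3 was transformed inconsistently during the load.
source    = {1: "A", 2: "B", 3: "C", 4: "D"}
warehouse = {1: "A", 2: "B", 3: "c", 4: "D"}
ok, bad = validate_sample(warehouse, source, n=4)
print(len(ok), bad)  # 3 [3]
```

In practice each discordant id would then be classified by cause (design decision, timing, user-interface settings), as the paper does, before recomputing the accuracy figure.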

  9. Distortion-Free Watermarking Approach for Relational Database Integrity Checking

    Directory of Open Access Journals (Sweden)

    Lancine Camara

    2014-01-01

    Nowadays, the Internet is becoming a common way of accessing databases. Such data are exposed to various types of attack that aim to confuse ownership proofing or content protection. In this paper, we propose a new approach based on fragile zero watermarking for the authentication of numeric relational data. Contrary to some previous database watermarking techniques, which cause distortions in the original database and may not preserve the data usability constraints, our approach simply generates the watermark from the original database. First, the adopted method partitions the database relation into independent square matrix groups. Then, group-based watermarks are securely generated and registered with a trusted third party. The integrity verification is performed by computing the determinant and the diagonal minors for each group. As a result, tampering can be localized down to the attribute-group level. Theoretical and experimental results demonstrate that the proposed technique is resilient against tuple insertion, tuple deletion, and attribute-value modification attacks. Furthermore, comparison with a recent related effort shows that our scheme performs better in detecting multifaceted attacks.
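The core idea — partition the numeric relation into k x k groups, register each group's determinant and diagonal minors, and recompute them to localize tampering — can be sketched as follows. This is a simplified reading of the scheme: the paper's secure group formation and third-party registration steps are omitted, and the group size and sample values are invented.

```python
def det(m):
    """Determinant by Laplace expansion (fine for small k x k groups)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * v * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j, v in enumerate(m[0]))

def watermark(relation, k=2):
    """Partition the relation's first k numeric attributes into k x k
    groups and record (determinant, diagonal minors) for each group."""
    marks = []
    for g in range(0, len(relation) - k + 1, k):
        block = [list(row[:k]) for row in relation[g:g + k]]
        minors = [det([r[:i] + r[i + 1:] for r in block[:i] + block[i + 1:]])
                  for i in range(k)]
        marks.append((det(block), minors))
    return marks

def verify(relation, registered, k=2):
    """Recompute the marks; return indices of tampered groups."""
    return [i for i, m in enumerate(watermark(relation, k))
            if m != registered[i]]

rows = [(3, 1), (4, 2), (5, 9), (2, 6)]
marks = watermark(rows)            # registered before publication
tampered = [(3, 1), (4, 7), (5, 9), (2, 6)]
print(verify(rows, marks))         # []   -> intact
print(verify(tampered, marks))     # [0]  -> group 0 was modified
```

Because the watermark is computed from the data rather than embedded into it, the relation itself is untouched — the "zero watermarking" property the abstract emphasizes.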

  10. Analysis of quality data based on national clinical databases

    DEFF Research Database (Denmark)

    Utzon, Jan; Petri, A.L.; Christophersen, S.

    2009-01-01

    There is little agreement on the philosophy of measuring clinical quality in health care. How data should be analyzed and transformed to healthcare information is an ongoing discussion. To accept a difference in quality between health departments as a real difference, one should consider to which extent the selection of patients, random variation, confounding and inconsistency may have influenced results. The aim of this article is to summarize aspects of clinical healthcare data analyses provided from the national clinical quality databases and to show how data may be presented in a way which is understandable to readers without specialised knowledge of statistics.

  11. Cross: an OWL wrapper for reasoning on relational databases

    NARCIS (Netherlands)

    Champin, P.A.; Houben, G.J.P.M.; Thiran, Ph.; Parent, C.; Schewe, K.D.; Storey, V.C.; Thalheim, B.

    2007-01-01

    One of the challenges of the Semantic Web is to integrate the huge amount of information already available on the standard Web, usually stored in relational databases. In this paper, we propose a formalization of a logic model of relational databases, and a transformation of that model into OWL, a
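The abstract is cut off, but a common relational-to-OWL mapping of the kind it alludes to turns each table into an owl:Class and each column into an owl:DatatypeProperty with that class as its rdfs:domain. The sketch below is a generic illustration of that idea, not Cross's specific formalization; the base URI is invented.

```python
def schema_to_owl(tables, base="http://example.org/db#"):
    """Emit Turtle triples for a naive relational-to-OWL mapping:
    table -> owl:Class, column -> owl:DatatypeProperty with the table's
    class as rdfs:domain. `tables` maps table name -> list of columns."""
    lines = ["@prefix owl: <http://www.w3.org/2002/07/owl#> .",
             "@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> ."]
    for table, columns in tables.items():
        lines.append(f"<{base}{table}> a owl:Class .")
        for col in columns:
            lines.append(f"<{base}{table}.{col}> a owl:DatatypeProperty ;"
                         f" rdfs:domain <{base}{table}> .")
    return "\n".join(lines)

print(schema_to_owl({"patient": ["id", "name"]}))
```

A fuller wrapper would also map foreign keys to owl:ObjectProperty triples linking the two classes, which is where OWL reasoning over the relational content starts to pay off.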

  12. SU-E-T-255: Development of a Michigan Quality Assurance (MQA) Database for Clinical Machine Operations

    Energy Technology Data Exchange (ETDEWEB)

    Roberts, D [University of Michigan Hospital, Ann Arbor, MI (United States)

    2015-06-15

    Purpose: A unified database system was developed to allow accumulation, review and analysis of quality assurance (QA) data for measurement, treatment, imaging and simulation equipment in our department. Recording these data in a database allows a unified and structured approach to review and analysis of data gathered using commercial database tools. Methods: A clinical database was developed to track records of quality assurance operations on linear accelerators, a computed tomography (CT) scanner, a high dose rate (HDR) afterloader and imaging systems such as on-board imaging (OBI) and Calypso in our department. The database was developed using Microsoft Access and the Visual Basic for Applications (VBA) programming interface. Separate modules were written for accumulation, review and analysis of daily, monthly and annual QA data. All modules were designed to use structured query language (SQL) as the basis of data accumulation and review. The SQL strings are dynamically rewritten at run time. The database also features embedded documentation, storage of documents produced during QA activities and the ability to annotate all data within the database. Tests are defined in a set of tables that specify test type, specific value, and schedule. Results: Daily, monthly and annual QA data have been taken in parallel with established procedures to test MQA. The database has been used to aggregate data across machines to examine the consistency of machine parameters and operations within the clinic for several months. Conclusion: The MQA application has been developed as an interface to a commercially available SQL engine (JET 5.0) and a standard database back-end. The MQA system has been used for several months for routine data collection. The system is robust, relatively simple to extend and can be migrated to a commercial SQL server.
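The run-time SQL rewriting the abstract mentions can be sketched like this: test definitions live in one table, results in another, and the review query is assembled dynamically from the requested schedule and an optional machine filter. Table and column names are invented for illustration.

```python
import sqlite3

# Hypothetical tables standing in for the MQA test-definition design.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE qa_test (test_id INTEGER PRIMARY KEY, name TEXT,
                      schedule TEXT, nominal REAL, tolerance REAL);
CREATE TABLE qa_result (test_id INTEGER REFERENCES qa_test(test_id),
                        machine TEXT, taken_on TEXT, value REAL);
""")
conn.execute("INSERT INTO qa_test VALUES (1, 'output 6MV', 'daily', 100.0, 2.0)")
conn.executemany("INSERT INTO qa_result VALUES (?, ?, ?, ?)",
                 [(1, 'linac1', '2015-06-01', 100.8),
                  (1, 'linac2', '2015-06-01', 103.5)])

def review_query(schedule, machine=None):
    """Rewrite the review SQL at run time from the requested schedule
    and optional machine filter; values stay bound, never concatenated."""
    sql = ("SELECT r.machine, r.value,"
           " abs(r.value - t.nominal) <= t.tolerance AS in_tolerance"
           " FROM qa_result r JOIN qa_test t ON t.test_id = r.test_id"
           " WHERE t.schedule = ?")
    params = [schedule]
    if machine is not None:
        sql += " AND r.machine = ?"
        params.append(machine)
    return sql, params

print(conn.execute(*review_query("daily")).fetchall())
# [('linac1', 100.8, 1), ('linac2', 103.5, 0)]
```

Aggregating across machines, as in the Results section, is then a matter of appending a GROUP BY clause to the same dynamically built string.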

  13. SU-E-T-255: Development of a Michigan Quality Assurance (MQA) Database for Clinical Machine Operations

    International Nuclear Information System (INIS)

    Roberts, D

    2015-01-01

    Purpose: A unified database system was developed to allow accumulation, review and analysis of quality assurance (QA) data for measurement, treatment, imaging and simulation equipment in our department. Recording these data in a database allows a unified and structured approach to review and analysis of data gathered using commercial database tools. Methods: A clinical database was developed to track records of quality assurance operations on linear accelerators, a computed tomography (CT) scanner, a high dose rate (HDR) afterloader and imaging systems such as on-board imaging (OBI) and Calypso in our department. The database was developed using Microsoft Access and the Visual Basic for Applications (VBA) programming interface. Separate modules were written for accumulation, review and analysis of daily, monthly and annual QA data. All modules were designed to use structured query language (SQL) as the basis of data accumulation and review. The SQL strings are dynamically rewritten at run time. The database also features embedded documentation, storage of documents produced during QA activities and the ability to annotate all data within the database. Tests are defined in a set of tables that specify test type, specific value, and schedule. Results: Daily, monthly and annual QA data have been taken in parallel with established procedures to test MQA. The database has been used to aggregate data across machines to examine the consistency of machine parameters and operations within the clinic for several months. Conclusion: The MQA application has been developed as an interface to a commercially available SQL engine (JET 5.0) and a standard database back-end. The MQA system has been used for several months for routine data collection. The system is robust, relatively simple to extend and can be migrated to a commercial SQL server.

  14. Repetitive Bibliographical Information in Relational Databases.

    Science.gov (United States)

    Brooks, Terrence A.

    1988-01-01

    Proposes a solution to the problem of loading repetitive bibliographic information in a microcomputer-based relational database management system. The alternative design described is based on a representational redundancy design and normalization theory. (12 references) (Author/CLB)

  15. CORE-Hom: a powerful and exhaustive database of clinical trials in homeopathy.

    Science.gov (United States)

    Clausen, Jürgen; Moss, Sian; Tournier, Alexander; Lüdtke, Rainer; Albrecht, Henning

    2014-10-01

    The CORE-Hom database was created to answer the need for a reliable and publicly available source of information in the field of clinical research in homeopathy. As of May 2014 it held 1048 entries of clinical trials, observational studies and surveys in the field of homeopathy, including second publications and re-analyses. 352 of the trials referenced in the database were published in peer-reviewed journals, 198 of which were randomised controlled trials. The most often used remedies were Arnica montana (n = 103) and Traumeel® (n = 40). The most studied medical conditions were respiratory tract infections (n = 126) and traumatic injuries (n = 110). The aim of this article is to introduce the database to the public, describing and explaining the interface, features and content of the CORE-Hom database. Copyright © 2014 The Faculty of Homeopathy. Published by Elsevier Ltd. All rights reserved.

  16. Implementation of a fuzzy relational database. Case study: academic tutoring

    Directory of Open Access Journals (Sweden)

    Ciro Saguay

    2017-02-01

    This paper describes the implementation of a fuzzy relational database for the practical case of academic tutoring in the Faculty of Engineering Sciences of the Equinoctial Technological University (UTE). For the implementation, the ANSI-SPARC database architecture was used as the methodology, which abstracts the information into levels: at the external level the functional requirements were obtained, and at the conceptual level the fuzzy relational model was derived. To achieve this model, the fuzzy data were transformed through mathematical models using the Fuzzy Lookup tool, and at the physical level the fuzzy relational database was implemented. In addition, a user interface was developed in Java, through which data are entered and queries are run against the fuzzy relational database to verify its operation.
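A minimal sketch of a fuzzy query over crisp tutoring data: fuzzy predicates are membership functions, their conjunction uses the min t-norm, and an alpha-cut plays the role of the crisp SELECT. The membership-function shape, the "at risk" label, the alpha value, and the sample rows are all assumptions; the paper materializes such degrees with the Fuzzy Lookup tool instead.

```python
def mu_low(x, full=0.3, zero=0.6):
    """Degree to which a score in [0, 1] counts as 'low' (assumed shape:
    1 at or below `full`, 0 at or above `zero`, linear in between)."""
    if x <= full:
        return 1.0
    if x >= zero:
        return 0.0
    return (zero - x) / (zero - full)

students = [("ana", 0.35, 0.40),   # (name, attendance, grade average)
            ("luis", 0.80, 0.90),
            ("eva", 0.50, 0.45)]

def at_risk(rows, alpha=0.5):
    """Conjunction of two fuzzy predicates via the min t-norm, then an
    alpha-cut that turns the fuzzy result into a crisp answer set."""
    result = []
    for name, attendance, grade in rows:
        degree = min(mu_low(attendance), mu_low(grade))
        if degree >= alpha:
            result.append((name, round(degree, 2)))
    return result

print(at_risk(students))  # [('ana', 0.67)]
```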

  17. Development of Information Technology of Object-relational Databases Design

    Directory of Open Access Journals (Sweden)

    Valentyn A. Filatov

    2012-12-01

    The article is concerned with the development of an information technology for object-relational database design and with the study of the features of the entities and connections of infological and logical database schemas.

  18. Database and Related Activities in Japan

    International Nuclear Information System (INIS)

    Murakami, Izumi; Kato, Daiji; Kato, Masatoshi; Sakaue, Hiroyuki A.; Kato, Takako; Ding, Xiaobin; Morita, Shigeru; Kitajima, Masashi; Koike, Fumihiro; Nakamura, Nobuyuki; Sakamoto, Naoki; Sasaki, Akira; Skobelev, Igor; Tsuchida, Hidetsugu; Ulantsev, Artemiy; Watanabe, Tetsuya; Yamamoto, Norimasa

    2011-01-01

    We have constructed and made available atomic and molecular (AM) numerical databases on collision processes such as electron-impact excitation and ionization, recombination and charge transfer of atoms and molecules relevant for plasma physics, fusion research, astrophysics, applied-science plasma, and other related areas. The retrievable data is freely accessible via the internet. We also work on atomic data evaluation and constructing collisional-radiative models for spectroscopic plasma diagnostics. Recently we have worked on Fe ions and W ions theoretically and experimentally. The atomic data and collisional-radiative models for these ions are examined and applied to laboratory plasmas. A visible M1 transition of the W26+ ion is identified at 389.41 nm by EBIT experiments and theoretical calculations. We have small non-retrievable databases in addition to our main database. Recently we evaluated photo-absorption cross sections for 9 atoms and 23 molecules and we present them as a new database. We established a new association, the "Forum of Atomic and Molecular Data and Their Applications", to exchange information among AM data producers, data providers and data users in Japan, and we hope this will help to encourage AM data activities in Japan.

  19. Switching the Fermilab Accelerator Control System to a relational database

    International Nuclear Information System (INIS)

    Shtirbu, S.

    1993-01-01

    The accelerator control system ("ACNET") at Fermilab uses an in-house database written in assembly language. The database holds device information, which is mostly used for finding out how to read/set devices and how to interpret alarms. This is a very efficient implementation, but it lacks the needed flexibility and forces applications to store data in private/shared files. This database is being replaced by an off-the-shelf relational database (Sybase). The major constraints on switching are the necessity to maintain/improve response time and to minimize changes to existing applications. Innovative methods are used to help achieve the required performance, and a layer-seven gateway simulates the old database for existing programs. The new database is running on a DEC ALPHA/VMS platform and provides better performance. The switch is also exposing problems with the data currently stored in the database and is helping in cleaning up erroneous data. The flexibility of the new relational database is going to facilitate many new applications in the future (e.g. a 3D presentation of device location). The new database is expected to fully replace the old database during this summer's shutdown.
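The two uses of device data the abstract highlights — finding out how to read/set a device and how to interpret its alarms — can be sketched as a relational lookup. The actual ACNET schema is not given in the abstract, so the table, columns, and device values below are hypothetical.

```python
import sqlite3

# Hypothetical device-information table replacing the in-house database.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE device (
    name      TEXT PRIMARY KEY,
    node      TEXT,    -- front-end node that reads/sets the device
    scale     REAL,    -- raw-to-engineering-units factor
    units     TEXT,
    alarm_min REAL,
    alarm_max REAL)""")
conn.execute("INSERT INTO device VALUES "
             "('M:OUTTMP', 'node05', 0.25, 'degC', 15.0, 35.0)")

def interpret(conn, name, raw):
    """Scale a raw reading to engineering units and classify it against
    the device's alarm limits."""
    scale, units, lo, hi = conn.execute(
        "SELECT scale, units, alarm_min, alarm_max FROM device"
        " WHERE name = ?", (name,)).fetchone()
    value = raw * scale
    state = "OK" if lo <= value <= hi else "ALARM"
    return value, units, state

print(interpret(conn, "M:OUTTMP", 92))   # (23.0, 'degC', 'OK')
print(interpret(conn, "M:OUTTMP", 160))  # (40.0, 'degC', 'ALARM')
```

The layer-seven gateway mentioned in the abstract would answer the old database's protocol by running lookups like this against the relational store, so existing programs keep working unchanged.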

  20. Kliniske databaser i social epidemiologisk forskning

    DEFF Research Database (Denmark)

    Osler, Merete

    2009-01-01

    Danish researchers can link several databases on everything from medical records to socioeconomic data on jobs and salaries by use of an individual personal ID number. This allows a number of clinical databases to be used in studies concerning the impact of social factors on healthcare-related outcomes.
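The register linkage described above is, in database terms, a join on the personal ID number. The sketch below illustrates the idea with invented table names, IDs, and values: a clinical database joined to a socioeconomic register, then aggregated to give an outcome rate per social group.

```python
import sqlite3

# Hypothetical registers linked through a shared personal ID number.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE clinical (person_id TEXT, diagnosis TEXT, readmitted INTEGER);
CREATE TABLE socioeconomic (person_id TEXT, education TEXT,
                            income_decile INTEGER);
""")
conn.executemany("INSERT INTO clinical VALUES (?, ?, ?)",
                 [("id-0001", "AMI", 1), ("id-0002", "AMI", 0),
                  ("id-0003", "AMI", 0)])
conn.executemany("INSERT INTO socioeconomic VALUES (?, ?, ?)",
                 [("id-0001", "basic", 2), ("id-0002", "higher", 8),
                  ("id-0003", "basic", 3)])

# Outcome rate by social group: the kind of question the linkage enables.
rows = conn.execute("""
    SELECT s.education, AVG(c.readmitted)
    FROM clinical c
    JOIN socioeconomic s ON s.person_id = c.person_id
    GROUP BY s.education ORDER BY s.education""").fetchall()
print(rows)  # [('basic', 0.5), ('higher', 0.0)]
```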

  1. Schema Versioning for Multitemporal Relational Databases.

    Science.gov (United States)

    De Castro, Cristina; Grandi, Fabio; Scalas, Maria Rita

    1997-01-01

    Investigates new design options for extended schema versioning support in multitemporal relational databases and discusses the improved functionalities they may provide. Outlines the options and basic motivations for the new design solutions, as well as techniques for the management of the proposed schema versioning solutions, including algorithms and…

  2. Quality standards for DNA sequence variation databases to improve clinical management under development in Australia

    Directory of Open Access Journals (Sweden)

    B. Bennetts

    2014-09-01

    Despite the routine nature of comparing sequence variations identified during clinical testing to database records, few databases meet quality requirements for clinical diagnostics. To address this issue, the Royal College of Pathologists of Australasia (RCPA), in collaboration with the Human Genetics Society of Australasia (HGSA) and the Human Variome Project (HVP), is developing standards for DNA sequence variation databases intended for use in the Australian clinical environment. The outputs of this project will be promoted to other health systems and accreditation bodies by the Human Variome Project to support the development of similar frameworks in other jurisdictions.

  3. Representing clinical communication knowledge through database management system integration.

    Science.gov (United States)

    Khairat, Saif; Craven, Catherine; Gong, Yang

    2012-01-01

    Clinical communication failures are considered the leading cause of medical errors [1]. The complexity of the clinical culture and the significant variance in training and education levels pose a challenge to enhancing communication within the clinical team. In order to improve communication, a comprehensive understanding of the overall communication process in health care is required. In an attempt to further understand clinical communication, we conducted a thorough review of the methodology literature to identify strengths and limitations of previous approaches [2]. Our research proposes a new data collection method to study the clinical communication activities among Intensive Care Unit (ICU) clinical teams, with a primary focus on the attending physician. In this paper, we present the first ICU communication instrument, and we introduce the use of a database management system to aid in discovering patterns and associations within our ICU communications data repository.

  4. Clinical Prediction Models for Cardiovascular Disease: Tufts Predictive Analytics and Comparative Effectiveness Clinical Prediction Model Database.

    Science.gov (United States)

    Wessler, Benjamin S; Lai Yh, Lana; Kramer, Whitney; Cangelosi, Michael; Raman, Gowri; Lutz, Jennifer S; Kent, David M

    2015-07-01

    Clinical prediction models (CPMs) estimate the probability of clinical outcomes and hold the potential to improve decision making and individualize care. For patients with cardiovascular disease, there are numerous CPMs available although the extent of this literature is not well described. We conducted a systematic review for articles containing CPMs for cardiovascular disease published between January 1990 and May 2012. Cardiovascular disease includes coronary heart disease, heart failure, arrhythmias, stroke, venous thromboembolism, and peripheral vascular disease. We created a novel database and characterized CPMs based on the stage of development, population under study, performance, covariates, and predicted outcomes. There are 796 models included in this database. The number of CPMs published each year is increasing steadily over time. Seven hundred seventeen (90%) are de novo CPMs, 21 (3%) are CPM recalibrations, and 58 (7%) are CPM adaptations. This database contains CPMs for 31 index conditions, including 215 CPMs for patients with coronary artery disease, 168 CPMs for population samples, and 79 models for patients with heart failure. There are 77 distinct index/outcome pairings. Of the de novo models in this database, 450 (63%) report a c-statistic and 259 (36%) report some information on calibration. There is an abundance of CPMs available for a wide assortment of cardiovascular disease conditions, with substantial redundancy in the literature. The comparative performance of these models, the consistency of effects and risk estimates across models and the actual and potential clinical impact of this body of literature is poorly understood. © 2015 American Heart Association, Inc.

  5. Analysis of large databases in vascular surgery.

    Science.gov (United States)

    Nguyen, Louis L; Barshes, Neal R

    2010-09-01

    Large databases can be a rich source of clinical and administrative information on broad populations. These datasets are characterized by demographic and clinical data for over 1000 patients from multiple institutions. Since they are often collected and funded for other purposes, their use for secondary analysis increases their utility at relatively low cost. Advantages of large databases as a source include the very large numbers of available patients and their related medical information. Disadvantages include lack of detailed clinical information and absence of causal descriptions. Researchers working with large databases should also be mindful of data structure design and the inherent limitations of large databases, such as treatment bias and systematic sampling errors. Notwithstanding these limitations, several important studies have been published in vascular care using large databases. They represent timely, "real-world" analyses of questions that may be too difficult or costly to address using prospective randomized methods. Large databases will be an increasingly important analytical resource as we focus on improving national health care efficacy in the setting of limited resources.

  6. Analysis of quality data based on national clinical databases

    DEFF Research Database (Denmark)

    Utzon, Jan; Petri, A.L.; Christophersen, S.

    2009-01-01

    There is little agreement on the philosophy of measuring clinical quality in health care. How data should be analyzed and transformed to healthcare information is an ongoing discussion. To accept a difference in quality between health departments as a real difference, one should consider to which extent the selection of patients, random variation, confounding and inconsistency may have influenced results. The aim of this article is to summarize aspects of clinical healthcare data analyses provided from the national clinical quality databases and to show how data may be presented in a way which is understandable to readers without specialised knowledge of statistics. Publication date: 14 September 2009.

  7. Relational databases for SSC design and control

    International Nuclear Information System (INIS)

    Barr, E.; Peggs, S.; Saltmarsh, C.

    1989-01-01

    Most people agree that a database is A Good Thing, but there is much confusion in the jargon used, and in what jobs a database management system and its peripheral software can and cannot do. During the life cycle of an enormous project like the SSC, from conceptual and theoretical design, through research and development, to construction, commissioning and operation, an enormous amount of data will be generated. Some of these data, originating in the early parts of the project, will be needed during commissioning or operation, many years in the future. Two of these pressing data management needs, from the magnet research and industrialization programs and from the lattice design, have prompted work on understanding and adapting commercial database practices for scientific projects. Modern relational database management systems (RDBMSs) cope naturally with a large proportion of the requirements of data structures like the SSC database built for superconducting cable supplies, uses, and properties. This application is similar to the commercial applications for which these database systems were developed. The SSC application has further requirements not immediately satisfied by the commercial systems. These derive from the diversity of the data structures to be managed, the changing emphases and uses during the project lifetime, and the large amount of scientific data processing to be expected. 4 refs., 5 figs

  8. Clinical Prediction Models for Cardiovascular Disease: The Tufts PACE CPM Database

    Science.gov (United States)

    Wessler, Benjamin S.; Lana Lai, YH; Kramer, Whitney; Cangelosi, Michael; Raman, Gowri; Lutz, Jennifer S.; Kent, David M.

    2015-01-01

    Background Clinical prediction models (CPMs) estimate the probability of clinical outcomes and hold the potential to improve decision making and individualize care. For patients with cardiovascular disease (CVD) there are numerous CPMs available though the extent of this literature is not well described. Methods and Results We conducted a systematic review for articles containing CPMs for CVD published between January 1990 through May 2012. CVD includes coronary heart disease (CHD), heart failure (HF), arrhythmias, stroke, venous thromboembolism (VTE) and peripheral vascular disease (PVD). We created a novel database and characterized CPMs based on the stage of development, population under study, performance, covariates, and predicted outcomes. There are 796 models included in this database. The number of CPMs published each year is increasing steadily over time. 717 (90%) are de novo CPMs, 21 (3%) are CPM recalibrations, and 58 (7%) are CPM adaptations. This database contains CPMs for 31 index conditions including 215 CPMs for patients with CAD, 168 CPMs for population samples, and 79 models for patients with HF. There are 77 distinct index/ outcome (I/O) pairings. Of the de novo models in this database 450 (63%) report a c-statistic and 259 (36%) report some information on calibration. Conclusions There is an abundance of CPMs available for a wide assortment of CVD conditions, with substantial redundancy in the literature. The comparative performance of these models, the consistency of effects and risk estimates across models and the actual and potential clinical impact of this body of literature is poorly understood. PMID:26152680

  9. Heuristic program to design Relational Databases

    Directory of Open Access Journals (Sweden)

    Manuel Pereira Rosa

    2009-09-01

    Full Text Available The great development of today’s world determines that the world level of information increases day after day, however, the time allowed to transmit this information in the classrooms has not changed. Thus, the rational work in this respect is more than necessary. Besides, if for the solution of a given type of problem we do not have a working algorism, we have, first to look for a correct solution, then the heuristic programs are of paramount importance to succeed in these aspects. Having into consideration that the design of the database is, essentially, a process of problem resolution, this article aims at proposing a heuristic program for the design of the relating database.

  10. HIERARCHICAL ORGANIZATION OF INFORMATION, IN RELATIONAL DATABASES

    Directory of Open Access Journals (Sweden)

    Demian Horia

    2008-05-01

    Full Text Available In this paper I will present different types of representation, of hierarchical information inside a relational database. I also will compare them to find the best organization for specific scenarios.

  11. UNESCO Global Ethics Observatory: database on ethics related legislation and guidelines.

    NARCIS (Netherlands)

    Ang, T.W.; Have, H.A.M.J. ten; Solbakk, J.H.; Nys, H.

    2008-01-01

    The Database on Ethics Related Legislation and Guidelines was launched in March 2007 as the fourth database of the UNESCO Global Ethics Observatory system of databases in ethics of science and technology. The database offers a collection of legal instruments searchable by region, country, bioethical

  12. The Moroccan Genetic Disease Database (MGDD): a database for DNA variations related to inherited disorders and disease susceptibility.

    Science.gov (United States)

    Charoute, Hicham; Nahili, Halima; Abidi, Omar; Gabi, Khalid; Rouba, Hassan; Fakiri, Malika; Barakat, Abdelhamid

    2014-03-01

    National and ethnic mutation databases provide comprehensive information about genetic variations reported in a population or an ethnic group. In this paper, we present the Moroccan Genetic Disease Database (MGDD), a catalogue of genetic data related to diseases identified in the Moroccan population. We used the PubMed, Web of Science and Google Scholar databases to identify available articles published until April 2013. The Database is designed and implemented on a three-tier model using Mysql relational database and the PHP programming language. To date, the database contains 425 mutations and 208 polymorphisms found in 301 genes and 259 diseases. Most Mendelian diseases in the Moroccan population follow autosomal recessive mode of inheritance (74.17%) and affect endocrine, nutritional and metabolic physiology. The MGDD database provides reference information for researchers, clinicians and health professionals through a user-friendly Web interface. Its content should be useful to improve researches in human molecular genetics, disease diagnoses and design of association studies. MGDD can be publicly accessed at http://mgdd.pasteur.ma.

  13. The Xeno-glycomics database (XDB): a relational database of qualitative and quantitative pig glycome repertoire.

    Science.gov (United States)

    Park, Hae-Min; Park, Ju-Hyeong; Kim, Yoon-Woo; Kim, Kyoung-Jin; Jeong, Hee-Jin; Jang, Kyoung-Soon; Kim, Byung-Gee; Kim, Yun-Gon

    2013-11-15

    In recent years, the improvement of mass spectrometry-based glycomics techniques (i.e. highly sensitive, quantitative and high-throughput analytical tools) has enabled us to obtain a large dataset of glycans. Here we present a database named Xeno-glycomics database (XDB) that contains cell- or tissue-specific pig glycomes analyzed with mass spectrometry-based techniques, including a comprehensive pig glycan information on chemical structures, mass values, types and relative quantities. It was designed as a user-friendly web-based interface that allows users to query the database according to pig tissue/cell types or glycan masses. This database will contribute in providing qualitative and quantitative information on glycomes characterized from various pig cells/organs in xenotransplantation and might eventually provide new targets in the α1,3-galactosyltransferase gene-knock out pigs era. The database can be accessed on the web at http://bioinformatics.snu.ac.kr/xdb.

  14. Application of cloud database in the management of clinical data of patients with skin diseases.

    Science.gov (United States)

    Mao, Xiao-fei; Liu, Rui; DU, Wei; Fan, Xue; Chen, Dian; Zuo, Ya-gang; Sun, Qiu-ning

    2015-04-01

    To evaluate the needs and applications of using cloud database in the daily practice of dermatology department. The cloud database was established for systemic scleroderma and localized scleroderma. Paper forms were used to record the original data including personal information, pictures, specimens, blood biochemical indicators, skin lesions,and scores of self-rating scales. The results were input into the cloud database. The applications of the cloud database in the dermatology department were summarized and analyzed. The personal and clinical information of 215 systemic scleroderma patients and 522 localized scleroderma patients were included and analyzed using the cloud database. The disease status,quality of life, and prognosis were obtained by statistical calculations. The cloud database can efficiently and rapidly store and manage the data of patients with skin diseases. As a simple, prompt, safe, and convenient tool, it can be used in patients information management, clinical decision-making, and scientific research.

  15. A blind reversible robust watermarking scheme for relational databases.

    Science.gov (United States)

    Chang, Chin-Chen; Nguyen, Thai-Son; Lin, Chia-Chen

    2013-01-01

    Protecting the ownership and controlling the copies of digital data have become very important issues in Internet-based applications. Reversible watermark technology allows the distortion-free recovery of relational databases after the embedded watermark data are detected or verified. In this paper, we propose a new, blind, reversible, robust watermarking scheme that can be used to provide proof of ownership for the owner of a relational database. In the proposed scheme, a reversible data-embedding algorithm, which is referred to as "histogram shifting of adjacent pixel difference" (APD), is used to obtain reversibility. The proposed scheme can detect successfully 100% of the embedded watermark data, even if as much as 80% of the watermarked relational database is altered. Our extensive analysis and experimental results show that the proposed scheme is robust against a variety of data attacks, for example, alteration attacks, deletion attacks, mix-match attacks, and sorting attacks.

  16. The Danish Testicular Cancer database

    DEFF Research Database (Denmark)

    Daugaard, Gedske; Kier, Maria Gry Gundgaard; Bandak, Mikkel

    2016-01-01

    AIM: The nationwide Danish Testicular Cancer database consists of a retrospective research database (DaTeCa database) and a prospective clinical database (Danish Multidisciplinary Cancer Group [DMCG] DaTeCa database). The aim is to improve the quality of care for patients with testicular cancer (TC......) in Denmark, that is, by identifying risk factors for relapse, toxicity related to treatment, and focusing on late effects. STUDY POPULATION: All Danish male patients with a histologically verified germ cell cancer diagnosis in the Danish Pathology Registry are included in the DaTeCa databases. Data...... collection has been performed from 1984 to 2007 and from 2013 onward, respectively. MAIN VARIABLES AND DESCRIPTIVE DATA: The retrospective DaTeCa database contains detailed information with more than 300 variables related to histology, stage, treatment, relapses, pathology, tumor markers, kidney function...

  17. An Animated Introduction to Relational Databases for Many Majors

    Science.gov (United States)

    Dietrich, Suzanne W.; Goelman, Don; Borror, Connie M.; Crook, Sharon M.

    2015-01-01

    Database technology affects many disciplines beyond computer science and business. This paper describes two animations developed with images and color that visually and dynamically introduce fundamental relational database concepts and querying to students of many majors. The goal is for educators in diverse academic disciplines to incorporate the…

  18. Evaluation of relational and NoSQL database architectures to manage genomic annotations.

    Science.gov (United States)

    Schulz, Wade L; Nelson, Brent G; Felker, Donn K; Durant, Thomas J S; Torres, Richard

    2016-12-01

    While the adoption of next generation sequencing has rapidly expanded, the informatics infrastructure used to manage the data generated by this technology has not kept pace. Historically, relational databases have provided much of the framework for data storage and retrieval. Newer technologies based on NoSQL architectures may provide significant advantages in storage and query efficiency, thereby reducing the cost of data management. But their relative advantage when applied to biomedical data sets, such as genetic data, has not been characterized. To this end, we compared the storage, indexing, and query efficiency of a common relational database (MySQL), a document-oriented NoSQL database (MongoDB), and a relational database with NoSQL support (PostgreSQL). When used to store genomic annotations from the dbSNP database, we found the NoSQL architectures to outperform traditional, relational models for speed of data storage, indexing, and query retrieval in nearly every operation. These findings strongly support the use of novel database technologies to improve the efficiency of data management within the biological sciences. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. Database of Literature on Guided Imagery and Music and Related Topics

    DEFF Research Database (Denmark)

    Bonde, Lars Ole

    2015-01-01

    A March 2015 update of the largest international database on literature on Guided Imagery and Music and related topics.......A March 2015 update of the largest international database on literature on Guided Imagery and Music and related topics....

  20. An Ontology as a Tool for Representing Fuzzy Data in Relational Databases

    Directory of Open Access Journals (Sweden)

    Carmen Martinez-Cruz

    2012-11-01

    Full Text Available Several applications to represent classical or fuzzy data in databases have been developed in the last two decades. However, these representations present some limitations specially related with the system portability and complexity. Ontologies provides a mechanism to represent data in an implementation-independent and web-accessible way. To get advantage of this, in this paper, an ontology, that represents fuzzy relational database model, has been redefined to communicate users or applications with fuzzy data stored in fuzzy databases. The communication channel established between the ontology and any Relational Database Management System (RDBMS is analysed in depth throughout the text to justify some of the advantages of the system: expressiveness, portability and platform heterogeneity. Moreover, some tools have been developed to define and manage fuzzy and classical data in relational databases using this ontology. Even an application that performs fuzzy queries using the same technology is included in this proposal together with some examples using real databases.

  1. Managing XML Data to optimize Performance into Object-Relational Databases

    Directory of Open Access Journals (Sweden)

    Iuliana BOTHA

    2011-06-01

    Full Text Available This paper propose some possibilities for manage XML data in order to optimize performance into object-relational databases. It is detailed the possibility of storing XML data into such databases, using for exemplification an Oracle database and there are tested some optimizing techniques of the queries over XMLType tables, like indexing and partitioning tables.

  2. Technical Aspects of Interfacing MUMPS to an External SQL Relational Database Management System

    Science.gov (United States)

    Kuzmak, Peter M.; Walters, Richard F.; Penrod, Gail

    1988-01-01

    This paper describes an interface connecting InterSystems MUMPS (M/VX) to an external relational DBMS, the SYBASE Database Management System. The interface enables MUMPS to operate in a relational environment and gives the MUMPS language full access to a complete set of SQL commands. MUMPS generates SQL statements as ASCII text and sends them to the RDBMS. The RDBMS executes the statements and returns ASCII results to MUMPS. The interface suggests that the language features of MUMPS make it an attractive tool for use in the relational database environment. The approach described in this paper separates MUMPS from the relational database. Positioning the relational database outside of MUMPS promotes data sharing and permits a number of different options to be used for working with the data. Other languages like C, FORTRAN, and COBOL can access the RDBMS database. Advanced tools provided by the relational database vendor can also be used. SYBASE is an advanced high-performance transaction-oriented relational database management system for the VAX/VMS and UNIX operating systems. SYBASE is designed using a distributed open-systems architecture, and is relatively easy to interface with MUMPS.

  3. Clinical Views: Object-Oriented Views for Clinical Databases

    Science.gov (United States)

    Portoni, Luisa; Combi, Carlo; Pinciroli, Francesco

    1998-01-01

    We present here a prototype of a clinical information system for the archiving and the management of multimedia and temporally-oriented clinical data related to PTCA patients. The system is based on an object-oriented DBMS and supports multiple views and view schemas on patients' data. Remote data access is supported too.

  4. A novel approach: chemical relational databases, and the role of the ISSCAN database on assessing chemical carcinogenicity.

    Science.gov (United States)

    Benigni, Romualdo; Bossa, Cecilia; Richard, Ann M; Yang, Chihae

    2008-01-01

    Mutagenicity and carcinogenicity databases are crucial resources for toxicologists and regulators involved in chemicals risk assessment. Until recently, existing public toxicity databases have been constructed primarily as "look-up-tables" of existing data, and most often did not contain chemical structures. Concepts and technologies originated from the structure-activity relationships science have provided powerful tools to create new types of databases, where the effective linkage of chemical toxicity with chemical structure can facilitate and greatly enhance data gathering and hypothesis generation, by permitting: a) exploration across both chemical and biological domains; and b) structure-searchability through the data. This paper reviews the main public databases, together with the progress in the field of chemical relational databases, and presents the ISSCAN database on experimental chemical carcinogens.

  5. Use of national clinical databases for informing and for evaluating health care policies.

    Science.gov (United States)

    Black, Nick; Tan, Stefanie

    2013-02-01

    Policy-makers and analysts could make use of national clinical databases either to inform or to evaluate meso-level (organisation and delivery of health care) and macro-level (national) policies. Reviewing the use of 15 of the best established databases in England, we identify and describe four published examples of each use. These show that policy-makers can either make use of the data itself or of research based on the database. For evaluating policies, the major advantages are the huge sample sizes available, the generalisability of the data, its immediate availability and historic information. The principal methodological challenges involve the need for risk adjustment and time-series analysis. Given their usefulness in the policy arena, there are several reasons why national clinical databases have not been used more, some due to a lack of 'push' by their custodians and some to the lack of 'pull' by policy-makers. Greater exploitation of these valuable resources would be facilitated by policy-makers' and custodians' increased awareness, minimisation of legal restrictions on data use, improvements in the quality of databases and a library of examples of applications to policy. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  6. The Network Configuration of an Object Relational Database Management System

    Science.gov (United States)

    Diaz, Philip; Harris, W. C.

    2000-01-01

    The networking and implementation of the Oracle Database Management System (ODBMS) requires developers to have knowledge of the UNIX operating system as well as all the features of the Oracle Server. The server is an object relational database management system (DBMS). By using distributed processing, processes are split up between the database server and client application programs. The DBMS handles all the responsibilities of the server. The workstations running the database application concentrate on the interpretation and display of data.

  7. Online drug databases: a new method to assess and compare inclusion of clinically relevant information.

    Science.gov (United States)

    Silva, Cristina; Fresco, Paula; Monteiro, Joaquim; Rama, Ana Cristina Ribeiro

    2013-08-01

    Evidence-Based Practice requires health care decisions to be based on the best available evidence. The model "Information Mastery" proposes that clinicians should use sources of information that have previously evaluated relevance and validity, provided at the point of care. Drug databases (DB) allow easy and fast access to information and have the benefit of more frequent content updates. Relevant information, in the context of drug therapy, is that which supports safe and effective use of medicines. Accordingly, the European Guideline on the Summary of Product Characteristics (EG-SmPC) was used as a standard to evaluate the inclusion of relevant information contents in DB. To develop and test a method to evaluate relevancy of DB contents, by assessing the inclusion of information items deemed relevant for effective and safe drug use. Hierarchical organisation and selection of the principles defined in the EGSmPC; definition of criteria to assess inclusion of selected information items; creation of a categorisation and quantification system that allows score calculation; calculation of relative differences (RD) of scores for comparison with an "ideal" database, defined as the one that achieves the best quantification possible for each of the information items; pilot test on a sample of 9 drug databases, using 10 drugs frequently associated in literature with morbidity-mortality and also being widely consumed in Portugal. Main outcome measure Calculate individual and global scores for clinically relevant information items of drug monographs in databases, using the categorisation and quantification system created. A--Method development: selection of sections, subsections, relevant information items and corresponding requisites; system to categorise and quantify their inclusion; score and RD calculation procedure. 
B--Pilot test: calculated scores for the 9 databases; globally, all databases evaluated significantly differed from the "ideal" database; some DB performed

  8. Nonmaterialized Relations and the Support of Information Retrieval Applications by Relational Database Systems.

    Science.gov (United States)

    Lynch, Clifford A.

    1991-01-01

    Describes several aspects of the problem of supporting information retrieval system query requirements in the relational database management system (RDBMS) environment and proposes an extension to query processing called nonmaterialized relations. User interactions with information retrieval systems are discussed, and nonmaterialized relations are…

  9. Simple Logic for Big Problems: An Inside Look at Relational Databases.

    Science.gov (United States)

    Seba, Douglas B.; Smith, Pat

    1982-01-01

    Discusses database design concept termed "normalization" (process replacing associations between data with associations in two-dimensional tabular form) which results in formation of relational databases (they are to computers what dictionaries are to spoken languages). Applications of the database in serials control and complex systems…

  10. Persistent Functional Languages: Toward Functional Relational Databases

    NARCIS (Netherlands)

    Wevers, L.

    2014-01-01

    Functional languages provide new approaches to concurrency control, based on techniques such as lazy evaluation and memoization. We have designed and implemented a persistent functional language based on these ideas, which we plan to use for the implementation of a relational database system. With

  11. Clinical and forensic signs related to cocaine abuse.

    Science.gov (United States)

    Dinis-Oliveira, Ricardo Jorge; Carvalho, Félix; Duarte, José Alberto; Proença, Jorge Brandão; Santos, Agostinho; Magalhães, Teresa

    2012-03-01

    Good laboratory practice in toxicological analysis requires pre-analytical steps for collection of detailed information related to the suspected poisoning episodes, including biological and non-biological circumstantial evidences, which should be carefully scrutinized. This procedure provides great help to unveil the suspected cause of poisoning, to select the appropriate and correct samples to be analyzed and can facilitate the decision about the analytical techniques to perform. This implies a good knowledge of the signs related to acute and chronic intoxications by drugs of abuse. In this manuscript we highlight and discuss clinical and forensic imaging related to cocaine abuse, namely the midline destructive lesion, dental health, pseudoscleradermatous triad and crack hands, necrosis and gangrene of extremities and several other skin manifestations, reticular purpura, intracerebral and peripheral hemorrhages, angioneurotic edema, rhabdomyolysis, and crack lung. For this purpose, the state of the art on this topic is discussed, using clinical and forensic cases from our professional database in complement to images and mechanistic data from literature.

  12. Relational Database for the Geology of the Northern Rocky Mountains - Idaho, Montana, and Washington

    Science.gov (United States)

    Causey, J. Douglas; Zientek, Michael L.; Bookstrom, Arthur A.; Frost, Thomas P.; Evans, Karl V.; Wilson, Anna B.; Van Gosen, Bradley S.; Boleneus, David E.; Pitts, Rebecca A.

    2008-01-01

    A relational database was created to prepare and organize geologic map-unit and lithologic descriptions for input into a spatial database for the geology of the northern Rocky Mountains, a compilation of forty-three geologic maps for parts of Idaho, Montana, and Washington in U.S. Geological Survey Open File Report 2005-1235. Not all of the information was transferred to and incorporated in the spatial database due to physical file limitations. This report releases that part of the relational database that was completed for that earlier product. In addition to descriptive geologic information for the northern Rocky Mountains region, the relational database contains a substantial bibliography of geologic literature for the area. The relational database nrgeo.mdb (linked below) is available in Microsoft Access version 2000, a proprietary database program. The relational database contains data tables and other tables used to define terms, relationships between the data tables, and hierarchical relationships in the data; forms used to enter data; and queries used to extract data.

  13. Developing genomic knowledge bases and databases to support clinical management: current perspectives.

    Science.gov (United States)

    Huser, Vojtech; Sincan, Murat; Cimino, James J

    2014-01-01

    Personalized medicine, the ability to tailor diagnostic and treatment decisions for individual patients, is seen as the evolution of modern medicine. We characterize here the informatics resources available today or envisioned in the near future that can support clinical interpretation of genomic test results. We assume a clinical sequencing scenario (germline whole-exome sequencing) in which a clinical specialist, such as an endocrinologist, needs to tailor patient management decisions within his or her specialty (targeted findings) but relies on a genetic counselor to interpret off-target incidental findings. We characterize the genomic input data and list various types of knowledge bases that provide genomic knowledge for generating clinical decision support. We highlight the need for patient-level databases with detailed lifelong phenotype content in addition to genotype data and provide a list of recommendations for personalized medicine knowledge bases and databases. We conclude that no single knowledge base can currently support all aspects of personalized recommendations and that consolidation of several current resources into larger, more dynamic and collaborative knowledge bases may offer a future path forward.

  14. Portfolio of prospective clinical trials including brachytherapy: an analysis of the ClinicalTrials.gov database.

    Science.gov (United States)

    Cihoric, Nikola; Tsikkinis, Alexandros; Miguelez, Cristina Gutierrez; Strnad, Vratislav; Soldatovic, Ivan; Ghadjar, Pirus; Jeremic, Branislav; Dal Pra, Alan; Aebersold, Daniel M; Lössl, Kristina

    2016-03-22

    To evaluate the current status of prospective interventional clinical trials that includes brachytherapy (BT) procedures. The records of 175,538 (100 %) clinical trials registered at ClinicalTrials.gov were downloaded on September 2014 and a database was established. Trials using BT as an intervention were identified for further analyses. The selected trials were manually categorized according to indication(s), BT source, applied dose rate, primary sponsor type, location, protocol initiator and funding source. We analyzed trials across 8 available trial protocol elements registered within the database. In total 245 clinical trials were identified, 147 with BT as primary investigated treatment modality and 98 that included BT as an optional treatment component or as part of the standard treatment. Academic centers were the most frequent protocol initiators in trials where BT was the primary investigational treatment modality (p < 0.01). High dose rate (HDR) BT was the most frequently investigated type of BT dose rate (46.3 %) followed by low dose rate (LDR) (42.0 %). Prostate was the most frequently investigated tumor entity in trials with BT as the primary treatment modality (40.1 %) followed by breast cancer (17.0 %). BT was rarely the primary investigated treatment modality for cervical cancer (6.8 %). Most clinical trials using BT are predominantly in early phases, investigator-initiated and with low accrual numbers. Current investigational activities that include BT mainly focus on prostate and breast cancers. Important questions concerning the optimal usage of BT will not be answered in the near future.

  15. Relational databases for conditions data and event selection in ATLAS

    International Nuclear Information System (INIS)

    Viegas, F; Hawkings, R; Dimitrov, G

    2008-01-01

    The ATLAS experiment at LHC will make extensive use of relational databases in both online and offline contexts, running to O(TBytes) per year. Two of the most challenging applications in terms of data volume and access patterns are conditions data, making use of the LHC conditions database, COOL, and the TAG database, that stores summary event quantities allowing a rapid selection of interesting events. Both of these databases are being replicated to regional computing centres using Oracle Streams technology, in collaboration with the LCG 3D project. Database optimisation, performance tests and first user experience with these applications will be described, together with plans for first LHC data-taking and future prospects

  16. Relational databases for conditions data and event selection in ATLAS

    Energy Technology Data Exchange (ETDEWEB)

    Viegas, F; Hawkings, R; Dimitrov, G [CERN, CH-1211 Geneve 23 (Switzerland)

    2008-07-15

    The ATLAS experiment at LHC will make extensive use of relational databases in both online and offline contexts, running to O(TBytes) per year. Two of the most challenging applications in terms of data volume and access patterns are conditions data, making use of the LHC conditions database, COOL, and the TAG database, that stores summary event quantities allowing a rapid selection of interesting events. Both of these databases are being replicated to regional computing centres using Oracle Streams technology, in collaboration with the LCG 3D project. Database optimisation, performance tests and first user experience with these applications will be described, together with plans for first LHC data-taking and future prospects.

  17. Sports medicine clinical trial research publications in academic medical journals between 1996 and 2005: an audit of the PubMed MEDLINE database.

    Science.gov (United States)

    Nichols, A W

    2008-11-01

    To identify sports medicine-related clinical trial research articles in the PubMed MEDLINE database published between 1996 and 2005 and conduct a review and analysis of topics of research, experimental designs, journals of publication and the internationality of authorships. Sports medicine research is international in scope with improving study methodology and an evolution of topics. Structured review of articles identified in a search of a large electronic medical database. PubMed MEDLINE database. Sports medicine-related clinical research trials published between 1996 and 2005. Review and analysis of articles that meet inclusion criteria. Articles were examined for study topics, research methods, experimental subject characteristics, journal of publication, lead authors and journal countries of origin and language of publication. The search retrieved 414 articles, of which 379 (345 English language and 34 non-English language) met the inclusion criteria. The number of publications increased steadily during the study period. Randomised clinical trials were the most common study type and the "diagnosis, management and treatment of sports-related injuries and conditions" was the most popular study topic. The knee, ankle/foot and shoulder were the most frequent anatomical sites of study. Soccer players and runners were the favourite study subjects. The American Journal of Sports Medicine had the highest number of publications and shared the greatest international diversity of authorships with the British Journal of Sports Medicine. The USA, Australia, Germany and the UK produced a good number of the lead authorships. In all, 91% of articles and 88% of journals were published in English. Sports medicine-related research is internationally diverse, clinical trial publications are increasing and the sophistication of research design may be improving.

  18. Set-oriented data mining in relational databases

    NARCIS (Netherlands)

    Houtsma, M.A.W.; Swami, Arun

    1995-01-01

    Data mining is an important real-life application for businesses. It is critical to find efficient ways of mining large data sets. In order to benefit from the experience with relational databases, a set-oriented approach to mining data is needed. In such an approach, the data mining operations are

  19. A role for relational databases in high energy physics software systems

    International Nuclear Information System (INIS)

    Lauer, R.; Slaughter, A.J.; Wolin, E.

    1987-01-01

    This paper presents the design and initial implementation of software which uses a relational database management system for storage and retrieval of real and Monte Carlo generated events from a charm and beauty spectrometer with a vertex detector. The purpose of the software is to graphically display and interactively manipulate the events, fit tracks and vertices and calculate physics quantities. The INGRES database forms the core of the system, while the DI3000 graphics package is used to plot the events. The paper introduces relational database concepts and their applicability to high energy physics data. It also evaluates the environment provided by INGRES, particularly its usefulness in code development and its Fortran interface. Specifics of the database design we have chosen are detailed as well. (orig.)

  20. The TJ-II Relational Database Access Library: A User's Guide

    International Nuclear Information System (INIS)

    Sanchez, E.; Portas, A. B.; Vega, J.

    2003-01-01

A relational database has been developed to store data representing physical values from TJ-II discharges. This new database complements the existing TJ-II raw data database. The database resides on a host computer running the Windows 2000 Server operating system and is managed by SQL Server. A function library has been developed that permits remote access to these data from user programs running on computers connected to the TJ-II local area networks via remote procedure call. This document provides a general description of the database and its organization, together with a detailed description of the functions included in the library and examples of how to use these functions in computer programs written in the FORTRAN and C languages. (Author) 8 refs

  1. The standardization of data relational mode in the materials database for nuclear power engineering

    International Nuclear Information System (INIS)

    Wang Xinxuan

    1996-01-01

A relational database needs standard data relationships. The data relationships include hierarchical structures and repeating set records. A code database is created, and relationships are established between spare parts, materials and the properties of the materials. Data relationships that are not standard are eliminated, and all relation modes are made to meet the demands of 3NF (Third Normal Form)
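The decomposition this abstract describes can be illustrated with a minimal sqlite3 sketch. All table and column names below are invented for illustration: instead of repeating material properties inside every spare-part record, parts reference a material code, and properties hang off the material, as 3NF requires.

```python
import sqlite3

# Hypothetical 3NF schema: the repeating group of material properties is
# factored out into its own table keyed by material_code.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE material (
    material_code TEXT PRIMARY KEY,   -- entry in the code database
    name          TEXT NOT NULL
);
CREATE TABLE material_property (      -- repeating group factored out
    material_code TEXT REFERENCES material(material_code),
    property_name TEXT,
    value         REAL,
    PRIMARY KEY (material_code, property_name)
);
CREATE TABLE spare_part (
    part_id       TEXT PRIMARY KEY,
    description   TEXT,
    material_code TEXT REFERENCES material(material_code)
);
""")
con.execute("INSERT INTO material VALUES ('S304', 'Stainless steel 304')")
con.executemany("INSERT INTO material_property VALUES (?, ?, ?)",
                [("S304", "density_kg_m3", 7900.0),
                 ("S304", "yield_strength_MPa", 215.0)])
con.execute("INSERT INTO spare_part VALUES ('P-001', 'valve body', 'S304')")

# Properties are reached through the relationship, never duplicated per part.
rows = con.execute("""
    SELECT p.part_id, m.name, mp.property_name, mp.value
    FROM spare_part p
    JOIN material m           ON m.material_code = p.material_code
    JOIN material_property mp ON mp.material_code = m.material_code
    ORDER BY mp.property_name
""").fetchall()
print(rows)
```

Updating a material property now touches one row in `material_property` rather than every spare part made of that material, which is the practical payoff of the normalization.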

  2. Portfolio of prospective clinical trials including brachytherapy: an analysis of the ClinicalTrials.gov database

    International Nuclear Information System (INIS)

    Cihoric, Nikola; Tsikkinis, Alexandros; Miguelez, Cristina Gutierrez; Strnad, Vratislav; Soldatovic, Ivan; Ghadjar, Pirus; Jeremic, Branislav; Dal Pra, Alan; Aebersold, Daniel M.; Lössl, Kristina

    2016-01-01

To evaluate the current status of prospective interventional clinical trials that include brachytherapy (BT) procedures. The records of 175,538 (100%) clinical trials registered at ClinicalTrials.gov were downloaded in September 2014 and a database was established. Trials using BT as an intervention were identified for further analyses. The selected trials were manually categorized according to indication(s), BT source, applied dose rate, primary sponsor type, location, protocol initiator and funding source. We analyzed trials across 8 available trial protocol elements registered within the database. In total, 245 clinical trials were identified: 147 with BT as the primary investigated treatment modality and 98 that included BT as an optional treatment component or as part of the standard treatment. Academic centers were the most frequent protocol initiators in trials where BT was the primary investigational treatment modality (p < 0.01). High-dose-rate (HDR) BT was the most frequently investigated type of BT dose rate (46.3%), followed by low-dose-rate (LDR) BT (42.0%). Prostate was the most frequently investigated tumor entity in trials with BT as the primary treatment modality (40.1%), followed by breast cancer (17.0%). BT was rarely the primary investigated treatment modality for cervical cancer (6.8%). Most clinical trials using BT are in early phases, investigator-initiated and with low accrual numbers. Current investigational activities that include BT mainly focus on prostate and breast cancers. Important questions concerning the optimal usage of BT will not be answered in the near future. The online version of this article (doi:10.1186/s13014-016-0624-8) contains supplementary material, which is available to authorized users

  3. The relational database system of KM3NeT

    Science.gov (United States)

    Albert, Arnauld; Bozza, Cristiano

    2016-04-01

The KM3NeT Collaboration is building a new generation of neutrino telescopes in the Mediterranean Sea. For these telescopes, a relational database has been designed and implemented for several purposes, such as the centralised management of accounts, the storage of all documentation about components, the status of the detector, and information about slow control and calibration data. It also contains information useful during the construction and data acquisition phases. Highlights of the database schema, storage and management are discussed, along with design choices that have an impact on performance. In most cases, the database is not accessed directly by applications, but via a custom-designed Web application server.

  4. Computer Aided Design for Soil Classification Relational Database ...

    African Journals Online (AJOL)

    unique firstlady

    engineering, several developers were asked what rules they applied to identify ... classification is actually a part of all good science. As Michalski ... by a large number of soil scientists. .... and use. The calculus relational database processing is.

  5. Clinical validity of a population database definition of remission in patients with major depression.

    Science.gov (United States)

    Sicras-Mainar, Antoni; Blanca-Tamayo, Milagrosa; Gutiérrez-Nicuesa, Laura; Salvatella-Pasant, Jordi; Navarro-Artieda, Ruth

    2010-02-11

Major depression (MD) is one of the most frequent diagnoses in Primary Care. It is a disabling illness that increases the use of health resources. To describe the concordance between remission according to clinical assessment and remission obtained from the computerized prescription databases of patients with MD in a Spanish population. Multicenter, cross-sectional study. The population under study comprised people from six primary care facilities who had an MD episode between January 2003 and March 2007. A specialist in psychiatry assessed a random sample of patient histories and determined whether a given patient was in remission according to clinical criteria (ICPC-2). Regarding the databases, patients were considered in remission when they did not need further prescriptions of antidepressants (AD) for at least 6 months after completing treatment for a new episode. Validity indicators (sensitivity [S], specificity [Sp]) and clinical utility measures (positive and negative probability ratios [PPR and NPR]) were calculated. The concordance index was established using Cohen's kappa coefficient. Significance level was p. Reliability analysis: Cronbach's alpha, 90.6% (95% CI: 85.6-95.6%). Results show an acceptable level of concordance between remission obtained from the computerized databases and clinical criteria. The major discrepancies were found in diagnostic accuracy.
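The validity and utility indicators named in this abstract (sensitivity, specificity, probability ratios, Cohen's kappa) all derive from a single 2x2 agreement table. A short worked example, with invented counts that are not taken from the study:

```python
# Hypothetical 2x2 table: database-defined remission vs. clinical assessment.
# tp = both say remission, tn = both say no remission, etc. Counts invented.
tp, fp, fn, tn = 70, 10, 15, 105

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppr = sensitivity / (1 - specificity)   # positive probability (likelihood) ratio
npr = (1 - sensitivity) / specificity   # negative probability (likelihood) ratio

# Cohen's kappa: observed agreement corrected for chance agreement.
n = tp + fp + fn + tn
po = (tp + tn) / n
pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
kappa = (po - pe) / (1 - pe)
print(round(sensitivity, 3), round(specificity, 3),
      round(ppr, 2), round(npr, 2), round(kappa, 3))
```

With these counts, kappa lands around 0.74, which would conventionally be read as "substantial" agreement between the two remission definitions.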

  6. Pharmacology Portal: An Open Database for Clinical Pharmacologic Laboratory Services.

    Science.gov (United States)

    Karlsen Bjånes, Tormod; Mjåset Hjertø, Espen; Lønne, Lars; Aronsen, Lena; Andsnes Berg, Jon; Bergan, Stein; Otto Berg-Hansen, Grim; Bernard, Jean-Paul; Larsen Burns, Margrete; Toralf Fosen, Jan; Frost, Joachim; Hilberg, Thor; Krabseth, Hege-Merete; Kvan, Elena; Narum, Sigrid; Austgulen Westin, Andreas

    2016-01-01

    More than 50 Norwegian public and private laboratories provide one or more analyses for therapeutic drug monitoring or testing for drugs of abuse. Practices differ among laboratories, and analytical repertoires can change rapidly as new substances become available for analysis. The Pharmacology Portal was developed to provide an overview of these activities and to standardize the practices and terminology among laboratories. The Pharmacology Portal is a modern dynamic web database comprising all available analyses within therapeutic drug monitoring and testing for drugs of abuse in Norway. Content can be retrieved by using the search engine or by scrolling through substance lists. The core content is a substance registry updated by a national editorial board of experts within the field of clinical pharmacology. This ensures quality and consistency regarding substance terminologies and classification. All laboratories publish their own repertoires in a user-friendly workflow, adding laboratory-specific details to the core information in the substance registry. The user management system ensures that laboratories are restricted from editing content in the database core or in repertoires within other laboratory subpages. The portal is for nonprofit use, and has been fully funded by the Norwegian Medical Association, the Norwegian Society of Clinical Pharmacology, and the 8 largest pharmacologic institutions in Norway. The database server runs an open-source content management system that ensures flexibility with respect to further development projects, including the potential expansion of the Pharmacology Portal to other countries. Copyright © 2016 Elsevier HS Journals, Inc. All rights reserved.

  7. 75 FR 4827 - Submission for OMB Review; Comment Request Clinical Trials Reporting Program (CTRP) Database (NCI)

    Science.gov (United States)

    2010-01-29

    ... subsequent comment concerning corruption in clinical trials conducted by large pharmaceutical companies. The... Collection: Title: Clinical Trials Reporting Program (CTRP) Database. Type of Information Collection Request... institutions. Type of Respondents: Clinical research administrators on behalf of clinical investigators. The...

  8. An algorithm to transform natural language into SQL queries for relational databases

    Directory of Open Access Journals (Sweden)

    Garima Singh

    2016-09-01

Intelligent interfaces that support efficient interaction between users and databases are a pressing need for database applications. Databases must be intelligent enough to make access faster. However, not every user is familiar with Structured Query Language (SQL) queries, as they may not be aware of the structure of the database and would thus have to learn SQL. Non-expert users therefore need a system for interacting with relational databases in their natural language, such as English. For this, the Database Management System (DBMS) must have the ability to understand Natural Language (NL). In this research, an intelligent interface is developed using a semantic matching technique that translates a natural language query to SQL using a set of production rules and a data dictionary. The data dictionary consists of semantic sets for relations and attributes. A series of steps, including lower-case conversion, tokenization, part-of-speech tagging, and database element and SQL element extraction, is used to convert a Natural Language Query (NLQ) to an SQL query. The transformed query is executed and the results are returned to the user.
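The dictionary-driven pipeline the abstract describes (lower-casing, tokenization, lookup in semantic sets, SQL assembly) can be sketched in a few lines. This is a toy, not the paper's system: the semantic sets, the table, and the grammar below are all invented, and a real implementation would add part-of-speech tagging and production rules for WHERE clauses.

```python
import re

# Hypothetical semantic sets: concept -> (SQL name, surface-form synonyms).
SEMANTIC_SETS = {
    "students": ("student", ["student", "students", "pupils"]),
    "name":     ("name",    ["name", "names", "called"]),
    "age":      ("age",     ["age", "ages", "old"]),
}

def nlq_to_sql(question: str) -> str:
    """Translate a natural-language question into a SELECT statement."""
    tokens = re.findall(r"[a-z]+", question.lower())   # lower-case + tokenize
    table, columns = None, []
    for concept, (sql_name, synonyms) in SEMANTIC_SETS.items():
        if any(t in synonyms for t in tokens):         # semantic matching
            if concept == "students":                  # relation vs. attribute
                table = sql_name
            else:
                columns.append(sql_name)
    cols = ", ".join(columns) or "*"                   # no attributes -> all
    return f"SELECT {cols} FROM {table}"

print(nlq_to_sql("Show the names and ages of all students"))
```

Even this toy shows why a curated data dictionary matters: the mapping from "ages" to the `age` column is pure lookup, so coverage of the interface is exactly the coverage of the semantic sets.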

9. A Method for Resetting the Root-Level Password in the MySQL Relational Database Management System (RDBMS)

    Directory of Open Access Journals (Sweden)

    Taqwa Hariguna

    2011-08-01

A database is essential for storing data. With a database, an organization benefits in several respects, such as faster access and reduced paper use. With database implementations, however, it is not uncommon for the database administrator to forget the password in use, which complicates the handling of the database. This research aims to explore how to reset the root-level password in the MySQL relational database management system.

  10. Modeling biology using relational databases.

    Science.gov (United States)

    Peitzsch, Robert M

    2003-02-01

    There are several different methodologies that can be used for designing a database schema; no one is the best for all occasions. This unit demonstrates two different techniques for designing relational tables and discusses when each should be used. These two techniques presented are (1) traditional Entity-Relationship (E-R) modeling and (2) a hybrid method that combines aspects of data warehousing and E-R modeling. The method of choice depends on (1) how well the information and all its inherent relationships are understood, (2) what types of questions will be asked, (3) how many different types of data will be included, and (4) how much data exists.

  11. Using relational databases for improved sequence similarity searching and large-scale genomic analyses.

    Science.gov (United States)

    Mackey, Aaron J; Pearson, William R

    2004-10-01

    Relational databases are designed to integrate diverse types of information and manage large sets of search results, greatly simplifying genome-scale analyses. Relational databases are essential for management and analysis of large-scale sequence analyses, and can also be used to improve the statistical significance of similarity searches by focusing on subsets of sequence libraries most likely to contain homologs. This unit describes using relational databases to improve the efficiency of sequence similarity searching and to demonstrate various large-scale genomic analyses of homology-related data. This unit describes the installation and use of a simple protein sequence database, seqdb_demo, which is used as a basis for the other protocols. These include basic use of the database to generate a novel sequence library subset, how to extend and use seqdb_demo for the storage of sequence similarity search results and making use of various kinds of stored search results to address aspects of comparative genomic analysis.
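The "library subset" idea in this unit, restricting a similarity search to the sequences most likely to contain homologs, is easy to sketch with a generic relational store. The schema and records below are hypothetical stand-ins (the unit's own `seqdb_demo` database is more elaborate):

```python
import sqlite3

# Hypothetical protein table with a taxonomy annotation per sequence.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE protein (acc TEXT PRIMARY KEY, taxon TEXT, seq TEXT)")
con.executemany("INSERT INTO protein VALUES (?, ?, ?)", [
    ("P001", "Homo sapiens",      "MKTAYIAKQR"),
    ("P002", "Mus musculus",      "MKTAYLAKQR"),
    ("P003", "Escherichia coli",  "MSTNPKPQRK"),
])

# Pull only the mammalian subset before searching: a smaller library means
# fewer chance hits, which improves the statistics of the search.
subset = con.execute(
    "SELECT acc, seq FROM protein WHERE taxon IN (?, ?) ORDER BY acc",
    ("Homo sapiens", "Mus musculus")).fetchall()

# Emit the subset as FASTA, ready to hand to a similarity-search program.
fasta = "\n".join(f">{acc}\n{seq}" for acc, seq in subset)
print(fasta)
```

The same table can later absorb the search results themselves (query, hit, score), which is how the unit extends the database for comparative genomic analyses.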

  12. Vertical partitioning of relational OLTP databases using integer programming

    DEFF Research Database (Denmark)

    Amossen, Rasmus Resen

    2010-01-01

A way to optimize performance of relational row-store databases is to reduce row widths by vertically partitioning tables into table fractions, in order to minimize the number of irrelevant columns/attributes read by each transaction. This paper considers vertical partitioning algorithms for relational row-store OLTP databases with an H-store-like architecture, meaning that we would like to maximize the number of single-sited transactions. We present a model for the vertical partitioning problem that, given a schema together with a vertical partitioning and a workload, estimates the costs ... applied to the TPC-C benchmark, and the heuristic is shown to obtain solutions with costs close to the ones found using the quadratic program.
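The cost model the abstract alludes to can be illustrated with a toy version: given per-attribute widths and the attribute sets touched by each transaction, estimate the bytes read under a candidate partitioning. The schema, widths, and workload below are invented, and a real model (as in the paper) would also account for single-sitedness and storage overheads:

```python
# Hypothetical attribute widths (bytes) and workload: (frequency, attrs touched).
widths = {"id": 4, "name": 32, "addr": 64, "balance": 8}
workload = [
    (100, {"id", "balance"}),        # hot, narrow transaction
    (10,  {"id", "name", "addr"}),   # rare, wide transaction
]

def cost(partitioning):
    """Bytes read: a transaction reads, in full, every fraction it touches."""
    total = 0
    for freq, attrs in workload:
        for fraction in partitioning:
            if attrs & fraction:     # fraction holds at least one needed attr
                total += freq * sum(widths[a] for a in fraction)
    return total

unpartitioned = [set(widths)]                       # one wide table
split = [{"id", "balance"}, {"name", "addr"}]       # two narrow fractions
print(cost(unpartitioned), cost(split))
```

Here the split wins because the frequent transaction no longer drags the wide `name`/`addr` columns through the buffer pool; an integer program, as in the paper, searches the space of such partitionings for the cheapest one.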

  13. Renal Gene Expression Database (RGED): a relational database of gene expression profiles in kidney disease

    Science.gov (United States)

    Zhang, Qingzhou; Yang, Bo; Chen, Xujiao; Xu, Jing; Mei, Changlin; Mao, Zhiguo

    2014-01-01

We present a bioinformatics database named Renal Gene Expression Database (RGED), which contains comprehensive gene expression data sets from renal disease research. The web-based interface of RGED allows users to query the gene expression profiles in various kidney-related samples, including renal cell lines, human kidney tissues and murine model kidneys. Researchers can explore the profiles of certain genes and the relationships between genes of interest, and identify biomarkers or even drug targets in kidney diseases. The aim of this work is to provide a user-friendly utility for the renal disease research community to query expression profiles of genes of their own interest without the requirement of advanced computational skills. Availability and implementation: The website is implemented in PHP, R, MySQL and Nginx and is freely available at http://rged.wall-eva.net. Database URL: http://rged.wall-eva.net PMID:25252782

  14. Pivot/Remote: a distributed database for remote data entry in multi-center clinical trials.

    Science.gov (United States)

    Higgins, S B; Jiang, K; Plummer, W D; Edens, T R; Stroud, M J; Swindell, B B; Wheeler, A P; Bernard, G R

    1995-01-01

    1. INTRODUCTION. Data collection is a critical component of multi-center clinical trials. Clinical trials conducted in intensive care units (ICU) are even more difficult because the acute nature of illnesses in ICU settings requires that masses of data be collected in a short time. More than a thousand data points are routinely collected for each study patient. The majority of clinical trials are still "paper-based," even if a remote data entry (RDE) system is utilized. The typical RDE system consists of a computer housed in the CC office and connected by modem to a centralized data coordinating center (DCC). Study data must first be recorded on a paper case report form (CRF), transcribed into the RDE system, and transmitted to the DCC. This approach requires additional monitoring since both the paper CRF and study database must be verified. The paper-based RDE system cannot take full advantage of automatic data checking routines. Much of the effort (and expense) of a clinical trial is ensuring that study data matches the original patient data. 2. METHODS. We have developed an RDE system, Pivot/Remote, that eliminates the need for paper-based CRFs. It creates an innovative, distributed database. The database resides partially at the study clinical centers (CC) and at the DCC. Pivot/Remote is descended from technology introduced with Pivot [1]. Study data is collected at the bedside with laptop computers. A graphical user interface (GUI) allows the display of electronic CRFs that closely mimic the normal paper-based forms. Data entry time is the same as for paper CRFs. Pull-down menus, displaying the possible responses, simplify the process of entering data. Edit checks are performed on most data items. For example, entered dates must conform to some temporal logic imposed by the study. Data must conform to some acceptable range of values. Calculations, such as computing the subject's age or the APACHE II score, are automatically made as the data is entered. Data

  15. Evaluation of relational database products for the VAX

    International Nuclear Information System (INIS)

    Kannan, K.L.

    1985-11-01

Four commercially available database products for the VAX/VMS operating system were evaluated for relative performance and ease of use. The products were DATATRIEVE, INGRES, Rdb, and S1032. Performance was measured in terms of elapsed time, CPU time, direct I/O counts, buffered I/O counts, and page faults. Ease of use is more subjective and has not been quantified here; however, discussion and tables of features as well as query syntax are included. This report describes the environment in which these products were evaluated and the characteristics of the databases used. All comparisons must be interpreted in the context of this setting.

  16. Evaluation of relational database products for the VAX

    Energy Technology Data Exchange (ETDEWEB)

    Kannan, K.L.

    1985-11-01

Four commercially available database products for the VAX/VMS operating system were evaluated for relative performance and ease of use. The products were DATATRIEVE, INGRES, Rdb, and S1032. Performance was measured in terms of elapsed time, CPU time, direct I/O counts, buffered I/O counts, and page faults. Ease of use is more subjective and has not been quantified here; however, discussion and tables of features as well as query syntax are included. This report describes the environment in which these products were evaluated and the characteristics of the databases used. All comparisons must be interpreted in the context of this setting.

  17. An Improved Algorithm for Generating Database Transactions from Relational Algebra Specifications

    Directory of Open Access Journals (Sweden)

    Daniel J. Dougherty

    2010-03-01

Alloy is a lightweight modeling formalism based on relational algebra. In prior work with Fisler, Giannakopoulos, Krishnamurthi, and Yoo, we have presented a tool, Alchemy, that compiles Alloy specifications into implementations that execute against persistent databases. The foundation of Alchemy is an algorithm for rewriting relational algebra formulas into code for database transactions. In this paper we report on recent progress in improving the robustness and efficiency of this transformation.

  18. Alternatives to relational databases in precision medicine: Comparison of NoSQL approaches for big data storage using supercomputers

    Science.gov (United States)

    Velazquez, Enrique Israel

Improvements in medical and genomic technologies have dramatically increased the production of electronic data over the last decade. As a result, data management is rapidly becoming a major determinant, and urgent challenge, for the development of Precision Medicine. Although successful data management is achievable using Relational Database Management Systems (RDBMS), exponential data growth is a significant contributor to failure scenarios. Growing amounts of data can also be observed in other sectors, such as economics and business, which, together with the previous facts, suggests that alternative database approaches (NoSQL) may soon be required for efficient storage and management of big databases. However, this hypothesis has been difficult to test in the Precision Medicine field, since alternative database architectures are complex to assess and means to integrate heterogeneous electronic health records (EHR) with dynamic genomic data are not easily available. In this dissertation, we present a novel set of experiments for identifying NoSQL database approaches that enable effective data storage and management in Precision Medicine using patients' clinical and genomic information from The Cancer Genome Atlas (TCGA). The first experiment draws on performance and scalability from biologically meaningful queries with differing complexity and database sizes. The second experiment measures performance and scalability in database updates without schema changes. The third experiment assesses performance and scalability in database updates with schema modifications due to dynamic data. We have identified two NoSQL approaches, based on Cassandra and Redis, which seem to be the ideal database management systems for our precision medicine queries in terms of performance and scalability. We present NoSQL approaches and show how they can be used to manage clinical and genomic big data. Our research is relevant to the public health since we are focusing on one of the main

  19. [A relational database to store Poison Centers calls].

    Science.gov (United States)

    Barelli, Alessandro; Biondi, Immacolata; Tafani, Chiara; Pellegrini, Aristide; Soave, Maurizio; Gaspari, Rita; Annetta, Maria Giuseppina

    2006-01-01

Italian Poison Centers answer approximately 100,000 calls per year. Potentially, this activity is a huge source of data for toxicovigilance and for syndromic surveillance. During the last decade, surveillance systems for early detection of outbreaks have drawn the attention of public health institutions due to the threat of terrorism and high-profile disease outbreaks. Poisoning surveillance needs the ongoing, systematic collection, analysis, interpretation, and dissemination of harmonised data about poisonings from all Poison Centers for use in public health action to reduce morbidity and mortality and to improve health. The entity-relationship model for a Poison Center relational database is extremely complex and has not been studied in detail. For this reason, data collection is not harmonised among Italian Poison Centers. Entities are recognizable concepts, either concrete or abstract, such as patients and poisons, or events that have relevance to the database, such as calls. The connectivity and cardinality of relationships are complex as well. A one-to-many relationship exists between calls and patients: for one instance of the entity calls, there are zero, one, or many instances of the entity patients. At the same time, a one-to-many relationship exists between patients and poisons: for one instance of the entity patients, there are zero, one, or many instances of the entity poisons. This paper presents a relational model for a poison center database that allows the harmonised collection of data on poison center calls.
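The two one-to-many relationships spelled out above (call → patients, patient → poisons) translate directly into foreign keys. A minimal sqlite3 sketch, with table and column names that are illustrative only, not the paper's actual schema:

```python
import sqlite3

# Hypothetical three-table model: one call involves zero or more patients,
# and each patient has zero or more poison exposures.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE call     (call_id INTEGER PRIMARY KEY, received_at TEXT);
CREATE TABLE patient  (patient_id INTEGER PRIMARY KEY,
                       call_id INTEGER REFERENCES call(call_id),
                       age INTEGER);
CREATE TABLE exposure (exposure_id INTEGER PRIMARY KEY,
                       patient_id INTEGER REFERENCES patient(patient_id),
                       poison TEXT);
""")
con.execute("INSERT INTO call VALUES (1, '2006-03-01T10:15')")
con.executemany("INSERT INTO patient VALUES (?, 1, ?)", [(1, 4), (2, 34)])
con.executemany("INSERT INTO exposure VALUES (?, ?, ?)",
                [(1, 1, "paracetamol"), (2, 1, "ibuprofen"),
                 (3, 2, "ethanol")])

# One call, two patients, three exposures: the cardinalities in action.
n = con.execute("""
    SELECT COUNT(*) FROM call c
    JOIN patient p  ON p.call_id = c.call_id
    JOIN exposure e ON e.patient_id = p.patient_id
""").fetchone()[0]
print(n)
```

Because each level references its parent by key rather than embedding it, every center recording calls this way produces rows that can be pooled and analysed together, which is exactly the harmonisation the paper argues for.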

  20. Relational Databases: A Transparent Framework for Encouraging Biology Students to Think Informatically

    Science.gov (United States)

    Rice, Michael; Gladstone, William; Weir, Michael

    2004-01-01

    We discuss how relational databases constitute an ideal framework for representing and analyzing large-scale genomic data sets in biology. As a case study, we describe a Drosophila splice-site database that we recently developed at Wesleyan University for use in research and teaching. The database stores data about splice sites computed by a…

  1. Flexible network reconstruction from relational databases with Cytoscape and CytoSQL.

    Science.gov (United States)

    Laukens, Kris; Hollunder, Jens; Dang, Thanh Hai; De Jaeger, Geert; Kuiper, Martin; Witters, Erwin; Verschoren, Alain; Van Leemput, Koenraad

    2010-07-01

Molecular interaction networks can be efficiently studied using network visualization software such as Cytoscape. The relevant nodes, edges and their attributes can be imported into Cytoscape in various file formats, or directly from external databases through specialized third-party plugins. However, molecular data are often stored in relational databases with their own specific structure, for which dedicated plugins do not exist. Therefore, a more generic solution is presented. A new Cytoscape plugin, 'CytoSQL', has been developed to connect Cytoscape to any relational database. It allows users to launch SQL ('Structured Query Language') queries from within Cytoscape, with the option to inject node or edge features of an existing network as SQL arguments, and to convert the retrieved data to Cytoscape network components. Supported by a set of case studies, we demonstrate the flexibility and the power of the CytoSQL plugin in converting specific data subsets into meaningful network representations. CytoSQL offers a unified approach to let Cytoscape interact with relational databases. Thanks to the power of the SQL syntax, this tool can rapidly generate and enrich networks according to very complex criteria. The plugin is available at http://www.ptools.ua.ac.be/CytoSQL.
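The core pattern here, injecting an existing node's attribute into a parameterized SQL query and converting the rows into network edges, is independent of Cytoscape itself and can be sketched with sqlite3. The interaction table, gene names, and scores below are invented for illustration:

```python
import sqlite3

# Hypothetical relational store of pairwise molecular interactions.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE interaction (a TEXT, b TEXT, score REAL)")
con.executemany("INSERT INTO interaction VALUES (?, ?, ?)", [
    ("TP53", "MDM2", 0.99), ("TP53", "EP300", 0.95), ("MDM2", "UBE3A", 0.40),
])

def neighbours(node, min_score):
    """CytoSQL-style expansion: the selected node is injected as an argument."""
    rows = con.execute(
        "SELECT a, b, score FROM interaction "
        "WHERE (a = ? OR b = ?) AND score >= ?",
        (node, node, min_score))
    # Each row becomes an edge (source, target, weight) for the network.
    return [(a, b, s) for a, b, s in rows]

edges = neighbours("TP53", 0.9)   # grow the network around one node
print(edges)
```

Using bound parameters (`?`) rather than string concatenation is what makes the injection of node features both safe and reusable across queries.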

  2. CFTR-France, a national relational patient database for sharing genetic and phenotypic data associated with rare CFTR variants.

    Science.gov (United States)

    Claustres, Mireille; Thèze, Corinne; des Georges, Marie; Baux, David; Girodon, Emmanuelle; Bienvenu, Thierry; Audrezet, Marie-Pierre; Dugueperoux, Ingrid; Férec, Claude; Lalau, Guy; Pagin, Adrien; Kitzis, Alain; Thoreau, Vincent; Gaston, Véronique; Bieth, Eric; Malinge, Marie-Claire; Reboul, Marie-Pierre; Fergelot, Patricia; Lemonnier, Lydie; Mekki, Chadia; Fanen, Pascale; Bergougnoux, Anne; Sasorith, Souphatta; Raynal, Caroline; Bareil, Corinne

    2017-10-01

Most of the 2,000 variants identified in the CFTR (cystic fibrosis transmembrane conductance regulator) gene are rare or private. Their interpretation is hampered by the lack of available data and resources, making patient care and genetic counseling challenging. We developed a patient-based database dedicated to the annotation of rare CFTR variants in the context of their cis- and trans-allelic combinations. Based on almost 30 years of experience of CFTR testing, CFTR-France (https://cftr.iurc.montp.inserm.fr/cftr) currently compiles 16,819 variant records from 4,615 individuals with cystic fibrosis (CF) or CFTR-RD (related disorders), fetuses with ultrasound bowel anomalies, newborns awaiting clinical diagnosis, and asymptomatic compound heterozygotes. For each of the 736 different variants reported in the database, patient characteristics and genetic information (other variations in cis or in trans) have been thoroughly checked by a dedicated curator. Combining updated clinical, epidemiological, in silico, or in vitro functional data helps with the interpretation of unclassified variants and the reassessment of misclassified ones. This comprehensive CFTR database is now an invaluable tool for diagnostic laboratories gathering information on rare variants, especially in the context of genetic counseling, prenatal and preimplantation genetic diagnosis. CFTR-France is thus highly complementary to the international database CFTR2, focused so far on the most common CF-causing alleles. © 2017 Wiley Periodicals, Inc.

  3. Collaborative research between academia and industry using a large clinical trial database: a case study in Alzheimer's disease

    DEFF Research Database (Denmark)

    Jones, Roy; Wilkinson, David; Lopez, Oscar L

    2011-01-01

Large clinical trial databases, developed over the course of a comprehensive clinical trial programme, represent an invaluable resource for clinical researchers. Data mining projects sponsored by industry that use these databases, however, are often not viewed favourably in the academic medical community because of concerns that commercial, rather than scientific, goals are the primary purpose of such endeavours. Thus, there are few examples of sustained collaboration between leading academic clinical researchers and industry professionals in a large-scale data mining project. We present here...

  4. PACSY, a relational database management system for protein structure and chemical shift analysis.

    Science.gov (United States)

    Lee, Woonghee; Yu, Wookyung; Kim, Suhkmann; Chang, Iksoo; Lee, Weontae; Markley, John L

    2012-10-01

    PACSY (Protein structure And Chemical Shift NMR spectroscopY) is a relational database management system that integrates information from the Protein Data Bank, the Biological Magnetic Resonance Data Bank, and the Structural Classification of Proteins database. PACSY provides three-dimensional coordinates and chemical shifts of atoms along with derived information such as torsion angles, solvent accessible surface areas, and hydrophobicity scales. PACSY consists of six relational table types linked to one another for coherence by key identification numbers. Database queries are enabled by advanced search functions supported by an RDBMS server such as MySQL or PostgreSQL. PACSY enables users to search for combinations of information from different database sources in support of their research. Two software packages, PACSY Maker for database creation and PACSY Analyzer for database analysis, are available from http://pacsy.nmrfam.wisc.edu.

  5. PACSY, a relational database management system for protein structure and chemical shift analysis

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Woonghee, E-mail: whlee@nmrfam.wisc.edu [University of Wisconsin-Madison, National Magnetic Resonance Facility at Madison, and Biochemistry Department (United States); Yu, Wookyung [Center for Proteome Biophysics, Pusan National University, Department of Physics (Korea, Republic of); Kim, Suhkmann [Pusan National University, Department of Chemistry and Chemistry Institute for Functional Materials (Korea, Republic of); Chang, Iksoo [Center for Proteome Biophysics, Pusan National University, Department of Physics (Korea, Republic of); Lee, Weontae, E-mail: wlee@spin.yonsei.ac.kr [Yonsei University, Structural Biochemistry and Molecular Biophysics Laboratory, Department of Biochemistry (Korea, Republic of); Markley, John L., E-mail: markley@nmrfam.wisc.edu [University of Wisconsin-Madison, National Magnetic Resonance Facility at Madison, and Biochemistry Department (United States)

    2012-10-15

    PACSY (Protein structure And Chemical Shift NMR spectroscopY) is a relational database management system that integrates information from the Protein Data Bank, the Biological Magnetic Resonance Data Bank, and the Structural Classification of Proteins database. PACSY provides three-dimensional coordinates and chemical shifts of atoms along with derived information such as torsion angles, solvent accessible surface areas, and hydrophobicity scales. PACSY consists of six relational table types linked to one another for coherence by key identification numbers. Database queries are enabled by advanced search functions supported by an RDBMS server such as MySQL or PostgreSQL. PACSY enables users to search for combinations of information from different database sources in support of their research. Two software packages, PACSY Maker for database creation and PACSY Analyzer for database analysis, are available from http://pacsy.nmrfam.wisc.edu.

  6. PACSY, a relational database management system for protein structure and chemical shift analysis

    Science.gov (United States)

    Lee, Woonghee; Yu, Wookyung; Kim, Suhkmann; Chang, Iksoo

    2012-01-01

    PACSY (Protein structure And Chemical Shift NMR spectroscopY) is a relational database management system that integrates information from the Protein Data Bank, the Biological Magnetic Resonance Data Bank, and the Structural Classification of Proteins database. PACSY provides three-dimensional coordinates and chemical shifts of atoms along with derived information such as torsion angles, solvent accessible surface areas, and hydrophobicity scales. PACSY consists of six relational table types linked to one another for coherence by key identification numbers. Database queries are enabled by advanced search functions supported by an RDBMS server such as MySQL or PostgreSQL. PACSY enables users to search for combinations of information from different database sources in support of their research. Two software packages, PACSY Maker for database creation and PACSY Analyzer for database analysis, are available from http://pacsy.nmrfam.wisc.edu. PMID:22903636

  7. PACSY, a relational database management system for protein structure and chemical shift analysis

    International Nuclear Information System (INIS)

    Lee, Woonghee; Yu, Wookyung; Kim, Suhkmann; Chang, Iksoo; Lee, Weontae; Markley, John L.

    2012-01-01

    PACSY (Protein structure And Chemical Shift NMR spectroscopY) is a relational database management system that integrates information from the Protein Data Bank, the Biological Magnetic Resonance Data Bank, and the Structural Classification of Proteins database. PACSY provides three-dimensional coordinates and chemical shifts of atoms along with derived information such as torsion angles, solvent accessible surface areas, and hydrophobicity scales. PACSY consists of six relational table types linked to one another for coherence by key identification numbers. Database queries are enabled by advanced search functions supported by an RDBMS server such as MySQL or PostgreSQL. PACSY enables users to search for combinations of information from different database sources in support of their research. Two software packages, PACSY Maker for database creation and PACSY Analyzer for database analysis, are available from http://pacsy.nmrfam.wisc.edu.

  8. Examining database persistence of ISO/EN 13606 standardized electronic health record extracts: relational vs. NoSQL approaches.

    Science.gov (United States)

    Sánchez-de-Madariaga, Ricardo; Muñoz, Adolfo; Lozano-Rubí, Raimundo; Serrano-Balazote, Pablo; Castro, Antonio L; Moreno, Oscar; Pascual, Mario

    2017-08-18

    The objective of this research is to compare relational and non-relational (NoSQL) database systems for storing, recovering, querying and persisting standardized medical information in the form of ISO/EN 13606 normalized Electronic Health Record XML extracts, both in isolation and concurrently. NoSQL database systems have recently attracted much attention, but few studies in the literature address their direct comparison with relational databases when applied to build the persistence layer of a standardized medical information system. One relational and two NoSQL databases (one document-based and one native XML database) of three different sizes were created in order to evaluate and compare the response times (algorithmic complexity) of six queries of growing complexity, which were performed on them. Comparable results available in the literature have also been considered. Relational and non-relational NoSQL database systems show almost linear algorithmic complexity in query execution, but with very different linear slopes, the former being much steeper than the latter two. Document-based NoSQL databases perform better in concurrency than in isolation, and also better than relational databases in concurrency. Non-relational NoSQL databases seem to be more appropriate than standard relational SQL databases when database size is extremely high (secondary use, research applications). Document-based NoSQL databases perform in general better than native XML NoSQL databases. EHR extract visualization and editing are also document-based tasks better suited to NoSQL database systems. However, the appropriate database solution depends heavily on each particular situation and specific problem.
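
    The two persistence styles under comparison can be illustrated with a minimal sketch. The XML here is a toy stand-in, not real ISO/EN 13606 markup: a document store keeps the whole extract under one key, while a relational store shreds it into rows for fine-grained queries:

```python
import sqlite3
import xml.etree.ElementTree as ET

# Toy EHR "extract" (hypothetical elements, not actual 13606 markup).
extract = "<extract id='e1'><item code='bp'>120</item><item code='hr'>64</item></extract>"

# Document-style persistence: the whole XML text under one key.
doc_store = {"e1": extract}

# Relational persistence: shred the XML into rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE item (extract_id TEXT, code TEXT, value TEXT)")
root = ET.fromstring(extract)
for item in root:
    conn.execute("INSERT INTO item VALUES (?, ?, ?)",
                 (root.get("id"), item.get("code"), item.text))

# Retrieval: the document store returns the extract in one lookup
# (good for visualization/editing), while the relational store
# answers fine-grained queries.
whole = doc_store["e1"]
bp = conn.execute("SELECT value FROM item WHERE code = 'bp'").fetchone()[0]
print(bp)  # 120
```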

  9. Handling data redundancy and update anomalies in fuzzy relational databases

    International Nuclear Information System (INIS)

    Chen, G.; Kerre, E.E.

    1996-01-01

    This paper discusses various data redundancy and update anomaly problems that may occur in fuzzy relational databases. To cope with these problems and avoid undesirable consequences when fuzzy databases are updated via data insertion, deletion and modification, a number of fuzzy normal forms (e.g., F1NF, 0-F2NF, 0-F3NF, 0-FBCNF) are used to guide the design of relation schemes such that partial and transitive fuzzy functional dependencies (FFDs) between relation attributes are restricted. Based upon FFDs and related concepts, particular attention is paid to 0-F3NF and 0-FBCNF, and to the corresponding decomposition algorithms. These algorithms not only produce relation schemes that are either in 0-F3NF or in 0-FBCNF, but also guarantee that the information (data content and FFDs) of the original schemes can be recovered from the resultant schemes.

  10. Socioeconomic inequalities in prognostic markers of non-Hodgkin lymphoma: analysis of a national clinical database

    DEFF Research Database (Denmark)

    Frederiksen, Birgitte Lidegaard; Brown, Peter de Nully; Dalton, Susanne Oksbjerg

    2011-01-01

    The survival of non-Hodgkin lymphoma patients strongly depends on a range of prognostic factors. This registry-based clinical cohort study investigates the relation between socioeconomic position and prognostic markers in 6234 persons included in a national clinical database in Denmark in 2000-2008. Several measures of individual socioeconomic position were obtained from Statistics Denmark. The risk of being diagnosed with advanced disease, as expressed by the six prognostic markers (Ann Arbor stage III or IV, more than one extranodal lesion, elevated serum lactate dehydrogenase (LDH), performance… …in histological subgroups reflecting aggressiveness of disease among the social groups. One of the most likely mechanisms of the social difference is longer delay in those with low socioeconomic position. The findings of social inequality in prognostic markers in non-Hodgkin lymphoma (NHL) patients could already…

  11. Development and Feasibility Testing of a Critical Care EEG Monitoring Database for Standardized Clinical Reporting and Multicenter Collaborative Research.

    Science.gov (United States)

    Lee, Jong Woo; LaRoche, Suzette; Choi, Hyunmi; Rodriguez Ruiz, Andres A; Fertig, Evan; Politsky, Jeffrey M; Herman, Susan T; Loddenkemper, Tobias; Sansevere, Arnold J; Korb, Pearce J; Abend, Nicholas S; Goldstein, Joshua L; Sinha, Saurabh R; Dombrowski, Keith E; Ritzl, Eva K; Westover, Michael B; Gavvala, Jay R; Gerard, Elizabeth E; Schmitt, Sarah E; Szaflarski, Jerzy P; Ding, Kan; Haas, Kevin F; Buchsbaum, Richard; Hirsch, Lawrence J; Wusthoff, Courtney J; Hopp, Jennifer L; Hahn, Cecil D

    2016-04-01

    The rapid expansion of the use of continuous critical care electroencephalogram (cEEG) monitoring and resulting multicenter research studies through the Critical Care EEG Monitoring Research Consortium has created the need for a collaborative data sharing mechanism and repository. The authors describe the development of a research database incorporating the American Clinical Neurophysiology Society standardized terminology for critical care EEG monitoring. The database includes flexible report generation tools that allow for daily clinical use. Key clinical and research variables were incorporated into a Microsoft Access database. To assess its utility for multicenter research data collection, the authors performed a 21-center feasibility study in which each center entered data from 12 consecutive intensive care unit monitoring patients. To assess its utility as a clinical report generating tool, three large volume centers used it to generate daily clinical critical care EEG reports. A total of 280 subjects were enrolled in the multicenter feasibility study. The duration of recording (median, 25.5 hours) varied significantly between the centers. The incidence of seizure (17.6%), periodic/rhythmic discharges (35.7%), and interictal epileptiform discharges (11.8%) was similar to previous studies. The database was used as a clinical reporting tool by 3 centers that entered a total of 3,144 unique patients covering 6,665 recording days. The Critical Care EEG Monitoring Research Consortium database has been successfully developed and implemented with a dual role as a collaborative research platform and a clinical reporting tool. It is now available for public download to be used as a clinical data repository and report generating tool.

  12. ZeBase: an open-source relational database for zebrafish laboratories.

    Science.gov (United States)

    Hensley, Monica R; Hassenplug, Eric; McPhail, Rodney; Leung, Yuk Fai

    2012-03-01

    ZeBase is an open-source relational database for zebrafish inventory. It is designed for recording the genetic, breeding, and survival information of fish lines maintained in a single- or multi-laboratory environment. Users can easily access ZeBase through standard web browsers anywhere on a network. Convenient search and reporting functions are available to facilitate routine inventory work; such functions can also be automated by simple scripting. Optional barcode generation and scanning are also built in for easy access to the information related to any fish. Further information about the database and an example implementation can be found at http://zebase.bio.purdue.edu.

  13. A searching and reporting system for relational databases using a graph-based metadata representation.

    Science.gov (United States)

    Hewitt, Robin; Gobbi, Alberto; Lee, Man-Ling

    2005-01-01

    Relational databases are the current standard for storing and retrieving data in the pharmaceutical and biotech industries. However, retrieving data from a relational database requires specialized knowledge of the database schema and of the SQL query language. At Anadys, we have developed an easy-to-use system for searching and reporting data in a relational database to support our drug discovery project teams. This system is fast and flexible and allows users to access all data without having to write SQL queries. This paper presents the hierarchical, graph-based metadata representation and SQL-construction methods that, together, are the basis of this system's capabilities.
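
    One common way to construct SQL from a graph-based metadata representation is to search the foreign-key graph for a join path and emit one JOIN clause per edge. A sketch under invented table names, not the actual Anadys system:

```python
from collections import deque

# Foreign-key relationships as a graph: table -> {neighbour: join condition}.
# All names here are hypothetical.
edges = {
    "compound": {"assay": "compound.id = assay.compound_id"},
    "assay": {"compound": "compound.id = assay.compound_id",
              "result": "assay.id = result.assay_id"},
    "result": {"assay": "assay.id = result.assay_id"},
}

def join_path(start, goal):
    """Breadth-first search for the shortest join path between two tables."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in edges[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])

def build_sql(start, goal, columns):
    """Assemble a SELECT with one JOIN clause per edge on the path."""
    path = join_path(start, goal)
    sql = f"SELECT {', '.join(columns)} FROM {path[0]}"
    for a, b in zip(path, path[1:]):
        sql += f" JOIN {b} ON {edges[a][b]}"
    return sql

sql = build_sql("compound", "result", ["compound.name", "result.value"])
print(sql)
```

    The user only names the tables and columns of interest; the metadata graph supplies the join conditions, which is what frees users from writing SQL by hand.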

  14. An Object-Relational Ifc Storage Model Based on Oracle Database

    Science.gov (United States)

    Li, Hang; Liu, Hua; Liu, Yong; Wang, Yuan

    2016-06-01

    As building models become increasingly complicated, the level of collaboration across professionals attracts more attention in the architecture, engineering and construction (AEC) industry. To accommodate this change, buildingSMART developed the Industry Foundation Classes (IFC) to facilitate interoperability between software platforms. However, IFC data are currently shared as text files, which has drawbacks. In this paper, considering the object-based inheritance hierarchy of IFC and the storage features of different database management systems (DBMS), we propose a novel object-relational storage model that uses an Oracle database to store IFC data. First, we establish mapping rules between the data types in the IFC specification and the Oracle database. Second, we design the IFC database according to the relationships among IFC entities. Third, we parse the IFC file and extract the IFC data. Finally, we store the IFC data in the corresponding tables of the IFC database. In our experiments, three different building models are used to demonstrate the effectiveness of the storage model. A comparison of the experimental statistics shows that IFC data are lossless during data exchange.
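
    The first step, mapping IFC data types to Oracle column types, might look like the following sketch. The type map and entity attributes are hypothetical illustrations, not the paper's actual rules:

```python
# Hypothetical mapping of a few IFC defined types to Oracle column types.
TYPE_MAP = {
    "IfcLabel": "VARCHAR2(255)",
    "IfcReal": "NUMBER",
    "IfcInteger": "NUMBER(10)",
}

def ddl_for(entity, attributes):
    """Generate a CREATE TABLE statement for one IFC entity type."""
    cols = ", ".join(f"{name} {TYPE_MAP[ifc_type]}" for name, ifc_type in attributes)
    return f"CREATE TABLE {entity} (id NUMBER(10) PRIMARY KEY, {cols})"

ddl = ddl_for("IfcWall", [("name", "IfcLabel"), ("height", "IfcReal")])
print(ddl)
```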

  15. A database for exact solutions in general relativity

    International Nuclear Information System (INIS)

    Horvath, I.; Horvath, Zs.; Lukacs, B.

    1993-07-01

    The field equations of General Relativity are coupled second-order partial differential equations. Therefore no general method is known for generating solutions for prescribed initial and boundary conditions. In addition, the meaning of the particular coordinates cannot be known until the metric is found. The result must therefore permit arbitrary coordinate transformations, i.e. most kinds of approximation methods are improper. So exact solutions are necessary, and each one is an individual product. For storage, retrieval and comparison, database-handling techniques are needed. A database of 1359 articles (each cross-referred at least once), published in 156 of the more important journals, is presented. It can be handled by dBase III Plus on IBM PCs. (author) 5 refs.; 5 tabs

  16. Fostering new relational experience: clinical process in couple psychotherapy.

    Science.gov (United States)

    Marmarosh, Cheri L

    2014-03-01

    One of the most critical goals for couple psychotherapy is to foster a new relational experience in the session where the couple feels safe enough to reveal more vulnerable emotions and to explore their defensive withdrawal, aggressive attacking, or blaming. The lived intimate experience in the session offers the couple an opportunity to gain integrative insight into their feelings, expectations, and behaviors that ultimately hinder intimacy. The clinical processes that are necessary include empathizing with the couple and facilitating safety within the session, looking for opportunities to explore emotions, ruptures, and unconscious motivations that maintain distance in the relationship, and creating a new relational experience in the session that has the potential to engender integrative insight. These clinical processes will be presented with empirical support. Excerpts from a session will be used to highlight how these processes influence the couple and promote increased intimacy. (PsycINFO Database Record (c) 2014 APA, all rights reserved).

  17. NESSY, a relational PC database for nuclear structure and decay data

    International Nuclear Information System (INIS)

    Boboshin, I.N.; Varlamov, V.V.; Trukhanov, S.K.

    1994-11-01

    The universal relational database NESSY (New ENSDF Search SYstem), based on the international ENSDF system (Evaluated Nuclear Structure Data File), is described. NESSY, which was developed for IBM-compatible PCs, provides highly efficient processing of ENSDF information for searches and retrievals of nuclear physics data. The principles of the database's development and examples of applications are presented. (author)

  18. A Relational Database of WHO Mortality Data Prepared to Facilitate Global Mortality Research

    Directory of Open Access Journals (Sweden)

    Albert de Roos

    2015-09-01

    Detailed world mortality data such as those collected by the World Health Organization give a wealth of information about causes of death worldwide over a time span of 60 years. However, the raw mortality data in the text format provided by the WHO are not directly suitable for systematic research and data mining. In this Data Paper, a relational database is presented that is created from the raw WHO mortality data set and includes mortality rates, an ICD-code table and country reference data. This enriched database, as a corpus of global mortality data, can be readily imported into relational databases but can also function as the data source for other types of databases. Use of this database can therefore greatly facilitate global epidemiological research that may provide new clues to genetic or environmental factors in the origins of diseases.
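
    Loading such raw text into a relational table can be sketched as follows. The column layout here is a hypothetical miniature of the real WHO files, which carry many more columns:

```python
import csv
import io
import sqlite3

# Toy stand-in for the raw WHO mortality text data (invented rows).
raw = io.StringIO("Country,Year,Cause,Deaths\n4010,1995,A00,12\n4010,1996,A00,9\n")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mortality (country TEXT, year INTEGER, icd_code TEXT, deaths INTEGER)")
reader = csv.DictReader(raw)
conn.executemany(
    "INSERT INTO mortality VALUES (?, ?, ?, ?)",
    ((r["Country"], int(r["Year"]), r["Cause"], int(r["Deaths"])) for r in reader),
)

# Once relational, aggregation across years/countries is a plain query:
total = conn.execute("SELECT SUM(deaths) FROM mortality WHERE icd_code = 'A00'").fetchone()[0]
print(total)  # 21
```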

  19. Integration of a clinical trial database with a PACS

    International Nuclear Information System (INIS)

    Van Herk, M

    2014-01-01

    Many clinical trials use Electronic Case Report Forms (ECRF), e.g., from OpenClinica. Trial data are augmented if DICOM scans, dose cubes, etc. from the Picture Archiving and Communication System (PACS) are included for data mining. Unfortunately, there is as yet no structured way to collect DICOM objects in trial databases. In this paper, we obtain a tight integration of ECRF and PACS using open-source software. Methods: DICOM identifiers for selected images/series/studies are stored in associated ECRF events (e.g., baseline) as follows: 1) JavaScript added to OpenClinica communicates over HTTP with a gateway server inside the hospital's firewall; 2) on this gateway, an open-source DICOM server runs scripts to query and select the data, returning anonymized identifiers; 3) the scripts then collect, anonymize, zip and transmit the selected data to a central trial server; 4) there, the data are stored in a DICOM archive, which allows authorized ECRF users to view and download the anonymous images associated with each event. Results: all integration scripts are open source. The PACS administrator configures the anonymization script and decides whether to use the gateway in passive (receiving) mode or in an active mode that goes out to the PACS to gather data. Our ECRF-centric approach supports automatic data mining by iterating over the cases in the ECRF database, providing the identifiers to load images and the clinical data to correlate with image analysis results. Conclusions: using open-source software and web technology, a tight integration has been achieved between PACS and ECRF.
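
    The anonymization in step 2 could, for instance, replace patient identifiers with a salted one-way hash, so that the same patient keeps a stable pseudonym across events while the mapping stays inside the hospital's firewall. This is a generic sketch, not the paper's actual script, and all names are hypothetical:

```python
import hashlib

# Site-local secret; kept on the gateway, never transmitted.
SALT = b"site-secret"

def anonymize(patient_id: str) -> str:
    """Stable, non-reversible pseudonym for a patient identifier."""
    return hashlib.sha256(SALT + patient_id.encode()).hexdigest()[:16]

a = anonymize("PAT-00123")
b = anonymize("PAT-00123")
print(a == b)  # True: same patient -> same anonymous ID across events
```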

  20. ARCTOS: a relational database relating specimens, specimen-based science, and archival documentation

    Science.gov (United States)

    Jarrell, Gordon H.; Ramotnik, Cindy A.; McDonald, D.L.

    2010-01-01

    Data are preserved when they are perpetually discoverable, but even in the Information Age, discovery of legacy data appropriate to particular investigations is uncertain. Secure Internet storage is necessary but insufficient. Data can be discovered only when they are adequately described, and visibility increases markedly if the data are related to other data that are receiving usage. Such relationships can be built within (1) the framework of a relational database, or (2) among separate resources, within the framework of the Internet. Evolving primarily around biological collections, Arctos is a database that does both of these tasks. It includes data structures for a diversity of specimen attributes, essentially all collection-management tasks, plus literature citations, project descriptions, etc. As a centralized collaboration of several university museums, Arctos is an ideal environment for capitalizing on the many relationships that often exist between items in separate collections. Arctos is related to NIH’s DNA-sequence repository (GenBank) with record-to-record reciprocal linkages, and it serves data to several discipline-specific web portals, including the Global Biodiversity Information Facility (GBIF). The University of Alaska Museum’s paleontological collection is Arctos’s recent extension beyond the constraints of neontology. Arctos holds about 1.3 million cataloged items, and additional collections are being added each year.

  1. Scientific Meetings Database: A New Tool for CTBT-Related International Cooperation

    Energy Technology Data Exchange (ETDEWEB)

    Knapik, Jerzy F.; Girven, Mary L.

    1999-08-20

    The mission of international cooperation is defined in the Comprehensive Nuclear-Test-Ban Treaty (CTBT). Ways and means of implementation were the subject of discussion during the International Cooperation Workshop held in Vienna in November 1998, and during the Regional Workshop for CTBTO International Cooperation held in Cairo, Egypt in June 1999. In particular, a database of "Scientific and Technical Meetings Directly or Indirectly Related to CTBT Verification-Related Technologies" was developed by the CTBTO PrepCom/PTS/International Cooperation section and integrated into the organization's various web sites in cooperation with the U.S. Department of Energy CTBT Research and Development Program. This database, whose structure and use are described in this paper, is meant to assist the CTBT-related scientific community in identifying worldwide expertise in the CTBT verification-related technologies and should help experts, particularly those of less technologically advanced States Signatories, to strengthen contacts and to pursue international cooperation under the Treaty regime. Specific opportunities for international cooperation, in particular those provided by active participation in the use and further development of this database, are also presented.

  2. Keeping Track of Our Treasures: Managing Historical Data with Relational Database Software.

    Science.gov (United States)

    Gutmann, Myron P.; And Others

    1989-01-01

    Describes the way a relational database management system manages a large historical data collection project. Shows that such databases are practical to construct. States that the programming tasks involved are not for beginners, but the rewards of having data organized are worthwhile. (GG)

  3. The KTOI Ecosystem Project Relational Database : a Report Prepared by Statistical Consulting Services for KTOI Describing the Key Components and Specifications of the KTOI Relational Database.

    Energy Technology Data Exchange (ETDEWEB)

    Shafii, Bahman [Statistical Consulting Services

    2009-09-24

    Data are the central focus of any research project. Their collection and analysis are crucial to meeting project goals, testing scientific hypotheses, and drawing relevant conclusions. Typical research projects often devote the majority of their resources to the collection, storage and analysis of data. Therefore, issues related to data quality should be of foremost concern. Data quality issues are even more important when conducting multifaceted studies involving several teams of researchers. Without the use of a standardized protocol, for example, independent data collection carried out by separate research efforts can lead to inconsistencies, confusion and errors throughout the larger project. A database management system can be utilized to help avoid all of the aforementioned problems. The centralization of data into a common relational unit, i.e. a relational database, shifts the responsibility for data quality and maintenance from multiple individuals to a single database manager, thus allowing data quality issues to be assessed and corrected in a timely manner. The database system also provides an easy mechanism for standardizing data components, such as variable names and values uniformly across all segments of a project. This is particularly an important issue when data are collected on a number of biological/physical response and explanatory variables from various locations and times. The database system can integrate all segments of a large study into one unit, while providing oversight and accessibility to the data collection process. The quality of all data collected is uniformly maintained and compatibility between research efforts ensured. While the physical database would exist in a central location, access will not be physically limited. Advanced database interfaces are created to operate over the internet utilizing a Web-based relational database, allowing project members to access their data from virtually anywhere. 
These interfaces provide users…

  4. Monitoring of services with non-relational databases and map-reduce framework

    International Nuclear Information System (INIS)

    Babik, M; Souto, F

    2012-01-01

    Service Availability Monitoring (SAM) is a well-established monitoring framework that performs regular measurements of the core site services and reports the corresponding availability and reliability of the Worldwide LHC Computing Grid (WLCG) infrastructure. One of the existing extensions of SAM is Site Wide Area Testing (SWAT), which gathers monitoring information from the worker nodes via instrumented jobs. This generates quite a lot of monitoring data to process, as there are several data points for every job and several million jobs are executed every day. The recent uptake of non-relational databases opens a new paradigm in the large-scale storage and distributed processing of systems with heavy read-write workloads. For SAM this brings new possibilities to improve its model, from performing aggregation of measurements to storing raw data and subsequent re-processing. Both SAM and SWAT are currently tuned to run at top performance, reaching some of the limits in storage and processing power of their existing Oracle relational database. We investigated the usability and performance of non-relational storage together with its distributed data processing capabilities. For this, several popular systems have been compared. In this contribution we describe our investigation of the existing non-relational databases suited for monitoring systems covering Cassandra, HBase and MongoDB. Further, we present our experiences in data modeling and prototyping map-reduce algorithms focusing on the extension of the already existing availability and reliability computations. Finally, possible future directions in this area are discussed, analyzing the current deficiencies of the existing Grid monitoring systems and proposing solutions to leverage the benefits of the non-relational databases to get more scalable and flexible frameworks.
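
    The aggregation of raw measurements into availability figures maps naturally onto the map-reduce model the abstract describes. A pure-Python sketch, with invented service names and statuses:

```python
from collections import defaultdict

# Raw monitoring measurements: (service, test status). Invented data.
measurements = [
    ("site-a/srm", "OK"), ("site-a/srm", "OK"), ("site-a/srm", "CRITICAL"),
    ("site-b/ce", "OK"),
]

def mapper(record):
    """Map each measurement to (service, (ok_count, total_count))."""
    service, status = record
    return service, (1 if status == "OK" else 0, 1)

def reducer(pairs):
    """Reduce by service key: availability = ok / total."""
    acc = defaultdict(lambda: [0, 0])
    for service, (ok, total) in pairs:
        acc[service][0] += ok
        acc[service][1] += total
    return {s: ok / total for s, (ok, total) in acc.items()}

availability = reducer(map(mapper, measurements))
print(availability)  # site-a/srm ~ 0.67, site-b/ce = 1.0
```

    In a real deployment the map and reduce phases would run inside the distributed processing framework of the chosen store (e.g., Hadoop over HBase), but the shape of the computation is the same.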

  5. Clinical validity of a population database definition of remission in patients with major depression

    Directory of Open Access Journals (Sweden)

    Salvatella-Pasant Jordi

    2010-02-01

    Background: Major depression (MD) is one of the most frequent diagnoses in Primary Care. It is a disabling illness that increases the use of health resources. Aim: to describe the concordance between remission according to clinical assessment and remission obtained from the computerized prescription databases of patients with MD in a Spanish population. Methods: multicenter cross-sectional design. The population under study comprised people from six primary care facilities who had an MD episode between January 2003 and March 2007. A specialist in psychiatry assessed a random sample of patient histories and determined whether a given patient was in remission according to clinical criteria (ICPC-2). Regarding the databases, patients were considered in remission when they did not need further prescriptions of antidepressants (AD) for at least 6 months after completing treatment for a new episode. Validity indicators (sensitivity [S] and specificity [Sp]) and clinical utility indicators (positive and negative probability ratios [PPR] and [NPR]) were calculated. The concordance index was established using Cohen's kappa coefficient. The significance level was p… Results: 133 patient histories were reviewed. The kappa coefficient was 82.8% (95% CI: 73.1-92.6), with a PPR of 9.8% and an NPR of 0.1%. Allocation discrepancies between the two criteria were found in 11 patients. S was 92.5% (95% CI: 88.0-96.9%) and Sp was 90.6% (95% CI: 85.6-95.6%), p… Conclusions: results show an acceptable level of concordance between remission obtained from the computerized databases and clinical criteria. The major discrepancies were found in diagnostic accuracy.
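
    The validity indicators this study reports (sensitivity, specificity, Cohen's kappa) all follow from a 2x2 agreement table. A small sketch with invented counts, not the study's data:

```python
# 2x2 agreement table: database result vs clinical criterion.
# Counts are invented for illustration.
tp, fn, fp, tn = 45, 5, 4, 46
n = tp + fn + fp + tn

sensitivity = tp / (tp + fn)          # true positives / clinically positive
specificity = tn / (tn + fp)          # true negatives / clinically negative

po = (tp + tn) / n                    # observed agreement
# Expected chance agreement from the row and column marginals:
pe = ((tp + fn) * (tp + fp) + (fp + tn) * (fn + tn)) / n ** 2
kappa = (po - pe) / (1 - pe)          # Cohen's kappa

print(round(sensitivity, 2), round(specificity, 2), round(kappa, 2))  # 0.9 0.92 0.82
```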

  6. HOLLYWOOD: a comparative relational database of alternative splicing.

    Science.gov (United States)

    Holste, Dirk; Huo, George; Tung, Vivian; Burge, Christopher B

    2006-01-01

    RNA splicing is an essential step in gene expression, and is often variable, giving rise to multiple alternatively spliced mRNA and protein isoforms from a single gene locus. The design of effective databases to support experimental and computational investigations of alternative splicing (AS) is a significant challenge. In an effort to integrate accurate exon and splice site annotation with current knowledge about splicing regulatory elements and predicted AS events, and to link information about the splicing of orthologous genes in different species, we have developed the Hollywood system. This database was built upon genomic annotation of splicing patterns of known genes derived from spliced alignment of complementary DNAs (cDNAs) and expressed sequence tags, and links features such as splice site sequence and strength, exonic splicing enhancers and silencers, conserved and non-conserved patterns of splicing, and cDNA library information for inferred alternative exons. Hollywood was implemented as a relational database and currently contains comprehensive information for human and mouse. It is accompanied by a web query tool that allows searches for sets of exons with specific splicing characteristics or splicing regulatory element composition, or gives a graphical or sequence-level summary of splicing patterns for a specific gene. A streamlined graphical representation of gene splicing patterns is provided, and these patterns can alternatively be layered onto existing information in the UCSC Genome Browser. The database is accessible at http://hollywood.mit.edu.

  7. BADERI: an online database to coordinate handsearching activities of controlled clinical trials for their potential inclusion in systematic reviews.

    Science.gov (United States)

    Pardo-Hernandez, Hector; Urrútia, Gerard; Barajas-Nava, Leticia A; Buitrago-Garcia, Diana; Garzón, Julieth Vanessa; Martínez-Zapata, María José; Bonfill, Xavier

    2017-06-13

Systematic reviews provide the best evidence on the effect of health care interventions. They rely on comprehensive access to the available scientific literature. Electronic search strategies alone may not suffice, requiring the implementation of a handsearching approach. We have developed a database to provide an Internet-based platform from which handsearching activities can be coordinated, including a procedure to streamline the submission of these references into CENTRAL, the Cochrane Collaboration Central Register of Controlled Trials. We developed a database and a descriptive analysis. Through brainstorming and discussion among stakeholders involved in handsearching projects, we designed a database that met the needs identified as essential to the viability of handsearching activities. Three handsearching teams pilot-tested the proposed database. Once the final version of the database was approved, we proceeded to train the staff involved in handsearching. The proposed database is called BADERI (Database of Iberoamerican Clinical Trials and Journals, by its initials in Spanish). BADERI was officially launched in October 2015, and it can be accessed at www.baderi.com/login.php free of cost. BADERI has an administration subsection, from which user roles are managed; a references subsection, where information associated with identified controlled clinical trials (CCTs) can be entered; a reports subsection, from which reports can be generated to track and analyse the results of handsearching activities; and a built-in free-text search engine. BADERI allows all references to be exported as ProCite files that can be directly uploaded into CENTRAL. To date, 6284 references to CCTs have been uploaded to BADERI and sent to CENTRAL. The identified CCTs were published in a total of 420 journals related to 46 medical specialties. The year of publication ranged between 1957 and 2016. BADERI allows the efficient management of handsearching

  8. Clinical and technical characteristics of intraoperative radiotherapy. Analysis of the ISIORT-Europe database

    International Nuclear Information System (INIS)

    Krengli, M.; Sedlmayer, F.

    2013-01-01

    Background: A joint analysis of clinical data from centres within the European section of the International Society of Intraoperative Radiation Therapy (ISIORT-Europe) was undertaken in order to define the range of intraoperative radiotherapy (IORT) techniques and indications encompassed by its member institutions. Materials and methods: In 2007, the ISIORT-Europe centres were invited to record demographic, clinical and technical data relating to their IORT procedures in a joint online database. Retrospective data entry was possible. Results: The survey encompassed 21 centres and data from 3754 IORT procedures performed between 1992 and 2011. The average annual number of patients treated per institution was 42, with three centres treating more than 100 patients per year. The most frequent tumour was breast cancer with 2395 cases (63.8 %), followed by rectal cancer (598 cases, 15.9 %), sarcoma (221 cases, 5.9 %), prostate cancer (108 cases, 2.9 %) and pancreatic cancer (80 cases, 2.1 %). Clinical details and IORT technical data from these five tumour types are reported. Conclusion: This is the first report on a large cohort of patients treated with IORT in Europe. It gives a picture of patient selection methods and treatment modalities, with emphasis on the main tumour types that are typically treated by this technique and may benefit from it. (orig.)

  9. Clinically relevant potential drug-drug interactions among outpatients: A nationwide database study.

    Science.gov (United States)

    Jazbar, Janja; Locatelli, Igor; Horvat, Nejc; Kos, Mitja

    2018-06-01

Adverse drug events due to drug-drug interactions (DDIs) represent a considerable public health burden, also in Slovenia. A better understanding of the most frequently occurring potential DDIs may enable safer pharmacotherapy and minimize drug-related problems. The aim of this study was to evaluate the prevalence and predictors of potential DDIs among outpatients in Slovenia. An analysis of potential DDIs was performed using health claims data on prescription drugs from a nationwide database. The Lexi-Interact Module was used as the reference source of interactions. The influence of patient-specific predictors on the risk of potential clinically relevant DDIs was evaluated using a logistic regression model. The study population included 1,179,803 outpatients who received 15,811,979 prescriptions. The total number of potential DDI cases identified was 3,974,994, of which 15.6% were potentially clinically relevant. Altogether, 9.3% (N = 191,213) of the total population in Slovenia is exposed to clinically relevant potential DDIs, and the proportion is higher among women and the elderly. After adjustment for cofactors, a higher number of medications and older age are associated with higher odds of clinically relevant potential DDIs. The burden of DDIs is highest for drug combinations that increase the risk of bleeding, enhance CNS depression or anticholinergic effects, or cause cardiovascular complications. The current study revealed that 1 in 10 individuals in the total Slovenian population is exposed to clinically relevant potential DDIs yearly. Taking into account the literature-based conservative estimate that approximately 1% of potential DDIs result in negative health outcomes, roughly 1800 individuals in Slovenia experience an adverse health outcome each year as a result of clinically relevant potential interactions alone. Copyright © 2017 Elsevier Inc. All rights reserved.

  10. Project for a relational database for a radiotherapy service

    International Nuclear Information System (INIS)

    Esposito, R. D.; Planes Meseguer, D.; Dorado Rodriguez, M. P.

    2011-01-01

The aim of this work is to make useful data easy to extract, in order to improve our working protocols and to evaluate treatment outcomes quantitatively. To this end, we are implementing a relational database (DB) that makes practical use of the stored information possible.

  11. Search extension transforms Wiki into a relational system: a case for flavonoid metabolite database.

    Science.gov (United States)

    Arita, Masanori; Suwa, Kazuhiro

    2008-09-17

In computer science, database systems are based on the relational model founded by Edgar Codd in 1970. In the area of biology, on the other hand, the word 'database' often refers to loosely formatted, very large text files. Although such bio-databases may describe conflicts or ambiguities (e.g. reports that a protein pair does and does not interact, or unknown parameters) in a positive sense, the flexibility of the data format sacrifices a systematic query mechanism equivalent to the widely used SQL. To overcome this disadvantage, we propose embeddable string-search commands on a Wiki-based system and designed a half-formatted database. As proof of principle, a database of flavonoids with 6902 molecular structures from over 1687 plant species was implemented on MediaWiki, the background system of Wikipedia. Registered users can describe any information in an arbitrary format. The structured part is subject to text-string searches that realize relational operations. The system was written in PHP as an extension of MediaWiki. All modifications are open-source and publicly available. This scheme benefits from both the free-formatted Wiki style and the concise, structured relational-database style. MediaWiki supports multi-user environments for document management, and the cost of database maintenance is alleviated.
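
    The half-formatted idea above — free text alongside a string-searchable structured part — can be sketched in a few lines of Python. The template syntax and the field names (compound, class, species) are invented for illustration; the actual system is a MediaWiki extension written in PHP.

```python
import re

# Hypothetical half-formatted wiki records: free text plus a structured
# template part ({{...}}) that is amenable to string-based relational selection.
pages = [
    "Quercetin is a flavonol found in many plants.\n{{compound|class=flavonol|species=Allium cepa}}",
    "Naringenin notes, free-form text here.\n{{compound|class=flavanone|species=Citrus paradisi}}",
    "Kaempferol, widely distributed.\n{{compound|class=flavonol|species=Camellia sinensis}}",
]

def select(pages, **criteria):
    """Relational-style selection: keep pages whose template part matches
    every field=value criterion via text-string search."""
    hits = []
    for page in pages:
        tmpl = re.search(r"\{\{compound\|([^}]*)\}\}", page)
        if not tmpl:
            continue  # page has no structured part; skip it
        fields = dict(f.split("=", 1) for f in tmpl.group(1).split("|"))
        if all(fields.get(k) == v for k, v in criteria.items()):
            hits.append(fields)
    return hits

flavonols = select(pages, **{"class": "flavonol"})
print([h["species"] for h in flavonols])  # species of all flavonol records
```

    The free-text portion of each page is ignored by the query, which is exactly the trade-off the abstract describes: arbitrary annotation coexists with a concise, queryable core.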

  12. Selecting a Relational Database Management System for Library Automation Systems.

    Science.gov (United States)

    Shekhel, Alex; O'Brien, Mike

    1989-01-01

    Describes the evaluation of four relational database management systems (RDBMSs) (Informix Turbo, Oracle 6.0 TPS, Unify 2000 and Relational Technology's Ingres 5.0) to determine which is best suited for library automation. The evaluation criteria used to develop a benchmark specifically designed to test RDBMSs for libraries are discussed. (CLB)

  13. Organizing, exploring, and analyzing antibody sequence data: the case for relational-database managers.

    Science.gov (United States)

    Owens, John

    2009-01-01

Technological advances in the acquisition of DNA and protein sequence information and the resulting onrush of data can quickly overwhelm the scientist unprepared for the volume of information that must be evaluated and carefully dissected to discover its significance. Few laboratories have the luxury of dedicated personnel to organize, analyze, or consistently record a mix of arriving sequence data. A methodology based on a modern relational-database manager is presented that is both a natural storage vessel for antibody sequence information and a conduit for organizing and exploring sequence data and accompanying annotation text. The expertise necessary to implement such a plan is no greater than that required to use electronic word processors or spreadsheet applications. Antibody sequence projects maintained as independent databases are selectively unified by the relational-database manager into larger database families that contribute to local analyses, reports, and interactive HTML pages, or are exported to facilities dedicated to sophisticated sequence analysis techniques. Database files are transposable among current versions of Microsoft, Macintosh, and UNIX operating systems.

  14. NETMARK: A Schema-less Extension for Relational Databases for Managing Semi-structured Data Dynamically

    Science.gov (United States)

    Maluf, David A.; Tran, Peter B.

    2003-01-01

An object-relational database management system is an integrated, hybrid, cooperative approach that combines the best practices of both the relational model, utilizing SQL queries, and the object-oriented, semantic paradigm for supporting complex data creation. In this paper, a highly scalable, information-on-demand database framework, called NETMARK, is introduced. NETMARK takes advantage of the Oracle 8i object-relational database, using physical-address data types for very efficient keyword search of records spanning both context and content. NETMARK was originally developed in early 2000 as a research and development prototype to manage the vast amounts of unstructured and semi-structured documents existing within NASA enterprises. Today, NETMARK is a flexible, high-throughput open database framework for managing, storing, and searching unstructured or semi-structured arbitrary hierarchical models, such as XML and HTML.

  15. Domain fusion analysis by applying relational algebra to protein sequence and domain databases.

    Science.gov (United States)

    Truong, Kevin; Ikura, Mitsuhiko

    2003-05-06

Domain fusion analysis is a useful method to predict functionally linked proteins that may be involved in direct protein-protein interactions or in the same metabolic or signaling pathway. As separate domain databases like BLOCKS, PROSITE, Pfam, SMART, PRINTS-S, ProDom, TIGRFAMs, and amalgamated domain databases like InterPro continue to grow in size and quality, a computational method to perform domain fusion analysis that leverages these efforts will become increasingly powerful. This paper proposes a computational method employing relational algebra to find domain fusions in protein sequence databases. The feasibility of this method was illustrated on the SWISS-PROT+TrEMBL sequence database using domain predictions from the Pfam HMM (hidden Markov model) database. We identified 235 and 189 putative functionally linked protein partners in H. sapiens and S. cerevisiae, respectively. From the scientific literature, we were able to confirm many of these functional linkages, while the remainder offer testable experimental hypotheses. Results can be viewed at http://calcium.uhnres.utoronto.ca/pi. As the analysis can be computed quickly on any relational database that supports standard SQL (structured query language), it can be dynamically updated along with the sequence and domain databases, thereby improving the quality of predictions over time.
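
    The core of the relational-algebra formulation can be sketched as a SQL self-join over a domain-assignment table: two proteins are putatively linked if some third protein carries both of their domains on a single chain. The toy schema and all protein/domain names below are invented; the paper's actual tables are not reproduced.

```python
import sqlite3

# Toy domain-assignment relation (protein, domain) for illustration.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE assign (protein TEXT, domain TEXT)")
con.executemany("INSERT INTO assign VALUES (?, ?)", [
    ("fusionX", "DomA"), ("fusionX", "DomB"),   # composite (fused) protein
    ("protA",   "DomA"),                         # single-domain partners
    ("protB",   "DomB"),
    ("protC",   "DomC"),                         # unrelated protein
])

# Domain fusion query: report unordered pairs (p1, p2) whose domains
# co-occur in one other protein's domain architecture.
rows = con.execute("""
    SELECT DISTINCT p1.protein, p2.protein
    FROM assign p1, assign p2, assign f1, assign f2
    WHERE p1.domain = f1.domain AND p2.domain = f2.domain
      AND f1.protein = f2.protein             -- same composite protein
      AND p1.protein < p2.protein             -- unordered pair, no self-pairs
      AND p1.protein <> f1.protein AND p2.protein <> f2.protein
""").fetchall()
print(rows)  # [('protA', 'protB')]
```

    Because the whole analysis is one standard SQL query, it can be rerun whenever the underlying sequence and domain tables are updated, which is the dynamic-update property the abstract emphasizes.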

  16. UbSRD: The Ubiquitin Structural Relational Database.

    Science.gov (United States)

    Harrison, Joseph S; Jacobs, Tim M; Houlihan, Kevin; Van Doorslaer, Koenraad; Kuhlman, Brian

    2016-02-22

    The structurally defined ubiquitin-like homology fold (UBL) can engage in several unique protein-protein interactions and many of these complexes have been characterized with high-resolution techniques. Using Rosetta's structural classification tools, we have created the Ubiquitin Structural Relational Database (UbSRD), an SQL database of features for all 509 UBL-containing structures in the PDB, allowing users to browse these structures by protein-protein interaction and providing a platform for quantitative analysis of structural features. We used UbSRD to define the recognition features of ubiquitin (UBQ) and SUMO observed in the PDB and the orientation of the UBQ tail while interacting with certain types of proteins. While some of the interaction surfaces on UBQ and SUMO overlap, each molecule has distinct features that aid in molecular discrimination. Additionally, we find that the UBQ tail is malleable and can adopt a variety of conformations upon binding. UbSRD is accessible as an online resource at rosettadesign.med.unc.edu/ubsrd. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. Relative aggregation operator in database fuzzy querying

    Directory of Open Access Journals (Sweden)

    Luminita DUMITRIU

    2005-12-01

Fuzzy selection criteria for querying relational databases include vague terms; these usually refer to linguistic values from the attributes' linguistic domains, defined as fuzzy sets. Generally, when a vague query is processed, the definitions of the vague terms must already exist in a knowledge base. But there are also cases when vague terms must be defined dynamically, when a particular operation is used to aggregate simple criteria into a complex selection. The paper presents a new aggregation operator and the corresponding algorithm to evaluate the fuzzy query.
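
    A minimal sketch of fuzzy querying of this kind: vague linguistic terms become membership functions over attribute domains, and simple criteria are combined into a single selection degree per tuple. The membership functions and the min t-norm used here are standard textbook choices, not the paper's new relative aggregation operator.

```python
def young(age):
    """Trapezoidal membership for the linguistic value 'young' (illustrative)."""
    if age <= 25: return 1.0
    if age >= 40: return 0.0
    return (40 - age) / 15

def well_paid(salary):
    """Membership for 'well-paid' (illustrative)."""
    if salary >= 60000: return 1.0
    if salary <= 30000: return 0.0
    return (salary - 30000) / 30000

employees = [("Ana", 24, 65000), ("Bo", 35, 45000), ("Cal", 50, 80000)]

# Fuzzy selection: degree to which each tuple satisfies 'young AND well-paid',
# aggregated with the min t-norm and alpha-cut at 0.
result = [(name, min(young(a), well_paid(s))) for name, a, s in employees]
result = [r for r in result if r[1] > 0]
print(result)
```

    A dynamically defined vague term would replace one of the membership functions at query time; the aggregation step itself is unchanged.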

  18. Benefits of a relational database for computerized management

    International Nuclear Information System (INIS)

    Shepherd, W.W.

    1991-01-01

    This paper reports on a computerized relational database which is the basis for a hazardous materials information management system which is comprehensive, effective, flexible and efficient. The system includes product information for Material Safety Data Sheets (MSDSs), labels, shipping, and the environment and is used in Dowell Schlumberger (DS) operations worldwide for a number of programs including planning, training, emergency response and regulatory compliance

  19. Executing Complexity-Increasing Queries in Relational (MySQL) and NoSQL (MongoDB and EXist) Size-Growing ISO/EN 13606 Standardized EHR Databases.

    Science.gov (United States)

    Sánchez-de-Madariaga, Ricardo; Muñoz, Adolfo; Castro, Antonio L; Moreno, Oscar; Pascual, Mario

    2018-03-19

This research presents a protocol to assess the computational complexity of querying relational and non-relational (NoSQL (not only Structured Query Language)) standardized electronic health record (EHR) medical information database systems (DBMS). It uses a set of three doubling-sized databases, i.e. databases storing 5000, 10,000 and 20,000 realistic standardized EHR extracts, in three different database management systems (DBMS): relational MySQL object-relational mapping (ORM), document-based NoSQL MongoDB, and native extensible markup language (XML) NoSQL eXist. The average response times to six complexity-increasing queries were computed, and the results showed linear behavior in the NoSQL cases. In the NoSQL field, MongoDB presents a much flatter linear slope than eXist. NoSQL systems may also be more appropriate for maintaining standardized medical information systems due to the special nature of the updating policies of medical information, which should not affect the consistency and efficiency of the data stored in NoSQL databases. One limitation of this protocol is the lack of direct results for improved relational systems such as archetype relational mapping (ARM) with the same data. However, the interpolation of doubling-size database results to those presented in the literature and other published results suggests that NoSQL systems might be more appropriate in many specific scenarios and problems to be solved. For example, NoSQL may be appropriate for document-based tasks such as EHR extracts used in clinical practice, or editing and visualization, or situations where the aim is not only to query medical information, but also to restore the EHR in exactly its original form.
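
    The protocol's core measurement, the average response time of a fixed query against doubling-size databases, can be sketched as follows. Here sqlite3 stands in for the MySQL, MongoDB and eXist back ends used in the study, and the schema is invented for illustration.

```python
import sqlite3, time

def build(n):
    """Create an in-memory database with n toy 'EHR extract' rows."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE extract (id INTEGER PRIMARY KEY, patient TEXT, code TEXT)")
    con.executemany("INSERT INTO extract VALUES (?, ?, ?)",
                    [(i, f"p{i % 500}", f"icd{i % 40}") for i in range(n)])
    return con

def mean_time(con, runs=5):
    """Average response time of one query over several runs."""
    t0 = time.perf_counter()
    for _ in range(runs):
        con.execute("SELECT patient, COUNT(*) FROM extract "
                    "WHERE code = 'icd7' GROUP BY patient").fetchall()
    return (time.perf_counter() - t0) / runs

for size in (5000, 10000, 20000):      # the study's doubling sizes
    con = build(size)
    print(size, round(mean_time(con), 6))
```

    Plotting mean time against database size (and repeating for each of the six queries) is what lets the study characterize each back end's slope as linear or not.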

  1. Prolog as a Teaching Tool for Relational Database Interrogation.

    Science.gov (United States)

    Collier, P. A.; Samson, W. B.

    1982-01-01

The use of the Prolog programming language is promoted as the language of choice for anyone teaching a course in relational databases. A short introduction to Prolog is followed by a series of example queries. Several references are noted for anyone wishing to gain a deeper understanding. (MP)

  2. MAT-DB - A database for nuclear energy related materials data

    International Nuclear Information System (INIS)

    Over, H.H.

    2009-01-01

The web-enabled materials database (Mat-DB) of JRC-IE has a long-term history of storing materials test data resulting from European and international research projects. The database structure and the user guidance have been continuously updated, improved and optimized. The database is implemented in the secure ODIN portal of JRC-IE: https://odin.jrc.ec.europa.eu. This architecture guarantees fast access to confidential and public data and documentation, which are stored in an inter-related document management database (DoMa). It is a part of JRC's nuclear knowledge management. Mat-DB hosts the whole pool of IAEA surveillance data of reactor pressure vessel materials from different nuclear power plants of the member states. Mat-DB also contains thousands of materials data related to European GEN IV reactor systems R and D, which are an important basis for evaluating and extrapolating design data for candidate materials and for setting up design rules covering high-temperature exposure, irradiation and corrosion. These data and rules would also apply to fusion-related components. Mat-DB covers thermo-mechanical and thermo-physical property data of engineering alloys at low, elevated and high temperatures for base materials and joints, including irradiated materials for nuclear fission and fusion applications, thermal-barrier-coated materials for gas turbines and properties of corroded materials. The corrosion part refers to weight gain/loss data of high-temperature-exposed engineering alloys and ceramic materials. For each test type the database structure reflects international test standards and recommendations. Mat-DB features an extensive library of evaluation programs for web-enabled assessment of uniaxial creep, fatigue, crack growth and high-temperature corrosion properties. Evaluations can be performed after data retrieval or, independently of Mat-DB, by transferring other materials data in a given format to the programs. The fast evaluation processes help the user to

  3. Use of Software Tools in Teaching Relational Database Design.

    Science.gov (United States)

    McIntyre, D. R.; And Others

    1995-01-01

    Discusses the use of state-of-the-art software tools in teaching a graduate, advanced, relational database design course. Results indicated a positive student response to the prototype of expert systems software and a willingness to utilize this new technology both in their studies and in future work applications. (JKP)

  4. The Use of a Relational Database in Qualitative Research on Educational Computing.

    Science.gov (United States)

    Winer, Laura R.; Carriere, Mario

    1990-01-01

    Discusses the use of a relational database as a data management and analysis tool for nonexperimental qualitative research, and describes the use of the Reflex Plus database in the Vitrine 2001 project in Quebec to study computer-based learning environments. Information systems are also discussed, and the use of a conceptual model is explained.…

  5. Risk estimates for hip fracture from clinical and densitometric variables and impact of database selection in Lebanese subjects.

    Science.gov (United States)

    Badra, Mohammad; Mehio-Sibai, Abla; Zeki Al-Hazzouri, Adina; Abou Naja, Hala; Baliki, Ghassan; Salamoun, Mariana; Afeiche, Nadim; Baddoura, Omar; Bulos, Suhayl; Haidar, Rachid; Lakkis, Suhayl; Musharrafieh, Ramzi; Nsouli, Afif; Taha, Assaad; Tayim, Ahmad; El-Hajj Fuleihan, Ghada

    2009-01-01

Bone mineral density (BMD) and fracture incidence vary greatly worldwide. The data, if any, on clinical and densitometric characteristics of patients with hip fractures from the Middle East are scarce. The objective of the study was to define risk estimates from clinical and densitometric variables and the impact of database selection on such estimates. Clinical and densitometric information was obtained in 60 hip fracture patients and 90 controls. Hip fracture subjects were 74 yr (9.4) old, and were significantly taller, lighter, and more likely to be taking anxiolytics and sleeping pills than controls. National Health and Nutrition Examination Survey (NHANES) database selection resulted in a higher sensitivity and almost equal specificity in identifying patients with a hip fracture compared with the Lebanese database. The odds ratio (OR) and its confidence interval (CI) for hip fracture per standard deviation (SD) decrease in total hip BMD was 2.1 (1.45-3.05) with the NHANES database, and 2.11 (1.36-2.37) when adjusted for age and body mass index (BMI). Risk estimates were higher in male compared with female subjects. In Lebanese subjects, BMD- and BMI-derived hip fracture risk estimates are comparable to western standards. The study validates the universal use of the NHANES database, and the applicability of BMD- and BMI-derived fracture risk estimates in the World Health Organization (WHO) global fracture risk model, to the Lebanese population.

  6. New tools and methods for direct programmatic access to the dbSNP relational database.

    Science.gov (United States)

    Saccone, Scott F; Quan, Jiaxi; Mehta, Gaurang; Bolze, Raphael; Thomas, Prasanth; Deelman, Ewa; Tischfield, Jay A; Rice, John P

    2011-01-01

    Genome-wide association studies often incorporate information from public biological databases in order to provide a biological reference for interpreting the results. The dbSNP database is an extensive source of information on single nucleotide polymorphisms (SNPs) for many different organisms, including humans. We have developed free software that will download and install a local MySQL implementation of the dbSNP relational database for a specified organism. We have also designed a system for classifying dbSNP tables in terms of common tasks we wish to accomplish using the database. For each task we have designed a small set of custom tables that facilitate task-related queries and provide entity-relationship diagrams for each task composed from the relevant dbSNP tables. In order to expose these concepts and methods to a wider audience we have developed web tools for querying the database and browsing documentation on the tables and columns to clarify the relevant relational structure. All web tools and software are freely available to the public at http://cgsmd.isi.edu/dbsnpq. Resources such as these for programmatically querying biological databases are essential for viably integrating biological information into genetic association experiments on a genome-wide scale.

  7. The Database of the Catalogue of Clinical Practice Guidelines Published via Internet in the Czech Language -The Current State

    Czech Academy of Sciences Publication Activity Database

    Zvolský, Miroslav

    2010-01-01

Roč. 6, č. 1 (2010), s. 83-89 ISSN 1801-5603 R&D Projects: GA MŠk(CZ) 1M06014 Institutional research plan: CEZ:AV0Z10300504 Keywords: internet * World Wide Web * database * clinical practice guideline * clinical practice * evidence-based medicine * formalisation * GLIF (Guideline Interchange Format) * doctor of medicine * decision support systems Subject RIV: IN - Informatics, Computer Science http://www.ejbi.org/en/ejbi/article/63-en-the-database-of-the-catalogue-of-clinical-practice-guidelines-published-via-internet-in-the-czech-language-the-current-state.html

  8. Databases for rRNA gene profiling of microbial communities

    Energy Technology Data Exchange (ETDEWEB)

    Ashby, Matthew

    2013-07-02

    The present invention relates to methods for performing surveys of the genetic diversity of a population. The invention also relates to methods for performing genetic analyses of a population. The invention further relates to methods for the creation of databases comprising the survey information and the databases created by these methods. The invention also relates to methods for analyzing the information to correlate the presence of nucleic acid markers with desired parameters in a sample. These methods have application in the fields of geochemical exploration, agriculture, bioremediation, environmental analysis, clinical microbiology, forensic science and medicine.

  9. YAdumper: extracting and translating large information volumes from relational databases to structured flat files.

    Science.gov (United States)

    Fernández, José M; Valencia, Alfonso

    2004-10-12

Downloading the information stored in relational databases into XML and other flat formats is a common task in bioinformatics. This periodic dumping of information requires considerable CPU time, disk and memory resources. YAdumper has been developed as a purpose-specific tool to deal with the integral structured-information download of relational databases. YAdumper is a Java application that organizes database extraction following an XML template based on an external Document Type Declaration. Compared with other non-native alternatives, YAdumper substantially reduces memory requirements and considerably improves writing performance.
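
    The kind of relational-to-XML dump that YAdumper automates (driven there by an XML template and an external DTD) can be illustrated with a toy example. The table and element names below are invented and do not reflect YAdumper's actual template language.

```python
import sqlite3
import xml.etree.ElementTree as ET

# A miniature relational source table (names are illustrative).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE protein (acc TEXT, name TEXT)")
con.executemany("INSERT INTO protein VALUES (?, ?)",
                [("P12345", "Kinase A"), ("Q67890", "Phosphatase B")])

# Serialize each row as a structured XML element.
root = ET.Element("proteins")
for acc, name in con.execute("SELECT acc, name FROM protein ORDER BY acc"):
    entry = ET.SubElement(root, "entry", accession=acc)
    ET.SubElement(entry, "name").text = name

xml_dump = ET.tostring(root, encoding="unicode")
print(xml_dump)
```

    Streaming rows directly into the serializer, rather than materializing the whole result set first, is the kind of design choice that keeps memory requirements low on large dumps.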

  10. Renal Gene Expression Database (RGED): a relational database of gene expression profiles in kidney disease.

    Science.gov (United States)

    Zhang, Qingzhou; Yang, Bo; Chen, Xujiao; Xu, Jing; Mei, Changlin; Mao, Zhiguo

    2014-01-01

We present a bioinformatics database named Renal Gene Expression Database (RGED), which contains comprehensive gene expression data sets from renal disease research. The web-based interface of RGED allows users to query the gene expression profiles in various kidney-related samples, including renal cell lines, human kidney tissues and murine model kidneys. Researchers can explore specific gene profiles and the relationships between genes of interest, and identify biomarkers or even drug targets in kidney diseases. The aim of this work is to provide a user-friendly utility for the renal disease research community to query expression profiles of genes of their own interest without the requirement of advanced computational skills. The website is implemented in PHP, R, MySQL and Nginx and is freely available at http://rged.wall-eva.net. © The Author(s) 2014. Published by Oxford University Press.

  11. The representation of manipulable solid objects in a relational database

    Science.gov (United States)

    Bahler, D.

    1984-01-01

This project is concerned with the interface between database management and solid geometric modeling. The desirability of integrating computer-aided design, manufacture, testing, and management into a coherent system is by now well recognized. One proposed configuration for such a system uses a relational database management system as the central focus; the various other functions are linked through their use of a common data representation in the data manager, rather than communicating pairwise. The goal was to integrate a geometric modeling capability with a generic relational data management system in such a way that well-formed questions can be posed and answered about the performance of the system as a whole. One necessary feature of any such system is simplification for purposes of analysis; this, together with system performance considerations, meant that a paramount goal was unity and simplicity of the data structures used.

  12. Data Model and Relational Database Design for Highway Runoff Water-Quality Metadata

    Science.gov (United States)

    Granato, Gregory E.; Tessler, Steven

    2001-01-01

    A National highway and urban runoff waterquality metadatabase was developed by the U.S. Geological Survey in cooperation with the Federal Highway Administration as part of the National Highway Runoff Water-Quality Data and Methodology Synthesis (NDAMS). The database was designed to catalog available literature and to document results of the synthesis in a format that would facilitate current and future research on highway and urban runoff. This report documents the design and implementation of the NDAMS relational database, which was designed to provide a catalog of available information and the results of an assessment of the available data. All the citations and the metadata collected during the review process are presented in a stratified metadatabase that contains citations for relevant publications, abstracts (or previa), and reportreview metadata for a sample of selected reports that document results of runoff quality investigations. The database is referred to as a metadatabase because it contains information about available data sets rather than a record of the original data. The database contains the metadata needed to evaluate and characterize how valid, current, complete, comparable, and technically defensible published and available information may be when evaluated for application to the different dataquality objectives as defined by decision makers. This database is a relational database, in that all information is ultimately linked to a given citation in the catalog of available reports. The main database file contains 86 tables consisting of 29 data tables, 11 association tables, and 46 domain tables. The data tables all link to a particular citation, and each data table is focused on one aspect of the information collected in the literature search and the evaluation of available information. 
This database is implemented in the Microsoft (MS) Access database software because it is widely used within and outside of government and is familiar to many
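
    The catalog-centric layering described above (data tables and domain tables all linking back to a citation) can be sketched in SQL. The table and column names below are illustrative assumptions, not the actual NDAMS schema, and SQLite stands in for MS Access:

```python
import sqlite3

# Illustrative sketch of the NDAMS layering (assumed names, not the real
# schema): every data table links back to a citation in the catalog, and
# domain tables hold controlled vocabularies.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE citation (             -- catalog of available reports
    citation_id INTEGER PRIMARY KEY,
    title       TEXT NOT NULL,
    pub_year    INTEGER
);
CREATE TABLE matrix_domain (        -- domain table: controlled vocabulary
    matrix_code TEXT PRIMARY KEY,
    description TEXT
);
CREATE TABLE report_review (        -- data table: one review aspect
    review_id   INTEGER PRIMARY KEY,
    citation_id INTEGER NOT NULL REFERENCES citation(citation_id),
    matrix_code TEXT REFERENCES matrix_domain(matrix_code)
);
""")
con.execute("INSERT INTO citation VALUES (1, 'Runoff study A', 1998)")
con.execute("INSERT INTO matrix_domain VALUES ('RW', 'roadway runoff water')")
con.execute("INSERT INTO report_review VALUES (10, 1, 'RW')")

# Because everything links to a citation, queries join back to the catalog.
row = con.execute("""
    SELECT c.title, m.description
    FROM report_review r
    JOIN citation c ON c.citation_id = r.citation_id
    JOIN matrix_domain m ON m.matrix_code = r.matrix_code
""").fetchone()
print(row)  # ('Runoff study A', 'roadway runoff water')
```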

  13. A Parallel Relational Database Management System Approach to Relevance Feedback in Information Retrieval.

    Science.gov (United States)

    Lundquist, Carol; Frieder, Ophir; Holmes, David O.; Grossman, David

    1999-01-01

    Describes a scalable, parallel, relational database-driven information retrieval engine. To support portability across a wide range of execution environments, all algorithms adhere to the SQL-92 standard. By incorporating relevance feedback algorithms, accuracy is enhanced over prior database-driven information retrieval efforts. Presents…
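
    The core idea, an inverted index held as an ordinary relation so that ranking becomes a join plus an aggregate, can be sketched as follows. The schema, the toy postings, and the plain term-frequency weighting are assumptions for illustration; the paper's actual weighting and relevance-feedback algorithms are more involved:

```python
import sqlite3

# Sketch of relational IR: the inverted index is a plain table and ranking
# is one SQL-92 join/aggregate. Names and weighting are illustrative.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE index_tbl (term TEXT, doc_id INTEGER, tf INTEGER)")
con.executemany("INSERT INTO index_tbl VALUES (?,?,?)", [
    ("database", 1, 3), ("relational", 1, 2),
    ("database", 2, 1), ("wavelet", 2, 4),
])
con.execute("CREATE TABLE query_tbl (term TEXT)")
con.executemany("INSERT INTO query_tbl VALUES (?)",
                [("database",), ("relational",)])

# Rank documents by summed term frequency over the query terms.
# Relevance feedback would amount to inserting terms drawn from the
# top-ranked documents back into query_tbl and re-running this query.
rows = con.execute("""
    SELECT i.doc_id, SUM(i.tf) AS score
    FROM index_tbl i JOIN query_tbl q ON i.term = q.term
    GROUP BY i.doc_id ORDER BY score DESC
""").fetchall()
print(rows)  # [(1, 5), (2, 1)]
```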

  14. Adding Hierarchical Objects to Relational Database: General-Purpose XML-Based Information Management

    Science.gov (United States)

    Lin, Shu-Chun; Knight, Chris; La, Tracy; Maluf, David; Bell, David; Tran, Khai Peter; Gawdiak, Yuri

    2006-01-01

    NETMARK is a flexible, high-throughput software system for managing, storing, and rapidly searching unstructured and semi-structured documents. NETMARK transforms such documents from their original highly complex, constantly changing, heterogeneous data formats into well-structured, common data formats using Hypertext Markup Language (HTML) and/or Extensible Markup Language (XML). The software implements an object-relational database system that combines the best practices of the relational model, utilizing Structured Query Language (SQL), with those of the object-oriented, semantic database model for creating complex data. In particular, NETMARK takes advantage of the Oracle 8i object-relational database model, using physical-address data types for very efficient keyword searches of records across both context and content. NETMARK also supports multiple international standards, such as WebDAV for drag-and-drop file management and SOAP for integrated information management using Web services. The document-organization and -searching capabilities afforded by NETMARK are likely to make this software attractive for use in disciplines as diverse as science, auditing, and law enforcement.
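
    The general technique of shredding semi-structured XML into a generic node relation, so that one SQL query can search "context" (element paths) and "content" (text) together, can be sketched as follows. The table and column names are illustrative, not NETMARK's actual schema, and SQLite stands in for Oracle 8i:

```python
import sqlite3
import xml.etree.ElementTree as ET

# Decompose XML into (document, path, text) rows -- a generic, schema-less
# relational representation of semi-structured content.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE node (doc TEXT, path TEXT, content TEXT)")

def shred(doc_name, xml_text):
    def walk(elem, prefix):
        path = f"{prefix}/{elem.tag}"
        con.execute("INSERT INTO node VALUES (?,?,?)",
                    (doc_name, path, (elem.text or "").strip()))
        for child in elem:
            walk(child, path)
    walk(ET.fromstring(xml_text), "")

shred("report1",
      "<report><title>Runoff study</title>"
      "<body>wavelet analysis</body></report>")

# One query searches content and returns the context (path) it occurs in.
rows = con.execute(
    "SELECT doc, path FROM node WHERE content LIKE '%wavelet%'").fetchall()
print(rows)  # [('report1', '/report/body')]
```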

  15. Learning Ontology from Object-Relational Database

    Directory of Open Access Journals (Sweden)

    Kaulins Andrejs

    2015-12-01

    This article describes a method for transforming an object-relational model into an ontology. The proposed method uses learning rules for complex data types such as object tables and collections (variable-size arrays as well as nested tables). Object types and their transformation into ontologies are insufficiently covered in the scientific literature, which motivated the authors to investigate the issue. The article first acquaints the reader with complex data types and object-oriented databases, then describes an algorithm for transforming complex data types into ontologies, and closes with examples of ontologies described in the OWL language.
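
    The simplest rule of this kind, an object table becoming an OWL class with its attributes becoming properties, can be sketched as a small Turtle generator. The rule and the `Employee`/`Address` names are simplified assumptions, not the authors' full method, which also covers collections and nested tables:

```python
# Toy transformation rule: object table -> owl:Class, scalar attributes ->
# owl:DatatypeProperty, references to other object types -> owl:ObjectProperty.
def table_to_owl(table, datatype_attrs, object_attrs):
    lines = [f":{table} a owl:Class ."]
    for attr in datatype_attrs:
        lines.append(f":{attr} a owl:DatatypeProperty ; rdfs:domain :{table} .")
    for attr, target in object_attrs.items():
        lines.append(f":{attr} a owl:ObjectProperty ; "
                     f"rdfs:domain :{table} ; rdfs:range :{target} .")
    return "\n".join(lines)

# Hypothetical object table Employee with a scalar attribute and a reference.
owl = table_to_owl("Employee", ["name"], {"hasAddress": "Address"})
print(owl)
```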

  16. Database Description - Yeast Interacting Proteins Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Database name: Yeast Interacting Proteins Database. DOI: 10.18908/lsdba.nbdc00742-000. Organism: Saccharomyces cerevisiae (Taxonomy ID: 4932). Database description: information on yeast protein-protein interactions and related information. Reference: Proc Natl Acad Sci U S A. 2001 Apr 10;98(8):4569-74; Epub 2001 Mar 13.

  17. Current nonclinical testing paradigm enables safe entry to First-In-Human clinical trials: The IQ consortium nonclinical to clinical translational database.

    Science.gov (United States)

    Monticello, Thomas M; Jones, Thomas W; Dambach, Donna M; Potter, David M; Bolt, Michael W; Liu, Maggie; Keller, Douglas A; Hart, Timothy K; Kadambi, Vivek J

    2017-11-01

    The contribution of animal testing in drug development has been widely debated and challenged. An industry-wide nonclinical to clinical translational database was created to determine how safety assessments in animal models translate to First-In-Human clinical risk. The blinded database was composed of 182 molecules and contained animal toxicology data coupled with clinical observations from phase I human studies. Animal and clinical data were categorized by organ system and correlations determined. The 2×2 contingency table (true positive, false positive, true negative, false negative) was used for statistical analysis. Sensitivity was 48% with a 43% positive predictive value (PPV). The nonhuman primate had the strongest performance in predicting adverse effects, especially for gastrointestinal and nervous system categories. When the same target organ was identified in both the rodent and nonrodent, the PPV increased. Specificity was 84% with an 86% negative predictive value (NPV). The beagle dog had the strongest performance in predicting an absence of clinical adverse effects. If no target organ toxicity was observed in either test species, the NPV increased. While nonclinical studies can demonstrate great value in the PPV for certain species and organ categories, the NPV was the stronger predictive performance measure across test species and target organs indicating that an absence of toxicity in animal studies strongly predicts a similar outcome in the clinic. These results support the current regulatory paradigm of animal testing in supporting safe entry to clinical trials and provide context for emerging alternate models. Copyright © 2017 Elsevier Inc. All rights reserved.
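
    The four measures reported above follow directly from the 2×2 contingency table. A minimal helper encoding the textbook formulas; the counts below are illustrative values chosen only to approximate the reported rates, not the study's actual data:

```python
# Standard 2x2 contingency measures (textbook formulas).
def contingency_measures(tp, fp, tn, fn):
    return {
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# Illustrative counts only (assumed, roughly matching the reported rates).
m = contingency_measures(tp=24, fp=32, tn=168, fn=26)
print(round(m["sensitivity"], 2), round(m["specificity"], 2))  # 0.48 0.84
```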

  18. Electroconvulsive therapy clinical database: Influence of age and gender on the electrical charge.

    Science.gov (United States)

    Salvador Sánchez, Javier; David, Mónica Delia; Torrent Setó, Aurora; Martínez Alonso, Montserrat; Portella Moll, Maria J; Pifarré Paredero, Josep; Vieta Pascual, Eduard; Mur Laín, María

    The influence of age and gender on the electrical charge delivered in a given population was analysed using an electroconvulsive therapy (ECT) clinical database. An observational, prospective, longitudinal study with descriptive analysis was performed using data from a database that included all bilateral frontotemporal ECT carried out with a Mecta spECTrum 5000Q® in our hospital over 6 years. From 2006 to 2012, a total of 4,337 ECT sessions were performed on 187 patients. Linear regression using mixed-effects analysis was weighted by the inverse of the number of ECT sessions performed on each patient per year of treatment. The results indicate that age is related to changes in the required charge (P=.031), such that the older the patient, the higher the charge needed. Gender is also associated with changes in charge (P=.014), with women requiring less charge than men, a mean of 87.3 mC less. When the effects of age and gender are included in the same model, both are significant (P=.0080 and P=.0041). Thus, for the same age, women require 99.0 mC less charge than men, and in both genders the charge increases by 2.3 mC per year. From our study, it is concluded that the effect of age on the dosage of the electrical charge is even more significant when related to gender. It would be of interest to promote the systematic collection of data for a better understanding and application of the technique. Copyright © 2015 SEP y SEPB. Publicado por Elsevier España, S.L.U. All rights reserved.

  19. PSSRdb: a relational database of polymorphic simple sequence repeats extracted from prokaryotic genomes.

    Science.gov (United States)

    Kumar, Pankaj; Chaitanya, Pasumarthy S; Nagarajaram, Hampapathalu A

    2011-01-01

    PSSRdb (Polymorphic Simple Sequence Repeats database) (http://www.cdfd.org.in/PSSRdb/) is a relational database of polymorphic simple sequence repeats (PSSRs) extracted from 85 different species of prokaryotes. Simple sequence repeats (SSRs) are tandem repeats of nucleotide motifs 1-6 bp in size and are highly polymorphic. SSR mutations in and around coding regions affect transcription and translation of genes; such changes underpin the phase variations and antigenic variations seen in some bacteria. Although SSR-mediated phase and antigenic variation has been well studied in some bacteria, many other prokaryotic species remain to be investigated for SSR-mediated adaptive and other evolutionary advantages. As part of our ongoing studies on SSR polymorphism in prokaryotes, we compared the genome sequences of the various strains and isolates available for 85 different species of prokaryotes, extracted the SSRs showing length variations, and created a relational database called PSSRdb. This database gives useful information such as the location of PSSRs in genomes, length variation across genomes, and the regions harboring PSSRs. The information provided in this database is very useful for further research and analysis of SSRs in prokaryotes.
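
    Extracting candidate SSRs from a sequence can be sketched with a backreference regular expression. The length threshold and scanning strategy below are simplifying assumptions; PSSRdb's pipeline additionally compares repeat lengths across strains to establish polymorphism:

```python
import re

# Find tandem repeats of 1-6 bp motifs (a minimal SSR scan; the minimum
# run length of 8 bp is an assumed illustrative threshold).
def find_ssrs(seq, min_len=8):
    out = []
    # Lazy {1,6}? prefers the shortest motif; \2+ requires >= 2 copies.
    for m in re.finditer(r"(([ACGT]{1,6}?)\2+)", seq):
        run, motif = m.group(1), m.group(2)
        if len(run) >= min_len:
            out.append((motif, m.start(), len(run)))
    return out

# A 10-bp (AC)n run starting at position 2; the 2-bp runs are filtered out.
print(find_ssrs("TTACACACACACGG"))  # [('AC', 2, 10)]
```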

  20. Comparative performance measures of relational and object-oriented databases using High Energy Physics data

    International Nuclear Information System (INIS)

    Marstaller, J.

    1993-12-01

    The major experiments at the SSC are expected to produce up to 1 petabyte of data per year. The use of database techniques can significantly reduce the time it takes to access data. The goal of this project was to test which underlying data model, the relational or the object-oriented, would be better suited for archiving and accessing high-energy physics data. We describe the relational and the object-oriented data models and their implementation in commercial database management systems. To determine scalability, we tested both implementations for 10-MB and 100-MB databases using storage and timing criteria

  1. Clinical and mutational characteristics of Duchenne muscular dystrophy patients based on a comprehensive database in South China.

    Science.gov (United States)

    Wang, Dan-Ni; Wang, Zhi-Qiang; Yan, Lei; He, Jin; Lin, Min-Ting; Chen, Wan-Jin; Wang, Ning

    2017-08-01

    The development of clinical trials for Duchenne muscular dystrophy (DMD) in China faces many challenges due to limited information about epidemiological data, natural history and clinical management. To provide these detailed data, we developed a comprehensive database based on registered DMD patients from South China and analysed their clinical and mutational characteristics. The database included DMD registrants confirmed by clinical presentation, family history, genetic detection, prognostic outcome, and/or muscle biopsy. Clinical data were collected by a registry form. Mutations of dystrophin were detected by multiplex ligation-dependent probe amplification (MLPA) and Sanger sequencing. Currently, 132 DMD patients from 128 families in South China have been registered, and 91.7% of them were below 10 years old. In mutational detection, large deletions were the most frequent type (57.8%), followed by small deletion/insertion mutations (14.1%), nonsense mutations (13.3%), large duplications (10.9%), and splice-site mutations (3.1%). Clinical analysis revealed that most patients reported initial symptoms between 1 and 3 years of age, but the diagnostic age was more frequently between 6 and 8 years. 81.4% of patients were ambulatory. Baseline cardiac assessments at diagnosis were conducted in 39.4% and 29.5% of patients by echocardiograms and electrocardiograms, respectively. Only 22.7% of registrants underwent baseline respiratory assessments. A small number of patients (20.5%) were treated with glucocorticoids. 13.3% of patients were eligible for stop-codon read-through therapy, and 48.4% of patients would potentially benefit from exon skipping. The top five exon skips applicable to the largest groups of registrants were skipping of exons 51 (14.8% of total mutations), 53 (12.5%), 45 (7.0%), 55 (4.7%), and 44 (3.9%). 
In conclusion, our database provided information on the natural history, diagnosis and management status of DMD in South China, as well as potential

  2. Clinical Practice Guidelines for Rare Diseases: The Orphanet Database.

    Directory of Open Access Journals (Sweden)

    Sonia Pavan

    Clinical practice guidelines (CPGs) for rare diseases (RDs) are scarce, may be difficult to identify through Internet searches and may vary in quality depending on the source and methodology used. In order to contribute to the improvement of the diagnosis, treatment and care of patients, Orphanet (www.orpha.net) has set up a procedure for the selection, quality evaluation and dissemination of CPGs, with the aim of providing easy access to relevant, accurate and specific recommendations for the management of RDs. This article provides an analysis of selected CPGs by medical domain coverage, prevalence of diseases, languages and type of producer, and addresses the variability in CPG quality and availability. CPGs are identified via bibliographic databases, websites of research networks, expert centres or medical societies. They are assessed according to quality criteria derived from the Appraisal of Guidelines for REsearch and Evaluation (AGREE II) Instrument. Only open-access CPGs and documents for which permission from the copyright holders has been obtained are disseminated on the Orphanet website. From January 2012 to July 2015, 277 CPGs were disseminated, representing coverage of 1,122 groups of diseases, diseases or subtypes in the Orphanet database. No language restriction is applied, and so far 10 languages are represented, with a predominance of CPGs in English, French and German (92% of all CPGs). A large proportion of diseases with identified CPGs belong to rare oncologic, neurologic, hematologic diseases or developmental anomalies. The Orphanet project on CPG collection, evaluation and dissemination is a continuous process, with regular addition of new guidelines and updates. CPGs meeting the quality criteria are integrated into the Orphanet database of rare diseases, together with other types of textual information and the appropriate services for patients, researchers and healthcare professionals in 40 countries.

  3. A Quantitative Analysis of the Extrinsic and Intrinsic Turnover Factors of Relational Database Support Professionals

    Science.gov (United States)

    Takusi, Gabriel Samuto

    2010-01-01

    This quantitative analysis explored the intrinsic and extrinsic turnover factors of relational database support specialists. Two hundred and nine relational database support specialists were surveyed for this research. The research was conducted based on Hackman and Oldham's (1980) Job Diagnostic Survey. Regression analysis and a univariate ANOVA…

  4. Efficient hemodynamic event detection utilizing relational databases and wavelet analysis

    Science.gov (United States)

    Saeed, M.; Mark, R. G.

    2001-01-01

    Development of a temporal query framework for time-oriented medical databases has hitherto been a challenging problem. We describe a novel method for the detection of hemodynamic events in multiparameter trends utilizing wavelet coefficients in a MySQL relational database. Storage of the wavelet coefficients allowed for a compact representation of the trends, and provided robust descriptors for the dynamics of the parameter time series. A data model was developed to allow for simplified queries along several dimensions and time scales. Of particular importance, the data model and wavelet framework allowed for queries to be processed with minimal table-join operations. A web-based search engine was developed to allow for user-defined queries. Typical queries required between 0.01 and 0.02 seconds, with at least two orders of magnitude improvement in speed over conventional queries. This powerful and innovative structure will facilitate research on large-scale time-oriented medical databases.
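
    The approach can be illustrated in miniature: compute coarse wavelet coefficients for a parameter trend, store them as rows, and express event detection as an ordinary SQL predicate. A single-level Haar transform and SQLite stand in for the paper's wavelet framework and MySQL, and all names and thresholds are illustrative:

```python
import sqlite3

# Single-level Haar transform: pairwise averages (approximation) and
# differences (detail). Large detail coefficients flag abrupt changes.
def haar_level1(x):
    return ([(a + b) / 2 for a, b in zip(x[::2], x[1::2])],
            [(a - b) / 2 for a, b in zip(x[::2], x[1::2])])

bp = [80, 82, 81, 80, 79, 55, 52, 50]   # trend with an abrupt drop
approx, detail = haar_level1(bp)

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE coeffs (t INTEGER, approx REAL, detail REAL)")
con.executemany("INSERT INTO coeffs VALUES (?,?,?)",
                [(i, a, d) for i, (a, d) in enumerate(zip(approx, detail))])

# "Event" query over stored coefficients -- no raw-trend scan, no joins.
events = con.execute(
    "SELECT t FROM coeffs WHERE detail > 10 OR detail < -10").fetchall()
print(events)  # [(2,)] -- the window containing the drop
```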

  5. A Space-Economic Representation of Transitive Closures in Relational Databases

    Directory of Open Access Journals (Sweden)

    Yangjun Chen

    2006-10-01

    A composite object represented as a directed graph (digraph for short) is an important data structure that requires efficient support in CAD/CAM, CASE, office systems, software management, web databases, and document databases. It is cumbersome to handle such objects in relational database systems when they involve ancestor-descendant (that is, recursive) relationships. In this paper, we present a new encoding method to label a digraph, which reduces the space requirements of all previous strategies. This method is based on a tree-labeling method and the concept of branchings, which are used in graph theory for finding shortest connection networks. A branching is a subgraph of a given digraph that is in fact a forest but covers all the nodes of the graph. On the one hand, the proposed encoding scheme achieves the smallest space requirements among all previously published strategies for recognizing recursive relationships. On the other hand, it leads to a new algorithm for computing transitive closures for DAGs (directed acyclic graphs) in O(e+b) time and O(n+b) space, where n represents the number of nodes of a DAG, e the number of edges, and b the DAG's breadth. In addition, this method can be extended to cyclic digraphs and is especially suitable for a relational environment.
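
    The tree-labeling building block behind such encodings can be sketched as follows: a DFS assigns each node an interval, and the ancestor-descendant test becomes an O(1) interval-containment check. The branching-based extension to general digraphs described above is omitted from this sketch:

```python
# DFS interval labeling of a tree: each node gets [start, end) covering
# itself and its whole subtree, so ancestry is interval containment.
def label_tree(tree, root):
    labels, counter = {}, [0]
    def dfs(u):
        start = counter[0]
        counter[0] += 1
        for child in tree.get(u, []):
            dfs(child)
        labels[u] = (start, counter[0])
    dfs(root)
    return labels

def is_ancestor(labels, u, v):
    (su, eu), (sv, ev) = labels[u], labels[v]
    return u != v and su <= sv and ev <= eu

tree = {"a": ["b", "c"], "b": ["d"]}
labels = label_tree(tree, "a")
print(is_ancestor(labels, "a", "d"), is_ancestor(labels, "b", "c"))
# True False
```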

  6. Identifying complications of interventional procedures from UK routine healthcare databases: a systematic search for methods using clinical codes.

    Science.gov (United States)

    Keltie, Kim; Cole, Helen; Arber, Mick; Patrick, Hannah; Powell, John; Campbell, Bruce; Sims, Andrew

    2014-11-28

    Several authors have developed and applied methods to routine data sets to identify the nature and rate of complications following interventional procedures. But, to date, there has been no systematic search for such methods. The objective of this article was to find, classify and appraise published methods, based on analysis of clinical codes, which used routine healthcare databases in a United Kingdom setting to identify complications resulting from interventional procedures. A literature search strategy was developed to identify published studies that referred, in the title or abstract, to the name or acronym of a known routine healthcare database and to complications from procedures or devices. The following data sources were searched in February and March 2013: Cochrane Methods Register, Conference Proceedings Citation Index - Science, Econlit, EMBASE, Health Management Information Consortium, Health Technology Assessment database, MathSciNet, MEDLINE, MEDLINE in-process, OAIster, OpenGrey, Science Citation Index Expanded and ScienceDirect. Of the eligible papers, those which reported methods using clinical coding were classified and summarised in tabular form using the following headings: routine healthcare database; medical speciality; method for identifying complications; length of follow-up; method of recording comorbidity. The benefits and limitations of each approach were assessed. From 3688 papers identified from the literature search, 44 reported the use of clinical codes to identify complications, from which four distinct methods were identified: 1) searching the index admission for specified clinical codes, 2) searching a sequence of admissions for specified clinical codes, 3) searching for specified clinical codes for complications from procedures and devices within the International Classification of Diseases 10th revision (ICD-10) coding scheme which is the methodology recommended by NHS Classification Service, and 4) conducting manual clinical
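
    The second method above (searching a sequence of admissions for specified clinical codes within a follow-up window) reduces to a simple filter. The field names, window length, and example ICD-10 codes below are assumptions for illustration, not drawn from any of the reviewed studies:

```python
from datetime import date

# Assumed example ICD-10 complication codes for illustration.
COMPLICATION_CODES = {"T81.0", "T84.0"}

def flag_complications(admissions, index_date, window_days=90):
    # Keep admissions inside the follow-up window that carry a listed code.
    flagged = []
    for adm in admissions:
        within_window = 0 <= (adm["date"] - index_date).days <= window_days
        if within_window and COMPLICATION_CODES & set(adm["codes"]):
            flagged.append(adm["id"])
    return flagged

admissions = [
    {"id": 1, "date": date(2013, 1, 10), "codes": ["I25.1"]},           # index
    {"id": 2, "date": date(2013, 2, 1),  "codes": ["T81.0", "R07.4"]},  # readmission
    {"id": 3, "date": date(2013, 8, 1),  "codes": ["T84.0"]},           # outside window
]
print(flag_complications(admissions, date(2013, 1, 10)))  # [2]
```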

  7. Mining approximate temporal functional dependencies with pure temporal grouping in clinical databases.

    Science.gov (United States)

    Combi, Carlo; Mantovani, Matteo; Sabaini, Alberto; Sala, Pietro; Amaddeo, Francesco; Moretti, Ugo; Pozzi, Giuseppe

    2015-07-01

    Functional dependencies (FDs) typically represent associations over facts stored by a database, such as "patients with the same symptom get the same therapy." In more recent years, some extensions have been introduced to represent both temporal constraints (temporal functional dependencies, TFDs), as in "for any given month, patients with the same symptom must have the same therapy, but their therapy may change from one month to the next," and approximate properties (approximate functional dependencies, AFDs), as in "patients with the same symptom generally have the same therapy." An AFD holds for most of the facts stored by the database, enabling some data to deviate from the defined property: the percentage of data which violate the given property is user-defined. According to this scenario, in this paper we introduce approximate temporal functional dependencies (ATFDs) and use them to mine clinical data. Specifically, we considered the need for deriving new knowledge from psychiatric and pharmacovigilance data. ATFDs may be defined and measured either on temporal granules (e.g., grouping data by day, week, month, or year) or on sliding windows (e.g., a fixed-length time interval which moves over the time axis): in this regard, we propose and discuss some specific and efficient data mining techniques for ATFDs. We also developed two running prototypes and showed the feasibility of our proposal by mining two real-world clinical data sets. The clinical interest of the dependencies derived from the psychiatry and pharmacovigilance domains confirms the soundness and the usefulness of the proposed techniques. Copyright © 2014 Elsevier Ltd. All rights reserved.
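
    A naive check of one ATFD with pure temporal grouping can be written directly from the definition: group tuples by granule and antecedent, count the tuples outside the majority consequent, and compare against the user-defined error threshold. This illustrates the semantics only, not the paper's mining algorithms, and the epsilon value is an assumed example:

```python
from collections import defaultdict

# Does "per month, symptom -> therapy" hold approximately, i.e. with at
# most an epsilon fraction of violating tuples?
def atfd_holds(rows, epsilon=0.10):
    groups = defaultdict(list)              # (month, symptom) -> therapies
    for month, symptom, therapy in rows:
        groups[(month, symptom)].append(therapy)
    # Tuples outside the majority therapy of their group are violations.
    violations = sum(len(ts) - max(ts.count(t) for t in set(ts))
                     for ts in groups.values())
    return violations / len(rows) <= epsilon

rows = [("2015-01", "cough", "A")] * 9 + [("2015-01", "cough", "B")]
print(atfd_holds(rows))  # True: 1 deviating tuple out of 10 is within epsilon
```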

  8. Architectural design of a data warehouse to support operational and analytical queries across disparate clinical databases.

    Science.gov (United States)

    Chelico, John D; Wilcox, Adam; Wajngurt, David

    2007-10-11

    As the clinical data warehouse of the New York Presbyterian Hospital has evolved, innovative methods of integrating new data sources and providing more effective and efficient data reporting and analysis need to be explored. We designed and implemented a new clinical data warehouse architecture to handle the integration of disparate clinical databases in the institution. By examining the way downstream systems are populated and streamlining the way data is stored, we created a virtual clinical data warehouse that is adaptable to the future needs of the organization.

  9. Critical incidents related to cardiac arrests reported to the Danish Patient Safety Database

    DEFF Research Database (Denmark)

    Andersen, Peter Oluf; Maaløe, Rikke; Andersen, Henning Boje

    2010-01-01

    Background: Critical incident reports can identify areas for improvement in resuscitation practice. The Danish Patient Safety Database is a mandatory reporting system and receives critical incident reports submitted by hospital personnel. The aim of this study is to identify, analyse and categorize critical incidents related to cardiac arrests reported to the Danish Patient Safety Database. Methods: The search terms “cardiac arrest” and “resuscitation” were used to identify reports in the Danish Patient Safety Database. Identified critical incidents were then classified into categories. Results: One

  10. Clinically Unsuspected Prion Disease Among Patients With Dementia Diagnoses in an Alzheimer's Disease Database.

    Science.gov (United States)

    Maddox, Ryan A; Blase, J L; Mercaldo, N D; Harvey, A R; Schonberger, L B; Kukull, W A; Belay, E D

    2015-12-01

    Brain tissue analysis is necessary to confirm prion diseases. Clinically unsuspected cases may be identified through neuropathologic testing. National Alzheimer's Coordinating Center (NACC) Minimum and Neuropathologic Data Sets for 1984 to 2005 were reviewed. Eligible patients had dementia, underwent autopsy, had available neuropathologic data, belonged to a currently funded Alzheimer's Disease Center (ADC), and were coded as having an Alzheimer's disease clinical diagnosis or a nonprion disease etiology. For the eligible patients with neuropathology indicating prion disease, further clinical information, collected from the reporting ADC, determined whether prion disease was considered before autopsy. Of 6000 eligible patients in the NACC database, 7 (0.12%) were clinically unsuspected but autopsy-confirmed prion disease cases. The proportion of patients with dementia with clinically unrecognized but autopsy-confirmed prion disease was small. Besides confirming clinically suspected cases, neuropathology is useful to identify unsuspected clinically atypical cases of prion disease. © The Author(s) 2015.

  11. Clinical characteristics and distinctiveness of DSM-5 eating disorder diagnoses: findings from a large naturalistic clinical database

    Science.gov (United States)

    2013-01-01

    Background DSM-IV eating disorder (ED) diagnoses have been criticized for lack of clinical utility, diagnostic instability, and over-inclusiveness of the residual category “ED not otherwise specified” (EDNOS). Revisions made in DSM-5 attempt to generate a more scientifically valid and clinically relevant system of ED classification. The aim of the present study was to examine the clinical characteristics and distinctiveness of the new DSM-5 ED diagnoses, especially concerning purging disorder (PD). Methods Using a large naturalistic Swedish ED database, 2233 adult women were diagnosed using DSM-5. Initial and 1-year follow-up psychopathology data were analyzed. Measures included the Eating Disorder Examination Questionnaire, Structural Eating Disorder Interview, Clinical Impairment Assessment, Structural Analysis of Social Behavior, Comprehensive Psychiatric Rating Scale, and Structured Clinical Interview for DSM-IV Axis I Disorders. Results Few meaningful differences emerged between anorexia nervosa binge/purge subtype (ANB/P), PD, and bulimia nervosa (BN). Unspecified Feeding and Eating Disorders (UFED) showed significantly less severity compared to the other groups. Conclusions PD does not appear to constitute a distinct diagnosis, the distinction between atypical AN and PD requires clarification, and minimum inclusion criteria for UFED are needed. Further sub-classification is unlikely to improve clinical utility. Instead, better delineation of commonalities is important. PMID:24999410

  12. Modeling Spatial Data within Object Relational-Databases

    Directory of Open Access Journals (Sweden)

    Iuliana BOTHA

    2011-03-01

    Spatial data refer to elements that help place a certain object in a certain area. These elements are latitude, longitude, points, geometric figures represented by points, etc. However, when translating these elements into data that can be stored in a computer, it all comes down to numbers. The interesting part that requires attention is how to store them in order to support fast and varied spatial queries. This is where the DBMS (Database Management System) that contains the database comes in. In this paper, we analyzed and compared two object-relational DBMSs that work with spatial data: Oracle and PostgreSQL.
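
    At its simplest, the point that coordinates reduce to numbers can be shown with a plain relational table and a bounding-box predicate; real spatial extensions such as Oracle Spatial and PostGIS add geometry types and spatial indexes on top of this. The data below are illustrative:

```python
import sqlite3

# Points of interest stored as plain numeric columns; a bounding-box
# query is then just comparisons on those numbers.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE poi (name TEXT, lat REAL, lon REAL)")
con.executemany("INSERT INTO poi VALUES (?,?,?)", [
    ("station", 44.43, 26.10),
    ("museum", 44.44, 26.09),
    ("airport", 44.57, 26.08),
])
rows = con.execute("""
    SELECT name FROM poi
    WHERE lat BETWEEN 44.42 AND 44.45 AND lon BETWEEN 26.08 AND 26.11
""").fetchall()
print(rows)  # [('station',), ('museum',)]
```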

  13. TRENDS: A flight test relational database user's guide and reference manual

    Science.gov (United States)

    Bondi, M. J.; Bjorkman, W. S.; Cross, J. L.

    1994-01-01

    This report is designed to be a user's guide and reference manual for users intending to access rotorcraft test data via TRENDS, the relational database system which was developed as a tool for the aeronautical engineer with no programming background. This report has been written to assist novice and experienced TRENDS users. TRENDS is a complete system for retrieving, searching, and analyzing both numerical and narrative data, and for displaying time-history and statistical data in graphical and numerical formats. This manual provides a 'guided tour' and a 'user's guide' for the new and intermediate-skilled users. Examples of the use of each menu item within TRENDS are provided in the Menu Reference section of the manual, including full coverage of TIMEHIST, one of the key tools. This manual is written around the XV-15 Tilt Rotor database, but does include an appendix on the UH-60 Blackhawk database. This user's guide and reference manual establishes a referable source for the research community and augments NASA TM-101025, TRENDS: The Aeronautical Post-Test Database Management System, Jan. 1990, written by the same authors.

  14. Fuzzy Relational Databases: Representational Issues and Reduction Using Similarity Measures.

    Science.gov (United States)

    Prade, Henri; Testemale, Claudette

    1987-01-01

    Compares and expands upon two approaches to dealing with fuzzy relational databases. The proposed similarity measure is based on a fuzzy Hausdorff distance and estimates the mismatch between two possibility distributions using a reduction process. The consequences of the reduction process on query evaluation are studied. (Author/EM)
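
    The flavor of matching fuzzy attribute values can be illustrated with possibility distributions and a standard consistency degree (the height of their intersection). This is textbook possibility theory only, not the fuzzy-Hausdorff-based similarity measure proposed in the article, and the distributions are invented examples:

```python
# Consistency of two discrete possibility distributions: the height of
# their min-intersection (a standard possibility-theory matching degree).
def consistency(pi1, pi2):
    common = set(pi1) & set(pi2)
    return max((min(pi1[v], pi2[v]) for v in common), default=0.0)

# Fuzzy attribute values as value -> possibility-degree mappings (assumed).
age_about_30 = {25: 0.3, 30: 1.0, 35: 0.3}
age_young    = {20: 1.0, 25: 1.0, 30: 0.5, 35: 0.2}
print(consistency(age_about_30, age_young))  # 0.5
```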

  15. Development of a Normative Database for Multifocal Electroretinography in the Context of a Multicenter Clinical Trial

    DEFF Research Database (Denmark)

    Simão, Sílvia; Costa, Miguel Ângelo; Sun, Jennifer K

    2017-01-01

    or diabetes mellitus; the subjects were recruited from 11 clinical sites in the setting of the EUROCONDOR project. Standardized mfERG acquisition (103 hexagons per eye) was established based on International Society for Clinical Electrophysiology of Vision (ISCEV) guidelines. At least one technician per site received both specialized training and certification. The main variables that could have influenced the results were considered in the analyses. RESULTS: The normative database was based on 111 eyes. The overall mean P1 implicit time (IT) was 33.94 ± 1.70 ms, and the mean P1 amplitude was 30.58 ± 5.20 nV/deg2. Age and gender were independent predictors of P1-IT but not of P1 amplitude. The responses averaged for the 6 rings showed a longer P1-IT in the fovea, decreasing progressively to the parafovea and perifovea. By contrast, P1 amplitude values sharply decreased with retinal eccentricity...

  16. Exposure to benzodiazepines (anxiolytics, hypnotics and related drugs) in seven European electronic healthcare databases: a cross-national descriptive study from the PROTECT-EU Project.

    Science.gov (United States)

    Huerta, Consuelo; Abbing-Karahagopian, Victoria; Requena, Gema; Oliva, Belén; Alvarez, Yolanda; Gardarsdottir, Helga; Miret, Montserrat; Schneider, Cornelia; Gil, Miguel; Souverein, Patrick C; De Bruin, Marie L; Slattery, Jim; De Groot, Mark C H; Hesse, Ulrik; Rottenkolber, Marietta; Schmiedl, Sven; Montero, Dolores; Bate, Andrew; Ruigomez, Ana; García-Rodríguez, Luis Alberto; Johansson, Saga; de Vries, Frank; Schlienger, Raymond G; Reynolds, Robert F; Klungel, Olaf H; de Abajo, Francisco José

    2016-03-01

    Studies on drug utilization usually do not allow direct cross-national comparisons because of differences in the respective applied methods. This study aimed to compare time trends in BZD prescribing by applying a common protocol and analysis plan in seven European electronic healthcare databases. Crude and standardized prevalence rates of drug prescribing from 2001-2009 were calculated in databases from Spain, the United Kingdom (UK), The Netherlands, Germany and Denmark. Prevalence was stratified by age, sex, BZD type (defined by ATC codes: BZD-anxiolytics, BZD-hypnotics, BZD-related drugs and clomethiazole), indication and number of prescriptions. Crude prevalence rates of BZD prescribing ranged from 570 to 1700 per 10,000 person-years over the study period. Standardization by age and sex did not substantially change the differences. Standardized prevalence rates increased in the Spanish (+13%) and UK databases (+2% and +8%) over the study period, while they decreased in the Dutch databases (-4% and -22%), the German (-12%) and the Danish (-26%) databases. Prevalence of anxiolytics outweighed that of hypnotics in the Spanish, Dutch and Bavarian databases, but the reverse was shown in the UK and Danish databases. Prevalence rates consistently increased with age and were two-fold higher in women than in men in all databases. A median of 18% of users received 10 or more prescriptions in 2008. Although similar methods were applied, the prevalence of BZD prescribing varied considerably across the different populations. Clinical factors related to BZDs and characteristics of the databases may explain these differences. Copyright © 2015 John Wiley & Sons, Ltd.

  17. An Internet-ready database for prospective randomized clinical trials of high-dose-rate brachytherapy for adenocarcinoma of the prostate

    International Nuclear Information System (INIS)

    Devlin, Phillip M.; Brus, Christina R.; Kazakin, Julia; Mitchell, Ronald B.; Demanes, D. Jeffrey; Edmundson, Gregory; Gribble, Michael; Gustafson, Gary S.; Kelly, Douglas A.; Linares, Luis A.; Martinez, Alvaro A.; Mate, Timothy P.; Nag, Subir; Perez, Carlos A.; Rao, Jaynath G.; Rodriguez, Rodney R.; Shasha, Daniel; Tripuraneni, Prabhakar

    2002-01-01

    Purpose: To demonstrate a new interactive Internet-ready database for prospective clinical trials in high-dose-rate (HDR) brachytherapy for prostate cancer. Methods and Materials: An Internet-ready database was created that allows common data acquisition and statistical analysis. Patient anonymity and confidentiality are preserved. The data forms include all common elements identified in a survey of existing databases. The forms allow the user to view patient data in view-only or edit mode. Eight linked forms document patient data before and after HDR therapy. The pretreatment forms are divided into four categories: staging, comorbid diseases, external beam radiotherapy data, and signs and symptoms. The posttreatment forms separate data by HDR implant information, HDR medications, posttreatment signs and symptoms, and follow-up data. The forms were tested for clinical usefulness. Conclusion: This Internet-based database enables the user to record and later analyze all relevant medical data and may become a reliable instrument for the follow-up of patients and the evaluation of treatment results.

  18. IAEA/NDS requirements related to database software

    International Nuclear Information System (INIS)

    Pronyaev, V.; Zerkin, V.

    2001-01-01

    Full text: The Nuclear Data Section of the IAEA disseminates data to the NDS users through the Internet or on CD-ROMs and diskettes. An OSU Web-server on DEC Alpha with OpenVMS and Oracle/DEC DBMS provides, via CGI scripts and FORTRAN retrieval programs, access to the main nuclear databases supported by the networks of Nuclear Reactions Data Centres and Nuclear Structure and Decay Data Centres (CINDA, EXFOR, ENDF, NSR, ENSDF). For Web access to data from other libraries and files, hyper-links to the files stored in ASCII text or other formats are used. Databases on CD-ROM are usually provided with some retrieval system. They are distributed in run-time mode and comply with all license requirements for software used in their development. Although major development work is now done on the PC with MS-Windows and Linux, NDS may not at present, due to some institutional conditions, use these platforms for organizing Web access to the data. Starting at the end of 1999, the NDS, in co-operation with other data centers, began to work out a strategy for migrating the main network nuclear databases onto platforms other than DEC Alpha/OpenVMS/DBMS. Because the different co-operating centers have their own preferences for hardware and software, the requirement to provide maximum platform independence for nuclear databases is the most important and desirable feature. This requirement determined some standards for nuclear database software development. Taking into account the present state and future development, these standards can be formulated as follows: 1. All numerical data (experimental, evaluated, recommended values and their uncertainties) prepared for inclusion in the IAEA/NDS nuclear database should be submitted in the form of ASCII text files and will be kept at NDS as master files. 2. Databases with complex structure should be submitted in the form of files with standard SQL statements describing all their components. All extensions of standard SQL
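
    The portability requirement described above (shipping a database as a plain-text file of standard SQL statements that can be rebuilt on any platform) can be sketched with Python's built-in sqlite3 module, whose iterdump() emits exactly such a script. This is a generic illustration of the principle, not the NDS tooling; the table and values are invented.

```python
import sqlite3

# Build a tiny database in memory (hypothetical cross-section table).
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE xs (e REAL, sigma REAL)")
src.execute("INSERT INTO xs VALUES (14.1, 5.02)")

# Dump the whole database as an ASCII SQL script: the platform-neutral
# "master file" form described in the standards above.
dump = "\n".join(src.iterdump())

# Any other platform can rebuild the database by replaying the script.
dst = sqlite3.connect(":memory:")
dst.executescript(dump)
row = dst.execute("SELECT e, sigma FROM xs").fetchone()
```

Because the dump is ordinary text containing only standard SQL, it survives transfers between operating systems, byte orders, and DBMS versions.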

  19. Engineering the object-relation database model in O-Raid

    Science.gov (United States)

    Dewan, Prasun; Vikram, Ashish; Bhargava, Bharat

    1989-01-01

    Raid is a distributed database system based on the relational model. O-Raid is an extension of the Raid system that will support complex data objects. The design of O-Raid is evolutionary and retains all features of relational database systems along with those of a general-purpose object-oriented programming language. O-Raid has several novel properties. Objects, classes, and inheritance are supported together with a predicate-based relational query language. O-Raid objects are compatible with C++ objects and may be read and manipulated by a C++ program without any 'impedance mismatch'. Relations and columns within relations may themselves be treated as objects with associated variables and methods. Relations may contain heterogeneous objects, that is, objects of more than one class in a certain column, which can individually evolve by being reclassified. Special facilities are provided to reduce the data search in a relation containing complex objects.

  20. Experience in running relational databases on clustered storage

    CERN Document Server

    Aparicio, Ruben Gaspar

    2015-01-01

    For the past eight years, the CERN IT Database group has based its backend storage on NAS (Network-Attached Storage) architecture, providing database access via the NFS (Network File System) protocol. In the last two and a half years, our storage has evolved from a scale-up architecture to a scale-out one. This paper describes our setup and a set of functionalities providing key features to other services, such as Database on Demand [1] or the CERN Oracle backup and recovery service. It also outlines a possible evolution trend that storage for databases could follow.

  1. Database development and management

    CERN Document Server

    Chao, Lee

    2006-01-01

    Introduction to Database Systems; Functions of a Database; Database Management System; Database Components; Database Development Process; Conceptual Design and Data Modeling; Introduction to Database Design Process; Understanding Business Process; Entity-Relationship Data Model; Representing Business Process with Entity-Relationship Model; Table Structure and Normalization; Introduction to Tables; Table Normalization; Transforming Data Models to Relational Databases; DBMS Selection; Transforming Data Models to Relational Databases; Enforcing Constraints; Creating Database for Business Process; Physical Design and Database

  2. Similarity joins in relational database systems

    CERN Document Server

    Augsten, Nikolaus

    2013-01-01

    State-of-the-art database systems manage and process a variety of complex objects, including strings and trees. For such objects equality comparisons are often not meaningful and must be replaced by similarity comparisons. This book describes the concepts and techniques to incorporate similarity into database systems. We start out by discussing the properties of strings and trees, and identify the edit distance as the de facto standard for comparing complex objects. Since the edit distance is computationally expensive, token-based distances have been introduced to speed up edit distance computation.
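
    The edit (Levenshtein) distance identified above as the de facto standard for string comparison can be sketched in a few lines. This is a generic dynamic-programming illustration, not code from the book:

```python
# Edit distance: the minimum number of insertions, deletions, and
# substitutions needed to turn string a into string b.
def edit_distance(a: str, b: str) -> int:
    # prev[j] holds the distance between a[:i-1] and b[:j]
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

edit_distance("kitten", "sitting")  # → 3
```

The quadratic cost of this computation is what motivates the token-based (e.g. q-gram) distances mentioned in the abstract, which trade exactness for speed when joining large string collections.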

  3. A relational database for physical data from TJ-II discharges

    International Nuclear Information System (INIS)

    Sanchez, E.; Portas, A.B.; Vega, J.

    2002-01-01

    A relational database (RDB) has been developed for classifying TJ-II experimental data according to physical criteria. Two objectives have been achieved: the design and implementation of the database, and the software tools for data access, which depend on a single software driver. TJ-II data were arranged in several tables with a flexible design, speedy performance, efficient search capacity and adaptability to meet present and future requirements. The software has been developed to allow access to the TJ-II RDB from a variety of computer platforms (ALPHA AXP/True64 UNIX, CRAY/UNICOS, Intel Linux, Sparc/Solaris and Intel/Windows 95/98/NT) and programming languages (FORTRAN and C/C++). The database resides in a Windows NT Server computer and is managed by Microsoft SQL Server. The access software is based on Open Network Computing Remote Procedure Call (ONC RPC) and follows the client/server model. A server program running in the Windows NT computer controls data access. Operations on the database (through a local ODBC connection) are performed according to predefined permission protocols. A client library providing a set of basic functions for data integration and retrieval has been built in both static and dynamic link versions. The dynamic version is essential in accessing RDB data from 4GL environments (IDL and PV-WAVE among others).

  4. Danish Palliative Care Database

    DEFF Research Database (Denmark)

    Grønvold, Mogens; Adsersen, Mathilde; Hansen, Maiken Bang

    2016-01-01

    Aims: The aim of the Danish Palliative Care Database (DPD) is to monitor, evaluate, and improve the clinical quality of specialized palliative care (SPC) (ie, the activity of hospital-based palliative care teams/departments and hospices) in Denmark. Study population: The study population is all...... patients were registered in DPD during the 5 years 2010–2014. Of those registered, 96% had cancer. Conclusion: DPD is a national clinical quality database for SPC having clinically relevant variables and high data and patient completeness....

  5. The clinical database and implementation of treatment guidelines by the Danish Breast Cancer Cooperative Group in 2007-2016

    DEFF Research Database (Denmark)

    Jensen, Maj-Britt; Laenkholm, Anne-Vibeke; Offersen, Birgitte V

    2018-01-01

    BACKGROUND: For 40 years, the Danish Breast Cancer Cooperative Group (DBCG) has provided comprehensive guidelines for diagnosis and treatment of breast cancer. This population-based analysis aimed to describe the plurality of modifications introduced over the past 10 years in the national Danish...... guidelines for the management of early breast cancer. By use of the clinical DBCG database we analyze the effectiveness of the implementation of guideline revisions in Denmark. METHODS: From the DBCG guidelines we extracted modifications introduced in 2007-2016 and selected examples regarding surgery......, radiotherapy (RT) and systemic treatment. We assessed introduction of modifications from release on the DBCG webpage to change in clinical practice using the DBCG clinical database. RESULTS: Over a 10-year period, data from 48,772 patients newly diagnosed with malignant breast tumors were entered into DBCG...

  6. cuticleDB: a relational database of Arthropod cuticular proteins

    Directory of Open Access Journals (Sweden)

    Willis Judith H

    2004-09-01

    Background: The insect exoskeleton or cuticle is a bi-partite composite of proteins and chitin that provides protective, skeletal and structural functions. Little information is available about the molecular structure of this important complex, which exhibits a helicoidal architecture. Scores of sequences of cuticular proteins have been obtained from direct protein sequencing, from cDNAs, and from genomic analyses. Most of these cuticular protein sequences contain motifs found only in arthropod proteins. Description: cuticleDB is a relational database containing all structural proteins of Arthropod cuticle identified to date. Many come from direct sequencing of proteins isolated from cuticle and from sequences from cDNAs that share common features with these authentic cuticular proteins. It also includes proteins from the Drosophila melanogaster and the Anopheles gambiae genomes that have been predicted to be cuticular proteins, based on a Pfam motif (PF00379) responsible for chitin binding in Arthropod cuticle. The total number of database entries is 445: 370 derive from insects, 60 from Crustacea and 15 from Chelicerata. The database can be accessed from our web server at http://bioinformatics.biol.uoa.gr/cuticleDB. Conclusions: CuticleDB was primarily designed to contain correct and full annotation of cuticular protein data. The database will be of help to future genome annotators. Users will be able to test hypotheses for the existence of known and also of yet unknown motifs in cuticular proteins. An analysis of motifs may contribute to understanding how proteins contribute to the physical properties of cuticle, as well as to the precise nature of their interaction with chitin.

  7. Evolution of the use of relational and NoSQL databases in the ATLAS experiment

    Science.gov (United States)

    Barberis, D.

    2016-09-01

    The ATLAS experiment used for many years a large database infrastructure based on Oracle to store several different types of non-event data: time-dependent detector configuration and conditions data, calibrations and alignments, configurations of Grid sites, catalogues for data management tools, job records for distributed workload management tools, run and event metadata. The rapid development of "NoSQL" databases (structured storage services) in the last five years allowed an extended and complementary usage of traditional relational databases and new structured storage tools in order to improve the performance of existing applications and to extend their functionalities using the possibilities offered by the modern storage systems. The trend is towards using the best tool for each kind of data, separating for example the intrinsically relational metadata from payload storage, and records that are frequently updated and benefit from transactions from archived information. Access to all components has to be orchestrated by specialised services that run on front-end machines and shield the user from the complexity of data storage infrastructure. This paper describes this technology evolution in the ATLAS database infrastructure and presents a few examples of large database applications that benefit from it.

  8. The CEBAF Element Database and Related Operational Software

    Energy Technology Data Exchange (ETDEWEB)

    Larrieu, Theodore [Thomas Jefferson National Accelerator Facility, Newport News, VA (United States); Slominski, Christopher [Thomas Jefferson National Accelerator Facility, Newport News, VA (United States); Keesee, Marie [Thomas Jefferson National Accelerator Facility, Newport News, VA (United States); Turner, Dennison [Thomas Jefferson National Accelerator Facility, Newport News, VA (United States); Joyce, Michele [Thomas Jefferson National Accelerator Facility, Newport News, VA (United States)

    2015-09-01

    The newly commissioned 12GeV CEBAF accelerator relies on a flexible, scalable and comprehensive database to define the accelerator. This database delivers the configuration for CEBAF operational tools, including hardware checkout, the downloadable optics model, control screens, and much more. The presentation will describe the flexible design of the CEBAF Element Database (CED), its features and assorted use case examples.

  9. An integrative clinical database and diagnostics platform for biomarker identification and analysis in ion mobility spectra of human exhaled air

    DEFF Research Database (Denmark)

    Schneider, Till; Hauschild, Anne-Christin; Baumbach, Jörg Ingo

    2013-01-01

    data integration and semi-automated data analysis, in particular with regard to the rapid data accumulation, emerging from the high-throughput nature of the MCC/IMS technology. Here, we present a comprehensive database application and analysis platform, which combines metabolic maps with heterogeneous...... biomedical data in a well-structured manner. The design of the database is based on a hybrid of the entity-attribute-value (EAV) model and the EAV-CR, which incorporates the concepts of classes and relationships. Additionally it offers an intuitive user interface that provides easy and quick access...... to have a clear understanding of the detailed composition of human breath. Therefore, in addition to the clinical studies, there is a need for a flexible and comprehensive centralized data repository, which is capable of gathering all kinds of related information. Moreover, there is a demand for automated...
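
    The entity-attribute-value (EAV) layout mentioned in this abstract can be sketched with a small in-memory example: one narrow table holds all observations, so new attributes (e.g. newly discovered IMS peaks) can be added without schema changes. Table, column, and attribute names below are invented for illustration; this is not the authors' schema.

```python
import sqlite3

# One generic three-column table instead of one wide column per attribute.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE eav (
        entity    INTEGER,  -- e.g. a breath-sample ID
        attribute TEXT,     -- e.g. an IMS peak name or a metadata field
        value     TEXT
    )
""")
rows = [
    (1, "peak_p42",  "0.73"),
    (1, "diagnosis", "healthy"),
    (2, "peak_p42",  "1.91"),
    (2, "diagnosis", "copd"),
]
conn.executemany("INSERT INTO eav VALUES (?, ?, ?)", rows)

# Records are pivoted back out of the EAV rows at query time.
diagnoses = dict(conn.execute(
    "SELECT entity, value FROM eav WHERE attribute = 'diagnosis'"
))
```

The flexibility comes at a cost: reconstructing a full per-entity record requires a self-join or pivot per attribute, which is why EAV designs (and the EAV-CR variant with classes and relationships) pair the generic table with supporting metadata.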

  10. Dynamic taxonomies applied to a web-based relational database for geo-hydrological risk mitigation

    Science.gov (United States)

    Sacco, G. M.; Nigrelli, G.; Bosio, A.; Chiarle, M.; Luino, F.

    2012-02-01

    In its 40 years of activity, the Research Institute for Geo-hydrological Protection of the Italian National Research Council has amassed a vast and varied collection of historical documentation on landslides, muddy-debris flows, and floods in northern Italy from 1600 to the present. Since 2008, the archive resources have been maintained through a relational database management system. The database is used for routine study and research purposes as well as for providing support during geo-hydrological emergencies, when data need to be quickly and accurately retrieved. Retrieval speed and accuracy are the main objectives of an implementation based on a dynamic taxonomies model. Dynamic taxonomies are a general knowledge management model for configuring complex, heterogeneous information bases that support exploratory searching. At each stage of the process, the user can explore or browse the database in a guided yet unconstrained way by selecting the alternatives suggested for further refining the search. Dynamic taxonomies have been successfully applied to such diverse and apparently unrelated domains as e-commerce and medical diagnosis. Here, we describe the application of dynamic taxonomies to our database and compare it to traditional relational database query methods. The dynamic taxonomy interface, essentially a point-and-click interface, is considerably faster and less error-prone than traditional form-based query interfaces that require the user to remember and type in the "right" search keywords. Finally, dynamic taxonomy users have confirmed that one of the principal benefits of this approach is the confidence of having considered all the relevant information. Dynamic taxonomies and relational databases work in synergy to provide fast and precise searching: one of the most important factors in timely response to emergencies.
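
    The guided-refinement idea behind dynamic taxonomies can be shown with a toy sketch (not the authors' code): after each selection, only facet values that still match at least one record are offered for further narrowing, so the user can never reach an empty result by clicking. Record fields and values are invented.

```python
from collections import Counter

# A tiny document base with two facets per record.
records = [
    {"event": "flood",     "region": "Piedmont", "century": "20th"},
    {"event": "landslide", "region": "Piedmont", "century": "19th"},
    {"event": "flood",     "region": "Liguria",  "century": "20th"},
]

def refine(records, **selected):
    """Keep only records matching every selected facet value."""
    return [r for r in records
            if all(r[f] == v for f, v in selected.items())]

def remaining_facets(records, facet):
    """Facet values still present, with counts, to show as next choices."""
    return Counter(r[facet] for r in records)

floods = refine(records, event="flood")
remaining_facets(floods, "region")  # only regions with at least one flood
```

This point-and-click refinement loop is why the abstract reports faster, less error-prone searching than form-based queries: the interface proposes only keywords that are guaranteed to return results.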

  11. Monitoring outcomes with relational databases: does it improve quality of care?

    Science.gov (United States)

    Clemmer, Terry P

    2004-12-01

    There are 3 key ingredients in improving quality of medical care: 1) using a scientific process of improvement, 2) executing the process at the lowest possible level in the organization, and 3) measuring the results of any change reliably. Relational databases, when used within these guidelines, are of great value in these efforts if they contain reliable information that is pertinent to the project and is used in a scientific process of quality improvement by a front-line team. Unfortunately, the data are frequently unreliable and/or not pertinent to the local process, and are used by persons at very high levels in the organization without a scientific process and without reliable measurement of the outcome. Under these circumstances the effectiveness of relational databases in improving care is marginal at best, frequently wasteful, and potentially harmful. This article explores examples of these concepts.

  12. Danish Gynecological Cancer Database

    DEFF Research Database (Denmark)

    Sørensen, Sarah Mejer; Bjørn, Signe Frahm; Jochumsen, Kirsten Marie

    2016-01-01

    AIM OF DATABASE: The Danish Gynecological Cancer Database (DGCD) is a nationwide clinical cancer database and its aim is to monitor the treatment quality of Danish gynecological cancer patients, and to generate data for scientific purposes. DGCD also records detailed data on the diagnostic measures...... data forms as follows: clinical data, surgery, pathology, pre- and postoperative care, complications, follow-up visits, and final quality check. DGCD is linked with additional data from the Danish "Pathology Registry", the "National Patient Registry", and the "Cause of Death Registry" using the unique...... Danish personal identification number (CPR number). DESCRIPTIVE DATA: Data from DGCD and registers are available online in the Statistical Analysis Software portal. The DGCD forms cover almost all possible clinical variables used to describe gynecological cancer courses. The only limitation...

  13. A Novel Approach: Chemical Relational Databases, and the ...

    Science.gov (United States)

    Mutagenicity and carcinogenicity databases are crucial resources for toxicologists and regulators involved in chemicals risk assessment. Until recently, existing public toxicity databases have been constructed primarily as

  14. Matrix-assisted laser desorption/ionization-time of flight mass spectrometry: protocol standardization and database expansion for rapid identification of clinically important molds.

    Science.gov (United States)

    Paul, Saikat; Singh, Pankaj; Rudramurthy, Shivaprakash M; Chakrabarti, Arunaloke; Ghosh, Anup K

    2017-12-01

    To standardize matrix-assisted laser desorption/ionization-time of flight mass spectrometry protocols and expand the existing Bruker Biotyper database for mold identification, four different sample preparation methods (protocols A, B, C and D) were evaluated. On analyzing each protein extraction method, reliable identification and the best log scores were achieved with protocol D. The same protocol was used to identify 153 clinical isolates. Of these 153, 123 (80.3%) were accurately identified using the existing database, and the remaining 30 (19.7%) were not identified because they were absent from the database. On inclusion of the missing main spectrum profiles in the existing database, all 153 isolates were identified. Matrix-assisted laser desorption/ionization-time of flight mass spectrometry can thus be used for routine identification of clinically important molds.

  15. The Danish Cardiac Rehabilitation Database

    Directory of Open Access Journals (Sweden)

    Zwisler AD

    2016-10-01

    Aim of database: The Danish Cardiac Rehabilitation Database (DHRD) aims to improve the quality of cardiac rehabilitation (CR) to the benefit of patients with coronary heart disease (CHD). Study population: Hospitalized patients with CHD with stenosis on coronary angiography treated with percutaneous coronary intervention, coronary artery bypass grafting, or medication alone. Reporting is mandatory for all hospitals in Denmark delivering CR. The database was initially implemented in 2013 and was fully running from August 14, 2015, thus comprising data at a patient level from the latter date onward. Main variables: Patient-level data are registered by clinicians at the time of entry to CR directly into an online system with simultaneous linkage to other central patient registers. Follow-up data are entered after 6 months. The main variables collected are related to key outcome and performance indicators of CR: referral and adherence, lifestyle, patient-related outcome measures, risk factor control, and medication. Program-level online data are collected every third year. Descriptive data: Based on administrative data, approximately 14,000 patients with CHD are hospitalized at 35 hospitals annually, with 75% receiving one or more outpatient rehabilitation services by 2015. The database has not yet been running for a full year, which explains the use of approximations. Conclusion: The DHRD is an online, national quality improvement database on CR, aimed at patients with CHD. Registration of data is mandatory at both the patient and the program level. The DHRD aims to systematically monitor the quality of CR over time, in order to improve the quality of CR throughout Denmark to the benefit of patients.
Keywords: secondary prevention, coronary heart disease, cardiovascular prevention, clinical quality registry

  16. Databases

    Digital Repository Service at National Institute of Oceanography (India)

    Kunte, P.D.

    Information on bibliographic as well as numeric/textual databases relevant to coastal geomorphology has been included in a tabular form. Databases cover a broad spectrum of related subjects like coastal environment and population aspects, coastline...

  17. Hip/femur fractures associated with the use of benzodiazepines (anxiolytics, hypnotics and related drugs) : A methodological approach to assess consistencies across databases from the PROTECT-EU project

    NARCIS (Netherlands)

    Requena, Gema; Huerta, Consuelo; Gardarsdottir, Helga; Logie, John; González-González, Rocío; Abbing-Karahagopian, Victoria; Miret, Montserrat; Schneider, Cornelia; Souverein, Patrick C.; Webb, Dave; Afonso, Ana; Boudiaf, Nada; Martin, Elisa; Oliva, Belén; Alvarez, Arturo; de Groot, Mark C H; Bate, Andrew; Johansson, Saga; Schlienger, Raymond; Reynolds, Robert; Klungel, Olaf H.; de Abajo, Francisco J.

    2016-01-01

    Background: Results from observational studies may be inconsistent because of variations in methodological and clinical factors that may be intrinsically related to the database (DB) where the study is performed. Objectives: The objectives of this paper were to evaluate the impact of applying a

  18. Hip/femur fractures associated with the use of benzodiazepines (anxiolytics, hypnotics and related drugs) : a methodological approach to assess consistencies across databases from the PROTECT-EU project

    NARCIS (Netherlands)

    Requena, Gema; Huerta, Consuelo; Gardarsdottir, Helga; Logie, John; González-González, Rocío; Abbing-Karahagopian, Victoria; Miret, Montserrat; Schneider, Cornelia; Souverein, Patrick C; Webb, Dave; Afonso, Ana; Boudiaf, Nada; Martin, Elisa; Oliva, Belén; Alvarez, Arturo; De Groot, Mark C H; Bate, Andrew; Johansson, Saga; Schlienger, Raymond; Reynolds, Robert; Klungel, Olaf H; de Abajo, Francisco J

    2016-01-01

    BACKGROUND: Results from observational studies may be inconsistent because of variations in methodological and clinical factors that may be intrinsically related to the database (DB) where the study is performed. OBJECTIVES: The objectives of this paper were to evaluate the impact of applying a

  19. Development of a Comprehensive Blast-Related Auditory Injury Database (BRAID)

    Science.gov (United States)

    2016-05-01

    Fragments recoverable from the report: servicemembers included in the Blast-Related Auditory Injury Database; training injuries, accidents, and other noncombat injuries (3,452 injuries); medications, exposures to ototoxic chemicals, recreational noise exposure, and other forms of temporary and persistent threshold shift; combat marines. Reference fragment: ...AC, Vecchiotti M, Kujawa SG, Lee DJ, Quesnel AM. Otologic outcomes after blast injury: The Boston Marathon experience. Otol Neurotol. 2014;35(10).

  20. A sharable cloud-based pancreaticoduodenectomy collaborative database for physicians: emphasis on security and clinical rule supporting.

    Science.gov (United States)

    Yu, Hwan-Jeu; Lai, Hong-Shiee; Chen, Kuo-Hsin; Chou, Hsien-Cheng; Wu, Jin-Ming; Dorjgochoo, Sarangerel; Mendjargal, Adilsaikhan; Altangerel, Erdenebaatar; Tien, Yu-Wen; Hsueh, Chih-Wen; Lai, Feipei

    2013-08-01

    Pancreaticoduodenectomy (PD) is a major operation with a high complication rate. Patients may thereafter develop morbidity because of the complex reconstruction and the loss of pancreatic parenchyma. A well-designed database is very important for addressing both the short-term and long-term outcomes after PD. The objective of this research was to build an international PD database implemented with security and clinical rule supporting functions, which makes data sharing easier and improves the accuracy of the data. The proposed system is a cloud-based application. To fulfill its requirements, the system comprises four subsystems: a data management subsystem, a clinical rule supporting subsystem, a short message notification subsystem, and an information security subsystem. After completing the surgery, the physicians input the data retrospectively, which are analyzed to study factors associated with common post-PD complications (delayed gastric emptying and pancreatic fistula) to validate the clinical value of this system. Currently, this database contains data from nearly 500 subjects. Five medical centers in Taiwan and two cancer centers in Mongolia are participating in this study. A decision tree data mining model showed that elderly patients (>76 years) with pylorus-preserving PD (PPPD) have a higher proportion of delayed gastric emptying. Regarding pancreatic fistula, the decision tree analysis revealed that cases with non-pancreaticogastrostomy (PG) reconstruction and body mass index (BMI) >29.65, or with PG reconstruction, BMI >23.7 and non-classic PD, have a higher proportion of pancreatic fistula after PD. The proposed system allows medical staff to collect and store clinical data in a cloud, sharing the data with other physicians in a secure manner to achieve collaboration in research. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
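
    The clinical-rule-supporting idea in this abstract can be sketched by encoding the decision-tree findings quoted above as a rule function. The thresholds come from the abstract; the function and variable names are our own interpretation of the tree paths, not the system's API:

```python
# Hypothetical rule function flagging the higher-risk groups reported
# by the decision-tree analysis in the abstract.
def higher_risk_groups(age, procedure, reconstruction, bmi, classic_pd):
    risks = []
    # Elderly patients (>76) with pylorus-preserving PD (PPPD):
    # higher proportion of delayed gastric emptying.
    if age > 76 and procedure == "PPPD":
        risks.append("delayed_gastric_emptying")
    # Non-PG reconstruction with BMI > 29.65, or PG reconstruction with
    # BMI > 23.7 and non-classic PD: higher proportion of fistula.
    if (reconstruction != "PG" and bmi > 29.65) or \
       (reconstruction == "PG" and bmi > 23.7 and not classic_pd):
        risks.append("pancreatic_fistula")
    return risks
```

Embedding such rules in the data-entry layer is what lets a collaborative database warn clinicians at input time rather than only in retrospective analysis.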

  1. A web-based, relational database for studying glaciers in the Italian Alps

    Science.gov (United States)

    Nigrelli, G.; Chiarle, M.; Nuzzi, A.; Perotti, L.; Torta, G.; Giardino, M.

    2013-02-01

    Glaciers are among the best terrestrial indicators of climate change and thus glacier inventories have attracted a growing, worldwide interest in recent years. In Italy, the first official glacier inventory was completed in 1925 and 774 glacial bodies were identified. As the amount of data continues to increase, and new techniques become available, there is a growing demand for computer tools that can efficiently manage the collected data. The Research Institute for Geo-hydrological Protection of the National Research Council, in cooperation with the Departments of Computer Science and Earth Sciences of the University of Turin, created a database that provides a modern tool for storing, processing and sharing glaciological data. The database was developed according to the need of storing heterogeneous information, which can be retrieved through a set of web search queries. The database's architecture is server-side, and was designed by means of an open source software. The website interface, simple and intuitive, was intended to meet the needs of a distributed public: through this interface, any type of glaciological data can be managed, specific queries can be performed, and the results can be exported in a standard format. The use of a relational database to store and organize a large variety of information about Italian glaciers collected over the last hundred years constitutes a significant step forward in ensuring the safety and accessibility of such data. Moreover, the same benefits also apply to the enhanced operability for handling information in the future, including new and emerging types of data formats, such as geographic and multimedia files. Future developments include the integration of cartographic data, such as base maps, satellite images and vector data. The relational database described in this paper will be the heart of a new geographic system that will merge data, data attributes and maps, leading to a complete description of Italian glacial

  2. Uses and limitations of registry and academic databases.

    Science.gov (United States)

    Williams, William G

    2010-01-01

    A database is simply a structured collection of information. A clinical database may be a Registry (a limited amount of data for every patient undergoing heart surgery) or Academic (an organized and extensive dataset of an inception cohort of carefully selected patients). A registry and an academic database have different purposes and costs. The data to be collected for a database are defined by its purpose and the output reports required for achieving that purpose. A Registry's purpose is to ensure quality care; an Academic Database's, to discover new knowledge through research. A database is only as good as the data it contains. Database personnel must be exceptionally committed and supported by clinical faculty. A system to routinely validate and verify data integrity is essential to ensure database utility. Frequent use of the database improves its accuracy. For congenital heart surgeons, routine use of a Registry Database is an essential component of clinical practice. Copyright (c) 2010 Elsevier Inc. All rights reserved.

  3. The Swedish Family-Cancer Database: Update, Application to Colorectal Cancer and Clinical Relevance

    Directory of Open Access Journals (Sweden)

    Hemminki Kari

    2005-01-01

    Full Text Available Abstract The Swedish Family-Cancer Database has been used for almost 10 years in the study of familial risks at all common sites. In the present paper we describe some main features of version VI of this Database, assembled in 2004. This update included all Swedes born in 1932 and later (offspring) with their biological parents, a total of 10.5 million individuals. Cancer cases were retrieved from the Swedish Cancer Registry from 1958 to 2002, including over 1.2 million first and multiple primary cancers and in situ tumours. Compared to previous versions, only 6.0% of deceased offspring with a cancer diagnosis lack any parental information. We show one application of the Database in the study of familial risks in colorectal adenocarcinoma, with defined age-group and anatomic site-specific analyses. Familial standardized incidence ratios (SIRs) were determined for offspring when parents or siblings were diagnosed with colon or rectal cancer. As a novel finding, risks for siblings were higher than those for offspring of affected parents. The excess risk was limited to colon cancer and particularly to right-sided colon cancer. The SIRs for colon cancer in age-matched populations were 2.58 when parents were probands and 3.81 when siblings were probands; for right-sided colon cancer the SIRs were 3.66 and 7.53, respectively. Thus the familial excess (SIR-1.00) was more than twofold higher for right-sided colon cancer. Colon and rectal cancers appeared to be distinguished by high-penetrant and recessive conditions that only affect the colon, whereas low-penetrant familial effects are shared by the two sites. Epidemiological studies can be used to generate clinical estimates for familial risk, conditioned on numbers of affected family members and their ages of onset. Useful risk estimates have been developed for familial breast and prostate cancers. Reliable risk estimates for other cancers should also be seriously considered for

  4. Current situation and future usage of anticancer drug databases.

    Science.gov (United States)

    Wang, Hongzhi; Yin, Yuanyuan; Wang, Peiqi; Xiong, Chenyu; Huang, Lingyu; Li, Sijia; Li, Xinyi; Fu, Leilei

    2016-07-01

    Cancer is a deadly disease with increasing incidence and mortality rates, affecting the quality of life of millions of people each year. The past 15 years have witnessed the rapid development of targeted therapy for cancer treatment, with numerous anticancer drugs, drug targets and related gene mutations having been identified. The demand for better anticancer drugs and the advances in database technologies have propelled the development of databases related to anticancer drugs. These databases provide systematic collections of integrative information, either directly on anticancer drugs or on a specific type of anticancer drug, each with its own emphasis on different aspects, such as drug-target interactions, the relationship between mutations in drug targets and drug resistance/sensitivity, drug-drug interactions, natural products with anticancer activity, anticancer peptides, synthetic lethality pairs and histone deacetylase inhibitors. We focus on a holistic view of the current situation and future usage of databases related to anticancer drugs and further discuss their strengths and weaknesses, in the hope of facilitating the discovery of new anticancer drugs with better clinical outcomes.

  5. The Danish Smoking Cessation Database

    DEFF Research Database (Denmark)

    Rasmussen, Mette; Tønnesen, Hanne

    2016-01-01

    Background: The Danish Smoking Cessation Database (SCDB) was established in 2001 as the first national healthcare register within the field of health promotion. Aim of the database: The aim of the SCDB is to document and evaluate smoking cessation (SC) interventions to assess and improve their quality. The database was also designed to function as a basis for register-based research projects. Study population: The population includes smokers in Denmark who have been receiving a face-to-face SC intervention offered by an SC clinic affiliated with the SCDB. SC clinics can be any organisation... The database is increasingly used in register-based research.

  6. Rapid storage and retrieval of genomic intervals from a relational database system using nested containment lists.

    Science.gov (United States)

    Wiley, Laura K; Sivley, R Michael; Bush, William S

    2013-01-01

    Efficient storage and retrieval of genomic annotations based on range intervals is necessary, given the amount of data produced by next-generation sequencing studies. The indexing strategies of relational database systems (such as MySQL) greatly inhibit their use in genomic annotation tasks. This has led to the development of stand-alone applications that are dependent on flat-file libraries. In this work, we introduce MyNCList, an implementation of the NCList data structure within a MySQL database. MyNCList enables the storage, update and rapid retrieval of genomic annotations from the convenience of a relational database system. Range-based annotations of 1 million variants are retrieved in under a minute, making this approach feasible for whole-genome annotation tasks. Database URL: https://github.com/bushlab/mynclist.
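    The NCList idea behind MyNCList can be sketched in a few lines of plain Python: intervals are sorted by start (ties broken longest first), so any interval contained in an earlier one can be pushed into that interval's sublist, and a range query only descends into sublists of intervals that themselves overlap the query. This is an illustrative in-memory sketch under assumed half-open interval semantics, not MyNCList's actual MySQL implementation; all names are hypothetical.

```python
class NCList:
    """Minimal nested containment list over half-open intervals (start, end)."""

    def __init__(self, intervals):
        # Sort by start ascending, end descending: an interval contained in
        # an earlier one always follows it directly in this order.
        ivs = sorted(intervals, key=lambda iv: (iv[0], -iv[1]))
        self.top = []   # top-level (interval, children) nodes
        stack = []      # chain of currently "open" containing nodes
        for iv in ivs:
            node = (iv, [])
            while stack and iv[1] > stack[-1][0][1]:
                stack.pop()  # iv extends past this node, so it is not contained
            (stack[-1][1] if stack else self.top).append(node)
            stack.append(node)

    def overlapping(self, lo, hi, nodes=None):
        """Return all stored intervals overlapping [lo, hi)."""
        nodes = self.top if nodes is None else nodes
        hits = []
        for iv, children in nodes:
            if iv[0] >= hi:
                break  # within one sublist, starts are sorted ascending
            if iv[1] > lo:
                hits.append(iv)
                # Children are contained in iv, so only an overlapping
                # parent can hold overlapping children.
                hits += self.overlapping(lo, hi, children)
        return hits

ncl = NCList([(0, 100), (10, 20), (15, 30), (40, 50), (90, 150)])
print(ncl.overlapping(25, 45))  # [(0, 100), (15, 30), (40, 50)]
```

MyNCList's contribution is to persist exactly this kind of nesting structure inside MySQL tables so that retrieval stays index-driven; the sketch above only shows the data structure itself.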

  7. Collaborative research between academia and industry using a large clinical trial database: a case study in Alzheimer's disease

    Directory of Open Access Journals (Sweden)

    Jones Roy

    2011-10-01

    Full Text Available Abstract Background Large clinical trials databases, developed over the course of a comprehensive clinical trial programme, represent an invaluable resource for clinical researchers. Data mining projects sponsored by industry that use these databases, however, are often not viewed favourably in the academic medical community because of concerns that commercial, rather than scientific, goals are the primary purpose of such endeavours. Thus, there are few examples of sustained collaboration between leading academic clinical researchers and industry professionals in a large-scale data mining project. We present here a successful example of this type of collaboration in the field of dementia. Methods The Donepezil Data Repository comprised 18 randomised, controlled trials conducted between 1991 and 2005. The project team at Pfizer determined that the data mining process should be guided by a diverse group of leading Alzheimer's disease clinical researchers called the "Expert Working Group." After development of a list of potential faculty members, invitations were extended and a group of seven members was assembled. The Working Group met regularly with Eisai/Pfizer clinicians and statisticians to discuss the data, identify issues that were currently of interest in the academic and clinical communities that might lend themselves to investigation using these data, and note gaps in understanding or knowledge of Alzheimer's disease that these data could address. Leadership was provided by the Pfizer Clinical Development team leader; Working Group members rotated responsibility for being lead and co-lead for each investigation and resultant publication. Results Six manuscripts, each published in a leading subspecialty journal, resulted from the group's work. Another project resulted in poster presentations at international congresses and two were cancelled due to resource constraints. Conclusions The experience represents a particular approach to

  8. Mathematics for Databases

    NARCIS (Netherlands)

    ir. Sander van Laar

    2007-01-01

    A formal description of a database consists of the description of the relations (tables) of the database together with the constraints that must hold on the database. Furthermore the contents of a database can be retrieved using queries. These constraints and queries for databases can very well be

  9. Resource Survey Relational Database Management System

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Mississippi Laboratories employ both enterprise and localized data collection systems for recording data. The databases utilized by these applications range from...

  10. Clinical characteristics and outcomes of myxedema coma: Analysis of a national inpatient database in Japan

    Directory of Open Access Journals (Sweden)

    Yosuke Ono

    2017-04-01

    Full Text Available Background: Myxedema coma is a life-threatening and emergency presentation of hypothyroidism. However, the clinical features and outcomes of this condition have been poorly defined because of its rarity. Methods: We conducted a retrospective observational study of patients diagnosed with myxedema coma from July 2010 through March 2013 using a national inpatient database in Japan. We investigated characteristics, comorbidities, treatments, and in-hospital mortality of patients with myxedema coma. Results: We identified 149 patients diagnosed with myxedema coma out of approximately 19 million inpatients in the database. The mean (standard deviation) age was 77 (12) years, and two-thirds of the patients were female. The overall proportion of in-hospital mortality among cases was 29.5%. The number of patients was highest in the winter season. Patients treated with steroids, catecholamines, or mechanical ventilation showed higher in-hospital mortality than those without. Variations in type and dosage of thyroid hormone replacement were not associated with in-hospital mortality. The most common comorbidity was cardiovascular diseases (40.3%). The estimated incidence of myxedema coma was 1.08 per million people per year in Japan. Multivariable logistic regression analysis revealed that higher age and use of catecholamines (with or without steroids) were significantly associated with higher in-hospital mortality. Conclusions: The present study identified the clinical characteristics and outcomes of patients with myxedema coma using a large-scale database. Myxedema coma mortality was independently associated with age and severe conditions requiring treatment with catecholamines.

  11. Seventy Years of RN Effectiveness: A Database Development Project to Inform Best Practice.

    Science.gov (United States)

    Lulat, Zainab; Blain-McLeod, Julie; Grinspun, Doris; Penney, Tasha; Harripaul-Yhap, Anastasia; Rey, Michelle

    2018-03-23

    The appropriate nursing staff mix is imperative to the provision of quality care. Nurse staffing levels and staff mix vary from country to country, as well as between care settings. Understanding how staffing skill mix impacts patient, organizational, and financial outcomes is critical in order to allow policymakers and clinicians to make evidence-informed staffing decisions. This paper reports on the methodology for creation of an electronic database of studies exploring the effectiveness of Registered Nurses (RNs) on clinical and patient outcomes, organizational and nurse outcomes, and financial outcomes. Comprehensive literature searches were conducted in four electronic databases. Inclusion criteria for the database included studies published from 1946 to 2016, peer-reviewed international literature, and studies focused on RNs in all health-care disciplines, settings, and sectors. Masters-prepared nurse researchers conducted title and abstract screening and relevance review to determine eligibility of studies for the database. High-level analysis was conducted to determine key outcomes and the frequency at which they appeared within the database. Of the initial 90,352 records, a total of 626 abstracts were included within the database. Studies were organized into three groups corresponding to clinical and patient outcomes, organizational and nurse-related outcomes, and financial outcomes. Organizational and nurse-related outcomes represented the largest category in the database with 282 studies, followed by clinical and patient outcomes with 244 studies, and lastly financial outcomes, which included 124 studies. The comprehensive database of evidence for RN effectiveness is freely available at https://rnao.ca/bpg/initiatives/RNEffectiveness. The database will serve as a resource for the Registered Nurses' Association of Ontario, as well as a tool for researchers, clinicians, and policymakers for making evidence-informed staffing decisions. © 2018 The Authors

  12. Evolution of the use of relational and NoSQL databases in the ATLAS experiment

    CERN Document Server

    Barberis, Dario; The ATLAS collaboration

    2015-01-01

    The ATLAS experiment used for many years a large database infrastructure based on Oracle to store several different types of non-event data: time-dependent detector configuration and conditions data, calibrations and alignments, configurations of Grid sites, catalogues for data management tools, job records for distributed workload management tools, run and event metadata. The rapid development of “NoSQL” databases (structured storage services) in the last five years allowed an extended and complementary usage of traditional relational databases and new structured storage tools in order to improve the performance of existing applications and to extend their functionalities using the possibilities offered by the modern storage systems. The trend is towards using the best tool for each kind of data, separating for example the intrinsically relational metadata from payload storage, and records that are frequently updated and benefit from transactions from archived information. Access to all components has to...

  13. Evolution of the use of relational and NoSQL databases in the ATLAS experiment

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00064378; The ATLAS collaboration

    2016-01-01

    The ATLAS experiment used for many years a large database infrastructure based on Oracle to store several different types of non-event data: time-dependent detector configuration and conditions data, calibrations and alignments, configurations of Grid sites, catalogues for data management tools, job records for distributed workload management tools, run and event metadata. The rapid development of “NoSQL” databases (structured storage services) in the last five years allowed an extended and complementary usage of traditional relational databases and new structured storage tools in order to improve the performance of existing applications and to extend their functionalities using the possibilities offered by the modern storage systems. The trend is towards using the best tool for each kind of data, separating for example the intrinsically relational metadata from payload storage, and records that are frequently updated and benefit from transactions from archived information. Access to all components has to...

  14. Discovery of functional and approximate functional dependencies in relational databases

    Directory of Open Access Journals (Sweden)

    Ronald S. King

    2003-01-01

    Full Text Available This study develops the foundation for a simple, yet efficient method for uncovering functional and approximate functional dependencies in relational databases. The technique is based upon the mathematical theory of partitions defined over a relation's row identifiers. Using a levelwise algorithm the minimal non-trivial functional dependencies can be found using computations conducted on integers. Therefore, the required operations on partitions are both simple and fast. Additionally, the row identifiers provide the added advantage of nominally identifying the exceptions to approximate functional dependencies, which can be used effectively in practical data mining applications.
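    The partition criterion described above can be stated compactly: a functional dependency X → Y holds exactly when partitioning the rows by X yields the same number of equivalence classes as partitioning by X ∪ Y, and the row identifiers inside a class directly expose the exceptions to an approximate dependency. A hedged Python illustration (the toy relation and attribute names are invented; the paper's levelwise search over candidate attribute sets is omitted):

```python
from collections import defaultdict

def partition(rows, attrs):
    """Group row identifiers by their values on `attrs`; return the classes."""
    groups = defaultdict(list)
    for rid, row in enumerate(rows):
        groups[tuple(row[a] for a in attrs)].append(rid)
    return list(groups.values())

def holds(rows, lhs, rhs):
    """X -> Y holds iff partitioning by X and by X+Y gives the same
    number of equivalence classes (refining by Y splits nothing)."""
    return len(partition(rows, lhs)) == len(partition(rows, list(lhs) + list(rhs)))

# Toy relation: each row is a dict of attribute -> value.
rows = [
    {"emp": 1, "dept": "A", "mgr": "ann"},
    {"emp": 2, "dept": "A", "mgr": "ann"},
    {"emp": 3, "dept": "B", "mgr": "bob"},
]
print(holds(rows, ["dept"], ["mgr"]))  # True: dept -> mgr
print(holds(rows, ["mgr"], ["emp"]))   # False: a manager has several employees
```

Because the comparison is between counts of integer row-identifier classes, the check needs no string comparisons beyond building the partitions once, which is the efficiency argument the abstract makes.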

  15. Odense Pharmacoepidemiological Database (OPED)

    DEFF Research Database (Denmark)

    Hallas, Jesper; Poulsen, Maja Hellfritzsch; Hansen, Morten Rix

    2017-01-01

    The Odense University Pharmacoepidemiological Database (OPED) is a prescription database established in 1990 by the University of Southern Denmark, covering reimbursed prescriptions from the county of Funen in Denmark and the region of Southern Denmark (1.2 million inhabitants). It is still active and thereby has more than 25 years of continuous coverage. In this MiniReview, we review its history, content, quality, coverage, governance and some of its uses. OPED's data include the Danish Civil Registration Number (CPR), which enables unambiguous linkage with virtually all other health-related registers in Denmark. Among its research uses, we review record-linkage studies of drug effects, advanced drug utilization studies, some examples of method development and use of OPED as a sampling frame to recruit patients for field studies or clinical trials. With the advent of other, more comprehensive...

  16. NoSQL databases

    OpenAIRE

    Mrozek, Jakub

    2012-01-01

    This thesis deals with database systems referred to as NoSQL databases. In the second chapter, I explain basic terms and the theory of database systems. A short explanation is dedicated to database systems based on the relational data model and the SQL standardized query language. Chapter Three explains the concept and history of the NoSQL databases, and also presents database models, major features and the use of NoSQL databases in comparison with traditional database systems. In the fourth ...

  17. A protein relational database and protein family knowledge bases to facilitate structure-based design analyses.

    Science.gov (United States)

    Mobilio, Dominick; Walker, Gary; Brooijmans, Natasja; Nilakantan, Ramaswamy; Denny, R Aldrin; Dejoannis, Jason; Feyfant, Eric; Kowticwar, Rupesh K; Mankala, Jyoti; Palli, Satish; Punyamantula, Sairam; Tatipally, Maneesh; John, Reji K; Humblet, Christine

    2010-08-01

    The Protein Data Bank is the most comprehensive source of experimental macromolecular structures. It can, however, be difficult at times to locate relevant structures with the Protein Data Bank search interface. This is particularly true when searching for complexes containing specific interactions between protein and ligand atoms. Moreover, searching within a family of proteins can be tedious. For example, one cannot search for some conserved residue as residue numbers vary across structures. We describe herein three databases, Protein Relational Database, Kinase Knowledge Base, and Matrix Metalloproteinase Knowledge Base, containing protein structures from the Protein Data Bank. In Protein Relational Database, atom-atom distances between protein and ligand have been precalculated allowing for millisecond retrieval based on atom identity and distance constraints. Ring centroids, centroid-centroid and centroid-atom distances and angles have also been included permitting queries for pi-stacking interactions and other structural motifs involving rings. Other geometric features can be searched through the inclusion of residue pair and triplet distances. In Kinase Knowledge Base and Matrix Metalloproteinase Knowledge Base, the catalytic domains have been aligned into common residue numbering schemes. Thus, by searching across Protein Relational Database and Kinase Knowledge Base, one can easily retrieve structures wherein, for example, a ligand of interest is making contact with the gatekeeper residue.
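    The central idea of the Protein Relational Database, precomputing protein-ligand atom distances so that interaction searches become plain relational lookups instead of on-the-fly geometry, can be sketched with SQLite. The schema, atom-naming convention and PDB identifiers below are all hypothetical stand-ins; the abstract does not publish the real table layout.

```python
import sqlite3

# Hypothetical schema: one row per precomputed protein-ligand atom pair.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE contact (
        pdb_id    TEXT,
        prot_atom TEXT,  -- e.g. 'THR183.OG1' (residue + atom name)
        lig_atom  TEXT,  -- e.g. 'LIG1.N3'
        dist      REAL   -- precomputed distance in angstroms
    )""")
con.executemany("INSERT INTO contact VALUES (?,?,?,?)", [
    ("1ABC", "THR183.OG1", "LIG1.N3", 2.9),
    ("1ABC", "GLY16.N",    "LIG1.O2", 3.4),
    ("2XYZ", "THR183.OG1", "LIG1.N3", 4.8),
])
# With distances already stored, a "hydrogen-bond-range contact to a
# threonine hydroxyl" query is a simple indexed constraint, answerable
# in milliseconds rather than requiring coordinate arithmetic per query.
rows = con.execute("""
    SELECT pdb_id FROM contact
    WHERE prot_atom LIKE 'THR%.OG1' AND dist BETWEEN 2.5 AND 3.5
""").fetchall()
print(rows)  # [('1ABC',)]
```

The knowledge bases described in the abstract add one more ingredient on top of this: a common residue numbering per protein family, so that "the gatekeeper residue" becomes a constant in the WHERE clause across all kinase structures.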

  18. PrimateLit Database

    Science.gov (United States)

    PrimateLit: A bibliographic database for primatology (Primate Info Net / NCRR). The PrimateLit database is no longer being updated. The database is a collaborative project of the Wisconsin Primate... supported by the National Center for Research Resources, National Institutes of Health.

  19. The Danish ventral hernia database

    DEFF Research Database (Denmark)

    Helgstrand, Frederik; Jorgensen, Lars Nannestad

    2016-01-01

    Aim: The Danish Ventral Hernia Database (DVHD) provides national surveillance of current surgical practice and clinical postoperative outcomes. The intention is to reduce postoperative morbidity and hernia recurrence, evaluate new treatment strategies, and facilitate nationwide implementation of... Data related to the surgical repair are recorded. Data registration is mandatory. Data may be merged with other Danish health registries and information from patient questionnaires or clinical examinations. Descriptive data: More than 37,000 operations have been registered. Data have demonstrated high agreement with patient... The database covers a high number of operations and is an excellent tool for observing changes over time, including adjustment for several confounders. This national database registry has impacted clinical practice in Denmark and led to a high number of scientific publications in recent years.

  20. [Cardiologic application of a clinical database with graphic extension and its utilization in inter-hospital teleconsultation].

    Science.gov (United States)

    Cervesato, E; Nicolosi, G L; Zanuttini, D

    1995-10-01

    A local area network of personal computers has been operative in our Cardiology Department for seven years, to collect and retrieve on-line character-based data. At present, the network is based on 2 servers and 21 workstations. DBF and DOS files are used by a Clipper 5.2d compiled program to handle demographic data, clinical reports (32,000/year) and diagnostic codes of more than 52,000 patients. In the last two years, we started entering ECG tracings using RS232 connection, floppy disk transfer, and modem connection with commercially available machines, as well as by image scanner. We integrated our clinical database with three dedicated subsystems, written in Assembly and C languages, to manage drawings, digital ECGs and complete reports. Mass storage is provided by a 10 Gbyte magneto-optical disk autochanger physically connected to a dedicated server running an original software manager to optimize routine access to the optical disks. Interhospital network connections were established with two different institutions to allow clinical information sharing, long-distance consultation and ECG transfer. The system has been found to be fast, user-friendly and suitable for daily operation of a large cardiological database. Standardized versions of the system are running in seven other cardiology institutions in Italy.

  1. PDBj Mine: design and implementation of relational database interface for Protein Data Bank Japan.

    Science.gov (United States)

    Kinjo, Akira R; Yamashita, Reiko; Nakamura, Haruki

    2010-08-25

    This article is a tutorial for PDBj Mine, a new database and its interface for Protein Data Bank Japan (PDBj). In PDBj Mine, data are loaded from files in the PDBMLplus format (an extension of PDBML, PDB's canonical XML format, enriched with annotations), which are then served to users of PDBj via the World Wide Web (WWW). We describe the basic design of the relational database (RDB) and web interfaces of PDBj Mine. The contents of PDBMLplus files are first broken into XPath entities, and these paths and data are indexed in a way that reflects the hierarchical structure of the XML files. The data for each XPath type are saved into the corresponding relational table, which is named after the XPath itself. The generation of table definitions from the PDBMLplus XML schema is fully automated. For efficient search, frequently queried terms are compiled into a brief summary table. Casual users can perform simple keyword searches and an 'Advanced Search' that can specify various conditions on the entries. More experienced users can query the database using SQL statements, which can be constructed in a uniform manner. Thus, PDBj Mine achieves a combination of the flexibility of XML documents and the robustness of the RDB. Database URL: http://www.pdbj.org/
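    The core mapping, one relational table per XPath, can be illustrated with a toy XML fragment and SQLite standing in for the RDB. The XML below is invented for illustration (it is not real PDBMLplus), and the table-naming rule is a simplified stand-in for PDBj Mine's automated schema generation:

```python
import sqlite3
import xml.etree.ElementTree as ET

doc = ET.fromstring(
    "<entry><cell><length_a>61.8</length_a></cell>"
    "<cell><length_a>79.1</length_a></cell></entry>")

def walk(elem, path=""):
    """Yield (xpath, text) pairs for every text-bearing element."""
    here = f"{path}/{elem.tag}"
    if elem.text and elem.text.strip():
        yield here, elem.text.strip()
    for child in elem:
        yield from walk(child, here)

con = sqlite3.connect(":memory:")
for xpath, value in walk(doc):
    # Each XPath type gets its own table, named after the path itself.
    table = xpath.strip("/").replace("/", "_")
    con.execute(f'CREATE TABLE IF NOT EXISTS "{table}" (value TEXT)')
    con.execute(f'INSERT INTO "{table}" VALUES (?)', (value,))

print(con.execute('SELECT value FROM "entry_cell_length_a"').fetchall())
# [('61.8',), ('79.1',)]
```

Once the document is shredded this way, any value addressed by an XPath is reachable through an ordinary SQL query, which is the combination of XML flexibility and RDB robustness the tutorial describes.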

  2. Clinical characteristics and outcomes of myxedema coma: Analysis of a national inpatient database in Japan.

    Science.gov (United States)

    Ono, Yosuke; Ono, Sachiko; Yasunaga, Hideo; Matsui, Hiroki; Fushimi, Kiyohide; Tanaka, Yuji

    2017-03-01

    Myxedema coma is a life-threatening and emergency presentation of hypothyroidism. However, the clinical features and outcomes of this condition have been poorly defined because of its rarity. We conducted a retrospective observational study of patients diagnosed with myxedema coma from July 2010 through March 2013 using a national inpatient database in Japan. We investigated characteristics, comorbidities, treatments, and in-hospital mortality of patients with myxedema coma. We identified 149 patients diagnosed with myxedema coma out of approximately 19 million inpatients in the database. The mean (standard deviation) age was 77 (12) years, and two-thirds of the patients were female. The overall proportion of in-hospital mortality among cases was 29.5%. The number of patients was highest in the winter season. Patients treated with steroids, catecholamines, or mechanical ventilation showed higher in-hospital mortality than those without. Variations in type and dosage of thyroid hormone replacement were not associated with in-hospital mortality. The most common comorbidity was cardiovascular diseases (40.3%). The estimated incidence of myxedema coma was 1.08 per million people per year in Japan. Multivariable logistic regression analysis revealed that higher age and use of catecholamines (with or without steroids) were significantly associated with higher in-hospital mortality. The present study identified the clinical characteristics and outcomes of patients with myxedema coma using a large-scale database. Myxedema coma mortality was independently associated with age and severe conditions requiring treatment with catecholamines. Copyright © 2016 The Authors. Production and hosting by Elsevier B.V. All rights reserved.

  3. Using Large Diabetes Databases for Research.

    Science.gov (United States)

    Wild, Sarah; Fischbacher, Colin; McKnight, John

    2016-09-01

    There are an increasing number of clinical, administrative and trial databases that can be used for research. These are particularly valuable if there are opportunities for linkage to other databases. This paper describes examples of the use of large diabetes databases for research. It reviews the advantages and disadvantages of using large diabetes databases for research and suggests solutions for some challenges. Large, high-quality databases offer potential sources of information for research at relatively low cost. Fundamental issues for using databases for research are the completeness of capture of cases within the population and time period of interest and accuracy of the diagnosis of diabetes and outcomes of interest. The extent to which people included in the database are representative should be considered if the database is not population based and there is the intention to extrapolate findings to the wider diabetes population. Information on key variables such as date of diagnosis or duration of diabetes may not be available at all, may be inaccurate or may contain a large amount of missing data. Information on key confounding factors is rarely available for the nondiabetic or general population limiting comparisons with the population of people with diabetes. However comparisons that allow for differences in distribution of important demographic factors may be feasible using data for the whole population or a matched cohort study design. In summary, diabetes databases can be used to address important research questions. Understanding the strengths and limitations of this approach is crucial to interpret the findings appropriately. © 2016 Diabetes Technology Society.

  4. Databases and their application

    NARCIS (Netherlands)

    Grimm, E.C.; Bradshaw, R.H.W; Brewer, S.; Flantua, S.; Giesecke, T.; Lézine, A.M.; Takahara, H.; Williams, J.W.,Jr; Elias, S.A.; Mock, C.J.

    2013-01-01

    During the past 20 years, several pollen database cooperatives have been established. These databases are now constituent databases of the Neotoma Paleoecology Database, a public domain, multiproxy, relational database designed for Quaternary-Pliocene fossil data and modern surface samples. The

  5. Implementation of the Multidimensional Modeling Concepts into Object-Relational Databases

    Directory of Open Access Journals (Sweden)

    2007-01-01

    Full Text Available A key to survival in the business world is being able to analyze, plan and react to changing business conditions as fast as possible. With multidimensional models, managers can explore information at different levels of granularity, and decision makers at all levels can quickly respond to changes in the business climate, the ultimate goal of business intelligence. This paper focuses on the implementation of multidimensional concepts into object-relational databases.
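    The granularity idea can be sketched with plain SQL over a toy fact table: the same data answer a fine-grained (city-level) and a rolled-up (region-level) question simply by grouping on a different dimension attribute. SQLite is used here for brevity, so the object-relational features the paper targets (e.g. user-defined types) are not shown, and all table and column names are invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    -- Fact table keyed by two dimension attributes (region > city) and time.
    CREATE TABLE sales (region TEXT, city TEXT, year INT, amount REAL);
    INSERT INTO sales VALUES
        ('North','Oslo',2006,100),('North','Oslo',2007,120),
        ('North','Bergen',2006,80),('South','Rome',2006,90);
""")
# Fine granularity: totals per city.
print(con.execute(
    "SELECT city, SUM(amount) FROM sales GROUP BY city ORDER BY city"
).fetchall())
# [('Bergen', 80.0), ('Oslo', 220.0), ('Rome', 90.0)]

# Coarse granularity: roll up along the hierarchy to region totals.
print(con.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
).fetchall())
# [('North', 300.0), ('South', 90.0)]
```

An object-relational implementation would typically model the region-city hierarchy as a typed dimension object rather than two flat columns, but the drill-down/roll-up behaviour exposed to the decision maker is the same.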

  6. The Politics of Information: Building a Relational Database To Support Decision-Making at a Public University.

    Science.gov (United States)

    Friedman, Debra; Hoffman, Phillip

    2001-01-01

    Describes creation of a relational database at the University of Washington supporting ongoing academic planning at several levels and affecting the culture of decision making. Addresses getting started; sharing the database; questions, worries, and issues; improving access to high-demand courses; the advising function; management of instructional…

  7. Using a Semi-Realistic Database to Support a Database Course

    Science.gov (United States)

    Yue, Kwok-Bun

    2013-01-01

    A common problem for university relational database courses is constructing effective databases for instruction and assignments. Highly simplified "toy" databases are easily available for teaching, learning, and practicing. However, they do not reflect the complexity and practical considerations that students encounter in real-world…

  8. Surgical-site infections and postoperative complications: agreement between the Danish Gynecological Cancer Database and a randomized clinical trial

    DEFF Research Database (Denmark)

    Antonsen, Sofie L; Meyhoff, Christian Sylvest; Lundvall, Lene

    2011-01-01

    OBJECTIVE: Surgical-site infections are serious complications and thorough follow-up is important for accurate surveillance. We aimed to compare the frequency of complications recorded in a clinical quality database with those noted in a randomized clinical trial with follow-up visits. DESIGN: ... between November 2006 and October 2008 and data from the DGCD. METHODS: Outcomes within 30 days from the trial and the database were compared and levels of agreement were calculated with kappa-statistics. MAIN OUTCOME MEASURES: Primary outcome was surgical-site infection. Other outcomes included re-operation... RESULTS: ... registered in the PROXI trial, but not in the DGCD. Agreement between secondary outcomes varied widely (kappa-value 0.77 for re-operation, 0.37 for urinary tract infections, 0.19 for sepsis and 0.18 for pneumonia). CONCLUSIONS: The randomized trial reported significantly more surgical-site infections...

  9. [Cystic Fibrosis Cloud database: An information system for storage and management of clinical and microbiological data of cystic fibrosis patients].

    Science.gov (United States)

    Prieto, Claudia I; Palau, María J; Martina, Pablo; Achiary, Carlos; Achiary, Andrés; Bettiol, Marisa; Montanaro, Patricia; Cazzola, María L; Leguizamón, Mariana; Massillo, Cintia; Figoli, Cecilia; Valeiras, Brenda; Perez, Silvia; Rentería, Fernando; Diez, Graciela; Yantorno, Osvaldo M; Bosch, Alejandra

    2016-01-01

The epidemiological and clinical management of cystic fibrosis (CF) patients suffering from acute pulmonary exacerbations or chronic lung infections demands continuous updating of medical and microbiological processes associated with the constant evolution of pathogens during host colonization. In order to monitor the dynamics of these processes, it is essential to have expert systems capable of storing and subsequently extracting the information generated from different studies of the patients and of the microorganisms isolated from them. In this work we have designed and developed an online database, based on an information system, that allows users to store, manage and visualize data from clinical studies and microbiological analysis of bacteria obtained from the respiratory tract of patients suffering from cystic fibrosis. The information system, named Cystic Fibrosis Cloud database, is available at http://servoy.infocomsa.com/cfc_database and is composed of a main database and a web-based interface, which uses Servoy's product architecture based on Java technology. Although the CFC database system can be implemented as a local program for private use in CF centers, it can also be used, updated and shared by different users, who can access the stored information in a systematic, practical and safe manner. The implementation of the CFC database could have a significant impact on the monitoring of respiratory infections, the prevention of exacerbations, the detection of emerging organisms, and the adequacy of control strategies for lung infections in CF patients. Copyright © 2015 Asociación Argentina de Microbiología. Published by Elsevier España, S.L.U. All rights reserved.

  10. Systematic review of clinical practice guidelines related to multiple sclerosis.

    Directory of Open Access Journals (Sweden)

    Jia Guo

Full Text Available BACKGROUND: High quality clinical practice guidelines (CPGs) can provide clinicians with explicit recommendations on how to manage health conditions and bridge the gap between research and clinical practice. Unfortunately, the quality of CPGs for multiple sclerosis (MS) has not been evaluated. OBJECTIVE: To evaluate the methodological quality of CPGs on MS using the AGREE II instrument. METHODS: According to the inclusion and exclusion criteria, we searched four databases and two websites related to CPGs, including the Cochrane Library, PubMed, EMBASE, DynaMed, the National Guideline Clearinghouse (NGC), and the Chinese Biomedical Literature database (CBM). The searches were performed on September 20th, 2013. All CPGs on MS were evaluated with the AGREE II instrument. The software used for analysis was SPSS 17.0. RESULTS: A total of 27 CPGs on MS met the inclusion criteria. The overall agreement among reviewers was good or substantial (ICC above 0.70). The mean scores for each of the six domains were as follows: scope and purpose (mean ± SD: 59.05 ± 16.13), stakeholder involvement (29.53 ± 17.67), rigor of development (31.52 ± 21.50), clarity of presentation (60.39 ± 13.73), applicability (27.08 ± 17.66), and editorial independence (28.70 ± 22.03). CONCLUSIONS: The methodological quality of CPGs for MS was acceptable for scope and purpose and for clarity of presentation. The developers of CPGs need to pay more attention to editorial independence, applicability, rigor of development and stakeholder involvement during the development process. The AGREE II instrument should be adopted by guideline developers.

  11. Electronic database of arterial aneurysms

    Directory of Open Access Journals (Sweden)

    Fabiano Luiz Erzinger

    2014-12-01

Full Text Available Background: The creation of an electronic database facilitates the storage of information and streamlines the exchange of data, making the exchange of knowledge for future research easier. Objective: To construct an electronic database containing comprehensive and up-to-date clinical and surgical data on the most common arterial aneurysms, to help advance scientific research. Methods: The most important specialist textbooks and articles found in journals and on internet databases were reviewed in order to define the basic structure of the protocol. Data were computerized using the SINPE© system for integrated electronic protocols and tested in a pilot study. Results: The data entered onto the system were first used to create a Master protocol, organized into a structure of top-level directories covering a large proportion of the content on vascular diseases, as follows: patient history; physical examination; supplementary tests and examinations; diagnosis; treatment; and clinical course. By selecting items from the Master protocol, Specific protocols were then created for the 22 arterial sites most often involved by aneurysms. The program provides a method for collection of data on patients, including clinical characteristics (patient history and physical examination), supplementary tests and examinations, treatments received and follow-up care after treatment. Any information of interest on these patients that is contained in the protocol can then be used to query the database and select data for studies. Conclusions: It proved possible to construct a database of clinical and surgical data on the arterial aneurysms of greatest interest and, by adapting the data to specific software, the database was integrated into the SINPE© system, thereby providing a standardized method for collection of data on these patients and tools for retrieving this information in an organized manner for use in scientific studies.

  12. Integrating pattern mining in relational databases

    NARCIS (Netherlands)

    Calders, T.; Goethals, B.; Prado, A.; Fürnkranz, J.; Scheffer, T.; Spiliopoulou, M.

    2006-01-01

    Almost a decade ago, Imielinski and Mannila introduced the notion of Inductive Databases to manage KDD applications just as DBMSs successfully manage business applications. The goal is to follow one of the key DBMS paradigms: building optimizing compilers for ad hoc queries. During the past decade,

  13. Database design using entity-relationship diagrams

    CERN Document Server

    Bagui, Sikha

    2011-01-01

Contents (excerpt): Data, Databases, and the Software Engineering Process (Data; Building a Database; What is the Software Engineering Process?); Entity Relationship Diagrams and the Software Engineering Life Cycle (Phase 1: Get the Requirements for the Database; Phase 2: Specify the Database; Phase 3: Design the Database); Data and Data Models (Files, Records, and Data Items; Moving from 3 × 5 Cards to Computers); Database Models (The Hierarchical Model; The Network Model; The Relational Model); The Relational Model and Functional Dependencies (Fundamental Relational Database; Relational Database and Sets; Functional ...)

  14. Building a recruitment database for asthma trials: a conceptual framework for the creation of the UK Database of Asthma Research Volunteers.

    Science.gov (United States)

    Nwaru, Bright I; Soyiri, Ireneous N; Simpson, Colin R; Griffiths, Chris; Sheikh, Aziz

    2016-05-26

    Randomised clinical trials are the 'gold standard' for evaluating the effectiveness of healthcare interventions. However, successful recruitment of participants remains a key challenge for many trialists. In this paper, we present a conceptual framework for creating a digital, population-based database for the recruitment of asthma patients into future asthma trials in the UK. Having set up the database, the goal is to then make it available to support investigators planning asthma clinical trials. The UK Database of Asthma Research Volunteers will comprise a web-based front-end that interactively allows participant registration, and a back-end that houses the database containing participants' key relevant data. The database will be hosted and maintained at a secure server at the Asthma UK Centre for Applied Research based at The University of Edinburgh. Using a range of invitation strategies, key demographic and clinical data will be collected from those pre-consenting to consider participation in clinical trials. These data will, with consent, in due course, be linkable to other healthcare, social, economic, and genetic datasets. To use the database, asthma investigators will send their eligibility criteria for participant recruitment; eligible participants will then be informed about the new trial and asked if they wish to participate. A steering committee will oversee the running of the database, including approval of usage access. Novel communication strategies will be utilised to engage participants who are recruited into the database in order to avoid attrition as a result of waiting time to participation in a suitable trial, and to minimise the risk of their being approached when already enrolled in a trial. The value of this database will be whether it proves useful and usable to researchers in facilitating recruitment into clinical trials on asthma and whether patient privacy and data security are protected in meeting this aim. 

  15. Clinical characteristics and outcomes of myxedema coma: Analysis of a national inpatient database in Japan

    OpenAIRE

    Ono, Yosuke; Ono, Sachiko; Yasunaga, Hideo; Matsui, Hiroki; Fushimi, Kiyohide; Tanaka, Yuji

    2017-01-01

    Background: Myxedema coma is a life-threatening and emergency presentation of hypothyroidism. However, the clinical features and outcomes of this condition have been poorly defined because of its rarity. Methods: We conducted a retrospective observational study of patients diagnosed with myxedema coma from July 2010 through March 2013 using a national inpatient database in Japan. We investigated characteristics, comorbidities, treatments, and in-hospital mortality of patients with myxedem...

  16. Artemis and ACT: viewing, annotating and comparing sequences stored in a relational database.

    Science.gov (United States)

    Carver, Tim; Berriman, Matthew; Tivey, Adrian; Patel, Chinmay; Böhme, Ulrike; Barrell, Barclay G; Parkhill, Julian; Rajandream, Marie-Adèle

    2008-12-01

    Artemis and Artemis Comparison Tool (ACT) have become mainstream tools for viewing and annotating sequence data, particularly for microbial genomes. Since its first release, Artemis has been continuously developed and supported with additional functionality for editing and analysing sequences based on feedback from an active user community of laboratory biologists and professional annotators. Nevertheless, its utility has been somewhat restricted by its limitation to reading and writing from flat files. Therefore, a new version of Artemis has been developed, which reads from and writes to a relational database schema, and allows users to annotate more complex, often large and fragmented, genome sequences. Artemis and ACT have now been extended to read and write directly to the Generic Model Organism Database (GMOD, http://www.gmod.org) Chado relational database schema. In addition, a Gene Builder tool has been developed to provide structured forms and tables to edit coordinates of gene models and edit functional annotation, based on standard ontologies, controlled vocabularies and free text. Artemis and ACT are freely available (under a GPL licence) for download (for MacOSX, UNIX and Windows) at the Wellcome Trust Sanger Institute web sites: http://www.sanger.ac.uk/Software/Artemis/ http://www.sanger.ac.uk/Software/ACT/
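A database-backed annotation viewer like this works by issuing range queries against the Chado schema, where annotations live in a generic feature table and their coordinates in featureloc. The following is a minimal illustrative sketch only, using an in-memory SQLite stand-in with invented data (real Chado instances run on PostgreSQL and the tables carry many more columns, e.g. type is a cvterm foreign key rather than a text field):

```python
import sqlite3

# Heavily simplified stand-in for the Chado 'feature' and 'featureloc' tables.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE feature (
    feature_id INTEGER PRIMARY KEY,
    uniquename TEXT NOT NULL,
    type TEXT NOT NULL             -- simplified: Chado uses a cvterm foreign key
);
CREATE TABLE featureloc (
    feature_id INTEGER REFERENCES feature(feature_id),
    srcfeature_id INTEGER,         -- the chromosome/contig the feature sits on
    fmin INTEGER, fmax INTEGER, strand INTEGER
);
""")
con.execute("INSERT INTO feature VALUES (1, 'chr1', 'contig')")
con.execute("INSERT INTO feature VALUES (2, 'geneA', 'gene')")
con.execute("INSERT INTO featureloc VALUES (2, 1, 1000, 2500, 1)")

# The kind of range query a viewer runs when the user scrolls to bases 0..2000
# of chr1: fetch every feature whose interval overlaps the visible window.
rows = con.execute("""
    SELECT f.uniquename, l.fmin, l.fmax
    FROM feature f JOIN featureloc l USING (feature_id)
    WHERE l.srcfeature_id = 1 AND l.fmax > 0 AND l.fmin < 2000
""").fetchall()
print(rows)  # [('geneA', 1000, 2500)]
```

Reading and writing through such queries, rather than flat files, is what lets the tool handle large, fragmented genomes and concurrent annotators.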

  17. Value of shared preclinical safety studies - The eTOX database.

    Science.gov (United States)

    Briggs, Katharine; Barber, Chris; Cases, Montserrat; Marc, Philippe; Steger-Hartmann, Thomas

    2015-01-01

    A first analysis of a database of shared preclinical safety data for 1214 small molecule drugs and drug candidates extracted from 3970 reports donated by thirteen pharmaceutical companies for the eTOX project (www.etoxproject.eu) is presented. Species, duration of exposure and administration route data were analysed to assess if large enough subsets of homogenous data are available for building in silico predictive models. Prevalence of treatment related effects for the different types of findings recorded were analysed. The eTOX ontology was used to determine the most common treatment-related clinical chemistry and histopathology findings reported in the database. The data were then mined to evaluate sensitivity of established in vivo biomarkers for liver toxicity risk assessment. The value of the database to inform other drug development projects during early drug development is illustrated by a case study.

  18. The Danish Collaborative Bacteraemia Network (DACOBAN) database

    DEFF Research Database (Denmark)

    Gradel, Kim Oren; Schønheyder, Henrik Carl; Arpi, Magnus

    2014-01-01

The Danish Collaborative Bacteraemia Network (DACOBAN) research database includes microbiological data obtained from positive blood cultures from a geographically and demographically well-defined population serviced by three clinical microbiology departments (1.7 million residents, 32% of the Danish population). The database also includes data on comorbidity from the Danish National Patient Registry, vital status from the Danish Civil Registration System, and clinical data on 31% of nonselected records in the database. Use of the unique civil registration number given to all Danish residents enables linkage to additional registries for specific research projects. The DACOBAN database is continuously updated, and it currently comprises 39,292 patients with 49,951 bacteremic episodes from 2000 through 2011. The database is part of an international network of population-based bacteremia ...

  19. Dansk Hjerteregister--en klinisk database

    DEFF Research Database (Denmark)

    Abildstrøm, Steen Zabell; Kruse, Marie; Rasmussen, Søren

    2008-01-01

INTRODUCTION: The Danish Heart Registry (DHR) keeps track of all coronary angiographies (CATH), percutaneous coronary interventions (PCI), coronary artery bypass grafting (CABG), and adult heart valve surgery performed in Denmark. DHR is a clinical database established in order to follow the acti...

  20. Global search tool for the Advanced Photon Source Integrated Relational Model of Installed Systems (IRMIS) database

    International Nuclear Information System (INIS)

    Quock, D.E.R.; Cianciarulo, M.B.

    2007-01-01

    The Integrated Relational Model of Installed Systems (IRMIS) is a relational database tool that has been implemented at the Advanced Photon Source to maintain an updated account of approximately 600 control system software applications, 400,000 process variables, and 30,000 control system hardware components. To effectively display this large amount of control system information to operators and engineers, IRMIS was initially built with nine Web-based viewers: Applications Organizing Index, IOC, PLC, Component Type, Installed Components, Network, Controls Spares, Process Variables, and Cables. However, since each viewer is designed to provide details from only one major category of the control system, the necessity for a one-stop global search tool for the entire database became apparent. The user requirements for extremely fast database search time and ease of navigation through search results led to the choice of Asynchronous JavaScript and XML (AJAX) technology in the implementation of the IRMIS global search tool. Unique features of the global search tool include a two-tier level of displayed search results, and a database data integrity validation and reporting mechanism.

  1. A Framework for Mapping User-Designed Forms to Relational Databases

    Science.gov (United States)

    Khare, Ritu

    2011-01-01

    In the quest for database usability, several applications enable users to design custom forms using a graphical interface, and forward engineer the forms into new databases. The path-breaking aspect of such applications is that users are completely shielded from the technicalities of database creation. Despite this innovation, the process of…

  2. The Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI): A Completed Reference Database of Lung Nodules on CT Scans

    International Nuclear Information System (INIS)

    2011-01-01

Purpose: The development of computer-aided diagnostic (CAD) methods for lung nodule detection, classification, and quantitative assessment can be facilitated through a well-characterized repository of computed tomography (CT) scans. The Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI) completed such a database, establishing a publicly available reference for the medical imaging research community. Initiated by the National Cancer Institute (NCI), further advanced by the Foundation for the National Institutes of Health (FNIH), and accompanied by the Food and Drug Administration (FDA) through active participation, this public-private partnership demonstrates the success of a consortium founded on a consensus-based process. Methods: Seven academic centers and eight medical imaging companies collaborated to identify, address, and resolve challenging organizational, technical, and clinical issues to provide a solid foundation for a robust database. The LIDC/IDRI Database contains 1018 cases, each of which includes images from a clinical thoracic CT scan and an associated XML file that records the results of a two-phase image annotation process performed by four experienced thoracic radiologists. In the initial blinded-read phase, each radiologist independently reviewed each CT scan and marked lesions belonging to one of three categories ("nodule ≥3 mm," "nodule <3 mm," and "non-nodule ≥3 mm"). In the subsequent unblinded-read phase, each radiologist independently reviewed their own marks along with the anonymized marks of the three other radiologists to render a final opinion. The goal of this process was to identify as completely as possible all lung nodules in each CT scan without requiring forced consensus. Results: The Database contains 7371 lesions marked "nodule" by at least one radiologist. 2669 of these lesions were marked "nodule ≥3 mm" by at least one radiologist, of which 928 (34.7%) received such marks from all ...
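Because each case pairs a CT scan with an XML file of per-radiologist marks, consuming the database typically starts with parsing that XML and tallying marks per category. The sketch below uses an invented, heavily simplified layout to show the idea; the real LIDC/IDRI annotation schema is considerably more elaborate (nested reading sessions, ROI contours, characteristics):

```python
import xml.etree.ElementTree as ET
from collections import Counter

# Invented toy layout loosely inspired by the LIDC annotation idea;
# NOT the real LIDC/IDRI XML schema.
xml_doc = """
<readingSession radiologist="R1">
  <mark category="nodule&gt;=3mm" id="n1"/>
  <mark category="nodule&lt;3mm" id="n2"/>
  <mark category="nodule&gt;=3mm" id="n3"/>
  <mark category="non-nodule&gt;=3mm" id="x1"/>
</readingSession>
"""

root = ET.fromstring(xml_doc)
# Tally how many marks fall into each lesion category for this reader.
counts = Counter(mark.get("category") for mark in root.iter("mark"))
print(counts["nodule>=3mm"])  # 2
```

Aggregating such per-reader tallies across the four radiologists is how counts like "marked nodule by at least one radiologist" are derived.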

  3. Establishment of a regional Danish database for patients with a stoma.

    Science.gov (United States)

    Danielsen, A K; Christensen, B M; Mortensen, J; Voergaard, L L; Herlufsen, P; Balleby, L

    2015-01-01

    To present the Danish Stoma Database Capital Region with clinical variables related to stoma creation including colostomy, ileostomy and urostomy. The stomatherapists in the Capital Region of Denmark developed a database covering patient identifiers, interventions, conditions, short-term outcome, long-term outcome and known major confounders. The completeness of data was validated against the Danish National Patient Register. In 2013, five hospitals included data from 1123 patients who were registered during the year. The types of stomas formed from 2007 to 2013 showed a variation reflecting the subspecialization and surgical techniques in the centres. Between 92 and 94% of patients agreed to participate in the standard programme aimed at handling of the stoma and more than 88% of patients having planned surgery had the stoma site marked pre-operatively. The database is fully operational with high data completeness and with data about patients with a stoma from before surgery up to 12 months after surgery. The database provides a solid basis for professional learning, clinical research and benchmarking. Colorectal Disease © 2014 The Association of Coloproctology of Great Britain and Ireland.

  4. The Danish Prostate Cancer Database

    DEFF Research Database (Denmark)

    Nguyen-Nielsen, Mary; Høyer, Søren; Friis, Søren

    2016-01-01

AIM OF DATABASE: The Danish Prostate Cancer Database (DAPROCAdata) is a nationwide clinical cancer database that has prospectively collected data on patients with incident prostate cancer in Denmark since February 2010. The overall aim of the DAPROCAdata is to improve the quality of prostate cancer care in Denmark by systematically collecting key clinical variables for the purposes of health care monitoring, quality improvement, and research. STUDY POPULATION: All Danish patients with histologically verified prostate cancer are included in the DAPROCAdata. MAIN VARIABLES: The DAPROCAdata variables include Gleason scores, cancer staging, prostate-specific antigen values, and therapeutic measures (active surveillance, surgery, radiotherapy, endocrine therapy, and chemotherapy). DESCRIPTIVE DATA: In total, 22,332 patients with prostate cancer were registered in DAPROCAdata as of April 2015 ...

  5. Clever generation of rich SPARQL queries from annotated relational schema: application to Semantic Web Service creation for biological databases.

    Science.gov (United States)

    Wollbrett, Julien; Larmande, Pierre; de Lamotte, Frédéric; Ruiz, Manuel

    2013-04-15

    In recent years, a large amount of "-omics" data have been produced. However, these data are stored in many different species-specific databases that are managed by different institutes and laboratories. Biologists often need to find and assemble data from disparate sources to perform certain analyses. Searching for these data and assembling them is a time-consuming task. The Semantic Web helps to facilitate interoperability across databases. A common approach involves the development of wrapper systems that map a relational database schema onto existing domain ontologies. However, few attempts have been made to automate the creation of such wrappers. We developed a framework, named BioSemantic, for the creation of Semantic Web Services that are applicable to relational biological databases. This framework makes use of both Semantic Web and Web Services technologies and can be divided into two main parts: (i) the generation and semi-automatic annotation of an RDF view; and (ii) the automatic generation of SPARQL queries and their integration into Semantic Web Services backbones. We have used our framework to integrate genomic data from different plant databases. BioSemantic is a framework that was designed to speed integration of relational databases. We present how it can be used to speed the development of Semantic Web Services for existing relational biological databases. Currently, it creates and annotates RDF views that enable the automatic generation of SPARQL queries. Web Services are also created and deployed automatically, and the semantic annotations of our Web Services are added automatically using SAWSDL attributes. BioSemantic is downloadable at http://southgreen.cirad.fr/?q=content/Biosemantic.
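The core of the second part, generating SPARQL from an annotated relational schema, boils down to translating a column-to-ontology-property mapping into triple patterns. The sketch below illustrates that translation step only; every table, column, and property URI is invented for the example and does not reflect BioSemantic's actual mapping format or output:

```python
# Toy mapping from relational columns to ontology properties, in the
# spirit of an annotated RDF view; all names and prefixes are invented.
mapping = {
    "gene": {
        "name":       "obo:gene_symbol",
        "chromosome": "obo:located_on",
    }
}

def build_sparql(table, columns, mapping):
    """Generate a SELECT query whose triple patterns bind each requested
    column of `table` via its annotated ontology property."""
    triples = [
        f"  ?{table} {mapping[table][col]} ?{col} ."
        for col in columns
    ]
    select = " ".join(f"?{col}" for col in columns)
    return "SELECT {} WHERE {{\n{}\n}}".format(select, "\n".join(triples))

query = build_sparql("gene", ["name", "chromosome"], mapping)
print(query)
```

In the real framework this generation is automatic once the RDF view has been annotated, and the resulting queries are wrapped as Semantic Web Services rather than returned as strings.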

  6. Clinical Databases and Registries in Congenital and Pediatric Cardiac Surgery, Cardiology, Critical Care, and Anesthesiology Worldwide.

    Science.gov (United States)

    Vener, David F; Gaies, Michael; Jacobs, Jeffrey P; Pasquali, Sara K

    2017-01-01

The growth in large-scale data management capabilities and the successful care of patients with congenital heart defects have coincidentally paralleled each other for the last three decades, and participation in multicenter congenital heart disease databases and registries is now a fundamental component of cardiac care. This manuscript attempts for the first time to consolidate in one location all of the relevant databases worldwide, including target populations, specialties, Web sites, and participation information. Since at least 1992, cardiac surgeons and cardiologists have leveraged this burgeoning technology to create multi-institutional data collections addressing a variety of specialties within this field. Pediatric heart diseases are particularly well suited to this methodology because each individual care location has access to only a relatively limited number of diagnoses and procedures in any given calendar year. Combining data from multiple institutions therefore allows for a far more accurate contemporaneous assessment of treatment modalities and adverse outcomes. Additionally, the data can be used to develop outcome benchmarks by which individual institutions can measure their progress against the field as a whole and focus quality improvement efforts in a more directed fashion, and clinical research efforts are increasingly being combined within existing data structures. Efforts are ongoing to support better collaboration and integration across data sets, to improve efficiency, to further the utility of the data collection infrastructure and the information collected, and to enhance return on investment for participating institutions.

  7. Relational database hybrid model, of high performance and storage capacity for nuclear engineering applications

    International Nuclear Information System (INIS)

    Gomes Neto, Jose

    2008-01-01

The objective of this work is to present the relational database, named FALCAO, created and implemented to support the storage of the variables monitored in the IEA-R1 research reactor, located at the Instituto de Pesquisas Energeticas e Nucleares, IPEN/CNEN-SP. The data logical model and its direct influence on the integrity of the provided information are carefully considered. The concepts and steps of normalization and denormalization, including the entities and relations involved in the logical model, are presented, together with the effects of the model rules on the acquisition, loading and availability of the final information, viewed from a performance perspective, since the acquisition process loads and provides large amounts of information at short intervals. The SACD application, through its functionalities, presents the information stored in the FALCAO database in a practical and optimized form. The implementation of the FALCAO database was successful, and the database is now essential to the routine of the researchers involved, not only due to the substantial improvement of the process but also due to the reliability associated with it. (author)

  8. Locating relevant patient information in electronic health record data using representations of clinical concepts and database structures.

    Science.gov (United States)

    Pan, Xuequn; Cimino, James J

    2014-01-01

    Clinicians and clinical researchers often seek information in electronic health records (EHRs) that are relevant to some concept of interest, such as a disease or finding. The heterogeneous nature of EHRs can complicate retrieval, risking incomplete results. We frame this problem as the presence of two gaps: 1) a gap between clinical concepts and their representations in EHR data and 2) a gap between data representations and their locations within EHR data structures. We bridge these gaps with a knowledge structure that comprises relationships among clinical concepts (including concepts of interest and concepts that may be instantiated in EHR data) and relationships between clinical concepts and the database structures. We make use of available knowledge resources to develop a reproducible, scalable process for creating a knowledge base that can support automated query expansion from a clinical concept to all relevant EHR data.
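The knowledge structure described above can be pictured as two lookup relations: concept-to-narrower-concept links (bridging gap 1) and concept-to-database-location links (bridging gap 2); query expansion is then a traversal over both. The sketch below is a toy illustration under that reading, with all concept names, codes, and table locations invented rather than taken from any real EHR or terminology:

```python
# Toy knowledge base: concept -> narrower concepts (gap 1), and
# concept -> (table, column/value) locations in the EHR (gap 2).
# All names and codes are invented for illustration.
narrower = {
    "diabetes": ["type 1 diabetes", "type 2 diabetes"],
    "type 1 diabetes": [],
    "type 2 diabetes": [],
}
locations = {
    "type 1 diabetes": [("diagnosis", "icd_code=E10")],
    "type 2 diabetes": [("diagnosis", "icd_code=E11"), ("labs", "hba1c")],
}

def expand(concept):
    """Depth-first expansion of a clinical concept of interest to every
    EHR location instantiating it or one of its narrower concepts."""
    hits, stack, seen = [], [concept], set()
    while stack:
        c = stack.pop()
        if c in seen:
            continue
        seen.add(c)
        hits.extend(locations.get(c, []))
        stack.extend(narrower.get(c, []))
    return hits

print(sorted(expand("diabetes")))
```

A query for "diabetes" thus reaches data recorded only under its subtypes, which is exactly the incompleteness risk the paper's two-gap framing targets.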

  9. Danish Urogynaecological Database

    DEFF Research Database (Denmark)

    Hansen, Ulla Darling; Gradel, Kim Oren; Larsen, Michael Due

    2016-01-01

The Danish Urogynaecological Database is established in order to ensure high quality of treatment for patients undergoing urogynecological surgery. The database contains details of all women in Denmark undergoing incontinence surgery or pelvic organ prolapse surgery, amounting to ~5,200 procedures ..., complications if relevant, implants used if relevant, and 3-6-month postoperative recording of symptoms, if any. A set of clinical quality indicators is being maintained by the steering committee for the database and is published in an annual report which also contains extensive descriptive statistics. The database has a completeness of over 90% of all urogynecological surgeries performed in Denmark. Some of the main variables have been validated using medical records as gold standard. The positive predictive value was above 90%. The data are used as a quality monitoring tool by the hospitals and in a number ...

  10. Artemis and ACT: viewing, annotating and comparing sequences stored in a relational database

    Science.gov (United States)

    Carver, Tim; Berriman, Matthew; Tivey, Adrian; Patel, Chinmay; Böhme, Ulrike; Barrell, Barclay G.; Parkhill, Julian; Rajandream, Marie-Adèle

    2008-01-01

    Motivation: Artemis and Artemis Comparison Tool (ACT) have become mainstream tools for viewing and annotating sequence data, particularly for microbial genomes. Since its first release, Artemis has been continuously developed and supported with additional functionality for editing and analysing sequences based on feedback from an active user community of laboratory biologists and professional annotators. Nevertheless, its utility has been somewhat restricted by its limitation to reading and writing from flat files. Therefore, a new version of Artemis has been developed, which reads from and writes to a relational database schema, and allows users to annotate more complex, often large and fragmented, genome sequences. Results: Artemis and ACT have now been extended to read and write directly to the Generic Model Organism Database (GMOD, http://www.gmod.org) Chado relational database schema. In addition, a Gene Builder tool has been developed to provide structured forms and tables to edit coordinates of gene models and edit functional annotation, based on standard ontologies, controlled vocabularies and free text. Availability: Artemis and ACT are freely available (under a GPL licence) for download (for MacOSX, UNIX and Windows) at the Wellcome Trust Sanger Institute web sites: http://www.sanger.ac.uk/Software/Artemis/ http://www.sanger.ac.uk/Software/ACT/ Contact: artemis@sanger.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:18845581

  11. A database of annotated promoters of genes associated with common respiratory and related diseases

    KAUST Repository

    Chowdhary, Rajesh

    2012-07-01

    Many genes have been implicated in the pathogenesis of common respiratory and related diseases (RRDs), yet the underlying mechanisms are largely unknown. Differential gene expression patterns in diseased and healthy individuals suggest that RRDs affect or are affected by modified transcription regulation programs. It is thus crucial to characterize implicated genes in terms of transcriptional regulation. For this purpose, we conducted a promoter analysis of genes associated with 11 common RRDs including allergic rhinitis, asthma, bronchiectasis, bronchiolitis, bronchitis, chronic obstructive pulmonary disease, cystic fibrosis, emphysema, eczema, psoriasis, and urticaria, many of which are thought to be genetically related. The objective of the present study was to obtain deeper insight into the transcriptional regulation of these disease-associated genes by annotating their promoter regions with transcription factors (TFs) and TF binding sites (TFBSs). We discovered many TFs that are significantly enriched in the target disease groups including associations that have been documented in the literature. We also identified a number of putative TFs/TFBSs that appear to be novel. The results of our analysis are provided in an online database that is freely accessible to researchers at http://www.respiratorygenomics.com. Promoter-associated TFBS information and related genomic features, such as histone modification sites, microsatellites, CpG islands, and SNPs, are graphically summarized in the database. Users can compare and contrast underlying mechanisms of specific RRDs relative to candidate genes, TFs, gene ontology terms, micro-RNAs, and biological pathways for the conduct of metaanalyses. This database represents a novel, useful resource for RRD researchers. Copyright © 2012 by the American Thoracic Society.

  12. A database of annotated promoters of genes associated with common respiratory and related diseases

    KAUST Repository

    Chowdhary, Rajesh; Tan, Sinlam; Pavesi, Giulio; Jin, Gg; Dong, Difeng; Mathur, Sameer K.; Burkart, Arthur; Narang, Vipin; Glurich, Ingrid E.; Raby, Benjamin A.; Weiss, Scott T.; Limsoon, Wong; Liu, Jun; Bajic, Vladimir B.

    2012-01-01

    Many genes have been implicated in the pathogenesis of common respiratory and related diseases (RRDs), yet the underlying mechanisms are largely unknown. Differential gene expression patterns in diseased and healthy individuals suggest that RRDs affect or are affected by modified transcription regulation programs. It is thus crucial to characterize implicated genes in terms of transcriptional regulation. For this purpose, we conducted a promoter analysis of genes associated with 11 common RRDs including allergic rhinitis, asthma, bronchiectasis, bronchiolitis, bronchitis, chronic obstructive pulmonary disease, cystic fibrosis, emphysema, eczema, psoriasis, and urticaria, many of which are thought to be genetically related. The objective of the present study was to obtain deeper insight into the transcriptional regulation of these disease-associated genes by annotating their promoter regions with transcription factors (TFs) and TF binding sites (TFBSs). We discovered many TFs that are significantly enriched in the target disease groups including associations that have been documented in the literature. We also identified a number of putative TFs/TFBSs that appear to be novel. The results of our analysis are provided in an online database that is freely accessible to researchers at http://www.respiratorygenomics.com. Promoter-associated TFBS information and related genomic features, such as histone modification sites, microsatellites, CpG islands, and SNPs, are graphically summarized in the database. Users can compare and contrast underlying mechanisms of specific RRDs relative to candidate genes, TFs, gene ontology terms, micro-RNAs, and biological pathways for the conduct of metaanalyses. This database represents a novel, useful resource for RRD researchers. Copyright © 2012 by the American Thoracic Society.

  13. Design And Implementation Of Tool For Detecting Anti-Patterns In Relational Database

    Directory of Open Access Journals (Sweden)

    Gaurav Kumar

    2017-07-01

Full Text Available Anti-patterns are poor solutions to design and implementation problems. Developers may introduce anti-patterns in their software systems because of time pressure, lack of understanding, poor communication, or lack of skills. Anti-patterns create problems in software maintenance and development. Database anti-patterns lead to complex and time-consuming query processing and loss of integrity constraints. Detecting anti-patterns could reduce costs, effort, and resources. Researchers have proposed approaches to detect anti-patterns in software development, but not much research has been done on database anti-patterns. This report presents two approaches to detect schema design anti-patterns in relational databases. Our first approach is based on pattern matching: we look for potential candidates based on schema patterns. The second approach is based on machine learning: we generate features of possible anti-patterns and build an SVM-based classifier to detect them. We examine four anti-patterns: (a) Multi-valued Attribute, (b) Naïve Tree, (c) Entity-Attribute-Value, and (d) Polymorphic Association. We measure the precision and recall of each approach and compare the results. The SVM-based approach provides better precision and recall given a larger training dataset.
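The schema-pattern-matching approach described in this abstract can be sketched as a simple heuristic over column names. The table names, column-name sets, and matching rule below are illustrative assumptions, not the tool's actual implementation:

```python
# Heuristic sketch of schema pattern matching for one of the four
# anti-patterns: Entity-Attribute-Value (EAV). A table whose columns look
# like a generic (entity, attribute, value) triple is flagged as a candidate.
# The column-name vocabularies here are invented for illustration.

def looks_like_eav(table_name, columns):
    """Flag a table whose columns resemble the generic EAV triple."""
    cols = {c.lower() for c in columns}
    entity_keys = {"entity_id", "entity", "object_id"}
    attr_keys = {"attribute", "attr_name", "key"}
    value_keys = {"value", "attr_value", "val"}
    return (bool(cols & entity_keys)
            and bool(cols & attr_keys)
            and bool(cols & value_keys))

schema = {
    "patient": ["id", "name", "birth_date"],
    "patient_attributes": ["entity_id", "attr_name", "value"],
}

candidates = [t for t, cols in schema.items() if looks_like_eav(t, cols)]
print(candidates)  # ['patient_attributes']
```

A real detector would inspect foreign keys and data as well; name matching alone is what makes this a first-pass filter rather than a verdict.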

  14. Vomiting and migraine-related clinical parameters in pediatric migraine.

    Science.gov (United States)

    Eidlitz-Markus, Tal; Haimi-Cohen, Yishai; Zeharia, Avraham

    2017-06-01

To investigate the characteristics of vomiting in pediatric migraineurs and the relationship of vomiting with other migraine-related parameters. The cohort included children and adolescents with migraine attending a headache clinic of a tertiary pediatric medical center from 2010 to 2016. Patients were identified by a retrospective database search. Data were collected from medical files. The presence of vomiting was associated with background and headache-related parameters. The study group included 453 patients, 210 boys (46.4%) and 243 girls (53.6%), of mean age 11.3 ± 3.7 years. Vomiting was reported by 161 patients (35.5%). On comparison of patients with and without vomiting, vomiting was found to be significantly associated with male gender (54% vs 42.1%, P …), younger age at migraine onset (8.0 ± 3. years vs 9.6 ± 3.7 years, P …), … migraine (67% vs 58.7%, P …), … migraine (24.1% vs 10.1%, P …), migraine in both parents (9.3% vs 3.1%, P = .007), and migraine in either parent (57.5% vs 45.5%, P = .02). The higher rate of vomiting in the younger patients and in the patients with awakening pain may be explained by a common underlying pathogenetic mechanism of vomiting and migraine involving autonomic nerve dysfunction/immaturity. The association of vomiting with parental migraine points to a genetic component of vomiting and migraine. It should be noted that some of the findings may simply reflect referral patterns in the tertiary clinic. © 2017 American Headache Society.

  15. The Effect of Relational Database Technology on Administrative Computing at Carnegie Mellon University.

    Science.gov (United States)

    Golden, Cynthia; Eisenberger, Dorit

    1990-01-01

    Carnegie Mellon University's decision to standardize its administrative system development efforts on relational database technology and structured query language is discussed and its impact is examined in one of its larger, more widely used applications, the university information system. Advantages, new responsibilities, and challenges of the…

  16. Integrating heterogeneous databases in clustered medical care environments using object-oriented technology

    Science.gov (United States)

    Thakore, Arun K.; Sauer, Frank

    1994-05-01

The organization of modern medical care environments into disease-related clusters, such as a cancer center, a diabetes clinic, etc., has the side-effect of introducing multiple heterogeneous databases, often containing similar information, within the same organization. This heterogeneity fosters incompatibility and prevents the effective sharing of data amongst applications at different sites. Although integration of heterogeneous databases is now feasible, in the medical arena this is often an ad hoc process, not founded on proven database technology or formal methods. In this paper we illustrate the use of a high-level object-oriented semantic association method to model information found in different databases into an integrated conceptual global model. We provide examples from the medical domain to illustrate an integration approach resulting in a consistent global view, without attacking the autonomy of the underlying databases.
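The idea of a conceptual global model over autonomous site databases can be sketched with per-site adapters that translate local records into one shared class. The class, site names, and field names below are invented for illustration; the paper's semantic association method is far richer:

```python
# Minimal sketch of an object-oriented "global view" over two heterogeneous
# site databases. Each adapter maps its local representation onto a shared
# Patient concept; the sites themselves remain autonomous.

class Patient:
    """Integrated concept shared by all sites."""
    def __init__(self, patient_id, name):
        self.patient_id = patient_id
        self.name = name

class CancerCenterAdapter:
    # This site stores patients as tuples: (mrn, surname)
    def __init__(self, rows):
        self.rows = rows
    def patients(self):
        return [Patient(mrn, surname) for mrn, surname in self.rows]

class DiabetesClinicAdapter:
    # This site stores patients as dicts with different key names
    def __init__(self, records):
        self.records = records
    def patients(self):
        return [Patient(r["id"], r["full_name"]) for r in self.records]

def global_view(*adapters):
    """Union of all sites, expressed in the shared conceptual model."""
    return [p for a in adapters for p in a.patients()]

site_a = CancerCenterAdapter([("A1", "Jones")])
site_b = DiabetesClinicAdapter([{"id": "B7", "full_name": "Smith"}])
names = sorted(p.name for p in global_view(site_a, site_b))
print(names)  # ['Jones', 'Smith']
```

Applications query only the global view, so schema differences between clusters stay hidden behind the adapters.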

  17. The UKNG database: a simple audit tool for interventional neuroradiology

    International Nuclear Information System (INIS)

    Millar, J.S.; Burke, M.

    2007-01-01

    The UK Neurointerventional Group (UKNG) has developed a unified database for the purposes of recording, analysis and clinical audit of neuroangiography and neurointerventional procedures. It has been in use since January 2002. The database utilizes an Access platform (Microsoft) comprising separate but linked programs for data collection and analysis. The program that analyses aneurysm therapy has been designed to mirror the criteria used in the International Subarachnoid Aneurysm Trial (ISAT). Data entered into the main database immediately update the analysis program producing clinical outcome scores in the form of a report. Our local database (Wessex) now contains records on more than 1,750 patients including nearly 350 aneurysm coilings and a total of approximately 500 neurointerventional, vascular procedures. Every time a new piece of information is added to the main database the reporting database is automatically updated which allows 'real-time' audit and analysis of one's clinical practice. The clinical outcome scores for aneurysm treatment are presented in such a way that we can directly compare our results with the 'Clinical Standard' set by ISAT. This database provides a unique opportunity to monitor and review practice at national level. The UKNG wishes to share this database with the wider neurointerventional community and a copy of the software can be obtained free of charge from the authors. (orig.)
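The UKNG design, in which every insert into the data-collection program immediately updates the reporting program, can be mimicked in SQL with a view: the report query is recomputed on every read, so new rows are visible at once. The table, procedure, and outcome names below are assumptions for illustration, not the UKNG schema:

```python
# 'Real-time' audit via a SQL view: the reporting side is a view over the
# data-entry table, so each new record is reflected in the report without
# any synchronization step.

import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE procedure_log (kind TEXT, outcome TEXT)")
# The "reporting database": a view, recomputed on every read.
con.execute("""CREATE VIEW coiling_report AS
               SELECT outcome, COUNT(*) AS n
               FROM procedure_log WHERE kind = 'aneurysm_coiling'
               GROUP BY outcome""")

con.execute("INSERT INTO procedure_log VALUES ('aneurysm_coiling', 'good')")
con.execute("INSERT INTO procedure_log VALUES ('aneurysm_coiling', 'good')")
con.execute("INSERT INTO procedure_log VALUES ('angiography', 'good')")

report = dict(con.execute("SELECT outcome, n FROM coiling_report"))
print(report)  # {'good': 2}
```

The original system uses two linked Access programs rather than a view, but the effect is the same: data entry and audit reporting share one source of truth.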

  18. The UKNG database: a simple audit tool for interventional neuroradiology

    Energy Technology Data Exchange (ETDEWEB)

    Millar, J.S.; Burke, M. [Southampton General Hospital, Departments of Neuroradiology and IT, Wessex Neurological Centre, Southampton (United Kingdom)

    2007-06-15

    The UK Neurointerventional Group (UKNG) has developed a unified database for the purposes of recording, analysis and clinical audit of neuroangiography and neurointerventional procedures. It has been in use since January 2002. The database utilizes an Access platform (Microsoft) comprising separate but linked programs for data collection and analysis. The program that analyses aneurysm therapy has been designed to mirror the criteria used in the International Subarachnoid Aneurysm Trial (ISAT). Data entered into the main database immediately update the analysis program producing clinical outcome scores in the form of a report. Our local database (Wessex) now contains records on more than 1,750 patients including nearly 350 aneurysm coilings and a total of approximately 500 neurointerventional, vascular procedures. Every time a new piece of information is added to the main database the reporting database is automatically updated which allows 'real-time' audit and analysis of one's clinical practice. The clinical outcome scores for aneurysm treatment are presented in such a way that we can directly compare our results with the 'Clinical Standard' set by ISAT. This database provides a unique opportunity to monitor and review practice at national level. The UKNG wishes to share this database with the wider neurointerventional community and a copy of the software can be obtained free of charge from the authors. (orig.)

  19. Predicting Customers Churn in a Relational Database

    Directory of Open Access Journals (Sweden)

    Catalin CIMPOERU

    2014-01-01

Full Text Available This paper explores how two classical classification models work and generate predictions through a commercial relational database management system (Microsoft SQL Server 2012). The aim of the paper is to accurately predict churn among a set of customers defined by various discrete and continuous variables, derived from three main data sources: the commercial transactions history; the users' behavior or events happening on their computers; and the specific identity information provided by the customers themselves. On the theoretical side, the paper presents the main concepts and ideas underlying the Decision Tree and Naïve Bayes classifiers and exemplifies some of them with hand-made calculations on the data being modeled by the software. On the analytical and practical side, the paper analyzes the graphs and tables generated by the classification models and also reveals the main data insights. In the end, the classifiers' accuracy is evaluated on test data. The most accurate classifier is chosen for generating predictions on the customers' data where the values of the response variable are not known.
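The kind of hand-made Naïve Bayes calculation the paper describes can be reproduced in a few lines: class priors times per-feature likelihoods, highest score wins. The tiny training set and feature names below are invented for illustration; a real model would be trained on the customer database:

```python
# Hand-rolled Naive Bayes over categorical features, in the spirit of the
# paper's worked examples. Each training row pairs a feature dict with a
# churn/stay label.

from collections import Counter

train = [
    ({"support": "yes", "purchase": "no"},  "churn"),
    ({"support": "yes", "purchase": "no"},  "churn"),
    ({"support": "no",  "purchase": "yes"}, "stay"),
    ({"support": "no",  "purchase": "yes"}, "stay"),
    ({"support": "yes", "purchase": "yes"}, "stay"),
]

def predict(x):
    classes = Counter(label for _, label in train)
    scores = {}
    for c, n_c in classes.items():
        score = n_c / len(train)           # prior P(c)
        for feat, val in x.items():        # likelihoods P(x_i | c)
            n_match = sum(1 for f, label in train
                          if label == c and f[feat] == val)
            score *= n_match / n_c
        scores[c] = score
    return max(scores, key=scores.get)

print(predict({"support": "yes", "purchase": "no"}))  # churn
```

For the example input, P(churn) · 1 · 1 = 0.4 beats P(stay) · 1/3 · 0 = 0, so the classifier returns "churn"; production systems add smoothing so unseen feature values do not zero out a class.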

  20. Clinical evaluation of radiation oncology greater area database (ROGAD). From 1992 to 1998

    International Nuclear Information System (INIS)

    Harauchi, Hajime; Inamura, Kiyonari; Umeda, Tokuo

    2001-01-01

Radiotherapy clinical records of 8,950 cases were collected from 251 hospitals between 1992 and 1998 through the activity of the Radiation Oncology Greater Area Database (ROGAD) under the Japanese Society for Therapeutic Radiology and Oncology (JASTRO), and the data were analyzed. Outlines of the analysis are presented in this paper and in five other papers in the series. Follow-up data on 814 cases from the 4th follow-up survey, carried out in 1998, were also retrieved and examined. A survey of case distribution according to ICD-O code for primary tumor region was worked out, and the chronological change in case distribution over these seven years was examined and is briefly stated in this paper. Case analyses for five topographical regions were also performed; five papers together with this paper describe the results. Comparison between ROGAD and the regular census revealed that the analyses of the clinical data collected by ROGAD from 1992 to 1998 reflect the real situation of radiation therapy in Japan. One reason is that ROGAD covers 34.7% of facilities and 36.1% of cases treated in Japan. Another reason is that we could reduce the rate of mis-registration and of blank registration items by improving the registration software with logical checks. From our seven years of experience with this ROGAD activity, we are confident that continued operation of the ROGAD database will bring much more accurate information on the radiation oncology situation in Japan. (author)

  1. Clever generation of rich SPARQL queries from annotated relational schema: application to Semantic Web Service creation for biological databases

    Science.gov (United States)

    2013-01-01

    Background In recent years, a large amount of “-omics” data have been produced. However, these data are stored in many different species-specific databases that are managed by different institutes and laboratories. Biologists often need to find and assemble data from disparate sources to perform certain analyses. Searching for these data and assembling them is a time-consuming task. The Semantic Web helps to facilitate interoperability across databases. A common approach involves the development of wrapper systems that map a relational database schema onto existing domain ontologies. However, few attempts have been made to automate the creation of such wrappers. Results We developed a framework, named BioSemantic, for the creation of Semantic Web Services that are applicable to relational biological databases. This framework makes use of both Semantic Web and Web Services technologies and can be divided into two main parts: (i) the generation and semi-automatic annotation of an RDF view; and (ii) the automatic generation of SPARQL queries and their integration into Semantic Web Services backbones. We have used our framework to integrate genomic data from different plant databases. Conclusions BioSemantic is a framework that was designed to speed integration of relational databases. We present how it can be used to speed the development of Semantic Web Services for existing relational biological databases. Currently, it creates and annotates RDF views that enable the automatic generation of SPARQL queries. Web Services are also created and deployed automatically, and the semantic annotations of our Web Services are added automatically using SAWSDL attributes. BioSemantic is downloadable at http://southgreen.cirad.fr/?q=content/Biosemantic. PMID:23586394
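BioSemantic's core step, generating SPARQL automatically from an annotated relational schema, can be sketched as string assembly over a column-to-ontology mapping. The annotation dictionary, ontology prefix, and variable names below are illustrative assumptions, not BioSemantic's actual RDF views:

```python
# Toy SPARQL generation from relational columns annotated with ontology
# properties: each requested column contributes one triple pattern and one
# projected variable.

annotations = {  # column -> (ontology property, SPARQL variable)
    "gene.name":       ("obo:gene_symbol", "?symbol"),
    "gene.chromosome": ("obo:located_on", "?chrom"),
}

def build_sparql(columns):
    triples = "\n  ".join(
        f"?gene {annotations[c][0]} {annotations[c][1]} ."
        for c in columns
    )
    vars_ = " ".join(annotations[c][1] for c in columns)
    return f"SELECT {vars_}\nWHERE {{\n  {triples}\n}}"

q = build_sparql(["gene.name", "gene.chromosome"])
print(q)
```

In the real framework the annotations are attached semi-automatically to an RDF view and the generated queries are wrapped as Semantic Web Services; this sketch only shows why annotated columns are enough to derive the query text.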

  2. Big Data Mining and Adverse Event Pattern Analysis in Clinical Drug Trials.

    Science.gov (United States)

    Federer, Callie; Yoo, Minjae; Tan, Aik Choon

    2016-12-01

Drug adverse events (AEs) are a major health threat to patients seeking medical treatment and a significant barrier in drug discovery and development. AEs are now required to be submitted during clinical trials and can be extracted from ClinicalTrials.gov (https://clinicaltrials.gov/), a database of clinical studies from around the world. By extracting drug and AE information from ClinicalTrials.gov and structuring it into a database, drug–AE relationships could be established for future drug development and repositioning. To our knowledge, current AE databases contain mainly U.S. Food and Drug Administration (FDA)-approved drugs. However, our database contains both FDA-approved and experimental compounds extracted from ClinicalTrials.gov. Our database contains 8,161 clinical trials covering 3,102,675 patients and 713,103 reported AEs. We extracted the information from ClinicalTrials.gov using a set of Python scripts, and then used regular expressions and a drug dictionary to process and structure the relevant information into a relational database. We performed data mining and pattern analysis of drug–AEs in our database. Our database can serve as a tool to assist researchers in discovering drug–AE relationships for developing, repositioning, and repurposing drugs.
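The extraction pipeline described here (regular expressions plus a drug dictionary, structured into a relational table) can be sketched end to end. The record text, regex, dictionary, and table schema below are invented for illustration and bear no relation to the actual ClinicalTrials.gov record format:

```python
# Regex + drug-dictionary extraction into a relational table, mimicking the
# shape of the pipeline in the abstract on a single made-up record.

import re
import sqlite3

drug_dictionary = {"aspirin", "ibuprofen"}
record = "NCT0000001: patients on Aspirin reported adverse event: headache"

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE drug_ae (trial TEXT, drug TEXT, ae TEXT)")

m = re.search(r"(NCT\d+): patients on (\w+) reported adverse event: (\w+)",
              record)
# The dictionary filters out matches that are not known drug names.
if m and m.group(2).lower() in drug_dictionary:
    con.execute("INSERT INTO drug_ae VALUES (?, ?, ?)",
                (m.group(1), m.group(2).lower(), m.group(3)))

rows = list(con.execute("SELECT * FROM drug_ae"))
print(rows)  # [('NCT0000001', 'aspirin', 'headache')]
```

Once drug–AE pairs are in a relational table, the pattern analysis the authors mention reduces to ordinary SQL grouping and joins.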

  3. Evaluating parallel relational databases for medical data analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Rintoul, Mark Daniel; Wilson, Andrew T.

    2012-03-01

    Hospitals have always generated and consumed large amounts of data concerning patients, treatment and outcomes. As computers and networks have permeated the hospital environment it has become feasible to collect and organize all of this data. This raises naturally the question of how to deal with the resulting mountain of information. In this report we detail a proof-of-concept test using two commercially available parallel database systems to analyze a set of real, de-identified medical records. We examine database scalability as data sizes increase as well as responsiveness under load from multiple users.

  4. A human friendly reporting and database system for brain PET analysis

    International Nuclear Information System (INIS)

    Jamzad, M.; Ishii, Kenji; Toyama, Hinako; Senda, Michio

    1996-01-01

We have developed a human friendly reporting and database system for clinical brain PET (Positron Emission Tomography) scans, which enables statistical data analysis on qualitative information obtained from image interpretation. Our system consists of a Brain PET Data (Input) Tool and Report Writing Tool. In the Brain PET Data Tool, findings and interpretations are input by selecting menu icons in a window panel instead of writing a free text. This method of input enables on-line data entry into and update of the database by means of pre-defined consistent words, which facilitates statistical data analysis. The Report Writing Tool generates a one page report of natural English sentences semi-automatically by using the above input information and the patient information obtained from our PET center's main database. It also has a keyword selection function from the report text so that we can save a set of keywords on the database for further analysis. By means of this system, we can store the data related to patient information and visual interpretation of the PET examination while writing clinical reports in daily work. The database files in our system can be accessed by means of commercially available databases. We have used the 4th Dimension database that runs on a Macintosh computer and analyzed 95 cases of 18F-FDG brain PET studies. The results showed high specificity of parietal hypometabolism for Alzheimer's patients. (author)
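Generating natural-language report sentences from menu-coded findings, as the Report Writing Tool does, amounts to mapping each pre-defined code to a phrase and filling a sentence template. The finding codes, phrases, and template below are assumptions for illustration, not the system's actual vocabulary:

```python
# Semi-automatic report generation from coded findings: because input is
# restricted to a pre-defined menu, each code maps deterministically to a
# consistent English phrase.

FINDING_PHRASES = {
    "parietal_hypometabolism": "decreased metabolism in the parietal cortex",
    "normal": "no focal abnormality",
}

def report_sentence(findings):
    phrases = [FINDING_PHRASES[f] for f in findings]
    return "The study shows " + " and ".join(phrases) + "."

s = report_sentence(["parietal_hypometabolism"])
print(s)  # The study shows decreased metabolism in the parietal cortex.
```

The same coded values double as database fields, which is what makes later statistical analysis possible without free-text parsing.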

  5. Historical return on investment and improved quality resulting from development and mining of a hospital laboratory relational database.

    Science.gov (United States)

    Brimhall, Bradley B; Hall, Timothy E; Walczak, Steven

    2006-01-01

    A hospital laboratory relational database, developed over eight years, has demonstrated significant cost savings and a substantial financial return on investment (ROI). In addition, the database has been used to measurably improve laboratory operations and the quality of patient care.

  6. Prevalence of rape-related pregnancy as an indication for abortion at two urban family planning clinics.

    Science.gov (United States)

    Perry, Rachel; Zimmerman, Lindsay; Al-Saden, Iman; Fatima, Aisha; Cowett, Allison; Patel, Ashlesha

    2015-05-01

We sought to estimate the prevalence of rape-related pregnancy as an indication for abortion at two public Chicago facilities and to describe demographic and clinical correlates of women who terminated rape-related pregnancies. We performed a cross-sectional study of women obtaining abortion at the Center for Reproductive Health (CRH) at University of Illinois Health Sciences Center and Reproductive Health Services (RHS) at John H. Stroger, Jr. Hospital between August 2009 and August 2013. Gestational age limits at CRH and RHS were 23+6 and 13+6 weeks, respectively. We estimated the prevalence of rape-related pregnancy based on billing code (CRH) or data from an administrative database (RHS), and examined relationships between rape-related pregnancy and demographic and clinical variables. Included were 19,465 visits for abortion. The majority of patients were Black (85.6%). Prevalence of abortion for rape-related pregnancy was 1.9%, and was higher at CRH (6.9%) than RHS (1.5%). Later gestational age was associated with abortion for rape-related pregnancy (median 12 days, P …); … rape-related pregnancy at CRH only (P …); … rape-related pregnancy than among those terminating for other indications. Rape-related pregnancy as an indication for abortion had a low, but clinically significant prevalence at two urban Chicago family planning centers. Later gestational age was associated with abortion for rape-related pregnancy. Rape-related pregnancy may occur with higher prevalence among some subgroups of women seeking abortion than others. Efforts to address rape-related pregnancy in the abortion care setting are needed. Copyright © 2015 Elsevier Inc. All rights reserved.

  7. A Novel Approach: Chemical Relational Databases, and the Role of the ISSCAN Database on Assessing Chemical Carcinogenity

    Science.gov (United States)

    Mutagenicity and carcinogenicity databases are crucial resources for toxicologists and regulators involved in chemicals risk assessment. Until recently, existing public toxicity databases have been constructed primarily as "look-up-tables" of existing data, and most often did no...

  8. An analysis of registered clinical trials in otolaryngology from 2007 to 2010: ClinicalTrials.gov.

    Science.gov (United States)

    Witsell, David L; Schulz, Kristine A; Lee, Walter T; Chiswell, Karen

    2013-11-01

    To describe the conditions studied, interventions used, study characteristics, and funding sources of otolaryngology clinical trials from the ClinicalTrials.gov database; compare this otolaryngology cohort of interventional studies to clinical visits in a health care system; and assess agreement between clinical trials and clinical activity. Database analysis. Trial registration data downloaded from ClinicalTrials.gov and administrative data from the Duke University Medical Center from October 1, 2007 to September 27, 2010. Data extraction from ClinicalTrials.gov was done using MeSH and non-MeSH disease condition terms. Studies were subcategorized to create the following groupings for descriptive analysis: ear, nose, allergy, voice, sleep, head and neck cancer, thyroid, and throat. Duke Health System visits were queried by using selected ICD-9 codes for otolaryngology and non-otolaryngology providers. Visits were grouped similarly to ClinicalTrials.gov for further analysis. Chi-square tests were used to explore differences between groups. A total of 1115 of 40,970 registered interventional trials were assigned to otolaryngology. Head and neck cancer trials predominated. Study models most frequently incorporated parallel design (54.6%), 2 study groups (46.6%), and randomization (69.1%). Phase 2 or 3 studies constituted 46.4% of the cohort. Comparison of the ClinicalTrials.gov database with administrative health system visit data by disease condition showed discordance between national research activity and clinical visit volume for patients with otolaryngology complaints. Analysis of otolaryngology-related clinical research as listed in ClinicalTrials.gov can inform patients, physicians, and policy makers about research focus areas. The relative burden of otolaryngology-associated conditions in our tertiary health system exceeds research activity within the field.

  9. The Danish Nonmelanoma Skin Cancer Dermatology Database

    DEFF Research Database (Denmark)

    Lamberg, Anna Lei; Sølvsten, Henrik; Lei, Ulrikke

    2016-01-01

    AIM OF DATABASE: The Danish Nonmelanoma Skin Cancer Dermatology Database was established in 2008. The aim of this database was to collect data on nonmelanoma skin cancer (NMSC) treatment and improve its treatment in Denmark. NMSC is the most common malignancy in the western countries and represents...... treatment. The database has revealed that overall, the quality of care of NMSC in Danish dermatological clinics is high, and the database provides the necessary data for continuous quality assurance....

  10. The Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI): A Completed Reference Database of Lung Nodules on CT Scans

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2011-02-15

Purpose: The development of computer-aided diagnostic (CAD) methods for lung nodule detection, classification, and quantitative assessment can be facilitated through a well-characterized repository of computed tomography (CT) scans. The Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI) completed such a database, establishing a publicly available reference for the medical imaging research community. Initiated by the National Cancer Institute (NCI), further advanced by the Foundation for the National Institutes of Health (FNIH), and accompanied by the Food and Drug Administration (FDA) through active participation, this public-private partnership demonstrates the success of a consortium founded on a consensus-based process. Methods: Seven academic centers and eight medical imaging companies collaborated to identify, address, and resolve challenging organizational, technical, and clinical issues to provide a solid foundation for a robust database. The LIDC/IDRI Database contains 1018 cases, each of which includes images from a clinical thoracic CT scan and an associated XML file that records the results of a two-phase image annotation process performed by four experienced thoracic radiologists. In the initial blinded-read phase, each radiologist independently reviewed each CT scan and marked lesions belonging to one of three categories ("nodule ≥3 mm," "nodule <3 mm," and "non-nodule ≥3 mm"). In the subsequent unblinded-read phase, each radiologist independently reviewed their own marks along with the anonymized marks of the three other radiologists to render a final opinion. The goal of this process was to identify as completely as possible all lung nodules in each CT scan without requiring forced consensus. Results: The Database contains 7371 lesions marked "nodule" by at least one radiologist. 2669 of these lesions were marked "nodule ≥3 mm" by at least one radiologist.
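The per-scan XML annotation files described above can be processed with any standard XML parser. The fragment below is made up and only imitates the spirit of the real LIDC schema (reader marks carrying a lesion category); the actual element names differ:

```python
# Reading radiologist marks from a LIDC-style XML annotation file and
# keeping only those in a "nodule" category. The element names here are
# illustrative, not the real LIDC/IDRI schema.

import xml.etree.ElementTree as ET

xml_doc = """
<readingSession>
  <mark><category>nodule&gt;=3mm</category><z>-125.0</z></mark>
  <mark><category>non-nodule</category><z>-110.5</z></mark>
</readingSession>
"""

root = ET.fromstring(xml_doc)
nodules = [m.find("z").text for m in root.iter("mark")
           if m.find("category").text.startswith("nodule")]
print(nodules)  # ['-125.0']
```

Because the two-phase annotation deliberately avoids forced consensus, downstream code typically aggregates such per-reader marks rather than assuming a single ground truth.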

  11. Logical database design principles

    CERN Document Server

    Garmany, John; Clark, Terry

    2005-01-01

INTRODUCTION TO LOGICAL DATABASE DESIGN: Understanding a Database; Database Architectures; Relational Databases; Creating the Database; System Development Life Cycle (SDLC); Systems Planning: Assessment and Feasibility; System Analysis: Requirements; System Analysis: Requirements Checklist; Models Tracking and Schedules; Design Modeling; Functional Decomposition Diagram; Data Flow Diagrams; Data Dictionary; Logical Structures and Decision Trees; System Design: Logical. SYSTEM DESIGN AND IMPLEMENTATION: The ER Approach; Entities and Entity Types; Attribute Domains; Attributes; Set-Valued Attributes; Weak Entities; Constraint

  12. Coupling computer-interpretable guidelines with a drug-database through a web-based system – The PRESGUID project

    Directory of Open Access Journals (Sweden)

    Fieschi Marius

    2004-03-01

Full Text Available Abstract Background Clinical Practice Guidelines (CPGs) available today are not extensively used due to lack of proper integration into clinical settings and knowledge-related information resources, and lack of decision support at the point of care in a particular clinical context. Objective The PRESGUID project (PREScription and GUIDelines) aims to improve the assistance provided by guidelines. The project proposes an online service enabling physicians to consult computerized CPGs linked to drug databases for easier integration into the healthcare process. Methods Computable CPGs are structured as decision trees and coded in XML format. Recommendations related to drug classes are tagged with ATC codes. We use a mapping module to couple computerized guidelines with a drug database, which contains detailed information about each usable specific medication. In this way, therapeutic recommendations are backed up with current and up-to-date information from the database. Results Two authoritative CPGs, originally disseminated as static textual documents, have been implemented to validate the computerization process and to illustrate the usefulness of the resulting automated CPGs and their coupling with a drug database. We discuss the advantages of this approach for practitioners and the implications for both guideline developers and drug database providers. Other CPGs will be implemented and evaluated in real conditions by clinicians working in different health institutions.
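Coupling a decision-tree CPG to a drug database through ATC codes can be sketched as two dictionaries and a lookup. The tree, the ATC class, and the drug entries below are invented for illustration, not PRESGUID's actual content:

```python
# Toy CPG decision tree whose recommendation node carries an ATC class
# code; the drug database resolves the class into specific, up-to-date
# medications at consultation time.

cpg_tree = {
    "question": "LDL above target?",
    "yes": {"recommend_atc": "C10AA"},   # HMG-CoA reductase inhibitors
    "no":  {"recommend_atc": None},
}

drug_db = {  # ATC class -> specific medications with details
    "C10AA": [{"name": "simvastatin", "usual_dose": "20 mg daily"}],
}

def recommend(answers):
    node = cpg_tree["yes"] if answers[cpg_tree["question"]] else cpg_tree["no"]
    atc = node["recommend_atc"]
    return drug_db.get(atc, []) if atc else []

drugs = recommend({"LDL above target?": True})
print([d["name"] for d in drugs])  # ['simvastatin']
```

Tagging recommendations with the ATC class rather than a drug name is what keeps the guideline stable while the drug database underneath is updated.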

  13. Existing data sources for clinical epidemiology: the Danish Patient Compensation Association database

    Directory of Open Access Journals (Sweden)

    Tilma J

    2015-07-01

    Full Text Available Jens Tilma,1 Mette Nørgaard,1 Kim Lyngby Mikkelsen,2 Søren Paaske Johnsen1 1Department of Clinical Epidemiology, Aarhus University Hospital, Aarhus, 2Danish Patient Compensation Association, Copenhagen, Denmark Abstract: Any patient in the Danish health care system who experiences a treatment injury can make a compensation claim to the Danish Patient Compensation Association (DPCA free of charge. The aim of this paper is to describe the DPCA database as a source of data for epidemiological research. Data to DPCA are collected prospectively on all claims and include information on patient factors and health records, system factors, and administrative data. Approval of claims is based on injury due to the principle of treatment below experienced specialist standard or intolerable, unexpected extensiveness of injury. Average processing time of a compensation claim is 6–8 months. Data collection is nationwide and started in 1992. The patient's central registration system number, a unique personal identifier, allows for data linkage to other registries such as the Danish National Patient Registry. The DPCA data are accessible for research following data usage permission and make it possible to analyze all claims or specific subgroups to identify predictors, outcomes, etc. DPCA data have until now been used only in few studies but could be a useful data source in future studies of health care-related injuries. Keywords: public health care, treatment injuries, no-fault compensation, registries, research, Denmark
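The data linkage the abstract describes, joining DPCA claims to other registries through the unique personal identifier, can be sketched as a keyed join. The field names and values below are assumptions for illustration:

```python
# Registry linkage via a shared personal identifier: each claim is enriched
# with fields from a second registry keyed on the same identifier.

claims = [{"person_id": "010190-1234", "claim_approved": True}]
patient_registry = {"010190-1234": {"admissions": 3}}

linked = [
    {**c, **patient_registry.get(c["person_id"], {})}
    for c in claims
]
print(linked)
# [{'person_id': '010190-1234', 'claim_approved': True, 'admissions': 3}]
```

In practice such linkage runs inside the national registry infrastructure under data usage permission; the point here is only that one stable identifier makes the join trivial.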

  14. Demonstration of SLUMIS: a clinical database and management information system for a multi organ transplant program.

    OpenAIRE

    Kurtz, M.; Bennett, T.; Garvin, P.; Manuel, F.; Williams, M.; Langreder, S.

    1991-01-01

    Because of the rapid evolution of the heart, heart/lung, liver, kidney and kidney/pancreas transplant programs at our institution, and because of a lack of an existing comprehensive database, we were required to develop a computerized management information system capable of supporting both clinical and research requirements of a multifaceted transplant program. SLUMIS (ST. LOUIS UNIVERSITY MULTI-ORGAN INFORMATION SYSTEM) was developed for the following reasons: 1) to comply with the reportin...

  15. [Validation of interaction databases in psychopharmacotherapy].

    Science.gov (United States)

    Hahn, M; Roll, S C

    2018-03-01

    Drug-drug interaction databases are an important tool for increasing drug safety in polypharmacy. Several drug interaction databases are available, but it is unclear which one performs best and therefore offers the greatest safety to database users and patients. So far, there has been no validation of German drug interaction databases. Validation of German drug interaction databases regarding the number of hits, mechanisms of drug interaction, references, clinical advice, and severity of the interaction. A total of 36 drug interactions published in the last 3-5 years were checked in 5 different databases. Besides the number of hits, it was also documented whether the mechanism was correct, clinical advice was given, primary literature was cited, and a severity level for the drug-drug interaction was given. All databases showed weaknesses in the hit rate for the tested drug interactions, with a maximum of 67.7% hits. The highest score in this validation was achieved by MediQ with 104 out of 180 points. PsiacOnline achieved 83 points, arznei-telegramm® 58, ifap index® 54 and the ABDA-database 49 points. Based on this validation, MediQ appears to be the most suitable database for the field of psychopharmacotherapy. Although MediQ achieved the best results in this comparison, its hit rate also needs improvement so that users can rely on the results and thereby increase drug therapy safety.

  16. An Interoperable Cartographic Database

    OpenAIRE

    Slobodanka Ključanin; Zdravko Galić

    2007-01-01

    The concept of producing a prototype of an interoperable cartographic database is explored in this paper, including the possibilities of integrating different geospatial data into the database management system and visualizing them on the Internet. The implementation includes vectorization of the concept of a single map page, creation of the cartographic database in an object-relational database, spatial analysis, and definition and visualization of the database content in the form of a map on t...

  17. An Investigation of the Fine Spatial Structure of Meteor Streams Using the Relational Database ``Meteor''

    Science.gov (United States)

    Karpov, A. V.; Yumagulov, E. Z.

    2003-05-01

    We have restored and ordered the archive of meteor observations carried out with a meteor radar complex ``KGU-M5'' since 1986. A relational database has been formed under the control of the Database Management System (DBMS) Oracle 8. We also improved and tested a statistical method for studying the fine spatial structure of meteor streams with allowance for the specific features of application of the DBMS. Statistical analysis of the results of observations made it possible to obtain information about the substance distribution in the Quadrantid, Geminid, and Perseid meteor streams.

  18. Brasilia’s Database Administrators

    Directory of Open Access Journals (Sweden)

    Jane Adriana

    2016-06-01

    Full Text Available Database administration has gained an essential role in the management of new database technologies. Different data models, beyond the traditional relational database, are being created to support enormous data volumes. These new models are called NoSQL (Not only SQL) databases. The adoption of best practices and procedures has become essential for the operation of database management systems. This paper therefore investigates some of the techniques and tools used by database administrators. The study highlights features and particularities of database administration within the area of Brasília, the capital of Brazil. The results point to the new database management technologies currently considered most relevant, as well as the central issues in this area.

  19. Metaphor-related figurative language comprehension in clinical populations: a critical review

    Directory of Open Access Journals (Sweden)

    Maity Siqueira

    2016-12-01

    Full Text Available This paper aims to critically review current studies with respect to definitions, methods, and results on the comprehension of metaphor, metonymy, idioms, and proverbs under the following clinical conditions: aphasia, Alzheimer's disease, autism, brain injuries, specific language impairment, and Williams Syndrome. A comprehensive search of experimental psycholinguistic research was conducted using the EBSCOhost, PsychInfo, PUBMED, and Web of Science databases. Thirty-eight studies met the review inclusion criteria. Results point to deficits in figurative language comprehension in all conditions considered, a lack of clear definitions of the phenomena investigated, and varied methods throughout the sample. Patients' difficulties are attributed to multiple factors, such as a lack of Theory of Mind, executive dysfunctions, and poor semantic knowledge. The study of nonliteral aspects of language comprehension in clinical populations reveals a range of disparate impairments. No specific feature of metaphor-related phenomena was identified that could, on its own, account for the difficulty some populations have in understanding figurative language. Rather, metaphor-related language comprehension difficulties are often part of pragmatic, linguistic, and/or cognitive impairments. Keywords: Figurative language. Metaphor. Metonymy. Proverb. Clinical populations

  20. An Interoperable Cartographic Database

    Directory of Open Access Journals (Sweden)

    Slobodanka Ključanin

    2007-05-01

    Full Text Available The concept of producing a prototype of an interoperable cartographic database is explored in this paper, including the possibilities of integrating different geospatial data into the database management system and visualizing them on the Internet. The implementation includes vectorization of the concept of a single map page, creation of the cartographic database in an object-relational database, spatial analysis, and definition and visualization of the database content in the form of a map on the Internet.

  1. The IAEA inventory databases related to radioactive material entering the marine environment

    International Nuclear Information System (INIS)

    Rastogi, R.C.; Sjoeblom, K.L.

    1999-01-01

    Contracting Parties to the Convention on the Prevention of Marine Pollution by Dumping of Wastes and other Matter (LC 1972) have requested the IAEA to develop an inventory of radioactive material entering the marine environment from all sources. The rationale for developing and maintaining the inventory is related to its use as an information base with which the impact of radionuclides entering the marine environment from different sources can be assessed and compared. Five anthropogenic sources of radionuclides entering the marine environment can be identified. These sources are: radioactive waste disposal at sea; accidents and losses at sea involving radioactive material; discharge of low level liquid effluents from land-based nuclear facilities; the fallout from nuclear weapons testing; and accidental releases from land-based nuclear facilities. The first two of these sources are most closely related to the objective of the LC 1972 and its request to the IAEA. This paper deals with the Agency's work on developing a database on radioactive material entering the marine environment from these two sources. The database has the acronym RAMEM (RAdioactive Material Entering the Marine Environment). It includes two modules: inventory of radioactive waste disposal at sea and inventory of accidents and losses at sea involving radioactive material

  2. Blood pressure variability in relation to outcome in the International Database of Ambulatory blood pressure in relation to Cardiovascular Outcome

    DEFF Research Database (Denmark)

    Stolarz-Skrzypek, Katarzyna; Thijs, Lutgarde; Richart, Tom

    2010-01-01

    Ambulatory blood pressure (BP) monitoring provides information not only on the BP level but also on the diurnal changes in BP. In the present review, we summarized the main findings of the International Database on Ambulatory BP in relation to Cardiovascular Outcome (IDACO) with regard to risk...

  3. BDVC (Bimodal Database of Violent Content): A database of violent audio and video

    Science.gov (United States)

    Rivera Martínez, Jose Luis; Mijes Cruz, Mario Humberto; Rodríguez Vázqu, Manuel Antonio; Rodríguez Espejo, Luis; Montoya Obeso, Abraham; García Vázquez, Mireya Saraí; Ramírez Acosta, Alejandro Álvaro

    2017-09-01

    Nowadays there is a trend towards the use of unimodal databases for multimedia content description, organization and retrieval applications of a single type of content, such as text, voice or images; bimodal databases, in contrast, make it possible to semantically associate two different types of content, such as audio-video or image-text. Generating a bimodal audio-video database implies creating a connection between the multimedia content through the semantic relation that associates the actions in both types of information. This paper describes in detail the characteristics and methodology used in the creation of the bimodal database of violent content; the semantic relationship is established by the proposed concepts that describe the audiovisual information. The use of bimodal databases in applications related to audiovisual content processing increases semantic performance if, and only if, those applications process both types of content. This bimodal database contains 580 annotated audiovisual segments, with a duration of 28 minutes, divided into 41 classes. Bimodal databases are a tool for generating applications for the semantic web.
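
The semantic audio-video linkage described in this record can be modelled minimally as segments that share a concept label. A hedged sketch using Python's stdlib sqlite3; the schema, concept names and timings are illustrative, not taken from the BDVC itself:

```python
import sqlite3

# Illustrative schema: each annotated segment carries a modality and a
# semantic concept label; the shared concept links audio to video.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE segment (
    id INTEGER PRIMARY KEY,
    modality TEXT CHECK (modality IN ('audio', 'video')),
    start_s REAL, end_s REAL,
    concept TEXT)""")
conn.executemany("INSERT INTO segment VALUES (?, ?, ?, ?, ?)", [
    (1, 'video', 0.0, 4.2, 'explosion'),
    (2, 'audio', 0.0, 4.2, 'explosion'),
    (3, 'video', 4.2, 9.0, 'gunshot'),
    (4, 'audio', 4.5, 9.0, 'gunshot'),
])

# Pair audio and video segments through their shared semantic concept.
pairs = conn.execute("""
    SELECT v.id, a.id, v.concept
    FROM segment v JOIN segment a ON v.concept = a.concept
    WHERE v.modality = 'video' AND a.modality = 'audio'
    ORDER BY v.id""").fetchall()
print(pairs)  # one audio-video pair per shared concept
```

The join is the point: a bimodal query only works because both modalities are annotated against the same concept vocabulary.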

  4. SFCOMPO 2.0 - A relational database of spent fuel isotopic measurements, reactor operational histories, and design data

    Science.gov (United States)

    Michel-Sendis, Franco; Martinez-González, Jesus; Gauld, Ian

    2017-09-01

    SFCOMPO-2.0 is a database of experimental isotopic concentrations measured in destructive radiochemical analysis of spent nuclear fuel (SNF) samples. The database includes corresponding design description of the fuel rods and assemblies, relevant operating conditions and characteristics of the host reactors necessary for modelling and simulation. Aimed at establishing a thorough, reliable, and publicly available resource for code and data validation of safety-related applications, SFCOMPO-2.0 is developed and maintained by the OECD Nuclear Energy Agency (NEA). The SFCOMPO-2.0 database is a Java application which is downloadable from the NEA website.

  5. A rapid MALDI-TOF MS identification database at genospecies level for clinical and environmental Aeromonas strains.

    Directory of Open Access Journals (Sweden)

    Cinzia Benagli

    Full Text Available The genus Aeromonas has undergone a number of taxonomic and nomenclature revisions over the past 20 years, and new (sub)species and biogroups are continuously described. Standard identification methods such as biochemical characterization have deficiencies and do not allow clarification of the taxonomic position. This report describes the development of a matrix-assisted laser desorption/ionisation-time of flight mass spectrometry (MALDI-TOF MS) identification database for the rapid identification of clinical and environmental Aeromonas isolates.

  6. Development of a Database for Study Data in Registration Applications for Veterinary Medicinal Products

    Directory of Open Access Journals (Sweden)

    Anke Finnah

    2017-02-01

    Full Text Available Objective: In the present study, the feasibility of a systematic record of clinical study data from marketing authorisation applications for veterinary medicinal products (VMPs) was investigated, along with the benefits of the selected approach. Background: Drug registration dossiers for veterinary medicinal products contain extensive data from drug studies which are not easily accessible to assessors. Evidentiary value: Fast access to these data, including specific search tools, could facilitate meaningful use of the data and allow assessors to compare tests and studies from different dossiers. Methods: First, pivotal test parameters and their mutual relationships were identified. Second, a data model was developed and implemented in a relational database management system, including a data entry form and various reports for database searches. Compilation of study data in the database was demonstrated using all available clinical studies involving VMPs containing the anthelmintic drug Praziquantel. Possibilities for data evaluation, including graphical presentation, were shown by means of descriptive data analysis. The suitability of the database to support meta-analyses was tentatively validated. Results: The data model was designed to cover the specific requirements arising from study data. A total of 308 clinical studies related to 95 VMPs containing Praziquantel (single-agent and combination drugs) were selected for prototype testing. The relevant data extracted from these studies were appropriately structured and shown to be suitable both for descriptive data analyses and for meta-analyses. Conclusion: The database-supported collection of study data would provide users with easy access to the continuously increasing pool of scientific information held by competent authorities and enables specific data analyses.
    The database design allows the data model to be expanded to all types of studies and classes of drugs registered in veterinary
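
A relational model of the kind this record describes, with products, studies, and cross-dossier descriptive analysis, might be sketched as below. The tables, column names, and figures are invented for illustration and are not the authors' data model:

```python
import sqlite3

# Hypothetical two-table model: one row per product (VMP), one per study.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE vmp (id INTEGER PRIMARY KEY, name TEXT, active_substance TEXT);
CREATE TABLE study (
    id INTEGER PRIMARY KEY,
    vmp_id INTEGER REFERENCES vmp(id),
    species TEXT, n_animals INTEGER, efficacy_pct REAL);
""")
db.execute("INSERT INTO vmp VALUES (1, 'WormStop', 'Praziquantel')")
db.execute("INSERT INTO vmp VALUES (2, 'CestoCure', 'Praziquantel')")
db.executemany("INSERT INTO study VALUES (?, ?, ?, ?, ?)", [
    (1, 1, 'dog', 20, 98.5),
    (2, 1, 'cat', 15, 97.0),
    (3, 2, 'dog', 30, 99.1),
])

# Cross-dossier descriptive analysis: mean efficacy per species, pooled
# over all products containing one active substance.
rows = db.execute("""
    SELECT s.species, COUNT(*), AVG(s.efficacy_pct)
    FROM study s JOIN vmp v ON s.vmp_id = v.id
    WHERE v.active_substance = 'Praziquantel'
    GROUP BY s.species ORDER BY s.species""").fetchall()
print(rows)  # one summary row per species
```

Pooling by active substance rather than by product is what makes comparison across dossiers possible.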

  7. Development of a relational database for nuclear material (NM) accounting in RC and I Group

    International Nuclear Information System (INIS)

    Yadav, M.B.; Ramakumar, K.L.; Venugopal, V.

    2011-01-01

    A relational database for nuclear material accounting in the RC and I Group has been developed, with MySQL for the back end and Java for the front end. The back end has been designed to avoid data redundancy, to provide random access to the data and to allow the required information to be retrieved from the database easily. Java Applet and Java Swing components have been used in the front-end development. The front end provides data security and data integrity, generates the inventory status report at the end of each accounting period, and also offers a quick on-screen view of required information. The database has been tested with data from three quarters of the year 2009. It has been in use since 1 January 2010 for the accounting of nuclear material in the RC and I Group. (author)

  8. Development of a relational database for nuclear material (NM) accounting in RC and I Group

    Energy Technology Data Exchange (ETDEWEB)

    Yadav, M B; Ramakumar, K L; Venugopal, V [Radioanalytical Chemistry Division, Radiochemistry and Isotope Group, Bhabha Atomic Research Centre, Mumbai (India)

    2011-07-01

    A relational database for nuclear material accounting in the RC and I Group has been developed, with MySQL for the back end and Java for the front end. The back end has been designed to avoid data redundancy, to provide random access to the data and to allow the required information to be retrieved from the database easily. Java Applet and Java Swing components have been used in the front-end development. The front end provides data security and data integrity, generates the inventory status report at the end of each accounting period, and also offers a quick on-screen view of required information. The database has been tested with data from three quarters of the year 2009. It has been in use since 1 January 2010 for the accounting of nuclear material in the RC and I Group. (author)
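
The records above describe a MySQL/Java system; as a hedged illustration of the core accounting query (an inventory status report at the end of an accounting period), the sketch below uses Python's stdlib sqlite3 and an invented transaction schema, not the authors' actual tables:

```python
import sqlite3

# Hypothetical ledger: positive quantities are receipts, negative removals.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE nm_transaction (
    id INTEGER PRIMARY KEY,
    material TEXT,
    qty_g REAL,        -- positive = receipt, negative = removal
    tx_date TEXT)""")
db.executemany("INSERT INTO nm_transaction VALUES (?, ?, ?, ?)", [
    (1, 'U-nat', 120.0, '2009-01-15'),
    (2, 'U-nat', -20.0, '2009-02-10'),
    (3, 'Th-232', 50.0, '2009-03-01'),
])

# Inventory status report at the end of an accounting period: net holdings
# per material from all transactions up to the period's closing date.
report = db.execute("""
    SELECT material, SUM(qty_g) FROM nm_transaction
    WHERE tx_date <= '2009-03-31'
    GROUP BY material ORDER BY material""").fetchall()
print(report)
```

Deriving the report from the transaction ledger, instead of storing balances directly, is one common way to avoid the data redundancy the record mentions.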

  9. The Danish Head and Neck Cancer database

    DEFF Research Database (Denmark)

    Overgaard, Jens; Jovanovic, Aleksandar; Godballe, Christian

    2016-01-01

    AIM OF THE DATABASE: The Danish Head and Neck Cancer database is a nationwide clinical quality database that contains prospective data collected since the early 1960s. The overall aim of this study was to describe the outcome of the national strategy for multidisciplinary treatment of head and neck... of cancer in the nasal sinuses, salivary glands, or thyroid gland (corresponding to the International Classification of Diseases, tenth revision, classifications C.01-C.11, C.30-C.32, C.73, and C.80). MAIN VARIABLES: The main variables used in the study were symptoms and the duration of the symptoms... of continuous clinical trials and subsequent implementation in national guidelines. The database has furthermore been used to describe the effect of reduced waiting time, changed epidemiology, and influence of comorbidity and socioeconomic parameters. CONCLUSION: Half a century of registration of head and neck...

  10. Development of a dementia assessment quality database

    DEFF Research Database (Denmark)

    Johannsen, P.; Jørgensen, Kasper; Korner, A.

    2011-01-01

    OBJECTIVE: Increased focus on the quality of health care requires tools and information to address and improve quality. One tool for evaluating and reporting the quality of clinical health services is quality indicators based on a clinical database. METHOD: The Capital Region of Denmark runs a quality database for dementia evaluation in the secondary health system. One volume indicator and seven process quality indicators for dementia evaluations are monitored. Indicators include the frequency of demented patients, the percentage of patients evaluated within three months, and whether the work-up included blood tests, Mini... for the data analyses. RESULTS: The database was constructed in 2005 and covers 30% of the Danish population. Data from all consecutive cases evaluated for dementia in the secondary health system in the Capital Region of Denmark are entered. The database has shown that the basic diagnostic work-up programme...

  11. Development of prostate cancer research database with the clinical data warehouse technology for direct linkage with electronic medical record system.

    Science.gov (United States)

    Choi, In Young; Park, Seungho; Park, Bumjoon; Chung, Byung Ha; Kim, Choung-Soo; Lee, Hyun Moo; Byun, Seok-Soo; Lee, Ji Youl

    2013-01-01

    In spite of the increasing number of prostate cancer patients, little is known about the impact of treatments for prostate cancer patients or the outcomes of different treatments based on nationwide data. In order to obtain more comprehensive information on Korean prostate cancer patients, many professionals have urged the creation of a national system to monitor the quality of prostate cancer care. To achieve this objective, the prostate cancer database system was planned and carefully accommodated different views from various professions. This prostate cancer research database system incorporates information about prostate cancer research, including demographics, medical history, operation information, laboratory data, and quality-of-life surveys. The system includes three different ways of collecting clinical data to produce a comprehensive database: direct data extraction from the electronic medical record (EMR) system, manual data entry after linking EMR documents such as magnetic resonance imaging findings, and paper-based data collection for surveys from patients. We implemented clinical data warehouse technology to test the direct EMR link method with the St. Mary's Hospital system. Using this method, the total number of eligible patients was 2,300 from 1997 until 2012. Among them, 538 patients underwent surgery and the others received different treatments. Our database system could provide the infrastructure for collecting error-free data to support various retrospective and prospective studies.

  12. A data model and database for high-resolution pathology analytical image informatics.

    Science.gov (United States)

    Wang, Fusheng; Kong, Jun; Cooper, Lee; Pan, Tony; Kurc, Tahsin; Chen, Wenjin; Sharma, Ashish; Niedermayr, Cristobal; Oh, Tae W; Brat, Daniel; Farris, Alton B; Foran, David J; Saltz, Joel

    2011-01-01

    The systematic analysis of imaged pathology specimens often results in a vast amount of morphological information at both the cellular and sub-cellular scales. While microscopy scanners and computerized analysis are capable of capturing and analyzing data rapidly, microscopy image data remain underutilized in research and clinical settings. One major obstacle which tends to reduce wider adoption of these new technologies throughout the clinical and scientific communities is the challenge of managing, querying, and integrating the vast amounts of data resulting from the analysis of large digital pathology datasets. This paper presents a data model, which addresses these challenges, and demonstrates its implementation in a relational database system. This paper describes a data model, referred to as Pathology Analytic Imaging Standards (PAIS), and a database implementation, which are designed to support the data management and query requirements of detailed characterization of micro-anatomic morphology through many interrelated analysis pipelines on whole-slide images and tissue microarrays (TMAs). (1) Development of a data model capable of efficiently representing and storing virtual slide-related image, annotation, markup, and feature information. (2) Development of a database, based on the data model, capable of supporting queries for data retrieval based on analysis and image metadata, queries for comparison of results from different analyses, and spatial queries on segmented regions, features, and classified objects. The work described in this paper is motivated by the challenges associated with characterization of micro-scale features for comparative and correlative analyses involving whole-slide tissue images and TMAs. Technologies for digitizing tissues have advanced significantly in the past decade. Slide scanners are capable of producing high-magnification, high-resolution images from whole slides and TMAs within several minutes. Hence, it is becoming
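
One of the query classes listed, spatial queries on segmented regions, can be illustrated with a minimal bounding-box intersection test. The region records and the bounding-box simplification are assumptions for the sketch; PAIS itself stores much richer polygon markup:

```python
# Toy region index: each segmented object reduced to its bounding box.
regions = [
    {"id": "nucleus-1", "bbox": (10, 10, 40, 40), "cls": "nucleus"},
    {"id": "nucleus-2", "bbox": (120, 80, 160, 110), "cls": "nucleus"},
    {"id": "vessel-1", "bbox": (30, 25, 300, 60), "cls": "vessel"},
]

def intersects(bbox, viewport):
    """True if a region's bounding box overlaps the query viewport."""
    x0, y0, x1, y1 = bbox
    vx0, vy0, vx1, vy1 = viewport
    return x0 < vx1 and vx0 < x1 and y0 < vy1 and vy0 < y1

# Spatial query: which segmented objects fall inside a 100x100 viewport?
hits = [r["id"] for r in regions if intersects(r["bbox"], (0, 0, 100, 100))]
print(hits)
```

Production systems index such boxes with an R-tree rather than a linear scan, but the intersection predicate is the same.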

  13. A data model and database for high-resolution pathology analytical image informatics

    Directory of Open Access Journals (Sweden)

    Fusheng Wang

    2011-01-01

    Full Text Available Background: The systematic analysis of imaged pathology specimens often results in a vast amount of morphological information at both the cellular and sub-cellular scales. While microscopy scanners and computerized analysis are capable of capturing and analyzing data rapidly, microscopy image data remain underutilized in research and clinical settings. One major obstacle which tends to reduce wider adoption of these new technologies throughout the clinical and scientific communities is the challenge of managing, querying, and integrating the vast amounts of data resulting from the analysis of large digital pathology datasets. This paper presents a data model, which addresses these challenges, and demonstrates its implementation in a relational database system. Context: This paper describes a data model, referred to as Pathology Analytic Imaging Standards (PAIS), and a database implementation, which are designed to support the data management and query requirements of detailed characterization of micro-anatomic morphology through many interrelated analysis pipelines on whole-slide images and tissue microarrays (TMAs). Aims: (1) Development of a data model capable of efficiently representing and storing virtual slide-related image, annotation, markup, and feature information. (2) Development of a database, based on the data model, capable of supporting queries for data retrieval based on analysis and image metadata, queries for comparison of results from different analyses, and spatial queries on segmented regions, features, and classified objects. Settings and Design: The work described in this paper is motivated by the challenges associated with characterization of micro-scale features for comparative and correlative analyses involving whole-slide tissue images and TMAs. Technologies for digitizing tissues have advanced significantly in the past decade. Slide scanners are capable of producing high-magnification, high-resolution images from whole

  14. The optimization and validation of the Biotyper MALDI-TOF MS database for the identification of Gram-positive anaerobic cocci

    DEFF Research Database (Denmark)

    Veloo, A C M; de Vries, E D; Jean-Pierre, H

    2016-01-01

    Gram-positive anaerobic cocci (GPAC) account for 24%-31% of the anaerobic bacteria isolated from human clinical specimens. At present, GPAC are under-represented in the Biotyper MALDI-TOF MS database, and profiles of new species have yet to be added. We present the optimization of the matrix-assisted laser desorption-ionization time-of-flight mass spectrometry (MALDI-TOF MS) database for the identification of GPAC. Main spectral profiles (MSPs) were created for 108 clinical GPAC isolates. Identity was confirmed using 16S rRNA gene sequencing. Species identification was considered to be reliable if the sequence similarity with its closest relative was ≥98.7%. The optimized database was validated using 140 clinical isolates. The 16S rRNA sequencing identity was compared with the MALDI-TOF MS result. MSPs were added for 17 species that were not yet represented in the MALDI-TOF MS database or were under...

  15. Using relational databases to collect and store discrete-event simulation results

    DEFF Research Database (Denmark)

    Poderys, Justas; Soler, José

    2016-01-01

    ... export the results to a data carrier file and then process the results stored in the file using data processing software. In this work, we propose to save simulation results directly from the simulation tool to a computer database. We implemented a link between the discrete-event simulation tool and the database and performed a performance evaluation of 3 different open-source database systems. We show that, with the right choice of database system, simulation results can be collected and exported up to 2.67 times faster, and use 1.78 times less disk space, when compared to using the simulation software's built...
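
The idea in this record, streaming each simulation observation into a database instead of an intermediate results file, can be sketched with Python's built-in sqlite3. The toy event loop and the table layout are illustrative assumptions, not the authors' setup:

```python
import random
import sqlite3

# Each observation is inserted as it is produced, so post-processing can
# start with ordinary SQL instead of parsing an exported file.
random.seed(7)
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE result (run INTEGER, event_time REAL, queue_len INTEGER)")

for run in range(2):
    t, q = 0.0, 0
    for _ in range(100):
        t += random.expovariate(1.0)            # time to next event
        q = max(0, q + random.choice((-1, 1)))  # queue grows or drains
        db.execute("INSERT INTO result VALUES (?, ?, ?)", (run, t, q))

# Analysis is a query, not a file-parsing step.
n, mean_q = db.execute("SELECT COUNT(*), AVG(queue_len) FROM result").fetchone()
print(n)
```

In practice one would batch the inserts inside transactions; per-row commits are where a naive database link loses the speed advantage the paper measures.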

  16. Keyword Search in Databases

    CERN Document Server

    Yu, Jeffrey Xu; Chang, Lijun

    2009-01-01

    It has become highly desirable to provide users with flexible ways to query/search information over databases that are as simple as a Google-style keyword search. This book surveys recent developments in keyword search over databases, and focuses on finding structural information among objects in a database using a set of keywords. The structural information to be returned can be either trees or subgraphs representing how the objects that contain the required keywords are interconnected in a relational database or in an XML database. The structural keyword search is completely different from

  17. Dictionary as Database.

    Science.gov (United States)

    Painter, Derrick

    1996-01-01

    Discussion of dictionaries as databases focuses on the digitizing of The Oxford English Dictionary (OED) and the use of Standard Generalized Markup Language (SGML). Topics include the creation of a consortium to digitize the OED, document structure, relational databases, text forms, sequence, and discourse. (LRW)

  18. Impact of database quality in knowledge-based treatment planning for prostate cancer.

    Science.gov (United States)

    Wall, Phillip D H; Carver, Robert L; Fontenot, Jonas D

    2018-03-13

    This article investigates dose-volume prediction improvements in a common knowledge-based planning (KBP) method using a Pareto plan database compared with using a conventional, clinical plan database. Two plan databases were created using retrospective, anonymized data of 124 volumetric modulated arc therapy (VMAT) prostate cancer patients. The clinical plan database (CPD) contained planning data from each patient's clinically treated VMAT plan, which were manually optimized by various planners. The multicriteria optimization database (MCOD) contained Pareto-optimal plan data from VMAT plans created using a standardized multicriteria optimization protocol. Overlap volume histograms, incorporating fractional organ at risk volumes only within the treatment fields, were computed for each patient and used to match new patient anatomy to similar database patients. For each database patient, CPD and MCOD KBP predictions were generated for D10, D30, D50, D65, and D80 of the bladder and rectum in a leave-one-out manner. Prediction achievability was evaluated through a replanning study on a subset of 31 randomly selected database patients using the best KBP predictions, regardless of plan database origin, as planning goals. MCOD predictions were significantly lower than CPD predictions for all 5 bladder dose-volumes and rectum D50 (P = .004) and D65 (P ...). ... databases affects the performance and achievability of dose-volume predictions from a common knowledge-based planning approach for prostate cancer. Bladder and rectum dose-volume predictions derived from a database of standardized Pareto-optimal plans were compared with those derived from clinical plans manually designed by various planners. Dose-volume predictions from the Pareto plan database were significantly lower overall than those from the clinical plan database, without compromising achievability. Copyright © 2018 Elsevier Inc. All rights reserved.
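
The matching step this record describes, an overlap volume histogram giving the fraction of organ-at-risk volume within each distance of the target, can be sketched as follows. The voxel distances and bin edges are invented for illustration:

```python
# Made-up distances (mm) from each OAR voxel to the target surface.
oar_voxel_dist_mm = [2.0, 3.5, 5.0, 5.0, 8.0, 12.0, 15.0, 21.0]

def ovh(distances, bins):
    """Cumulative fraction of OAR voxels within each distance bin.

    The resulting curve is monotonically non-decreasing and reaches 1.0
    once the largest bin covers every voxel.
    """
    n = len(distances)
    return [sum(d <= b for d in distances) / n for b in bins]

curve = ovh(oar_voxel_dist_mm, bins=[5, 10, 15, 20, 25])
print(curve)
```

Two patients whose OVH curves nearly coincide have similar target-OAR geometry, which is what lets a KBP method transfer achievable dose-volume values between them.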

  19. Discovering Related Clinical Concepts Using Large Amounts of Clinical Notes.

    Science.gov (United States)

    Ganesan, Kavita; Lloyd, Shane; Sarkar, Vikren

    2016-01-01

    The ability to find highly related clinical concepts is essential for many applications such as for hypothesis generation, query expansion for medical literature search, search results filtering, ICD-10 code filtering and many other applications. While manually constructed medical terminologies such as SNOMED CT can surface certain related concepts, these terminologies are inadequate as they depend on expertise of several subject matter experts making the terminology curation process open to geographic and language bias. In addition, these terminologies also provide no quantifiable evidence on how related the concepts are. In this work, we explore an unsupervised graphical approach to mine related concepts by leveraging the volume within large amounts of clinical notes. Our evaluation shows that we are able to use a data driven approach to discovering highly related concepts for various search terms including medications, symptoms and diseases.

  20. Discovering Related Clinical Concepts Using Large Amounts of Clinical Notes

    Directory of Open Access Journals (Sweden)

    Kavita Ganesan

    2016-01-01

    Full Text Available The ability to find highly related clinical concepts is essential for many applications such as for hypothesis generation, query expansion for medical literature search, search results filtering, ICD-10 code filtering and many other applications. While manually constructed medical terminologies such as SNOMED CT can surface certain related concepts, these terminologies are inadequate as they depend on expertise of several subject matter experts making the terminology curation process open to geographic and language bias. In addition, these terminologies also provide no quantifiable evidence on how related the concepts are. In this work, we explore an unsupervised graphical approach to mine related concepts by leveraging the volume within large amounts of clinical notes. Our evaluation shows that we are able to use a data driven approach to discovering highly related concepts for various search terms including medications, symptoms and diseases.
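
A data-driven notion of "highly related concepts" like the one described in this record can be sketched as co-occurrence scoring over notes. The toy notes and the choice of pointwise mutual information as the score are illustrative assumptions, not the authors' exact method:

```python
import math
from collections import Counter
from itertools import combinations

# Toy corpus: each "note" is the set of concepts it mentions.
notes = [
    {"metformin", "diabetes", "hba1c"},
    {"metformin", "diabetes"},
    {"diabetes", "hba1c", "fever"},
    {"cough", "fever"},
]
n = len(notes)
concept = Counter(c for note in notes for c in note)
pair = Counter(frozenset(p) for note in notes for p in combinations(note, 2))

def pmi(a, b):
    """Pointwise mutual information of two concepts over the note corpus."""
    return math.log2((pair[frozenset((a, b))] / n) /
                     ((concept[a] / n) * (concept[b] / n)))

# Rank concepts that co-occur with a query term, strongest association first.
related = sorted(
    (c for c in concept if c != "diabetes" and pair[frozenset(("diabetes", c))]),
    key=lambda c: pmi("diabetes", c), reverse=True)
print(related)  # "fever" co-occurs once but is common, so it ranks last
```

Unlike a curated terminology, the score itself is the quantifiable evidence of relatedness the record says terminologies lack.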

  1. Relational Database Extension Oriented, Self-adaptive Imagery Pyramid Model

    Directory of Open Access Journals (Sweden)

    HU Zhenghua

    2015-06-01

    Full Text Available With the development of remote sensing technology, and especially the improvement of sensor resolution, the volume of image data keeps increasing. This places higher demands on managing huge amounts of data efficiently and intelligently, and how to access massive remote sensing data efficiently has become an increasingly popular topic. In this paper, considering the current state of spatial data management systems, we propose a self-adaptive strategy for image blocking and a method for constructing an LoD (level of detail) model that adapts to the combination of database storage, network transmission and client hardware. Experiments confirm that this imagery management mechanism achieves intelligent and efficient storage and access under a variety of database, network and client conditions. The study provides a feasible idea and method for efficient image data management, contributing to database-backed access and management of remote sensing imagery in a networked C/S architecture.
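
    The self-adaptive blocking strategy itself is not reproduced here; the sketch below only illustrates the conventional power-of-two tile pyramid that such LoD models build on, computing the image size and tile count at each level (the 256-pixel tile size and the image dimensions are assumed values).

```python
import math

def pyramid_levels(width, height, tile=256):
    """Standard power-of-two image pyramid: each level halves the
    resolution until the whole image fits in one tile."""
    levels = []
    w, h = width, height
    while True:
        cols, rows = math.ceil(w / tile), math.ceil(h / tile)
        levels.append((w, h, cols * rows))
        if cols == 1 and rows == 1:
            break
        w, h = max(1, w // 2), max(1, h // 2)
    return levels

# A 4096x4096 scene yields 5 levels with 256, 64, 16, 4 and 1 tiles.
for w, h, tiles in pyramid_levels(4096, 4096):
    print(f"{w}x{h}: {tiles} tiles")
```

    An adaptive scheme like the paper's would additionally vary the tile size and level count with database, network and client characteristics rather than fixing them up front.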

  2. Flashflood-related mortality in southern France: first results from a new database

    Directory of Open Access Journals (Sweden)

    Vinet Freddy

    2016-01-01

    Full Text Available Over the last 25 years, flash floods in the South of France have killed almost 250 people. Protecting exposed populations is a priority for the French government and a goal of the 2007 European flood directive, yet no accurate database of flood fatalities exists for France. Fatalities are often assumed to be rare and random, driven mainly by individual behaviour. A Ph.D. project has initiated the building of a database gathering a detailed analysis of the circumstances of death and the profiles of the deceased (age, gender…). The study area covers the French Mediterranean departments prone to flash floods over the period 1988-2015. This presentation details the main features of the sample: 244 fatalities collected from newspapers and completed with field surveys of police services and municipalities. The sample is broken down between major events, which account for two thirds of the fatalities, and "small" events (34 % of the fatalities). Deaths at home account for 35 % of the total number of fatalities, mainly during major events, and 30 % of fatalities are related to vehicles. The last part of the work examines the relations between fatalities and prevention, and how better knowledge of flood-related deaths can help improve flood prevention. The example given shows the relationship between flood forecasting and fatalities: half of the deaths took place in small watersheds (<150 km²), which emphasizes the need to disseminate a complementary flash flood forecasting system based on forecast rainfall depth and adapted to small watersheds.

  3. Advances in knowledge discovery in databases

    CERN Document Server

    Adhikari, Animesh

    2015-01-01

    This book presents recent advances in knowledge discovery in databases (KDD), with a focus on market basket databases, time-stamped databases and multiple related databases. Various interesting and intelligent algorithms for data mining tasks are reported. A large number of association measures are presented, which play significant roles in decision support applications. The book presents, discusses and contrasts new developments in mining time-stamped data, time-based data analyses, the identification of temporal patterns, the mining of multiple related databases, and local pattern analysis.

  4. Guided Imagery and Music Bibliography and GIM/Related Literature Refworks Database

    DEFF Research Database (Denmark)

    Bonde, Lars Ole

    2010-01-01

    Bibliography and RefWorks database of literature on the receptive music therapy method Guided Imagery and Music.

  5. The Danish Urogynaecological Database

    DEFF Research Database (Denmark)

    Guldberg, Rikke; Brostrøm, Søren; Hansen, Jesper Kjær

    2013-01-01

    INTRODUCTION AND HYPOTHESIS: The Danish Urogynaecological Database (DugaBase) is a nationwide clinical database established in 2006 to monitor, ensure and improve the quality of urogynaecological surgery. We aimed to describe its establishment and completeness and to validate selected variables. This is the first study based on data from the DugaBase. METHODS: The database completeness was calculated as a comparison between urogynaecological procedures reported to the Danish National Patient Registry and to the DugaBase. Validity was assessed for selected variables from a random sample of 200 women in the DugaBase from 1 January 2009 to 31 October 2010, using medical records as a reference. RESULTS: A total of 16,509 urogynaecological procedures were registered in the DugaBase by 31 December 2010. The database completeness has increased by calendar time, from 38.2 % in 2007 to 93.2 % in 2010 for public…

  6. "Mr. Database" : Jim Gray and the History of Database Technologies.

    Science.gov (United States)

    Hanwahr, Nils C

    2017-12-01

    Although the widespread use of the term "Big Data" is comparatively recent, it invokes a phenomenon in the development of database technology with distinct historical contexts. The database engineer Jim Gray, known as "Mr. Database" in Silicon Valley before his disappearance at sea in 2007, was involved in many of the crucial developments since the 1970s that constitute the foundation of exceedingly large and distributed databases. Jim Gray was involved in the development of relational database systems based on the concepts of Edgar F. Codd at IBM in the 1970s before he went on to develop principles of transaction processing that enable the parallel and highly distributed performance of databases today. He was also involved in creating forums for discourse between academia and industry, which influenced industry performance standards as well as database research agendas. As a co-founder of the San Francisco branch of Microsoft Research, Gray increasingly turned toward scientific applications of database technologies, e.g. leading the TerraServer project, an online database of satellite images. Inspired by Vannevar Bush's idea of the memex, Gray laid out his vision of a Personal Memex as well as a World Memex, eventually postulating a new era of data-based scientific discovery termed "Fourth Paradigm Science". This article gives an overview of Gray's contributions to the development of database technology as well as his research agendas and shows that central notions of Big Data have been occupying database engineers for much longer than the actual term has been in use.

  7. A consolidated and standardized relational database for ER data

    International Nuclear Information System (INIS)

    Zygmunt, B.C.

    1995-01-01

    The three US Department of Energy (DOE) installations on the Oak Ridge Reservation (ORR) (Oak Ridge National Laboratory, Y-12, and K-25) were established during World War II as part of the Manhattan Project that "built the bomb." That research, and work in more recent years, has resulted in the generation of radioactive materials and other toxic wastes. Lockheed Martin Energy Systems manages the three Oak Ridge installations (as well as the Environmental Restoration (ER) programs at the DOE plants in Portsmouth, Ohio, and Paducah, Kentucky). DOE Oak Ridge Operations has been mandated by federal and state agreements to provide a consolidated repository of environmental data and is tasked to support environmental data management activities at all five installations. The Oak Ridge Environmental Information System (OREIS) was initiated to fulfill these requirements. The primary use of OREIS data is to provide regulators with access to project results; a secondary use is to serve as background data for other projects. This paper discusses the benefits of a consolidated and standardized database; reasons for resistance to the consolidation of data; implementation of a consolidated database, including attempts at standardization, deciding what to include, establishing lists of valid values, and addressing quality control (QC) issues; and the evolution of a consolidated database, which includes developing and training a user community, dealing with configuration control issues, and incorporating historical data. OREIS is used to illustrate these topics.

  8. Development of the ECODAB into a relational database for Escherichia coli O-antigens and other bacterial polysaccharides.

    Science.gov (United States)

    Rojas-Macias, Miguel A; Ståhle, Jonas; Lütteke, Thomas; Widmalm, Göran

    2015-03-01

    The Escherichia coli O-antigen database (ECODAB) is a web-based application to support the collection of E. coli O-antigen structures, polymerase and flippase amino acid sequences, NMR chemical shift data of O-antigens, as well as information on glycosyltransferases (GTs) involved in the assembly of O-antigen polysaccharides. The database content has been compiled from the scientific literature. Furthermore, the system has evolved from being a repository to one that can be used for generating novel data on its own: GT specificity is suggested through sequence comparison with GTs whose function is known. The migration of ECODAB to a relational database has allowed the automation of all processes to update, retrieve and present information, thereby endowing the system with greater flexibility and improved overall performance. ECODAB is freely available at http://www.casper.organ.su.se/ECODAB/. Currently, data on 169 unique E. coli O-antigen entries and 338 GTs are covered. Moreover, the scope of the database has been extended so that polysaccharide structures and related information from other bacteria can subsequently be added, for example, from Streptococcus pneumoniae. © The Author 2014. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  9. Physical database design using Oracle

    CERN Document Server

    Burleson, Donald K

    2004-01-01

    INTRODUCTION TO ORACLE PHYSICAL DESIGN: Preface; Relational Databases and Physical Design; Systems Analysis and Physical Database Design; Introduction to Logical Database Design; Entity/Relation Modeling; Bridging between Logical and Physical Models; Physical Design Requirements Validation. PHYSICAL ENTITY DESIGN FOR ORACLE: Data Relationships and Physical Design; Massive De-Normalization: STAR Schema Design; Designing Class Hierarchies; Materialized Views and De-Normalization; Referential Integrity; Conclusion. ORACLE HARDWARE DESIGN: Planning the Server Environment; Designing the Network Infrastructure for Oracle; Oracle Netw…

  10. Method and electronic database search engine for exposing the content of an electronic database

    NARCIS (Netherlands)

    Stappers, P.J.

    2000-01-01

    The invention relates to an electronic database search engine comprising an electronic memory device suitable for storing and releasing elements from the database, a display unit, a user interface for selecting and displaying at least one element from the database on the display unit, and control

  11. Querying clinical data in HL7 RIM based relational model with morph-RDB.

    Science.gov (United States)

    Priyatna, Freddy; Alonso-Calvo, Raul; Paraiso-Medina, Sergio; Corcho, Oscar

    2017-10-05

    Semantic interoperability is essential when carrying out post-genomic clinical trials where several institutions collaborate, since researchers and developers need to have an integrated view and access to heterogeneous data sources. One possible approach to accommodate this need is to use RDB2RDF systems that provide RDF datasets as the unified view. These RDF datasets may be materialized and stored in a triple store, or transformed into RDF in real time, as virtual RDF data sources. Our previous efforts involved materialized RDF datasets, hence losing data freshness. In this paper we present a solution that uses an ontology based on the HL7 v3 Reference Information Model and a set of R2RML mappings that relate this ontology to an underlying relational database implementation, and where morph-RDB is used to expose a virtual, non-materialized SPARQL endpoint over the data. By applying a set of optimization techniques on the SPARQL-to-SQL query translation algorithm, we can now issue SPARQL queries to the underlying relational data with generally acceptable performance.
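
    The essence of an R2RML mapping (a subject URI template plus column-to-predicate pairs that turn relational rows into RDF triples) can be sketched in a few lines of plain Python. The rows, URIs and mapping below are invented for illustration and are far simpler than an HL7 RIM ontology; morph-RDB additionally translates SPARQL to SQL on the fly rather than materializing triples, which this sketch does not attempt.

```python
# Hypothetical patient rows as they might come from a relational table.
rows = [
    {"id": 1, "gender": "F", "diagnosis_code": "E11"},
    {"id": 2, "gender": "M", "diagnosis_code": "I10"},
]

# A minimal R2RML-style mapping: a subject URI template plus
# column-to-predicate pairs (prefixed names shortened for readability).
mapping = {
    "subject": "http://example.org/patient/{id}",
    "predicates": {"gender": "ex:gender", "diagnosis_code": "ex:diagnosis"},
}

def to_triples(rows, mapping):
    """Materialize each row as (subject, predicate, object) triples,
    the same logical shape a virtual SPARQL endpoint exposes on demand."""
    triples = []
    for row in rows:
        subject = mapping["subject"].format(**row)
        for column, predicate in mapping["predicates"].items():
            triples.append((subject, predicate, str(row[column])))
    return triples

for t in to_triples(rows, mapping):
    print(t)
```

    Keeping the triples virtual, as morph-RDB does, preserves data freshness because queries always see the current relational state instead of a stale materialized dump.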

  12. Database principles programming performance

    CERN Document Server

    O'Neil, Patrick

    2014-01-01

    Database: Principles Programming Performance provides an introduction to the fundamental principles of database systems. This book focuses on database programming and the relationships between principles, programming, and performance.Organized into 10 chapters, this book begins with an overview of database design principles and presents a comprehensive introduction to the concepts used by a DBA. This text then provides grounding in many abstract concepts of the relational model. Other chapters introduce SQL, describing its capabilities and covering the statements and functions of the programmi

  13. The Danish Microbiology Database (MiBa) 2010 to 2013

    DEFF Research Database (Denmark)

    Voldstedlund, M; Haarh, M; Mølbak, K

    2014-01-01

    The Danish Microbiology Database (MiBa) is a national database that receives copies of reports from all Danish departments of clinical microbiology. The database was launched in order to provide healthcare personnel with nationwide access to microbiology reports and to enable real-time surveillance...

  14. CORAL Server and CORAL Server Proxy: Scalable Access to Relational Databases from CORAL Applications

    International Nuclear Information System (INIS)

    Valassi, A; Kalkhof, A; Bartoldus, R; Salnikov, A; Wache, M

    2011-01-01

    The CORAL software is widely used at CERN by the LHC experiments to access the data they store on relational databases, such as Oracle. Two new components have recently been added to implement a model involving a middle tier 'CORAL server' deployed close to the database and a tree of 'CORAL server proxies', providing data caching and multiplexing, deployed close to the client. A first implementation of the two new components, released in the summer 2009, is now deployed in the ATLAS online system to read the data needed by the High Level Trigger, allowing the configuration of a farm of several thousand processes. This paper reviews the architecture of the software, its development status and its usage in ATLAS.

  15. The Danish Nonmelanoma Skin Cancer Dermatology Database.

    Science.gov (United States)

    Lamberg, Anna Lei; Sølvsten, Henrik; Lei, Ulrikke; Vinding, Gabrielle Randskov; Stender, Ida Marie; Jemec, Gregor Borut Ernst; Vestergaard, Tine; Thormann, Henrik; Hædersdal, Merete; Dam, Tomas Norman; Olesen, Anne Braae

    2016-01-01

    The Danish Nonmelanoma Skin Cancer Dermatology Database was established in 2008. The aim of this database was to collect data on nonmelanoma skin cancer (NMSC) and improve its treatment in Denmark. NMSC is the most common malignancy in the western countries and represents a significant challenge in terms of public health management and health care costs. However, high-quality epidemiological and treatment data on NMSC are sparse. The NMSC database includes patients with the following skin tumors: basal cell carcinoma (BCC), squamous cell carcinoma, Bowen's disease, and keratoacanthoma diagnosed by the participating office-based dermatologists in Denmark. Clinical and histological diagnoses, BCC subtype, localization, size, skin cancer history, skin phototype, and evidence of metastases and treatment modality are the main variables in the NMSC database. Information on recurrence, cosmetic results, and complications is registered at two follow-up visits at 3 months (between 0 and 6 months) and 12 months (between 6 and 15 months) after treatment. In 2014, 11,522 patients with 17,575 tumors were registered in the database. Of tumors with a histological diagnosis, 13,571 were BCCs, 840 squamous cell carcinomas, 504 Bowen's disease, and 173 keratoacanthomas. The NMSC database encompasses detailed information on the type of tumor, a variety of prognostic factors, treatment modalities, and outcomes after treatment. The database has revealed that overall, the quality of care of NMSC in Danish dermatological clinics is high, and the database provides the necessary data for continuous quality assurance.

  16. Geroprotectors.org: a new, structured and curated database of current therapeutic interventions in aging and age-related disease

    Science.gov (United States)

    Moskalev, Alexey; Chernyagina, Elizaveta; de Magalhães, João Pedro; Barardo, Diogo; Thoppil, Harikrishnan; Shaposhnikov, Mikhail; Budovsky, Arie; Fraifeld, Vadim E.; Garazha, Andrew; Tsvetkov, Vasily; Bronovitsky, Evgeny; Bogomolov, Vladislav; Scerbacov, Alexei; Kuryan, Oleg; Gurinovich, Roman; Jellen, Leslie C.; Kennedy, Brian; Mamoshina, Polina; Dobrovolskaya, Evgeniya; Aliper, Alex; Kaminsky, Dmitry; Zhavoronkov, Alex

    2015-01-01

    As the level of interest in aging research increases, there is a growing number of geroprotectors, or therapeutic interventions that aim to extend the healthy lifespan and repair or reduce aging-related damage in model organisms and, eventually, in humans. There is a clear need for a manually-curated database of geroprotectors to compile and index their effects on aging and age-related diseases and link these effects to relevant studies and multiple biochemical and drug databases. Here, we introduce the first such resource, Geroprotectors (http://geroprotectors.org). Geroprotectors is a public, rapidly explorable database that catalogs over 250 experiments involving over 200 known or candidate geroprotectors that extend lifespan in model organisms. Each compound has a comprehensive profile complete with biochemistry, mechanisms, and lifespan effects in various model organisms, along with information ranging from chemical structure, side effects, and toxicity to FDA drug status. These are presented in a visually intuitive, efficient framework fit for casual browsing or in-depth research alike. Data are linked to the source studies or databases, providing quick and convenient access to original data. The Geroprotectors database facilitates cross-study, cross-organism, and cross-discipline analysis and saves countless hours of inefficient literature and web searching. Geroprotectors is a one-stop, knowledge-sharing, time-saving resource for researchers seeking healthy aging solutions. PMID:26342919

  17. The FH mutation database: an online database of fumarate hydratase mutations involved in the MCUL (HLRCC) tumor syndrome and congenital fumarase deficiency

    Directory of Open Access Journals (Sweden)

    Tomlinson Ian PM

    2008-03-01

    unify all current genetic knowledge of FH variants. We believe that this knowledge will assist clinical geneticists and treating physicians when advising patients and their families, will provide a rapid and convenient resource for research scientists, and may eventually assist in gaining novel insights into FH and its related clinical syndromes.

  18. Noonan syndrome and clinically related disorders

    Science.gov (United States)

    Tartaglia, Marco; Gelb, Bruce D.; Zenker, Martin

    2010-01-01

    Noonan syndrome is a relatively common, clinically variable developmental disorder. Cardinal features include postnatally reduced growth, distinctive facial dysmorphism, congenital heart defects and hypertrophic cardiomyopathy, variable cognitive deficit and skeletal, ectodermal and hematologic anomalies. Noonan syndrome is transmitted as an autosomal dominant trait, and is genetically heterogeneous. So far, heterozygous mutations in nine genes (PTPN11, SOS1, KRAS, NRAS, RAF1, BRAF, SHOC2, MEK1 and CBL) have been documented to underlie this disorder or clinically related phenotypes. Based on these recent discoveries, the diagnosis can now be confirmed molecularly in approximately 75% of affected individuals. Affected genes encode for proteins participating in the RAS-mitogen-activated protein kinases (MAPK) signal transduction pathway, which is implicated in several developmental processes controlling morphology determination, organogenesis, synaptic plasticity and growth. Here, we provide an overview of clinical aspects of this disorder and closely related conditions, the molecular mechanisms underlying pathogenesis, and major genotype-phenotype correlations. PMID:21396583

  19. SFCOMPO 2.0 – A relational database of spent fuel isotopic measurements, reactor operational histories, and design data

    Directory of Open Access Journals (Sweden)

    Michel-Sendis Franco

    2017-01-01

    Full Text Available SFCOMPO-2.0 is a database of experimental isotopic concentrations measured in destructive radiochemical analysis of spent nuclear fuel (SNF) samples. The database includes the corresponding design description of the fuel rods and assemblies, relevant operating conditions and characteristics of the host reactors necessary for modelling and simulation. Aimed at establishing a thorough, reliable, and publicly available resource for code and data validation in safety-related applications, SFCOMPO-2.0 is developed and maintained by the OECD Nuclear Energy Agency (NEA). The SFCOMPO-2.0 database is a Java application downloadable from the NEA website.

  20. Data model and relational database design for the New England Water-Use Data System (NEWUDS)

    Science.gov (United States)

    Tessler, Steven

    2001-01-01

    The New England Water-Use Data System (NEWUDS) is a database for the storage and retrieval of water-use data. NEWUDS can handle data covering many facets of water use, including (1) tracking various types of water-use activities (withdrawals, returns, transfers, distributions, consumptive-use, wastewater collection, and treatment); (2) the description, classification and location of places and organizations involved in water-use activities; (3) details about measured or estimated volumes of water associated with water-use activities; and (4) information about data sources and water resources associated with water use. In NEWUDS, each water transaction occurs unidirectionally between two site objects, and the sites and conveyances form a water network. The core entities in the NEWUDS model are site, conveyance, transaction/rate, location, and owner. Other important entities include water resources (used for withdrawals and returns), data sources, and aliases. Multiple water-exchange estimates can be stored for individual transactions based on different methods or data sources. Storage of user-defined details is accommodated for several of the main entities. Numerous tables containing classification terms facilitate detailed descriptions of data items and can be used for routine or custom data summarization. NEWUDS handles single-user and aggregate-user water-use data, can be used for large or small water-network projects, and is available as a stand-alone Microsoft Access database structure. Users can customize and extend the database, link it to other databases, or implement the design in other relational database applications.
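
    The core idea, unidirectional water transactions between pairs of sites forming a network, maps naturally onto two relational tables. A minimal SQLite sketch with illustrative table and column names (not the published NEWUDS schema):

```python
import sqlite3

# Sites plus directed transactions between them form the water network.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE site (
    site_id   INTEGER PRIMARY KEY,
    name      TEXT NOT NULL,
    site_type TEXT NOT NULL          -- e.g. intake, plant, distribution
);
CREATE TABLE water_transaction (
    txn_id    INTEGER PRIMARY KEY,
    from_site INTEGER NOT NULL REFERENCES site(site_id),
    to_site   INTEGER NOT NULL REFERENCES site(site_id),
    activity  TEXT NOT NULL,         -- withdrawal, return, transfer...
    mgd       REAL NOT NULL          -- volume, million gallons per day
);
""")
db.executemany("INSERT INTO site VALUES (?,?,?)", [
    (1, "River intake", "withdrawal point"),
    (2, "Treatment plant", "plant"),
    (3, "Town distribution", "distribution"),
])
db.executemany("INSERT INTO water_transaction VALUES (?,?,?,?,?)", [
    (1, 1, 2, "withdrawal", 5.0),
    (2, 2, 3, "distribution", 4.2),
])

# Total volume leaving each site, following the network edges.
for name, total in db.execute(
        "SELECT s.name, SUM(t.mgd) FROM site s "
        "JOIN water_transaction t ON t.from_site = s.site_id "
        "GROUP BY s.site_id"):
    print(name, total)
```

    The real schema also models conveyances, owners, locations and multiple volume estimates per transaction, but they all hang off these two core entities in the same relational style.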

  1. OCL2Trigger: Deriving active mechanisms for relational databases using Model-Driven Architecture

    OpenAIRE

    Al-Jumaily, Harith T.; Cuadra, Dolores; Martínez, Paloma

    2008-01-01

    16 pages, 10 figures.-- Issue title: "Best papers from the 2007 Australian Software Engineering Conference (ASWEC 2007), Melbourne, Australia, April 10-13, 2007, Australian Software Engineering Conference 2007". Transforming integrity constraints into active rules or triggers for verifying database consistency produces a serious and complex problem related to real time behaviour that must be considered for any implementation. Our main contribution to this work is to provide a complete appr...

  2. An Algorithm for Building an Electronic Database.

    Science.gov (United States)

    Cohen, Wess A; Gayle, Lloyd B; Patel, Nima P

    2016-01-01

    We propose an algorithm for creating a prospectively maintained database, which can then be used to analyze prospective data in a retrospective fashion. Our algorithm provides future researchers a road map on how to set up, maintain, and use an electronic database to improve evidence-based care and future clinical outcomes. The database was created using Microsoft Access and included demographic information, socioeconomic information, and intraoperative and postoperative details via standardized drop-down menus. A printed form from the Microsoft Access template was given to each surgeon to be completed after each case, and a member of the health care team then entered the case information into the database. By utilizing straightforward, HIPAA-compliant data input fields, we permitted data collection and transcription to be easy and efficient. Collecting a wide variety of data allowed us the freedom to evolve our clinical interests, while the platform also permitted new categories to be added at will. We have proposed a reproducible method for institutions to create a database, which will then allow senior and junior surgeons to analyze their outcomes and compare them with others in an effort to improve patient care and outcomes. This is a cost-efficient way to create and maintain a database without additional software.

  3. DataCell: Exploiting the Power of Relational Databases for Efficient Stream Processing

    NARCIS (Netherlands)

    E. Liarou (Erietta); M.L. Kersten (Martin)

    2009-01-01

    Designed for complex event processing, DataCell is a research prototype database system in the area of sensor stream systems. Under development at CWI, it belongs to the MonetDB database system family. CWI researchers innovatively built a stream engine directly on top of a database

  4. Towards cloud-centric distributed database evaluation

    OpenAIRE

    Seybold, Daniel

    2016-01-01

    The area of cloud computing has also pushed the evolution of distributed databases, resulting in a variety of distributed database systems, which can be classified into relational, NoSQL and NewSQL database systems. In general, all representatives of these database system classes claim to provide elasticity and "unlimited" horizontal scalability. As these characteristics comply with the cloud, distributed databases seem to be a perfect match for Database-as-a-Service systems (DBaaS).

  6. Migration Between NoSQL Databases

    OpenAIRE

    Opačak, Damir

    2013-01-01

    The thesis discusses the differences and, consequently, potential problems that may arise when migrating between different types of NoSQL databases. The first chapters introduce the reader to the issues of relational databases and present the beginnings of NoSQL databases. The following chapters present different types of NoSQL databases and some of their representatives with the aim to show specific features of NoSQL databases and the fact that each of them was developed to solve specifi...

  7. Product Licenses Database Application

    CERN Document Server

    Tonkovikj, Petar

    2016-01-01

    The goal of this project is to organize and centralize the data about software tools available to CERN employees, as well as provide a system that simplifies the license management process by providing information about the available licenses and their expiry dates. The project development process consists of two steps: modeling the products (software tools), product licenses, legal agreements and other data related to these entities in a relational database, and developing the front-end user interface so that the user can interact with the database. The result is an ASP.NET MVC web application with interactive views for displaying and managing the data in the underlying database.
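
    The expiry reporting such a system provides can be sketched independently of ASP.NET; the record fields and product names below are invented for illustration:

```python
from datetime import date, timedelta

# Illustrative license records; the fields mirror the entities described
# (product, license, expiry date) but the names are assumptions.
licenses = [
    {"product": "AnalysisToolA", "expires": date(2016, 3, 1)},
    {"product": "CompilerB",     "expires": date(2016, 9, 30)},
    {"product": "PlotterC",      "expires": date(2017, 1, 15)},
]

def expiring_within(licenses, today, days=90):
    """Licenses whose expiry date falls within the next `days` days,
    the kind of report a license-management front end would surface."""
    horizon = today + timedelta(days=days)
    return [l["product"] for l in licenses if today <= l["expires"] <= horizon]

print(expiring_within(licenses, date(2016, 2, 1)))  # ['AnalysisToolA']
```

    In the relational model this becomes a simple WHERE clause over the license table, so the same report stays cheap as the catalog of products grows.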

  8. Human Variome Project Quality Assessment Criteria for Variation Databases.

    Science.gov (United States)

    Vihinen, Mauno; Hancock, John M; Maglott, Donna R; Landrum, Melissa J; Schaafsma, Gerard C P; Taschner, Peter

    2016-06-01

    Numerous databases containing information about DNA, RNA, and protein variations are available. Gene-specific variant databases (locus-specific variation databases, LSDBs) are typically curated and maintained for single genes or groups of genes for a certain disease(s). These databases are widely considered as the most reliable information source for a particular gene/protein/disease, but it should also be made clear they may have widely varying contents, infrastructure, and quality. Quality is very important to evaluate because these databases may affect health decision-making, research, and clinical practice. The Human Variome Project (HVP) established a Working Group for Variant Database Quality Assessment. The basic principle was to develop a simple system that nevertheless provides a good overview of the quality of a database. The HVP quality evaluation criteria that resulted are divided into four main components: data quality, technical quality, accessibility, and timeliness. This report elaborates on the developed quality criteria and how implementation of the quality scheme can be achieved. Examples are provided for the current status of the quality items in two different databases, BTKbase, an LSDB, and ClinVar, a central archive of submissions about variants and their clinical significance. © 2016 WILEY PERIODICALS, INC.

  9. A method to implement fine-grained access control for personal health records through standard relational database queries.

    Science.gov (United States)

    Sujansky, Walter V; Faus, Sam A; Stone, Ethan; Brennan, Patricia Flatley

    2010-10-01

    Online personal health records (PHRs) enable patients to access, manage, and share certain of their own health information electronically. This capability creates the need for precise access-control mechanisms that restrict the sharing of data to that intended by the patient. The authors describe the design and implementation of an access-control mechanism for PHR repositories that is modeled on the eXtensible Access Control Markup Language (XACML) standard, but intended to reduce the cognitive and computational complexity of XACML. The authors implemented the mechanism entirely in a relational database system using ANSI-standard SQL statements. Based on a set of access-control rules encoded as relational table rows, the mechanism determines via a single SQL query whether a user who accesses patient data from a specific application is authorized to perform a requested operation on a specified data object. Testing of this query on a moderately large database has demonstrated execution times consistently below 100 ms. The authors include the details of the implementation, including algorithms, examples, and a test database as Supplementary materials. Copyright © 2010 Elsevier Inc. All rights reserved.
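
    A toy version of the mechanism described, with access rules stored as relational rows and a single SQL query deciding each request; the schema and names here are illustrative, not the authors' actual tables:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""
CREATE TABLE access_rule (
    grantee   TEXT,   -- user the patient shared with
    app       TEXT,   -- application the user connects through
    operation TEXT,   -- read / write
    object    TEXT    -- data category, e.g. medications
)""")
# Rules the patient has granted, encoded as ordinary table rows.
db.executemany("INSERT INTO access_rule VALUES (?,?,?,?)", [
    ("dr_smith", "portal", "read", "medications"),
    ("dr_smith", "portal", "read", "allergies"),
])

def authorized(user, app, operation, obj):
    """One SQL query answers the full (user, app, operation, object)
    authorization question, as in the paper's design."""
    row = db.execute(
        """SELECT EXISTS(
               SELECT 1 FROM access_rule
               WHERE grantee=? AND app=? AND operation=? AND object=?)""",
        (user, app, operation, obj)).fetchone()
    return bool(row[0])

print(authorized("dr_smith", "portal", "read", "medications"))   # True
print(authorized("dr_smith", "portal", "write", "medications"))  # False
```

    The actual mechanism supports XACML-style features such as deny rules and attribute matching, but the principle is the same: the database engine, not application code, evaluates the policy.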

  10. The use of intelligent database systems in acute pancreatitis--a systematic review.

    Science.gov (United States)

    van den Heever, Marc; Mittal, Anubhav; Haydock, Matthew; Windsor, John

    2014-01-01

    Acute pancreatitis (AP) is a complex disease with multiple aetiological factors, wide-ranging severity, and multiple challenges to effective triage and management. Databases, data mining and machine learning algorithms (MLAs), including artificial neural networks (ANNs), may assist by storing and interpreting data from multiple sources, potentially improving clinical decision-making. The aims were to: 1) identify database technologies used to store AP data, 2) collate and categorise variables stored in AP databases, 3) identify the MLA technologies, including ANNs, used to analyse AP data, and 4) identify clinical and non-clinical benefits and obstacles in establishing a national or international AP database. A comprehensive systematic search of online reference databases was performed. The predetermined inclusion criteria were all papers discussing 1) databases, 2) data mining or 3) MLAs pertaining to AP, independently assessed by two reviewers with conflicts resolved by a third author. Forty-three papers were included. Three data mining technologies and five ANN methodologies were reported in the literature. A total of 187 collected variables were identified. ANNs increase the accuracy of severity prediction: one study showed ANNs had a sensitivity of 0.89 and specificity of 0.96 six hours after admission, compared with 0.80 and 0.85, respectively, for APACHE II (cutoff score ≥8). Problems with databases were incomplete data, lack of clinical data, diagnostic reliability and missing clinical data. This is the first systematic review examining the use of databases, MLAs and ANNs in the management of AP. The clinical benefits these technologies have over current systems and other advantages to adopting them are identified. Copyright © 2013 IAP and EPC. Published by Elsevier B.V. All rights reserved.
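    The sensitivity and specificity figures quoted in this record come from the usual confusion-matrix definitions. A minimal sketch, with illustrative counts chosen only to reproduce the quoted 0.89/0.96 (they are not the study's actual case numbers):

    ```python
    def sensitivity_specificity(tp, fn, tn, fp):
        """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
        return tp / (tp + fn), tn / (tn + fp)

    # Illustrative counts: of 100 truly severe cases the classifier flags 89,
    # and of 100 mild cases it correctly clears 96.
    sens, spec = sensitivity_specificity(tp=89, fn=11, tn=96, fp=4)
    print(sens, spec)  # 0.89 0.96
    ```
    
    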

  11. Open-access MIMIC-II database for intensive care research.

    Science.gov (United States)

    Lee, Joon; Scott, Daniel J; Villarroel, Mauricio; Clifford, Gari D; Saeed, Mohammed; Mark, Roger G

    2011-01-01

    The critical state of intensive care unit (ICU) patients demands close monitoring, and as a result a large volume of multi-parameter data is collected continuously. This represents a unique opportunity for researchers interested in clinical data mining. We sought to foster a more transparent and efficient intensive care research community by building a publicly available ICU database, namely Multiparameter Intelligent Monitoring in Intensive Care II (MIMIC-II). The data harnessed in MIMIC-II were collected from the ICUs of Beth Israel Deaconess Medical Center from 2001 to 2008 and represent 26,870 adult hospital admissions (version 2.6). MIMIC-II consists of two major components: clinical data and physiological waveforms. The clinical data, which include patient demographics, intravenous medication drip rates, and laboratory test results, were organized into a relational database. The physiological waveforms, including 125 Hz signals recorded at bedside and corresponding vital signs, were stored in an open-source format. MIMIC-II data were also deidentified in order to remove protected health information. Any interested researcher can gain access to MIMIC-II free of charge after signing a data use agreement and completing human subjects training. MIMIC-II can support a wide variety of research studies, ranging from the development of clinical decision support algorithms to retrospective clinical studies. We anticipate that MIMIC-II will be an invaluable resource for intensive care research by stimulating fair comparisons among different studies.
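    The relational organization of MIMIC-II's clinical component is what makes retrospective studies straightforward. A sketch of the style of query involved; the table and column names below are simplified stand-ins, not the actual MIMIC-II schema (access to which requires the data use agreement described above):

    ```python
    import sqlite3

    # Toy stand-ins for admission and laboratory tables.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE admissions (hadm_id INTEGER PRIMARY KEY, age INTEGER);
    CREATE TABLE labevents (hadm_id INTEGER, test TEXT, value REAL);
    INSERT INTO admissions VALUES (1, 67), (2, 54);
    INSERT INTO labevents VALUES (1, 'lactate', 4.2), (1, 'lactate', 2.1),
                                 (2, 'lactate', 1.3);
    """)

    # Mean lactate per admission joined to demographics -- the kind of
    # aggregation a retrospective ICU study might start from.
    rows = conn.execute("""
        SELECT a.hadm_id, a.age, ROUND(AVG(l.value), 2) AS mean_lactate
        FROM admissions a
        JOIN labevents l ON l.hadm_id = a.hadm_id
        WHERE l.test = 'lactate'
        GROUP BY a.hadm_id, a.age
        ORDER BY a.hadm_id
    """).fetchall()
    print(rows)  # [(1, 67, 3.15), (2, 54, 1.3)]
    ```
    
    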

  12. Network-based Database Course

    DEFF Research Database (Denmark)

    Nielsen, J.N.; Knudsen, Morten; Nielsen, Jens Frederik Dalsgaard

    A course in database design and implementation has been designed, utilizing existing network facilities. The course is an elementary course for students of computer engineering. Its purpose is to give the students theoretical database knowledge as well as practical experience with design...... and implementation. A tutorial relational database and the students' self-designed databases are implemented on the UNIX system of Aalborg University, thus giving the teacher the possibility of live demonstrations in the lecture room, and the students the possibility of interactive learning in their working rooms...

  13. Using a relational database to improve mortality and length of stay for a department of surgery: a comparative review of 5200 patients.

    Science.gov (United States)

    Ang, Darwin N; Behrns, Kevin E

    2013-07-01

    The emphasis on high-quality care has spawned the development of quality programs, most of which focus on broad outcome measures across a diverse group of providers. Our aim was to investigate the clinical outcomes for a department of surgery with multiple service lines of patient care using a relational database. Mortality, length of stay (LOS), patient safety indicators (PSIs), and hospital-acquired conditions were examined for each service line. Expected values for mortality and LOS were derived from University HealthSystem Consortium regression models, whereas expected values for PSIs were derived from Agency for Healthcare Research and Quality regression models. Overall, 5200 patients were evaluated from the months of January through May of both 2011 (n = 2550) and 2012 (n = 2650). The overall observed-to-expected (O/E) ratio of mortality improved from 1.03 to 0.92. The overall O/E ratio for LOS improved from 0.92 to 0.89. PSIs that predicted mortality included postoperative sepsis (O/E: 1.89), postoperative respiratory failure (O/E: 1.83), postoperative metabolic derangement (O/E: 1.81), and postoperative deep vein thrombosis or pulmonary embolus (O/E: 1.8). Mortality and LOS can be improved by using a relational database with outcomes reported to specific service lines. Service line quality can be influenced by distribution of frequent reports, group meetings, and service line-directed interventions.
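    The observed-to-expected (O/E) ratio this record reports is simple arithmetic: observed events divided by the count the risk model predicts, with values below 1.0 indicating better-than-expected performance. A minimal sketch with illustrative counts (not taken from the study):

    ```python
    def oe_ratio(observed, expected):
        """O/E ratio: observed event count over model-expected count."""
        if expected <= 0:
            raise ValueError("expected count must be positive")
        return observed / expected

    # E.g. 46 observed deaths against 50.0 deaths expected by the
    # regression model gives an O/E of 0.92, matching the improvement
    # reported above.
    print(round(oe_ratio(46, 50.0), 2))  # 0.92
    ```
    
    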

  14. An image database structure for pediatric radiology

    International Nuclear Information System (INIS)

    Mankovich, N.J.

    1987-01-01

    The operation of the Clinical Radiology Imaging System (CRIS) in Pediatric Radiology at UCLA relies on the orderly flow of text and image data among the three basic subsystems: acquisition, storage, and display. CRIS provides the radiologist, clinician, and technician with data at clinical image workstations by maintaining a comprehensive database. CRIS is made up of sub-systems, each composed of one or more programs or tasks which operate in parallel on a VAX-11/750 minicomputer in Pediatric Radiology. Tasks are coordinated through dynamic data structures that include system event flags and disk-resident queues. This report outlines: (1) the CRIS data model, (2) the flow of information among CRIS components, (3) the underlying database structures which support the acquisition, display, and storage of text and image information, and (4) current database statistics

  15. The Scandinavian baltic pancreatic club (SBPC) database

    DEFF Research Database (Denmark)

    Olesen, Søren S; Poulsen, Jakob Lykke; Drewes, Asbjørn M

    2017-01-01

    OBJECTIVES: Chronic pancreatitis (CP) is a multifaceted disease associated with several risk factors and a complex clinical presentation. We established the Scandinavian Baltic Pancreatic Club (SBPC) Database to characterise and study the natural history of CP in a Northern European cohort. Here......, we describe the design of the database and characteristics of the study cohort. METHODS: Nine centres from six different countries in the Scandinavian-Baltic region joined the database. Patients with definitive or probable CP (M-ANNHEIM diagnostic criteria) were included. Standardised case report...... forms were used to collect several assessment variables including disease aetiology, duration of CP, preceding acute pancreatitis, as well as symptoms, complications, and treatments. The clinical stage of CP was characterised according to M-ANNHEIM. Yearly follow-up is planned for all patients. RESULTS...

  16. Moving Observer Support for Databases

    DEFF Research Database (Denmark)

    Bukauskas, Linas

    Interactive visual data explorations impose rigid requirements on database and visualization systems. Systems that visualize huge amounts of data tend to request large amounts of memory resources and heavily use the CPU to process and visualize data. Current systems employ a loosely coupled...... architecture to exchange data between database and visualization. Thus, the interaction of the visualizer and the database is kept to the minimum, which most often leads to superfluous data being passed from database to visualizer. This Ph.D. thesis presents a novel tight coupling of database and visualizer....... The thesis discusses the VR-tree, an extension of the R-tree that enables observer relative data extraction. To support incremental observer position relative data extraction the thesis proposes the Volatile Access Structure (VAST). VAST is a main memory structure that caches nodes of the VR-tree. VAST...

  17. Database modeling and design logical design

    CERN Document Server

    Teorey, Toby J; Nadeau, Tom; Jagadish, HV

    2011-01-01

    Database systems and database design technology have undergone significant evolution in recent years. The relational data model and relational database systems dominate business applications; in turn, they are extended by other technologies like data warehousing, OLAP, and data mining. How do you model and design your database application in consideration of new technology or new business needs? In the extensively revised fifth edition, you'll get clear explanations, lots of terrific examples and an illustrative case, and the really practical advice you have come to count on--with design rules

  18. Database modeling and design logical design

    CERN Document Server

    Teorey, Toby J; Nadeau, Tom; Jagadish, HV

    2005-01-01

    Database systems and database design technology have undergone significant evolution in recent years. The relational data model and relational database systems dominate business applications; in turn, they are extended by other technologies like data warehousing, OLAP, and data mining. How do you model and design your database application in consideration of new technology or new business needs? In the extensively revised fourth edition, you'll get clear explanations, lots of terrific examples and an illustrative case, and the really practical advice you have come to count on--with design rul

  19. Databases for INDUS-1 and INDUS-2

    International Nuclear Information System (INIS)

    Merh, Bhavna N.; Fatnani, Pravin

    2003-01-01

    The databases for Indus are relational databases designed to store various categories of data related to the accelerator. The data archiving and retrieving system in Indus is based on a client/server model. A general purpose commercial database is used to store parameters and equipment data for the whole machine. The database manages configuration, on-line and historical databases. On line and off line applications distributed in several systems can store and retrieve the data from the database over the network. This paper describes the structure of databases for Indus-1 and Indus-2 and their integration within the software architecture. The data analysis, design, resulting data-schema and implementation issues are discussed. (author)

  20. The FREGAT biobank: a clinico-biological database dedicated to esophageal and gastric cancers.

    Science.gov (United States)

    Mariette, Christophe; Renaud, Florence; Piessen, Guillaume; Gele, Patrick; Copin, Marie-Christine; Leteurtre, Emmanuelle; Delaeter, Christine; Dib, Malek; Clisant, Stéphanie; Harter, Valentin; Bonnetain, Franck; Duhamel, Alain; Christophe, Véronique; Adenis, Antoine

    2018-02-06

    While the incidence of esophageal and gastric cancers is increasing, the prognosis of these cancers remains bleak. Endoscopy and surgery are the standard treatments for localized tumors, but multimodal treatments combining chemotherapy, targeted therapies, immunotherapy, radiotherapy, and surgery are needed for the vast majority of patients, who present with locally advanced or metastatic disease at diagnosis. Although survival has improved, most patients still present with advanced disease at diagnosis. In addition, most patients exhibit a poor or incomplete response to treatment, experience early recurrence and have an impaired quality of life. Compared with several other cancers, the therapeutic approach is not personalized, and research is much less developed. It is, therefore, urgent to hasten the development of research protocols and, consequently, to develop a large, ambitious and innovative tool through which future scientific questions may be answered. This research must be patient-related so that rapid feedback to the bedside is achieved, and should aim to identify clinical, biological and tumor-related factors that are associated with treatment resistance. Finally, this research should also seek to explain epidemiological and social facets of disease behavior. The prospective FREGAT database, established by the French National Cancer Institute, includes adult patients with carcinomas of the esophagus and stomach, regardless of tumor stage or therapeutic strategy. The database includes epidemiological, clinical, and tumor characteristics data as well as follow-up, human and social sciences, and quality of life data, along with a tumor and serum bank. This innovative research tool will allow the banking of millions of data points for the development of excellent basic, translational and clinical research programs for esophageal and gastric cancer. This will ultimately improve general knowledge of these diseases, therapeutic strategies and

  1. Report on the database structuring project in fiscal 1996 related to the 'surveys on making databases for energy saving (2)'; 1996 nendo database kochiku jigyo hokokusho. Sho energy database system ka ni kansuru chosa 2

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-03-01

    With the objective of supporting the promotion of energy conservation in countries including Japan, China, Indonesia, the Philippines, Thailand, Malaysia, Taiwan and Korea, primary information on energy conservation in each country was collected and compiled into a database. This paper summarizes the achievements in fiscal 1996. Based on the results of the database project to date and on the various data collected, this fiscal year's work addressed structuring the database for its distribution and dissemination. In this work, the functions the database should provide, the data items to be recorded, and the processing of the recorded data were set out with reference to proposals on the database environment. Demonstrations of the dissemination version of the database were performed in the Philippines, Indonesia and China. Three hundred CDs were prepared for distribution in each country. The supplied computers were set up and their operation verified, and briefing meetings on operation were held in China and the Philippines. (NEDO)

  2. Building an integrated neurodegenerative disease database at an academic health center.

    Science.gov (United States)

    Xie, Sharon X; Baek, Young; Grossman, Murray; Arnold, Steven E; Karlawish, Jason; Siderowf, Andrew; Hurtig, Howard; Elman, Lauren; McCluskey, Leo; Van Deerlin, Vivianna; Lee, Virginia M-Y; Trojanowski, John Q

    2011-07-01

    It is becoming increasingly important to study common and distinct etiologies, clinical and pathological features, and mechanisms related to neurodegenerative diseases such as Alzheimer's disease, Parkinson's disease, amyotrophic lateral sclerosis, and frontotemporal lobar degeneration. These comparative studies rely on powerful database tools to quickly generate data sets that match the diverse and complementary criteria set by investigators. In this article, we present a novel integrated neurodegenerative disease (INDD) database, which was developed at the University of Pennsylvania (Penn) with the help of a consortium of Penn investigators. Because the work of these investigators is based on Alzheimer's disease, Parkinson's disease, amyotrophic lateral sclerosis, and frontotemporal lobar degeneration, it allowed us to achieve the goal of developing an INDD database for these major neurodegenerative disorders. We used the Microsoft SQL server as a platform, with built-in "backwards" functionality to provide Access as a frontend client to interface with the database. We used PHP Hypertext Preprocessor to create the "frontend" web interface and then used a master lookup table to integrate the individual neurodegenerative disease databases. We also present methods of data entry, database security, database backups, and database audit trails for this INDD database. Using the INDD database, we compared the results of a biomarker study with those using an alternative approach by querying individual databases separately. We have demonstrated that the Penn INDD database has the ability to query multiple database tables from a single console with high accuracy and reliability. The INDD database provides a powerful tool for generating data sets in comparative studies on several neurodegenerative diseases. Copyright © 2011 The Alzheimer's Association. Published by Elsevier Inc. All rights reserved.

  3. Urate levels predict survival in amyotrophic lateral sclerosis: Analysis of the expanded Pooled Resource Open-Access ALS clinical trials database.

    Science.gov (United States)

    Paganoni, Sabrina; Nicholson, Katharine; Chan, James; Shui, Amy; Schoenfeld, David; Sherman, Alexander; Berry, James; Cudkowicz, Merit; Atassi, Nazem

    2018-03-01

    Urate has been identified as a predictor of amyotrophic lateral sclerosis (ALS) survival in some but not all studies. Here we leverage the recent expansion of the Pooled Resource Open-Access ALS Clinical Trials (PRO-ACT) database to study the association between urate levels and ALS survival. Pooled data of 1,736 ALS participants from the PRO-ACT database were analyzed. Cox proportional hazards regression models were used to evaluate associations between urate levels at trial entry and survival. After adjustment for potential confounders (i.e., creatinine and body mass index), there was an 11% reduction in risk of reaching a survival endpoint during the study with each 1-mg/dL increase in uric acid levels (adjusted hazard ratio 0.89, 95% confidence interval 0.82-0.97). This finding supports urate as a predictor of survival in ALS and confirms the utility of the PRO-ACT database as a powerful resource for ALS epidemiological research. Muscle Nerve 57: 430-434, 2018. © 2017 Wiley Periodicals, Inc.
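    As a quick check on what the quoted adjusted hazard ratio means (this is the standard multiplicative interpretation of a Cox-model coefficient, not additional data from the study):

    ```python
    # HR 0.89 per 1 mg/dL of urate means the hazard of reaching a survival
    # endpoint is 11% lower per unit increase, and the effect compounds
    # multiplicatively across units under the proportional-hazards model.
    hr_per_unit = 0.89

    risk_reduction_pct = (1 - hr_per_unit) * 100
    print(round(risk_reduction_pct))  # 11  (% per 1 mg/dL)

    # Implied hazard ratio for a 3 mg/dL higher urate level, same model:
    print(round(hr_per_unit ** 3, 3))  # 0.705
    ```
    
    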

  4. Web interfaces to relational databases

    Science.gov (United States)

    Carlisle, W. H.

    1996-01-01

    This report describes a project to extend the capabilities of a Virtual Research Center (VRC) for NASA's Advanced Concepts Office. The work was performed as part of NASA's 1995 Summer Faculty Fellowship program and involved the development of a prototype component of the VRC - a database system that provides data creation and access services within a room of the VRC. In support of VRC development, NASA has assembled a laboratory containing the variety of equipment expected to be used by scientists within the VRC. This laboratory consists of the major hardware platforms, SUN, Intel, and Motorola processors and their most common operating systems UNIX, Windows NT, Windows for Workgroups, and Macintosh. The SPARC 20 runs SUN Solaris 2.4, an Intel Pentium runs Windows NT and is installed on a different network from the other machines in the laboratory, a Pentium PC runs Windows for Workgroups, two Intel 386 machines run Windows 3.1, and finally, a PowerMacintosh and a Macintosh IIsi run MacOS.

  5. The Danish Anaesthesia Database

    Directory of Open Access Journals (Sweden)

    Antonsen K

    2016-10-01

    Kristian Antonsen,1 Charlotte Vallentin Rosenstock,2 Lars Hyldborg Lundstrøm2 1Board of Directors, Copenhagen University Hospital, Bispebjerg and Frederiksberg Hospital, Capital Region of Denmark, Denmark; 2Department of Anesthesiology, Copenhagen University Hospital, Nordsjællands Hospital-Hillerød, Capital Region of Denmark, Denmark. Aim of database: The aim of the Danish Anaesthesia Database (DAD) is the nationwide collection of data on all patients undergoing anesthesia. Collected data are used for quality assurance and quality development, and serve as a basis for research projects. Study population: The DAD was founded in 2004 as a part of the Danish Clinical Registries (Regionernes Kliniske Kvalitetsudviklings Program [RKKP]). Patients undergoing general anesthesia, regional anesthesia with or without combined general anesthesia, as well as patients under sedation are registered. Data are retrieved from public and private anesthesia clinics, single centers as well as multihospital corporations across Denmark. In 2014 a total of 278,679 unique entries, representing national coverage of ~70%, were recorded; data completeness is steadily increasing. Main variables: Records are aggregated to determine 13 defined quality indicators and 11 defined complications, together covering the anesthetic process from the preoperative assessment through anesthesia and surgery until the end of the postoperative recovery period. Descriptive data: Registered variables include the patient's individual social security number (assigned to all Danes), direct patient-related lifestyle factors enabling a quantification of the patient's comorbidity, and variables that are strictly related to the type, duration, and safety of the anesthesia. Data and specific data combinations can be extracted within each department in order to monitor patient treatment. In addition, an annual DAD report serves as a benchmark for departments nationwide. Conclusion: The DAD is covering the

  6. Applying the theory of planned behavior: nursing students' intention to seek clinical experiences using the essential clinical behavior database.

    Science.gov (United States)

    Meyer, Linda

    2002-03-01

    This study examined the antecedents and determinants predictive of whether nursing students (N = 92) intend to ask for assignments to perform nursing behaviors after using a database to record essential clinical behaviors. The results of applying the theory of planned behavior (TPB) to behavioral intention using multivariant path analysis suggested that the endogenous variables, attitude and subjective norms, had a significant effect on the intention to ask for assignments to perform nursing behaviors. In addition, it was primarily through attitudes and subjective norms that the respective antecedents or exogenous variables, behavioral beliefs and normative beliefs, affected the intention to ask for assignments to perform nursing behaviors. The lack of direct influence of perceived behavioral control on intention and the direct negative impact of control belief on intention were contrary to expectations, given the tenets of the TPB.

  7. Mobile Source Observation Database (MSOD)

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Mobile Source Observation Database (MSOD) is a relational database being developed by the Assessment and Standards Division (ASD) of the US Environmental...

  8. Artificial Radionuclides Database in the Pacific Ocean: HAM Database

    Directory of Open Access Journals (Sweden)

    Michio Aoyama

    2004-01-01

    The database "Historical Artificial Radionuclides in the Pacific Ocean and its Marginal Seas", or HAM database, has been created. The database includes 90Sr, 137Cs, and 239,240Pu concentration data from the seawater of the Pacific Ocean and its marginal seas, with some measurements extending from the sea surface to the bottom. The data in the HAM database were collected from about 90 literature citations, which include published papers; annual reports by the Hydrographic Department, Maritime Safety Agency, Japan; and unpublished data provided by individuals. The concentration data for 90Sr, 137Cs, and 239,240Pu cover the period 1957–1998. The present HAM database includes 7737 records for 137Cs concentration data, 3972 records for 90Sr concentration data, and 2666 records for 239,240Pu concentration data. The spatial distribution of sampling stations in the HAM database is heterogeneous: more than 80% of the data for each radionuclide are from the Pacific Ocean and the Sea of Japan, while a relatively small portion of the data is from the South Pacific. The HAM database will allow these radionuclides to be used as significant chemical tracers for oceanographic study as well as for the assessment of the environmental effects of anthropogenic radionuclides over these five decades. Furthermore, these radionuclides can be used to verify oceanic general circulation models on a time scale of several decades.

  9. Utilization of a Clinical Trial Management System for the Whole Clinical Trial Process as an Integrated Database: System Development.

    Science.gov (United States)

    Park, Yu Rang; Yoon, Young Jo; Koo, HaYeong; Yoo, Soyoung; Choi, Chang-Min; Beck, Sung-Ho; Kim, Tae Won

    2018-04-24

    Clinical trials pose potential risks in both communications and management due to the various stakeholders involved when performing clinical trials. The academic medical center has a responsibility and obligation to conduct and manage clinical trials while maintaining a sufficiently high level of quality; it is therefore necessary to build an information technology system to support standardized clinical trial processes and comply with relevant regulations. The objective of the study was to address the challenges identified while performing clinical trials at an academic medical center, Asan Medical Center (AMC) in Korea, by developing and utilizing a clinical trial management system (CTMS) that complies with standardized processes from multiple departments or units, controlled vocabularies, security, and privacy regulations. This study describes the methods, considerations, and recommendations for the development and utilization of the CTMS as a consolidated research database in an academic medical center. A task force was formed to define and standardize the clinical trial performance process at the site level. On the basis of the agreed standardized process, the CTMS was designed and developed as an all-in-one system complying with privacy and security regulations. In this study, the processes and standard mapped vocabularies of a clinical trial were established at the academic medical center. On the basis of these processes and vocabularies, a CTMS was built which interfaces with the existing trial systems such as the electronic institutional review board, health information system, enterprise resource planning, and the barcode system. To protect patient data, the CTMS implements data governance and access rules, and excludes 21 personal health identifiers according to the Health Insurance Portability and Accountability Act (HIPAA) privacy rule and Korean privacy laws.
Since December 2014, the CTMS has been successfully implemented and used by 881 internal and
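    The identifier-exclusion step this record describes can be sketched as a rule-based field filter. The field names below are a small illustrative subset chosen for the example, not the 21 identifiers the CTMS actually excludes:

    ```python
    # Configured set of identifier fields to strip before a record enters
    # the research database (illustrative subset only).
    IDENTIFIER_FIELDS = {"name", "ssn", "phone", "email", "address"}

    def deidentify(record: dict) -> dict:
        """Return a copy of the record with identifier fields removed."""
        return {k: v for k, v in record.items() if k not in IDENTIFIER_FIELDS}

    raw = {"name": "Hong Gildong", "ssn": "123-45-6789",
           "age": 62, "diagnosis": "NSCLC"}
    print(deidentify(raw))  # {'age': 62, 'diagnosis': 'NSCLC'}
    ```

    Keeping the exclusion list as configuration rather than scattering it through application code is what makes it auditable against a regulation's identifier list.
    
    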

  10. Nuclear technology databases and information network systems

    International Nuclear Information System (INIS)

    Iwata, Shuichi; Kikuchi, Yasuyuki; Minakuchi, Satoshi

    1993-01-01

    This paper describes databases related to nuclear science and technology, and information network systems. The following contents are collected in this paper: the databases developed by JAERI, ENERGY NET, ATOM NET, the NUCLEN nuclear information database, INIS, the NUclear Code Information Service (NUCLIS), the Social Application of Nuclear Technology Accumulation project (SANTA), the Nuclear Information Database/Communication System (NICS), the reactor materials database, the radiation effects database, the NucNet European nuclear information database, and the reactor dismantling database. (J.P.N.)

  11. Database security in the cloud

    OpenAIRE

    Sakhi, Imal

    2012-01-01

    The aim of the thesis is to get an overview of the database services available in cloud computing environment, investigate the security risks associated with it and propose the possible countermeasures to minimize the risks. The thesis also analyzes two cloud database service providers namely; Amazon RDS and Xeround. The reason behind choosing these two providers is because they are currently amongst the leading cloud database providers and both provide relational cloud databases which makes ...

  12. Database Description - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    General information: Database name: Trypanosomes Database. Maintainer: National Institute of Genetics, Research Organization of Information and Systems, Yata 1111, Mishima, Shizuoka 411-8540, Japan. Taxonomy: Trypanosoma (Taxonomy ID: 5690); Homo sapiens (Taxonomy ID: 9606). External links: PDB (Protein Data Bank), KEGG PATHWAY Database, DrugPort. Entry list, query search, and web service available.

  13. Computer Aided Design for Soil Classification Relational Database ...

    African Journals Online (AJOL)

    The paper focuses on the problems associated with the classification, storage and retrieval of information on soil data, such as the incompatibility of soil data semantics, inadequate documentation, and lack of indexing, which make it difficult to efficiently access a large database. Consequently, information on soil is very difficult ...

  14. The PMDB Protein Model Database

    Science.gov (United States)

    Castrignanò, Tiziana; De Meo, Paolo D'Onorio; Cozzetto, Domenico; Talamo, Ivano Giuseppe; Tramontano, Anna

    2006-01-01

    The Protein Model Database (PMDB) is a public resource aimed at storing manually built 3D models of proteins. The database is designed to provide access to models published in the scientific literature, together with validating experimental data. It is a relational database and it currently contains >74 000 models for ∼240 proteins. The system is accessible at and allows predictors to submit models along with related supporting evidence and users to download them through a simple and intuitive interface. Users can navigate in the database and retrieve models referring to the same target protein or to different regions of the same protein. Each model is assigned a unique identifier that allows interested users to directly access the data. PMID:16381873

  15. Relationship between drug interactions and drug-related negative clinical outcomes in two community pharmacies

    Directory of Open Access Journals (Sweden)

    Gonzalo M

    2009-03-01

    Drug interactions may represent an iatrogenic risk that should be controlled in community pharmacies at the dispensing level. Aim: We analyzed the association between potential drug-drug interactions (DDIs) and negative clinical outcomes. Methods: We used dispensing data from two community pharmacies: instances where drug dispensing was associated with a potential DDI and a comparison group of randomized dispensing operations with no potential DDI. In cases where potential DDIs were detected, we analyzed the underlying negative clinical outcomes. Age and gender data were included in the analysis. Results: During the study period, we registered 417 potential DDIs. The proportion of women and the mean age were higher in the study group than in the comparison group. The average number of potential DDIs per patient was 1.31 (SD=0.72). The Consejo General de Colegios Oficiales de Farmacéuticos (CGCOF) database did not produce an alert in 2.4% of the cases. Over-the-counter medication use was observed in 5% of the potential DDI cases. The drugs most frequently involved in potential DDIs were acenocoumarol, calcium salts, hydrochlorothiazide, and alendronic acid, whereas the most predominant potential DDIs were calcium salts and bisphosphonates, oral antidiabetics and thiazide diuretics, antidiabetics and glucose, and oral anticoagulants and paracetamol. A drug-related negative clinical outcome was observed in only 0.96% of the potential DDI cases (50% safety cases and 50% effectiveness cases). Conclusions: Only a small proportion of the detected potential DDIs led to negative medication outcomes. Considering the drug-related negative clinical outcomes encountered, tighter control would be recommended for potential DDIs involving NSAIDs or benzodiazepines.
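    Dispensing-level DDI screening of the kind this record evaluates amounts to checking each newly dispensed drug against the patient's current medication list via a pair table. A minimal sketch; the interaction pairs below are illustrative entries suggested by the abstract, not a validated interaction database:

    ```python
    # Toy interaction pair table (unordered pairs, hence frozensets).
    INTERACTIONS = {
        frozenset({"acenocoumarol", "paracetamol"}),
        frozenset({"calcium", "alendronic acid"}),
        frozenset({"hydrochlorothiazide", "metformin"}),
    }

    def potential_ddis(new_drug, current_meds):
        """Return current medications that may interact with the new drug."""
        return [m for m in current_meds
                if frozenset({new_drug, m}) in INTERACTIONS]

    print(potential_ddis("paracetamol", ["acenocoumarol", "omeprazole"]))
    # ['acenocoumarol']
    ```
    
    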

  16. A Case for Database Filesystems

    Energy Technology Data Exchange (ETDEWEB)

    Adams, P A; Hax, J C

    2009-05-13

    Data intensive science is offering new challenges and opportunities for Information Technology and traditional relational databases in particular. Database filesystems offer the potential to store Level Zero data and analyze Level 1 and Level 3 data within the same database system [2]. Scientific data is typically composed of both unstructured files and scalar data. Oracle SecureFiles is a new database filesystem feature in Oracle Database 11g that is specifically engineered to deliver high performance and scalability for storing unstructured or file data inside the Oracle database. SecureFiles presents the best of both the filesystem and the database worlds for unstructured content. Data stored inside SecureFiles can be queried or written at performance levels comparable to that of traditional filesystems while retaining the advantages of the Oracle database.

  17. Research progress in muscle-derived stem cells: Literature retrieval results based on international database.

    Science.gov (United States)

    Zhang, Li; Wang, Wei

    2012-04-05

    To identify global research trends of muscle-derived stem cells (MDSCs) using a bibliometric analysis of the Web of Science, Research Portfolio Online Reporting Tools of the National Institutes of Health (NIH), and the Clinical Trials registry database (ClinicalTrials.gov). We performed a bibliometric analysis of data retrievals for MDSCs from 2002 to 2011 using the Web of Science, NIH, and ClinicalTrials.gov. (1) Web of Science: (a) peer-reviewed articles on MDSCs that were published and indexed in the Web of Science. (b) Type of articles: original research articles, reviews, meeting abstracts, proceedings papers, book chapters, editorial material and news items. (c) Year of publication: 2002-2011. (d) Citation databases: Science Citation Index-Expanded (SCI-E), 1899-present; Conference Proceedings Citation Index-Science (CPCI-S), 1991-present; Book Citation Index-Science (BKCI-S), 2005-present. (2) NIH: (a) Projects on MDSCs supported by the NIH. (b) Fiscal year: 1988-present. (3) ClinicalTrials.gov: All clinical trials relating to MDSCs were searched in this database. (1) Web of Science: (a) Articles that required manual searching or telephone access. (b) We excluded documents that were not published in the public domain. (c) We excluded a number of corrected papers from the total number of articles. (d) We excluded articles from the following databases: Social Sciences Citation Index (SSCI), 1898-present; Arts & Humanities Citation Index (A&HCI), 1975-present; Conference Proceedings Citation Index - Social Science & Humanities (CPCI-SSH), 1991-present; Book Citation Index - Social Sciences & Humanities (BKCI-SSH), 2005-present; Current Chemical Reactions (CCR-EXPANDED), 1985-present; Index Chemicus (IC), 1993-present. (2) NIH: (a) We excluded publications related to MDSCs that were supported by the NIH. (b) We limited the keyword search to studies that included MDSCs within the title or abstract. 
(3) ClinicalTrials.gov: (a) We excluded clinical trials that were

  18. Literature Review and Database of Relations Between Salinity and Aquatic Biota: Applications to Bowdoin National Wildlife Refuge, Montana

    Science.gov (United States)

    Gleason, Robert A.; Tangen, Brian A.; Laubhan, Murray K.; Finocchiaro, Raymond G.; Stamm, John F.

    2009-01-01

    Long-term accumulation of salts in wetlands at Bowdoin National Wildlife Refuge (NWR), Mont., has raised concern among wetland managers that increasing salinity may threaten plant and invertebrate communities that provide important habitat and food resources for migratory waterfowl. Currently, the U.S. Fish and Wildlife Service (USFWS) is evaluating various water management strategies to help maintain suitable ranges of salinity to sustain plant and invertebrate resources of importance to wildlife. To support this evaluation, the USFWS requested that the U.S. Geological Survey (USGS) provide information on salinity ranges of water and soil for common plants and invertebrates on Bowdoin NWR lands. To address this need, we conducted a search of the literature on occurrences of plants and invertebrates in relation to salinity and pH of the water and soil. The compiled literature was used to (1) provide a general overview of salinity concepts, (2) document published tolerances and adaptations of biota to salinity, (3) develop databases that the USFWS can use to summarize the range of reported salinity values associated with plant and invertebrate taxa, and (4) perform database summaries that describe reported salinity ranges associated with plants and invertebrates at Bowdoin NWR. The purpose of this report is to synthesize information to facilitate a better understanding of the ecological relations between salinity and flora and fauna when developing wetland management strategies. A primary focus of this report is to provide information to help evaluate and address salinity issues at Bowdoin NWR; however, the accompanying databases, as well as concepts and information discussed, are applicable to other areas or refuges. The accompanying databases include salinity values reported for 411 plant taxa and 330 invertebrate taxa. 
The databases are available in Microsoft Excel version 2007 (http://pubs.usgs.gov/sir/2009/5098/downloads/databases_21april2009.xls) and contain
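    The kind of database summary described above — collapsing reported observations into a salinity range per taxon — can be sketched as follows. The taxa and values here are invented for illustration and are not taken from the USGS databases.

```python
# Illustrative records: (taxon, reported salinity in parts per thousand).
records = [
    ("Typha latifolia", 0.5), ("Typha latifolia", 3.2),
    ("Chironomus sp.",  1.0), ("Chironomus sp.",  8.5),
]

def salinity_ranges(rows):
    """Summarize reported observations into a (min, max) range per taxon."""
    ranges = {}
    for taxon, ppt in rows:
        lo, hi = ranges.get(taxon, (ppt, ppt))
        ranges[taxon] = (min(lo, ppt), max(hi, ppt))
    return ranges

summary = salinity_ranges(records)
```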

  19. National Database of Geriatrics

    DEFF Research Database (Denmark)

    Kannegaard, Pia Nimann; Vinding, Kirsten L; Hare-Bruun, Helle

    2016-01-01

    AIM OF DATABASE: The aim of the National Database of Geriatrics is to monitor the quality of interdisciplinary diagnostics and treatment of patients admitted to a geriatric hospital unit. STUDY POPULATION: The database population consists of patients who were admitted to a geriatric hospital unit. Geriatric patients cannot be defined by specific diagnoses. A geriatric patient is typically a frail multimorbid elderly patient with decreasing functional ability and social challenges. The database includes 14,000-15,000 admissions per year, and database completeness has been stable at 90% during the past …, the percentage of discharges with a rehabilitation plan, and the proportion of cases where an interdisciplinary conference has taken place. Data are recorded by doctors, nurses, and therapists in a database and linked to the Danish National Patient Register. DESCRIPTIVE DATA: Descriptive patient-related data include …

  20. The ATLAS TAGS database distribution and management - Operational challenges of a multi-terabyte distributed database

    International Nuclear Information System (INIS)

    Viegas, F; Nairz, A; Goossens, L; Malon, D; Cranshaw, J; Dimitrov, G; Nowak, M; Gamboa, C; Gallas, E; Wong, A; Vinek, E

    2010-01-01

    The TAG files store summary event quantities that allow a quick selection of interesting events. This data will be produced at a nominal rate of 200 Hz, and is uploaded into a relational database for access from websites and other tools. The estimated database volume is 6TB per year, making it the largest application running on the ATLAS relational databases, at CERN and at other voluntary sites. The sheer volume and high rate of production makes this application a challenge to data and resource management, in many aspects. This paper will focus on the operational challenges of this system. These include: uploading the data from files to the CERN's and remote sites' databases; distributing the TAG metadata that is essential to guide the user through event selection; controlling resource usage of the database, from the user query load to the strategy of cleaning and archiving of old TAG data.

  1. Database Description - SKIP Stemcell Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available. Database name: SKIP Stemcell Database. Contact address: http://www.skip.med.keio.ac.jp/en/contact/. Database classification: Human Genes and Diseases; Stemcell. Organism: Homo sapiens (Taxonomy ID: 9606). Database maintenance site: Center for Medical Genetics, School of Medicine, … (truncated in source). Web services: not available. Need for user registration: not available.

  2. Database Description - Arabidopsis Phenome Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available. Database name: Arabidopsis Phenome Database. Maintainer: … BioResource Center, Hiroshi Masuya. Database classification: Plant databases - Arabidopsis thaliana. Organism: Arabidopsis thaliana (Taxonomy ID: 3702). Database description (truncated in source): The Arabidopsis thaliana phenome … their effective application. We developed the new Arabidopsis Phenome Database integrating two novel databases … useful materials for their experimental research. The other, the “Database of Curated Plant Phenome”, focusing …

  3. Enhancing Clinical Content and Race/Ethnicity Data in Statewide Hospital Administrative Databases: Obstacles Encountered, Strategies Adopted, and Lessons Learned.

    Science.gov (United States)

    Pine, Michael; Kowlessar, Niranjana M; Salemi, Jason L; Miyamura, Jill; Zingmond, David S; Katz, Nicole E; Schindler, Joe

    2015-08-01

    Eight grant teams used Agency for Healthcare Research and Quality infrastructure development research grants to enhance the clinical content of and improve race/ethnicity identifiers in statewide all-payer hospital administrative databases. Grantees faced common challenges, including recruiting data partners and ensuring their continued effective participation, acquiring and validating the accuracy and utility of new data elements, and linking data from multiple sources to create internally consistent enhanced administrative databases. Successful strategies to overcome these challenges included aggressively engaging with providers of critical sources of data, emphasizing potential benefits to participants, revising requirements to lessen burdens associated with participation, maintaining continuous communication with participants, being flexible when responding to participants' difficulties in meeting program requirements, and paying scrupulous attention to preparing data specifications and creating and implementing protocols for data auditing, validation, cleaning, editing, and linking. In addition to common challenges, grantees also had to contend with unique challenges from local environmental factors that shaped the strategies they adopted. The creation of enhanced administrative databases to support comparative effectiveness research is difficult, particularly in the face of numerous challenges with recruiting data partners such as competing demands on information technology resources. Excellent communication, flexibility, and attention to detail are essential ingredients in accomplishing this task. Additional research is needed to develop strategies for maintaining these databases when initial funding is exhausted. © Health Research and Educational Trust.

  4. Review on management of horticultural plant germplasm resources and construction of related database

    Directory of Open Access Journals (Sweden)

    Pan Jingxian

    2017-02-01

    Full Text Available The advances in databases on horticultural germplasm resources in China and abroad are briefly reviewed, and the key technologies are discussed in detail, especially the descriptors used in collecting germplasm resource data. The prospects and challenges of such databases are also discussed. There is an evident and urgent need to develop databases of horticultural germplasm resources that accommodate the increasing diversity of germplasm and offer more user-friendly and systematic access.

  5. A Methodology, Based on Analytical Modeling, for the Design of Parallel and Distributed Architectures for Relational Database Query Processors.

    Science.gov (United States)

    1987-12-01

    [Figure residue from the scanned source: Figure 2, “Intelligent Disk Controller” (application programs, database management system, operating system, host); Figure 5, “Processor-Per-Head” (database management system, disk data controller, application programs, host).] However, these additional properties have been proven in classical set and relation theory [75]. These additional properties are described here

  6. Comparison of the Frontier Distributed Database Caching System with NoSQL Databases

    CERN Document Server

    Dykstra, David

    2012-01-01

    One of the main attractions of non-relational "NoSQL" databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also has high scalability and wide-area distributability for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.
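    The general idea behind Frontier-style caching of database query results can be illustrated with a minimal in-process read-through cache. This is only an analogy under assumed semantics: the real Frontier system distributes reads through HTTP proxy caches, not a Python dictionary.

```python
import time

class ReadThroughCache:
    """Read-through cache with per-entry expiry, in the spirit of
    caching layers that shield a central SQL database from many readers."""

    def __init__(self, backend, ttl_seconds=60.0):
        self.backend = backend          # callable: query -> result
        self.ttl = ttl_seconds
        self._cache = {}                # query -> (expiry_time, result)
        self.misses = 0                 # backend loads actually performed

    def get(self, query):
        entry = self._cache.get(query)
        if entry and entry[0] > time.monotonic():
            return entry[1]             # cache hit: no load on the backend
        self.misses += 1
        result = self.backend(query)
        self._cache[query] = (time.monotonic() + self.ttl, result)
        return result

# Two identical queries cause only one backend load.
cache = ReadThroughCache(lambda q: q.upper())
first, second = cache.get("run1"), cache.get("run1")
```

    The trade-off the paper weighs is essentially this one: cached SQL scales reads well when results are shared and expire predictably, while NoSQL stores scale writes and flexible schemas.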

  7. UnoViS: the MedIT public unobtrusive vital signs database.

    Science.gov (United States)

    Wartzek, Tobias; Czaplik, Michael; Antink, Christoph Hoog; Eilebrecht, Benjamin; Walocha, Rafael; Leonhardt, Steffen

    2015-01-01

    While PhysioNet is a large database for standard clinical vital signs measurements, such a database does not exist for unobtrusively measured signals. This inhibits progress in the vital area of signal processing for unobtrusive medical monitoring as not everybody owns the specific measurement systems to acquire signals. Furthermore, if no common database exists, a comparison between different signal processing approaches is not possible. This gap will be closed by our UnoViS database. It contains different recordings in various scenarios ranging from a clinical study to measurements obtained while driving a car. Currently, 145 records with a total of 16.2 h of measurement data is available, which are provided as MATLAB files or in the PhysioNet WFDB file format. In its initial state, only (multichannel) capacitive ECG and unobtrusive PPG signals are, together with a reference ECG, included. All ECG signals contain annotations by a peak detector and by a medical expert. A dataset from a clinical study contains further clinical annotations. Additionally, supplementary functions are provided, which simplify the usage of the database and thus the development and evaluation of new algorithms. The development of urgently needed methods for very robust parameter extraction or robust signal fusion in view of frequent severe motion artifacts in unobtrusive monitoring is now possible with the database.

  8. Authority Control and Linked Bibliographic Databases.

    Science.gov (United States)

    Clack, Doris H.

    1988-01-01

    Explores issues related to bibliographic database authority control, including the nature of standards, quality control, library cooperation, centralized and decentralized databases and authority control systems, and economic considerations. The implications of authority control for linking large scale databases are discussed. (18 references)…

  9. Ontology to relational database transformation for web application development and maintenance

    Science.gov (United States)

    Mahmudi, Kamal; Inggriani Liem, M. M.; Akbar, Saiful

    2018-03-01

    Ontology is used as knowledge representation while database is used as facts recorder in a KMS (Knowledge Management System). In most applications, data are managed in a database system and updated through the application and then they are transformed to knowledge as needed. Once a domain conceptor defines the knowledge in the ontology, application and database can be generated from the ontology. Most existing frameworks generate application from its database. In this research, ontology is used for generating the application. As the data are updated through the application, a mechanism is designed to trigger an update to the ontology so that the application can be rebuilt based on the newest ontology. By this approach, a knowledge engineer has a full flexibility to renew the application based on the latest ontology without dependency to a software developer. In many cases, the concept needs to be updated when the data changed. The framework is built and tested in a spring java environment. A case study was conducted to proof the concepts.
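    The class-to-table direction of such a transformation can be sketched in a few lines. This is an illustrative reduction, not the paper's framework: each ontology class becomes a table and each datatype property a column, and all names below are hypothetical.

```python
# Minimal ontology: class name -> {datatype property: SQL column type}.
ONTOLOGY = {
    "Employee": {"name": "TEXT", "salary": "REAL"},
    "Project":  {"title": "TEXT", "budget": "REAL"},
}

def ontology_to_ddl(ontology):
    """Generate one CREATE TABLE statement per ontology class."""
    stmts = []
    for cls, props in ontology.items():
        cols = ", ".join(f"{p} {t}" for p, t in props.items())
        stmts.append(f"CREATE TABLE {cls} (id INTEGER PRIMARY KEY, {cols});")
    return stmts

ddl = ontology_to_ddl(ONTOLOGY)
```

    A full system would also map object properties to foreign keys and, as the paper describes, trigger regeneration when the ontology changes.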

  10. Spatial distribution of clinical computer systems in primary care in England in 2016 and implications for primary care electronic medical record databases: a cross-sectional population study.

    Science.gov (United States)

    Kontopantelis, Evangelos; Stevens, Richard John; Helms, Peter J; Edwards, Duncan; Doran, Tim; Ashcroft, Darren M

    2018-02-28

    UK primary care databases (PCDs) are used by researchers worldwide to inform clinical practice. These databases have been primarily tied to single clinical computer systems, but little is known about the adoption of these systems by primary care practices or their geographical representativeness. We explore the spatial distribution of clinical computing systems and discuss the implications for the longevity and regional representativeness of these resources. Cross-sectional study. English primary care clinical computer systems. 7526 general practices in August 2016. Spatial mapping of family practices in England in 2016 by clinical computer system at two geographical levels, the lower Clinical Commissioning Group (CCG, 209 units) and the higher National Health Service regions (14 units). Data for practices included numbers of doctors, nurses and patients, and area deprivation. Of 7526 practices, Egton Medical Information Systems (EMIS) was used in 4199 (56%), SystmOne in 2552 (34%) and Vision in 636 (9%). Great regional variability was observed for all systems, with EMIS having a stronger presence in the West of England, London and the South; SystmOne in the East and some regions in the South; and Vision in London, the South, Greater Manchester and Birmingham. PCDs based on single clinical computer systems are geographically clustered in England. For example, Clinical Practice Research Datalink and The Health Improvement Network, the most popular primary care databases in terms of research outputs, are based on the Vision clinical computer system, used by <10% of practices and heavily concentrated in three major conurbations and the South. Researchers need to be aware of the analytical challenges posed by clustering, and barriers to accessing alternative PCDs need to be removed. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. 
No commercial use is permitted unless otherwise expressly granted.

  11. DB90: A Fortran Callable Relational Database Routine for Scientific and Engineering Computer Programs

    Science.gov (United States)

    Wrenn, Gregory A.

    2005-01-01

    This report describes a database routine called DB90 which is intended for use with scientific and engineering computer programs. The software is written in the Fortran 90/95 programming language standard with file input and output routines written in the C programming language. These routines should be completely portable to any computing platform and operating system that has Fortran 90/95 and C compilers. DB90 allows a program to supply relation names and up to 5 integer key values to uniquely identify each record of each relation. This permits the user to select records or retrieve data in any desired order.
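    The access pattern described — a relation name plus up to five integer key values uniquely identifying each record — can be mimicked in Python. This sketch reproduces only the interface idea, not the Fortran 90/95 implementation; the class and field names are invented.

```python
class KeyedStore:
    """Records addressed by relation name plus up to 5 integer keys,
    in the style of the DB90 interface described above."""

    def __init__(self):
        self._relations = {}            # relation -> {key tuple: record}

    def put(self, relation, keys, record):
        assert 1 <= len(keys) <= 5, "up to 5 integer keys per record"
        self._relations.setdefault(relation, {})[tuple(keys)] = record

    def get(self, relation, keys):
        return self._relations[relation][tuple(keys)]

# Store and retrieve a record by relation name and two integer keys.
store = KeyedStore()
store.put("loads", (1, 2), {"fx": 10.0})
```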

  12. Conceptual considerations for CBM databases

    Energy Technology Data Exchange (ETDEWEB)

    Akishina, E. P.; Aleksandrov, E. I.; Aleksandrov, I. N.; Filozova, I. A.; Ivanov, V. V.; Zrelov, P. V. [Lab. of Information Technologies, JINR, Dubna (Russian Federation); Friese, V.; Mueller, W. [GSI, Darmstadt (Germany)

    2014-07-01

    We consider a concept of databases for the CBM experiment. For this purpose, an analysis of the databases for large experiments at the LHC at CERN has been performed. Special features of various DBMS utilized in physical experiments, including relational and object-oriented DBMS as the most applicable ones for the tasks of these experiments, were analyzed. A set of databases for the CBM experiment, DBMS for their development, as well as use cases for the considered databases are suggested.

  13. Conceptual considerations for CBM databases

    International Nuclear Information System (INIS)

    Akishina, E.P.; Aleksandrov, E.I.; Aleksandrov, I.N.; Filozova, I.A.; Ivanov, V.V.; Zrelov, P.V.; Friese, V.; Mueller, W.

    2014-01-01

    We consider a concept of databases for the CBM experiment. For this purpose, an analysis of the databases for large experiments at the LHC at CERN has been performed. Special features of various DBMS utilized in physical experiments, including relational and object-oriented DBMS as the most applicable ones for the tasks of these experiments, were analyzed. A set of databases for the CBM experiment, DBMS for their development, as well as use cases for the considered databases are suggested.

  14. Mobile Source Observation Database (MSOD)

    Science.gov (United States)

    The Mobile Source Observation Database (MSOD) is a relational database developed by the Assessment and Standards Division (ASD) of the U.S. EPA Office of Transportation and Air Quality (formerly the Office of Mobile Sources).

  15. The ATLAS TAGS database distribution and management - Operational challenges of a multi-terabyte distributed database

    Energy Technology Data Exchange (ETDEWEB)

    Viegas, F; Nairz, A; Goossens, L [CERN, CH-1211 Geneve 23 (Switzerland); Malon, D; Cranshaw, J [Argonne National Laboratory, 9700 S. Cass Avenue, Argonne, IL 60439 (United States); Dimitrov, G [DESY, D-22603 Hamburg (Germany); Nowak, M; Gamboa, C [Brookhaven National Laboratory, PO Box 5000 Upton, NY 11973-5000 (United States); Gallas, E [University of Oxford, Denys Wilkinson Building, Keble Road, Oxford OX1 3RH (United Kingdom); Wong, A [Triumf, 4004 Wesbrook Mall, Vancouver, BC, V6T 2A3 (Canada); Vinek, E [University of Vienna, Dr.-Karl-Lueger-Ring 1, 1010 Vienna (Austria)

    2010-04-01

    The TAG files store summary event quantities that allow a quick selection of interesting events. This data will be produced at a nominal rate of 200 Hz, and is uploaded into a relational database for access from websites and other tools. The estimated database volume is 6TB per year, making it the largest application running on the ATLAS relational databases, at CERN and at other voluntary sites. The sheer volume and high rate of production makes this application a challenge to data and resource management, in many aspects. This paper will focus on the operational challenges of this system. These include: uploading the data from files to the CERN's and remote sites' databases; distributing the TAG metadata that is essential to guide the user through event selection; controlling resource usage of the database, from the user query load to the strategy of cleaning and archiving of old TAG data.

  16. Databases and bookkeeping for HEP experiments

    International Nuclear Information System (INIS)

    Blobel, V.; Cnops, A.-M.; Fisher, S.M.

    1983-09-01

    The term database is explained, as well as the requirements for databases in High Energy Physics (HEP). Also covered are the packages used in HEP, a summary of user experience, database management systems, relational database management systems for HEP use, and observations. (U.K.)

  17. A high performance, ad-hoc, fuzzy query processing system for relational databases

    Science.gov (United States)

    Mansfield, William H., Jr.; Fleischman, Robert M.

    1992-01-01

    Database queries involving imprecise or fuzzy predicates are currently an evolving area of academic and industrial research. Such queries place severe stress on the indexing and I/O subsystems of conventional database environments since they involve the search of large numbers of records. The Datacycle architecture and research prototype is a database environment that uses filtering technology to perform an efficient, exhaustive search of an entire database. It has recently been modified to include fuzzy predicates in its query processing. The approach obviates the need for complex index structures, provides unlimited query throughput, permits the use of ad-hoc fuzzy membership functions, and provides a deterministic response time largely independent of query complexity and load. This paper describes the Datacycle prototype implementation of fuzzy queries and some recent performance results.
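    An ad-hoc fuzzy predicate applied in an exhaustive scan, as in the approach above, can be sketched as follows. The actual Datacycle prototype filters records in hardware; this in-memory version, with an invented membership function and data, only shows the query semantics.

```python
def tall(height_cm):
    """Trapezoidal fuzzy membership: 0 below 160 cm, 1 above 185 cm,
    linear in between (an ad-hoc, user-supplied function)."""
    return min(1.0, max(0.0, (height_cm - 160) / 25))

people = [("ann", 158), ("bob", 172), ("cho", 190)]

# Exhaustive scan of every record: keep those whose membership degree
# exceeds a threshold, with the degree returned alongside the match.
matches = [(name, round(tall(h), 2)) for name, h in people if tall(h) > 0.3]
```

    Because every record is scanned, no index structure is needed and response time depends on data volume rather than on the complexity of the fuzzy predicate.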

  18. MammoGrid: a mammography database

    CERN Multimedia

    2002-01-01

    What would be the advantages if physicians around the world could gain access to a unique mammography database? The answer may come from MammoGrid, a three-year project under the Fifth Framework Programme of the EC. Led by CERN, MammoGrid involves the UK (the Universities of Oxford, Cambridge and the West of England, Bristol, plus the company Mirada Solutions of Oxford), and Italy (the Universities of Pisa and Sassari and the Hospitals in Udine and Torino). The aim of the project is, in light of emerging GRID technology, to develop a Europe-wide database of mammograms. The database will be used to investigate a set of important healthcare applications as well as the potential of the GRID to enable healthcare professionals throughout the EU to work together effectively. The contributions of the partners include building the GRID-database infrastructure, developing image processing and Computer Aided Detection techniques, and making the clinical evaluation. The first project meeting took place at CERN in Sept...

  19. REDIdb: the RNA editing database.

    Science.gov (United States)

    Picardi, Ernesto; Regina, Teresa Maria Rosaria; Brennicke, Axel; Quagliariello, Carla

    2007-01-01

    The RNA Editing Database (REDIdb) is an interactive, web-based database created and designed with the aim to allocate RNA editing events such as substitutions, insertions and deletions occurring in a wide range of organisms. The database contains both fully and partially sequenced DNA molecules for which editing information is available either by experimental inspection (in vitro) or by computational detection (in silico). Each record of REDIdb is organized in a specific flat-file containing a description of the main characteristics of the entry, a feature table with the editing events and related details and a sequence zone with both the genomic sequence and the corresponding edited transcript. REDIdb is a relational database in which the browsing and identification of editing sites has been simplified by means of two facilities to either graphically display genomic or cDNA sequences or to show the corresponding alignment. In both cases, all editing sites are highlighted in colour and their relative positions are detailed by mousing over. New editing positions can be directly submitted to REDIdb after a user-specific registration to obtain authorized secure access. This first version of REDIdb database stores 9964 editing events and can be freely queried at http://biologia.unical.it/py_script/search.html.

  20. Comparison of the Frontier Distributed Database Caching System to NoSQL Databases

    Science.gov (United States)

    Dykstra, Dave

    2012-12-01

    One of the main attractions of non-relational “NoSQL” databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also adds high scalability and the ability to be distributed over a wide-area for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.

  1. Comparison of the Frontier Distributed Database Caching System to NoSQL Databases

    International Nuclear Information System (INIS)

    Dykstra, Dave

    2012-01-01

    One of the main attractions of non-relational “NoSQL” databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also adds high scalability and the ability to be distributed over a wide-area for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.

  2. Comparison of the Frontier Distributed Database Caching System to NoSQL Databases

    Energy Technology Data Exchange (ETDEWEB)

    Dykstra, Dave [Fermilab

    2012-07-20

    One of the main attractions of non-relational NoSQL databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also adds high scalability and the ability to be distributed over a wide-area for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.

  3. Dansk kolorektal Cancer Database

    DEFF Research Database (Denmark)

    Harling, Henrik; Nickelsen, Thomas

    2005-01-01

    The Danish Colorectal Cancer Database was established in 1994 with the purpose of monitoring whether diagnostic and surgical principles specified in the evidence-based national guidelines of good clinical practice were followed. Twelve clinical indicators have been listed by the Danish Colorectal Cancer Group, and the performance of each hospital surgical department with respect to these indicators is reported annually. In addition, the register contains a large collection of data that provide valuable information on the influence of comorbidity and lifestyle factors on disease outcome …

  4. Data management and language enhancement for generalized set theory computer language for operation of large relational databases

    Science.gov (United States)

    Finley, Gail T.

    1988-01-01

    This report covers the study of the relational database implementation in the NASCAD computer program system. The existing system is used primarily for computer aided design. Attention is also directed to a hidden-surface algorithm for final drawing output.

  5. Extracting meronomy relations from domain-specific, textual corporate databases

    NARCIS (Netherlands)

    Ittoo, R.A.; Bouma, G.; Maruster, L.; Wortmann, J.C.; Hopfe, C.J.; Rezgui, Y.; Métais, E.; Preece, A.; Li, H.

    2010-01-01

    Various techniques for learning meronymy relationships from open-domain corpora exist. However, extracting meronymy relationships from domain-specific, textual corporate databases has been overlooked, despite numerous application opportunities particularly in domains like product development and/or

  6. Towards Sensor Database Systems

    DEFF Research Database (Denmark)

    Bonnet, Philippe; Gehrke, Johannes; Seshadri, Praveen

    2001-01-01

    These systems lack flexibility because data is extracted in a predefined way; also, they do not scale to a large number of devices because large volumes of raw data are transferred regardless of the queries that are submitted. In our new concept of sensor database system, queries dictate which data is extracted from the sensors. In this paper, we define the concept of sensor databases mixing stored data represented as relations and sensor data represented as time series. Each long-running query formulated over a sensor database defines a persistent view, which is maintained during a given time interval. We also describe the design and implementation of the COUGAR sensor database system.

  7. Database Dictionary for Ethiopian National Ground-Water Database (ENGDA) Data Fields

    Science.gov (United States)

    Kuniansky, Eve L.; Litke, David W.; Tucci, Patrick

    2007-01-01

    Introduction This document describes the data fields that are used for both field forms and the Ethiopian National Ground-water Database (ENGDA) tables associated with information stored about production wells, springs, test holes, test wells, and water level or water-quality observation wells. Several different words are used in this database dictionary and in the ENGDA database to describe a narrow shaft constructed in the ground. The most general term is borehole, which is applicable to any type of hole. A well is a borehole specifically constructed to extract water from the ground; however, for this data dictionary and for the ENGDA database, the words well and borehole are used interchangeably. A production well is defined as any well used for water supply and includes hand-dug wells, small-diameter bored wells equipped with hand pumps, or large-diameter bored wells equipped with large-capacity motorized pumps. Test holes are borings made to collect information about the subsurface with continuous core or non-continuous core and/or where geophysical logs are collected. Test holes are not converted into wells. A test well is a well constructed for hydraulic testing of an aquifer in order to plan a larger ground-water production system. A water-level or water-quality observation well is a well that is used to collect information about an aquifer and not used for water supply. A spring is any naturally flowing, local, ground-water discharge site. The database dictionary is designed to help define all fields on both field data collection forms (provided in attachment 2 of this report) and for the ENGDA software screen entry forms (described in Litke, 2007). The data entered into each screen entry field are stored in relational database tables within the computer database. The organization of the database dictionary is designed based on field data collection and the field forms, because this is what the majority of people will use. After each field, however, the

  8. Clinical records anonymisation and text extraction (CRATE): an open-source software system.

    Science.gov (United States)

    Cardinal, Rudolf N

    2017-04-26

    Electronic medical records contain information of value for research, but contain identifiable and often highly sensitive confidential information. Patient-identifiable information cannot in general be shared outside clinical care teams without explicit consent, but anonymisation/de-identification allows research uses of clinical data without explicit consent. This article presents CRATE (Clinical Records Anonymisation and Text Extraction), an open-source software system with separable functions: (1) it anonymises or de-identifies arbitrary relational databases, with sensitivity and precision similar to previous comparable systems; (2) it uses public secure cryptographic methods to map patient identifiers to research identifiers (pseudonyms); (3) it connects relational databases to external tools for natural language processing; (4) it provides a web front end for research and administrative functions; and (5) it supports a specific model through which patients may consent to be contacted about research. Creation and management of a research database from sensitive clinical records with secure pseudonym generation, full-text indexing, and a consent-to-contact process is possible and practical using entirely free and open-source software.
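    The pseudonym mapping described in point (2) can be illustrated with a keyed hash. The sketch below shows the general technique (an HMAC over a patient identifier), not CRATE's actual code; the key and the identifier format are invented for the example.

```python
import hmac
import hashlib

# Hypothetical secret key; in a real deployment this would be generated
# securely and stored apart from the clinical database.
SECRET_KEY = b"replace-with-a-long-random-key"

def pseudonymise(patient_id: str, key: bytes = SECRET_KEY) -> str:
    """Map a patient identifier to a stable research pseudonym.

    Uses HMAC-SHA256, a public cryptographic construction: the same
    input always yields the same pseudonym under a given key, but the
    mapping cannot be reversed without that key.
    """
    digest = hmac.new(key, patient_id.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]  # truncated for readability

rid1 = pseudonymise("NHS1234567")   # invented identifier
rid2 = pseudonymise("NHS1234567")
rid3 = pseudonymise("NHS7654321")
```

    Because the mapping is deterministic per key, the same patient receives the same research ID across all tables, while recovering the original identifier requires the secret key.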

  9. Comparison of the Frontier Distributed Database Caching System with NoSQL Databases

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Non-relational "NoSQL" databases such as Cassandra and CouchDB are best known for their ability to scale to large numbers of clients spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects, is based on traditional SQL databases but also has the same high scalability and wide-area distributability for an important subset of applications. This paper compares the architectures, behavior, performance, and maintainability of the two different approaches and identifies the criteria for choosing which approach to prefer over the other.

  10. CORAL Server and CORAL Server Proxy: Scalable Access to Relational Databases from CORAL Applications

    CERN Document Server

    Valassi, A; Kalkhof, A; Salnikov, A; Wache, M

    2011-01-01

    The CORAL software is widely used at CERN for accessing the data stored by the LHC experiments using relational database technologies. CORAL provides a C++ abstraction layer that supports data persistency for several backends and deployment models, including local access to SQLite files, direct client access to Oracle and MySQL servers, and read-only access to Oracle through the FroNTier web server and cache. Two new components have recently been added to CORAL to implement a model involving a middle tier "CORAL server" deployed close to the database and a tree of "CORAL server proxy" instances, with data caching and multiplexing functionalities, deployed close to the client. The new components are meant to provide advantages for read-only and read-write data access, in both offline and online use cases, in the areas of scalability and performance (multiplexing for several incoming connections, optional data caching) and security (authentication via proxy certificates). A first implementation of the two new c...

  11. Loss of job-related right to healthcare is associated with reduced quality and clinical outcomes of diabetic patients in Mexico.

    Science.gov (United States)

    Doubova, Svetlana V; Borja-Aburto, Víctor Hugo; Guerra-Y-Guerra, Germán; Salgado-de-Snyder, V Nelly; González-Block, Miguel Ángel

    2018-05-01

    The Mexican Institute of Social Security (IMSS) provides a package of health, economic and social benefits to workers employed in private firms within the formal labour market and to their economic dependants. Affiliates have a right to these benefits only while they remain contracted, thus posing a risk for the continuity of healthcare. This study evaluates the association between the time (in days) without the right to healthcare due to job loss in the formal labour market and the quality of healthcare and clinical outcomes among IMSS affiliates with Type 2 diabetes mellitus (T2DM). Design: retrospective cohort study, 2013-2015. Setting: six IMSS family medicine clinics (FMC) in Mexico City. Participants: T2DM patients (n = 27 217) affiliated with job-related health insurance and at least one consultation with a family doctor during 2013. Data sources: IMSS affiliation department database, electronic health records and clinical laboratory databases. Outcome measures: quality of the processes (eight indicators) and outcomes (three indicators) of healthcare. The results indicated that losing the IMSS right to healthcare is frequent, occurring to one-third of T2DM patients during the follow-up period. The time without the right to healthcare in the observed period was 120 days on average and was associated with a 43.2% loss of quality of care and a 19.2% reduction in clinical outcomes of T2DM. Policies aimed at ensuring access and continuity of care, regardless of job status, are critical for improving the quality of processes and outcomes of healthcare for diabetic patients.

  12. RAACFDb: Rheumatoid arthritis ayurvedic classical formulations database.

    Science.gov (United States)

    Mohamed Thoufic Ali, A M; Agrawal, Aakash; Sajitha Lulu, S; Mohana Priya, A; Vino, S

    2017-02-02

    In the past years, the treatment of rheumatoid arthritis (RA) has undergone remarkable changes in all therapeutic modes. The present trend in clinical research is to determine and pick a new track for better treatment options for RA. Recent ethnopharmacological investigations revealed that traditional herbal remedies are the most preferred modality of complementary and alternative medicine (CAM). However, several ayurvedic modes of treatment and formulations for RA are not much studied or documented in the Indian traditional system of medicine. This directed us to develop an integrated database, RAACFDb (acronym: Rheumatoid Arthritis Ayurvedic Classical Formulations Database), by consolidating data from the repository of Vedic Samhita - The Ayurveda - to retrieve the available formulation information easily. Literature data were gathered using several search engines and from ayurvedic practitioners for loading information into the database. In order to represent the collected information about classical ayurvedic formulations, an integrated database was constructed and implemented on a MySQL and PHP back-end. The database describes all the ayurvedic classical formulations for the treatment of rheumatoid arthritis. It includes composition, usage, plant parts used, active ingredients present in the composition and their structures. The prime objective is to locate ayurvedic formulations proven to be quite successful and highly effective among patients, with reduced side effects. The database (freely available at www.beta.vit.ac.in/raacfdb/index.html) hopefully enables easy access for clinical researchers and students to discover novel leads with reduced side effects. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  13. Sting_RDB: a relational database of structural parameters for protein analysis with support for data warehousing and data mining.

    Science.gov (United States)

    Oliveira, S R M; Almeida, G V; Souza, K R R; Rodrigues, D N; Kuser-Falcão, P R; Yamagishi, M E B; Santos, E H; Vieira, F D; Jardine, J G; Neshich, G

    2007-10-05

    An effective strategy for managing protein databases is to provide mechanisms to transform raw data into consistent, accurate and reliable information. Such mechanisms will greatly reduce operational inefficiencies and improve one's ability to better handle scientific objectives and interpret the research results. To achieve this challenging goal for the STING project, we introduce Sting_RDB, a relational database of structural parameters for protein analysis with support for data warehousing and data mining. In this article, we highlight the main features of Sting_RDB and show how a user can explore it for efficient and biologically relevant queries. Considering its importance for molecular biologists, effort has been made to advance Sting_RDB toward data quality assessment. To the best of our knowledge, Sting_RDB is one of the most comprehensive data repositories for protein analysis, now also capable of providing its users with a data quality indicator. This paper differs from our previous study in many aspects. First, we introduce Sting_RDB, a relational database with mechanisms for efficient and relevant queries using SQL. Sting_RDB evolved from the earlier text (flat file)-based database, in which data consistency and integrity were not guaranteed. Second, we provide support for data warehousing and mining. Third, a data quality indicator was introduced. Finally, and probably most importantly, complex queries that could not be posed on a text-based database are now easily implemented. Further details are accessible at the Sting_RDB demo web page: http://www.cbi.cnptia.embrapa.br/StingRDB.

  14. Brain Stroke Detection by Microwaves Using Prior Information from Clinical Databases

    Directory of Open Access Journals (Sweden)

    Natalia Irishina

    2013-01-01

    Microwave tomographic imaging is an inexpensive, noninvasive modality for reconstructing the dielectric properties of media, which can be used as a screening method in clinical applications such as breast cancer and brain stroke detection. For breast cancer detection, the iterative algorithm of structural inversion with level sets provides well-defined boundaries and incorporates an intrinsic regularization, which permits the discovery of small lesions. However, in the case of brain lesions, the inverse problem is much more difficult due to the skull, which causes low microwave penetration and highly noisy data. In addition, cerebral liquid has dielectric properties similar to those of blood, which makes the inversion more complicated. Nevertheless, the contrast in the conductivity and permittivity values in this situation is significant because blood has high dielectric values compared to those of the surrounding grey and white matter tissues. We show that using brain MRI images as prior information about the brain's configuration, along with known brain dielectric properties and the intrinsic regularization provided by structural inversion, allows successful and rapid stroke detection even in difficult cases. The method has been applied to 2D slices created from a database of 3D real MRI phantom images to effectively detect lesions larger than 2.5 × 10⁻² m in diameter.

  15. Draft secure medical database standard.

    Science.gov (United States)

    Pangalos, George

    2002-01-01

    Medical database security is a particularly important issue for all healthcare establishments. Medical information systems are intended to support a wide range of pertinent health issues today, for example: assuring the quality of care, supporting effective management of health services institutions, monitoring and containing the cost of care, implementing technology into care without violating social values, ensuring the equity and availability of care, and preserving humanity despite the proliferation of technology. In this context, medical database security aims primarily to support: high availability, accuracy and consistency of the stored data; medical professional secrecy and confidentiality; and the protection of the patient's privacy. These properties, though of a technical nature, basically require that the system is actually helpful for medical care and not harmful to patients. The latter properties require in turn not only that fundamental ethical principles are not violated by employing database systems, but that they are effectively enforced by technical means. This document reviews the existing and emerging work on the security of medical database systems. It presents in detail the problems and requirements related to medical database security. It addresses the problems of medical database security policies, secure design methodologies and implementation techniques. It also describes the current legal framework and regulatory requirements for medical database security. The issue of medical database security guidelines is also examined in detail. The current national and international efforts in the area are studied, and an overview of research work in the area is given. The document also presents in detail the most complete set of security guidelines known to us for the development and operation of medical database systems.

  16. The magnet components database system

    International Nuclear Information System (INIS)

    Baggett, M.J.; Leedy, R.; Saltmarsh, C.; Tompkins, J.C.

    1990-01-01

    The philosophy, structure, and usage of MagCom, the SSC magnet components database, are described. The database has been implemented in Sybase (a powerful relational database management system) on a UNIX-based workstation at the Superconducting Super Collider Laboratory (SSCL); magnet project collaborators can access the database via network connections. The database was designed to contain the specifications and measured values of important properties for major materials, plus configuration information (specifying which individual items were used in each cable, coil, and magnet) and the test results on completed magnets. The data will facilitate the tracking and control of the production process as well as the correlation of magnet performance with the properties of its constituents. 3 refs., 9 figs

  17. The magnet components database system

    International Nuclear Information System (INIS)

    Baggett, M.J.; Leedy, R.; Saltmarsh, C.; Tompkins, J.C.

    1990-01-01

    The philosophy, structure, and usage of MagCom, the SSC magnet components database, are described. The database has been implemented in Sybase (a powerful relational database management system) on a UNIX-based workstation at the Superconducting Super Collider Laboratory (SSCL); magnet project collaborators can access the database via network connections. The database was designed to contain the specifications and measured values of important properties for major materials, plus configuration information (specifying which individual items were used in each cable, coil, and magnet) and the test results on completed magnets. These data will facilitate the tracking and control of the production process as well as the correlation of magnet performance with the properties of its constituents. 3 refs., 10 figs

  18. A perspective for biomedical data integration: Design of databases for flow cytometry

    Directory of Open Access Journals (Sweden)

    Lakoumentas John

    2008-02-01

    Background: The integration of biomedical information is essential for tackling medical problems. We describe a data model in the domain of flow cytometry (FC) allowing for massive management, analysis and integration with other laboratory and clinical information. The paper is concerned with the proper translation of the Flow Cytometry Standard (FCS) into a relational database schema, in a way that facilitates end users in either doing research on FC or studying specific cases of patients who have undergone FC analysis. Results: The proposed database schema provides integration of data originating from diverse acquisition settings, organized in a way that allows syntactically simple queries that provide results significantly faster than the conventional implementations of the FCS standard. The proposed schema can potentially achieve up to 8 orders of magnitude reduction in query complexity and up to 2 orders of magnitude reduction in response time for data originating from flow cytometers that record 256 colours. This is mainly achieved by maintaining an almost constant number of data-mining procedures regardless of the size and complexity of the stored information. Conclusion: It is evident that using single-file data storage standards for the design of databases without any structural transformations significantly limits the flexibility of databases. Analysis of the requirements of a specific domain for integration and massive data processing can provide the necessary schema modifications that will unlock the additional functionality of a relational database.
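    A minimal sketch of the kind of relational translation the paper argues for, using SQLite: per-event measurements stored as rows rather than as opaque FCS files, so that aggregate questions become simple SQL. The table and column names here are illustrative assumptions, not the schema proposed by the authors.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# One row per acquisition (sample), one row per measured parameter,
# and one row per (event, parameter) measurement, instead of one
# opaque FCS file per acquisition.
cur.executescript("""
CREATE TABLE sample (
    sample_id   INTEGER PRIMARY KEY,
    patient_ref TEXT,
    acquired_at TEXT
);
CREATE TABLE parameter (
    param_id  INTEGER PRIMARY KEY,
    sample_id INTEGER REFERENCES sample(sample_id),
    name      TEXT   -- e.g. a scatter or fluorescence channel
);
CREATE TABLE measurement (
    sample_id INTEGER,
    event_no  INTEGER,
    param_id  INTEGER REFERENCES parameter(param_id),
    value     REAL
);
""")

cur.execute("INSERT INTO sample VALUES (1, 'case-042', '2008-02-01')")
cur.execute("INSERT INTO parameter VALUES (1, 1, 'FSC')")
cur.executemany(
    "INSERT INTO measurement VALUES (1, ?, 1, ?)",
    [(1, 512.0), (2, 498.5), (3, 530.2)],
)

# A syntactically simple query over the integrated data:
cur.execute("""
    SELECT COUNT(*), AVG(m.value)
    FROM measurement m
    JOIN parameter p ON p.param_id = m.param_id
    WHERE p.name = 'FSC'
""")
n_events, mean_fsc = cur.fetchone()
```

    Once events are rows, questions spanning many acquisitions (per-channel statistics, cross-sample comparisons) need no custom FCS parsing code at query time.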

  19. Translational Bioinformatics and Clinical Research (Biomedical) Informatics.

    Science.gov (United States)

    Sirintrapun, S Joseph; Zehir, Ahmet; Syed, Aijazuddin; Gao, JianJiong; Schultz, Nikolaus; Cheng, Donavan T

    2015-06-01

    Translational bioinformatics and clinical research (biomedical) informatics are the primary domains related to informatics activities that support translational research. Translational bioinformatics focuses on computational techniques in genetics, molecular biology, and systems biology. Clinical research (biomedical) informatics involves the use of informatics in discovery and management of new knowledge relating to health and disease. This article details 3 projects that are hybrid applications of translational bioinformatics and clinical research (biomedical) informatics: The Cancer Genome Atlas, the cBioPortal for Cancer Genomics, and the Memorial Sloan Kettering Cancer Center clinical variants and results database, all designed to facilitate insights into cancer biology and clinical/therapeutic correlations. Copyright © 2015 Elsevier Inc. All rights reserved.

  20. The Landscape of Clinical Trials Evaluating the Theranostic Role of PET Imaging in Oncology: Insights from an Analysis of ClinicalTrials.gov Database

    Science.gov (United States)

    Chen, Yu-Pei; Lv, Jia-Wei; Liu, Xu; Zhang, Yuan; Guo, Ying; Lin, Ai-Hua; Sun, Ying; Mao, Yan-Ping; Ma, Jun

    2017-01-01

    In the war on cancer marked by personalized medicine, positron emission tomography (PET)-based theranostic strategy is playing an increasingly important role. Well-designed clinical trials are of great significance for validating the PET applications and ensuring evidence-based cancer care. This study aimed to provide a comprehensive landscape of the characteristics of PET clinical trials using the substantial resource of ClinicalTrials.gov database. We identified 25,599 oncology trials registered with ClinicalTrials.gov in the last ten-year period (October 2005-September 2015). They were systematically reviewed to validate classification into 519 PET trials and 25,080 other oncology trials used for comparison. We found that PET trials were predominantly phase 1-2 studies (86.2%) and were more likely to be single-arm (78.9% vs. 57.9%, P oncology trials. Furthermore, PET trials were small in scale, generally enrolling fewer than 100 participants (20.3% vs. 25.7% for other oncology trials, P = 0.014), which might be too small to detect a significant theranostic effect. The funding support from industry or National Institutes of Health shrank over time (both decreased by about 5%), and PET trials were more likely to be conducted in only one region lacking international collaboration (97.0% vs. 89.3% for other oncology trials, P oncology are not receiving the attention or efforts necessary to generate high-quality evidence. Advancing the clinical application of PET imaging will require a concerted effort to improve the quality of trials. PMID:28042342

  1. Systematization of clinical trials related to treatment of metabolic syndrome, 1980-2015.

    Science.gov (United States)

    Cardona Velásquez, Santiago; Guzmán Vivares, Laura; Cardona-Arias, Jaiberth Antonio

    2017-02-01

    Despite the clinical, epidemiological, and economic significance of metabolic syndrome, the profile of clinical trials on this disease is unknown. To characterize the clinical trials related to treatment of metabolic syndrome during the 1980-2015 period. Systematic review of the literature using an ex ante search protocol which followed the phases of the guide Preferred Reporting Items for Systematic Reviews and Meta-Analyses in four multidisciplinary databases with seven search strategies. Reproducibility and methodological quality of the studies were assessed. One hundred and six trials were included, most from the United States, Italy, and Spain, of which 63.2% evaluated interventions effective for several components of the syndrome such as diet (40.6%) or physical activity (22.6%). Other studies assessed drugs for a single factor such as hypertension (7.5%), hypertriglyceridemia (11.3%), or hyperglycemia (9.4%). Placebo was used as control in 54.7% of trials, and outcome measures included triglycerides (52.8%), HDL (48.1%), glucose (29.2%), BMI (33.0%), blood pressure (27.4%), waist circumference (26.4%), glycated hemoglobin (11.3%), and hip circumference (7.5%). It was shown that studies on the efficacy of treatment for metabolic syndrome are scarce and have mainly been conducted in the last five years and in high-income countries. Trials on interventions that affect three or more factors and assess several outcome measures are few, and lifestyle interventions (diet and physical activity) are highlighted as most important to impact on this multifactorial syndrome. Copyright © 2017 SEEN. Published by Elsevier España, S.L.U. All rights reserved.

  2. Epistemonikos: a free, relational, collaborative, multilingual database of health evidence.

    Science.gov (United States)

    Rada, Gabriel; Pérez, Daniel; Capurro, Daniel

    2013-01-01

    Epistemonikos (www.epistemonikos.org) is a free, multilingual database of the best available health evidence. This paper describes the design, development and implementation of the Epistemonikos project. Using several web technologies to store systematic reviews, their included articles, overviews of reviews and structured summaries, Epistemonikos is able to provide a simple and powerful search tool to access health evidence for sound decision making. Currently, Epistemonikos stores more than 115,000 unique documents and more than 100,000 relationships between documents. In addition, since its database is translated into 9 different languages, Epistemonikos ensures that non-English speaking decision-makers can access the best available evidence without language barriers.
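    The relationships between documents that Epistemonikos stores (for example, a systematic review and the primary studies it includes) map naturally onto a link table. The sketch below is an illustrative SQLite reconstruction with invented titles, not the project's actual schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Documents of different kinds, plus the many-to-many "includes"
# relationship between reviews and the studies they cover.
cur.executescript("""
CREATE TABLE document (
    doc_id INTEGER PRIMARY KEY,
    kind   TEXT,   -- 'systematic_review' or 'primary_study'
    title  TEXT
);
CREATE TABLE includes (
    review_id INTEGER REFERENCES document(doc_id),
    study_id  INTEGER REFERENCES document(doc_id)
);
""")
cur.executemany("INSERT INTO document VALUES (?, ?, ?)", [
    (1, "systematic_review", "Statins for primary prevention"),
    (2, "systematic_review", "Lipid-lowering therapy in elderly"),
    (3, "primary_study", "RCT of statin X vs placebo"),
])
cur.executemany("INSERT INTO includes VALUES (?, ?)", [(1, 3), (2, 3)])

# Matrix-of-evidence style question: which reviews include this trial?
cur.execute("""
    SELECT d.title FROM includes i
    JOIN document d ON d.doc_id = i.review_id
    WHERE i.study_id = 3
    ORDER BY d.doc_id
""")
reviews = [row[0] for row in cur.fetchall()]
```

    Storing the relationship explicitly is what lets a user hop from a primary study to every review that cites it, regardless of the language the summary is read in.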

  3. DBGC: A Database of Human Gastric Cancer

    Science.gov (United States)

    Wang, Chao; Zhang, Jun; Cai, Mingdeng; Zhu, Zhenggang; Gu, Wenjie; Yu, Yingyan; Zhang, Xiaoyan

    2015-01-01

    The Database of Human Gastric Cancer (DBGC) is a comprehensive database that integrates various human gastric cancer-related data resources. Human gastric cancer-related transcriptomics projects, proteomics projects, mutations, biomarkers and drug-sensitive genes from different sources were collected and unified in this database. Moreover, epidemiological statistics of gastric cancer patients in China and clinicopathological information annotated with gastric cancer cases were also integrated into the DBGC. We believe that this database will greatly facilitate research regarding human gastric cancer in many fields. DBGC is freely available at http://bminfor.tongji.edu.cn/dbgc/index.do PMID:26566288

  4. Use of a translational database in a clinical department

    DEFF Research Database (Denmark)

    Højfeldt, Anne Dirks; Johnsen, Hans E; Bøgsted, Martin

    2010-01-01

    In haematology it is assumed that integrative analysis of global gene expression, protein and cell profiles as well as clinical data will lead to the development of new diagnostic, prognostic and predictive methods. A translational database system registering and combining all data and clinical observations about the patient is therefore needed. It is expected that, along with automated prediction and prognosis tools, such a database system may have the potential to assist the development of new machine-based diagnostic decision-making processes. Publication date: 2010-Jul-12.

  5. Development and validation of an extended database for yeast identification by MALDI-TOF MS in Argentina.

    Science.gov (United States)

    Taverna, Constanza Giselle; Mazza, Mariana; Bueno, Nadia Soledad; Alvarez, Christian; Amigot, Susana; Andreani, Mariana; Azula, Natalia; Barrios, Rubén; Fernández, Norma; Fox, Barbara; Guelfand, Liliana; Maldonado, Ivana; Murisengo, Omar Alejandro; Relloso, Silvia; Vivot, Matias; Davel, Graciela

    2018-05-11

    Matrix-assisted laser desorption/ionization-time of flight mass spectrometry (MALDI-TOF MS) has revolutionized the identification of microorganisms in clinical laboratories because it is rapid, relatively simple to use, accurate, and applicable to a wide range of microorganisms. Several studies have demonstrated the utility of this technique in the identification of yeasts; however, its performance is usually improved by extending the database. Here we developed an in-house database of 143 strains belonging to 42 yeast species on the MALDI Biotyper platform, and we validated the extended database with 388 regional strains and 15 reference strains belonging to 55 yeast species. We also performed an intra- and interlaboratory study to assess reproducibility and analyzed the use of the cutoff values of 1.700 and 2.000 for correct identification at the species level. The creation of an in-house database extending the manufacturer's database was successful, given that no incorrect identifications were introduced. The best performance was observed using the extended database and a cutoff value of 1.700, with a sensitivity of 0.94 and a specificity of 0.96. The reproducibility study proved useful for detecting deviations and could be used for external quality control. The extended database was able to differentiate closely related species and shows potential for distinguishing the molecular genotypes of Cryptococcus neoformans and Cryptococcus gattii.
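    The two cutoff values discussed above translate into a simple classification rule. The function below is an illustrative sketch of how such log-score cutoffs are commonly applied, not vendor software; the category labels are invented for the example.

```python
def interpret_score(score: float) -> str:
    """Classify a MALDI Biotyper log-score against the usual cutoffs.

    >= 2.000 : high-confidence, species-level identification
    >= 1.700 : lower-confidence identification (the cutoff that gave
               the best sensitivity/specificity with the extended
               database in the study above)
    <  1.700 : no reliable identification
    """
    if score >= 2.000:
        return "species-level"
    if score >= 1.700:
        return "low-confidence"
    return "unreliable"
```

    Lowering the working cutoff from 2.000 to 1.700 trades a stricter score threshold for higher sensitivity, which is what the validation study quantifies.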

  6. The Danish National Database for Obstructive Sleep Apnea

    DEFF Research Database (Denmark)

    Jennum, Poul Jørgen; Larsen, Preben; Cerqueira, Charlotte

    2016-01-01

    AIM: The aim of the Danish National Database for Obstructive Sleep Apnea (NDOSA) was to evaluate the clinical quality (diagnostic, treatment, and management) for obstructive sleep apnea and obesity hypoventilation syndrome in Denmark using a real-time national database reporting to the Danish...... departments was involved in the management of sleep apnea in Denmark for the purpose of quality improvement. CONCLUSION: The NDOSA has proven to be a real-time national database using diagnostic and treatment procedures reported to the Danish National Patient Registry....

  7. Security aspects of database systems implementation

    OpenAIRE

    Pokorný, Tomáš

    2009-01-01

    The aim of this thesis is to provide a comprehensive overview of database systems security. The reader is introduced to the basics of information security and its development. The following chapter defines the concept of database system security using the ISO/IEC 27000 standard. The findings from this chapter form a comprehensive list of requirements on database security. One chapter also deals with the legal aspects of this domain. The second part of this thesis offers a comparison of four object-relational database s...

  8. Database Systems - Present and Future

    Directory of Open Access Journals (Sweden)

    2009-01-01

    Database systems nowadays play an increasingly important role in the knowledge-based society, in which computers have penetrated all fields of activity and the Internet continues to develop worldwide. In the current informatics context, developing applications with databases is the work of specialists. Using databases, accessing a database from various applications, and related concepts have become accessible to all categories of IT users. This paper aims to summarize the curricular area regarding fundamental database systems issues, which are necessary in order to train specialists in economic informatics higher education. Database systems integrate and interfere with several informatics technologies and are therefore more difficult to understand and use. Thus, students should already know a minimum set of mandatory concepts and their practical implementation: computer systems, programming techniques, programming languages, and data structures. The article also presents current trends in the evolution of database systems, in the context of economic informatics.

  9. A tuberculosis biomarker database: the key to novel TB diagnostics

    Directory of Open Access Journals (Sweden)

    Seda Yerlikaya

    2017-03-01

    Full Text Available New diagnostic innovations for tuberculosis (TB, including point-of-care solutions, are critical to reach the goals of the End TB Strategy. However, despite decades of research, numerous reports on new biomarker candidates, and significant investment, no well-performing, simple and rapid TB diagnostic test is yet available on the market, and the search for accurate, non-DNA biomarkers remains a priority. To help overcome this ‘biomarker pipeline problem’, FIND and partners are working on the development of a well-curated and user-friendly TB biomarker database. The web-based database will enable the dynamic tracking of evidence surrounding biomarker candidates in relation to target product profiles (TPPs for needed TB diagnostics. It will be able to accommodate raw datasets and facilitate the verification of promising biomarker candidates and the identification of novel biomarker combinations. As such, the database will simplify data and knowledge sharing, empower collaboration, help in the coordination of efforts and allocation of resources, streamline the verification and validation of biomarker candidates, and ultimately lead to an accelerated translation into clinically useful tools.

  10. Using SQL Databases for Sequence Similarity Searching and Analysis.

    Science.gov (United States)

    Pearson, William R; Mackey, Aaron J

    2017-09-13

    Relational databases can integrate diverse types of information and manage large sets of similarity search results, greatly simplifying genome-scale analyses. By focusing on taxonomic subsets of sequences, relational databases can reduce the size and redundancy of sequence libraries and improve the statistical significance of homologs. In addition, by loading similarity search results into a relational database, it becomes possible to explore and summarize the relationships between all of the proteins in an organism and those in other biological kingdoms. This unit describes how to use relational databases to improve the efficiency of sequence similarity searching and demonstrates various large-scale genomic analyses of homology-related data. It also describes the installation and use of a simple protein sequence database, seqdb_demo, which is used as a basis for the other protocols. The unit also introduces search_demo, a database that stores sequence similarity search results. The search_demo database is then used to explore the evolutionary relationships between E. coli proteins and proteins in other organisms in a large-scale comparative genomic analysis. © 2017 by John Wiley & Sons, Inc.
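The workflow this unit describes (loading similarity-search hits into tables, then summarizing them with joins) can be illustrated with a small stdlib-only sketch. The two-table schema, the accession numbers and the E-values below are simplified stand-ins, not the actual seqdb_demo or search_demo layout:

```python
import sqlite3

# Hypothetical, minimal schema in the spirit of loading similarity-search
# results into a relational store; the real protocol databases differ.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE protein (acc TEXT PRIMARY KEY, organism TEXT);
CREATE TABLE hit (query_acc TEXT, subject_acc TEXT, evalue REAL,
                  FOREIGN KEY (subject_acc) REFERENCES protein(acc));
""")
con.executemany("INSERT INTO protein VALUES (?, ?)", [
    ("P0A6F5", "E. coli"), ("Q9Y2X3", "H. sapiens"), ("P38646", "H. sapiens"),
])
con.executemany("INSERT INTO hit VALUES (?, ?, ?)", [
    ("P0A6F5", "Q9Y2X3", 1e-80), ("P0A6F5", "P38646", 1e-60),
])

# Summarize homologs per organism for one query: the kind of cross-taxon
# roll-up that is awkward with a flat hit list but trivial with a join.
rows = con.execute("""
    SELECT p.organism, COUNT(*) AS n, MIN(h.evalue) AS best
    FROM hit h JOIN protein p ON p.acc = h.subject_acc
    WHERE h.query_acc = 'P0A6F5'
    GROUP BY p.organism
""").fetchall()
print(rows)  # [('H. sapiens', 2, 1e-80)]
```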

  11. SmallSat Database

    Science.gov (United States)

    Petropulos, Dolores; Bittner, David; Murawski, Robert; Golden, Bert

    2015-01-01

    The SmallSat has unrealized potential in both private industry and the federal government. Currently over 70 companies, 50 universities and 17 governmental agencies are involved in SmallSat research and development. In 1994, the U.S. Army Missile and Defense mapped the moon using smallSat imagery. Since then smart phones have introduced this imagery to the people of the world as diverse industries watched this trend. The deployment cost of smallSats is also greatly reduced compared to traditional satellites because multiple units can be deployed in a single mission. Imaging payloads have become more sophisticated, smaller and lighter. In addition, the growth of small technology obtained from private industries has led to the more widespread use of smallSats. This includes greater revisit rates in imagery, significantly lower costs, the ability to update technology more frequently and the ability to decrease vulnerability to enemy attacks. The popularity of smallSats shows a changing mentality in this fast-paced world of tomorrow. What impact has this created on the NASA communication networks now and in future years? In this project, we are developing the SmallSat Relational Database, which can support a simulation of smallSats within the NASA SCaN Compatibility Environment for Networks and Integrated Communications (SCENIC) Modeling and Simulation Lab. The NASA Space Communications and Networks (SCaN) Program can use this modeling to project required network support needs in the next 10 to 15 years. The SmallSat Relational Database could model smallSats just as the other SCaN databases model the more traditional larger satellites, with a few exceptions, one being that the SmallSat database is designed to be built-to-order. The SmallSat database holds various hardware configurations that can be used to model a smallSat. It will require significant effort to develop, as the research material can only be populated by hand to obtain the unique data.

  12. Storing XML Documents in Databases

    OpenAIRE

    Schmidt, A.R.; Manegold, Stefan; Kersten, Martin; Rivero, L.C.; Doorn, J.H.; Ferraggine, V.E.

    2005-01-01

    textabstractThe authors introduce concepts for loading large amounts of XML documents into databases where the documents are stored and maintained. The goal is to make XML databases as unobtrusive in multi-tier systems as possible and at the same time provide as many services defined by the XML standards as possible. The ubiquity of XML has sparked great interest in deploying concepts known from Relational Database Management Systems such as declarative query languages, transactions, indexes ...
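The "shredding" of XML trees into relational tables that this line of work builds on can be sketched with the standard library alone. The edge-table layout below (one row per element, with a pointer to its parent row) is a common textbook mapping, not the authors' actual design; element attributes are omitted for brevity:

```python
import sqlite3
import xml.etree.ElementTree as ET
from itertools import count

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE edge ("
            "id INTEGER PRIMARY KEY, parent INTEGER, tag TEXT, text TEXT)")

ids = count(1)

def shred(elem, parent=None):
    # Each element becomes one row pointing at its parent's row id.
    nid = next(ids)
    con.execute("INSERT INTO edge VALUES (?, ?, ?, ?)",
                (nid, parent, elem.tag, (elem.text or "").strip()))
    for child in elem:
        shred(child, nid)

doc = ET.fromstring("<lab><sample><name>aspirin</name></sample></lab>")
shred(doc)

# A declarative query over the stored tree: names of all samples.
names = con.execute(
    "SELECT c.text FROM edge c JOIN edge p ON c.parent = p.id "
    "WHERE p.tag = 'sample' AND c.tag = 'name'").fetchall()
print(names)  # [('aspirin',)]
```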

  13. E-MSD: the European Bioinformatics Institute Macromolecular Structure Database.

    Science.gov (United States)

    Boutselakis, H; Dimitropoulos, D; Fillon, J; Golovin, A; Henrick, K; Hussain, A; Ionides, J; John, M; Keller, P A; Krissinel, E; McNeil, P; Naim, A; Newman, R; Oldfield, T; Pineda, J; Rachedi, A; Copeland, J; Sitnov, A; Sobhany, S; Suarez-Uruena, A; Swaminathan, J; Tagari, M; Tate, J; Tromm, S; Velankar, S; Vranken, W

    2003-01-01

    The E-MSD macromolecular structure relational database (http://www.ebi.ac.uk/msd) is designed to be a single access point for protein and nucleic acid structures and related information. The database is derived from Protein Data Bank (PDB) entries. Relational database technologies are used in a comprehensive cleaning procedure to ensure data uniformity across the whole archive. The search database contains an extensive set of derived properties, goodness-of-fit indicators, and links to other EBI databases including InterPro, GO, and SWISS-PROT, together with links to SCOP, CATH, PFAM and PROSITE. A generic search interface is available, coupled with a fast secondary structure domain search tool.

  14. Update History of This Database - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us Trypanosomes Database Update History of This Database Date Update contents 2014/05/07 The contact information is corrected. The features and manner of utilization of the database are corrected. 2014/02/04 Trypanosomes Database English archive site is opened. 2011/04/04 Trypanosomes Database ( http://www.tanpaku.org/tdb/ ) is opened. About This Database Database Description Download License Update History of This Database Site Policy | Contact Us Update History of This Database - Trypanosomes Database | LSDB Archive

  15. Development of a PSA information database system

    International Nuclear Information System (INIS)

    Kim, Seung Hwan

    2005-01-01

    The need to develop a PSA information database for performing a PSA has been growing rapidly. For example, performing a PSA requires a lot of data to analyze, to evaluate the risk, to trace the process of results and to verify the results. A PSA information database is a system that stores all PSA-related information in a database and file system, with cross links to jump to the physical documents whenever they are needed. The Korea Atomic Energy Research Institute is developing a PSA information database system, AIMS (Advanced Information Management System for PSA). The objective is to integrate and computerize all the distributed information of a PSA into one system and to enhance the accessibility to PSA information for all PSA-related activities. This paper describes how we implemented such a database-centered application in two areas: database design and data (document) services.

  16. Relative Impact of Print and Database Products on Database Producer Expenses and Income--A Follow-Up.

    Science.gov (United States)

    Williams, Martha E.

    1982-01-01

    Provides update to 13-year analysis of finances of major database producer noting actions taken to improve finances (decrease expenses, increase efficiency, develop new products, market strategies and services, change pricing scheme, omit print products, increase prices) and consequences of actions (revenue increase, connect hour increase). Five…

  17. Attenuation relation for strong motion in Eastern Java based on appropriate database and method

    Science.gov (United States)

    Mahendra, Rian; Rohadi, Supriyanto; Rudyanto, Ariska

    2017-07-01

    The selection and determination of an attenuation relation is important for seismic hazard assessment in an active seismic region. This research first constructs an appropriate strong-motion database, including site condition and earthquake type. The data set consists of a large number of earthquakes with 5 ≤ Mw ≤ 9 and distances less than 500 km that occurred around Java from 2009 until 2016. Earthquake locations and depths were relocated using the double-difference method to improve the quality of the database. Strong-motion data from twelve BMKG accelerographs located in East Java were used. Site conditions were determined using the dominant period and Vs30. Earthquake types were classified into crustal, interface, and intraslab events based on slab geometry analysis. A total of 10 Ground Motion Prediction Equations (GMPEs) were tested against this database using the likelihood method (Scherbaum et al., 2004) and the Euclidean distance ranking method (Kale and Akkar, 2012). The evaluation leads to a set of GMPEs that can be applied for seismic hazard in East Java, where the strong-motion data were collected. These methods still showed high deviations for some GMPEs, so several GMPEs were modified using an inversion method. Validation was performed by analysing the attenuation curves of the selected GMPEs against observation data from 2015 to 2016. The results show that the selected GMPE is suitable for estimating PGA values in East Java.
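As a rough illustration of the ranking step, the likelihood measure of Scherbaum et al. (2004) scores each normalized residual by the probability that a standard normal variate exceeds it in absolute value; a well-calibrated GMPE yields a median LH near 0.5. The sketch below assumes the residuals have already been computed, and the candidate names and values are invented:

```python
import math
from statistics import median

def lh(z):
    # Likelihood of |Z| >= |z| for Z ~ N(0, 1): LH = erfc(|z| / sqrt(2)).
    return math.erfc(abs(z) / math.sqrt(2))

def median_lh(residuals):
    return median(lh(z) for z in residuals)

# Toy normalized residuals (observed minus predicted, in sigma units).
candidates = {
    "GMPE-A": [0.1, -0.4, 0.7, -1.0, 0.3],   # roughly unbiased model
    "GMPE-B": [1.9, 2.4, -2.2, 2.8, 1.7],    # strongly biased model
}
# Rank by how close the median LH is to the ideal value of 0.5.
ranked = sorted(candidates, key=lambda g: abs(median_lh(candidates[g]) - 0.5))
print(ranked)  # ['GMPE-A', 'GMPE-B']
```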

  18. The LHCb configuration database

    CERN Document Server

    Abadie, L; Van Herwijnen, Eric; Jacobsson, R; Jost, B; Neufeld, N

    2005-01-01

    The aim of the LHCb configuration database is to store information about all the controllable devices of the detector. The experiment's control system (which uses PVSS) will configure, start up and monitor the detector from the information in the configuration database. The database will contain devices with their properties, connectivity and hierarchy. The ability to store and rapidly retrieve huge amounts of data, and the navigability between devices, are important requirements. We have collected use cases to ensure the completeness of the design. Using the entity-relationship modelling technique we describe the use cases as classes with attributes and links. We designed the schema for the tables using relational diagrams. This methodology has been applied to the TFC (switches) and DAQ system. Other parts of the detector will follow later. The database has been implemented using Oracle to benefit from central CERN database support. The project also foresees the creation of tools to populate, maintain, and co...
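The entities the abstract names (devices with properties, hierarchy and connectivity) map naturally onto a small relational schema. The sketch below is one illustrative reading of that design; the table and column names are assumptions, not the actual LHCb schema:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE device (
    id INTEGER PRIMARY KEY, name TEXT UNIQUE,
    parent_id INTEGER REFERENCES device(id));   -- hierarchy
CREATE TABLE property (
    device_id INTEGER REFERENCES device(id), key TEXT, value TEXT);
CREATE TABLE link (
    from_id INTEGER REFERENCES device(id),      -- connectivity
    to_id   INTEGER REFERENCES device(id));
""")
con.executemany("INSERT INTO device VALUES (?, ?, ?)", [
    (1, "TFC", None), (2, "switch01", 1), (3, "readout01", 1),
])
con.execute("INSERT INTO property VALUES (2, 'firmware', 'v1.2')")
con.execute("INSERT INTO link VALUES (2, 3)")

# Navigate connectivity: which devices does switch01 feed?
targets = con.execute("""
    SELECT d2.name FROM link l
    JOIN device d1 ON d1.id = l.from_id
    JOIN device d2 ON d2.id = l.to_id
    WHERE d1.name = 'switch01'""").fetchall()
print(targets)  # [('readout01',)]
```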

  19. Acromegaly at diagnosis in 3173 patients from the Liège Acromegaly Survey (LAS) Database

    Science.gov (United States)

    Petrossians, Patrick; Daly, Adrian F; Natchev, Emil; Maione, Luigi; Blijdorp, Karin; Sahnoun-Fathallah, Mona; Auriemma, Renata; Diallo, Alpha M; Hulting, Anna-Lena; Ferone, Diego; Hana, Vaclav; Filipponi, Silvia; Sievers, Caroline; Nogueira, Claudia; Fajardo-Montañana, Carmen; Carvalho, Davide; Hana, Vaclav; Stalla, Günter K; Jaffrain-Réa, Marie-Lise; Delemer, Brigitte; Colao, Annamaria; Brue, Thierry; Neggers, Sebastian J C M M; Zacharieva, Sabina; Chanson, Philippe

    2017-01-01

    Acromegaly is a rare disorder caused by chronic growth hormone (GH) hypersecretion. While diagnostic and therapeutic methods have advanced, little information exists on trends in acromegaly characteristics over time. The Liège Acromegaly Survey (LAS) Database, a relational database, is designed to assess the profile of acromegaly patients at diagnosis and during long-term follow-up at multiple treatment centers. The following results were obtained at diagnosis. The study population consisted of 3173 acromegaly patients from ten countries; 54.5% were female. Males were significantly younger at diagnosis than females (43.5 vs 46.4 years; P … This database of >3100 patients is the largest international acromegaly database and shows clinically relevant trends in the characteristics of acromegaly at diagnosis. PMID:28733467

  20. Acromegaly at diagnosis in 3173 patients from the Liège Acromegaly Survey (LAS) Database.

    Science.gov (United States)

    Petrossians, Patrick; Daly, Adrian F; Natchev, Emil; Maione, Luigi; Blijdorp, Karin; Sahnoun-Fathallah, Mona; Auriemma, Renata; Diallo, Alpha M; Hulting, Anna-Lena; Ferone, Diego; Hana, Vaclav; Filipponi, Silvia; Sievers, Caroline; Nogueira, Claudia; Fajardo-Montañana, Carmen; Carvalho, Davide; Hana, Vaclav; Stalla, Günter K; Jaffrain-Réa, Marie-Lise; Delemer, Brigitte; Colao, Annamaria; Brue, Thierry; Neggers, Sebastian J C M M; Zacharieva, Sabina; Chanson, Philippe; Beckers, Albert

    2017-10-01

    Acromegaly is a rare disorder caused by chronic growth hormone (GH) hypersecretion. While diagnostic and therapeutic methods have advanced, little information exists on trends in acromegaly characteristics over time. The Liège Acromegaly Survey (LAS) Database, a relational database, is designed to assess the profile of acromegaly patients at diagnosis and during long-term follow-up at multiple treatment centers. The following results were obtained at diagnosis. The study population consisted of 3173 acromegaly patients from ten countries; 54.5% were female. Males were significantly younger at diagnosis than females (43.5 vs 46.4 years; P … This database of >3100 patients is the largest international acromegaly database and shows clinically relevant trends in the characteristics of acromegaly at diagnosis. © 2017 The authors.

  1. Usage of the Jess Engine, Rules and Ontology to Query a Relational Database

    Science.gov (United States)

    Bak, Jaroslaw; Jedrzejek, Czeslaw; Falkowski, Maciej

    We present a prototypical implementation of a library tool, the Semantic Data Library (SDL), which integrates the Jess (Java Expert System Shell) engine, rules and ontology to query a relational database. The tool extends functionalities of previous OWL2Jess with SWRL implementations and takes full advantage of the Jess engine, by separating forward and backward reasoning. The optimization of integration of all these technologies is an advancement over previous tools. We discuss the complexity of the query algorithm. As a demonstration of capability of the SDL library, we execute queries using crime ontology which is being developed in the Polish PPBW project.

  2. The CATH database

    Directory of Open Access Journals (Sweden)

    Knudsen Michael

    2010-02-01

    Full Text Available Abstract The CATH database provides hierarchical classification of protein domains based on their folding patterns. Domains are obtained from protein structures deposited in the Protein Data Bank and both domain identification and subsequent classification use manual as well as automated procedures. The accompanying website http://www.cathdb.info provides an easy-to-use entry to the classification, allowing for both browsing and downloading of data. Here, we give a brief review of the database, its corresponding website and some related tools.

  3. Mining Views : database views for data mining

    NARCIS (Netherlands)

    Blockeel, H.; Calders, T.; Fromont, É.; Goethals, B.; Prado, A.; Nijssen, S.; De Raedt, L.

    2007-01-01

    We propose a relational database model towards the integration of data mining into relational database systems, based on the so called virtual mining views. We show that several types of patterns and models over the data, such as itemsets, association rules, decision trees and clusterings, can be

  4. NNDC database migration project

    Energy Technology Data Exchange (ETDEWEB)

    Burrows, Thomas W; Dunford, Charles L [U.S. Department of Energy, Brookhaven Science Associates (United States)

    2004-03-01

    NNDC database migration was necessary to replace obsolete hardware and software, to be compatible with the industry standard in relational databases (mature software, a large base of supporting software for administration, dissemination, replication and synchronization tools) and to improve user access in terms of interface and speed. The Relational Database Management System (RDBMS) consists of a Sybase Adaptive Server Enterprise (ASE), which is relatively easy to move between different RDB systems (e.g., MySQL, MS SQL-Server, or MS Access), the Structured Query Language (SQL) and administrative tools written in Java. Linux or UNIX platforms can be used. The existing ENSDF datasets are often very large and will need to be reworked, and both the CRP (adopted) and CRP (Budapest) datasets give elemental cross sections (not relative Iγ) in the RI field (so it is not immediately obvious which of the old values has been changed). But primary and secondary intensities are now available on the same scale; the intensity normalization has been done for us. We will gain access to a large volume of data from Budapest, and some of those gamma-ray intensity and energy data will be superior to what we already have.

  5. RaMP: A Comprehensive Relational Database of Metabolomics Pathways for Pathway Enrichment Analysis of Genes and Metabolites.

    Science.gov (United States)

    Zhang, Bofei; Hu, Senyang; Baskin, Elizabeth; Patt, Andrew; Siddiqui, Jalal K; Mathé, Ewy A

    2018-02-22

    The value of metabolomics in translational research is undeniable, and metabolomics data are increasingly generated in large cohorts. The functional interpretation of disease-associated metabolites though is difficult, and the biological mechanisms that underlie cell type or disease-specific metabolomics profiles are oftentimes unknown. To help fully exploit metabolomics data and to aid in its interpretation, analysis of metabolomics data with other complementary omics data, including transcriptomics, is helpful. To facilitate such analyses at a pathway level, we have developed RaMP (Relational database of Metabolomics Pathways), which combines biological pathways from the Kyoto Encyclopedia of Genes and Genomes (KEGG), Reactome, WikiPathways, and the Human Metabolome DataBase (HMDB). To the best of our knowledge, an off-the-shelf, public database that maps genes and metabolites to biochemical/disease pathways and can readily be integrated into other existing software is currently lacking. For consistent and comprehensive analysis, RaMP enables batch and complex queries (e.g., list all metabolites involved in glycolysis and lung cancer), can readily be integrated into pathway analysis tools, and supports pathway overrepresentation analysis given a list of genes and/or metabolites of interest. For usability, we have developed a RaMP R package (https://github.com/Mathelab/RaMP-DB), including a user-friendly RShiny web application, that supports basic simple and batch queries, pathway overrepresentation analysis given a list of genes or metabolites of interest, and network visualization of gene-metabolite relationships. The package also includes the raw database file (mysql dump), thereby providing a stand-alone downloadable framework for public use and integration with other tools. In addition, the Python code needed to recreate the database on another system is also publicly available (https://github.com/Mathelab/RaMP-BackEnd). Updates for databases in RaMP will be…
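Pathway overrepresentation analysis of the kind RaMP supports is conventionally a hypergeometric test on the overlap between an input list and a pathway's members. A minimal stdlib sketch, with invented counts and no claim to match RaMP's actual implementation:

```python
from math import comb

def hypergeom_sf(k, N, K, n):
    """P(X >= k) for a hypergeometric X: N metabolites mapped in total,
    K of them in the pathway, n in the user's input list."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)

# Toy numbers: 1000 metabolites mapped in the database, a pathway with
# 40 members, an input list of 20 metabolites of which 8 hit the pathway.
p = hypergeom_sf(k=8, N=1000, K=40, n=20)
print(f"p = {p:.2e}")  # far below 0.05, so the pathway is flagged
```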

  6. ATLAS database application enhancements using Oracle 11g

    CERN Document Server

    Dimitrov, G; The ATLAS collaboration; Blaszczyk, M; Sorokoletov, R

    2012-01-01

    The ATLAS experiment at LHC relies on databases for detector online data-taking, storage and retrieval of configurations, calibrations and alignments, post data-taking analysis, file management over the grid, job submission and management, condition data replication to remote sites. Oracle Relational Database Management System (RDBMS) has been addressing the ATLAS database requirements to a great extent for many years. Ten database clusters are currently deployed for the needs of the different applications, divided in production, integration and standby databases. The data volume, complexity and demands from the users are increasing steadily with time. Nowadays more than 20 TB of data are stored in the ATLAS production Oracle databases at CERN (not including the index overhead), but the most impressive number is the hosted 260 database schemas (for the most common case each schema is related to a dedicated client application with its own requirements). At the beginning of 2012 all ATLAS databases at CERN have...

  7. Development of a Relational Database for Learning Management Systems

    Science.gov (United States)

    Deperlioglu, Omer; Sarpkaya, Yilmaz; Ergun, Ertugrul

    2011-01-01

    In today's world, web-based distance education systems have great importance. Web-based distance education systems are usually known as Learning Management Systems (LMS). In this article, a database design developed to implement a Learning Management System for an educational institution is described. In this sense, developed Learning…

  8. Data model and relational database design for the New Jersey Water-Transfer Data System (NJWaTr)

    Science.gov (United States)

    Tessler, Steven

    2003-01-01

    The New Jersey Water-Transfer Data System (NJWaTr) is a database design for the storage and retrieval of water-use data. NJWaTr can manage data encompassing many facets of water use, including (1) the tracking of various types of water-use activities (withdrawals, returns, transfers, distributions, consumptive-use, wastewater collection, and treatment); (2) the storage of descriptions, classifications and locations of places and organizations involved in water-use activities; (3) the storage of details about measured or estimated volumes of water associated with water-use activities; and (4) the storage of information about data sources and water resources associated with water use. In NJWaTr, each water transfer occurs unidirectionally between two site objects, and the sites and conveyances form a water network. The core entities in the NJWaTr model are site, conveyance, transfer/volume, location, and owner. Other important entities include water resource (used for withdrawals and returns), data source, permit, and alias. Multiple water-exchange estimates based on different methods or data sources can be stored for individual transfers. Storage of user-defined details is accommodated for several of the main entities. Many tables contain classification terms to facilitate the detailed description of data items and can be used for routine or custom data summarization. NJWaTr accommodates single-user and aggregate-user water-use data, can be used for large or small water-network projects, and is available as a stand-alone Microsoft Access database. Data stored in the NJWaTr structure can be retrieved in user-defined combinations to serve visualization and analytical applications. Users can customize and extend the database, link it to other databases, or implement the design in other relational database applications.
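The core entities and the unidirectional site-to-site transfers described above can be compressed into a toy relational sketch. The table and column names below are assumptions for illustration; the published NJWaTr design is far richer:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE site (id INTEGER PRIMARY KEY, name TEXT, kind TEXT);
CREATE TABLE conveyance (
    id INTEGER PRIMARY KEY,
    from_site INTEGER REFERENCES site(id),   -- transfers are unidirectional
    to_site   INTEGER REFERENCES site(id));
CREATE TABLE transfer (
    conveyance_id INTEGER REFERENCES conveyance(id),
    year INTEGER, volume_mgal REAL, method TEXT);  -- multiple estimates OK
""")
con.executemany("INSERT INTO site VALUES (?, ?, ?)", [
    (1, "Well 7", "withdrawal"), (2, "Plant A", "treatment")])
con.execute("INSERT INTO conveyance VALUES (1, 1, 2)")
# Two water-exchange estimates for the same transfer, as the model allows.
con.executemany("INSERT INTO transfer VALUES (1, 2002, ?, ?)",
                [(120.5, "metered"), (118.0, "estimated")])

rows = con.execute("SELECT method, volume_mgal FROM transfer").fetchall()
print(rows)  # [('metered', 120.5), ('estimated', 118.0)]
```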

  9. Scale out databases for CERN use cases

    International Nuclear Information System (INIS)

    Baranowski, Zbigniew; Grzybek, Maciej; Canali, Luca; Garcia, Daniel Lanza; Surdy, Kacper

    2015-01-01

    Data generation rates are expected to grow very fast for some database workloads going into LHC run 2 and beyond. In particular this is expected for data coming from controls, logging and monitoring systems. Storing, administering and accessing big data sets in a relational database system can quickly become a very hard technical challenge, as the size of the active data set and the number of concurrent users increase. Scale-out database technologies are a rapidly developing set of solutions for deploying and managing very large data warehouses on commodity hardware and with open source software. In this paper we will describe the architecture and tests on database systems based on Hadoop and the Cloudera Impala engine. We will discuss the results of our tests, including tests of data loading and integration with existing data sources and in particular with relational databases. We will report on query performance tests done with various data sets of interest at CERN, notably data from the accelerator log database. (paper)

  10. Mining Views : database views for data mining

    NARCIS (Netherlands)

    Blockeel, H.; Calders, T.; Fromont, É.; Goethals, B.; Prado, A.

    2008-01-01

    We present a system towards the integration of data mining into relational databases. To this end, a relational database model is proposed, based on the so called virtual mining views. We show that several types of patterns and models over the data, such as itemsets, association rules and decision

  11. Clinical outcomes in low risk coronary artery disease patients treated with different limus-based drug-eluting stents--a nationwide retrospective cohort study using insurance claims database.

    Directory of Open Access Journals (Sweden)

    Chao-Lun Lai

    Full Text Available The clinical outcomes of different limus-based drug-eluting stents (DES) in a real-world setting have not been well defined. The aim of this study was to investigate the clinical outcomes of three different limus-based DES, namely the sirolimus-eluting stent (SES), Endeavor zotarolimus-eluting stent (E-ZES) and everolimus-eluting stent (EES), using a national insurance claims database. We identified all patients who received implantation of a single SES, E-ZES or EES between January 1, 2007 and December 31, 2009 from the National Health Insurance claims database, Taiwan. Follow-up was through December 31, 2011 for all selected clinical outcomes. The primary end-point was all-cause mortality. Secondary end-points included acute coronary events, heart failure needing hospitalization, and cerebrovascular disease. A Cox regression model adjusting for baseline characteristics was used to compare the relative risks of different outcomes among the three different limus-based DES. In total, 6584 patients were evaluated (n=2142 for SES, n=3445 for E-ZES, and n=997 for EES). After adjusting for baseline characteristics, we found no statistically significant difference in the risk of all-cause mortality among the three DES groups (adjusted hazard ratio [HR]: 1.14, 95% confidence interval [CI]: 0.94-1.38, p=0.20 in the E-ZES group compared with the SES group; adjusted HR: 0.77, 95% CI: 0.54-1.10, p=0.15 in the EES group compared with the SES group). Similarly, we found no difference among the three stent groups in risks of acute coronary events, heart failure needing hospitalization, and cerebrovascular disease. In conclusion, we observed no difference in all-cause mortality, acute coronary events, heart failure needing hospitalization, and cerebrovascular disease in patients treated with SES, E-ZES, and EES in a real-world population-based setting in Taiwan.

  12. Teradata University Network: A No Cost Web-Portal for Teaching Database, Data Warehousing, and Data-Related Subjects

    Science.gov (United States)

    Jukic, Nenad; Gray, Paul

    2008-01-01

    This paper describes the value that information systems faculty and students in classes dealing with database management, data warehousing, decision support systems, and related topics, could derive from the use of the Teradata University Network (TUN), a free comprehensive web-portal. A detailed overview of TUN functionalities and content is…

  13. On the selection of Secondary Indices in Relational Databases

    NARCIS (Netherlands)

    Choenni, R.S.; Blanken, Henk; Chang, Thiel

    1993-01-01

    An important problem in the physical design of databases is the selection of secondary indices. In general, this problem cannot be solved in an optimal way due to the complexity of the selection process. Often use is made of heuristics such as the well-known ADD and DROP algorithms. In this paper it
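The ADD heuristic mentioned above can be caricatured as a greedy loop: keep adding the candidate index with the best estimated benefit per unit of storage until the budget runs out. The cost model and the numbers below are invented for illustration; a real optimizer would estimate them from the workload and table statistics:

```python
def add_heuristic(candidates, budget):
    """Greedy ADD sketch. candidates: {index_name: (cost_saved, storage)}."""
    chosen, remaining = [], dict(candidates)
    while remaining:
        # Pick the candidate with the best benefit-per-storage ratio.
        best = max(remaining, key=lambda i: remaining[i][0] / remaining[i][1])
        saved, storage = remaining.pop(best)
        if storage <= budget:        # only add it if it still fits
            chosen.append(best)
            budget -= storage
    return chosen

candidates = {
    "idx_customer_name": (100.0, 40),
    "idx_order_date":    (80.0, 10),
    "idx_item_sku":      (30.0, 25),
}
selected = add_heuristic(candidates, budget=50)
print(selected)  # ['idx_order_date', 'idx_customer_name']
```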

  14. Update History of This Database - Arabidopsis Phenome Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us Arabidopsis Phenome Database Update History of This Database Date Update contents 2017/02/27 Arabidopsis Phenome Database English archive site is opened. - Arabidopsis Phenome Database (http://jphenome.info/?page_id=95) is opened. About This Database Database Description Download License Update History of This Database Site Policy | Contact Us Update History of This Database - Arabidopsis Phenome Database | LSDB Archive

  15. Establishment of a regional Danish database for patients with a stoma

    DEFF Research Database (Denmark)

    Danielsen, Anne Kjærgaard; Christensen, Bo Marcel; Mortensen, Jann

    2015-01-01

    AIM: To present the Danish Stoma Database Capital Region with clinical variables related to stoma creation including colostomy, ileostomy and urostomy. METHOD: The stomatherapists in the Capital Region of Denmark developed a database covering patient identifiers, interventions, conditions, short-term outcome, long-term outcome and known major confounders. The completeness of data was validated against the Danish National Patient Register. RESULTS: In 2013, five hospitals included data from 1123 patients who were registered during the year. The types of stomas formed from 2007 to 2013 showed a variation reflecting the subspecialization and surgical techniques in the centres. Between 92 and 94% of patients agreed to participate in the standard programme aimed at handling of the stoma and more than 88% of patients having planned surgery had the stoma site marked pre-operatively. CONCLUSION: …

  16. Molecule database framework: a framework for creating database applications with chemical structure search capability.

    Science.gov (United States)

    Kiener, Joos

    2013-12-11

    Research in organic chemistry generates samples of novel chemicals together with their properties and other related data. The scientists involved must be able to store this data and search it by chemical structure. There are commercial solutions for common needs like chemical registration systems or electronic lab notebooks. However, for the specific requirements of in-house databases and processes no such solutions exist. Another issue is that commercial solutions carry the risk of vendor lock-in and may require an expensive license for a proprietary relational database management system. To speed up and simplify the development of applications that require chemical structure search capabilities, I have developed Molecule Database Framework. The framework abstracts the storing and searching of chemical structures into method calls, so software developers do not require extensive knowledge about chemistry and the underlying database cartridge. This decreases application development time. Molecule Database Framework is written in Java and was created by integrating existing free and open-source tools and frameworks. The core functionality includes: support for multi-component compounds (mixtures); import and export of SD-files; and optional security (authorization). For chemical structure searching, Molecule Database Framework leverages the capabilities of the Bingo Cartridge for PostgreSQL and provides type-safe searching, caching, transactions and optional method-level security. Furthermore, the design of the entity classes and the reasoning behind it are explained. By means of a simple web application I describe how the framework could be used. I then benchmarked this example application to establish basic performance expectations for chemical structure searches and the import and export of SD-files. By using a simple web application it was shown that Molecule Database Framework

  17. Join Operations in Temporal Databases

    DEFF Research Database (Denmark)

    Gao, D.; Jensen, Christian Søndergaard; Snodgrass, R.T.

    2005-01-01

    Joins are arguably the most important relational operators. Poor implementations are tantamount to computing the Cartesian product of the input relations. In a temporal database, the problem is more acute for two reasons. First, conventional techniques are designed for the evaluation of joins with equality predicates rather than the inequality predicates prevalent in valid-time queries. Second, the presence of temporally varying data dramatically increases the size of a database. These factors indicate that specialized techniques are needed to efficiently evaluate temporal joins. We address this need for efficient join evaluation in temporal databases. Our purpose is twofold. We first survey all previously proposed temporal join operators. While many temporal join operators have been defined in previous work, this work has been done largely in isolation from competing proposals, with little...
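    A valid-time join can be sketched in a few lines; this is the naive nested-loop baseline the abstract warns about (close to a Cartesian product), shown with a simple tuple layout assumed for illustration. Note the overlap test is an inequality predicate, which is exactly why equality-join techniques do not apply directly:

    ```python
    def temporal_join(r, s):
        """Naive nested-loop valid-time join.

        Each tuple is (key, value, start, end) with a half-open [start, end)
        valid-time period. Tuples match when they agree on key and their
        periods overlap; the output period is the intersection.
        """
        result = []
        for key1, val1, b1, e1 in r:
            for key2, val2, b2, e2 in s:
                if key1 == key2:
                    begin, end = max(b1, b2), min(e1, e2)
                    if begin < end:  # inequality predicate: periods overlap
                        result.append((key1, val1, val2, begin, end))
        return result


    salaries = [("alice", 50000, 2001, 2004), ("alice", 60000, 2004, 2008)]
    depts = [("alice", "R&D", 2002, 2006)]
    joined = temporal_join(salaries, depts)
    # → [('alice', 50000, 'R&D', 2002, 2004), ('alice', 60000, 'R&D', 2004, 2006)]
    ```

    The specialized algorithms surveyed in the paper (sort- and partition-based evaluation) exist precisely to avoid comparing every pair of tuples as this sketch does.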

  18. Respiratory cancer database: An open access database of respiratory cancer gene and miRNA.

    Science.gov (United States)

    Choubey, Jyotsna; Choudhari, Jyoti Kant; Patel, Ashish; Verma, Mukesh Kumar

    2017-01-01

    Respiratory cancer database (RespCanDB) is a genomic and proteomic database of cancers of the respiratory organs. It also includes information on medicinal plants used for the treatment of various respiratory cancers, with the structures of their active constituents, as well as pharmacological and chemical information on drugs associated with various respiratory cancers. Data in RespCanDB has been manually collected from published research articles and from other databases. Data has been integrated using MySQL, a relational database management system. MySQL manages all data in the back end and provides commands to retrieve and store data in the database. The web interface of the database has been built in ASP. RespCanDB is expected to contribute to the scientific community's understanding of respiratory cancer biology, as well as to the development of new ways of diagnosing and treating respiratory cancer. Currently, the database consists of oncogenomic information on lung cancer, laryngeal cancer, and nasopharyngeal cancer. Data for other cancers, such as oral and tracheal cancers, will be added in the near future. The URL of RespCanDB is http://ridb.subdic-bioinformatics-nitrr.in/.

  19. BioWarehouse: a bioinformatics database warehouse toolkit.

    Science.gov (United States)

    Lee, Thomas J; Pouliot, Yannick; Wagner, Valerie; Gupta, Priyanka; Stringer-Calvert, David W J; Tenenbaum, Jessica D; Karp, Peter D

    2006-03-23

    This article addresses the problem of interoperation of heterogeneous bioinformatics databases. We introduce BioWarehouse, an open source toolkit for constructing bioinformatics database warehouses using the MySQL and Oracle relational database managers. BioWarehouse integrates its component databases into a common representational framework within a single database management system, thus not only enabling multi-database queries using the Structured Query Language (SQL) but also facilitating a variety of database integration tasks such as comparative analysis and data mining. BioWarehouse currently supports the integration of a pathway-centric set of databases including ENZYME, KEGG, and BioCyc, and in addition the UniProt, GenBank, NCBI Taxonomy, and CMR databases, and the Gene Ontology. Loader tools, written in the C and Java languages, parse and load these databases into a relational database schema. The loaders also apply a degree of semantic normalization to their respective source data, decreasing semantic heterogeneity. The schema supports the following bioinformatics datatypes: chemical compounds, biochemical reactions, metabolic pathways, proteins, genes, nucleic acid sequences, features on protein and nucleic-acid sequences, organisms, organism taxonomies, and controlled vocabularies. As an application example, we applied BioWarehouse to determine the fraction of biochemically characterized enzyme activities for which no sequences exist in the public sequence databases. The answer is that no sequence exists for 36% of enzyme activities for which EC numbers have been assigned. These gaps in sequence data significantly limit the accuracy of genome annotation and metabolic pathway prediction, and are a barrier for metabolic engineering. Complex queries of this type provide examples of the value of the data warehousing approach to bioinformatics research. BioWarehouse embodies significant progress on the database integration problem for bioinformatics.
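    The application example, finding enzyme activities with no known sequence, is a good illustration of why loading heterogeneous sources into one SQL schema pays off. The toy sketch below uses SQLite and invented table and column names (the real BioWarehouse schema differs), with activities as loaded from ENZYME and protein records as loaded from UniProt; one LEFT JOIN then exposes the gap:

    ```python
    import sqlite3

    # Toy warehouse: two sources loaded into a single schema. Names here are
    # illustrative assumptions, not the actual BioWarehouse schema.
    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()
    cur.executescript("""
    CREATE TABLE enzyme_activity (ec_number TEXT PRIMARY KEY, name TEXT);
    CREATE TABLE protein (id INTEGER PRIMARY KEY, ec_number TEXT);
    """)
    cur.executemany("INSERT INTO enzyme_activity VALUES (?, ?)", [
        ("1.1.1.1", "alcohol dehydrogenase"),
        ("2.7.1.1", "hexokinase"),
        ("4.2.1.79", "2-methylcitrate dehydratase"),
    ])
    cur.executemany("INSERT INTO protein (ec_number) VALUES (?)",
                    [("1.1.1.1",), ("2.7.1.1",)])

    # EC-numbered activities for which no protein sequence record exists.
    missing = cur.execute("""
        SELECT COUNT(*) FROM enzyme_activity a
        LEFT JOIN protein p ON p.ec_number = a.ec_number
        WHERE p.id IS NULL
    """).fetchone()[0]
    # → 1 of the 3 activities has no sequence
    ```

    Without the warehouse, answering the same question would mean cross-referencing two differently formatted flat-file distributions by hand.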

  20. Refactoring databases evolutionary database design

    CERN Document Server

    Ambler, Scott W

    2006-01-01

    Refactoring has proven its value in a wide range of development projects, helping software professionals improve system designs, maintainability, extensibility, and performance. Now, for the first time, leading agile methodologist Scott Ambler and renowned consultant Pramodkumar Sadalage introduce powerful refactoring techniques specifically designed for database systems. Ambler and Sadalage demonstrate how small changes to table structures, data, stored procedures, and triggers can significantly enhance virtually any database design, without changing semantics. You'll learn how to evolve database schemas in step with source code, and become far more effective in projects relying on iterative, agile methodologies. This comprehensive guide and reference helps you overcome the practical obstacles to refactoring real-world databases by covering every fundamental concept underlying database refactoring. Using start-to-finish examples, the authors walk you through refactoring simple standalone databas...
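    The blurb's claim that small changes to tables and triggers can improve a design "without changing semantics" is concrete in refactorings such as Rename Column, done as a transition rather than a single destructive step. A minimal sketch, using SQLite and an invented `customer` table: the new column is added alongside the old one, backfilled, and kept in sync by a trigger until all application code has migrated, after which the old column and trigger can be dropped:

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE customer (id INTEGER PRIMARY KEY, fname TEXT);
    INSERT INTO customer (fname) VALUES ('Ada'), ('Grace');

    -- Step 1: introduce the better-named column and backfill it.
    ALTER TABLE customer ADD COLUMN first_name TEXT;
    UPDATE customer SET first_name = fname;

    -- Step 2: keep the new column current while legacy code still writes the
    -- old one. (A matching AFTER INSERT trigger would be needed for new rows.)
    CREATE TRIGGER sync_first_name AFTER UPDATE OF fname ON customer
    BEGIN
        UPDATE customer SET first_name = NEW.fname WHERE id = NEW.id;
    END;
    """)

    # Legacy code still writes the old column; the trigger propagates the value.
    conn.execute("UPDATE customer SET fname = 'Alan' WHERE id = 1")
    rows = conn.execute("SELECT first_name FROM customer ORDER BY id").fetchall()
    # → [('Alan',), ('Grace',)]
    ```

    The transition period is what makes the refactoring safe in projects where many applications share one schema, which is the book's central scenario.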