WorldWideScience

Sample records for psycinfo database record

  1. Identifying randomized controlled trials of cognitive therapy for depression: comparing the efficiency of Embase, Medline and PsycINFO bibliographic databases.

    Science.gov (United States)

    Watson, R J; Richardson, P H

    1999-12-01

    This study sought to compare the sensitivity and precision of Embase, Medline and PsycINFO bibliographic database searches for randomized controlled trials of cognitive therapy for depression. Searches in each database combined with a hand search in five selected journals formed the total pool against which each search was assessed. Sensitivities of standard searches (index terms only) were 68%, 84% and 38% in Embase, Medline and PsycINFO respectively. Sensitivities of expert searches (index and free text terms) were 76%, 97% and 65% for Embase, Medline and PsycINFO respectively. Medline appears to be the most efficient at identifying articles describing psychological treatment evaluation.

  2. Records Management Database

    Data.gov (United States)

    US Agency for International Development — The Records Management Database is a tool created in Microsoft Access specifically for USAID use. It contains metadata in order to access and retrieve the information...

  3. Rule-based deduplication of article records from bibliographic databases.

    Science.gov (United States)

    Jiang, Yu; Lin, Can; Meng, Weiyi; Yu, Clement; Cohen, Aaron M; Smalheiser, Neil R

    2014-01-01

    We recently designed and deployed a metasearch engine, Metta, that sends queries and retrieves search results from five leading biomedical databases: PubMed, EMBASE, CINAHL, PsycINFO and the Cochrane Central Register of Controlled Trials. Because many articles are indexed in more than one of these databases, it is desirable to deduplicate the retrieved article records. This is not a trivial problem because data fields contain a lot of missing and erroneous entries, and because certain types of information are recorded differently (and inconsistently) in the different databases. The present report describes our rule-based method for deduplicating article records across databases and includes an open-source script module that can be deployed freely. Metta was designed to satisfy the particular needs of people who are writing systematic reviews in evidence-based medicine. These users want the highest possible recall in retrieval, so it is important to err on the side of not deduplicating any records that refer to distinct articles, and it is important to perform deduplication online in real time. Our deduplication module is designed with these constraints in mind. Articles that share the same publication year are compared sequentially on parameters including PubMed ID number, digital object identifier, journal name, article title and author list, using text approximation techniques. In a review of Metta searches carried out by public users, we found that the deduplication module was more effective at identifying duplicates than EndNote without making any erroneous assignments.
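
    An illustrative Python sketch (not the Metta deduplication module itself) of the kind of rule-based comparison the record above describes: records sharing a publication year are compared on identifiers first and then on title and author text approximation, erring on the side of keeping records separate when evidence is weak. Field names here are assumptions.

    from difflib import SequenceMatcher

    def _norm(text):
        """Lowercase and collapse whitespace for approximate comparison."""
        return " ".join(text.lower().split()) if text else ""

    def _similar(a, b):
        """Approximate string similarity in [0, 1]."""
        return SequenceMatcher(None, _norm(a), _norm(b)).ratio()

    def is_duplicate(rec_a, rec_b, title_threshold=0.93):
        """Decide whether two records (dicts with optional 'year', 'pmid', 'doi',
        'title' and 'authors' fields) appear to refer to the same article."""
        # Only records from the same publication year are compared at all.
        if rec_a.get("year") != rec_b.get("year"):
            return False
        # Strong identifiers first: agreeing PubMed IDs or DOIs decide the match.
        for key in ("pmid", "doi"):
            va, vb = rec_a.get(key), rec_b.get(key)
            if va and vb:
                return _norm(va) == _norm(vb)
        # Otherwise require a near-identical title plus a matching first author.
        if _similar(rec_a.get("title", ""), rec_b.get("title", "")) < title_threshold:
            return False
        authors_a = rec_a.get("authors") or []
        authors_b = rec_b.get("authors") or []
        if authors_a and authors_b:
            return _similar(authors_a[0], authors_b[0]) > 0.8
        # Missing author lists: too little evidence, so keep both records.
        return False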

  4. Facebook: A Bibliographic Analysis of the PsycINFO Database

    Science.gov (United States)

    Piotrowski, Chris

    2012-01-01

    With the advent of rapidly emerging technologies, researchers need to be cognizant of developments and applications in the area of social media as a topic of investigatory interest. To date, scholarly research on the topic of Facebook, a ubiquitous social media site, is rather extensive. This study on Facebook, using a bibliographic content…

  5. The Single- and Multichannel Audio Recordings Database (SMARD)

    DEFF Research Database (Denmark)

    Nielsen, Jesper Kjær; Jensen, Jesper Rindom; Jensen, Søren Holdt

    2014-01-01

    A new single- and multichannel audio recordings database (SMARD) is presented in this paper. The database contains recordings from a box-shaped listening room for various loudspeaker and array types. The recordings were made for 48 different configurations of three different loudspeakers and four different microphone arrays. In each configuration, 20 different audio segments were played and recorded, ranging from simple artificial sounds to polyphonic music. SMARD can be used for testing algorithms developed for numerous applications, and we give examples of source localisation results.

  6. Validation of Nordic dairy cattle disease recording databases

    DEFF Research Database (Denmark)

    Lind, Ann-Kristina; Thomsen, Peter Thorup; Ersbøll, Annette Kjær;

    2012-01-01

    The Nordic countries Denmark (DK), Finland (FIN), Norway (NO) and Sweden (SE) all have unique national databases holding the disease records of dairy cows. The objective of this study was to estimate and compare completeness for locomotor disorders in the four Nordic national databases. Completeness figures for farmer-recorded disease events were calculated on two different levels: the first refers to disease events that were observed on the farm regardless of whether a veterinarian had been involved (FARMER); the second refers to farmer records of cases attended by a veterinarian … During …-month periods in 2008 these farmers recorded the disease events they observed on the farm. Data from the four national databases were extracted in May 2009. The two data sources, farmer recordings and national databases, were managed in a comparable way in all four countries, and common diagnostic codes …

  7. Integrating Multi-Source Web Records into Relational Database

    Institute of Scientific and Technical Information of China (English)

    HUANG Jianbin; JI Hongbing; SUN Heli

    2006-01-01

    How to integrate heterogeneous semi-structured Web records into a relational database is an important and challenging research topic. An improved conditional random fields model was presented, combining learning from labeled samples and unlabeled database records in order to reduce the dependence on tediously hand-labeled training data. The proposed model was used to solve the problem of schema matching between the data source schema and the database schema. Experimental results using a large number of Web pages from diverse domains show the novel approach's effectiveness.

  8. The New Zealand Tsunami Database: historical and modern records

    Science.gov (United States)

    Barberopoulou, A.; Downes, G. L.; Cochran, U. A.; Clark, K.; Scheele, F.

    2016-12-01

    A database of historical (pre-instrumental) and modern (instrumentally recorded) tsunamis that have impacted or been observed in New Zealand has been compiled and published online. New Zealand's tectonic setting, astride an obliquely convergent tectonic boundary on the Pacific Rim, means that it is vulnerable to local, regional and circum-Pacific tsunamis. Despite New Zealand's comparatively short written historical record of c. 200 years, there is a wealth of information about the impact of past tsunamis. The New Zealand Tsunami Database currently has 800+ entries that describe >50 high-validity tsunamis. Sources of historical information include witness reports recorded in diaries, notes, newspapers, books, and photographs. Information on recent events comes from tide gauges and other instrumental recordings such as DART® buoys, and media of greater variety, for example, video and online surveys. The New Zealand Tsunami Database is an ongoing project with information added as further historical records come to light. Modern tsunamis are also added to the database once the relevant data for an event has been collated and edited. This paper briefly overviews the procedures and tools used in the recording and analysis of New Zealand's historical tsunamis, with emphasis on database content.

  9. Matching prosthetics order records in VA National Prosthetics Patient Database to healthcare utilization databases.

    Science.gov (United States)

    Smith, Mark W; Su, Pon; Phibbs, Ciaran S

    2010-01-01

    The National Prosthetics Patient Database (NPPD) is the national Department of Veterans Affairs (VA) dataset that records characteristics of individual prosthetic and assistive devices. It remains unknown how well NPPD records can be matched to encounter records for the same individuals in major VA utilization databases. We compared the count of prosthetics records in the NPPD with the count of prosthetics-related procedures for the same individuals recorded in major VA utilization databases. We then attempted to match the NPPD records to the utilization records by person and date. In general, 40% to 60% of the NPPD records could be matched to outpatient utilization records within a 14-day window around the NPPD dataset entry date. Match rates for inpatient data were lower: 10% to 16% within a 14-day window. The NPPD will be particularly important for studies of certain veteran groups, such as those with spinal cord injury or blast-related polytraumatic injury. Health services researchers should use both the NPPD and utilization databases to develop a full understanding of prosthetics use by individual patients.

  10. Record Linkage system in a complex relational database - MINPHIS example.

    Science.gov (United States)

    Achimugu, Philip; Soriyan, Abimbola; Oluwagbemi, Oluwatolani; Ajayi, Anu

    2010-01-01

    In the health sector, record linkage is of paramount importance, as clinical data can be distributed across different data repositories, leading to duplication. Record linkage is the process of tracking duplicate records that actually refer to the same entity. This paper proposes a fast and efficient method for duplicate detection within the healthcare domain. The first step is to standardize the data in the database using SQL. The second is to match similar pairs of records, and the third step is to organize records into match and non-match status. The system was developed in Unified Modeling Language and Java. In a batch analysis of 31,177 "supposedly" distinct identities, our method isolates 25,117 true unique records and 6,060 suspected duplicates, using a healthcare system called MINPHIS (Made in Nigeria Primary Healthcare Information System) as the test bed.
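
    A minimal Python sketch of the three-step flow described in the record above (standardize, match similar pairs, organize into match and non-match status). The field names, weights and threshold are illustrative assumptions, not the actual MINPHIS schema or rules.

    import itertools
    from difflib import SequenceMatcher

    def standardize(rec):
        """Step 1: normalize names and dates so equivalent values compare equal."""
        return {
            "name": " ".join(rec["name"].strip().lower().split()),
            "dob": rec["dob"].strip(),               # assumed already ISO formatted
            "sex": rec["sex"].strip().upper()[:1],
        }

    def pair_score(a, b):
        """Step 2: similarity score for one candidate pair of patient records."""
        name_sim = SequenceMatcher(None, a["name"], b["name"]).ratio()
        dob_match = 1.0 if a["dob"] == b["dob"] else 0.0
        sex_match = 1.0 if a["sex"] == b["sex"] else 0.0
        return 0.6 * name_sim + 0.3 * dob_match + 0.1 * sex_match

    def find_suspected_duplicates(records, threshold=0.85):
        """Step 3: pairs scoring above the threshold are flagged as matches."""
        std = [standardize(r) for r in records]
        matches = []
        for i, j in itertools.combinations(range(len(std)), 2):
            if pair_score(std[i], std[j]) >= threshold:
                matches.append((i, j))
        return matches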

  11. Semantic models in medical record data-bases.

    Science.gov (United States)

    Cerutti, S

    1980-01-01

    A great effort has recently been made in the area of data-base design in a number of application fields (banking, insurance, travel, etc.). Yet it is the current experience of computer scientists in the medical field that medical record information-processing requires a less rigid and more complete definition of data-base specifications, for a much more heterogeneous set of data and for different users who have different aims. Hence it is important to state that a data-base in the medical field ought to be a model of the environment for which it was created, rather than just a collection of data. New, more powerful and more flexible data-base models are now being designed, particularly in the USA, where the current trend in medicine is to implement, in the same structure, the connection between many different and specific users and the data-base (for administrative aims, medical care control, treatments, statistical and epidemiological results, etc.). In this way the individual users are able to talk with the data-base without interfering with one another. The present paper outlines that this multi-purpose flexibility can be achieved mainly by improving the capabilities of the data-base model. This concept allows the creation of procedures for semantic integrity control, which will certainly have a dramatic impact in the future on important management features, ranging from data-quality checking and detection of non-physiological states to more medically oriented procedures such as drug interactions, record surveillance and medical care review. That is especially true when a large amount of data is to be processed and the classical hierarchical and network data models are no longer sufficient for developing satisfactory and reliable automatic procedures. In this regard, particular emphasis will be dedicated to the relational model and, at the highest level, to the semantic data model.

  12. Model Adequacy Analysis of Matching Record Versions in Nosql Databases

    Directory of Open Access Journals (Sweden)

    E. V. Tsviashchenko

    2015-01-01

    The article investigates a model of matching record versions. The goal of this work is to analyse the model's adequacy. The model allows estimating the distribution of a user's processing time for record versions and the distribution of the record version count. The second variant of the model was used, according to which the time a client needs to process record versions depends explicitly on the number of updates performed by the other users between the sequential updates performed by the current client. In order to prove the model's adequacy, a real experiment was conducted in a cloud cluster. The cluster contains 10 virtual nodes provided by DigitalOcean. Ubuntu Server 14.04 was used as the operating system (OS). The NoSQL system Riak was chosen for the experiments. Riak versions 2.0 and later provide the "dotted version vectors" (DVV) option, an extension of the classic vector clock. Their use guarantees that the number of versions simultaneously stored in the database will not exceed the number of clients operating in parallel on a record, which is very important while conducting experiments. The application was developed using the Java library provided by Riak. The processes run directly on the nodes. In the experiment two records were used: Z, the record whose versions are handled by clients, and RZ, a service record that contains record update counters. The application algorithm can be briefly described as follows: every client reads the versions of record Z, processes its updates using the RZ record counters, and saves the treated record in the database while old versions are deleted from the DB. Then the client rereads the RZ record and increments the update counters for the other clients. After that, the client rereads the Z record, saves the necessary statistics, and deliberates on the results of processing. In the case of a conflict arising from simultaneous updates of the RZ record, the client obtains all versions of that …

  13. FINDbase: A relational database recording frequencies of genetic defects leading to inherited disorders worldwide

    NARCIS (Netherlands)

    S. van Baal (Sjozef); P. Kaimakis (Polynikis); M. Phommarinh (Manyphong); D. Koumbi (Daphne); H. Cuppens (Harry); F. Riccardino (Francesca); M. Macek (Milan MI); C.R. Scriver (Charles); G.P. Patrinos (George)

    2007-01-01

    Frequency of INherited Disorders database (FINDbase) (http://www.findbase.org) is a relational database, derived from the ETHNOS software, recording frequencies of causative mutations leading to inherited disorders worldwide. Database records include the population and ethnic group, the disorder name and the related gene, accompanied by links to any corresponding locus-specific mutation database, to the respective Online Mendelian Inheritance in Man entries and the mutation together with its frequency in that population.

  14. Digitized Database of Old Seismograms Recorded in Romania

    Science.gov (United States)

    Paulescu, Daniel; Rogozea, Maria; Popa, Mihaela; Radulian, Mircea

    2016-08-01

    The aim of this paper is to describe a managing system for a unique Romanian database of historical seismograms and complementary documentation (metadata), together with its dissemination and analysis procedures. For this study, 5188 historical seismograms recorded between 1903 and 1957 by the Romanian seismological observatories (Bucharest-Filaret, Focşani, Bacău, Vrincioaia, Câmpulung-Muscel, Iaşi) were used. In order to reconsider the historical instrumental data, the analog seismograms are converted to digital images and digital waveforms (digitization/vectorialisation). First, we applied a careful scanning procedure to the seismograms and related material (seismic bulletins, station books, etc.). In the next step, the high-resolution scanned seismograms will be processed to obtain the digital/numeric waveforms. We used a Colortrac Smartlf Cx40 scanner, which provides images in TIFF or JPG format. For digitization, the algorithm Teseo2, developed by the National Institute of Geophysics and Volcanology in Rome (Italy) within the framework of the SISMOS Project, will be used.

  15. Digitized Database of Old Seismograms Recorded in Romania

    Directory of Open Access Journals (Sweden)

    Paulescu Daniel

    2016-08-01

    The aim of this paper is to describe a managing system for a unique Romanian database of historical seismograms and complementary documentation (metadata), together with its dissemination and analysis procedures. For this study, 5188 historical seismograms recorded between 1903 and 1957 by the Romanian seismological observatories (Bucharest-Filaret, Focşani, Bacău, Vrincioaia, Câmpulung-Muscel, Iaşi) were used. In order to reconsider the historical instrumental data, the analog seismograms are converted to digital images and digital waveforms (digitization/vectorialisation). First, we applied a careful scanning procedure to the seismograms and related material (seismic bulletins, station books, etc.). In the next step, the high-resolution scanned seismograms will be processed to obtain the digital/numeric waveforms. We used a Colortrac Smartlf Cx40 scanner, which provides images in TIFF or JPG format. For digitization, the algorithm Teseo2, developed by the National Institute of Geophysics and Volcanology in Rome (Italy) within the framework of the SISMOS Project, will be used.

  16. An electronic health record-enabled obesity database

    Directory of Open Access Journals (Sweden)

    Wood G

    2012-05-01

    Background: The effectiveness of weight loss therapies is commonly measured using body mass index and other obesity-related variables. Although these data are often stored in electronic health records (EHRs) and potentially very accessible, few studies on obesity and weight loss have used data derived from EHRs. We developed processes for obtaining data from the EHR in order to construct a database on patients undergoing Roux-en-Y gastric bypass (RYGB) surgery. Methods: Clinical data obtained as part of standard of care in a bariatric surgery program at an integrated health delivery system were extracted from the EHR and deposited into a data warehouse. Data files were extracted, cleaned, and stored in research datasets. To illustrate the utility of the data, Kaplan-Meier analysis was used to estimate length of post-operative follow-up. Results: Demographic, laboratory, medication, co-morbidity, and survey data were obtained from 2028 patients who had undergone RYGB at the same institution since 2004. Pre- and post-operative diagnostic and prescribing information were available on all patients, while survey and laboratory data were available for a majority of patients. The number of patients with post-operative laboratory test results varied by test. Based on Kaplan-Meier estimates, over 74% of patients had post-operative weight data available at 4 years. Conclusion: A variety of EHR-derived data related to obesity can be efficiently obtained and used to study important outcomes following RYGB.
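
    A hedged sketch of the follow-up estimate mentioned above (a Kaplan-Meier curve of time to loss of follow-up), assuming the third-party pandas and lifelines packages and hypothetical file and column names; the study's actual analysis pipeline is not described in the record.

    import pandas as pd
    from lifelines import KaplanMeierFitter

    # Hypothetical extract from the research dataset:
    # months_followed = months from surgery to the last observed weight measurement
    # lost            = 1 if the patient stopped returning, 0 if still followed (censored)
    cohort = pd.read_csv("rygb_followup.csv")

    kmf = KaplanMeierFitter()
    kmf.fit(durations=cohort["months_followed"], event_observed=cohort["lost"])

    # Estimated probability that a patient still has weight data at 4 years (48 months)
    print(kmf.predict(48))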

  17. Databases

    Directory of Open Access Journals (Sweden)

    Nick Ryan

    2004-01-01

    Databases are deeply embedded in archaeology, underpinning and supporting many aspects of the subject. However, as well as providing a means for storing, retrieving and modifying data, databases themselves must be a result of a detailed analysis and design process. This article looks at this process, and shows how the characteristics of data models affect the process of database design and implementation. The impact of the Internet on the development of databases is examined, and the article concludes with a discussion of a range of issues associated with the recording and management of archaeological data.

  18. Records for Electronic Databases in the Online Catalog at Middle Tennessee State University

    Science.gov (United States)

    Geckle, Beverly J.; Pozzebon, Mary Ellen; Williams, Jo

    2008-01-01

    This article recounts a project at the Middle Tennessee State University library to include records for electronic databases in the online catalog. Although electronic databases are accessible via the library's Databases A-Z list and related subject guides, cataloging these resources also provides access via the online catalog, allowing more of…

  19. Archive and Database as Metaphor: Theorizing the Historical Record

    Science.gov (United States)

    Manoff, Marlene

    2010-01-01

    Digital media increase the visibility and presence of the past while also reshaping our sense of history. We have extraordinary access to digital versions of books, journals, film, television, music, art and popular culture from earlier eras. New theoretical formulations of database and archive provide ways to think creatively about these changes…

  1. DANBIO-powerful research database and electronic patient record

    DEFF Research Database (Denmark)

    Hetland, Merete Lund

    2011-01-01

    The nationwide DANBIO registry has been designed to capture operational clinical data as part of routine clinical care. At the same time, it provides a powerful research database. This article reviews the DANBIO registry with focus on problems and solutions of design, funding and linkage, provides an overview of the research outcome and presents the cohorts of RA patients. The registry, which is approved as a national quality registry, includes patients with RA, PsA and AS, who are followed longitudinally. Data are captured electronically from the source (patients and health personnel). The IT platform is based on open-source software. Via a unique personal identification code, linkage with various national registers is possible for research purposes. Since the year 2000, more than 10,000 patients have been included. The main focus of research has been on treatment efficacy and drug survival. Compared … DANBIO serves as an electronic patient 'chronicle' in routine care, and at the same time provides a powerful research database.

  2. DANBIO-powerful research database and electronic patient record

    DEFF Research Database (Denmark)

    Hetland, Merete Lund

    2011-01-01

    This article provides an overview of the research outcome and presents the cohorts of RA patients. The registry, which is approved as a national quality registry, includes patients with RA, PsA and AS, who are followed longitudinally. Data are captured electronically from the source (patients and health personnel). The IT platform … DANBIO serves as an electronic patient 'chronicle' in routine care, and at the same time provides a powerful research database.

  3. Version based spatial record management techniques for spatial database management system

    Institute of Scientific and Technical Information of China (English)

    KIM Ho-seok; KIM Hee-taek; KIM Myung-keun; BAE Hae-young

    2004-01-01

    Search has traditionally been the principal operation on spatial data in existing spatial database management systems, but update operations on spatial data, such as those generated by tracking, have recently become frequent as well. The need to improve concurrency among transactions is therefore increasing. In general-purpose database management systems, many techniques have been studied to solve the concurrency problems of transactions. Among them, multi-version algorithms minimize interference among transactions. However, applying existing multi-version algorithms to a spatial database management system to improve transaction concurrency wastes storage, because an entire version of a spatial record must be stored even if only the aspatial data of the record has changed. This paper proposes record management techniques that version aspatial data and spatial data separately, in order to decrease the storage wasted on record versions and to improve concurrency among transactions.
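
    A conceptual Python sketch (not the authors' implementation) of versioning the aspatial and spatial parts of a record separately, so that an update touching only attribute data does not copy the typically large geometry.

    from dataclasses import dataclass, field
    from typing import Any, List, Optional, Tuple

    @dataclass
    class Version:
        data: Any          # attribute dict or geometry
        timestamp: int

    @dataclass
    class SpatialRecord:
        aspatial_versions: List[Version] = field(default_factory=list)
        spatial_versions: List[Version] = field(default_factory=list)

        def update_aspatial(self, attrs: dict, ts: int) -> None:
            # Only a new aspatial version is appended; geometry versions are untouched.
            self.aspatial_versions.append(Version(attrs, ts))

        def update_spatial(self, geometry: List[Tuple[float, float]], ts: int) -> None:
            self.spatial_versions.append(Version(geometry, ts))

        def snapshot(self, ts: int) -> Tuple[Optional[Any], Optional[Any]]:
            # A reader at time ts combines the newest aspatial and spatial versions
            # written at or before ts, so readers need not block concurrent writers.
            def pick(versions):
                return next((v.data for v in reversed(versions) if v.timestamp <= ts), None)
            return pick(self.aspatial_versions), pick(self.spatial_versions)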

  4. Historical seismometry database project: A comprehensive relational database for historical seismic records

    Science.gov (United States)

    Bono, Andrea

    2007-01-01

    The recovery and preservation of the patrimony of instrumental registrations of historical earthquakes is without doubt a subject of great interest. This interest, besides being purely historical, must also be scientific: the availability of a large amount of parametric information on the seismic activity in a given area is an undoubted help to the seismological researcher's activities. This article presents the new database project of the Sismos group of the National Institute of Geophysics and Volcanology in Rome. The structure of the new scheme summarizes the experience matured over five years of activity, and we consider it useful for those who are approaching "recovery and reprocess" computer-based facilities. Several cataloguing attempts for Italian seismicity have followed one another in past years, but they have almost never been real databases. Some had positive success because they were well conceived and organized; others were limited to supplying lists of events with their hypocentral parameters. What makes this project more interesting than the previous work is the completeness and generality of the managed information. For example, it will be possible to view the hypocentral information for a given historical earthquake; it will be possible to retrieve the seismograms in raster, digital or digitized format, the information on arrival times of the phases at the various stations, the instrumental standards, and so on. The modern relational logic on which the archive is based allows all these operations to be carried out with little effort. The database described below will completely replace Sismos' current data bank. Some of the organizational principles of this work are similar to those that inspire the databases used for real-time monitoring of seismicity in the principal offices of international research. A modern planning logic in a distinctly historical …

  5. Comparing records to understand past rapid climate change: An INTIMATE database update

    Science.gov (United States)

    Kearney, Rebecca; Bronk Ramsey, Christopher; Staff, Richard A.; Albert, Paul G.

    2017-04-01

    Integrating multi-proxy records from ice, terrestrial and marine archives enhances the understanding of the temporal and spatial variation of past rapid climatic changes globally. By handling these records on their own individual timescales and linking them through known chronological relationships (e.g. tephra, 10Be and 14C), regional comparisons can be made for these past climatic events. Furthermore, the use of time-transfer functions enables the chronological uncertainties between different archives to be quantified. The chronological database devised by working group 1 (WG1) of INTIMATE exclusively uses this methodology to provide a means to visualise and compare palaeoclimate records. Development of this database is ongoing, with numerous additional records being added, with a particular focus on European archives spanning the Late Glacial period. Here we present a new phase of data collection. Through selected case study sites across Europe, we aim to illustrate the database as a novel tool for understanding spatial and temporal variations in rapid climatic change. Preliminary results allow questions such as time transgression and regional expressions of rapid climate change to be investigated. The development of this database will continue through additional input of raw climate proxy data, linking to other relevant databases (e.g. the Fossil Pollen Database) and providing output data that can be analysed in the statistical programming language R. A major goal of this work is not only to provide a detailed database, but also to allow researchers to integrate their own climate proxy data with that in the database.

  6. FINDbase: a relational database recording frequencies of genetic defects leading to inherited disorders worldwide.

    Science.gov (United States)

    van Baal, Sjozef; Kaimakis, Polynikis; Phommarinh, Manyphong; Koumbi, Daphne; Cuppens, Harry; Riccardino, Francesca; Macek, Milan; Scriver, Charles R; Patrinos, George P

    2007-01-01

    Frequency of INherited Disorders database (FINDbase) (http://www.findbase.org) is a relational database, derived from the ETHNOS software, recording frequencies of causative mutations leading to inherited disorders worldwide. Database records include the population and ethnic group, the disorder name and the related gene, accompanied by links to any corresponding locus-specific mutation database, to the respective Online Mendelian Inheritance in Man entries and the mutation together with its frequency in that population. The initial information is derived from the published literature, locus-specific databases and genetic disease consortia. FINDbase offers a user-friendly query interface, providing instant access to the list and frequencies of the different mutations. Query outputs can be either in a table or graphical format, accompanied by reference(s) on the data source. Registered users from three different groups, namely administrator, national coordinator and curator, are responsible for database curation and/or data entry/correction online via a password-protected interface. Database access is free of charge and there are no registration requirements for data querying. FINDbase provides a simple, web-based system for population-based mutation data collection and retrieval and can serve not only as a valuable online tool for molecular genetic testing of inherited disorders but also as a non-profit model for sustainable database funding, in the form of a 'database-journal'.

  7. Literature consistency of bioinformatics sequence databases is effective for assessing record quality.

    Science.gov (United States)

    Bouadjenek, Mohamed Reda; Verspoor, Karin; Zobel, Justin

    2017-01-01

    Bioinformatics sequence databases such as GenBank or UniProt contain hundreds of millions of records of genomic data. These records are derived from direct submissions from individual laboratories, as well as from bulk submissions from large-scale sequencing centres; their diversity and scale mean that they suffer from a range of data quality issues including errors, discrepancies, redundancies, ambiguities, incompleteness and inconsistencies with the published literature. In this work, we investigate and analyze the data quality of sequence databases from the perspective of a curator, who must detect anomalous and suspicious records. Specifically, we emphasize the detection of records that are inconsistent with the literature. Focusing on GenBank, we propose a set of 24 quality indicators, which are based on treating a record as a query into the published literature and then using query quality predictors. We then carry out an analysis showing that the proposed quality indicators and the quality of the records have a mutual relationship, in which one depends on the other. We propose to represent record-literature consistency as a vector of these quality indicators. By reducing the dimensionality of this representation for visualization purposes using principal component analysis, we show that records which have been reported as inconsistent with the literature fall roughly in the same area, and therefore share similar characteristics. By manually analyzing records not previously known to be erroneous that fall in the same area as records known to be inconsistent, we show that one record out of four is inconsistent with respect to the literature. This high density of inconsistent records opens the way towards the development of automatic methods for the detection of faulty records. We conclude that literature inconsistency is a meaningful strategy for identifying suspicious records. https://github.com/rbouadjenek/DQBioinformatics.
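
    An illustrative Python sketch of the representation described above: each record becomes a vector of literature-consistency indicators, which is projected to two dimensions with principal component analysis for visualization. The indicator values are made up, NumPy and scikit-learn are assumed, and the paper's 24 actual indicators are not reproduced here.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    # rows = records, columns = quality indicators (e.g. query-quality predictors
    # obtained by treating a record's fields as a query into the literature)
    indicators = np.array([
        [0.82, 0.64, 0.91, 0.40],
        [0.10, 0.22, 0.05, 0.75],   # a record inconsistent with its literature
        [0.78, 0.70, 0.88, 0.35],
    ])

    coords = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(indicators))
    print(coords)   # records with similar consistency profiles land close together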

  8. Analysis of Handling Processes of Record Versions in NoSQL Databases

    Directory of Open Access Journals (Sweden)

    Yu. A. Grigorev

    2015-01-01

    Full Text Available This article investigates the handling processes versions of a record in NoSQL databases. The goal of this work is to develop a model, which enables users both to handle record versions and work with a record simultaneously. This model allows us to estimate both a time distribution for users to handle record versions and a distribution of the count of record versions. With eventual consistency (W=R=1 there is a possibility for several users to update any record simultaneously. In this case, several versions of records with the same key will be stored in database. When reading, the user obtains all versions, handles them, and saves a new version, while older versions are deleted. According to the model, the user’s time for handling the record versions consists of two parts: random handling time of each version and random deliberation time for handling a result. Record saving time and records deleting time are much less than handling time, so, they are ignored in the model. The paper offers two model variants. According to the first variant, client's handling time of one record version is calculated as the sum of random handling times of one version based on the count of record versions. This variant ignores explicitly the fact that handling time of record versions may depend on the update count, performed by the other users between the sequential updates of the record by the current client. So there is the second variant, which takes this feature into consideration. The developed models were implemented in the GPSS environment. The model experiments with different counts of clients and different ratio between one record handling time and results deliberation time were conducted. The analysis showed that despite the resemblance of model variants, a difference in change nature between average values of record versions count and handling time is significant. In the second variant dependences of the average count of record versions in database and

  9. From ISIS to CouchDB: Databases and Data Models for Bibliographic Records

    Directory of Open Access Journals (Sweden)

    Luciano Ramalho

    2011-04-01

    For decades bibliographic data has been stored in non-relational databases, and thousands of libraries in developing countries still use ISIS databases to run their OPACs. Fast forward to 2010, and the NoSQL movement has shown that non-relational databases are good enough for Google, Amazon.com and Facebook. Meanwhile, several open source NoSQL systems have appeared. This paper discusses the data model of one class of NoSQL products, the semistructured, document-oriented databases exemplified by Apache CouchDB and MongoDB, and why they are well suited to collective cataloging applications. Also shown are the methods, tools, and scripts used to convert, from ISIS to CouchDB, bibliographic records of LILACS, a key Latin American and Caribbean health sciences index operated by the Pan-American Health Organization.
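
    A minimal sketch of how a bibliographic record can be stored as a semistructured JSON document in a document database such as CouchDB, using its HTTP API from Python. The server URL, database name and field names are illustrative assumptions, not the LILACS conversion scripts described in the record above.

    import json
    import requests

    record = {
        "_id": "example-000001",
        "type": "article",
        "title": "Sample record migrated from an ISIS master file",
        "authors": ["Ramalho, Luciano"],            # repeatable field: a plain list
        "journal": {"name": "Example Journal", "volume": "12", "issue": "3"},
        "subjects": ["bibliographic databases", "NoSQL"],
        # Fields may vary from record to record; no fixed relational schema is needed.
    }

    resp = requests.put(
        "http://localhost:5984/bibliography/" + record["_id"],
        data=json.dumps(record),
        headers={"Content-Type": "application/json"},
    )
    print(resp.status_code, resp.json())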

  10. Completeness of metabolic disease recordings in Nordic national databases for dairy cows.

    Science.gov (United States)

    Espetvedt, M N; Wolff, C; Rintakoski, S; Lind, A; Østerås, O

    2012-06-01

    The four Nordic countries, Denmark (DK), Finland (FI), Norway (NO) and Sweden (SE), all have national databases where diagnostic events in dairy cows are recorded. Examining differences in disease occurrence between countries may give information on factors that influence disease occurrence, optimal disease control and treatment strategies. For such comparisons to be valid, the data in these databases should be standardised and of good quality. The objective of the study presented here was to assess the quality of metabolic disease recordings, primarily milk fever and ketosis, in the four Nordic national databases. Completeness of recording in the databases, at two different levels, was chosen as the measure of data quality. Firstly, completeness of recording of all disease events on a farm regardless of veterinary involvement, called 'Farmer observed completeness', was determined. Secondly, completeness of recording of veterinary treated disease events only, called 'Veterinary treated completeness', was determined. To collect data for calculating these completeness levels, a simple random sample of herds was obtained in each country. Farmers who were willing to participate recorded, for 4 months in 2008, on a purpose-made registration form, any observed illness in cows, regardless of veterinary involvement. The number of participating herds was 105, 167, 179 and 129 in DK, FI, NO and SE, respectively. In total these herds registered 247, 248, 177 and 218 metabolic events for analysis in DK, FI, NO and SE, respectively. Data from the national databases were subsequently extracted, and the two sources of data were matched to find the proportion, or completeness, of diagnostic events registered by farmers that also existed in the national databases. Matching was done using a common diagnostic code system and allowed for a discrepancy of 7 days in the registered date of the event. For milk fever, the Farmer observed completeness was 77%, 67%, 79% and 79 …

  11. A new on-line electrocardiographic records database and computer routines for data analysis.

    Science.gov (United States)

    Ledezma, Carlos A; Severeyn, Erika; Perpiñán, Gilberto; Altuve, Miguel; Wong, Sara

    2014-01-01

    Gathering experimental data to test computer methods developed during research is hard work. Nowadays, some databases are available online and can be freely downloaded; however, there is not yet a wide range of databases, and not all pathologies are covered. Researchers with limited resources need more data they can consult for free. To cope with this, we present an online portal containing a compilation of ECG databases recorded over the last two decades for research purposes. The first version of this portal contains four databases of ECG records: ischemic cardiopathy (72 patients, 3-lead ECG each), ischemic preconditioning (20 patients, 3-lead ECG each), diabetes (51 patients, 8-lead ECG each) and metabolic syndrome (25 subjects, 12-lead ECG each). In addition, one computer program and three routines are provided in order to correctly read the signals, and two digital filters along with two ECG wave detectors are provided for further processing. This portal will be constantly growing; other ECG databases and signal processing software will be uploaded. With this project, we give the scientific community a resource to avoid hours of data collection and to develop free software.
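
    An illustrative Python sketch (not the portal's own routines) of a zero-phase Butterworth band-pass filter of the kind commonly applied to ECG records before wave detection, assuming NumPy/SciPy and a hypothetical sampling rate.

    import numpy as np
    from scipy.signal import butter, filtfilt

    def bandpass_ecg(signal, fs=500.0, low=0.5, high=40.0, order=4):
        """Remove baseline wander (<0.5 Hz) and high-frequency noise (>40 Hz)."""
        nyq = fs / 2.0
        b, a = butter(order, [low / nyq, high / nyq], btype="band")
        return filtfilt(b, a, signal)

    # Example on a synthetic one-second trace sampled at 500 Hz
    t = np.linspace(0, 1, 500, endpoint=False)
    noisy = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.randn(t.size)
    clean = bandpass_ecg(noisy)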

  12. Assessment of Residential History Generation Using a Public-Record Database

    Directory of Open Access Journals (Sweden)

    David C. Wheeler

    2015-09-01

    In studies of disease with potential environmental risk factors, residential location is often used as a surrogate for unknown environmental exposures or as a basis for assigning environmental exposures. These studies most typically use the residential location at the time of diagnosis due to ease of collection. However, previous residential locations may be more useful for risk analysis because of population mobility and disease latency. When residential histories have not been collected in a study, it may be possible to generate them through public-record databases. In this study, we evaluated the ability of a public-records database from LexisNexis to provide residential histories for subjects in a geographically diverse cohort study. We calculated 11 performance metrics comparing study-collected addresses and two address retrieval services from LexisNexis. We found 77% and 90% match rates for city and state and 72% and 87% detailed address match rates with the basic and enhanced services, respectively. The enhanced LexisNexis service covered 86% of the time at residential addresses recorded in the study. The mean match rate for detailed address matches varied spatially over states. The results suggest that public record databases can be useful for reconstructing residential histories for subjects in epidemiologic studies.

  13. A recovery method using recently updated record information in shared-nothing spatial database cluster

    Institute of Scientific and Technical Information of China (English)

    JEONG Myeong-ho; JANG Yong-ll; PARK Soon-young; BAE Hae-young

    2004-01-01

    A shared-nothing spatial database cluster is a system that provides continuous service even if a failure happens in any node, so efficient recovery from failure is very important. Generally, the existing method recovers the failed node by using both the cluster log and the local log. This method, however, causes several problems that increase the communication cost and the size of the cluster log. This paper proposes a novel recovery method using recently updated record information in a shared-nothing spatial database cluster. The proposed technique utilizes record update information and pointers to the actual data. This reduces the log size and the communication cost, and consequently reduces the recovery time of the failed node, since fewer update operations need to be processed.

  14. Databases

    Data.gov (United States)

    National Aeronautics and Space Administration — The databases of computational and experimental data from the first Aeroelastic Prediction Workshop are located here. The databases' file names tell their contents by...

  15. Archive of mass spectral data files on recordable CD-ROMs and creation and maintenance of a searchable computerized database.

    Science.gov (United States)

    Amick, G D

    1999-01-01

    A database containing the names of mass spectral data files generated in a forensic toxicology laboratory, and two Microsoft Visual Basic programs to maintain and search this database, are described. The data files (approximately 0.5 KB each) were collected from six mass spectrometers during routine casework. Data files were archived on 650 MB (74 min) recordable CD-ROMs. Each recordable CD-ROM was given a unique name, and its list of data file names was placed into the database. The present manuscript describes the use of search and maintenance programs for searching and routine upkeep of the database and creation of CD-ROMs for archiving of data files.

  16. Traceable P2P record exchange: a database-oriented approach

    Institute of Scientific and Technical Information of China (English)

    Fengrong LI; Takuya IIDA; Yoshiharu ISHIKAWA

    2008-01-01

    In recent years, peer-to-peer (P2P) technologies have been used for flexible and scalable information exchange on the Internet, but problems remain to be solved for reliable information exchange. To enhance the reliability of exchanged information, it is important to trace how data circulates between peers and how data modifications are performed during the circulation before reaching the destination. However, such lineage tracing is not easy in current P2P networks, since data replications and modifications are performed independently by autonomous peers; this creates a lack of reliability among the records exchanged. In this paper, we propose a framework for traceable record exchange in a P2P network. By managing historical information in distributed peers, we make the modification and exchange histories of records traceable. One of the features of our work is that database technologies are utilized to realize the framework. Histories are maintained in a relational database in each peer, and tracing queries are written in the datalog query language and executed in a P2P network by cooperating peers. This paper describes the concept of the framework and gives an overview of the approach to query processing.
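
    A hedged sketch of lineage tracing over a per-peer history table. The record above describes datalog queries over relational tables; here the idea is approximated with a recursive SQL query in SQLite, and the table and column names are illustrative only.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE history (record_id TEXT, derived_from TEXT, peer TEXT, action TEXT);
    INSERT INTO history VALUES
      ('r1', NULL, 'peerA', 'create'),
      ('r2', 'r1', 'peerB', 'copy'),
      ('r3', 'r2', 'peerC', 'modify');
    """)

    # Walk the derivation chain of record 'r3' back to its origin.
    rows = conn.execute("""
    WITH RECURSIVE lineage(record_id, derived_from, peer, action) AS (
        SELECT record_id, derived_from, peer, action FROM history WHERE record_id = 'r3'
        UNION ALL
        SELECT h.record_id, h.derived_from, h.peer, h.action
        FROM history h JOIN lineage l ON h.record_id = l.derived_from
    )
    SELECT record_id, peer, action FROM lineage;
    """).fetchall()
    print(rows)   # [('r3', 'peerC', 'modify'), ('r2', 'peerB', 'copy'), ('r1', 'peerA', 'create')]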

  17. NOAA Climate Data Record (CDR) of Zonal Mean Ozone Binary Database of Profiles (BDBP), version 1.0

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This NOAA Climate Data Record (CDR) of Zonal Mean Ozone Binary Database of Profiles (BDBP) dataset is a vertically resolved, global, gap-free and zonal mean dataset...

  18. Paleosecular variation during the PCRS based on a new database of sedimentary and volcanic records

    Science.gov (United States)

    Haldan, M. M.; Langereis, C. G.; Evans, M. E.

    2007-12-01

    We present a paleosecular variation study using a generalised global paleomagnetic sedimentary and volcanic database. We made use of all available (and suitable) published, and some new, sedimentary and volcanic paleomagnetic records corresponding to the Permo-Carboniferous Reversed Superchron (PCRS) interval, and reanalysed all data. We focused on records with a sufficient number of samples, and acquired, whenever possible, the original data or, as a second choice, parametrised published site means. Analysis of these paleomagnetic data in terms of the latitude variation of the scatter of the virtual geomagnetic poles (VGPs) suggests that careful data selection is required and that some of the older studies may need to be redone using more modern methods, both in terms of sampling and laboratory treatment. In addition, high-latitude records (southern and especially northern hemisphere) are notably lacking in the published literature. Transitional data are removed using a VGP cut-off angle that varies with latitude. We also use our extended sedimentary records from Permian red beds from the Lodève and Dôme de Barrot basins (S. France), a new detailed paleomagnetic study of the Permian volcanics in the Oslo graben (Norway), as well as new data from Carboniferous-Permian sediments from the Donbas basin (Ukraine). We compare our results with those from published paleosecular variation models and with recent (re)analyses of VGP scatter during different periods of the geological archive.

  19. Optimising workflow in andrology: a new electronic patient record and database

    Institute of Scientific and Technical Information of China (English)

    Frank Tüttelmann; C. Marc Luetjens; Eberhard Nieschlag

    2006-01-01

    Aim: To improve workflow and usability by introducing a new electronic patient record (EPR) and database. Methods: Establishment of an EPR based on open source technology (MySQL database and PHP scripting language) in a tertiary care andrology center at a university clinic. Workflow analysis, a benchmark comparing the two systems, and a survey of usability and ergonomics were carried out. Results: Workflow optimizations (electronic ordering of laboratory analyses, elimination of transcription steps and automated referral letters) and the decrease in time required for data entry per patient to 71% ± 27% (P < 0.05) led to a workload reduction. The benchmark showed a significant performance increase (highest when starting the respective system: 1.3 ± 0.2 s vs. 11.1 ± 0.2 s, mean ± SD). In the survey, users rated the new system at least two ranks higher than its predecessor (P < 0.01) in all sub-areas. Conclusion: With further improvements, today's EPRs can evolve to substitute paper records, saving time (and possibly costs), supporting user satisfaction and expanding the basis for scientific evaluation as more data become electronically available. Newly introduced systems should be versatile, adaptable for users, and workflow-oriented to yield the highest benefit. If ready-made software is purchased, customization should be implemented during rollout.

  20. Performance evaluation of wavelet-based face verification on a PDA recorded database

    Science.gov (United States)

    Sellahewa, Harin; Jassim, Sabah A.

    2006-05-01

    The rise of international terrorism and the rapid increase in fraud and identity theft have added urgency to the task of developing biometric-based person identification as a reliable alternative to conventional authentication methods. Human identification based on face images is a tough challenge in comparison to identification based on fingerprints or iris recognition. Yet, due to its unobtrusive nature, face recognition is the preferred method of identification for security-related applications. The success of such systems will depend on the support of massive infrastructures. Current mobile communication devices (3G smart phones) and PDAs are equipped with a camera which can capture both still images and streaming video clips, and a touch-sensitive display panel. Besides convenience, such devices provide an adequately secure infrastructure for sensitive and financial transactions, protecting against fraud and repudiation while ensuring accountability. Biometric authentication systems for mobile devices would have obvious advantages in conflict scenarios where communication from beyond enemy lines is essential to save soldier and civilian lives. In areas of conflict or disaster, the luxury of fixed infrastructure is not available or has been destroyed. In this paper, we present a wavelet-based face verification scheme that has been specifically designed and implemented on a currently available PDA. We report on its performance on the benchmark audio-visual BANCA database and on a newly developed PDA-recorded audio-visual database that includes indoor and outdoor recordings.

  1. The current knowledge on centipedes (Chilopoda) in Slovenia: faunistic and ecological records from a national database.

    Science.gov (United States)

    Ravnjak, Blanka; Kos, Ivan

    2015-01-01

    In spite of Slovenia's very high biodiversity, only a few animal groups have been significantly investigated and are well known in this area. Slovenian researchers have studied only about half of the species known to be living in the country (Mršić 1997), but centipedes are among the well-investigated groups. All available data about centipedes in Slovenia collected from 1921 to 2014 have been consolidated into a general electronic database called "CHILOBIO", which was created to provide an easy overview of the Slovenian centipede fauna and to allow entry and interpretation of new data collected in future research. The level of investigation has been studied with this database, in conjunction with a geographic information system (GIS). In the study period, 109 species were identified from 350 localities in 109 of the 236 UTM 10 × 10 km quadrants which cover the study area. The south-central part of the country has been investigated best, whereas there is an absence of data from the south-eastern, eastern and north-eastern regions. The highest number of species (52) has been recorded near the Iška valley (Central Slovenia, quadrant VL68). In 48% of the UTM quadrants investigated, fewer than 10 species were recorded, and just 5 species were found in one locality. Seventeen species were reported only in the Dinaric region, 4 in the Prealpine-subpannonian region and 7 in the Primorska-submediterranean region.

  2. Increases in the completeness of disease records in dairy databases following changes in the criteria determining whether a record counts as correct

    DEFF Research Database (Denmark)

    Lind, Ann-Kristina; Houe, Hans; Espetvedt, Mari

    2012-01-01

    Background: The four Nordic countries, Denmark (DK), Finland (FIN), Norway (NO) and Sweden (SE), all have national databases in which mainly records of treated animals are maintained. Recently, the completeness of locomotor disorder records in these databases has been evaluated using farmers' recordings as a reference level. The objective of the present study was to see how previous estimates of completeness are affected by the criteria determining whether a recording in the database is judged correct. These demands included date of diagnosis and disease classification. In contrast … increase in completeness figures lying in the range of 24-100%. Further increases were minor, or non-existent, when the window was expanded to +/-30 days. The same trend was seen for individual diagnoses. Conclusion: In all four of the Nordic countries a common pattern can be observed: a further increase …

  3. Constraints on Biological Mechanism from Disease Comorbidity Using Electronic Medical Records and Database of Genetic Variants.

    Directory of Open Access Journals (Sweden)

    Steven C Bagley

    2016-04-01

    Patterns of disease co-occurrence that deviate from statistical independence may represent important constraints on biological mechanism, which sometimes can be explained by shared genetics. In this work we study the relationship between disease co-occurrence and commonly shared genetic architecture of disease. Records of pairs of diseases were combined from two different electronic medical record systems (Columbia, Stanford) and compared to a large database of published disease-associated genetic variants (VARIMED); data on 35 disorders were available across all three sources, which include medical records for over 1.2 million patients and variants from over 17,000 publications. Based on the sources in which they appeared, disease pairs were categorized as having predominantly clinical, genetic, or both kinds of manifestations. Confounding effects of age on disease incidence were controlled for by only comparing diseases when they fall in the same cluster of similarly shaped incidence patterns. We find that disease pairs that are overrepresented in both electronic medical record systems and in VARIMED come from two main disease classes, autoimmune and neuropsychiatric. We furthermore identify specific genes that are shared within these disease groups.

  4. Examining database persistence of ISO/EN 13606 standardized electronic health record extracts: relational vs. NoSQL approaches.

    Science.gov (United States)

    Sánchez-de-Madariaga, Ricardo; Muñoz, Adolfo; Lozano-Rubí, Raimundo; Serrano-Balazote, Pablo; Castro, Antonio L; Moreno, Oscar; Pascual, Mario

    2017-08-18

    The objective of this research is to compare relational and non-relational (NoSQL) database systems for storing, recovering, querying and persisting standardized medical information in the form of ISO/EN 13606 normalized Electronic Health Record XML extracts, both in isolation and concurrently. NoSQL database systems have recently attracted much attention, but few studies in the literature address their direct comparison with relational databases when applied to build the persistence layer of a standardized medical information system. One relational and two NoSQL databases (one document-based and one native XML database) of three different sizes were created in order to evaluate and compare the response times (algorithmic complexity) of six queries of growing complexity, which were performed on them. Similar appropriate results available in the literature were also considered. Relational and non-relational NoSQL database systems both show almost linear query execution times, but with very different slopes, the former being much steeper than the latter two. Document-based NoSQL databases perform better in concurrency than in isolation, and also better than relational databases in concurrency. Non-relational NoSQL databases seem to be more appropriate than standard relational SQL databases when the database size is extremely high (secondary use, research applications). Document-based NoSQL databases generally perform better than native XML NoSQL databases. Visualization and editing of EHR extracts are also document-based tasks more appropriate to NoSQL database systems. However, the appropriate database solution depends greatly on the particular situation and specific problem.

  5. Disease types discovery from a large database of inpatient records: A sepsis study.

    Science.gov (United States)

    Gligorijevic, Djordje; Stojanovic, Jelena; Obradovic, Zoran

    2016-12-01

    Data-driven phenotype discoveries on Electronic Health Records (EHR) data have recently brought benefits across many aspects of clinical practice. In the method described in this paper, we map a very large EHR database containing more than a million inpatient cases into a low-dimensional space where diseases with similar phenotypes have similar representation. This embedding allows for an effective segmentation of diseases into more homogeneous categories, an important task of discovering disease types for precision medicine. In particular, many diseases are heterogeneous in nature. For instance, sepsis, a systemic and progressive inflammation, can be caused by many factors, and can have multiple manifestations on different human organs. Understanding such heterogeneity of the disease can help in addressing many important issues regarding sepsis, including early diagnosis and treatment, which is of huge importance as sepsis is one of the main causes of in-hospital deaths in the United States. This study analyzes state-of-the-art embedding models that have had huge success in various fields, applying them to disease embedding from EHR databases. Particular interest is given to learning multi-type representations of heterogeneous diseases, which leads to more homogeneous groups. Our results provide evidence that such representations yield phenotypes of higher quality and also provide benefit when predicting mortality of inpatient visits.

  6. Online database for mosquito (Diptera, Culicidae) occurrence records in French Guiana.

    Science.gov (United States)

    Talaga, Stanislas; Murienne, Jérôme; Dejean, Alain; Leroy, Céline

    2015-01-01

    A database providing information on mosquito specimens (Arthropoda: Diptera: Culicidae) collected in French Guiana is presented. Field collections were initiated in 2013 under the auspices of the CEnter for the study of Biodiversity in Amazonia (CEBA: http://www.labexceba.fr/en/). This study is part of an ongoing process aiming to understand the distribution of mosquitoes, including vector species, across French Guiana. Occurrences are recorded after each collecting trip in a database managed by the laboratory Evolution et Diversité Biologique (EDB), Toulouse, France. The dataset is updated monthly and is available online. Voucher specimens and their associated DNA are stored at the laboratory Ecologie des Forêts de Guyane (Ecofog), Kourou, French Guiana. The latest version of the dataset is accessible through EDB's Integrated Publication Toolkit at http://130.120.204.55:8080/ipt/resource.do?r=mosquitoes_of_french_guiana or through the Global Biodiversity Information Facility data portal at http://www.gbif.org/dataset/5a8aa2ad-261c-4f61-a98e-26dd752fe1c5. It can also be viewed through the Guyanensis platform at http://guyanensis.ups-tlse.fr.

  7. Geochronological database and classification system for age uncertainties in Neotropical pollen records

    Science.gov (United States)

    Flantua, S. G. A.; Blaauw, M.; Hooghiemstra, H.

    2016-02-01

    The newly updated inventory of palaeoecological research in Latin America offers an important overview of sites available for multi-proxy and multi-site purposes. From the literature supporting this inventory, we compiled all available age model metadata to create a chronological database of 5116 control points (e.g. 14C, tephra, fission track, OSL, 210Pb) from 1097 pollen records. Based on this literature review, we present a summary of chronological dating and reporting in the Neotropics. Difficulties and recommendations for chronology reporting are discussed. Furthermore, for 234 pollen records in northwest South America, a classification system for age uncertainties is implemented based on chronologies generated with updated calibration curves. With these outcomes, age models are produced for those sites without an existing chronology, alternative age models are provided for researchers interested in comparing the effects of different calibration curves and age-depth modelling software, and the importance of uncertainty assessments of chronologies is highlighted. Sample resolution and temporal uncertainty of ages are discussed for different time windows, focusing on events relevant for research on centennial- to millennial-scale climate variability. All age models and developed R scripts are publicly available through figshare, including a manual to use the scripts.

  8. Ontology-guided distortion control for robust-lossless database watermarking: application to inpatient hospital stay records.

    Science.gov (United States)

    Franco-Contreras, J; Coatrieux, G; Cuppens-Boulahia, N; Cuppens, F; Roux, C

    2014-01-01

    In this paper, we propose a new semantic distortion control method for database watermarking. It is based on the identification, by means of an ontology, of the semantic links that exist between attribute values in tuples. Such distortion control gives any watermarking scheme the capability to avoid incoherent records and consequently ensures: i) the normal interpretation of watermarked data, i.e. a semantically imperceptible watermark; and ii) the prevention of identification of watermarked tuples by an attacker. The solution we present herein successfully combines this semantic distortion control method with a robust lossless watermarking scheme. Experimental results obtained on a medical database of more than half a million inpatient hospital stay records also show a non-negligible gain in performance in terms of robustness and database distortion.

  9. Written records of historical tsunamis in the northeastern South China Sea – challenges associated with developing a new integrated database

    Directory of Open Access Journals (Sweden)

    A. Y. A. Lau

    2010-09-01

    Full Text Available Comprehensive analysis of 15 previously published regional databases incorporating more than 100 sources leads to a newly revised historical tsunami database for the northeastern (NE) region of the South China Sea (SCS), including Taiwan. The validity of each reported historical tsunami event listed in our database is assessed by comparing and contrasting the information and descriptions provided in the other databases. All earlier databases suffer from errors associated with inaccuracies in translation between different languages, calendars and location names. The new database contains 205 records of "events" reported to have occurred between AD 1076 and 2009. We identify and investigate 58 recorded tsunami events in the region. The validity of each event is based on the consistency and accuracy of the reports along with the relative number of individual records for that event. Of the 58 events, 23 are regarded as "valid" (confirmed) events, three are "probable" events and six are "possible". Eighteen events are considered "doubtful" and eight events "invalid". The most destructive tsunami of the 23 valid events occurred in 1867 and affected Keelung, northern Taiwan, killing at least 100 people. Inaccuracies in the historical record aside, this new database highlights the occurrence and geographical extent of several large tsunamis in the NE SCS region and allows an elementary statistical analysis of annual recurrence intervals. Based on historical records from 1951–2009 the probability of a tsunami (from any source) affecting the region in any given year is relatively high (33.4%). However, the likelihood of a tsunami that has a wave height >1 m, and/or causes fatalities and damage to infrastructure occurring in the region in any given year is low (1–2%). This work indicates the need for further research using coastal stratigraphy and inundation modeling to help validate some of the historical accounts of tsunamis as well as adequately evaluate

  10. An occurence records database of French Guiana harvestmen (Arachnida, Opiliones)

    Directory of Open Access Journals (Sweden)

    Sébastien Cally

    2014-12-01

    Full Text Available This dataset provides information on specimens of harvestmen (Arthropoda, Arachnida, Opiliones) collected in French Guiana. Field collections have been initiated in 2012 within the framework of the CEnter for the Study of Biodiversity in Amazonia (CEBA: www.labex-ceba.fr/en/). This dataset is a work in progress. Occurrences are recorded in an online database stored at the EDB laboratory after each collecting trip and the dataset is updated on a monthly basis. Voucher specimens and associated DNA are also stored at the EDB laboratory until deposition in natural history Museums. The latest version of the dataset is publicly and freely accessible through our Integrated Publication Toolkit at http://130.120.204.55:8080/ipt/resource.do?r=harvestmen_of_french_guiana or through the Global Biodiversity Information Facility data portal at http://www.gbif.org/dataset/3c9e2297-bf20-4827-928e-7c7eefd9432c.

  11. An occurence records database of French Guiana harvestmen (Arachnida, Opiliones).

    Science.gov (United States)

    Cally, Sébastien; Solbès, Pierre; Grosso, Bernadette; Murienne, Jérôme

    2014-01-01

    This dataset provides information on specimens of harvestmen (Arthropoda, Arachnida, Opiliones) collected in French Guiana. Field collections have been initiated in 2012 within the framework of the CEnter for the Study of Biodiversity in Amazonia (CEBA: www.labex-ceba.fr/en/). This dataset is a work in progress.  Occurrences are recorded in an online database stored at the EDB laboratory after each collecting trip and the dataset is updated on a monthly basis. Voucher specimens and associated DNA are also stored at the EDB laboratory until deposition in natural history Museums. The latest version of the dataset is publicly and freely accessible through our Integrated Publication Toolkit at http://130.120.204.55:8080/ipt/resource.do?r=harvestmen_of_french_guiana or through the Global Biodiversity Information Facility data portal at http://www.gbif.org/dataset/3c9e2297-bf20-4827-928e-7c7eefd9432c.

  12. Standard Guide for Recording Mechanical Test Data of Fiber-Reinforced Composite Materials in Databases

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2000-01-01

    1.1 This guide provides a common format for mechanical test data for composite materials for two purposes: (1) to establish data reporting requirements for test methods and (2) to provide information for the design of material property databases. This guide should be used in combination with Guide E 1309, which provides similar information to identify the composite material tested. 1.2 These guidelines are specific to mechanical tests of high-modulus fiber-reinforced composite materials. Types of tests considered in this guide include tension, compression, shear, flexure, open/filled hole, bearing, fracture toughness, and fatigue. The ASTM standards for which this guide was developed are listed in . The guidelines may also be useful for additional tests or materials. 1.3 This guide is the second part of a modular approach for which the first part is Guide E 1309. Guide E 1309 serves to identify the material, and this guide serves to describe mechanical testing procedures and variables and to record results....

  13. California dragonfly and damselfly (Odonata) database: temporal and spatial distribution of species records collected over the past century.

    Science.gov (United States)

    Ball-Damerow, Joan E; Oboyski, Peter T; Resh, Vincent H

    2015-01-01

    The recently completed Odonata database for California consists of specimen records from the major entomology collections of the state, large Odonata collections outside of the state, previous literature, historical and recent field surveys, and from enthusiast group observations. The database includes 32,025 total records and 19,000 unique records for 106 species of dragonflies and damselflies, with records spanning 1879-2013. Records have been geographically referenced using the point-radius method to assign coordinates and an uncertainty radius to specimen locations. In addition to describing techniques used in data acquisition, georeferencing, and quality control, we present assessments of the temporal, spatial, and taxonomic distribution of records. We use this information to identify biases in the data, and to determine changes in species prevalence, latitudinal ranges, and elevation ranges when comparing records before 1976 and after 1979. The average latitude of where records occurred increased by 78 km over these time periods. While average elevation did not change significantly, the average minimum elevation across species declined by 108 m. Odonata distribution may be generally shifting northwards as temperature warms and to lower minimum elevations in response to increased summer water availability in low-elevation agricultural regions. The unexpected decline in elevation may also be partially the result of bias in recent collections towards centers of human population, which tend to occur at lower elevations. This study emphasizes the need to address temporal, spatial, and taxonomic biases in museum and observational records in order to produce reliable conclusions from such data.
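
    The point-radius method mentioned above attaches a coordinate and an uncertainty radius to each specimen locality. A minimal sketch of how such records might then be screened against a circular study area follows; the localities, radii, and study-area parameters are invented for illustration, not taken from the California database.

```python
# Minimal sketch of point-radius georeferencing: each specimen locality is
# stored as a coordinate plus an uncertainty radius (km), and a record is
# kept for a study area if its uncertainty circle overlaps that area.
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

# Illustrative records: coordinates and uncertainty radii are made up.
records = [
    {"species": "Aeshna californica", "lat": 38.58, "lon": -121.49, "radius_km": 2.0},
    {"species": "Argia vivida",       "lat": 36.60, "lon": -121.90, "radius_km": 15.0},
]

# Study area: a circle of 50 km around a point of interest.
area_lat, area_lon, area_radius_km = 38.5, -121.5, 50.0

for r in records:
    d = haversine_km(area_lat, area_lon, r["lat"], r["lon"])
    overlaps = d <= area_radius_km + r["radius_km"]
    print(f"{r['species']}: distance {d:.1f} km, possibly in area: {overlaps}")
```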

  14. Development of a database to record orofacial manifestations in patients with rare diseases: a status report from the ROMSE (recording of orofacial manifestations in people with rare diseases) database.

    Science.gov (United States)

    Hanisch, M; Hanisch, L; Benz, K; Kleinheinz, J; Jackowski, J

    2017-02-23

    The aim of this working group was to establish the ROMSE (recording of orofacial manifestations in people with rare diseases) database to provide clinicians, patients, and their families with better information about these diseases. In 2011, we began to search the databases Orphanet, OMIM® (Online Mendelian Inheritance in Man®) and PubMed for rare diseases with orofacial symptoms, and since 2013, the collected information has been incorporated into a web-based, freely accessible database. To date, 471 rare diseases with orofacial signs have been listed on ROMSE, and 10 main categories with 99 subcategories of signs, such as different types of dental anomalies, changes in the oral mucosa, dysgnathia, and orofacial clefts, have been defined. The database provides a platform for general clinicians, orthodontists, and oral and maxillofacial surgeons to work on the best treatments.

  15. The Symposium on Second Generation Clinical Databases and the Electronic Dental Record.

    Science.gov (United States)

    Eisner, John

    1991-01-01

    The article reports on initial efforts of a consortium of 19 dental schools to cooperate in developing clinical databases and associated hardware and software. The article lists initial members, committee members, and follow-up activities, working toward development of databases on patient history, oral status, diagnosis, treatment evaluation,…

  16. California dragonfly and damselfly (Odonata) database: temporal and spatial distribution of species records collected over the past century

    Directory of Open Access Journals (Sweden)

    Joan E. Ball-Damerow

    2015-02-01

    Full Text Available The recently completed Odonata database for California consists of specimen records from the major entomology collections of the state, large Odonata collections outside of the state, previous literature, historical and recent field surveys, and from enthusiast group observations. The database includes 32,025 total records and 19,000 unique records for 106 species of dragonflies and damselflies, with records spanning 1879–2013. Records have been geographically referenced using the point-radius method to assign coordinates and an uncertainty radius to specimen locations. In addition to describing techniques used in data acquisition, georeferencing, and quality control, we present assessments of the temporal, spatial, and taxonomic distribution of records. We use this information to identify biases in the data, and to determine changes in species prevalence, latitudinal ranges, and elevation ranges when comparing records before 1976 and after 1979. The average latitude of where records occurred increased by 78 km over these time periods. While average elevation did not change significantly, the average minimum elevation across species declined by 108 m. Odonata distribution may be generally shifting northwards as temperature warms and to lower minimum elevations in response to increased summer water availability in low-elevation agricultural regions. The unexpected decline in elevation may also be partially the result of bias in recent collections towards centers of human population, which tend to occur at lower elevations. This study emphasizes the need to address temporal, spatial, and taxonomic biases in museum and observational records in order to produce reliable conclusions from such data.

  17. Validation of the diagnosis canine epilepsy in a Swedish animal insurance database against practice records

    DEFF Research Database (Denmark)

    Heske, Linda; Berendt, Mette; Jäderlund, Karin Hultin

    2014-01-01

    Canine epilepsy is one of the most common neurological conditions in dogs, but the actual incidence of the disease remains unknown. A Swedish animal insurance database has previously been shown to be useful for the study of disease occurrence in companion animals. The dogs insured by this company represent a unique population for epidemiological studies, because they are representative of the general dog population in Sweden and are followed throughout their life, allowing studies of disease incidence to be performed. The database covers 50% of all insured dogs (in the year 2012), which represents 40% of the national dog population. Most commonly, dogs are covered by both veterinary care insurance and life insurance. Previous studies have shown that the general data quality is good, but the validity of a specific diagnosis should be examined carefully before using the database for incidence calculations.

  18. Western Monarch and Milkweed Habitat Suitability Assessment Project- Public Share Version of Species Occurence Records Database

    Data.gov (United States)

    US Fish and Wildlife Service, Department of the Interior — This data is a compilation of milkweed (genus Asclepias) and monarch butterfly (Danaus plexippus) occurrences and specimen records across the western United States...

  19. Validation of Nordic dairy cattle disease recording databases – completeness for locomotor disorders

    DEFF Research Database (Denmark)

    Lind, Ann-Kristina; Thomsen, Peter Thorup; Ersbøll, Annette Kjær

    2012-01-01

    Completeness figures for farmer-recorded disease events were calculated on two different levels: the first refers to disease events that were observed on the farm regardless of whether a veterinarian had been involved (FARMER); the second refers to farmer records of cases attended by a veterinarian, i.e. to veterinarian-treated disease events (VET). A sample of herds with 15 or more cows was obtained from a simple random sample of dairy farms in FIN, NO and SE, and from a systematic random sample in DK. There were 105, 167, 179 and 129 participating farmers in DK, FIN, NO and SE, respectively, and during two 2

  20. Searching for Controlled Trials of Complementary and Alternative Medicine: A Comparison of 15 Databases

    Directory of Open Access Journals (Sweden)

    Elise Cogo

    2011-01-01

    Full Text Available This project aims to assess the utility of bibliographic databases beyond the three major ones (MEDLINE, EMBASE and Cochrane CENTRAL) for finding controlled trials of complementary and alternative medicine (CAM). Fifteen databases were searched to identify controlled clinical trials (CCTs) of CAM not also indexed in MEDLINE. Searches were conducted in May 2006 using the revised Cochrane highly sensitive search strategy (HSSS) and the PubMed CAM Subset. Yield of CAM trials per 100 records was determined, and databases were compared over a standardized period (2005). The Acudoc2 RCT, Acubriefs, Index to Chiropractic Literature (ICL) and Hom-Inform databases had the highest concentrations of non-MEDLINE records, with more than 100 non-MEDLINE records per 500. Other productive databases had ratios between 500 and 1500 records to 100 non-MEDLINE records—these were AMED, MANTIS, PsycINFO, CINAHL, Global Health and Alt HealthWatch. Five databases were found to be unproductive: AGRICOLA, CAIRSS, Datadiwan, Herb Research Foundation and IBIDS. Acudoc2 RCT yielded 100 CAM trials in the most recent 100 records screened. Acubriefs, AMED, Hom-Inform, MANTIS, PsycINFO and CINAHL had more than 25 CAM trials per 100 records screened. Global Health, ICL and Alt HealthWatch were below 25 in yield. There were 255 non-MEDLINE trials from eight databases in 2005, with only 10% indexed in more than one database. Yield varied greatly between databases; the most productive databases from both sampling methods were Acubriefs, Acudoc2 RCT, AMED and CINAHL. Low overlap between databases indicates comprehensive CAM literature searches will require multiple databases.

  1. Searching for Controlled Trials of Complementary and Alternative Medicine: A Comparison of 15 Databases

    Science.gov (United States)

    Cogo, Elise; Sampson, Margaret; Ajiferuke, Isola; Manheimer, Eric; Campbell, Kaitryn; Daniel, Raymond; Moher, David

    2011-01-01

    This project aims to assess the utility of bibliographic databases beyond the three major ones (MEDLINE, EMBASE and Cochrane CENTRAL) for finding controlled trials of complementary and alternative medicine (CAM). Fifteen databases were searched to identify controlled clinical trials (CCTs) of CAM not also indexed in MEDLINE. Searches were conducted in May 2006 using the revised Cochrane highly sensitive search strategy (HSSS) and the PubMed CAM Subset. Yield of CAM trials per 100 records was determined, and databases were compared over a standardized period (2005). The Acudoc2 RCT, Acubriefs, Index to Chiropractic Literature (ICL) and Hom-Inform databases had the highest concentrations of non-MEDLINE records, with more than 100 non-MEDLINE records per 500. Other productive databases had ratios between 500 and 1500 records to 100 non-MEDLINE records—these were AMED, MANTIS, PsycINFO, CINAHL, Global Health and Alt HealthWatch. Five databases were found to be unproductive: AGRICOLA, CAIRSS, Datadiwan, Herb Research Foundation and IBIDS. Acudoc2 RCT yielded 100 CAM trials in the most recent 100 records screened. Acubriefs, AMED, Hom-Inform, MANTIS, PsycINFO and CINAHL had more than 25 CAM trials per 100 records screened. Global Health, ICL and Alt HealthWatch were below 25 in yield. There were 255 non-MEDLINE trials from eight databases in 2005, with only 10% indexed in more than one database. Yield varied greatly between databases; the most productive databases from both sampling methods were Acubriefs, Acudoc2 RCT, AMED and CINAHL. Low overlap between databases indicates comprehensive CAM literature searches will require multiple databases. PMID:19468052

  2. Development of prostate cancer research database with the clinical data warehouse technology for direct linkage with electronic medical record system.

    Science.gov (United States)

    Choi, In Young; Park, Seungho; Park, Bumjoon; Chung, Byung Ha; Kim, Choung-Soo; Lee, Hyun Moo; Byun, Seok-Soo; Lee, Ji Youl

    2013-01-01

    Despite the increasing number of prostate cancer patients, little is known from nationwide data about the impact of treatments for prostate cancer and the outcomes of different treatments. In order to obtain more comprehensive information on Korean prostate cancer patients, many professionals urged the creation of a national system to monitor the quality of prostate cancer care. To this end, the prostate cancer database system was planned while cautiously accommodating different views from various professions. This prostate cancer research database system incorporates information about prostate cancer research, including demographics, medical history, operation information, laboratory results, and quality-of-life surveys. The system includes three different ways of collecting clinical data to produce a comprehensive database: direct data extraction from the electronic medical record (EMR) system, manual data entry after linking EMR documents such as magnetic resonance imaging findings, and paper-based data collection for patient surveys. We implemented clinical data warehouse technology to test the direct EMR link method with the St. Mary's Hospital system. Using this method, the total number of eligible patients was 2,300 for the period from 1997 to 2012. Among them, 538 patients underwent surgery and the others received different treatments. Our database system could provide the infrastructure for collecting error-free data to support various retrospective and prospective studies.

  3. A method to implement fine-grained access control for personal health records through standard relational database queries.

    Science.gov (United States)

    Sujansky, Walter V; Faus, Sam A; Stone, Ethan; Brennan, Patricia Flatley

    2010-10-01

    Online personal health records (PHRs) enable patients to access, manage, and share certain of their own health information electronically. This capability creates the need for precise access-control mechanisms that restrict the sharing of data to that intended by the patient. The authors describe the design and implementation of an access-control mechanism for PHR repositories that is modeled on the eXtensible Access Control Markup Language (XACML) standard, but intended to reduce the cognitive and computational complexity of XACML. The authors implemented the mechanism entirely in a relational database system using ANSI-standard SQL statements. Based on a set of access-control rules encoded as relational table rows, the mechanism determines via a single SQL query whether a user who accesses patient data from a specific application is authorized to perform a requested operation on a specified data object. Testing of this query on a moderately large database has demonstrated execution times consistently below 100 ms. The authors include the details of the implementation, including algorithms, examples, and a test database as Supplementary materials.
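
    The mechanism described stores access-control rules as relational table rows and answers each request with a single SQL query. The sketch below illustrates that general idea only; the table layout, wildcard convention, and rule attributes are simplifying assumptions, not the authors' published schema.

```python
# Simplified illustration of access control decided by a single SQL query
# over a rules table; the schema and rule semantics are assumptions, not
# the authors' published design.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE access_rule (
    grantee      TEXT,   -- user the patient shared data with
    application  TEXT,   -- application the request comes from ('*' = any)
    data_class   TEXT,   -- e.g. 'medication', 'lab_result'
    operation    TEXT    -- e.g. 'read', 'write'
);
INSERT INTO access_rule VALUES
    ('dr_smith', '*',      'medication', 'read'),
    ('dr_smith', 'portal', 'lab_result', 'read'),
    ('spouse',   'portal', 'medication', 'read');
""")

def is_authorized(user, app, data_class, operation):
    # One query decides the request: a row must match every attribute,
    # with '*' acting as a wildcard for the application.
    row = con.execute(
        """SELECT 1 FROM access_rule
           WHERE grantee = ? AND (application = ? OR application = '*')
             AND data_class = ? AND operation = ?
           LIMIT 1""",
        (user, app, data_class, operation)).fetchone()
    return row is not None

print(is_authorized("dr_smith", "mobile_app", "medication", "read"))  # True  (wildcard app)
print(is_authorized("spouse",   "mobile_app", "medication", "read"))  # False (app not allowed)
```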

  4. Babesiosis Occurrence among the Elderly in the United States, as Recorded in Large Medicare Databases during 2006–2013

    Science.gov (United States)

    Menis, Mikhail; Forshee, Richard A.; Kumar, Sanjai; McKean, Stephen; Warnock, Rob; Izurieta, Hector S.; Gondalia, Rahul; Johnson, Chris; Mintz, Paul D.; Walderhaug, Mark O.; Worrall, Christopher M.; Kelman, Jeffrey A.; Anderson, Steven A.

    2015-01-01

    Background Human babesiosis, caused by intraerythrocytic protozoan parasites, can be an asymptomatic or mild-to-severe disease that may be fatal. The study objective was to assess babesiosis occurrence among the U.S. elderly Medicare beneficiaries, ages 65 and older, during 2006–2013. Methods Our retrospective claims-based study utilized large Medicare administrative databases. Babesiosis occurrence was ascertained by recorded ICD-9-CM diagnosis code. The study assessed babesiosis occurrence rates (per 100,000 elderly Medicare beneficiaries) overall and by year, age, gender, race, state of residence, and diagnosis months. Results A total of 10,305 elderly Medicare beneficiaries had a recorded babesiosis diagnosis during the eight-year study period, for an overall rate of about 5 per 100,000 persons. Study results showed a significant increase in babesiosis occurrence over time. Babesiosis occurrence was significantly higher among males vs. females and whites vs. non-whites. Conclusion Our study reveals increasing babesiosis occurrence among the U.S. elderly during 2006–2013, with highest rates in the babesiosis-endemic states. The study also shows variation in babesiosis occurrence by age, gender, race, state of residence, and diagnosis months. Overall, our study highlights the importance of large administrative databases in assessing the occurrence of emerging infections in the United States. PMID:26469785

  5. Babesiosis Occurrence among the Elderly in the United States, as Recorded in Large Medicare Databases during 2006-2013.

    Science.gov (United States)

    Menis, Mikhail; Forshee, Richard A; Kumar, Sanjai; McKean, Stephen; Warnock, Rob; Izurieta, Hector S; Gondalia, Rahul; Johnson, Chris; Mintz, Paul D; Walderhaug, Mark O; Worrall, Christopher M; Kelman, Jeffrey A; Anderson, Steven A

    2015-01-01

    Human babesiosis, caused by intraerythrocytic protozoan parasites, can be an asymptomatic or mild-to-severe disease that may be fatal. The study objective was to assess babesiosis occurrence among the U.S. elderly Medicare beneficiaries, ages 65 and older, during 2006-2013. Our retrospective claims-based study utilized large Medicare administrative databases. Babesiosis occurrence was ascertained by recorded ICD-9-CM diagnosis code. The study assessed babesiosis occurrence rates (per 100,000 elderly Medicare beneficiaries) overall and by year, age, gender, race, state of residence, and diagnosis months. A total of 10,305 elderly Medicare beneficiaries had a recorded babesiosis diagnosis during the eight-year study period, for an overall rate of about 5 per 100,000 persons. Study results showed a significant increase in babesiosis occurrence over time. Babesiosis occurrence was significantly higher among males vs. females and whites vs. non-whites. Our study reveals increasing babesiosis occurrence among the U.S. elderly during 2006-2013, with highest rates in the babesiosis-endemic states. The study also shows variation in babesiosis occurrence by age, gender, race, state of residence, and diagnosis months. Overall, our study highlights the importance of large administrative databases in assessing the occurrence of emerging infections in the United States.

  6. Selection of medical diagnostic codes for analysis of electronic patient records. Application to stroke in a primary care database.

    Directory of Open Access Journals (Sweden)

    Martin C Gulliford

    Full Text Available BACKGROUND: Electronic patient records from primary care databases are increasingly used in public health and health services research but methods used to identify cases with disease are not well described. This study aimed to evaluate the relevance of different codes for the identification of acute stroke in a primary care database, and to evaluate trends in the use of different codes over time. METHODS: Data were obtained from the General Practice Research Database from 1997 to 2006. All subjects had a minimum of 24 months of up-to-standard record before the first recorded stroke diagnosis. Initially, we identified stroke cases using a supplemented version of the set of codes for prevalent stroke used by the Office for National Statistics in Key health statistics from general practice 1998 (ONS codes). The ONS codes were then independently reviewed by four raters and a restricted set of 121 codes for 'acute stroke' was identified but the kappa statistic was low at 0.23. RESULTS: Initial extraction of data using the ONS codes gave 48,239 cases of stroke from 1997 to 2006. Application of the restricted set of codes reduced this to 39,424 cases. There were 2,288 cases whose index medical codes were for 'stroke annual review' and 3,112 for 'stroke monitoring'. The frequency of stroke review and monitoring codes as index codes increased from 9 per year in 1997 to 1,612 in 2004, 1,530 in 2005 and 1,424 in 2006. The one year mortality of cases with the restricted set of codes was 29.1% but for 'stroke annual review,' 4.6% and for 'stroke monitoring codes', 5.7%. CONCLUSION: In the analysis of electronic patient records, different medical codes for a single condition may have varying clinical and prognostic significance; utilisation of different medical codes may change over time; researchers with differing clinical or epidemiological experience may have differing interpretations of the relevance of particular codes. There is a need for greater

  7. Quantifying auditory temporal stability in a large database of recorded music.

    Directory of Open Access Journals (Sweden)

    Robert J Ellis

    Full Text Available "Moving to the beat" is both one of the most basic and one of the most profound means by which humans (and a few other species interact with music. Computer algorithms that detect the precise temporal location of beats (i.e., pulses of musical "energy" in recorded music have important practical applications, such as the creation of playlists with a particular tempo for rehabilitation (e.g., rhythmic gait training, exercise (e.g., jogging, or entertainment (e.g., continuous dance mixes. Although several such algorithms return simple point estimates of an audio file's temporal structure (e.g., "average tempo", "time signature", none has sought to quantify the temporal stability of a series of detected beats. Such a method--a "Balanced Evaluation of Auditory Temporal Stability" (BEATS--is proposed here, and is illustrated using the Million Song Dataset (a collection of audio features and music metadata for nearly one million audio files. A publically accessible web interface is also presented, which combines the thresholdable statistics of BEATS with queryable metadata terms, fostering potential avenues of research and facilitating the creation of highly personalized music playlists for clinical or recreational applications.

  8. Locating relevant patient information in electronic health record data using representations of clinical concepts and database structures.

    Science.gov (United States)

    Pan, Xuequn; Cimino, James J

    2014-01-01

    Clinicians and clinical researchers often seek information in electronic health records (EHRs) that are relevant to some concept of interest, such as a disease or finding. The heterogeneous nature of EHRs can complicate retrieval, risking incomplete results. We frame this problem as the presence of two gaps: 1) a gap between clinical concepts and their representations in EHR data and 2) a gap between data representations and their locations within EHR data structures. We bridge these gaps with a knowledge structure that comprises relationships among clinical concepts (including concepts of interest and concepts that may be instantiated in EHR data) and relationships between clinical concepts and the database structures. We make use of available knowledge resources to develop a reproducible, scalable process for creating a knowledge base that can support automated query expansion from a clinical concept to all relevant EHR data.
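
    The knowledge structure described above links a concept of interest to related clinical concepts and then to the database structures where those concepts may be instantiated. A minimal sketch of that two-step expansion follows, with invented concepts, relationships, and table and column names.

```python
# Minimal sketch of concept-to-data query expansion: a concept of interest is
# expanded to related clinical concepts, which are then mapped to the EHR
# tables/columns where they may appear. All names are illustrative assumptions.
related_concepts = {
    "diabetes_mellitus": ["hemoglobin_a1c", "metformin", "diabetic_retinopathy"],
    "hemoglobin_a1c": [],
    "metformin": [],
    "diabetic_retinopathy": [],
}

concept_locations = {
    "diabetes_mellitus":    [("diagnosis", "icd_code")],
    "hemoglobin_a1c":       [("lab_result", "loinc_code")],
    "metformin":            [("medication_order", "rxnorm_code")],
    "diabetic_retinopathy": [("diagnosis", "icd_code"), ("ophthalmology_note", "finding")],
}

def expand(concept):
    """Breadth-first expansion from a concept of interest to EHR data locations."""
    seen, frontier, locations = set(), [concept], []
    while frontier:
        c = frontier.pop(0)
        if c in seen:
            continue
        seen.add(c)
        locations.extend((c, table, column) for table, column in concept_locations.get(c, []))
        frontier.extend(related_concepts.get(c, []))
    return locations

for concept, table, column in expand("diabetes_mellitus"):
    print(f"{concept:>22} -> {table}.{column}")
```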

  9. Subject and authorship of records related to the Organization for Tropical Studies (OTS) in BINABITROP, a comprehensive database about Costa Rican biology.

    Science.gov (United States)

    Monge-Nájera, Julián; Nielsen-Muñoz, Vanessa; Azofeifa-Mora, Ana Beatriz

    2013-06-01

    BINABITROP is a bibliographical database of more than 38,000 records about the ecosystems and organisms of Costa Rica. In contrast with commercial databases, such as Web of Knowledge and Scopus, which exclude most of the scientific journals published in tropical countries, BINABITROP is a comprehensive record of knowledge on the tropical ecosystems and organisms of Costa Rica. We analyzed its contents for three sites (La Selva, Palo Verde and Las Cruces) and recorded scientific field, taxonomic group and authorship. We found that most records dealt with ecology and systematics, and that most authors published only one article in the study period (1963-2011). Most research was published in four journals: Biotropica, Revista de Biología Tropical/ International Journal of Tropical Biology and Conservation, Zootaxa and Brenesia. This may be the first study of such a comprehensive database of tropical biology literature.

  10. The National Deep-Sea Coral and Sponge Database: A Comprehensive Resource for United States Deep-Sea Coral and Sponge Records

    Science.gov (United States)

    Dornback, M.; Hourigan, T.; Etnoyer, P.; McGuinn, R.; Cross, S. L.

    2014-12-01

    Research on deep-sea corals has expanded rapidly over the last two decades, as scientists began to realize their value as long-lived structural components of high biodiversity habitats and archives of environmental information. The NOAA Deep Sea Coral Research and Technology Program's National Database for Deep-Sea Corals and Sponges is a comprehensive resource for georeferenced data on these organisms in U.S. waters. The National Database currently includes more than 220,000 deep-sea coral records representing approximately 880 unique species. Database records from museum archives, commercial and scientific bycatch, and from journal publications provide baseline information with relatively coarse spatial resolution dating back as far as 1842. These data are complemented by modern, in-situ submersible observations with high spatial resolution, from surveys conducted by NOAA and NOAA partners. Management of high volumes of modern high-resolution observational data can be challenging. NOAA is working with our data partners to incorporate this occurrence data into the National Database, along with images and associated information related to geoposition, time, biology, taxonomy, environment, provenance, and accuracy. NOAA is also working to link associated datasets collected by our program's research, to properly archive them to the NOAA National Data Centers, to build a robust metadata record, and to establish a standard protocol to simplify the process. Access to the National Database is provided through an online mapping portal. The map displays point-based records from the database. Records can be refined by taxon, region, time, and depth. The queries and extent used to view the map can also be used to download subsets of the database. The database, map, and website are already in use by NOAA, regional fishery management councils, and regional ocean planning bodies, but we envision it as a model that can expand to accommodate data on a global scale.

  11. Intended Use of a Building in Terms of Updating the Cadastral Database and Harmonizing the Data with other Public Records

    Directory of Open Access Journals (Sweden)

    Buśko Małgorzata

    2017-06-01

    Full Text Available According to the original wording of the Regulation on the register of land and buildings of 2001, in the real estate cadastre there was one attribute associated with the use of a building structure - its intended use, which was applicable until the amendment to the Regulation was introduced in 2013. Then, additional attributes were added, i.e. the type of the building according to the Classification of Fixed Assets (KST), the class of the building according to the Polish Classification of Types of Constructions (PKOB), and, at the same time, the main functional use and other functions of the building remained in the Regulation as well. The record data on buildings are captured for the real estate cadastre from other data sets, for example those maintained by architectural and construction authorities. At the same time, the data contained in the cadastre, after they have been entered or changed in the database, are transferred to other registers, such as tax records, or land and mortgage court registers. This study is the result of the analysis of the laws applicable to the specific units and registers. A list of discrepancies in the attributes occurring in the different registers was prepared. The practical part of the study paid particular attention to the legal bases and procedures for entering the function of a building in the real estate cadastre, which is extremely significant, as it is the attribute determining the property tax basis.

  12. Use of information on disease diagnoses from databases for animal health economic, welfare and food safety purposes: strengths and limitations of recordings.

    Science.gov (United States)

    Houe, Hans; Gardner, Ian Andrew; Nielsen, Liza Rosenbaum

    2011-01-01

    Many animal health, welfare and food safety databases include data on clinical and test-based disease diagnoses. However, the circumstances and constraints for establishing the diagnoses vary considerably among databases. Therefore results based on different databases are difficult to compare and compilation of data in order to perform meta-analysis is almost impossible. Nevertheless, diagnostic information collected either routinely or in research projects is valuable in cross comparisons between databases, but there is a need for improved transparency and documentation of the data and the performance characteristics of tests used to establish diagnoses. The objective of this paper is to outline the circumstances and constraints for recording of disease diagnoses in different types of databases, and to discuss these in the context of disease diagnoses when using them for additional purposes, including research. Finally some limitations and recommendations for use of data and for recording of diagnostic information in the future are given. It is concluded that many research questions have such a specific objective that investigators need to collect their own data. However, there are also examples, where a minimal amount of extra information or continued validation could make sufficient improvement of secondary data to be used for other purposes. Regardless, researchers should always carefully evaluate the opportunities and constraints when they decide to use secondary data. If the data in the existing databases are not sufficiently valid, researchers may have to collect their own data, but improved recording of diagnostic data may improve the usefulness of secondary diagnostic data in the future.

  13. The Design and Optimization of a Student Records Management System Database

    Institute of Scientific and Technical Information of China (English)

    容湘萍

    2014-01-01

    Taking the student records management system being developed at the author's school as its subject, this paper describes the role of the database within the overall system, focuses on the design methods used for the system database, and explores ways to optimize it.

  14. Research topics on psychosocial aspects of sport in the PsycINFO database (1887-2001)

    Directory of Open Access Journals (Sweden)

    Isabel Castillo

    2005-01-01

    Full Text Available This paper analyzes the interest in, and representativeness of, research topics on the psychosocial aspects of sport, as well as the main scientific journals in which they are published. A bibliometric analysis of the PsycINFO database was carried out, covering the period from 1887 to October 2001. The results show great interest in psychosocial research within the field of sport psychology, as well as a constant increase in the number of published works. Among the most studied psychosocial topics are those focused on sport participation, motivation, gender and sex, emotions, groups, and attitudes. More than 60% of the works published in the main scientific journals in the field of sport psychology deal with psychosocial aspects of sport, although some of them are also of interest in other areas of psychology.

  15. A Performance Analysis of Modified Mid-Square and Mid-Product Techniques to Minimize the Redundancy for Retrieval of Database Records

    Directory of Open Access Journals (Sweden)

    K. M. Sundaram

    2010-01-01

    Full Text Available Problem statement: An important tool in the field of education methodology is examination. As far as the teaching-learning-evaluation process is concerned, the major task associated with the objective-type examination system is the administration of question paper setting. The lack of expertise and time are the major constraints encountered in the task of setting objective-type question papers. During retrieval of records from an objective-type question bank, redundancy may occur. To solve this problem, an approach is needed to retrieve records from a database without redundancy. Approach: This study discussed the task of generating the required collection of questions from a question bank with as little redundancy as possible in the retrieval of records, using the mid-square and mid-product techniques for random number generation. Results: A modified approach to generating random numbers was identified and used to retrieve records from a database. Conclusion: The suggested modified approach was more suitable for retrieving records, even from a database of smaller size.
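
    The mid-square and mid-product techniques generate pseudo-random numbers by squaring (or multiplying) seed values and keeping the middle digits, which can then be mapped onto record identifiers while skipping repeats. A sketch of the classic versions of both generators follows; the seeds, digit width, and duplicate handling are illustrative choices rather than the paper's modified algorithms.

```python
# Minimal sketch of the classic mid-square and mid-product generators, used
# here to draw question-bank record numbers while skipping repeats.
def mid_square(seed, width=4):
    """Yield successive mid-square values, keeping the middle `width` digits."""
    x = seed
    while True:
        sq = str(x * x).zfill(2 * width)
        mid = len(sq) // 2
        x = int(sq[mid - width // 2: mid + width // 2])
        yield x

def mid_product(seed1, seed2, width=4):
    """Yield successive mid-product values, keeping the middle `width` digits."""
    a, b = seed1, seed2
    while True:
        prod = str(a * b).zfill(2 * width)
        mid = len(prod) // 2
        a, b = b, int(prod[mid - width // 2: mid + width // 2])
        yield b

def pick_records(generator, n_records, n_wanted, max_draws=10_000):
    """Map generated numbers onto record IDs, skipping duplicates."""
    chosen = []
    for _, value in zip(range(max_draws), generator):
        rec_id = value % n_records
        if rec_id not in chosen:
            chosen.append(rec_id)
        if len(chosen) == n_wanted:
            break
    return chosen

print(pick_records(mid_square(5735), n_records=500, n_wanted=10))
print(pick_records(mid_product(5735, 6927), n_records=500, n_wanted=10))
```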

  16. Water and carbon stable isotope records from natural archives: a new database and interactive online platform for data browsing, visualizing and downloading

    Science.gov (United States)

    Bolliet, Timothé; Brockmann, Patrick; Masson-Delmotte, Valérie; Bassinot, Franck; Daux, Valérie; Genty, Dominique; Landais, Amaelle; Lavrieux, Marlène; Michel, Elisabeth; Ortega, Pablo; Risi, Camille; Roche, Didier M.; Vimeux, Françoise; Waelbroeck, Claire

    2016-08-01

    Past climate is an important benchmark to assess the ability of climate models to simulate key processes and feedbacks. Numerous proxy records exist for stable isotopes of water and/or carbon, which are also implemented inside the components of a growing number of Earth system models. Model-data comparisons can help to constrain the uncertainties associated with transfer functions. This motivates the need to produce a comprehensive compilation of different proxy sources. We have put together a global database of proxy records of oxygen (δ18O), hydrogen (δD) and carbon (δ13C) stable isotopes from different archives: ocean and lake sediments, corals, ice cores, speleothems and tree-ring cellulose. Source records were obtained from the georeferenced open access PANGAEA and NOAA libraries, complemented by additional data obtained from a literature survey. About 3000 source records were screened for chronological information and temporal resolution of proxy records. Altogether, this database consists of hundreds of dated δ18O, δ13C and δD records in a standardized simple text format, complemented with a metadata Excel catalog. A quality control flag was implemented to describe age markers and inform on chronological uncertainty. This compilation effort highlights the need to homogenize and structure the format of datasets and chronological information as well as enhance the distribution of published datasets that are currently highly fragmented and scattered. We also provide an online portal based on the records included in this database with an intuitive and interactive platform (http://climateproxiesfinder.ipsl.fr/), allowing one to easily select, visualize and download subsets of the homogeneously formatted records that constitute this database, following a choice of search criteria, and to upload new datasets. In the last part, we illustrate the type of application allowed by our database by comparing several key periods highly investigated by the

  17. Database Manager

    Science.gov (United States)

    Martin, Andrew

    2010-01-01

    It is normal practice today for organizations to store large quantities of records of related information as computer-based files or databases. Purposeful information is retrieved by performing queries on the data sets. The purpose of DATABASE MANAGER is to communicate to students the method by which the computer performs these queries. This…

  18. Childhood immunization rates in rural Intibucá, Honduras: an analysis of a local database tool and community health center records for assessing and improving vaccine coverage

    Directory of Open Access Journals (Sweden)

    He Yuan

    2012-12-01

    Full Text Available Abstract Background Vaccines are highly effective at preventing infectious diseases in children, and prevention is especially important in resource-limited countries where treatment is difficult to access. In Honduras, the World Health Organization (WHO) reports very high immunization rates in children. To determine whether or not these estimates accurately depict the immunization coverage in non-urban regions of the country, we compared the WHO data to immunization rates obtained from a local database tool and community health center records in rural Intibucá, Honduras. Methods We used data from two sources to comprehensively evaluate immunization rates in the area: (1) census data from a local database and (2) immunization data collected at health centers. We compared these rates using logistic regression, and we compared them to publicly available WHO-reported estimates using confidence interval inclusion. Results We found that mean immunization rates for each vaccine were high (range 84.4 to 98.8 percent), but rates recorded at the health centers were significantly higher than those reported from the census data (p≤0.001). Combining the results from both databases, the mean rates of four out of five vaccines were less than WHO-reported rates (p=0.03). The rates by individual vaccine were similar across townships (p>0.05), except for diphtheria/tetanus/pertussis vaccine (p=0.02) and oral polio vaccine. Conclusions Immunization rates in Honduras were high across data sources, though most of the rates recorded in rural Honduras were less than WHO-reported rates. Despite geographical difficulties and barriers to access, the local database and Honduran community health workers have developed a thorough system for ensuring that children receive their immunizations on time. The successful integration of community health workers and a database within the Honduran decentralized health system may serve as a model for other immunization programs in

  19. Recording Database Searches for Systematic Reviews - What is the Value of Adding a Narrative to Peer-Review Checklists? A Case Study of NICE Interventional Procedures Guidance

    Directory of Open Access Journals (Sweden)

    Jenny Craven

    2011-01-01

    Full Text Available This paper discusses the value of open and transparent methods for recording systematic database search strategies, showing how they have been applied at the National Institute for Health and Clinical Excellence (NICE; see Appendix C for definitions) in the United Kingdom (UK). Objective – The objectives are to: (1) Discuss the value of search strategy recording methods. (2) Assess any limitations to the practical application of a checklist approach. (3) Make recommendations for recording systematic database searches. Methods – The procedures for recording searches for Interventional Procedures Guidance at NICE were examined. A sample of current methods for recording systematic searches identified in the literature was compared to the NICE processes. The case study analyses the search conducted for evidence about an interventional procedure and shows the practical issues involved in recording the database strategies. The case study explores why relevant papers were not retrieved by a search strategy meeting all of the criteria on the checklist used to peer review it. The evidence was required for guidance on non-rigid stabilisation techniques for the treatment of low back pain. Results – The analysis shows that amending the MEDLINE strategy to make it more sensitive would have increased its yield by 6614 articles. Examination of the search records together with correspondence between the analyst and the searcher reveals the peer reviewer had approved the search because its sensitivity was appropriate for the purpose of producing Interventional Procedures Guidance. The case study demonstrates the limitations of relying on a checklist to ensure the quality of a database search without having any contextual information. Conclusion – It is difficult for the peer reviewer to assess the subjective elements of a search without knowing why it has a particular structure or what the searcher intended. There is a risk that the peer reviewer will concentrate on

  20. The SEMAINE Database: Annotated Multimodal Records of Emotionally Colored Conversations between a Person and a Limited Agent

    NARCIS (Netherlands)

    McKeown, G.; Valstar, M.F.; Cowie, R.; Pantic, Maja; Schroeder, M.

    SEMAINE has created a large audiovisual database as a part of an iterative approach to building Sensitive Artificial Listener (SAL) agents that can engage a person in a sustained, emotionally colored conversation. Data used to build the agents came from interactions between users and an "operator"

  1. Probabilistic Databases

    CERN Document Server

    Suciu, Dan; Koch, Christop

    2011-01-01

    Probabilistic databases are databases where the value of some attributes or the presence of some records are uncertain and known only with some probability. Applications in many areas such as information extraction, RFID and scientific data management, data cleaning, data integration, and financial risk assessment produce large volumes of uncertain data, which are best modeled and processed by a probabilistic database. This book presents the state of the art in representation formalisms and query processing techniques for probabilistic data. It starts by discussing the basic principles for rep

  2. 77 FR 3455 - Privacy Act of 1974; System of Records-Migrant Education Bypass Program Student Database

    Science.gov (United States)

    2012-01-24

    ... the bypass would add substantially to the children's welfare. The target states no longer choose to... information on eligible migratory children in order to: (1) Verify children's eligibility for MEP services; (2... collecting, maintaining, and transferring children's educational records, that migratory children in the...

  3. Using electronic medical records to determine prevalence and treatment of mental disorders in primary care: a database study

    OpenAIRE

    Gleeson, M; Hannigan, Ailish; Jamali, R; Su Lin, K; Klimas, Jan; Mannix, M; Nathan, Yoga; O'Connor, R; O'Gorman, Clodagh S.; Dunne, Colum; Meagher, David; Cullen, Walter

    2015-01-01

    peer-reviewed Objectives: With prevention and treatment of mental disorders a challenge for primary care and increasing capability of electronic medical records (EMRs) to facilitate research in practice, we aim to determine the prevalence and treatment of mental disorders by using routinely collected clinical data contained in EMRs. Methods: We reviewed EMRs of patients randomly sampled from seven general practices, by piloting a study instrument and extracting data on menta...

  4. Reclink: an application for database linkage implementing the probabilistic record linkage method

    Directory of Open Access Journals (Sweden)

    Kenneth R. de Camargo Jr.

    2000-06-01

    Full Text Available This paper presents a system for database linkage based on the probabilistic record linkage technique, developed in the C++ language with the Borland C++ Builder version 3.0 programming environment. The system was tested on data sources of different sizes and was evaluated in terms of processing time and sensitivity for identifying true record pairs. Significantly less time was spent on record processing when the program was used than when linkage was performed manually, especially when larger databases were involved. Manual and automatic processes had equivalent sensitivities when databases with fewer records were used; however, as the number of records grew, a clear reduction in sensitivity was observed in the manual process, but not in the automatic one. Although still in its initial stage of development, the system showed good performance in terms of both speed and sensitivity. While the performance of the algorithms used was satisfactory, the objective is to evaluate other routines, seeking to improve the system's performance.
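
    Probabilistic record linkage scores a candidate pair of records by summing agreement and disagreement weights over the compared fields and declaring a link when the total exceeds a threshold. A toy sketch of that scoring follows; the fields, m/u probabilities, and threshold are invented for illustration and are not Reclink's parameters.

```python
# Toy sketch of probabilistic record linkage: each compared field contributes
# an agreement or disagreement weight, and pairs above a threshold are treated
# as matches.
from math import log2

# (m, u): probability a field agrees among true matches / among non-matches.
field_params = {
    "surname":      (0.95, 0.01),
    "birth_year":   (0.90, 0.05),
    "municipality": (0.85, 0.10),
}

def match_score(rec_a, rec_b):
    score = 0.0
    for field, (m, u) in field_params.items():
        if rec_a.get(field) == rec_b.get(field):
            score += log2(m / u)              # agreement weight
        else:
            score += log2((1 - m) / (1 - u))  # disagreement weight
    return score

a = {"surname": "Silva", "birth_year": 1970, "municipality": "Rio de Janeiro"}
b = {"surname": "Silva", "birth_year": 1970, "municipality": "Niteroi"}

score = match_score(a, b)
print(f"score = {score:.2f} -> {'link' if score > 5 else 'non-link'}")
```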

  5. Estimation of caffeine intake in Japanese adults using 16 d weighed diet records based on a food composition database newly developed for Japanese populations.

    Science.gov (United States)

    Yamada, Mai; Sasaki, Satoshi; Murakami, Kentaro; Takahashi, Yoshiko; Okubo, Hitomi; Hirota, Naoko; Notsu, Akiko; Todoriki, Hidemi; Miura, Ayako; Fukui, Mitsuru; Date, Chigusa

    2010-05-01

    Previous studies in Western populations have linked caffeine intake with health status. While detailed dietary assessment studies in these populations have shown that the main contributors to caffeine intake are coffee and tea, the wide consumption of Japanese and Chinese teas in Japan suggests that sources of intake in Japan may differ from those in Western populations. Among these teas, moreover, caffeine content varies widely among the different forms consumed (brewed, canned or bottled), suggesting the need for detailed dietary assessment in estimating intake in Japanese populations. Here, because a caffeine composition database or data obtained from detailed dietary assessment have not been available, we developed a database for caffeine content in Japanese foods and beverages, and then used it to estimate intake in a Japanese population. The caffeine food composition database was developed using analytic values from the literature, 16 d weighed diet records were collected, and caffeine intake was estimated from the 16 d weighed diet records. Four areas in Japan, Osaka (Osaka City), Okinawa (Ginowan City), Nagano (Matsumoto City) and Tottori (Kurayoshi City), between November 2002 and September 2003. Two hundred and thirty Japanese adults aged 30-69 years. Mean caffeine intake was 256.2 mg/d for women and 268.3 mg/d for men. The major contributors to intake were Japanese and Chinese teas and coffee (47 % each). Caffeine intake above 400 mg/d, suggested in reviews to possibly have negative health effects, was seen in 11 % of women and 15 % of men. In this Japanese population, caffeine intake was comparable to the estimated values reported in Western populations.
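
    The core of such an estimate is a join between each line of the weighed diet record and the caffeine composition table, summed over the recording days. The food codes, caffeine contents and portions below are invented for illustration and are not values from the study's composition database.

      # Caffeine content per 100 g of food or beverage (mg); illustrative values only.
      caffeine_per_100g = {
          "brewed_coffee": 60.0,
          "sencha_green_tea": 20.0,
          "oolong_tea": 20.0,
          "cola": 10.0,
      }

      # One participant's weighed diet record: (day, food_code, grams consumed).
      diet_record = [
          (1, "brewed_coffee", 150.0),
          (1, "sencha_green_tea", 200.0),
          (2, "oolong_tea", 300.0),
          (2, "cola", 350.0),
      ]

      def mean_daily_caffeine(record, composition, n_days):
          # Total caffeine over all recorded days divided by the number of survey days.
          total_mg = sum(grams * composition[food] / 100.0 for _, food, grams in record)
          return total_mg / n_days

      print(round(mean_daily_caffeine(diet_record, caffeine_per_100g, n_days=2), 1), "mg/d")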

  6. Comparing shingles incidence and complication rates from medical record review and administrative database estimates: how close are they?

    Science.gov (United States)

    Yawn, Barbara P; Wollan, Peter; St Sauver, Jennifer

    2011-11-01

    Accurate rates of herpes zoster incidence and complications have become of greater interest as studies have suggested an increasing temporal trend in incidence rates across all age groups and long-term follow-up studies of vaccine effectiveness are required by the Food and Drug Administration. This study compares the results obtained from the most commonly used method to obtain herpes zoster data (rates obtained from administrative data) with results obtained when administrative data are supplemented by medical record review. Administrative billing code data identified 1,959 cases of herpes zoster in Olmsted County, Minnesota, adults between January 1, 1996, and December 31, 2001. Of those 1,959 cases, 1,669 (85.2%) could be confirmed by medical record review, a decrease in the incidence rate of 14.8%, or 0.61/1,000 person-years when adjusted to the US adult population. Complication rates were also significantly different between the 2 methods. It is not clear whether the 15% decrease in incidence rates would be seen in every administrative data set, or whether the lack of confirmation of cases varies in validity and reproducibility between data sets, making estimates of temporal trends and pre-/post-vaccine rates difficult to compare across data resources.

  7. Analysis of 50-y record of surface (137)Cs concentrations in the global ocean using the HAM-global database.

    Science.gov (United States)

    Inomata, Yayoi; Aoyama, Michio; Hirose, Katsumi

    2009-01-01

    We investigated spatial and temporal variations in (137)Cs concentrations in the surface waters of the global ocean for the period from 1957 to 2005 using the "HAM database - a global version". Based on the 0.5-y average value of (137)Cs concentrations in the surface water in each sea area, we classified the temporal variations into four types. (1) In the North Pacific Ocean where there was high fallout from atmospheric nuclear weapons tests, the rates of decrease in the (137)Cs concentrations changed over the five decades: the rate of decrease from the 1950s to the 1970s was much faster than that after the 1970s, and the (137)Cs concentrations were almost constant after the 1990s. Latitudinal differences in (137)Cs concentrations in the North Pacific Ocean became small with time. (2) In the equatorial Pacific and Indian Oceans, the (137)Cs concentrations varied within a constant range in the 1970s and 1980s, suggesting the advection of (137)Cs from areas of high global fallout in the mid-latitudes of the North Pacific Ocean. (3) In the eastern South Pacific and Atlantic Oceans (south of 40 degrees S), the concentrations decreased exponentially over the five decades. (4) In the Arctic and North Atlantic Oceans, including marginal seas, (137)Cs concentrations were strongly controlled by discharge from nuclear reprocessing plants after the late 1970s. The apparent half-residence times of (137)Cs in the surface waters of the global ocean from 1970 to 2005 ranged from 4.5 to 36.8 years. The apparent half-residence times were longer in the equatorial region and shorter in the higher latitudes. There was no notable difference between the latitudinal distributions of the apparent half-residence times in the Pacific and Indian Oceans. These results suggest that (137)Cs in the North Pacific Ocean is transported to the equatorial, South Pacific, and Indian Oceans by the oceanic circulation.
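
    An apparent half-residence time of this kind is typically obtained by fitting an exponential decline to the surface concentration time series, i.e. fitting a straight line to ln(C) versus time. The sketch below uses a synthetic series rather than HAM-database values.

      import math

      # Synthetic yearly surface 137Cs concentrations (arbitrary units), declining exponentially.
      years = list(range(1970, 2006, 5))
      assumed_half_residence = 12.0  # years, used only to generate the synthetic series
      conc = [10.0 * 0.5 ** ((y - years[0]) / assumed_half_residence) for y in years]

      # Least-squares fit of ln(C) = a + b*t; the slope b is the (negative) apparent removal rate.
      t = [y - years[0] for y in years]
      ln_c = [math.log(c) for c in conc]
      t_mean = sum(t) / len(t)
      c_mean = sum(ln_c) / len(ln_c)
      slope = sum((ti - t_mean) * (ci - c_mean) for ti, ci in zip(t, ln_c)) / \
              sum((ti - t_mean) ** 2 for ti in t)
      half_residence = math.log(2) / -slope  # apparent half-residence time in years
      print(round(half_residence, 1), "years")  # recovers ~12.0 for the synthetic input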

  8. Establishing a Dynamic Database of Blue and Fin Whale Locations from Recordings at the IMS CTBTO hydro-acoustic network. The Baleakanta Project

    Science.gov (United States)

    Le Bras, R. J.; Kuzma, H.

    2013-12-01

    Falling as they do into the frequency range of continuously recording hydrophones (15-100Hz), blue and fin whale songs are a significant source of noise on the hydro-acoustic monitoring array of the International Monitoring System (IMS) of the Comprehensive Nuclear Test Ban Treaty Organization (CTBTO). One researcher's noise, however, can be a very interesting signal in another field of study. The aim of the Baleakanta Project (www.baleakanta.org) is to flag and catalogue these songs, using the azimuth and slowness of the signal measured at multiple hydrophones to solve for the approximate location of singing whales. Applying techniques borrowed from human speaker identification, it may even be possible to recognize the songs of particular individuals. The result will be a dynamic database of whale locations and songs with known individuals noted. This database will be of great value to marine biologists studying cetaceans, as there is no existing dataset which spans the globe over many years (more than 15 years of data have been collected by the IMS). Current whale song datasets from other sources are limited to detections made on small, temporary listening devices. The IMS song catalogue will make it possible to study at least some aspects of the global migration patterns of whales, changes in their songs over time, and the habits of individuals. It is believed that about 10 blue whale 'cultures' exist with distinct vocal patterns; the IMS song catalogue will test that number. Results and a subset of the database (delayed in time to mitigate worries over whaling and harassment of the animals) will be released over the web. A traveling museum exhibit is planned which will not only educate the public about whale songs, but will also make the CTBTO and its achievements more widely known. As a testament to the public's enduring fascination with whales, initial funding for this project has been crowd-sourced through an internet campaign.
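
    Solving for a singing whale's approximate position from the azimuths measured at two or more stations amounts to intersecting bearing lines. The flat-earth sketch below, with made-up station coordinates and azimuths, shows the geometric idea only; a real IMS solution would work on the sphere and also use the measured slowness.

      import math

      def bearing_intersection(p1, az1_deg, p2, az2_deg):
          # Intersect two bearing lines (x east, y north; azimuth clockwise from north).
          d1 = (math.sin(math.radians(az1_deg)), math.cos(math.radians(az1_deg)))
          d2 = (math.sin(math.radians(az2_deg)), math.cos(math.radians(az2_deg)))
          # Solve p1 + s*d1 = p2 + t*d2 for s using Cramer's rule.
          det = d1[0] * (-d2[1]) - d1[1] * (-d2[0])
          if abs(det) < 1e-9:
              raise ValueError("bearings are parallel; no unique intersection")
          rx, ry = p2[0] - p1[0], p2[1] - p1[1]
          s = (rx * (-d2[1]) - ry * (-d2[0])) / det
          return (p1[0] + s * d1[0], p1[1] + s * d1[1])

      # Two hypothetical hydrophone stations (coordinates in km) and their measured azimuths.
      station_a, station_b = (0.0, 0.0), (200.0, 0.0)
      print(bearing_intersection(station_a, 45.0, station_b, 315.0))  # roughly (100.0, 100.0)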

  9. Estimation of habitual iodine intake in Japanese adults using 16 d diet records over four seasons with a newly developed food composition database for iodine.

    Science.gov (United States)

    Katagiri, Ryoko; Asakura, Keiko; Sasaki, Satoshi; Hirota, Naoko; Notsu, Akiko; Miura, Ayako; Todoriki, Hidemi; Fukui, Mitsuru; Date, Chigusa

    2015-08-28

    Although habitual seaweed consumption in Japan would suggest that iodine intake in Japanese is exceptionally high, intake data from diet records are limited. In the present study, we developed a composition database of iodine and estimated the habitual intake of iodine among Japanese adults. Missing values for iodine content in the existing composition table were imputed based on established criteria. 16 d diet records (4 d over four seasons) from adults (120 women aged 30-69 years and 120 men aged 30-76 years) living in Japan were collected, and iodine intake was estimated. Habitual intake was estimated with the Best-power method. In total, 995 food items were imputed. The distribution of iodine intake in 24 h was highly skewed, and approximately 55 % of the 24 h values involved iodine-rich foods (kelp or soup stock) consumed on one or more of the sixteen survey days. The mean (median) habitual iodine intake was 1414 (857) μg/d for women and 1572 (1031) μg/d for men. Older participants had higher intake than younger participants. The major contributors to iodine intake were kelp (60 %) and soup stock (30 %). Habitual iodine intake among Japanese was sufficient or higher than the tolerable upper intake level, particularly in older generations. The association between high iodine intake, such as that observed in the present study, and thyroid disease requires further study.
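
    The imputation step, filling missing iodine contents in the composition table according to fixed criteria, can be sketched as below. The food items, groups, rules and values are illustrative assumptions, not those actually used to build the database.

      import statistics

      # Composition table with some missing iodine values (μg per 100 g); None means missing.
      table = {
          "nori_toasted":     {"group": "seaweed",    "iodine": 2100.0},
          "nori_seasoned":    {"group": "seaweed",    "iodine": None},
          "wakame_dried":     {"group": "seaweed",    "iodine": 8500.0},
          "soup_stock_kelp":  {"group": "soup_stock", "iodine": 5300.0},
          "soup_stock_mixed": {"group": "soup_stock", "iodine": None},
      }
      # Rule 1: copy the value of a designated similar food, if one is defined.
      similar_food = {"nori_seasoned": "nori_toasted"}

      def impute(table, similar_food):
          known_by_group = {}
          for item in table.values():
              if item["iodine"] is not None:
                  known_by_group.setdefault(item["group"], []).append(item["iodine"])
          for name, item in table.items():
              if item["iodine"] is None:
                  donor = similar_food.get(name)
                  if donor and table[donor]["iodine"] is not None:
                      item["iodine"] = table[donor]["iodine"]  # rule 1: similar food
                  else:
                      item["iodine"] = statistics.median(known_by_group[item["group"]])  # rule 2: group median
          return table

      for name, item in impute(table, similar_food).items():
          print(name, item["iodine"])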

  10. Automating The Work at The Skin and Allergy Private Clinic : A Case Study on Using an Imaging Database to Manage Patients Records

    Science.gov (United States)

    Alghalayini, Mohammad Abdulrahman

    Today, many institutions and organizations face serious problems due to the tremendously increasing volume of documents, which in turn creates storage and retrieval problems as space and efficiency requirements continue to grow. The problem becomes more complex over time as the size and number of documents in an organization increase; therefore, there is a growing worldwide demand to address it. This demand and challenge can be met by converting the large volume of paper documents to images, using a process that enables specialized document imaging staff to select the most suitable image type and scanning resolution when documents need to be stored as images. This document management process, if applied, addresses the problem of image storage type and size to some extent. In this paper, we present a case study of an applied process to manage the registration of new patients in a private clinic and to optimize follow-up of registered patients once their information records are stored in an imaging database system; through this automation approach, we streamline the work process and maximize the efficiency of the Skin and Allergy Clinic's tasks.

  11. Investigating the Quality and Maintenance Issues of Bibliographic Records Provided by the e-Book Supply Chain: Using the Operation of the Taiwan Academic E-Book &Database Consortium as an Example

    Directory of Open Access Journals (Sweden)

    Chao-Chen Chen

    2014-04-01

    Full Text Available It is an important trend for libraries to expand the use of bibliographic records from the supply chain. However, what are the sources of vendors' bibliographic records? What is their quality? How do libraries deal with these bibliographic records? Are they satisfied with them? This study first drew 1,080 bibliographic records from 29 e-book products and used MarcEdit to check their quality; 14% were found to contain errors. Secondly, this study interviewed 12 vendors and found that bibliographic records of western books were mostly copied from OCLC, that some came from the original vendors, who commissioned outsourcing companies to do the cataloguing, and that others were catalogued by Taiwanese manufacturers themselves. Lastly, this study sent questionnaires to Taiwan Academic E-Book & Database Consortium members to survey their satisfaction with the bibliographic records provided by vendors and their recommendations thereof. This study shows that most libraries have loaded the bibliographic records of the e-books into their OPAC systems and are generally satisfied with the bibliographic records provided by the vendors, though opinions vary on the accuracy of these records.
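
    A quality check of the kind MarcEdit performs, flagging records with missing or malformed mandatory fields, can be approximated in a few lines. The tags, rules and sample records below are a simplified, hypothetical subset of real MARC validation.

      # Each record is a dict of MARC-like tag -> value; the values are invented examples.
      records = [
          {"020": "9789571234567", "100": "Chen, C.", "245": "An e-book title", "260": "Taipei, 2014"},
          {"020": "978-957-12",    "245": "Untitled e-book"},   # malformed ISBN, no main entry
          {"100": "Smith, J.",     "245": ""},                  # empty title
      ]

      def check_record(rec):
          # Return a list of problems found in one bibliographic record.
          problems = []
          if not rec.get("245"):
              problems.append("missing or empty title (245)")
          if "100" not in rec and "110" not in rec:
              problems.append("no main entry (100/110)")
          isbn = rec.get("020", "").replace("-", "")
          if isbn and not (isbn.isdigit() and len(isbn) in (10, 13)):
              problems.append("malformed ISBN (020)")
          return problems

      flagged = [(i, problems) for i, rec in enumerate(records) if (problems := check_record(rec))]
      print(flagged)
      print(f"{len(flagged) / len(records):.0%} of records contain errors")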

  12. The role for 'reminders' in dental traumatology: 4. The use of a computer database for recording dento-alveolar trauma in comparison to unstructured and structured paper-based methods.

    Science.gov (United States)

    Day, Peter F; Duggal, Monty S; Kiefte, Barbera; Balmer, Richard C; Roberts, Graham J

    2006-10-01

    The aims of this study were to investigate the effectiveness of a computer database (CD) developed for this study, a plain-paper unstructured history (USH) and structured histories (SH) for the recording of important prognostic factors for simulated dento-alveolar trauma. Twelve vocational trainees, seven postgraduates in paediatric dentistry and 24 general dental practitioners were randomly assigned to using USH, SH or CD. Each dentist visited a series of simulated trauma cases (with models, photos, radiographs and actors) and was asked to record important prognostic factors for each injury and make a diagnosis. There were a total of 243 dentist contacts with the trauma stations. The average percentage of important prognostic factors recorded per station was: USH 53%, SH 75.3% and CD 58.6%. SH was significantly better than the other two methods for the simulated trauma cases used in this study. At present, the introduction of our CD for recording of trauma is not justified without significant modification.

  13. Fine Arts Database (FAD)

    Data.gov (United States)

    General Services Administration — The Fine Arts Database records information on federally owned art in the control of the GSA; this includes the location, current condition and information on artists.

  14. A Quality System Database

    Science.gov (United States)

    Snell, William H.; Turner, Anne M.; Gifford, Luther; Stites, William

    2010-01-01

    A quality system database (QSD), and software to administer the database, were developed to support recording of administrative nonconformance activities that involve requirements for documentation of corrective and/or preventive actions, which can include ISO 9000 internal quality audits and customer complaints.

  15. Exploiting the potential of large databases of electronic health records for research using rapid search algorithms and an intuitive query interface.

    Science.gov (United States)

    Tate, A Rosemary; Beloff, Natalia; Al-Radwan, Balques; Wickson, Joss; Puri, Shivani; Williams, Timothy; Van Staa, Tjeerd; Bleach, Adrian

    2014-01-01

    UK primary care databases, which contain diagnostic, demographic and prescribing information for millions of patients geographically representative of the UK, represent a significant resource for health services and clinical research. They can be used to identify patients with a specified disease or condition (phenotyping) and to investigate patterns of diagnosis and symptoms. Currently, extracting such information manually is time-consuming and requires considerable expertise. In order to exploit more fully the potential of these large and complex databases, our interdisciplinary team developed generic methods allowing access to different types of user. Using the Clinical Practice Research Datalink database, we have developed an online user-focused system (TrialViz), which enables users interactively to select suitable medical general practices based on two criteria: suitability of the patient base for the intended study (phenotyping) and measures of data quality. An end-to-end system, underpinned by an innovative search algorithm, allows the user to extract information in near real-time via an intuitive query interface and to explore this information using interactive visualization tools. A usability evaluation of this system produced positive results. We present the challenges and results in the development of TrialViz and our plans for its extension for wider applications of clinical research. Our fast search algorithms and simple query algorithms represent a significant advance for users of clinical research databases.
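
    The selection step described above, keeping practices whose patient base (phenotype counts) and data quality suit a planned study, reduces to queries of the following shape. The per-practice summary fields and thresholds are hypothetical; they do not reflect the CPRD schema or the TrialViz interface.

      # Hypothetical per-practice summaries extracted from a primary-care database.
      practices = [
          {"id": "P001", "patients": 9500,  "copd_patients": 410, "pct_missing_records": 1.2},
          {"id": "P002", "patients": 4200,  "copd_patients": 95,  "pct_missing_records": 8.7},
          {"id": "P003", "patients": 12800, "copd_patients": 700, "pct_missing_records": 0.4},
      ]

      def select_practices(practices, min_cases, max_missing_pct):
          # Keep practices with enough phenotype cases and acceptable data quality.
          return [
              p["id"] for p in practices
              if p["copd_patients"] >= min_cases and p["pct_missing_records"] <= max_missing_pct
          ]

      print(select_practices(practices, min_cases=100, max_missing_pct=5.0))  # ['P001', 'P003']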

  16. Danish Gynecological Cancer Database

    DEFF Research Database (Denmark)

    Sørensen, Sarah Mejer; Bjørn, Signe Frahm; Jochumsen, Kirsten Marie

    2016-01-01

    AIM OF DATABASE: The Danish Gynecological Cancer Database (DGCD) is a nationwide clinical cancer database and its aim is to monitor the treatment quality of Danish gynecological cancer patients, and to generate data for scientific purposes. DGCD also records detailed data on the diagnostic measures...... is the registration of oncological treatment data, which is incomplete for a large number of patients. CONCLUSION: The very complete collection of available data from more registries form one of the unique strengths of DGCD compared to many other clinical databases, and provides unique possibilities for validation...

  17. An Earthquake Ground Motion Database System with Automatic Record Selection Methods

    Institute of Scientific and Technical Information of China (English)

    徐亚军; 王朝坤; 魏冬梅; 施炜; 潘鹏

    2011-01-01

    With many earthquakes occurring in recent years, the seismic performance of building structures has become increasingly important, and selecting appropriate earthquake ground motion records for testing buildings has become correspondingly more necessary. Although Western countries have set up earthquake ground motion database systems for research on the seismic performance of building structures, these systems neither cover the characteristics of ground motions in our country nor offer record selection methods that meet the requirements of our engineering designs, let alone automatic record selection. It is therefore necessary to develop our own earthquake ground motion database system and scientifically sound record selection methods without delay. This paper presents an earthquake ground motion database system that collects a large number of representative and authoritative ground motion records and supports two record selection methods: the conditional ground motion selection method and the severest ground motion selection method. Extensive experiments show that both the efficiency and the results of these record selection methods meet users' requirements.
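
    A conditional selection of this kind typically ranks candidate records by how closely their response spectra match a target design spectrum over the periods of interest. The sketch below uses a simple root-mean-square misfit of log spectral ordinates and invented spectral values; it is only one of many possible selection criteria.

      import math

      # Target design spectrum and candidate record spectra: spectral acceleration (g) at a few periods (s).
      periods = [0.2, 0.5, 1.0, 2.0]
      target = [0.80, 0.60, 0.35, 0.18]
      candidates = {
          "EQ_A": [0.75, 0.62, 0.30, 0.20],
          "EQ_B": [1.10, 0.90, 0.55, 0.30],
          "EQ_C": [0.82, 0.58, 0.36, 0.17],
      }

      def rms_misfit(spectrum, target):
          # Root-mean-square difference between log spectral ordinates.
          return math.sqrt(sum((math.log(s) - math.log(t)) ** 2 for s, t in zip(spectrum, target)) / len(target))

      ranked = sorted(candidates, key=lambda name: rms_misfit(candidates[name], target))
      print(ranked)  # candidate records ordered from best to worst match, e.g. ['EQ_C', 'EQ_A', 'EQ_B']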

  18. Design of a meta-data scheme for a medical records database of traditional Chinese medicine

    Institute of Scientific and Technical Information of China (English)

    田瑞; 马路

    2014-01-01

    Objective: To design a meta-data scheme suitable for a medical records database of traditional Chinese medicine, in order to meet the needs of preserving and analyzing such a database. Methods: The meta-data scheme for the medical records database of traditional Chinese medicine was designed using MARC meta-data, following an analysis of resources, a survey of the literature and the web, and a user survey. Results: The meta-data scheme for the medical records database of traditional Chinese medicine consists of 17 fields organized into 9 parts. Conclusion: Using MARC as the meta-data standard for the medical records database of traditional Chinese medicine has preliminarily achieved the goal of organizing and describing traditional Chinese medicine medical record resources, but the choice of meta-data standard and the standardization of description still need further discussion.

  19. National Database of Geriatrics

    DEFF Research Database (Denmark)

    Kannegaard, Pia Nimann; Vinding, Kirsten L; Hare-Bruun, Helle

    2016-01-01

    AIM OF DATABASE: The aim of the National Database of Geriatrics is to monitor the quality of interdisciplinary diagnostics and treatment of patients admitted to a geriatric hospital unit. STUDY POPULATION: The database population consists of patients who were admitted to a geriatric hospital unit....... Geriatric patients cannot be defined by specific diagnoses. A geriatric patient is typically a frail multimorbid elderly patient with decreasing functional ability and social challenges. The database includes 14-15,000 admissions per year, and the database completeness has been stable at 90% during the past......, percentage of discharges with a rehabilitation plan, and the part of cases where an interdisciplinary conference has taken place. Data are recorded by doctors, nurses, and therapists in a database and linked to the Danish National Patient Register. DESCRIPTIVE DATA: Descriptive patient-related data include...

  20. Student Records

    Science.gov (United States)

    Fields, Cheryl

    2005-01-01

    Another topic involving privacy has attracted considerable attention in recent months--the "student unit record" issue. The U.S. Department of Education concluded in March that it would be feasible to help address lawmakers' concerns about accountability in higher education by constructing a database capable of tracking students from institution…

  1. Classical databases and knowledge organization

    DEFF Research Database (Denmark)

    Hjørland, Birger

    2015-01-01

    This paper considers classical bibliographic databases based on the Boolean retrieval model (such as MEDLINE and PsycInfo). This model is challenged by modern search engines and information retrieval (IR) researchers, who often consider Boolean retrieval a less efficient approach. The paper...... examines this claim and argues for the continued value of Boolean systems, which suggests two further considerations: (1) the important role of human expertise in searching (expert searchers and “information literate” users) and (2) the role of library and information science and knowledge organization (KO......) in the design and use of classical databases. An underlying issue is the kind of retrieval system for which one should aim. Warner’s (2010) differentiation between the computer science traditions and an older library-oriented tradition seems important; the former aim to transform queries automatically...
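
    The Boolean retrieval model behind such classical databases can be shown in a few lines: an inverted index maps each index term to the set of records carrying it, and a query is evaluated as set operations over those postings. The records and terms below are invented.

      # Toy document collection: record id -> assigned index terms.
      records = {
          1: {"depression", "cognitive therapy", "randomized controlled trial"},
          2: {"depression", "pharmacotherapy"},
          3: {"anxiety", "cognitive therapy"},
          4: {"depression", "cognitive therapy"},
      }

      # Build the inverted index: term -> set of record ids (the postings).
      index = {}
      for rec_id, terms in records.items():
          for term in terms:
              index.setdefault(term, set()).add(rec_id)

      # Boolean query: depression AND "cognitive therapy" NOT "randomized controlled trial"
      hits = (index["depression"] & index["cognitive therapy"]) - index["randomized controlled trial"]
      print(sorted(hits))  # -> [4]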

  2. Relative accuracy and availability of an Irish National Database of dispensed medication as a source of medication history information: observational study and retrospective record analysis.

    LENUS (Irish Health Repository)

    Grimes, T

    2013-01-27

    WHAT IS KNOWN AND OBJECTIVE: The medication reconciliation process begins by identifying which medicines a patient used before presentation to hospital. This is time-consuming, labour intensive and may involve interruption of clinicians. We sought to identify the availability and accuracy of data held in a national dispensing database, relative to other sources of medication history information. METHODS: For patients admitted to two acute hospitals in Ireland, a Gold Standard Pre-Admission Medication List (GSPAML) was identified and corroborated with the patient or carer. The GSPAML was compared for accuracy and availability to PAMLs from other sources, including the Health Service Executive Primary Care Reimbursement Scheme (HSE-PCRS) dispensing database. RESULTS: Some 1111 medication were assessed for 97 patients, who were median age 74 years (range 18-92 years), median four co-morbidities (range 1-9), used median 10 medications (range 3-25) and half (52%) were male. The HSE-PCRS PAML was the most accurate source compared to lists provided by the general practitioner, community pharmacist or cited in previous hospital documentation: the list agreed for 74% of the medications the patients actually used, representing complete agreement for all medications in 17% of patients. It was equally contemporaneous to other sources, but was less reliable for male than female patients, those using increasing numbers of medications and those using one or more item that was not reimbursable by the HSE. WHAT IS NEW AND CONCLUSION: The HSE-PCRS database is a relatively accurate, available and contemporaneous source of medication history information and could support acute hospital medication reconciliation.
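
    Assessing the agreement of one pre-admission medication list against the gold standard is essentially a per-patient set comparison, as in the sketch below; the drug names are invented and the agreement measure is a simplified illustration of the kind of comparison reported above.

      # Gold-standard pre-admission medication list (GSPAML) and one candidate source list.
      gspaml    = {"aspirin", "atorvastatin", "metformin", "ramipril"}
      candidate = {"aspirin", "atorvastatin", "metformin", "bisoprolol"}

      agreeing  = gspaml & candidate            # medications the source got right
      missing   = gspaml - candidate            # used by the patient but absent from the source
      spurious  = candidate - gspaml            # listed by the source but not actually used
      agreement = len(agreeing) / len(gspaml)   # share of actually used medications the source agreed on

      print(sorted(missing), sorted(spurious), f"{agreement:.0%}")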

  3. Relational databases

    CERN Document Server

    Bell, D A

    1986-01-01

    Relational Databases explores the major advances in relational databases and provides a balanced analysis of the state of the art in relational databases. Topics covered include capture and analysis of data placement requirements; distributed relational database systems; data dependency manipulation in database schemata; and relational database support for computer graphics and computer aided design. This book is divided into three sections and begins with an overview of the theory and practice of distributed systems, using the example of INGRES from Relational Technology as illustration. The

  4. Biofuel Database

    Science.gov (United States)

    Biofuel Database (Web, free access)   This database brings together structural, biological, and thermodynamic data for enzymes that are either in current use or are being considered for use in the production of biofuels.

  5. Onzekere databases

    NARCIS (Netherlands)

    van Keulen, Maurice

    A recent development in database research concerns so-called 'uncertain databases'. This article describes what uncertain databases are, how they can be used, and which applications in particular could benefit from this technology.

  6. Community Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This excel spreadsheet is the result of merging at the port level of several of the in-house fisheries databases in combination with other demographic databases such...

  7. Database Administrator

    Science.gov (United States)

    Moore, Pam

    2010-01-01

    The Internet and electronic commerce (e-commerce) generate lots of data. Data must be stored, organized, and managed. Database administrators, or DBAs, work with database software to find ways to do this. They identify user needs, set up computer databases, and test systems. They ensure that systems perform as they should and add people to the…

  9. Aspects of record linkage

    NARCIS (Netherlands)

    Schraagen, Marijn Paul

    2014-01-01

    This thesis is an exploration of the subject of historical record linkage. The general goal of historical record linkage is to discover relations between historical entities in a database, for any specific definition of relation, entity and database. Although this task originates from historical

  10. Drivers of Holocene sea-level change - using a global database of relative sea-level records from the Northern and Southern Hemisphere

    Science.gov (United States)

    Horton, Benjamin; Khan, Nicole; Ashe, Erica; Kopp, Robert

    2016-04-01

    Many factors give rise to relative sea-level (RSL) changes that are far from globally uniform. For example, spatially variable sea-level responses arise because of the exchange of mass between ice sheets and oceans. Gravitational, flexural, and rotational processes generate a distinct spatial pattern - or "fingerprint" - of sea-level change associated with each shrinking land ice mass. As a land ice mass shrinks, sea-level rise is greater in areas geographically distal to the ice mass than in areas proximal to it, in large part because the gravitational attraction between the ice mass and the ocean is reduced. Thus, the U.S. mid-Atlantic coastline experiences about 50% of the global average sea-level-rise due to Greenland Ice Sheet melt, but about 120% of the global average due to West Antarctic Ice Sheet melt. Separating the Greenland and Antarctic ice sheet contributions during the past 7,000 years requires analysis of sea-level changes from sites in the northern and southern hemisphere. Accordingly we present a global sea-level database for the Holocene to which we apply a hierarchical statistical model to: (1) estimate the Global Mean Sea Level Signal; (2) quantify rates of change; (3) compare rates of change among sites, including full quantification of the uncertainty in their differences; and (4) test hypotheses about the sources of meltwater through their sea-level fingerprints.
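
    The fingerprint idea reduces, for a single site, to scaling each ice sheet's contribution to global mean sea level by a site-specific factor. The toy calculation below uses the mid-Atlantic percentages quoted above and invented meltwater contributions; the actual analysis uses a full hierarchical statistical model.

      # Fingerprint factors for the US mid-Atlantic coast, as quoted in the abstract:
      # local response expressed as a fraction of the global mean rise from each source.
      fingerprint = {"greenland": 0.5, "west_antarctica": 1.2}

      # Hypothetical contributions of each source to global mean sea level (mm) over some interval.
      global_mean_contribution = {"greenland": 10.0, "west_antarctica": 5.0}

      local_rsl = sum(fingerprint[src] * global_mean_contribution[src] for src in fingerprint)
      print(local_rsl, "mm of local relative sea-level change")  # 0.5*10 + 1.2*5 = 11.0 mm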

  11. Collecting psychosocial "vital signs" in electronic health records: Why now? What are they? What's new for psychology?

    Science.gov (United States)

    Matthews, Karen A; Adler, Nancy E; Forrest, Christopher B; Stead, William W

    2016-09-01

    Social, psychological, and behavioral factors are recognized as key contributors to health, but they are rarely measured in a systematic way in health care settings. Electronic health records (EHRs) can be used in these settings to routinely collect a standardized set of social, psychological, and behavioral determinants of health. The expanded use of EHRs provides opportunities to improve individual and population health, and offers new ways for the psychological community to engage in health promotion and disease prevention efforts. This article addresses 3 issues. First, it discusses what led to current efforts to include measures of psychosocial and behavioral determinants of health in EHRs. Second, it presents recommendations of an Institute of Medicine committee regarding inclusion in EHRS of a panel of measures that meet a priori criteria. Third, it identifies new opportunities and challenges these recommendations present for psychologists in practice and research. (PsycINFO Database Record

  12. Dutch Vegetation Database (LVD)

    NARCIS (Netherlands)

    Hennekens, S.M.

    2011-01-01

    The Dutch Vegetation Database (LVD) hosts information on all plant communities in the Netherlands. This substantial archive consists of over 600.000 recent and historic vegetation descriptions. The data provide information on more than 85 years of vegetation recording in various habitats covering te

  13. Quality of recording of diabetes in the UK: how does the GP's method of coding clinical data affect incidence estimates? Cross-sectional study using the CPRD database

    Science.gov (United States)

    Tate, A Rosemary; Dungey, Sheena; Glew, Simon; Beloff, Natalia; Williams, Rachael; Williams, Tim

    2017-01-01

    Objective To assess the effect of coding quality on estimates of the incidence of diabetes in the UK between 1995 and 2014. Design A cross-sectional analysis examining diabetes coding from 1995 to 2014 and how the choice of codes (diagnosis codes vs codes which suggest diagnosis) and quality of coding affect estimated incidence. Setting Routine primary care data from 684 practices contributing to the UK Clinical Practice Research Datalink (data contributed from Vision (INPS) practices). Main outcome measure Incidence rates of diabetes and how they are affected by (1) GP coding and (2) excluding ‘poor’ quality practices with at least 10% incident patients inaccurately coded between 2004 and 2014. Results Incidence rates and accuracy of coding varied widely between practices and the trends differed according to selected category of code. If diagnosis codes were used, the incidence of type 2 increased sharply until 2004 (when the UK Quality Outcomes Framework was introduced), and then flattened off, until 2009, after which they decreased. If non-diagnosis codes were included, the numbers continued to increase until 2012. Although coding quality improved over time, 15% of the 666 practices that contributed data between 2004 and 2014 were labelled ‘poor’ quality. When these practices were dropped from the analyses, the downward trend in the incidence of type 2 after 2009 became less marked and incidence rates were higher. Conclusions In contrast to some previous reports, diabetes incidence (based on diagnostic codes) appears not to have increased since 2004 in the UK. Choice of codes can make a significant difference to incidence estimates, as can quality of recording. Codes and data quality should be checked when assessing incidence rates using GP data. PMID:28122831
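
    The sensitivity of incidence estimates to the choice of code list and to practice data quality can be illustrated with a toy calculation. The codes, practices, counts and person-years below are invented and do not come from CPRD.

      # Codes that record a diagnosis versus codes that merely suggest one (illustrative lists).
      DIAGNOSIS_CODES = {"diab_type2_dx", "diab_type1_dx"}
      SUGGESTIVE_CODES = {"diab_annual_review", "insulin_prescription"}

      # Hypothetical incident events per practice, with person-years at risk and a quality flag.
      practices = [
          {"id": "A", "poor_quality": False, "person_years": 52000,
           "events": {"diab_type2_dx": 180, "diab_annual_review": 40}},
          {"id": "B", "poor_quality": True,  "person_years": 30000,
           "events": {"diab_type2_dx": 60, "diab_type1_dx": 20, "insulin_prescription": 90}},
      ]

      def incidence_per_1000(practices, codes, exclude_poor=False):
          kept = [p for p in practices if not (exclude_poor and p["poor_quality"])]
          events = sum(n for p in kept for code, n in p["events"].items() if code in codes)
          person_years = sum(p["person_years"] for p in kept)
          return 1000.0 * events / person_years

      print(incidence_per_1000(practices, DIAGNOSIS_CODES))                     # diagnosis codes only
      print(incidence_per_1000(practices, DIAGNOSIS_CODES | SUGGESTIVE_CODES))  # broader code list
      print(incidence_per_1000(practices, DIAGNOSIS_CODES, exclude_poor=True))  # drop poor-quality practices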

  14. Danish Urogynaecological Database

    DEFF Research Database (Denmark)

    Hansen, Ulla Darling; Gradel, Kim Oren; Larsen, Michael Due

    2016-01-01

    The Danish Urogynaecological Database is established in order to ensure high quality of treatment for patients undergoing urogynecological surgery. The database contains details of all women in Denmark undergoing incontinence surgery or pelvic organ prolapse surgery amounting to ~5,200 procedures...... per year. The variables are collected along the course of treatment of the patient from the referral to a postoperative control. Main variables are prior obstetrical and gynecological history, symptoms, symptom-related quality of life, objective urogynecological findings, type of operation......, complications if relevant, implants used if relevant, 3-6-month postoperative recording of symptoms, if any. A set of clinical quality indicators is being maintained by the steering committee for the database and is published in an annual report which also contains extensive descriptive statistics. The database...

  15. The Danish Urogynaecological Database

    DEFF Research Database (Denmark)

    Guldberg, Rikke; Brostrøm, Søren; Hansen, Jesper Kjær

    2013-01-01

    INTRODUCTION AND HYPOTHESIS: The Danish Urogynaecological Database (DugaBase) is a nationwide clinical database established in 2006 to monitor, ensure and improve the quality of urogynaecological surgery. We aimed to describe its establishment and completeness and to validate selected variables....... This is the first study based on data from the DugaBase. METHODS: The database completeness was calculated as a comparison between urogynaecological procedures reported to the Danish National Patient Registry and to the DugaBase. Validity was assessed for selected variables from a random sample of 200 women...... in the DugaBase from 1 January 2009 to 31 October 2010, using medical records as a reference. RESULTS: A total of 16,509 urogynaecological procedures were registered in the DugaBase by 31 December 2010. The database completeness has increased by calendar time, from 38.2 % in 2007 to 93.2 % in 2010 for public...

  16. Chemical Kinetics Database

    Science.gov (United States)

    SRD 17 NIST Chemical Kinetics Database (Web, free access)   The NIST Chemical Kinetics Database includes essentially all reported kinetics results for thermal gas-phase chemical reactions. The database is designed to be searched for kinetics data based on the specific reactants involved, for reactions resulting in specified products, for all the reactions of a particular species, or for various combinations of these. In addition, the bibliography can be searched by author name or combination of names. The database contains in excess of 38,000 separate reaction records for over 11,700 distinct reactant pairs. These data have been abstracted from over 12,000 papers with literature coverage through early 2000.

  17. Searching for religion and mental health studies required health, social science, and grey literature databases.

    Science.gov (United States)

    Wright, Judy M; Cottrell, David J; Mir, Ghazala

    2014-07-01

    To determine the optimal databases to search for studies of faith-sensitive interventions for treating depression. We examined 23 health, social science, religious, and grey literature databases searched for an evidence synthesis. Databases were prioritized by yield of (1) search results, (2) potentially relevant references identified during screening, (3) included references contained in the synthesis, and (4) included references that were available in the database. We assessed the impact of databases beyond MEDLINE, EMBASE, and PsycINFO by their ability to supply studies identifying new themes and issues. We identified pragmatic workload factors that influence database selection. PsycINFO was the best performing database within all priority lists. ArabPsyNet, CINAHL, Dissertations and Theses, EMBASE, Global Health, Health Management Information Consortium, MEDLINE, PsycINFO, and Sociological Abstracts were essential for our searches to retrieve the included references. Citation tracking activities and the personal library of one of the research teams made significant contributions of unique, relevant references. Religion studies databases (Am Theo Lib Assoc, FRANCIS) did not provide unique, relevant references. Literature searches for reviews and evidence syntheses of religion and health studies should include social science, grey literature, non-Western databases, personal libraries, and citation tracking activities. Copyright © 2014 Elsevier Inc. All rights reserved.

  18. Table manipulation in simplicial databases

    CERN Document Server

    Spivak, David I

    2010-01-01

    In \\cite{Spi}, we developed a category of databases in which the schema of a database is represented as a simplicial set. Each simplex corresponds to a table in the database. There, our main concern was to find a categorical formulation of databases; the simplicial nature of the schemas was to some degree unexpected and unexploited. In the present note, we show how to use this geometric formulation effectively on a computer. If we think of each simplex as a polygonal tile, we can imagine assembling custom databases by mixing and matching tiles. Queries on this database can be performed by drawing paths through the resulting tile formations, selecting records at the start-point of this path and retrieving corresponding records at its end-point.

  19. Genome databases

    Energy Technology Data Exchange (ETDEWEB)

    Courteau, J.

    1991-10-11

    Since the Genome Project began several years ago, a plethora of databases have been developed or are in the works. They range from the massive Genome Data Base at Johns Hopkins University, the central repository of all gene mapping information, to small databases focusing on single chromosomes or organisms. Some are publicly available, others are essentially private electronic lab notebooks. Still others limit access to a consortium of researchers working on, say, a single human chromosome. An increasing number incorporate sophisticated search and analytical software, while others operate as little more than data lists. In consultation with numerous experts in the field, a list has been compiled of some key genome-related databases. The list was not limited to map and sequence databases but also included the tools investigators use to interpret and elucidate genetic data, such as protein sequence and protein structure databases. Because a major goal of the Genome Project is to map and sequence the genomes of several experimental animals, including E. coli, yeast, fruit fly, nematode, and mouse, the available databases for those organisms are listed as well. The author also includes several databases that are still under development - including some ambitious efforts that go beyond data compilation to create what are being called electronic research communities, enabling many users, rather than just one or a few curators, to add or edit the data and tag it as raw or confirmed.

  20. Database Replication

    CERN Document Server

    Kemme, Bettina

    2010-01-01

    Database replication is widely used for fault-tolerance, scalability and performance. The failure of one database replica does not stop the system from working as available replicas can take over the tasks of the failed replica. Scalability can be achieved by distributing the load across all replicas, and adding new replicas should the load increase. Finally, database replication can provide fast local access, even if clients are geographically distributed clients, if data copies are located close to clients. Despite its advantages, replication is not a straightforward technique to apply, and

  1. Biological Sample Monitoring Database (BSMDBS)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Biological Sample Monitoring Database System (BSMDBS) was developed for the Northeast Fisheries Regional Office and Science Center (NER/NEFSC) to record and...

  2. Dealer Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The dealer reporting databases contain the primary data reported by federally permitted seafood dealers in the northeast. Electronic reporting was implemented May 1,...

  3. RDD Databases

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This database was established to oversee documents issued in support of fishery research activities including experimental fishing permits (EFP), letters of...

  4. National database

    DEFF Research Database (Denmark)

    Kristensen, Helen Grundtvig; Stjernø, Henrik

    1995-01-01

    Article about the national database for nursing research established at the Danish Institute for Health and Nursing Research (Dansk Institut for Sundheds- og Sygeplejeforskning). The aim of the database is to gather knowledge about research and development activities within nursing.

  5. Danish Gynecological Cancer Database

    DEFF Research Database (Denmark)

    Sørensen, Sarah Mejer; Bjørn, Signe Frahm; Jochumsen, Kirsten Marie;

    2016-01-01

    AIM OF DATABASE: The Danish Gynecological Cancer Database (DGCD) is a nationwide clinical cancer database and its aim is to monitor the treatment quality of Danish gynecological cancer patients, and to generate data for scientific purposes. DGCD also records detailed data on the diagnostic measures...... for gynecological cancer. STUDY POPULATION: DGCD was initiated January 1, 2005, and includes all patients treated at Danish hospitals for cancer of the ovaries, peritoneum, fallopian tubes, cervix, vulva, vagina, and uterus, including rare histological types. MAIN VARIABLES: DGCD data are organized within separate...... Danish personal identification number (CPR number). DESCRIPTIVE DATA: Data from DGCD and registers are available online in the Statistical Analysis Software portal. The DGCD forms cover almost all possible clinical variables used to describe gynecological cancer courses. The only limitation...

  6. Biological Databases

    Directory of Open Access Journals (Sweden)

    Kaviena Baskaran

    2013-12-01

    Full Text Available Biology has entered a new era of distributing information based on databases, and these database collections have become a primary means of publishing information. This data publishing is done through Internet Gopher, where information resources are offered easily and affordably through powerful research tools. What is more important now is the development of high-quality, professionally operated electronic data publishing sites. To enhance the service, appropriate editorial policies for electronic data publishing have been established, and the editors of articles shoulder the responsibility.

  7. Austrian Social Security Database

    OpenAIRE

    Zweimüller, Josef; Winter-Ebmer, Rudolf; Lalive, Rafael; Kuhn, Andreas; Wuellrich, Jean-Philippe; Ruf, Oliver; Büchi, Simon

    2009-01-01

    The Austrian Social Security Database (ASSD) is a matched firm-worker data set, which records the labor market history of almost 11 million individuals from January 1972 to April 2007. Moreover, more than 2.2 million firms can be identified. The individual labor market histories are described in the following dimensions: very detailed daily labor market states and yearly earnings at the firm-worker level, together with a limited set of demographic characteristics. Additionally the ASSD pr...

  8. 47 CFR 15.713 - TV bands database.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 1 2010-10-01 2010-10-01 false TV bands database. 15.713 Section 15.713... TV bands database. (a) Purpose. The TV bands database serves the following functions: (1) To... databases. (b) Information in the TV bands database. (1) Facilities already recorded in Commission...

  9. Interpolated testing influences focused attention and improves integration of information during a video-recorded lecture.

    Science.gov (United States)

    Jing, Helen G; Szpunar, Karl K; Schacter, Daniel L

    2016-09-01

    Although learning through a computer interface has become increasingly common, little is known about how to best structure video-recorded lectures to optimize learning. In 2 experiments, we examine changes in focused attention and the ability for students to integrate knowledge learned during a 40-min video-recorded lecture. In Experiment 1, we demonstrate that interpolating a lecture with memory tests (tested group), compared to studying the lecture material for the same amount of time (restudy group), improves overall learning and boosts integration of related information learned both within individual lecture segments and across the entire lecture. Although mind wandering rates between the tested and restudy groups did not differ, mind wandering was more detrimental for final test performance in the restudy group than in the tested group. In Experiment 2, we replicate the findings of Experiment 1, and additionally show that interpolated tests influence the types of thoughts that participants report during the lecture. While the tested group reported more lecture-related thoughts, the restudy group reported more lecture-unrelated thoughts; furthermore, lecture-related thoughts were positively related to final test performance, whereas lecture-unrelated thoughts were negatively related to final test performance. Implications for the use of interpolated testing in video-recorded lectures are discussed. (PsycINFO Database Record

  10. Database of recent tsunami deposits

    Science.gov (United States)

    Peters, Robert; Jaffe, Bruce E.

    2010-01-01

    This report describes a database of sedimentary characteristics of tsunami deposits derived from published accounts of tsunami deposit investigations conducted shortly after the occurrence of a tsunami. The database contains 228 entries, each entry containing data from up to 71 categories. It includes data from 51 publications covering 15 tsunamis distributed between 16 countries. The database encompasses a wide range of depositional settings including tropical islands, beaches, coastal plains, river banks, agricultural fields, and urban environments. It includes data from both local tsunamis and teletsunamis. The data are valuable for interpreting prehistorical, historical, and modern tsunami deposits, and for the development of criteria to identify tsunami deposits in the geologic record.

  11. Enhancing OPAC Records for Discovery

    Directory of Open Access Journals (Sweden)

    Patrick Griffis

    2009-09-01

    Full Text Available This article proposes adding keywords and descriptors to the catalog records of electronic databases and media items to enhance their discovery. The authors contend that subject liaisons can add value to OPAC records and enhance discovery of electronic databases and media items by providing searchable keywords and resource descriptions. The authors provide an examination of OPAC records at their own library, which illustrates the disparity of useful keywords and descriptions within the notes field for media item records versus electronic database records. The authors outline methods for identifying useful keywords for indexing OPAC records of electronic databases. Also included is an analysis of the advantages of using Encore’s Community Tag and Community Review features to allow subject liaisons to work directly in the catalog instead of collaborating with cataloging staff

  12. The Cambridge Structural Database.

    Science.gov (United States)

    Groom, Colin R; Bruno, Ian J; Lightfoot, Matthew P; Ward, Suzanna C

    2016-04-01

    The Cambridge Structural Database (CSD) contains a complete record of all published organic and metal-organic small-molecule crystal structures. The database has been in operation for over 50 years and continues to be the primary means of sharing structural chemistry data and knowledge across disciplines. As well as structures that are made public to support scientific articles, it includes many structures published directly as CSD Communications. All structures are processed both computationally and by expert structural chemistry editors prior to entering the database. A key component of this processing is the reliable association of the chemical identity of the structure studied with the experimental data. This important step helps ensure that data is widely discoverable and readily reusable. Content is further enriched through selective inclusion of additional experimental data. Entries are available to anyone through free CSD community web services. Linking services developed and maintained by the CCDC, combined with the use of standard identifiers, facilitate discovery from other resources. Data can also be accessed through CCDC and third party software applications and through an application programming interface.

  13. Database design using entity-relationship diagrams

    CERN Document Server

    Bagui, Sikha

    2011-01-01

    Data, Databases, and the Software Engineering Process; Data; Building a Database; What is the Software Engineering Process?; Entity Relationship Diagrams and the Software Engineering Life Cycle (Phase 1: Get the Requirements for the Database; Phase 2: Specify the Database; Phase 3: Design the Database); Data and Data Models; Files, Records, and Data Items; Moving from 3 × 5 Cards to Computers; Database Models (The Hierarchical Model; The Network Model; The Relational Model); The Relational Model and Functional Dependencies; Fundamental Relational Database; Relational Database and Sets; Functional

  14. Database development and management

    CERN Document Server

    Chao, Lee

    2006-01-01

    Introduction to Database Systems; Functions of a Database; Database Management System; Database Components; Database Development Process; Conceptual Design and Data Modeling; Introduction to Database Design Process; Understanding Business Process; Entity-Relationship Data Model; Representing Business Process with Entity-Relationship Model; Table Structure and Normalization; Introduction to Tables; Table Normalization; Transforming Data Models to Relational Databases; DBMS Selection; Transforming Data Models to Relational Databases; Enforcing Constraints; Creating Database for Business Process; Physical Design and Database

  15. Database tomography for commercial application

    Science.gov (United States)

    Kostoff, Ronald N.; Eberhart, Henry J.

    1994-01-01

    Database tomography is a method for extracting themes and their relationships from text. The algorithms employed begin with word frequency and word proximity analysis and build upon these results. When the word 'database' is used, think of medical or police records, patents, journals, or papers, etc. (any text information that can be computer stored). Database tomography features a full-text, user-interactive technique enabling the user to identify areas of interest, establish relationships, and map trends for a deeper understanding of an area of interest. Database tomography concepts and applications have been reported in journals and presented at conferences. One important feature of the database tomography algorithm is that it can be used on a database of any size and will facilitate the user's ability to understand the volume of content therein. While employing the process to identify research opportunities, it became obvious that this promising technology has potential applications for business, science, engineering, law, and academe. Examples include evaluating marketing trends, strategies, relationships and associations. Also, the database tomography process would be a powerful component in the areas of competitive intelligence, national security intelligence and patent analysis. User interests and involvement cannot be overemphasized.
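
    The first stages of the algorithm, word frequency followed by word proximity (co-occurrence) analysis, can be sketched on a toy corpus as below; real database tomography adds phrase extraction, thresholds and interactive refinement by the user on top of these counts.

      from collections import Counter
      from itertools import combinations

      documents = [
          "database tomography extracts themes from text",
          "word frequency and word proximity analysis reveal themes",
          "competitive intelligence and patent analysis use text databases",
      ]
      STOPWORDS = {"and", "from", "use", "the"}

      # Stage 1: word frequency across the corpus.
      tokens_per_doc = [[w for w in doc.lower().split() if w not in STOPWORDS] for doc in documents]
      frequencies = Counter(w for tokens in tokens_per_doc for w in tokens)

      # Stage 2: word proximity, simplified here to co-occurrence within the same document.
      cooccurrence = Counter()
      for tokens in tokens_per_doc:
          for pair in combinations(sorted(set(tokens)), 2):
              cooccurrence[pair] += 1

      print(frequencies.most_common(5))
      print(cooccurrence.most_common(5))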

  16. The RIKEN integrated database of mammals.

    Science.gov (United States)

    Masuya, Hiroshi; Makita, Yuko; Kobayashi, Norio; Nishikata, Koro; Yoshida, Yuko; Mochizuki, Yoshiki; Doi, Koji; Takatsuki, Terue; Waki, Kazunori; Tanaka, Nobuhiko; Ishii, Manabu; Matsushima, Akihiro; Takahashi, Satoshi; Hijikata, Atsushi; Kozaki, Kouji; Furuichi, Teiichi; Kawaji, Hideya; Wakana, Shigeharu; Nakamura, Yukio; Yoshiki, Atsushi; Murata, Takehide; Fukami-Kobayashi, Kaoru; Mohan, Sujatha; Ohara, Osamu; Hayashizaki, Yoshihide; Mizoguchi, Riichiro; Obata, Yuichi; Toyoda, Tetsuro

    2011-01-01

    The RIKEN integrated database of mammals (http://scinets.org/db/mammal) is the official undertaking to integrate its mammalian databases produced from multiple large-scale programs that have been promoted by the institute. The database integrates not only RIKEN's original databases, such as FANTOM, the ENU mutagenesis program, the RIKEN Cerebellar Development Transcriptome Database and the Bioresource Database, but also imported data from public databases, such as Ensembl, MGI and biomedical ontologies. Our integrated database has been implemented on the infrastructure of publication medium for databases, termed SciNetS/SciNeS, or the Scientists' Networking System, where the data and metadata are structured as a semantic web and are downloadable in various standardized formats. The top-level ontology-based implementation of mammal-related data directly integrates the representative knowledge and individual data records in existing databases to ensure advanced cross-database searches and reduced unevenness of the data management operations. Through the development of this database, we propose a novel methodology for the development of standardized comprehensive management of heterogeneous data sets in multiple databases to improve the sustainability, accessibility, utility and publicity of the data of biomedical information.

  17. Avoidance of neuromuscular blocking agents may increase the risk of difficult tracheal intubation: a cohort study of 103,812 consecutive adult patients recorded in the Danish Anaesthesia Database

    DEFF Research Database (Denmark)

    Lundstrøm, L H; Møller, A M; Rosenstock, C;

    2009-01-01

    by direct laryngoscopy was retrieved from the Danish Anaesthesia Database. We used an intubation score based upon the number of attempts, change from direct laryngoscopy to a more advanced technique, or intubation by a different operator. We retrieved data on age, sex, ASA physical status classification...

  18. Databases and their application

    NARCIS (Netherlands)

    E.C. Grimm; R.H.W Bradshaw; S. Brewer; S. Flantua; T. Giesecke; A.M. Lézine; H. Takahara; J.W.,Jr Williams

    2013-01-01

    During the past 20 years, several pollen database cooperatives have been established. These databases are now constituent databases of the Neotoma Paleoecology Database, a public domain, multiproxy, relational database designed for Quaternary-Pliocene fossil data and modern surface samples. The poll

  19. Dietary Supplement Ingredient Database

    Science.gov (United States)

    ... and US Department of Agriculture Dietary Supplement Ingredient Database ... values can be saved to build a small database or add to an existing database for national, ...

  20. NoSQL Databases

    OpenAIRE

    2013-01-01

    This thesis deals with database systems referred to as NoSQL databases. In the second chapter, I explain basic terms and the theory of database systems. A short explanation is dedicated to database systems based on the relational data model and the SQL standardized query language. Chapter Three explains the concept and history of the NoSQL databases, and also presents database models, major features and the use of NoSQL databases in comparison with traditional database systems. In the fourth ...

  1. USAID Anticorruption Projects Database

    Data.gov (United States)

    US Agency for International Development — The Anticorruption Projects Database (Database) includes information about USAID projects with anticorruption interventions implemented worldwide between 2007 and...

  2. Collecting Taxes Database

    Data.gov (United States)

    US Agency for International Development — The Collecting Taxes Database contains performance and structural indicators about national tax systems. The database contains quantitative revenue performance...

  3. The Danish Anaesthesia Database

    Directory of Open Access Journals (Sweden)

    Antonsen K

    2016-10-01

    Full Text Available Kristian Antonsen,1 Charlotte Vallentin Rosenstock,2 Lars Hyldborg Lundstrøm2 1Board of Directors, Copenhagen University Hospital, Bispebjerg and Frederiksberg Hospital, Capital Region of Denmark, Denmark; 2Department of Anesthesiology, Copenhagen University Hospital, Nordsjællands Hospital-Hillerød, Capital Region of Denmark, Denmark Aim of database: The aim of the Danish Anaesthesia Database (DAD) is the nationwide collection of data on all patients undergoing anesthesia. Collected data are used for quality assurance and quality development, and serve as a basis for research projects. Study population: The DAD was founded in 2004 as a part of the Danish Clinical Registries (Regionernes Kliniske Kvalitetsudviklings Program [RKKP]). Patients undergoing general anesthesia, regional anesthesia with or without combined general anesthesia, as well as patients under sedation, are registered. Data are retrieved from public and private anesthesia clinics, single centers as well as multihospital corporations across Denmark. In 2014 a total of 278,679 unique entries representing a national coverage of ~70% were recorded; data completeness is steadily increasing. Main variables: Records are aggregated to determine 13 defined quality indicators and eleven defined complications, all covering the anesthetic process from the preoperative assessment through anesthesia and surgery until the end of the postoperative recovery period. Descriptive data: Registered variables include patients' individual social security number (assigned to all Danes), both direct patient-related lifestyle factors enabling a quantification of patients' comorbidity and variables that are strictly related to the type, duration, and safety of the anesthesia. Data and specific data combinations can be extracted within each department in order to monitor patient treatment. In addition, an annual DAD report serves as a benchmark for departments nationwide. Conclusion: The DAD is covering the

  4. A critical review of the contribution of eye movement recordings to the neuropsychology of obsessive compulsive disorder.

    Science.gov (United States)

    Jaafari, N; Rigalleau, F; Rachid, F; Delamillieure, P; Millet, B; Olié, J-P; Gil, R; Rotge, J-Y; Vibert, N

    2011-08-01

    Dysfunctions of saccadic and/or smooth pursuit eye movements have been proposed as markers of obsessive compulsive disorder (OCD), but experimental results are inconsistent. The aim of this paper was to review the literature on eye movement dysfunctions in OCD to assess whether or not saccades or smooth pursuit may be used to diagnose and characterize OCD. Literature was searched using PubMed, ISI Web of Knowledge, and PsycINFO databases for all studies reporting eye movements in adult patients suffering from OCD. Thirty-three articles were found. As expected, eye movements of the patients with OCD were mostly assessed with simple oculomotor paradigms involving saccadic and/or smooth pursuit control. In contrast to patients with schizophrenia, however, patients with OCD only displayed rather unspecific deficits, namely slight smooth pursuit impairments and longer response latencies on antisaccade tasks. There was no relationship between these deficits and the severity of patients' symptoms. Interestingly, eye movements of the patients with OCD were almost never recorded during more complex cognitive tasks. As in schizophrenia and autism, eye movement recordings during more complex tasks might help to better characterize the cognitive deficits associated with OCD. Such recordings may reveal specific OCD-related deficits that could be used as reliable diagnostic and/or classification tools. © 2011 John Wiley & Sons A/S.

  5. Records Management

    Data.gov (United States)

    U.S. Environmental Protection Agency — All Federal Agencies are required to prescribe an appropriate records maintenance program so that complete records are filed or otherwise preserved, records can be...

  6. De-identifying an EHR Database

    DEFF Research Database (Denmark)

    Lauesen, Søren; Pantazos, Kostas; Lippert, Søren

    2011-01-01

    -identified a Danish EHR database with 437,164 patients. The goal was to generate a version with real medical records, but related to artificial persons. We developed a de-identification algorithm that uses lists of named entities, simple language analysis, and special rules. Our algorithm consists of 3 steps: collect...... lists of identifiers from the database and external resources, define a replacement for each identifier, and replace identifiers in structured data and free text. Some patient records could not be safely de-identified, so the de-identified database has 323,122 patient records with an acceptable degree...... of anonymity, readability and correctness (F-measure of 95%). The algorithm has to be adjusted for each culture, language and database....
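    The three steps quoted above (collect identifier lists, define a replacement for each identifier, and replace identifiers in structured data and free text) can be sketched as follows. The identifier list, replacement scheme, and sample record are invented for illustration and do not reproduce the authors' actual algorithm.

```python
import re

# Step 1 (assumed): identifiers collected from the database and external name lists.
identifiers = ["Jens Hansen", "Hansen", "Noerrebrogade 12"]

# Step 2: define a stable replacement for each identifier.
replacements = {ident: f"ANON_{i}" for i, ident in enumerate(identifiers)}

# Step 3: replace identifiers in structured fields and in free text.
def de_identify(text, replacements):
    # Replace longer identifiers first so substrings do not clobber full names.
    for ident in sorted(replacements, key=len, reverse=True):
        text = re.sub(re.escape(ident), replacements[ident], text)
    return text

record = {"name": "Jens Hansen",
          "note": "Jens Hansen seen at Noerrebrogade 12; Hansen reports improvement."}
print({field: de_identify(value, replacements) for field, value in record.items()})
```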

  7. The Human Communication Research Centre dialogue database.

    Science.gov (United States)

    Anderson, A H; Garrod, S C; Clark, A; Boyle, E; Mullin, J

    1992-10-01

    The HCRC dialogue database consists of over 700 transcribed and coded dialogues from pairs of speakers aged from seven to fourteen. The speakers are recorded while tackling co-operative problem-solving tasks and the same pairs of speakers are recorded over two years tackling 10 different versions of our two tasks. In addition there are over 200 dialogues recorded between pairs of undergraduate speakers engaged on versions of the same tasks. Access to the database, and to its accompanying custom-built search software, is available electronically over the JANET system by contacting liz@psy.glasgow.ac.uk, from whom further information about the database and a user's guide to the database can be obtained.

  8. De-identifying an EHR Database

    DEFF Research Database (Denmark)

    Lauesen, Søren; Pantazos, Kostas; Lippert, Søren

    2011-01-01

    -identified a Danish EHR database with 437,164 patients. The goal was to generate a version with real medical records, but related to artificial persons. We developed a de-identification algorithm that uses lists of named entities, simple language analysis, and special rules. Our algorithm consists of 3 steps: collect...... lists of identifiers from the database and external resources, define a replacement for each identifier, and replace identifiers in structured data and free text. Some patient records could not be safely de-identified, so the de-identified database has 323,122 patient records with an acceptable degree...... of anonymity, readability and correctness (F-measure of 95%). The algorithm has to be adjusted for each culture, language and database....

  9. The Majorana Parts Tracking Database

    CERN Document Server

    Abgrall, N; Avignone, F T; Bertrand, F E; Brudanin, V; Busch, M; Byram, D; Caldwell, A S; Chan, Y-D; Christofferson, C D; Combs, D C; Cuesta, C; Detwiler, J A; Doe, P J; Efremenko, Yu; Egorov, V; Ejiri, H; Elliott, S R; Esterline, J; Fast, J E; Finnerty, P; Fraenkle, F M; Galindo-Uribarri, A; Giovanetti, G K; Goett, J; Green, M P; Gruszko, J; Guiseppe, V E; Gusev, K; Hallin, A L; Hazama, R; Hegai, A; Henning, R; Hoppe, E W; Howard, S; Howe, M A; Keeter, K J; Kidd, M F; Kochetov, O; Kouzes, R T; LaFerriere, B D; Leon, J Diaz; Leviner, L E; Loach, J C; MacMullin, J; Martin, R D; Meijer, S J; Mertens, S; Miller, M L; Mizouni, L; Nomachi, M; Orrell, J L; O'Shaughnessy, C; Overman, N R; Petersburg, R; Phillips, D G; Poon, A W P; Pushkin, K; Radford, D C; Rager, J; Rielage, K; Robertson, R G H; Romero-Romero, E; Ronquest, M C; Shanks, B; Shima, T; Shirchenko, M; Snavely, K J; Snyder, N; Soin, A; Suriano, A M; Tedeschi, D; Thompson, J; Timkin, V; Tornow, W; Trimble, J E; Varner, R L; Vasilyev, S; Vetter, K; Vorren, K; White, B R; Wilkerson, J F; Wiseman, C; Xu, W; Yakushev, E; Young, A R; Yu, C -H; Zhitnikov, I

    2015-01-01

    The Majorana Demonstrator is an ultra-low background physics experiment searching for the neutrinoless double beta decay of $^{76}$Ge. The Majorana Parts Tracking Database is used to record the history of components used in the construction of the Demonstrator. The tracking implementation takes a novel approach based on the schema-free database technology CouchDB. Transportation, storage, and processes undergone by parts such as machining or cleaning are linked to part records. Tracking parts provides a great logistics benefit and an important quality assurance reference during construction. In addition, the location history of parts provides an estimate of their exposure to cosmic radiation. A web application for data entry and a radiation exposure calculator have been developed as tools for achieving the extreme radio-purity required for this rare decay search.
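    Purely as an illustration of the idea (the record above does not give the actual CouchDB document schema), a schema-free part record with a location history and a simple cosmic-ray exposure estimate might look like this; the exposure rates, field names, and part are hypothetical.

```python
from datetime import date

# Hypothetical surface-exposure rates (arbitrary units per day) by location.
EXPOSURE_RATE = {"surface_lab": 1.0, "underground": 0.01}

part = {                              # schema-free document, as it might sit in CouchDB
    "_id": "Cu-plate-0042",
    "material": "electroformed copper",
    "history": [
        {"process": "machining", "location": "surface_lab",
         "start": "2013-01-05", "end": "2013-01-20"},
        {"process": "storage", "location": "underground",
         "start": "2013-01-20", "end": "2013-06-01"},
    ],
}

def exposure(part):
    """Estimate cosmic-ray exposure from the part's location history."""
    total = 0.0
    for step in part["history"]:
        days = (date.fromisoformat(step["end"]) - date.fromisoformat(step["start"])).days
        total += days * EXPOSURE_RATE[step["location"]]
    return total

print(f"{part['_id']}: {exposure(part):.1f} exposure units")
```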

  10. Danish Colorectal Cancer Group Database

    Directory of Open Access Journals (Sweden)

    Ingeholm P

    2016-10-01

    Full Text Available Peter Ingeholm,1,2 Ismail Gögenur,1,3 Lene H Iversen1,4 1Danish Colorectal Cancer Group Database, Copenhagen, 2Department of Pathology, Herlev University Hospital, Herlev, 3Department of Surgery, Roskilde University Hospital, Roskilde, 4Department of Surgery P, Aarhus University Hospital, Aarhus C, Denmark Aim of database: The aim of the database, which has existed for registration of all patients with colorectal cancer in Denmark since 2001, is to improve the prognosis for this patient group. Study population: All Danish patients with newly diagnosed colorectal cancer who are either diagnosed or treated in a surgical department of a public Danish hospital. Main variables: The database comprises an array of surgical, radiological, oncological, and pathological variables. The surgeons record data such as diagnostics performed, including type and results of radiological examinations, lifestyle factors, comorbidity and performance, treatment including the surgical procedure, urgency of surgery, and intra- and postoperative complications within 30 days after surgery. The pathologists record data such as tumor type, number of lymph nodes and metastatic lymph nodes, surgical margin status, and other pathological risk factors. Descriptive data: The database has had >95% completeness in including patients with colorectal adenocarcinoma, with >54,000 patients registered so far, approximately one-third rectal cancers and two-thirds colon cancers, and an overrepresentation of men among rectal cancer patients. The stage distribution remained more or less constant until 2014, with a tendency toward a lower rate of stage IV and a higher rate of stage I after the introduction of the national screening program in 2014. The 30-day mortality rate after elective surgery has been reduced from >7% in 2001–2003 to <2% since 2013. Conclusion: The database is a national population-based clinical database with high patient and data completeness for the perioperative period

  11. Cloud Databases: A Paradigm Shift in Databases

    Directory of Open Access Journals (Sweden)

    Indu Arora

    2012-07-01

    Full Text Available Relational databases ruled the Information Technology (IT) industry for almost 40 years. But the last few years have seen sea changes in the way IT is used and viewed. Standalone applications have been replaced with web-based applications, dedicated servers with multiple distributed servers, and dedicated storage with network storage. Cloud computing has become a reality due to its lower cost, scalability and pay-as-you-go model. It is one of the biggest changes in IT since the rise of the World Wide Web. Cloud databases such as Big Table, Sherpa and SimpleDB are becoming popular. They address the limitations of existing relational databases related to scalability, ease of use and dynamic provisioning. Cloud databases are mainly used for data-intensive applications such as data warehousing, data mining and business intelligence. These applications are read-intensive, scalable and elastic in nature. Transactional data management applications such as banking, airline reservation, online e-commerce and supply chain management applications are write-intensive. Databases supporting such applications require ACID (Atomicity, Consistency, Isolation and Durability) properties, but these databases are difficult to deploy in the cloud. The goal of this paper is to review the state of the art in cloud databases and various architectures. It further assesses the challenges in developing cloud databases that meet user requirements and discusses popularly used cloud databases.

  12. OCLC's Database Conversion: A User's Perspective

    Directory of Open Access Journals (Sweden)

    Arnold Wajenberg

    1981-09-01

    Full Text Available This article describes the experience of a large academic library with headings in the OCLC database that have been converted to AACR2 form. It also considers the use of LC authority records in the database. Specific problems are discussed, including some resulting from LC practices. Nevertheless, the presence of the authority records, and especially the conversion of about 40 percent of the headings in the bibliographic file, has been of great benefit to the library, significantly speeding up the cataloging operation. An appendix contains guidelines for the cataloging staff of the University of Illinois, Urbana-Champaign in the interpretation and use of LC authority records and converted headings.

  13. Logical database design principles

    CERN Document Server

    Garmany, John; Clark, Terry

    2005-01-01

    INTRODUCTION TO LOGICAL DATABASE DESIGN: Understanding a Database; Database Architectures; Relational Databases; Creating the Database; System Development Life Cycle (SDLC); Systems Planning: Assessment and Feasibility; System Analysis: Requirements; System Analysis: Requirements Checklist; Models Tracking and Schedules; Design Modeling; Functional Decomposition Diagram; Data Flow Diagrams; Data Dictionary; Logical Structures and Decision Trees; System Design: Logical. SYSTEM DESIGN AND IMPLEMENTATION: The ER Approach; Entities and Entity Types; Attribute Domains; Attributes; Set-Valued Attributes; Weak Entities; Constraint

  14. An Interoperable Cartographic Database

    OpenAIRE

    Slobodanka Ključanin; Zdravko Galić

    2007-01-01

    The concept of producing a prototype of interoperable cartographic database is explored in this paper, including the possibilities of integration of different geospatial data into the database management system and their visualization on the Internet. The implementation includes vectorization of the concept of a single map page, creation of the cartographic database in an object-relation database, spatial analysis, definition and visualization of the database content in the form of a map on t...

  15. REDIdb: the RNA editing database.

    Science.gov (United States)

    Picardi, Ernesto; Regina, Teresa Maria Rosaria; Brennicke, Axel; Quagliariello, Carla

    2007-01-01

    The RNA Editing Database (REDIdb) is an interactive, web-based database created and designed with the aim to allocate RNA editing events such as substitutions, insertions and deletions occurring in a wide range of organisms. The database contains both fully and partially sequenced DNA molecules for which editing information is available either by experimental inspection (in vitro) or by computational detection (in silico). Each record of REDIdb is organized in a specific flat-file containing a description of the main characteristics of the entry, a feature table with the editing events and related details and a sequence zone with both the genomic sequence and the corresponding edited transcript. REDIdb is a relational database in which the browsing and identification of editing sites has been simplified by means of two facilities to either graphically display genomic or cDNA sequences or to show the corresponding alignment. In both cases, all editing sites are highlighted in colour and their relative positions are detailed by mousing over. New editing positions can be directly submitted to REDIdb after a user-specific registration to obtain authorized secure access. This first version of REDIdb database stores 9964 editing events and can be freely queried at http://biologia.unical.it/py_script/search.html.
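    For intuition, a record organized as described (a description block, a feature table of editing events, and a sequence zone) could be modelled and queried roughly as below; the field names and the sample substitution event are illustrative and are not REDIdb's actual flat-file syntax.

```python
record = {
    "description": {"organism": "Arabidopsis thaliana", "molecule": "mitochondrial mRNA"},
    "features": [  # one entry per editing event
        {"type": "substitution", "position": 77, "genomic": "C", "edited": "U"},
    ],
    "genomic_seq": "ATGGCC...",
    "edited_seq":  "ATGGUU...",
}

def editing_positions(rec, event_type="substitution"):
    """Return positions of a given editing event type, as a browser might highlight them."""
    return [f["position"] for f in rec["features"] if f["type"] == event_type]

print(editing_positions(record))   # -> [77]
```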

  16. Renewal Strings for Cleaning Astronomical Databases

    OpenAIRE

    Storkey, Amos J.; Hambly, Nigel C.; Williams, Christopher K. I.; Mann, Robert G.

    2014-01-01

    Large astronomical databases obtained from sky surveys such as the SuperCOSMOS Sky Surveys (SSS) invariably suffer from a small number of spurious records coming from artefactual effects of the telescope, satellites and junk objects in orbit around earth and physical defects on the photographic plate or CCD. Though relatively small in number these spurious records present a significant problem in many situations where they can become a large proportion of the records potentially of interest t...

  17. Databases: Computerized Resource Retrieval Systems. Inservice Series No. 5.

    Science.gov (United States)

    Wilson, Mary Alice

    This document defines and describes electronic databases and provides guidance for organizing a useful database and for selecting hardware and software. Alternatives such as using larger machines are discussed, as are the computer skills necessary to use an electronic database and the use of the computer in the classroom. Files, records, and…

  18. FINDbase: A worldwide database for genetic variation allele frequencies updated

    NARCIS (Netherlands)

    M. Georgitsi (Marianthi); E. Viennas (Emmanouil); D.I. Antoniou (Dimitris I.); V. Gkantouna (Vassiliki); S. van Baal (Sjozef); E.F. Petricoin (Emanuel F.); K. Poulas (Konstantinos); G. Tzimas (Giannis); G.P. Patrinos (George)

    2011-01-01

    Frequency of INherited Disorders database (FINDbase; http://www.findbase.org) records frequencies of causative genetic variations worldwide. Database records include the population and ethnic group or geographical region, the disorder name and the related gene, accompanied by links to

  19. Revisions to Contributed Cataloging in a Cooperative Cataloging Database

    Directory of Open Access Journals (Sweden)

    Judith Hudson

    1981-06-01

    Full Text Available OCLC is the largest bibliographic utility in the United States. One of its greatest assets is its computerized database of standardized cataloging information. The database, which is built on the principle of shared cataloging, consists of cataloging records input from Library of Congress MARC tapes and records contributed by member libraries.

  20. Manual for cataloging and indexing documents for database acquisition

    Energy Technology Data Exchange (ETDEWEB)

    Schwartz, S.R.; Phillips, S.L.; Perra, J.J.

    1978-07-01

    The descriptive cataloging and subject indexing rules and methodology needed to process bibliographic information for GRID database storage are documented. Data elements which may appear in a bibliographic record are tabulated. Examples of coded data entry forms are included in an appendix. Examples are given of unit records in the database corresponding to one bibliographic reference. (MHR)

  1. Database Description - RMOS | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available RMOS Database Description. General information of database: Database name: RMOS; Alternative nam...arch Unit, Shoshi Kikuchi, E-mail: ; Database classification: Plant databases - Rice Microarray Data and other Gene Expression Databases; Organism Taxonomy Name: Oryza sativa, Taxonomy ID: 4530; Database description: The Rice Microarray Opening Site is a database of comprehensive information for Rice Mic...es and manner of utilization of database: You can refer to the information of the

  2. FORMIDABEL: The Belgian Ants Database.

    Science.gov (United States)

    Brosens, Dimitri; Vankerkhoven, François; Ignace, David; Wegnez, Philippe; Noé, Nicolas; Heughebaert, André; Bortels, Jeannine; Dekoninck, Wouter

    2013-01-01

    FORMIDABEL is a database of Belgian ants containing more than 27,000 occurrence records. These records originate from collections, field sampling and literature. The database gives information on 76 native and 9 introduced ant species found in Belgium. The collection records originated mainly from the ants collection of the Royal Belgian Institute of Natural Sciences (RBINS), the 'Gaspar' ants collection in Gembloux and the zoological collection of the University of Liège (ULG). The oldest occurrences date back to May 1866; the most recent refer to August 2012. FORMIDABEL is a work in progress and the database is updated twice a year. The latest version of the dataset is publicly and freely accessible through this URL: http://ipt.biodiversity.be/resource.do?r=formidabel. The dataset is also retrievable via the GBIF data portal through this link: http://data.gbif.org/datasets/resource/14697 A dedicated geo-portal, developed by the Belgian Biodiversity Platform, is accessible at: http://www.formicidae-atlas.be FORMIDABEL is a joint cooperation of the Flemish ants working group "Polyergus" (http://formicidae.be) and the Walloon ants working group "FourmisWalBru" (http://fourmiswalbru.be). The original database was created in 2002 in the context of the preliminary red data book of Flemish ants (Dekoninck et al. 2003). Later, in 2005, data from the southern part of Belgium, Wallonia and Brussels, were added. In 2012 this dataset was again updated for the creation of the first Belgian Ants Atlas (Dekoninck et al. 2012). The main purpose of this atlas was to generate maps for all outdoor-living ant species in Belgium using an overlay of the standard Belgian ecoregions. By using this overlay, for most species we can discern a clear and often restricted distribution pattern in Belgium, based mainly on vegetation and soil types.

  3. E3 Staff Database

    Data.gov (United States)

    US Agency for International Development — The E3 Staff database is maintained by the E3 PDMS (Professional Development & Management Services) office. The database is MySQL. It is manually updated by E3 staff as...

  4. Native Health Research Database

    Science.gov (United States)

    ... The Native Health Database requires JavaScript in order to function. ... To learn more about searching the Native Health Database, click here. Searchable fields include Keywords, Title, Author, and Source of Publication ...

  5. Physiological Information Database (PID)

    Science.gov (United States)

    EPA has developed a physiological information database (created using Microsoft ACCESS) intended to be used in PBPK modeling. The database contains physiological parameter values for humans from early childhood through senescence as well as similar data for laboratory animal spec...

  6. Cell Centred Database (CCDB)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Cell Centered Database (CCDB) is a web accessible database for high resolution 2D, 3D and 4D data from light and electron microscopy, including correlated imaging.

  7. Database Urban Europe

    NARCIS (Netherlands)

    Sleutjes, B.; de Valk, H.A.G.

    2016-01-01

    Database Urban Europe: ResSegr database on segregation in The Netherlands. Collaborative research on residential segregation in Europe 2014–2016 funded by JPI Urban Europe (Joint Programming Initiative Urban Europe).

  8. Scopus database: a review.

    Science.gov (United States)

    Burnham, Judy F

    2006-03-08

    The Scopus database provides access to STM journal articles and the references included in those articles, allowing the searcher to search both forward and backward in time. The database can be used for collection development as well as for research. This review provides information on the key points of the database and compares it to Web of Science. Neither database is all-inclusive, but they complement each other. If a library can only afford one, the choice must be based on institutional needs.

  9. Future database machine architectures

    OpenAIRE

    Hsiao, David K.

    1984-01-01

    There are many software database management systems available on many general-purpose computers ranging from micros to super-mainframes. Database machines as backend computers can offload the database management work from the mainframe so that we can retain the same mainframe longer. However, the database backend must also demonstrate lower cost, higher performance, and newer functionality. Some of the fundamental architecture issues in the design of high-performance and great-capacity datab...

  10. MPlus Database system

    Energy Technology Data Exchange (ETDEWEB)

    1989-01-20

    The MPlus Database program was developed to keep track of mail received. This system was developed by TRESP for the Department of Energy/Oak Ridge Operations. The MPlus Database program is a PC application, written in dBase III+ and compiled with Clipper into an executable file. The files you need to run the MPlus Database program can be installed on a Bernoulli, or a hard drive. This paper discusses the use of this database.

  11. DEVELOPING MULTITHREADED DATABASE APPLICATION USING JAVA TOOLS AND ORACLE DATABASE MANAGEMENT SYSTEM IN INTRANET ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    Raied Salman

    2015-11-01

    Full Text Available In many business organizations, database applications are designed and implemented using various DBMSs and programming languages. These applications are used to maintain databases for the organizations. The organization's departments can be located at different sites and connected by an intranet environment. In such an environment, maintaining database records becomes a complex task that needs to be addressed. In this paper an intranet application is designed and implemented using the object-oriented programming language Java and the object-relational database management system Oracle in a multithreaded operating system environment.
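    The paper's application is written in Java against Oracle; purely to illustrate the multithreaded access pattern it describes (several worker threads, each with its own connection, writing department records), here is a minimal Python sketch with sqlite3 standing in for Oracle. The table, departments, and names are invented.

```python
import sqlite3
import threading

DB = "departments.db"   # hypothetical local database standing in for Oracle

def setup():
    with sqlite3.connect(DB) as con:
        con.execute("CREATE TABLE IF NOT EXISTS staff (dept TEXT, name TEXT)")

def worker(dept, names):
    # Each thread opens its own connection, the usual rule for thread-safe access.
    con = sqlite3.connect(DB)
    with con:
        con.executemany("INSERT INTO staff VALUES (?, ?)",
                        [(dept, n) for n in names])
    con.close()

setup()
threads = [threading.Thread(target=worker, args=(d, ["alice", "bob"]))
           for d in ("sales", "hr", "it")]
for t in threads:
    t.start()
for t in threads:
    t.join()

with sqlite3.connect(DB) as con:
    print(con.execute("SELECT dept, COUNT(*) FROM staff GROUP BY dept").fetchall())
```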

  12. CTD_DATABASE - Cascadia tsunami deposit database

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — The Cascadia Tsunami Deposit Database contains data on the location and sedimentological properties of tsunami deposits found along the Cascadia margin. Data have...

  13. RPG: the Ribosomal Protein Gene database

    OpenAIRE

    Nakao, Akihiro; Yoshihama, Maki; Kenmochi, Naoya

    2004-01-01

    RPG (http://ribosome.miyazaki-med.ac.jp/) is a new database that provides detailed information about ribosomal protein (RP) genes. It contains data from humans and other organisms, including Drosophila melanogaster, Caenorhabditis elegans, Saccharomyces cerevisiae, Methanococcus jannaschii and Escherichia coli. Users can search the database by gene name and organism. Each record includes sequences (genomic, cDNA and amino acid sequences), intron/exon structures, genomic locations and informa...

  14. Database Citation in Full Text Biomedical Articles

    OpenAIRE

    Şenay Kafkas; Jee-Hyub Kim; Johanna R. McEntyre

    2013-01-01

    Molecular biology and literature databases represent essential infrastructure for life science research. Effective integration of these data resources requires that there are structured cross-references at the level of individual articles and biological records. Here, we describe the current patterns of how database entries are cited in research articles, based on analysis of the full text Open Access articles available from Europe PMC. Focusing on citation of entries in the European Nucleoti...

  15. CAR2 - Czech Database of Car Speech

    Directory of Open Access Journals (Sweden)

    P. Sovka

    1999-12-01

    Full Text Available This paper presents a new Czech-language two-channel (stereo) speech database recorded in a car environment. The database was designed for experiments with speech enhancement for communication purposes and for the study and design of robust speech recognition systems. Tools for automated phoneme labelling based on Baum-Welch re-estimation were realised. A noise analysis of the car background environment was also carried out.

  16. Database Description - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Trypanosomes Database Description. General information of database: Database name: Trypanosomes Database; ...rmation and Systems, Yata 1111, Mishima, Shizuoka 411-8540, JAPAN, E-mail: ; Database classification: Protein sequence databases; Organism Taxonomy Name: Trypanosoma, Taxonomy ID: 5690; Taxonomy Name: Homo sapiens, Taxonomy ID: 9606; Database description: The Trypanosomes database is a database providing th

  17. Database Description - PLACE | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available PLACE Database Description. General information of database: Database name: A Database of Plant Cis-acting Regu...; ...araki 305-8602, Japan, National Institute of Agrobiological Sciences, E-mail: ; Database classification: Plant databases; Organism Taxonomy Name: Tracheophyta, Taxonomy ID: 58023; Database description: PLACE is a database of... motifs found in plant cis-acting regulatory DNA elements based on previously pub

  18. Annotation and retrieval in protein interaction databases

    Science.gov (United States)

    Cannataro, Mario; Hiram Guzzi, Pietro; Veltri, Pierangelo

    2014-06-01

    Biological databases have been developed with a special focus on the efficient retrieval of single records or the efficient computation of specialized bioinformatics algorithms against the overall database, such as in sequence alignment. The continuous production of biological knowledge, spread over several biological databases and ontologies such as Gene Ontology, together with the availability of efficient techniques to handle such knowledge, such as annotation and semantic similarity measures, enables the development of novel bioinformatics applications that explicitly use and integrate such knowledge. After introducing the annotation process and the main semantic similarity measures, this paper shows how annotations and semantic similarity can be exploited to improve the extraction and analysis of biologically relevant data from protein interaction databases. As case studies, the paper presents two novel software tools, OntoPIN and CytoSeVis, both based on the use of Gene Ontology annotations, for the advanced querying of protein interaction databases and for the enhanced visualization of protein interaction networks.
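    As a toy illustration of annotation-based similarity (not the specific measures implemented in OntoPIN or CytoSeVis), a set-based Jaccard index over Gene Ontology annotations can be computed as follows; the proteins and GO identifiers are made up.

```python
# Hypothetical GO annotations for two proteins.
annotations = {
    "P1": {"GO:0005515", "GO:0006468", "GO:0005634"},
    "P2": {"GO:0005515", "GO:0005634", "GO:0016301"},
}

def jaccard(a, b):
    """Set-based similarity between two annotation profiles (0 = disjoint, 1 = identical)."""
    return len(a & b) / len(a | b)

score = jaccard(annotations["P1"], annotations["P2"])
print(f"annotation similarity P1-P2: {score:.2f}")

# In a query tool like the ones described, such a score could be used to rank
# interaction partners retrieved from a protein interaction database.
```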

  19. An audiovisual database of English speech sounds

    Science.gov (United States)

    Frisch, Stefan A.; Nikjeh, Dee Adams

    2003-10-01

    A preliminary audiovisual database of English speech sounds has been developed for teaching purposes. This database contains all Standard English speech sounds produced in isolated words in word initial, word medial, and word final position, unless not allowed by English phonotactics. There is one example of each word spoken by a male and a female talker. The database consists of an audio recording, video of the face from a 45 deg angle off of center, and ultrasound video of the tongue in the mid-sagittal plane. The files contained in the database are suitable for examination by the Wavesurfer freeware program in audio or video modes [Sjolander and Beskow, KTH Stockholm]. This database is intended as a multimedia reference for students in phonetics or speech science. A demonstration and plans for further development will be presented.

  20. Modification Semantics in Now-Relative Databases

    DEFF Research Database (Denmark)

    Torp, Kristian; Jensen, Christian Søndergaard; Snodgrass, R. T.

    2004-01-01

    Most real-world databases record time-varying information. In such databases, the notion of "the current time," or NOW, occurs naturally and prominently. For example, when capturing the past states of a relation using begin and end time columns, tuples that are part of the current state have some...... past time as their begin time and NOW as their end time. While the semantics of such variable databases has been described in detail and is well understood, the modification of variable databases remains unexplored. This paper defines the semantics of modifications involving the variable NOW. More...... specifically, the problems with modifications in the presence of NOW are explored, illustrating that the main problems are with modifications of tuples that reach into the future. The paper defines the semantics of modifications (including insertions, deletions, and updates) of databases without NOW, with NOW...
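    The flavour of the problem can be conveyed with a tiny table in which NOW is represented by a sentinel end date; a logical deletion of a current tuple then closes it at the time of the modification instead of removing it. The sentinel value, schema, and data below are illustrative assumptions, not the paper's formal semantics.

```python
import sqlite3
from datetime import date

NOW_SENTINEL = "9999-12-31"          # common stand-in for the variable NOW

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE emp (name TEXT, dept TEXT, begin_t TEXT, end_t TEXT)")
con.execute("INSERT INTO emp VALUES ('Ann', 'Sales', '2001-03-01', ?)", (NOW_SENTINEL,))

# Logical deletion of a current tuple: close it at the modification time
# instead of physically removing it, so past states stay queryable.
today = date.today().isoformat()
con.execute("UPDATE emp SET end_t = ? WHERE name = 'Ann' AND end_t = ?",
            (today, NOW_SENTINEL))

print(con.execute("SELECT * FROM emp").fetchall())
```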

  1. Keyword Search in Databases

    CERN Document Server

    Yu, Jeffrey Xu; Chang, Lijun

    2009-01-01

    It has become highly desirable to provide users with flexible ways to query/search information over databases as simple as keyword search like Google search. This book surveys the recent developments on keyword search over databases, and focuses on finding structural information among objects in a database using a set of keywords. Such structural information to be returned can be either trees or subgraphs representing how the objects, that contain the required keywords, are interconnected in a relational database or in an XML database. The structural keyword search is completely different from

  2. An Interoperable Cartographic Database

    Directory of Open Access Journals (Sweden)

    Slobodanka Ključanin

    2007-05-01

    Full Text Available The concept of producing a prototype of interoperable cartographic database is explored in this paper, including the possibilities of integration of different geospatial data into the database management system and their visualization on the Internet. The implementation includes vectorization of the concept of a single map page, creation of the cartographic database in an object-relation database, spatial analysis, definition and visualization of the database content in the form of a map on the Internet. 

  3. Integrating Variances into an Analytical Database

    Science.gov (United States)

    Sanchez, Carlos

    2010-01-01

    For this project, I enrolled in numerous SATERN courses that taught the basics of database programming. These include: Basic Access 2007 Forms, Introduction to Database Systems, Overview of Database Design, and others. My main job was to create an analytical database that can handle many stored forms and make it easy to interpret and organize. Additionally, I helped improve an existing database and populate it with information. These databases were designed to be used with data from Safety Variances and DCR forms. The research consisted of analyzing the database and comparing the data to find out which entries were repeated the most. If an entry happened to be repeated several times in the database, that would mean that the rule or requirement targeted by that variance has been bypassed many times already and so the requirement may not really be needed, but rather should be changed to allow the variance's conditions permanently. This project did not only restrict itself to the design and development of the database system, but also worked on exporting the data from the database to a different format (e.g. Excel or Word) so it could be analyzed in a simpler fashion. Thanks to the change in format, the data was organized in a spreadsheet that made it possible to sort the data by categories or types and helped speed up searches. Once my work with the database was done, the records of variances could be arranged so that they were displayed in numerical order, or one could search for a specific document targeted by the variances and restrict the search to only include variances that modified a specific requirement. A great part that contributed to my learning was SATERN, NASA's resource for education. Thanks to the SATERN online courses I took over the summer, I was able to learn many new things about computers and databases and also go more in depth into topics I already knew about.
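    The analysis described above (counting how often each requirement is targeted by a variance and exporting the counts in a spreadsheet-friendly format) could be sketched as follows; the field names and sample records are hypothetical and do not come from the actual database.

```python
import csv
from collections import Counter

# Hypothetical variance records as they might be exported from the Access database.
variances = [
    {"id": "V-001", "requirement": "Requirement A, par 2.4"},
    {"id": "V-002", "requirement": "Requirement A, par 2.4"},
    {"id": "V-003", "requirement": "Requirement B, par 5.1"},
]

counts = Counter(v["requirement"] for v in variances)

# Requirements bypassed many times may be candidates for a permanent change.
with open("variance_counts.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["requirement", "times_varied"])
    writer.writerows(counts.most_common())

print(counts.most_common(1))
```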

  4. The NCBI Taxonomy database.

    Science.gov (United States)

    Federhen, Scott

    2012-01-01

    The NCBI Taxonomy database (http://www.ncbi.nlm.nih.gov/taxonomy) is the standard nomenclature and classification repository for the International Nucleotide Sequence Database Collaboration (INSDC), comprising the GenBank, ENA (EMBL) and DDBJ databases. It includes organism names and taxonomic lineages for each of the sequences represented in the INSDC's nucleotide and protein sequence databases. The taxonomy database is manually curated by a small group of scientists at the NCBI who use the current taxonomic literature to maintain a phylogenetic taxonomy for the source organisms represented in the sequence databases. The taxonomy database is a central organizing hub for many of the resources at the NCBI, and provides a means for clustering elements within other domains of NCBI web site, for internal linking between domains of the Entrez system and for linking out to taxon-specific external resources on the web. Our primary purpose is to index the domain of sequences as conveniently as possible for our user community.

  5. A Noisy 10GB Provenance Database

    Energy Technology Data Exchange (ETDEWEB)

    Cheah, You-Wei; Plale, Beth; Kendall-Morwick, Joey; Leake, David; Ramakrishnan, Lavanya

    2011-06-06

    Provenance of scientific data is a key piece of the metadata record for the data's ongoing discovery and reuse. Provenance collection systems capture provenance on the fly, however, the protocol between application and provenance tool may not be reliable. Consequently, the provenance record can be partial, partitioned, and simply inaccurate. We use a workflow emulator that models faults to construct a large 10GB database of provenance that we know is noisy (that is, has errors). We discuss the process of generating the provenance database, and show early results on the kinds of provenance analysis enabled by the large provenance.

  6. Brain Tumor Database, a free relational database for collection and analysis of brain tumor patient information.

    Science.gov (United States)

    Bergamino, Maurizio; Hamilton, David J; Castelletti, Lara; Barletta, Laura; Castellan, Lucio

    2015-03-01

    In this study, we describe the development and utilization of a relational database designed to manage the clinical and radiological data of patients with brain tumors. The Brain Tumor Database was implemented using MySQL v.5.0, while the graphical user interface was created using PHP and HTML, thus making it easily accessible through a web browser. This web-based approach allows for multiple institutions to potentially access the database. The BT Database can record brain tumor patient information (e.g. clinical features, anatomical attributes, and radiological characteristics) and be used for clinical and research purposes. Analytic tools to automatically generate statistics and different plots are provided. The BT Database is a free and powerful user-friendly tool with a wide range of possible clinical and research applications in neurology and neurosurgery. The BT Database graphical user interface source code and manual are freely available at http://tumorsdatabase.altervista.org.

  7. Keeping electronic records secure.

    Science.gov (United States)

    Easton, David

    2013-10-01

    Are electronic engineering maintenance records relating to the hospital estate or a medical device as important as electronic patient records? Computer maintenance management systems (CMMS) are increasingly being used to manage all-round maintenance activities. However, the accuracy of the data held on them, and a level of security that prevents tampering with records, or other unauthorised changes to them to 'cover' poor practice, are both essential, so that, should an individual be injured or killed on hospital grounds, and a law suit follow, the estates team can be confident that it has accurate data to prove it has fulfilled its duty of care. Here David Easton MSc CEng FIHEEM MIET, director of Zener Engineering Services, and chair of IHEEM's Medical Devices Advisory Group, discusses the issues around maintenance databases, and the security and integrity of maintenance data.

  8. UPDATED OKLAHOMA OZARK FLORA: A Checklist for the Vascular Flora of Ozark Plateau in Oklahoma based on the work of C.S. Wallis and records from the Oklahoma Vascular Plants Database

    Directory of Open Access Journals (Sweden)

    Bruce W. Hoagland

    2007-12-01

    Full Text Available Charles Wallis' 1959 dissertation "Vascular Plants of the Oklahoma Ozarks" is one of the most important floristic works for state botanists and conservationists. Although a number of local and county floras for Oklahoma have been published, only Wallis and C. T. Eskew (1937) have completed regional studies. Wallis's interest in the Ozark flora began with his 1953 master's thesis, "The Spermophyta of Cherokee County Oklahoma," and subsequent studies in collaboration with U. T. Waterfall at Oklahoma A&M (Wallis 1957; Wallis and Waterfall 1953; Waterfall and Wallis 1962, 1963). This paper has two objectives: to update the taxonomy of Wallis's Ozark list (WOL) and to provide a current Ozark checklist (OC) by inclusion of records that did not appear in the WOL. Since several decades have passed since the WOL was completed, there have been many changes in the taxonomy of the plants listed. These updates will enhance the utility of the WOL for modern users and not detract from Wallis's original work.

  9. Robert Recorde

    CERN Document Server

    Williams, Jack

    2011-01-01

    The 16th-Century intellectual Robert Recorde is chiefly remembered for introducing the equals sign into algebra, yet the greater significance and broader scope of his work is often overlooked. This book presents an authoritative and in-depth analysis of the man, his achievements and his historical importance. This scholarly yet accessible work examines the latest evidence on all aspects of Recorde's life, throwing new light on a character deserving of greater recognition. Topics and features: presents a concise chronology of Recorde's life; examines his published works; describes Recorde's pro

  10. Update History of This Database - Arabidopsis Phenome Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Arabidopsis Phenome Database, Update History of This Database. 2017/02/27: Arabidopsis Phenome Database English archive site is opened. Arabidopsis Phenome Database (http://jphenom...e.info/?page_id=95) is opened.

  11. Update History of This Database - SKIP Stemcell Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available SKIP Stemcell Database, Update History of This Database. 2017/03/13: SKIP Stemcell Database English archive site is opened. 2013/03/29: SKIP Stemcell Database (https://www.skip.med.k...eio.ac.jp/SKIPSearch/top?lang=en) is opened.

  12. Database Description - RMG | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available RMG Database Description. General information of database: Database name: RMG; Alternative name: Rice Mitochondri...; ...ational Institute of Agrobiological Sciences, E-mail: ; Database classification: Nucleotide Sequence Databases; Organism Taxonomy Name: Oryza sativa Japonica Group, Taxonomy ID: 39947; Database description: This database co...e of rice mitochondrial genome and information on the analysis results. Features and manner of utilization of database

  13. One approach to design of speech emotion database

    Science.gov (United States)

    Uhrin, Dominik; Chmelikova, Zdenka; Tovarek, Jaromir; Partila, Pavol; Voznak, Miroslav

    2016-05-01

    This article describes a system for evaluating the credibility of recordings with emotional character. The sound recordings form a Czech-language database for training and testing speech emotion recognition systems. These systems are designed to detect human emotions in the voice. Knowledge of a speaker's emotional state is useful to the security forces and emergency call services. People in action (soldiers, police officers and firefighters) are often exposed to stress. Information about their emotional state, carried in the voice, helps the dispatcher adapt control commands for the intervention procedure. Call agents of an emergency call service must recognize the mental state of the caller to adjust the tone of the conversation. In this case, evaluation of the psychological state is the key factor for a successful intervention. A quality database of sound recordings is essential for building such systems. Quality databases exist, such as the Berlin Database of Emotional Speech or HUMAINE, but they were created by actors in an audio studio, which means that the recordings contain simulated rather than real emotions. Our research aims at creating a database of Czech emotional recordings of real human speech. Collecting sound samples for the database is only one of the tasks. Another, no less important, is to evaluate the significance of the recordings from the perspective of emotional states. The design of a methodology for evaluating the credibility of emotional recordings is described in this article. The results describe the advantages and applicability of the developed method.

  14. The HISTMAG database: combining historical, archaeomagnetic and volcanic data

    Science.gov (United States)

    Arneitz, Patrick; Leonhardt, Roman; Schnepp, Elisabeth; Heilig, Balázs; Mayrhofer, Franziska; Kovacs, Peter; Hejda, Pavel; Valach, Fridrich; Vadasz, Gergely; Hammerl, Christa; Egli, Ramon; Fabian, Karl; Kompein, Niko

    2017-09-01

    Records of the past geomagnetic field can be divided into two main categories. These are instrumental historical observations on the one hand, and field estimates based on the magnetization acquired by rocks, sediments and archaeological artefacts on the other hand. In this paper, a new database combining historical, archaeomagnetic and volcanic records is presented. HISTMAG is a relational database, implemented in MySQL, and can be accessed via a web-based interface (http://www.conrad-observatory.at/zamg/index.php/data-en/histmag-database). It combines available global historical data compilations covering the last ∼500 yr as well as archaeomagnetic and volcanic data collections from the last 50 000 yr. Furthermore, new historical and archaeomagnetic records, mainly from central Europe, have been acquired. In total, 190 427 records are currently available in the HISTMAG database, whereby the majority is related to historical declination measurements (155 525). The original database structure was complemented by new fields, which allow for a detailed description of the different data types. A user-comment function provides the possibility for a scientific discussion about individual records. Therefore, HISTMAG database supports thorough reliability and uncertainty assessments of the widely different data sets, which are an essential basis for geomagnetic field reconstructions. A database analysis revealed systematic offset for declination records derived from compass roses on historical geographical maps through comparison with other historical records, while maps created for mining activities represent a reliable source.

  15. Hazard Analysis Database Report

    CERN Document Server

    Grams, W H

    2000-01-01

    The Hazard Analysis Database was developed in conjunction with the hazard analysis activities conducted in accordance with DOE-STD-3009-94, Preparation Guide for U.S. Department of Energy Nonreactor Nuclear Facility Safety Analysis Reports, for HNF-SD-WM-SAR-067, Tank Farms Final Safety Analysis Report (FSAR). The FSAR is part of the approved Authorization Basis (AB) for the River Protection Project (RPP). This document describes, identifies, and defines the contents and structure of the Tank Farms FSAR Hazard Analysis Database and documents the configuration control changes made to the database. The Hazard Analysis Database contains the collection of information generated during the initial hazard evaluations and the subsequent hazard and accident analysis activities. The Hazard Analysis Database supports the preparation of Chapters 3, 4, and 5 of the Tank Farms FSAR and the Unreviewed Safety Question (USQ) process and consists of two major, interrelated data sets: (1) Hazard Analysis Database: Data from t...

  16. Conditioning Probabilistic Databases

    CERN Document Server

    Koch, Christoph

    2008-01-01

    Past research on probabilistic databases has studied the problem of answering queries on a static database. Application scenarios of probabilistic databases, however, often involve the conditioning of a database using additional information in the form of new evidence. The conditioning problem is thus to transform a probabilistic database of priors into a posterior probabilistic database which is materialized for subsequent query processing or further refinement. It turns out that the conditioning problem is closely related to the problem of computing exact tuple confidence values. It is known that exact confidence computation is an NP-hard problem. This has led researchers to consider approximation techniques for confidence computation. However, neither conditioning nor exact confidence computation can be solved using such techniques. In this paper we present efficient techniques for both problems. We study several problem decomposition methods and heuristics that are based on the most successful search techn...
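    For intuition, conditioning a tiny tuple-independent probabilistic database on new evidence can be done by brute-force enumeration of possible worlds, as sketched below; the efficient decomposition methods are the paper's actual contribution, and the tuples and evidence here are made up.

```python
from itertools import product

# Tuple-independent priors: tuple id -> probability of being present.
priors = {"t1": 0.6, "t2": 0.3, "t3": 0.5}

def worlds(priors):
    """Enumerate every possible world together with its prior probability."""
    ids = list(priors)
    for bits in product([0, 1], repeat=len(ids)):
        present = {i for i, b in zip(ids, bits) if b}
        p = 1.0
        for i in ids:
            p *= priors[i] if i in present else 1 - priors[i]
        yield present, p

# Evidence: at least one of t1, t2 is present in the true world.
def evidence(world):
    return bool(world & {"t1", "t2"})

z = sum(p for w, p in worlds(priors) if evidence(w))          # P(evidence)
posterior = {i: sum(p for w, p in worlds(priors) if evidence(w) and i in w) / z
             for i in priors}
print(posterior)   # posterior marginal probability of each tuple
```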

  17. Database design and database administration for a kindergarten

    OpenAIRE

    Vítek, Daniel

    2009-01-01

    The bachelor thesis deals with the creation of a database design for a standard kindergarten, installation of the designed database into the database system Oracle Database 10g Express Edition, and demonstration of administration tasks in this database system. The design was verified by developing an application that accesses the database.

  18. Data quality evaluation in medical database watermarking.

    Science.gov (United States)

    Franco-Contreras, Javier; Coatrieux, Gouenou; Massari, Philippe; Darmoni, Stefan; Cuppens-Boulahia, Nora; Cuppens, Frédéric; Roux, Christian

    2015-01-01

    The use of watermarking in the protection of medical relational databases requires that the introduced distortion does not hinder the interpretation of records. In this paper, we present the preliminary results of a watermarked data quality evaluation protocol developed to analyze the practitioner's perception of the watermark. These results show that some attributes are more appropriate for watermarking than others, and also that incoherent or unlikely records resulting from careless watermarking are easily identified by an expert.

  19. ITS-90 Thermocouple Database

    Science.gov (United States)

    SRD 60 NIST ITS-90 Thermocouple Database (Web, free access)   Web version of Standard Reference Database 60 and NIST Monograph 175. The database gives temperature -- electromotive force (emf) reference functions and tables for the letter-designated thermocouple types B, E, J, K, N, R, S and T. These reference functions have been adopted as standards by the American Society for Testing and Materials (ASTM) and the International Electrotechnical Commission (IEC).

  20. Searching Databases with Keywords

    Institute of Scientific and Technical Information of China (English)

    Shan Wang; Kun-Long Zhang

    2005-01-01

    Traditionally, the SQL query language is used to search the data in databases. However, it is inappropriate for end-users, since it is complex and hard to learn. End-users need to search databases with keywords, as they do in web search engines. This paper presents a survey of work on keyword search in databases. It also includes a brief introduction to the SEEKER system which has been developed.
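    A bare-bones version of relational keyword search (matching a set of keywords against the text columns of a single table, without the cross-table joins that full keyword search systems such as SEEKER perform) might look like this; the schema and data are invented.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE papers (title TEXT, abstract TEXT)")
con.executemany("INSERT INTO papers VALUES (?, ?)", [
    ("Keyword search in databases", "A survey of structural keyword search."),
    ("Cloud databases", "Scalability and elasticity in the cloud."),
])

def keyword_search(con, keywords):
    """Return rows in which every keyword matches at least one text column."""
    clause = " AND ".join("(title LIKE ? OR abstract LIKE ?)" for _ in keywords)
    params = [p for k in keywords for p in (f"%{k}%", f"%{k}%")]
    return con.execute(f"SELECT title FROM papers WHERE {clause}", params).fetchall()

print(keyword_search(con, ["keyword", "survey"]))   # -> [('Keyword search in databases',)]
```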

  1. Specialist Bibliographic Databases

    OpenAIRE

    Gasparyan, Armen Yuri; Yessirkepov, Marlen; Voronov, Alexander A.; Trukhachev, Vladimir I.; Kostyukova, Elena I.; Gerasimov, Alexey N.; Kitas, George D.

    2016-01-01

    Specialist bibliographic databases offer essential online tools for researchers and authors who work on specific subjects and perform comprehensive and systematic syntheses of evidence. This article presents examples of the established specialist databases, which may be of interest to those engaged in multidisciplinary science communication. Access to most specialist databases is through subscription schemes and membership in professional associations. Several aggregators of information and d...

  2. The Majorana Parts Tracking Database

    Energy Technology Data Exchange (ETDEWEB)

    Abgrall, N.; Aguayo, E.; Avignone, F. T.; Barabash, A. S.; Bertrand, F. E.; Brudanin, V.; Busch, M.; Byram, D.; Caldwell, A. S.; Chan, Y-D.; Christofferson, C. D.; Combs, D. C.; Cuesta, C.; Detwiler, J. A.; Doe, P. J.; Efremenko, Yu.; Egorov, V.; Ejiri, H.; Elliott, S. R.; Esterline, J.; Fast, J. E.; Finnerty, P.; Fraenkle, F. M.; Galindo-Uribarri, A.; Giovanetti, G. K.; Goett, J.; Green, M. P.; Gruszko, J.; Guiseppe, V. E.; Gusev, K.; Hallin, A. L.; Hazama, R.; Hegai, A.; Henning, R.; Hoppe, E. W.; Howard, S.; Howe, M. A.; Keeter, K. J.; Kidd, M. F.; Kochetov, O.; Konovalov, S. I.; Kouzes, R. T.; LaFerriere, B. D.; Leon, J. Diaz; Leviner, L. E.; Loach, J. C.; MacMullin, J.; Martin, R. D.; Meijer, S. J.; Mertens, S.; Miller, M. L.; Mizouni, L.; Nomachi, M.; Orrell, J. L.; O׳Shaughnessy, C.; Overman, N. R.; Petersburg, R.; Phillips, D. G.; Poon, A. W. P.; Pushkin, K.; Radford, D. C.; Rager, J.; Rielage, K.; Robertson, R. G. H.; Romero-Romero, E.; Ronquest, M. C.; Shanks, B.; Shima, T.; Shirchenko, M.; Snavely, K. J.; Snyder, N.; Soin, A.; Suriano, A. M.; Tedeschi, D.; Thompson, J.; Timkin, V.; Tornow, W.; Trimble, J. E.; Varner, R. L.; Vasilyev, S.; Vetter, K.; Vorren, K.; White, B. R.; Wilkerson, J. F.; Wiseman, C.; Xu, W.; Yakushev, E.; Young, A. R.; Yu, C. -H.; Yumatov, V.; Zhitnikov, I.

    2015-04-01

    The MAJORANA DEMONSTRATOR is an ultra-low background physics experiment searching for the neutrinoless double beta decay of 76Ge. The MAJORANA Parts Tracking Database is used to record the history of components used in the construction of the DEMONSTRATOR. Transportation, storage, and processes undergone by parts such as machining or cleaning are linked to part records. Tracking parts provides a great logistics benefit and an important quality assurance reference during construction. In addition, the location history of parts provides an estimate of their exposure to cosmic radiation. A web application for data entry and a radiation exposure calculator have been developed as tools for achieving the extreme radiopurity required for this rare decay search.

  3. Smart Location Database - Download

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Smart Location Database (SLD) summarizes over 80 demographic, built environment, transit service, and destination accessibility attributes for every census block...

  4. Database principles programming performance

    CERN Document Server

    O'Neil, Patrick

    2014-01-01

    Database: Principles Programming Performance provides an introduction to the fundamental principles of database systems. This book focuses on database programming and the relationships between principles, programming, and performance.Organized into 10 chapters, this book begins with an overview of database design principles and presents a comprehensive introduction to the concepts used by a DBA. This text then provides grounding in many abstract concepts of the relational model. Other chapters introduce SQL, describing its capabilities and covering the statements and functions of the programmi

  5. Smart Location Database - Service

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Smart Location Database (SLD) summarizes over 80 demographic, built environment, transit service, and destination accessibility attributes for every census block...

  6. Database Publication Practices

    DEFF Research Database (Denmark)

    Bernstein, P.A.; DeWitt, D.; Heuer, A.

    2005-01-01

    There has been a growing interest in improving the publication processes for database research papers. This panel reports on recent changes in those processes and presents an initial cut at historical data for the VLDB Journal and ACM Transactions on Database Systems.

  7. The Danish Melanoma Database

    DEFF Research Database (Denmark)

    Hölmich, Lisbet Rosenkrantz; Klausen, Siri; Spaun, Eva

    2016-01-01

    AIM OF DATABASE: The aim of the database is to monitor and improve the treatment and survival of melanoma patients. STUDY POPULATION: All Danish patients with cutaneous melanoma and in situ melanomas must be registered in the Danish Melanoma Database (DMD). In 2014, 2,525 patients with invasive......, nature, and treatment hereof is registered. In case of death, the cause and date are included. Currently, all data are entered manually; however, data catchment from the existing registries is planned to be included shortly. DESCRIPTIVE DATA: The DMD is an old research database, but new as a clinical...

  8. Transporter Classification Database (TCDB)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Transporter Classification Database details a comprehensive classification system for membrane transport proteins known as the Transporter Classification (TC)...

  9. The Relational Database Dictionary

    CERN Document Server

    Date, C. J.

    2006-01-01

    Avoid misunderstandings that can affect the design, programming, and use of database systems. Whether you're using Oracle, DB2, SQL Server, MySQL, or PostgreSQL, The Relational Database Dictionary will prevent confusion about the precise meaning of database-related terms (e.g., attribute, 3NF, one-to-many correspondence, predicate, repeating group, join dependency), helping to ensure the success of your database projects. Carefully reviewed for clarity, accuracy, and completeness, this authoritative and comprehensive quick-reference contains more than 600 terms, many with examples, covering i

  10. IVR EFP Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This database contains trip-level reports submitted by vessels participating in Exempted Fishery projects with IVR reporting requirements.

  11. Databases for Microbiologists

    Science.gov (United States)

    2015-01-01

    Databases play an increasingly important role in biology. They archive, store, maintain, and share information on genes, genomes, expression data, protein sequences and structures, metabolites and reactions, interactions, and pathways. All these data are critically important to microbiologists. Furthermore, microbiology has its own databases that deal with model microorganisms, microbial diversity, physiology, and pathogenesis. Thousands of biological databases are currently available, and it becomes increasingly difficult to keep up with their development. The purpose of this minireview is to provide a brief survey of current databases that are of interest to microbiologists. PMID:26013493

  12. Veterans Administration Databases

    Science.gov (United States)

    The Veterans Administration Information Resource Center provides database and informatics experts, customer service, expert advice, information products, and web technology to VA researchers and others.

  13. Residency Allocation Database

    Data.gov (United States)

    Department of Veterans Affairs — The Residency Allocation Database is used to determine allocation of funds for residency programs offered by Veterans Affairs Medical Centers (VAMCs). Information...

  14. The new Scandinavian Donations and Transfusions database (SCANDAT2)

    DEFF Research Database (Denmark)

    Edgren, Gustaf; Rostgaard, Klaus; Vasan, Senthil K

    2015-01-01

    -creation of SCANDAT with updated, identifiable data. We collected computerized data on blood donations and transfusions from blood banks covering all of Sweden and Denmark. After data cleaning, two structurally identical databases were created and the entire database was linked with nationwide health outcomes...... registers to attain complete follow-up for up to 47 years regarding hospital care, cancer, and death. RESULTS: After removal of erroneous records, the database contained 25,523,334 donation records, 21,318,794 transfusion records, and 3,692,653 unique persons with valid identification, presently followed...

  15. License - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us Trypanosomes Database License License to Use This Database Last updated : 2014/02/04 You may use this database...pecifies the license terms regarding the use of this database and the requirements you must follow in using this database.... The license for this database is specified in the Creative Commons... Attribution-Share Alike 2.1 Japan . If you use data from this database, please be sure attribute this database...pan is found here . With regard to this database, you are licensed to: freely access part or whole of this database

  16. LHC Databases on the Grid: Achievements and Open Issues

    CERN Document Server

    Vaniachine, A V

    2010-01-01

    To extract physics results from the recorded data, the LHC experiments are using Grid computing infrastructure. The event data processing on the Grid requires scalable access to non-event data (detector conditions, calibrations, etc.) stored in relational databases. The database-resident data are critical for the event data reconstruction processing steps and often required for physics analysis. This paper reviews LHC experience with database technologies for the Grid computing. List of topics includes: database integration with Grid computing models of the LHC experiments; choice of database technologies; examples of database interfaces; distributed database applications (data complexity, update frequency, data volumes and access patterns); scalability of database access in the Grid computing environment of the LHC experiments. The review describes areas in which substantial progress was made and remaining open issues.

  17. Copyright in Context: The OCLC Database.

    Science.gov (United States)

    Mason, Marilyn Gell

    1988-01-01

    Discusses topics related to OCLC adoption of guidelines for the use and transfer of OCLC-derived records, including the purpose of OCLC; the legal basis of copyrighting; technological change; compilation copyright; rationale for copyright of the OCLC database; impact on libraries; impact on networks; and relationships between OCLC and libraries. A…

  18. Database Cleanup: Errors in the Catalog.

    Science.gov (United States)

    Trombly, Susan T.

    2001-01-01

    Discusses the need for libraries to clean up their catalog databases to eliminate duplicate entries, update bibliographic or authority records, and correct errors. Reviews pertinent literature and considers methods that include human review as well as software and matching programs. (LRW)

  19. Opportunities and challenges of using diagnostic databases for monitoring livestock diseases in Denmark

    DEFF Research Database (Denmark)

    Lopes Antunes, Ana Carolina; Hisham Beshara Halasa, Tariq; Toft, Nils

    Several databases are being used in Denmark to record information at all stages and levels of modern livestock production. These databases are all developed for different purposes and gather large volumes of routinely collected data. Examples of existing databases for livestock are the Central...... Husbandry Register (CHR), Meat inspection database for cattle and swine, mortality database and movement database. These databases are owned by the Ministry of Food, Agriculture and Fisheries. Other databases, such as the Danish Cattle Database, are owned by the agricultural sector. In addition...... to the technical and political bottlenecks of gathering and combining data from the different databases, the questions remain on the sensitivity and timeliness of data for detecting unexpected animal health events. Thus, it is important to explore changes in data records over time from different databases in order...

  20. A relational database in neurosurgery.

    Science.gov (United States)

    Sicurello, F; Marchetti, M R; Cazzaniga, P

    1995-01-01

    , therapies, result, and hospital course. Medical language is closer to the natural one and presents some ambiguities. In order to solve this problem, a classification nomenclature was used for diagnosis definition. DISCHARGE LETTER: the document given to the patient when he is discharged. It extracts data from the previously described modules and contains standard headings. The information stored in the database is structured (e.g., diagnosis, name, surname, etc.) and access to this data takes place when the user wants to search the database, using particular queries where the identifying data of a patient is put as conditions for the search (SELECT age, name WHERE diagnosis="TRAUMA"). Logical operators and relational algebra of the relational DBMS allow more complex queries ((diagnosis="TRAUMA" AND age="19") OR sex="M"). The queries are deterministic, because data management uses a classification nomenclature. Data retrieval takes place through a matching, and the DBMS answers directly to the queries. The information retrieval speed depends upon the kind of system that is used; in our case retrieval time is low because the accesses to disk are few even for big databases. In medicine, clinical records can have a hierarchical structure and/or a relational one. Nevertheless, the hierarchical model presents a disadvantage: it is not very flexible because it is linked to a pre-defined structure; as a matter of fact, the definition of path is established in the beginning and not during the execution. Thus, a better representation of the system at a logical level requires a relational DBMS which exploits the relationships between entities in a vertical and horizontal way. That is why the developers adopted a mixed strategy which exploits the advantages of both models and which is provided by M Technology with SQL language (M/SQL). For the future, it is important to have at one's disposal multimedia technologies, which integrate different kinds of information (alp
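
    The record's SQL-style selections over coded clinical data can be reproduced with any relational engine; the sketch below uses an in-memory SQLite table whose columns are illustrative, not the system's actual schema.

```python
# Sketch of the record's SQL-style queries over coded clinical data, using a
# hypothetical patients table; column names are illustrative only.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patients(name TEXT, age INTEGER, sex TEXT, diagnosis TEXT)")
conn.executemany("INSERT INTO patients VALUES (?, ?, ?, ?)", [
    ("Rossi", 19, "M", "TRAUMA"),
    ("Bianchi", 54, "F", "TUMOUR"),
])

# SELECT age, name WHERE diagnosis = 'TRAUMA'
print(conn.execute(
    "SELECT age, name FROM patients WHERE diagnosis = 'TRAUMA'").fetchall())

# (diagnosis = 'TRAUMA' AND age = 19) OR sex = 'M'
print(conn.execute(
    "SELECT name FROM patients "
    "WHERE (diagnosis = 'TRAUMA' AND age = 19) OR sex = 'M'").fetchall())
```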

  1. Neutrosophic Relational Database Decomposition

    OpenAIRE

    Meena Arora; Ranjit Biswas; Dr. U.S.Pandey

    2011-01-01

    In this paper we present a method of decomposing a neutrosophic database relation with neutrosophic attributes into basic relational form. Our objective is a representation capable of manipulating incomplete as well as inconsistent information. Fuzzy relations or vague relations can only handle incomplete information. The authors use the neutrosophic relational database [8],[2] to show how imprecise data can be handled in a relational schema.
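
    As a hedged illustration of the underlying representation (not the paper's decomposition method), each attribute value below carries a (truth, indeterminacy, falsity) triple, which lets a relation hold incomplete as well as inconsistent information.

```python
# Illustration of the neutrosophic idea (not the paper's decomposition method):
# every attribute value carries a (truth, indeterminacy, falsity) triple, so a
# relation can hold incomplete as well as inconsistent information.

from dataclasses import dataclass

@dataclass
class NeutrosophicValue:
    value: object
    t: float   # degree of truth/membership
    i: float   # degree of indeterminacy
    f: float   # degree of falsity/non-membership

row = {
    "patient": NeutrosophicValue("Rossi", 1.0, 0.0, 0.0),      # certain
    "diagnosis": NeutrosophicValue("TRAUMA", 0.7, 0.3, 0.5),   # conflicting evidence
}

def is_incomplete(v, threshold=0.2):
    return v.i > threshold

def is_inconsistent(v):
    return v.t + v.f > 1.0     # fuzzy/vague relations cannot express this case

print(is_incomplete(row["diagnosis"]), is_inconsistent(row["diagnosis"]))
```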

  2. HIV Structural Database

    Science.gov (United States)

    SRD 102 HIV Structural Database (Web, free access)   The HIV Protease Structural Database is an archive of experimentally determined 3-D structures of Human Immunodeficiency Virus 1 (HIV-1), Human Immunodeficiency Virus 2 (HIV-2) and Simian Immunodeficiency Virus (SIV) Proteases and their complexes with inhibitors or products of substrate cleavage.

  3. Structural Ceramics Database

    Science.gov (United States)

    SRD 30 NIST Structural Ceramics Database (Web, free access)   The NIST Structural Ceramics Database (WebSCD) provides evaluated materials property data for a wide range of advanced ceramics known variously as structural ceramics, engineering ceramics, and fine ceramics.

  4. Odense Pharmacoepidemiological Database (OPED)

    DEFF Research Database (Denmark)

    Hallas, Jesper; Poulsen, Maja Hellfritzsch; Hansen, Morten Rix

    2017-01-01

    The Odense University Pharmacoepidemiological Database (OPED) is a prescription database established in 1990 by the University of Southern Denmark, covering reimbursed prescriptions from the county of Funen in Denmark and the region of Southern Denmark (1.2 million inhabitants). It is still active...

  5. The Danish Anaesthesia Database

    DEFF Research Database (Denmark)

    Antonsen, Kristian; Rosenstock, Charlotte Vallentin; Lundstrøm, Lars Hyldborg

    2016-01-01

    AIM OF DATABASE: The aim of the Danish Anaesthesia Database (DAD) is the nationwide collection of data on all patients undergoing anesthesia. Collected data are used for quality assurance, quality development, and serve as a basis for research projects. STUDY POPULATION: The DAD was founded in 2004...

  6. World Database of Happiness

    NARCIS (Netherlands)

    R. Veenhoven (Ruut)

    1995-01-01

    textabstractABSTRACT The World Database of Happiness is an ongoing register of research on subjective appreciation of life. Its purpose is to make the wealth of scattered findings accessible, and to create a basis for further meta-analytic studies. The database involves four sections:
    1.

  7. Balkan Vegetation Database

    NARCIS (Netherlands)

    Vassilev, Kiril; Pedashenko, Hristo; Alexandrova, Alexandra; Tashev, Alexandar; Ganeva, Anna; Gavrilova, Anna; Gradevska, Asya; Assenov, Assen; Vitkova, Antonina; Grigorov, Borislav; Gussev, Chavdar; Filipova, Eva; Aneva, Ina; Knollová, Ilona; Nikolov, Ivaylo; Georgiev, Georgi; Gogushev, Georgi; Tinchev, Georgi; Pachedjieva, Kalina; Koev, Koycho; Lyubenova, Mariyana; Dimitrov, Marius; Apostolova-Stoyanova, Nadezhda; Velev, Nikolay; Zhelev, Petar; Glogov, Plamen; Natcheva, Rayna; Tzonev, Rossen; Boch, Steffen; Hennekens, Stephan M.; Georgiev, Stoyan; Stoyanov, Stoyan; Karakiev, Todor; Kalníková, Veronika; Shivarov, Veselin; Russakova, Veska; Vulchev, Vladimir

    2016-01-01

    The Balkan Vegetation Database (BVD; GIVD ID: EU-00-019; http://www.givd.info/ID/EU-00- 019) is a regional database that consists of phytosociological relevés from different vegetation types from six countries on the Balkan Peninsula (Albania, Bosnia and Herzegovina, Bulgaria, Kosovo, Montenegro

  9. Biological Macromolecule Crystallization Database

    Science.gov (United States)

    SRD 21 Biological Macromolecule Crystallization Database (Web, free access)   The Biological Macromolecule Crystallization Database and NASA Archive for Protein Crystal Growth Data (BMCD) contains the conditions reported for the crystallization of proteins and nucleic acids used in X-ray structure determinations and archives the results of microgravity macromolecule crystallization studies.

  10. Database Publication Practices

    DEFF Research Database (Denmark)

    Bernstein, P.A.; DeWitt, D.; Heuer, A.

    2005-01-01

    There has been a growing interest in improving the publication processes for database research papers. This panel reports on recent changes in those processes and presents an initial cut at historical data for the VLDB Journal and ACM Transactions on Database Systems....

  11. An organic database system

    NARCIS (Netherlands)

    M.L. Kersten (Martin); A.P.J.M. Siebes (Arno)

    1999-01-01

    textabstractThe pervasive penetration of database technology may suggest that we have reached the end of the database research era. The contrary is true. Emerging technology, in hardware, software, and connectivity, brings a wealth of opportunities to push technology to a new level of maturity.

  12. Atomic Spectra Database (ASD)

    Science.gov (United States)

    SRD 78 NIST Atomic Spectra Database (ASD) (Web, free access)   This database provides access and search capability for NIST critically evaluated data on atomic energy levels, wavelengths, and transition probabilities that are reasonably up-to-date. The NIST Atomic Spectroscopy Data Center has carried out these critical compilations.

  13. World Database of Happiness

    NARCIS (Netherlands)

    R. Veenhoven (Ruut)

    1995-01-01

    textabstractABSTRACT The World Database of Happiness is an ongoing register of research on subjective appreciation of life. Its purpose is to make the wealth of scattered findings accessible, and to create a basis for further meta-analytic studies. The database involves four sections:
    1. Bib

  15. Database Description - Yeast Interacting Proteins Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Yeast Interacting Proteins Database Database Description General information of database Database name Yeast... Interacting Proteins Database Alternative name - Creator Creator Name: Takashi Ito* Creator Affiliation: Di...-4-7136-3989 FAX: +81-4-7136-3979 E-mail : Database classification Metabolic and Signaling Pathways - Protei...n-protein interactions Organism Taxonomy Name: Saccharomyces cerevisiae Taxonomy ID: 4932 Database descripti...ive yeast two-hybrid analysis of budding yeast proteins. Features and manner of utilization of database Prot

  16. Phenological Records

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Phenology is the scientific study of periodic biological phenomena, such as flowering, breeding, and migration, in relation to climatic conditions. The few records...

  17. A Foundation for Vacuuming Temporal Databases

    DEFF Research Database (Denmark)

    Skyt, Janne; Jensen, Christian Søndergaard; Mark, L.

    2003-01-01

    A wide range of real-world database applications, including financial and medical applications, are faced with accountability and traceability requirements. These requirements lead to the replacement of the usual update-in-place policy by an append-only policy that retains all previous states...... in the database. This policy results in so-called transaction-time databases which are ever-growing. A variety of physical storage structures and indexing techniques as well as query languages have been proposed for transaction-time databases, but the support for physical removal of data, termed vacuuming, has...... only received little attention. Such vacuuming is called for by, e.g., the laws of many countries and the policies of many businesses. Although necessary, with vacuuming, the database’s perfect recollection of the past may be compromised via, e.g., selective removal of records pertaining to past states...
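
    A toy sketch of the policy described: updates append new versions rather than overwriting, and a vacuum step physically removes versions of past states that ended before a cutoff. This is an illustration of the idea only, not the paper's formal framework.

```python
# Toy model of a transaction-time table: updates append new versions instead of
# overwriting, and vacuuming physically removes versions whose validity ended
# before a cutoff. A sketch of the idea, not the paper's formal specification.

INF = float("inf")

class TransactionTimeTable:
    def __init__(self):
        self.rows = []                        # (key, value, tt_start, tt_end)

    def put(self, key, value, now):
        for i, (k, v, start, end) in enumerate(self.rows):
            if k == key and end == INF:       # close the currently open version
                self.rows[i] = (k, v, start, now)
        self.rows.append((key, value, now, INF))

    def vacuum(self, cutoff):
        """Physically remove versions that stopped being current before the cutoff."""
        self.rows = [r for r in self.rows if r[3] >= cutoff]

t = TransactionTimeTable()
t.put("acct-1", 100, now=1)
t.put("acct-1", 250, now=5)
t.vacuum(cutoff=6)                            # the superseded version is removed
print(t.rows)                                 # only the current version survives
```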

  18. The Danish Collaborative Bacteraemia Network (DACOBAN) database

    DEFF Research Database (Denmark)

    Gradel, Kim Oren; Schønheyder, Henrik Carl; Arpi, Magnus

    2014-01-01

    registries from five developed countries on three continents. The main purpose of the DACOBAN database is to study surveillance, risk, and prognosis. Sex- and age-specific data on background populations enables the computation of incidence rates. In addition, the high number of patients facilitates studies......The Danish Collaborative Bacteraemia Network (DACOBAN) research database includes microbiological data obtained from positive blood cultures from a geographically and demographically well-defined population serviced by three clinical microbiology departments (1.7 million residents, 32......% of the Danish population). The database also includes data on comorbidity from the Danish National Patient Registry, vital status from the Danish Civil Registration System, and clinical data on 31% of nonselected records in the database. Use of the unique civil registration number given to all Danish residents...
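
    The incidence-rate computation the record mentions is simple arithmetic once sex- and age-specific person-years for the background population are available; a minimal sketch with made-up numbers follows.

```python
# Minimal sketch of the incidence-rate arithmetic the record describes: cases
# from the bacteraemia database divided by person-years in the background
# population, per sex/age stratum. Numbers are made up for illustration.

strata = {                      # (sex, age_group): (cases, person_years)
    ("F", "60-69"): (120, 85_000),
    ("M", "60-69"): (190, 80_000),
}

def incidence_per_100k(cases, person_years):
    return 1e5 * cases / person_years

for stratum, (cases, pyrs) in strata.items():
    print(stratum, f"{incidence_per_100k(cases, pyrs):.1f} per 100,000 person-years")
```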

  20. Reclamation research database

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2007-07-01

    A reclamation research database was compiled to help stakeholders search publications and research related to the reclamation of Alberta's oil sands region. New publications are added to the database by the Cumulative Environmental Management Association (CEMA), a nonprofit association whose mandate is to develop frameworks and guidelines for the management of cumulative environmental effects in the oil sands region. A total of 514 research papers have been compiled in the database to date. Topics include recent research on hydrology, aquatic and terrestrial ecosystems, laboratory studies on biodegradation, and the effects of oil sands processing on micro-organisms. The database includes a wide variety of studies related to reconstructed wetlands as well as the ecological effects of hydrocarbons on phytoplankton and other organisms. The database format included information on research format availability, as well as information related to the author's affiliations. Links to external abstracts were provided where available, as well as details of source information.

  1. The LHCb configuration database

    CERN Document Server

    Abadie, L; Van Herwijnen, Eric; Jacobsson, R; Jost, B; Neufeld, N

    2005-01-01

    The aim of the LHCb configuration database is to store information about all the controllable devices of the detector. The experiment's control system (that uses PVSS) will configure, start up and monitor the detector from the information in the configuration database. The database will contain devices with their properties, connectivity and hierarchy. The ability to store and rapidly retrieve huge amounts of data, and the navigability between devices are important requirements. We have collected use cases to ensure the completeness of the design. Using the entity relationship modelling technique we describe the use cases as classes with attributes and links. We designed the schema for the tables using relational diagrams. This methodology has been applied to the TFC (switches) and DAQ system. Other parts of the detector will follow later. The database has been implemented using Oracle to benefit from central CERN database support. The project also foresees the creation of tools to populate, maintain, and co...
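
    A minimal sketch of a relational schema for devices with properties, hierarchy, and connectivity, in the spirit of the description above; table and column names are assumptions, not the actual LHCb Oracle schema.

```python
# Illustrative relational schema for devices with properties, hierarchy and
# connectivity, in the spirit of the record; not the actual LHCb/Oracle schema.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE device(
    id        INTEGER PRIMARY KEY,
    name      TEXT NOT NULL,
    type      TEXT,
    parent_id INTEGER REFERENCES device(id)    -- hierarchy
);
CREATE TABLE device_property(                  -- per-device settings
    device_id INTEGER REFERENCES device(id),
    key       TEXT,
    value     TEXT
);
CREATE TABLE link(                             -- connectivity between devices
    from_id   INTEGER REFERENCES device(id),
    to_id     INTEGER REFERENCES device(id)
);
""")
conn.execute("INSERT INTO device VALUES (1, 'TFC-switch-0', 'switch', NULL)")
conn.execute("INSERT INTO device VALUES (2, 'readout-board-7', 'board', 1)")
conn.execute("INSERT INTO device_property VALUES (2, 'firmware', 'v3.1')")
conn.execute("INSERT INTO link VALUES (1, 2)")

# Which devices hang off the switch?
print(conn.execute("SELECT name FROM device WHERE parent_id = 1").fetchall())
```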

  2. Cascadia Tsunami Deposit Database

    Science.gov (United States)

    Peters, Robert; Jaffe, Bruce; Gelfenbaum, Guy; Peterson, Curt

    2003-01-01

    The Cascadia Tsunami Deposit Database contains data on the location and sedimentological properties of tsunami deposits found along the Cascadia margin. Data have been compiled from 52 studies, documenting 59 sites from northern California to Vancouver Island, British Columbia that contain known or potential tsunami deposits. Bibliographical references are provided for all sites included in the database. Cascadia tsunami deposits are usually seen as anomalous sand layers in coastal marsh or lake sediments. The studies cited in the database use numerous criteria based on sedimentary characteristics to distinguish tsunami deposits from sand layers deposited by other processes, such as river flooding and storm surges. Several studies cited in the database contain evidence for more than one tsunami at a site. Data categories include age, thickness, layering, grainsize, and other sedimentological characteristics of Cascadia tsunami deposits. The database documents the variability observed in tsunami deposits found along the Cascadia margin.

  3. Database Description - DGBY | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available [ Credits ] BLAST Search Image Search Home About Archive Update History Contact us DGBY Database... Description General information of database Database name DGBY Alternative name Database for G...-12 Kannondai, Tsukuba, Ibaraki 305-8642 Japan Akira Ando TEL: +81-29-838-8066 E-mail: Database classificati...on Microarray Data and other Gene Expression Databases Organism Taxonomy Name: Sa...ccharomyces cerevisiae Taxonomy ID: 4932 Database description Baker's yeast Saccharomyces cerevisiae is an e

  4. Database Description - RPSD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available [ Credits ] BLAST Search Image Search Home About Archive Update History Contact us RPSD Database... Description General information of database Database name RPSD Alternative name Summary inform...n National Institute of Agrobiological Sciences Toshimasa Yamazaki E-mail : Database classification Structure Database...idopsis thaliana Taxonomy ID: 3702 Taxonomy Name: Glycine max Taxonomy ID: 3847 Database description We have...nts such as rice, and have put together the result and related informations. This database contains the basi

  5. Speech Databases of Typical Children and Children with SLI.

    Science.gov (United States)

    Grill, Pavel; Tučková, Jana

    2016-01-01

    The extent of research on children's speech in general and on disordered speech specifically is very limited. In this article, we describe the process of creating databases of children's speech and the possibilities for using such databases, which have been created by the LANNA research group in the Faculty of Electrical Engineering at Czech Technical University in Prague. These databases have been principally compiled for medical research but also for use in other areas, such as linguistics. Two databases were recorded: one for healthy children's speech (recorded in kindergarten and in the first level of elementary school) and the other for pathological speech of children with a Specific Language Impairment (recorded at a surgery of speech and language therapists and at the hospital). Both databases were sub-divided according to specific demands of medical research. Their utilization can be exoteric, specifically for linguistic research and pedagogical use as well as for studies of speech-signal processing.

  6. Speech Databases of Typical Children and Children with SLI.

    Directory of Open Access Journals (Sweden)

    Pavel Grill

    Full Text Available The extent of research on children's speech in general and on disordered speech specifically is very limited. In this article, we describe the process of creating databases of children's speech and the possibilities for using such databases, which have been created by the LANNA research group in the Faculty of Electrical Engineering at Czech Technical University in Prague. These databases have been principally compiled for medical research but also for use in other areas, such as linguistics. Two databases were recorded: one for healthy children's speech (recorded in kindergarten and in the first level of elementary school) and the other for pathological speech of children with a Specific Language Impairment (recorded at a surgery of speech and language therapists and at the hospital). Both databases were sub-divided according to specific demands of medical research. Their utilization can be exoteric, specifically for linguistic research and pedagogical use as well as for studies of speech-signal processing.

  7. Food Composition Database Format and Structure: A User Focused Approach.

    Science.gov (United States)

    Clancy, Annabel K; Woods, Kaitlyn; McMahon, Anne; Probst, Yasmine

    2015-01-01

    This study aimed to investigate the needs of Australian food composition database users regarding database format and relate this to the format of databases available globally. Three semi-structured synchronous online focus groups (M = 3, F = 11) and n = 6 female key informant interviews were recorded. Beliefs surrounding the use, training, understanding, benefits and limitations of food composition data and databases were explored. Verbatim transcriptions underwent preliminary coding followed by thematic analysis with NVivo qualitative analysis software to extract the final themes. Schematic analysis was applied to the final themes related to database format. Desktop analysis also examined the format of six key globally available databases. 24 dominant themes were established, of which five related to format: database use, food classification, framework, accessibility and availability, and data derivation. Desktop analysis revealed that food classification systems varied considerably between databases. Microsoft Excel was a common file format used in all databases, and available software varied between countries. Users also recognised that food composition database format should ideally be designed specifically for the intended use, have a user-friendly food classification system, incorporate accurate data with clear explanation of data derivation and feature user input. However, such databases are limited by data availability and resources. Further exploration of data sharing options should be considered. Furthermore, users' understanding of food composition data and database limitations is inherent to the correct application of non-specific databases. Therefore, further exploration of user FCDB training should also be considered.

  8. Food Composition Database Format and Structure: A User Focused Approach.

    Directory of Open Access Journals (Sweden)

    Annabel K Clancy

    Full Text Available This study aimed to investigate the needs of Australian food composition database users regarding database format and relate this to the format of databases available globally. Three semi-structured synchronous online focus groups (M = 3, F = 11) and n = 6 female key informant interviews were recorded. Beliefs surrounding the use, training, understanding, benefits and limitations of food composition data and databases were explored. Verbatim transcriptions underwent preliminary coding followed by thematic analysis with NVivo qualitative analysis software to extract the final themes. Schematic analysis was applied to the final themes related to database format. Desktop analysis also examined the format of six key globally available databases. 24 dominant themes were established, of which five related to format: database use, food classification, framework, accessibility and availability, and data derivation. Desktop analysis revealed that food classification systems varied considerably between databases. Microsoft Excel was a common file format used in all databases, and available software varied between countries. Users also recognised that food composition database format should ideally be designed specifically for the intended use, have a user-friendly food classification system, incorporate accurate data with clear explanation of data derivation and feature user input. However, such databases are limited by data availability and resources. Further exploration of data sharing options should be considered. Furthermore, users' understanding of food composition data and database limitations is inherent to the correct application of non-specific databases. Therefore, further exploration of user FCDB training should also be considered.

  9. PADB : Published Association Database

    Directory of Open Access Journals (Sweden)

    Lee Jin-Sung

    2007-09-01

    Full Text Available Abstract Background Although molecular pathway information and the International HapMap Project data can help biomedical researchers to investigate the aetiology of complex diseases more effectively, such information is missing or insufficient in current genetic association databases. In addition, only a few of the environmental risk factors are included as gene-environment interactions, and the risk measures of associations are not indexed in any association databases. Description We have developed a published association database (PADB; http://www.medclue.com/padb that includes both the genetic associations and the environmental risk factors available in PubMed database. Each genetic risk factor is linked to a molecular pathway database and the HapMap database through human gene symbols identified in the abstracts. And the risk measures such as odds ratios or hazard ratios are extracted automatically from the abstracts when available. Thus, users can review the association data sorted by the risk measures, and genetic associations can be grouped by human genes or molecular pathways. The search results can also be saved to tab-delimited text files for further sorting or analysis. Currently, PADB indexes more than 1,500,000 PubMed abstracts that include 3442 human genes, 461 molecular pathways and about 190,000 risk measures ranging from 0.00001 to 4878.9. Conclusion PADB is a unique online database of published associations that will serve as a novel and powerful resource for reviewing and interpreting huge association data of complex human diseases.
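
    A hedged sketch of the automatic extraction step described above: risk measures such as odds ratios or hazard ratios are pulled out of abstract text with a regular expression. The pattern is a simplification for illustration, not PADB's actual pipeline.

```python
# Sketch of pulling risk measures such as odds ratios (OR) or hazard ratios (HR)
# out of abstract text with a regular expression; a simplification, not PADB's
# actual extraction pipeline.

import re

RISK_RE = re.compile(r"\b(OR|HR|odds ratio|hazard ratio)\D{0,10}?(\d+(?:\.\d+)?)",
                     re.IGNORECASE)

abstract = ("Carriers of the variant had an increased risk of disease "
            "(OR = 1.45, 95% CI 1.10-1.91); mortality was also higher (HR 2.3).")

measures = [(m.group(1).upper(), float(m.group(2))) for m in RISK_RE.finditer(abstract)]
print(measures)                                              # [('OR', 1.45), ('HR', 2.3)]
print(sorted(measures, key=lambda x: x[1], reverse=True))    # review sorted by risk measure
```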

  10. Database and Expert Systems Applications

    DEFF Research Database (Denmark)

    Viborg Andersen, Kim; Debenham, John; Wagner, Roland

    submissions. The papers are organized in topical sections on workflow automation, database queries, data classification and recommendation systems, information retrieval in multimedia databases, Web applications, implementational aspects of databases, multimedia databases, XML processing, security, XML...... schemata, query evaluation, semantic processing, information retrieval, temporal and spatial databases, querying XML, organisational aspects of databases, natural language processing, ontologies, Web data extraction, semantic Web, data stream management, data extraction, distributed database systems...

  11. Glycoproteomic and glycomic databases.

    Science.gov (United States)

    Baycin Hizal, Deniz; Wolozny, Daniel; Colao, Joseph; Jacobson, Elena; Tian, Yuan; Krag, Sharon S; Betenbaugh, Michael J; Zhang, Hui

    2014-01-01

    Protein glycosylation serves critical roles in the cellular and biological processes of many organisms. Aberrant glycosylation has been associated with many illnesses such as hereditary and chronic diseases like cancer, cardiovascular diseases, neurological disorders, and immunological disorders. Emerging mass spectrometry (MS) technologies that enable the high-throughput identification of glycoproteins and glycans have accelerated the analysis and made possible the creation of dynamic and expanding databases. Although glycosylation-related databases have been established by many laboratories and institutions, they are not yet widely known in the community. Our study reviews 15 different publicly available databases and identifies their key elements so that users can identify the most applicable platform for their analytical needs. These databases include biological information on the experimentally identified glycans and glycopeptides from various cells and organisms such as human, rat, mouse, fly and zebrafish. The features of these databases - 7 for glycoproteomic data, 6 for glycomic data, and 2 for glycan binding proteins are summarized including the enrichment techniques that are used for glycoproteome and glycan identification. Furthermore databases such as Unipep, GlycoFly, GlycoFish recently established by our group are introduced. The unique features of each database, such as the analytical methods used and bioinformatical tools available are summarized. This information will be a valuable resource for the glycobiology community as it presents the analytical methods and glycosylation related databases together in one compendium. It will also represent a step towards the desired long term goal of integrating the different databases of glycosylation in order to characterize and categorize glycoproteins and glycans better for biomedical research.

  12. Update History of This Database - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us Trypanosomes Database Update History of This Database Date Update contents 2014/05/07 The co...ntact information is corrected. The features and manner of utilization of the database are corrected. 2014/02/04 Trypanosomes Databas...e English archive site is opened. 2011/04/04 Trypanosomes Database ( http://www.tan...paku.org/tdb/ ) is opened. About This Database Database Description Download Lice...nse Update History of This Database Site Policy | Contact Us Update History of This Database - Trypanosomes Database | LSDB Archive ...

  13. Phase Equilibria Diagrams Database

    Science.gov (United States)

    SRD 31 NIST/ACerS Phase Equilibria Diagrams Database (PC database for purchase)   The Phase Equilibria Diagrams Database contains commentaries and more than 21,000 diagrams for non-organic systems, including those published in all 21 hard-copy volumes produced as part of the ACerS-NIST Phase Equilibria Diagrams Program (formerly titled Phase Diagrams for Ceramists): Volumes I through XIV (blue books); Annuals 91, 92, 93; High Tc Superconductors I & II; Zirconium & Zirconia Systems; and Electronic Ceramics I. Materials covered include oxides as well as non-oxide systems such as chalcogenides and pnictides, phosphates, salt systems, and mixed systems of these classes.

  14. LandIT Database

    DEFF Research Database (Denmark)

    Iftikhar, Nadeem; Pedersen, Torben Bach

    2010-01-01

    and reporting purposes. This paper presents the LandIT database, which is the result of the LandIT project, an industrial collaboration project that developed technologies for communication and data integration between farming devices and systems. The LandIT database in principle is based...... on the ISOBUS standard; however, the standard is extended with additional requirements, such as gradual data aggregation and flexible exchange of farming data. This paper describes the conceptual and logical schemas of the proposed database based on a real-life farming case study....
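
    A small sketch of what gradual data aggregation can look like: detailed readings are kept while recent and rolled up into coarser summaries once they age. This is illustrative only, not the LandIT schema itself.

```python
# Sketch of gradual data aggregation as described for the LandIT database:
# detailed readings are kept for recent data and rolled up (here, hourly means)
# once they pass an age threshold. Illustrative only, not the actual schema.

from collections import defaultdict
from statistics import mean

def gradually_aggregate(readings, now, keep_detailed_hours=24):
    """readings: list of (timestamp_hours, value). Old readings are averaged per hour."""
    detailed, buckets = [], defaultdict(list)
    for ts, value in readings:
        if now - ts <= keep_detailed_hours:
            detailed.append((ts, value))
        else:
            buckets[int(ts)].append(value)
    aggregated = [(hour, mean(vals)) for hour, vals in sorted(buckets.items())]
    return detailed, aggregated

readings = [(0.1, 2.0), (0.4, 4.0), (30.0, 3.1), (30.5, 3.3)]
print(gradually_aggregate(readings, now=31.0))
```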

  15. ALICE Geometry Database

    CERN Document Server

    Santo, J

    1999-01-01

    The ALICE Geometry Database project consists of the development of a set of data structures to store the geometrical information of the ALICE Detector. This Database will be used in Simulation, Reconstruction and Visualisation and will interface with existing CAD systems and Geometrical Modellers. At the present time, we are able to read a complete GEANT3 geometry, to store it in our database and to visualise it. On disk, we store different geometry files in hierarchical fashion, and all the nodes, materials, shapes, configurations and transformations distributed in this tree structure. The present status of the prototype and its future evolution will be presented.

  16. Database machine performance

    Energy Technology Data Exchange (ETDEWEB)

    Cesarini, F.; Salza, S.

    1987-01-01

    This book is devoted to the important problem of database machine performance evaluation. The book presents several methodological proposals and case studies, that have been developed within an international project supported by the European Economic Community on Database Machine Evaluation Techniques and Tools in the Context of the Real Time Processing. The book gives an overall view of the modeling methodologies and the evaluation strategies that can be adopted to analyze the performance of the database machine. Moreover, it includes interesting case studies and an extensive bibliography.

  17. Product Licenses Database Application

    CERN Document Server

    Tonkovikj, Petar

    2016-01-01

    The goal of this project is to organize and centralize the data about software tools available to CERN employees, as well as provide a system that would simplify the license management process by providing information about the available licenses and their expiry dates. The project development process consists of two steps: modeling the products (software tools), product licenses, legal agreements and other data related to these entities in a relational database and developing the front-end user interface so that the user can interact with the database. The result is an ASP.NET MVC web application with interactive views for displaying and managing the data in the underlying database.

  18. Plant Genome Duplication Database.

    Science.gov (United States)

    Lee, Tae-Ho; Kim, Junah; Robertson, Jon S; Paterson, Andrew H

    2017-01-01

    Genome duplication, widespread in flowering plants, is a driving force in evolution. Genome alignments between/within genomes facilitate identification of homologous regions and individual genes to investigate evolutionary consequences of genome duplication. PGDD (the Plant Genome Duplication Database), a public web service database, provides intra- or interplant genome alignment information. At present, PGDD contains information for 47 plants whose genome sequences have been released. Here, we describe methods for identification and estimation of dates of genome duplication and speciation by functions of PGDD. The database is freely available at http://chibba.agtec.uga.edu/duplication/.

  20. Danish Pancreatic Cancer Database

    DEFF Research Database (Denmark)

    Fristrup, Claus; Detlefsen, Sönke; Palnæs Hansen, Carsten

    2016-01-01

    AIM OF DATABASE: The Danish Pancreatic Cancer Database aims to prospectively register the epidemiology, diagnostic workup, diagnosis, treatment, and outcome of patients with pancreatic cancer in Denmark at an institutional and national level. STUDY POPULATION: Since May 1, 2011, all patients......, and survival. The results are published annually. CONCLUSION: The Danish Pancreatic Cancer Database has registered data on 2,217 patients with microscopically verified ductal adenocarcinoma of the pancreas. The data have been obtained nationwide over a period of 4 years and 2 months. The completeness...

  1. US Tuna Purse Seine Fleet History & Activity Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The NMFS SWR has collected historical vessel information on the U.S. tuna cannery baitboat and purse seine fleets for many years. The database's first record of a...

  2. U.S. Geological Survey mineral databases; MRDS and MAS/MILS

    Science.gov (United States)

    McFaul, E.J.; Mason, G.T.; Ferguson, W.B.; Lipin, B.R.

    2000-01-01

    These two CD-ROMs contain the latest version of the Mineral Resources Data System (MRDS) database and the Minerals Availability System/Minerals Industry Location System (MAS/MILS) database for coverage of North America and the world outside North America. The records in the MRDS database each contain almost 200 data fields describing metallic and nonmetallic mineral resources, deposits, and commodities. The records in the MAS/MILS database each contain almost 100 data fields describing mines and mineral processing plants.

  3. Checkpointing and Recovery in Distributed and Database Systems

    Science.gov (United States)

    Wu, Jiang

    2011-01-01

    A transaction-consistent global checkpoint of a database records a state of the database which reflects the effect of only completed transactions and not the results of any partially executed transactions. This thesis establishes the necessary and sufficient conditions for a checkpoint of a data item (or the checkpoints of a set of data items) to…
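
    The consistency property can be stated as a small check: a checkpointed state is transaction-consistent if it equals the state produced by applying exactly the committed transactions. The sketch below is a toy illustration, not the thesis's necessary and sufficient conditions.

```python
# Toy illustration of a transaction-consistent checkpoint: the captured state
# must reflect every write of committed transactions and no write of
# uncommitted ones. A sketch, not the thesis's necessary/sufficient conditions.

def committed_state(transactions):
    """State produced by applying exactly the committed transactions, in order."""
    state = {}
    for txn in transactions:
        if txn["committed"]:
            state.update(txn["writes"])
    return state

def is_transaction_consistent(checkpoint, transactions):
    # Consistent iff the checkpoint equals the committed-only state.
    return checkpoint == committed_state(transactions)

txns = [{"writes": {"x": 1, "y": 2}, "committed": True},
        {"writes": {"y": 9}, "committed": False}]
print(is_transaction_consistent({"x": 1, "y": 2}, txns))   # True
print(is_transaction_consistent({"x": 1, "y": 9}, txns))   # False: partial effect visible
```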

  4. Development of a Biomedical Database on the Medical Aspects of Chemical Defense

    Science.gov (United States)

    1988-12-01

    databases: Eric; Biosis 1981; NTIS; Social Scisearch; Agricola 79; Psycinfo; Chem Ind Notes; Federal Index; Claims/U.S. Patent; Claims/U.S. Patent A; Scisearch 84...; CAT; Enviro Perio Bib; Intl Pharm ABS; Life Sciences Collection; Conf Papers Index; PTS A/DM&T; Scisearch 81-81; USPSD; Scisearch 78-80; CIS; Agricola 70-78...; Business; World Patents Index; World Patents Index; Remarc 1900-1939; Remarc 1940-1959; Remarc 1960-1969; Remarc 1970+; LC Marc; Books in Print; Wiley Catalog Online

  5. Geospatial database for heritage building conservation

    Science.gov (United States)

    Basir, W. N. F. W. A.; Setan, H.; Majid, Z.; Chong, A.

    2014-02-01

    Heritage buildings are icons from the past that exist in present time. Through heritage architecture, we can learn about economic issues and social activities of the past. Nowadays, heritage buildings are under threat from natural disaster, uncertain weather, pollution and others. In order to preserve this heritage for the future generation, recording and documenting of heritage buildings are required. With the development of information system and data collection technique, it is possible to create a 3D digital model. This 3D information plays an important role in recording and documenting heritage buildings. 3D modeling and virtual reality techniques have demonstrated the ability to visualize the real world in 3D. It can provide a better platform for communication and understanding of heritage building. Combining 3D modelling with technology of Geographic Information System (GIS) will create a database that can make various analyses about spatial data in the form of a 3D model. Objectives of this research are to determine the reliability of Terrestrial Laser Scanning (TLS) technique for data acquisition of heritage building and to develop a geospatial database for heritage building conservation purposes. The result from data acquisition will become a guideline for 3D model development. This 3D model will be exported to the GIS format in order to develop a database for heritage building conservation. In this database, requirements for heritage building conservation process are included. Through this research, a proper database for storing and documenting of the heritage building conservation data will be developed.

  6. Using volume holograms to search digital databases

    Science.gov (United States)

    Burr, Geoffrey W.; Maltezos, George; Grawert, Felix; Kobras, Sebastian; Hanssen, Holger; Coufal, Hans J.

    2002-01-01

    Holographic data storage offers the potential for simultaneous search of an entire database by performing multiple optical correlations between stored data pages and a search argument. This content-addressable retrieval produces one analog correlation score for each stored volume hologram. We have previously developed fuzzy encoding techniques for this fast parallel search, and holographically searched a small database with high fidelity. We recently showed that such systems can be configured to produce true inner-products, and proposed an architecture in which massively-parallel searches could be implemented. However, the speed advantage over conventional electronic search provided by parallelism brings with it the possibility of erroneous search results, since these analog correlation scores are subject to various noise sources. We show that the fidelity of such an optical search depends not only on the usual holographic storage signal-to-noise factors (such as readout power, diffraction efficiency, and readout speed), but also on the particular database query being made. In effect, the presence of non-matching database records with nearly the same correlation score as the targeted matching records reduces the speed advantage of the parallel search. Thus for any given fidelity target, the performance improvement offered by a content-addressable holographic storage can vary from query to query even within the same database.
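
    A numerical analogy for the search described (not the optical system): each stored page yields one inner-product correlation score against the query, readout noise perturbs the scores, and near-matching records shrink the margin that separates the true hit.

```python
# Numerical analogy (not the optical system): each stored data page contributes
# one inner-product "correlation score" with the query pattern; noisy,
# near-matching records shrink the margin that separates true hits.

import random

def correlation_scores(pages, query, noise=0.02, rng=random.Random(0)):
    scores = []
    for page in pages:
        score = sum(p * q for p, q in zip(page, query))   # analog inner product
        scores.append(score + rng.gauss(0.0, noise))      # detector/readout noise
    return scores

query = [1, 0, 1, 1, 0, 0, 1, 0]
pages = [[1, 0, 1, 1, 0, 0, 1, 0],     # exact match
         [1, 0, 1, 0, 0, 0, 1, 0],     # near match: one bit different
         [0, 1, 0, 0, 1, 1, 0, 1]]     # unrelated record
scores = correlation_scores(pages, query)
print(scores, "-> best:", scores.index(max(scores)))
```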

  8. Mining Electronic Healthcare Record Databases to Augment Drug Safety Surveillance

    NARCIS (Netherlands)

    P.M. Coloma (Preciosa)

    2012-01-01

    textabstractIt is perhaps a fundamental truth in medicine that there is no intervention – be it a drug, a medical device or a procedure – that is without risks. Even with the most rigorous efforts in drug approval and regulation, there is not a drug out there that is 100% safe under all conditions.

  9. ARTI Refrigerant Database

    Energy Technology Data Exchange (ETDEWEB)

    Calm, J.M. [Calm (James M.), Great Falls, VA (United States)

    1994-05-27

    The Refrigerant Database consolidates and facilitates access to information to assist industry in developing equipment using alternative refrigerants. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern.

  10. Kansas Cartographic Database (KCD)

    Data.gov (United States)

    Kansas Data Access and Support Center — The Kansas Cartographic Database (KCD) is an exact digital representation of selected features from the USGS 7.5 minute topographic map series. Features that are...

  11. OTI Activity Database

    Data.gov (United States)

    US Agency for International Development — OTI's worldwide activity database is a simple and effective information system that serves as a program management, tracking, and reporting tool. In each country,...

  12. Children's Culture Database (CCD)

    DEFF Research Database (Denmark)

    Wanting, Birgit

    a Dialogue-inspired database with documentation, network (individual and institutional profiles) and current news, paper presented at the research seminar: Electronic access to fiction, Copenhagen, November 11-13, 1996

  13. Rat Genome Database (RGD)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Rat Genome Database (RGD) is a collaborative effort between leading research institutions involved in rat genetic and genomic research to collect, consolidate,...

  14. The Exoplanet Orbit Database

    CERN Document Server

    Wright, Jason T; Marcy, Geoffrey W; Han, Eunkyu; Feng, Ying; Johnson, John Asher; Howard, Andrew W; Valenti, Jeff A; Anderson, Jay; Piskunov, Nikolai

    2010-01-01

    We present a database of well determined orbital parameters of exoplanets. This database comprises spectroscopic orbital elements measured for 421 planets orbiting 357 stars from radial velocity and transit measurements as reported in the literature. We have also compiled fundamental transit parameters, stellar parameters, and the method used for the planets discovery. This Exoplanet Orbit Database includes all planets with robust, well measured orbital parameters reported in peer-reviewed articles. The database is available in a searchable, filterable, and sortable form on the Web at http://exoplanets.org through the Exoplanets Data Explorer Table, and the data can be plotted and explored through the Exoplanets Data Explorer Plotter. We use the Data Explorer to generate publication-ready plots giving three examples of the signatures of exoplanet migration and dynamical evolution: We illustrate the character of the apparent correlation between mass and period in exoplanet orbits, the selection different biase...
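
    A small sketch of the filterable and sortable exploration the Data Explorer offers, applied to a local list of records; the field names and values below are hypothetical, not the exact exoplanets.org schema.

```python
# Sketch of filtering and sorting a local copy of an exoplanet orbit table, in
# the spirit of the Exoplanets Data Explorer; field names and values here are
# hypothetical, not necessarily the exact fields served at exoplanets.org.

planets = [
    {"name": "hypothetical-b", "msini_mjup": 0.90, "period_days": 3.2,   "ecc": 0.01},
    {"name": "hypothetical-c", "msini_mjup": 2.40, "period_days": 410.0, "ecc": 0.32},
    {"name": "hypothetical-d", "msini_mjup": 0.05, "period_days": 12.0,  "ecc": 0.10},
]

# "Filterable": keep giants on short periods (a hot-Jupiter style cut).
hot_jupiters = [p for p in planets if p["msini_mjup"] > 0.5 and p["period_days"] < 10]

# "Sortable": order the full table by eccentricity to eyeball dynamical evolution.
by_ecc = sorted(planets, key=lambda p: p["ecc"], reverse=True)

print([p["name"] for p in hot_jupiters])
print([(p["name"], p["ecc"]) for p in by_ecc])
```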

  15. National Geochemical Database: Concentrate

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — Geochemistry of concentrates from the National Geochemical Database. Primarily inorganic elemental concentrations, most samples are from the continental US and...

  16. National Geochemical Database: Soil

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — Geochemical analysis of soil samples from the National Geochemical Database. Primarily inorganic elemental concentrations, most samples are from the continental US...

  17. National Geochemical Database: Sediment

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — Geochemical analysis of sediment samples from the National Geochemical Database. Primarily inorganic elemental concentrations, most samples are of stream sediment in...

  18. The Danish Depression Database

    DEFF Research Database (Denmark)

    Videbech, Poul Bror Hemming; Deleuran, Anette

    2016-01-01

    AIM OF DATABASE: The purpose of the Danish Depression Database (DDD) is to monitor and facilitate the improvement of the quality of the treatment of depression in Denmark. Furthermore, the DDD has been designed to facilitate research. STUDY POPULATION: Inpatients as well as outpatients...... as an evaluation of the risk of suicide are measured before and after treatment. Whether psychiatric aftercare has been scheduled for inpatients and the rate of rehospitalization are also registered. DESCRIPTIVE DATA: The database was launched in 2011. Every year since then ~5,500 inpatients and 7,500 outpatients...... have been registered annually in the database. A total of 24,083 inpatients and 29,918 outpatients have been registered. The DDD produces an annual report published on the Internet. CONCLUSION: The DDD can become an important tool for quality improvement and research, when the reporting is more...

  19. Molecular marker databases.

    Science.gov (United States)

    Lai, Kaitao; Lorenc, Michał Tadeusz; Edwards, David

    2015-01-01

    The detection and analysis of genetic variation plays an important role in plant breeding and this role is increasing with the continued development of genome sequencing technologies. Molecular genetic markers are important tools to characterize genetic variation and assist with genomic breeding. Processing and storing the growing abundance of molecular marker data being produced requires the development of specific bioinformatics tools and advanced databases. Molecular marker databases range from species specific through to organism wide and often host a variety of additional related genetic, genomic, or phenotypic information. In this chapter, we will present some of the features of plant molecular genetic marker databases, highlight the various types of marker resources, and predict the potential future direction of crop marker databases.

  20. Consumer Product Category Database

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Chemical and Product Categories database (CPCat) catalogs the use of over 40,000 chemicals and their presence in different consumer products. The chemical use...

  1. Eldercare Locator Database

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Eldercare Locator is a searchable database that allows a user to search via zip code or city/ state for agencies at the State and local levels that provide...

  2. Drycleaner Database - Region 7

    Data.gov (United States)

    U.S. Environmental Protection Agency — THIS DATA ASSET NO LONGER ACTIVE: This is metadata documentation for the Region 7 Drycleaner Database (R7DryClnDB) which tracks all Region7 drycleaners who notify...

  3. Reach Address Database (RAD)

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Reach Address Database (RAD) stores the reach address of each Water Program feature that has been linked to the underlying surface water features (streams,...

  4. Toxicity Reference Database

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Toxicity Reference Database (ToxRefDB) contains approximately 30 years and $2 billion worth of animal studies. ToxRefDB allows scientists and the interested...

  5. 1988 Spitak Earthquake Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The 1988 Spitak Earthquake database is an extensive collection of geophysical and geological data, maps, charts, images and descriptive text pertaining to the...

  6. Hawaii bibliographic database

    Science.gov (United States)

    Wright, Thomas L.; Takahashi, Taeko Jane

    The Hawaii bibliographic database has been created to contain all of the literature, from 1779 to the present, pertinent to the volcanological history of the Hawaiian-Emperor volcanic chain. References are entered in a PC- and Macintosh-compatible EndNote Plus bibliographic database with keywords and abstracts or (if no abstract) with annotations as to content. Keywords emphasize location, discipline, process, identification of new chemical data or age determinations, and type of publication. The database is updated approximately three times a year and is available for download from an ftp site. The bibliography contained 8460 references at the time this paper was submitted for publication. Use of the database greatly enhances the power and completeness of library searches for anyone interested in Hawaiian volcanism.

  7. Food Habits Database (FHDBS)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The NEFSC Food Habits Database has two major sources of data. The first, and most extensive, is the standard NEFSC Bottom Trawl Surveys Program. During these...

  8. NLCD 2011 database

    Data.gov (United States)

    U.S. Environmental Protection Agency — National Land Cover Database 2011 (NLCD 2011) is the most recent national land cover product created by the Multi-Resolution Land Characteristics (MRLC) Consortium....

  9. Mouse Phenome Database (MPD)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Mouse Phenome Database (MPD) has characterizations of hundreds of strains of laboratory mice to facilitate translational discoveries and to assist in selection...

  10. Disaster Debris Recovery Database

    Data.gov (United States)

    U.S. Environmental Protection Agency — The US EPA Region 5 Disaster Debris Recovery Database includes public datasets of over 3,500 composting facilities, demolition contractors, haulers, transfer...

  11. National Geochemical Database: Sediment

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — Geochemical analysis of sediment samples from the National Geochemical Database. Primarily inorganic elemental concentrations, most samples are of stream sediment...

  12. Uranium Location Database

    Data.gov (United States)

    U.S. Environmental Protection Agency — A GIS compiled locational database in Microsoft Access of ~15,000 mines with uranium occurrence or production, primarily in the western United States. The metadata...

  13. National Assessment Database

    Data.gov (United States)

    U.S. Environmental Protection Agency — The National Assessment Database stores and tracks state water quality assessment decisions, Total Maximum Daily Loads (TMDLs) and other watershed plans designed to...

  14. Household Products Database

    Data.gov (United States)

    U.S. Department of Health & Human Services — This database links over 4,000 consumer brands to health effects from Material Safety Data Sheets (MSDS) provided by the manufacturers and allows scientists and...

  15. Dissolution Methods Database

    Data.gov (United States)

    U.S. Department of Health & Human Services — For a drug product that does not have a dissolution test method in the United States Pharmacopeia (USP), the FDA Dissolution Methods Database provides information on...

  16. ATLAS DAQ Configuration Databases

    Institute of Scientific and Technical Information of China (English)

    I. Alexandrov; A. Amorim; et al.

    2001-01-01

    The configuration databases are an important part of the Trigger/DAQ system of the future ATLAS experiment. This paper describes their current status, giving details of architecture, implementation, test results, and plans for future work.

  17. Venus Crater Database

    Data.gov (United States)

    National Aeronautics and Space Administration — This web page leads to a database of images and information about the 900 or so impact craters on the surface of Venus by diameter, latitude, and name.

  18. Global Volcano Locations Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NGDC maintains a database of over 1,500 volcano locations obtained from the Smithsonian Institution Global Volcanism Program, Volcanoes of the World publication. The...

  19. IVR RSA Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This database contains trip-level reports submitted by vessels participating in Research Set-Aside projects with IVR reporting requirements.

  20. Dissolution Methods Database

    Data.gov (United States)

    U.S. Department of Health & Human Services — For a drug product that does not have a dissolution test method in the United States Pharmacopeia (USP), the FDA Dissolution Methods Database provides information on...

  1. Medicaid CHIP ESPC Database

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Environmental Scanning and Program Characteristic (ESPC) Database is in a Microsoft (MS) Access format and contains Medicaid and CHIP data, for the 50 states and...

  2. Medicare Coverage Database

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Medicare Coverage Database (MCD) contains all National Coverage Determinations (NCDs) and Local Coverage Determinations (LCDs), local articles, and proposed NCD...

  3. Comparing records with related chronologies

    Science.gov (United States)

    Bronk Ramsey, Christopher; Albert, Paul; Kearney, Rebecca; Staff, Richard A.

    2016-04-01

    In order to integrate ice, terrestrial and marine records, it is necessary to deal with records on different timescales. These timescales can be grouped into those that use a common fundamental chronometer (such as Uranium-Thorium dating or Radiocarbon) and can also be related to one another where we have chronological tie points such as tephra horizons. More generally we can, through a number of different methodologies, derive relationships between different timescales. A good example of this is the use of cosmogenic isotope production, specifically 10Be and 14C to relate the calibrated radiocarbon timescale to that of the Greenland ice cores. The relationships between different timescales can be mathematically expressed in terms of time-transfer functions. This formalism allows any related record to be considered against any linked timescale with an appropriate associated uncertainty. The prototype INTIMATE chronological database allows records to be viewed and compared in this way and this is now being further developed, both to include a wider range of records and also to provide better connectivity to other databases and chronological tools. These developments will also include new ways to use tephra tie-points to constrain the relationship between timescales directly, without needing to remodel each associated timescale. The database as it stands allows data for particular timeframes to be recalled and plotted against any timescale, or exported in spreadsheet format. New functionality will be added to allow users to work with their own data in a private space and then to publish it when it has been through the peer-review publication process. In order to make the data easier to use for other further analysis and plotting, and with data from other sources, the database will also act as a server to deliver data in a JSON format. The aim of this work is to make the comparison of integrated data much easier for researchers and to ensure that good practice in
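
    The time-transfer-function idea sketched above can be illustrated with a few lines of code: given tie points that relate two timescales (with uncertainties), any age on one timescale can be mapped onto the other. The tie points and record ages below are invented for illustration and are not taken from the INTIMATE database.

```python
import numpy as np

# Hypothetical tie points relating timescale A (e.g. a calibrated radiocarbon
# chronology) to timescale B (e.g. an ice-core chronology), in years before present.
ties_a    = np.array([1000.0, 5000.0, 12000.0, 20000.0])
ties_b    = np.array([1020.0, 5110.0, 12350.0, 20600.0])
tie_sigma = np.array([15.0, 40.0, 80.0, 150.0])  # 1-sigma uncertainty of each tie point

def transfer(age_a):
    """Map an age on timescale A onto timescale B by piecewise-linear
    interpolation between tie points; return (age_b, 1-sigma uncertainty)."""
    return np.interp(age_a, ties_a, ties_b), np.interp(age_a, ties_a, tie_sigma)

# A record dated on timescale A can now be compared with one dated on timescale B.
for age in (3000.0, 15000.0):
    b, s = transfer(age)
    print(f"{age:.0f} a BP on A  ->  {b:.0f} +/- {s:.0f} a BP on B")
```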

  4. The Jungle Database Search Engine

    DEFF Research Database (Denmark)

    Bøhlen, Michael Hanspeter; Bukauskas, Linas; Dyreson, Curtis

    1999-01-01

    Information stored in databases cannot be found by current search engines. A database search engine is capable of accessing and advertising databases on the WWW. Jungle is a database search engine prototype developed at Aalborg University. Operating through JDBC connections to remote databases, Jungle...

  5. PHYTOSOCIOLOGICAL DATABASE OF SLOVAK GRASSLAND VEGETATION

    Directory of Open Access Journals (Sweden)

    M. JANISOVA

    2007-04-01

    Full Text Available In Slovakia, the Central Phytosociological Database has been built since 1996 and it is located in the Institute of Botany, Slovak Academy of Sciences, Bratislava. Since 2005, we have focused on the collection of phytosociological relevés from semi-natural grassland communities belonging to the phytosociological classes Molinio-Arrhenatheretea, Festuco-Brometea and Nardetea strictae. All accessible published relevés were compiled and stored in the Turboveg program. Since 1990 an extensive field survey was carried out with the aim of recording the actual state of semi-natural grasslands in Slovakia after the period of profound land-use changes (collectivisation, abandonment, succession). As a result of this survey, 4988 recent unpublished relevés were stored in our database. Altogether, the database of grassland vegetation contains 11 121 relevés, collected by 143 authors between 1924 and 2006. These relevés include 387 765 individual records of vascular plants and 6 439 records of bryophyte and lichen species. The basic statistical information on this database is presented in the paper and the quality of the data is discussed. The possible applications of such a phytosociological dataset are outlined.

  6. PHYTOSOCIOLOGICAL DATABASE OF SLOVAK GRASSLAND VEGETATION

    Directory of Open Access Journals (Sweden)

    I SKODOVA

    2007-01-01

    Full Text Available In Slovakia, the Central Phytosociological Database has been built since 1996 and it is located in the Institute of Botany, Slovak Academy of Sciences, Bratislava. Since 2005, we have focused on the collection of phytosociological relevés from semi-natural grassland communities belonging to the phytosociological classes Molinio-Arrhenatheretea, Festuco-Brometea and Nardetea strictae. All accessible published relevés were compiled and stored in the Turboveg program. Since 1990 an extensive field survey was carried out with the aim of recording the actual state of semi-natural grasslands in Slovakia after the period of profound land-use changes (collectivisation, abandonment, succession). As a result of this survey, 4988 recent unpublished relevés were stored in our database. Altogether, the database of grassland vegetation contains 11 121 relevés, collected by 143 authors between 1924 and 2006. These relevés include 387 765 individual records of vascular plants and 6 439 records of bryophyte and lichen species. The basic statistical information on this database is presented in the paper and the quality of the data is discussed. The possible applications of such a phytosociological dataset are outlined.

  7. Neutrosophic Relational Database Decomposition

    Directory of Open Access Journals (Sweden)

    Meena Arora

    2011-08-01

    Full Text Available In this paper we present a method of decomposing a neutrosophic database relation with neutrosophic attributes into basic relational form. Our objective is to be able to manipulate incomplete as well as inconsistent information. Fuzzy relations or vague relations can only handle incomplete information. The authors use the Neutrosophic Relational Database [8],[2] to show how imprecise data can be handled in a relational schema.

  8. Database on Wind Characteristics

    DEFF Research Database (Denmark)

    Højstrup, J.; Ejsing Jørgensen, Hans; Lundtang Petersen, Erik

    1999-01-01

    This report describes the work and results of the project: Database on Wind Characteristics, which was sponsored partly by the European Commission within the framework of the JOULE III program under contract JOR3-CT95-0061......This report describes the work and results of the project: Database on Wind Characteristics, which was sponsored partly by the European Commission within the framework of the JOULE III program under contract JOR3-CT95-0061...

  9. Querying genomic databases

    Energy Technology Data Exchange (ETDEWEB)

    Baehr, A.; Hagstrom, R.; Joerg, D.; Overbeek, R.

    1991-09-01

    A natural-language interface has been developed that retrieves genomic information by using a simple subset of English. The interface spares the biologist from the task of learning database-specific query languages and computer programming. Currently, the interface deals with the E. coli genome. It can, however, be readily extended and shows promise as a means of easy access to other sequenced genomic databases as well.

  10. Fashion Information Database

    Institute of Scientific and Technical Information of China (English)

    LI Jun; WU Hai-yan; WANG Yun-yi

    2002-01-01

    In the fashion industry, how to control and apply information during fashion merchandising is a bottleneck. With the aid of digital technology, a complete and practical fashion information database could be established so that a high-quality, efficient, low-cost, and distinctive fashion merchandising system could be realized. The basic structure of the fashion information database is discussed.

  11. The Gun Violence Database

    OpenAIRE

    Pavlick, Ellie; Callison-Burch, Chris

    2016-01-01

    We describe the Gun Violence Database (GVDB), a large and growing database of gun violence incidents in the United States. The GVDB is built from the detailed information found in local news reports about gun violence, and is constructed via a large-scale crowdsourced annotation effort through our web site, http://gun-violence.org/. We argue that centralized and publicly available data about gun violence can facilitate scientific, fact-based discussion about a topic that is often dominated by...

  12. Database computing in HEP

    Science.gov (United States)

    Day, C. T.; Loken, S.; Macfarlane, J. F.; May, E.; Lifka, D.; Lusk, E.; Price, L. E.; Baden, A.; Grossman, R.; Qin, X.

    1992-01-01

    The major SSC experiments are expected to produce up to 1 Petabyte of data per year each. Once the primary reconstruction is completed by farms of inexpensive processors, I/O becomes a major factor in further analysis of the data. We believe that the application of database techniques can significantly reduce the I/O performed in these analyses. We present examples of such I/O reductions in prototypes based on relational and object-oriented databases of CDF data samples.

  13. Database on Wind Characteristics

    DEFF Research Database (Denmark)

    Højstrup, J.; Ejsing Jørgensen, Hans; Lundtang Petersen, Erik

    1999-01-01

    This report describes the work and results of the project: Database on Wind Characteristics, which was sponsored partly by the European Commission within the framework of the JOULE III program under contract JOR3-CT95-0061......This report describes the work and results of the project: Database on Wind Characteristics, which was sponsored partly by the European Commission within the framework of the JOULE III program under contract JOR3-CT95-0061...

  14. The World Bacterial Biogeography and Biodiversity through Databases: A Case Study of NCBI Nucleotide Database and GBIF Database

    Directory of Open Access Journals (Sweden)

    Okba Selama

    2013-01-01

    Full Text Available Databases are an essential tool and resource within the field of bioinformatics. The primary aim of this study was to generate an overview of global bacterial biodiversity and biogeography using available data from the two largest public online databases, NCBI Nucleotide and GBIF. The secondary aim was to highlight the contribution each geographic area has to each database. The basis for data analysis of this study was the metadata provided by both databases, mainly, the taxonomy and the geographical area origin of isolation of the microorganism (record). These were directly obtained from GBIF through the online interface, while E-utilities and Python were used in combination with a programmatic web service access to obtain data from the NCBI Nucleotide Database. Results indicate that the American continent, and more specifically the USA, is the top contributor, while Africa and Antarctica are less well represented. This highlights the imbalance of exploration within these areas rather than any reduction in biodiversity. This study describes a novel approach to generating global scale patterns of bacterial biodiversity and biogeography and indicates that the Proteobacteria are the most abundant and widely distributed phylum within both databases.

  15. The world bacterial biogeography and biodiversity through databases: a case study of NCBI Nucleotide Database and GBIF Database.

    Science.gov (United States)

    Selama, Okba; James, Phillip; Nateche, Farida; Wellington, Elizabeth M H; Hacène, Hocine

    2013-01-01

    Databases are an essential tool and resource within the field of bioinformatics. The primary aim of this study was to generate an overview of global bacterial biodiversity and biogeography using available data from the two largest public online databases, NCBI Nucleotide and GBIF. The secondary aim was to highlight the contribution each geographic area has to each database. The basis for data analysis of this study was the metadata provided by both databases, mainly, the taxonomy and the geographical area origin of isolation of the microorganism (record). These were directly obtained from GBIF through the online interface, while E-utilities and Python were used in combination with a programmatic web service access to obtain data from the NCBI Nucleotide Database. Results indicate that the American continent, and more specifically the USA, is the top contributor, while Africa and Antarctica are less well represented. This highlights the imbalance of exploration within these areas rather than any reduction in biodiversity. This study describes a novel approach to generating global scale patterns of bacterial biodiversity and biogeography and indicates that the Proteobacteria are the most abundant and widely distributed phylum within both databases.
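
    The abstract notes that NCBI Nucleotide metadata were retrieved programmatically with E-utilities and Python. A minimal sketch of that style of query, using Biopython's Entrez module, is shown below; the e-mail address and search term are placeholders, and the original study's actual scripts and queries are not reproduced here.

```python
from Bio import Entrez  # Biopython wrapper around the NCBI E-utilities

Entrez.email = "your.name@example.org"  # placeholder; NCBI requires a contact address

# Illustrative query only: count Nucleotide records assigned to a bacterial phylum.
term = "Proteobacteria[Organism]"

handle = Entrez.esearch(db="nucleotide", term=term, retmax=0)
result = Entrez.read(handle)
handle.close()

print("Nucleotide records matching the query:", result["Count"])
```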

  16. Specialist Bibliographic Databases.

    Science.gov (United States)

    Gasparyan, Armen Yuri; Yessirkepov, Marlen; Voronov, Alexander A; Trukhachev, Vladimir I; Kostyukova, Elena I; Gerasimov, Alexey N; Kitas, George D

    2016-05-01

    Specialist bibliographic databases offer essential online tools for researchers and authors who work on specific subjects and perform comprehensive and systematic syntheses of evidence. This article presents examples of the established specialist databases, which may be of interest to those engaged in multidisciplinary science communication. Access to most specialist databases is through subscription schemes and membership in professional associations. Several aggregators of information and database vendors, such as EBSCOhost and ProQuest, facilitate advanced searches supported by specialist keyword thesauri. Searches of items through specialist databases are complementary to those through multidisciplinary research platforms, such as PubMed, Web of Science, and Google Scholar. Familiarizing with the functional characteristics of biomedical and nonbiomedical bibliographic search tools is mandatory for researchers, authors, editors, and publishers. The database users are offered updates of the indexed journal lists, abstracts, author profiles, and links to other metadata. Editors and publishers may find particularly useful source selection criteria and apply for coverage of their peer-reviewed journals and grey literature sources. These criteria are aimed at accepting relevant sources with established editorial policies and quality controls.

  17. Record dynamics

    DEFF Research Database (Denmark)

    Robe, Dominic M.; Boettcher, Stefan; Sibani, Paolo

    2016-01-01

    -facto irreversible and become increasingly harder to achieve. Thus, a progression of record-sized dynamical barriers are traversed in the approach to equilibration. Accordingly, the statistics of the events is closely described by a log-Poisson process. Originally developed for relaxation in spin glasses...

  18. The Use of AJAX in Searching a Bibliographic Database: A Case Study of the Italian Biblioteche Oggi Database

    Science.gov (United States)

    Cavaleri, Piero

    2008-01-01

    Purpose: The purpose of this paper is to describe the use of AJAX for searching the Biblioteche Oggi database of bibliographic records. Design/methodology/approach: The paper is a demonstration of how bibliographic database single page interfaces allow the implementation of more user-friendly features for social and collaborative tasks. Findings:…

  19. Bluetooth wireless database for scoliosis clinics.

    Science.gov (United States)

    Lou, E; Fedorak, M V; Hill, D L; Raso, J V; Moreau, M J; Mahood, J K

    2003-05-01

    A database system with Bluetooth wireless connectivity has been developed so that scoliosis clinics can be run more efficiently and data can be mined for research studies without significant increases in equipment cost. The wireless database system consists of a Bluetooth-enabled laptop or PC and a Bluetooth-enabled handheld personal data assistant (PDA). Each patient has a profile in the database, which has all of his or her clinical history. Immediately prior to the examination, the orthopaedic surgeon selects a patient's profile from the database and uploads that data to the PDA over a Bluetooth wireless connection. The surgeon can view the entire clinical history of the patient while in the examination room and, at the same time, enter in any new measurements and comments from the current examination. After seeing the patient, the surgeon synchronises the newly entered information with the database wirelessly and prints a record for the chart. This combination of the database and the PDA both improves efficiency and accuracy and can save significant time, as there is less duplication of work, and no dictation is required. The equipment required to implement this solution is a Bluetooth-enabled PDA and a Bluetooth wireless transceiver for the PC or laptop.

  20. The Danish Collaborative Bacteraemia Network (DACOBAN) database.

    Science.gov (United States)

    Gradel, Kim Oren; Schønheyder, Henrik Carl; Arpi, Magnus; Knudsen, Jenny Dahl; Ostergaard, Christian; Søgaard, Mette

    2014-01-01

    The Danish Collaborative Bacteraemia Network (DACOBAN) research database includes microbiological data obtained from positive blood cultures from a geographically and demographically well-defined population serviced by three clinical microbiology departments (1.7 million residents, 32% of the Danish population). The database also includes data on comorbidity from the Danish National Patient Registry, vital status from the Danish Civil Registration System, and clinical data on 31% of nonselected records in the database. Use of the unique civil registration number given to all Danish residents enables linkage to additional registries for specific research projects. The DACOBAN database is continuously updated, and it currently comprises 39,292 patients with 49,951 bacteremic episodes from 2000 through 2011. The database is part of an international network of population-based bacteremia registries from five developed countries on three continents. The main purpose of the DACOBAN database is to study surveillance, risk, and prognosis. Sex- and age-specific data on background populations enables the computation of incidence rates. In addition, the high number of patients facilitates studies of rare microorganisms. Thus far, studies on Staphylococcus aureus, enterococci, computer algorithms for the classification of bacteremic episodes, and prognosis and risk in relation to socioeconomic factors have been published.
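
    The incidence rates mentioned above follow from simple arithmetic: bacteremic episodes divided by person-years at risk in the matching background stratum, conventionally scaled to 100,000 person-years. The counts in the sketch below are invented and are not DACOBAN data.

```python
# Incidence rate = episodes / person-years at risk, reported per 100,000 person-years.
# All numbers are invented for illustration; they are not DACOBAN figures.
strata = {
    # (sex, age group): (bacteremic episodes, person-years in the background population)
    ("male", "65-79"):   (1450, 610_000),
    ("female", "65-79"): (1180, 690_000),
}

for (sex, age_group), (episodes, person_years) in strata.items():
    rate = episodes / person_years * 100_000
    print(f"{sex:6s} {age_group}: {rate:7.1f} episodes per 100,000 person-years")
```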

  1. Update History of This Database - Yeast Interacting Proteins Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Update history page for the Yeast Interacting Proteins Database in the LSDB Archive, listing dated update entries (e.g., 2010/03/29) together with links to the database description, download, and license pages.

  2. CDKD: a clinical database of kidney diseases

    Directory of Open Access Journals (Sweden)

    Singh Sanjay

    2012-04-01

    Full Text Available Abstract Background The main function of the kidneys is to remove waste products and excess water from the blood. Loss of kidney function leads to various health issues, such as anemia, high blood pressure, bone disease, and cholesterol disorders. The main objective of this database system is to store the personal and laboratory investigatory details of patients with kidney disease. The emphasis is on experimental results relevant to quantitative renal physiology, with a particular focus on data relevant for evaluation of parameters in statistical models of renal function. Description Clinical database of kidney diseases (CDKD) has been developed with patient confidentiality and data security as a top priority. It can perform comparative analysis of one or more parameters of a patient’s record and includes a whole range of data, such as demographics, medical history, laboratory test results, vital signs, and personal statistics like age and weight. Conclusions The goal of this database is to make kidney-related physiological data easily available to the scientific community and to maintain and retain patient records. As a Web-based application it permits physicians to see, edit, and annotate a patient record from anywhere and at any time while maintaining the confidentiality of the personal record. It also allows statistical analysis of all data.

  3. ATLAS Recordings

    CERN Multimedia

    Steven Goldfarb; Mitch McLachlan; Homer A. Neal

    Web Archives of ATLAS Plenary Sessions, Workshops, Meetings, and Tutorials from 2005 until this past month are available via the University of Michigan portal here. Most recent additions include the Trigger-Aware Analysis Tutorial by Monika Wielers on March 23 and the ROOT Workshop held at CERN on March 26-27. Viewing requires a standard web browser with the RealPlayer plug-in (included in most browsers automatically) and works on any major platform. Lectures can be viewed directly over the web or downloaded locally. In addition, you will find access to a variety of general tutorials and events via the portal. Feedback welcome: our group is making arrangements now to record plenary sessions, tutorials, and other important ATLAS events for 2007. Your suggestions for potential recordings, as well as your feedback on existing archives, are always welcome. Please contact us at wlap@umich.edu. Thank you. Enjoy the Lectures!

  4. Ontological interpretation of biomedical database content.

    Science.gov (United States)

    Santana da Silva, Filipe; Jansen, Ludger; Freitas, Fred; Schulz, Stefan

    2017-06-26

    Biological databases store data about laboratory experiments, together with semantic annotations, in order to support data aggregation and retrieval. The exact meaning of such annotations in the context of a database record is often ambiguous. We address this problem by grounding implicit and explicit database content in a formal-ontological framework. By using a typical extract from the databases UniProt and Ensembl, annotated with content from GO, PR, ChEBI and NCBI Taxonomy, we created four ontological models (in OWL), which generate explicit, distinct interpretations under the BioTopLite2 (BTL2) upper-level ontology. The first three models interpret database entries as individuals (IND), defined classes (SUBC), and classes with dispositions (DISP), respectively; the fourth model (HYBR) is a combination of SUBC and DISP. For the evaluation of these four models, we consider (i) database content retrieval, using ontologies as query vocabulary; (ii) information completeness; and, (iii) DL complexity and decidability. The models were tested under these criteria against four competency questions (CQs). IND does not raise any ontological claim, besides asserting the existence of sample individuals and relations among them. Modelling patterns have to be created for each type of annotation referent. SUBC is interpreted regarding maximally fine-grained defined subclasses under the classes referred to by the data. DISP attempts to extract truly ontological statements from the database records, claiming the existence of dispositions. HYBR is a hybrid of SUBC and DISP and is more parsimonious regarding expressiveness and query answering complexity. For each of the four models, the four CQs were submitted as DL queries. This shows the ability to retrieve individuals with IND, and classes in SUBC and HYBR. DISP does not retrieve anything because the axioms with disposition are embedded in General Class Inclusion (GCI) statements. Ambiguity of biological database content is

  5. Databases of surface wave dispersion

    Directory of Open Access Journals (Sweden)

    L. Boschi

    2005-06-01

    Full Text Available Observations of seismic surface waves provide the most important constraint on the elastic properties of the Earth’s lithosphere and upper mantle. Two databases of fundamental mode surface wave dispersion were recently compiled and published by groups at Harvard (Ekström et al., 1997) and Utrecht/Oxford (Trampert and Woodhouse, 1995, 2001), and later employed in 3-d global tomographic studies. Although based on similar sets of seismic records, the two databases show some significant discrepancies. We derive phase velocity maps from both, and compare them to quantify the discrepancies and assess the relative quality of the data; in this endeavour, we take careful account of the effects of regularization and parametrization. At short periods, where Love waves are mostly sensitive to crustal structure and thickness, we refer our comparison to a map of the Earth’s crust derived from independent data. On the assumption that second-order effects like seismic anisotropy and scattering can be neglected, we find the measurements of Ekström et al. (1997) of better quality; those of Trampert and Woodhouse (2001) result in phase velocity maps of much higher spatial frequency and, accordingly, more difficult to explain and justify geophysically. The discrepancy is partly explained by the more conservative a priori selection of data implemented by Ekström et al. (1997). Nevertheless, it becomes more significant with decreasing period, which indicates that it could also be traced to the different measurement techniques employed by the authors.

  6. Surgery Risk Assessment (SRA) Database

    Data.gov (United States)

    Department of Veterans Affairs — The Surgery Risk Assessment (SRA) database is part of the VA Surgical Quality Improvement Program (VASQIP). This database contains assessments of selected surgical...

  7. Accessing and using chemical databases

    DEFF Research Database (Denmark)

    Nikolov, Nikolai Georgiev; Pavlov, Todor; Niemelä, Jay Russell

    2013-01-01

    Computer-based representation of chemicals makes it possible to organize data in chemical databases-collections of chemical structures and associated properties. Databases are widely used wherever efficient processing of chemical information is needed, including search, storage, retrieval......, and dissemination. Structure and functionality of chemical databases are considered. The typical kinds of information found in a chemical database are considered-identification, structural, and associated data. Functionality of chemical databases is presented, with examples of search and access types. More details...... are included about the OASIS database and platform and the Danish (Q)SAR Database online. Various types of chemical database resources are discussed, together with a list of examples....

  8. Danish clinical databases: An overview

    DEFF Research Database (Denmark)

    Green, Anders

    2011-01-01

    Clinical databases contain data related to diagnostic procedures, treatments and outcomes. In 2001, a scheme was introduced for the approval, supervision and support to clinical databases in Denmark....

  9. ADANS database specification

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-01-16

    The purpose of the Air Mobility Command (AMC) Deployment Analysis System (ADANS) Database Specification (DS) is to describe the database organization and storage allocation and to provide the detailed data model of the physical design and information necessary for the construction of the parts of the database (e.g., tables, indexes, rules, defaults). The DS includes entity relationship diagrams, table and field definitions, reports on other database objects, and a description of the ADANS data dictionary. ADANS is the automated system used by Headquarters AMC and the Tanker Airlift Control Center (TACC) for airlift planning and scheduling of peacetime and contingency operations as well as for deliberate planning. ADANS also supports planning and scheduling of Air Refueling Events by the TACC and the unit-level tanker schedulers. ADANS receives input in the form of movement requirements and air refueling requests. It provides a suite of tools for planners to manipulate these requirements/requests against mobility assets and to develop, analyze, and distribute schedules. Analysis tools are provided for assessing the products of the scheduling subsystems, and editing capabilities support the refinement of schedules. A reporting capability provides formatted screen, print, and/or file outputs of various standard reports. An interface subsystem handles message traffic to and from external systems. The database is an integral part of the functionality summarized above.

  10. The CAPEC Database

    DEFF Research Database (Denmark)

    Nielsen, Thomas Lund; Abildskov, Jens; Harper, Peter Mathias

    2001-01-01

    The Computer-Aided Process Engineering Center (CAPEC) database of measured data was established with the aim to promote greater data exchange in the chemical engineering community. The target properties are pure component properties, mixture properties, and special drug solubility data. The datab......The Computer-Aided Process Engineering Center (CAPEC) database of measured data was established with the aim to promote greater data exchange in the chemical engineering community. The target properties are pure component properties, mixture properties, and special drug solubility data....... The database divides pure component properties into primary, secondary, and functional properties. Mixture properties are categorized in terms of the number of components in the mixture and the number of phases present. The compounds in the database have been classified on the basis of the functional groups...... in the compound. This classification makes the CAPEC database a very useful tool, for example, in the development of new property models, since properties of chemically similar compounds are easily obtained. A program with efficient search and retrieval functions of properties has been developed....

  11. FishTraits Database

    Science.gov (United States)

    Angermeier, Paul L.; Frimpong, Emmanuel A.

    2009-01-01

    The need for integrated and widely accessible sources of species traits data to facilitate studies of ecology, conservation, and management has motivated development of traits databases for various taxa. In spite of the increasing number of traits-based analyses of freshwater fishes in the United States, no consolidated database of traits of this group exists publicly, and much useful information on these species is documented only in obscure sources. The largely inaccessible and unconsolidated traits information makes large-scale analysis involving many fishes and/or traits particularly challenging. FishTraits is a database of >100 traits for 809 (731 native and 78 exotic) fish species found in freshwaters of the conterminous United States, including 37 native families and 145 native genera. The database contains information on four major categories of traits: (1) trophic ecology, (2) body size and reproductive ecology (life history), (3) habitat associations, and (4) salinity and temperature tolerances. Information on geographic distribution and conservation status is also included. Together, we refer to the traits, distribution, and conservation status information as attributes. Descriptions of attributes are available here. Many sources were consulted to compile attributes, including state and regional species accounts and other databases.

  12. The Danish Depression Database

    Directory of Open Access Journals (Sweden)

    Videbech P

    2016-10-01

    Full Text Available Poul Videbech,1 Anette Deleuran2 1Mental Health Centre Glostrup, Department of Clinical Medicine, University of Copenhagen, Glostrup, 2Psychiatric Centre Amager, Copenhagen S, Denmark Aim of database: The purpose of the Danish Depression Database (DDD) is to monitor and facilitate the improvement of the quality of the treatment of depression in Denmark. Furthermore, the DDD has been designed to facilitate research. Study population: Inpatients as well as outpatients with depression, aged above 18 years, and treated in the public psychiatric hospital system were enrolled. Main variables: Variables include whether the patient has been thoroughly somatically examined and has been interviewed about the psychopathology by a specialist in psychiatry. The Hamilton score as well as an evaluation of the risk of suicide are measured before and after treatment. Whether psychiatric aftercare has been scheduled for inpatients and the rate of rehospitalization are also registered. Descriptive data: The database was launched in 2011. Every year since then ~5,500 inpatients and 7,500 outpatients have been registered annually in the database. A total of 24,083 inpatients and 29,918 outpatients have been registered. The DDD produces an annual report published on the Internet. Conclusion: The DDD can become an important tool for quality improvement and research, when the reporting is more complete. Keywords: quality assurance, suicide, somatic diseases, national database

  13. The Chandra Bibliography Database

    Science.gov (United States)

    Rots, A. H.; Winkelman, S. L.; Paltani, S.; Blecksmith, S. E.; Bright, J. D.

    2004-07-01

    Early in the mission, the Chandra Data Archive started the development of a bibliography database, tracking publications in refereed journals and on-line conference proceedings that are based on Chandra observations, allowing our users to link directly to articles in the ADS from our archive, and to link to the relevant data in the archive from the ADS entries. Subsequently, we have been working closely with the ADS and other data centers, in the context of the ADEC-ITWG, on standardizing the literature-data linking. We have also extended our bibliography database to include all Chandra-related articles and we are also keeping track of the number of citations of each paper. Obviously, in addition to providing valuable services to our users, this database allows us to extract a wide variety of statistical information. The project comprises five components: the bibliography database proper, a maintenance database, an interactive maintenance tool, a user browsing interface, and a web services component for exchanging information with the ADS. All of these elements are nearly mission-independent and we intend to make the package as a whole available for use by other data centers. The capabilities thus provided represent support for an essential component of the Virtual Observatory.

  14. 76 FR 76215 - Privacy Act; System of Records: State-78, Risk Analysis and Management Records

    Science.gov (United States)

    2011-12-06

    ... investigation records, investigatory material for law enforcement purposes, and confidential source information... Unclassified computer network. Vetting requests, analyses, and results will be stored separately on a classified computer network. Both computer networks and the RAM database require a user identification...

  15. Open Geoscience Database

    Science.gov (United States)

    Bashev, A.

    2012-04-01

    Currently there is an enormous number of geoscience databases. Unfortunately, the only users of the majority of these databases are their developers. There are several reasons for that: incompatibility, specificity of tasks and objects, and so on. However, the main obstacles to wide usage of geoscience databases are their complexity for developers and their complication for users. The complexity of the architecture leads to high costs that block public access. The complication prevents users from understanding when and how to use the database. Only databases associated with GoogleMaps avoid these drawbacks, but they can hardly be called "geoscience". Nevertheless, an open and simple geoscience database is necessary at least for educational purposes (see our abstract for ESSI20/EOS12). We developed a database and a web interface to work with it, and it is now accessible at maps.sch192.ru. In this database a result is a value of a parameter (no matter which) at a station with a certain position, associated with metadata: the date when the result was obtained; the type of station (lake, soil, etc.); and the contributor that sent the result. Each contributor has their own profile, which allows the reliability of the data to be estimated. The results can be represented on the GoogleMaps space image as a point at a certain position, coloured according to the value of the parameter. There are default colour scales, and each registered user can create their own scale. The results can also be extracted as a *.csv file. For both types of representation one can select the data by date, object type, parameter type, area, and contributor. The data are uploaded in *.csv format: Name of the station; Latitude(dd.dddddd); Longitude(ddd.dddddd); Station type; Parameter type; Parameter value; Date(yyyy-mm-dd). The contributor is recognised while entering. This is the minimal set of features that is required to connect a value of a parameter with a position and see the results. All the complicated data
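
    The upload format quoted above (station name, latitude, longitude, station type, parameter type, parameter value, date) is easy to parse; the sketch below assumes semicolon-delimited rows and uses an invented sample record rather than real data from maps.sch192.ru.

```python
import csv
from io import StringIO

# One invented record in the upload format described above, assuming ';' as the delimiter:
# Name; Latitude(dd.dddddd); Longitude(ddd.dddddd); Station type; Parameter type; Value; Date
sample = "Example Lake;55.123456;037.654321;lake;pH;7.4;2011-08-15\n"

fields = ["name", "latitude", "longitude", "station_type", "parameter", "value", "date"]
for row in csv.reader(StringIO(sample), delimiter=";"):
    record = dict(zip(fields, row))
    record["latitude"] = float(record["latitude"])
    record["longitude"] = float(record["longitude"])
    record["value"] = float(record["value"])
    print(record)
```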

  16. DEPOT database: Reference manual and user's guide

    Energy Technology Data Exchange (ETDEWEB)

    Clancey, P.; Logg, C.

    1991-03-01

    DEPOT has been developed to provide tracking for the Stanford Linear Collider (SLC) control system equipment. For each piece of equipment entered into the database, complete location, service, maintenance, modification, certification, and radiation exposure histories can be maintained. To facilitate data entry accuracy, efficiency, and consistency, barcoding technology has been used extensively. DEPOT has been an important tool in improving the reliability of the microsystems controlling SLC. This document describes the components of the DEPOT database, the elements in the database records, and the use of the supporting programs for entering data, searching the database, and producing reports from the information.

  17. [The MEDAN database: patients with abdominal septic shock].

    Science.gov (United States)

    Paetz, J; Erz, K; Arlt, B; Hanisch, E

    2003-04-01

    Septic shock still has an unacceptably high mortality rate. To lower this high mortality rate in the long run, we built a database that is unique in the amount of data it contains. So far we have transferred 282 handwritten patient records into our database. Data were collected retrospectively from 1997 to 2001, based on the voluntary cooperation of 62 hospitals. Using the preprocessed data in our database, we mainly give an epidemiologic overview and make initial statistical evaluations. In doing so, we noticed that some diagnoses and operations appear significantly more often among deceased patients than among survivors. Finally, we discuss the future potential of the database.

  18. Database Vs Data Warehouse

    Directory of Open Access Journals (Sweden)

    2007-01-01

    Full Text Available Data warehouse technology includes a set of concepts and methods that offer users useful information for decision making. The necessity to build a data warehouse arises from the necessity to improve the quality of information in the organization. The data coming from different sources, having a variety of forms - both structured and unstructured - are filtered according to business rules and integrated into a single large data collection. Using informatics solutions, managers have understood that data stored in operational systems - including databases - are an informational gold mine that must be exploited. Data warehouses have been developed to answer the increasing demands for complex analysis, which could not be properly achieved with operational databases. The present paper emphasizes some of the criteria that information application developers can use in order to choose between a database solution and a data warehouse solution.

  19. The LHCb configuration database

    CERN Document Server

    Abadie, Lana; Gaspar, Clara; Jacobsson, Richard; Jost, Beat; Neufeld, Niko

    2005-01-01

    The Experiment Control System (ECS) will handle the monitoring, configuration and operation of all the LHCb experimental equipment. All parameters required to configure electronics equipment under the control of the ECS will reside in a configuration database. The database will contain two kinds of information: 1. Configuration properties about devices such as hardware addresses, geographical location, and operational parameters associated with particular running modes (dynamic properties). 2. Connectivity between devices: this consists of describing the output and input connections of a device (static properties). The representation of these data using tables must be complete so that it can provide all the required information to the ECS and must cater for all the subsystems. The design should also guarantee a fast response time, even if a query results in a large volume of data being loaded from the database into the ECS. To fulfil these constraints, we apply the following methodology: Determine from the d...

  20. Mouse genome database 2016.

    Science.gov (United States)

    Bult, Carol J; Eppig, Janan T; Blake, Judith A; Kadin, James A; Richardson, Joel E

    2016-01-01

    The Mouse Genome Database (MGD; http://www.informatics.jax.org) is the primary community model organism database for the laboratory mouse and serves as the source for key biological reference data related to mouse genes, gene functions, phenotypes and disease models with a strong emphasis on the relationship of these data to human biology and disease. As the cost of genome-scale sequencing continues to decrease and new technologies for genome editing become widely adopted, the laboratory mouse is more important than ever as a model system for understanding the biological significance of human genetic variation and for advancing the basic research needed to support the emergence of genome-guided precision medicine. Recent enhancements to MGD include new graphical summaries of biological annotations for mouse genes, support for mobile access to the database, tools to support the annotation and analysis of sets of genes, and expanded support for comparative biology through the expansion of homology data.

  1. Database Application Schema Forensics

    Directory of Open Access Journals (Sweden)

    Hector Quintus Beyers

    2014-12-01

    Full Text Available The application schema layer of a Database Management System (DBMS) can be modified to deliver results that may warrant a forensic investigation. Table structures can be corrupted by changing the metadata of a database, or operators of the database can be altered to deliver incorrect results when used in queries. This paper will discuss categories of possibilities that exist to alter the application schema, with some practical examples. Two forensic environments in which a forensic investigation can take place are introduced. Arguments are provided as to why these environments are important. Methods are presented for how these environments can be achieved for the application schema layer of a DBMS. A process is proposed for how forensic evidence should be extracted from the application schema layer of a DBMS. The application schema forensic evidence identification process can be applied to a wide range of forensic settings.

  2. Towards Sensor Database Systems

    DEFF Research Database (Denmark)

    Bonnet, Philippe; Gehrke, Johannes; Seshadri, Praveen

    2001-01-01

    Sensor networks are being widely deployed for measurement, detection and surveillance applications. In these new applications, users issue long-running queries over a combination of stored data and sensor data. Most existing applications rely on a centralized system for collecting sensor data....... These systems lack flexibility because data is extracted in a predefined way; also, they do not scale to a large number of devices because large volumes of raw data are transferred regardless of the queries that are submitted. In our new concept of sensor database system, queries dictate which data is extracted...... from the sensors. In this paper, we define the concept of sensor databases mixing stored data represented as relations and sensor data represented as time series. Each long-running query formulated over a sensor database defines a persistent view, which is maintained during a given time interval. We...

  3. The Danish Melanoma Database

    DEFF Research Database (Denmark)

    Hölmich, Lisbet Rosenkrantz; Klausen, Siri; Spaun, Eva;

    2016-01-01

    AIM OF DATABASE: The aim of the database is to monitor and improve the treatment and survival of melanoma patients. STUDY POPULATION: All Danish patients with cutaneous melanoma and in situ melanomas must be registered in the Danish Melanoma Database (DMD). In 2014, 2,525 patients with invasive...... melanoma and 780 with in situ tumors were registered. The coverage is currently 93% compared with the Danish Pathology Register. MAIN VARIABLES: The main variables include demographic, clinical, and pathological characteristics, including Breslow's tumor thickness, ± ulceration, mitoses, and tumor...... quality register. The coverage is high, and the performance in the five Danish regions is quite similar due to strong adherence to guidelines provided by the Danish Melanoma Group. The list of monitored indicators is constantly expanding, and annual quality reports are issued. Several important scientific...

  4. The Danish Sarcoma Database

    DEFF Research Database (Denmark)

    Jørgensen, Peter Holmberg; Lausten, Gunnar Schwarz; Pedersen, Alma B

    2016-01-01

    AIM: The aim of the database is to gather information about sarcomas treated in Denmark in order to continuously monitor and improve the quality of sarcoma treatment in a local, a national, and an international perspective. STUDY POPULATION: Patients in Denmark diagnosed with a sarcoma, both...... skeletal and extraskeletal, have been registered since 2009. MAIN VARIABLES: The database contains information about appearance of symptoms; date of receiving referral to a sarcoma center; date of first visit; whether surgery has been performed elsewhere before referral, diagnosis, and treatment; tumor...... of Diseases - tenth edition codes and TNM Classification of Malignant Tumours, and date of death (after yearly coupling to the Danish Civil Registration System). Data quality and completeness are currently secured. CONCLUSION: The Danish Sarcoma Database is population based and includes sarcomas occurring...

  5. DistiLD Database

    DEFF Research Database (Denmark)

    Palleja, Albert; Horn, Heiko; Eliasson, Sabrina

    2012-01-01

    Genome-wide association studies (GWAS) have identified thousands of single nucleotide polymorphisms (SNPs) associated with the risk of hundreds of diseases. However, there is currently no database that enables non-specialists to answer the following simple questions: which SNPs associated...... blocks, so that SNPs in LD with each other are preferentially in the same block, whereas SNPs not in LD are in different blocks. By projecting SNPs and genes onto LD blocks, the DistiLD database aims to increase usage of existing GWAS results by making it easy to query and visualize disease......-associated SNPs and genes in their chromosomal context. The database is available at http://distild.jensenlab.org/....

  6. Harmonization of Databases

    DEFF Research Database (Denmark)

    Charlifue, Susan; Tate, Denise; Biering-Sorensen, Fin

    2016-01-01

    The objectives of this article are to (1) provide an overview of existing spinal cord injury (SCI) clinical research databases-their purposes, characteristics, and accessibility to users; and (2) present a vision for future collaborations required for cross-cutting research in SCI. This vision...... strengths and weaknesses. Efforts to provide a uniform approach to data collection are also reviewed. The databases reviewed offer different approaches to capture important clinical information on SCI. They vary on size, purpose, data points, inclusion of standard outcomes, and technical requirements. Each...... highlights the need for validated and relevant data for longitudinal clinical trials and observational and epidemiologic SCI-related studies. Three existing SCI clinical research databases/registries are reviewed and summarized with regard to current formats, collection methods, and uses, including major...

  7. Medical database security evaluation.

    Science.gov (United States)

    Pangalos, G J

    1993-01-01

    Users of medical information systems need confidence in the security of the system they are using. They also need a method to evaluate and compare its security capabilities. Every system has its own requirements for maintaining confidentiality, integrity and availability. In order to meet these requirements a number of security functions must be specified covering areas such as access control, auditing, error recovery, etc. Appropriate confidence in these functions is also required. The 'trust' in trusted computer systems rests on their ability to prove that their secure mechanisms work as advertised and cannot be disabled or diverted. The general framework and requirements for medical database security and a number of parameters of the evaluation problem are presented and discussed. The problem of database security evaluation is then discussed, and a number of specific proposals are presented, based on a number of existing medical database security systems.

  8. Using the ENF Criterion for Determining the Time of Recording of Short Digital Audio Recordings

    Science.gov (United States)

    Huijbregtse, Maarten; Geradts, Zeno

    The Electric Network Frequency (ENF) Criterion is a recently developed forensic technique for determining the time of recording of digital audio recordings, by matching the ENF pattern from a questioned recording with an ENF pattern database. In this paper we discuss its inherent limitations in the case of short - i.e., less than 10 minutes in duration - digital audio recordings. We also present a matching procedure based on the correlation coefficient, as a more robust alternative to squared error matching.
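
    The matching step lends itself to a short illustration. The sketch below (Python, with illustrative names and synthetic data, not the authors' implementation) scores every alignment of a short questioned ENF trace against a long database reference using both squared-error and correlation-coefficient matching, the two criteria compared above.

    import numpy as np

    def match_enf(questioned, reference):
        """Score every alignment of a short ENF trace against a long reference.

        Both inputs are 1-D arrays of mains-frequency estimates (e.g. one value
        per second). Returns the best offsets under squared-error matching and
        under correlation-coefficient matching.
        """
        n, m = len(questioned), len(reference)
        sq_err = np.full(m - n + 1, np.inf)
        corr = np.full(m - n + 1, -np.inf)
        for k in range(m - n + 1):
            window = reference[k:k + n]
            sq_err[k] = np.sum((questioned - window) ** 2)
            corr[k] = np.corrcoef(questioned, window)[0, 1]
        return int(np.argmin(sq_err)), int(np.argmax(corr))

    # Synthetic example: a 5-minute questioned trace hidden in a 2-hour reference.
    rng = np.random.default_rng(0)
    reference = 50.0 + 0.01 * np.cumsum(rng.standard_normal(7200))
    questioned = reference[4000:4300] + 0.002 * rng.standard_normal(300)
    print(match_enf(questioned, reference))  # both offsets should be near 4000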

  9. The Danish Sarcoma Database

    Directory of Open Access Journals (Sweden)

    Jorgensen PH

    2016-10-01

    Full Text Available Peter Holmberg Jørgensen,1 Gunnar Schwarz Lausten,2 Alma B Pedersen3 1Tumor Section, Department of Orthopedic Surgery, Aarhus University Hospital, Aarhus, 2Tumor Section, Department of Orthopedic Surgery, Rigshospitalet, Copenhagen, 3Department of Clinical Epidemiology, Aarhus University Hospital, Aarhus, Denmark Aim: The aim of the database is to gather information about sarcomas treated in Denmark in order to continuously monitor and improve the quality of sarcoma treatment in a local, a national, and an international perspective. Study population: Patients in Denmark diagnosed with a sarcoma, both skeletal and extraskeletal, are to be registered since 2009. Main variables: The database contains information about appearance of symptoms; date of receiving referral to a sarcoma center; date of first visit; whether surgery has been performed elsewhere before referral, diagnosis, and treatment; tumor characteristics such as location, size, malignancy grade, and growth pattern; details on treatment (kind of surgery, amount of radiation therapy, type and duration of chemotherapy); complications of treatment; local recurrence and metastases; and comorbidity. In addition, several quality indicators are registered in order to measure the quality of care provided by the hospitals and make comparisons between hospitals and with international standards. Descriptive data: Demographic patient-specific data such as age, sex, region of living, comorbidity, World Health Organization's International Classification of Diseases – tenth edition codes and TNM Classification of Malignant Tumours, and date of death (after yearly coupling to the Danish Civil Registration System). Data quality and completeness are currently secured. Conclusion: The Danish Sarcoma Database is population based and includes sarcomas occurring in Denmark since 2009. It is a valuable tool for monitoring sarcoma incidence and quality of treatment and its improvement, postoperative

  10. Atlas of Iberian water beetles (ESACIB database).

    Science.gov (United States)

    Sánchez-Fernández, David; Millán, Andrés; Abellán, Pedro; Picazo, Félix; Carbonell, José A; Ribera, Ignacio

    2015-01-01

    The ESACIB ('EScarabajos ACuáticos IBéricos') database is provided, including all available distributional data of Iberian and Balearic water beetles from the literature up to 2013, as well as from museum and private collections, PhD theses, and other unpublished sources. The database contains 62,015 records with associated geographic data (10×10 km UTM squares) for 488 species and subspecies of water beetles, 120 of them endemic to the Iberian Peninsula and eight to the Balearic Islands. This database was used for the elaboration of the "Atlas de los Coleópteros Acuáticos de España Peninsular". In this dataset, data for 15 additional species have been added: 11 that occur in the Balearic Islands or mainland Portugal but not in peninsular Spain, and another four with mainly terrestrial habits within the genus Helophorus (for taxonomic coherence). The complete dataset is provided in Darwin Core Archive format.

  11. Database Description - KOME | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available KOME Database... Description General information of database Database name Knowledge-based Oryza Molecular biol...baraki 305-8602, Japan National Institute of Agrobiological Sciences Plant Genome Research Unit Shoshi Kikuchi E-mail : Database... classification Plant databases - Rice Organism Taxonomy Name: Oryza sativa Taxonomy ID: 4530 Database...A clones that were completely sequenced in the Rice full-length cDNA project is shown in the database. The f

  12. Database Description - GETDB | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available GETDB Database Description General information of database Database name GETDB Alternative n...ame Gal4 Enhancer Trap Insertion Database DOI 10.18908/lsdba.nbdc00236-000 Creator Creator Name: Shigeo Haya... Chuo-ku, Kobe 650-0047 Tel: +81-78-306-3185 FAX: +81-78-306-3183 E-mail: Database classification Expression... Invertebrate genome database Organism Taxonomy Name: Drosophila melanogaster Taxonomy ID: 7227 Database des...cription About 4,600 insertion lines of enhancer trap lines based on the Gal4-UAS

  13. C# Database Basics

    CERN Document Server

    Schmalz, Michael

    2012-01-01

    Working with data and databases in C# certainly can be daunting if you're coming from VB6, VBA, or Access. With this hands-on guide, you'll shorten the learning curve considerably as you master accessing, adding, updating, and deleting data with C# - basic skills you need if you intend to program with this language. No previous knowledge of C# is necessary. By following the examples in this book, you'll learn how to tackle several database tasks in C#, such as working with SQL Server, building data entry forms, and using data in a web service. The book's code samples will help you get started

  14. Rett networked database

    DEFF Research Database (Denmark)

    Grillo, Elisa; Villard, Laurent; Clarke, Angus

    2012-01-01

    underlie some (usually variant) cases. There is only limited correlation between genotype and phenotype. The Rett Networked Database (http://www.rettdatabasenetwork.org/) has been established to share clinical and genetic information. Through an "adaptor" process of data harmonization, a set of 293...... clinical items and 16 genetic items was generated; 62 clinical and 7 genetic items constitute the core dataset; 23 clinical items contain longitudinal information. The database contains information on 1838 patients from 11 countries (December 2011), with or without mutations in known genes. These numbers...

  15. MARKS ON ART database

    DEFF Research Database (Denmark)

    van Vlierden, Marieke; Wadum, Jørgen; Wolters, Margreet

    2016-01-01

    Master's marks, monograms, and quality marks are often found embossed or stamped on works of art from 1300-1700. An illustrated database of these types of marks is being established at the Netherlands Institute for Art History (RKD) in The Hague.

  16. The CATH database

    Directory of Open Access Journals (Sweden)

    Knudsen Michael

    2010-02-01

    Full Text Available Abstract The CATH database provides hierarchical classification of protein domains based on their folding patterns. Domains are obtained from protein structures deposited in the Protein Data Bank and both domain identification and subsequent classification use manual as well as automated procedures. The accompanying website http://www.cathdb.info provides an easy-to-use entry to the classification, allowing for both browsing and downloading of data. Here, we give a brief review of the database, its corresponding website and some related tools.

  17. AACR2: OCLC's Implementation and Database

    Directory of Open Access Journals (Sweden)

    Georgia L. Brown

    1981-09-01

    Full Text Available OCLC's Online Union Catalog (OLUC) contains bibliographic records created under various cataloging guidelines. Until December 1980, no system-wide attempt had been made to resolve record conflicts caused by use of the different guidelines. The introduction of the new guidelines, the Anglo-American Cataloguing Rules, Second Edition (AACR2), exacerbated these record conflicts. To reduce library costs, which might increase dramatically as users attempted to resolve those conflicts, OCLC converted name headings and uniform titles in its database to AACR2 form. The purpose of the conversion was to resolve record conflicts that resulted from rule changes and to conform to LC preferred forms of heading if possible.

  18. Delaware Bay Database; Delaware Sea Grant College Program, 28 June 1988 (NODC Accession 8900151)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Delaware Bay database contains records of discrete quality observations, collected on 40 oceanographic cruises between May 1978 and October 1985. Each record...

  19. Database management systems understanding and applying database technology

    CERN Document Server

    Gorman, Michael M

    1991-01-01

    Database Management Systems: Understanding and Applying Database Technology focuses on the processes, methodologies, techniques, and approaches involved in database management systems (DBMSs). The book first takes a look at ANSI database standards and DBMS applications and components. Discussions focus on application components and DBMS components, implementing the dynamic relationship application, problems and benefits of dynamic relationship DBMSs, nature of a dynamic relationship application, ANSI/NDL, and DBMS standards. The manuscript then ponders on logical database, interrogation, and phy

  20. Accident/Incident Data Analysis Database Summaries. Volume 1

    Science.gov (United States)

    1989-03-01

    provides the record information to the database. Such information could provide an insight into the quality of the record information. Criteria for...Limitations/Caveats/Biases: In 1984 an automatic error detection system was introduced at all domestic ARTCCs. This is a software program that...groups, including administrative, time, aircraft, location, person, weather, software, conflicts, major classifications, text, and diagnostics. The

  1. Record club

    CERN Multimedia

    Record club

    2010-01-01

    Hello everyone, Here are the 24 new DVDs for July, available for a few days now, not forgetting the 5 pop music CDs. Discover the saga of the terrorist Carlos, the life of Gainsbourg and the adventures of Lucky Luke; get your thrills with Paranormal Activity and escape to Pandora in the skin of Avatar. All the new releases can be discovered directly at the club. For the complete list, as well as the rest of the Record Club collection, we invite you to visit our website: http://cern.ch/crc. All the latest additions are in the "Discs of the Month" section. Reminder: the club is open on Mondays, Wednesdays and Fridays from 12:30 to 13:00 in Restaurant No. 2, Building 504. See you soon, dear Record Clubbers.

  2. Record Club

    CERN Multimedia

    Record Club

    2011-01-01

    http://cern.ch/Record.Club November  Selections Just in time for the holiday season, we have added a number of new CDs and DVDs into the Club. You will find the full lists at http://cern.ch/record.club; select the "Discs of the Month" button on the left side on the left panel of the web page and then Nov 2011. New films include the all 5 episodes of Fast and Furious, many of the most famous films starring Jean-Paul Belmondo and those of Louis de Funes and some more recent films such as The Lincoln Lawyer and, according to some critics, Woody Allen’s best film for years – Midnight in Paris. For the younger generation there is Cars 2 and Kung Fu Panda 2. New CDs include the latest releases by Adele, Coldplay and the Red Hot Chili Peppers. We have also added the new Duets II CD featuring Tony Bennett singing with some of today’s pop stars including Lady Gaga, Amy Winehouse and Willy Nelson. The Club is now open every Monday, Wednesday and Friday ...

  3. ATLAS Recordings

    CERN Multimedia

    Jeremy Herr; Homer A. Neal; Mitch McLachlan

    The University of Michigan Web Archives for the 2006 ATLAS Week Plenary Sessions, as well as the first of 2007, are now online. In addition, there is a wide variety of Software and Physics Tutorial sessions, recorded over the past couple of years, to choose from. All ATLAS-specific archives are accessible here. Viewing requires a standard web browser with RealPlayer plug-in (included in most browsers automatically) and works on any major platform. Lectures can be viewed directly over the web or downloaded locally. In addition, you will find access to a variety of general tutorials and events via the portal. Shaping Collaboration 2006: The Michigan group is happy to announce a complete set of recordings from the Shaping Collaboration conference held last December at the CICG in Geneva. The event hosted a mix of Collaborative Tool experts and LHC Users, and featured presentations by the CERN Deputy Director General, Prof. Jos Engelen, the President of Internet2, and chief developers from VRVS/EVO, WLAP, and other tools...

  4. Record Club

    CERN Multimedia

    Record Club

    2011-01-01

    http://cern.ch/Record.Club Summer 2011 new releases The CD and DVD rental club has just added a large number of discs for summer 2011. Among them, Le Discours d'un Roi (The King's Speech), winner of the 2011 Oscar for Best Picture, and Harry Potter and the Deathly Hallows (Part 1). No fewer than 48 new DVDs and 10 new CDs are available for rental, with something for every genre. Do not hesitate to visit our site http://cern.ch/record.club and go to Disc Catalogue, Discs of the Month for the complete list. The club is open every Monday, Wednesday and Friday from 12:30 to 13:00 in the Restaurant No. 2 building (see http://www.cern.ch/map/building?bno=504). See you very soon.

  5. Record Club

    CERN Multimedia

    Record Club

    2011-01-01

    http://cern.ch/Record.Club June Selections We have put a significant number of new CDs and DVDs into the Club You will find the full lists at http://cern.ch/record.club and select the «Discs of the Month» button on the left side on the left panel of the web page and then June 2011. New films include the latest Action, Suspense and Science Fiction film hits, general drama movies including the Oscar-winning The King’s Speech, comedies including both chapter of Bridget Jones’s Diary, seven films for children and a musical. Other highlights include the latest Harry Potter release and some movies from the past you may have missed including the first in the Terminator series. New CDs include the latest releases by Michel Sardou, Mylene Farmer, Jennifer Lopez, Zucchero and Britney Spears. There is also a hits collection from NRJ. Don’t forget that the Club is now open every Monday, Wednesday and Friday lunchtimes from 12h30 to 13h00 in Restaurant 2, Building 504. (C...

  6. Design and development a children's speech database

    OpenAIRE

    Kraleva, Radoslava

    2016-01-01

    The report presents the process of planning, designing, and developing a database of spoken speech from children whose native language is Bulgarian. The proposed model is designed for children between the ages of 4 and 6 without speech disorders, and reflects their specific capabilities. At this age most children cannot read, have no sustained concentration, are emotional, etc. The aim is to unite all the media information accompanying the recording and processing of spoken speech...

  7. MARC and Relational Databases.

    Science.gov (United States)

    Llorens, Jose; Trenor, Asuncion

    1993-01-01

    Discusses the use of MARC format in relational databases and addresses problems of incompatibilities. A solution is presented that is in accordance with Open Systems Interconnection (OSI) standards and is based on experiences at the library of the Universidad Politecnica de Valencia (Spain). (four references) (EA)

  8. NoSQL Databases

    OpenAIRE

    2014-01-01

    In this document, I present the main notions of NoSQL databases and compare four selected products (Riak, MongoDB, Cassandra, Neo4J) according to their capabilities with respect to consistency, availability, and partition tolerance, as well as performance. I also propose a few criteria for selecting the right tool for the right situation.

  9. Dansk kolorektal Cancer Database

    DEFF Research Database (Denmark)

    Harling, Henrik; Nickelsen, Thomas

    2005-01-01

    The Danish Colorectal Cancer Database was established in 1994 with the purpose of monitoring whether diagnostic and surgical principles specified in the evidence-based national guidelines of good clinical practice were followed. Twelve clinical indicators have been listed by the Danish Colorectal...

  10. Database Programming Languages

    DEFF Research Database (Denmark)

    This volume contains the proceedings of the 11th International Symposium on Database Programming Languages (DBPL 2007), held in Vienna, Austria, on September 23-24, 2007. DBPL 2007 was one of 15 meetings co-located with VLDB (the International Conference on Very Large Data Bases). DBPL continues...

  11. Database for West Africa

    African Journals Online (AJOL)

    Such a database can prove an invaluable source of information for a wide range of agricultural and ... national soil classification systems around the world ... West African Journal of Applied Ecology, vol. .... SDB FAO-ISRIC English, French, Spanish Morphology and analytical ..... Furthermore, it will enhance the state of soil.

  12. Food composition databases

    Science.gov (United States)

    Food composition is the determination of what is in the foods we eat and is the critical bridge between nutrition, health promotion and disease prevention and food production. Compilation of data into useable databases is essential to the development of dietary guidance for individuals and populat...

  13. The Ribosomal Database Project

    Science.gov (United States)

    Olsen, G. J.; Overbeek, R.; Larsen, N.; Marsh, T. L.; McCaughey, M. J.; Maciukenas, M. A.; Kuan, W. M.; Macke, T. J.; Xing, Y.; Woese, C. R.

    1992-01-01

    The Ribosomal Database Project (RDP) compiles ribosomal sequences and related data, and redistributes them in aligned and phylogenetically ordered form to its user community. It also offers various software packages for handling, analyzing and displaying sequences. In addition, the RDP offers (or will offer) certain analytic services. At present the project is in an intermediate stage of development.

  14. Hydrocarbon Spectral Database

    Science.gov (United States)

    SRD 115 Hydrocarbon Spectral Database (Web, free access)   All of the rotational spectral lines observed and reported in the open literature for 91 hydrocarbon molecules have been tabulated. The isotopic molecular species, assigned quantum numbers, observed frequency, estimated measurement uncertainty and reference are given for each transition reported.

  15. Database on wind characteristics

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, K.S. [The Technical Univ. of Denmark (Denmark); Courtney, M.S. [Risoe National Lab., (Denmark)

    1999-08-01

    The organisations that participated in the project consist of five research organisations: MIUU (Sweden), ECN (The Netherlands), CRES (Greece), DTU (Denmark), Risoe (Denmark) and one wind turbine manufacturer: Vestas Wind System A/S (Denmark). The overall goal was to build a database consisting of a large number of wind speed time series and create tools for efficiently searching through the data to select interesting data. The project resulted in a database located at DTU, Denmark with online access through the Internet. The database contains more than 50,000 hours of wind speed measurements. A wide range of wind climates and terrain types are represented with significant amounts of time series. Data have been chosen selectively with a deliberate over-representation of high wind and complex terrain cases. This makes the database ideal for wind turbine design needs but completely unsuitable for resource studies. Diversity has also been an important aim and this is realised with data from a large range of terrain types; everything from offshore to mountain, from Norway to Greece. (EHS)

  16. Databases and data mining

    Science.gov (United States)

    Over the course of the past decade, the breadth of information that is made available through online resources for plant biology has increased astronomically, as have the interconnectedness among databases, online tools, and methods of data acquisition and analysis. For maize researchers, the numbe...

  17. The AMMA database

    Science.gov (United States)

    Boichard, Jean-Luc; Brissebrat, Guillaume; Cloche, Sophie; Eymard, Laurence; Fleury, Laurence; Mastrorillo, Laurence; Moulaye, Oumarou; Ramage, Karim

    2010-05-01

    The AMMA project includes aircraft, ground-based and ocean measurements, an intensive use of satellite data and diverse modelling studies. Therefore, the AMMA database aims at storing a great amount and a large variety of data, and at providing the data as rapidly and safely as possible to the AMMA research community. In order to stimulate the exchange of information and collaboration between researchers from different disciplines or using different tools, the database provides a detailed description of the products and uses standardized formats. The AMMA database contains: - AMMA field campaign datasets; - historical data in West Africa from 1850 (operational networks and previous scientific programs); - satellite products from past and future satellites, (re-)mapped on a regular latitude/longitude grid and stored in NetCDF format (CF Convention); - model outputs from atmosphere or ocean operational (re-)analysis and forecasts, and from research simulations. The outputs are processed as the satellite products are. Before accessing the data, any user has to sign the AMMA data and publication policy. This charter only covers the use of data in the framework of scientific objectives and categorically excludes the redistribution of data to third parties and the usage for commercial applications. Some collaboration between data producers and users, and the mention of the AMMA project in any publication, is also required. The AMMA database and the associated on-line tools have been fully developed and are managed by two teams in France (IPSL Database Centre, Paris and OMP, Toulouse). Users can access data of both data centres using a unique web portal. This website is composed of different modules: - Registration: forms to register, read and sign the data use charter when a user visits for the first time; - Data access interface: a friendly tool allowing users to build a data extraction request by selecting various criteria like location, time, parameters... The request can
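
    As a rough illustration of how such CF-convention NetCDF products can be consumed once extracted, the sketch below uses Python with xarray to select a sub-region and time window; the file name and variable name are hypothetical placeholders, not actual AMMA product identifiers.

    import xarray as xr

    # Hypothetical file and variable names, for illustration only: a satellite
    # product remapped onto a regular latitude/longitude grid (CF conventions).
    ds = xr.open_dataset("amma_product_example.nc")

    # Select a West African sub-region and a monsoon-season time window, the kind
    # of criteria the extraction interface described above lets users choose.
    subset = ds["precipitation"].sel(
        lat=slice(0, 25),
        lon=slice(-20, 25),
        time=slice("2006-06-01", "2006-09-30"),
    )
    print(subset.mean(dim="time"))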

  18. DataBase on demand

    CERN Document Server

    Aparicio, Ruben Gaspar; Coterillo Coz, I

    2012-01-01

    At CERN a number of key database applications are running on user-managed MySQL database services. The database on demand project was born out of an idea to provide the CERN user community with an environment to develop and run database services outside of the actual centralised Oracle based database services. The Database on Demand (DBoD) empowers the user to perform certain actions that had been traditionally done by database administrators, DBA's, providing an enterprise platform for database applications. It also allows the CERN user community to run different database engines, e.g. presently open community version of MySQL and single instance Oracle database server. This article describes a technology approach to face this challenge, a service level agreement, the SLA that the project provides, and an evolution of possible scenarios.

  19. DataBase on Demand

    Science.gov (United States)

    Gaspar Aparicio, R.; Gomez, D.; Coterillo Coz, I.; Wojcik, D.

    2012-12-01

    At CERN a number of key database applications are running on user-managed MySQL database services. The database on demand project was born out of an idea to provide the CERN user community with an environment to develop and run database services outside of the actual centralised Oracle based database services. The Database on Demand (DBoD) empowers the user to perform certain actions that had been traditionally done by database administrators, DBA's, providing an enterprise platform for database applications. It also allows the CERN user community to run different database engines, e.g. presently open community version of MySQL and single instance Oracle database server. This article describes a technology approach to face this challenge, a service level agreement, the SLA that the project provides, and an evolution of possible scenarios.

  20. Use of administrative medical databases in population-based research.

    Science.gov (United States)

    Gavrielov-Yusim, Natalie; Friger, Michael

    2014-03-01

    Administrative medical databases are massive repositories of data collected in healthcare for various purposes. Such databases are maintained in hospitals, health maintenance organisations and health insurance organisations. Administrative databases may contain medical claims for reimbursement, records of health services, medical procedures, prescriptions, and diagnosis information. It is clear that such systems may provide a valuable variety of clinical and demographic information as well as an on-going process of data collection. In general, information gathering in these databases is not initially intended or planned for research purposes. Nonetheless, administrative databases may be used as a robust research tool. In this article, we address the subject of public health research that employs administrative data. We discuss the biases and the limitations of such research, as well as other important epidemiological and biostatistical key points specific to administrative database studies.

  1. A review of drug-induced liver injury databases.

    Science.gov (United States)

    Luo, Guangwen; Shen, Yiting; Yang, Lizhu; Lu, Aiping; Xiang, Zheng

    2017-07-17

    Drug-induced liver injuries have been a major focus of current research in drug development, and are also one of the major reasons for the failure and withdrawal of drugs in development. Drug-induced liver injuries have been systematically recorded in many public databases, which have become valuable resources in this field. In this study, we provide an overview of these databases, including the liver injury-specific databases LiverTox, LTKB, Open TG-GATEs, LTMap and Hepatox, and the general databases, T3DB, DrugBank, DITOP, DART, CTD and HSDB. The features and limitations of these databases are summarized and discussed in detail. Apart from their powerful functions, we believe that these databases can be improved in several ways: by providing the data about the molecular targets involved in liver toxicity, by incorporating information regarding liver injuries caused by drug interactions, and by regularly updating the data.

  2. Hadoop NoSQL database

    OpenAIRE

    2015-01-01

    The theme of this work is the Hadoop HBase database store. The main goal is to demonstrate the principles of its operation and to show its main usage. The entire text assumes that the reader is already familiar with the basic principles of NoSQL databases. The theoretical part briefly describes the basic concepts of databases and then mostly covers Hadoop and its properties. This work also includes a practical part which describes how to install a database repository and illustrates basic database op...

  3. The Database State Machine Approach

    OpenAIRE

    1999-01-01

    Database replication protocols have historically been built on top of distributed database systems, and have consequently been designed and implemented using distributed transactional mechanisms, such as atomic commitment. We present the Database State Machine approach, a new way to deal with database replication in a cluster of servers. This approach relies on a powerful atomic broadcast primitive to propagate transactions between database servers, and alleviates the need for atomic comm...

  4. RECORD CLUB

    CERN Multimedia

    Record Club

    2010-01-01

    DVD James Bond – Series Complete To all Record Club Members, to start the new year, we have taken advantage of a special offer to add copies of all the James Bond movies to date, from the very first - Dr. No - to the latest - Quantum of Solace. No matter which of the successive 007s you prefer (Sean Connery, George Lazenby, Roger Moore, Timothy Dalton, Pierce Brosnan or Daniel Craig), they are all there. Or perhaps you have a favourite Bond Girl, or even perhaps a favourite villain. Take your pick. You can find the full selection listed on the club web site http://cern.ch/crc; use the panel on the left of the page “Discs of the Month” and select Jan 2010. We remind you that we are open on Mondays, Wednesdays and Fridays from 12:30 to 13:00 in Restaurant 2 (Bldg 504).

  5. Record breakers

    CERN Document Server

    Antonella Del Rosso

    2012-01-01

    In the sixties, CERN’s Fellows were but a handful of about 50 young experimentalists present on site to complete their training. Today, their number has increased to a record-breaking 500. They come from many different fields and are spread across CERN’s different activity areas.   “Diversifying the Fellowship programme has been the key theme in recent years,” comments James Purvis, Head of the Recruitment, Programmes and Monitoring group in the HR Department. “In particular, the 2005 five-yearly review introduced the notion of ‘senior’ and ‘junior’ Fellowships, broadening the target audience to include those with Bachelor-level qualifications.” Diversification made CERN’s Fellowship programme attractive to a wider audience but the number of Fellows on site could not have increased so much without the support of EU-funded projects, which were instrumental in the growth of the programme. ...

  6. Top Cited Scholars in Multicultural Counseling: A Citation Analysis of Journal Articles in PsycINFO

    Science.gov (United States)

    Piotrowski, Chris

    2013-01-01

    The area of multicultural counseling is a sub-field of the counseling profession and research in this specialty has proliferated at a rapid pace over the past 20 years. In order to gauge emergent trends in multicultural counseling, researchers have conducted content analyses of scholarly documents like journals and books. A related methodology…

  7. The Danish Melanoma Database

    Directory of Open Access Journals (Sweden)

    Hölmich Lr

    2016-10-01

    Full Text Available Lisbet Rosenkrantz Hölmich,1 Siri Klausen,2 Eva Spaun,3 Grethe Schmidt,4 Dorte Gad,5 Inge Marie Svane,6,7 Henrik Schmidt,8 Henrik Frank Lorentzen,9 Else Helene Ibfelt10 1Department of Plastic Surgery, 2Department of Pathology, Herlev-Gentofte Hospital, University of Copenhagen, Herlev, 3Institute of Pathology, Aarhus University Hospital, Aarhus, 4Department of Plastic and Reconstructive Surgery, Breast Surgery and Burns, Rigshospitalet – Glostrup, University of Copenhagen, Copenhagen, 5Department of Plastic Surgery, Odense University Hospital, Odense, 6Center for Cancer Immune Therapy, Department of Hematology, 7Department of Oncology, Herlev-Gentofte Hospital, University of Copenhagen, Herlev, 8Department of Oncology, 9Department of Dermatology, Aarhus University Hospital, Aarhus, 10Registry Support Centre (East) – Epidemiology and Biostatistics, Research Centre for Prevention and Health, Glostrup – Rigshospitalet, University of Copenhagen, Glostrup, Denmark Aim of database: The aim of the database is to monitor and improve the treatment and survival of melanoma patients. Study population: All Danish patients with cutaneous melanoma and in situ melanomas must be registered in the Danish Melanoma Database (DMD). In 2014, 2,525 patients with invasive melanoma and 780 with in situ tumors were registered. The coverage is currently 93% compared with the Danish Pathology Register. Main variables: The main variables include demographic, clinical, and pathological characteristics, including Breslow’s tumor thickness, ± ulceration, mitoses, and tumor–node–metastasis stage. Information about the date of diagnosis, treatment, type of surgery, including safety margins, results of lymphoscintigraphy in patients for whom this was indicated (tumors > T1a), results of sentinel node biopsy, pathological evaluation hereof, and follow-up information, including recurrence, nature, and treatment hereof is registered. In case of death, the cause and date

  8. Database Description - DMPD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available DMPD Database Description General information of database Database name DMPD Alternative nam...e Dynamic Macrophage Pathway CSML Database DOI 10.18908/lsdba.nbdc00558-000 Creator Creator Name: Masao Naga...ty of Tokyo 4-6-1 Shirokanedai, Minato-ku, Tokyo 108-8639 Tel: +81-3-5449-5615 FAX: +83-3-5449-5442 E-mail: Database...606 Taxonomy Name: Mammalia Taxonomy ID: 40674 Database description DMPD collects... pathway models of transcriptional regulation and signal transduction in CSML format for dynamic simulation base

  9. Large whale incident database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Large whale stranding, death, ship strike and entanglement incidents are all recorded to monitor the health of each population and track anthropogenic factors that...

  10. Protein Model Database

    Energy Technology Data Exchange (ETDEWEB)

    Fidelis, K; Adzhubej, A; Kryshtafovych, A; Daniluk, P

    2005-02-23

    The phenomenal success of the genome sequencing projects reveals the power of completeness in revolutionizing biological science. Currently it is possible to sequence entire organisms at a time, allowing for a systemic rather than fractional view of their organization and the various genome-encoded functions. There is an international plan to move towards a similar goal in the area of protein structure. This will not be achieved by experiment alone, but rather by a combination of efforts in crystallography, NMR spectroscopy, and computational modeling. Only a small fraction of structures are expected to be identified experimentally, the remainder to be modeled. Presently there is no organized infrastructure to critically evaluate and present these data to the biological community. The goal of the Protein Model Database project is to create such infrastructure, including (1) public database of theoretically derived protein structures; (2) reliable annotation of protein model quality, (3) novel structure analysis tools, and (4) access to the highest quality modeling techniques available.

  11. Search Databases and Statistics

    DEFF Research Database (Denmark)

    Refsgaard, Jan C; Munk, Stephanie; Jensen, Lars J

    2016-01-01

    the vast amounts of raw data. This task is tackled by computational tools implementing algorithms that match the experimental data to databases, providing the user with lists for downstream analysis. Several platforms for such automated interpretation of mass spectrometric data have been developed, each...... having strengths and weaknesses that must be considered for the individual needs. These are reviewed in this chapter. Equally critical for generating highly confident output datasets is the application of sound statistical criteria to limit the inclusion of incorrect peptide identifications from database...... searches. Additionally, careful filtering and use of appropriate statistical tests on the output datasets affects the quality of all downstream analyses and interpretation of the data. Our considerations and general practices on these aspects of phosphoproteomics data processing are presented here....
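
    One widely used statistical criterion of the kind alluded to here (shown only as an illustration, not necessarily the approach taken in the chapter) is target-decoy filtering of peptide-spectrum matches at a fixed false discovery rate, sketched below in Python with illustrative inputs.

    def filter_by_fdr(psms, fdr_threshold=0.01):
        """Keep target peptide-spectrum matches at a chosen false discovery rate.

        psms: list of (score, is_decoy) tuples from a search that included
        reversed/shuffled decoy sequences. PSMs are ranked by score and the list
        is cut where the running decoy/target ratio exceeds the FDR threshold.
        """
        ranked = sorted(psms, key=lambda p: p[0], reverse=True)
        kept, targets, decoys = [], 0, 0
        for score, is_decoy in ranked:
            decoys += is_decoy
            targets += not is_decoy
            if decoys / max(targets, 1) > fdr_threshold:
                break
            if not is_decoy:
                kept.append(score)
        return kept

    # Toy input: three target PSMs and one decoy PSM with search-engine scores.
    print(filter_by_fdr([(10.0, False), (9.5, False), (9.0, True), (8.0, False)]))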

  12. Geologic Field Database

    Directory of Open Access Journals (Sweden)

    Katarina Hribernik

    2002-12-01

    Full Text Available The purpose of the paper is to present the field data relational database, which was compiled from data gathered during thirty years of fieldwork on the Basic Geologic Map of Slovenia at scale 1:100.000. The database was created using MS Access software. The MS Access environment ensures its stability and effective operation despite changing, searching, and updating the data. It also enables faster and easier user-friendly access to the field data. Last but not least, in the long term, with the data transferred into the GIS environment, it will provide the basis for a sound geologic information system that will satisfy a broad spectrum of geologists’ needs.

  13. ARTI Refrigerant Database

    Energy Technology Data Exchange (ETDEWEB)

    Calm, J.M.

    1992-11-09

    The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air-conditioning and refrigeration equipment. The database identifies sources of specific information on R-32, R-123, R-124, R-125, R-134, R-134a, R-141b, R-142b, R-143a, R-152a, R-245ca, R-290 (propane), R-717 (ammonia), ethers, and others as well as azeotropic and zeotropic blends of these fluids. It addresses lubricants including alkylbenzene, polyalkylene glycol, ester, and other synthetics as well as mineral oils. It also references documents on compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits. A computerized version is available that includes retrieval software.

  14. The DIPPR® databases

    Science.gov (United States)

    Thomson, G. H.

    1996-01-01

    The Design Institute for Physical Property Data® (DIPPR), one of the Sponsored Research groups of the American Institute of Chemical Engineers (AIChE), has been in existence for 15 years and has supported a total of 14 projects, some completed, some ongoing. Four of these projects are “database” projects for which the primary product is a database of carefully evaluated property data. These projects are Data Compilation; Evaluated Data on Mixtures; Environmental, Safety, and Health Data Compilation; and Diffusivities and Thermal Properties of Polymer Solutions. This paper lists the existing DIPPR projects; discusses DIPPR's structure and modes of dissemination of results; describes DIPPR's supporters and its unique characteristics; and finally, discusses the origin, nature, and content of the four database projects.

  15. What is a lexicographical database?

    DEFF Research Database (Denmark)

    Bergenholtz, Henning; Skovgård Nielsen, Jesper

    2013-01-01

    project. Such cooperation will reach the highest level of success if the lexicographer has at least a basic knowledge of the topic presented in this paper: What is a database? This type of knowledge is also needed when the lexicographer describes an ongoing or a finished project. In this article, we......50 years ago, no lexicographer used a database in the work process. Today, almost all dictionary projects incorporate databases. In our opinion, the optimal lexicographical database should be planned in cooperation between a lexicographer and a database specialist in each specific lexicographic...... provide the description of this type of cooperation, using the most important theoretical terms relevant in the planning of a database. It will be made clear that a lexicographical database is like any other database. The only difference is that an optimal lexicographical database is constructed to fulfil...

  16. Database for earthquake strong motion studies in Italy

    Science.gov (United States)

    Scasserra, G.; Stewart, J.P.; Kayen, R.E.; Lanzo, G.

    2009-01-01

    We describe an Italian database of strong ground motion recordings and databanks delineating conditions at the instrument sites and characteristics of the seismic sources. The strong motion database consists of 247 corrected recordings from 89 earthquakes and 101 recording stations. Uncorrected recordings were drawn from public web sites and processed on a record-by-record basis using a procedure utilized in the Next-Generation Attenuation (NGA) project to remove instrument resonances, minimize noise effects through low- and high-pass filtering, and baseline correction. The number of available uncorrected recordings was reduced by 52% (mostly because of s-triggers) to arrive at the 247 recordings in the database. The site databank includes for every recording site the surface geology, a measurement or estimate of average shear wave velocity in the upper 30 m (Vs30), and information on instrument housing. Of the 89 sites, 39 have on-site velocity measurements (17 of which were performed as part of this study using SASW techniques). For remaining sites, we estimate Vs30 based on measurements on similar geologic conditions where available. Where no local velocity measurements are available, correlations with surface geology are used. Source parameters are drawn from databanks maintained (and recently updated) by Istituto Nazionale di Geofisica e Vulcanologia and include hypocenter location and magnitude for small events (M < ~5.5) and finite source parameters for larger events. © 2009 A.S. Elnashai & N.N. Ambraseys.
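
    The record-by-record processing described above (band-pass filtering plus baseline correction) can be illustrated with a minimal Python sketch; the corner frequencies and the synthetic record below are placeholders and this is not the NGA procedure itself.

    import numpy as np
    from scipy import signal

    def process_record(acc, dt, low_hz=0.1, high_hz=25.0):
        """Baseline-correct and band-pass filter a raw acceleration trace.

        acc: raw acceleration samples; dt: sampling interval in seconds.
        Corner frequencies are placeholders; in practice they are chosen record
        by record from the signal-to-noise characteristics of each trace.
        """
        acc = signal.detrend(acc, type="linear")  # remove linear baseline drift
        # Zero-phase Butterworth band-pass: the high-pass side suppresses
        # long-period noise, the low-pass side suppresses high-frequency noise.
        sos = signal.butter(4, [low_hz, high_hz], btype="bandpass",
                            fs=1.0 / dt, output="sos")
        return signal.sosfiltfilt(sos, acc)

    # Example with a synthetic 20 s record sampled at 200 Hz (2 Hz signal + drift).
    dt = 0.005
    t = np.arange(0, 20, dt)
    raw = np.sin(2 * np.pi * 2.0 * t) + 0.05 * t
    print(process_record(raw, dt).shape)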

  17. ARTI refrigerant database

    Energy Technology Data Exchange (ETDEWEB)

    Calm, J.M.

    1996-11-15

    The Refrigerant Database is an information system on alternative refrigerants, associated lubricants, and their use in air conditioning and refrigeration. It consolidates and facilitates access to property, compatibility, environmental, safety, application and other information. It provides corresponding information on older refrigerants, to assist manufacturers and those using alternative refrigerants, to make comparisons and determine differences. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern.

  18. Mathematical Foundations of Databases

    Science.gov (United States)

    1991-01-15

    "Spreadsheet Histories, Object-Histories, and Projection Simulation." ICDT '88, 2nd International Conference on Database Theory, Bruges, Belgium, August...dissertation. The first topic, "Properties of Spreadsheet Histories", formalized the use of spreadsheets for modelling the history of accounting-like...describing in more detail the results obtained. The first report, "Properties of Spreadsheet Histories", is by Stephen Kurtzman. In this report, some

  19. ARTI refrigerant database

    Energy Technology Data Exchange (ETDEWEB)

    Calm, J.M.

    1996-07-01

    The Refrigerant Database is an information system on alternative refrigerants, associated lubricants, and their use in air conditioning and refrigeration. It consolidates and facilitates access to property, compatibility, environmental, safety, application and other information. It provides corresponding information on older refrigerants, to assist manufacturers and those using alternative refrigerants, to make comparisons and determine differences. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern.

  20. ARTI refrigerant database

    Energy Technology Data Exchange (ETDEWEB)

    Calm, J.M. [Calm (James M.), Great Falls, VA (United States)

    1999-01-01

    The Refrigerant Database is an information system on alternative refrigerants, associated lubricants, and their use in air conditioning and refrigeration. It consolidates and facilitates access to property, compatibility, environmental, safety, application and other information. It provides corresponding information on older refrigerants, to assist manufacturers and those using alternative refrigerants, to make comparisons and determine differences. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern.

  1. Real Time Baseball Database

    Science.gov (United States)

    Fukue, Yasuhiro

    The author describes the system outline, features and operations of the "Nikkan Sports Realtime Baseball Database", which was developed and operated by Nikkan Sports Shimbun, K. K. The system enables numerical data from professional baseball games to be input as the games proceed, and executes data updating in real time, just in time. Besides serving as a supporting tool for preparing newspapers, it is also available to broadcasting media and to general users through NTT Dial Q2 and other channels.

  2. Modeling Digital Video Database

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    The main purpose of the model is to present how the Unified Modeling Language (UML) can be used for modeling a digital video database system (VDBS). It demonstrates the modeling process that can be followed during the analysis phase of complex applications. In order to guarantee the continuity mapping of the models, the authors propose some suggestions to transform the use case diagrams into an object diagram, which is one of the main diagrams for the next development phases.

  3. Record Club

    CERN Multimedia

    Record Club

    2012-01-01

      March  Selections By the time this appears, we will have added a number of new CDs and DVDs into the Club. You will find the full lists at http://cern.ch/record.club; select the "Discs of the Month" button on the left panel of the web page and then Mar 2012. New films include recent releases such as Johnny English 2, Bad Teacher, Cowboys vs Aliens, and Super 8. We are also starting to acquire some of the classic films we missed when we initiated the DVD section of the club, such as appeared in a recent Best 100 Films published by a leading UK magazine; this month we have added Spielberg’s Jaws and Scorsese’s Goodfellas. If you have your own ideas on what we are missing, let us know. For children we have no less than 8 Tin-Tin DVDs. And if you like fast moving pop music, try the Beyonce concert DVD. New CDs include the latest releases from Paul McCartney, Rihanna and Amy Winehouse. There is a best of Mylene Farmer, a compilation from the NRJ 201...

  4. The GLENDAMA Database

    CERN Document Server

    Goicoechea, Luis J; Gil-Merino, Rodrigo

    2015-01-01

    This is the first version (v1) of the Gravitational LENses and DArk MAtter (GLENDAMA) database accessible at http://grupos.unican.es/glendama/database. The new database contains more than 6000 ready-to-use (processed) astronomical frames corresponding to 15 objects that fall into three classes: (1) lensed QSO (8 objects), (2) binary QSO (3 objects), and (3) accretion-dominated radio-loud QSO (4 objects). Data are also divided into two categories: freely available and available upon request. The second category includes observations related to our yet unpublished analyses. Although this v1 of the GLENDAMA archive incorporates an X-ray monitoring campaign for a lensed QSO in 2010, the rest of frames (imaging, polarimetry and spectroscopy) were taken with NUV, visible and NIR facilities over the period 1999$-$2014. The monitorings and follow-up observations of lensed QSOs are key tools for discussing the accretion flow in distant QSOs, the redshift and structure of intervening (lensing) galaxies, and the physica...

  5. ARTI refrigerant database

    Energy Technology Data Exchange (ETDEWEB)

    Calm, J.M.

    1997-02-01

    The Refrigerant Database is an information system on alternative refrigerants, associated lubricants, and their use in air conditioning and refrigeration. It consolidates and facilitates access to property, compatibility, environmental, safety, application and other information. It provides corresponding information on older refrigerants, to assist manufacturers and those using alternative refrigerants, to make comparisons and determine differences. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern. The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air-conditioning and refrigeration equipment. The complete documents are not included, though some may be added at a later date. The database identifies sources of specific information on various refrigerants. It addresses lubricants including alkylbenzene, polyalkylene glycol, polyolester, and other synthetics as well as mineral oils. It also references documents addressing compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits. Incomplete citations or abstracts are provided for some documents. They are included to accelerate availability of the information and will be completed or replaced in future updates.

  6. ARTI Refrigerant Database

    Energy Technology Data Exchange (ETDEWEB)

    Cain, J.M. (Calm (James M.), Great Falls, VA (United States))

    1993-04-30

    The Refrigerant Database consolidates and facilitates access to information to assist industry in developing equipment using alternative refrigerants. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern. The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air-conditioning and refrigeration equipment. The complete documents are not included. The database identifies sources of specific information on R-32, R-123, R-124, R-125, R-134, R-134a, R-141b, R-142b, R-143a, R-152a, R-245ca, R-290 (propane), R-717 (ammonia), ethers, and others as well as azeotropic and zeotropic blends of these fluids. It addresses lubricants including alkylbenzene, polyalkylene glycol, ester, and other synthetics as well as mineral oils. It also references documents addressing compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits. Incomplete citations or abstracts are provided for some documents to accelerate availability of the information and will be completed or replaced in future updates.

  7. ARTI Refrigerant Database

    Energy Technology Data Exchange (ETDEWEB)

    Cain, J.M. [Calm (James M.), Great Falls, VA (United States)

    1993-04-30

    The Refrigerant Database consolidates and facilitates access to information to assist industry in developing equipment using alternative refrigerants. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern. The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air-conditioning and refrigeration equipment. The complete documents are not included. The database identifies sources of specific information on R-32, R-123, R-124, R-125, R-134, R-134a, R-141b, R-142b, R-143a, R-152a, R-245ca, R-290 (propane), R-717 (ammonia), ethers, and others as well as azeotropic and zeotropic blends of these fluids. It addresses lubricants including alkylbenzene, polyalkylene glycol, ester, and other synthetics as well as mineral oils. It also references documents addressing compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits. Incomplete citations or abstracts are provided for some documents to accelerate availability of the information and will be completed or replaced in future updates.

  8. ARTI refrigerant database

    Energy Technology Data Exchange (ETDEWEB)

    Calm, J.M. [Calm (James M.), Great Falls, VA (United States)

    1998-08-01

    The Refrigerant Database is an information system on alternative refrigerants, associated lubricants, and their use in air conditioning and refrigeration. It consolidates and facilitates access to property, compatibility, environmental, safety, application and other information. It provides corresponding information on older refrigerants, to assist manufacturers and those using alternative refrigerants, to make comparisons and determine differences. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern. The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air-conditioning and refrigeration equipment. The complete documents are not included, though some may be added at a later date. The database identifies sources of specific information on many refrigerants including propane, ammonia, water, carbon dioxide, propylene, ethers, and others as well as azeotropic and zeotropic blends of these fluids. It addresses lubricants including alkylbenzene, polyalkylene glycol, polyolester, and other synthetics as well as mineral oils. It also references documents addressing compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits. Incomplete citations or abstracts are provided for some documents. They are included to accelerate availability of the information and will be completed or replaced in future updates.

  9. Private and Efficient Query Processing on Outsourced Genomic Databases.

    Science.gov (United States)

    Ghasemi, Reza; Al Aziz, Md Momin; Mohammed, Noman; Dehkordi, Massoud Hadian; Jiang, Xiaoqian

    2017-09-01

    Applications of genomic studies are spreading rapidly in many domains of science and technology such as healthcare, biomedical research, direct-to-consumer services, and legal and forensic applications. However, there are a number of obstacles that make it hard to access and process a big genomic database for these applications. First, sequencing a genomic sequence is a time-consuming and expensive process. Second, it requires large-scale computation and storage systems to process genomic sequences. Third, genomic databases are often owned by different organizations, and thus, not available for public usage. The cloud computing paradigm can be leveraged to facilitate the creation and sharing of big genomic databases for these applications. Genomic data owners can outsource their databases to a centralized cloud server to ease access to their databases. However, data owners are reluctant to adopt this model, as it requires outsourcing the data to an untrusted cloud service provider that may cause data breaches. In this paper, we propose a privacy-preserving model for outsourcing genomic data to a cloud. The proposed model enables query processing while providing privacy protection of genomic databases. Privacy of the individuals is guaranteed by permuting and adding fake genomic records in the database. These techniques allow the cloud to evaluate count and top-k queries securely and efficiently. Experimental results demonstrate that a count and a top-k query over 40 Single Nucleotide Polymorphisms (SNPs) in a database of 20,000 records take around 100 and 150 s, respectively.
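
    The padding-and-permutation idea can be illustrated with a toy Python sketch; this is not the paper's secure protocol, which additionally relies on cryptographic protections, and the genotype encoding and record counts below are hypothetical.

    import random

    def pad_and_permute(real_records, num_fakes, num_snps, seed=42):
        """Owner-side preparation: append fake records and shuffle before outsourcing."""
        rng = random.Random(seed)
        fakes = [[rng.randint(0, 2) for _ in range(num_snps)] for _ in range(num_fakes)]
        padded = list(real_records) + fakes
        rng.shuffle(padded)
        return padded, fakes

    def count_query(records, snp_index, genotype):
        """Count records whose genotype at one SNP position matches the query."""
        return sum(1 for r in records if r[snp_index] == genotype)

    # Hypothetical encoding: 0/1/2 genotype values at each of 40 SNP positions.
    real = [[random.randint(0, 2) for _ in range(40)] for _ in range(1000)]
    padded, fakes = pad_and_permute(real, num_fakes=200, num_snps=40)

    # The cloud evaluates the count on padded data; the owner, who keeps the fake
    # set, subtracts the fakes that match to recover the true count.
    noisy = count_query(padded, snp_index=5, genotype=2)
    true_count = noisy - count_query(fakes, snp_index=5, genotype=2)
    print(true_count == count_query(real, snp_index=5, genotype=2))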

  10. Download - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available

  11. The Danish Testicular Cancer database

    DEFF Research Database (Denmark)

    Daugaard, Gedske; Kier, Maria Gry Gundgaard; Bandak, Mikkel

    2016-01-01

    AIM: The nationwide Danish Testicular Cancer database consists of a retrospective research database (DaTeCa database) and a prospective clinical database (Danish Multidisciplinary Cancer Group [DMCG] DaTeCa database). The aim is to improve the quality of care for patients with testicular cancer (TC......) in Denmark, that is, by identifying risk factors for relapse, toxicity related to treatment, and focusing on late effects. STUDY POPULATION: All Danish male patients with a histologically verified germ cell cancer diagnosis in the Danish Pathology Registry are included in the DaTeCa databases. Data...

  12. Multilevel security for relational databases

    CERN Document Server

    Faragallah, Osama S; El-Samie, Fathi E Abd

    2014-01-01

    Concepts of Database Security: Database Concepts; Relational Database Security Concepts; Access Control in Relational Databases (Discretionary Access Control, Mandatory Access Control, Role-Based Access Control); Work Objectives; Book Organization. Basic Concept of Multilevel Database Security: Introduction; Multilevel Database Relations; Polyinstantiation (Invisible Polyinstantiation, Visible Polyinstantiation, Types of Polyinstantiation); Architectural Consideration

  13. Surgical research using national databases.

    Science.gov (United States)

    Alluri, Ram K; Leland, Hyuma; Heckmann, Nathanael

    2016-10-01

    Recent changes in healthcare and advances in technology have increased the use of large-volume national databases in surgical research. These databases have been used to develop perioperative risk stratification tools, assess postoperative complications, calculate costs, and investigate numerous other topics across multiple surgical specialties. The results of these studies contain variable information but are subject to unique limitations. The use of large-volume national databases is increasing in popularity, and thorough understanding of these databases will allow for a more sophisticated and better educated interpretation of studies that utilize such databases. This review will highlight the composition, strengths, and weaknesses of commonly used national databases in surgical research.

  14. The Danish Fetal Medicine database

    DEFF Research Database (Denmark)

    Ekelund, Charlotte; Kopp, Tine Iskov; Tabor, Ann

    2016-01-01

    trimester ultrasound scan performed at all public hospitals in Denmark are registered in the database. Main variables/descriptive data: Data on maternal characteristics, ultrasonic, and biochemical variables are continuously sent from the fetal medicine units’ Astraia databases to the central database via...... analyses are sent to the database. Conclusion: It has been possible to establish a fetal medicine database, which monitors first-trimester screening for chromosomal abnormalities and second-trimester screening for major fetal malformations with the input from already collected data. The database...

  15. SmallSat Database

    Science.gov (United States)

    Petropulos, Dolores; Bittner, David; Murawski, Robert; Golden, Bert

    2015-01-01

    The SmallSat has unrealized potential in both private industry and the federal government. Currently over 70 companies, 50 universities and 17 governmental agencies are involved in SmallSat research and development. In 1994, the U.S. Army Missile and Defense mapped the moon using smallSat imagery. Since then Smart Phones have introduced this imagery to the people of the world as diverse industries watched this trend. The deployment cost of smallSats is also greatly reduced compared to traditional satellites because multiple units can be deployed in a single mission. Imaging payloads have become more sophisticated, smaller and lighter. In addition, the growth of small technology obtained from private industries has led to the more widespread use of smallSats. This includes greater revisit rates in imagery, significantly lower costs, the ability to update technology more frequently and the ability to decrease vulnerability to enemy attacks. The popularity of smallSats shows a changing mentality in this fast-paced world of tomorrow. What impact has this created on the NASA communication networks now and in future years? In this project, we are developing the SmallSat Relational Database which can support a simulation of smallSats within the NASA SCaN Compatibility Environment for Networks and Integrated Communications (SCENIC) Modeling and Simulation Lab. The NASA Space Communications and Networks (SCaN) Program can use this modeling to project required network support needs in the next 10 to 15 years. The SmallSat Relational Database could model smallSats just as the other SCaN databases model the more traditional larger satellites, with a few exceptions, one being that the smallSat Database is designed to be built-to-order. The SmallSat database holds various hardware configurations that can be used to model a smallSat. It will require significant effort to develop as the research material can only be populated by hand to obtain the unique data

  16. Freshwater Biological Traits Database (Final Report)

    Science.gov (United States)

    EPA announced the release of the final report, Freshwater Biological Traits Database. This report discusses the development of a database of freshwater biological traits. The database combines several existing traits databases into an online format. The database is also...

  17. EPICS Input Output Controller (IOC) Record Reference Manual

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, J.B.; Kraimer, M.R.

    1994-12-01

    This manual describes all supported EPICS record types. The first chapter gives an introduction and describes the field summary table. The second chapter describes the fields in database common, i.e. the fields that are present in every record type. The third chapter describes the input and output fields that are common to many record types and have the same usage wherever they are used. Following the third chapter is a separate chapter for each record type containing a description of all the fields for that record type except those in database common.

  18. A global database of soil respiration data

    Science.gov (United States)

    Bond-Lamberty, B.; Thomson, A.

    2010-06-01

    Soil respiration - RS, the flux of CO2 from the soil to the atmosphere - is probably the least well constrained component of the terrestrial carbon cycle. Here we introduce the SRDB database, a near-universal compendium of published RS data, and make it available to the scientific community both as a traditional static archive and as a dynamic community database that may be updated over time by interested users. The database encompasses all published studies that report one of the following data measured in the field (not laboratory): annual RS, mean seasonal RS, a seasonal or annual partitioning of RS into its source fluxes, RS temperature response (Q10), or RS at 10 °C. Its orientation is thus to seasonal and annual fluxes, not shorter-term or chamber-specific measurements. To date, data from 818 studies have been entered into the database, constituting 3379 records. The data span the measurement years 1961-2007 and are dominated by temperate, well-drained forests. We briefly examine some aspects of the SRDB data - its climate space coverage, mean annual RS fluxes and their correlation with other carbon fluxes, RS variability, temperature sensitivities, and the partitioning of RS source flux - and suggest some potential lines of research that could be explored using these data. The SRDB database is available online in a permanent archive as well as via a project-hosting repository; the latter source leverages open-source software technologies to encourage wider participation in the database's future development. Ultimately, we hope that the updating of, and corrections to, the SRDB will become a shared project, managed by the users of these data in the scientific community.
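
    For readers unfamiliar with the Q10 temperature response mentioned above, the conventional definition (stated here as background, not quoted from the SRDB paper) relates respiration rates R1 and R2 measured at soil temperatures T1 and T2:

        Q_{10} = (R_2 / R_1)^{10 / (T_2 - T_1)}

    so a Q10 of 2 means the flux roughly doubles for every 10 °C increase in temperature.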

  19. A Case for Database Filesystems

    Energy Technology Data Exchange (ETDEWEB)

    Adams, P A; Hax, J C

    2009-05-13

    Data intensive science is offering new challenges and opportunities for Information Technology and traditional relational databases in particular. Database filesystems offer the potential to store Level Zero data and analyze Level 1 and Level 3 data within the same database system [2]. Scientific data is typically composed of both unstructured files and scalar data. Oracle SecureFiles is a new database filesystem feature in Oracle Database 11g that is specifically engineered to deliver high performance and scalability for storing unstructured or file data inside the Oracle database. SecureFiles presents the best of both the filesystem and the database worlds for unstructured content. Data stored inside SecureFiles can be queried or written at performance levels comparable to that of traditional filesystems while retaining the advantages of the Oracle database.

  20. Mobile Source Observation Database (MSOD)

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Mobile Source Observation Database (MSOD) is a relational database being developed by the Assessment and Standards Division (ASD) of the US Environmental...

  1. Brasilia’s Database Administrators

    Directory of Open Access Journals (Sweden)

    Jane Adriana

    2016-06-01

    Full Text Available Database administration has gained an essential role in the management of new database technologies. Different data models, beyond the traditional relational database, are being created to support enormous data volumes. These new models are called NoSQL (Not only SQL) databases. The adoption of best practices and procedures has become essential for the operation of database management systems. Thus, this paper investigates some of the techniques and tools used by database administrators. The study highlights features and particularities of databases within the area of Brasilia, the capital of Brazil. The results point to which new technologies regarding database management are currently the most relevant, as well as the central issues in this area.

  2. Shark Mark Recapture Database (MRDBS)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Shark Mark Recapture Database is a Cooperative Research Program database system used to keep multispecies mark-recapture information in a common format for...

  3. Categorical database generalization in GIS

    NARCIS (Netherlands)

    Liu, Y.

    2002-01-01

    Key words: Categorical database, categorical database generalization, Formal data structure, constraints, transformation unit, classification hierarchy, aggregation hierarchy, semantic similarity, data model, Delaunay triangulation

  4. Mobile Source Observation Database (MSOD)

    Science.gov (United States)

    The Mobile Source Observation Database (MSOD) is a relational database developed by the Assessment and Standards Division (ASD) of the U.S. EPA Office of Transportation and Air Quality (formerly the Office of Mobile Sources).

  5. Evaluating the quality of Marfan genotype-phenotype correlations in existing FBN1 databases.

    Science.gov (United States)

    Groth, Kristian A; Von Kodolitsch, Yskert; Kutsche, Kerstin; Gaustadnes, Mette; Thorsen, Kasper; Andersen, Niels H; Gravholt, Claus H

    2017-07-01

    Genetic FBN1 testing is pivotal for confirming the clinical diagnosis of Marfan syndrome. In an effort to evaluate variant causality, FBN1 databases are often used. We evaluated the current databases regarding FBN1 variants and validated associated phenotype records with a new Marfan syndrome geno-phenotyping tool called the Marfan score. We evaluated four databases (UMD-FBN1, ClinVar, the Human Gene Mutation Database (HGMD), and Uniprot) containing 2,250 FBN1 variants supported by 4,904 records presented in 307 references. The Marfan score calculated for phenotype data from the records quantified variant associations with Marfan syndrome phenotype. We calculated a Marfan score for 1,283 variants, of which we confirmed the database diagnosis of Marfan syndrome in 77.1%. This represented only 35.8% of the total registered variants; 18.5-33.3% (UMD-FBN1 versus HGMD) of variants associated with Marfan syndrome in the databases could not be confirmed by the recorded phenotype. FBN1 databases can be imprecise and incomplete. Data should be used with caution when evaluating FBN1 variants. At present, the UMD-FBN1 database seems to be the biggest and best curated; therefore, it is the most comprehensive database. However, the need for better genotype-phenotype curated databases is evident, and we hereby present such a database.Genet Med advance online publication 01 December 2016.

  6. Usability in Scientific Databases

    Directory of Open Access Journals (Sweden)

    Ana-Maria Suduc

    2012-07-01

    Full Text Available Usability, most often defined as the ease of use and acceptability of a system, affects the users' performance and their job satisfaction when working with a machine. Therefore, usability is a very important aspect which must be considered in the process of system development. The paper presents numerical data related to the history of scientific research on the usability of information systems, as viewed through the information provided by three important scientific databases, Science Direct, ACM Digital Library and IEEE Xplore Digital Library, for different queries related to this field.

  7. Social Capital Database

    DEFF Research Database (Denmark)

    Paldam, Martin; Svendsen, Gert Tinggaard

    2005-01-01

      This report has two purposes: The first purpose is to present our 4-page question­naire, which measures social capital. It is close to the main definitions of social capital and contains the most successful measures from the literature. Also it is easy to apply as discussed. The second purpose ...... is to present the social capital database we have collected for 21 countries using the question­naire. We do this by comparing the level of social capital in the countries covered. That is, the report compares the marginals from the 21 surveys....

  9. Maize microarray annotation database

    Directory of Open Access Journals (Sweden)

    Berger Dave K

    2011-10-01

    Full Text Available Abstract Background Microarray technology has matured over the past fifteen years into a cost-effective solution with established data analysis protocols for global gene expression profiling. The Agilent-016047 maize 44 K microarray was custom-designed from EST sequences, but only reporter sequences with EST accession numbers are publicly available. The following information is lacking: (a) reporter - gene model match, (b) number of reporters per gene model, (c) potential for cross hybridization, (d) sense/antisense orientation of reporters, (e) position of reporter on the B73 genome sequence (for eQTL studies), and (f) functional annotations of genes represented by reporters. To address this, we developed a strategy to annotate the Agilent-016047 maize microarray, and built a publicly accessible annotation database. Description Genomic annotation of the 42,034 reporters on the Agilent-016047 maize microarray was based on BLASTN results of the 60-mer reporter sequences and their corresponding ESTs against the maize B73 RefGen v2 "Working Gene Set" (WGS) predicted transcripts and the genome sequence. The agreement between the EST, WGS transcript and gDNA BLASTN results was used to assign the reporters into six genomic annotation groups. These annotation groups were: (i) "annotation by sense gene model" (23,668 reporters); (ii) "annotation by antisense gene model" (4,330); (iii) "annotation by gDNA" without a WGS transcript hit (1,549); (iv) "annotation by EST", in which case the EST from which the reporter was designed, but not the reporter itself, has a WGS transcript hit (3,390); (v) "ambiguous annotation" (2,608); and (vi) "inconclusive annotation" (6,489). Functional annotations of reporters were obtained by BLASTX and Blast2GO analysis of corresponding WGS transcripts against GenBank. The annotations are available in the Maize Microarray Annotation Database http://MaizeArrayAnnot.bi.up.ac.za/, as well as through a GBrowse annotation file that can be uploaded to

  10. Final Results of Shuttle MMOD Impact Database

    Science.gov (United States)

    Hyde, J. L.; Christiansen, E. L.; Lear, D. M.

    2015-01-01

    The Shuttle Hypervelocity Impact Database documents damage features on each Orbiter thought to be from micrometeoroids (MM) or orbital debris (OD). Data is divided into tables for crew module windows, payload bay door radiators and thermal protection systems along with other miscellaneous regions. The combined number of records in the database is nearly 3000. Each database record provides impact feature dimensions, location on the vehicle and relevant mission information. Additional detail on the type and size of particle that produced the damage site is provided when sampling data and definitive spectroscopic analysis results are available. Guidelines are described which were used in determining whether impact damage is from micrometeoroid or orbital debris impact based on the findings from scanning electron microscopy chemical analysis. Relationships assumed when converting from observed feature sizes in different shuttle materials to particle sizes will be presented. A small number of significant impacts on the windows, radiators and wing leading edge will be highlighted and discussed in detail, including the hypervelocity impact testing performed to estimate particle sizes that produced the damage.

  11. The Danish Intensive Care Database

    DEFF Research Database (Denmark)

    Christiansen, Christian Fynbo; Møller, Morten Hylander; Nielsen, Henrik

    2016-01-01

    AIM OF DATABASE: The aim of this database is to improve the quality of care in Danish intensive care units (ICUs) by monitoring key domains of intensive care and to compare these with predefined standards. STUDY POPULATION: The Danish Intensive Care Database (DID) was established in 2007...

  12. Choosing among the physician databases.

    Science.gov (United States)

    Heller, R H

    1988-04-01

    Prudent examination and knowing how to ask the "right questions" can enable hospital marketers and planners to find the most accurate and appropriate database. The author compares the comprehensive AMA physician database with the less expensive MEDEC database to determine their strengths and weaknesses.

  13. Clinical databases in physical therapy.

    NARCIS (Netherlands)

    Swinkels, I.C.; Ende, C.H.M. van den; Bakker, D. de; Wees, P.J. van der; Hart, D.L.; Deutscher, D.; Bosch, W.J.H.M. van den; Dekker, J.

    2007-01-01

    Clinical databases in physical therapy provide increasing opportunities for research into physical therapy theory and practice. At present, information on the characteristics of existing databases is lacking. The purpose of this study was to identify clinical databases in which physical therapists r

  15. Mining Safety Signals in Spontaneous Report Database using Concept Analysis

    OpenAIRE

    Rouane Hacene, Amine Mohamed; Toussaint, Yannick; Valtchev, Petko

    2009-01-01

    International audience; In pharmacovigilance, linking the adverse reactions reported by patients to the drugs they took is a key activity typically based on the analysis of patient reports. Yet generating potentially interesting pairs (drug, reaction) from a record database is a complex task, especially when many drugs are involved. To limit the generation effort, we exploit the frequently occurring patterns in the database and form association rules on top of them. Moreover, only rules of minima...

  16. The Future of Medical Diagnostics: Large Digitized Databases

    OpenAIRE

    Kerr, Wesley T.; Lau, Edward P.; Owens, Gwen E.; Trefler, Aaron

    2012-01-01

    The electronic health record mandate within the American Recovery and Reinvestment Act of 2009 will have a far-reaching effect on medicine. In this article, we provide an in-depth analysis of how this mandate is expected to stimulate the production of large-scale, digitized databases of patient information. There is evidence to suggest that millions of patients and the National Institutes of Health will fully support the mining of such databases to better understand the process of diagnosing ...

  17. 77 FR 62059 - Privacy Act of 1974, as Amended; Revisions to Existing Systems of Records

    Science.gov (United States)

    2012-10-11

    ... RECORDS IN THE SYSTEM: Records include employee's name, Social Security number, date of birth, home address, home telephone number, and specialized education. Records reflect Federal service and work... Enforcement Actions NTSB-14 Information Request Database NTSB-15 Local Area Network Database NTSB-16...

  18. Genomic Database Searching.

    Science.gov (United States)

    Hutchins, James R A

    2017-01-01

    The availability of reference genome sequences for virtually all species under active research has revolutionized biology. Analyses of genomic variations in many organisms have provided insights into phenotypic traits, evolution and disease, and are transforming medicine. All genomic data from publicly funded projects are freely available in Internet-based databases, for download or searching via genome browsers such as Ensembl, Vega, NCBI's Map Viewer, and the UCSC Genome Browser. These online tools generate interactive graphical outputs of relevant chromosomal regions, showing genes, transcripts, and other genomic landmarks, and epigenetic features mapped by projects such as ENCODE. This chapter provides a broad overview of the major genomic databases and browsers, and describes various approaches and the latest resources for searching them. Methods are provided for identifying genomic locus and sequence information using gene names or codes, identifiers for DNA and RNA molecules and proteins; also from karyotype bands, chromosomal coordinates, sequences, motifs, and matrix-based patterns. Approaches are also described for batch retrieval of genomic information, performing more complex queries, and analyzing larger sets of experimental data, for example from next-generation sequencing projects.
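
    As a concrete illustration of the kind of programmatic locus lookup described above, the sketch below queries the public Ensembl REST service for the coordinates of a gene symbol. The endpoint follows Ensembl's documented lookup/symbol route, but the response fields picked out here are assumptions that may change between releases, so treat this as a sketch rather than a guaranteed interface.

        import json
        import urllib.request

        def lookup_gene(symbol, species="homo_sapiens"):
            """Fetch basic genomic coordinates for a gene symbol from Ensembl REST."""
            url = (f"https://rest.ensembl.org/lookup/symbol/{species}/{symbol}"
                   "?content-type=application/json")
            with urllib.request.urlopen(url) as response:
                data = json.load(response)
            # Typical fields include seq_region_name (chromosome), start, end, strand.
            return {key: data.get(key)
                    for key in ("display_name", "seq_region_name", "start", "end", "strand")}

        if __name__ == "__main__":
            print(lookup_gene("BRCA2"))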

  19. Asbestos Exposure Assessment Database

    Science.gov (United States)

    Arcot, Divya K.

    2010-01-01

    Exposure to particular hazardous materials in a work environment is dangerous to the employees who work directly with or around the materials as well as those who come in contact with them indirectly. In order to maintain a national standard for safe working environments and protect worker health, the Occupational Safety and Health Administration (OSHA) has set forth numerous precautionary regulations. NASA has been proactive in adhering to these regulations by implementing standards which are often stricter than regulation limits and administering frequent health risk assessments. The primary objective of this project is to create the infrastructure for an Asbestos Exposure Assessment Database specific to NASA Johnson Space Center (JSC) which will compile all of the exposure assessment data into a well-organized, navigable format. The data includes Sample Types, Sample Durations, Crafts of those from whom samples were collected, Job Performance Requirements (JPR) numbers, Phase Contrast Microscopy (PCM) and Transmission Electron Microscopy (TEM) results and qualifiers, Personal Protective Equipment (PPE), and names of industrial hygienists who performed the monitoring. This database will allow NASA to provide OSHA with specific information demonstrating that JSC's work procedures are protective enough to minimize the risk of future disease from the exposures. The data has been collected by the NASA contractors Computer Sciences Corporation (CSC) and Wyle Laboratories. The personal exposure samples were collected from devices worn by laborers working at JSC and by building occupants located in asbestos-containing buildings.

  20. Object-relational database infrastructure for interactive multimedia service

    Science.gov (United States)

    Hu, Michael J.; Chunyan, Miao

    1997-10-01

    Modern interactive multimedia services, such as video-on-demand, electronic libraries, etc., tend to involve large-scale media archives of audio records, video clips, image banks, and text documents. Thus, these services impose many challenges on designing and implementing new generation database systems. In this paper, we first introduce a new multimedia data model, which can accommodate sophisticated media types as well as complex relationships among different media entities. Thereafter, an object-relational database infrastructure is proposed to support applications of the data model developed in our project. The infrastructure is designated both as a framework for designing and implementing multimedia databases, and as a reference model to compare and evaluate different database systems. Features of the proposed infrastructure, as well as its implementation into a prototype multimedia database system, are also discussed in the paper.

  1. Management of dam safety at BC Hydro: the database tool

    Energy Technology Data Exchange (ETDEWEB)

    Oswell, Terry [BC Hydro, Burnaby (Canada)

    2010-07-01

    BC Hydro has a wide range of dams, which raises a wide range of issues at many unique sites. A dam safety database was developed in 2000 to deal with the complexity and volume of information provided by deficiency investigations and surveillances. The database contains all documented deficiencies and non-conformances identified in the past 10 years. It records the risk ratings assigned to each issue. This paper described the implementation of the database tool, from the characterization of a dam safety issue to the use of the database itself. The dam safety database is now a key tool in managing the dam safety program at BC Hydro and has been useful for the last 10 years or so in prioritizing the program of deficiency investigations and capital projects. The development of a process to rate non-conformances is currently under study and will be implemented soon to aid in more efficient prioritization of maintenance activities.

  2. Evidence generation from healthcare databases: recommendations for managing change.

    Science.gov (United States)

    Bourke, Alison; Bate, Andrew; Sauer, Brian C; Brown, Jeffrey S; Hall, Gillian C

    2016-07-01

    There is an increasing reliance on databases of healthcare records for pharmacoepidemiology and other medical research, and such resources are often accessed over a long period of time so it is vital to consider the impact of changes in data, access methodology and the environment. The authors discuss change in communication and management, and provide a checklist of issues to consider for both database providers and users. The scope of the paper is database research, and changes are considered in relation to the three main components of database research: the data content itself, how it is accessed, and the support and tools needed to use the database. Copyright © 2016 John Wiley & Sons, Ltd.

  3. Federated Spatial Databases and Interoperability

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    It is a period of information explosion. Especially in spatial information science, information can be acquired in many ways, such as man-made satellites, aeroplanes, laser, digital photogrammetry and so on. Spatial data sources are usually distributed and heterogeneous. The federated database is the best solution for the sharing and interoperation of spatial databases. In this paper, the concepts of federated database and interoperability are introduced. Three heterogeneous kinds of spatial data, vector, image and DEM, are used to create an integrated database. A data model of federated spatial databases is given

  4. Genomic Databases for Crop Improvement

    Directory of Open Access Journals (Sweden)

    David Edwards

    2012-03-01

    Full Text Available Genomics is playing an increasing role in plant breeding and this is accelerating with the rapid advances in genome technology. Translating the vast abundance of data being produced by genome technologies requires the development of custom bioinformatics tools and advanced databases. These range from large generic databases which hold specific data types for a broad range of species, to carefully integrated and curated databases which act as a resource for the improvement of specific crops. In this review, we outline some of the features of plant genome databases, identify specific resources for the improvement of individual crops and comment on the potential future direction of crop genome databases.

  5. Databases as an information service

    Science.gov (United States)

    Vincent, D. A.

    1983-01-01

    The relationship of databases to information services, and the range of information services users and their needs for information is explored and discussed. It is argued that for database information to be valuable to a broad range of users, it is essential that access methods be provided that are relatively unstructured and natural to information services users who are interested in the information contained in databases, but who are not willing to learn and use traditional structured query languages. Unless this ease of use of databases is considered in the design and application process, the potential benefits from using database systems may not be realized.

  6. EEMIS data sector correspondence with conceptual database design

    Energy Technology Data Exchange (ETDEWEB)

    Croteau, K; Kydes, A S; Maier, D

    1979-07-01

    The purpose of this report is fivefold: (1) it provides an introduction to database systems and criteria that are important in the selection of a commercial database management system; (2) it demonstrates that the mapping of the EEMIS data sector structure into the conceptual database model is complete and preserves the hierarchies implied by the EEMIS data structure; (3) it describes all fields and associated field lengths; (4) it provides accurate formulae for estimating the computer storage requirements for the conceptual models at the facility level; and (5) it provides the details of the storage requirements for oil refineries, petroleum crude storage facilities, natural gas transmission and distribution facilities, and company description records.

  7. Efficient Set-Correlation Operator Inside Databases

    Institute of Scientific and Technical Information of China (English)

    Fei Gao; Shao-Xu Song; Lei Chen; Jian-Min Wang

    2016-01-01

    Short text records, such as news highlights, scientific paper citations, and posted messages in discussion forums, are now prevalent at large scale and are often stored as set records in hidden-Web databases. Many interesting information retrieval tasks correspondingly arise as correlation queries over these short text records, such as finding hot topics over news highlights and searching for related scientific papers on a certain topic. However, current relational database management systems (RDBMS) do not directly support set correlation queries. Thus, in this paper, we address both the effectiveness and the efficiency issues of set correlation queries over set records in databases. First, we present a framework of set correlation query inside databases. To the best of our knowledge, only Pearson’s correlation can be implemented to construct token correlations by using RDBMS facilities. Therefore, we propose a novel correlation coefficient to extend Pearson’s correlation, and provide a pure-SQL implementation inside databases. We further propose optimal strategies to set the correlation filtering threshold, which can greatly reduce the query time. Our theoretical analysis proves that with a proper setting of the filtering threshold, we can improve the query efficiency with little loss of effectiveness. Finally, we conduct extensive experiments to show the effectiveness and the efficiency of the proposed correlation query and optimization strategies.
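
    The authors' extended coefficient and their pure-SQL implementation are not reproduced in this abstract, but the Pearson baseline they start from, a correlation between token occurrence vectors across a collection of set records, can be sketched as follows; the toy records and the binary occurrence encoding are illustrative assumptions, not the paper's data or method.

        from math import sqrt

        # Toy set records, e.g. news highlights tokenized into term sets.
        records = [
            {"storm", "flood", "rescue"},
            {"storm", "wind"},
            {"flood", "rescue", "aid"},
            {"election", "vote"},
        ]

        def occurrence_vector(token, records):
            """Binary vector: 1 if the token appears in a record, else 0."""
            return [1 if token in rec else 0 for rec in records]

        def pearson(x, y):
            """Plain Pearson correlation between two equal-length vectors."""
            n = len(x)
            mean_x, mean_y = sum(x) / n, sum(y) / n
            cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
            sd_x = sqrt(sum((a - mean_x) ** 2 for a in x))
            sd_y = sqrt(sum((b - mean_y) ** 2 for b in y))
            return cov / (sd_x * sd_y) if sd_x and sd_y else 0.0

        print(pearson(occurrence_vector("storm", records),
                      occurrence_vector("flood", records)))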

  8. RPG: the Ribosomal Protein Gene database.

    Science.gov (United States)

    Nakao, Akihiro; Yoshihama, Maki; Kenmochi, Naoya

    2004-01-01

    RPG (http://ribosome.miyazaki-med.ac.jp/) is a new database that provides detailed information about ribosomal protein (RP) genes. It contains data from humans and other organisms, including Drosophila melanogaster, Caenorhabditis elegans, Saccharomyces cerevisiae, Methanococcus jannaschii and Escherichia coli. Users can search the database by gene name and organism. Each record includes sequences (genomic, cDNA and amino acid sequences), intron/exon structures, genomic locations and information about orthologs. In addition, users can view and compare the gene structures of the above organisms and make multiple amino acid sequence alignments. RPG also provides information on small nucleolar RNAs (snoRNAs) that are encoded in the introns of RP genes.

  9. Cloud Database Management System (CDBMS

    Directory of Open Access Journals (Sweden)

    Snehal B. Shende

    2015-10-01

    Full Text Available A cloud database management system is a distributed database that delivers computing as a service. It shares web infrastructure for resources, software and information over a network. The cloud is used as a storage location, and the database can be accessed and computed from anywhere. The large number of web applications makes use of distributed storage solutions in order to scale up. It enables users to outsource resources and services to third-party servers. This paper covers the recent trend of basing cloud services on database management systems and offering them as one of the services in the cloud. The advantages and disadvantages of database as a service will let you decide whether or not to use database as a service. This paper also highlights the architecture of cloud-based database management systems.

  10. The Danish Testicular Cancer database

    DEFF Research Database (Denmark)

    Daugaard, Gedske; Kier, Maria Gry Gundgaard; Bandak, Mikkel

    2016-01-01

    AIM: The nationwide Danish Testicular Cancer database consists of a retrospective research database (DaTeCa database) and a prospective clinical database (Danish Multidisciplinary Cancer Group [DMCG] DaTeCa database). The aim is to improve the quality of care for patients with testicular cancer (TC......) in Denmark, that is, by identifying risk factors for relapse, toxicity related to treatment, and focusing on late effects. STUDY POPULATION: All Danish male patients with a histologically verified germ cell cancer diagnosis in the Danish Pathology Registry are included in the DaTeCa databases. Data...... survivors alive on October 2014 were invited to fill in this questionnaire including 160 validated questions. Collection of questionnaires is still ongoing. A biobank including blood/sputum samples for future genetic analyses has been established. Both samples related to DaTeCa and DMCG DaTeCa database...

  11. A trait database for marine copepods

    Science.gov (United States)

    Brun, Philipp; Payne, Mark R.; Kiørboe, Thomas

    2017-02-01

    The trait-based approach is gaining increasing popularity in marine plankton ecology but the field urgently needs more, and more easily accessible, trait data to advance. We compiled trait information on marine pelagic copepods, a major group of zooplankton, from the published literature and from experts and organized the data into a structured database. We collected 9306 records for 14 functional traits. Particular attention was given to body size, feeding mode, egg size, spawning strategy, respiration rate, and myelination (presence of nerve sheathing). Most records were reported at the species level, but some phylogenetically conserved traits, such as myelination, were reported at higher taxonomic levels, allowing the entire diversity of around 10 800 recognized marine copepod species to be covered with a few records. Aside from myelination, data coverage was highest for spawning strategy and body size, while information was more limited for quantitative traits related to reproduction and physiology. The database may be used to investigate relationships between traits, to produce trait biogeographies, or to inform and validate trait-based marine ecosystem models. The data can be downloaded from PANGAEA, doi:10.1594/PANGAEA.862968 (http://dx.doi.org/10.1594/PANGAEA.862968).

  12. Danish Palliative Care Database

    Directory of Open Access Journals (Sweden)

    Groenvold M

    2016-10-01

    Full Text Available Mogens Groenvold,1,2 Mathilde Adsersen,1 Maiken Bang Hansen1 1The Danish Palliative Care Database (DPD) Secretariat, Research Unit, Department of Palliative Medicine, Bispebjerg Hospital, 2Department of Public Health, University of Copenhagen, Copenhagen, Denmark Aims: The aim of the Danish Palliative Care Database (DPD) is to monitor, evaluate, and improve the clinical quality of specialized palliative care (SPC) (ie, the activity of hospital-based palliative care teams/departments and hospices) in Denmark. Study population: The study population is all patients in Denmark referred to and/or in contact with SPC after January 1, 2010. Main variables: The main variables in DPD are data about referral for patients admitted and not admitted to SPC, type of the first SPC contact, clinical and sociodemographic factors, multidisciplinary conference, and the patient-reported European Organisation for Research and Treatment of Cancer Quality of Life Questionnaire-Core-15-Palliative Care questionnaire, assessing health-related quality of life. The data support the estimation of currently five quality of care indicators, ie, the proportions of 1) referred and eligible patients who were actually admitted to SPC, 2) patients who waited <10 days before admission to SPC, 3) patients who died from cancer and who obtained contact with SPC, 4) patients who were screened with European Organisation for Research and Treatment of Cancer Quality of Life Questionnaire-Core-15-Palliative Care at admission to SPC, and 5) patients who were discussed at a multidisciplinary conference. Descriptive data: In 2014, all 43 SPC units in Denmark reported their data to DPD, and all 9,434 cancer patients (100%) referred to SPC were registered in DPD. In total, 41,104 unique cancer patients were registered in DPD during the 5 years 2010–2014. Of those registered, 96% had cancer. Conclusion: DPD is a national clinical quality database for SPC having clinically relevant variables and high data

  13. The VARSUL Database

    Directory of Open Access Journals (Sweden)

    Pereira da Silva Menon, Odete

    2009-01-01

    Full Text Available This study introduces the Project that gave origin to one of the most important databases about oral language in Brazil. The Project on Urban Linguistic Variation in the South of Brazil (VARSUL), which started in 1990, initially comprised the three federal universities of the three States of Southern Brazil: Federal University of Santa Catarina (UFSC), Federal University of Paraná (UFPR), and Federal University of Rio Grande do Sul (UFRGS). In 1993, the Project began to also rely on the Pontifical Catholic University of Rio Grande do Sul (PUC-RS). The VARSUL Project aims at storing samples of speech realizations by inhabitants of socio-representative urban areas from each of the three states of the South of Brazil, stratified by location, age range, gender and education.

  14. Danish Palliative Care Database

    DEFF Research Database (Denmark)

    Grønvold, Mogens; Adsersen, Mathilde; Hansen, Maiken Bang

    2016-01-01

    Aims: The aim of the Danish Palliative Care Database (DPD) is to monitor, evaluate, and improve the clinical quality of specialized palliative care (SPC) (ie, the activity of hospital-based palliative care teams/departments and hospices) in Denmark. Study population: The study population is all......, and the patient-reported European Organisation for Research and Treatment of Cancer Quality of Life Questionaire-Core-15-Palliative Care questionnaire, assessing health-related quality of life. The data support the estimation of currently five quality of care indicators, ie, the proportions of 1) referred......-Core-15-Palliative Care at admission to SPC, and 5) patients who were discussed at a multidisciplinary conference. Descriptive data: In 2014, all 43 SPC units in Denmark reported their data to DPD, and all 9,434 cancer patients (100%) referred to SPC were registered in DPD. In total, 41,104 unique cancer...

  15. A user-friendly phytoremediation database: creating the searchable database, the users, and the broader implications.

    Science.gov (United States)

    Famulari, Stevie; Witz, Kyla

    2015-01-01

    Designers, students, teachers, gardeners, farmers, landscape architects, architects, engineers, homeowners, and others have uses for the practice of phytoremediation. This research looks at the creation of a phytoremediation database which is designed for ease of use for a non-scientific user, as well as for students in an educational setting ( http://www.steviefamulari.net/phytoremediation ). During 2012, Environmental Artist & Professor of Landscape Architecture Stevie Famulari, with assistance from Kyla Witz, a landscape architecture student, created an online searchable database designed for high public accessibility. The database is a record of research on plant species that aid in the uptake of contaminants, including metals, organic materials, biodiesels & oils, and radionuclides. The database consists of multiple interconnected indexes categorized into common and scientific plant name, contaminant name, and contaminant type. It includes photographs, hardiness zones, specific plant qualities, full citations to the original research, and other relevant information intended to aid those designing with phytoremediation in searching for potential plants which may be used to address their site's needs. The objective of the terminology section is to remove uncertainty for more inexperienced users, and to clarify terms for a more user-friendly experience. Implications of the work, including education and ease of browsing, as well as use of the database in teaching, are discussed.
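
    The interconnected indexes described above (plant names, contaminant names, contaminant types, citations) map naturally onto a small relational schema. The sketch below shows one possible layout in SQLite; the table and column names are hypothetical and are not taken from the actual site.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
        CREATE TABLE plant (
            id INTEGER PRIMARY KEY,
            common_name TEXT,
            scientific_name TEXT,
            hardiness_zone TEXT
        );
        CREATE TABLE contaminant (
            id INTEGER PRIMARY KEY,
            name TEXT,
            type TEXT  -- e.g. metal, organic, oil, radionuclide
        );
        CREATE TABLE remediates (
            plant_id INTEGER REFERENCES plant(id),
            contaminant_id INTEGER REFERENCES contaminant(id),
            citation TEXT  -- full citation to the original research
        );
        """)

        # Example query: which plants are recorded against a given contaminant?
        rows = conn.execute("""
            SELECT p.common_name, p.scientific_name
            FROM plant p
            JOIN remediates r ON r.plant_id = p.id
            JOIN contaminant c ON r.contaminant_id = c.id
            WHERE c.name = ?
        """, ("lead",)).fetchall()
        print(rows)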

  16. On-the-job training of health professionals for electronic health record and electronic medical record use: A scoping review

    Directory of Open Access Journals (Sweden)

    Valentina L. Younge

    2015-09-01

    Full Text Available The implementation of electronic health records (EHRs) or electronic medical records (EMRs) is well documented in the health informatics literature, yet very few studies focus primarily on how health professionals in direct clinical care are trained for EHR or EMR use. Purpose: To investigate how health professionals in direct clinical care are trained to prepare them for EHR or EMR use. Methods: Systematic searches were conducted in CINAHL, EMBASE, Ovid MEDLINE, PsycINFO, PubMed, and ISI WoS, and the Arksey and O’Malley scoping methodological framework was used to collect the data and analyze the results. Results: Training was done at implementation, orientation and post-implementation. Implementation and orientation training had a broader scope while post-implementation training focused on proficiency, efficiency and improvement. The multiplicity of training methods, types and levels of training identified appears to suggest that training is more effective when a combination of training methods is used.

  17. Dive Data Management System and Database

    Energy Technology Data Exchange (ETDEWEB)

    Gardiner, J.

    1998-05-01

    In 1994 the International Marine Contractors Association (IMCA, formerly AODC), the Health and Safety Executive (HSE) and the United Kingdom Offshore Operators Association (UKOOA) entered into a tri-partite Agreement to create a Dive Data Recording and Management System for offshore dives in the air range on the United Kingdom Continental Shelf (UKCS). The two components of this system were: automatic Dive Data Recording Systems (DDRS) on dive support vessels, to log depth/time and other dive parameters; and a central Dive Data Management System (DDMS) to collate and analyse these data in an industry-wide database. This report summarises the progress of the project over the first two years of operation. It presents the data obtained in the period 1 January 1995 to 31 December 1996, in the form of industry-wide Standard Reports. It comments on the significance of the data, and it records the experience of the participants in implementing and maintaining the offshore Dive Data Recording Systems and the onshore central Dive Data Management System. A key success of the project has been to provide the air-range Diving Supervisor with an accurate, real-time display of the depth and time of every dive. This has enabled the dive and the associated decompression to be managed more effectively by the Supervisor. In the event of an incident, the recorded data are also available to the Dive/Safety Manager, who now has more complete information on which to assess the possible causes of the incident. (author)

  18. International Database of Volcanic Ash Impacts

    Science.gov (United States)

    Wallace, K.; Cameron, C.; Wilson, T. M.; Jenkins, S.; Brown, S.; Leonard, G.; Deligne, N.; Stewart, C.

    2015-12-01

    Volcanic ash creates extensive impacts to people and property, yet we lack a global ash impacts catalog to organize, distribute, and archive this important information. Critical impact information is often stored in ephemeral news articles or other isolated resources, which cannot be queried or located easily. A global ash impacts database would improve 1) warning messages, 2) public and lifeline emergency preparation, and 3) eruption response and recovery. Ashfall can have varying consequences, such as disabling critical lifeline infrastructure (e.g. electrical generation and transmission, water supplies, telecommunications, aircraft and airports) or merely creating limited and expensive inconvenience to local communities. Impacts to the aviation sector can be a far-reaching global issue. The international volcanic ash impacts community formed a committee to develop a database to catalog the impacts of volcanic ash. We identify three user populations for this database: 1) research teams, who would use the database to assist in systematic collection, recording, and storage of ash impact data, and to prioritize impact assessment trips and lab experiments 2) volcanic risk assessment scientists who rely on impact data for assessments (especially vulnerability/fragility assessments); a complete dataset would have utility for global, regional, national and local scale risk assessments, and 3) citizen science volcanic hazard reporting. Publication of an international ash impacts database will encourage standardization and development of best practices for collecting and reporting impact information. Data entered will be highly categorized, searchable, and open source. Systematic cataloging of impact data will allow users to query the data and extract valuable information to aid in the development of improved emergency preparedness, response and recovery measures.

  19. A web-based audiometry database system.

    Science.gov (United States)

    Yeh, Chung-Hui; Wei, Sung-Tai; Chen, Tsung-Wen; Wang, Ching-Yuang; Tsai, Ming-Hsui; Lin, Chia-Der

    2014-07-01

    To establish a real-time, web-based, customized audiometry database system, we worked in cooperation with the departments of medical records, information technology, and otorhinolaryngology at our hospital. This system includes an audiometry data entry system, retrieval and display system, patient information incorporation system, audiometry data transmission program, and audiometry data integration. Compared with commercial audiometry systems and traditional hand-drawn audiometry data, this web-based system saves time and money and is convenient for statistics research. Copyright © 2013. Published by Elsevier B.V.

  20. FEEDBACK ON A PUBLICLY DISTRIBUTED IMAGE DATABASE: THE MESSIDOR DATABASE

    Directory of Open Access Journals (Sweden)

    Etienne Decencière

    2014-08-01

    Full Text Available The Messidor database, which contains hundreds of eye fundus images, has been publicly distributed since 2008. It was created by the Messidor project in order to evaluate automatic lesion segmentation and diabetic retinopathy grading methods. Designing, producing and maintaining such a database entails significant costs. By publicly sharing it, one hopes to bring a valuable resource to the public research community. However, the real interest and benefit of the research community is not easy to quantify. We analyse here the feedback on the Messidor database, after more than 6 years of diffusion. This analysis should apply to other similar research databases.

  1. MetaBase—the wiki-database of biological databases

    Science.gov (United States)

    Bolser, Dan M.; Chibon, Pierre-Yves; Palopoli, Nicolas; Gong, Sungsam; Jacob, Daniel; Angel, Victoria Dominguez Del; Swan, Dan; Bassi, Sebastian; González, Virginia; Suravajhala, Prashanth; Hwang, Seungwoo; Romano, Paolo; Edwards, Rob; Bishop, Bryan; Eargle, John; Shtatland, Timur; Provart, Nicholas J.; Clements, Dave; Renfro, Daniel P.; Bhak, Daeui; Bhak, Jong

    2012-01-01

    Biology is generating more data than ever. As a result, there is an ever increasing number of publicly available databases that analyse, integrate and summarize the available data, providing an invaluable resource for the biological community. As this trend continues, there is a pressing need to organize, catalogue and rate these resources, so that the information they contain can be most effectively exploited. MetaBase (MB) (http://MetaDatabase.Org) is a community-curated database containing more than 2000 commonly used biological databases. Each entry is structured using templates and can carry various user comments and annotations. Entries can be searched, listed, browsed or queried. The database was created using the same MediaWiki technology that powers Wikipedia, allowing users to contribute on many different levels. The initial release of MB was derived from the content of the 2007 Nucleic Acids Research (NAR) Database Issue. Since then, approximately 100 databases have been manually collected from the literature, and users have added information for over 240 databases. MB is synchronized annually with the static Molecular Biology Database Collection provided by NAR. To date, there have been 19 significant contributors to the project; each one is listed as an author here to highlight the community aspect of the project. PMID:22139927

  2. Title XVI / Supplemental Security Record Point In Time (SSRPT)

    Data.gov (United States)

    Social Security Administration — This is the point-in-time database to house temporary Supplemental Security Record (SSR) images produced during the course of the operating day before they can be...

  3. DRAG: a database for recognition and analysis of gait

    Science.gov (United States)

    Kuchi, Prem; Hiremagalur, Raghu Ram V.; Huang, Helen; Carhart, Michael; He, Jiping; Panchanathan, Sethuraman

    2003-11-01

    A novel approach is proposed for creating a standardized and comprehensive database for gait analysis. The field of gait analysis is gaining increasing attention for applications such as visual surveillance, human-computer interfaces, and gait recognition and rehabilitation. Numerous algorithms have been developed for analyzing and processing gait data; however, a standard database for their systematic evaluation does not exist. Instead, existing gait databases consist of subsets of kinematic, kinetic, and electromyographic activity recordings by different investigators, at separate laboratories, and under varying conditions. Thus, the existing databases are neither homogenous nor sufficiently populated to statistically validate the algorithms. In this paper, a methodology for creating a database is presented, which can be used as a common ground to test the performance of algorithms that rely upon external marker data, ground reaction loading data, and/or video images. The database consists of: (1) synchronized motion-capture data (3D marker data) obtained using external markers, (2) computed joint angles, and (3) ground reaction loading acquired with plantar pressure insoles. This database could be easily expanded to include synchronized video, which will facilitate further development of video-based algorithms for motion tracking. This eventually could lead to the realization of markerless gait tracking. Such a system would have extensive applications in gait recognition, as well as gait rehabilitation. The entire database (marker, angle, and force data) will be placed in the public domain, and made available for downloads over the World Wide Web.
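
    The three synchronized streams listed above (marker data, computed joint angles, and plantar pressure) suggest a simple per-trial record structure. The field names and units in the sketch below are illustrative assumptions about how such a trial might be stored, not the project's actual schema.

        from dataclasses import dataclass, field
        from typing import List, Tuple

        @dataclass
        class GaitTrial:
            """One recorded gait trial, combining the three synchronized streams."""
            subject_id: str
            # (frame, marker name, x, y, z) samples from the motion-capture system
            markers: List[Tuple[int, str, float, float, float]] = field(default_factory=list)
            # (frame, joint name, angle in degrees) computed from the marker data
            joint_angles: List[Tuple[int, str, float]] = field(default_factory=list)
            # (frame, insole sensor id, pressure) from the plantar pressure insoles
            plantar_pressure: List[Tuple[int, int, float]] = field(default_factory=list)

        trial = GaitTrial(
            subject_id="S001",
            markers=[(0, "LKNE", 0.12, 0.45, 0.51)],
            joint_angles=[(0, "left_knee_flexion", 12.3)],
            plantar_pressure=[(0, 17, 84.0)],
        )
        print(trial.subject_id, len(trial.markers))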

  4. Equipped Search Results Using Machine Learning from Web Databases

    Directory of Open Access Journals (Sweden)

    Ahmed Mudassar Ali

    2015-05-01

    Full Text Available The aim of this study is to form clusters of search results based on similarity and to assign a meaningful label to each cluster. Database-driven web pages play a vital role in multiple domains such as online shopping, e-education systems, and cloud computing. Such databases are accessible through HTML forms and user interfaces. They return result pages that come from the underlying databases according to the nature of the user query. Such databases are termed Web Databases (WDB). Web databases are frequently employed to search for products online in the retail industry. They can be private to a retailer/concern or publicly used by a number of retailers. Whenever the user queries these databases using keywords, most of the time the user will be distracted by the search results returned, because there is little relevance between the keyword and the SRs (Search Results). A typical web page returned from a WDB has multiple Search Result Records (SRRs). An easier way is to group similar SRRs into one cluster so that the user can focus on his demand. The key concept of this paper is XML technologies. In this study, we propose a novel system called CSR (Clustering Search Results), which extracts the data from the XML database, clusters them based on similarity, and finally assigns a meaningful label to each cluster. Thus, the output for the keyword entered will be the clusters containing related data items.
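
    The CSR system is described above only at a high level, so the sketch below shows one common way to realize the same idea with scikit-learn: vectorize short search-result records, cluster them by similarity, and label each cluster with its dominant terms. The sample records and the choice of TF-IDF plus k-means are assumptions for illustration, not the paper's method.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.cluster import KMeans

        # Hypothetical search result records returned for a keyword query.
        srrs = [
            "canon dslr camera 24mp body only",
            "nikon dslr camera kit with lens",
            "stainless steel kitchen knife set",
            "chef knife 8 inch stainless steel",
        ]

        vectorizer = TfidfVectorizer()
        X = vectorizer.fit_transform(srrs)

        km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

        # Label each cluster with the highest-weighted terms of its centroid.
        terms = vectorizer.get_feature_names_out()
        for cluster_id in range(km.n_clusters):
            top = km.cluster_centers_[cluster_id].argsort()[::-1][:3]
            label = ", ".join(terms[i] for i in top)
            members = [srrs[i] for i, c in enumerate(km.labels_) if c == cluster_id]
            print(f"Cluster '{label}': {members}")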

  5. 7 CFR 1437.7 - Records.

    Science.gov (United States)

    2010-01-01

    ... provide documentary evidence acceptable to CCC of production and the date harvest was completed, including production of crops planted after the planting period or late planting period. Such documentary evidence must.... Records of a previous crop year's production for inclusion in the actual production history database...

  6. CARISPLAN Procedure Manual: Institutional Author Record Card.

    Science.gov (United States)

    United Nations Economic Commission for Latin America, Port-of-Spain (Trinadad). Caribbean Documentation Centre.

    This manual outlines the prescribed format and content of an institutional author record card, which forms the basis of the CARISPLAN (Caribbean Information System for Economic and Social Planning) Institutional Author Authority File used in conjunction with the databases of the United Nations (UN) Economic Commission for Latin America (ECLA).…

  7. Unidirectional Replication in Heterogeneous Databases

    Directory of Open Access Journals (Sweden)

    Hendro Nindito

    2013-12-01

    Full Text Available The use of diverse database technologies in today's enterprises cannot be avoided, so technology is needed to deliver information in real time. The purpose of this research is to discuss a database replication technology that can be applied in heterogeneous database environments. In this study, a Windows-based MS SQL Server database is used as the source and a Linux-based Oracle database as the target. The research method is prototyping, in which development proceeds quickly and working models of the interaction process are tested through repeated iterations. The research shows that database replication using Oracle GoldenGate can indeed be applied in heterogeneous environments in real time.
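    GoldenGate itself captures changes from transaction logs; the sketch below only illustrates the unidirectional data flow (source changes applied to a target) using two in-memory sqlite3 databases in place of the SQL Server source and Oracle target, so every detail here is a stand-in rather than the study's actual setup.

```python
import sqlite3

# Toy unidirectional replication: poll changed rows in a source database and
# apply them to a target. sqlite3 stands in for the SQL Server source and the
# Oracle target; real GoldenGate replication is log-based, not polling-based.
source = sqlite3.connect(":memory:")
target = sqlite3.connect(":memory:")

source.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL, version INTEGER)")
target.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL, version INTEGER)")
source.execute("INSERT INTO orders VALUES (1, 10.0, 1), (2, 20.0, 1)")

def replicate(last_version: int) -> int:
    """Copy rows changed since last_version from source to target; return new watermark."""
    rows = source.execute(
        "SELECT id, amount, version FROM orders WHERE version > ?", (last_version,)
    ).fetchall()
    for row in rows:
        target.execute("INSERT OR REPLACE INTO orders (id, amount, version) VALUES (?, ?, ?)", row)
    target.commit()
    return max((r[2] for r in rows), default=last_version)

watermark = replicate(0)
print(target.execute("SELECT * FROM orders").fetchall())
```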

  8. Moving Observer Support for Databases

    DEFF Research Database (Denmark)

    Bukauskas, Linas

    Interactive visual data explorations impose rigid requirements on database and visualization systems. Systems that visualize huge amounts of data tend to request large amounts of memory resources and heavily use the CPU to process and visualize data. Current systems employ a loosely coupled...... architecture to exchange data between database and visualization. Thus, the interaction of the visualizer and the database is kept to the minimum, which most often leads to superfluous data being passed from database to visualizer. This Ph.D. thesis presents a novel tight coupling of database and visualizer...... together with the VR-tree enables the fast extraction of appearing and disappearing objects from the observer's view as he navigates through the data space. Usage of VAST structure significantly reduces the number of objects to be extracted from the VR-tree and VAST enables a fast interaction of database...

  9. Use of Microcomputers for School Hearing Screening and Evaluation Records.

    Science.gov (United States)

    Jackson, Coleen O'Rourke

    A pilot project evaluated the use of a microcomputer database system to maintain hearing screening, evaluation, and followup records in a school for physically, emotionally, or educationally handicapped children (6 months-18 years). Using a universal database management system for a microcomputer, a program was designed which would allow for easy…

  10. The Danish Collaborative Bacteraemia Network (DACOBAN database

    Directory of Open Access Journals (Sweden)

    Gradel KO

    2014-09-01

    Full Text Available Kim Oren Gradel,1,2 Henrik Carl Schønheyder,3,4 Magnus Arpi,5 Jenny Dahl Knudsen,6 Christian Østergaard,6 Mette Søgaard7 For the Danish Collaborative Bacteraemia Network (DACOBAN) 1Center for Clinical Epidemiology, Odense University Hospital, 2Research Unit of Clinical Epidemiology, Institute of Clinical Research, University of Southern Denmark, Odense, Denmark; 3Department of Clinical Microbiology, Aalborg University Hospital, 4Department of Clinical Medicine, Aalborg University, Aalborg, 5Department of Clinical Microbiology, Herlev Hospital, Copenhagen University Hospital, Herlev, 6Department of Clinical Microbiology, Hvidovre Hospital, Copenhagen University Hospital, Hvidovre, 7Department of Clinical Epidemiology, Institute of Clinical Medicine, Aarhus University Hospital, Aarhus University, Aarhus, Denmark Abstract: The Danish Collaborative Bacteraemia Network (DACOBAN) research database includes microbiological data obtained from positive blood cultures from a geographically and demographically well-defined population serviced by three clinical microbiology departments (1.7 million residents, 32% of the Danish population). The database also includes data on comorbidity from the Danish National Patient Registry, vital status from the Danish Civil Registration System, and clinical data on 31% of nonselected records in the database. Use of the unique civil registration number given to all Danish residents enables linkage to additional registries for specific research projects. The DACOBAN database is continuously updated, and it currently comprises 39,292 patients with 49,951 bacteremic episodes from 2000 through 2011. The database is part of an international network of population-based bacteremia registries from five developed countries on three continents. The main purpose of the DACOBAN database is to study surveillance, risk, and prognosis. Sex- and age-specific data on background populations enable the computation of incidence rates. In

  11. Inorganic Crystal Structure Database (ICSD)

    Science.gov (United States)

    SRD 84 FIZ/NIST Inorganic Crystal Structure Database (ICSD) (PC database for purchase)   The Inorganic Crystal Structure Database (ICSD) is produced cooperatively by the Fachinformationszentrum Karlsruhe(FIZ) and the National Institute of Standards and Technology (NIST). The ICSD is a comprehensive collection of crystal structure data of inorganic compounds containing more than 140,000 entries and covering the literature from 1915 to the present.

  12. References for Galaxy Clusters Database

    OpenAIRE

    Kalinkov, M.; Valtchanov, I.; Kuneva, I.

    1998-01-01

    A bibliographic database will be constructed to serve as a general tool for searching references on galaxy clusters. The structure of the database will be completely different from currently available databases such as NED, SIMBAD, and LEDA. Search based on a hierarchical keyword system will be performed through web interfaces over numerous bibliographic sources -- journal articles, preprints, unpublished results and papers, theses, and scientific reports. Data from the very beginning of the extra...

  13. Database Engines: Evolution of Greenness

    OpenAIRE

    Miranskyy, Andriy V.; Al-zanbouri, Zainab; Godwin, David; Bener, Ayse Basar

    2017-01-01

    Context: Information Technology consumes up to 10% of the world's electricity generation, contributing to CO2 emissions and high energy costs. Data centers, particularly databases, use up to 23% of this energy. Therefore, building an energy-efficient (green) database engine could reduce energy consumption and CO2 emissions. Goal: To understand the factors driving databases' energy consumption and execution time throughout their evolution. Method: We conducted an empirical case study of energ...

  14. Database design for a kindergarten Pastelka

    OpenAIRE

    Grombíř, Tomáš

    2010-01-01

    This bachelor thesis deals with the analysis and creation of a database for a kindergarten and with the installation of the designed database in the MySQL database system. The functionality of the proposed database was verified through an application written in PHP.

  15. Cloud database development and management

    CERN Document Server

    Chao, Lee

    2013-01-01

    Nowadays, cloud computing is almost everywhere. However, one can hardly find a textbook that utilizes cloud computing for teaching database and application development. This cloud-based database development book teaches both the theory and practice with step-by-step instructions and examples. This book helps readers to set up a cloud computing environment for teaching and learning database systems. The book will cover adequate conceptual content for students and IT professionals to gain necessary knowledge and hands-on skills to set up cloud based database systems.

  16. The Danish Testicular Cancer database

    Directory of Open Access Journals (Sweden)

    Daugaard G

    2016-10-01

    Full Text Available Gedske Daugaard,1 Maria Gry Gundgaard Kier,1 Mikkel Bandak,1 Mette Saksø Mortensen,1 Heidi Larsson,2 Mette Søgaard,2 Birgitte Groenkaer Toft,3 Birte Engvad,4 Mads Agerbæk,5 Niels Vilstrup Holm,6 Jakob Lauritsen1 1Department of Oncology 5073, Copenhagen University Hospital, Rigshospitalet, Copenhagen, 2Department of Clinical Epidemiology, Aarhus University Hospital, Aarhus, 3Department of Pathology, Copenhagen University Hospital, Rigshospitalet, Copenhagen, 4Department of Pathology, Odense University Hospital, Odense, 5Department of Oncology, Aarhus University Hospital, Aarhus, 6Department of Oncology, Odense University Hospital, Odense, Denmark Aim: The nationwide Danish Testicular Cancer database consists of a retrospective research database (DaTeCa database) and a prospective clinical database (Danish Multidisciplinary Cancer Group [DMCG] DaTeCa database). The aim is to improve the quality of care for patients with testicular cancer (TC) in Denmark, that is, by identifying risk factors for relapse, toxicity related to treatment, and focusing on late effects. Study population: All Danish male patients with a histologically verified germ cell cancer diagnosis in the Danish Pathology Registry are included in the DaTeCa databases. Data collection has been performed from 1984 to 2007 and from 2013 onward, respectively. Main variables and descriptive data: The retrospective DaTeCa database contains detailed information with more than 300 variables related to histology, stage, treatment, relapses, pathology, tumor markers, kidney function, lung function, etc. A questionnaire related to late effects has been conducted, which includes questions regarding social relationships, life situation, general health status, family background, diseases, symptoms, use of medication, marital status, psychosocial issues, fertility, and sexuality. TC survivors alive on October 2014 were invited to fill in this questionnaire including 160 validated questions

  17. Network-based Database Course

    DEFF Research Database (Denmark)

    Nielsen, J.N.; Knudsen, Morten; Nielsen, Jens Frederik Dalsgaard

    A course in database design and implementation has been designed, utilizing existing network facilities. The course is an elementary course for students of computer engineering. Its purpose is to give the students a theoretical database knowledge as well as practical experience with design...... and implementation. A tutorial relational database and the students' self-designed databases are implemented on the UNIX system of Aalborg University, thus giving the teacher the possibility of live demonstrations in the lecture room, and the students the possibility of interactive learning in their working rooms...

  18. Working with Documents in Databases

    Directory of Open Access Journals (Sweden)

    Marian DARDALA

    2008-01-01

    Full Text Available The ever wider use of electronic documents within organizations and public institutions requires their storage and unified management by means of databases. The purpose of this article is to present how documents are loaded, managed, and visualized in a database, taking the MS SQL Server DBMS as an example. The modules for loading documents into the database and for visualizing them are presented through code sequences written in C#. Interoperability between the environments is achieved by means of the ADO.NET database access technology.
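    The article's modules are written in C# against MS SQL Server via ADO.NET; as a stand-in, the sketch below shows the same load-and-retrieve idea with Python's standard-library sqlite3 module, storing document bytes in a BLOB column. The table and function names are illustrative, not the article's.

```python
import sqlite3
from pathlib import Path

# Same load-and-retrieve idea as the C#/ADO.NET modules described above,
# sketched with Python's standard-library sqlite3: documents are stored as
# BLOBs and read back for visualization. Schema and names are illustrative.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE documents (id INTEGER PRIMARY KEY, name TEXT, content BLOB)")

def load_document(path: str) -> None:
    """Read a file from disk and store its bytes in the documents table."""
    data = Path(path).read_bytes()
    con.execute("INSERT INTO documents (name, content) VALUES (?, ?)", (Path(path).name, data))
    con.commit()

def fetch_document(name: str) -> bytes:
    """Return the stored bytes for a document, or empty bytes if absent."""
    row = con.execute("SELECT content FROM documents WHERE name = ?", (name,)).fetchone()
    return row[0] if row else b""
```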

  20. Croatian Cadastre Database Modelling

    Directory of Open Access Journals (Sweden)

    Zvonko Biljecki

    2013-04-01

    Full Text Available The Cadastral Data Model has been developed as part of a larger programme to improve the products and production environment of the Croatian Cadastral Service of the State Geodetic Administration (SGA). The goal of the project was to create a cadastral data model conforming to relevant standards and specifications in the field of geoinformation (GI) adopted by the international standardisation organisations responsible for GI (ISO TC211 and OpenGIS) and their implementations. The main guidelines during the project were object-oriented conceptual modelling of the updated user requirements and a "new" cadastral data model designed by the SGA - Faculty of Geodesy - Geofoto LLC project team. The UML conceptual model is given for all feature categories and is described only at class level. The next step was the UML technical model, which was developed from the UML conceptual model. The technical model integrates different UML schemas into one unified schema. XML (eXtensible Markup Language) was applied for the XML description of the UML models, and the XML schema was then transferred into a GML (Geography Markup Language) application schema. With this procedure we have completely described the behaviour of each cadastral feature and the rules for transferring and storing cadastral features in the database.

  1. Public chemical compound databases.

    Science.gov (United States)

    Williams, Anthony J

    2008-05-01

    The internet has rapidly become the first port of call for all information searches. The increasing array of chemistry-related resources that are now available provides chemists with a direct path to the information that was previously accessed via library services and was limited by commercial and costly resources. The diversity of the information that can be accessed online is expanding at a dramatic rate, and the support for publicly available resources offers significant opportunities in terms of the benefits to science and society. While the data online do not generally meet the quality standards of manually curated sources, there are efforts underway to gather scientists together and 'crowdsource' an improvement in the quality of the available data. This review discusses the types of public compound databases that are available online and provides a series of examples. Focus is also given to the benefits and disruptions associated with the increased availability of such data and the integration of technologies to data mine this information.

  2. ASSOCIATION RULES IN HORIZONTALLY DISTRIBUTED DATABASES WITH ENHANCED SECURE MINING

    Directory of Open Access Journals (Sweden)

    Sonal Patil

    2015-10-01

    Full Text Available Recent developments in information technology have made possible the collection and analysis of millions of transactions containing personal data. These data include shopping habits, criminal records, medical histories, and credit records, among others. A distributed database is a database in which the storage devices are not all attached to a common processing unit such as the CPU; it is controlled by a distributed database management system (together sometimes called a distributed database system). It may be stored in multiple computers located in the same physical location or may be dispersed over a network of interconnected computers. A protocol has been proposed for secure mining of association rules in horizontally distributed databases. This protocol is more efficient than the Fast Distributed Mining (FDM) algorithm, which is an unsecured distributed version of the Apriori algorithm. The main purpose of the protocol is to remove the problem of mining generalized association rules that affects the existing system. The protocol offers enhanced privacy with respect to previous protocols. In addition, it is simpler and is optimized in terms of communication rounds, communication cost, and computational cost compared with other protocols.
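    The protocol builds on FDM, which is itself a distributed version of Apriori; to make the underlying frequent-itemset step concrete, here is a minimal single-site Apriori sketch in Python, with no distribution and no privacy machinery. The transaction data and support threshold are invented for illustration.

```python
from itertools import combinations

# Minimal single-site Apriori sketch: find itemsets whose support reaches a
# threshold. The secure protocol described above distributes this computation
# (FDM-style) and adds privacy protection; none of that machinery is shown.
def apriori(transactions, min_support):
    n = len(transactions)
    items = {item for t in transactions for item in t}
    current = [frozenset([i]) for i in items]
    frequent = {}
    while current:
        counts = {c: sum(1 for t in transactions if c <= t) for c in current}
        survivors = {c: cnt / n for c, cnt in counts.items() if cnt / n >= min_support}
        frequent.update(survivors)
        # Candidate generation: join surviving k-itemsets that differ in one item.
        keys = list(survivors)
        current = list({a | b for a, b in combinations(keys, 2) if len(a | b) == len(a) + 1})
    return frequent

baskets = [frozenset(t) for t in (["milk", "bread"], ["milk", "bread", "eggs"],
                                  ["bread", "eggs"], ["milk", "eggs"])]
for itemset, support in apriori(baskets, min_support=0.5).items():
    print(sorted(itemset), support)
```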

  3. Medical databases in studies of drug teratogenicity: methodological issues.

    Science.gov (United States)

    Ehrenstein, Vera; Sørensen, Henrik T; Bakketeig, Leiv S; Pedersen, Lars

    2010-01-01

    More than half of all pregnant women take prescription medications, raising concerns about fetal safety. Medical databases routinely collecting data from large populations are potentially valuable resources for cohort studies addressing teratogenicity of drugs. These include electronic medical records, administrative databases, population health registries, and teratogenicity information services. Medical databases allow estimation of prevalences of birth defects with enhanced precision, but systematic error remains a potentially serious problem. In this review, we first provide a brief description of types of North American and European medical databases suitable for studying teratogenicity of drugs and then discuss manifestation of systematic errors in teratogenicity studies based on such databases. Selection bias stems primarily from the inability to ascertain all reproductive outcomes. Information bias (misclassification) may be caused by paucity of recorded clinical details or incomplete documentation of medication use. Confounding, particularly confounding by indication, can rarely be ruled out. Bias that either masks teratogenicity or creates false appearance thereof, may have adverse consequences for the health of the child and the mother. Biases should be quantified and their potential impact on the study results should be assessed. Both theory and software are available for such estimation. Provided that methodological problems are understood and effectively handled, computerized medical databases are a valuable source of data for studies of teratogenicity of drugs.

  4. The use of databases and registries to enhance colonoscopy quality.

    Science.gov (United States)

    Logan, Judith R; Lieberman, David A

    2010-10-01

    Administrative databases, registries, and clinical databases are designed for different purposes and therefore have different advantages and disadvantages in providing data for enhancing quality. Administrative databases provide the advantages of size, availability, and generalizability, but are subject to constraints inherent in the coding systems used and from data collection methods optimized for billing. Registries are designed for research and quality reporting but require significant investment from participants for secondary data collection and quality control. Electronic health records contain all of the data needed for quality research and measurement, but that data is too often locked in narrative text and unavailable for analysis. National mandates for electronic health record implementation and functionality will likely change this landscape in the near future.

  5. Five Librarians Talk about Quality Control and the OCLC Database.

    Science.gov (United States)

    Helge, Brian; And Others

    1987-01-01

    Five librarians considered authorities on quality cataloging in the OCLC Online Union Catalog were interviewed to obtain their views on the current level of quality control in the OCLC database, the responsibilities of OCLC and individual libraries in improving the quality of records, and the consequences of quality control problems. (CLB)

  6. Database Application for a Youth Market Livestock Production Education Program

    Science.gov (United States)

    Horney, Marc R.

    2013-01-01

    This article offers an example of a database designed to support teaching animal production and husbandry skills in county youth livestock programs. The system was used to manage production goals, animal growth and carcass data, photos and other imagery, and participant records. These were used to produce a variety of customized reports to help…

  7. SM-ROM-GL (Strong Motion Romania Ground Level Database

    Directory of Open Access Journals (Sweden)

    Ioan Sorin BORCIA

    2015-07-01

    Full Text Available The SM-ROM-GL database includes data obtained by processing records acquired at ground level by the Romanian seismic networks, namely INCERC, NIEP, NCSRR and ISPH-GEOTEC, during recent seismic events with moment magnitude Mw ≥ 5 and epicenters located in Romania. All the available seismic records were re-processed using the same basic software and the same procedures and options (filtering and baseline correction), in order to obtain a consistent dataset. The database stores computed parameters of the seismic motions, i.e. peak values (PGA, PGV, PGD), effective peak values (EPA, EPV, EPD), control periods, spectral values of absolute acceleration, relative velocity and relative displacement, as well as instrumental intensity (as defined by Sandi and Borcia in 2011). The fields in the database include the coding of seismic events, stations and records, a number of associated fields (seismic event source parameters, geographical coordinates of seismic stations), links to the corresponding ground motion records, charts of the response spectra of absolute acceleration, relative velocity, relative displacement and instrumental intensity, as well as some other representative parameters of the seismic motions. The design of the SM-ROM-GL database allows for easy maintenance, such that elementary knowledge of Microsoft Access 2000 is sufficient for its operation.
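    As a small illustration of the peak values stored in the database, the sketch below computes PGA, PGV, and PGD from a synthetic acceleration time series by simple numerical integration with numpy; the database's actual processing (filtering, baseline correction, response spectra, instrumental intensity) is not reproduced here.

```python
import numpy as np

# Illustrative computation of peak ground motion values from an acceleration
# record. The signal is synthetic; real records would be filtered and
# baseline-corrected before integration, as done for SM-ROM-GL.
dt = 0.01                                                    # sample interval [s]
t = np.arange(0.0, 20.0, dt)
acc = 0.8 * np.sin(2 * np.pi * 1.5 * t) * np.exp(-0.2 * t)   # acceleration [m/s^2]

vel = np.cumsum(acc) * dt    # crude rectangular integration to velocity [m/s]
disp = np.cumsum(vel) * dt   # and again to displacement [m]

pga = np.max(np.abs(acc))
pgv = np.max(np.abs(vel))
pgd = np.max(np.abs(disp))
print(f"PGA = {pga:.3f} m/s^2, PGV = {pgv:.3f} m/s, PGD = {pgd:.3f} m")
```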

  8. Initiation of a Database of CEUS Ground Motions for NGA East

    Science.gov (United States)

    Cramer, C. H.

    2007-12-01

    The Nuclear Regulatory Commission has funded the first stage of development of a database of central and eastern US (CEUS) broadband and accelerograph records, along the lines of the existing Next Generation Attenuation (NGA) database for active tectonic areas. This database will form the foundation of an NGA East project for the development of CEUS ground-motion prediction equations that include the effects of soils. This initial effort covers the development of a database design and the beginning of data collection to populate the database. It also includes some processing for important source parameters (Brune corner frequency and stress drop) and site parameters (kappa, Vs30). Besides collecting appropriate earthquake recordings and information, existing information about site conditions at recording sites will also be gathered, including geology and geotechnical information. The long-range goal of the database development is to complete the database and make it available in 2010. The database design is centered on CEUS ground motion information needs but is built on the Pacific Earthquake Engineering Research Center's (PEER) NGA experience. Documentation from the PEER NGA website was reviewed and relevant fields incorporated into the CEUS database design. CEUS database tables include ones for earthquake, station, component, record, and references. As was done for NGA, a CEUS ground- motion flat file of key information will be extracted from the CEUS database for use in attenuation relation development. A short report on the CEUS database and several initial design-definition files are available at https://umdrive.memphis.edu:443/xythoswfs/webui/_xy-7843974_docstore1. Comments and suggestions on the database design can be sent to the author. More details will be presented in a poster at the meeting.
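    As a loose sketch of the table layout named in this record (earthquake, station, component, record, references) and of the flat-file extraction step, here is a minimal sqlite3 example; all column names are illustrative assumptions, not the actual CEUS/NGA East design.

```python
import sqlite3

# Minimal sketch of the five tables named above. Column choices are
# illustrative only and do not reflect the real CEUS database design.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE earthquake (eq_id INTEGER PRIMARY KEY, origin_time TEXT,
                         magnitude REAL, stress_drop_bar REAL);
CREATE TABLE station    (sta_id INTEGER PRIMARY KEY, name TEXT,
                         vs30_mps REAL, site_geology TEXT);
CREATE TABLE component  (comp_id INTEGER PRIMARY KEY, sta_id INTEGER REFERENCES station,
                         orientation TEXT);
CREATE TABLE record     (rec_id INTEGER PRIMARY KEY, eq_id INTEGER REFERENCES earthquake,
                         comp_id INTEGER REFERENCES component, pga_g REAL, kappa_s REAL);
CREATE TABLE "references" (ref_id INTEGER PRIMARY KEY, citation TEXT);
""")

# A ground-motion "flat file" for attenuation-relation work can then be one
# denormalising query over the joined tables (empty until populated).
flat = con.execute("""
SELECT e.magnitude, s.vs30_mps, r.pga_g, r.kappa_s
FROM record r
JOIN earthquake e ON r.eq_id = e.eq_id
JOIN component c  ON r.comp_id = c.comp_id
JOIN station s    ON c.sta_id = s.sta_id
""").fetchall()
print(flat)
```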

  9. Hanford Site technical baseline database

    Energy Technology Data Exchange (ETDEWEB)

    Porter, P.E.

    1996-09-30

    This document includes a cassette tape that contains the Hanford specific files that make up the Hanford Site Technical Baseline Database as of September 30, 1996. The cassette tape also includes the delta files that delineate the differences between this revision and revision 4 (May 10, 1996) of the Hanford Site Technical Baseline Database.

  10. Hanford Site technical baseline database

    Energy Technology Data Exchange (ETDEWEB)

    Porter, P.E., Westinghouse Hanford

    1996-05-10

    This document includes a cassette tape that contains the Hanford specific files that make up the Hanford Site Technical Baseline Database as of May 10, 1996. The cassette tape also includes the delta files that delineate the differences between this revision and revision 3 (April 10, 1996) of the Hanford Site Technical Baseline Database.

  11. Wind turbine reliability database update.

    Energy Technology Data Exchange (ETDEWEB)

    Peters, Valerie A.; Hill, Roger Ray; Stinebaugh, Jennifer A.; Veers, Paul S.

    2009-03-01

    This report documents the status of the Sandia National Laboratories' Wind Plant Reliability Database. Included in this report are updates on the form and contents of the Database, which stems from a five-step process of data partnerships, data definition and transfer, data formatting and normalization, analysis, and reporting. Selected observations are also reported.

  12. The UCSC Genome Browser Database

    DEFF Research Database (Denmark)

    Karolchik, D; Kuhn, R M; Baertsch, R

    2008-01-01

    The University of California, Santa Cruz, Genome Browser Database (GBD) provides integrated sequence and annotation data for a large collection of vertebrate and model organism genomes. Seventeen new assemblies have been added to the database in the past year, for a total coverage of 19 vertebrat...

  13. Adaptive Segmentation for Scientific Databases

    NARCIS (Netherlands)

    Ivanova, M.G.; Kersten, M.L.; Nes, N.J.

    2008-01-01

    In this paper we explore database segmentation in the context of a column-store DBMS targeted at a scientific database. We present a novel hardware- and scheme-oblivious segmentation algorithm, which learns and adapts to the workload immediately. The approach taken is to capitalize on (intermediate)

  14. XCOM: Photon Cross Sections Database

    Science.gov (United States)

    SRD 8 XCOM: Photon Cross Sections Database (Web, free access)   A web database is provided which can be used to calculate photon cross sections for scattering, photoelectric absorption and pair production, as well as total attenuation coefficients, for any element, compound or mixture (Z <= 100) at energies from 1 keV to 100 GeV.

  15. Adaptive segmentation for scientific databases

    NARCIS (Netherlands)

    Ivanova, M.; Kersten, M.L.; Nes, N.

    2008-01-01

    In this paper we explore database segmentation in the context of a column-store DBMS targeted at a scientific database. We present a novel hardware- and scheme-oblivious segmentation algorithm, which learns and adapts to the workload immediately. The approach taken is to capitalize on (intermediate)

  16. A Database of Invariant Rings

    OpenAIRE

    Kemper, Gregor; Körding, Elmar; Malle, Gunter; Matzat, B. Heinrich; Vogel, Denis; Wiese, Gabor

    2001-01-01

    We announce the creation of a database of invariant rings. This database contains a large number of invariant rings of finite groups, mostly in the modular case. It gives information on generators and structural properties of the invariant rings. The main purpose is to provide a tool for researchers in invariant theory.

  17. Content independence in multimedia databases

    NARCIS (Netherlands)

    Vries, A.P. de

    2001-01-01

    A database management system is a general-purpose software system that facilitates the processes of defining, constructing, and manipulating databases for various applications. This article investigates the role of data management in multimedia digital libraries, and its implications for the design

  18. The Danish Cardiac Rehabilitation Database

    DEFF Research Database (Denmark)

    Zwisler, Ann-Dorthe; Rossau, Henriette Knold; Nakano, Anne

    2016-01-01

    AIM OF DATABASE: The Danish Cardiac Rehabilitation Database (DHRD) aims to improve the quality of cardiac rehabilitation (CR) to the benefit of patients with coronary heart disease (CHD). STUDY POPULATION: Hospitalized patients with CHD with stenosis on coronary angiography treated with percutane...

  19. Automatic assistants for database exploration

    NARCIS (Netherlands)

    Sellam, T.H.J.

    2016-01-01

    Data explorers interrogate a database to discover its content. Their aim is to get an overview of the data and discover interesting new facts. They have little to no knowledge of the data, and their requirements are often vague and abstract. How can such users write database queries? This thesis pre

  20. Storing XML Documents in Databases

    NARCIS (Netherlands)

    Schmidt, A.R.; Manegold, S.; Kersten, M.L.; Rivero, L.C.; Doorn, J.H.; Ferraggine, V.E.

    2005-01-01

    The authors introduce concepts for loading large amounts of XML documents into databases where the documents are stored and maintained. The goal is to make XML databases as unobtrusive in multi-tier systems as possible and at the same time provide as many services defined by the XML standards as pos