WorldWideScience

Sample records for common normal database

  1. Normal stress databases in myocardial perfusion scintigraphy – how many subjects do you need?

    DEFF Research Database (Denmark)

    Trägårdh, Elin; Sjöstrand, Karl; Edenbrandt, Lars

    2012-01-01

    Commercial normal stress databases in myocardial perfusion scintigraphy (MPS) commonly consist of 30–40 individuals. The aim of the study was to determine how many subjects are needed. Four normal stress databases were developed using patients who underwent 99mTc MPS: non-corrected images (NC) for male, NC for female, attenuation-corrected images (AC) for male, and AC for female subjects. 126 male and 205 female subjects were included. The normal database was created by alternately computing the mean of all normal subjects and normalizing the subjects with respect to this mean, until convergence. Coefficients of variation (CV) were computed for increasing numbers of included patients in the four different normal stress databases. Normal stress databases with ...
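
    A minimal sketch of the iterative construction the abstract describes (compute the mean over all normal subjects, rescale each subject to that mean, repeat to convergence), with a coefficient-of-variation helper. The array layout, per-subject least-squares scaling, and tolerance are illustrative assumptions, not details taken from the paper:

        import numpy as np

        def build_normal_database(subjects, tol=1e-6, max_iter=100):
            """subjects: (n_subjects, n_voxels) array of stress perfusion counts
            (assumed strictly positive)."""
            x = np.asarray(subjects, dtype=float)
            mean = x.mean(axis=0)
            for _ in range(max_iter):
                # Rescale each subject to best match the current mean
                # (per-subject least-squares scale factor).
                scale = (x @ mean) / (x * x).sum(axis=1)
                x = x * scale[:, None]
                new_mean = x.mean(axis=0)
                if np.linalg.norm(new_mean - mean) < tol * np.linalg.norm(mean):
                    mean = new_mean
                    break
                mean = new_mean
            return mean, x.std(axis=0, ddof=1)

        def mean_cv(subjects, n):
            """Average voxelwise coefficient of variation using the first n subjects."""
            mean, sd = build_normal_database(subjects[:n])
            return float(np.mean(sd / mean))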

  2. Developing Visualization Support System for Teaching/Learning Database Normalization

    Science.gov (United States)

    Folorunso, Olusegun; Akinwale, AdioTaofeek

    2010-01-01

    Purpose: In tertiary institutions, some students find it hard to learn database design theory, in particular database normalization. The purpose of this paper is to develop a visualization tool to give students an interactive hands-on experience of the database normalization process. Design/methodology/approach: The model-view-controller architecture…

  3. Schema Design and Normalization Algorithm for XML Databases Model

    Directory of Open Access Journals (Sweden)

    Samir Abou El-Seoud

    2009-06-01

    In this paper we study the problem of schema design and normalization in the XML database model. We show that, like relational databases, XML documents may contain redundant information, and this redundancy may cause update anomalies. Furthermore, such problems are caused by certain functional dependencies among paths in the document. Based on our research work, in which we presented the functional dependencies and normal forms of XML Schema, we present a decomposition algorithm for converting any XML Schema into a normalized one that satisfies X-BCNF.
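
    The X-BCNF condition parallels ordinary relational BCNF: a functional dependency X -> Y is acceptable only if X is a superkey. A hedged Python illustration of that test via attribute closure (the relation and dependencies are an invented textbook example, not drawn from the paper):

        def closure(attrs, fds):
            """Attribute closure of attrs under functional dependencies fds."""
            result = set(attrs)
            changed = True
            while changed:
                changed = False
                for lhs, rhs in fds:
                    if set(lhs) <= result and not set(rhs) <= result:
                        result |= set(rhs)
                        changed = True
            return result

        def bcnf_violations(relation, fds):
            """Return the FDs X -> Y whose left side is not a superkey."""
            return [(lhs, rhs) for lhs, rhs in fds
                    if not set(relation) <= closure(lhs, fds)]

        # Invented example: course (C), teacher (T), room (R);
        # each string is shorthand for a set of single-letter attributes.
        fds = [("C", "T"), ("TR", "C")]
        print(bcnf_violations("CTR", fds))   # [('C', 'T')]: C is not a superkey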

  4. Accounting for the Benefits of Database Normalization

    Science.gov (United States)

    Wang, Ting J.; Du, Hui; Lehmann, Constance M.

    2010-01-01

    This paper proposes a teaching approach to reinforce accounting students' understanding of the concept of database normalization. Unlike the conceptual approach taken in most AIS textbooks, this approach involves calculations and reconciliations with which accounting students are familiar, because these methods are frequently used in…

  5. Common hyperspectral image database design

    Science.gov (United States)

    Tian, Lixun; Liao, Ningfang; Chai, Ali

    2009-11-01

    This paper introduces a common hyperspectral image database built with a demand-oriented database design method (CHIDB), which brings ground-based spectra, standardized hyperspectral cubes, and spectral analysis together to serve a range of applications. The paper presents an integrated approach to retrieving spectral and spatial patterns from remotely sensed imagery using state-of-the-art data mining and advanced database technologies; data mining concepts and functions were incorporated into CHIDB to make it more suitable for agricultural, geological, and environmental applications. A broad range of data from multiple regions of the electromagnetic spectrum is supported, including ultraviolet, visible, near-infrared, thermal infrared, and fluorescence. CHIDB is based on the .NET Framework and designed with an MVC architecture comprising five main functional modules: data importer/exporter, image/spectrum viewer, data processor, parameter extractor, and on-line analyzer. The original data are stored in SQL Server 2008 for efficient search, query, and update, and advanced spectral-image processing techniques such as parallel processing in C# are used. Finally, an application case in agricultural disease detection is presented.

  6. Towards adapting a normal patient database for SPECT brain perfusion imaging

    International Nuclear Information System (INIS)

    Smith, N D; Soleimani, M; Mitchell, C N; Holmes, R B; Evans, M J; Cade, S C

    2012-01-01

    Single-photon emission computerized tomography (SPECT) is a tool which can be used to image perfusion in the brain. Clinicians can use such images to help diagnose dementias such as Alzheimer's disease. Due to the intrinsic stochasticity in the photon imaging system, some form of statistical comparison of an individual image with a 'normal' patient database gives a clinician additional confidence in interpreting the image. Due to variations between SPECT camera systems, a normal patient database is ideally required for each individual system. However, cost or ethical considerations often prohibit the collection of such a database for each new camera system. Some method of adapting existing normal patient databases to new camera systems would be beneficial. This paper introduces a method which may be regarded as a 'first-pass' attempt, based on 2-norm regularization and a codebook of discrete spatially stationary convolutional kernels. Some preliminary illustrative results are presented, together with discussion of limitations and possible improvements.
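
    A hedged sketch of the 2-norm (Tikhonov) regularization ingredient: treating adaptation as deconvolution with a kernel drawn from a codebook, solved in closed form per spatial frequency. The Gaussian kernel and the value of lam are placeholders; the paper's actual codebook construction is not reproduced here:

        import numpy as np

        def adapt_image(y, kernel_fft, lam=0.1):
            """Tikhonov (2-norm) regularized deconvolution in the Fourier domain:
            x = argmin ||K x - y||^2 + lam ||x||^2, solved per frequency."""
            Y = np.fft.fft2(y)
            K = kernel_fft
            X = np.conj(K) * Y / (np.abs(K) ** 2 + lam)
            return np.real(np.fft.ifft2(X))

        def gaussian_kernel_fft(shape, sigma):
            """Fourier transform of a spatially stationary Gaussian kernel,
            standing in for one codebook entry."""
            fy = np.fft.fftfreq(shape[0])[:, None]
            fx = np.fft.fftfreq(shape[1])[None, :]
            return np.exp(-2 * (np.pi * sigma) ** 2 * (fy ** 2 + fx ** 2))

        # Demo on random data (purely illustrative):
        img = np.random.rand(64, 64)
        K = gaussian_kernel_fft(img.shape, sigma=2.0)
        blurred = np.real(np.fft.ifft2(K * np.fft.fft2(img)))
        restored = adapt_image(blurred, K, lam=0.01)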

  7. Small average differences in attenuation corrected images between men and women in myocardial perfusion scintigraphy: a novel normal stress database

    International Nuclear Information System (INIS)

    Trägårdh, Elin; Sjöstrand, Karl; Jakobsson, David; Edenbrandt, Lars

    2011-01-01

    The American Society of Nuclear Cardiology and the Society of Nuclear Medicine state that incorporation of attenuation-corrected (AC) images in myocardial perfusion scintigraphy (MPS) will improve image quality, interpretive certainty, and diagnostic accuracy. However, commonly used software packages for MPS usually include normal stress databases for non-attenuation-corrected (NC) images but not for AC images. The aim of the study was to develop and compare different normal stress databases for MPS in relation to NC vs. AC images, male vs. female gender, and presence vs. absence of obesity. The principal hypothesis was that differences in mean count values between men and women would be smaller with AC than NC images, thereby allowing for construction and use of a gender-independent AC stress database. Normal stress perfusion databases were developed with data from 126 male and 205 female patients with normal MPS. The following comparisons were performed for all patients and separately for normal weight vs. obese patients: men vs. women for AC; men vs. women for NC; AC vs. NC for men; and AC vs. NC for women. When comparing AC for men vs. women, only minor differences in mean count values were observed, and there were no differences for normal weight vs. obese patients. For all other analyses major differences were found, particularly for the inferior wall. The results support the hypothesis that it is possible to use not only gender-independent but also weight-independent AC stress databases.

  8. [(123)I]FP-CIT ENC-DAT normal database

    DEFF Research Database (Denmark)

    Tossici-Bolt, Livia; Dickson, John C; Sera, Terez

    2017-01-01

    ... quantification methods, BRASS and Southampton, and explores the performance of the striatal phantom calibration in their harmonisation. RESULTS: BRASS and Southampton databases comprising 123 ENC-DAT subjects, from gamma cameras with parallel collimators, were reconstructed using filtered back projection (FBP) and iterative reconstruction OSEM without corrections (IRNC) and compared against the recommended OSEM with corrections for attenuation, scatter and septal penetration (ACSC), before and after applying phantom calibration. Differences between databases were quantified using the percentage difference ... inter-camera variability (-0.2%, p = 0.44). CONCLUSIONS: The ENC-DAT reference values are significantly dependent on the reconstruction and quantification methods, and phantom calibration, while reducing the major part of their differences, is unable to fully harmonize them. Clinical use of any normal database, therefore ...

  9. [(123)I]FP-CIT ENC-DAT normal database

    DEFF Research Database (Denmark)

    Tossici-Bolt, Livia; Dickson, John C; Sera, Terez

    2017-01-01

    BACKGROUND: [(123)I]FP-CIT is a well-established radiotracer for the diagnosis of dopaminergic degenerative disorders. The European Normal Control Database of DaTSCAN (ENC-DAT) of healthy controls has provided age- and gender-specific reference values for the [(123)I]FP-CIT specific binding ratio ... quantification methods, BRASS and Southampton, and explores the performance of the striatal phantom calibration in their harmonisation. RESULTS: BRASS and Southampton databases comprising 123 ENC-DAT subjects, from gamma cameras with parallel collimators, were reconstructed using filtered back projection (FBP) and iterative reconstruction OSEM without corrections (IRNC) and compared against the recommended OSEM with corrections for attenuation, scatter and septal penetration (ACSC), before and after applying phantom calibration. Differences between databases were quantified using the percentage difference ...

  10. HemaExplorer: a database of mRNA expression profiles in normal and malignant haematopoiesis

    DEFF Research Database (Denmark)

    Bagger, Frederik Otzen; Rapin, Nicolas; Theilgaard-Mönch, Kim

    2013-01-01

    The HemaExplorer (http://servers.binf.ku.dk/hemaexplorer) is a curated database of processed mRNA gene expression profiles (GEPs) that provides an easy display of gene expression in haematopoietic cells. HemaExplorer contains GEPs derived from mouse/human haematopoietic stem and progenitor cells as well as from more differentiated cell types. Moreover, data from distinct subtypes of human acute myeloid leukemia are included in the database, allowing researchers to directly compare gene expression of leukemic cells with those of their closest normal counterpart. Normalization and batch correction ... lead to full integrity of the data in the database. HemaExplorer has a comprehensive visualization interface that can make it useful as a daily tool for biologists and cancer researchers to assess the expression patterns of genes encountered in research or literature. HemaExplorer is relevant for all ...

  11. A global multiproxy database for temperature reconstructions of the Common Era

    Science.gov (United States)

    Emile-Geay, Julian; McKay, Nicholas P.; Kaufman, Darrell S.; von Gunten, Lucien; Wang, Jianghao; Anchukaitis, Kevin J.; Abram, Nerilie J.; Addison, Jason A.; Curran, Mark A.J.; Evans, Michael N.; Henley, Benjamin J.; Hao, Zhixin; Martrat, Belen; McGregor, Helen V.; Neukom, Raphael; Pederson, Gregory T.; Stenni, Barbara; Thirumalai, Kaustubh; Werner, Johannes P.; Xu, Chenxi; Divine, Dmitry V.; Dixon, Bronwyn C.; Gergis, Joelle; Mundo, Ignacio A.; Nakatsuka, T.; Phipps, Steven J.; Routson, Cody C.; Steig, Eric J.; Tierney, Jessica E.; Tyler, Jonathan J.; Allen, Kathryn J.; Bertler, Nancy A. N.; Bjorklund, Jesper; Chase, Brian M.; Chen, Min-Te; Cook, Ed; de Jong, Rixt; DeLong, Kristine L.; Dixon, Daniel A.; Ekaykin, Alexey A.; Ersek, Vasile; Filipsson, Helena L.; Francus, Pierre; Freund, Mandy B.; Frezzotti, M.; Gaire, Narayan P.; Gajewski, Konrad; Ge, Quansheng; Goosse, Hugues; Gornostaeva, Anastasia; Grosjean, Martin; Horiuchi, Kazuho; Hormes, Anne; Husum, Katrine; Isaksson, Elisabeth; Kandasamy, Selvaraj; Kawamura, Kenji; Koc, Nalan; Leduc, Guillaume; Linderholm, Hans W.; Lorrey, Andrew M.; Mikhalenko, Vladimir; Mortyn, P. Graham; Motoyama, Hideaki; Moy, Andrew D.; Mulvaney, Robert; Munz, Philipp M.; Nash, David J.; Oerter, Hans; Opel, Thomas; Orsi, Anais J.; Ovchinnikov, Dmitriy V.; Porter, Trevor J.; Roop, Heidi; Saenger, Casey; Sano, Masaki; Sauchyn, David; Saunders, K.M.; Seidenkrantz, Marit-Solveig; Severi, Mirko; Shao, X.; Sicre, Marie-Alexandrine; Sigl, Michael; Sinclair, Kate; St. George, Scott; St. Jacques, Jeannine-Marie; Thamban, Meloth; Thapa, Udya Kuwar; Thomas, E.; Turney, Chris; Uemura, Ryu; Viau, A.E.; Vladimirova, Diana O.; Wahl, Eugene; White, James W. C.; Yu, Z.; Zinke, Jens

    2017-01-01

    Reproducible climate reconstructions of the Common Era (1 CE to present) are key to placing industrial-era warming into the context of natural climatic variability. Here we present a community-sourced database of temperature-sensitive proxy records from the PAGES2k initiative. The database gathers 692 records from 648 locations, including all continental regions and major ocean basins. The records are from trees, ice, sediment, corals, speleothems, documentary evidence, and other archives. They range in length from 50 to 2000 years, with a median of 547 years, while temporal resolution ranges from biweekly to centennial. Nearly half of the proxy time series are significantly correlated with HadCRUT4.2 surface temperature over the period 1850–2014. Global temperature composites show a remarkable degree of coherence between high- and low-resolution archives, with broadly similar patterns across archive types, terrestrial versus marine locations, and screening criteria. The database is suited to investigations of global and regional temperature variability over the Common Era, and is shared in the Linked Paleo Data (LiPD) format, including serializations in Matlab, R and Python.
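
    A hedged sketch of the screening step mentioned above (correlating each proxy record with instrumental temperature over 1850–2014). The data structures are invented, and a real analysis would adjust the significance test for autocorrelation and multiple testing:

        import numpy as np
        from scipy.stats import pearsonr

        def screen_proxies(proxies, instrumental):
            """proxies: dict name -> annual series aligned with `instrumental`
            (NaN where a proxy has no value); instrumental: NaN-free array."""
            keep = {}
            for name, series in proxies.items():
                ok = ~np.isnan(series)
                if ok.sum() < 3:
                    continue  # pearsonr needs more than two points
                r, p = pearsonr(series[ok], instrumental[ok])
                if p < 0.05:
                    keep[name] = r
            return keep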

  12. HemaExplorer: a database of mRNA expression profiles in normal and malignant haematopoiesis

    DEFF Research Database (Denmark)

    Bagger, Frederik Otzen; Rapin, Nicolas; Theilgaard-Mönch, Kim

    2013-01-01

    ... HemaExplorer contains GEPs derived from mouse/human haematopoietic stem and progenitor cells as well as from more differentiated cell types. Moreover, data from distinct subtypes of human acute myeloid leukemia are included in the database, allowing researchers to directly compare gene expression of leukemic cells with those of their closest normal counterpart. Normalization and batch correction ... lead to full integrity of the data in the database. HemaExplorer has a comprehensive visualization interface that can make it useful as a daily tool for biologists and cancer researchers to assess the expression patterns of genes encountered in research or literature. HemaExplorer is relevant for all research within the fields of leukemia, immunology, cell differentiation and the biology of the haematopoietic system.

  13. Towards a common thermodynamic database for speciation models

    International Nuclear Information System (INIS)

    Lee, J. van der; Lomenech, C.

    2004-01-01

    Bio-geochemical speciation models and reactive transport models are reaching an operational stage, allowing simulation of complex dynamic experiments and description of field observations. For decades, the main focus has been on model performance but at present, the availability and reliability of thermodynamic data is the limiting factor of the models. Thermodynamic models applied to real and complex geochemical systems require much more extended thermodynamic databases with many minerals, colloidal phases, humic and fulvic acids, cementitious phases and (dissolved) organic complexing agents. Here we propose a methodological approach to achieve, ultimately, a common, operational database including the reactions and constants of these phases. Provided they are coherent with the general thermodynamic laws, sorption reactions are included as well. We therefore focus on sorption reactions and parameter values associated with specific sorption models. The case of sorption on goethite has been used to illustrate the way the methodology handles the problem of inconsistency and data quality. (orig.)

  14. Contributions to Logical Database Design

    Directory of Open Access Journals (Sweden)

    Vitalie COTELEA

    2012-01-01

    This paper treats the problems arising at the stage of logical database design. It synthesizes the most common inference models for functional dependencies, deals with the problems of building covers for sets of functional dependencies, surveys the normal forms, presents trends regarding normalization algorithms, and gives their time complexity. In addition, it summarizes the best-known key-search algorithms and deals with issues of analysis and testing of relational schemas. It also summarizes and compares different approaches to the recognition of acyclic database schemas.
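
    One of the survey's staple topics, key search, can be sketched as follows: a candidate key is a minimal attribute set whose closure under the functional dependencies covers the whole relation (closure as in the BCNF sketch earlier in this listing). The schema and dependencies below are an invented example:

        from itertools import combinations

        def closure(attrs, fds):
            """Attribute closure of attrs under functional dependencies fds."""
            result = set(attrs)
            changed = True
            while changed:
                changed = False
                for lhs, rhs in fds:
                    if set(lhs) <= result and not set(rhs) <= result:
                        result |= set(rhs)
                        changed = True
            return result

        def candidate_keys(relation, fds):
            """All minimal attribute sets whose closure covers the relation."""
            keys = []
            for r in range(1, len(relation) + 1):
                for combo in combinations(sorted(relation), r):
                    if set(relation) <= closure(combo, fds):
                        # Keep only sets that contain no already-found key.
                        if not any(set(k) <= set(combo) for k in keys):
                            keys.append(combo)
            return keys

        fds = [("A", "B"), ("B", "C")]      # invented dependencies
        print(candidate_keys("ABC", fds))   # [('A',)]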

  15. Insights About Emergency Diesel Generator Failures from the USNRC's Common Cause Failure Database

    International Nuclear Information System (INIS)

    Mosleh, A.; Rasmuson, D.; Marshall, F.; Wierman, T.

    1999-01-01

    The US Nuclear Regulatory Commission has sponsored development of a database of common cause failure events for use in commercial nuclear power plant risk and reliability analyses. This paper presents a summary of the results from analysis of the emergency diesel generator data from the database. The presentation is limited to the overall insights, the design and manufacturing causes, and the instrumentation and control sub-system.

  16. Database development and management

    CERN Document Server

    Chao, Lee

    2006-01-01

    Contents: Introduction to Database Systems (Functions of a Database; Database Management System; Database Components; Database Development Process). Conceptual Design and Data Modeling (Introduction to the Database Design Process; Understanding Business Process; Entity-Relationship Data Model; Representing Business Process with the Entity-Relationship Model). Table Structure and Normalization (Introduction to Tables; Table Normalization). Transforming Data Models to Relational Databases (DBMS Selection; Enforcing Constraints; Creating a Database for a Business Process). Physical Design and Database ...

  17. License - SKIP Stemcell Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    SKIP Stemcell Database: License to Use This Database. Last updated: 2017/03/13. This page specifies the license terms regarding the use of this database and the requirements you must follow in using this database. The license for this database is specified in the Creative Commons Attribution-Share Alike 4.0 International. If you use data from this database, please be sure to attribute this database. The summary of the Creative Commons Attribution-Share Alike 4.0 International is found here. With regard to this database ...

  18. Digital Dental X-ray Database for Caries Screening

    Science.gov (United States)

    Rad, Abdolvahab Ehsani; Rahim, Mohd Shafry Mohd; Rehman, Amjad; Saba, Tanzila

    2016-06-01

    A standard database is an essential requirement for comparing the performance of image analysis techniques; the main issue in dental image analysis is the lack of such an available image database, which is provided in this paper. Periapical dental X-ray images, which are suitable for analysis and approved by many dental experts, were collected. This type of dental radiograph is common and inexpensive, and is normally used for dental disease diagnosis and abnormality detection. The database contains 120 periapical X-ray images covering the upper and lower jaws. This digital dental database is constructed to provide a source for researchers to use and compare image analysis techniques and to improve the performance of each technique.

  19. CT-based attenuation correction and resolution compensation for I-123 IMP brain SPECT normal database: a multicenter phantom study.

    Science.gov (United States)

    Inui, Yoshitaka; Ichihara, Takashi; Uno, Masaki; Ishiguro, Masanobu; Ito, Kengo; Kato, Katsuhiko; Sakuma, Hajime; Okazawa, Hidehiko; Toyama, Hiroshi

    2018-03-19

    Statistical image analysis of brain SPECT images has improved diagnostic accuracy for brain disorders. However, the results of statistical analysis vary between institutions even when they use a common normal database (NDB), due to different intrinsic spatial resolutions or correction methods. The present study aimed to evaluate the correction of spatial resolution differences between equipment, and to examine the differences due to skull bone attenuation, in order to construct a common NDB for use in multicenter settings. The proposed acquisition and processing protocols were those routinely used at each participating center, with additional triple energy window (TEW) scatter correction (SC) and computed tomography (CT) based attenuation correction (CTAC). A multicenter phantom study was conducted on six imaging systems in five centers, with either single photon emission computed tomography (SPECT) or SPECT/CT, and two brain phantoms. The gray/white matter I-123 activity ratio in the brain phantoms was 4, and they were enclosed in either an artificial adult male skull (1300 Hounsfield units, HU), an artificial female skull (850 HU), or an acrylic cover. The cut-off frequency of the Butterworth filters was adjusted so that the spatial resolution was unified to a 17.9 mm full width at half maximum (FWHM), that of the lowest-resolution system. The gray-to-white matter count ratios were measured from SPECT images and compared with the actual activity ratio. In addition, mean, standard deviation and coefficient of variation images were calculated after normalization and anatomical standardization to evaluate the variability of the NDB. The gray-to-white matter count ratio error without SC and attenuation correction (AC) was significantly larger for higher bone densities (p < ...) and was reduced by correction. The proposed protocol showed potential for constructing an appropriate common NDB from SPECT images with SC, AC and spatial resolution compensation.
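
    The resolution-unification step can be illustrated under a common simplifying assumption: if point-spread functions are approximately Gaussian, their widths add in quadrature, so a system at FWHM_sys can be brought to the 17.9 mm target with one extra smoothing pass. This substitutes a Gaussian for the study's Butterworth cut-off tuning and is only a sketch:

        import numpy as np
        from scipy.ndimage import gaussian_filter

        FWHM_TO_SIGMA = 1.0 / (2.0 * np.sqrt(2.0 * np.log(2.0)))

        def match_resolution(img, fwhm_sys_mm, fwhm_target_mm, voxel_mm):
            """Smooth img so its effective resolution reaches fwhm_target_mm,
            assuming Gaussian PSFs that add in quadrature (requires
            fwhm_target_mm >= fwhm_sys_mm)."""
            extra = np.sqrt(fwhm_target_mm**2 - fwhm_sys_mm**2)  # extra FWHM needed
            sigma_vox = extra * FWHM_TO_SIGMA / voxel_mm
            return gaussian_filter(img, sigma_vox)

        # Example: a camera at 14.0 mm FWHM with 2 mm voxels, matched to 17.9 mm:
        # smoothed = match_resolution(img, 14.0, 17.9, 2.0)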

  20. The impact of reconstruction and scanner characterisation on the diagnostic capability of a normal database for [(123)I]FP-CIT SPECT imaging

    DEFF Research Database (Denmark)

    Dickson, John C; Tossici-Bolt, Livia; Sera, Terez

    2017-01-01

    BACKGROUND: The use of a normal database for [(123)I]FP-CIT SPECT imaging has been found to be helpful for cases which are difficult to interpret by visual assessment alone, and to improve reproducibility in scan interpretation. The aim of this study was to assess whether the use of different ... for scan normality using the ENC-DAT normal database obtained in well-documented healthy subjects. Patient and normal data were reconstructed with iterative reconstruction with correction for attenuation, scatter and septal penetration (ACSC), the same reconstruction without corrections (IRNC), and filtered back-projection (FBP), with data quantified using small volume-of-interest (VOI) (BRASS) and large VOI (Southampton) analysis methods. Test performance was assessed with and without system characterisation, using receiver operating characteristic (ROC) analysis for age-independent data and using ...

  1. License - Arabidopsis Phenome Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Arabidopsis Phenome Database: License to Use This Database. Last updated: 2017/02/27. This page specifies the license terms regarding the use of this database and the requirements you must follow in using this database. The license for this database is specified in the Creative Commons Attribution-Share Alike 4.0 International. If you use data from this database, please be sure to attribute this database. The summary of the Creative Commons Attribution-Share Alike 4.0 International is found here. With regard to this database, you are ...

  2. A database of annotated promoters of genes associated with common respiratory and related diseases

    KAUST Repository

    Chowdhary, Rajesh

    2012-07-01

    Many genes have been implicated in the pathogenesis of common respiratory and related diseases (RRDs), yet the underlying mechanisms are largely unknown. Differential gene expression patterns in diseased and healthy individuals suggest that RRDs affect or are affected by modified transcription regulation programs. It is thus crucial to characterize implicated genes in terms of transcriptional regulation. For this purpose, we conducted a promoter analysis of genes associated with 11 common RRDs including allergic rhinitis, asthma, bronchiectasis, bronchiolitis, bronchitis, chronic obstructive pulmonary disease, cystic fibrosis, emphysema, eczema, psoriasis, and urticaria, many of which are thought to be genetically related. The objective of the present study was to obtain deeper insight into the transcriptional regulation of these disease-associated genes by annotating their promoter regions with transcription factors (TFs) and TF binding sites (TFBSs). We discovered many TFs that are significantly enriched in the target disease groups including associations that have been documented in the literature. We also identified a number of putative TFs/TFBSs that appear to be novel. The results of our analysis are provided in an online database that is freely accessible to researchers at http://www.respiratorygenomics.com. Promoter-associated TFBS information and related genomic features, such as histone modification sites, microsatellites, CpG islands, and SNPs, are graphically summarized in the database. Users can compare and contrast underlying mechanisms of specific RRDs relative to candidate genes, TFs, gene ontology terms, micro-RNAs, and biological pathways for the conduct of metaanalyses. This database represents a novel, useful resource for RRD researchers. Copyright © 2012 by the American Thoracic Society.

  3. A database of annotated promoters of genes associated with common respiratory and related diseases

    KAUST Repository

    Chowdhary, Rajesh; Tan, Sinlam; Pavesi, Giulio; Jin, Gg; Dong, Difeng; Mathur, Sameer K.; Burkart, Arthur; Narang, Vipin; Glurich, Ingrid E.; Raby, Benjamin A.; Weiss, Scott T.; Limsoon, Wong; Liu, Jun; Bajic, Vladimir B.

    2012-01-01

    Many genes have been implicated in the pathogenesis of common respiratory and related diseases (RRDs), yet the underlying mechanisms are largely unknown. Differential gene expression patterns in diseased and healthy individuals suggest that RRDs affect or are affected by modified transcription regulation programs. It is thus crucial to characterize implicated genes in terms of transcriptional regulation. For this purpose, we conducted a promoter analysis of genes associated with 11 common RRDs including allergic rhinitis, asthma, bronchiectasis, bronchiolitis, bronchitis, chronic obstructive pulmonary disease, cystic fibrosis, emphysema, eczema, psoriasis, and urticaria, many of which are thought to be genetically related. The objective of the present study was to obtain deeper insight into the transcriptional regulation of these disease-associated genes by annotating their promoter regions with transcription factors (TFs) and TF binding sites (TFBSs). We discovered many TFs that are significantly enriched in the target disease groups including associations that have been documented in the literature. We also identified a number of putative TFs/TFBSs that appear to be novel. The results of our analysis are provided in an online database that is freely accessible to researchers at http://www.respiratorygenomics.com. Promoter-associated TFBS information and related genomic features, such as histone modification sites, microsatellites, CpG islands, and SNPs, are graphically summarized in the database. Users can compare and contrast underlying mechanisms of specific RRDs relative to candidate genes, TFs, gene ontology terms, micro-RNAs, and biological pathways for the conduct of metaanalyses. This database represents a novel, useful resource for RRD researchers. Copyright © 2012 by the American Thoracic Society.

  4. GENISES: A GIS Database for the Yucca Mountain Site Characterization Project

    International Nuclear Information System (INIS)

    Beckett, J.

    1991-01-01

    This paper provides a general description of the Geographic Nodal Information Study and Evaluation System (GENISES) database design. The GENISES database is the Geographic Information System (GIS) component of the Yucca Mountain Site Characterization Project Technical Database (TDB). The GENISES database has been developed and is maintained by EG&G Energy Measurements, Inc., Las Vegas, NV (EG&G/EM). As part of the Yucca Mountain Project (YMP) Site Characterization Technical Data Management System, GENISES provides a repository for geographically oriented technical data. The primary objective of the GENISES database is to support the Yucca Mountain Site Characterization Project with an effective tool for describing, analyzing, and archiving geo-referenced data. The database design provides maximum efficiency in input/output, data analysis, data management and information display. This paper provides the systematic approach, or plan, for the GENISES database design and operation. The paper also discusses the techniques used for data normalization, or the decomposition of complex data structures, as they apply to a GIS database. ARC/INFO and INGRES files are linked or joined by establishing "relate" fields through common attribute names. Thus, through these keys, ARC can access normalized INGRES files, greatly reducing redundancy and the size of the database.
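
    The "relate field" mechanism described above is, in relational terms, an ordinary join through a shared key. A hedged sqlite3 illustration, with invented table and column names standing in for the ARC/INFO and INGRES sides:

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.executescript("""
            -- Spatial features keyed by site_id (stand-in for the ARC/INFO side)
            CREATE TABLE site (site_id INTEGER PRIMARY KEY, lon REAL, lat REAL);
            -- Normalized attribute table (stand-in for the INGRES side)
            CREATE TABLE borehole (borehole_id INTEGER PRIMARY KEY,
                                   site_id INTEGER REFERENCES site(site_id),
                                   depth_m REAL);
            INSERT INTO site VALUES (1, -116.46, 36.85);
            INSERT INTO borehole VALUES (10, 1, 305.0);
        """)
        # Join through the common attribute name; coordinates are stored once,
        # not repeated on every borehole row.
        for row in con.execute("""SELECT b.borehole_id, s.lon, s.lat, b.depth_m
                                  FROM borehole b JOIN site s USING (site_id)"""):
            print(row)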

  5. Identification of Common Cause Initiating Events Using the NEA IRS Database. Rev 0

    International Nuclear Information System (INIS)

    Kulig, Maciej; Tomic, Bojan; Nyman, R alph

    2007-02-01

    The study presented in this report is a continuation of work conducted for SKI in 1998 on the identification of Common Cause Initiators (CCIs) based on operational events documented in the NEA Incident Reporting System (IRS). Based on the new operational experience accumulated in the IRS in the period 1995-2006, the project focused on the identification of new CCI events. An attempt was also made to compare the observations made in the earlier study with the results of the current work. The earlier study and the current project cover the events reported in the IRS database with an incident date in the period from 01.01.1980 to 15.11.2006. The review of the NEA IRS database conducted within this project generated a sample of events that provides insights regarding CCIs. This list includes a certain number of 'real' CCIs but also potential CCIs and other events that provide insights on potential dependency mechanisms. Relevant characteristics of the events were analysed in the context of CCIs. This evaluation was intended to investigate the importance of the CCI issue and also to provide technical insights that could help in modelling CCIs in PSAs. The analysis of operational events provided useful engineering insights regarding the potential dependencies that may originate CCIs. Some indications were also obtained on the plant SSCs/areas that are susceptible to common cause failures. Direct interrelations between the accident mitigation systems through common support systems, which can originate a CCI, represent a dominant dependency mechanism involved in the CCI events. The most important contributors of this type are electrical power supply systems and I-and-C systems. Area-related events (fire, flood, water spray), external hazards (lightning, high wind or cold weather) and transients (water hammer, electrical transients both internal and external) have also been found to be important sources of dependency that may originate CCIs.

  6. Identification of Common Cause Initiating Events Using the NEA IRS Database. Rev 0

    Energy Technology Data Exchange (ETDEWEB)

    Kulig, Maciej; Tomic, Bojan (Enconet Consulting, Vienna (Austria)); Nyman, Ralph (Swedish Nuclear Power Inspectorate, Stockholm (Sweden))

    2007-02-15

    The study presented in this report is a continuation of work conducted for SKI in 1998 on the identification of Common Cause Initiators (CCIs) based on operational events documented in the NEA Incident Reporting System (IRS). Based on the new operational experience accumulated in the IRS in the period 1995-2006, the project focused on the identification of new CCI events. An attempt was also made to compare the observations made in the earlier study with the results of the current work. The earlier study and the current project cover the events reported in the IRS database with an incident date in the period from 01.01.1980 to 15.11.2006. The review of the NEA IRS database conducted within this project generated a sample of events that provides insights regarding the Common Cause Initiators (CCIs). This list includes a certain number of 'real' CCIs but also potential CCIs and other events that provide insights on potential dependency mechanisms. Relevant characteristics of the events were analysed in the context of CCIs. This evaluation was intended to investigate the importance of the CCI issue and also to provide technical insights that could help in modelling CCIs in PSAs. The analysis of operational events provided useful engineering insights regarding the potential dependencies that may originate CCIs. Some indications were also obtained on the plant SSCs/areas that are susceptible to common cause failures. Direct interrelations between the accident mitigation systems through common support systems, which can originate a CCI, represent a dominant dependency mechanism involved in the CCI events. The most important contributors of this type are electrical power supply systems and I-and-C systems. Area-related events (fire, flood, water spray), external hazards (lightning, high wind or cold weather) and transients (water hammer, electrical transients both internal and external) have also been found to be important sources of dependency that may originate CCIs.

  7. Comparison of conventional and cadmium-zinc-telluride single-photon emission computed tomography for analysis of thallium-201 myocardial perfusion imaging: an exploratory study in normal databases for different ethnicities.

    Science.gov (United States)

    Ishihara, Masaru; Onoguchi, Masahisa; Taniguchi, Yasuyo; Shibutani, Takayuki

    2017-12-01

    The aim of this study was to clarify the differences in thallium-201-chloride (thallium-201) myocardial perfusion imaging (MPI) scans evaluated by conventional Anger-type single-photon emission computed tomography (conventional SPECT) versus cadmium-zinc-telluride SPECT (CZT SPECT) imaging, in normal databases for different ethnic groups. MPI scans from 81 consecutive Japanese patients were examined using conventional SPECT and CZT SPECT and analyzed with the pre-installed quantitative perfusion SPECT (QPS) software. We compared the summed stress score (SSS), summed rest score (SRS), and summed difference score (SDS) for the two SPECT devices. As a normal MPI reference, we usually use the Japanese databases for MPI created by the Japanese Society of Nuclear Medicine, which can be used with conventional SPECT but not with CZT SPECT. In this study, we used new Japanese normal databases constructed in our institution to compare conventional and CZT SPECT. Compared with conventional SPECT, CZT SPECT showed lower SSS (p < 0.001), SRS (p = 0.001), and SDS (p = 0.189) using the pre-installed SPECT database. In contrast, CZT SPECT showed no significant difference from conventional SPECT in QPS analysis using the normal databases from our institution. Myocardial perfusion analyses by CZT SPECT should be evaluated using normal databases based on the ethnic group being evaluated.
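
    For readers unfamiliar with the QPS summary scores compared here: each myocardial segment (commonly 17) is graded 0 (normal) to 4 (absent uptake) against the normal database, and the per-segment grades are summed at stress and rest. A minimal sketch with invented grades:

        def summed_scores(stress_segments, rest_segments):
            """stress/rest segment grades: sequences of 0-4 integers per segment."""
            sss = sum(stress_segments)     # summed stress score
            srs = sum(rest_segments)       # summed rest score
            sds = sss - srs                # summed difference score (ischemic burden)
            return sss, srs, sds

        # 17-segment model, invented grades:
        print(summed_scores([2, 1, 0] + [0] * 14, [1, 0, 0] + [0] * 14))  # (3, 1, 2)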

  8. Evolution of a Structure-Searchable Database into a Prototype for a High-Fidelity SmartPhone App for 62 Common Pesticides Used in Delaware.

    Science.gov (United States)

    D'Souza, Malcolm J; Barile, Benjamin; Givens, Aaron F

    2015-05-01

    Synthetic pesticides are widely used in the modern world for human benefit. They are usually classified according to their intended pest target. In Delaware (DE), approximately 42 percent of the arable land is used for agriculture. In order to manage insect and herbaceous pests (such as insects, weeds, nematodes, and rodents), pesticides are used extensively to biologically control the pest's normal life stages. In this undergraduate project, we first created a usable relational database containing 62 agricultural pesticides that are common in Delaware. Chemically pertinent quantitative and qualitative information was first stored in Bio-Rad's KnowItAll® Informatics System. Next, we extracted the data out of the KnowItAll® system and created additional sections in a Microsoft® Excel spreadsheet detailing pesticide uses and safety and handling information. Finally, in an effort to promote good agricultural practices, to increase efficiency in business decisions, and to make pesticide data globally accessible, we developed a mobile application for smartphones that displays the pesticide database, built with Appery.io™, a cloud-based HyperText Markup Language (HTML5), jQuery Mobile and hybrid mobile app builder.

  9. Clarifying Normalization

    Science.gov (United States)

    Carpenter, Donald A.

    2008-01-01

    Confusion exists among database textbooks as to the goal of normalization as well as to which normal form a designer should aspire. This article discusses such discrepancies with the intention of simplifying normalization for both teacher and student. This author's industry and classroom experiences indicate such simplification yields quicker…
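
    The redundancy that normalization removes can be shown in a few lines: an unnormalized table repeats a customer fact on every order row, while the normalized design stores it once. All names and values below are invented for illustration:

        # One flat table, repeating the customer's city on every order row
        # (update anomaly risk: change it in one row and miss the others):
        orders_flat = [
            (1, "C1", "Acme", "Dover", "2024-01-05"),
            (2, "C1", "Acme", "Dover", "2024-02-11"),    # 'Dover' duplicated
            (3, "C2", "Birch", "Newark", "2024-02-12"),
        ]

        # Third normal form: customer facts live in one place; orders keep the key.
        customers = {"C1": ("Acme", "Dover"), "C2": ("Birch", "Newark")}
        orders = [(1, "C1", "2024-01-05"),
                  (2, "C1", "2024-02-11"),
                  (3, "C2", "2024-02-12")]

        # Changing a city is now a single update instead of one per order row:
        customers["C1"] = ("Acme", "Wilmington")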

  10. Physical database design using Oracle

    CERN Document Server

    Burleson, Donald K

    2004-01-01

    Contents: Introduction to Oracle Physical Design (Preface; Relational Databases and Physical Design; Systems Analysis and Physical Database Design; Introduction to Logical Database Design; Entity/Relation Modeling; Bridging between Logical and Physical Models; Physical Design Requirements Validation). Physical Entity Design for Oracle (Data Relationships and Physical Design; Massive De-Normalization: STAR Schema Design; Designing Class Hierarchies; Materialized Views and De-Normalization; Referential Integrity; Conclusion). Oracle Hardware Design (Planning the Server Environment; Designing the Network Infrastructure for Oracle; Oracle Netw ...)

  11. Cooperation for Common Use of SEE Astronomical Database as a Regional Virtual Observatory in Different Scientific Projects

    Science.gov (United States)

    Pinigin, Gennady; Protsyuk, Yuri; Shulga, Alexander

    Scientific collaboration and cooperative research among South-Eastern European (SEE) observatories have expanded recently, and the creation of a common database serving as a regional virtual observatory is very desirable. The creation of an astronomical information resource with interactive access to databases and telescopes, based on a general astronomical database of the SEE countries, is presented. This resource may be connected with the European network. A short description of the NAO database is presented. The total amount of NAO information is about 90 GB; that obtained from other sources is about 15 GB. The mean daily volume of new astronomical information produced with the NAO CCD instruments ranges from 300 MB up to 2 GB, depending on the purposes and conditions of observations. The majority of observational data is stored in FITS format. The possibility of using the VOTable format for displaying these data on the Internet is studied. Activities on the development and further refinement of storage, exchange and data processing standards are ongoing.

  12. Application of the newly developed Japanese adenosine normal database for adenosine stress myocardial scintigraphy.

    Science.gov (United States)

    Harata, Shingo; Isobe, Satoshi; Morishima, Itsuro; Suzuki, Susumu; Tsuboi, Hideyuki; Sone, Takahito; Ishii, Hideki; Murohara, Toyoaki

    2015-10-01

    The currently available Japanese normal database (NDB) for stress myocardial perfusion scintigraphy recommended by the Japanese Society of Nuclear Medicine (JSNM-NDB) is created from data from exercise tests. The newly developed adenosine normal database (ADS-NDB; Kanazawa University) remains to be validated for patients undergoing adenosine stress testing. We tested whether the diagnostic accuracy of the adenosine stress test is improved by the use of ADS-NDB. Of 233 consecutive patients undergoing (99m)Tc-MIBI adenosine stress testing, 112 patients were tested. The stress/rest myocardial (99m)Tc-MIBI single-photon emission computed tomography (SPECT) images were analyzed with AutoQUANT 7.2 using both ADS-NDB and JSNM-NDB. The summed stress score (SSS) and summed difference score (SDS) were calculated. Agreement in post-stress defect severity between ADS-NDB and JSNM-NDB was assessed using a weighted kappa statistic. In all patients, mean SSSs for the whole myocardium and for the right coronary artery (RCA), left anterior descending (LAD), and left circumflex (LCx) territories were significantly lower with ADS-NDB than with JSNM-NDB. Mean SDSs in the whole, RCA, and LAD territories were significantly lower with ADS-NDB than with JSNM-NDB. In 28 patients with significant coronary stenosis, the mean SSS in the RCA territory was significantly lower with ADS-NDB than with JSNM-NDB. In 84 patients without ischemia, both mean SSSs and SDSs in the whole, RCA, LAD, and LCx territories were significantly lower with ADS-NDB than with JSNM-NDB. Weighted kappa values for all patients, patients with significant stenosis, and patients without ischemia were 0.89, 0.83, and 0.92, respectively. Differences were observed between results from ADS-NDB and JSNM-NDB. The diagnostic accuracy of adenosine stress myocardial perfusion scintigraphy may be improved by reducing false-positive results.
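
    The agreement statistic used here, a weighted kappa over ordinal defect-severity categories, can be reproduced with scikit-learn; the two grade vectors below are invented stand-ins for the ADS-NDB and JSNM-NDB readings:

        from sklearn.metrics import cohen_kappa_score

        # Defect severity per patient (0 = normal ... 4 = severe), invented data.
        severity_adsndb = [0, 1, 2, 0, 3, 1, 0, 2]
        severity_jsnmndb = [0, 1, 3, 0, 3, 2, 0, 2]

        kappa = cohen_kappa_score(severity_adsndb, severity_jsnmndb, weights="linear")
        print(f"weighted kappa = {kappa:.2f}")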

  13. The Government Finance Database: A Common Resource for Quantitative Research in Public Financial Analysis.

    Science.gov (United States)

    Pierson, Kawika; Hand, Michael L; Thompson, Fred

    2015-01-01

    Quantitative public financial management research focused on local governments is limited by the absence of a common database for empirical analysis. While the U.S. Census Bureau distributes government finance data that some scholars have utilized, the arduous process of collecting, interpreting, and organizing the data has made its adoption prohibitive and inconsistent. In this article we offer a single, coherent resource that contains all of the government financial data from 1967 to 2012, uses easy-to-understand natural-language variable names, and will be extended when new data are available.

  14. Using a Semi-Realistic Database to Support a Database Course

    Science.gov (United States)

    Yue, Kwok-Bun

    2013-01-01

    A common problem for university relational database courses is constructing effective databases for instruction and assignments. Highly simplified "toy" databases are easily available for teaching, learning, and practicing. However, they do not reflect the complexity and practical considerations that students encounter in real-world…

  15. Updated US and Canadian normalization factors for TRACI 2.1

    DEFF Research Database (Denmark)

    Ryberg, Morten; Vieira, Marisa D. M.; Zgola, Melissa

    2014-01-01

    When LCA practitioners perform LCAs, interpretation of the results can be difficult without a reference point against which to benchmark them. Hence, normalization factors are important for relating results to a common reference. The main purpose of this paper was to update the normalization factors for the US and US-Canadian regions. The normalization factors were used for highlighting the most contributing substances, thereby enabling practitioners to put more focus on important substances when compiling the inventory, as well as providing them with normalization factors reflecting the actual situation. Normalization factors were calculated using characterization factors from the TRACI 2.1 LCIA model. The inventory was based on US databases on emissions of substances. The Canadian inventory was based on a previous inventory with 2005 as reference; in this inventory the most significant ...
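
    Normalization in life cycle impact assessment is a per-category division of the characterized impact by a reference value, N_i = I_i / NF_i. A hedged sketch with placeholder numbers (not the published TRACI 2.1 factors):

        # Characterized impacts of a product system (invented values).
        impacts = {"climate change": 420.0,     # kg CO2-eq
                   "acidification": 1.3}        # kg SO2-eq

        # Normalization factors: reference-region impact per person per year
        # (placeholder magnitudes only).
        nf = {"climate change": 24000.0, "acidification": 91.0}

        normalized = {cat: impacts[cat] / nf[cat] for cat in impacts}
        print(normalized)   # person-year equivalents per impact category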

  16. BioWarehouse: a bioinformatics database warehouse toolkit.

    Science.gov (United States)

    Lee, Thomas J; Pouliot, Yannick; Wagner, Valerie; Gupta, Priyanka; Stringer-Calvert, David W J; Tenenbaum, Jessica D; Karp, Peter D

    2006-03-23

    This article addresses the problem of interoperation of heterogeneous bioinformatics databases. We introduce BioWarehouse, an open source toolkit for constructing bioinformatics database warehouses using the MySQL and Oracle relational database managers. BioWarehouse integrates its component databases into a common representational framework within a single database management system, thus enabling multi-database queries using the Structured Query Language (SQL) but also facilitating a variety of database integration tasks such as comparative analysis and data mining. BioWarehouse currently supports the integration of a pathway-centric set of databases including ENZYME, KEGG, and BioCyc, and in addition the UniProt, GenBank, NCBI Taxonomy, and CMR databases, and the Gene Ontology. Loader tools, written in the C and JAVA languages, parse and load these databases into a relational database schema. The loaders also apply a degree of semantic normalization to their respective source data, decreasing semantic heterogeneity. The schema supports the following bioinformatics datatypes: chemical compounds, biochemical reactions, metabolic pathways, proteins, genes, nucleic acid sequences, features on protein and nucleic-acid sequences, organisms, organism taxonomies, and controlled vocabularies. As an application example, we applied BioWarehouse to determine the fraction of biochemically characterized enzyme activities for which no sequences exist in the public sequence databases. The answer is that no sequence exists for 36% of enzyme activities for which EC numbers have been assigned. These gaps in sequence data significantly limit the accuracy of genome annotation and metabolic pathway prediction, and are a barrier for metabolic engineering. Complex queries of this type provide examples of the value of the data warehousing approach to bioinformatics research. BioWarehouse embodies significant progress on the database integration problem for bioinformatics.
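
    The enzyme-activity gap analysis described above is a good example of the multi-database SQL queries the warehouse approach enables. A hedged sqlite3 miniature with invented tables standing in for the BioWarehouse schema:

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.executescript("""
            CREATE TABLE protein (protein_id INT, ec_number TEXT);
            CREATE TABLE dna_sequence (sequence_id INT, protein_id INT);
            INSERT INTO protein VALUES (1, '1.1.1.1'), (2, '2.7.1.1'), (3, '2.7.1.1');
            INSERT INTO dna_sequence VALUES (100, 1);
        """)
        # EC numbers for which no sequence exists in any integrated database:
        rows = con.execute("""
            SELECT p.ec_number
            FROM protein p LEFT JOIN dna_sequence s ON s.protein_id = p.protein_id
            GROUP BY p.ec_number
            HAVING COUNT(s.sequence_id) = 0
        """).fetchall()
        print(rows)   # [('2.7.1.1',)]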

  17. Discrimination of Dendrobium officinale and Its Common Adulterants by Combination of Normal Light and Fluorescence Microscopy

    Directory of Open Access Journals (Sweden)

    Chu Chu

    2014-03-01

    The stems of Dendrobium officinale Kimura et Migo, named Tie-pi-shi-hu, are among the most endangered and precious species in China. Because of its various pharmacodynamic effects, D. officinale is widely recognized as a high-quality health food in China and other countries in south and south-east Asia. With rising interest in D. officinale, its products command a high price due to a limited supply. This high price has led to the proliferation of adulterants in the market. To ensure the safe use of D. officinale, a fast and convenient method combining normal light and fluorescence microscopy was applied in the present study to distinguish D. officinale from three commonly used adulterants: Zi-pi-shi-hu (D. devonianum), Shui-cao-shi-hu (D. aphyllum), and Guang-jie-shi-hu (D. gratiosissimum). The results demonstrated that D. officinale can be identified by the characteristic "two hat-shaped" vascular bundle sheath observed under fluorescence microscopy and by the distribution of raphides under normal light microscopy. The other three adulterants can be discriminated by differences in their vascular bundles and the distribution of raphides under normal light microscopy. This work indicates that the combination of normal light and fluorescence microscopy is a fast and efficient technique for scientifically distinguishing D. officinale from commonly confused species.

  18. Development of Web-Based Common Cause Failure (CCF) Database Module for Nuclear Power Plants

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Hyun-Gyo; Hwang, Seok-Won; Shin, Tae-young [Korea Hydro and Nuclear Power Co. Ltd., Daejeon (Korea, Republic of)

    2015-05-15

    Probabilistic safety assessment (PSA) has been used to identify risk vulnerabilities and derive safety improvement measures from the construction to operation stages of nuclear power plants. In addition, risk insights from PSA can be applied to improve the designs and operation requirements of plants. However, the reliability analysis methods for quantitative PSA evaluation have inherent uncertainties, and they may create distorted risk profiles because of differences among PSA models, plant designs, and operation status. Therefore, it is important to ensure the quality of the PSA model so that analysts can identify design vulnerabilities and utilize risk information. In particular, common cause failure (CCF) has been pointed out as a major source of uncertainty in PSA analysis methods and data, because CCF has a large influence on PSA results. The Organisation for Economic Co-operation and Development / Nuclear Energy Agency (OECD/NEA) has implemented the International Common Cause Failure Data Exchange (ICDE) project for CCF quality assurance through the development of detailed analysis methodologies and data sharing. However, Korea Hydro and Nuclear Power Company (KHNP) does not yet have a basis for data gathering and analysis for CCF studies. In the case of methodology, Alpha Factor parameter estimation, which can analyze uncertainties and estimate an interface factor (impact vector) with ease, is ready to be applied rather than the Multiple Greek Letter (MGL) method. This article summarizes the development of a plant-specific CCF database (DB) module covering raw data collection and an analysis procedure based on the CCF parameter calculation method of ICDE. Although the portion of the PSA model affected by CCF is quite large, development efforts for tools to collect and analyze the data have been insufficient. Currently, KHNP intends to improve PSA quality and ensure CCF data reliability by ...
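
    A hedged sketch of the alpha-factor estimation mentioned in the abstract: for a common cause group of m components, alpha_k is estimated as the fraction of failure events involving exactly k components. The counts are invented, and the ICDE impact-vector weighting is omitted:

        def alpha_factors(n_k):
            """n_k[k-1] = number of events failing exactly k components;
            returns the point estimates alpha_k = n_k / sum(n_j)."""
            total = sum(n_k)
            return [nk / total for nk in n_k]

        # Invented counts for a group of m = 4 emergency diesel generators:
        n_k = [120, 6, 3, 1]   # 120 single failures, 6 double, 3 triple, 1 quadruple
        alphas = alpha_factors(n_k)
        print([f"alpha_{k + 1} = {a:.3f}" for k, a in enumerate(alphas)])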

  19. Development of Web-Based Common Cause Failure (CCF) Database Module for Nuclear Power Plants

    International Nuclear Information System (INIS)

    Lee, Hyun-Gyo; Hwang, Seok-Won; Shin, Tae-young

    2015-01-01

    Probabilistic safety assessment (PSA) has been used to identify risk vulnerabilities and derive safety improvement measures from the construction to operation stages of nuclear power plants. In addition, risk insights from PSA can be applied to improve the designs and operation requirements of plants. However, the reliability analysis methods for quantitative PSA evaluation have inherent uncertainties, and they may create distorted risk profiles because of differences among PSA models, plant designs, and operation status. Therefore, it is important to ensure the quality of the PSA model so that analysts can identify design vulnerabilities and utilize risk information. In particular, common cause failure (CCF) has been pointed out as a major source of uncertainty in PSA analysis methods and data, because CCF has a large influence on PSA results. The Organisation for Economic Co-operation and Development / Nuclear Energy Agency (OECD/NEA) has implemented the International Common Cause Failure Data Exchange (ICDE) project for CCF quality assurance through the development of detailed analysis methodologies and data sharing. However, Korea Hydro and Nuclear Power Company (KHNP) does not yet have a basis for data gathering and analysis for CCF studies. In the case of methodology, Alpha Factor parameter estimation, which can analyze uncertainties and estimate an interface factor (impact vector) with ease, is ready to be applied rather than the Multiple Greek Letter (MGL) method. This article summarizes the development of a plant-specific CCF database (DB) module covering raw data collection and an analysis procedure based on the CCF parameter calculation method of ICDE. Although the portion of the PSA model affected by CCF is quite large, development efforts for tools to collect and analyze the data have been insufficient. Currently, KHNP intends to improve PSA quality and ensure CCF data reliability by ...

  20. Teaching Case: Adapting the Access Northwind Database to Support a Database Course

    Science.gov (United States)

    Dyer, John N.; Rogers, Camille

    2015-01-01

    A common problem encountered when teaching database courses is that few large illustrative databases exist to support teaching and learning. Most database textbooks have small "toy" databases that are chapter objective specific, and thus do not support application over the complete domain of design, implementation and management concepts…

  1. Reference Clinical Database for Fixation Stability Metrics in Normal Subjects Measured with the MAIA Microperimeter.

    Science.gov (United States)

    Morales, Marco U; Saker, Saker; Wilde, Craig; Pellizzari, Carlo; Pallikaris, Aristophanes; Notaroberto, Neil; Rubinstein, Martin; Rui, Chiara; Limoli, Paolo; Smolek, Michael K; Amoaku, Winfried M

    2016-11-01

    The purpose of this study was to establish a normal reference database for fixation stability measured with the bivariate contour ellipse area (BCEA) in the Macular Integrity Assessment (MAIA) microperimeter. Subjects were 358 healthy volunteers who underwent the MAIA examination. Fixation stability was assessed using two BCEA fixation indices (63% and 95% proportional values) and the percentage of fixation points within 1° and 2° from the fovea (P1 and P2). Statistical analysis was performed with linear regression and Pearson's product-moment correlation coefficient. Average areas of 0.80 deg² (min = 0.03, max = 3.90, SD = 0.68) for the index BCEA@63% and 2.40 deg² (min = 0.20, max = 11.70, SD = 2.04) for the index BCEA@95% were found. The average values of P1 and P2 were 95% (min = 76, max = 100, SD = 5.31) and 99% (min = 91, max = 100, SD = 1.42), respectively. Pearson's product-moment test showed an almost perfect correlation, r = 0.999, between BCEA@63% and BCEA@95%. Index P1 showed a very strong correlation with BCEA@63%, r = -0.924, as well as with BCEA@95%, r = -0.925. Index P2 demonstrated a slightly lower correlation with both BCEA@63% and BCEA@95%, r = -0.874 and -0.875, respectively. The single BCEA@95% parameter may be taken as accurately reporting fixation stability and serves as a reference for normal subjects, with a cutoff area of 2.40 ± 2.04 deg², in the MAIA microperimeter. Fixation stability can be measured with different indices; this study establishes reference fixation values for the MAIA using a single fixation index.
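
    The two BCEA indices follow the standard bivariate-normal formula BCEA(P) = 2 k pi sigma_H sigma_V sqrt(1 - rho^2) with k = -ln(1 - P), so k is about 1 at 63% and about 3 at 95%; the two areas differ by a fixed factor, consistent with the near-perfect correlation reported. A sketch from raw fixation samples (invented data; the MAIA's internal computation may differ):

        import numpy as np

        def bcea(x_deg, y_deg, p=0.95):
            """Bivariate contour ellipse area (deg^2) covering proportion p of
            fixation points, assuming a bivariate normal distribution."""
            k = -np.log(1.0 - p)
            sx, sy = np.std(x_deg, ddof=1), np.std(y_deg, ddof=1)
            rho = np.corrcoef(x_deg, y_deg)[0, 1]
            return 2.0 * k * np.pi * sx * sy * np.sqrt(1.0 - rho**2)

        rng = np.random.default_rng(0)
        x, y = rng.normal(0, 0.30, 500), rng.normal(0, 0.35, 500)  # invented samples
        print(bcea(x, y, 0.63), bcea(x, y, 0.95))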

  2. Database for environmental monitoring in nuclear facilities

    International Nuclear Information System (INIS)

    Raceanu, Mircea; Varlam, Carmen; Iliescu, Mariana; Enache, Adrian; Faurescu, Ionut

    2006-01-01

    To ensure that an assessment can be made of the impact of nuclear facilities on the local environment, a program of environmental monitoring must be established well before commissioning of a nuclear facility. An enormous amount of data must be stored and correlated, starting with location and meteorology, sample characterization from water to different kinds of food, and radioactivity and isotopic measurements (e.g., for C-14 determination, C-13 isotopic correction is a must). Data modelling is a well-known mechanism for describing data structures at a high level of abstraction. Such models are often used to automatically create database structures and to generate the code structures used to access the databases. This has the disadvantage of losing data constraints that might be specified in data models for data checking. An embodiment of the present system includes a computer-readable memory for storing a definitional data table that defines variable symbols representing the corresponding measurable physical quantities. Developing a database system implies setting up well-established rules for how the data should be stored and accessed, commonly called relational database theory. This consists of guidelines regarding issues such as how to avoid duplicating data using the technique called normalization, and how to identify the unique identifier for a database record. (authors)

  3. BioWarehouse: a bioinformatics database warehouse toolkit

    Directory of Open Access Journals (Sweden)

    Stringer-Calvert David WJ

    2006-03-01

    Full Text Available Abstract Background This article addresses the problem of interoperation of heterogeneous bioinformatics databases. Results We introduce BioWarehouse, an open source toolkit for constructing bioinformatics database warehouses using the MySQL and Oracle relational database managers. BioWarehouse integrates its component databases into a common representational framework within a single database management system, thus enabling multi-database queries using the Structured Query Language (SQL), but also facilitating a variety of database integration tasks such as comparative analysis and data mining. BioWarehouse currently supports the integration of a pathway-centric set of databases including ENZYME, KEGG, and BioCyc, and in addition the UniProt, GenBank, NCBI Taxonomy, and CMR databases, and the Gene Ontology. Loader tools, written in the C and Java languages, parse and load these databases into a relational database schema. The loaders also apply a degree of semantic normalization to their respective source data, decreasing semantic heterogeneity. The schema supports the following bioinformatics datatypes: chemical compounds, biochemical reactions, metabolic pathways, proteins, genes, nucleic acid sequences, features on protein and nucleic-acid sequences, organisms, organism taxonomies, and controlled vocabularies. As an application example, we applied BioWarehouse to determine the fraction of biochemically characterized enzyme activities for which no sequences exist in the public sequence databases. The answer is that no sequence exists for 36% of enzyme activities for which EC numbers have been assigned. These gaps in sequence data significantly limit the accuracy of genome annotation and metabolic pathway prediction, and are a barrier for metabolic engineering. Complex queries of this type provide examples of the value of the data warehousing approach to bioinformatics research. Conclusion BioWarehouse embodies significant progress on the database integration problem for bioinformatics.
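
    The enzyme-coverage example gives a feel for the kind of multi-database SQL query a warehouse enables. The sketch below runs against an invented two-table stand-in; the real BioWarehouse schema is far richer:

    ```python
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
    CREATE TABLE enzyme (enzyme_id INTEGER PRIMARY KEY, ec_number TEXT);
    CREATE TABLE protein_sequence (seq_id INTEGER PRIMARY KEY,
                                   enzyme_id INTEGER REFERENCES enzyme);
    """)
    con.execute("INSERT INTO enzyme VALUES (1, '1.1.1.1'), (2, '2.7.1.1')")
    con.execute("INSERT INTO protein_sequence VALUES (1, 1)")

    # EC-numbered enzyme activities with no linked sequence
    missing = con.execute("""
        SELECT COUNT(*) FROM enzyme e
        WHERE e.ec_number IS NOT NULL
          AND NOT EXISTS (SELECT 1 FROM protein_sequence s
                          WHERE s.enzyme_id = e.enzyme_id)
    """).fetchone()[0]
    print(missing)   # -> 1
    ```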

  4. License - Yeast Interacting Proteins Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available: License to Use This Database. Last updated: 2010/02/15. You may use this database in compliance with the Standard License and the Additional License described below. The Standard License specifies the license terms regarding the use of this database and the requirements you must follow in using it; the Additional License specifies terms that apply in addition to the Standard License. The Standard License for this database is the Creative Commons Attribution-Share Alike 2.1 Japan. If you use data from this database, please be sure to attribute this database.

  5. Iterative closest normal point for 3D face recognition.

    Science.gov (United States)

    Mohammadzade, Hoda; Hatzinakos, Dimitrios

    2013-02-01

    The common approach to 3D face recognition is to register a probe face to each of the gallery faces and then calculate the sum of the distances between their points. This approach is computationally expensive and sensitive to facial expression variation. In this paper, we introduce the iterative closest normal point method for finding the corresponding points between a generic reference face and every input face. The proposed correspondence-finding method samples a set of points for each face, denoted as the closest normal points. These points are effectively aligned across all faces, enabling effective application of discriminant analysis methods for 3D face recognition. As a result, the expression variation problem is addressed by minimizing the within-class variability of the face samples while maximizing the between-class variability. As an important conclusion, we show that the surface normal vectors of the face at the sampled points contain more discriminatory information than the coordinates of the points. We have performed comprehensive experiments on the Face Recognition Grand Challenge database, which is presently the largest available 3D face database. We have achieved verification rates of 99.6 and 99.2 percent at a false acceptance rate of 0.1 percent for the all-versus-all and ROC III experiments, respectively, which, to the best of our knowledge, correspond to error rates seven and four times lower than those of the best existing methods on this database.

  6. Comparison of SSS and SRS calculated from normal databases provided by QPS and 4D-MSPECT manufacturers and from identical institutional normals

    International Nuclear Information System (INIS)

    Knollmann, Daniela; Knebel, Ingrid; Gebhard, Michael; Krohn, Thomas; Buell, Ulrich; Schaefer, Wolfgang M.; Koch, Karl-Christian

    2008-01-01

    There is proven evidence for the importance of myocardial perfusion single-photon emission computed tomography (SPECT) with computerised determination of summed stress and rest scores (SSS/SRS) for the diagnosis of coronary artery disease (CAD). SSS and SRS can thereby be calculated semi-quantitatively using a 20-segment model by comparing tracer uptake with values from normal databases (NDB). Four severity degrees for SSS and SRS are normally used: <4, 4-8, 9-13 and >=14. Manufacturers' NDBs (M-NDBs) often do not fit the institutional (I) settings. This study therefore compared SSS and SRS obtained with the algorithms Quantitative Perfusion SPECT (QPS) and 4D-MSPECT using M-NDBs and I-NDBs. I-NDBs were obtained using QPS and 4D-MSPECT from exercise stress data (450 MBq 99mTc-tetrofosmin, triple-head camera, 30 s/view, 20 views/head) from 36 men with a low post-stress test CAD probability and visually normal SPECT findings. The patient group was 60 men showing the entire CAD spectrum referred for routine perfusion SPECT. Stress/rest results of automatic quantification of the 60 patients were compared to the M-NDB and I-NDB. After reclassifying SSS/SRS into the four severity degrees, kappa (κ) values were calculated to objectify agreement. Mean values (vs M-NDB) were 9.4 ± 10.3 (SSS) and 5.8 ± 9.7 (SRS) for QPS and 8.2 ± 8.7 (SSS) and 6.2 ± 7.8 (SRS) for 4D-MSPECT. Thirty-seven of sixty SSS classifications (κ = 0.462) and 40/60 SRS classifications (κ = 0.457) agreed. Compared to the I-NDB, mean values were 10.2 ± 11.6 (SSS) and 6.5 ± 10.4 (SRS) for QPS and 9.2 ± 9.3 (SSS) and 7.2 ± 8.6 (SRS) for 4D-MSPECT. Forty-four of sixty patients agreed in SSS and SRS (κ = 0.621 and 0.58, respectively). Considerable differences between SSS/SRS obtained with QPS and 4D-MSPECT were found when using the M-NDB. Even using identical patients and an identical I-NDB, the algorithms still gave substantially different results. (orig.)
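
    The agreement statistic used here is Cohen's kappa over the four severity classes. A minimal sketch, with invented labels for six patients (the class boundaries follow the reconstruction above):

    ```python
    from collections import Counter

    def cohens_kappa(ratings_a, ratings_b):
        """Unweighted Cohen's kappa for two equal-length label sequences."""
        assert len(ratings_a) == len(ratings_b)
        n = len(ratings_a)
        observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
        freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
        labels = set(freq_a) | set(freq_b)
        expected = sum(freq_a[l] * freq_b[l] for l in labels) / n ** 2
        return (observed - expected) / (1 - expected)

    # e.g. SSS classes from QPS vs 4D-MSPECT for six patients (invented)
    qps = ["<4", "4-8", "9-13", ">=14", "<4", "4-8"]
    msp = ["<4", "4-8", "9-13", "9-13", "<4", "<4"]
    print(round(cohens_kappa(qps, msp), 3))   # -> 0.538
    ```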

  7. What is common becomes normal: the effect of obesity prevalence on maternal perception.

    Science.gov (United States)

    Binkin, N; Spinelli, A; Baglio, G; Lamberti, A

    2013-05-01

    This analysis investigates the poorly known effect of the local prevalence of childhood obesity on mothers' perception of their children's weight status. In 2008, a national nutritional survey of children attending the third grade of elementary school was conducted in Italy. Children were measured and classified as underweight, normal weight, overweight or obese, using the International Obesity Task Force cut-offs for body mass index (BMI). A parental questionnaire included parental perception of the child's weight status (underweight, normal, a little overweight, a lot overweight). Regions were classified by childhood obesity prevalence, and the associations between maternal perception, regional obesity prevalence, and maternal and child characteristics were examined using bivariate and logistic regression analyses. Complete data were available for 37,590 children, of whom 24% were overweight and 12% obese. Mothers correctly identified the status of 84% of normal weight, 52% of overweight and 14% of obese children. Among overweight children, factors associated with underestimation of the child's weight included lower maternal education (adjusted odds ratio, aOR, 1.9; 95% confidence interval (CI) 1.6-2.4), residence in a high-obesity region (aOR 2.2; 95% CI 1.9-2.6), male gender (aOR 1.4; 95% CI 1.2-1.6) and the child's BMI. Higher regional obesity prevalence is associated with lower maternal perception of overweight, suggesting that what is common has a greater likelihood of being perceived as normal. As perception is a first step to change, it may be harder to intervene in areas with high obesity prevalence, where intervention is most urgent.

  8. Artist Material BRDF Database for Computer Graphics Rendering

    Science.gov (United States)

    Ashbaugh, Justin C.

    The primary goal of this thesis was to create a physical library of artist material samples. This collection provides necessary data for the development of a gonio-imaging system for use in museums to more accurately document their collections. A sample set was produced consisting of 25 panels and containing nearly 600 unique samples. Selected materials are representative of those commonly used by artists both past and present. These take into account the variability in visual appearance resulting from the materials and application techniques used. Five attributes of variability were identified including medium, color, substrate, application technique and overcoat. Combinations of these attributes were selected based on those commonly observed in museum collections and suggested by surveying experts in the field. For each sample material, image data is collected and used to measure an average bi-directional reflectance distribution function (BRDF). The results are available as a public-domain image and optical database of artist materials at art-si.org. Additionally, the database includes specifications for each sample along with other information useful for computer graphics rendering such as the rectified sample images and normal maps.

  9. Development of Databases on Iodine in Foods and Dietary Supplements

    Science.gov (United States)

    Ershow, Abby G.; Skeaff, Sheila A.; Merkel, Joyce M.; Pehrsson, Pamela R.

    2018-01-01

    Iodine is an essential micronutrient required for normal growth and neurodevelopment; thus, an adequate intake of iodine is particularly important for pregnant and lactating women, and throughout childhood. Low levels of iodine in the soil and groundwater are common in many parts of the world, often leading to diets that are low in iodine. Widespread salt iodization has eradicated severe iodine deficiency, but mild-to-moderate deficiency is still prevalent even in many developed countries. To understand patterns of iodine intake and to develop strategies for improving intake, it is important to characterize all sources of dietary iodine, and national databases on the iodine content of major dietary contributors (including foods, beverages, water, salts, and supplements) provide a key information resource. This paper discusses the importance of well-constructed databases on the iodine content of foods, beverages, and dietary supplements; the availability of iodine databases worldwide; and factors related to variability in iodine content that should be considered when developing such databases. We also describe current efforts in iodine database development in the United States, the use of iodine composition data to develop food fortification policies in New Zealand, and how iodine content databases might be used when considering the iodine intake and status of individuals and populations. PMID:29342090

  10. Endoscopic findings in patients presenting with dysphagia: analysis of a national endoscopy database.

    Science.gov (United States)

    Krishnamurthy, Chaya; Hilden, Kristen; Peterson, Kathryn A; Mattek, Nora; Adler, Douglas G; Fang, John C

    2012-03-01

    Dysphagia is a common problem and an indication for upper endoscopy. There are no data on the frequency of the different endoscopic findings and whether they change according to demographics or by single versus repeat endoscopy. To determine the prevalence of endoscopic findings in patients with dysphagia and whether findings differ in regard to age, gender, ethnicity, and repeat procedure. This was a retrospective study using a national endoscopic database (CORI). A total of 30,377 patients underwent esophagogastroduodenoscopy (EGD) for dysphagia, of which 4,202 were repeat endoscopies. The overall frequency of endoscopic findings was determined by gender, age, ethnicity, and single vs. repeat procedure. Esophageal stricture was the most common finding, followed by normal, esophagitis/ulcer (EU), Schatzki ring (SR), esophageal food impaction (EFI), and suspected malignancy. Males were more likely to undergo repeat endoscopies and more likely to have stricture, EU, EFI, and suspected malignancy (P = 0.001). Patients 60 years or older had a higher prevalence of stricture, EU, SR, and suspected malignancy (P < 0.001). The frequency of endoscopic findings differs significantly by gender, age, and repeat procedure. The most common findings in descending order were stricture, normal, EU, SR, EFI, and suspected malignancy. For patients undergoing a repeat procedure, normal and EU were less common and all other abnormal findings were significantly more common.

  11. Comparison of SSS and SRS calculated from normal databases provided by QPS and 4D-MSPECT manufacturers and from identical institutional normals.

    Science.gov (United States)

    Knollmann, Daniela; Knebel, Ingrid; Koch, Karl-Christian; Gebhard, Michael; Krohn, Thomas; Buell, Ulrich; Schaefer, Wolfgang M

    2008-02-01

    There is proven evidence for the importance of myocardial perfusion single-photon emission computed tomography (SPECT) with computerised determination of summed stress and rest scores (SSS/SRS) for the diagnosis of coronary artery disease (CAD). SSS and SRS can thereby be calculated semi-quantitatively using a 20-segment model by comparing tracer uptake with values from normal databases (NDB). Four severity degrees for SSS and SRS are normally used: <4, 4-8, 9-13 and >=14. Manufacturers' NDBs (M-NDBs) often do not fit the institutional (I) settings. Therefore, this study compared SSS and SRS obtained with the algorithms Quantitative Perfusion SPECT (QPS) and 4D-MSPECT using M-NDB and I-NDB. I-NDBs were obtained using QPS and 4D-MSPECT from exercise stress data (450 MBq (99m)Tc-tetrofosmin, triple-head camera, 30 s/view, 20 views/head) from 36 men with a low post-stress test CAD probability and visually normal SPECT findings. The patient group was 60 men showing the entire CAD spectrum referred for routine perfusion SPECT. Stress/rest results of automatic quantification of the 60 patients were compared to M-NDB and I-NDB. After reclassifying SSS/SRS into the four severity degrees, kappa values were calculated to objectify agreement. Mean values (vs M-NDB) were 9.4 +/- 10.3 (SSS) and 5.8 +/- 9.7 (SRS) for QPS and 8.2 +/- 8.7 (SSS) and 6.2 +/- 7.8 (SRS) for 4D-MSPECT. Thirty-seven of sixty SSS classifications (kappa = 0.462) and 40/60 SRS classifications (kappa = 0.457) agreed. Compared to I-NDB, mean values were 10.2 +/- 11.6 (SSS) and 6.5 +/- 10.4 (SRS) for QPS and 9.2 +/- 9.3 (SSS) and 7.2 +/- 8.6 (SRS) for 4D-MSPECT. Forty-four of sixty patients agreed in SSS and SRS (kappa = 0.621 and 0.58, respectively). Considerable differences between SSS/SRS obtained with QPS and 4D-MSPECT were found when using M-NDB. Even using identical patients and an identical I-NDB, the algorithms still gave substantially different results.

  12. International scientific seminar «Chronicle of Nature – a common database for scientific analysis and joint planning of scientific publications»

    Directory of Open Access Journals (Sweden)

    Juri P. Kurhinen

    2016-05-01

    Full Text Available Provides information about the results of the international scientific seminar «Chronicle of Nature – a common database for scientific analysis and joint planning of scientific publications», held within the Finland-Russian project «Linking environmental change to biodiversity change: large-scale analysis of Eurasian ecosystems».

  13. License - Metabolonote | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available: License to Use This Database. Last updated: 2016/06/22. This page specifies the license terms regarding the use of this database and the requirements you must follow in using it. The license for this database is the Creative Commons Attribution-Share Alike 4.0 International. If you use data from this database, please be sure to attribute this database as specified in the license terms. A summary of the Creative Commons Attribution-Share Alike 4.0 International is available from Creative Commons.

  14. MICA: desktop software for comprehensive searching of DNA databases

    Directory of Open Access Journals (Sweden)

    Glick Benjamin S

    2006-10-01

    Full Text Available Abstract Background Molecular biologists work with DNA databases that often include entire genomes. A common requirement is to search a DNA database to find exact matches for a nondegenerate or partially degenerate query. The software programs available for such purposes are normally designed to run on remote servers, but an appealing alternative is to work with DNA databases stored on local computers. We describe a desktop software program termed MICA (K-Mer Indexing with Compact Arrays) that allows large DNA databases to be searched efficiently using very little memory. Results MICA rapidly indexes a DNA database. On a Macintosh G5 computer, the complete human genome could be indexed in about 5 minutes. The indexing algorithm recognizes all 15 characters of the DNA alphabet and fully captures the information in any DNA sequence, yet for a typical sequence of length L, the index occupies only about 2L bytes. The index can be searched to return a complete list of exact matches for a nondegenerate or partially degenerate query of any length. A typical search of a long DNA sequence involves reading only a small fraction of the index into memory. As a result, searches are fast even when the available RAM is limited. Conclusion MICA is suitable as a search engine for desktop DNA analysis software.
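
    The core k-mer indexing idea can be sketched in a few lines. This is a toy version, not the MICA implementation, which uses compact arrays, handles the full 15-character degenerate DNA alphabet, and keeps the index on disk:

    ```python
    def build_index(seq, k):
        """Map every k-mer in seq to the list of positions where it occurs."""
        index = {}
        for i in range(len(seq) - k + 1):
            index.setdefault(seq[i:i + k], []).append(i)
        return index

    def find_exact(seq, index, k, query):
        """Positions of a nondegenerate query (len >= k), via the k-mer index."""
        candidates = index.get(query[:k], [])
        return [i for i in candidates if seq.startswith(query, i)]

    genome = "ACGTACGTTGACGTA"
    idx = build_index(genome, k=4)
    print(find_exact(genome, idx, 4, "ACGTT"))   # -> [4]
    ```

    A query is located by looking up its first k-mer and verifying the remainder at each candidate position, so only a small fraction of the index is touched per search.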

  15. Evaluation of Bias-Variance Trade-Off for Commonly Used Post-Summarizing Normalization Procedures in Large-Scale Gene Expression Studies

    Science.gov (United States)

    Qiu, Xing; Hu, Rui; Wu, Zhixin

    2014-01-01

    Normalization procedures are widely used in high-throughput genomic data analyses to remove various kinds of technological noise and variation. They are known to have a profound impact on subsequent gene differential expression analysis. Although there has been some research into evaluating different normalization procedures, few attempts have been made to systematically evaluate the gene detection performance of normalization procedures from the bias-variance trade-off point of view, especially with strong gene differentiation effects and large sample sizes. In this paper, we conduct a thorough study to evaluate the effects of normalization procedures combined with several commonly used statistical tests and multiple testing procedures (MTPs) under different configurations of effect size and sample size. We conduct theoretical evaluation based on a random effect model, as well as simulation and biological data analyses to verify the results. Based on our findings, we provide some practical guidance for selecting a suitable normalization procedure under different scenarios. PMID:24941114
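
    As a concrete instance of a commonly used post-summarization procedure, the sketch below implements quantile normalization, which forces every sample (column) to share the same empirical distribution; ties are broken arbitrarily in this toy version:

    ```python
    import numpy as np

    def quantile_normalize(matrix):
        """matrix: genes x samples. Returns the quantile-normalized matrix."""
        order = np.argsort(matrix, axis=0)           # per-column sort order
        ranks = np.argsort(order, axis=0)            # rank of each entry in its column
        mean_by_rank = np.sort(matrix, axis=0).mean(axis=1)
        return mean_by_rank[ranks]                   # map each rank to the row mean

    expr = np.array([[5.0, 4.0, 3.0],
                     [2.0, 1.0, 4.0],
                     [3.0, 4.0, 6.0],
                     [4.0, 2.0, 8.0]])
    print(quantile_normalize(expr))
    ```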

  16. Functional integration of automated system databases by means of artificial intelligence

    Science.gov (United States)

    Dubovoi, Volodymyr M.; Nikitenko, Olena D.; Kalimoldayev, Maksat; Kotyra, Andrzej; Gromaszek, Konrad; Iskakova, Aigul

    2017-08-01

    The paper presents approaches for the functional integration of automated system databases by means of artificial intelligence. The peculiarities of exploiting databases in systems that use fuzzy implementations of functions are analysed, and requirements for the normalization of such databases are defined. The question of data equivalence under uncertainty, and of collisions arising when databases are functionally integrated, is considered, and a model to reveal their possible occurrence is devised. The paper also presents a method for evaluating the normalization of an integrated database.

  17. DATABASE REPLICATION IN HETEROGENOUS PLATFORM

    OpenAIRE

    Hendro Nindito; Evaristus Didik Madyatmadja; Albert Verasius Dian Sano

    2014-01-01

    The application of diverse database technologies in enterprises today is increasingly common practice. To provide high availability and survivability of real-time information, a database replication technology that can replicate databases across heterogeneous platforms is required. The purpose of this research is to find a technology with such capability. In this research, the data source is stored in an MSSQL database server running on Windows, and the data are replicated to a MySQL database.

  18. License - Taxonomy Icon | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available: License to Use This Database. Last updated: 2013/3/21. This page specifies the license terms regarding the use of this database and the requirements you must follow in using it. The license for this database is the Creative Commons Attribution 2.1 Japan. If you use data from this database, please be sure to attribute this database as specified in the license terms. A summary of the Creative Commons Attribution 2.1 Japan is available from Creative Commons.

  19. [Markers of antimicrobial drug resistance in the most common bacteria of normal facultative anaerobic intestinal flora].

    Science.gov (United States)

    Plavsić, Teodora

    2011-01-01

    Bacteria of the normal intestinal flora are frequent carriers of markers of antimicrobial drug resistance. Resistance genes may be exchanged with other bacteria of the normal flora as well as with pathogenic bacteria. The increase in the number of markers of resistance is one of the major global health problems, as it drives the emergence of multi-resistant strains. The aim of this study was to confirm the presence of markers of resistance in bacteria of the normal facultative anaerobic intestinal flora in our region. The experiment included a hundred fecal specimens obtained from a hundred healthy donors. A hundred bacterial strains (the most numerous representatives of the normal facultative anaerobic intestinal flora) were isolated by standard bacteriological methods. The bacteria were cultivated on Endo agar and SS agar for 24 hours at 37 °C. After incubation, the selected characteristic colonies were submitted to biochemical analysis. Susceptibility to antimicrobial drugs was tested by the standard disc diffusion method, and the results were interpreted according to the Clinical and Laboratory Standards Institute 2010 standard. Markers of resistance were found in 42% of the isolated bacteria. Resistance was most common to ampicillin (42% of isolates), amoxicillin with clavulanic acid (14% of isolates), cephalexin (14%) and cotrimoxazole (8%). The finding of 12 multi-resistant strains (12% of isolates) and of resistance to ciprofloxacin was significant. The frequency of resistance markers was statistically higher in Klebsiella pneumoniae than in Escherichia coli of the normal flora. The finding of a large number of markers of antimicrobial drug resistance among bacteria of the normal intestinal flora shows that systematic monitoring of their antimicrobial resistance is necessary, because it is an indicator of resistance in the population.

  20. License - TMBETA-GENOME | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available: License to Use This Database. Last updated: 2015/03/09. This page specifies the license terms regarding the use of this database and the requirements you must follow in using it. The license for this database is the Creative Commons Attribution-Share Alike 2.1 Japan. If you use data from this database, please be sure to attribute this database as specified in the license terms. A summary of the Creative Commons Attribution-Share Alike 2.1 Japan is available from Creative Commons.

  1. License - RGP physicalmap | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available: License to Use This Database. Last updated: 2015/07/07. This page specifies the license terms regarding the use of this database and the requirements you must follow in using it. The license for this database is the Creative Commons Attribution-Share Alike 2.1 Japan. If you use data from this database, please be sure to attribute this database as specified in the license terms. A summary of the Creative Commons Attribution-Share Alike 2.1 Japan is available from Creative Commons.

  2. Cooperative Shark Mark Recapture Database (MRDBS)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Shark Mark Recapture Database is a Cooperative Research Program database system used to keep multispecies mark-recapture information in a common format for...

  3. Relational Database Design in Information Science Education.

    Science.gov (United States)

    Brooks, Terrence A.

    1985-01-01

    Reports on database management system (dbms) applications designed by library school students for university community at University of Iowa. Three dbms design issues are examined: synthesis of relations, analysis of relations (normalization procedure), and data dictionary usage. Database planning prior to automation using data dictionary approach…

  4. A comprehensive two-dimensional gel protein database of noncultured unfractionated normal human epidermal keratinocytes: towards an integrated approach to the study of cell proliferation, differentiation and skin diseases

    DEFF Research Database (Denmark)

    Celis, J E; Madsen, Peder; Rasmussen, H H

    1991-01-01

    A two-dimensional (2-D) gel database of cellular proteins from noncultured, unfractionated normal human epidermal keratinocytes has been established. A total of 2651 [35S]methionine-labeled cellular proteins (1868 isoelectric focusing, 783 nonequilibrium pH gradient electrophoresis) were resolved...

  5. The Eruption Forecasting Information System (EFIS) database project

    Science.gov (United States)

    Ogburn, Sarah; Harpel, Chris; Pesicek, Jeremy; Wellik, Jay; Pallister, John; Wright, Heather

    2016-04-01

    The Eruption Forecasting Information System (EFIS) project is a new initiative of the U.S. Geological Survey-USAID Volcano Disaster Assistance Program (VDAP) with the goal of enhancing VDAP's ability to forecast the outcome of volcanic unrest. The EFIS project seeks to: (1) move away from relying on collective memory and toward probability estimation using databases; (2) create databases useful for pattern recognition and for answering common VDAP questions, e.g. how commonly does unrest lead to eruption? how commonly do phreatic eruptions portend magmatic eruptions, and what is the range of antecedence times?; (3) create generic probabilistic event trees using global data for different volcano 'types'; (4) create background, volcano-specific, probabilistic event trees for frequently active or particularly hazardous volcanoes in advance of a crisis; and (5) quantify and communicate uncertainty in probabilities. A major component of the project is the global EFIS relational database, which contains multiple modules designed to aid in the construction of probabilistic event trees and to answer common questions that arise during volcanic crises. The primary module contains chronologies of volcanic unrest, including the timing of phreatic eruptions, column heights, eruptive products, etc., and will initially be populated using chronicles of eruptive activity from Alaskan volcanic eruptions in the GeoDIVA database (Cameron et al. 2013). This module allows us to query across other global databases such as the WOVOdat database of monitoring data and the Smithsonian Institution's Global Volcanism Program (GVP) database of eruptive histories and volcano information. The EFIS database is in the early stages of development and population; thus, this contribution also serves as a request for feedback from the community.

  6. License - MicrobeDB.jp | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available: License to Use This Database. Last updated: 2017/06/29. This page specifies the license terms regarding the use of this database and the requirements you must follow in using it. The license for this database is the Creative Commons Attribution-Share Alike 4.0 International. If you use data from this database, please be sure to attribute this database as specified in the license terms. A summary of the Creative Commons Attribution-Share Alike 4.0 International is available from Creative Commons.

  7. License - NBDC NikkajiRDF | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available: License to Use This Database. Last updated: 2015/05/29. This page specifies the license terms regarding the use of this database and the requirements you must follow in using it. The license for this database is the Creative Commons Attribution 2.1 Japan. If you use data from this database, please be sure to attribute this database as specified in the license terms. A summary of the Creative Commons Attribution 2.1 Japan is available from Creative Commons.

  8. License - RED II INAHO | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available: License to Use This Database. Last updated: 2016/01/14. This page specifies the license terms regarding the use of this database and the requirements you must follow in using it. The license for this database is the Creative Commons Attribution-Share Alike 4.0 International. If you use data from this database, please be sure to attribute this database as specified in the license terms. A summary of the Creative Commons Attribution-Share Alike 4.0 International is available from Creative Commons.

  9. License - Togo Picture Gallery | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available: License to Use This Database. Last updated: 2017/05/16. This page specifies the license terms regarding the use of this database and the requirements you must follow in using it. The license for this database is the Creative Commons Attribution 4.0 International. If you use data from this database, please be sure to attribute this database as specified in the license terms. A summary of the Creative Commons Attribution 4.0 International is available from Creative Commons.

  10. Linking the Taiwan Fish Database to the Global Database

    Directory of Open Access Journals (Sweden)

    Kwang-Tsao Shao

    2007-03-01

    Full Text Available Under the support of the National Digital Archive Program (NDAP), basic species information about most Taiwanese fishes, including their morphology, ecology, distribution, specimens with photos, and literature, has been compiled into the "Fish Database of Taiwan" (http://fishdb.sinica.edu.tw). We expect that the databank of all Taiwanese fish species (RSD, with 2,800+ species) and the digital "Fish Fauna of Taiwan" will be completed in 2007. Underwater ecological photos and video images for all 2,800+ fishes are quite difficult to achieve but will be collected continuously in the future. In the last year of NDAP, we successfully integrated all fish specimen data deposited at 7 different institutes in Taiwan, as well as their collection maps, on Google Maps and Google Earth. Further, the database also provides the pronunciation of Latin scientific names and the transliteration of Chinese common names by referring to the Romanization system for all Taiwanese fishes (2,902 species in 292 families so far). The Taiwanese fish species checklist with Chinese common/vernacular names and specimen data has been updated periodically and provided to the global FishBase as well as the Global Biodiversity Information Facility (GBIF) through the national portal of the Taiwan Biodiversity Information Facility (TaiBIF). Thus, Taiwanese fish data can be queried and browsed on the WWW. To contribute to the "Barcode of Life" and "All Fishes" international projects, alcohol-preserved specimens of more than 1,800 species and cryobanked tissues of 800 species have been accumulated at RCBAS in the past two years. Through this close collaboration between local and global databases, "The Fish Database of Taiwan" now attracts more than 250,000 visitors and achieves 5 million hits per month. We believe that this local database is becoming an important resource for education, research, conservation, and sustainable use of fish in Taiwan.

  11. License - RGP estmap2001 | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available: License to Use This Database. Last updated: 2015/04/02. This page specifies the license terms regarding the use of this database and the requirements you must follow in using it. The license for this database is the Creative Commons Attribution-Share Alike 2.1 Japan. If you use data from this database, please be sure to attribute this database as specified in the license terms. A summary of the Creative Commons Attribution-Share Alike 2.1 Japan is available from Creative Commons.

  12. Drug Use Normalization: A Systematic and Critical Mixed-Methods Review.

    Science.gov (United States)

    Sznitman, Sharon R; Taubman, Danielle S

    2016-09-01

    Drug use normalization, which is a process whereby drug use becomes less stigmatized and more accepted as normative behavior, provides a conceptual framework for understanding contemporary drug issues and changes in drug use trends. Through a mixed-methods systematic review of the normalization literature, this article seeks to (a) critically examine how the normalization framework has been applied in empirical research and (b) make recommendations for future research in this area. Twenty quantitative, 26 qualitative, and 4 mixed-methods studies were identified through five electronic databases and reference lists of published studies. Studies were assessed for relevance, study characteristics, quality, and aspects of normalization examined. None of the studies applied the most rigorous research design (experiments) or examined all of the originally proposed normalization dimensions. The most commonly assessed dimension of drug use normalization was "experimentation." In addition to the original dimensions, the review identified the following new normalization dimensions in the literature: (a) breakdown of demographic boundaries and other risk factors in relation to drug use; (b) de-normalization; (c) drug use as a means to achieve normal goals; and (d) two broad forms of micro-politics associated with managing the stigma of illicit drug use: assimilative and transformational normalization. Further development in normalization theory and methodology promises to provide researchers with a novel framework for improving our understanding of drug use in contemporary society. Specifically, quasi-experimental designs that are currently being made feasible by swift changes in cannabis policy provide researchers with new and improved opportunities to examine normalization processes.

  13. License - RGP gmap2000 | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available: License to Use This Database. Last updated: 2015/03/10. This page specifies the license terms regarding the use of this database and the requirements you must follow in using it. The license for this database is the Creative Commons Attribution-Share Alike 2.1 Japan. If you use data from this database, please be sure to attribute this database as specified in the license terms. A summary of the Creative Commons Attribution-Share Alike 2.1 Japan is available from Creative Commons.

  14. Mean blood velocities and flow impedance in the fetal descending thoracic aortic and common carotid artery in normal pregnancy.

    Science.gov (United States)

    Bilardo, C M; Campbell, S; Nicolaides, K H

    1988-12-01

    A linear array pulsed Doppler duplex scanner was used to establish reference ranges for mean blood velocities and flow impedance (Pulsatility Index = PI) in the descending thoracic aorta and in the common carotid artery from 70 fetuses in normal pregnancies at 17-42 weeks' gestation. The aortic velocity increased with gestation up to 32 weeks, then remained constant until term, when it decreased. In contrast, the velocity in the common carotid artery increased throughout pregnancy. The PI in the aorta remained constant throughout pregnancy, while in the common carotid artery it fell steeply after 32 weeks. These results suggest that with advancing gestation there is a redistribution of the fetal circulation with decreased impedance to flow to the fetal brain, presumably to compensate for the progressive decrease in fetal blood PO2.

  15. The Danish Nonmelanoma Skin Cancer Dermatology Database

    DEFF Research Database (Denmark)

    Lamberg, Anna Lei; Sølvsten, Henrik; Lei, Ulrikke

    2016-01-01

    AIM OF DATABASE: The Danish Nonmelanoma Skin Cancer Dermatology Database was established in 2008. The aim of this database was to collect data on nonmelanoma skin cancer (NMSC) treatment and improve its treatment in Denmark. NMSC is the most common malignancy in the western countries and represents...... treatment. The database has revealed that overall, the quality of care of NMSC in Danish dermatological clinics is high, and the database provides the necessary data for continuous quality assurance....

  16. dbEM: A database of epigenetic modifiers curated from cancerous and normal genomes

    Science.gov (United States)

    Singh Nanda, Jagpreet; Kumar, Rahul; Raghava, Gajendra P. S.

    2016-01-01

    We have developed a database called dbEM (database of Epigenetic Modifiers) to maintain the genomic information of about 167 epigenetic modifiers/proteins, which are considered potential cancer targets. In dbEM, modifiers are classified on a functional basis and comprise 48 histone methyltransferases, 33 chromatin remodelers and 31 histone demethylases. dbEM maintains genomic information such as mutations, copy number variation and gene expression in thousands of tumor samples, cancer cell lines and healthy samples. This information is obtained from public resources, viz. COSMIC, CCLE and the 1000 Genomes project. Gene essentiality data retrieved from the COLT database further highlight the importance of various epigenetic proteins for cancer survival. We have also reported the sequence profiles, tertiary structures and post-translational modifications of these epigenetic proteins in cancer. The database also contains information on 54 drug molecules against different epigenetic proteins. A wide range of tools has been integrated in dbEM, e.g. search, BLAST, alignment and profile-based prediction. In our analysis, we found that the epigenetic proteins DNMT3A, HDAC2, KDM6A, and TET2 are highly mutated in a variety of cancers. We are confident that dbEM will be very useful in cancer research, particularly in the field of epigenetic-protein-based cancer therapeutics. This database is available to the public at URL: http://crdd.osdd.net/raghava/dbem.

  17. An Adaptive Database Intrusion Detection System

    Science.gov (United States)

    Barrios, Rita M.

    2011-01-01

    Intrusion detection is difficult to accomplish when attempting to employ current methodologies when considering the database and the authorized entity. It is a common understanding that current methodologies focus on the network architecture rather than the database, which is not an adequate solution when considering the insider threat. Recent…

  18. License - Gene Name Thesaurus | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available: License to Use This Database. Last updated: 2012/01/17. The license for this database is the Creative Commons Attribution-Share Alike 2.1 Japan. If you use data from this database, please be sure to attribute this database as specified in the license terms. A summary of the Creative Commons Attribution-Share Alike 2.1 Japan is available from Creative Commons. With regard to this database, you are licensed to freely access part or whole of this database and acquire data, and to freely redistribute part or whole of the data from this database.

  19. Simple Logic for Big Problems: An Inside Look at Relational Databases.

    Science.gov (United States)

    Seba, Douglas B.; Smith, Pat

    1982-01-01

    Discusses the database design concept termed "normalization" (a process that replaces associations between data with associations in two-dimensional tabular form), which results in the formation of relational databases (they are to computers what dictionaries are to spoken languages). Applications of the database in serials control and complex systems…
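
    A tiny illustration of the idea, using an invented serials example: the flat table repeats publisher facts on every row, while the normalized pair of tables stores each fact once and so eliminates update anomalies:

    ```python
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
    -- Unnormalized: publisher city repeated for every journal title
    CREATE TABLE journal_flat (
        title     TEXT PRIMARY KEY,
        publisher TEXT,
        pub_city  TEXT
    );

    -- Normalized (3NF): the title -> publisher -> city transitive
    -- dependency is split into two tables
    CREATE TABLE publisher (
        publisher TEXT PRIMARY KEY,
        pub_city  TEXT
    );
    CREATE TABLE journal (
        title     TEXT PRIMARY KEY,
        publisher TEXT REFERENCES publisher
    );
    """)
    con.execute("INSERT INTO publisher VALUES ('Acme Press', 'Springfield')")
    con.executemany("INSERT INTO journal VALUES (?, ?)",
                    [("Journal A", "Acme Press"), ("Journal B", "Acme Press")])
    # The city is now updated in exactly one place
    con.execute("UPDATE publisher SET pub_city = 'Shelbyville' "
                "WHERE publisher = 'Acme Press'")
    ```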

  20. GenColors-based comparative genome databases for small eukaryotic genomes.

    Science.gov (United States)

    Felder, Marius; Romualdi, Alessandro; Petzold, Andreas; Platzer, Matthias; Sühnel, Jürgen; Glöckner, Gernot

    2013-01-01

    Many sequence data repositories can give a quick and easily accessible overview on genomes and their annotations. Less widespread is the possibility to compare related genomes with each other in a common database environment. We have previously described the GenColors database system (http://gencolors.fli-leibniz.de) and its applications to a number of bacterial genomes such as Borrelia, Legionella, Leptospira and Treponema. This system has an emphasis on genome comparison. It combines data from related genomes and provides the user with an extensive set of visualization and analysis tools. Eukaryote genomes are normally larger than prokaryote genomes and thus pose additional challenges for such a system. We have, therefore, adapted GenColors to also handle larger datasets of small eukaryotic genomes and to display eukaryotic gene structures. Further recent developments include whole genome views, genome list options and, for bacterial genome browsers, the display of horizontal gene transfer predictions. Two new GenColors-based databases for two fungal species (http://fgb.fli-leibniz.de) and for four social amoebas (http://sacgb.fli-leibniz.de) were set up. Both new resources open up a single entry point for related genomes for the amoebozoa and fungal research communities and other interested users. Comparative genomics approaches are greatly facilitated by these resources.

  1. A new relational database structure and online interface for the HITRAN database

    International Nuclear Information System (INIS)

    Hill, Christian; Gordon, Iouli E.; Rothman, Laurence S.; Tennyson, Jonathan

    2013-01-01

    A new format for the HITRAN database is proposed. By storing the line-transition data in a number of linked tables described by a relational database schema, it is possible to overcome the limitations of the existing format, which have become increasingly apparent over the last few years as new and more varied data are being used by radiative-transfer models. Although the database in the new format can be searched using the well-established Structured Query Language (SQL), a web service, HITRANonline, has been deployed to allow users to make most common queries of the database using a graphical user interface in a web page. The advantages of the relational form of the database to ensuring data integrity and consistency are explored, and the compatibility of the online interface with the emerging standards of the Virtual Atomic and Molecular Data Centre (VAMDC) project is discussed. In particular, the ability to access HITRAN data using a standard query language from other websites, command line tools and from within computer programs is described. -- Highlights: • A new, interactive version of the HITRAN database is presented. • The data is stored in a structured fashion in a relational database. • The new HITRANonline interface offers increased functionality and easier error correction

  2. A new relational database structure and online interface for the HITRAN database

    Science.gov (United States)

    Hill, Christian; Gordon, Iouli E.; Rothman, Laurence S.; Tennyson, Jonathan

    2013-11-01

    A new format for the HITRAN database is proposed. By storing the line-transition data in a number of linked tables described by a relational database schema, it is possible to overcome the limitations of the existing format, which have become increasingly apparent over the last few years as new and more varied data are being used by radiative-transfer models. Although the database in the new format can be searched using the well-established Structured Query Language (SQL), a web service, HITRANonline, has been deployed to allow users to make most common queries of the database using a graphical user interface in a web page. The advantages of the relational form of the database to ensuring data integrity and consistency are explored, and the compatibility of the online interface with the emerging standards of the Virtual Atomic and Molecular Data Centre (VAMDC) project is discussed. In particular, the ability to access HITRAN data using a standard query language from other websites, command line tools and from within computer programs is described.
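
    A query of the kind HITRANonline supports could look as follows against a simplified relational line-list schema. Table and column names here are invented stand-ins; the actual HITRAN schema is more elaborate:

    ```python
    import sqlite3

    con = sqlite3.connect(":memory:")   # stand-in for a local copy of the database
    con.executescript("""
    CREATE TABLE molecule     (mol_id INTEGER PRIMARY KEY, formula TEXT);
    CREATE TABLE isotopologue (iso_id INTEGER PRIMARY KEY,
                               mol_id INTEGER REFERENCES molecule,
                               iso_name TEXT);
    CREATE TABLE transition   (nu REAL, sw REAL,     -- wavenumber, line intensity
                               iso_id INTEGER REFERENCES isotopologue);
    """)

    # All transitions of a given molecule in a wavenumber window
    rows = con.execute("""
        SELECT t.nu, t.sw, iso.iso_name
        FROM   transition t
        JOIN   isotopologue iso ON iso.iso_id = t.iso_id
        JOIN   molecule     m   ON m.mol_id   = iso.mol_id
        WHERE  m.formula = ? AND t.nu BETWEEN ? AND ?
        ORDER  BY t.nu
    """, ("H2O", 1500.0, 1600.0)).fetchall()
    print(rows)
    ```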

  3. Categorization of common sounds by cochlear implanted and normal hearing adults.

    Science.gov (United States)

    Collett, E; Marx, M; Gaillard, P; Roby, B; Fraysse, B; Deguine, O; Barone, P

    2016-05-01

    Auditory categorization involves the grouping of acoustic events along one or more shared perceptual dimensions, which can relate to both semantic and physical attributes. This process involves both high-level cognitive processes (categorization) and low-level perceptual encoding of the acoustic signal, both of which are affected by the use of a cochlear implant (CI) device. The goal of this study was twofold: (I) compare the categorization strategies of CI users and normal hearing listeners (NHL); (II) investigate whether any characteristics of the raw acoustic signal could explain the results. 16 experienced CI users and 20 NHL were tested using a free-sorting task of 16 common sounds divided into 3 predefined categories of environmental, musical and vocal sounds. Multiple Correspondence Analysis (MCA) and Hierarchical Clustering based on Principal Components (HCPC) show that CI users followed a categorization strategy similar to that of NHL and were able to discriminate between the three types of sounds. However, results for CI users were more varied and showed less inter-participant agreement. Acoustic analysis also highlighted the average pitch salience and the average autocorrelation peak as being important for the perception and categorization of the sounds. The results therefore show that at a broad level of categorization, CI users may not have as many difficulties as previously thought in discriminating certain kinds of sound; however, the perception of individual sounds remains challenging.

  4. Repetitive Bibliographical Information in Relational Databases.

    Science.gov (United States)

    Brooks, Terrence A.

    1988-01-01

    Proposes a solution to the problem of loading repetitive bibliographic information in a microcomputer-based relational database management system. The alternative design described is based on a representational redundancy design and normalization theory. (12 references) (Author/CLB)

  5. Design database for quantitative trait loci (QTL) data warehouse, data mining, and meta-analysis.

    Science.gov (United States)

    Hu, Zhi-Liang; Reecy, James M; Wu, Xiao-Lin

    2012-01-01

    A database can be used to warehouse quantitative trait loci (QTL) data from multiple sources for comparison, genomic data mining, and meta-analysis. A robust database design involves sound data structure logistics, meaningful data transformations, normalization, and proper user interface designs. This chapter starts with a brief review of relational database basics and concentrates on issues associated with curation of QTL data into a relational database, with emphasis on the principles of data normalization and structure optimization. In addition, some simple examples of QTL data mining and meta-analysis are included. These examples are provided to help readers better understand the potential and importance of sound database design.

  6. License - TMFunction | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available: License to Use This Database. Last updated: 2017/01/23. This page specifies the license terms regarding the use of this database and the requirements you must follow in using it. The license for this database is the Creative Commons Attribution-Share Alike 4.0 International. If you use data from this database, please be sure to attribute this database as specified in the license terms. A summary of the Creative Commons Attribution-Share Alike 4.0 International is available from Creative Commons. With regard to this database, you are licensed to freely access part or whole of this database and acquire data.

  7. Bernstein Algorithm for Vertical Normalization to 3NF Using Synthesis

    Directory of Open Access Journals (Sweden)

    Matija Varga

    2013-07-01

    Full Text Available This paper demonstrates the use of the Bernstein algorithm for vertical normalization to 3NF using synthesis. The aim of the paper is to provide an algorithm for database normalization, to present a set of steps that minimize redundancy in order to increase database management efficiency, and to specify tests and algorithms for proving reversibility (i.e., proving that the normalization did not cause loss of information). Using the steps of the Bernstein algorithm, the paper gives examples of vertical normalization to 3NF through synthesis and proposes a test and an algorithm to demonstrate the reversibility of the decomposition. The paper also explains that the reasons for generating normal forms are to facilitate data search, eliminate data redundancy and avoid deletion, insertion and update anomalies, and it explains with examples how such anomalies develop.
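
    A compressed sketch of the synthesis approach, assuming functional dependencies given as (left-hand side, right-hand side) pairs: compute a minimal cover, then group dependencies by left-hand side into 3NF relation schemes. This omits steps of the full Bernstein algorithm (e.g., ensuring some scheme contains a key and merging equivalent keys):

    ```python
    def closure(attrs, fds):
        """Attribute closure of attrs under fds, a list of (lhs, rhs) pairs."""
        result = set(attrs)
        changed = True
        while changed:
            changed = False
            for lhs, rhs in fds:
                if set(lhs) <= result and not set(rhs) <= result:
                    result |= set(rhs)
                    changed = True
        return result

    def minimal_cover(fds):
        # 1. split right-hand sides into single attributes
        fds = [(lhs, a) for lhs, rhs in fds for a in rhs]
        # 2. drop extraneous left-hand-side attributes
        reduced = []
        for lhs, a in fds:
            lhs = set(lhs)
            for attr in sorted(lhs):
                if len(lhs) > 1 and a in closure(lhs - {attr}, fds):
                    lhs -= {attr}
            reduced.append((frozenset(lhs), a))
        reduced = list(dict.fromkeys(reduced))      # de-duplicate
        # 3. drop redundant dependencies
        cover = list(reduced)
        for fd in reduced:
            rest = [f for f in cover if f != fd]
            if fd[1] in closure(fd[0], rest):
                cover = rest
        return cover

    def synthesize_3nf(fds):
        schemes = {}
        for lhs, a in minimal_cover(fds):
            schemes.setdefault(lhs, set(lhs)).add(a)
        return [tuple(sorted(s)) for s in schemes.values()]

    # R(A, B, C, D) with A -> BC, B -> C, A -> D
    print(synthesize_3nf([("A", "BC"), ("B", "C"), ("A", "D")]))
    # -> [('A', 'B', 'D'), ('B', 'C')]
    ```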

  8. Normalization in EDIP97 and EDIP2003: updated European inventory for 2004 and guidance towards a consistent use in practice

    DEFF Research Database (Denmark)

    Laurent, Alexis; Olsen, Stig Irving; Hauschild, Michael Zwicky

    2011-01-01

    Purpose: When performing a life cycle assessment (LCA), the LCA practitioner faces the need to express the characterized results in a form suitable for the final interpretation. This can be done using normalization against some common reference impact—the normalization references—which require...... regular updates. The study presents updated sets of normalization inventories, normalization references for the EDIP97/EDIP2003 methodology and guidance on their consistent use in practice. Materials and methods: The base year of the inventory is 2004; the geographical scope for the non-global impacts...... is limited to Europe. The emission inventory was collected from different publicly available databases and monitoring bodies. Where necessary, gaps were filled using extrapolations. A new approach for inventorizing specific groups of substances—non-methane volatile organic compounds and pesticides—was also...
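
    The normalization step itself is simply division of each characterized impact score by the corresponding normalization reference, yielding dimensionless person-equivalents. A toy sketch with invented numbers (the actual EDIP97/EDIP2003 references are the per-category values derived from the updated 2004 inventory):

    ```python
    # Invented characterized scores and normalization references;
    # references are expressed as average annual impact per person.
    characterized = {"global warming": 1.2e4,    # kg CO2-eq
                     "acidification": 55.0}      # kg SO2-eq
    references    = {"global warming": 8.7e3,    # kg CO2-eq/person/year
                     "acidification": 74.0}      # kg SO2-eq/person/year

    normalized = {cat: characterized[cat] / references[cat]
                  for cat in characterized}      # person-equivalents
    print(normalized)
    ```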

  9. Conversion of National Health Insurance Service-National Sample Cohort (NHIS-NSC) Database into Observational Medical Outcomes Partnership-Common Data Model (OMOP-CDM).

    Science.gov (United States)

    You, Seng Chan; Lee, Seongwon; Cho, Soo-Yeon; Park, Hojun; Jung, Sungjae; Cho, Jaehyeong; Yoon, Dukyong; Park, Rae Woong

    2017-01-01

    It is increasingly necessary to generate medical evidence applicable to Asian people, compared to those in Western countries. Observational Health Data Sciences and Informatics (OHDSI) is an international collaborative that aims to facilitate the generation of high-quality evidence by creating and applying open-source data analytic solutions to a large network of health databases across countries. We aimed to incorporate Korean nationwide cohort data into the OHDSI network by converting the national sample cohort into the Observational Medical Outcomes Partnership Common Data Model (OMOP-CDM). The data of 1.13 million subjects were converted to OMOP-CDM, resulting in an average conversion rate of 99.1%. ACHILLES, an open-source OMOP-CDM-based data profiling tool, was run on the converted database to visualize data-driven characterization and assess the quality of the data. The OMOP-CDM version of the National Health Insurance Service-National Sample Cohort (NHIS-NSC) can be a valuable tool for multiple aspects of medical research through incorporation into the OHDSI research network.

  10. CANDID: Comparison algorithm for navigating digital image databases

    Energy Technology Data Exchange (ETDEWEB)

    Kelly, P.M.; Cannon, T.M.

    1994-02-21

    In this paper, we propose a method for calculating the similarity between two digital images. A global signature describing the texture, shape, or color content is first computed for every image stored in a database, and a normalized distance between probability density functions of feature vectors is used to match signatures. This method can be used to retrieve images from a database that are similar to an example target image. The algorithm is applied to the problem of search and retrieval in a database containing pulmonary CT imagery, and experimental results are provided.
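
    A simplified rendition of the matching step, assuming a normalized gray-level histogram as the global signature and a Euclidean distance between signatures (the paper's signatures describe texture, shape, and color more richly):

    ```python
    import numpy as np

    def signature(image, bins=64):
        """Normalized gray-level histogram, treated as a probability density."""
        hist, _ = np.histogram(image, bins=bins, range=(0.0, 1.0))
        return hist / hist.sum()

    def distance(sig_a, sig_b):
        return float(np.linalg.norm(sig_a - sig_b))

    rng = np.random.default_rng(0)
    target = rng.random((128, 128))                # stand-ins for CT slices
    stored = [rng.random((128, 128)) for _ in range(5)]

    # Rank stored images by signature distance to the target
    sig_t = signature(target)
    ranked = sorted(stored, key=lambda im: distance(signature(im), sig_t))
    ```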

  11. Database Concepts in a Domain Ontology

    Directory of Open Access Journals (Sweden)

    Gorskis Henrihs

    2017-12-01

    Full Text Available There are multiple approaches for mapping from a domain ontology to a database in the task of ontology-based data access. For that purpose, external mapping documents are most commonly used. These documents describe how the data necessary for the description of ontology individuals and other values, are to be obtained from the database. The present paper investigates the use of special database concepts. These concepts are not separated from the domain ontology; they are mixed with domain concepts to form a combined application ontology. By creating natural relationships between database concepts and domain concepts, mapping can be implemented more easily and with a specific purpose. The paper also investigates how the use of such database concepts in addition to domain concepts impacts ontology building and data retrieval.

  12. Academic Journal Embargoes and Full Text Databases.

    Science.gov (United States)

    Brooks, Sam

    2003-01-01

    Documents the reasons for embargoes of academic journals in full text databases (i.e., publisher-imposed delays on the availability of full text content) and provides insight regarding common misconceptions. Tables present data on selected journals covering a cross-section of subjects and publishers and comparing two full text business databases.…

  13. License - CREATE portal | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available: License to Use This Database. Last updated: 2016/03/16. This page specifies the license terms regarding the use of this database and the requirements you must follow in using it. The license for this database is the Creative Commons Attribution-Share Alike 4.0 International. If you use data from this database, please be sure to attribute this database as specified in the license terms. A summary of the Creative Commons Attribution-Share Alike 4.0 International is available from Creative Commons. With regard to this database, you are licensed to freely access part or whole of this database and acquire data.

  14. Large-scale event extraction from literature with multi-level gene normalization.

    Directory of Open Access Journals (Sweden)

    Sofie Van Landeghem

    Full Text Available Text mining for the life sciences aims to aid database curation, knowledge summarization and information retrieval through the automated processing of biomedical texts. To provide comprehensive coverage and enable full integration with existing biomolecular database records, it is crucial that text mining tools scale up to millions of articles and that their analyses can be unambiguously linked to information recorded in resources such as UniProt, KEGG, BioGRID and NCBI databases. In this study, we investigate how fully automated text mining of complex biomolecular events can be augmented with a normalization strategy that identifies biological concepts in text, mapping them to identifiers at varying levels of granularity, ranging from canonicalized symbols to unique genes and proteins and broad gene families. To this end, we have combined two state-of-the-art text mining components, previously evaluated on two community-wide challenges, and have extended and improved upon these methods by exploiting their complementary nature. Using these systems, we perform normalization and event extraction to create a large-scale resource that is publicly available, unique in semantic scope, and covers all 21.9 million PubMed abstracts and 460 thousand PubMed Central open access full-text articles. This dataset contains 40 million biomolecular events involving 76 million gene/protein mentions, linked to 122 thousand distinct genes from 5032 species across the full taxonomic tree. Detailed evaluations and analyses reveal promising results for application of this data in database and pathway curation efforts. The main software components used in this study are released under an open-source license. Further, the resulting dataset is freely accessible through a novel API, providing programmatic and customized access (http://www.evexdb.org/api/v001/). Finally, to allow for large-scale bioinformatic analyses, the entire resource is available for bulk download from

  15. License - Society Catalog | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Society Catalog: License to Use This Database. Last updated: 2012/01/17. The license specifies the terms regarding the use of this database and the requirements you must follow in using it. The license for this database is the Creative Commons Attribution-Share Alike 2.1 Japan. If you use data from this database, please be sure to attribute this database. With regard to this database, you are licensed to freely access part or whole of the database.

  16. Heterogeneous distributed databases: A case study

    Science.gov (United States)

    Stewart, Tracy R.; Mukkamala, Ravi

    1991-01-01

    Alternatives are reviewed for accessing distributed heterogeneous databases and a recommended solution is proposed. The current study is limited to the Automated Information Systems Center at the Naval Sea Combat Systems Engineering Station at Norfolk, VA. This center maintains two databases located on Digital Equipment Corporation's VAX computers running under the VMS operating system. The first database, ICMS, resides on a VAX 11/780 and has been implemented using VAX DBMS, a CODASYL-based system. The second database, CSA, resides on a VAX 6460 and has been implemented using the ORACLE relational database management system (RDBMS). Both databases are used for configuration management within the U.S. Navy, and each supports a different customer base. ICMS tracks U.S. Navy ships and major systems (anti-sub, sonar, etc.). Even though the major systems on ships and submarines have totally different functions, some of the equipment within the major systems is common to both ships and submarines.

  17. Applying AN Object-Oriented Database Model to a Scientific Database Problem: Managing Experimental Data at Cebaf.

    Science.gov (United States)

    Ehlmann, Bryon K.

    Current scientific experiments are often characterized by massive amounts of very complex data and the need for complex data analysis software. Object-oriented database (OODB) systems have the potential of improving the description of the structure and semantics of this data and of integrating the analysis software with the data. This dissertation results from research to enhance OODB functionality and methodology to support scientific databases (SDBs) and, more specifically, to support a nuclear physics experiments database for the Continuous Electron Beam Accelerator Facility (CEBAF). This research to date has identified a number of problems related to the practical application of OODB technology to the conceptual design of the CEBAF experiments database and other SDBs: the lack of a generally accepted OODB design methodology, the lack of a standard OODB model, the lack of a clear conceptual level in existing OODB models, and the limited support in existing OODB systems for many common object relationships inherent in SDBs. To address these problems, the dissertation describes an Object-Relationship Diagram (ORD) and an Object-oriented Database Definition Language (ODDL) that provide tools that allow SDB design and development to proceed systematically and independently of existing OODB systems. These tools define multi-level, conceptual data models for SDB design, which incorporate a simple notation for describing common types of relationships that occur in SDBs. ODDL allows these relationships and other desirable SDB capabilities to be supported by an extended OODB system. A conceptual model of the CEBAF experiments database is presented in terms of ORDs and the ODDL to demonstrate their functionality and use and provide a foundation for future development of experimental nuclear physics software using an OODB approach.

  18. Analysis of Cloud-Based Database Systems

    Science.gov (United States)

    2015-06-01

    After deploying the VM, we installed SQL Server 2014 relational database management software (RDBMS) and restored a copy of the PYTHON database onto the server. Using management views within SQL Server, we retrieved lists of the most commonly executed queries and the percentage of reads versus writes, and we used Performance Monitor to gather data regarding resource utilization and queueing. The second tool we used was the SQL Server Profiler provided by Microsoft.

  19. Connection of European particle therapy centers and generation of a common particle database system within the European ULICE-framework

    International Nuclear Information System (INIS)

    Kessel, Kerstin A; Pötter, Richard; Dosanjh, Manjit; Debus, Jürgen; Combs, Stephanie E; Bougatf, Nina; Bohn, Christian; Habermehl, Daniel; Oetzel, Dieter; Bendl, Rolf; Engelmann, Uwe; Orecchia, Roberto; Fossati, Piero

    2012-01-01

    To establish a common database on particle therapy for the evaluation of clinical studies integrating a large variety of voluminous datasets, different documentation styles, and various information systems, especially in the field of radiation oncology, we developed a web-based documentation system for transnational and multicenter clinical studies in particle therapy. 560 patients were treated from November 2009 to September 2011. Protons, carbon ions or a combination of both, as well as a combination with photons, were applied. To date, 12 studies have been initiated and more are in preparation. It is possible to immediately access all patient information and to exchange, store, process, and visualize text data, any DICOM images, and multimedia data. Accessing the system and submitting clinical data is possible for internal and external users. Integrated into the hospital environment, data is imported both manually and automatically. Security and privacy protection as well as data validation and verification are ensured. Studies can be designed to fit individual needs. The described database provides a basis for documenting large patient groups with specific and specialized questions to be answered. Since electronic documentation began, it has become apparent that the benefits lie in the user-friendly and timely documentation workflow. The ultimate goal is a simplification of research work, better-quality study analyses and, eventually, the improvement of treatment concepts by evaluating the effectiveness of particle therapy.

  20. License - AcEST | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available AcEST: License to Use This Database. Last updated: 2010/02/15. You may use this database in compliance with the licenses described below. The Standard License specifies the license terms regarding the use of this database and the requirements you must follow in using it; the Additional License specifies items that apply in addition to the Standard License. The Standard License for this database is the Creative Commons Attribution-Share Alike 2.1 Japan. If you use data from this database, please be sure to attribute this database.

  1. License - DB-SPIRE | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available DB-SPIRE: License to Use This Database. Last updated: 2017/02/16. The license specifies the terms regarding the use of this database and the requirements you must follow in using it. The license for this database is the Creative Commons Attribution-Share Alike 4.0 International. If you use data from this database, please be sure to attribute this database. With regard to this database, you are licensed to freely access part or whole of the database, acquire data, and freely redistribute part or whole of the data.

  2. Understanding a Normal Distribution of Data.

    Science.gov (United States)

    Maltenfort, Mitchell G

    2015-12-01

    Assuming data follow a normal distribution is essential for many common statistical tests. However, what are normal data and when can we assume that a data set follows this distribution? What can be done to analyze non-normal data?
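
    A short, self-contained illustration of the record's point, assuming scipy is available: the Shapiro-Wilk test flags non-normal data, and a log transform is one common remedy for right-skewed measurements:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        data = rng.lognormal(mean=0.0, sigma=0.5, size=200)  # right-skewed sample

        # A small p-value rejects the assumption that the data are normal.
        stat, p = stats.shapiro(data)
        print(f"raw data:        W={stat:.3f}, p={p:.4f}")

        # After a log transform the same data look approximately normal.
        stat, p = stats.shapiro(np.log(data))
        print(f"log-transformed: W={stat:.3f}, p={p:.4f}")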

  3. License - Gclust Server | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Gclust Server: License to Use This Database. Last updated: 2009/09/14. You may use this database in compliance with the licenses described below. The Standard License specifies the license terms regarding the use of this database and the requirements you must follow in using it; the Additional License specifies items that apply in addition to the Standard License. The Standard License for this database is the Creative Commons Attribution-Share Alike 2.1 Japan. If you use data from this database, please be sure to attribute this database.

  4. Identification of common cause initiators in IRS database

    Energy Technology Data Exchange (ETDEWEB)

    Nyman, R. [Swedish Nuclear Power Inspectorate, Stockholm (Sweden); Kulig, M.; Tomic, B. [ENCONET Consulting GmbH, Vienna (Austria)

    1998-02-01

    The objective of this project is to obtain practical insights relevant for the identification of Common Cause Initiators (CCIs) based on event data available in the NEA Incident Reporting System. The project is intended to improve the understanding of CCIs and, in consequence, their consideration in safety assessment of nuclear power plants and in particular plant specific probabilistic safety assessment. The project is a pilot study, and not expected to provide answers for all related questions. Its scope is limited to some practical insights that would help to improve the understanding of the issue and to establish directions for further work.

  5. Identification of common cause initiators in IRS database

    International Nuclear Information System (INIS)

    Nyman, R.; Kulig, M.; Tomic, B.

    1998-02-01

    The objective of this project is to obtain practical insights relevant for the identification of Common Cause Initiators (CCIs) based on event data available in the NEA Incident Reporting System. The project is intended to improve the understanding of CCIs and, in consequence, their consideration in safety assessment of nuclear power plants and in particular plant specific probabilistic safety assessment. The project is a pilot study, and not expected to provide answers for all related questions. Its scope is limited to some practical insights that would help to improve the understanding of the issue and to establish directions for further work

  6. A multi-sample based method for identifying common CNVs in normal human genomic structure using high-resolution aCGH data.

    Directory of Open Access Journals (Sweden)

    Chihyun Park

    Full Text Available BACKGROUND: It is difficult to identify copy number variations (CNVs) in normal human genomic data due to noise and non-linear relationships between different genomic regions and signal intensity. A high-resolution array comparative genomic hybridization (aCGH) containing 42 million probes, which is very large compared to previous arrays, was recently published. Most existing CNV detection algorithms do not work well because of noise associated with the large amount of input data and because most of the current methods were not designed to analyze normal human samples. Normal human genome analysis often requires a joint approach across multiple samples. However, the majority of existing methods can only identify CNVs from a single sample. METHODOLOGY AND PRINCIPAL FINDINGS: We developed a multi-sample-based genomic variations detector (MGVD) that uses segmentation to identify common breakpoints across multiple samples and a k-means-based clustering strategy. Unlike previous methods, MGVD simultaneously considers multiple samples with different genomic intensities and identifies CNVs and CNV zones (CNVZs); a CNVZ is a more precise measure of the location of a genomic variant than the CNV region (CNVR). CONCLUSIONS AND SIGNIFICANCE: We designed a specialized algorithm to detect common CNVs from extremely high-resolution multi-sample aCGH data. MGVD showed high sensitivity and a low false discovery rate for a simulated data set, and outperformed most current methods when real, high-resolution HapMap datasets were analyzed. MGVD also had the fastest runtime compared to the other algorithms evaluated when actual, high-resolution aCGH data were analyzed. The CNVZs identified by MGVD can be used in association studies for revealing relationships between phenotypes and genomic aberrations. Our algorithm was developed with standard C++ and is available in Linux and MS Windows format in the STL library. It is freely available at: http://embio.yonsei.ac.kr/~Park/mgvd.php.
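
    Purely to illustrate the multi-sample idea (shared breakpoints first, then clustering of per-segment intensities), here is a much-simplified Python sketch; the jump-detection threshold and the k-means step are stand-ins, not the published MGVD algorithm, which is implemented in C++:

        import numpy as np
        from sklearn.cluster import KMeans

        def common_breakpoints(samples, z=4.0):
            # Probe positions where the mean absolute jump across all samples
            # is unusually large; samples has shape (n_samples, n_probes).
            jumps = np.abs(np.diff(samples, axis=1)).mean(axis=0)
            return np.where(jumps > jumps.mean() + z * jumps.std())[0] + 1

        def segment_states(samples, breakpoints, k=3):
            # Cluster per-segment mean intensities into k copy-number states.
            bounds = [0, *breakpoints, samples.shape[1]]
            means = np.array([[s[a:b].mean() for a, b in zip(bounds, bounds[1:])]
                              for s in samples])
            labels = KMeans(n_clusters=k, n_init=10).fit_predict(means.reshape(-1, 1))
            return labels.reshape(means.shape)

        rng = np.random.default_rng(1)
        x = rng.normal(0.0, 0.1, (4, 1000))     # four synthetic aCGH profiles
        x[:, 300:400] += 1.0                    # a gain shared by all samples
        bps = common_breakpoints(x)
        print(bps, segment_states(x, bps)[0])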

  7. Normal Function of the Colon and Anorectal Area

    Science.gov (United States)

  8. The MIntAct project—IntAct as a common curation platform for 11 molecular interaction databases

    Science.gov (United States)

    Orchard, Sandra; Ammari, Mais; Aranda, Bruno; Breuza, Lionel; Briganti, Leonardo; Broackes-Carter, Fiona; Campbell, Nancy H.; Chavali, Gayatri; Chen, Carol; del-Toro, Noemi; Duesbury, Margaret; Dumousseau, Marine; Galeota, Eugenia; Hinz, Ursula; Iannuccelli, Marta; Jagannathan, Sruthi; Jimenez, Rafael; Khadake, Jyoti; Lagreid, Astrid; Licata, Luana; Lovering, Ruth C.; Meldal, Birgit; Melidoni, Anna N.; Milagros, Mila; Peluso, Daniele; Perfetto, Livia; Porras, Pablo; Raghunath, Arathi; Ricard-Blum, Sylvie; Roechert, Bernd; Stutz, Andre; Tognolli, Michael; van Roey, Kim; Cesareni, Gianni; Hermjakob, Henning

    2014-01-01

    IntAct (freely available at http://www.ebi.ac.uk/intact) is an open-source, open data molecular interaction database populated by data either curated from the literature or from direct data depositions. IntAct has developed a sophisticated web-based curation tool, capable of supporting both IMEx- and MIMIx-level curation. This tool is now utilized by multiple additional curation teams, all of whom annotate data directly into the IntAct database. Members of the IntAct team supply appropriate levels of training, perform quality control on entries and take responsibility for long-term data maintenance. Recently, the MINT and IntAct databases decided to merge their separate efforts to make optimal use of limited developer resources and maximize the curation output. All data manually curated by the MINT curators have been moved into the IntAct database at EMBL-EBI and are merged with the existing IntAct dataset. Both IntAct and MINT are active contributors to the IMEx consortium (http://www.imexconsortium.org). PMID:24234451

  9. Serum killing of Ureaplasma parvum shows serovar-determined susceptibility for normal individuals and common variable immuno-deficiency patients.

    Science.gov (United States)

    Beeton, Michael L; Daha, Mohamed R; El-Shanawany, Tariq; Jolles, Stephen R; Kotecha, Sailesh; Spiller, O Brad

    2012-02-01

    Many Gram-negative bacteria, unlike Gram-positive ones, are directly lysed by complement. Ureaplasma can cause septic arthritis and meningitis in immunocompromised individuals and induce premature birth. Ureaplasma has no cell wall, cannot be classified by Gram staining, and its serum susceptibility is unknown. Survival of Ureaplasma serovars (SV) 1, 3, 6 and 14 (collectively Ureaplasma parvum) was measured following incubation with normal or immunoglobulin-deficient patient serum (relative to heat-inactivated controls). A blocking monoclonal anti-C1q antibody and depletion of calcium, immunoglobulins, or lectins were used to determine the complement pathway responsible for killing. Eighty-three percent of normal sera killed SV1, 67% killed SV6 and 25% killed SV14; greater killing correlated with strong immunoblot identification of anti-Ureaplasma antibodies, and killing was abrogated following Protein A removal of IgG1. All normal sera killed SV3 in a C1q-dependent fashion, irrespective of immunoblot identification of anti-Ureaplasma antibodies; SV3 killing was unaffected by total IgG removal by Protein G, where complement activity was retained. Only one of four common variable immunodeficiency (CVID) patient sera failed to kill SV3, despite profound IgM and IgG deficiency in all of them; killing of SV3 and SV1 was restored with therapeutic intravenous immunoglobulin therapy. Only the classical complement pathway mediated Ureaplasma-cidal activity, sometimes in the absence of observable immunoblot-reactive bands. Copyright © 2011 Elsevier GmbH. All rights reserved.

  10. License - PGDBj - Ortholog DB | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available PGDBj - Ortholog DB: License to Use This Database. Last updated: 2017/03/07. The license specifies the terms regarding the use of this database and the requirements you must follow in using it. The license for this database is the Creative Commons Attribution-Share Alike 4.0 International. If you use data from this database, please be sure to attribute this database. With regard to this database, you are licensed to freely access part or whole of the database.

  11. RANCANGAN DATABASE SUBSISTEM PRODUKSI DENGAN PENDEKATAN SEMANTIC OBJECT MODEL

    Directory of Open Access Journals (Sweden)

    Oviliani Yenty Yuliana

    2002-01-01

    Full Text Available To compete in the global market, business practitioners in industry must obtain information quickly and accurately so that they can make precise decisions. Traditional cost accounting systems cannot provide sufficient information, so many industrial firms have shifted to the Activity-Based Costing (ABC) system. The ABC system is more complex and requires more data to be stored and processed, so it depends on information technology and a database more than the traditional cost accounting system does. Recent advances in software technology mean that building the application program is no longer the problem; the primary problem is how to design a database that presents information quickly and accurately. For that reason it is necessary to build a model first. This paper discusses database modelling with the semantic object model approach. This model is easier to use and generates a more normalized database design than the commonly used entity-relationship model approach.

  12. License - ConfC | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available ConfC: License to Use This Database. Last updated: 2016/09/20. The license specifies the terms regarding the use of this database and the requirements you must follow in using it. The license for this database is the Creative Commons Attribution-Share Alike 4.0 International. If you use data from this database, please be sure to attribute this database. With regard to this database, you are licensed to freely access part or whole of the database, acquire data, and freely redistribute part or whole of the data.

  13. A digital database of wrist bone anatomy and carpal kinematics.

    Science.gov (United States)

    Moore, Douglas C; Crisco, Joseph J; Trafton, Theodore G; Leventhal, Evan L

    2007-01-01

    The skeletal wrist consists of eight small, intricately shaped carpal bones. The motion of these bones is complex, occurs in three dimensions, and remains incompletely defined. Our previous efforts have been focused on determining the in vivo three-dimensional (3-D) kinematics of the normal and abnormal carpus. In so doing we have developed an extensive database of carpal bone anatomy and kinematics from a large number of healthy subjects. The purpose of this paper is to describe that database and to make it available to other researchers. CT volume images of both wrists from 30 healthy volunteers (15 males and 15 females) were acquired in multiple wrist positions throughout the normal range of wrist motion. The outer cortical surfaces of the carpal bones, radius and ulna, and proximal metacarpals were segmented and the 3-D motion of each bone was calculated for each wrist position. The database was constructed to include high-resolution surface models, measures of bone volume and shape, and the 3-D kinematics of each segmented bone. The database does not include soft tissues of the wrist. While there are numerous digital anatomical databases, this one is unique in that it includes a large number of subjects and it contains in vivo kinematic data as well as the bony anatomy.

  14. License - ChIP-Atlas | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available ChIP-Atlas: License to Use This Database. Last updated: 2016/06/24. The license specifies the terms regarding the use of this database and the requirements you must follow in using it. The license for this database is the Creative Commons Attribution-Share Alike 4.0 International. If you use data from this database, please be sure to attribute this database. With regard to this database, you are licensed to freely access part or whole of the database.

  15. pGenN, a Gene Normalization Tool for Plant Genes and Proteins in Scientific Literature

    Science.gov (United States)

    Ding, Ruoyao; Arighi, Cecilia N.; Lee, Jung-Youn; Wu, Cathy H.; Vijay-Shanker, K.

    2015-01-01

    Background: Automatically detecting gene/protein names in the literature and connecting them to database records, also known as gene normalization, provides a means to structure the information buried in free-text literature. Gene normalization is critical for improving the coverage of annotation in the databases, and is an essential component of many text mining systems and database curation pipelines. Methods: In this manuscript, we describe a gene normalization system specifically tailored for plant species, called pGenN (pivot-based Gene Normalization). The system consists of three steps: dictionary-based gene mention detection, species assignment, and intra-species normalization. We have developed new heuristics to improve each of these phases. Results: We evaluated the performance of pGenN on an in-house expertly annotated corpus consisting of 104 plant-relevant abstracts. Our system achieved an F-value of 88.9% (precision 90.9% and recall 87.2%) on this corpus, outperforming state-of-the-art systems presented in BioCreative III. We have processed over 440,000 plant-related Medline abstracts using pGenN. The gene normalization results are stored in a local database for direct query from the pGenN web interface (proteininformationresource.org/pgenn/). The annotated literature corpus is also publicly available through the PIR text mining portal (proteininformationresource.org/iprolink/). PMID:26258475
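
    The first of pGenN's three steps, dictionary-based gene mention detection, can be sketched in a few lines; the dictionary entries below are invented for illustration, and real systems add species assignment and disambiguation on top:

        import re

        # Tiny, hypothetical synonym dictionary: mention string -> database identifier.
        GENE_DICT = {
            "flavonol synthase": "AT5G08640",
            "fls": "AT5G08640",
            "chs": "AT5G13930",
        }

        def detect_mentions(text):
            # Case-insensitive whole-word dictionary lookup, longest names first.
            hits = []
            for name, gene_id in sorted(GENE_DICT.items(), key=lambda kv: -len(kv[0])):
                for m in re.finditer(rf"\b{re.escape(name)}\b", text, re.IGNORECASE):
                    hits.append((m.start(), m.group(0), gene_id))
            return sorted(hits)

        print(detect_mentions("Flavonol synthase (FLS) acts downstream of CHS."))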

  16. Connection of European particle therapy centers and generation of a common particle database system within the European ULICE-framework

    Directory of Open Access Journals (Sweden)

    Kessel Kerstin A

    2012-07-01

    Full Text Available Background: To establish a common database on particle therapy for the evaluation of clinical studies integrating a large variety of voluminous datasets, different documentation styles, and various information systems, especially in the field of radiation oncology. Methods: We developed a web-based documentation system for transnational and multicenter clinical studies in particle therapy. 560 patients were treated from November 2009 to September 2011. Protons, carbon ions or a combination of both, as well as a combination with photons, were applied. To date, 12 studies have been initiated and more are in preparation. Results: It is possible to immediately access all patient information and to exchange, store, process, and visualize text data, any DICOM images and multimedia data. Accessing the system and submitting clinical data is possible for internal and external users. Integrated into the hospital environment, data is imported both manually and automatically. Security and privacy protection as well as data validation and verification are ensured. Studies can be designed to fit individual needs. Conclusions: The described database provides a basis for documenting large patient groups with specific and specialized questions to be answered. Since electronic documentation began, it has become apparent that the benefits lie in the user-friendly and timely documentation workflow. The ultimate goal is a simplification of research work, better-quality study analyses and, eventually, the improvement of treatment concepts by evaluating the effectiveness of particle therapy.

  17. License - Dicty_cDB | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Dicty_cDB: License to Use This Database. Last updated: 2010/02/15. You may use this database in compliance with the licenses described below. The Standard License specifies the license terms regarding the use of this database and the requirements you must follow in using it; the Additional License specifies items that apply in addition to the Standard License. The Standard License for this database is the Creative Commons Attribution-Share Alike 2.1 Japan. If you use data from this database, please be sure to attribute this database.

  18. Database Description - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Trypanosomes Database: Database Description. General information: database name: Trypanosomes Database; maintained by the National Institute of Genetics, Research Organization of Information and Systems, Yata 1111, Mishima, Shizuoka 411-8540, Japan. Organism taxonomy: Trypanosoma (Taxonomy ID: 5690) and Homo sapiens (Taxonomy ID: 9606). External links include PDB (Protein Data Bank), the KEGG PATHWAY Database, and DrugPort. An entry list and query search are available, as are web services.

  19. License - RGP gmap98 | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available RGP gmap98: License to Use This Database. Last updated: 2015/02/12. The license specifies the terms regarding the use of this database and the requirements you must follow in using it. The license for this database is the Creative Commons Attribution-Share Alike 2.1 Japan. If you use data from this database, please be sure to attribute this database. With regard to this database, you are licensed to freely access part or whole of the database.

  20. License - D-HaploDB | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available D-HaploDB: License to Use This Database. Last updated: 2011/08/25. The license specifies the terms regarding the use of this database and the requirements you must follow in using it. The license for this database is the Creative Commons Attribution-Share Alike 2.1 Japan. If you use data from this database, please be sure to attribute this database. With regard to this database, you are licensed to freely access part or whole of the database and acquire data.

  1. A Relational Algebra Query Language for Programming Relational Databases

    Science.gov (United States)

    McMaster, Kirby; Sambasivam, Samuel; Anderson, Nicole

    2011-01-01

    In this paper, we describe a Relational Algebra Query Language (RAQL) and Relational Algebra Query (RAQ) software product we have developed that allows database instructors to teach relational algebra through programming. Instead of defining query operations using mathematical notation (the approach commonly taken in database textbooks), students…
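
    RAQL itself is a separate product whose syntax the record does not spell out, but the flavor of programming with relational algebra can be sketched in a few lines of Python, treating relations as lists of attribute-name/value dicts and using the textbook operator names:

        # sigma: keep rows satisfying a predicate
        def select(rel, pred):
            return [t for t in rel if pred(t)]

        # pi: keep named attributes, dropping duplicate rows
        def project(rel, attrs):
            seen = {tuple(t[a] for a in attrs) for t in rel}
            return [dict(zip(attrs, v)) for v in sorted(seen)]

        # natural join on shared attribute names
        def join(r, s):
            shared = r[0].keys() & s[0].keys()
            return [{**t, **u} for t in r for u in s
                    if all(t[a] == u[a] for a in shared)]

        emp  = [{"eid": 1, "name": "Ada",  "dept": 10},
                {"eid": 2, "name": "Alan", "dept": 20}]
        dept = [{"dept": 10, "dname": "R&D"}, {"dept": 20, "dname": "Ops"}]

        # pi_name( sigma_{dname = 'R&D'} ( emp natural-join dept ) )
        print(project(select(join(emp, dept), lambda t: t["dname"] == "R&D"),
                      ["name"]))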

  2. ATLAS database application enhancements using Oracle 11g

    CERN Document Server

    Dimitrov, G; The ATLAS collaboration; Blaszczyk, M; Sorokoletov, R

    2012-01-01

    The ATLAS experiment at LHC relies on databases for detector online data-taking, storage and retrieval of configurations, calibrations and alignments, post-data-taking analysis, file management over the grid, job submission and management, and condition data replication to remote sites. The Oracle Relational Database Management System (RDBMS) has addressed the ATLAS database requirements to a great extent for many years. Ten database clusters are currently deployed for the needs of the different applications, divided into production, integration and standby databases. The data volume, complexity and demands from the users are increasing steadily with time. Nowadays more than 20 TB of data are stored in the ATLAS production Oracle databases at CERN (not including the index overhead), but the most impressive number is the 260 hosted database schemas (in the most common case each schema is related to a dedicated client application with its own requirements). At the beginning of 2012 all ATLAS databases at CERN have...

  3. Update of the androgen receptor gene mutations database.

    Science.gov (United States)

    Gottlieb, B; Beitel, L K; Lumbroso, R; Pinsky, L; Trifiro, M

    1999-01-01

    The current version of the androgen receptor (AR) gene mutations database is described. The total number of reported mutations has risen from 309 to 374 during the past year. We have expanded the database by adding information on AR-interacting proteins; and we have improved the database by identifying those mutation entries that have been updated. Mutations of unknown significance have now been reported in both the 5' and 3' untranslated regions of the AR gene, and in individuals who are somatic mosaics constitutionally. In addition, single nucleotide polymorphisms, including silent mutations, have been discovered in normal individuals and in individuals with male infertility. A mutation hotspot associated with prostatic cancer has been identified in exon 5. The database is available on the internet (http://www.mcgill.ca/androgendb/), from EMBL-European Bioinformatics Institute (ftp.ebi.ac.uk/pub/databases/androgen), or as a Macintosh FilemakerPro or Word file (MC33@musica.mcgill.ca). Copyright 1999 Wiley-Liss, Inc.

  4. Database Description - SKIP Stemcell Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available SKIP Stemcell Database: Database Description. General information: database name: SKIP Stemcell Database. Contact address: http://www.skip.med.keio.ac.jp/en/contact/. Database classification: Human Genes and Diseases; Stemcell. Organism taxonomy: Homo sapiens (Taxonomy ID: 9606). Database maintenance site: Center for Medical Genetics, School of Medicine, Keio University. Web services: not available. User registration: not required.

  5. Database Description - Arabidopsis Phenome Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Arabidopsis Phenome Database: Database Description. General information: database name: Arabidopsis Phenome Database; maintained by the RIKEN BioResource Center (Hiroshi Masuya). Database classification: plant databases - Arabidopsis thaliana. Organism taxonomy: Arabidopsis thaliana (Taxonomy ID: 3702). Database description: the Arabidopsis Phenome Database integrates two novel databases on the Arabidopsis thaliana phenome and its effective application; one provides useful materials for experimental research, and the other, the "Database of Curated Plant Phenome", focuses on curated phenome information.

  6. License - Open TG-GATEs | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Open TG-GATEs: License to Use This Database. Last updated: 2012/05/24. You may use this database in compliance with the licenses described below. The Standard License specifies the license terms regarding the use of this database and the requirements you must follow in using it; the Additional License specifies items that apply in addition to the Standard License. The Standard License for this database is the Creative Commons Attribution-Share Alike 2.1 Japan. If you use data from this database, please be sure to attribute this database.

  7. A high-performance spatial database based approach for pathology imaging algorithm evaluation.

    Science.gov (United States)

    Wang, Fusheng; Kong, Jun; Gao, Jingjing; Cooper, Lee A D; Kurc, Tahsin; Zhou, Zhengwen; Adler, David; Vergara-Niedermayr, Cristobal; Katigbak, Bryan; Brat, Daniel J; Saltz, Joel H

    2013-01-01

    Algorithm evaluation provides a means to characterize variability across image analysis algorithms, validate algorithms by comparison with human annotations, combine results from multiple algorithms for performance improvement, and facilitate algorithm sensitivity studies. The sizes of images and image analysis results in pathology image analysis pose significant challenges in algorithm evaluation. We present an efficient parallel spatial database approach to model, normalize, manage, and query large volumes of analytical image result data. This provides an efficient platform for algorithm evaluation. Our experiments with a set of brain tumor images demonstrate the application, scalability, and effectiveness of the platform. The paper describes an approach and platform for evaluation of pathology image analysis algorithms. The platform facilitates algorithm evaluation through a high-performance database built on the Pathology Analytic Imaging Standards (PAIS) data model. The aims were to: (1) develop a framework to support algorithm evaluation by modeling and managing analytical results and human annotations from pathology images; (2) create a robust data normalization tool for converting, validating, and fixing spatial data from algorithm or human annotations; (3) develop a set of queries to support data sampling and result comparisons; and (4) achieve high-performance computation capacity via a parallel data management infrastructure, with parallel data loading and spatial indexing optimizations in this infrastructure. We have considered two scenarios for algorithm evaluation: (1) algorithm comparison, where multiple result sets from different methods are compared and consolidated; and (2) algorithm validation, where algorithm results are compared with human annotations. We have developed a spatial normalization toolkit to validate and normalize spatial boundaries produced by image analysis algorithms or human annotations. The validated data were formatted based on the PAIS data model and
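
    The spatial comparisons at the heart of such a platform can be illustrated with a toy example; here the shapely library stands in for the parallel spatial database, and the two boundaries are invented:

        from shapely.geometry import Polygon

        # Hypothetical nucleus boundaries: one produced by an algorithm,
        # one drawn by a human annotator.
        algo  = Polygon([(0, 0), (10, 0), (10, 10), (0, 10)])
        human = Polygon([(2, 2), (12, 2), (12, 12), (2, 12)])

        inter = algo.intersection(human).area
        union = algo.union(human).area
        print(f"Jaccard overlap = {inter / union:.3f}")  # 1.0 = perfect agreement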

  8. When overweight is the normal weight: an examination of obesity using a social media internet database.

    Science.gov (United States)

    Kuebler, Meghan; Yom-Tov, Elad; Pelleg, Dan; Puhl, Rebecca M; Muennig, Peter

    2013-01-01

    Using a large social media database, Yahoo Answers, we explored postings to an online forum in which posters asked whether their height and weight qualified them as "skinny," "thin," "fat," or "obese," over time and across forum topics. We used these data to better understand whether a higher-than-average body mass index (BMI) in one's county might, in some ways, be protective for one's mental and physical health. For instance, we explored whether higher proportions of obese people in one's county predict lower levels of bullying or of "am I fat?" questions from posters whose BMI is actually normal. Most women asking whether they were fat/obese were not actually fat/obese. Both men and women who were actually overweight/obese were significantly more likely in the future to ask for advice about bullying than thinner individuals. Moreover, as mean county-level BMI increased, bullying decreased and then increased again (in a U-shaped curve). Regardless of where they lived, posters who asked "am I fat?" and had a BMI in the healthy range were more likely than other posters to subsequently post on health problems, but the proportions of such posters also declined greatly as county-level BMI increased. Our findings suggest that, by some measures, obese people residing in counties with higher mean BMI may have better physical and mental health than obese people living in counties with lower mean BMI.

  9. License - BodyParts3D | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available BodyParts3D: License to Use This Database. Last updated: 2011/08/25. The license specifies the terms regarding the use of this database and the requirements you must follow in using it. The license for this database is the Creative Commons Attribution-Share Alike 2.1 Japan. If you use data from this database, please be sure to attribute this database. With regard to this database, you are licensed to freely access part or whole of the database.

  10. ADVICE--Educational System for Teaching Database Courses

    Science.gov (United States)

    Cvetanovic, M.; Radivojevic, Z.; Blagojevic, V.; Bojovic, M.

    2011-01-01

    This paper presents a Web-based educational system, ADVICE, that helps students to bridge the gap between database management system (DBMS) theory and practice. The usage of ADVICE is presented through a set of laboratory exercises developed to teach students conceptual and logical modeling, SQL, formal query languages, and normalization. While…
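
    The kind of normalization exercise such a system walks students through can be condensed into a small, self-contained example: a flat table with a transitive dependency is decomposed toward third normal form (the schema and data are invented for illustration):

        import sqlite3

        conn = sqlite3.connect(":memory:")

        # Unnormalized: dept_name depends on dept_id, not on the key emp_id
        # (a transitive dependency, so this table is not in 3NF).
        conn.execute("CREATE TABLE emp_flat (emp_id INTEGER PRIMARY KEY, "
                     "name TEXT, dept_id INTEGER, dept_name TEXT)")
        conn.executemany("INSERT INTO emp_flat VALUES (?,?,?,?)",
                         [(1, "Ada", 10, "R&D"), (2, "Alan", 10, "R&D"),
                          (3, "Grace", 20, "Ops")])

        # 3NF decomposition: move the dept_id -> dept_name dependency
        # into its own relation.
        conn.executescript("""
        CREATE TABLE dept (dept_id INTEGER PRIMARY KEY, dept_name TEXT);
        CREATE TABLE emp (emp_id INTEGER PRIMARY KEY, name TEXT,
                          dept_id INTEGER REFERENCES dept);
        INSERT INTO dept SELECT DISTINCT dept_id, dept_name FROM emp_flat;
        INSERT INTO emp  SELECT emp_id, name, dept_id FROM emp_flat;
        """)

        # The original rows are recoverable with a lossless join.
        print(conn.execute("SELECT e.name, d.dept_name "
                           "FROM emp e JOIN dept d USING (dept_id)").fetchall())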

  11. A review of fission gas release data within the Nea/IAEA IFPE database

    International Nuclear Information System (INIS)

    Turnbull, J.A.; Menut, P.; Sartori, E.

    2002-01-01

    The paper describes the International Fuel Performance Experimental (IFPE) database on nuclear fuel performance. The aim of the project is to provide a comprehensive and well-qualified database on Zr-clad UO2 fuel for model development and code validation in the public domain. The data encompass both normal and off-normal operation and include prototypic commercial irradiations as well as experiments performed in material testing reactors. To date, the database contains some 380 individual cases, the majority of which provide data on fission gas release (FGR) either from in-pile pressure measurements or from PIE techniques including puncturing, electron probe microanalysis (EPMA) and X-ray fluorescence (XRF) measurements. The paper outlines parameters affecting fission gas release and highlights individual datasets addressing these issues. (authors)

  12. Pro Oracle database 11g RAC on Linux

    CERN Document Server

    Shaw, Steve

    2010-01-01

    Pro Oracle Database 11g RAC on Linux provides full-life-cycle guidance on implementing Oracle Real Application Clusters in a Linux environment. Real Application Clusters, commonly abbreviated as RAC, is Oracle's industry-leading architecture for scalable and fault-tolerant databases. RAC allows you to scale up and down by simply adding and subtracting inexpensive Linux servers. Redundancy provided by those multiple, inexpensive servers is the basis for the failover and other fault-tolerance features that RAC provides. Written by authors well-known for their talent with RAC, Pro Oracle Database

  13. The Normal Fetal Pancreas.

    Science.gov (United States)

    Kivilevitch, Zvi; Achiron, Reuven; Perlman, Sharon; Gilboa, Yinon

    2017-10-01

    The aim of the study was to assess the sonographic feasibility of measuring the fetal pancreas and its normal development throughout pregnancy. We conducted a cross-sectional prospective study between 19 and 36 weeks' gestation. The study included singleton pregnancies with normal pregnancy follow-up. The pancreas circumference was measured. The first 90 cases were tested to assess feasibility. Two hundred ninety-seven fetuses of nondiabetic mothers were recruited during a 3-year period. The overall satisfactory visualization rate was 61.6%. The intraobserver and interobserver variability had high intraclass correlation coefficients of 0.964 and 0.967, respectively. A cubic polynomial regression best described the correlation of pancreas circumference with gestational age (r = 0.744; P < 0.001). Pancreas circumference percentiles for each week of gestation were calculated. During the study period, we detected 2 cases with overgrowth syndrome and 1 case with an annular pancreas. In this study, we assessed the feasibility of sonography for measuring the fetal pancreas and established a normal reference range for the fetal pancreas circumference throughout pregnancy. This database can be helpful when investigating fetomaternal disorders that can affect its normal development. © 2017 by the American Institute of Ultrasound in Medicine.
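
    A cubic polynomial regression of the kind reported can be reproduced in outline with numpy; the gestational ages and circumferences below are synthetic stand-ins, not the study's data:

        import numpy as np

        # Synthetic (gestational age in weeks, pancreas circumference in mm) pairs.
        rng = np.random.default_rng(0)
        ga = rng.uniform(19, 36, 150)
        circ = 0.004 * ga**3 - 0.1 * ga**2 + 3.0 * ga + rng.normal(0, 2, ga.size)

        coeffs = np.polyfit(ga, circ, deg=3)   # cubic polynomial regression
        fitted = np.polyval(coeffs, ga)
        r = np.corrcoef(circ, fitted)[0, 1]    # correlation of observed vs. fitted
        print(np.round(coeffs, 4), f"r = {r:.3f}")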

  14. Nomenclature and databases - The past, the present, and the future

    NARCIS (Netherlands)

    Jacobs, Jeffrey Phillip; Mavroudis, Constantine; Jacobs, Marshall Lewis; Maruszewski, Bohdan; Tchervenkov, Christo I.; Lacour-Gayet, Francois G.; Clarke, David Robinson; Gaynor, J. William; Spray, Thomas L.; Kurosawa, Hiromi; Stellin, Giovanni; Ebels, Tjark; Bacha, Emile A.; Walters, Henry L.; Elliott, Martin J.

    This review discusses the historical aspects, current state of the art, and potential future advances in the areas of nomenclature and databases for congenital heart disease. Five areas will be reviewed: (1) common language = nomenclature, (2) mechanism of data collection (database or registry) with

  15. The MIntAct project--IntAct as a common curation platform for 11 molecular interaction databases

    OpenAIRE

    Orchard, S; Ammari, M; Aranda, B; Breuza, L; Briganti, L; Broackes-Carter, F; Campbell, N; Chavali, G; Chen, C; del-Toro, N; Duesbury, M; Dumousseau, M; Galeota, E; Hinz, U; Iannuccelli, M

    2014-01-01

    IntAct (freely available at http://www.ebi.ac.uk/intact) is an open-source, open data molecular interaction database populated by data either curated from the literature or from direct data depositions. IntAct has developed a sophisticated web-based curation tool, capable of supporting both IMEx- and MIMIx-level curation. This tool is now utilized by multiple additional curation teams, all of whom annotate data directly into the IntAct database. Members of the IntAct team supply appropriate l...

  16. RANCANG BANGUN PERANGKAT LUNAK MANAJEMEN DATABASE SQL SERVER BERBASIS WEB

    Directory of Open Access Journals (Sweden)

    Muchammad Husni

    2005-01-01

    Full Text Available Microsoft SQL Server is a client/server desktop database server application: it has a client component, which displays and manipulates data, and a server component, which stores, retrieves, and secures databases. Management operations on all database servers in a network are performed by the database administrator using SQL Server's main administrative tool, Enterprise Manager. As a consequence, the database administrator can only perform these operations on computers on which Microsoft SQL Server has been installed. In this study, a web-based application was designed using ASP.Net to administer database servers. The application uses ADO.NET, which exploits Transact-SQL and stored procedures on the server to perform database management operations on a SQL database server and present them on the web. The database administrator can run this web-based application from any computer on the network and connect to the SQL database server using a web browser, which makes it easier for the administrator to carry out these tasks without having to use the server computer. Keywords: Transact-SQL, ASP.Net, ADO.NET, SQL Server
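
    In outline, the pattern the paper describes is a web tier issuing Transact-SQL and stored-procedure calls to the database server on the administrator's behalf. A minimal sketch, with Python's pyodbc standing in for ADO.NET and with placeholder server name and credentials:

        import pyodbc

        # Placeholder connection details; in the paper's design the web
        # application holds the connection, not the administrator's PC.
        conn = pyodbc.connect(
            "DRIVER={ODBC Driver 17 for SQL Server};SERVER=dbserver;"
            "DATABASE=master;UID=admin_user;PWD=secret"
        )

        # sp_databases is a built-in system stored procedure; a management
        # page might call it to list the databases hosted on the server.
        for row in conn.execute("EXEC sp_databases"):
            print(row.DATABASE_NAME)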

  17. Prevalence, Risk Factors, and Outcome of Myocardial Infarction with Angiographically Normal and Near-Normal Coronary Arteries: A Systematic Review and Meta-Analysis

    Directory of Open Access Journals (Sweden)

    Samad Ghaffari

    2016-12-01

    Full Text Available Context: Coronary artery diseases are mostly detected using angiographic methods that demonstrate the status of the arteries. Nevertheless, myocardial infarction (MI) may occur in the presence of angiographically normal coronary arteries. Therefore, this study aimed to investigate the prevalence of MI with normal angiography and its possible etiologies in a systematic review. Evidence Acquisition: In this meta-analysis, the required data were collected from the PubMed, Science Direct, Google Scholar, Scopus, Magiran, Scientific Information Database, and Medlib databases using the following keywords: "coronary angiography", "normal coronary arteries", "near-normal coronary arteries", "heart diseases", "coronary artery disease", "coronary disease", "cardiac troponin I", "myocardial infarction", "risk factor", "prevalence", "outcome", and their Persian equivalents. Comprehensive Meta-Analysis software, version 2, with a random-effects model, was then employed to determine the prevalence of each complication and perform the meta-analysis. P values less than 0.05 were considered statistically significant. Results: In total, 20 studies including 139,957 patients were entered into the analysis. The patients' mean age was 47.62 ± 6.63 years, and 64.4% of the patients were male. The prevalence of MI with normal or near-normal coronary arteries was 3.5% (95% CI: 2.2% to 5.7%). Additionally, smoking and a family history of cardiovascular diseases were the most important risk factors. The results showed no significant difference between MIs with normal angiography and those with 1- or 2-vessel involvement regarding the frequency of major adverse cardiac events (5.4% vs. 7.3%, P = 0.32). However, a significant difference was found between the patients with normal angiography and those with 3-vessel involvement in this regard (5.4% vs. 20.2%, P < 0.001). Conclusions: Although angiographic studies are required to assess the underlying

  18. A Review of Stellar Abundance Databases and the Hypatia Catalog Database

    Science.gov (United States)

    Hinkel, Natalie Rose

    2018-01-01

    The astronomical community is interested in elements from lithium to thorium, from solar twins to peculiarities of stellar evolution, because they give insight into different regimes of star formation and evolution. However, while some trends between elements and other stellar or planetary properties are well known, many other trends are not as obvious and are a point of conflict. For example, stars that host giant planets are found to be consistently enriched in iron, but the same cannot be definitively said for any other element. Therefore, it is time to take advantage of large stellar abundance databases in order to better understand not only the large-scale patterns, but also the more subtle, small-scale trends within the data. In this overview of the special session, I will present a review of large stellar abundance databases that are both currently available (e.g., RAVE, APOGEE) and soon to come online (e.g., Gaia-ESO, GALAH). Additionally, I will discuss the Hypatia Catalog Database (www.hypatiacatalog.com), which includes abundances from individual literature sources that observed stars within 150 pc. The Hypatia Catalog currently contains 72 elements as measured within ~6000 stars, with a total of ~240,000 unique abundance determinations. The online database offers a variety of solar normalizations, stellar properties, and planetary properties (where applicable) that can all be viewed through multiple interactive plotting interfaces as well as in a tabular format. By analyzing stellar abundances for large populations of stars and from a variety of different perspectives, a wealth of information can be revealed on both large and small scales.

  19. Experience with CANDID: Comparison algorithm for navigating digital image databases

    Energy Technology Data Exchange (ETDEWEB)

    Kelly, P.; Cannon, M.

    1994-10-01

    This paper presents results from the authors' experience with CANDID (Comparison Algorithm for Navigating Digital Image Databases), which was designed to facilitate image retrieval by content using a query-by-example methodology. A global signature describing the texture, shape, or color content is first computed for every image stored in a database, and a normalized similarity measure between probability density functions of feature vectors is used to match signatures. This method can be used to retrieve images from a database that are similar to a user-provided example image. Results for three test applications are included.

  20. Results of Database Studies in Spine Surgery Can Be Influenced by Missing Data.

    Science.gov (United States)

    Basques, Bryce A; McLynn, Ryan P; Fice, Michael P; Samuel, Andre M; Lukasiewicz, Adam M; Bohl, Daniel D; Ahn, Junyoung; Singh, Kern; Grauer, Jonathan N

    2017-12-01

    was computed and imputed. In the third regression, any variables with > 10% rate of missing data were removed from the regression; among variables with ≤ 10% missing data, individual cases with missing values were excluded. The results of these regressions were compared to determine how the different treatments of missing data could affect the results of spine studies using the ACS-NSQIP database. Of the 88,471 patients, as many as 4441 (5%) had missing elements among demographic data, 69,184 (72%) among comorbidities, 70,892 (80%) among preoperative laboratory values, and 56,551 (64%) among operating room times. Considering the three different treatments of missing data, we found different risk factors for adverse events. Of 44 risk factors found to be associated with adverse events in any analysis, only 15 (34%) of these risk factors were common among the three regressions. The second treatment of missing data (assuming "normal" value) found the most risk factors (40) to be associated with any adverse event, whereas the first treatment (deleting patients with missing data) found the fewest associations at 20. Among the risk factors associated with any adverse event, the 10 with the greatest effect size (odds ratio) by each regression were ranked. Of the 15 variables in the top 10 for any regression, six of these were common among all three lists. Differing treatments of missing data can influence the results of spine studies using the ACS-NSQIP. The current study highlights the importance of considering how such missing data are handled. Until there are better guidelines on the best approaches to handle missing data, investigators should report how missing data were handled to increase the quality and transparency of orthopaedic database research. Readers of large database studies should note whether handling of missing data was addressed and consider potential bias with high rates or unspecified or weak methods for handling missing data.
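
    The three treatments of missing data compared in the study map onto one-liners in a dataframe library. A compact pandas sketch with invented columns; median imputation stands in for the study's "assume a normal value" rule:

        import numpy as np
        import pandas as pd

        df = pd.DataFrame({"age":           [65, 72, np.nan, 58],
                           "albumin":       [3.9, np.nan, np.nan, 4.1],
                           "adverse_event": [0, 1, 0, 1]})

        # Treatment 1: complete-case analysis - drop any patient with missing data.
        t1 = df.dropna()

        # Treatment 2: impute missing values (here with the column median).
        t2 = df.fillna(df.median(numeric_only=True))

        # Treatment 3: drop variables with >10% missingness, then drop
        # the remaining incomplete cases.
        keep = df.columns[df.isna().mean() <= 0.10].tolist()
        t3 = df[keep].dropna()

        print(len(t1), len(t2), len(t3))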

  1. A user-friendly phytoremediation database: creating the searchable database, the users, and the broader implications.

    Science.gov (United States)

    Famulari, Stevie; Witz, Kyla

    2015-01-01

    Designers, students, teachers, gardeners, farmers, landscape architects, architects, engineers, homeowners, and others have uses for the practice of phytoremediation. This research looks at the creation of a phytoremediation database that is designed for ease of use by non-scientific users, as well as by students in an educational setting (http://www.steviefamulari.net/phytoremediation). During 2012, Environmental Artist & Professor of Landscape Architecture Stevie Famulari, with assistance from Kyla Witz, a landscape architecture student, created an online searchable database designed for high public accessibility. The database is a record of research on plant species that aid in the uptake of contaminants, including metals, organic materials, biodiesels & oils, and radionuclides. The database consists of multiple interconnected indexes categorized by common and scientific plant name, contaminant name, and contaminant type. It includes photographs, hardiness zones, specific plant qualities, full citations to the original research, and other relevant information intended to aid those designing with phytoremediation in their search for potential plants that may be used to address a site's needs. The objective of the terminology section is to remove uncertainty for inexperienced users and to clarify terms for a more user-friendly experience. Implications of the work, including education and ease of browsing, as well as use of the database in teaching, are discussed.

  2. A high-performance spatial database based approach for pathology imaging algorithm evaluation

    Directory of Open Access Journals (Sweden)

    Fusheng Wang

    2013-01-01

    Full Text Available Background: Algorithm evaluation provides a means to characterize variability across image analysis algorithms, validate algorithms by comparison with human annotations, combine results from multiple algorithms for performance improvement, and facilitate algorithm sensitivity studies. The sizes of images and image analysis results in pathology image analysis pose significant challenges in algorithm evaluation. We present an efficient parallel spatial database approach to model, normalize, manage, and query large volumes of analytical image result data. This provides an efficient platform for algorithm evaluation. Our experiments with a set of brain tumor images demonstrate the application, scalability, and effectiveness of the platform. Context: The paper describes an approach and platform for evaluation of pathology image analysis algorithms. The platform facilitates algorithm evaluation through a high-performance database built on the Pathology Analytic Imaging Standards (PAIS) data model. Aims: (1) Develop a framework to support algorithm evaluation by modeling and managing analytical results and human annotations from pathology images; (2) create a robust data normalization tool for converting, validating, and fixing spatial data from algorithm or human annotations; (3) develop a set of queries to support data sampling and result comparisons; (4) achieve high-performance computation capacity via a parallel data management infrastructure, with parallel data loading and spatial indexing optimizations. Materials and Methods: We considered two scenarios for algorithm evaluation: (1) algorithm comparison, where multiple result sets from different methods are compared and consolidated; and (2) algorithm validation, where algorithm results are compared with human annotations. We developed a spatial normalization toolkit to validate and normalize spatial boundaries produced by image analysis algorithms or human annotations.
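    By way of illustration, validating an algorithm boundary against a human annotation often reduces to a spatial overlap measure such as the Jaccard index; a minimal sketch with shapely (an illustration of the idea, not the platform's actual PAIS/SQL pipeline):

        from shapely.geometry import Polygon

        def jaccard(a: Polygon, b: Polygon) -> float:
            # Area of overlap divided by area of union of two segmentation boundaries.
            union = a.union(b).area
            return a.intersection(b).area / union if union > 0 else 0.0

        algorithm_boundary = Polygon([(0, 0), (10, 0), (10, 10), (0, 10)])
        human_annotation = Polygon([(5, 5), (15, 5), (15, 15), (5, 15)])
        print(jaccard(algorithm_boundary, human_annotation))  # 25/175 ~= 0.14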

  3. NNDC database migration project

    Energy Technology Data Exchange (ETDEWEB)

    Burrows, Thomas W; Dunford, Charles L [U.S. Department of Energy, Brookhaven Science Associates (United States)

    2004-03-01

    NNDC database migration was necessary to replace obsolete hardware and software, to become compatible with the industry standard in relational databases (mature software, a large base of supporting software for administration, dissemination, replication, and synchronization) and to improve user access in terms of interface and speed. The new system is built on the Sybase Adaptive Server Enterprise (ASE) Relational Database Management System (RDBMS); because it uses standard Structured Query Language (SQL) and administrative tools written in Java, it is relatively easy to move to different RDB systems (e.g., MySQL, MS SQL-Server, or MS Access), and Linux or UNIX platforms can be used. The existing ENSDF datasets are often very large and will need to be reworked, and both the CRP (adopted) and CRP (Budapest) datasets give elemental cross sections (not relative Iγ) in the RI field (so it is not immediately obvious which of the old values has been changed). But primary and secondary intensities are now available on the same scale; the intensity normalization has been done for us. We will gain access to a large volume of data from Budapest, and some of those gamma-ray intensity and energy data will be superior to what we already have.

  4. Managing Consistency Anomalies in Distributed Integrated Databases with Relaxed ACID Properties

    DEFF Research Database (Denmark)

    Frank, Lars; Ulslev Pedersen, Rasmus

    2014-01-01

    In central databases the consistency of data is normally implemented by using the ACID (Atomicity, Consistency, Isolation and Durability) properties of a DBMS (Data Base Management System). This is not possible if distributed and/or mobile databases are involved and the availability of data also has to be optimized. Therefore, we will in this paper use so-called relaxed ACID properties across different locations. The objective of designing relaxed ACID properties across different database locations is that the users can trust the data they use even if the distributed database temporarily is inconsistent. It is also important that disconnected locations can operate in a meaningful way in so-called disconnected mode. A database is DBMS consistent if its data complies with the consistency rules of the DBMS's metadata. If the database is DBMS consistent both when a transaction starts and when it has...

  5. When overweight is the normal weight: an examination of obesity using a social media internet database.

    Directory of Open Access Journals (Sweden)

    Meghan Kuebler

    Full Text Available Using a large social media database, Yahoo Answers, we explored postings to an online forum in which posters asked whether their height and weight qualified them as "skinny," "thin," "fat," or "obese," over time and across forum topics. We used these data to better understand whether a higher-than-average body mass index (BMI) in one's county might, in some ways, be protective of one's mental and physical health. For instance, we explored whether higher proportions of obese people in one's county predict lower levels of bullying or of "am I fat?" questions from those with a normal BMI, relative to their actual BMI. Most women asking whether they were fat/obese were not actually fat/obese. Both men and women who were actually overweight/obese were significantly more likely in the future to ask for advice about bullying than thinner individuals. Moreover, as mean county-level BMI increased, bullying decreased and then increased again (in a U-shaped curve). Regardless of where they lived, posters who asked "am I fat?" and had a BMI in the healthy range were more likely than other posters to subsequently post on health problems, but the proportion of such posters also declined greatly as county-level BMI increased. Our findings suggest that obese people residing in counties with higher levels of BMI may have better physical and mental health than obese people living in counties with lower levels of BMI by some measures, but these improvements are modest.

  6. FCDD: A Database for Fruit Crops Diseases.

    Science.gov (United States)

    Chauhan, Rupal; Jasrai, Yogesh; Pandya, Himanshu; Chaudhari, Suman; Samota, Chand Mal

    2014-01-01

    Fruit Crops Diseases Database (FCDD) requires a number of biotechnology and bioinformatics tools. The FCDD is a unique bioinformatics resource that compiles detailed information on 162 fruit crop diseases, including disease type, causal organism, images, symptoms, and control measures. The FCDD also contains 171 phytochemicals from 25 fruits, their 2D images and 20 possible sequences. This information has been manually extracted and verified from numerous sources, including other electronic databases, textbooks and scientific journals. FCDD is fully searchable and supports extensive text search. The main focus of the FCDD is on providing possible information on fruit crop diseases, which will help in the discovery of potential drugs from one of the most common bioresources, fruits. The database was developed using MySQL. The database interface is developed in PHP, HTML and Java. FCDD is freely available. http://www.fruitcropsdd.com/

  7. Assessment of 1H NMR-based metabolomics analysis for normalization of urinary metals against creatinine.

    Science.gov (United States)

    Cassiède, Marc; Nair, Sindhu; Dueck, Meghan; Mino, James; McKay, Ryan; Mercier, Pascal; Quémerais, Bernadette; Lacy, Paige

    2017-01-01

    Proton nuclear magnetic resonance (1H NMR, or NMR) spectroscopy and inductively coupled plasma-mass spectrometry (ICP-MS) are commonly used for metabolomics and metal analysis in urine samples. However, creatinine quantification by NMR for the purpose of normalization of urinary metals has not been validated. We assessed the validity of using NMR analysis for creatinine quantification in human urine samples in order to allow normalization of urinary metal concentrations. NMR and ICP-MS techniques were used to measure metabolite and metal concentrations in urine samples from 10 healthy subjects. For metabolite analysis, two magnetic field strengths (600 and 700 MHz) were utilized. In addition, creatinine concentrations were determined by using the Jaffe method. Creatinine levels were strongly correlated (R² = 0.99) between the NMR and Jaffe methods. The NMR spectra were deconvoluted with a target database containing 151 metabolites that are present in urine. A total of 50 metabolites showed good correlation (R² = 0.7-1.0) at 600 and 700 MHz. Metal concentrations determined after NMR-measured creatinine normalization were comparable to previous reports. NMR analysis provided robust urinary creatinine quantification, and was sufficient for normalization of urinary metal concentrations. We found that NMR-measured creatinine-normalized urinary metal concentrations in our control subjects were similar to general population levels in Canada and the United Kingdom.
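    The normalization step itself is a simple ratio; a sketch with hypothetical values (units assumed to be µg/L for the metal and g/L for creatinine):

        import numpy as np

        def creatinine_normalize(metal_ug_per_l, creatinine_g_per_l):
            # Express a urinary metal concentration as ug per g of creatinine.
            return np.asarray(metal_ug_per_l) / np.asarray(creatinine_g_per_l)

        # Hypothetical cadmium measurements (ug/L) and NMR creatinine (g/L):
        cadmium = np.array([0.4, 0.9, 1.3])
        creatinine = np.array([0.8, 1.5, 2.1])
        print(creatinine_normalize(cadmium, creatinine))  # -> ug/g creatinine

        # Agreement between two creatinine assays reported as R^2 (hypothetical data):
        nmr = np.array([0.80, 1.20, 1.50, 2.10, 0.90])
        jaffe = np.array([0.82, 1.18, 1.55, 2.05, 0.88])
        print(np.corrcoef(nmr, jaffe)[0, 1] ** 2)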

  8. Database implementation to fluidized cracking catalytic-FCC process

    International Nuclear Information System (INIS)

    Santana, Antonio Otavio de; Dantas, Carlos Costa; Santos, Valdemir A. dos

    2009-01-01

    A Fluidized Cracking Catalytic (FCC) process was developed by our research group. A cold-model FCC unit, at laboratory scale, was used to obtain data on the following parameters: air flow, system pressure, riser inlet pressure, riser outlet pressure, pressure drop in the riser, motor speed of catalyst injection, and density. Density is measured by gamma-ray transmission. Because the FCC process had no database until then, the present work filled this gap by implementing a database connected to the Matlab software. The data from the FCC unit (laboratory model) are obtained as MS-Excel spreadsheets. These spreadsheets were treated before being imported as database tables. Applying database normalization and analyzing the treated spreadsheets with MS-Access revealed that a single relation (table) suffices to represent the database. MS-Access was chosen as the Database Management System (DBMS) because it satisfies our data flow. The next step was the creation of the database: building the data table, the action query, the selection query, and the macro for importing data from the FCC unit under study. An interface between the 'Database Toolbox' application (Matlab 2008a) and the database was also created, using ODBC (Open Database Connectivity) drivers. This interface allows users working in Matlab to manipulate the database. (author)

  9. Database implementation to fluidized cracking catalytic-FCC process

    Energy Technology Data Exchange (ETDEWEB)

    Santana, Antonio Otavio de; Dantas, Carlos Costa, E-mail: aos@ufpe.b [Universidade Federal de Pernambuco (UFPE), Recife, PE (Brazil). Dept. de Energia Nuclear; Santos, Valdemir A. dos, E-mail: valdemir.alexandre@pq.cnpq.b [Universidade Catolica de Pernambuco, Recife, PE (Brazil). Centro de Ciencia e Tecnologia

    2009-07-01

    A Fluidized Cracking Catalytic (FCC) process was developed by our research group. A cold-model FCC unit, at laboratory scale, was used to obtain data on the following parameters: air flow, system pressure, riser inlet pressure, riser outlet pressure, pressure drop in the riser, motor speed of catalyst injection, and density. Density is measured by gamma-ray transmission. Because the FCC process had no database until then, the present work filled this gap by implementing a database connected to the Matlab software. The data from the FCC unit (laboratory model) are obtained as MS-Excel spreadsheets. These spreadsheets were treated before being imported as database tables. Applying database normalization and analyzing the treated spreadsheets with MS-Access revealed that a single relation (table) suffices to represent the database. MS-Access was chosen as the Database Management System (DBMS) because it satisfies our data flow. The next step was the creation of the database: building the data table, the action query, the selection query, and the macro for importing data from the FCC unit under study. An interface between the 'Database Toolbox' application (Matlab 2008a) and the database was also created, using ODBC (Open Database Connectivity) drivers. This interface allows users working in Matlab to manipulate the database. (author)

  10. An individual urinary proteome analysis in normal human beings to define the minimal sample number to represent the normal urinary proteome

    Directory of Open Access Journals (Sweden)

    Liu Xuejiao

    2012-11-01

    Full Text Available Background: The urinary proteome has been widely used for biomarker discovery. A urinary proteome database from normal humans can provide a background for discovery proteomics and candidate proteins/peptides for targeted proteomics. Therefore, it is necessary to define the minimum number of individuals required for sampling to represent the normal urinary proteome. Methods: In this study, inter-individual and inter-gender variations of the urinary proteome were taken into consideration to achieve a representative database. An individual analysis was performed on overnight urine samples from 20 normal volunteers (10 males and 10 females) by 1DLC/MS/MS. To obtain a representative result for each sample, a replicate 1DLC/MS/MS analysis was performed. The minimal sample number was estimated by statistical analysis. Results: For qualitative analysis, less than 5% of new proteins/peptides were identified in a male/female normal group by adding a new sample once the sample number exceeded nine. In addition, in a normal group, the percentage of newly identified proteins/peptides was less than 5% upon adding a new sample once the sample number reached 10. Furthermore, a statistical analysis indicated that urinary proteomes from normal males and females showed different patterns. For quantitative analysis, the variation of protein abundance was defined by spectrum count and western blotting methods, and the minimal sample number for quantitative proteomic analysis was then identified. Conclusions: For qualitative analysis, when considering the inter-individual and inter-gender variations, the minimum sample number is 10, with a balanced number of males and females, in order to obtain a representative normal human urinary proteome. For quantitative analysis, the minimal sample number is much greater than that for qualitative analysis and depends on the experimental methods used for quantification.
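    The qualitative saturation criterion described above (fewer than 5% new identifications per added subject) can be expressed compactly; a sketch in which each subject is represented by the set of protein IDs identified in that subject (hypothetical input):

        def new_identification_fractions(protein_sets):
            # For each added subject, the fraction of its identified proteins
            # that were not seen in any previously added subject.
            seen, fractions = set(), []
            for proteins in protein_sets:
                new = proteins - seen
                fractions.append(len(new) / max(len(proteins), 1))
                seen |= proteins
            return fractions

        # The minimal sample number is where the fraction stays below 0.05, e.g.:
        subjects = [{"A", "B", "C"}, {"B", "C", "D"}, {"A", "C", "D"}, {"B", "C", "D"}]
        print(new_identification_fractions(subjects))  # [1.0, 0.33, 0.0, 0.0]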

  11. Structure and needs of global loss databases about natural disaster

    Science.gov (United States)

    Steuer, Markus

    2010-05-01

    Global loss databases are used for trend analyses and statistics in scientific projects, in studies for governmental and nongovernmental organizations, and in the insurance and finance industry as well. At the moment three global data sets are established: EM-DAT (CRED), Sigma (Swiss Re) and NatCatSERVICE (Munich Re). Together with the Asian Disaster Reduction Center (ADRC) and the United Nations Development Program (UNDP), these institutions started a collaborative initiative in 2007 with the aim to agree on and implement a common "Disaster Category Classification and Peril Terminology for Operational Databases". This common classification has been established through several technical meetings and working groups and represents a first and important step in the development of a standardized international classification of disasters and terminology of perils. Concretely, this means setting up a common hierarchy and terminology for all global and regional databases on natural disasters and establishing a common and agreed definition of disaster groups, main types and sub-types of events. Georeferencing, temporal aspects, methodology and sourcing were other issues that were identified and will be discussed. The new structure for global loss databases is already implemented for Munich Re NatCatSERVICE. In the following oral session we will show the structure of the global databases as defined and, in addition, give more transparency on the data sets behind published statistics and analyses. The special focus will be on the catastrophe classification, from a moderate loss event up to a great natural catastrophe, on the quality of sources, and on inside information about the assessment of overall and insured losses. Keywords: disaster category classification, peril terminology, overall and insured losses, definition

  12. CMLOG: A common message logging system

    International Nuclear Information System (INIS)

    Chen, J.; Akers, W.; Bickley, M.; Wu, D.; Watson, W. III

    1997-01-01

    The Common Message Logging (CMLOG) system is an object-oriented and distributed system that not only allows applications and systems to log data (messages) of any type into a centralized database but also lets applications view incoming messages in real time or retrieve stored data from the database according to selection rules. It consists of a concurrent Unix server that handles incoming logging or searching messages, a Motif browser that can view incoming messages in real time or display stored data from the database, a client daemon that buffers and sends logging messages to the server, and libraries that can be used by applications to send data to or retrieve data from the database via the server. This paper presents the design and implementation of the CMLOG system and also addresses the issue of integrating CMLOG into existing control systems.

  13. Developing of corrosion and creep property test database system

    International Nuclear Information System (INIS)

    Park, S. J.; Jun, I.; Kim, J. S.; Ryu, W. S.

    2004-01-01

    The corrosion and creep characteristics database systems were constructed using data produced from corrosion and creep tests, and were designed to share data and programs with the tensile, impact, and fatigue characteristics databases constructed since 2001, as well as with other characteristics databases to be constructed in the future. We can easily retrieve basic data from the corrosion and creep characteristics database systems when preparing a new experiment, and can produce higher-quality results by comparison with previous test results. Constructing such a database requires thorough analysis and design, after which we can best serve customers' various requirements. In this thesis, we describe the analysis, design and development of the corrosion and creep characteristics database systems, developed for the internet using the JSP (Java Server Pages) tool.

  14. Developing of impact and fatigue property test database system

    International Nuclear Information System (INIS)

    Park, S. J.; Jun, I.; Kim, D. H.; Ryu, W. S.

    2003-01-01

    The impact and fatigue characteristics database systems were constructed using data produced from impact and fatigue tests, and were designed to share data and programs with the tensile characteristics database constructed in 2001, as well as with other characteristics databases to be constructed in the future. We can easily retrieve basic data from the impact and fatigue characteristics database systems when preparing a new experiment, and can produce higher-quality results by comparison with previous data. Constructing such a database requires thorough analysis and design, after which we can best serve customers' various requirements. In this thesis, we describe the analysis, design and development of the impact and fatigue characteristics database systems, developed for the internet using the JSP (Java Server Pages) tool.

  15. THE EXTRAGALACTIC DISTANCE DATABASE

    International Nuclear Information System (INIS)

    Tully, R. Brent; Courtois, Helene M.; Jacobs, Bradley A.; Rizzi, Luca; Shaya, Edward J.; Makarov, Dmitry I.

    2009-01-01

    A database can be accessed on the Web at http://edd.ifa.hawaii.edu that was developed to promote access to information related to galaxy distances. The database has three functional components. First, tables from many literature sources have been gathered and enhanced with links through a distinct galaxy naming convention. Second, comparisons of results both at the levels of parameters and of techniques have begun and are continuing, leading to increasing homogeneity and consistency of distance measurements. Third, new material is presented arising from ongoing observational programs at the University of Hawaii 2.2 m telescope, radio telescopes at Green Bank, Arecibo, and Parkes and with the Hubble Space Telescope. This new observational material is made available in tandem with related material drawn from archives and passed through common analysis pipelines.

  16. Database Description - Yeast Interacting Proteins Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database name: Yeast Interacting Proteins Database. DOI: 10.18908/lsdba.nbdc00742-000. Creator contact: Chiba-ken 277-8561; Tel: +81-4-7136-3989; FAX: +81-4-7136-3979. Organism: Saccharomyces cerevisiae (Taxonomy ID: 4932). Database description: information on interactions and related information. Reference: Proc Natl Acad Sci U S A. 2001 Apr 10;98(8):4569-74. Epub 2001 Mar 13.

  17. Summary of earthquake experience database

    International Nuclear Information System (INIS)

    1999-01-01

    Strong-motion earthquakes frequently occur throughout the Pacific Basin, where power plants or industrial facilities are included in the affected areas. By studying the performance of these earthquake-affected (or database) facilities, a large inventory of various types of equipment installations can be compiled that have experienced substantial seismic motion. The primary purposes of the seismic experience database are summarized as follows: to determine the most common sources of seismic damage, or adverse effects, on equipment installations typical of industrial facilities; to determine the thresholds of seismic motion corresponding to various types of seismic damage; to determine the general performance of equipment during earthquakes, regardless of the levels of seismic motion; to determine minimum standards in equipment construction and installation, based on past experience, to assure the ability to withstand anticipated seismic loads. To summarize, the primary assumption in compiling an experience database is that the actual seismic hazard to industrial installations is best demonstrated by the performance of similar installations in past earthquakes

  18. Update History of This Database - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Trypanosomes Database update history: 2011/04/04 Trypanosomes Database ( http://www.tanpaku.org/tdb/ ) opened. 2014/02/04 Trypanosomes Database English archive site opened. 2014/05/07 Contact information corrected; the description of the database's features and manner of utilization corrected.

  19. Fund Finder: A case study of database-to-ontology mapping

    OpenAIRE

    Barrasa Rodríguez, Jesús; Corcho, Oscar; Gómez-Pérez, A.

    2003-01-01

    The mapping between databases and ontologies is a basic problem when trying to "upgrade" deep web content to the semantic web. Our approach suggests the declarative definition of mappings as a way to achieve domain independence and reusability. A specific language (expressive enough to cover real-world mapping situations such as lightly structured databases or databases not in first normal form) is defined for this purpose. Along with this mapping description language, the ODEMapster processor is in ...

  20. European multicentre database of healthy controls for [(123)I]FP-CIT SPECT (ENC-DAT)

    DEFF Research Database (Denmark)

    Varrone, Andrea; Dickson, John C; Tossici-Bolt, Livia

    2013-01-01

    A prerequisite of quantification is the availability of normative data, considering possible age and gender effects on DAT availability. The aim of the European Normal Control Database of DaTSCAN (ENC-DAT) study was to generate a large database of [(123)I]FP-CIT SPECT scans in healthy controls.

  1. Molecule database framework: a framework for creating database applications with chemical structure search capability.

    Science.gov (United States)

    Kiener, Joos

    2013-12-11

    Research in organic chemistry generates samples of novel chemicals together with their properties and other related data. The involved scientists must be able to store this data and search it by chemical structure. There are commercial solutions for common needs like chemical registration systems or electronic lab notebooks. However, for the specific requirements of in-house databases and processes no such solutions exist. Another issue is that commercial solutions carry the risk of vendor lock-in and may require an expensive license for a proprietary relational database management system. To speed up and simplify the development of applications that require chemical structure search capabilities, I have developed Molecule Database Framework. The framework abstracts the storing and searching of chemical structures into method calls. Therefore software developers do not require extensive knowledge about chemistry and the underlying database cartridge. This decreases application development time. Molecule Database Framework is written in Java and I created it by integrating existing free and open-source tools and frameworks. The core functionality includes:
    • Support for multi-component compounds (mixtures)
    • Import and export of SD-files
    • Optional security (authorization)
    For chemical structure searching, Molecule Database Framework leverages the capabilities of the Bingo Cartridge for PostgreSQL and provides type-safe searching, caching, transactions and optional method-level security. Furthermore, the design of the entity classes and the reasoning behind it are explained. By means of a simple web application I describe how the framework could be used, and I benchmarked this example application to create some basic performance expectations for chemical structure searches and the import and export of SD-files.
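    The framework itself exposes structure search as Java method calls over the Bingo cartridge; purely as an illustration of what a substructure search does, here is an analogous sketch using the RDKit in Python (not the framework's actual API):

        from rdkit import Chem

        # A toy in-memory "database" of molecules stored as SMILES strings.
        database = [Chem.MolFromSmiles(s) for s in
                    ["c1ccccc1O", "CCO", "c1ccccc1C(=O)O", "CC(=O)Nc1ccc(O)cc1"]]
        query = Chem.MolFromSmarts("c1ccccc1O")  # phenol substructure pattern

        hits = [Chem.MolToSmiles(m) for m in database if m.HasSubstructMatch(query)]
        print(hits)  # the phenol-containing entries match; plain ethanol does not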

  2. PERANCANGAN DATABASE TELECENTER - JATIM BERBASIS WEB

    Directory of Open Access Journals (Sweden)

    Erma Suryani

    2006-01-01

    Full Text Available In line with the vision of BPDE Jatim to become the leading technical agency in managing data and information based on the use of information technology, the agency needs to establish an integrated Provincial Data Center to support good governance, development, and services to the public. The design was implemented with a MySQL database and PHP, which can do everything a CGI program can, such as collecting data from forms, generating dynamic web page content, and receiving cookies. With this application it is expected that BPDE will be able to provide and disseminate information to the government and the public using information systems and telematics, in order to create an information culture. Keywords: Database, MySQL, BPDE (Badan Pengolahan Data Elektronik), PHP

  3. Congenital anomalies and normal skeletal variants

    International Nuclear Information System (INIS)

    Guebert, G.M.; Yochum, T.R.; Rowe, L.J.

    1987-01-01

    Congenital anomalies and normal skeletal variants are a common occurrence in clinical practice. In this chapter a large number of skeletal anomalies of the spine and pelvis are reviewed. Some of the more common skeletal anomalies of the extremities are also presented. The second section of this chapter deals with normal skeletal variants. Some of these variants may simulate certain disease processes. In some instances there are no clear-cut distinctions between skeletal variants and anomalies; therefore, there may be some overlap of material. The congenital anomalies are presented initially with accompanying text, photos, and references, beginning with the skull and proceeding caudally through the spine to then include the pelvis and extremities. The normal skeletal variants section is presented in an anatomical atlas format without text or references

  4. Benchmarking database performance for genomic data.

    Science.gov (United States)

    Khushi, Matloob

    2015-06-01

    Genomic regions represent features such as gene annotations, transcription factor binding sites and epigenetic modifications. Performing various genomic operations, such as identifying overlapping/non-overlapping regions or nearest gene annotations, is a common research need. The data can be saved in a database system for easy management; however, there is at present no comprehensive built-in database algorithm to identify overlapping regions. Therefore I have developed a novel region-mapping (RegMap) SQL-based algorithm to perform genomic operations and have benchmarked the performance of different databases. Benchmarking identified that PostgreSQL extracts overlapping regions much faster than MySQL. Insertion and data uploads in PostgreSQL were also better, although the general searching capability of both databases was almost equivalent. In addition, using the algorithm, pair-wise overlaps of >1000 datasets of transcription factor binding sites and histone marks, collected from previous publications, were reported, and it was found that HNF4G significantly co-locates with cohesin subunit STAG1 (SA1).
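    The core of such a region-overlap operation is an interval join; a sketch of the SQL idea (an illustrative schema, not the published RegMap code), using SQLite from Python:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE peaks (id INTEGER, chrom TEXT, start INTEGER, stop INTEGER);
            CREATE TABLE genes (id INTEGER, chrom TEXT, start INTEGER, stop INTEGER);
            CREATE INDEX peaks_idx ON peaks(chrom, start, stop);
        """)
        # Two half-open intervals overlap iff each starts before the other ends:
        overlaps = conn.execute("""
            SELECT p.id, g.id
            FROM peaks p JOIN genes g
              ON p.chrom = g.chrom AND p.start < g.stop AND g.start < p.stop
        """).fetchall()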

  5. Development of Vision Based Multiview Gait Recognition System with MMUGait Database

    Directory of Open Access Journals (Sweden)

    Hu Ng

    2014-01-01

    Full Text Available This paper describes the acquisition setup and development of a new gait database, MMUGait. This database consists of 82 subjects walking under normal conditions and 19 subjects walking with 11 covariate factors, captured under two views. This paper also proposes a multiview model-based gait recognition system with a joint detection approach that performs well under different walking trajectories and covariate factors, including self-occluded or externally occluded silhouettes. In the proposed system, the process begins by enhancing the human silhouette to remove artifacts. Next, the width and height of the body are obtained. Subsequently, the joint angular trajectories are determined once the body joints are automatically detected. Lastly, the crotch height and step size of the walking subject are determined. The extracted features are smoothed by a Gaussian filter to eliminate the effect of outliers, normalized with linear scaling, and then subjected to feature selection prior to classification. The classification experiments carried out on the MMUGait database were benchmarked against the SOTON Small DB from the University of Southampton. Results showed correct classification rates above 90% for all the databases. The proposed approach is found to outperform other approaches on SOTON Small DB in most cases.
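    The feature-conditioning steps (Gaussian smoothing followed by linear scaling) might be sketched as follows in Python; the sigma and target range are hypothetical choices, as the paper's exact parameters are not given here:

        import numpy as np
        from scipy.ndimage import gaussian_filter1d

        def condition_feature(trajectory, sigma=2.0, lo=0.0, hi=1.0):
            # Smooth a joint-angle trajectory to suppress outliers, then
            # rescale it linearly into [lo, hi].
            smooth = gaussian_filter1d(np.asarray(trajectory, dtype=float), sigma)
            mn, mx = smooth.min(), smooth.max()
            if mx == mn:
                return np.full_like(smooth, lo)
            return lo + (hi - lo) * (smooth - mn) / (mx - mn)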

  6. Learning lessons from Natech accidents - the eNATECH accident database

    Science.gov (United States)

    Krausmann, Elisabeth; Girgin, Serkan

    2016-04-01

    When natural hazards impact industrial facilities that house or process hazardous materials, fires, explosions and toxic releases can occur. This type of accident is commonly referred to as Natech accident. In order to prevent the recurrence of accidents or to better mitigate their consequences, lessons-learned type studies using available accident data are usually carried out. Through post-accident analysis, conclusions can be drawn on the most common damage and failure modes and hazmat release paths, particularly vulnerable storage and process equipment, and the hazardous materials most commonly involved in these types of accidents. These analyses also lend themselves to identifying technical and organisational risk-reduction measures that require improvement or are missing. Industrial accident databases are commonly used for retrieving sets of Natech accident case histories for further analysis. These databases contain accident data from the open literature, government authorities or in-company sources. The quality of reported information is not uniform and exhibits different levels of detail and accuracy. This is due to the difficulty of finding qualified information sources, especially in situations where accident reporting by the industry or by authorities is not compulsory, e.g. when spill quantities are below the reporting threshold. Data collection has then to rely on voluntary record keeping often by non-experts. The level of detail is particularly non-uniform for Natech accident data depending on whether the consequences of the Natech event were major or minor, and whether comprehensive information was available for reporting. In addition to the reporting bias towards high-consequence events, industrial accident databases frequently lack information on the severity of the triggering natural hazard, as well as on failure modes that led to the hazmat release. This makes it difficult to reconstruct the dynamics of the accident and renders the development of

  7. Statistical analysis of the ASME KIc database

    International Nuclear Information System (INIS)

    Sokolov, M.A.

    1998-01-01

    The American Society of Mechanical Engineers (ASME) KIc curve is a function of test temperature (T) normalized to a reference nil-ductility temperature, RTNDT, namely, T - RTNDT. It was constructed as the lower boundary to the available KIc database. Being a lower bound to a unique but limited database, the ASME KIc curve concept does not address probability. However, a continuing evolution of fracture mechanics advances has led to employment of the Weibull distribution function to model the scatter of fracture toughness values in the transition range. The Weibull statistic/master curve approach was applied to analyze the current ASME KIc database. It is shown that the Weibull distribution function models the scatter in KIc data from different materials very well, while the temperature dependence is described by the master curve. Probabilistic-based tolerance-bound curves are suggested to describe lower-bound KIc values
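    For readers unfamiliar with the approach, the master curve gives the median toughness as a function of T - T0, with scatter modeled by a three-parameter Weibull distribution (shape 4, threshold 20 MPa√m); a sketch of the standard ASTM E1921-style formulation, which may differ in detail from the paper's analysis:

        import numpy as np

        def kjc_median(T, T0):
            # Master curve: median fracture toughness (MPa*sqrt(m)) at temperature T (C).
            return 30.0 + 70.0 * np.exp(0.019 * (T - T0))

        def kjc_quantile(T, T0, p):
            # Toughness at cumulative probability p from the 3-parameter Weibull
            # (shape 4, threshold K_min = 20 MPa*sqrt(m)) centered on the master curve.
            k0 = 20.0 + (kjc_median(T, T0) - 20.0) / np.log(2.0) ** 0.25
            return 20.0 + (k0 - 20.0) * (-np.log(1.0 - p)) ** 0.25

        # A 5% lower tolerance bound at T = T0:
        print(kjc_quantile(0.0, 0.0, 0.05))   # ~62 MPa*sqrt(m)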

  8. Update History of This Database - Arabidopsis Phenome Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Arabidopsis Phenome Database update history: Arabidopsis Phenome Database (http://jphenome.info/?page_id=95) opened (date not given). 2017/02/27 Arabidopsis Phenome Database English archive site opened.

  9. License - Nikkaji-InChI Mapping Table | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Nikkaji-InChI Mapping Table: license to use this database (last updated 2015/05/22). The license specifies the terms regarding the use of this database and the requirements you must follow in using it. The license for this database is the Creative Commons Attribution 2.1 Japan; if you use data from this database, please be sure to attribute it. You are licensed to freely access part or whole of this database and acquire data.

  10. Data-base tools for enhanced analysis of TMX-U data

    International Nuclear Information System (INIS)

    Stewart, M.E.; Carter, M.R.; Casper, T.A.; Meyer, W.H.; Perkins, D.E.; Whitney, D.M.

    1986-01-01

    The authors use a commercial data-base software package to create several data-base products that enhance the ability of experimental physicists to analyze data from the TMX-U experiment. This software resides on a DEC-20 computer in M-Division's user service center (USC), where data can be analyzed separately from the main acquisition computers. When these data-base tools are combined with interactive data analysis programs, physicists can perform automated (batch-style) processing or interactive data analysis on the computers in the USC or on the supercomputers of the NMFECC, in addition to the normal processing done on the acquisition system. One data-base tool provides highly reduced data for searching and correlation analysis of several diagnostic signals for a single shot or many shots. A second data-base tool provides retrieval and storage of unreduced data for detailed analysis of one or more diagnostic signals.

  11. Refactoring databases evolutionary database design

    CERN Document Server

    Ambler, Scott W

    2006-01-01

    Refactoring has proven its value in a wide range of development projects–helping software professionals improve system designs, maintainability, extensibility, and performance. Now, for the first time, leading agile methodologist Scott Ambler and renowned consultant Pramodkumar Sadalage introduce powerful refactoring techniques specifically designed for database systems. Ambler and Sadalage demonstrate how small changes to table structures, data, stored procedures, and triggers can significantly enhance virtually any database design–without changing semantics. You’ll learn how to evolve database schemas in step with source code–and become far more effective in projects relying on iterative, agile methodologies. This comprehensive guide and reference helps you overcome the practical obstacles to refactoring real-world databases by covering every fundamental concept underlying database refactoring. Using start-to-finish examples, the authors walk you through refactoring simple standalone databas...

  12. Update History of This Database - SKIP Stemcell Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available SKIP Stemcell Database update history: 2013/03/29 SKIP Stemcell Database ( https://www.skip.med.keio.ac.jp/SKIPSearch/top?lang=en ) opened. 2017/03/13 SKIP Stemcell Database English archive site opened.

  13. The ATLAS Distributed Data Management System & Databases

    CERN Document Server

    Garonne, V; The ATLAS collaboration; Barisits, M; Beermann, T; Vigne, R; Serfon, C

    2013-01-01

    The ATLAS Distributed Data Management (DDM) System is responsible for the global management of petabytes of high energy physics data. The current system, DQ2, has a critical dependency on Relational Database Management Systems (RDBMS), like Oracle. RDBMS are well-suited to enforcing data integrity in online transaction processing applications, however, concerns have been raised about the scalability of its data warehouse-like workload. In particular, analysis of archived data or aggregation of transactional data for summary purposes is problematic. Therefore, we have evaluated new approaches to handle vast amounts of data. We have investigated a class of database technologies commonly referred to as NoSQL databases. This includes distributed filesystems, like HDFS, that support parallel execution of computational tasks on distributed data, as well as schema-less approaches via key-value stores, like HBase. In this talk we will describe our use cases in ATLAS, share our experiences with various databases used ...

  14. Soil Properties Database of Spanish Soils Volume I.-Galicia

    International Nuclear Information System (INIS)

    Trueba, C.; Millan, R.; Schmid, T.; Roquero, C.; Magister, M.

    1998-01-01

    The soil vulnerability determines the sensitivity of the soil after an accidental radioactive contamination due to Cs-137 and Sr-90. The Departamento de Impacto Ambiental de la Energia of CIEMAT is carrying out an assessment of the radiological vulnerability of the different Spanish soils found on the Iberian Peninsula. This requires knowledge of the soil properties for the various types of existing soils. In order to achieve this aim, a bibliographical compilation of soil profiles has been made to characterize the different soil types and create a database of their properties. Depending on the year of publication and the type of documentary source, the information compiled from the available bibliography is very heterogeneous. Therefore, an important effort has been made to normalize and process the information prior to its incorporation into the database. This volume presents the criteria applied to normalize and process the data, as well as the soil properties of the various soil types belonging to the Comunidad Autonoma de Galicia

  15. Data-driven intensity normalization of PET group comparison studies is superior to global mean normalization

    DEFF Research Database (Denmark)

    Borghammer, Per; Aanerud, Joel; Gjedde, Albert

    2009-01-01

    BACKGROUND: Global mean (GM) normalization is one of the most commonly used methods of normalization in PET and SPECT group comparison studies of neurodegenerative disorders. It requires that no between-group GM difference is present, which may be strongly violated in neurodegenerative disorders. Importantly, such GM differences often elude detection due to the large intrinsic variance in absolute values of cerebral blood flow or glucose consumption. Alternative methods of normalization are needed for this type of data. MATERIALS AND METHODS: Two types of simulation were performed using CBF images
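    Global mean normalization itself is a one-line rescaling; a minimal numpy sketch (the target value of 50 is a conventional choice for CBF in ml/100 g/min, assumed here for illustration):

        import numpy as np

        def gm_normalize(image, target=50.0):
            # Scale an image so its global mean equals `target`. This is only valid
            # if the true global mean does not differ between the compared groups.
            return image * (target / image.mean())

        img = np.random.rand(64, 64, 32) * 80.0   # a synthetic CBF volume
        print(gm_normalize(img).mean())           # -> 50.0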

  16. The Danish Nonmelanoma Skin Cancer Dermatology Database.

    Science.gov (United States)

    Lamberg, Anna Lei; Sølvsten, Henrik; Lei, Ulrikke; Vinding, Gabrielle Randskov; Stender, Ida Marie; Jemec, Gregor Borut Ernst; Vestergaard, Tine; Thormann, Henrik; Hædersdal, Merete; Dam, Tomas Norman; Olesen, Anne Braae

    2016-01-01

    The Danish Nonmelanoma Skin Cancer Dermatology Database was established in 2008. The aim of this database was to collect data on nonmelanoma skin cancer (NMSC) treatment and improve its treatment in Denmark. NMSC is the most common malignancy in western countries and represents a significant challenge in terms of public health management and health care costs. However, high-quality epidemiological and treatment data on NMSC are sparse. The NMSC database includes patients with the following skin tumors: basal cell carcinoma (BCC), squamous cell carcinoma, Bowen's disease, and keratoacanthoma, diagnosed by the participating office-based dermatologists in Denmark. Clinical and histological diagnoses, BCC subtype, localization, size, skin cancer history, skin phototype, evidence of metastases, and treatment modality are the main variables in the NMSC database. Information on recurrence, cosmetic results, and complications is registered at two follow-up visits, at 3 months (between 0 and 6 months) and 12 months (between 6 and 15 months) after treatment. In 2014, 11,522 patients with 17,575 tumors were registered in the database. Of tumors with a histological diagnosis, 13,571 were BCCs, 840 squamous cell carcinomas, 504 Bowen's disease, and 173 keratoacanthomas. The NMSC database encompasses detailed information on the type of tumor, a variety of prognostic factors, treatment modalities, and outcomes after treatment. The database has revealed that, overall, the quality of care of NMSC in Danish dermatological clinics is high, and the database provides the necessary data for continuous quality assurance.

  17. Validation of the Provincial Transfer Authorization Centre database: a comprehensive database containing records of all inter-facility patient transfers in the province of Ontario

    Directory of Open Access Journals (Sweden)

    MacDonald Russell D

    2006-10-01

    Full Text Available Background: The Provincial Transfer Authorization Centre (PTAC) was established as a part of the emergency response in Ontario, Canada to the Severe Acute Respiratory Syndrome (SARS) outbreak in 2003. Prior to 2003, data relating to inter-facility patient transfers were not collected in a systematic manner. Then, in an emergency setting, a comprehensive database with a complex data collection process was established. For the first time in Ontario, population-based data for patient movement between healthcare facilities for a population of twelve million are available. The PTAC database stores all patient transfer data in a large database. There are few population-based patient transfer databases, and the PTAC database is believed to be the largest example to house this novel dataset. A patient transfer database has also never been validated; this paper presents the validation of the PTAC database. Methods: A random sample of 100 patient inter-facility transfer records was compared to the corresponding institutional patient records from the sending healthcare facilities. Measures of agreement, including sensitivity, were calculated for the 12 common data variables. Results: Of the 100 randomly selected patient transfer records, 95 (95%) of the corresponding institutional patient records were located. Data variables in the categories of patient demographics, facility identification and timing of transfer, and reason and urgency of transfer had strong agreement levels. The 10 most commonly used data variables had accuracy rates ranging from 85.3% to 100% and error rates ranging from 0 to 12.6%. These same variables had sensitivity values ranging from 0.87 to 1.0. Conclusion: The very high level of agreement between institutional patient records and the PTAC data for the fields compared in this study supports the validity of the PTAC database. For the first time, a population-based patient transfer database has been established.

  18. ALFRED: An Allele Frequency Database for Microevolutionary Studies

    Directory of Open Access Journals (Sweden)

    Kenneth K Kidd

    2005-01-01

    Full Text Available Many kinds of microevolutionary studies require data on multiple polymorphisms in multiple populations. Increasingly, and especially for human populations, multiple research groups collect relevant data and those data are dispersed widely in the literature. ALFRED has been designed to hold data from many sources and make them available over the web. Data are assembled from multiple sources, curated, and entered into the database. Multiple links to other resources are also established by the curators. A variety of search options are available and additional geographic based interfaces are being developed. The database can serve the human anthropologic genetic community by identifying what loci are already typed on many populations thereby helping to focus efforts on a common set of markers. The database can also serve as a model for databases handling similar DNA polymorphism data for other species.

  19. Impact of database quality in knowledge-based treatment planning for prostate cancer.

    Science.gov (United States)

    Wall, Phillip D H; Carver, Robert L; Fontenot, Jonas D

    2018-03-13

    This article investigates dose-volume prediction improvements in a common knowledge-based planning (KBP) method using a Pareto plan database compared with using a conventional, clinical plan database. Two plan databases were created using retrospective, anonymized data of 124 volumetric modulated arc therapy (VMAT) prostate cancer patients. The clinical plan database (CPD) contained planning data from each patient's clinically treated VMAT plan, which were manually optimized by various planners. The multicriteria optimization database (MCOD) contained Pareto-optimal plan data from VMAT plans created using a standardized multicriteria optimization protocol. Overlap volume histograms, incorporating fractional organ-at-risk volumes only within the treatment fields, were computed for each patient and used to match new patient anatomy to similar database patients. For each database patient, CPD and MCOD KBP predictions were generated for D10, D30, D50, D65, and D80 of the bladder and rectum in a leave-one-out manner. Prediction achievability was evaluated through a replanning study on a subset of 31 randomly selected database patients, using the best KBP predictions, regardless of plan database origin, as planning goals. MCOD predictions were significantly lower than CPD predictions for all five bladder dose-volumes and for rectum D50 (P = .004) and D65. The quality of the plan database thus affects the performance and achievability of dose-volume predictions from a common knowledge-based planning approach for prostate cancer. Bladder and rectum dose-volume predictions derived from a database of standardized Pareto-optimal plans were compared with those derived from clinical plans manually designed by various planners. Dose-volume predictions from the Pareto plan database were significantly lower overall than those from the clinical plan database, without compromising achievability.

  20. Database design and database administration for a kindergarten

    OpenAIRE

    Vítek, Daniel

    2009-01-01

    The bachelor thesis deals with the creation of a database design for a standard kindergarten, the installation of the designed database into the database system Oracle Database 10g Express Edition, and a demonstration of administration tasks in this database system. The database design was verified by means of an access application developed for the purpose.

  1. Analysis of commercial and public bioactivity databases.

    Science.gov (United States)

    Tiikkainen, Pekka; Franke, Lutz

    2012-02-27

    Activity data for small molecules are invaluable in chemoinformatics. Various bioactivity databases exist containing detailed information of target proteins and quantitative binding data for small molecules extracted from journals and patents. In the current work, we have merged several public and commercial bioactivity databases into one bioactivity metabase. The molecular presentation, target information, and activity data of the vendor databases were standardized. The main motivation of the work was to create a single relational database which allows fast and simple data retrieval by in-house scientists. Second, we wanted to know the amount of overlap between databases by commercial and public vendors to see whether the former contain data complementing the latter. Third, we quantified the degree of inconsistency between data sources by comparing data points derived from the same scientific article cited by more than one vendor. We found that each data source contains unique data which is due to different scientific articles cited by the vendors. When comparing data derived from the same article we found that inconsistencies between the vendors are common. In conclusion, using databases of different vendors is still useful since the data overlap is not complete. It should be noted that this can be partially explained by the inconsistencies and errors in the source data.

  2. Cortical thinning in cognitively normal elderly cohort of 60 to 89 year old from AIBL database and vulnerable brain areas

    Science.gov (United States)

    Lin, Zhongmin S.; Avinash, Gopal; Yan, Litao; McMillan, Kathryn

    2014-03-01

    Age-related cortical thinning has been studied by many researchers using quantitative MR images for the past three decades and vastly differing results have been reported. Although results have shown age-related cortical thickening in elderly cohort statistically in some brain regions under certain conditions, cortical thinning in elderly cohort requires further systematic investigation. This paper leverages our previously reported brain surface intensity model (BSIM)1 based technique to measure cortical thickness to study cortical changes due to normal aging. We measured cortical thickness of cognitively normal persons from 60 to 89 years old using Australian Imaging Biomarkers and Lifestyle Study (AIBL) data. MRI brains of 56 healthy people including 29 women and 27 men were selected. We measured average cortical thickness of each individual in eight brain regions: parietal, frontal, temporal, occipital, visual, sensory motor, medial frontal and medial parietal. Unlike the previous published studies, our results showed consistent age-related thinning of cerebral cortex in all brain regions. The parietal, medial frontal and medial parietal showed fastest thinning rates of 0.14, 0.12 and 0.10 mm/decade respectively while the visual region showed the slowest thinning rate of 0.05 mm/decade. In sensorimotor and parietal areas, women showed higher thinning (0.09 and 0.16 mm/decade) than men while in all other regions men showed higher thinning than women. We also created high resolution cortical thinning rate maps of the cohort and compared them to typical patterns of PET metabolic reduction of moderate AD and frontotemporal dementia (FTD). The results seemed to indicate vulnerable areas of cortical deterioration that may lead to brain dementia. These results validate our cortical thickness measurement technique by demonstrating the consistency of the cortical thinning and prediction of cortical deterioration trend with AIBL database.

  3. Database Description - Open TG-GATEs Pathological Image Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database name: Open TG-GATEs Pathological Image Database. DOI: 10.18908/lsdba.nbdc00954-0... Creator: National Institute of Biomedical Innovation, 7-6-8 Saito-asagi, Ibaraki-city, Osaka 567-0085, Japan (TEL: 81-72-641-9826). Database classification: Toxicogenomics Database. Organism: Rattus norvegicus.

  4. Multi-centre, multi-database studies with common protocols : lessons learnt from the IMI PROTECT project

    NARCIS (Netherlands)

    Klungel, Olaf H; Kurz, Xavier; de Groot, Mark C H; Schlienger, Raymond G; Tcherny-Lessenot, Stephanie; Grimaldi, Lamiae; Ibáñez, Luisa; Groenwold, Rolf H H; Reynolds, Robert F

    2016-01-01

    PURPOSE: To assess the impact of a variety of methodological parameters on the association between six drug classes and five key adverse events in multiple databases. METHODS: The selection of Drug-Adverse Event pairs was based on public health impact, regulatory relevance, and the possibility to

  5. IQ-SPECT for thallium-201 myocardial perfusion imaging: effect of normal databases on quantification.

    Science.gov (United States)

    Konishi, Takahiro; Nakajima, Kenichi; Okuda, Koichi; Yoneyama, Hiroto; Matsuo, Shinro; Shibutani, Takayuki; Onoguchi, Masahisa; Kinuya, Seigo

    2017-07-01

    Although IQ-single-photon emission computed tomography (SPECT) provides rapid acquisition and attenuation-corrected images, this unique technology may yield a characteristic distribution that differs from conventional imaging. This study aimed to compare the diagnostic performance of IQ-SPECT using Japanese normal databases (NDBs) with that of conventional SPECT for thallium-201 (201Tl) myocardial perfusion imaging (MPI). A total of 36 patients underwent 1-day 201Tl adenosine stress-rest MPI. Images were acquired with IQ-SPECT at approximately one-quarter of the standard acquisition time of conventional SPECT. Projection data acquired with the IQ-SPECT system were reconstructed via an ordered-subset conjugate-gradient minimizer method with or without scatter and attenuation correction (SCAC). Projection data obtained using conventional SPECT were reconstructed via a filtered back-projection method without SCAC. The summed stress score (SSS) was calculated using NDBs created by the Japanese Society of Nuclear Medicine working group, and scores were compared between IQ-SPECT and conventional SPECT using acquisition-condition-matched NDBs. The diagnostic performance of the methods for the detection of coronary artery disease was also compared. SSSs were 6.6 ± 8.2 for conventional SPECT, 6.6 ± 9.4 for IQ-SPECT without SCAC, and 6.5 ± 9.7 for IQ-SPECT with SCAC (p = n.s. for each comparison). The SSS showed a strong positive correlation between conventional SPECT and IQ-SPECT (r = 0.921), and the correlation between IQ-SPECT with and without SCAC was also good (r = 0.907). Sensitivity, specificity, and accuracy were 88.5, 86.8, and 87.3%, respectively, for IQ-SPECT with SCAC. The areas under the curve obtained via receiver operating characteristic analysis were 0.77, 0.80, and 0.86 for conventional SPECT, IQ-SPECT without SCAC, and IQ-SPECT with SCAC, respectively (p = n.s. for each comparison). When appropriate NDBs were used, the diagnostic performance of 201Tl IQ-SPECT was comparable to that of conventional SPECT.

  6. Development of a neoclassical transport database by neural network fitting in LHD

    International Nuclear Information System (INIS)

    Wakasa, Arimitsu; Oikawa, Shun-ichi; Murakami, Sadayoshi; Yamada, Hiroshi; Yokoyama, Masayuki; Watanabe, Kiyomasa; Maassberg, Hening; Beidler, Craig D.

    2004-01-01

    A database of neoclassical transport coefficients for the Large Helical Device is developed using normalized mono-energetic diffusion coefficients evaluated by the Monte Carlo simulation code DCOM. A neural network fitting method is applied to take energy convolutions with a given distribution function, e.g., a Maxwellian. The database gives the diffusion coefficients as a function of the collision frequency, the radial electric field, and the minor radius position. (author)
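
    As a rough sketch of the approach, a small feed-forward network can be trained on a table of normalized mono-energetic coefficients and then evaluated as a smooth function of the three inputs; the fitted surrogate can subsequently be integrated numerically against a Maxwellian. The training table below is synthetic — DCOM output would take its place.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 3))            # (collisionality, E_r, r/a), scaled to [0, 1]
y = np.exp(-2 * X[:, 0]) * (1 + X[:, 2])  # stand-in for tabulated coefficients

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                   random_state=0).fit(X, y)
print(net.predict([[0.1, 0.5, 0.6]]))     # smooth interpolation between table points
```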

  7. Normal variation of hepatic artery

    International Nuclear Information System (INIS)

    Kim, Inn; Nam, Myung Hyun; Rhim, Hyun Chul; Koh, Byung Hee; Seo, Heung Suk; Kim, Soon Yong

    1987-01-01

    This study was an analysis of the blood supply of the liver in 125 patients who underwent hepatic arteriography and abdominal aortography from Jan. 1984 to Dec. 1986 at the Department of Radiology of Hanyang University Hospital. A. Variations in extrahepatic arteries: 1. The normal extrahepatic artery pattern occurred in 106 of 125 cases (84.8%): right hepatic and left hepatic arteries arising from the hepatic artery proper, and the hepatic artery proper arising from the common hepatic artery. 2. The most common type of variation of the extrahepatic artery was a replaced right hepatic artery from the superior mesenteric artery: 6 of 125 cases (4.8%). B. Variations in intrahepatic arteries: 1. The normal intrahepatic artery pattern occurred in 83 of 125 cases (66.4%): right hepatic and left hepatic arteries arising from the hepatic artery proper, and the middle hepatic artery arising from the lower portion of the umbilical point of the left hepatic artery. 2. The most common variation of the intrahepatic arteries involved the middle hepatic artery. 3. Among the variations of the middle hepatic artery, right, middle, and left hepatic arteries arising from the same location on the hepatic artery proper was the most common type: 17 of 125 cases (13.6%).

  8. Design of multi-tiered database application based on CORBA component in SDUV-FEL system

    International Nuclear Information System (INIS)

    Sun Xiaoying; Shen Liren; Dai Zhimin

    2004-01-01

    The drawbacks of the usual two-tiered database architecture were analyzed, and the Shanghai Deep Ultraviolet Free-Electron Laser (SDUV-FEL) database system under development was discussed. A scheme for realizing a multi-tiered database architecture based on common object request broker architecture (CORBA) components and a middleware model implemented in C++ is presented. A magnet database is given to illustrate the design of the CORBA component. (authors)

  9. ICDE project report: collection and analysis of common-cause failures of batteries

    International Nuclear Information System (INIS)

    2003-12-01

    This report documents a study performed on the set of Common Cause Failure (CCF) events of batteries (BT). The events studied here were derived from the International CCF Data Exchange (ICDE) database, with contributions from organizations in several countries. Fifty events in the ICDE database were studied by tabulating the data and observing trends. The data span the period from 1980 through 2000. The database contains general information about event attributes such as root cause, coupling factor, common cause component group (CCCG) size, and corrective action. A further objective of the report was to characterize the failure mechanisms and phenomena involved in the events, their relationship to the root causes, and possibilities for improvement.
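
    The tabulation step amounts to cross-classifying events by their attributes. A minimal sketch with invented event records:

```python
import pandas as pd

events = pd.DataFrame({
    "root_cause":      ["design", "maintenance", "design", "environment"],
    "coupling_factor": ["hardware", "procedure", "hardware", "environment"],
})
# Counts of events per (root cause, coupling factor) cell.
print(pd.crosstab(events["root_cause"], events["coupling_factor"]))
```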

  10. Update History of This Database - Yeast Interacting Proteins Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Update history of the Yeast Interacting Proteins Database: 2010/03/29 - the English archive site was opened; 2000/12/4 - the Yeast Interacting Proteins Database (http://itolab.cb.k.u-tokyo.ac.jp/Y2H/) was released.

  11. ATLAS database application enhancements using Oracle 11g

    International Nuclear Information System (INIS)

    Dimitrov, G; Canali, L; Blaszczyk, M; Sorokoletov, R

    2012-01-01

    The ATLAS experiment at the LHC relies on databases for detector online data-taking, storage and retrieval of configurations, calibrations and alignments, post data-taking analysis, file management over the grid, job submission and management, and condition data replication to remote sites. The Oracle Relational Database Management System (RDBMS) has been addressing the ATLAS database requirements to a great extent for many years. Ten database clusters are currently deployed for the needs of the different applications, divided into production, integration, and standby databases. The data volume, complexity, and demands from the users are increasing steadily with time. Nowadays more than 20 TB of data are stored in the ATLAS production Oracle databases at CERN (not including the index overhead), but the most impressive number is the 260 hosted database schemas (in the most common case, each schema is related to a dedicated client application with its own requirements). At the beginning of 2012, all ATLAS databases at CERN were upgraded to the newest Oracle version at the time: Oracle 11g Release 2. Oracle 11g comes with several key improvements compared to previous database engine versions. In this work we present our evaluation of the most relevant new features of Oracle 11g of interest for ATLAS applications and use cases. Notably, we report on the performance and scalability enhancements obtained in production since the Oracle 11g deployment in Q1 2012, and we outline plans for future work in this area.

  12. Drug interaction databases in medical literature

    DEFF Research Database (Denmark)

    Kongsholm, Gertrud Gansmo; Nielsen, Anna Katrine Toft; Damkier, Per

    2015-01-01

    PURPOSE: It is well documented that drug-drug interaction databases (DIDs) differ substantially with respect to classification of drug-drug interactions (DDIs). The aim of this study was to assess the online available transparency of ownership, funding, information, classifications, staff training, and underlying documentation in open access DIDs and in the three most commonly used subscription DIDs in the medical literature. The following parameters were assessed for each of the databases: ownership, classification of interactions, primary information sources, and staff qualification. We compared the overall proportion of yes/no answers from open access and subscription DIDs. Online available transparency of ownership, funding, information, classifications, staff training, and underlying documentation varies substantially among DIDs; open access DIDs had a statistically lower score on the parameters assessed.

  13. Using the TIGR gene index databases for biological discovery.

    Science.gov (United States)

    Lee, Yuandan; Quackenbush, John

    2003-11-01

    The TIGR Gene Index web pages provide access to analyses of ESTs and gene sequences for nearly 60 species, as well as to a number of resources derived from these. Each species-specific database is presented in a common format with a homepage. A variety of methods allow users to search each species-specific database; currently implemented methods include nucleotide or protein sequence queries using WU-BLAST, text-based searches using various sequence identifiers, searches by gene, tissue, and library name, and searches using functional classes through Gene Ontology assignments. This protocol provides guidance for using the Gene Index databases to extract information.

  14. Quality assurance database for the CBM silicon tracking system

    Energy Technology Data Exchange (ETDEWEB)

    Lymanets, Anton [Physikalisches Institut, Universitaet Tuebingen (Germany); Collaboration: CBM-Collaboration

    2015-07-01

    The Silicon Tracking System (STS) is the main tracking device of the CBM experiment at FAIR. Its construction includes production, quality assurance, and assembly of a large number of components, e.g., 106 carbon-fiber support structures, 1300 silicon microstrip sensors, 16.6k readout chips, and analog microcables. Detector construction is distributed over several production and assembly sites and calls for a database that is extensible and allows tracing the components, integrating the test data, and monitoring component statuses and data flow. A possible implementation of the above-mentioned requirements is being developed at GSI (Darmstadt) based on the FAIR DB Virtual Database Library, which provides connectivity to common SQL database engines (PostgreSQL, Oracle, etc.). The data structure, database architecture, and status of the implementation are discussed.

  15. Database Description - RMOS | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database name: RMOS. Creator: Shoshi Kikuchi. Database classification: Plant databases - Rice; Microarray Data and other Gene Expression Databases. Organism: Oryza sativa (Taxonomy ID: 4530). Referenced databases: Rice Expression Database (RED), Rice full-length cDNA Database (KOME), Rice Genome Integrated Map Database (INE), Rice Mutant Panel Database (Tos17), and Rice Genome Annotation Database.

  16. Database tools for enhanced analysis of TMX-U data

    International Nuclear Information System (INIS)

    Stewart, M.E.; Carter, M.R.; Casper, T.A.; Meyer, W.H.; Perkins, D.E.; Whitney, D.M.

    1986-01-01

    A commercial database software package has been used to create several databases and tools that assist and enhance the ability of experimental physicists to analyze data from the Tandem Mirror Experiment-Upgrade (TMX-U) experiment. This software runs on a DEC-20 computer in M-Division's User Service Center (USC) at Lawrence Livermore National Laboratory (LLNL), where data can be analyzed off line from the main TMX-U acquisition computers. When combined with interactive data analysis programs, these tools provide the capability to do batch-style processing or interactive data analysis on the computers in the USC or the supercomputers of the National Magnetic Fusion Energy Computer Center (NMFECC), in addition to the normal processing done by the TMX-U acquisition system. One database tool provides highly reduced data for searching and correlation analysis of several diagnostic signals within a single shot or over many shots; a second provides retrieval and storage of unreduced data for use in detailed analysis of one or more diagnostic signals. We will show how these database tools form the core of an evolving off-line data analysis environment on the USC computers.
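
    The first tool's reduced-data correlation analysis can be pictured as follows; the shot table and signal names are invented placeholders.

```python
import pandas as pd

# One scalar per diagnostic per shot, i.e. "highly reduced" data.
reduced = pd.DataFrame({
    "shot":        [101, 102, 103, 104],
    "density":     [1.2, 1.5, 1.1, 1.7],
    "temperature": [0.9, 1.2, 0.8, 1.4],
})
print(reduced[["density", "temperature"]].corr())  # cross-signal correlation
```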

  17. SOME ASPECTS REGARDING THE INTERNATIONAL DATABASES NOWADAYS

    Directory of Open Access Journals (Sweden)

    Emilian M. DOBRESCU

    2015-01-01

    Full Text Available A national database (NDB) or an international one (abbreviated IDB), often also called a "data bank", is a means of storing information and data on an external storage device, with the possibility of easy extension and of quickly finding the information. By IDB we therefore understand not only a bibliometric or bibliographic index — a collection of references, which normally represents the "soft" part — but also the IDB "hard" part, i.e., the support and storage technology. Usually a database — a very comprehensive notion in computer science — is a bibliographic index compiled with a specific purpose, objectives, and means. In practice, national and international databases are operated through management systems, usually electronic and informational, based on advanced manipulation technologies in the virtual space. Online encyclopedias can also be considered important international databases (IDBs). WorldCat, for example, is a world catalogue that includes identification data for the books held by circa 71,000 libraries in 112 countries, classified through the Online Computer Library Center (OCLC) with the participation of the libraries in the respective countries, especially those that are national libraries.

  18. KALIMER database development (database configuration and design methodology)

    International Nuclear Information System (INIS)

    Jeong, Kwan Seong; Kwon, Young Min; Lee, Young Bum; Chang, Won Pyo; Hahn, Do Hee

    2001-10-01

    The KALIMER database is an advanced database for the integrated management of Liquid Metal Reactor design technology development using web applications. The KALIMER design database consists of a Results Database, Inter-Office Communication (IOC), a 3D CAD database, a Team Cooperation system, and Reserved Documents. The Results Database holds the research results from phase II of the Liquid Metal Reactor design technology development of the mid- and long-term nuclear R&D program. IOC is a linkage control system between sub-projects for sharing and integrating the research results for KALIMER. The 3D CAD database gives a schematic design overview of KALIMER. The Team Cooperation system informs team members about research cooperation and meetings. Finally, the KALIMER Reserved Documents component was developed to manage collected data and various documents accumulated since project accomplishment. This report describes the hardware and software features and the database design methodology for KALIMER.

  19. Series of agriculture in the statistical office of the Republic of Serbia database

    Directory of Open Access Journals (Sweden)

    Stojanović-Radović Jelena

    2015-01-01

    Full Text Available The objectives of this paper were to examine which data on agriculture can be found in the Statistical Office of the Republic of Serbia Database, and what the possibilities are for using the Database in research and analysis of agriculture. Physically, the Statistical Office of the Republic of Serbia Database is a normalized database implemented in the DBMS SQL Server. The methodological approach of the paper relates primarily to modelling and to the way the Database is used. The options for accessing, filtering, and downloading data from the Database are explained. The technical characteristics of the Database are described, indicators of agriculture listed, and the possibilities of using the Database analysed. We examined whether these possibilities could be improved. It was concluded that improvements are possible: first, by enriching the Database with data that are now only available in printed publications of the Office, and then, through methodological and technical improvements, by redesigning the Database along the lines of cloud-based databases. Also, applying the achievements of the new multidisciplinary scientific field of Visual Analytics would improve visualization, interactive data analysis, and data management.

  1. Electronic database of arterial aneurysms

    Directory of Open Access Journals (Sweden)

    Fabiano Luiz Erzinger

    2014-12-01

    Full Text Available Background: The creation of an electronic database facilitates the storage of information and streamlines the exchange of data, easing the sharing of knowledge for future research. Objective: To construct an electronic database containing comprehensive and up-to-date clinical and surgical data on the most common arterial aneurysms, to help advance scientific research. Methods: The most important specialist textbooks and articles found in journals and in internet databases were reviewed in order to define the basic structure of the protocol. Data were computerized using the SINPE© system for integrated electronic protocols and tested in a pilot study. Results: The data entered into the system were first used to create a Master protocol, organized into a structure of top-level directories covering a large proportion of the content on vascular diseases, as follows: patient history; physical examination; supplementary tests and examinations; diagnosis; treatment; and clinical course. By selecting items from the Master protocol, Specific protocols were then created for the 22 arterial sites most often involved by aneurysms. The program provides a method for collecting patient data, including clinical characteristics (patient history and physical examination), supplementary tests and examinations, treatments received, and follow-up care after treatment. Any information of interest on these patients that is contained in the protocol can then be used to query the database and select data for studies. Conclusions: It proved possible to construct a database of clinical and surgical data on the arterial aneurysms of greatest interest and, by adapting the data to specific software, to integrate the database into the SINPE© system, thereby providing a standardized method for collecting data on these patients and tools for retrieving this information in an organized manner for use in scientific studies.

  2. Database of episode-integrated solar energetic proton fluences

    Science.gov (United States)

    Robinson, Zachary D.; Adams, James H.; Xapsos, Michael A.; Stauffer, Craig A.

    2018-04-01

    A new database of proton episode-integrated fluences is described. This database contains data from two different instruments on multiple satellites: the Interplanetary Monitoring Platform-8 (IMP8) and the Geostationary Operational Environmental Satellites (GOES) series. A method to normalize one set of data to the other is presented, creating a seamless database spanning 1973 to 2016. A discussion of some of the characteristics that episodes exhibit is presented, including episode duration and number of peaks. As an example of what can be understood about episodes, the July 4, 2012 episode is examined in detail. The coronal mass ejections and solar flares that caused many of the fluctuations of the proton flux seen at Earth are associated with peaks in the proton flux during this episode. The reasoning for each choice is laid out to provide a reference for how CME and solar flare associations are made.
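
    One simple way to realize the instrument-to-instrument normalization, sketched here under the assumption that a single scale factor from an overlap period suffices (the actual procedure may be more elaborate):

```python
import numpy as np

# Fluxes from the two instruments over a common overlap period.
imp8_overlap = np.array([10.0, 12.0, 9.0, 11.0])
goes_overlap = np.array([11.5, 13.9, 10.2, 12.6])

scale = np.median(imp8_overlap / goes_overlap)
goes_on_imp8_scale = goes_overlap * scale  # splice into a seamless record
print(f"scale factor: {scale:.3f}")
```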

  3. Database of episode-integrated solar energetic proton fluences

    Directory of Open Access Journals (Sweden)

    Robinson Zachary D.

    2018-01-01

    Full Text Available A new database of proton episode-integrated fluences is described. This database contains data from two different instruments on multiple satellites: the Interplanetary Monitoring Platform-8 (IMP8) and the Geostationary Operational Environmental Satellites (GOES) series. A method to normalize one set of data to the other is presented, creating a seamless database spanning 1973 to 2016. A discussion of some of the characteristics that episodes exhibit is presented, including episode duration and number of peaks. As an example of what can be understood about episodes, the July 4, 2012 episode is examined in detail. The coronal mass ejections and solar flares that caused many of the fluctuations of the proton flux seen at Earth are associated with peaks in the proton flux during this episode. The reasoning for each choice is laid out to provide a reference for how CME and solar flare associations are made.

  4. Database Description - SAHG | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database name: SAHG. Contact: Chie Motono (Tel: +81-3-3599-8067). Database classification: Structure Databases - Protein properties. Organism: Homo sapiens (Taxonomy ID: 9606). Database maintenance site: Molecular Profiling Research Center for Drug Discovery.

  5. Active in-database processing to support ambient assisted living systems.

    Science.gov (United States)

    de Morais, Wagner O; Lundström, Jens; Wickström, Nicholas

    2014-08-12

    As an alternative to the existing software architectures that underpin the development of smart homes and ambient assisted living (AAL) systems, this work presents a database-centric architecture that takes advantage of active databases and in-database processing. Current platforms supporting AAL systems use database management systems (DBMSs) exclusively for data storage. Active databases employ database triggers to detect and react to events taking place inside or outside of the database. DBMSs can be extended with stored procedures and functions that enable in-database processing. This means that the data processing is integrated and performed within the DBMS. The feasibility and flexibility of the proposed approach were demonstrated with the implementation of three distinct AAL services. The active database was used to detect bed-exits and to discover common room transitions and deviations during the night. In-database machine learning methods were used to model early night behaviors. Consequently, active in-database processing avoids transferring sensitive data outside the database, and this improves performance, security and privacy. Furthermore, centralizing the computation into the DBMS facilitates code reuse, adaptation and maintenance. These are important system properties that take into account the evolving heterogeneity of users, their needs and the devices that are characteristic of smart homes and AAL systems. Therefore, DBMSs can provide capabilities to address requirements for scalability, security, privacy, dependability and personalization in applications of smart environments in healthcare.
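
    A runnable miniature of the active-database idea, using SQLite in place of a server DBMS; the schema and the bed-exit rule are illustrative assumptions, not the paper's implementation.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE bed_sensor (ts TEXT, occupied INTEGER);
CREATE TABLE events     (ts TEXT, kind TEXT);

-- The trigger reacts inside the database: no application polling.
CREATE TRIGGER bed_exit AFTER INSERT ON bed_sensor
WHEN NEW.occupied = 0
BEGIN
    INSERT INTO events VALUES (NEW.ts, 'bed-exit');
END;
""")

db.execute("INSERT INTO bed_sensor VALUES ('02:14', 1)")
db.execute("INSERT INTO bed_sensor VALUES ('02:31', 0)")  # fires the trigger
print(db.execute("SELECT * FROM events").fetchall())      # [('02:31', 'bed-exit')]
```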

  6. Soil Properties Database of Spanish Soils Volume III.- Extremadura

    International Nuclear Information System (INIS)

    Trueba, C.; Millan, R.; Schmid, T.; Roquero, C.; Magister, M.

    1998-01-01

    The soil vulnerability determines the sensitivity of the soil after an accidental radioactive contamination due to Cs-137 and Sr-90. The Departamento de Impacto Ambiental de la Energia of CIEMAT is carrying out an assessment of the radiological vulnerability of the different Spanish soils found on the Iberian Peninsula. This requires the knowledge of the soil properties for the various types of existing soils. In order to achieve this aim, a bibliographical compilation of soil profiles has been made to characterize the different soil types and create a database of their properties. Depending on the year of publication and the type of documentary source, the information compiled from the available bibliography is very heterogeneous. Therefore, an important effort has been made to normalize and process the information prior to its incorporation to the database. This volume presents the criteria applied to normalize and process the data as well as the soil properties of the various soil types belonging to the Comunidad Autonoma de Extremadura. (Author) 50 refs

  7. Soil Properties Database of Spanish Soils. Volume V.- Madrid

    International Nuclear Information System (INIS)

    Trueba, C.; Millan, R.; Schmid, T.; Roquero, C.; Magister, M.

    1998-01-01

    The soil vulnerability determines the sensitivity of the soil after an accidental radioactive contamination due to Cs-137 and Sr-90. The Departamento de Impacto Ambiental de la Energia of CIEMAT is carrying out an assessment of the radiological vulnerability of the different Spanish soils found on the Iberian Peninsula. This requires the knowledge of the soil properties for the various types of existing soils. In order to achieve this aim, a bibliographical compilation of soil profiles has been made to characterize the different soil types and create a database of their properties. Depending on the year of publication and the type of documentary source, the information compiled from the available bibliography is very heterogeneous. Therefore, an important effort has been made to normalize and process the information prior to its incorporation to the database. This volume presents the criteria applied to normalize and process the data as well as the soil properties of the various soil types belonging to the Comunidad Autonoma de Madrid. (Author) 39 refs

  8. Soil Properties Database of Spanish Soils. Volume XV.- Aragon

    International Nuclear Information System (INIS)

    Trueba, C.; Millan, R.; Schmid, T.; Lago, C.; Roquero, C.; Magister, M.

    1999-01-01

    The soil vulnerability determines the sensitivity of the soil after an accidental radioactive contamination due to Cs-137 and Sr-90. The Departamento de Impacto Ambiental de la Energia of CIEMAT is carrying out an assessment of the radiological vulnerability of the different Spanish soils found on the Iberian Peninsula. This requires the knowledge of the soil properties for the various types of existing soils. In order to achieve this aim, a bibliographical compilation of soil profiles has been made to characterize the different soil types and create a database of their properties. Depending on the year of publication and the type of documentary source, the information compiled from the available bibliography is very heterogeneous. Therefore, an important effort has been made to normalize and process the information prior to its incorporation to the database. This volume presents the criteria applied to normalize and process the data as well as the soil properties of the various soil types belonging to the Comunidad Autonoma de Aragon. (Author) 47 refs

  9. Soil Properties Database of Spanish Soils. Volume XIV.- Cataluna

    International Nuclear Information System (INIS)

    Trueba, C.; Millan, R.; Schmid, T.; Lago, C.; Roquero, C.; Magister, M.

    1999-01-01

    The soil vulnerability determines the sensitivity of the soil after an accidental radioactive contamination due to Cs-137 and Sr-90. The Departamento de Impacto Ambiental de la Energia of CIEMAT is carrying out an assessment of the radiological vulnerability of the different Spanish soils found on the Iberian Peninsula. This requires the knowledge of the soil properties for the various types of existing soils. In order to achieve this aim, a bibliographical compilation of soil profiles has been made to characterize the different soil types and create a database of their properties. Depending on the year of publication and the type of documentary source, the information compiled from the available bibliography is very heterogeneous. Therefore, an important effort has been made to normalize and process the information prior to its incorporation to the database. This volume presents the criteria applied to normalize and process the data as well as the soil properties of the various soil types belonging to the Comunidad Autonoma de Cataluna. (Author) 57 refs

  10. Database Description - PSCDB | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database name: PSCDB. Creator: Takayuki Amemiya, National Institute of Advanced Industrial Science and Technology (AIST). Database classification: Structure Databases - Protein structure. Reference: Nucleic Acids Res. 40 (Database issue): D554-D558. Database maintenance site: Graduate School of Information Science.

  11. Database Description - ASTRA | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database name: ASTRA. Database classification: Nucleotide Sequence Databases - Gene structure. Organisms: Arabidopsis thaliana (Taxonomy ID: 3702); Oryza sativa (Taxonomy ID: 4530). Database description: classified patterns of alternative transcription/splicing events. Database maintenance site: National Institute of Advanced Industrial Science and Technology.

  12. Qualitative Comparison of IGRA and ESRL Radiosonde Archived Databases

    Science.gov (United States)

    Walker, John R.

    2014-01-01

    Multiple databases of atmospheric profile information are freely available to individuals and groups such as the Natural Environments group. Two of the primary database archives provided by NOAA that are most frequently used are those from the Earth System Research Laboratory (ESRL) and the Integrated Global Radiosonde Archive (IGRA). Inquiries have been made as to why one database is used as opposed to the other, yet to the best of our knowledge, no formal comparison has been performed. The goal of this study is to provide a qualitative comparison of the ESRL and IGRA radiosonde databases. For part of these analyses, 14 upper-air observation sites were selected. These sites all have the common attribute of having been used, or being planned for use, in the development of Range Reference Atmospheres (RRAs) in support of NASA's and DOD's current and future goals.

  13. Database Description - RPD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database name: RPD (Rice Proteome Database). Creator: Setsuko Komatsu, National Institute of Crop Science, National Agriculture and Food Research Organization. Database classification: Proteomics Resources; Plant databases - Rice. Organism: Oryza sativa (Taxonomy ID: 4530). Database description: the Rice Proteome Database contains information on proteins entered in the database and is searchable by keyword.

  14. Database Description - PLACE | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database name: PLACE (A Database of Plant Cis-acting Regulatory DNA Elements). Contact: National Institute of Agrobiological Sciences, Kannondai, Tsukuba, Ibaraki 305-8602, Japan. Database classification: Plant databases. Organism: Tracheophyta (Taxonomy ID: 58023). Reference: Nucleic Acids Res. 1999, Vol. 27, No. 1: 297-300. Database maintenance site: National Institute of Agrobiological Sciences.

  15. License - Budding yeast cDNA sequencing project | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available The Budding yeast cDNA sequencing project database license (last updated 2010/02/15) comprises a Standard License and an Additional License. The Standard License specifies the license terms regarding the use of this database and the requirements you must follow in using it; for this database, the Standard License is the Creative Commons Attribution-Share Alike 2.1 Japan license.

  16. A group of facial normal descriptors for recognizing 3D identical twins

    KAUST Repository

    Li, Huibin

    2012-09-01

    In this paper, to characterize and distinguish identical twins, three popular texture descriptors, i.e., local binary patterns (LBPs), Gabor filters (GFs), and local Gabor binary patterns (LGBPs), are employed to encode the normal components (x, y, and z) of the 3D facial surfaces of identical twins. A group of facial normal descriptors is thus obtained, including the Normal Local Binary Patterns descriptor (N-LBPs), the Normal Gabor Filters descriptor (N-GFs), and the Normal Local Gabor Binary Patterns descriptor (N-LGBPs). All these normal-encoding-based descriptors are further fed into a sparse representation classifier (SRC) for identification. Experimental results on the 3D TEC database demonstrate that the proposed normal-encoding-based descriptors are very discriminative and efficient, achieving performance comparable to the best state-of-the-art algorithms. © 2012 IEEE.
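
    The N-LBP construction can be sketched as LBP histograms computed per normal component and concatenated; the normal map below is random stand-in data, and the 8-neighbor/radius-1 configuration is an assumption.

```python
import numpy as np
from skimage.feature import local_binary_pattern

rng = np.random.default_rng(0)
normal_map = rng.uniform(-1, 1, size=(64, 64, 3))  # per-pixel (nx, ny, nz)

P, R = 8, 1
histograms = []
for c in range(3):                                  # x, y, z components
    lbp = local_binary_pattern(normal_map[:, :, c], P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    histograms.append(hist)

descriptor = np.concatenate(histograms)             # input to the SRC classifier
print(descriptor.shape)
```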

  17. Database Description - JSNP | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database name: JSNP. Creator: Japan Science and Technology Agency. Organism: Homo sapiens (Taxonomy ID: 9606). Database description: a database of about 197,000 polymorphisms in the Japanese population. Database maintenance site: Institute of Medical Science.

  18. Database Description - RED | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database name: RED (Rice Expression Database). Creator: Shoshi Kikuchi, Genome Research Unit. Database classification: Plant databases - Rice; Microarray, Gene Expression. Organism: Oryza sativa (Taxonomy ID: 4530). Reference: "Rice Expression Database: the gateway to rice functional genomics", Trends in Plant Science (2002) 7(12): 563-564.

  19. Advanced Neuropsychological Diagnostics Infrastructure (ANDI): A Normative Database Created from Control Datasets.

    Science.gov (United States)

    de Vent, Nathalie R; Agelink van Rentergem, Joost A; Schmand, Ben A; Murre, Jaap M J; Huizenga, Hilde M

    2016-01-01

    In the Advanced Neuropsychological Diagnostics Infrastructure (ANDI), datasets of several research groups are combined into a single database containing scores on neuropsychological tests from healthy participants. For most popular neuropsychological tests, the quantity and range of these data surpass those of traditional normative data, thereby enabling more accurate neuropsychological assessment. Because of the unique structure of the database, it facilitates normative comparison methods that were not feasible before, in particular those in which entire profiles of scores are evaluated. In this article, we describe the steps that were necessary to combine the separate datasets into a single database. These steps involve matching variables from multiple datasets, removing outlying values, determining the influence of demographic variables, and finding appropriate transformations to normality. A brief description of the current contents of the ANDI database is also given.
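
    The kind of normative comparison such a database enables can be sketched as a regression-based z-score; the single-predictor demographic model and all numbers are invented for illustration.

```python
import numpy as np

# Healthy-control scores from the combined database, with age as the
# (assumed) demographic predictor.
age   = np.array([55., 60., 65., 70., 75., 80., 85.])
score = np.array([48., 47., 44., 42., 40., 37., 35.])

beta = np.polyfit(age, score, 1)                       # demographic correction
residual_sd = np.std(score - np.polyval(beta, age), ddof=2)

patient_age, patient_score = 72, 33
z = (patient_score - np.polyval(beta, patient_age)) / residual_sd
print(f"z = {z:.2f}")   # negative: below age-adjusted expectation
```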

  20. PseudoMLSA: a database for multigenic sequence analysis of Pseudomonas species

    Directory of Open Access Journals (Sweden)

    Lalucat Jorge

    2010-04-01

    Full Text Available Abstract Background The genus Pseudomonas comprises more than 100 species of environmental, clinical, agricultural, and biotechnological interest. Although the recommended method for discriminating bacterial species is DNA-DNA hybridisation, alternative techniques based on multigenic sequence analysis are becoming common practice in bacterial species discrimination studies. Since there is no general criterion for determining which genes are more useful for species resolution, the number of strains and genes analysed is increasing continuously. As a result, sequences of different genes are dispersed throughout several databases. This sequence information needs to be collected in a common database in order to be useful for future identification-based projects. Description The PseudoMLSA Database is a comprehensive database of multiple gene sequences from strains of Pseudomonas species. The core of the database is composed of selected gene sequences from all Pseudomonas type strains validly assigned to the genus through 2008. The database is aimed to be useful for MultiLocus Sequence Analysis (MLSA) procedures, for the identification and characterisation of any Pseudomonas bacterial isolate. The sequences are available for download via a direct connection to the National Center for Biotechnology Information (NCBI). Additionally, the database includes an online BLAST interface for flexible nucleotide queries and similarity searches with the user's datasets, and provides a user-friendly output for easily parsing, navigating, and analysing BLAST results. Conclusions The PseudoMLSA database amasses strain and sequence information of validly described Pseudomonas species, and allows free querying of the database via a user-friendly, web-based interface available at http://www.uib.es/microbiologiaBD/Welcome.html. The web-based platform enables easy retrieval at strain or gene sequence information level, including references to published peer-reviewed articles.

  1. Database Description - ConfC | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database name: ConfC. Contact: Tamotsu Noguchi (Tel: 042-495-8736). Database classification: Structure Databases - Protein structure; Structure Databases - Small molecules; Structure Databases - Nucleic acid structure.

  2. Analysis of NASA Common Research Model Dynamic Data

    Science.gov (United States)

    Balakrishna, S.; Acheson, Michael J.

    2011-01-01

    Recent NASA Common Research Model (CRM) tests at the Langley National Transonic Facility (NTF) and the Ames 11-foot Transonic Wind Tunnel (11-foot TWT) have generated an experimental database for CFD code validation. The database consists of force and moment data, surface pressures, and wideband wing-root dynamic strain/wing Kulite data from continuous-sweep pitch polars. The dynamic data sets, acquired at a 12,800 Hz sampling rate, are analyzed in this study to evaluate CRM wing buffet onset and potential CRM wing flow separation.

  3. Comparison of normalized gain and Cohen's d for analyzing gains on concept inventories

    Science.gov (United States)

    Nissen, Jayson M.; Talbot, Robert M.; Nasim Thompson, Amreen; Van Dusen, Ben

    2018-06-01

    Measuring student learning is a complicated but necessary task for understanding the effectiveness of instruction and issues of equity in college science, technology, engineering, and mathematics (STEM) courses. Our investigation focused on the implications for claims about student learning that result from choosing between two commonly used metrics for analyzing shifts in concept inventories. The metrics are the normalized gain (g), which is the most common method used in physics education research and other discipline-based education research fields, and Cohen's d, which is broadly used in education research and many other fields. Data for the analyses came from the Learning About STEM Student Outcomes (LASSO) database and included test scores from 4551 students on physics, chemistry, biology, and math concept inventories from 89 courses at 17 institutions across the United States. We compared the two metrics across all the concept inventories. The results showed that the two metrics lead to different inferences about student learning and equity, due to the finding that g is biased in favor of high-pretest populations. We discuss recommendations for the analysis and reporting of findings on student learning data.
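
    For reference, the two metrics in their common course-level forms — Hake's normalized gain from mean pre/post scores and Cohen's d with a pooled standard deviation; the scores below are fabricated examples.

```python
import numpy as np

pre  = np.array([30., 35., 40., 28., 33.])  # percent correct, pretest
post = np.array([55., 60., 70., 50., 58.])  # percent correct, posttest

g = (post.mean() - pre.mean()) / (100 - pre.mean())  # normalized gain

s_pooled = np.sqrt((pre.var(ddof=1) + post.var(ddof=1)) / 2)
d = (post.mean() - pre.mean()) / s_pooled            # Cohen's d

print(f"g = {g:.2f}, d = {d:.2f}")
```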

  4. Database management systems understanding and applying database technology

    CERN Document Server

    Gorman, Michael M

    1991-01-01

    Database Management Systems: Understanding and Applying Database Technology focuses on the processes, methodologies, techniques, and approaches involved in database management systems (DBMSs). The book first takes a look at ANSI database standards and DBMS applications and components. Discussions focus on application components and DBMS components, implementing the dynamic relationship application, problems and benefits of dynamic relationship DBMSs, the nature of a dynamic relationship application, ANSI/NDL, and DBMS standards. The manuscript then ponders the logical database, interrogation, and the physical database.

  5. BioMart Central Portal: an open database network for the biological community

    Science.gov (United States)

    Guberman, Jonathan M.; Ai, J.; Arnaiz, O.; Baran, Joachim; Blake, Andrew; Baldock, Richard; Chelala, Claude; Croft, David; Cros, Anthony; Cutts, Rosalind J.; Di Génova, A.; Forbes, Simon; Fujisawa, T.; Gadaleta, E.; Goodstein, D. M.; Gundem, Gunes; Haggarty, Bernard; Haider, Syed; Hall, Matthew; Harris, Todd; Haw, Robin; Hu, S.; Hubbard, Simon; Hsu, Jack; Iyer, Vivek; Jones, Philip; Katayama, Toshiaki; Kinsella, R.; Kong, Lei; Lawson, Daniel; Liang, Yong; Lopez-Bigas, Nuria; Luo, J.; Lush, Michael; Mason, Jeremy; Moreews, Francois; Ndegwa, Nelson; Oakley, Darren; Perez-Llamas, Christian; Primig, Michael; Rivkin, Elena; Rosanoff, S.; Shepherd, Rebecca; Simon, Reinhard; Skarnes, B.; Smedley, Damian; Sperling, Linda; Spooner, William; Stevenson, Peter; Stone, Kevin; Teague, J.; Wang, Jun; Wang, Jianxin; Whitty, Brett; Wong, D. T.; Wong-Erasmus, Marie; Yao, L.; Youens-Clark, Ken; Yung, Christina; Zhang, Junjun; Kasprzyk, Arek

    2011-01-01

    BioMart Central Portal is a first of its kind, community-driven effort to provide unified access to dozens of biological databases spanning genomics, proteomics, model organisms, cancer data, ontology information and more. Anybody can contribute an independently maintained resource to the Central Portal, allowing it to be exposed to and shared with the research community, and linking it with the other resources in the portal. Users can take advantage of the common interface to quickly utilize different sources without learning a new system for each. The system also simplifies cross-database searches that might otherwise require several complicated steps. Several integrated tools streamline common tasks, such as converting between ID formats and retrieving sequences. The combination of a wide variety of databases, an easy-to-use interface, robust programmatic access and the array of tools make Central Portal a one-stop shop for biological data querying. Here, we describe the structure of Central Portal and show example queries to demonstrate its capabilities. Database URL: http://central.biomart.org. PMID:21930507

  6. Basic characterization of normal multifocal electroretinogram

    International Nuclear Information System (INIS)

    Fernandez Cherkasova, Lilia; Rojas Rondon, Irene; Castro Perez, Pedro Daniel; Lopez Felipe, Daniel; Santiesteban Freixas, Rosaralis; Mendoza Santiesteban, Carlos E

    2008-01-01

    A review of the scientific literature was made on the novel multifocal electroretinogram technique, the cell mechanisms involved, and some of the factors modifying its results, together with its form of presentation. The basic characteristics of this electrophysiological record, obtained from several regions of the retina of normal subjects, are important in order to create a small-scale comparative database for evaluating pathological eye tracings. All this will greatly help in early, less invasive electrodiagnosis of localized retinal lesions. (Author)

  7. SNaX: A Database of Supernova X-Ray Light Curves

    Energy Technology Data Exchange (ETDEWEB)

    Ross, Mathias; Dwarkadas, Vikram V., E-mail: Mathias_Ross@msn.com, E-mail: vikram@oddjob.uchicago.edu [Astronomy and Astrophysics, University of Chicago, 5640 S Ellis Avenue, ERC 569, Chicago, IL 60637 (United States)

    2017-06-01

    We present the Supernova X-ray Database (SNaX), a compilation of the X-ray data from young supernovae (SNe). The database includes the X-ray fluxes and luminosities of young SNe, from days to years after outburst. The original goal and intent of this study was to present a database of Type IIn SNe (SNe IIn), which we have accomplished. Our ongoing goal is to expand the database to include all SNe for which published data are available. The database interface allows one to search for SNe using various criteria, plot all or selected data points, and download both the data and the plot. The plotting facility allows for significant customization. There is also a facility for the user to submit data that can be directly incorporated into the database. We include an option to fit the decay of any given SN light curve with a power-law. The database includes a conversion of most data points to a common 0.3–8 keV band so that SN light curves may be directly compared with each other. A mailing list has been set up to disseminate information about the database. We outline the structure and function of the database, describe its various features, and outline the plans for future expansion.
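
    The database's power-law fitting option corresponds to fitting L(t) = A t^(-alpha), conveniently done as a straight line in log-log space; the epochs and luminosities below are placeholders.

```python
import numpy as np

t = np.array([30., 100., 300., 1000.])    # days since outburst
L = np.array([8e39, 3e39, 1.2e39, 4e38])  # erg/s in the common 0.3-8 keV band

slope, logA = np.polyfit(np.log10(t), np.log10(L), 1)
print(f"decay index alpha = {-slope:.2f}")
```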

  8. SNaX: A Database of Supernova X-Ray Light Curves

    International Nuclear Information System (INIS)

    Ross, Mathias; Dwarkadas, Vikram V.

    2017-01-01

    We present the Supernova X-ray Database (SNaX), a compilation of the X-ray data from young supernovae (SNe). The database includes the X-ray fluxes and luminosities of young SNe, from days to years after outburst. The original goal and intent of this study was to present a database of Type IIn SNe (SNe IIn), which we have accomplished. Our ongoing goal is to expand the database to include all SNe for which published data are available. The database interface allows one to search for SNe using various criteria, plot all or selected data points, and download both the data and the plot. The plotting facility allows for significant customization. There is also a facility for the user to submit data that can be directly incorporated into the database. We include an option to fit the decay of any given SN light curve with a power-law. The database includes a conversion of most data points to a common 0.3–8 keV band so that SN light curves may be directly compared with each other. A mailing list has been set up to disseminate information about the database. We outline the structure and function of the database, describe its various features, and outline the plans for future expansion.

  9. Haematological and Histological Changes in Common Carp ...

    African Journals Online (AJOL)

    ... a worsening effect of excess dietary copper exposure on the fish. Gills and intestines of fish on both diet 2 and diet 3 were normal during and after exposure, but fatty change was observed throughout the experiment. In conclusion, increasing the copper level of common carp above that required for its normal physiological function, ...

  10. PV System Performance Evaluation by Clustering Production Data to Normal and Non-Normal Operation

    NARCIS (Netherlands)

    Tsafarakis, O.; Sinapis, K.; van Sark, W.G.J.H.M.

    2018-01-01

    The most common method for assessing the performance of a photovoltaic (PV) system is to compare its energy production to reference data (irradiance or a neighboring PV system). Ideally, at normal operation, the compared sets of data tend to show a linear relationship; deviations from this linearity indicate periods of non-normal operation.
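
    A minimal sketch of the normal/non-normal split: fit the linear relation, then flag points with outlying residuals. The robust (MAD-based) threshold is an assumption, chosen so that a single faulty day does not inflate its own cutoff.

```python
import numpy as np

reference = np.array([1.0, 2.0, 3.0, 4.0, 5.0])  # kWh, reference system
pv        = np.array([0.9, 1.9, 2.9, 1.0, 4.8])  # kWh; day 4 underperforms

slope, intercept = np.polyfit(reference, pv, 1)
residuals = pv - (slope * reference + intercept)

mad = np.median(np.abs(residuals - np.median(residuals)))
non_normal = np.abs(residuals - np.median(residuals)) > 3 * 1.4826 * mad
print(non_normal)  # [False False False  True False]
```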

  11. Active In-Database Processing to Support Ambient Assisted Living Systems

    Directory of Open Access Journals (Sweden)

    Wagner O. de Morais

    2014-08-01

    Full Text Available As an alternative to the existing software architectures that underpin the development of smart homes and ambient assisted living (AAL) systems, this work presents a database-centric architecture that takes advantage of active databases and in-database processing. Current platforms supporting AAL systems use database management systems (DBMSs) exclusively for data storage. Active databases employ database triggers to detect and react to events taking place inside or outside of the database. DBMSs can be extended with stored procedures and functions that enable in-database processing. This means that the data processing is integrated and performed within the DBMS. The feasibility and flexibility of the proposed approach were demonstrated with the implementation of three distinct AAL services. The active database was used to detect bed-exits and to discover common room transitions and deviations during the night. In-database machine learning methods were used to model early night behaviors. Consequently, active in-database processing avoids transferring sensitive data outside the database, and this improves performance, security and privacy. Furthermore, centralizing the computation into the DBMS facilitates code reuse, adaptation and maintenance. These are important system properties that take into account the evolving heterogeneity of users, their needs and the devices that are characteristic of smart homes and AAL systems. Therefore, DBMSs can provide capabilities to address requirements for scalability, security, privacy, dependability and personalization in applications of smart environments in healthcare.

  12. Thermodynamic Database for Zirconium Alloys

    International Nuclear Information System (INIS)

    Jerlerud Perez, Rosa

    2003-05-01

    For many decades zirconium alloys have been commonly used in the nuclear power industry as fuel cladding material. Besides their good corrosion resistance and acceptable mechanical properties, the main reason for using these alloys is their low neutron absorption. Zirconium alloys are exposed to a very severe environment during the nuclear fission process, and there is a demand for better design of this material. To meet this requirement, a thermodynamic database is being developed to support material designers. In this thesis some aspects of the development of a thermodynamic database for zirconium alloys are presented. A thermodynamic database represents an important facility in applying thermodynamic equilibrium calculations to a given material, providing: 1) relevant information about the thermodynamic properties of the alloys, e.g., enthalpies, activities, and heat capacities, and 2) significant information for the manufacturing process, e.g., heat treatment temperatures. The basic information in the database comprises first the unary data, i.e., the pure elements, taken from the compilation of the Scientific Group Thermodata Europe (SGTE), and then the binary and ternary systems. All phases present in those binary and ternary systems are described by means of the Gibbs energy dependence on composition and temperature. Many of those binary systems have been taken from published or unpublished works, and others have been assessed in the present work. All the calculations have been made using the Thermo-Calc software, and the representation of the Gibbs energy is obtained by applying the Calphad technique.
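
    As background, phase descriptions in such an assessment commonly take the standard Calphad form for a binary solution phase φ, with a Redlich-Kister excess term (a textbook expression, not a quotation from the thesis):

```latex
G_m^{\varphi} = \sum_i x_i\,{}^{\circ}G_i^{\varphi}
              + RT\sum_i x_i \ln x_i
              + x_A x_B \sum_{\nu \ge 0} {}^{\nu}L_{A,B}^{\varphi}\,(x_A - x_B)^{\nu}
```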

  13. Database Description - RMG | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database name: RMG. Contact: National Institute of Agrobiological Sciences, Ibaraki 305-8602, Japan. Database classification: Nucleotide Sequence Databases. Organism: Oryza sativa Japonica Group (Taxonomy ID: 39947). Reference: Mol Genet Genomics (2002) 268: 434-445.

  14. Database for environmental monitoring at nuclear facilities

    International Nuclear Information System (INIS)

    Raceanu, M.; Varlam, C.; Enache, A.; Faurescu, I.

    2006-01-01

To ensure that an assessment can be made of the impact of nuclear facilities on the local environment, a program of environmental monitoring must be established well in advance of nuclear facility operation. An enormous amount of data must be stored and correlated, starting with location, meteorology, sample-type characterization ranging from water to different kinds of food, radioactivity measurements and isotopic measurements (e.g., for C-14 determination, C-13 isotopic correction is a must). Data modelling is a well-known mechanism for describing data structures at a high level of abstraction. Such models are often used to automatically create database structures and to generate code structures used to access the databases. This has the disadvantage of losing data constraints that might be specified in data models for data checking. An embodiment of the system of the present application includes a computer-readable memory for storing a definitional data table for defining variable symbols representing respective measurable physical phenomena. The definitional data table uniquely defines the variable symbols by relating them to respective data domains for the respective phenomena represented by the symbols. Well-established rules for how the data should be stored and accessed are given in relational database theory. The theory comprises guidelines such as avoiding duplicated data using a technique called normalization, and how to identify the unique identifier for a database record. (author)
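
As a minimal sketch of the normalization idea mentioned above, the hypothetical schema below factors a flat monitoring record into entity tables with unique identifiers, so location and sample details are stored once rather than repeated on every measurement row (table and column names are invented for illustration):

    import sqlite3

    db = sqlite3.connect("monitoring.db")
    db.executescript("""
    CREATE TABLE location (
        location_id INTEGER PRIMARY KEY,   -- unique identifier for a station
        name        TEXT UNIQUE
    );
    CREATE TABLE sample (
        sample_id   INTEGER PRIMARY KEY,
        location_id INTEGER REFERENCES location(location_id),
        medium      TEXT,                  -- water, milk, soil, ...
        taken_at    TEXT
    );
    CREATE TABLE measurement (
        sample_id   INTEGER REFERENCES sample(sample_id),
        quantity    TEXT,                  -- e.g. 'C-14 activity', 'C-13 correction'
        value       REAL,
        unit        TEXT
    );
    """)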

  15. Relational databases

    CERN Document Server

    Bell, D A

    1986-01-01

    Relational Databases explores the major advances in relational databases and provides a balanced analysis of the state of the art in relational databases. Topics covered include capture and analysis of data placement requirements; distributed relational database systems; data dependency manipulation in database schemata; and relational database support for computer graphics and computer aided design. This book is divided into three sections and begins with an overview of the theory and practice of distributed systems, using the example of INGRES from Relational Technology as illustration. The

  16. Exploration of reliability databases and comparison of former IFMIF's results

    International Nuclear Information System (INIS)

    Tapia, Carlos; Dies, Javier; Abal, Javier; Ibarra, Angel; Arroyo, Jose M.

    2011-01-01

There is uncertainty about the applicability of industrial reliability databases to new designs, such as the International Fusion Materials Irradiation Facility (IFMIF), as these designs usually contain elements for which no historical statistics exist. The exploration of reliability data for components common to Accelerator Driven Systems (ADS) and Liquid Metal Technologies (LMT) is a milestone for analyzing the data used in IFMIF reliability reports and for future studies. The accelerator reliability results given in the former IFMIF reports were compared with the explored databases by means of a new accelerator Reliability, Availability, Maintainability (RAM) analysis. The reliability database used in this analysis is traceable.

  17. Database Description - DGBY | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

Full Text Available Database Description. General information of database: Database name: DGBY (Database for Gene expression..., holding data uploaded for so-called phenomics studies). Contact: TEL: +81-29-838-8066. Database classification: Microarray Data and other Gene Expression Databases. Organism: Saccharomyces cerevisiae (Taxonomy ID: 4932). Reference: ...ma J, Ando A, Takagi H. Journal: Yeast. 2008 Mar;25(3):179-90.

  18. Database Description - KOME | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

Full Text Available Database Description. General information of database: Database name: KOME. Contact: ...Sciences, Plant Genome Research Unit, Shoshi Kikuchi. Database classification: Plant databases - Rice. Organism: Oryza sativa (Taxonomy ID: 4530). Database description: information about approximately ... Reference: ...Hayashizaki Y, Kikuchi S. Journal: PLoS One. 2007 Nov 28; 2(11):e1235. Related resources: Rice mutant panel database (Tos17); A Database of Plant Cis-acting Regulatory...

  19. International Nuclear Safety Center database on thermophysical properties of reactor materials

    International Nuclear Information System (INIS)

    Fink, J.K.; Sofu, T.; Ley, H.

    1997-01-01

The International Nuclear Safety Center (INSC) database has been established at Argonne National Laboratory to provide easily accessible data and information necessary to perform nuclear safety analyses and to promote international collaboration through the exchange of nuclear safety information. The INSC database, located on the World Wide Web at http://www.insc.anl.gov, contains critically assessed recommendations for reactor material properties for normal operating conditions, transients, and severe accidents. The initial focus of the database is on thermodynamic and transport properties of materials for water reactors. Materials that are being included in the database are fuel, absorbers, cladding, structural materials, coolant, and liquid mixtures of combinations of UO2, ZrO2, Zr, stainless steel, absorber materials, and concrete. For each property, the database includes: (1) a summary of recommended equations with uncertainties; (2) a detailed data assessment giving the basis for the recommendations, comparisons with experimental data and previous recommendations, and uncertainties; (3) graphs showing recommendations, uncertainties, and comparisons with data and other equations; and (4) property values tabulated as a function of temperature.

  20. Advanced SPARQL querying in small molecule databases.

    Science.gov (United States)

    Galgonek, Jakub; Hurt, Tomáš; Michlíková, Vendula; Onderka, Petr; Schwarz, Jan; Vondrášek, Jiří

    2016-01-01

    In recent years, the Resource Description Framework (RDF) and the SPARQL query language have become more widely used in the area of cheminformatics and bioinformatics databases. These technologies allow better interoperability of various data sources and powerful searching facilities. However, we identified several deficiencies that make usage of such RDF databases restrictive or challenging for common users. We extended a SPARQL engine to be able to use special procedures inside SPARQL queries. This allows the user to work with data that cannot be simply precomputed and thus cannot be directly stored in the database. We designed an algorithm that checks a query against data ontology to identify possible user errors. This greatly improves query debugging. We also introduced an approach to visualize retrieved data in a user-friendly way, based on templates describing visualizations of resource classes. To integrate all of our approaches, we developed a simple web application. Our system was implemented successfully, and we demonstrated its usability on the ChEBI database transformed into RDF form. To demonstrate procedure call functions, we employed compound similarity searching based on OrChem. The application is publicly available at https://bioinfo.uochb.cas.cz/projects/chemRDF.
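
The snippet below is a hedged sketch of issuing a SPARQL query from Python against an RDF endpoint of this kind; the endpoint URL and property names are placeholders, and the paper's procedure-call extension (e.g. for OrChem similarity searching) has an engine-specific syntax that is not reproduced here:

    import requests

    ENDPOINT = "https://example.org/sparql"   # hypothetical endpoint

    QUERY = """
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?compound ?label WHERE {
        ?compound rdfs:label ?label .
        FILTER(CONTAINS(LCASE(?label), "caffeine"))
    } LIMIT 10
    """

    resp = requests.post(ENDPOINT, data={"query": QUERY},
                         headers={"Accept": "application/sparql-results+json"})
    for row in resp.json()["results"]["bindings"]:
        print(row["compound"]["value"], row["label"]["value"])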

  1. Optimization and Accessibility of the Qweak Database

    Science.gov (United States)

    Urban, Erik; Spayde, Damon

    2010-11-01

The Qweak experiment is a multi-institutional collaborative effort at Thomas Jefferson National Accelerator Facility designed to accurately determine the weak nuclear charge of the proton through measurements of the parity-violating asymmetries of electron-proton elastic scattering that result from pulses of electrons with opposite helicities. Through the study of these scattering asymmetries, the Qweak experiment hopes to constrain extensions of the Standard Model or find indications of new physics. Since precision is critical to the success of the Qweak experiment, the collaboration will be taking data for thousands of hours. The Qweak database is responsible for storing the non-binary, processed data of this experiment in a meaningful and organized manner for use at a later date. The goal of this undertaking is not only to create a database which can input and output data quickly, but also to create one which can easily be accessed by those who have minimal knowledge of the database language. Through tests on the system, retrieval and insert times have been optimized; in addition, the implementation of summary tables and additional programs should make the majority of commonly sought results readily available to database novices.
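
The "summary tables" mentioned above are a standard optimization: expensive aggregates are precomputed once so that non-expert users can query a small table directly. A minimal sketch with invented table and column names, not the actual Qweak schema:

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
    CREATE TABLE asymmetry (run_id INTEGER, detector INTEGER, value REAL);
    CREATE INDEX idx_asym_run ON asymmetry(run_id);  -- fast retrieval by run

    -- Precomputed per-run summary, refreshed whenever a run is processed:
    CREATE TABLE run_summary AS
    SELECT run_id,
           COUNT(*)   AS n_events,
           AVG(value) AS mean_asymmetry
    FROM asymmetry
    GROUP BY run_id;
    """)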

  2. UbiProt: a database of ubiquitylated proteins

    Directory of Open Access Journals (Sweden)

    Kondratieva Ekaterina V

    2007-04-01

Full Text Available Abstract Background Post-translational protein modification with ubiquitin, or ubiquitylation, is one of the hottest topics in modern biology due to its dramatic impact on diverse metabolic pathways and its involvement in the pathogenesis of severe human diseases. A great number of eukaryotic proteins have been found to be ubiquitylated. However, data about particular ubiquitylated proteins are rather scattered. Description To fill a general need for collecting and systematizing experimental data concerning ubiquitylation we have developed a new resource, the UbiProt Database, a knowledgebase of ubiquitylated proteins. The database contains retrievable information about overall characteristics of a particular protein, ubiquitylation features, related ubiquitylation and de-ubiquitylation machinery and literature references reflecting experimental evidence of ubiquitylation. UbiProt is available at http://ubiprot.org.ru for free. Conclusion The UbiProt Database is a public resource offering comprehensive information on ubiquitylated proteins. The resource can serve as a general reference source both for researchers in the ubiquitin field and for those who deal with particular ubiquitylated proteins of interest to them. Further development of the UbiProt Database is expected to be of common interest for research groups involved in studies of the ubiquitin system.

  3. Databases in the Central Government : State-of-the-art and the Future

    Science.gov (United States)

    Ohashi, Tomohiro

Management and Coordination Agency, Prime Minister's Office, conducted a questionnaire survey of all Japanese Ministries and Agencies in November 1985 on the present status of databases produced, or planned to be produced, by the central government. According to the results, 132 databases had been produced across 19 Ministries and Agencies. Many of these databases are held by the Defence Agency, the Ministry of Construction, the Ministry of Agriculture, Forestry & Fisheries, and the Ministry of International Trade & Industry, and fall in the fields of architecture & civil engineering, science & technology, R & D, agriculture, forestry and fishery. However, the databases available to other Ministries and Agencies amount to only 39 percent of all produced databases, while 60 percent are unavailable to them, being in-house databases and so forth. This paper outlines the survey results and introduces the databases produced by the central government under the headings of (1) databases commonly used by all Ministries and Agencies, (2) integrated databases, (3) statistical databases and (4) bibliographic databases. Future problems are also described from the viewpoints of technology development and mutual use of databases.

  4. IAEA Illicit Trafficking Database (ITDB)

    International Nuclear Information System (INIS)

    2010-01-01

The IAEA Illicit Trafficking Database (ITDB) was established in 1995 as a unique network of points of contact connecting 100 states and several international organizations. Information is collected from official sources and supplemented by open-source reports. The 1994 General Conference resolution (GC 38) intensified the activities through which the Agency supports Member States in this field. Member States were notified of the completed database in 1995 and invited to participate. The purpose of the ITDB is to facilitate the exchange of authoritative information among States on incidents of illicit trafficking and other related unauthorized activities involving nuclear and other radioactive materials; to collect, maintain and analyse information on such incidents with a view to identifying common threats, trends, and patterns; to use this information for internal planning and prioritisation; and to provide a reliable source of basic information on such incidents to Member States and, when appropriate, to the media.

  5. PEP725 Pan European Phenological Database

    Science.gov (United States)

    Koch, E.; Adler, S.; Lipa, W.; Ungersböck, M.; Zach-Hermann, S.

    2010-09-01

Europe is in the fortunate situation of having a long tradition in phenological networking: the history of collecting phenological data and using them in climatology has its starting point in 1751, when Carl von Linné outlined in his work Philosophia Botanica methods for compiling annual plant calendars of leaf opening, flowering, fruiting and leaf fall together with climatological observations "so as to show how areas differ". In most European countries, phenological observations have been carried out routinely for more than 50 years by different governmental and non-governmental organisations, following different observation guidelines, with the data stored at different places in different formats. This has seriously hampered pan-European studies, as one has to approach many network operators for access to the data before one can start to bring them into a uniform style. From 2004 to 2009 the COST action 725 established a Europe-wide data set of phenological observations. The deliverables of this COST action were not only the common phenological database and common observation guidelines: COST725 also helped to trigger a revival of some old networks and to establish new ones, for instance in Sweden. At the end of the COST action in 2009, the database comprised about 8 million records in total from 15 European countries, plus the data from the International Phenological Gardens (IPG). In January 2010 PEP725 began its work as a follow-up project, with funding from EUMETNET, the network of European meteorological services, and from ZAMG, the Austrian national meteorological service. PEP725 will not only maintain and update the COST725 database, but also bring in phenological data from the time before 1951, develop better quality-checking procedures and ensure open access to the database. An attractive webpage will make phenology and climate impacts on vegetation more visible to the public, enabling the monitoring of vegetation development.

  6. TOPDOM: database of conservatively located domains and motifs in proteins.

    Science.gov (United States)

    Varga, Julia; Dobson, László; Tusnády, Gábor E

    2016-09-01

The TOPDOM database, originally created as a collection of domains and motifs located consistently on the same side of the membrane in α-helical transmembrane proteins, has been updated and extended by taking into consideration consistently localized domains and motifs in globular proteins, too. By taking advantage of the recently developed CCTOP algorithm to determine the type of a protein and predict topology in the case of transmembrane proteins, and by applying a thorough search for domains and motifs as well as utilizing the most up-to-date version of all source databases, we managed to reach a 6-fold increase in the size of the whole database and a 2-fold increase in the number of transmembrane proteins. The TOPDOM database is available at http://topdom.enzim.hu. The webpage utilizes the common Apache, PHP5 and MySQL software to provide the user interface for accessing and searching the database. The database itself is generated on a high-performance computer. Contact: tusnady.gabor@ttk.mta.hu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.

  7. Development of helium isotopic database in Japan

    International Nuclear Information System (INIS)

    Kusano, Tomohiro; Asamori, Koichi; Umeda, Koji

    2012-09-01

We constructed the “Helium Isotopic Database in Japan”, which includes isotope ratios of noble gases and chemical compositions of gas samples collected from hot springs and drinking-water wells. Helium isotopes are excellent natural tracers for indicating the presence of mantle-derived volatiles, because they are chemically inert and thus conserved in crustal rock-water systems. It is common knowledge that mantle degassing does not occur homogeneously over the Earth's surface. 3He/4He ratios higher than the typical crustal values are interpreted to indicate transfer of mantle volatiles into the crust by processes or mechanisms such as magmatic intrusion or faulting. In particular, the spatial variation of helium isotope ratios could provide valuable information to identify volcanic regions and tectonically active areas. The database compiles geochemical data on hot spring gases and related samples from 108 published papers. As a result of the data compilation, the database holds 1728 helium isotopic data points. A CD-ROM is attached as an appendix. (author)

  8. Databases

    Directory of Open Access Journals (Sweden)

    Nick Ryan

    2004-01-01

    Full Text Available Databases are deeply embedded in archaeology, underpinning and supporting many aspects of the subject. However, as well as providing a means for storing, retrieving and modifying data, databases themselves must be a result of a detailed analysis and design process. This article looks at this process, and shows how the characteristics of data models affect the process of database design and implementation. The impact of the Internet on the development of databases is examined, and the article concludes with a discussion of a range of issues associated with the recording and management of archaeological data.

  9. Statistical properties of the normalized ice particle size distribution

    Science.gov (United States)

    Delanoë, Julien; Protat, Alain; Testud, Jacques; Bouniol, Dominique; Heymsfield, A. J.; Bansemer, A.; Brown, P. R. A.; Forbes, R. M.

    2005-05-01

Testud et al. (2001) have recently developed a formalism, known as the "normalized particle size distribution (PSD)", which consists in scaling the diameter and concentration axes in such a way that the normalized PSDs are independent of water content and mean volume-weighted diameter. In this paper we investigate the statistical properties of the normalized PSD for the particular case of ice clouds, which are known to play a crucial role in the Earth's radiation balance. To do so, an extensive database of airborne in situ microphysical measurements has been constructed. A remarkable stability in shape of the normalized PSD is obtained. The impact of using a single analytical shape to represent all PSDs in the database is estimated through an error analysis on the instrumental (radar reflectivity and attenuation) and cloud (ice water content, effective radius, terminal fall velocity of ice crystals, visible extinction) properties. This resulted in a roughly unbiased estimate of the instrumental and cloud parameters, with small standard deviations ranging from 5 to 12%. This error is found to be roughly independent of the temperature range. This stability in shape and its single analytical approximation implies that two parameters are now sufficient to describe any normalized PSD in ice clouds: the intercept parameter N0* and the mean volume-weighted diameter Dm. Statistical relationships (parameterizations) between N0* and Dm have then been evaluated in order to further reduce the number of unknowns. It has been shown that a parameterization of N0* and Dm by temperature could not be envisaged to retrieve the cloud parameters. Nevertheless, Dm-T and mean-maximum-dimension-T parameterizations have been derived and compared to the parameterization of Kristjánsson et al. (2000) currently used to characterize particle size in climate models. The new parameterization generally produces larger particle sizes at any temperature than that of Kristjánsson et al. (2000).
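
For readers unfamiliar with the formalism, the normalized PSD is commonly written (following Testud et al., 2001) as

    N(D) = N_0^{*} \, F(D / D_m), \qquad D_m = \frac{\int N(D) \, D^4 \, dD}{\int N(D) \, D^3 \, dD},

where F is the unified shape function whose stability this study demonstrates. For a liquid-water PSD the intercept parameter is N_0^{*} = \frac{4^4}{\pi \rho_w} \frac{W}{D_m^4}, with W the water content and ρ_w the density of water; for ice, an equivalent melted-diameter or mass-diameter assumption is needed, so this should be read as the generic liquid-phase form rather than the exact definition used for the ice database here.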

  10. Advanced Neuropsychological Diagnostics Infrastructure (ANDI: A Normative Database Created from Control Datasets.

    Directory of Open Access Journals (Sweden)

    Nathalie R. de Vent

    2016-10-01

Full Text Available In the Advanced Neuropsychological Diagnostics Infrastructure (ANDI), datasets of several research groups are combined into a single database, containing scores on neuropsychological tests from healthy participants. For most popular neuropsychological tests the quantity and range of these data surpass those of traditional normative data, thereby enabling more accurate neuropsychological assessment. Because of the unique structure of the database, it facilitates normative comparison methods that were not feasible before, in particular those in which entire profiles of scores are evaluated. In this article, we describe the steps that were necessary to combine the separate datasets into a single database. These steps involve matching variables from multiple datasets, removing outlying values, determining the influence of demographic variables, and finding appropriate transformations to normality. Also, a brief description of the current contents of the ANDI database is given.

  11. A RESTful Web service interface to the ATLAS COOL database

    International Nuclear Information System (INIS)

    Roe, S A

    2010-01-01

The COOL database in ATLAS is primarily used for storing detector conditions data, but also status flags, which are uploaded summaries of information to indicate the detector reliability during a run. This paper introduces the use of CherryPy, a Python application server which acts as an intermediate layer between a web interface and the database, providing a simple means of storing to and retrieving from the COOL database which has found use in many web applications. The software layer is designed to be RESTful, implementing the common CRUD (Create, Read, Update, Delete) database methods by means of interpreting the HTTP method (POST, GET, PUT, DELETE) on the server along with a URL identifying the database resource to be operated on. The format of the data (text, XML, etc.) is also determined by the HTTP protocol. The details of this layer are described along with a popular application demonstrating its use, the ATLAS run list web page.
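
A minimal sketch of the pattern the paper describes, using CherryPy's MethodDispatcher so the HTTP verb selects the handler and the URL names the resource; the in-memory dict is a stand-in for the actual COOL storage layer, which is not reproduced here:

    import cherrypy

    class FlagResource:
        exposed = True      # required by MethodDispatcher
        store = {}          # run number -> status flag (stand-in for COOL)

        def GET(self, run=None):                 # Read
            return str(self.store.get(run, "unknown"))

        def PUT(self, run, flag):                # Create/Update
            self.store[run] = flag
            return "stored"

        def DELETE(self, run):                   # Delete
            self.store.pop(run, None)
            return "deleted"

    if __name__ == "__main__":
        conf = {"/": {"request.dispatch": cherrypy.dispatch.MethodDispatcher()}}
        cherrypy.quickstart(FlagResource(), "/flags", conf)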

  12. The Danish National Database for Asthma

    DEFF Research Database (Denmark)

    Backer, Vibeke; Lykkegaard, Jesper; Bodtger, Uffe

    2016-01-01

AIM OF THE DATABASE: Asthma is the most prevalent chronic disease in children, adolescents, and young adults. In Denmark (with a population of 5.6 million citizens), >400,000 persons are prescribed antiasthmatic medication annually. However, undiagnosed cases, dubious diagnoses, and poor asthma management are probably common. The Danish National Database for Asthma (DNDA) was established in 2015. The aim of the DNDA was to collect data on all patients treated for asthma in Denmark and to monitor asthma occurrence, the quality of diagnosis, and management. STUDY POPULATION: Persons above the age of ... years; the inclusion criteria are a second purchase of asthma prescription medicine within a 2-year period (National Prescription Registry) or a diagnosis of asthma (National Patient Register). Patients with chronic obstructive pulmonary disease are excluded, but smokers are not excluded. DESCRIPTIVE...

  13. Database Description - SSBD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

Full Text Available Database Description. General information of database: Database name: SSBD. Contact: 2-2-3 Minatojima-minamimachi, Chuo-ku, Kobe 650-0047, Japan, RIKEN Quantitative Biology Center, Shuichi Onami. Database classification: Other Molecular Biology Databases; dynamic database. Organisms: Caenorhabditis elegans (Taxonomy ID: 6239), Escherichia coli (Taxonomy ID: 562). Database description: Systems Scie... Reference: Bioinformatics, April 2015, Volume 31, Issue 7.

  14. Database Description - GETDB | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

Full Text Available Database Description. General information of database: Database name: GETDB. Alternative name: Gal4 Enhancer Trap Insertion Database. DOI: 10.18908/lsdba.nbdc00236-000. Creator: Shigeo Haya... Address: Chuo-ku, Kobe 650-0047. Tel: +81-78-306-3185, FAX: +81-78-306-3183. Database classification: Expression / invertebrate genome database. Organism: Drosophila melanogaster (Taxonomy ID: 7227). Database maintenance site: Drosophila Genetic Resource...

  15. JICST Factual DatabaseJICST Chemical Substance Safety Regulation Database

    Science.gov (United States)

    Abe, Atsushi; Sohma, Tohru

JICST Chemical Substance Safety Regulation Database is based on the Database of Safety Laws for Chemical Compounds constructed by the Japan Chemical Industry Ecology-Toxicology & Information Center (JETOC), sponsored by the Science and Technology Agency, in 1987. JICST modified the JETOC database system, added data and started the online service through JOIS-F (JICST Online Information Service-Factual database) in January 1990. The JICST database comprises eighty-three laws and fourteen hundred compounds. The authors outline the database, data items, files and search commands. An example of an online session is presented.

  16. Mobile object retrieval in server-based image databases

    Science.gov (United States)

    Manger, D.; Pagel, F.; Widak, H.

    2013-05-01

The increasing number of mobile phones equipped with powerful cameras leads to huge collections of user-generated images. To utilize the information of the images on site, image retrieval systems are becoming more and more popular for searching for similar objects in a user's own image database. As the computational performance and the memory capacity of mobile devices are constantly increasing, this search can often be performed on the device itself. This is feasible, for example, if the images are represented with global image features or if the search is done using EXIF or textual metadata. However, for larger image databases, if multiple users are meant to contribute to a growing image database or if powerful content-based image retrieval methods with local features are required, a server-based image retrieval backend is needed. In this work, we present a content-based image retrieval system with a client-server architecture working with local features. On the server side, the scalability to large image databases is addressed with the popular bag-of-word model with state-of-the-art extensions. The client end of the system focuses on a lightweight user interface presenting the most similar images from the database and highlighting the visual information that is common with the query image. Additionally, new images can be added to the database, making it a powerful and interactive tool for mobile content-based image retrieval.
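
A conceptual sketch of the bag-of-words backend described above: local descriptors are quantized against a visual vocabulary, and each image becomes a normalized histogram that can be compared cheaply. ORB and k-means are common choices used here for illustration, not necessarily the authors' exact pipeline:

    import cv2
    import numpy as np
    from sklearn.cluster import KMeans

    def descriptors(path):
        """Local ORB descriptors of one image."""
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, des = cv2.ORB_create().detectAndCompute(img, None)
        return des.astype(np.float32)

    def build_vocabulary(image_paths, k=256):
        """Cluster all database descriptors into k visual words."""
        all_des = np.vstack([descriptors(p) for p in image_paths])
        return KMeans(n_clusters=k, n_init=3).fit(all_des)

    def bow_histogram(path, vocab):
        """L2-normalized visual-word histogram of one image."""
        words = vocab.predict(descriptors(path))
        hist = np.bincount(words, minlength=vocab.n_clusters).astype(np.float32)
        return hist / (np.linalg.norm(hist) + 1e-9)

    # Retrieval: rank database images by cosine similarity, e.g.
    # scores = db_hists @ bow_histogram("query.jpg", vocab)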

  17. Development of a prototype commonality analysis tool for use in space programs

    Science.gov (United States)

    Yeager, Dorian P.

    1988-01-01

A software tool to aid in performing commonality analyses, called the Commonality Analysis Problem Solver (CAPS), was designed, and a prototype version (CAPS 1.0) was implemented and tested. CAPS 1.0 runs in an MS-DOS or IBM PC-DOS environment. CAPS is designed around a simple input language which provides a natural syntax for the description of feasibility constraints. It provides its users with the ability to load a database representing a set of design items, describe the feasibility constraints on items in that database, and perform a comprehensive cost analysis to find the most economical substitution pattern.

  18. Armada: a reference model for an evolving database system

    NARCIS (Netherlands)

    F.E. Groffen (Fabian); M.L. Kersten (Martin); S. Manegold (Stefan)

    2006-01-01

The current database deployment palette ranges from networked sensor-based devices to large data/compute Grids. Both extremes present common challenges for distributed DBMS technology. The local storage per device/node/site is severely limited compared to the total data volume being ...

  19. Completeness and data validity for the Danish Fracture Database

    DEFF Research Database (Denmark)

    Gromov, Kirill; Fristed, Jakob V; Brix, Michael

    2013-01-01

    Fracture-related surgery is among the most common orthopaedic procedures. However, to our knowledge, register-based quality assessment of fracture-related surgery has not previously been conducted. The Danish Fracture Database (DFDB) has been developed for the purpose of web-based quality...

  20. Database Description - KAIKOcDNA | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

Full Text Available Database Description. General information of database: Database name: KAIKOcDNA. Creator: National Institute of Agrobiological Sciences, Akiya Jouraku. Database classification: Nucleotide Sequence Databases. Organism: Bombyx mori (Taxonomy ID: 7091). Journal: G3 (Bethesda), 2013 Sep, vol. 9. Web services: not available. Need for user registration: not available.

  1. Download - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

Full Text Available Trypanosomes Database: Download. First of all, please read the license of this database. Data (...1.4 KB) are available via simple search and download, or via FTP. The FTP server is sometimes jammed; if it is, an alternative access link is provided on the site.

  2. The EPIC nutrient database project (ENDB): a first attempt to standardize nutrient databases across the 10 European countries participating in the EPIC study

    DEFF Research Database (Denmark)

    Slimani, N.; Deharveng, G.; Unwin, I.

    2007-01-01

because there is currently no European reference NDB available. Design: A large network involving national compilers, nutritionists and experts on food chemistry and computer science was set up for the 'EPIC Nutrient DataBase' (ENDB) project. A total of 550-1500 foods derived from about 37 000 standardized EPIC 24-h dietary recalls (24-HDRS) were matched as closely as possible to foods available in the 10 national NDBs. The resulting national data sets (NDS) were then successively documented, standardized and evaluated according to common guidelines and using a DataBase Management System ...

  3. License - The Rice Growth Monitoring for The Phenotypic Functional Analysis | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

Full Text Available The Rice Growth Monitoring for The Phenotypic Functional Analysis: License. Last updated: 2011/11/08. You may use this database in compliance with the terms and conditions of the license described below. The license specifies the license terms regarding the use of this database and the requirements you must follow in using this database. The license for this database is specified in the Creative Commons Attribution-Share Alike 2.1 Japan. If you use data from this database, please be sure to attribute this database as follows: The Rice Growth Monitoring for the Phenotypic Fu...

  4. DEVELOPMENT OF THE METHOD AND U.S. NORMALIZATION DATABASE FOR LIFE CYCLE IMPACT ASSESSMENT AND SUSTAINABILITY METRICS

    Science.gov (United States)

    Normalization is an optional step within Life Cycle Impact Assessment (LCIA) that may be used to assist in the interpretation of life cycle inventory data as well as, life cycle impact assessment results. Normalization transforms the magnitude of LCI and LCIA results into relati...

  5. TMC-SNPdb: an Indian germline variant database derived from whole exome sequences.

    Science.gov (United States)

    Upadhyay, Pawan; Gardi, Nilesh; Desai, Sanket; Sahoo, Bikram; Singh, Ankita; Togar, Trupti; Iyer, Prajish; Prasad, Ratnam; Chandrani, Pratik; Gupta, Sudeep; Dutt, Amit

    2016-01-01

Cancer is predominantly a somatic disease. A mutant allele present in a cancer cell genome is considered somatic when it is absent in the paired normal genome as well as in public SNP databases. The current build of dbSNP, the most comprehensive public SNP database, however, inadequately represents several non-European Caucasian populations, posing a limitation in cancer genomic analyses of data from these populations. We present the Tata Memorial Centre-SNP DataBase (TMC-SNPdb) as the first open-source, flexible, upgradable, and freely available SNP database (accessible through dbSNP build 149 and ANNOVAR), representing 114 309 unique germline variants generated from whole-exome data of 62 normal samples derived from cancer patients of Indian origin. The TMC-SNPdb is presented with a companion subtraction tool that can be executed with a command-line option or using an easy-to-use graphical user interface, with the ability to deplete additional Indian-population-specific SNPs over and above the dbSNP and 1000 Genomes databases. Using an institutionally generated whole-exome data set of 132 samples of Indian origin, we demonstrate that TMC-SNPdb could deplete 42, 33 and 28% of false-positive somatic events after dbSNP depletion in Indian-origin tongue, gallbladder, and cervical cancer samples, respectively. Beyond cancer somatic analyses, we anticipate utility of the TMC-SNPdb in several Mendelian germline diseases. In addition to dbSNP build 149 and ANNOVAR, the TMC-SNPdb along with the subtraction tool is available for download in the public domain at the following URL: http://www.actrec.gov.in/pi-webpages/AmitDutt/TMCSNP/TMCSNPdp.html. © The Author(s) 2016. Published by Oxford University Press.
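
Conceptually, the subtraction step removes somatic candidates that also occur in germline panels; below is a simplified sketch of that idea (plain tab-separated variant keys rather than full VCF parsing, and not the TMC-SNPdb tool itself):

    def load_variants(path):
        """Read variants as (chrom, pos, ref, alt) keys from a TSV file."""
        with open(path) as fh:
            return {tuple(line.rstrip("\n").split("\t")[:4]) for line in fh}

    def subtract(candidates, *panels):
        """Drop candidates present in any germline panel (dbSNP,
        1000 Genomes, a population-specific set such as TMC-SNPdb, ...)."""
        germline = set().union(*panels)
        return candidates - germline

    # somatic = subtract(load_variants("tumor_candidates.tsv"),
    #                    load_variants("dbsnp.tsv"),
    #                    load_variants("tmc_snpdb.tsv"))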

  6. Parenting and later substance use among Mexican-origin youth: Moderation by preference for a common language.

    Science.gov (United States)

    Schofield, Thomas J; Toro, Rosa I; Parke, Ross D; Cookston, Jeffrey T; Fabricius, William V; Coltrane, Scott

    2017-04-01

The primary goal of the current study was to test whether parent and adolescent preference for a common language moderates the association between parenting and rank-order change over time in offspring substance use. A sample of Mexican-origin 7th-grade adolescents (mean age = 12.5 years, N = 194, 52% female) was measured longitudinally on use of tobacco, alcohol, and marijuana. Mothers, fathers, and adolescents all reported on consistent discipline and monitoring of adolescents. Both consistent discipline and monitoring predicted relative decreases in substance use into early adulthood, but only among parent-offspring dyads who expressed preference for the same language (either English or Spanish). This moderation held after controlling for parent substance use, family structure, having completed schooling in Mexico, years lived in the United States, family income, and cultural values. An unintended consequence of the immigration process may be the loss of parenting effectiveness that is normally present when parents and adolescents prefer to communicate in a common language. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  7. The reactive metabolite target protein database (TPDB)--a web-accessible resource.

    Science.gov (United States)

    Hanzlik, Robert P; Koen, Yakov M; Theertham, Bhargav; Dong, Yinghua; Fang, Jianwen

    2007-03-16

    The toxic effects of many simple organic compounds stem from their biotransformation to chemically reactive metabolites which bind covalently to cellular proteins. To understand the mechanisms of cytotoxic responses it may be important to know which proteins become adducted and whether some may be common targets of multiple toxins. The literature of this field is widely scattered but expanding rapidly, suggesting the need for a comprehensive, searchable database of reactive metabolite target proteins. The Reactive Metabolite Target Protein Database (TPDB) is a comprehensive, curated, searchable, documented compilation of publicly available information on the protein targets of reactive metabolites of 18 well-studied chemicals and drugs of known toxicity. TPDB software enables i) string searches for author names and proteins names/synonyms, ii) more complex searches by selecting chemical compound, animal species, target tissue and protein names/synonyms from pull-down menus, and iii) commonality searches over multiple chemicals. Tabulated search results provide information, references and links to other databases. The TPDB is a unique on-line compilation of information on the covalent modification of cellular proteins by reactive metabolites of chemicals and drugs. Its comprehensiveness and searchability should facilitate the elucidation of mechanisms of reactive metabolite toxicity. The database is freely available at http://tpdb.medchem.ku.edu/tpdb.html.

  8. Failure and Maintenance Analysis Using Web-Based Reliability Database System

    International Nuclear Information System (INIS)

    Hwang, Seok Won; Kim, Myoung Su; Seong, Ki Yeoul; Na, Jang Hwan; Jerng, Dong Wook

    2007-01-01

Korea Hydro and Nuclear Power Company has launched the development of a database system for PSA and Maintenance Rule implementation. It focuses on easy processing of raw data into a credible and useful database for the risk-informed environment of nuclear power plant operation and maintenance. Even though KHNP had recently completed the PSA for all domestic NPPs as a requirement of the severe accident mitigation strategy, the component failure data were gathered only for quantification purposes within that project, so the data were not adequate for the Living PSA or other generic purposes. Another reason to build a real-time database is the newly adopted Maintenance Rule, which requires the utility to continuously monitor plant risk based on its operation and maintenance performance. Furthermore, as one of the preconditions for Risk Informed Regulation and Application, the nuclear regulatory agency of Korea requests the development and management of a domestic database system. KHNP has been accumulating operation and maintenance data in the Enterprise Resource Planning (ERP) system since its first opening in July 2003. However, a systematic review has not yet been performed to apply the component failure and maintenance history to PSA and other reliability analyses. The data stored in PUMAS before the ERP system was introduced also need to be converted and managed within the new database structure and methodology. This reliability database system is a web-based interface on a UNIX server with an Oracle relational database. It is designed to be applicable to all domestic NPPs with a common database structure and web interfaces; therefore, additional program development should not be necessary for data acquisition and processing in the near future. Categorization standards for systems and components have been implemented to analyze all domestic NPPs. For example, SysCode (for a system code) and CpCode (for a component code) were newly

  9. Database Description - AcEST | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

Full Text Available Database Description. General information of database: Database name: AcEST. Contact: Tokyo-to 192-0397, Tel: +81-42-677-1111 (ext. 3654). Organism: Adiantum capillus-veneris (Taxonomy ID: 13818). Database description: a database of EST sequences of Adiantum capillus-veneris. Reference: ...(3): 223-227. Database maintenance site: Plant Environmental Res...

  10. Bifactor model of WISC-IV: Applicability and measurement invariance in low and normal IQ groups.

    Science.gov (United States)

    Gomez, Rapson; Vance, Alasdair; Watson, Shaun

    2017-07-01

    This study examined the applicability and measurement invariance of the bifactor model of the 10 Wechsler Intelligence Scale for Children-Fourth Edition (WISC-IV) core subtests in groups of children and adolescents (age range from 6 to 16 years) with low (IQ ≤79; N = 229; % male = 75.9) and normal (IQ ≥80; N = 816; % male = 75.0) IQ scores. Results supported this model in both groups, and there was good support for measurement invariance for this model across these groups. For all participants together, the omega hierarchical and explained common variance (ECV) values were high for the general factor and low to negligible for the specific factors. Together, the findings favor the use of the Full Scale IQ (FSIQ) scores of the WISC-IV, but not the subscale index scores. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
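
For reference, the two bifactor indices reported above can be computed from a standardized loading matrix; the sketch below follows the common definitions of omega hierarchical and ECV (the actual WISC-IV loadings from the fitted model are not reproduced here):

    import numpy as np

    def bifactor_indices(g_loadings, spec_loadings):
        """g_loadings: (n_items,) general-factor loadings.
        spec_loadings: (n_items, n_spec) specific-factor loadings,
        zero where an item does not load on a factor."""
        g = np.asarray(g_loadings, dtype=float)
        s = np.asarray(spec_loadings, dtype=float)
        uniq = 1.0 - g**2 - (s**2).sum(axis=1)              # item uniquenesses
        total = g.sum()**2 + (s.sum(axis=0)**2).sum() + uniq.sum()
        omega_h = g.sum()**2 / total                        # omega hierarchical
        ecv = (g**2).sum() / ((g**2).sum() + (s**2).sum())  # explained common variance
        return omega_h, ecv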

  11. ODG: Omics database generator - a tool for generating, querying, and analyzing multi-omics comparative databases to facilitate biological understanding.

    Science.gov (United States)

    Guhlin, Joseph; Silverstein, Kevin A T; Zhou, Peng; Tiffin, Peter; Young, Nevin D

    2017-08-10

Rapid generation of omics data in recent years has resulted in vast amounts of disconnected datasets without systemic integration and knowledge building, while individual groups have made customized, annotated datasets available on the web with few ways to link them to in-lab datasets. With so many research groups generating their own data, the ability to relate it to the larger genomic and comparative genomic context is becoming increasingly crucial to make full use of the data. The Omics Database Generator (ODG) allows users to create customized databases that utilize published genomics data integrated with experimental data which can be queried using a flexible graph database. When provided with omics and experimental data, ODG will create a comparative, multi-dimensional graph database. ODG can import definitions and annotations from other sources such as InterProScan, the Gene Ontology, ENZYME, UniPathway, and others. This annotation data can be especially useful for studying new or understudied species for which transcripts have only been predicted, and rapidly gives additional layers of annotation to predicted genes. In better-studied species, ODG can perform syntenic annotation translations or rapidly identify characteristics of a set of genes or nucleotide locations, such as hits from an association study. ODG provides a web-based user interface for configuring the data import and for querying the database. Queries can also be run from the command line, and the database can be queried directly through programming-language hooks available for most languages. ODG supports most common genomic formats as well as a generic, easy-to-use tab-separated value format for user-provided annotations. ODG is a user-friendly database generation and query tool that adapts to the supplied data to produce a comparative genomic database or multi-layered annotation database. ODG provides rapid comparative genomic annotation and is therefore particularly useful for non-model or

  12. Low-Dose Hyper-Radiosensitivity Is Not a Common Effect in Normal Asynchronous and G2-Phase Fibroblasts of Cancer Patients

    International Nuclear Information System (INIS)

    Słonina, Dorota; Biesaga, Beata; Janecka, Anna; Kabat, Damian; Bukowska-Strakova, Karolina; Gasińska, Anna

    2014-01-01

Purpose: In our previous study, using the micronucleus assay, a low-dose hyper-radiosensitivity (HRS)-like phenomenon was observed for normal fibroblasts of 2 of the 40 cancer patients investigated. In this article we report, for the first time, the survival response of primary fibroblasts from 25 of these patients to low-dose irradiation and answer the question regarding the effect of G2-phase enrichment on HRS elicitation. Methods and Materials: The clonogenic survival of asynchronous as well as G2-phase-enriched fibroblast populations was measured. Separation of G2-phase cells and precise cell counting were performed using a fluorescence-activated cell sorter. Sorted and plated cells were irradiated with single doses (0.1-4 Gy) of 6-MV x-rays. For each patient, at least 4 independent experiments were performed, and the induced-repair model was fitted over the whole data set to confirm the presence of the HRS effect. Results: The HRS response was demonstrated for the asynchronous and G2-phase-enriched cell populations of 4 patients. For the remaining patients, HRS was not defined in either of the 2 fibroblast populations. Thus, G2-phase enrichment had no effect on HRS elicitation. Conclusions: The fact that low-dose hyper-radiosensitivity is not a common effect in normal human fibroblasts implies that HRS may be of little consequence in late-responding connective tissues with regard to radiation fibrosis.
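
The induced-repair model fitted here is commonly written as a modified linear-quadratic survival curve; the notation below follows the usual Joiner-Marples formulation rather than anything specific to this paper:

    S(D) = \exp\!\left( -\alpha_r D \left[ 1 + \left( \frac{\alpha_s}{\alpha_r} - 1 \right) e^{-D/D_c} \right] - \beta D^2 \right),

where α_s is the low-dose slope, α_r the high-dose slope extrapolated from the conventional LQ region, and D_c the dose at which induced repair becomes effective; HRS is indicated when α_s is significantly larger than α_r.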

  13. The IARC TP53 mutation database: a resource for studying the significance of TP53 mutations in human cancers

    Directory of Open Access Journals (Sweden)

    Magali Olivier

    2007-02-01

yeast and human cells to measure the impact of these mutations on various protein properties: (1) transactivation activities (TA) of mutant proteins on reporter genes placed under the control of various p53 response elements, (2) capacity of mutant proteins to induce cell-cycle arrest or apoptosis, (3) ability to exert a dominant-negative effect (DNE) over the wild-type protein, (4) activities of mutant proteins that are independent of and unrelated to the wild-type protein (gain of function, GOF). Prediction models based on interspecies protein sequence conservation have also been developed to predict the functional impact of all possible single amino-acid substitutions.

    Normal">These data have been used to produce systematic functional classifications of mutant proteins and these classifications have been integrated in the IARC TP53 database. New tools have been implemented to visualize these data and analyze mutation frequencies in relation to their functional impact and intrinsic nucleotide substitution rates.

    Normal">Thus, the database allows systematic analyses of the factors that shape the patterns and influence the phenotype of missense mutations in human cancers. In a recent analysis of the database, we showed that that loss of TA capacity is a key factor for the selection of missense mutations, and that difference in mutation frequencies is closely related to nucleotide substitution rates along TP53 coding sequence. TA capacity of inherited missense mutations was also found to be related the age at onset of specific tumor types, mutations with total loss of TA being associated with earlier cancer onset cancers compared to mutations that retain partial trans-activation capacity. Furthermore, 80% of the most common mutants show a capacity to exert dominant-negative effect (DNE over wildtype p53, compared to only 45% of

  14. KALIMER database development

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, Kwan Seong; Lee, Yong Bum; Jeong, Hae Yong; Ha, Kwi Seok

    2003-03-01

The KALIMER database is an advanced database for integrated management of liquid-metal reactor design technology development, built using Web applications. The KALIMER design database is composed of a results database, Inter-Office Communication (IOC), a 3D CAD database, and a reserved documents database. The results database holds research results from all phases of liquid-metal reactor design technology development under mid-term and long-term nuclear R&D. IOC is a linkage control system between subprojects for sharing and integrating the research results for KALIMER. The 3D CAD database provides a schematic overview of the KALIMER design structure. The reserved documents database was developed to manage documents and reports produced since the project's accomplishment.

  15. KALIMER database development

    International Nuclear Information System (INIS)

    Jeong, Kwan Seong; Lee, Yong Bum; Jeong, Hae Yong; Ha, Kwi Seok

    2003-03-01

The KALIMER database is an advanced database for integrated management of liquid-metal reactor design technology development, built using Web applications. The KALIMER design database is composed of a results database, Inter-Office Communication (IOC), a 3D CAD database, and a reserved documents database. The results database holds research results from all phases of liquid-metal reactor design technology development under mid-term and long-term nuclear R&D. IOC is a linkage control system between subprojects for sharing and integrating the research results for KALIMER. The 3D CAD database provides a schematic overview of the KALIMER design structure. The reserved documents database was developed to manage documents and reports produced since the project's accomplishment.

  16. Database Description - RPSD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

Full Text Available Database Description. General information of database: Database name: RPSD. Alternative name: Rice Protein Structure Database. DOI: 10.18908/lsdba.nbdc00749-000. Creator: Toshimasa Yamazaki, National Institute of Agrobiological Sciences, Ibaraki 305-8602, Japan. Database classification: Structure Databases - Protein structure. Organism: Or... Database maintenance site: National Institu...

  17. Database Description - FANTOM5 | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

Full Text Available Database Description. General information of database: Database name: FANTOM5. Organisms: Rattus norvegicus (Taxonomy ID: 10116), Macaca mulatta (Taxonomy ID: 9544). Database maintenance site: RIKEN Center for Life Science Technologies. Web services: not available. Need for user registration: not available.

  18. NoSQL databases

    OpenAIRE

    Mrozek, Jakub

    2012-01-01

    This thesis deals with database systems referred to as NoSQL databases. In the second chapter, I explain basic terms and the theory of database systems. A short explanation is dedicated to database systems based on the relational data model and the SQL standardized query language. Chapter Three explains the concept and history of the NoSQL databases, and also presents database models, major features and the use of NoSQL databases in comparison with traditional database systems. In the fourth ...

  19. Common pitfalls in radiographic interpretation of the Thorax

    International Nuclear Information System (INIS)

    Godshalk, C.P.

    1994-01-01

Errors in radiographic interpretation of the thorax are common. Many mistakes result from interpreting normal anatomic variants as abnormal structures, such as misdiagnosing dorsal and rightward deviation of the cranial thoracic trachea on lateral radiographs of normal dogs. Some of the more common errors specifically relate to misinterpretation of radiographs made on obese patients. The age of the patient also plays a role in misdiagnosis. Aging cats seem to have a horizontally positioned heart on lateral radiographs, and older dogs, primarily collies, often have pulmonary osteomas that are misdiagnosed as metastatic neoplastic disease or healed pulmonary fungal infections.

  20. Validation of a for anaerobic bacteria optimized MALDI-TOF MS biotyper database: The ENRIA project.

    Science.gov (United States)

    Veloo, A C M; Jean-Pierre, H; Justesen, U S; Morris, T; Urban, E; Wybo, I; Kostrzewa, M; Friedrich, A W

    2018-03-12

Within the ENRIA project, several 'expertise laboratories' collaborated in order to optimize the identification of clinical anaerobic isolates by using a widely available platform, the Biotyper Matrix-Assisted Laser Desorption Ionization Time-of-Flight Mass Spectrometry (MALDI-TOF MS). Main Spectral Profiles (MSPs) of well-characterized anaerobic strains were added to one of the latest updates of the Biotyper database, db6903 (the V6 database), for common use. MSPs of anaerobic strains nominated for addition to the Biotyper database are included in this validation. In this study, we validated the optimized database (db5989 [the V5 database] plus the ENRIA MSPs) using 6309 anaerobic isolates. Using the V5 database, 71.1% of the isolates could be identified with high confidence, 16.9% with low confidence, and 12.0% could not be identified. Including the MSPs added to the V6 database and all MSPs created within the ENRIA project, the share of strains identified with high confidence increased to 74.8% and 79.2%, respectively, while strains that could not be identified using MALDI-TOF MS decreased to 10.4% and 7.3%, respectively. The observed increase in high-confidence identifications differed per genus. For Bilophila wadsworthia, Prevotella spp., gram-positive anaerobic cocci and other less commonly encountered species, more strains were identified with higher confidence. A subset of the non-identified strains (42.1%) was identified using 16S rDNA gene sequencing. The obtained identities demonstrated that strains could not be identified either because the generated spectra were of insufficient quality or because no MSP of the encountered species was present in the database. Undoubtedly, the ENRIA project has successfully increased the number of anaerobic isolates that can be identified with high confidence. We therefore recommend further expansion of the database to include less frequently isolated species, as this would also allow us to gain valuable insight into the clinical

  1. An Approach to Acquiring, Normalizing, and Managing EHR Data From a Clinical Data Repository for Studying Pressure Ulcer Outcomes.

    Science.gov (United States)

    Padula, William V; Blackshaw, Leon; Brindle, C Tod; Volchenboum, Samuel L

    2016-01-01

    Changes in the methods that individual facilities follow to collect and store data related to hospital-acquired pressure ulcer (HAPU) occurrences are essential for improving patient outcomes and advancing our understanding the science behind this clinically relevant issue. Using an established electronic health record system at a large, urban, tertiary-care academic medical center, we investigated the process required for taking raw data of HAPU outcomes and submitting these data to a normalization process. We extracted data from 1.5 million patient shifts and filtered observations to those with a Braden score and linked tables in the electronic health record, including (1) Braden scale scores, (2) laboratory outcomes data, (3) surgical time, (4) provider orders, (5) medications, and (6) discharge diagnoses. Braden scores are important measures specific to HAPUs since these scores clarify the daily risk of a hospitalized patient for developing a pressure ulcer. The other more common measures that may be associated with HAPU outcomes are important to organize in a single data frame with Braden scores according to each patient. Primary keys were assigned to each table, and the data were processed through 3 normalization steps and 1 denormalization step. These processes created 8 tables that can be stored efficiently in a clinical database of HAPU outcomes. As hospitals focus on organizing data for review of HAPUs and other types of hospital-acquired conditions, the normalization process we describe in this article offers directions for collaboration between providers and informatics teams using a common language and structure.
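
A minimal sketch of the extract-filter-normalize flow described above, using pandas with invented column names (not the medical center's actual EHR schema):

    import pandas as pd

    shifts = pd.read_csv("patient_shifts.csv")   # one row per patient-shift

    # Filter the 1.5 million shifts to those with a documented Braden score:
    shifts = shifts.dropna(subset=["braden_score"])

    # Normalization step: patient-level attributes move to their own table,
    # keyed by a primary key, leaving a slim table of per-shift scores.
    patients = (shifts[["patient_id", "age", "sex", "discharge_diagnosis"]]
                .drop_duplicates("patient_id")
                .set_index("patient_id"))
    braden = shifts[["patient_id", "shift_start", "braden_score"]]

    # Labs, orders, medications and surgical time would be split out the
    # same way and linked back through patient/shift keys.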

  2. Computer application for database management and networking in a radiophysics service

    International Nuclear Information System (INIS)

    Ferrando Sanchez, A.; Cabello Murillo, E.; Diaz Fuentes, R.; Castro Novais, J.; Clemente Gutierrez, F.; Casa de Juan, M. A. de la; Adaimi Hernandez, P.

    2011-01-01

    Databases prove to be a powerful tool for recording, management and statistical process control in quality assurance work. Developed in a Windows environment under Access (Microsoft Office), our service implements this philosophy on the center's computer network. A computer acting as the server provides the database to the treatment units for the daily recording of quality control measurements and incidents. To avoid problems that are common in this setting, such as shortcuts that stop working after data migration, the use of duplicates, and data loss caused by errors in network connections, we proceeded to manage the connections and database access centrally, easing maintenance and use for all service personnel.

  3. Chromophobe Renal Cell Carcinoma is the Most Common Nonclear Renal Cell Carcinoma in Young Women: Results from the SEER Database.

    Science.gov (United States)

    Daugherty, Michael; Blakely, Stephen; Shapiro, Oleg; Vourganti, Srinivas; Mollapour, Mehdi; Bratslavsky, Gennady

    2016-04-01

    The renal cell cancer incidence is relatively low in younger patients, encompassing 3% to 7% of all renal cell cancers. While young patients may have renal tumors due to hereditary syndromes, in some of them sporadic renal cancers develop without any family history or known genetic mutations. Our recent observations from clinical practice have led us to hypothesize that there is a difference in histological distribution in younger patients compared to the older cohort. We queried the SEER (Surveillance, Epidemiology and End Results) 18-registry database for all patients 20 years old or older who were surgically treated for renal cell carcinoma between 2001 and 2008. Patients with unknown race, grade, stage or histology and those with multiple tumors were excluded from the study. Four cohorts were created by dividing patients by gender, including 1,202 females and 1,715 males younger than 40 years old, and 18,353 females and 30,891 males 40 years old or older. Chi-square analysis was used to compare histological distributions between the cohorts. While clear cell carcinoma was still the most common renal cell cancer subtype across all genders and ages, chromophobe renal cell cancer was the most predominant type of nonclear renal cell cancer histology in young females, representing 62.3% of all nonclear cell renal cell cancers (p <0.001); in the other cohorts, papillary renal cell cancer remained the most common type of nonclear renal cell cancer. It is possible that hormonal factors or specific pathway dysregulations predispose chromophobe renal cell cancer to develop in younger women. We hope that this work provides some new observations that could lead to further studies of gender- and histology-specific renal tumorigenesis. Copyright © 2016 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.

  4. Early results and future challenges of the Danish Fracture Database

    DEFF Research Database (Denmark)

    Gromov, K.; Brix, Michael; Kallemose, T.

    2014-01-01

    INTRODUCTION: The Danish Fracture Database (DFDB) was established in 2011 to provide nationwide prospective quality assessment of all fracture-related surgery. In this paper, we describe the DFDB's setup, present preliminary data from the first annual report and discuss its future potential...... of osteosynthesis were the three most common indications for reoperation and accounted for 34%, 14% and 13%, respectively. CONCLUSION: The DFDB is an online database for registration of fracture-related surgery that allows for basic quality assessment of surgical fracture treatment and large-scale observational...

  5. Database Description - DMPD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database name: DMPD (Dynamic Macrophage Pathway CSML Database). DOI: 10.18908/lsdba.nbdc00558-000. Creator: Masao Nagasaki, University of Tokyo, 4-6-1 Shirokanedai, Minato-ku, Tokyo 108-8639 (Tel: +81-3-5449-5615; Fax: +81-3-5449-5442). Organisms: Homo sapiens (Taxonomy ID: 9606); Mammalia (Taxonomy ID: 40674). Database description: DMPD collects dynamic macrophage pathway models in CSML format. Reference articles and original website information are available from the LSDB Archive.

  6. Database tools for enhanced analysis of TMX-U data. Revision 1

    International Nuclear Information System (INIS)

    Stewart, M.E.; Carter, M.R.; Casper, T.A.; Meyer, W.H.; Perkins, D.E.; Whitney, D.M.

    1986-01-01

    A commercial database software package has been used to create several databases and tools that assist and enhance the ability of experimental physicists to analyze data from the Tandem Mirror Experiment-Upgrade (TMX-U) experiment. This software runs on a DEC-20 computer in M-Division's User Service Center at Lawrence Livermore National Laboratory (LLNL), where data can be analyzed offline from the main TMX-U acquisition computers. When combined with interactive data analysis programs, these tools provide the capability to do batch-style processing or interactive data analysis on the computers in the USC or the supercomputers of the National Magnetic Fusion Energy Computer Center (NMFECC) in addition to the normal processing done by the TMX-U acquisition system. One database tool provides highly reduced data for searching and correlation analysis of several diagnostic signals within a single shot or over many shots. A second database tool provides retrieval and storage of unreduced data for use in detailed analysis of one or more diagnostic signals. We will show how these database tools form the core of an evolving offline data analysis environment on the USC computers

  7. Database Dump - fRNAdb | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Data name: Database Dump. DOI: 10.18908/lsdba.nbdc00452-002. Description: database dump data (tab-separated text). File name: Database_Dump. File URL: ftp://ftp.biosciencedbc.jp/archive/frnadb/LATEST/Database_Dump. File size: 673 MB. Number of data entries: 4 files. The database description, download license and update history are available from the fRNAdb entry in the LSDB Archive.

  8. Analysis of prescription database extracted from standard textbooks of traditional Dai medicine.

    Science.gov (United States)

    Zhang, Chuang; Chongsuvivatwong, Virasakdi; Keawpradub, Niwat; Lin, Yanfang

    2012-08-29

    Traditional Dai Medicine (TDM) is one of the four major ethnomedicines of China. In 2007 a group of experts produced a set of seven Dai medical textbooks on this subject. The first two were selected as the main data source to analyse well-recognized prescriptions. The aim was to quantify patterns of prescriptions and the common ingredients, indications and usages of TDM. A relational database linking the prescriptions, ingredients, herb names, indications, and usages was set up. Frequencies of combination patterns and of common ingredients were tabulated. A total of 200 prescriptions and 402 herbs were compiled. Prescriptions based on "wind" disorders, a detoxification theory that most commonly deals with symptoms of digestive system diseases, accounted for over one third of all prescriptions. The major methods of preparation mostly used roots and whole herbs. The information extracted from the relational database may be useful for understanding symptomatic treatments. Antidote and detoxification theory deserves further research.
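    A relational design of the kind described, with a link table joining prescriptions to herbs, makes such frequency tabulations a single query. The sketch below is a guess at a minimal version of such a schema; the table and column names are invented for illustration, not taken from the study.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE prescription (rx_id INTEGER PRIMARY KEY, indication TEXT);
CREATE TABLE herb (herb_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE ingredient (               -- many-to-many link table
    rx_id INTEGER REFERENCES prescription(rx_id),
    herb_id INTEGER REFERENCES herb(herb_id),
    part_used TEXT                      -- e.g. 'root', 'whole herb'
);
""")

# Tabulate the most common ingredients across all prescriptions:
common = con.execute("""
SELECT h.name, COUNT(*) AS n_prescriptions
FROM ingredient i JOIN herb h ON h.herb_id = i.herb_id
GROUP BY h.name
ORDER BY n_prescriptions DESC
""").fetchall()
con.close()
```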

  9. DOT Online Database

    Science.gov (United States)

    The DOT Online Database provides full-text web search across U.S. Department of Transportation document collections, including Advisory Circulars, together with the department's data collection and distribution policies. The database website is provided by MicroSearch.

  10. OxDBase: a database of oxygenases involved in biodegradation

    Directory of Open Access Journals (Sweden)

    Raghava Gajendra PS

    2009-04-01

    Full Text Available Abstract Background Oxygenases belong to the oxidoreductive group of enzymes (E.C. Class 1), which oxidize substrates by transferring oxygen from molecular oxygen (O2) and utilize FAD/NADH/NADPH as the co-substrate. Oxygenases can further be grouped into two categories, i.e. monooxygenases and dioxygenases, on the basis of the number of oxygen atoms used for oxidation. They play a key role in the metabolism of organic compounds by increasing their reactivity or water solubility or bringing about cleavage of the aromatic ring. Findings We compiled a database of biodegradative oxygenases (OxDBase) which provides a compilation of oxygenase data sourced from the primary literature in the form of a web-accessible database. There are two separate search engines, one each for the monooxygenase and dioxygenase sections of the database. Each enzyme entry contains its common name and synonym, the reaction in which the enzyme is involved, family and subfamily, structure and gene links, and literature citations. The entries are also linked to several external databases including BRENDA, KEGG, ENZYME and UM-BBD, providing wide background information. At present the database contains information on over 235 oxygenases, including both dioxygenases and monooxygenases. The database is freely available online at http://www.imtech.res.in/raghava/oxdbase/. Conclusion OxDBase is the first database dedicated only to oxygenases, and it provides comprehensive information about them. Due to the importance of oxygenases in the chemical synthesis of drug intermediates and the oxidation of xenobiotic compounds, OxDBase would be a very useful tool in the field of synthetic chemistry as well as bioremediation.

  11. Exploiting relational database technology in a GIS

    Science.gov (United States)

    Batty, Peter

    1992-05-01

    All systems for managing data face common problems such as backup, recovery, auditing, security, data integrity, and concurrent update. Other challenges include the ability to share data easily between applications and to distribute data across several computers, while continuing to manage the problems already mentioned. Geographic information systems are no exception and need to tackle all these issues. Standard relational database-management systems (RDBMSs) provide many features to help solve them. This paper describes how the IBM geoManager product approaches these issues by storing all its geographic data in a standard RDBMS in order to take advantage of such features. Areas in which standard RDBMS functions need to be extended are highlighted, and the way in which geoManager does this is explained. The performance implications of storing all data in the relational database are discussed. An important distinction, which needs to be made when considering the applicability of relational database technology to GIS, is drawn between the storage and management of geographic data and the manipulation and analysis of geographic data.

  12. Mining Bug Databases for Unidentified Software Vulnerabilities

    Energy Technology Data Exchange (ETDEWEB)

    Dumidu Wijayasekara; Milos Manic; Jason Wright; Miles McQueen

    2012-06-01

    Identifying software vulnerabilities is becoming more important as critical and sensitive systems increasingly rely on complex software systems. It has been suggested in previous work that some bugs are only identified as vulnerabilities long after the bug has been made public. These vulnerabilities are known as hidden impact vulnerabilities. This paper discusses the feasibility and necessity of mining common publicly available bug databases for vulnerabilities that are yet to be identified. We present a bug database analysis of two well-known and frequently used software packages, namely the Linux kernel and MySQL. It is shown that for both Linux and MySQL, a significant portion of the vulnerabilities discovered between January 2006 and April 2011 were hidden impact vulnerabilities, and that the percentage of hidden impact vulnerabilities increased over the last two years of that period for both software packages. We then propose an improved hidden impact vulnerability identification methodology based on text mining bug databases, and conclude by discussing a few potential problems faced by such a classifier.

  13. Databases

    Digital Repository Service at National Institute of Oceanography (India)

    Kunte, P.D.

    Information on bibliographic as well as numeric/textual databases relevant to coastal geomorphology has been included in a tabular form. Databases cover a broad spectrum of related subjects like coastal environment and population aspects, coastline...

  14. Are common names becoming less common? The rise in uniqueness and individualism in Japan.

    Science.gov (United States)

    Ogihara, Yuji; Fujita, Hiroyo; Tominaga, Hitoshi; Ishigaki, Sho; Kashimoto, Takuya; Takahashi, Ayano; Toyohara, Kyoko; Uchida, Yukiko

    2015-01-01

    We examined whether Japanese culture has become more individualistic by investigating how the practice of naming babies has changed over time. Cultural psychology has revealed substantial cultural variation in human psychology and behavior, emphasizing the mutual construction of socio-cultural environment and mind. However, much of the past research did not account for the fact that culture is changing. Indeed, archival data on behavior (e.g., divorce rates) suggest a rise in individualism in the U.S. and Japan. In addition to archival data, cultural products (which express an individual's psyche and behavior outside the head; e.g., advertising) can also reveal cultural change. However, little research has investigated changes in individualism in East Asia using cultural products. To reveal the dynamic aspects of culture, it is important to present temporal data across cultures. In this study, we examined baby names as a cultural product. If Japanese culture has become more individualistic, parents would be expected to give their children unique names. Using two databases, we calculated the rate of popular baby names between 2004 and 2013. Both databases released the rankings of popular names and their rates within the sample. As Japanese names are generally composed of both written Chinese characters and their pronunciations, we analyzed these two separately. We found that the rate of popular Chinese characters increased, whereas the rate of popular pronunciations decreased. However, only the rate of popular pronunciations was associated with a previously validated collectivism index. Moreover, we examined the pronunciation variation of common combinations of Chinese characters and the written-form variation of common pronunciations. We found that the variation of written forms decreased, whereas the variation of pronunciations increased over time. Taken together, these results showed that parents are giving their children unique names by pairing common ...

  15. Computed tomography of the normal appendix and acute appendicitis

    International Nuclear Information System (INIS)

    Ghiatas, A.A.; Chopra, S.; Chintapalli, K.N.; Esola, C.C.; Daskalogiannaki, M.; Dodd, G.D. III; Gourtsoyiannis, N.

    1997-01-01

    The aim of this article is to present pictorially the spectrum of appearances of the appendix and appendicitis on CT. The images presented were selected from the database of our hospitals. The various appearances of the normal appendix on CT are shown. Appendicitis can be divided into four categories on the basis of CT findings. Examples of each category are shown. (orig.). With 14 figs

  16. Shoulder Ultrasonography: Performance and Common Findings

    Directory of Open Access Journals (Sweden)

    Diana Gaitini

    2012-01-01

    Full Text Available Ultrasound (US) of the shoulder is the most commonly requested examination in musculoskeletal US diagnosis. Sports injuries and degenerative and inflammatory processes are the main sources of shoulder pain and functional limitations. Because of its availability, low cost, dynamic examination process, absence of radiation exposure, and ease of patient compliance, US is the preferred mode for shoulder imaging over other more sophisticated and expensive methods. Operator dependence is the main disadvantage of US examinations. Use of high-end equipment with high-resolution transducers, adherence to a strict examination protocol, good knowledge of normal anatomy and pathological processes, and an awareness of common pitfalls are essential for the optimal performance and interpretation of shoulder US. This article addresses examination techniques, the normal sonographic appearance of tendons, bursae and joints, and the main pathological conditions found in shoulder ultrasonography.

  17. Expression-robust 3D face recognition via weighted sparse representation of multi-scale and multi-component local normal patterns

    KAUST Repository

    Li, Huibin

    2014-06-01

    In the theory of differential geometry, surface normal, as a first order surface differential quantity, determines the orientation of a surface at each point and contains informative local surface shape information. To fully exploit this kind of information for 3D face recognition (FR), this paper proposes a novel highly discriminative facial shape descriptor, namely multi-scale and multi-component local normal patterns (MSMC-LNP). Given a normalized facial range image, three components of normal vectors are first estimated, leading to three normal component images. Then, each normal component image is encoded locally to local normal patterns (LNP) on different scales. To utilize spatial information of facial shape, each normal component image is divided into several patches, and their LNP histograms are computed and concatenated according to the facial configuration. Finally, each original facial surface is represented by a set of LNP histograms including both global and local cues. Moreover, to make the proposed solution robust to the variations of facial expressions, we propose to learn the weight of each local patch on a given encoding scale and normal component image. Based on the learned weights and the weighted LNP histograms, we formulate a weighted sparse representation-based classifier (W-SRC). In contrast to the overwhelming majority of 3D FR approaches which were only benchmarked on the FRGC v2.0 database, we carried out extensive experiments on the FRGC v2.0, Bosphorus, BU-3DFE and 3D-TEC databases, thus including 3D face data captured in different scenarios through various sensors and depicting in particular different challenges with respect to facial expressions. The experimental results show that the proposed approach consistently achieves competitive rank-one recognition rates on these databases despite their heterogeneous nature, and thereby demonstrates its effectiveness and its generalizability. © 2014 Elsevier B.V.
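    The first stage of this pipeline, estimating the three components of the surface normals of a range image and encoding each one with an LBP-style local pattern, can be sketched compactly. The toy code below is only a rough, single-scale illustration under assumed conventions (gradient-based normal estimation, one 8-neighbour encoding, whole-image histograms); the paper's multi-scale encoding, patch division and learned patch weights are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
Z = rng.normal(size=(96, 96)).cumsum(axis=0).cumsum(axis=1)  # stand-in range image

# Normals from the depth gradients: n ~ (-dZ/dx, -dZ/dy, 1), normalized.
dzdy, dzdx = np.gradient(Z)
normals = np.dstack([-dzdx, -dzdy, np.ones_like(Z)])
normals /= np.linalg.norm(normals, axis=2, keepdims=True)
nx, ny, nz = normals[..., 0], normals[..., 1], normals[..., 2]

def lbp8(img):
    """8-neighbour LBP code of the interior pixels (a single scale only)."""
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(c.shape, dtype=np.uint8)
    h, w = img.shape
    for bit, (dy, dx) in enumerate(shifts):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (neigh >= c).astype(np.uint8) << bit
    return code

# One pattern histogram per normal component, concatenated into a descriptor:
hists = [np.histogram(lbp8(comp), bins=256, range=(0, 256))[0]
         for comp in (nx, ny, nz)]
descriptor = np.concatenate(hists)
print(descriptor.shape)  # (768,): 256 bins for each of the 3 components
```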

  18. On transferability and contexts when using simulated grasp databases

    DEFF Research Database (Denmark)

    Jørgensen, Jimmy Alison; Ellekilde, Lars-Peter; Kraft, Dirk

    2015-01-01

    It has become a common practice to use simulation to generate large databases of good grasps for grasp planning in robotics research. However, the existence of a generic simulation context that enables the generation of high quality grasps that can be used in several different contexts such as bi...

  19. The normalization heuristic: an untested hypothesis that may misguide medical decisions.

    Science.gov (United States)

    Aberegg, Scott K; O'Brien, James M

    2009-06-01

    Medical practice is increasingly informed by the evidence from randomized controlled trials. When such evidence is not available, clinical hypotheses based on pathophysiological reasoning and common sense guide clinical decision making. One commonly utilized general clinical hypothesis is the assumption that normalizing abnormal laboratory values and physiological parameters will lead to improved patient outcomes. We refer to the general use of this clinical hypothesis to guide medical therapeutics as the "normalization heuristic". In this paper, we operationally define this heuristic and discuss its limitations as a rule of thumb for clinical decision making. We review historical and contemporaneous examples of normalization practices as empirical evidence for the normalization heuristic and to highlight its frailty as a guide for clinical decision making.

  20. Database Description - eSOL | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database name: eSOL. Creator affiliation: The Research and Development of Biological Databases Project, National Institute of Genetics, and the Tokyo Institute of Technology, 4259 Nagatsuta-cho, Midori-ku, Yokohama, Kanagawa 226-8501, Japan (Tel: +81-45-924-5785). Database classification: protein sequence databases - protein properties. Organism: Escherichia coli (Taxonomy ID: 562). Reference: Proc Natl Acad Sci U S A. 2009 Mar 17;106(11):4201-6. Original website information and update history are available from the LSDB Archive.

  1. PACC information management code for common cause failure analysis

    International Nuclear Information System (INIS)

    Ortega Prieto, P.; Garcia Gay, J.; Mira McWilliams, J.

    1987-01-01

    The purpose of this paper is to present the PACC code which, through adequate data management, eases the task of computerized common cause failure analysis. PACC processes and generates information for the corresponding qualitative analysis, by means of the boolean technique of transformation of variables, and for the quantitative analysis, using either one of several parametric methods or a direct database. For the qualitative analysis, the code creates several functional forms of the transformation equations according to the user's choice. These equations are subsequently processed by boolean manipulation codes, such as SETS. The quantitative calculations of the code can be carried out in two different ways: either starting from a common cause database, or through parametric methods such as the Binomial Failure Rate Method, the Basic Parameters Method or the Multiple Greek Letter Method, among others. (orig.)
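    To make the parametric side concrete, the snippet below sketches the Multiple Greek Letter parameterization for a common cause group, computing the probabilities of basic events that fail exactly k of m components. This is a generic textbook formulation, not the PACC code itself, and the numeric inputs are illustrative only.

```python
from math import comb

def mgl_basic_event_probs(q_total, greeks, m):
    """Probability Q_k of a basic event failing exactly k of m components,
    under the Multiple Greek Letter parameterization.
    greeks = (beta, gamma, delta, ...); rho_1 = 1 and rho_{m+1} = 0."""
    rho = [1.0] + list(greeks)[: m - 1] + [0.0]
    qs = []
    for k in range(1, m + 1):
        prod = 1.0
        for i in range(k):           # product rho_1 * ... * rho_k
            prod *= rho[i]
        qs.append(prod * (1.0 - rho[k]) * q_total / comb(m - 1, k - 1))
    return qs

# Illustrative numbers only: total failure probability 1e-3, beta = 0.10,
# gamma = 0.27, for a common cause group of three components.
print(mgl_basic_event_probs(1e-3, (0.10, 0.27), m=3))
```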

  2. Chess databases as a research vehicle in psychology: Modeling large data.

    Science.gov (United States)

    Vaci, Nemanja; Bilalić, Merim

    2017-08-01

    The game of chess has often been used for psychological investigations, particularly in cognitive science. The clear-cut rules and well-defined environment of chess provide a model for investigations of basic cognitive processes, such as perception, memory, and problem solving, while the precise rating system for the measurement of skill has enabled investigations of individual differences and expertise-related effects. In the present study, we focus on another appealing feature of chess: the large archive databases associated with the game. The German national chess database presented in this study represents fruitful ground for the investigation of multiple longitudinal research questions, since it collects the data of over 130,000 players and spans more than 25 years. The German chess database collects the data of all players, including hobby players, and all tournaments played. This results in a rich and complete collection of the skill, age, and activity of the whole population of chess players in Germany. The database therefore complements the commonly used expertise approach in cognitive science by opening up new possibilities for the investigation of multiple factors that underlie expertise and skill acquisition. Since large datasets are not common in psychology, their introduction also raises the question of optimal and efficient statistical analysis. We offer the database for download and illustrate how it can be used by providing concrete examples and a step-by-step tutorial using different statistical analyses on a range of topics, including skill development over the lifetime, birth cohort effects, effects of activity and inactivity on skill, and gender differences.
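    A minimal sketch of the kind of longitudinal analysis such a database invites is shown below, using pandas. The column names and toy values are guesses at the kind of fields a rating archive carries (player id, birth year, rating, year of record), not the database's actual layout.

```python
import pandas as pd

# Toy stand-in for the rating archive; values are illustrative only.
df = pd.DataFrame({
    "player":     [1, 1, 1, 2, 2, 2],
    "birth_year": [1970, 1970, 1970, 1985, 1985, 1985],
    "year":       [2000, 2005, 2010, 2005, 2010, 2015],
    "rating":     [1850, 1980, 2010, 1600, 1820, 1900],
})
df["age"] = df["year"] - df["birth_year"]
df["cohort"] = (df["birth_year"] // 10) * 10   # decade birth cohorts

# Mean skill development over the lifetime, split by birth cohort:
curve = df.groupby(["cohort", "age"])["rating"].mean().unstack("cohort")
print(curve)
```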

  3. Building an integrated neurodegenerative disease database at an academic health center.

    Science.gov (United States)

    Xie, Sharon X; Baek, Young; Grossman, Murray; Arnold, Steven E; Karlawish, Jason; Siderowf, Andrew; Hurtig, Howard; Elman, Lauren; McCluskey, Leo; Van Deerlin, Vivianna; Lee, Virginia M-Y; Trojanowski, John Q

    2011-07-01

    It is becoming increasingly important to study common and distinct etiologies, clinical and pathological features, and mechanisms related to neurodegenerative diseases such as Alzheimer's disease, Parkinson's disease, amyotrophic lateral sclerosis, and frontotemporal lobar degeneration. Such comparative studies rely on powerful database tools to quickly generate data sets that match the diverse and complementary criteria they set. In this article, we present a novel integrated neurodegenerative disease (INDD) database, which was developed at the University of Pennsylvania (Penn) with the help of a consortium of Penn investigators. Because the work of these investigators is based on Alzheimer's disease, Parkinson's disease, amyotrophic lateral sclerosis, and frontotemporal lobar degeneration, it allowed us to achieve the goal of developing an INDD database for these major neurodegenerative disorders. We used Microsoft SQL Server as the platform, with built-in "backwards" functionality to provide Access as a frontend client to interface with the database. We used PHP Hypertext Preprocessor to create the "frontend" web interface and then used a master lookup table to integrate the individual neurodegenerative disease databases. We also present methods of data entry, database security, database backups, and database audit trails for this INDD database. Using the INDD database, we compared the results of a biomarker study with those obtained using an alternative approach that queried individual databases separately. We have demonstrated that the Penn INDD database has the ability to query multiple database tables from a single console with high accuracy and reliability. The INDD database provides a powerful tool for generating data sets in comparative studies on several neurodegenerative diseases. Copyright © 2011 The Alzheimer's Association. Published by Elsevier Inc. All rights reserved.
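    The master-lookup-table idea can be sketched in a few lines of SQL: one shared identifier is mapped onto the keys used by each disease-specific table, so a single query can span diseases. The sketch below is a hedged illustration only; the table and column names are invented, not the actual Penn schema.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE master_lookup (
    global_id INTEGER PRIMARY KEY,
    ad_id INTEGER,                  -- key into the Alzheimer's table
    pd_id INTEGER                   -- key into the Parkinson's table
);
CREATE TABLE ad (ad_id INTEGER PRIMARY KEY, csf_tau REAL);
CREATE TABLE pd (pd_id INTEGER PRIMARY KEY, updrs INTEGER);
""")

-- = None  # (placeholder removed)
# A single query pulls biomarker rows across the disease databases:
rows = con.execute("""
SELECT m.global_id, a.csf_tau, p.updrs
FROM master_lookup m
LEFT JOIN ad a ON a.ad_id = m.ad_id
LEFT JOIN pd p ON p.pd_id = m.pd_id
""").fetchall()
con.close()
```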

  4. "Mr. Database" : Jim Gray and the History of Database Technologies.

    Science.gov (United States)

    Hanwahr, Nils C

    2017-12-01

    Although the widespread use of the term "Big Data" is comparatively recent, it invokes a phenomenon in the development of database technology with distinct historical contexts. The database engineer Jim Gray, known as "Mr. Database" in Silicon Valley before his disappearance at sea in 2007, was involved in many of the crucial developments since the 1970s that constitute the foundation of exceedingly large and distributed databases. Jim Gray was involved in the development of relational database systems based on the concepts of Edgar F. Codd at IBM in the 1970s before he went on to develop principles of transaction processing that enable the parallel and highly distributed performance of databases today. He was also involved in creating forums for discourse between academia and industry, which influenced industry performance standards as well as database research agendas. As a co-founder of the San Francisco branch of Microsoft Research, Gray increasingly turned toward scientific applications of database technologies, e.g., leading the TerraServer project, an online database of satellite images. Inspired by Vannevar Bush's idea of the memex, Gray laid out his vision of a Personal Memex as well as a World Memex, eventually postulating a new era of data-based scientific discovery termed "Fourth Paradigm Science". This article gives an overview of Gray's contributions to the development of database technology as well as his research agendas and shows that central notions of Big Data have been occupying database engineers for much longer than the actual term has been in use.

  5. Mathematics for Databases

    NARCIS (Netherlands)

    ir. Sander van Laar

    2007-01-01

    A formal description of a database consists of the description of the relations (tables) of the database together with the constraints that must hold on the database. Furthermore, the contents of a database can be retrieved using queries. These constraints and queries for databases can very well be ...

  6. Research and Implementation of Distributed Database HBase Monitoring System

    Directory of Open Access Journals (Sweden)

    Guo Lisi

    2017-01-01

    Full Text Available With the arrival of the big data age, the distributed database HBase has become an important tool for storing massive data. The normal operation of the HBase database is an important guarantee of data storage security, so designing a reasonable HBase monitoring system is of great practical significance. In this article, we introduce a solution, comprising performance monitoring and fault alarm function modules, that meets an operator's demand for monitoring the HBase database in actual production projects. We designed a monitoring system consisting of a flexible and extensible monitoring agent, a monitoring server based on the SSM architecture, and a concise monitoring display layer. Moreover, to deal with pages rendering too slowly in actual operation, we present a solution: reducing the number of SQL queries. It has been proved that reducing SQL queries can effectively improve system performance and user experience. The system works well in monitoring the status of the HBase database, flexibly extending the monitoring metrics, and issuing warnings when faults occur, so it is able to improve the administrator's working efficiency and ensure the smooth operation of the project.
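    The reduce-the-SQL-queries point is essentially the classic N+1 query problem: a dashboard that issues one query per monitored item renders far more slowly than one that issues a single aggregated query. The sketch below illustrates this with sqlite3 and invented metric names; it is not the paper's code.

```python
import sqlite3
import time

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE region_metric (region TEXT, read_qps REAL, ts REAL)")
con.executemany("INSERT INTO region_metric VALUES (?, ?, ?)",
                [(f"r{i % 50}", 100.0 + i, time.time()) for i in range(500)])

# Anti-pattern (N+1): one query per region while rendering the page, e.g.
#   for r in regions:
#       con.execute("SELECT AVG(read_qps) FROM region_metric "
#                   "WHERE region = ?", (r,))

# Reduced-query version: a single aggregated query feeds the whole page.
page_data = con.execute("""
SELECT region, AVG(read_qps) FROM region_metric GROUP BY region
""").fetchall()
con.close()
```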

  7. Soil Properties Database of Spanish Soils Volume IV.- Valencia and Murcia

    International Nuclear Information System (INIS)

    Trueba, C.; Millan, R.; Schmid, T.; Roquero, C; Magister, M.

    1998-01-01

    Soil vulnerability determines the sensitivity of a soil following accidental radioactive contamination with Cs-137 and Sr-90. The Departamento de Impacto Ambiental de la Energia of CIEMAT is carrying out an assessment of the radiological vulnerability of the different Spanish soils found on the Iberian Peninsula. This requires knowledge of the soil properties of the various existing soil types. To achieve this aim, a bibliographical compilation of soil profiles has been made to characterize the different soil types and create a database of their properties. Depending on the year of publication and the type of documentary source, the information compiled from the available bibliography is very heterogeneous; therefore, an important effort has been made to normalize and process the information prior to its incorporation into the database. This volume presents the criteria applied to normalize and process the data, as well as the soil properties of the various soil types belonging to the Comunidades Autonomas of Valencia and Murcia. (Author) 63 refs

  8. Precaval retropancreatic space: Normal anatomy

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Yeon Hee; Kim, Ki Whang; Kim, Myung Jin; Yoo, Hyung Sik; Lee, Jong Tae [Yonsei University College of Medicine, Seoul (Korea, Republic of)

    1992-07-15

    The authors defined the precaval retropancreatic space as the space between the pancreatic head (with the portal vein) and the IVC, and analyzed CT findings to establish the normal structures and dimensions of this space. We retrospectively evaluated 100 normal abdominal CT scans. At the level of the splenoportal confluence, the normal structures between the portal vein and the IVC were, in order of frequency, vessels (21%), lymph nodes (19%), and the caudate lobe of the liver (2%). The maximum AP diameter of the portocaval lymph node was 4 mm. The common bile duct (CBD) was seen in 44%; its diameter was 3 mm on average and 11 mm at maximum. The CBD lay extrapancreatic (75%) and lateral (60.6%) to the pancreatic head. At the level of the IVC and left renal vein, the maximum distance between the CBD and the IVC was 5 mm, and the only tissue between the posterior pancreatic surface and the IVC was fat. Knowledge of these normal structures and measurements should help in differentiating pancreatic masses from retropancreatic masses such as lymphadenopathy.

  9. Database for 238U inelastic scattering cross section evaluation

    International Nuclear Information System (INIS)

    Kanda, Yukinori; Fujikawa, Noboru; Kawano, Toshihiko

    1993-10-01

    There are discrepancies among the evaluated neutron inelastic scattering cross sections for 238U in the evaluated nuclear data files JENDL-3, ENDF/B-VI, JEF-2, BROND-2 and CENDL-2. Their re-evaluation is being discussed internationally, with the aim of obtaining a result that experts worldwide can currently accept in common. This report has been compiled to review the discrepancies among the evaluations in the present data files and to provide a common database for the re-evaluation work. (author)

  10. Improving the Discoverability and Availability of Sample Data and Imagery in NASA's Astromaterials Curation Digital Repository Using a New Common Architecture for Sample Databases

    Science.gov (United States)

    Todd, N. S.; Evans, C.

    2015-01-01

    The Astromaterials Acquisition and Curation Office at NASA's Johnson Space Center (JSC) is the designated facility for curating all of NASA's extraterrestrial samples. The suite of collections includes the lunar samples from the Apollo missions, cosmic dust particles falling into the Earth's atmosphere, meteorites collected in Antarctica, comet and interstellar dust particles from the Stardust mission, asteroid particles from the Japanese Hayabusa mission, and solar wind atoms collected during the Genesis mission. To support planetary science research on these samples, NASA's Astromaterials Curation Office hosts the Astromaterials Curation Digital Repository, which provides descriptions of the missions and collections, and critical information about each individual sample. Our office is implementing several informatics initiatives with the goal of better serving the planetary research community. One of these initiatives aims to increase the availability and discoverability of sample data and images through the use of a newly designed common architecture for Astromaterials Curation databases.

  11. Comprehensive Genetic Database of Expressed Sequence Tags for Coccolithophorids

    Science.gov (United States)

    Ranji, Mohammad; Hadaegh, Ahmad R.

    Coccolithophorids are unicellular, marine, golden-brown algae (Haptophyta) commonly found in near-surface waters in patchy distributions. They belong to the phytoplankton, which are responsible for much of the Earth's primary production. Like plants, phytoplankton live on the energy obtained through photosynthesis, which produces oxygen; a substantial amount of the oxygen in the Earth's atmosphere is produced by phytoplankton in this way. The single-celled Emiliania huxleyi is the best-known species of coccolithophorid and is known for extracting bicarbonate (HCO3) from its environment and producing calcium carbonate to form coccoliths. Coccolithophorids are among the world's primary producers, contributing about 15% of the average oceanic phytoplankton biomass. They produce elaborate, minute calcite platelets (coccoliths) that cover the cell to form a coccosphere, supplying up to 60% of the bulk pelagic calcite deposited on the sea floor. In order to understand the genetics of coccolithophorids and the complexities of their biochemical reactions, we decided to build a database to store a complete profile of these organisms' genomes. Although a variety of such databases currently exist (http://www.geneservice.co.uk/home/), none has yet been developed to comprehensively address the sequencing efforts underway in the coccolithophorid research community. This database is called CocooExpress and is available to the public (http://bioinfo.csusm.edu) for both data queries and sequence contribution.

  12. Ultrasound versus liver function tests for diagnosis of common bile duct stones.

    Science.gov (United States)

    Gurusamy, Kurinchi Selvan; Giljaca, Vanja; Takwoingi, Yemisi; Higgie, David; Poropat, Goran; Štimac, Davor; Davidson, Brian R

    2015-02-26

    Ultrasound and liver function tests (serum bilirubin and serum alkaline phosphatase) are used as screening tests for the diagnosis of common bile duct stones in people suspected of having them. There has been no systematic review of the diagnostic accuracy of ultrasound and liver function tests. Our objective was to determine and compare the accuracy of ultrasound versus liver function tests for the diagnosis of common bile duct stones. We searched MEDLINE, EMBASE, Science Citation Index Expanded, BIOSIS, and Clinicaltrials.gov to September 2012, and searched the references of included studies and of systematic reviews identified from various databases (Database of Abstracts of Reviews of Effects, Health Technology Assessment, Medion, and ARIF (Aggressive Research Intelligence Facility)). We did not restrict studies based on language or publication status, or on whether data were collected prospectively or retrospectively. We included studies that provided the number of true positives, false positives, false negatives, and true negatives for ultrasound, serum bilirubin, or serum alkaline phosphatase. We only accepted studies that confirmed the presence of common bile duct stones by extraction of the stones (irrespective of whether this was done by surgical or endoscopic methods) for a positive test result, and the absence of common bile duct stones by surgical or endoscopic negative exploration of the common bile duct, or symptom-free follow-up for at least six months, for a negative test result, as the reference standard in people suspected of having common bile duct stones. We included participants with or without a prior diagnosis of cholelithiasis; with or without symptoms and complications of common bile duct stones; with or without prior treatment for common bile duct stones; and before or after cholecystectomy. At least two authors screened abstracts and selected studies for inclusion independently. Two authors independently collected data from ...
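    From the 2x2 counts the review extracts per study (true positives, false positives, false negatives, true negatives), the standard accuracy measures follow directly. A small helper illustrates the arithmetic; the counts in the example are invented, not figures from the review.

```python
def diagnostic_accuracy(tp, fp, fn, tn):
    """Summary measures from the 2x2 counts extracted per study."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),   # positive predictive value
        "npv": tn / (tn + fn),   # negative predictive value
    }

# Illustrative counts only:
print(diagnostic_accuracy(tp=45, fp=10, fn=5, tn=140))
```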

  13. Database reliability engineering designing and operating resilient database systems

    CERN Document Server

    Campbell, Laine

    2018-01-01

    The infrastructure-as-code revolution in IT is also affecting database administration. With this practical book, developers, system administrators, and junior to mid-level DBAs will learn how the modern practice of site reliability engineering applies to the craft of database architecture and operations. Authors Laine Campbell and Charity Majors provide a framework for professionals looking to join the ranks of today’s database reliability engineers (DBRE). You’ll begin by exploring core operational concepts that DBREs need to master. Then you’ll examine a wide range of database persistence options, including how to implement key technologies to provide resilient, scalable, and performant data storage and retrieval. With a firm foundation in database reliability engineering, you’ll be ready to dive into the architecture and operations of any modern database. This book covers: Service-level requirements and risk management Building and evolving an architecture for operational visibility ...

  14. COOL, LCG Conditions Database for the LHC Experiments Development and Deployment Status

    CERN Document Server

    Valassi, A; Clemencic, M; Pucciani, G; Schmidt, S A; Wache, M; CERN. Geneva. IT Department, DM

    2009-01-01

    The COOL project provides common software components and tools for the handling of the conditions data of the LHC experiments. It is part of the LCG Persistency Framework (PF), a broader project set up within the context of the LCG Application Area (AA) to devise common persistency solutions for the LHC experiments. COOL software development is the result of the collaboration between the CERN IT Department and ATLAS and LHCb, the two experiments that have chosen it as the basis of their conditions database infrastructure. COOL supports conditions data persistency using several relational technologies (Oracle, MySQL, SQLite and FroNTier), based on the CORAL Common Relational Abstraction Layer. For both experiments, Oracle is the backend used for the deployment of COOL database services at Tier0 and Tier1 sites of the LHC Computing Grid. While the development of new software functionalities is being frozen as LHC operations are ramping up, the main focus for the project in 2008 has shifted to performance optimi...

  15. Analysis of prescription database extracted from standard textbooks of traditional Dai medicine

    Directory of Open Access Journals (Sweden)

    Zhang Chuang

    2012-08-01

    Full Text Available Abstract Background Traditional Dai Medicine (TDM) is one of the four major ethnomedicines of China. In 2007 a group of experts produced a set of seven Dai medical textbooks on this subject. The first two were selected as the main data source to analyse well-recognized prescriptions. Objective To quantify patterns of prescriptions, common ingredients, indications and usages of TDM. Methods A relational database linking the prescriptions, ingredients, herb names, indications, and usages was set up. Frequencies of combination patterns and of common ingredients were tabulated. Results A total of 200 prescriptions and 402 herbs were compiled. Prescriptions based on "wind" disorders, a detoxification theory that most commonly deals with symptoms of digestive system diseases, accounted for over one third of all prescriptions. The major methods of preparation mostly used roots and whole herbs. Conclusion The information extracted from the relational database may be useful for understanding symptomatic treatments. Antidote and detoxification theory deserves further research.

  16. Relational database hybrid model, of high performance and storage capacity for nuclear engineering applications

    International Nuclear Information System (INIS)

    Gomes Neto, Jose

    2008-01-01

    The objective of this work is to present the relational database named FALCAO, created and implemented to support the storage of the monitored variables of the IEA-R1 research reactor, located at the Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP). The logical data model and its direct influence on the integrity of the provided information are carefully considered. The concepts and steps of normalization and denormalization, including the entities and relations involved in the logical model, are presented, as are the effects of the model rules on the acquisition, loading and availability of the final information, viewed from a performance perspective, since the acquisition process loads and provides large amounts of information at short intervals. The SACD application, through its functionalities, presents the information stored in the FALCAO database in a practical and optimized form. The implementation of the FALCAO database was successful, and it is now essential to the routine of the researchers involved, not only because of the substantial improvement of the process but also because of the reliability associated with it. (author)

  17. Solving Relational Database Problems with ORDBMS in an Advanced Database Course

    Science.gov (United States)

    Wang, Ming

    2011-01-01

    This paper introduces how to use the object-relational database management system (ORDBMS) to solve relational database (RDB) problems in an advanced database course. The purpose of the paper is to provide a guideline for database instructors who desire to incorporate the ORDB technology in their traditional database courses. The paper presents…

  18. Generalized Database Management System Support for Numeric Database Environments.

    Science.gov (United States)

    Dominick, Wayne D.; Weathers, Peggy G.

    1982-01-01

    This overview of potential for utilizing database management systems (DBMS) within numeric database environments highlights: (1) major features, functions, and characteristics of DBMS; (2) applicability to numeric database environment needs and user needs; (3) current applications of DBMS technology; and (4) research-oriented and…

  19. Acromioclavicular joint: Normal variation and the diagnosis of dislocation

    Energy Technology Data Exchange (ETDEWEB)

    Keats, T.E.; Pope, T.L. Jr.

    1988-04-01

    Acromioclavicular separation is a common traumatic injury. Diagnosis rests on clinical and radiographic findings. However, normal variation in the alignment of the acromioclavicular joint may make the roentgen diagnosis more difficult. We stress the variations of normal alignment at the acromioclavicular joint and offer suggestions for avoiding pitfalls in this clinical situation.

  20. Early results and future challenges of the Danish Fracture Database

    DEFF Research Database (Denmark)

    Gromov, Kirill; Brix, Michael; Kallemose, Thomas

    2014-01-01

    INTRODUCTION: The Danish Fracture Database (DFDB) was established in 2011 to provide nationwide prospective quality assessment of all fracture-related surgery. In this paper, we describe the DFDB's setup, present preliminary data from the first annual report and discuss its future potential...... are registered. Indication for reoperation is also recorded. The reoperation rate and the one-year mortality are the primary indicators of quality. RESULTS: Approximately 10,000 fracture-related surgical procedures were registered in the database at the time of presentation of the first annual DFDB report...... of osteosynthesis were the three most common indications for reoperation and accounted for 34%, 14% and 13%, respectively. CONCLUSION: The DFDB is an online database for registration of fracture-related surgery that allows for basic quality assessment of surgical fracture treatment and large-scale observational...

  1. Common Nearly Best Linear Estimates of Location and Scale ...

    African Journals Online (AJOL)

    Common nearly best linear estimates of the location and scale parameters of normal and logistic distributions, based on complete samples, are considered. Here, the population from which the samples are drawn is either a normal or a logistic population, or a fusion of both distributions, and the estimates are computed ...

  2. License - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available The Trypanosomes Database is licensed under a Creative Commons Attribution-Share Alike 2.1 Japan license. If you use data from this database, please be sure to attribute the Trypanosomes Database as the source. The license details and update history are available from the LSDB Archive.

  3. 'Getting back to normal': the added value of an art-based programme in promoting 'recovery' for common but chronic mental health problems.

    Science.gov (United States)

    Makin, Sally; Gask, Linda

    2012-03-01

    OBJECTIVES. The aim of this project was to explore the added value of participation in an Arts on Prescription (AoP) programme in aiding the process of recovery in people with common but chronic mental health problems who have already undergone a psychological 'talking'-based therapy. METHODS. The study utilized qualitative in-depth interviews with 15 clients with persistent anxiety and depression who had attended an AoP service and had previously received psychological therapy. RESULTS AND DISCUSSION. Attending AoP aided the process of recovery, which was perceived by participants as 'returning to normality' through enjoying life again, returning to previous activities, setting goals and stopping dwelling on the past. Most were positive about the benefits they had previously gained from talking therapies. However, these alone were not perceived as having been sufficient to achieve recovery. The AoP offered some specific opportunities in this regard, mediated by the therapeutic effect of absorption in an activity, the specific creative potential of art, and the social aspects of attending the programme. CONCLUSIONS. For some people who experience persistent or relapsing common mental health problems, participation in an arts-based programme provides 'added value' in aiding recovery in ways not facilitated by talking therapies alone.

  4. U.S. LCI Database Project--Final Phase I Report

    Energy Technology Data Exchange (ETDEWEB)

    2003-08-01

    This Phase I final report reviews the process and provides a plan for the execution of subsequent phases of the database project, including recommended data development priorities and a preliminary cost estimate. The ultimate goal of the project is to develop publicly available LCI Data modules for commonly used materials, products, and processes.

  5. THE NASA AMES POLYCYCLIC AROMATIC HYDROCARBON INFRARED SPECTROSCOPIC DATABASE: THE COMPUTED SPECTRA

    International Nuclear Information System (INIS)

    Bauschlicher, C. W.; Ricca, A.; Boersma, C.; Mattioda, A. L.; Cami, J.; Peeters, E.; Allamandola, L. J.; Sanchez de Armas, F.; Puerta Saborido, G.; Hudgins, D. M.

    2010-01-01

    The astronomical emission features, formerly known as the unidentified infrared bands, are now commonly ascribed to polycyclic aromatic hydrocarbons (PAHs). The laboratory experiments and computational modeling done at the NASA Ames Research Center to create a collection of PAH IR spectra relevant to testing and refining the PAH hypothesis have been assembled into a spectroscopic database. This database now contains over 800 PAH spectra spanning 2-2000 μm (5000-5 cm⁻¹). These data are now available on the World Wide Web at www.astrochem.org/pahdb. This paper presents an overview of the computational spectra in the database and the tools developed to analyze and interpret astronomical spectra using the database. A description of the online and offline user tools available on the Web site is also presented.

  6. Robustness to non-normality of common tests for the many-sample location problem

    Directory of Open Access Journals (Sweden)

    Azmeri Khan

    2003-01-01

    Full Text Available This paper studies the effect of deviating from the normal distribution assumption when considering the power of two many-sample location test procedures: ANOVA (parametric and Kruskal-Wallis (non-parametric. Power functions for these tests under various conditions are produced using simulation, where the simulated data are produced using MacGillivray and Cannon's [10] recently suggested g-and-k distribution. This distribution can provide data with selected amounts of skewness and kurtosis by varying two nearly independent parameters.
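    A power study of this kind is straightforward to reproduce in miniature. The sketch below draws samples through the g-and-k quantile function and estimates the power of ANOVA and Kruskal-Wallis under a location shift; it is a minimal simulation in the spirit of the study, with arbitrarily chosen parameter values, not the authors' exact design (the constant c = 0.8 is the conventional choice).

```python
import numpy as np
from scipy import stats

def g_and_k_sample(n, A=0.0, B=1.0, g=0.0, k=0.0, c=0.8, rng=None):
    """Draw from the g-and-k distribution via its quantile function:
    Q(z) = A + B*(1 + c*tanh(g*z/2))*(1 + z**2)**k * z,  z ~ N(0, 1).
    g controls skewness, k controls kurtosis (g = k = 0 gives N(A, B^2))."""
    rng = rng or np.random.default_rng()
    z = rng.standard_normal(n)
    return A + B * (1 + c * np.tanh(g * z / 2)) * (1 + z**2) ** k * z

def power(test, shifts, g, k, n=30, reps=2000, alpha=0.05, seed=1):
    """Monte Carlo power of a many-sample location test."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(reps):
        groups = [g_and_k_sample(n, A=s, g=g, k=k, rng=rng) for s in shifts]
        hits += test(*groups).pvalue < alpha
    return hits / reps

shifts = (0.0, 0.0, 0.5)                 # three samples, one shifted in location
for gg, kk in [(0.0, 0.0), (0.5, 0.5)]:  # normal vs skewed/heavy-tailed data
    print(f"g={gg}, k={kk}:",
          "ANOVA power:", power(stats.f_oneway, shifts, gg, kk),
          "Kruskal-Wallis power:", power(stats.kruskal, shifts, gg, kk))
```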

  7. International Shock-Wave Database: Current Status

    Science.gov (United States)

    Levashov, Pavel

    2013-06-01

    ... speed in the Hugoniot state, and time-dependent free-surface or window-interface velocity profiles. Users are able to search the information in the database and obtain the experimental points in tabular or plain-text formats directly via the Internet using common browsers. It is also possible to plot the experimental points for comparison with different approximations and results of equation-of-state calculations. The user can present the results of calculations in text or graphical form and compare them with any experimental data available in the database. A short history of the shock-wave database will be presented and the current capabilities of ISWdb will be demonstrated. Web-site of the project: http://iswdb.info. This work is supported by SNL contracts # 1143875, 1196352.

  8. Federal databases

    International Nuclear Information System (INIS)

    Welch, M.J.; Welles, B.W.

    1988-01-01

    Accident statistics on all modes of transportation are available as risk assessment analytical tools through several federal agencies. This paper reports on the examination of the accident databases by personal contact with the federal staff responsible for administration of the database programs. This activity, sponsored by the Department of Energy through Sandia National Laboratories, is an overview of the national accident data on highway, rail, air, and marine shipping. For each mode, the definition or reporting requirements of an accident are determined and the method of entering the accident data into the database is established. Availability of the database to others, ease of access, costs, and who to contact were prime questions to each of the database program managers. Additionally, how the agency uses the accident data was of major interest

  9. The YH database: the first Asian diploid genome database

    DEFF Research Database (Denmark)

    Li, Guoqing; Ma, Lijia; Song, Chao

    2009-01-01

    ... genome consensus. The YH database is currently one of only three personal genome databases, organizing the original data and analysis results in a user-friendly interface, an endeavor toward achieving the fundamental goals of personalized medicine. The database is available at http://yh.genomics.org.cn.

  10. Database Description - tRNADB-CE | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database name: tRNADB-CE. License: CC BY-SA. Background and funding: MEXT Integrated Database Project. References: "tRNADB-CE: tRNA gene database curated manually by experts", Nucleic Acids Res. 2009 Jan;37(Database issue):D163-8; "tRNADB-CE 2011: tRNA gene database curated ..." (update article). The download license and update history of this database are available from the LSDB Archive.

  11. Comparison of Cloud backup performance and costs in Oracle database

    Directory of Open Access Journals (Sweden)

    Aljaž Zrnec

    2011-06-01

    Full Text Available Current practice for backing up data is based on using backup tapes and remote locations for storing data. Nowadays, with the advent of cloud computing, a new concept of database backup emerges. The paper presents the possibility of making backup copies of data in the cloud, focusing mainly on the performance and economic aspects of cloud backups in comparison to traditional backups. We tested the performance and overall costs of making backup copies of an Oracle database using the Amazon S3 and EC2 cloud services. The cost estimation was performed on the basis of the prices published on the Amazon S3 and Amazon EC2 sites.
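    The mechanics of pushing a database backup piece to S3 reduce to a single API call; estimating the storage cost is then simple arithmetic on the object size. The hedged sketch below uses boto3; the bucket name, file name, and the price-per-GB figure are placeholders, not current Amazon pricing, and running it requires AWS credentials and an existing backup file.

```python
import os
import boto3

BACKUP_FILE = "db_full_backup.bkp"    # hypothetical RMAN backup piece
BUCKET = "my-oracle-backups"          # hypothetical bucket name
PRICE_PER_GB_MONTH = 0.023            # illustrative rate, not real pricing

# Upload the backup piece to S3 (requires configured AWS credentials).
s3 = boto3.client("s3")
s3.upload_file(BACKUP_FILE, BUCKET, f"rman/{BACKUP_FILE}")

# Back-of-the-envelope monthly storage cost for this one piece:
size_gb = os.path.getsize(BACKUP_FILE) / 1e9
print(f"~${size_gb * PRICE_PER_GB_MONTH:.2f} per month to keep this piece")
```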

  12. Deep Time Data Infrastructure: Integrating Our Current Geologic and Biologic Databases

    Science.gov (United States)

    Kolankowski, S. M.; Fox, P. A.; Ma, X.; Prabhu, A.

    2016-12-01

    As our knowledge of Earth's geologic and mineralogical history grows, we require more efficient methods of sharing immense amounts of data. Databases across numerous disciplines offer extensive information on very specific epochs of Earth's history up to its current state, e.g., the fossil record, rock composition, and proteins. These databases could be a powerful force in identifying previously unseen correlations, such as relationships between minerals and proteins. Creating a unifying site that provides a portal to these databases will aid our ability as a collaborative scientific community to utilize our findings more effectively. The Deep-Time Data Infrastructure (DTDI) is currently being defined as part of a larger effort to accomplish this goal. DTDI will not be a new database, but an integration of existing resources. Current geologic and related databases were identified, and documentation of their schemas was established and will be presented as a stage-by-stage progression. Through conceptual modeling focused on variables from their combined records, we will determine the best way to integrate these databases using common factors. The Deep-Time Data Infrastructure will allow geoscientists to bridge gaps in data and further our understanding of Earth's history.

  13. The relevance of the IFPE Database to the modelling of WWER-type fuel behaviour

    International Nuclear Information System (INIS)

    Killeen, J.; Sartori, E.

    2006-01-01

    The aim of the International Fuel Performance Experimental Database (IFPE Database) is to provide, in the public domain, a comprehensive and well-qualified database on zircaloy-clad UO2 fuel for model development and code validation. The data encompass both normal and off-normal operation and include prototypic commercial irradiations as well as experiments performed in Material Testing Reactors. To date, the Database contains over 800 individual cases, providing data on fuel centreline temperatures, dimensional changes and FGR either from in-pile pressure measurements or PIE techniques, including puncturing, Electron Probe Micro Analysis (EPMA) and X-ray Fluorescence (XRF) measurements. This work in assembling and disseminating the Database is carried out in close co-operation and co-ordination between OECD/NEA and the IAEA. The majority of data sets are dedicated to fuel behaviour under LWR irradiation, and every effort has been made to obtain data representative of BWR, PWR and WWER conditions. In each case, the data set contains information on the pre-characterisation of the fuel, cladding and fuel rod geometry, the irradiation history presented in as much detail as the source documents allow, and finally any in-pile or PIE measurements that were made. The purpose of this paper is to highlight data that are relevant specifically to WWER application. To this end, the NEA and IAEA have been successful in obtaining appropriate data for both WWER-440 and WWER-1000-type reactors. These are: 1) Twelve (12) rods from the Finnish-Russian co-operative SOFIT programme; 2) Kola-3 WWER-440 irradiation; 3) MIR ramp tests on Kola-3 rods; 4) Zaporozskaya WWER-1000 irradiation; 5) Novovoronezh WWER-1000 irradiation. Before reviewing these data sets and their usefulness, the paper touches briefly on recent, more novel additions to the Database and on progress made in the use of the Database for the current IAEA FUMEX II Project. Finally, the paper describes the Computer...

  14. PairWise Neighbours database: overlaps and spacers among prokaryote genomes

    Directory of Open Access Journals (Sweden)

    Garcia-Vallvé Santiago

    2009-06-01

    Full Text Available Abstract Background Although prokaryotes live in a variety of habitats and possess different metabolic and genomic complexity, they have several genomic architectural features in common. Overlapping genes are a common feature of prokaryote genomes. Overlap lengths tend to be short because longer overlaps carry a greater risk of deleterious mutations. The spacers between genes also tend to be short because of the tendency to reduce non-coding DNA among prokaryotes. However, they must be long enough to maintain essential regulatory signals such as the Shine-Dalgarno (SD) sequence, which is responsible for efficient translation. Description PairWise Neighbours is an interactive and intuitive database for retrieving information about the spacers and overlapping genes among bacterial and archaeal genomes. It contains 1,956,294 gene pairs from 678 fully sequenced prokaryote genomes and is freely available at the URL http://genomes.urv.cat/pwneigh. This database provides information about the overlaps and their conservation across species. Furthermore, it allows wide analysis of the intergenic regions, providing useful information such as the location and strength of the SD sequence. Conclusion There are experiments and bioinformatic analyses that rely on correct annotation of the initiation site. Therefore, a database that studies the overlaps and spacers among prokaryotes appears desirable. The PairWise Neighbours database permits reliability analysis of the overlapping structures and the study of SD presence and location among adjacent genes, which may help to check the annotation of initiation sites.

  15. Normal limits of the electrocardiogram derived from a large database of Brazilian primary care patients.

    Science.gov (United States)

    Palhares, Daniel M F; Marcolino, Milena S; Santos, Thales M M; da Silva, José L P; Gomes, Paulo R; Ribeiro, Leonardo B; Macfarlane, Peter W; Ribeiro, Antonio L P

    2017-06-13

    Knowledge of the normal limits of the electrocardiogram (ECG) is mandatory for establishing which patients have abnormal ECGs. No studies have assessed the reference standards for a Latin American population. Our aim was to establish the normal ranges of the ECG for pediatric and adult Brazilian primary care patients. This retrospective observational study assessed all the consecutive 12-lead digital electrocardiograms of primary care patients at least 1 year old in Minas Gerais state, Brazil, recorded between 2010 and 2015. ECGs were excluded if there were technical problems, if selected abnormalities were present, or if patients had selected self-declared comorbidities or were on drug therapy. Only the first ECG from patients with multiple ECGs was accepted. The University of Glasgow ECG analysis program was used to automatically interpret the ECGs. For each variable, the 1st, 2nd, 50th, 98th and 99th percentiles were determined and results were compared to selected studies. A total of 1,493,905 ECGs were recorded; 1,007,891 were excluded and 486,014 were analyzed. This large study provided normal values for heart rate, P, QRS and T frontal axis, P and QRS overall duration, PR and QT overall intervals, and QTc corrected by the Hodges, Bazett, Fridericia and Framingham formulae. Overall, the results were similar to those from other studies performed in different populations, but there were differences at extreme ages and in specific measurements. This study has provided reference values for Latinos of both sexes older than 1 year. Our results are comparable to studies performed in different populations.
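
    The reference limits reported here are empirical percentiles. A minimal sketch of how such limits can be derived with NumPy, using a hypothetical array of QTc values (the study's exclusion logic and full pipeline are not reproduced):

      import numpy as np

      # Hypothetical QTc measurements (ms) from ECGs that passed exclusion criteria.
      qtc = np.array([398.0, 412.5, 405.1, 430.2, 389.9, 441.0, 402.3, 418.7])

      # The study reports the 1st, 2nd, 50th, 98th and 99th percentiles per variable.
      limits = {p: np.percentile(qtc, p) for p in (1, 2, 50, 98, 99)}
      print(limits)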

  16. Normal-range verbal-declarative memory in schizophrenia.

    Science.gov (United States)

    Heinrichs, R Walter; Parlar, Melissa; Pinnock, Farena

    2017-10-01

    Cognitive impairment is prevalent and related to functional outcome in schizophrenia, but a significant minority of the patient population overlaps with healthy controls on many performance measures, including declarative-verbal-memory tasks. In this study, we assessed the validity and the clinical and functional implications of normal-range (NR) verbal-declarative memory in schizophrenia. Performance normality was defined using normative data for 8 basic California Verbal Learning Test (CVLT-II; Delis, Kramer, Kaplan, & Ober, 2000) recall and recognition trials. Schizophrenia patients (n = 155) and healthy control participants (n = 74) were assessed for performance normality, defined as scores within 1 SD of the normative mean on all 8 trials, and assigned to normal- and below-NR memory groups. NR schizophrenia patients (n = 26) and control participants (n = 51) did not differ in general verbal ability, on a reading-based estimate of premorbid ability, across all 8 CVLT-II score comparisons, or in terms of intrusion and false-positive errors and auditory working memory. NR memory patients did not differ from memory-impaired patients (n = 129) in symptom severity, and both patient groups were significantly and similarly disabled in terms of functional status in the community. These results confirm a subpopulation of schizophrenia patients with normal verbal-declarative-memory performance and no evidence of decline from higher premorbid ability levels. However, NR patients did not experience less severe psychopathology, nor did they show an advantage in community adjustment relative to impaired patients. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  17. Calibration of gamma camera systems for a multicentre European ¹²³I-FP-CIT SPECT normal database

    DEFF Research Database (Denmark)

    Tossici-Bolt, Livia; Dickson, John C; Sera, Terez

    2011-01-01

    A joint initiative of the European Association of Nuclear Medicine (EANM) Neuroimaging Committee and EANM Research Ltd. aimed to generate a European database of [(123)I]FP-CIT single photon emission computed tomography (SPECT) scans of healthy controls. This study describes the characterization...

  18. Database Administrator

    Science.gov (United States)

    Moore, Pam

    2010-01-01

    The Internet and electronic commerce (e-commerce) generate lots of data. Data must be stored, organized, and managed. Database administrators, or DBAs, work with database software to find ways to do this. They identify user needs, set up computer databases, and test systems. They ensure that systems perform as they should and add people to the…

  19. Evaluation of relational and NoSQL database architectures to manage genomic annotations.

    Science.gov (United States)

    Schulz, Wade L; Nelson, Brent G; Felker, Donn K; Durant, Thomas J S; Torres, Richard

    2016-12-01

    While the adoption of next generation sequencing has rapidly expanded, the informatics infrastructure used to manage the data generated by this technology has not kept pace. Historically, relational databases have provided much of the framework for data storage and retrieval. Newer technologies based on NoSQL architectures may provide significant advantages in storage and query efficiency, thereby reducing the cost of data management. But their relative advantage when applied to biomedical data sets, such as genetic data, has not been characterized. To this end, we compared the storage, indexing, and query efficiency of a common relational database (MySQL), a document-oriented NoSQL database (MongoDB), and a relational database with NoSQL support (PostgreSQL). When used to store genomic annotations from the dbSNP database, we found the NoSQL architectures to outperform traditional, relational models for speed of data storage, indexing, and query retrieval in nearly every operation. These findings strongly support the use of novel database technologies to improve the efficiency of data management within the biological sciences. Copyright © 2016 Elsevier Inc. All rights reserved.
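
    To make the comparison concrete, the following hedged sketch expresses the same annotation lookup against a relational table and a MongoDB collection; the table, collection, and field names are hypothetical and not the schemas used in the study:

      from pymongo import MongoClient

      # Relational form (e.g., MySQL/PostgreSQL): an indexed lookup by rsID.
      SQL_QUERY = "SELECT chrom, pos, ref, alt FROM snp_annotations WHERE rsid = %s;"

      # Document form (MongoDB): the same lookup against a collection.
      client = MongoClient("mongodb://localhost:27017/")
      annotations = client["genomics"]["snp_annotations"]  # hypothetical names
      doc = annotations.find_one({"rsid": "rs12345"},
                                 {"chrom": 1, "pos": 1, "ref": 1, "alt": 1})
      print(doc)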

  20. Database Description - TMFunction | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available ...residue (or mutant) in a protein. The experimental data are collected from the literature both by searching... the sequence database UniProt, the structural database PDB, and literature databases...

  1. A LITERATURE SURVEY ON VARIOUS ILLUMINATION NORMALIZATION TECHNIQUES FOR FACE RECOGNITION WITH FUZZY K NEAREST NEIGHBOUR CLASSIFIER

    Directory of Open Access Journals (Sweden)

    A. Thamizharasi

    2015-05-01

    Full Text Available Face recognition is popular in video surveillance, social networks and criminal identification nowadays. The performance of face recognition is affected by variations in illumination, pose, aging and partial occlusion of the face by wearing hats, scarves, glasses, etc. Illumination variation remains a challenging problem in face recognition. The aim is to compare the various illumination normalization techniques. The illumination normalization techniques include: log transformations, power-law transformations, histogram equalization, adaptive histogram equalization, contrast stretching, Retinex, multi-scale Retinex, difference of Gaussian, DCT, DCT normalization, DWT, gradient face, self quotient, multi-scale self quotient and homomorphic filter. The proposed work consists of three steps. The first step is to preprocess the face image with the above illumination normalization techniques; the second step is to create the train and test database from the preprocessed face images; and the third step is to recognize the face images using a fuzzy K nearest neighbor classifier. The face recognition accuracy of all preprocessing techniques is compared using the AR face database of color images.
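
    Two of the listed techniques, histogram equalization and difference of Gaussians (DoG), can be sketched in a few lines with OpenCV and NumPy; this is an illustrative sketch under hypothetical file names, not the paper's implementation:

      import cv2
      import numpy as np

      img = cv2.imread("face.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input image

      # Histogram equalization: spread intensities to reduce illumination bias.
      equalized = cv2.equalizeHist(img)

      # Difference of Gaussians: subtract a coarse blur from a fine blur to
      # suppress slowly varying illumination while keeping facial structure.
      fine = cv2.GaussianBlur(img.astype(np.float32), (0, 0), sigmaX=1.0)
      coarse = cv2.GaussianBlur(img.astype(np.float32), (0, 0), sigmaX=2.0)
      dog = cv2.normalize(fine - coarse, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)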

  2. Storage and Database Management for Big Data

    Science.gov (United States)

    2015-07-27

    ... cloud models that satisfy different problem requirements. Enterprise Big Data: interactive, on-demand, virtualization. Hadoop uses triple replication: data loss can only occur if three drives fail prior to any one of the failures being corrected. Hadoop is written in Java and is installed ... a visible view into a dataset. There are many popular database management systems such as MySQL [4], PostgreSQL [63], and Oracle [5]. Most commonly...

  3. Construction of In-house Databases in a Corporation

    Science.gov (United States)

    Tamura, Haruki; Mezaki, Koji

    This paper describes the fundamental idea of technical information management in Mitsubishi Heavy Industries, Ltd., and the present status of those activities. It then introduces the background and history of the development of, and problems and countermeasures regarding, the Mitsubishi Heavy Industries Technical Information Retrieval System (called MARON), which started service in May 1985. The system deals with databases covering information common to the whole company (in-house research and technical reports, holdings information for books, journals and so on) and local information held in each business division or department. Anybody from any division can access these databases through the company-wide network. An in-house interlibrary loan subsystem called Orderentry is available, which supports the acquisition of original materials.

  4. PrimateLit Database

    Science.gov (United States)

    PrimateLit: a bibliographic database for primatology. The PrimateLit database is no longer being updated. The project was supported by the National Center for Research Resources (NCRR), National Institutes of Health. The database is a collaborative project of the Wisconsin Primate...

  5. Food composition database development for between country comparisons

    Directory of Open Access Journals (Sweden)

    Merchant Anwar T

    2006-01-01

    Full Text Available Abstract Nutritional assessment by diet analysis is a two-step process consisting of evaluation of food consumption and conversion of food into nutrient intake using a food composition database, which lists the mean nutritional values for a given food portion. Most reports in the literature focus on minimizing errors in estimation of food consumption, but the selection of a specific food composition table used in nutrient estimation is also a source of errors. We are conducting a large prospective study internationally and need to compare diet, assessed by food frequency questionnaires, in a comparable manner between different countries. We have prepared a multi-country food composition database for nutrient estimation in all the countries participating in our study. The nutrient database is primarily based on the USDA food composition database, modified appropriately with reference to local food composition tables, and supplemented with recipes of locally eaten mixed dishes. By doing so we have ensured that the units of measurement, method of selection of foods for testing, and assays used for nutrient estimation are consistent and as current as possible, and yet have taken into account some local variations. Using this common metric for nutrient assessment will reduce differential errors in nutrient estimation and improve the validity of between-country comparisons.
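
    The conversion step described above is a lookup-and-multiply over a composition table. A minimal sketch with hypothetical per-100 g values (not taken from the USDA database):

      # Hypothetical per-100 g nutrient values; a real table would come from the
      # USDA database merged with local food composition tables.
      COMPOSITION = {
          "white rice, cooked": {"energy_kcal": 130, "protein_g": 2.7},
          "lentils, cooked":    {"energy_kcal": 116, "protein_g": 9.0},
      }

      def nutrient_intake(consumption_g):
          """Convert food consumption (grams) into total nutrient intake."""
          totals = {}
          for food, grams in consumption_g.items():
              for nutrient, per_100g in COMPOSITION[food].items():
                  totals[nutrient] = totals.get(nutrient, 0.0) + per_100g * grams / 100.0
          return totals

      print(nutrient_intake({"white rice, cooked": 150, "lentils, cooked": 200}))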

  7. CT of Normal Developmental and Variant Anatomy of the Pediatric Skull: Distinguishing Trauma from Normality.

    Science.gov (United States)

    Idriz, Sanjin; Patel, Jaymin H; Ameli Renani, Seyed; Allan, Rosemary; Vlahos, Ioannis

    2015-01-01

    The use of computed tomography (CT) in clinical practice has been increasing rapidly, with the number of CT examinations performed in adults and children rising by 10% per year in England. While the radiology community strives to reduce the radiation dose associated with pediatric examinations, external factors, including guidelines for pediatric head injury, are raising expectations for the use of cranial CT in the pediatric population. Thus, radiologists are increasingly likely to encounter pediatric head CT examinations in daily practice. The variable appearance of cranial sutures at different ages can be confusing for inexperienced readers of radiologic images. The evolution of multidetector CT with thin-section acquisition increases the clarity of some of these sutures, which may be misinterpreted as fractures. Familiarity with the normal anatomy of the pediatric skull, how it changes with age, and normal variants can assist in translating the increased resolution of multidetector CT into more accurate detection of fractures and confident determination of normality, thereby reducing prolonged hospitalization of children with normal developmental structures that have been misinterpreted as fractures. More important, the potential morbidity and mortality related to false-negative interpretation of fractures as normal sutures may be avoided. The authors describe the normal anatomy of all standard pediatric sutures, common variants, and sutural mimics, thereby providing an accurate and safe framework for CT evaluation of skull trauma in pediatric patients. (©)RSNA, 2015.

  8. A normalization method for combination of laboratory test results from different electronic healthcare databases in a distributed research network.

    Science.gov (United States)

    Yoon, Dukyong; Schuemie, Martijn J; Kim, Ju Han; Kim, Dong Ki; Park, Man Young; Ahn, Eun Kyoung; Jung, Eun-Young; Park, Dong Kyun; Cho, Soo Yeon; Shin, Dahye; Hwang, Yeonsoo; Park, Rae Woong

    2016-03-01

    Distributed research networks (DRNs) afford statistical power by integrating observational data from multiple partners for retrospective studies. However, laboratory test results across care sites are derived using different assays from varying patient populations, making it difficult to simply combine data for analysis. Additionally, existing normalization methods are not suitable for retrospective studies. We normalized laboratory results from different data sources by adjusting for heterogeneous clinico-epidemiologic characteristics of the data and called this the subgroup-adjusted normalization (SAN) method. Subgroup-adjusted normalization renders the means and standard deviations of distributions identical under population structure-adjusted conditions. To evaluate its performance, we compared SAN with existing methods for simulated and real datasets consisting of blood urea nitrogen, serum creatinine, hematocrit, hemoglobin, serum potassium, and total bilirubin. Various clinico-epidemiologic characteristics can be applied together in SAN. For simplicity of comparison, age and gender were used to adjust population heterogeneity in this study. In simulations, SAN had the lowest standardized difference in means (SDM) and Kolmogorov-Smirnov values for all tests, and in the real datasets subgroup-adjusted normalization performed better than normalization using other methods. The SAN method is applicable in a DRN environment and should facilitate analysis of data integrated across DRN partners for retrospective observational studies. Copyright © 2015 John Wiley & Sons, Ltd.
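
    A rough sketch of the idea behind subgroup-level adjustment, assuming pandas and hypothetical column names: each laboratory value is standardized within its site-specific age/gender subgroup, a simplification of the published SAN method:

      import pandas as pd

      def subgroup_standardize(df):
          """Z-score each lab value within its (site, age_group, gender) subgroup."""
          grouped = df.groupby(["site", "age_group", "gender"])["creatinine"]
          df["creatinine_adj"] = (df["creatinine"] - grouped.transform("mean")) \
                                 / grouped.transform("std")
          return df

      df = pd.DataFrame({
          "site":      ["A", "A", "A", "A", "B", "B", "B", "B"],
          "age_group": ["40-59"] * 8,
          "gender":    ["F", "F", "M", "M", "F", "F", "M", "M"],
          "creatinine": [0.7, 0.9, 1.0, 1.2, 0.8, 1.0, 1.1, 1.3],  # hypothetical mg/dL
      })
      print(subgroup_standardize(df))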

  9. Validation of the diagnosis canine epilepsy in a Swedish animal insurance database against practice records

    DEFF Research Database (Denmark)

    Heske, Linda; Berendt, Mette; Jäderlund, Karin Hultin

    2014-01-01

    Canine epilepsy is one of the most common neurological conditions in dogs but the actual incidence of the disease remains unknown. A Swedish animal insurance database has previously been shown useful for the study of disease occurrence in companion animals. The dogs insured by this company...... represent a unique population for epidemiological studies, because they are representative of the general dog population in Sweden and are followed throughout their life allowing studies of disease incidence to be performed. The database covers 50% of all insured dogs (in the year 2012) which represents 40......% of the national dog population. Most commonly, dogs are covered by both veterinary care insurance and life insurance. Previous studies have shown that the general data quality is good, but the validity of a specific diagnosis should be examined carefully before using the database for incidence calculations...

  10. Protein structure database search and evolutionary classification.

    Science.gov (United States)

    Yang, Jinn-Moon; Tung, Chi-Hua

    2006-01-01

    As more protein structures become available and structural genomics efforts provide structural models in a genome-wide strategy, there is a growing need for fast and accurate methods for discovering homologous proteins and evolutionary classifications of newly determined structures. We have developed 3D-BLAST, in part, to address these issues. 3D-BLAST is as fast as BLAST and calculates the statistical significance (E-value) of an alignment to indicate the reliability of the prediction. Using this method, we first identified 23 states of the structural alphabet that represent pattern profiles of the backbone fragments and then used them to represent protein structure databases as structural alphabet sequence databases (SADB). Our method enhanced BLAST as a search method, using a new structural alphabet substitution matrix (SASM) to find the longest common substructures with high-scoring structured segment pairs from an SADB database. Using personal computers with Intel Pentium4 (2.8 GHz) processors, our method searched more than 10 000 protein structures in 1.3 s and achieved a good agreement with search results from detailed structure alignment methods. [3D-BLAST is available at http://3d-blast.life.nctu.edu.tw].

  11. The Porcelain Crab Transcriptome and PCAD, the Porcelain Crab Microarray and Sequence Database

    Energy Technology Data Exchange (ETDEWEB)

    Tagmount, Abderrahmane; Wang, Mei; Lindquist, Erika; Tanaka, Yoshihiro; Teranishi, Kristen S.; Sunagawa, Shinichi; Wong, Mike; Stillman, Jonathon H.

    2010-01-27

    Background: With the emergence of a completed genome sequence of the freshwater crustacean Daphnia pulex, construction of genomic-scale sequence databases for additional crustacean sequences are important for comparative genomics and annotation. Porcelain crabs, genus Petrolisthes, have been powerful crustacean models for environmental and evolutionary physiology with respect to thermal adaptation and understanding responses of marine organisms to climate change. Here, we present a large-scale EST sequencing and cDNA microarray database project for the porcelain crab Petrolisthes cinctipes. Methodology/Principal Findings: A set of ~30K unique sequences (UniSeqs) representing ~19K clusters were generated from ~98K high quality ESTs from a set of tissue specific non-normalized and mixed-tissue normalized cDNA libraries from the porcelain crab Petrolisthes cinctipes. Homology for each UniSeq was assessed using BLAST, InterProScan, GO and KEGG database searches. Approximately 66% of the UniSeqs had homology in at least one of the databases. All EST and UniSeq sequences along with annotation results and coordinated cDNA microarray datasets have been made publicly accessible at the Porcelain Crab Array Database (PCAD), a feature-enriched version of the Stanford and Longhorn Array Databases. Conclusions/Significance: The EST project presented here represents the third largest sequencing effort for any crustacean, and the largest effort for any crab species. Our assembly and clustering results suggest that our porcelain crab EST data set is equally diverse to the much larger EST set generated in the Daphnia pulex genome sequencing project, and thus will be an important resource to the Daphnia research community. Our homology results support the pancrustacea hypothesis and suggest that Malacostraca may be ancestral to Branchiopoda and Hexapoda. Our results also suggest that our cDNA microarrays cover as much of the transcriptome as can reasonably be captured in

  12. YAdumper: extracting and translating large information volumes from relational databases to structured flat files.

    Science.gov (United States)

    Fernández, José M; Valencia, Alfonso

    2004-10-12

    Downloading the information stored in relational databases into XML and other flat formats is a common task in bioinformatics. This periodical dumping of information requires considerable CPU time, disk and memory resources. YAdumper has been developed as a purpose-specific tool for the structured download of complete information from relational databases. YAdumper is a Java application that organizes database extraction following an XML template based on an external Document Type Declaration. Compared with other non-native alternatives, YAdumper substantially reduces memory requirements and considerably improves writing performance.
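
    The core task automated by such a tool can be sketched as streaming rows from a relational query and serializing them as XML elements. A minimal illustration using Python's standard library and SQLite (YAdumper itself is a Java application driven by an XML template; all names here are hypothetical):

      import sqlite3
      import xml.etree.ElementTree as ET

      conn = sqlite3.connect("example.db")  # hypothetical database
      conn.execute("CREATE TABLE IF NOT EXISTS protein (id TEXT, name TEXT)")
      conn.execute("INSERT INTO protein VALUES ('P1', 'kinase A')")

      root = ET.Element("proteins")
      # Iterating the cursor streams rows, keeping memory use low for big dumps.
      for row_id, name in conn.execute("SELECT id, name FROM protein"):
          entry = ET.SubElement(root, "protein", id=row_id)
          entry.text = name

      ET.ElementTree(root).write("dump.xml", encoding="utf-8", xml_declaration=True)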

  13. A computational approach to distinguish somatic vs. germline origin of genomic alterations from deep sequencing of cancer specimens without a matched normal.

    Directory of Open Access Journals (Sweden)

    James X Sun

    2018-02-01

    Full Text Available A key constraint in genomic testing in oncology is that matched normal specimens are not commonly obtained in clinical practice. Thus, while well-characterized genomic alterations do not require normal tissue for interpretation, a significant number of alterations will be unknown in whether they are germline or somatic, in the absence of a matched normal control. We introduce SGZ (somatic-germline-zygosity), a computational method for predicting somatic vs. germline origin and homozygous vs. heterozygous or sub-clonal state of variants identified from deep massively parallel sequencing (MPS) of cancer specimens. The method does not require a patient-matched normal control, enabling broad application in clinical research. SGZ predicts the somatic vs. germline status of each alteration identified by modeling the alteration's allele frequency (AF), taking into account the tumor content, tumor ploidy, and the local copy number. Accuracy of the prediction depends on the depth of sequencing and copy number model fit, which are achieved in our clinical assay by sequencing to high depth (>500x) using MPS, covering 394 cancer-related genes and over 3,500 genome-wide single nucleotide polymorphisms (SNPs). Calls are made using a statistic based on read depth and local variability of SNP AF. To validate the method, we first evaluated performance on samples from 30 lung and colon cancer patients, where we sequenced tumors and matched normal tissue. We examined predictions for 17 somatic hotspot mutations and 20 common germline SNPs in 20,182 clinical cancer specimens. To assess the impact of stromal admixture, we examined three cell lines, which were titrated with their matched normal to six levels (10-75%). Overall, predictions were made in 85% of cases, with 95-99% of variants predicted correctly, a significantly superior performance compared to a basic approach based on AF alone. We then applied the SGZ method to the COSMIC database of known somatic variants...
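
    The heart of such a model is that an alteration's expected allele frequency follows from tumor purity and local copy number. A simplified sketch of that relationship (not the published SGZ code; the formula below is a common purity/copy-number model, stated here as an assumption):

      def expected_af(purity, tumor_copies, mutated_copies, germline):
          """Expected allele frequency under a simple purity/copy-number model.

          purity: tumor fraction of the specimen (0-1)
          tumor_copies: local total copy number in tumor cells (C)
          mutated_copies: copies carrying the variant in tumor cells (m)
          germline: if True, normal cells carry one variant copy (heterozygous)
          """
          normal_variant = 1 if germline else 0
          numerator = purity * mutated_copies + (1 - purity) * normal_variant
          denominator = purity * tumor_copies + (1 - purity) * 2
          return numerator / denominator

      # A germline het SNP in a diploid region of a 60%-pure tumor stays near 0.5,
      # while a heterozygous somatic variant is pulled toward purity/2.
      print(expected_af(0.6, 2, 1, germline=True))   # 0.5
      print(expected_af(0.6, 2, 1, germline=False))  # 0.3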

  14. NoSQL database scaling

    OpenAIRE

    Žardin, Norbert

    2017-01-01

    NoSQL database scaling is a decision where system resources or financial expenses are traded for database performance or other benefits. By scaling a database, database performance and resource usage might increase or decrease; such changes might have a negative impact on an application that uses the database. In this work it is analyzed how database scaling affects database resource usage and performance. As a result, calculations are obtained with which database scaling types and differe...

  15. Big Data and Total Hip Arthroplasty: How Do Large Databases Compare?

    Science.gov (United States)

    Bedard, Nicholas A; Pugely, Andrew J; McHugh, Michael A; Lux, Nathan R; Bozic, Kevin J; Callaghan, John J

    2018-01-01

    Use of large databases for orthopedic research has become extremely popular in recent years. Each database varies in the methods used to capture data and the population it represents. The purpose of this study was to evaluate how these databases differ in reported demographics, comorbidities, and postoperative complications for primary total hip arthroplasty (THA) patients. Primary THA patients were identified within the National Surgical Quality Improvement Program (NSQIP), the Nationwide Inpatient Sample (NIS), Medicare Standard Analytic Files (MED), and the Humana administrative claims database (HAC). NSQIP definitions for comorbidities and complications were matched to corresponding International Classification of Diseases, 9th Revision/Current Procedural Terminology codes to query the other databases. Demographics, comorbidities, and postoperative complications were compared. The number of patients from each database was 22,644 in HAC, 371,715 in MED, 188,779 in NIS, and 27,818 in NSQIP. Age and gender distribution were clinically similar. Overall, there was variation in prevalence of comorbidities and rates of postoperative complications between databases. As an example, NSQIP recorded more than twice the prevalence of obesity found in NIS, while HAC and MED recorded more than twice the prevalence of diabetes found in NSQIP. Rates of deep infection and stroke 30 days after THA differed more than 2-fold between databases. Among databases commonly used in orthopedic research, there is considerable variation in complication rates following THA depending upon the database used for analysis. It is important to consider these differences when critically evaluating database research. Additionally, with the advent of bundled payments, these differences must be considered in risk adjustment models. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. Soil Properties Database of Spanish Soils. Volumen XIII.- Navarra and La Rioja

    International Nuclear Information System (INIS)

    Trueba, C; Millan, R.; Schmid, T.; Lago, C.; Roquero, C; Magister, M.

    1999-01-01

    Soil vulnerability determines the sensitivity of the soil after accidental radioactive contamination with Cs-137 and Sr-90. The Departamento de Impacto Ambiental de la Energia of CIEMAT is carrying out an assessment of the radiological vulnerability of the different Spanish soils found on the Iberian Peninsula. This requires knowledge of the soil properties for the various existing soil types. To achieve this aim, a bibliographical compilation of soil profiles has been made to characterize the different soil types and create a database of their properties. Depending on the year of publication and the type of documentary source, the information compiled from the available bibliography is very heterogeneous. Therefore, an important effort has been made to normalize and process the information prior to its incorporation into the database. This volume presents the criteria applied to normalize and process the data, as well as the soil properties of the various soil types belonging to the Comunidades Autonomas of Navarra and La Rioja. (Author) 46 refs

  17. The reactive metabolite target protein database (TPDB) – a web-accessible resource

    Directory of Open Access Journals (Sweden)

    Dong Yinghua

    2007-03-01

    Full Text Available Abstract Background The toxic effects of many simple organic compounds stem from their biotransformation to chemically reactive metabolites which bind covalently to cellular proteins. To understand the mechanisms of cytotoxic responses it may be important to know which proteins become adducted and whether some may be common targets of multiple toxins. The literature of this field is widely scattered but expanding rapidly, suggesting the need for a comprehensive, searchable database of reactive metabolite target proteins. Description The Reactive Metabolite Target Protein Database (TPDB) is a comprehensive, curated, searchable, documented compilation of publicly available information on the protein targets of reactive metabolites of 18 well-studied chemicals and drugs of known toxicity. TPDB software enables (i) string searches for author names and protein names/synonyms, (ii) more complex searches by selecting chemical compound, animal species, target tissue and protein names/synonyms from pull-down menus, and (iii) commonality searches over multiple chemicals. Tabulated search results provide information, references and links to other databases. Conclusion The TPDB is a unique on-line compilation of information on the covalent modification of cellular proteins by reactive metabolites of chemicals and drugs. Its comprehensiveness and searchability should facilitate the elucidation of mechanisms of reactive metabolite toxicity. The database is freely available at http://tpdb.medchem.ku.edu/tpdb.html

  18. Creation and characterization of Japanese standards for myocardial perfusion SPECT. Database from the Japanese Society of Nuclear Medicine Working Group

    International Nuclear Information System (INIS)

    Nakajima, Kenichi; Kumita, Shinichiro; Ishida, Yoshio

    2007-01-01

    Standards for myocardial single-photon emission computed tomography (SPECT) adapted for a Japanese population were not available. The purpose of this study was to create standard files approved by the Japanese Society of Nuclear Medicine and to characterize the myocardial perfusion pattern of this population. With the collaboration of nine hospitals, a total of 326 sets of exercise-rest myocardial perfusion images were accumulated from subjects with a low likelihood of cardiac disease. The normal database included a 99mTc-methoxyisobutylisonitrile (MIBI)/tetrofosmin myocardial perfusion study with 360 deg (n=80) and 180 deg (n=56) rotations, a 201Tl study with 360 deg (n=115) and 180 deg (n=54) rotations, and a dual-isotope study with 360 deg rotation (n=27). The projection images were transferred in digital imaging and communications in medicine (DICOM) format, reconstructed, and analyzed with polar maps. The projection data from multiple centers were successfully transferred to a common format for SPECT reconstruction. When the average values were analyzed using a 17-segment model, myocardial counts in the septal segment differed significantly between 180 deg and 360 deg rotation acquisitions. Regional differences were observed between men and women in the inferior and anterior regions. A tracer difference between 99mTc and 201Tl was also observed in some segments. The attenuation patterns differed significantly between subjects from the United States and those from Japan. Myocardial perfusion data specific for the Japanese population were generated. The normal database can serve as a standard for nuclear cardiology work conducted in Japan. (author)

  19. KaBOB: ontology-based semantic integration of biomedical databases.

    Science.gov (United States)

    Livingston, Kevin M; Bada, Michael; Baumgartner, William A; Hunter, Lawrence E

    2015-04-23

    The ability to query many independent biological databases using a common ontology-based semantic model would facilitate deeper integration and more effective utilization of these diverse and rapidly growing resources. Despite ongoing work moving toward shared data formats and linked identifiers, significant problems persist in semantic data integration in order to establish shared identity and shared meaning across heterogeneous biomedical data sources. We present five processes for semantic data integration that, when applied collectively, solve seven key problems. These processes include making explicit the differences between biomedical concepts and database records, aggregating sets of identifiers denoting the same biomedical concepts across data sources, and using declaratively represented forward-chaining rules to take information that is variably represented in source databases and integrating it into a consistent biomedical representation. We demonstrate these processes and solutions by presenting KaBOB (the Knowledge Base Of Biomedicine), a knowledge base of semantically integrated data from 18 prominent biomedical databases using common representations grounded in Open Biomedical Ontologies. An instance of KaBOB with data about humans and seven major model organisms can be built using on the order of 500 million RDF triples. All source code for building KaBOB is available under an open-source license. KaBOB is an integrated knowledge base of biomedical data representationally based in prominent, actively maintained Open Biomedical Ontologies, thus enabling queries of the underlying data in terms of biomedical concepts (e.g., genes and gene products, interactions and processes) rather than features of source-specific data schemas or file formats. KaBOB resolves many of the issues that routinely plague biomedical researchers intending to work with data from multiple data sources and provides a platform for ongoing data integration and development and for

  20. Virtual materials design using databases of calculated materials properties

    International Nuclear Information System (INIS)

    Munter, T R; Landis, D D; Abild-Pedersen, F; Jones, G; Wang, S; Bligaard, T

    2009-01-01

    Materials design is most commonly carried out by experimental trial and error techniques. Current trends indicate that the increased complexity of newly developed materials, the exponential growth of the available computational power, and the constantly improving algorithms for solving the electronic structure problem, will continue to increase the relative importance of computational methods in the design of new materials. One possibility for utilizing electronic structure theory in the design of new materials is to create large databases of materials properties, and subsequently screen these for new potential candidates satisfying given design criteria. We utilize a database of more than 81 000 electronic structure calculations. This alloy database is combined with other published materials properties to form the foundation of a virtual materials design framework (VMDF). The VMDF offers a flexible collection of materials databases, filters, analysis tools and visualization methods, which are particularly useful in the design of new functional materials and surface structures. The applicability of the VMDF is illustrated by two examples. One is the determination of the Pareto-optimal set of binary alloy methanation catalysts with respect to catalytic activity and alloy stability; the other is the search for new alloy mercury absorbers.

  1. Value of shared preclinical safety studies - The eTOX database.

    Science.gov (United States)

    Briggs, Katharine; Barber, Chris; Cases, Montserrat; Marc, Philippe; Steger-Hartmann, Thomas

    2015-01-01

    A first analysis of a database of shared preclinical safety data for 1214 small molecule drugs and drug candidates extracted from 3970 reports donated by thirteen pharmaceutical companies for the eTOX project (www.etoxproject.eu) is presented. Species, duration of exposure and administration route data were analysed to assess if large enough subsets of homogenous data are available for building in silico predictive models. Prevalence of treatment related effects for the different types of findings recorded were analysed. The eTOX ontology was used to determine the most common treatment-related clinical chemistry and histopathology findings reported in the database. The data were then mined to evaluate sensitivity of established in vivo biomarkers for liver toxicity risk assessment. The value of the database to inform other drug development projects during early drug development is illustrated by a case study.

  2. Evaluation of an Online Instructional Database Accessed by QR Codes to Support Biochemistry Practical Laboratory Classes

    Science.gov (United States)

    Yip, Tor; Melling, Louise; Shaw, Kirsty J.

    2016-01-01

    An online instructional database containing information on commonly used pieces of laboratory equipment was created. In order to make the database highly accessible and to promote its use, QR codes were utilized. The instructional materials were available anytime and accessed using QR codes located on the equipment itself and within undergraduate…

  3. Normal SPECT thallium-201 bull's-eye display: gender differences

    International Nuclear Information System (INIS)

    Eisner, R.L.; Tamas, M.J.; Cloninger, K.

    1988-01-01

    The bull's-eye technique synthesizes three-dimensional information from single photon emission computed tomographic 201Tl images into two dimensions so that a patient's data can be compared quantitatively against a normal file. To characterize the normal database and to clarify differences between males and females, clinical data and exercise electrocardiography were used to identify 50 males and 50 females with less than 5% probability of coronary artery disease. Results show inhomogeneity of the 201Tl distributions at stress and delay: septal to lateral wall count ratios are less than 1.0 in both females and males; anterior to inferior wall count ratios are greater than 1.0 in males but are approximately equal to 1.0 in females. Washout rate is faster in females than males at the same peak exercise heart rate and systolic blood pressure, despite lower exercise time. These important differences suggest that quantitative analysis of single photon emission computed tomographic 201Tl images requires gender-matched normal files

  4. Performance Analysis of Ten Common QRS Detectors on Different ECG Application Cases

    Directory of Open Access Journals (Sweden)

    Feifei Liu

    2018-01-01

    Full Text Available A systematic evaluation was performed on ten widely used and highly efficient QRS detection algorithms in this study, aiming at verifying their performance and usefulness in different application situations. Four experiments were carried out on six internationally recognized databases. Firstly, in the test of a high-quality versus a low-quality ECG database, all ten QRS detection algorithms had very high detection accuracy on the high signal-quality database (F1 > 99%), whereas the F1 results decreased significantly for the poor signal-quality ECG signals. Secondly, all algorithms achieved F1 above 95% on the normal and arrhythmia ECG databases, except the RS slope algorithm (94.24% on the normal ECG database and 94.44% on the arrhythmia database). Thirdly, for the paced-rhythm ECG database, all ten algorithms were immune to the paced beats (F1 > 94%) except the RS slope method, which only output a low F1 result of 78.99%. At last, the detection accuracies decreased markedly when dealing with the dynamic telehealth ECG signals (all < 80% except the OKB algorithm with 80.43%). Furthermore, the time cost of analyzing a 10 s ECG segment was given as a quantitative index of computational complexity. All ten algorithms had high numerical efficiency (all < 4 ms) except the RS slope (94.07 ms) and sixth-power algorithms (8.25 ms), and the OKB algorithm had the highest numerical efficiency (1.54 ms).
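
    The F1 score used in such comparisons counts a detection as a true positive when it falls within a small tolerance of a reference R-peak. A hedged sketch of that scoring step (the tolerance value and sample annotations are hypothetical):

      def qrs_f1(detected, reference, tol_s=0.15):
          """F1 of detected R-peak times against reference annotations (seconds)."""
          ref = sorted(reference)
          matched = set()
          tp = 0
          for t in sorted(detected):
              # Greedily match each detection to the nearest unmatched reference beat.
              best = min((i for i in range(len(ref)) if i not in matched),
                         key=lambda i: abs(ref[i] - t), default=None)
              if best is not None and abs(ref[best] - t) <= tol_s:
                  matched.add(best)
                  tp += 1
          fp = len(detected) - tp
          fn = len(reference) - tp
          return 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 1.0

      print(qrs_f1([0.80, 1.62, 2.45], [0.81, 1.60, 2.40, 3.20]))  # ~0.857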

  5. Cataplexy with Normal Sleep Studies and Normal CSF Hypocretin: An Explanation?

    Science.gov (United States)

    Drakatos, Panagis; Leschziner, Guy

    2016-03-01

    Patients with narcolepsy usually develop excessive daytime sleepiness (EDS) before, or coincident with, the occurrence of cataplexy, with the latter most commonly associated with low cerebrospinal fluid (CSF) hypocretin-1 levels. Cataplexy preceding the development of other features of narcolepsy is a rare phenomenon. We describe a patient with isolated cataplexy, two non-diagnostic multiple sleep latency tests, and normal CSF hypocretin-1 levels (217 pg/mL), who gradually developed EDS and low CSF hypocretin-1 (< 110 pg/mL). © 2016 American Academy of Sleep Medicine.

  6. Breach Risk Magnitude: A Quantitative Measure of Database Security.

    Science.gov (United States)

    Yasnoff, William A

    2016-01-01

    A quantitative methodology is described that provides objective evaluation of the potential for health record system breaches. It assumes that breach risk increases with the number of potential records that could be exposed, while it decreases when more authentication steps are required for access. The breach risk magnitude (BRM) is the maximum value for any system user of the common logarithm of the number of accessible database records divided by the number of authentication steps needed to achieve such access. For a one million record relational database, the BRM varies from 5.52 to 6 depending on authentication protocols. For an alternative data architecture designed specifically to increase security by separately storing and encrypting each patient record, the BRM ranges from 1.3 to 2.6. While the BRM only provides a limited quantitative assessment of breach risk, it may be useful to objectively evaluate the security implications of alternative database organization approaches.
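
    The BRM definition above is directly computable. A short sketch reproducing the paper's one-million-record example, taking BRM as the common logarithm of accessible records divided by authentication steps, maximized over users:

      import math

      def breach_risk_magnitude(users):
          """BRM: max over users of log10(accessible records / authentication steps)."""
          return max(math.log10(records / steps) for records, steps in users)

      # One-million-record relational database: a user with single-step access
      # yields BRM = 6.0; requiring three authentication steps yields ~5.52.
      print(breach_risk_magnitude([(1_000_000, 1)]))  # 6.0
      print(breach_risk_magnitude([(1_000_000, 3)]))  # ~5.52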

  7. MRI information for commonly used otologic implants: review and update.

    Science.gov (United States)

    Azadarmaki, Roya; Tubbs, Rhonda; Chen, Douglas A; Shellock, Frank G

    2014-04-01

    To review information on magnetic resonance imaging (MRI) issues for commonly used otologic implants. Manufacturing companies, National Library of Medicine's online database, and an additional online database (www.MRIsafety.com). A literature review of the National Library of Medicine's online database with focus on MRI issues for otologic implants was performed. The MRI information on implants provided by manufacturers was reviewed. Baha and Ponto Pro osseointegrated implants' abutment and fixture and the implanted magnet of the Sophono Alpha 1 and 2 abutment-free systems are approved for 3-Tesla magnetic resonance (MR) systems. The external processors of these devices are MR Unsafe. Of the implants tested, middle ear ossicular prostheses, including stapes prostheses, except for the 1987 McGee prosthesis, are MR Conditional for 1.5-Tesla (and many are approved for 3-Tesla) MR systems. Cochlear implants with removable magnets are approved for patients undergoing MRI at 1.5 Tesla after magnet removal. The MED-EL PULSAR, SONATA, CONCERT, and CONCERT PIN cochlear implants can be used in patients undergoing MRI at 1.5 Tesla with application of a protective bandage. The MED-EL COMBI 40+ can be used in 0.2-Tesla MR systems. Implants made from nonmagnetic and nonconducting materials are MR Safe. Knowledge of MRI guidelines for commonly used otologic implants is important. Guidelines on MRI issues approved by the US Food and Drug Administration are not always the same compared with other parts of the world. This monograph provides a current reference for physicians on MRI issues for commonly used otologic implants.

  8. Spinal cord normalization in multiple sclerosis.

    Science.gov (United States)

    Oh, Jiwon; Seigo, Michaela; Saidha, Shiv; Sotirchos, Elias; Zackowski, Kathy; Chen, Min; Prince, Jerry; Diener-West, Marie; Calabresi, Peter A; Reich, Daniel S

    2014-01-01

    Spinal cord (SC) pathology is common in multiple sclerosis (MS), and measures of SC atrophy are increasingly utilized. Normalization reduces biological variation of structural measurements unrelated to disease, but optimal parameters for SC volume (SCV) normalization remain unclear. Using a variety of normalization factors and clinical measures, we assessed the effect of SCV normalization on detecting group differences and clarifying clinical-radiological correlations in MS. 3T cervical SC-MRI was performed in 133 MS cases and 11 healthy controls (HC). Clinical assessment included the expanded disability status scale (EDSS), MS functional composite (MSFC), quantitative hip-flexion strength ("strength"), and vibration sensation threshold ("vibration"). SCV between C3 and C4 was measured and normalized individually by subject height, SC length, and intracranial volume (ICV). There were significant group differences in raw SCV and after normalization by height and length (MS vs. HC; progressive vs. relapsing MS subtypes). Clinical-radiological correlations were strongest after normalization by length (EDSS: r = -.43; MSFC: r = .33; strength: r = .38; vibration: r = -.40) and height (EDSS: r = -.26; MSFC: r = .28; strength: r = .22; vibration: r = -.29), but diminished with normalization by ICV (EDSS: r = -.23; MSFC: r = -.10; strength: r = .23; vibration: r = -.35). In relapsing MS, normalization by length allowed statistical detection of correlations that were not apparent with raw SCV. SCV normalization by length improves the ability to detect group differences, strengthens clinical-radiological correlations, and is particularly relevant in settings of subtle disease-related SC atrophy in MS. SCV normalization by length may enhance the clinical utility of measures of SC atrophy. Copyright © 2014 by the American Society of Neuroimaging.
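
    The normalization itself is division by a per-subject factor, after which clinical correlations are recomputed. A hedged sketch with hypothetical measurements and pandas:

      import pandas as pd

      df = pd.DataFrame({
          "scv_mm3":   [850.0, 790.0, 910.0, 760.0],  # hypothetical C3-C4 volumes
          "length_mm": [110.0, 105.0, 118.0, 102.0],  # hypothetical SC lengths
          "edss":      [2.0, 4.5, 1.5, 6.0],
      })

      # Normalize volume by cord length, then correlate with disability.
      df["scv_norm"] = df["scv_mm3"] / df["length_mm"]
      print(df["scv_norm"].corr(df["edss"], method="spearman"))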

  9. Asynchronous data change notification between database server and accelerator control systems

    International Nuclear Information System (INIS)

    Wenge Fu; Seth Nemesure; Morris, J.

    2012-01-01

    Database data change notification (DCN) is a commonly used feature; it allows a client to be informed when data have been changed on the server side by another client. Not all database management systems (DBMSs) provide an explicit DCN mechanism. Even for those DBMSs which support DCN (such as Oracle and MS SQL Server), some server-side and/or client-side programming may be required to make the DCN system work. This makes the setup of DCN between a database server and interested clients tedious and time consuming. In accelerator control systems, there are many well-established software client/server architectures (such as CDEV, EPICS, and ADO) that can be used to implement data reflection servers that transfer data asynchronously to any client using the standard SET/GET API. This paper describes a method for using such a data reflection server to set up asynchronous DCN (ADCN) between a DBMS and clients. This method works well for all DBMS systems which provide database trigger functionality. (authors)
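
    As a concrete example of a DBMS-native DCN mechanism of the kind discussed, PostgreSQL's LISTEN/NOTIFY lets a client learn asynchronously that another session changed data. This generic sketch is not the ADO/EPICS reflection server itself, and the connection string and channel name are hypothetical:

      import select
      import psycopg2

      conn = psycopg2.connect("dbname=accel user=ops")  # hypothetical connection
      conn.autocommit = True
      cur = conn.cursor()
      cur.execute("LISTEN magnet_settings;")  # channel fired by a table trigger

      while True:
          # Block until the server has something to deliver, then drain notices.
          if select.select([conn], [], [], 5.0) == ([], [], []):
              continue  # timeout: no change notification yet
          conn.poll()
          while conn.notifies:
              note = conn.notifies.pop(0)
              print(f"change on channel {note.channel}: payload={note.payload}")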

  10. DMPD: Pathogen-induced apoptosis of macrophages: a common end for different pathogenic strategies. [Dynamic Macrophage Pathway CSML Database

    Lifescience Database Archive (English)

    Full Text Available Pathogen-induced apoptosis of macrophages: a common end for different pathogenic strategies. PubMed ID: 11207583.

  11. DataBase on Demand

    International Nuclear Information System (INIS)

    Aparicio, R Gaspar; Gomez, D; Wojcik, D; Coz, I Coterillo

    2012-01-01

    At CERN a number of key database applications are running on user-managed MySQL database services. The Database on Demand project was born out of an idea to provide the CERN user community with an environment in which to develop and run database services outside of the centralised Oracle-based database services. Database on Demand (DBoD) empowers the user to perform certain actions that have traditionally been done by database administrators (DBAs), providing an enterprise platform for database applications. It also allows the CERN user community to run different database engines, at present the open community version of MySQL and a single-instance Oracle database server. This article describes the technology approach taken to face this challenge, the service level agreement (SLA) that the project provides, and an evolution of possible scenarios.

  12. Lessons Learned from resolving massive IPS database change for SPADES+

    International Nuclear Information System (INIS)

    Kim, Jin-Soo

    2016-01-01

    Safety Parameter Display and Evaluation System+ (SPADES+) was implemented to meet the requirements for the Safety Parameter Display System (SPDS), which are related to TMI Action Plan requirements. SPADES+ continuously monitors the critical safety functions during normal, abnormal, and emergency operation modes and generates alarm output to the alarm server when tolerances related to safety functions are not satisfied. The alarm algorithm for the critical safety functions is performed in the NSSS Application Software (NAPS) server of the Information Process System (IPS), and the calculation results are displayed on the flat panel display (FPD) of the IPS. SPADES+ provides the critical variables to the control room operators to aid them in rapidly and reliably determining the safety status of the plant. Many database point ID names (518 points) were changed. POINT_ID is used in the programming source code, in related documents such as the SDS and SRS, and in the graphic database. To reduce human error, computer programs and office-program macros were used. Although automatic methods were used for changing POINT_IDs, editing the change list still took considerable time beyond what the computerized solutions covered. In IPS there are many more programs than SPADES+, and over 30,000 POINT_IDs are in the IPS database, so changing POINT_IDs can be a burden to software engineers. In the case of the Ovation system database, there is an Alias field to prevent this kind of problem. The Alias is a kind of secondary key in the database
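
    A sketch of the kind of automated renaming described, assuming a two-column mapping file of old and new POINT_IDs; the file names, source tree, and mapping are hypothetical. Whole-word matching avoids renaming IDs that are substrings of longer IDs:

      import csv
      import re
      from pathlib import Path

      # Hypothetical mapping file: old_point_id,new_point_id (518 rows in the paper).
      with open("point_id_changes.csv", newline="") as f:
          mapping = dict(csv.reader(f))

      pattern = re.compile(r"\b(" + "|".join(map(re.escape, mapping)) + r")\b")

      for src in Path("naps_src").rglob("*.c"):  # hypothetical source tree
          text = src.read_text()
          new_text = pattern.sub(lambda m: mapping[m.group(1)], text)
          if new_text != text:
              src.write_text(new_text)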

  13. Full Data of Yeast Interacting Proteins Database (Original Version) - Yeast Interacting Proteins Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Data name: Full Data of Yeast Interacting Proteins Database (Original Version). DOI: 10.18908/lsdba.nbdc00742-004. Description of data contents: the entire data in the Yeast Interacting Proteins Database ... their interactions are required. Several sources including YPD (Yeast Proteome Database; Costanzo, M. C., Hoga...) ... the systematic name in the SGD (Saccharomyces Genome Database; http://www.yeastgenome.org/). Bait gene name: the gen...

  14. Energy Consumption Database

    Science.gov (United States)

    The California Energy Commission has created this on-line database for informal reporting ... classifications. The database also provides easy downloading of energy consumption data into Microsoft Excel (XLSX...

  15. Constructing a Geology Ontology Using a Relational Database

    Science.gov (United States)

    Hou, W.; Yang, L.; Yin, S.; Ye, J.; Clarke, K.

    2013-12-01

    In the geology community, the creation of a common geology ontology has become a useful means to solve problems of data integration, knowledge transformation and the interoperation of multi-source, heterogeneous and multi-scale geological data. Currently, human-computer interaction methods and relational database-based methods are the primary ontology construction methods. Some human-computer interaction methods, such as the Geo-rule based method, the ontology life cycle method and the module design method, have been proposed for applied geological ontologies. Essentially, the relational database-based method is a reverse engineering of abstracted semantic information from an existing database. The key is to construct rules for the transformation of database entities into the ontology. Relative to the human-computer interaction method, relational database-based methods can use existing resources and the stated semantic relationships among geological entities. However, two problems challenge their development and application. One is the transformation of multiple inheritances and nested relationships and their representation in an ontology. The other is that most of these methods do not measure the semantic retention of the transformation process. In this study, we focused on constructing a rule set to convert the semantics in a geological database into a geological ontology. According to the relational schema of a geological database, a conversion approach is presented to convert a geological spatial database to an OWL-based geological ontology, based on identifying semantics such as entities, relationships, inheritance relationships, nested relationships and cluster relationships. The semantic integrity of the transformation was verified using an inverse mapping process. In a geological ontology, inheritance and union operations between superclasses and subclasses were used to represent the nested relationships in a geochronology and the multiple inheritances
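
    The table-to-class conversion at the heart of such a rule set can be sketched with rdflib: each table becomes an OWL class and a nesting relationship becomes a subclass axiom. A toy illustration under hypothetical table names, not the authors' full rule set:

      from rdflib import Graph, Namespace, RDF, RDFS
      from rdflib.namespace import OWL

      GEO = Namespace("http://example.org/geo-ontology#")  # hypothetical namespace
      g = Graph()
      g.bind("geo", GEO)

      # Rule 1: each relational table maps to an OWL class.
      for table in ["StratigraphicUnit", "Formation", "Member"]:
          g.add((GEO[table], RDF.type, OWL.Class))

      # Rule 2: a nesting relationship in the schema maps to a subclass axiom,
      # e.g. Member is nested within Formation in a geochronologic hierarchy.
      g.add((GEO["Member"], RDFS.subClassOf, GEO["Formation"]))
      g.add((GEO["Formation"], RDFS.subClassOf, GEO["StratigraphicUnit"]))

      print(g.serialize(format="turtle"))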

  16. Research on the establishment of the database system for R and D on the innovative technology for the earth; Chikyu kankyo sangyo gijutsu kenkyu kaihatsuyo database system ni kansuru chosa

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1994-03-01

    For the purpose of structuring a database system of technical information about earth environmental issues, the 'database system for R and D of the earth environmental industrial technology' was operationally evaluated, and studies were made on opening it to users and structuring a prototype database. As pointed out in the operational evaluation, in the present state the utilization frequency remains low due to a lack of UNIX experience, the absence of system managers and a shortage of usable articles, so that the renewal of the database does not progress as intended. Therefore, a study was made to introduce tools usable by the initiators and to open the information access terminal to researchers at headquarters via the Internet. In order for earth environment-related researchers to obtain information easily, a database was prototypically structured to support research exchange. The tasks to be undertaken for selecting the fields of research and compiling common thesauri in Japanese, Western and other languages were made clear. 28 figs., 16 tabs.

  17. Are common names becoming less common? The rise in uniqueness and individualism in Japan

    Directory of Open Access Journals (Sweden)

    Yuji Ogihara

    2015-10-01

    Full Text Available We examined whether Japanese culture has become more individualistic by investigating how the practice of naming babies has changed over time. Cultural psychology has revealed substantial cultural variation in human psychology and behavior, emphasizing the mutual construction of socio-cultural environment and mind. However, much of the past research did not account for the fact that culture is changing. Indeed, archival data on behavior (e.g., divorce rates) suggest a rise in individualism in the U.S. and Japan. In addition to archival data, cultural products (which express an individual’s psyche and behavior outside the head; e.g., advertising) can also reveal cultural change. However, little research has investigated the changes in individualism in East Asia using cultural products. To reveal the dynamic aspects of culture, it is important to present temporal data across cultures. In this study, we examined baby names as a cultural product. If Japanese culture has become more individualistic, parents would be expected to give their children unique names. Using two databases, we calculated the rate of popular baby names between 2004 and 2013. Both databases released the rankings of popular names and their rates within the sample. As Japanese names are generally comprised of both written Chinese characters and their pronunciations, we analyzed these two separately. We found that the rate of popular Chinese characters increased, whereas the rate of popular pronunciations decreased. However, only the rate of popular pronunciations was associated with a previously validated collectivism index. Moreover, we examined the pronunciation variation of common combinations of Chinese characters and the written form variation of common pronunciations. We found that the variation of written forms decreased, whereas the variation of pronunciations increased over time. Taken together, these results showed that parents are giving their children unique names by

  18. Are common names becoming less common? The rise in uniqueness and individualism in Japan

    Science.gov (United States)

    Ogihara, Yuji; Fujita, Hiroyo; Tominaga, Hitoshi; Ishigaki, Sho; Kashimoto, Takuya; Takahashi, Ayano; Toyohara, Kyoko; Uchida, Yukiko

    2015-01-01

    We examined whether Japanese culture has become more individualistic by investigating how the practice of naming babies has changed over time. Cultural psychology has revealed substantial cultural variation in human psychology and behavior, emphasizing the mutual construction of socio-cultural environment and mind. However, much of the past research did not account for the fact that culture is changing. Indeed, archival data on behavior (e.g., divorce rates) suggest a rise in individualism in the U.S. and Japan. In addition to archival data, cultural products (which express an individual’s psyche and behavior outside the head; e.g., advertising) can also reveal cultural change. However, little research has investigated the changes in individualism in East Asia using cultural products. To reveal the dynamic aspects of culture, it is important to present temporal data across cultures. In this study, we examined baby names as a cultural product. If Japanese culture has become more individualistic, parents would be expected to give their children unique names. Using two databases, we calculated the rate of popular baby names between 2004 and 2013. Both databases released the rankings of popular names and their rates within the sample. As Japanese names are generally comprised of both written Chinese characters and their pronunciations, we analyzed these two separately. We found that the rate of popular Chinese characters increased, whereas the rate of popular pronunciations decreased. However, only the rate of popular pronunciations was associated with a previously validated collectivism index. Moreover, we examined the pronunciation variation of common combinations of Chinese characters and the written form variation of common pronunciations. We found that the variation of written forms decreased, whereas the variation of pronunciations increased over time. Taken together, these results showed that parents are giving their children unique names by pairing common

  19. CSE database: extended annotations and new recommendations for ECG software testing.

    Science.gov (United States)

    Smíšek, Radovan; Maršánová, Lucie; Němcová, Andrea; Vítek, Martin; Kozumplík, Jiří; Nováková, Marie

    2017-08-01

    Nowadays, cardiovascular diseases represent the most common cause of death in western countries. Among various examination techniques, electrocardiography (ECG) is still a highly valuable tool used for the diagnosis of many cardiovascular disorders. In order to diagnose a person based on ECG, cardiologists can use automatic diagnostic algorithms. Research in this area is still necessary. In order to compare various algorithms correctly, it is necessary to test them on standard annotated databases, such as the Common Standards for Quantitative Electrocardiography (CSE) database. According to Scopus, the CSE database is the second most cited standard database. There were two main objectives in this work. First, new diagnoses were added to the CSE database, which extended its original annotations. Second, new recommendations for diagnostic software quality estimation were established. The ECG recordings were diagnosed by five new cardiologists independently, and in total, 59 different diagnoses were found. Such a large number of diagnoses is unique, even in terms of standard databases. Based on the cardiologists' diagnoses, a four-round consensus (4R consensus) was established. Such a 4R consensus means a correct final diagnosis, which should ideally be the output of any tested classification software. The accuracy of the cardiologists' diagnoses compared with the 4R consensus was the basis for the establishment of accuracy recommendations. The accuracy was determined in terms of sensitivity (79.20-86.81%), positive predictive value (79.10-87.11%), and the Jaccard coefficient (72.21-81.14%). Within these ranges, the accuracy of the software is comparable with the accuracy of cardiologists. Such a quantification of correct-classification accuracy is unique. Diagnostic software developers can objectively evaluate the success of their algorithms and promote their further development. The annotations and recommendations proposed in this work will allow
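
    The three reported figures can be computed per recording as set overlaps between the software output and the reference consensus. A minimal sketch, with invented diagnosis labels:

    ```python
    # Hedged sketch: per-recording comparison of software diagnoses against a
    # reference consensus, using the three figures reported above. Sets contain
    # diagnosis labels; the labels are illustrative, not from the CSE database.
    def diagnostic_accuracy(reference: set, predicted: set):
        tp = len(reference & predicted)  # correctly reported diagnoses
        fn = len(reference - predicted)  # missed diagnoses
        fp = len(predicted - reference)  # spurious diagnoses
        sensitivity = tp / (tp + fn) if tp + fn else 0.0
        ppv = tp / (tp + fp) if tp + fp else 0.0
        jaccard = tp / (tp + fn + fp) if tp + fn + fp else 0.0
        return sensitivity, ppv, jaccard

    print(diagnostic_accuracy({"MI", "LVH", "AFIB"}, {"MI", "LVH", "RBBB"}))
    # -> (0.667, 0.667, 0.5)
    ```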

  20. Idiopathic Normal Pressure Hydrocephalus

    Directory of Open Access Journals (Sweden)

    Basant R. Nassar BS

    2016-04-01

    Full Text Available Idiopathic normal pressure hydrocephalus (iNPH) is a potentially reversible neurodegenerative disease commonly characterized by a triad of dementia, gait disturbance, and urinary disturbance. Advancements in diagnosis and treatment have aided in properly identifying and improving symptoms in patients. However, a large proportion of iNPH patients remain either undiagnosed or misdiagnosed. Using the PubMed search engine with the keywords “normal pressure hydrocephalus,” “diagnosis,” “shunt treatment,” “biomarkers,” “gait disturbances,” “cognitive function,” “neuropsychology,” “imaging,” and “pathogenesis,” articles were obtained for this review. The majority of the articles were retrieved from the past 10 years. The purpose of this review article is to aid general practitioners in further understanding current findings on the pathogenesis, diagnosis, and treatment of iNPH.

  1. Fire test database

    International Nuclear Information System (INIS)

    Lee, J.A.

    1989-01-01

    This paper describes a project recently completed for EPRI by Impell. The purpose of the project was to develop a reference database of fire tests performed on non-typical fire rated assemblies. The database is designed for use by utility fire protection engineers to locate test reports for power plant fire rated assemblies. As utilities prepare to respond to Information Notice 88-04, the database will identify utilities, vendors or manufacturers who have specific fire test data. The database contains fire test report summaries for 729 tested configurations. For each summary, a contact is identified from whom a copy of the complete fire test report can be obtained. Five types of configurations are included: doors, dampers, seals, wraps and walls. The database is computerized, with one version for IBM PCs and one for the Mac, each accessed through user-friendly software which allows adding, deleting, browsing, etc. through the database. There are five major database files, one for each of the five types of tested configurations. The contents of each provide significant information regarding the test method and the physical attributes of the tested configuration. 3 figs

  2. Artificial Radionuclides Database in the Pacific Ocean: HAM Database

    Directory of Open Access Journals (Sweden)

    Michio Aoyama

    2004-01-01

    Full Text Available The database “Historical Artificial Radionuclides in the Pacific Ocean and its Marginal Seas”, or HAM database, has been created. The database includes 90Sr, 137Cs, and 239,240Pu concentration data from the seawater of the Pacific Ocean and its marginal seas, with some measurements extending from the sea surface to the bottom. The data in the HAM database were collected from about 90 literature citations, which include published papers; annual reports by the Hydrographic Department, Maritime Safety Agency, Japan; and unpublished data provided by individuals. The concentration data for 90Sr, 137Cs, and 239,240Pu cover the period 1957-1998. The present HAM database includes 7737 records for 137Cs concentration data, 3972 records for 90Sr concentration data, and 2666 records for 239,240Pu concentration data. The spatial distribution of sampling stations in the HAM database is heterogeneous; namely, more than 80% of the data for each radionuclide is from the Pacific Ocean and the Sea of Japan, while a relatively small portion of data is from the South Pacific. The HAM database will allow us to use these radionuclides as significant chemical tracers for oceanographic study as well as for the assessment of the environmental effects of anthropogenic radionuclides over these five decades. Furthermore, these radionuclides can be used to verify oceanic general circulation models on the time scale of several decades.

  3. Databases and their application

    NARCIS (Netherlands)

    Grimm, E.C.; Bradshaw, R.H.W; Brewer, S.; Flantua, S.; Giesecke, T.; Lézine, A.M.; Takahara, H.; Williams, J.W.,Jr; Elias, S.A.; Mock, C.J.

    2013-01-01

    During the past 20 years, several pollen database cooperatives have been established. These databases are now constituent databases of the Neotoma Paleoecology Database, a public domain, multiproxy, relational database designed for Quaternary-Pliocene fossil data and modern surface samples. The

  4. Databases and coordinated research projects at the IAEA on atomic processes in plasmas

    Energy Technology Data Exchange (ETDEWEB)

    Braams, Bastiaan J.; Chung, Hyun-Kyung [Nuclear Data Section, NAPC Division, International Atomic Energy Agency P. O. Box 100, Vienna International Centre, AT-1400 Vienna (Austria)

    2012-05-25

    The Atomic and Molecular Data Unit at the IAEA works with a network of national data centres to encourage and coordinate production and dissemination of fundamental data for atomic, molecular and plasma-material interaction (A+M/PMI) processes that are relevant to the realization of fusion energy. The Unit maintains numerical and bibliographical databases and has started a Wiki-style knowledge base. The Unit also contributes to A+M database interface standards and provides a search engine that offers a common interface to multiple numerical A+M/PMI databases. Coordinated Research Projects (CRPs) bring together fusion energy researchers and atomic, molecular and surface physicists for joint work towards the development of new data and new methods. The databases and current CRPs on A+M/PMI processes are briefly described here.

  5. Databases and coordinated research projects at the IAEA on atomic processes in plasmas

    Science.gov (United States)

    Braams, Bastiaan J.; Chung, Hyun-Kyung

    2012-05-01

    The Atomic and Molecular Data Unit at the IAEA works with a network of national data centres to encourage and coordinate production and dissemination of fundamental data for atomic, molecular and plasma-material interaction (A+M/PMI) processes that are relevant to the realization of fusion energy. The Unit maintains numerical and bibliographical databases and has started a Wiki-style knowledge base. The Unit also contributes to A+M database interface standards and provides a search engine that offers a common interface to multiple numerical A+M/PMI databases. Coordinated Research Projects (CRPs) bring together fusion energy researchers and atomic, molecular and surface physicists for joint work towards the development of new data and new methods. The databases and current CRPs on A+M/PMI processes are briefly described here.

  6. Databases and coordinated research projects at the IAEA on atomic processes in plasmas

    International Nuclear Information System (INIS)

    Braams, Bastiaan J.; Chung, Hyun-Kyung

    2012-01-01

    The Atomic and Molecular Data Unit at the IAEA works with a network of national data centres to encourage and coordinate production and dissemination of fundamental data for atomic, molecular and plasma-material interaction (A+M/PMI) processes that are relevant to the realization of fusion energy. The Unit maintains numerical and bibliographical databases and has started a Wiki-style knowledge base. The Unit also contributes to A+M database interface standards and provides a search engine that offers a common interface to multiple numerical A+M/PMI databases. Coordinated Research Projects (CRPs) bring together fusion energy researchers and atomic, molecular and surface physicists for joint work towards the development of new data and new methods. The databases and current CRPs on A+M/PMI processes are briefly described here.

  7. Machine learning approach to detect intruders in database based on hexplet data structure

    Directory of Open Access Journals (Sweden)

    Saad M. Darwish

    2016-09-01

    Full Text Available Most of the valuable information resources of any organization are stored in databases, so protecting this information against intruders is a serious concern. However, conventional security mechanisms are not designed to detect anomalous actions of database users. An intrusion detection system (IDS), which delivers an extra layer of security that cannot be guaranteed by built-in security tools, is the ideal solution to defend databases from intruders. This paper suggests an anomaly detection approach that summarizes raw transactional SQL queries into a compact data structure called a hexplet, which can model normal database access behavior (abstracting the user's profile) and recognize impostors, specifically tailored for role-based access control (RBAC) database systems. The hexplet preserves the correlation among SQL statements in the same transaction by exploiting the information in the transaction-log entry, with the aim of improving detection accuracy, especially for users inside the organization who exhibit strange behavior. The model utilizes the naive Bayes classifier (NBC), the simplest supervised learning technique, for creating profiles and evaluating the legitimacy of a transaction. Experimental results show the performance of the proposed model in terms of detection rate.
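
    The classification step can be sketched briefly. This is a hedged illustration, not the paper's implementation: a three-field summary stands in for the six-field hexplet, and a small categorical naive Bayes classifier scores a new transaction against "normal" and "anomaly" profiles.

    ```python
    from collections import Counter, defaultdict

    # Invented training data: (role, command, table group) -> label. A real
    # hexplet would carry six summarized fields per transaction.
    train = [
        (("teller", "SELECT", "accounts"), "normal"),
        (("teller", "SELECT", "accounts"), "normal"),
        (("teller", "UPDATE", "accounts"), "normal"),
        (("teller", "SELECT", "payroll"),  "anomaly"),
    ]

    labels = Counter(lbl for _, lbl in train)
    feature_counts = defaultdict(Counter)  # (position, label) -> value counts
    for feats, lbl in train:
        for i, v in enumerate(feats):
            feature_counts[(i, lbl)][v] += 1

    def posterior(feats, label, alpha=1.0):
        # P(label) * prod_i P(feat_i | label), Laplace-smoothed over the
        # observed categories plus one unseen category.
        p = labels[label] / sum(labels.values())
        for i, v in enumerate(feats):
            c = feature_counts[(i, label)]
            p *= (c[v] + alpha) / (sum(c.values()) + alpha * (len(c) + 1))
        return p

    query = ("teller", "DELETE", "payroll")
    scores = {lbl: posterior(query, lbl) for lbl in labels}
    print(max(scores, key=scores.get), scores)  # flags the query as anomalous
    ```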

  8. The acromioclavicular joint: Normal variation and the diagnosis of dislocation

    International Nuclear Information System (INIS)

    Keats, T.E.; Pope, T.L. Jr.

    1988-01-01

    Acromioclavicular separation is a common traumatic injury. Diagnosis rests on clinical and radiographic findings. However, normal variation in the alignment of the acromioclavicular joint may make the roentgen diagnosis more difficult. We stress the variations of normal alignment at the acromioclavicular joint and offer suggestions for avoiding pitfalls in this clinical situation. (orig.)

  9. Database Optimizing Services

    Directory of Open Access Journals (Sweden)

    Adrian GHENCEA

    2010-12-01

    Full Text Available Almost every organization has a database at its centre. The database provides support for conducting different activities, whether production, sales and marketing or internal operations. Every day, databases are accessed for help in strategic decisions. Satisfying such needs therefore requires high-quality security and availability. These needs can be met using a DBMS (Database Management System), which is, in fact, the software for a database. Technically speaking, it is software which uses standard methods for cataloguing and recovering data and for running different data queries. A DBMS manages the input data, organizes it, and provides ways for the data to be modified or extracted by users or other programs. Managing a database is an operation that requires periodic updates, optimization and monitoring.

  10. Fish Karyome: A karyological information network database of Indian Fishes.

    Science.gov (United States)

    Nagpure, Naresh Sahebrao; Pathak, Ajey Kumar; Pati, Rameshwar; Singh, Shri Prakash; Singh, Mahender; Sarkar, Uttam Kumar; Kushwaha, Basdeo; Kumar, Ravindra

    2012-01-01

    'Fish Karyome', a database of karyological information on Indian fishes, has been developed; it serves as a central source for karyotype data about Indian fishes compiled from the published literature. Fish Karyome is intended to serve as a liaison tool for researchers and contains karyological information about 171 of the 2438 finfish species reported in India; it is publicly available via the World Wide Web. The database provides information on chromosome number, morphology, sex chromosomes, karyotype formula, cytogenetic markers, etc. Additionally, it provides phenotypic information that includes the species name, its classification, locality of sample collection, common name, local name, sex, geographical distribution, and IUCN Red List status. Besides, fish and karyotype images and references for the 171 finfish species have been included in the database. Fish Karyome was developed using SQL Server 2008, a relational database management system, Microsoft's ASP.NET 2008 and Macromedia's FLASH technology under the Windows 7 operating environment. The system also enables users to input new information and images into the database, and to search and view the information and images of interest using various search options. Fish Karyome has a wide range of applications in species characterization and identification, sex determination, chromosomal mapping, karyo-evolution and the systematics of fishes.

  11. INE: a rice genome database with an integrated map view.

    Science.gov (United States)

    Sakata, K; Antonio, B A; Mukai, Y; Nagasaki, H; Sakai, Y; Makino, K; Sasaki, T

    2000-01-01

    The Rice Genome Research Program (RGP) launched large-scale rice genome sequencing in 1998, aimed at decoding all genetic information in rice. A new genome database called INE (INtegrated rice genome Explorer) has been developed in order to integrate all the genomic information accumulated so far and to correlate these data with the genome sequence. A web interface based on a Java applet provides rapid viewing capability in the database. The first operational version of the database has been completed, which includes a genetic map and a physical map using YAC (Yeast Artificial Chromosome) clones and PAC (P1-derived Artificial Chromosome) contigs. These maps are displayed graphically so that the positional relationships among the mapped markers on each chromosome can be easily resolved. INE incorporates the sequences and annotations of the PAC contigs. A site on low-quality information ensures that all submitted sequence data comply with the standard for accuracy. As a repository of the rice genome sequence, INE will also serve as a common database for all sequence data obtained by collaborating members of the International Rice Genome Sequencing Project (IRGSP). The database can be accessed at http://www.dna.affrc.go.jp:82/giot/INE.html or its mirror site at http://www.staff.or.jp/giot/INE.html

  12. RNA STRAND: The RNA Secondary Structure and Statistical Analysis Database

    Directory of Open Access Journals (Sweden)

    Andronescu Mirela

    2008-08-01

    Full Text Available Background: The ability to access, search and analyse secondary structures of a large set of known RNA molecules is very important for deriving improved RNA energy models, for evaluating computational predictions of RNA secondary structures and for a better understanding of RNA folding. Currently there is no database that can easily provide these capabilities for almost all RNA molecules with known secondary structures. Results: In this paper we describe RNA STRAND – the RNA secondary STRucture and statistical ANalysis Database, a curated database containing known secondary structures of any type and organism. Our new database provides a wide collection of known RNA secondary structures drawn from public databases, searchable and downloadable in a common format. Comprehensive statistical information on the secondary structures in our database is provided using the RNA Secondary Structure Analyser, a new tool we have developed to analyse RNA secondary structures. The information thus obtained is valuable for understanding the extent to which, and the probability with which, certain structural motifs can appear. We outline several ways in which the data provided in RNA STRAND can facilitate research on RNA structure, including the improvement of RNA energy models and the evaluation of secondary structure prediction programs. In order to keep up to date with new RNA secondary structure experiments, we offer the necessary tools to add solved RNA secondary structures to our database and invite researchers to contribute to RNA STRAND. Conclusion: RNA STRAND is a carefully assembled database of trusted RNA secondary structures, with easy on-line tools for searching, analyzing and downloading user-selected entries, and is publicly available at http://www.rnasoft.ca/strand.

  13. Selective attention in normal and impaired hearing.

    Science.gov (United States)

    Shinn-Cunningham, Barbara G; Best, Virginia

    2008-12-01

    A common complaint among listeners with hearing loss (HL) is that they have difficulty communicating in common social settings. This article reviews how normal-hearing listeners cope in such settings, especially how they focus attention on a source of interest. Results of experiments with normal-hearing listeners suggest that the ability to selectively attend depends on the ability to analyze the acoustic scene and to form perceptual auditory objects properly. Unfortunately, sound features important for auditory object formation may not be robustly encoded in the auditory periphery of HL listeners. In turn, impaired auditory object formation may interfere with the ability to filter out competing sound sources. Peripheral degradations are also likely to reduce the salience of higher-order auditory cues such as location, pitch, and timbre, which enable normal-hearing listeners to select a desired sound source out of a sound mixture. Degraded peripheral processing is also likely to increase the time required to form auditory objects and focus selective attention so that listeners with HL lose the ability to switch attention rapidly (a skill that is particularly important when trying to participate in a lively conversation). Finally, peripheral deficits may interfere with strategies that normal-hearing listeners employ in complex acoustic settings, including the use of memory to fill in bits of the conversation that are missed. Thus, peripheral hearing deficits are likely to cause a number of interrelated problems that challenge the ability of HL listeners to communicate in social settings requiring selective attention.

  14. Update History of This Database - RED | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Update History of This Database: 2015/12/21: Rice Expression Database English archive site is opened. 2000/10/1: Rice Expression Database (http://red.dna.affrc.go.jp/RED/) is opened.

  15. GRIP Database original data - GRIPDB | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Data name: GRIP Database original data. DOI: 10.18908/lsdba.nbdc01665-006. Description of data contents: GRIP Database original data; it consists of data tables and sequences. Data file: gripdb_original_data.zip (file URL: ftp://ftp.biosciencedbc.jp/archive/gripdb/LATEST/gri...).

  16. The Candidate Cancer Gene Database: a database of cancer driver genes from forward genetic screens in mice.

    Science.gov (United States)

    Abbott, Kenneth L; Nyre, Erik T; Abrahante, Juan; Ho, Yen-Yi; Isaksson Vogel, Rachel; Starr, Timothy K

    2015-01-01

    Identification of cancer driver gene mutations is crucial for advancing cancer therapeutics. Due to the overwhelming number of passenger mutations in the human tumor genome, it is difficult to pinpoint causative driver genes. Using transposon mutagenesis in mice, many laboratories have conducted forward genetic screens and identified thousands of candidate driver genes that are highly relevant to human cancer. Unfortunately, this information is difficult to access and utilize because it is scattered across multiple publications using different mouse genome builds and strength metrics. To improve access to these findings and facilitate meta-analyses, we developed the Candidate Cancer Gene Database (CCGD, http://ccgd-starrlab.oit.umn.edu/). The CCGD is a manually curated database containing a unified description of all identified candidate driver genes and the genomic location of transposon common insertion sites (CISs) from all currently published transposon-based screens. To demonstrate relevance to human cancer, we performed a modified gene set enrichment analysis using KEGG pathways and show that human cancer pathways are highly enriched in the database. We also used hierarchical clustering to identify pathways enriched in blood cancers compared to solid cancers. The CCGD is a novel resource available to scientists interested in the identification of genetic drivers of cancer. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
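
    A common core of such pathway enrichment analyses is a one-sided hypergeometric test per pathway. The sketch below shows the idea with invented counts and assumes SciPy is available; the CCGD paper's exact procedure, a modified gene set enrichment analysis, is not reproduced here.

    ```python
    from scipy.stats import hypergeom

    # Invented counts: M genes in the background, n of them annotated to one
    # KEGG pathway, N candidate driver genes of which k fall in that pathway.
    M, n, N, k = 20000, 150, 500, 12

    # One-sided p-value: probability of observing an overlap of k or more.
    p_value = hypergeom.sf(k - 1, M, n, N)
    print(f"enrichment p = {p_value:.3g}")
    ```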

  17. Update History of This Database - RPD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Update History of This Database: 2016/02/02: Rice Proteome Database English archive site is opened. 2003/01/07: Rice Proteome Database (http://gene64.dna.affrc.go.jp/RPD/) is opened.

  18. Comment on the internal consistency of thermodynamic databases supporting repository safety assessments

    International Nuclear Information System (INIS)

    Arthur, R.C.

    2001-11-01

    This report addresses the concept of internal consistency and its relevance to the reliability of thermodynamic databases used in repository safety assessments. In addition to being internally consistent, a reliable database should be accurate over a range of relevant temperatures and pressures, complete in the sense that all important aqueous species, gases and solid phases are represented, and traceable to original experimental results. No single definition of internal consistency is universally accepted as the most appropriate under all conditions, however. As a result, two databases that are each internally consistent may be inconsistent with respect to each other, and a database derived from two or more such databases must itself be internally inconsistent. The consequences of alternative definitions that are reasonably attributable to the concept of internal consistency can be illustrated with reference to the thermodynamic database supporting SKB's recent SR 97 safety assessment. This database is internally inconsistent because it includes equilibrium constants calculated over a range of temperatures: using conflicting reference values for some solids, gases and aqueous species that are common to the two internally consistent databases (the OECD/NEA database for radioelements and the SUPCRT databases for non-radioactive elements) that serve as source databases for the SR 97 TDB; using different definitions in these source databases of standard states for condensed phases and aqueous species; based on different mathematical expressions used in these source databases to represent the temperature dependence of the heat capacity; and based on different chemical models adopted in these source databases for the aqueous phase. The importance of such inconsistencies must be considered in relation to the other database reliability criteria noted above, however. Thus, accepting a certain level of internal inconsistency in a database, it is probably preferable to use a
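
    The heat-capacity point can be made concrete with the standard relations below (a textbook sketch, not taken from the SR 97 documentation): two source databases that share the reference values but fit the heat capacity differently will tabulate different equilibrium constants away from the reference temperature.

    ```latex
    % Textbook relations: how \ln K(T) inherits the heat-capacity model.
    % R is the gas constant, T_r a reference temperature (e.g. 298.15 K).
    \ln K(T) = -\frac{\Delta G^{\circ}(T)}{RT}, \qquad
    \Delta G^{\circ}(T) = \Delta H^{\circ}(T_r)
      + \int_{T_r}^{T} \Delta C_p \,\mathrm{d}T
      - T\!\left[ \Delta S^{\circ}(T_r)
      + \int_{T_r}^{T} \frac{\Delta C_p}{T}\,\mathrm{d}T \right]
    % Two sources sharing \Delta H^{\circ}(T_r) and \Delta S^{\circ}(T_r) but
    % fitting \Delta C_p differently, e.g. a Maier-Kelley expression
    %   \Delta C_p(T) = a + bT - c/T^{2}
    % versus a constant \Delta C_p, yield different \ln K(T) for T \neq T_r.
    ```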

  19. Comment on the internal consistency of thermodynamic databases supporting repository safety assessments

    Energy Technology Data Exchange (ETDEWEB)

    Arthur, R.C. [Monitor Scientific, LLC, Denver, CO (United States)

    2001-11-01

    This report addresses the concept of internal consistency and its relevance to the reliability of thermodynamic databases used in repository safety assessments. In addition to being internally consistent, a reliable database should be accurate over a range of relevant temperatures and pressures, complete in the sense that all important aqueous species, gases and solid phases are represented, and traceable to original experimental results. No single definition of internal consistency is universally accepted as the most appropriate under all conditions, however. As a result, two databases that are each internally consistent may be inconsistent with respect to each other, and a database derived from two or more such databases must itself be internally inconsistent. The consequences of alternative definitions that are reasonably attributable to the concept of internal consistency can be illustrated with reference to the thermodynamic database supporting SKB's recent SR 97 safety assessment. This database is internally inconsistent because it includes equilibrium constants calculated over a range of temperatures: using conflicting reference values for some solids, gases and aqueous species that are common to the two internally consistent databases (the OECD/NEA database for radioelements and the SUPCRT databases for non-radioactive elements) that serve as source databases for the SR 97 TDB; using different definitions in these source databases of standard states for condensed phases and aqueous species; based on different mathematical expressions used in these source databases to represent the temperature dependence of the heat capacity; and based on different chemical models adopted in these source databases for the aqueous phase. The importance of such inconsistencies must be considered in relation to the other database reliability criteria noted above, however. Thus, accepting a certain level of internal inconsistency in a database, it is probably preferable to

  20. Relational databases for rare disease study: application to vascular anomalies.

    Science.gov (United States)

    Perkins, Jonathan A; Coltrera, Marc D

    2008-01-01

    To design a relational database integrating clinical and basic science data needed for multidisciplinary treatment and research in the field of vascular anomalies. Based on data points agreed on by the American Society of Pediatric Otolaryngology (ASPO) Vascular Anomalies Task Force. The database design enables sharing of data subsets in a Health Insurance Portability and Accountability Act (HIPAA)-compliant manner for multisite collaborative trials. Vascular anomalies pose diagnostic and therapeutic challenges. Our understanding of these lesions and treatment improvement is limited by nonstandard terminology, severity assessment, and measures of treatment efficacy. The rarity of these lesions places a premium on coordinated studies among multiple participant sites. The relational database design is conceptually centered on subjects having 1 or more lesions. Each anomaly can be tracked individually along with their treatment outcomes. This design allows for differentiation between treatment responses and untreated lesions' natural course. The relational database design eliminates data entry redundancy and results in extremely flexible search and data export functionality. Vascular anomaly programs in the United States. A relational database correlating clinical findings and photographic, radiologic, histologic, and treatment data for vascular anomalies was created for stand-alone and multiuser networked systems. Proof of concept for independent site data gathering and HIPAA-compliant sharing of data subsets was demonstrated. The collaborative effort by the ASPO Vascular Anomalies Task Force to create the database helped define a common vascular anomaly data set. The resulting relational database software is a powerful tool to further the study of vascular anomalies and the development of evidence-based treatment innovation.
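
    The "subject has one or more lesions, each with its own treatments" core of this design can be sketched in a few tables. The schema below is an invented minimal illustration in Python's sqlite3, not the ASPO task force's actual data set:

    ```python
    import sqlite3

    # Hypothetical minimal schema: demographics live once per subject, while
    # each lesion and each treatment gets its own row, so treated lesions and
    # untreated lesions (natural course) can be followed separately.
    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE subject (
            subject_id INTEGER PRIMARY KEY
        );
        CREATE TABLE lesion (
            lesion_id  INTEGER PRIMARY KEY,
            subject_id INTEGER NOT NULL REFERENCES subject(subject_id),
            diagnosis  TEXT,
            site       TEXT
        );
        CREATE TABLE treatment (
            treatment_id INTEGER PRIMARY KEY,
            lesion_id    INTEGER NOT NULL REFERENCES lesion(lesion_id),
            modality     TEXT,
            outcome      TEXT
        );
    """)

    # Untreated lesions are simply lesions with no treatment rows:
    natural_course = con.execute("""
        SELECT l.lesion_id FROM lesion l
        LEFT JOIN treatment t ON t.lesion_id = l.lesion_id
        WHERE t.treatment_id IS NULL
    """).fetchall()
    ```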

  1. DB-PABP: a database of polyanion-binding proteins.

    Science.gov (United States)

    Fang, Jianwen; Dong, Yinghua; Salamat-Miller, Nazila; Middaugh, C Russell

    2008-01-01

    The interactions between polyanions (PAs) and polyanion-binding proteins (PABPs) have been found to play significant roles in many essential biological processes including intracellular organization, transport and protein folding. Furthermore, many neurodegenerative disease-related proteins are PABPs. Thus, a better understanding of PA/PABP interactions may not only enhance our understanding of biological systems but also provide new clues to these deadly diseases. The literature in this field is widely scattered, suggesting the need for a comprehensive and searchable database of PABPs. The DB-PABP is a comprehensive, manually curated and searchable database of experimentally characterized PABPs. It is freely available and can be accessed online at http://pabp.bcf.ku.edu/DB_PABP/. The DB-PABP was implemented as a MySQL relational database. An interactive web interface was created using Java Server Pages (JSP). The search page of the database is organized into a main search form and a section for utilities. The main search form enables custom searches via four menus: protein names, polyanion names, the source species of the proteins and the methods used to discover the interactions. Available utilities include a commonality matrix, a function for listing PABPs by the number of interacting polyanions and a string search for author surnames. The DB-PABP is maintained at the University of Kansas. We encourage users to provide feedback and submit new data and references.

  2. Brasilia’s Database Administrators

    Directory of Open Access Journals (Sweden)

    Jane Adriana

    2016-06-01

    Full Text Available Database administration has gained an essential role in the management of new database technologies. Different data models are being created to support enormous data volumes, beyond the traditional relational database. These new models are called NoSQL (Not only SQL) databases. The adoption of best practices and procedures has become essential for the operation of database management systems. Thus, this paper investigates some of the techniques and tools used by database administrators. The study highlights features and particularities of databases within the area of Brasilia, the capital of Brazil. The results point to which new database management technologies are currently the most relevant, as well as the central issues in this area.

  3. The database on transgenic luminescent microorganisms as an instrument of studying a microbial component of closed ecosystems

    Science.gov (United States)

    Boyandin, A. N.; Lankin, Y. P.; Kargatova, T. V.; Popova, L. Y.; Pechurkin, N. S.

    Luminescent transgenic microorganisms are widely used in the study of microbial communities' functioning, including closed ones. Bioluminescence is highly sensitive to the effects of different environmental factors. Integration of lux-genes into different metabolic pathways allows many aspects of microorganisms' life to be studied, permitting measurements to be carried out in situ. There is much information about applications of bioluminescent bacteria in different research fields, but to use these data effectively they must be summarized and accumulated in a common source. Therefore an information system on the characteristics of transgenic microorganisms with cloned lux-genes was created, and the database and related client software were developed. The database structure includes information on common characteristics of cloned lux-genes, their sources and properties, on the regulation of gene expression in bacterial cells, and on the dependence of bioluminescence on biotic, abiotic and anthropogenic environmental factors. The database can also store descriptions of changes in bacterial populations depending on environmental changes. The database allows the storage and use of bibliographic information as well as links to the web sites of world collections of microorganisms. Internet publishing software permitting open access to the database through the Internet was developed.

  4. The Dens: Normal Development, Developmental Variants and Anomalies, and Traumatic Injuries

    Directory of Open Access Journals (Sweden)

    William T O'Brien

    2015-01-01

    Full Text Available Accurate interpretation of cervical spine imaging can be challenging, especially in children and the elderly. The biomechanics of the developing pediatric spine and age-related degenerative changes predispose these patient populations to injuries centered at the craniocervical junction. In addition, congenital anomalies are common in this region, especially those associated with the axis/dens, due to its complexity in terms of development compared to other vertebral levels. The most common congenital variations of the dens include the os odontoideum and a persistent ossiculum terminale. At times, it is necessary to distinguish normal development, developmental variants, and developmental anomalies from traumatic injuries in the setting of acute traumatic injury. Key imaging features are useful to differentiate between traumatic fractures and normal or variant anatomy acutely; however, the radiologist must first have a basic understanding of the spectrum of normal developmental anatomy and its anatomic variations in order to make an accurate assessment. This review article attempts to provide the basic framework required for accurate interpretation of cervical spine imaging with a focus on the dens, specifically covering the normal development and ossification of the dens, common congenital variants and their various imaging appearances, fracture classifications, imaging appearances, and treatment options.

  5. NREL: U.S. Life Cycle Inventory Database - About the LCI Database Project

    Science.gov (United States)

    About the LCI Database Project: The U.S. Life Cycle Inventory (LCI) Database is a publicly available database that allows users to objectively review and compare analysis results that are based on similar sources of critically reviewed LCI data, provided through its LCI Database Project. NREL's High-Performance...

  6. Towards a Component Based Model for Database Systems

    Directory of Open Access Journals (Sweden)

    Octavian Paul ROTARU

    2004-02-01

    Full Text Available Due to their effectiveness in the design and development of software applications and due to their recognized advantages in terms of reusability, Component-Based Software Engineering (CBSE) concepts have been arousing a great deal of interest in recent years. This paper presents and extends a component-based approach to object-oriented database systems (OODB) introduced by us in [1] and [2]. Components are proposed as a new abstraction level for database systems, logical partitions of the schema. In this context, the scope is introduced as an escalated property for transactions. Components are studied from the integrity, consistency, and concurrency control perspective. The main benefits of our proposed component model for OODB are the reusability of the database design, including the access statistics required for a proper query optimization, and a smooth information exchange. The integration of crosscutting concerns into the component database model using aspect-oriented techniques is also discussed. One of the main goals is to define a method for the assessment of component composition capabilities. These capabilities are restricted by the component’s interface and measured in terms of adaptability, degree of compose-ability and acceptability level. The above-mentioned metrics are extended from database components to generic software components. This paper extends and consolidates into one common view the ideas previously presented by us in [1, 2, 3]. [1] Octavian Paul Rotaru, Marian Dobre, Component Aspects in Object Oriented Databases, Proceedings of the International Conference on Software Engineering Research and Practice (SERP’04), Volume II, ISBN 1-932415-29-7, pages 719-725, Las Vegas, NV, USA, June 2004. [2] Octavian Paul Rotaru, Marian Dobre, Mircea Petrescu, Integrity and Consistency Aspects in Component-Oriented Databases, Proceedings of the International Symposium on Innovation in Information and Communication Technology (ISIICT

  7. Biofuel Database

    Science.gov (United States)

    Biofuel Database (Web, free access)   This database brings together structural, biological, and thermodynamic data for enzymes that are either in current use or are being considered for use in the production of biofuels.

  8. Soil Properties Database of Spanish Soils Volume II.- Asturias, Cantabria and Pais Vasco

    International Nuclear Information System (INIS)

    Trueba, C; Millan, R.; Schmid, T.; Roquero, C.; Magister, M.

    1998-01-01

    Soil vulnerability determines the sensitivity of the soil following accidental radioactive contamination with Cs-137 and Sr-90. The Departamento de Impacto Ambiental de la Energia of CIEMAT is carrying out an assessment of the radiological vulnerability of the different Spanish soils found on the Iberian Peninsula. This requires knowledge of the soil properties for the various existing soil types. In order to achieve this aim, a bibliographical compilation of soil profiles has been made to characterize the different soil types and create a database of their properties. Depending on the year of publication and the type of documentary source, the information compiled from the available bibliography is very heterogeneous. Therefore, an important effort has been made to normalize and process the information prior to its incorporation into the database. This volume presents the criteria applied to normalize and process the data, as well as the soil properties of the various soil types belonging to the Comunidades Autonomas de Asturias, Cantabria and Pais Vasco. (Author) 34 refs.

  9. NIRS database of the original research database

    International Nuclear Information System (INIS)

    Morita, Kyoko

    1991-01-01

    Recently, library staff arranged and compiled the original research papers that have been written by researchers in the 33 years since the National Institute of Radiological Sciences (NIRS) was established. This paper describes how the internal database of original research papers was created. It is a small sample of a hand-made database, accumulated by library staff using whatever knowledge of computers and programming they had. (author)

  10. Normality Tests for Statistical Analysis: A Guide for Non-Statisticians

    Science.gov (United States)

    Ghasemi, Asghar; Zahediasl, Saleh

    2012-01-01

    Statistical errors are common in scientific literature and about 50% of the published articles have at least one error. The assumption of normality needs to be checked for many statistical procedures, namely parametric tests, because their validity depends on it. The aim of this commentary is to overview checking for normality in statistical analysis using SPSS. PMID:23843808
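
    For readers working outside SPSS, the same check is available in other environments. A minimal sketch using SciPy's Shapiro-Wilk test, with illustrative data and the conventional 5% threshold:

    ```python
    import numpy as np
    from scipy import stats

    # Hedged sketch of the workflow the commentary describes (SPSS swapped for
    # SciPy): check the normality assumption before running a parametric test.
    rng = np.random.default_rng(0)
    sample = rng.normal(loc=5.0, scale=1.2, size=80)  # illustrative data

    w, p = stats.shapiro(sample)  # Shapiro-Wilk test of normality
    print(f"W = {w:.3f}, p = {p:.3f}")
    if p > 0.05:
        print("No evidence against normality; a parametric test is defensible.")
    else:
        print("Normality rejected; consider a non-parametric alternative.")
    ```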

  11. Community Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This Excel spreadsheet is the result of merging, at the port level, several of the in-house fisheries databases in combination with other demographic databases such...

  12. Applications of GIS and database technologies to manage a Karst Feature Database

    Science.gov (United States)

    Gao, Y.; Tipping, R.G.; Alexander, E.C.

    2006-01-01

    This paper describes the management of a Karst Feature Database (KFD) in Minnesota. Two sets of applications in both GIS and Database Management System (DBMS) have been developed for the KFD of Minnesota. These applications were used to manage and to enhance the usability of the KFD. Structured Query Language (SQL) was used to manipulate transactions of the database and to facilitate the functionality of the user interfaces. The Database Administrator (DBA) authorized users with different access permissions to enhance the security of the database. Database consistency and recovery are accomplished by creating data logs and maintaining backups on a regular basis. The working database provides guidelines and management tools for future studies of karst features in Minnesota. The methodology of designing this DBMS is applicable to develop GIS-based databases to analyze and manage geomorphic and hydrologic datasets at both regional and local scales. The short-term goal of this research is to develop a regional KFD for the Upper Mississippi Valley Karst and the long-term goal is to expand this database to manage and study karst features at national and global scales.
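
    The transactional part of that consistency and recovery story can be sketched briefly. The example below uses Python's sqlite3 and an invented one-table karst schema, not the Minnesota KFD itself: grouped changes either commit together or roll back as a unit.

    ```python
    import sqlite3

    # Hypothetical karst feature table; the duplicate primary key below makes
    # the second insert fail, so the whole transaction rolls back.
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE karst_feature (id INTEGER PRIMARY KEY, type TEXT)")
    try:
        with con:  # opens a transaction; commits on success, rolls back on error
            con.execute("INSERT INTO karst_feature VALUES (1, 'sinkhole')")
            con.execute("INSERT INTO karst_feature VALUES (1, 'spring')")  # PK clash
    except sqlite3.IntegrityError:
        pass
    print(con.execute("SELECT COUNT(*) FROM karst_feature").fetchone())  # (0,)
    ```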

  13. Automatic feature extraction in large fusion databases by using deep learning approach

    Energy Technology Data Exchange (ETDEWEB)

    Farias, Gonzalo, E-mail: gonzalo.farias@ucv.cl [Pontificia Universidad Católica de Valparaíso, Valparaíso (Chile); Dormido-Canto, Sebastián [Departamento de Informática y Automática, UNED, Madrid (Spain); Vega, Jesús; Rattá, Giuseppe [Asociación EURATOM/CIEMAT Para Fusión, CIEMAT, Madrid (Spain); Vargas, Héctor; Hermosilla, Gabriel; Alfaro, Luis; Valencia, Agustín [Pontificia Universidad Católica de Valparaíso, Valparaíso (Chile)

    2016-11-15

    Highlights: • Feature extraction is a very critical stage in any machine learning algorithm. • The problem dimensionality can be reduced enormously when selecting suitable attributes. • Despite the importance of feature extraction, the process is commonly done manually by trial and error. • Fortunately, recent advances in the deep learning approach have proposed an encouraging way to find a good feature representation automatically. • In this article, deep learning is applied to the TJ-II fusion database to get more robust and accurate classifiers in comparison to previous work. - Abstract: Feature extraction is one of the most important machine learning issues. Finding suitable attributes of datasets can enormously reduce the dimensionality of the input space, and from a computational point of view can help all of the following steps of pattern recognition problems, such as classification or information retrieval. However, the feature extraction step is usually performed manually. Moreover, depending on the type of data, we can face a wide range of methods to extract features. In this sense, the process of selecting appropriate techniques normally takes a long time. This work describes the use of recent advances in the deep learning approach in order to find a good feature representation automatically. The implementation of a special neural network called a sparse autoencoder and its application to two classification problems of the TJ-II fusion database is shown in detail. Results have shown that it is possible to get robust classifiers with a high success rate, in spite of the fact that the feature space is reduced to less than 0.02% of the original one.
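
    A minimal sparse autoencoder of the kind described can be written in a few dozen lines of NumPy. The sketch below is illustrative only (sizes, sparsity target and data are invented, not TJ-II settings): a sigmoid hidden layer is trained with a reconstruction loss plus a KL sparsity penalty, and its activations become the learned features.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(256, 64))            # 256 signals, 64 raw attributes
    m, n_in, n_hid = X.shape[0], X.shape[1], 8
    rho, beta, lr = 0.05, 3.0, 0.05           # sparsity target / weight, step

    W1 = rng.normal(scale=0.1, size=(n_in, n_hid)); b1 = np.zeros(n_hid)
    W2 = rng.normal(scale=0.1, size=(n_hid, n_in)); b2 = np.zeros(n_in)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    for _ in range(500):
        h = sigmoid(X @ W1 + b1)              # hidden code (learned features)
        x_hat = h @ W2 + b2                   # linear reconstruction
        rho_hat = h.mean(axis=0)              # average activation per hidden unit

        d_out = (x_hat - X) / m               # gradient of 0.5*mean squared error
        sparsity = beta * (-rho / rho_hat + (1 - rho) / (1 - rho_hat)) / m
        d_hid = (d_out @ W2.T + sparsity) * h * (1 - h)

        W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_hid;  b1 -= lr * d_hid.sum(axis=0)

    # The 8-dimensional code replaces the 64 raw attributes as classifier input.
    features = sigmoid(X @ W1 + b1)
    print(features.shape, rho_hat.round(2))
    ```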

  14. Automatic feature extraction in large fusion databases by using deep learning approach

    International Nuclear Information System (INIS)

    Farias, Gonzalo; Dormido-Canto, Sebastián; Vega, Jesús; Rattá, Giuseppe; Vargas, Héctor; Hermosilla, Gabriel; Alfaro, Luis; Valencia, Agustín

    2016-01-01

    Highlights: • Feature extraction is a very critical stage in any machine learning algorithm. • The problem dimensionality can be reduced enormously when selecting suitable attributes. • Despite the importance of feature extraction, the process is commonly done manually by trial and error. • Fortunately, recent advances in the deep learning approach have proposed an encouraging way to find a good feature representation automatically. • In this article, deep learning is applied to the TJ-II fusion database to get more robust and accurate classifiers in comparison to previous work. - Abstract: Feature extraction is one of the most important machine learning issues. Finding suitable attributes of datasets can enormously reduce the dimensionality of the input space, and from a computational point of view can help all of the following steps of pattern recognition problems, such as classification or information retrieval. However, the feature extraction step is usually performed manually. Moreover, depending on the type of data, we can face a wide range of methods to extract features. In this sense, the process of selecting appropriate techniques normally takes a long time. This work describes the use of recent advances in the deep learning approach in order to find a good feature representation automatically. The implementation of a special neural network called a sparse autoencoder and its application to two classification problems of the TJ-II fusion database is shown in detail. Results have shown that it is possible to get robust classifiers with a high success rate, in spite of the fact that the feature space is reduced to less than 0.02% of the original one.

  15. Hydrometeorological Database (HMDB) for Practical Research in Ecology

    OpenAIRE

    Novakovskiy, A; Elsakov, V

    2014-01-01

    The regional HydroMeteorological DataBase (HMDB) was designed for easy access to climate data via the Internet. It contains data on various climatic parameters (temperature, precipitation, pressure, humidity, and wind strength and direction) from 190 meteorological stations in Russia and bordering countries for a period of instrumental observations of over 100 years. Open sources were used to ingest data into HMDB. An analytical block was also developed to perform the most common statistical ...

  16. The Amma-Sat Database

    Science.gov (United States)

    Ramage, K.; Desbois, M.; Eymard, L.

    2004-12-01

    ... a regular grid with a spatial resolution compatible with the spatial variability of the geophysical parameter. Data are stored in NetCDF files to facilitate their use. Satellite products can be selected using several spatial and temporal criteria and ordered through a web interface developed in PHP-MySQL. More common means of access are also available, such as direct FTP or NFS access for identified users. A Live Access Server allows quick visualization of the data. A meta-data catalogue based on the Directory Interchange Format manages the documentation of each satellite product. The database is currently under development, but some products are already available. The database will be complete by the end of 2005.
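
    On the user side, reading such a gridded NetCDF product is straightforward. A hedged sketch assuming the netCDF4 Python package, with invented file and variable names:

    ```python
    import numpy as np
    from netCDF4 import Dataset  # assumed reader; names below are hypothetical

    # Open one gridded satellite product and crop it to a spatial window.
    with Dataset("amma_sat_product.nc") as ds:
        lat = ds.variables["lat"][:]
        lon = ds.variables["lon"][:]
        field = ds.variables["parameter"][:]      # assumed 2-D (lat, lon) grid

        # Illustrative West-African window.
        ilat = np.where((lat >= 5.0) & (lat <= 20.0))[0]
        ilon = np.where((lon >= -20.0) & (lon <= 10.0))[0]
        window = field[np.ix_(ilat, ilon)]
        print(window.shape, float(window.mean()))
    ```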

  17. Normal isometric strength of rotator cuff muscles in adults

    OpenAIRE

    Chezar, A.; Berkovitch, Y.; Haddad, M.; Keren, Y.; Soudry, M.; Rosenberg, N.

    2013-01-01

    Objectives: The most prevalent disorders of the shoulder are related to the muscles of the rotator cuff. In order to develop a mechanical method for the evaluation of the rotator cuff muscles, we created a database of isometric force generation by the rotator cuff muscles in a normal adult population. We hypothesised the existence of variations according to age, gender and limb dominance. Methods: A total of 400 healthy adult volunteers were tested, classified into groups of 50 men and women for e...

  18. Open Geoscience Database

    Science.gov (United States)

    Bashev, A.

    2012-04-01

    Currently there is an enormous amount of various geoscience databases. Unfortunately the only users of the majority of these databases are their elaborators. There are several reasons for that: incompatibility, specificity of tasks and objects, and so on. However, the main obstacles to wide usage of geoscience databases are complexity for elaborators and complication for users. The complexity of architecture leads to high costs that block public access. The complication prevents users from understanding when and how to use the database. Only databases associated with GoogleMaps lack these drawbacks, but they could hardly be called "geoscience" databases. Nevertheless, an open and simple geoscience database is necessary at least for educational purposes (see our abstract for ESSI20/EOS12). We developed a database and a web interface to work with it, and it is now accessible at maps.sch192.ru. In this database a result is a value of a parameter (no matter which) at a station with a certain position, associated with metadata: the date when the result was obtained; the type of station (lake, soil, etc.); the contributor that sent the result. Each contributor has their own profile, which allows the reliability of the data to be estimated. The results can be represented on a GoogleMaps space image as a point at a certain position, coloured according to the value of the parameter. There are default colour scales and each registered user can create their own scale. The results can also be extracted to a *.csv file. For both types of representation one can select the data by date, object type, parameter type, area and contributor. The data are uploaded in *.csv format: Name of the station; Lattitude(dd.dddddd); Longitude(ddd.dddddd); Station type; Parameter type; Parameter value; Date(yyyy-mm-dd). The contributor is recognised when entering the data. This is the minimal set of features required to connect a value of a parameter with a position and see the results. All the complicated data
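
    Parsing the semicolon-separated upload format listed above takes only a few lines. A sketch with an invented sample row:

    ```python
    import csv
    from io import StringIO

    # One invented record in the upload format quoted in the abstract:
    # name; latitude; longitude; station type; parameter; value; date
    upload = StringIO("Station-1;61.123456;034.567890;lake;pH;6.8;2011-07-15\n")

    fields = ["name", "lat", "lon", "station_type", "parameter", "value", "date"]
    for row in csv.reader(upload, delimiter=";"):
        rec = dict(zip(fields, row))
        rec["lat"], rec["lon"] = float(rec["lat"]), float(rec["lon"])
        rec["value"] = float(rec["value"])
        print(rec)
    ```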

  19. MISTRAL V1.1.1: assessing doses from atmospheric releases in normal and off-normal conditions

    International Nuclear Information System (INIS)

    David Kerouanton; Patrick Devin; Malvina Rennesson

    2006-01-01

    Protecting the environment and the public from radioactive and chemical hazards has always been a top priority for all companies operating in the nuclear domain. In this scope, SGN provides all the services the nuclear industry needs in environmental studies, especially in relation to impact assessment in normal operating conditions and risk assessment in off-normal conditions. In order to quantify the dose impact on members of the public due to atmospheric releases, COGEMA and SGN developed the MISTRAL V1.1.1 code. Dose impact depends strongly on the dispersion of radionuclides in the atmosphere. The main parameters involved in characterizing dispersion are wind velocity and direction, rain, diffusion conditions, coordinates of the point of observation and stack elevation. MISTRAL implements the DOURY and PASQUILL Gaussian plume models, which are widely used in the scientific community. These models, applicable for transfer distances ranging from 100 m up to 30 km, are used to calculate atmospheric concentration and deposit at different distances from the point of release. MISTRAL allows the use of different dose regulations or dose coefficient databases such as: - ICRP30 and ICRP71 for internal doses (inhalation, ingestion) - the Despres/Kocher database or US-EPA Federal Guidance No. 12 (ICRP72 for noble gases) for external exposure (from plume or ground). The initial instant of the release can be taken as the origin of time, or a date format can be specified (useful in a crisis context). Once the context is specified, the user defines the meteorological conditions of the release. In normal operating mode (routine releases), the user gives the annual meteorological scheme; the data can be recorded in the MISTRAL meteorological database. In off-normal conditions mode, MISTRAL V1.1 allows the use of successive release stages for which the user gives the duration and the meteorological conditions, that is to say stability class, wind speed and direction, and rainfall
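
    The Pasquill-type Gaussian plume equation that this family of models implements can be sketched in a few lines; the dispersion parameters below are placeholders, not MISTRAL's tabulated values for a given stability class and distance.

```python
# Minimal sketch of a Gaussian plume with ground reflection, the family of
# models the abstract names. sigma_y/sigma_z are placeholders here.
import math

def concentration(Q, u, y, z, H, sigma_y, sigma_z):
    """Q: release rate (Bq/s), u: wind speed (m/s), H: effective stack height
    (m), sigma_y/sigma_z: dispersion parameters (m) at the downwind distance."""
    lateral = math.exp(-y**2 / (2 * sigma_y**2))
    vertical = (math.exp(-(z - H)**2 / (2 * sigma_z**2))
                + math.exp(-(z + H)**2 / (2 * sigma_z**2)))  # ground reflection
    return Q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# e.g. 1 GBq/s release, 5 m/s wind, receptor on the plume axis at ground level
print(concentration(Q=1e9, u=5.0, y=0.0, z=0.0, H=30.0, sigma_y=80.0, sigma_z=40.0))
```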

  20. Lessons Learned from resolving massive IPS database change for SPADES+

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jin-Soo [KEPCO Engineering and Construction Co., Daejeon (Korea, Republic of)]

    2016-10-15

    Safety Parameter Display and Evaluation System+ (SPADES+) was implemented to meet the requirements for the Safety Parameter Display System (SPDS), which are related to the TMI Action Plan requirements. SPADES+ continuously monitors the critical safety functions during normal, abnormal, and emergency operation modes and generates an alarm output to the alarm server when the tolerances related to safety functions are not satisfied. The alarm algorithm for the critical safety functions is performed in the NSSS Application Software (NAPS) server of the Information Process System (IPS), and the calculation result is displayed on the flat panel display (FPD) of the IPS. SPADES+ provides the critical variables to the control room operators to aid them in rapidly and reliably determining the safety status of the plant. Many database point ID names (518 points) were changed. POINT_ID is used in the programming source code, in related documents such as the SDS and SRS, and in the graphic database. To reduce human errors, computer programs and office-program macros were used. Although automatic methods were used for changing POINT_IDs, editing the change list still took a great deal of time beyond building the computerized solutions. In the IPS there are many more programs than SPADES+, and over 30,000 POINT_IDs are in the IPS database, so changing POINT_IDs could be a burden to software engineers. In the case of the Ovation system database, there is an Alias field to prevent this kind of problem. The Alias is a kind of secondary key in the database.
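
    A minimal sketch of the Alias idea, using SQLite purely for illustration: a stable secondary key decouples software references from renameable point IDs, so a mass rename touches only one column. The schema and names are hypothetical, not the Ovation or IPS schema.

```python
# Minimal sketch: a stable alias as a secondary key, so point IDs can be
# renamed in bulk without breaking code that references the alias.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE points (
                  point_id TEXT PRIMARY KEY,  -- display name (may be renamed)
                  alias    TEXT UNIQUE,       -- stable key used by software
                  value    REAL)""")
db.execute("INSERT INTO points VALUES ('RCS-PT-101A', 'ALIAS0001', 155.2)")

# A mass rename of point IDs leaves every alias-based lookup intact.
db.execute("UPDATE points SET point_id = 'RC-PT-0101A' WHERE alias = 'ALIAS0001'")
print(db.execute("SELECT value FROM points WHERE alias = 'ALIAS0001'").fetchone())
```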

  1. Inleiding database-systemen

    NARCIS (Netherlands)

    Pels, H.J.; Lans, van der R.F.; Pels, H.J.; Meersman, R.A.

    1993-01-01

    This article introduces the main concepts that play a role around databases and gives an overview of the objectives, the functions and the components of database systems. Although the function of a database is intuitively fairly clear, it is nevertheless, from a technological point of view, a complex

  2. Learning weighted sparse representation of encoded facial normal information for expression-robust 3D face recognition

    KAUST Repository

    Li, Huibin

    2011-10-01

    This paper proposes a novel approach for 3D face recognition by learning weighted sparse representation of encoded facial normal information. To comprehensively describe the 3D facial surface, the three components of the normal vector, in the X, Y, and Z planes respectively, are encoded locally into their corresponding normal pattern histograms. These are finally fed to a sparse representation classifier enhanced by learning-based spatial weights. Experimental results achieved on the FRGC v2.0 database prove that the proposed encoded normal information is much more discriminative than the original normal information. Moreover, the patch-based weights learned using the FRGC v1.0 and Bosphorus datasets also demonstrate the importance of each facial physical component for 3D face recognition. © 2011 IEEE.
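
    A minimal sketch of the component-wise encoding step, assuming a plain uniform binning of each normal component over a patch; the paper's actual normal pattern histograms are a more elaborate local encoding.

```python
# Minimal sketch: per-component histograms of surface normals for one facial
# patch. Uniform binning is an assumption; the paper uses richer encodings.
import numpy as np

def normal_component_histograms(normals, bins=16):
    # normals: (n_points, 3) unit normal vectors of a facial patch
    return np.stack([np.histogram(normals[:, c], bins=bins,
                                  range=(-1.0, 1.0), density=True)[0]
                     for c in range(3)])          # (3, bins) descriptor

n = np.random.default_rng(0).normal(size=(500, 3))
n /= np.linalg.norm(n, axis=1, keepdims=True)     # normalize to unit length
print(normal_component_histograms(n).shape)
```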

  3. Radiation protection databases of nuclear safety regulatory authority

    International Nuclear Information System (INIS)

    Janzekovic, H.; Vokal, B.; Krizman, M.

    2003-01-01

    Radiation protection and nuclear safety of nuclear installations have a common objective: protection against ionising radiation. The operational safety of a nuclear power plant is evaluated using performance indicators such as collective radiation exposure, unit capability factor, unplanned capability loss factor, etc. As stated by WANO (World Association of Nuclear Operators), the performance indicators are 'a management tool so each operator can monitor its own performance and progress, set challenging goals for improvement and consistently compare performance with that of other plants or industry'. In order to make the analysis of the performance indicators feasible for an operator as well as for regulatory authorities, a suitable database should be created based on the data related to a facility or facilities. Moreover, international bodies have found that comparison of radiation protection in nuclear facilities across countries is feasible only if databases with well-defined parameters are established. The article briefly describes the development of international databases on radiation protection related to nuclear facilities. Issues related to the possible development of efficient radiation protection control of a nuclear facility, based on the experience of the Slovenian Nuclear Safety Administration, are also presented. (author)

  4. Extracting reaction networks from databases-opening Pandora's box.

    Science.gov (United States)

    Fearnley, Liam G; Davis, Melissa J; Ragan, Mark A; Nielsen, Lars K

    2014-11-01

    Large quantities of information describing the mechanisms of biological pathways continue to be collected in publicly available databases. At the same time, experiments have increased in scale, and biologists increasingly use pathways defined in online databases to interpret the results of experiments and generate hypotheses. Emerging computational techniques that exploit the rich biological information captured in reaction systems require formal standardized descriptions of pathways to extract these reaction networks and avoid the alternative: time-consuming and largely manual literature-based network reconstruction. Here, we systematically evaluate the effects of commonly used knowledge representations on the seemingly simple task of extracting a reaction network describing signal transduction from a pathway database. We show that this process is in fact surprisingly difficult, and the pathway representations adopted by various knowledge bases have dramatic consequences for reaction network extraction, connectivity, capture of pathway crosstalk and in the modelling of cell-cell interactions. Researchers constructing computational models built from automatically extracted reaction networks must therefore consider the issues we outline in this review to maximize the value of existing pathway knowledge. © The Author 2013. Published by Oxford University Press.

  5. TRY – a global database of plant traits

    DEFF Research Database (Denmark)

    Kattge, J.; Diaz, S.; Lavorel, S.

    2011-01-01

    … species richness to ecosystem functional diversity. Trait data thus represent the raw material for a wide range of research from evolutionary biology, community and functional ecology to biogeography. Here we present the global database initiative named TRY, which has united a wide range of the plant trait research community worldwide and gained an unprecedented buy-in of trait data: so far 93 trait databases have been contributed. The data repository currently contains almost three million trait entries for 69,000 out of the world's 300,000 plant species, with a focus on 52 groups of traits … is between species (interspecific), but significant intraspecific variation is also documented, up to 40% of the overall variation. Plant functional types (PFTs), as commonly used in vegetation models, capture a substantial fraction of the observed variation, but for several traits most variation occurs …

  6. Reference values of MRI measurements of the common bile duct and pancreatic duct in children

    Energy Technology Data Exchange (ETDEWEB)

    Gwal, Kriti; Bedoya, Maria A.; Patel, Neal; Darge, Kassa; Anupindi, Sudha A. [University of Pennsylvania Perelman School of Medicine, Department of Radiology, The Children's Hospital of Philadelphia, Philadelphia, PA (United States)]; Rambhatla, Siri J. [Beth Israel Medical Center, Department of Pediatrics, Newark, NJ (United States)]; Sreedharan, Ram R. [University of Pennsylvania, Departments of Gastroenterology, Hepatology and Nutrition, The Children's Hospital of Philadelphia, Perelman School of Medicine, Philadelphia, PA (United States)]

    2015-08-15

    Magnetic resonance imaging/cholangiopancreatography (MRI/MRCP) is now an essential imaging modality for the evaluation of biliary and pancreatic pathology in children, but there are no data depicting the normal diameters of the common bile duct (CBD) and pancreatic duct. Recognition of abnormal duct size is important, and the increasing use of MRCP necessitates normal MRI measurements. To present normal MRI measurements for the common bile duct and pancreatic duct in children. In this retrospective study we searched our MR urography (MRU) database from 2006 until 2013 for all children from birth to 10 years of age. We excluded children with a history of hepatobiliary or pancreatic surgery. We stratified 204 children into five age groups and retrospectively measured the CBD and the pancreatic duct on 2-D axial and 3-D coronal T2-weighted sequences. We performed statistical analysis, using logistic and linear regressions to detect the age association of the visibility and size of the duct measurements. We used non-parametric tests to detect gender and imaging plane differences. Our study included 204 children, 106 (52%) boys and 98 (48%) girls, with a median age of 33 months (range 0-119 months). The common bile duct was visible in all children in all age groups. The pancreatic duct was significantly less visible in the youngest children, group 1 (54/67, 80.5%; P = 0.003), than in the oldest children, group 5 (22/22, 100%). In group 2 the pancreatic duct was seen in 19/21 (90.4%), in group 3 in 52/55 (94.5%), and in group 4 in 39/39 (100%). All duct measurements increased with age (P < 0.001; r-value > 0.423), and the incremental differences between ages were significant. The measurement variations between the axial and coronal planes were statistically significant (P < 0.001); however, these differences were fractions of millimeters. For example, in group 1 the mean coronal measurement of the CBD was 2.1 mm and the axial

  7. Reference values of MRI measurements of the common bile duct and pancreatic duct in children

    International Nuclear Information System (INIS)

    Gwal, Kriti; Bedoya, Maria A.; Patel, Neal; Darge, Kassa; Anupindi, Sudha A.; Rambhatla, Siri J.; Sreedharan, Ram R.

    2015-01-01

    Magnetic resonance imaging/cholangiopancreatography (MRI/MRCP) is now an essential imaging modality for the evaluation of biliary and pancreatic pathology in children, but there are no data depicting the normal diameters of the common bile duct (CBD) and pancreatic duct. Recognition of abnormal duct size is important, and the increasing use of MRCP necessitates normal MRI measurements. To present normal MRI measurements for the common bile duct and pancreatic duct in children. In this retrospective study we searched our MR urography (MRU) database from 2006 until 2013 for all children from birth to 10 years of age. We excluded children with a history of hepatobiliary or pancreatic surgery. We stratified 204 children into five age groups and retrospectively measured the CBD and the pancreatic duct on 2-D axial and 3-D coronal T2-weighted sequences. We performed statistical analysis, using logistic and linear regressions to detect the age association of the visibility and size of the duct measurements. We used non-parametric tests to detect gender and imaging plane differences. Our study included 204 children, 106 (52%) boys and 98 (48%) girls, with a median age of 33 months (range 0-119 months). The common bile duct was visible in all children in all age groups. The pancreatic duct was significantly less visible in the youngest children, group 1 (54/67, 80.5%; P = 0.003), than in the oldest children, group 5 (22/22, 100%). In group 2 the pancreatic duct was seen in 19/21 (90.4%), in group 3 in 52/55 (94.5%), and in group 4 in 39/39 (100%). All duct measurements increased with age (P < 0.001; r-value > 0.423), and the incremental differences between ages were significant. The measurement variations between the axial and coronal planes were statistically significant (P < 0.001); however, these differences were fractions of millimeters. For example, in group 1 the mean coronal measurement of the CBD was 2.1 mm and the axial

  8. Normalization as a canonical neural computation

    Science.gov (United States)

    Carandini, Matteo; Heeger, David J.

    2012-01-01

    There is increasing evidence that the brain relies on a set of canonical neural computations, repeating them across brain regions and modalities to apply similar operations to different problems. A promising candidate for such a computation is normalization, in which the responses of neurons are divided by a common factor that typically includes the summed activity of a pool of neurons. Normalization was developed to explain responses in the primary visual cortex and is now thought to operate throughout the visual system, and in many other sensory modalities and brain regions. Normalization may underlie operations such as the representation of odours, the modulatory effects of visual attention, the encoding of value and the integration of multisensory information. Its presence in such a diversity of neural systems in multiple species, from invertebrates to mammals, suggests that it serves as a canonical neural computation. PMID:22108672
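
    The canonical divisive normalization just described can be sketched in a few lines; the exponent and semi-saturation constant below are illustrative values, not fitted parameters.

```python
# Minimal sketch of divisive normalization: each response is divided by a
# factor combining a semi-saturation constant and the pooled activity.
import numpy as np

def normalize(responses, sigma=1.0, n=2.0):
    r = np.asarray(responses, dtype=float) ** n
    return r / (sigma ** n + r.sum())     # pool = summed activity of all units

print(normalize([1.0, 2.0, 4.0]))         # strong inputs suppress weak ones
```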

  9. Estimation of common cause failure parameters for diesel generators

    International Nuclear Information System (INIS)

    Tirira, J.; Lanore, J.M.

    2002-10-01

    This paper presents a summary of results concerning the operating-experience feedback analysis of French emergency diesel generators (EDGs). The database of common cause failures for EDGs has been updated; the data collected cover a period of 10 years. Several tens of latent common cause failure (CCF) events were identified; most of the events collected are potential CCFs, and 15% are characterized as complete CCFs. The database is organised following the structure proposed by the 'International Common Cause Data Exchange' (ICDE) project. Collected events are analyzed by failure mode and degree of failure, and root causes, coupling factors and corrective actions are studied qualitatively. A quantitative analysis is in progress to evaluate CCF parameters, taking into account the average impact vector and the rate of independent failures. The interest of the average impact vector approach is that it makes it possible to take into account a wide experience feedback, not limited to complete CCFs but also including many events related to partial or potential CCFs. It has to be noted that no finalized quantitative conclusions can yet be drawn; analysis is in progress to evaluate the diesel CCF parameters. The numerical CCF coding of the events involves a degree of subjective analysis, which requires a complete and detailed examination of each event. (authors)
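
    A minimal sketch of the average impact vector idea under standard ICDE-style conventions: each event contributes a vector of probabilities that 0..m components failed, and averaging over events, including partial and potential CCFs, yields expected failure counts from which alpha factors can be estimated. The event vectors and group size are invented for illustration.

```python
# Minimal sketch: averaging impact vectors and estimating alpha factors.
# Numbers are illustrative only, not the paper's data.
import numpy as np

# Impact vectors for 4 hypothetical events on a group of 3 diesel generators;
# columns = probability that exactly 0, 1, 2, 3 EDGs failed in the event.
impact = np.array([
    [0.0, 1.0, 0.0, 0.0],   # independent failure
    [0.0, 0.5, 0.5, 0.0],   # potential CCF, degree of failure uncertain
    [0.0, 0.0, 0.0, 1.0],   # complete CCF of all three
    [0.0, 0.8, 0.2, 0.0],
])
n_k = impact.sum(axis=0)            # expected number of events failing k units
alpha = n_k[1:] / n_k[1:].sum()     # alpha-factor estimates for k = 1..3
print(alpha)
```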

  10. Integration of Biodiversity Databases in Taiwan and Linkage to Global Databases

    Directory of Open Access Journals (Sweden)

    Kwang-Tsao Shao

    2007-03-01

    Full Text Available As of 2001, the biodiversity databases in Taiwan were dispersed among various institutions and colleges, each holding only a limited amount of data. The Natural Resources and Ecology GIS Database sponsored by the Council of Agriculture, which is part of the National Geographic Information System planned by the Ministry of Interior, was the most well-established biodiversity database in Taiwan. This database, however, mainly collected distribution data on terrestrial animals and plants within the Taiwan area. In 2001 GBIF was formed, and Taiwan joined as an Associate Participant, starting the establishment and integration of animal and plant species databases; TaiBIF was therefore able to cooperate with GBIF. Information on the Catalog of Life, specimens, and alien species was integrated using the Darwin Core standard. These metadata standards allowed the biodiversity information of Taiwan to connect with global databases.

  11. Database Replication

    CERN Document Server

    Kemme, Bettina

    2010-01-01

    Database replication is widely used for fault-tolerance, scalability and performance. The failure of one database replica does not stop the system from working, as the available replicas can take over the tasks of the failed replica. Scalability can be achieved by distributing the load across all replicas, and by adding new replicas should the load increase. Finally, database replication can provide fast local access, even if clients are geographically distributed, provided data copies are located close to them. Despite its advantages, replication is not a straightforward technique to apply, and

  12. Evaluation of a speaker identification system with and without fusion using three databases in the presence of noise and handset effects

    Science.gov (United States)

    Al-Kaltakchi, Musab T. S.; Woo, Wai L.; Dlay, Satnam; Chambers, Jonathon A.

    2017-12-01

    In this study, a speaker identification system is considered, consisting of a feature extraction stage which utilizes both power normalized cepstral coefficients (PNCCs) and Mel frequency cepstral coefficients (MFCCs). Normalization is applied by employing cepstral mean and variance normalization (CMVN) and feature warping (FW), together with acoustic modeling using a Gaussian mixture model-universal background model (GMM-UBM). The main contributions are comprehensive evaluations of the effect of both additive white Gaussian noise (AWGN) and non-stationary noise (NSN) (with and without a G.712 type handset) upon identification performance. In particular, three NSN types with varying signal to noise ratios (SNRs) were tested, corresponding to street traffic, a bus interior, and a crowded talking environment. The performance evaluation also considered the effect of late fusion techniques based on score fusion, namely mean, maximum, and linear weighted sum fusion. The databases employed were TIMIT, SITW, and NIST 2008; 120 speakers were selected from each database to yield 3600 speech utterances. As recommendations from the study, mean fusion yields the overall best performance in terms of speaker identification accuracy (SIA) with noisy speech, whereas linear weighted sum fusion is overall best for the original database recordings.
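
    Of the normalization steps named above, CMVN is simple enough to sketch: each coefficient track is shifted to zero mean and scaled to unit variance per utterance, which reduces stationary channel effects such as handset colouration. The (n_frames, n_coeffs) layout is an assumption.

```python
# Minimal sketch of per-utterance cepstral mean and variance normalization.
import numpy as np

def cmvn(features, eps=1e-8):
    # features: (n_frames, n_coeffs) array of MFCC/PNCC vectors
    mu = features.mean(axis=0)
    sigma = features.std(axis=0)
    return (features - mu) / (sigma + eps)

frames = np.random.randn(300, 13) * 3.0 + 5.0    # dummy utterance
out = cmvn(frames)
print(out.mean(axis=0).round(6), out.std(axis=0).round(6))  # ~0 mean, ~1 std
```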

  13. Standardization of XML Database Exchanges and the James Webb Space Telescope Experience

    Science.gov (United States)

    Gal-Edd, Jonathan; Detter, Ryan; Jones, Ron; Fatig, Curtis C.

    2007-01-01

    Personnel from the National Aeronautics and Space Administration (NASA) James Webb Space Telescope (JWST) Project have been working with various standards communities, such as the Object Management Group (OMG) and the Consultative Committee for Space Data Systems (CCSDS), to assist in the definition of a common Extensible Markup Language (XML) database exchange format. The CCSDS and OMG standards are intended for the exchange of core command and telemetry information, not for all the database information needed to exercise a NASA space mission. The mission-specific database, containing all the information needed for a space mission, is translated from/to the standard using a translator. The standard is meant to provide a system that encompasses 90% of the information needed for command and telemetry processing. This paper discusses standardization of the XML database exchange format, the tools used, and the JWST experience, as well as future work with XML standards groups, both commercial and governmental.

  14. The development of specific reliability database for a Korean Nuclear Power Plant

    International Nuclear Information System (INIS)

    Park, S.K.; Park, B.L.; Kim, M.R.; Jeong, B.H.; Kwon, J.J.

    2001-01-01

    The object of this study is to develop a reliability database for PSA applications, covering failure rates for safety-related components, test and maintenance unavailability, and common cause failure factors (but not initiating event frequencies), for the period of 10 years from 1990 to 1999. In this study we developed a plant-specific reliability database for PSA (Probabilistic Safety Assessment) applications and compared it, on a component-type basis, with generic reliability databases developed in the US such as EPRI-URD, IEEE-500 and NUCLARR. We found some general differences in component failure rates and in test and maintenance unavailability, and we describe the characteristics of those differences for some important component types. We also analyzed the reasons for the differences in terms of maintenance factors such as maintenance policy and maintenance practice, and found that these factors strongly influence the values in a plant-specific reliability database. (author)

  15. The Protein Identifier Cross-Referencing (PICR) service: reconciling protein identifiers across multiple source databases

    Directory of Open Access Journals (Sweden)

    Leinonen Rasko

    2007-10-01

    Full Text Available Abstract Background Each major protein database uses its own conventions when assigning protein identifiers. Resolving the various, potentially unstable, identifiers that refer to identical proteins is a major challenge. This is a common problem when attempting to unify datasets that have been annotated with proteins from multiple data sources or querying data providers with one flavour of protein identifiers when the source database uses another. Partial solutions for protein identifier mapping exist but they are limited to specific species or techniques and to a very small number of databases. As a result, we have not found a solution that is generic enough and broad enough in mapping scope to suit our needs. Results We have created the Protein Identifier Cross-Reference (PICR) service, a web application that provides interactive and programmatic (SOAP and REST) access to a mapping algorithm that uses the UniProt Archive (UniParc) as a data warehouse to offer protein cross-references based on 100% sequence identity to proteins from over 70 distinct source databases loaded into UniParc. Mappings can be limited by source database, taxonomic ID and activity status in the source database. Users can copy/paste or upload files containing protein identifiers or sequences in FASTA format to obtain mappings using the interactive interface. Search results can be viewed in simple or detailed HTML tables or downloaded as comma-separated values (CSV) or Microsoft Excel (XLS) files suitable for use in a local database or a spreadsheet. Alternatively, a SOAP interface is available to integrate PICR functionality in other applications, as is a lightweight REST interface. Conclusion We offer a publicly available service that can interactively map protein identifiers and protein sequences to the majority of commonly used protein databases. Programmatic access is available through a standards-compliant SOAP interface or a lightweight REST interface. The PICR

  16. Update History of This Database - PLACE | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available PLACE Update History of This Database. Date / Update contents: 2016/08/22 The contact address is changed. 2014/10/20 The URLs of the database maintenance site and the portal site are changed. 2014/07/17 PLACE English archi...

  17. Database Publication Practices

    DEFF Research Database (Denmark)

    Bernstein, P.A.; DeWitt, D.; Heuer, A.

    2005-01-01

    There has been a growing interest in improving the publication processes for database research papers. This panel reports on recent changes in those processes and presents an initial cut at historical data for the VLDB Journal and ACM Transactions on Database Systems.

  18. UnoViS: the MedIT public unobtrusive vital signs database.

    Science.gov (United States)

    Wartzek, Tobias; Czaplik, Michael; Antink, Christoph Hoog; Eilebrecht, Benjamin; Walocha, Rafael; Leonhardt, Steffen

    2015-01-01

    While PhysioNet is a large database of standard clinical vital signs measurements, no such database exists for unobtrusively measured signals. This inhibits progress in the vital area of signal processing for unobtrusive medical monitoring, as not everybody owns the specific measurement systems needed to acquire the signals; furthermore, without a common database, a comparison between different signal processing approaches is not possible. This gap is closed by our UnoViS database. It contains different recordings in various scenarios, ranging from a clinical study to measurements obtained while driving a car. Currently, 145 records with a total of 16.2 h of measurement data are available, provided as MATLAB files or in the PhysioNet WFDB file format. In its initial state, it includes only (multichannel) capacitive ECG and unobtrusive PPG signals, together with a reference ECG. All ECG signals contain annotations by a peak detector and by a medical expert, and a dataset from a clinical study contains further clinical annotations. Additionally, supplementary functions are provided which simplify the usage of the database and thus the development and evaluation of new algorithms. The development of urgently needed methods for very robust parameter extraction or robust signal fusion in view of frequent severe motion artifacts in unobtrusive monitoring is now possible with the database.
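
    A minimal sketch of reading a WFDB-format record with the wfdb Python package; the record name and annotation extension below are hypothetical, as the actual UnoViS naming is not given here, and paths depend on where the database download is unpacked.

```python
# Minimal sketch: reading a WFDB record and its annotations with `wfdb`.
# Record name and annotation extension are hypothetical placeholders.
import wfdb

record = wfdb.rdrecord("unovis/clinical_001")    # hypothetical record name
print(record.sig_name, record.fs)                # channel names, sampling rate
cecg = record.p_signal[:, 0]                     # e.g. first capacitive ECG channel

ann = wfdb.rdann("unovis/clinical_001", "atr")   # peak/expert annotations, if present
print(len(ann.sample), "annotated samples")
```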

  19. Asynchronous data change notification between database server and accelerator controls system

    International Nuclear Information System (INIS)

    Fu, W.; Morris, J.; Nemesure, S.

    2011-01-01

    Database data change notification (DCN) is a commonly used feature, but not all database management systems (DBMS) provide an explicit DCN mechanism. Even for those DBMSs which support DCN (such as Oracle and MS SQL Server), some server-side and/or client-side programming may be required to make the DCN system work, which makes the setup of DCN between a database server and interested clients tedious and time consuming. In accelerator control systems, there are many well-established client/server software architectures (such as CDEV, EPICS, and ADO) that can be used to implement data reflection servers that transfer data asynchronously to any client using the standard SET/GET API. This paper describes a method for using such a data reflection server to set up asynchronous DCN (ADCN) between a DBMS and clients. The method works for all DBMS systems that provide database trigger functionality: combining a database trigger mechanism, supported by the major DBMSs, with server processes built on the client/server architectures familiar in the accelerator controls community makes the ADCN system easy to set up and integrate into an accelerator controls system. Several ADCN systems have been set up and used in the RHIC-AGS controls system.
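
    A minimal sketch of the trigger-plus-notification pattern, using PostgreSQL's LISTEN/NOTIFY through psycopg2 purely for illustration (the paper's systems target Oracle-class DBMSs and a CDEV/EPICS/ADO reflection server). The table and channel names are hypothetical, and EXECUTE FUNCTION requires PostgreSQL 11 or later.

```python
# Minimal sketch: a trigger fires pg_notify on every change; a listening
# process (standing in for the reflection server) forwards the payloads.
import select
import psycopg2

conn = psycopg2.connect("dbname=controls")       # hypothetical database
conn.autocommit = True
cur = conn.cursor()
cur.execute("""
    CREATE OR REPLACE FUNCTION notify_change() RETURNS trigger AS $$
    BEGIN
        PERFORM pg_notify('dcn_channel', NEW.name || '=' || NEW.value);
        RETURN NEW;
    END $$ LANGUAGE plpgsql;
    CREATE TRIGGER setpoints_dcn AFTER INSERT OR UPDATE ON setpoints
        FOR EACH ROW EXECUTE FUNCTION notify_change();
""")

cur.execute("LISTEN dcn_channel")                # the reflection server side
while select.select([conn], [], [], 5.0)[0]:     # wait up to 5 s for changes
    conn.poll()
    while conn.notifies:
        note = conn.notifies.pop(0)
        print("change:", note.payload)           # forward to SET/GET clients
```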

  20. Evaluation of normalization methods in mammalian microRNA-Seq data

    Science.gov (United States)

    Garmire, Lana Xia; Subramaniam, Shankar

    2012-01-01

    Simple total tag count normalization is inadequate for microRNA sequencing data generated from next generation sequencing technology. However, systematic evaluation of normalization methods on microRNA sequencing data has so far been lacking. We comprehensively evaluate seven commonly used normalization methods, including global normalization, Lowess normalization, the Trimmed Mean Method (TMM), quantile normalization, scaling normalization, variance stabilization, and the invariant method. We assess these methods on two individual experimental data sets with the empirical statistical metrics of mean square error (MSE) and the Kolmogorov-Smirnov (K-S) statistic. Additionally, we evaluate the methods against results from quantitative PCR validation. Our results consistently show that Lowess normalization and quantile normalization perform best, whereas TMM, a method developed for RNA-Seq normalization, performs worst. The poor performance of TMM normalization is further evidenced by abnormal results from the test of differential expression (DE) of microRNA-Seq data. Compared with the choice of model used for DE, the choice of normalization method is the primary factor affecting the DE results. In summary, Lowess normalization and quantile normalization are recommended for normalizing microRNA-Seq data, whereas the TMM method should be used with caution. PMID:22532701
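
    Quantile normalization, one of the two methods the study recommends, can be sketched as follows (ties are ignored for brevity): each sample is forced to share a common distribution given by the mean of the sorted columns.

```python
# Minimal sketch of quantile normalization for a counts matrix.
import numpy as np

def quantile_normalize(counts):
    # counts: (n_mirnas, n_samples) matrix
    order = np.argsort(counts, axis=0)            # per-sample ranks
    reference = np.sort(counts, axis=0).mean(axis=1)   # common distribution
    out = np.empty_like(counts, dtype=float)
    for j in range(counts.shape[1]):
        out[order[:, j], j] = reference           # assign means back by rank
    return out

x = np.array([[5., 4., 3.], [2., 1., 4.], [3., 4., 6.], [4., 2., 8.]])
print(quantile_normalize(x))
```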

  1. BAPA Database: Linking landslide occurrence with rainfall in Asturias (Spain)

    Science.gov (United States)

    Valenzuela, Pablo; José Domínguez-Cuesta, María; Jiménez-Sánchez, Montserrat

    2015-04-01

    Asturias is a region in northern Spain with a temperate and humid climate. In this region, slope instability processes are very common and often cause economic losses and, occasionally, human casualties. To mitigate the geological risk involved, it is of great interest to predict the spatial and temporal occurrence of landslides. Previous investigations have shown the importance of rainfall as a trigger factor. Despite the high incidence of these phenomena in Asturias, there are no databases of recent and present-day landslides. The BAPA Project (Base de Datos de Argayos del Principado de Asturias - Principality of Asturias Landslide Database) aims to create an inventory of slope instabilities which have occurred between 1980 and 2015. The final goal is to study in detail the relationship between rainfall and slope instabilities in Asturias, establishing the precipitation thresholds and soil moisture conditions necessary for instability triggering. This work presents the progress of the database, showing its structure divided into various fields that essentially contain spatial, temporal, geomorphological and damage information.

  2. KERNEL MAD ALGORITHM FOR RELATIVE RADIOMETRIC NORMALIZATION

    Directory of Open Access Journals (Sweden)

    Y. Bai

    2016-06-01

    Full Text Available The multivariate alteration detection (MAD) algorithm is commonly used in relative radiometric normalization. This algorithm is based on linear canonical correlation analysis (CCA), which can analyze only linear relationships among bands. Therefore, we first introduce a new version of MAD in this study based on the established method known as kernel canonical correlation analysis (KCCA). The proposed method effectively extracts the non-linear and complex relationships among variables. We then conduct relative radiometric normalization experiments with both the linear CCA and the KCCA versions of the MAD algorithm, using Landsat-8 data of Beijing, China, and Gaofen-1 (GF-1) data acquired over South China. Finally, we analyze the difference between the two methods. Results show that the KCCA-based MAD can be satisfactorily applied to relative radiometric normalization and describes well the nonlinear relationship between multi-temporal images. This work is the first attempt to apply a KCCA-based MAD algorithm to relative radiometric normalization.
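
    The linear MAD core that the paper kernelizes can be sketched with scikit-learn's (linear) CCA: canonical variates of the two acquisitions are differenced to give MAD components, and low-change pixels then calibrate a per-band linear normalization. Replacing the CCA step with kernel CCA gives the paper's variant. The data below are simulated, and the no-change threshold is an arbitrary choice.

```python
# Minimal sketch of linear MAD followed by regression on candidate
# invariant (low-change) pixels. Simulated data; illustrative threshold.
import numpy as np
from sklearn.cross_decomposition import CCA

def mad_variates(img1, img2, n_components=3):
    # img1, img2: (n_pixels, n_bands) arrays from the two dates
    cca = CCA(n_components=n_components).fit(img1, img2)
    u, v = cca.transform(img1, img2)
    mad = u - v                                   # MAD components
    chi2 = ((mad / mad.std(axis=0)) ** 2).sum(axis=1)
    return mad, chi2                              # small chi2 -> likely no change

rng = np.random.default_rng(0)
img1 = rng.normal(size=(1000, 3))
img2 = 0.8 * img1 + 0.1 * rng.normal(size=(1000, 3))   # simulated second date
mad, chi2 = mad_variates(img1, img2)
stable = chi2 < np.percentile(chi2, 25)           # candidate invariant pixels
coef = [np.polyfit(img1[stable, b], img2[stable, b], 1) for b in range(3)]
print(coef)                                       # per-band gain and offset
```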

  3. Toward Customer-Centric Organizational Science: A Common Language Effect Size Indicator for Multiple Linear Regressions and Regressions With Higher-Order Terms.

    Science.gov (United States)

    Krasikova, Dina V; Le, Huy; Bachura, Eric

    2018-01-22

    To address a long-standing concern regarding a gap between organizational science and practice, scholars called for more intuitive and meaningful ways of communicating research results to users of academic research. In this article, we develop a common language effect size index (CLβ) that can help translate research results to practice. We demonstrate how CLβ can be computed and used to interpret the effects of continuous and categorical predictors in multiple linear regression models. We also elaborate on how the proposed CLβ index is computed and used to interpret interactions and nonlinear effects in regression models. In addition, we test the robustness of the proposed index to violations of normality and provide means for computing standard errors and constructing confidence intervals around its estimates. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
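
    For orientation, the classic two-group common language effect size of McGraw and Wong, which CLβ generalizes to regression coefficients, can be sketched as follows; this is only the classic special case, not the paper's index.

```python
# Minimal sketch of the classic common language effect size: the probability
# that a random draw from one group exceeds a random draw from the other.
import numpy as np
from scipy.stats import norm

def common_language_es(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    d = (x.mean() - y.mean()) / np.sqrt(x.var(ddof=1) + y.var(ddof=1))
    return norm.cdf(d)            # P(X > Y) under normality

a = np.random.default_rng(1).normal(1.0, 1.0, 200)
b = np.random.default_rng(2).normal(0.0, 1.0, 200)
print(round(common_language_es(a, b), 3))   # roughly 0.76 for these samples
```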

  4. The MAR databases: development and implementation of databases specific for marine metagenomics.

    Science.gov (United States)

    Klemetsen, Terje; Raknes, Inge A; Fu, Juan; Agafonov, Alexander; Balasundaram, Sudhagar V; Tartari, Giacomo; Robertsen, Espen; Willassen, Nils P

    2018-01-04

    We introduce the marine databases MarRef, MarDB and MarCat (https://mmp.sfb.uit.no/databases/), which are publicly available resources that promote marine research and innovation. These data resources, which have been implemented in the Marine Metagenomics Portal (MMP) (https://mmp.sfb.uit.no/), are collections of richly annotated and manually curated contextual (metadata) and sequence databases representing three tiers of accuracy. While MarRef is a database of completely sequenced marine prokaryotic genomes, representing a marine prokaryote reference genome database, MarDB includes all incompletely sequenced marine prokaryotic genomes, regardless of their level of completeness. The last database, MarCat, represents a gene (protein) catalog of uncultivable (and cultivable) marine genes and proteins derived from marine metagenomics samples. The first versions of MarRef and MarDB contain 612 and 3726 records, respectively. Each record is built up of 106 metadata fields, including attributes for sampling, sequencing, assembly and annotation, in addition to organism and taxonomic information. Currently, MarCat contains 1227 records with 55 metadata fields. Ontologies and controlled vocabularies are used in the contextual databases to enhance consistency. The user-friendly web interface lets visitors browse, filter and search the contextual databases and perform BLAST searches against the corresponding sequence databases. All contextual and sequence databases are freely accessible and downloadable from https://s1.sfb.uit.no/public/mar/. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  5. Update History of This Database - DMPD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available DMPD Update History of This Database. Date / Update contents: 2010/03/29 DMPD English archive site ( ...jp/macrophage/ ) is released.

  6. Common variants in Mendelian kidney disease genes and their association with renal function.

    Science.gov (United States)

    Parsa, Afshin; Fuchsberger, Christian; Köttgen, Anna; O'Seaghdha, Conall M; Pattaro, Cristian; de Andrade, Mariza; Chasman, Daniel I; Teumer, Alexander; Endlich, Karlhans; Olden, Matthias; Chen, Ming-Huei; Tin, Adrienne; Kim, Young J; Taliun, Daniel; Li, Man; Feitosa, Mary; Gorski, Mathias; Yang, Qiong; Hundertmark, Claudia; Foster, Meredith C; Glazer, Nicole; Isaacs, Aaron; Rao, Madhumathi; Smith, Albert V; O'Connell, Jeffrey R; Struchalin, Maksim; Tanaka, Toshiko; Li, Guo; Hwang, Shih-Jen; Atkinson, Elizabeth J; Lohman, Kurt; Cornelis, Marilyn C; Johansson, Asa; Tönjes, Anke; Dehghan, Abbas; Couraki, Vincent; Holliday, Elizabeth G; Sorice, Rossella; Kutalik, Zoltan; Lehtimäki, Terho; Esko, Tõnu; Deshmukh, Harshal; Ulivi, Sheila; Chu, Audrey Y; Murgia, Federico; Trompet, Stella; Imboden, Medea; Kollerits, Barbara; Pistis, Giorgio; Harris, Tamara B; Launer, Lenore J; Aspelund, Thor; Eiriksdottir, Gudny; Mitchell, Braxton D; Boerwinkle, Eric; Schmidt, Helena; Hofer, Edith; Hu, Frank; Demirkan, Ayse; Oostra, Ben A; Turner, Stephen T; Ding, Jingzhong; Andrews, Jeanette S; Freedman, Barry I; Giulianini, Franco; Koenig, Wolfgang; Illig, Thomas; Döring, Angela; Wichmann, H-Erich; Zgaga, Lina; Zemunik, Tatijana; Boban, Mladen; Minelli, Cosetta; Wheeler, Heather E; Igl, Wilmar; Zaboli, Ghazal; Wild, Sarah H; Wright, Alan F; Campbell, Harry; Ellinghaus, David; Nöthlings, Ute; Jacobs, Gunnar; Biffar, Reiner; Ernst, Florian; Homuth, Georg; Kroemer, Heyo K; Nauck, Matthias; Stracke, Sylvia; Völker, Uwe; Völzke, Henry; Kovacs, Peter; Stumvoll, Michael; Mägi, Reedik; Hofman, Albert; Uitterlinden, Andre G; Rivadeneira, Fernando; Aulchenko, Yurii S; Polasek, Ozren; Hastie, Nick; Vitart, Veronique; Helmer, Catherine; Wang, Jie Jin; Stengel, Bénédicte; Ruggiero, Daniela; Bergmann, Sven; Kähönen, Mika; Viikari, Jorma; Nikopensius, Tiit; Province, Michael; Colhoun, Helen; Doney, Alex; Robino, Antonietta; Krämer, Bernhard K; Portas, Laura; Ford, Ian; Buckley, Brendan M; Adam, Martin; Thun, Gian-Andri; Paulweber, Bernhard; Haun, Margot; Sala, Cinzia; Mitchell, Paul; Ciullo, Marina; Vollenweider, Peter; Raitakari, Olli; Metspalu, Andres; Palmer, Colin; Gasparini, Paolo; Pirastu, Mario; Jukema, J Wouter; Probst-Hensch, Nicole M; Kronenberg, Florian; Toniolo, Daniela; Gudnason, Vilmundur; Shuldiner, Alan R; Coresh, Josef; Schmidt, Reinhold; Ferrucci, Luigi; van Duijn, Cornelia M; Borecki, Ingrid; Kardia, Sharon L R; Liu, Yongmei; Curhan, Gary C; Rudan, Igor; Gyllensten, Ulf; Wilson, James F; Franke, Andre; Pramstaller, Peter P; Rettig, Rainer; Prokopenko, Inga; Witteman, Jacqueline; Hayward, Caroline; Ridker, Paul M; Bochud, Murielle; Heid, Iris M; Siscovick, David S; Fox, Caroline S; Kao, W Linda; Böger, Carsten A

    2013-12-01

    Many common genetic variants identified by genome-wide association studies for complex traits map to genes previously linked to rare inherited Mendelian disorders. A systematic analysis of common single-nucleotide polymorphisms (SNPs) in genes responsible for Mendelian diseases with kidney phenotypes has not been performed. We thus developed a comprehensive database of genes for Mendelian kidney conditions and evaluated the association between common genetic variants within these genes and kidney function in the general population. Using the Online Mendelian Inheritance in Man database, we identified 731 unique disease entries related to specific renal search terms and confirmed a kidney phenotype in 218 of these entries, corresponding to mutations in 258 genes. We interrogated common SNPs (minor allele frequency >5%) within these genes for association with the estimated GFR in 74,354 European-ancestry participants from the CKDGen Consortium. However, the top four candidate SNPs (rs6433115 at LRP2, rs1050700 at TSC1, rs249942 at PALB2, and rs9827843 at ROBO2) did not achieve significance in a stage 2 meta-analysis performed in 56,246 additional independent individuals, indicating that these common SNPs are not associated with estimated GFR. The effect of less common or rare variants in these genes on kidney function in the general population and disease-specific cohorts requires further research.

  7. Examining database persistence of ISO/EN 13606 standardized electronic health record extracts: relational vs. NoSQL approaches.

    Science.gov (United States)

    Sánchez-de-Madariaga, Ricardo; Muñoz, Adolfo; Lozano-Rubí, Raimundo; Serrano-Balazote, Pablo; Castro, Antonio L; Moreno, Oscar; Pascual, Mario

    2017-08-18

    The objective of this research is to compare relational and non-relational (NoSQL) database systems for storing, recovering, querying and persisting standardized medical information in the form of ISO/EN 13606 normalized Electronic Health Record XML extracts, both in isolation and concurrently. NoSQL database systems have recently attracted much attention, but few studies in the literature address their direct comparison with relational databases when applied to build the persistence layer of a standardized medical information system. One relational and two NoSQL databases (one document-based and one native XML database) of three different sizes were created in order to evaluate and compare the response times (algorithmic complexity) of six queries of growing complexity, which were performed on them. Comparable results available in the literature were also considered. Both the relational and the non-relational (NoSQL) database systems show almost linear query execution times, but with very different slopes, the relational one being much steeper than those of the two NoSQL systems. Document-based NoSQL databases perform better under concurrency than in isolation, and also better than relational databases under concurrency. Non-relational NoSQL databases seem to be more appropriate than standard relational SQL databases when database size is extremely high (secondary use, research applications). Document-based NoSQL databases perform in general better than native XML NoSQL databases. Visualization and editing of EHR extracts are also document-based tasks better suited to NoSQL database systems. However, the appropriate database solution depends greatly on each particular situation and specific problem.

  8. Development of a personalized training system using the Lung Image Database Consortium and Image Database resource Initiative Database.

    Science.gov (United States)

    Lin, Hongli; Wang, Weisheng; Luo, Jiawei; Yang, Xuedong

    2014-12-01

    The aim of this study was to develop a personalized training system using the Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI) database. Collecting, annotating, and marking a large number of appropriate computed tomography (CT) scans, and providing the capability of dynamically selecting suitable training cases based on the performance levels of trainees and the characteristics of cases, are critical for developing an efficient training system. A novel approach is proposed to develop a personalized radiology training system for the interpretation of lung nodules in CT scans using the LIDC/IDRI database, which provides a Content-Boosted Collaborative Filtering (CBCF) algorithm for predicting the difficulty level of each case for each trainee when selecting suitable cases to meet individual needs, and a diagnostic simulation tool that enables trainees to analyze and diagnose lung nodules with the help of an image processing tool and a nodule retrieval tool. Preliminary evaluation of the system shows that developing a personalized training system for the interpretation of lung nodules is needed and useful for enhancing the professional skills of trainees. The approach of developing personalized training systems using the LIDC/IDRI database is a feasible solution to the challenges of constructing specific training programs in terms of cost and training efficiency. Copyright © 2014 AUR. Published by Elsevier Inc. All rights reserved.

  9. Update History of This Database - KOME | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available KOME Update History of This Database. Date / Update contents: 2014/10/22 The URL of the whole da...site is opened. 2003/07/18 KOME ( http://cdna01.dna.affrc.go.jp/cDNA/ ) is opened.

  10. Update History of This Database - PSCDB | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available PSCDB Update History of This Database. Date / Update contents: 2016/11/30 PSCDB English archive site is opened. 2011/11/13 PSCDB ( http://idp1.force.cs.is.nagoya-u.ac.jp/pscdb/ ) is opened.

  11. Mycobacteriophage genome database.

    Science.gov (United States)

    Joseph, Jerrine; Rajendran, Vasanthi; Hassan, Sameer; Kumar, Vanaja

    2011-01-01

    Mycobacteriophage genome database (MGDB) is an exclusive repository of the 64 completely sequenced mycobacteriophages with annotated information. It is a comprehensive compilation of the various gene parameters captured from several databases, pooled together to empower mycobacteriophage researchers. The MGDB (Version No. 1.0) comprises 6086 genes from 64 mycobacteriophages, classified into 72 families based on the ACLAME database. Manual curation was aided by information available from public databases, enriched further by analysis. Its web interface allows browsing as well as querying the classification. The main objective is to collect and organize the complexity inherent in mycobacteriophage protein classification in a rational way. The other objective is to browse the existing and new genomes and describe their functional annotation. The database is available for free at http://mpgdb.ibioinformatics.org/mpgdb.php.

  12. Logical database design principles

    CERN Document Server

    Garmany, John; Clark, Terry

    2005-01-01

    INTRODUCTION TO LOGICAL DATABASE DESIGN: Understanding a Database; Database Architectures; Relational Databases; Creating the Database; System Development Life Cycle (SDLC); Systems Planning: Assessment and Feasibility; System Analysis: Requirements; System Analysis: Requirements Checklist; Models Tracking and Schedules; Design Modeling; Functional Decomposition Diagram; Data Flow Diagrams; Data Dictionary; Logical Structures and Decision Trees; System Design: Logical. SYSTEM DESIGN AND IMPLEMENTATION: The ER Approach; Entities and Entity Types; Attribute Domains; Attributes; Set-Valued Attributes; Weak Entities; Constraint

  13. New mutations and an updated database for the patched-1 (PTCH1) gene.

    Science.gov (United States)

    Reinders, Marie G; van Hout, Antonius F; Cosgun, Betûl; Paulussen, Aimée D; Leter, Edward M; Steijlen, Peter M; Mosterd, Klara; van Geel, Michel; Gille, Johan J

    2018-05-01

    Basal cell nevus syndrome (BCNS) is an autosomal dominant disorder characterized by multiple basal cell carcinomas (BCCs), maxillary keratocysts, and cerebral calcifications. BCNS is most commonly caused by a germline mutation in the patched-1 (PTCH1) gene; PTCH1 mutations are also described in patients with holoprosencephaly. We have established a locus-specific database for the PTCH1 gene using the Leiden Open Variation Database (LOVD). In addition to 331 previously published unique PTCH1 mutations, the database includes 117 new PTCH1 variations, found in 141 patients who had a positive PTCH1 mutation analysis in either the VU University Medical Centre (VUMC) or Maastricht University Medical Centre (MUMC) between 1995 and 2015. The database provides an open collection for both clinicians and researchers and is accessible online at http://www.lovd.nl/PTCH1. © 2018 The Authors. Molecular Genetics & Genomic Medicine published by Wiley Periodicals, Inc.

  14. A database for TMT interface control documents

    Science.gov (United States)

    Gillies, Kim; Roberts, Scott; Brighton, Allan; Rogers, John

    2016-08-01

    The TMT Software System consists of software components that interact with one another through a software infrastructure called TMT Common Software (CSW). CSW consists of software services and library code that is used by developers to create the subsystems and components that participate in the software system. CSW also defines the types of components that can be constructed and their roles. The use of common component types and shared middleware services allows standardized software interfaces for the components. A software system called the TMT Interface Database System was constructed to support the documentation of the interfaces for components based on CSW. The programmer describes a subsystem and each of its components using JSON-style text files. A command interface file describes each command a component can receive and any commands a component sends. The event interface files describe status, alarms, and events a component publishes and status and events subscribed to by a component. A web application was created to provide a user interface for the required features. Files are ingested into the software system's database. The user interface allows browsing subsystem interfaces, publishing versions of subsystem interfaces, and constructing and publishing interface control documents that consist of the intersection of two subsystem interfaces. All published subsystem interfaces and interface control documents are versioned for configuration control and follow the standard TMT change control processes. Subsystem interfaces and interface control documents can be visualized in the browser or exported as PDF files.

  15. Specialist Bibliographic Databases.

    Science.gov (United States)

    Gasparyan, Armen Yuri; Yessirkepov, Marlen; Voronov, Alexander A; Trukhachev, Vladimir I; Kostyukova, Elena I; Gerasimov, Alexey N; Kitas, George D

    2016-05-01

    Specialist bibliographic databases offer essential online tools for researchers and authors who work on specific subjects and perform comprehensive and systematic syntheses of evidence. This article presents examples of the established specialist databases, which may be of interest to those engaged in multidisciplinary science communication. Access to most specialist databases is through subscription schemes and membership in professional associations. Several aggregators of information and database vendors, such as EBSCOhost and ProQuest, facilitate advanced searches supported by specialist keyword thesauri. Searches of items through specialist databases are complementary to those through multidisciplinary research platforms, such as PubMed, Web of Science, and Google Scholar. Familiarizing oneself with the functional characteristics of biomedical and nonbiomedical bibliographic search tools is essential for researchers, authors, editors, and publishers. The database users are offered updates of the indexed journal lists, abstracts, author profiles, and links to other metadata. Editors and publishers may find particularly useful the source selection criteria, and may apply for coverage of their peer-reviewed journals and grey literature sources. These criteria are aimed at accepting relevant sources with established editorial policies and quality controls.

  16. Specialist Bibliographic Databases

    Science.gov (United States)

    2016-01-01

    Specialist bibliographic databases offer essential online tools for researchers and authors who work on specific subjects and perform comprehensive and systematic syntheses of evidence. This article presents examples of the established specialist databases, which may be of interest to those engaged in multidisciplinary science communication. Access to most specialist databases is through subscription schemes and membership in professional associations. Several aggregators of information and database vendors, such as EBSCOhost and ProQuest, facilitate advanced searches supported by specialist keyword thesauri. Searches of items through specialist databases are complementary to those through multidisciplinary research platforms, such as PubMed, Web of Science, and Google Scholar. Familiarizing oneself with the functional characteristics of biomedical and nonbiomedical bibliographic search tools is essential for researchers, authors, editors, and publishers. The database users are offered updates of the indexed journal lists, abstracts, author profiles, and links to other metadata. Editors and publishers may find particularly useful the source selection criteria, and may apply for coverage of their peer-reviewed journals and grey literature sources. These criteria are aimed at accepting relevant sources with established editorial policies and quality controls. PMID:27134485

  17. Directory of IAEA databases

    International Nuclear Information System (INIS)

    1992-12-01

    This second edition of the Directory of IAEA Databases has been prepared within the Division of Scientific and Technical Information (NESI). Its main objective is to describe the computerized information sources available to staff members. The directory covers all databases produced at the IAEA, including databases stored on the mainframe, LANs and PCs. All IAEA Division Directors have been requested to register the existence of their databases with NESI. For the second edition, database owners were asked to review the existing entries for their databases and to answer four additional questions, concerning the type of database (e.g. bibliographic, text, statistical, etc.), the category of database (e.g. administrative, nuclear data, etc.), the available documentation, and the type of media used for distribution. In the individual entries on the following pages the answers to the first two questions (type and category) are always listed, but the answers to the other two (documentation and media) are listed only when the information has been made available

  18. Improving the dictionary lookup approach for disease normalization using enhanced dictionary and query expansion

    Science.gov (United States)

    Jonnagaddala, Jitendra; Jue, Toni Rose; Chang, Nai-Wen; Dai, Hong-Jie

    2016-01-01

    The rapidly increasing biomedical literature calls for an automatic approach to the recognition and normalization of disease mentions in order to increase the precision and effectiveness of disease-based information retrieval. A variety of methods have been proposed to deal with the problem of disease named entity recognition and normalization. Among them, conditional random fields (CRFs) and dictionary lookup are widely used for named entity recognition and normalization, respectively. We herein developed a CRF-based model to allow automated recognition of disease mentions, and studied the effect of various techniques in improving the normalization results based on the dictionary lookup approach. The dataset from the BioCreative V CDR track was used to report the performance of the developed normalization methods and to compare them with other existing dictionary-lookup-based normalization methods. The best configuration achieved an F-measure of 0.77 for disease normalization, outperforming the best dictionary-lookup-based baseline method studied in this work by an F-measure of 0.13. Database URL: https://github.com/TCRNBioinformatics/DiseaseExtract PMID:27504009
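
    The dictionary lookup approach in this record lends itself to a compact illustration. The sketch below is not the authors' system; the dictionary contents, helper names, and expansion rules are invented to show the general normalize-then-look-up pattern, with the paper's enhanced dictionary and query expansion reduced to toy form.

```python
# Minimal sketch of dictionary-lookup disease normalization with simple
# query expansion. Dictionary contents and helpers are illustrative only.
import re

# Toy dictionary mapping surface forms to concept IDs (e.g., MeSH).
MESH_DICT = {
    "atrial fibrillation": "D001281",
    "renal cell cancer": "D002292",
    "renal cell carcinoma": "D002292",
}

def expand_query(mention: str) -> list[str]:
    """Generate normalized variants of a mention (toy query expansion)."""
    m = mention.lower().strip()
    m = re.sub(r"[^\w\s]", " ", m)       # drop punctuation
    m = re.sub(r"\s+", " ", m).strip()   # collapse whitespace
    variants = [m]
    if m.endswith("s"):                  # naive singularization
        variants.append(m[:-1])
    variants.append(m.replace("carcinoma", "cancer"))  # synonym swap
    return variants

def normalize(mention: str) -> str | None:
    """Return the concept ID for the first matching variant, if any."""
    for v in expand_query(mention):
        if v in MESH_DICT:
            return MESH_DICT[v]
    return None

print(normalize("Renal-cell carcinoma"))  # -> D002292
```

    In a real system the CRF recognizer would supply the mentions and the dictionary would be orders of magnitude larger; the lookup-after-expansion flow stays the same.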

  19. Update History of This Database - SAHG | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Update History of This Database: 2016/05/09, SAHG English archive site is opened; 2009/10, SAHG ( http://bird.cbrc.jp/sahg ) is opened.

  20. Update History of This Database - RMOS | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Update History of This Database: 2015/10/27, RMOS English archive site is opened; ...12, RMOS (http://cdna01.dna.affrc.go.jp/RMOS/) is opened.

  1. NCBI2RDF: Enabling Full RDF-Based Access to NCBI Databases

    Directory of Open Access Journals (Sweden)

    Alberto Anguita

    2013-01-01

    Full Text Available RDF has become the standard technology for enabling interoperability among heterogeneous biomedical databases. The NCBI provides access to a large set of life sciences databases through a common interface called Entrez. However, the latter does not provide RDF-based access to such databases, and, therefore, they cannot be integrated with other RDF-compliant databases and accessed via SPARQL query interfaces. This paper presents the NCBI2RDF system, aimed at providing RDF-based access to the complete NCBI data repository. This API creates a virtual endpoint for servicing SPARQL queries over different NCBI repositories and presenting to users the query results in SPARQL results format, thus enabling this data to be integrated and/or stored with other RDF-compliant repositories. SPARQL queries are dynamically resolved, decomposed, and forwarded to the NCBI-provided E-utilities programmatic interface to access the NCBI data. Furthermore, we show how our approach increases the expressiveness of the native NCBI querying system, allowing several databases to be accessed simultaneously. This feature significantly boosts productivity when working with complex queries and saves time and effort to biomedical researchers. Our approach has been validated with a large number of SPARQL queries, thus proving its reliability and enhanced capabilities in biomedical environments.
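
    To make the forwarding step concrete: after NCBI2RDF decomposes a SPARQL query, the resulting fragments are serviced through the NCBI E-utilities. A minimal sketch of such a call is shown below, assuming the public ESearch endpoint and its JSON response layout; the mapping from SPARQL triple patterns to the search term is invented for illustration and is not taken from the paper.

```python
# Sketch of the kind of request a decomposed SPARQL fragment might be
# forwarded to: a plain NCBI E-utilities ESearch call.
import json
import urllib.parse
import urllib.request

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def esearch(db: str, term: str, retmax: int = 5) -> list[str]:
    """Return NCBI record IDs matching `term` in database `db`."""
    url = (f"{EUTILS}?db={db}&term={urllib.parse.quote(term)}"
           f"&retmax={retmax}&retmode=json")
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    return data["esearchresult"]["idlist"]

# e.g., a SPARQL fragment asking for PubMed articles on fumarate hydratase
# might, after decomposition, resolve to:
print(esearch("pubmed", "fumarate hydratase"))
```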

  2. NCBI2RDF: Enabling Full RDF-Based Access to NCBI Databases

    Science.gov (United States)

    Anguita, Alberto; García-Remesal, Miguel; de la Iglesia, Diana; Maojo, Victor

    2013-01-01

    RDF has become the standard technology for enabling interoperability among heterogeneous biomedical databases. The NCBI provides access to a large set of life sciences databases through a common interface called Entrez. However, the latter does not provide RDF-based access to such databases, and, therefore, they cannot be integrated with other RDF-compliant databases and accessed via SPARQL query interfaces. This paper presents the NCBI2RDF system, aimed at providing RDF-based access to the complete NCBI data repository. This API creates a virtual endpoint for servicing SPARQL queries over different NCBI repositories and presenting to users the query results in SPARQL results format, thus enabling this data to be integrated and/or stored with other RDF-compliant repositories. SPARQL queries are dynamically resolved, decomposed, and forwarded to the NCBI-provided E-utilities programmatic interface to access the NCBI data. Furthermore, we show how our approach increases the expressiveness of the native NCBI querying system, allowing several databases to be accessed simultaneously. This feature significantly boosts productivity when working with complex queries and saves time and effort to biomedical researchers. Our approach has been validated with a large number of SPARQL queries, thus proving its reliability and enhanced capabilities in biomedical environments. PMID:23984425

  3. NCBI2RDF: enabling full RDF-based access to NCBI databases.

    Science.gov (United States)

    Anguita, Alberto; García-Remesal, Miguel; de la Iglesia, Diana; Maojo, Victor

    2013-01-01

    RDF has become the standard technology for enabling interoperability among heterogeneous biomedical databases. The NCBI provides access to a large set of life sciences databases through a common interface called Entrez. However, the latter does not provide RDF-based access to such databases, and, therefore, they cannot be integrated with other RDF-compliant databases and accessed via SPARQL query interfaces. This paper presents the NCBI2RDF system, aimed at providing RDF-based access to the complete NCBI data repository. This API creates a virtual endpoint for servicing SPARQL queries over different NCBI repositories and presenting to users the query results in SPARQL results format, thus enabling this data to be integrated and/or stored with other RDF-compliant repositories. SPARQL queries are dynamically resolved, decomposed, and forwarded to the NCBI-provided E-utilities programmatic interface to access the NCBI data. Furthermore, we show how our approach increases the expressiveness of the native NCBI querying system, allowing several databases to be accessed simultaneously. This feature significantly boosts productivity when working with complex queries and saves time and effort to biomedical researchers. Our approach has been validated with a large number of SPARQL queries, thus proving its reliability and enhanced capabilities in biomedical environments.

  4. The World Bacterial Biogeography and Biodiversity through Databases: A Case Study of NCBI Nucleotide Database and GBIF Database

    Directory of Open Access Journals (Sweden)

    Okba Selama

    2013-01-01

    Full Text Available Databases are an essential tool and resource within the field of bioinformatics. The primary aim of this study was to generate an overview of global bacterial biodiversity and biogeography using available data from the two largest public online databases, NCBI Nucleotide and GBIF. The secondary aim was to highlight the contribution each geographic area has to each database. The basis for data analysis of this study was the metadata provided by both databases, mainly the taxonomy and the geographical area of origin of isolation of the microorganism (record). These were directly obtained from GBIF through the online interface, while E-utilities and Python were used in combination with programmatic web service access to obtain data from the NCBI Nucleotide Database. Results indicate that the American continent, and more specifically the USA, is the top contributor, while Africa and Antarctica are less well represented. This highlights the imbalance of exploration within these areas rather than any reduction in biodiversity. This study describes a novel approach to generating global-scale patterns of bacterial biodiversity and biogeography and indicates that the Proteobacteria are the most abundant and widely distributed phylum within both databases.
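
    The per-country tallying described here can be approximated against GBIF's public occurrence API. The sketch below assumes the v1 endpoint and its count field as publicly documented; it illustrates only the counting step, not the study's actual pipeline.

```python
# Rough sketch of per-country record counting against the public GBIF API.
# Endpoint and JSON field names reflect the documented GBIF v1 API; treat
# them as assumptions to verify before use.
import json
import urllib.request

def gbif_count(country_code: str) -> int:
    """Number of GBIF occurrence records for an ISO 3166-1 alpha-2 country."""
    url = f"https://api.gbif.org/v1/occurrence/search?country={country_code}&limit=0"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["count"]

for cc in ("US", "ZA", "AQ"):  # USA, South Africa, Antarctica
    print(cc, gbif_count(cc))
```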

  5. A Support Database System for Integrated System Health Management (ISHM)

    Science.gov (United States)

    Schmalzel, John; Figueroa, Jorge F.; Turowski, Mark; Morris, John

    2007-01-01

    system elements to provide the logical context for the database. The historical data archive provides a common repository for sensor data that can be shared between developers and applications. The firmware codebase is used by the developer to organize the intelligent element firmware into atomic units which can be assembled into complete firmware for specific elements.

  6. An Interoperable Cartographic Database

    OpenAIRE

    Slobodanka Ključanin; Zdravko Galić

    2007-01-01

    The concept of producing a prototype of interoperable cartographic database is explored in this paper, including the possibilities of integration of different geospatial data into the database management system and their visualization on the Internet. The implementation includes vectorization of the concept of a single map page, creation of the cartographic database in an object-relation database, spatial analysis, definition and visualization of the database content in the form of a map on t...

  7. Software listing: CHEMTOX database

    International Nuclear Information System (INIS)

    Moskowitz, P.D.

    1993-01-01

    Initially launched in 1983, the CHEMTOX Database was among the first microcomputer databases containing hazardous chemical information. The database is used in many industries and government agencies in more than 17 countries. Updated quarterly, the CHEMTOX Database provides detailed environmental and safety information on 7500-plus hazardous substances covered by dozens of regulatory and advisory sources. This brief listing describes the method of accessing data and provides ordering information for those wishing to obtain the CHEMTOX Database

  8. A Quantum Private Query Protocol for Enhancing both User and Database Privacy

    Science.gov (United States)

    Zhou, Yi-Hua; Bai, Xue-Wei; Li, Lei-Lei; Shi, Wei-Min; Yang, Yu-Guang

    2018-01-01

    In order to protect the privacy of both the querying user and the database, several QKD-based quantum private query (QPQ) protocols have been proposed. Unfortunately, some of them cannot fully resist internal attacks from the database, while others ensure better user privacy only at the cost of reduced database privacy. In this paper, a novel two-way QPQ protocol is proposed to ensure the privacy of both sides of the communication. In our protocol, the user prepares the initial quantum states and derives the key bit by comparing the initial quantum state with the outcome state returned from the database in ctrl or shift mode, instead of announcing two non-orthogonal qubits as other protocols do, which may leak part of the secret information. In this way, not only is the privacy of the database ensured, but user privacy is also strengthened. Furthermore, our protocol is loss-tolerant and cheat-sensitive, and resists the JM attack. Supported by National Natural Science Foundation of China under Grant Nos. U1636106, 61572053, 61472048, 61602019, 61502016; Beijing Natural Science Foundation under Grant Nos. 4152038, 4162005; Basic Research Fund of Beijing University of Technology (No. X4007999201501); The Scientific Research Common Program of Beijing Municipal Commission of Education under Grant No. KM201510005016

  9. Database Dictionary for Ethiopian National Ground-Water Database (ENGDA) Data Fields

    Science.gov (United States)

    Kuniansky, Eve L.; Litke, David W.; Tucci, Patrick

    2007-01-01

    Introduction This document describes the data fields that are used for both field forms and the Ethiopian National Ground-water Database (ENGDA) tables associated with information stored about production wells, springs, test holes, test wells, and water-level or water-quality observation wells. Several different words are used in this database dictionary and in the ENGDA database to describe a narrow shaft constructed in the ground. The most general term is borehole, which is applicable to any type of hole. A well is a borehole specifically constructed to extract water from the ground; however, for this data dictionary and for the ENGDA database, the words well and borehole are used interchangeably. A production well is defined as any well used for water supply and includes hand-dug wells, small-diameter bored wells equipped with hand pumps, or large-diameter bored wells equipped with large-capacity motorized pumps. Test holes are borings made to collect information about the subsurface with continuous core or non-continuous core and/or where geophysical logs are collected. Test holes are not converted into wells. A test well is a well constructed for hydraulic testing of an aquifer in order to plan a larger ground-water production system. A water-level or water-quality observation well is a well that is used to collect information about an aquifer and not used for water supply. A spring is any naturally flowing, local, ground-water discharge site. The database dictionary is designed to help define all fields both on the field data collection forms (provided in attachment 2 of this report) and on the ENGDA software screen entry forms (described in Litke, 2007). The data entered into each screen entry field are stored in relational database tables within the computer database. The organization of the database dictionary is based on field data collection and the field forms, because this is what the majority of people will use. After each field, however, the

  10. Update History of This Database - SSBD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Update History of This Database: 2016/07/25, SSBD English archive site is opened; 2013/09/03, SSBD ( http://ssbd.qbic.riken.jp/ ) is opened.

  11. Rhythm-based heartbeat duration normalization for atrial fibrillation detection.

    Science.gov (United States)

    Islam, Md Saiful; Ammour, Nassim; Alajlan, Naif; Aboalsamh, Hatim

    2016-05-01

    Screening for atrial fibrillation (AF) in high-risk patients, including all patients aged 65 years and older, is important for reducing the risk of stroke. Different technologies, such as modified blood pressure monitors, single-lead ECG-based finger probes, and smartphones using the plethysmogram signal, have been emerging for this purpose. All these technologies use the irregularity of heartbeat duration as a feature for AF detection. We have investigated a normalization method for heartbeat duration to improve AF detection. AF is an arrhythmia in which heartbeat duration generally becomes irregularly irregular. From a window of heartbeat durations, we estimate the likely rhythm of the majority of heartbeats and normalize the duration of all heartbeats in the window based on that rhythm, so that the irregularity of heartbeats can be measured on the same scale for both AF and non-AF rhythms. Irregularity is measured by the entropy of the distribution of the normalized durations. A window of heartbeats is then classified as AF or non-AF by thresholding the measured irregularity. The effect of this normalization is evaluated by comparing AF detection performance using durations with this normalization, without normalization, and with other existing normalizations. Sensitivity and specificity of AF detection using normalized heartbeat duration were tested on two landmark databases available online and compared with the results of other methods (with/without normalization) using receiver operating characteristic (ROC) curves. ROC analysis showed that the normalization improved the performance of AF detection consistently over a wide range of sensitivity and specificity under different thresholds. Detection accuracy was also computed at equal rates of sensitivity and specificity for the different methods. Using normalized heartbeat duration, we obtained 96.38% accuracy, more than a 4% improvement over AF detection without normalization. The proposed normalization
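
    A toy version of the normalize-then-measure idea may help. In the sketch below, the rhythm estimate (a simple median), the histogram binning, and the decision threshold are all assumptions chosen for illustration; the paper's exact estimator and threshold may differ.

```python
# Illustrative sketch: normalize RR intervals by an estimated rhythm,
# then threshold the entropy of the normalized-duration distribution.
import math
from statistics import median

def irregularity(rr_intervals, bins=16):
    """Shannon entropy of rhythm-normalized heartbeat durations."""
    rhythm = median(rr_intervals)            # crude majority-rhythm estimate
    norm = [rr / rhythm for rr in rr_intervals]
    lo, hi = min(norm), max(norm)
    width = (hi - lo) / bins or 1e-9         # guard against zero spread
    counts = [0] * bins
    for x in norm:
        counts[min(int((x - lo) / width), bins - 1)] += 1
    n = len(norm)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

def is_af(rr_intervals, threshold=2.0):
    """Flag a window as AF when normalized-duration entropy is high."""
    return irregularity(rr_intervals) > threshold

regular   = [0.80, 0.80, 0.81, 0.80, 0.80, 0.79, 0.80, 0.80]
irregular = [0.62, 0.95, 0.71, 1.10, 0.55, 0.88, 0.77, 1.02]
print(is_af(regular))    # low entropy  -> False
print(is_af(irregular))  # high entropy -> True
```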

  12. An Internet-ready database for prospective randomized clinical trials of high-dose-rate brachytherapy for adenocarcinoma of the prostate

    International Nuclear Information System (INIS)

    Devlin, Phillip M.; Brus, Christina R.; Kazakin, Julia; Mitchell, Ronald B.; Demanes, D. Jeffrey; Edmundson, Gregory; Gribble, Michael; Gustafson, Gary S.; Kelly, Douglas A.; Linares, Luis A.; Martinez, Alvaro A.; Mate, Timothy P.; Nag, Subir; Perez, Carlos A.; Rao, Jaynath G.; Rodriguez, Rodney R.; Shasha, Daniel; Tripuraneni, Prabhakar

    2002-01-01

    Purpose: To demonstrate a new interactive Internet-ready database for prospective clinical trials in high-dose-rate (HDR) brachytherapy for prostate cancer. Methods and Materials: An Internet-ready database was created that allows common data acquisition and statistical analysis. Patient anonymity and confidentiality are preserved. These data forms include all common elements found in a survey of the databases. The forms allow the user to view patient data in a view-only or edit mode. Eight linked forms document patient data before and after receiving HDR therapy. The pretreatment forms are divided into four categories: staging, comorbid diseases, external beam radiotherapy data, and signs and symptoms. The posttreatment forms separate data by HDR implant information, HDR medications, posttreatment signs and symptoms, and follow-up data. The forms were tested for clinical usefulness. Conclusion: This Internet-based database enables the user to record and later analyze all relevant medical data and may become a reliable instrument for the follow-up of patients and evaluation of treatment results

  13. An Interoperable Cartographic Database

    Directory of Open Access Journals (Sweden)

    Slobodanka Ključanin

    2007-05-01

    Full Text Available The concept of producing a prototype of interoperable cartographic database is explored in this paper, including the possibilities of integration of different geospatial data into the database management system and their visualization on the Internet. The implementation includes vectorization of the concept of a single map page, creation of the cartographic database in an object-relation database, spatial analysis, definition and visualization of the database content in the form of a map on the Internet. 

  14. CUDASW++: optimizing Smith-Waterman sequence database searches for CUDA-enabled graphics processing units

    Directory of Open Access Journals (Sweden)

    Maskell Douglas L

    2009-05-01

    Full Text Available Abstract Background The Smith-Waterman algorithm is one of the most widely used tools for searching biological sequence databases due to its high sensitivity. Unfortunately, the Smith-Waterman algorithm is computationally demanding, which is further compounded by the exponential growth of sequence databases. The recent emergence of many-core architectures, and their associated programming interfaces, provides an opportunity to accelerate sequence database searches using commonly available and inexpensive hardware. Findings Our CUDASW++ implementation (benchmarked on a single-GPU NVIDIA GeForce GTX 280 graphics card and a dual-GPU GeForce GTX 295 graphics card) provides a significant performance improvement compared to other publicly available implementations, such as SWPS3, CBESW, SW-CUDA, and NCBI-BLAST. CUDASW++ supports query sequences of length up to 59K, and for query sequences ranging in length from 144 to 5,478 in Swiss-Prot release 56.6, the single-GPU version achieves an average performance of 9.509 GCUPS with a lowest performance of 9.039 GCUPS and a highest performance of 9.660 GCUPS, and the dual-GPU version achieves an average performance of 14.484 GCUPS with a lowest performance of 10.660 GCUPS and a highest performance of 16.087 GCUPS. Conclusion CUDASW++ is publicly available open-source software. It provides a significant performance improvement for Smith-Waterman-based protein sequence database searches by fully exploiting the compute capability of commonly used CUDA-enabled low-cost GPUs.
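
    For readers unfamiliar with the underlying computation, a plain CPU reference of the Smith-Waterman local alignment score is sketched below; this is the recurrence that CUDASW++ parallelizes on GPUs. The linear gap penalty and toy scoring values are simplifications: real protein searches use a substitution matrix such as BLOSUM62 and often affine gap penalties.

```python
# Reference (CPU) Smith-Waterman local alignment score with a linear gap
# penalty; scoring values are illustrative only.
def smith_waterman(a: str, b: str, match=2, mismatch=-1, gap=-2) -> int:
    """Return the best local alignment score between sequences a and b."""
    cols = len(b) + 1
    prev = [0] * cols
    best = 0
    for i in range(1, len(a) + 1):
        curr = [0] * cols
        for j in range(1, cols):
            s = match if a[i - 1] == b[j - 1] else mismatch
            curr[j] = max(0,                  # local alignment floor
                          prev[j - 1] + s,    # diagonal: (mis)match
                          prev[j] + gap,      # gap in b
                          curr[j - 1] + gap)  # gap in a
            best = max(best, curr[j])
        prev = curr
    return best

print(smith_waterman("HEAGAWGHEE", "PAWHEAE"))
```

    The dynamic-programming cells along each anti-diagonal are independent, which is exactly the parallelism a GPU implementation exploits.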

  15. Progress on statistical learning systems as data mining tools for the creation of automatic databases in Fusion environments

    International Nuclear Information System (INIS)

    Vega, J.; Murari, A.; Ratta, G.A.; Gonzalez, S.; Dormido-Canto, S.

    2010-01-01

    Nowadays, processing all information of a fusion database is a much more important issue than acquiring more data. Although typically fusion devices produce tens of thousands of discharges, specialized databases for physics studies are normally limited to a few tens of shots. This is due to the fact that these databases are almost always generated manually, which is a very time-consuming and unreliable activity. The development of automatic methods to create specialized databases ensures first, the reduction of human efforts to identify and locate physical events, second, the standardization of criteria (reducing the vulnerability to human errors) and, third, the improvement of statistical relevance. Classification and regression techniques have been used for these purposes. The objective has been the automatic recognition of physical events (that can appear in a random and/or infrequent way) in waveforms and video-movies. Results are shown for the JET database.

  16. Handling data redundancy and update anomalies in fuzzy relational databases

    International Nuclear Information System (INIS)

    Chen, G.; Kerre, E.E.

    1996-01-01

    This paper discusses various data redundancy and update anomaly problems that may occur with fuzzy relational databases. In coping with these problems to avoid undesirable consequences when fuzzy databases are updated via data insertion, deletion and modification, a number of fuzzy normal forms (e.g., F1NF, 0-F2NF, 0-F3NF, 0-FBCNF) are used to guide the design of relation schemes such that partial and transitive fuzzy functional dependencies (FFDs) between relation attributes are restricted. Based upon FFDs and related concepts, particular attention is paid to 0-F3NF and 0-FBCNF, and to the corresponding decomposition algorithms. These algorithms not only produce relation schemes which are either in 0-F3NF or in 0-FBCNF, but also guarantee that the information (data content and FFDs) with original schemes can be recovered with those resultant schemes
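
    The fuzzy normal forms above generalize classical dependency theory, which can be illustrated compactly. The sketch below implements only the crisp analogue: attribute-set closure under functional dependencies and a check for BCNF violations; the fuzzy resemblance thresholds (the theta in 0-F3NF and 0-FBCNF) are deliberately omitted.

```python
# Classical (crisp) dependency machinery: attribute closure and a BCNF
# violation check. Fuzzy thresholds from the paper are not modeled.
def closure(attrs: frozenset, fds) -> frozenset:
    """All attributes determined by `attrs` under the FDs (lhs -> rhs)."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return frozenset(result)

def bcnf_violations(relation: frozenset, fds):
    """Nontrivial FDs whose left side is not a superkey of `relation`."""
    return [(lhs, rhs) for lhs, rhs in fds
            if not rhs <= lhs and closure(lhs, fds) != relation]

R = frozenset("ABC")
fds = [(frozenset("A"), frozenset("B")), (frozenset("B"), frozenset("C"))]
print(bcnf_violations(R, fds))  # B -> C violates BCNF: B is not a superkey
```

    The same closure routine drives the decomposition algorithms mentioned in the record: a violating dependency X -> Y splits the scheme into one relation over X and Y and another over the remaining attributes plus X.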

  17. Development of the Lymphoma Enterprise Architecture Database: a caBIG Silver level compliant system.

    Science.gov (United States)

    Huang, Taoying; Shenoy, Pareen J; Sinha, Rajni; Graiser, Michael; Bumpers, Kevin W; Flowers, Christopher R

    2009-04-03

    Lymphomas are the fifth most common cancer in the United States, with numerous histological subtypes. Integrating existing clinical information on lymphoma patients provides a platform for understanding biological variability in presentation and treatment response and aids development of novel therapies. We developed a cancer Biomedical Informatics Grid (caBIG) Silver level compliant lymphoma database, called the Lymphoma Enterprise Architecture Data-system (LEAD), which integrates the pathology, pharmacy, laboratory, cancer registry, clinical trials, and clinical data from institutional databases. We utilized the Cancer Common Ontological Representation Environment Software Development Kit (caCORE SDK) provided by the National Cancer Institute's Center for Bioinformatics to establish the LEAD platform for data management. The caCORE SDK-generated system utilizes an n-tier architecture with open Application Programming Interfaces, controlled vocabularies, and registered metadata to achieve semantic integration across multiple cancer databases. We demonstrated that the data elements and structures within LEAD could be used to manage clinical research data from phase 1 clinical trials, cohort studies, and registry data from the Surveillance Epidemiology and End Results database. This work provides a clear example of how semantic technologies from caBIG can be applied to support a wide range of clinical and research tasks, and integrate data from disparate systems into a single architecture. This illustrates the central importance of caBIG to the management of clinical and biological data.

  18. Standard-Chinese Lexical Neighborhood Test in normal-hearing young children.

    Science.gov (United States)

    Liu, Chang; Liu, Sha; Zhang, Ning; Yang, Yilin; Kong, Ying; Zhang, Luo

    2011-06-01

    The purposes of the present study were to establish the Standard-Chinese version of the Lexical Neighborhood Test (LNT) and to examine the lexical and age effects on spoken-word recognition in normal-hearing children. Six lists of monosyllabic and six lists of disyllabic words (20 words/list) were selected from the database of daily speech materials for normal-hearing (NH) children of ages 3-5 years. The lists were further divided into "easy" and "hard" halves according to the word frequency and neighborhood density in the database, based on the theory of the Neighborhood Activation Model (NAM). Ninety-six NH children (ages between 4.0 and 7.0 years) were divided into three different age groups at 1-year intervals. Speech-perception tests were conducted using the Standard-Chinese monosyllabic and disyllabic LNT. The inter-list performance was found to be equivalent, and inter-rater reliability was high, with 92.5-95% consistency. Results of word-recognition scores showed that the lexical effects were all significant. Children scored higher with disyllabic words than with monosyllabic words, and "easy" words scored higher than "hard" words. Word-recognition performance also increased with age in each lexical category. A multiple linear regression analysis showed that neighborhood density, age, and word frequency appeared to make increasingly greater contributions to Chinese word recognition. The results of the present study indicated that performance in Chinese word recognition was influenced by word frequency, age, and neighborhood density, with word frequency playing a major role. These results were consistent with those in other languages, supporting the application of the NAM to the Chinese language. The development of the Standard-Chinese version of the LNT and the establishment of a database of children of 4-6 years old provide a reliable means for spoken-word recognition testing in children with hearing impairment. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  19. A Web-based database for pathology faculty effort reporting.

    Science.gov (United States)

    Dee, Fred R; Haugen, Thomas H; Wynn, Philip A; Leaven, Timothy C; Kemp, John D; Cohen, Michael B

    2008-04-01

    To ensure appropriate mission-based budgeting and equitable distribution of funds for faculty salaries, our compensation committee developed a pathology-specific effort reporting database. Principles included the following: (1) measurement should be done by web-based databases; (2) most entry should be done by departmental administration or be relational to other databases; (3) data entry categories should be aligned with funding streams; and (4) units of effort should be equal across categories of effort (service, teaching, research). MySQL was used for all data transactions (http://dev.mysql.com/downloads), and scripts were constructed using PERL (http://www.perl.org). Data are accessed with forms that correspond to fields in the database. The committee's work resulted in a novel database using pathology value units (PVUs) as a standard quantitative measure of effort for activities in an academic pathology department. The most common calculation was to estimate the number of hours required for a specific task, divide by 2080 hours (a Medicare year) and then multiply by 100. Other methods included assigning a baseline PVU for program, laboratory, or course directorship with an increment for each student or staff in that unit. With these methods, a faculty member should acquire approximately 100 PVUs. Some outcomes include (1) plotting PVUs versus salary to identify outliers for salary correction, (2) quantifying effort in activities outside the department, (3) documenting salary expenditure for unfunded research, (4) evaluating salary equity by plotting PVUs versus salary by sex, and (5) aggregating data by category of effort for mission-based budgeting and long-term planning.
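
    The PVU arithmetic described above is simple enough to state as code. The sketch below encodes only the hours-based calculation quoted in the abstract (hours divided by a 2,080-hour Medicare year, times 100); the baseline-plus-increment rules for directorships are not reproduced.

```python
# The basic pathology value unit (PVU) calculation as described in the
# record: a full year of effort totals roughly 100 PVUs.
MEDICARE_YEAR_HOURS = 2080

def task_pvu(hours_per_year: float) -> float:
    """PVUs for a task estimated at `hours_per_year` hours annually."""
    return hours_per_year / MEDICARE_YEAR_HOURS * 100

# e.g., a task estimated at 104 hours/year:
print(task_pvu(104))  # -> 5.0 PVUs
```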

  20. Effect of Image Linearization on Normalized Compression Distance

    Science.gov (United States)

    Mortensen, Jonathan; Wu, Jia Jie; Furst, Jacob; Rogers, John; Raicu, Daniela

    Normalized Information Distance, based on Kolmogorov complexity, is an emerging metric for image similarity. It is approximated by the Normalized Compression Distance (NCD) which generates the relative distance between two strings by using standard compression algorithms to compare linear strings of information. This relative distance quantifies the degree of similarity between the two objects. NCD has been shown to measure similarity effectively on information which is already a string: genomic string comparisons have created accurate phylogeny trees and NCD has also been used to classify music. Currently, to find a similarity measure using NCD for images, the images must first be linearized into a string, and then compared. To understand how linearization of a 2D image affects the similarity measure, we perform four types of linearization on a subset of the Corel image database and compare each for a variety of image transformations. Our experiment shows that different linearization techniques produce statistically significant differences in NCD for identical spatial transformations.
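
    The NCD the record relies on has a standard closed form, sketched below with zlib standing in as the compressor; the compressor choice and the stand-in byte strings for linearized images are assumptions made for illustration.

```python
# Normalized Compression Distance:
#   NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))
# where C(.) is the compressed length under some real compressor.
import zlib

def c(data: bytes) -> int:
    """Compressed length of `data`."""
    return len(zlib.compress(data, level=9))

def ncd(x: bytes, y: bytes) -> float:
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

row_major = b"abcabcabc" * 50      # stand-ins for two linearized images
shuffled  = b"cabbacbca" * 50
print(ncd(row_major, row_major))   # near 0: identical strings
print(ncd(row_major, shuffled))    # larger: less shared structure
```

    Identical inputs give a value near zero and unrelated inputs approach one, which is why the choice of linearization (the order in which pixels become a string) can shift the measured distance.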

  1. The FH mutation database: an online database of fumarate hydratase mutations involved in the MCUL (HLRCC) tumor syndrome and congenital fumarase deficiency

    Directory of Open Access Journals (Sweden)

    Tomlinson Ian PM

    2008-03-01

    Full Text Available Abstract Background Fumarate hydratase (HGNC approved gene symbol – FH), also known as fumarase, is an enzyme of the tricarboxylic acid (TCA) cycle, involved in fundamental cellular energy production. First described by Zinn et al in 1986, deficiency of FH results in early onset, severe encephalopathy. In 2002, the Multiple Leiomyoma Consortium identified heterozygous germline mutations of FH in patients with multiple cutaneous and uterine leiomyomas (MCUL: OMIM 150800). In some families renal cell cancer also forms a component of the complex and as such has been described as hereditary leiomyomatosis and renal cell cancer (HLRCC: OMIM 605839). The identification of FH as a tumor suppressor was an unexpected finding and, following the identification of subunits of succinate dehydrogenase in 2000 and 2001, was only the second description of the involvement of an enzyme of intermediary metabolism in tumorigenesis. Description The FH mutation database is a part of the TCA cycle gene mutation database (formerly the succinate dehydrogenase gene mutation database) and is based on the Leiden Open (source) Variation Database (LOVD) system. The variants included in the database were derived from the published literature and annotated to conform to current mutation nomenclature. The FH database applies HGVS nomenclature guidelines, and will assist researchers in applying these guidelines when directly submitting new sequence variants online. Since the first molecular characterization of an FH mutation by Bourgeron et al in 1994, a series of reports of both FH deficiency patients and patients with MCUL/HLRCC have described 107 variants, of which 93 are thought to be pathogenic. The most common type of mutation is missense (57%), followed by frameshifts & nonsense (27%), and diverse deletions, insertions and duplications. Here we introduce an online database detailing all reported FH sequence variants. Conclusion The FH mutation database strives to systematically

  2. Time-critical Database Condition Data Handling in the CMS Experiment During the First Data Taking Period

    International Nuclear Information System (INIS)

    Cavallari, Francesca; Gruttola, Michele de; Di Guida, Salvatore; Innocente, Vincenzo; Pfeiffer, Andreas; Govi, Giacomo; Pierro, Antonio

    2011-01-01

    Automatic, synchronous and reliable population of the condition databases is critical for the correct operation of the online selection as well as of the offline reconstruction and analysis of data. In this complex infrastructure, monitoring and fast detection of errors is a very challenging task. In this paper, we describe the CMS experiment system to process and populate the Condition Databases and make condition data promptly available both online for the high-level trigger and offline for reconstruction. The data are automatically collected using centralized jobs or are 'dropped' by the users in dedicated services (offline and online drop-box), which synchronize them and take care of writing them into the online database. Then they are automatically streamed to the offline database, and thus are immediately accessible offline worldwide. The condition data are managed by different users using a wide range of applications. In normal operation the database monitor is used to provide simple timing information and the history of all transactions for all database accounts, and in the case of faults it is used to return simple error messages and more complete debugging information.

  3. Validation of endogenous normalizing genes for expression analyses in adult human testis and germ cell neoplasms

    DEFF Research Database (Denmark)

    Svingen, T; Jørgensen, Anne; Rajpert-De Meyts, E

    2014-01-01

    to define suitable normalizing genes for specific cells and tissues. Here, we report on the performance of a panel of nine commonly employed normalizing genes in adult human testis and testicular pathologies. Our analyses revealed significant variability in transcript abundance for commonly used normalizers......, highlighting the importance of selecting appropriate normalizing genes as comparative measurements can yield variable results when different normalizing genes are employed. Based on our results, we recommend using RPS20, RPS29 or SRSF4 when analysing relative gene expression levels in human testis...... and associated testicular pathologies. OCT4 and SALL4 can be used with caution as second-tier normalizers when determining changes in gene expression in germ cells and germ cell tumour components, but the relative transcript abundance appears variable between different germ cell tumour types. We further...

  4. Extending Database Integration Technology

    National Research Council Canada - National Science Library

    Buneman, Peter

    1999-01-01

    Formal approaches to the semantics of databases and database languages can have immediate and practical consequences in extending database integration technologies to include a vastly greater range...

  5. New DMSP Database of Precipitating Auroral Electrons and Ions.

    Science.gov (United States)

    Redmon, Robert J; Denig, William F; Kilcommons, Liam M; Knipp, Delores J

    2017-08-01

    Since the mid-1970s, the Defense Meteorological Satellite Program (DMSP) spacecraft have operated instruments for monitoring the space environment from low Earth orbit. As the program evolved, so too have the measurement capabilities, such that modern DMSP spacecraft include a comprehensive suite of instruments providing estimates of precipitating electron and ion fluxes, cold/bulk plasma composition and moments, the geomagnetic field, and optical emissions in the far and extreme ultraviolet. We describe the creation of a new public database of precipitating electrons and ions from the Special Sensor J (SSJ) instrument, complete with original counts, calibrated differential fluxes adjusted for penetrating radiation, estimates of the total kinetic energy flux and characteristic energy, uncertainty estimates, and accurate ephemerides. These are provided in a common and self-describing format that covers 30+ years of DMSP spacecraft from F06 (launched in 1982) through F18 (launched in 2009). This new database is accessible at the National Centers for Environmental Information (NCEI) and the Coordinated Data Analysis Web (CDAWeb). We describe how the new database is being applied to high-latitude studies of the co-location of kinetic and electromagnetic energy inputs, ionospheric conductivity variability, field-aligned currents, and auroral boundary identification. We anticipate that this new database will support a broad range of space science endeavors from single observatory studies to coordinated system science investigations.

  6. New DMSP database of precipitating auroral electrons and ions

    Science.gov (United States)

    Redmon, Robert J.; Denig, William F.; Kilcommons, Liam M.; Knipp, Delores J.

    2017-08-01

    Since the mid-1970s, the Defense Meteorological Satellite Program (DMSP) spacecraft have operated instruments for monitoring the space environment from low Earth orbit. As the program evolved, so have the measurement capabilities such that modern DMSP spacecraft include a comprehensive suite of instruments providing estimates of precipitating electron and ion fluxes, cold/bulk plasma composition and moments, the geomagnetic field, and optical emissions in the far and extreme ultraviolet. We describe the creation of a new public database of precipitating electrons and ions from the Special Sensor J (SSJ) instrument, complete with original counts, calibrated differential fluxes adjusted for penetrating radiation, estimates of the total kinetic energy flux and characteristic energy, uncertainty estimates, and accurate ephemerides. These are provided in a common and self-describing format that covers 30+ years of DMSP spacecraft from F06 (launched in 1982) to F18 (launched in 2009). This new database is accessible at the National Centers for Environmental Information and the Coordinated Data Analysis Web. We describe how the new database is being applied to high-latitude studies of the colocation of kinetic and electromagnetic energy inputs, ionospheric conductivity variability, field-aligned currents, and auroral boundary identification. We anticipate that this new database will support a broad range of space science endeavors from single observatory studies to coordinated system science investigations.

  7. Database specification for the Worldwide Port System (WPS) Regional Integrated Cargo Database (ICDB)

    Energy Technology Data Exchange (ETDEWEB)

    Faby, E.Z.; Fluker, J.; Hancock, B.R.; Grubb, J.W.; Russell, D.L. [Univ. of Tennessee, Knoxville, TN (United States); Loftis, J.P.; Shipe, P.C.; Truett, L.F. [Oak Ridge National Lab., TN (United States)

    1994-03-01

    This Database Specification for the Worldwide Port System (WPS) Regional Integrated Cargo Database (ICDB) describes the database organization and storage allocation, provides the detailed data model of the logical and physical designs, and provides information for the construction of parts of the database such as tables, data elements, and associated dictionaries and diagrams.

  8. Specialist Bibliographic Databases

    OpenAIRE

    Gasparyan, Armen Yuri; Yessirkepov, Marlen; Voronov, Alexander A.; Trukhachev, Vladimir I.; Kostyukova, Elena I.; Gerasimov, Alexey N.; Kitas, George D.

    2016-01-01

    Specialist bibliographic databases offer essential online tools for researchers and authors who work on specific subjects and perform comprehensive and systematic syntheses of evidence. This article presents examples of the established specialist databases, which may be of interest to those engaged in multidisciplinary science communication. Access to most specialist databases is through subscription schemes and membership in professional associations. Several aggregators of information and d...

  9. Nuclear power economic database

    International Nuclear Information System (INIS)

    Ding Xiaoming; Li Lin; Zhao Shiping

    1996-01-01

    Nuclear power economic database (NPEDB), based on ORACLE V6.0, consists of three parts, i.e., an economic database of nuclear power stations, an economic database of the nuclear fuel cycle, and an economic database of nuclear power planning and the nuclear environment. The economic database of nuclear power stations includes data on general economics, technology, capital cost and benefit, etc. The economic database of the nuclear fuel cycle includes data on technology and nuclear fuel prices. The economic database of nuclear power planning and the nuclear environment includes data on energy history, forecasts, energy balance, and electric power and energy facilities

  10. Keyword Search in Databases

    CERN Document Server

    Yu, Jeffrey Xu; Chang, Lijun

    2009-01-01

    It has become highly desirable to provide users with flexible ways to query/search information over databases as simple as keyword search like Google search. This book surveys the recent developments on keyword search over databases, and focuses on finding structural information among objects in a database using a set of keywords. Such structural information to be returned can be either trees or subgraphs representing how the objects, that contain the required keywords, are interconnected in a relational database or in an XML database. The structural keyword search is completely different from

  11. Update History of This Database - AT Atlas | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Update History of This Database: 2013/12/16, The email address i...; AT Atlas ( http://www.tanpaku.org/atatlas/ ) is opened.

  12. 600 MW nuclear power database

    International Nuclear Information System (INIS)

    Cao Ruiding; Chen Guorong; Chen Xianfeng; Zhang Yishu

    1996-01-01

    600 MW nuclear power database, based on ORACLE 6.0, consists of three parts, i.e. a nuclear power plant database, a nuclear power position database and a nuclear power equipment database. The database contains a great deal of technical data and pictures of nuclear power, provided by engineering design units and individuals. The database can give help to the designers of nuclear power

  13. Pulotu: Database of Austronesian Supernatural Beliefs and Practices.

    Science.gov (United States)

    Watts, Joseph; Sheehan, Oliver; Greenhill, Simon J; Gomes-Ng, Stephanie; Atkinson, Quentin D; Bulbulia, Joseph; Gray, Russell D

    2015-01-01

    Scholars have debated naturalistic theories of religion for thousands of years, but only recently have scientists begun to test predictions empirically. Existing databases contain few variables on religion, and are subject to Galton's Problem because they do not sufficiently account for the non-independence of cultures or systematically differentiate the traditional states of cultures from their contemporary states. Here we present Pulotu: the first quantitative cross-cultural database purpose-built to test evolutionary hypotheses of supernatural beliefs and practices. The Pulotu database documents the remarkable diversity of the Austronesian family of cultures, which originated in Taiwan, spread west to Madagascar and east to Easter Island, a region covering over half the world's longitude. The focus of Austronesian beliefs ranges from localised ancestral spirits to powerful creator gods. A wide range of practices also exists, such as headhunting, elaborate tattooing, and the construction of impressive monuments. Pulotu is freely available, currently contains 116 cultures, and has 80 variables describing supernatural beliefs and practices, as well as social and physical environments. One major advantage of Pulotu is that it has separate sections on the traditional states of cultures, the post-contact history of cultures, and the contemporary states of cultures. A second major advantage is that cultures are linked to a language-based family tree, enabling the use of phylogenetic methods, which can be used to address Galton's Problem by accounting for common ancestry, to infer deep prehistory, and to model patterns of trait evolution over time. We illustrate the power of phylogenetic methods by performing an ancestral state reconstruction on the Pulotu variable "headhunting", finding evidence that headhunting was practiced in proto-Austronesian culture. Quantitative cross-cultural databases explicitly linking cultures to a phylogeny have the potential to revolutionise the

  14. A multidisciplinary database for geophysical time series management

    Science.gov (United States)

    Montalto, P.; Aliotta, M.; Cassisi, C.; Prestifilippo, M.; Cannata, A.

    2013-12-01

    The variables collected by a sensor network constitute a heterogeneous data source that needs to be properly organized in order to be used in research and geophysical monitoring. By the term time series we refer to a set of observations of a given phenomenon acquired sequentially in time. When the time intervals are equally spaced, one speaks of a period or sampling frequency. Our work describes in detail a possible methodology for storage and management of time series using a specific data structure. We designed a framework, hereinafter called TSDSystem (Time Series Database System), in order to acquire time series from different data sources and standardize them within a relational database. The operation of standardization provides the ability to perform operations, such as query and visualization, over many measures, synchronizing them using a common time scale. The proposed architecture follows a multiple-layer paradigm (Loaders layer, Database layer and Business Logic layer). Each layer is specialized in performing particular operations for the reorganization and archiving of data from different sources such as ASCII, Excel, ODBC (Open DataBase Connectivity), and files accessible from the Internet (web pages, XML). In particular, the loader layer performs a security check of the working status of each running software component through a heartbeat system, in order to automate the discovery of acquisition issues and other warning conditions. Although our system has to manage huge amounts of data, performance is guaranteed by a smart table-partitioning strategy that keeps the percentage of data stored in each database table balanced. TSDSystem also contains modules for the visualization of acquired data, which provide the ability to query different time series over a specified time range, or to follow the real-time signal acquisition, according to a data access policy for the users.
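
    The standardization step described here, where heterogeneous sources are reduced to one relational layout on a common time scale, can be miniaturized as follows. The schema, series names, and use of SQLite are invented for illustration; TSDSystem's actual schema is not reproduced in this record.

```python
# Toy standardization: heterogeneous time series stored in one relational
# table keyed by a common UTC time scale, so they can be queried together.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE samples (
        series TEXT NOT NULL,   -- e.g. 'tremor.rms' (hypothetical name)
        t_utc  TEXT NOT NULL,   -- ISO-8601 timestamp, common time scale
        value  REAL NOT NULL,
        PRIMARY KEY (series, t_utc)
    )""")

rows = [
    ("tremor.rms", "2013-07-01T00:00:00Z", 1.2),
    ("tremor.rms", "2013-07-01T00:01:00Z", 1.4),
    ("so2.flux",   "2013-07-01T00:00:00Z", 310.0),
]
conn.executemany("INSERT INTO samples VALUES (?, ?, ?)", rows)

# Query two different measures over one time range, synchronized on t_utc:
for row in conn.execute(
        "SELECT t_utc, series, value FROM samples "
        "WHERE t_utc BETWEEN ? AND ? ORDER BY t_utc",
        ("2013-07-01T00:00:00Z", "2013-07-01T00:01:00Z")):
    print(row)
```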

  15. Pulotu: Database of Austronesian Supernatural Beliefs and Practices.

    Directory of Open Access Journals (Sweden)

    Joseph Watts

    Full Text Available Scholars have debated naturalistic theories of religion for thousands of years, but only recently have scientists begun to test predictions empirically. Existing databases contain few variables on religion, and are subject to Galton's Problem because they do not sufficiently account for the non-independence of cultures or systematically differentiate the traditional states of cultures from their contemporary states. Here we present Pulotu: the first quantitative cross-cultural database purpose-built to test evolutionary hypotheses of supernatural beliefs and practices. The Pulotu database documents the remarkable diversity of the Austronesian family of cultures, which originated in Taiwan, spread west to Madagascar and east to Easter Island, a region covering over half the world's longitude. The focus of Austronesian beliefs ranges from localised ancestral spirits to powerful creator gods. A wide range of practices also exists, such as headhunting, elaborate tattooing, and the construction of impressive monuments. Pulotu is freely available, currently contains 116 cultures, and has 80 variables describing supernatural beliefs and practices, as well as social and physical environments. One major advantage of Pulotu is that it has separate sections on the traditional states of cultures, the post-contact history of cultures, and the contemporary states of cultures. A second major advantage is that cultures are linked to a language-based family tree, enabling the use of phylogenetic methods, which can be used to address Galton's Problem by accounting for common ancestry, to infer deep prehistory, and to model patterns of trait evolution over time. We illustrate the power of phylogenetic methods by performing an ancestral state reconstruction on the Pulotu variable "headhunting", finding evidence that headhunting was practiced in proto-Austronesian culture. Quantitative cross-cultural databases explicitly linking cultures to a phylogeny have the potential

  16. "There's so Much Data": Exploring the Realities of Data-Based School Governance

    Science.gov (United States)

    Selwyn, Neil

    2016-01-01

    Educational governance is commonly predicated around the generation, collation and processing of data through digital technologies. Drawing upon an empirical study of two Australian secondary schools, this paper explores the different forms of data-based governance that are being enacted by school leaders, managers, administrators and teachers.…

  17. Syncope- a common challenge to medical practitioners ...

    African Journals Online (AJOL)

    Syncope is a common presentation in medical practice, and is associated with a higher than normal risk of mortality and morbidity in older individuals. It is essential that an accurate clinical history of the episode described as syncope be obtained, including the events preceding it, the observations of eye-witnesses, and the ...

  18. Italian Present-day Stress Indicators: IPSI Database

    Science.gov (United States)

    Mariucci, M. T.; Montone, P.

    2017-12-01

    In Italy, since the 1990s, research on the contemporary stress field has been carried out at Istituto Nazionale di Geofisica e Vulcanologia (INGV) through local- and regional-scale studies. Over the years many data have been collected and analysed; they are now organized and available for easy end-use online. The IPSI (Italian Present-day Stress Indicators) database is the first geo-referenced repository of information on the crustal present-day stress field maintained at INGV, through a web application and database developed by Gabriele Tarabusi. Data consist of horizontal stress orientations analysed and compiled in a standardized format and quality-ranked for reliability and comparability on a global scale with other databases. Our first database release includes 855 data records updated to December 2015. Here we present an updated version that will be released in 2018, after new earthquake data entry up to December 2017. The IPSI web site (http://ipsi.rm.ingv.it/) allows users to access data on a standard map viewer and easily choose which data (category and/or quality) to plot. The main information for each element (type, quality, orientation) can be viewed simply by hovering over the related symbol; full information appears by clicking the element. At the same time, basic information on the different data types, tectonic regime assignment, and quality ranking method is available in pop-up windows. Data records can be downloaded in several common formats; moreover, it is possible to download a file directly usable with SHINE, a web-based application to interpolate stress orientations (http://shine.rm.ingv.it). IPSI is mainly conceived for those interested in studying the Italian peninsula and its surroundings, although the Italian data are also part of the World Stress Map (http://www.world-stress-map.org/), as evidenced by the many links that redirect to that database for more details on standard practices in this field.

  19. Hazard Analysis Database Report

    Energy Technology Data Exchange (ETDEWEB)

    GAULT, G.W.

    1999-10-13

    The Hazard Analysis Database was developed in conjunction with the hazard analysis activities conducted in accordance with DOE-STD-3009-94, Preparation Guide for US Department of Energy Nonreactor Nuclear Facility Safety Analysis Reports, for the Tank Waste Remediation System (TWRS) Final Safety Analysis Report (FSAR). The FSAR is part of the approved TWRS Authorization Basis (AB). This document describes, identifies, and defines the contents and structure of the TWRS FSAR Hazard Analysis Database and documents the configuration control changes made to the database. The TWRS Hazard Analysis Database contains the collection of information generated during the initial hazard evaluations and the subsequent hazard and accident analysis activities. The database supports the preparation of Chapters 3, 4, and 5 of the TWRS FSAR and the USQ process and consists of two major, interrelated data sets: (1) Hazard Evaluation Database--Data from the results of the hazard evaluations; and (2) Hazard Topography Database--Data from the system familiarization and hazard identification.

  20. Collecting Taxes Database

    Data.gov (United States)

    US Agency for International Development — The Collecting Taxes Database contains performance and structural indicators about national tax systems. The database contains quantitative revenue performance...